A simple example:
CREATE TABLE dbo.x(a INT, b NCHAR(4));
GO
CREATE VIEW dbo.vx AS
SELECT a, b FROM dbo.x;
GO
ALTER TABLE dbo.x ALTER COLUMN a TINYINT;
ALTER TABLE dbo.x ALTER COLUMN b NVARCHAR(4);
GO
SELECT a,b INTO #blat FROM dbo.vx;
GO
EXEC tempdb.dbo.sp_columns N'#blat';
GO
DROP VIEW dbo.vx;
DROP TABLE dbo.x, #blat;
Partial output:
COLUMN_NAME TYPE_NAME PRECISION
----------- --------- ---------
a tinyint 3
b nvarchar 4
So, in practical terms, yes, any interaction with the view should yield the new data types.
That said, I would always call sp_refreshview for any views that reference a table that has changed (and in fact I often use WITH SCHEMABINDING so that I can't alter a table without knowing about the views and other objects it affects - this can make cowboy development/deployment painful, but I guess that's kind of the point).
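To illustrate the SCHEMABINDING point, here is a minimal, standalone sketch (the view name vx_sb is hypothetical, and the exact error text may vary by version):

```sql
CREATE TABLE dbo.x(a INT, b NCHAR(4));
GO
-- A schema-bound view takes a dependency on the referenced columns:
CREATE VIEW dbo.vx_sb
WITH SCHEMABINDING
AS
  SELECT a, b FROM dbo.x;
GO
-- This ALTER now fails (error 5074) because dbo.vx_sb depends on column b,
-- so you can't change the table without first dealing with the view:
ALTER TABLE dbo.x ALTER COLUMN b NVARCHAR(4);
GO
DROP VIEW dbo.vx_sb;
DROP TABLE dbo.x;
```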
You can build the script to refresh all referenced views dynamically like this (this is for a single table; you'll need to incorporate this into your existing script to make it dynamic for all affected tables):
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += N'EXEC sp_refreshview '''
+ QUOTENAME(s.name) + '.' + QUOTENAME(v.name) + ''';'
FROM sys.sql_expression_dependencies AS d
INNER JOIN sys.views AS v
ON d.referencing_id = v.[object_id]
INNER JOIN sys.schemas AS s
ON v.[schema_id] = s.[schema_id]
WHERE d.referenced_id = OBJECT_ID('dbo.whatever')
GROUP BY s.name, v.name;
EXEC sp_executesql @sql;
Another reason you should always refresh all views after changing the base table(s), especially if your views use SELECT *:
CREATE TABLE dbo.x(a INT, b NCHAR(4));
GO
CREATE VIEW dbo.vx AS
SELECT * FROM dbo.x;
GO
SELECT * INTO #b1 FROM dbo.vx;
GO
ALTER TABLE dbo.x ALTER COLUMN a TINYINT;
ALTER TABLE dbo.x ALTER COLUMN b NVARCHAR(4);
GO
SELECT * INTO #b2 FROM dbo.vx;
GO
ALTER TABLE dbo.x ADD d INT;
GO
SELECT * INTO #b3 FROM dbo.vx;
GO
EXEC sp_rename N'dbo.x.b', N'c', N'COLUMN';
EXEC sp_rename N'dbo.x.d', N'b', N'COLUMN';
GO
SELECT * INTO #b4 FROM dbo.vx;
GO
EXEC tempdb.dbo.sp_columns N'#b1';
EXEC tempdb.dbo.sp_columns N'#b2';
EXEC tempdb.dbo.sp_columns N'#b3';
EXEC tempdb.dbo.sp_columns N'#b4';
GO
DROP TABLE #b1, #b2, #b3, #b4;
GO
Partial results:
-- initial:
#b1____ a int 10
#b1____ b nchar 4
-- correct:
#b2____ a tinyint 3
#b2____ b nvarchar 4
-- missing new column d:
#b3____ a tinyint 3
#b3____ b nvarchar 4
-- missing column c, b still points at "old" b:
#b4____ a tinyint 3
#b4____ b nvarchar 4
But then if we refresh the view:
EXEC sp_refreshview N'dbo.vx';
GO
SELECT * INTO #b5 FROM dbo.vx;
EXEC tempdb.dbo.sp_columns N'#b5';
DROP VIEW dbo.vx;
DROP TABLE dbo.x, #b5;
Results:
#b5____ a tinyint 3
#b5____ c nvarchar 4
#b5____ b int 10
Note that the data types are correct now, but the columns are not in the order you would expect.
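If a predictable column order matters to you, one option (a sketch, reusing the names from the example above) is to drop and recreate the view with an explicit column list rather than relying on sp_refreshview:

```sql
DROP VIEW dbo.vx;
GO
-- Listing the columns explicitly pins both the set and the order,
-- and avoids the SELECT * surprises demonstrated above:
CREATE VIEW dbo.vx AS
  SELECT a, b, c FROM dbo.x;
GO
```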
Best Answer
The length of the fields does not determine whether an index is used for a query in SQL Server. But it can affect how performant that index is when it is used to serve the data. It's the same reason that indexing a UUID field is a little less performant than indexing an INT - one factor being the difference in the number of bytes the values consume when indexed.

When and how an index is used is based solely on which columns (and in what order) you define in the index, the columns you use in your predicates (JOIN, WHERE, and HAVING clauses), the current statistics on those columns at the time you run your query, and how you compare those columns in the predicates. For example, in your linked thread you're using a function in one of your predicates, which is likely preventing an index seek; an index scan occurs instead.

Things like using functions in predicates, or type conversions when comparing two different data types, can cause a sub-optimal plan and definitely cause performance problems such as cardinality estimate issues.
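A minimal sketch of the function-in-a-predicate problem described above (the table and index names here are hypothetical, not from your thread):

```sql
CREATE TABLE dbo.Orders(OrderID INT IDENTITY PRIMARY KEY, OrderDate DATETIME);
CREATE INDEX IX_Orders_OrderDate ON dbo.Orders(OrderDate);
GO
-- Wrapping the indexed column in a function makes the predicate
-- non-sargable, so the optimizer typically falls back to an index scan:
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2023;

-- Rewriting it as a range predicate on the bare column lets the
-- optimizer use an index seek instead:
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20230101'
  AND OrderDate <  '20240101';
GO
DROP TABLE dbo.Orders;
```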