Assume there is a table like:
create table #data (id int identity(1, 1) not null, value int)
Put data into it:
insert #data (value)
select top (1000000)
    case when (row_number() over (order by @@spid)) % 5 in (0, 1) then 1 else null end
from sys.all_columns c1, sys.all_columns c2
and 2 indexes:
create index #ix_data_n on #data (value) include (id) where value is null
create index #ix_data_nn on #data (value) include (id) where value is not null
The data is queried like:
select id from #data where value is null
or
select id from #data where value is not null
If I examine the query plans, I see that in the first case an index seek is performed, and in the second case an index scan is performed. Why is it a seek in the first case and a scan in the second?
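To reproduce this, here is a minimal sketch, assuming the table, data and both filtered indexes from above exist in the current session; the exact operators you see can vary with version and statistics, but it captures the actual plan XML for both queries:

set statistics xml on;

select id from #data where value is null;      -- reported as an index seek on #ix_data_n
select id from #data where value is not null;  -- reported as an index scan on #ix_data_nn

set statistics xml off;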
Addition after comments:
If I create an ordinary covering index instead of the 2 filtered covering indexes:
create index #ix_data on #data (value) include (id)
the query plan shows an index seek for both the 'is null' and 'is not null' conditions, regardless of the percentage of null values in the column (0% of nulls, 10%, 90% or 100%, it does not matter). When there are 2 filtered indexes, the query plan always shows an index seek for 'is null', and an index scan or table scan (depending on the % of nulls) for 'is not null', but never an index seek. So it seems the difference is in the way the 'is not null' condition is handled.
Does it mean that if an index is intended only for the 'is not null' check, either a normal index or a filtered index should perform better and be preferred? Which one? (A comparison sketch follows below.)
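One way to decide empirically is to compare logical reads rather than the operator name. A sketch, assuming both the ordinary index #ix_data and the filtered index #ix_data_nn exist, using index hints to force each one in turn; compare the "logical reads" figures in the Messages output:

set statistics io on;

-- force the ordinary covering index
select id from #data with (index ([#ix_data])) where value is not null;

-- force the filtered covering index
select id from #data with (index ([#ix_data_nn])) where value is not null;

set statistics io off;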
SQL Server 2008, 2008 R2 and 2012.
The seek vs scan you are seeing in the query plans is a red herring.
In both cases the query is answered by scanning the appropriate non-clustered index from beginning to end, returning every row.
By examining the XML query plan, you can see the index seek has the predicate "#data.value = Scalar Operator (NULL)", which is meaningless because every row meets the criteria.
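If you want to inspect that predicate yourself, one sketch (requires VIEW SERVER STATE) is to pull the cached XML plans for the batches that touch #data and look for the SeekPredicates element under the Index Seek operator:

select st.text, qp.query_plan
from sys.dm_exec_cached_plans as cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) as st
cross apply sys.dm_exec_query_plan(cp.plan_handle) as qp
where st.text like '%#data%'
  and st.text not like '%dm_exec_cached_plans%';  -- exclude this query itself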