We're testing some rather large - at least for us - narrow tables that will be populated with between 200 and 500 million records.
Any access to the table will be by filtering on a low-cardinality id (some 20-50 distinct values). Since partitioning is not an option for us, we are testing a few index scenarios. The relevant columns are:
Id1 (high cardinality)
Id2 (low cardinality; always used in the WHERE clause of queries)
When we add a non-unique non-clustered index on Id2, the optimizer only uses it when the remaining columns are included (i.e. when it covers the query). The index then takes more space than the table itself.
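For reference, the non-clustered scenario looks roughly like this (table and column names are placeholders, not our actual schema):

```sql
-- Hypothetical narrow table: high-cardinality Id1, plus Id2 with
-- only 20-50 distinct values that every query filters on.
CREATE TABLE dbo.BigNarrow (
    Id1 BIGINT NOT NULL,
    Id2 INT    NOT NULL,
    Val INT    NOT NULL
);

-- Non-clustered index on Id2 alone: with so few distinct values,
-- each one matches millions of rows, and every match would need a
-- lookup back into the base table, so the optimizer ignores it.
CREATE NONCLUSTERED INDEX IX_BigNarrow_Id2
    ON dbo.BigNarrow (Id2);

-- Covering variant: INCLUDE the remaining columns so queries can be
-- answered from the index alone -- but this duplicates the whole table.
CREATE NONCLUSTERED INDEX IX_BigNarrow_Id2_Covering
    ON dbo.BigNarrow (Id2)
    INCLUDE (Id1, Val);
```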
When we add a non-unique clustered index on Id2, the index is always used (whenever Id2 is addressed) and the index space is minimal.
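The clustered alternative, again as a sketch with the same placeholder names:

```sql
-- Clustered index on the low-cardinality Id2: the table itself is
-- physically ordered by Id2, so WHERE Id2 = @x becomes a range scan
-- over one contiguous slice of the table, with no separate copy of
-- the data to maintain.
CREATE CLUSTERED INDEX CIX_BigNarrow_Id2
    ON dbo.BigNarrow (Id2);
```

One cost I am aware of: because the clustered key is not unique, SQL Server adds a hidden 4-byte uniquifier to duplicate key values, and the clustering key is carried in every non-clustered index on the table.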
Coming from a DB2 background and being used to bitmap indexes, I'm trying to understand SQL Server's approach. The clustered index seems ideal, but what is the catch?