Here is the situation:
We have an accounting vendor package that has to run a query against a table to find a particular set of rows. The query is very simple, but it takes roughly seven and a half minutes to run. This process needs to run several times a day, so it slows other users down every time it hits the server.
Here are the particulars:
The table: about 10.5 million rows across roughly 50 columns, and it is only expected to grow by about 150,000 rows per month.
The query: SELECT * FROM tablename WHERE columnname = 'ON' (please don't shoot yet). I'm working on getting their developer team to tell me why they need all the columns and, if they don't, which ones they actually need.
The filtering column has only two values, ON or OFF, and it's a varchar(15).
The primary key on the table is a varchar(15), and the values appear to be all integers (don't ask me why they didn't use an integer data type from the beginning; vendor package, remember, I know).
The primary key carries the clustered index.
There are four other non-clustered indexes on the table.
The filtering column has no index at all.
Also, the server has 2 CPUs and 12 GB of RAM (9 GB assigned to SQL Server), and it is running SQL Server 2008 R2.
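To make the setup concrete, here is a simplified version of the schema as I understand it; all object and column names are placeholders I made up, since I can't share the vendor's actual names:

```sql
-- Simplified sketch of the vendor table (names are placeholders)
CREATE TABLE dbo.VendorTable
(
    RecordID   varchar(15) NOT NULL,  -- PK; values look like integers stored as strings
    StatusFlag varchar(15) NOT NULL,  -- only ever 'ON' or 'OFF'; no index on it
    -- ... roughly 48 more columns ...
    CONSTRAINT PK_VendorTable PRIMARY KEY CLUSTERED (RecordID)
);

-- The vendor's query, verbatim apart from the placeholder names
SELECT * FROM dbo.VendorTable WHERE StatusFlag = 'ON';
```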
My question is: how can we improve this simple query? I tried adding a separate bit/integer column to filter on instead, and it is still very slow.
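In case it helps, this is roughly what I tried (the column and index names here are mine, not the vendor's):

```sql
-- Add a bit flag mirroring the varchar column, backfill it, and index it
ALTER TABLE dbo.VendorTable ADD StatusBit bit NOT NULL DEFAULT 0;

UPDATE dbo.VendorTable SET StatusBit = 1 WHERE StatusFlag = 'ON';

CREATE NONCLUSTERED INDEX IX_VendorTable_StatusBit
    ON dbo.VendorTable (StatusBit);

-- Still very slow:
SELECT * FROM dbo.VendorTable WHERE StatusBit = 1;
```

My suspicion is that the SELECT * means the narrow index can't cover the query, so the optimizer ignores it and scans anyway, but I'd appreciate confirmation and alternatives.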
Any advice would be greatly appreciated.