February 11, 2011 at 10:34 am
Attached is the view. This view has two tables, CB (10,100 records) and CDP (50,000 records), with a UNION of the two. Each of these has a clustered index as well as a few non-clustered indexes. This view is part of another procedure; however, I see an issue around the CB and CDP tables. I see a clustered index scan on the CB table when the procedure that uses this view is executed. I am most concerned about the place where it does the UNION ALL of the CB and CDP tables. Can someone please suggest an appropriate index so that it doesn't do an index scan, or a more efficient way to get the data? Thanks.
February 11, 2011 at 10:42 am
Without a WHERE condition it CANNOT do anything other than a scan.
That being said, maybe it's not required.
Can you post the table / key / index definitions, sample data and required output?
.sqlplan is not a bad idea either.
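To illustrate the point about the WHERE clause (a minimal sketch; the column names and index are assumptions for illustration, not from the thread):

```sql
-- With no WHERE clause the engine has to read every row, so a scan is the
-- only possible plan:
SELECT StartDate, EndDate FROM dbo.CB;          -- full scan, by necessity

-- A selective predicate on an indexed column gives the optimizer the option
-- of a seek instead (hypothetical index, for illustration):
CREATE NONCLUSTERED INDEX IX_CB_StartDate
    ON dbo.CB (StartDate) INCLUDE (EndDate);

SELECT StartDate, EndDate
FROM dbo.CB
WHERE StartDate >= '20110101';                  -- index seek, if selective enough
```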
February 11, 2011 at 12:57 pm
Ninja's_RGR'us (2/11/2011)
Without a WHERE condition it CANNOT do anything other than a scan. That being said, maybe it's not required.
Can you post table / keys / indexes defs, sample data and required output?
.sqlplan is not a bad idea either.
Thanks for responding, I will try to get that. My query does a DISTINCT with no WHERE clause. Can I avoid the DISTINCT and use a WHERE clause that gives the same result, so that it does a seek?
February 11, 2011 at 1:01 pm
Unlikely. The problem with this setup is that you are basically downloading the whole table.
Unless you can bring that down to 3%, and ideally very much less than that, you're screwed with a scan.
The more important point is: how long is it taking the server to process that? Maybe that's not even the slowest part of the query?
February 11, 2011 at 2:30 pm
I took a quick look at this and see several problems. However, I think the biggest problem you have is the EAV model you are working with for the table RET.
I just don't see any way to get around the horrible performance you are going to have converting that structure into a normalized structure in a view. All of those LEFT OUTER JOINs are there to build a single row for each RK value in the table.
Not to mention all of the CONVERTs and ISNULLs needed to cast the data to valid data types because of the EAV model.
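For readers unfamiliar with the pattern being described, here is a sketch of what flattening an EAV table in a view typically looks like (the attribute and column names are assumptions for illustration; only RET and RK come from the thread):

```sql
-- One LEFT OUTER JOIN per attribute, each wrapped in CONVERT/ISNULL, to turn
-- rows of (RK, AttrName, AttrValue) back into a single typed row per RK:
SELECT  r.RK,
        CONVERT(datetime, ISNULL(d1.AttrValue, '19000101')) AS StartDate,
        CONVERT(int,      ISNULL(d2.AttrValue, 0))          AS Quantity
FROM    (SELECT DISTINCT RK FROM dbo.RET) AS r
LEFT OUTER JOIN dbo.RET AS d1
        ON d1.RK = r.RK AND d1.AttrName = 'StartDate'
LEFT OUTER JOIN dbo.RET AS d2
        ON d2.RK = r.RK AND d2.AttrName = 'Quantity';
-- Every additional attribute adds another join plus conversions, which is why
-- this view stays expensive no matter how the base table is indexed.
```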
Now, as for your CB and CDP tables: you can only remove the DISTINCT if you know that you don't have duplicate values, and/or you can identify a column that is unique and will always exist that you can filter on. If the StartDate and EndDate columns in those tables are defined as datetime, then you don't need to convert them to datetime.
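A sketch of what that advice looks like in code, assuming (as stated above) that CB and CDP cannot contain overlapping rows and that the date columns are already datetime:

```sql
-- UNION ALL with no DISTINCT avoids the sort/hash step that duplicate
-- removal requires -- only safe if the two tables cannot overlap:
SELECT StartDate, EndDate FROM dbo.CB
UNION ALL
SELECT StartDate, EndDate FROM dbo.CDP;

-- If StartDate/EndDate are already datetime, wrapping them in
-- CONVERT(datetime, ...) is redundant and can prevent index use --
-- compare the columns directly instead.
```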
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
February 11, 2011 at 3:09 pm
Ninja's_RGR'us (2/11/2011)
Unlikely. The problem with this setup is that you are basically downloading the whole table. Unless you can bring that down to 3%, and ideally very much less than that, you're screwed with a scan.
The more important point is: how long is it taking the server to process that? Maybe that's not even the slowest part of the query?
That part was taking 25% of the cost. I created a covering index on the columns specified in the SELECT statement for the two tables. Now it is doing an index scan... is an index scan better than a clustered index scan? I see the I/O has been reduced by half.
February 11, 2011 at 3:28 pm
sqldba_icon (2/11/2011)
Ninja's_RGR'us (2/11/2011)
Unlikely. The problem with this setup is that you are basically downloading the whole table. Unless you can bring that down to 3%, and ideally very much less than that, you're screwed with a scan.
The more important point is: how long is it taking the server to process that? Maybe that's not even the slowest part of the query?
That part was taking 25% of the cost. I created a covering index on the columns specified in the SELECT statement for the two tables. Now it is doing an index scan... is an index scan better than a clustered index scan? I see the I/O has been reduced by half.
Depends on how many inserts / updates in the table.
Best thing to do is let it run for a while and see if you're getting more reads than writes on that index. You also need to check whether you've increased locking issues. With only 60K rows I don't think this is an issue, but backups, reindexing and logs can also become a problem if you add too many indexes. The problem is that you only find out how many is too many when you hit the wall :w00t:.
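One way to watch reads versus writes on the new index over time is the index usage DMV (sys.dm_db_index_usage_stats is a standard SQL Server DMV; note its counters reset when the instance restarts):

```sql
-- Compare cumulative reads (seeks + scans + lookups) against writes (updates)
-- for every index in the current database:
SELECT  OBJECT_NAME(s.object_id)                       AS table_name,
        i.name                                         AS index_name,
        s.user_seeks + s.user_scans + s.user_lookups   AS reads,
        s.user_updates                                 AS writes
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
        ON  i.object_id = s.object_id
        AND i.index_id  = s.index_id
WHERE   s.database_id = DB_ID()
ORDER BY writes DESC;
```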