Performance Tips Part 1
Posted Wednesday, March 9, 2005 6:59 AM
Old Hand


Group: General Forum Members
Last Login: Tuesday, August 20, 2013 2:05 PM
Points: 308, Visits: 55

I liked your article, so I hope you don't think I was being critical. You gave some good ideas about things that can affect performance. My thought on renaming tables was simply adding to the discussion, not trying to say there was a better way. It seemed reasonable that instead of moving 3,000,000 records, it would be more efficient to move 10,000, and renaming would allow that. Clearly, any real-world system may have additional complexities such as the ones you pointed out.
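
Roughly what I had in mind, as a minimal sketch only; the staging table name and the column definitions are my guesses (a real system would also have to carry keys, indexes, permissions, and dependencies across the swap):

-- Sketch only: column definitions are guessed, not taken from the article
create table Call_Log_Event_New (
    call_log_id int,
    event_desc  varchar(100),
    event_cd    int,
    eventdt     datetime
)

-- Swap the empty table in; the 3,000,000-row table becomes the source
begin tran
    exec sp_rename 'Call_Log_Event', 'Call_Log_Event_Old'
    exec sp_rename 'Call_Log_Event_New', 'Call_Log_Event'
commit

-- Copy back only the recent rows (~10,000 instead of 3,000,000)
insert into Call_Log_Event (call_log_id, event_desc, event_cd, eventdt)
select call_log_id, event_desc, event_cd, eventdt
  from Call_Log_Event_Old
 where datediff(dd, eventdt, getdate()) <= 10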

Again, it was a good article both in its content and in the discussion it created. I'm looking forward to part two.




Bob
SuccessWare Software
Post #166562
Posted Wednesday, March 9, 2005 9:40 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, November 18, 2014 7:33 AM
Points: 138, Visits: 280

Thanks for the warm words. I do like good solutions, and yours is good for the discussion. I follow your advice when I am doing quick-and-dirty tests. In the real world, though, I have some issues with object renaming. I have seen cases where, after renaming an object and creating another one with the same name, some applications start failing with a message that the object can't be found (even though I can see it and work with it in Query Analyzer). Personally, I avoid renaming in production databases.

The second part will be out soon; it is under review.




Post #166612
Posted Wednesday, March 9, 2005 12:25 PM


Mr or Mrs. 500


Group: General Forum Members
Last Login: Tuesday, June 21, 2011 10:03 AM
Points: 577, Visits: 102
quote:
In the real world, though, I have some issues with object renaming. I have seen cases where, after renaming an object and creating another one with the same name, some applications start failing with a message that the object can't be found (even though I can see it and work with it in Query Analyzer). Personally, I avoid renaming in production databases.

Per BOL: "Important: After renaming stored procedures and views, flush the procedure cache to ensure all dependent stored procedures and views are recompiled."

Notice the "DBCC FREEPROCCACHE" at the end of the rename script.

On another topic, I noticed you didn't comment on the alternative while loop. What do you think about using a BREAK instead of min and max control variables? If you check the optimizer, it's definitely more efficient.
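
To make the comparison concrete, here is a minimal sketch of both patterns; the #work table and its id column are placeholders of mine, not the article's code:

-- Placeholder work table with positive integer keys
create table #work (id int primary key)
insert into #work (id) values (1)
insert into #work (id) values (3)
insert into #work (id) values (7)

-- Pattern 1: min/max control variables
declare @id int, @maxid int
select @id = min(id), @maxid = max(id) from #work
while @id <= @maxid
begin
    print 'processing ' + cast(@id as varchar(10))   -- stand-in for real work
    select @id = min(id) from #work where id > @id   -- NULL here ends the loop
end

-- Pattern 2: BREAK when no row is found; skips the up-front max() lookup
-- and the range comparison on every pass
declare @id2 int
set @id2 = 0                                         -- assumes ids are positive
while 1 = 1
begin
    select @id2 = min(id) from #work where id > @id2
    if @id2 is null break
    print 'processing ' + cast(@id2 as varchar(10))
end

drop table #work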



Signature is NULL
Post #166675
Posted Wednesday, March 9, 2005 5:07 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, November 18, 2014 7:33 AM
Points: 138, Visits: 280

It is possible that the optimizer shows the break is more efficient, but it is like comparing 10 to 10.01 miles per gallon. Is it different? For testing purposes at the factory, yes. In the real world? When I am working, I try to cut the worst parts and advise developers to focus on important changes. I don't consider this one important because there is no major time saving and it does not affect other processes.




Post #166734
Posted Wednesday, March 9, 2005 5:51 PM


Mr or Mrs. 500


Group: General Forum Members
Last Login: Tuesday, June 21, 2011 10:03 AM
Points: 577, Visits: 102

Actually... it is significantly different: per my optimizer, the break version costs 2.5% of the total cost of both (although this includes creating the table variable, which I would HOPE you had in for testing purposes only).

I would agree that this is not significant if it's under 100 loops or so (which is the case in your example), but it is significant with more loops.  I might add, under 100 loops there is no significant difference between a cursor and either while loop.

I certainly wasn't saying you should modify your existing code.  But since your article was about optimization, I figured you'd be interested.  Seems like you are a little defensive instead.  Oh well...

cl

 



Signature is NULL
Post #166737
Posted Wednesday, March 9, 2005 7:04 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, November 18, 2014 7:33 AM
Points: 138, Visits: 280

I am not defensive, just trying to be realistic. If I need over 200-500 loops, then I look for an alternative to the loop solution, and in most cases I have been able to find one. For example, instead of writing a loop to grant permissions on all tables

grant SELECT on table1 to XYZ
grant SELECT on table2 to XYZ
...

it can be achieved by

declare @cmd varchar(8000)
set @cmd = ''

-- build one batch of GRANT statements straight from the system catalog
select @cmd = @cmd + 'grant SELECT on ' + name + ' to XYZ '
  from sysobjects
 where type = 'U'

exec (@cmd)   -- run the generated batch
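
One caveat with the concatenation trick: on SQL Server 2000, @cmd tops out at varchar(8000) and further concatenation is silently truncated, so with a long table list it is worth printing @cmd before executing it.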




Post #166743
Posted Wednesday, March 9, 2005 7:11 PM


Mr or Mrs. 500


Group: General Forum Members
Last Login: Tuesday, June 21, 2011 10:03 AM
Points: 577, Visits: 102

A set-based solution is definitely always best, and you're right that many loops can be converted into set-based processing.

That's not what we were talking about, though; we were talking about replacements for cursors when a loop is needed.  I always appreciate seeing a "while" loop instead of a cursor (and I appreciate seeing a set-based solution even more), as they are cleaner and (I think) easier to read.  I commend you on this, but I was also pointing out that using a "break" instead of checking control variables is more efficient, especially when "Select min, max" is used to set them.

I'm not trying to piss you off or anything...

cl



Signature is NULL
Post #166745
Posted Wednesday, March 9, 2005 7:21 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, November 18, 2014 7:33 AM
Points: 138, Visits: 280
Let it be this way. Next time I will use your loop control when I can't avoid a big looping structure. It is a good solution. I am fine with all solutions as long as they are not completely removed from practical use. I have seen many solutions that are good in theory but do not really help in reality; they are simply too far from practical use.


Post #166749
Posted Thursday, March 10, 2005 4:45 PM


SSC Eights!


Group: General Forum Members
Last Login: Friday, November 14, 2014 1:49 PM
Points: 932, Visits: 370
Talking more about situation three.

First, just to be completely pedantic, step 4 in your example seems to have the table names reversed, so that instead of copying the latest data back to the Call_Log_Event table, you're copying data from the (empty) Call_Log_Event table to Call_Log_Event_Archive again. Of course, since no one else has mentioned this, maybe I'm badly misreading things.

Second, in cases where there is a need to archive data, there is usually a need to do it more than once. Your process is a one-time solution to what will probably become a repeating problem.

You could continue by inserting the current data from Call_Log_Event into Call_Log_Event_Archive as you've done in this instance. However, as the archive table grows larger and larger, I believe you will hit a point where searching for the records to be copied back and then removed will take much longer than simply deleting the records in the first place. It's possible that, with proper indexing, this wouldn't be significantly worse for some time, but I believe it would happen.

Of course, if one of the concerns is how fast you can release Call_Log_Event for use, your method still might work well if the Call_Log_Event data was always written to a temp table instead of to the full archive table. Restore the data from the temp table to Call_Log_Event first, commit, then write the data to be moved to Call_Log_Event_Archive, and drop the temp table (a rough sketch follows). This would almost certainly be slower than your solution overall (two writes of the archived data), but you can lock one table at a time, keeping things flowing as freely as possible. As you point out, the time when critical resources are locked can be far more important than the actual time it takes to complete an operation.
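
A rough sketch of what I mean; the temp table name and the column list are guesses on my part:

-- Stage everything so Call_Log_Event is locked as briefly as possible
select call_log_id, event_desc, event_cd, eventdt
  into #staging
  from Call_Log_Event

truncate table Call_Log_Event          -- release the busy table quickly

-- 1) restore the recent rows first so normal processing can resume
insert into Call_Log_Event (call_log_id, event_desc, event_cd, eventdt)
select call_log_id, event_desc, event_cd, eventdt
  from #staging
 where datediff(dd, eventdt, getdate()) <= 10

-- 2) only then archive the old rows, locking just the archive table
insert into Call_Log_Event_Archive (call_log_id, event_desc, event_cd, eventdt, archivedt)
select call_log_id, event_desc, event_cd, eventdt, getdate()
  from #staging
 where datediff(dd, eventdt, getdate()) > 10

drop table #staging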



R David Francis
Post #167051
Posted Thursday, March 10, 2005 6:50 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, November 18, 2014 7:33 AM
Points: 138, Visits: 280
Good catch.
 
It should be
 
insert into Call_Log_Event (call_log_id, event_desc, event_cd, eventdt, archivedt)
select call_log_id, event_desc, event_cd, eventdt, getdate()
  from Call_Log_Event_Archive
 where datediff(dd, eventdt, getdate()) <= 10
 
I only presented the idea to show that there are many ways to get a better solution than the one I described (which many people actually use). Depending on the situation, a different decision may be made. You are right.



Post #167065