Viewing 15 posts - 6,301 through 6,315 (of 7,614 total)
I suggest scripting the index changes as shown below, following these steps:
1) Capture and save the table's existing missing-index and index-usage stats immediately, before any changes are made.
2) Run the code...
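A minimal sketch of step 1, capturing the stats before any index changes. The DMV names are standard, but the table being analyzed (`dbo.MyTable`) and the save tables are hypothetical placeholders:

```sql
-- Snapshot index usage stats for the table before changing anything
-- (dbo.MyTable and the *_before save tables are hypothetical names).
SELECT GETDATE() AS capture_time, ius.*
INTO dbo.index_usage_before
FROM sys.dm_db_index_usage_stats ius
WHERE ius.database_id = DB_ID()
  AND ius.object_id = OBJECT_ID('dbo.MyTable');

-- Snapshot missing-index suggestions for the same table.
SELECT GETDATE() AS capture_time, mid.*, migs.*
INTO dbo.missing_index_before
FROM sys.dm_db_missing_index_details mid
INNER JOIN sys.dm_db_missing_index_groups mig
    ON mig.index_handle = mid.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats migs
    ON migs.group_handle = mig.index_group_handle
WHERE mid.database_id = DB_ID()
  AND mid.object_id = OBJECT_ID('dbo.MyTable');
```

Note that these DMVs are cleared on instance restart, which is exactly why capturing them before the change matters.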
December 13, 2013 at 10:35 am
1) Maybe you can combine the three separate UPDATEs into one, as shown below, avoiding repeated joins of the same tables.
2) Depending on the row counts, you might consider creating...
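Since the original three UPDATE statements aren't shown, here is only a generic sketch of the combined-UPDATE idea, with hypothetical table and column names; CASE handles any per-column conditions that differ between the original statements:

```sql
-- One UPDATE sets all three columns instead of re-joining the same
-- tables three times (all names here are hypothetical).
UPDATE t
SET t.col1 = CASE WHEN s.flag = 'A' THEN s.val1 ELSE t.col1 END,
    t.col2 = CASE WHEN s.flag = 'B' THEN s.val2 ELSE t.col2 END,
    t.col3 = s.val3
FROM dbo.TargetTable t
INNER JOIN dbo.SourceTable s
    ON s.id = t.id;
```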
December 12, 2013 at 2:04 pm
You could also consider something like below, particularly since presumably you are on Enterprise Edition and thus can recreate all the non-clustered indexes ONLINE:
Before the shrink:
1) verify you have current...
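For reference, an ONLINE rebuild of a nonclustered index (Enterprise Edition only) looks like the sketch below; the index and table names are hypothetical:

```sql
-- ONLINE = ON keeps the index available to queries during the rebuild;
-- SORT_IN_TEMPDB shifts the sort work off the data files being shrunk.
ALTER INDEX IX_MyTable_Col1
ON dbo.MyTable
REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
```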
December 10, 2013 at 4:19 pm
Try changing:
!= ''
to
IS NOT NULL.
The variables should contain NULL, not be empty, if no row is found.
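A quick sketch of why the change matters, using a hypothetical variable and table: a SELECT that matches no rows leaves the variable NULL, and `NULL != ''` evaluates to UNKNOWN, not TRUE.

```sql
DECLARE @name varchar(50);      -- hypothetical variable

SELECT @name = CustomerName
FROM dbo.Customers              -- hypothetical table
WHERE CustomerID = -1;          -- deliberately matches no row

-- @name is still NULL here, so this test never passes:
IF @name != '' PRINT 'found';   -- NULL comparison yields UNKNOWN

-- This is the correct check:
IF @name IS NOT NULL PRINT 'found';
```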
December 10, 2013 at 4:11 pm
For performance reasons, it would almost certainly be better to have a trigger on the Calculations table that automatically identified and saved all variables used in a normalized table whenever...
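A rough sketch of that trigger idea; every object name here, and especially the variable-parsing function, is hypothetical:

```sql
-- AFTER trigger that re-extracts the variables used by each changed
-- calculation into a normalized table (all names hypothetical).
CREATE TRIGGER trg_Calculations_Variables
ON dbo.Calculations
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove the old variable rows for the changed calculations.
    DELETE cv
    FROM dbo.CalculationVariables cv
    INNER JOIN inserted i ON i.CalculationID = cv.CalculationID;

    -- Re-parse and save the variables currently used.
    INSERT INTO dbo.CalculationVariables (CalculationID, VariableName)
    SELECT i.CalculationID, v.VariableName
    FROM inserted i
    CROSS APPLY dbo.fn_ExtractVariables(i.Formula) v; -- hypothetical parser
END;
```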
December 10, 2013 at 11:09 am
Maybe something like this?:
declare @replace1 varchar(max)
declare @replace2 varchar(max)
select @replace1 = (
select 'REPLACE('
from #CalculationVariables
for xml path('')
...
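The snippet above is cut off; the general pattern it is building, sketched with hypothetical column names, is a nested-REPLACE string assembled with FOR XML PATH (one `REPLACE(` prefix and one `,'var','value')` suffix per row):

```sql
-- Build "REPLACE(REPLACE(@formula,'v1','x1'),'v2','x2')..." from the
-- rows in #CalculationVariables (column names are hypothetical).
DECLARE @replace1 varchar(max);

SELECT @replace1 =
    (SELECT 'REPLACE(' FROM #CalculationVariables FOR XML PATH(''))
    + '@formula'
    + (SELECT ',''' + VariableName + ''',''' + VariableValue + ''')'
       FROM #CalculationVariables FOR XML PATH(''));
```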
December 10, 2013 at 11:00 am
FYI: the wheel's been re-invented countless times, or we'd all be riding around on wooden wheels with no rims!
December 9, 2013 at 4:23 pm
SELECT RTRIM(LEFT(CustomerName,CHARINDEX('*',CustomerName + '*')-1))
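To illustrate what that expression does, with sample values assumed: it returns everything before the first `*`, trimmed, and the appended `'*'` guarantees CHARINDEX finds a match even when the name contains no `*` at all.

```sql
SELECT RTRIM(LEFT(CustomerName, CHARINDEX('*', CustomerName + '*') - 1))
FROM (VALUES ('ACME Corp *inactive'),
             ('Widgets Inc')) AS t(CustomerName);
-- 'ACME Corp *inactive' -> 'ACME Corp'
-- 'Widgets Inc'         -> 'Widgets Inc'  (no '*' present)
```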
December 9, 2013 at 4:16 pm
As long as you have fixed leading values, i.e. 'value1%' and not '%value1%',
an index on myColumn should handle the query fine. If it's a large % of the...
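Sketched side by side (the table name is hypothetical; `myColumn` is from the post):

```sql
-- Sargable: a fixed leading value lets the optimizer seek the index.
SELECT * FROM dbo.MyTable WHERE myColumn LIKE 'value1%';

-- Not sargable: a leading wildcard forces a scan of every index entry.
SELECT * FROM dbo.MyTable WHERE myColumn LIKE '%value1%';
```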
December 9, 2013 at 4:09 pm
If you're using Enterprise Edition, just use Change Data Capture and let SQL do all the hard work for you.
If not, the overhead of what you want to do is...
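For reference, enabling Change Data Capture is two system procedure calls; the database and table names below are hypothetical:

```sql
-- Enable CDC at the database level, then for the table of interest.
USE MyDatabase;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;   -- NULL = no gating role on the change data
```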
December 9, 2013 at 4:06 pm
Sorry, extremely busy, but here's the re-write. I couldn't test it, of course, so you'll have to do that ;-):
Set NOCOUNT ON
If Object_ID('tempdb..#auditlogidsToArchive') IS NOT NULL
Drop...
December 4, 2013 at 2:46 pm
Need to clarify whether the other part of my initial impressions was true or not:
Is logtimestamp from the original insert, and so will also always ascend in conjunction with the...
December 4, 2013 at 9:58 am
Stamey (12/4/2013)
I have figured it out, thanks to Google and an example from guys from 'rola. ... I changed it around so that the commit is in the Try and...
December 4, 2013 at 7:57 am
The quick and very-dirty-in-this-case "solution" is to put a COMMIT TRANSACTION immediately before the END TRY.
But the whole process overall could be a lot cleaner and faster, particularly if:
auditlogid is...
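The quick fix described above, sketched generically (the work inside the transaction is elided):

```sql
-- Commit inside the TRY so a successful run never leaves an
-- open transaction; the CATCH cleans up on failure.
BEGIN TRY
    BEGIN TRANSACTION;
    -- ... archive/delete work here ...
    COMMIT TRANSACTION;   -- immediately before END TRY
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;                -- re-raise the original error
END CATCH;
```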
December 3, 2013 at 10:02 am
keyser soze-308506 (11/27/2013)
I have a query like that
select c.client_group, s.product_id, count(*) as q
from Sales s
inner join Client c on c.client_id= s.client_id
group by c.client_id, s.product_id
the Sales.pk (clustered index) consist of...
November 29, 2013 at 10:30 am