Viewing 15 posts - 31 through 45 (of 53 total)
That's the same algorithm I posted further up the thread, although rather better explained.
The only assumption that is made is that you know a Monday, which in your case is...
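For anyone skimming the thread, here is a minimal sketch of the known-Monday idea (the dates below are illustrative, not the ones from the thread): 1900-01-01, day zero of SQL Server's datetime calendar, was a Monday, so the day count modulo 7 gives the weekday with no dependence on SET DATEFIRST or the server language.
declare @known_monday date = '19000101'   -- 1 January 1900 was a Monday
declare @d date = '20110823'
-- 0 = Monday, 1 = Tuesday, ... 6 = Sunday (for dates on or after the known Monday)
select datediff(day, @known_monday, @d) % 7 as day_offset_from_monday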
August 23, 2011 at 2:33 am
Oh, what a can of worms date handling in SQL Server is!!
declare @known_monday varchar(10) = '1900-03-05'
set dateformat dmy
select cast(@known_monday as date) as Date05Mar1900
, cast(@known_monday as datetime) as DateTime05Mar1900
, cast(@known_monday as...
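As an aside (my addition, not part of the original post): the unseparated literal 'yyyymmdd' is always read as year-month-day by datetime, which sidesteps the DATEFORMAT ambiguity that the snippet demonstrates.
set dateformat dmy
select cast('19000305' as datetime) as AlwaysMar05        -- 'yyyymmdd' ignores DATEFORMAT
     , cast('1900-03-05' as datetime) as FormatDependent  -- under dmy this is read as 3 May 1900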
August 19, 2011 at 3:39 am
Fast and accurate, but dependent on the language setting of the server.
This code relies only on knowing the date of some known Monday (but is perhaps a little clunky)...
;with DayOne as (select...
August 19, 2011 at 3:15 am
I tried the test again, this time with 2 windows both running your original script concurrently; both updated 255 rows every time.
I can't explain this, but it must be something to...
July 12, 2011 at 4:03 am
That logic can be done in one single update (which doesn't require any particular locking strategies or isolation levels):
DECLARE @nextMergeID INT ;
DECLARE @customerID INT;
DECLARE @mergeDate DATETIME;
DECLARE @reset BIT;
UPDATE dbo.merge_standard_queue
SET...
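For readers unfamiliar with the pattern, this is a generic sketch of combining variable assignment and a column update in one atomic statement (the table and column names are illustrative, not taken from this thread):
declare @claimedID int
update top (1) dbo.some_queue               -- hypothetical queue table
set @claimedID = id,                        -- capture the claimed row's key...
    status = 'in progress'                  -- ...and mark it claimed, in the same statement
where status = 'pending'
select @claimedID as claimed_id             -- NULL if no pending row was found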
July 12, 2011 at 3:59 am
In general, functions in SQL Server are a feature worth avoiding.
The functions you suggest are probably the "best" or maybe "least-worst", in that they are scalar functions that access no...
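To illustrate the point (these functions are made up for illustration, not the ones under discussion): a scalar function is evaluated row by row and hides its cost from the optimiser, whereas the same logic written as an inline table-valued function can be expanded into the calling query's plan.
-- Scalar function: called once per row.
create function dbo.fn_AddTax_scalar (@amount money) returns money
as
begin
    return @amount * 1.2
end
go
-- Inline table-valued function: the same expression, inlined into the plan.
create function dbo.fn_AddTax_itvf (@amount money) returns table
as
return (select @amount * 1.2 as amount_with_tax)
go
-- Usage: select o.id, t.amount_with_tax from dbo.orders o cross apply dbo.fn_AddTax_itvf(o.amount) t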
March 9, 2011 at 2:38 am
This is simpler...
declare @mylist nvarchar(100);
set @mylist = 'A,B, C, D , 1, 2,345, EFG, H,';
declare @delim varchar(2)=',';
select LEN(@mylist) - LEN(replace(@mylist,@delim,''));
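A small extension, not from the original post: for a delimiter longer than one character, divide by the delimiter's length to get the occurrence count.
declare @list nvarchar(100) = 'A||B||C||'
declare @d varchar(2) = '||'
select (LEN(@list) - LEN(replace(@list, @d, ''))) / LEN(@d) as delimiter_count  -- returns 3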
March 7, 2011 at 2:04 am
There is some significant overlap between this post and yours, so I did some performance testing.
On the combination of both the test data you supply, and the 8000 replications of...
July 23, 2010 at 9:42 am
You're right, my suggestion doesn't perform well.
It could be done better - here is a slightly improved version
;with spc (id, val, num, k) as (
select a.id, a.val, t.num, ROW_NUMBER() over...
July 14, 2010 at 6:39 am
declare @file1 varchar(max) set @file1 = 'file1.test'
declare @file2 varchar(max) set @file2 = 'file2.test'
declare @ret int
declare @sql nvarchar(max) set @sql = N'exec @ret = xp_cmdshell ' + char(39) + 'rename '...
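A generic sketch of that pattern for comparison (it quotes with doubled single quotes rather than char(39), the paths are illustrative, and it assumes xp_cmdshell is enabled):
declare @old varchar(260) = 'C:\temp\file1.test'
declare @new varchar(260) = 'file2.test'     -- rename takes the new name without a path
declare @ret int
declare @sql nvarchar(max) = N'exec @ret = xp_cmdshell ''rename "' + @old + '" ' + @new + ''''
exec sp_executesql @sql, N'@ret int output', @ret = @ret output
select @ret as exit_code                     -- xp_cmdshell returns 0 on success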
July 14, 2010 at 3:07 am
Nice. Can't help thinking, though, that recursive CTEs are a solution looking for a problem.
That method works well for short strings, but longer strings will soon hit the maximum...
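For reference, the limit in question is the default recursion depth of 100; a minimal illustration (not from the thread):
;with n (i) as (
    select 1
    union all
    select i + 1 from n where i < 200
)
select max(i) as last_i from n
option (maxrecursion 200)   -- without the hint this fails at 100 levels; 0 removes the limit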
July 14, 2010 at 2:38 am
hmm.. you're right.
If you replace the relevant line with:
set @val = cast((@seed * 63 * power(2,@l)) as int) % 62
it will give a significantly better distribution of the final...
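An assumption on my part: the value in the 0-61 range presumably indexes a 62-character alphanumeric alphabet, along these lines:
-- Hypothetical mapping of a 0..61 value to [0-9A-Za-z]; not code from this thread.
declare @alphabet varchar(62) = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
declare @val int = 17
select substring(@alphabet, @val + 1, 1) as ch   -- substring is 1-based, so this returns 'H'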
June 22, 2010 at 8:28 am
I've done some performance checks.
I expect the performance would be poor where there are many (millions of) consecutive rows to flatten, but the data I need to process is typically...
June 10, 2010 at 7:50 am
Thanks for the suggestion.
However, while that has one big advantage (it works), it has a problem for me (it isn't SQL 2000 compatible). Although that shouldn't matter on this forum,...
June 10, 2010 at 3:04 am
sp_MSForEachTable is an undocumented system stored procedure. Although the official Microsoft documentation does not include any details on it, there is much information available on the 'net: http://www.google.com/search?hl=en&q=sql+server+sp_msforeachtable.
It runs...
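A typical usage example (illustrative, not from the original post); the '?' placeholder is replaced with each table name in turn:
-- Row counts for every user table in the current database.
exec sp_MSforeachtable 'select ''?'' as table_name, count(*) as row_count from ?'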
June 10, 2010 at 2:14 am