SQLServerCentral is supported by Red Gate Software Ltd.
 
Randomizing Result Sets with NEWID
Posted Monday, March 1, 2010 11:03 PM


Thanks for the article.



Jason AKA CirqueDeSQLeil
I have given a name to my pain...
MCM SQL Server


SQL RNNR

Posting Performance Based Questions - Gail Shaw
Posting Data Etiquette - Jeff Moden
Hidden RBAR - Jeff Moden
VLFs and the Tran Log - Kimberly Tripp
Post #874864
Posted Tuesday, March 2, 2010 7:20 AM


Jonathan Kehayias (3/1/2010)
SQLBOT (3/1/2010)


It's the random insertion, not the datatype that causes the problem.
What's the difference if the data inserted is Johnson, Jonsonn, Johnsen or three GUIDs?
Under the hood, there's not a difference.



Craig,

Respectfully, the rate of fragmentation partially depends on the column's datatype. If you are inserting random values into a varchar(8) column, the fragmentation impact will differ from a char(8), nchar(8), or nvarchar(8) column, because the storage size is different for each, so fragmentation rates differ too. A GUID is 16 bytes, so it takes more space = fuller pages faster = more page splits = faster fragmentation rates.

Your point is accurate, just playing semantics with you is all.
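A quick sketch of the storage sizes being compared above; you can see them directly with DATALENGTH (the variable names are made up for illustration):

```sql
-- Bytes actually stored per value for each type under discussion
DECLARE @v  varchar(8)       = 'Johnson',
        @c  char(8)          = 'Johnson',
        @nv nvarchar(8)      = N'Johnson',
        @nc nchar(8)         = N'Johnson',
        @g  uniqueidentifier = NEWID();

SELECT DATALENGTH(@v)  AS varchar_bytes,   -- 7  (actual length only)
       DATALENGTH(@c)  AS char_bytes,      -- 8  (always padded to 8)
       DATALENGTH(@nv) AS nvarchar_bytes,  -- 14 (2 bytes per character)
       DATALENGTH(@nc) AS nchar_bytes,     -- 16
       DATALENGTH(@g)  AS guid_bytes;      -- 16
```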



Hey, that's a great point.

If one were to write up a list of best/worst clustered index keys, I think the GUID would fall somewhere in the middle... that's all I'm saying. Worst would be (I think) a long composite key with random insertions, for the reasons we both pointed out.

I smell another article coming on!
PM me if you want to contribute.


Thanks,

~Craig




Craig Outcalt



Tips for new DBAs: http://www.sqlservercentral.com/articles/Career/64632
My other articles: http://www.sqlservercentral.com/Authors/Articles/Craig_Outcalt/560258
Post #875137
Posted Tuesday, March 2, 2010 7:31 AM
I'm curious after reading this thread: why would you assign a random number via RAND() to a varchar instead of simply using one of the available numeric data types? If you are going to index or sort on a column, I recall reading that numeric data types are more efficient for indexing and sorting.

Any thoughts based on experience from the group on this?
Post #875149
Posted Tuesday, March 2, 2010 7:35 AM


SQLBOT (3/1/2010)
I also forgot to say that there is the NEWSEQUENTIALID() function if you're going to batch load.
That should wreck your cluidx a little less, too.


NEWSEQUENTIALID() works great as the default value on a column, but that's the only way it can be used; it can't be generated on the fly. To get around that, a kludge someone taught me once is:

create proc GenerateSequentialID
as
    -- NEWSEQUENTIALID() is only allowed as a column default,
    -- so capture one value by round-tripping through a temp table
    create table #temp_seqid (rowval uniqueidentifier default newsequentialid())

    insert into #temp_seqid default values

    select rowval from #temp_seqid
go



Gaby
________________________________________________________________
"In theory, theory and practice are the same. In practice, they are not."
- Albert Einstein
Post #875154
Posted Tuesday, March 2, 2010 7:55 AM


Paul White (3/1/2010)
GabyYYZ (3/1/2010)
One option, especially if you have an indexed identity column on your source table, is to generate a separate table of random row numbers, create a clustered index on it, and join with the original table.

Nice idea. Of course, the 'random' numbers are then a bit, er, 'fixed' aren't they?
Can't believe you used a RBAR method to populate your table.
For smallish numbers of random rows, I prefer an approach very similar to the one posted by Gary earlier.
It does require a table with a sequential ID, but that's pretty common - excepting those that like GUIDs as a PK *shudder*

LOL, just tried it like I suggested, it did NOT work. Strange, so for now, newid() is still the best way.


Gaby
________________________________________________________________
"In theory, theory and practice are the same. In practice, they are not."
- Albert Einstein
Post #875187
Posted Wednesday, March 3, 2010 12:53 AM
To counter performance issues, the easiest thing to do is add tablesample (10 percent) to the query.
This way the newid() function only needs to run on an already randomized sample of 10 percent instead of against the entire data set.
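A minimal sketch of that combination, assuming a large source table (dbo.BigTable is a placeholder name):

```sql
-- Let TABLESAMPLE cut the set to roughly 10 percent of pages first,
-- then run NEWID() only against that sample
SELECT TOP (100) *
FROM dbo.BigTable TABLESAMPLE (10 PERCENT)
ORDER BY NEWID();
```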

Barry
Post #875724
Posted Wednesday, March 3, 2010 1:11 AM


Barry-193141 (3/3/2010)
To counter performance issues, the easiest thing to do is add tablesample (10 percent) to the query.
This way the newid() function only needs to run on an already randomized sample of 10 percent instead of against the entire data set.

Just be sure never to use that technique with small tables: you'll likely get no rows at all.

One other point for the general discussion: If a good distribution of random values is important to you, ORDER BY CHECKSUM(NEWID()) is better in that respect.
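A minimal sketch of that variant (dbo.SomeTable is a placeholder name); sorting on CHECKSUM(NEWID()) gives an integer sort key rather than ordering on the raw 16-byte GUID:

```sql
-- CHECKSUM(NEWID()) produces an int per row, giving a better
-- distribution of sort-key values than the GUID bytes themselves
SELECT TOP (10) *
FROM dbo.SomeTable
ORDER BY CHECKSUM(NEWID());
```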

Paul




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #875731
Posted Wednesday, March 3, 2010 1:49 AM
Mr Random again....

TABLESAMPLE limitations per MSDN:
Rows on individual pages of the table are not correlated with other rows on the same page.

I've never seen a database where that condition could be assumed. Most rows are entered sequentially, which is very likely to produce correlations.

NEWID(), and any function of it, is "too perfect": the nice properties, such as a good distribution of digits, have to be built in.

True random values aren't so perfect except in very large samples, and good pseudo-random output should be difficult to distinguish from true random.

If you're doing a lottery, or a statistical study, definitely look for better solutions.
Post #875746
Posted Wednesday, March 3, 2010 3:03 AM
All that means from MSDN is that a random sample MAY be grouped. It's still a random sample.
Say you had 26 buckets, one for each letter of the alphabet, filled with names, and you wanted to choose a person's name randomly from one of those buckets.
TABLESAMPLE + NEWID() would still get you a random person.

TABLESAMPLE would randomly get you one of the 26 letters, and then NEWID() would get you a random person from that letter.

Seems as random as any other method.

sqlservercentral-1070393 (3/3/2010)
Mr Random again....
Post #875774
Posted Sunday, March 7, 2010 4:21 PM


Someone appears to have drawn some inspiration from your article.

http://subhrosaha.wordpress.com/

Post #878368