Randomizing Result Sets with NEWID

  • I also forgot to say that there is the NEWSEQUENTIALID() function if you're going to batch load.

    That should wreck your clustered index a little less, too.
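
    A minimal sketch of that usage (table and column names are made up for illustration); note that NEWSEQUENTIALID() is only allowed as a column DEFAULT:

    -- sequential GUIDs append at the end of the clustered index,
    -- so batch loads cause far fewer page splits than NEWID()
    create table dbo.BatchLoad (
        id uniqueidentifier not null
            constraint DF_BatchLoad_id default newsequentialid()
            constraint PK_BatchLoad primary key clustered,
        payload varchar(100) not null
    )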

  • Jonathan Kehayias (3/1/2010)


    It probably was clustered; it's common for App Developers to do this kind of thing. It happened at Microsoft around the Windows 7 RC downloads...

    Ah yes, I remember that one 😀

    Highly amusing at the time...

  • SQLBOT (3/1/2010)


    It's the random insertion, not the datatype that causes the problem.

    What's the difference if the data inserted is Johnson, Jonsonn, Johnsen or three GUIDs?

    Under the hood, there's not a difference.

    Craig,

    Respectfully, the rate of fragmentation partially depends on the column's datatype. If you are inserting random values into a varchar(8) column, the impact on fragmentation would be different than for a char(8), nchar(8), or nvarchar(8) column, because the storage size is different for each. A GUID is 16 bytes, so it takes more space = fuller pages faster = more page splits = faster fragmentation.

    Your point is accurate, just playing semantics with you is all. 😉
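
    To see that effect for yourself, here's a rough repro sketch (table name and row count are made up; run it in a scratch database):

    -- random GUID keys scatter inserts across the clustered index,
    -- splitting pages as they fill
    create table dbo.GuidTest (
        id uniqueidentifier not null default newid() primary key clustered,
        filler char(100) not null default 'x'
    )
    go
    insert into dbo.GuidTest default values
    go 10000 -- SSMS trick: repeats the preceding batch 10,000 times

    -- check logical fragmentation after the random-order inserts
    select avg_fragmentation_in_percent, page_count
    from sys.dm_db_index_physical_stats(db_id(), object_id('dbo.GuidTest'), null, null, 'LIMITED')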

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Paul White (3/1/2010)


    Jonathan Kehayias (3/1/2010)


    It probably was clustered; it's common for App Developers to do this kind of thing. It happened at Microsoft around the Windows 7 RC downloads...

    Ah yes, I remember that one 😀

    Highly amusing at the time...

    It is clustered. Am I doing something terribly wrong? Could you shed some light on this, please? I am not an expert in SQL and would appreciate guidance from the SQL gurus.

    Thanks

  • adish (3/1/2010)


    It is clustered. Am I doing something terribly wrong? Could you shed some light on this, please? I am not an expert in SQL and would appreciate guidance from the SQL gurus.

    Thanks

    adish,

    Read Paul Randal's blog post at the link I provided. It explains why a clustered index/primary key on a GUID is suboptimal.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Thanks for the article.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Jonathan Kehayias (3/1/2010)


    SQLBOT (3/1/2010)


    It's the random insertion, not the datatype that causes the problem.

    What's the difference if the data inserted is Johnson, Jonsonn, Johnsen or three GUIDs?

    Under the hood, there's not a difference.

    Craig,

    Respectfully, the rate of fragmentation partially depends on the column's datatype. If you are inserting random values into a varchar(8) column, the impact on fragmentation would be different than for a char(8), nchar(8), or nvarchar(8) column, because the storage size is different for each. A GUID is 16 bytes, so it takes more space = fuller pages faster = more page splits = faster fragmentation.

    Your point is accurate, just playing semantics with you is all. 😉

    Hey, that's a great point.

    If one were to write up a list of best/worst clustered index keys, I think the GUID would fall somewhere in the middle... that's all I'm saying. Worst would be (I think) a long composite key based on random insertions, for the reasons we both pointed out.

    I smell another article coming on!

    PM me if you want to contribute.

    Thanks,

    ~Craig

    I'm curious after reading this thread: why would you assign a random number via RAND() to a varchar instead of simply using one of the available numeric data types? If you are going to index or sort on a column, my recollection from past reading is that numeric data types are more efficient for indexing and sorting.

    Any thoughts based on experience from the group on this?
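
    One wrinkle worth knowing if you go the numeric route: RAND() without a seed is evaluated once per statement, so every row gets the same value. Seeding it per row is the usual workaround; a minimal sketch, with a hypothetical table name:

    -- RAND() alone returns one value for the whole statement;
    -- seeding it from NEWID() yields a different float per row
    select CustomerID,
           rand(checksum(newid())) as RandomVal -- float in [0, 1)
    from dbo.Customers -- hypothetical table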

  • SQLBOT (3/1/2010)


    I also forgot to say that there is the NEWSEQUENTIALID() function if you're going to batch load.

    That should wreck your cluidx a little less, too.

    NEWSEQUENTIALID() is great as the default value on a column, but that's the only way it can be used; it can't be generated on the fly in a query. To get around that, a kluge someone taught me once is:

    create proc GenerateSequentialID
    as
        -- NEWSEQUENTIALID() is only legal as a column default, so stage
        -- one row in a temp table to capture the generated value
        create table #temp_seqid (rowval uniqueidentifier default newsequentialid())
        insert into #temp_seqid default values
        select rowval from #temp_seqid
    go

    Gaby
    ________________________________________________________________
    "In theory, theory and practice are the same. In practice, they are not." - Albert Einstein

  • Paul White (3/1/2010)


    GabyYYZ (3/1/2010)


    One option, especially if you have an indexed identity column on your source table, is to generate a separate table of random row numbers, create a clustered index on it, and join with the original table.

    Nice idea. Of course, the 'random' numbers are then a bit, er, 'fixed', aren't they?

    Can't believe you used a RBAR method to populate your table. 😛

    For smallish numbers of random rows, I prefer an approach very similar to the one posted by Gary earlier.

    It does require a table with a sequential ID, but that's pretty common - excepting those that like GUIDs as a PK *shudder*

    LOL, just tried it like I suggested, it did NOT work. Strange, so for now, newid() is still the best way. 🙂
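
    For reference, a minimal sketch of the basic pattern the thread keeps coming back to (table name hypothetical):

    -- NEWID() generates a fresh GUID per row, so sorting by it
    -- shuffles the entire result set
    select top (10) *
    from dbo.Customers -- hypothetical table
    order by newid()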

    Gaby
    ________________________________________________________________
    "In theory, theory and practice are the same. In practice, they are not." - Albert Einstein

  • To counter performance issues, the easiest thing to do is add tablesample (10 percent) to the query.

    This way the newid() function only needs to run on an already randomized sample of 10 percent instead of against the entire data set.
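
    A sketch of that combination (table name hypothetical):

    -- TABLESAMPLE grabs roughly 10 percent of the table's pages first,
    -- so NEWID() only has to shuffle the reduced sample
    select top (10) *
    from dbo.BigTable tablesample (10 percent) -- hypothetical table
    order by newid()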

    Barry

  • Barry-193141 (3/3/2010)


    To counter performance issues, the easiest thing to do is add tablesample (10 percent) to the query.

    This way the newid() function only needs to run on an already randomized sample of 10 percent instead of against the entire data set.

    Just be sure never to use the technique with small tables - you'll likely get no rows at all.

    One other point for the general discussion: If a good distribution of random values is important to you, ORDER BY CHECKSUM(NEWID()) is better in that respect.
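
    A sketch of that variant (table name hypothetical):

    -- per the point above, collapsing each GUID to an int via CHECKSUM
    -- gives a better distribution of values for the random sort
    select top (10) *
    from dbo.Customers -- hypothetical table
    order by checksum(newid())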

    Paul

  • Mr Random again....

    The TABLESAMPLE limitation, per MSDN, is that it assumes rows on individual pages of the table are not correlated with other rows on the same page.

    I've never seen a database where that condition could be assumed; most rows are entered sequentially, which very likely introduces correlations between rows on the same page.

    NEWID() and any function of it is "too perfect": the nice properties, such as a good distribution of digits, have to be built in.

    Truly random sequences aren't so perfect except in very large samples, and good pseudo-random output should be difficult to distinguish from true randomness.

    If you're doing a lottery or a statistical study, definitely look for better solutions.

    All that MSDN caveat means is that a random sample MAY be grouped. It's still a random sample.

    Say you had 26 buckets - 1 for each letter of the alphabet - and they were filled with names and you wanted to choose a person's name randomly from one of those buckets.

    Tablesample + NewID() would still get you a random person.

    Tablesample would randomly get you one of the 26 letters and then newid() would get you a random person from that letter.

    Seems as random as any other method.

    sqlservercentral-1070393 (3/3/2010)


    Mr Random again....

    The TABLESAMPLE limitation, per MSDN, is that it assumes rows on individual pages of the table are not correlated with other rows on the same page.

    I've never seen a database where that condition could be assumed; most rows are entered sequentially, which very likely introduces correlations between rows on the same page.

    NEWID() and any function of it is "too perfect": the nice properties, such as a good distribution of digits, have to be built in.

    Truly random sequences aren't so perfect except in very large samples, and good pseudo-random output should be difficult to distinguish from true randomness.

    If you're doing a lottery or a statistical study, definitely look for better solutions.

  • Someone appears to have drawn some inspiration from your article.

    http://subhrosaha.wordpress.com/
