Generating Test Data: Part 1 - Generating Random Integers and Floats
Posted Monday, March 26, 2012 1:57 PM


Matt Miller (#4) (3/26/2012)
sknox (3/26/2012)
For testing purposes (both scientific and software) pseudo-random numbers are preferable to truly random numbers*, because you want to see how the system responds to the entire range of possible inputs. A truly random number source cannot be trusted to give you a representative sample.

* This is, of course, assuming that the pseudo-random number generator produces uniformly-distributed data. More on that in a bit.


That's a good point to bring up. A uniform random generator will create an even distribution across a range of data, but cannot on its own replicate any non-uniform data patterns. So if you're looking to find out whether there's a normal distribution in your data (or any number of other patterns across the set), using random data may not be a good option.

This would be one of those big caveats in the "why would you need random data" discussion. The random set will allow you to test for the behavior of a variety of inputs at the detail level, but won't help with testing the set as a whole.


Hmmmm... the constraints on range and domain aren't enough to satisfy this problem? Such constraints could actually form a "bell curve" (or whatever) using a CASE statement to "weight" the outcome of the constrained random generator.
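Just to illustrate the idea (a minimal sketch of my own; the bucket boundaries are invented, not from the article): a uniform 1-100 roll can be folded through a CASE statement so the middle of the domain comes up most often.

-- Bucket a uniform 1-100 roll into the domain 1-5 with a crude bell shape.
select Weighted =
    case
        when Roll <= 10 then 1  -- 10% of rows
        when Roll <= 35 then 2  -- 25% of rows
        when Roll <= 65 then 3  -- 30% of rows
        when Roll <= 90 then 4  -- 25% of rows
        else 5                  -- 10% of rows
    end
from (select top 1000 Roll = abs(checksum(newid())) % 100 + 1
      from master.dbo.syscolumns) u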


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1272961
Posted Monday, March 26, 2012 2:22 PM
Here is an alternate method that I use to generate pseudo-random numbers. The basic method is to take the right 7 bytes from the NEWID function and convert them to a BIGINT before applying the modulus operator. There is no need for the ABS function, since 7 bytes can only produce a positive BIGINT value.


if object_id('tempdb..#t','U') is not null begin drop table #t end

-- Generate 20,000,000 rows of sequential integers
select top 20000000
    NUMBER = identity(int, 1, 1)
into
    #t
from
    (select top 4473 * from master.dbo.syscolumns) a
    cross join
    (select top 4473 * from master.dbo.syscolumns) b

-- Show the distribution of row counts around the average of 40,000
select
    a.RandomNo,
    Rows = count(*)
from
    (
    select
        RandomNo = (convert(bigint, convert(varbinary(7), newid())) % 500) + 1
    from
        #t aa
    ) a
group by
    a.RandomNo
order by
    count(*),
    a.RandomNo



RandomNo             Rows
-------------------- -----------
335                  39455
3                    39457
76                   39481
426                  39489
494                  39535
242                  39539
278                  39539
490                  39548
445                  39553
244                  39566
...
124                  40400
228                  40402
425                  40410
286                  40434
45                   40458
463                  40463
373                  40531
152                  40586

(500 row(s) affected)





Post #1272999
Posted Monday, March 26, 2012 3:05 PM


Jeff Moden (3/26/2012)
Matt Miller (#4) (3/26/2012)
sknox (3/26/2012)
For testing purposes (both scientific and software) pseudo-random numbers are preferable to truly random numbers*, because you want to see how the system responds to the entire range of possible inputs. A truly random number source cannot be trusted to give you a representative sample.

* This is, of course, assuming that the pseudo-random number generator produces uniformly-distributed data. More on that in a bit.


That's a good point to bring up. A uniform random generator will create an even distribution across a range of data, but cannot on its own replicate any non-uniform data patterns. So if you're looking to find out whether there's a normal distribution in your data (or any number of other patterns across the set), using random data may not be a good option.

This would be one of those big caveats in the "why would you need random data" discussion. The random set will allow you to test for the behavior of a variety of inputs at the detail level, but won't help with testing the set as a whole.


Hmmmm... the constraints on range and domain aren't enough to satisfy this problem? Such constraints could actually form a "bell curve" (or whatever) using a CASE statement to "weight" the outcome of the constrained random generator.


That's kind of what I meant by the "on its own" comment. You can use the random data generator to pull in representative data in all allowed ranges, but you would need to play with the frequency or weight based on how far away from the mean you happen to be. Assuming you have some knowledge of your data, you can shape your test data to match, using the random set as a base.
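One cheap way to sketch that shaping (my illustration, not from the article): average several independent uniform rolls, and the central limit theorem pushes the results toward the mean, giving a hump-shaped distribution with no CASE logic at all.

-- The average of three uniform 0-100 rolls clusters around 50
-- instead of being flat across the range.
select top 1000 BellIsh =
    ( abs(checksum(newid())) % 101
    + abs(checksum(newid())) % 101
    + abs(checksum(newid())) % 101 ) / 3
from master.dbo.syscolumns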



----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
Post #1273030
Posted Monday, March 26, 2012 3:07 PM


Excellent article, Jeff.

Nice coincidence today... I went to the site to find out how you did this, and here's the article explaining it all.

Thanks for taking the time for this really great article that explains the how and why.


Wayne
Microsoft Certified Master: SQL Server 2008
If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
Links: For better assistance in answering your questions, How to ask a question, Performance Problems, Common date/time routines,
CROSS-TABS and PIVOT tables Part 1 & Part 2, Using APPLY Part 1 & Part 2, Splitting Delimited Strings
Post #1273031
Posted Monday, March 26, 2012 3:29 PM


That's a good point to bring up. A uniform random generator will create an even distribution across a range of data


I heartily agree. It's been a long time since I studied statistical distributions, but a basic understanding of them is hugely useful. It would be great to have a method of generating random data that approximated a distribution, whether it be Gaussian or exponential decay, or an F distribution or whatever.

A common phenomenon is where a column might validly accept one of, say, 30 integers. The vast majority of the time people will record one of five values, and the frequency of recording the others tapers off, with a few being used exceedingly rarely. If you were testing things like index cardinality and column statistics generation, I wonder whether you'd get more representative testing results if your test data could mimic the distribution you expected to occur in production.
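For what it's worth, that kind of skew is easy to fake on top of a uniform generator (a hypothetical sketch; the 80/20 split and the value ranges are invented):

-- Roughly 80% of rows land on the five "hot" values 1-5; the other
-- 25 values (6-30) share the remaining ~20% of rows.
select top 1000 SkewedValue =
    case
        when abs(checksum(newid())) % 100 < 80
        then abs(checksum(newid())) % 5 + 1   -- hot values 1-5
        else abs(checksum(newid())) % 25 + 6  -- rare values 6-30
    end
from master.dbo.syscolumns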




One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important.
Bertrand Russell
Post #1273040
Posted Monday, March 26, 2012 4:46 PM


Michael Valentine Jones (3/26/2012)
Here is an alternate method that I use to generate pseudo-random numbers. The basic method is to take the right 7 bytes from the NEWID function and convert them to a BIGINT before applying the modulus operator. There is no need for the ABS function, since 7 bytes can only produce a positive BIGINT value.


if object_id('tempdb..#t','U') is not null begin drop table #t end

-- Generate 20,000,000 rows of sequential integers
select top 20000000
    NUMBER = identity(int, 1, 1)
into
    #t
from
    (select top 4473 * from master.dbo.syscolumns) a
    cross join
    (select top 4473 * from master.dbo.syscolumns) b

-- Show the distribution of row counts around the average of 40,000
select
    a.RandomNo,
    Rows = count(*)
from
    (
    select
        RandomNo = (convert(bigint, convert(varbinary(7), newid())) % 500) + 1
    from
        #t aa
    ) a
group by
    a.RandomNo
order by
    count(*),
    a.RandomNo



RandomNo             Rows
-------------------- -----------
335                  39455
3                    39457
76                   39481
426                  39489
494                  39535
242                  39539
278                  39539
490                  39548
445                  39553
244                  39566
...
124                  40400
228                  40402
425                  40410
286                  40434
45                   40458
463                  40463
373                  40531
152                  40586

(500 row(s) affected)







Like I said in the article, the conversion to VARBINARY will slow things down, and to no good end if you don't really need BIGINT for the random integer. If you really want BIGINT capability (and I realize that wasn't one of your goals in your example), I believe you'd also have to convert the whole NEWID() to VARBINARY.

I also thought you were involved in some testing that showed that using the square root of the final number of desired rows as a TOP for the self-joined tables in the cross join really wasn't worth it.

The main point that I'm trying to make is that if it's too complicated, folks won't use it.
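To make the trade-off concrete, here are the two variants side by side (a sketch of my own, relying on SQL Server truncating the leftmost bytes when a long binary value is converted to BIGINT; verify on your build before trusting the distribution):

-- Lightweight INT-range random number, 1-500
select IntRandom = abs(checksum(newid())) % 500 + 1

-- Full BIGINT-range source: convert the whole 16-byte NEWID() to VARBINARY;
-- the surviving 8 bytes can be negative, so ABS comes back (ignoring the
-- astronomically rare ABS overflow on the minimum BIGINT value).
select BigintRandom = abs(convert(bigint, convert(varbinary(16), newid()))) % 500 + 1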


--Jeff Moden
Post #1273091
Posted Monday, March 26, 2012 4:48 PM


Matt Miller (#4) (3/26/2012)
Jeff Moden (3/26/2012)
Matt Miller (#4) (3/26/2012)
sknox (3/26/2012)
For testing purposes (both scientific and software) pseudo-random numbers are preferable to truly random numbers*, because you want to see how the system responds to the entire range of possible inputs. A truly random number source cannot be trusted to give you a representative sample.

* This is, of course, assuming that the pseudo-random number generator produces uniformly-distributed data. More on that in a bit.


That's a good point to bring up. A uniform random generator will create an even distribution across a range of data, but cannot on its own replicate any non-uniform data patterns. So if you're looking to find out whether there's a normal distribution in your data (or any number of other patterns across the set), using random data may not be a good option.

This would be one of those big caveats in the "why would you need random data" discussion. The random set will allow you to test for the behavior of a variety of inputs at the detail level, but won't help with testing the set as a whole.


Hmmmm... the constraints on range and domain aren't enough to satisfy this problem? Such constraints could actually form a "bell curve" (or whatever) using a CASE statement to "weight" the outcome of the constrained random generator.


That's kind of what I meant by the "on its own" comment. You can use the random data generator to pull in representative data in all allowed ranges, but you would need to play with the frequency or weight based on how far away from the mean you happen to be. Assuming you have some knowledge of your data, you can shape your test data to match, using the random set as a base.


Ah... understood. Thanks, Matt.


--Jeff Moden
Post #1273092
Posted Monday, March 26, 2012 4:51 PM


GPO (3/26/2012)
That's a good point to bring up. A uniform random generator will create an even distribution across a range of data


I heartily agree. It's been a long time since I studied statistical distributions, but a basic understanding of them is hugely useful. It would be great to have a method of generating random data that approximated a distribution, whether it be Gaussian or exponential decay, or an F distribution or whatever.

A common phenomenon is where a column might validly accept one of, say, 30 integers. The vast majority of the time people will record one of five values, and the frequency of recording the others tapers off, with a few being used exceedingly rarely. If you were testing things like index cardinality and column statistics generation, I wonder whether you'd get more representative testing results if your test data could mimic the distribution you expected to occur in production.


Hmmmm... maybe there needs to be a Part 4 to this series.


--Jeff Moden
Post #1273094
Posted Monday, March 26, 2012 4:54 PM


WayneS (3/26/2012)
Excellent article, Jeff.

Nice coincidence today... I went to the site to find out how you did this, and here's the article explaining it all.

Thanks for taking the time for this really great article that explains the how and why.


I know I said it before, but thank you for the time you spent helping with the review.


--Jeff Moden
Post #1273097
Posted Monday, March 26, 2012 6:43 PM


GPO said:
It would be great to have a method of generating random data that approximated a distribution, whether it be Gaussian or exponential decay, or an F distribution or whatever.


The approach requires running the numbers from the uniform distribution through the inverse of the target distribution's cumulative distribution function (inverse transform sampling). This is not for the faint of heart. I've done it before (not in SQL) for a Weibull distribution.

This article shows how it can be done for a Gaussian distribution:
http://murison.alpheratz.net/Maple/GaussianDistribution/GaussianDistribution.pdf
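As a taste of what that looks like in T-SQL (my own sketch, not from the linked paper), the Box-Muller transform turns two independent uniform values into one standard normal deviate:

-- Box-Muller: two uniforms -> one standard normal value. The +1.0 offset
-- keeps u1 strictly above zero so LOG() never sees 0.
select Gaussian = sqrt(-2.0 * log(u1)) * cos(2.0 * pi() * u2)
from (select u1 = (abs(checksum(newid())) + 1.0) / 2147483649.0,
             u2 =  abs(checksum(newid()))        / 2147483648.0) x

For something like an exponential distribution, the inverse CDF is simpler still: -LOG(u) divided by the rate parameter.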



My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!

My thought question: Have you ever been told that your query runs too fast?

My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better, but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.


Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
Splitting strings based on patterns can be fast!
Post #1273119