• The production size issues for relational are one thing (and I agree with Rudy; I think the orders of magnitude are off), but sizing for use in development is another can of worms entirely. It becomes a big problem when you can't create a completely isolated development environment because even a 10% sample data set won't fit inside a Virtual PC image.

    I primarily do BI consulting, and processing an OLAP cube on top of three quarters of a billion rows of development data is a PITA. Especially when you know you're working with a 10% "sample" of real production health insurance claim data for a "small" regional health insurance carrier. It's a challenge to determine whether your 10% "sample" is really representative or not...
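    On the representativeness question: one cheap sanity check is a chi-square goodness-of-fit test on a categorical column, comparing the sample's category mix against the full table's. A minimal sketch (the claim-type categories and counts here are made up for illustration, not real claims data):

```python
def chi_square_stat(pop_counts, samp_counts):
    """Chi-square goodness-of-fit statistic: how far does the sample's
    category mix drift from the population's proportions?"""
    n_samp = sum(samp_counts.values())
    n_pop = sum(pop_counts.values())
    stat = 0.0
    for cat, pop_n in pop_counts.items():
        expected = n_samp * pop_n / n_pop  # count we'd expect in the sample
        observed = samp_counts.get(cat, 0)
        stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical claim-type mix for a full claims table (category counts).
population = {"outpatient": 600_000, "inpatient": 250_000, "pharmacy": 150_000}

# A perfectly proportional 10% sample matches exactly: statistic is 0.
exact = {"outpatient": 60_000, "inpatient": 25_000, "pharmacy": 15_000}
print(chi_square_stat(population, exact))   # → 0.0

# A skewed "sample" blows past the 5% critical value (~5.99 at 2 df).
skewed = {"outpatient": 80_000, "inpatient": 15_000, "pharmacy": 5_000}
print(chi_square_stat(population, skewed) > 5.99)  # → True
```

    It only checks one dimension at a time, of course; a sample can match on claim type and still be badly skewed on date or region.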

    20 billion rows is nothin'. I'll bet the boys in Vegas who do all the "real" cutting-edge BI work are playing with row counts a couple of orders of magnitude larger...

    I'd love to know how many point-of-sale records Wal-Mart has in its warehouse. Extracts I've seen in the past, covering just a few manufacturers' product lines in selected parts of the continental US for just three months of data, were in the 400M-record range.

    [Does somebody at the NSA think that it makes people feel better about the program to brag about the size of their database(s)? Doh!]