scalable hardware solution for 10TB now to 100TB in 3 years
Posted Monday, January 3, 2011 7:34 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Yesterday @ 9:58 AM
Points: 36,995, Visits: 31,517
mlbauer (12/31/2010)
We want to scan the *complete data in less than 1 hour*


I have to ask... to what end? Why is this necessary and why will it be necessary when you have 50TB of data?


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1042124
Posted Tuesday, January 25, 2011 3:57 AM
Grasshopper


Group: General Forum Members
Last Login: Thursday, October 20, 2011 4:32 AM
Points: 10, Visits: 132
Hi,

Good question. The short answer is: we know our data will grow. We want to be able to keep doing our current tasks in the future, so we want 10x the performance and capacity. It is not clear how fast our data will grow, so we want to be sure to have some additional capacity available.

P.S.
I have opened a small survey about SAN systems here:
http://www.sqlservercentral.com/Forums/Topic1052953-377-1.aspx
Post #1052987
Posted Wednesday, January 26, 2011 6:16 AM
SSC Veteran


Group: General Forum Members
Last Login: Friday, August 29, 2014 4:19 AM
Points: 243, Visits: 2,686
Also... will you continue to add data without an archive strategy? Some years' worth of data will likely no longer be used at some point.
Post #1053816
Posted Thursday, January 27, 2011 1:20 PM
Grasshopper


Group: General Forum Members
Last Login: Thursday, October 20, 2011 4:32 AM
Points: 10, Visits: 132
Hi,

You are right. Old data will become less important at some point. An archive will be necessary for historic data, but it would be nice to keep it available - maybe at a lower speed.
We are doing data mining, so a large collection of historic data will help us. We are currently developing and testing different ways of analyzing our data, so it helps to have lots of data available with performance high enough to run many experiments without having to wait for weeks at every step of development.
At the moment, our hardware guys lean clearly towards NetApp hardware, with an estimated cost of about 1 million euros for a 200 TB solution. That is a huge leap in cost compared to our current hardware, and it would be nice to have at least one or two alternative suggestions for the discussion about the best hardware for our purposes. Do any of the SAN manufacturers provide a technical feature that the others do not?
P.S. Our options seem to be limited to HP or NetApp.
Post #1054893
Posted Thursday, January 27, 2011 1:59 PM
Hall of Fame


Group: General Forum Members
Last Login: Yesterday @ 2:50 PM
Points: 3,135, Visits: 11,482
You may want to look into SSD storage. For example, this product promises 6 GB/sec of bandwidth on a 5 TB device.
http://www.fusionio.com/products/iodriveoctal

With 10 units, you would have 60 GB/sec of IO bandwidth with 50 TB of storage. That would be enough bandwidth to let you read 50 TB in about 14 minutes.
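
As a rough sanity check on that number (assuming the rated 6 GB/sec per device is actually sustained for large sequential reads and scales linearly across units):

10 devices x 6 GB/sec = 60 GB/sec
50 TB = 50 x 1,024 GB = 51,200 GB
51,200 GB / 60 GB/sec = ~853 seconds, or roughly 14 minutes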

Of course, you may want to verify that the vendor's product can actually do what it claims.

There are plenty of other potential bottlenecks when you get into this area: PCI bus speed, memory speed, front-side bus speed, processor speed, etc. I think you will find this a difficult challenge with current hardware.

I would recommend waiting as long as possible to buy the hardware, instead of trying to buy something now that will be good for three years. Performance of hardware per dollar will be much better later, especially for emerging technology like SSD storage.

I would also look into database compression if you are not already using it. If you can get 70% compression, that will save a lot of space and IO bandwidth. Use it with partitioned tables to tailor compression for best performance, for example by compressing anything older than 90 days. Even though it uses more CPU, you save a lot on IO and memory footprint.
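
As a rough sketch of what partition-level compression could look like (the table name dbo.MeasurementHistory and partition number 3 are made up for illustration; this assumes SQL Server 2008+ Enterprise Edition):

-- Estimate the savings before spending the CPU on a rebuild.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'MeasurementHistory',
    @index_id         = NULL,  -- all indexes
    @partition_number = 3,     -- the "older than 90 days" partition
    @data_compression = 'PAGE';

-- Rebuild only the older partition with PAGE compression,
-- leaving the current (hot) partition uncompressed.
ALTER TABLE dbo.MeasurementHistory
REBUILD PARTITION = 3
WITH (DATA_COMPRESSION = PAGE);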

Also, I would seriously explore the importance of this to the business. It is easy to demand fantastic performance when you don't understand the cost, but when you start talking millions of dollars people will take a harder look at the value they are getting for that money. Perhaps a solution where they could see all the recent data quickly would be enough. Or you might be able to break the most important data out to a smaller dataset that doesn't require as much time to query.



Post #1054922
Posted Thursday, February 3, 2011 12:06 AM


SSC-Dedicated


Group: General Forum Members
Last Login: Yesterday @ 9:58 AM
Points: 36,995, Visits: 31,517
mlbauer (1/25/2011)
Hi,

Good question. The short answer is: we know our data will grow. We want to be able to keep doing our current tasks in the future, so we want 10x the performance and capacity. It is not clear how fast our data will grow, so we want to be sure to have some additional capacity available.

P.S.
I have opened a small survey about SAN systems here:
http://www.sqlservercentral.com/Forums/Topic1052953-377-1.aspx


Have you considered simply partitioning the tables?
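
For example, a minimal sketch of partitioning by load date (the object names, boundary dates, and single-filegroup layout are hypothetical; a real design would spread partitions across filegroups):

-- Monthly partitions on the load date; RANGE RIGHT puts each boundary
-- date into the partition to its right.
CREATE PARTITION FUNCTION pfLoadDate (date)
AS RANGE RIGHT FOR VALUES ('2010-11-01', '2010-12-01', '2011-01-01');

CREATE PARTITION SCHEME psLoadDate
AS PARTITION pfLoadDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.MeasurementHistory
(
    LoadDate     date          NOT NULL,
    MeasureValue decimal(18,4) NULL
) ON psLoadDate (LoadDate);

Queries that filter on LoadDate then only touch the relevant partitions (partition elimination), and old partitions can be switched out to an archive table almost instantly instead of being deleted row by row.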


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1057847
Posted Friday, February 4, 2011 7:27 AM
Grasshopper


Group: General Forum Members
Last Login: Thursday, October 20, 2011 4:32 AM
Points: 10, Visits: 132
Yes. We are using partitioned tables for all the data that is loaded daily.
Post #1058741