• fregatepallada (12/18/2014)


    The bigint datatype occupies 8 bytes of storage.

    uniqueidentifier takes twice that: 16 bytes.

    Therefore, for every 1 million records you pick up roughly 7.63 MB of extra storage.

    Seems like a small number, but how many tables do you have? And in a cloud-based scenario you are paying for that storage PER MONTH. Good luck!
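
    To make the quoted arithmetic concrete, here is a quick T-SQL sketch (purely illustrative; the 1,000,000-row figure is just the example size from the quote):

        -- Per-key sizes and the extra storage per million rows when using GUID keys
        DECLARE @rows bigint = 1000000;
        SELECT DATALENGTH(CAST(1 AS bigint)) AS bigint_bytes,              -- 8
               DATALENGTH(NEWID())           AS guid_bytes,                -- 16
               @rows * (16 - 8) / 1048576.0  AS extra_mb_per_million_rows; -- ~7.63 MB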

    Actually, I am currently in the process of getting the numbers for a move to either AWS or maybe Azure (if they can really support the size and load).

    From a purely physical on-site architecture standpoint, I will give an example of a single database in our system, which you would then multiply by the number of databases of this type in our system.

    Let's say we have 1 TB of actual data in the data file, roughly 60 GB of transactions in the t-logs at any one time, 80 GB of locked-down tempdb files, and 20-odd GB of system stuff.

    The data files pull 6,000 IOPS at 80% read / 20% write.

    The log files pull nearly 1,000 IOPS at 99% write / 1% read (we have 1.5 TB of RAM on the servers).

    Tempdb pulls about 3,000 IOPS at 50% read / 50% write.

    Now we do some magic with the data files using various forms of SAN cache and SSD, but overall we are looking at a 10,000 IOPS system for 1 TB of data.
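
    For reference, here is how those per-file figures roll up; a purely illustrative T-SQL sketch with the numbers from this post hard-coded:

        -- Total IOPS and blended read ratio across the data, log, and tempdb files
        DECLARE @workload TABLE (file_type varchar(10), iops int, read_pct decimal(4,2));
        INSERT INTO @workload VALUES
            ('data',   6000, 0.80),
            ('log',    1000, 0.01),
            ('tempdb', 3000, 0.50);
        SELECT SUM(iops)                        AS total_iops,       -- 10,000
               SUM(iops * read_pct) / SUM(iops) AS blended_read_pct  -- ~0.63, i.e. ~63% read
        FROM @workload;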

    Now let's say we have 10 of these (which we do, and more) and you want to buy storage. The following is a made-up figure, but not far off the actual one: to achieve nearly 100,000 IOPS for approximately 10 TB on a SAN system, you will spend about 1.5 million dollars, or roughly $150,000 per TB, for a mostly spinning-disk system. That is capital expenditure. For the operating figure for the cloud, you just poke these numbers into whatever calculator your potential cloud provider offers and see what you come up with.
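
    Plugging the quoted capital figures into the same back-of-envelope style, another purely illustrative T-SQL sketch (the dollar, TB, and IOPS values are the round numbers from this post, not real quotes):

        -- Unit costs implied by ~$1.5M for ~100,000 IOPS across ~10 TB of SAN storage
        DECLARE @capital money = 1500000, @tb int = 10, @iops int = 100000;
        SELECT @capital / @tb   AS dollars_per_tb,    -- $150,000 per TB
               @capital / @iops AS dollars_per_iops;  -- $15 per provisioned IOPS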