Stretch Pricing

  • Comments posted to this topic are about the item Stretch Pricing

  • Steve Jones - SSC Editor (6/14/2016)


    If you stretch a sales database, say a 100GB database, and want to move 60GB of that data away, you're going to pay at least US$930/month for the compute at the lowest performance level, regardless of whether anyone queries the data. If you want better performance, you can scale up in roughly multiples of that amount ($1,860, $2,790, $3,720, etc.)

    I find these prices totally insane. You can easily justify buying top-notch disks for your SAN when you compare the cost of that storage to these prices.

  • Seems like there are many more cost-effective ways to handle old, infrequently accessed data--for example, unloading it to Amazon S3 or Glacier. Stretch may be more convenient to manage... but I doubt I could justify the cost.

  • Sounds like the oldest trick in the book... "Give them the razor... and then sell them lots of blades." 😀

    Shifting gears a bit...

    From the article:


    Less data should mean much better performance from your local system.

    ...Less data doesn't even mean less maintenance; proper handling of legacy data does. The only thing I've seen less data reliably help is code that needs help anyway.

    When I started 4 years ago at the company I'm currently working at, the "money maker" database was something less than 70GB, and CPU usage was over 30%, with daily outages on the floor when a large batch job ran. That database is now half a TB, and we've added several other databases of various sizes, each larger than the original 70GB (one is teetering at an additional 600GB). Because of a "Continuous Improvement Program" that we instituted, CPU is down to 4% and there haven't been any outages on the floor in more than 2 years.

    On another system (our in-house telephone system), backups were taking in excess of 10 hours 2 years ago. What's worse, a DR restore would take a bit longer. That system has also grown from about 70GB 4 years ago to 3/4 of a TB today. With a little SQL prestidigitation, we used partitioning so that we back up only the current month of data, even though we keep about 7 years of it. Our backups now take a little over 10 minutes. Because of the way we did the partitioning, we can do a "Get back in business" DR restore in about 10 minutes and then do piecemeal restores over time to bring the infrequently accessed legacy data back online almost at our leisure (a rough sketch follows). We're setting up to do the same with the "money maker" database.
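
    For anyone curious, here's a minimal sketch of that pattern (hypothetical database, filegroup, and path names; it assumes the legacy partitions live on read-only filegroups that were each backed up once after being marked read-only, and that the database uses the SIMPLE recovery model):

        -- Routine backup: just PRIMARY and the read/write (current-month) filegroups.
        -- The read-only legacy filegroups were backed up once, after being marked read-only.
        BACKUP DATABASE PhoneDB
            READ_WRITE_FILEGROUPS
            TO DISK = N'D:\Backup\PhoneDB_RW.bak'
            WITH INIT, CHECKSUM;

        -- "Get back in business" DR restore: WITH PARTIAL starts a piecemeal
        -- restore that brings PRIMARY and the current filegroup online first.
        RESTORE DATABASE PhoneDB
            FILEGROUP = N'PRIMARY',
            FILEGROUP = N'FG_Current'
            FROM DISK = N'D:\Backup\PhoneDB_RW.bak'
            WITH PARTIAL, RECOVERY;

        -- The database is now open for business. Bring the legacy filegroups
        -- back online over time from their one-time backups.
        RESTORE DATABASE PhoneDB
            FILEGROUP = N'FG_2015'
            FROM DISK = N'D:\Backup\PhoneDB_FG_2015.bak'
            WITH RECOVERY;

    (Online piecemeal restore of the later filegroups needs Enterprise Edition; under the FULL recovery model you'd also restore log backups before recovering.)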

    All of which is to say: performance is in the code, not in the absence of data. 😀

    Since we're not backing up to the cloud or storing our "legacy data" there, we've saved a relatively large pile of money; although local storage is still expensive for the common man, it's still a whole lot cheaper, and a K-buck here and a K-buck there every month adds up.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • It sounds like stretch databases are trying to do Enterprise Information Integration (EII).

    From what I can gather, you would write a query against the EII tool (thinking it was a physical database) and the tool itself would work out where in the enterprise your data actually is.

    In SSIS there is a DataReaderDestination object. I believe you can consume it as if it were a DataReader while, under the hood, it runs the SSIS ETL transforms. From what I remember it is a bit of a poor man's EII.

    Ab Initio has a query layer that does a similar thing.

    These tools tend to be very expensive and run on beefy hardware. They are sophisticated data- and query-caching tools that minimise the hit on the source systems while still connecting to those systems. Because of this, they are not suited to all types of query.

    Fascinating but way outside of the price range of many organisations.

  • It's not really "query the tool and find the data"; it's automatic partitioning that moves data to another instance of SQL Server. There's a difference, and it's a valuable tool to have.

    I just think it's mis-priced.

    Certainly you can do things like table partitioning or distributed partitioning, but those require work. Arguably the initial cost might be similar, but I think Stretch becomes more expensive over time. There are also the costs of changing how you do business and of working on the project. Those are hard in some cases, but Stretch is easy (see the sketch at the end of this post).

    Just pay.

    I think there are places where this makes sense, but not many.
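
    For what it's worth, the "easy" part really is the draw. Here's a rough sketch of what enabling Stretch on a table looks like in SQL Server 2016, per the documented syntax (the table, function, and cutoff date are made-up examples, and the database itself must first be stretch-enabled and paired with an Azure server):

        -- One-time: allow the instance to use Stretch Database.
        EXEC sp_configure 'remote data archive', 1;
        RECONFIGURE;
        GO

        -- Inline TVF that marks only cold rows as eligible for migration.
        CREATE FUNCTION dbo.fn_stretchpredicate (@OrderDate datetime)
        RETURNS TABLE
        WITH SCHEMABINDING
        AS
        RETURN SELECT 1 AS is_eligible
               WHERE @OrderDate < CONVERT(datetime, '20140101', 112);
        GO

        -- Start migrating the eligible rows to the paired Azure database.
        ALTER TABLE dbo.SalesOrders
            SET (REMOTE_DATA_ARCHIVE = ON (
                FILTER_PREDICATE = dbo.fn_stretchpredicate(OrderDate),
                MIGRATION_STATE = OUTBOUND));

    Queries don't change at all afterwards; SELECTs against the table transparently span the local and remote rows. That transparency is exactly what makes "just pay" so tempting.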

  • Wow, for small instances the pricing does look high. For storage alone the pricing is low, so if you have a lot of data and need access, this could be cost-effective. Still, for smaller sites this does not look good. Too bad; much of SQL Server's early appeal was to small/medium-sized outfits.

    The more you are prepared, the less you need it.

  • Yet another feature whose pricing doesn't scale (down) well.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • ... If you stretch a sales database, say a 100GB database, and want to move 60GB of that data away, you're going to pay at least US$930/month for the compute at the lowest performance level...

    If an organization has 100 GB of sales data, then US$930/month is a reasonable price to pay for archival. I mean, sales data is revenue-generating; that's a LOT of customers, and they're also saving the operational expense of more on-premises SAN storage.

    However, if you're a team of university students or a startup company trying to archive 100 GB of research data, then I can see how the economics of stretching to Azure don't work for you.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Maybe stretching into Amazon or GoDaddy is an option?

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (12/8/2016)


    ...If an organization has 100 GB of sales data, then US$930/month is a reasonable price to pay for archival. I mean, sales data is revenue-generating; that's a LOT of customers...

    Surely that depends upon the profit on each sale and the quantity of data that must be stored for each sale.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • $930/Month for only 100GB is an insane amount of money for any kind of data. It's a real "stretch" of the imagination to think otherwise. 😉

    --Jeff Moden

  • Jeff Moden (12/8/2016)


    $930/Month for only 100GB is an insane amount of money for any kind of data. It's a real "stretch" of the imagination to think otherwise. 😉

    That comes out to over $11,000 a year, which isn't a lot of money for a data archival system. 100 GB is enough to contain the entire CRM or purchase-order system for a corporation the size of Microsoft.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (12/8/2016)


    Jeff Moden (12/8/2016)


    $930/Month for only 100GB is an insane amount of money for any kind of data. It's a real "stretch" of the imagination to think otherwise. 😉

    That comes out to over $11,000 a year, which isn't a lot of money for a data archival system. 100 GB is enough to contain the entire CRM or purchase-order system for a corporation the size of Microsoft.

    CRMs aren't what I'd need to archive, and I still think that $11K a year is insane for storing only 100GB, especially when you see something like the following (just a first search result and NOT any kind of endorsement): https://listings.emergentsx.com/products/dell-equallogic-ps6500-48-x-2tb-7-2k-sata-iscsi-san-storage-system-ps6500e

    Yeah... I know... they're only 7.2K RPM drives, but this is for an "Archive" that won't be accessed much.

    --Jeff Moden

  • Feels like an order of magnitude too high (at least) to me. I'd say $10,000 a year for 1TB, or maybe 10TB, not 100GB.
