vNext 2016

  • GilaMonster (5/7/2015)


    The 'restricted features' was true a few months ago.

    That was the last time I had to use it at that level 🙂 Thanks, I'll have another detailed look when I get a moment.

  • My big question will be for the encryption... does it come in Standard Edition or do you have to have Enterprise Edition to get that feature?

  • Markus (5/7/2015)


    My big question will be for the encryption... does it come in Standard Edition or do you have to have Enterprise Edition to get that feature?

    In Azure SQL Database, TDE is in all editions, and Always Encrypted will be in all editions when it comes out.

    In the boxed product, licensing isn't public yet. It's typically decided by the licensing team much closer to release, after they know exactly what features they have to work with. The only feature they've announced (this week) is a subset of AlwaysOn Availability Groups in Standard Edition, basically replacing database mirroring, which has been deprecated.

  • Michael Meierruth (5/7/2015)


    Care to give us a hint on which feature we 'may NOT like'?

I suspect it's that any given feature will not appeal to everyone. For example, the JSON stuff might not be a good idea for your application, and you might not want to see it added.

  • Brent Ozar (5/7/2015)


    The only feature they've announced (this week) is a subset of AlwaysOn Availability Groups in Standard Edition, basically replacing database mirroring, which has been deprecated.

    And about bloody time too.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
• Yeah, having some of that in Standard will be nice. I still won't be completely satisfied with respect to the eventual removal of mirroring, since I rather liked being able to use mirroring to reduce downtime during migrations. That's a use that Availability Groups in Standard won't help with.

    Once mirroring goes away, it'll be back to tail-log backups and such. C'est la vie.

    I can always hope they drag their feet on removing mirroring. 🙂

• I'm trying out a converted form of our database with In-Memory tables and it is not easy, especially given the severe limitations on natively-compiled procedures.

For example, several SPs take XML from a webservice, parse it and insert/update the entries as appropriate at the same time. With natively-compiled procedures, the functions that allow the XML to be parsed (.value() and .nodes()) aren't supported, nor is the FROM clause in the UPDATE statement. It meant that I had to run the natively-compiled statement once per row (RBAR) rather than just once in the interop. It was, by no means, 30x faster. It was simply slower.
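For contrast, here is a minimal sketch of the interop (interpreted T-SQL) route described above, which can still shred the XML set-based against a memory-optimized table; the procedure, table and XML element names are illustrative assumptions, not from the original post.

```sql
-- Interop sketch (SQL Server 2014): .nodes()/.value() and UPDATE ... FROM
-- work in interpreted T-SQL against memory-optimized tables, even though
-- they are rejected inside natively compiled procedures.
-- dbo.MemTable and the XML shape are hypothetical.
CREATE PROCEDURE dbo.Upsert_FromXml
    @payload XML
AS
BEGIN
    SET NOCOUNT ON;

    WITH parsed AS (
        SELECT  n.value('(Id)[1]',   'INT')            AS Id,
                n.value('(Name)[1]', 'NVARCHAR(100)')  AS Name
        FROM    @payload.nodes('/Rows/Row') AS t(n)
    )
    UPDATE m
    SET    m.Name = p.Name
    FROM   dbo.MemTable AS m        -- memory-optimized table
    JOIN   parsed       AS p ON p.Id = m.Id;
END;
```

A natively compiled equivalent would have to take scalar parameters and be invoked once per shredded row, which is the RBAR pattern the post complains about.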

    At this point in time, I feel that all of Microsoft's marketing about In-Memory-OLTP is disingenuous.

    What I would like to see in SQL Server 2016 In-Memory OLTP is:

    • the ability to create indexes on nullable columns;

• Constraints — foreign key, check and unique;

    • Outer joins in natively-compiled procedures;

    • Alter Table — at the moment, one has to drop a table and recreate it in order to make a change, add a column, rename it etc.;

    • cross-database queries — although linked servers work;

• Sub-queries in natively-compiled procedures;

    • FROM clause in the UPDATE statement;

• the IN operator in natively-compiled procedures.

    Microsoft, please release something when it is *ready*, and not half-baked as this is.
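As a concrete illustration of the Alter Table point in the list above, this is roughly the drop-and-recreate dance SQL Server 2014 forces; all object names are hypothetical.

```sql
-- Sketch: adding a column to a memory-optimized table in SQL Server 2014,
-- which has no ALTER TABLE support for these tables. Names are hypothetical.
SELECT * INTO dbo.MemTable_Staging FROM dbo.MemTable;  -- copy the data out

DROP TABLE dbo.MemTable;                               -- drop the old table

CREATE TABLE dbo.MemTable                              -- recreate with the new column
(
    Id     INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Name   NVARCHAR(100) NOT NULL,
    NewCol INT NULL                                    -- the column being added
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

INSERT dbo.MemTable (Id, Name, NewCol)
SELECT Id, Name, NULL FROM dbo.MemTable_Staging;       -- reload

DROP TABLE dbo.MemTable_Staging;
```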

  • Sean Redmond (5/8/2015)


    ...Microsoft, please release something when it is *ready*, and not half-baked as this is.

DevOps has a term, "Minimum Viable Product", which basically means ship it and get feedback from your users once you have created the minimum that can be useful. The idea is that the features deemed important by the stakeholders producing the software probably do not match those required by the users, i.e. early feedback is essential in order to avoid wasted effort.

    EDIT: Great writing...unless you had to read it 😉

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Sean Redmond (5/8/2015)


    At this point in time, I feel that all of Microsoft's marketing about In-Memory-OLTP is disingenuous.

    What I would like to see in SQL Server 2016 In-Memory OLTP is:

    ...

    Microsoft, please release something when it is *ready*, and not half-baked as this is.

On your first point, absolutely. The marketing makes it seem as though In-Memory is a drop-in replacement for existing tables. It's not, and it shouldn't be presented as such.

In terms of half-baked, I'm not sure that's fair. It's a feature that works very well in certain situations. Having every table moved to in-memory, which is what many people think they want, doesn't really improve your system. Most people can't put everything in memory since they don't have the memory. If you had huge amounts of memory now, you would likely experience fewer issues.

The feature, like the first version of columnstore, is a minimum viable product, built to handle limited situations. If we had to wait for every feature to shed all its restrictions, we'd get very little new stuff coming.

Personally, I like having new features released every couple of years, with the choice for me to use them in the situations where it makes sense (code-change ability and cost absorption).

  • Hi Steve,

How often do we get a version that can promise a 10x improvement to a system? I see In-Memory-OLTP as the biggest change, or at least potential change, since I started working with SQL Server in 1999. The thing is, though, that the version they released is not complete. 'Half-baked' is the wrong phrase, 'not yet finished' is better. This isn't v1.0 that came with SQL Server 2014, it's more like v0.6.

    If any RDBMS company released a product that didn't support foreign keys, they would be slated and rightly so. In order to achieve the performance improvement that they boast about, one needs to use natively-compiled procedures, and these don't support outer joins, sub-queries and so on. It is not complete — but it does hold massive potential.

The performance improvement does allow one to make an impressive case for new hardware with the necessary RAM, but, at the moment, the amount of extra code that has to be put in or totally re-written in order to compensate for its failings (manual referential integrity, anyone?) negates the case completely.
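To make the "manual referential integrity" point concrete, here is a sketch of a hand-rolled foreign-key check inside a natively compiled procedure, since FK constraints aren't supported on memory-optimized tables in SQL Server 2014; all object names here are hypothetical, not from the post.

```sql
-- Sketch (SQL Server 2014 syntax): a hand-rolled "foreign key" check in a
-- natively compiled procedure. dbo.Orders and dbo.OrderLines are assumed
-- memory-optimized tables; names are illustrative.
CREATE PROCEDURE dbo.InsertOrderLine
    @OrderId INT, @ProductId INT, @Qty INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    DECLARE @parentExists BIT = 0;

    -- EXISTS/NOT EXISTS subqueries aren't allowed in 2014 native procs
    -- either, so probe the parent table with a plain variable assignment.
    SELECT @parentExists = 1
    FROM   dbo.Orders
    WHERE  OrderId = @OrderId;

    IF @parentExists = 0
        THROW 50001, N'Parent order does not exist.', 1;

    INSERT dbo.OrderLines (OrderId, ProductId, Qty)
    VALUES (@OrderId, @ProductId, @Qty);
END;
```

Every such check is code the engine would normally run for free, which is the extra effort being objected to.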

    I would love to build a demo-system for the board of directors and show them the difference in performance, but the amount of what I feel is unnecessary effort required is too great at the moment.

    All the best on this mild, sunny day in Switzerland,

    Sean.

  • Keep in mind that you're not going to be moving the entire database to in-memory. That's not the goal. Move what makes sense, natively compile what needs to be that way, not everything.

A DB using Hekaton is likely to have a small handful of tables in-memory and the rest either traditional row-based disk tables or columnstore, as applicable.
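For a "small handful of tables" deployment, the DDL is the only part that changes; here is a minimal sketch, with an illustrative table name and bucket count.

```sql
-- Sketch (SQL Server 2014 syntax): only the hot table is memory-optimized;
-- everything else stays disk-based. Names and the bucket count are
-- illustrative, and the database needs a MEMORY_OPTIMIZED_DATA
-- filegroup beforehand.
CREATE TABLE dbo.SessionState
(
    SessionId   BIGINT          NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload     VARBINARY(8000) NOT NULL,
    LastTouched DATETIME2       NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```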

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
• Hi GilaMonster,

It's not the space that is the problem, it's the inherent limitations of the In-Memory system. I have been given a 64GB system that I may use for testing. In-Memory tables are supposed to be at their best under high-volume write loads. We have a few tables that are heavily read from and very often written to. I loaded these (and their attendant tables, which are much smaller) into the In-Memory DB. I ran a trace on the production system to get the SPs that use these tables and I am now making In-Memory equivalents of those SPs.

The problems come when I try to start testing and comparing the old system with the new. Aside from the foreign-key and constraint problems, writing natively-compiled procedures as equivalents to the existing SPs is not at all straightforward. Because so many things in standard SQL are not (yet?) supported, I have to take circuitous routes to get the same results. I have to factor in the fact that even when I get a functional equivalent of the old SP, I still have to run the checks that would have been carried out (or prevented) by the constraints.

It is a very worthwhile learning curve. I do believe that this is the future of OLTP databases, and it may well be that SQL Server 2024 drops traditional disk-based systems altogether.

    Sean.

  • Sean Redmond (5/11/2015)


    Hi Steve,

How often do we get a version that can promise a 10x improvement to a system? I see In-Memory-OLTP as the biggest change, or at least potential change, since I started working with SQL Server in 1999. The thing is, though, that the version they released is not complete. 'Half-baked' is the wrong phrase, 'not yet finished' is better. This isn't v1.0 that came with SQL Server 2014, it's more like v0.6.

Is it? If it doesn't work for you, then it's not complete. However, there are customers for whom this works well. It fits their needs. Should the feature have been held back from 2014 until 2016, even though it already works for some customers?

Or should you just decide it doesn't work for you and not use it in 2014?

Personally, I think it's a feature that's mis-marketed. It works well for certain domains, and those should be disclosed. However, presenting it as a "general use" feature that works as a table replacement is disingenuous at the least.

  • Hi Steve,

I believe that they should have released it initially as an option, the way that Reporting Services v1.0 for SQL Server 2000 was — with its own installer. This way one is under no delusion that it is complete. Microsoft should have emphasised what was not yet complete and provided a roadmap. It is for testing, to give a flavour of what can be done, of how much potential it has. It may be used for production purposes, because it will be supported and developed further in the future, but it is, at the moment, by no means complete.

I remember, 25 years ago, the RAM-disk functionality on the Macintosh. If the Mac had enough RAM, some of it could be allocated as a volume. One could boot from it and use it as a regular volume. RAM-disks were fast — faster than hard-drives at the time — and a joy to work with. With this in mind, I was more than happy that Microsoft had started to harness the speed of RAM.

At all of the presentations that I have been to, it has been presented to the world as a finished product. There is also an element of foolishness on my part. I discarded my usual cynicism for new products with abandon because I wanted to believe that Microsoft had produced what Fred Brooks called a 'silver bullet'. I let my expectations for the In-Memory tables run high, only to have them deflated by the various gotchas.

    All the best,

    Sean.

  • Sean Redmond (5/12/2015)


At all of the presentations that I have been to, it has been presented to the world as a finished product. There is also an element of foolishness on my part. I discarded my usual cynicism for new products with abandon because I wanted to believe that Microsoft had produced what Fred Brooks called a 'silver bullet'. I let my expectations for the In-Memory tables run high, only to have them deflated by the various gotchas.

    All the best,

    Sean.

I disagree with releasing it separately, but I completely agree about the presentation. Far, far too much "information" given out by marketers and product managers is disingenuous or an outright lie. The feature isn't close to finished, but neither are a few others. Certainly, I think they are no better or worse than most other software vendors, who "stretch" the truth that something with a niche or narrow use case is a primary feature anyone can use.
