Getting Close to the 2017 RTM

  • Comments posted to this topic are about the item Getting Close to the 2017 RTM

  • We upgraded to SQL Server 2016 in the springtime from SQL Server 2008 and the first difference that jumped out at me was the time taken for backups. What had taken on average 400s fell to about 220s. I was very impressed. There may be other differences but I haven't done a full performance review yet, so I can't comment.
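    For anyone wanting to make the same comparison, something like this against msdb's backup history will show full-backup durations over time, assuming the history hasn't been purged (the database name is a placeholder):

        -- Compare full-backup durations and sizes, using msdb's backup history.
        -- N'YourDatabase' is a placeholder; substitute your own database name.
        SELECT  bs.database_name,
                bs.backup_start_date,
                DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_s,
                bs.backup_size / 1048576.0            AS backup_mb,
                bs.compressed_backup_size / 1048576.0 AS compressed_mb
        FROM    msdb.dbo.backupset AS bs
        WHERE   bs.database_name = N'YourDatabase'
                AND bs.type = 'D'   -- 'D' = full database backup
        ORDER BY bs.backup_start_date DESC;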

  • If MS are going to release at this cadence then they need to start selling licensing such that it covers 2 versions, i.e. if you buy 2017 it should cover 2017 and 2018.
    It can't be in MS' interests to have so many versions of SQL Server out there.

    The project I work on currently runs on SQL 2008 R2 and SQL 2014, but to save ourselves effort and grief we only use the 2008 R2 feature set, which is pretty limiting to be honest, as there are several T-SQL commands I'd use if we could. Honestly, I'd love to be on 2017 as soon as it's available.

    Unfortunately for us we are using SSRS (I fricking hate it so much), and MS broke the interface for custom security in SQL 2016, so at present, although I'd upgrade in a heartbeat, we simply can't, otherwise it would break our reporting. Given that the project is a website used by multiple customers who each have their own install hosted in the cloud, we have to write our own security DLL, because SSRS only gives you two options: Active Directory or write your own, and they broke the latter. 🙁 Maybe they will change it back in SQL 2017.

  • Microsoft's development and release cadence is really impressive, but one of the biggest concerns I see is how developers and admins stay up to date with the latest features, improvements, and differences between versions.

    For example, CI/CD and DevOps have been around for many years, but SQL Server 2017's use of containers, and Microsoft's approach of using containers for the CI/CD and release process of SQL Server itself, is a huge change that will require large changes to established CI/CD pipelines and release processes. The reason I highlight this change is that without it I can't see how an organisation can keep up (see the sketch at the end of this post).

    How is everyone keeping up with these changes? It occurs to me that developers and admins (maybe admins more so?) will need to specialise in particular areas of SQL Server, because as a generalist you will never know the technology in enough depth to be considered an expert.

    Would be really interested in your thoughts.
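    To make the container point concrete, here's roughly what spinning up a disposable SQL Server 2017 instance for a CI pipeline looks like, assuming Docker is installed; the container name and password are placeholders, and the image is the Linux one Microsoft publishes:

        # Run a throwaway SQL Server 2017 (Linux) container for CI testing.
        # ACCEPT_EULA and SA_PASSWORD are the environment variables the image
        # requires; the password below is a placeholder.
        docker run -d --name sql2017-ci \
            -e 'ACCEPT_EULA=Y' \
            -e 'SA_PASSWORD=YourStrong!Passw0rd' \
            -p 1433:1433 \
            microsoft/mssql-server-linux

        # Tear it down once the test run finishes.
        docker rm -f sql2017-ci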

  • We upgrade whenever the support expires, namely every 7-8 years. There are rarely new features good enough that we can make a business case to initiate an upgrade. Upgrading the DB server is a major undertaking and everything must be tested.
    I keep an eye on new features and test them whenever I can. I'm a big fan of the promise of In-Memory OLTP. I did a project on it in 2015 with SQL Server 2014 and found it wanting. I'm planning a new project with SQL Server 2016 early next year. I'm currently involved in a project introducing OLAP cubes to the company.

  • I am a SQL Dev, and I really feel for our DBAs. As far as I can tell, we have 86 unique DBs (excluding the data warehouses), installed in various combinations across 470 SQL instances around the world. Add to that linked servers, replication, and home-grown data syncs.

    With every new version of SQL, our DBAs have to test that an upgrade won't break any of the above. And since the upgrades are not instant, they also have to test combinations of SQL versions. We currently have a mix of 2008 R2, 2012, and 2014, and are still busy rolling out SQL 2014 to the approved instances.

  • I'm using 2014, no business or technical need for anything newer.  Even if we upgraded quickly that would mean another version among the several different versions to test and support.

    Also still using Visual Studio 2015; newer versions still have some issues to work out. I'm enjoying the rapid changes to VS Code, but it's not yet an enterprise tool.

  • I have been testing In-Memory OLTP. With the release of 2017, most of the restrictions I have encountered have been addressed (support for CASE and unlimited indexes on memory-optimized tables, for example).
    One of the features I really like is the ability to have schema-only memory-optimized tables (sketch below).
    There are still some features I would like to see, subquery support for example.
    The combination of memory-optimized tables and natively compiled procedures and functions has shown incredible performance results.
    In my opinion, disk-based SQL Server will be replaced with In-Memory OLTP. With the changes in hardware (memory), it just makes sense.
    If you think about it, you should have the entire OLTP database in memory even for disk-based SQL Server.
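    As a taste of the syntax, here's a minimal sketch of a schema-only memory-optimized table with a natively compiled procedure over it; the names are made up, and the database needs a MEMORY_OPTIMIZED_DATA filegroup before it will run:

        -- Schema-only table: the data is lost on restart, but the schema survives.
        -- Table, column, and procedure names are hypothetical.
        CREATE TABLE dbo.SessionCache
        (
            SessionId INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
            UserName  NVARCHAR(100) NOT NULL,
            LastSeen  DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
        GO

        -- Natively compiled procedures must be schema-bound and use an atomic block.
        CREATE PROCEDURE dbo.TouchSession @SessionId INT, @When DATETIME2
        WITH NATIVE_COMPILATION, SCHEMABINDING
        AS
        BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            UPDATE dbo.SessionCache
            SET LastSeen = @When
            WHERE SessionId = @SessionId;
        END;
        GO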

  • I think we're in the process of upgrading to SQL 2016. We have a few instances of SQL 2012, but mostly lots of SQL 2008 R2.

    Kindest Regards, Rod
    Connect with me on LinkedIn.

  • riversoft - Tuesday, July 18, 2017 8:05 AM

    I have been testing In-Memory OLTP. With the release of 2017, most of the restrictions I have encountered have been addressed (support for CASE and unlimited indexes on memory-optimized tables, for example).
    ...
    If you think about it, you should have the entire OLTP database in memory even for disk-based SQL Server.

    Despite Microsoft pushing Azure and da Cloud heavily, I expect in-memory DBs to be the future. Unless something drastic happens, or unless the Cloud takes off in a really big way, servers are just going to have more RAM and more processing power, both of which suit In-Memory OLTP.
    How do you simulate load, btw? I worked off modified traces from our production system last time, but it took quite a while to amend them.

  • Curious what the SSRS native portal will look like since SharePoint integrated mode got dropped (version management, tag replacements?).

  • Sean Redmond - Tuesday, July 18, 2017 8:23 AM

    riversoft - Tuesday, July 18, 2017 8:05 AM

    I have been testing In-Memory OLTP. With the release of 2017, most of the restrictions I have encountered have been addressed (support for CASE and unlimited indexes on memory-optimized tables, for example).
    ...
    If you think about it, you should have the entire OLTP database in memory even for disk-based SQL Server.

    Despite Microsoft pushing Azure and da Cloud heavily, I expect in-memory DBs to be the future. Unless something drastic happens, or unless the Cloud takes off in a really big way, servers are just going to have more RAM and more processing power, both of which suit In-Memory OLTP.
    How do you simulate load, btw? I worked off modified traces from our production system last time, but it took quite a while to amend them.

    I just wrote a test driver in C# using multiple threads to really crank up the load and test for conflicts. We use stored procedures for processing, so all I had to do was call the appropriate procedure. I will say converting the tables is easy. Converting the stored procedures/functions is a bit more tedious if you have subqueries: you have to select the subquery into a temp table and then feed it to the update, for example.
    One note is that since there is no locking on memory-optimized tables, you have to handle conflicts differently; it uses row versioning. The conflicts seem to be really isolated (basically on a row), so only if you write to the same data before the transaction ends do you have a problem. I chose to implement try/catch logic to handle conflicts (sketch below). So far it's working OK. Disclaimer: I haven't put any of this in production, just sharing what I have found so far.
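    Here's a rough sketch of that try/catch retry pattern, assuming error 41302 (and friends) as the write-conflict errors for memory-optimized tables; dbo.TouchSession is a hypothetical natively compiled procedure:

        -- Retry a natively compiled proc when an in-memory write conflict occurs.
        -- 41302/41305/41325 are the retryable In-Memory OLTP conflict errors.
        DECLARE @retries INT = 3;

        WHILE @retries > 0
        BEGIN
            BEGIN TRY
                EXEC dbo.TouchSession @SessionId = 42, @When = '2017-07-18T08:05:00';
                SET @retries = 0;        -- success: leave the loop
            END TRY
            BEGIN CATCH
                IF ERROR_NUMBER() IN (41302, 41305, 41325) AND @retries > 1
                    SET @retries -= 1;   -- conflict: try again
                ELSE
                    THROW;               -- out of retries, or some other error
            END CATCH;
        END;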

  • Sean Redmond - Monday, July 17, 2017 11:45 PM

    We upgraded to SQL Server 2016 in the springtime from SQL Server 2008 and the first difference that jumped out at me was the time taken for backups. What had taken on average 400s fell to about 220s. I was very impressed. There may be other differences but I haven't done a full performance review yet, so I can't comment.

    Nice to hear. I think some of this is incremental improvement from 2012-2016, but glad you've seen improvements.

  • peter.row - Tuesday, July 18, 2017 1:36 AM

    If MS are going to release at this cadence then they need to start selling licensing such that it covers 2 versions, i.e. if you buy 2017 it should cover 2017 and 2018.
    It can't be in MS' interests to have so many versions of SQL Server out there.
    ...

    Why? Do you think you get less value from SS2017 if SS2018 (or, more likely, an early SS2019) is released 18 months later? The support is the same: you'll get 5 years of mainstream SS2017 support, plus 5 more of security updates, and potentially 6 more if you pay.

    Is it in MS' interest? I think it is. They only get x% upgrading to each version. If they release every 12-18 months, I think they get more upgrades along the way, especially as new features come along.

    I do think that there is a slight support increase, but the way they've moved to a streamlined engineering process and feature flags means, I think, that providing support for SS2016+ is easier than for previous versions.

  • We're currently waiting for SQL 2017 as we're thinking of getting a separate BI server. It won't have as much punch as the main box, but it will have an Enterprise licence on it instead (so we have access to the nice SSAS functions, to start with). It'll still have enough resources to rebuild the warehouse(s) overnight. Even if that takes 6-8 hours every night, when it would maybe take half that on the main box, that's permissible; no one is really in the office before 08:00, so if the task kicks off at 22:00 it'll be happily done before anyone gets in.

    If I recall correctly, compatibility for Power BI and SSRS is meant to be coming with SQL Server 2017 (or shortly afterwards), which is something we're quite interested in and hopeful for as well.

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk
