SQL Server is Lagging a Bit

  • Comments posted to this topic are about the item SQL Server is Lagging a Bit

  • While it's good that there are standards around SQL (otherwise we'd be living in a complete hell of vendor-specific nonsense, even more than we already are), there are two problems. Firstly, the progress of the SQL standards is glacial at best, which isn't always useful in a fast-moving field like IT; careful thought and consideration in standards work is a good thing, but glacial progress isn't. Secondly, vendor uptake of the standards can often be at least as glacial, because vendors prioritise their own non-standard features rather than fully implementing the SQL standard. Yes, some features are optional, but by now vendor implementations really should support everything in SQL-92; that standard is 25 years old, FFS. One omission that really annoys me is the lack of DOMAIN support in MS-SQL.
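
    For anyone who hasn't met it, a minimal sketch of what SQL-92 DOMAIN support looks like, in PostgreSQL syntax since SQL Server has no equivalent (the names are invented purely for illustration):

    ```sql
    -- A DOMAIN is a reusable, constrained type: define the rule once...
    CREATE DOMAIN postal_code AS varchar(10)
        CHECK (VALUE ~ '^[0-9A-Za-z -]+$');

    -- ...and every column declared with it inherits the constraint.
    CREATE TABLE customer (
        customer_id integer PRIMARY KEY,
        postcode    postal_code NOT NULL
    );
    ```

    In SQL Server the closest you get is an alias type via CREATE TYPE, with the CHECK constraint repeated on every column that uses it.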

    On the other hand, I generally prefer a stable, known platform to one that changes rapidly and is therefore potentially more unstable - data-integrity-wise, not feature-wise. I can usually work around implementation annoyances; working around data integrity failures is not so trivial.

    As for developers being able to do stupid or, at best, inadvisable things like stuffing data structures into blob fields... I see too much of this, unfortunately. Sometimes it's done for performance reasons, since it's often quicker to perform a single BLOB read from the database and process the data in memory. But I've seen it far too often where developers simply don't understand good, often basic, database design and relational databases, and perceive the database as little more than a key-value or awkward file store; the more developers are abstracted away from database design by high-level toolkits, the more this seems to happen. There will always be scope for doing inadvisable things in a flexible environment, and I'd rather have a flexible environment than not.

  • n.ryan - Wednesday, June 28, 2017 2:49 AM

    On the other hand, I generally prefer a stable, known platform to one that changes rapidly and is therefore potentially more unstable - data-integrity-wise, not feature-wise. I can usually work around implementation annoyances; working around data integrity failures is not so trivial.

    I think your statement above is likely the main reason the pace of change *is* as slow as it is.  If it moved faster, I think you'd have a lot more problems from changes to the standards breaking previous functionality or causing odd behavior from unforeseen interactions.

    Or worse, no backwards compatibility, so if you moved from, say, a vendor's standard-A-based product to the same vendor's standard-B-based product, well, you'd need to gin up something to move the data, because product B can't restore backups made with product A, or the data types have changed internally, or other such nonsense.

    You *want* your back-end (and really, that's what the database is: the back-end, or the foundation if you prefer, for your applications) to be as stable and solid as possible.  You may remodel the house on a regular basis when new construction methods come along (C++ to C# to Ruby to Python, or whatever), but you generally don't mess with the foundation until it starts getting to its end-of-life (why else do people still have SQL 2000 / 2005 / 2008 back-ends for mission-critical applications?)

  • I am glad that date/time work gets some attention in the standards, but since I can never remember the actual syntax and values, I don't really care. I'll depend on SQL Prompt to get me through changes here, and then try to get all data in yyy-mm-dd format so I am not confused. LISTAGG is interesting, but I expect JSON to be used more. There are other changes, and I'm not sure how valuable they are for most data professionals.

    yyy? How did you deal with the Y1K issue during the Middle Ages? 😛
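
    For reference, a minimal sketch of the LISTAGG mentioned above, alongside the STRING_AGG that the SQL Server 2017 CTPs add (table and column names invented for illustration):

    ```sql
    -- Standard SQL:2016 LISTAGG (already in Oracle and DB2, not in SQL Server):
    SELECT   DeptId,
             LISTAGG(EmployeeName, ', ') WITHIN GROUP (ORDER BY EmployeeName) AS Employees
    FROM     Employee
    GROUP BY DeptId;

    -- The SQL Server 2017 CTPs offer the equivalent STRING_AGG:
    SELECT   DeptId,
             STRING_AGG(EmployeeName, ', ') WITHIN GROUP (ORDER BY EmployeeName) AS Employees
    FROM     Employee
    GROUP BY DeptId;
    ```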

  • Windowing functions like RANK(), ROW_NUMBER(), and LAG/LEAD were a giant leap forward. Other, more niche functions related to temporal data types and XML/JSON/regex parsing would be of great use to some. None of these features really lets SQL programmers do things we couldn't do before, but they do standardize the approach, removing the need to resort to cursoring or CLR-based implementations.
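
    A minimal sketch of the sort of thing meant here, assuming a hypothetical dbo.Orders table (before LAG, the running comparison below typically meant a self-join or a cursor):

    ```sql
    SELECT  OrderId,
            OrderDate,
            Amount,
            -- difference from the previous order, defaulting to 0 for the first row
            Amount - LAG(Amount, 1, 0) OVER (ORDER BY OrderDate) AS ChangeFromPrevious,
            -- ranking without any self-join or correlated subquery
            RANK() OVER (ORDER BY Amount DESC) AS AmountRank
    FROM    dbo.Orders;
    ```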

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • This is interesting, and I'm most interested in the inclusion and expansion of JSON in SQL. But I wonder how often it is used? Personally, I've never used the XML data type in SQL Server, even though it's there. At my previous job we just didn't need it. At my current job they have an odd habit of storing XML data in columns that, for some reason, are declared as VARCHAR(MAX) rather than the native XML type. We're creating new databases with columns that are supposed to hold XML data but are instead declared as strings. I don't get that, but that's what we're doing.

    Just curious to know, do other people here declare columns as VARCHAR and then stuff in XML or JSON?

    Kindest Regards, Rod. Connect with me on LinkedIn.

  • Rod at work - Wednesday, June 28, 2017 8:00 AM

    This is interesting, and I'm most interested in the inclusion and expansion of JSON in SQL. But I wonder how often it is used? Personally, I've never used the XML data type in SQL Server, even though it's there. At my previous job we just didn't need it. At my current job they have an odd habit of storing XML data in columns that, for some reason, are declared as VARCHAR(MAX) rather than the native XML type. We're creating new databases with columns that are supposed to hold XML data but are instead declared as strings. I don't get that, but that's what we're doing.

    Just curious to know, do other people here declare columns as VARCHAR and then stuff in XML or JSON?

    One consideration is that the XML document is reformatted when it is inserted into an XML typed column. 
    - The order of attributes in an XML instance is not preserved
    - Some occurrences of white space are not preserved
    - Quotation marks around attribute values are not preserved
    etc.
    https://technet.microsoft.com/en-us/library/ms187107(v=sql.90).aspx

    If the XML data is to be consumed entirely within SQL Server, then it matters not. However, if the database is intended to be a document repository for an application, then maybe that matters in some cases.
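
    A quick sketch that shows the effect (just an illustration, with a made-up document):

    ```sql
    -- Round-tripping a string through the xml type shows the normalisation
    -- described above: single quotes become double quotes, insignificant
    -- whitespace is dropped and the empty element becomes a self-closing tag.
    DECLARE @doc varchar(200) = '<item  id=''1''   name=''widget'' ></item>';
    SELECT CAST(CAST(@doc AS xml) AS varchar(200)) AS RoundTripped;
    -- Typically returns: <item id="1" name="widget"/>
    ```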

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Rod at work - Wednesday, June 28, 2017 8:00 AM

    Just curious to know, do other people here declare columns as VARCHAR and then stuff in XML or JSON?

    No, if I'm storing XML, I definitely use the xml data type. From my perspective, storing XML in a varchar is like storing a numeric or date in a varchar: it just slows processes down when you want to do "simple" tasks. You can't SUM a varchar; SQL Server has to implicitly cast it, which means an overhead. XML is the same, and if you want to use XQuery, you'll need to cast it first (which is an unneeded overhead).
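
    A minimal sketch of that overhead, with invented table and column names: an xml column can be queried directly, while a varchar column has to be cast to xml first.

    ```sql
    SELECT  d.DocId,
            -- native xml column: query it directly
            d.PayloadXml.value('(/order/@total)[1]', 'decimal(10,2)') AS FromXmlColumn,
            -- varchar column: pay for an explicit cast on every query
            x.AsXml.value('(/order/@total)[1]', 'decimal(10,2)')      AS FromVarcharColumn
    FROM    dbo.Documents AS d
    CROSS APPLY (SELECT CAST(d.PayloadText AS xml) AS AsXml) AS x;
    ```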

    I've not used JSON in a SQL environment. I've dabbled with it briefly when experimenting with Python on Linux, but that's it. I can, however, see the need (or want) for it. JSON is now at least readable and producible in SQL 2016, but it isn't a data type. Unfortunately, I don't know the true strengths and weaknesses of JSON well enough to have an opinion on whether a data type would bring much.

    The date CAST(... AS <Type>, <FORMAT>) function feels very much like using FORMAT(CAST(value AS date), 'dd-MM-yyyy'), which you can already do in SQL. I'll admit, however, that FORMAT is not quick and could probably do with some work if SQL Server introduced something along the lines of CAST(value AS date, 'dd-MM-yyyy'). Otherwise, you're still better off sticking with CONVERT.
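
    For comparison, a small sketch of the two existing options (FORMAT needs SQL Server 2012 or later):

    ```sql
    DECLARE @d date = '2017-06-28';
    SELECT  FORMAT(@d, 'dd-MM-yyyy')      AS ViaFormat,   -- flexible, but comparatively slow
            CONVERT(varchar(10), @d, 105) AS ViaConvert;  -- style 105 = dd-mm-yyyy, usually faster
    ```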

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • chrisn-585491 - Wednesday, June 28, 2017 6:07 AM

    yyy? How did you deal with the Y1K issue during the Middle Ages? 😛

    Sticks and stones

  • Eric M Russell - Wednesday, June 28, 2017 7:45 AM

    Windowing functions like RANK(), ROW_NUMBER(), and LAG/LEAD were a giant leap forward. Other, more niche functions related to temporal data types and XML/JSON/regex parsing would be of great use to some. None of these features really lets SQL programmers do things we couldn't do before, but they do standardize the approach, removing the need to resort to cursoring or CLR-based implementations.

    They also increase performance and, I'd argue, they do increase quality. Some of the hand-rolled lead/lag stuff was poorly written and contained logical bugs, but since we don't formally test, or have good test cases, some of those were missed.

  • Rod at work - Wednesday, June 28, 2017 8:00 AM

    Just curious to know, do other people here declare columns as VARCHAR and then stuff in XML or JSON?

    Me? Not for XML; I'd use the XML type, because I could index it. However, the only times I leave XML intact are when:
    a) I need to reproduce it later, as-is, for an application. Not worth shredding and trying to reassemble it.
    b) It contains elements/attributes that I'd use relatively rarely. In that case, I'll get the data when needed and not bother taking the time to shred it.

    For JSON, the same ideas apply, though you have to use varchar since there's no native JSON type. Older apps might also not have been refactored to use the XML type and may still use varchar.

    There are always cases where it makes sense to leave a data structure intact.
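
    As a small illustration of that last point for JSON (names invented), in SQL Server 2016 the structure can stay intact in an nvarchar(max) column and only be shredded when a rarely-used value is actually needed:

    ```sql
    SELECT  o.OrderId,
            JSON_VALUE(o.ExtraAttributes, '$.giftMessage') AS GiftMessage
    FROM    dbo.Orders AS o
    WHERE   ISJSON(o.ExtraAttributes) = 1;   -- guard against non-JSON rows
    ```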

  • Thom A - Wednesday, June 28, 2017 8:27 AM

    Rod at work - Wednesday, June 28, 2017 8:00 AM

    Just curious to know, do other people here declare columns as VARCHAR and then stuff in XML or JSON?

    No, if I'm storing XML, I definitely use the xml data type. From my perspective, storing XML in a varchar is like storing a numeric or date in a varchar: it just slows processes down when you want to do "simple" tasks. You can't SUM a varchar; SQL Server has to implicitly cast it, which means an overhead. XML is the same, and if you want to use XQuery, you'll need to cast it first (which is an unneeded overhead).

    I've not used JSON in a SQL environment. I've dabbled with it briefly when experimenting with Python on Linux, but that's it. I can, however, see the need (or want) for it. JSON is now at least readable and producible in SQL 2016, but it isn't a data type. Unfortunately, I don't know the true strengths and weaknesses of JSON well enough to have an opinion on whether a data type would bring much.

    The date CAST(... AS <Type>, <FORMAT>) function feels very much like using FORMAT(CAST(value AS date), 'dd-MM-yyyy'), which you can already do in SQL. I'll admit, however, that FORMAT is not quick and could probably do with some work if SQL Server introduced something along the lines of CAST(value AS date, 'dd-MM-yyyy'). Otherwise, you're still better off sticking with CONVERT.

    Thank you, Thom. I agree with you. Perhaps my colleagues just aren't aware of the XML data type in SQL Server. Oh well, it's my problem. Thanks again.

    Kindest Regards, Rod. Connect with me on LinkedIn.

  • Steve Jones - SSC Editor - Wednesday, June 28, 2017 8:42 AM

    Eric M Russell - Wednesday, June 28, 2017 7:45 AM

    Windowing functions like RANK(), ROW_NUMBER(), and LAG/LEAD were a giant leap forward. Other, more niche functions related to temporal data types and XML/JSON/regex parsing would be of great use to some. None of these features really lets SQL programmers do things we couldn't do before, but they do standardize the approach, removing the need to resort to cursoring or CLR-based implementations.

    They also increase performance and, I'd argue, they do increase quality. Some of the hand-rolled lead/lag stuff was poorly written and contained logical bugs, but since we don't formally test, or have good test cases, some of those were missed.

    They absolutely increase performance and quality.  I think they're a huge leap and am glad to have them.  Then again, I'm still supporting SQL 2008 instances and I must admit I feel like my hands are tied sometimes.  There is hope...it just takes a while to materialize because of budget.

  • Steve Jones - SSC Editor - Wednesday, June 28, 2017 8:41 AM

    chrisn-585491 - Wednesday, June 28, 2017 6:07 AM

    yyy? How did you deal with the Y1K issue during the Middle Ages? 😛

    Sticks and stones

    Apparently Y1K was anti-climactic, because there were several different versions of the calendar in use, so not everyone was on the same page in terms of when exactly to panic.

    http://www.nytimes.com/1999/07/17/nyregion/beliefs-millennial-fears-year-1000-apocalypse-then-apocalypse-now-apocalypse.html

    March 25, Good Friday or Easter. Rural life was measured less by years than by the cycles of planting and harvesting, holy days and the feast days of saints.
    So were there religious terrors and overwrought expectations of the Final Judgment in the year 999? Absolutely. And also in the years 899, 1199, 1299 -- you name it. One might as well turn the question around and ask, ''How could it have been otherwise?''

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Hell, even today there is no broad consensus on when exactly we should start dancing in the streets to celebrate the arrival of the Age Of Aquarius. Some New Age gurus said 2012 and then (embarrassingly) 2016. 

    http://earthsky.org/human-world/when-will-the-age-of-aquarius-begin

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
