Microsoft SQL Server “Denali” (CTP1) – 11.0.1103.9 (X64)
I’ve been playing with the CTP for a few days (with absolutely no documentation) and there are a few fun and interesting things that I found. These may not be earth-shattering changes, but they are interesting, and I don’t doubt there’s much more fun to come.
It’s now much easier to add and remove startup parameters for the service. No more SQL failing to start because you forgot the ; between the location of the master log file and the trace flag that you added.
What’s SQL HADR all about? Wish I knew. It requires a clustered installation and I don’t have a cluster handy to play with (yet). So no investigation of the Denali high availability/disaster recovery features yet.
New SQL Server features
So what can we do with these sequence objects?
They can be used in general select, insert and update statements. That means that the problem of ‘reserving’ an ID value for a later insert by a front end app is now solved. The front end can select a number from the sequence and later either insert using it or discard it, sure that no one else is going to take that number.
They can be used as defaults for columns in a table and, unlike identity, a single sequence can be shared by multiple tables if there’s a requirement for a number that’s unique across a set of tables. It’s also not limited to one per table, if there’s some bizarre requirement for multiple sequential numbers in a table.
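As a sketch of what that might look like (the sequence and table names here are my own, not from the product; syntax as it appears in the CTP):

```sql
-- A sequence shared across tables (all names here are hypothetical)
CREATE SEQUENCE dbo.SharedID
    START WITH 1
    INCREMENT BY 1;

-- Reserve a number up front; no one else will be handed this value
DECLARE @ReservedID bigint = NEXT VALUE FOR dbo.SharedID;

-- Or use the same sequence as the default for columns in two tables,
-- giving numbers that are unique across both
CREATE TABLE dbo.Invoices (
    InvoiceID bigint NOT NULL DEFAULT (NEXT VALUE FOR dbo.SharedID),
    InvoiceDate datetime NOT NULL
);
CREATE TABLE dbo.CreditNotes (
    CreditNoteID bigint NOT NULL DEFAULT (NEXT VALUE FOR dbo.SharedID),
    CreditNoteDate datetime NOT NULL
);
```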
Enhanced Paging features
In SQL 2005 and 2008 if you want to page a resultset, you typically use ROW_NUMBER, a subquery and a BETWEEN. That’s how LINQ appears to do it. But no more…
And, yes, the offset and page size can be variables/parameters.
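A minimal sketch of the new OFFSET/FETCH syntax, paging on a hypothetical Orders table:

```sql
DECLARE @PageNumber int = 3,
        @PageSize   int = 20;

-- Skip the first two pages, return the third
SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderID
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;
```

No ROW_NUMBER, no subquery, no BETWEEN.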
New Dynamic Management Views
There are 20 new DMVs in Denali.
I’m going to ignore the HADR ones for now, as I don’t have a clustered instance to play with. I’m not sure what the log-related ones are, so will ignore until I have some documentation. They look interesting though…
3 out of the first 4 however…
This is a function that takes a single parameter – compatibility level. I’d say more, but even with objects that contained deprecated (and removed) syntax, this never showed any results. Either I’m using it wrong or it’s only partially implemented. Hard to say.
This one is nice. It takes 3 parameters, a piece of T-SQL, a list of parameters and a bit called ‘include browse information’. It then returns a resultset with one row for each column that piece of T-SQL would return if run. The resultset contains, among other things, the name, data type, collation, precision, scale, nullability and just about anything else you could ever possibly want to know about the columns in a resultset.
One interesting thing here is that if the batch has a syntax error in it, an error is not raised. Rather, the error details are included in the resultset returned.
I’m undecided as to whether this is a good thing or whether it just complicates the already complex error handling of SQL.
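A quick sketch of calling it (the query and parameter list here are my own example):

```sql
-- Describe the columns this parameterised query would return, without running it
SELECT name, system_type_name, is_nullable
FROM sys.dm_exec_describe_first_result_set(
    N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderID = @ID',
    N'@ID int',
    0);  -- the 'include browse information' flag
```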
Pretty much the same as the previous, except it takes just an object id and the ‘include browse information’ flag.
One concern is that neither this nor the previous DMF can cater for procedures or T-SQL batches that return different resultsets depending on the parameters passed.
For example, this procedure breaks both of these DMFs:

```sql
CREATE PROCEDURE NotQuite
    @Param1 int
AS
IF (@Param1 = 0)
    SELECT 'Hello' AS col1, 'Goodbye' AS col2
ELSE
    SELECT 1, 2, 3
GO
```
If the dm_exec_describe_first_result_set_for_object is queried for that procedure, you get back a resultset with one row that contains error information in it.
The metadata could not be determined because the statement 'SELECT ''Hello'' as col1, ''Goodbye'' as col2' in procedure 'NotQuite' is not compatible with the statement 'SELECT 1,2,3' in procedure 'NotQuite'.
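For reference, the query that produces that row looks something like this (column names as I observed them in the CTP):

```sql
-- Returns one row containing error details rather than column metadata
SELECT error_number, error_message
FROM sys.dm_exec_describe_first_result_set_for_object(
    OBJECT_ID('dbo.NotQuite'),
    0);  -- the 'include browse information' flag
```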
Pity, because this case is something that I would have used these DMFs for a lot.
New Profiler Events
Database Suspect Data Page (Under Errors and Warnings)
I’m going to assume this does exactly what it suggests: fires when a suspect page is encountered. Good to have in the default trace (if it’s in there); otherwise something for an enhanced ‘default’ trace. There’s no such thing as too much warning for corruption (error log, alerts, trace event).
Plan Guide Successful and Plan Guide Unsuccessful (Under Performance)
Because if you tell the optimiser to do things your way, it’s nice to know whether it’s obeying you or not.
Mirroring login and fulltext (Under Audit)
Not quite sure what these trace, but more auditability is always a good thing.
Preconnect Starting/Preconnect Completed (Under Sessions)
Traces the start and completion of login triggers and resource governor classifier functions. Very handy if there are slow/failing connections and these are suspected.
I think that’s enough for ‘first impressions’. Once I have the documentation there will very likely be more thoughts on Denali.