Technical Article

Understanding "Yukon" Schema Separation

Well, it has finally arrived, at least in beta. Microsoft's long-awaited new version of its SQL Server product holds promise to be a major and successful revision of this fine product. I have had the beta for a few months now, and one of the new security features that has intrigued me the most is the separation of users and schemas. I've worked with this form of separation before in Microsoft's chief competitor, but this article is not a comparison of the two products or the way they implement schema separation; it is an article on the basics of user/schema separation for those SQL Server DBAs who may not have worked with separated schemas before.
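To illustrate the basic idea, here is a minimal T-SQL sketch (the schema, login, and table names are hypothetical): the schema becomes a standalone container, the user is merely mapped to it as a default, and objects belong to the schema rather than to any user.

CREATE SCHEMA Sales AUTHORIZATION dbo;
GO
-- Hypothetical login JSmith; the user's default schema is Sales
CREATE USER JSmith FOR LOGIN JSmith
    WITH DEFAULT_SCHEMA = Sales;
GO
-- The table belongs to the Sales schema, not to a user,
-- so dropping the user later does not orphan the table
CREATE TABLE Sales.Orders (OrderID INT PRIMARY KEY);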

SQLServerCentral Article

Can You Compute?

Transact-SQL in SQL Server 2000 has some interesting features, many of which most DBAs will never use. While many DBAs are familiar with the basic aggregate functions, a few of the more advanced ones are not well understood. The ROLLUP and COMPUTE operators are two of these, and David Poole takes a look at how they work and a practical application for them.
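As a rough sketch of the difference (the dbo.Sales table and its columns are assumed here for illustration): ROLLUP adds subtotal and grand-total rows inside a single grouped result set, while COMPUTE returns the detail rows followed by separate summary result sets. Note that COMPUTE was later deprecated and removed in SQL Server 2012.

-- ROLLUP: per-Region subtotals plus a grand total in one result set
SELECT Region, Product, SUM(Amount) AS Total
FROM dbo.Sales
GROUP BY Region, Product WITH ROLLUP;

-- COMPUTE: detail rows, then a summary result set per Region
SELECT Region, Product, Amount
FROM dbo.Sales
ORDER BY Region
COMPUTE SUM(Amount) BY Region;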

Technical Article

Trace-scrubbing Tools

Andrew Zanevsky shares his trace-scrubbing procedures that make it easy for you to handle large trace files and aggregate transactions by type, even when captured T-SQL code has variations.

SQL Server Profiler is a veritable treasure trove when it comes to helping DBAs optimize their T-SQL code. But the surfeit of riches (I'm reminded of the Arabian Nights tale of Aladdin) can be overwhelming. I recently had one of those "sinking" feelings when I first tried to make sense of the enormous amount of data collected by traces on a client's servers. At this particular client, the online transaction processing system executes more than 4 million database transactions per hour. That means that even a 30-minute trace that captures "SQL Batch Completed" events results in a table with 2 million rows. Of course, it's simply impractical to process so many records without some automation, and even selecting the longest or most expensive transactions doesn't necessarily help in identifying bottlenecks. After all, short transactions can be the culprits behind poor performance when executed thousands of times per minute.
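As a hedged illustration of the general technique (Zanevsky's actual procedures are more thorough; the dbo.TraceData table, its TextData and Duration columns, and the ScrubSql function below are assumptions for this sketch): replace the literal values in each captured statement with a placeholder so that variations of the same statement collapse into one signature, then aggregate by that signature.

-- Replace every run of digits in a captured statement with '#'
CREATE FUNCTION dbo.ScrubSql (@sql NVARCHAR(4000))
RETURNS NVARCHAR(4000)
AS
BEGIN
    DECLARE @i INT;
    SET @i = PATINDEX('%[0-9]%', @sql);
    WHILE @i > 0
    BEGIN
        -- drop any digits that follow the first one in the run
        WHILE SUBSTRING(@sql, @i + 1, 1) LIKE '[0-9]'
            SET @sql = STUFF(@sql, @i + 1, 1, '');
        SET @sql = STUFF(@sql, @i, 1, '#');
        SET @i = PATINDEX('%[0-9]%', @sql);
    END;
    RETURN @sql;
END;
GO
-- Aggregate millions of trace rows by statement signature
SELECT dbo.ScrubSql(CAST(TextData AS NVARCHAR(4000))) AS Signature,
       COUNT(*) AS Executions,
       SUM(Duration) AS TotalDuration
FROM dbo.TraceData
GROUP BY dbo.ScrubSql(CAST(TextData AS NVARCHAR(4000)))
ORDER BY TotalDuration DESC;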

Blogs

JSON_OBJECTAGG is an Aggregate: #SQLNewBlogger

I wrote an article recently on the JSON_OBJECTAGG function, but neglected to include an...
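The point of the title, sketched briefly (the dbo.NFLTeams table from this issue's Question of the Day is borrowed here for illustration): because JSON_OBJECTAGG is an aggregate function, it honors GROUP BY just like SUM or MAX.

-- One JSON object per city, e.g. {"Cowboys":1960} for Dallas
SELECT City,
       JSON_OBJECTAGG(TeamName : YearEstablished) AS TeamsJson
FROM dbo.NFLTeams
GROUP BY City;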

Cultural Change: Fostering a Cost-Aware Culture in Your Organisation

After working deep in cloud operations, I’ve learned that FinOps isn’t really about dashboards...

Beyond VARBINARY: How to Store PDFs in SQL Server Using FILESTREAM and FileTable

Hello, dear blog reader. Today’s post is coming to you straight from the home...
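As a rough sketch of the FILESTREAM side of the topic (the database, filegroup, and table names are hypothetical, and FILESTREAM is assumed to already be enabled at the instance level via sp_configure): the PDF bytes live in a VARBINARY(MAX) FILESTREAM column, which SQL Server stores on the file system rather than in data pages.

CREATE DATABASE DocStore
ON PRIMARY
    (NAME = DocStore_data, FILENAME = 'C:\Data\DocStore.mdf'),
FILEGROUP DocStore_fs CONTAINS FILESTREAM
    (NAME = DocStore_blobs, FILENAME = 'C:\Data\DocStoreBlobs');
GO
USE DocStore;
GO
-- FILESTREAM tables require a unique ROWGUIDCOL column
CREATE TABLE dbo.PdfDocuments
(
    DocId    UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    FileName NVARCHAR(260) NOT NULL,
    Content  VARBINARY(MAX) FILESTREAM NULL  -- stored on the file system
);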

Read the latest Blogs

Forums

Creating a JSON Document I

By Steve Jones - SSC Editor

Comments posted to this topic are about the item Creating a JSON Document I

Who is Irresponsible?

By Steve Jones - SSC Editor

Comments posted to this topic are about the item Who is Irresponsible?

Designing Database Changes Before Deployment: Level 1 of the Stairway to Reliable Database Deployments

By Steve Jones - SSC Editor

Comments posted to this topic are about the item Designing Database Changes Before Deployment:...

Visit the forum

Question of the Day

Creating a JSON Document I

I want to create a JSON document that contains data from this table:

TeamID  TeamName  City          YearEstablished
1       Cowboys   Dallas        1960
2       Eagles    Philadelphia  1933

If I run this code, what is returned?
SELECT json_objectagg('Team' : TeamName)
FROM dbo.NFLTeams;

See possible answers