Technical Article

Understanding "Yukon" Schema Separation

Well, it has finally arrived, at least in beta form. Microsoft's long-awaited new version of its SQL Server product holds promise to be a major and successful revision of this fine product. I have had the beta for a few months now, and one of the new security items that has intrigued me most is the separation of users and schemas. I've worked with this form of separation before in Microsoft's chief competitor's product, but this article is not a comparison of the two products or of the way they implement schema separation; it is an article on the basics of user/schema separation for those SQL Server DBAs who may not have worked with separated schemas before.
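
The gist of the change is that a schema becomes a standalone namespace owned by a principal, rather than being welded to a user of the same name. As a hedged illustration only (the Sales schema, AppLogin login, and SalesUser user below are hypothetical names, not taken from the article), the Yukon-style DDL looks like this:

-- Create a schema owned by dbo; it exists independently of any user
CREATE SCHEMA Sales AUTHORIZATION dbo
GO
-- Create a login, then a user whose default schema is Sales; dropping
-- the user later no longer strands objects in a same-named namespace
CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ng!Passw0rd'
GO
CREATE USER SalesUser FOR LOGIN AppLogin WITH DEFAULT_SCHEMA = Sales
GO
-- Objects live in the schema, not under a particular user
CREATE TABLE Sales.Orders (OrderID int IDENTITY(1, 1) PRIMARY KEY)
GO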

SQLServerCentral Article

Can You Compute?

Transact-SQL in SQL Server 2000 has some interesting features, many of which most DBAs will never use. While many DBAs are familiar with the basic aggregate functions, there are a few advanced ones that are not well understood. The ROLLUP and COMPUTE operators are two of these, and David Poole takes a look at how they work and at a practical application for them.
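
For readers who haven't met them, here is a minimal sketch of both operators (the Sales table and its Region, Product, and Amount columns are assumptions for illustration, not taken from David's article):

-- ROLLUP adds super-aggregate rows: a subtotal per Region plus a grand total
SELECT Region, Product, SUM(Amount) AS TotalAmount
FROM Sales
GROUP BY Region, Product WITH ROLLUP

-- COMPUTE (SQL Server 2000 syntax, removed in later versions) returns
-- summary rows as extra result sets after the detail rows
SELECT Region, Product, Amount
FROM Sales
ORDER BY Region
COMPUTE SUM(Amount) BY Region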

Technical Article

Trace-scrubbing Tools

Andrew Zanevsky shares his trace-scrubbing procedures that make it easy for you to handle large trace files and aggregate transactions by type, even when captured T-SQL code has variations.

SQL Server Profiler is a veritable treasure trove when it comes to helping DBAs optimize their T-SQL code. But the surfeit of riches (I'm reminded of the Arabian Nights tale of Aladdin) can be overwhelming. I recently had one of those "sinking" feelings when I first tried to make sense of the enormous amount of data collected by traces on a client's servers. At this particular client, the online transaction processing system executes more than 4 million database transactions per hour. That means that even a 30-minute trace that captures "SQL Batch Completed" events results in a table with 2 million rows. Of course, it's simply impractical to process so many records without some automation, and even selecting the longest or most expensive transactions doesn't necessarily help in identifying bottlenecks. After all, short transactions can be the culprits of poor performance when executed thousands of times per minute.
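
To give a flavor of the approach, here is a hedged sketch, not Andrew's actual procedures: it loads a trace file with fn_trace_gettable and aggregates completed batches by a fixed-length prefix of their text (the file path and the 100-character prefix heuristic are assumptions):

-- Keep only SQL:BatchCompleted events (EventClass 12) and group
-- variations of the same batch by their leading 100 characters
SELECT CONVERT(nvarchar(100), TextData) AS BatchPrefix,
       COUNT(*) AS Executions,
       SUM(Duration) AS TotalDuration,
       SUM(Reads) AS TotalReads
FROM ::fn_trace_gettable('C:\Traces\MyTrace.trc', DEFAULT)
WHERE EventClass = 12
GROUP BY CONVERT(nvarchar(100), TextData)
ORDER BY TotalDuration DESC

Grouping on a prefix is a crude stand-in for real scrubbing, which would also strip literals and parameter values so that variations of the same statement collapse into a single bucket.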

Blogs

Learning from Mistakes: T-SQL Tuesday #194

We’re a week late, once again my fault. I was still coming out of...

Stupid Things I Did With AI: ASCII Art

I ran across this article recently (https://www.gatesnotes.com/meet-bill/source-code/reader/microsoft-original-source-code) and it has a great opening piece...

Simple Talks Podcasting in 2026

I’m in the UK today, having arrived this morning in London. Hopefully, by this...

Read the latest Blogs

Forums

Learning From Breakage

By Steve Jones - SSC Editor

Comments posted to this topic are about the item Learning From Breakage

Python in Action to Auto-Generate an Optimized PostgreSQL Index Strategy

By sabyda

Comments posted to this topic are about the item Python in Action to Auto-Generate...

Adding and Dropping Columns I

By Steve Jones - SSC Editor

Comments posted to this topic are about the item Adding and Dropping Columns I

Visit the forum

Question of the Day

Adding and Dropping Columns I

I have this table in my SQL Server 2022 database:

CREATE TABLE [dbo].[CityList]
(
    [CityNameID] [int] NOT NULL IDENTITY(1, 1),
    [CityName] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
I decide to add two new columns, StateProvince and Country. What code should I use?

See possible answers
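
If you want to check your reasoning first, one plausible answer, offered as a sketch rather than the official solution (the varchar(30) types and nullability are assumptions), is a single ALTER TABLE ... ADD:

-- Add both nullable columns in one statement
ALTER TABLE dbo.CityList
    ADD StateProvince varchar(30) NULL,
        Country varchar(30) NULL
GO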