Oracle treats most DDL as auto-committing, meaning once it executes, it's done. SQL Server, however, blurs that line in fascinating ways, allowing some DDL operations to be part of an explicit transaction.
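A minimal T-SQL sketch of that difference (the table name dbo.DemoDDL is hypothetical, purely for illustration):

    -- SQL Server: most DDL participates in an explicit transaction.
    BEGIN TRANSACTION;

    CREATE TABLE dbo.DemoDDL (Id int NOT NULL PRIMARY KEY);

    -- The table exists inside the transaction...
    SELECT OBJECT_ID('dbo.DemoDDL') AS ObjectIdInsideTran;

    ROLLBACK TRANSACTION;

    -- ...and is gone after the rollback. In Oracle, the CREATE TABLE
    -- would have committed implicitly and survived.
    SELECT OBJECT_ID('dbo.DemoDDL') AS ObjectIdAfterRollback;  -- NULL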
The data professional’s world is changing, and I know you hear this from me in editorials, blog posts, and social media, but it’s the truth. With the rise of Microsoft Fabric, we’re not just seeing another platform shift; we’re witnessing a redefinition of how data is valued, governed, and protected across the enterprise. Fabric isn’t […]
It was the week before Black Friday — the biggest online ad rush of the year. Our US-based ad-tech platform was gearing up for an insane traffic spike. Hundreds of real-time campaigns were about to go live across multiple brands, each with thousands of user sessions flowing through our system. Every incoming user impression […]
Take a basic look at database diagrams, what they are, and how to create one.
Unlocking Interoperability: A Guide to Foreign Data Wrappers in PostgreSQL and Aurora PostgreSQL (AWS RDS). As a database professional, I often encounter scenarios where data is fragmented across various systems. In today's distributed IT landscape, it's not uncommon for critical business information to reside in different databases, perhaps an on-premises PostgreSQL instance for legacy applications, […]
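For readers who want the shape of the setup before reading the full guide, here is a minimal postgres_fdw sketch; the host, credentials, and table definitions are hypothetical placeholders:

    -- Run on the local PostgreSQL / Aurora PostgreSQL database.
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    -- Point at the remote (e.g., on-premises) instance.
    CREATE SERVER legacy_pg
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'legacy-db.example.com', port '5432', dbname 'erp');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER legacy_pg
        OPTIONS (user 'report_reader', password 'secret');

    -- Expose one remote table locally; its remote definition is assumed.
    CREATE FOREIGN TABLE legacy_orders (
        order_id   bigint,
        customer   text,
        created_at timestamptz
    )
    SERVER legacy_pg
    OPTIONS (schema_name 'public', table_name 'orders');

    -- Query it like any local table.
    SELECT count(*) FROM legacy_orders
    WHERE created_at >= now() - interval '7 days';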
As with others, I've had to deal with death in the family recently. Some other family members are dealing with cancer (a few friends too). Happily, none of us has recently been in a disaster zone, but that's happened too. So yeah, big, nasty, scary stuff happens in life. However, for most of us, most of […]
I was just reading about how the Philippines is working to update its databases to support faster and better responses in an emergency. While I do volunteer for some of the local emergency services, I'm right at the bottom of the heap as just a radio operator. I don't have any […]
In this article, I wanted to test a common assumption we DBAs make: that adding INCLUDE columns to indexes is harmless. I created a test database in FULL recovery with a realistic, wide Orders table containing extra-large VARCHAR columns to simulate an ERP workload. I ran updates and measured transaction log backup sizes before and after adding INCLUDE columns to a nonclustered index. The results shocked me. The update without INCLUDE columns generated a 10 MB log backup, while the same update with INCLUDE columns produced over 170 MB, a 17x increase in log volume. I explain why this happens: INCLUDE columns are physically stored in index leaf rows, so updates affecting them write bigger log records. I also clarify that updating key columns generates even more log than INCLUDE updates because it involves row movement (a delete plus an insert), but INCLUDE updates still cost more log than if those columns weren't indexed at all. The takeaway is clear: INCLUDE columns are powerful, but they silently increase transaction log generation, inflating backup sizes, adding replication lag, and eroding DR readiness. Always measure their real cost before deploying to production.
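A simplified sketch of that kind of measurement, using the undocumented (but widely used) sys.fn_dblog in a SIMPLE-recovery scratch database rather than the author's FULL-recovery log backups; all names, sizes, and row counts are hypothetical:

    CREATE TABLE dbo.Orders (
        OrderID    int IDENTITY PRIMARY KEY,
        CustomerID int NOT NULL,
        Notes      varchar(4000) NULL   -- wide ERP-style column
    );

    INSERT INTO dbo.Orders (CustomerID, Notes)
    SELECT TOP (10000) ABS(CHECKSUM(NEWID())) % 1000, REPLICATE('x', 2000)
    FROM sys.all_objects a CROSS JOIN sys.all_objects b;

    -- Run once without INCLUDE, then rerun with it, and compare.
    CREATE INDEX IX_Orders_Customer ON dbo.Orders (CustomerID);
    -- CREATE INDEX IX_Orders_Customer ON dbo.Orders (CustomerID) INCLUDE (Notes);

    CHECKPOINT;  -- SIMPLE recovery: truncates the log so the next sum is mostly our work

    UPDATE dbo.Orders SET Notes = REPLICATE('y', 2000);

    SELECT SUM([Log Record Length]) AS LogBytesGenerated
    FROM sys.fn_dblog(NULL, NULL);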
By Steve Jones
I’ve often done some analysis of my year in different ways. Last year I...
By Steve Jones
This was Redgate in 2010, spread across the globe. First the EU/US. Here's Asia...
By John
Today is Christmas and while I do not expect anybody to actually be reading...
Comments posted to this topic are about the item Database security permissions save script
I have a SQL Agent job for backing up a set of Analysis Services...
Comments posted to this topic are about the item SQL Server 2025 Backup Compression...
I want to use the new BASE64_ENCODE() function in SQL Server 2025, but return a string that isn't a large type. What is the longest varbinary string I can pass in and still get a varchar(8000) returned?
See possible answers
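Without giving the answer away, here is a probe sketch one might run to see the pattern, assuming SQL Server 2025's BASE64_ENCODE() accepts a varbinary argument as the question implies; check the documentation for the exact return-type rules:

    -- Base64 emits 4 output characters per 3 input bytes.
    -- Probe a few input sizes and extrapolate toward the varchar(8000) limit.
    SELECT n AS InputBytes,
           DATALENGTH(BASE64_ENCODE(CONVERT(varbinary(8000), REPLICATE('A', n)))) AS EncodedChars
    FROM (VALUES (3), (30), (300), (3000)) AS probe(n);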