Oracle treats most DDL as auto-committing: the moment a DDL statement executes, the change is committed and cannot be rolled back. SQL Server, however, blurs that line in fascinating ways, allowing some DDL operations to be part of an explicit transaction.
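As a quick illustration (a minimal sketch, not taken from the article itself), SQL Server will happily roll back a CREATE TABLE inside an explicit transaction:

-- T-SQL: DDL participates in the explicit transaction
BEGIN TRANSACTION;
CREATE TABLE dbo.DemoDdl (Id int NOT NULL);
-- The table exists at this point, but only within the transaction...
ROLLBACK TRANSACTION;
-- ...and is gone after the rollback. In Oracle, the CREATE TABLE
-- would have been auto-committed before the rollback ever ran.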
The data professional’s world is changing, and I know you hear this from me in editorials, blog posts, and social media, but it’s the truth. With the rise of Microsoft Fabric, we’re not just seeing another platform shift; we’re witnessing a redefinition of how data is valued, governed, and protected across the enterprise. Fabric isn’t […]
Introduction: It was the week before Black Friday, the biggest online ad rush of the year. Our US-based ad-tech platform was gearing up for an insane traffic spike. Hundreds of real-time campaigns were about to go live across multiple brands, each with thousands of user sessions flowing through our system. Every incoming user impression […]
Take a basic look at database diagrams, what they are, and how to create one.
Unlocking Interoperability: A Guide to Foreign Data Wrappers in PostgreSQL and Aurora PostgreSQL (AWS RDS). As a database professional, I often encounter scenarios where data is fragmented across various systems. In today's distributed IT landscape, it's not uncommon for critical business information to reside in different databases, perhaps an on-premises PostgreSQL instance for legacy applications, […]
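For context, here is a minimal sketch of the kind of postgres_fdw setup such a guide covers; the server, host, credentials, and table names below are hypothetical placeholders, not values from the article:

-- On the local PostgreSQL/Aurora instance: enable the foreign data wrapper
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Register the remote server (host/port/dbname are placeholders)
CREATE SERVER legacy_pg FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'legacy.example.com', port '5432', dbname 'erp');

-- Map the local role to credentials on the remote side
CREATE USER MAPPING FOR CURRENT_USER SERVER legacy_pg
    OPTIONS (user 'report_ro', password 'secret');

-- Expose a remote table locally and query it like any local table
CREATE FOREIGN TABLE legacy_orders (
    order_id bigint,
    customer text,
    total    numeric
) SERVER legacy_pg OPTIONS (schema_name 'public', table_name 'orders');

SELECT customer, sum(total) FROM legacy_orders GROUP BY customer;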
As with others, I've had to deal with death in the family recently. Some other family members are dealing with cancer (a few friends too). Happily, none of us has recently been in a disaster zone, but that's happened too. So yeah, big, nasty, scary stuff happens in life. However, for most of us, most of […]
I was just reading about how the Philippines is working to update its databases to support faster and better responses in an emergency. While I do volunteer for some of the local emergency services, I'm right at the bottom of the heap as just a radio operator. I don't have any […]
In this article, I test a common assumption we DBAs make: that adding INCLUDE columns to indexes is harmless. I created a FULL-recovery test database with a realistic, wide Orders table containing extra-large VARCHAR columns to simulate an ERP workload. I ran updates and measured transaction log backup sizes before and after adding INCLUDE columns to a nonclustered index. The results shocked me: the update without INCLUDE columns generated a 10 MB log backup, while the same update with INCLUDE columns produced over 170 MB, a 17x increase in log volume. I explain why this happens: INCLUDE columns are physically stored in the index leaf rows, so updates that touch them write bigger log records. I also clarify that updating key columns generates even more log than INCLUDE updates, because key changes involve row movement (a delete plus an insert), but INCLUDE updates still cost more log than if those columns weren't indexed at all. The takeaway is clear: INCLUDE columns are powerful, but they silently increase transaction log generation, impacting backup sizes, replication lag, and DR readiness. Always measure their real cost before deploying to production.
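A minimal sketch of how you might reproduce that measurement; the database, table, column, and index names here are my own placeholders rather than the article's, and a full database backup is assumed so that log backups are valid:

-- Assumes a FULL-recovery test database named IncludeTest with a wide
-- dbo.Orders table (CustomerId plus a large VARCHAR column, Notes).

-- Baseline index: the wide column is NOT in the index
CREATE NONCLUSTERED INDEX IX_Orders_Customer
    ON dbo.Orders (CustomerId);

UPDATE dbo.Orders SET Notes = REPLICATE('x', 2000);   -- touch the wide column
BACKUP LOG IncludeTest TO DISK = N'C:\temp\log_before.trn';

-- Rebuild the index with the wide column as an INCLUDE column
DROP INDEX IX_Orders_Customer ON dbo.Orders;
CREATE NONCLUSTERED INDEX IX_Orders_Customer
    ON dbo.Orders (CustomerId)
    INCLUDE (Notes);                                   -- now stored in leaf rows

UPDATE dbo.Orders SET Notes = REPLICATE('y', 2000);   -- same update pattern
BACKUP LOG IncludeTest TO DISK = N'C:\temp\log_after.trn';
-- Compare the two .trn file sizes: the INCLUDE version logs far more,
-- because every updated row also rewrites the index leaf entry.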
By Chris Yates
There was a time when the Chief Data Officer lived in the shadows of...
By Rayis Imayev
"But I don’t want to go among mad people," Alice remarked."Oh, you can’t help...
By Steve Jones
I saw some good reviews of the small gemma3 model in a few places...
Comments posted to this topic are about the item Create an HTML Report on...
Comments posted to this topic are about the item We Should Demand Better
Comments posted to this topic are about the item Estimated Rows
I have two calls to the GENERATE_SERIES TVF in this code:
SELECT TOP 10 gs.value
FROM GENERATE_SERIES(1, 10) AS gs
ORDER BY NEWID()
OPTION (RECOMPILE);
GO

DECLARE @a int = 10;
SELECT TOP (@a) gs.value
FROM GENERATE_SERIES(1, @a) AS gs
ORDER BY NEWID()
OPTION (RECOMPILE);

In the actual query plans, what is the estimated number of rows for each batch?

See possible answers