
T-SQL Tuesday #198 Invitation: How Do You Detect Data Changes?


T-SQL Tuesday logo

It’s time for T-SQL Tuesday #198! T-SQL Tuesday is a monthly community blogging event started by Adam Machanic in 2009. Each month a host picks a topic, participants write about it on the second Tuesday of the month, and the host posts a recap with links to all the responses. If you haven’t participated before, I hope this is the month you give it a try.

This month’s topic is change detection.

Why change detection?

I’ve been spending a lot of time lately with Fabric Mirroring for SQL Server 2025. One of the things that makes the SQL Server 2025 change feed interesting is how it handles change detection: rather than writing changes to intermediate change tables and having Fabric poll those changes, the change feed scans the transaction log at high frequency and publishes committed changes directly to OneLake. It eliminates a hop and reduces overhead on the source system.

Before Fabric Mirroring existed, incremental loads from SQL Server sources meant Change Data Capture, Change Tracking, or a high-watermark approach of your own. Now my mirrored data lands in Delta tables in Fabric, which have their own Change Data Feed feature.
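For anyone who hasn't rolled their own, the high-watermark pattern is the simplest of the three. A minimal T-SQL sketch, assuming a hypothetical source table dbo.Orders with a ModifiedDate column and a hypothetical control table etl.WatermarkLog that stores the last value loaded per table:

```sql
-- Read the watermark recorded by the previous load.
DECLARE @LastWatermark datetime2 =
(
    SELECT LastLoadedValue
    FROM etl.WatermarkLog
    WHERE TableName = N'dbo.Orders'
);

-- Capture the new watermark up front; using a closed upper bound keeps
-- rows that commit mid-extract out of this run (they land in the next one).
DECLARE @NewWatermark datetime2 = SYSUTCDATETIME();

-- Extract only rows modified since the previous load.
SELECT OrderID, CustomerID, OrderTotal, ModifiedDate
FROM dbo.Orders
WHERE ModifiedDate >  @LastWatermark
  AND ModifiedDate <= @NewWatermark;

-- Advance the watermark only after the extract succeeds.
UPDATE etl.WatermarkLog
SET LastLoadedValue = @NewWatermark
WHERE TableName = N'dbo.Orders';
```

The classic gotcha: this pattern never sees deletes, and it trusts that ModifiedDate is reliably maintained by every writer.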

That got me thinking about how many different ways we’ve all solved this problem — figuring out what changed since the last time we looked — and what issues come up depending on your environment, your workload, and what you actually need to do with the changes.

The prompt

Share a tip, technique, or lesson learned about how you’ve handled detecting data changes in your technology of choice.

This is intentionally broad. Some angles to consider:

  • How do you handle incremental loads in your ETL or ELT pipelines? What’s your watermark strategy, and what edge cases have caused problems?
  • Have you used CDC, Change Tracking, temporal tables, or the SQL Server 2025 change feed in a real system? What issues did you run into that the documentation didn’t mention?
  • Have you been burned by a particular approach? Missed deletes, timezone issues, LSN gaps after a failover, clock skew between source and destination?
  • What does change detection look like in your non-SQL-Server tools (dbt, Spark, Databricks, Azure Data Factory, C# application, etc.)?
  • Do you have a script or process that checks for unexpected data drift between two environments?
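If you want a starting point for the Change Tracking angle, here's a minimal sketch, assuming Change Tracking is enabled on the database and on a hypothetical dbo.Orders table, with the last synced version persisted between runs:

```sql
DECLARE @LastSyncVersion bigint = 0;  -- persisted from the previous run

-- Capture the current version BEFORE reading, so the next run starts here.
DECLARE @CurrentVersion bigint = CHANGE_TRACKING_CURRENT_VERSION();

SELECT ct.OrderID,
       ct.SYS_CHANGE_OPERATION,   -- I, U, or D: deletes ARE captured
       o.CustomerID,
       o.OrderTotal
FROM CHANGETABLE(CHANGES dbo.Orders, @LastSyncVersion) AS ct
LEFT JOIN dbo.Orders AS o
    ON o.OrderID = ct.OrderID;    -- deleted rows have no match in the base table
```

One of those things the documentation mentions but people skip: compare your stored version against CHANGE_TRACKING_MIN_VALID_VERSION() first, because if retention cleanup has passed you by, the only safe move is a full reload.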

SQL Server-specific or not, incremental loads or data validation — if it’s about knowing what’s different now versus before, it fits.

The rules

  1. Publish your post on Tuesday, May 12, 2026. Posts should go live between 00:00 UTC and 23:59 UTC.
  2. Include the T-SQL Tuesday logo in your post, linked back to this invitation post.
  3. Let me know you participated by leaving a comment below or sending a trackback. Feel free to also use the #tsql2sday hashtag on your favorite social media platform. I’ll be using it when I share this post.

I’ll post a roundup within a week. I’m looking forward to seeing what approaches people are using and what they’ve learned the hard way.

If you’re interested in hosting a future T-SQL Tuesday, contact Steve Jones at tsqltuesday.com/requesttohost.

The post T-SQL Tuesday #198 Invitation: How Do You Detect Data Changes? first appeared on Data Savvy.

