I've always assumed that the times we record in a row relate to the data itself. Almost every time I've built software, we've treated the moment a row is written or updated as the timestamp to record. That works well for many applications, and it certainly simplifies programming when the database can capture the time for us. System-versioned tables are a great example of using the database's own clock as the time of record.
Over the years we've started to build more distributed applications that reach a widely dispersed audience. On today's Internet, it's entirely possible that a row inserted by one user originates in a time zone far away from the next row inserted by a different user. What may be more confusing is that I might insert a row while in GMT/UTC, but then update that same row a day later from the Mountain Standard Time zone. In fact, I've done exactly that.
The complexity of time in an application means that we can't assume we know how best to capture or store time. While I think system versioned tables are a great addition to SQL Server, they are an incomplete one. We need the ability to use these tables with some sort of application time marker, not just the database server time.
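To make the gap concrete, here is a minimal T-SQL sketch of the kind of bespoke workaround this implies. The table and column names (`dbo.Orders`, `AppModifiedAt`, `dbo.OrdersHistory`) are hypothetical: SQL Server maintains the `SYSTEM_TIME` period columns automatically, but any application-time value has to live in an ordinary column the application populates itself.

```sql
-- Hypothetical example table. SQL Server fills ValidFrom/ValidTo with system
-- time automatically; AppModifiedAt is an ordinary column the application
-- must supply on every write, since the engine offers no application-time period.
CREATE TABLE dbo.Orders
(
    OrderId       int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerName  nvarchar(100) NOT NULL,
    -- Application time: when the event actually happened, per the client
    AppModifiedAt datetimeoffset(7) NOT NULL,
    -- System time: maintained by SQL Server, not the application
    ValidFrom     datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo       datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.OrdersHistory));

-- The application passes its own timestamp with each insert or update:
INSERT INTO dbo.Orders (CustomerName, AppModifiedAt)
VALUES (N'Example', SYSDATETIMEOFFSET());  -- in practice, a client-supplied value
```

Note that `FOR SYSTEM_TIME` queries know nothing about `AppModifiedAt`, so any "as of application time" query has to be written by hand against that column.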
This week I'm wondering how many of you might feel the same way. How many of you need to capture application time separately from the system time? Do you do that now, with some bespoke system? Perhaps you wish you could and this is a feature your system is missing.
There is a feedback request for Microsoft that asks for application time to be captured as part of the system-versioned timestamps. Regardless of whether you have the need yourself, this is an important enhancement that would greatly improve the system-versioned table feature. Cast a vote today, and let us know if you would use application time in addition to the time on your database server.