If blocking becomes an issue, I'll switch READ_COMMITTED_SNAPSHOT on; I have it enabled in most of my databases.
I'm not sure READ UNCOMMITTED would help, but I could possibly use a table lock for part of the process. Most of the work is actually against a series of temp tables.
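For reference, turning RCSI on is a single ALTER DATABASE; this is a sketch where MyDatabase is a placeholder name:

```sql
-- Sketch: enabling read committed snapshot isolation (RCSI).
-- MyDatabase is a placeholder. The ALTER needs exclusive access to the
-- database, so ROLLBACK IMMEDIATE kicks out any other sessions first.
ALTER DATABASE MyDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
```

After this, readers see the last committed version of each row instead of blocking behind writers, at the cost of some tempdb version-store overhead.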
I didn't mention that aspect of it. Roughly the first 10% of the process is to extract ONLY the necessary data.
Maybe I should explain the process a little.
1) First I find the series that I need. There are, say, 10k possible main series; normally I might need to examine 2,000 of those.
2) I then find all of the time series that apply to each of those series. Each of those 10k main series has 1-3 time series (daily, weekly, monthly).
Each of those time series also has a type ID; there are 32 distinct types, but in most cases I am only interested in one or two of them.
3) I pivot all of this out to get something like the following:
create table #TempTable (
    MainSeriesID int,
    theDate date,
    Weekly_Type1 decimal(18,4),     -- illustrative data types
    Weekly_Type1_Change smallint
    -- ...one value/_Change column pair per type of interest
)
That part is actually not that bad; the worst case I have found takes about 5 minutes. This is the step that actually hits the 1-billion-row table. The #TempTable will have between 2M and 25M rows.
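The pivot in step 3 can be sketched with conditional aggregation; SourceData, Frequency, TypeID, and Value are assumed names standing in for the real schema:

```sql
-- Hedged sketch of step 3: collapse the type rows out into columns.
-- SourceData, Frequency, TypeID, and Value are placeholder names.
INSERT INTO #TempTable (MainSeriesID, theDate, Weekly_Type1)
SELECT  s.MainSeriesID,
        s.theDate,
        MAX(CASE WHEN s.Frequency = 'W' AND s.TypeID = 1
                 THEN s.Value END) AS Weekly_Type1
FROM    SourceData AS s
WHERE   s.TypeID IN (1)            -- only the 1-2 types of interest
GROUP BY s.MainSeriesID, s.theDate;
-- Weekly_Type1_Change is left NULL here; step 4 fills it in.
```

Filtering on TypeID inside the extract is what keeps the hit against the 1-billion-row table down to only the columns actually needed.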
Now, here is where this gets painful.
4) The _Change fields hold the change from the previous day to the current day for the specific type. This step can be quite painful.
;WITH X (RN, MainSeriesID, theDate, [Weekly_Type1]) AS (
    -- Number the rows so Older.RN + 1 = Newer.RN pairs each day
    -- with the previous day within the same series.
    SELECT ROW_NUMBER() OVER (ORDER BY MainSeriesID, theDate) AS RN,
           MainSeriesID, theDate, [Weekly_Type1]
    FROM #TempTable
)
UPDATE #TempTable
SET Weekly_Type1_Change =
    CASE WHEN Newer.Weekly_Type1 > Older.Weekly_Type1 THEN 1
         WHEN Newer.Weekly_Type1 < Older.Weekly_Type1 THEN -1
         ELSE 0
    END
FROM #TempTable
INNER JOIN X AS Newer
    ON Newer.MainSeriesID = #TempTable.MainSeriesID
   AND Newer.theDate = #TempTable.theDate
INNER JOIN X AS Older
    ON Older.RN + 1 = Newer.RN
   AND Older.MainSeriesID = Newer.MainSeriesID;
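If the server is SQL Server 2012 or later, the same +1/-1/0 result can be computed without the double self-join by using LAG; this is a sketch against the same #TempTable columns, not necessarily faster on every data set:

```sql
-- Sketch: per-series previous-day comparison via LAG (SQL Server 2012+).
-- SIGN() of the difference gives the same 1 / -1 / 0 as the CASE above.
;WITH X AS (
    SELECT Weekly_Type1_Change,
           SIGN(Weekly_Type1
                - LAG(Weekly_Type1) OVER (PARTITION BY MainSeriesID
                                          ORDER BY theDate)) AS NewChange
    FROM #TempTable
)
UPDATE X
SET Weekly_Type1_Change = NewChange
WHERE NewChange IS NOT NULL;   -- first day of each series has no prior row
```

This reads #TempTable once instead of joining the numbered CTE to itself twice, which may matter at the 25M-row end of the range.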