Thanks for the reply, Jeff.

    Unfortunately, the varbinary column is a BLOB. The partition key is a bigint. There are currently two filegroups in use, and the one that doesn't have a range boundary value is close to filling up the drive it's on. The reason for partitioning is that our SQL Servers run on VMs, and 3 TB seems to be the limit for a drive.
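
    For reference, this is roughly the query I've been using to see which partition maps to which filegroup and where the boundaries sit (the table name is a placeholder; upper_boundary comes back NULL for the open-ended partition that's filling the drive):

        SELECT  i.name    AS index_name,        -- NULL for the heap itself
                p.partition_number,
                fg.name   AS filegroup_name,
                prv.value AS upper_boundary,    -- NULL for the open-ended partition
                p.rows
        FROM    sys.partitions AS p
        JOIN    sys.indexes AS i
                  ON  i.object_id = p.object_id
                  AND i.index_id  = p.index_id
        JOIN    sys.partition_schemes AS ps
                  ON  ps.data_space_id = i.data_space_id
        JOIN    sys.destination_data_spaces AS dds
                  ON  dds.partition_scheme_id = ps.data_space_id
                  AND dds.destination_id      = p.partition_number
        JOIN    sys.filegroups AS fg
                  ON  fg.data_space_id = dds.data_space_id
        LEFT JOIN sys.partition_range_values AS prv
                  ON  prv.function_id = ps.function_id
                  AND prv.boundary_id = p.partition_number
        WHERE   p.object_id = OBJECT_ID(N'dbo.MyBigTable')   -- placeholder table name
        ORDER BY i.index_id, p.partition_number;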

    What I'm thinking I'll have to do (since I don't believe I have the window to drop the index and rebuild it on a partition scheme/function with the right ranges) is split the range into new partitions, so that the new partition receives only the new "slice" of data, which is smaller than the entire existing partition. I hope that makes sense; I realize it may be hard to follow.
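
    For the split itself, I'm picturing something along these lines (the function, scheme, filegroup, and boundary value are all made-up stand-ins for my setup):

        -- point the scheme at the filegroup on the new drive,
        -- then carve a new boundary out of the open-ended partition
        ALTER PARTITION SCHEME ps_DataByKey
            NEXT USED [FG_NewDrive];

        ALTER PARTITION FUNCTION pf_DataByKey()
            SPLIT RANGE (123000000000);   -- new bigint boundary value

    My understanding is that a split only stays metadata-only when no rows land in the newly created partition; any rows that do fall into it get physically moved (and logged), which is why I'm asking about the timing below.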

    This will have to be an ongoing thing, since the data keeps coming in.
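
    To keep up with it, I figure I'd schedule something like this so a new, still-empty partition is always split off ahead of the incoming data (the column name, step size, and object names are placeholders again):

        DECLARE @MaxKey    bigint = (SELECT MAX(PartitionKey) FROM dbo.MyBigTable);
        DECLARE @LastBound bigint = (SELECT MAX(CAST(prv.value AS bigint))
                                     FROM sys.partition_range_values AS prv
                                     JOIN sys.partition_functions   AS pf
                                       ON pf.function_id = prv.function_id
                                     WHERE pf.name = N'pf_DataByKey');
        DECLARE @Step      bigint = 100000000;   -- how far ahead to place each new boundary

        IF @MaxKey > @LastBound - @Step          -- data is getting close to the last boundary
        BEGIN
            ALTER PARTITION SCHEME ps_DataByKey NEXT USED [FG_NewDrive];

            -- SPLIT RANGE wants a literal, so build the statement dynamically
            DECLARE @sql nvarchar(max) =
                N'ALTER PARTITION FUNCTION pf_DataByKey() SPLIT RANGE ('
                + CAST(@LastBound + @Step AS nvarchar(20)) + N');';
            EXEC sys.sp_executesql @sql;
        END;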

    Which do you think would take longer: splitting a range where 2 TB of data has to be moved, or rebuilding an index on a 4 TB heap? I don't really know the internals of what's going on in either case.