Chris, the version is SQL Server 2017 Enterprise.
Jeff, thanks again for the detailed response - and it's good to know the info comes from real-world experience.
You're correct, the only reason for daily partitions is to reduce the switching time; however, I have two types of queries that primarily hit this data - one seeks the last day's worth of data, the other the last month's worth. All queries know the date bounds they're after, in addition to filtering on the other data columns.
The 40GB of index data is in addition to the 40GB of clustered index data (it's just coincidence they happen to be similar in size). The table contains a bigint identity column (as a unique record reference) and a datetime2(7) column, which form the primary key. Partitioning is on the datetime2(7) column.
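To give a concrete picture, the setup is roughly along these lines - the object names, the extra columns, and the boundary values are just placeholders for illustration, and the filegroup mapping is collapsed to PRIMARY to keep it short:

```sql
-- Daily partition function on the datetime2(7) column
-- (boundary values are placeholders; the real ones are maintained per day)
CREATE PARTITION FUNCTION pf_DailyLog (datetime2(7))
AS RANGE RIGHT FOR VALUES ('2019-06-01', '2019-06-02', '2019-06-03');

-- Everything mapped to PRIMARY here just to keep the example short
CREATE PARTITION SCHEME ps_DailyLog
AS PARTITION pf_DailyLog ALL TO ([PRIMARY]);

CREATE TABLE dbo.SensorLog
(
    LogId    bigint IDENTITY(1,1) NOT NULL,  -- unique record reference
    LoggedAt datetime2(7)         NOT NULL,  -- partitioning column
    SourceId int                  NOT NULL,  -- originating source of the row (illustrative)
    Reading  decimal(18,6)        NULL,      -- stand-in for the other data columns
    CONSTRAINT PK_SensorLog
        PRIMARY KEY CLUSTERED (LoggedAt, LogId)
        ON ps_DailyLog (LoggedAt)
);
GO

-- The two query shapes; both supply explicit date bounds,
-- so they only touch the relevant partitions.
DECLARE @Now datetime2(7) = SYSDATETIME();

-- 1) Last day's worth of data
SELECT LogId, LoggedAt, SourceId, Reading
FROM   dbo.SensorLog
WHERE  LoggedAt >= DATEADD(DAY, -1, @Now) AND LoggedAt < @Now
AND    SourceId = 42;

-- 2) Last month's worth of data
SELECT LogId, LoggedAt, SourceId, Reading
FROM   dbo.SensorLog
WHERE  LoggedAt >= DATEADD(MONTH, -1, @Now) AND LoggedAt < @Now
AND    SourceId = 42;
```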
The challenges I'm encountering in this environment are that records are bulk-logged into the table every few minutes or so (20-50K rows total across different transactions), and that I'm seeing high index fragmentation, which I'm managing nightly to keep in check.
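The nightly maintenance is essentially the standard fragmentation check plus reorganize/rebuild - something like this sketch (object names as in the example above, and the 5% threshold is just the usual rule of thumb):

```sql
-- Check fragmentation on the clustered index (LIMITED scan keeps it cheap)
SELECT ps.partition_number,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SensorLog'), 1, NULL, 'LIMITED') AS ps
WHERE  ps.avg_fragmentation_in_percent > 5;

-- Reorganize moderately fragmented partitions...
ALTER INDEX PK_SensorLog ON dbo.SensorLog REORGANIZE PARTITION = 3;

-- ...or rebuild heavily fragmented ones (online rebuild of a single
-- partition is available on 2017 Enterprise)
ALTER INDEX PK_SensorLog ON dbo.SensorLog REBUILD PARTITION = 3 WITH (ONLINE = ON);
```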
After data hits the table, I need to perform some basic processing on it and compare each row to the previously logged record from that row's source. I'm currently achieving this with bulk UPDATE scripts running in batches as Agent jobs.
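Conceptually the post-load step looks something like the sketch below - the Processed and DeltaFromPrevious columns are hypothetical stand-ins, just to illustrate the compare-to-previous-record-per-source part and the batching:

```sql
DECLARE @BatchSize int = 50000;

WHILE 1 = 1
BEGIN
    ;WITH ToProcess AS
    (
        -- next unprocessed slice, in identity order
        SELECT TOP (@BatchSize) LogId
        FROM   dbo.SensorLog
        WHERE  Processed = 0
        ORDER BY LogId
    )
    UPDATE s
    SET    s.DeltaFromPrevious = s.Reading - prev.Reading,
           s.Processed         = 1
    FROM   dbo.SensorLog AS s
    JOIN   ToProcess     AS tp ON tp.LogId = s.LogId
    OUTER APPLY
    (
        -- previously logged record for the same source
        SELECT TOP (1) p.Reading
        FROM   dbo.SensorLog AS p
        WHERE  p.SourceId = s.SourceId
          AND (p.LoggedAt < s.LoggedAt
               OR (p.LoggedAt = s.LoggedAt AND p.LogId < s.LogId))
        ORDER BY p.LoggedAt DESC, p.LogId DESC
    ) AS prev;

    IF @@ROWCOUNT < @BatchSize BREAK;  -- last (partial) batch done
END;
```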
Certainly keen to understand new/better ways of handling this type of situation.