Viewing 5 posts - 1 through 6 (of 6 total)
Hi Sergiy,
I have tried the script with a batch size of 4,000,000 records, writing to a staging table without any indexes, but the batch speed is slower than writing to...
June 6, 2021 at 5:14 pm
Hi Frederico,
I have tried # case 2 - insert rowgroup size (1,048,576 rows). I could see a drastic change in copy speed: approx. 3,000,000 records copied in 15 min....
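For context, a batching pattern along the lines described (committing in rowgroup-sized chunks of 1,048,576 rows so each batch can be compressed directly into a columnstore rowgroup, bypassing the deltastore) might look like the sketch below. The table and column names are placeholders, not the poster's actual schema:

```sql
-- Hypothetical sketch: copy in rowgroup-sized batches.
-- Batches of >= 102,400 rows (ideally 1,048,576) go straight into
-- compressed rowgroups instead of the deltastore.
-- dbo.SourceTable / dbo.DestinationTable / Id are illustrative names only.
DECLARE @BatchSize int = 1048576;
DECLARE @Rows int = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO dbo.DestinationTable WITH (TABLOCK)  -- TABLOCK allows a parallel, minimally logged insert
    SELECT TOP (@BatchSize) s.*
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.DestinationTable AS d
                      WHERE d.Id = s.Id);

    SET @Rows = @@ROWCOUNT;
END;
```

The NOT EXISTS filter is only there to make the loop self-contained; in practice a date-range or key-range predicate per batch (as discussed elsewhere in this thread) would avoid rescanning the destination.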
June 6, 2021 at 5:09 pm
The columnstore index is clustered on both the source and the destination.
The source table has close to 600,000,000 rows and the destination table has approx. 160,000,000 rows.
Each date has approx. 6,000,000 rows...
June 5, 2021 at 11:37 am
Hi Frederico,
Thank you for your response.
The source table has 5 indexes:
> Two nonclustered indexes on the Date field
> One columnstore index
> Two nonclustered indexes for unique key constraints (One of...
June 5, 2021 at 9:59 am
Hi,
You can use the undocumented function sys.fn_dblog() to identify the insert time from the transaction log.
Please refer to the following link for more details.
https://dba.stackexchange.com/questions/189485/how-can-i-find-time-of-an-insertion
Thanks.
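A minimal sketch of what such a query might look like, in the spirit of the linked answer. Since sys.fn_dblog() is undocumented, its columns and behavior can vary between SQL Server versions, and this should only be run against a test database:

```sql
-- Hypothetical sketch using the undocumented sys.fn_dblog().
-- LOP_BEGIN_XACT rows carry the [Begin Time] of each transaction;
-- filtering on [Transaction Name] narrows the output to inserts.
SELECT
    l.[Current LSN],
    l.[Transaction ID],
    l.[Begin Time]                       -- populated on LOP_BEGIN_XACT rows
FROM sys.fn_dblog(NULL, NULL) AS l      -- NULL, NULL = scan the whole active log
WHERE l.Operation = 'LOP_BEGIN_XACT'
  AND l.[Transaction Name] = 'INSERT';
```

Note that the log only covers what has not yet been truncated, so under the SIMPLE recovery model older inserts will not be visible.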
May 22, 2021 at 2:39 am