We want to run a load test against a SQL Server table with FILESTREAM enabled. It is an audit table that captures API request and response payloads.
We have enabled FILESTREAM on the request and response columns.
I understand that FILESTREAM creates a corresponding file on the NTFS volume for each record behind the scenes.
Now I want to pre-populate the table with around 5 million records in one stretch, although this won't match prod, where the data gets loaded gradually over a period of time.
Assume the request and response are each around 5 MB.
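For context, this is roughly the shape of table I have in mind. This is only a sketch; the table, column, and filegroup names (dbo.ApiAudit, AuditFsGroup) are my own placeholders, not anything confirmed, and it assumes a FILESTREAM filegroup already exists on the database:

```sql
-- Hypothetical FILESTREAM-enabled audit table (all names are placeholders).
-- FILESTREAM requires a unique ROWGUIDCOL column on the table.
CREATE TABLE dbo.ApiAudit
(
    AuditId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE
                  DEFAULT NEWSEQUENTIALID(),   -- sequential GUIDs reduce fragmentation
    CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Request   VARBINARY(MAX) FILESTREAM NULL,  -- stored as a file on the NTFS volume
    Response  VARBINARY(MAX) FILESTREAM NULL
) FILESTREAM_ON AuditFsGroup;                  -- assumes this filegroup already exists
```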
Constraints I can see:
- Time taken to perform the inserts will be high.
- Will the backup file created for that day be a problem (size, duration)?
- Any other problems?
- How can we do this with minimal impact?
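One way I'm considering for the "minimal impact" part is inserting in small batches rather than one huge transaction, so the log can truncate between batches. A rough sketch, assuming the hypothetical dbo.ApiAudit table above, SIMPLE recovery model during the load, and a dummy 5 MB payload:

```sql
-- Batched pre-population sketch (table name, batch size, and payload are assumptions).
-- A single 5M-row transaction of ~10 MB rows would bloat the transaction log;
-- small committed batches keep log growth bounded under SIMPLE recovery.
DECLARE @batch   INT = 1000,
        @done    INT = 0,
        @target  INT = 5000000;

-- ~5 MB dummy payload standing in for a real request/response body.
DECLARE @payload VARBINARY(MAX) =
    CAST(REPLICATE(CAST('x' AS VARCHAR(MAX)), 5 * 1024 * 1024) AS VARBINARY(MAX));

WHILE @done < @target
BEGIN
    INSERT INTO dbo.ApiAudit (Request, Response)
    SELECT TOP (@batch) @payload, @payload
    FROM sys.all_objects a CROSS JOIN sys.all_objects b;  -- cheap row generator

    SET @done += @batch;
    CHECKPOINT;  -- under SIMPLE recovery, allows log truncation between batches
END;
```

At ~10 MB per row this is still ~50 TB of FILESTREAM data, so disk capacity and backup duration would need checking regardless of how the inserts are batched.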