November 10, 2025 at 12:00 am
Comments posted to this topic are about the item MySQL’s BLACKHOLE Storage Engine: The Odd Feature I Wish SQL Server Shipped With
November 11, 2025 at 12:10 am
Thanks for posting your issue and hopefully someone will answer soon.
This is an automated bump to increase visibility of your question.
November 11, 2025 at 10:29 am
Well, MSSQL has several mechanics that can do something similar.
If you want to run a huge SELECT but don't care about the results, open the Query Options, go to Results | Results to Grid and check "Discard results after execution". This speeds the query up a lot, since SSMS doesn't have to parse / paint the results into the grid (I'm not sure whether it skips the network transfer too).
You could use #local_temp_tables that are thrown away when the procedure / session ends, or ##global_temp_tables that are deleted once all sessions that have accessed them are closed / done.
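A quick sketch of both scopes (table and column names are just placeholders):

-- Local temp table: visible only to this session, dropped automatically
-- when the session (or the stored procedure that created it) ends.
CREATE TABLE #staging (id INT, payload VARCHAR(100));
INSERT INTO #staging (id, payload) VALUES (1, 'throwaway row');

-- Global temp table: visible to all sessions, dropped once the creating
-- session ends AND no other session still references it.
CREATE TABLE ##shared_staging (id INT, payload VARCHAR(100));
INSERT INTO ##shared_staging (id, payload) VALUES (1, 'shared throwaway row');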
You could use regular tables (which allow triggers) and truncate them regularly. If this is an always-on process, you could partition the table on the insert date (or whatever fits) and truncate just the older partitions.
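For the partition variant, something like this (assumes SQL Server 2016 or later, an existing partition function/scheme not shown here, and a placeholder table name dbo.InsertLog):

-- Drop only the oldest date ranges instead of the whole table.
TRUNCATE TABLE dbo.InsertLog WITH (PARTITIONS (1 TO 3));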
I agree that the special case where you replicate the inserts to a second server is a bit harder to do, but it's possible too - just don't use replication but a regular ETL job there, either in SSIS or via a procedure.
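A hypothetical sketch of the procedure route, pushing rows through a linked server (the linked server REPORTSRV and all object names are placeholders):

CREATE OR ALTER PROCEDURE dbo.PushInsertsToSecondary
    @LastPushedId INT
AS
BEGIN
    SET NOCOUNT ON;

    -- Copy only the rows added since the last run to the second server.
    INSERT INTO REPORTSRV.TargetDb.dbo.InsertLog (id, payload, inserted_at)
    SELECT id, payload, inserted_at
    FROM dbo.InsertLog
    WHERE id > @LastPushedId;
END;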
God is real, unless declared integer.
November 12, 2025 at 3:22 pm
Just wanted to add to your response about discarding the results of the query - if you send results to text instead of grid, you remove the overhead of displaying things in the grid as well. If you turn SET STATISTICS IO and SET STATISTICS TIME on, you can use those to measure the performance of your query, but make sure you have your results set to grid or file.
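For reference, a minimal sketch of those statistics switches around a query (the table name is a placeholder):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM dbo.SomeBigTable;   -- the query you want to measure

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- Logical reads, scan counts and CPU / elapsed time show up on the Messages tab.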
You can also wrap your query in something intended to discard the results, such as SELECT 1 FROM ( &lt;insert your query here&gt; ) AS DATA. This returns just a narrow column of 1s - one per row - which is really quick and cheap. Since you are not writing to disk, you have minimal disk I/O, unless things spill to TEMPDB, in which case you introduce disk I/O again.
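A sketch of that wrapper pattern (dbo.Orders and dbo.Customers are placeholder tables standing in for your real query):

-- The inner query still runs in full, but only a constant column comes back.
SELECT 1
FROM (
    SELECT o.OrderId, c.CustomerName, o.OrderTotal
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
    WHERE o.OrderDate >= '2025-01-01'
) AS DATA;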
With the special case where you are inserting to another database, you can do that in SQL Server as well. Create the table and put an INSTEAD OF INSERT trigger on it. When the INSERT hits, the trigger runs instead of the insert and writes the data to the secondary table. Personally, I'm not a fan of this approach - if the application data is meant to be written to a second database, the application should connect and write to that database. The database should be the data storage location for the applications and shouldn't be manipulating the storage location for the data.
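A hypothetical sketch of that BLACKHOLE-like setup (all object names are placeholders): inserts into dbo.FeedIn never land there; the trigger redirects them to dbo.FeedArchive instead.

CREATE TABLE dbo.FeedIn      (id INT, payload VARCHAR(100));
CREATE TABLE dbo.FeedArchive (id INT, payload VARCHAR(100));
GO
CREATE TRIGGER trg_FeedIn_Redirect
ON dbo.FeedIn
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- The original insert is discarded; only the copy below is persisted.
    INSERT INTO dbo.FeedArchive (id, payload)
    SELECT id, payload
    FROM inserted;
END;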
My opinion - BLACKHOLE in MySQL is neat, but not necessary. It's handy for benchmarking, but when benchmarking you usually want to include the disk I/O cost as well. A query that completes in 1 second when not writing to disk and 1 minute when writing to disk isn't likely to be production ready unless the blocking is acceptable. Yes, it helps you determine that the disk is the bottleneck, BUT you can prove that with other methods too, such as STATISTICS IO. For pretty much any purpose I can think of, SQL Server has alternate solutions.
EDIT - I also wanted to add - for code verification, SSMS has autocomplete and syntax checking built in. If you run your query and it has a syntax error, the query will fail to run. There is no need to "write" your data to null; just write to a temporary table or table variable when testing and update the code later. Or better yet - wrap the query in a transaction and roll it back when complete if there are data changes. Plus, development should be on a test system, so if a transaction rollback isn't acceptable, write your data and then delete it when you are done. Repeat until things work as expected. Then push to source control, get code verification, and deploy.
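A minimal sketch of the test-and-roll-back pattern (dbo.Accounts is a placeholder table):

BEGIN TRANSACTION;

UPDATE dbo.Accounts
SET balance = balance * 1.05
WHERE region = 'EU';

-- Check the effect while the transaction is still open...
SELECT TOP (10) account_id, balance FROM dbo.Accounts WHERE region = 'EU';

-- ...and undo everything once you've seen what the change would do.
ROLLBACK TRANSACTION;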
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.