• Evil Kraig F (7/24/2014)


    Coderx7 (7/24/2014)


    We didn't have a clue how to go about it

    You lack research, young padawan. Let us show you where you erred.

    so we decided to create a stored procedure to insert some rows in a loop (around 1,000,000).

    Oracle is built around the concept of developers building loops. Its primary access technique is using cursors. In SQL Server, cursors are (usually) one of the worst-performing things you can do. Remove the thought of 'transferable SQL' between systems. You have to build towards the specific system. Insert 1,000 rows *at once* as a test, not in a loop.
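    As a rough illustration of what "at once" can look like (the table dbo.TestTable and its column SomeValue below are hypothetical; substitute your own), a single set-based statement can generate and insert all 1,000 rows in one go:

        -- Sketch of a set-based insert: one statement, 1,000 rows,
        -- one implicit transaction, instead of 1,000 single-row INSERTs.
        WITH Numbers AS
        (
            SELECT TOP (1000)
                   ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
            FROM   sys.all_objects a
            CROSS JOIN sys.all_objects b
        )
        INSERT INTO dbo.TestTable (SomeValue)
        SELECT n
        FROM   Numbers;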

    That's not all. I have read on StackOverflow (I guess) that SQL Server is transaction-based internally as well, so if we cancel our operation in the middle of something, the inserted rows or updated columns should be deleted or reverted.

    To test this, we actually ran the stored procedures mentioned above and cancelled their execution after a couple of seconds.

    Oracle indeed reverted the changes, but SQL Server 2012 didn't do that at all; it just cancelled the loop halfway, and the data that had been inserted into the table remained there and wasn't erased!

    This is the difference between implicit and explicit transactions, and how the two systems treat them. I'm not an Oracle expert, but because of how Oracle likes its code, it sounds like your loop contains each looping insert as part of an overall implicit transaction.

    In SQL Server, each of your INSERTs in the loop is its own implicit transaction. Without an explicitly declared transaction, it keeps whatever has been performed up to that point. Also, you need to have it do a ROLLBACK when it fails like that, or the transactions can stick around in a ghost-like state (i.e., READ UNCOMMITTED can find them) until the SPID fails and the transaction rolls back via internal mechanisms.
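    A minimal sketch of what that could look like (again using the hypothetical dbo.TestTable), wrapping the whole test in one explicit transaction and rolling it back in the CATCH block if anything fails:

        -- Sketch: an explicit transaction around the whole test, so a failure
        -- rolls everything back instead of leaving half the rows behind.
        SET XACT_ABORT ON;  -- also rolls back the open transaction if the batch is cancelled or aborted

        BEGIN TRY
            BEGIN TRANSACTION;

            DECLARE @i INT = 1;
            WHILE @i <= 1000
            BEGIN
                INSERT INTO dbo.TestTable (SomeValue) VALUES (@i);
                SET @i += 1;
            END;

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;  -- don't leave the transaction hanging around
            THROW;                     -- re-raise the original error (SQL Server 2012+)
        END CATCH;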

    But such low performance compared to Oracle, according to our simplistic sample test, just isn't right!

    I would be thankful if anyone could help me understand what the problem is here.

    Thanks in advance

    This is where the whole idea of 'standard transferable SQL' goes out the window. Each engine, be it MS SQL, Oracle, MySQL, Mongo... has its own internal engines for optimization. Some are better than others for particular tasks, but in general they perform reasonably similarly as long as you code to the engine you're working against.

    Thanks a lot, very informative indeed; I appreciate it.

    You know, the reason we tried that loop was actually to simulate a busy server accepting lots of transactions, such as a website like Facebook where users post lots of status updates and the like. We thought to ourselves: in those situations we are facing lots of inserts, so let's create a loop to put that kind of burden on each of these servers and see how they handle it.

    I really can't get the part about inserting 1,000 rows at once! How is that even possible? Isn't it dangerous? If something happens you lose 1,000 rows, not just 1, 2, or 100 rows, but 1,000 rows!

    I am not an Oracle expert either; I am just a developer who has been using SQL Server for the last 10 years! So I would be grateful if you or anyone else could explain more. Suppose you want to benchmark two DB servers, how would you do that?