
I am Confused about SQL Server performance
Posted Thursday, July 24, 2014 1:08 AM


Grasshopper


Group: General Forum Members
Last Login: Friday, July 25, 2014 5:38 PM
Points: 10, Visits: 31
Hello Everyone,
About two months ago, two of my friends and I tried to see for ourselves whether Oracle is faster than SQL Server or vice versa, or whether they are on par most of the time.
We didn't have a clue how to go about it, so we decided to create a stored procedure to insert some rows in a loop (around 1,000,000).
In Oracle (11g, I guess) it took 1:57 minutes and SQL Server 2012 took ~3 minutes.
We had an auto-incrementing Id and a field representing a name, for example.
I can't believe SQL Server is this slow compared to Oracle. What are we missing here?
This is not all. I have read on StackOverflow (I guess) that SQL Server is transaction-based internally as well, so if we cancel our operation in the middle of something, the inserted rows or updated columns must be deleted or reverted.
To test this, we actually tried to run the stored procedures mentioned above and cancel their execution after a couple of seconds.
Oracle indeed reverted the changes, but SQL Server 2012 didn't do that at all; it just cancelled the loop halfway, and the data that had been inserted into the table remained there and wasn't erased!

So why is SQL Server acting this weird? What are we doing wrong? Apparently SQL Server is used on highly data-intensive servers worldwide as well, but such low performance compared to Oracle in our simplistic sample test just isn't right!

I would be thankful if anyone could help me understand what the problem is here.
Thanks in advance
Post #1595798
Posted Thursday, July 24, 2014 2:07 AM


SSCertifiable


Group: General Forum Members
Last Login: 2 days ago @ 4:43 AM
Points: 5,401, Visits: 7,512
Coderx7 (7/24/2014)

We didn't have a clue how to go about it

You lack research, young padawan. Let us show you where you erred.

so we decided to create a stored procedure to insert some rows in a loop (around 1,000,000).

Oracle is built around the concept of developers building loops; its primary access technique is cursors. In SQL Server, cursors are (usually) one of the worst-performing things you can do. Drop the idea of 'transferable SQL' between systems: you have to build for the specific system. As a test, insert 1,000 rows *at once*, not in a loop.
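A set-based version of the million-row test might be sketched roughly like this (the table and column names here are invented purely for illustration; a cascaded-CTE row generator stands in for a permanent tally table):

```sql
-- Hypothetical test table with an auto-incrementing id, as in the original test
CREATE TABLE dbo.TestNames (Id INT IDENTITY(1,1) PRIMARY KEY, Name VARCHAR(50));

-- One set-based INSERT generating 1,000,000 rows: no loop, no cursor
WITH E1(n) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS t(x)), -- 10 rows
     E3(n) AS (SELECT 1 FROM E1 a CROSS JOIN E1 b CROSS JOIN E1 c),  -- 10^3 rows
     E6(n) AS (SELECT 1 FROM E3 a CROSS JOIN E3 b),                  -- 10^6 rows
     Tally(n) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E6)
INSERT INTO dbo.TestNames (Name)
SELECT 'Name_' + CAST(n AS VARCHAR(10))
FROM Tally;
```

The whole statement runs as a single implicit transaction, which is a fairer comparison against Oracle's one-commit loop.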

This is not all. I have read on StackOverflow (I guess) that SQL Server is transaction-based internally as well, so if we cancel our operation in the middle of something, the inserted rows or updated columns must be deleted or reverted.
To test this, we actually tried to run the stored procedures mentioned above and cancel their execution after a couple of seconds.
Oracle indeed reverted the changes, but SQL Server 2012 didn't do that at all; it just cancelled the loop halfway, and the data that had been inserted into the table remained there and wasn't erased!


This is the difference between implicit and explicit transactions, and how the two systems treat them. I'm not an Oracle expert, but because of how Oracle likes its code, it sounds like your loop runs each iteration's insert as part of one overall implicit transaction.

In SQL Server, each of your INSERTs in the loop is its own implicit transaction. Without an explicitly declared transaction, it keeps whatever was performed up to that point. Also, you need to issue a ROLLBACK when it fails like that, or the transaction can stick around in a ghost-like state (i.e., READ UNCOMMITTED can find it) until the SPID fails and the transaction rolls back via internal mechanisms.
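One sketch of that pattern, wrapping the whole loop in an explicit transaction with a rollback on failure (dbo.TestNames is a hypothetical table name; THROW requires SQL Server 2012 or later):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    DECLARE @i INT = 1;
    WHILE @i <= 1000
    BEGIN
        INSERT INTO dbo.TestNames (Name) VALUES ('Name_' + CAST(@i AS VARCHAR(10)));
        SET @i += 1;
    END;

    COMMIT TRANSACTION;  -- all 1,000 rows become permanent together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- undo every insert, not just the failing one
    THROW;  -- re-raise the original error
END CATCH;
```

Cancelling the batch mid-loop now rolls back everything, matching the Oracle behavior described above.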

But such low performance compared to Oracle in our simplistic sample test just isn't right!

I would be thankful if anyone could help me understand what the problem is here.
Thanks in advance

This is where the whole idea of 'standard transferable SQL' goes out the window. Each engine, be it MS SQL, Oracle, MySQL, Mongo..., has its own internal optimization machinery. Some are better than others for particular tasks, but in general they perform reasonably similarly as long as you code to the engine you're working against.



- Craig Farrell

Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. |Tally Tables

Twitter: @AnyWayDBA
Post #1595803
Posted Thursday, July 24, 2014 4:31 AM


Grasshopper


Group: General Forum Members
Last Login: Friday, July 25, 2014 5:38 PM
Points: 10, Visits: 31
Evil Kraig F (7/24/2014)
[full reply quoted above]
Thanks a lot, very informative indeed; I appreciate it.
You know, the reason we tried that loop was actually to simulate a busy server accepting lots of transactions, like a website such as Facebook where users post lots of status updates and the like. We thought to ourselves: in that situation we are facing lots of inserts, so let's create a loop to put such a burden on each of these servers and see how they handle it.
I really can't get the part about inserting 1,000 rows at once! How is that even possible? Isn't it dangerous? If something happens, you lose 1,000 rows, not just 1, 2, or 100 rows!
I am not an Oracle expert either; I am just a developer who has been using SQL Server for the last 10 years! So I would be grateful if you or anyone else could explain more. Suppose you want to benchmark two database servers: how would you do that?
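On the SQL Server side, a crude way to time a workload is to measure elapsed time around the whole batch. This is only a sketch (a real benchmark needs warm caches, repeated runs, and identical hardware on both engines):

```sql
-- Per-statement CPU/elapsed time and I/O counts in the Messages output
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

DECLARE @start DATETIME2 = SYSDATETIME();

-- ... the insert workload under test goes here ...

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;
```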
Post #1595845
Posted Thursday, July 24, 2014 8:01 AM


Hall of Fame


Group: General Forum Members
Last Login: Yesterday @ 6:27 PM
Points: 3,742, Visits: 8,389
One of the reasons you're seeing this is that SQL Server auto-commits every statement and Oracle doesn't; in Oracle you need to manually commit any DML. If you use an explicit transaction around the loop in SQL Server, you'll find its performance improves; conversely, commit every row insert in Oracle and its performance should decrease.
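For completeness, SQL Server can be switched to a manual-commit model like Oracle's at the session level (a sketch; dbo.TestNames is just a placeholder table name):

```sql
-- Make the session hold every DML statement in an open transaction
-- until an explicit COMMIT, similar to Oracle's default behavior.
SET IMPLICIT_TRANSACTIONS ON;

INSERT INTO dbo.TestNames (Name) VALUES ('abc');  -- implicitly opens a transaction
ROLLBACK TRANSACTION;  -- the insert is undone; nothing was auto-committed
```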

To try inserting the 1,000 rows at once, read about tally tables and how they replace a loop. Here's a good place to start: http://www.sqlservercentral.com/articles/T-SQL/62867/



Luis C.
Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

Forum Etiquette: How to post data/code on a forum to get the best help
Post #1595900
Posted Thursday, July 24, 2014 9:15 AM


Grasshopper


Group: General Forum Members
Last Login: Friday, July 25, 2014 5:38 PM
Points: 10, Visits: 31
Thanks a lot ;)
Post #1595932
Posted Thursday, July 24, 2014 1:23 PM


SSC-Insane


Group: General Forum Members
Last Login: Yesterday @ 9:53 PM
Points: 20,727, Visits: 32,485
One thing to remember about Oracle and SQL Server: they are not the same. The internals are different, and therefore how they operate is different.

For example, the following will work fine in SQL Server but will fail in Oracle:


select current_timestamp


To get that SQL to run in Oracle it needs to be written as this:


select current_timestamp from dual


I'll leave it to you to find out what dual is in Oracle.



Lynn Pettis

For better assistance in answering your questions, click here
For tips to get better help with Performance Problems, click here
For Running Totals and its variations, click here or when working with partitioned tables
For more about Tally Tables, click here
For more about Cross Tabs and Pivots, click here and here
Managing Transaction Logs

SQL Musings from the Desert Fountain Valley SQL (My Mirror Blog)
Post #1596003
Posted Thursday, July 24, 2014 4:13 PM
SSCrazy


Group: General Forum Members
Last Login: Friday, October 17, 2014 4:50 PM
Points: 2,189, Visits: 3,297
Lynn Pettis (7/24/2014)
[quoted above]

Actually, in Oracle it must be written as:
select current_timestamp from dual;
Oracle absolutely requires the terminating ; before it will run a query. As for "dual": Oracle's parser requires a FROM clause after a SELECT, so Oracle provides a dummy (pseudo-)table to use when the SELECT doesn't pull data from an actual table.

I'm not sure what only 1,000 rows is going to tell you, but, as noted by others, to be fair to SQL Server, explicitly start a transaction before the inserts, then explicitly commit it afterward. The 1,000 separate transactions are huge overhead vs. a single, larger transaction.


SQL DBA,SQL Server MVP('07, '08, '09)

Carl Sagan said: "There is no such thing as a dumb question." Sagan obviously never watched a congressional hearing!
Post #1596038
Posted Thursday, July 24, 2014 6:20 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Yesterday @ 8:45 PM
Points: 35,342, Visits: 31,875
Coderx7 (7/24/2014)
[original post quoted above]

Post both snippets of code and let's have a look.


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1596053
Posted Friday, July 25, 2014 7:39 AM
SSC Veteran


Group: General Forum Members
Last Login: Today @ 2:34 AM
Points: 217, Visits: 867
Coderx7 (7/24/2014)
I really can't get the part about inserting 1,000 rows at once! How is that even possible? Isn't it dangerous? If something happens, you lose 1,000 rows, not just 1, 2, or 100 rows!
That's the point: if something fails, it rolls the whole thing back, so you know exactly what state the system is in and can simply resubmit the same task. In the other case, you have rolled back just the single insert and not the whole transaction. If you were looking to insert 1,000 records, you now need to work out where the job failed and change your insert task to cover only the records after that point, and manual intervention is the surest way for bugs to creep in.
Post #1596216
Posted Friday, July 25, 2014 8:04 AM


SSC-Insane


Group: General Forum Members
Last Login: Yesterday @ 9:53 PM
Points: 20,727, Visits: 32,485
crmitchell (7/25/2014)
[quoted above]


That's a difference between SQL Server and Oracle.

If you want to treat the loop inserting 1,000 rows of data as a single transaction in SQL Server, you need to explicitly define the transaction:


BEGIN TRANSACTION;

-- your 1000-iteration loop, for example:
DECLARE @i INT = 1;
WHILE @i <= 1000
BEGIN
    INSERT INTO dbo.YourTable (Name) VALUES ('row ' + CAST(@i AS VARCHAR(10)));
    SET @i += 1;
END;

COMMIT TRANSACTION;


Now, if you kill the process running this while it is still in the loop, all rows inserted up to the point you kill the process will be rolled back.




Lynn Pettis

For better assistance in answering your questions, click here
For tips to get better help with Performance Problems, click here
For Running Totals and its variations, click here or when working with partitioned tables
For more about Tally Tables, click here
For more about Cross Tabs and Pivots, click here and here
Managing Transaction Logs

SQL Musings from the Desert Fountain Valley SQL (My Mirror Blog)
Post #1596226