SQLServerCentral is supported by Red Gate Software Ltd.
SQL Server Maximum Write Performance
Posted Tuesday, April 22, 2014 12:11 AM
Forum Newbie


Group: General Forum Members
Last Login: Tuesday, May 6, 2014 6:39 AM
Points: 1, Visits: 7
Hi SQL Gurus,

I want to know whether we are already getting the fastest SQL Server write performance for our application.

We created a sample application that performs a BulkCopy operation against a local SQL Server database. The BulkCopy operation writes 100,000 rows from an in-memory DataTable. The target table has no indexes, because we just want to measure SQL Server's maximum write speed.

Here is the schema of the table we are inserting into:

CREATE TABLE [dbo].[HistorySampleValues](
    [HistoryParameterID] [bigint] NOT NULL,
    [SourceTimeStamp] [datetime2](7) NOT NULL,
    [ArchiveTimestamp] [datetime2](7) NOT NULL,
    [ValueStatus] [int] NOT NULL,
    [ArchiveStatus] [int] NOT NULL,
    [IntegerValue] [int] SPARSE NULL,
    [DoubleValue] [float] SPARSE NULL,
    [StringValue] [varchar](100) SPARSE NULL,
    [EnumNamedSetName] [varchar](100) SPARSE NULL,
    [EnumNumericValue] [int] SPARSE NULL,
    [EnumTextualValue] [varchar](256) SPARSE NULL
) ON [PRIMARY]

We measure the performance from our C# code.

public double PerformBulkCopy()
{
    DateTime timeToBulkCopy = DateTime.Now;
    double bulkCopyTimeSpentMs = -1.0;

    DataTable historySampleValuesDataTable = CreateBulkCopyRecords();

    //start the timer here
    timeToBulkCopy = DateTime.Now;
    using (SqlConnection sqlConn = ConnectDatabase())
    {
        sqlConn.Open();

        using (SqlTransaction sqlTransaction = sqlConn.BeginTransaction())
        {
            try
            {
                using (SqlBulkCopy sqlBulkCopy = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.KeepIdentity, sqlTransaction))
                {
                    sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_HISTORY_PARMETER_ID, SqlServerDatabaseStrings.SQL_FIELD_HISTORY_PARMETER_ID);
                    sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_SOURCE_TIMESTAMP, SqlServerDatabaseStrings.SQL_FIELD_SOURCE_TIMESTAMP);
                    sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_VALUE_STATUS, SqlServerDatabaseStrings.SQL_FIELD_VALUE_STATUS);
                    sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ARCHIVE_STATUS, SqlServerDatabaseStrings.SQL_FIELD_ARCHIVE_STATUS);
                    sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_INTEGER_VALUE, SqlServerDatabaseStrings.SQL_FIELD_INTEGER_VALUE);
                    sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_DOUBLE_VALUE, SqlServerDatabaseStrings.SQL_FIELD_DOUBLE_VALUE);
                    sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_STRING_VALUE, SqlServerDatabaseStrings.SQL_FIELD_STRING_VALUE);
                    sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ENUM_NAMEDSET_NAME, SqlServerDatabaseStrings.SQL_FIELD_ENUM_NAMEDSET_NAME);
                    sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ENUM_NUMERIC_VALUE, SqlServerDatabaseStrings.SQL_FIELD_ENUM_NUMERIC_VALUE);
                    sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ENUM_TEXTUAL_VALUE, SqlServerDatabaseStrings.SQL_FIELD_ENUM_TEXTUAL_VALUE);

                    sqlBulkCopy.DestinationTableName = SqlServerDatabaseStrings.SQL_TABLE_HISTORYSAMPLEVALUES;

                    sqlBulkCopy.WriteToServer(historySampleValuesDataTable);
                }

                sqlTransaction.Commit();
                //end the timer here
                bulkCopyTimeSpentMs = DateTime.Now.Subtract(timeToBulkCopy).TotalMilliseconds;
            }
            catch (Exception ex)
            {
                sqlTransaction.Rollback();
            }

            CleanUpDatabase(sqlConn);
        }
        sqlConn.Close();
    }

    return bulkCopyTimeSpentMs;
}

I have tried the different overloads of SqlBulkCopy.WriteToServer(): DataTable, DataReader and DataRow[].

On a machine with these specs:
Intel Core i3-2120 CPU @ 3.30GHz
8GB of RAM
Seagate Barracuda 7200.12 ST3500413AS 500GB 7200 RPM

I am getting a throughput of ~150K-160K rows inserted per second using the different overloads.

So, given our sample data and the sample table, is this the most we can get out of SQL Server SE? Or is there something we can do to make this even faster?

Let me know if there is more information you need about our setup.
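[Editor's note: for readers trying the same experiment, a few SqlBulkCopy settings are worth testing. This is a sketch only, not part of the original post; the property names are real SqlBulkCopy members, and the variable names follow the code above. TableLock can enable minimally logged inserts into a heap under the simple or bulk-logged recovery model; BatchSize trades transaction-log pressure against restart cost on failure.]

```csharp
// Variation on the poster's construction, for comparison (sketch only):
using (SqlBulkCopy sqlBulkCopy = new SqlBulkCopy(
    sqlConn,
    SqlBulkCopyOptions.KeepIdentity | SqlBulkCopyOptions.TableLock,
    sqlTransaction))
{
    sqlBulkCopy.BatchSize = 0;          // 0 = send all rows as a single batch
    sqlBulkCopy.BulkCopyTimeout = 0;    // no timeout for large loads
    sqlBulkCopy.EnableStreaming = true; // matters mainly for IDataReader sources
    // ...same column mappings, DestinationTableName, and WriteToServer
    // call as in the code above...
}
```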
Post #1563690
Posted Tuesday, April 22, 2014 4:12 AM


SSCoach


Group: General Forum Members
Last Login: Today @ 12:11 PM
Points: 15,517, Visits: 27,893
The first thing I'd look at is the wait statistics. What's causing the system to slow down? That will tell you whether you're stuck on I/O or whether it's something else like memory or CPU. You can also try SQLIOSim or SQLIO to test the disk subsystem on your server. That will tell you what the hardware itself is capable of. If your queries are then similar in scale, you'll know you're at the limit of the system. If not, you'll know you have a bottleneck elsewhere, possibly in code, structure, or another resource.
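[Editor's note: a starting point for the wait-stats check suggested above. This query is not from the original post; it uses the standard sys.dm_os_wait_stats DMV, which accumulates waits since the last service restart, and filters out a few benign background wait types.]

```sql
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT',
                        N'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;
```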

----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #1563741
Posted Tuesday, April 22, 2014 7:48 AM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Yesterday @ 8:36 AM
Points: 4,319, Visits: 6,112
That is a VERY low-end machine, especially the single 7200 RPM disk. And there are SOOO many things in SQL Server that need to be changed to get optimal performance. I can't wait until the first time I get to present my "SQL Server Defaults SUCK!!" session at a SQL Saturday event!

In addition to wait stats analysis, you also need to check file I/O stall analysis. I am betting the 1MB default data file growth setting is a problem, and TLOG writes (and TLOG growth) may very well be as well. Also, if you are running the test code on the same machine, you could be hitting that i3 processor too hard. If you are running it over the network, then the network could be a bottleneck (although that's not likely).
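[Editor's note: sketches for the two checks suggested above, not from the original post. The first query uses the standard sys.dm_io_virtual_file_stats DMV to surface per-file I/O stalls since startup; the ALTER DATABASE statements show replacing the default growth increment with a fixed, larger one. The database and logical file names here are hypothetical placeholders.]

```sql
-- Per-file I/O stalls (cumulative since SQL Server startup):
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS file_name,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_write_ms DESC;

-- Replace the small default autogrowth with a fixed, larger increment:
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_Data', FILEGROWTH = 256MB);
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_Log', FILEGROWTH = 256MB);
```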



Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1563855
Posted Tuesday, April 22, 2014 8:19 AM


SSCoach


Group: General Forum Members
Last Login: Today @ 12:11 PM
Points: 15,517, Visits: 27,893
Yeah, hard to argue with you, Kevin (like it's ever easy). 8GB of RAM for SQL Server is probably barely adequate. My local VMs running on my laptop get more resources than that.

----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #1563874
Posted Tuesday, April 22, 2014 5:49 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Today @ 11:31 AM
Points: 36,714, Visits: 31,164
Grant Fritchey (4/22/2014)
Yeah, hard to argue with you Kevin (like it's ever easy). 8gb of RAM for SQL Server is probably barely adequate. My local VMs running on my laptop get more resources than that.


It's amazing how things have changed... mostly for the worse. It wasn't so long ago that a previous company I worked for ran their whole enterprise on 32-bit SQL Server 2000 Standard Edition on a 4-processor box against a SAN with nothing but 7.5K RPM hard disks (state of the art at the time), and had no problems with processing or disk writes even though we had 1.2 million customers and were handling all of the long distance calls, to boot. Sure... they started out with some problems with many overnight procs that took in excess of 10 hours to run, but some thoughtful coding brought all of those jobs down to less than half an hour, and some of them took as little as 3 minutes (some SERIOUS crap code was at work).

The rules haven't actually changed that much but expectations (hardware can fix anything, right?), perceptions (it's cheaper to throw hardware at a problem than to have someone fix the code, right?), and needs (we need it now and it'll be cheaper to ship it and then fix it, right?) have gone through the roof and code quality has fallen through the bottom of the basket. It's a deadly combination where even today's incredible hardware can't keep up because crap code keeps hitting the proverbial fan.

Hardware helps but performance is in the code.


--Jeff Moden
"RBAR is pronounced 'ree-bar' and is a 'Modenism' for 'Row-By-Agonizing-Row'."

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1564066
Posted Tuesday, April 22, 2014 8:14 PM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Yesterday @ 8:36 AM
Points: 4,319, Visits: 6,112
Good observations, Jeff. I believe a lot of it is the masses seeing Google/Twitter/Facebook/EBay/Amazon et al store kajigabytes of information and retrieve it nearly instantaneously, which makes developers think they can slap sh!t together and the hardware will make their stuff run like that too. They truly have NO idea about the seriously rocket-scientist-smart people working at those entities endlessly to tweak every last microsecond out of everything they do, while simultaneously trying to figure out innovative ways to build the next generation and be even better.

There is some awesome reading about such topics linked here, especially the All Time Favorites: http://highscalability.com/

I like this one especially: https://code.facebook.com/posts/229861827208629/scaling-the-facebook-data-warehouse-to-300-pb/

Thankfully as "The SQL Guru" I get to benefit from sooOOOO many companies and developers/development teams continuing to slap crap together!! Performance Is Dead - Long Live Performance!!


Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1564073