Provisioning SQL disk on a Compellent SAN
Posted Friday, May 27, 2011 6:48 PM


SSC Veteran


Group: General Forum Members
Last Login: Yesterday @ 4:35 PM
Points: 259, Visits: 1,086
Michael Valentine Jones (5/27/2011)
Paul Randal addresses the whole tempdb-data-file-per-processor-core story here:

A SQL Server DBA myth a day: (12/30) tempdb should always have one data file per processor core
http://www.sqlskills.com/BLOGS/PAUL/post/A-SQL-Server-DBA-myth-a-day-(1230)-tempdb-should-always-have-one-data-file-per-processor-core.aspx


Been there, done that. It's one of many, many articles out there on this subject.

The truth is, just about every database server can benefit from a few TempDb data files. How many? How big? On different LUNs or disks?

It all depends. You have to experiment.

LC
Post #1116573
Posted Friday, July 22, 2011 9:32 AM
SSC Rookie


Group: General Forum Members
Last Login: Friday, June 6, 2014 1:24 PM
Points: 49, Visits: 189
I saw a demo of this the other day from a Dell rep. He claimed that Compellent's system architect said that performance tuning these tiered SANs is "as easy as clicking checkboxes and then going to drink beer".

I had one question for the Dell rep, and he did not have an answer for me since he was not a SQL Server guy. I asked him how tools that try to use heuristics to automatically discover bottlenecks, like the PALTools on Codeplex, could still work in this environment. Obviously, PALTools could still show you non-Disk bottlenecks (CPU, Memory), but I am trying to wrap my mind around how it would report on disk bottlenecks.

I am still a "Junior DBA" (really, I am a software developer who was asked to be the performance guru on our SQL Servers), so I apologize if this question is naive or stupid.

The geek in me finds this tiered data storage thing fascinating.
Post #1146733
Posted Friday, July 22, 2011 12:19 PM


SSC Veteran

johnzabroski (7/22/2011)
I saw a demo of this the other day from a Dell rep. He claimed that Compellent's system architect said that performance tuning these tiered SANs is "as easy as clicking checkboxes and then going to drink beer".

I had one question for the Dell rep, and he did not have an answer for me since he was not a SQL Server guy. I asked him how tools that try to use heuristics to automatically discover bottlenecks, like the PALTools on Codeplex, could still work in this environment. Obviously, PALTools could still show you non-Disk bottlenecks (CPU, Memory), but I am trying to wrap my mind around how it would report on disk bottlenecks.

I am still a "Junior DBA" (really, I am a software developer who was asked to be the performance guru on our SQL Servers), so I apologize if this question is naive or stupid.

The geek in me finds this tiered data storage thing fascinating.


I've not used the PAL tools. For the most part I use the tools provided by Microsoft (Perfmon, Performance Dashboard) and the various UIs provided by Compellent to evaluate system and SAN performance, so I can't offer you a comparison.

The Compellent UIs offer an almost overwhelming array of ways to view and evaluate SAN performance and their associated heuristics. You don't look at operations on a per-spindle basis. You look at operations in other terms, like how many IOPS (IO operations per second) your SAN can perform relative to how many IOPS it is executing, and how much data throughput is occurring relative to the SAN's maximum throughput in bytes per second, both on the front end and the back end, to determine how the SAN is being used and where the stress points are.

What determines the maximum number of IOPS for your SAN? Primarily the number and rotational speed of the disks you have in each tier. For instance, if Tier 1 storage is 15K rpm Fibre Channel drives and you have 32 of them, that will be your limiting factor, since all data is written across all 32 spindles. With 64 spindles in Tier 1, the number of IOPS your SAN can handle doubles relative to 32 spindles; with 16 spindles, it is cut in half.
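As a rough back-of-the-envelope sketch of that scaling (the ~180 IOPS per 15K rpm spindle figure is my assumption, not from the post):

```sql
-- Hypothetical per-spindle figure: assume a 15K rpm drive sustains ~180 random IOPS.
DECLARE @IopsPerSpindle int = 180;

-- Aggregate Tier 1 IOPS scales linearly with spindle count:
-- 16 spindles ~ 2,880 IOPS; 32 ~ 5,760; 64 ~ 11,520.
SELECT Spindles,
       Spindles * @IopsPerSpindle AS EstimatedTierIops
FROM (VALUES (16), (32), (64)) AS Tier1 (Spindles)
ORDER BY Spindles;
```

The real ceiling also depends on RAID write penalty and cache hit rates, so treat this strictly as a first approximation.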

The same logic applies to Tier 2 or Tier 3 storage (usually 7,000+ rpm SATA or SAS drives), except that those disks' rotational speeds are slower than 15K rpm, so the maximum number of IOPS is lower.

Additionally, there are limitations imposed on the front end of the SAN by the maximum number of packets your iSCSI connections can handle, and on the back end by the speed of its internal communications.

In our production environment, these technical aspects are rarely a concern because everything is so fast. The most important consideration is setting up data progression properly, so that rarely used data originally stored on Tier 1 is slowly migrated to Tier 3. This keeps your most frequently accessed data on your highest-speed tier and your most infrequently accessed data on your lowest-speed tier.

And, of course, you must have efficient queries and appropriate indexes to support efficient use of your SAN. THIS is still the most important aspect of the system tuning I do. If your queries are causing clustered index scans of large tables, that may appear to be a disk bottleneck when in fact it is an improper database-side implementation.
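One quick way to check whether apparent disk pressure is really scan-driven is the index usage DMVs. A sketch (run it in the database in question; the scans-versus-seeks comparison is just a starting heuristic, not a hard rule):

```sql
-- Indexes in the current database that are mostly scanned rather than sought.
-- Heavy user_scans on large clustered indexes often surface as disk latency.
SELECT OBJECT_NAME(s.object_id) AS TableName,
       i.name                   AS IndexName,
       s.user_seeks,
       s.user_scans
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
  AND s.user_scans > s.user_seeks
ORDER BY s.user_scans DESC;
```

Note that these counters reset on instance restart, so interpret them against a known uptime window.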

I can't say that tuning a Compellent SAN is as easy as drinking beer (that's pretty easy and comes naturally to me) but it's not rocket science either. It's pretty straightforward once you understand the concepts upon which the Compellent SAN is designed and operates.

LC
Post #1146839
Posted Friday, July 22, 2011 12:30 PM
SSC Rookie

The PAL tools just sit on top of PerfMon.

It's nothing extraordinarily fancy. What it allows you to do is share and save performance "gotchas" as relationships, so that given new performance data it can compare the relationships it knows against the new data. It's just automating what you probably already do.

Not familiar with Performance Dashboard. Is that a MOM (Microsoft Operations Manager) feature?
Post #1146851
Posted Friday, July 22, 2011 12:49 PM


SSC Veteran

Performance Dashboard is an add-on feature for SQL Server.

Type this into Google and you'll get a bunch of hits: "sql server performance dashboard". Pick the selection that is appropriate to your version of SQL Server.

It is one of the most useful sets of tools I've ever used. I'm not sure why it is not part of the SQL Server installation.

You'll need to download it, install it, then execute the "Setup.sql" script to get it ready to run as a Custom Report in the Reports section of SQL Server Management Studio.

If you have any difficulties getting it to execute, let me know.

LC
Post #1146868
Posted Friday, July 22, 2011 1:01 PM
SSC Rookie

I've seen that, and had trouble getting it to work for SQL Server 2008. I never put much effort into it. In general, I find SQL Server Reporting Services, and thereby any tool built on top of its infrastructure, to be poorly designed and ill-thought-out. For example, the index report built into SSMS 2008 is stupid and doesn't take internal vs. external fragmentation into account! Ugh. And it is proprietary, so you can't open the code behind the report and change it. How dumb. And it doesn't have integrated workflow for push-button resolution of problems.

It is only a matter of time (20 years? :)) before Microsoft wakes up, buys Tableau Software (the best reporting tool out there), and re-engineers SSIS to work more like Google Refine (a free tool not currently geared toward massive ETL, but could be).

We will have to wait a long time, primarily due to reporting zombies who are satisfied with the status quo.

My current pet project is to build a performance dashboard with PowerShell that automatically launches PerfMon traces and SQL Server server-side traces in sync, then gathers the data and reports it in a useful manner. It was inspired by dealing with how stupid SQL Server Management Studio is. I tried talking to the guys who designed SSMS and SMO and basically got thumbs-in-ears replies.

As a software developer, I hate looking at the tools DBAs torture themselves with and know that it is still the best they have available.
Post #1146875
Posted Saturday, July 23, 2011 8:13 AM


SSC Veteran

I think you and I have different perspectives on things. I was a software developer for over 25 years and wrote about half a million lines of source code in 10 different languages, and I don't have a problem with most of the tools available to me, as a DBA, from Microsoft. I find them generally adequate, and in some cases excellent, for everything a senior DBA needs to do.

If you're inclined to further investigate SQL Server Performance Dashboard, this link will provide you with a simple fix that will allow you to execute it on SS2008: http://blogs.technet.com/b/rob/archive/2009/02/18/performance-dashboard-reports-for-sql-server-2008.aspx

LC
Post #1147073
Posted Monday, March 31, 2014 5:55 AM
Forum Newbie


Group: General Forum Members
Last Login: Wednesday, May 7, 2014 2:44 PM
Points: 2, Visits: 161
Our SQL production servers have now been re-engineered to use Compellent backup technology. I have the backup files taken with Compellent from the production server and want to restore them to our SQL test server. I created the refresh scripts below, but the job runs endlessly with no failures and nothing gets done. Please help.

Here are the scripts I am using.

powershell -File "C:\Scripts\Compellent\NEO_DB_Dismount_Replay.ps1"
powershell -File "C:\Scripts\Compellent\NEO_Log_Dismount_Replay.ps1"
powershell -File "C:\Scripts\Compellent\NEO_DB_Mount_Replay.ps1"
powershell -File "C:\Scripts\Compellent\NEO_Log_Mount_Replay.ps1"


CREATE DATABASE PROCESSDB_CLONE
ON
    (FILENAME = 'C:\MP\MP001\SQL\Data\processdb.mdf'),
    (FILENAME = 'C:\MP\MP001\SQL\Data\processdb_1.ndf'),
    (FILENAME = 'C:\MP\MP001\SQL\Data\processdb_2.ndf'),
    (FILENAME = 'C:\MP\MP001\SQL\Data\processdb_3.ndf'),
    (FILENAME = 'C:\MP\MP002\SQL\Logs\processdb_4.ldf'),
    (FILENAME = 'C:\MP\MP002\SQL\Logs\processdb_5.ldf')
FOR ATTACH;
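Not a diagnosis, but one defensive tweak worth trying (database name and file paths copied from the post): guard the attach so the job step raises an error instead of silently doing nothing when the clone is already attached.

```sql
-- Guarded attach: fail loudly if PROCESSDB_CLONE already exists on this instance.
IF DB_ID(N'PROCESSDB_CLONE') IS NOT NULL
BEGIN
    RAISERROR('PROCESSDB_CLONE is already attached; detach or drop it first.', 16, 1);
END
ELSE
BEGIN
    CREATE DATABASE PROCESSDB_CLONE
    ON
        (FILENAME = 'C:\MP\MP001\SQL\Data\processdb.mdf'),
        (FILENAME = 'C:\MP\MP001\SQL\Data\processdb_1.ndf'),
        (FILENAME = 'C:\MP\MP001\SQL\Data\processdb_2.ndf'),
        (FILENAME = 'C:\MP\MP001\SQL\Data\processdb_3.ndf'),
        (FILENAME = 'C:\MP\MP002\SQL\Logs\processdb_4.ldf'),
        (FILENAME = 'C:\MP\MP002\SQL\Logs\processdb_5.ldf')
    FOR ATTACH;
END
```

It is also worth confirming that the mount-replay PowerShell steps actually complete and present the C:\MP volumes before the attach step runs, since an attach against unmounted paths will block the job rather than succeed.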
Post #1556435