A need to be exposed - career-wise, that is!

  • Morning all,

    I was lucky enough to be promoted at the beginning of this year and am now the sole DBA at my company, looking after 5 SQL Server 2008 R2 instances. I've been on three MS Certified SQL Server courses covering design, implementation, optimisation, maintenance, security, etc., and have taught myself T-SQL to a standard where I can effectively query data and perform maintenance tasks. Here's the rub, though: because the company DBs are quite small (the largest is circa 600MB) and most don't have huge I/O, I don't feel that I'm getting exposure to some of the more challenging issues DBAs are forced to deal with (e.g. data corruption, deadlocks, disk I/O problems, performance issues). Does anyone have any suggestions as to what I could do (short of moving somewhere else!) to get more exposure and be able to practise resolving these sorts of problems? I'm looking at taking the MS 70-432 exam towards the end of this year, but I'm not confident that I'll have the relevant practical knowledge to pass, and I don't want to let my company down.

    I've recently built a test/dev server running SQL 2008 R2 to enable me to practice but need some scenarios and don't know how to instigate some of the above problems. Any help would be hugely appreciated. Cheers, M.


    MCITP
    MCTS - E-Business Card
    Twitter: WWDMark

    Try not! Do or do not, there is no try

    email: info@weekendwebdesign.co.uk
    Personal Website: http://markallen.co.uk/
    Business Website: https://www.weekendwebdesign.co.uk

  • This is probably very general advice, but it's what's worked for me as far as learning goes :-). First, in your dev environment, try to create a giant lump of data by using a method such as the one in this article:

    http://weblogs.sqlteam.com/joew/archive/2008/02/06/60490.aspx

    That should give you a rather large heap of data, completely free of indexes and such. Once you have that basic framework, try adding some additional components; perhaps a DateCreated column for each name that you'll have in that table, and then think of a way to generate the dates. Add more columns and populate them as needed, possibly with purchase values, occasional notes on some of the names, and so on.
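    As a starting point, something like the following should work for building that kind of table (the table and column names here are purely illustrative, not taken from the linked article; the `CHECKSUM(NEWID())` idiom is a common way to get a pseudo-random value per row):

    ```sql
    -- A minimal sketch: build a large, index-free heap of test data.
    CREATE TABLE dbo.TestSales
    (
        CustomerName varchar(50),
        DateCreated  datetime,
        PurchaseAmt  money
    );

    -- Cross-joining system views yields millions of source rows quickly.
    INSERT INTO dbo.TestSales (CustomerName, DateCreated, PurchaseAmt)
    SELECT TOP (1000000)
           'Customer' + CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS varchar(10)),
           DATEADD(DAY, -(ABS(CHECKSUM(NEWID())) % 1461), GETDATE()), -- random date in the last ~4 years
           ABS(CHECKSUM(NEWID())) % 50000 / 100.0                     -- random amount, 0.00 to 499.99
    FROM sys.all_columns AS a
    CROSS JOIN sys.all_columns AS b;
    ```

    Bump the `TOP` value up as far as your disk allows; the bigger the heap, the more obvious the tuning effects will be.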

    Once all of this is set up, go ahead and try to retrieve data that a company would want; say, who the biggest spenders are, what days they shopped on, what day has the most sales, and other similar scenarios. The important part for you here, it seems, is not so much figuring out how to get that data; you've said you're good at querying as it is, so that should be the easy part. The major step here is to look at the execution plans for the queries you cook up to get that data. With about 14 million rows, the effects of performance tuning will be quite noticeable once you've got it worked out, and if you can get even more rows tossed in there, the effects will be magnified that much further. Any faults with performance outside of the query and table design should also be made manifest in the process, and you should be able to experiment happily with indexes and tuning to see what makes the biggest impact with the fewest tradeoffs.

    Mostly, that's how I've been practicing at my current job, where I've been the DBA for almost a year now. My practice runs have been done with a dev copy of our data, though, so I've got things like dates and other values already set in. From this, though, I've been able to formulate test cases to dig up information I was merely curious about, such as finding out which year was our best year for sales, which items we have are the most profitable ones, the slowest months we have each year, typical variance in profits from month to month, and so on.

    This is probably quite basic advice, but I've been browsing these forums regularly and trying to apply problems I see brought up by others to my dev environment, and I've learned a ton from all the testing :-).

    - 😀

  • VygonDBA (8/3/2012)


    Morning all,

    I was lucky enough to be promoted at the beginning of this year and am now the sole DBA at my company, looking after 5 SQL Server 2008 R2 instances. I've been on three MS Certified SQL Server courses covering design, implementation, optimisation, maintenance, security, etc., and have taught myself T-SQL to a standard where I can effectively query data and perform maintenance tasks. Here's the rub, though: because the company DBs are quite small (the largest is circa 600MB) and most don't have huge I/O, I don't feel that I'm getting exposure to some of the more challenging issues DBAs are forced to deal with (e.g. data corruption, deadlocks, disk I/O problems, performance issues). Does anyone have any suggestions as to what I could do (short of moving somewhere else!) to get more exposure and be able to practise resolving these sorts of problems? I'm looking at taking the MS 70-432 exam towards the end of this year, but I'm not confident that I'll have the relevant practical knowledge to pass, and I don't want to let my company down.

    I've recently built a test/dev server running SQL 2008 R2 to enable me to practice but need some scenarios and don't know how to instigate some of the above problems. Any help would be hugely appreciated. Cheers, M.

    I suggest installing Developer Edition on a laptop, or a desktop whose system resources and drives you can play with, something separate from your dev environment. Your devs won't be too happy about a horked environment.

    Practice configuring things the correct way, then try breaking them; try configuring them in the worst way you can think of that still runs, and try fixing it without interruption. Start thinking of terrible what-if situations: what if I have my clustered indexes in their own filegroup on a separate drive and the cord is cut? Load 80 million rows at the same time index maintenance is going on. What if the master database file somehow gets deleted? What if I have two instances trying to use the exact same database files on the same network drives? Try writing a single statement that blows up tempdb, or fills the drive the log file is on. Create a deadlock. Write a statement that eats all the processors and makes everything wait, including itself. Write a batch file that opens 1,000 connections and executes inserts, deletes and selects on the same dataset. What if someone updates all of the dates to GETDATE()? How would you fix that? How would you prevent it?
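    For the deadlock exercise, the classic two-session pattern is enough (table names here are illustrative; run each half in a separate SSMS query window):

    ```sql
    -- Session 1: lock TableA, pause, then try for TableB.
    BEGIN TRAN;
    UPDATE dbo.TableA SET Col1 = 1 WHERE Id = 1;
    WAITFOR DELAY '00:00:10';
    UPDATE dbo.TableB SET Col1 = 1 WHERE Id = 1;
    COMMIT;

    -- Session 2 (start within the 10-second window): lock TableB, then try for TableA.
    BEGIN TRAN;
    UPDATE dbo.TableB SET Col1 = 2 WHERE Id = 1;
    UPDATE dbo.TableA SET Col1 = 2 WHERE Id = 1; -- each session now holds the lock the other needs
    COMMIT;
    ```

    SQL Server will pick one session as the deadlock victim and roll it back (error 1205); from there you can practise capturing the deadlock graph and reworking the access order so it can't happen.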

    If you have another database person around to bounce things off of you can make a sport of it.

    Figure out a way to monitor and audit all of this, then monitor your monitors to see what kind of load they're putting on your server.
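    A lightweight way to start on the monitoring side is with the DMVs that ship with 2008 R2, no extra tooling needed. Something along these lines gives a snapshot of current activity and the server's cumulative waits (the `session_id > 50` filter is just a rough heuristic to skip most system sessions):

    ```sql
    -- What is running right now, what it is waiting on, and who is blocking it.
    SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
           r.blocking_session_id, t.text AS sql_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id > 50;

    -- The top cumulative waits since the last service restart.
    SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;
    ```

    Running queries like these while your break-it scenarios are in flight teaches you what each failure mode looks like from the monitoring side, which is half the battle in production.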

    http://www.littlekendra.com/2011/01/24/corrupthexeditor/

    Also, definitely follow Gail Shaw (GilaMonster) and Paul Randal.

  • VygonDBA (8/3/2012)


    ...I don't feel that I'm getting exposure to some of the more challenging issues DBAs are forced to deal with (e.g. data corruption, deadlocks, disk I/O problems, performance issues)

    Be very careful of what you wish for 😀

    If you're really into it... how about getting a job in a larger shop? Chances are your current employer will never provide the kind of challenges you're craving, and, trust me, a home-lab database will never provide the level of complexity a large production environment does.

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
    Huge thanks to everyone who's replied to my post; there have been some great ideas that I'm going to put into place, which should keep me out of trouble (unless I use the live DBs rather than the test server, and then I'll have a massive amount of troubleshooting to do and problem solved!!!).

    If there's anything else anyone can think of I'd be very grateful.

    Many thanks again folks. M


    MCITP
    MCTS - E-Business Card
    Twitter: WWDMark

    Try not! Do or do not, there is no try

    email: info@weekendwebdesign.co.uk
    Personal Website: http://markallen.co.uk/
    Business Website: https://www.weekendwebdesign.co.uk

  • hisakimatama (8/3/2012)


    This is probably very general advice, but it's what's worked for me as far as learning goes :-). First, in your dev environment, try to create a giant lump of data by using a method such as the one in this article:

    http://weblogs.sqlteam.com/joew/archive/2008/02/06/60490.aspx

    That should give you a rather large heap of data, completely free of indexes and such. Once you have that basic framework, try adding some additional components; perhaps a DateCreated column for each name that you'll have in that table, and then think of a way to generate the dates. Add more columns and populate them as needed, possibly with purchase values, occasional notes on some of the names, and so on.

    Once all of this is set up, go ahead and try to retrieve data that a company would want; say, who the biggest spenders are, what days they shopped on, what day has the most sales, and other similar scenarios. The important part for you here, it seems, is not so much figuring out how to get that data; you've said you're good at querying as it is, so that should be the easy part. The major step here is to look at the execution plans for the queries you cook up to get that data. With about 14 million rows, the effects of performance tuning will be quite noticeable once you've got it worked out, and if you can get even more rows tossed in there, the effects will be magnified that much further. Any faults with performance outside of the query and table design should also be made manifest in the process, and you should be able to experiment happily with indexes and tuning to see what makes the biggest impact with the fewest tradeoffs.

    Mostly, that's how I've been practicing at my current job, where I've been the DBA for almost a year now. My practice runs have been done with a dev copy of our data, though, so I've got things like dates and other values already set in. From this, though, I've been able to formulate test cases to dig up information I was merely curious about, such as finding out which year was our best year for sales, which items we have are the most profitable ones, the slowest months we have each year, typical variance in profits from month to month, and so on.

    This is probably quite basic advice, but I've been browsing these forums regularly and trying to apply problems I see brought up by others to my dev environment, and I've learned a ton from all the testing :-).

    Good advice. Here (see the links below) are a couple of other tools to quickly generate random, constrained (i.e., within controlled limits) data. With a couple of combinations of these methods, you can generate a million rows of test data in just a couple of seconds.

    http://www.sqlservercentral.com/articles/Data+Generation/87901/

    http://www.sqlservercentral.com/articles/Test+Data/88964/
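    The core idioms those articles build on look something like this (a hedged sketch; the bounds are illustrative, but the `CHECKSUM(NEWID())` and `RAND(CHECKSUM(NEWID()))` patterns are the standard way to get a fresh pseudo-random value per row, since plain `RAND()` is evaluated once per statement):

    ```sql
    -- Constrained random values: an integer range, a float range, and a date range.
    SELECT TOP (1000000)
           ABS(CHECKSUM(NEWID())) % 100 + 1                          AS RandInt_1_to_100,
           RAND(CHECKSUM(NEWID())) * (500.0 - 1.0) + 1.0             AS RandFloat_1_to_500,
           DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '20120101')    AS RandDateIn2012
    FROM sys.all_columns AS a
    CROSS JOIN sys.all_columns AS b;
    ```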

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Hello,

    I face a similar issue, and I agree with the answers that the best thing to do is install Developer Edition somewhere, create a sizable amount of data and start playing with it.

    But to add to that, I find it helps to read what others are doing in places like this, and Simple-Talk, and the blogs of other good DBAs. This helps me to see things that I won't encounter in my daily work. I also find it helpful to write. This gives me motivation to explore things I might not otherwise look at, and then forces me to actually organize my thoughts on the topic. I write articles here and recently started my own blog for precisely that reason.

    Also, I think pursuing certifications is helpful. Once you have some real experience they are not all that significant on a resume, but the process of preparing for a certification gives you a goal to work towards and helps organize your study.

    Finally, you can look for SQL Challenges in other places. If your current employer and your time schedule both allow, you may be able to find contract work or volunteer work with SQL Server on the side.

    Good luck with your studies.

    Tim

    ---
    Timothy A Wiseman
    SQL Blog: http://timothyawiseman.wordpress.com/
