How have you improved the SQL Servers at your work

  • I am wondering what each of you has done to improve the SQL Servers where you work, hoping to get some ideas I haven't thought of for improving ours.

    I'll start off with one of the improvements I made after starting at bigdough.com:

    We had a DTS package that would run nightly to update data on our web site. It usually took about 30 to 45 minutes to run. What I didn't like about how it worked was that it would truncate the data in the tables before importing the new data. From my point of view this meant 30 to 45 minutes of time when our clients couldn't view any data. I didn't like that.

    What I did was modify the DTS package to create copies of each table with _new on the end of them. Then I imported the data into the new tables. Once the imports were finished, I renamed the current tables to add _old to them and then renamed the tables with _new to remove _new from their names. I placed all the rename commands between a BEGIN and COMMIT transaction so that either all the tables were renamed or none were. I renamed the current tables instead of dropping them, thinking that it would be easier to roll back the name changes. Once the name changes are done, I drop the tables with _old on the end.
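
    In rough T-SQL, the swap for one table looks something like this (Holdings is just a stand-in name; the real package does the same thing for every table it imports):

        BEGIN TRAN
            -- push the live table aside and swap in the freshly loaded one
            EXEC sp_rename 'Holdings', 'Holdings_old'
            EXEC sp_rename 'Holdings_new', 'Holdings'
        COMMIT TRAN

        -- once all the renames are committed, the _old copies can be dropped
        DROP TABLE Holdings_old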

    Now, instead of not being able to access the data during the update, the clients only experience a slight slowness while the data pump tasks are running. To me and our clients this is a great improvement. We used to get complaints (from clients in Japan) when the data was unavailable. Now we don't get those complaints.

    Robert Marda

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

  • A couple of simple things:

    1) Wrote a "heartbeat" check program that makes an ADO connection to each SQL Server every hour. Outages on production servers are reported to the DBAs and to the manager over the DBA team. Development servers which are "production like" are reported to the DBAs but not to the managers. One email per hour, max. Also logs server unavailability.

    2) Helped build stored procedures that reschedule jobs if files from a mainframe process aren't available. This lets us pop a check to see if the files are there and, if so, begin processing. If not, we reschedule for 15 minutes later (there's a rough sketch of the idea after this list).

    3) Worked to consolidate 20+ servers scattered across slow WAN links and bring them back in house. Approximately 800 MB of data had to be pushed out to these servers once a month, causing CIR issues. The same data now resides on a single server in house, with Citrix leveraged to give users access to it.

    4) Implemented backups to disk with file copies (let's not get started on FTP; if it had been an option it would have been used, since there's less overhead than a normal file copy) to a central network "backup" server which is itself backed up. Backups are renamed based on SLAs that determine how many hours, days, or weeks need to be kept on-site. This has allowed us to restore without needing to get a tape from off-site. It also keeps the backup processes fast and efficient.
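
    For the job rescheduling in 2), the idea in T-SQL is roughly the following. The job and schedule names are made up, and xp_fileexist is an undocumented extended procedure, so treat it as a sketch:

        DECLARE @FileExists int

        -- xp_fileexist is undocumented; it returns 1 in the output parameter if the file is there
        EXEC master.dbo.xp_fileexist '\\mainframe\extracts\daily.dat', @FileExists OUTPUT

        IF @FileExists = 1
            EXEC msdb.dbo.sp_start_job @job_name = 'Nightly Mainframe Load'   -- made-up job name
        ELSE
            -- push the schedule out; 153000 means 15:30:00, the real value would be computed
            EXEC msdb.dbo.sp_update_jobschedule
                 @job_name          = 'Nightly Mainframe Load',
                 @name              = 'Retry schedule',
                 @active_start_time = 153000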

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

    K. Brian Kelley
    @kbriankelley

  • This heartbeat check program you developed: is it just a simple program or is it something complex? Our network people have something set up to detect whether a SQL Server is up or not; however, it also reports the servers to be down if the T1 between us and our production servers goes down briefly. I've always wondered if we, from the database side, could do better. We have VB developers and I am trying to learn VB. Do you think something like this could be developed using VB?

    Your mentioning rescheduling jobs reminds me of another improvement I made to help automate our import process. The preparation DTS package checks for files and, if they aren't found, reschedules the job that executes it to run a half hour or an hour later. Upon successful completion, the preparation DTS package schedules and enables the job that executes the import DTS package, and finally the import DTS package disables the job that executes it when it completes. So now all I have to do is watch for failure e-mails.
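
    The enable/disable part is just msdb calls from the last step of each package, something along these lines (the job name is made up):

        -- last step of the preparation package: turn the import job on
        EXEC msdb.dbo.sp_update_job @job_name = 'Import DTS Package', @enabled = 1

        -- last step of the import package: turn its own job back off
        EXEC msdb.dbo.sp_update_job @job_name = 'Import DTS Package', @enabled = 0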

    Robert Marda

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

  • Actually, it's all contained within a DTS package at the present time. The tables for servers to check, for reporting outages, and for determining which servers notify which people are in SQL Server. The DTS package has an ActiveX script that runs through the servers with a simple ADO connection, checks the status of each connection, and, if there isn't one (within the default ADO timeout period of 30 seconds), writes to a staging table that the server couldn't be contacted.

    It then executes two SQL tasks for notification. One task emails the DBAs using xp_sendmail based on the servers they should get notified about. The other emails the manager.

    The final SQL step is to take the information out of the staging table and to put it into the outage table for a record of the unavailability.
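
    Stripped down, the notification and the final move look about like this (the DBAdmin database, table names, and address are placeholders):

        -- mail the DBAs whatever is sitting in the staging table
        EXEC master.dbo.xp_sendmail
             @recipients = 'dbas@example.com',
             @subject    = 'SQL Server heartbeat failure',
             @dbuse      = 'DBAdmin',
             @query      = 'SELECT ServerName, CheckTime FROM dbo.HeartbeatStaging'

        -- keep a history of the outage, then clear the staging table
        INSERT INTO DBAdmin.dbo.ServerOutage (ServerName, OutageTime)
        SELECT ServerName, CheckTime FROM DBAdmin.dbo.HeartbeatStaging

        DELETE FROM DBAdmin.dbo.HeartbeatStaging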

    It's saved our lunch a couple of times. HealthMon (part of SMS) and some enterprise management software can check on the status of services running, but those weren't available to us. So I built the heartbeat check.

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

    K. Brian Kelley
    @kbriankelley

  • I know almost nothing about ADO, unless using an ActiveX Script Task to create a connection counts as ADO. I also know very little about ActiveX. However, it seems to me I could just create a DTS package to fetch a row of data from each server; if it fails, that could mean the server is unavailable. Or would this not be reliable enough?

    Robert Marda

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

  • Brian, sounds like an article

    Here I'm not sure I improved the SQL Servers. I did upgrade from SQL 7 to 2000.

    I also helped architect an in-house data center as opposed to colo or managed. Went from $6k/month in costs to < $3k (US).

    I have built a number of DTS packages that auto-generate reports in Excel format for people who want them.

    Wrote an extraction program that pulled out customer data and let salespeople query, via a web page, for people who might be interested in whatever special they came up with. It then generated a list of emails they could send to. Worked great; never used.

    Also, I restored a table I deleted within 3 minutes (from disk) and no one noticed.

    Steve Jones

    steve@dkranch.net

  • quote:


    I also helped architect an inhouse data center as opposed to colo or managed. Went from $6k/month costs to < $3k (US).


    What were some of the major issues you ran across?

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

    K. Brian Kelley
    @kbriankelley

  • My biggest improvements:

    1) Got rid of the majority of the queries that used cursors and found better, set-based ways to do them (there's a rough sketch after this list).

    2) Got rid of all but the important indexes.

    3) Got most of the queries into stored procedures.

    4) Segmented large tables into historical and current tables.

    5) Stopped allowing foobar queries in Crystal Reports that return far more data than they need. Users love using Crystal's grouping and Select Expert, but I hate that a report showing 300 rows would read 10,000 or more from the database across the link to the web server.

    Several other things I can't remember offhand.
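
    For 1), the typical rewrite was turning a row-by-row cursor into a single set-based statement, something like this (table and column names are invented):

        -- instead of opening a cursor over Orders and updating one row per fetch,
        -- let the engine do the whole thing in one statement
        UPDATE o
        SET    o.Status = 'Shipped'
        FROM   Orders o
        JOIN   Shipments s ON s.OrderID = o.OrderID
        WHERE  o.Status = 'Open'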

  • At present we don't use Crystal Reports, but I have heard of it. What are foobar queries and was it easy to stop them (such as with a change in permissions)?

    Robert Marda

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

  • Crystal Reports is a reporting tool that creates nice reports for management and non-technical types. It gives them a point-and-click interface so they can build reports on data without knowing a lick of SQL. The problem is, they can build some really awful queries through the interface without knowing it. One thing you can do, at least from 8.0 on, is force them to go through your stored procedures. Don't give 'em direct access to the tables. They still have to log on through an ODBC connection, which means you can control permissions based on their login id at the SQL Server level.
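
    At the permission level it's the usual GRANT/DENY work, roughly like this (the role, procedure, and table names are invented):

        -- report writers only get to execute the canned procedures, never touch the tables
        GRANT EXECUTE ON dbo.rptDailySales TO CrystalUsers
        DENY  SELECT  ON dbo.tblWhatever   TO CrystalUsers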

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

    K. Brian Kelley
    @kbriankelley

  • No, I made them use stored procedures and be more specific about the data. Most of the reports are web based and my group controls them. But with Crystal they would let it create the select statement and then apply filters with Select Expert. So what happens is a large number of rows come back and Crystal generates output from only a portion of them.

    When I came in, I took a look at the queries and what Select Expert was doing, and created stored procedures with the proper parameters to do the same thing as Select Expert and thus return far fewer rows from SQL. In addition I moved those reports from being ODBC bound to full ADO, and got a 100 Mbps hub and linked the SQL Server directly to the web server to free up bandwidth on the other NIC to the web sites themselves.

    And "foobar" is a somewhat generic way to say really, really messed up. The worst query I saw was something to the effect of SELECT <10 rows or so here> FROM tblWhatever. Then in Select Expert it would have DATE = 1/1/2002 and NAME = Babs. As you can see, it posed general, non-specific queries; not hard to stop, just time consuming.
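
    The stored procedures themselves are simple; the point is just to push the Select Expert filters into parameters so SQL Server only returns the rows the report needs. Roughly (the column names are invented):

        CREATE PROCEDURE dbo.rptWhatever
            @Date datetime,
            @Name varchar(50)
        AS
        SET NOCOUNT ON

        -- return only the columns the report uses and only the rows that match the filters
        SELECT  OrderDate, CustomerName, Amount
        FROM    tblWhatever
        WHERE   OrderDate = @Date
          AND   CustomerName = @Name
        GO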

  • Thanks for the explanations! I learn something new every day.

    Robert Marda

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

  • I know that feeling. But if I ever quit learning I would be bored.

  • Hi all,

    I set it to use a fixed memory size.

    Not too sure whether it helps or not, as I did not do any profiling before and after fixing the memory size.
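
    For what it's worth, fixing the memory size on SQL Server 2000 comes down to setting min and max server memory to the same value (2048 MB below is only an example):

        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE

        -- using the same value for both pins the buffer pool at a fixed size
        EXEC sp_configure 'min server memory', 2048
        EXEC sp_configure 'max server memory', 2048
        RECONFIGURE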

  • I set up 3 tables on every SQL Server we have (about 20) that have servername, dbname, tablename, and 365 additional columns, D1 through D365. Every night at midnight a process runs on each server that logs the number of records, the size of the data, and the size of the indexes for each table in each database. Then we put up a web front end that allows us to browse the data day to day, week to week, and month to month. It is great for trending table sizes for planning and for spotting performance improvement opportunities with large tables.
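
    The nightly collection is basically a query against sysobjects/sysindexes in each database. A simplified version of the idea, ignoring the pivot into the D1 through D365 columns (the history table name is made up, and the size math is approximate because it skips nonclustered index and text pages):

        -- rows, data pages, and used pages for every user table (pages are 8 KB)
        INSERT INTO DBAdmin.dbo.TableSizeHistory (ServerName, DbName, TableName, RowCnt, DataKB, UsedKB)
        SELECT  @@SERVERNAME,
                DB_NAME(),
                o.name,
                i.rows,
                i.dpages * 8,
                i.used * 8
        FROM    sysobjects o
        JOIN    sysindexes i ON i.id = o.id AND i.indid IN (0, 1)
        WHERE   o.xtype = 'U'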

    Edited by - jschrec on 02/12/2002 11:48:39 AM
