November 26, 2008 at 9:35 am
Some very good pointers there guys, thanks a million. I'd imagine some kind of archiving process along with a server-side trace would suffice, particularly if the trace was set up to capture more than the error logs would.
Thanks again!
November 26, 2008 at 10:46 am
Very nice article! It gets right down to where the tires meet the road, with specifics that even those newest to the product can digest. This is a good list to keep and train with. It also shows more of what a DBA does, on a level that even management can digest, and demonstrates that a database is not just a maintenance-free "bucket" somewhere to dump and retrieve data.
November 26, 2008 at 12:22 pm
Steven Webster (11/26/2008)
I'd certainly agree that a registry hack is not the way forward. Unfortunately, though, where I work an auditor's word is always taken ahead of mine! Interesting point, though, whether the registry would be overwritten if a service pack were applied. I'll test that out and get back.
Isn't an auditor asking you to hack the registry kind of an oxymoron anyway? Shouldn't they be making sure we are using the recommended settings instead of ones that are not supported?
November 27, 2008 at 3:37 am
I would add two more tasks to the checklist:
1. Remove Built-in Administrators.
2. Disable or at least rename the sa account (in 2005 or higher).
As for the logging of successful logins, I would recommend using a LOGON trigger which records the login and the last time it connected. This avoids filling up the SQL errorlog, and it's much easier to search through when you want to know when a login was last used.
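A minimal sketch of that approach (SQL Server 2005 SP2 or later; the table name and schema here are just assumptions, not anything from the post):

```sql
-- Hypothetical example: record each login's last connection time
-- instead of auditing successful logins to the errorlog.
USE master;
GO
CREATE TABLE dbo.LoginAudit
(
    LoginName     sysname  NOT NULL PRIMARY KEY,
    LastConnected datetime NOT NULL
);
GO
CREATE TRIGGER trg_LogonAudit
ON ALL SERVER
FOR LOGON
AS
BEGIN
    UPDATE master.dbo.LoginAudit
       SET LastConnected = GETDATE()
     WHERE LoginName = ORIGINAL_LOGIN();

    IF @@ROWCOUNT = 0
        INSERT master.dbo.LoginAudit (LoginName, LastConnected)
        VALUES (ORIGINAL_LOGIN(), GETDATE());
END;
GO
```

One caution: a logon trigger runs as part of the connection attempt, so if the connecting login can't write to the audit table (or the trigger raises an error), logins will be blocked. Grant the necessary permissions, or create the trigger with EXECUTE AS a login that has them, and test before rolling it out.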
[font="Verdana"]Markus Bohse[/font]
November 27, 2008 at 6:57 am
MarkusB (11/27/2008)
I would add two more tasks to the checklist:
1. Remove Built-in Administrators.
2. Disable or at least rename the sa account (in 2005 or higher).
As for the logging of successful logins, I would recommend using a LOGON trigger which records the login and the last time it connected. This avoids filling up the SQL errorlog, and it's much easier to search through when you want to know when a login was last used.
I was considering removing Built-in Administrators but wasn't sure if that was a good idea or not. I thought I was just being overzealous by not wanting to give server operations any permissions on the SQL Servers.
But as for the sa account, I don't think it needs to be renamed, because you shouldn't be using sa for day-to-day work anyway. I tend to leave the sa account name as is, but set a strong password with at least 15 characters: caps, lower case, numbers, symbols, etc.
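On 2005 and higher that policy can be set in one step; the password here is of course a placeholder:

```sql
-- Keep the sa name as-is, but give it a long random password and
-- enforce the Windows password policy on it.
ALTER LOGIN sa WITH PASSWORD = N'<15+ character random password>',
    CHECK_POLICY = ON;

-- Optionally take it further and disable the login outright;
-- re-enable later with ALTER LOGIN sa ENABLE.
ALTER LOGIN sa DISABLE;
```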
Thanks.
Mohit K. Gupta, MCITP: Database Administrator (2005), My Blog, Twitter: @SQLCAN.
Microsoft FTE - SQL Server PFE
* Sometimes it's the search that counts, not the finding...
* I didn't think so, but if I was wrong, I was wrong. I'd rather do something, and make a mistake than be frightened and be doing nothing. :smooooth:
November 27, 2008 at 7:13 am
Mohit (11/27/2008)
I was considering removing Built-in Administrators but wasn't sure if that was a good idea or not. I thought I was just being overzealous by not wanting to give server operations any permissions on the SQL Servers.
The problem is that Builtin\Administrators are sysadmins if you don't remove them or change them. This can be an audit problem. Does the server operations group grant you domain admin or server admin on every server? You should manage them just like you manage other users and grant them specific permissions based on business needs.
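On 2005 the removal itself is a one-liner; the existence check is just a defensive touch I've added, not something from the post:

```sql
-- Before dropping BUILTIN\Administrators, confirm that at least one
-- other sysadmin login exists, or you can lock yourself out.
IF EXISTS (SELECT 1 FROM sys.server_principals
           WHERE name = N'BUILTIN\Administrators')
    DROP LOGIN [BUILTIN\Administrators];
```

On SQL Server 2000 the equivalent is `EXEC sp_revokelogin 'BUILTIN\Administrators'`.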
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
November 27, 2008 at 7:36 am
Jack Corbett (11/27/2008)
The problem is that Builtin\Administrators are sysadmins if you don't remove them or change them. This can be an audit problem. Does the server operations group grant you domain admin or server admin on every server? You should manage them just like you manage other users and grant them specific permissions based on business needs.
:hehe::laugh::hehe::laugh:
Yeah, right, I don't get access to all the servers or domain admin, LOL. They did have a heart attack; yeah, I said the same thing a few times. Heh, and actually on new servers I am starting to implement that; so far no one has noticed :rolleyes:. Let's see how long it lasts ;-).
Mohit K. Gupta, MCITP: Database Administrator (2005), My Blog, Twitter: @SQLCAN.
Microsoft FTE - SQL Server PFE
* Sometimes it's the search that counts, not the finding...
* I didn't think so, but if I was wrong, I was wrong. I'd rather do something, and make a mistake than be frightened and be doing nothing. :smooooth:
November 27, 2008 at 2:53 pm
Nice article.
I keep 99 (the max) errorlogs. How often to recycle depends on the size of each errorlog and the number of entries in it: once the size goes over 1 MB it takes time to open (in my case), and that's one of the factors I weigh.
I zip the old errorlogs at the end of the year and move them to a central repository. This can be automated, too.
Keeping 25,000-line errorlogs may not be realistic; they would take a lot of space on the current live system.
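The recycling itself is easy to schedule; both procedures below are standard system procedures, and a weekly Agent job is just one possible cadence:

```sql
-- Close the current error log and start a new one, so no single
-- log grows past the point where it is slow to open.
EXEC sp_cycle_errorlog;

-- The SQL Server Agent log can be recycled the same way.
EXEC msdb.dbo.sp_cycle_agent_errorlog;
```

The number of logs retained (up to 99) is configured separately, under SQL Server Logs in Management Studio.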
December 1, 2008 at 1:08 pm
Ken,
Nice article. I will be adding some of the items you mentioned.
Some of the things we do may be of interest.
6. Determine Drive Structure and move system databases if necessary.
We also resize all the system databases, set them to grow by a reasonable number of MB, and set a max size equal to what they would reach after the growth increment is applied 2 to 4 times.
I would stay away from 10 MB as a growth number, as I have seen this still be interpreted as 10%.
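As a sketch of that sizing policy, with tempdb as the example (the sizes are made up; `tempdev` is the default logical name of tempdb's primary data file):

```sql
-- Fixed-MB growth with an explicit cap: start at 512 MB, grow in
-- 256 MB steps, max out at 2 GB (the start size plus the growth
-- increment applied a handful of times).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev,
             SIZE = 512MB,
             FILEGROWTH = 256MB,
             MAXSIZE = 2048MB);
```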
15. Make sure each maintenance job has an output file in a standard directory.
We set up a jobs directory on each server with a cmd, reports, and sql subdirectory.
All jobs are run from the cmd directory and the output of each job is piped to a member in the reports directory with the same name as the cmd member name.
One of the jobs in the cmd directory is sqlcmd that accepts a database and a sql script as input.
When you set up a new server you can copy this directory structure to the server and copy the server jobs with an SSIS package and you have most of the jobs you need on the new server.
We also have a daily report for each server that contains: the space available on each disk drive; the size and space available for each database file; the log space used and recovery mode of each database; and an edited report covering backups, update statistics, and reindexes, along with errors found in the last two logs, backups, and DBCCs. We keep a copy of these reports for each server for 30 days.
December 14, 2008 at 9:39 pm
Wow, no offense to any other authors, but this is probably one of the most valuable articles I have ever seen on SQL Server Central. What a great compilation of immediately useful resources in one place. Thank you for all the work!
January 2, 2009 at 9:17 am
Excellent article, Ken. Your list should be part of every administrator's personal standard practices list, perhaps altered a bit for individual taste and installation specifics.
I also do full audits, but then every night I parse my error logs through a Perl script that ignores successful logins and gives me a text file of just SQL events and failed logins. I can always go back to the server log and check it for successful logins: we're almost exclusively 2000, so we don't do login audit triggers at this time.
I would add one thing for pre-2005 servers: create yourself an additional admin account in case something happens to your network login, or you lose network connectivity and have to go to the local console. I like to use a strong password, sometimes completely random, for sa just to ensure it won't be used, so this serves as my back door when an admin connection can't be used.
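On SQL Server 2000 that fallback account takes two system procedure calls; the login name and password here are placeholders:

```sql
-- Hypothetical spare sysadmin login for when Windows
-- authentication is unavailable (SQL Server 2000 syntax).
EXEC sp_addlogin @loginame = N'dba_fallback',
                 @passwd   = N'<strong random password>';

EXEC sp_addsrvrolemember @loginame = N'dba_fallback',
                         @rolename = N'sysadmin';
```

On 2005 and later the same thing would be done with CREATE LOGIN and ALTER SERVER ROLE.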
And I'd add backing up the master, model, and msdb databases regularly. Great help for restoring systems.
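A minimal version of that system-database backup, with placeholder paths:

```sql
-- master, model, and msdb belong in every backup schedule; WITH INIT
-- overwrites the previous backup set in each file.
BACKUP DATABASE master TO DISK = N'D:\Backups\master.bak' WITH INIT;
BACKUP DATABASE model  TO DISK = N'D:\Backups\model.bak'  WITH INIT;
BACKUP DATABASE msdb   TO DISK = N'D:\Backups\msdb.bak'   WITH INIT;
```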
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
January 2, 2009 at 10:23 am
Forgot something that I do on 2005. I have all my 2005 instances set to use checksums for torn page detection, so I run the following code in addition to my DBCC step:
select * from msdb..suspect_pages
go
The output goes into a text file, suspectpagesresult.txt. If I get any rows in my result, I know I have a problem (haven't gotten any yet).
I also filter my DBCC runs
d:
cd\dbccs
type dbccresult.txt | find "errors" >server1_dbcc.txt
type suspectpagesresult.txt >>server1_dbcc.txt
So my result file looks something like this:
....
DBCC Results:
CHECKDB found 0 allocation errors and 0 consistency errors in database
CHECKDB found 0 allocation errors and 0 consistency errors in database
CHECKDB found 0 allocation errors and 0 consistency errors in database
CHECKDB found 0 allocation errors and 0 consistency errors in database
CHECKDB found 0 allocation errors and 0 consistency errors in database
CHECKDB found 0 allocation errors and 0 consistency errors in database
CHECKDB found 0 allocation errors and 0 consistency errors in database
CHECKDB found 0 allocation errors and 0 consistency errors in database
CHECKDB found 0 allocation errors and 0 consistency errors in database
CHECKDB found 0 allocation errors and 0 consistency errors in database
1> 2> 1> 2> 3> database_id file_id page_id event_type error_count
last_update_date
----------- ----------- -------------------- ----------- -----------
-----------------------
(0 rows affected)
1>
Makes things a lot easier to see if there are problems rather than sifting through pages of DBCC results. If I see a non-zero result, I know there is a problem and I have the full log on my server to investigate further.
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
Viewing 13 posts - 16 through 27 (of 27 total)