Configuring a Database

  • I am trying to come up with the ideal database configuration for my production database server. The database in development is currently 70 MB and is soon scheduled to move to production. The production server's specs are:

    1) Dual processors (1 GHz apiece)

    2) 3 RAID 5 drives, 18.6 GB each

    3) 1 GB of RAM

    I have read numerous articles on how I should configure filegroups and data files vs. log files across physical disks, and I am so confused now. If I have RAID 5 and 34-36 GB of usable space across two physical drives, do I even have to spread filegroups and files across the disks? Or do I just need to create one blob of a filegroup, then spread files across the physical disks within the filegroup? Please help! If anyone has any scripts they've used in the past, I would appreciate viewing them.

    Thanks,

    Julie

    Okay, after everyone's input (which I am very thankful for) I have come up with the following script. Please advise.

    -- =============================================
    -- Create database on multiple data and transaction log files
    -- =============================================
    IF EXISTS (SELECT *
               FROM master..sysdatabases
               WHERE name = N'patriot')
        DROP DATABASE patriot
    GO

    CREATE DATABASE patriot
    ON PRIMARY
    ( NAME = patriot_db_file_1,
      FILENAME = N'd:\program files\microsoft sql server\mssql\data\patriot_db_1.mdf', -- System tables
      SIZE = 10MB,
      MAXSIZE = 200MB,
      FILEGROWTH = 10%),
    ( NAME = patriot_db_file_2,
      FILENAME = N'd:\program files\microsoft sql server\mssql\data\patriot_db_2.ndf', -- User data
      SIZE = 10MB,
      MAXSIZE = 200MB,
      FILEGROWTH = 10%),
    ( NAME = patriot_db_file_3,
      FILENAME = N'f:\program files\microsoft sql server\mssql\data\patriot_db_3.ndf', -- User data for BLOBs & indexes
      SIZE = 10MB,
      MAXSIZE = 200MB,
      FILEGROWTH = 10%)
    LOG ON
    ( NAME = patriot_db_log_file_1,
      FILENAME = N'd:\program files\microsoft sql server\mssql\data\patriot_db_log_1.ldf', -- Transaction log
      SIZE = 2500KB,
      MAXSIZE = 50MB,
      FILEGROWTH = 10%),
    ( NAME = patriot_db_log_file_2,
      FILENAME = N'f:\program files\microsoft sql server\mssql\data\patriot_db_log_2.ldf', -- Transaction log
      SIZE = 2500KB,
      MAXSIZE = 50MB,
      FILEGROWTH = 10%),
    ( NAME = patriot_db_log_file_3,
      FILENAME = N'f:\program files\microsoft sql server\mssql\data\patriot_db_log_3.ldf', -- Transaction log
      SIZE = 2500KB,
      MAXSIZE = 50MB,
      FILEGROWTH = 10%)
    GO
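
    To sanity-check the result after the script runs, sp_helpfile should list each file with its filegroup and the physical path it landed on (this is just a verification step, not part of the create script itself):

    USE patriot
    GO
    -- Show every file in the current database: logical name, filegroup, physical location, size settings
    EXEC sp_helpfile
    GO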


  • When you say 3 RAID 5 disks, is this one RAID 5 array with 5 disks of 18 GB apiece?

    If so, just make a single file/filegroup. I have usually split the system tables from the data to make movement easier if I get more physical drives.

    Here's a basic script:

    /*
    Database Creation Template

    Contains the following parameters:
    ---------------------------------------------------
    database_name      Name of the database
    sys_and_log_path   Physical path on the server to be used for the database system
                       table file and log files
    data_group_name    Name of the data filegroup; the system group defaults to PRIMARY
    data_path          Physical path on the server for the database data tables and indexes
    sys_file_size      Initial size of the system table file
    sys_file_max       Maximum size of the system table file
    sys_file_grow      Percentage growth of the file
    log_file_size      Initial size of the log file
    log_file_max       Maximum size of the log file
    log_file_grow      Percentage growth of the log file
    data_file_size     Initial size of the data table file
    data_file_max      Maximum size of the data table file
    data_file_grow     Percentage growth of the data file

    Uses the following standards:
    ---------------------------------------------------
    System tables file   <database_name>_sys.mdf
    Log file             <database_name>_log.ldf
    Data and index file  <database_name>_data.ndf

    Adds the "Developer" role to the database.
    Optional Add Role commented out at end of script.
    Optional Add User to some role commented out at end of script.
    */
    IF EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE name = N'<database_name, dbname, dbname>')
        DROP DATABASE [<database_name, dbname, dbname>]
    GO

    CREATE DATABASE [<database_name, dbname, dbname>] ON
    (NAME = N'<database_name, dbname, dbname>_sys',
     FILENAME = N'<sys_and_log_path, drive letter, c:\MSSQL7\data>\<database_name, dbname, dbname>_sys.MDF',
     SIZE = <sys_file_size, number, 30>,
     MAXSIZE = <sys_file_max, number, 35>,
     FILEGROWTH = <sys_file_grow, number, 10>%
    )
    LOG ON
    (NAME = N'<database_name, dbname, dbname>_Log',
     FILENAME = N'<sys_and_log_path, drive letter, c:\MSSQL7\data>\<database_name, dbname, dbname>_Log.LDF',
     SIZE = <log_file_size, number, 100>,
     MAXSIZE = <log_file_max, number, 101>,
     FILEGROWTH = <log_file_grow, number, 10>%
    )
    GO

    ALTER DATABASE [<database_name, dbname, dbname>] ADD FILEGROUP [<data_group_name, filegroup name, DataGroup>]
    GO

    ALTER DATABASE [<database_name, dbname, dbname>] ADD FILE
    (NAME = N'<database_name, dbname, dbname>_data',
     FILENAME = N'<data_path, drive letter, c:\MSSQL7\data>\<database_name, dbname, dbname>_data.NDF',
     SIZE = <data_file_size, number, 300>,
     MAXSIZE = <data_file_max, number, 350>,
     FILEGROWTH = <data_file_grow, number, 10>%
    )
    TO FILEGROUP [<data_group_name, filegroup name, DataGroup>]
    GO

    ALTER DATABASE [<database_name, dbname, dbname>] MODIFY FILEGROUP [<data_group_name, filegroup name, DataGroup>] DEFAULT
    GO

    exec sp_dboption N'<database_name, dbname, dbname>', N'autoclose', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'bulkcopy', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'trunc. log', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'torn page detection', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'read only', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'dbo use', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'single', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'autoshrink', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'ANSI null default', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'recursive triggers', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'ANSI nulls', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'concat null yields null', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'cursor close on commit', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'default to local cursor', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'quoted identifier', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'ANSI warnings', N'false'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'auto create statistics', N'true'
    GO
    exec sp_dboption N'<database_name, dbname, dbname>', N'auto update statistics', N'true'
    GO

    use [<database_name, dbname, dbname>]
    GO
    exec sp_addrole 'Developer'
    -- exec sp_addrole 'WebUser'
    -- exec sp_adduser '', '', 'Developer'
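
    Just to show how the <parameter, type, default> placeholders resolve, here's the ADD FILE step with sample values plugged in (the "patriot" name and the f: path are only examples borrowed from Julie's post, and this assumes the DataGroup filegroup has already been added):

    ALTER DATABASE [patriot] ADD FILE
    (NAME = N'patriot_data',
     FILENAME = N'f:\MSSQL7\data\patriot_data.NDF',
     SIZE = 300,    -- MB (the default unit when no suffix is given)
     MAXSIZE = 350, -- MB
     FILEGROWTH = 10%
    )
    TO FILEGROUP [DataGroup]
    GO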

    Steve Jones

    steve@dkranch.net

  • It is 3 RAID 5 disks with 18.9 GB apiece. Thanks for your info.

  • I rarely use filegroups - in my view they are only useful when you get to the point that you can't complete a full backup in the time allowed. But I'm open to discussion on this, Steve!

    Honestly, for a 70 MB db, I'd use a single filegroup. I'd say that is plenty of hardware for what you're doing. You'll easily cache everything that needs to be cached, which will make up for the relatively small disk subsystem.
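
    For that, something as simple as this is all you'd need (a minimal sketch; the file sizes and paths are placeholders, and I'm borrowing the "patriot" name from your script):

    -- One data file in the default PRIMARY filegroup and one log file
    CREATE DATABASE patriot
    ON PRIMARY
    ( NAME = patriot_data,
      FILENAME = N'e:\mssql\data\patriot_data.mdf',
      SIZE = 100MB,
      FILEGROWTH = 10% )
    LOG ON
    ( NAME = patriot_log,
      FILENAME = N'd:\mssql\data\patriot_log.ldf',
      SIZE = 25MB,
      FILEGROWTH = 10% )
    GO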

    Going forward, you need to think about your anticipated growth. You'll save yourself a lot of headache by adding the drive space now instead of later.

    Andy

  • Hi

    My general setup is:

    C drive, RAID 1 - operating system files

    D drive, RAID 1 - database log files

    E drive, RAID 5 - database data files, except tempdb

    F drive, RAID 1 - tempdb

    I generally just use the default filegroup, and full backups are stored in a different location than the transaction log backups. I believe this configuration gives you the best performance and security, but of course it is expensive.
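
    If you want to follow the same layout, tempdb can be moved onto its own drive with ALTER DATABASE ... MODIFY FILE; the new paths take effect the next time SQL Server restarts. The f:\mssql\data path below is only an example, and tempdev/templog are tempdb's default logical file names:

    USE master
    GO
    -- Repoint tempdb's data and log files at the dedicated drive (takes effect on restart)
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'f:\mssql\data\tempdb.mdf')
    GO
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'f:\mssql\data\templog.ldf')
    GO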

  • Hey Diane,

    I'd have to disagree with you about where you're putting tempdb. If you have enough tempdb usage to matter, you'd be better off putting it on your RAID 5 set so you have more available I/O. Even though it's tempdb, it's still logged, so I believe (opinion!) that it's better to treat it as a standard db.
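
    If you want to see whether tempdb I/O actually matters on your box, SQL 2000's fn_virtualfilestats gives cumulative read/write counts per file since the last restart; something like this (database id 2 is tempdb, file id 1 is its primary data file) is a quick check:

    -- Cumulative I/O against tempdb's primary data file since SQL Server last started
    SELECT * FROM ::fn_virtualfilestats(2, 1)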

    Andy

  • Of any db on the server, I always give tempdb as much I/O and disk space as possible. You can bring a server down quickly if your tempdb is on a slow disk or gets full. If you can put all your disks in a single array, I would do RAID 0+1 for speed and recovery. Just my $.02 worth.
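
    Along the same lines, pre-sizing tempdb so it never has to autogrow (or run out of room) under load is cheap insurance; tempdev and templog are its default logical file names, and the sizes here are only examples:

    -- Pre-grow tempdb's data and log files; pick sizes that fit your workload
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 500MB)
    GO
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 100MB)
    GO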

    Wes

  • I'd agree with Wes. If you can do RAID 0+1, I'd do that, even if I have to combine tempdb and the data files.

    Steve Jones

    steve@dkranch.net
