Open Source SQL DDL Scripting Tool

2008-03-03

I have been a proponent of file-based database development for years, and am always amazed

at how many DBAs and database developers never create files from which to deploy

projects. I am also a fan of managing these files in a source management system such as

Visual SourceSafe.

As such, Microsoft's Visual Studio for Database Professionals [the "Data Dude"]

caught my eye -- I like the manner in which it produces the files and the directory tree

in which it creates and stores them. This makes good sense to me. Then the February issue of SQL

Server Magazine came out with an article entitled "SQL Server 2005 Schema

Scripting," and it got my attention, again. I have developed tools in the past to script

out SQL Server 2000/2005 using both SQL-DMO and SQL-SMO. But this article,

combined with the ideas of "The Data Dude," got me going again.

So I built a widget which scripts out SQL Server 2000/2005 database objects to individual

files in a manner which mimics Microsoft's Visual Studio for Database Professionals.

This tool might be thought of as "a poor man's alternative" to the Data Dude. It

is freely available on the Internet at

http://sourceforge.net/projects/script-sql-db/

. It has been out there since February 4, 2007, and people have found it somehow and

over 200 of them have downloaded it -- about 75 in the first 2 days after SSWUG.org

made a reference to it in their February 14 newsletter. This tells me that others see a need

for such a tool. Scripting the files out the way the Data Dude does makes good sense to me --

and doing it for $0.00 makes even more sense.

The program is a simple C# program which uses SQL-SMO to script out each database object

to its own file. It steps through each object type [table, view, procedure, etc.] and

creates the appropriate folders as it goes. The folder structure is exactly the same as

that produced by the Data Dude -- let's just adopt Microsoft's tree and file

naming convention as a standard and not try to invent a new one.

See "FilesystemTree.txt".

The program is a console application [.exe] which requires 3 parameters:

  • The Server name
  • The Database name
  • The Root Path to store the files

    [the Server name and Database name will be created as sub-folders under this path]

Optionally, for users who cannot logon via Windows Authentication, the Username and

Password may be supplied as the 4th and 5th parameters.
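The parameter handling described above can be sketched as follows. The actual tool is a C# console application; this is a Python illustration of the same logic, and the function name `parse_args` is hypothetical:

```python
def parse_args(argv):
    """Validate the tool's parameters: server, database, root path, and
    optionally a SQL login username/password as the 4th and 5th arguments."""
    if len(argv) not in (3, 5):
        raise SystemExit("usage: scriptdb <server> <database> <rootpath> [user] [password]")
    server, database, root = argv[:3]
    # 3 arguments -> Windows Authentication; 5 -> SQL Server login.
    auth = {"mode": "windows"} if len(argv) == 3 else {
        "mode": "sql", "user": argv[3], "password": argv[4]}
    return {"server": server, "database": database, "root": root, "auth": auth}
```

With three arguments the tool connects via Windows Authentication; supplying all five switches it to a SQL login.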

The entire source code is available on the SourceForge web site, but here are a couple of

highlights. After validating the parameters, it creates an SMO scripter object which contains

the scripting options. Most object types use the same settings, so these are established up

front -- see "Common Scripter.txt". It then simply steps through each object type

and iterates through the SMO collection of that type and sets additional, appropriate

scripting parameters including the filename. For each object it then calls the

simple "ScriptIt" method which writes out the file. See "ScriptIt.txt". An

example of how it iterates through each object type is shown for Stored Procedures

in "StoredProcedures.txt."
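The iterate-and-script loop can be sketched in outline. The real program uses SQL-SMO collections and a "ScriptIt" method in C#; this Python stand-in shows only the folder-and-file mechanics, with the object collections passed in as plain data and the `suffix` values (e.g. "table", "proc") assumed to match the Data Dude's naming convention:

```python
import os

def script_database(root, server, database, objects_by_type):
    """Write one .sql file per object, mirroring the tool's layout:
    <root>/<server>/<database>/<type folder>/<schema>.<name>.<suffix>.sql"""
    base = os.path.join(root, server, database)
    for type_folder, (suffix, objects) in objects_by_type.items():
        folder = os.path.join(base, type_folder)
        os.makedirs(folder, exist_ok=True)  # create the folders as it goes
        for schema, name, ddl in objects:
            path = os.path.join(folder, f"{schema}.{name}.{suffix}.sql")
            with open(path, "w") as f:
                f.write(ddl)  # in the real tool, ScriptIt writes the SMO output
```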

Files, source management, project deployment

Scripting out a database in the manner prescribed by the Data Dude creates dozens,

even hundreds of files for each database. What do you do with all these files? At a bare

minimum you would produce these files periodically just to make sure you have your databases

documented.

However, a better plan is to put them in a source management system such as Visual SourceSafe.

Unless you use source management, "the truth" resides only in the production

databases -- and what "the truth" was in days gone by is unknown -- except in backup

files. What an object looked like last week is not tracked. But for every type of development

other than database development, "the truth" has always resided in the source

management system. Source management tracks who changed each object, when, and why, and

provides a means of labeling sets of files as a project [i.e., a "release"].

Plan A, therefore, is to take these files and check them into the source management system.

From that point forward, development follows the widely accepted checkout/modify/check-in

paradigm. Deployment of a project to the test database and eventually to the production

database must be done using the versions of the files modified for the project

obtained from the source management system, typically using a "get

latest".

So, for example, a CREATE PROCEDURE file originally produced by the tool and now residing

in source management is checked out. The predicate is changed to ALTER PROCEDURE and the

modifications are made and tested. The file is then checked back in to source management --

and the who/when/why are recorded. Some object types don't have an ALTER predicate,

so for these the file is modified to have both the DROP and the CREATE statement. This

tried-and-true source control methodology has the additional benefit of allowing only one

developer at a time to work on each object -- because the source management system

won't let the second developer check the file out until the first developer checks it

back in [usually after the release is deployed].

Tables, of course, contain data so dropping and recreating a table is inconvenient at best.

For tables I suggest that each table can have up to 4 supplemental files, each with a

specific name. To review, each table was given its own file containing the CREATE TABLE

statement with a naming pattern of:

   <schema>.<name>.table.sql

Additions or changes to the table's indexes, triggers and constraints are made by

modifying the separate, individual files containing those child objects. But you need to control

other actions on tables [e.g., for lookup tables the project may need to insert, update

or delete values]. Of course, you often want to add a column or two to the table. And, again,

you need to be able to prevent multiple developers from modifying the table concurrently. So

adopt a standard such as the following for naming a table's supplemental files:

   <schema>.<name>.table.alter.sql
   <schema>.<name>.table.insert.sql
   <schema>.<name>.table.update.sql
   <schema>.<name>.table.delete.sql
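The full set of files a table can own under this convention is easy to generate mechanically. A small Python sketch (the helper name `table_files` is hypothetical):

```python
def table_files(schema, name):
    """The CREATE TABLE file plus the four optional supplemental files
    for a table, under the suggested naming standard."""
    base = f"{schema}.{name}.table"
    return [f"{base}.sql"] + [f"{base}.{op}.sql"
                              for op in ("alter", "insert", "update", "delete")]
```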

Once a table is modified and the <schema>.<name>.table.alter.sql

file is created, this same file would be reused for each ensuing alteration. All changes for

a given deployment for an individual table would be in this file in an ALTER TABLE statement.

Thus, these changes get tracked from deployment to deployment, and the current table's

structure can be recreated from scratch by redeploying the original CREATE TABLE file and

all deployed versions of the ALTER file.
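That replay step amounts to concatenating the original CREATE TABLE script with each deployed version of the alter file, oldest first. A minimal sketch, with a hypothetical `rebuild_script` helper and GO used as the batch separator:

```python
def rebuild_script(create_sql, alter_versions):
    """Recreate a table's current structure from scratch: the original
    CREATE TABLE statement followed by every deployed version of the
    .table.alter.sql file, in deployment order, separated by GO."""
    return "\nGO\n".join([create_sql] + list(alter_versions))
```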

The project includes a command file which automates deployment of the files in a

release. Called "Deploy.cmd", it iterates through the file tree in the

appropriate sequence and executes each file via osql.exe. It takes as arguments:

  • the path of the files [where you put them when you did

    the "get latest" for the current release]

  • the target server
  • the target database
  • optionally, a flag which indicates that you want the files

    deleted after being processed
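The heart of Deploy.cmd -- walk the type folders in a fixed sequence and run each file through osql.exe -- can be sketched in Python. The folder ordering below is an assumption for illustration (parent objects such as tables before the views and procedures that depend on them), and `-E` invokes Windows Authentication:

```python
import os

# Assumed deployment sequence; the real Deploy.cmd defines its own order.
DEPLOY_ORDER = ["Tables", "Views", "Stored Procedures", "Functions"]

def deploy_commands(path, server, database):
    """Build an osql command line for each .sql file under `path`,
    walking the object-type folders in the appropriate sequence."""
    cmds = []
    for folder in DEPLOY_ORDER:
        full = os.path.join(path, folder)
        if not os.path.isdir(full):
            continue
        for fname in sorted(os.listdir(full)):
            if fname.endswith(".sql"):
                cmds.append(f'osql -E -S {server} -d {database} '
                            f'-i "{os.path.join(full, fname)}"')
    return cmds
```

Collecting the command lines first (rather than executing them inline) also makes it easy to add the optional delete-after-processing behavior.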

That's all there is to it. Script out your databases and begin using source management

to control your database development.

SourceForge

The SourceForge web site [

http://sourceforge.net/projects/script-sql-db/

] has a convenient way to report bugs, ask questions, submit suggestions, etc., and all are

welcome. The enhancement which allows a user to connect via Username and Password came from

just such a request entered in the project's Open Discussion Forum.

You can even register to "monitor" the project -- if you do this you will

be notified by email whenever an update to the project occurs. [The project admins do not

know who is on the list -- just that one or more people have registered.]

Of course, you will have the source code so if you want to make your own changes you are

free to do so.

