
Customer Database Update – A Practical Solution

Introduction

If you are a DBA in a company that develops and maintains various database products for its customers, you know that the release and implementation of a new product version can be a challenge, since you are probably responsible for preparing the database update packages and installing them at the customer sites. Since any product changes over time, and different clients often run different product versions, you need an efficient strategy to perform and keep track of regular database updates (by "update" in this article I mean implementation of a new version of the client-server application with the database on the back end). Ideally this strategy should work with all types of database and product changes, and be applicable to all your customers regardless of the product version they run and the differences in their business rules.

One approach is to change only the objects that need changing, in exactly the way each client needs them changed. Another solution is to change only the database schema, and then recreate everything else anew based on the specific client requirements. The second method is clean and much less error-prone, but it is complicated and requires a lot of work performed in a very strict order, so it is hard - unless we can automate it.

The key points of the article are:

  • updating databases of any size and complexity with a single command
  • implementation of a simple, clear and convenient structure for the database update package
  • generating the correct creation order for database objects that can have hierarchical dependencies
  • creating a program that generates the full set of database update scripts that comprise the package

Simply put, this article describes my approach to the problem, in the form of notes on preparing the database update package for a new product release. It suggests the folder structure, files, scripts and settings that I believe could be useful and convenient. I will discuss some common problems that you might face and offer a solution for them, including code examples that could be helpful if you decide to apply this approach to your own database. The final result of these efforts is a program that automatically generates a database update package containing a number of always-up-to-date scripts that can be run on the customer site with one mouse click. This way, the update can be performed even by less qualified personnel, and even if you are not given access to the customer database.

Parsing Update Script

Whenever developers release a new product version, you have to make the necessary database and client updates. If the customer company has DBAs on staff, then they can implement the required changes. However, your problem becomes slightly more complex if the customer does not have its own DBAs. In this case you have to find a way to help a less qualified employee do it.

The common way of doing it is to write scripts that make the changes and updates and run them in the target database; these scripts can be tested in a dedicated environment and leave a trail of changes for future audits. They are then shipped to every customer, since most customers do not give you access to their databases. Even in your own database, scripting the changes is more convenient and secure.

In my opinion, main database/client releases and patches/hot fixes should be implemented in different ways. In this article we will discuss the first one; implementing patches is the subject of the next article.

Let’s now review our own sample set of scripts. Imagine the ideal folder and file structure of the scripts written for the standard pubs database. As a customer you receive a compressed file called "pubs20.zip"; I usually include the database name and version in the file name. When the zip file is unpacked, the customer has the following directory and file tree structure (see Figure 1):

Figure 1. Folder and file structure.

With this structure it is easier to explain how the update will be done and what exactly will be updated.

Three major things need to be updated: the database schema, the database objects, and the data itself. Changing the schema is a significant update, but curiously enough it is the only thing we will actually change; the database objects and the data that need changes will be completely replaced.

Let’s review the schema updates. There are at least two ways of doing it. One way is to unload the data from the database into flat or XML files, change the database structure, and load the data back into the database. This requires reformatting the data on load, unload, or both, and is a complicated and time-consuming method. It is worthwhile only when the schema is being changed significantly. We are not reviewing such a case in this article.

In the majority of cases the other approach is taken. It involves keeping a detailed history of the previous database changes, and requires developers to thoroughly review them when writing the scripts for the next schema update. The "schema" folder from Figure 1 will contain the database schema update scripts.

There are two possible ways to modify database objects: a partial update and a full update. A partial update means that objects which are no longer needed are dropped, objects which need changes are modified, and newly created objects are added. In my opinion this way is more appropriate for hot fixes and patches issued between main database versions than for performing a full-scale database update.

When modifying the database using the full update, all of the original database objects are removed completely and the new objects replace them. This approach is better for updating the database objects, since it does not require keeping track of the database object history.

All scripts that are needed for the deletion and creation of the database objects are stored in the "drop" and "create" folders respectively. The source code of the various database objects is stored in special directories: the folder named "uddt" contains the source code for the user-defined data types, folder "udf" is used for user-defined functions, folder "view" contains view definitions, folder "sp" is for stored procedures, and folder "trigger" is for triggers.

The next thing to update is the data itself. Usually there are several tables that contain data influencing the application’s functionality. As an example, a database can store the application menu and submenu items, sets of report samples, standard finance information, some geographical data, etc. This information should be either completely or partially refreshed during the database update. Thus the folder named "data" contains the data from all the tables that should be copied into the customer database, and the folder named "copydata" contains all the scripts necessary to do it. Folder "patch" contains scripts to fix inconsistent data, if any.

Folder "job" contains source code for all the jobs that should be run on the

database server. For example, if you need to run a report on a regular basis you

can create a job to do that. Also, you can create a job that will monitor

particular database tables and execute necessary procedures upon the specified

data change.

Having discussed all the things that have to be changed during the database update, let’s now take a look at how this update works. Before you start the update I strongly recommend making a full database backup and ensuring that the backup is good.

To start the database update you have to execute the "run_update_db.bat" batch file (see Figure 1) with no parameters. This file is just a convenient way to call another batch file with a number of preset parameters:

update_db.bat server1 pubs sa_password %CD% customer1 > update_result.txt

You can modify the parameters by changing the server name and the sa password, save the file, and double-click it in Windows Explorer to start the update. The output log file update_result.txt will appear in the current directory.

While the update is running, let's review the "update_db.bat" file (see Listing 1a). The first part of this script checks all the passed parameters:

rem Check parameters
rem Server\Instance
if "%1" == "" goto usage
if "%1" == "/?" goto usage
if "%1" == "help" goto usage
rem Database
if "%2" == "" goto usage
rem sa password
if "%3" == "" goto usage
rem Scripts Directory
if "%4" == "" goto usage
rem Insert Data
if "%5" == "" goto usage
date /T
time /T

The first three parameters set up the connection to the database:

  • Instance name. If you need to update a database on the default instance, this parameter can be the server name, an alias name or the server IP address. If the database is on a named instance, the instance name should be added after a backslash.
  • Database name
  • Password for the sa login

The fourth parameter is the root path to our scripts - the directory where the batch file is stored and where the output log file will be created. It can be a full path like "C:\Build\pubs20\db", or, if you run the script from the current directory, the operating system variable %CD%.

The fifth parameter lets you specify a customer or customer group name in case some data should be updated in a slightly different way for different customers. We will discuss this parameter later.

The first two script commands generate the run start timestamp; the two similar commands at the end of the script generate the run end timestamp and allow us to calculate the run time. These calculations can help us schedule the update for the most appropriate time.

The second part of this script updates the database schema and recreates the database objects. The database objects depend on each other, so it is very important to modify them in the required order.

First of all we should create the new user-defined data types that can be used in the tables’ DDL:

rem Create new data types
osql  -S%1 -d%2 -Usa -P%3 -i%4\uddt\uddt_zip.SQL -n -b
if %ERRORLEVEL%==1 GOTO end

After that, the database structure should be modified. All modification scripts should be run in the order in which they were released by the developers:

rem Run AlterTable scripts
osql  -S%1 -d%2 -Usa -P%3 -i%4\schema\pubs_2_0_01.SQL -n -b
if %ERRORLEVEL%==1 GOTO end
osql  -S%1 -d%2 -Usa -P%3 -i%4\schema\pubs_2_0_02.SQL -n -b
if %ERRORLEVEL%==1 GOTO end

Then the following commands delete all other existing database objects:

rem Run Delete Database Objects Scripts
osql  -S%1 -d%2 -Usa -P%3 -i%4\drop\drop_tr.sql -n
osql  -S%1 -d%2 -Usa -P%3 -i%4\drop\drop_sp.sql -n
osql  -S%1 -d%2 -Usa -P%3 -i%4\drop\drop_view.sql -n
osql  -S%1 -d%2 -Usa -P%3 -i%4\drop\drop_udf.sql -n

After these objects have been deleted they must be recreated in the following order: user-defined functions, views, stored procedures, and finally triggers. Keeping this order is very important because every new group of created objects may need the previous groups of objects to already exist. For example, some views can use UDFs, stored procedures can use both UDFs and views, and so on. The following code recreates the database objects in the described order:

rem Run Create Database Objects Scripts
call create\create_udf.bat %1 %2 %3 %4
call create\create_view.bat %1 %2 %3 %4
call create\create_sp.bat %1 %2 %3 %4
call create\create_tr.bat %1 %2 %3 %4

The third part of this script processes the data that needs to be recreated. In this example we will completely replace the data in the "stores" and "jobs" tables, and change the user-defined error messages in the sysmessages table in the master database. We will also add certain data into the new "app_modules" table, which contains customer-related data chosen based on the fifth parameter. Here are the commands that do all of this:

rem Truncate Tables
osql  -S%1 -d%2 -Usa -P%3 -i%4\copydata\remove_data.sql -n
rem Insert Data
if "%5" == "customer1" call copydata\bcpin_customer1.bat %1 %2 %3 %4
if "%5" == "customer2" call copydata\bcpin_customer2.bat %1 %2 %3 %4
rem Restore Constraints
osql  -S%1 -d%2 -Usa -P%3 -i%4\copydata\restore_constraints.sql -n

Let’s assume that during the software development process some errors were made and, as a result, some of the data needs to be fixed. In the given example we fix data in the discounts table:

rem Run Patches
osql  -S%1 -d%2 -Usa -P%3 -i%4\patch\patch_discounts.sql -n

You can also create jobs that can assist your application. In the example below the job will generate a daily report:

rem Create Jobs
osql -S%1 -d%2 -Usa -P%3 -i%4\job\ytd_report.sql -n

Finally, the script refreshes the database version information:

rem Update ver_history table
osql  -S%1 -d%2 -Usa -P%3 -i%4\copydata\ins_ver_history.sql -n

Now we know what happens when "run_update_db.bat" executes. When the update is done, we can review the "update_result.txt" log file (see Listing 1b). It contains the results of all the commands that the script called, as well as all warning, error, and failure messages. The size of this log file depends on the number of scripts we ran, and it can become really big. To automate the tedious process of finding the typical warnings or errors in the text, you can write a simple program searching for the following keywords: ‘Msg’, ‘Level’, ‘SQLState’, and ‘NativeError’. Below I describe the most typical messages.

A warning message is logged if the stored procedure being created calls another procedure that cannot be found:

C:\Build\pubs20\db>osql -Sserver1 -dpubs -Usa -Psa_password -i"C:\Build\pubs20\db\sp\aubyroyalty.SQL" -n
Cannot add rows to sysdepends for the current stored procedure because it
depends on the missing object 'byroyalty'. The stored procedure will still be
created.

If the update script is correct, this warning can only appear when the stored procedure being created is recursive (it references itself before it exists).

An error message is logged if the creation of a database object fails for any reason; the specific failure reason, the object name, and the code line number are listed in the message:

C:\Build\pubs20\db>osql -Sserver1 -dpubs -Usa -Psa_password -i"C:\Build\pubs20\db\uddt\zip.SQL" -n -b
Msg 15180, Level 16, State 1, Server server1, Procedure sp_droptype, Line 32
Cannot drop. The data type is being used.

The next example is a typical error message occurring when the data is being copied into a table:

C:\Build\pubs20\db>bcp "pubs.dbo.stores" in
"C:\Build\pubs20\db\data\stores.dat" -c -t"|" -Usa -Psa_password -Sserver1
Starting copy...
SQLState = 23000, NativeError = 515
Error = [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot insert the value NULL into column 'stor_id', table 'pubs.dbo.stores'; column does not allow nulls. INSERT fails.
SQLState = 01000, NativeError = 3621
Warning = [Microsoft][ODBC SQL Server Driver][SQL Server]The statement has been terminated.
BCP copy in failed

The log file can be very helpful during update script development. Also, if errors occur during the actual update of the customer's database, this log file can help you identify possible problems in the target database, or find errors in the scripts that were overlooked.

In case of a serious error, you can restore the database from the backup, fix the scripts, and run the process again. Now we are ready to prepare the update package.

Preparing Update Package

Developers usually store their source code in source code control systems, such as Microsoft Visual SourceSafe, CVS, etc. The folder structure in such a system should be the same as, or very similar to, the structure shown in Figure 1. All source code should be stored in the corresponding folders. Only the "create" folder should be empty at this point. We will discuss how to prepare all the database object creation scripts later in this article.

Let’s look at examples of the source code from each folder. Folder "uddt" contains the "zip.sql" script, which creates a new user-defined data type called "zip". Since the script shown in Listing 1a does not delete UDDTs, the script in Listing 2 includes a drop section to avoid an error message if such a data type already exists.

/* zip.SQL */
if exists (select * from dbo.systypes where name = N'zip')
exec sp_droptype N'zip'
GO
setuser
GO
EXEC sp_addtype N'zip', N'char (5)', N'null'
GO
setuser
GO

Listing 2. Create new user defined data type.

Folder "schema" contains two scripts: "pubs_2_0_01.SQL" which modifies

existing tables, and "pubs_2_0_02.SQL" which creates two new tables. Because the

pubs database has no specific users, permission for each database object should

be granted to public role only. This way we can add the Grant section after the

Create section for each created object (see Listing 3).

/* pubs_2_0_01.SQL */
ALTER TABLE publishers
ADD zip zip
GO
ALTER TABLE employee
ADD manager_id         empid
         CONSTRAINT CK_manager_id CHECK (manager_id LIKE
            '[A-Z][A-Z][A-Z][1-9][0-9][0-9][0-9][0-9][FM]' or
            manager_id LIKE '[A-Z]-[A-Z][1-9][0-9][0-9][0-9][0-9][FM]')
GO
ALTER TABLE employee ADD
 FOREIGN KEY (manager_id) REFERENCES employee (emp_id)
GO
/* pubs_2_0_02.SQL */
CREATE TABLE ver_history (
 time_stamp      datetime      NOT NULL DEFAULT (getdate()),
 applied_to_ver  varchar(50)   NULL,
 package         varchar(255)  NULL)
GO
GRANT ALL ON ver_history TO public
GO
CREATE TABLE app_modules (
 mod_id       id            CONSTRAINT UPKCL_appmod PRIMARY KEY CLUSTERED,
 mod_title    varchar(50)   NOT NULL,
 mod_file     varchar(50)   NOT NULL,
 install_key  bit           NOT NULL)
GO
GRANT ALL ON app_modules TO public
GO

Listing 3. Schema modification scripts.

Folder "drop" contains scripts that delete all existing database objects:

triggers, stored procedures, views, and UDF.

/* drop_tr.sql */
-- drop all triggers
declare @trigger_name varchar(255)
,@SQLString NVARCHAR(1000)
DECLARE Trigger_list CURSOR
forward_only static for
select [name]
from sysobjects
where xtype = 'TR' -- trigger
and status >= 0
for read only
OPEN Trigger_list
FETCH NEXT FROM Trigger_list
into @trigger_name
WHILE @@FETCH_STATUS = 0
BEGIN
set @SQLString = N'if exists (select * from dbo.sysobjects where id = object_id(N''[dbo].['+@trigger_name+']'') and OBJECTPROPERTY(id, N''IsTrigger'') = 1)
drop trigger [dbo].['+@trigger_name+']'
exec sp_executesql @SQLString
    FETCH NEXT FROM Trigger_list
 into @trigger_name
END
CLOSE Trigger_list
DEALLOCATE Trigger_list
/* drop_sp.sql */
-- drop all stored procedures
declare @sp_name varchar(255)
,@SQLString NVARCHAR(1000)
DECLARE sp_list CURSOR
forward_only static for
select [name]
from sysobjects
where xtype = 'P' -- stored procedures
and status >= 0
for read only
OPEN sp_list
FETCH NEXT FROM sp_list
into @sp_name
WHILE @@FETCH_STATUS = 0
BEGIN
set @SQLString = N'if exists (select * from dbo.sysobjects where id = object_id(N''[dbo].['+@sp_name+']'') and OBJECTPROPERTY(id, N''IsProcedure'') = 1)
drop procedure [dbo].['+@sp_name+']'
exec sp_executesql @SQLString
    FETCH NEXT FROM sp_list
 into @sp_name
END
CLOSE sp_list
DEALLOCATE sp_list
/* drop_view.sql */
-- drop all views
declare @view_name varchar(255)
,@SQLString NVARCHAR(1000)
DECLARE view_list CURSOR
forward_only static for
select [name]
from sysobjects
where xtype = 'V' -- view
and status >= 0
for read only
OPEN view_list
FETCH NEXT FROM view_list
into @view_name
WHILE @@FETCH_STATUS = 0
BEGIN
set @SQLString = N'if exists (select * from dbo.sysobjects where id = object_id(N''[dbo].['+@view_name+']'') and OBJECTPROPERTY(id, N''IsView'') = 1)
drop view [dbo].['+@view_name+']'
exec sp_executesql @SQLString
    FETCH NEXT FROM view_list
 into @view_name
END
CLOSE view_list
DEALLOCATE view_list
/* drop_udf.sql */
-- drop all UDFs
declare @udf_name varchar(255)
,@SQLString NVARCHAR(1000)
DECLARE udf_list CURSOR
forward_only static for
select [name]
from sysobjects
where xtype in (N'FN', N'IF', N'TF') -- scalar function, inlined table function, table function
and status >= 0
for read only
OPEN udf_list
FETCH NEXT FROM udf_list
into @udf_name
WHILE @@FETCH_STATUS = 0
BEGIN
set @SQLString = N'if exists (select * from dbo.sysobjects where id = object_id(N''[dbo].['+@udf_name+']'') and xtype in (N''FN'', N''IF'', N''TF''))
drop function [dbo].['+@udf_name+']'
exec sp_executesql @SQLString
    FETCH NEXT FROM udf_list
 into @udf_name
END
CLOSE udf_list
DEALLOCATE udf_list

Listing 4. Drop database object scripts.

Folders "udf", "view", and "sp" contain source code with embedded Grant

sections. If actual production database contains more users, and customer

business rules require complex database object/user policy, then all Grant

statements should be moved to a dedicated script in the "create" folder and a

new command should be added to "update_db.bat" in the “Create Database Objects”

section. The command will call this script.
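Such a dedicated grant script is not part of this sample package, but it could reuse the same cursor pattern as the drop scripts in Listing 4. The sketch below is only an illustration; the role name "app_role" is a hypothetical placeholder:

/* grant_sp.sql - illustrative sketch only, not part of the package */
-- grant execute on every stored procedure to an application role
declare @sp_name varchar(255)
,@SQLString NVARCHAR(1000)
DECLARE sp_list CURSOR
forward_only static for
select [name]
from sysobjects
where xtype = 'P' -- stored procedures
and status >= 0
for read only
OPEN sp_list
FETCH NEXT FROM sp_list
into @sp_name
WHILE @@FETCH_STATUS = 0
BEGIN
set @SQLString = N'GRANT EXECUTE ON [dbo].[' + @sp_name + N'] TO app_role'
exec sp_executesql @SQLString
    FETCH NEXT FROM sp_list
 into @sp_name
END
CLOSE sp_list
DEALLOCATE sp_list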

File "areacode.SQL" in the "udf" folder is an example of a user-defined

function returning the area code of a phone number (see Listing 5).

/* UDF areacode */
CREATE FUNCTION areacode
-- input phone number
   (@PhoneNumber char(12))
RETURNS char(3) -- output area code
AS
BEGIN
   RETURN (left(@PhoneNumber,3))
END
GO
GRANT execute ON areacode TO public
GO

Listing 5. New UDF areacode.

Folder "view" contains source code for the old "titleview" view, and two

files with source code for new views – "aupubview" and "empjobview". Please

notice that "aupubview" view is based on "titleview" view.

/* view empjobview */
CREATE VIEW empjobview
AS
SELECT e.fname+' '+e.lname AS emp_name, j.job_desc
FROM employee e, jobs j
WHERE e.job_id=j.job_id
GO
GRANT ALL ON empjobview TO public
GO
/* view aupubview */
CREATE VIEW aupubview
AS
SELECT DISTINCT tv.au_lname, p.pub_name
FROM titleview tv, publishers p
WHERE tv.pub_id = p.pub_id
GO
GRANT ALL ON aupubview TO public
GO

Listing 6. New views for pubs database.

Folder "sp" includes four old procedures: "byroyalty", "reptq1", 

"reptq2",  "reptq3", and the new procedure "aubyroyalty" which uses

"areacode" function and "byroyalty" procedure.

/* proc aubyroyalty */
CREATE PROCEDURE aubyroyalty @pct int
AS
CREATE TABLE #au (au_id varchar(11))
INSERT INTO #au EXEC byroyalty @percentage=@pct
SELECT au_fname, au_lname, dbo.areacode(phone) AS area_code
FROM authors a, #au
WHERE a.au_id = #au.au_id
DROP TABLE #au
GO
GRANT execute ON aubyroyalty TO public
GO

Listing 7. New stored procedure "aubyroyalty".

There is nothing new in the "trigger" folder, just the old "employee_insupd" trigger.

The "data" folder contains the following pipe-delimited data files:

  • stores.dat -  contains the sample records to be inserted into STORES

    table that has no IDENTITY column; some records are new and some are copies of

    already existing records;

  • jobs.dat - contains the sample records to be inserted into JOBS table that

    does have an IDENTITY column; here also, some records are new and some are

    copies of already existing records;

  • app_modules_cust1.dat and app_modules_cust2.dat - contain application data

    for two different customers or customer groups;

  • sysmessages.dat - contains three user-defined error messages.

Script "remove_data.sql" in the "copydata" folder demonstrates both ways of

removing data: truncation and deletion, and contains examples of how to prepare

the tables for data purge -  unchecking or dropping constraints, disabling

triggers, allowing system tables modification, etc. While, script

"restore_constrains.sql" rolls back all these changes after the data has been

replaced.

If new table with data has been created or data in the existing table has

been significantly changed the UPDATE STATISTICS command for such tables can be

added into the "restore_constrains.sql" script.
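For example, a minimal addition for the two tables whose data is completely replaced in this package could look like this (the choice of tables is only illustrative):

-- refresh statistics on the tables whose data was completely replaced
UPDATE STATISTICS stores
UPDATE STATISTICS jobs
GO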

Two batch files, "bcpin_customer1.bat" and "bcpin_customer2.bat", show how to use the BCP utility to copy data into tables that do not have an IDENTITY column, and how to use the BULK INSERT command through the osql utility to insert data into a table that does have an IDENTITY column.

/* remove_data.sql */
-- truncate table without constraints
TRUNCATE TABLE app_modules
GO
-- drop constraints and then truncate table
ALTER TABLE employee DROP CONSTRAINT FK__employee__job_id__1BFD2C07
GO
TRUNCATE TABLE jobs
GO
-- disable constraints and then delete table
ALTER TABLE sales NOCHECK CONSTRAINT FK__sales__stor_id__0AD2A005
ALTER TABLE discounts NOCHECK CONSTRAINT FK__discounts__stor___0F975522
GO
DELETE stores
GO
-- enable system tables update
EXEC sp_sqlexec 'sp_configure "allow updates", 1'
EXEC sp_sqlexec 'reconfigure with override'
GO
DELETE master..sysmessages
WHERE error > 50000
GO

Listing 8. Remove data from tables.

/* restore_constraints.sql */
-- restore constraints
ALTER TABLE employee ADD CONSTRAINT FK__employee__job_id__1BFD2C07
 FOREIGN KEY (job_id) REFERENCES jobs (job_id)
GO
-- enable constraints
ALTER TABLE sales CHECK CONSTRAINT FK__sales__stor_id__0AD2A005
ALTER TABLE discounts CHECK CONSTRAINT FK__discounts__stor___0F975522
GO
-- disable system tables update
EXEC sp_sqlexec 'sp_configure "allow updates", 0'
GO
EXEC sp_sqlexec 'reconfigure with override'
GO

Listing 9. Roll back all changes.

rem bcpin_customer1.bat
rem bulk copy data from data files to database
rem Server\Instance
if "%1" == "" goto usage
if "%1" == "/?" goto usage
if "%1" == "help" goto usage
rem Database
if "%2" == "" goto usage
rem sa password
if "%3" == "" goto usage
rem Data Directory
if "%4" == "" goto usage
rem Insert data into the tables without IDENTITY column
bcp "%2.dbo.app_modules" in "%4\data\app_modules_cust1.dat" -c -t"|" -Usa -P%3 -S%1
bcp "%2.dbo.stores" in "%4\data\stores.dat" -c -t"|" -Usa -P%3 -S%1
rem Insert data into the tables with IDENTITY column
osql -S%1 -d%2 -Usa -P%3 -Q"BULK INSERT jobs FROM '%4\data\jobs.dat' WITH (FIELDTERMINATOR = '|',ROWTERMINATOR = '\n',KEEPIDENTITY)" -n
rem Insert data into the system table
bcp "master.dbo.sysmessages" in "%4\data\sysmessages.dat" -c -t"|" -Usa -P%3 -S%1
goto end
:usage
echo usage: bcpin_customer1.bat Server\Instance Database sa_password ScriptsDirectory
echo Example usage: bcpin_customer1.bat server1 pubs sa_password C:\Build\pubs20\db
:end

Listing 10. Copy data into the tables.

The "patch_discount.sql" script in the "patch" directory (Listing 11)

provides an example of how to fix data if necessary.

/* patch_discounts.sql */
-- fix data in the discounts table
UPDATE discounts
SET lowqty=10, highqty=100
WHERE discounttype='Initial Customer'
GO

Listing 11. Script to fix the data.

The "job" folder contains the source

code of the "ytd_reports" job, that executes the "reptq1" procedure daily at

11:00 PM and saves the report at location "C:\pubs\report\ytd_report.txt".
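The shipped job script is not reproduced here, but a simplified sketch of what it might look like, built with the standard msdb job procedures, is shown below; the job name, step name, and schedule details are assumptions based on the description above:

/* ytd_report.sql - simplified sketch, not the shipped script */
USE msdb
GO
EXEC sp_add_job @job_name = N'ytd_reports'
GO
EXEC sp_add_jobstep @job_name = N'ytd_reports',
 @step_name = N'run reptq1',
 @subsystem = N'TSQL',
 @database_name = N'pubs',
 @command = N'EXEC reptq1',
 @output_file_name = N'C:\pubs\report\ytd_report.txt'
GO
EXEC sp_add_jobschedule @job_name = N'ytd_reports',
 @name = N'daily at 11 PM',
 @freq_type = 4,        -- daily
 @freq_interval = 1,    -- every day
 @active_start_time = 230000
GO
EXEC sp_add_jobserver @job_name = N'ytd_reports', @server_name = N'(local)'
GO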

If all previous commands have completed successfully, the "ins_ver_history.sql" script refreshes the database version information (see Listing 12).

/* ins_ver_history.sql */
INSERT ver_history (applied_to_ver, package)
VALUES('pubs2000.2.0','UpdatePubs2.0_200603071632')
GO

Listing 12. Refresh database version information.
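As a quick check after an update (this query is not part of the package itself), you can read the same table to see which packages have already been applied to a given database:

-- show the most recent update packages applied to this database
SELECT TOP 5 time_stamp, applied_to_ver, package
FROM ver_history
ORDER BY time_stamp DESC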

By now all the folders but one are filled with scripts and data files. The "create" folder still remains empty. We will generate and store in this folder the following four scripts that create the database objects:

  • create_udf.bat
  • create_view.bat
  • create_sp.bat
  • create_tr.bat

Since triggers do not reference each other, they can be generated in any order. However, functions, views, and procedures can be nested and therefore must be created in the reverse order of referencing: referenced objects first. If the referenced objects are not found, the following happens for each type of database object (a simple way to check the result afterwards is shown after the list):

  • functions are created successfully but will produce run-time errors;
  • stored procedures are created with warnings;
  • views fail to be created and return an error message.
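One optional sanity check, not part of the original package, is to compare the number of recreated objects with the number of source files in the corresponding folders after the create step has finished:

-- count recreated objects by type; compare with the files in the udf, view, sp, and trigger folders
SELECT xtype, COUNT(*) AS objects_created
FROM sysobjects
WHERE xtype IN ('FN', 'IF', 'TF', 'V', 'P', 'TR')
  AND status >= 0
GROUP BY xtype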

To generate all these objects in the proper order I use a simple C# program. It assumes that all required source code modules are already in the proper folders, and that the names of the source files correspond to the names of the database objects; there are other ways of parsing the code, but using the file name convention seems easy and reliable. Another advantage of this approach is that it does not depend on any source code management tool.

The size of this article does not allow providing the full program listing, but the key function is shown in Listing 13. This function is a member of the DatabaseObjectCollection class. It creates dependency trees for a set (collection) of database objects. Each collection object has a name, SQL source code (a script), and a list of dependencies where we will store references to the other collection objects it depends on.

To create the dependency trees this function uses two nested foreach loops, both iterating over the whole object collection. The outer loop treats each object as a parent, and the inner loop tries to find its children – the objects this parent depends on. To do this, the function takes the name of each candidate object and searches for that name in the parent's script. If the name is found, the object is added to the parent’s collection of dependencies.

When searching for a name, the function looks for the first valid appearance of this name in the parent’s script. A valid appearance means that the name is either enclosed in double quotes or brackets, or is bordered by a space, tab, or period on the left and by a space, tab, comma, or end of line on the right. For the sake of simplicity we do not analyze the script for comments.

After the dependencies have been created, generating the batch script becomes simple, since each collection object has its own collection of dependencies. To generate the batch script, we recursively traverse the dependency tree of each object. At each step of the traversal, if the current object has not already been added to the batch file, we process its children first and then write the current object to the batch script.

public void BuildObjectDependancies()
{
  string up; //string that stores the name of a potential child
  string script; //string that stores the script of a parent
  Console.Write("["+DateTime.Now.ToString()+"] Analyzing dependencies... ");
  foreach(DatabaseObject parent in this) //traverse each parent
   {
    script = parent.Script.ToUpper(); //get the sql source, of the parent
    foreach(DatabaseObject child in this) //traverse each potential child
     {
      if (child == parent) continue; //skip self-references (e.g. a recursive procedure)
      up = child.Name.ToUpper(); //get the name of the child's script
      int length = child.Name.Length; //length of the name
      char first, last; //the characters that border the name found in the script
      int index=1; //start searching at 1, because the name of sp can't be in the beginning
      bool done = false; //indicates when the search is over
      while (!done && index<script.Length) //search for the name in the parent's script
       {
        if ((index = script.IndexOf(up,index)) != -1)
         { //found another instance of the name in the script
           first = script[index - 1]; //get the bordering char on the left
           index += length; //update the starting index
           if (index >= script.Length) last = '\n'; //get the bordering char on the right
            else last = script[index];
           //check if the appearance is valid
           if (((first == ' ' || first == '\t' || first == '.') && //space, tab or period on the left ?
             (last == ' ' || last == '\t' || last == '\n' || last=='\r' || last==',')) ||
             (first=='\"' && last=='\"') || //bordered by quotes ?
             (first=='[' && last==']')) //bordered by brackets ?
             { //valid appearance of the name found
              parent.DependsOn.Add(child); //set this object as the child of the parent
              done=true; //search is over
             }
         }
        else done = true; //if no instance found, then search is over
      }
  }
}
}

Listing 13. The dependency building function.

Examples of the generated batch scripts are shown below (see Listing 14). Please note that since view "aupubview" references view "titleview" (see Listing 6), the "titleview.SQL" file will be run before the "aupubview.SQL" file; for the same reason, stored procedure "byroyalty" will be created before stored procedure "aubyroyalty" (see Listing 7):

rem create_udf.bat
osql -S%1 -d%2 -Usa -P%3 -i"%4\udf\areacode.SQL" –n
rem create_view.bat
osql -S%1 -d%2 -Usa -P%3 -i"%4\view\titleview.SQL" -n
osql -S%1 -d%2 -Usa -P%3 -i"%4\view\aupubview.SQL" -n
osql -S%1 -d%2 -Usa -P%3 -i"%4\view\empjobview.SQL" -n
rem create_sp.bat
osql -S%1 -d%2 -Usa -P%3 -i"%4\sp\byroyalty.SQL" -n
osql -S%1 -d%2 -Usa -P%3 -i"%4\sp\aubyroyalty.SQL" -n
osql -S%1 -d%2 -Usa -P%3 -i"%4\sp\reptq1.SQL" -n
osql -S%1 -d%2 -Usa -P%3 -i"%4\sp\reptq2.SQL" -n
osql -S%1 -d%2 -Usa -P%3 -i"%4\sp\reptq3.SQL" –n

Listing 14. Generating the database objects.

Now we are ready to package the database update scripts into a zip file (like the "pubs20.zip" mentioned at the beginning) and ship it to the customer. The scripts can be run against either local or remote servers. Please note that there are two restrictions that will prevent running the scripts against a remote server:

  • if you have to create a job on the server;
  • if you use the BULK INSERT command (through the osql utility) to insert data into a table with an IDENTITY column, since BULK INSERT reads the data file from the server's own file system, not from the machine running the scripts.

Conclusion

Based on the provided scripts and material, it is easy to customize and automate the update script preparation process and to write your own program that creates the required folders, checks out source code from the source control system into those folders, and generates all the scripts described in this article. For the past two years I have used such scripts to successfully update various customer versions of a database containing more than 300 tables, 900 views, and 3,000 stored procedures.

Code:
Scripts for Pubs
