
Linux Scripting, Part III

In the previous blog posts, we learned how to set up the first part of a standard shell script: how to interactively set variables, including how to pass them in as part of the script execution. In this next step, we’ll use those variables to build out Azure resources. If you’re working on-premises, you can use this same type of scripting with SQL Server 2019 on Linux, but you’ll need to use CLI commands and SQLCMD; I’ll cover that in later posts. Honestly, though, the cloud makes deployment quicker for any business, and with the amount of revenue riding on getting to market faster, it should be the first choice of any DBA with vision.

When I started at Microsoft close to a year ago, and began my subsequent journey to Azure, it was challenging to locate the Azure commands for BASH (which is still incorrectly referred to as “the CLI”). I could locate the PowerShell commands immediately, but as Microsoft matures its foothold in Linux, it is realizing what those of us who were adept in other shell languages already know: there’s one shell that’s winning the war, and that’s BASH. We can love the shell we started with, the one that has served us well, but BASH has been around so long and is so robust that instead of re-inventing the wheel unnecessarily, we should use what is already out there. I now locate the AZ commands to be used with BASH faster than the PowerShell-specific commands. I’m glad Microsoft will continue to support the PowerShell Azure commands, but I highly recommend learning the standard AZ commands, using them in PowerShell, and learning BASH for the future.

OK, off my soapbox and let’s get to the learning…😊

Generating JSON

Now there are a few ways to generate JSON, but sometimes using a simple cat with a here-document and passing in the variables we collected in the previous steps of our script is just a clean way to do it:

cat >./$parametersFile1 <<EOF
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "name": {
            "value": "$adfname"
        },
        "location": {
            "value": "$zone"
        },
        "apiVersion": {
            "value": "2018-06-01"
        }
    }
}
EOF

What we’ve done in the above snippet of code is to:

  1. Cat, i.e. concatenate, the text that falls inside the here-document redirect, (<<EOF), until the closing EOF marker.
    1. This command line can be read as follows: concatenate everything between the EOF markers and redirect the output to parameter file 1.
  2. Use the exact formatting, including indentation, etc.
  3. Write it to the JSON file passed into my variable $parametersFile1, which happens to equal parameters1.json.
  4. Note that I am passing in dynamic values for adfname and zone. The $schema key is escaped as \$schema so the shell leaves it as a literal JSON key instead of trying to expand it as a variable.
  5. Enhancements that could be written into this would be to pull the apiVersion, contentVersion and other values from the Azure catalog, removing any hardcoded values.
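
If you want to sanity-check what the here-document produced, it only takes a line. This is a minimal sketch, assuming jq is available in your Azure Cloud Shell session and that $parametersFile1 has already been set earlier in the script:

# Pretty-print the generated parameters file; jq exits non-zero if the JSON is malformed
jq . ./$parametersFile1 || echo "Invalid JSON in $parametersFile1"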

We will do this for each of the parameter files, updating all of the dynamic values. I prefer smaller, separate JSON files for each resource, which makes it easier to break down the scripts and easier to reuse them, too:

cat >./$parametersFile2 <<EOF
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "name": {
            "value": "$aasname"
        },
        "location": {
            "value": "$zone"
        },
        "sku": {
            "value": "B1"
        },
        "backupBlobContainerUri": {
            "value": ""
        },
        "managedMode": {
            "value": 1
        }
    }
}
EOF

I’ve made a note to give the above file similar enhancements in the future, replacing static values with ones pulled dynamically from the Azure catalog as time permits, making the scripting more robust.
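
As a rough idea of what that enhancement could look like, here is a minimal sketch that pulls an API version for Data Factory from the provider catalog instead of hardcoding it. It assumes the Microsoft.DataFactory provider is registered in the subscription, and the JMESPath query simply grabs the first version listed for the factories resource type:

# Look up an available API version for Data Factory factories
apiVersion=$(az provider show \
  --namespace Microsoft.DataFactory \
  --query "resourceTypes[?resourceType=='factories'].apiVersions[0]" \
  --output tsv)
echo "Using API version: $apiVersion"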

Do the Stuff, Julie

Now that we’ve created everything we need for values and files, let’s step into the first part of the deployment to Azure. Among the files transferred are the data files, used if the customer wants to deploy sample data. This data is sanitized and created from a nightly pull from the database, which generates one script with the DDL and a second with both DDL and data. It’s large in its unzipped format, but compresses down quite small, making it easy to manage, even on GitHub. Due to this, the first step in my script is to clean up any pre-existing files (always thinking ahead, peeps) and then unzip the files to a new directory.

# Unzip data load files
rm -rf ./hied_data
unzip hied_data.zip
mv ./hied_data/* .

As decompression occurs into a subfolder, I move the data back up to the directory I want. I could do this all in one step, but as it’s small, I’m currently just moving it back up to where I want it.
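
If you did want to collapse this into one step, unzip’s -j flag junks the folder paths stored in the archive so the files land directly in the current directory. A quick sketch, assuming the archive nests everything under a hied_data/ folder as described above:

# Extract the files without recreating the hied_data/ folder, overwriting any leftovers
unzip -o -j hied_data.zip -d .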

The next step reduces the amount of coding I need to perform. I set the subscription ID we’re going to use as the default; then I no longer need to pass it into each of the commands. If you do this in all of your scripts, it removes one more thing to have to remember in each deployment.

az account set --subscription $subscriptionID
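
If you aren’t sure which subscription ID to pass in, or want to confirm what the script just set, a quick check is all it takes. A minimal sketch using the standard az account commands:

# List the subscriptions available to the logged-in account
az account list --query "[].{name:name, id:id, isDefault:isDefault}" --output table

# Confirm which subscription is now the default
az account show --query "{name:name, id:id}" --output table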

AZ commands on Linux are case-sensitive, so if you are working in Linux, you must be cognizant of this requirement. The breakdown of the syntax is as follows:

az <command> --<full argument written out> <value>
az <command> -<argument abbreviation> <value>
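
As a quick illustration of the two forms, the following commands are equivalent; --output and -o are the written-out and abbreviated versions of the same argument:

# Full argument written out
az group list --output table

# Argument abbreviation
az group list -o table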

The first resource we’ll deploy is the most important: our resource group. This is the container that will house all of our resources for easier management and control. You’ll also note that I lead into each section of commands with a comment describing what the commands will be doing:

# Create a resource group
az group create \
  --name $groupname \
  --location $zone

The AZ commands can be issued on one line, but with multiple arguments, it’s easier to read if you break the command up into one line per argument, using a backslash at the end of each line until the last argument. No syntax is required to end the command; a simple return will suffice.
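
If you want the script to confirm the resource group actually exists before moving on, az offers a simple check. A minimal sketch, assuming $groupname is still set from the earlier steps:

# Returns true or false, handy for guarding the steps that follow
az group exists --name $groupname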

The next step is to create the SQL Server to house our databases. As you will note, all of the values are dynamically built from the values we created in the last blog post, which are based on the answers the user gave to the questions when executing the script:

# Create a logical server in the resource group
az sql server create \
 --name $servername \
 --resource-group $groupname \
 --location $zone  \
 --admin-user $adminlogin \
 --admin-password $password

Even the password starts from the user’s entry and is then dynamically adjusted to meet the naming convention to ensure success. If you’re concerned about knowing the password, as you’ll see later on in the script, all of the information about this deployment is sent to an output file. The deployment is hardened in the second phase, but we want to ensure that it’s successful for the initial step.
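
The output file itself can be as simple as a few redirected echo statements. A minimal sketch, using a hypothetical file name of deployment_output.txt; the real script’s file name and contents may differ:

# Record the deployment details for the person running the script
{
  echo "Resource group: $groupname"
  echo "SQL Server:     $servername"
  echo "Admin login:    $adminlogin"
  echo "Admin password: $password"
} >> deployment_output.txt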

The last step we’ll cover in this post is the firewall rule. This is the only firewall rule that will be created as part of the deployment; it allows the Azure Cloud Shell session to access the SQL Server to deploy the logical objects and any data requested. Accessing the databases with tools after the deployment, such as SSMS, Visual Studio, etc., will require a firewall rule, but most tools offer to create these for you. This one is a requirement for a successful deployment:

# Configure a firewall rule for the server
az sql server firewall-rule create \
 --resource-group $groupname \
 --server $servername \
 -n AllowYourIp \
 --start-ip-address $startip \
 --end-ip-address $endip

The above code was designed for an IP range, but for this deployment there is only one address, so the starting and ending IP addresses are the same. The rule can be removed after the deployment if desired.
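
Removing it afterward is a one-liner. A minimal sketch, assuming the variables from the deployment are still in scope:

# Remove the Cloud Shell firewall rule once the deployment is complete
az sql server firewall-rule delete \
 --resource-group $groupname \
 --server $servername \
 --name AllowYourIp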

This is the last step in this blog post, but let’s go over what we learned today:

  1. How to generate a JSON file with a cat command
  2. How to unzip dynamically generated scripts/data to prepare for loading
  3. How to use AZ commands
  4. How to dynamically set values for AZ commands
  5. Why we would or would not generate firewall rules as part of the deployment

In the next post, we’ll get into building out the databases, Azure Data Factory and an Analysis Server. Have a great week!

 

 

Kellyn

http://about.me/dbakevlar