Following on from my previous post about hitting the Kubernetes API from SQL Server 2025, let’s go a little further and deploy SQL Server 2025 to Kubernetes from…SQL Server 2025.
You may be thinking…why? Well…
In all honesty, this is just a bit of fun. I’m just playing around with the sp_invoke_external_rest_endpoint stored procedure, and I don’t expect anyone to actually do this in a live environment. Anyway…let’s run through how this works.
First things first, we need to update the role created previously to allow our service account access: –
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "services", "persistentvolumeclaims", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
EOF
Storage classes are a cluster-scoped resource so we need to create a ClusterRole to be able to create one: –
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mssql-storageclass-deployer
rules:
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
EOF
And then we need a ClusterRoleBinding to map the ClusterRole to the service account: –
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mssql-storageclass-deployer-binding
subjects:
- kind: ServiceAccount
  name: api-reader
  namespace: default
roleRef:
  kind: ClusterRole
  name: mssql-storageclass-deployer
  apiGroup: rbac.authorization.k8s.io
EOF
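Before going any further, it’s worth a quick sanity check that the service account can actually do everything we’re about to ask of it. kubectl’s auth can-i subcommand, impersonating the api-reader service account, does the trick (each of these should come back with “yes”): –
# cluster-scoped: can the service account create storage classes?
kubectl auth can-i create storageclasses --as=system:serviceaccount:default:api-reader

# namespace-scoped: statefulsets and secrets in default
kubectl auth can-i create statefulsets -n default --as=system:serviceaccount:default:api-reader
kubectl auth can-i create secrets -n default --as=system:serviceaccount:default:api-reader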
Ok, ready to start deploying SQL Server…but before we get to that, we need a storage class. I’ve installed OpenEBS on my cluster, so I’ll reference that: –
DECLARE @deploy NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/apis/storage.k8s.io/v1/storageclasses',
@headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
@method = 'POST',
@payload = '{
"apiVersion": "storage.k8s.io/v1",
"kind": "StorageClass",
"metadata": {
"name": "mssql-storage",
"annotations": {
"openebs.io/cas-type": "local",
"cas.openebs.io/config": "- name: StorageTypen value: hostpathn- name: BasePathn value: /var/local-hostpath"
}
},
"provisioner": "openebs.io/local",
"reclaimPolicy": "Delete",
"volumeBindingMode": "WaitForFirstConsumer"
}',
@response = @deploy OUTPUT
SELECT
s.[name]
FROM OPENJSON(@deploy, '$.result')
WITH (
[name] NVARCHAR(100) '$.metadata.name'
) AS s;
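As a side note, the @response output from sp_invoke_external_rest_endpoint wraps the Kubernetes API’s reply in an envelope as well as the result, so we can also pull out the HTTP status code to confirm the create actually worked. A minimal sketch, tacked onto the end of the same batch and assuming the documented response format with the status under $.response.status.http: –
-- 201 (Created) means the storage class went in
SELECT
    r.code,
    r.[description]
FROM OPENJSON(@deploy, '$.response.status.http')
WITH (
    code INT '$.code',
    [description] NVARCHAR(200) '$.description'
) AS r;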
And we’ll need a secret to store the SQL instance’s sa password: –
DECLARE @deploy NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/secrets',
@headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
@method = 'POST',
@payload = '{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"name": "mssql-sa-secret"
},
"type": "Opaque",
"stringData": {
"MSSQL_SA_PASSWORD": "Testing1122"
}
}',
@response = @deploy OUTPUT
SELECT
s.[name]
FROM OPENJSON(@deploy, '$.result')
WITH (
[name] NVARCHAR(100) '$.metadata.name'
) AS s;
Right! Let’s deploy SQL Server 2025 to Kubernetes with a single persistent volume claim, using the storage class and referencing the secret for the sa password: –
DECLARE @deploy NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/apis/apps/v1/namespaces/default/statefulsets',
@headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
@method = 'POST',
@payload = '{
"apiVersion": "apps/v1",
"kind": "StatefulSet",
"metadata": {
"name": "mssql-statefulset"
},
"spec": {
"serviceName": "mssql",
"replicas": 1,
"selector": {
"matchLabels": {
"name": "mssql-pod"
}
},
"template": {
"metadata": {
"labels": {
"name": "mssql-pod"
}
},
"spec": {
"securityContext": {
"fsGroup": 10001
},
"containers": [
{
"name": "mssql-container",
"image": "mcr.microsoft.com/mssql/server:2025-CTP2.0-ubuntu-22.04",
"ports": [
{
"containerPort": 1433,
"name": "mssql-port"
}
],
"env": [
{
"name": "MSSQL_PID",
"value": "Developer"
},
{
"name": "ACCEPT_EULA",
"value": "Y"
},
{
"name": "MSSQL_AGENT_ENABLED",
"value": "1"
},
{
"name": "MSSQL_SA_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"name": "mssql-sa-secret",
"key": "MSSQL_SA_PASSWORD"
}
}
}
],
"volumeMounts": [
{
"name": "sqlsystem",
"mountPath": "/var/opt/mssql"
}
]
}
]
}
},
"volumeClaimTemplates": [
{
"metadata": {
"name": "sqlsystem"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "1Gi"
}
},
"storageClassName": "mssql-storage"
}
}
]
}
}',
@response = @deploy OUTPUT
SELECT
s.[name],
s.[namespace],
s.replicas
FROM OPENJSON(@deploy, '$.result')
WITH (
[name] NVARCHAR(100) '$.metadata.name',
[namespace] NVARCHAR(100) '$.metadata.namespace',
replicas INT '$.spec.replicas'
) AS s;
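The pod will take a little while to pull the image and come up, and we can keep an eye on that from SQL Server as well. Here’s a rough sketch along the same lines: a GET against the StatefulSet we just created, reading its status (readyReplicas only appears once the pod is actually ready): –
DECLARE @status NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/apis/apps/v1/namespaces/default/statefulsets/mssql-statefulset',
@headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
@method = 'GET',
@response = @status OUTPUT
SELECT
    s.[name],
    s.replicas,
    s.readyReplicas
FROM OPENJSON(@status, '$.result')
WITH (
    [name] NVARCHAR(100) '$.metadata.name',
    replicas INT '$.status.replicas',
    readyReplicas INT '$.status.readyReplicas'
) AS s;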
And finally, we’ll need a load-balanced service (using MetalLB, which I’ve previously installed on the cluster) so we can connect to SQL Server running in Kubernetes: –
DECLARE @deploy NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/services',
@headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
@method = 'POST',
@payload = '{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "mssql-service"
},
"spec": {
"ports": [
{
"name": "mssql-port",
"port": 1433,
"targetPort": 1433
}
],
"selector": {
"name": "mssql-pod"
},
"type": "LoadBalancer"
}
}',
@response = @deploy OUTPUT
SELECT
s.[name],
s.[namespace]
FROM OPENJSON(@deploy, '$.result')
WITH (
[name] NVARCHAR(100) '$.metadata.name',
[namespace] NVARCHAR(100) '$.metadata.namespace'
) AS s;
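And once MetalLB has handed out an address, we can grab the service’s external IP the same way: a GET against the service, digging into its load balancer status (the ingress IP won’t show up until MetalLB has assigned it). Another sketch: –
DECLARE @svc NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/services/mssql-service',
@headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
@method = 'GET',
@response = @svc OUTPUT
SELECT
    s.[name],
    s.externalIP
FROM OPENJSON(@svc, '$.result')
WITH (
    [name] NVARCHAR(100) '$.metadata.name',
    externalIP NVARCHAR(50) '$.status.loadBalancer.ingress[0].ip'
) AS s;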
Cool! Let’s confirm on the cluster: –
kubectl get sc
kubectl get pvc
kubectl get secret
kubectl get all
Final test, connecting to the instance in SSMS: –
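If you’d rather check from the command line instead of (or as well as) SSMS, something like this should do it, assuming sqlcmd is installed locally: grab the external IP MetalLB assigned to the service and connect with the sa password we put in the secret: –
# get the external IP assigned to the service by MetalLB
kubectl get service mssql-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# connect and confirm the version (-C trusts the instance's self-signed certificate)
sqlcmd -S <EXTERNAL-IP> -U sa -P 'Testing1122' -Q "SELECT @@VERSION" -C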
Thanks for reading!


