
SQL Server migrations are a headache; ask anyone who’s been through the pain of moving a reasonably large server and I’m sure they will agree.
There are all sorts of methods out there for getting the data over to our new server.
We can simply back up the databases and restore them to their new home. The problem there is that if we’re talking about anything more than a very trivial database, there’s going to be some potentially significant downtime.
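As a point of reference, that approach is just a plain backup and restore. A minimal sketch might look something like this (the logical file names and paths here are purely illustrative):

-- On the source server: take a full backup of the database
BACKUP DATABASE MigrationTest
TO DISK = N'\\FileShare\Migration\MigrationTest.bak'
WITH COMPRESSION, CHECKSUM;

-- On the destination server: restore it, moving the files if the drive layout differs
-- (the logical file names 'MigrationTest' and 'MigrationTest_log' are assumptions)
RESTORE DATABASE MigrationTest
FROM DISK = N'\\FileShare\Migration\MigrationTest.bak'
WITH MOVE 'MigrationTest' TO N'D:\Data\MigrationTest.mdf',
MOVE 'MigrationTest_log' TO N'L:\Logs\MigrationTest_log.ldf',
RECOVERY;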
We can log ship: a similar approach to the above, except that the databases will already be staged on the new kit and the downtime will be much shorter, as we only have to copy and restore the latest transaction log backup. Log shipping is nice and is often my preferred method, but it can be fiddly if there are a large number of databases, with the added stress of making sure that the last log backup has been applied.
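For a single database, that cut over boils down to something like the sketch below (paths are illustrative, and it assumes the copy on the new server has been kept restoring from the regular log backups):

-- On the source: take the final tail log backup and leave the database in RESTORING,
-- so nothing else can change after this point
BACKUP LOG MigrationTest
TO DISK = N'\\FileShare\Migration\MigrationTest_tail.trn'
WITH NORECOVERY;

-- On the destination: apply that final log backup and bring the database online
RESTORE LOG MigrationTest
FROM DISK = N'\\FileShare\Migration\MigrationTest_tail.trn'
WITH RECOVERY;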
Another option that we have is to utilise availability groups and distributed availability groups.
A distributed availability group is a special type of availability group that spans two separate availability groups. The availability groups that participate in a distributed availability group don’t need to be in the same location. They can be physical, virtual, on-premises, in the public cloud, or anywhere that supports an availability group deployment. This includes cross-domain and even cross-platform – such as between an availability group hosted on Linux and one hosted on Windows. As long as two availability groups can communicate, you can configure a distributed availability group with them.
Because it doesn’t require a common cluster, a distributed availability group allows us to link servers in situations where a cluster isn’t possible. Servers could be in remote locations, members of different domains, on different OS levels or even different operating systems (yes, we can link Windows and Linux based AGs).
The ability to link servers in this way gives us a very nice and easy way to replicate data between servers when thinking about a migration.
Let’s have a quick look at our situation. I’ve got two servers, one running SQL2019 and one on SQL2022. I’m looking to migrate from SQL2019 to SQL2022.

Both of those servers host an AG. I’m looking to migrate the database from SQL2019AG to SQL2022AG. The destination AG is currently empty.
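Before linking anything, it’s worth confirming what each instance can see. A quick query along these lines (just a sketch), run on each server, lists the local AGs and their listeners. Remember that it’s the listener’s DNS name we’ll be using in the LISTENER_URL shortly, but paired with the mirroring endpoint port rather than the listener port.

-- List the availability groups on this instance along with their listener details
SELECT ag.name AS AGName,
    ag.is_distributed,
    agl.dns_name AS ListenerName,
    agl.port AS ListenerPort
FROM sys.availability_groups ag
LEFT JOIN sys.availability_group_listeners agl ON agl.group_id = ag.group_id;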

Create the Distributed AG
We’re going to need to link those AGs and start that database replicating between the two. To do that we’re going to need to build a distributed availability group.
Against the source AG, SQL2019AG, we’ll need to run the following, filling in the appropriate AG names and URLs.
CREATE AVAILABILITY GROUP <Distributed AG Name>
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
'<source AG name>' WITH (
    LISTENER_URL = '<source listener URL>:<source mirroring endpoint port>'
    ,AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
    ,FAILOVER_MODE = MANUAL
    ,SEEDING_MODE = AUTOMATIC
),
'<destination AG name>' WITH (
    LISTENER_URL = '<destination listener URL>:<destination mirroring endpoint port>'
    ,AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
    ,FAILOVER_MODE = MANUAL
    ,SEEDING_MODE = AUTOMATIC
);
So, in my example that would be,
CREATE AVAILABILITY GROUP DistAG
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
'SQL2019AG' WITH (
    LISTENER_URL = 'tcp://SQL2019AG.sqlundercover.local:5022'
    ,AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
    ,FAILOVER_MODE = MANUAL
    ,SEEDING_MODE = AUTOMATIC
),
'SQL2022AG' WITH (
    LISTENER_URL = 'tcp://SQL2022AG.sqlundercover.local:5022'
    ,AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
    ,FAILOVER_MODE = MANUAL
    ,SEEDING_MODE = AUTOMATIC
);
NOTE: The port specified in the listener URL is the port that the mirroring endpoint is listening on (usually 5022), not the SQL Server port. This isn’t always clear in some of the documentation.
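If you’re not sure which port your mirroring endpoint is listening on, a quick check against each instance will tell you (a sketch using the endpoint catalog views):

-- Find the database mirroring (HADR) endpoint and the port it's listening on
SELECT dme.name AS EndpointName,
    tcpe.port AS EndpointPort,
    dme.state_desc
FROM sys.database_mirroring_endpoints dme
JOIN sys.tcp_endpoints tcpe ON tcpe.endpoint_id = dme.endpoint_id;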
Now if we take a look under the availability groups tab in SSMS for the source server, we should see our new distributed availability group.

The replicas are showing but it looks like there’s an issue connecting to the 2022 server.
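If you’d rather check that from T-SQL than click around SSMS, something along these lines (just a sketch) run on the source will show the two AGs that make up the distributed AG and how they’re connecting:

-- Show the replicas of the distributed AG (these are the two AGs) and their connection state
SELECT ag.name AS DistributedAGName,
    ar.replica_server_name AS AGName,
    ars.role_desc,
    ars.connected_state_desc,
    ars.synchronization_health_desc
FROM sys.availability_groups ag
JOIN sys.availability_replicas ar ON ar.group_id = ag.group_id
LEFT JOIN sys.dm_hadr_availability_replica_states ars ON ars.replica_id = ar.replica_id
WHERE ag.is_distributed = 1;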
We’ll need to get things setup on there now. Running the following against SQL2022 should do the trick…
ALTER AVAILABILITY GROUP <Distributed AG Name>
JOIN AVAILABILITY GROUP ON
'<source AG name>' WITH (
    LISTENER_URL = '<source listener URL>:<source mirroring endpoint port>'
    ,AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
    ,FAILOVER_MODE = MANUAL
    ,SEEDING_MODE = AUTOMATIC
),
'<destination AG name>' WITH (
    LISTENER_URL = '<destination listener URL>:<destination mirroring endpoint port>'
    ,AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
    ,FAILOVER_MODE = MANUAL
    ,SEEDING_MODE = AUTOMATIC
);
or in my case that’s going to be
ALTER AVAILABILITY GROUP DistAG
JOIN AVAILABILITY GROUP ON
'SQL2019AG' WITH (
    LISTENER_URL = 'tcp://SQL2019AG.sqlundercover.local:5022'
    ,AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
    ,FAILOVER_MODE = MANUAL
    ,SEEDING_MODE = AUTOMATIC
),
'SQL2022AG' WITH (
    LISTENER_URL = 'tcp://SQL2022AG.sqlundercover.local:5022'
    ,AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
    ,FAILOVER_MODE = MANUAL
    ,SEEDING_MODE = AUTOMATIC
);
Let’s give that Availability Groups tab on the source a refresh and see how it’s looking…

Much better.
And let’s take a look at things on the destination SQL2022 server.

As we can see, the MigrationTest database now exists on the destination server and is part of its availability group.

The situation now is that the two AGs are both part of the distributed AG with the databases being replicated from the source to the destination.
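Because we set SEEDING_MODE = AUTOMATIC, it’s also worth keeping an eye on the seeding itself, especially for larger databases. A couple of DMVs (sketched below) will show how it’s getting on:

-- Automatic seeding history: one row per seeding attempt
SELECT start_time,
    completion_time,
    current_state,
    failure_state_desc
FROM sys.dm_hadr_automatic_seeding;

-- In-flight seeding operations, with rough progress and transfer rates
SELECT local_database_name,
    role_desc,
    transferred_size_bytes,
    database_size_bytes,
    transfer_rate_bytes_per_second,
    estimate_time_complete_utc
FROM sys.dm_hadr_physical_seeding_stats;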
Ensure the Destination is Up to Date
To migrate to our new server, all we need to do is fail over the distributed AG. But before we can do that, we’re going to need to make sure that the destination is up to date and there’s going to be no loss of data.
When we created the distributed AG, we set the availability mode to asynchronous. To ensure that we’re not going to lose any data, we’ll need to change it to synchronous.
The reason for setting the availability mode to async to start with is that any latency between the two clusters can result in poor performance on the source databases. If you’re confident that latency isn’t going to be an issue, you could have set it to synchronous when creating the distributed AG.
Check That the Primary Servers of Both Availability Groups Are in Synchronous Commit
If either of the primary servers is in asynchronous commit mode, switch it to synchronous commit.
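To see where you stand, and to switch a replica over if you need to, something like the sketch below will do it. Run it against each AG’s primary; the server name in the MODIFY REPLICA statement is made up for illustration, so swap in your own.

-- Check the commit mode of each replica in the local (non-distributed) AGs
SELECT ag.name AS AGName,
    ar.replica_server_name,
    ar.availability_mode_desc
FROM sys.availability_groups ag
JOIN sys.availability_replicas ar ON ar.group_id = ag.group_id
WHERE ag.is_distributed = 0;

-- Switch a replica to synchronous commit if it isn't already
-- ('SQL2019A' is a placeholder server name)
ALTER AVAILABILITY GROUP SQL2019AG
MODIFY REPLICA ON 'SQL2019A'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);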
Switch the Distributed AG to Synchronous Commit
Run the following code against both the source and the destination to switch the availability mode to synchronous.
ALTER AVAILABILITY GROUP <Distributed AG Name>
MODIFY AVAILABILITY GROUP ON
'<source AG>' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT),
'<destination AG>' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
In my example, the code would be…
ALTER AVAILABILITY GROUP DistAG MODIFY AVAILABILITY GROUP ON
'SQL2019AG' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT),
'SQL2022AG' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
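To confirm the change has taken, you can query the distributed AG’s replicas (which show up as the two participating AGs) and check their availability mode, something like this sketch:

-- Check the commit mode of the two AGs within the distributed AG
SELECT ag.name AS DistributedAGName,
    ar.replica_server_name AS AGName,
    ar.availability_mode_desc
FROM sys.availability_groups ag
JOIN sys.availability_replicas ar ON ar.group_id = ag.group_id
WHERE ag.is_distributed = 1;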
Check Replication is Up To Date
With traditional AGs, we’ve got a nice dashboard which gives us an easy way to check that things are up to date. Unfortunately, we don’t get this for distributed AGs; have a look and you’ll notice that the dashboard option is greyed out.

It would be nice to have some sort of dashboard for distributed AGs; watch this space, I feel a fiddle around with Power BI coming up soon…
Instead, we have to resort to a couple of DMVs to get the info that we need.
Run the following on both the source and destination instances.
SELECT AGs.name,
    DB_NAME(replicaStates.database_id) AS DBName,
    replicaStates.synchronization_state_desc,
    replicaStates.last_hardened_lsn
FROM sys.dm_hadr_database_replica_states replicaStates
JOIN sys.availability_groups AGs ON AGs.group_id = replicaStates.group_id
WHERE AGs.is_distributed = 1
    OR replicaStates.is_primary_replica = 1;
We want to see the synchronisation state showing as ‘SYNCHRONIZED’ and the last_hardened_lsn for each database on one server matching the value for its corresponding database on the opposite server.


The distributed AG is synchronised and the last hardened LSNs match, so I’m happy with that.
We’re ready to fail over.
Fail Over the Distributed AG
Failing over a distributed AG is slightly different to how we’d do it for a normal AG.
The first thing that we’re going to need to do is mark the current primary as a secondary. This is something that’s unique to a distributed AG.
Run this on the source server.
ALTER AVAILABILITY GROUP DistAG SET (ROLE = SECONDARY);
Let’s check out our distributed AG (can’t we just call them DAGs? It’d be much easier to type) in SSMS. As we can see, it’s now showing as secondary.

With things in this state, we’re ready to fail over the distributed AG.
This is the same as you’d do with any AG. Run the following against the destination server. Notice that we’re using FORCE_FAILOVER_ALLOW_DATA_LOSS? Not something that I’d usually recommend but with a distributed AG, we have no other choice. That’s why we made sure that everything was synchronised earlier.
ALTER AVAILABILITY GROUP DistAG FORCE_FAILOVER_ALLOW_DATA_LOSS;
The distributed AG should have failed over and the database should be available on our new server.
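Before calling it done, a quick sanity check on the destination doesn’t hurt. A sketch like this should show the local side of the distributed AG as PRIMARY and the MigrationTest database online:

-- On the destination: confirm the distributed AG role for the local AG
SELECT ag.name AS DistributedAGName,
    ars.role_desc,
    ars.synchronization_health_desc
FROM sys.availability_groups ag
JOIN sys.dm_hadr_availability_replica_states ars ON ars.group_id = ag.group_id
WHERE ag.is_distributed = 1
    AND ars.is_local = 1;

-- And confirm the migrated database is online
SELECT name, state_desc
FROM sys.databases
WHERE name = 'MigrationTest';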

Tidy Up
Now that the migration is complete you can probably get rid of that distributed AG.
Run the following against both sides of the distributed AG and your migration is complete.
DROP AVAILABILITY GROUP DistAG;
I really love how simple a distributed AG can make what is usually quite a stressful process.
Migrations can be prepared and staged ahead of time so that when it comes to cut over day, all you need to worry about is making sure that things are in sync and failing over.
With a bit of thought and planning, any system downtime can be kept to an absolute minimum.