February 20, 2025 at 3:24 pm
Hey,
I've previously used this or something like this:
RESTORE DATABASE [DBNAME]
FROM DISK = N'X:\Path\To\File\<FILENAME>.bak'
WITH FILE = 1
,MOVE N'DBNAME' TO N'U:\Path\To\File\DBNAME.mdf'
,MOVE N'DBNAME_log' TO N'L:\Path\To\File\DBNAME_log.ldf'
,NOUNLOAD
,STATS = 5
But I've now encountered a DB with 27 data files (+1 log file).
Is there a way to relocate the files on the restored database without explicitly listing all of them? For context, this is an overnight refresh of the main database to a test copy of it.
Thanks
February 20, 2025 at 4:52 pm
I usually run RESTORE FILELISTONLY and copy-paste the LogicalName and PhysicalName values into a query window, then edit them with search and replace. But I just looked for an easier way, and I think this query does most of the work. If the relative paths differ it will take some adjustment.
-- Run in the source database; builds one MOVE clause per file,
-- swapping the drive letter (adjust F:\ and X:\ to suit).
SELECT name,
       physical_name,
       CONCAT(', MOVE N''', name, ''' TO N''',
              REPLACE(physical_name, 'F:\', 'X:\'), '''')
FROM sys.database_files;
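For reference, the FILELISTONLY step mentioned above is just this (the path is a placeholder):
-- Lists the LogicalName/PhysicalName of every file in the backup
-- without restoring anything.
RESTORE FILELISTONLY
FROM DISK = N'X:\Path\To\File\MyDatabase.bak';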
February 21, 2025 at 9:49 am
Thanks for the reply.
This will be a scheduled overnight process run from an Agent job, so manually running a query first isn't workable.
The current code first checks whether the copy DB exists and, if so, deletes it. Then the restore happens. If I scrap deleting the copy database first and instead just restore over the top, does T-SQL keep the paths/filenames of the target DB, or will it take them from the source DB's backup file?
February 21, 2025 at 2:38 pm
If you specify different paths with MOVE, it will ignore the existing paths and use the ones you specify.
It sounds like you will need to generate dynamic SQL for the restores based on the target database's existence and file paths.
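A minimal sketch of that existence check, assuming the copy is named MyDatabaseCopy:
-- Hypothetical pre-check: if the copy already exists, its current
-- paths can be read from sys.master_files; otherwise derive them
-- from RESTORE FILELISTONLY against the backup.
IF DB_ID(N'MyDatabaseCopy') IS NOT NULL
    SELECT name, physical_name
    FROM sys.master_files
    WHERE database_id = DB_ID(N'MyDatabaseCopy');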
February 21, 2025 at 4:25 pm
If you're restoring the same database every night then you only need to create the restore script once; the static query was just to save typing 90% of it. Once you have the restore script built you can add a job step as inline SQL or a stored procedure. I think WITH REPLACE still takes the file locations from the backup, so you can't avoid MOVE. Here's a more complete example. I did try using STRING_AGG to build the MOVE lines, but it failed once the length exceeded 8000 bytes.
-- Run in the context of the source database (sys.database_files
-- is per-database), e.g. USE MyDatabase first.
DECLARE @NewDBName  NVARCHAR(100) = 'MyDatabaseCopy',
        @BackupPath NVARCHAR(200) = 'F:\database\backups\Mydatabase.bak',
        @SourceData NVARCHAR(200) = 'D:\database\',
        @SourceLog  NVARCHAR(200) = 'D:\database\',
        @TargetData NVARCHAR(200) = 'X:\database\MyDatabaseCopy\data\',
        @TargetLog  NVARCHAR(200) = 'X:\database\MyDatabaseCopy\log\',
        @SQL1 NVARCHAR(2000),
        @SQL2 NVARCHAR(MAX) = '',
        @SQL3 NVARCHAR(1000),
        @CR   NVARCHAR(2) = CHAR(13) + CHAR(10),
        @SQL4 NVARCHAR(MAX);

-- Header and footer of the RESTORE statement.
SET @SQL1 = CONCAT('RESTORE DATABASE ', @NewDBName, @CR, 'FROM DISK = ''', @BackupPath, '''', @CR, 'WITH ')
SET @SQL3 = CONCAT(@CR, 'RECOVERY, REPLACE, STATS = 10;')

-- One MOVE clause per file, swapping the source path for the
-- target path (log files get the log path, everything else the data path).
SELECT @SQL2 = @SQL2 + CONCAT('MOVE ''', a.name, ''' TO ''',
           REPLACE(a.physical_name,
                   IIF(a.type_desc = 'LOG', @SourceLog, @SourceData),
                   IIF(a.type_desc = 'LOG', @TargetLog, @TargetData)), ''',', @CR)
FROM sys.database_files AS a
ORDER BY a.file_id

SET @SQL4 = CONCAT(@SQL1, @SQL2, @SQL3)
SELECT @SQL4
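As an aside, the 8000 failure is STRING_AGG's documented limit when it's fed non-MAX strings; casting the input to NVARCHAR(MAX) should get around it. A sketch, not tested against the full script:
-- Casting the input to NVARCHAR(MAX) makes STRING_AGG return a
-- LOB type instead of erroring past 8000 bytes. The separator
-- has to be a literal or a variable, hence @Sep.
DECLARE @Sep NVARCHAR(10) = ',' + CHAR(13) + CHAR(10);
SELECT STRING_AGG(
           CAST(CONCAT('MOVE ''', name, ''' TO ''', physical_name, '''') AS NVARCHAR(MAX)),
           @Sep) WITHIN GROUP (ORDER BY file_id)
FROM sys.database_files;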
With a lot of files the string may be too long for SSMS to display in a single SELECT, so you might have to UNION several 2000-character slices. Be aware that the slices will probably split mid-line, so you'd need to delete a few stray carriage returns.
-- SUBSTRING's third argument is a length, not an end position,
-- so each slice is 2000 characters.
SELECT SUBSTRING(@SQL4, 1, 2000)
UNION ALL
SELECT SUBSTRING(@SQL4, 2001, 2000)
UNION ALL
SELECT SUBSTRING(@SQL4, 4001, 2000)
UNION ALL
SELECT SUBSTRING(@SQL4, 6001, 2000)
UNION ALL
SELECT SUBSTRING(@SQL4, 8001, 2000)
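If you'd rather skip the copy-paste entirely, the job step could execute the generated string directly (assuming the step runs in the source database's context):
-- Runs the generated RESTORE instead of just displaying it.
EXEC sys.sp_executesql @SQL4;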