Optimizing Database Restores

You can Google/Bing/AltaVista and get a lot of results for speeding up your backups. Those results will include using third-party tools, performing file and filegroup backups, differential backups and the like. Some of the more technical blogs will talk about adjusting some of the backup parameters to improve throughput.

There’s really not much written (that I’ve found; please add comments with links to posts you’ve found) about tuning database restores.

I started messing with some restore parameters last week and found that it’s possible to make some dramatic performance improvements.


How this started

We were performing a data center migration at work. As a part of this migration we were going from SQL 2000 to SQL 2008R2 for some of our databases.

The database, we’ll call it MyDB, is 620GB in size. There are 5 data files and a single log file. Current backups are performed using SAN snapshot (Snap) technology, which wasn’t going to help us with a migration as we needed to get the data over and set up log shipping, so we needed to perform a regular native backup.

Third-party tools were not an option as this SQL instance was running SQL 2000 SP3a, and we weren’t able to find anything that would work below SP4.

A full backup was taken to local disk, it took a little over 9 hours (server and disk limitations prevented any work to try and improve that).

The backup file was transferred to a USB drive, taken to the new datacenter and copied over to a CIFS share where it would be available for performing restores.


Hardware/software setup

It’s going to help to have an idea of the configuration that’s being used here.


  • Server: HP DL585 G7 (clustered)
  • CPU: 4 x AMD Opteron 6176 SE (2.3 GHz, 12 cores) for a total of 48 cores
  • RAM: 128GB
  • Storage: NetApp SAN with WAFL filesystem – Fibre Channel connected with 2 x 4Gb HBAs, MPIO configured for least queue depth
  • Network: 10Gb/s (HP NC375i) – 2 connections, configured for failover only, no load balancing
  • One LUN defined (as a mount point) for each file in the database to be restored



  • SQL 2008R2 CU7 Enterprise Edition
  • Max server memory: 100GB
  • Affinity masks not set
  • Max degree of parallelism: 6
  • 4 x tempdb files



Basic restore

The initial restore was performed with default options, no optimization was performed.
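If you don’t have the logical file names to hand for the MOVE clauses, you can read them directly from the backup file first. A quick sketch (the UNC path here is a reconstruction of the share used in this post):

```sql
-- List the logical and physical file names contained in the backup,
-- which is what the MOVE clauses in the restore need.
RESTORE FILELISTONLY
FROM DISK = '\\CIFSDevice\Backup\SQLBackups\ProductionForRestore.bak'
```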

RESTORE DATABASE MyDB
FROM DISK = '\\CIFSDevice\Backup\SQLBackups\ProductionForRestore.bak'
WITH MOVE 'MyDB1' TO 'L:\MyDB_Data1\MSSQL\Data\MyDB_Data1.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5


Restore time was 2 hours 48 minutes.


Optimization techniques

I knew that adjusting the MaxTransferSize and BufferCount settings could provide significant improvements in backup performance (go read the fantastic SQLCAT article on this) and figured that the same would be true for restores.

Before I could start changing values I needed to know what values were being used for the basic restore. To do this I enabled trace flags 3213 (output backup/restore configuration information) and 3605 (send the trace output to the SQL Server error log).

DBCC TRACEON (3213, -1)

DBCC TRACEON (3605, -1)
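Both flags are enabled globally (-1), so they remain in effect until the instance restarts. It’s worth checking what’s active and switching the flags back off when you’re finished; a sketch:

```sql
-- Show all globally active trace flags
DBCC TRACESTATUS (-1)

-- Disable the tracing once the investigation is complete
DBCC TRACEOFF (3213, -1)
DBCC TRACEOFF (3605, -1)
```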


With the flags enabled I performed another restore and checked the SQL Server error log for the trace output.


So by default we’re running with a BufferCount of 6 and a MaxTransferSize of 1,024 KB (1 MB).


The MaxTransferSize value is the largest unit of transfer, in bytes, used between SQL Server and the backup media. It can range from 65536 (64 KB) to 4194304 (4 MB) and must be a multiple of 64 KB.

The BufferCount specifies the number of I/O buffers used for the restore operation. Any positive integer is accepted, but be careful with larger numbers: the buffers consume roughly BufferCount multiplied by MaxTransferSize of memory, so an excessive value can cause out-of-memory errors.
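Because the buffer memory is just the product of the two settings, a candidate configuration is easy to sanity-check before trying it. For example, using the values from this post:

```sql
-- Approximate buffer memory = BufferCount * MaxTransferSize, shown in MB
SELECT 6  * 1048576 / 1048576 AS default_buffer_mb,  -- defaults: 6 buffers x 1 MB = 6 MB
       75 * 4194304 / 1048576 AS tuned_buffer_mb     -- tuned: 75 buffers x 4 MB = 300 MB
```

Even the tuned values only claim 300 MB, a tiny slice of the 100GB max server memory on this instance.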


Changing the MaxTransferSize

Not having any obvious hardware restrictions I decided that as the restore was being performed across the network adjusting the MaxTransferSize might provide the biggest benefit.

Feeling confident I just bumped it to the max and performed the restore.

RESTORE DATABASE MyDB
FROM DISK = '\\CIFSDevice\Backup\SQLBackups\ProductionForRestore.bak'
WITH MOVE 'MyDB1' TO 'L:\MyDB_Data1\MSSQL\Data\MyDB_Data1.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , MAXTRANSFERSIZE = 4194304


This time the restore took 45 minutes, a reduction of just over 2 hours from the default.


Changing the BufferCount

Having already reduced the restore time by over 70% I wanted to see if I could get a little bit more performance from it. So I made an adjustment to the BufferCount value and set it to 75.

RESTORE DATABASE MyDB
FROM DISK = '\\CIFSDevice\Backup\SQLBackups\ProductionForRestore.bak'
WITH MOVE 'MyDB1' TO 'L:\MyDB_Data1\MSSQL\Data\MyDB_Data1.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , MAXTRANSFERSIZE = 4194304
    , BUFFERCOUNT = 75


This restore took a little less time, 38 minutes, shaving another 7 minutes off the already shortened time. At this point I was down about 78% over the regular restore command.

I tried pushing the BufferCount to 100 to see if I could trim things further but found no appreciable difference and so left it at 75.


Comparing backup types

As a part of the move, log shipping was to be set up to offload user reporting, so I needed to restore the database from the new SQL 2008R2 server to another server with exactly the same specifications.

The native SQL 2000 backup file was 520GB in size; with no optimizations it restored in 2 hours 48 minutes, and optimized it restored in just 38 minutes (not bad for a network restore).
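To put those numbers in perspective, the effective throughput of the two restores works out as follows (file size and times as above):

```sql
-- Effective restore throughput in MB/s: file size in MB / elapsed seconds
SELECT 520 * 1024 / (168 * 60.0) AS default_mb_per_sec,  -- ~53 MB/s over 2h48m
       520 * 1024 / (38 * 60.0)  AS tuned_mb_per_sec     -- ~233 MB/s over 38 minutes
```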

The compressed SQL 2008R2 backup file was just 121GB in size. An optimized restore of this database took just 23 minutes. That’s an 86% saving over the unoptimized SQL 2000 based restore.
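The same two knobs apply to BACKUP, and on 2008R2 Enterprise compression is available natively, so producing a tuned, compressed backup is a single statement. A sketch (destination path illustrative):

```sql
BACKUP DATABASE MyDB
TO DISK = '\\CIFSDevice\Backup\SQLBackups\MyDB_Compressed.bak'
WITH COMPRESSION
    , MAXTRANSFERSIZE = 4194304
    , BUFFERCOUNT = 75
    , STATS = 5
```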



There are times when you are not able to perform partial restores, or you need to restore entire databases to a new environment in as short a time as possible. Don’t just accept the defaults that SQL Server gives you in these situations. There are things you can do to improve the performance of your restores (and your backups), and it’s worth spending some time now, in your own environment, optimizing them.

A little time playing with these things now could be the difference between your database being down for 20 minutes as opposed to 3 hours.

How much is 2 1/2 hours worth to your business?



Such fun.


7 Responses to Optimizing Database Restores

  1. John Sansom 2011-09-26 at 12:57 #

    Very interesting results. Thanks for sharing.

    I'm currently working through a similar exercise in our shop, albeit using a 3rd party backup utility, and your post has prompted me to look into more areas for potential optimization.

  2. Nicholas Cain 2011-09-26 at 20:27 #

    I'd be interested to see the numbers around your environment and whether you were able to get any savings in time by messing around with the defaults.

  3. Justin Dearing (@zippy1981) 2015-02-05 at 16:52 #

    Thanks for writing this, Nic. I was dealing with smaller files, and restores in minutes so I used 3604 and only set the flags locally

    DBCC TRACEON (3213) -- trace backup statistics
    DBCC TRACEON (3604) -- output trace info to the client

    Using MAXTRANSFERSIZE = 4194304, BUFFERCOUNT = 24 I went from 6xx seconds to 117.821 seconds, and in perfmon, I/O data bytes/sec was pegged at the top of the graph, not spiking. My next question is, if I wanted to tweak this some more, and be a bit more scientific about things, what other perfmon counters or other indicators should I be looking at? Other than using so much memory SQL Server swaps out, what happens if I tune these numbers too high?

    • sirsql 2015-02-05 at 16:59 #

      The big ones are going to be your memory counters and disk performance (on both your source and target). Bear in mind, if you thrash the storage taking your backup then any transactions that are ongoing will be hurt as they are contending for those disk resources. It really comes down to what your app side can handle while you do your thing. After all, it’s all about the end user when it comes to the crunch.

      • Justin Dearing (@zippy1981) 2015-02-05 at 17:42 #

        In this case its a VM that only I’m using, and I stop doing everything when I’m waiting for a restore. Backup is on the same disk as the restore, and getting another drive would not be worth the hassle.


