Tuning your backups is a wonderful thing. You could easily reduce the time it takes to back up your databases by 50% or more just by changing a few settings, and yet so few people do it. I asked myself why, and came up with two answers.
- People do not know how to tune their backups.
- It takes too long to run through the tuning process.
How to tune your backups
I’m not going to go over methods for tuning your backups in this post. Other people have done a far better job of explaining the adjustments you can make, and of creating videos to help you understand and perform the process yourself.
My biggest concern was the level of effort required to test all the possible permutations of file count, maxtransfersize and buffercount values. After all, who has time to figure all of that out, and then capture the data needed to compare the relative performance characteristics of each combination?
I decided that the best way to do this was to create a nice little test harness which would run through all those tests without manual intervention, and then to get the output from all of those backups into some kind of meaningful graph for an instant visual comparison of the performance differences.
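At its core, a harness like this is just three nested loops over the candidate values. The sketch below is illustrative only; the database name, file path, and parameter ranges are placeholders, not the values the actual script uses:

```sql
-- Illustrative sketch: enumerate buffercount and maxtransfersize
-- permutations for each file count and print the BACKUP commands.
DECLARE @files int = 1, @buffers int, @mts int, @sql nvarchar(max);

WHILE @files <= 4                          -- 1 to 4 backup files
BEGIN
    SET @buffers = 7;                      -- e.g. 7, 14, 28, 56
    WHILE @buffers <= 56
    BEGIN
        SET @mts = 65536;                  -- 64KB up to the 4MB maximum
        WHILE @mts <= 4194304
        BEGIN
            SET @sql = N'BACKUP DATABASE [MyDb] TO '
                + N'DISK = ''X:\Backup\MyDb_1.bak'''   -- one DISK clause per file
                + N' WITH BUFFERCOUNT = ' + CAST(@buffers AS nvarchar(10))
                + N', MAXTRANSFERSIZE = ' + CAST(@mts AS nvarchar(10)) + N';';
            PRINT @sql;                    -- or EXEC sp_executesql @sql to run it
            SET @mts = @mts * 2;
        END;
        SET @buffers = @buffers * 2;
    END;
    SET @files = @files + 1;
END;
```

Doubling the values at each step keeps the test count manageable while still covering the range; valid MAXTRANSFERSIZE values are multiples of 64KB up to 4MB.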
The backup testing script
Step one in the automated tuning is a SQL script I created which accepts a few input variables:
- @DatabaseName – name of the database you want to use for testing
- @MaxBackupFiles – maximum number of files you want to write to at a time
- @BackupFolder – destination folder for the backups
- @WithCompression – whether or not to use SQL Server backup compression
- @CreateJob – whether or not to create a SQL Agent job to run the tests
- @JobLogFileName – file path and name for a log for the job
- @OutputToScreen – outputs the backup commands to the screen
When executed, the script creates a SQL Agent job which you can then run at a time of your choosing. The job will work through a series of backups for the database (40 at minimum) and capture the information in the job log file.
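To give a sense of what each individual test looks like, here is the general shape of a striped, compressed backup command; the database name, paths and values are placeholders, not necessarily what the script emits:

```sql
BACKUP DATABASE [MyDb]
TO  DISK = 'X:\Backup\MyDb_1.bak',
    DISK = 'Y:\Backup\MyDb_2.bak'   -- striping across two files/drives
WITH COMPRESSION,
     MAXTRANSFERSIZE = 1048576,     -- 1MB per transfer to the device
     BUFFERCOUNT     = 50,          -- number of I/O buffers in use
     STATS = 10;                    -- progress messages every 10%
```

Each permutation the job runs is a variation on this command with different file counts, buffercount and maxtransfersize values.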
Be warned: performing this many backup tests, with so many different settings, can and will impact your server performance at some point. Be aware of this, and do not cause issues for your users by running the tests at a busy time; they will not like you for doing so.
Visualizing the results
Once the backup testing job is complete, a job log file is left behind. This log file contains the following information for each of the backups performed:
- number of files backed up to
- maxtransfersize setting used
- buffercount setting used
- speed of the backup in MB/sec
- time the backup took to complete in seconds
Getting that information into a usable form is done with a PowerShell script. A blog post by Jonathan Kehayias described his method for parsing SQLIO output into Excel charts. Based on that script, the updated version parses the backup log file that has been created, imports the information into Excel, and creates a couple of charts.
These charts show the speed (Figure 1) and backup time (Figure 2) for the default backup options, for the absolute fastest backup to NUL, and for all of the other options that were tested.
The graphs provide a nice visual representation of the backup performance, which helps to quickly identify where improvements can be made.
The default SQL backup runs at just over 200MB/sec and takes about 235 seconds to complete, but we know that we can perform that same backup to a NUL device in about 75 seconds at a rate of over 1200MB/sec.
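Backing up to the NUL device discards the data as it is written, so it measures how fast SQL Server can read the database and gives the upper bound on backup speed mentioned above. A minimal sketch, with the database name and tuning values as placeholders:

```sql
-- Throwaway baseline: nothing is written to disk, so this shows the
-- best-case read speed for the backup. Never rely on this "backup"!
BACKUP DATABASE [MyDb]
TO DISK = 'NUL'
WITH COPY_ONLY,                 -- avoid disturbing the backup chain
     MAXTRANSFERSIZE = 4194304,
     BUFFERCOUNT = 50;
```

Using COPY_ONLY here keeps the throwaway test from interfering with your regular differential and log backup chain.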
Just looking through the rest of the data, it is clear that many of the values tested make a negligible difference to backup performance, so we can make an informed decision on what the best file, maxtransfersize and buffercount settings should be.
Hopefully this is something that you can find useful. Please give it a try and send me feedback on your testing efforts.