Category: SQL

Please Don’t Use Deprecated Data Types

I know that a lot of vendors like to write for the lowest common denominator (i.e. SQL 2000), but really, folks, it’s gone too far. I’m sick of cracking open vendor code that’s certified for SQL 2008 and seeing things like IMAGE and TEXT data types. Microsoft deprecated these types back when it released SQL 2005 (see http://msdn.microsoft.com/en-US/library/ms143729(v=SQL.90).aspx under Textpointers). Why are you still using them six years later?

I bring this up because I’ve come across yet another egregious usage of these data types in vendor code. The vendor in question? Microsoft.

Yes, that’s right: the folks who deprecated these data types six years ago are still using them to a large extent within the ReportServer and ReportServerTempDB databases that support SQL Server Reporting Services. Seriously, Microsoft? Can you please get with the plan and fix this nonsense?

The following query, run against the ReportServer database, will show 14 different tables (31 columns) using a variety of NTEXT and IMAGE data types.

select st.name as TableName, t.name as DataType, sc.name as ColumnName
    from sys.types t
        inner join sys.columns sc
            on t.system_type_id = sc.system_type_id
        inner join sys.tables st
            on sc.object_id = st.object_id
where
    t.name in ('image', 'text', 'ntext')
order by
    st.name, t.name

 
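If you find these types in code you control, the fixes are straightforward: TEXT becomes VARCHAR(MAX), NTEXT becomes NVARCHAR(MAX), and IMAGE becomes VARBINARY(MAX). As a minimal sketch (the table, column, server, and database names here are invented for illustration; test any conversion like this well before it goes near production):

# Hypothetical conversion of deprecated columns to their (MAX) replacements
$sql = @"
ALTER TABLE dbo.VendorDocs ALTER COLUMN Notes NVARCHAR(MAX) NULL;  -- was NTEXT
ALTER TABLE dbo.VendorDocs ALTER COLUMN Photo VARBINARY(MAX) NULL; -- was IMAGE
"@
Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDB' -Query $sql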

I have filed a Connect item asking Microsoft to fix this. Please go vote for it at https://connect.microsoft.com/SQLServer/feedback/details/714117/ssrs-using-deprecated-data-types-in-its-databases and help us rid the world of this old stuff.

Quickly Build Transaction Log Restore Commands With PowerShell

I’m in the middle of a server migration and so I’m setting up log shipping to get the databases consistent between the two servers prior to cutting over.

The backup copy over the WAN was taking a long time, and so I had a lot of transaction log files that I needed to apply. I could have let log shipping take care of performing the log restores, but I found that a secondary process had also created log backups, just with different names. This inconsistency was causing me a problem, so I needed a quick way to build a restore command covering all the files, in order.

This was a quick win for PowerShell. I was able to quickly grab a list of the files in date order, pipe them into a string, and build the restore commands, which I could then copy into SSMS to perform the restores.

# Grab the .trn files from the log shipping share, oldest first
$filelist = dir \\LogShippingPrimary\p$\backup | where-object {$_.Extension -eq '.trn'} | Sort-Object {$_.CreationTime} | select Name

# Build a RESTORE LOG statement for each file
foreach ($file in $filelist)
{
    $restorefile = $file.Name
    $text = "restore log MyDatabase from disk = 'P:\ForRestore\$restorefile' with norecovery;"
    Write-Output $text
}

 

If you wanted to take this a step further you could actually use invoke-sqlcmd to execute the $text string and perform the restore (don’t even contemplate doing that if you aren’t running at least SQL 2008 R2). I always prefer to do that within SSMS though.
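If you do go that route, a minimal sketch might look like the following (it reuses $filelist from above and assumes Invoke-Sqlcmd is available in your session; the large QueryTimeout is there because restoring big log files can easily outrun the default timeout):

# Sketch: execute each restore directly instead of printing it
foreach ($file in $filelist)
{
    $restorefile = $file.Name
    $text = "restore log MyDatabase from disk = 'P:\ForRestore\$restorefile' with norecovery;"
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Query $text -QueryTimeout 65535
}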

While I’ve used this script for a migration task, it also works really well if you need to perform a point-in-time restore of your database from a full backup and a set of transaction log backups. Can you imagine how long it would take to build out the restore script to recover a database that had transaction log backups every 15 minutes for the last 20 hours? That’s 80 log files. Want to do that manually? This is a much nicer way to go.

Optimizing Database Restores

You can Google/Bing/AltaVista and get a lot of results for speeding up your backups. Those results will include using third-party tools, performing file and filegroup backups, differential backups and the like. Some of the more technical blogs will talk about adjusting some of the backup parameters to improve throughput.

There’s really not much written (that I’ve found, please add comments with links to posts you’ve found) that talks about tuning database restores.

I started messing with some restore parameters last week and found that it’s possible to make some dramatic performance improvements.

 

How this started

We were performing a data center migration at work. As a part of this migration we were going from SQL 2000 to SQL 2008R2 for some of our databases.

The database, which we’ll call MyDB, is 620GB in size. There are 5 data files and a single log file. Current backups are performed using Snap technology, which wasn’t going to help us with performing a migration: we needed to get the data over and set up log shipping, so we needed to perform a regular backup.

Third-party tools were not an option, as this SQL 2000 instance was running SP3a and we weren’t able to find anything that would work below SP4.

A full backup was taken to local disk; it took a little over 9 hours (server and disk limitations prevented any work to try and improve that).

The backup file was transferred to a USB drive, taken to the new datacenter and copied over to a CIFS share where it would be available for performing restores.

 

Hardware/software setup

It’s going to help to have an idea of the configuration that’s being used here.

Hardware:

  • Server: HP DL585 G7 (clustered)
  • CPU: 4 x AMD 6176SE (2,294 MHz, 12 cores) for a total of 48 cores
  • RAM: 128GB
  • Storage: NetApp SAN with WAFL filesystem – fiber connected with 2 x 4Gb HBAs, MPIO configured for least queue depth
  • Network: 10Gb/s (HP NC375i) – 2 connections, configured for failover only, no load balancing
  • One LUN defined (as mount points) for each file in the database to be restored

 

Software:

  • SQL 2008R2 CU7 Enterprise Edition
  • Max server memory: 100GB
  • Affinity masks not set
  • Max degree of parallelism: 6
  • 4 x tempdb files

 

 

Basic restore

The initial restore was performed with default options, no optimization was performed.

RESTORE DATABASE MyDB FROM DISK = '\\CIFSDevice\Backup\SQLBackups\Production\ForRestore.bak'
WITH MOVE 'MyDB' TO 'L:\MyDB_Data\MSSQL\Data\MyDB_Data.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , NORECOVERY

 

Restore time was 2 hours 48 minutes.

 

Optimization techniques

I knew that adjusting the MaxTransferSize and BufferCount settings could provide significant improvements in backup performance (go read a fantastic SQLCat article on this) and figured that the same would be true for performing restores.

Before I could start changing values I needed to know what values were being used with the basic restore. To do this I enabled trace flags 3213 (trace SQL activity during a backup/restore) & 3605 (output trace activity to the SQL error log).

DBCC TRACEON (3213, -1)
DBCC TRACEON (3605, -1)

 

With the flags enabled I performed another restore which provided the following information:

[Screenshot: SQL error log output showing the BufferCount and MaxTransferSize used by the restore]

So by default we’re running with a BufferCount of 6 and a MaxTransferSize of 1,024 KB (1 MB).

 

The MaxTransferSize value is the largest unit of transfer to be used between SQL Server and the backup media. This is specified in bytes and can range in value from 65536 (64 KB) to 4194304 (4 MB).

The BufferCount specifies the number of IO buffers to be used for the restore operation. Any positive number can be used; however, be careful with larger values, as each buffer consumes memory and you could potentially cause out-of-memory errors.
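As a rough rule of thumb, the memory consumed by those buffers is BufferCount multiplied by MaxTransferSize, so it’s easy to sanity-check a setting before using it. For example, the 75 buffers and 4 MB transfer size used later in this post work out to about 300 MB:

# Approximate restore buffer memory = BufferCount * MaxTransferSize
$bufferCount = 75
$maxTransferSize = 4MB
($bufferCount * $maxTransferSize) / 1MB    # returns 300 (MB)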

 

Changing the MaxTransferSize

Not having any obvious hardware restrictions, I decided that, as the restore was being performed across the network, adjusting the MaxTransferSize might provide the biggest benefit.

Feeling confident I just bumped it to the max and performed the restore.

RESTORE DATABASE MyDB FROM DISK = '\\CIFSDevice\Backup\SQLBackups\Production\ForRestore.bak'
WITH MOVE 'MyDB' TO 'L:\MyDB_Data\MSSQL\Data\MyDB_Data.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , NORECOVERY
    , MAXTRANSFERSIZE = 4194304

 

This time the restore took 45 minutes, a reduction of just over 2 hours from the default.

 

Changing the BufferCount

Having already reduced the restore time by over 70%, I wanted to see if I could get a little more performance from it, so I adjusted the BufferCount value and set it to 75.

RESTORE DATABASE MyDB FROM DISK = '\\CIFSDevice\Backup\SQLBackups\Production\ForRestore.bak'
WITH MOVE 'MyDB' TO 'L:\MyDB_Data\MSSQL\Data\MyDB_Data.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , NORECOVERY
    , MAXTRANSFERSIZE = 4194304
    , BUFFERCOUNT = 75

 

This restore took a little less time, 38 minutes, shaving another 7 minutes off the already shortened time. At this point I was down about 78% from the unoptimized restore.

I tried pushing the BufferCount to 100 to see if I could trim things further but found no appreciable difference and so left it at 75.

 

Comparing backup types

As a part of the move, log shipping was to be set up to offload reporting for users, and so I needed to perform a restore of the database from the new SQL 2008R2 server to another with exactly the same specifications.

The native SQL 2000 backup file was 520GB in size; with no optimizations it restored in 2 hours 48 minutes, and optimized it restored in just 38 minutes (not bad for a network restore).

The compressed SQL 2008R2 backup file was just 121GB in size. An optimized restore of this database took just 23 minutes. That’s an 86% saving over the unoptimized SQL 2000 based restore.

 

Summary

There are times when you are not able to perform partial restores, or you need to restore entire databases to a new environment in as short a time as possible. Don’t always accept the defaults that SQL Server gives you in these situations. There are things that you can do to improve the performance of your restores (and your backups). It’s worth spending some time now, working in your environment, to optimize your restores.

A little time playing with these things now could be the difference between your database being down for 20 minutes as opposed to 3 hours.

How much is 2 1/2 hours worth to your business?

 

 

Such fun.

SSRS Validation Of Viewstate MAC Failed

You aren’t likely to see me post much about SSRS, mainly because I use it only on rare occasions, and when I do I don’t tend to hit blog-worthy things. Today is different though…

We recently deployed a new SSRS farm at the office. The deployment was smooth, the first reports went on without any issues. Everything looked good.

I was setting up a data driven subscription for a report and received the following error:

Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster

 

What the heck?

Searching the web provided a plethora of hits around this, but they were all related to regular ASP.NET and there wasn’t anything in particular around SSRS and this problem.

After digging around I went back to Books Online and went through the documentation there with a fine-tooth comb, and that’s where I found the solution (see it for yourself at http://msdn.microsoft.com/en-us/library/cc281307(SQL.105).aspx#ViewState )

 

Because Reporting Services was running in a load-balanced configuration, it couldn’t auto-generate the key used to validate the view state. In this case a key had to be generated manually and deployed to each of the SSRS front ends.

I needed a resource to generate a key so I went out to http://aspnetresources.com/tools/machineKey where one click gave me what I needed.

Example key:

<machineKey validationKey="E764BC3B6B17235E6CC9B2C81FA1C2694D54FD8B47BE1E081AE50E3291FE731C392FD7E2B86DD9985498242BD82FBFC3AB53F78DC228E637089686AD3B6936D2" decryptionKey="FC20492987C969987BC88814BB264A4B8986074C0B3452F5DD81877F144D28DB" validation="SHA1" decryption="AES" />

 

I then pasted the key into the ReportManager web.config file on each front end server. After a service restart everything came back up and I was able to deploy the subscription without any further issues.
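As a side note, if you’d rather not pull a key from a third-party website you can generate one locally. Here’s a minimal PowerShell sketch; the byte counts match the SHA1/AES key lengths in the example above:

# Generate machineKey values locally (64-byte validation key, 32-byte decryption key)
function New-KeyHex ([int]$byteCount) {
    $rng = New-Object System.Security.Cryptography.RNGCryptoServiceProvider
    $bytes = New-Object byte[] $byteCount
    $rng.GetBytes($bytes)
    ($bytes | ForEach-Object { $_.ToString('X2') }) -join ''
}
$validationKey = New-KeyHex 64
$decryptionKey = New-KeyHex 32
"<machineKey validationKey=""$validationKey"" decryptionKey=""$decryptionKey"" validation=""SHA1"" decryption=""AES"" />"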

 

Moral of the story here: Carefully read Books Online, it’s actually a great reference.

 

Such fun.

 

Creating A Clustered SQL Install ini File Using PowerShell–Part 3

In parts 1 and 2 the cluster was made ready for the SQL install and the Active Directory objects were created. In this final part of the series we get to creating the ini file and installing SQL.

As an FYI this script is designed to install the following components along with the Database Engine:

  • SQL Server Replication
  • Full-Text Search
  • Business Intelligence Development Studio
  • Client Tools Connectivity
  • Integration Services
  • Client Tools Backwards Compatibility
  • Client Tools SDK
  • SQL Server Books Online
  • Management Tools – Basic
    • Management Tools – Complete
  • SQL Client Connectivity SDK
  • Microsoft Sync Framework

The ini can be quickly modified to remove some of these components if they are not required (I’m looking to add selections for these items in a future version).

 

Copy the script to the server

Download the CreateClusteredSQLInstallFile.ps1 and copy it over to the server. From there it can either be opened directly in PowerShell ISE or called from within the PowerShell shell.

If calling the script from the shell you might get an error:

 

If so you’ll need to reopen the shell as an administrator and change the execution policy to RemoteSigned

set-executionpolicy Remotesigned

 

This time it should run without any errors.

 

Working through the script

The first thing to come up is an informational box letting you know that you’ll be creating ini files:

 

Next, depending upon whether you started the script through the shell or the ISE, you’ll see different things… In the ISE you’ll see a pop-up window asking whether you want to “Install a new clustered instance or add a node to an existing cluster?” This is in case you want to generate an ini file for a cluster you already have in place. By default this will create files for a new clustered install of SQL Server:

In the shell you will just get a prompt:

 

A new window will appear asking where you want to put the ini files when they are created. I’ve chosen to put them in C:\SQL Install ini (this is the same whether you use the shell or the ISE)

 

Next you’ll be prompted to enter the SQL virtual name (going forward I’ll be showing the prompts through the ISE, keep an eye on the shell if that’s what you are using). This is the disabled computer account we created in Active Directory:

 

The next prompt will ask for an instance name; here I’m going with PROD (which will give us the SQL instance of GROO\PROD):

 

Follow this up by entering the IP address:

 

The next two prompts are for the SQL Service and SQL Agent accounts to be used:

 

Now choose a root folder for the SQL system databases (master/model/msdb). A subdirectory of SQLSystem will automatically be created, so I am using the root of G:

 

A similar window will pop up and ask for the locations of TempDB, the data and log files, and backups. This is just another case of choosing the appropriate locations. As a note, under each location you choose the following subfolders will be created and used:

  • TempDB – <root>\MSSQL\Data
  • SQL Data Files – <root>\MSSQL\Data
  • SQL Log Files – <root>\MSSQL\Logs
  • Backups – <root>\MSSQL\Backup

 

 

The final step is a prompt that will appear asking what failover clustered disks should be used. These will be listed under available storage in the Failover Cluster Manager. Only the disks to actually be used should be added. This is a comma separated list of those disks (the screenshot shows the disks next to the prompt for easy identification):

 

A completion pop-up appears indicating what files were created and how to use them:

 

The filenames are generated based upon the user creating the script and the time that the script started executing (this helps to prevent accidental duplicates).

 

If you’d used the shell to execute the script you’d expect to see the following:

 

I’ve got files, so now what?

Now that we have generated the ini files it’s time to put them to use. The first thing I like to do is rename them to something more appropriate, like the SQL virtual name and their purpose.

 

I also like to store these files away in a repository; this way it’s a very quick and easy operation to add a node to a cluster or find out what installation settings were used.

 

Can we do the SQL install already?

We sure can, it’s go time! But first, take a quick look at the ini file that was created:

;File created by: AWESOMESAUCE\Nic
;File creation date: 2011-07-17 16:47
;Script to install new SQL clustered instance
;SQLSERVER2008 Configuration File
[SQLSERVER2008]
IACCEPTSQLSERVERLICENSETERMS="TRUE"
FAILOVERCLUSTERNETWORKNAME="GROO"
INSTANCEID="PROD"
INSTANCENAME="PROD"
FAILOVERCLUSTERGROUP="SQL Server (PROD)"
FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.137.150;Public Network;255.255.255.0"
INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server"
INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server"
INSTANCEDIR="C:\Program Files\Microsoft SQL Server"
ACTION="InstallFailoverCluster"
FEATURES=SQLENGINE,REPLICATION,FULLTEXT,BIDS,CONN,IS,BC,SDK,BOL,SSMS,ADV_SSMS,SNAC_SDK,OCS
HELP="False"
INDICATEPROGRESS="True"
QUIET="True"
QUIETSIMPLE="False"
X86="False"
ENU="True"
ERRORREPORTING="False"
SQMREPORTING="False"
FILESTREAMLEVEL="0"
ISSVCSTARTUPTYPE="Automatic"
ISSVCACCOUNT="NT AUTHORITY\NetworkService"
SQLCOLLATION="SQL_Latin1_General_CP1_CI_AS"
FTSVCACCOUNT="NT AUTHORITY\LOCAL SERVICE"
SQLSYSADMINACCOUNTS="awesomesauce\nic"
SQLSVCACCOUNT="awesomesauce\Groo_SQLEngine"
AGTSVCACCOUNT="awesomesauce\Groo_SQLAgent"
INSTALLSQLDATADIR="G:\SQLSystem"
SQLTEMPDBDIR="G:\TempDB\MSSQL\Data"
SQLUSERDBDIR="G:\SQLData\MSSQL\Data"
SQLUSERDBLOGDIR="G:\SQLData\MSSQL\Logs"
SQLBACKUPDIR="H:\MSSQL\Backup"
FAILOVERCLUSTERDISKS="SQL Backup Disk" "SQL Data Disk"

 

 

That’s all the information needed to perform the install.

Now open up an elevated command prompt and go to the location of the install media. Then just call setup.exe, passing along the location and name of the SQLInstall ini file together with the SQL Engine and Agent service account passwords, and hit enter:
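The original screenshot of the command isn’t reproduced here, but it looks something like this (the ini filename is whatever you renamed yours to, and the passwords are illustrative; /SQLSVCPASSWORD and /AGTSVCPASSWORD are the documented setup parameters for the two service accounts):

setup.exe /ConfigurationFile="C:\SQL Install ini\GROO_SQLInstall.ini" /SQLSVCPASSWORD="EnginePassword" /AGTSVCPASSWORD="AgentPassword"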

 

Now go and make yourself a beverage of your choice; the install will take a little while. After an indeterminate amount of time you should see the following nondescript message in the command prompt window:

 

Not exactly a whole bunch of bells and whistles there, but “Setup result: 0” tells us that the install was successful (even though it’s somewhat cryptic).

We can test this by cranking up SSMS and connecting:

 

Add another node

A single-node cluster, while nice, isn’t particularly highly available, so we need to have this SQL instance deployed on the other node(s). This is a quick and painless task. Just copy the SQL Install ini folder over to the other node(s), log in and run setup.exe, this time calling the AddNode.ini file:
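Again as a sketch, with an illustrative filename:

setup.exe /ConfigurationFile="C:\SQL Install ini\GROO_AddNode.ini" /SQLSVCPASSWORD="EnginePassword" /AGTSVCPASSWORD="AgentPassword"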

 

Again, go for a walk around the block, talk to your colleagues and eventually come back to:

 

 

Rinse and repeat for as many nodes as you need.

 

Summary

Set up your cluster, ensure Active Directory is good to go, and use CreateClusteredSQLInstallFile.ps1 to quickly build the ini files to install SQL Server on to your cluster. It’s quick and easy. It allowed me to deploy 4 SQL Server instances to 5 nodes in under 4 1/2 hours.

Give it a try, let me know what you think. I’ll be looking to enhance it in the future.

 

Read Part 1

Read Part 2

Creating A Clustered SQL Install ini File Using PowerShell–Part 2

In Part 1 I showed how the cluster was readied for the SQL install. In this short part 2 I’ll go over the Active Directory configuration components so that everything is ready for us to start performing the install.

 

Create the computer account

The first thing is to create a new Computer object in Active Directory. This will be the virtual name used in the cluster.

 

Once added we need to go back in and edit the properties of the Computer (you’ll need to use Advanced Features for this bit, look under the view menu item).

Go to the Security tab of the Computer properties and hit the Add button. In the Select Users, Computers, Service Accounts or Groups window hit Object Types and ensure that Computers is checked. The cluster account will be used to manage the Computer resource (in this case CLUSTER1), so we add CLUSTER1$.

 

And then give CLUSTER1$ Full Control over the computer object.

 

Finally the computer object needs to be disabled (if it’s not disabled your cluster install will not work correctly and it will fail to come online)
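If you’d rather script these steps than click through them, here’s a rough equivalent using the ActiveDirectory PowerShell module (the names and OU path are from my lab and purely illustrative; creating the object disabled up front gets you to the same end state):

# Sketch: create the virtual computer object disabled, then give CLUSTER1$ full control
Import-Module ActiveDirectory
New-ADComputer -Name 'GROO' -Path 'OU=SQL Computers,DC=awesomesauce,DC=local' -Enabled $false
$groo = Get-ADComputer 'GROO'
$clusterSid = (Get-ADComputer 'CLUSTER1').SID
$acl = Get-Acl "AD:\$($groo.DistinguishedName)"
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($clusterSid, 'GenericAll', 'Allow')
$acl.AddAccessRule($ace)
Set-Acl "AD:\$($groo.DistinguishedName)" $acl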

 

 

Create the service accounts

In the interest of security I’ll be using one account for the SQL Engine and another for the SQL Agent. Quickly create a couple of accounts with nice secure passwords (that of course you will keep somewhere secure, like in a KeePass file).

 

I like to give the accounts good descriptive names. It makes them easier to find later (actually the best way is to have a separate OU for the SQL accounts and for the SQL computer objects as it makes it a lot easier to manage and create Group Policies for them).
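These can be scripted too; a minimal sketch with the ActiveDirectory module (the account name, description, and OU below are examples only):

# Sketch: create one of the two service accounts
$pwd = Read-Host "Password for Groo_SQLEngine" -AsSecureString
New-ADUser -Name 'Groo_SQLEngine' -SamAccountName 'Groo_SQLEngine' `
    -Description 'SQL Server engine service account for GROO\PROD' `
    -Path 'OU=SQL Accounts,DC=awesomesauce,DC=local' `
    -AccountPassword $pwd -PasswordNeverExpires $true -Enabled $true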

 

The boring stuff is done

The setup items are complete; we’re now ready to create the ini file… in part 3.

Creating A Clustered SQL Install ini File Using PowerShell–Part 1

I’ve written a few posts recently around clustering. Most of these have been inspired by working on building out 4 new clusters consisting of 16 nodes and 14 new SQL instances. Each of the servers in the clusters is identical, and each one of the SQL installs needed to be exactly the same.

Ensuring that all the installs are the same can be challenging when using the GUI, especially when you are installing a lot of instances. In this case:

  • Cluster A – 5 nodes x 4 SQL instances = 20 installs
  • Cluster B  – 4 nodes x 3 SQL instances = 12 installs
  • Cluster C – 4 nodes x 4 SQL instances = 16 installs
  • Cluster D – 3 nodes x 3 SQL instances = 9 installs
  • Total installs: 57

Given how slow it is to use the SQL installer GUI I figured it would take me about 2 1/2 lifetimes to perform the installs I needed.

 

A better and more consistent way to install SQL

So what would be a better way to perform these installs? It makes sense to perform automated installs using ini files that hold all the configuration information for the instance.

There’s a good MSDN article on “How to: Install SQL Server 2008 R2 Using a Configuration File”.

Using the SQL Server instance prep tool through the GUI is again a major undertaking. I figured that there had to be a better and faster way to build an ini file to perform the installs I needed. From previous cluster installs I had a couple of configuration.ini files lying around, and it’s a pretty easy task to manipulate those to reference new clusters/disks and IP addresses. The thing is, it’s not particularly quick and it’s also very prone to error.

So what would be a better way?

 

PowerShell to the rescue

Yet again, PowerShell is the savior in this situation.

I put together a script that is quick and lightweight, asks the relevant questions and provides you with the ini files that you need to install a new clustered SQL Server instance and the ini files that you need to perform the installs on the additional cluster nodes.

 

First things first

Before using the script we need to ensure the cluster is setup and ready to go. Here I’ll be using my home machine to install a new clustered SQL Server instance.

I’ve created a new cluster called CLUSTER1, consisting of two nodes, CLUSTERNODEA and CLUSTERNODEB, with a private and a public network (named as such).

 

The cluster uses node and disk majority. Storage-wise (as this is a demo) I’ve only got a couple of disks: SQL Data Disk to hold my SQL system, data and log files, plus SQL Backup Disk to hold the backups. This is obviously not how I would set up a normal system; this is just iSCSI disk running on my laptop, so don’t judge me too harshly.

 

 

In part 2 I’ll go over configuring the Active Directory components so that everything is in place ready for the install to happen.

MSDTC Failing In A Cluster

I’m merrily working away on installing SQL 2008R2 on a bunch of new clusters. The SQL installs have gone fine and I’m getting set to install MSDTC, one for each SQL instance (read the awesome post by Cindy Gross on this).

The install of MSDTC went smoothly and it seemed very happy. Then I failed over the cluster…

MSDTC failed.

It wouldn’t restart.

I failed back to the original node and it wouldn’t start there either.

 

What’s the error?

I dumped the last few minutes of the cluster log by opening a command prompt and running…

cluster log /gen /span:5

 

This dumps the last five minutes of cluster logs into C:\Windows\Cluster\cluster.log

I scrolled through to get to the MSDTC error piece:

INFO  [RES] Physical Disk: Failed to get vol MP root for path ?, status 123
ERR   [RHS] Error 123 from ResourceControl for resource <instance>_MSDTC.
WARN  [RCM] ResourceControl(STORAGE_IS_PATH_VALID) to <instance>_MSDTC returned 123.

I checked the disk resource for MSDTC and it was online. Looking at the filesystem on that disk, there was an MSDTC directory, so I knew there were no access problems. It didn’t make any sense.

 

So what’s going on?

The key error here is the failure to get the mount point (MP) root for the path.

Apparently MSDTC does not work with mount points, which is what I had set the dependency to. There were no warnings about this when setting MSDTC up, and I’d not seen or heard of any documentation that speaks to this.

I was finally pointed to a Connect item opened by someone who’d had the same issue https://connect.microsoft.com/SQLServer/feedback/details/576545/msdtc-fails-to-restart-in-sql-server-2008-r2-clustered-group

Side note: I love it when Connect items such as this are closed as by design. Why is this by design? Can someone explain to me why MSDTC shouldn’t be supported on Mount Points?

 

I deleted the MSDTC resource and added it again, this time using a regular drive as a dependency and everything worked perfectly. I was able to failover and have SQL perform distributed transactions.

Be Careful Installing Windows Features

Did you know that installing certain Windows features could impact the way that SQL gets installed on your server? Me either.

When performing some installs recently I came across a problem whereby I was not able to change the shared tools location: the option was greyed out. This didn’t make any sense to me. The install was happening on a new server with no previous SQL installs on the machine, so there should have been no existing components causing this kind of problem.

 

Digging through the filesystem I found a C:\Program Files\Microsoft SQL Server folder. That didn’t make any sense to me at all; I’d not installed anything.

I could not find anything that might be using this folder when looking through the installed software, and yet I couldn’t delete the folder as there were files in use.

After much head scratching I finally found the problem.

 

I had installed the Windows System Resource Manager feature to help manage resources, which is very useful when running more than one instance of SQL on a machine, or if you have multiple software installs and want to keep control of your CPU and memory. This feature uses (and so automatically installs) another feature called the Windows Internal Database, which is described as a relational data store.

 

By the looks of things the Windows Internal Database is based around the SQL 2005 engine, and it gets installed into C:\Program Files\Microsoft SQL Server.

When performing the new SQL install it went out and identified that SQL components were already installed and forced installation of the shared tools into that same location.

 

After removing the Windows System Resource Manager and Windows Internal Database features I was able to move the shared tools install to another location.

Be very aware of what features and roles might be installed on your machines; it might bite you when you least expect it.
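A quick way to check a server before kicking off a SQL install (Server 2008 R2; the wildcarded feature names are my best guess, so verify against the full Get-WindowsFeature output):

# List installed features, then look specifically for WSRM / Windows Internal Database
Import-Module ServerManager
Get-WindowsFeature | Where-Object { $_.Installed } | Select-Object Name, DisplayName
Get-WindowsFeature -Name *WSRM*, *Internal-Database*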

Central Auditing Of SQL Permissions Scripts

Yesterday I gave a presentation for the PASS PowerShell Virtual Chapter on central auditing of SQL permissions with PowerShell and T-SQL. For those that attended, feel free to download the scripts I used in my presentation. For those that didn’t… well, you can still download them, but this picture may not apply to you…

Image from http://whosawesome.com/