
Using PowerShell To Restrict MSDTC Ports

Ever tried to create a linked server that uses MSDTC only to find yourself blocked by your company firewall? Ever tried to run a WMI query against a server just to find yourself blocked? Ever had the firewall team tell you that they aren’t going to open up ports 49152-65535 so that you can use RPC?

Let’s be fair, your network team shouldn’t have to open up all those ports because RPC responds somewhere within a large dynamic range.

The Microsoft KB article "How to configure RPC dynamic port allocation to work with firewalls" will tell you how to edit your registry to restrict that port range and make your network admin a little happier.

Working with the registry is not fun at the best of times, and when you are setting up a bunch of machines it takes time. Sure, you could create a .reg file and run that on each machine, but this is 2011 and we have PowerShell now.


The following script checks for, and if necessary adds, the required registry keys to restrict that port range. In the example below Windows is being limited to ports 5000-5200.





<#
.SYNOPSIS
   Restricts the RPC ports to be used on Windows from 5000-5200

.DESCRIPTION
   Execute to add registry entries on the local machine to restrict the RPC ports from 5000-5200. Requires a reboot once executed.
#>

if (Test-Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue) { "Registry Key Exists" }
else { md 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' }

if (Get-ItemProperty -Name "Ports" -Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue) { "Ports value exists" }
else { New-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -Name 'Ports' -Value '5000-5200' -PropertyType 'MultiString' }

if (Get-ItemProperty -Name "UseInternetPorts" -Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue) { "UseInternetPorts value exists" }
else { New-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -Name 'UseInternetPorts' -Value 'Y' -PropertyType 'String' }

if (Get-ItemProperty -Name "PortsInternetAvailable" -Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue) { "PortsInternetAvailable value exists" }
else { New-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -Name 'PortsInternetAvailable' -Value 'Y' -PropertyType 'String' }
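Once the script has run (and before you reboot), you can sanity-check the values with a quick read-back. This is just a convenience snippet, not part of the original script:

```powershell
# Read back the Rpc\Internet values to confirm the restriction is in place
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' |
    Select-Object Ports, PortsInternetAvailable, UseInternetPorts
```

Remember the settings only take effect after a reboot, so the read-back confirms the registry entries, not the live RPC behavior.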

Creating A Clustered SQL Install ini File Using PowerShell–Part 3

In parts 1 and 2 the cluster was made ready for the SQL install and the Active Directory objects were created. In the final part of the series we will finally get to creating the ini file and installing SQL.

As an FYI this script is designed to install the following components along with the Database Engine:

  • SQL Server Replication
  • Full-Text Search
  • Business Intelligence Development Studio
  • Client Tools Connectivity
  • Integration Services
  • Client Tools Backwards Compatibility
  • Client Tools SDK
  • SQL Server Books Online
  • Management Tools – Basic
    • Management Tools – Complete
  • SQL Client Connectivity SDK
  • Microsoft Sync Framework

The ini can be quickly modified to remove some of these components if they are not required (I’m looking to add selections for these items to a future version).


Copy the script to the server

Download the CreateClusteredSQLInstallFile.ps1 and copy it over to the server. From there it can either be opened directly in PowerShell ISE or called from within the PowerShell shell.

If calling the script from the shell you might get an error:


If so you’ll need to reopen the shell as an administrator and change the execution policy to RemoteSigned:

Set-ExecutionPolicy RemoteSigned
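If you want to confirm the change (or see what the starting state was), a quick check works; this is just a convenience, not something from the original post:

```powershell
# Show the effective execution policy at each scope
Get-ExecutionPolicy -List
```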


This time it should run without any errors.


Working through the script

The first thing to come up is an informational box letting you know that you’ll be creating ini files:


Next, depending upon whether you started the script through the shell or the ISE you’ll see different things. In the ISE you’ll see a pop-up window asking if you want to “Install a new clustered instance or add node to an existing cluster?”. This is in case you want to generate an ini file for a cluster you already have in place. By default this will create files for a new clustered install of SQL Server:

In the shell you will just get a prompt:


A new window will appear asking where you want to put the ini files when they are created. I’ve chosen to put them in C:\SQL Install ini (this is the same whether you use the shell or the ISE)


Next you’ll be prompted to enter the SQL virtual name (going forward I’ll be showing the prompts through the ISE, keep an eye on the shell if that’s what you are using). This is the disabled computer account we created in Active Directory:


The next prompt will ask for an instance name, here I’m going with PROD (which will give us the SQL instance of GROO\PROD):


Follow this up by entering the IP address:


The next two prompts are for the SQL Service and SQL Agent accounts to be used:


Now choose a root folder for the SQL system databases (master/model/msdb). A SQLSystem subdirectory will automatically be created, so I am using the root of G:


A similar window will pop up and ask for the locations of TempDB, the data and log files, and backups. This is just another case of choosing the appropriate locations. As a note, under each location you choose the following subfolders will be created and used:

  • TempDB – <root>\MSSQL\Data
  • SQL Data Files – <root>\MSSQL\Data
  • SQL Log Files – <root>\MSSQL\Logs
  • Backups – <root>\MSSQL\Backup



The final step is a prompt that will appear asking what failover clustered disks should be used. These will be listed under available storage in the Failover Cluster Manager. Only the disks to actually be used should be added. This is a comma separated list of those disks (the screenshot shows the disks next to the prompt for easy identification):


A completion pop-up appears indicating what files were created and how to use them:


The filenames are generated based upon the user running the script and the time that the script started executing (this helps to prevent accidental duplicates).


If you’d used the shell to execute the script you’d expect to see the following:


I’ve got files, so now what?

Now that we have generated the ini files it’s time to put them to use. The first thing I like to do is rename them to something more appropriate (like the SQL virtual name and their purpose):



I also like to store these files away in a repository; this way it’s a very quick and easy operation to add a node to a cluster or find out what installation settings were used.


Can we do the SQL install already?

We sure can, it’s go time! But first, take a quick look at the ini file that was created:

;File created by: AWESOMESAUCE\Nic
;File creation date: 2011-07-17 16:47
;Script to install new SQL clustered instance
;SQLSERVER2008 Configuration File
INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server"
INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server"
INSTANCEDIR="C:\Program Files\Microsoft SQL Server"



That’s all the information needed to perform the install.

Now open up an elevated command prompt and go to the location of the install media. Call setup.exe, pass along the location and name of the SQLInstall.ini file together with the SQL and Agent passwords, then hit enter:
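The exact command line isn’t shown here, but based on the standard SQL Server 2008 R2 unattended-install switches it looks something like the following (the ini path and both passwords are placeholders, and the filename assumes you renamed the generated file to SQLInstall.ini):

```shell
setup.exe /ConfigurationFile="C:\SQL Install ini\SQLInstall.ini" ^
    /SQLSVCPASSWORD="<sql service account password>" ^
    /AGTSVCPASSWORD="<agent service account password>"
```

Passing the passwords on the command line keeps them out of the ini file, which is why the generated file can safely be stored in a repository.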


Now go and make yourself a beverage of your choice; the install will take a little while. After an indeterminate amount of time you should see the following nondescript message in the command prompt window:


Not exactly a whole bunch of bells and whistles there, but “Setup result: 0” tells us that the install was successful (even though it’s somewhat cryptic).

We can test this by cranking up SSMS and connecting:


Add another node

A single node cluster, while nice, isn’t particularly highly available, so we need to have this SQL instance deployed on the other node(s). This is a quick and painless task. Just copy the SQL Install ini folder over to the other node(s), log in, and run setup.exe, this time calling the AddNode.ini file:
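Again the command isn’t reproduced here, but with the standard 2008 R2 switches it would look roughly like this (paths and passwords are placeholders):

```shell
setup.exe /ConfigurationFile="C:\SQL Install ini\AddNode.ini" ^
    /SQLSVCPASSWORD="<sql service account password>" ^
    /AGTSVCPASSWORD="<agent service account password>"
```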


Again, go for a walk around the block, talk to your colleagues and eventually come back to:



Rinse and repeat for as many nodes as you need.



Set up your cluster, ensure Active Directory is good to go, and use CreateClusteredSQLInstallFile.ps1 to quickly build your ini file to install SQL Server on to your cluster. It’s quick and easy; it allowed me to deploy 4 SQL Server instances to 5 nodes in under 4 1/2 hours.

Give it a try, let me know what you think. I’ll be looking to enhance it in the future.


Read Part 1

Read Part 2

Creating A Clustered SQL Install ini File Using PowerShell–Part 2

In Part 1 I showed how the cluster was readied for the SQL install. In this short part 2 I’ll go over the Active Directory configuration components so that everything is ready for us to start performing the install.


Create the computer account

The first thing is to create a new Computer object in Active Directory. This will be the virtual name used in the cluster.


Once added we need to go back in and edit the properties of the Computer (you’ll need to use Advanced Features for this bit, look under the view menu item).

Go to the security tab of the Computer properties and hit the Add button. In the Select Users, Computers, Service Accounts or Groups window hit Object Types and ensure that Computers is checked. The cluster account will be used to manage the Computer resource (in this case CLUSTER1) so we add CLUSTER1$


And then give CLUSTER1$ Full Control over the computer object.


Finally the computer object needs to be disabled (if it’s not disabled your cluster install will not work correctly and it will fail to come online)



Create the service accounts

In the interest of security I’ll be using one account for the SQL Engine and another for the SQL Agent. Quickly create a couple of accounts with nice secure passwords (that of course you will keep somewhere secure, like in a KeePass file).


I like to give the accounts good descriptive names; it makes them easier to find later. (Actually the best way is to have a separate OU for the SQL accounts and the SQL computer objects, as it makes it a lot easier to manage them and create Group Policies for them.)


The boring stuff is done

The setup items are complete, we’re now ready to create the ini file….in part 3

Grabbing The Newest File From Subdirectories Using PowerShell

Every once in a while I come up with a need for something a little out of the ordinary, in this instance I was moving backups from one machine to another. Robocopy is the obvious tool of choice to do this. Trouble was that the backups were not all within a single directory, rather they were in multiple subdirectories. Robocopy will of course handle this with the /S switch. What Robocopy can’t handle is the fact that I want only the most recent file from each one of those subdirectories, not all of them (in this case I just wanted to move the most recent differential backup from one location to another).

I figured I could sit down and query msdb for this information and dump that out. It works, it’s functional, and it does exactly what I would need it to do. Where’s the fun in that though, really? Add to that, it would only handle SQL backups; what if I had a need to do the same thing later on for other types of files? The TSQL query wouldn’t work for me then.


PowerShell to the rescue

Seriously, I think that I’m going to get a cape for PowerShell that it can wear around the place as it’s that damn super (although I am not sure that I want to see its underwear outside its pants).

In this example I’m going to be working from C:\Temp on my local machine.

Within C:\Temp there are two folders and a subfolder:

Each of these folders contains a couple of files:


To grab the most recent file from a particular folder is a quick one liner:

dir C:\temp\subfolder2 | Sort-Object {$_.LastWriteTime} -Descending | Select-Object -First 1


That is the basis for the rest of the script. Essentially everything else just handles recursing through the subdirectories to grab this information:


$Path = 'C:\Temp' #Root path to look for files

$DestinationPath = '\\Remote\D$' #Remote destination for file copy

#Grab a recursive list of all subfolders
$SubFolders = dir $Path -Recurse | Where-Object {$_.PSIsContainer} | ForEach-Object -Process {$_.FullName}

#Iterate through the list of subfolders and grab the newest file in each
ForEach ($Folder in $SubFolders)
{
    $FullFileName = dir $Folder | Where-Object {!$_.PSIsContainer} | Sort-Object {$_.LastWriteTime} -Descending | Select-Object -First 1

    #For every file grab its location and output the robocopy command ready for use
    ForEach ($File in $FullFileName)
    {
        $FilePath = $File.DirectoryName
        $FileName = $File.Name
        Write-Output "robocopy $FilePath $DestinationPath $FileName /R:6 /W:30 /Z"
    }
}



Running this gives the results:


Quick and easy, does just what it says on the box. Feel free to download CopyNewestFileFromSubDir.ps1 and give it a try. Let me know what enhancements you put around it (here’s a quick one: to limit the type of files evaluated, change !$_.PSIsContainer to $_.Extension -eq ".bak")
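As a sketch of that tweak, the file-selection line inside the loop becomes the following (assuming you only care about .bak files; any other extension works the same way):

```powershell
# Only consider .bak files when picking the newest file in each folder
$FullFileName = dir $Folder |
    Where-Object {$_.Extension -eq ".bak"} |
    Sort-Object {$_.LastWriteTime} -Descending |
    Select-Object -First 1
```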

MSDTC Failing In A Cluster

I’m merrily working away on installing SQL 2008R2 on a bunch of new clusters. The SQL installs have gone fine and I’m getting set to install MSDTC, one for each SQL instance (read the awesome post by Cindy Gross on this).

The install of MSDTC went smoothly and it seemed very happy. Then I failed over the cluster…

MSDTC failed.

It wouldn’t restart.

I failed back to the original node and it wouldn’t start there either.


What’s the error?

I dumped the last few minutes of the cluster log by opening a command prompt and running:

cluster log /gen /span:5


This dumps the last five minutes of cluster logs into C:\Windows\Cluster\cluster.log

I scrolled through to get to the MSDTC error piece:

INFO  [RES] Physical Disk: Failed to get vol MP root for path ?, status 123
ERR   [RHS] Error 123 from ResourceControl for resource <instance>_MSDTC.
WARN  [RCM] ResourceControl(STORAGE_IS_PATH_VALID) to <instance>_MSDTC returned 123.

I checked the disk resource for MSDTC and it was online. Looking at the filesystem on that disk, there was an MSDTC directory, so I knew there were no access problems. It didn’t make any sense.


So what’s going on?

The key error here is the failure to get MP root for path ?

Apparently MSDTC does not work with mount points, which is what I had set the dependency to. There were no warnings about this when setting MSDTC up, and I’d not seen or heard of any documentation that speaks to it.

I was finally pointed to a Connect item opened by someone who’d had the same issue https://connect.microsoft.com/SQLServer/feedback/details/576545/msdtc-fails-to-restart-in-sql-server-2008-r2-clustered-group

Side note: I love it when Connect items such as this are closed as by design. Why is this by design? Can someone explain to me why MSDTC shouldn’t be supported on Mount Points?


I deleted the MSDTC resource and added it again, this time using a regular drive as a dependency and everything worked perfectly. I was able to failover and have SQL perform distributed transactions.

SQL Clustering–Network Binding Order Warnings

In setting up my Windows 2008 R2/SQL 2008 R2 cluster this week I came across a warning in the Setup Support Rules stating that “The domain network is not the first bound network.”

 Network Binding Order Error


This didn’t make any sense to me as I had been very careful in setting the binding order in the network connections properties:

Binding Order Set


Deciding to look for further information on this I opened up the Detail.txt file from the Setup Bootstrap/Log directory and searched for network binding order:

Binding Order Setup Log


Strangely the first bound network is Local Area Connection* 9. I had no idea what this network was as it was not listed in the network adapter list.

I remembered a discussion on Twitter a couple of weeks ago between Allan Hirt (blog|twitter) and Denny Cherry (blog|twitter) where they talked about this exact problem and the fact that it was a hidden cluster adapter which needed to be moved in the binding order, and that had to be done in the registry.

I opened up regedit and drilled down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Linkage and opened up the Bind value:

Registry Showing Incorrect Binding Order


I’ve highlighted the first item in the bind order. The trouble is that this GUID value really means nothing to us…time to crack open PowerShell

gwmi Win32_NetworkAdapterConfiguration | where-object {$_.IPEnabled -eq "True"} | ft Description, SettingID -auto

List of active adapters


As you can see the Microsoft Failover Cluster Virtual Adapter GUID matches the first bound value. Back to the registry to move that adapter down the list with a quick cut and paste.

*Warning, playing in the registry can cause system instability, BSOD problems, singularities and quantum fluctuations*
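If you’d rather inspect the binding order from PowerShell than eyeball regedit, a rough sketch follows (read-only; I’d still do the actual reorder by hand in regedit). The Bind value is a REG_MULTI_SZ of \Device\{GUID} strings:

```powershell
# Read the TCP/IP binding order; each entry ends in an adapter GUID
$bind = (Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\services\Tcpip\Linkage').Bind
$bind

# Map GUIDs back to adapter names so you can spot the cluster virtual adapter
gwmi Win32_NetworkAdapterConfiguration |
    Where-Object {$_.IPEnabled} |
    Format-Table Description, SettingID -AutoSize
```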

Registry with correct binding


Starting up the cluster install again and the Network binding order warning is gone


Hopefully the next releases of Windows and SQL won’t have this problem. In the meantime this is good to know. Thanks Allan and Denny for engaging on Twitter and leading me to the answer to the problem.

Checking Windows Power Plans With PowerShell

Are you getting the best performance out of your CPUs in Windows 2008? You might think you are; however, the Power Plan that Windows is using might be hurting you, degrading your CPU performance by as much as 25%.


A Power Plan?

Power Plans are used by Windows to adjust certain performance settings to improve power management and reduce power and cooling requirements. You might be more familiar with this in a laptop setting where the CPU gets stepped down and screen dims when you are on battery power.

Power Plan settings can be found under the Control Panel –> Hardware and Sound –> Power Options



For a high performing database server you really need to be using the High Performance plan.

I highly recommend going and reading Glenn Berry’s (blog|twitter) most recent post on power management (and then go read the other posts in the SQL Server Hardware Nugget series).


Checking Power Plans on your servers

It could be very time consuming to go through all your servers and check that the Power Plan is set appropriately. With this in mind I adapted a script from a post by Ed Wilson (blog|twitter) on his Scripting Guys blog which shows how to check the Power Plan settings using PowerShell.

My expanded script is just a function that returns the Power Plan of the computer that is passed in.

First it checks the Windows version of the passed-in computer; below Windows version 6.x the win32_PowerPlan WMI provider does not exist and we would get errors.

Assuming a version that has the WMI provider, we query it, grab the active plan GUID, and then use that to get the actual Power Plan in use. This information is then written to the console window. To make sure an incorrect plan stands out, the font color is changed to red, while a correct plan shows green.

Finally, given that this is a function, we can easily grab a list of machines from a text file (or database table) and pass them in using a ForEach-Object (%) loop so multiple machines can be queried quickly and easily.


The PowerShell

# Read more about the importance of the right power plan at http://sqlserverperformance.wordpress.com/2010/12/28/the-importance-of-windows-power-plans-for-server-performance/

Function Get-PowerPlan ($Computer)
{
    # Grab the Windows version so we know whether to query for the power plan or not
    $winver = gwmi -Class win32_OperatingSystem -ComputerName $Computer

    # Version 6.x is Win7/2008; the power plan is not relevant below that
    if ($winver.Version.Substring(0,1) -gt 5)
    {
        $plan = Get-WmiObject -Class win32_PowerPlan -ComputerName $Computer -Namespace root\cimv2\power -Filter "isActive='true'"
        $regex = [regex]"{(.*?)}$"
        $planGuid = $regex.Match($plan.InstanceID.ToString()).groups[1].value
        $PlanType = powercfg -query $planGuid

        # Grab just the first record which has the actual plan being used
        # From that record just grab the actual plan type which is enclosed in parentheses
        $PlanType = $PlanType[0].Substring($PlanType[0].LastIndexOf("(")+1) -replace "\)", ""

        # If the plan isn't high performance let's make it stand out
        if ($PlanType -ne "High performance") { Write-Host $Computer":" $PlanType.ToUpper() -ForegroundColor "Red" }
        else { Write-Host $Computer":" $PlanType -ForegroundColor "Green" }
    }
    else
    {
        # If the Windows version doesn't support power plans just let us know
        Write-Host $Computer": n/a"
    }
}

#Based upon a list of machines contained in C:\temp\serverlist.txt
Get-Content C:\temp\serverlist.txt | % { Get-PowerPlan $_ }


Running this against a couple of machines locally I got the following results:

localhost: BALANCED
anony-lptp: High performance


Give the script a try, let me know how the results look for you.

Adding LUNs In VMware – A Cautionary Tale

Over my 10+ years of being a DBA I’ve performed LUN manipulation tasks on dozens of occasions with no errors or problems. Other than adding new disks to Windows 2003 clusters, I’ve never had to take SQL Server offline to perform these tasks either.


A simple request

I needed a new drive to be added to a server I had just taken over in order to perform backups of some large databases. This is something that the SAN engineers and Windows support teams have a lot of experience doing and so we weren’t concerned that there might be any problems.


The uh-oh moment

I was running a couple of scripts on the SQL instance hosted on that server when all of a sudden corruption errors showed up. The dreaded error 823 sat there in red, staring me in the face. I had just taken over this machine, there were no backups, and right as we were trying to add a LUN to be able to perform those, things went boom.

My stomach hit the floor.


Remembering the words of Douglas Adams

Don’t panic!

The last thing you want to do as a DBA when there’s a problem is panic. Panic I did not. I took a deep breath and walked down to speak with the SAN and Windows admins to find out what might have happened. They reported no problems, showed that the new LUN was on there and a new drive created ready for me to start performing backups.

My nerves calmed somewhat, I went back to my desk and started trying to figure out what the problem was. The error logs were showing that the data files did not exist. Very strange, as after connecting to a remote desktop session on the server I was able to see the drive and files without any issue even though SQL could not.

I started digging through the Windows event logs to see if anything stood out. Right at the time that the errors started happening a single error showed in the system event log:

Event Type: Error
Event Source: PlugPlayManager
Event Category: None
Event ID: 12
Description: The device ‘LUN SCSI Disk Device’ (SCSIDisk&Ven&Prod_LUN&Rev4&23432&3&2) disappeared from the system without first being prepared for removal.

The error stated that the LUN hosting the data files lost connectivity and came back online almost immediately.


The fix

With SQL not able to see the files or drive I had no choice but to restart the instance in the hope that things would come back online. With bated breath I restarted the service and waited to see what would happen. One by one the databases started coming back online and my pulse slowed a little. Once all the databases showed as recovered without error I kicked off a DBCC CHECKDB on each one so that I could feel comfortable that everything was good and happy. The DBCC execution took 7 hours and everything came back clean. Happy with this knowledge I kicked off a backup and went to bed.


So what happened?

Unlike any of the SQL instances I have worked with before, this one was running on a VMware ESX virtual machine and the SQL LUNs, rather than being a part of the VM itself, were raw mapped drives. A raw mapped drive is handled using raw device mapping, which creates a symbolic link from within the VMware file system to the external LUN (at least that’s what my admin tells me, and I believe him).

What we found was a quirk that when VMware performed a bus scan to find the new LUN and create the mapping it very briefly dropped connectivity to the existing raw mapped drive. This is what caused Windows to lose the drive and SQL to lose access to the files. The drive was down for about 3 seconds, more than enough time for SQL to recognize a problem.


Lesson learned

From now on I will be very careful with any kind of LUN changes on VMware machines with raw mapped drives. I’ll ensure that we schedule an outage and take SQL offline while any adjustments are made just in case something like this crops up again.