Tag: PowerShell

Configuring DTC Security In A Cluster With PowerShell

The other day Allan Hirt (blog|twitter) wrote a fantastic post, “How to properly configure DTC for Clustered Instances of SQL Server with Windows Server 2008 R2”. He included a PowerShell script to handle all the funky setup stuff. The only downside was that afterwards you had to manually configure network access and security for the clustered DTC using the GUI.

“There has to be a better way,” I thought. I recalled how security back in 2003 could be set in the registry, so I went digging. Eventually I was able to find the requisite keys under the Distributed Transaction Coordinator resource. From there it was just a case of tying things back so that those keys could be easily found from the SQL resource name, and then updating them so that they matched what Allan demonstrated on his page.

I’ve tested it on a couple of my clusters and it’s worked exactly as designed. You just need to change the ServiceName variable so that it matches your clustered service name, and then decide what authentication method you want to use (it defaults to the most secure level: Mutual Authentication Required).

Get the code below or download ConfigureMSDTC.ps1

<#

.SYNOPSIS

   Configures MSDTC on a cluster

.DESCRIPTION

   This configures a basic networked config for MSDTC for clustered SQL Server instances.

   Uncomment the transaction manager security setting and enter the correct ServiceName, which can be found in Failover Cluster Manager

.PARAMETER <paramName>

   NONE

.EXAMPLE

   NONE

#>

 

$ServiceName = "SQL Server (SOX2)"

 

#--What MSDTC Transaction Manager Security setting is requested? (uncomment one)

$TranManSec = "Mutual" #Mutual Authentication Required

#$TranManSec = "Incoming" #Incoming Called Authentication Required

#$TranManSec = "None" #No Authentication Required

 

 

 

 

#Grab a list of cluster groups

$GroupPath = "HKLM:\Cluster\Groups"

$GroupList = dir $GroupPath

 

#Iterate through the groups to find the one matching the service name

foreach ($Group in $GroupList)

{

    $GroupChildPath = $Group.PSPath

    if ((Get-ItemProperty -path $GroupChildPath -name Name).Name -eq $ServiceName)

    {

        #we got a match! Now grab a list of the groups in this service

        $ReferencedResources = (Get-ItemProperty -path $GroupChildPath -name Contains).Contains

        foreach ($Resource in $ReferencedResources)

        {

            #Query each of the resources for their type and work with MSDTC

            $ResourcePath = "HKLM:\Cluster\Resources\$Resource"

            if ((Get-ItemProperty -path $ResourcePath).Type -eq "Distributed Transaction Coordinator")

            {

                #We found MSDTC resource for that service group, let's configure it

                $SecurityPath = "$ResourcePath\MSDTCPRIVATE\MSDTC\Security"

                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccess" -value 1

                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessClients" -value 0

                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessTransactions" -value 0

                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessInbound" -value 1

                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessOutbound" -value 1

                Set-ItemProperty -path $SecurityPath -name "LuTransactions" -value 1

 

                #Now configure the authentication method for MSDTC (defaulting to Mutual Auth as it's most secure)

                $MSDTCPath = "$ResourcePath\MSDTCPRIVATE\MSDTC"

                if ($TranManSec -eq "None")

                {

                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 1

                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 0

                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 0

                }

 

                elseif ($TranManSec -eq "Incoming")

                {

                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 0

                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 0

                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 1

                }

 

                else 

                {

                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 0

                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 1

                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 0

                } 

 

            }

        }

    }

}

 

Querying SQL Error Logs With PowerShell

It’s a real pain trying to get data out of the SQL Error Logs, I’ve noticed. You either don’t cycle your files very often and end up with stupid amounts of data to sift through, or you cycle them frequently and then have to figure out which of the files holds the data you want. You could use T-SQL to import all the files and grab stuff; alternatively, you can use PowerShell to get the information.

I wanted to take this process a step further so I created a PowerShell script that will accept servername, start and end date/time parameters and return to you, the kind reader, all the entries in the logs between those times (and just between those times). This is great for when someone asks you to give them information from the log from 3-7 days ago. One quick execution of the script and the data is sitting there in PowerShell grid format, ready to go. Even better, using the grid you can filter the information further and quickly copy/paste it to Excel or some other program.

I’ve pasted the code down below, but it might be easier to download Get-SQLErrorLogs.ps1

This requires the PowerShell SQL snap-ins that come with 2008/R2 but can query versions 2000 and up.

Next steps are to add filtering and options to output to file. I’m sure I’ll get to that eventually.

 

<#

.SYNOPSIS

   Retrieves SQL Server error log entries

.DESCRIPTION

   Pulls error log details from SQL Server and outputs them to the client

.PARAMETER <paramName>

   -SQLInstance (required) The SQL Instance to pull the logs from

   -MaxDate (required) The most recent log record to return

   -MinDate (required) the oldest log record to return

.EXAMPLE

   ./Get-SQLErrorLogs -SQLInstance MyInstance -MinDate 2012-05-10 -MaxDate "2012-05-15 09:00"

#>

 

param

(

[string]$SQLInstance = $(Read-Host -prompt "SQL Instance [e.g. MyClusterInstance]"),

[string] $MinDate = $(Read-Host -prompt "Lowest log date entry (yyyy-mm-dd hh:mm)"),

[string] $MaxDate = $(Read-Host -prompt "Highest log date entry (yyyy-mm-dd hh:mm)"),

[int] $LogNumber

)

 

cls

 

$CheckDateMin = [datetime]::Parse($MinDate)

$CheckDateMax = [datetime]::Parse($MaxDate)

 

 

Function Main {

    #Load the SQL snapins

    Load-SQLSnapins

 

    #Create a datatable to hold the data

    $Results = New-Object system.Data.DataTable "Results"

    #Setup columns

    $LogDate = New-Object system.Data.DataColumn LogDate,([DateTime])

    $ProcessInfo = New-Object system.Data.DataColumn ProcessInfo,([string])

    $Text = New-Object system.Data.DataColumn Text,([string])

    #Add columns to datatable

    $Results.columns.add($LogDate)

    $Results.columns.add($ProcessInfo)

    $Results.columns.add($Text)

 

    #Another to hold logs read

    $ReadLogs = New-Object system.Data.DataTable "ReadLogs"

    $ReadLogNum = New-Object system.Data.DataColumn LogNumber,([int])

    $ReadLogs.columns.add($ReadLogNum)

 

 

    #Shred the SQL Instance to get the right data

    $SQLName = Shred-SQLName $SQLInstance

    $InstanceName = Shred-SQLInstance $SQLInstance

 

 

    #Grabs SQL Logs by Date

    Get-SQLLogByDate $SQLName $InstanceName $SQLInstance $CheckDateMin $CheckDateMax

 

    $results | Sort-Object LogDate | ogv

 

}

 

 

 

Function Shred-SQLInstance ($SQLInstance) {

 

    $SQLInstance = $SQLInstance.ToUpper()

 

    if ($SQLInstance -ilike "*\*") 

    {

        $string = $SQLInstance.Split("\")

        $Instance = $string[1]

        Return $Instance

    }

    else

    {

        $Instance = "DEFAULT"

        Return $Instance

    }

}

 

Function Shred-SQLName ($SQLInstance) {

    $SQLInstance = $SQLInstance.ToUpper()

    if ($SQLInstance -ilike "*\*") 

    {

        $string = $SQLInstance.Split("\")

        $Name = $string[0]

        Return $Name

    }

    else

    {

        $Name = $SQLInstance

        Return $Name

    }

}

 

Function Get-SQLLogByDate ($SQLName, $InstanceName, $SQLInstance,$CheckDateMin ,$CheckDateMax) {

 

    IF ($InstanceName -eq "DEFAULT") {$Logs = DIR SQLSERVER:\SQL\$SQLName | Where-Object {$_.Name -eq $SQLInstance}; }

    ELSE {$Logs = DIR SQLSERVER:\SQL\$SQLName | Where-Object {$_.Name -eq $SQLInstance}; }

 

    $LogCount = ($Logs.EnumErrorLogs() | Measure-Object).Count

    Write-Output "--------------------------------------------------------------------------------------------------------------"

    Write-Output "Checking error logs on server $SQLInstance for logs created between $CheckDateMin & $CheckDateMax"

    Write-Output "--------------------------------------------------------------------------------------------------------------"

    Write-Output "Checking through $LogCount logs, please wait."

    Write-Output "-----------------------------------------------"

 

    $LogList = $Logs.EnumErrorLogs()

    #we'll need to do something hokey here. The CreateDate on a SQL error log is not when the log was created,
    #rather when it was closed out. This means we'll have to get the min log and then also read the one before that

    ForEach ($ErrorLog in $LogList) {

 

        IF ($ErrorLog.CreateDate -ge $CheckDateMin -and $ErrorLog.CreateDate -le $CheckDateMax)

        {

 

            $CurrentLog = $ErrorLog.CreateDate

            $CurrentLogName = $ErrorLog.Name

            Write-Output "Currently reading error log $CurrentLogName dated $CurrentLog..."

            #write the log number to the $ReadLogs datatable

            $NewLogRow = $ReadLogs.NewRow(); $NewLogRow.LogNumber = $CurrentLogName; $ReadLogs.Rows.Add($NewLogRow);

            $Logs.ReadErrorLog(($ErrorLog.Name)) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |

            % {$NewRow = $Results.NewRow(); 

                $NewRow.LogDate = $_.LogDate; 

                $NewRow.ProcessInfo = $_.ProcessInfo;

                $NewRow.Text = $_.Text;

                $Results.Rows.Add($NewRow);

            }

        }

 

 

 

    }

    #Now we've read all the logs read the one before the newest so we get that extra data 

    #unless the most current log (0) is included in the results already

    #Write-Output $ReadLogs | ft -AutoSize

    $LowestLog = ($ReadLogs | Measure-Object LogNumber -Minimum).Minimum

    IF ($LowestLog -gt 0)

    {

        $LogToRead = $LowestLog - 1

        Write-Output "Currently reading error log $LogToRead..."

        #write the log number to the $ReadLogs datatable

        $NewLogRow = $ReadLogs.NewRow(); $NewLogRow.LogNumber = $LogToRead; $ReadLogs.Rows.Add($NewLogRow);

        $Logs.ReadErrorLog($LogToRead) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |

        % {$NewRow = $Results.NewRow(); 

            $NewRow.LogDate = $_.LogDate; 

            $NewRow.ProcessInfo = $_.ProcessInfo;

            $NewRow.Text = $_.Text;

            $Results.Rows.Add($NewRow);

        }

    }

    #on the off chance that the dates reside in the current log only pull that one in

    #but only if no other logs have been loaded (edge case)

    IF (!$LowestLog) 

    {

        Write-Output "Currently reading error log 0..."

        $Logs.ReadErrorLog(0) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |

        % {$NewRow = $Results.NewRow(); 

            $NewRow.LogDate = $_.LogDate; 

            $NewRow.ProcessInfo = $_.ProcessInfo;

            $NewRow.Text = $_.Text;

            $Results.Rows.Add($NewRow);

        }

    }

}

 

Function Load-SQLSnapins 

{    $CheckSQLSnapins = get-pssnapin | where {$_.Name -match "SqlServerCmdletSnapin100"} 

    if (!$CheckSQLSnapins) 

    {

        Add-PSSnapin SqlServerCmdletSnapin100 

        Write-Host "SqlServerCmdletSnapin100 Loaded"

    }

    else { Write-Host "SqlServerCmdletSnapin100 Already Loaded" }

 

    $CheckSQLSnapins = get-pssnapin | where {$_.Name -match "SqlServerProviderSnapin100"} 

    if (!$CheckSQLSnapins) 

    {

        Add-PSSnapin SqlServerProviderSnapin100 

        Write-Host "SqlServerProviderSnapin100 Loaded"

    }

    else { Write-Host "SqlServerProviderSnapin100 Already Loaded" }

}

 

main

Changing The Default Email Address In Office 365

Not SQL related today but a good example of PowerShell in action.

I recently migrated my personal email over to Microsoft’s Office 365. The migration itself went very smoothly but I ran into an issue where the primary email address on the account was showing with a @sirsql.onmicrosoft.com address instead of @sirsql.net. This seriously bugged me and I couldn’t find a way to change it in the Office 365 admin interface.

After some digging I found that you could only make a change such as this using PowerShell, and that you couldn’t do it using the account you want to change (makes sense).

So I created a new admin account just for the purpose and initiated a session in Office 365 (entering the admin user’s credentials when prompted)

$Livecred = get-credential

$Session = new-pssession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $Livecred -Authentication Basic -AllowRedirection

Import-PSSession $Session

Now I had to find the account, easy enough:

get-mailbox | where-object {$_.Identity -eq "<my account>"} | fl

This gave me all the details, including the PrimarySmtpAddress

[screenshot: Get-Mailbox output showing the mailbox properties, including PrimarySmtpAddress]

As you can see from the membership properties you can get and set this information, so that’s what I went to do:

set-mailbox <my account> -PrimarySmtpAddress <my account>@sirsql.net

And got an error

A positional parameter cannot be found that accepts argument ‘-PrimarySmtpAddress’.
    + CategoryInfo          : InvalidArgument: (:) [Set-Mailbox], ParameterBindingException
    + FullyQualifiedErrorId : PositionalParameterNotFound,Set-Mailbox

 

Well, this made no sense. No matter what I tried I could not do anything other than a get on the PrimarySmtpAddress. Back to the searching, and I discovered that Microsoft has not provided the capability to run a set on that particular member object in Office 365 (although it’s available with Exchange). Way to go there, Microsoft.

After some more digging I was able to get the SMTP address changed by setting the WindowsEmailAddress to the correct value

set-mailbox <my account> -WindowsEmailAddress <my account>@sirsql.net

 

One other note: when using PowerShell to manage O365, where it says to use Identity, use the Alias for the account instead, or you might end up with unintended consequences.
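One last piece of housekeeping worth mentioning (not in the original steps above, but good practice): close the remote session when you’re finished, since Exchange Online limits how many concurrent remote PowerShell sessions an account can hold open.

```powershell
# Tear down the remote Exchange session created earlier with New-PSSession
Remove-PSSession $Session
```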

Reading SQL Server Error Logs Using PowerShell

I was messing around with PowerShell the other day and using it to read SQL Server error logs. It’s actually a pretty trivial thing.

From SQLPLS

$Logs = DIR SQLSERVER:\SQL\LOCALHOST

$Logs.ReadErrorLog()

Quick and easy. Then I tried to read from a clustered instance of SQL Server and ran into an issue

$Logs = DIR SQLSERVER:\SQL\CLUSTEREDINSTANCE

$Logs.ReadErrorLog()

[screenshot: error thrown when calling ReadErrorLog() against the clustered instance]

This didn’t make a lot of sense, after all the method was there before. I decided to go and check to see if the method existed

$Logs = DIR SQLSERVER:\SQL\CLUSTEREDINSTANCE

$Logs | gm

[screenshot: Get-Member output confirming the ReadErrorLog method exists]

Yup, the method exists just fine.

 

I couldn’t figure out what was going on. I enlisted the help of Sean McCown (blog|twitter) who posted a short training video on how to read the error logs on named instances of SQL Server.
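I won’t spoil Sean’s video, but my working theory (an assumption on my part, not something from the video) is that DIR against a machine node hands back a collection of instance objects, and a collection has no ReadErrorLog() method of its own. Filtering down to a single SMO Server object first works; the server and instance names here are hypothetical:

```powershell
# Hypothetical names: DIR on the machine path returns instance objects,
# so pick one Server object before calling the method on it
$Instances = DIR SQLSERVER:\SQL\CLUSTEREDINSTANCE
$Logs = $Instances | Where-Object { $_.InstanceName -eq "MYINSTANCE" }
$Logs.ReadErrorLog() | Select-Object -First 10
```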

Grabbing Table Row Counts Into SQL Using PowerShell

Sometimes Twitter gives you the impetus to get something done that you’ve been meaning to accomplish but just haven’t quite gotten around to. Yesterday was one of the days that happened.

Ed Watson (blog|twitter) asked the following question on #SQLHelp

[screenshot: Ed’s #SQLHelp tweet]

 

This is something I’ve been meaning to do for a while so that I can get good ongoing trending information around table row counts.

There are several ways to accomplish this but PowerShell was the clear winner for me. I wanted the script to accomplish a couple of things

  • Capture the SQL instance name, database name, table name (including schema) and row count for all tables in a database
  • To grab the information for all the databases on a server (including system databases)
  • To write the data to a SQL table
  • Provide the option of dumping the data to the screen
  • Allow me to execute it quickly and easily against multiple servers without providing interactive input

 

The script I’ve written does all this. The only bug I’ve found is that the model and tempdb table information is not accessible, but this isn’t really an issue as model doesn’t get any use and tempdb is far too frequently updated to be of any use.

 

The Scripts

Firstly we need to create a couple of SQL tables to hold the results. These should be created on the SQL instance and database of your choosing. The script itself should be updated to reference the locations of these tables (ultimately it made more sense to do it this way rather than pass the information in as a parameter).

 

CREATE TABLE dbo.SQLTableRowCounts

    (

      SQLInstance NVARCHAR(128) ,

      DBName NVARCHAR(128) ,

      TableName NVARCHAR(256) ,

      TableRowCount BIGINT, 

      LoadDate DATE DEFAULT CONVERT(DATE, GETDATE())

    );

    

CREATE TABLE dbo.SQLTableRowCountsArchive

    (

      SQLInstance NVARCHAR(128) ,

      DBName NVARCHAR(128) ,

      TableName NVARCHAR(256) ,

      TableRowCount BIGINT,

      LoadDate DATE DEFAULT CONVERT(DATE, GETDATE())

    );    

 

There are no indexes on the tables currently. Depending upon your usage patterns a couple of different indexes could apply. I’ll let you decide what works best for you.

 

The PowerShell script is quite well documented and includes execution examples (you’ll want to run this on a machine where the SQL PS providers have been installed).

<#

.SYNOPSIS

   Script takes in a list of SQL Instances and iterates through the databases to retrieve a row count for each table. This is then loaded into a SQL database.

   Within the script the $StorageSQLServer and $StorageDatabase should be updated to reflect the relevant locations to load the data.

.DESCRIPTION

   Script takes in a list of SQL Instances and iterates through the databases to retrieve a row count for each table. This is then loaded into a SQL database.

   Within the script the $StorageSQLServer and $StorageDatabase should be updated to reflect the relevant locations to load the data.

   Requires bulk admin permissions on the SQL Server that you are loading the data to.

   

   Accepts pipelined inputs so that you can pass an array for working with multiple SQL Instances

.PARAMETER <$SQLInstance>

   This is the SQL Instance that you'll be working with

.PARAMETER <$verbose>

    Initially set to false. When set to true will also output the data to the console

.EXAMPLE

   Basic example: Get-TableRowCounts MySQLInstance

   Advanced: Get-Content C:\Temp\SQLInstances.txt | % { Get-TableRowCounts $_ }

   

#>

 

param

(

[string]$SQLInstance = "$(Read-Host 'SQL Server Instance [e.g. SQLPROD]')",

[switch]$verbose = $false

)

 

#Load the SQL snapins

$CheckSQLSnapins = get-pssnapin | where {$_.Name -match "SqlServerProviderSnapin100"} 

if (!$CheckSQLSnapins) 

{

    Add-PSSnapin SqlServerProviderSnapin100 

    Write-Host "SqlServerProviderSnapin100 Loaded"

}

 

#Set the following to the SQL instance and database that you want to load the data to

#You will need to have bulk insert permissions to be able to perform this action

$StorageSQLServer = "localhost"

$StorageDatabase = "SQLMonitor"

 

$BCPconnectionString = "Data Source=$StorageSQLServer;Integrated Security=true;Initial Catalog=$StorageDatabase;"

 

#Clean out the current data table if the data is not from today

Invoke-Sqlcmd -ServerInstance $StorageSQLServer -Database $StorageDatabase -Query "DELETE FROM dbo.SQLTableRowCounts WHERE LoadDate != CONVERT(DATE, GETDATE());"

 

 

 

function TableList ($SQLInstance, $DBName)

{

 

    #Handle default vs named instances

    if ($SQLInstance.Contains("\"))

    {        $location = "SQLSERVER:\SQL\$SQLInstance\Databases\$DBName\Tables" }

    else

    {        $location = "SQLSERVER:\SQL\$SQLInstance\DEFAULT\Databases\$DBName\Tables" }

 

    <#

    Create data table to hold the results

    The datatable will live for the life of the function call to reduce memory overhead

    pulling from multiple servers

    #>

    $dataTable = New-Object system.Data.DataTable "Results"

    #Specify the column names for the data table

    $col1 = New-Object system.Data.DataColumn SQLInstance, ([string])

    $col2 = New-Object system.Data.DataColumn DatabaseName, ([string])

    $col3 = New-Object system.Data.DataColumn FullTableName,([string])

    $col4 = New-Object system.Data.DataColumn RowCounts,([int64])

 

    #Add the columns to the data table

    $dataTable.Columns.Add($col1)

    $dataTable.Columns.Add($col2)

    $dataTable.Columns.Add($col3)

    $dataTable.Columns.Add($col4)

 

    #Get our table list

    $TableList = dir $location -Force | select Owner, Name, RowCount

 

    #Iterate through the tables, load the rowcounts into the datatable

    foreach ($TableInfo in $TableList)

    {

        $TableOwner = $TableInfo.Owner 

        $TableName = $TableInfo.Name

        $TableRowCount = $TableInfo.RowCount

 

        $TableFullName = "$TableOwner.$TableName"

 

        #Provided there are rows in the table

        #Load the table/rowcounts into the datatable

        #This prevents errors due to nulls in model & tempdb

        if ($TableRowCount)

        {

            $row = $dataTable.NewRow()

            $row.SQLInstance = $SQLInstance

            $row.DatabaseName = $DBName

            $row.FullTableName = $TableFullName

            $row.RowCounts = $TableRowCount

            $dataTable.Rows.Add($row)

        }

 

    }

 

    if ($verbose) { Write-Output $dataTable }

 

    #Bulk load the data into SQL from the data table

    $bulkCopyRoles = new-object ("Data.SqlClient.SqlBulkCopy") $BCPconnectionString

    #Loads to the current table

    $bulkCopyRoles.DestinationTableName = "SQLTableRowCounts"

    $bulkCopyRoles.WriteToServer($dataTable)

    #Loads to the archive table

    $bulkCopyRoles.DestinationTableName = "SQLTableRowCountsArchive"

    $bulkCopyRoles.WriteToServer($dataTable)

 

    #Clean up after ourselves so we release memory used to hold the data table

    Remove-Variable dataTable

}

 

 

function DBList ($SQLInstance)

{

 

    #Handle default vs named instances

    if ($SQLInstance.Contains("\"))

    {        $BaseLocation = "SQLSERVER:\SQL\$SQLInstance\Databases" }

    else

    {        $BaseLocation = "SQLSERVER:\SQL\$SQLInstance\DEFAULT\Databases" }

 

    $dblist = dir $BaseLocation -Force | select Name

    foreach ($db in $dblist)

    {

        $DBName = $db.Name

        TableList $SQLInstance $DBName 

    }

}

 

DBList $SQLInstance 

 

 

This script could easily be set up to execute daily to load the data into SQL so that you can perform quick and easy trends on table growth.
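As a sketch of that daily execution (the script path, task name, instance name and schedule are all hypothetical placeholders), a scheduled task created from an elevated prompt could look like:

```powershell
# Hypothetical path/instance: register a daily 6am run of the collection script
schtasks /create /tn "SQLTableRowCounts" `
    /tr "powershell.exe -File C:\Scripts\Get-TableRowCounts.ps1 MySQLInstance" `
    /sc daily /st 06:00
```

A SQL Agent job running a PowerShell job step would work just as well if you prefer to keep the scheduling inside SQL Server.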

 

Give it a try, let me know what you think.

Adding Users To Local Groups With PowerShell

I ran into a situation last week where I needed to add a user to the Administrators group on a whole bunch of remote servers. Having had serious problems with cubital tunnel I try to avoid mouse clicks as much as possible, and anyway this is one of those situations that PowerShell is great for.

After a little trial and error I put together a script. The script doesn’t accept any pipelining, rather it will prompt you for the values (old school).

When asked, enter the name of the local group (you don’t always want to add people to Administrators after all). Then the AD user or group name (include the domain, for example PROD\SomeAccount). Finally, a comma separated list of servers to add the account to. Yes, I could have done this with a text file or something like that, but this is to be used as a quick thing and it’s easier to just type a list of servers rather than create a file every single time you run it.

Once those three things have been entered the script will go out, check to see if the AD user/group already exists in that server group. If it exists you’ll get a warning that it’s already there, otherwise it will add the user/group to the local server group. There’s also a quick check to see if the server is available, if not you’ll get an error for that particular machine, it will continue to work on the others.

Of course the account you execute this script as will have to have the relevant permissions on the remote servers to allow you to add the user/group.

 

<#

.SYNOPSIS

   Adds a domain user or group to a local Windows group on a local or remote server

.DESCRIPTION

   This script can be used in the event you need to add a domain user or group to a local Windows server group.

   Its real strength comes in being able to enter a comma delimited list of servers so that you can add the same domain user/group to multiple machines quickly and easily.

   The user executing the process will require the rights on the remote machines to be able to add the accounts.

   Contains basic error handling to check if the server is reachable, if the group exists and if the user is already a member of the group.

.PARAMETER <paramName>

   No parameters, script file will ask for input

#>

 

$usrName = (Read-Host "Enter the domain account or group that you wish to add to the local server group (eg PROD\MyAccount)")

$grpName = (Read-Host "Enter the server group name that the user should be added to (eg Administrators)")

$srvList = (read-host "Enter a comma delimited list of servers (eg SrvA, SrvB)").split(",")

Write-Host ""

$Exists = 0 #Initialize the Exists variable which is used to see whether or not users should be added to groups

 

foreach ($srvName in $srvList)

    {

    $srvName = $srvName.Trim().ToUpper() #Gets rid of spaces
    $Exists = 0 #Reset for each server so a match on one server doesn't skip the add on the next

    

try

{

if([ADSI]::Exists("WinNT://$srvName/$grpName,group")) #Check to see if the group exists

    { 

        #Set the comparison string for the account to add

        $usrName = $usrName -replace "\\", "/"

        $chkOutput = "WinNT://$usrName"

        #Write-Output "Checking for $chkOutput"

    

        $group = [ADSI]("WinNT://$srvName/$grpName,group") 

        $group.Members() | 

        % { 

            $AdPath = $_.GetType().InvokeMember("Adspath", 'GetProperty', $null, $_, $null) 

            #Write-Output $AdPath

            if ($AdPath -ilike $chkOutput) 

                {

                Write-Warning "User $usrName already a member of the $grpName group on $srvName"

                $Exists = 1 #This way we won't try to add an account twice

                }

        }        

        

        if ($Exists -eq 0)

        { 

            Write-Output "Account $usrName does not exist in the local $grpName group on $srvName. Adding now..." 

            $group.add("WinNT://$usrName,user") #Add the account to the local server group

            $Exists = 0 #Reset the Exists variable ready for the next server

        }

    }

else

{

    Write-Warning "The $grpName group does not exist on server $srvName."

    }

 

}

 

catch { Write-Error "Server $srvName was unreachable." } 

}

 

Quickly Build Transaction Log Restore Commands With PowerShell

I’m in the middle of a server migration and so I’m setting up log shipping to get the databases consistent between the two servers prior to cutting over.

The backup copy over the WAN was taking a long time and so I had a lot of transaction log files that I needed to apply. I could have let log shipping take care of performing the log restores, but I found that a secondary process had created log backups also, just with different names. This lack of consistency was causing me a problem so I needed a quick way to build a restore command of all the files in order.

This was a quick win for PowerShell. I was able to quickly grab a list of the files in date order and pipe those to a string and build the restore command. This I was able to copy into SSMS and perform the restores.

$filelist = dir \\LogShippingPrimary\p$\backup | where-object {$_.Extension -eq '.trn'} | Sort-Object {$_.CreationTime} | select Name

 

foreach ($file in $filelist)

{

$restorefile = $file.Name

 

$text = "restore log MyDatabase from disk = 'P:\ForRestore\$restorefile' with norecovery;"

Write-Output $text

}

 

If you wanted to take this a step further you could actually use Invoke-Sqlcmd to execute the $text string and perform the restore (don’t even contemplate doing that if you aren’t running at least SQL 2008 R2). I always prefer to do that within SSMS though.
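For completeness, here’s a sketch of what that would look like, executing each restore directly instead of printing it. The server name is a placeholder, and it assumes the SQL snap-ins/module providing Invoke-Sqlcmd are loaded and you have restore permissions:

```powershell
# Sketch only: run each log restore immediately rather than building a script.
# "MyServer" is a hypothetical instance name; requires the SQL PS snap-ins.
foreach ($file in $filelist)
{
    $restorefile = $file.Name
    $query = "restore log MyDatabase from disk = 'P:\ForRestore\$restorefile' with norecovery;"
    Invoke-Sqlcmd -ServerInstance "MyServer" -Query $query -QueryTimeout 0
}
```

The -QueryTimeout 0 is there because large log restores can easily outlast the default command timeout.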

While I’ve used this script for a migration task it also works really well if you need to perform a point in time restore for your database from a full backup and some transaction log backups. Can you imagine how long it would take you to build out the restore string to recover a database which had transaction log backups every 15 minutes for the last 20 hours? That’s 80 log files. Want to do that manually? This is a much nicer way to go.