Tag: PowerShell

Going Into Standby Using PowerShell

Since moving to Windows 8 on my primary laptop I have found myself using PowerShell to perform shutdown and restart tasks. For me, hitting the charms and going through the options to do a simple shutdown was just way too much work, especially when I could just hit the PowerShell icon and type stop-computer or restart-computer.

One annoyance I found was that there was no pause-computer or equivalent to put the machine into standby, something very useful for a laptop.

Given that this was something I wanted to do, I put a new function in my profile to give me that ability. Now I can just type pause-computer and the machine will go into standby. Script below…

Function Pause-Computer
{
    Write-Output "Going to standby..."
    Start-Sleep -Seconds 1
    & "$env:SystemRoot\System32\rundll32.exe" powrprof.dll,SetSuspendState Standby
}

How To Log Off of Server 2012

I’m finding little things that are petty annoyances when working with Windows Server 2012.


The most recent one of these is the lack of a logoff option when connected to a server via a remote desktop session. When using the charms you get the options to Disconnect, Shut down or Restart the server, but not to log off.

It turns out that the solution is pretty simple, but not intuitive.

Just open up PowerShell and type logoff.
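For the times when you need to clear out a different session (a colleague's disconnected session, say), logoff will also take a session ID; you can list the sessions first. A quick sketch, where the session ID is just an example:

# Log off your own session
logoff

# Or list the sessions on the server and log off a specific one (ID 2 is an example)
query session
logoff 2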

Automated Backup Tuning

Tuning your backups is a wonderful thing. You could easily reduce the time it takes to back up your databases by 50% or more just by changing a few settings, and yet so few people do it. I asked myself why and came up with two answers.

  1. People do not know how to tune their backups.
  2. It takes too long to run through the tuning process.

How to tune your backups

I’m not going to go over methods for tuning your backups in this post. There are people who have done a far better job of explaining the adjustments you can make, and who have created videos to help you understand and perform the process yourself.

My biggest concern was the level of effort required to test all the possible permutations of file, maxtransfersize and buffercount values. After all, who has time to figure all of that out and then capture the data to look at the relative performance characteristics of each one?

I decided that the best way to do this was to create a nice little test harness which would run through all those tests without manual intervention, and then figure out a way to get the output from all of those backups into some kind of meaningful graph for that instant visual on performance differences.

No manual backup tuning for this DBA

The backup testing script

Step one in the automated tuning is a SQL script I created which accepts a few input variables:

  • @DatabaseName – name of the database you want to use for testing
  • @MaxBackupFiles – maximum number of files you want to write to at a time
  • @BackupFolder – destination folder for the backups
  • @WithCompression – whether or not to use SQL Server backup compression
  • @CreateJob – whether or not to create a SQL Agent job to run the tests
  • @JobLogFileName – file path and name for a log for the job
  • @OutputToScreen – outputs the backup commands to the screen

When executed, the script is designed to create a SQL Agent job which you can then run at a time of your choosing. The job will run through a series of backups for a database (40 at minimum) and capture the information in the job log file.
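For example, a hypothetical run of the harness from PowerShell (the server name, paths and values below are examples only):

# Hypothetical example of running the downloaded harness with Invoke-Sqlcmd.
# The input variables (@DatabaseName = 'MyDatabase', @MaxBackupFiles = 4,
# @BackupFolder = '\\backupserver\sqlbackups\', @WithCompression = 1,
# @CreateJob = 1, @JobLogFileName = 'C:\Temp\BackupTests.log', @OutputToScreen = 0)
# are set at the top of the script before running it.
Invoke-Sqlcmd -ServerInstance "MySQLServer" -InputFile "C:\Scripts\Backup Test Harness.sql"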

Be warned: performing this many backup tests, with so many settings, can and will impact your server's performance at some point. Be aware of this, and do not cause issues for your users when performing the tests; they will not like you for it.

Visualizing the results

Once the backup testing job is complete a job log file is left behind. This log file contains all of the information for each of the backups performed including the:

  • number of files backed up to
  • maxtransfersize setting used
  • buffercount setting used
  • speed of the backup in MB/sec
  • time the backup took to complete in seconds

Getting that information into a usable state is done with a PowerShell script. A blog post by Jonathan Kehayias described his method for parsing SQLIO output into Excel charts. Based on that script, my updated version parses the backup log file that has been created, imports the information into Excel and creates a couple of charts.
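If you don't have Excel handy, the heart of the parsing is just pulling the timing numbers out of each backup message in the log. A minimal sketch, assuming the standard "processed N pages in S seconds (X MB/sec)" backup message format and a hypothetical log path:

# Minimal sketch: extract seconds and MB/sec from each backup message in the log.
$Pattern = 'processed \d+ pages in (?<Seconds>[\d\.]+) seconds \((?<MBSec>[\d\.]+) MB/sec\)'
Get-Content "C:\Temp\BackupTests.log" |
    Select-String -Pattern $Pattern |
    ForEach-Object {
        [PSCustomObject]@{
            Seconds = [double]$_.Matches[0].Groups['Seconds'].Value
            MBSec   = [double]$_.Matches[0].Groups['MBSec'].Value
        }
    } | Sort-Object MBSec -Descending | Format-Table -AutoSize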

These charts show the speed (Figure 1) and backup time (Figure 2) for the default backup options, the absolute fastest backup to NUL, and all of the other options that were tested.

Figure 1. Backup speed results for 100GB database backed up across 10Gb network to CIFS share

Figure 2. Backup time results for 100GB database backed up across 10Gb network to CIFS share

The graphs provide a nice visual representation of the backup performance which helps to quickly identify that there are improvements that can be made.

The default SQL backup runs at just over 200MB/sec and takes about 235 seconds to complete, but we know that we can perform that same backup to a NUL device in about 75 seconds at a rate of over 1200MB/sec.

Just looking through the rest of the data it is clear that a lot of the values make a negligible difference to the performance of the backups, so we can make an informed decision on what the best file, maxtransfersize and buffercount settings should be.
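Once you've picked your values, the tuned backup is just a matter of adding those options to the backup command. A sketch with placeholder values (choose your own based on the charts):

# Hypothetical tuned backup based on the test results; all values are examples only.
$Query = @"
BACKUP DATABASE MyDatabase
TO  DISK = '\\backupserver\sqlbackups\MyDatabase_1.bak',
    DISK = '\\backupserver\sqlbackups\MyDatabase_2.bak',
    DISK = '\\backupserver\sqlbackups\MyDatabase_3.bak',
    DISK = '\\backupserver\sqlbackups\MyDatabase_4.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 2097152, BUFFERCOUNT = 50;
"@
Invoke-Sqlcmd -ServerInstance "MySQLServer" -Query $Query -QueryTimeout 0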

Hopefully this is something that you can find useful. Please give it a try and send me feedback on your testing efforts.

Downloads: Backup Test Harness.sql | Parse Backup Perf Tests.ps1

Disabling Windows Updates With PowerShell

Allan Hirt wrote a post back in February entitled “Stop Automatically Updating Production Servers” in which he gave some excellent reasons why you should ensure that your servers are only updated manually (after all, who wants an accidental production-down scenario, am I right?).

This hasn’t been a problem for me, as all of my servers are disconnected from the Internet and there’s no WSUS server configured for them to reach; we just push patches down to the machines from a central location. I was perfectly happy with this scenario, and still am. However, there was one PITA reason why I wanted to turn off the automatic checks for Windows Updates: the constant dumping of events into the System and Application Event Logs.


I was really sick of seeing these errors and wanted a quick way to turn off Windows Update without having to go through the multi-click interface nonsense.

Once more PowerShell to the rescue.

Disable-WindowsUpdates.ps1 does just what it says on the box: it disables Windows Updates on the machine it’s run on (provided that’s allowed by Group Policy). You have to execute it as an administrator. When it’s done it will remove those pesky alerts from the logs, and you can rest easy knowing that Windows won’t decide to reboot your servers in the middle of the day to apply that IE update you’ve always wanted.
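I won't reproduce the whole script here, but at its heart it only takes a few lines against the Windows Update Agent COM object. A minimal sketch (run from an elevated prompt; Group Policy can override whatever is set here):

# Minimal sketch: set Windows Update to "Never check for updates"
# (NotificationLevel 1) via the Windows Update Agent COM API.
$AutoUpdate = New-Object -ComObject "Microsoft.Update.AutoUpdate"
$AutoUpdate.Settings.NotificationLevel = 1
$AutoUpdate.Settings.Save()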

Comparing Indexes Using PowerShell

When you have data replicated out to multiple machines, or are working on migrations, it can be a challenge to ensure that the indexes are in sync and so prevent query performance issues. I found the need to check and sync up indexes myself the other day, and I didn’t have any third-party software installed to help me do the compare. Yet again PowerShell came to the rescue, and I ended up putting together a script that just kind of grew and grew, so this morning I decided to stop and put it out there for anyone to use.

The script accepts a few parameters:

  • Server1 – The source server that you’ll use as your source of index goodness
  • Server2 – The destination that you want to compare (this can be the same server)
  • Database1 – The source database that contains the tables and indexes
  • Database2 – (optional) the destination database to check; if this isn’t passed it defaults to the same value as Database1
  • OutputScriptLocation – The path and filename for the output SQL script file which you can run to create the indexes
  • Verbose – A switch which will dump lots of information to the screen, which can be useful as it will also tell you if there are missing tables on the destination

 

I decided not to do the actual index creation within the script, as it’s good to be able to check the output and be sure it’s what you want. I did, however, output the script in such a way as to make it as easy as possible to run: just open it in SSMS, switch to SQLCMD mode and execute it. The script will make the connection to your destination server and the right database, and then create the indexes. It will also output to the screen when an index is created (very useful for those times when you are creating a lot of indexes and some of them are large).
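For reference, the generated file ends up looking something along these lines (the instance, database, table and index names here are made up; the CREATE INDEX statement itself comes from SMO scripting the source index):

:ON ERROR exit
:setvar SQLInstance "Server2"
:setvar Database "MyDB"
:CONNECT $(SQLInstance)

USE $(Database);

CREATE NONCLUSTERED INDEX [IX_Customer_LastName] ON [dbo].[Customer] ([LastName] ASC);
RAISERROR('Index dbo.Customer.IX_Customer_LastName created...',0,1) WITH NOWAIT;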

 

I’ve tested it on a few servers and it’s worked out great for me, let me know if you run into any problems running it or encounter strange bugs.

Download Compare-Indexes.ps1 or get the code below.

<#
.SYNOPSIS
   Script can be used to compare indexes on tables on different servers/databases.
.DESCRIPTION
   When executed the script will check the passed-in databases on different (or the same) servers and output a SQLCMD script which
   can be executed to add missing indexes.
.PARAMETER <paramName>
   -Server1 The source SQL Instance that is used as the master
   -Server2 The destination SQL Instance that will be the target for additional indexes
   -Database1 The source database
   -Database2 The destination database (leave blank if the source and destination databases are the same)
   -OutputScriptLocation Path and filename for the .sql file that will be created for any indexes that need to be added
   -Verbose Output some logging information
.EXAMPLE
   ./Compare-Indexes -Server1 localhost -Server2 remotehost\instance1 -Database1 MyDB -OutputScriptLocation C:\Temp\NewIndexes.sql -Verbose
#>

param
(
[string]$Server1 = "$(Read-Host 'SQL Instance 1 [e.g. (local)]')",
[string]$Server2 = "$(Read-Host 'SQL Instance 2 [e.g. (local)]')",
[string]$Database1 = "$(Read-Host 'Database Name')",
[string]$Database2,
[string]$OutputScriptLocation = "$(Read-Host 'Output File Name')",
[switch]$Verbose
)

if (!$Database2) { $Database2 = $Database1 }

$MissingIndexCount = 0

try
{
    #Initialize the output file
    $FileInitiation = @"
:ON ERROR exit
:setvar SQLInstance "$Server2"
:setvar Database "$Database2"
:CONNECT `$(SQLInstance)

USE `$(Database);

"@

    $FileInitiation | Out-File $OutputScriptLocation
    $File = "True"
}
catch
{
    Write-Host "Output folder does not exist...aborting script."
    $File = "False"
}

if ($File -eq "True")
{
    if ($Server1 -ilike "*\*") { $BaseServer1 = "SQLSERVER:\SQL\$Server1\Databases\$Database1\Tables" }
    else { $BaseServer1 = "SQLSERVER:\SQL\$Server1\default\Databases\$Database1\Tables" }

    if ($Server2 -ilike "*\*") { $BaseServer2 = "SQLSERVER:\SQL\$Server2\Databases\$Database2\Tables" }
    else { $BaseServer2 = "SQLSERVER:\SQL\$Server2\default\Databases\$Database2\Tables" }

    Write-Output "Please wait...comparing $Server1.$Database1 to $Server2.$Database2"

    $Server1List = dir $BaseServer1 | select DisplayName
    $Server2List = dir $BaseServer2 | select DisplayName

    #For information purposes show a list of tables missing on server2 if verbose enabled
    if ($Verbose)
    {
        $MissingTables = Compare-Object $Server1List $Server2List -Property DisplayName | Where-Object {$_.SideIndicator -eq '<='} | Select DisplayName
        foreach ($MsTbl in $MissingTables)
        {
            $MissingTable = $MsTbl.DisplayName
            Write-Output "Missing table - $MissingTable"
        }
    }

    #The compare uses -IncludeEqual as we only want to compare indexes on tables that exist on both servers
    $TableList = Compare-Object $Server1List $Server2List -Property DisplayName -IncludeEqual | Where-Object {$_.SideIndicator -eq '=='} | Select DisplayName
    if ($TableList)
    {
        foreach ($Table in $TableList)
        {
            $MissingIndex = "" #Initialize the variable so we can null it
            Clear-Variable MissingIndex #Null the variable as we'll be checking for a value in here later
            $TableToCheck = $Table.DisplayName
            $Table1Index = dir "$BaseServer1\$TableToCheck\Indexes" | Select Name
            $Table2Index = dir "$BaseServer2\$TableToCheck\Indexes" | Select Name
            if ($Verbose) {Write-Host "Checking $TableToCheck"}
            $MissingIndex = Compare-Object $Table1Index $Table2Index -Property Name -IncludeEqual | Where-Object {$_.SideIndicator -eq '<='} | Select Name
            if ($MissingIndex)
            {
                $MissingIndexCount += 1
                $Index = $MissingIndex.Name
                Write-Output "Missing Index - $Index on $TableToCheck"
                dir "$BaseServer1\$TableToCheck\Indexes" | Where-Object {$_.Name -eq "$Index"} |
                % {$_.Script() | Out-File $OutputScriptLocation -Append; "; `r`nRAISERROR('Index $TableToCheck.$Index created...',0,1) WITH NOWAIT; `r`n`r`n " |
                    Out-File $OutputScriptLocation -Append;}
            }
        }
        if ($MissingIndexCount -gt 0) {Write-Output "File $OutputScriptLocation created. Open this file in SSMS and execute in SQLCMD mode"}
        else {Write-Output "No missing indexes found!"}
    }
    else { Write-Output "No matching tables found." }
}

Passing SQL Data To Command Line Processes Within SQL Jobs

Wow, that title is a bit of a mouthful; let me give you a scenario to help you understand the business problem I ran into today.

We have a production database running on SQL 2008 which contains a CLR procedure that accepts a reportID value, queries some data and writes out to the filesystem on a remote share. This procedure is called by multiple daily and weekly jobs to perform extracts for business partners. Yes, I know this is ugly. The database has TRUSTWORTHY on, which is a big security risk and we wanted to mitigate that risk with the minimum amount of work required.

Here’s an example of one of the jobs that makes the call in to that proc:

DECLARE @ReportID INT;

SELECT  @ReportID = ReportID
FROM    dbo.ReportList
WHERE   BusinessPartner = 'Customer1'
        AND Frequency = 'Daily';

EXEC Data2File @ReportID;

 

The first step to changing this was to get the CLR code out of source control and rebuild it as an executable file. This took the developer about 60 minutes. Now I had to figure out how we were going to call the executable with the appropriate ReportID.

The obvious way to call this would be to create a cmdline job step for D:\Export\Exe\Data2File.exe (the name and location of the new executable). This would be great except that it doesn’t contain the ReportID. The smart move here would be to just pass the ReportID along on the cmdline, except that we don’t know what that is for any particular report, as they get deleted and added fairly frequently; we need to actually pass the results of the query along. The cmdline really wasn’t going to help here.

As is frequently the case, PowerShell came to the rescue.

All I had to do was create a PowerShell job step, run the query, pass the result into a variable and then call the executable with that variable. It sounds really simple, and it only took a few minutes to get right. In the end I wound up with the following script, which runs from within a PowerShell job step:

$ErrorActionPreference = "Stop"

$Query = @"
SELECT  ReportID
FROM    dbo.ReportList
WHERE   BusinessPartner = 'Customer1'
        AND Frequency = 'Daily';
"@

$ResultSet = Invoke-Sqlcmd -ServerInstance MySQLServer -Database MyDatabase -Query $Query -QueryTimeout 30
[int]$RptID = $ResultSet.ReportID
Write-Output "Calling Data2File.exe with ReportID: $RptID"

& "D:\Export\Exe\Data2File.exe" $RptID

 

In this script I’m building the query, calling it with invoke-sqlcmd and then passing the output ReportID to the external executable.

While this is still pretty ugly and not the way that this sort of thing should be done (SSIS anyone?), it does work, and more importantly it allows me to turn off the TRUSTWORTHY setting on that database and improve the security of my SQL Servers.

Changing SQL Agent Job Subsystem Queue Lengths

In the event you are running a lot of jobs on your SQL Server, or you happen to have a great number of jobs that kick off close to each other and use the same subsystem (e.g. PowerShell), you might receive a warning in the SQL Agent Error Log stating that the job step “is being queued for the PowerShell subsystem”.

This queuing is deliberate. Microsoft put in safeguards around the various subsystems that the Agent calls in order to help prevent any particular subsystem from taking over all the worker threads (after all you wouldn’t want your 25 PowerShell job steps preventing your queries from running because they’d taken all the worker threads would you?)

This is one of the reasons that I like to use dedicated servers just to handle all jobs/ETL processes. I like to keep my local jobs to just maintenance processes (backup/reindex/checkdb) wherever possible. In doing this I need to be sure that I am able to increase the max worker threads associated with a subsystem so that jobs don’t get delayed. As such I keep an eye on the SQL Agent Error Log and server resources to check that I don’t hit thread starvation issues (on a dedicated job server that’s pretty unusual).

Today I saw a bunch of warnings in the log to do with PowerShell job steps being queued. This wasn’t a surprise as I had added a whole bunch of new jobs and had not adjusted things from the default values. First I went through and grabbed the default information for each of the subsystems so that I could easily revert if the need arose:

SELECT [subsystem_id]
      ,[subsystem]
      ,[max_worker_threads]
  FROM [msdb].[dbo].[syssubsystems]


As you can see from the results, only 2 worker threads are set for PowerShell. I needed to bump this up to 20 so that it could handle the concurrent job load. I have plenty of memory available on the server, so I’m not concerned with the overhead associated with loading PowerShell; that’s always something you need to keep in mind (oh, and you have to be a sysadmin to be able to update this table).

UPDATE msdb.dbo.syssubsystems
SET max_worker_threads = 20
WHERE subsystem = N'PowerShell'

 

Checking the value again yields the updated information:

SELECT [subsystem_id]
      ,[subsystem]
      ,[max_worker_threads]
  FROM [msdb].[dbo].[syssubsystems]


Now it’s a simple case of restarting the SQL Server Agent service for the change to take effect.
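Staying in PowerShell, the restart is a one-liner (the service is SQLSERVERAGENT for a default instance; named instances use SQLAgent$InstanceName):

# Restart the Agent so the new max_worker_threads value takes effect.
# Default instance shown; adjust the service name for a named instance.
Restart-Service -Name 'SQLSERVERAGENT'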

Note: Applying Service Packs or Cumulative Updates could potentially reset this value, so make sure you document the change and confirm it after any upgrade (see, for example, http://support.microsoft.com/kb/972759).

Configuring DTC Security In A Cluster With PowerShell

The other day Allan Hirt (blog|twitter) wrote a fantastic post, “How to properly configure DTC for Clustered Instances of SQL Server with Windows Server 2008 R2”. He included a PowerShell script to handle all the funky setup stuff. The only downside was that afterwards you had to manually go through and configure Network Access and security for the clustered DTC using the GUI.

“There has to be a better way,” I thought. I recalled how security back in 2003 could be set in the registry, so I went digging. Eventually I was able to find the requisite keys under the Distributed Transaction Coordinator resource. From there it was just a case of tying things back so that those keys could be easily found from the SQL resource name, and then updating them so that they matched what Allan demonstrated on his page.

I’ve tested it on a couple of my clusters and it’s worked exactly as designed; you just need to change the ServiceName variable so that it matches your clustered service name, and then decide what authentication method you want to use (it defaults to the most secure level: Mutual Authentication Required).

Get the code below or download ConfigureMSDTC.ps1

<#
.SYNOPSIS
   Configures MSDTC on a cluster
.DESCRIPTION
   This configures a basic networked config for MSDTC for clustered SQL Server instances.
   Uncomment the transaction manager security setting and enter the correct ServiceName that can be found in FC manager
.PARAMETER <paramName>
   NONE
.EXAMPLE
   NONE
#>

$ServiceName = "SQL Server (SOX2)"

#--What MSDTC Transaction Manager Security setting is requested? (uncomment one)
$TranManSec = "Mutual" #Mutual Authentication Required
#$TranManSec = "Incoming" #Incoming Caller Authentication Required
#$TranManSec = "None" #No Authentication Required

#Grab a list of cluster groups
$GroupPath = "HKLM:\Cluster\Groups"
$GroupList = dir $GroupPath

#Iterate through the groups to find the one matching the service name
foreach ($Group in $GroupList)
{
    $GroupChildPath = $Group.PSPath
    if ((Get-ItemProperty -path $GroupChildPath -name Name).Name -eq $ServiceName)
    {
        #We got a match! Now grab a list of the resources in this service
        $ReferencedResources = (Get-ItemProperty -path $GroupChildPath -name Contains).Contains
        foreach ($Resource in $ReferencedResources)
        {
            #Query each of the resources for their type and work with MSDTC
            $ResourcePath = "HKLM:\Cluster\Resources\$Resource"
            if ((Get-ItemProperty -path $ResourcePath).Type -eq "Distributed Transaction Coordinator")
            {
                #We found the MSDTC resource for that service group, let's configure it
                $SecurityPath = "$ResourcePath\MSDTCPRIVATE\MSDTC\Security"
                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccess" -value 1
                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessClients" -value 0
                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessTransactions" -value 0
                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessInbound" -value 1
                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessOutbound" -value 1
                Set-ItemProperty -path $SecurityPath -name "LuTransactions" -value 1

                #Now configure the authentication method for MSDTC (defaulting to Mutual Auth as it's most secure)
                $MSDTCPath = "$ResourcePath\MSDTCPRIVATE\MSDTC"
                if ($TranManSec -eq "None")
                {
                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 1
                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 0
                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 0
                }
                elseif ($TranManSec -eq "Incoming")
                {
                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 0
                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 0
                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 1
                }
                else
                {
                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 0
                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 1
                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 0
                }
            }
        }
    }
}

 

Querying SQL Error Logs With PowerShell

It’s a real pain trying to get data out of the SQL Error Logs, I’ve noticed. You either don’t cycle your files very often and end up with stupid amounts of data to sift through, or you cycle them frequently and then have to figure out which of the files relates to the data you want. You could use T-SQL to import all the files and grab stuff; alternatively, you can use PowerShell to get the information.

I wanted to take this process a step further, so I created a PowerShell script that accepts servername, start and end date/time parameters and returns to you, the kind reader, all the entries in the logs between those times (and just between those times). This is great for when someone asks you for information from the log for 3-7 days ago. One quick execution of the script and the data is sitting there in PowerShell grid format, ready to go. Even better, using the grid you can filter the information further and quickly copy/paste it to Excel or some other program.

I’ve pasted the code down below, but it might be easier to download Get-SQLErrorLogs.ps1

This requires the PowerShell SQL snap-ins that come with 2008/R2, but it can query versions 2000 and up.

Next steps are to add filtering and options to output to file. I’m sure I’ll get to that eventually.

 

<#
.SYNOPSIS
   Retrieves SQL Server error log entries
.DESCRIPTION
   Pulls error log details from SQL Server and outputs them to the client
.PARAMETER <paramName>
   -SQLInstance (required) The SQL Instance to pull the logs from
   -MaxDate (required) The most recent log record to return
   -MinDate (required) The oldest log record to return
.EXAMPLE
   ./Get-SQLErrorLogs -SQLInstance MyInstance -MinDate 2012-05-10 -MaxDate "2012-05-15 09:00"
#>

param
(
[string]$SQLInstance = $(Read-Host -prompt "SQL Instance [e.g. MyCluster\Instance]"),
[string]$MinDate = $(Read-Host -prompt "Lowest log date entry (yyyy-mm-dd hh:mm)"),
[string]$MaxDate = $(Read-Host -prompt "Highest log date entry (yyyy-mm-dd hh:mm)"),
[int]$LogNumber
)

cls

$CheckDateMin = [datetime]::Parse($MinDate)
$CheckDateMax = [datetime]::Parse($MaxDate)

Function Main {
    #Load the SQL snap-ins
    Load-SQLSnapins

    #Create a datatable to hold the data
    $Results = New-Object system.Data.DataTable "Results"
    #Setup columns
    $LogDate = New-Object system.Data.DataColumn LogDate,([DateTime])
    $ProcessInfo = New-Object system.Data.DataColumn ProcessInfo,([string])
    $Text = New-Object system.Data.DataColumn Text,([string])
    #Add columns to datatable
    $Results.columns.add($LogDate)
    $Results.columns.add($ProcessInfo)
    $Results.columns.add($Text)

    #Another datatable to hold the logs already read
    $ReadLogs = New-Object system.Data.DataTable "ReadLogs"
    $ReadLogNum = New-Object system.Data.DataColumn LogNumber,([int])
    $ReadLogs.columns.add($ReadLogNum)

    #Shred the SQL Instance to get the right data
    $SQLName = Shred-SQLName $SQLInstance
    $InstanceName = Shred-SQLInstance $SQLInstance

    #Grab SQL logs by date
    Get-SQLLogByDate $SQLName $InstanceName $SQLInstance $CheckDateMin $CheckDateMax

    $Results | Sort-Object LogDate | ogv
}

Function Shred-SQLInstance ($SQLInstance) {
    $SQLInstance = $SQLInstance.ToUpper()
    if ($SQLInstance -ilike "*\*")
    {
        $string = $SQLInstance.Split("\")
        $Instance = $string[1]
        Return $Instance
    }
    else
    {
        $Instance = "DEFAULT"
        Return $Instance
    }
}

Function Shred-SQLName ($SQLInstance) {
    $SQLInstance = $SQLInstance.ToUpper()
    if ($SQLInstance -ilike "*\*")
    {
        $string = $SQLInstance.Split("\")
        $Name = $string[0]
        Return $Name
    }
    else
    {
        $Name = $SQLInstance
        Return $Name
    }
}

Function Get-SQLLogByDate ($SQLName, $InstanceName, $SQLInstance, $CheckDateMin, $CheckDateMax) {

    IF ($InstanceName -eq "DEFAULT") {$Logs = DIR SQLSERVER:\SQL\$SQLName | Where-Object {$_.Name -eq $SQLInstance}; }
    ELSE {$Logs = DIR SQLSERVER:\SQL\$SQLName | Where-Object {$_.Name -eq $SQLInstance}; }

    $LogCount = ($Logs.EnumErrorLogs() | Measure-Object).Count
    Write-Output "--------------------------------------------------------------------------------------------------------------"
    Write-Output "Checking error logs on server $SQLInstance for logs created between $CheckDateMin & $CheckDateMax"
    Write-Output "--------------------------------------------------------------------------------------------------------------"
    Write-Output "Checking through $LogCount logs, please wait."
    Write-Output "-----------------------------------------------"

    $LogList = $Logs.EnumErrorLogs()
    #We'll need to do something hokey here. The date on a SQL error log is not when the log was created, rather when it was closed.
    #This means we'll have to get the min log in the range and then also read the one before that.
    ForEach ($ErrorLog in $LogList) {

        IF ($ErrorLog.CreateDate -ge $CheckDateMin -and $ErrorLog.CreateDate -le $CheckDateMax)
        {
            $CurrentLog = $ErrorLog.CreateDate
            $CurrentLogName = $ErrorLog.Name
            Write-Output "Currently reading error log $CurrentLogName dated $CurrentLog..."
            #Write the log number to the $ReadLogs datatable
            $NewLogRow = $ReadLogs.NewRow(); $NewLogRow.LogNumber = $CurrentLogName; $ReadLogs.Rows.Add($NewLogRow);
            $Logs.ReadErrorLog(($ErrorLog.Name)) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |
            % {$NewRow = $Results.NewRow();
                $NewRow.LogDate = $_.LogDate;
                $NewRow.ProcessInfo = $_.ProcessInfo;
                $NewRow.Text = $_.Text;
                $Results.Rows.Add($NewRow);
            }
        }
    }
    #Now we've read all the logs in range, read the one before the newest so we get that extra data
    #unless the most current log (0) is included in the results already
    #Write-Output $ReadLogs | ft -AutoSize
    $LowestLog = ($ReadLogs | Measure-Object LogNumber -Minimum).Minimum
    IF ($LowestLog -gt 0)
    {
        $LogToRead = $LowestLog - 1
        Write-Output "Currently reading error log $LogToRead..."
        #Write the log number to the $ReadLogs datatable
        $NewLogRow = $ReadLogs.NewRow(); $NewLogRow.LogNumber = $LogToRead; $ReadLogs.Rows.Add($NewLogRow);
        $Logs.ReadErrorLog($LogToRead) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |
        % {$NewRow = $Results.NewRow();
            $NewRow.LogDate = $_.LogDate;
            $NewRow.ProcessInfo = $_.ProcessInfo;
            $NewRow.Text = $_.Text;
            $Results.Rows.Add($NewRow);
        }
    }
    #On the off chance that the dates reside in the current log only, pull that one in
    #but only if no other logs have been loaded (edge case)
    IF (!$LowestLog)
    {
        Write-Output "Currently reading error log 0..."
        $Logs.ReadErrorLog(0) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |
        % {$NewRow = $Results.NewRow();
            $NewRow.LogDate = $_.LogDate;
            $NewRow.ProcessInfo = $_.ProcessInfo;
            $NewRow.Text = $_.Text;
            $Results.Rows.Add($NewRow);
        }
    }
}

Function Load-SQLSnapins
{
    $CheckSQLSnapins = get-pssnapin | where {$_.Name -match "SqlServerCmdletSnapin100"}
    if (!$CheckSQLSnapins)
    {
        Add-PSSnapin SqlServerCmdletSnapin100
        Write-Host "SqlServerCmdletSnapin100 Loaded"
    }
    else { Write-Host "SqlServerCmdletSnapin100 Already Loaded" }

    $CheckSQLSnapins = get-pssnapin | where {$_.Name -match "SqlServerProviderSnapin100"}
    if (!$CheckSQLSnapins)
    {
        Add-PSSnapin SqlServerProviderSnapin100
        Write-Host "SqlServerProviderSnapin100 Loaded"
    }
    else { Write-Host "SqlServerProviderSnapin100 Already Loaded" }
}

Main

Changing The Default Email Address In Office 365

Not SQL related today but a good example of PowerShell in action.

I recently migrated my personal email over to Microsoft’s Office 365. The migration itself went very smoothly, but I ran into an issue where the primary email address on the account was showing with a @sirsql.onmicrosoft.com address instead of @sirsql.net. This seriously bugged me and I couldn’t find a way to change it in the Office 365 admin interface.

After some digging I found that you could only make a change such as this using PowerShell, and that you couldn’t do it using the account you want to change (makes sense).

So I created a new admin account just for this purpose and initiated a session in Office 365 (entering the admin user’s credentials when prompted):

$LiveCred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $Session

Now I had to find the account, easy enough:

get-mailbox | where-object {$_.Identity -eq "<my account>"} | fl

This gave me all the details, including the PrimarySmtpAddress.


The member properties showed that this value could be both read and set, so that’s what I went to do:

set-mailbox <my account> -PrimarySmtpAddress <my account>@sirsql.net

And got an error:

A positional parameter cannot be found that accepts argument ‘-PrimarySmtpAddress’.
    + CategoryInfo          : InvalidArgument: (:) [Set-Mailbox], ParameterBindingException
    + FullyQualifiedErrorId : PositionalParameterNotFound,Set-Mailbox

 

Well, this made no sense. No matter what I tried, I could not do anything other than a get on the PrimarySmtpAddress. Back to searching, and I discovered that Microsoft has not provided the capability to run a set on that particular member object in Office 365 (although it’s available with Exchange). Way to go there, Microsoft.

After some more digging I was able to get the SMTP address changed by setting the WindowsEmailAddress to the correct value:

set-mailbox <my account> -WindowsEmailAddress <my account>@sirsql.net

 

One other note: when using PowerShell to manage O365, where it says to use the Identity, use the account’s Alias instead, or you might end up with unintended consequences.
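In other words, something along these lines, with the alias as the identity (placeholder values):

# Placeholder example: pass the mailbox Alias as the Identity value
Set-Mailbox -Identity "myalias" -WindowsEmailAddress "me@sirsql.net"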