SQL Clustering–Network Binding Order Warnings

In setting up my Windows 2008 R2/SQL 2008 R2 cluster this week I came across a warning in the Setup Support Rules stating that “The domain network is not the first bound network.”

 Network Binding Order Error

 

This didn’t make any sense to me as I had been very careful in setting the binding order in the network connections properties:

Binding Order Set

 

Deciding to look for further information on this I opened up the Detail.txt file from the Setup Bootstrap/Log directory and searched for network binding order:

Binding Order Setup Log

 

Strangely, the first bound network was Local Area Connection* 9. I had no idea what this network was as it was not listed in the network adapter list.

I remembered a discussion on Twitter a couple of weeks ago between Allan Hirt (blog|twitter) and Denny Cherry (blog|twitter) where they talked about this exact problem and the fact that it was a hidden cluster adapter which needed to be moved in the binding order, and that had to be done in the registry.

I opened up regedit and drilled down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Linkage and opened up the Bind value:

Registry Showing Incorrect Binding Order

 

I’ve highlighted the first item in the bind order. The trouble is that this GUID value really means nothing to us on its own, so it’s time to crack open PowerShell.

gwmi Win32_NetworkAdapterConfiguration | where-object {$_.IPEnabled -eq "True"} | ft Description, SettingID -auto

List of active adapters

 

As you can see the Microsoft Failover Cluster Virtual Adapter GUID matches the first bound value. Back to the registry to move that adapter down the list with a quick cut and paste.

*Warning, playing in the registry can cause system instability, BSOD problems, singularities and quantum fluctuations*

Registry with correct binding
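To verify the new order without reopening regedit, here’s a minimal sketch (assuming the default Tcpip\Linkage location) that reads the same Bind value from PowerShell:

# read the current binding order straight from the Bind value
$bind = (Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage").Bind
$bind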

 

Starting up the cluster install again, the network binding order warning was gone.

 

Hopefully the next releases of Windows and SQL won’t have this problem. In the meantime this is good to know. Thanks Allan and Denny for engaging on Twitter and leading me to the answer to the problem.

Rock The PASS Summit Vote

The PASS Summit 2011 is fast approaching. It’s being held in October this year and the call for speaker sessions has already closed. In a great turn of events PASS is asking you to take part in helping to decide on the sessions to be presented. Up until May 20th you can vote on the sessions that you want to see.

I’ve already voted for multiple sessions over at the Session Preferencing page; please ensure that you do so as well, and I hope you’ll consider voting for the two sessions I submitted.

 

PowerShell: Are you checking out my profile? [100 level]
Session Category: Regular Session (75 minutes) 
Session Track: Enterprise Database Administration and Deployment 

PowerShell is a very powerful management tool and you can spend hours writing magical scripts to provide automation for frequently run tasks. Often forgotten is the PowerShell profile, a place you can add your own functions which can provide you lightning-fast access to information.

In this session we’ll talk about the power a profile puts at your fingertips. I will also demo (and share) several PowerShell functions that I use frequently for common tasks like checking database backups and disk space.
I’ll show you my PowerShell profile if you show me yours.
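As a small taste of the idea, here’s a hypothetical example (not one of the actual session demos) of the kind of function you might drop into your $profile:

# once this lives in your profile it's available in every session
function Get-DiskFree ($Computer = "localhost")
{
    gwmi Win32_LogicalDisk -ComputerName $Computer -Filter "DriveType=3" |
        Select DeviceID, @{Name="FreeGB"; Expression={"{0:N1}" -f ($_.FreeSpace / 1GB)}}
}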

 

I’ve given this presentation at a couple of SQLSaturday events and it’s proven very popular both times. I have a couple of new items in my profile which extend this topic further.

 

Centralized auditing of permissions with SQL Server and PowerShell [100 level]
Session Category: Regular Session (75 minutes) 
Session Track: Enterprise Database Administration and Deployment 

As a DBA it can be a challenge to know who has permissions to which SQL instances and objects, and the more instances you have the more complex that task becomes. In this presentation I’ll share a method using PowerShell and TSQL that can be used to capture permissions from all of your SQL instances and load them into a centralized location. We’ll even take it a step further by auditing those permissions so that we can quickly and easily identify any that might have changed.

I will actually be presenting a shortened version of this on May 18th at 1pm EST for the PASS PowerShell Virtual Chapter. I hope you can attend that and get an idea of how the longer session could help you in auditing your SQL servers.

T-SQL Tuesday #18–CTE A Simpler Form Of Recursion

It’s T-SQL Tuesday time folks. Bob Pusateri (blog|twitter) is the host and has chosen Common Table Expressions (CTEs) as the topic of the month.

Several ideas came to mind for writing about CTEs; one of the best uses I’ve seen for one recently was to grab the name of the most recent backup file for a database. You’ll have to ask Aaron Nelson (blog|twitter) to hook you up with that one though.

I thought I’d write about an interesting problem that was posed to me in an interview a couple of months ago.

 

Here’s a table

I was given a table with two columns: ManagerID and EmployeeID.

This table was populated with a few values thusly:

USE TempDB
GO

CREATE TABLE #ManagersEmployees (ManagerID INT, EmployeeID INT)

INSERT INTO #ManagersEmployees
VALUES (1,2), (2,3), (2,4), (2,5), (3,6), (3,7), (3,8),
       (4,10), (5,11), (5,12), (12,13), (12,14)
GO

I was asked to write a recursive procedure to pull out the manager/employee tree for a given ManagerID.

 

CTEs to the rescue

Having done a little work with CTEs, and knowing that I could easily write a recursive query using them, I was able to put together a script quite quickly to pull the information needed. By throwing it into a procedure it could be executed quickly and easily.

CREATE PROCEDURE ManagerRecursion_CTE @ManagerID INT
AS
SET NOCOUNT ON
;WITH Managers_CTE (ManagerID, EmployeeID)
AS ( SELECT ManagerID, EmployeeID FROM #ManagersEmployees
        WHERE ManagerID = @ManagerID
     UNION ALL
     SELECT e.ManagerID, e.EmployeeID
        FROM #ManagersEmployees e
            INNER JOIN Managers_CTE c ON e.ManagerID = c.EmployeeID)
SELECT * FROM Managers_CTE ORDER BY ManagerID, EmployeeID
GO

 

I tested this and it worked nicely; it was a simple solution and provided the requested results.
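For example, running EXEC ManagerRecursion_CTE @ManagerID = 2 against the sample data should walk the whole tree under manager 2:

ManagerID   EmployeeID
2           3
2           4
2           5
3           6
3           7
3           8
4           10
5           11
5           12
12          13
12          14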

 

That’s not recursion

The trouble is that while the results were correct, I was advised that this was not recursive and did not meet the criteria. Back to the drawing board then.

After a lot more work I came up with the following:

CREATE PROCEDURE ManagerRecursion_NonCTE @ManagerID INT
AS
SET NOCOUNT ON
DECLARE @rowcnt INT, @lastrow INT
DECLARE @Tbl_Results TABLE (rowid INT IDENTITY(1,1), ManagerID INT, EmployeeID INT)

INSERT INTO @Tbl_Results (ManagerID, EmployeeID)
SELECT ManagerID, EmployeeID
FROM #ManagersEmployees
WHERE ManagerID = @ManagerID

SET @rowcnt = @@ROWCOUNT
SET @lastrow = 0

WHILE @rowcnt > 0
BEGIN
    INSERT INTO @Tbl_Results (ManagerID, EmployeeID)
    SELECT m.ManagerID, m.EmployeeID
    FROM #ManagersEmployees m
        INNER JOIN @Tbl_Results t
            ON m.ManagerID = t.EmployeeID
    WHERE rowid > @lastrow

    SELECT @rowcnt = @@ROWCOUNT
    SELECT @lastrow = @@IDENTITY - @rowcnt
END

SELECT ManagerID, EmployeeID FROM @Tbl_Results ORDER BY ManagerID, EmployeeID
GO

 

I tested this and got back the same results as the first procedure for all the values I passed in. Deep breath on this one as it was pushing the limits of what I could produce on the spot in an interview.

 

That’s still not recursion

Again, while the results were correct, this was not recursive. It was back to the drawing board once more. This time I had to admit defeat; however, I did tell the interviewer that I would work on a solution at home and email it in. He gave me his contact information, we completed the rest of the interview, and I went home determined to get the right data in the manner that the interviewer wanted.

After a whole bunch of reading and a lot of work I finally came up with correct results in a recursive procedure which I emailed in to get feedback.

CREATE PROC ManagerRecursion @EmpID INT, @InnerLoop INT = 0
AS
BEGIN
    IF @InnerLoop = 0
    BEGIN
        CREATE TABLE #Tbl_Results (ManagerID INT, EmployeeID INT)
    END

    INSERT INTO #Tbl_Results (ManagerID, EmployeeID)
    SELECT ManagerID, EmployeeID FROM #ManagersEmployees WHERE ManagerID = @EmpID

    SET @EmpID = (SELECT MIN(EmployeeID) FROM #ManagersEmployees WHERE ManagerID = @EmpID)

    WHILE @EmpID IS NOT NULL
    BEGIN
        EXEC ManagerRecursion @EmpID, 1
        SET @EmpID = (SELECT MIN(EmployeeID) FROM #ManagersEmployees WHERE EmployeeID > @EmpID
            AND EmployeeID NOT IN (SELECT ManagerID FROM #Tbl_Results)
            AND EmployeeID IN (SELECT EmployeeID FROM #Tbl_Results))
    END

    IF @InnerLoop = 0
    BEGIN
        SELECT * FROM #Tbl_Results ORDER BY ManagerID, EmployeeID
        DROP TABLE #Tbl_Results
    END
END
GO

 

Unfortunately no feedback was forthcoming. I felt good about providing this solution despite that. I enjoy a challenge and this was certainly one of those.

 

So what was the right way?

That depends on who you ask. For me the first way was the right way: it performed well, the code was clean and simple, and it required a minimal amount of development. I feel that this kind of problem is exactly the reason CTEs were created in the first place.

The second solution was a lot more work, the query got more complex and it does not perform as well as the first.

The final procedure was true recursion in that the procedure calls itself over and over again until all of the results are returned. It’s a loop that makes cursors look like they perform well. It was easily the worst performing of the three.

 

It all goes to show there’s more than one way to get the results you need. It’s also interesting how an example like this shows just how much work the SQL development team has done to help reduce complexity and improve performance.

Upcoming Clustering Fun

I’ve a fun week coming up: I get to do a bunch of installs of SQL 2008 R2 on some Windows 2008 R2 clusters.

 

I’ve had a lot of experience in building and working with Windows 2003 clusters and deploying SQL 2008, but this is a new era for me and means new toys to play with and new things to learn. I’m excited to really get to grips with Windows 2008 clustering; there are some significant differences between it and 2003, which is going to provide some challenges. I’m aiming to perform quite a few build-ups and tear-downs of one cluster over the next couple of weeks to build up my comfort level with the newer technology, and to work towards getting some nice scripts together to do so.

 

As a consequence of Windows 2008 R2 being new to me for clustering I decided to pick up Pro SQL Server 2008 Failover Clustering by Allan Hirt (blog|twitter), a Clustering MVP and guru (who has a new whitepaper out about Applying Updates to a Clustered Instance of SQL Server 2008 or SQL Server 2008 R2). Allan was on the MidnightDBA web show a couple of weeks ago (go watch it) where there was lots of conversation on several aspects of High Availability.

 

I’m still only a quarter of the way through the book but have already found out some fantastic information on things such as the network priority and installing server roles from the command line.
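As an example of the latter, here’s a quick sketch (assuming the ServerManager module that ships with Windows 2008 R2) of adding the failover clustering feature from PowerShell:

# load the ServerManager module and install the Failover Clustering feature
Import-Module ServerManager
Add-WindowsFeature Failover-Clustering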

 

As the build goes along I plan on putting up a couple of posts to track the process and gotchas as well as a final review of Allan’s book, so check back soon.

Handling Periods In Database Names With PowerShell

I came across an interesting problem today. We have a vendor whose tool, for some unknown reason, creates a database with a period in the name (i.e. Data.Base). Why on earth anyone would do this I don’t know, but it’s happened. I found this out when my PowerShell script to capture database sizes failed.

It seems that PowerShell was not able to handle that period.

 

Reproducing the problem

Create a new database using SSMS:

CREATE DATABASE [Terrible.DBName]

 

Now open up PowerShell and try to query what’s in that database:

dir SQLSERVER:\SQL\localhost\default\databases\Terrible.DBName


Doing so gives the error:

Get-ChildItem : Cannot retrieve the dynamic parameters for the cmdlet. SQL Server PowerShell provider error: The number of keys specified does not match the number of keys required to address this object. The number of keys required are: Name.
At line:1 char:4
+ dir <<<<  SQLSERVER:\SQL\localhost\default\databases\Terrible.DBName
    + CategoryInfo          : InvalidArgument: (:) [Get-ChildItem], ParameterBindingException
    + FullyQualifiedErrorId : GetDynamicParametersException,Microsoft.PowerShell.Commands.GetChildItem

I tried encapsulating the path in quotes and escaping the period with a backtick, both to no avail.

dir 'SQLSERVER:\SQL\localhost\default\databases\Terrible.DBName'
dir SQLSERVER:\SQL\localhost\default\databases\Terrible`.DBName


 

Fixing the problem

Eventually I found the solution. Changing the period to its hex value of %2e allowed me to query correctly.

dir SQLSERVER:\SQL\localhost\default\databases\Terrible%2eDBName

 

PeriodInDBName

 

 

Updating scripts

Now that I’ve found the problem and the resolution, I’ll be going through my scripts and changing them to take this into account.

One thing to note: -replace takes a regular expression, and a bare period matches any character, so when using it to replace the period in a variable you need to place the period within square brackets to make it a literal, otherwise the entire string gets replaced with %2e.

The wrong way:

$a = "Some.String"
$a = $a -replace ".", "%2e"
$a


PeriodInDBNameWrongString

 

The right way:

$a = "Some.String"

$a = $a -replace "[.]", "%2e"

$a


PeriodInDBNameRightString
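To avoid sprinkling that replace through every script, you could wrap the fix in a small helper; this is just a hypothetical sketch:

# escape the period as its hex value so the SQL provider can parse the path
function ConvertTo-SqlProviderName ([string]$Name)
{
    $Name -replace "[.]", "%2e"
}

dir ("SQLSERVER:\SQL\localhost\default\databases\" + (ConvertTo-SqlProviderName "Terrible.DBName"))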

Checking Database Space With PowerShell

Have you ever been asked the question “quick, I need to know how much free space there is in the Nodrap database on server Cromulent”?

Ever wished that you could quickly grab that information?

You might well have a TSQL script that will grab that information for you. I’ve been working on pulling that information centrally, and grabbing that data for multiple servers becomes a much simpler task when using PowerShell.

 

 

Why PowerShell and not TSQL?

For me the big bonus that PowerShell provides for grabbing this data is that I can pull it quickly and easily, I can run it against multiple machines just as simply, and the same script works for SQL 2000 and up.

 

 

How big is my database?

Open up your favorite PowerShell ISE (you can even use the one that comes with Windows). If they aren’t already loaded you’ll need to add the SQL snap-ins:

Add-PSSnapin SqlServer* 


 

 

To get a list of databases, their sizes, and the space available in each on the local SQL instance:

dir SQLSERVER:\SQL\localhost\default\databases | Select Name, Size, SpaceAvailable | ft -auto


 

CheckDBWPS SizeSpace

 

Very quick and easy. The information here is a little misleading though.

Size is the size of the database and includes the transaction log(s). SpaceAvailable only represents the space available in the data files, unless we’re looking at SQL 2000, in which case it’s the free space in the data and log files combined. To make things even more confusing, Size is reported in MB and SpaceAvailable in KB.
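One way to keep the numbers straight is to normalize the units in the query itself, for example with a calculated property:

# Size is already MB; convert SpaceAvailable from KB to MB so the columns line up
dir SQLSERVER:\SQL\localhost\default\databases |
    Select Name, Size, @{Name="SpaceAvailableMB"; Expression={$_.SpaceAvailable / 1024}} |
    ft -auto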

To get more details we need to look at the transaction log information.

 

 

How big is my transaction log?

To grab this information we need to go deeper. In this case we will focus on the AdventureWorks database.

dir SQLSERVER:\SQL\localhost\default\databases\AdventureWorks\logfiles |
    Select Name, Size, UsedSpace | ft -auto


 

CheckDBWPS LogSpace

 

This information is all in KB, which helps it make a little more sense.

 

Doing the math

Now that we have the size of the database, the size of the log, and the free space in each, some quick math gives us the sizing information (a worked example follows the list):

  • Size = database size in MB (data plus log)
  • Size (from log) / 1024 = log size in MB
  • UsedSpace / 1024 = log used in MB
  • (Size (from log) – UsedSpace) / 1024 = log free in MB
  • Size – (Size (from log) / 1024) = data files size in MB
  • SpaceAvailable / 1024 = space free in data files in MB
  • Size – (SpaceAvailable / 1024) – (Size (from log) / 1024) = space used in data files in MB
  • Size – (Size (from log) / 1024) – ((SpaceAvailable / 1024) – ((Size (from log) – UsedSpace) / 1024)) = space used in data files in MB (SQL 2000, where SpaceAvailable includes log free space)
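Here’s a quick worked example, with made-up numbers purely to illustrate the arithmetic:

$Size      = 180   # database Size in MB (data + log)
$LogSizeKB = 2048  # log file Size in KB
$LogUsedKB = 512   # log UsedSpace in KB

$LogSizeMB  = $LogSizeKB / 1024                 # 2 MB of log
$LogUsedMB  = $LogUsedKB / 1024                 # 0.5 MB used
$LogFreeMB  = ($LogSizeKB - $LogUsedKB) / 1024  # 1.5 MB free
$DataSizeMB = $Size - $LogSizeMB                # 178 MB of data files
"{0} MB data, {1} MB log ({2} MB used, {3} MB free)" -f $DataSizeMB, $LogSizeMB, $LogUsedMB, $LogFreeMB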

 

Outputting nice results

Running all this and grabbing the data from different areas can lead to messy results. This is where the PowerShell DataTable comes to the rescue.

A DataTable is a .NET object, much like a SQL table, that can hold data for you.

First you create the object, define and add columns and then add rows before finally returning the data.

Here’s a quick example:

$dataTable = New-Object system.Data.DataTable "SomeStuff"

#Specify the column names for the data table
$col1 = New-Object system.Data.DataColumn FunkyText,([string])
$col2 = New-Object system.Data.DataColumn JustANumber,([int])
$col3 = New-Object system.Data.DataColumn TodaysDate,([datetime])

#Add the columns to the data table
$dataTable.Columns.Add($col1)
$dataTable.Columns.Add($col2)
$dataTable.Columns.Add($col3)

#Create a new row
$row = $dataTable.NewRow()

#Add data for each column in the row
$row.FunkyText = "Something really funky"
$row.JustANumber = 1395
$row.TodaysDate = ((Get-Date).ToString("yyyy-MM-dd HH:mm:ss"))

#Add the new row to the datatable
$dataTable.Rows.Add($row)

#Output the datatable
$dataTable | Out-GridView


 

CheckDBWPS Datatable

 

 

Putting everything together

Running the following script will pull the data and log information for all the databases on the SQL instance specified in the top parameter. It also handles named instances without modification, and is factored to return the correct information for SQL 2000 as well as higher versions.

Note: Run DBCC UPDATEUSAGE on SQL 2000 instances to ensure that the data is accurate.

Param ($Servername = 'localhost')

Function Get-DBSizes ($Servername)
{
$Servername

#Check to see if it's a named instance, if it is the location will be different
if ($Servername -ilike "*\*") { $BaseLocation = "SQLSERVER:\SQL\$Servername\databases" }
    else { $BaseLocation = "SQLSERVER:\SQL\$Servername\default\databases" }

$dblist = dir $BaseLocation -Force | select Name

foreach ($db in $dblist)
{
    $location = $db.Name
    $locationFixed = $location -replace "[.]", "%2e"

    #Grab the database information
    $DBInfo = dir $BaseLocation -Force | Where-Object {$_.Name -eq $location; $_.Refresh()} |
        select Name, Size, SpaceAvailable, CompatibilityLevel

    #Pull the log information
    #Use Measure-Object to sum up sizes in the event that we might have more than one log file
    $logsize = dir $BaseLocation\$locationFixed\logfiles | Where-Object {$_.Name; $_.Refresh()} | Measure-Object -Property Size -Sum
    $logused = dir $BaseLocation\$locationFixed\logfiles | Where-Object {$_.Name; $_.Refresh()} | Measure-Object -Property UsedSpace -Sum

    $TotalDBSizeMB = $DBInfo.Size
    $LogSizeMB = ($logsize.Sum / 1024)
    $LogUsedMB = ($logused.Sum / 1024)
    $LogFreeMB = ($LogSizeMB - $LogUsedMB)
    $DataFilesSizeMB = ($TotalDBSizeMB - $LogSizeMB)
    $SpaceAvail = ($DBInfo.SpaceAvailable / 1024)

    #Because SQL 2000 SpaceAvailable includes log space we have to do different calculations depending on version
    #Run DBCC UPDATEUSAGE on your 2000 databases nightly to help ensure this data is accurate
    $Compat = $DBInfo.CompatibilityLevel
    if ($Compat -eq 'Version80') { $DataFileFreeMB = ($SpaceAvail - $LogFreeMB) }
        else { $DataFileFreeMB = $SpaceAvail }

    $DataFileUsedMB = ($DataFilesSizeMB - $DataFileFreeMB)
    $DataFilePercentUsed = ($DataFileUsedMB / $DataFilesSizeMB) * 100
    $DataFilePercentFree = 100 - $DataFilePercentUsed
    $LogPercentUsed = ($LogUsedMB / $LogSizeMB) * 100
    $LogPercentFree = 100 - $LogPercentUsed
    $date = (Get-Date).ToString("yyyy-MM-dd HH:mm:ss")

    #Write the results into the data table
    $row = $dataTable.NewRow()
    $row.ServerName = $Servername
    $row.DatabaseName = $location
    $row.TotalDBSizeMB = $TotalDBSizeMB
    $row.DataFilesSizeMB = $DataFilesSizeMB
    $row.DataFilesUsedMB = $DataFileUsedMB
    $row.DataFilesFreeMB = $DataFileFreeMB
    $row.DataPercentUsed = $DataFilePercentUsed
    $row.DataPercentFree = $DataFilePercentFree
    $row.LogFilesSizeMB = $LogSizeMB
    $row.LogFilesUsedMB = $LogUsedMB
    $row.LogFilesFreeMB = $LogFreeMB
    $row.LogPercentUsed = $LogPercentUsed
    $row.LogPercentFree = $LogPercentFree
    $row.Date = $date
    $dataTable.Rows.Add($row)
}
}

#Create data table to hold the results
$dataTable = New-Object system.Data.DataTable "Results"

#Specify the column names for the data table
$col1 = New-Object system.Data.DataColumn ServerName,([string])
$col2 = New-Object system.Data.DataColumn DatabaseName,([string])
$col3 = New-Object system.Data.DataColumn TotalDBSizeMB,([int])
$col4 = New-Object system.Data.DataColumn DataFilesSizeMB,([int])
$col5 = New-Object system.Data.DataColumn DataFilesUsedMB,([int])
$col6 = New-Object system.Data.DataColumn DataFilesFreeMB,([int])
$col7 = New-Object system.Data.DataColumn DataPercentUsed,([decimal])
$col8 = New-Object system.Data.DataColumn DataPercentFree,([decimal])
$col9 = New-Object system.Data.DataColumn LogFilesSizeMB,([int])
$col10 = New-Object system.Data.DataColumn LogFilesUsedMB,([int])
$col11 = New-Object system.Data.DataColumn LogFilesFreeMB,([int])
$col12 = New-Object system.Data.DataColumn LogPercentUsed,([decimal])
$col13 = New-Object system.Data.DataColumn LogPercentFree,([decimal])
$col14 = New-Object system.Data.DataColumn Date,([datetime])

#Add the columns to the data table
$dataTable.Columns.Add($col1)
$dataTable.Columns.Add($col2)
$dataTable.Columns.Add($col3)
$dataTable.Columns.Add($col4)
$dataTable.Columns.Add($col5)
$dataTable.Columns.Add($col6)
$dataTable.Columns.Add($col7)
$dataTable.Columns.Add($col8)
$dataTable.Columns.Add($col9)
$dataTable.Columns.Add($col10)
$dataTable.Columns.Add($col11)
$dataTable.Columns.Add($col12)
$dataTable.Columns.Add($col13)
$dataTable.Columns.Add($col14)

$WarningPreference = "silentlycontinue"

#Call the function to populate the results
#To run against a list of servers instead: get-content c:\Temp\serverlist.txt | % { Get-DBSizes $_ }
Get-DBSizes $Servername

#Output the results
$dataTable | Out-GridView

 

 


 

Final note

Download the PS1 file and save it on your machine; from there you can call the script and just pass in the SQL instance to get the results, e.g.:

.\Get-DatabaseSize.ps1 localhost
.\Get-DatabaseSize.ps1 cromulent\awesomesauce

In a follow-up post I’ll show how we apply a minor tweak to the script and have the data loaded into a SQL database.
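As a preview of the idea, here’s a rough sketch (assuming a DBAdmin database with a dbo.DatabaseSizes table whose columns match the DataTable):

# bulk load the populated $dataTable into a central SQL table
$connString = "Server=localhost;Database=DBAdmin;Integrated Security=SSPI"
$bulk = New-Object System.Data.SqlClient.SqlBulkCopy($connString)
$bulk.DestinationTableName = "dbo.DatabaseSizes"
$bulk.WriteToServer($dataTable)
$bulk.Close()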

Guest Post On Hey, Scripting Guy! Blog

Recently I was contacted by Aaron Nelson (blog|twitter), who provided me an awesome opportunity to write a guest blog post for the Hey, Scripting Guy! Blog. Naturally I jumped at the chance, and the post went live this week as part of a week of SQL-related posts in honor of SQLRally, which is coming up next week in Orlando.

 

My post was on using PowerShell to report on SQL Server backup status. Go have a read and let me know what you think. With the rest of the week dedicated to SQL and PowerShell, don’t forget to keep checking the Hey, Scripting Guy! Blog.

Checking Windows Power Plans With PowerShell

Are you getting the best performance out of your CPUs in Windows 2008? You might think you are, however the Power Plan that Windows is using might be hurting you, degrading your CPU performance by as much as 25%.

 

A Power Plan?

Power Plans are used by Windows to adjust certain performance settings to improve power management and reduce power and cooling requirements. You might be more familiar with this in a laptop setting where the CPU gets stepped down and screen dims when you are on battery power.

Power Plan settings can be found under Control Panel –> Hardware and Sound –> Power Options.

PowerPlanSettings

 

For a high performing database server you really need to be using the High Performance plan.

I highly recommend going and reading Glenn Berry’s (blog|twitter) most recent post on power management (and then go read the other posts in the SQL Server Hardware Nugget series).

 

Checking Power Plans on your servers

It could be very time consuming to go through all your servers and check that the Power Plan is set appropriately. With this in mind I adapted a script by Ed Wilson (blog|twitter), who posted on the Scripting Guys blog about how to check the Power Plan settings using PowerShell.

My expanded script is just a function that returns the Power Plan of the computer that is passed in.

Firstly it checks the Windows version of the passed-in computer; unless we’re running Windows version 6.x or higher, the win32_PowerPlan WMI class does not exist and we would get errors.

Assuming a version that has the WMI class, we query it, grab the active plan GUID, and then use that to get the actual Power Plan in use. This information is then written to the console window. To make sure an incorrect plan stands out, it is written in red, while a correct plan shows green.

Finally, given that this is a function, we can easily grab a list of machines from a text file (or database table) and pass them in using a ForEach-Object (%) loop so multiple machines can be queried quickly and easily.

 

The PowerShell

# Read more about the importance of the right power plan at http://sqlserverperformance.wordpress.com/2010/12/28/the-importance-of-windows-power-plans-for-server-performance/

Function Get-PowerPlan ($Computer)
{
    # Grab the Windows version so we know whether to query for the power plan or not
    $winver = gwmi -Class win32_OperatingSystem -ComputerName $Computer

    # Version 6 or higher is Vista/2008/Win7; the power plan isn't relevant below that
    if ([int]($winver.Version.Split(".")[0]) -ge 6)
    {
        $plan = Get-WmiObject -Class win32_Powerplan -ComputerName $Computer -Namespace root\cimv2\power -Filter "isActive='true'"
        $regex = [regex]"{(.*?)}$"
        $planGuid = $regex.Match($plan.InstanceID.ToString()).groups[1].value
        $PlanType = powercfg -query $planGuid

        # Grab just the first record which has the actual plan being used
        # From that record just grab the actual plan type which is enclosed in parentheses
        $PlanType = $PlanType[0].Substring($PlanType[0].LastIndexOf("(")+1) -replace "\)", ""

        # If the plan isn't high performance let's make it stand out
        if ($PlanType -ne "High performance") { Write-Host $Computer":" $PlanType.ToUpper() -ForegroundColor "Red" }
        else { Write-Host $Computer":" $PlanType -ForegroundColor "Green" }
    }
    else
    {
        # If the Windows version doesn't support power plans just let us know
        Write-Host $Computer": n/a"
    }
}

# Based upon a list of machines contained in C:\temp\serverlist.txt
get-content C:\temp\serverlist.txt | % { Get-PowerPlan $_ }

 

Running this against a couple of machines locally I got the following results:

localhost: BALANCED
anony-lptp: High performance
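If a server comes back on the wrong plan, switching it over is a one-liner on that machine (the GUID here is the stock High performance scheme):

# activate the built-in High performance power scheme
powercfg -setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c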

 

Give the script a try, let me know how the results look for you.

Stop Logging Successful Backups

It’s great that SQL writes to the Event log and SQL log every time a backup completes. You get to see lots of data saying that database master was backed up and database model was backed up, then that msdb was backed up etc…

 

Is it really that useful?

Well, at least it can be useful to have in there. The thing is, there are better ways to find out when your databases were last backed up (such as using PowerShell or querying MSDB).
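For example, SMO exposes the last backup dates directly, so a quick sketch against the local instance gets you further than scrolling the log:

# last full and log backup dates for every database
dir SQLSERVER:\SQL\localhost\default\databases |
    Select Name, LastBackupDate, LastLogBackupDate | ft -auto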

The downside is that it quickly fills your logs with information that you don’t really need. It gets even worse if you perform frequent transaction log backups: if you have three databases having their logs dumped to disk every 20 minutes, all of a sudden your SQL log is next to useless. All you see is backup success messages and not much else, and it makes it all too easy to miss the important things that you really need to pay attention to.

 

There is hope

Yes, it’s true! You can stop SQL from dumping that information into the log.

By enabling trace flag T3226 you kill these messages dead. Worry not: any failure will still be written to the logs; however, all those pointless notifications will vanish. All of a sudden your logs will be cleaner and meaner, important things will stand out, and your scroll wheel can take a break.

 

How to enable T3226

Open up SQL Server Configuration Manager and bring up the properties of the SQL Server service. Under the Advanced tab you’ll see the startup parameters. Just tack ;-T3226 on to the end, apply the changes, and restart SQL.
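If you can’t restart SQL right away, a stopgap sketch (using Invoke-Sqlcmd from the SQL snap-ins) turns the flag on globally until the next restart:

# enable trace flag 3226 for the whole instance until it restarts
Invoke-Sqlcmd -ServerInstance "localhost" -Query "DBCC TRACEON(3226, -1);"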

 

The steps are further outlined at http://msdn.microsoft.com/en-us/library/ms345416.aspx and http://msdn.microsoft.com/en-us/library/ms190737.aspx.

 

Go ahead, clean up those log files…you know you want to.

SQLSaturday #68 Recap

This post has been a long time coming. It’s been two weeks since SQLSaturday #68 in Olympia, WA. A busy workload and a dead laptop meant that this post is extremely late in summing up what was a new presenting experience for me.

 

Om nom nom

Living (relatively) locally in WA there was no big travel for me to this event. That was nice, although the drive down to Olympia from Bellevue on a Friday afternoon is not the greatest. Luckily the sun was out and the weather warm so I wound down the windows and played bad music while sitting in what seemed like endless traffic.

 

The speaker dinner was, as always, a great event. It’s so great to just be able to sit down and speak to fellow professionals in an informal and relaxed environment. I was fortunate enough to sit and have my meal while listening to the conversation of Buck Woody (blog|twitter), Crys Manson (blog|twitter),  Scott Stauffer (blog|twitter) and John Halunen (twitter).

 

Post meal I followed my usual pre-evening presentation schedule by going over my demos to check they were all working and then watching the DBAs At Midnight web show.  

 

When things go wrong 

Saturday morning saw me arrive at the South Puget Sound Community College event location reasonably early where I ran into Mike Decuir (blog|twitter). We wandered in and headed right for the chocolate muffins before hitting registration. 

As people started filtering in I hung around the speaker area chatting with folks until the first sessions started. 

Rather than attend the first session of the day I sat down to bring up all of my virtual machines and get my demos ready to go, so that I could walk into the presentation room, plug in, and be all set to talk PowerShell profiles. I spun up my four VMs and, to my horror, found that the evaluation Windows 2008 licenses had expired, all of my iSCSI targets had vanished, and my cluster was dead.

Not good. Deep breath time.  

 

Backup plans 

I’m a DBA, I always have a backup. I knew I could run the presentation from my laptop, just that some of the functionality that I wanted to show would be missing. I managed to get a VM up and running so decided to work with that. I’d never had to go to that DR presentation plan before so this was a new thing for me. I was more than a little nervous to see how this would go.  

 

Presenting: the backup plan 

I walked into the presentation room hoping to get twenty or so folks in attendance. When I got there a lot of the seats were already filled and then more people started coming in. All the seats were taken and still more people came in. Now the nerves really kicked in because I didn’t want to let all these folks down. Luckily there were a couple of friendly faces in the crowd. Cameron Mergel (blog|twitter) and Hope Foley (blog|twitter) were up near the front. It really helped knowing they were there.  

Things started reasonably well and then about 10 minutes in my VM shut down. At this point I was ready for just about anything to be thrown at me, so I started it up again and kept on talking. I felt myself talking a little too fast and tried to slow down a little and worked to get some questions from the audience to help the timing. The crowd was well engaged and asked some great questions. My time went by very quickly and I was very happy to see that I had as many people in the room at the end of the session as I’d had at the start. 

Once things were done I had several folks come up, thank me, and say that they had enjoyed the presentation, which was very nice to hear. The slides and demos are available for download here.

Frankly I was exhausted at this point and just wanted to take a nap, but I went and grabbed lunch instead.

 

A mellow afternoon 

After eating my lunch and spending a little time chatting to some folks I got what little wits I had left and went to watch Buck Woody talk Cloud computing. You can tell Buck is an educator, his sessions are so easy and free flowing and he obviously knows the material inside and out. 

I moved from my chair to a more comfortable seat to enjoy Chuck Lathrope (blog|twitter) in the final session of the day talk Transactional Replication. I’ve dealt with Transactional Replication for years and I’m pretty comfortable with it however I know that Chuck always has nuggets that I can take away and work with so when he talks it’s always useful to me. 

The day done there were a few raffle giveaways and folks descended upon a local Tavern for a bite to eat and something to drink.  

 

Lessons learned 

This little experience taught me a couple of things. 

Firstly, never build your presentations on evaluation copies of software; it’ll come back and bite you in the rear. I’ll be purchasing an MSDN sub in the next couple of weeks to ensure that doesn’t happen.

Secondly, it can be really exhausting presenting. I’m thinking power naps for me after speaking from now on. 

Thirdly, practice the DR portion of the presentation. Figure out what could go wrong and the mitigation strategy. While I knew I could get by with a lot less, I had not built a second presentation around that and had to change a lot on the fly. I won’t be making that mistake again.