Grabbing Table Row Counts Into SQL Using PowerShell

Sometimes Twitter gives you the impetus to get something done that you’ve been meaning to accomplish but just haven’t quite gotten around to. Yesterday was one of those days.

Ed Watson (blog|twitter) asked the following question on #SQLHelp:


 

This is something I’ve been meaning to do for a while so that I can get good ongoing trending information around table row counts.

There are several ways to accomplish this, but PowerShell was the clear winner for me. I wanted the script to accomplish a few things:

  • Capture the SQL instance name, database name, table name (including schema) and row count for all tables in a database
  • Grab the information for all the databases on a server (including system databases)
  • Write the data to a SQL table
  • Provide the option of dumping the data to the screen
  • Allow me to execute it quickly and easily against multiple servers without providing interactive input

 

The script I’ve written does all of this. The only issue I’ve found is that the table information for model and tempdb is not accessible, but that isn’t really a problem: model doesn’t get any use and tempdb changes far too frequently for its counts to be of any use.

 

The Scripts

Firstly we need to create a couple of SQL tables to hold the results. These should be created on the SQL instance and database of your choosing. The script itself should be updated to reference the locations of these tables (ultimately it made more sense to do it this way rather than pass the information in as a parameter).

 

CREATE TABLE dbo.SQLTableRowCounts

    (

      SQLInstance NVARCHAR(128) ,

      DBName NVARCHAR(128) ,

      TableName NVARCHAR(256) ,

      TableRowCount BIGINT, 

      LoadDate DATE DEFAULT CONVERT(DATE, GETDATE())

    );

    

CREATE TABLE dbo.SQLTableRowCountsArchive

    (

      SQLInstance NVARCHAR(128) ,

      DBName NVARCHAR(128) ,

      TableName NVARCHAR(256) ,

      TableRowCount BIGINT,

      LoadDate DATE DEFAULT CONVERT(DATE, GETDATE())

    );    

 

There are no indexes on the tables currently. Depending upon your usage patterns a couple of different indexes could apply. I’ll let you decide what works best for you.
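
If, for example, you mostly query the most recent load for a given instance, indexes along these lines might be a reasonable starting point. This is just a sketch run through Invoke-Sqlcmd (assuming the tables live in the same SQLMonitor database on localhost that the script loads into); adjust the key columns to match how you actually query the data.

#Hypothetical example indexes - point this at wherever you created the tables
$indexQuery = @"
CREATE CLUSTERED INDEX CIX_SQLTableRowCountsArchive
    ON dbo.SQLTableRowCountsArchive (LoadDate, SQLInstance, DBName);
CREATE NONCLUSTERED INDEX IX_SQLTableRowCounts_Instance
    ON dbo.SQLTableRowCounts (SQLInstance, DBName, TableName);
"@
Invoke-Sqlcmd -ServerInstance "localhost" -Database "SQLMonitor" -Query $indexQuery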

 

The PowerShell script is quite well documented and includes execution examples (you’ll want to run this on a machine where the SQL PS providers have been installed).

<#

.SYNOPSIS

   Script takes in a list of SQL Instances and iterates through the databases to retrieve a row count for each table. This is then loaded into a SQL database.

   Within the script the $StorageSQLServer and $StorageDatabase should be updated to reflect the relevant locations to load the data.

.DESCRIPTION

   Script takes in a list of SQL Instances and iterates through the databases to retrieve a row count for each table. This is then loaded into a SQL database.

   Within the script the $StorageSQLServer and $StorageDatabase should be updated to reflect the relevant locations to load the data.

   Requires bulk admin permissions on the SQL Server that you are loading the data to.

   

   Accepts pipelined inputs so that you can pass an array for working with multiple SQL Instances

.PARAMETER <$SQLInstance>

   This is the SQL Instance that you'll be working with

.PARAMETER <$verbose>

    Initially set to false. When set to true will also output the data to the console

.EXAMPLE

   Basic example: Get-TableRowCounts MySQLInstance

   Advanced: Get-Content C:\Temp\SQLInstances.txt | % { Get-TableRowCounts $_ }

   

#>

 

param

(

[string]$SQLInstance = "$(Read-Host 'SQL Server Instance [e.g. SQLPROD]')",

[switch]$verbose = $false

)

 

#Load the SQL snapins

$CheckSQLSnapins = Get-PSSnapin | Where-Object {$_.Name -match "SqlServerProviderSnapin100"}

if (!$CheckSQLSnapins)

{

    Add-PSSnapin SqlServerProviderSnapin100 

    Write-Host "SqlServerProviderSnapin100 Loaded"

}

 

#Set the following to the SQL instance and database that you want to load the data to

#You will need to have bulk insert permissions to be able to perform this action

$StorageSQLServer = "localhost"

$StorageDatabase = "SQLMonitor"

 

$BCPconnectionString = "Data Source=$StorageSQLServer;Integrated Security=true;Initial Catalog=$StorageDatabase;"

 

#Clean out the current data table if the data is not from today

Invoke-Sqlcmd -ServerInstance $StorageSQLServer -Database $StorageDatabase -Query "DELETE FROM dbo.SQLTableRowCounts WHERE LoadDate != CONVERT(DATE, GETDATE());"

 

 

 

function TableList ($SQLInstance, $DBName)

{

 

    #Handle default vs named instances

    if ($SQLInstance.Contains("\"))

    { $location = "SQLSERVER:\SQL\$SQLInstance\Databases\$DBName\Tables" }

    else

    { $location = "SQLSERVER:\SQL\$SQLInstance\DEFAULT\Databases\$DBName\Tables" }

 

    <#

    Create data table to hold the results

    The datatable will live for the life of the function call to reduce memory overhead

    pulling from multiple servers

    #>

    $dataTable = New-Object system.Data.DataTable "Results"

    #Specify the column names for the data table

    $col1 = New-Object system.Data.DataColumn SQLInstance, ([string])

    $col2 = New-Object system.Data.DataColumn DatabaseName, ([string])

    $col3 = New-Object system.Data.DataColumn FullTableName,([string])

    $col4 = New-Object system.Data.DataColumn RowCounts,([int64])

 

    #Add the columns to the data table

    $dataTable.Columns.Add($col1)

    $dataTable.Columns.Add($col2)

    $dataTable.Columns.Add($col3)

    $dataTable.Columns.Add($col4)

 

    #Get our table list

    $TableList = dir $location -Force | select Owner, Name, RowCount

 

    #Iterate through the tables, load the rowcounts into the datatable

    foreach ($TableInfo in $TableList)

    {

        $TableOwner = $TableInfo.Owner 

        $TableName = $TableInfo.Name

        $TableRowCount = $TableInfo.RowCount

 

        $TableFullName = "$TableOwner.$TableName"

 

        #Provided there are rows in the table

        #Load the table/rowcounts into the datatable

        #This prevents errors due to nulls in model & tempdb

        if ($TableRowCount)

        {

            $row = $dataTable.NewRow()

            $row.SQLInstance = $SQLInstance

            $row.DatabaseName = $DBName

            $row.FullTableName = $TableFullName

            $row.RowCounts = $TableRowCount

            $dataTable.Rows.Add($row)

        }

 

    }

 

    if ($verbose) { Write-Output $dataTable }

 

    #Bulk load the data into SQL from the data table

    $bulkCopyRoles = new-object ("Data.SqlClient.SqlBulkCopy") $BCPconnectionString

    #Loads to the current table

    $bulkCopyRoles.DestinationTableName = "SQLTableRowCounts"

    $bulkCopyRoles.WriteToServer($dataTable)

    #Loads to the archive table

    $bulkCopyRoles.DestinationTableName = "SQLTableRowCountsArchive"

    $bulkCopyRoles.WriteToServer($dataTable)

 

    #Clean up after ourselves so we release memory used to hold the data table

    Remove-Variable dataTable

}

 

 

function DBList ($SQLInstance)

{

 

    #Handle default vs named instances

    if ($SQLInstance.Contains("\"))

    { $BaseLocation = "SQLSERVER:\SQL\$SQLInstance\Databases" }

    else

    { $BaseLocation = "SQLSERVER:\SQL\$SQLInstance\DEFAULT\Databases" }

 

    $dblist = dir $BaseLocation -Force | select Name

    foreach ($db in $dblist)

    {

        $DBName = $db.Name

        TableList $SQLInstance $DBName 

    }

}

 

DBList $SQLInstance 

 

 

This script could easily be set up to execute daily to load the data into SQL so that you can do quick and easy trending on table growth.
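
As a rough illustration of the kind of trending you could then do, here’s a minimal sketch (assuming the data was loaded into the SQLMonitor database on localhost, as configured in the script) that pulls the day-by-day counts for a single table out of the archive:

#Hypothetical trend query - dbo.MyBigTable is just a placeholder table name
$trendQuery = @"
SELECT SQLInstance, DBName, TableName, LoadDate, TableRowCount
FROM dbo.SQLTableRowCountsArchive
WHERE TableName = 'dbo.MyBigTable'
ORDER BY LoadDate;
"@
Invoke-Sqlcmd -ServerInstance "localhost" -Database "SQLMonitor" -Query $trendQuery | Format-Table -AutoSize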

 

Give it a try, let me know what you think.

Stop Bad Database Design

Every year that goes by I sit in hope that I won’t see bad database design.

Every year I am disappointed.

 

As an example here is a table create statement that I saw the other day (table and column names have been changed to protect the innocent)

CREATE TABLE [dbo].BestestTableEVAR(

 Col1 [int] IDENTITY(1,1) NOT NULL,

 Col2 [uniqueidentifier] NULL,

 Col3 [uniqueidentifier] NOT NULL,

 Col4 [smallint] NULL,

 Col5 [smallint] NOT NULL,

 Col6 [bit] NOT NULL,

 Col7 [xml] NULL,

 Col8 [xml] NULL,

 ColA [xml] NULL,

 ColB [xml] NULL,

 ColC [datetime2](2) NULL,

 ColD [datetime2](2) NULL,

 COlE [datetime2](2) NULL,

 ColF [datetime2](2) NULL,

 CONSTRAINT [PK_BestestTableEVAR] PRIMARY KEY CLUSTERED 

(

 Col3 ASC

)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON, FILLFACTOR = 90) ON [PRIMARY]

) ON [PRIMARY]

 

So what’s wrong with this?

The clustered primary key on this table is a GUID. Is that bad? That’s an unequivocal YES! Read all that Kimberly Tripp (blog|twitter) has to say about GUIDs in database design.

What makes this all the more crazy is that the table has an identity column. That’s a natural clustering key ready and waiting to be used and yet for some reason it’s not.

This table is going to fragment like crazy, it won’t scale, and performance will be hideous. Additionally, thanks to the XML columns this table can’t even be rebuilt online, so there’s no way to address the fragmentation or performance without actually taking the table offline, during which time it can’t handle any transactions. That’s a real problem for a table in an OLTP system.

 

I would go back and change some things. Let’s say you wanted to keep the table structure the same, that’s fine, but let’s be smart about the keys and indexes.

It would make sense to cluster on the identity column instead (I would also make it the primary key) and then, to keep enforcing uniqueness on Col3, the column that is currently the primary key, add a unique index.

CREATE TABLE [dbo].BestestTableEVAR(

 Col1 [int] IDENTITY(1,1) NOT NULL,

 Col2 [uniqueidentifier] NULL,

 Col3 [uniqueidentifier] NOT NULL,

 Col4 [smallint] NULL,

 Col5 [smallint] NOT NULL,

 Col6 [bit] NOT NULL,

 Col7 [xml] NULL,

 Col8 [xml] NULL,

 ColA [xml] NULL,

 ColB [xml] NULL,

 ColC [datetime2](2) NULL,

 ColD [datetime2](2) NULL,

 COlE [datetime2](2) NULL,

 ColF [datetime2](2) NULL,

 CONSTRAINT [PK_BestestTableEVAR] PRIMARY KEY CLUSTERED 

(

 Col1 ASC

)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON, FILLFACTOR = 90) ON [PRIMARY]

) ON [PRIMARY];

 

CREATE UNIQUE NONCLUSTERED INDEX UI_BestestTableEVAR_Col3 on BestestTableEVAR (Col3);

Sure, we still won’t be able to rebuild the indexes online, but we won’t have the same crazy levels of fragmentation that we would have had before.
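
If you want to see what that fragmentation actually looks like on your own copy of the table, a quick check along these lines will do it. This is only a sketch (the instance and database names are placeholders), and the DMV scan can be expensive on very large tables even in LIMITED mode.

#Check fragmentation on the table's indexes - swap in your own instance and database names
$fragQuery = @"
SELECT i.name AS IndexName, ps.avg_fragmentation_in_percent, ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BestestTableEVAR'), NULL, NULL, 'LIMITED') ps
    INNER JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id;
"@
Invoke-Sqlcmd -ServerInstance "MySQLInstance" -Database "MyDatabase" -Query $fragQuery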

 

I know I’ll be seeing a lot of bad design this year and I know that I’ll be forced to push that bad design into production. Doesn’t stop me trying to change things however. Help your devs out, let them know when their design is a problem. Who knows, maybe you’ll change things for the better.

Please Don’t Use Deprecated Data Types

I know that a lot of vendors like to write for the lowest common denominator (i.e. SQL 2000) but really, folks, it’s gone too far. I’m sick of cracking open vendor code that’s certified for SQL 2008 and seeing things like IMAGE and TEXT data types. Microsoft deprecated these things back when they released SQL 2005 (see http://msdn.microsoft.com/en-US/library/ms143729(v=SQL.90).aspx under Textpointers). Why are you still using them six years later?

I bring this up because I’ve come across yet more egregious usage of these data types in vendor code. The vendor in question? Microsoft.

Yes, that’s right, the folks who deprecated these data types six years ago are still using them to a large extent within the ReportServer and ReportServerTempDB databases that support SQL Server Reporting Services. Seriously, Microsoft? Can you please get with the plan and fix this nonsense?

The following query, run against the ReportServer database, will show 14 different tables (31 columns) using a variety of NTEXT and IMAGE data types.

select st.name as TableName, t.name as DataType, sc.name as ColumnName 

    from sys.types t 

        inner join sys.columns sc

            on t.system_type_id = sc.system_type_id

        inner join sys.tables st

            on sc.object_id = st.object_id

where 

    t.name in ('image', 'text', 'ntext')

order by 

    st.name, t.name
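
If you look after more than one Reporting Services instance, the same check is easy to run across all of them from PowerShell. A minimal sketch (the server names are placeholders, and it assumes Invoke-Sqlcmd is available, as in the scripts above):

#Hypothetical list of Reporting Services hosts - replace with your own
$deprecatedQuery = @"
select st.name as TableName, t.name as DataType, sc.name as ColumnName
from sys.types t
    inner join sys.columns sc on t.system_type_id = sc.system_type_id
    inner join sys.tables st on sc.object_id = st.object_id
where t.name in ('image', 'text', 'ntext')
order by st.name, t.name;
"@
foreach ($rsServer in "RSServer1", "RSServer2")
{
    Write-Output "Checking $rsServer"
    Invoke-Sqlcmd -ServerInstance $rsServer -Database "ReportServer" -Query $deprecatedQuery
}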

 

I have filed a Connect item asking Microsoft to fix this. Please go vote for it at https://connect.microsoft.com/SQLServer/feedback/details/714117/ssrs-using-deprecated-data-types-in-its-databases and help us rid the world of this old stuff.

T-SQL Tuesday #25–Did Your RS Report Fail?

Allen White (blog|twitter), in hosting this month’s T-SQL Tuesday, is looking for tricks to help make you more effective.

I could probably try to figure out lots of things that I mess around with to make myself more effective. For example, adding aliases for my servers so that instead of having to go to "PRODSERVER17\FML27" I can just go to "FML27".

Instead of that I thought I’d throw something up real quick that turned up as an issue the other day at work.

 

We have a bunch of scheduled Reporting Services reports that get emailed to users or written out as files on shares. These schedules actually run as SQL Server Agent jobs. We ran into an issue whereby a report failed over a weekend. We knew nothing about the failure; the first we heard of it was when the customer started complaining that the report was not available.

Knowing that the schedules run in SQL Agent I looked for failed jobs. There were none. All the job does is submit the required report into the job queue and then report that it did its job. It does not actually monitor successful execution of the report itself, and there doesn’t appear to be any quick and simple way of doing that. As such I wrote a process that goes out to the Reporting Services database on a daily basis, looks for reports that have failed, and then sends me a report to let me know. It’s a good way to get ahead of the customer and find problems before they are reported.

Code below (in this code the default database ReportServer is used for the Reporting Services database)

 

/* 

Query grabs a list of the failed Reporting Services reports for the last 24 hours.

This list is then sent as an HTML table via email.

Data in email includes:

    Report Name

    Path To Report

    Username Executing Report

    Date/Time When Report Executed

    Parameters Used In Report Execution

*/

 

SET NOCOUNT ON

DECLARE @HTML VARCHAR(MAX)

DECLARE @HTML_Header VARCHAR(MAX)

DECLARE @HTML_Body VARCHAR(MAX)

DECLARE @HTML_Footer VARCHAR(MAX)

DECLARE @COUNT INT

 

--create the table to hold the dataset

DECLARE @Results TABLE

    (

      ID INT IDENTITY(1, 1) ,

      RSName NVARCHAR(850) ,

      RSPath NVARCHAR(850) ,

      UserName NVARCHAR(520) ,

      TimeStart DATETIME ,

      Parameters NVARCHAR(MAX)

    )

 

--Grab the data from the ReportServer database for failed reports

INSERT  INTO @Results

        ( RSName ,

          RSPath ,

          UserName ,

          TimeStart ,

          Parameters

        )

        SELECT  c.Name ,

                c.Path ,

                [UserName] ,

                [TimeStart] ,

                [Parameters]

        FROM    [ReportServer].[dbo].[ExecutionLogStorage] els

                INNER JOIN ReportServer.dbo.Catalog c ON els.ReportID = c.ItemID

        WHERE   status != 'rsSuccess'

                AND TimeStart > DATEADD(hh, -24, GETDATE())

        ORDER BY LogEntryId 

 

 

--create the email

SELECT  @count = ISNULL(MAX(ID), 0)

FROM    @Results

 

IF @count <> 0 

    BEGIN

        SET @HTML_Header = '<html><header></header><body>

        <TABLE>

        <table border="1" cellspacing="1" cellpadding="1"> 

        <TR colspan=2 style="color: #A0522D; font-family: Verdana; font-size: 11; font-weight:bold" align=Left>

        <td>Report Name</TD>

        <TD>Report Path</TD>

        <TD>Executed On</TD>

        <TD>Executed By</TD>

        <TD>Parameters</TD>

        </TR>

        '

 

        SET @HTML_Footer = '</TABLE>

                <BR><BR>

                <FONT FACE="color: #A0522D; font-family: Verdana; font-size: 12";></FONT></body></html>

        '

    END

ELSE 

    BEGIN

        SET @HTML_Header = 'No failed reports in the last 24 hours'

        SET @HTML_Footer = '</TABLE>'

    END

 

SELECT  @HTML_Body = ''

 

WHILE @count > 0 

    BEGIN

        SELECT  @HTML_Body = @html_body + '<TR style="font-size: 10">

                           <TD style="text-align: left;" >' + RSName + '</TD>

                           <TD style="text-align: LEFT;">&nbsp ' + RSPath

                + '</TD>

                           <TD>&nbsp ' + CONVERT(VARCHAR, timestart, 120)

                + '</TD>

                           <TD style="text-align: left;">&nbsp ' + Username

                + '</TD>

                           <TD style="text-align: left;">&nbsp '

                + ISNULL(Parameters, '') + '</TD>

                          

</TR>'

        FROM    @results

        WHERE   ID = @count   

 

        SET @Count = @count - 1

 

    END    

 

SET @HTML = @HTML_Header + @HTML_Body + @HTML_Footer

 

 

EXEC MSDB.DBO.SP_SEND_DBMAIL @RECIPIENTS = '<Your Email Address>',

    @SUBJECT = 'Failed RS Reports - Last 24 hours', @BODY = @HTML,

    @BODY_FORMAT = 'HTML'

--, @IMPORTANCE = 'HIGH'

GO

PASS BoD–Who I’m Voting For

Hopefully everyone who reads this is a member of PASS and has received a ballot for voting in this year’s Board of Directors elections.

There’s a really strong candidate turnout this year, which makes for tough choices when it comes to voting. I was able to pick my three, and I’m going to tell you who and why.

*None of the following is endorsed by any candidate, nor have I mentioned to any of them that I am writing this post. The thoughts and feelings expressed as regards each candidate are my own. I am not being negative towards any of the other candidates either, nor should you be. I am just looking at the positives for these three as I perceive their value on the Board.

 

Rob Farley

I first got to meet Rob at the PASS Summit a couple of years ago and found him to be an all-round nice guy (despite the fact that he is an Arsenal supporter). He’s worked very hard to build his business back home in Australia and by all accounts has been very successful in doing so. Rob has already shown great dedication to the community, as evidenced by his being an MVP for the past six years.

Rob brings a great many things to the table. The one thing that I really feel makes him a standout candidate is how he fits into the future of PASS. A great deal has been said about how PASS needs to grow internationally. Rob is a part of that international market. Indeed, his different perspective was acknowledged by the current Board when they brought him in as a non-voting member earlier this year.

Rob really seems to care about this stuff. Enthusiasm is infectious and I believe he will kick start some different thinking.

 

Adam Jorgensen

I don’t know Adam. I’ve never met him, but I know of him. He’s the President of Pragmatic Works; he’s on the board of their foundation for getting teachers and veterans back to work in technology; and he’s a SQL MVP and PASS Regional Mentor. Community really seems to be at the core of everything that Adam does (highly commendable stuff).

Again, Adam brings a lot to the table; the key thing for me is his financial experience. There were delays in getting the PASS budget approved this year. It’s not something that the Board wanted to happen or intended. Budgets are always tight, and with planned international expansion and additional things like SQL Saturday and SQLRally events, I think someone with a firm understanding of and great experience with business financials would be a bonus to the Board.

 

Kendal Van Dyke

I’ve met Kendal a couple of times briefly (you know how PASS can be crazy that way). He started MagicPASS down in Orlando and was on the planning team for the first SQLRally event. Kendal deservedly received his first MVP award this year (he was one of those guys where you go "wait, he’s not an MVP?").

There were a couple of things that really piqued my interest as regards Kendal’s platform. Firstly, his wanting to work with the Virtual Chapters. I feel that they are very neglected right now from a Board perspective, and a great push needs to be made for them. We get hundreds of people for every 24HOP presentation but the numbers are very low for the Virtual Chapters. Getting the word out about them and having a solid push in that direction from the Board would go a long way. Second was his desire to work with PASS partners and work on mutual benefits with them. I think it would be a good thing for PASS membership to go beyond folks just getting a vote. We really need to find a way to grow membership and engage people further, and I feel that this may be a way to achieve that.

 

 

Get out and vote

Or rather sit on your rear and click some buttons. If you have a ballot, go and vote. It’s important you help put people in place who are going to affect how PASS will impact you in the future.

You have until December 20th to get your vote in.

While I’ve stated who received my vote, there are other candidates; I encourage you to check them out and see if they might be a better choice for you.

SQL 2012 – MDS IIS Issue

It’s funny, I thought my first SQL 2012 post would be something based around Clustering, AlwaysOn or admin fun but something cropped up today as regards Master Data Services (MDS) that I felt I should throw up there.

To say I’ve barely dealt with MDS is an understatement. I’ve never installed it, never worked with it; in fact I barely know what it does. As happens, a request came in for it to be deployed for the dev folks to have a play around. The install appeared to have gone without any issues, but then came the need to configure things using the MDS Configuration Manager.

 

Upon launching the Configuration Manager an error popped up as regards IIS and stopped everything in its tracks.

“The required .svc handler mappings are not installed in IIS”

Well gee, thanks Microsoft, that’s a really helpful error you’ve got there. Add to that a lack of errors in the event logs and I figured it would be a dog to track down the problem.

First things first, I figured let’s try searching on the error, so I Bing’d it (yes people, Bing; that’s my default search engine). The second result was promising: it led me to a Connect entry about the very same issue.

In the comments someone at Microsoft kindly thanked the submitter for finding the issue and stated “the fix for this issue will not be able to make this release”. Huh, never seen that before.

Fortunately there were a couple of workarounds posted. The first one caught my attention: it mentioned the missing .svc handler mappings in IIS. A-ha, this looked promising, I thought. I looked at the list of handlers and they all had 4.0 in them. Looks like a .NET thing, I thought to myself.

 

I cracked open the IIS Manager and took a look at the installed Handler Mapping features. I could see .NET 2.0 components but not .NET 4.0 even though .NET 4.0 was installed on the system. This seemed like a case of .NET not being registered with IIS, something that was a serious pain back when .NET first came about and I had to deal with IIS installations (not something that I miss let me tell you).

I cracked open an elevated command prompt and drilled down to the .NET Framework 4.0 folder (cd %windir%\Microsoft.NET\Framework64\v4.0.30319) and ran

aspnet_regiis.exe -i

which installs that version in IIS (there were no other sites on the server so I wasn’t concerned about impacting existing sites by changing their .NET version; read up on the switches before you do this yourself).
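
If you’d rather script that step than hunt down the folder by hand, something like this does the same thing. It’s a sketch that assumes the 64-bit .NET 4.0.30319 folder and needs to run from an elevated PowerShell prompt.

#Register .NET 4.0 with IIS if the framework folder is present (64-bit path assumed)
$regiis = Join-Path $env:windir "Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe"
if (Test-Path $regiis)
{
    & $regiis -i
}
else
{
    Write-Warning ".NET 4.0 does not appear to be installed at $regiis"
}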

With that complete, I did a reboot (I always like to try turning it off and back on again) and brought up the MDS Configuration Tool again. This time it was all happy and the configuration was able to continue.

 

Who said Connect was useless?

Adding Users To Local Groups With PowerShell

I ran into a situation last week where I needed to add a user to the Administrators group on a whole bunch of remote servers. Having had serious problems with cubital tunnel I try to avoid mouse clicks as much as possible, and anyway this is one of those situations that PowerShell is great for.

After a little trial and error I put together a script. The script doesn’t accept any pipelining; rather, it will prompt you for the values (old school).

When asked, enter the name of the local group (you don’t always want to add people to Administrators after all), then the AD user or group name (include the domain, for example PROD\SomeAccount), and finally a comma separated list of servers to add the account on. Yes, I could have done this with a text file or something like that, but this is meant to be a quick thing and it’s easier to just type a list of servers in rather than create a file every single time you run it.

Once those three things have been entered the script will go out and check to see if the AD user/group already exists in that server group. If it exists you’ll get a warning that it’s already there; otherwise it will add the user/group to the local server group. There’s also a quick check to see if the server is reachable; if not you’ll get an error for that particular machine and the script will continue to work on the others.

Of course the account you execute this script as will have to have the relevant permissions on the remote servers to allow you to add the user/group.

 

<#

.SYNOPSIS

   Adds a domain user or group to a local Windows group on a local or remote server

.DESCRIPTION

   This script can be used in the event you need to add a domain user or group to a local Windows server group.

   Its real strength comes in being able to enter a comma delimited list of servers so that you can add the same domain user/group to multiple machines quickly and easily.

   The user executing the process will require the rights on the remote machines to be able to add the accounts.

   Contains basic error handling to check if the server is reachable, if the group exists and if the user is already a member of the group.

.PARAMETER <paramName>

   No parameters, script file will ask for input

#>

 

$usrName = (Read-Host "Enter the domain account or group that you wish to add to the local server group (eg PROD\MyAccount)")

$grpName = (Read-Host "Enter the server group name that the user should be added to (eg Administrators)")

$srvList = (read-host "Enter a comma delimited list of servers (eg SrvA, SrvB)").split(",")

Write-Host ""

$Exists = 0 #Initialize the Exists variable which is used to see whether or not users should be added to groups

 

foreach ($srvName in $srvList)

    {

    $srvName = $srvName.Trim().ToUpper() #Gets rid of spaces
    $Exists = 0 #Reset the Exists flag for each server in the list

    

try

{

if([ADSI]::Exists("WinNT://$srvName/$grpName,group")) #Check to see if the group exists

    { 

        #Set the comparison string for the account to add

        $usrName = $usrName -replace "\\", "/"

        $chkOutput = "WinNT://$usrName"

        #Write-Output "Checking for $chkOutput"

    

        $group = [ADSI]("WinNT://$srvName/$grpName,group") 

        $group.Members() | 

        % { 

            $AdPath = $_.GetType().InvokeMember("Adspath", 'GetProperty', $null, $_, $null) 

            #Write-Output $AdPath

            if ($AdPath -ilike $chkOutput) 

                {

                Write-Warning "User $usrName already a member of the $grpName group on $srvName"

                $Exists = 1 #This way we won't try to add an account twice

                }

        }        

        

        if ($Exists -eq 0)

        { 

            Write-Output "Account $usrName does not exist in the local $grpName group on $srvName. Adding now..." 

            $group.add("WinNT://$usrName,user") #Add the account to the local server group

            $Exists = 0 #Reset the Exists variable ready for the next server

        }

    }

else

{

    Write-Warning "The $grpName group does not exist on server $srvName."

    }

 

}

 

catch { Write-Error "Server $srvName was unreachable." } 

}

 

Your “Five Nines” Means Nothing To Me

There’s a lot of talk about High Availability. I know I’m a huge fan of it, in particular clustering (though I know it’s not for all situations, and the changes that come with AlwaysOn Availability Groups in SQL 2012 may mean that traditional clustering gets used less and less). There are of course other HA solutions out there like Log Shipping, Mirroring and SAN replication technologies (sorry folks, I disagree on transactional replication as an HA concept unless you are talking about a downstream reporting infrastructure behind load balancers).

People constantly push to achieve the magical five nines availability number. What does that mean?

Five nines means that your SQL Server must be available 99.999% of the time. This means you can have only 5.26 minutes of downtime a year.
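
If you want to sanity check that math, or see what the other "nines" buy you, the arithmetic is quick to do in PowerShell:

#Allowed downtime per year for a given availability percentage
$minutesPerYear = 365 * 24 * 60   #525,600 minutes
foreach ($availability in 99.9, 99.99, 99.999)
{
    $downtime = $minutesPerYear * (100 - $availability) / 100
    "{0} percent availability allows {1:N2} minutes ({2:N2} hours) of downtime per year" -f $availability, $downtime, ($downtime / 60)
}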

So let’s say that you are running clustering and mirroring; that you have load balancing with peer-to-peer replication; that you are running SAN replication between two datacenters and have log shipping set up ready to go live at the flick of a switch. You have all your hardware ducks in a row and are ready for any emergency that may crop up.

I commend you for all of that.

Now let’s say that your manager is going to bonus you on the downtime that you have. You are the DBA, a single link in the chain, and your entire bonus is going to be based on the number of nines that you can get over the year.

Time to sit down with your manager and ask some questions:

 

  • Does this five nines bonus plan include maintenance time?
  • How are you going to measure uptime?
  • What tools are going to be used to measure it?
  • What are the tolerances on those tools?
  • What about upstream items?

 

 Let’s take a look at these one at a time.

 

Is maintenance time included?

There may be bugs in the database code for the application that would require the app to be offline while changes are made. You may want to patch SQL Server with the latest service pack in order to maintain supportability. You may want to test failover and DR. If none of these things are excluded then chances are you’ll have trouble making three nines (8.76 hours of downtime in a year) and no chance at all of making five nines.

 

How will uptime be measured?

Is uptime calculated based upon SQL Server being up and running? Or maybe on there being successful connections? How about certain queries getting returned? Or a website returning a page within a certain time period?

So let’s say you are running a query. What if that query gets blocked or deadlocked? SQL is up and processing other transactions but that single query is having a problem.

Successful connections? Great, but that doesn’t mean the database they are going to use is available. The same issue applies to checking whether SQL Server is up. Heck, are you measuring server uptime or database uptime here?

Website returning a page? What if there’s an issue with the web server? Or a networking problem between there and the SQL database? 

 

What tool will you use?

Are you going to buy something off the shelf? Then who’s going to spend the time configuring it for your particular environment? Isn’t that time better spent on solidifying things, on performance analysis, on query tuning?

You’re going to make your own tool? Great, how are you going to do that, what technologies? How is the information going to be reported upwards? Who has the last say on possible exceptions? You have the time to write this yourself? Awesome, I’m glad that you do.

 

What is your tool tolerance?

Tolerance? Yup. Let’s say that you’ve figured out all the other stuff and have agreements in place for everything and got a tool that will do the job. How does that tool work?

Let’s say it runs a query and fails. It says that the database is down. When will it next attempt to query the database? In 15 minutes? If that’s the case then any bad poll from your tool just put you outside of that five nines criteria and your bonus has been dinged.

Let’s just say that there was an actual problem and that happened 30 seconds after the last poll but you resolved it before the next one. Everything was down for 14 minutes but your tool didn’t capture that.

Some serious flaws there.

 

Upstream items

Are you the Windows, network, firewall, and datacenter admin? Are you responsible for generating the electricity that comes into the building? How about the HVAC system, is that yours? Are you checking the diesel levels in your backup generators weekly?

Each one of these upstream items can and will affect people’s ability to access your SQL Server and its databases. Unless you are responsible for them all, how can you be held accountable for perceived uptime on SQL Server?

SQL can be up, able to process data, and all the databases available, but without a functioning network nobody would know that and your monitoring wouldn’t pick it up.

 

 

All of these things add up to not being able to accurately measure the mythical five nines uptime creature. If we can’t get those kinds of accurate measurements and I can’t own those upstream processes, then why should I be held accountable for a number?

Here’s a better idea. Let’s look at the processes; let’s look at whether or not things are available from a user’s perspective. Let’s gauge customer interactions, build outs, deployments. Let’s track tickets and issues. Let’s talk about the things we need to do to make things better. Let’s not go pulling a number out of the ether; it does nobody any favors.

Quickly Build Transaction Log Restore Commands With PowerShell

I’m in the middle of a server migration and so I’m setting up log shipping to get the databases consistent between the two servers prior to cutting over.

The backup copy over the WAN was taking a long time and so I had a lot of transaction log files that I needed to apply. I could have let log shipping take care of performing the log restores, but I found that a secondary process had created log backups also, just with different names. This lack of consistency was causing me a problem so I needed a quick way to build a restore command of all the files in order.

This was a quick win for PowerShell. I was able to quickly grab a list of the files in date order and pipe those to a string and build the restore command. This I was able to copy into SSMS and perform the restores.

$filelist = dir \\LogShippingPrimary\p$\backup | where-object {$_.Extension -eq '.trn'} | Sort-Object {$_.CreationTime} | select Name

 

foreach ($file in $filelist)

{

$restorefile = $file.Name

 

$text = "restore log MyDatabase from disk = 'P:\ForRestore\$restorefile' with norecovery;"

Write-Output $text

}

 

If you wanted to take this a step further you could actually use invoke-sqlcmd to execute the $text string and perform the restore (don’t even contemplate doing that if you aren’t running at least SQL 2008 R2). I always prefer to do that within SSMS though.
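
For completeness, here’s roughly what that might look like. It’s only a sketch (the server name is a placeholder and MyDatabase is the same placeholder as above), and I’d still rather watch the restores run in SSMS.

#Execute each restore directly instead of printing it - use with care
foreach ($file in $filelist)
{
    $restorefile = $file.Name
    $restoreCmd = "restore log MyDatabase from disk = 'P:\ForRestore\$restorefile' with norecovery;"
    Write-Output "Running: $restoreCmd"
    Invoke-Sqlcmd -ServerInstance "MySecondaryServer" -Query $restoreCmd -QueryTimeout 65535
}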

While I’ve used this script for a migration task it also works really well if you need to perform a point in time restore for your database from a full backup and some transaction log backups. Can you imagine how long it would take you to build out the restore string to recover a database which had transaction log backups every 15 minutes for the last 20 hours? That’s 80 log files. Want to do that manually? This is a much nicer way to go.

Using PowerShell To Restrict MSDTC Ports

Ever tried to create a linked server that uses MSDTC only to find yourself blocked by your company firewall? Ever tried to run a WMI query against a server just to find yourself blocked? Ever had the firewall team tell you that they aren’t going to open up ports 49152-65535 so that you can use RPC?

Let’s be fair, your network team shouldn’t have to open up all those ports because RPC responds somewhere within a large dynamic range.

The KB article "How to configure RPC dynamic port allocation to work with firewalls" will tell you how to edit your registry to restrict that port range and make your network admin a little happier.

Working with the registry is not fun at the best of times, and when you are setting up a bunch of machines it takes time. Sure, you could create a .reg file and run that on each machine, but this is 2011 and we have PowerShell now.

 

The following script checks for, and if necessary adds, the required registry keys to restrict that port range. In the example below Windows is being limited to ports 5000-5200.

 

 

<#

.SYNOPSIS

   Restricts the RPC ports to be used on Windows from 5000-5200

.DESCRIPTION

   Execute to add registry entries on the local machine to restrict the RPC ports from 5000-5200. Requires a reboot once executed.

.PARAMETER <paramName>

   NONE

.EXAMPLE

   NONE

#>

 

if (Test-Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue) { "Registry Key Exists" }

else { md 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' }

 

if (Get-ItemProperty -Name "Ports" -Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue ) { "Ports value exists" }

else { New-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -Name 'Ports' -Value '5000-5200' -PropertyType 'MultiString' }

 

if (Get-ItemProperty -Name "UseInternetPorts" -Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue ) { "UseInternetPorts value exists" }

else { New-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -Name 'UseInternetPorts' -Value 'Y' -PropertyType 'String' }

 

if (Get-ItemProperty -Name "PortsInternetAvailable" -Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue ) { "PortsInternetAvailable value exists" }

else { New-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -Name 'PortsInternetAvailable' -Value 'Y' -PropertyType 'String' }
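
Once it’s run (and after the reboot the help text mentions) you can verify the values were written with a quick read-back:

#Read the RPC port restriction values back to confirm they exist
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' | Select-Object Ports, PortsInternetAvailable, UseInternetPorts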