How To Securely Access Your Windows Home Computer – Part 1

Sometimes you are sitting at the office, or are away on vacation, and you need to either get to files on a computer at home or log in to a machine and do some work. There are some great paid options, such as GoToMyPC, which will allow you to do this. The downsides to that kind of solution are that it costs money and it usually uses Java, which brings with it a whole bunch of security issues.

 

A free alternative

I was looking around for free alternatives to the commercial products when someone recommended trying out a product called Bitvise, which would allow me to tunnel a Remote Desktop session over SSH and log in to my home machine completely securely, without needing Java.

Bitvise seemed to have everything I wanted, and it is free for non-commercial personal use. Sounds like a bargain. But does it work? It sure does, and here’s how to get it up and running with a minimal amount of fuss.

 

Install the Bitvise SSH Server

Download WinSSHD server from Bitvise and open up the installer.


Accept the license terms, leave the defaults, and hit the Install button.

Next, choose the edition. We’ll be going with Personal (which does have limitations, but as this is for personal use we aren’t going to run into them).


The installer will then go about its merry way and complete in under a minute, leaving you with a message box letting you know that you are good to go.


 

Configure Bitvise SSH Server

Once the installation has completed you will be presented with the SSH Server Control Panel. Here you can manage the SSH Server service, work with your host keys, and manage your settings.


Hit the Open easy settings link to configure access.

Leave the defaults for the Server Settings. This will listen on IPv4 and IPv6 on port 22 and open up a hole in the Windows Firewall for access.


On the Windows account page, uncheck the “Allow login to any Windows account” box (this is good for security) and add only the users that you want to give access to.


Hit OK to add the user and then save the changes.

That’s it, you are now ready to go.
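If you want a quick sanity check from an elevated PowerShell prompt before moving on, something like the following will confirm that the server is up and listening. I’m assuming here that the service registers itself as WinSSHD; check services.msc for the exact name on your build.

# Check the Bitvise service state (service name assumed to be WinSSHD)
Get-Service -Name WinSSHD | Select-Object Name, Status

# Confirm something is listening on the default SSH port
netstat -ano | Select-String ':22\s.+LISTENING'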

 

Connecting to the Bitvise SSH Server

Now that the SSH Server is set up and configured, we need to be able to access it somehow. The simplest way to do this is to download and use the Bitvise SSH Client.

The installation is as straightforward as the SSH Server install: just accept the license and the default settings. (Do this on a different machine in your house so you can check that everything is working as intended.)


Then attempt to connect and see if you can get in by entering the IP of your SSH Server and the username.


You will be asked to accept the host key and then to enter your password.


If you entered your information correctly you will be logged in and a terminal session and SFTP session will be launched on the client machine.


 

This completes your basic connectivity tests from inside your network.
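If the Bitvise client isn’t installed on the second machine yet, you can still do a bare-bones reachability check from PowerShell. The address below is just an example, so substitute the IP of the machine running the SSH Server.

# Simple TCP test against the SSH port (192.168.1.50 is a placeholder IP)
$tcp = New-Object System.Net.Sockets.TcpClient
$tcp.Connect('192.168.1.50', 22)
$tcp.Connected   # True means port 22 is reachable on the server
$tcp.Close()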

 

In part two I will go over configuring some basic router settings so that you can access the SSH Server from outside of your home network, and how to use these tools to give you remote access to your Windows machine.

Issues With Slipstreaming Service Pack 2 on to SQL 2008 R2

One of the great things Microsoft introduced a few years ago was the ability to slipstream service packs into the install media of SQL Server 2008 and 2008 R2. It was a bit of a painful experience, but the results were great: it made deployment of standardized builds a breeze, and when you also added cumulative updates to the mix it became ridiculously easy to ensure every new SQL instance was at the right revision of the product.

Slipstreaming has gone by the wayside with SQL Server 2012. Now, instead of extracting the service pack and doing a bunch of file work, you just stick the service pack into a folder and point setup at it with the UpdateSource information in your default configuration file.
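As a rough illustration of the SQL Server 2012 way of doing it (the paths and folder names below are made up for the example), the whole thing collapses to a single setup call:

# SQL Server 2012 Product Update: point setup at a folder containing the service pack.
# D:\SQL2012 and C:\SQLUpdates are example paths only.
& 'D:\SQL2012\setup.exe' /ACTION=Install /UpdateEnabled=True /UpdateSource="C:\SQLUpdates"

The same UpdateEnabled and UpdateSource values can live in your ConfigurationFile.ini instead if you prefer to keep everything in one place.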

 

How to slipstream a service pack into SQL 2008 & 2008 R2

The manual process of updating the original source media is not the worst thing in the world, but it’s not intuitive either. Peter Saddow over at Microsoft posted something last year that clearly outlines the steps involved to make this happen. You can find the full details over at http://blogs.msdn.com/b/petersad/archive/2011/07/13/how-to-slipstream-sql-server-2008-r2-and-a-sql-server-2008-r2-service-pack-1-sp1.aspx

I used those steps to create slipstreamed versions of SQL 2008 R2 with Service Pack 1 and SQL 2008 R2 with Service Pack 2.
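If you just want the shape of the process without clicking through, it boils down to something like the sketch below. The folder layout, the SP2 package name, and the excluded file are from my own notes rather than anything authoritative, so treat Peter’s post as the source of truth.

# Rough outline of the 2008 R2 slipstream (x64 only here; repeat for x86 if you need it).
# Assumes the RTM media has been copied to C:\SQL2008R2 and the SP2 package downloaded to C:\Downloads.

# 1. Extract the service pack into a folder underneath the media root
& 'C:\Downloads\SQLServer2008R2SP2-KB2630458-x64-ENU.exe' /x:'C:\SQL2008R2\SP2'

# 2. Replace setup.exe and setup.rll in the media root with the service pack versions
Copy-Item 'C:\SQL2008R2\SP2\setup.exe' 'C:\SQL2008R2\' -Force
Copy-Item 'C:\SQL2008R2\SP2\setup.rll' 'C:\SQL2008R2\' -Force

# 3. Copy the updated architecture files across, leaving the chainer package data alone
robocopy 'C:\SQL2008R2\SP2\x64' 'C:\SQL2008R2\x64' /XF Microsoft.SQL.Chainer.PackageData.dll

# 4. Tell setup where the extracted service pack lives
Add-Content 'C:\SQL2008R2\x64\DefaultSetup.ini' 'PCUSOURCE=".\SP2"'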

 

Testing slipstreamed service pack 2

I’ve been using the Service Pack 1 slipstream for quite a while now and have never encountered an issue. When SQL Server 2008 R2 Service Pack 2 came out a little while ago I worked to get it deployed on my preprod and production machines, then decided a couple of weeks ago that it was time to slipstream my installation media so that this would not need to happen for future deployments. We have a lot of QA and test deployments of SQL Server coming up, so I felt it was a good use of my time.

I followed Peter’s steps and built myself some new installation media that included Service Pack 2. Being the good DBA that I am, I got a couple of virtual machines spun up so that I could perform some installation testing. I wanted to be sure that all the components were installed at the relevant levels and that my configuration files did not need any tweaking.
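For the engine side of that check, a one-liner is enough to confirm the build. The instance name below is made up, and 10.50.4000.0 is the Service Pack 2 build for 2008 R2.

# Confirm the instance is reporting the SP2 build. Requires the SQLPS module for Invoke-Sqlcmd;
# 'SQLTEST01' is a placeholder instance name.
Invoke-Sqlcmd -ServerInstance 'SQLTEST01' -Query "SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion, SERVERPROPERTY('ProductLevel') AS ProductLevel;"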

The install of just SQL Server 2008 R2 with SP2, along with replication, full-text search and the client tools, went fine. There were no problems with the install, and all of the components were at the Service Pack 2 level when I checked. That being good, I moved on to an install that included Reporting Services. This is where I started encountering problems.

 

Installation errors

All of the pre-installation steps went without a hitch, and the components seemed to install without any problems, but then the installer attempted to start the Reporting Services service, at which point things went sideways.

The install was unable to bring the service online and the installation failed. The SQL Server engine and tools all installed successfully and I was able to access those, but no matter what I could not get Reporting Services to start. It took a while to go through the install logs, but I found the error:

Parameter 0 : SQL Server 2008 R2@RTM@
Parameter 2 : Microsoft.SqlServer.Configuration.Sco.Service.StartService
Parameter 3 : Microsoft.SqlServer.Configuration.Sco.ScoException@1211@1
Parameter 4 : System.ComponentModel.Win32Exception@-2147467259
Parameter 5 : SqlRSConfigAction_install_Startup
Parameter 7 : ReportServer
Parameter 8 : ReportServer
Final Parameter Values
Parameter 0 : SQL Server 2008 R2@RTM@
Parameter 2 : 0xDC112D1C
Parameter 3 : 0xD3BEBD98@1211@1
Parameter 4 : 0xDC80C325
Parameter 5 : SqlRSConfigAction_install_Startup
Parameter 7 : 0x22C8A7B3
Parameter 8 : 0x22C8A7B3

 

Strangely, it appears as though Reporting Services is installed at the RTM level, and this causes a problem as everything else is at the Service Pack 2 revision.

I wondered if I had done something wrong, so I tried to slipstream the service pack again, just in case I missed something. I got the same result.

Then I tried putting it over the top of service pack 1 slipstreamed media, but was met with the same result.

No matter what I tried I could not get it to work. Even worse, I could not apply Service Pack 2 to the failed Reporting Services installation in an attempt to bring it online. At this point I decided it must be a bug and gave up.

 

Filing a connect item

After multiple attempts with different media on different servers and under different conditions I was no closer to getting Reporting Services installed. Not being able to figure out a way to get it on there (short of deploying media with service pack 1 and then updating it to service pack 2) I decided to use Microsoft Connect to file it as a bug.

Connect ID 771260 is open for this. I fully expect it not to get fixed, as there are only a couple of years left on the support lifecycle, but it’s important that these things are at least brought to the attention of the product team.

 

What you can do

If you slipstream your SQL installs and have encountered this issue, please go and upvote the Connect item (and say that you can reproduce it). If you slipstream and have not come across this, please let me know.

Fun With Recruiters

I love it when I get those special kinds of emails from recruitment agencies claiming they have the perfect position. I got one of those emails last week, and I thought I would share it (as well as my response).

 

Title: Front End Web Development Lead
Position Type: Direct Placement
Location: Bothell, WA, United States
Description:

Duration: 0-6 month(s)
Job Description:
Front-End Web Development Lead – Bothell, WA
Every day over 19,000 Amdocs employees, serving customers in more than 60 countries, collaborate to help our customers realize their vision. We have a 30-year track record of ensuring service providers’ success by embracing their most complex, mission-critical challenges. 100% of Fortune’s Global 500 quad-play providers rely on Amdocs to help them run their businesses better.
Amdocs is a “can do” company that leads the industry, is fully accountable and most importantly, always delivers. This is our DNA. Our success has been sparked and sustained by hiring exceptional people. If this sounds like you— if you have the drive, focus and passion to succeed in a fast-paced, delivery-focused, global environment– then Amdocs would like to talk with you. Amdocs: Embrace Challenge, Experience Success.
– Please Note: All applicants must be currently authorized to work in the United States without employer sponsorship now or in the future.
Role Overview:
We are looking for a Front-End Web Development Lead to be a team lead directing a multi-shore group of developers tasked with providing issue resolution support for a very large-scale web retail store. Some of the responsibilities and duties include, but are not limited to:
Interface with defect assurance team to accept inbound production issues for resolution
Direct and coordinate work of offshore development team to ensure accurate and timely resolution of front-end production issues
Interface with customer development, business, and other teams as needed to provide good service, promote team visibility and positive perception
As team grows, evaluate potential additional team candidates and support Amdocs executive management by providing expert advice as required to grow our presence with the customer and provide continuous improvement
Provide analytical support to identify, develop, and drive strategic improvement initiatives involving functionality improvements, innovation solutions, and development and implementation methodologies
Serve as trusted advisor to management and client
Work day-to-day with key client management, development fulfillment partner, QA testing organization, providing expert support to each as needed and appropriate
Support development of improved governance of production defect management, including definitions of severity, criteria for prioritization, and defect management lifecycle processes.
Requirements:
5+ years front-end web development experience
5+ years hands on experience with the following key technologies: JSP Integration, HTML / HTML 5, AJAX, CSS, JavaScript, JSON, XML, JQuery
Strong leadership skills
Preferences:
Large scale /enterprise web retail experience
Integration with ATG Commerce
Integration with Adobe CQ
Experience with other industry standard integration technologies (e.g. WebLogic)
Technical leadership experiences in relevant technologies
Telecom experience
All Amdocs roles require strong verbal and written communications skills, position-appropriate mentoring/leadership abilities, ability to quickly master new systems and/or processes, capacity to stay organized while managing competing priorities, and a deep customer service orientation, both internally and externally.

 

I’m a database guy; I’ve never been a developer, let alone a dev lead, so I replied…

 

As a solutions provider I would expect you to have some great analytics. This leads me to ask what part of my skillset or background leads you, or anyone at your company, to believe that I would be a good fit for, or would consider, the opportunity that you list below.

 

If I ever get a response I’ll be sure to post it.

PASS Summit 2012 By The Numbers

Last week was the 2012 Summit. I thought I might just provide some interesting data points rather than give you a bunch of talk about the awesome presentations I saw, people I met, and things I did.

 

Steps taken: 49,404
Floors climbed: 126
Miles walked: 24.08
Hours slept: 23.7
Precons attended: 1
Sessions attended: 9
Sessions I left early: 1
Minutes spent in keynote 1: 100
Minutes wasted attending keynote 2: 45
Flashmobs danced in: 1
Sponsored events attended outside of Summit: 4
# nights at Bush Gardens: 1
# days pants were worn: 1
Best session award: Adam Machanic
Funniest stalker award: Mike Fal

Do You Trust Your Application Admins?

I was sitting at my desk, happily minding my own business, when an alert came through that a database backup had failed. OK, backups fail; I just figured one of the transaction log backups had hiccupped (we’ve been having some problems the last few days).

When I looked at the failure it was a backup trying to write to the C drive on the server.

I NEVER back up to C:. It can easily fill the drive and take down the system.

An even bigger indicator that something was up was that all of our backups are done across a 10Gb network to a centralized location for ease of tape backup. This meant that someone who was not a DBA had the access to run a SQL backup.

I trawled through the permissions on the server and nobody had that level of access, so I couldn’t figure out who had done this or how.

 

So What Happened?

Looking through the SQL logs I saw multiple failed attempts by a contractor to log in to SQL; then, about five minutes later, the backup error came through. Interesting stuff, so I walked over to the contractor and asked what was going on.
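If you ever need to do the same kind of digging, the failed logins are easy enough to pull straight out of the current error log rather than scrolling through it by hand (the instance name here is a placeholder):

# Search the current SQL Server error log for failed login attempts
Invoke-Sqlcmd -ServerInstance 'SQLPROD01' -Query "EXEC master.dbo.xp_readerrorlog 0, 1, N'Login failed';"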

After he was unable to log in he went to the application admin, who helped him out with access…using the application service account.

One of the third-party applications from a certain unnamed vendor has a database on that server. Due to the nature of the well designed code, the database owner has to be the same as the service account of the application. The application admin knows this password (not my doing).

After logging this contractor in as the application service account, the app admin walked away and left him to his own devices. As a result this contractor was dbo on a database which manages security for the entire company. We should just consider ourselves lucky that all this guy did was attempt to perform a backup.

 

Preventative Actions

In order to try to prevent this kind of thing in the future I am looking at implementing a logon trigger for the service account which checks the host and application connecting, and denies access to anything not on a specifically approved list. There is also a conversation going on about possibly disabling interactive logons for the service account using a group policy at the domain level.
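To give an idea of the shape of that trigger, here is a minimal sketch. The login, host names, and application name are all placeholders, and you’d want to test something like this very carefully on a non-production instance first, since a buggy logon trigger can lock everybody out.

# Deploy a logon trigger that only lets the app service account in from approved hosts/apps.
# All names below are examples; pushed out via Invoke-Sqlcmd, but any query tool works.
$trigger = @"
CREATE TRIGGER trg_LimitAppServiceAccount
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = 'DOMAIN\AppServiceAccount'
       AND ( HOST_NAME() NOT IN ('APPSERVER01', 'APPSERVER02')
             OR APP_NAME() NOT LIKE 'ExpectedAppName%' )
    BEGIN
        ROLLBACK; -- refuse the connection
    END
END;
"@
Invoke-Sqlcmd -ServerInstance 'SQLPROD01' -Query $trigger

Bear in mind that HOST_NAME() and APP_NAME() are supplied by the client and can be spoofed, which is why the group policy conversation is still worth having.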

 

It is a Matter of Trust

While the application admin is obviously at serious fault here, it leads to a question: how well do you trust your admin team?

Domain admins will be able to access your SQL Servers (get over it; there is no way you can keep them out, and if they really want in there are numerous ways for them to do so).

Anyone with a password could share that with someone else and allow them to access your servers.

Ultimately you have to trust those that you work with to do the right thing. It’s always sad when those people let you down.

Disabling Windows Updates With PowerShell

Allan Hirt wrote a post back in February entitled “Stop Automatically Updating Production Servers” in which he gave some excellent reasons why you should ensure that your servers are only updated manually (after all, who wants an accidental production-down scenario, am I right?).

This hasn’t been a problem for me, as all of my servers are disconnected from the Internet and there’s no WSUS server configured for them to reach; we just push patches down to the machines from a central location. I was perfectly happy with this scenario, and still am. However, there was one PITA reason why I wanted to turn off the automatic checks for Windows Updates: the constant dumping of errors into the System and Application event logs.


 

I was really sick of seeing these errors and wanted a quick way to turn off Windows Update without having to go through the multi-click interface nonsense.

Once more PowerShell to the rescue.

Disable-WindowsUpdates.ps1 does just what it says on the box. It disables Windows Updates on the machine it’s run on (provided that’s allowed by Group Policy). You have to execute it as an administrator. When it’s done it will remove those pesky alerts from the logs, and you can rest easy knowing that Windows won’t decide to reboot your servers in the middle of the day to apply that IE update you’ve always wanted.
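I’ll leave the full script at the link, but the core of what it does boils down to a few lines against the Windows Update Agent COM API. This is just a sketch of the idea, not the script itself:

# Turn off automatic update checks via the Windows Update Agent COM object.
# Run from an elevated PowerShell session; Group Policy settings will override this.
$au = New-Object -ComObject 'Microsoft.Update.AutoUpdate'
$au.Settings.NotificationLevel = 1   # 1 = never check for updates (disabled)
$au.Settings.Save()
$au.Settings.NotificationLevel       # read it back to confirm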

Extended Properties Are Your Friend

It’s nice to have friends, so why aren’t you a friend of extended properties? They can make your life so much easier by helping to document your databases and objects.

Take a basic table create statement:

CREATE TABLE [dbo].[EatSomeBoogers](
    [ID] [int] NOT NULL,
    [BgType] [varchar](20) NOT NULL,
    [Size] [smallint] NULL,
    [Viscosity] [tinyint] NULL
) ON [PRIMARY]
GO

 

Pretty simple, and we can infer a lot of the information about the table. But we mention a size: are we measuring in inches, feet, or yards?

We can add extended properties that will help to provide that sort of guidance for someone to reference later on.

EXEC sys.sp_addextendedproperty @name = N'Details',
    @value = N'Size is measured in mm', @level0type = N'SCHEMA',
    @level0name = N'dbo', @level1type = N'TABLE',
    @level1name = N'EatSomeBoogers', @level2type = N'COLUMN',
    @level2name = N'Size'
GO

We can also add properties at the table level:

EXEC sys.sp_addextendedproperty @name = N'Purpose',
    @value = N'Holds information about all the gold digging',
    @level0type = N'SCHEMA', @level0name = N'dbo', @level1type = N'TABLE',
    @level1name = N'EatSomeBoogers'
GO

And at the database level:

EXEC [MyDB].sys.sp_addextendedproperty @name = N'Usage',
    @value = N'Will handle all information as relates to digging'
GO

 

You can even add extended properties to other objects, like stored procedures:

EXEC sys.sp_addextendedproperty @name = N'ProcUsage',
    @value = N'Gets booger sizes and types', @level0type = N'SCHEMA',
    @level0name = N'dbo', @level1type = N'PROCEDURE',
    @level1name = N'GrabBoogers'
GO

 

What’s great is that you can then quickly and easily query the extended properties for your objects:

SELECT  OBJECT_NAME(ep.major_id) AS ObjectName ,
        CASE WHEN c.name IS NOT NULL THEN 'COLUMN'
             ELSE o.type_desc
        END AS ExtendedPropertyType ,
        c.name AS ColumnName ,
        ep.name AS ExtendedPropertyName ,
        ep.value AS ExtendedPropertyValue
FROM    sys.extended_properties ep
        LEFT JOIN sys.columns c ON ep.major_id = c.object_id
                                   AND ep.minor_id = c.column_id
        LEFT JOIN sys.objects o ON ep.major_id = o.object_id
ORDER BY ObjectName, ColumnName


 

 

This gives you a really quick and easy way to document your objects. I highly recommend that during your next development project you make life easy for yourself and add extended properties to your objects (and trust me, your DBA will thank you).

Mind Your P’s And Q’s

The 2012 PASS Summit is coming up in November and the board have decided that they want everyone to be free to enjoy the conference without having to worry about harassment. As such they have posted an anti-harassment policy that applies to all attendees at all Summit (and PASS sponsored social) events. You can find the policy over at http://www.sqlpass.org/summit/2012/AntiHarassment.aspx

There have been serious problems at other tech conferences, which have led some people (in particular women) to not attend. Valerie Aurora wrote about harassment as the reason why she would not be attending DEFCON over at http://adainitiative.org/2012/08/defcon-why-conference-harassment-matters/ and she is exactly right to call this out.

I admire that PASS have taken it upon themselves to put together a policy, and I agree with it in almost every regard. My one issue comes in the final sentence of the Participant Behavior section (emphasis mine):

Similarly, sexual, racist, derogatory, threatening, or other inappropriate language and imagery are not appropriate for any conference venue, including sessions.

 

This is such an open-ended point. What is inappropriate language, and who gets to set that standard? Are we going to be held to the “7 dirty words”?

 

I didn’t grow up in the US, and I have a different standard around what words are OK to say here (it gets me in trouble at times). It goes farther than swearing, though. What if a Christian, in all good faith, starts talking of God to an atheist, or an atheist tells a Christian that their beliefs are wrong? Either way this could easily be considered inappropriate by the other party. That would then go before the Anti-Harassment Review Committee, who would decide whether or not the policy has been violated. Given that there is no published guidance around this, it could easily lead to someone getting kicked out, after all.

 

I personally would like to see the policy amended around this point, either with stronger guidance as to what does and does not constitute inappropriate language, or with that part struck out.

How Do You Provide Passwords To Your Users?

Passwords are a necessary evil, and there are times when you have to hand out a password for a SQL login (because the POS application doesn’t support Windows Authentication). Traditionally I’ve done this by sending an email to the user with the login and a separate one with the password, figuring that internal security controls would be good enough to prevent anyone from accessing both emails. Recently it came to light that all emails were being siphoned off, and that an information security team had access to all email that traversed our Exchange servers. Now I’m not saying that I don’t trust these guys, but there’s no way in hell I would ever let them get one of my passwords.

I needed to come up with a better solution for getting passwords to users, one with a pretty good level of security around it. Yes, I know the password can easily be set to force a change at the next login; however, this does not work in a lot of cases where it will be used by an application and the person doing the configuration doesn’t have the knowledge or tools to go in and change the password themselves.

I decided that I wanted a two-factor-authentication-style method that would limit access to a password and provide the information once, and once only, so that it would never be stored for a long period of time.

First I created a table to hold the password and a uniqueidentifier, and nothing else. For extra security I didn’t want to store a login name along with this data; that way, even if someone got access to the password, they wouldn’t know which login it was for.

CREATE TABLE [dbo].[PwInfo]
    (
      [AuthenticationID] [uniqueidentifier] NULL ,
      [NewPwd] [varchar](128) NULL
    )
ON  [PRIMARY]
GO

ALTER TABLE [dbo].[PwInfo] ADD DEFAULT (NEWID()) FOR [AuthenticationID]
GO

 

Now I needed a quick and easy way to retrieve a password once it had been entered. I wrote a procedure that accepts the AuthenticationID, returns the password, and then deletes the entry.

CREATE PROCEDURE [dbo].[GetPwdByAuthenticationID]
    @AuthenticationID UNIQUEIDENTIFIER
AS
    SET NOCOUNT ON

    DECLARE @NewPwd VARCHAR(128)

    SELECT  @NewPwd = NewPwd
    FROM    dbo.PwInfo
    WHERE   AuthenticationID = @AuthenticationID

    DELETE  FROM dbo.PwInfo
    WHERE   AuthenticationID = @AuthenticationID

    SELECT  @NewPwd
GO

 

Finally, I added a proc which accepts a password, adds it to the table, and then returns some information to allow the password to be retrieved later.

CREATE PROCEDURE [dbo].[AddNewPassword] @NewPwd VARCHAR(128)
AS
    SET NOCOUNT ON

    DECLARE @AuthIDTbl TABLE
        (
          AuthenticationID UNIQUEIDENTIFIER
        )

    INSERT  INTO dbo.PwInfo
            ( NewPwd )
    OUTPUT  INSERTED.AuthenticationID
            INTO @AuthIDTbl
    VALUES  ( @NewPwd )

    DECLARE @AuthenticationID VARCHAR(128)
    DECLARE @Msg VARCHAR(4000)

    SELECT  @AuthenticationID = AuthenticationID
    FROM    @AuthIDTbl

    SELECT  @Msg = 'Password added. Add the user''s AD account to the report folder (http://ReportingServices/Reports/Pages/Folder.aspx?ItemPath=%2fDBA+Reports%2fUser+Password+Information ) and remove it once they have pulled the data.

Send the following to the user:

For your security purposes please visit the following URL in your browser http://ReportingServices/Reports/Pages/Report.aspx?ItemPath=%2fDBA+Reports%2fUser+Password+Information%2fGet+Password and enter the authentication ID of '
            + @AuthenticationID + '

This is a one time use authentication token; if you need the password information again you will need to contact the production DBA team.
'

    PRINT @Msg
GO

 

If you read through this code you’ll see that it outputs a message that provides a couple of links to Reporting Services. This is where the extra authentication comes in.

Within Reporting Services I created a report which calls the GetPwdByAuthenticationID proc and just returns the password (nothing more). This report lives in its own subfolder, which is only accessible by the DBA team.

 

Here’s how it works:

A user requests a password from us; we pull that password from our secure repository (highly encrypted) and use dbo.AddNewPassword to add it to the table. We get back a message which we then use to email the requestor; this contains the URL and the AuthenticationID that they need to enter into SSRS to get the information out. We then go to SSRS and grant the user Browser permissions on the folder, allowing them to run the report and get back the password. Once they have retrieved the password we go and remove the user from the folder, closing down access once more.
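On the DBA side, the call itself is nothing more than executing the proc. Here it is wrapped in Invoke-Sqlcmd; the instance, database, and password are all examples, and -Verbose is what surfaces the PRINT output from the proc.

# Add a password and show the message containing the report URL and AuthenticationID
Invoke-Sqlcmd -ServerInstance 'DBAUTIL01' -Database 'DBAUtility' -Verbose -Query "EXEC dbo.AddNewPassword @NewPwd = 'Ex@mplePassw0rd!';"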

This provides several layers of security:

  • The user must be logged in to AD with their own account to be able to access the report
  • The user must have the AuthenticationID provided by the DBA team to get the password
  • The password has a one-time access restriction, meaning the process cannot be repeated
  • The login and password are never given out together, and are never stored together by the DBA team except in a highly encrypted password vault inaccessible to anyone but the DBAs.

 

I feel this is a much better solution than sending passwords via email, and considering it only took an hour to put together I figure it was a very worthwhile use of my time. Sure, there are improvements that could be made around automating the access grants and the notifications to users, but as a quick win I think this does a good job.

 

I’m interested to know what solutions you other folks might be using for those times when you need to provide password information to users. Please comment below.

Comparing Indexes Using PowerShell

When you have data replicated out to multiple machines or are working on migrations it can be a challenge to ensure that the indexes are in sync, thus preventing query performance issues. I found the need to check and sync up indexes myself the other day and I didn’t have any third party software installed to help me do the compare. Yet again PowerShell came to the rescue and I ended up putting together a script that just kind of grew and grew on me, so this morning I decided to stop and put it out for anyone to use.

The script accepts a few parameters:

  • Server1 – The source server that you’ll use as your source of index goodness
  • Server2 – The destination that you want to compare (this can be the same server)
  • Database1 – The source database that contains the tables and indexes
  • Database2 – (this is optional) the destination database to check; if this isn’t passed then it defaults to the same value as Database1
  • OutputScriptLocation – The path and filename for the output SQL script file which you can run to create the indexes
  • Verbose – A switch which will dump lots of information to the screen, which can be useful as it will also tell you if there are missing tables on the destination

 

I decided not to do the actual index creation within the script, as it’s good to be able to check the output and be sure it’s what you want. I did, however, output the script in such a way as to make it as easy as possible to run: just open the file in SSMS, switch to SQLCMD mode, and execute it. The script will make the connection to your destination server and the right database and then create the indexes. It will also output to the screen when an index is created (this is very useful for those times when you are creating a lot of indexes and some of them are large).

 

I’ve tested it on a few servers and it’s worked out great for me; let me know if you run into any problems running it or encounter strange bugs.

Download Compare-Indexes.ps1 or get the code below.

<#
.SYNOPSIS
   Script can be used to compare indexes on tables on different servers/databases.
.DESCRIPTION
   When executed the script will check the passed-in databases on different (or the same) servers and output a SQLCMD script which
   can be executed to add missing indexes.
.PARAMETER <paramName>
   -Server1 The source SQL Instance that is used as the master
   -Server2 The destination SQL Instance that will be the target for additional indexes
   -Database1 The source database
   -Database2 The destination database (leave blank if the source and destination databases have the same name)
   -OutputScriptLocation Path and filename for the .sql file that will be created for any indexes that need to be added
   -Verbose Output some logging information
.EXAMPLE
   ./Compare-Indexes -Server1 localhost -Server2 remotehost\instance1 -Database1 MyDB -OutputScriptLocation C:\Temp\NewIndexes.sql -Verbose
#>

param
(
    [string]$Server1 = "$(Read-Host 'SQL Instance 1 [e.g. (local)]')",
    [string]$Server2 = "$(Read-Host 'SQL Instance 2 [e.g. (local)]')",
    [string]$Database1 = "$(Read-Host 'Database Name')",
    [string]$Database2,
    [string]$OutputScriptLocation = "$(Read-Host 'Output File Name')",
    [switch]$Verbose
)

if (!$Database2) { $Database2 = $Database1 }

$MissingIndexCount = 0

try
{
    #Initialize the output file
    $FileInitiation = @"
:ON ERROR exit
:setvar SQLInstance `"$Server2`"
:setvar Database `"$Database2`"
:CONNECT `$(SQLInstance)

USE `$(Database);

"@

    $FileInitiation | Out-File $OutputScriptLocation
    $File = "True"
}
catch
{
    Write-Host "Output folder does not exist...aborting script."
    $File = "False"
}

if ($File -eq "True")
{
    #Named instances contain a backslash; default instances sit under the DEFAULT node of the SQL provider
    if ($Server1 -ilike "*\*") { $BaseServer1 = "SQLSERVER:\SQL\$Server1\Databases\$Database1\Tables\" }
    else { $BaseServer1 = "SQLSERVER:\SQL\$Server1\DEFAULT\Databases\$Database1\Tables\" }

    if ($Server2 -ilike "*\*") { $BaseServer2 = "SQLSERVER:\SQL\$Server2\Databases\$Database2\Tables\" }
    else { $BaseServer2 = "SQLSERVER:\SQL\$Server2\DEFAULT\Databases\$Database2\Tables\" }

    Write-Output "Please wait...comparing $Server1.$Database1 to $Server2.$Database2"

    $Server1List = dir $BaseServer1 | select DisplayName
    $Server2List = dir $BaseServer2 | select DisplayName

    #For information purposes show a list of tables missing on server2 if verbose enabled
    if ($Verbose)
    {
        $MissingTables = Compare-Object $Server1List $Server2List -Property DisplayName | Where-Object { $_.SideIndicator -eq '<=' } | Select DisplayName
        foreach ($MsTbl in $MissingTables)
        {
            $MissingTable = $MsTbl.DisplayName
            Write-Output "Missing table - $MissingTable"
        }
    }

    #The compare uses -IncludeEqual as we only want to compare indexes on tables that exist on both servers
    $TableList = Compare-Object $Server1List $Server2List -Property DisplayName -IncludeEqual | Where-Object { $_.SideIndicator -eq '==' } | Select DisplayName
    if ($TableList)
    {
        foreach ($Table in $TableList)
        {
            $MissingIndex = "" #Initialize the variable so we can null it
            Clear-Variable MissingIndex #Null the variable as we'll be checking for a value in here later
            $TableToCheck = $Table.DisplayName
            $Table1Index = dir "$BaseServer1$TableToCheck\Indexes" | Select Name
            $Table2Index = dir "$BaseServer2$TableToCheck\Indexes" | Select Name
            if ($Verbose) { Write-Host "Checking $TableToCheck" }
            $MissingIndex = Compare-Object $Table1Index $Table2Index -Property Name -IncludeEqual | Where-Object { $_.SideIndicator -eq '<=' } | Select Name
            if ($MissingIndex)
            {
                $MissingIndexCount += 1
                $Index = $MissingIndex.Name
                Write-Output "Missing Index - $Index on $TableToCheck"
                dir "$BaseServer1$TableToCheck\Indexes" | Where-Object { $_.Name -eq "$Index" } |
                    % { $_.Script() | Out-File $OutputScriptLocation -Append; "; `r`nRAISERROR('Index $TableToCheck.$Index created...',0,1) WITH NOWAIT; `r`n`r`n" | Out-File $OutputScriptLocation -Append }
            }
        }
        if ($MissingIndexCount -gt 0) { Write-Output "File $OutputScriptLocation created. Open this file in SSMS and execute in SQLCMD mode" }
        else { Write-Output "No missing indexes found!" }
    }
    else { Write-Output "No matching tables found." }
}