PASS Summit 2011 – Day 2 Keynote

8:17 – Bill Graziano hits the stage wearing a kilt. Good start. Shout out to the kilted. Nice start injecting some humor. Bill’s parents watching the keynote from home. Nice.

Celebrating volunteers. You can’t run this Summit, in fact PASS doesn’t run at all, without volunteers.

Outstanding volunteers Tim Radney, Jack Corbett.

PASSion award winner Lori Edwards. So well deserved. She put this thing together from a selection and speaker perspective. She really has worked so hard. I was on the speaker selection committee and know the work that she put in to make things a little easier for us. Always in contact, always available. There aren’t enough good words to say.

8:27 – Quentin Clark to the stage

Videos of people talking up SQL 2012. It’s nice, but why are you trying to sell this product to us? We’re here. Show us things.

Metro PowerPoint slides. I actually like these, they are nice and clear with a certain smoothness.

Tag cloud slides, biggest thing that stands out? ODBC driver for Linux.

Integration Services as a Server, HA for StreamInsight, AlwaysOn – all this for “required 9’s & protection” (of course this could be one 9, there’s nothing behind that message)

Bob Erickson brought up to talk about how his company does mission critical. Lots of talk of how his business has to have high availability.

OK, how they use AlwaysOn, this is about to get real.
Demoing using the wizard to setup an AlwaysOn group. I’m down with this. Technical bits in keynotes to demonstrate the new tech that you want us to buy is smart.

There’s going to be a SCOM pack to monitor this (shame that SCOM still can’t monitor mount points out of the box, but baby steps)

Nice demo, even a little zoomit so that we could see what was going on.

Quentin back

ColumnStore indexes are going to be big.

Self service BI – Power View + PowerPivot (not sure why there’s a space in Power View, can’t we get some consistency), admin from SharePoint.

Data consistency – BI Semantic Model (BISM), Data Quality Services (DQS), Master Data Services (MDS). Lots of new acronyms to learn.

Lara Rubbelke up with a sense of humor and an awareness of the crowd

Users love SharePoint. Slow running query on stage and data quality issues. Demo of creating a columnstore index and showing the perf difference. NOT USING THE GUI!!!
That’s what a presentation should be at a keynote like this.

Back to Quentin

Expanded auditing for compliance and user defined server roles (very glad to see this one)

Peace of mind – distributed replay so that you can test load from multiple machines, nice.

Fast time to solution – SQL appliances (pre-tuned & optimized), HW + SW + Support (just plug it in) and a choice of hardware.

Talking hardware – Parallel Data Warehouse, Business Decision, Enterprise Data Warehouse, Business Data Warehouse appliances. I’m not seeing any OLTP appliances.
Ah, the consolidation appliance will take care of that. Nice nerd hardware talk. Good stuff.

The future of PDW includes changing the optimizer. That could be fascinating. Wonder if there will be any talk of that tomorrow.

ODBC drivers for Linux coming out today, CDC for SSIS & Oracle. Nice move for getting to people who might want to switch.

Michael Rys on statistical semantic search. There’s a slide or something on screen, but it’s so small we can’t see it. Semantic search recommends the Odyssey after reading the Iliad. That’s semantic search for you, kids. Actually wound up being a good demo after zoomit was used.

Scale on demand – IT’S THE CLOUD with Nicholas Dritsas. There’s actually some good information coming out of this. However, I’m not a cloud guy, so there’s not much I’m going to say about it.

Elastic scale with Azure – Cihan Biyikoglu – federation. Sharding for Azure. This could be significant for some companies. I think it’s a niche thing, but could be key for some customers.

Databases up to 150GB in size by year’s end. That might make the cloud more interesting for some companies. Metro UI for Azure. MS is certainly all in on that one.

We get booklets as we leave and CTP on DVD.

Overall much better than yesterday with Lara being the real stand-out.

PASS Summit 2011 – Opening Keynote

It’s Wednesday morning and time for the opening keynote.

The bloggers’ tables are filling up as we get ready for what we hope is not another Tina Turner spectacular.

Journey playing over the speakers. Can’t beat a bit of 80’s rock.

8:15 – ok, we’re ready, can we start now?

8:22 – Lights going down, here we go

Videos of people talking up the Summit. I guess this is meaningful to those watching at home. The rest of us are here.

Rushabh Mehta on the stage. No Tina Turner!

Board meet and greet on Friday. Want to ask something of the board? Be there and you can. Might be a good opportunity.

Damn, 430K technical hours of training provided by PASS this year (SQL Sats, UGs, virtual chapters, Summit, Rally, 24HOP, etc.). That’s a lot of hours. 20,000 new members this year. It’s a community of over 80,000 people. That’s a hell of a lot of nerds. Yes, nerds, I said it.

For some reason Rushabh thinks it’s important that people who aren’t at the Summit care about schedule changes. No idea why that is. On the upside he is pushing the DVDs (get them, whether you are here or not, it’s well worth it).

I’m sure that it has nothing to do with the streaming of the keynote, but WiFi is just jacked here again. Second day in a row. I guess the convention center is just not cut out to provide the kind of bandwidth that a huge bunch of nerds require.

Apparently it’s fun to meet the vendors. Well some of them are fun.

8:43 – Ted Kummert to the stage. Hope we get something good, not a long sales pitch.

Ted gets to sit around every day and imagine the future. Nice work if you can get it (he probably works really hard; that comment just doesn’t make it sound that way).

Little dig at Oracle….nice

4,000 registered PASS attendees? Damn, this thing got HUGE

300 Microsoft SQL MVPs? I challenge you to name them all.

Ted talking of consistency across the environment with SQL & Azure, manage it all with the same tools and keep things seamless.

Denali going to be one of the most significant releases of the product in history. Talking AlwaysOn, BISM, MDS, DQS, data marketplace, Crescent and of course the cloud.

Cloud, cloud, cloud, cloud shut up about the damn cloud already.

SQL Denali now SQL 2012 and will release in H1 2012. This is what we’ve been waiting to hear about. Still no firm date though sadly. Wish they’d given something more, but at least we have a name now.

Funny, Ted’s slide deck looks like Mango.

Talking consistency across the tools pack.

Big data…what is it? I’m guessing giant fonts, but probably wrong. I’m waiting for the announcement of SQL Server NoSQL at this point.

Close, announcement of support for Hadoop to get it on Windows & Azure. JavaScript framework for Hadoop, ODBC drivers for Apache Hive, Apache Hadoop based distribution for Windows & Azure. Seems to me that MS is a little concerned about the NoSQL stuff. Partnership with Hortonworks.

Microsoft Hadoop coming soon?

There’s lots of talk around Hadoop, Microsoft, community and I’ve fallen asleep.

Hadoop on Azure CTP by year’s end.

Denny Lee bringing the demos…

First slide showing an example from a web log. It shows Firefox as the browser. I guess MS testers don’t use IE either.

Oh joy, there’s the Hadoop connector.

Wait, when did this become a NoSQL blog?

Here’s how you quickly dig into data…show the results of the report that took an hour to run last night. Erm, what?

Ted back…

Now Tim and Nino talking “Data Explorer” in the Azure labs. AZURE PEOPLE! IT’S THE FUTURE GET OVER IT! Is the message I’m getting here.

Cool, Contoso have moved on from bikes and are now branching out to frozen yoghurt. But only in the cloud.

Interesting how everyone on stage seems to be using Windows 8.

Oooh “traditional reference data” with air-quotes

3-way join between Azure, Excel, marketplace data. That’s nice.

Kids love frozen yoghurt. No, kids love ice-cream.

Ted again…

I, and a lot of people around me, have completely checked out of this keynote. All we are hearing is Azure and self-service BI.

Amir Netz to the stage. He’s great and knowledgeable. Let’s see if he brings something interesting.

Well, there are some nice graphs, quickly building information from live data. He presents well and has a lot of enthusiasm which it’s difficult not to get caught up in. He’s really doing his best to get the crowd back. Of course all the energy he builds up will be dissipated as soon as Ted starts up with Azure again.

Oh wait, Amir lost everyone by bringing up a Windows Phone. Damn, he was doing so well in repeating last year’s presentation.

Time to leave and head down to see Red Gate, who I’ve heard have something cool and interesting to say.

Summit 2011–Getting A Seat At The Table

If you’ve attended the Summit then you might have seen a bunch of tables near the back of the room with a whole bunch of reserved seats. The tables are manned by crazed-looking folks, laptops lit up, fingers flicking over the keyboards as events unfurl on the stage in front of them.

These fine folk are bloggers and they have been presented with the opportunity to get comfy chairs and write about the keynotes. Liveblogs update and tweets flutter their way into the ether as they get incensed about Tina Turner impersonators or wowed by the straightforward technical awesomeness and presentation skills shown by David DeWitt.

This year PASS opened up the opportunity for folks to enter into a raffle to get one of these coveted seats.

I was lucky enough to get one of those seats, so on Wednesday, for the opening keynote, I’ll have a seat at one of the tables. I’ll have my tablet set up, TweetDeck running, and will be hoping that the keynote will not be a repeat of last year’s fiasco (although it’ll be great fodder if it is).

Look for something interesting coming to my blog Wednesday and follow me on Twitter (@sirsql) to see what I have to say.

 

Such fun.

Optimizing Database Restores

You can Google/Bing/AltaVista and get a lot of results for speeding up your backups. Those results will include using third-party tools, performing file and filegroup backups, differential backups and the like. Some of the more technical blogs will talk about adjusting some of the backup parameters to improve throughput.

There’s really not much written (that I’ve found, please add comments with links to posts you’ve found) that talks about tuning database restores.

I started messing with some restore parameters last week and found that it’s possible to make some dramatic performance improvements.

 

How this started

We were performing a data center migration at work. As a part of this migration we were going from SQL 2000 to SQL 2008R2 for some of our databases.

The database, which we’ll call MyDB, is 620GB in size. There are 5 data files and a single log file. Current backups are performed using Snap technology, which wasn’t going to help us with performing a migration as we needed to get the data over and set up log shipping, so we needed to perform a regular backup.

Third party tools were not an option as this SQL instance was running SP3a and we weren’t able to find anything that would work below SP4.

A full backup was taken to local disk, it took a little over 9 hours (server and disk limitations prevented any work to try and improve that).

The backup file was transferred to a USB drive, taken to the new datacenter and copied over to a CIFS share where it would be available for performing restores.

 

Hardware/software setup

It’s going to help to have an idea of the configuration that’s being used here.

Hardware:

  • Server: HP DL585 G7 (clustered)
  • CPU: 4 x AMD 6176SE (2,294 MHz, 12 cores) for a total of 48 cores
  • RAM: 128GB
  • Storage: NetApp SAN with WAFL filesystem – fiber connected with 2 x 4Gb HBAs, MPIO configured for least queue depth
  • Network: 10Gb/s (HP NC375i) – 2 connections, configured for failover only, no load balancing
  • One LUN defined (as mount points) for each file in the database to be restored

 

Software:

  • SQL 2008R2 CU7 Enterprise Edition
  • Max server memory: 100GB
  • Affinity masks not set
  • Max degree of parallelism: 6
  • 4 x tempdb files

 

 

Basic restore

The initial restore was performed with default options, no optimization was performed.

RESTORE DATABASE MyDB FROM DISK = '\\CIFSDevice\Backup\SQLBackups\Production\ForRestore.bak'
WITH MOVE 'MyDB' TO 'L:\MyDB_Data\MSSQL\Data\MyDB_Data.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , NORECOVERY

 

Restore time was 2:48.

 

Optimization techniques

I knew that adjusting the MaxTransferSize and BufferCount settings could provide significant improvements in backup performance (go read a fantastic SQLCat article on this) and figured that the same would be true for performing restores.

Before I could start changing values I needed to know what values were being used with the basic restore. To do this I enabled trace flags 3213 (trace SQL activity during a backup/restore) & 3605 (output trace activity to the SQL error log).

DBCC TRACEON (3213, -1)
DBCC TRACEON (3605, -1)

 

With the flags enabled I performed another restore which provided the following information:

[Screenshot: trace flag output in the SQL error log showing the BufferCount and MaxTransferSize used by the restore]

So by default we’re running with a BufferCount of 6 and a MaxTransferSize of 1,024 KB.

 

The MaxTransferSize value is the largest unit of transfer to be used between SQL Server and the backup media. This is specified in bytes and can range in value from 65536 (64 KB) to 4194304 (4 MB).

The BufferCount specifies the number of IO buffers to be used for the restore operation. Any positive number can be used for this; however, you have to be careful with larger numbers as you could potentially cause out of memory errors.
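As a rough rule of thumb (this is the relationship the SQLCat article walks through), the memory grabbed for those buffers is roughly BufferCount x MaxTransferSize. A quick sanity check for the values I ended up using later in this post:

-- Rough buffer memory footprint: BufferCount * MaxTransferSize
-- 75 buffers * 4MB per buffer = 300MB
SELECT (75 * 4194304) / 1048576 AS ApproxBufferMemoryMB;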

 

Changing the MaxTransferSize

Not having any obvious hardware restrictions I decided that as the restore was being performed across the network adjusting the MaxTransferSize might provide the biggest benefit.

Feeling confident I just bumped it to the max and performed the restore.

RESTORE DATABASE MyDB FROM DISK = '\\CIFSDevice\Backup\SQLBackups\Production\ForRestore.bak'
WITH MOVE 'MyDB' TO 'L:\MyDB_Data\MSSQL\Data\MyDB_Data.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , NORECOVERY
    , MAXTRANSFERSIZE = 4194304

 

This time the restore took 45 minutes, a reduction of just over 2 hours from the default.

 

Changing the BufferCount

Having already reduced the restore time by over 70% I wanted to see if I could get a little bit more performance from it. So I made an adjustment to the BufferCount value and set it to 75.

RESTORE DATABASE MyDB FROM DISK = '\\CIFSDevice\Backup\SQLBackups\Production\ForRestore.bak'
WITH MOVE 'MyDB' TO 'L:\MyDB_Data\MSSQL\Data\MyDB_Data.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , NORECOVERY
    , MAXTRANSFERSIZE = 4194304
    , BUFFERCOUNT = 75

 

This restore took a little less time, 38 minutes, shaving another 7 minutes off the already shortened time. At this point I was down about 78% from the regular restore command.

I tried pushing the BufferCount to 100 to see if I could trim things further but found no appreciable difference and so left it at 75.

 

Comparing backup types

As a part of the move, log shipping was to be set up to offload reporting for users, and so I needed to restore the database from the new SQL 2008R2 server to another with exactly the same specifications.

The native SQL 2000 backup file was 520GB in size; with no optimizations it restored in 2:48, and optimized it restored in just 38 minutes (not bad for a network restore).

The compressed SQL 2008R2 backup file was just 121GB in size. An optimized restore of this database took just 23 minutes. That’s an 86% saving over the unoptimized SQL 2000 based restore.
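If you’re tuning the backup side too, the same knobs apply there. Here’s a quick sketch of a compressed, tuned backup (the path is hypothetical, the other values are the ones that worked for my restores):

BACKUP DATABASE MyDB
TO DISK = 'H:\MSSQL\Backup\MyDB.bak' -- hypothetical location
WITH COMPRESSION
    , MAXTRANSFERSIZE = 4194304
    , BUFFERCOUNT = 75
    , STATS = 5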

 

Summary

There are times when you are not able to perform partial restores, or you need to restore entire databases to a new environment in as short a time as possible. Don’t always accept the defaults that SQL Server gives you in these situations. There are things that you can do to improve the performance of your restores (and your backups). It’s worth spending some time now, working in your environment, to optimize your restores.

A little time playing with these things now could be the difference between your database being down for 20 minutes as opposed to 3 hours.

How much is 2 1/2 hours worth to your business?

 

 

Such fun.

SSRS Validation Of Viewstate MAC Failed

You aren’t likely to see me post much about SSRS, mainly because I use it only on rare occasions, and when I do I don’t tend to hit blog-worthy things. Today is different though…

We recently deployed a new SSRS farm at the office. The deployment was smooth, the first reports went on without any issues. Everything looked good.

I was setting up a data driven subscription for a report and received the following error:

Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster

 

What the heck?

Searching the web provided a plethora of hits around this, but they were all related to regular ASP.NET and there wasn’t anything in particular around SSRS and this problem.

After digging around the web I went back to Books Online and worked through the documentation there with a fine-tooth comb, and that’s where I found the solution (see it for yourself at http://msdn.microsoft.com/en-us/library/cc281307(SQL.105).aspx#ViewState )

 

Due to Reporting Services running in a load balanced configuration, it couldn’t autogenerate the key used for validation of the view state. In this case a manual key had to be generated and deployed to each of the SSRS front ends.

I needed a resource to generate a key so I went out to http://aspnetresources.com/tools/machineKey where one click gave me what I needed.

Example key:

<machineKey validationKey="E764BC3B6B17235E6CC9B2C81FA1C2694D54FD8B47BE1E081AE50E3291FE731C392FD7E2B86DD9985498242BD82FBFC3AB53F78DC228E637089686AD3B6936D2" decryptionKey="FC20492987C969987BC88814BB264A4B8986074C0B3452F5DD81877F144D28DB" validation="SHA1" decryption="AES" />

 

I then pasted the key into the ReportManager web.config file on each front end server. After a service restart everything came back up and I was able to deploy the subscription without any further issues.
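If you’re wondering exactly where it goes, the machineKey element lives inside <system.web> in the web.config. A minimal sketch (key values elided):

<configuration>
  <system.web>
    <machineKey validationKey="..." decryptionKey="..." validation="SHA1" decryption="AES" />
  </system.web>
</configuration>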

 

Moral of the story here: Carefully read Books Online, it’s actually a great reference.

 

Such fun.

 

Being Inappropriate At The Summit

Last year Sean & Jen McCown (you might also know them as the Midnight DBAs) put on a little event called “Inappropriate PASS”. It was a smallish gathering of 30 or so folks who wanted to give and watch presentations that were a little on the risqué side.

There were drinks, noms, curse words aplenty and enough stuff to make you laugh until your sides hurt. A fun time was had by all in attendance.

With the Summit coming up in a little over a month, planning is well underway for this year’s event. To help with the planning Jen is asking for some feedback from possible attendees (those of you that will be at the Summit and those of you that live in the local area). Please go to http://www.midnightdba.com/Jen/2011/08/important-inappropass-survey/ and fill out the 30-second survey.

I think you should seriously consider joining us for a couple of hours of craziness and should put some serious thought into something that you would like to talk on for 5 minutes. Nothing is off the table or out of bounds so bring your worst. It’s going to be a blast.

 

Such fun.

Presenting at SQLSaturday #89–Atlanta

I’m very excited to be going to Atlanta on September 17th and presenting at the SQL Saturday event being held there. I was lucky enough to get two submissions accepted.

 

Centralized auditing of permissions with SQL Server

As a DBA it can be a challenge to know who has permissions to which SQL instances and objects. The more instances you have, the more complex that task becomes. In this presentation I’ll share a method using PowerShell and TSQL that can be used to capture permissions from all of your SQL instances and load them into a centralized location. We’ll even take it a step further by auditing those permissions so that we can quickly and easily identify any that might have changed.

 

PowerShell: Are you checking out my profile?

PowerShell is a very powerful management tool and you can spend hours writing magical scripts to provide automation for frequently run tasks. Often forgotten is the PowerShell profile, a place you can add your own functions to provide lightning-fast access to information. In this session we’ll talk about the power a profile puts at your fingertips. I will also demo (and share) several PowerShell functions that I use frequently for common tasks like checking database backups and disk space. I’ll show you my PowerShell profile if you show me yours.

 

 

There are a grand total of 17 MVPs presenting that day. I’m frankly stunned to have the opportunity to be in such rarefied company and to speak myself (in fact, for my first session I’m up at the same time as 7 MVPs, so I’m expecting a quiet room). It should be a blast though, be sure to say hi if you’re there. I might even sneak in a preview of my PASS Summit Lightning talk.

 

Such fun.

Speaking At PASS Summit 2011

I’ll be speaking at the PASS Summit 2011 coming up in just over a month.

“Wait…what?” I hear you say, “you said in your blog post https://sirsql.net/blog/2011/6/20/not-presenting-at-pass-summit-2011.html that you were not going to be presenting”.

 

Well a couple of weeks ago PASS opened up the opportunity for folks to submit lightning sessions (5 minute talks on a subject). I submitted 4 of those and one of them was selected:

 

PowerShell: It’s your new BFF

PowerShell wants to be your bestie. Please accept PowerShell’s friend request and put it in your circle. Find out why you should.

 

Mine is just one of 24 lightning sessions that have been selected, check out the others at http://www.sqlpass.org/summit/2011/SummitContent/LightningTalks2011.aspx

 

Hope you’ll find the time to swing by at least one of the lightning sessions. They are a lot of fun.

 

Creating A Clustered SQL Install ini File Using PowerShell–Part 3

In parts 1 and 2 the cluster was made ready for the SQL install and the Active Directory objects were created. In the final part of the series we will finally get to creating the ini file and installing SQL.

As an FYI this script is designed to install the following components along with the Database Engine:

  • SQL Server Replication
  • Full-Text Search
  • Business Intelligence Development Studio
  • Client Tools Connectivity
  • Integration Services
  • Client Tools Backwards Compatibility
  • Client Tools SDK
  • SQL Server Books Online
  • Management Tools – Basic
    • Management Tools – Complete
  • SQL Client Connectivity SDK
  • Microsoft Sync Framework

The ini can be quickly modified to remove some of these components if they are not required (I’m looking to add selections for these items to a future version).
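The component list maps to the FEATURES line in the generated ini (you’ll see the full line later in this post). As a sketch, trimming it down to just the engine pieces plus Integration Services would look something like this:

FEATURES=SQLENGINE,REPLICATION,FULLTEXT,IS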

 

Copy the script to the server

Download the CreateClusteredSQLInstallFile.ps1 and copy it over to the server. From there it can either be opened directly in PowerShell ISE or called from within the PowerShell shell.

If calling the script from the shell you might get an error:

 

If so you’ll need to reopen the shell as an administrator and change the execution policy to RemoteSigned

Set-ExecutionPolicy RemoteSigned

 

This time it should run without any errors.

 

Working through the script

The first thing to come up is an informational box letting you know that you’ll be creating ini files:

 

Next, depending upon whether you started the script through the shell or the ISE you’ll see different things…in the ISE you’ll see a pop-up window asking if you want to “Install a new clustered instance or add node to an existing cluster?”; this is in case you want to generate an ini file for a cluster you already have in place. By default this will create files for a new clustered install of SQL Server:

In the shell you will just get a prompt:

 

A new window will appear asking where you want to put the ini files when they are created. I’ve chosen to put them in C:\SQL Install ini (this is the same if you use the shell or the ISE)

 

Next you’ll be prompted to enter the SQL virtual name (going forward I’ll be showing the prompts through the ISE, keep an eye on the shell if that’s what you are using). This is the disabled computer account we created in Active Directory:

 

The next prompt will ask for an instance name, here I’m going with PROD (which will give us the SQL instance of GROO\PROD):

 

Follow this up by entering the IP address:

 

The next two prompts are for the SQL Service and SQL Agent accounts to be used:

 

Now choose a root folder for the SQL system databases (master/model/msdb). A SQLSystem subdirectory will automatically be created, so I am using the root of G:

 

A similar window will pop-up and ask for the location of TempDB, the data and log files and backups. This is just another case of choosing the appropriate locations. As a note, from the location you choose the following subfolders will be created and used:

  • TempDB – <root>\MSSQL\Data
  • SQL Data Files – <root>\MSSQL\Data
  • SQL Log Files – <root>\MSSQL\Logs
  • Backups – <root>\MSSQL\Backup

 

 

The final step is a prompt that will appear asking which failover clustered disks should be used. These will be listed under available storage in the Failover Cluster Manager. Only the disks to actually be used should be added. This is a comma-separated list of those disks (the screenshot shows the disks next to the prompt for easy identification):

 

A completion pop-up appears indicating what files were created and how to use them:

 

The filenames are generated based upon the user creating the script and the time that the script started executing (this helps to prevent accidental duplicates).

 

If you’d used the shell to execute the script you’d expect to see the following:

 

I’ve got files, so now what?

Now that we have generated the ini files it’s time to put them to use. The first thing I like to do is rename them to something more appropriate (like the SQL virtual name and their purpose), so the autogenerated names become files like SQLInstall.ini and AddNode.ini.

 

I also like to store these files away in a repository; this way it’s a very quick and easy operation to add a node to a cluster or find out what installation settings were used.

 

Can we do the SQL install already?

We sure can, it’s go time! But first, take a quick look at the ini file that was created:

;File created by: AWESOMESAUCE\Nic
;File creation date: 2011-07-17 16:47
;Script to install new SQL clustered instance
;SQLSERVER2008 Configuration File
[SQLSERVER2008]
IACCEPTSQLSERVERLICENSETERMS="TRUE"
FAILOVERCLUSTERNETWORKNAME="GROO"
INSTANCEID="PROD"
INSTANCENAME="PROD"
FAILOVERCLUSTERGROUP="SQL Server (PROD)"
FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.137.150;Public Network;255.255.255.0"
INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server"
INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server"
INSTANCEDIR="C:\Program Files\Microsoft SQL Server"
ACTION="InstallFailoverCluster"
FEATURES=SQLENGINE,REPLICATION,FULLTEXT,BIDS,CONN,IS,BC,SDK,BOL,SSMS,ADV_SSMS,SNAC_SDK,OCS
HELP="False"
INDICATEPROGRESS="True"
QUIET="True"
QUIETSIMPLE="False"
X86="False"
ENU="True"
ERRORREPORTING="False"
SQMREPORTING="False"
FILESTREAMLEVEL="0"
ISSVCSTARTUPTYPE="Automatic"
ISSVCACCOUNT="NT AUTHORITY\NetworkService"
SQLCOLLATION="SQL_Latin1_General_CP1_CI_AS"
FTSVCACCOUNT="NT AUTHORITY\LOCAL SERVICE"
SQLSYSADMINACCOUNTS="awesomesauce\nic"
SQLSVCACCOUNT="awesomesauce\Groo_SQLEngine"
AGTSVCACCOUNT="awesomesauce\Groo_SQLAgent"
INSTALLSQLDATADIR="G:\SQLSystem"
SQLTEMPDBDIR="G:\TempDB\MSSQL\Data"
SQLUSERDBDIR="G:\SQLData\MSSQL\Data"
SQLUSERDBLOGDIR="G:\SQLData\MSSQL\Logs"
SQLBACKUPDIR="H:\MSSQL\Backup"
FAILOVERCLUSTERDISKS="SQL Backup Disk" "SQL Data Disk"

 

 

That’s all the information needed to perform the install.

Now open up an elevated command prompt and go to the location of the install media. Then just call setup.exe, passing along the location and name of the SQLInstall.ini file along with the SQL and Agent passwords, and hit enter:
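The call looks something like this (the ini path matches the folder chosen earlier; the passwords are placeholders, use your own from your KeePass file):

setup.exe /ConfigurationFile="C:\SQL Install ini\SQLInstall.ini" /SQLSVCPASSWORD="EnginePwdHere" /AGTSVCPASSWORD="AgentPwdHere"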

 

Now go and make yourself a beverage of your choice; the install will take a little while. After an indeterminate amount of time you should see the following nondescript message in the command prompt window:

 

Not exactly a whole bunch of bells and whistles there, but “Setup result: 0” tells us that the install was successful (even though it’s somewhat cryptic).

We can test this by cranking up SSMS and connecting:

 

Add another node

A single node cluster, while nice, isn’t particularly highly available, so we need to have this SQL instance deployed on the other node(s). This is a quick and painless task. Just copy the SQL Install ini folder over to the other node(s), log in and run setup.exe, this time calling the AddNode.ini file:
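The call is the same shape as before, just pointing at the AddNode file (placeholder passwords again):

setup.exe /ConfigurationFile="C:\SQL Install ini\AddNode.ini" /SQLSVCPASSWORD="EnginePwdHere" /AGTSVCPASSWORD="AgentPwdHere"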

 

Again, go for a walk around the block, talk to your colleagues and eventually come back to:

 

 

Rinse and repeat for as many nodes as you need.

 

Summary

Set up your cluster, ensure Active Directory is good to go and use CreateClusteredSQLInstallFile.ps1 to quickly build your ini file to install SQL Server on to your cluster. It’s quick and easy. It allowed me to deploy 4 SQL Server instances to 5 nodes in under 4 1/2 hours.

Give it a try, let me know what you think. I’ll be looking to enhance it in the future.

 

Read Part 1

Read Part 2

Creating A Clustered SQL Install ini File Using PowerShell–Part 2

In Part 1 I showed how the cluster was readied for the SQL install. In this short part 2 I’ll go over the Active Directory configuration components so that everything is ready for us to start performing the install.

 

Create the computer account

The first thing is to create a new Computer object in Active Directory. This will be the virtual name used in the cluster.

 

Once added we need to go back in and edit the properties of the Computer (you’ll need to use Advanced Features for this bit; look under the View menu).

Go to the security tab of the Computer properties and hit the Add button. In the Select Users, Computers, Service Accounts or Groups window hit Object Types and ensure that Computers is checked. The cluster account will be used to manage the Computer resource (in this case CLUSTER1) so we add CLUSTER1$

 

And then give CLUSTER1$ Full Control over the computer object.

 

Finally the computer object needs to be disabled (if it’s not disabled your cluster install will not work correctly and it will fail to come online)
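If you’d rather script this whole piece, here’s a rough sketch using the ActiveDirectory PowerShell module (names match the examples in this series; treat it as a starting point rather than gospel):

Import-Module ActiveDirectory

# Create the SQL virtual computer object, disabled from the start
New-ADComputer -Name "GROO" -Enabled $false

# Grant the cluster account (CLUSTER1$) full control over the new object
$groo = Get-ADComputer "GROO"
$cluster = Get-ADComputer "CLUSTER1"
$acl = Get-Acl "AD:\$($groo.DistinguishedName)"
$rule = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($cluster.SID, "GenericAll", "Allow")
$acl.AddAccessRule($rule)
Set-Acl "AD:\$($groo.DistinguishedName)" $acl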

 

 

Create the service accounts

In the interest of security I’ll be using one account for the SQL Engine and another for the SQL Agent. Quickly create a couple of accounts with nice secure passwords (that of course you will keep somewhere secure, like in a KeePass file).

 

I like to give the accounts good descriptive names. It makes them easier to find later (actually the best way is to have a separate OU for the SQL accounts and for the SQL computer objects as it makes it a lot easier to manage and create Group Policies for them).
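If you want to script the account creation as well, here’s a sketch with New-ADUser (account names match the ini file in part 3; handle the passwords however your security folks prefer):

# Prompt for the passwords rather than embedding them in a script
$enginePwd = Read-Host "Engine account password" -AsSecureString
New-ADUser -Name "Groo_SQLEngine" -SamAccountName "Groo_SQLEngine" -Description "SQL Server engine service account for GROO\PROD" -AccountPassword $enginePwd -Enabled $true

$agentPwd = Read-Host "Agent account password" -AsSecureString
New-ADUser -Name "Groo_SQLAgent" -SamAccountName "Groo_SQLAgent" -Description "SQL Server Agent service account for GROO\PROD" -AccountPassword $agentPwd -Enabled $true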

 

The boring stuff is done

The setup items are complete; we’re now ready to create the ini file… in part 3