Tag: SQLBP

Stop Bad Database Design

Every year that goes by I sit in hope that I won’t see bad database design.

Every year I am disappointed.

 

As an example, here is a table create statement that I saw the other day (table and column names have been changed to protect the innocent):

CREATE TABLE [dbo].BestestTableEVAR(
    Col1 [int] IDENTITY(1,1) NOT NULL,
    Col2 [uniqueidentifier] NULL,
    Col3 [uniqueidentifier] NOT NULL,
    Col4 [smallint] NULL,
    Col5 [smallint] NOT NULL,
    Col6 [bit] NOT NULL,
    Col7 [xml] NULL,
    Col8 [xml] NULL,
    ColA [xml] NULL,
    ColB [xml] NULL,
    ColC [datetime2](2) NULL,
    ColD [datetime2](2) NULL,
    ColE [datetime2](2) NULL,
    ColF [datetime2](2) NULL,
    CONSTRAINT [PK_BestestTableEVAR] PRIMARY KEY CLUSTERED
    (
        Col3 ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY];

 

So what’s wrong with this?

The clustered primary key on this table is a GUID. Is that bad? That’s an unequivocal YES! Read all that Kimberly Tripp (blog|twitter) has to say about GUIDs in database design.

What makes this all the more crazy is that the table has an identity column. That’s a natural clustering key ready and waiting to be used and yet for some reason it’s not.

This table is going to fragment like crazy, it won’t scale, and performance will be hideous. Additionally, thanks to the XML columns this table can’t even be rebuilt online, meaning there’s no way to address the fragmentation or performance without actually taking the table offline, at which point it can’t handle any transactions. That’s a problem for a table on an OLTP system.
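
If you want to see just how bad it gets, a quick look at sys.dm_db_index_physical_stats will show you (a minimal sketch, reusing the example table name from above):

SELECT ips.index_id
    , ips.index_type_desc
    , ips.avg_fragmentation_in_percent
    , ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BestestTableEVAR'), NULL, NULL, 'LIMITED') AS ips;
-- with a GUID clustering key expect avg_fragmentation_in_percent to climb quickly as inserts land on random pages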

 

I would go back and change some things. Let’s say you wanted to keep the table structure the same; that’s fine, but let’s be smart about the keys and indexes.

It would make sense to make the identity column the clustered index (I would also make it the primary key) and then, to ensure uniqueness on Col3, the current primary key, add a unique index.

CREATE TABLE [dbo].BestestTableEVAR(
    Col1 [int] IDENTITY(1,1) NOT NULL,
    Col2 [uniqueidentifier] NULL,
    Col3 [uniqueidentifier] NOT NULL,
    Col4 [smallint] NULL,
    Col5 [smallint] NOT NULL,
    Col6 [bit] NOT NULL,
    Col7 [xml] NULL,
    Col8 [xml] NULL,
    ColA [xml] NULL,
    ColB [xml] NULL,
    ColC [datetime2](2) NULL,
    ColD [datetime2](2) NULL,
    ColE [datetime2](2) NULL,
    ColF [datetime2](2) NULL,
    CONSTRAINT [PK_BestestTableEVAR] PRIMARY KEY CLUSTERED
    (
        Col1 ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY];

 

CREATE UNIQUE NONCLUSTERED INDEX UI_BestestTableEVAR_Col3 ON BestestTableEVAR (Col3);

Sure, we still won’t be able to rebuild the indexes online, but we won’t have the same crazy levels of fragmentation that we would have had before.

 

I know I’ll be seeing a lot of bad design this year, and I know that I’ll be forced to push that bad design into production. That doesn’t stop me from trying to change things, however. Help your devs out; let them know when their design is a problem. Who knows, maybe you’ll change things for the better.

Please Don’t Use Deprecated Data Types

I know that a lot of vendors like to write for the lowest common denominator (i.e. SQL 2000), but really folks, it’s gone too far. I’m sick of cracking open vendor code that’s certified for SQL 2008 and seeing things like IMAGE and TEXT data types. Microsoft deprecated these back when they released SQL 2005 (see http://msdn.microsoft.com/en-US/library/ms143729(v=SQL.90).aspx under Textpointers). Why are you still using them six years later?
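
For anyone still shipping these types, the replacements are straightforward: TEXT becomes VARCHAR(MAX), NTEXT becomes NVARCHAR(MAX) and IMAGE becomes VARBINARY(MAX). A minimal sketch of the change, against a hypothetical vendor table and column:

-- hypothetical table/column names; the pattern is the same for TEXT and IMAGE
ALTER TABLE dbo.SomeVendorTable ALTER COLUMN LongDescription NVARCHAR(MAX) NULL;
-- optional: rewrite the column so existing values move out of the old LOB structure
UPDATE dbo.SomeVendorTable SET LongDescription = LongDescription;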

I bring this up because I’ve come across yet more egregious usage of these data types in vendor code. The vendor in question? Microsoft.

Yes, that’s right, the folks who deprecated these data types six years ago are still using them to a large extent within the ReportServer and ReportServerTempDB databases that support SQL Server Reporting Services. Seriously, Microsoft? Can you please get with the plan and fix this nonsense?

The following query, run against the ReportServer database, will show 14 different tables (31 columns) using a variety of NTEXT and IMAGE data types.

select st.name as TableName, t.name as DataType, sc.name as ColumnName
    from sys.types t
        inner join sys.columns sc
            on t.system_type_id = sc.system_type_id
        inner join sys.tables st
            on sc.object_id = st.object_id
where
    t.name in ('image', 'text', 'ntext')
order by
    st.name, t.name

 

I have filed a Connect item asking Microsoft to fix this. Please go vote for it at https://connect.microsoft.com/SQLServer/feedback/details/714117/ssrs-using-deprecated-data-types-in-its-databases and help us rid the world of this old stuff.

Your “Five Nines” Means Nothing To Me

There’s a lot of talk about High Availability. I know I’m a huge fan of it, in particular clustering (though I know it’s not for all situations, and the changes coming in SQL 2012 with AlwaysOn Availability Groups may mean that traditional clustering is used less and less). There are of course other HA solutions out there, like Log Shipping, Mirroring and SAN replication technologies (sorry folks, I disagree with transactional replication as an HA concept unless you are talking about a downstream reporting infrastructure behind load balancers).

People constantly push to achieve the magical five nines availability number. What does that mean?

Five nines means that your SQL Server must be available 99.999% of the time. This means you can have only 5.26 minutes of downtime a year.
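
That 5.26 minute figure is nothing more than arithmetic: a year’s worth of minutes multiplied by the 0.001% of the time you’re allowed to be down. A quick back-of-the-envelope check:

SELECT 365.25 * 24 * 60 * (1 - 0.99999) AS MaxDowntimeMinutesPerYear; -- ~5.26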

So let’s say that you are running clustering and mirroring; that you have load balancing with peer-to-peer replication; that you are running SAN replication between two datacenters and have log shipping setup ready to go live at the flick of a switch. You have all your hardware ducks in a row and are ready for any emergency that may crop up.

I commend you for all of that.

Now let’s say that your manager is going to bonus you on the downtime that you have. You are the DBA, a single link in the chain, and your entire bonus is going to be based on the number of nines that you can get over the year.

Time to sit down with your manager and ask some questions:

 

  • Does this five nines bonus plan include maintenance time?
  • How are you going to measure uptime?
  • What tools are going to be used to measure it?
  • What are the tolerances on those tools?
  • What about upstream items?

 

 Let’s take a look at these one at a time.

 

Is maintenance time included?

There may be bugs in the database code for the application that would require the app to be offline while changes are made. You may want to patch SQL Server with the latest service pack in order to maintain supportability. You may want to test failover and DR. If none of these things are excluded from the measurement then chances are you’ll have trouble making three nines (8.76 hours of downtime in a year), and no chance at all of making five nines.

 

How will uptime be measured?

Is uptime calculated based upon SQL Server being up and running? Or maybe on whether there are successful connections? How about certain queries returning results? Or a website returning a page within a certain time period?

So let’s say you are running a query. What if that query gets blocked or deadlocked? SQL is up and processing other transactions but that single query is having a problem.

Successful connections? Great, but that doesn’t mean the database they’re actually going to use is available. The same issue applies to checking whether SQL Server is up. Heck, are you measuring server uptime or database uptime here?
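
Even the simplest measurement, asking SQL Server itself whether the database is online, is only part of the picture. A minimal sketch (the database name here is just an example):

-- reports the state SQL Server believes the database is in;
-- says nothing about blocking, the network, or whether the application can actually use it
SELECT name, state_desc FROM sys.databases WHERE name = 'MyAppDB';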

Website returning a page? What if there’s an issue with the web server? Or a networking problem between there and the SQL database? 

 

What tool will you use?

Are you going to buy something off the shelf? Then who’s going to spend the time configuring it for your particular environment? Isn’t that time better spent on solidifying things, on performance analysis, on query tuning?

You’re going to write your own tool? Great, how are you going to do that, and with what technologies? How is the information going to be reported upwards? Who has the last say on possible exceptions? You have the time to write this yourself? Awesome, I’m glad that you do.

 

What is your tool tolerance?

Tolerance? Yup. Let’s say that you’ve figured out all the other stuff and have agreements in place for everything and got a tool that will do the job. How does that tool work?

Let’s say it runs a query and fails. It says that the database is down. When will it next attempt to query the database? In 15 minutes? If that’s the case then a single bad poll from your tool just put you outside of that five-nines criterion and your bonus has been dinged.

Or let’s say that there was an actual problem that happened 30 seconds after the last poll, but you resolved it before the next one. Everything was down for 14 minutes but your tool didn’t capture that.

Some serious flaws there.

 

Upstream items

Are you the Windows, network, firewall, and datacenter admin? Are you responsible for generating the electricity that comes into the building? How about the HVAC system, is that yours? Are you checking the diesel levels in your backup generators weekly?

Each one of these upstream items can and will affect people’s ability to access your SQL Server and its databases. Unless you are responsible for them all, how can you be held accountable for perceived uptime on SQL Server?

SQL can be up, able to process data, with all the databases available, but without a functioning network nobody would know it, and your monitoring wouldn’t pick it up.

 

 

All of these things add up to not being able to accurately measure the mythical five-nines uptime creature. If we can’t get those kinds of accurate measurements, and I can’t own those upstream processes, then why should I be held accountable for a number?

Here’s a better idea. Let’s look at the processes; let’s look at whether or not things are available from a user’s perspective. Let’s gauge customer interactions, build outs, deployments. Let’s track tickets and issues. Let’s talk about the things we need to do to make things better. Let’s not go pulling a number out of the ether; it does nobody any favors.

Optimizing Database Restores

You can Google/Bing/AltaVista and get a lot of results for speeding up your backups. Those results will include using third-party tools, performing file and filegroup backups, differential backups and the like. Some of the more technical blogs will talk about adjusting some of the backup parameters to improve throughput.

There’s really not much written (that I’ve found, please add comments with links to posts you’ve found) that talks about tuning database restores.

I started messing with some restore parameters last week and found that it’s possible to make some dramatic performance improvements.

 

How this started

We were performing a data center migration at work. As a part of this migration we were going from SQL 2000 to SQL 2008R2 for some of our databases.

The database, which we’ll call MyDB, is 620GB in size. There are 5 data files and a single log file. Current backups were performed using Snap technology, which wasn’t going to help us with the migration as we needed to get the data over and set up log shipping, so we needed to perform a regular backup.

Third party tools were not an option as this SQL instance was running SP3a and we weren’t able to find anything that would work below SP4.

A full backup was taken to local disk; it took a little over 9 hours (server and disk limitations prevented any work to try and improve that).

The backup file was transferred to a USB drive, taken to the new datacenter and copied over to a CIFS share where it would be available for performing restores.

 

Hardware/software setup

It’s going to help to have an idea of the configuration that’s being used here.

Hardware:

  • Server: HP DL585 G7 (clustered)
  • CPU: 4 x AMD 6176SE (2,294 MHz, 12 cores) for a total of 48 cores
  • RAM: 128GB
  • Storage: Netapp SAN with WAFL filesystem – Fiber connected with 2 x 4Gb HBAs, MPIO configured for least queue depth
  • Network: 10Gb/s (HP NC375i) – 2 connections, configured for failover only, no load balancing
  • One LUN defined (as mount points) for each file in the database to be restored

 

Software:

  • SQL 2008R2 CU7 Enterprise Edition
  • Max server memory: 100GB
  • Affinity masks not set
  • Max degree of parallelism: 6
  • 4 x tempdb files

 

 

Basic restore

The initial restore was performed with default options, no optimization was performed.

RESTORE DATABASE MyDB FROM DISK = '\\CIFSDevice\Backup\SQLBackups\ProductionForRestore.bak'
WITH MOVE 'MyDB' TO 'L:\MyDB_Data\MSSQL\Data\MyDB_Data.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , NORECOVERY

 

Restore time was 2:48 (2 hours 48 minutes).

 

Optimization techniques

I knew that adjusting the MaxTransferSize and BufferCount settings could provide significant improvements in backup performance (go read a fantastic SQLCat article on this) and figured that the same would be true for performing restores.

Before I could start changing values I needed to know what values were being used with the basic restore. To do this I enabled trace flags 3213 (trace SQL activity during a backup/restore) & 3605 (output trace activity to the SQL error log).

DBCC TRACEON (3213, -1)
DBCC TRACEON (3605, -1)

 

With the flags enabled I performed another restore which provided the following information:

(Screenshot: SQL Server error log output from the restore, showing the BufferCount and MaxTransferSize values used by default.)

So by default we’re running with a BufferCount of 6 and a MaxTransferSize of 1,024 KB.

 

The MaxTransferSize value is the largest unit of transfer to be used between SQL Server and the backup media. This is specified in bytes and can range in value from 65536 (64 KB) to 4194304 (4 MB).

The BufferCount specifies the number of IO buffers to be used for the restore operation. Any positive number can be used for this; however, you have to be careful with larger numbers, as you could potentially cause out of memory errors.
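
As a rough rule of thumb, the memory consumed by those buffers is BufferCount multiplied by MaxTransferSize, so it’s worth doing the arithmetic before getting aggressive. A quick sketch using the values from the restores below:

-- 75 buffers x 4 MB transfers = roughly 300 MB of buffer memory for the restore
SELECT 75 * 4194304 / 1048576 AS ApproxBufferMemoryMB; -- 300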

 

Changing the MaxTransferSize

Not having any obvious hardware restrictions, I decided that, as the restore was being performed across the network, adjusting the MaxTransferSize might provide the biggest benefit.

Feeling confident, I just bumped it to the max and performed the restore.

RESTORE DATABASE MyDB FROM DISK = '\\CIFSDevice\Backup\SQLBackups\ProductionForRestore.bak'
WITH MOVE 'MyDB' TO 'L:\MyDB_Data\MSSQL\Data\MyDB_Data.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , NORECOVERY
    , MAXTRANSFERSIZE = 4194304

 

This time the restore took 45 minutes, a reduction of just over 2 hours from the default.

 

Changing the BufferCount

Having already reduced the restore time by over 70%, I wanted to see if I could get a little bit more performance from it, so I made an adjustment to the BufferCount value and set it to 75.

RESTORE DATABASE MyDB FROM DISK = '\\CIFSDevice\Backup\SQLBackups\ProductionForRestore.bak'
WITH MOVE 'MyDB' TO 'L:\MyDB_Data\MSSQL\Data\MyDB_Data.mdf'
    , MOVE 'MyDB2' TO 'L:\MyDB_Data2\MSSQL\Data\MyDB_Data2.ndf'
    , MOVE 'MyDB3' TO 'L:\MyDB_Data3\MSSQL\Data\MyDB_Data3.ndf'
    , MOVE 'MyDB4' TO 'L:\MyDB_Data4\MSSQL\Data\MyDB_Data4.ndf'
    , MOVE 'MyDB5' TO 'L:\MyDB_Data5\MSSQL\Data\MyDB_Data5.ndf'
    , MOVE 'MyDB_Log' TO 'L:\MyDB_Logs\MSSQL\Data\MyDB_Log.ldf'
    , STATS = 5
    , NORECOVERY
    , MAXTRANSFERSIZE = 4194304
    , BUFFERCOUNT = 75

 

This restore took a little less time, 38 minutes, shaving another 7 minutes off the already shortened time. At this point I was down about 78% from the regular restore command.

I tried pushing the BufferCount to 100 to see if I could trim things further but found no appreciable difference and so left it at 75.

 

Comparing backup types

As a part of the move, log shipping was to be set up to offload reporting for users, and so I needed to perform a restore of the database from the new SQL 2008R2 server to another with exactly the same specifications.

The native SQL 2000 backup file was 520GB in size; with no optimizations it restored in 2 hours 48 minutes, and optimized it restored in just 38 minutes (not bad for a network restore).

The compressed SQL 2008R2 backup file was just 121GB in size. An optimized restore of this database took just 23 minutes. That’s an 86% saving over the unoptimized SQL 2000 based restore.

 

Summary

There are times when you are not able to perform partial restores, or when you need to restore entire databases to a new environment in as short a time as possible. Don’t always accept the defaults that SQL Server gives you in these situations. There are things that you can do to improve the performance of your restores (and your backups). It’s worth spending some time now, working in your own environment, to optimize your restores.

A little time playing with these things now could be the difference between your database being down for 20 minutes as opposed to 3 hours.

How much is 2 1/2 hours worth to your business?

 

 

Such fun.

Be A Courteous Blogger

Today I received an email; nothing unusual about that. This email happened to be from a SQL blogger. It’s one of those newsletter-type things, and it even said at the bottom that I had opted in to receiving it. The only thing is that I, at no point, signed up to receive a newsletter. In fact this blog is not one I have visited other than on a couple of occasions. I believe I may have commented on one of the posts I saw, and that is how the blog owner got my email address.

My guess is that the blogger in question went through the WordPress database (yes WordPress site) and pulled emails for everyone that had posted comments and sent out this unsolicited email.

That is absolutely the wrong way to do something like this. A strongly worded email is on its way to the blog owner. Small mercies…they included an unsubscribe link on the email, so hopefully I won’t get another.

 

The right way to have done this would have been to do the exact same thing that Paul Randal (blog|twitter), Robert Davis (blog|twitter) and Brent Ozar PLF (blog|twitter) have done: let people know you have a newsletter and give folks the opportunity to opt in and receive it.

All you bloggers out there, please do the right thing.

T-SQL Tuesday #19–What A Disaster

Allen Kinsel (blog|twitter) is running this month’s T-SQL Tuesday and wanted to know about preparing for or recovering from a disaster. I thought this might be a good opportunity to tell a little story of how a disaster sucked up around three weeks of my life a couple of years ago.

 

It was a normal day, I was sitting and quietly going through a small staging release when my director asked me to come into his office. Instantly I started wondering “what did I do?”

Not being able to come up with anything egregious, I settled the stomach gurgles and went wandering in. When I walked in and was offered a seat I got nervous again; that was not a good sign with this director. Now I was really curious as to what could be up.

 

What was up?

“Are you working on anything big right now?” were the words that kicked off something that changed my views on a lot of things.

“Nothing that can’t be put on hold, what’s up?”

“Did you hear about the <redacted> outage?”

“Sure, everyone here has heard about that, it’s a seriously messed up situation.”

“How do you feel about being a part of the solution?”

My long nurtured DBA sense of responsibility kicked in at this point and I heard myself saying “sure”

“Great, I’ll shoot an email off to the admin, you’ll be on a flight to <redacted> first thing in the morning.”

 

I quickly rearranged all of my plans for the next few days and resisted giving myself a facepalm until I was well clear of the office.

 

Up, up and away

The next morning I hopped on a plane and by mid-afternoon was at my destination. When I walked in the door it was all hands to the pump: people were rushing around like crazy and the smell of desperation was in the air.

I was brought into a room and the situation explained to me…

A couple of days prior there had been an attempt at a microcode upgrade on a SAN. The upgrade failed, crashed the SAN and corrupted all of the data. No databases could be attached or started, no files were accessible, there was no filesystem, nothing. It was bad.

I asked at what point the decision was going to be made to scratch it and go to a backup (figuring if there had been a DR site for this it would have been in place already).

 

Backups? We don’t need no stinking backups

Yup, you guessed it, there were no backups. I asked when the last backup had been taken; someone stated that they thought one had been taken 9 months before, but they couldn’t be sure, and they didn’t know of anyone who could get into that datacenter to check.

The reason for no backup? Apparently it was taking too long to perform the backup and so the decision was made to just turn off the backup process.

I came to find out later no effort had been made to tune the backup process or to attempt alternative backup methods.

I find it unfathomable that there were no backups on a critical system that supported > 1 million users and had extremely high visibility. You would have thought that DR would have been priority #1, but that was not the case.

 

Sleepless nights

While there was little that could be done with the dead storage, there was a lot of work around what could be done as a mitigation strategy: what could be done to restore service while other work was done in an attempt to recover some amount of data.

A new SAN was brought up, database installations performed, a change management process was put in place (one had not existed before) and there was a lot of discussion around getting backups working immediately.

I didn’t leave the building for the first 36 hours. Thankfully the company brought in three square meals a day for everyone that was there to ensure that we at least got fed. People were sleeping on the floor in offices just to try and get a couple of hours’ sleep so they could remain functional.

Restoration of service was a slow arduous process as great care had to be taken with the order of enabling certain components.

Slowly things got back to normal, hourly calls with the VP dropped to every four hours. I was able to sleep in a bed and get some rest and a change of clothes (at one point I told the VP I was running out of clothes and asked how much longer I was going to be there, his response was that I should probably go and expense some underwear).

 

Getting back some data

A little over two weeks after everything went kaboom we started getting word of some data recovery. A third party company had been brought in and had been performing a block-by-block recovery of the storage from the bad SAN. They were not able to pull files or anything that simple; they were just able to pull raw data at the block level. With a great deal of effort they managed to recover 90% of the data, which then had to somehow be validated and reconciled with the data now in the system.

Scripts abounded, levels of confidence in the data had to be decided upon and the risks ascertained for data restoration. That to me was a very scary concept. I’m glad that the decision on that one was made well above my pay grade.

After about three weeks I was able to go home. My work (and that of everyone else who was sent down to work on the recovery attempt) was acknowledged in a company meeting a couple of months later.

 

Takeaways

This is a real world example of a disaster. If there had been a backup a great many good people would not have been stuck away from home for three weeks.

It gave me a much greater appreciation for what can happen in a disaster. Don’t get caught out, make sure your backups are good and you have a strong business continuity strategy.

Speakers Third Rule

You’ve mastered the first rule and have submitted a session to present at a SQLSaturday, a local user group or at your office. Excellent. Your presentation is written, all the details are there…slides, demos, and you’ve got your patter. You’ve followed the second rule and the presentation is all you. What’s next?

Practice makes perfect

 

 

Time to run through your presentation multiple times so that you know it well and can give it without any problems.

You might have a demo-heavy session and need to practice those demos over and over, getting them just perfect. It could be a presentation that’s all theory with multiple slides; run through them, know what’s on each, understand each statement that you have put out there. Do you smell BACON?

It really doesn’t matter what kind of session it is, just practice it over and over again. Drive your dog crazy by holding it as a captive audience while you present, a strip of bacon held just out of reach.

Don’t worry if you don’t have a dog; a cat, spouse, girlfriend, boyfriend or fluffy bunny rabbit will work just as well.

The more you run through your presentation the more comfortable you will feel with it. This comfort is going to be vitally important for when you first get up in front of that “live studio audience”.

When those nerves kick in (and they will) you’ll have the mental muscle memory to carry you through. After all, you practiced it until presenting it was second nature, so there’s not going to be anything to worry about come presentation time…right?

Speakers Second Rule

After writing a post for Un-SQL Friday on the first rule of speaking I decided that it would make quite a good series of posts, so here’s my second rule of speaking:

Find your own voice

 

What do I mean by this?

As an analogy, when I first started out blogging I tried to emulate other writers in the community. In doing so I found it very difficult to get across what I was trying to say. The message was lost in my trying to be someone I wasn’t. In addition I found it a real chore to write anything and pretty soon gave up posting.

My second attempt at blogging has been a great deal more successful, at least it feels that way. Instead of trying to copy the style of someone else I am writing in my own voice. Whatever sentence structure I come up with is what I use. I don’t find it a chore to get words down, I feel like my message gets across, and I am a great deal happier with the things that I publish.

 

Speaking is no different. Sure, you could try to emulate Buck Woody (blog|twitter), Brent Ozar (blog|twitter) or Sean McCown (blog|twitter), but I wouldn’t recommend it. Each one of those speakers is very different in the way that they present. Each has a very clear way of putting the message across, and each one does so in a way that captures your attention. They have built their styles from years of presenting in their own voice. They are all very comfortable in how they present, and they do this by being themselves.

Taking a lesson from my blogging experiences I decided that my presentations would be in my voice. I would present them in a way that I felt comfortable. By using my voice I knew I would never be outside of my comfort zone. Think about trying to talk like someone else, then add in a technical aspect. Now imagine fifty people are watching you try to be someone you’re not.

Stomach churning yet?

 

Presenting is going to be challenge enough for you at first. Go in being yourself, it’s worth it.

Un-SQL Friday #004–Speakers First Rule

I may be a day late and a dollar short for Un-SQL Friday #004: Speaker Lessons Learned but still felt I had something to share.

One of my favorite book series is The Sword of Truth by Terry Goodkind. The first of the 11-book series is called Wizard’s First Rule. It’s one of those oft-rehashed tales of someone ordinary who turns out to be anything but. One of the things passed along in it is the first of a series of rules about magic. It made me think of speaking and the rules around that.

There are numerous rules (sure, you could call them guidelines or pieces of advice but we’re playing semantics here so go with it).

 

So here’s rule #1…

Don’t be afraid, go for it!

 

 

Such a simple rule. So often I read tweets from folks who talk about wanting to be able to present and wishing they could do so. There are people who really want to speak at the PASS Summit but haven’t submitted a session because of fear.

Fear of public speaking, making a mistake, things going wrong and rejection are all valid reasons why people get scared to submit their first session. It’s why folks don’t submit for their user groups or local SQLSaturday events. It’s why you may even be too nervous to present on a subject to team members at your company.

I get that, I’ve been there. You know what though? I decided I wanted to present, and that ultimately I wanted to present at the Summit. To do that I had to get started. I chose my first event and submitted a session. Things went well the first time, not so well the second. I’ve had (what felt like) a complete disaster in a session and yet I’m still here, still standing, and I’ve submitted to present at this year’s Summit. Whether or not I get chosen doesn’t matter at this point; the chief thing is that I tried (see getting over the fear of rejection).

I managed to conquer my fear, and I know that you can do the same. What’s the worst that can happen?

Be Careful Installing Windows Features

Did you know that installing certain Windows features could impact the way that SQL gets installed on your server? Me either.

When performing some installs recently I came across a problem whereby I was not able to change the shared tools location; the option was greyed out. This didn’t make any sense to me. The install was happening on a new server, and there had been no previous SQL installs on the machine, so no previously installed components should have been causing this kind of problem.

 

Digging through the filesystem I found a C:\Program Files\Microsoft SQL Server folder. That didn’t make any sense to me at all; I’d not installed anything.

I could not even find anything that might be using this folder when looking through the installed software and yet I couldn’t delete the folder as there were files in use.

After much head scratching I finally found the problem.

 

I had installed the Windows System Resource Manager feature to help manage resources, which is very useful when running more than one instance of SQL on a machine, or if you have multiple software installs and want to keep control of your CPU and memory. This feature uses (and so automatically installs) another feature called the Windows Internal Database, which is described as a relational data store.

 

By the looks of things the Windows Internal Database is based around the SQL 2005 engine, and this gets installed into C:\Program Files\Microsoft SQL Server.

When performing the new SQL install it went out and identified that SQL components were already installed and forced installation of the shared tools into that same location.

 

After removing the Windows System Resource Manager and Windows Internal Database features I was able to move the shared tools install to another location.

Be very aware of what features and roles might be installed on your machines; it might bite you when you least expect it.