“SP_WHO2 no more. DMVs are complex. SP_WHOISACTIVE gives me everything easily!”
I gave a thought to tagging some people but I’m sure those folks are way ahead of me on this already.
This weekend I had the pleasure of attending and presenting at SQLSaturday #67 and boy do these folks in Chicago know how to put together an event!
I flew in Friday afternoon and after dropping my bag off at the hotel I headed down to Devry to assist in the prep work for the next morning. When I arrived things were already well in hand and a group of volunteers were hard at work stuffing swag bags. Finishing that led to collating raffle tickets and name badges so that everything was ready for the attendees' arrival Saturday.
If you are planning on attending a SQLSaturday event, be it local or remote, please volunteer to assist in the setup. Not only can these events not happen without someone like you being there to help get things ready, but you also get to meet some really cool community people and chat with them. It’s very rewarding and you have no excuse not to.
Once everything was prepped and ready we headed over to a local eatery where SQL Sentry (of the awesome and free Plan Explorer tool) opened up their wallet and paid for the speaker dinner. Please note, this was the first (of many) instances where I looked around the room and inside my head the following played….
Today Is Saturday, Wear A Smile.
As a kid one of my favorite shows was on ITV, it was a hilarious, pie throwing, dog spitting, fly dying couple of hours on a Saturday morning. I loved it. It used to provide that kind of inane grin that you just couldn’t wipe off your face, it was just so enjoyable. That pretty much sums up this event.
I was there nice and early, ready to help if there were any last minute things that needed doing but it was all in hand. I got there just as the bagels and donuts showed up. It was just luck it happened that way, really (oh and it was so good to have Dunkin Donuts again, we don’t get those here out in the PNW).
Attendees started to show up in droves and it started to get crowded pretty quickly so I escaped to the relative quiet of the speaker room.
There was a scary amount of knowledge in that room. Take a look at the session list and you’ll see what I mean. I think that pretty much any question anybody could have come up with as regards SQL Server could have been answered by the combined smarts in there. Did I feel out of place? Heck yeah. That is my issue though because none of the presenters in there were ever anything but awesome. You never get looked down on by these folks, they just treat you like a peer, which I think makes it all the more humbling.
Rather than attend one of the first sessions of the day I decided to put my demos through their paces.
A week before the event I decided to change things up on my session and make the demos show their stuff a little better. This led me to go out and buy a 128GB SSD drive and 8GB of RAM for my laptop, and to reinstall Windows. Then I built 4 shiny new Virtual Machines including a DC, iSCSI targets, a cluster and 3 instances of SQL Server.
I'd had some difficulties getting a mount point working in my cluster before the event, but that morning I was able to get it plugged in and working (which probably doesn't mean much when it comes to the presentation, however I was glad to have that functionality show in the demo). I tested all the demos and felt pretty confident that I was as prepared as I was going to get.
The presentation materials are available for download from http://www.sqlsaturday.com/viewsession.aspx?sat=67&sessionid=3183
After a great lunch I found myself watching Jeremiah Peschka (blog|twitter) talking refactoring. I love his presentations, they flow so well and his slides are artistic and just a punctuation mark to what he is saying. I want to be able to present like that when I grow up.
I ended up missing the second round of afternoon sessions as I was talking clusters with Allan Hirt (blog|twitter). Yeah, I missed out on watching a great session, but it's not often you have the chance to sit and talk face-to-face with a clustering guru; I considered that hour alone worth the weekend.
With the sessions over it was raffle time. Prizes were won and I got a $100 Amazon gift card. Quality!
The day officially over, things got more crazy. Food was eaten, beverages were consumed, karaoke was sung and talking was had until the wee hours of the morning. A really fun night to wind up an awesome day. I even managed to get a couple of hours sleep before heading for the airport and my flight home on Sunday.
Christina’s session was on SQL internals. This was her first presentation and it was one on a highly difficult and complicated subject. She did a fantastic job, starting things off with a story and tying it back into the SQL internals. Unfortunately there was one highly obnoxious person in the audience who, for whatever reason, decided that he was not going to agree with anything Christina had to say. He openly challenged her and other people multiple times (and it’s not like he was correct in what he was saying).
It was incredibly disrespectful to someone who was freely giving their time to try and impart knowledge to others. On top of that it took away from the time of the other attendees. At the end of the session more than one person commiserated with Christina on the actions of that one person and told her she did a fantastic job.
If you are attending a session show respect for the person speaking. Even if you think that they are saying something wrong don’t openly challenge them. If you can’t keep yourself from berating the presenter then leave the room and speak to them afterwards in private.
SQLSaturday organizers, I would really like to see a buddy system be implemented. While there are some very experienced speakers at these events, there are those that are new to speaking. I think that everyone who is a first time presenter (at the very least) should have an experienced speaker with them in the room just in case they run into this kind of situation. They might not need to interject, but the knowledge that they are there and have the speaker's back would mean a great deal. A nod of the head just to reinforce the speaker's confidence at a key moment might well mean more than standing up and taking someone to task.
When someone learns to walk a tightrope they have a safety net. Let's get one in place for new speakers; we don't want some callous comment or malicious person to deprive the community of a great speaker.
As I’ve said this whole event was awesome and I want to call out the team that put it all together:
I can’t provide any answers to that without first explaining:
I attended my first PASS Summit back in 2008. I'm far from a social butterfly and I didn't know anybody when I got to the Summit; I walked around for a week and left having learned a great deal but still knowing nobody.
Funny thing though, attending that Summit is what got me started with being involved in the community at large. While there I saw some signs touting the benefits of Virtual SIGs and went over to ask about them. Blythe Morrow (blog|twitter) helped me get signed up for the DBA one and explained to me how useful this whole Twitter thing could be. Not being captain social I wasn’t sure I’d ever use it but signed up anyway. Nothing to lose right?
Over time I attended a few of the virtual groups and started following folks on Twitter. Then I started interacting. I got to know people; learned a huge amount; started blogging; attended a SQL Saturday; presented at a SQL Saturday; attended the local user group frequently; volunteered for and went to the 2010 Summit.
What was different about the latest Summit? I went in knowing a few folks and came away knowing a lot more.
For me, PASS was about getting started in the community. Now that PASS owns the SQL Saturday brand it's even more about getting folks started in the community than ever before.
I don’t go to PASS for daily community involvement. It’s not the right place, that’s the niche that Twitter fills. I don’t go there for technical content, that’s why I have 115 RSS subscriptions in Google Reader.
The PASS website is not a place I visit often. Fact is, I rarely go to the PASS site; for the most part there's nothing there that I need, other than information about the Summit or an upcoming event such as 24 Hours Of PASS.
The nominated Board of Directors have a tough job and I think for the most part they do it very well. Treading that line between community and company cannot be easy. I know that I’ll never sit on that board and so have to admire the dedication of those that do. Saying that, the Board needs to take a great deal of care with the nominations after the controversy of last year. How about things being a little less political and more straight up?
The PASS Summit is the keystone that holds everything else in place. Keep on keeping on with that. The one thing I would like to see is something that's already in the works: moving the Summit every couple of years. While I live in WA, making it simple for me to get to the Summit, I know that's not the case for everyone. There has been a great deal of talk about this. I think it's good to move it every once in a while.
The first SQLRally is due to take place this year. It’s being billed as a regional event although it’s more like a mini-summit, I like this idea. Do it again next year but be sure to move it around the country. Dallas, Chicago and Atlanta will all hopefully be in serious consideration for next year. It might also be worth considering having the SQLRally in Seattle those years that the Summit is moved.
I'd look at possibly changing one thing for SQL Saturdays. Maybe a small fund could be set up to ensure that a highly respected speaker or two could make it to each event. This would cover their travel expenses and maybe put a few bucks in their pocket. Other than that I would like PASS to really keep their distance from these events. A non-intervention pact should be put in place to allow each event to continue to run independently. I believe without this independence something new will spring up and take its place.
The question has been asked by Sean McCown (blog|twitter): are mentors dead? I don't want to believe that they are. PASS should get involved and start an official mentorship program. I'm sure there are a lot of experienced DBAs, developers and BI folks out there who would gladly help bring along someone still in their early days of working with SQL. Why not bring these folks together? Offer up some kind of support program to help facilitate the contact between folks, and continue that relationship by seeking feedback from both parties.
Don't change anything for 24 Hours of PASS. It's great. Just be sure not to exclude speakers; give the community a chance to vote on every submitted session. Thomas LaRock (blog|twitter) has taken on the mantle of owning 24HOP. His idea of having only women speakers for the March 2010 24HOP is great; it really helps promote diversity in the speaker pool, which takes me to my final point.
There are some great speakers out there, PASS needs to help groom the next generation. This is one of the areas I think that could be huge for the community. It would be great if PASS could introduce some kind of speaker program that included feedback, evaluations and training. It would really help take people from SQLSaturday/local user group to PASS Summit speaker and beyond.
PASS Board, keep on keeping on. Overall you’re doing a great job. Keep getting people involved with volunteer efforts, run the Summit, Rally and SQL Saturdays. Don’t try and move into areas already well established with other companies, websites or tools.
Sometimes you come across quirkiness when playing with SQL Server. Once in a while those things are related to security. This happens to be one of those times.
I was provided with a script by the dev team to create a new proc and grant permissions on that proc to a Windows login that already had access to SQL. No big deal. I executed the script, received no errors and happily went about the rest of my day. A couple of hours later I got an email from the developer saying that when they attempted to call the proc they were getting an error stating that the server principal was not able to access the database under the current security context.
After much troubleshooting to no avail I ended up scripting out all the user's permissions, dropping and recreating the user, then applying those permissions back again. Everything worked and the developers went away happy. I myself was not happy, and so restored an old copy of the database to a test server to try and find out what the deal was. I found something interesting. Join me in a short but fascinating journey into the world of the Windows login…
The following is based upon test cases in SQL 2008 and 2008 R2. I have not tested on SQL 2005 and so do not know if the behavior might be different.
Create yourself a shiny new Windows login on your machine. Mine's called SQL1 (because I'm original like that). Now add that as a login within SQL.
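Something like the following does it; YOURMACHINE is a placeholder for your computer (or domain) name:

```sql
-- Assumes a Windows account named SQL1 already exists on the box
USE master;
GO
CREATE LOGIN [YOURMACHINE\SQL1] FROM WINDOWS;
GO
```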
Because validation is good, it's worth double-checking that the login exists.
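A quick query against sys.server_principals does the trick (YOURMACHINE again being a stand-in):

```sql
SELECT name, type_desc, create_date
FROM sys.server_principals
WHERE name = 'YOURMACHINE\SQL1';
```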
Looks like we’re good on that front.
Now let’s create a dummy database for our security test and add a table and proc to that.
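Something along these lines; the database, table and column names here are just my stand-ins (the proc is the ShowData proc referenced later):

```sql
CREATE DATABASE SecurityTest;
GO
USE SecurityTest;
GO
-- A table with a couple of rows so the proc has something to return
CREATE TABLE dbo.SomeData (Id INT IDENTITY(1,1), Val VARCHAR(50));
INSERT INTO dbo.SomeData (Val) VALUES ('Hello'), ('World');
GO
CREATE PROCEDURE dbo.ShowData
AS
    SELECT Id, Val FROM dbo.SomeData;
GO
```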
Feel free to execute the proc at this point just to be sure that you get results.
We’ve not added any users to this database, and most certainly that new login we just created isn’t going to be there. In the interests of being sure let’s check real quick.
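A look at sys.database_principals shows who's in the database:

```sql
USE SecurityTest;
GO
SELECT name, type_desc
FROM sys.database_principals
WHERE type IN ('S', 'U', 'G');  -- SQL users, Windows users, Windows groups
```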
Nope, no user there.
If we try to grant execute on the ShowData proc to that login it’s going to fail because that user doesn’t exist in the database, right? I know for sure that’s what happens with SQL logins, after all I’ve seen the “Cannot find the user ‘<username>’, because it does not exist or you do not have permission” error on countless occasions.
Let’s try it anyway and see what happens
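The grant itself (using the placeholder machine name from earlier):

```sql
USE SecurityTest;
GO
GRANT EXECUTE ON dbo.ShowData TO [YOURMACHINE\SQL1];
GO
```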
Yup, there’s the error…wait, what? There’s no error, it was successful. That must be a mistake surely?
Well let’s check the database principals again
So it appears as though a user has been created for the login without us needing to specify it to do so. In actual fact it’s gone one step further and also created a schema.
Let’s try to execute the proc under the context of that user
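Impersonation makes this easy to test without logging out (a sysadmin can impersonate any login):

```sql
EXECUTE AS LOGIN = 'YOURMACHINE\SQL1';
EXEC SecurityTest.dbo.ShowData;
REVERT;
```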
Hey, that’s the error that the devs were reporting they were having. This makes no sense though. We granted permissions on the proc and there’s a user there and everything, what gives?
So there’s a user and there’s permissions on the proc but we get an error. Most strange. Well, actually I know we granted permissions on the proc, but did they actually make it there? We can use sys.database_permissions to double check this. In this case we’ll check permissions for all the Windows users in the database.
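A sketch of that check (I include SQL users here too so the dbo row shows up alongside the Windows users):

```sql
USE SecurityTest;
GO
SELECT dp.name,
       perm.class_desc,
       perm.permission_name,
       perm.state_desc,
       OBJECT_NAME(perm.major_id) AS object_name
FROM sys.database_permissions AS perm
JOIN sys.database_principals AS dp
    ON perm.grantee_principal_id = dp.principal_id
WHERE dp.type IN ('S', 'U');  -- SQL users and Windows users
```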
Yup, the permissions are definitely there: EXECUTE on ShowData. So that's not an issue.
Note, there’s another row in the table showing CONNECT permissions for dbo. This permission is missing for our login. Interesting. Let’s add that for our user.
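Granting CONNECT is a one-liner:

```sql
USE SecurityTest;
GO
GRANT CONNECT TO [YOURMACHINE\SQL1];
GO
```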
Running the permissions script again now gives 3 rows.
Now if we attempt to run the proc under the context of the login.
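…it succeeds. Using the same impersonation trick:

```sql
EXECUTE AS LOGIN = 'YOURMACHINE\SQL1';
EXEC SecurityTest.dbo.ShowData;  -- returns rows this time
REVERT;
```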
By attempting to assign permissions on an object in a database to a Windows login, a schema and user are automatically created. However, the login is not granted the right to connect to the database in order to be able to utilize those permissions. We can easily grant the CONNECT permission that the login requires in order to use the already assigned permissions.
This is a very strange behavior (especially given that SQL logins do not work the same way). I’m sure that there is reason behind this and use cases, even though I’ve not been able to find any documentation describing this.
If you’ve run across this before, have use cases or know of documentation describing the behavior please let me know in the comments and I’ll update the post.
I had some code come through the other day from a developer, it was designed to perform an insert and then return the inserted data back to the client for display. This kind of code is probably seen very frequently out there in the real world.
As we'll be playing around a little, let's throw in a small table just for demonstration purposes:
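A minimal version might look like this; the column names are my own stand-ins (only the OutputTesting table name itself comes from later in the post):

```sql
CREATE TABLE dbo.OutputTesting (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    FirstName VARCHAR(50) NOT NULL,
    Amount INT NOT NULL
);
```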
Here was the developer code that came through:
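It went something along these lines (reconstructed against the sample table; the specific columns and values are mine):

```sql
-- Insert, then re-query the table using SCOPE_IDENTITY()
BEGIN TRAN;

INSERT INTO dbo.OutputTesting (FirstName, Amount)
VALUES ('Nic', 100);

SELECT Id, FirstName, Amount
FROM dbo.OutputTesting
WHERE Id = SCOPE_IDENTITY();

COMMIT TRAN;
```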
The first thing of note here is a complete lack of error handling in the statement. Sure we have a begin and commit tran but if there’s a problem with the insert we just don’t deal with it and a transaction will be left open (not to mention nothing being returned to the client).
The second thing to notice is how this would return data back to the client. We would actually perform a second query against the table, using SCOPE_IDENTITY() in our where clause. SCOPE_IDENTITY() returns the last identity value inserted within the current session and current scope (meaning that it won't return the value for any table that might have had an insert due to a trigger).
This kind of query was heavily used back in the 2000 days but now there is a much better way to do this. Enter…
Whenever an insert, update, delete or merge is performed you can use OUTPUT to return back the impacted data.
Usage is very simple, just plop the OUTPUT right in the middle there and you are done. So taking the initial example (and for now ignoring the error aspects) we can simply rewrite it as the following:
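Using the same sample table (columns again being my stand-ins):

```sql
INSERT INTO dbo.OutputTesting (FirstName, Amount)
OUTPUT inserted.Id, inserted.FirstName, inserted.Amount
VALUES ('Nic', 100);
```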
We got back results exactly the same way as we would have in the old school fashion and did so without having to perform the extra read against the OutputTesting table.
Nothing’s perfect right? The thing with OUTPUT is that it will attempt to return results even if your statement fails. Let’s rewrite the query slightly and use the TRY…CATCH syntax to handle errors:
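Something like this:

```sql
BEGIN TRY
    BEGIN TRAN;

    INSERT INTO dbo.OutputTesting (FirstName, Amount)
    OUTPUT inserted.Id, inserted.FirstName, inserted.Amount
    VALUES ('Nic', 100);

    COMMIT TRAN;
END TRY
BEGIN CATCH
    -- Clean up the open transaction and surface the error details
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
```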
Ok, great, this worked as expected. Let’s throw a spanner in the works and pass invalid data to the insert and see what happens:
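For example, feeding a NULL into the NOT NULL Amount column:

```sql
BEGIN TRY
    BEGIN TRAN;

    -- Amount is NOT NULL, so this insert fails...
    INSERT INTO dbo.OutputTesting (FirstName, Amount)
    OUTPUT inserted.Id, inserted.FirstName, inserted.Amount
    VALUES ('Nic', NULL);

    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
```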
Hey, what gives? The insert failed (we have our insert error there) yet it still attempted to return results for the OUTPUT. We could end up with all kinds of problems if we tried to return that back to the client. People would freak out, it would be anarchy!
If we’re going to go ahead and use OUTPUT it’s always best to push that data into a table variable and then use that to return results back to the client if the insert is successful, otherwise we can just pass back the error:
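A sketch of that pattern, with OUTPUT … INTO feeding a table variable:

```sql
DECLARE @Inserted TABLE (Id INT, FirstName VARCHAR(50), Amount INT);

BEGIN TRY
    BEGIN TRAN;

    INSERT INTO dbo.OutputTesting (FirstName, Amount)
    OUTPUT inserted.Id, inserted.FirstName, inserted.Amount
    INTO @Inserted
    VALUES ('Nic', 100);

    COMMIT TRAN;

    -- Only return the captured rows if the insert actually succeeded
    SELECT Id, FirstName, Amount FROM @Inserted;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
```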
Now if we attempt to insert invalid data this time:
All we get back is the actual insert error, so much cleaner.
We should really take the error handling a step further and use RAISERROR and pass in all the parameters, but this is at least a solid foundation and far better than what we had previously.
Heck no, OUTPUT is good for all your DML operations.
Delete will tell you the records you just removed:
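For instance, against the sample table:

```sql
DELETE FROM dbo.OutputTesting
OUTPUT deleted.Id, deleted.FirstName, deleted.Amount
WHERE Id = 1;
```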
Update can give you information on what the old and new values are for the record:
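Here the deleted pseudo-table holds the pre-update values and inserted holds the new ones:

```sql
UPDATE dbo.OutputTesting
SET Amount = Amount + 50
OUTPUT inserted.Id,
       deleted.Amount AS OldAmount,
       inserted.Amount AS NewAmount
WHERE FirstName = 'Nic';
```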
Merge has a little bonus OUTPUT feature called $action which tells you what kind of DML action was performed as well as giving you the data:
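A small example (the source values are made up for illustration):

```sql
MERGE dbo.OutputTesting AS target
USING (VALUES ('Nic', 200), ('Jes', 75)) AS source (FirstName, Amount)
    ON target.FirstName = source.FirstName
WHEN MATCHED THEN
    UPDATE SET Amount = source.Amount
WHEN NOT MATCHED THEN
    INSERT (FirstName, Amount) VALUES (source.FirstName, source.Amount)
OUTPUT $action, inserted.Id, inserted.FirstName, inserted.Amount;
```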
Remember folks, old school has its place, like a Flock Of Seagulls hairstyle at an '80s revival concert, but there are new and better things now.
Output loves you, it gives you lots of information and it won’t kill your server to give it to you. Output is putting out, are you?
I have a bunch of monitoring routines running against my SQL Servers. One of these checks available disk space on drives. Today I was surprised to see that one of my SQL system drives was looking pretty full. This is unusual to say the least because I dedicate a drive to the system (master/model/msdb) databases. Upon connecting to the server I discovered that MSDB had blown up to over 12GB in size.
I used the following query to look for large tables in MSDB (this query is completely based upon running Profiler against SQL Server while running a Disk Usage By Table report).
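It was along these lines, a reconstruction summing pages from the catalog views that report hits:

```sql
USE msdb;
GO
SELECT t.name AS TableName,
       SUM(p.rows) AS TotalRows,
       SUM(a.total_pages) * 8 / 1024 AS TotalMB
FROM sys.tables AS t
JOIN sys.partitions AS p
    ON t.object_id = p.object_id
   AND p.index_id IN (0, 1)  -- heap or clustered index only
JOIN sys.allocation_units AS a
    ON p.partition_id = a.container_id
GROUP BY t.name
ORDER BY TotalMB DESC;
```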
The results were a little shocking to say the least.
MSDB is something frequently forgotten. Cleaning up job history is nice and easy, it’s just a setting in the SQL Agent. Cleaning up your sent emails and backup records is not as straightforward. Why Microsoft hasn’t included this I don’t know, especially given that they have provided stored procedures to perform the cleanup of this data.
My server has been in service since March 2009. That’s a lot of backups and a lot of emails sent. I needed to get a handle on cleaning this up and then implementing a process to ensure that these tables didn’t get out of control again.
Microsoft have provided the procedure sysmail_delete_mailitems_sp to perform cleanup of the sysmail_mailitems table. Two parameters can be passed in: @sent_before and @sent_status.
In this instance I was not concerned with what items were going to be deleted, I just wanted to get rid of everything older than 14 days.
Here’s a bad way to do this:
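One big call, everything in a single transaction:

```sql
-- Delete all mail items older than 14 days in one shot
DECLARE @DeleteBefore DATETIME;
SET @DeleteBefore = DATEADD(DAY, -14, GETDATE());
EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @DeleteBefore;
```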
Actually let me clarify, this isn’t a bad way to do this if you don’t have 2 years and 9GB of data to delete. In fact this is the script I implemented in a job to continue to keep the table size down. Running this against such a large amount of data just blew up the transaction log, filled the drive and then forced a rollback.
Going a slightly smarter route I performed the deletion in small batches (5 days worth of email at a time) using the following:
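A sketch of the batched approach:

```sql
DECLARE @Oldest DATETIME, @Purge DATETIME, @Cutoff DATETIME;
SET @Cutoff = DATEADD(DAY, -14, GETDATE());
SELECT @Oldest = MIN(sent_date) FROM msdb.dbo.sysmail_mailitems;

WHILE @Oldest < @Cutoff
BEGIN
    -- Delete 5 days' worth of mail at a time, keeping each
    -- transaction (and the log growth) small
    SET @Purge = DATEADD(DAY, 5, @Oldest);
    IF @Purge > @Cutoff SET @Purge = @Cutoff;
    EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @Purge;
    SET @Oldest = @Purge;
END;
```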
This took a little while however the transaction log stayed small and it completed without incident.
Again Microsoft do a kindness to help clean up old data. This time it's the procedure sp_delete_backuphistory, which we pass an @oldest_date; all backup history from before that date is removed.
Having learned my lesson from deleting the old mail items I again attempted to perform the delete in batches using:
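Something like this, batching a month at a time (the batch size is my own choice):

```sql
DECLARE @Oldest DATETIME, @Purge DATETIME, @Cutoff DATETIME;
SET @Cutoff = DATEADD(DAY, -90, GETDATE());
SELECT @Oldest = MIN(backup_finish_date) FROM msdb.dbo.backupset;

WHILE @Oldest < @Cutoff
BEGIN
    SET @Purge = DATEADD(DAY, 30, @Oldest);
    IF @Purge > @Cutoff SET @Purge = @Cutoff;
    EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @Purge;
    SET @Oldest = @Purge;
END;
```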
I figured this would take a while. Half an hour later it was still churning away. Wait stats showed a crazy amount of i/o going on. This didn’t seem right so I stopped the script and took a look at the execution plan.
A pretty large plan which makes it difficult to look through. Luckily a few weeks ago I had downloaded the completely awesome SQL Sentry Plan Explorer from SQLSentry.net. If you have not installed this FREE tool yet then I recommend you go out there right now, well after finishing reading this post, and get it. It makes execution plans not just readable but sortable, viewable and workable. Just using Plan Explorer once can save you hours. I just want to add I am not associated with SQL Sentry in any manner nor am I getting any kickback, the tool is just that good.
Pasting the estimated execution plan XML into Plan Explorer showed multiple clustered index scans against the backupset table. It seems that a couple of critical indexes are missing from this table.
I was quickly able to ascertain that a where clause was being used on the backup_finish_date, which is indexed, however it needed to return the media_set_id which is not included in the index. Later on in the plan I could see that there was a join against the media_set_id which isn’t indexed.
These missing indexes were causing the massive I/O numbers I was seeing. To resolve this I quickly added a couple of indexes.
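They were along these lines (the index names are my own):

```sql
USE msdb;
GO
-- Cover the WHERE on backup_finish_date and carry media_set_id along
CREATE NONCLUSTERED INDEX IX_backupset_backup_finish_date
    ON dbo.backupset (backup_finish_date)
    INCLUDE (media_set_id);
GO
-- Support the join on media_set_id
CREATE NONCLUSTERED INDEX IX_backupset_media_set_id
    ON dbo.backupset (media_set_id);
GO
```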
This done I took another look at the estimated plan and things looked a great deal cleaner.
I kicked off the script again and this time it completed in under 5 minutes. Several million rows were deleted and 2GB of space reclaimed.
Not wanting to ever run into this situation again I added a job with a couple of steps to delete mail items older than 14 days and backup history older than 90 days.
This is done with a quick execution of the cleanup procedures. Thanks to the data being brought current and the job running every night I won’t have to worry about extended run times or MSDB getting out of control.
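The job steps amount to little more than this (retention periods being the 14 and 90 days mentioned above):

```sql
-- Step 1: purge mail items older than 14 days
DECLARE @MailCutoff DATETIME;
SET @MailCutoff = DATEADD(DAY, -14, GETDATE());
EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @MailCutoff;
GO
-- Step 2: purge backup history older than 90 days
DECLARE @BackupCutoff DATETIME;
SET @BackupCutoff = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @BackupCutoff;
```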
I hadn’t taken sufficient care in ensuring that old data was kept clean. In fact I hadn’t even realized that backup history and old mail items could be a concern. Since I found this today I’ve started going through and performing this cleanup action on each of my SQL instances. Once that’s done I’ll be using PowerShell to deploy the nightly cleanup job on every instance.
I highly recommend checking your own servers to see if you might be encountering the same problems.
Jen asks “Who are the IT giants you talk to/read that make you feel like a technical poser?” and instructs that invoking Paul Randal (blog|twitter) is not allowed. Per Jen’s instructions saying everybody is also not acceptable. As such I had to sit down and figure out who I could add on the list.
When Jen originally mentioned this on Twitter I came up with a few names on the spot
That’s a pretty good list right there and one worth sticking with.
In no particular order:
I could easily write another 15-20 names here. I currently have 65 SQL bloggers in my RSS reader. Each one of them at one time or another has made me feel like a tech poser.
It’s actually great to have all these folks out there writing their stuff as any time I think about getting complacent in one area or another I just have to pull up a random post from anyone, read it and say to myself “you’ve got a lot to learn yet kid”.