A Day in the Life (4/?) - August 2, 2019
This is my fourth installment in a series responding to Steve Jones’s (blog | twitter) #SQLCareer challenge. I jotted down most of what I did through the day, filling a page and then some in a small notebook with timestamps and short reminders of what happened. For more, check out the #SQLCareer hashtag on Twitter.
I bet you thought I’d forgotten all about this “project”. I decided to pick things back up on this day because I’ve been light on content lately, I had a few things going on, and keeping notes for this series strangely helps me focus on my day. As it turned out, there were a few twists in the story of my Friday.
As before, I recommend reading the first, second, and third installments to get a handle on some of the tasks & terms I might throw around here.
08:00 - Arrive at the office.
08:23 - Get back to work on a report that I spent too much time on yesterday.
08:30 - Brief break, get a hot beverage.
08:45 - Back to the report. The report itself isn’t difficult; in fact, I don’t think there’s anything wrong with it - it’s just misunderstood, or folks are trying to use it for the wrong thing. So instead, I focus on adapting work I’d done for another report to produce the report that I think is actually being requested.
09:30 - Report is done, discuss Quentin Tarantino’s latest movie with my colleague. I haven’t seen it, but she saw it a few nights ago, and before she went I mentioned one of the quirks of Tarantino’s filmmaking. She hadn’t noticed it previously, but she spotted it this time around and we discussed it a bit.
09:45 - Back to real work. I love dbatools, but one thing that makes it difficult for me to use is the amount of data SMO collects about each database when you connect to an instance - data that then gets refreshed on a regular basis. It’s all fine when you’ve got a “normal”-sized instance, but with over 8000 databases, simple things can take upwards of 10 minutes (or even time out) because of this.
So I tried changing things up a bit, telling SMO not to initialize any of those fields when connecting by passing an empty collection to
Server.SetDefaultInitFields(). I only got a bit of a start on this before 10 AM rolled around.
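A minimal sketch of the experiment might look like this. The instance name is a placeholder, and this assumes the SMO assemblies are already loaded (e.g. by importing dbatools or the SqlServer module):

```powershell
# Connect to an instance via SMO directly (instance name is hypothetical)
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'MyBigInstance'

# By default, SMO eagerly initializes a wide set of properties for every
# Database object. Passing an empty collection tells SMO to pre-fetch
# nothing for that type beyond the key fields (like Name).
$emptyFields = New-Object System.Collections.Specialized.StringCollection
$server.SetDefaultInitFields([Microsoft.SqlServer.Management.Smo.Database], $emptyFields)

# Enumerating 8000+ databases now only pulls the minimal key properties;
# anything else is fetched lazily, per object, on first access.
$server.Databases | Select-Object -First 5 -Property Name
```

The trade-off is that each lazily-loaded property triggers its own round trip later, so this only wins when you touch few properties per database.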
10:00 - Convene for a phone call to discuss options for migrating portions of our environment to a public cloud.
11:13 - After the meeting and a quick walk around the building to reset my mind, I resumed my work on the dbatools connection process. Unfortunately, I pushed it so far that I managed to break connecting to instances altogether.
11:32 - Pick up the report again and get the new one into a usable form. Share it with the requestor, who confirms that it’s what she’s looking for.
11:45 - Pick up an emergency ticket, someone needs data pulled from a backup of a database from a couple days ago. Luckily it’s recent enough that I don’t have to fetch it from tape. A couple dbatools functions and the database is restored & ready to be used.
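For the curious, a restore like that can be done in a couple of lines of dbatools. This is a hedged sketch, not the exact commands from the ticket - instance and database names are placeholders:

```powershell
# Find the most recent backup chain for the database, then restore it
# to a different instance under a new name so nothing in production
# is touched. -TrustDbBackupHistory lets Restore-DbaDatabase consume
# the piped history objects without re-reading the backup headers.
Get-DbaDbBackupHistory -SqlInstance 'Prod01' -Database 'AppDb' -Last |
    Restore-DbaDatabase -SqlInstance 'Dev01' -DatabaseName 'AppDb_Restored' `
        -ReplaceDbNameInFile -TrustDbBackupHistory
```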
12:30 - Lunchtime snuck up on me! Head outside and take another quick walk around the building.
12:45 - During my walk, I got a tweet that turned my whole day around.
BAHAHA ANDY!! I just had a meeting with a community guy that said he knew you, and when I was showing him the disk activity tab, I was like well... there's the Andy Levy limit here where if you have a bazillion databases this visualization does top out... :)— Dev’s Ops 🌻 (@RestinBeachFace) August 2, 2019
So apparently I’m either famous or infamous around the SentryOne offices. The queries behind one of the panels in SentryOne Monitor that shows database backup statuses used to have a
TOP N limit coded into them, and due to the number of databases I have in production…well, I broke it. That’s not an issue anymore, as the limit has been increased significantly in the most recent release.
12:50 - Right. Back to work. Do some extra cleanup on that emergency ticket from earlier.
13:00 - Working on that report again. Got confirmation that it is what my customer needs, so I pushed it out to the reporting server, but for reasons I don’t understand yet, it’s not working 100%. I hammered on it for about an hour before deciding enough was enough. It’ll still be there Monday.
13:56 - Pick up another semi-emergency ticket. It’s got a deadline of Monday so I was planning on doing it then, but we got a request to do it ASAP.
14:25 - While wandering around the office, I checked in with someone in finance to see how things were going on a monster report I’d put together a couple months ago. We ended up discussing the peculiar arrangement of overhead I-beams (the building used to be a factory) and after about 10 minutes, concluded that it was a setup for an overhead crane.
14:37 - Return to my desk to discuss some unexpected behavior of SQL Server when returning JSON-formatted results. It turns out that it’s somewhat documented, but the piece we didn’t know is that the output is broken into multiple rows of about 2033 characters each (which Aaron Bertrand found and blogged about when SQL Server 2016 was in the CTP stage). The solution is to loop over the rows in the result set and concatenate them into a single JSON string.
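A quick sketch of that reassembly, assuming dbatools is handy (the instance name and query are placeholders, not what we were actually running):

```powershell
# A FOR JSON result read as a plain result set comes back split into
# ~2033-character fragments, one per row, in a single column.
$rows = Invoke-DbaQuery -SqlInstance 'Prod01' -Query @"
SELECT name, create_date
FROM sys.databases
FOR JSON PATH
"@

# Loop over the rows, pull each fragment (first column), and join them
# back into one complete JSON string before parsing.
$json = -join ($rows | ForEach-Object { $_[0] })
$json | ConvertFrom-Json | Select-Object -First 3
```

Reading the result through a proper streaming API (or assigning it to an nvarchar(max) variable server-side) avoids the splitting entirely, but the loop-and-concatenate approach works with any plain result set.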
14:50 - Back to that semi-emergency ticket, there’s some additional work to be done.
15:15 - Wander off to the front of the office in search of a snack. Find a powdered donut from Dunkin'.
15:30 - Start reading a Microsoft paper on migrating to Azure SQL DB.
16:38 - Time to head home. Concert traffic made that lots of fun.
Somehow, this felt like both a “normal” day and a rather unusual one. I bounced around a fair bit, had some highs and some lows. On two occasions, I had to use the most under-appreciated strategy in troubleshooting - knowing when to walk away and approach the problem another day.
Steve’s original challenge was for four days. Will I do more of these? Undecided. It’s an interesting exercise for sure.