Volunteer for PASS!

This week, I had the opportunity to be the moderator for Joseph Barth’s (b|t) 24 Hours of PASS Summit Preview session about Azure Data Factory V2. It was fun, easy, and I encourage you to sign up to do the same!

Throughout the year, PASS hosts a number of online learning events; 24 Hours of PASS and Virtual Chapter webinars are the most common and visible. In each session, the presenter needs a little help managing questions and watching the clock so they can focus on delivering their great content. It’s pretty easy. You just:

  • Sign in about half an hour ahead of the session start time
  • Make sure your audio is working right
  • Chat with the presenter(s) about the timing, whether they want to address audience questions during the presentation or at the end, when they want time alerts, etc.
  • When the session starts, read the PASS lead-in script that’s provided and introduce the speaker
  • Watch for questions and let the speaker know when you’ve hit the agreed-upon checkpoints
  • Read audience questions to the speaker
  • Wrap-up: thank the speaker and audience, read the wrap-up script, and (where applicable) invite the audience to stick around for the next session

So how do you sign up for such a sweet gig? Just set up your PASS profile to indicate that you’re interested in volunteering. When an opportunity comes up, you’ll be contacted by PASS HQ and asked if you’re available for the event.

In the case of 24 Hours of PASS, I was asked to pick a few time slots where I was available but not told who the speaker was in each (which is fine by me – the result is that I attended a session I normally wouldn’t have, and learned some new stuff!). My slot was confirmed and I learned that Joseph would be my speaker. Great! I met him at Summit last year and he founded a user group that I’m familiar with, so we had something to chat about before his session started.

The clock struck 01:00 UTC, I read my script, Joseph did his presentation, and we wrapped up. It went really well and I had fun with it.

So, dear reader, here’s what you’re going to do:

  1. Go to your PASS profile’s myVolunteering section
  2. Check at least two boxes
    • “I would like to sign up to become a PASS volunteer”
    • Any one of the Areas of Interest
  3. When you receive the email from PASS HQ or local coordinators asking for volunteers for an upcoming event, you say “yes!”
  4. Help out with the event
  5. Meet new folks in the SQL Server community
  6. Learn something new

Communities like ours work best when everyone chips in a little bit, whether it’s speaking, moderating online events, working with a local user group, or helping to put on a SQL Saturday. It’s a great way to meet other people in the community and give back to a group that gives us all so much, both personally and professionally.

Becoming a Production DBA – A Family Decision

I really enjoy my job. I became a full-time production DBA about 14 months ago and it has been an overwhelmingly positive move. I work for a good company and with a terrific group of people. Many days, I have to force myself to leave the office because I’m so engrossed in a task that I just don’t want to set it aside.

But there’s something that not everyone might consider before taking on this job. If you have a partner, children, or both, taking a job as a production DBA is really a family decision.

Being on-call is potentially disruptive to your family schedule. And sleep schedules! My on-call rotation is two weeks on, two weeks off. In those two weeks, I have:

  • The usual alerts that can come in anytime day or night, the emergency fixes when someone deletes something that shouldn’t be deleted, etc.
  • A software release which requires that I get up at 3:45 AM once per rotation
  • Monthly server patching at 2 AM, if it happens during my rotation

Many years ago, I had a job where I carried on-call responsibilities and it was rough. Lots of nights and weekends. Then I got a decade-long break. Before accepting my current job, I discussed the on-call requirements with my spouse. I didn’t want to subject her to that again without making sure that she was OK with it. She is a very light sleeper, so any chirp from the phone is likely to wake her up (by contrast, I once put my phone three inches from my head and slept through multiple personal email alerts).

This job has the potential to impact the whole family, in both small ways and large. Chris Sommer (blog|twitter) said one day in the SQL Community Slack that being a production DBA is kind of a blue-collar job. Shift work, etc. He makes a good point. I’ve adapted to the schedule and it’s not bad…for me.

But I’m not alone in the house and yes, everyone has had to adjust. Sleep has been lost. If an alert comes in overnight, my spouse wakes up too. We’ve scheduled family activities around the on-call schedule. Carried the work laptop all over creation “just in case.” Left the beach to handle urgent tickets. Skipped weekend morning outings. Stayed up late, got up early, missed dinner, or paused a movie to baby-sit a critical job or troubleshoot system issues.

It’s worth it though. After taking on the new role, my job security increased. My career security has increased. My work is more challenging, more interesting, and I have more autonomy than ever before. I look forward to going to work every day. I’m getting more involved in the SQL Server community. On average I’m getting home earlier than I used to, so I’m spending more time with the kids on weekdays. It hurts waking up at 3:45 AM once a month but I’m there to greet them when they get home from school.

Life is full of tradeoffs and compromises, and taking a job with on-call responsibilities involves a lot of those tradeoffs. Overall, it’s been a net win for me. Would I prefer to not have to deal with overnights and weekends? Who wouldn’t? But the positive changes that this job has meant for my career, my family, and myself make it worthwhile.

One Line of Code Is All It Takes

This tweet showed up in the dbatools Slack channel Friday afternoon.

My first thought was “huh? John (t) hadn’t kicked code in previously? I thought he had.” Once I was over that, I reflected a bit on what John wrote here, and was reminded of how I felt when I started helping out with dbatools.

It’s similar to Impostor Syndrome – I felt like I wasn’t doing much, just small things here and there, in large part “just” documentation cleanup. I felt like I was throwing changes into the codebase for the sake of making changes. It took a couple of months and conversations with several people before I understood, and internalized, that what I was doing was useful to someone other than myself.

Here’s the thing that I have finally come to realize. Every contribution to an open source project is beneficial, no matter how small it may seem. I’d heard this over the years but didn’t really understand until very recently.

John’s single line of code, however it made its way into the dbatools codebase, made the project better. His code will be executed by thousands of dbatools users the world over.

Most open source project maintainers/leaders are looking for help. Get out there on GitHub and look up a project you use. Find an issue tagged “good first issue” or “help wanted,” or hop over to Up For Grabs and find a project that needs a little help. If your PR isn’t immediately accepted, work with the maintainers to get it into a condition where it can be merged.

Single lines of code are welcome improvements to projects. Find yours.

How I Became a…SQL Server DBA

Kevin Hill mentioned this idea/series on a SQL community slack channel back in April and I thought it would be a good way to get back to blogging. The timing worked out well as I had just started a new job, my first with the official title of “SQL Server DBA.” So how’d I get here?

In college, I took a single database course. I’d messed around with Microsoft Access a bit, but wanted to get a better handle on what I was doing. The course was not at all what I was expecting. I passed and did OK, but I didn’t completely grasp the material. The class was mostly deep RDBMS theory including “how do we store this on disk” – I wrote minimal amounts of SQL in this course because it wasn’t required.

I graduated and took my shiny new Computer Science diploma to my first job, and within a few months I had a solid handle on Classic ASP, building apps with it and handling some of the server admin stuff on the NT4 boxes that hosted them. I spent a little over 5 years there and got minimal exposure to databases as that wasn’t what my job function demanded – I’d write some queries against DB2 on the mainframe or a SQL Server instance, but that was about it. The DBAs took care of everything else.

After a few years, I moved on from that position as I wanted to relocate for personal reasons. I found a job doing some Java work on an in-house application and system customization/integration for a purchased application that was used as the hub for the company’s core business. In the course of working on those systems, I started doing a lot more SQL work, but at the time I only knew enough to be dangerous.

During a project to upgrade that system, I got a crash course in writing good SQL from Allen White (b|t), and learned much more about how SQL Server works from both him and Kendal Van Dyke (b|t). Allen and Kendal also introduced me to the SQL Server community and my eyes were opened. This was huge.

Over the next several years, I discovered that I was a developer who had DBA tendencies that I just hadn’t realized yet. I started to get involved with the SQL Server community. Talked to so many people. Subscribed to dozens of blogs. Attended SQL Saturdays and PASS Summits.

Then, one evening after we finished unpacking equipment and supplies from one of our Rochester SQL Saturdays, Matt Slocum (b|t) just asked me, point-blank. “So do you wanna be a DBA or what?” Ding! The lightbulb flicked on. I’m already doing a whole bunch of this stuff, and enjoying it – why not go for it?

I refocused my efforts on really understanding how SQL Server works. Looked for ways to leverage my programming experience with a slant toward managing databases. Did a lot more non-production DBA type work (I didn’t have a lot of access to production, which was probably a good thing). After searching for a while, I landed a job as a full-time production DBA with a company operating a SaaS platform. It was a bit of a leap, but one I had to take – the right opportunity came along at the right time. I’m nearly 2 months in now and I’ve learned a ton already. Made a few slip-ups, but that’s to be expected – just have to learn from them and move forward.

SQL New Blogger Challenge November 2015 Edition – Week 3 Digest

This week’s #sqlnewblogger posts!

  • @eleightondick – [T-SQL Tuesday] Data modeling: The trouble with prefixes | The Data Files
  • @tomsql – Adventures With TomSQL, aka Tom Staab
  • @EdDebug – Automatically name primary key constraints in SSDT | the.agilesql.club
  • @rabryst – Born SQL on Twitter: “Temporal Tables – Under the Covers with the Transaction Log”
  • @YatesSQL – Community Involvement–Why Wait? | The SQL Professor
  • @cjsommer – Identity Column Increment Value (EVEN/ODD) | cjsommer.com
  • @DBA_ANDY – Nebraska SQL from @DBA_ANDY: CHECKDB – The database could not be exclusively locked to perform the operation
  • @ALevyInROC – Selectively Locking Down Data – Gracefully – The Rest is Just Code
  • @eleightondick – SQLNewBlogger, Week 3 | The Data Files
  • @tomsql – Being Our Collective Best
  • @SQLMickey – T-SQL Tuesday #72 Summary – Data Modeling Gone Wrong | Mickey’s T-SQL Ponderings

SQL New Blogger Challenge November 2015 Edition – Week 2 Digest

This week’s #sqlnewblogger posts!

  • @arrowdrive – Anders On SQL: T-SQL Tuesday #72: Data modelling gone extremely wrong
  • @rabryst – Time After Time – An Introduction to Temporal Tables in SQL Server 2016 using a DeLorean
  • @EdDebug – Deploy SSDT INSERTS in Batches | the.agilesql.club
  • @ALevyInROC – Don’t Trust the Wizard
  • @DBA_ANDY – Nebraska SQL from @DBA_ANDY: T-SQL Tuesday #72 – Implicit Conversion Problems
  • @eleightondick – SQL New Blogger Challenge: Week 1 recap | The Data Files
  • @eleightondick – SQL New Blogger Challenge: Week 2 ideas | The Data Files
  • @BeginTry – SQL Server 2012 Upgrade: The RPC Server is Unavailable | It’s All Just Electrons

SQL New Blogger Challenge, November Edition, Week 1 Digest

Ed Leighton-Dick has renewed his New Blogger Challenge this month. Here are all (I think) the posts for this week after Ed posted his announcement. If I’ve missed any, please let me know and I’ll update.

  • @arrowdrive – Anders On SQL: First Timer Summit impressions.
  • @EdDebug – Deploy SSDT INSERTS in Batches | the.agilesql.club
  • @EdDebug – Looking at SSDT upgrade scripts | the.agilesql.club
  • @DBA_ANDY – Nebraska SQL from @DBA_ANDY: PASS Summit 2015 Recap
  • @eleightondick – PASS Summit 2015 Highlights | The Data Files
  • @OliverAsmus – PASS Summit 2015: My Experience | OliverAsmus.com
  • @EdDebug – ScriptDom Visualizer | the.agilesql.club
  • @eleightondick – SQL New Blogger Challenge: Looking back… and a new challenge! | The Data Files
  • @Clem1029 – Tearing down the wall | SQLDEV@Clemsplace
  • @ALevyInROC – Why Ask Why?
  • @rabryst – The SQL Server Family

Why Ask Why?

Spend any time around a 4-year-old and you will inevitably find yourself in a conversation that evolves into this:

  • Please do this thing
  • Why?
  • Reasonable answer
  • Why?
  • Restatement of reasonable answer
  • Why?
  • Shorter, more frustrated restatement of reasonable answer
  • Why?
  • Because that’s what has to be done
  • Why?
  • Because
  • Why?
  • I give up. Go ask your other parent

It’s a simple but powerful and important question. The trouble is that when it’s a 4-year-old asking it, in a lot of cases they can’t understand the answer. More often, they aren’t interested in understanding it.

Fortunately, there aren’t any 4-year-olds in the average IT shop (although some days it may not be too far off).

A while ago, a data issue came to my team. Nothing major, but enough that it caused problems for a user. It’s a small glitch with an application component which pops up maybe once or twice a year, so it’s been decided that it’s better to just fix the data in those rare cases as opposed to spending 20 hours tracking down the root cause & fixing it there (I’m the SME for this component).

The requested correction was to delete the entire record, based on a previous fix to a similar but unrelated data problem. By the time I saw the request, a teammate had picked it up & started working on it.

“Wait! Don’t do it that way!” I said. “All we should be doing here is fixing the one erroneous field on that record.” This had come up in the past, but with it happening so rarely it’s easy to forget about.

I paused to catch my breath, then heard it.

Why?

I had to pause even longer to collect my thoughts. I don’t often get asked questions on things like this but I wish it happened daily.

This is the moment in which knowledge is gained, even by the answerer.

When you live & breathe a system for years on end, it’s easy to take certain aspects of it for granted. You respond without having to think about why things work the way they do – you just know that’s how it is.

The ensuing conversation was productive and I hope informative for my co-workers. While deleting the record would have the desired short-term result (making the application function properly), in the long term it would break the link between the data and a document which is referenced by that data. A net loss. Fixing the one column (setting it to the value which it should have been in the first place) allows the application to function correctly and retain access to that referenced document.
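
To make that concrete, here is a minimal T-SQL sketch of the two approaches. The table, column, and values are hypothetical (the real schema isn’t something shown in this post), but the shape of the fix is the same: update the one bad value rather than deleting the row and losing the document link.

    -- Hypothetical names and values, for illustration only; not the real schema.
    DECLARE @RecordId int = 12345;

    -- The requested fix: deleting the whole row makes the application behave,
    -- but it also severs the link to the document that row references.
    -- DELETE FROM dbo.ComponentRecord WHERE RecordId = @RecordId;

    -- The actual fix: set the one bad column back to the value it should have
    -- had in the first place, leaving the rest of the row (and the link) intact.
    UPDATE dbo.ComponentRecord
    SET    StatusCode = 'Active'
    WHERE  RecordId   = @RecordId;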

The conversation also forced me to take a closer look at my own understanding of the issue and re-evaluate what I thought I knew about it. It turns out, I had some bad assumptions in there too, which I was able to correct.

Not only did my teammates learn, I learned too. Everyone wins.

So why was the original solution of deleting the whole record requested? The answer isn’t too far removed from the idea of cargo cult programming. That is, someone saw that solution used in a similar case years ago, documented it, and it was seen as the One True Answer from that point forward – regardless of its applicability to the situation at hand. A detailed explanation of “why” isn’t usually written for every issue that comes to our team for resolution, for a few reasons:

  • We don’t think to do it.
  • There isn’t a good way to distinguish between this bug in the system and others without having a fairly deep knowledge of the system.
  • There isn’t a way in our ticketing system to record information that isn’t visible to everyone, and the whole company does not need to see the dirty details of the internals of every system – in fact, it would probably be counterproductive.

In hindsight, a carefully written, more thorough explanation years ago might have prevented this particular request from being written as it was.

Asking why became the basis for Toyota’s approach to improving their manufacturing processes, and is built into Six Sigma and many other process improvement methodologies. This one word is the gateway to understanding, and once we understand, we can find ways to do things better.

If you’re curious about something, release your inner 4-year-old. Just don’t act like a 4-year-old when you do it. Keep asking why, get to the answers – and make sure you understand them.

If someone asks you why, embrace the question. This person is interested, they’re engaged, they want to learn! Take advantage of that opportunity to teach and spread that knowledge. Along the way, you just might learn something yourself.

SQL New Blogger Challenge Digest – Week 4

This week marks the end of Ed Leighton-Dick’s New Blogger Challenge. It’s terrific seeing everyone sticking with the challenge all month and I’m looking forward to catching up with all the posts. Great job, everyone! Keep going!

  • @MtnDBA – #SQLNewBlogger Week 4 – My 1st SQLSaturday session | DBA With Altitude
  • @Lance_LT – “MongoDB is the WORST!” | Lance Tidwell the Silent DBA
  • @ceedubvee – An Insider’s View of the Autism Spectrum: Autism and Information Technology: Big Data for Diagnosis
  • @Jorriss – A Podcast Is Born
  • @toddkleinhans – A Tale of SQL Server Disk Space Trials and Tribulations | toddkleinhans.com
  • @arrowdrive – Anders On SQL: First “real” job with SQL.
  • @arrowdrive – Anders On SQL: Stupid Stuff I have done. 2/?. Sometimes even a dev server is not a good dev environment
  • @way0utwest – April Blogger Challenge 4–Filtered Index Limitations | Voice of the DBA
  • @ALevyInROC – Are You Backing Everything Up? | The Rest is Just Code
  • @DesertIsleSQL – Azure Data Lake: Why you might want one
  • @EdDebug – BIML is better even for simple packages | the.agilesql.club
  • @tpet1433 – Corruption – The Denmark of SQL Instances – Tim Peters
  • @eleightondick – Creating a Self-Contained Multi-Subnet Test Environment, Part II – Adding a Domain Controller | The Data Files
  • @MattBatalon – Creating an Azure SQL Database | Matt Batalon
  • @pshore73 – Database on the Move – Part I | Shore SQL
  • @pmpjr – Do you wanna build a cluster?! | I have no idea what I’m doing
  • @DwainCSQL – Excel in T-SQL Part 1 – HARMEAN, GEOMEAN and FREQUENCY | dwaincsql
  • @AalamRangi – Gotcha – SSIS ImportExport Wizard Can Kill Your Diagrams | SQL Erudition
  • @toddkleinhans – How Do Blind People Use SQL Server? | toddkleinhans.com
  • @DBAFromTheCold – In-Memory OLTP: Part 4 – Native Compilation | The DBA Who Came In From The Cold
  • @AaronBertrand – It’s a Harsh Reality – Listen Up – SQL Sentry Team Blog
  • @GuruArthur – Looking back at April – Arthur Baan
  • @nocentino – Moving SQL Server data between filegroups – Part 2 – The implementation – Centino Systems Blog
  • @MyHumbleSQLTips – My Humble SQL Tips: Tracking Query Plan Changes
  • @m82labs – Reduce SQL Agent Job Overlaps · m82labs
  • @fade2blackuk – Rob Sewell on Twitter: “Instances and Ports with PowerShell http://t.co/kwN2KwVDOS”
  • @DwainCSQL – Ruminations on Writing Great T-SQL | dwaincsql
  • @sqlsanctum – Security of PWDCOMPARE and SQL Hashing | SQL Sanctum
  • @Pittfurg – SQL Server Backup and Restores with PowerShell Part 1: Setting up – Port 1433
  • @cjsommer – Using PowerShell to Export SQL Data to CSV. How well does it perform? | cjsommer.com
  • @gorandalf – Using SSIS Lookup Transformation in ETL Packages | Gorandalf’s SQL Blog
  • @nicharsh – Words on Words: 5 Books That Will Improve Your Writing

Are You Backing Everything Up?

We hear the common refrain among DBAs all the time. Back up your data! Test your restores! If you can’t restore the backup, it’s worthless. And yes, absolutely, you have to back up your databases – your job, and the company, depend upon it.
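
To put the refrain into practice, here’s a minimal T-SQL sketch – the database name and backup path are made up for this example – showing a checksummed backup and a quick verification pass. RESTORE VERIFYONLY is a handy sanity check, but actually restoring the backup on another server is the only real test.

    -- Hypothetical database name and backup path, for illustration only.
    BACKUP DATABASE SalesDB
        TO DISK = N'D:\Backups\SalesDB_Full.bak'
        WITH CHECKSUM, INIT;

    -- Quick sanity check: validates the backup media and page checksums,
    -- but it is not a substitute for actually restoring the backup somewhere.
    RESTORE VERIFYONLY
        FROM DISK = N'D:\Backups\SalesDB_Full.bak'
        WITH CHECKSUM;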

But are you backing everything up?

Saturday night was an ordinary night. It was getting late, and I was about to put my computer to sleep so I could do likewise. Suddenly, everything on my screen was replaced with a very nice message telling me that something had gone wrong and my computer needed to be restarted.

Uh oh.

In 7 1/2 years of using OS X, I’ve had something like this happen maybe 4 times.

After waiting for what felt like an eternity, the system finished booting & I got back into my applications. I opened up PowerPoint, which I’d had open before the crash so I could work on my SQL Saturday Rochester slide deck whenever inspiration struck. I opened my file and was greeted by nothingness. I flipped over to Finder and saw zero bytes displayed as the file size.

Uh oh.

“But Andy,” you say, “you use CrashPlan, right? Can’t you just recover the file from there?” Well, you’re half right. I do use CrashPlan. I even have a local, external hard drive (two, actually) that I back up to in addition to CrashPlan’s cloud service. But I couldn’t recover from any of those.

[Screenshot: CrashPlan configuration – oops]

Because Dropbox is already “in the cloud”, I had opted not to back it up with CrashPlan when I first set it up. After all, it’s already a backup, right? It’s not my only copy, it’s offsite, it’s all good.

Not so fast. When my system came back up, Dropbox dutifully synced everything that had changed – including my now-empty file.

[Screenshot: Dropbox showing the file at 0 bytes]

So, now what? Fortunately, Dropbox allows you to revert to older versions, and I was able to select my last good version and restore it.

Lessons Learned

I broke The Computer Backup Rule of Three and very nearly regretted it. For my presentation:

  • I had copies in two different formats – Dropbox & my local (internal) hard drive
  • I had one copy offsite (Dropbox)
  • I only had two copies, not three (local and Dropbox).

Even scarier, if Dropbox didn’t have a version history or it had taken me more than 30 days to realize that this file had been truncated, I’d have lost it completely.

Everything else on my computer was in compliance with the Rule Of Three; I just got lazy with the data in my Dropbox and Google Drive folders. I’ve since updated my CrashPlan settings to include my local Dropbox and Google Drive folders so that my presentation should now be fully protected:

  • Five copies
    • Local drive
    • Two external drives w/ CrashPlan
    • CrashPlan cloud service
    • Dropbox/Google Drive (different content in each)
  • Three formats
    • Spinning platters in my possession
    • Dropbox/Google Drive
    • Crashplan
  • Two copies offsite
    • CrashPlan cloud
    • Dropbox/Google Drive

And don’t forget to test those backups before you need to use them. Dropbox, Google Drive and other online file storage/sync solutions are very useful, but you cannot rely upon them as backups. I don’t think you’ll ever regret having “extra” backups of your data, as long as that process is automatic.