One Peril of Database Proliferation

By now many of us have upgraded from SQL Server 2008 R2 and we’re on the “regular Cumulative Updates” train. For the rest, it’ll (hopefully) happen soon. And since we want a long runway, we’re upgrading to SQL Server 2016 or 2017. Current software! New features! Mainstream support! But…there’s a catch.

Staying current

DBAs & sysadmins don’t want to fall too far behind on patching, for a variety of reasons. It used to be that monthly patch cycles were primarily for Windows Server. When we were running SQL Server 2008 R2, our last significant update was years ago. SQL Server 2012 is now out of mainstream support and rarely receives updates. The point being, for many of us, a SQL Server update has been the exception, not the norm, in the patch cycle for the past couple of years.

Now, Microsoft is releasing monthly updates for SQL Server 2017, and bi-monthly updates for SQL Server 2014 and 2016 (even 2012 was getting frequent updates for a while). Updating SQL Server is becoming part of the regular cycle. But SQL Server updates aren’t as simple as most Windows patches.

It’s not just the software

The first time you start SQL Server after installing a Service Pack or Cumulative Update, SQL Server goes through a process called “script upgrade mode.” In this process, the system makes two passes over every database: one updates system objects/views, the other updates publisher/subscriber metadata. The pub/sub pass takes about a quarter of a second per database if you aren’t using replication, and the system object updates take about one second per database. For most environments, that adds maybe a couple of minutes to startup. Not a big deal.

Watching script upgrade mode progress in the SQL Server error log
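If you’d rather query the progress than watch the log scroll by, you can search the current error log for the per-database upgrade-step messages (over the DAC, since normal connections are refused while this runs). A minimal sketch using xp_readerrorlog’s parameters — log number (0 = current), log type (1 = SQL Server), and a search string:

```sql
-- Search the current SQL Server error log for script upgrade progress messages
EXEC master.sys.xp_readerrorlog 0, 1, N'upgrade step';
```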

But it’s a big deal to me!

When you have 8,000+ databases on an instance, it’s a very big deal. You’re looking at over two and a half hours just to bring SQL Server online after installing an SP or CU. While the instance is in script upgrade mode, incoming connections are locked out and the service remains in the Starting status. Only the Dedicated Administrator Connection (DAC) can be used to connect to the instance remotely.
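The back-of-envelope math, assuming roughly 1.25 seconds of upgrade scripting per database from the timings above:

```sql
-- ~1 second for system objects + ~0.25 seconds for pub/sub metadata, per database
SELECT 8000 * 1.25          AS TotalSeconds, -- 10,000 seconds
       8000 * 1.25 / 3600.0 AS TotalHours;   -- roughly 2.8 hours
```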

Taking advantage of a Failover Cluster Instance to patch the passive node in advance is great for minimizing downtime from Windows updates. But FCI or not, SQL Server will remain in the Starting state until every one of your databases has been through this process after an update is installed. What was once a 10-minute failover is now a multi-hour ordeal, and maintenance windows become a lot harder to negotiate.

No easy solution

What’s the takeaway here? As those of us who were/are still clinging to SQL Server 2008 R2 finally make the move, we’re facing a new reality with patching our systems.

You can’t fight city hall. You’ve got SLAs to meet, and you need to stay on top of those SQL Server SPs and CUs. But your standard maintenance windows may not be long enough to accommodate the extra time to install CUs plus the post-install housekeeping. If that’s the case, you’ll probably find yourself installing CUs only every three or six months and negotiating an extended outage for each.
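When negotiating those windows, it helps to know exactly which build each instance is on. A quick check using the documented SERVERPROPERTY names (note that ProductUpdateLevel is only populated on builds recent enough to report a CU level):

```sql
-- Current build and update level of the instance
SELECT SERVERPROPERTY('ProductVersion')     AS ProductVersion,     -- e.g. 14.0.xxxx.x for SQL Server 2017
       SERVERPROPERTY('ProductLevel')       AS ProductLevel,       -- RTM or SPn
       SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel; -- CUn, where populated
```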

I’ve logged an item on Microsoft’s feedback portal asking for script upgrade mode to run in parallel, up to the MAXDOP setting for the instance. Please upvote it if you think it’s worthwhile!

4 thoughts on “One Peril of Database Proliferation”

  1. Voted! – This is absolutely worthwhile for as long as the “little & often” style of updates continues.

  2. When applying just a CU (not a big SP or engine upgrade), I usually detach the disk volumes holding the data files for databases that aren’t involved in replication.

    This avoids updating the replication system objects in those databases.

    Using this method I have applied all ~10 CUs on SQL Server 2016 and didn’t experience any problems.

  3. WOW! 8k databases is a crazy number for sure. Since you’re dealing with an FCI, what about a patching strategy of detaching the less important and non-replicated databases (if possible) prior to patching and re-attaching them post-patching?

    1. Detaching a database can be a bit drastic. SSMS doesn’t show detached databases, so you have to remember what you detached and where the files live for when you want to re-attach them.

      You can achieve the same speed-up in patching by taking the non-critical databases offline instead. SSMS still shows offline databases, and all you need to do to get them patched is bring them back online.
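      The offline/online approach is easy to script. A sketch with a hypothetical database name — the upgrade scripts run against the database when it comes back online after patching:

      ```sql
      -- Before patching: take a non-critical database offline
      ALTER DATABASE [NonCriticalDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;

      -- After patching: bring it back; script upgrade runs against it at this point
      ALTER DATABASE [NonCriticalDb] SET ONLINE;
      ```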
