Carlos Chacon (twitter) was kind enough to have me back on the SQL Data Partners Podcast to talk about my experiences with managing 8000 databases on a single instance and upgrading to SQL Server 2016. He, Kevin Feasel (blog | twitter) & I had a great conversation in which I may have gushed a bit about dbatools. Then we wrapped up with the SQLFamily questions as we didn’t do them on my previous appearance last year.
In my previous post, I outlined the preparations we undertook to migrate a large SQL Server 2008R2 instance to SQL Server 2016. This post details migration day.
Final Prep

We completed our nightly backups as usual on Friday night, so when I arrived Saturday I kicked off a final differential backup to catch any overnight changes. We’ve multi-threaded Ola’s backup script by creating multiple jobs and I started them all at once with (of course) PowerShell.
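For the curious, here’s a minimal sketch of that kickoff, assuming dbatools’ Start-DbaAgentJob and a naming pattern for the jobs (the instance name and job names below are hypothetical):

```powershell
Import-Module dbatools

# Hypothetical names: the differential backup work is split across several Agent jobs
$backupJobs = 1..4 | ForEach-Object { "DatabaseBackup - USER_DATABASES - DIFF - $_" }

foreach ($job in $backupJobs) {
    # Starting an Agent job is asynchronous, so this loop just queues them all and returns
    Start-DbaAgentJob -SqlInstance OLDPRODSQL -Job $job
}
```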
A bit over a year ago, I blogged about my experience migrating a test SQL Server instance from a VM to a physical machine with a little help from my friends. That migration went well and the instance has been running trouble-free ever since. But it’s small potatoes: a modest instance, only about 5% the size of production. With SQL Server 2008R2’s EOL looming, it was time to migrate production to SQL Server 2016.
A few weeks ago, I teased good news.
just got some good news. can't wait to share it
— Andy Levy (@ALevyInROC) July 20, 2018

One person hypothesized that I’m joining Microsoft (it seems to be the thing to do lately) and another jumped to the conclusion that I must be pregnant. Both creative responses, but not quite correct.
I’ll be at PASS Summit 2018!
So much to do!
- Pick some sessions
- Make my checklist of #sqlfamily I need to see
- Find a way to pack lighter (I think the iPad will stay home this time)
- Up my selfie game
- Get back into shape for #sqlrun
- Print up some more dbatools ribbons
- Figure out the social media photo situation (see above, “Up my selfie game”)

If you’re attending Summit, let’s meet up!
Triggers can be really useful in your database but you have to be careful with them. We often use them to record history to a separate table (at least, until we can implement temporal tables) or keep track of when a record was last updated. It’s usually implemented via an AFTER UPDATE trigger.
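As a rough sketch of that last-updated pattern (the table, column, and server names are hypothetical, and pushing the DDL through dbatools’ Invoke-DbaQuery is just one way to deploy it):

```powershell
Import-Module dbatools

# Hypothetical schema: dbo.Widget has a LastUpdated column we want stamped on every update.
# Joining to the inserted pseudo-table keeps the trigger correct for multi-row updates.
$triggerDdl = @'
CREATE TRIGGER dbo.trg_Widget_LastUpdated
ON dbo.Widget
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE w
       SET LastUpdated = SYSDATETIME()
      FROM dbo.Widget AS w
      JOIN inserted AS i ON i.WidgetId = w.WidgetId;
END;
'@

Invoke-DbaQuery -SqlInstance DEVSQL01 -Database WidgetDb -Query $triggerDdl
```

Even a small trigger like this fires on every UPDATE statement against the table, which is a big part of why they deserve that care.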
Several times a day, I’m alerted by our monitoring suite that two processes were deadlocked. This is a (mostly) normal event, but one that we’d like to avoid if we can.
This week I had a user come to me asking how fields were defined on a few tables he was using to write some reports. Long story short, he’s been tasked with writing some new reports and updating existing ones, but he doesn’t have visibility into the database itself, so he’s left with the “OK, let’s try this” approach and then reading error messages to debug when things go sideways.
I’ve been a proponent of dbatools for close to a year now and have even contributed to the project, but surprisingly I haven’t been a heavy user of it, mostly due to a lack of opportunity. I’m aware of many of the functions by virtue of working on the built-in documentation, following the project, and watching presentations about it.
So when the need arose to move a development/test instance of SQL Server from a VM onto a physical server, I knew exactly what I wanted to do.
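A hedged sketch of the kind of dbatools call that move turns into, assuming Start-DbaMigration does the heavy lifting (server names and share path are hypothetical, and the parameter names have shifted a bit across dbatools releases):

```powershell
Import-Module dbatools

# Hypothetical names; in older dbatools releases -SharedPath was called -NetworkShare.
# Start-DbaMigration moves the databases (via backup/restore to the share) along with
# logins, Agent jobs, linked servers, and other instance-level objects.
Start-DbaMigration -Source OLDVM -Destination NEWBOX -BackupRestore -SharedPath '\\fileserver\SQLMigration'
```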
One of the (many) fun things to do at PASS Summit is to check out the ribbons people have attached to their badges. Some are witty or goofy, others informational, others technical, and still more that let you express how you identify with a community within the community.
To celebrate dbatools and the awesome team & community around it, two limited edition ribbons will be available from me and a handful of other folks all week at Summit.
This month’s T-SQL Tuesday is hosted by Raul Gonzalez and he’s asked everyone to share things we might be a bit embarrassed about:
For this month, I want you peers to write about those important lessons that you learned the hard way, for instance something you did and put your systems down or maybe something you didn’t do and took your systems down. It can be also a bad decision you or someone else took back in the day and you’re still paying for it…
A few weeks ago I was looking at a query and got tripped up by the network and my own forgetfulness. It was a pretty simple query with a simple-looking execution plan, and it didn’t even do that much work: about 20K logical reads, returning 200K records. For a server as large as the one I was working with, this should have been nothing. Instead, I was waiting three minutes to get my results.