Something in SQL Server that isn’t always obvious to beginners is that when you create a default value for a column on a table, SQL Server creates a constraint (much like a primary or foreign key). All constraints must have a name, and if one isn’t specified SQL Server will generate one for you. For example:
[code language="sql"]CREATE TABLE [dbo].[point_types] (
    [typeid] [int] NOT NULL DEFAULT (NEXT VALUE FOR [pointtypeid])
    ,[typename] [nvarchar](30) NOT NULL DEFAULT 'Unspecified'
    ,CONSTRAINT [PK_PointType] PRIMARY KEY CLUSTERED ([typeid] ASC)
);
GO[/code]
We’ve got a simple table here and both fields have a default value set (the primary key’s value is generated from a sequence object, pointtypeid). Time goes on, and a change in policy comes up which requires that I change the default value of typename to Unknown. To do this, I have to drop the constraint and re-create it. To find the name of the constraint, I can either ask sp_help, or run this query:
[code language="sql"]SELECT all_columns.NAME
    ,default_constraints.NAME
    ,default_constraints.DEFINITION
FROM sys.all_columns
INNER JOIN sys.tables ON all_columns.object_id = tables.object_id
INNER JOIN sys.schemas ON tables.schema_id = schemas.schema_id
INNER JOIN sys.default_constraints ON all_columns.default_object_id = default_constraints.object_id
WHERE schemas.NAME = 'dbo'
    AND tables.NAME = 'point_types';[/code]
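The sp_help route is a one-liner; the constraint name and its definition show up in the constraint section near the bottom of its output:
[code language="sql"]EXEC sp_help N'dbo.point_types';[/code]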
I've got my constraint name now, so I can drop it & re-create it:
[code language="sql"]ALTER TABLE [dbo].[point_types]
DROP CONSTRAINT [DF__point_typ__typen__21B6055D];
GO
ALTER TABLE [dbo].[point_types] ADD DEFAULT ('Unknown') FOR [typename];
GO[/code]
And if I re-run the above query, I can see that the constraint’s name is different.
This means that everywhere I need to change this constraint (development, test and production), I'll need to figure out the constraint name in that particular database and drop it before re-creating it. This makes a deployment script a bit messier, as it needs more code to find the constraint name and drop it dynamically, along the lines of the sketch below.
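A minimal sketch of that extra code, reusing the system-view query from above and assuming typename is the column being changed:
[code language="sql"]DECLARE @sql nvarchar(max);

-- Find the system-generated default constraint on typename and build a DROP statement for it
SELECT @sql = N'ALTER TABLE [dbo].[point_types] DROP CONSTRAINT '
    + QUOTENAME(default_constraints.NAME) + N';'
FROM sys.all_columns
INNER JOIN sys.tables ON all_columns.object_id = tables.object_id
INNER JOIN sys.schemas ON tables.schema_id = schemas.schema_id
INNER JOIN sys.default_constraints ON all_columns.default_object_id = default_constraints.object_id
WHERE schemas.NAME = 'dbo'
    AND tables.NAME = 'point_types'
    AND all_columns.NAME = 'typename';

IF @sql IS NOT NULL
    EXEC sp_executesql @sql;
GO[/code]
With the system-named constraint out of the way, the new default goes back on: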
[code language="sql"]ALTER TABLE [dbo].[point_types] ADD DEFAULT ('Unknown') FOR [typename];
GO[/code]
But this doesn't really solve my problem; it just works around it. It's still messy and fragile. If I need to do other operations on the default constraint, I need to go through the same exercise to find its name.
Fortunately, SQL Server lets us name default constraints just like any other constraint, and by doing so we avoid this trouble. By setting my own name for the constraint, I know what it’ll be in every database, without having to query system tables. The name can be set in both the CREATE TABLE statement and an independent ALTER TABLE.
[code language="sql"]CREATE TABLE [dbo].[point_types] (
    [typeid] [int] NOT NULL DEFAULT (NEXT VALUE FOR [pointtypeid])
    ,[typename] [nvarchar](30) NOT NULL CONSTRAINT [DF_PT_TypeName] DEFAULT 'Unspecified'
    ,CONSTRAINT [PK_PointType] PRIMARY KEY CLUSTERED ([typeid] ASC)
);
GO

ALTER TABLE [dbo].[point_types]
DROP CONSTRAINT [DF_PT_TypeName];
GO

ALTER TABLE [dbo].[point_types] ADD CONSTRAINT [DF_PT_TypeName] DEFAULT ('Unknown') FOR [typename];
GO[/code]
I can also combine these in the next deployment that requires a change to the default constraint, dropping the system-generated name and establishing my own static name to make things simpler in the future.
Is explicitly naming default (or any other) constraints necessary? No, but doing so helps your database document itself, and it makes future deployment/promotion scripts simpler and less prone to breakage. SQL Server needs a name for the constraint regardless; it’s worth specifying it yourself.
If you need to move data from one table into a new table, or even tables in a database into another database, the Import/Export Wizard in SQL Server Management Studio looks pretty tempting. Set up a source & destination, click a few buttons, kick back with a cup of tea and watch the progress bars, right?
It turns out that the wizard just isn’t as smart as it may seem. If you’re not careful, you won’t get what you’re expecting. Let’s check it out.
We’ll start by creating a real simple table in a database, containing a primary key and a computed column.
[code language="sql"]Create table sample.dbo.SourceTable (
    RowId int identity(1,1) not null primary key,
    Num1 int not null,
    Num2 int not null,
    Total as (Num1 + Num2)
);[/code]
Let’s populate it with a few rows of data, then update some of that data to make sure the computed column is working. Remember, this is just to demonstrate the idea.
[code language="sql"]insert into sample.dbo.SourceTable (Num1, Num2) values (1,2);
go 100

select top 5 * from sample.dbo.SourceTable order by RowId;

update sample.dbo.SourceTable set Num1 = Num1 * RowId where RowId <= 3;

select top 5 * from sample.dbo.SourceTable order by RowId;[/code]
Great! We've got data, the computed column is working, let's copy it over to a new table in another database. We're just going to click Next, Next, Next through the wizard this time around.
Success! Our table has been copied and the data’s all there.
[code language=”sql”] select top 5 * from Sample2.dbo.SourceTable order by RowId; [/code]
Let’s do some work on our new table and check out the results.
[code language="sql"]select top 5 * from Sample2.dbo.SourceTable order by RowId;

update Sample2.dbo.SourceTable set Num2 = Num2 * RowId where RowId < 3;

select top 5 * from Sample2.dbo.SourceTable order by RowId;[/code]
Woah! That's not right. That Total column is supposed to be Num1 + Num2, and last time I checked 2 + 4 was not 4. Let's keep going and try adding a new record the same way it was done earlier.
[code language="sql"]insert into Sample2.dbo.SourceTable (Num1, Num2) values (100,200);[/code]
Cannot insert the value NULL into column 'RowId', table 'Sample2.dbo.SourceTable'; column does not allow nulls. INSERT fails.
Huh. Now that’s really odd, isn’t it? RowId is supposed to be an identity – we shouldn’t have to populate it. What is going on here? Let’s script out the table.
[code language="sql"]USE [Sample2]
GO

/****** Object: Table [dbo].[SourceTable] Script Date: 2015-11-10 22:36:23 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[SourceTable](
    [RowId] [int] NOT NULL,
    [Num1] [int] NOT NULL,
    [Num2] [int] NOT NULL,
    [Total] [int] NULL
) ON [PRIMARY]
GO[/code]
This is all kinds of wrong! What happened to the primary key? Or the computed column? Well, it turns out that the wizard isn't that smart, and if you just take all the default values, you're going to get burned. Let's go back to the wizard and click that Edit Mappings button in the Select Source Tables and Views screen.
Well…that looks like what we got above. And it's not what we wanted. If we click Edit SQL, this is confirmed – the table being created is not defined the same way as the source table.
Fortunately, we can edit the SQL here and make it match the source table definition, then finish the wizard.
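The edited statement just needs to reproduce the source table's definition; something like this minimal sketch should do it (the script the wizard generates will be formatted a little differently, but the important parts are the identity, the primary key, and the computed column):
[code language="sql"]CREATE TABLE [dbo].[SourceTable] (
    [RowId] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [Num1] [int] NOT NULL,
    [Num2] [int] NOT NULL,
    [Total] AS ([Num1] + [Num2])
);[/code]
If you go this route, you'll probably also need to check Enable identity insert in the column mappings so the existing RowId values copy over intact, and set the destination for the Total column to &lt;ignore&gt;, since the new table computes it on its own.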
OK, data's copied – what do we have?
[code language="sql"]select top 5 * from Sample3.dbo.SourceTable order by RowId;

update Sample3.dbo.SourceTable set Num2 = Num2 * RowId where RowId < 3;

select top 5 * from Sample3.dbo.SourceTable order by RowId;

insert into Sample3.dbo.SourceTable (Num1, Num2) values (100,200);

select * from Sample3.dbo.SourceTable where RowId >= 100 order by RowId;[/code]
Everything’s there, and it’s working the way it’s supposed to. Lesson learned: don’t blindly trust the defaults, especially the ones in a wizard. Double-check everything, and then verify that your schema works the way you expect it to before doing any business with it.
Fixed-position data formats will seemingly be with us forever. Despite the relative ease of parsing CSV (or other delimited formats), or even XML, many data exchanges require a fixed-position input. Characters 1-10 are X, characters 11-15 are Y and if the source data is fewer than 5 characters, we have to left-pad with a filler character, etc. When you’re accustomed to working with data that says what it means and means what it says, having to add “extra fluff” like left-padding your integers with a half-dozen zeroes can be a hassle.
I received a draft of a stored procedure recently which had to do exactly this. The intent is for the procedure to output the data almost entirely formatted as required, one record per line in the output file, and dump the result set to a file on disk. As it was given to me, the procedure was peppered with CASE statements like this (only more complex) in the SELECT clause:
[code language="sql"]-- Method 1
select case len(cast(logid as varchar))
        when 9 then '0' + cast(logid as varchar)
        when 8 then '00' + cast(logid as varchar)
        when 7 then '000' + cast(logid as varchar)
        when 6 then '0000' + cast(logid as varchar)
        when 5 then '00000' + cast(logid as varchar)
        when 4 then '000000' + cast(logid as varchar)
        when 3 then '0000000' + cast(logid as varchar)
        when 2 then '00000000' + cast(logid as varchar)
        when 1 then '000000000' + cast(logid as varchar)
        when 0 then '0000000000' + cast(logid as varchar)
    end as logid
    ,logtext
from cachedb.dbo.logs;[/code]
It’s perfectly valid, it works, and there’s nothing inherently wrong with it. But I find it a bit tough to read, and it could become trouble if the format changes later, as additional (or fewer) cases will have to be accounted for. Fortunately, the day I received this procedure was right around the day I learned about the REPLICATE() T-SQL function. Maybe we can make this simpler:
[code language="sql"]select replicate('0', 10 - len(cast(logid as varchar))) + cast(logid as varchar) as logid
    ,logtext
from cachedb.dbo.logs;[/code]
Not bad. But it leaves us with a magic number, and as in the previous example, if the file format changes we have to seek out these magic numbers and fix them. This is easily remedied by defining these field lengths at the beginning of the procedure, so that they're all in one place if anything needs to change.
[code language="sql"]-- Method 2
declare @paddedlength int = 10;

select replicate('0', @paddedlength - len(cast(logid as varchar))) + cast(logid as varchar) as logid
    ,logtext
from cachedb.dbo.logs;[/code]
Yet another approach would be to pad out the value beyond what we need, then trim the resulting string back to the required length. Again, we have to be careful to not leave ourselves with magic numbers; the solution is the same as when using REPLICATE():
[code language="sql"]-- Method 3
select right('0000000000' + cast(logid as varchar), 10) as logid
    ,logtext
from cachedb.dbo.logs;

-- Or, with more flexibility/fewer magic numbers
-- Method 4
declare @paddedlength int = 10;

select right(replicate('0', @paddedlength) + cast(logid as varchar), @paddedlength) as logid
    ,logtext
from cachedb.dbo.logs;[/code]
All four methods yield the same results, as far as the data itself is concerned. But what about performance? For a table with 523,732 records, execution times were:
Method 1: 2,000ms CPU time, 261,785ms elapsed
Method 2: 2,265ms CPU time, 294,399ms elapsed
Method 3: 2,000ms CPU time, 297,593ms elapsed
Method 4: 2,078ms CPU time, 302,045ms elapsed
Each method had an identical execution plan, so I’m probably going to opt for the code that’s more readable and maintainable – method 2 or 4.
As with any tuning, be sure to test with your own data & queries.
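One easy way to collect CPU and elapsed times for each variation is SET STATISTICS TIME; here's a minimal sketch using Method 4 (the numbers land on the Messages tab in SSMS):
[code language="sql"]SET STATISTICS TIME ON;

declare @paddedlength int = 10;

select right(replicate('0', @paddedlength) + cast(logid as varchar), @paddedlength) as logid
    ,logtext
from cachedb.dbo.logs;

SET STATISTICS TIME OFF;[/code]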
I’m working with a number of SQLite databases as extra data sources in addition to the SQL Server database I’m primarily using for a project. Brian Davis (b|t) wrote a blog post a few years ago that covers setting up the connection quite well. In my case, I’ve got nine SQLite databases to connect to, and that gets tedious. PowerShell to the rescue!
PASS Summit is nearly upon us. I’m excited to be attending my second Summit in Seattle and cannot wait to get there to see everyone. With one Summit and a few SQL Saturdays under my belt I’ve got a laundry list of things and people I can’t miss, and very little time to pack it all into.
The greatest part of Summit (and SQL Saturday) for me is meeting people and exchanging ideas. If you haven't experienced it, #SQLFamily is amazing. When I reached the convention center two years ago, the first feeling that hit me was "I finally found my people!" We're all friendly, I swear. Just say "hi, I'm <your name here>." I guarantee you will find people who are into the same stuff you're into, and I'm not just talking about SQL Server. Music, dance, outdoor activities, all kinds of stuff. We have a common thing that brought us together, but that's not what keeps us together. It is an amazing community and it just keeps getting better. On Sunday, as you're decompressing from the event and travel, you will miss these people who you didn't even know a week before.
You can even connect strangers with common interests. In 2012, I met someone over a power outlet who asked if I'd done anything with a particular piece of hardware and what I thought of it. Turns out that I hadn't, but I knew that a former co-worker was also in attendance and he had used the hardware, so I gave them each other's contact information.
Ping me on Twitter, find me at one of the places/events listed below, grab breakfast or lunch with me in the dining hall, or if you think you see me passing in the hall (there's a picture on my Twitter profile), say something (and if it's not me, you'll meet someone else, which is still awesome). Maybe we can even grab dinner at the airport on Friday evening.
Get on Twitter
So many things happen at Summit which are announced and/or organized via Twitter. The main hashtag to follow is (I think) #summit14 but once you hit the ground you’ll start figuring out who and what to follow to get all the dirt.
I’m arriving in Seattle late Tuesday morning and doing some sightseeing before checking into the hotel and Summit late in the afternoon. Then it’s off to the welcome reception. The first of several visits to Tap House Grill may be in order too.
Wednesday starts dark & early with #SQLRun at 6 AM. I had a great time getting a 5K in before dawn at my first Summit and I’m sure this one will be great too. Don’t forget to bring the right gear; it’s pre-dawn and right now the forecast is for 50°F and rain (in Seattle. Go figure).
Aside from the sessions throughout the day, I'll probably be found in the Community Zone. I'll also be serving as an Ambassador helping to direct people to the dining hall for lunch, posted outside room 4C, so stop by and say hi.
Once we’ve wrapped up there, I’ll go wherever the wind may take me; probably back to the Tap House.
Thursday is my light day at Summit. I don’t have any sessions double-booked and the only event I really need to catch is the Argenis Without Borders folks in their fuzzy rainbow leggings.
Thursday evening I’ll be at the Ray Gun Lounge for Table Top Game Night. I’m looking forward to getting to know folks there and learn some new games. We don’t play a lot of table top games at home and I’d like to change that.
Lots more sessions on Friday, plus winding everything down. By the afternoon, I’ll probably be beat and just trying to rest at the Community Zone.
I fly out late Friday night, so I’ll be trying to find dinner somewhere between the convention center and airport. I’ll probably kill a lot of time in the terminal by wandering around, playing Ingress.
At my first Summit, I learned a few lessons about what to take and what not to take. The most important thing to bring: empty space for all the stuff you’ll bring home. SWAG from the exhibitors, souvenirs, books and more. Next most important: power! Electrical outlets are few and far between, and there will be 5000 people vying for them to top off their phones and tablets. A quick rundown of some of the stuff that might not be obvious to bring (or easily forgotten) that I’m packing:
Small (1 pint) widemouth water bottle. I’m partial to this Nalgene bottle I got at a 5K earlier this year.
NUUN electrolyte tabs. Water gets boring after a while. These will help you stave off SQLPlague (don’t forget your vitamins too!).
Comfortable shoes. You’ll be on your feet a lot and walking even more; the convention center is big. Not to mention the evening activities.
A small notepad for taking non-session notes – phone numbers, names, etc. I love my Field Notes notebook.
A larger notepad for taking notes in sessions. Oh, and don’t forget a pen or three. I’ve tried doing notes on a tablet and on a computer, and it just doesn’t work as well as paper & pen for me. Bonus: no batteries!
Hand sanitizer. Because when you have 5000 people in one place, germs get around in a hurry no matter how careful you are.
A good wall charger for your devices. I found myself short on chargers last time and had to buy one at Radio Shack. It didn't cut it. This one has two USB ports that charge at 2.1A, which will give you a good boost when you get near a plug, and you can share with a friend. It'll also recharge pretty much anything while you sleep. Best of all, it's really compact.
A good external battery pack. Matt Slocum (b | t) got me hooked on the Anker E5 15000 mAh battery. Two ports so you can share with a friend, and it'll recharge most phones 4-5 times from a completely empty battery.
Plenty of USB cords to go with both of the above.
Business cards! I ordered mine at the office too late last time and had to get some made at Staples in a pinch.
A small, light backpack to carry all of this in (well, not the shoes). Session rooms get cramped, so carrying a big pack can be a pain.
A lock code on my phone and tablet. I normally don’t use one but at any large gathering like this, it’s better to be safe.
A list of the people I need to see/find/meet/reconnect with.
This Summit is going to be a blast. I cannot wait. There are only two things I don't look forward to:
This past weekend I made the journey to Cleveland, OH (Westlake, actually) for SQL Saturday #241. I've attended two local SQL Saturdays in the past (helping organize/run one), but seeing the list of speakers and knowing a few of the local chapter members, I couldn't pass up the opportunity to visit.
I packed my bags and hit the road. It’s about a 300 mile trip so I gassed up, settled in with my backlog of podcasts and set the cruise control. The drive was almost zen-like. The sky was clear and the sun shining (almost painfully bright) and I don’t recall the last time I took a road trip solo. It was a very relaxing drive.
After arriving at the hotel and settling in, I went out for a bit to stretch my legs and get some fresh air before searching for dinner. Forgetting that most people coming from out of town are speakers and there’s usually a speaker’s dinner on the Friday night before SQL Saturday, I had trouble finding people to meet for dinner. Ultimately I met up with Travis Garland (t) and his team at the Winking Lizard in downtown Cleveland after a 20 minute search for parking. I wasn’t able to stay too late as I wanted to be up early on Saturday.
6 AM is pretty early when you’re on the road. I pulled myself out of bed, packed up my stuff, got ready & headed down the road to Hyland Software, the venue for the event. The place is set up perfectly for a SQL Saturday. Great layout, a good amount of space, and terrific technical facilities.
After getting registered, I made the rounds of the vendor tables & caught up with several of the people I was looking forward to seeing again, including Karla Landrum (t), Hope Foley (b|t), Allen White (b|t), and Kendal Van Dyke (b|t). While I was at it, I managed to meet one new person, Wendy Pastrick (b|t) – so immediately the weekend was a networking success (one of my goals was to meet & talk with at least one new person this weekend, and I was up to 5 by 8:15 AM on Saturday). After the opening remarks from Allen, Thomas LaRock (b|t) and Adam Belebczuk (b|t), it was time to get our learn on.
Session 1 – Query Performance Tuning: A 12 Step Program
Query performance is always a concern for DBAs, developers and users alike. Thomas & Tim Chapman (t) presented a great series of steps to take while investigating slow queries, and did it in a very engaging way. Some of these techniques I was already familiar with, but there was plenty of new material in here as well. The biggest surprise to me was that they put examining execution plans halfway down the list – 5 stages of investigation before even looking at the plan! The execution plan is usually one of my first stops in trying to optimize queries, but I’m going to adjust that thinking going forward.
Session 2 – Writing Better Queries with T-SQL Window Functions
Confession time: this was my “backup” session. I had planned on attending Grant Fritchey’s (b|t) Statistics and Query Optimization session, but by the time I got to the room it was standing room only with a line out the door. So I crossed the hall to this session, presented by Kathi Kellenberger (t). I’d heard about window functions before, and seen them in blog posts, but had no idea what they were really good for. Kathi presented the material in a very clear, relatable way and by the end of it my mind was racing, trying to find scenarios at work where I could put them to use. I’m very happy that I landed in this session – lots of great stuff to work with. It’s rare that I sit in a session where I know nothing coming into it and feel like I can apply the knowledge gained immediately. This was one of those sessions – highly recommended if you see Kathi presenting at another PASS event.
Session 3 – Code Management with SQL Server Data Tools
I’ve dabbled with SSDT a bit, but never been able to put it to good use for a variety of reasons. Lisa Gardner (t) presented this session and provided good information around some of the things that I’m trying to implement better in my environments – namely automated deploys and keeping environments in sync. I’m not sure when exactly I’ll get to use all of it, but now I know more about what situations call for it and a start down the path of using it to its fullest potential.
The provided lunch was good: a self-serve taco bar with plenty of seating. There was room to eat in the common areas, but several sponsors had presentations in the training rooms and I took a seat with SQL Sentry. I've been using their Performance Monitor and Event Manager products for a couple years now, but always forget about Plan Explorer, so I soaked up the demo.
Session 4 – Discover, Document & Diagnose Your Servers on Your Coffee Break
I’ve been working with Kendal for a while and I was eager to see what SQL Power Doc was all about. I came for the PowerShell and stayed for the amazingly detailed documentation of any SQL Server environment. I lack the credentials to run it at work, but I know that it’s being run on a regular basis on our network. It’s everything you wanted to know about your environment but didn’t know it could be documented. The volume of information it churns out is amazing. Note to self: run SQL Power Doc against my computers where I have dev instances set up.
Session 5 – Spatial Data: Cooler Than You’d Think!
I’ve wanted to see this session since I first heard about it last year. I have a project I’ve been working on for a bit that has a lot of GPS data in it and I was eager to learn from Hope what else I could/should be doing with it, as well as validating that I was already on the right track. I got some good ideas here with regard to how my tables are defined and what options are available for mapping the data I’ve captured – something that I hadn’t even considered yet.
Session 6 – Making the Leap from Profiler to Extended Events
I met Erin Stellato (b|t) at PASS Summit 2012 during #SQLRun, and her session about DBCC was the first regular session I attended that week (portions of which were way over my head – not an unexpected phenomenon). So I had to check this session out. I don’t do much with Profiler but I know it’s going away and I want to be ready. This session was a great introduction to XE without getting into too much complexity, and she did a great job of showing off some of the aspects of XE that make it so much better than Profiler.
Raffle drawings! I continued my non-winning streak.
After wrapping up at Hyland Software and dropping all my stuff at the hotel, I headed over to Dave & Buster’s for the after-party. Some game credits were provided, but I was there more to talk to people than play games (aside: I’m really bad at meeting people & “working the room.” I’m better at it than I used to be, but it’s still something I have to work very hard at). I spent most of the event camped out at a table with a rotating cast, but I left without catching up with everyone I wanted to speak with.
After Dave & Buster’s, a few people were headed to the “other” hotel for a few hours of Cards Against Humanity & invited me to join. I’d never played before, and did pretty poorly, but it was a hell of a time. I just wish I was staying at that hotel so I didn’t have to worry about making the drive back to mine.
After being out until 1 AM, my 7 AM alarm sounded way too early. I slowly rolled out of bed, packed up & prepped to hit the road. In sharp contrast to my drive to Cleveland, the drive home was looking a bit rough with lake-effect snow on the radar until my halfway point. Of course, as I approached that point I checked again and saw that snow would follow me all the way home. It was slow, treacherous going in places but I managed to stay on the road and got home without any drama.
Everyone involved with this SQL Saturday did an amazing job. Terrific venue, terrific speakers, and everything seemed to go off without a hitch. If you haven’t been to a SQL Saturday and there’s one coming up near you, take advantage of the opportunity to attend. Just being around the PASS community can stir up or re-ignite a passion for all things data-related. And as always, the biggest thing I learned this weekend is that there are so many more things out there for me to learn.
I brought a backpack and PC with me to collect swag and possibly fiddle with a project while I was at the event, but it just ended up being a lot of dead weight I carried around all day. Next time, I’m leaving the computer at home and replacing it with a big battery pack for my phone.
I can't wait to make the trip to Cleveland again next year – in the meantime, there are SQL Saturdays #302 & #303 this summer, which I'll be attending.
I don’t think I’ve ever had a Windows Update go bad – until this week.
I recently upgraded to Office 2013 and late Monday afternoon, decided to check in with Windows Update prior to our company’s normal monthly patching to see how bad the damage would be. Nearly 1 GB of updates, thanks to my fresh Office install. But there were also a couple optional updates, including a .NET Framework update. So I figured I may as well go ahead and do everything while I was at it. This way I could control the reboot instead of being forced into one in the middle of important tasks.
Tuesday morning, I got a call asking if we were having any issues with one of our key systems. I fired up my trusty SQL Sentry client to check things out. And failed. I couldn’t connect to the server to start monitoring. Never a good sign. Then I tried SSMS 2012. Again, couldn’t connect to any servers. I got the following error:
Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
That sounds pretty ominous. Try a few more times, no luck. Reboot, no better. Uninstall SQL Sentry Client and attempt to reinstall – still nothing. Things were going from bad to worse in a hurry.
I bounced the error messages off Kendal Van Dyke (b | t) and he suggested that it might be an issue with the SQL Native Client. So I reinstalled that from the SQL Server 2012 Feature Pack. And still, I couldn’t connect. I even tried PowerShell (v3), both the SQLPS module and SqlServerCmdletSnapin100 Snap-In and got the same errors SSMS 2012 threw out.
Taking a deep breath and stepping back, I started reviewing the situation. What do all of these have in common? They’re all using the latest (or at least a fairly recent version) .NET Framework. Let’s take a step back and try something older. SSMS 2008 R2 – works fine. Start up PowerShell v2 – that works too. Now we’re onto something!
The Framework update I installed was KB2858725, bringing it to version 4.5.1. My first thought was that maybe the installation was botched somehow, so I downloaded the installer and tried again. But to no avail. So I uninstalled it entirely and reinstalled 4.5 completely. This finally did the trick.
Due to other commitments, I haven’t had a chance yet to re-try the 4.5.1 update, but I’m also in no rush. While it won’t take me 4+ hours to troubleshoot and fix if it breaks things the next time around, I need my system stable for the next few weeks so I won’t be trying again until 2014.
My first official entry for T-SQL Tuesday (my first was a guest post hosted by Kendal Van Dyke (b|t), so I’m not really counting it) is brought to you by PowerShell, or PoSH. Ever since I discovered PoSH and really dove into learning it a couple years ago, my co-workers have gotten a bit annoyed by my insistence upon using it for everything. It is my favorite hammer, and around me I see nothing but acres and acres of nails.
I’m not a DBA, so I don’t do as much managing of databases with it as most people joining this party, but I still use PoSH with SQL Server pretty often. I spend a lot of time pulling data from SQL Server & doing crunching/analysis, sometimes merging it with some filesystem data as well.
So what have I done lately?
Space usage analysis. I work with a system which generates PDF documents, saves them to the server filesystem, and records the details to a table. But how quickly are we consuming space? When will we run out? I whipped up a quick PoSH script to pull the details off the table, then locate the files on the filesystem, and record everything to another table for slicing & dicing.
Quick ad-hoc data dumps. Sometimes we just need to pull some data out of the database, crunch a few numbers, and send it off to someone upstairs. Before PoSH, I’d run the query in SSMS, copy the data, paste it into Excel, drop in a few formulas and maybe a graph, and be done. But I’d spend more time fighting Excel on formatting & getting the columns right than I did getting the data into it. Invoke-SQLCmd piped to Export-CSV solves that really quickly.
I’ve been working on upgrading a system we purchased from a vendor and migrating everything to a new server at the same time. Moving & updating XML configuration files, thousands of check images, restarting services, migrating databases. And this isn’t a one-time event – we have to do this over 200 times! The SQL Server portion of this process isn’t particularly involved, but it’s vitally important:
After most of the heavy lifting of moving things around is complete, one table has to be updated to point at the new path for the images that were migrated.
When all of the migrations are finished, we have to validate that everything moved over properly. A few carefully-selected queries to compare critical tables between the old version of the database and the new version and minds are put at ease that all of our data has come over cleanly. Those queries, along with the other components of the validation, are run via the PoSH script & output to a file for review.
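For illustration only, here is a minimal sketch of the sort of comparison involved; every database, table, and column name below is hypothetical, and the real validation covers several tables:
[code language="sql"]-- Rows that exist in the old database's table but are missing or different in the new one.
-- The image path column is deliberately left out, since it changes as part of the migration.
SELECT AccountNumber, CheckNumber, Amount
FROM OldVendorDB.dbo.CheckImages
EXCEPT
SELECT AccountNumber, CheckNumber, Amount
FROM NewVendorDB.dbo.CheckImages;

-- Quick row-count sanity check on the same table
SELECT
    (SELECT COUNT(*) FROM OldVendorDB.dbo.CheckImages) AS OldRows,
    (SELECT COUNT(*) FROM NewVendorDB.dbo.CheckImages) AS NewRows;[/code]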
For reporting purposes, we load a small subset of data from Active Directory into a pair of SQL Server tables on a nightly basis. Previously, it was only 2 fields but recently this has been expanded to about 8. Once again, PoSH to the rescue! Pull the AD user accounts, select the properties we need, and insert it all into the tables. Originally I used my old standby Invoke-SQLCmd, but with the addition of new fields I got concerned about building queries via string concatenation using arbitrary data retrieved from another system. System.Data.SqlClient & prepared statements to the rescue! It’s more code, but it’s a lot safer.
Dave Ballantyne reminded me that I have two PowerShell scripts for SSRS.
The first is based upon this script from Randy Aldrich Paulo and deploys reports into SSRS. My addition was the option to pull an RDL file straight from our Subversion repository instead of the person doing the deployment having to check it out.
The second is based upon this StackOverflow post, and is used to render reports to file. In one case, I then SFTP the generated report to a vendor immediately after.
These are pretty mundane compared to what I’m sure a lot of people participating will be posting, but the key is this: PoSH is saving me time and by scripting everything I can, I’m significantly reducing the chance for errors. Of course, when there is an error, it’s magnified tremendously – so it all gets tested against a non-production server first.