Automated Database Deployments - Getting Buy-in
Automated database deployments are rather disruptive, but disruptive in a good way. Depending on the company, database deployments can be a "wild west" process where every team has its own way of deploying changes, and getting any sense of consistency is difficult. When I was working on this process, I learned the hard way that getting buy-in from key people was critical. In some ways it was more critical than the automated database deployment process itself, because without buy-in the process would wither and die very quickly.
Getting Started
There are many advantages to working for an enterprise. A lot of things, such as keeping the web servers up and running, maintaining the network, and making sure the SQL Servers have enough disk space, are off-loaded onto other teams of people. This is great because it lets me focus on development. Inserting a new process, especially a highly disruptive one, is one of the areas where working in an enterprise is more difficult. This process affects a lot of people. It is just the way it is.
The key here is perseverance and attitude. Or as I like to call it, "don't be a dick." People are going to push back because this is "moving the cheese." This is another one of those times where I wish I had known then what I know now. I really thought I would help put the process together, everyone would see how great it is, and they would adopt it quickly. I did not think about getting buy-in from everyone because I thought everyone would see the benefits right away. At one point, before the process was fully adopted, I got a lot of push-back from a couple of key people. I responded poorly. As in, I acted like a child who didn't get to play with their favorite toy. Attitude can make all the difference, and mine was piss poor. Thankfully this didn't delay the adoption too much, but I got very lucky. It could have been a disaster.
Job Roles
Here are the key job roles I (and the people helping me) focused on getting buy-in from while we were creating the process.
- Database Administrators (DBAs): responsible for ensuring all database servers are up and running at all times. They approve and deploy all database changes to pre-production and production.
- Database Developers: handle all the difficult database work and/or mentor application developers. One day they may create a view and the next day they might performance tune a stored proc.
- Application Developers: write the code and work on some of the database changes with input from the database developers.
- Leaders: basically your boss, your boss's boss, or your boss's boss's boss.
Database Administrators
At Farm Credit Services of America, we have over 10 development teams, and the DBAs have to support every one of them. Before automated database deployments, there were 10 different ways teams asked the DBAs to deploy changes to production. Some teams put scripts in a folder in a specific order:
- Deploy Folder
  - 001Script.sql
  - 002Script.sql
  - 003Script.sql
Other teams created a master deployment script and had the DBAs run it from sqlcmd. It was very difficult to keep track of when a change went to production. A script might change a stored proc that is called by a SQL job, and if any SQL job fails, the DBAs are paged and have to investigate.
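To give a sense of what the DBAs were handed, here is a minimal sketch of that kind of master script. The file, server, and database names are hypothetical; the only real mechanics are sqlcmd's `:r` include directive and `:ON ERROR`, which work when the script is run in SQLCMD mode (for example, `sqlcmd -S SomeDbServer -d SomeDatabase -i MasterDeploy.sql`).

```sql
-- MasterDeploy.sql: a hypothetical, hand-maintained master deployment script.
-- Stop the whole deployment if any included script fails.
:ON ERROR EXIT

-- Every change script has to be listed by hand, in exactly the right order.
:r 001Script.sql
:r 002Script.sql
:r 003Script.sql
```

Keeping a script like this accurate by hand is exactly the kind of bookkeeping that falls apart as the number of changes and teams grows.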
From the DBAs' point of view, automated database deployments solved the following problems:
- Consistency across all teams in how deployments are done.
- An audit record of when something went into production.
- An easy way to promote database changes to each environment via Octopus Deploy and Red Gate SQLCI.
- Consistent deployments between all environments, decreasing the chance they get paged because of a failed job.
- More time to focus on the important things a DBA should be focusing on.
Database Developers
Prior to automated deployments, this was the process for getting changes promoted to production:
- Keep track of all changes made during a project (could be one week or three months)
- Run Red Gate SQL Compare between environments to find schema differences, being sure to select only the items that actually changed. Generate a script from that tool.
- Run Red Gate SQL Data Compare between environments for any static data, being sure to bring along only the changes that belong to this deployment. Generate a script from that tool.
- Place all the necessary scripts into a shared folder
- Notify the DBAs of the shared folder
- Practice running the scripts on a day-old copy of production. If anything breaks, the fixes have to be applied to all environments.
- Have DBAs manually run the scripts in production on the night of the deployment.
Tedious and prone to error. One of the databases my team is responsible for has over 6,000 objects in its schema (stored procs for all the CRUD operations add up fast). It is unfair to expect one person to keep track of every change made. Ask a developer, "What code did you change three weeks ago?" Why would we make a database developer remember the same thing? Why not ask them to memorize the complete works of Shakespeare while you're at it?
For database developers, automated database deployments provide:
- Significantly reduced manual script generation, lowering the chance of error.
- A history of database changes in source control.
- Fewer tedious steps, freeing them up to focus on important database developer work like performance tuning and table creation.
Application Developers
Our automated database deployment process has each developer run a local copy of the database. This basically made each developer a lite DBA. Their cheese moved a lot in this process.
Most application developers don't suffer from the same problems as database developers or DBAs. This is where I really had to change my approach. My key areas of focus were:
- A local database means I can mess it up without affecting anyone else; I don't affect testing or other teams.
- Local databases support branching, so I can try out database changes. If they don't pan out, no problem; I just abandon the branch and revert the changes.
- All database changes and code changes can be checked in at the same time and deployed at the same time. Previously we would make changes on the database server, then make the code work with the change, then check in the code. It could take anywhere from an hour to a day before the code was ready to support the database changes. Now that time is down to a minute.
This is a major mind-shift for most developers. That's okay. Anything worth having is hard. This is a great area where having a good attitude really goes a long way. There was a lot of mentoring going on, because the skill set for running and maintaining a SQL Server, as well as for making database changes, varies greatly among developers. The key here was reaching critical mass, where the number of developers who knew the process could support teaching the number of developers who still needed to learn it.
Leaders
My approach changed significantly when getting buy-in from leaders. They don't have to deal with the process down in the weeds like the other job roles. My selling points boiled down to two things: cost and time.
Let's look at cost first. These tools cost money, and in some cases there is sticker shock. The Red Gate SQL Toolbelt costs $2,745 per developer at the time of this writing, with price breaks at 5 and 10 licenses; buy 10 or more and the cost is $1,922 per developer. Octopus Deploy costs anywhere from free to $5,000. To outfit a team of 10 developers, you are looking at roughly $20,000 added to the budget (10 * $1,922 = $19,220 for the Red Gate licenses alone, before Octopus Deploy).
My leader was great because she knew we would see an ROI on this process rather quickly. This was a big help in getting other leaders on board. Below are, in a nutshell, the selling points we used.
Each time we deployed the system with over 6,000 database objects, we inevitably had one or two changes that didn't go out. Each time that happened, production support tickets came in. It takes time to figure out why a bug is happening; let's say it takes an average of two hours to find and fix one. Let's low-ball a developer's salary and say their average pay is $30 an hour.
2 Bugs * 2 hours * $30 = $120 in fixes per deployment
But that is not all; depending on the bug, it could affect a number of people. We have 700 users, with anywhere from 100 to 200 in the system at any given time. Let's say the bugs only affect 10% of the active users, call it 10 people, and that for two hours each of them is unable to do their job properly.
2 Bugs * 2 hours * $30 * 10 users = $1,200 in lost time per deployment
Now assume a deployment happens every 2 months, or 6 deployments a year. The math now looks like this:
2 Bugs * 2 hours * $30 * 10 users * 6 deployments = $7,200 a year in lost time.
I will caveat that this math is speculative; I can't give hard numbers (sorry). But it is easy to see how the costs add up quickly. I have a feeling I might be low-balling my estimate. And that is just for one application; most teams support more than one.
One other item to consider: if a developer has to do production support, they are not working on the new features needed to meet deadlines.
Conclusion
The effort of getting buy-in is well worth the reward. In this case, it meant people actually using automated database deployments. Since my team adopted this process, we haven't had to do an emergency push to production because we forgot some random database change. It has freed us up to focus on more important work. To me, that is worth all the effort it took.