Copyright 2017 Jason Ross, All Rights Reserved

Whatever development method you use, eventually your software will need to be deployed to your production environment.

It’s a scenario that occurs in every company with a software development team: the software is declared to be finished and ready to be deployed from development into production. The deployment scripts and installers are ready (if you’re not using installers, that’s a different set of problems entirely), and there is an air of tension around the team responsible for the deployment. That tension is the first serious warning sign, and you should take notice of it.

How is the software that's installed on your systems built? Do your developers manually build every version on their machines, or do they use a dedicated build system? If you don't know, ask your development manager to show you the latest build on the build system. If they're not sure, ask one of your senior developers; ideally their answer will be along the lines of "Which build server do you want to see?".

If they can't show you the build system, ask them whether they're using Continuous Integration, or CI, to build the software. If they're not, ask them why not; bear in mind there is NO right answer to this question!
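The cost of starting with CI is lower than most teams expect. As a purely illustrative sketch (the article doesn’t name any particular tool), a hosted CI service such as GitHub Actions can rebuild and test the code on every push with a few lines of configuration; the `make` targets here are hypothetical stand-ins for your real build and test commands:

```yaml
# Hypothetical CI workflow (GitHub Actions syntax): every push triggers
# the same build and test steps on a clean machine, so "it builds on my
# laptop" never reaches production.
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build   # compile the code
      - run: make test    # run the automated test suite
```

The point is not the specific tool; it’s that the build happens the same way, on a machine nobody develops on, every single time.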

Developing software is a process that involves a lot of repetitious work. Building the code, updating configuration files, creating database change scripts, unit testing, deploying and integration testing are all tasks that are repeated many times during development.

Whatever anyone tells you, none of these tasks are exciting; in fact most of them are tedious and error-prone. That’s why so many developers automate them: once an automated task runs correctly, it keeps running correctly. Remember, computers are faster and more reliable than people. Automating these parts of the development process is a solved problem.
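As a minimal sketch of what this looks like in practice, consider one of the repetitive tasks above: updating configuration files per environment. The template fields and environment values here are hypothetical, but the principle holds — generate the files from one source instead of editing them by hand.

```python
# Minimal sketch: generating per-environment config files from a single
# template, so the "dev" and "prod" configs can never drift apart by a
# hand-editing mistake. All field names and values are hypothetical.
from string import Template

TEMPLATE = Template("db_host=$db_host\nlog_level=$log_level\n")

ENVIRONMENTS = {
    "dev":  {"db_host": "localhost",           "log_level": "DEBUG"},
    "prod": {"db_host": "db.internal.example", "log_level": "WARNING"},
}

def render_config(env: str) -> str:
    """Produce the config text for one environment, identically every time."""
    return TEMPLATE.substitute(ENVIRONMENTS[env])
```

Run it once correctly and it stays correct; the same cannot be said of a developer editing two files at 6 p.m. on release day.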

Why is it then, that the tests run by so many testers and QA departments are manual?
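The same automation applies to testing itself. As a hypothetical sketch, a manual test script step like “enter 2 and 3, check the total reads 5” becomes a check a machine can repeat on every build; `add()` here stands in for whatever code the manual script exercises.

```python
# Hypothetical example: one line of a manual test script turned into an
# automated check. add() is a stand-in for the real code under test.
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5      # the happy path from the manual script
    assert add(-1, 1) == 0     # edge case a bored human might skip
    assert add(0, 0) == 0
```

A test runner such as pytest would pick this up automatically; the CI system can then run the whole suite on every commit.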



The overall performance of systems is often an afterthought. Developers spend months building a system, often muttering things like “we have plenty of CPU”, “we can add more servers”, “parallel processing is more trouble than it’s worth”, or the even more popular “premature optimization is the root of all evil”. After all, Donald Knuth said that last one, and he was quoting Sir Tony Hoare, so it must be right!

The quote is correct; it’s just taken out of context. Knuth’s full remark continues: “Yet we should not pass up our opportunities in that critical 3%.”

If you are responsible in any way for getting the system into production, you need to stamp on this attitude quickly, before complacency turns into the acceptance of mediocrity.

Assuming you missed your chance to do this, at some stage the system is declared complete because it meets the customer’s functional requirements. It also runs slowly. Pathetically slowly. Maybe users think it has crashed (apparently your system has seven seconds to respond before this occurs), or they are faced with actual crashes, graphs and charts that take minutes to draw, progress bars, loading screens, “(Not responding)” warnings in window title bars, or any number of other indications that at least some of your architects and developers are incompetent.

When you ask why the system is slow, you should expect to be told that it’s because “the algorithms are really complex”, “there’s a lot of data to process”, “those graphs/charts take a long time to render” and various other excuses. These statements are almost always, shall we say, inaccurate at best.

So now, in addition to your other problems, you need to make sure the system runs at a reasonable speed. How do you do this?
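A reasonable first step is to measure before changing anything, so the excuses can be checked against data rather than argued about. As a hypothetical sketch using Python’s built-in profiler, `slow_report()` stands in for whatever code is under suspicion; a real investigation would profile the actual system.

```python
# Minimal sketch: checking a "the algorithms are really complex" claim
# with a profiler instead of taking it on faith. slow_report() is a
# hypothetical stand-in for the real code under suspicion.
import cProfile
import io
import pstats

def slow_report(n=200):
    # Deliberately quadratic work, standing in for the real system.
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_report()
profiler.disable()

stats = pstats.Stats(profiler, stream=io.StringIO())
stats.sort_stats("cumulative")  # hottest call paths first
```

The profiler’s output shows where the time actually goes, which is very often not where the “complex algorithms” excuse claims it does.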