For a software development professional, one of the earliest truths you learn is that estimating the delivery effort for a new system is surprisingly hard. A project manager at IBM once told me, back in the nineties, that the difference between a senior developer and a junior one can be found in the finely tuned padding he or she adds to their estimates. This is not to say that you won’t quickly develop a sense for the amount of work needed if the subject domain is in your field of experience, but what you learn over time is how much to add for what wasn’t specified explicitly. Frederick Brooks (of The Mythical Man-Month fame) called this the difference between essential complexity and accidental complexity. It leads to the familiar frustration among developers that, even though their estimates were spot-on, the system was nevertheless late, and they get blamed for the discrepancy.
An often-heard remark, especially at management level, is that software development needs to get “industrialized”: just like the production of cars and kitchen appliances, writing software needs to become standardized to the point that estimates necessarily improve. This is related to the idea that a higher level of maturity in the software development process will automatically cause costs to come down. What really messes up these expectations, however, is that, in a very fundamental way, producing software is not like producing a car.
From Appliances to Applications
When I started at university, computing was at the start of the PC revolution. Until then, if a company bought a word processor, it bought an appliance rather than an application. The term “word processor” in those days referred to an actual machine rather than just the software, and companies had the choice between buying that or a typewriter. Mind you, there were pretty fancy typewriters in those days. If you bought a second typewriter a year or so later, you might not be able to get the same model, but the functionality you got was determined by the machine. “Upgrading” was usually a matter of buying a newer model typewriter, not getting a new program for your old one.
This is how it works with appliances: you buy a machine that is purpose-built to perform the requested function. What we mostly see nowadays is a generic computer on which you run different applications to let it perform different functions. Initially, computer manufacturers called these “development systems”: systems you use to develop an appliance. When the application is finished, you buy a simplified computer that contains only the hardware needed for your application, and package the two together to create an appliance. This is still quite common for simple household appliances, and on a somewhat larger scale it also happens with, for example, medical systems. Even so, the computer controlling the diagnostic equipment in a hospital might be a recognisable PC. It might even be a standardised model from one of the major PC brands.
So here’s the issue: can a hospital tell the manufacturer: “Hey, we just updated our infrastructure and are standardising on brand X PCs with this model processor, that much memory, and such and such Operating System. Please make sure you do too.” If you are in the peripheral development business, as medical appliance manufacturers essentially are, chances are you will want to convince the customer not to ask that. Why? Because every extra platform to support means a greater essential complexity (being able to support more variations in the hardware) and an even greater accidental complexity, due to the added dependencies on other vendors and all the bugs and peculiarities their products contain.
Wash, Rinse, Dry, and Repeat
If you want to efficiently, and predictably, produce software, you want as little change as possible in as many of the requirements as possible. You want exactly the same piece of software again? No problem, I can give you a fresh copy in a jiffy. This is what we do with cars, coffee makers, and computers. Manufacturers can probably tell you to the minute (or even the second) how long it will take to build one. Want a different color? No problem, we have every color in stock, it won’t make a difference. You want leather seats? Shouldn’t make a difference either: the longer lead time for the seats just means we order them earlier, but still during the car build, so no problem. You want the model with all the bells and whistles on the menu? We know the time to add for each; it’ll take just a bit longer, but we can still come up with the correct number.
Ok, so do me an estimate for an X-57 fighter plane. Come on, you’ve done 56 already, so why the hesitation? Don’t know the requirements? That’s easy: it should out-fly the latest Chinese and Russian jets, be acceptable for both Navy and Air Force, and let’s throw in the UK Royal Air Force as a potential customer just to sweeten the deal. Oh yeah, let’s not forget we want it to be fuel efficient and cheap, using the latest state-of-the-art technology in radar avoidance, and built using composite materials, because that sounds good and will convince Congress it’s high-tech. I made it easy for you, because I left out all the weaponry; we can add those later on, right?
So how much trust can you put in the estimate for the X-57? And yes, I am implying that the same thing happens every day in software development. You want that car estimate? It’s still there: nice and repeatable, so good estimates. Why isn’t writing an application more like making a car? The short answer is that each subsequent software development project is about a different model. You’d reuse parts that are still state-of-the-art, but you’re building something nobody has built before, so who knows what you’ll encounter. It might even turn out that some parts are unusable: even though they were all right for the last ten builds, they just don’t fit the new requirements. And tools? You know that electric screwdriver will save you loads of time, but you didn’t account for the waits while the battery recharges, so the savings are less than you thought. You could get an extra battery, but then the costs would go up.
Well, just Manage it!
The difficulty of predicting software development effort is not a new issue. In 1975, Frederick Brooks published his classic The Mythical Man-Month, trying to convince the reader that nine women cannot produce a baby in one month. Even if the work can be divided among different people, the overhead of essential coordination and communication will increase. Ultimately, you end up with truisms such as “Adding resources to a late project will only make it later.”
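Brooks’ coordination argument is easy to make concrete: if everyone on a team of n people needs to coordinate with everyone else, the number of pairwise communication channels is n(n−1)/2, so overhead grows quadratically while hands-on capacity grows only linearly. A minimal sketch:

```python
# Pairwise communication channels in a team of n people, per Brooks'
# intercommunication argument: n(n-1)/2. Capacity grows with n; the
# coordination overhead grows with n squared.

def channels(n: int) -> int:
    """Number of distinct pairs in a group of n people."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 20):
    print(f"{n:2d} people -> {channels(n):3d} channels")
```

Doubling a ten-person team roughly quadruples the channels (45 to 190), which is why adding people to a late project so often makes it later.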
So what can we do? We can manage the problem. Get a project manager and let him or her run the show. A good PM will make sure the project’s goals and constraints are clearly defined, and that estimates are given their due level of uncertainty. With enough domain knowledge and insight into the team’s capabilities, the PM will add management buffers and prioritized requirements lists, as well as a bit of padding of his own. (Naturally, management knows this and shaves off a bit before approval, but that’s all in the game, as they say.) Then, while the project runs, the risks to the project’s goals are monitored, and change management is used to highlight where uncertainty resolves into a substantially different reality, the world decides differently than foreseen, or the customer simply changes his mind. Within the triangle of Scope, Schedule, and Resources, the PM will manage the project towards its conclusion, with all stakeholders in agreement that the result matches (the latest) expectations.
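One common way to give estimates “their due level of uncertainty” is three-point (PERT) estimation: ask for an optimistic, a likely, and a pessimistic number per task, and derive an expected value and a spread. This is a general technique, not necessarily what the PM above uses, and the task figures below are made up for illustration:

```python
# Three-point (PERT) estimation: expected = (o + 4m + p) / 6,
# stdev = (p - o) / 6. Task variances add, so the project-level
# spread is the square root of the summed variances.

def pert(optimistic: float, likely: float, pessimistic: float):
    """Return (expected duration, standard deviation) for one task."""
    mean = (optimistic + 4 * likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6
    return mean, stdev

# (optimistic, likely, pessimistic) in days -- illustrative numbers only.
tasks = [(2, 4, 10), (1, 2, 6), (5, 8, 20)]

total_mean = sum(pert(*t)[0] for t in tasks)
total_var = sum(pert(*t)[1] ** 2 for t in tasks)
print(f"expected: {total_mean:.1f} days, stdev: {total_var ** 0.5:.1f} days")
```

The point is not the formula itself, but that the PM now has a spread to buffer against, instead of a single number that will be read as a promise.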
So have we done anything to improve the quality of our estimates? Nope. Have we made software development cheaper? No, we have actually made it more expensive, because we’ve added work. What we did achieve was keeping the customer happy, because stakeholder management makes sure they know what we’re doing and why.
So hurry up and stop moving!
Because the accidental complexity of software can be so devastating to our deadlines, we should try to reduce it. However, we seem to keep adding to it faster than we can manage to reduce it. New platforms, new languages, new hardware: it all adds complexity, and thus costs. To make matters worse, the IT community is under a constant barrage of management actions to reduce costs in software maintenance, because that is software that’s already done, so you shouldn’t have to spend much time on it, right? Meanwhile, launch an application built with ten-year-old technology and you’re not taken seriously. Worse: if your apps are for mobile use, last year’s phones and tablets are “proven technology” and all their gadgets and gimmicks must be supported. Programming platforms, even languages, are invented in a steady stream with support for new hardware and new paradigms, while the industry struggles to implement them.
So, while we are forced to focus on stability, security, and low-cost maintenance, we are also asked to use up-to-date tools and technologies for new work. Granted, new development generally comes with new budget, but the added complexity of the newness increases the chances of scope changes (read: reductions) in favour of meeting the deadline, without removing the need for the dropped requirements. “We’ll pick that up after going live” is the usual approach, happily ignoring (or shall we say: not mentioning) the fact that maintenance is paid from the ever-tighter maintenance budget, and the real focus of attention will soon shift to the next project. That this new bit of maintenance sports a higher complexity and is built using completely different standards than the previous apps is also left unsaid.
Will nobody please think of the poor customer?
What’s all this gab about complexity and maintenance? Others are writing great stuff every day. I get loads of offers for the most fantastic apps and stellar productivity, so why can’t you do the same for me?
So, let’s think about the customer first. What do you value more: an accurate estimate for a half-year project, or knowing within two months whether that idea is actually going to work for the customer? Also, which project is more likely to get funding on the basis of a single PowerPoint presentation and a two-page budget description: a multi-million, multi-year one, or one that runs for a few months, costs a few thousand, and will fail fast if unsuccessful? Consider: the first one will cost as much as the second just to achieve the same level of confidence in its success. The second is done, and its initiator is already working on something new. This is what Lean Startup is about. There’s also Lean Enterprise, just to show that even bigger organizations can do it, if they want to.
Now apply this to software development effort. How can we improve estimates? Reduce complexity. Take smaller steps. I don’t mind if it is going to use the latest bleeding-edge technology on the Web, if you can tell me within one sprint whether it is worthwhile to pursue. Next, do it right and put a dedicated team on it. Let them use what’s needed to build the app, but tell them: “You build it, you run it.” No wall between project and maintenance, so no risk of scope being thrown over that budget wall. Give them time to refactor. Even Gartner puts it at 20%, so calculate that for every 5 sprints of work, only 4 go to new functionality; the extra one is for cleaning up. Remember that when running fast, you cannot spend the time to look too far ahead. That keeps the pace high, but if your intention was to conquer the world, a little planning ahead might have been better. Smaller batches of work with a focused team give you far better estimates, and you get all the other benefits of Agile and Lean development. Better still, invest in a team to automate some of the dreary handwork, and you get happy, productive teams with continuously tested software that can go straight to production. Don’t believe me? Read up on Continuous Delivery and DevOps.
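The 20% refactoring budget is simple arithmetic, but it is worth making explicit when planning: if a fifth of all sprints goes to clean-up, a feature backlog of N sprints costs N / 0.8 sprints on the calendar. A minimal sketch (the function name and figures are mine, for illustration):

```python
import math

def sprints_needed(feature_sprints: int, refactor_share: float = 0.2) -> int:
    """Total calendar sprints when refactor_share of all sprints is clean-up.

    With the default 20%, every 4 feature sprints cost 5 on the calendar.
    """
    return math.ceil(feature_sprints / (1 - refactor_share))

print(sprints_needed(4))   # 4 feature sprints -> 5 on the calendar
print(sprints_needed(12))  # 12 feature sprints -> 15 on the calendar
```

Budgeting the clean-up sprint up front keeps it out of the “we’ll pick that up after going live” pile that the maintenance budget can never absorb.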