Life on the Edge

“But even more important,” he said, “is the way complex systems seem to
strike a balance between the need for order and the imperative to change.
Complex systems tend to locate themselves at a place we call ‘the edge of chaos.'”
Ian Malcolm at the Santa Fe Institute


If you’ve been following the announcements surrounding the Java platform, you may have noticed something peculiar happening with the latest Java SE versions. Hardly has the ink dried on books describing the Java 9 release before we’re already on Java 10. Even more unusual: if you go to Oracle’s Java downloads, you’ll find sections for Java 10 and Java 8, but 9 is already gone, and that in only half a year! The obvious reason is Oracle’s new release cadence: the so-called Long-Term Support versions still follow the three-year cadence, but between these LTS versions Oracle has moved to a six-month cycle. The reasoning: “developers prefer rapid innovation”. While on the whole Oracle had been getting a lot of flak for its slow pace, the combination of a faster pace with rapid removal of older versions will come as a shock to many.

If you compare this with what has happened in the world of browsers, it shouldn’t come as a big surprise, because there “big” release numbers have already vanished into oblivion. Who still remembers the toils of “IE fill-in-your-favorite-number” compatibility, especially when catering to slow-changing corporations? Nowadays browser version numbers are hardly ever mentioned, and they range from 17 for Microsoft Edge to 66 for Google Chrome. Sites like “Can I use?” provide detailed analyses of the different versions and how well they support HTML, CSS, and other standards, and specialized JavaScript libraries dynamically add support for missing bits.

So is this increased pace really a Good Thing™, simply a natural and inevitable evolution, or is Gartner finally right with its continuing predictions of doom for the Java platform?

The perceived need for stable change

When I worked on a large program in the early 2000s, an often-heard phrase was “cutting edge, proven technology”. The simplest reaction would be to laugh out loud and consider it a joke. Unfortunately, it described a real need felt by management: they wanted to convey the requirement to include recent technological developments, but not at the cost of quality and stability. These were the years that agile methodologies finally started crossing over from industrial practices, but most IT was still governed as a “cost center”, and clear plans specifying the costs and benefits of any undertaking were the norm. IT was seen as a driver for innovation, but innovation in IT as an external process. The simple fact that innovation, in IT as in industry, can only flourish when combined with its application, and that the two together then speed up tremendously, was not yet acknowledged. These were the days of the Sauropods of IT: huge systems taking several years to produce and costing many millions of dollars. Sure, building them was a booming business, and many of them are still around, but their use was limited, and the costs of failure gigantic.

The major problem with these large systems is the investment in time and money they take to come to fruition. For a board of directors used to assigning tens, if not hundreds, of millions to programs taking several years, the obvious response was to require some form of guarantee that delivery would be on time and on budget. Going for new and as yet unproven technology would then not look so appealing. On the other hand, if you’re going to produce a new fighter jet over the course of 20 or 30 years, you don’t want to end up with technology that is obsolete when finally delivered. So you want to be able to take new developments in technology along, but only the bits that succeed in the long run, which is a bit of a problem. Even so, who can blame them for the demand? They are the ones authorizing the spending of those huge amounts of money.

Safely crossing the street

In Harry Harrison’s book “The Turing Option”, a researcher is trying to teach his AI to cross a (simulated) street. To his frustration, it refuses on the grounds that it cannot know for sure that crossing is safe. When he argues that no car is visible and the assumed speed of such a hazard is unrealistic, it counters with a calculation that admittedly makes an accident unlikely, but still not impossible. This is comparable to describing “running” as “an act of controlled falling”, followed by a refusal to run because you might actually fail to control it correctly, and fall. We humans have learned to accept a certain level of risk, based on a combination of acknowledged skill and the realization that you can probably take care of any unforeseen problems along the way. Reasoning it all out in advance, backed by large amounts of numbers to provide a basis for that confidence, is usually more expensive than just “having a go.”

But is it really a good idea for everyone to adopt the experimentation approach? Can we afford to have a go at crossing the street? Small companies, who have to deal with a substantially higher impact of failure, seem the ones least able to absorb the costs of experimentation. Large corporations, on the other hand, although they need to compensate for their larger size by finding cost savings in those same large numbers, would appear best able to absorb the risks. In practice, however, we see most large companies embracing a risk-averse attitude, preferring to buy proven technology, whereas the small companies are the ones heading for new and untested waters. The reason is comparable to what I wrote about reusability of software versus actual reuse. The former is an interesting subject where a computer scientist can experiment a lot, whereas the latter is the domain of economics and psychology. The ‘-ilities’ are what we can work on with technology, but whether or not you get a chance to actually do something with it depends on how well you can convince someone to put time and money into it.

Evolution and extinction

In Michael Crichton’s book “The Lost World” we read about Ian Malcolm discussing “extinction” as a more interesting subject than “evolution.” We know pretty well how to get differentiation in a species through processes like mutation, but what we understand much less is how “nature” cleans out the failures. A species may be the ultimate development for a given environment, but a reduced need to adapt tends to make a species more susceptible to extinction when the environment changes. The most active development of new species seems to go hand in hand with the most rapid extinctions, and this is what he terms “the edge of chaos.” Species in a Red Queen’s race for survival develop into the most complex organisms, on the one hand profiting most from the associated “experimentation”, while at the same time constantly at risk of dropping out of the race as failures.

With technology we see the same: either you have an environment that favors proven technology and a slow rate of change, or you embrace change and … well, you don’t know what next year will bring. Progress cannot be stopped, however, and even big corporations run the risk of losing out to hot new startups. If you become complacent, a rapidly changing market can easily trip you up. If you decide to adapt, for example by simply buying out the newcomer, you’re still not safe. If you truly adopt the new way of life together with the newly acquired technology, you’re doing to your older self what the startup was doing. If you fail to adopt it, the people behind the startup will likely start over and come at you with the next concept, or one of their former competitors will jump into the gap. It’s a tough world.


Diversification as a strategy for survival

So what are we to do? Clearly we can choose the “safe and steady” approach, and trade away a big part of our ability to adapt to change, but in the long run that will get us into trouble. The alternative is to adopt the experimentation approach, and try to find our way into the great unknown by taking risks. The latter approach lets us reap the biggest rewards, but how to go about it is the million-dollar question. You want to be able to try out different technologies, but obviously the burden of keeping all those experiments alive can make you fail just as well.

In “Lean Enterprise”, Jez Humble, Joanne Molesky, and Barry O’Reilly describe how experiments go through phases: you limit the investment in an experiment as long as its risk of failure is high; when the product takes off, you build it out, moving to a cost-optimization strategy once it stops growing. A microservices architecture lets you do this even on a small scale, using new and untested technologies on small components, secure in the knowledge that their small size means replacing them with another technology won’t run you into large debts. Only when you build big should you be careful about which technology to apply, as any exit strategy is now a potential back-breaker. Diversification allows you to bet on more than one horse, consolidating on the proven ones when it becomes clear which stand the test of time. Or when that single developer hot on Modula-3 leaves the company. Ah yes, now there was a language…
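To make the idea of limited-investment experiments a bit more concrete, here is a minimal Java sketch of one common way to do it: hide the proven and the experimental implementation behind a shared interface, and route only a small fraction of calls to the experiment while its risk of failure is high. All the names here (Greeter, CanaryGreeter, and so on) are hypothetical; this is an illustration of the pattern, not a prescription from the book.

```java
import java.util.concurrent.ThreadLocalRandom;

// The contract both implementations share; callers depend only on this.
interface Greeter {
    String greet(String name);
}

// The proven, boring technology we trust.
class ProvenGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

// The shiny new experiment we are not yet sure about.
class ExperimentalGreeter implements Greeter {
    public String greet(String name) { return "Hi there, " + name; }
}

// Routes a configurable fraction of traffic to the experimental variant,
// falling back to the proven one if the experiment blows up. Raising the
// fraction over time is the "build it out" phase; dropping the experiment
// is cheap because nothing outside this class knows it exists.
class CanaryGreeter implements Greeter {
    private final Greeter proven;
    private final Greeter experimental;
    private final double fraction; // e.g. 0.05 while risk is high

    CanaryGreeter(Greeter proven, Greeter experimental, double fraction) {
        this.proven = proven;
        this.experimental = experimental;
        this.fraction = fraction;
    }

    public String greet(String name) {
        if (ThreadLocalRandom.current().nextDouble() < fraction) {
            try {
                return experimental.greet(name);
            } catch (RuntimeException e) {
                // Experiment failed: absorb the cost, fall through to proven tech.
            }
        }
        return proven.greet(name);
    }
}
```

The point is not the three tiny classes, but the exit strategy they encode: because the experiment lives behind an interface on a small component, abandoning it is a one-line change rather than a back-breaker.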

This entry was posted in Agile, Microservices, Programming Languages, Software Architecture.
