The James Webb Space Telescope is finally up and running, bringing back images of galaxies at mind-boggling distances from Earth. Hubble’s successor was three decades in the making and cost more than ten billion dollars. The code executed on hardware a million miles from Earth must have been written with superior attention to detail and quality. Well, I assume.

The cost of reaching the reliability required for those top echelons of mission-critical software is highly non-linear. Squeezing out the last hundredth of a percentage point, going from 99.99% to 99.999% uptime, is expensive, so it had better be worth the effort. We therefore adjust quality requirements based on risk. How much will we suffer if the software reveals critical bugs after deployment? How likely is that, and how do we reduce those odds? How quickly can we track down and repair mistakes? For the Webb telescope, the answers are obvious: the reputational damage alone would be devastating. When serious bugs can ruin a project because they can't be quickly rectified, a mega investment in first-time-right is justified.
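The jump from 99.99% to 99.999% looks negligible on paper, but a back-of-the-envelope calculation makes the difference concrete: it shrinks the permitted downtime from roughly 53 minutes per year to about 5. A minimal sketch of that arithmetic:

```python
# Back-of-the-envelope: allowed downtime per year at a given uptime percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def downtime_minutes_per_year(uptime_pct: float) -> float:
    """Minutes of downtime permitted per year at the given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_minutes_per_year(pct):.1f} min/year of downtime")
# 99.99%  -> about 52.6 min/year
# 99.999% -> about 5.3 min/year
```

That tenfold reduction in the error budget is what the "highly non-linear" cost curve buys you.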
