When software goes wrong, it's natural to look for a culprit — especially when faulty software has caused serious harm. Howard Schmidt, a former adviser to the American government on computer security, has an answer: make the developers personally responsible. Personal liability, like being hanged in a fortnight, concentrates the mind wonderfully.
Yet personal liability, again like being hanged in a fortnight, is the wrong approach. It is inimical to justice, terrifying to contemplate and useless for fixing problems. Any developer knows that it is vanishingly rare to be allowed to write the perfect product, even were such a thing theoretically possible. Constraints of time, marketing, manpower and environment force compromises in design, implementation and testing: faced with personal liability for the outcome but little control over the process, who'd want the job?
And assuming something goes wrong, identifying the culprit would mean a long and doubtless expensive process of discovery, quite possibly aimed at someone who'd left the company years ago. Even if one individual could be shown to have been at fault (and not, say, the testers who were supposed to check the code or the managers who signed off on it), what then? Bankrupt the person? Throw them in jail? Cut off their right hand? Make them visit everyone affected and say sorry?
The only result of such an approach would be to kill system design stone dead. Nobody would dare to try to write an ambitious program, and nobody could afford the liability insurance.
Companies are the appropriate entities to assume liability. They have the resources to take on the responsibilities, and the massed brainpower and experience to produce reliable software in the first place. Corporations may have no backside to be kicked or soul to be damned, but they have shareholders to be frightened and boards to be embarrassed.
True, many of the old ways of quantifying and testing for reliability won't work with software — industry-standard certification schemes may work for toughened glass or car tyres, but software is infinitely mutable and often operates in an ill-defined environment. Where a standard expectation of functionality isn't appropriate, the contract is the place to set out expectations and remedies.
In the end, responsibility works through partnership and shared risk. Any attempt to pass the buck or hide behind boilerplate, in the hope that the other party may be in for the long drop, will be fatal for both.