Among the constant hubbub of intellectual property disputes, security alarms and industry politics, it's easy to forget that everything depends on people writing programs. In that respect, computing has changed less over the years than you might think: despite constant promises of automatic programming languages, natural language systems and AI-heavy program generators, nearly all software these days is written by furrowed brow and arcane ritual.
The latest technology with pretensions to take us away from all that is MDA, the Model Driven Architecture: use this, its proponents say, and you can create software that runs on many different computers without worrying over programming details.
Twenty years ago, it was fashionable to classify programming languages in terms of generations, from first generation languages to fourth or fifth. 1GL are the raw bits of machine code that the processor understands, and 2GL the assembler mnemonics that humans can read and a simple translator turns into those bits. 3GL are the 'high level' languages with which most programmers are familiar -- BASIC, C, Java and the like.
Such ideas were popular with marketing men and scary-eyed prophets of the programming future, because they implied that 4GL were going to provide as big a leap forward in software creation as high level languages -- "PRINT "Hello, World!"" -- had been from assembler's cryptic "mov ax,word ptr _io_evnt", which was itself better than raw binary like 10111001 10011010. 4GL, we were told, would let you spend all your time describing what you wanted in a language close to English, and the magic machinery would turn that into real computer code.
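The size of that leap is easiest to see with the generations side by side. A minimal sketch: the high-level version is runnable, while the assembler and binary forms are shown only as comments for comparison (illustrative fragments, not a complete program in those notations):

```python
# The 3GL version: one readable line in a high-level language.
message = "Hello, World!"
print(message)

# The 2GL equivalent is assembler mnemonics, roughly (a DOS-era x86
# flavour, shown as comments purely for comparison):
#     mov ah, 09h        ; select the "print string" system call
#     mov dx, offset msg ; point at the text to be printed
#     int 21h            ; ask the operating system to do it
#
# The 1GL form is the raw bytes those mnemonics assemble into,
# e.g. 10110100 00001001 ... -- meaningful only to the processor.
```

Each generation says the same thing; what changes is how much of the programmer's intent survives on the page.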
You may have noticed that this never happened. There's an old computer joke: make a computer programmable in English, and you'll find programmers can't write that either. As anyone who's had to maintain other people's software for a living will tell you, this is uncommonly true. No matter what language you write in, you're supposed to describe what you're thinking in a running commentary included in the source code, but in practice this rarely happens. Programmers see themselves as coders first, not narrative writers: adding comments isn't thought to help in the actual writing of the software -- it's there for posterity, and the psyche of the software creator has little room for that.
Having to maintain software with few if any comments is hard enough when it's a few thousand lines of fairly straightforward code -- I've been there, and I've religiously overcommented my own software ever since. But with large programming projects now easily running to hundreds of thousands of lines, pulled together from many teams and incorporating earlier software, the problem can become overwhelming. Even when code is adequately commented at the beginning, subsequent changes to the software are rarely reflected in the documentation.
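What a running commentary actually preserves can be shown with a single line. The retry rule below is invented purely for illustration:

```python
attempts = 3  # hypothetical state set elsewhere in the program

# Without a comment, the maintainer sees only arithmetic:
timeout = max(1, attempts * 2)

# With the commentary, the original intent survives. The (invented)
# rule: the retry delay doubles with each attempt, but must never
# drop below one second. Strip the comment, and the next programmer
# has to reverse-engineer that rule from the formula -- and guess
# whether the max() is a safety net or an accident.
```

The code works identically either way; only the maintainer's evening differs.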
MDA should be self-documenting. The idea is that to make an application, you build a model of what it should do according to business rules, including the kinds of data it will handle. Many programming projects start like this, with models on paper: the programmers then turn the paper ideas into working code. MDA takes a different approach: the model is described in the Unified Modelling Language (UML), which can be automatically converted into runnable code. This happens in stages, so that multiple different programs for different platforms can be generated from a single platform-independent intermediate model.
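The principle can be sketched in miniature. The "model" below is a platform-independent description of one entity and a business rule, and the generator mechanically turns it into runnable source. Real MDA tools work from UML models, not dictionaries, and the names here are invented for illustration only:

```python
# A toy model: what the application handles, not how any platform does it.
model = {
    "entity": "Invoice",
    "fields": ["customer", "amount"],
    "rule": "amount must be positive",
}

def generate(model):
    """Emit source code for the one entity described by the model."""
    params = ", ".join(model["fields"])
    assigns = "\n".join(
        f"        self.{field} = {field}" for field in model["fields"]
    )
    return (
        f"class {model['entity']}:\n"
        f"    # Business rule carried over from the model: {model['rule']}\n"
        f"    def __init__(self, {params}):\n"
        f"        if amount <= 0:  # check hard-coded for this toy model\n"
        f"            raise ValueError({model['rule']!r})\n"
        f"{assigns}\n"
    )

# "Compile" the generated program and use it, as a tool's back end might.
namespace = {}
exec(generate(model), namespace)
invoice = namespace["Invoice"]("Acme Ltd", 100)
```

A second generator targeting another platform could consume the same model unchanged, which is the whole point: the model, not any one program, is the master copy.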
Because most of the development work happens in the model, changes to the application will happen there too -- and the MDA development tool will then generate a modified program. Programmers will at all times be working on the latest, most accurate source of the design ideas behind the program, rather than the program itself: not only will this be much quicker, MDA's proponents say, but also much less prone to error.
Can it work? Although MDA is conceptually complete, there are still gaps in the standard and much discussion going on, especially over the trickiest bits: how do you turn a conceptual model into real code without going through the head of a programmer? And then, how do you test what you've done and verify that it works? These are the details that can take a good idea and damn it to irrelevancy. In the past, 4GLs, computer-aided software engineering (CASE) and many other loudly trumpeted breakthroughs in programming have either lost relevance or been scaled down because their promise on paper failed to translate into a workable system that programmers found more useful than infuriating.
There's no doubt that, at some point, programming will become a matter of designing a model and waving a magic wand. Whether that time is now -- and whether the efficiency of the MDA process consistently matches or exceeds the old ways -- can only be answered over time. Case studies are encouraging -- suggesting a 35 to 50 percent productivity increase with no loss of code quality -- and products like Compuware's OptimalJ enterprise application development environment are provoking much interest. This could be the last time that programmers can answer questions about their code with "no comment".