Imagine that you've applied for a new job at a different company: one that designs, implements, and supports distributed control networks for machines like passenger jets, submarines, nuclear power plants, and deep-sea drilling rigs.
During the interview they show you video of a model railway set in operation: quite a big one, with 30+ engines; 200+ cars, many of them specialized; several hundred linear feet of track; and more than one hundred stations ranging in purpose from passenger services to ore loading and unloading. Everything works, everything is electrically powered, and everything from traffic lights to the roundhouse is digitally controlled. How, they ask, would you approach the problem of building a central control console for the whole thing, given the requirement that operations never stop?
Think about this a bit and you'll probably agree that the problem is actually far more difficult than the things we deal with in everyday business computing. Add the fact that many of the machines these embedded products run have planned lifetimes of twenty or thirty years, and you begin to see how hard some of this stuff is.
There are some fairly obvious issues you can talk about in an interview of this kind: things like the need to plan for a future in which your system will still be working but today's UltraSPARC T1 or Freescale 8641D will seem as historically distant as the Z80A does to us; the need to plan for mission change, for component failure, for "insane" commands, and for both hardware and software sabotage.
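Guarding against "insane" commands usually means the controller never executes a request verbatim; it checks the request against a safe envelope first. A minimal sketch, assuming a hypothetical throttle controller (the function name, limits, and rate cap are all invented for illustration):

```python
# Illustrative sketch: rejecting "insane" commands before they reach an
# actuator. All names and limits here are hypothetical.

def validate_throttle_command(requested: float, current: float,
                              lo: float = 0.0, hi: float = 100.0,
                              max_step: float = 10.0) -> float:
    """Clamp a throttle command to its safe envelope and rate-limit it.

    A command outside the physical envelope, or one demanding an abrupt
    jump, is treated as suspect and reduced to the nearest safe value
    rather than executed as given.
    """
    # Reject values outside the physical envelope outright.
    bounded = min(max(requested, lo), hi)
    # Rate-limit: never move more than max_step per control cycle.
    if bounded > current + max_step:
        return current + max_step
    if bounded < current - max_step:
        return current - max_step
    return bounded

print(validate_throttle_command(250.0, current=40.0))  # -> 50.0 (clamped, then rate-limited)
print(validate_throttle_command(42.0, current=40.0))   # -> 42.0 (sane, passed through)
```

The same envelope-and-rate-limit idea generalizes: whether the bad command comes from a failed sensor, an operator error, or deliberate sabotage, the actuator never sees it.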
But the question they will keep dragging you back to is what you bring to the table: which technologies and software development processes are you familiar with that could contribute?
There are some superficial answers: things like network redundancy via TCP/IP, device recognition via Jini, using DTrace for testing, distributed kernel research in both the Solaris and BSD camps, and the general strategy of using hardware and software adapters that translate information from one standard format to another.
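That last strategy, the adapter, is worth making concrete: a thin translation layer lets a legacy device speak a common format without touching the device itself. A minimal sketch, assuming a made-up fixed-width legacy status message (the field layout, names, and state codes are all invented for the example):

```python
# Illustrative sketch: a software adapter translating a hypothetical
# fixed-width legacy status record into a normalized common format.

def parse_legacy_status(raw: str) -> dict:
    """Convert a fixed-width legacy record into a normalized dict.

    Assumed layout (invented for this example):
      chars 0-3: device id, char 4: state code, chars 5-7: temperature.
    """
    state_codes = {"R": "running", "S": "stopped", "F": "fault"}
    return {
        "device_id": raw[0:4].strip(),
        "state": state_codes[raw[4]],
        "temp_c": int(raw[5:8]),  # legacy field is whole degrees C
    }

msg = "T017R042"  # device T017, running, 42 degrees C
print(parse_legacy_status(msg))
# -> {'device_id': 'T017', 'state': 'running', 'temp_c': 42}
```

The payoff is isolation: when the legacy format finally dies, only the adapter changes, not every consumer downstream.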
That's all good stuff, but there's a quick bottom line here: people who work with embedded control systems have been learning how to solve this problem for more than forty years, and they're getting good at it; but they're not learning much from us, and we're not generally paying much attention to them. Oh, there's some crossover - in networking, in Sun's use of Wind River's VxWorks for control processors, in the marketing of both SPARC and PPC embedded processors (there's virtually no Wintel presence in this market), and in the use of Linux for lower-risk embedded work - but it isn't enough.
As businesses start to implement real network computing, as we find ways to make distributed control and intelligence work well with centralized policy setting and information management, more and more of what they already know will become applicable to what we're trying to do.