Go back even just to the turn of the century, and the landscape of the industry was one where a few "ivory tower" companies would sit there and push diktats out to everyone else. Microsoft might push a five-year plan for how people were supposed to use Windows Server, SQL Server, IE, IIS, and ASP.NET to build applications for use in organisations. They could be clear about when products were dropping, could define the tooling that was used, and seed the community with the information needed to bring their objectives to pass.
Sun, Oracle, IBM, and a few others were also able to do this. Keep going back in time and you can roll in very old-school companies like DEC. The point here is that all of these are sales-led operations.
The problem with a sales-led approach is that it doesn't promote critical thinking. It is adversarial (I'll come onto that), but more in the sense of "tricking" the customer into building a mental picture where the good parts seem larger than reality and the bad parts seem smaller. Thus as an industry we're immensely bad at framing a constructive discussion as to -- again picking stuff at random -- why .NET is better than Java or vice versa. In other industries with more rigour, that sort of comparison is easier because they are better at cultivating bodies of evidence that help build a clear picture through disinterested analysis.
But the good news is that I believe the industry is moving to a next stage of maturity, where we're starting to shed our sales-led approach and move to one which is more like academia.
The purpose of the academic process is to take an idea and tear it to shreds in order to see if it works. Some of this is to do with safety -- you have to make sure a new drug is safe before it's generally available. However, most of the academic process is just to do with basic scientific rigour. There is no point in having everyone believe that the sun goes round the earth if it doesn't. One example of this is that when CERN makes a big announcement about finding the Higgs boson, everyone grabs the data and starts validating it in a manner that's both adversarial and constructive, rather than just assuming the discoverers were correct and cracking open the champagne.
At this point, the only difference lies in how being an adversary works. In a sales-led approach, it's about trickery. In an academic approach, it's about taking an intentionally opposing view and looking to disprove a theory. Structurally, how I think this is happening in our industry -- and this part of the discussion is more biased towards software engineering -- is that we're creating analogues for mechanisms that exist in academia.
The ivory tower approach in software engineering no longer works. Most people aren't waiting for the next big thing from Microsoft on .NET or Oracle on Java. (And the newer players aren't even trying to use an ivory tower approach -- they're working in sympathy with the community.) What we have now is a large network of "cells", each of these cells being analogous to a research group. A couple of people might come up with an idea -- for example "jQuery" -- and put it out into the community. This is analogous to a "peer-reviewed paper", except that the review is not formalised -- it's simply judged on whether the idea gains momentum. (Perhaps this is like crowdsourcing the peer review?) Eventually the "paper" gets accepted into the body of understood facts within the community -- in this example, jQuery has moved from the "theory" that it might be a good way to work with the DOM to the point where we now know for a "fact" that it is.
There's an angle here with regards to open source. People tend to think of open source as an "enabler" of this approach, and that being open source provides an advantage. In this model, open source is not an enabler -- rather, a lack of open source has a frictional effect. Imagine you work for a company and write a toolset. You go to your boss and ask to make it public. He/she turns round to you and says "no, it's part of our intellectual property". What's happening there is that a commercial imperative is stopping the publication of your "peer-reviewed paper". Because open source doesn't have that frictional effect, it has an advantage. Note as well that an open source approach also has an analogue in the application of scientific rigour. You can't validate an idea if it's a secret -- you have to have the paper/idea out there, being pulled apart and examined by as many people as possible, in order to get validation.
The shift here is that the vendors are no longer in a position to impose "facts" on the industry. We are now much more empowered to create our own facts. But what goes along with this is that if we're not applying proper scientific rigour to our own analysis of the technologies we support, we're just unpaid salespeople.
Maturation of our industry can only come from criticism and academic rigour. That's why I'm no longer an ally to any of the vendors, or a fanboi for any technology.
We need more evidence, and less salesmanship.
What do you think? Post a comment, or talk to me on Twitter: @mbrit.
Image credit: Wikimedia