While googling for an appropriate definition of the sunk cost fallacy to quote here, I came across a blog post by Chris Campbell of Wufoo fame defining it in a personal programming context:
Basically, the sunk cost fallacy occurs when you make a decision based on the time and resources you've already committed and not on what would be the best way to spend your remaining time and energy. The sunk cost fallacy is usually applied to economics, but it can also pop up when you're writing code or working on a new feature. Here are a few examples of sunk cost fallacy showing up in our workflow while programming.
Unless you're the Yoda of programming, there's going to be places where your code becomes unruly. This isn't always the result of lack of effort (it's nearly impossible to know exactly what every class or function will be responsible for six months down the road), but what often starts out as a 'quick fix' usually ends up a labyrinth of code that has to be relearned each time you visit that problem area. The sunk cost fallacy happens when you continue to add to the inefficient code because you've already invested so much time into what's there and because reworking the code would be a headache. The perspective you should take on the situation is weighing the time you'll have to spend deciphering and adding to your spaghetti code in the future against how long it'll take to rework it now.
He goes on to talk about how loyalty to old features and the attraction of new ones can have similar effects, but I think he captures the essence of the thing in that first sentence: "Basically, the sunk cost fallacy occurs when you make a decision based on the time and resources you've already committed and not on what would be the best way to spend your remaining time and energy."
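The rework-vs-keep tradeoff Campbell describes is really just a break-even comparison: the hours already sunk into the old code never enter it. A minimal sketch, with all figures hypothetical:

```python
# Break-even test for reworking spaghetti code now vs. living with it.
# All numbers are hypothetical illustrations, not measurements.

def should_rework(rework_hours, decipher_hours_per_visit, expected_visits):
    """Rework pays off when expected future deciphering time exceeds
    the rework time. Hours already spent on the old code are sunk and
    deliberately never appear in this comparison."""
    future_cost = decipher_hours_per_visit * expected_visits
    return rework_hours < future_cost

# e.g. 40 hours to rework vs. 6 hours of re-learning per visit, 12 visits expected
print(should_rework(40, 6, 12))  # True: 72 future hours outweigh 40 rework hours
```

The point of the sketch is what's missing: nowhere does the function take "hours already invested" as a parameter.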
That comment applies about equally well to data center operations: if you let the fact that you have brand X hardware and/or software installed influence decisions about new stuff, you're committing the sunk cost fallacy - and probably ripping off your employer.
Think about that in terms of a simple thought experiment: imagine that you were asked to recommend a complete solution (hardware, software, methods, and staffing) for an enterprise-scale application, and then ask whether your recommendations would depend on information about what the client already has in operation.
If the answer is yes, and the scale of the new application is significant relative to the existing operation, then you're committing the sunk cost fallacy - and so's the guy who listens to you or expects that you'll care what he's already got running.
Luckily, once you recognize the problem, there's something you can do about it. Specifically, make it a habit to look closely at the individual costs that make up the total cost of the error you're committing.
I know - that seems weird, but consider the possible outcomes if you make your recommendations independently of your client's previous "architectural" decisions. There are only two possibilities:
- Your recommendations are sufficiently "architecturally" consistent with what the client has to gain automatic acceptance on this issue; or,
- They're not.
In the first case you should ask whether the client selected you as an advisor because he knew what the advice would be (or, sometimes, because he knew what it would not be - shared hates can be as devastating as shared commitments) - and then go do the indicated research before finalizing your recommendations.
And, in the second, you should quantify the cost of falling into the sunk cost fallacy by costing each element of the divergence - i.e. asking, for each difference, what it would cost the client's employer if he does it his way instead of whatever you've decided is the best way.
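That per-element costing is no more than a tally of differences. A hypothetical illustration - line items and dollar figures are invented, not drawn from any real engagement:

```python
# Cost each element of the divergence between the client's preference
# and the recommendation. All line items and figures are hypothetical.

divergences = {
    # item: (cost done the client's way, cost done the recommended way)
    "server hardware":  (410_000, 380_000),
    "OS licensing":     (95_000,  60_000),
    "staff retraining": (0,       55_000),   # switching has costs too
}

# Positive total: the incumbent preference costs the employer money.
premium = sum(theirs - ours for theirs, ours in divergences.values())
print(f"Cost of sticking with the incumbent: ${premium:,}")  # $10,000
```

Note that switching costs (retraining, here) count against the recommendation - the exercise is honest costing of the divergence, not stacking the deck for the consultant.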
Suppose, for example, that the differences are limited to hardware branding and your Dell recommendation clashes with his HP preferences.
In this case, the impact on system operations is likely to be negligible, quantifying that cost is trivial, and he's really just asking his employer to pay more for the bruises HP sticks on the same bananas Dell buys.
Now imagine your client lives in an all-Wintel world, and you recommend Macs with Adobe software for the sixty-some people around the company who put out flyers and other local-market advertising.
This looks vastly more complicated, the numbers you need will not be obvious, and the IT people who hired you probably don't want to hear about Apple's productivity advantages.
So what do you do? Remember who's really writing the check: not the IT guy you're reporting to, but his employer.
And that, I think, is the bottom line generality here: whether you're the consultant or the person hiring that consultant, failure to consider the real cost of pre-committing to an existing infrastructure amounts to commercial fraud - because it's the employer, not the employee with the preconceptions (even if that's really you), that's paying the freight.