TV's subliminal tech messages

Summary: In reality, the kind of highly reliable, seamlessly integrated service these shows assume doesn't exist, largely because neither the dominant technologies nor the dominant management methods are set up to make such things possible.

TOPICS: Hardware

If, like me, you watch too much TV, you've probably noticed that the medium is full of messages about computing and computing technology. My favorite these days is some PC reseller's formulation of what should be the official wintel industry slogan: "So easy to use, you need someone who doesn't speak English to help you with it." And the subliminal? Everyone has to learn his language - you, the sales guy: everyone.

And then there's the standard TV assumption that it's both easy and perfectly all right to commit a (U.S.) federal crime in pursuit of the bad guys. On NCIS, for example, the thirty-something pretending to be sixteen "hacks into" CIA and other enemy computers every time her boss needs information that he can't legally get quickly enough to deliver justice in the forty-some minutes available to the program. And the subliminal there? Resistance is futile - and the younger and cuter the hacker, the more futile it is. Remember the Alias girl?

And then there's my all-time favorite - on TV everything works, all the time. I watched some crime scene investigation thing last week in which the stars had their geeks use computers to collect and automatically correlate data from a dozen different systems, ranging from a hacked FBI access to numerous public and private databases - all to auto-magically identify product names and local buyers from spectroscopic component identification. Uh huh, but reality is more like FEMA versus the world during the run-up to Katrina: nothing worked across agencies (and not much has changed since) - with inter-agency information access that would be invisibly automated on TV queueing up, in reality, for bureaucratic and political review under high-stress conditions.

And what's the subliminal? Simply that if your stuff doesn't work quite that seamlessly - well, since the technology obviously does, failures have to be your fault, don't they? It's all an emperor's new clothes fantasy: in reality the kind of highly reliable, seamlessly integrated service these shows assume doesn't exist, largely because neither the dominant technologies nor the dominant management methods are set up to make such things possible - but as long as enough people believe the reality works for other people, most of us will go right on acting as if it's our fault that we can't make ours work.

Talkback

  • Singularity?

    Not there yet. And no, it's not just a question of waiting for Moore's law to provide the computing power. I don't think we're at the point where it's "just an engineering problem."

    What can be learned from the TV examples is that:
    (1) they assume very powerful computer programs exist (true, but not to the extent they assume), and
    (2) they assume that ease-of-use is positively correlated with the power of the system.

    The problem is that ease-of-use is generally negatively correlated with power. Our computers are idiot savants. A better depiction of reality would be a scene from Rain Man, maybe the one where the main character (it's been a long, long time since I've seen it) is trying to get his brother to play blackjack. And that's dealing with one system at one time. Imagine a room full of them....
    Erik Engbrecht
    • oh, what we learned

      For real-world users, ease-of-use and power are positively correlated, because ease-of-use is a prerequisite to leveraging the power.

      Paying attention solely to power is a bad, bad idea.
      Erik Engbrecht
  • Integration, logic and technology

    Though I would agree that current technology makes integration more difficult than it should be, I think we tend to forget that a lot of the integration issues have to do not with the technology but with people and understanding.

    In order for a group of people to all use a system in a way that ensures you get information you can trust out of it, all the users have to have the same interpretation of what the system means in relation to reality.

    I think this is one of the problems with something like the semantic web. Though I would agree that integration requires a common method of data representation, and that the method of data representation should be based on predicate logic, the semantic web assumes that all users automatically have the same interpretation of the data - and I think this is an unrealistic assumption.
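
    To make that concrete, here is a minimal sketch - all the names are invented - of two systems that share one representation but not one interpretation:

        # Both systems emit (subject, predicate, object) triples in the same
        # format, but each generates "type customer" under its own definition,
        # so the merged graph silently mixes two meanings of one word.

        # System A: a customer is a party with a signed contract.
        system_a = [("acme_corp", "type", "customer")]

        # System B: a customer is any party that has ever been invoiced,
        # including one-off buyers it would never call clients.
        system_b = [("acme_corp", "type", "customer"),
                    ("beta_llc",  "type", "customer")]

        merged = system_a + system_b

        customers = sorted({s for s, p, o in merged
                            if p == "type" and o == "customer"})
        print(customers)  # ['acme_corp', 'beta_llc'] - but beta_llc never signed anything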

    So while logic and methods of programming and data representation that are based on it offer a much better chance of integration than today's approaches, there are some limits and getting it all to work is far from being just a technological issue.
    jorwell
    • we're screwed

      "In order for a group of people to all use a system in a way that ensures you get information you can trust out it then all the users have to have the same interpretation of what the system means in relation to reality."

      They also have to have the same concept of reality; otherwise, measuring something in relation to reality is meaningless.
      Erik Engbrecht
      • Not really

        What it does mean is that when two systems exchange information there has to be some understanding at the start that we mean the same things when we use the same words. This happens by people agreeing about it, not by technological means.
        jorwell
        • propagation of uncertainty

          The problem is that small deviations in root definitions create small ambiguities which, by the time you propagate them through a complex chain of reasoning, leave you with nothing more than educated guesses.

          Now, you could argue that this is no worse than human reasoning, because humans generally don't bother propagating their uncertainties and can therefore confidently reach moronic results.
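
          As a back-of-the-envelope sketch - assuming independent errors and a made-up 95% per-step reliability:

              # Confidence in a conclusion after a chain of reasoning steps,
              # if each step is individually 95% reliable and errors compound.
              def chain_confidence(per_step: float, steps: int) -> float:
                  return per_step ** steps

              for steps in (1, 5, 10, 20):
                  print(steps, round(chain_confidence(0.95, steps), 3))
              # 1 0.95
              # 5 0.774
              # 10 0.599
              # 20 0.358 - twenty small ambiguities and you're guessing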

          Computer scientists will never be able to prove that they have created an intelligent machine, but they quite possibly will prove that people are not intelligent...
          Erik Engbrecht
          • Intelligent machine no, but a logical machine would be a good start

            Though I would agree with Terry Winograd and Fernando Flores's thesis that human intelligence (whatever that may be) isn't based on logic (and that a machine which was purely based on logic therefore wouldn't be "intelligent"), this doesn't mean you should take the second step and believe that a logic-based system isn't useful.

            Consistency is a prerequisite of truth but not a guarantee: if something is inconsistent it is necessarily false (or rather, I can prove anything from it). However, nothing stops me from constructing a perfectly consistent web of falsehoods (though this is hard to maintain if it must at some point intersect with someone's interpretation of the system in respect to the real world).
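
            The "prove anything from an inconsistency" step is even mechanically checkable; a one-line sketch in Lean:

                -- Ex falso quodlibet: from an inconsistent pair of premises,
                -- any proposition Q follows.
                example (P Q : Prop) (h : P) (hn : ¬P) : Q :=
                  absurd h hn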
            jorwell
          • Antinomies.

            If it's any consolation, a logic-based system is not guaranteed to produce any more rational results than one based on human agreement.

            And if you want a consistent falsehood, remember that premises do not require proof. So, let me choose my premises, and I'll be able to prove anything with unquestionable logic.

            Murph, for example, starts with the view that Sun's corporate behavior is appropriate to a result other than disaster. He needs no more to begin steadily remaking the world.
            Anton Philidor
          • However

            To put forward the premise that Sun is the best and to conclude that Sun is the best is a consistent argument; meaningless but consistent.

            To state in one place that Sun is the best and to deny that somewhere else would be a contradiction and would cast far more doubt on Murphy's credibility.

            Where logic helps us is in identifying ambiguities and contradictions. The same sentence in natural language can have many different meanings; logic helps to make those different meanings explicit, so we can then formally implement the one we actually want.
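
            A stock example, with invented predicates: the sentence "every customer has an account manager" hides a quantifier ambiguity that logic forces into the open:

                ∀x (Customer(x) → ∃y AccountManager(y, x))
                    - each customer has some account manager, possibly a different one for each
                ∃y ∀x (Customer(x) → AccountManager(y, x))
                    - one single account manager covers every customer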
            jorwell
    • Yes - but it's not that bad

      I suspect we've all seen situations in which two or more organizational units use the same words to describe different things - the canonical example being the use of "voucher" to label radically different things in HR, AP, and AR. In addition I've seen (and many others will have too) organizations which preserve unique usages in their internal vocabulary - meaning that newcomers or outsiders misunderstand the words and therefore the processes.

      Oh, and they get upset if you ask them "what do you mean by.." because to them it's obvious and you're stupid if you don't know that address means salutation....

      On the other hand, science defines its terms rather clearly, and so science-based (but not math-based) labelling interchange works well.

      Most importantly, however, a lot of the problems you run into in the real world don't arise because the data gets interpreted differently; they arise because the interchange either can't be handled effectively at the technology level or because people put "controls" in the way of effective information use. The former is now largely an artifact of wintel market dominance, and the latter one of organizational distrust and rivalry. You can get tech to beat the first of these - but not the second, and that's the most difficult problem of all.
      murph_z
      • Oh come now

        "the interchange either can't be handled effectively at the technology level"

        What communications protocol is used by
        a) Windows
        b) Unix
        ?

        The consistent use of terminology is an important issue: few people, for example, would define every legal entity recorded in a Customer Relationship Management system as a customer (many of them are merely sales leads).
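
        A minimal sketch - the types and names are invented - of making that distinction explicit instead of overloading one word:

            from dataclasses import dataclass

            @dataclass
            class Lead:
                name: str
                source: str          # e.g. trade show, cold call

            @dataclass
            class Customer:
                name: str
                account_number: str  # only parties with a real account get billed

            def bill(party: Customer) -> str:
                return f"invoice for account {party.account_number}"

            print(bill(Customer("Acme Corp", "A-1001")))
            # bill(Lead("Beta LLC", "trade show")) fails a type check:
            # the ambiguity is caught before it reaches the data.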

        Asking lots of questions that may seem stupid is part of the job of systems analysis; get used to it.
        jorwell
        • And the operating system poses no obstacles

          With the right technology I can make tables running in a DBMS on a Unix box look like they are running under a DBMS running under Windows, and vice versa (even if the DBMSs are from different vendors).
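
          A minimal sketch of what that transparency looks like from the client side - the DSN names here are hypothetical, and pyodbc is just one of several interchangeable DB-API drivers:

              import pyodbc

              def row_count(dsn: str, table: str) -> int:
                  # The same client code works against a DBMS on a Unix box
                  # or one on Windows; only the connection string differs.
                  conn = pyodbc.connect(f"DSN={dsn}")
                  try:
                      cur = conn.cursor()
                      cur.execute(f"SELECT COUNT(*) FROM {table}")
                      return cur.fetchone()[0]
                  finally:
                      conn.close()

              print(row_count("unix_oracle_prod", "orders"))
              print(row_count("windows_sqlserver_prod", "orders"))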

          The operating system is a complete non-issue (though I might raise questions about the scalability of Intel-based hardware).
          jorwell
          • That's right - scaling issues often facilitate disasters

            In the Unix world I have no concerns about running stats reports off the live production database - or about running an entire ERP/SCM package against one database implementation. With Wintel you generally can't do that - instead you end up with multiple replications, and over time you get drift.. as the AR one gets modified a tiny bit, and the asset management/Preventive Maintenance app changes a bit, and HR decides to credit after.. and .. and..

            and pretty soon you have unknown, but deeply embedded, differences - all of which are enabled - but not made necessary - by the limited scaling of wintel.
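
            Detecting that drift is at least straightforward in principle - a hypothetical sketch that checksums each copy and compares (connection and table names are made up):

                import hashlib
                import sqlite3

                def table_checksum(conn, table: str) -> str:
                    # Hash every row in a stable order, so equal tables hash equally.
                    digest = hashlib.sha256()
                    for row in conn.execute(f"SELECT * FROM {table} ORDER BY 1"):
                        digest.update(repr(row).encode())
                    return digest.hexdigest()

                master = sqlite3.connect("master.db")
                replica = sqlite3.connect("replica.db")
                if table_checksum(master, "accounts") != table_checksum(replica, "accounts"):
                    print("drift: the replica no longer matches the master")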
            murph_z
          • There's plenty of replication in the Unix world too

            Or at least there seems to be lots of work for data-warehousing specialists; an extended exercise in redundancy.

            My point, though, was that the operating system doesn't create an obstacle to data integration (and is therefore a reason why you might not need that data warehouse after all, or that middleware for that matter).
            jorwell
          • you bet

            and it demonstrates your earlier argument: that most of the problem is created by people making less than reasonable choices.

            Notice that I did not say that using Unix guarantees sensible decision making - only that it enables it.
            murph_z
  • Bond. James Bond.

    The all-time greatest computer achievement is the moment when Bond confronts a computer which controls a missile already launched. He hacks in, learns the instructions, and causes the missile to self-destruct in less than 2 minutes.

    I believe it was a Mac. Villains can afford Apple products. It's one of the ways the rest of us know they're villains.


    Okay, honorable mention to the movie (Independence Day) in which humans are able to introduce a virus into the alien computers which causes the self-destruction of the entire system and the hardware dependent upon it.

    Goes without saying the evil aliens had purchased Macs.


    Popular entertainment has proven that, so long as Apple is willing to stay in the computer business, the world is safe from evil.
    Anton Philidor
    • Okay - but didn't you say...somewhere :) ..

      that "Independence Day" leaked the fact that Steve Jobs is an alien - because that's imputable from the Mac's ability to interface to the alien system?
      murph_z
      • Perhaps the aliens were able to afford...

        ... to purchase a system from Apple without a substantial discount. Though their home planet is said to be dying, they might still have collected sufficient resources through decades of confiscatory taxation.

        This would also explain their generally unsympathetic attitude. They respond to Earthlings as their native-born counterparts discuss Windows users, and I don't think it's coincidence.

        Did Apple's win of an interplanetary supply contract lead to a hostile invasion, or did the invaders select Macs because they were already hostile and convinced of their superiority?
        Anton Philidor
        • Wow

          "They respond to Earthlings as their native-born counterparts discuss Windows users, and I don't think it's coincidence."

          Great line. :)

          Carl Rapson
          rapson
    • But what about Jurassic Park?

      We might note that Spielberg has his finger on the popular pulse by the fact that only two people are killed by dinosaurs in the first movie: the lawyer and the computer guy. We might also note that while the lawyer meets a more undignified end, the computer guy is the villain of the piece.

      And of course an 8-year-old child is able to administer the Unix system (maybe someone should remind CIOs about this). Given that they were running Unix, it was inevitable that some dinosaurs would escape eventually, but it was only after they switched to Windows that the dinosaurs were able to get off the island.
      jorwell