Memo to Google & Amazon: ‘Cloud Computing’ Really Is Time-Sharing. Next: Will Punch Cards Make A Comeback?

Summary: SAS, the North Carolina supplier of business analytics software, last week got a lot of attention from its announcement that it was spending $70 million to construct a “cloud computing” facility. Chief executive Jim Goodnight, one of this nation’s billionaire software magnates, though, was having a bit of a chuckle Monday at his company’s annual executive and user conference, its Global Forum.

SAS, the North Carolina supplier of business analytics software, last week got a lot of attention from its announcement that it was spending $70 million to construct a “cloud computing” facility.

Chief executive Jim Goodnight, one of this nation’s billionaire software magnates, though, was having a bit of a chuckle Monday at his company’s annual executive and user conference, its Global Forum.

Seems he just regards “cloud computing” as a marketing hook. And that there’s nothing really new about drawing on computing power in “the cloud,” where you can’t see where it’s coming from.

In fact, the marketing mavens at SAS originally wrote up the release about the new facility in Cary, N.C., as a building with a lot of servers inside.

“It’s really a server farm. They wrote it up as a ‘server farm,’” Goodnight said Monday at the forum. “I said, Eh, no, let’s not use that. That’s old-fashioned. Use ‘cloud computing.’”

That led to national and international coverage of what amounts to a fairly small capital expenditure for a company that generates $2.6 billion a year in software revenue.

“Wow, my God, we came up with that and all the papers went crazy,” he said. “They’re calling it ‘cloud computing.’ The cloud is nothing more than a damn big server farm.”

In effect, SAS is making fun of the use of terminology to solve an age-old problem: How to make use of excess capacity in your data infrastructure. If Amazon or Google wants to make hay with the term “cloud computing,” SAS will just pitch in with its fork, as well.

“Google, Amazon had these huge server farms that they had to have to store all the data and they got all these CPUs that aren’t that terribly busy. Why not try to sell them off? Sell some of the time,” he said.

“What we’re talking about here is a concept called time sharing. That’s all it is. We’ll sell you a piece of our hardware if you give us X number of dollars. In this case, it’s real cheap. But that’s all it is, time-sharing,” he said.

Their goal, he contends, is simply to sell the excess hardware they have sitting around, without needing a lot more people to service it. This is not “anything hugely different” from how time-sharing began in the early days of corporate computing.

“It’s funny we’ve gotten to where everybody wants everything delivered on the Web. So we’re back to like (IBM) 3270 mainframe days,” he said. “All the interactivity we used to have on the desktop is being sacrificed to go back to a very simple static screen like we had on the mainframe. It all comes around. I don’t know when we’ll see punched cards again, but you never know.”

Talkback

8 comments
  • Natural Evolution of Software, Not Devolution

    Two important innovations have come along since the times of time-sharing:
    multitasking and virtualization. Whereas time-sharing was marked by batch
    jobs run serially, multitasking allowed computing power to be shared among
    concurrent program processes, and virtualization took that one step further
    and allowed the computer to be shared by multiple operating system stacks.
    I wouldn't call it a reversion to time-sharing principles but a continuous
    evolution: from sharing computing power among different programs/jobs, then
    among different concurrent processes, and now among different operating
    system instances. Don't get the impression that any of this looks like the
    monolithic days of mainframes; server farms are built on redundancy of all
    components and on commodity hardware.
    Adam J
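    A minimal sketch of the distinction drawn above, in TypeScript (the job
    names and durations are invented for illustration): the same three jobs run
    back to back, the way a batch queue would, and then concurrently, the way a
    multitasked system shares the machine.

    ```typescript
    // Stand-in for real work (I/O or computation) that takes `ms` milliseconds.
    const sleep = (ms: number) =>
      new Promise<void>((resolve) => setTimeout(resolve, ms));

    async function runJob(name: string, ms: number): Promise<string> {
      await sleep(ms);
      return `${name} finished after ${ms} ms`;
    }

    // Batch era: jobs queue up and run one after another,
    // so total wall-clock time is the sum of the job times (~600 ms here).
    async function runSerially(): Promise<void> {
      console.log(await runJob("payroll", 300));
      console.log(await runJob("inventory", 200));
      console.log(await runJob("reporting", 100));
    }

    // Multitasking: the same jobs share the machine concurrently,
    // so total wall-clock time is roughly the longest job (~300 ms here).
    async function runConcurrently(): Promise<void> {
      const results = await Promise.all([
        runJob("payroll", 300),
        runJob("inventory", 200),
        runJob("reporting", 100),
      ]);
      results.forEach((r) => console.log(r));
    }

    runSerially().then(runConcurrently);
    ```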
  • Yesterday and Today: A BIG Difference

    "All the interactivity we used to have on the desktop is being sacrificed to go back a very simple static screen like we had on the mainframe."

    Today, with Virtual Desktop Infrastructure, you get to keep your desktop and use it as you are accustomed to; only the screen you see is presented via 'thin client' technology. It is this convergence of newer technologies of Today with older ones of Yesterday that is making centralized 'time-sharing', if you will, an attractive cost-cutting benefit.

    The Cloud is also interesting because now ANYONE can get into the IT business.

    This is a good thing because not all IT is good efficient IT.

    Competition for IT dollars is good for people such as myself.

    I am a Linux IT consultant and run a Cloud Computing service.

    Dietrich T. Schmitz
    http://www.dtschmitz.com
    no_zd_user_name
    • Really? The cloud can do all that?


      [Pre] "you get to keep your desktop" Really?

      You can keep your desktop just without using all the hardware that is connected to your desktop. So you would still need to keep your desktop to scan files, do any kind of graphic design, connect you video camera, play video games, etc...

      "The Cloud is also interesting because now ANYONE can get into the IT business."

      Not ANYONE gets hired... And that is a good thing because some folks are full of themselves. They just don't have the qualification they pretend to have.

      "This is a good thing because not all IT is good efficient IT."

      So how do you think the cloud will help corporations sort their management issues? Project failure and inefficient IT systems are mostly business failures.

      It is extremely rare today that a corporation will attempt to implement a completely new system. Virtually all large corporations only do things that are technologically proven beyond any doubt. So project failures have nothing to do with the tool you can chose to use but with the quality of the team you have and the resolve of the business owners.

      It does not matter which cloud you float on, to have efficient IT systems you need strong mature management. Something rare in the clouds. [/Pre]
      provincialplace@...
  • The interactivity is NOT going away

    I thought Jim Goodnight was on the mark until I read his comment about interactivity being cut back to the level we had in the IBM 3270 days.

    I guess he hasn't been using those "Web 2.0" websites that use the Ajax approach to make an individual web page a highly interactive thing. And he may not have been aware that the Google search bar available in Firefox instantly gives you popular searches as you begin typing.

    Yes, we're returning to time-sharing. And a good thing too, since most people shouldn't store their data on their own computers, where it's rarely backed up properly and rarely accessible from mobile devices. But this time it's far more interactive; perhaps we should call it Time-Sharing 2.0. And punched cards will stay in the museums.
    Rohan Jayasekera
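    A minimal sketch of the Ajax pattern mentioned above (the /suggest endpoint
    and the element ids are hypothetical): as the user types, the page fetches
    suggestions in the background and redraws part of itself without a full
    reload, which is exactly the interactivity a static 3270-style screen lacked.

    ```typescript
    const input = document.querySelector<HTMLInputElement>("#search-box")!;
    const list = document.querySelector<HTMLUListElement>("#suggestions")!;

    let debounce: number | undefined;

    input.addEventListener("input", () => {
      // Wait for a short pause in typing before asking the server.
      window.clearTimeout(debounce);
      debounce = window.setTimeout(async () => {
        const query = input.value.trim();
        if (!query) {
          list.innerHTML = "";
          return;
        }
        // Background request; the rest of the page stays responsive while it runs.
        const response = await fetch(`/suggest?q=${encodeURIComponent(query)}`);
        const suggestions: string[] = await response.json();
        list.innerHTML = suggestions.map((s) => `<li>${s}</li>`).join("");
      }, 200);
    });
    ```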
    • What is so terribly, horribly wrong with the local storage concept?

      "since most people shouldn't store their data on their own computers"

      SAYS WHO?

      What is so terribly, horribly wrong with the local storage concept?

      "where it's rarely backed up properly and rarely accessible from mobile devices."

      That can be easily fixed with devices like Windows Home Server. It doesn't need to be shoved into "clouds" to make it work.
      CobraA1
  • RE: Memo to Google

    You're being too literal in interpreting Jim Goodnight's comments. Not to put words in his mouth, but I think what he's getting at is:

    - old model: central IT staff handles all the geeky details

    - current model: gazillions of "end-users" are perversely tricked into being slaves for their so-called "personal computers" ... and do, or do not do, all the geeky support tasks like backups and process documentation

    - cloud model: central IT staff again handles all the geeky details

    From a SOFTWARE AS A SERVICE (high-level) perspective, the main difference between the old (mainframe 3270) model and the cloud model is that the old model used short wires and your own employees, while the cloud model can use connection wires that go anywhere in the world and the SaaS vendor's employees.
    Doug_Dame@...
  • Maybe I'm not old enough

    I agree: cloud, SaaS, etc. are old, old, old (relatively, of course, like 40 years). And in many ways they are being reinvented again, going through the same growing pains as before. Weird? Nothing learned?

    But comments such as "no multitasking, no virtualization, etc." sound a little odd to me; then again, I just started at the very end of the '60s / early '70s with mainframes, when those were the norm - LOL.

    Multitasking, virtualization, multiple processors, loosely coupled systems, remote backup sites, worldwide connections/users with language problems, object orientation, the relational model / algebra, thousands of online users in a system, and so on were everyday experience, at least in the mainframe environments I worked in.

    The difference might be that, at that time, (most) IT (IS) was run as a profit center - most actually profiting, even some government computing centers. And there come the usual problems - new "cloud", SaaS, etc. systems don't do very well at accounting for / billing the usage, which makes it very difficult for customers / users to budget / predict the cost of using such systems. It also makes capacity planning (a forgotten art) much, much more difficult and expensive than it should / could be!
    tuomo@...
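    A back-of-the-envelope sketch of the budgeting problem raised above (the
    rate and usage figures are invented): with pay-per-use pricing the monthly
    bill tracks actual usage, so the estimate is only as good as the usage
    forecast.

    ```typescript
    const ratePerInstanceHour = 0.10; // hypothetical $ per instance-hour

    function monthlyCost(instances: number, hoursPerDay: number, days = 30): number {
      return instances * hoursPerDay * days * ratePerInstanceHour;
    }

    console.log(monthlyCost(10, 8));  // steady workload:  $240 per month
    console.log(monthlyCost(40, 20)); // seasonal peak:   $2400 per month, a 10x swing
    ```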
  • heh

    "It?s funny we?ve gotten to where everybody wants everything delivered on the Web. So we?re back to like (IBM) 3270 mainframe days,"

    Yeah - history repeats itself. When this gets old, there'll likely be a push to make computing local again.

    Followed by another push to make it remote.
    Followed by another push to make it local.
    Followed by another push to make it remote.
    Followed by another push to make it local.
    Followed by another push to make it remote.
    Followed by another push to make it local.

    I'm getting dizzy . . .

    In the end, though - I really do prefer it to be local.
    CobraA1