Six clicks: History of supercomputers -- fast, faster, fastest

Summary: From CDC's 40MHz "supercomputer" to the 33.86 petaflops of 2014's Tianhe-2, supercomputers continue to push computing to its ultimate limits.

  • CDC-6600

    After you've gotten used to your new computer, the first thing you, and everyone else, want, whether you're just playing games or chasing down the Higgs boson, is a faster computer. The ultimate search for faster computers happens in supercomputers.

    Supercomputers are designed to be orders of magnitude faster than the current generation of computers. So it was that when Seymour Cray designed the Control Data Corporation (CDC) 6600 in 1964, it was the world's fastest computer, running at 40MHz with a top speed of 3 million floating point operations per second (3 megaflops). By comparison, an early model Raspberry Pi with an ARM1176JZF-S 700MHz processor can run at 42 megaflops; a quick back-of-the-envelope comparison follows the slides below.

  • Cray 1

    If you know only one name in supercomputing, it's probably "Cray." Seymour Cray, the first major and, without doubt, the greatest supercomputer architect, created the first of his eponymous supercomputers in 1976.

    This 80MHz system used integrated circuits to achieve performance rates as high as 136 megaflops. Part of the Cray-1's remarkable speed for the time came from its unusual "C" shape. The shape wasn't chosen for aesthetic reasons, but because it gave the most speed-dependent circuit boards shorter, and hence faster, circuit lengths.

    This attention to every last detail of design from the CPU up is a distinguishing mark of Cray's work. In the long run, as we shall see, this custom design approach would prove a dead end. 
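
To give a sense of how far speeds have come, here is a minimal back-of-the-envelope sketch in Python, using only the peak figures quoted in this article. As the Talkback below notes, the real ratios depend on precision (single versus double) and on the benchmark used, so treat these as rough orders of magnitude.

    # Rough speed comparison using only the figures quoted in the article
    MEGA = 1e6    # 1 megaflop = 10**6 floating point operations per second
    PETA = 1e15   # 1 petaflop = 10**15 floating point operations per second

    cdc_6600 = 3 * MEGA       # 1964: Seymour Cray's CDC 6600
    pi_arm   = 42 * MEGA      # early Raspberry Pi, ARM1176JZF-S at 700MHz
    tianhe_2 = 33.86 * PETA   # 2014: Tianhe-2

    print(f"Raspberry Pi vs CDC 6600: {pi_arm / cdc_6600:.0f}x")   # ~14x
    print(f"Tianhe-2 vs CDC 6600: {tianhe_2 / cdc_6600:.1e}x")     # ~1.1e10x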

Topics: Hardware, Data Centers, Linux, Open Source


Talkback

13 comments
  • How about including the supercomputer the Air Force built

    by loading Linux on a couple thousand PlayStations running gigabit Ethernet that Sony was selling below its manufacturing cost?
    Rick_R
  • 41-48 MFlops DP using the processor, but 24 GFlops using the RPi GPU

    Single precision is faster, at 65 MFlops

    Nice comparison.
    MeMyselfAndI_z
    • Hang on.

      @MeMyselfAndI_z

      The CDC 6600 was a 60-bit machine. Single precision was 60 bits.

      You didn't use double precision unless you had a very unusual problem. It was really only there for compiler compatibility (since the compiler said double had to be twice the length of single).
      Henry 3 Dogg
      • Yep, but for 32-bit processors and many GPUs this is an issue.

        You have to distinguish, because there is often a huge difference between SP and DP depending on the architecture; when someone says X flops, which are they talking about? But to compare the CDC 6600, you would probably compare its SP to x86 or ARM DP.

        And then if you compare storage.....
        MeMyselfAndI_z
  • Get the terminology right!

    If you are going to write about supercomputers you really should take note that the "ops" in Megaflops (Gigaflops etc) stands for "operations per second". So it is redundant to speak of "Petaflops per second". It's just so many Petaflops. Rather like people who refer to the speed of ships as "knots per hour".
    Achilles-9158f
    • Get the "Get it right" right

      [Mega]flops aren't operations per second. They're specifically floating point operations per second.

      If you just want to talk about generic operations [instructions], then you would talk about MIPS.
      Henry 3 Dogg
  • Vector processing wasn't new with NEC SX-3

    It started in the 60s... (http://en.wikipedia.org/wiki/Vector_processor)

    And would be exemplified in the Cray 1.

    It may have been new to NEC, but the technique had been around for several years.
    jessepollard
    • And before that

      The CDC Star-100 was vector processing 5 years before the Cray 1
      Henry 3 Dogg
  • ASCI Red was not Intel's first.

    That would be the Paragon series, based around the Intel i860 RISC.

    Personally, I think Intel made the Paragon just to show they could make a supercomputer if they wanted to.
    jessepollard
  • Some fine tuning

    I believe you will find that CDC is short for Control Data Corporation.
    geoffmartinis@...
  • Hire a proofreader or check your work

    That goes for just about every columnist on ZDNet. Nearly every article is riddled with typos which force readers to spend half their time correcting the errors as they go. For example (one of far too many): 100 petaflops = 100 thousand trillion flops. You were only out by two decimal orders of magnitude!
    allis0
  • Get it right . . .

    CDC was Control Data Corporation. Not Control Data Centre.

    And it's pointless to compare the maximum throughput of a uniprocessor with that of a multiprocessor or clustered arrangement.
    Henry 3 Dogg
    • Did he? The ARM 11 in the RPi is a single core chip

      I only saw a comparison to the RPi, which is a single core system. He wasn't talking about clustered RPis, just a single board. Maybe I missed something ....

      And in one sense it isn't useless to give absolute comparisons. Ultimately it's the amount of work that gets done that matters, not how it's done (an implementation detail).

      The point of the article is that we've come a long way, and we have.
      MeMyselfAndI_z