Nvidia announces next-generation GPU, code-named Pascal

Summary: The next-generation GPU -- code-named Pascal -- features stacked 3D DRAM, unified memory, and a technology called NVLink to bring more power to today's small and thin devices.

Nvidia's Pascal GPU architecture
(Source: Nvidia)

During the keynote at Nvidia's annual GPU Technology Conference in San Jose, California, CEO Jen-Hsun Huang updated the company's public GPU roadmap with the announcement of Pascal, the next-generation GPU family that will follow the Maxwell GPUs due this year.

Pascal, named after the 17th-century French mathematician Blaise Pascal, introduces three key technologies:

  • Stacked DRAM, or 3D memory: DRAM dies are stacked into dense modules with wide interfaces and placed inside the same package as the GPU. That gives the GPU fast access to the data held in RAM, boosting throughput and efficiency, and makes for more compact GPUs that can be fitted into smaller devices. Nvidia says this offers a several-fold increase in bandwidth, more than twice the memory capacity, and quadrupled energy efficiency.
  • Unified memory: This lets applications use both the GPU and CPU by allowing the CPU to access the GPU’s memory and the GPU to access the CPU’s memory, so developers don’t have to manage separate allocations and copies on the two chips (a code sketch follows this list).
  • NVLink: This increases the bandwidth between the CPU and GPU from the current 16GB/s (roughly the limit of today's 16-lane PCI Express 3.0 link) to more than 80GB/s.
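
Nvidia already exposes this programming model in software through CUDA's managed memory API, introduced with CUDA 6; Pascal's unified memory is meant to back the same model in hardware. Below is a minimal sketch of what it looks like to a developer, assuming a CUDA-capable GPU and the CUDA 6 toolkit -- the kernel, names, and sizes here are illustrative, not from Nvidia:

    // One allocation shared by CPU and GPU via CUDA managed memory.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float *data = nullptr;
        cudaMallocManaged(&data, n * sizeof(float));    // visible to both CPU and GPU
        for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU writes directly
        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // GPU reads/writes the same pointer
        cudaDeviceSynchronize();                        // wait for the GPU before the CPU reads
        printf("data[0] = %f\n", data[0]);              // prints 2.000000
        cudaFree(data);
        return 0;
    }

The notable thing is what is absent: there is no explicit cudaMemcpy between separate host and device buffers, which is exactly the bookkeeping unified memory removes.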

Because Pascal GPUs will ship on a board one-third the size of those used today, the chips can be used to bring more power to smaller devices.

GPUs making use of the Pascal architecture are due to land in 2016.


Talkback

  • Pascal is a game changer, especially with unified memory and performance!

    I think NVIDIA has got the spotlight with Pascal. If we can trust the performance claims, it should be a game-changer. I feel the same way about AMD's Kaveri processors and the heterogeneous system architecture (HSA) standard. Both NVIDIA and AMD are in the zone. Intel has to match that but I don't see anything near that yet! The key innovation here is ease of programming: CPU+GPU are equally accessible, equally programmable. No more system on the side like before.
    cp10000
    • Intel doesn't, hasn't, and probably won't.

      "Intel has to match that but I don't see anything near that yet!"

      Intel doesn't, hasn't, and probably won't. They've always been many years behind both nVidia and AMD. Their chips are for running Windows and doing multimedia stuff, they've never really been for games or heavy processing.
      CobraA1
      • Intel doesn't, hasn't, and probably won't.

        What about the Intel Xeon Phi?

        It fits in a standard PCIe slot and has sixty Pentium-class cores running on shared memory (IIRC), giving a double-precision TeraFLOP/s for $2k.

        Also, you can code this in OpenCL which means the same language for all the CPUs, graphics chips, etc.

        The Intel chips have been around for a year or so now, so I'm not sure why this chip is generating excitement. Can it do a double precision TeraFLOP/s as a plugin accelerator for a standard grey box?
        Slurry
  • Cool but not practical

    High-end products like this are cool, but for most users they are not necessary or practical, especially considering price. I would rather see more work focused on reasonably priced graphics processors that are targeted for 4 monitors. This would make them ready for 3 monitors or for the newer 4K monitors. It seems 2 is the limit before you jump to game quality cards.
    MichaelInMA
    • Once you start having those kinds of monitors...

      " I would rather see more work focused on reasonably priced graphics processors that are targeted for 4 monitors. This would make them ready for 3 monitors or for the newer 4K monitors. It seems 2 is the limit before you jump to game quality cards."

      Well, once you start having those kinds of monitors - you *are* essentially talking either high end graphical workstations or gaming rigs.
      CobraA1
  • yea well

    That's cool, but 2016... so it will be out sometime 2+ years from now. Interesting, but at the same time there are things coming out today that are already doing this (Kaveri, which is already out), so they might be behind by the time this launches, just as they are today with GCN.
    Jimster480
    • Yes, but this is nVidia

      a company that knows how to develop firmware and software, not just hardware. AMD makes great hardware, but the ATi division still cannot write a video driver without difficulty, and seldom brings one to the fore that is not full of problems.

      It has been this way since the time of the ATi Mach 8 chips, which predate the company nVidia, and still these people cannot code with any quality or consistency.

      The market sorely needs a third choice.
      chrome_slinky@...