AMD says "Fusion" chip looks good, but is 2011 too late?

Summary: AMD appears to be on track to release its first "Fusion" processor that combines an x86 CPU core with a graphics processor on a single silicon chip sometime in 2011. The first so-called APU, or Accelerated Processing Unit, is code-named Llano and will be manufactured using 32nm process technology.



In an interview with the enthusiast site X-bit Labs, Chekib Akrout, an executive who heads up AMD's central engineering group, said the company was happy with the results so far. "The current schedule is for 2011 introduction so it is still early, but because we are using an existing CPU core for the first product and not making big changes in the memory structure right away, we feel quite confident about where we are with Llano," he told X-bit Labs.

Last week a hardware site in China posted what appears to be a detailed AMD notebook roadmap. One slide shows the Sabine notebook platform with the Llano APU arriving in 2011, on the heels of a 45nm dual-core Caspian chip this year and a 45nm quad-core Champlain CPU in 2010. These are all designed for mainstream notebooks, though Llano, which will have up to four CPU cores, will also be used in mainstream desktops. AMD currently manufactures desktop and server chips at 45nm, but it has not yet released a 45nm mobile processor.

By reducing the complexity of PC designs, these APUs should enable computer makers to build smaller and less costly laptops and desktops. In theory, by putting the CPU and GPU on the same chip, you can also distribute computing tasks among the cores more efficiently--what Akrout alluded to as "the more interesting optimizations possible with the two different types of cores." Both Intel and Nvidia have made similar comments about the opportunities for "repartitioning" the work handled by CPUs and GPUs.
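To make "repartitioning" concrete, here is a minimal, purely illustrative Python sketch of the scheduling decision a heterogeneous runtime makes: serial, branch-heavy tasks go to CPU cores, while data-parallel tasks (the same operation applied to many elements) go to GPU cores. The scheduler, task format, and threshold below are hypothetical, not any AMD or Intel API:

```python
# Toy scheduler that routes tasks to CPU or GPU "cores" based on how
# data-parallel they are. All names and the 1024-element threshold are
# hypothetical, chosen only to illustrate the partitioning idea.

def classify(task):
    """Data-parallel tasks (same op over many elements) suit GPU cores;
    serial or branch-heavy tasks suit CPU cores."""
    return "gpu" if task["parallel_elements"] >= 1024 else "cpu"

def schedule(tasks):
    """Split a task list into per-device work queues."""
    queues = {"cpu": [], "gpu": []}
    for task in tasks:
        queues[classify(task)].append(task["name"])
    return queues

tasks = [
    {"name": "parse_config", "parallel_elements": 1},           # branchy, serial
    {"name": "vertex_transform", "parallel_elements": 100000},  # same math per vertex
    {"name": "pixel_shading", "parallel_elements": 2000000},    # same math per pixel
]
print(schedule(tasks))
# → {'cpu': ['parse_config'], 'gpu': ['vertex_transform', 'pixel_shading']}
```

Real heterogeneous-computing frameworks such as OpenCL expose the same choice explicitly, letting a program request CPU or GPU devices for a given workload.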

Llano will arrive some five years after AMD closed its $5.4 billion acquisition of ATI and immediately announced the Fusion project. Since then AMD has run into both financial and technical difficulties, and was recently forced to spin off its manufacturing operations into a new company known as GlobalFoundries. All of AMD's CPUs are manufactured using a process known as SOI, for Silicon-On-Insulator, while the ATI Radeon GPUs use a different bulk-silicon process. GlobalFoundries will have the ability to manufacture chips using both techniques, and it will be interesting to see which one it chooses for Llano. Llano is also likely to be the first chip for which AMD and GlobalFoundries work with new materials, specifically high-k gate insulators and metal gates, which could make things more challenging.

Intel is taking a less ambitious approach--at least in the short term. Rather than putting the CPU cores and the GPU on the same piece of silicon, Intel plans to put the two chips into a single package--a configuration known as a "Multi-Chip Package" or MCP. Then again, this family of products, known as Westmere, should ship before the end of this year.

The first versions, the Clarkdale desktop chip and the Arrandale notebook chip, will combine Intel's first 32nm CPU--a dual-core chip with two threads per core--with a 45nm GPU and memory controller. Because Intel already uses high-k materials and metal gates at 45nm, the transition to 32nm should be relatively easy. And a multi-chip package is less technically challenging than a single die, but delivers many of the same benefits for PC designers. Like AMD's Llano, Clarkdale and Arrandale are designed for mainstream PCs.

In last week's earnings call, CEO Paul Otellini revealed that Intel had already shipped thousands of sample 32nm Westmere chips to more than 30 customers. Take this one with a big grain of salt, but photos and early test results from what is supposedly a sample 2.4GHz Clarkdale have already been posted online.


Log in or register to join the discussion
  • What about the transition to Immersion Lithography


    You mention that AMD's transition to 32nm is more risky than Intel's because they still have to move to high-k. But you fail to mention that AMD has already moved to immersion lithography manufacturing processes, and Intel will have to do so when it moves to 32nm.

    Wondering why you didn't think that was a significant enough risk to mention? Perhaps Intel told you not to worry about it, so you just decided to leave it out?
  • I suppose . . .

    I suppose it'll be okay for the computers most people buy in stores.

    But power users and gamers will likely want to keep their dedicated graphics cards.

    . . . and I hope that Intel puts hardware T&L on their Clarkdales. I mean [b]seriously[/b] this has been the thing that drives me absolutely bonkers with Intel's integrated graphics chipsets. Some of their chipsets would be decent for gaming if they weren't bogged down with all of the software transforms and lighting calculations.

    I don't know what Intel is thinking with their graphics offerings. I simply don't. Even a tricked out i7 overclocked to the absolute limits isn't gonna do any decent gaming if it's choking on computations designed to run on hardware that normally has anywhere from over a dozen to over 400 streaming cores.

    You just can't do T&L on a CPU anymore. Despite the advances in CPU technology, GPUs are pushing harder than ever for their own advances, and are keeping well ahead of CPUs in terms of graphics processing power.

    So I'd really like to see native T&L support on the GPU side of the Clarkdale.
  • Pure cloud computing is a myth

    This type of advancement is exactly why pure cloud computing will never materialize. You don't achieve the necessary economy of scale with centralized processing; local processing will always be an important part of the future of computing.
    • Agree

      I don't understand the big push to "cloud", and the hype that it's akin to the Cure for Cancer for all things computing related.

      The weakest link, the Internet, is prone to disruption, inherently insecure, and represents a significant performance barrier for a wide variety of applications. Not to mention the security issues if your company and a competing company both have purchased services from the same Cloud vendor.

      And, more to your point, it's much more difficult to upgrade, expand, maintain, and secure huge server farms designed to support hundreds or thousands of disparate clients than it is to upgrade one or two dedicated servers.
      • cloud factor?

        "I don't understand the big push to "cloud""

        Maybe this is part of the impetus?
  • RE: AMD says to I Suppose.....

    I think you deserve the Capt. Obvious award.
  • Not too late

    I think this is good synergy between the AMD processor and ATI GPU divisions within the company. This is what they bought ATI for in the first place.

    Having an integrated design is the future and it's better to do it when your company is integrated as well in terms of CPU and GPU divisions.
  • RE: AMD says

    Who really wants a 4-core CPU coupled to a totally worthless video chip? I hope Intel tries to market a 4500 video chip with a 3GHz 4-core CPU. Just exactly what you don't need.
    • Worthless GPU?

      Maybe someone running a server-type system where the video display is mostly irrelevant. Someone who's doing lots of processing but not playing games.
      In my limited experience, even my dual-core processor is a con job, as my applications can't use both cores. The only advantage I get is when I'm doing several things literally at the same time.
  • RE: AMD says

    Will Apple bite?
  • RE: Pure cloud myth...

    I don't believe that cloud computing and desktop units are destined to be mutually exclusive. This is what I don't get about the people who don't get the push to cloud computing...

    The majority of average users (whether home or office) don't use half the capabilities of their desktop units. It is to these users that cloud computing would make more sense: cheap, thin clients with flexible, mobile platform bases to do your word processing, email, and web, and maybe a little data processing. Oh, and media consumption. Listening or viewing, that is, not creation. That's the average user's day-to-day use of a PC.

    Those who NEED dedicated desktop units, tailored to high-end computing and therefore worthy of the capabilities offered by chip advances such as this, will stick with their dedicated desktop units. And this is admittedly a broad spectrum of users, from gamers to graphic artists to engineers and on.

    Thin-client, cloud-based computing has its place. The push toward it is a good thing in terms of ubiquity and mobility of computing and services. But standalone, dedicated desktop computing very simply will not be pushed out. We're looking at two very different applications of usage.
    • Cloud, Totally at the mercy of connectivity

      Have you ever been working on a Word document when the network gets hosed for one reason or another? If you have, and your document isn't on the network, you're OK.

      The cloud puts both the word processor and the doc on the network so you come to a complete halt in production.

      Not to mention whether you'll be able to recover what work you have already done.

      Even in an enterprise you had better have 2 network connections to 2 different network fabrics or the enterprise is going to come to a halt at a great cost per minute of downtime.

      I'd rather have too much hardware so I can run a virtual environment if I want than to have a "lite" client.
  • Always "too late"?

    What a world we've built. Nobody cares about innovation any more. It's all about "What have you done for me TODAY!!!". Good things take time. Patience pays in multitudes; rushing pays nothing but headaches.
    • Especially in the CPU and GPU fields...

      Totally agree.
  • RE: AMD says

    I am interested in the idea. What I'm not understanding is whether the GPU side will have fast access to its own memory on the chip, or whether it will have to share access to main system memory with the CPU. What would be cool would be a board where you could add as much video memory and system memory as you like. That'd be cool to me, anyway.