Why Intel's 22nm technology really matters

Summary: Intel's 22nm announcement was no surprise--the tick-tock of Intel's technology continues. What is surprising, however, is how Intel got there: The 3-D transistors are a break from the planar transistor used in integrated circuits since the 1950s.

The fact that Intel announced its 22nm technology, the world's most advanced process for manufacturing logic, at a press event earlier today was hardly surprising. The tick-tock cadence of Intel's technology continues like clockwork. What is very surprising, however, is how Intel got there. The 3-D transistors, known as tri-gates, that Intel will introduce at 22nm later this year are a break from the basic planar transistor that has been the foundation of integrated circuits since their invention in the 1950s.

Intel will use this novel 22nm technology to manufacture its Ivy Bridge processors, which should be in volume production in the second half of this year and available in PCs and servers starting in early 2012. To demonstrate that Ivy Bridge is real, Intel showed several working systems including a server with a dual-core processor, a desktop running a driving game and a laptop playing a 1080p video. Intel said that the 22nm tri-gate transistor will deliver 37% better performance than the 32nm planar transistors used in Sandy Bridge chips--already the fastest by a wide margin. Alternatively, Intel can tune the tri-gate transistors to provide the same level of performance while using half the power of Sandy Bridge. The 22nm technology will increase the CPU performance in Ivy Bridge, but Dadi Perlmutter, a vice president and General Manager of the Intel Architecture Group, also hinted it will make a big difference in the graphics and media processing capabilities of Ivy Bridge.

(Intel has posted lots of background material on the 22nm technology.)

This transition will occur as the rest of the industry is shifting from 45nm/40nm to 32nm/28nm. Like the introduction of high-k materials and metal gates (HKMG)--where competitors are still playing catch-up three years later--the shift to tri-gates could propel Intel years ahead of AMD. And because the 22nm technology with tri-gates is not only denser but also uses less power, it should work well in mobile devices, giving Intel a fresh chance to challenge companies that design ARM-based application processors such as Qualcomm, Samsung, Texas Instruments and Nvidia. While SOCs (systems-on-chip) using Intel's 22nm process technology will come later, Atom is on an accelerated schedule and in future generations will be released around the same time as new PC processors.

The vast majority of today's integrated circuits are built using planar transistors, meaning ones in which the silicon channels that conduct the flow of electrons when the switch (the gate electrode) is turned on and off lie flat on a silicon base or substrate. For decades, the industry has been able to successively shrink the features of these transistors, packing more into a given area of silicon with each new generation of process technology--the phenomenon known as Moore's Law. But starting at around 90nm, which Intel introduced in 2004, the industry hit a roadblock. Certain features became so small that the gates that control the switching of transistors began leaking current, creating a power problem. The solution was the HKMG recipe that Intel introduced on its 45nm processors starting in early 2008. This allowed Intel to use thicker insulating layers to control gate leakage without sacrificing performance.
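As a back-of-the-envelope illustration of that density scaling (node names are simplified labels, and real-world density gains vary by design), each full node shrink scales linear dimensions by roughly 0.7x, which roughly doubles how many transistors fit in the same area:

```python
def density_gain(old_nm, new_nm):
    """Ideal area-density improvement when linear feature size shrinks."""
    return (old_nm / new_nm) ** 2

# A full node step (e.g. 44nm -> 22nm would be two steps) halves linear
# dimensions only after two shrinks; one 0.7x shrink roughly doubles density.
print(round(density_gain(32, 22), 2))  # ~2.12x more transistors per area
print(round(density_gain(45, 32), 2))  # ~1.98x
```

These are idealized geometric figures; actual layouts never achieve the full theoretical shrink.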

It's possible to build 32nm/28nm chips using conventional polysilicon oxynitride gates--most semiconductor foundries will offer this--but the benefits of HKMG are so significant that the rest of the industry is following suit. AMD's Llano processor, which is now shipping and should appear in desktops and laptops starting in June, is manufactured by GlobalFoundries using a 32nm HKMG process. TSMC, the world's largest semiconductor foundry, will start volume production of chips using a 28nm HKMG process later this year, followed by GlobalFoundries on that node in early 2012.

As chip designers scale transistors beyond 32nm, however, the features become so small that electrostatic problems mount. In other words, it is difficult to properly control the switching of the transistors. One solution to this is a 3-D or non-planar transistor structure. Most of the industry refers to this as a FinFET (fin field-effect transistor) because the conducting channel sticks up from the substrate like a fin with a gate on either side--a double gate--to better control switching. The problem with FinFETs is that they require a relatively thin and tall fin, which is difficult to manufacture. Think of it like building a skyscraper versus a small office building (although you could fit perhaps 5,000 of these "skyscrapers" in the width of a human hair). Intel has a different twist on the FinFET. The tri-gate surrounds the channel on three sides so that it can effectively control a shorter and wider fin that should be easier to build, though it is still more challenging than the tried-and-true planar transistor. (The ideal transistor would have a gate that wrapped all the way around a tiny silicon nanowire, but this is impossible to manufacture using today's technology.)

When you provide more than 80 percent of the world's microprocessors, you don't just roll the dice on a completely new technology. Like HKMG, tri-gates have been in the works for a long time. In 2002, Intel gave a presentation showing why tri-gates would be easier to manufacture than other fully-depleted structures such as the single-gate (planar) or double-gate (FinFET). In 2004, Intel showed tri-gates could improve electrostatics and extend transistor scaling to 32nm and beyond. And by 2006, the company was discussing how tri-gates could be combined with other key technologies such as HKMG and strained silicon to produce circuits with higher performance and lower power than planar transistors on the same node (65nm at the time).

The introduction of tri-gate transistors will enable Moore's Law to continue. Intel said the tri-gate structure will work not only at 22nm, but also on the 14nm technology scheduled for production in late 2013. More important, it should allow Intel's customers to build not only laptops but a whole range of devices from smartphones to servers in large data centers that have significantly better performance and use less power.

In the meantime, Intel's competitors will have a lot of tough choices to make after 28nm. They can stick with a planar transistor structure on an exotic substrate known as ET-SOI (extremely thin silicon-on-insulator), but these wafers are difficult to manufacture. Intel said its tri-gate approach will add two to three percent to the cost of a finished wafer, while ET-SOI will add 10 percent to the manufacturing cost. Or the foundries may choose to switch to a 3-D double- or tri-gate structure starting at 14nm. None of these will be easy. In addition, Intel has the luxury of creating one technology optimized specifically to work on its processor design. The foundries need to come up with process technology that will work with everything from programmable logic (Altera and Xilinx) to graphics processors (AMD and Nvidia) to mobile processors and wireless basebands (Qualcomm, Broadcom and others). Intel Senior Fellow Mark Bohr said that Ivy Bridge will give Intel as much as a three-year head start versus its competition, but looking at all these factors you could argue the lead may be even greater in the next few years.

Topics: Hardware, Intel, Processors


Talkback

  • RE: Why Intel's 22nm technology really matters

    This really is going to change the processor landscape and could well propel Intel downwards into ARM's territory as well as upwards into uncharted desktop/server territories too.

    If Intel really can drop their processors' power consumption into ARM's performance-per-watt territory, then ARM may face its first credible competitor in a long time.

    Much has been said here and elsewhere about the potential issues in porting Windows to ARM, particularly compatibility issues for existing applications. If Intel really can create a SOC with better-than-ARM levels of performance at comparable power consumption rates, then Windows notebooks, tablets (and even phones) running on Ivy Bridge SOCs could well prove to be an even more credible competitor to Windows on ARM.
    bitcrazed
    • RE: Why Intel's 22nm technology really matters

      @bitcrazed I would agree. On the flip side, envision what could be done to improve the already good ARM architecture with this 3D-gate or tri-gate technology built into it!
      QuimaxW
    • RE: Why Intel's 22nm technology really matters

      @bitcrazed

      It's no wonder MS hasn't been sweating over the smartphone, tablet war. This could be a huge step for Windows.
      Rob.sharp
      • RE: Why Intel's 22nm technology really matters

        @rob.sharp@...

        I read this two hours ago, put it down, and just came back to say that I cannot imagine how this makes any sense whatsoever.

        While it's true that greater computing power in a smaller and more power-efficient form means that "Windows" could be run on more mobilized equipment, it has been proven, definitively, that no one wants it.

        This Intel advancement moves every platform's potential ability forward.

        Please explain, if you don't mind, how this affects Windows in any way whatsoever, and why it validates Microsoft's perceived current lack of response to mobility/tablet trends?
        lelandhendrix
      • RE: Why Intel's 22nm technology really matters

        @rob.sharp@... Despite what another responder said, your comment makes perfect sense.
        deepee912
      • RE: Why Intel's 22nm technology really matters

        @deepee912
        The only thing I can come up with is that you believe a smaller, more power efficient CPU will save Microsoft in the smartphone/tablet market, because then they can put Windows on it.

        Will it have a Bluetooth mouse? There's no way possible for any of the mandatory design elements of Windows 7 to work on a phone device--menu bars, pulldowns, minimize/maximize/close button clusters--even at 4.3", which is practically a tablet anyhow.

        This is why people like me keep screaming that if Microsoft doesn't wise up they will be permanent failures in the segment.

        Some seem to think that tablets started with the iPad or shortly before. No, they've been around since 2000, with a HUGE push and demonstrations from Bill Gates. They were resistive, used a stylus, and ran full-on Windows. This went on for 10 years, and nobody cared except a very small subset of mostly medical use. They/he were either lazy or just didn't have the balls to write a new OS. Windows Embedded Compact (CE) sort of counts but not really, as it was never ever intended to be, nor even the slightest bit developed for, a consumer high-volume device.

        Then Apple comes in, with 9X the battery life, 1/4th the weight and bulk, and only 40% (arguably even much less) the functionality of Windows tablets and ATE MSFT's LUNCH, AND drank their milkshake. Drank it ALL UP!!

        I've heard it argued that iPad sales are to fanboys only...but independent research shows 80% of iPad owners don't have a Mac, and 67% don't own an iPhone!

        When apple chose to only have 40% (or less) the functionality of a Windows tablet, they chose the RIGHT 40%--the features that people cared about. None of the ports, and even none of the multitasking.

        I realize I'm telling you things you already know, but I still don't get it. I don't get how "It's no wonder MS hasn't been sweating over the smartphone, tablet war. This could be a huge step for Windows."
        makes any sense at all.

        Smaller, more power-efficient CPU? Great. But what about the chipset, the digitizer, a/d, d/a, and blah blah. Microsoft neither designs nor assembles anything.

        Microsoft is not a computer hardware maker; they are coders, and sometimes even damned good ones. But Windows 7 (Win 8, for that matter) to help them recapture the flag on smartphones and tablets? Nobody wanted it before, and I don't know what's changed to make anyone want it now.
        lelandhendrix
    • RE: Why Intel's 22nm technology really matters

      @bitcrazed
      Check out the power/performance on the i3-2100T: 35 watts with as much throughput as last year's AMD quads.
      mswift1
    • Why Intel can't beat ARM (yet)

      @bitcrazed, it's not that simple. Intel is way behind ARM right now and 22nm 3-D is just catching up; maybe, yes maybe, at 14nm they are even. Remember that ARM is a moving target and Atom really isn't at the same level. Intel needs a new chip design similar to ARM's in order to win the arms race (pun intended). Only then can 3-D bring a superior advantage, but Intel is still in a hurry, because competitors will catch up on 3-D in 3 years. The time window to conquer the world is really small. BUT on the other hand, on x64 technology, e.g. against AMD, the battle is already won by Intel. AMD will starve to death in 3 years. The other processors will all suffer except IBM, which has the Power to compete. IBM's research could be as PowerFull or even bigger than Intel's, but they are not direct rivals on small-scale servers (nor desktops). Well, we could speculate here for 3 years, but then we will see...
      CyberAngel
    • RE: Why Intel's 22nm technology really matters

      @bitcrazed
      Still some generations off.

      Intel chips drink Ferrari amounts of power, compared to ARM with a miserly Prius appetite. Intel need another 5 years to get where ARM were last year.
      neilpost
  • RE: Why Intel's 22nm technology really matters

    It seems that the next step will require construction technology a step above anything we have now.
    john_gillespie
    • RE: Why Intel's 22nm technology really matters

      @john_gillespie@... It's time to leave gold wire interconnects and vias behind and jump onto locally emitting diodes, for intraprocessor communications.
      doceigen
  • RE: Why Intel's 22nm technology really matters

    Better hardware will only show the flaws in poorly written software

    Intel is not waiting around for Microsoft anymore. WINTEL is pretty much a thing of the past. Especially when you start talking about multiple core programming.

    CPU upgrades always bring a 35-37% performance gain.

    Multiple cores that can scale properly are supposed to provide 1x for each additional core on the CPU. So if you have 4 cores, you should really get a 4x gain in performance. That is 400% compared to the 35-37% gain from hardware.

    The problem, Microsoft isn't providing the Parallel Processing software that can bring these performance gains.

    AND INTEL ISN'T WAITING FOR MICROSOFT TO DO SO.

    Intel may be more involved in open source activity and other R&D projects than they are in ventures with MS.

    Times are a-changing. Been to Intel and witnessed it first hand.
    thecatch
    • RE: Why Intel's 22nm technology really matters

      @thecatch

      Where do you get the 35-37% number? I would be really curious to read up on that.

      Regardless, your assertion that you should expect a 400% gain is silly. Multiple cores allow more isolated processes to run simultaneously, but don't make any one of them complete faster. Here's a contrived example.

      Let's assume you think that a very large number <i>q</i> is prime and you want to make sure. Let's also assume that you have a list of primes up to the square root of <i>q</i>. For each <i>p<sub>i</sub></i> in the list, just see if <i>q</i> mod <i>p<sub>i</sub></i> = 0 (not prime). If you have enough cores you could potentially assign each mod operation its own thread. Unfortunately, no matter how many simultaneous threads you have open, the <b>fastest</b> you could complete this operation is bounded by the slowest thread. In some cases, a single-threaded algorithm could actually finish faster (it could exit earlier).
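
      A minimal Python sketch of that contrived example (the names are my own, and it tests every integer up to the square root rather than just primes, which is wasteful but gives the same answer):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def is_probably_prime(q):
    """Trial division: q (>= 2) is composite iff some d <= sqrt(q) divides it."""
    divisors = range(2, math.isqrt(q) + 1)
    # Farm each divisibility test out to a worker thread. Even with unlimited
    # workers, this cannot return before the slowest check finishes -- which
    # is the point: parallelism does not make the overall answer arrive
    # faster than its slowest required piece.
    with ThreadPoolExecutor() as pool:
        hits = pool.map(lambda d: q % d == 0, divisors)
        return not any(hits)
```

      A single-threaded loop could bail out on the first divisor it finds; this parallel version always pays for every check it launched.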

      A more formal definition with better calculations can be found here: <a href="http://en.wikipedia.org/wiki/Amdahl%27s_law">Amdahl's law</a>

      For me personally, the bigger issue is that I'm rarely running my CPUs at their full capacity. As I'm typing this I think I've seen a spike to 20% but I'm sitting mostly at 6%. I actually need the horsepower on a pretty regular basis, but most of the time the processors just wait for something to do.
      Rich Miles
      • RE: Why Intel's 22nm technology really matters

        @Rich Miles
        I know all about Gene Amdahl's law.

        We are talking new software, created in assembler language, with a new programming language specifically designed to promote parallel execution within the process, a new file system, database, and database language, with a scheduler designed to assign tasks and subtasks to all available cores, fully utilizing all cores until end of run-time.

        Reading and writing to memory is as critical to the process as anything else. The problem you provided is a traditional massively parallel problem. I am unsure of what language the problem is written in or what platform and OS is executing said problem. But when Gene Amdahl discussed his theory, it was during an era of sequential processing and sequential platforms. Even Fortran and Cobol are not great parallel processing languages.

        A lot is going on surrounding this topic. Even Amdahl is working with a company that deals with Massively Parallel applications.

        Our approach isn't about algorithms and the math; we actually created a platform designed specifically to take advantage of the machine: CPU, memory, etc.

        We were at Intel with the SCC project, and saw what was going on concerning this topic.

        More later.
        thecatch
    • RE: Why Intel's 22nm technology really matters

      @thecatch This is about a new hardware architecture, not about an instruction set; the same OSes that ran on the older planar transistors will still run on the new tri-gate systems. From what I have read, what is exciting is that 8- and 12-core processors become feasible without the thermal and power requirements of today's dual-core processors. Tomorrow's desktops will exceed the abilities of today's Itanium-based workstations, and this type of power will migrate both to laptops and to WP7 phones and tablets. I would say that Intel is going to change the landscape, and other processor manufacturers are really going to have to scramble to catch up with this quantum leap over current processor technology.
      Rndmacts
      • RE: Why Intel's 22nm technology really matters

        @Rndmacts What does this have to do with Windows Phone 7?
        lelandhendrix
      • RE: Why Intel's 22nm technology really matters

        @Rndmacts
        If you can't schedule more cores, they are pretty much useless to the process. No commercially available OS can schedule multiple cores well, not well at all.

        After the addition of 2 to 3 cores, it can be argued that only a 1% gain is achieved. This is what Gene Amdahl's law was all about.

        The energy and power requirements are improved, but that doesn't improve the OSes and their ability to schedule these cores.
        thecatch
    • RE: Why Intel's 22nm technology really matters

      @thecatch On the contrary, they have produced really good parallel code in .NET 4.0 and the parallel library. Also the Async library is great. And the TPL is built on the parallel stuff to give OS-quality processing libraries to developers.
      rjt9
    • RE: Why Intel's 22nm technology really matters

      @thecatch
      Over ten years ago, IBM built a parallel CPU to determine how operating systems might be optimised to take advantage of multiple processors for algorithms that could be split into several parallel processes.

      Also there was research done on which kinds of tasks could be done with parallel rather than linear or single processes.

      IBM discovered that tasks that were linear would not benefit from a multiple core or multicore CPU. So your 400% figure is meaningless for such linear tasks. A linear task will just execute on one core, and be affected by the single core's clock speed, use of the caches, main DDR RAM interface speed, and so on.

      The statement that multicores that can scale properly are supposed to provide 1x for each additional core on the CPU--so if you have 4 cores, you should really get a 4x gain in performance, giving a 400% improvement--confuses most people. I have seen blogs or forums where people assume that, say, a 4-core running at 2GHz is the equivalent of a single-core 8GHz CPU. This is simply not as straightforward as it may seem, because of issues regarding the cache shared by all the cores, and the single DDR RAM to CPU interface.
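
      Amdahl's law makes this concrete; here's a quick sketch (the 90%-parallel figure is just an illustration, not a measured workload):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup when only part of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a workload that is 90% parallelizable falls well short of 4x on 4 cores,
# and the serial 10% caps the speedup near 10x no matter how many cores you add.
print(round(amdahl_speedup(0.90, 4), 2))     # 3.08
print(round(amdahl_speedup(0.90, 1000), 2))  # 9.91
```

      Only a perfectly parallel (and cache- and memory-friendly) workload would ever see the full 4x on 4 cores.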
      neilrued
  • stunning...

    I hope AMD can survive this one, I need the competition to drive the cost of my intel systems down. =)
    pgit