Intel touts 14nm technology and previews Broadwell-Y chips

Summary: Intel says its 14nm technology is back on track after some delays, and the company has provided new details on the process and on the first Broadwell processors due later this year.

It took a bit longer than expected, but Intel has apparently worked out the kinks in the industry’s first 14-nanometer manufacturing process and started to ship processors based on the Broadwell architecture. At an event and webcast today, Intel executives provided some of the first details on the technology and the chips built using it.

The 14nm technology will eventually be used in a wide range of products from smartphones and tablets to servers, networking and storage. But for today’s event, Intel focused strictly on the 14nm process technology and the Broadwell-Y architecture used in low-voltage Core-M processors, which it said will enable fanless designs that are less than 9mm thick.

“We drove considerable TDP reductions,” said Rani Borkar, Intel’s Vice President of Platform Engineering. “With Broadwell-Y we are delivering the experience of the Intel Core in fanless systems.”

Borkar said that from 2010 to the Broadwell architecture in 2014, Intel has doubled CPU core performance, increased graphics performance seven-fold, and cut power by a factor of four. Over that time, systems based on Intel silicon have delivered double the battery life while cutting battery size in half, she said.

Mark Bohr, a Senior Fellow in the Technology & Manufacturing Group, said that in comparison to the competition Intel was delivering a “true 14nm technology.” To demonstrate this, he showed the measurements for key features such as the density of the fins, gates and interconnects; the height of the fins; and the size of SRAM memory cells used in chip caches.

“So now they are packed more closely together for improved layout and density,” Bohr said. “In addition, we’ve made the fins taller and skinnier which helps to improve performance.”

As with previous generations, the improvements can be used either to boost performance or to reduce power. But the real target for Intel is combined performance per watt. Over the past several generations Intel said it has increased performance per watt at a rate of 1.6x with each new generation. But Bohr said Broadwell-Y will deliver more than double the performance per watt compared to the current generation due to the second-generation tri-gate transistors, more aggressive physical scaling, close collaboration between the process and engineering teams, and enhancements to the microarchitecture.
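Taken at face value, those claims compound quickly. A minimal Python sketch of the arithmetic, using only the 1.6x-per-generation and 2x figures quoted above (the number of prior generations here is illustrative, not from the article):

```python
# Illustrative arithmetic only: compounds the generational perf/watt
# improvement factors quoted in the article.
def cumulative_perf_per_watt(factors):
    """Multiply per-generation performance-per-watt improvement factors."""
    total = 1.0
    for f in factors:
        total *= f
    return total

# Three hypothetical generations at the quoted 1.6x rate, then
# Broadwell-Y at the claimed 2x.
history = [1.6, 1.6, 1.6]
print(cumulative_perf_per_watt(history))          # ~4.1x over three steps
print(cumulative_perf_per_watt(history + [2.0]))  # ~8.2x including Broadwell-Y
```

The point of the sketch is simply that a one-off jump from 1.6x to 2x roughly doubles the cumulative gain versus staying on trend for that generation.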

The overall logic area scaling, a measure of the density of both the transistors and interconnects, continues to scale at a rate of 0.53x per generation. Bohr conceded that the competing semiconductor foundries “have tended to have better density on this metric, but they ship much later.”

“Others are pausing to develop 14nm FinFET technology. Their 20nm technologies are still using old-style planar transistors while what they call 14nm or 16nm will convert to the new FinFET or tri-gate transistors,” he said. “Our 14nm is both denser and earlier than what others call 14nm or 16nm.”

One of the chief benefits of Moore’s Law is lower cost: when you can pack more transistors into the same space, the cost of making a chip goes down. But lately there has been a lot of talk that the cost reductions are slowing. In particular at 20nm, where the foundries are still relying on planar transistors, the cost benefits are unclear. Intel, however, said it has been able to keep reducing cost at a “better than normal” rate with 14nm by using some advanced lithography techniques.
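The cost argument can be sketched in a few lines. Only the 0.53x area-scaling figure comes from the article; the wafer-cost growth below is an assumed number purely for illustration:

```python
# Hypothetical numbers to illustrate the cost-per-transistor argument.
# Only the 0.53x logic-area scaling figure is from the article.
AREA_SCALING = 0.53       # logic area per transistor vs. the prior node
WAFER_COST_GROWTH = 1.3   # assumed: next-node wafer costs 30% more

# Shrinking area 0.53x fits ~1.89x as many transistors on the same wafer.
transistors_per_wafer_gain = 1 / AREA_SCALING

# Cost per transistor = (wafer cost growth) x (area per transistor).
relative_cost_per_transistor = WAFER_COST_GROWTH * AREA_SCALING

print(f"{transistors_per_wafer_gain:.2f}x transistors per wafer")
print(f"cost per transistor falls to {relative_cost_per_transistor:.0%} of the old node")
```

Under these assumed numbers, cost per transistor still falls to roughly 69 percent of the prior node; the reduction only stalls if wafer cost grows faster than density, which is the worry at the foundries’ planar 20nm nodes.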

“For Intel, cost per transistor is continuing to come down, if anything at a slightly faster rate using this 14nm process technology,” Bohr said.

Stephan Jourdan, an Intel Fellow in the Platform Engineering Group, provided a few details on Broadwell-Y processors for thin, fanless systems, including a smaller, thinner package, a 2x reduction in the maximum power rating, and 60 percent lower idle power for longer battery life. He suggested Broadwell will deliver some gains in CPU core performance and more significant improvements in graphics and media processing, but said we’d have to wait until next month’s Intel Developer Forum for any real details. “Everything will be done at IDF,” Jourdan said.

Broadwell was originally slated to enter volume production in late 2013 and show up in systems in the first half of this year. Earlier this year the company announced it had been delayed as it wrestled with manufacturing yields. Bohr said the 14nm technology has now achieved “healthy yields” and the first Broadwell processors are qualified and in volume production. In June, at Computex in Taiwan, Intel revealed that the first Broadwell part, an ultra-low-voltage chip known as the Core M, would be available in systems by the end of this year.

Topics: Processors, Laptops, Tablets

Talkback

7 comments
  • Hmm a 7.5 watt ULV i7 would be nice and close to ARM A15 quad

    A 7.5-watt TDP i7 (Haswell's i7s are 15W) would be pretty nice. Note this supports 64-bit. But it doesn't include the chipset.

    For Atom (SoC) this might significantly beat a quad-core A15. I still feel that the GPU performance of ARM SoCs will beat Intel's integrated GPU performance for a given watt.
    MeMyselfAndI_z
    • Watt is an i7

      It used to be easy to understand what an i7, i5 and i3 were. You counted cores and Hyper-Threading. Lately an i7 has 2 cores and an i5 has 2 cores, and both have Hyper-Threading. The only difference was the number of cores in the built-in GPU. For the desktop it is different. I am totally lost now. The iX designations are meaningless. They have become nothing but marketing hype.
      MichaelInMA
      • Not Hype but technology.

        An i7 can have 2, 4, or 6 cores, and comes with Hyper-Threading, which makes it act like a 4-, 8-, or 12-core part respectively in terms of the number of parallel threads.

        They currently (Haswell) range from ULV at 15W (2 cores, 4 threads) through 35W (4 cores, 8 threads) and 65W (3.6 GHz) all the way up to >100W (6 cores at 4 GHz). All have turbo modes, which allow the system to significantly boost clock speeds on single, multiple, or all cores depending on need and heat. Most of them allow for aggressive external overclocking.

        The i5 doesn't have hyperthreading. The i3 doesn't have turbo.

        All of them have AVX and SSE4+, which radically increase floating-point performance. All are 64-bit and support virtualization. They all aggressively optimize code execution with branch prediction and out-of-order execution. The Ivy Bridge and Haswell series are more aggressive at moving up and down power states.

        Xeons have everything the i-series has, but with optimized data paths for multi-processor communication and ECC memory controllers. You can also get them with more cores.

        Atom is a cut-down x86, and quite a few are 32-bit only and do not have all the bells and whistles of the i-series. But this makes them less power-hungry, to the point where they can function at sub-watt levels.
        MeMyselfAndI_z
        • mis-informed

          The i5s do have Hyper-Threading enabled; check Intel's spec lists.

          The way I look at it is this: rather than the Pentium/Celeron choice, there are now three.

          The i3 is like the Celeron. It's useless. For general system usage go for an i5 or if your workload is heavier, opt for the i7.
          thrakath_z
    • You're not paying attention ...

      Even Intel realised their in-house GPUs couldn't satisfy the porn habits of the world.

      http://www.fool.com/investing/general/2014/06/05/how-does-intels-atom-z3580-hold-up-against-qualcom.aspx
      P0l0nium
  • Smart TV

    Let's pair this with a 20 GB SSD, 3 GB RAM, and a $0 OS, and it's the end of half-functional smart TVs. Smart TVs will be replaced by TV-sized wall-mounted monitors connected to boxes that can run applications like VLC, Steam In-Home Streaming, and real browsers. That's also the best chance for Kinect 3.0 to become relevant.
    Sacr
    • I recently installed a 256 GB Crucial MX-100 in my single core Acer Aspire.

      It's state-of-the-art 16nm technology and very nice, though it would probably be better in a dual-core or quad-core machine. The netbook has 4GB of RAM and boots up in less than 30 seconds.

      The new Linux 3.9 kernel incorporates SSD caching.

      http://www.zdnet.com/linux-3-9-kernel-release-offers-ssd-caching-and-server-performance-improvements-7000014649/
      Joe.Smetona