Qualcomm unveils plans for 64-bit Snapdragon 410 mobile processor

Summary: Despite labeling Apple's 64-bit A7 processor a "marketing gimmick" that offered consumers "zero benefit," Qualcomm is now heading down the 64-bit path itself.

Mobile chipmaker Qualcomm has announced plans to follow Apple's example and develop 64-bit processors, though unlike Apple it plans to make them available for low-priced handsets.

The new Snapdragon 410 processor will also feature an integrated 4G LTE chipset and be aimed at the fast-growing Chinese market, where the company expects it to appear in low-cost smartphones during the second half of 2014.

The move is part of Qualcomm's plan to encourage a shift to 64-bit mobile processing.

"It's a little bit chicken and the egg," said Leyden Li, Qualcomm's senior director in charge of marketing of the Snapdragon line. "We see this transition happening and we want to be there to help enable the ecosystem."

Back in October, Qualcomm's senior vice president and chief marketing officer, Anand Chandrasekher, called Apple's 64-bit A7 processor a "marketing gimmick" and claimed that it offered consumers "zero benefit."

"I know there's a lot of noise because Apple did [64-bit] on their A7," said Chandrasekher. "I think they are doing a marketing gimmick. There's zero benefit a consumer gets from that."

"Predominantly... you need it for memory addressability beyond 4GB. That's it. You don't really need it for performance, and the kinds of applications that 64-bit get used in mostly are large, server-class applications," said Chandrasekher.

In addition to 64-bit support and 4G LTE, the Snapdragon 410 supports cameras of up to 13 megapixels and 1080p video playback, and comes with an Adreno 306 GPU.

The Snapdragon 410 will be built using the same 28-nanometer architecture used to manufacture the Snapdragon 800.

Qualcomm claims that phones featuring the Snapdragon 410 processors will retail for around $150.

Talkback

  • tell the whole story or shut the hell up

    Like Google's SSL certificates, you've proven to be untrustworthy.
    greywolf7
  • "64bit-ness"...

    ...is indeed NOT the reason for the A7's performance. Touting it is indeed a marketing gimmick. The ARMv8 architecture is the real improvement, one characteristic of which is 64-bit support.
    Ghest
    • It might take over the data center

      You are correct, Ghest. It's the new ARM architecture that brings performance gains.

      These processors may also be destined for servers and data centers, where the energy savings would be substantial.
      Vbitrate
    • Yes and no. But mainly no.

      The 64-bit instruction set can address more registers. So going to a new instruction set that provides more registers delivers a benefit now, and getting that benefit required the 64-bit move.

      However, when processing streams of data, handling them in 64 bit chunks rather than 32 bit chunks will generally double the throughput.

      How is that a marketing gimmick?

      Now having 3 idle processors on a quad core chip rather than 1 on a dual core chip, THAT is a marketing gimmick.

      Qualcomm distanced itself from Chandrasekher's remarks and reassigned him to senior cloakroom attendant. But the world's press largely chose to ignore this, as it wasn't useful for Apple bashing.
      Henry 3 Dogg
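
      As a rough sketch of the chunk-size argument above (function names invented for illustration; the doubling assumes the loop itself, not memory bandwidth, is the limit):

      ```c
      #include <stddef.h>
      #include <stdint.h>

      /* XOR-fold a buffer one word at a time. The 64-bit version touches the
       * same bytes in half as many loop iterations, which is the "double the
       * throughput" argument, assuming suitably aligned data and a loop-bound
       * (not memory-bound) workload. */
      uint32_t xor_fold32(const uint32_t *p, size_t words)
      {
          uint32_t acc = 0;
          for (size_t i = 0; i < words; i++)
              acc ^= p[i];
          return acc;
      }

      uint64_t xor_fold64(const uint64_t *p, size_t words)
      {
          uint64_t acc = 0;
          for (size_t i = 0; i < words; i++)
              acc ^= p[i];
          return acc;
      }
      ```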
      • multicore vs 64bits

        Performance scales better with more cores than with doubling the data bus width.
        64 bits does not mean you can process twice as much if two independent 32-bit operations cannot be folded into a single 64-bit operation. Some operations can scale that way, but most cannot.
        Bitwise manipulations that can scale to 64 bits are better handled by blitter co-processors and the GPU.
        Extra cores scale performance very well, and certainly more easily than scaling by going to 64 bits.
        warboat
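
        A minimal pthreads sketch of the more-cores argument (thread count and data size are arbitrary, real scaling depends on the workload; build with -pthread):

        ```c
        #include <pthread.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        #define N_THREADS 4            /* hypothetically, one worker per core */
        #define N_ITEMS   (1u << 22)

        static uint32_t data[N_ITEMS];

        struct slice { size_t begin, end; uint64_t sum; };

        /* Each worker sums its own independent slice of the array. */
        static void *partial_sum(void *arg)
        {
            struct slice *s = arg;
            uint64_t acc = 0;
            for (size_t i = s->begin; i < s->end; i++)
                acc += data[i];
            s->sum = acc;
            return NULL;
        }

        int main(void)
        {
            pthread_t tid[N_THREADS];
            struct slice sl[N_THREADS];
            size_t chunk = N_ITEMS / N_THREADS;
            uint64_t total = 0;

            /* On a quad-core part the four slices run concurrently; that
             * concurrency, not a wider data path, is where the speedup comes from. */
            for (int t = 0; t < N_THREADS; t++) {
                sl[t].begin = (size_t)t * chunk;
                sl[t].end   = (t == N_THREADS - 1) ? N_ITEMS : (size_t)(t + 1) * chunk;
                pthread_create(&tid[t], NULL, partial_sum, &sl[t]);
            }
            for (int t = 0; t < N_THREADS; t++) {
                pthread_join(tid[t], NULL);
                total += sl[t].sum;
            }
            printf("sum = %llu\n", (unsigned long long)total);
            return 0;
        }
        ```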
      • No

        There are not that many "streams of data" that benefit from a single "64-bit chunk" instruction. That's the extent of the "64-bit-ness" that Apple added.

        Every ARM implementing the NEON instruction set was already processing "streams of data" in "64-bit chunks"... but only for streams of 8, 16, or 32-bit data. It's the handle-a-64-bit-value-in-one-operation part of 64-bit computing that needs a 64-bit processor.

        Similarly, in hardware, ARM chips have 64-bit or larger interfaces to the on-chip cache units, in parallel... so instructions and data run independently of each other. The memory bus interfaces can be crappy old single 32-bit interfaces (the main problem with the nVidia Tegra 3 used in the Surface RT and a bunch of Android devices), but it's up to the chip designer to use more... the TI OMAP in my 2-year-old Galaxy Nexus has dual 32-bit buses to memory. That actually does go twice as fast as a single 32-bit interface.

        The ability to do 64-bit math operations is absolutely the least of the advantages of a 64-bit ARM, because the need for that is rare. Extra registers, at some point the ability to hit more than 4GB of RAM + I/O, a generally improved instruction set and features, etc... that's the win of going to 64-bit. Always was, even when MIPS did it to enable 64-bit workstations and Nintendos back in the '90s. Ditto when AMD did it to rescue the world from Intel's non-consumer 64-bit plans.
        Hazydave
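
        For illustration, here is roughly what that sixteen-bytes-per-instruction processing looks like with NEON compiler intrinsics; the function name is invented, and this builds for 32-bit ARMv7-with-NEON just as it does for 64-bit ARM:

        ```c
        #include <arm_neon.h>
        #include <stddef.h>
        #include <stdint.h>

        /* Add a constant to every byte of a buffer, sixteen bytes per NEON
         * instruction; the leftover tail is handled one byte at a time. */
        void add_bytes_neon(uint8_t *dst, const uint8_t *src, size_t n, uint8_t k)
        {
            uint8x16_t vk = vdupq_n_u8(k);
            size_t i = 0;

            for (; i + 16 <= n; i += 16) {
                uint8x16_t v = vld1q_u8(src + i);
                vst1q_u8(dst + i, vaddq_u8(v, vk));
            }
            for (; i < n; i++)
                dst[i] = (uint8_t)(src[i] + k);
        }
        ```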
  • 28-nanometer architecture

    Why are they going with the "28-nanometer architecture"? 14nm is being used by Samsung today.
    Rann Xeroxx
    • because Qualcomm uses TSMC

      and that's also the reason why Apple still sticks with Samsung. TSMC just isn't ready yet.
      Samic
    • Internal vs. External

      Samsung and Intel both have better-than-28nm chips in production. For their own stuff... not for Apple's, and not for yours. TSMC and Global and the other "pure play" semiconductor foundries are usually a bit behind the leading edge in-house companies. Part of that's because they only have other people's chips to run. Samsung can spend big money on a new fab in order to run relatively simple DRAM or Flash. Intel has crazy money to spend crunching a new CPU into 22nm or 14nm or whatever... they're investing their money to build these processes for parts that sell in crazy numbers -- payoff in sight.

      The Pure Play companies have to pretty much prove a new process first... maybe with the help of a client, maybe not. So for example, TSMC is probably going on-line with a 20nm process this year. When Qualcomm or nVidia or whomever signs up for that process, they expect it to work already... they're not expecting to spend their money to hone TSMC's new process.

      So sure, Samsung does foundry work, and Intel's at least considered it. But don't expect that just being at Samsung gets you the 14nm process... they have their priorities. The reason so much of the semiconductor industry has moved to Pure Play foundries and fabless design firms is that pretty much none of these design firms could afford their own competitive processes. And everyone using a fab has access to the same process, in theory. Certainly Apple's getting Samsung's best available fab process... but not the stuff they keep for themselves. To maintain a competitive advantage. That Apple's helping to fund. Fun times!
      Hazydave
  • This too sounds like a marketing gimmick

    We won't be seeing low-end entry phones with over 4GB of RAM for a while, so it's not really a chicken-and-egg thing yet. And with only 13MP camera support it's not really great for the high-end hero phones either. And yesteryear's 28nm tech rounds this out as a very average chip. Nothing here would lead this chip to a design win against an Airmont.
    Johnny Vegas
    • The coders can start cracking

      There are no 4GB phones. Yet.

      But seeding the market with 64-bit processors allows the OS vendors to go 64-bit, and the developers to start transitioning their apps to 64-bit. So that when a phone does come to market with 8 gigs of RAM, we'll all be ready for it.
      Vbitrate
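
      On the app side, the transition mostly means not baking 32-bit assumptions into the code, so the same source builds for either word size. A small, purely illustrative C sketch of that kind of hygiene:

      ```c
      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          /* size_t and uintptr_t track the platform's word size; fixed-width
           * types pin down values whose width actually matters. */
          size_t    ptr_bytes = sizeof(void *);     /* 4 on 32-bit, 8 on 64-bit */
          uint64_t  big       = UINT64_C(8) << 30;  /* 8GB, fine either way     */
          uintptr_t addr      = (uintptr_t)&ptr_bytes;

          printf("pointer width: %zu bytes\n", ptr_bytes);
          printf("8GB is %" PRIu64 " bytes\n", big);
          printf("a stack address: %" PRIxPTR "\n", addr);
          return 0;
      }
      ```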
      • Give it a rest

        "There are no 4GB phones. Yet."

        Not again!!!

        When you hit a memory addressability limit, then you need more bits on the address registers. Not necessarily on the data registers.

        And since these processors do their own memory management, you would only need to increase the register size for addressability reasons when a single process was going to hit 4GB.

        However, that is just one of many reasons for going to 64 bit.

        For example

        For years Windows had problems with files of over 2GB, and special "large file" APIs to try to lash up handling of larger files. This was a serious problem for Windows users, especially if they wanted to edit video. It had nothing to do with memory size. It was about file offsets being too big to represent in a word.

        That's one of the reasons people moved video editing onto Macs which had no such limit.

        But actually modern processors spend much of their time processing byte streams.

        Whenever possible they don't process these byte by byte. They do it word by word.

        And doing it in 64-bit words doubles the throughput compared with doing it in 32-bit words.
        Henry 3 Dogg
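
        A small C sketch of the offset-width point, using the POSIX large-file mechanism rather than the Windows APIs mentioned above (the sizes printed depend on the build target):

        ```c
        #define _FILE_OFFSET_BITS 64   /* ask for 64-bit file offsets on 32-bit builds */

        #include <stdio.h>
        #include <sys/types.h>

        int main(void)
        {
            /* A plain signed long offset tops out at 2GB - 1 on a 32-bit build,
             * which is exactly the old "large file" problem. With the macro above,
             * off_t is 64 bits wide even there, so offsets past 2GB still fit. */
            off_t three_gb = (off_t)3 * 1024 * 1024 * 1024;

            printf("sizeof(long)  = %zu bytes\n", sizeof(long));
            printf("sizeof(off_t) = %zu bytes\n", sizeof(off_t));
            printf("a 3GB offset:   %lld\n", (long long)three_gb);
            return 0;
        }
        ```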
        • Not correct

          Modern processors have three sets of instructions. On x86, you have standard scalar instructions: the original x86 instruction set, plus the many little enhancements over the years. On ARM, the original instruction set, plus the many little enhancements. Each of these works on just one thing at a time: a byte, a 16-bit word, a 32-bit word, maybe a 64-bit word.

          Then there's floating-point.... pretty much all modern processors support both 32-bit and 64-bit floating point operations. This is old-timey stuff... processors did 64-bit floating point in the 1980s.

          They also have "vector" instruction sets, also called SIMD, for "Single Instruction Multiple Data". This is called SSE (and maybe also AVX) on x86 chips, and NEON on ARM. Each of these instruction sets works on "streams" of data. SSE works on data 128 bits at a time, AVX 256 bits at a time, and NEON 128 bits at a time. So in the 128-bit case, that means you're doing something to four 32-bit, eight 16-bit, or sixteen 8-bit values at the same time.

          Ok, got that... while NEON was optional in the ARM architecture, most ARM chips have it. They're already able, when coded for it, to process sixteen 8-bit values in one instruction (think some kinds of string processing), etc.

          In going to 64-bit, it's that very first kind of computing that changes. "64-bit processor" refers to the scalar, original, one-instruction-at-a-time kind of processing. While a move to a 64-bit instruction set might also open the chip up to FPU or SIMD improvements... the x86-64 ISA totally fixed up the horribly designed FPU in the x86, as well as adding separate registers for SIMD (rather than re-using the FPU registers). But neither of those is what's meant by "64-bit".

          CPUs do what they're told. If you have an algorithm that's coded byte-by-byte, there is no magical formula that turns this into an SIMD instruction. This is just as fast on a 32-bit processor as a 64-bit processor, all else being equal. And if that was coded as an SIMD instruction, it's still going to be the same on a 64-bit processor as a 32-bit processor.

          What can make things faster, but has absolutely nothing to do with the "bitness" of this chip, is the actual hardware. So you have a 128-bit NEON vector to load. Using just a 32-bit bus, that's going to take one random-access cycle (grabs 64-bits, because of the DDR thing) plus one more clock (64-bits, another DDR cycle). On a 64-bit bus, you get that 128-bit NEON data in just one DDR cycle (64-bits each half cycle). The win here is that you get the next vector in the next cycle, etc. However, bus fetch units (the hardware that talks to your DRAM) haven't been directly tied to the CPU for three decades... they grab memory and put it in the L1 (and perhaps other) caches, as well as serving it directly to the CPU. And of course, in-chip, the cache buses are already really fast and wide... sometimes 128-bit, sometimes 256-bit, even given a 32-bit or 64-bit CPU. None of the hardware is all that related to the chip architecture. And it never really was... we had 16-bit and 32-bit CPUs with 8-bit buses back in the 80s (like the famous 8088 in the original IBM PC), and since the Pentium, every 32-bit Intel has had a 64-bit bus... sometimes as many as four 64-bit buses, like the PC I'm typing this upon.
          Hazydave
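
          To pin down the "scalar" part, a tiny C example; the code generation described here is typical rather than guaranteed:

          ```c
          #include <stdint.h>

          /* "64-bit processor" is about the scalar instruction set. A plain 64-bit
           * add like this typically compiles to an ADDS/ADC pair of 32-bit
           * operations on ARMv7, but to a single ADD on a 64-bit register under
           * AArch64. The NEON/SSE vector units are a separate question entirely. */
          uint64_t add64(uint64_t a, uint64_t b)
          {
              return a + b;
          }
          ```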
  • Apple drops Qualcomm in 2014 and Beyond

    Time for Apple to look for a new loyal partner.
    Netteligent