Apple's 64-bit chip hit Qualcomm 'in the gut'

Summary: According to a Qualcomm insider, Apple's rapid and aggressive transition to a 64-bit processor with the iPhone 5s delivered a sucker punch to its competitors that left them reeling.

It seems that Apple's rapid and aggressive transition to a 64-bit processor with the iPhone 5s delivered a sucker punch to its competitors that left them reeling.

"The 64-bit Apple chip hit us in the gut," says a Qualcomm employee to HubSpot. "Not just us, but everyone, really. We were slack-jawed, and stunned, and unprepared. It’s not that big a performance difference right now, since most current software won’t benefit. But in Spinal Tap terms it’s like, 32 more, and now everyone wants it."

At the time, Qualcomm's chief marketing officer Anand Chandrasekher called Apple's move a "marketing gimmick" and claimed that it offered consumers "zero benefit." Those comments evidently didn't reflect Qualcomm's official position, since the company later issued a retraction, reassigned Chandrasekher, and then came out with its own 64-bit silicon in the form of the Snapdragon 410.

Apple's decision to shift from a 32-bit processor to a 64-bit part was an interesting one since the iPhone 5s continues to be essentially a 32-bit platform. It runs 32-bit code and doesn't need the extra bits in order to be able to access more RAM space (since the iPhone is kitted out with only 1GB of RAM).
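
To put the addressing point in numbers: a flat 32-bit pointer already spans 4GB, four times the RAM fitted to the 5s. Here's a minimal C sketch (purely illustrative, nothing Apple-specific) that prints the pointer width it was compiled for and the address space that implies:

    #include <stdio.h>

    int main(void) {
        /* Width of a pointer on this build: 32 bits or 64 bits. */
        size_t ptr_bits = sizeof(void *) * 8;

        /* A flat pointer of that width can address 2^bits bytes.
           At 32 bits that is 4 GiB, already four times the 1GB fitted to
           the iPhone 5s, so extra address bits alone don't explain the move. */
        long double addressable = 1.0L;
        for (size_t i = 0; i < ptr_bits; i++) addressable *= 2.0L;

        printf("pointer width: %zu bits\n", ptr_bits);
        printf("flat address space: %.0Lf GiB\n",
               addressable / (1024.0L * 1024.0L * 1024.0L));
        return 0;
    }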

So why did Apple make the leap? Two reasons spring to mind:

  1. Marketing. When everyone else was stuck in the 32-bit stone age, having a 64-bit processor made Apple seem high-tech and futuristic.
  2. Future-proofing. Apple is showing everyone – consumers, developers, and competition alike – where it is headed.

One thing was clear: as soon as Apple came out with a 64-bit mobile processor, everyone would suddenly want a 64-bit part. Once again, it shows how much Apple is innovating, leaving its competition scrambling to catch up.

Talkback

  • Marketing & Future proofing = innovation

    Did you just say that marketing and future proofing is innovation?

    I don't want to downplay the A7, but last I checked "marketing" and "future proofing" don't overlap with innovation.
    Sacr
    • Well

      Sure, there's that, but don't forget that he's smoking some funny stuff as well.

      Qualcomm sells millions more processors than Apple makes, and the Apple announcement hasn't changed that, so I'm not sure exactly how this was a punch in the gut.

      Seriously, Apple wasn't the first to 64-bit computing, so other than bragging rights this really isn't anything to worry about.
      slickjim
      • I'm certainly no software/hardware engineer . . .

        . . . but could 64-bit processing be used for faster heavy-duty (2048-bit or better) encryption? If so, could it be coming to an iPhone 5s near you soon?

        Take that NSA!
        Gr8Music
        • Encryption

          Unlikely, unless they added new machine instructions to do large-key encryption algorithms in HW. Until then it doesn't appear to run the 32-bit software it currently uses enough faster than competing 32-bit processors to make a material difference.

          Then again, where would you use 2048-bit ciphers beyond a limited set of bespoke SW? The "secure" internet world is currently stuck on meager 128-bit SSL that the NSA has little to no problem cracking.
          archangel9999
          • Yup

            Yes, the A7 has additional instructions to support high speed encryption/decryption.

            The A7 can memory-map the entire 128 GB SSD in an iPad.

            The A7 doesn't have to fake 64-bit file access like 32-bit ARM does.

            The A7 has twice the integer and floating point registers of 32-bit ARM (and even the Intel/AMD X64 architecture).

            The dual-core A7 performs far better than 32-bit quad-core ARM processors clocked far higher.

            Result: you're all wet.
            dogbreath1
          • RDF

            dogbreath
            storage is not 64bit access, it is SERIAL 1 bit storage.
            There is no benefit in direct mapping the storage space as apps cannot ever, nor should they, access it natively and directly.
            If apps could random access storage natively, it would open the door to accidental and malicious corruption.
            As for the myth about performance benefits, there is greater potential with 4 cores processing 4 independent 32-bit data items than 2 cores processing 2 independent 64-bit items. The majority of data types are 32 bits or less, so your extra 32 bits of bandwidth go to waste most of the time. 64 bits doesn't magically scale 32-bit processing by 2, as it can't process independently like extra cores can.
            warboat
          • And once again, you don't know what you're talking about

            1) SDRAM storage is not accessed in serial
            2) Accessing storage addressing directly does NOT inherently lead to corruption. What utter nonsense. Sandboxing in external memory stores as well as data duplication work just as they would in RAM.
            3) The extra 32 bit bandwidth is used to double load data and instruction words in the A7. It is NOT wasted.

            How about a list of professional Video houses that use RAMDISKS instead of dedicated disk arrays.
            .DeusExMachina.
          • off your high horse

            it's you who don't know what you're talking about.
            1. storage is not SDRAM, it's NAND flash.
            NAND flash dies have always been 1-bit. Only recently are they talking about 3-bit NAND.
            To achieve greater than 1-bit access, multiple 1-bit dies are stacked, much like RAID 0 striping of hard disks. I'm not sure how many dies are stacked in the NAND flash of the iPhone 5s or iPad, but it isn't 64 bits wide. So you won't benefit from fetching it with a 64-bit CPU vs a 32-bit CPU. Memory mapping it achieves little benefit as the wait states for accessing NAND flash are huge. Even RAM has wait states, such that cache is required to speed things up.
            a 64bit cpu does not speed up NAND access.
            2. no need to argue about point 2 if you can't even get point 1 right
            3. in an ideal world, this would be a gain. however, 32bit ARM already uses a lot of THUMB instructions where double loading in 64bits would gain nothing. Using 64bit operands in 64bit mode, you're back to the same situation having to load instructions and operands with more than 1 fetch.
            code bloat with no real gain.
            warboat
          • You're both confused

            Typical NAND flash is actually on an 8-bit or 16-bit packet bus... it's accessed 8 or 16 bits at a time, but that's even oversimplified. The NAND controller sends out the address and command on the packet bus first, then does the read or write of the data... a block at a time. Modern flash is not accessed byte-by-byte like RAM, it's hit in architecture-defined blocks... like a disc drive.

            Sure, you could file map the entire flash memory. But no one's going to access it byte by byte... it works just like file mapping a hard drive, run through a virtual memory manager.

            When folks speak of "1-bit" or "3-bit" Flash, they're not talking about the data bus, they're talking about how many effective bits are stored in a single flash memory cell. Flash memory works by storing a permanent electric charge in an electrically isolated memory cell. In the early days, this charge was "yes" or "no"... there's a charge or there isn't... that's been dubbed SLC (single level cell) flash in modern times. You can find SLC flash in very expensive SSDs for servers, but pretty much everything else is multi-level cell (MLC) flash. That stores either four (2 bits per cell) or eight (3 bits per cell) different charge levels in each cell of the flash device.

            The size of the memory bus word (eg, 32-bit, 64-bit, etc) has absolutely nothing to do with the size of the CPU word. There have been 32-bit processors with 8-bit buses. The original IBM PC had a 16-bit CPU with an 8-bit bus. Since the Pentium (a 32-bit processor), every x86 has had at least a 64-bit bus... the CPU in the PC I'm typing this on has four separate 64-bit buses. Pretty much all ARMs have had at least one 32-bit bus, most have wider buses, and more of them. CPUs today almost never actually access the main bus... they talk to the L1 cache unit, which talks to the L2 cache unit, which may talk to the L3 cache unit if you have one, which in turn talks to the main memory bus.

            Most of the time when a CPU hits main memory, that's going to load 256, 512, or more bits of memory into at least one cache. This is done because, even if the CPU is just after 8-bits, it has to access a full word size to read anything (eg, 32-bit, 64-bit, etc). Since it's all DDR memory now, if you're reading a 32-bit bus, you get another 32-bits for "free", as the second half of the DDR cycle. One more clock per 64-bits, and historically, it was rare to run less than four memory cycles, so that's 256-bits on a 32-bit bus, 512-bits on a 64-bit bus, read into cache. This is also important because caches only work on these same boundaries, not individual words or bytes. Dealing efficiently with modern memory is also why the "bitness" of the CPU has absolutely nothing to do with the "bitness" of main memory.
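
            To put rough numbers on the cell and cache-line arithmetic above, here's a minimal C sketch (purely illustrative, not tied to any particular flash controller or SoC):

                #include <stdio.h>

                int main(void) {
                    /* Charge levels a flash cell must distinguish: 2^(bits stored per cell). */
                    for (int bits = 1; bits <= 3; bits++) {
                        printf("%d bit(s)/cell -> %d charge levels (%s)\n",
                               bits, 1 << bits,
                               bits == 1 ? "SLC" : bits == 2 ? "MLC" : "TLC");
                    }

                    /* Bits delivered per memory burst: bus width x transfers per burst.
                       e.g. a 32-bit DDR bus running an 8-transfer burst fills a 256-bit
                       (32-byte) chunk of cache in one go. */
                    int bus_bits = 32, transfers = 8;
                    printf("%d-bit bus, %d-transfer burst -> %d bits per fill\n",
                           bus_bits, transfers, bus_bits * transfers);
                    return 0;
                }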
            Hazydave
          • And so you double down on the dumb

            Again, neither SDRAM nor NAND access is serial. You just have NO idea what you're talking about. For instance, all your nonsense about 1-bit NAND. That has NOTHING to do with how you access the data! It refers to the number of bits stored per cell.
            #2, you're right here, no reason to argue point 2 when you can't get point 1 right.
            #3 tell that to Anand, who has actually benchmarked the chip in detail. You have no idea what you're talking about.
            .DeusExMachina.
          • Play Infinity Blade 3 on an A5 or A6-powered device

            then play it on an A7-powered device. Even if Infinity Blade were available for the quad-core Androids, the A7 would still load and play it faster thanks to the facts that A) mobile games aren't optimized for more than 2 threads to this day and B) the 64-bit cores execute 32-bit code faster than 32-bit chips do, just like in the AMD64 vs Pentium 4 days.
            Champ_Kind
          • it's all in the GPU

            Infinity Blade is heavily dependent on the GPU. Almost all the gains are right there.
            It is bullsh17 that games like Infinity Blade can't take advantage of more than 2 cores.
            Apple did a good thing by boosting the GPU performance by almost double.
            There is little publicity about the new series 6 GPU in the A7.
            However, MASSIVE publicity about 64bits.
            The GPU got the job done behind the scenes and 64bits took all the credit and glory.
            warboat
          • Well...

            Looking back at the iPhone series, the one consistent thing Apple always does, just about every time, is double the GPU performance. They figured out very early on that iOS could take a commanding lead as a gaming platform, and they have kept true to that.

            That's also why it's not really news. It always happens.
            Hazydave
          • Errr...

            Mobile games are not designed for more than two processors on iOS because there are no iOS systems with more than two processors. That is not the case on Android. There are other performance issues in Android, but this is not one of them.
            Hazydave
        • Only a little... and don't get too crazy about crypto bit sizes

          You could probably accelerate some kinds of crypto operations, primarily RSA these days, just a bit. You're doing big math algorithms for certain kinds of crypto operations, and sure, doing it in 64-bit integers rather than 32-bit means half as many carries to process. But it won't make a crazy difference... and that's probably a good thing. The whole idea of calculating a large modulo root is supposed to be very difficult, otherwise, the crypto itself wouldn't be any good. And this is only done for RSA keys... then you're going to switch to a block cypher like AES.

          And yeah, you could certainly speed up AES a little, for the parts, like XORing data against the cypher, that can happen in large blocks. On the AES algorithm itself, not sure.

          But it's not all that useful a question anyway, since pretty much all modern ARM SOCs have their own cryptographic engine. ARM chips tend to implement dedicated hardware that can do these calculations very fast, in hardware, with dedicated registers designed for these established algorithms. Intel's approach is to put dedicated instructions into the CPU, which also accelerate things much more than the relatively simple change from 32-bit to 64-bit instructions.
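
          For the XOR-against-the-keystream part, the gain is just wider lanes: one 64-bit operation covers 8 bytes instead of 4. A minimal C sketch of the idea (illustrative only; real code on iOS would go through the platform's crypto libraries and hardware engine rather than a hand-rolled loop):

              #include <stdint.h>
              #include <string.h>
              #include <stdio.h>

              /* XOR a buffer against a keystream one 64-bit word at a time. */
              static void xor_stream64(uint8_t *data, const uint8_t *keystream, size_t len) {
                  size_t i = 0;
                  for (; i + 8 <= len; i += 8) {
                      uint64_t d, k;
                      memcpy(&d, data + i, 8);       /* avoid alignment assumptions */
                      memcpy(&k, keystream + i, 8);
                      d ^= k;                        /* one op covers 8 bytes, not 4 */
                      memcpy(data + i, &d, 8);
                  }
                  for (; i < len; i++)               /* leftover tail bytes */
                      data[i] ^= keystream[i];
              }

              int main(void) {
                  uint8_t msg[12] = "hello world";
                  uint8_t ks[12]  = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
                  xor_stream64(msg, ks, sizeof msg);
                  xor_stream64(msg, ks, sizeof msg); /* XOR twice restores the input */
                  printf("%s\n", (char *)msg);       /* prints "hello world" */
                  return 0;
              }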
          Hazydave
      • Agreed.

        "Punch in the gut" is a lot different than simply being caught by surprise. "Punch in the gut" implies there was significant pain and damage involved. I highly doubt this has affected Qualcomm in any significant way other than forcing them to push out their own 64 bit chip sooner than they had planned.

        The most obvious reason I can see for Apple switching to 64 bit chips is that they plan to further merge OS X and iOS. They needed the chips sooner in order to make development and testing much easier. Personally, I'm not a fan of merging desktop and mobile touch OSes. I'm afraid they're going to make both unusable in the process.
        BillDem
        • Punch in the Gut

          It's not a punch in the Gut when someone comes up with some marketing hype.

          It's a punch in the gut when you are left technologically in the dust. As happened here.

          The statement...

          "Apple's decision to shift from a 32-bit processor to a 64-bity part was an interesting one since the iPhone 5s continues to be essentially a 32-bit platform. It runs 32-bit code and doesn't need the extra bits in order to be able to access more RAM space (since the iPhone is kitted out only with 1GB of RAM).'

          ... is essentially garbage.

          Run up an iPhone 5S and look at the process stack.

          The whole of the OS, library set, and all delivered utilities were fully 64 bit from the day the 5S shipped. Not like Microsoft who are still struggling with a 64 bit IE now.

          Apple's own additional apps are all 64 bit.

          Then try loading up the latest versions of your favourite apps, particularly any that are processor heavy, and look at the process stack again. You will find that most of them are 64 bit already. This is a competitive marketplace, and the tools make short work of updating well-written code. So people have quietly upgraded.

          Yes, the device can run 32 bit code. As can every other 64 bit processor that is the descendant of a 32 bit predecessor. That doesn't make it essentially 32 bit.

          No it doesn't need the extra bits to access more RAM.

          A circa 1965 CDC 6600, at the time the fastest computer in the world, was a 60 bit computer. It had 60 bit registers and 60 bit operations. But it only had 18 bit address registers. So it could only address 262K 60 bit words. 4G of memory would have been unimaginable. But it was still a 60 bit computer. It processed 60 bit data. FAST. Nobody ever said that it was essentially an 18 bit computer.

          The Motorola 68000 was a 16 bit processor with some 32 bit registers and instructions. It was variously referred to as 16 bit, 32 bit, or 16/32 bit depending on who was describing it. But it had 24 bit registers. Nobody ever said that it was essentially a 24 bit platform.

          And even if the 5S had 64G of RAM, it wouldn't need to be a 64 bit processor to access it. It would just need 36 bit address registers. And then only if individual processes exceeded a 32 bit address space.

          Just because many PC users went 64 bit because they needed more RAM, that does not mean that that is the reason for, and definition of, a 64 bit system. That was just a matter of timing: needing more RAM happened to be what pushed those users to upgrade when they did. Personally I went 64 bit years before because I was fed up with OS limitations on accessing files greater than 2G. That didn't require more RAM. It required the ability to handle the byte-offset pointers within the files.
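
          That file-size point is easy to show concretely. A minimal POSIX C sketch (the file name is just a placeholder; on a 32-bit Linux build you'd also want -D_FILE_OFFSET_BITS=64):

              #include <stdio.h>
              #include <sys/types.h>

              int main(void) {
                  FILE *f = fopen("big.dat", "rb");   /* placeholder file name */
                  if (!f) return 1;

                  /* 3 GB is past the 2^31 mark, so the offset can't live in a 32-bit long. */
                  off_t three_gb = (off_t)3 * 1024 * 1024 * 1024;
                  if (fseeko(f, three_gb, SEEK_SET) == 0)
                      printf("now at byte offset %lld\n", (long long)ftello(f));

                  fclose(f);
                  return 0;
              }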

          This is a 64 bit processor in a fully 64 bit platform with a capability to handle legacy 32 bit code.

          It is NOT essentially a 32-bit platform.

          It is bad enough to have the bracketing trolls from competing platforms, who have commercial reasons to play down the significance of this change, misrepresenting it.

          You, Adrian, should know better.
          Henry 3 Dogg
          • errata.

            The Motorola 68000 was a 16 bit processor with some 32 bit registers and instructions. It was variously referred to as 16 bit, 32 bit, or 16/32 bit depending on who was describing it. But it had a 24 bit ADDRESS registers. Nobody ever said that it was essentially a 24 bit platform.
            Henry 3 Dogg
          • But that's Motorola, not ARM; there isn't a 36-address-line ARM

            So the number of bits really is an issue for going beyond the 4GB 32-bit barrier. You could add address lines (slower, because with only 32-bit registers it would take two operations), but ARM hasn't done this.

            My take is that the hardware peripherals (with the possible exception of the GPU) are all 32 bit. And that means the RAM. To do otherwise would require more power and real estate, and emit more heat.

            So I am guessing hardware wise it is pretty hybridized. But 64 bit registers do allow for faster math calculations.
            DevGuy_z
          • DUH

            1) There is no 36 address line ARM.

            There is no anything until someone makes one.

            My point is that if the purpose was simply to address more memory, there is a well-established principle of increasing the address space without increasing the width of the operation set and data registers.

            2) some of the peripherals may be 32 bit. Others are probably 16 bit and 8 bit. So what. That is true on all machines.

            3) 64 bit registers do not allow for faster maths calculations.

            A combination of 64 bit registers and 64 bit instructions does allow for faster 64 bit calculations than fabricating the 64 bit calculations from multiple 32 bit calculations.

            However, much of the real-world processing is on byte streams that can be processed in words. And processing 64 bit chunks of bytes will double the throughput, i.e. speed, compared to processing 32 bit chunks of bytes.
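
            To make that concrete, here is a minimal C sketch (nothing platform-specific): walking the same byte stream in 8-byte words takes half as many iterations as walking it in 4-byte words.

                #include <stdint.h>
                #include <string.h>
                #include <stdio.h>

                /* Copy a byte stream in word-sized chunks; the tail handles leftovers. */
                static void copy_words(uint8_t *dst, const uint8_t *src, size_t len, size_t word) {
                    size_t i = 0;
                    for (; i + word <= len; i += word)
                        memcpy(dst + i, src + i, word);   /* one word-sized move per pass */
                    for (; i < len; i++)
                        dst[i] = src[i];
                }

                int main(void) {
                    uint8_t src[1024], dst[1024];
                    for (size_t i = 0; i < sizeof src; i++) src[i] = (uint8_t)i;

                    copy_words(dst, src, sizeof src, 4);  /* 256 passes: 32-bit chunks */
                    copy_words(dst, src, sizeof src, 8);  /* 128 passes: 64-bit chunks */
                    printf("copies match: %d\n", memcmp(dst, src, sizeof src) == 0);
                    return 0;
                }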
            Henry 3 Dogg