
The new M1 Macs make cutting-edge machine-learning workstations

When you think of machine-learning workstations, hard-core developers' dreams turn to high-priced powerhouse machines running Linux. But a recent benchmark by TensorFlow programmers showed that Macs powered by Apple's new M1 chip can give these boxes a run for their money.
Written by Steven Vaughan-Nichols, Senior Contributing Editor

Some friends and I were recently talking about how much computing power you really need these days. After all, even a relatively low-powered Chromebook can handle all your office work. But, I reminded them, serious programmers working on machine learning (ML) need all the CPU horsepower they can get. A typical entry-level ML workstation, such as the HP Z6 G4 Workstation, comes with a single 28-core, 2.0GHz Intel Xeon Platinum processor, 48GB of RAM, a 500GB 2.5-inch SSD, and an NVIDIA GeForce RTX 2080 Ti, running Ubuntu Linux 20.04. An Apple Mac? Surely you jest!

Well, actually, no, it's not a joke. Pankaj Kanwar, Google's Tensor Processing Units (TPU) Technical Program Manager, found that TensorFlow on Macs powered by Apple's new M1 chip kicks rump and takes names.


TensorFlow, which Google created, is the most popular open-source ML platform. It's used by almost everyone in the ML/artificial intelligence world.

Kanwar and Fred Alcober, Google's Product Marketing Lead for TensorFlow, AI/ML, and Quantum AI, found that Apple's Mac-optimized version of TensorFlow 2.4, running on a preproduction 13-inch MacBook Pro with an Apple M1 chip, 16GB of RAM, and a 256GB SSD, saw a huge jump in performance over its Intel predecessors.

How much of one? The M1-powered MacBook Pro, with its 8-core CPU, 8-core GPU, and 16-core Neural Engine, stomped a production 1.7GHz quad-core Intel Core i7-based 13-inch MacBook Pro with Intel Iris Plus Graphics 645, 16GB of RAM, and a 2TB SSD.

That's impressive, but not earth-shattering. What was really eye-opening is that the new Mac also smoked a production 3.2GHz 16-core Intel Xeon W-based Mac Pro with 32GB of RAM, an AMD Radeon Pro Vega II Duo graphics card with 64GB of HBM2, and a 256GB SSD. All the tests were run with prerelease macOS Big Sur, TensorFlow 2.3, and prerelease TensorFlow 2.4.

This wasn't just because the M1 is so much faster than its Intel competition. Apple's version of TensorFlow 2.4 has been optimized for the M1. Between the improved hardware and software, TensorFlow runs four to five times faster than it did on the old machines with the old software.

In addition, ML Compute, Apple's new framework that powers training for TensorFlow models right on the Mac, can take full advantage of accelerated CPU and GPU training on both M1- and Intel-powered Macs.

Programmers won't need to change their existing TensorFlow scripts to use ML Compute as a backend for TensorFlow and TensorFlow Addons. To get started, visit Apple's GitHub repo for instructions to download and install the Mac-optimized TensorFlow 2.4 fork. Soon, Google will integrate the forked version into the TensorFlow master branch.
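To make that concrete, here's a minimal sketch of an ordinary TensorFlow 2 Keras training script. The dataset, model layers, and training settings are illustrative choices of mine, not Apple's or Google's benchmark code; the point is simply that a stock script like this should run unchanged once the Mac-optimized fork is installed.

# A plain TensorFlow 2 / Keras script using only stock APIs. Per the article,
# nothing here needs to change to use ML Compute as the backend on an M1 Mac;
# the optimized fork picks up accelerated CPU and GPU training on its own.
import tensorflow as tf

# Load a small built-in dataset and scale pixel values to the [0, 1] range.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define a simple fully connected classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training runs on whatever TensorFlow build is installed -- on an M1 Mac
# with the optimized fork, that means the M1's CPU and GPU via ML Compute.
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))

Because it sticks to standard tf.keras calls, the same file runs on an Intel Mac, an M1 Mac, or a Linux workstation; only the installed TensorFlow build changes.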

I strongly suspect the HP Z6 G4 Workstation would still beat the M1 MacBook Pro as configured. That said, ML developers who might otherwise never even consider a Mac might want to start thinking about buying one. The only real problem is that the first generation of M1 Macs can't easily be upgraded. Still, it's worth a thought.
