Running artificial intelligence on mobile devices is a hot area of competition between vendors such as Apple and Samsung, as amply shown by Apple's continued emphasis on the "neural engine" circuitry within its "A-series" processors in the iPhone.
But as a technology, mobile neural network inference is still evolving in fits and starts.
Recent research highlights just how uneven the efforts to run neural nets on Google's Android operating system remain. Benchmark results from researchers at Swiss university ETH Zurich reveal that development of neural networks on mobile devices is still a hairy business, with frameworks that are incomplete, chipsets with mixed support for networks, and results that are difficult to benchmark reliably.
In a paper posted on arXiv this week, titled "PIRM Challenge on Perceptual Image Enhancement on Smartphones," Andrey Ignatov and Radu Timofte, both of the computer vision laboratory at ETH Zurich, describe how they ranked teams of developers who competed with different types of neural networks running on Android phones.
The reason for the competition, as Ignatov and Timofte explain, is that AI development these days is dominated by the approaches used on PCs and servers, with little consideration for what's needed in the constrained operating environment of smartphones. (See the challenge's Web page.)
"The general recipe for achieving top results in these competitions is quite similar: more layers/filters, deeper architectures and longer training on dozens of GPUs."
Maybe, the authors write, "it is possible to achieve very similar perceptual results by using much smaller and resource-efficient networks that can run on common portable hardware like smartphones or tablets."
The competitors were tasked with coming up with mixtures of network elements, such as convolutional neural networks, or CNNs, to perform basic image tasks, such as improving the look of photos taken on the phone. Their networks were required to be written in Google's TensorFlow framework, had to fit in a file no larger than 100 megabytes, and had to operate in no more than 3.5 gigabytes of DRAM. The models were run by Ignatov and Timofte on two devices: a 2017-era "Razer Phone" from gaming hardware maker Razer, running Android 7.1.1; and a Huawei "P20" from April of this year.
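The two resource limits above are simple to state as a check. The following is a minimal sketch of such a validation, assuming the submission's model size and peak memory use have already been measured (the function and variable names here are illustrative, not from the challenge's actual tooling):

```python
# Challenge limits described in the paper: the model file must be no
# larger than 100 megabytes, and peak memory use must stay within
# 3.5 gigabytes of DRAM.
MAX_MODEL_BYTES = 100 * 1024 * 1024      # 100 MB
MAX_RAM_BYTES = int(3.5 * 1024 ** 3)     # 3.5 GB

def submission_ok(model_bytes, peak_ram_bytes):
    """Return True only if a submission fits both challenge constraints."""
    return model_bytes <= MAX_MODEL_BYTES and peak_ram_bytes <= MAX_RAM_BYTES
```

For example, a 50 MB model that peaks at 2 GB of DRAM passes, while a 200 MB model fails on file size alone.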
The results were ranked by efficiency — the time in milliseconds each network took to compute on the CPU — along with measures of the quality of the images produced.
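A ranking that trades off runtime against output quality could be sketched as below. To be clear, this weighting is hypothetical and illustrative only — it is not the challenge's actual scoring formula, which combines several quality metrics:

```python
def rank_submissions(results):
    """Rank submissions best-first.

    `results` maps a team name to a (cpu_time_ms, quality_score) pair.
    Lower runtime and higher quality are both better; here we score
    each entry as quality per millisecond, an illustrative trade-off
    rather than the challenge's real formula.
    """
    def score(item):
        cpu_time_ms, quality = item[1]
        return quality / cpu_time_ms
    return sorted(results.items(), key=score, reverse=True)
```

A team that matches a rival's quality in half the CPU time would rank above it under this scheme.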
The competition was held in conjunction with the European Conference on Computer Vision held in mid-September in Munich, Germany.
The background for all this is that hardware acceleration of neural networks remains a mixed bag. In a separate paper, "AI Benchmark: Running Deep Neural Networks on Android Smartphones," released this week by Ignatov and Timofte and co-authored with representatives from Google, mobile chip giant Qualcomm, and competitor MediaTek, the authors took a look at how different chips in shipping Android phones perform when doing some basic image-processing operations, such as face recognition, image classification, and image de-blurring.
The authors tested nine tasks across 10,000 mobile phones operating in the wild, with over 50 different models of processor containing numerous neural net accelerators and graphical processing units, or GPUs.
What they found was a real hodge-podge. The simplest way to program the networks, they note, is using Google's "TensorFlow Mobile" framework, but that framework doesn't support a newer library, known as "NNAPI," the "Android Neural Networks API." NNAPI was built to abstract away hardware details of individual processors from Qualcomm, MediaTek, Huawei and Samsung.
So a new library, TensorFlow "Lite" has been put forward by Google to replace the mobile version, and it does support NNAPI, but Lite has its own limitations: it is in a "preview" release as of the time of the report, and so it lacks "full support" for a number of neural network operations, including "batch and instance normalization."
The authors also found that Lite can consume much more DRAM than the Mobile version. As for NNAPI, it does not support all types of neural networks: CNNs, for example, will all be deployed on the devices' AI accelerators or GPUs, but other kinds of networks have to resort to running on the CPU.
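The fallback behavior described above — supported operations dispatched to an accelerator, everything else dropping back to the CPU — can be illustrated with a toy dispatcher. The operation names and the supported-op set here are made up for illustration; they are not NNAPI's real operation list:

```python
# Hypothetical set of ops an NNAPI driver accelerates. Real drivers
# publish their own supported-operation lists per chipset.
NNAPI_SUPPORTED = {"conv2d", "depthwise_conv2d", "relu", "max_pool"}

def assign_devices(ops):
    """Map each op in a network to 'accelerator' or 'cpu'.

    Ops the (hypothetical) driver supports run on the accelerator;
    anything else falls back to the CPU, mirroring the behavior the
    authors describe for non-CNN networks.
    """
    return {op: ("accelerator" if op in NNAPI_SUPPORTED else "cpu")
            for op in ops}
```

Under this sketch, a pure CNN lands entirely on the accelerator, while a network containing, say, a recurrent op runs that op on the CPU.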
In sum, the authors found that hardware acceleration for neural nets "is now evolving extremely fast," but that "the current lack of standardized requirements and publicly available specifications does not always allow for an objective assessment of their real advantages and limitations."
In case you're interested in the hardware results, the authors found that the "Kirin 970" processor running in Huawei phones, and developed by Huawei subsidiary HiSilicon, topped the charts in overall performance across the nine tasks. It was followed by MediaTek's "Helio P60" and Samsung's "Exynos 9810."
But the authors caution that they won't take sides as to whose chip is better, since "our analysis has demonstrated that almost all SoC manufacturers have the potential to achieve similar results in their new chipsets." Rather, they pledge to provide ongoing benchmark results as new chipsets, frameworks and drivers emerge.