Two weeks ago I covered a series of announcements from GPU powerhouse Nvidia, around new AI-focused products. Among these announcements was a partnership around the integration of certain Nvidia technology into ARM chip designs, specifically to deliver optimized AI processing in IoT devices.
Also read: Nvidia doubles down on AI
This week, not to be outdone, Qualcomm is announcing its own IoT- and AI-optimized SoC (system on a chip) platforms, geared towards computer vision applications. The company is today launching the QCS603 and QCS605 in this arena, and these two products have a lot going on.
Seshu Madhavapeddy, Qualcomm's VP of Product Management for IoT, briefed me on the new platform and went into a lot of detail.
Madhavapeddy explained to me that the new SoCs are designed for IoT edge computing, where as much processing as possible takes place on the device itself, minimizing data movement to remote infrastructure. And, specifically, the QCS603 and 605 are geared towards vision intelligence. Applications for this technology include security cameras, sports cameras, wearable cameras, virtual reality cameras, robotics and smart displays.
Also: Edge, core, and cloud: Where all the workloads go
Mr. Madhavapeddy further explained to me how IoT cameras differ greatly from their smartphone counterparts. IoT cameras have to operate in very low-light conditions, down to 1 lux. They also have very different image stabilization requirements: in the IoT world, it's not about less-blurry snapshots, it's about keeping video focused and readable when it's shot from a helmet-mounted camera in a sporting scenario, or from a drone camera while the drone is on a mission.
The 603 and 605 can handle these low-light and image stabilization requirements. They also handle the obstacle avoidance requirements common in robotics applications, by combining image processing with AI.
AI at the edge
Qualcomm's AI engine includes its Snapdragon Neural Processing Engine (NPE) software framework, which in turn can accommodate models created with major deep learning libraries, including TensorFlow, Caffe and Caffe2, as well as the Android Neural Networks API and Qualcomm's own Hexagon Neural Network library.
These models can be ported to Qualcomm's AI engine through the use of source platform-specific software development kits (SDKs). Qualcomm's AI engine optimizes inferencing -- that is, scoring image data against machine learning models deployed to the device. Model training will still likely occur in the cloud, where the big GPU iron is plentiful and available on an elastic basis.
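To make the train-in-the-cloud, score-at-the-edge split concrete, here is a minimal conceptual sketch in plain Python. To be clear, this is not Qualcomm's NPE API, and the weights and labels are made up for illustration; it just shows the shape of inferencing: a model trained elsewhere is deployed to the device as data, and the device scores inputs against it locally.

```python
# Conceptual sketch only -- NOT Qualcomm's SNPE API.
# A toy "deployed model": a 3x4 weight matrix (trained elsewhere, e.g. in
# the cloud) mapping a 4-value feature vector -- a stand-in for processed
# image data -- to scores for 3 hypothetical classes.
WEIGHTS = [
    [0.2, 0.8, -0.5, 0.1],   # weights for "person"
    [-0.3, 0.4, 0.9, -0.2],  # weights for "vehicle"
    [0.5, -0.6, 0.2, 0.7],   # weights for "background"
]
LABELS = ["person", "vehicle", "background"]

def infer(features):
    """Score a feature vector against the deployed weights and return
    the best-matching label (argmax over the class scores). This runs
    entirely on-device -- no round trip to remote infrastructure."""
    scores = [sum(w * x for w, x in zip(row, features)) for row in WEIGHTS]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return LABELS[best]

# Example: score one frame's (hypothetical) feature vector locally.
print(infer([1.0, 0.5, 0.0, 0.2]))  # prints "person"
```

Training would update `WEIGHTS` on big cloud GPUs; the edge device only ever runs the cheap forward pass.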
Also: AI applied: How SAP and MapR are adding AI to their platforms
The 603 and 605 are low-power products, designed to run in battery-powered devices. They both have Wi-Fi on-board, along with a Qualcomm Adreno GPU, multiple Qualcomm Kryo ARM CPU cores, a Hexagon 685 Vector Processor and Qualcomm's aforementioned AI engine.
Video on command, Wi-Fi a gogo
"Images" doesn't just mean still images -- it means video, too. And the 603 and 605 are video monsters. The 605 can handle simultaneous 4K (Ultra HD) and 1080p (Full HD) streams, each at 60 fps. The SoC can also handle even more simultaneous streams at lower resolutions. The 603 maxes out at simultaneous 4K and 720p streams, each at 30 fps.
In general, by the way, the QCS603 and 605 are similar products, but the 603 runs at lower power, with a smaller footprint and lower specs to make that possible. For example, the 605 has eight CPU cores and 2x2 802.11ac Wi-Fi, while the 603 is a quad-core product with 1x1 802.11ac Wi-Fi.
High-powered performance in a low-power SoC
Qualcomm's Vision Intelligence Platform musters 2.1 TOPS (tera operations per second -- similar to TFLOPS, but not involving floating point operations) of compute performance for deep neural network inferencing. Supporting this are dual 14-bit Spectra 270 image signal processors, which support dual 16-megapixel sensors.
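For a rough sense of what 2.1 TOPS buys at the edge, consider a back-of-envelope calculation. The per-frame operation count below is my own hypothetical figure for illustration, not a published number for any particular network:

```python
# Back-of-envelope throughput estimate. Only the 2.1 TOPS figure comes
# from the announcement; the per-frame cost is an assumed, illustrative
# value, and real-world utilization would be well below 100%.
TOPS = 2.1                  # stated compute budget, in tera-ops/second
ops_per_frame = 5e9         # assumption: a vision model costing ~5 GOPs per frame

frames_per_second = (TOPS * 1e12) / ops_per_frame
print(round(frames_per_second))  # prints 420 -- inferences/sec at full utilization
```

In other words, at that (assumed) model size there is headroom to score multiple video streams frame-by-frame on the device itself, which is the whole point of doing inferencing at the edge.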
Halt and catch AI fever
Just as PCs in the 1980s dispelled the notion that all high-value computing had to happen on mainframe host computers, intelligent IoT devices, and the chipsets that support them, are showing us now that AI computing can be done at the edge, and not just in the cloud. It's the same principle, really: client devices don't have to be dumb, as long as you don't build them that way.
Get ready for hardware advances like the ones discussed in this post to push the AI state of the art forward. The sooner we get AI out of the data center (as its exclusive location, anyway), the sooner it will become more accessible to ordinary end users and developers alike.