Google makes it easier to incorporate machine learning into mobile apps

At the Google I/O developer conference, Google launched ML Kit, a machine learning SDK available on Firebase for Android and iOS.

Google on Tuesday is rolling out a new tool that makes it easier for mobile developers to incorporate machine learning into their apps.

The launch of the machine learning SDK -- called ML Kit -- comes on the first day of the annual Google I/O conference, where the tech giant is once again stressing that AI is at the core of its business.

"ML Kit is our way to bring a lot of Google's machine learning technologies that we've developed over many years into a single, easy-to-use package," Brahim Elbouchikhi, Google's ML lead for Android, told ZDNet. The goal, he said, is to "make machine learning just another tool available to mobile developers. It's not exceptional... It's just another part of the toolkit to build really awesome apps."

ML Kit is available on Firebase (Google's mobile app development platform) for both Android and iOS. It comes with five APIs for common ML use cases: recognizing text, detecting faces, detecting landmarks, scanning barcodes, and labeling images. All five are available as cloud APIs, which leverage the power of Google Cloud Platform's machine learning technology. All of them except the landmarks API are also available as free on-device APIs, which process data quickly and without any network connection, but with less accuracy.
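To give a sense of how little code the base APIs require, here is a minimal sketch of on-device text recognition on Android. It assumes the Firebase ML Vision library (`firebase-ml-vision`) is added to the app's Gradle dependencies; class and method names follow the Firebase ML Kit SDK, though exact names can vary by SDK version.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch: run ML Kit's free on-device text recognizer on a bitmap.
// Results arrive asynchronously via Task listeners.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer
    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // result.text is the full recognized string;
            // result.textBlocks exposes per-block bounding boxes.
            println(result.text)
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```

Swapping in the cloud variant of the recognizer is the main change needed to trade offline speed for the higher accuracy of Google Cloud's models.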

In addition to offering the base APIs, ML Kit also takes care of hosting and serving TensorFlow Lite models for developers who want to deploy custom models. This keeps models out of developers' APK/bundles, reducing their app install size. It also enables developers to update their models without having to re-publish apps.
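A hosted model is referenced by name rather than bundled in the APK, roughly as sketched below. The class names (`FirebaseCloudModelSource`, `FirebaseModelManager`) are assumed from the Firebase ML Kit custom-model API at launch and the model name `"my_custom_model"` is a hypothetical placeholder.

```kotlin
import com.google.firebase.ml.custom.FirebaseModelManager
import com.google.firebase.ml.custom.model.FirebaseCloudModelSource

// Sketch: register a TensorFlow Lite model hosted on Firebase.
// The model stays out of the APK, and enabling updates lets new
// versions be picked up without republishing the app.
fun registerHostedModel() {
    val cloudSource = FirebaseCloudModelSource.Builder("my_custom_model")
        .enableModelUpdates(true)
        .build()
    FirebaseModelManager.getInstance().registerCloudModelSource(cloudSource)
}
```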

Google is also experimenting with a feature that enables developers to upload a full TensorFlow model, along with training data, and receive in return a compressed TensorFlow Lite model. Google is still working with developers to test the feature. Compression is a major issue, Elbouchikhi said, given some models can be 100MB in size.

Incorporating machine learning into mobile apps is typically challenging for three reasons, Elbouchikhi said: First, training machine learning models at sufficient scale and quality can be expensive, time-intensive, and simply impractical. Second, building a model optimized for mobile -- in other words, one that can run without draining a phone's battery or taking up all its storage -- is another huge challenge. Finally, experimentation and deployment are difficult.

Consequently, relatively few developers are deploying machine learning in mobile apps today. "We believe at Google when you make things really easy to use," Elbouchikhi said, "you'll see an explosion in innovation and ideas."