Thanks to the proliferation of data and advances in computing, the next decade promises to bring huge advances in AI. This week, Google showcased a variety of AI research projects it's pursuing as an "AI first" company. In some cases, such as Google's translation research, the commercial applications for Google already exist. In other areas, such as interactive textiles, the practical use cases aren't quite clear yet.
In all cases, according to Google AI chief Jeff Dean, Google's AI researchers are focused on the bigger picture.
"We try to do long-term work, and often that provides an arc of direction, where along the path of an eight- to 10-year journey we throw off useful results [for commercial applications]... and then continue to work on those harder problems," Dean said to reporters at Google's San Francisco office. "We're pretty excited for what the next decade is going to hold for us, and for everyone."
Dean highlighted a few of the more interesting problems and opportunities that AI researchers will address over the next decade. Multi-modal learning, for instance, is going to be a "growing trend," he said.
Unlike most of today's machine learning models, which handle a single type of input, multi-modal models will be able to take several kinds -- such as text, audio or visual data -- and "do sensible things with them," Dean explained to ZDNet.
For example, he continued, "regardless of whether you see a picture of a leopard or you hear the word 'leopard' or you see the word leopard written, there's some common response in a model that helps you understand the properties of leopards, [such as] what they look like."
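One way to picture the "common response" Dean describes is a joint embedding space, where inputs from different modalities map to nearby vectors. Below is a minimal toy sketch of that idea in Python -- the hand-written vectors stand in for the neural encoders a real multi-modal model would learn, and are invented purely for illustration:

```python
import math

# Toy "encoders": in a real multi-modal model, neural networks would map
# text, audio, or images into one shared vector space. Here we hard-code
# plausible embeddings for illustration only.
EMBEDDINGS = {
    ("text", "leopard"):  [0.9, 0.1, 0.8],
    ("image", "leopard"): [0.85, 0.15, 0.75],
    ("audio", "leopard"): [0.8, 0.2, 0.7],
    ("text", "bicycle"):  [0.1, 0.9, 0.05],
}

def cosine(a, b):
    """Cosine similarity: values near 1.0 mean the vectors nearly align."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# The word "leopard" and a picture of a leopard land close together...
print(cosine(EMBEDDINGS[("text", "leopard")], EMBEDDINGS[("image", "leopard")]))
# ...while an unrelated concept lands far away.
print(cosine(EMBEDDINGS[("text", "leopard")], EMBEDDINGS[("text", "bicycle")]))
```

The point of the sketch is only the geometry: whichever modality produced the input, related concepts end up near each other, which is what lets one model respond sensibly to a leopard it sees, hears about, or reads about.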
Another upcoming challenge that Dean cited is the ability to "take machine learning models that had to run in the past on large server-based setups and run them on device."
Google's already making strides on this front, with on-device translation services offered in 59 different languages.
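Shrinking a server-scale model down to a phone typically leans on techniques such as quantization -- storing weights as 8-bit integers instead of 32-bit floats to cut memory and compute. The sketch below shows the core arithmetic in plain Python; it is a toy illustration of the general technique, not Google's actual on-device pipeline:

```python
# Post-training quantization, sketched: map float weights onto the
# signed int8 range [-127, 127], cutting storage roughly 4x.
# A toy illustration of the idea, not a production implementation.

def quantize(weights):
    """Scale floats so the largest magnitude maps to 127, then round."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.82, -0.31, 0.05, -1.24, 0.67]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Quantized values fit in a signed byte; restored values stay close.
print(q)
print([round(w - r, 4) for w, r in zip(weights, restored)])
```

The trade-off is a small, bounded rounding error per weight in exchange for a model that fits in a phone's memory and runs without a network connection -- the property that makes offline, on-device translation possible at all.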
While Google and the industry at large have made significant strides in AI in the past few years, public awareness of the technology's potential drawbacks -- and corresponding regulation -- is only now beginning to catch up with the industry.
Google has, in turn, started talking more about the ethical guidelines it applies to its AI research. About a year and a half ago, the company released a set of principles to help guide its development of AI applications. Google also committed to refraining from building AI for technologies that could cause harm, such as weapons.
"As we start to think about how these systems and this research gets out into the world, it's really important for us to think about what are the implications of this work, and how should we be thinking about applying it to certain kinds of problems, and the problems we shouldn't be applying it to," Dean said.
While it's easy to look at Google's commitments and scratch "weaponized drones" off its list of technologies to build, there are plenty of other AI-driven technologies -- even seemingly innocuous ones -- that could cause harm.
Open sourced creativity
Take Magenta, for instance -- an open source project at Google exploring the role of machine learning in creating art and music. Part of the goal is to push forward the frontiers of audio generation, without promoting deepfakes. This week, the team demonstrated how its algorithms can take one audio input -- a person singing, music from an instrument or a cat meowing -- and make it sound like a different instrument, such as a flute or a violin.
What it won't do is synthesize a human voice. The team made a very deliberate choice to train the model only on musical tones and tempo, so it can't generate intelligible speech.
"There are a lot of ethical implications when it comes to generating very convincing speech," said Lamtharn Hantrakul, an AI Resident at Google. "Our model, the worst thing it can do is just generate very bad violin playing or very bad flute playing. Of course, there will be people out there -- this is open source -- that will try to do those things, like with any other technology, but we as a group have this very strong ethical standpoint that the stuff we're training on is not going to be from those domains."
Google also demonstrated ways it could literally weave AI into wearables. Researchers showcased the I/O braid, a touch-sensitive cord that could be used as an input or output device on garments or for wearable electronics. With the I/O braid, you could, for instance, twist your hoodie drawstrings to control your cell phone or tap your headphone cord to jump to the next song.
The cord is made from conductive yarns that are sensitive to touch. Some of the cords in the demo also incorporated optical fibers, enabling visual feedback -- your cord might start glowing, for instance, when you have a new notification.
The research is based on the centuries-old practice of creating different structures with interwoven yarns and textiles.
"We're leveraging these structures as part of the algorithm to simplify the type of gesture recognition we're doing," Google senior research scientist Alex Olwal explained to ZDNet. "In some of the examples, with continuous tracking, it's more of a heuristic algorithm. Some of the more discrete gestures, which expand the types of interactions we can do, are based on machine learning where we've trained the system to recognize specific gestures and execute them."
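Olwal's split between heuristics and learned models can be sketched roughly as follows: a continuous signal such as twist amount maps straight to an output through a simple rule, while discrete gestures are matched against templates that a real system would learn from labeled examples. The sensor readings, gesture set and template values below are invented for illustration, not taken from Google's I/O braid:

```python
# Continuous tracking: a heuristic maps a raw twist reading directly
# to an output value -- no trained model needed.
def twist_to_volume(twist_signal, max_twist=1.0):
    """Clamp and scale a 0..max_twist twist reading to a 0-100 volume."""
    return max(0, min(100, int(100 * twist_signal / max_twist)))

# Discrete gestures: nearest-template matching over a short window of
# (hypothetical) capacitive readings. In a real system these templates
# would be learned from labeled training data rather than hand-written.
TEMPLATES = {
    "tap":        [1.0, 0.0, 0.0, 0.0],
    "double_tap": [1.0, 0.0, 1.0, 0.0],
    "grab":       [1.0, 1.0, 1.0, 1.0],
}

def classify(reading):
    """Return the gesture whose template is closest to the reading."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TEMPLATES, key=lambda g: dist(TEMPLATES[g], reading))

print(twist_to_volume(0.4))             # continuous: heuristic rule
print(classify([0.9, 0.1, 0.8, 0.2]))   # discrete: template match
```

The design choice Olwal describes falls out naturally from this split: heuristics are cheap and predictable for continuous control, while learned recognition pays off for the discrete gestures that expand the interaction vocabulary.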