AWS adds 22 new languages to Amazon Translate

Ahead of the AWS re:Invent conference, Amazon makes a series of announcements related to AI-powered services and IoT.

Amazon Translate, Amazon Web Services' real-time translation service, is getting an update with support for 22 new languages. The announcement comes a week ahead of the AWS re:Invent conference, where AWS will promote Translate and a slew of other AI-powered tools for its cloud customers. AWS on Monday also announced new services related to image recognition, voice-based UIs and IoT. 


Amazon Translate now supports a total of 54 languages and dialects, with 2,804 language pairs now supported. The neural machine translation service enables customers to easily translate information from one language into many. For instance, Siemens uses it to analyze employee surveys in different languages, while Hotels.com uses it to translate customer reviews into many languages for localized websites around the world. 

The new supported languages are: Afrikaans, Albanian, Amharic, Azerbaijani, Bengali, Bosnian, Bulgarian, Croatian, Dari, Estonian, Canadian French, Georgian, Hausa, Latvian, Pashto, Serbian, Slovak, Slovenian, Somali, Swahili, Tagalog, and Tamil.

Amazon Translate is also expanding to six new regions, making it available in a total of 17 AWS regions. Customers should see some benefits from being able to translate data in the region where it's stored.
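For developers, Translate is exposed through the AWS SDKs as the TranslateText API. The sketch below, in Python, builds request payloads for translating a single review into several of the newly added languages; the language codes and the sample text are illustrative, and the actual boto3 call is shown commented out since it requires AWS credentials.

```python
# Sketch: batch-translating one source text into several of the newly
# supported languages via Amazon Translate's TranslateText API.
# The target-language codes below are assumptions based on common
# ISO codes; check the Translate documentation for the exact set.

def build_translate_request(text, source_lang, target_lang):
    """Build the parameter dict for a TranslateText call."""
    return {
        "Text": text,
        "SourceLanguageCode": source_lang,
        "TargetLanguageCode": target_lang,
    }

# A handful of the 22 new languages: Afrikaans, Albanian, Amharic,
# Azerbaijani, Bengali, Canadian French, Tagalog, Tamil.
new_targets = ["af", "sq", "am", "az", "bn", "fr-CA", "tl", "ta"]

requests = [
    build_translate_request("Great room, friendly staff.", "en", t)
    for t in new_targets
]

# With boto3 and credentials configured, this would run as:
# import boto3
# translate = boto3.client("translate")
# texts = [translate.translate_text(**r)["TranslatedText"] for r in requests]
```

Each request is an independent API call, so a Hotels.com-style workload (one review, many locales) fans out naturally across target languages.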

AWS on Monday morning also announced a new "Custom Labels" feature for Rekognition, Amazon's image recognition and analysis service. The new feature lets customers customize the service to detect unique objects and scenes. For instance, a manufacturer could customize it to identify specific machine parts like "turbochargers" and "transmission torque converters."

Instead of having to train a custom machine learning model from scratch, the Custom Labels feature enables users without any machine learning expertise to train a model with as few as 10 labeled images, AWS says. Once a model is trained, customers can get visualizations to see how it's performing, as well as suggestions for how to improve it. 

The Custom Labels API can process tens of thousands of images stored in Amazon S3 in an hour, AWS says. The feature will be generally available on December 3. 
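Once a custom model is trained and running, inference goes through the DetectCustomLabels API against images in S3. The sketch below builds such a request; the project version ARN, bucket, and object key are hypothetical placeholders, and the boto3 call is commented out since it needs a trained model and credentials.

```python
# Sketch: calling Rekognition Custom Labels on an S3-hosted image.
# The ARN, bucket name, and key below are made-up examples.

def build_detect_request(project_version_arn, bucket, key, min_confidence=50):
    """Build the parameter dict for a DetectCustomLabels call."""
    return {
        "ProjectVersionArn": project_version_arn,
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MinConfidence": min_confidence,
    }

req = build_detect_request(
    "arn:aws:rekognition:us-east-1:123456789012:project/parts/version/name/1",
    "factory-images",
    "line-7/frame-0042.jpg",
)

# With boto3 and a running model, this would run as:
# import boto3
# rekognition = boto3.client("rekognition")
# labels = rekognition.detect_custom_labels(**req)["CustomLabels"]
# for label in labels:
#     print(label["Name"], label["Confidence"])
```

Because the image is referenced by its S3 location rather than uploaded inline, the same pattern scales to the tens-of-thousands-of-images-per-hour throughput AWS cites.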

Amazon also made a number of announcements related to IoT, including an integration between the Alexa Voice Service (AVS) and AWS IoT Core. 

Previously, building the Alexa voice assistant directly into a device required at least 100MB of on-device RAM and ARM Cortex-A class microprocessors for compute. It was also a complex process. With the new integration, bringing AVS to a device only requires 1MB of RAM and ARM Cortex-M class microcontrollers. 

AWS says the integration lowers the cost of building Alexa into a device by up to 50 percent by offloading compute- and memory-intensive workloads to the cloud. The feature should make it easier to build Alexa into simple products like light switches or thermostats. 

Amazon also announced new features for AWS IoT Greengrass, including container support. The Greengrass service allows customers to run AWS compute, messaging, data caching, and sync capabilities on connected devices. By packaging applications into a Docker container image, customers should be able to deploy applications to their IoT devices, even if those applications weren't developed in Greengrass-supported languages.
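In practice, Docker workloads like this are typically described with a compose file that points at a registry-hosted image. The fragment below is a hypothetical illustration of the idea, not Greengrass's documented configuration format; the image name, registry, and port are made up.

```yaml
# Hypothetical docker-compose.yml describing a containerized app
# to run on an edge device. The registry URL and service details
# are illustrative placeholders.
version: "3.3"
services:
  sensor-app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/sensor-app:latest
    ports:
      - "8080:8080"
    restart: always
```

The point of the feature is that the application inside the image can be written in any language or runtime the container supports, not just the languages Greengrass Lambda functions support natively.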