Amazon Web Services on Tuesday announced new capabilities for three of its AI services -- the text-to-speech service Amazon Polly, the real-time translation service Amazon Translate, and the multi-language transcription service Amazon Transcribe.
The expanded capabilities follow a series of similar announcements made recently, all in advance of the annual AWS re:Invent conference. Last year's re:Invent conference was used to roll out a slew of new services, with many bringing customers new machine learning capabilities -- including Amazon Translate and Amazon Transcribe. AI and machine learning are quickly moving from a competitive advantage in the cloud to table stakes, so it makes sense for AWS to improve its existing services ahead of this year's conference.
Specifically, Amazon is announcing support for 14 new languages, distinct accents and voices across Polly, Translate and Transcribe.
Amazon Polly customers now have access to new voices for Castilian Spanish and Italian, as well as a new Mexican Spanish voice. That brings the Amazon Polly portfolio to 57 voices across 28 languages. Polly uses deep learning to synthesize speech that sounds like a human voice. Customers can integrate it into their applications without any machine learning skills, Amazon says.
Meanwhile, Amazon Translate customers are getting access to new languages including Dutch, Swedish, Polish, Danish, Hebrew, Finnish, Korean, and Indonesian. The neural machine translation service now supports 21 languages and 417 translation combinations.
As for Amazon Transcribe, customers are getting access to new accents including British English, Australian English, and Canadian French. This automatic speech recognition (ASR) service lets developers add speech-to-text capability to their applications.
Amazon is also adding streaming transcription to Transcribe, letting users pass a live audio stream to the AWS service and receive text transcripts in real time. The new feature could be useful for a range of use cases and industries, including media, courtroom record keeping, finance, and call centers. For example, a call center could use it to detect keywords in a streaming transcript to trigger specific actions like calling for a supervisor.
Meanwhile, Amazon has also updated Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to classify text. Comprehend -- another service introduced at last year's re:Invent -- now includes support for French, German, Italian, and Portuguese. Additionally, Amazon has expanded the service to identify natural language terms and classify text that's specialized to a customer's team, business or industry.
Also, earlier this month, Amazon made some of its services -- Translate, Comprehend and Transcribe -- more accessible to the healthcare industry by making them HIPAA-eligible (that's the US Health Insurance Portability and Accountability Act of 1996).