Amazon’s Alexa to Sound More Human

Amazon announced that Alexa developers in the US, UK, and Germany now have a way to make the intelligent personal assistant sound more human.

In a blog post, Amazon made public five new Speech Synthesis Markup Language (SSML) tags that can now be used with Alexa in the US, UK, and Germany. The new tags enable Alexa to whisper, substitute a different spoken word for the written text, and change the pitch, volume, and rate of speech. SSML is a standardized markup language that lets developers control how speech is synthesized.
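As an illustration, a response combining these capabilities might look like the sketch below. The whisper effect, prosody controls, and word substitution map to the standard SSML elements shown here; exact attribute values and limits are governed by Amazon's SSML reference, and the sentence content is invented for the example.

```xml
<speak>
    <!-- Whisper effect: Alexa speaks this phrase in a whispered voice -->
    <amazon:effect name="whispered">I can tell you a secret.</amazon:effect>

    <!-- Prosody: adjust rate, pitch, and volume (Amazon caps how far these can go) -->
    <prosody rate="slow" pitch="-10%" volume="loud">
        I can also change how I sound.
    </prosody>

    <!-- Substitution: the written text "W3C" is spoken as the alias -->
    The SSML standard comes from the <sub alias="World Wide Web Consortium">W3C</sub>.
</speak>
```

A developer returns markup like this in a skill's response instead of plain text, and Alexa renders the tagged spans accordingly.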

According to Amazon, the amount of change that can be applied to pitch, volume, and rate is limited, in order to keep speech intelligible and provide the best user experience. As it currently stands, without these new SSML tags, Alexa sounds robotic, especially when reading longer sentences and phrases.

Amazon also announced this week that developers in the UK and Germany can now use speechcons – words and phrases that are specific to a certain country. Speechcons such as “gotcha,” “aloha,” “abracadabra,” and “ahem” are already available in the US.
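Speechcons are requested through SSML as well, using the say-as element with the interjection interpretation. A minimal sketch, with the surrounding sentence invented for the example:

```xml
<speak>
    <!-- Renders the supported speechcon with its special, expressive pronunciation -->
    <say-as interpret-as="interjection">abracadabra!</say-as>
    And just like that, your timer is set.
</speak>
```

Only words and phrases on Amazon's per-region speechcon list get the expressive rendering; anything else inside the tag falls back to normal speech.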

Amazon’s Alexa Skills Kit (ASK) service allows developers to build new skills for Alexa. To date, developers have built more than 10,000 skills with ASK.