If you’re wondering when voice assistants will get the generative AI treatment, Amazon has your answer.
On Wednesday, the retail giant previewed an evolved version of Alexa, powered by generative AI. Soon, Amazon's voice assistant will run on a proprietary large language model (LLM) that is "customized and optimized specifically for voice interactions, and the things we know our customers love: getting real-time information, efficient smart home control, and maximizing their home entertainment," the announcement said.
Ever since OpenAI's ChatGPT kicked off a generative AI frenzy nearly a year ago, techies have wondered if and when voice assistants would get a much-needed upgrade. Integrating LLMs with voice assistants seemed like a perfect use case. But the process is more complicated than deploying new code or updating software, because existing voice assistants rely on less dynamic AI methods. Today's voice assistants apply machine learning and natural language processing to a limited database of words and phrases. LLMs, by contrast, can generate new responses and build on what they have learned, becoming smarter over time. So basically, Alexa keeps the same name but gets completely new inner workings.
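To make that distinction concrete, here is a minimal, purely illustrative Python sketch (not Amazon's implementation; the INTENTS table and generate_reply function are hypothetical) contrasting a classic intent-matching assistant with an LLM-backed one:

```python
# Hypothetical sketch: classic intent matching vs. an LLM-backed assistant.
# Nothing here reflects Amazon's actual architecture.

# Classic voice assistant: the request has to match a known phrase/intent.
INTENTS = {
    "turn on the lights": "smart_home.lights_on",
    "what's the weather": "weather.today",
}

def classic_assistant(utterance: str) -> str:
    """Look the utterance up in a fixed table of supported phrases."""
    intent = INTENTS.get(utterance.lower().strip())
    return intent or "Sorry, I don't know how to help with that."

# LLM-backed assistant: the model generates a response to arbitrary requests,
# optionally conditioned on earlier turns of the conversation.
def llm_assistant(utterance: str, history: list[str], generate_reply) -> str:
    """Build a prompt from prior turns and let a language model respond.

    `generate_reply` is a stand-in for whatever LLM call is actually used.
    """
    prompt = "\n".join(history + [f"User: {utterance}", "Assistant:"])
    return generate_reply(prompt)
```

The point of the sketch is the shape of the two approaches: the first can only route requests it already knows, while the second can respond to phrasing it has never seen and carry context between turns.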
Some of the new Alexa features highlighted in the announcement include lower latency, sensors inside Amazon Echo devices that pick up non-verbal cues, and integration with third-party APIs, including the fictional-character app Character.AI. Overall, the new Alexa will be able to understand context, pull information from previous conversations, and become more personalized to your family the more you use it.
Since this is just a preview, Amazon didn't share many details about the new Alexa's timeline. But the company noted that a free preview will soon be available to Alexa users in the US.
Credit: mashable.com