Google is preparing to host its annual Google I/O developer conference next week, and of course it will be focused on artificial intelligence. The company hasn't hidden that. Since last year's I/O, Google has debuted Gemini, a new, more powerful model designed to compete with OpenAI's ChatGPT, and has been testing Gemini-powered features across Search, Google Maps, and Android. Expect to hear a lot about all of this at this year's event.
Google I/O begins on Tuesday, May 14 at 10 a.m. PT / 1 p.m. ET with a keynote address. You can watch it on Google's website or on its YouTube channel via the livestream link, which is also embedded at the top of this page. (There is also a version with an American Sign Language interpreter.) Set aside plenty of time; the I/O keynote usually runs several hours.
Google is also likely to focus on how it plans to turn the smartphone into an AI gadget, which means more generative AI features in Google apps. The company has been working on AI features that help with things like dining and shopping, or finding electric vehicle chargers in Google Maps, for example. Google is also testing a feature that uses AI to call a business and wait on hold for you until someone is actually available to talk.
I/O may also see the debut of a new, more personal version of Google's digital assistant, reportedly called "Pixie." The Gemini-based assistant is expected to offer multimodal features, such as the ability to take photos of objects to learn how to use them or get directions to where they can be purchased.
That sort of thing could be bad news for devices like the Rabbit R1 and the Humane AI Pin, each of which has recently launched and struggled to justify its existence. Right now, the only advantage those gadgets may have is that it's quite difficult (though not impossible) to use a smartphone as an AI wearable.
Credit: www.theverge.com