This week, Meta began a small trial in the United States of a new multimodal AI assistant for its smart glasses, developed in collaboration with Ray-Ban, a company specializing in premium eyewear.
The company is testing the multimodal AI through an early access program in the United States, with a full public launch planned for 2024.
Thanks to the glasses' built-in camera, the new AI can work intelligently with photos and video, responding to questions about what the wearer is seeing.
Meta’s assistant is voice-controlled, allowing glasses wearers to converse with it much as they would with popular voice assistants such as Amazon’s Alexa or Apple’s Siri.
Some of the new features offered by the multimodal AI include:
The ability to identify and describe objects the wearer is looking at.
The ability to translate written text and describe images.
The ability to suggest clothing or other items that match what the user is wearing.
This step was expected, given Meta’s broad adoption of AI across its products and platforms and its promotion of open-source AI through Llama 2.