In a significant move to improve user experience, Google has launched a new update for its Google Lens application, allowing users to search within video clips and receive instant answers about the details in their surroundings. This groundbreaking feature leverages Google’s AI model, “Gemini,” to analyze content and provide accurate responses.
AI Enhancements in Google Lens
The new feature allows English-speaking users on Android and iOS devices to record a video and ask questions about specific details within it. The AI analyzes key frames from the video to respond to user inquiries. The technology goes beyond recognizing static objects, offering insights into dynamic phenomena, such as animal behavior or other events.
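The key-frame idea described above can be sketched in a few lines: rather than running a vision model on every frame, a handful of evenly spaced frames is sampled from the clip. The function name, parameters, and sampling strategy below are illustrative assumptions, not Google's actual implementation.

```python
def key_frame_timestamps(duration_s: float, max_frames: int = 8) -> list[float]:
    """Return up to max_frames evenly spaced timestamps (in seconds).

    Hypothetical sketch: samples the midpoint of each interval so the
    very first and last (often blurry) frames are skipped.
    """
    if duration_s <= 0 or max_frames <= 0:
        return []
    step = duration_s / max_frames
    return [round(step * (i + 0.5), 2) for i in range(max_frames)]

# A 4-second clip sampled down to 8 representative moments:
print(key_frame_timestamps(4.0))
# [0.25, 0.75, 1.25, 1.75, 2.25, 2.75, 3.25, 3.75]
```

Sampling a fixed budget of frames keeps the cost of analysis constant regardless of clip length, which is one plausible reason a feature like this can respond quickly.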
Google’s Director of Product Management, Lou Wang, explained that Google Lens is no longer just a tool for image search but has evolved into a powerful medium for understanding and analyzing video content. To access this feature, users need to join the “Search Labs” program and opt in to the “AI Overviews” experiment.
Voice Interaction with Google Lens
Among the latest updates Google introduced is the ability to interact with Google Lens using voice commands. While Google Lens has been known for its image analysis capabilities, users can now ask questions verbally rather than solely relying on image capture. For instance, users can point their camera at a plant and ask a voice question about its type or how to care for it.
This feature marks a significant advancement in how users interact with technology, simplifying search and exploration without the need for typing or traditional text-based queries. It is now available worldwide to Android and iOS users whose language is set to English.
AI-Driven Video Search
Another feature currently in testing is AI-powered video search, which lets users search video footage for specific details. Users can record a clip and ask questions about its content while filming; Google’s AI then analyzes the video and answers based on the relevant frames.
For example, a user could record a video of a tablet screen and ask something like, “Why is the screen flickering?” Google’s AI would then analyze the video to offer a response. This feature opens significant possibilities for expanding search capabilities in line with users’ evolving needs.
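Conceptually, a request like the one above pairs the sampled frames with the user's question so a multimodal model can reason over both. The payload shape below is purely an illustrative assumption for this article, not Google's real request format.

```python
import base64
import json

def build_video_query(frame_bytes: list[bytes], question: str) -> str:
    """Bundle sampled video frames and a question into one JSON body.

    Hypothetical sketch: frames are base64-encoded so binary image data
    can travel inside a JSON request alongside the text question.
    """
    payload = {
        "question": question,
        "frames": [base64.b64encode(f).decode("ascii") for f in frame_bytes],
    }
    return json.dumps(payload)

# One frame plus the spoken question from the example above:
body = build_video_query([b"\x89PNG\r\n..."], "Why is the screen flickering?")
print(json.loads(body)["question"])  # Why is the screen flickering?
```

The design point is simply that video Q&A reduces to image-plus-text Q&A once key frames are extracted, which is why the same underlying model can serve both.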
Enhancing Online Shopping with AI
In addition to the enhanced search features, Google has streamlined the online shopping process through an update to Google Lens. When users capture an image of a product, the app now displays a well-organized results page featuring product information, including prices, reviews, and available retailers. This update makes shopping easier and more precise, enabling users to compare prices and read reviews before making a purchase decision.
This shopping feature is also linked to Google’s “Circle to Search” technology, which allows users to highlight specific parts of an image or screen to obtain additional details. This update enhances the visual search experience, making it more interactive and efficient.
With these new innovations, Google continues to offer a smarter and more interactive search experience. From voice interactions with Google Lens to searching within video content, the company is working to improve and expand the search tools available to users. These steps are part of Google’s broader strategy to integrate AI into everyday life, offering users faster, easier, and more accurate access to information.