Google announced at its annual developer conference that capabilities of its Gemini AI model will soon be available to developers on the Google Maps Platform, starting with the Places API.
With this new option, developers can display AI-generated summaries of places and areas in their apps and websites.
The summaries are produced by Gemini's analysis of insights from the Google Maps community, which counts more than 300 million contributors. As a result, developers no longer need to write their own place descriptions, saving the time and effort previously spent crafting a summary for every location that app users search for.
For example, if a developer has a restaurant booking app, this enhancement helps customers choose the restaurant that suits them best.
When users search for restaurants in the app, they can easily see all the necessary information, such as the type of cuisine, offers, and the atmosphere of the place.
Each search result includes a concise, AI-generated summary of the restaurant, which users can expand to get more information about the place.
The new summaries cover different types of places such as restaurants, shops, supermarkets, parks, and cinemas.
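A minimal sketch of how a developer might request one of these summaries through the Places API (New) Place Details endpoint. The `generativeSummary` field name follows Google's announcement of the feature, but treat the exact field names and response shape as assumptions and confirm them against the official Places API reference; the place ID and API key below are placeholders.

```python
# Sketch: building a Place Details request that asks for the AI-generated
# summary of a place. No network call is made here; this only assembles the
# URL and headers the app would send.

def build_place_details_request(place_id: str, api_key: str) -> dict:
    """Build the URL and headers for a Places API (New) Place Details call
    that requests only the display name and the generative summary."""
    return {
        "url": f"https://places.googleapis.com/v1/places/{place_id}",
        "headers": {
            "X-Goog-Api-Key": api_key,
            # The field mask limits the response (and billing) to the fields
            # the app actually shows. "generativeSummary" is the assumed name
            # of the Gemini-produced summary field.
            "X-Goog-FieldMask": "displayName,generativeSummary",
        },
    }

request = build_place_details_request("PLACE_ID_HERE", "YOUR_API_KEY")
```

In practice, the app would send this request with any HTTP client and render the summary text from the response, keeping the expandable "more information" view described above.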
Google is also working on AI-powered contextual search results through the Places API. When users search for places in a developer's app, the developer can surface reviews and photos relevant to that specific search.
If a developer has an app that allows users to explore local restaurants, their users can search for suitable restaurants and see a list of relevant dining places, along with ratings and restaurant-related images.
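The restaurant-discovery scenario above can be sketched as a Places API (New) Text Search request. The `contextualContents` field name is an assumption based on Google's announcement of contextual results; verify it, along with the response structure, in the official Places API documentation before building on it.

```python
# Sketch: building a Text Search request whose field mask asks for ratings
# and query-relevant contextual content (review snippets and photos).
# No network call is made; this only assembles the request.

def build_text_search_request(query: str, api_key: str) -> dict:
    """Build a Places API (New) Text Search request for a free-text query
    such as "vegan ramen near downtown"."""
    return {
        "url": "https://places.googleapis.com/v1/places:searchText",
        "headers": {
            "Content-Type": "application/json",
            "X-Goog-Api-Key": api_key,
            # "contextualContents" is the assumed field carrying reviews and
            # photos tied to the user's query, per the announcement.
            "X-Goog-FieldMask": "places.displayName,places.rating,contextualContents",
        },
        "body": {"textQuery": query},
    }

request = build_text_search_request("family-friendly Italian restaurants", "YOUR_API_KEY")
```

The app would POST the body to the URL with those headers, then list each place alongside its rating and the query-relevant reviews and images from the response.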
Contextual search results are available globally, while place summaries are initially limited to the United States; Google plans to expand the service to other countries in the future.