Google Maps is changing: three new AI-based features are coming to create realistic images, analyses, and maps


Credit: Google.

Google has announced a series of important updates for Google Maps and Google Earth, all powered by Gemini artificial intelligence. Presented during Google Cloud Next (a conference organized by the Mountain View giant and scheduled for these days), the innovations revolve around three key ideas: creating realistic images anchored to real-world data (Maps Imagery Grounding), designed for construction and film uses; automating the analysis of satellite imagery; and making ready-made AI models available to interpret the territory. The goal is to save time for those who work with geographic data (companies, urban planners, analysts, etc.) by making everything faster and more scalable.

Maps Imagery Grounding with Gemini coming to Google Maps

Among the most interesting innovations is Maps Imagery Grounding, a technology that allows you to generate visual content anchored to the real world. In practice, the AI does not invent landscapes out of thin air: it builds them starting from Street View imagery, with consistent and credible results. Via the Gemini Enterprise platform, you just describe a place in words to obtain, in a few seconds, an image that looks as if it had been taken there.

The practical implications are immediate. In cinema, for example, you can simulate locations without physically traveling, reducing the costs and time of location scouting. In marketing, agencies can set their campaigns in realistic urban contexts without organizing shoots around the world. And by integrating tools like Veo (a Google model that generates videos with AI), you can go directly from an idea to an animated sequence.

Geospatial analysis becomes automatic

Google is also working to make the analysis of aerial and satellite imagery much easier. Until now, examining this type of data meant relying on human operators manually scrolling through huge amounts of visual material: a slow process with wide margins of error.

With the new integration in BigQuery, Google’s cloud platform for data analytics, all of this can be automated. The result? Operations that previously required weeks of work can now be completed in minutes by city planners and data analysts. A huge advantage for professionals who deal with urban planning: thanks to this technology, they will be able to monitor the expansion of a city, identify active construction sites, or plan new infrastructure far more easily than before.

Detection of a construction site (in the yellow box) thanks to the use of AI. Credit: Google.

The new Earth AI Imagery models

Completing the picture are two new Earth AI Imagery models, available in the Google Cloud Model Garden. These models are designed to automatically recognize infrastructure elements in images (bridges, roads, power lines, etc.) without companies having to build and train similar systems from scratch, which requires specialized skills and months of work.

The usefulness of these tools emerges clearly in emergency situations, such as those caused by natural disasters. After a flood or a violent storm, quickly knowing where the greatest damage is concentrated can make the difference, both in the first rescue operations and in making an initial estimate of the damage.

Companies like Vantor, which specializes in spatial intelligence, already use these models to transform raw satellite imagery into operational information: blocked roads, debris, areas to reach first. An analysis that would previously have taken days is now done automatically, with direct benefits for communities affected by disasters.