Niantic Spatial & The Large Geospatial Model
Many are declaring 2026 the year of the World Model. After years of language models teaching AI how to reason, the industry is racing to build models that can finally understand and predict the physical world itself.
We agree. The next phase of AI will be defined not just by reasoning and language, but by a grounded understanding of the physical world.
I also believe that for AI to truly inhabit our world, it cannot rely on imagination alone. The story of 2026 won't be about a single model type; it will be about the combination of three distinct layers of intelligence: Large Language Models (LLMs), World Foundation Models (WFMs), and Large Geospatial Models (LGMs).
Spatial intelligence needs grounding
In its report on the digital transformation of the US economy, Citigroup estimates that roughly 80% of global economic activity (logistics, construction, energy, and transportation) depends on the physical world. Yet most AI investment has so far focused on digital content. In 2026, spatial intelligence, built with LGMs and WFMs working in tandem, will emerge as the critical component of the broader AI movement:
- World Foundation Models (WFMs) aim to simulate the physical world with AI. They generate synthetic worlds for games and visual effects, and create data for robotics training and simulation. This is exciting and powerful: these models will provide a foundation for AI that can broadly understand the world based on what it is most likely to look like.
- Large Geospatial Models (LGMs) capture the structure and semantics of the real world in an AI-native way. They are 3D models of the real world that enable spatial intelligence for humans and machines, rooted in georeferenced, real-world data such as 3D scans and centimeter-level positioning.
At Niantic Spatial, we are building the LGM. While a World Model might predict a "plausible" street corner, our LGM provides the ground truth required for a machine to operate safely. It answers the fundamental questions: Where am I? How am I oriented? What am I looking at, and how does that inform my next decision?
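Those questions map naturally onto a data structure. As a hedged illustration only, here is a minimal Python sketch of what an answer to an LGM-style localization query might contain; the class names, fields, and values are assumptions made for this post, not Niantic Spatial's API.

```python
# A minimal sketch of the kind of answer an LGM-backed localization query could
# return. All names, fields, and values are illustrative assumptions, not a real API.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class GeoPose:
    """Where am I, and how am I oriented? A georeferenced 6-DoF pose."""
    latitude: float                                   # WGS84 degrees
    longitude: float                                  # WGS84 degrees
    altitude_m: float                                 # meters above the ellipsoid
    orientation: Tuple[float, float, float, float]    # quaternion (w, x, y, z), Earth-fixed frame
    accuracy_m: float                                 # estimated positional error; the goal is centimeter-level

@dataclass
class SceneUnderstanding:
    """What am I looking at? Semantics attached to the surrounding geometry."""
    pose: GeoPose
    visible_objects: List[str] = field(default_factory=list)

# Example of the shape of a response, with made-up values:
scene = SceneUnderstanding(
    pose=GeoPose(51.500729, -0.124625, 12.3, (1.0, 0.0, 0.0, 0.0), 0.04),
    visible_objects=["crosswalk", "loading dock", "doorway"],
)
print(scene.pose.accuracy_m, scene.visible_objects)
```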
Equipped with this understanding, a phone, robot, pair of AR glasses, or drone can effectively function as a spatial AI assistant or agent. For example (a rough sketch of such an agent loop follows this list):
- Navigating complex urban environments to enable accurate and efficient delivery.
- Aligning work orders, inspection comments, and documentation with the as-built environment during a field inspection.
- Providing mission-critical location data to enable navigation in a GPS-denied environment.
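To make the agent idea concrete, here is a rough, self-contained Python sketch of a single decision step in such a loop. Every function below (localize, plan_next_action, agent_step) is a hypothetical placeholder for the roles described above, not a real SDK call, and the returned values are dummy data.

```python
# Hypothetical spatial-agent decision step: localize against a georeferenced map,
# then let a planner reason over the grounded context. Placeholder logic throughout.

def localize(frame):
    """Stand-in for visual positioning against a georeferenced 3D map (dummy values)."""
    return {"lat": 51.5007, "lon": -0.1246, "heading_deg": 94.0,
            "accuracy_m": 0.05, "visible": ["loading dock", "doorway", "unit 4B sign"]}

def plan_next_action(goal, scene):
    """Stand-in for an LLM planner reasoning over grounded spatial context."""
    return f"Head toward the {scene['visible'][0]} to complete: {goal}"

def agent_step(frame, goal):
    scene = localize(frame)
    if scene["accuracy_m"] > 0.10:      # refuse to act on an imprecise position fix
        return "hold: localization too uncertain to act"
    return plan_next_action(goal, scene)

print(agent_step(frame=None, goal="deliver the package to unit 4B"))
```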
Real-world data is hard to come by
Synthetic data scales infinitely and helps cover rare cases through simulation. But as AI moves from digital reasoning to physical action, the real-world data it needs becomes increasingly scarce.
Real-world data is complementary and critical to unlocking physical AI at scale, especially when:
- Location accuracy is critical to the task (think of visual navigation)
- The environment is messy and the data is chaotic (much like the real world)
- A user or customer wants to bring their own data (BYOD) from a customer-owned environment
- Semantic understanding of the environment is needed to inform decisions (such as the next action)
At Niantic Spatial, we leverage an extraordinary corpus of real-world data to train our LGM. With this ever-expanding, georeferenced, real-world dataset, we are able to provide the spatial intelligence needed to move AI into the physical world effectively.
True embodied AI will employ LLMs, WFMs, and the LGM
The interplay between these models is where the real value lies. The most capable real-world AI systems will combine models rather than choosing one. Where a WFM provides baseline training through simulation, an LGM provides verifiable, precise real-world understanding.
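One way to picture that complementarity, purely as an assumption-laden sketch: a WFM-style simulator supplies plentiful synthetic scenes for pre-training, while scarcer LGM-style georeferenced captures anchor grounding and evaluation. The function names below are illustrative placeholders, not real APIs.

```python
# Illustrative only: synthetic scenes are cheap and unlimited; georeferenced real
# scenes are scarce but verifiable, so they anchor grounding and evaluation.

def sample_synthetic_scene(seed: int) -> dict:
    """WFM role: generate a plausible scene; scales to any number of examples."""
    return {"source": "simulated", "seed": seed}

def load_georeferenced_scene(scan_id: str) -> dict:
    """LGM role: a measured, georeferenced scan with verified position and semantics."""
    return {"source": "real", "scan_id": scan_id, "accuracy_m": 0.03}

training_scenes = [sample_synthetic_scene(i) for i in range(100_000)]               # plentiful
grounding_scenes = [load_georeferenced_scene(s) for s in ("scan-001", "scan-002")]  # scarce, trusted
print(f"{len(training_scenes)} synthetic scenes for pre-training, "
      f"{len(grounding_scenes)} real scenes for grounding and evaluation")
```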
By the end of 2026, the most capable AI systems will no longer be trapped behind screens. They will navigate our streets, factories, and homes using a shared understanding of space. We are building the 3D model of the world that finally allows humans and machines to communicate in the same physical language.