
Ground Truth: Inside Niantic Spatial’s AI location-aware companion
This post is part of our Ground Truth series exploring the future of AI and the physical world.
For too long, AI has been locked inside our pockets or behind our computer screens. But what becomes possible when AI gets a real-world point of view?
Unlocking AI from Its Text Box
Today, artificial intelligence is primarily “book smart,” but its true potential is constrained by the little text box we use to query it. It lacks situational awareness because it can't see the world from our perspective. The next frontier of AI is to unlock its power by layering it onto the world, making it contextually aware of our surroundings.
At AWE in Long Beach, we, in collaboration with our partners at Snap, offered a look at this future. We unleashed Peridot as a spatially intelligent AI agent running on Snap Spectacles, demonstrating what happens when AI can finally see the world through our eyes.
A Glimpse of a Smarter Future
Project Jade, the experience featuring Dot, is a flagship demonstration of what our technology can power. It’s a clear use case for how spatial intelligence can transform planet-scale wayfinding and our understanding of the world.
Attendees could walk through a designated area at the Long Beach Convention Center and have a natural conversation with Dot about their surroundings. The interactions were simple but powerful:
- Ask, “Tell me about that mosaic,” and Dot could provide its history.
- Wonder, “Where’s a good place to sit down?” and Dot would highlight the nearby wave benches.
- Need a bite? Ask, “Find me a nearby restaurant,” and Dot would direct attendees to The Pike Outlets across the street.
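To make that flow concrete, here is a minimal sketch of how spoken requests like these might be routed to different behaviors. The types, the keyword matching, and the pose identifier are illustrative assumptions for this post, not Project Jade’s actual implementation:

```typescript
// Hypothetical shapes for illustration -- not the actual Project Jade API.
type Intent = "describe" | "find_seating" | "find_food";

interface AgentRequest {
  poseId: string;     // assumed identifier for the wearer's VPS-resolved pose
  transcript: string; // speech-to-text of the user's request
}

// Naive keyword routing stands in for whatever intent model the real agent uses.
function classify(transcript: string): Intent {
  const t = transcript.toLowerCase();
  if (t.includes("sit")) return "find_seating";
  if (t.includes("restaurant") || t.includes("eat")) return "find_food";
  return "describe";
}

function handle(req: AgentRequest): string {
  switch (classify(req.transcript)) {
    case "describe":
      return `Looking up what you're facing from pose ${req.poseId}...`;
    case "find_seating":
      return "Highlighting the wave benches a few meters ahead.";
    case "find_food":
      return "The Pike Outlets are across the street -- follow the arrow.";
  }
}

console.log(handle({ poseId: "lbcc-plaza-017", transcript: "Where's a good place to sit down?" }));
```

However the routing is actually implemented, the point stays the same: the user's words alone are ambiguous, and it is the shared first-person view and precise pose that make the answers specific.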
While this isn’t the ultimate vision for our technology, it’s a tangible example of an AI that doesn’t just answer questions, but enhances our experience of the world.
The Tech Giving Dot Street Smarts
Bringing this demo to life required a seamless integration of hardware, software, and a new kind of map. Here’s a look at the core components:
- The First-Person View: The entire experience begins with the hardware: AR glasses, in this case Snap Spectacles. This form factor is revolutionary because it gives our AI a human-like, first-person perspective. Seeing the world from the same point of view as the user is the critical visual input that powers everything else.
- A Precise Sense of Place: That visual feed is then processed by Niantic Spatial’s Visual Positioning System (VPS). While GPS can get you to the right block, it fails indoors and in dense cities. VPS overcomes this by using computer vision to pinpoint the user's location and orientation with centimeter-level accuracy. This precision unlocks the core of modern AR: the ability to anchor, persist, and share AR content and characters like Dot with six degrees of freedom (6DoF), making them feel firmly rooted in the real world. Crucially for developers, this all happens within seconds of starting an AR experience at any VPS-enabled location, and at a fraction of the performance cost of other anchoring solutions.
- An AI Brain Optimized for Context: Finally, with a clear view of the world and a precise understanding of its location, Dot is powered by an AI brain that synthesizes this information. It processes the visual context from the camera, combines it with pre-existing knowledge about that location, and interacts directly with the user, meaning Dot can hold a natural conversation about its surroundings. For the AWE demo, this brain was finely tuned with additional bespoke knowledge about Long Beach, but in the future, the goal is for it to scale globally, pulling from our geospatial AI world map and real-time data from the internet to provide helpful context anywhere. A simplified sketch of how these pieces fit together follows this list.
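To ground those three components, here is a minimal sketch of the localize, anchor, and converse pipeline. Every type and function name below is a hypothetical stand-in for illustration; none of them are Niantic Spatial’s or Snap’s real APIs:

```typescript
// Hypothetical localize -> anchor -> converse pipeline, for illustration only.

interface Pose6DoF {
  position: [number, number, number];          // meters, in the location's map frame
  rotation: [number, number, number, number];  // orientation as a quaternion
}

interface VpsResult {
  locationId: string; // which VPS-enabled location the frame matched
  pose: Pose6DoF;     // centimeter-level device pose within that location
}

// Stand-in for the computer-vision step: camera frame in, precise pose out.
// A real system would match the frame against the location's map data.
async function localize(cameraFrame: Uint8Array): Promise<VpsResult> {
  return {
    locationId: "long-beach-convention-center",
    pose: { position: [12.4, 0.0, -3.1], rotation: [0, 0, 0, 1] },
  };
}

// Anchor a character relative to the resolved pose so it stays put in the world.
function anchorCharacter(fix: VpsResult, offset: [number, number, number]): Pose6DoF {
  const [x, y, z] = fix.pose.position;
  const [dx, dy, dz] = offset;
  return { position: [x + dx, y + dy, z + dz], rotation: fix.pose.rotation };
}

// Fold the location fix and the user's words into the agent's prompt context.
function buildAgentContext(fix: VpsResult, utterance: string): string {
  return `Location: ${fix.locationId}. User position: ${fix.pose.position.join(", ")}. ` +
         `User said: "${utterance}"`;
}

async function main() {
  const fix = await localize(new Uint8Array());       // placeholder camera frame
  const dotPose = anchorCharacter(fix, [0, 0, -1.5]); // Dot floats 1.5 m in front
  console.log(dotPose);
  console.log(buildAgentContext(fix, "Tell me about that mosaic"));
}

main();
```

The design point worth noticing is the ordering: the VPS fix comes first, and both Dot’s placement and the agent’s prompt context derive from that single result, which is what keeps the character and the conversation consistent with each other.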
The Blueprint for Real-World AI
We built this experience to be a benchmark, a clear example for developers of what is possible at the intersection of AR glasses, geospatial AI, and a robust VPS. The same technology that allows Dot to guide a tourist can be used to direct a warehouse worker to a specific package or allow an expert to remotely guide a field technician through a complex repair using AR overlays.
But a demo is only as good as the lessons it teaches. Here’s what we learned from building Project Jade in eight weeks and running it in a live, unpredictable environment:
- Ensure Flawless, Real-World Localization. We know that localization quality is a function of the underlying map data. To deliver a reliable experience at AWE, our team controlled for this by scouting the location's key landmarks on different days and at various times of day, building a map resilient to real-world changes like shifting light. This meticulous process highlights the power of our scalable solution: Niantic Spatial's Large Geospatial Model (LGM). By learning from millions of scans worldwide, the LGM understands not just landmarks but the principles of how their appearance changes with environmental conditions. It then applies that global intelligence to any single location, ensuring the robust, fast localization needed to scale experiences like these globally over time.
- Test Remotely, Then Test Again. You can't over-prepare for a live, real-world experience. We created a “digital twin” of the Long Beach location in Unity, which allowed our team to test remotely using pre-recorded AR sessions. This let us debug and refine the core experience regardless of whether we were onsite (see the sketch after this list).
- Design for Shared Experiences. Augmented reality is most powerful when it's shared. We intentionally chose a location large enough to support multiple users simultaneously. This is a crucial consideration when designing co-located experiences where groups of people interact with AI and digital content together.
- Account for Real-World Connectivity. Unlike a controlled office environment, public spaces have unpredictable network conditions. We tested the experience on different cellular carriers to understand and mitigate potential latency, ensuring Dot's responses felt snappy and interactive; the sketch below includes a simple latency budget check.
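As a concrete illustration of the testing lessons above, here is a minimal replay harness: it walks through pre-recorded session frames, sends each captured utterance to the agent, and flags responses that exceed a latency budget. The recording format, the 300 ms budget, and the simulated queryAgent round trip are all assumptions made for this sketch, not our actual tooling:

```typescript
// Replay pre-recorded AR session frames against the agent, offline.

interface RecordedFrame {
  timestampMs: number;     // when the frame was captured in the original session
  cameraFrame: Uint8Array; // pre-recorded imagery from the AR session
  utterance?: string;      // user speech captured at this frame, if any
}

// Simulated network round trip; a real harness would call the live agent endpoint.
async function queryAgent(frame: RecordedFrame): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 120 + Math.random() * 250));
  return `response to "${frame.utterance}"`;
}

// Replay a session and flag any response too slow to feel snappy and interactive.
async function replaySession(frames: RecordedFrame[], budgetMs = 300): Promise<void> {
  for (const frame of frames) {
    if (!frame.utterance) continue; // only utterance frames trigger the agent
    const start = Date.now();
    const reply = await queryAgent(frame);
    const latencyMs = Date.now() - start;
    const verdict = latencyMs <= budgetMs ? "ok" : "TOO SLOW";
    console.log(`[${verdict}] ${latencyMs}ms -- ${reply}`);
  }
}

replaySession([
  { timestampMs: 0, cameraFrame: new Uint8Array(), utterance: "Tell me about that mosaic" },
  { timestampMs: 4000, cameraFrame: new Uint8Array(), utterance: "Where can I sit?" },
]);
```

Running the same recordings under different network conditions, or on different carriers, turns “does it feel snappy?” into a number you can track from anywhere.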
Connecting the Dots, Globally
The journey from a single VPS location demo to a globally aware AI companion is one we’re actively working on. The next great challenge is connecting the millions of individual VPS-activated locations to enable seamless city-scale and global-scale navigation.
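One way to picture that connection step: treat each VPS-activated location as a node in a graph and route between nodes, with VPS handling the final meters of precision inside each one. The location names, the walking-time edges, and the use of Dijkstra’s algorithm below are illustrative assumptions, not a description of our production system:

```typescript
// Illustrative city-scale routing over a graph of VPS-activated locations.

type LocationId = string;

// Adjacency list: each entry is [neighbor, approximate walking time in minutes].
const cityGraph = new Map<LocationId, Array<[LocationId, number]>>([
  ["convention-center", [["wave-benches", 1], ["pike-outlets", 3]]],
  ["wave-benches", [["convention-center", 1], ["aquarium", 5]]],
  ["pike-outlets", [["convention-center", 3], ["aquarium", 4]]],
  ["aquarium", [["wave-benches", 5], ["pike-outlets", 4]]],
]);

// Dijkstra's shortest path over the location graph.
function route(from: LocationId, to: LocationId): LocationId[] {
  const dist = new Map<LocationId, number>([[from, 0]]);
  const prev = new Map<LocationId, LocationId>();
  const visited = new Set<LocationId>();
  while (true) {
    // Pick the cheapest unvisited node reached so far.
    let current: LocationId | undefined;
    for (const [node, d] of dist) {
      if (!visited.has(node) && (current === undefined || d < dist.get(current)!)) {
        current = node;
      }
    }
    if (current === undefined || current === to) break;
    visited.add(current);
    for (const [next, minutes] of cityGraph.get(current) ?? []) {
      const candidate = dist.get(current)! + minutes;
      if (candidate < (dist.get(next) ?? Infinity)) {
        dist.set(next, candidate);
        prev.set(next, current);
      }
    }
  }
  // Walk the predecessor chain back from the destination.
  const path: LocationId[] = [];
  for (let node: LocationId | undefined = to; node !== undefined; node = prev.get(node)) {
    path.unshift(node);
  }
  return path[0] === from ? path : []; // empty array means no route found
}

console.log(route("convention-center", "aquarium")); // ["convention-center", "wave-benches", "aquarium"]
```

In practice, the hard part is not the routing but building and maintaining the graph itself, which is why connecting those locations is the next great challenge.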
Our mission is to free AI from the text box. By giving AI a true understanding of the physical world, we can transform how people and machines interact with their surroundings, creating solutions that fuse digital utility with real-world places to foster discovery, connection, and a deeper appreciation for the world around us.