
NaviNote: Combining Precise Localization and AI for Blind and Low Vision Navigation

Date: 5/1/2026
Category: Research

NaviNote, our CHI 2026 Honourable Mention, gives blind and low vision people precise navigation and a voice in how shared spaces are described.

Imagine you're three metres from where you need to be. You know that, because your phone told you. But GPS has drifted and those three metres could be in any direction. There's no kerb to follow, no sound to orient toward. You're close, but close isn't there.

This is the "last few metres" problem, and it's one of the most persistent frustrations for blind and low vision (BLV) people navigating the real world. NaviNote, a Niantic Spatial research project, was built to close that gap, and we're proud to announce it has been awarded an Honourable Mention at CHI 2026, placing it in the top ~5% of thousands of submissions to the world's premier Human-Computer Interaction conference.

Why GPS isn't enough

Current GPS-based navigation systems can drift several metres from a user's actual position. For most people, that's a minor inconvenience. For BLV people, who can't visually correct for the error, it can be the difference between reaching a destination and standing near it with no way to close the gap.

There's a second problem too. Research has long shown that BLV people benefit from spatial annotations - notes tied to specific physical locations that describe what's there, flag hazards, or share local knowledge. But existing tools don't let BLV users create those annotations independently, in the field, in the moment.

NaviNote was designed to solve both problems together.

How it works

NaviNote runs on a smartphone and uses two positioning layers. Standard GPS provides broad environmental awareness from the start. Then, as the Visual Positioning System (VPS) - the same technology at the core of Niantic Spatial's platform - establishes a precise fix (with an accuracy of centimetres), the system upgrades its understanding of exactly where the user is and which direction they're facing. No pointing the phone at objects required.
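In spirit, this layering means the system always has some position estimate, and silently swaps in the sharper one when it arrives. The sketch below illustrates that idea under simple assumptions; the `Pose` fields, accuracy figures, and `best_pose` selection rule are hypothetical, not NaviNote's actual fusion logic, which isn't described in detail here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    x: float            # metres, local frame (hypothetical representation)
    y: float
    heading_deg: float  # 0 = north, increasing clockwise
    accuracy_m: float   # estimated positional error

def best_pose(gps: Pose, vps: Optional[Pose]) -> Pose:
    """Prefer a VPS fix when one is available and tighter; else fall back to GPS."""
    if vps is not None and vps.accuracy_m < gps.accuracy_m:
        return vps
    return gps

# Before a VPS fix exists, the metres-scale GPS pose is all we have;
# once VPS locks on, its centimetre-scale pose takes over.
gps = Pose(x=0.0, y=0.0, heading_deg=90.0, accuracy_m=5.0)
vps = Pose(x=1.2, y=-0.4, heading_deg=84.0, accuracy_m=0.05)
```

The point of the sketch is the graceful upgrade: callers always get a pose, and precision improves without the user doing anything.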

Navigation is voice-driven throughout. Users can ask what's around them and receive a natural language description of the space. When they want to go somewhere specific, NaviNote gives turn-by-turn directions using a clock-face system ("10 o'clock, 6.4 metres") alongside an audio compass: louder when you're facing the right way, quieter when you're off course. As users get close, the system identifies physical guides - the edge of a flower bed, a pathway - they can follow with a white cane to reach their exact destination.
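The clock-face call-out and the audio compass are both simple functions of the angle between the user's heading and the bearing to the target. A minimal sketch of that mapping, assuming the conventions below (12 o'clock = straight ahead; volume scaled smoothly from 1 when facing the target to 0 when facing away) - the function names and the cosine volume curve are illustrative, not NaviNote's actual implementation:

```python
import math

def clock_direction(bearing_to_target_deg: float, heading_deg: float) -> str:
    """Relative bearing expressed as a clock-face hour (12 = straight ahead)."""
    rel = (bearing_to_target_deg - heading_deg) % 360
    hour = round(rel / 30) % 12 or 12   # 30 degrees per clock hour
    return f"{hour} o'clock"

def compass_volume(bearing_to_target_deg: float, heading_deg: float) -> float:
    """Louder when facing the target, quieter when off course (range 0..1)."""
    rel = math.radians((bearing_to_target_deg - heading_deg) % 360)
    return (1 + math.cos(rel)) / 2

# A target 60 degrees to the user's left sits at 10 o'clock,
# and the compass tone is correspondingly quieter than when facing it.
```

Pairing a discrete cue (the hour) with a continuous one (the volume) lets a user get a coarse direction from speech and then fine-tune by ear.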

As users move through a space, NaviNote automatically surfaces safety-critical annotations and signals the presence of nearby notes with a subtle audio cue, letting users choose when to hear more. And when users want to contribute their own knowledge, they simply speak: "I want to create a note saying there are pink flowers in the centre of the square." The annotation is saved at a precise 3D location, ready to help the next person.
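Conceptually, each note is just text anchored to a 3D point, and the system triggers on proximity - surfacing safety-critical notes directly while signalling the rest with a cue. A toy sketch under those assumptions (the `Annotation` fields, the 5-metre radius, and the split between critical and optional notes are all hypothetical choices for illustration):

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    x: float            # anchored 3D position, metres (hypothetical frame)
    y: float
    z: float
    text: str
    safety_critical: bool = False

def nearby(notes, px, py, pz, radius_m=5.0):
    """Split notes within radius_m of the user's position into
    safety-critical ones (surfaced immediately) and optional ones
    (announced with a subtle cue the user can opt into)."""
    hits = [n for n in notes
            if ((n.x - px) ** 2 + (n.y - py) ** 2 + (n.z - pz) ** 2) ** 0.5
            <= radius_m]
    return ([n for n in hits if n.safety_critical],
            [n for n in hits if not n.safety_critical])

notes = [
    Annotation(2.0, 0.0, 0.0, "Step down at the fountain edge", True),
    Annotation(3.0, 1.0, 0.0, "Pink flowers in the centre of the square"),
    Annotation(40.0, 0.0, 0.0, "Cafe entrance"),  # too far to trigger
]
```

Because the anchor is a precise 3D point rather than a GPS coordinate, a note created by one user can reliably fire at the same physical spot for the next.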

88% vs 38%

We evaluated NaviNote with 18 BLV participants in a public square. With standard GPS navigation, 38% of participants successfully reached their destination. With NaviNote, that figure rose to 88%.

Participants also rated NaviNote as significantly more effective, easier to use, less mentally demanding, and less frustrating. And every participant - all 18 - independently authored their own spatial annotations during the study.

That last point matters as much as the navigation result. Participants didn't just use the system - they contributed to it. They created notes for friends, for the wider BLV community, and for sighted people too. NaviNote became, in the course of a single study session, a shared resource.

What’s next?

Niantic Spatial's VPS technology was built to understand the real world at a level of precision that GPS can't reach. NaviNote demonstrates what becomes possible when that precision is applied with accessibility as the design brief.

The implications extend further. Precise, voice-driven navigation to a specific object. Spatial annotations anchored to exact 3D locations, contributed and consumed by a community. These capabilities are useful for everyone. NaviNote just makes clear how urgently they're needed by some.


NaviNote was presented at CHI 2026. Read the paper or watch the full presentation on YouTube.

Are you working on accessibility, spatial computing, or XR platforms? We'd love to connect.
