Abstract
Embodied visual navigation remains a challenging task, as agents must explore unknown environments with limited knowledge. Existing zero-shot studies have shown that incorporating memory mechanisms to support goal-directed behavior can improve long-horizon planning performance. However, they overlook visual frontier boundaries, which fundamentally dictate future trajectories and observations, and fall short of inferring the relationship between partial visual observations and navigation goals. In this paper, we propose Semantic Cognition Over Potential-based Exploration (SCOPE), a zero-shot framework that explicitly leverages frontier information to drive potential-based exploration, enabling more informed and goal-relevant decisions. SCOPE estimates exploration potential with a Vision-Language Model and organizes it into a spatio-temporal potential graph, capturing boundary dynamics to support long-horizon planning. In addition, SCOPE incorporates a self-reconsideration mechanism that revisits and refines prior decisions, enhancing reliability and reducing overconfident errors. Experimental results on two diverse embodied navigation tasks show that SCOPE outperforms state-of-the-art baselines by 4.6% in accuracy. Further analysis demonstrates that its core components lead to improved calibration, stronger generalization, and higher decision quality.
Overview of SCOPE. The agent predicts frontier utility via a VLM-based estimator and encodes it into a structured potential graph for spatio-temporal reasoning. Action decisions are guided by this graph and further refined by a self-reconsideration module to avoid impulsive errors.
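To make the pipeline above concrete, here is a minimal illustrative sketch of potential-based frontier selection. All names (`Frontier`, `estimate_potential`, `propagate`) are hypothetical, the VLM estimator is replaced by a fixed mock lookup, and the neighbour-averaging propagation rule is one simple assumption about how scores could spread over the potential graph, not the paper's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class Frontier:
    """Hypothetical frontier record: an id, a map position, and a potential."""
    fid: int
    position: tuple          # (x, y) in map coordinates
    potential: float = 0.0   # VLM-estimated exploration potential in [0, 1]

def estimate_potential(frontier, goal_text):
    """Stand-in for the VLM-based potential estimator.
    A real system would query a vision-language model with the frontier
    image and the goal; here we use a fixed lookup purely for illustration."""
    mock_scores = {0: 0.2, 1: 0.8, 2: 0.5}
    return mock_scores.get(frontier.fid, 0.0)

def propagate(frontiers, edges, alpha=0.5, steps=2):
    """One simple choice of spatial propagation over the potential graph:
    each frontier's score is blended with the mean of its neighbours',
    repeated for a few steps."""
    scores = {f.fid: f.potential for f in frontiers}
    neigh = {f.fid: [] for f in frontiers}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    for _ in range(steps):
        new = {}
        for fid, s in scores.items():
            if neigh[fid]:
                m = sum(scores[n] for n in neigh[fid]) / len(neigh[fid])
                new[fid] = (1 - alpha) * s + alpha * m
            else:
                new[fid] = s
        scores = new
    return scores

def select_frontier(frontiers, edges, goal_text):
    """Score frontiers, propagate over the graph, pick the best frontier id."""
    for f in frontiers:
        f.potential = estimate_potential(f, goal_text)
    scores = propagate(frontiers, edges)
    return max(scores, key=scores.get)

# Three frontiers on a line: 0 -- 1 -- 2.
frontiers = [Frontier(0, (0, 0)), Frontier(1, (1, 0)), Frontier(2, (2, 0))]
edges = [(0, 1), (1, 2)]
best = select_frontier(frontiers, edges, "find the kitchen")
```

Note that propagation can change the winner: frontier 1 has the highest raw score, but after blending with neighbours, a frontier adjacent to consistently high-potential regions may be preferred, which is the intuition behind reasoning over a graph rather than raw per-frontier scores.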
Performance comparison on GOAT-Bench and A-EQA. Bold indicates the best performance.
Performance comparison between SCOPE and 3D-Mem. Top: Results on the GOAT-Bench and A-EQA benchmarks, covering goal-based navigation (GB) and embodied question answering (EQA) tasks. Bottom: Detailed breakdown of GOAT-Bench SR and SPL across object-, image-, and description-goal settings. SCOPE achieves higher average performance and lower variance than 3D-Mem.
Calibration of 3D-Mem and SCOPE. "ECE" denotes the expected calibration error (×100), with lower values indicating better calibration. The dashed line denotes perfect calibration, and the bar colors become darker as they approach ideal calibration.
Ablation study evaluating the contribution of SCOPE components. SCOPE w/o F. Img. removes the frontier image input to the agent while retaining it for the potential estimator. SCOPE w/o PG. disables the potential graph module, exposing the agent only to raw estimated potential scores without spatial propagation.
BibTeX
@misc{wang2025expandscopesemanticcognition,
title = {Expand Your SCOPE: Semantic Cognition over Potential-Based Exploration for Embodied Visual Navigation},
author = {Ningnan Wang and Weihuang Chen and Liming Chen and Haoxuan Ji and Zhongyu Guo and Xuchong Zhang and Hongbin Sun},
year = {2025},
eprint = {2511.08935},
archivePrefix = {arXiv},
primaryClass = {cs.RO},
url = {https://arxiv.org/abs/2511.08935},
}