Interpretability Research on Semantic Segmentation Models in Space-Based Autonomous Perception
Abstract
With the continuous advancement of deep space exploration and intelligent autonomous systems, semantic segmentation models have become increasingly valuable for spatial environmental perception, playing a critical role in planetary terrain recognition, path planning, and risk assessment for space-based autonomous sensing. However, most deep learning-based semantic segmentation models are "black boxes" whose internal decision-making processes are difficult to interpret, which significantly limits their credibility and controllability in mission-critical scenarios. To address the shortage of interpretability research on semantic segmentation models for space-based autonomous perception, this study applies a perturbation-based interpretability method, RISE (Randomized Input Sampling for Explanation), combined with a deletion mechanism, to representative Mars remote sensing imagery. By visualizing and intervening in the model's pixel-level saliency regions, and by systematically analyzing how heatmaps vary under different saliency mask settings and how deleting salient regions affects model predictions, this work reveals which features the model relies on for terrain classification. The findings expose issues such as excessive dependence on color and overlapping saliency regions across categories. On this basis, the study proposes targeted optimization strategies to improve model transparency and reliability, providing theoretical support and a technical pathway for the interpretable deployment of semantic segmentation models in space-based autonomous systems.
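The RISE-plus-deletion procedure summarized above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: a toy brightness-based scoring function stands in for the segmentation model, and all function names, grid sizes, and parameters here are assumptions for demonstration.

```python
# Hypothetical minimal sketch of RISE saliency estimation plus a deletion curve.
# A stand-in scoring function replaces the real segmentation model.
import numpy as np

rng = np.random.default_rng(0)

def rise_saliency(image, score_fn, n_masks=500, cell=4, p=0.5):
    """Estimate a pixel-level saliency map by randomized input sampling.

    score_fn maps an image (H, W) -> scalar class score in [0, 1].
    Binary masks are sampled on a coarse grid (keep probability p),
    then upsampled to pixel resolution; each pixel's saliency is the
    average model score over the random masks that kept that pixel.
    """
    H, W = image.shape
    sal = np.zeros((H, W))
    mask_sum = np.zeros((H, W))
    for _ in range(n_masks):
        grid = (rng.random((H // cell, W // cell)) < p).astype(float)
        mask = np.kron(grid, np.ones((cell, cell)))  # upsample coarse grid
        score = score_fn(image * mask)               # score the masked input
        sal += score * mask
        mask_sum += mask
    return sal / np.maximum(mask_sum, 1e-8)          # normalize by coverage

def deletion_curve(image, saliency, score_fn, steps=10):
    """Delete pixels in order of decreasing saliency; record score decay.

    A faithful saliency map should make the score drop quickly as the
    most salient regions are removed (low area under the curve).
    """
    order = np.argsort(saliency.ravel())[::-1]
    img = image.copy().ravel()
    scores = [score_fn(img.reshape(image.shape))]
    chunk = len(order) // steps
    for s in range(steps):
        img[order[s * chunk:(s + 1) * chunk]] = 0.0  # zero out next chunk
        scores.append(score_fn(img.reshape(image.shape)))
    return np.array(scores)

# Toy "model": the score is the mean brightness of the top-left quadrant,
# so saliency should concentrate there and deletion should erase the score.
def toy_score(img):
    return float(img[:8, :8].mean())

image = np.zeros((16, 16))
image[:8, :8] = 1.0
sal = rise_saliency(image, toy_score)
curve = deletion_curve(image, sal, toy_score)
```

In this toy setting the saliency map concentrates on the quadrant that actually drives the score, and the deletion curve falls to zero once those pixels are removed, which mirrors the deletion-based faithfulness check described in the abstract.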