Abstract
<jats:p>Autonomous navigation of unmanned aerial vehicles (UAVs) relies on accurate and robust state estimation under uncertain and dynamically changing operating conditions. This review paper analyzes established and emerging sensor fusion methods for UAV localization, mapping, and navigation, with a focus on inertial, visual, GNSS, LiDAR, and radar measurements. Approaches based on Bayesian filtering, optimization-based estimation, and methods incorporating machine learning are compared in terms of their assumptions, computational costs, and practical trade-offs. The literature survey indicates that filtering schemes (the Kalman filter and its extended and unscented variants, KF/EKF/UKF) remain the most efficient for real-time loops with strict latency constraints, whereas optimization-based methods (sliding-window and factor-graph estimators) provide higher long-term trajectory consistency at the cost of increased computational load. The best practical balance of accuracy, robustness, and computational efficiency in challenging field conditions is achieved by hybrid architectures, in which model-based estimators are complemented with learned modules that enhance perception robustness and detect sensor degradation. Finally, open challenges and promising directions for future research toward reliable real-world deployment are outlined.</jats:p>
Key words: Autonomous aerial vehicles, inertial navigation, navigation, sensor fusion, sensor systems, SLAM, state estimation, visual odometry