DFD-SLAM: Visual SLAM with Deep Features in Dynamic Environment
Blog Article
Visual SLAM is a key technology for mobile robots. Existing feature-based visual SLAM systems suffer degraded tracking and loop-closure performance in complex environments. We propose the DFD-SLAM system to deliver strong accuracy and robustness across diverse environments.
Initially, building on the ORB-SLAM3 system, we replace the original feature-extraction component with the HFNet network and introduce a frame rotation estimation method. This method determines the rotation angle between consecutive frames in order to select superior local descriptors. Furthermore, we replace the bag-of-words approach with CNN-extracted global descriptors.
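No code accompanies this summary, so the following is only a minimal sketch of the retrieval idea, not DFD-SLAM's implementation: keyframes are indexed by their CNN global descriptors, and loop-closure candidates are the keyframes whose descriptors have the highest cosine similarity to the query frame, replacing a bag-of-words lookup. All class and function names here are hypothetical.

```python
import numpy as np

class GlobalDescriptorIndex:
    """Loop-closure candidate retrieval from CNN global descriptors.

    Stands in for a bag-of-words database: keyframes whose global
    descriptors are most similar to the query become loop candidates.
    Names are illustrative, not from the DFD-SLAM codebase.
    """

    def __init__(self, similarity_threshold=0.8):
        self.descriptors = []    # one L2-normalized vector per keyframe
        self.keyframe_ids = []
        self.similarity_threshold = similarity_threshold

    def add_keyframe(self, kf_id, global_desc):
        d = np.asarray(global_desc, dtype=np.float32)
        self.descriptors.append(d / np.linalg.norm(d))
        self.keyframe_ids.append(kf_id)

    def query(self, global_desc, top_k=5):
        """Return (keyframe_id, similarity) pairs above the threshold."""
        if not self.descriptors:
            return []
        q = np.asarray(global_desc, dtype=np.float32)
        q = q / np.linalg.norm(q)
        sims = np.stack(self.descriptors) @ q   # cosine similarity
        order = np.argsort(-sims)[:top_k]
        return [(self.keyframe_ids[i], float(sims[i]))
                for i in order if sims[i] >= self.similarity_threshold]
```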
Subsequently, we develop a precise removal strategy that combines semantic information from YOLOv8 to accurately eliminate dynamic feature points.
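As a rough illustration of the core idea (not the paper's actual strategy, which is more precise), keypoints that fall inside YOLOv8 segmentation masks of presumed-dynamic classes can be discarded before pose estimation. The ultralytics API calls below are real; the surrounding integration and the choice of dynamic classes are assumptions.

```python
import cv2
import numpy as np
from ultralytics import YOLO  # pip install ultralytics

DYNAMIC_CLASSES = {"person"}        # classes assumed dynamic; extend as needed
model = YOLO("yolov8n-seg.pt")      # pretrained segmentation model

def filter_dynamic_keypoints(image_bgr, keypoints):
    """Drop keypoints landing on pixels that belong to dynamic objects.

    keypoints: array of (x, y) pixel coordinates, shape (N, 2).
    Returns the subset of keypoints outside all dynamic-object masks.
    """
    result = model(image_bgr, verbose=False)[0]
    if result.masks is None:
        return keypoints            # no segmented objects in this frame

    h, w = image_bgr.shape[:2]
    dynamic_mask = np.zeros((h, w), dtype=bool)
    for mask, cls_id in zip(result.masks.data, result.boxes.cls):
        if result.names[int(cls_id)] in DYNAMIC_CLASSES:
            m = mask.cpu().numpy() > 0.5
            if m.shape != (h, w):   # masks may come at inference resolution
                m = cv2.resize(m.astype(np.uint8), (w, h)) > 0
            dynamic_mask |= m

    xs = np.clip(keypoints[:, 0].astype(int), 0, w - 1)
    ys = np.clip(keypoints[:, 1].astype(int), 0, h - 1)
    return keypoints[~dynamic_mask[ys, xs]]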
On the TUM-VI dataset, DFD-SLAM improves over ORB-SLAM3 by 29.24% in the corridor sequences, 40.07% in the magistrale sequences, 28.75% in the room sequences, and 35.26% in the slides sequences.
On the TUM-RGBD dataset, DFD-SLAM demonstrates a 91.57% improvement over ORB-SLAM3 in highly dynamic scenarios. This demonstrates the effectiveness of our approach.
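These percentages are presumably relative reductions in absolute trajectory error (ATE RMSE), the standard SLAM accuracy metric; the source does not state the metric explicitly. Under that assumption, they would be computed as follows.

```python
def relative_improvement(baseline_rmse, ours_rmse):
    """Percent reduction in trajectory error relative to the baseline
    (assumed definition of the reported improvement figures)."""
    return 100.0 * (baseline_rmse - ours_rmse) / baseline_rmse

# e.g., a 91.57% improvement would mean the ATE RMSE is about 8.4%
# of ORB-SLAM3's on that sequence:
# relative_improvement(1.0, 0.0843) ≈ 91.57
```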