- https://github.com/KTH-RPL/DynamicMap_Benchmark
- Dynablox: Real-time Detection of Diverse Dynamic Objects in Complex Environments
- (IROS 2022) CFP-SLAM: A Real-time Visual SLAM Based on Coarse-to-Fine Probability in Dynamic Environments
- (IROS 2022) DRG-SLAM: A Semantic RGB-D SLAM using Geometric Features for Indoor Dynamic Scene
- (IEEE RA-L’22) DynaVINS: A Visual-Inertial SLAM for Dynamic Environments, code: https://github.com/url-kaist/dynaVINS
- Non-deep learning approach, using constraints to remove feature points on moving objects
- DeFlowSLAM: Self-Supervised Scene Motion Decomposition for Dynamic Dense SLAM
- Zhejiang University; appears to build on DROID-SLAM, decomposing optical flow into dynamic and static parts for subsequent processing. The results look promising. page, code
- Efficient Spatial-Temporal Information Fusion for LiDAR-Based 3D Moving Object Segmentation
- Haomo.AI, code, dynamic detection
- [2203.03923] ROLL: Long-Term Robust LiDAR-based Localization With Temporary Mapping in Changing Environments IROS 2022
- code: https://github.com/HaisenbergPeng/ROLL
- POCD: Probabilistic Object-Level Change Detection and Volumetric Mapping in Semi-Static Scenes
- RSS 2022, map updating in semi-static scenes
- J. Schauer and A. Nüchter, “The Peopleremover—Removing Dynamic Objects From 3-D Point Cloud Data by Traversing a Voxel Occupancy Grid,” IEEE Robot. Autom. Lett., vol. 3, no. 3, pp. 1679–1686, Jul. 2018, doi: 10.1109/LRA.2018.2801797.
- Removes dynamic objects by traversing a voxel occupancy grid. The approach has shortcomings, but the paper proposes many tricks to address them, and the results look quite good.
- code, video
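The Peopleremover entry above rests on free-space carving: a voxel that a later scan's rays see *through* cannot be permanently occupied, so points stored there are treated as dynamic. Below is a minimal sketch of that idea; the grid resolution, sampling-based ray traversal, and all names are illustrative only (the paper uses exact voxel traversal plus additional tricks).

```python
# Toy free-space carving over a voxel grid (illustrative, not the
# paper's implementation). Voxels that any ray passes through are
# marked "seen through"; occupied voxels that were seen through are
# considered transient and their points are dropped.

def voxel_of(p, res=0.5):
    """Map a 3D point to its integer voxel index."""
    return tuple(int(c // res) for c in p)

def traverse(origin, endpoint, res=0.5, steps=100):
    """Approximate the voxels a ray crosses by uniform sampling
    (a stand-in for exact Amanatides-Woo traversal). The endpoint's
    own voxel is excluded, since the ray terminates there."""
    voxels = []
    end_v = voxel_of(endpoint, res)
    for i in range(steps):
        t = i / steps
        p = [o + t * (e - o) for o, e in zip(origin, endpoint)]
        v = voxel_of(p, res)
        if v != end_v and (not voxels or voxels[-1] != v):
            voxels.append(v)
    return voxels

def carve_dynamic(scans, res=0.5):
    """scans: list of (sensor_origin, [points]). Returns static points."""
    occupied = {}            # voxel -> list of points observed there
    seen_through = set()     # voxels some ray passed through
    for origin, points in scans:
        for p in points:
            occupied.setdefault(voxel_of(p, res), []).append(p)
            seen_through.update(traverse(origin, p, res))
    static = []
    for v, pts in occupied.items():
        if v not in seen_through:   # never seen through -> keep as static
            static.extend(pts)
    return static
```

A voxel holding a wall is never traversed (rays stop at it), while the voxel where a pedestrian stood is later crossed by rays reaching geometry behind it, so its points are removed.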
- N. Rufus, U. K. R. Nair, A. V. S. S. B. Kumar, V. Madiraju, and K. M. Krishna, “SROM: Simple Real-time Odometry and Mapping using LiDAR data for Autonomous Vehicles,” IV 2020
- Coarsely removes likely moving objects, removes the ground, and then processes the remaining points
- M. Schorghuber, D. Steininger, Y. Cabon, M. Humenberger, and M. Gelautz, “SLAMANTIC - Leveraging Semantics to Improve VSLAM in Dynamic Environments” ICCV 2019 workshop
- Visual SLAM in dynamic environments. Uses semantics to compute a per-point confidence, lets high-confidence points assist low-confidence ones, and ultimately decides which points are used for localization and mapping.
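A minimal sketch of the confidence idea above, showing only the semantic-prior and thresholding part (the class priors, names, and threshold are assumptions, not from the SLAMANTIC paper; the propagation from high- to low-confidence points is omitted).

```python
# Assign each map point a confidence from its semantic class and keep
# only high-confidence points for localization/mapping.
# The class-prior values below are illustrative assumptions.

SEMANTIC_CONFIDENCE = {
    "building": 0.9,   # static structure: high confidence
    "road": 0.8,
    "car": 0.3,        # potentially dynamic: low confidence
    "person": 0.1,
}

def usable_points(points, threshold=0.5):
    """points: list of (xyz, semantic_class). Keep confident ones."""
    return [p for p, cls in points
            if SEMANTIC_CONFIDENCE.get(cls, 0.5) >= threshold]
```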
- S. Gu, S. Yao, J. Yang, and H. Kong, “Semantics-Guided Moving Object Segmentation with 3D LiDAR,” arXiv 2022.05
- Dynamic object segmentation network based on the RangeNet++ idea.
- Y. Pan, B. Gao, J. Mei, S. Geng, C. Li, and H. Zhao, “SemanticPOSS: A Point Cloud Dataset with Large Quantity of Dynamic Instances,” IV 2020
- Outdoor dataset of dynamic objects, Peking University, website
- S. Pagad, D. Agarwal, S. Narayanan, K. Rangan, H. Kim, and G. Yalla, “Robust Method for Removing Dynamic Objects from Point Clouds,” ICRA 2020
- L. Sun, Z. Yan, A. Zaganidis, C. Zhao, and T. Duckett, “Recurrent-OctoMap: Learning State-Based Map Refinement for Long-Term Semantic Mapping With 3-D-Lidar Data,” RAL
- P. Egger, P. V. K. Borges, G. Catt, A. Pfrunder, R. Siegwart, and R. Dubé, “PoseMap: Lifelong, Multi-Environment 3D LiDAR Localization,” IROS 2018
- Lifelong SLAM, ETH ASL group
- DynamicFilter: an Online Dynamic Objects Removal Framework for Highly Dynamic Environments, ICRA 2022
- By well-known authors; unfortunately not open source. HKUST, SUSTech
- X. Ma, Y. Wang, B. Zhang, H.-J. Ma, and C. Luo, “DynPL-SVO: A New Method Using Point and Line Features for Stereo Visual Odometry in Dynamic Scenes.” arXiv, May 17, 2022
- Stereo visual odometry using point and line features in dynamic scenes, Northeastern University, not yet open source
- M. T. Lázaro, R. Capobianco, and G. Grisetti, “Efficient Long-term Mapping in Dynamic Environments,” IROS 2018
- Efficient ICP scheme that merges map entities. Since it works on 2D maps there is not much to handle; dynamic points can be removed with a visibility check.
- code
- T. Krajník, J. P. Fentanes, J. M. Santos, and T. Duckett, “FreMEn: Frequency Map Enhancement for Long-Term Mobile Robot Autonomy in Changing Environments,” TRO 2017
- G. Kurz, M. Holoch, and P. Biber, “Geometry-based Graph Pruning for Lifelong SLAM.” IROS 2021
- Proposes geometric criteria for selecting which vertices to prune: the method is efficient, easy to implement, and yields a graph whose surviving vertices are uniformly distributed and remain part of the robot trajectory. Also proposes a new marginalization method that is more robust to erroneous loop closures than existing ones. Mainly back-end optimization, addressing how to prune the factor graph as the map is updated.
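The geometric selection criterion described above can be illustrated with a toy grid-based filter: keep at most one pose vertex per spatial cell so the survivors stay uniformly distributed along the trajectory. The cell size and names are my own; the paper's marginalization step, which preserves constraint information when a vertex is removed, is not reproduced here.

```python
# Toy geometry-based vertex selection for pose-graph pruning
# (illustrative sketch, not the paper's algorithm).

def select_vertices_to_prune(poses, cell=2.0):
    """poses: dict vertex_id -> (x, y) position.
    Returns the ids of vertices to prune, keeping one vertex
    per spatial cell so kept vertices are uniformly spread."""
    kept_cells = {}
    prune = []
    for vid in sorted(poses):          # earlier vertices get priority
        x, y = poses[vid]
        c = (int(x // cell), int(y // cell))
        if c in kept_cells:
            prune.append(vid)          # this cell is already represented
        else:
            kept_cells[c] = vid
    return prune
```

In a real back-end each pruned vertex would then be marginalized out of the factor graph rather than simply deleted, so its loop-closure information is not lost.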
- Quei-An Chen and Akihiro Tsukada, “Flow Supervised Neural Radiance Fields for Static-Dynamic Decomposition,” ICRA 2022
- AI Korea, code; dynamic object removal and inpainting using NeRF + optical flow; video
- W. Ding, S. Hou, H. Gao, G. Wan, and S. Song, “LiDAR Inertial Odometry Aided Robust LiDAR Localization System in Changing City Scenes,” ICRA 2020
- Baidu’s solution using LiDAR and IMU for localization in dynamic scenes, updating the map with new elements in the scene.
- Life-long SLAM
- G. D. Tipaldi, D. Meyer-Delius, and W. Burgard, “Lifelong localization in changing environments,” IJRR 2013
- S. Zhu, X. Zhang, S. Guo, J. Li, and H. Liu, “Lifelong Localization in Semi-Dynamic Environment,” ICRA 2021
- Tsinghua University, life-long localization
- F. Pomerleau, P. Krüsi, F. Colas, P. Furgale, and R. Siegwart, “Long-term 3D map maintenance in dynamic environments,” ICRA 2014
- Map updating in dynamic environments
- D. J. Yoon, T. Y. Tang, and T. D. Barfoot, “Mapless Online Detection of Dynamic Objects in 3D Lidar.” Conference on Computer and Robot Vision (CRV) 2019
- Point cloud dynamic detection
- Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment. Robotics and Autonomous Systems 2019
- M. Zhao et al., “A General Framework for Lifelong Localization and Mapping in Changing Environment,” IROS 2021
- Highseer Robotics’ life-long localization paper
- Multi-session map representation and an efficient online map-update strategy. Subsystems: local laser odometry (LLO), global laser matching (GLM), and pose graph refinement (PGR). LLO builds a series of locally consistent sub-maps; GLM computes relative constraints between incoming scans and the global sub-maps; PGR collects the sub-maps and constraints from LLO and GLM, prunes old sub-maps from the historical map, and performs pose-graph sparsification and optimization.
- D. Henning, T. Laidlow, and S. Leutenegger, “BodySLAM: Joint Camera Localisation, Mapping, and Human Motion Tracking,” arXiv:2205.02301
- Combines human body reconstruction with SLAM, similar to AirDOS
- Pfreundschuh, Patrick, et al. “Dynamic Object Aware LiDAR SLAM Based on Automatic Generation of Training Data.” (ICRA 2021)
- ETH ASL, code, video, dataset, LiDAR
- Authors use a deep-learning approach (3D-MiniNet network) for real-time 3D dynamic object detection, filtering out dynamic objects and feeding the remaining point cloud to LOAM for conventional LiDAR SLAM. The learning method is unsupervised.
- Canovas Bruce, et al. “Speed and Memory Efficient Dense RGB-D SLAM in Dynamic Scenes.” (IROS 2020)
- Yuan Xun and Chen Song, “SaD-SLAM: A Visual SLAM Based on Semantic and Depth Information,” (IROS 2020)
- Dong, Erqun, et al. “Pair-Navi: Peer-to-Peer Indoor Navigation with Mobile Visual SLAM,” (ICCC 2019)
- Ji Tete, et al. “Towards Real-Time Semantic RGB-D SLAM in Dynamic Environments,” (ICRA 2021)
- Palazzolo Emanuele, et al. “ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals,” (IROS 2019)
- Arora Mehul, et al. “Mapping the Static Parts of Dynamic Scenes from 3D LiDAR Point Clouds Exploiting Ground Segmentation.”
- Chen Xieyuanli, et al. “Moving Object Segmentation in 3D LiDAR Data: A Learning-Based Approach Exploiting Sequential Data,” IEEE Robotics and Automation Letters, 2021
- Zhang Tianwei, et al. “FlowFusion: Dynamic Dense RGB-D SLAM Based on Optical Flow,” (ICRA 2020)
- Zhang Tianwei, et al. “AcousticFusion: Fusing Sound Source Localization to Visual SLAM in Dynamic Environments,” IROS 2021
- video. Combines sound signals
- Liu Yubao and Miura Jun, “RDS-SLAM: Real-Time Dynamic SLAM Using Semantic Segmentation Methods,” IEEE Access 2021
- Cheng Jiyu, et al. “Improving Visual Localization Accuracy in Dynamic Environments Based on Dynamic Region Removal,” IEEE Transactions on Automation Science and Engineering, vol. 17, no. 3, July 2020, pp. 1585–96.
- Soares João Carlos Virgolino, et al
- Youngjae Min, Do-Un Kim, and Han-Lim Choi, “Kernel-Based 3-D Dynamic Occupancy Mapping with Particle Tracking,” 2021 IEEE International Conference on Robotics and Automation (ICRA)
- code: https://github.com/youngjae-min/k3dom
- DyOb-SLAM: Dynamic Object Tracking SLAM System (2022)
- Combination of VDO-SLAM and DynaSLAM
- DynaVIG: Monocular Vision/INS/GNSS Integrated Navigation and Object Tracking for AGV in Dynamic Scenes (2022)
- DirectTracker: 3D Multi-Object Tracking Using Direct Image Alignment and Photometric Bundle Adjustment (2022)
- Direct method for dynamic object tracking, page
- (IROS 2022) MOTSLAM: MOT-assisted monocular dynamic SLAM using single-view depth estimation (2022)
- TwistSLAM++: Fusing multiple modalities for accurate dynamic semantic SLAM (2022)
- (IROS 2022) Visual-Inertial Multi-Instance Dynamic SLAM with Object-level Relocalisation (2022)
- IROS 2022, lab website: https://mlr.in.tum.de/research/semanicobjectlevelanddynamicslam
- Learning to Complete Object Shapes for Object-level Mapping in Dynamic Scenes (2022), by the same author as above
- Z. Wang, W. Li, Y. Shen, and B. Cai, “4-D SLAM: An Efficient Dynamic Bayes Network-Based Approach for Dynamic Scene Understanding,” IEEE Access
- Semantic recognition of dynamics; uses a UKF for dynamic tracking, but the reported results look poor.
- T. Ma and Y. Ou, “MLO: Multi-Object Tracking and Lidar Odometry in Dynamic Environment,” ArXiv 2022
- Based on LOAM; estimates moving objects and ego-motion separately, then fuses the results. Appears loosely coupled.
- (IROS 2022) R. Long, C. Rauch, T. Zhang, V. Ivan, T. L. Lam, and S. Vijayakumar, “RGB-D SLAM in Indoor Planar Environments with Multiple Large Dynamic Objects,”
- Performs dynamic removal first, then dynamic tracking. SLAM + MOT in structured (planar) environments
- Qiu Yuheng, et al., “AirDOS: Dynamic SLAM benefits from Articulated Objects,” 2021 (arXiv)
- Ballester, Irene, et al., “DOT: Dynamic Object Tracking for Visual SLAM,” ICRA 2021
- code, video, University of Zaragoza, vision
- Liu Yubao and Miura Jun, “RDMO-SLAM: Real-Time Visual SLAM for Dynamic Environments Using Semantic Label Prediction With Optical Flow,” IEEE Access
- Kim Aleksandr, et al., “EagerMOT: 3D Multi-Object Tracking via Sensor Fusion,” ICRA 2021
- Shan, Mo, et al., “OrcVIO: Object Residual Constrained Visual-Inertial Odometry,” IROS2020
- Rosen, David M., et al., “Towards Lifelong Feature-Based Mapping in Semi-Static Environments,” ICRA 2016
- Henein Mina, et al., “Dynamic SLAM: The Need For Speed,” ICRA 2020.
- Zhang Jun, et al., “VDO-SLAM: A Visual Dynamic Object-Aware SLAM System,” ArXiv 2020.
- “Robust Ego and Object 6-DoF Motion Estimation and Tracking,” Jun Zhang, Mina Henein, Robert Mahony, and Viorela Ila, IROS 2020 (code)
- code, video, vision
- Minoda, Koji, et al., “VIODE: A Simulated Dataset to Address the Challenges of Visual-Inertial Odometry in Dynamic Environments,” RAL 2021
- Vincent, Jonathan, et al., “Dynamic Object Tracking and Masking for Visual SLAM,” IROS 2020
- Huang, Jiahui, et al., “ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings,” CVPR 2020
- Liu, Yuzhen, et al., “A Switching-Coupled Backend for Simultaneous Localization and Dynamic Object Tracking,” RAL 2021
- Yang Charig, et al., “Self-Supervised Video Object Segmentation by Motion Grouping,” ICCV 2021
- Long Ran, et al., “RigidFusion: Robot Localisation and Mapping in Environments with Large Dynamic Rigid Objects,” RAL 2021
- Yang Bohong, et al., “Multi-Classes and Motion Properties for Concurrent Visual SLAM in Dynamic Environments,” IEEE Transactions on Multimedia, 2021
- Yang Gengshan and Ramanan Deva, “Learning to Segment Rigid Motions from Two Frames,” CVPR 2021
- Thomas Hugues, et al., “Learning Spatiotemporal Occupancy Grid Maps for Lifelong Navigation in Dynamic Scenes”
- Jung Dongki, et al., “DnD: Dense Depth Estimation in Crowded Dynamic Indoor Scenes,” ICCV 2021
- Luiten Jonathon, et al., “Track to Reconstruct and Reconstruct to Track,” RAL+ICRA 2020
- Grinvald, Margarita, et al., “TSDF++: A Multi-Object Formulation for Dynamic Object Tracking and Reconstruction,” ICRA 2021
- Wang Chieh-Chih, et al., “Simultaneous Localization, Mapping and Moving Object Tracking,” The International Journal of Robotics Research, 2007
- Ran Teng, et al., “RS-SLAM: A Robust Semantic SLAM in Dynamic Environments Based on RGB-D Sensor”
- Xu Hua, et al., “OD-SLAM: Real-Time Localization and Mapping in Dynamic Environment through Multi-Sensor Fusion,” ICARM 2020, https://doi.org/10.1109/ICARM49381.2020.9195374
- Wimbauer Felix, et al., “MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera,” CVPR 2021
- Liu Yu, et al., “Dynamic RGB-D SLAM Based on Static Probability and Observation Number,” IEEE Transactions on Instrumentation and Measurement, vol. 70, 2021, pp. 1–11, https://doi.org/10.1109/TIM.2021.3089228
- P. Li, T. Qin, and S. Shen, “Stereo Vision-based Semantic 3D Object and Ego-motion Tracking for Autonomous Driving,” arXiv 2018
- G. B. Nair et al., “Multi-object Monocular SLAM for Dynamic Environments,” IV 2020
- M. Rünz and L. Agapito, “Co-fusion: Real-time segmentation, tracking and fusion of multiple objects,” ICRA 2017, pp. 4471–4478
- (IROS 2022) TwistSLAM: Constrained SLAM in Dynamic Environment
- Follow-up to S3LAM; uses panoptic segmentation as the detection front-end