This is the repository for Evidential Semantic Mapping in Off-road Environments with Uncertainty-aware Bayesian Kernel Inference, accepted to IROS 2024.
Representative Qualitative Results of our framework
- You can pull our Docker images from Docker Hub. We have only tested our code under the following settings: Ubuntu 22.04, CUDA 11.6, ROS1 Noetic.
- Docker Image for Semantic Segmentation: Link
  - OR you can pull the image via
    ```
    docker pull jykim157/ebs_semseg
    ```
- Docker Image for Semantic Mapping: Link
  - OR you can pull the image via
    ```
    docker pull jykim157/rosbki
    ```
- Semantic Segmentation
  ```
  docker run --rm -it --gpus=all -v {Your Workspace Directory}:/workspace -v {Your Data Directory}:/data --shm-size=16G --name {Container Name} jykim157/ebs_semseg
  ```
- Semantic Mapping
  ```
  docker run --rm -it --gpus=all --net=host -e DISPLAY --privileged --device=/dev/dri:/dev/dri -v /tmp/.X11-unix:/tmp/.X11-unix -v {Your Workspace Directory}:/workspace/ --name {Container Name} jykim157/rosbki
  ```
Once you download the RELLIS-3D dataset into your workspace, our framework assumes the following directory structure.
Main Data
```
RELLIS_ROOT
└── {00000, 00001, 00002, 00003, 00004}
    ├── os1_cloud_node_kitti_bin/      -- directory containing ".bin" files with Ouster 64-channel point clouds
    ├── pylon_camera_node/             -- directory containing ".jpg" files from the color camera
    ├── pylon_camera_node_label_color/ -- directory containing ".png" files with color label images
    └── poses.txt                      -- file containing the poses of every scan
```
Camera Intrinsic
```
RELLIS_CAMERA_INFO
└── {00000, 00001, 00002, 00003, 00004}
    └── camera_info.txt
```
Basler Camera to Ouster LiDAR
```
RELLIS_TRANSFORM
└── {00000, 00001, 00002, 00003, 00004}
    └── transforms.yaml
```
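As a quick sanity check on this layout, the following minimal Python sketch loads one Ouster scan and its pose. It assumes the KITTI-style conventions suggested by the directory names (float32 `.bin` scans with four values per point, and `poses.txt` rows holding a flattened 3x4 pose matrix); the root path and frame id are placeholders.

```python
import numpy as np

RELLIS_ROOT = "/data/Rellis-3D"   # placeholder for your RELLIS_ROOT
SEQ = "00000"
FRAME = "000000"                  # placeholder frame id

# KITTI-style ".bin" scan: float32 values per point, assumed (x, y, z, intensity);
# only the first three fields are used by the pipeline.
scan = np.fromfile(f"{RELLIS_ROOT}/{SEQ}/os1_cloud_node_kitti_bin/{FRAME}.bin",
                   dtype=np.float32).reshape(-1, 4)
xyz = scan[:, :3]

# poses.txt: one row per scan, assumed to hold a flattened 3x4 [R | t] matrix.
poses = np.loadtxt(f"{RELLIS_ROOT}/{SEQ}/poses.txt").reshape(-1, 3, 4)
T0 = np.vstack([poses[0], [0.0, 0.0, 0.0, 1.0]])  # homogeneous 4x4 pose of scan 0

print(xyz.shape, T0.shape)
```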
Once you download the RUGD dataset into your workspace, our framework assumes the following directory structure.
Main Data
```
RUGD_ROOT
├── RUGD_frames-with-annotations -- directories containing images
└── RUGD_annotations             -- directories containing labels
```
If you want to use your own ROS bag data, you should process your .bag file and save the LiDAR, image, and pose information separately. The easiest way is to adopt the format of the RELLIS-3D dataset. In short, we suggest storing each 3D point cloud as a NumPy array (i.e., an N x K matrix, where N is the number of points and K >= 3 is the number of fields per point; point clouds often carry additional fields such as intensity, but our framework only uses the (x, y, z) fields).
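For reference, a minimal sketch for extracting (x, y, z) clouds from a bag into `.npy` files could look like the following; the bag path, LiDAR topic, and output directory are hypothetical and should be adapted to your setup.

```python
import os
import numpy as np
import rosbag
from sensor_msgs import point_cloud2

BAG_PATH = "my_data.bag"                # hypothetical bag file
LIDAR_TOPIC = "/os_cloud_node/points"   # hypothetical LiDAR topic
OUT_DIR = "custom_dataset/lidar"
os.makedirs(OUT_DIR, exist_ok=True)

with rosbag.Bag(BAG_PATH) as bag:
    for idx, (topic, msg, t) in enumerate(bag.read_messages(topics=[LIDAR_TOPIC])):
        # Keep only the (x, y, z) fields, as required by the mapping framework.
        pts = point_cloud2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True)
        cloud = np.array(list(pts), dtype=np.float32)            # N x 3
        np.save(os.path.join(OUT_DIR, f"{idx:06d}.npy"), cloud)
```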
Our overall framework for constructing semantic maps
You can find more detailed information in the README.md file of each directory.
Goal: Train a semantic segmentation model with evidential deep learning (EDL), and obtain the semantic probability map and its corresponding uncertainty map in the 2D image.
- Using docker image `jykim157/ebs_semseg`
- [Train] Given 2D (image, label) pairs, train a semantic segmentation model.
- [Prep] Given the trained semantic segmentation model, run inference to produce 2D semantic segmentation results.
  - With `--model evidential`, prep mode will yield evidence vectors in `.npy` format.
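For downstream use of these `.npy` files, here is a minimal sketch of turning a per-pixel evidence map into class probabilities and an uncertainty value under the standard evidential deep learning (Dirichlet) formulation; the file name and array shape are assumptions, and the exact post-processing in this repository may differ.

```python
import numpy as np

# Per-pixel evidence produced by prep mode with --model evidential
# (placeholder file name); shape assumed to be (H, W, K) with K classes.
evidence = np.load("frame_000000_evidence.npy")

alpha = evidence + 1.0                        # Dirichlet concentration parameters
strength = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength S = sum_k alpha_k

probs = alpha / strength                      # expected class probabilities, (H, W, K)
num_classes = evidence.shape[-1]
uncertainty = num_classes / strength[..., 0]  # vacuity u = K / S, (H, W)
```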
Goal: Project (or lift) the semantic probability map and uncertainty map into 3D space via 3D point clouds.
- Using docker image `jykim157/ebs_semseg`
- Given a dataset of (2D semantic segmentation results, LiDAR, pose) tuples, project the 2D results onto the 3D point clouds and transform them into the global coordinate system.
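Conceptually, this step is a pinhole projection into the image followed by a rigid transform into the global frame. The sketch below illustrates the idea with hypothetical function and variable names; the repository's own scripts handle the actual file formats and calibration parsing.

```python
import numpy as np

def lift_semantics(points_lidar, probs_img, unc_img, K, T_cam_lidar, T_world_lidar):
    """Assign per-pixel class probabilities / uncertainty to LiDAR points and move
    them into the global frame (hypothetical helper). Assumed shapes:
    points_lidar (N, 3), probs_img (H, W, C), unc_img (H, W),
    K (3, 3) camera intrinsics, T_* (4, 4) homogeneous transforms."""
    N = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((N, 1))])   # N x 4 homogeneous points

    # Keep only points in front of the camera, then project onto the image plane.
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]                 # 3 x N in camera frame
    front = pts_cam[2] > 0.1
    pts_cam, pts_h = pts_cam[:, front], pts_h[front]
    uv = (K @ pts_cam) / pts_cam[2]                       # pixel coordinates
    u, v = uv[0].astype(int), uv[1].astype(int)

    H, W = unc_img.shape
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Gather per-point semantics and transform the points into the global frame.
    probs_pts = probs_img[v[valid], u[valid]]             # M x C
    unc_pts = unc_img[v[valid], u[valid]]                 # M
    pts_world = (T_world_lidar @ pts_h[valid].T)[:3].T    # M x 3
    return pts_world, probs_pts, unc_pts
```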
Goal: Given semantic points, construct a semantic map with its corresponding uncertainty map.
- Using docker image `jykim157/rosbki`
- Format the `numpy` point cloud type to the `pcd` point cloud type via
  ```
  roslaunch evsemmap pcd_conversion.launch
  ```
- Given semantic (or evidential) points, build the semantic map via
  ```
  roslaunch evsemmap mapping.launch dataset:={DATASET} method:={METHOD} result_name:={OUTPUT_DIR}
  ```
  - Example command:
    ```
    roslaunch evsemmap mapping.launch dataset:=deploy_rellisv3_4_1-30 method:=ebs result_name:=/workspace/deployTest/
    ```
- You can choose among the `dempster`, `ebs`, and `sbki` methods (a simplified sketch of the underlying kernel-based update follows this list).
  - `dempster`: the method using the Dempster-Shafer Theory of Evidence, which was presented at the ICRA 2024 Workshop. (paper)
  - `ebs`: the method proposed in IROS 2024. (paper)
  - `sbki`: the baseline method. (paper)
- You can modify parameters or add new config files in `SemanticMap/src/SemanticMap/config/datasets/*.yaml` or `SemanticMap/src/SemanticMap/config/methods/*.yaml`.
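For intuition about what the mapping step computes, the sketch below illustrates a simplified `sbki`-style update: each semantic point contributes kernel-weighted pseudo-counts to per-voxel Dirichlet parameters. It is not the repository's C++/ROS implementation; the kernel length scale and voxel size are assumptions, and for brevity each point only updates its containing voxel rather than every voxel within the kernel support.

```python
import numpy as np

def sparse_kernel(d, length_scale=0.3):
    """Sparse kernel (Melkumyan & Ramos): positive for d < length_scale, zero beyond."""
    r = np.clip(d / length_scale, 0.0, 1.0)
    return (2.0 + np.cos(2.0 * np.pi * r)) / 3.0 * (1.0 - r) \
        + np.sin(2.0 * np.pi * r) / (2.0 * np.pi)

def bki_update(alpha, points, probs, voxel_size=0.5):
    """Accumulate kernel-weighted semantic probabilities into per-voxel Dirichlet
    parameters. alpha: dict mapping voxel index -> (C,) array,
    points: (N, 3) global-frame points, probs: (N, C) per-point class probabilities."""
    for p, y in zip(points, probs):
        key = tuple(np.floor(p / voxel_size).astype(int))
        center = (np.asarray(key) + 0.5) * voxel_size            # voxel center
        w = sparse_kernel(np.linalg.norm(p - center))            # kernel weight
        alpha[key] = alpha.get(key, np.zeros_like(y)) + w * y    # pseudo-count update
    return alpha
```

At query time, a voxel's expected class is the argmax of its accumulated parameters, and an uncertainty measure can be derived from the same Dirichlet parameters.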
Qualitative Results on the RELLIS-3D Dataset
Qualitative Results on Our Off-road Dataset
We utilize the data and code from various works:
If you use our code or find our work useful in your research, please consider citing our papers.
You can also find additional information on our project website.
IROS 2024
```
@article{kim2024evidential,
  title={Evidential Semantic Mapping in Off-road Environments with Uncertainty-aware Bayesian Kernel Inference},
  author={Kim, Junyoung and Seo, Junwon and Min, Jihong},
  journal={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2024}
}
```
ICRA 2024 Workshop
```
@article{kim2024uncertainty,
  title={Uncertainty-aware Semantic Mapping in Off-road Environments with Dempster-Shafer Theory of Evidence},
  author={Kim, Junyoung and Seo, Junwon},
  journal={ICRA 2024 Workshop on Resilient Off-road Autonomy},
  year={2024}
}
```
