- [2025/9/21] ✨✨ The README.md has been updated.
- [2025/9/19] ✨✨ The [arxiv] paper is coming soon.
- ❎ Submit to arXiv
- ❎ Upload training code
- ❎ Upload Bridge model weights
We add mmsegmentation as a submodule of this repository.
Therefore, clone the repository with the `--recurse-submodules` flag:
```shell
# clone the repository together with its submodules
git clone --recurse-submodules https://github.com/woldier/Bridge
```
If you have already cloned the project and forgot `--recurse-submodules`, initialize the submodules afterwards:

```shell
# cloned the project and forgot to clone submodules 🥲🥲
git clone https://github.com/woldier/Bridge
# initialize and update each submodule in the repository 🥰🥰
git submodule update --init
```
After that, link `submodule-mmseg/mmseg` to `mmseg`:

```shell
# soft link
ln -s submodule-mmseg/mmseg mmseg
```

This repo uses Python 3.8 and requires a CUDA toolkit (check `nvcc -V`) >= 11.6.
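On platforms without `ln` (or in a scripted setup), the soft-link step above can be mirrored in Python. This is a minimal sketch, not part of the Bridge codebase; the `ensure_symlink` helper name is our own:

```python
from pathlib import Path


def ensure_symlink(target: str, link_name: str) -> Path:
    """Create `link_name` pointing at `target` unless it already exists,
    mirroring `ln -s target link_name`. Safe to call repeatedly."""
    link = Path(link_name)
    if not link.exists() and not link.is_symlink():
        link.symlink_to(target)
    return link


# Equivalent of: ln -s submodule-mmseg/mmseg mmseg
# ensure_symlink("submodule-mmseg/mmseg", "mmseg")
```

The `is_symlink()` guard makes the call idempotent, so re-running a setup script does not fail with "file exists".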
Tested environment: torch 2.1.1, CUDA 12.1, mmcv 2.1.0, mmengine 0.9.1.
Install script:

```shell
conda create -n peft-mmpretrain python==3.8 -y
conda activate peft-mmpretrain
pip install torch==2.1.2+cu121 torchvision==0.16.2+cu121 -f https://download.pytorch.org/whl/torch_stable.html
# for users in CN, use the following mirror instead
pip install torch==2.1.2+cu121 torchvision==0.16.2+cu121 -f https://mirrors.aliyun.com/pytorch-wheels/cu121/
pip install mmcv==2.1.0 mmengine==0.9.1 -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.1/index.html
pip install -r submodule-mmseg/requirements/runtime.txt
```

Installation references:
- Torch and torchvision version compatibility.
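A common install mistake is mixing wheels built for different CUDA versions. A small sketch (our own helper, not part of Bridge) that parses pip wheel versions like `2.1.2+cu121` and checks that the torch and torchvision pins above share the same CUDA tag:

```python
def parse_wheel_version(spec: str):
    """Split a pip wheel version such as '2.1.2+cu121' into
    ((major, minor, patch), cuda_tag). cuda_tag is None for CPU wheels."""
    version, _, local = spec.partition("+")
    return tuple(int(p) for p in version.split(".")), (local or None)


# The pins from the install script above:
torch_ver, torch_cuda = parse_wheel_version("2.1.2+cu121")
tv_ver, tv_cuda = parse_wheel_version("0.16.2+cu121")
assert torch_cuda == tv_cuda == "cu121"  # both wheels target CUDA 12.1
```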
We selected Potsdam, Vaihingen and LoveDA as benchmark datasets and created train, val, and test lists for researchers.
The Potsdam dataset is for urban semantic segmentation used in the 2D Semantic Labeling Contest - Potsdam.
The dataset can be requested at the challenge homepage. The '2_Ortho_RGB.zip' and '5_Labels_all_noBoundary.zip' are required.
The Vaihingen dataset is for urban semantic segmentation used in the 2D Semantic Labeling Contest - Vaihingen.
The dataset can be requested at the challenge homepage. The 'ISPRS_semantic_labeling_Vaihingen.zip' and 'ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip' are required.
The LoveDA data can be downloaded from Google Drive here.
Alternatively, it can be downloaded from Zenodo with the following commands:
```shell
# LoveDA download
cd /{your_project_base_path}/Bridge/data/LoveDA
# Download Train.zip
wget https://zenodo.org/record/5706578/files/Train.zip
# Download Val.zip
wget https://zenodo.org/record/5706578/files/Val.zip
# Download Test.zip
wget https://zenodo.org/record/5706578/files/Test.zip
```

Place the downloaded files in the corresponding paths. The layout is as follows:
```text
Bridge/
├── data/
│   ├── LoveDA/
│   │   ├── Test.zip
│   │   ├── Train.zip
│   │   └── Val.zip
│   ├── Potsdam_RGB_DA/
│   │   ├── 2_Ortho_RGB.zip
│   │   └── 5_Labels_all_noBoundary.zip
│   └── Vaihingen_IRRG_DA/
│       ├── ISPRS_semantic_labeling_Vaihingen.zip
│       └── ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip
```
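Before converting, it is worth verifying that every expected archive landed in the right directory. A minimal sketch (the `EXPECTED` table and `missing_archives` helper are our own, derived from the tree above):

```python
from pathlib import Path

# Expected archives per dataset directory, matching the layout above.
EXPECTED = {
    "LoveDA": ["Train.zip", "Val.zip", "Test.zip"],
    "Potsdam_RGB_DA": ["2_Ortho_RGB.zip", "5_Labels_all_noBoundary.zip"],
    "Vaihingen_IRRG_DA": [
        "ISPRS_semantic_labeling_Vaihingen.zip",
        "ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip",
    ],
}


def missing_archives(data_root: str) -> list:
    """Return 'dataset/file' entries that are absent under data_root."""
    root = Path(data_root)
    return [
        f"{ds}/{name}"
        for ds, names in EXPECTED.items()
        for name in names
        if not (root / ds / name).is_file()
    ]


# Usage: missing_archives("data") -> [] when everything is in place.
```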
After that, we can convert the datasets:
- Potsdam

```shell
python tools/convert_datasets/potsdam.py data/Potsdam_IRRG/ --clip_size 512 --stride_size 512
python tools/convert_datasets/potsdam.py data/Potsdam_RGB/ --clip_size 512 --stride_size 512
```

- Vaihingen

```shell
python tools/convert_datasets/vaihingen.py data/Vaihingen_IRRG/ --clip_size 512 --stride_size 256
```
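To make the `--clip_size`/`--stride_size` options concrete: the convert scripts cut each large tile into fixed-size patches on a sliding window (e.g. 512-pixel crops every 256 pixels for Vaihingen, so adjacent patches overlap by half). A simplified sketch of that window arithmetic — our own illustration, not the actual script code:

```python
def clip_windows(width: int, height: int, clip_size: int, stride: int):
    """Top-left corners of clip_size x clip_size crops taken at the given
    stride. The final window is shifted back so it stays inside the image,
    ensuring full coverage even when stride does not divide the extent."""
    def starts(length):
        xs = list(range(0, max(length - clip_size, 0) + 1, stride))
        if xs[-1] + clip_size < length:
            xs.append(length - clip_size)  # tail window, clamped to the edge
        return xs

    return [(x, y) for y in starts(height) for x in starts(width)]


# stride == clip_size (Potsdam): non-overlapping tiles.
# stride <  clip_size (Vaihingen): overlapping tiles.
```

With `stride_size` equal to `clip_size` the crops tile the image without overlap; halving the stride doubles the patch count along each axis, trading disk space for more training samples.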