
Bridge: Leveraging Vision Foundation Models for Efficient Cross-Domain Remote Sensing Segmentation




Figure: Bridge network overview.

🔍️🔍️ NEWS

  • [2025/9/21] ✨✨ The README.md has been updated.
  • [2025/9/19] ✨✨ The [arXiv] paper is coming soon.

📄📄 TODO

  • ❎ submit to arxiv
  • ❎ upload training code
  • ❎ upload Bridge model weights

Clone Repo


We include mmsegmentation as a submodule of this repository.

Therefore, clone this repository with the following command:

clone repository

```shell
git clone --recurse-submodules https://github.com/woldier/Bridge
```

Tips

If you have already cloned the project and forgot `--recurse-submodules`, initialize the submodules manually:

```shell
# cloned the project and forgot to clone submodules 🥲🥲
git clone https://github.com/woldier/Bridge

# initialize and update each submodule in the repository 🥰🥰
git submodule update --init
```

After that, link `submodule-mmseg/mmseg` to `mmseg`:

soft link

```shell
ln -s submodule-mmseg/mmseg mmseg
```
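To confirm the steps above worked, you can check that the submodule directory is populated and the symlink exists. This is a minimal illustrative sketch (the `repo_ready` helper is not part of the repo), run from the repository root:

```python
# Sanity check (illustrative): verify the mmsegmentation submodule was
# initialized and the mmseg symlink points at it.
import os


def repo_ready(root="."):
    """Return True if submodule-mmseg/mmseg exists and mmseg is a symlink."""
    sub = os.path.join(root, "submodule-mmseg", "mmseg")
    link = os.path.join(root, "mmseg")
    return os.path.isdir(sub) and os.path.islink(link)


if __name__ == "__main__":
    print("repo ready:", repo_ready())
```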

1. Creating Virtual Environment


This repo uses Python 3.8 and requires CUDA >= 11.6 (check with `nvcc -V`).

torch 2.1.2, cuda 12.1, mmcv 2.1.0, mmengine 0.9.1

Install script

```shell
conda create -n peft-mmpretrain python==3.8 -y
conda activate peft-mmpretrain

pip install torch==2.1.2+cu121 torchvision==0.16.2+cu121 -f https://download.pytorch.org/whl/torch_stable.html
# for users in China, use the following mirror instead
pip install torch==2.1.2+cu121 torchvision==0.16.2+cu121 -f https://mirrors.aliyun.com/pytorch-wheels/cu121/

pip install mmcv==2.1.0 mmengine==0.9.1 -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.1/index.html

pip install -r submodule-mmseg/requirements/runtime.txt
```

For installation reference, see the torch and torchvision version compatibility table: Official Repo, CSDN.
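After installation, it can help to confirm that the installed package versions match the ones listed above. This is a minimal sketch; the `check_versions` helper is illustrative and not part of the repo:

```python
# Verify installed package versions against the expected ones
# (torch 2.1.2, mmcv 2.1.0, mmengine 0.9.1 per the install script above).
import importlib


def check_versions(expected):
    """Map package name -> (found_version or None, matches_expected)."""
    results = {}
    for pkg, want in expected.items():
        try:
            mod = importlib.import_module(pkg)
            got = getattr(mod, "__version__", "unknown")
            results[pkg] = (got, got.startswith(want))
        except ImportError:
            results[pkg] = (None, False)
    return results


if __name__ == "__main__":
    expected = {"torch": "2.1", "mmcv": "2.1.0", "mmengine": "0.9.1"}
    for pkg, (ver, ok) in check_versions(expected).items():
        print(f"{pkg}: {ver} ({'OK' if ok else 'missing/mismatch'})")
```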

2. Preparation of Datasets


We selected Potsdam, Vaihingen, and LoveDA as benchmark datasets and created train, val, and test lists for researchers.

2.1 Download of datasets

ISPRS Potsdam

The Potsdam dataset is for urban semantic segmentation used in the 2D Semantic Labeling Contest - Potsdam.

The dataset can be requested at the challenge homepage. The '2_Ortho_RGB.zip' and '5_Labels_all_noBoundary.zip' are required.

ISPRS Vaihingen

The Vaihingen dataset is for urban semantic segmentation used in the 2D Semantic Labeling Contest - Vaihingen.

The dataset can be requested at the challenge homepage. The 'ISPRS_semantic_labeling_Vaihingen.zip' and 'ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip' are required.

LoveDA

The data could be downloaded from Google Drive here.

Alternatively, it can be downloaded from Zenodo by running the following commands:

loveda download

```shell
cd /{your_project_base_path}/Bridge/data/LoveDA

# Download Train.zip
wget https://zenodo.org/record/5706578/files/Train.zip
# Download Val.zip
wget https://zenodo.org/record/5706578/files/Val.zip
# Download Test.zip
wget https://zenodo.org/record/5706578/files/Test.zip
```

2.2 Dataset preprocessing

Place the downloaded files in the corresponding paths. The directory layout is as follows:

file

```
Bridge/
├── data/
│   ├── LoveDA/
│   │   ├── Test.zip
│   │   ├── Train.zip
│   │   └── Val.zip
│   ├── Potsdam_RGB_DA/
│   │   ├── 2_Ortho_RGB.zip
│   │   └── 5_Labels_all_noBoundary.zip
│   └── Vaihingen_IRRG_DA/
│       ├── ISPRS_semantic_labeling_Vaihingen.zip
│       └── ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip
```

After that, we can convert the datasets:

details

  • Potsdam

```shell
python tools/convert_datasets/potsdam.py data/Potsdam_IRRG/ --clip_size 512 --stride_size 512
python tools/convert_datasets/potsdam.py data/Potsdam_RGB/ --clip_size 512 --stride_size 512
```

  • Vaihingen

```shell
python tools/convert_datasets/vaihingen.py data/Vaihingen_IRRG/ --clip_size 512 --stride_size 256
```
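The `--clip_size` and `--stride_size` options tile each large aerial image into fixed-size patches (overlapping when stride < clip, as in the Vaihingen command). A minimal sketch of this sliding-window logic; the function name and border handling are illustrative assumptions, not the actual `tools/convert_datasets` implementation:

```python
# Sketch of sliding-window cropping: cover a (height x width) image with
# clip_size x clip_size patches spaced stride_size apart.
def sliding_window_boxes(height, width, clip_size, stride_size):
    """Return (x0, y0, x1, y1) crop boxes covering the whole image.

    The last window along each axis is shifted back so it ends exactly at
    the image border, a common convention that avoids padding.
    """
    xs = list(range(0, max(width - clip_size, 0) + 1, stride_size))
    ys = list(range(0, max(height - clip_size, 0) + 1, stride_size))
    if xs[-1] + clip_size < width:
        xs.append(width - clip_size)
    if ys[-1] + clip_size < height:
        ys.append(height - clip_size)
    return [(x, y, x + clip_size, y + clip_size) for y in ys for x in xs]


# Example: a 6000x6000 Potsdam tile with clip 512 / stride 512
# yields 12 x 12 = 144 patches.
print(len(sliding_window_boxes(6000, 6000, 512, 512)))  # → 144
```

With stride 256 (the Vaihingen setting), adjacent patches overlap by half a window, which increases the number of training crops per image.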

The code is coming soon 🤗🤗
