This repository is the PyTorch implementation of our manuscript "An Arbitrary Scale Super-Resolution Approach for 3D MR Images via Implicit Neural Representation". [ArXiv, IEEE Xplore]
Figure 1: Overview of the ArSSR model.
The MR images shown in Figure 2 can be downloaded here: LR image, 2x SR result, 3.2x SR result, 4x SR result.
Figure 2: An example of SISR tasks at three different isotropic up-sampling scales k={2, 3.2, 4} for a 3D brain MR image by a single ArSSR model.
- python 3.7.9
- pytorch-gpu 1.8.1
- tensorboard 2.6.0
- SimpleITK, tqdm, numpy, scipy, skimage
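One possible way to install these dependencies with pip (a sketch only; the exact PyTorch build for your CUDA version may differ, e.g., you may prefer the conda pytorch-gpu package, and skimage is provided by the scikit-image package):

pip install torch==1.8.1 tensorboard==2.6.0 SimpleITK tqdm numpy scipy scikit-image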
In the pre_trained_models folder, we provide three pre-trained ArSSR models (with three different encoder networks) trained on the HCP-1200 dataset. You can improve the resolution of your images through the commands below; a concrete example invocation is given after the parameter list:
python test.py -input_path [input_path] \
-output_path [output_path] \
-encoder [RDN, ResCNN, or SRResNet] \
-pre_trained_model [pre_trained_model] \
-scale [scale] \
-is_gpu [is_gpu] \
-gpu [gpu]

where,

- input_path is the path of the LR input image; it should not contain the input filename.
- output_path is the path of the outputs; it should not contain the output filename.
- encoder_name is the type of the encoder network, including RDN, ResCNN, or SRResNet.
- pre_trained_model is the full path of the pre-trained ArSSR model (e.g., for the ArSSR model with the RDN encoder network: ./pre_trained_models/ArSSR_RDN.pkl).
  !!! Note that encoder_name and pre_trained_model have to match. E.g., if you use the ArSSR model with the ResCNN encoder network, encoder_name should be ResCNN and pre_trained_model should be ./pre_trained_models/ArSSR_ResCNN.pkl.
- scale is the up-sampling scale k; it can be an int or a float.
- is_gpu indicates whether to use the GPU (0 -> CPU, 1 -> GPU).
- gpu is the index of the GPU to use.
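For example, a hypothetical invocation that up-samples an LR image by 4x with the ResCNN pre-trained model (the input and output directories below are placeholders, not paths shipped with the repository):

python test.py -input_path ./lr_image/ \
-output_path ./sr_results/ \
-encoder ResCNN \
-pre_trained_model ./pre_trained_models/ArSSR_ResCNN.pkl \
-scale 4 \
-is_gpu 1 \
-gpu 0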
In our experiments, we train the ArSSR model on the HCP-1200 dataset. In particular, the HCP-1200 dataset is split into three parts: 780 training examples, 111 validation examples, and 222 testing examples. More details about the HCP-1200 can be found in our manuscript [ArXiv]. You can download the pre-processed training set and validation set [Google Drive].
Using our pre-processed training and validation sets from [Google Drive], the pipeline for training the ArSSR model can be divided into three steps:
- Unzip the downloaded file data.zip.
- Put the data directory in the ArSSR directory.
- Run the following command (a concrete example invocation is given after the parameter list).
python train.py -encoder_name [encoder_name] \
-decoder_depth [decoder_depth] \
-decoder_width [decoder_width] \
-feature_dim [feature_dim] \
-hr_data_train [hr_data_train] \
-hr_data_val [hr_data_val] \
-lr [lr] \
-lr_decay_epoch [lr_decay_epoch] \
-epoch [epoch] \
-summary_epoch [summary_epoch] \
-bs [bs] \
-ss [ss] \
-gpu [gpu]

where,
- encoder_name is the type of the encoder network, including RDN, ResCNN, or SRResNet.
- decoder_depth is the depth of the decoder network (default=8).
- decoder_width is the width of the decoder network (default=256).
- feature_dim is the dimension size of the feature vector (default=128).
- hr_data_train is the file path of HR patches for training (if you use our pre-processed data, this item can be ignored).
- hr_data_val is the file path of HR patches for validation (if you use our pre-processed data, this item can be ignored).
- lr is the initial learning rate (default=1e-4).
- lr_decay_epoch is the number of epochs after which the learning rate is multiplied by 0.5 (default=200).
- epoch is the total number of epochs for training (default=2500).
- summary_epoch is the interval (in epochs) at which the current model is saved (default=200).
- bs is the number of LR-HR patch pairs, i.e., N in Equ. 3 (default=15).
- ss is the number of sampled voxel coordinates, i.e., K in Equ. 3 (default=8000).
- gpu is the index of the GPU to use.
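For example, a hypothetical training run with the RDN encoder and the default hyper-parameters listed above (assuming you use our pre-processed data, so hr_data_train and hr_data_val are omitted; adjust the GPU index to your machine):

python train.py -encoder_name RDN \
-decoder_depth 8 \
-decoder_width 256 \
-feature_dim 128 \
-lr 1e-4 \
-lr_decay_epoch 200 \
-epoch 2500 \
-summary_epoch 200 \
-bs 15 \
-ss 8000 \
-gpu 0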
If you find our work useful in your research, please cite:
@ARTICLE{9954892,
author={Wu, Qing and Li, Yuwei and Sun, Yawen and Zhou, Yan and Wei, Hongjiang and Yu, Jingyi and Zhang, Yuyao},
journal={IEEE Journal of Biomedical and Health Informatics},
title={An Arbitrary Scale Super-Resolution Approach for 3D MR Images via Implicit Neural Representation},
year={2023},
volume={27},
number={2},
pages={1004-1015},
doi={10.1109/JBHI.2022.3223106}}
