CLIP-LIT: Iterative Prompt Learning for Unsupervised Backlit Image Enhancement

S-Lab, Nanyang Technological University

Accepted to ICCV 2023, Oral

Demo videos: demo_result0.mp4, demo_result1.mp4

CLIP-LIT trained using only hundreds of unpaired images yields favorable results on unseen backlit images captured in various scenarios.

📖 For more visual results, check out our project page.


📣 Updates

🖥️ Requirements

# git clone this repository
git clone https://github.com/ZhexinLiang/CLIP-LIT.git
cd CLIP-LIT

# create new anaconda env
conda create -n CLIP_LIT python=3.7 -y
conda activate CLIP_LIT

# install python dependencies
pip install -r requirements.txt

🏃‍♀️ Inference

Prepare Testing Data:

Put your test images in the input folder. If you want to test on backlit images, you can download the BAID test dataset and the Backlit300 dataset from [Google Drive | BaiduPan (key:1234)].

Testing:

The paths to the input images, output images, and checkpoint can be set via command-line arguments.

Example usage:

python test.py -i ./Backlit300 -o ./inference_results/Backlit300 -c ./pretrained_models/enhancement_model.pth
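If you have several test sets, the single-command invocation above can be wrapped in a short loop. A minimal sketch, assuming the dataset folder names below (only `Backlit300` appears in the original command; `BAID_test` is a placeholder):

```python
import subprocess

CKPT = "./pretrained_models/enhancement_model.pth"  # pretrained enhancement weights

def build_cmd(in_dir, out_dir, ckpt=CKPT):
    """Assemble the test.py invocation for one dataset folder."""
    return ["python", "test.py", "-i", in_dir, "-o", out_dir, "-c", ckpt]

# Example: queue up two test sets back to back.
for name in ["Backlit300", "BAID_test"]:  # folder names are assumptions
    cmd = build_cmd("./" + name, "./inference_results/" + name)
    # subprocess.run(cmd, check=True)  # uncomment to actually run inference
    print(" ".join(cmd))
```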

🚋 Training

Prepare Training Data and the initial weights:

Download the backlit and reference image datasets and place them under the repository root. In our experiments, we randomly selected 380 backlit images from the BAID training dataset and 384 well-lit images from the DIV2K dataset as unpaired training data. We provide the training data we used at [Google Drive | BaiduPan (key:1234)] for your reference.
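If you want to build your own unpaired split rather than download ours, the random selection described above can be reproduced with a small script. This is a sketch, not part of the repo: the source/destination paths in the comments and the extension filter are assumptions.

```python
import random
import shutil
from pathlib import Path

def sample_images(src_dir, dst_dir, n, seed=0, exts=(".jpg", ".jpeg", ".png")):
    """Copy a random sample of n images from src_dir into dst_dir."""
    files = sorted(p for p in Path(src_dir).iterdir() if p.suffix.lower() in exts)
    if len(files) < n:
        raise ValueError(f"only {len(files)} images in {src_dir}, need {n}")
    random.Random(seed).shuffle(files)  # fixed seed keeps the split reproducible
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for p in files[:n]:
        shutil.copy(p, Path(dst_dir) / p.name)
    return n

# e.g. sample_images("BAID/train", "./train_data/BAID_380/resize_input", 380)
# e.g. sample_images("DIV2K/train_HR", "./train_data/DIV2K_384", 384)
```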

You should also download the initial prompt pair checkpoint (init_prompt_pair.pth) from [Release | Google Drive | BaiduPan (key:1234)] and put it into pretrained_models/init_pretrained_models folder.

Once the data and the initial model weights are prepared, you can use the commands below to set the training data paths, fine-tune the prompt pair, and train the model.

If you don't want to download the initial prompt pair, you can train without the initial checkpoints using the command below. In that case, however, the total number of iterations should be at least 50K based on our experiments.

Commands

Example usage:

python train.py -b ./train_data/BAID_380/resize_input/ -r ./train_data/DIV2K_384/

There are other arguments you may want to change; all hyperparameters can be set from the command line.

For example, you can use the following command to train from scratch.

python train.py \
 -b ./train_data/BAID_380/resize_input/ \
 -r ./train_data/DIV2K_384/             \
 --train_lr 0.00002                     \
 --prompt_lr 0.000005                   \
 --eta_min 5e-6                         \
 --weight_decay 0.001                   \
 --num_epochs 3000                      \
 --num_reconstruction_iters 1000        \
 --num_clip_pretrained_iters 8000       \
 --train_batch_size 8                   \
 --prompt_batch_size 16                 \
 --display_iter 20                      \
 --snapshot_iter 20                     \
 --prompt_display_iter 20               \
 --prompt_snapshot_iter 100             \
 --load_pretrain False                  \
 --load_pretrain_prompt False

Here are explanations of the important arguments:
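As a quick reference, the hyperparameters from the example command above can be collected in a plain dictionary. The values come from that command; the comments are our interpretation of each flag's role, not official documentation:

```python
# Hyperparameters from the example training command, with brief notes.
train_config = {
    "train_lr": 2e-5,                   # learning rate for the enhancement network
    "prompt_lr": 5e-6,                  # learning rate for prompt (text-embedding) learning
    "eta_min": 5e-6,                    # minimum learning rate for the LR scheduler
    "weight_decay": 1e-3,               # optimizer weight decay
    "num_epochs": 3000,                 # total training epochs
    "num_reconstruction_iters": 1000,   # iterations of reconstruction-only pretraining
    "num_clip_pretrained_iters": 8000,  # iterations of initial prompt-pair pretraining
    "train_batch_size": 8,              # batch size for the enhancement model
    "prompt_batch_size": 16,            # batch size for prompt learning
    "display_iter": 20,                 # log training status every N iterations
    "snapshot_iter": 20,                # save a model checkpoint every N iterations
    "prompt_display_iter": 20,          # log prompt-training status every N iterations
    "prompt_snapshot_iter": 100,        # save a prompt checkpoint every N iterations
    "load_pretrain": False,             # start the enhancement model from scratch
    "load_pretrain_prompt": False,      # start the prompt pair from scratch
}
```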

🤟 Citation

If you find our work useful for your research, please consider citing the paper:

@inproceedings{liang2023iterative,
  title={Iterative prompt learning for unsupervised backlit image enhancement},
  author={Liang, Zhexin and Li, Chongyi and Zhou, Shangchen and Feng, Ruicheng and Loy, Chen Change},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={8094--8103},
  year={2023}
}

Contact

If you have any questions, please feel free to reach out at zhexinliang@gmail.com.