[CVPR2025 && NTIRE2025] HVI: A New Color Space for Low-light Image Enhancement (Official Implementation)

Qingsen Yan*, Yixu Feng*, Cheng Zhang, Guansong Pang, Kangbiao Shi, Peng Wu, Wei Dong, Jinqiu Sun, Yanning Zhang

Previous Version: You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement


HVI-CIDNet Demo:


News 🆕

Proposed HVI-CIDNet ⚙

Motivation:


HVI-CIDNet pipeline:


Lighten Cross-Attention (LCA) Block structure:


Visual Comparison 🖼

LOL-v1, LOL-v2-real, and LOL-v2-synthetic:


DICM, LIME, MEF, NPE, and VV:


LOL-Blur:


Weights and Results 🧾

All the weights we trained on different datasets are available at [Baidu Pan] (code: yixu) and [One Drive] (code: yixu). Results on the DICM, LIME, MEF, NPE, and VV datasets can be downloaded from [Baidu Pan] (code: yixu) and [One Drive] (code: yixu). Bolded fonts represent impressive metrics.

The metrics of HVI-CIDNet on paired datasets are shown in the following table:

The extraction code for all links is yixu.

| Folder (test datasets) | PSNR | SSIM | LPIPS | GT Mean | Results | Weights Path |
| --- | --- | --- | --- | --- | --- | --- |
| (LOLv1) v1 w perc loss / wo gt mean | 23.8091 | 0.8574 | 0.0856 | | Baidu Pan and One Drive | LOLv1/w_perc.pth |
| (LOLv1) v1 w perc loss / w gt mean | 27.7146 | 0.8760 | 0.0791 | √ | ditto | LOLv1/w_perc.pth |
| (LOLv1) v1 wo perc loss / wo gt mean | 23.5000 | 0.8703 | 0.1053 | | Baidu Pan and One Drive | LOLv1/wo_perc.pth |
| (LOLv1) v1 wo perc loss / w gt mean | 28.1405 | 0.8887 | 0.0988 | √ | ditto | LOLv1/wo_perc.pth |
| (LOLv2_real) v2 wo perc loss / wo gt mean | 23.4269 | 0.8622 | 0.1691 | | Baidu Pan and One Drive | (lost) |
| (LOLv2_real) v2 wo perc loss / w gt mean | 27.7619 | 0.8812 | 0.1649 | √ | ditto | (lost) |
| (LOLv2_real) v2 best gt mean | 28.1387 | 0.8920 | 0.1008 | √ | Baidu Pan and One Drive | LOLv2_real/w_perc.pth |
| (LOLv2_real) v2 best Normal | 24.1106 | 0.8675 | 0.1162 | | Baidu Pan and One Drive | (lost) |
| (LOLv2_real) v2 best PSNR | 23.9040 | 0.8656 | 0.1219 | | Baidu Pan and One Drive | LOLv2_real/best_PSNR.pth |
| (LOLv2_real) v2 best SSIM | 23.8975 | 0.8705 | 0.1185 | | Baidu Pan and One Drive | LOLv2_real/best_SSIM.pth |
| (LOLv2_real) v2 best SSIM / w gt mean | 28.3926 | 0.8873 | 0.1136 | √ | None | LOLv2_real/best_SSIM.pth |
| (LOLv2_syn) syn wo perc loss / wo gt mean | 25.7048 | 0.9419 | 0.0471 | | Baidu Pan and One Drive | LOLv2_syn/wo_perc.pth |
| (LOLv2_syn) syn wo perc loss / w gt mean | 29.5663 | 0.9497 | 0.0437 | √ | ditto | LOLv2_syn/wo_perc.pth |
| (LOLv2_syn) syn w perc loss / wo gt mean | 25.1294 | 0.9388 | 0.0450 | | Baidu Pan and One Drive | LOLv2_syn/w_perc.pth |
| (LOLv2_syn) syn w perc loss / w gt mean | 29.3666 | 0.9500 | 0.0403 | √ | ditto | LOLv2_syn/w_perc.pth |
| Sony_Total_Dark | 22.9039 | 0.6763 | 0.4109 | | Baidu Pan and One Drive | SID.pth |
| LOL-Blur | 26.5719 | 0.8903 | 0.1203 | | Baidu Pan and One Drive | LOL-Blur.pth |
| SICE-Mix | 13.4235 | 0.6360 | 0.3624 | √ | Baidu Pan and One Drive | SICE.pth |
| SICE-Grad | 13.4453 | 0.6477 | 0.3181 | √ | Baidu Pan and One Drive | SICE.pth |
| FiveK (follow Retinexformer) | 24.4587 | 0.8769 | 0.0851 | | Baidu Pan and One Drive | fivek.pth |

Performance on the five unpaired datasets is shown in the following table:

| metrics | DICM | LIME | MEF | NPE | VV |
| --- | --- | --- | --- | --- | --- |
| NIQE | 3.79 | 4.13 | 3.56 | 3.74 | 3.21 |
| BRISQUE | 21.47 | 16.25 | 13.77 | 18.92 | 30.63 |

Furthermore, we found that using a random gamma function during training can improve the generalization of CIDNet. See train.py lines 53-55 for details; you can also turn on the random-gamma mode in data/options.py line 60 during training.
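The idea can be sketched in a few lines. This is a minimal illustration of random gamma augmentation, not the repository's exact code: the function name `random_gamma` and the range `(0.5, 2.5)` are hypothetical; the real range is configured in data/options.py.

```python
import random

import numpy as np

def random_gamma(img, gamma_range=(0.5, 2.5)):
    """Apply a randomly sampled gamma curve to a normalized image.

    `img` is a float array in [0, 1]; `gamma_range` is a hypothetical
    range chosen here for illustration only.
    """
    gamma = random.uniform(*gamma_range)
    # Gamma < 1 brightens, gamma > 1 darkens; endpoints 0 and 1 are fixed.
    return np.clip(img, 0.0, 1.0) ** gamma
```

Sampling a fresh gamma per training image exposes the network to many illumination levels from a single dataset, which is the intuition behind the improved generalization.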

We trained on the LOLv2-Syn dataset with the random-gamma mode and saved the weights as LOLv2_syn/generalization.pth (available at the links above). The performance is shown in the following table; 7 metrics improved:

| metrics | DICM | LIME | MEF | NPE | VV | Results |
| --- | --- | --- | --- | --- | --- | --- |
| NIQE | 3.55 | 3.85 | 3.46 | 3.82 | 3.24 | Baidu Pan and One Drive |
| BRISQUE | 25.62 | 16.02 | 13.08 | 18.90 | 29.55 | ditto |

The weights with the strongest generalization ability are deployed in the HVI-CIDNet demo on Hugging Face, which we highly recommend. Here are their NIQE metrics on the five unpaired datasets, which you can easily reproduce on Hugging Face:

| metrics | DICM | LIME | MEF | NPE | VV | Average |
| --- | --- | --- | --- | --- | --- | --- |
| NIQE | 3.36 | 3.03 | 3.11 | 3.33 | 2.49 | 3.13 |

Test Finetune:

| Folder (test datasets) | PSNR | SSIM | LPIPS | GT Mean | Results | Weights Path |
| --- | --- | --- | --- | --- | --- | --- |
| (LOLv1) v1 test finetuning | 25.4036 | 0.8652 | 0.0897 | | Baidu Pan and One Drive | LOLv1/test_finetuning.pth |
| (LOLv1) v1 test finetuning | 27.5969 | 0.8696 | 0.0869 | √ | ditto | ditto |

Contributions from other peers:

| Datasets | PSNR | SSIM | LPIPS | GT Mean | Results | Weights Path | Contributor Detail | GPU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LOLv1 | 24.7401 | 0.8604 | 0.0896 | | Baidu Pan and One Drive | LOLv1/other/PSNR_24.74.pth | Yingjian Li (Xi'an Polytechnic University) | NVIDIA 4070 |

1. Get Started 🌑

Dependencies and Installation

(1) Create Conda Environment

conda create --name CIDNet python=3.7.0
conda activate CIDNet

(2) Clone Repo

git clone git@github.com:Fediory/HVI-CIDNet.git

(3) Install Dependencies

cd HVI-CIDNet
pip install -r requirements.txt

Data Preparation

You can refer to the following links to download the datasets. Note that we only use the low_blur and high_sharp_scaled subsets of the LOL-Blur dataset.

Then, put them in the following folder:


├── datasets
    ├── DICM
    ├── FiveK
        ├── test
            ├── input
            ├── target
        ├── train
            ├── input
            ├── target
    ├── LIME
    ├── LOLdataset
        ├── our485
            ├── low
            ├── high
        ├── eval15
            ├── low
            ├── high
    ├── LOLv2
        ├── Real_captured
            ├── Train
                ├── Low
                ├── Normal
            ├── Test
                ├── Low
                ├── Normal
        ├── Synthetic
            ├── Train
                ├── Low
                ├── Normal
            ├── Test
                ├── Low
                ├── Normal
    ├── LOL_blur
        ├── eval
            ├── high_sharp_scaled
            ├── low_blur
        ├── test
            ├── high_sharp_scaled
                ├── 0012
                ├── 0017
                ...
            ├── low_blur
                ├── 0012
                ├── 0017
                ...
        ├── train
            ├── high_sharp_scaled
                ├── 0000
                ├── 0001
                ...
            ├── low_blur
                ├── 0000
                ├── 0001
                ...
    ├── MEF
    ├── NPE
    ├── SICE
        ├── Dataset
            ├── eval
                ├── target
                ├── test
            ├── label
            ├── train
                ├── 1
                ├── 2
                ...
        ├── SICE_Grad
        ├── SICE_Mix
        ├── SICE_Reshape
    ├── Sony_total_dark
        ├── eval
            ├── long
            ├── short
        ├── test
            ├── long
                ├── 10003
                ├── 10006
                ...
            ├── short
                ├── 10003
                ├── 10006
                ...
        ├── train
            ├── long
                ├── 00001
                ├── 00002
                ...
            ├── short
                ├── 00001
                ├── 00002
                ...
    ├── VV
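To avoid path errors partway through training, it can help to verify the layout first. Below is a hypothetical sanity-check helper (not part of the repository) that checks a few of the LOLv1 folders shown above; `EXPECTED` and `check_layout` are names invented for this sketch.

```python
from pathlib import Path

# Hypothetical check, mirroring the LOLv1 portion of the layout above.
EXPECTED = [
    "datasets/LOLdataset/our485/low",
    "datasets/LOLdataset/our485/high",
    "datasets/LOLdataset/eval15/low",
    "datasets/LOLdataset/eval15/high",
]

def check_layout(root="."):
    """Return the expected dataset folders that are missing under `root`."""
    return [p for p in EXPECTED if not (Path(root) / p).is_dir()]
```

An empty return value means the LOLv1 folders are in place; extend `EXPECTED` with the other datasets you plan to use.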

2. Testing 🌒

Download our weights from [Baidu Pan] (code: yixu) and put them in folder weights:


├── weights
    ├── LOLv1
        ├── w_perc.pth
        ├── wo_perc.pth
        ├── test_finetuning.pth
    ├── LOLv2_real
        ├── best_PSNR.pth
        ├── best_SSIM.pth
        ├── w_perc.pth
    ├── LOLv2_syn
        ├── generalization.pth
        ├── w_perc.pth
        ├── wo_perc.pth
    ├── LOL-Blur.pth
    ├── SICE.pth
    ├── SID.pth

You can also find all weights at https://huggingface.co/papers/2502.20272.

python eval_hf.py --path fediory/our_model_path --input_img your/img/path --alpha_s 1.0 --alpha_i 1.0 --gamma 1.0

For example:

python eval_hf.py --path fediory/HVI-CIDNet-LOLv1-wperc --input_img ./datasets/DICM/01.JPG --alpha_s 1.0 --alpha_i 1.0 --gamma 1.0

Your enhanced image will be saved to ./output_hf.

LOLv1

python eval.py --lol --perc # weights trained with perceptual loss
python eval.py --lol # weights trained without perceptual loss

LOLv2-real

python eval.py --lol_v2_real --best_GT_mean # you can choose best_GT_mean or best_PSNR or best_SSIM

LOLv2-syn

python eval.py --lol_v2_syn --perc # weights trained with perceptual loss
python eval.py --lol_v2_syn # weights trained without perceptual loss

SICE

python eval.py --SICE_grad # output SICE_grad
python eval.py --SICE_mix # output SICE_mix

FiveK

python eval.py --fivek # output FiveK follow Retinexformer

Sony-Total-Dark

python eval_SID_blur.py --SID

LOL-Blur

python eval_SID_blur.py --Blur

five unpaired datasets DICM, LIME, MEF, NPE, VV.

Note: you can choose any weights in the ./weights folder and set the alpha float (default=1.0) as the illumination scale of the dataset.

gamma denotes the gamma function (curve); see line 59 of eval.py.
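For intuition, the effect of the two knobs can be sketched as below. This is a simplified RGB-space approximation, not the repository's code: the actual pipeline applies these adjustments in the HVI color space, and the function name `adjust_illumination` is invented here.

```python
import numpy as np

def adjust_illumination(img, alpha=1.0, gamma=1.0):
    """Sketch of --alpha / --gamma: scale illumination by `alpha`,
    then apply a gamma curve. `img` is a float array in [0, 1]."""
    img = np.clip(img * alpha, 0.0, 1.0)  # alpha > 1 brightens, < 1 dims
    return img ** gamma                   # gamma < 1 lifts shadows
```

With alpha=1.0 and gamma=1.0 the image is unchanged, which matches the defaults in the commands above.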

You can change "--DICM" to the other unpaired datasets "LIME, MEF, NPE, VV".

python eval.py --unpaired --DICM --unpaired_weights --alpha

e.g.

python eval.py --unpaired --DICM --unpaired_weights ./weights/LOLv2_syn/w_perc.pth --alpha 0.9 --gamma 0.9

Custom Datasets: alpha and gamma are optional.

python eval.py --unpaired --custome --custome_path ./your/custom/dataset/path --unpaired_weights ./weights/LOLv2_syn/w_perc.pth --alpha 0.9 --gamma 0.9

LOLv1

python measure.py --lol

LOLv2-real

python measure.py --lol_v2_real

LOLv2-syn

python measure.py --lol_v2_syn

Sony-Total-Dark

python measure_SID_blur.py --SID

LOL-Blur

python measure_SID_blur.py --Blur

SICE-Grad

python measure.py --SICE_grad

SICE-Mix

python measure.py --SICE_mix

FiveK

python measure.py --fivek

five unpaired datasets DICM, LIME, MEF, NPE, VV.

You can change "--DICM" to the other unpaired datasets "LIME, MEF, NPE, VV".

python measure_niqe_bris.py --DICM

Note: Following LLFlow, KinD, and Retinexformer, we also adjust the brightness of the network output based on the average value of the ground truth (GT). This only works on paired datasets. If you want to measure it, please add "--use_GT_mean".

e.g.

python measure.py --lol --use_GT_mean
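The GT-mean protocol amounts to rescaling the output so its mean brightness matches the ground truth before computing metrics. A minimal sketch, assuming the common form of this adjustment used in LLFlow/KinD/Retinexformer-style evaluation (the repository's exact implementation may differ; `gt_mean_adjust` is a name invented here):

```python
import numpy as np

def gt_mean_adjust(output, gt):
    """Rescale `output` (floats in [0, 1]) so its mean matches the GT mean."""
    scale = gt.astype(np.float64).mean() / (output.astype(np.float64).mean() + 1e-8)
    return np.clip(output * scale, 0.0, 1.0)
```

Because a single global scale can fix an overall brightness mismatch, PSNR with GT mean is typically higher than without it, which is why the tables above report both settings.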

3. Training 🌓

You can set --start_warmup True/False to enable warmup in the training stage.

You can choose these dataset for training: lol_v1, lolv2_real, lolv2_syn, lol_blur, SID, SICE_mix, SICE_grad, fivek.

Below is an example:

python train.py --dataset lol_v1
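For reference, warmup usually means ramping the learning rate up from a small value over the first steps. The sketch below shows the generic linear-warmup schedule only; the function name and the values (1000 steps, base LR 1e-4) are hypothetical and not taken from train.py.

```python
def warmup_lr(step, warmup_steps=1000, base_lr=1e-4):
    """Linear learning-rate warmup sketch (hypothetical hyperparameters).

    Ramps from near zero up to `base_lr` over `warmup_steps`, then holds.
    """
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr
```

A warmup phase like this often stabilizes the early iterations of training, which is the usual motivation for a --start_warmup option.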

4. Contacts 🌔

If you have any questions, please contact us or submit an issue to the repository!

Yixu Feng (yixu-nwpu@mail.nwpu.edu.cn)

5. Citation 🌕

If you find our work useful for your research, please cite our papers:

@article{yan2025hvi,
  title={HVI: A New Color Space for Low-light Image Enhancement},
  author={Yan, Qingsen and Feng, Yixu and Zhang, Cheng and Pang, Guansong and Shi, Kangbiao and Wu, Peng and Dong, Wei and Sun, Jinqiu and Zhang, Yanning},
  journal={arXiv preprint arXiv:2502.20272},
  year={2025}
}

@misc{feng2024hvi,
      title={You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement}, 
      author={Yixu Feng and Cheng Zhang and Pei Wang and Peng Wu and Qingsen Yan and Yanning Zhang},
      year={2024},
      eprint={2402.05809},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}