VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling

Xinhao Li, Yi Wang, Jiashuo Yu, Xiangyu Zeng, Yuhan Zhu, Haian Huang, Jianfei Gao, Kunchang Li, Yinan He, Chenting Wang, Yu Qiao, Yali Wang, and Limin Wang

πŸ€— Model & Data | πŸ–₯️ Demo | πŸ“‘ Paper | 🌐 Blog

πŸ”₯ Updates

πŸ“‘ Future Plan

As I am currently very busy with work and unable to complete the above plans quickly, I sincerely invite friends in the community to join in and submit PRs.

🦜 Introduction

- πŸš€ State-of-the-art performance on both short and long video understanding, with temporal localization capabilities comparable to expert models.
- πŸ”­ Supports ultra-long video inputs, achieving a groundbreaking needle-in-a-haystack evaluation accuracy of 99.1% on 10,000 frames, and can process videos up to three hours long.
- ⚑ Highly efficient model architecture with exceptional inference speed, encoding each video frame into just 16 tokens and running 5–10 times faster than the previous model (see the back-of-the-envelope token-budget sketch below).
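To put the 16-tokens-per-frame figure in perspective, here is a back-of-the-envelope sketch of the visual-token budget. The 196-token baseline is an illustrative assumption (a typical 14Γ—14 ViT patch grid), not a number from this repo.

```python
# Back-of-the-envelope visual-token budget for frame-level compression.
# 16 tokens/frame is VideoChat-Flash's figure; 196 tokens/frame is an
# ASSUMED baseline (a typical 14x14 ViT patch grid) shown for contrast.
TOKENS_PER_FRAME_FLASH = 16
TOKENS_PER_FRAME_BASELINE = 196

def context_tokens(num_frames: int, tokens_per_frame: int) -> int:
    """Total visual tokens the LLM must attend over for a clip."""
    return num_frames * tokens_per_frame

for frames in (1_000, 10_000):
    flash = context_tokens(frames, TOKENS_PER_FRAME_FLASH)
    base = context_tokens(frames, TOKENS_PER_FRAME_BASELINE)
    print(f"{frames:>6,} frames: {flash:>9,} tokens vs. {base:>9,} uncompressed")
# 10,000 frames fit in 160,000 visual tokens, which is what makes the
# 10k-frame needle-in-a-haystack evaluation tractable.
```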

Demo & Inference

Refer to the hf README for instructions on running inference with our model.
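For orientation, below is a minimal inference sketch in the usual Hugging Face `trust_remote_code` style. The checkpoint id and the `chat` helper (including its arguments) are assumptions based on typical OpenGVLab model cards, so treat the hf README as authoritative.

```python
# Minimal sketch, assuming the standard trust_remote_code loading pattern.
# The checkpoint id and the chat(...) signature are ASSUMPTIONS; the real
# entry point is defined by the remote code shipped with the checkpoint.
from transformers import AutoModel, AutoTokenizer

model_path = "OpenGVLab/VideoChat-Flash-Qwen2-7B_res448"  # example id; check the hub

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = (
    AutoModel.from_pretrained(model_path, trust_remote_code=True)
    .half()
    .cuda()
    .eval()
)

# Hypothetical chat call; see the hf README for the exact signature.
response = model.chat(
    video_path="example_video.mp4",
    tokenizer=tokenizer,
    user_prompt="Describe this video in detail.",
)
print(response)
```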

Evaluation

See the evaluation codes. lmms-eval also supports our model, so you can use it to evaluate our model on various benchmarks.
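As a convenience, a hedged sketch of a typical lmms-eval launch is shown below, wrapped in Python so it runs as a script. The model alias (`videochat_flash`), checkpoint id, and task name are assumptions; confirm them against the lmms-eval documentation.

```python
# Hedged sketch: run lmms-eval on a video benchmark through its CLI.
# The model alias, checkpoint id, and task name are ASSUMPTIONS; check
# the lmms-eval docs for the names it actually registers.
import subprocess

subprocess.run(
    [
        "python", "-m", "lmms_eval",
        "--model", "videochat_flash",  # assumed alias for our model
        "--model_args", "pretrained=OpenGVLab/VideoChat-Flash-Qwen2-7B_res448",
        "--tasks", "videomme",         # example video benchmark
        "--batch_size", "1",
        "--output_path", "./logs/",
    ],
    check=True,
)
```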

Training

See the training codes based on LLaVA for VideoChat-Flash, and the training codes based on XTuner for finetuning InternVideo2.5.

πŸ“Š NIAH

See xtuner-eval_niah for the evaluation of Single-Hop NIAH-Video and Multi-Hop NIAH-Video.
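For intuition about what the single-hop protocol measures, here is a toy sketch of how a needle-in-a-haystack sample can be constructed. The helper and the placeholder frames are hypothetical; the actual harness lives in xtuner-eval_niah.

```python
# Toy single-hop NIAH-Video construction: hide one "needle" frame inside a
# long haystack and score whether the model can recall it. Illustrative
# only; the real protocol is implemented in xtuner-eval_niah.
import random

def build_niah_sample(haystack_frames: list, needle_frame, seed: int = 0):
    """Insert the needle at a random depth; return frames and ground truth."""
    rng = random.Random(seed)
    position = rng.randrange(len(haystack_frames) + 1)
    frames = haystack_frames[:position] + [needle_frame] + haystack_frames[position:]
    return frames, position

# Placeholder "frames"; a real run would use decoded video frames.
haystack = [f"frame_{i}" for i in range(9_999)]
frames, gt_position = build_niah_sample(haystack, "needle_frame")
print(f"needle hidden at index {gt_position} of {len(frames)} frames")
# Accuracy is the fraction of samples where the model's answer about the
# needle (its content or location) matches the ground truth.
```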

πŸ“„ Citation

If you find this project useful in your research, please consider citing:

    @article{li2024videochat,
      title={VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling},
      author={Li, Xinhao and Wang, Yi and Yu, Jiashuo and Zeng, Xiangyu and Zhu, Yuhan and Huang, Haian and Gao, Jianfei and Li, Kunchang and He, Yinan and Wang, Chenting and Qiao, Yu and Wang, Yali and Wang, Limin},
      journal={arXiv preprint arXiv:2501.00574},
      year={2024}
    }

πŸ’« Acknowledgement

Thanks to the following open-source projects: InternVideo, UMT, Qwen, LLaVA-VL, lmms-eval, Ask-Anything, ToMe, LongVLM, FastV, LLaVolta, PyramidDrop, and LongVA. Their implementations provided valuable references for our project.