
VTimeLLM [Paper]

Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments".



📢 Latest Updates


VTimeLLM Overview 💡

VTimeLLM is a novel Video LLM designed for fine-grained video moment understanding and reasoning with respect to time boundaries.

VTimeLLM adopts a boundary-aware three-stage training strategy: the first stage uses image-text pairs for feature alignment, the second uses multi-event videos to increase temporal-boundary awareness, and the third applies high-quality video-instruction tuning to further improve temporal understanding and align the model with human intents.

[Figure: the boundary-aware three-stage training framework of VTimeLLM]
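To make the boundary-aware idea concrete, here is a minimal sketch of how a multi-event clip could be turned into a stage-2 training sample, with boundaries expressed as two-digit relative indices over 100 uniformly sampled frames as in the paper. The event data, helper names, and prompt wording are illustrative, not the repository's actual pipeline:

```python
# Minimal sketch of a boundary-aware (stage-2) training sample.
# Hypothetical helpers and prompt text; the real pipeline lives in this repo.

def to_frame_index(t: float, duration: float, num_frames: int = 100) -> int:
    """Map a timestamp in seconds to a relative frame index in [0, num_frames - 1]."""
    return min(int(t / duration * num_frames), num_frames - 1)

def build_stage2_sample(events, duration):
    """Format multi-event annotations as a dense-captioning dialogue turn.

    events: list of (start_sec, end_sec, caption) tuples.
    """
    descriptions = []
    for start, end, caption in events:
        s, e = to_frame_index(start, duration), to_frame_index(end, duration)
        descriptions.append(f"From {s:02d} to {e:02d}, {caption}")
    question = ("<video>\nCould you please detail the events that happened "
                "during different time segments of the video?")
    return {"question": question, "answer": " ".join(descriptions)}

sample = build_stage2_sample(
    events=[(3.2, 17.5, "a man unboxes a laptop."),
            (18.0, 40.1, "he plugs it in and powers it on.")],
    duration=48.0,
)
print(sample["answer"])
# From 06 to 36, a man unboxes a laptop. From 37 to 83, he plugs it in and powers it on.
```

Training the model to emit and consume these two-digit indices is what gives it an explicit notion of where events start and end within the sampled frame sequence.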


Contributions 🏆


Installation 🔧

We recommend setting up a conda environment for the project:

conda create --name=vtimellm python=3.10
conda activate vtimellm

git clone https://github.com/huangb23/VTimeLLM.git
cd VTimeLLM
pip install -r requirements.txt

Additionally, install the following packages if you plan to run training:

pip install ninja
pip install flash-attn --no-build-isolation
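Before moving on, a quick sanity check can confirm that CUDA and flash-attention are visible to Python. This snippet is only an illustration and not part of the repository:

```python
# Environment sanity check (illustrative, not part of this repo).
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

try:
    import flash_attn  # required for training only
    print("flash-attn:", flash_attn.__version__)
except ImportError:
    print("flash-attn not installed; it is only needed for training.")
```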

Running Demo Offline 💿

To run the demo offline, please refer to the instructions in offline_demo.md.
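VTimeLLM describes moments with two-digit relative indices over 100 uniformly sampled frames, so a typical answer contains spans like "from 15 to 32". A small post-processing sketch (the regex and function name are ours, not the repository's) converts such an answer back to seconds given the video duration:

```python
import re

def decode_moments(answer: str, duration: float, num_frames: int = 100):
    """Convert 'from XX to YY' relative indices in a model answer to seconds."""
    spans = []
    for s, e in re.findall(r"from (\d{2}) to (\d{2})", answer, flags=re.IGNORECASE):
        spans.append((int(s) / num_frames * duration, int(e) / num_frames * duration))
    return spans

print(decode_moments("The man opens the box from 15 to 32.", duration=60.0))
# [(9.0, 19.2)]
```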

Training 🚋

For training instructions, check out train.md.
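Stages 2 and 3 fine-tune the frozen LLM with lightweight LoRA adapters; the exact scripts and hyperparameters are in train.md. As a rough illustration of that setup, assuming a Hugging Face causal LM and the peft library (the checkpoint name and LoRA hyperparameters below are placeholders, not the repository's configuration):

```python
# Illustrative LoRA setup in the spirit of stage 2/3 fine-tuning.
# Checkpoint and hyperparameters are assumptions; see train.md for the real ones.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5")
model.requires_grad_(False)  # keep the base LLM frozen

lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters receive gradients
```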

Qualitative Analysis 🔍

A comprehensive evaluation of VTimeLLM's performance across multiple tasks.

Video Understanding and Conversational Tasks 💬



Creative Tasks 🖌️



Fine-grained Understanding Tasks 🌐



Video Reasoning Tasks ❓



Acknowledgements 🙏

We are grateful for the following awesome projects that VTimeLLM builds upon:

If you're using VTimeLLM in your research or applications, please cite using this BibTeX:

@inproceedings{huang2024vtimellm,
  title={Vtimellm: Empower llm to grasp video moments},
  author={Huang, Bin and Wang, Xin and Chen, Hong and Song, Zihan and Zhu, Wenwu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={14271--14280},
  year={2024}
}

License 📜


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License.

Looking forward to your feedback, contributions, and stars! 🌟