[Model] add vllm compatible models by Luodian · Pull Request #544 · EvolvingLMMs-Lab/lmms-eval
- Introduce the VLLM model in the model registry.
- Update `AVAILABLE_MODELS` to include the new models:
  - `models/__init__.py`: added "aria", "internvideo2", "llama_vision", "oryx", "ross", "slime", "videochat2", "vllm", "xcomposer2_4KHD", "xcomposer2d5".
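The registry pattern above can be sketched as a name-to-class mapping with lazy imports. This is a minimal illustration, not the actual code from the PR: the entries shown are a subset, the class names are assumptions, and `get_model_class` is a hypothetical helper.

```python
import importlib

# Illustrative subset of the registry; the real AVAILABLE_MODELS in
# lmms_eval/models/__init__.py contains many more entries, and the
# class names here are assumed for illustration.
AVAILABLE_MODELS = {
    "vllm": "VLLM",
    "llama_vision": "LlamaVision",
}

def get_model_class(name: str):
    """Hypothetical helper: resolve a registered model name to its class.

    The module is only imported when the model is actually requested,
    so unused backends never have to be installed.
    """
    if name not in AVAILABLE_MODELS:
        raise ValueError(f"Unknown model: {name}")
    class_name = AVAILABLE_MODELS[name]
    module = importlib.import_module(f"lmms_eval.models.{name}")
    return getattr(module, class_name)
```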
- Create `vllm.py` for the VLLM model implementation:
  - Implemented encoding for image and video inputs.
  - Added methods for generating responses and handling multi-round generation.
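The image encoding step can be sketched as follows. This is a minimal sketch of the common pattern for passing images to a vLLM OpenAI-compatible chat endpoint (base64 data URLs inside an `image_url` content part); `build_image_message` is a hypothetical helper, not the PR's actual code.

```python
import base64

def encode_image_to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a base64 data URL, the format accepted
    by OpenAI-compatible chat endpoints (including vLLM's) for images."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{b64}"

def build_image_message(prompt: str, image_bytes: bytes) -> dict:
    """Hypothetical helper: wrap a text prompt and one image into a
    single user chat message with mixed text/image content parts."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": encode_image_to_data_url(image_bytes)},
            },
        ],
    }
```

Video inputs typically follow the same idea, either as a list of encoded frames or a video content part, depending on the model's chat template.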
- Update MMMU tasks with new prompt formats and evaluation metrics:
  - `mmmu_val.yaml`: added task-specific kwargs for prompt types.
  - `mmmu_val_reasoning.yaml`: enhanced prompts for reasoning tasks.
  - `utils.py`: adjusted evaluation rules and scoring for predictions.
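As a rough illustration, the kind of prompt kwargs added to a task YAML might look like the sketch below. The key names and prompt text are assumptions based on lmms-eval's task config conventions, not copied from this PR's diff:

```yaml
# Hypothetical fragment; key names and values are illustrative only.
lmms_eval_specific_kwargs:
  default:
    prompt_type: "format"
    pre_prompt: ""
    post_prompt: "\nAnswer with the option letter from the given choices directly."
```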
- Add a script for easy model execution:
  - `vllm_qwen2vl.sh`: script to run VLLM with specified parameters.
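A script like this typically wraps a single CLI invocation. The sketch below shows a plausible shape, assuming lmms-eval's standard flags; the exact model arguments and paths inside `vllm_qwen2vl.sh` may differ:

```shell
# Hypothetical invocation mirroring what vllm_qwen2vl.sh might run;
# model_args keys and values are assumptions, not copied from the script.
python3 -m lmms_eval \
    --model vllm \
    --model_args model_version=Qwen/Qwen2-VL-7B-Instruct,tensor_parallel_size=1 \
    --tasks mmmu_val \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```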