Update llava conv_template in lmms_eval/models/llava.py · EvolvingLMMs-Lab/lmms-eval@3415633
````diff
@@ -11,9 +11,9 @@
 
 # Annoucement
 
-- [2024-06] The `lmms-eval/v0.2` has been upgraded to support video evaluations for video models like LLaVA-NeXT Video and Gemini 1.5 Pro across tasks such as EgoSchema, PerceptionTest, VideoMME, and more. Please refer to the blog for more details
+- [2024-06] 🎬🎬 The `lmms-eval/v0.2` has been upgraded to support video evaluations for video models like LLaVA-NeXT Video and Gemini 1.5 Pro across tasks such as EgoSchema, PerceptionTest, VideoMME, and more. Please refer to the blog for more details
 
-- [2024-03] We have released the first version of `lmms-eval`, please refer to the blog for more details
+- [2024-03] 📝📝 We have released the first version of `lmms-eval`, please refer to the blog for more details
 
 # Why `lmms-eval`?
 
@@ -120,7 +120,9 @@ python3 -m accelerate.commands.launch \
     --output_path ./logs/
 ```
 
-**For other variants llava. Note that `conv_template` is an arg of the init function of llava in `lmms_eval/models/llava.py`**
+**For other variants llava. Please change the `conv_template` in the `model_args`**
+
+> `conv_template` is an arg of the init function of llava in `lmms_eval/models/llava.py`, you could find the corresponding value at LLaVA's code, probably in a dict variable `conv_templates` in `llava/conversations.py`
 
 ```bash
 python3 -m accelerate.commands.launch \
````