EvalScope: A streamlined and customizable framework for efficient large model evaluation and performance benchmarking



πŸ“– δΈ­ζ–‡ζ–‡ζ‘£ | πŸ“– English Documents

⭐ If you like this project, please click the "Star" button at the top right to support us. Your support is our motivation to keep going!

πŸ“‹ Contents

πŸ“ Introduction

EvalScope is a comprehensive model evaluation and performance benchmarking framework meticulously crafted by the ModelScope Community, offering a one-stop solution for your model assessment needs. Regardless of the type of model you are developing, EvalScope is equipped to cater to your requirements.

EvalScope is not merely an evaluation tool; it is a valuable ally in your model optimization journey.

Below is the overall architecture diagram of EvalScope:


EvalScope Framework.

Framework Description

The architecture includes the following modules:

  1. Model Adapter: The model adapter is used to convert the outputs of specific models into the format required by the framework, supporting both API call models and locally run models.
  2. Data Adapter: The data adapter is responsible for converting and processing input data to meet various evaluation needs and formats.
  3. Evaluation Backend:
    • Native: EvalScope’s own default evaluation framework, supporting various evaluation modes, including single model evaluation, arena mode, baseline model comparison mode, etc.
    • OpenCompass: Supports OpenCompass as the evaluation backend, providing advanced encapsulation and task simplification, allowing you to submit tasks for evaluation more easily.
    • VLMEvalKit: Supports VLMEvalKit as the evaluation backend, enabling easy initiation of multi-modal evaluation tasks, supporting various multi-modal models and datasets.
    • RAGEval: Supports RAG evaluation, supporting independent evaluation of embedding models and rerankers using MTEB/CMTEB, as well as end-to-end evaluation using RAGAS.
    • ThirdParty: Other third-party evaluation tasks, such as ToolBench.
  4. Performance Evaluator: Measures model inference service performance, including performance testing, stress testing, performance report generation, and visualization.
  5. Evaluation Report: The final generated evaluation report summarizes the model's performance, which can be used for decision-making and further model optimization.
  6. Visualization: Visualization results help users intuitively understand evaluation results, facilitating analysis and comparison of different model performances.

☎ User Groups

Please scan the QR code below to join our community groups:

Discord Group | WeChat Group | DingTalk Group

πŸŽ‰ News

πŸ› οΈ Installation

Method 1: Install Using pip

We recommend using conda to manage your environment and installing dependencies with pip:

  1. Create a conda environment (optional)

It is recommended to use Python 3.10:

```shell
# Create and activate the conda environment
conda create -n evalscope python=3.10
conda activate evalscope
```

  2. Install dependencies using pip

```shell
pip install evalscope  # Install Native backend (default)
```

Additional options:

```shell
pip install 'evalscope[opencompass]'  # Install OpenCompass backend
pip install 'evalscope[vlmeval]'      # Install VLMEvalKit backend
pip install 'evalscope[rag]'          # Install RAGEval backend
pip install 'evalscope[perf]'         # Install dependencies for the model performance testing module
pip install 'evalscope[app]'          # Install dependencies for visualization
pip install 'evalscope[all]'          # Install all backends (Native, OpenCompass, VLMEvalKit, RAGEval)
```

Warning

As the project has been renamed to evalscope, for versions v0.4.3 or earlier you can install with the following command (note the quotes, which keep the shell from interpreting `<=` as a redirection):

```shell
pip install 'llmuses<=0.4.3'
```

and import the relevant dependencies using the llmuses package name.

Method 2: Install from Source

  1. Download the source code

```shell
git clone https://github.com/modelscope/evalscope.git
```

  2. Install dependencies

```shell
cd evalscope/
pip install -e .  # Install Native backend
```

Additional options:

```shell
pip install -e '.[opencompass]'  # Install OpenCompass backend
pip install -e '.[vlmeval]'      # Install VLMEvalKit backend
pip install -e '.[rag]'          # Install RAGEval backend
pip install -e '.[perf]'         # Install Perf dependencies
pip install -e '.[app]'          # Install visualization dependencies
pip install -e '.[all]'          # Install all backends (Native, OpenCompass, VLMEvalKit, RAGEval)
```

πŸš€ Quick Start

To evaluate a model on specified datasets with default configurations, the framework supports two ways to initiate an evaluation task: the command line or Python code.

Method 1. Using Command Line

Execute the eval command in any directory:

```shell
evalscope eval \
  --model Qwen/Qwen2.5-0.5B-Instruct \
  --datasets gsm8k arc \
  --limit 5
```

Method 2. Using Python Code

When using Python code for evaluation, submit the evaluation task with the run_task function, passing a TaskConfig as the parameter. The parameter can also be a Python dictionary, a YAML file path, or a JSON file path, for example:

Using Python Dictionary

```python
from evalscope.run import run_task

task_cfg = {
    'model': 'Qwen/Qwen2.5-0.5B-Instruct',
    'datasets': ['gsm8k', 'arc'],
    'limit': 5,
}

run_task(task_cfg=task_cfg)
```

More Startup Methods

Using TaskConfig

```python
from evalscope.run import run_task
from evalscope.config import TaskConfig

task_cfg = TaskConfig(
    model='Qwen/Qwen2.5-0.5B-Instruct',
    datasets=['gsm8k', 'arc'],
    limit=5,
)

run_task(task_cfg=task_cfg)
```

Using yaml file

config.yaml:

```yaml
model: Qwen/Qwen2.5-0.5B-Instruct
datasets:
  - gsm8k
  - arc
limit: 5
```

```python
from evalscope.run import run_task

run_task(task_cfg="config.yaml")
```

Using json file

config.json:

```json
{
  "model": "Qwen/Qwen2.5-0.5B-Instruct",
  "datasets": ["gsm8k", "arc"],
  "limit": 5
}
```

```python
from evalscope.run import run_task

run_task(task_cfg="config.json")
```
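As noted above, run_task accepts a TaskConfig object, a plain dictionary, or a YAML/JSON file path. Conceptually, the dispatch works like the following stand-alone sketch (illustrative only; load_task_cfg is a hypothetical helper, not EvalScope's actual implementation, and YAML handling is omitted to stay dependency-free):

```python
import json
from pathlib import Path


def load_task_cfg(task_cfg):
    # Normalize a task config given as a dict or a JSON file path.
    # The real run_task also accepts TaskConfig objects and YAML paths.
    if isinstance(task_cfg, dict):
        return task_cfg
    if isinstance(task_cfg, str) and task_cfg.endswith(".json"):
        return json.loads(Path(task_cfg).read_text())
    raise TypeError(f"Unsupported config type: {type(task_cfg)!r}")


cfg = load_task_cfg({"model": "Qwen/Qwen2.5-0.5B-Instruct",
                     "datasets": ["gsm8k", "arc"], "limit": 5})
print(cfg["datasets"])  # ['gsm8k', 'arc']
```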

Basic Parameters

Output Results

+-----------------------+----------------+-----------------+-----------------+---------------+-------+---------+
| Model Name            | Dataset Name   | Metric Name     | Category Name   | Subset Name   |   Num |   Score |
+=======================+================+=================+=================+===============+=======+=========+
| Qwen2.5-0.5B-Instruct | gsm8k          | AverageAccuracy | default         | main          |     5 |     0.4 |
+-----------------------+----------------+-----------------+-----------------+---------------+-------+---------+
| Qwen2.5-0.5B-Instruct | ai2_arc        | AverageAccuracy | default         | ARC-Easy      |     5 |     0.8 |
+-----------------------+----------------+-----------------+-----------------+---------------+-------+---------+
| Qwen2.5-0.5B-Instruct | ai2_arc        | AverageAccuracy | default         | ARC-Challenge |     5 |     0.4 |
+-----------------------+----------------+-----------------+-----------------+---------------+-------+---------+
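If you need a single headline number, the per-subset scores can be combined into a sample-weighted average using the Num and Score columns (an illustrative calculation on the table above, not EvalScope's official aggregation):

```python
# (num_samples, score) per subset row, taken from the report table above
subsets = [(5, 0.4), (5, 0.8), (5, 0.4)]

total_num = sum(n for n, _ in subsets)
# Weight each subset score by its sample count
weighted_score = sum(n * s for n, s in subsets) / total_num
print(round(weighted_score, 4))  # 0.5333
```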

πŸ“ˆ Visualization of Evaluation Results

  1. Install the dependencies required for visualization, including gradio, plotly, etc.

```shell
pip install 'evalscope[app]'
```

  2. Start the visualization service

Run the `evalscope app` command to start the visualization service.

If the following output appears, you can access the visualization service in the browser:

```text
* Running on local URL:  http://127.0.0.1:7861

To create a public link, set `share=True` in `launch()`.
```
Screenshots: Setting Interface Β· Model Comparison Β· Report Overview Β· Report Details

For more details, refer to: πŸ“– Visualization of Evaluation Results

🌐 Evaluation of Specified Model API

Specify the model API service address (api_url) and API Key (api_key) to evaluate the deployed model API service. In this case, the eval-type parameter must be specified as service, for example:

For example, to launch a model service using vLLM:

```shell
export VLLM_USE_MODELSCOPE=True
python -m vllm.entrypoints.openai.api_server \
  --model Qwen/Qwen2.5-0.5B-Instruct \
  --served-model-name qwen2.5 \
  --trust_remote_code \
  --port 8801
```

Then, you can use the following command to evaluate the model API service:

```shell
evalscope eval \
  --model qwen2.5 \
  --api-url http://127.0.0.1:8801/v1 \
  --api-key EMPTY \
  --eval-type service \
  --datasets gsm8k \
  --limit 10
```

βš™οΈ Custom Parameter Evaluation

For more customized evaluations, such as customizing model parameters or dataset parameters, you can use the following command. The startup method is the same as for a simple evaluation. Below is an example of starting the evaluation with the eval command:

```shell
evalscope eval \
  --model Qwen/Qwen3-0.6B \
  --model-args '{"revision": "master", "precision": "torch.float16", "device_map": "auto"}' \
  --generation-config '{"do_sample":true,"temperature":0.6,"max_new_tokens":512,"chat_template_kwargs":{"enable_thinking": false}}' \
  --dataset-args '{"gsm8k": {"few_shot_num": 0, "few_shot_random": false}}' \
  --datasets gsm8k \
  --limit 10
```
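For reference, the same customized run can be expressed as a Python task dictionary mirroring the CLI flags above (field names follow EvalScope's parameter naming; treat this as a sketch to adapt rather than a verified snippet):

```python
# Mirrors the CLI invocation above as a task dictionary;
# pass it to EvalScope's run_task, e.g. run_task(task_cfg=task_cfg).
task_cfg = {
    'model': 'Qwen/Qwen3-0.6B',
    'model_args': {'revision': 'master', 'precision': 'torch.float16',
                   'device_map': 'auto'},
    'generation_config': {
        'do_sample': True,
        'temperature': 0.6,
        'max_new_tokens': 512,
        'chat_template_kwargs': {'enable_thinking': False},
    },
    'dataset_args': {'gsm8k': {'few_shot_num': 0, 'few_shot_random': False}},
    'datasets': ['gsm8k'],
    'limit': 10,
}
print(task_cfg['datasets'])  # ['gsm8k']
```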

Parameter Description

Reference: Full Parameter Description

Evaluation Backend

EvalScope supports using third-party evaluation frameworks to initiate evaluation tasks, which we call Evaluation Backend. Currently supported backends include OpenCompass, VLMEvalKit, RAGEval, and other third-party tasks such as ToolBench.

πŸ“ˆ Model Serving Performance Evaluation

EvalScope provides a stress-testing tool focused on large language models, which can be customized to support various dataset formats and different API protocol formats.

Reference: Performance Testing πŸ“– User Guide
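At its core, a stress test reduces to simple throughput and latency arithmetic over completed requests; a minimal illustrative calculation (made-up numbers, not output of the perf tool):

```python
# One (wall_seconds, output_tokens) tuple per completed request
# during a stress-test window. Numbers are invented for illustration.
results = [(2.0, 100), (2.5, 120), (1.5, 80)]
window_seconds = 6.0  # wall-clock duration of the whole test window

total_tokens = sum(tokens for _, tokens in results)
throughput = total_tokens / window_seconds              # aggregate tokens/s
avg_latency = sum(sec for sec, _ in results) / len(results)  # mean request latency
print(throughput, avg_latency)  # 50.0 2.0
```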

Output example

Supports wandb for recording results

Supports swanlab for recording results

Supports Speed Benchmark

It supports speed testing and provides speed benchmarks similar to those found in the official Qwen reports:

Speed Benchmark Results:
+---------------+-----------------+----------------+
| Prompt Tokens | Speed(tokens/s) | GPU Memory(GB) |
+---------------+-----------------+----------------+
|       1       |      50.69      |      0.97      |
|     6144      |      51.36      |      1.23      |
|     14336     |      49.93      |      1.59      |
|     30720     |      49.56      |      2.34      |
+---------------+-----------------+----------------+
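As a back-of-the-envelope reading of the table, the near-linear GPU memory growth with prompt length implies a per-token cache cost, which can be estimated from the two endpoints (illustrative arithmetic on the sample numbers above, not an official figure):

```python
# Two endpoints from the speed-benchmark table: (prompt_tokens, gpu_memory_gb)
p_short = (1, 0.97)
p_long = (30720, 2.34)

# Slope of the memory-vs-prompt-length line, i.e. incremental cost per token
gb_per_token = (p_long[1] - p_short[1]) / (p_long[0] - p_short[0])
kb_per_token = gb_per_token * 1024 * 1024
print(round(kb_per_token, 1))  # roughly 46.8 KB of cache per prompt token
```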

πŸ–ŠοΈ Custom Dataset Evaluation

EvalScope supports custom dataset evaluation. For detailed information, please refer to the Custom Dataset Evaluation πŸ“– User Guide.

🏟️ Arena Mode

Arena mode evaluates multiple candidate models through pairwise battles; you can use either the AI Enhanced Auto-Reviewer (AAR) automatic evaluation process or manual evaluation to obtain the evaluation report.
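Pairwise battle outcomes are typically aggregated into a model ranking with an Elo-style rating system; a minimal illustrative update rule (a generic sketch, not EvalScope's exact scoring code):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo rating update. score_a is 1.0 if A wins, 0.5 for a tie, 0.0 if A loses."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta


# Both models start at 1000; model A wins one battle.
ra, rb = elo_update(1000, 1000, 1.0)
print(round(ra), round(rb))  # 1016 984
```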

Refer to: Arena Mode πŸ“– User Guide

πŸ‘·β€β™‚οΈ Contribution

EvalScope, as the official evaluation tool of ModelScope, is continuously optimizing its benchmark evaluation features! We invite you to refer to the Contribution Guide to easily add your own evaluation benchmarks and share your contributions with the community. Let’s work together to support the growth of EvalScope and make our tools even better! Join us now!

πŸ”œ Roadmap

Star History
