Build an online OPT service using Colossal-AI in 5 minutes

Introduction

This tutorial shows how to build your own OPT service with the help of Colossal-AI.

Colossal-AI Inference Overview

Colossal-AI provides Energon-AI, an inference and serving subsystem built upon Colossal-AI.

Basic Usage:

  1. Download OPT model

To launch the distributed inference service quickly, you can download the OPT-125M checkpoint from here. Details for loading other model sizes are available here.

  2. Prepare a prebuilt service image

Pull a prebuilt Docker image from Docker Hub with Colossal-AI inference installed.

docker pull hpcaitech/energon-ai:latest

  3. Launch an HTTP service

To launch a service, we need to provide Python scripts that describe the model type, the related configurations, and the settings for the HTTP service. We have provided a set of examples; this tutorial uses the OPT example. The entry point of the service is the bash script server.sh. The service configuration lives in opt_config.py, which defines the model type, the checkpoint file path, the parallel strategy, and the HTTP settings. You can adapt it to your own case. For example, set the model class to opt_125M and point checkpoint at the correct path as follows.

model_class = opt_125M
checkpoint = 'your_file_path'

Set the tensor parallelism degree to match the number of GPUs you use, as in the example below.
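For instance, with two GPUs the tensor-parallel field in opt_config.py would be set to 2. The field name tp_init_size below is an assumption based on the shipped example configs; keep whatever name your config file already uses.

tp_init_size = 2  # assumed field name: tensor parallelism degree, set equal to your GPU count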

Now, we can launch a service using Docker. Map your local checkpoint path and the directory containing your configs to the container paths /model_checkpoint and /config.

export CHECKPOINT_DIR="your_opt_checkpoint_path"
# the ${CONFIG_DIR} must contain a server.sh file as the entry of service
export CONFIG_DIR="config_file_path"

docker run --gpus all --rm -it -p 8020:8020 -v ${CHECKPOINT_DIR}:/model_checkpoint -v ${CONFIG_DIR}:/config --ipc=host energonai:latest

Then open https://[IP-ADDRESS]:8020/docs# in your browser to try it out!
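Once the service is up, the interactive docs page lists the exact endpoints and request schema. As a minimal sketch of a client, the snippet below sends a prompt from Python; the /generation path and the JSON fields are assumptions for illustration, so take the real names from the /docs page of your deployment.

import requests

# Hypothetical request against the OPT service. The endpoint path and JSON
# fields are assumptions; consult http://[IP-ADDRESS]:8020/docs# for the
# schema actually exposed by your service version.
response = requests.post(
    "http://localhost:8020/generation",
    json={
        "prompt": "Introduce some landmarks in Beijing",
        "max_tokens": 64,
        "top_k": 50,
        "top_p": 0.9,
        "temperature": 0.7,
    },
    timeout=60,
)
print(response.json())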

Advanced Features Usage:

  1. Batching Optimization

To use our advanced batching technique, which collects multiple queries and serves them in batches, set executor_max_batch_size to the maximum batch size. Note that only decoding tasks with the same top_k, top_p, and temperature can be batched together.

executor_max_batch_size = 16

All queries are submitted to a FIFO queue. Consecutive queries whose number of decoding steps is less than or equal to that of the query at the head of the queue can be batched together, and left padding is applied to ensure correctness. executor_max_batch_size should not be too large, so that batching does not increase latency. For opt-30b, executor_max_batch_size=16 may be a good choice, while for opt-175b, executor_max_batch_size=4 may be better. The sketch below illustrates the batching rule.
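This is not the Energon-AI implementation; the Query fields and the queue here are made up purely to show how consecutive queries are grouped under the rule above.

from collections import deque
from dataclasses import dataclass

@dataclass
class Query:
    max_new_tokens: int  # number of decoding steps requested
    top_k: int
    top_p: float
    temperature: float

def take_batch(queue: deque, max_batch_size: int) -> list:
    # Pop a batch from a FIFO queue: consecutive queries join the batch only
    # if they share the sampling settings of the head query and request no
    # more decoding steps than it does.
    if not queue:
        return []
    head = queue.popleft()
    batch = [head]
    while queue and len(batch) < max_batch_size:
        nxt = queue[0]
        same_sampling = (nxt.top_k, nxt.top_p, nxt.temperature) == (head.top_k, head.top_p, head.temperature)
        if same_sampling and nxt.max_new_tokens <= head.max_new_tokens:
            batch.append(queue.popleft())
        else:
            break
    return batch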

  2. Cache Optimization

You can cache several recently served query results for each independent serving process. Set cache_size and cache_list_size in config.py. cache_size is the number of queries cached, and cache_list_size is the number of results stored for each query; a random cached result is returned when a cached query is hit. When the cache is full, LRU eviction is applied to the cached queries. cache_size=0 means no cache is applied. A toy sketch of this behaviour follows the config below.

cache_size = 50
cache_list_size = 2
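
As a toy illustration of the behaviour described above (not the library's cache), the sketch below keeps up to cache_size queries with up to cache_list_size results each, returns a random stored result on a hit, and evicts the least recently used query when full.

import random
from collections import OrderedDict

class QueryCache:
    def __init__(self, cache_size: int, cache_list_size: int):
        self.cache_size = cache_size
        self.cache_list_size = cache_list_size
        self._store = OrderedDict()  # query -> list of cached results

    def get(self, query: str):
        if self.cache_size == 0 or query not in self._store:
            return None                            # cache disabled or miss
        self._store.move_to_end(query)             # mark as recently used
        return random.choice(self._store[query])   # return a random cached result

    def put(self, query: str, result: str):
        if self.cache_size == 0:
            return
        results = self._store.setdefault(query, [])
        if len(results) < self.cache_list_size:    # keep at most cache_list_size results per query
            results.append(result)
        self._store.move_to_end(query)
        while len(self._store) > self.cache_size:  # LRU eviction of whole queries
            self._store.popitem(last=False)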