DeepSpeed — SkyPilot documentation

Source: examples/deepspeed-multinode

This example shows how to launch a multinode DeepSpeed training job with SkyPilot.

Included files

sky.yaml

Example: a distributed DeepSpeed job (DeepSpeed-Chat) on 2 VMs.

This takes care of constructing a "hostfile" to pass to DeepSpeed.
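
For reference, a DeepSpeed hostfile has one line per node in the form "<ip> slots=<num_gpus>". With this example's two nodes and one GPU each, the generated file would look like this (the IP addresses here are illustrative):

192.168.0.10 slots=1
192.168.0.11 slots=1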

If running on Kubernetes, use the nvidia/cuda:12.1.1-devel-ubuntu20.04 image because DeepSpeed requires nvcc.
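
A quick way to confirm the CUDA toolchain is present is to run nvcc inside the pod:

$ nvcc --version   # should report the CUDA 12.1 toolkit with this image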

Usage:

$ sky launch sky.yaml -r --down -c ds

If running on Kubernetes:

$ sky launch sky.yaml -r --down -c ds --cloud k8s --image-id nvidia/cuda:12.1.1-devel-ubuntu20.04

Optional: after the job starts running, you can log into the two nodes and check gpustat:

$ ssh ds

$ gpustat -i

$ ssh ds-worker1

$ gpustat -i

resources:
  accelerators: A100:1  # GCP, Lambda
  # accelerators: A100-80GB:1  # Azure, GCP, SCP
  # accelerators: A10G:1  # AWS. Will OOM for (1) single_node/run_1.3b_lora.sh (2) multi_node/run_66b.sh.
  # accelerators: T4:1  # AWS, Azure, GCP. Will OOM for (1) single_node/run_1.3b_lora.sh (2) multi_node/run_66b.sh.
  # image_id: docker:nvidia/cuda:12.1.1-devel-ubuntu20.04  # Required for nvcc to be available on Kubernetes pods. Not necessary on most cloud providers.

num_nodes: 2
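
For orientation: SkyPilot injects runtime environment variables on every node of the cluster, and the run section below relies on them. On this 2-node, 1-GPU-per-node cluster they look like:

$ echo "$SKYPILOT_NODE_RANK"           # 0 on the head node, 1 on the other node
$ echo "$SKYPILOT_NODE_IPS"            # newline-separated IPs of both nodes
$ echo "$SKYPILOT_NUM_GPUS_PER_NODE"   # 1, matching the accelerator spec above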

envs:
  MY_VAR_1: "hello"
  MY_VAR_2: "world"
  # List of env vars to propagate to all nodes in deepspeed. If you add an
  # env above, add it to this list.
  DEEPSPEED_ENVS: "MY_VAR_1,MY_VAR_2,SKYPILOT_NODE_RANK"
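
With the values above, the .deepspeed_env file generated by the run section (on the rank-0 node) would contain:

MY_VAR_1="hello"
MY_VAR_2="world"
SKYPILOT_NODE_RANK="0"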

setup: |
  if ! command -v git &> /dev/null
  then
    echo "git is not installed. Installing git..."
    sudo apt-get update
    sudo apt-get install -y git
  fi

  git clone https://github.com/microsoft/DeepSpeedExamples.git || true
  cd DeepSpeedExamples
  git checkout d7c42b4f34df91035e7ed3e0c51500bb53d0bc71

  source ~/deepspeed-venv/bin/activate
  if [ $? -eq 0 ]; then
    echo 'venv exists'
  else
    uv venv ~/deepspeed-venv --seed --python=3.10
    source ~/deepspeed-venv/bin/activate
  fi
  uv pip install deepspeed==0.14.4

  cd applications/DeepSpeed-Chat
  uv pip install -r requirements.txt
  uv pip install transformers==4.44.0 datasets

  # Required by DeepSpeed in multi-node settings.
  # NOTE(skypilot): DeepSpeed uses pdsh to log into each node and calls
  # ninja --version; so it has to be installed system-wide rather than in
  # the above 'deepspeed-venv' virtual environment.
  sudo apt-get update
  sudo apt-get -y install pdsh ninja-build
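
To sanity-check the system-wide install, both tools should resolve on every node (ordinary version checks, shown here for illustration):

$ pdsh -V
$ ninja --version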

run: |
  source ~/deepspeed-venv/bin/activate
  cd DeepSpeedExamples

  # Launch on the first node only.
  if [ "${SKYPILOT_NODE_RANK}" == "0" ]; then
    # Prepare a hostfile.
    HOSTFILE_PATH=/tmp/hostfile.${SKYPILOT_TASK_ID}
    python -c "import os;n_gpus=os.environ['SKYPILOT_NUM_GPUS_PER_NODE'];print('\n'.join([f'{ip} slots={n_gpus}' for ip in os.environ['SKYPILOT_NODE_IPS'].splitlines()]))" > ${HOSTFILE_PATH}

    # Generate .deepspeed_env to propagate env vars to all workers spawned by DeepSpeed.
    echo "Generating .deepspeed_env"
    python3 -c 'import os; f = open(".deepspeed_env", "w"); f.write("\n".join(["{}=\"{}\"".format(var, os.getenv(var, "")) for var in os.getenv("DEEPSPEED_ENVS").split(",")])); f.write("\n"); f.close()'

    echo "*******************************************"
    echo "Hostfile: ${HOSTFILE_PATH}"
    cat ${HOSTFILE_PATH}
    echo "*******************************************"

    ################ Your launch command goes here ################

    cd applications/DeepSpeed-Chat/training/step1_supervised_finetuning/

    # Adapted from: training_scripts/single_node/run_1.3b_lora.sh
    # Note the additional argument: --hostfile $HOSTFILE_PATH
    # Alternatively, you can move HOSTFILE_PATH to /job/hostfile:
    #   sudo mkdir -p /job; sudo chmod 777 /job; mv ${HOSTFILE_PATH} /job/hostfile

    OUTPUT_PATH=./output
    mkdir -p $OUTPUT_PATH
    deepspeed \
      --hostfile $HOSTFILE_PATH \
      main.py \
      --data_path Dahoas/rm-static Dahoas/full-hh-rlhf Dahoas/synthetic-instruct-gptj-pairwise yitingxie/rlhf-reward-datasets \
      --data_split 2,4,4 \
      --model_name_or_path facebook/opt-1.3b \
      --per_device_train_batch_size 8 \
      --per_device_eval_batch_size 8 \
      --max_seq_len 512 \
      --learning_rate 1e-3 \
      --weight_decay 0.1 \
      --num_train_epochs 16 \
      --gradient_accumulation_steps 1 \
      --lr_scheduler_type cosine \
      --num_warmup_steps 0 \
      --seed 1234 \
      --zero_stage 0 \
      --lora_dim 128 \
      --lora_module_name decoder.layers. \
      --only_optimize_lora \
      --deepspeed \
      --output_dir $OUTPUT_PATH \
      | tee $OUTPUT_PATH/training.log
  fi
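
The hostfile one-liner in the run section is dense, so here is an expanded sketch of the same logic as a standalone script (not part of the task YAML; the values in the comments are illustrative):

import os

# SkyPilot sets these on every node of the cluster; a 2-node, 1-GPU-per-node
# cluster would yield something like "192.168.0.10\n192.168.0.11" and "1".
node_ips = os.environ["SKYPILOT_NODE_IPS"]
n_gpus = os.environ["SKYPILOT_NUM_GPUS_PER_NODE"]

# One "<ip> slots=<n>" line per node: the format DeepSpeed's --hostfile expects.
lines = [f"{ip} slots={n_gpus}" for ip in node_ips.splitlines()]
print("\n".join(lines))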