Deploy an MLflow Model to SageMaker — Amazon SageMaker Examples 1.0.0 documentation
This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
Setup environment
Upgrade SageMaker Python SDK
!pip install --upgrade --quiet 'sagemaker>=2.215.0'
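If the upgrade replaced an already-imported package, restart the kernel first. As a quick sanity check (an addition, not part of the original notebook), confirm the minimum version is in place:

import sagemaker
print(sagemaker.__version__)  # should print 2.215.0 or later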
Install MLflow and AWS MLflow plugin
!pip install mlflow==2.13.2 sagemaker-mlflow==0.1.0
Import necessary libraries
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sklearn.estimator import SKLearn
from sagemaker.serve import SchemaBuilder
from sagemaker.serve import ModelBuilder
from sagemaker.serve.mode.function_pointers import Mode

import mlflow
from mlflow import MlflowClient

import boto3
import numpy as np
import pandas as pd
import os
Declare some variables used later
# Define session, role, and region so we can
# perform any SageMaker tasks we need
sagemaker_session = sagemaker.Session()
role = get_execution_role()
region = sagemaker_session.boto_region_name
# S3 prefix for the training dataset to be uploaded to
prefix = "DEMO-scikit-iris"
# Provide the ARN of the Tracking Server that you want to track your training job with
tracking_server_arn = "your tracking server arn here"
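If you don't have a Tracking Server yet, one can be created with boto3. This is a minimal sketch, assuming your execution role has the required MLflow permissions; the server name is a placeholder, and creation typically takes around 25 minutes:

sm_client = boto3.client("sagemaker")
response = sm_client.create_mlflow_tracking_server(
    TrackingServerName="my-tracking-server",  # placeholder name
    ArtifactStoreUri=f"s3://{sagemaker_session.default_bucket()}/mlflow-artifacts",
    RoleArn=role,
)
tracking_server_arn = response["TrackingServerArn"]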
Get some training data
Let’s download and save the Iris dataset
os.makedirs("./data", exist_ok=True)
s3_client = boto3.client("s3")
s3_client.download_file(
    f"sagemaker-example-files-prod-{region}", "datasets/tabular/iris/iris.data", "./data/iris.csv"
)
df_iris = pd.read_csv("./data/iris.csv", header=None)
df_iris[4] = df_iris[4].map({"Iris-setosa": 0, "Iris-versicolor": 1, "Iris-virginica": 2})
iris = df_iris[[4, 0, 1, 2, 3]].to_numpy()
np.savetxt("./data/iris.csv", iris, delimiter=",", fmt="%1.1f, %1.3f, %1.3f, %1.3f, %1.3f")
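As a quick sanity check (an addition, not part of the original flow), you can read the rewritten file back and confirm the label now sits in the first column:

check = pd.read_csv("./data/iris.csv", header=None)
print(check.shape)    # expect (150, 5): label followed by four features
print(check.head(3))  # first column holds the numeric class label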
And now let’s upload that data to S3
WORK_DIRECTORY = "data"
train_input = sagemaker_session.upload_data(
    WORK_DIRECTORY, key_prefix="{}/{}".format(prefix, WORK_DIRECTORY)
)
Write your training script
Let’s write the code to train a decision tree model using the scikit-learn framework.
%%writefile training_code/train.py
from __future__ import print_function

import argparse
import joblib
import os
import pandas as pd

from sklearn import tree
import mlflow

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # Hyperparameters are described here. In this simple example we are just including one hyperparameter.
    # Note: scikit-learn requires max_leaf_nodes to be None or an integer greater than 1,
    # so we default to None (unrestricted growth) rather than a sentinel value it would reject.
    parser.add_argument('--max_leaf_nodes', type=int, default=None)

    # SageMaker specific arguments. Defaults are set in the environment variables.
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])

    args = parser.parse_args()

    # Take the set of files and read them all into a single pandas dataframe
    input_files = [os.path.join(args.train, file) for file in os.listdir(args.train)
                   if os.path.isfile(os.path.join(args.train, file))]
    if len(input_files) == 0:
        raise ValueError(('There are no files in {}.\n' +
                          'This usually indicates that the channel ({}) was incorrectly specified,\n' +
                          'the data specification in S3 was incorrectly specified or the role specified\n' +
                          'does not have permission to access the data.').format(args.train, "train"))
    raw_data = [pd.read_csv(file, header=None, engine="python") for file in input_files]
    train_data = pd.concat(raw_data)

    # Set the Tracking Server URI using the ARN of the Tracking Server you created
    mlflow.set_tracking_uri(os.environ['MLFLOW_TRACKING_ARN'])

    # Enable autologging in MLflow
    mlflow.autolog()

    # Labels are in the first column
    train_y = train_data.iloc[:, 0]
    train_X = train_data.iloc[:, 1:]

    # Here we support a single hyperparameter, 'max_leaf_nodes'. Note that you can add as many
    # as your training may require in the ArgumentParser above.
    max_leaf_nodes = args.max_leaf_nodes

    # Now use scikit-learn's decision tree classifier to train the model.
    clf = tree.DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes)
    clf = clf.fit(train_X, train_y)

    # Save the trained model so SageMaker can package it
    joblib.dump(clf, os.path.join(args.model_dir, "model.joblib"))

    # Register the model with MLflow
    run_id = mlflow.last_active_run().info.run_id
    artifact_path = "model"
    model_uri = "runs:/{run_id}/{artifact_path}".format(run_id=run_id, artifact_path=artifact_path)
    model_details = mlflow.register_model(model_uri=model_uri, name="sm-job-experiment-model")
Since we’re using MLflow in our training script, let’s make sure the container installs mlflow along with our MLflow plugin before running our training script. We can do this by creating a requirements.txt file and putting it in the same directory as our training script.
%%writefile training_code/requirements.txt
mlflow==2.13.2
sagemaker-mlflow==0.1.0
cloudpickle==2.2.1  # Required for SageMaker Python SDK
SageMaker Training and MLflow
Train your decision tree model by launching a SageMaker training job.
sklearn = SKLearn(
    entry_point="train.py",
    source_dir="training_code",
    framework_version="1.2-1",
    instance_type="ml.c4.xlarge",
    role=role,
    sagemaker_session=sagemaker_session,
    hyperparameters={"max_leaf_nodes": 30},
    keep_alive_period_in_seconds=3600,
    environment={"MLFLOW_TRACKING_ARN": tracking_server_arn},
)
sklearn.fit({"train": train_input})
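Once the job finishes, autologging will have recorded the run’s parameters and metrics to the Tracking Server. As a hedged sketch (an addition, not part of the original notebook), you can pull the most recent run programmatically:

mlflow.set_tracking_uri(tracking_server_arn)
runs = mlflow.search_runs(search_all_experiments=True, order_by=["start_time DESC"], max_results=1)
print(runs[["run_id", "status"]])  # the autologged run from the training job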
Access the model in MLflow UI and SageMaker Studio UI
After the execution completes, you can find the trained model in both the MLflow UI and SageMaker Studio UI.
To view the model in the MLflow UI, select the “Models” tab.
To view the model in SageMaker Studio UI, you will need to navigate to SageMaker Studio:
- Choose a domain and launch Studio from one of the user profiles associated with it
- Select “Models” in the menu to see the SageMaker Model Registry. From here you will see your sm-job-experiment-model model (you can also list it programmatically, as shown below).
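If you prefer the API over the console, here is a minimal sketch using boto3, assuming automatic model registration created a model package group with the same name in the SageMaker Model Registry:

sm_client = boto3.client("sagemaker")
packages = sm_client.list_model_packages(ModelPackageGroupName="sm-job-experiment-model")
for package in packages["ModelPackageSummaryList"]:
    print(package["ModelPackageArn"])  # one entry per registered model version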
Deploy MLflow Model to SageMaker
mlflow.set_tracking_uri(tracking_server_arn)
client = MlflowClient()
registered_model = client.get_registered_model(name="sm-job-experiment-model")
source_path = registered_model.latest_versions[0].source
Artifact URI of the model
source_path
Define the schema of the sklearn model
ModelBuilder requires a definition of the model schema, i.e., the input and output of the model. In this case the input is a 1x4 vector (the four iris features) and the output is an integer class label.
sklearn_input = np.array([1.0, 2.0, 3.0, 4.0]).reshape(1, -1)
sklearn_output = 1
sklearn_schema_builder = SchemaBuilder(
    sample_input=sklearn_input,
    sample_output=sklearn_output,
)
Build and deploy the model
Create model builder with the schema builder.
model_builder = ModelBuilder(
    mode=Mode.SAGEMAKER_ENDPOINT,
    schema_builder=sklearn_schema_builder,
    role_arn=role,
    model_metadata={"MLFLOW_MODEL_PATH": source_path},
)
built_model = model_builder.build()
predictor = built_model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
Inference on Deployed Model
predictor.predict(sklearn_input)
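The endpoint returns the numeric class label used during training. As a small usage example (the exact return shape depends on the serializer, hence the defensive conversion), you can map it back to a species name with the same mapping applied when preparing the data:

species = {0: "Iris-setosa", 1: "Iris-versicolor", 2: "Iris-virginica"}
prediction = predictor.predict(sklearn_input)
print(species[int(np.asarray(prediction).ravel()[0])])  # e.g. "Iris-setosa"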
Cleanup Resources
sagemaker_session.delete_model(model_name=built_model.name)
sagemaker_session.delete_endpoint_config(endpoint_config_name=built_model.endpoint_name)
sagemaker_session.delete_endpoint(endpoint_name=built_model.endpoint_name)
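The cells above remove the SageMaker endpoint resources. If you also want to clean up the MLflow side, here is a hedged sketch; note that this permanently deletes the registered model and all of its versions:

# Optional: remove the registered model from the Tracking Server as well
client.delete_registered_model(name="sm-job-experiment-model")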
Notebook CI Test Results
This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.