Scikit-Learn Iris Model using customData — seldon-core documentation

Dependencies

Train locally

import os

import joblib
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline


def main():
    clf = LogisticRegression()
    p = Pipeline([("clf", clf)])
    print("Training model...")
    p.fit(X, y)
    print("Model trained!")

    filename_p = "IrisClassifier.sav"
    print("Saving model in %s" % filename_p)
    joblib.dump(p, filename_p)
    print("Model saved!")


if __name__ == "__main__":
    print("Loading iris data set...")
    iris = datasets.load_iris()
    X, y = iris.data, iris.target
    print("Dataset loaded!")
    main()
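The three fields of the IrisPredictResponse message defined below correspond to the columns of predict_proba, whose order follows clf.classes_. A quick self-contained check (independent of the saved model file):

```python
from sklearn import datasets
from sklearn.linear_model import LogisticRegression

iris = datasets.load_iris()
clf = LogisticRegression(max_iter=200).fit(iris.data, iris.target)

# predict_proba columns follow clf.classes_ ([0, 1, 2]), which map to
# iris.target_names: setosa, versicolor, virginica.
print(clf.classes_.tolist())
print([str(n) for n in iris.target_names])
proba = clf.predict_proba(iris.data[:1])[0]
print(abs(proba.sum() - 1.0) < 1e-9)
```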

Custom Protobuf Specification

First, we need to define the custom protobuf specification for our prediction messages.

%%writefile iris.proto

syntax = "proto3";

package iris;

message IrisPredictRequest {
  float sepal_length = 1;
  float sepal_width = 2;
  float petal_length = 3;
  float petal_width = 4;
}

message IrisPredictResponse {
  float setosa = 1;
  float versicolor = 2;
  float virginica = 3;
}

Custom Protobuf Compilation

We need to compile the custom protobuf for Python so that we can unpack the customData field passed to our predict method later on.

!python -m grpc_tools.protoc --python_out=./ --proto_path=. iris.proto
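The customData field of a SeldonMessage is a google.protobuf.Any, so messages move in and out of it via Pack and Unpack. A minimal illustration using the standard FloatValue wrapper type, so the snippet runs without the generated iris_pb2 module:

```python
from google.protobuf.any_pb2 import Any
from google.protobuf.wrappers_pb2 import FloatValue

# Pack a concrete message into an Any — the same thing Pack does to an
# IrisPredictRequest before it is sent as customData.
packed = Any()
packed.Pack(FloatValue(value=1.5))

# Unpack on the receiving side; Unpack returns True when the type matches.
unpacked = FloatValue()
ok = packed.Unpack(unpacked)
print(ok, unpacked.value)
```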

gRPC test

Wrap model using s2i

!s2i build . seldonio/seldon-core-s2i-python37-ubi8:1.7.0-dev seldonio/sklearn-iris-customdata:0.1

Serve the model locally

!docker run --name "iris_predictor" -d --rm -p 5000:5000 seldonio/sklearn-iris-customdata:0.1

Test using custom protobuf payload

import grpc
from iris_pb2 import IrisPredictRequest, IrisPredictResponse

from seldon_core.proto import prediction_pb2, prediction_pb2_grpc

channel = grpc.insecure_channel("localhost:5000")
stub = prediction_pb2_grpc.ModelStub(channel)

iris_request = IrisPredictRequest(
    sepal_length=7.233, sepal_width=4.652, petal_length=7.39, petal_width=0.324
)

seldon_request = prediction_pb2.SeldonMessage()
seldon_request.customData.Pack(iris_request)

response = stub.Predict(seldon_request)

iris_response = IrisPredictResponse()
response.customData.Unpack(iris_response)

print(iris_response)

Stop serving model

!docker rm iris_predictor --force

Setup Seldon Core

Use the setup notebook to set up Seldon Core with an ingress, either Ambassador or Istio.

Then port-forward to that ingress on localhost:8003 in a separate terminal.
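For example (a sketch — the exact pod labels and namespaces depend on how the ingress was installed in the setup notebook), for Ambassador:

!kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080

or for Istio:

!kubectl port-forward -n istio-system svc/istio-ingressgateway 8003:80

Then create and switch to a namespace for the deployment: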

!kubectl create namespace seldon

!kubectl config set-context $(kubectl config current-context) --namespace=seldon

Deploy your Seldon Model

We first create a configuration file:

%%writefile sklearn_iris_customdata_deployment.yaml

apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: seldon-deployment-example
spec:
  name: sklearn-iris-deployment
  predictors:
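The predictors list is truncated in this excerpt. A typical single-predictor completion for the image built earlier (names and replica count here are illustrative; the endpoint type must be GRPC for the customData calls below) would be:

  - componentSpecs:
      - spec:
          containers:
            - image: seldonio/sklearn-iris-customdata:0.1
              name: sklearn-iris-classifier
    graph:
      children: []
      endpoint:
        type: GRPC
      name: sklearn-iris-classifier
      type: MODEL
    name: sklearn-iris-predictor
    replicas: 1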

Run the model in our cluster

Apply the Seldon Deployment configuration file we just created

!kubectl create -f sklearn_iris_customdata_deployment.yaml

Check that the model has been deployed

!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=seldon-deployment-example -o jsonpath='{.items[0].metadata.name}')

Test by sending prediction calls

As before, an IrisPredictRequest is sent via the customData field, this time through the cluster ingress.

iris_request = IrisPredictRequest(
    sepal_length=7.233, sepal_width=4.652, petal_length=7.39, petal_width=0.324
)

seldon_request = prediction_pb2.SeldonMessage()
seldon_request.customData.Pack(iris_request)

channel = grpc.insecure_channel("localhost:8003")
stub = prediction_pb2_grpc.SeldonStub(channel)

metadata = [("seldon", "seldon-deployment-example"), ("namespace", "seldon")]

response = stub.Predict(request=seldon_request, metadata=metadata)

iris_response = IrisPredictResponse()
response.customData.Unpack(iris_response)

print(iris_response)

Clean up our deployment

!kubectl delete -f sklearn_iris_customdata_deployment.yaml