Torchserve support for PyTorch Inference by dk19y · Pull Request #58 · aws/sagemaker-inference-toolkit
Torchserve support for PyTorch Inference
This is one of three PRs, spanning sagemaker-inference-toolkit, sagemaker-pytorch-inference-toolkit, and the DLC repo, that together add support for TorchServe in SageMaker.
The other two PRs, which depend on this one: aws/sagemaker-pytorch-inference-toolkit#79 / aws/deep-learning-containers#347
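For context, here is a minimal sketch of how the consuming PyTorch toolkit might launch TorchServe through this package. The module and function names (`torchserve`, `start_torchserve`) and the handler-service path are illustrative assumptions, not necessarily the API this PR introduces:

```python
# Hypothetical usage sketch -- module/function names below are
# assumptions for illustration, not confirmed against this PR's code.
from sagemaker_inference import torchserve

# Dotted path to the handler service implementing the SageMaker
# inference contract; the real path lives in the consuming toolkit
# (sagemaker-pytorch-inference-toolkit).
HANDLER_SERVICE = "sagemaker_pytorch_serving_container.handler_service"

def main():
    # Start TorchServe, pointing it at the handler service so that
    # invocation requests are routed through the toolkit's handlers.
    torchserve.start_torchserve(handler_service=HANDLER_SERVICE)

if __name__ == "__main__":
    main()
```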
Testing done
- Unit tests
- TorchServe support is primarily consumed by sagemaker-pytorch-inference-toolkit. Integration tests were run against that consuming package to verify that the existing handler contract (sketched after this list) still works. More details in breaking: Change Model server to Torchserve for PyTorch Inference sagemaker-pytorch-inference-toolkit#79
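For reference, a minimal sketch of the inference handler contract those integration tests exercise, following the documented SageMaker PyTorch serving functions (`model_fn`/`input_fn`/`predict_fn`/`output_fn`). The model filename and supported content types here are illustrative assumptions:

```python
# Sketch of the SageMaker PyTorch inference contract that must keep
# working after the switch to TorchServe. Filenames and content types
# are illustrative, not taken from this PR.
import json
import torch

def model_fn(model_dir):
    # Load the serialized model from the model directory.
    model = torch.jit.load(f"{model_dir}/model.pt", map_location="cpu")
    model.eval()
    return model

def input_fn(request_body, content_type):
    # Deserialize the request payload into a tensor.
    if content_type == "application/json":
        return torch.tensor(json.loads(request_body))
    raise ValueError(f"Unsupported content type: {content_type}")

def predict_fn(input_data, model):
    # Run inference without tracking gradients.
    with torch.no_grad():
        return model(input_data)

def output_fn(prediction, accept):
    # Serialize the prediction back to the client.
    if accept == "application/json":
        return json.dumps(prediction.tolist())
    raise ValueError(f"Unsupported accept type: {accept}")
```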
General
- I have read the CONTRIBUTING doc
- I used the commit message format described in CONTRIBUTING
- I have used the regional endpoint when creating S3 and/or STS clients (if appropriate)
- I have updated any necessary documentation, including READMEs
Tests
- I have added tests that prove my fix is effective or that my feature works (if appropriate)
- I have checked that my tests are not configured for a specific region or account (if appropriate)
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.