Integrate SetFit with API Inference + Tests by tomaarsen · Pull Request #359 · huggingface/api-inference-community
Hello!
Pull Request overview
- Integrate SetFit into API Inference
- Copied `docker_images/common`.
- Edited `docker_images/setfit/requirements.txt`, `docker_images/setfit/main.py` and `docker_images/setfit/pipelines/token_classification.py` (a rough sketch of the resulting pipeline is shown after this list).
- Removed unused pipeline files, tests and imports.
- Edited `setfit/tests/test_api.py` with a MiniLM model for quick tests.
- Edited `tests/test_dockers.py` with a new test for `def test_setfit(self)`.
- Added two workflows (`python-api-setfit-cd.yaml` and `python-api-setfit.yaml`)
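For reviewers who haven't used SetFit before: the pipeline essentially wraps a `SetFitModel` and maps its class probabilities onto the usual text-classification response format. The sketch below is only illustrative; the class name, constructor signature and return format are assumptions modelled on the other docker images in this repo, and only `SetFitModel.from_pretrained` and `predict_proba` are actual SetFit API calls.

```python
# Illustrative sketch only: the class name, constructor signature and return
# format are assumptions modelled on the common api-inference-community
# pipeline layout, not the exact code in this PR.
from typing import Dict, List

from setfit import SetFitModel


class TextClassificationPipeline:
    def __init__(self, model_id: str):
        # Load the SetFit model (sentence-transformers body + classification head).
        self.model = SetFitModel.from_pretrained(model_id)

    def __call__(self, inputs: str) -> List[List[Dict[str, float]]]:
        # predict_proba returns one probability per class for each input text.
        probabilities = self.model.predict_proba([inputs])[0]
        # Use stored label names when available, otherwise fall back to class indices.
        labels = getattr(self.model, "labels", None) or [
            str(idx) for idx in range(len(probabilities))
        ]
        return [
            [
                {"label": label, "score": float(score)}
                for label, score in zip(labels, probabilities)
            ]
        ]
```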
Details
SetFit is a library for text classification, with ~1200 models on the Hub at the time of writing. With a v1.0.0 release upcoming, this seems like a good time to add widget support.
I've used my tomaarsen/setfit-all-MiniLM-L6-v2-sst2-32-shot model throughout the tests. This model is based on the sentence-transformers/all-MiniLM-L6-v2 embedding model, which should be fairly small (~90MB).
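For reference, loading and querying that model directly looks roughly like the snippet below; `SetFitModel.from_pretrained` and `predict` are standard SetFit calls, and the exact label strings depend on how the model was saved.

```python
# Quick smoke test of the model used in the test suite.
from setfit import SetFitModel

model = SetFitModel.from_pretrained("tomaarsen/setfit-all-MiniLM-L6-v2-sst2-32-shot")
predictions = model.predict([
    "a deliciously charming, and often hilarious, romantic comedy",
    "the plot is paper thin and the acting is wooden",
])
print(predictions)  # e.g. ["positive", "negative"], depending on the stored labels
```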
To the best of my knowledge, I've followed all of the steps in the README and the integration documentation. Please let me know if you need anything else from me at this point!
Presumably I don't need to mess around with #158?
Related PRs: