Kubeflow Trainer
Overview
Kubeflow Trainer is a Kubernetes-native project for fine-tuning large language models (LLMs) and for scalable, distributed training of machine learning (ML) models across various frameworks, including PyTorch, JAX, TensorFlow, and others.
You can integrate other ML libraries such as HuggingFace, DeepSpeed, or Megatron-LM with Kubeflow Trainer to orchestrate their ML training on Kubernetes.
Kubeflow Trainer allows you to effortlessly develop your LLMs with the Kubeflow Python SDK and to build Kubernetes-native Training Runtimes with the Kubernetes Custom Resources APIs.
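As an illustration, a distributed training job might be submitted from the Python SDK along these lines. This is a minimal sketch: the `TrainerClient` and `CustomTrainer` names follow the Kubeflow Trainer SDK, but the exact parameters and import paths are assumptions and may differ between releases, so check the official documentation before relying on them.

```python
def train_func():
    """Training function executed on every node of the TrainJob.

    Plain PyTorch (or other framework) code goes here; rank/world-size
    environment variables are expected to be injected by the runtime.
    """
    import os

    rank = int(os.environ.get("RANK", "0"))
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    # ... build model, wrap in DistributedDataParallel, run the training loop ...
    return rank, world_size


def submit_job():
    """Submit train_func as a 2-node TrainJob (requires a cluster and the SDK).

    Hypothetical usage of the Kubeflow Python SDK -- not runnable without
    a Kubernetes cluster that has Kubeflow Trainer installed.
    """
    from kubeflow.trainer import TrainerClient, CustomTrainer

    client = TrainerClient()
    job_name = client.train(
        trainer=CustomTrainer(func=train_func, num_nodes=2),
    )
    return job_name
```

The training function itself stays ordinary single-process code; the Training Runtime is responsible for launching it on each node with the appropriate distributed environment.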
Kubeflow Trainer Introduction
The following KubeCon + CloudNativeCon 2024 talk provides an overview of Kubeflow Trainer capabilities:
Getting Started
Please check the official Kubeflow documentation to install and get started with Kubeflow Trainer.
Community
The following links provide information on how to get involved in the community:
- Join our #kubeflow-trainer Slack channel.
- Attend the bi-weekly AutoML and Training Working Group community meeting.
- Check out who is using Kubeflow Trainer.
Contributing
Please refer to the CONTRIBUTING guide.
Changelog
Please refer to the CHANGELOG.
Kubeflow Training Operator V1
The Kubeflow Trainer project is currently in alpha status, and APIs may change. If you are using Kubeflow Training Operator V1, please refer to this migration document.
The Kubeflow Community will maintain the Training Operator V1 source code at the release-1.9 branch.
You can find the documentation for Kubeflow Training Operator V1 in these guides.
Acknowledgement
This project was originally started as a distributed training operator for TensorFlow and later we merged efforts from other Kubeflow Training Operators to provide a unified and simplified experience for both users and developers. We are very grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions. We'd also like to thank everyone who's contributed to and maintained the original operators.
- PyTorch Operator: list of contributors and maintainers.
- MPI Operator: list of contributors and maintainers.
- XGBoost Operator: list of contributors and maintainers.
- Common library: list of contributors and maintainers.