ray.tune.Tuner — Ray 2.45.0
class ray.tune.Tuner(trainable: str | Callable | Type[Trainable] | BaseTrainer | None = None, *, param_space: Dict[str, Any] | None = None, tune_config: TuneConfig | None = None, run_config: RunConfig | None = None, _tuner_kwargs: Dict | None = None, _tuner_internal: TunerInternal | None = None, _entrypoint: AirEntrypoint = AirEntrypoint.TUNER)[source]
Tuner is the recommended way of launching hyperparameter tuning jobs with Ray Tune.
Parameters:
- trainable – The trainable to be tuned.
- param_space – Search space of the tuning job. See Working with Tune Search Spaces.
- tune_config – Tuning-specific configs, such as setting custom search algorithms and trial scheduling algorithms.
- run_config – Job-level run configuration, which includes configs for persistent storage, checkpointing, fault tolerance, etc.
Usage pattern:
import ray.tune

def trainable(config):
    # Your training logic here
    ray.tune.report({"accuracy": 0.8})

tuner = Tuner(
    trainable=trainable,
    param_space={"lr": ray.tune.grid_search([0.001, 0.01])},
    run_config=ray.tune.RunConfig(name="my_tune_run"),
)
results = tuner.fit()
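As a conceptual illustration of what grid_search does to a param_space, the following sketch expands grid-searched keys into one config per trial. This is not Ray Tune's actual implementation; grid_search here is a hypothetical stand-in for ray.tune.grid_search, which wraps the candidate values.

```python
from itertools import product

def grid_search(values):
    # Hypothetical stand-in for ray.tune.grid_search: wraps candidates.
    return {"grid_search": list(values)}

def expand_param_space(param_space):
    """Expand every grid_search entry into the cross product of configs."""
    grid_keys = [k for k, v in param_space.items()
                 if isinstance(v, dict) and "grid_search" in v]
    fixed = {k: v for k, v in param_space.items() if k not in grid_keys}
    combos = product(*(param_space[k]["grid_search"] for k in grid_keys))
    return [{**fixed, **dict(zip(grid_keys, combo))} for combo in combos]

param_space = {
    "lr": grid_search([0.001, 0.01]),
    "batch_size": grid_search([32, 64]),
    "epochs": 5,  # constant, shared by every trial
}
trials = expand_param_space(param_space)
# 2 lr values x 2 batch sizes -> 4 trial configs, each with epochs=5
```

Each resulting config dict is what a single trial's trainable would receive as its config argument.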
To retry a failed Tune run, you can then do:

tuner = Tuner.restore(results.experiment_path, trainable=trainable)
tuner.fit()
results.experiment_path can be retrieved from the ResultGrid object. It can also be easily seen in the log output from your first run.
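Conceptually, can_restore answers "does this directory hold a restorable experiment?". The sketch below is not Ray's implementation; MARKER is a hypothetical file name standing in for whatever state files Ray Tune actually writes to the experiment directory.

```python
import os
import tempfile

MARKER = "tuner_state.pkl"  # hypothetical marker file, not Ray's real layout

def can_restore(experiment_path: str) -> bool:
    """Return True if the directory looks like a saved experiment."""
    return os.path.isfile(os.path.join(experiment_path, MARKER))

with tempfile.TemporaryDirectory() as d:
    print(can_restore(d))               # empty dir: False
    open(os.path.join(d, MARKER), "w").close()
    print(can_restore(d))               # marker present: True
```

A check like this is cheap, which is why it is worth calling before attempting a full restore.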
PublicAPI (beta): This API is in beta and may change before becoming stable.
Methods
| Method | Description |
|---|---|
| __init__ | Configure and construct a tune run. |
| can_restore | Checks whether a given directory contains a restorable Tune experiment. |
| fit | Executes the hyperparameter tuning job as configured and returns results. |
| get_results | Get results of a hyperparameter tuning run. |
| restore | Restores a Tuner after a previously failed run. |