TensorFlow — sagemaker 2.247.0 documentation

TensorFlow Estimator

class sagemaker.tensorflow.estimator.TensorFlow(py_version=None, framework_version=None, model_dir=None, image_uri=None, distribution=None, compiler_config=None, **kwargs)

Bases: Framework

Handle end-to-end training and deployment of user-provided TensorFlow code.

Initialize a TensorFlow estimator.

Parameters:


This distribution strategy option is available for TensorFlow 2.9 and later in the SageMaker Python SDK v2.xx.yy and later. To learn more about the mirrored strategy for TensorFlow, see TensorFlow Distributed Training in the TensorFlow documentation.
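As a sketch, the multi-worker mirrored strategy is enabled through the distribution dictionary passed to the estimator; the surrounding estimator arguments are omitted here and the structure below is the documented shape of that dictionary:

```python
# Distribution dictionary enabling TensorFlow's MultiWorkerMirroredStrategy.
# Pass this as the `distribution` argument of the TensorFlow estimator.
distribution = {
    "multi_worker_mirrored_strategy": {
        "enabled": True
    }
}
```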
To enable MPI:
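A minimal sketch of the distribution dictionary for MPI; the processes_per_host and custom_mpi_options values are illustrative placeholders:

```python
# Distribution dictionary enabling MPI-based training (e.g. with Horovod).
# The numeric and option values below are illustrative, not recommendations.
distribution = {
    "mpi": {
        "enabled": True,
        "processes_per_host": 2,
        "custom_mpi_options": "--NCCL_DEBUG INFO",
    }
}
```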

To enable parameter server:
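A minimal sketch of the distribution dictionary for the parameter server strategy:

```python
# Distribution dictionary enabling parameter-server-based training.
# Pass this as the `distribution` argument of the TensorFlow estimator.
distribution = {
    "parameter_server": {
        "enabled": True
    }
}
```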

create_model(role=None, vpc_config_override='VPC_CONFIG_DEFAULT', entry_point=None, source_dir=None, dependencies=None, **kwargs)

Creates a TensorFlowModel object to be used for creating SageMaker model entities.

This can be done by deploying the model to a SageMaker endpoint or by starting SageMaker Batch Transform jobs.

Parameters:

Returns:

A TensorFlowModel object.

See TensorFlowModel for full details.

Return type:

sagemaker.tensorflow.model.TensorFlowModel

hyperparameters()

Return hyperparameters used by your custom TensorFlow code during model training.
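Hyperparameters set on the estimator are passed to the entry-point training script as command-line arguments at training time. A minimal sketch of how a script might read them; the --epochs and --batch-size names are hypothetical, not fixed by the SDK:

```python
import argparse

# Hypothetical hyperparameter names. SageMaker passes the estimator's
# hyperparameters to the entry-point script as command-line arguments,
# so a plain argparse parser is enough to consume them.
parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=1)
parser.add_argument("--batch-size", type=int, default=32)

# Simulated argv, as SageMaker would supply it for
# hyperparameters={"epochs": 10, "batch-size": 64}.
args = parser.parse_args(["--epochs", "10", "--batch-size", "64"])
```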

transformer(instance_count, instance_type, strategy=None, assemble_with=None, output_path=None, output_kms_key=None, accept=None, env=None, max_concurrent_transforms=None, max_payload=None, tags=None, role=None, volume_kms_key=None, entry_point=None, vpc_config_override='VPC_CONFIG_DEFAULT', enable_network_isolation=None, model_name=None)

Return a Transformer that uses a SageMaker Model based on the training job.

It reuses the SageMaker Session and base job name used by the Estimator.

Parameters:

TensorFlow Training Compiler Configuration

class sagemaker.tensorflow.TrainingCompilerConfig(enabled=True, debug=False)

Bases: TrainingCompilerConfig

The SageMaker Training Compiler configuration class.

This class initializes a TrainingCompilerConfig instance.

Amazon SageMaker Training Compiler is a feature of SageMaker Training that speeds up training jobs by optimizing model execution graphs.

You can compile TensorFlow models by passing an instance of this configuration class to the compiler_config parameter of the TensorFlow estimator.

Parameters:

Example: The following code shows the basic usage of the sagemaker.tensorflow.TrainingCompilerConfig() class to run a TensorFlow training job with the compiler.

from sagemaker.tensorflow import TensorFlow, TrainingCompilerConfig

tensorflow_estimator = TensorFlow(
    ...
    compiler_config=TrainingCompilerConfig()
)

SUPPORTED_INSTANCE_CLASS_PREFIXES = ['p3', 'p3dn', 'g4dn', 'p4d', 'g5']

MIN_SUPPORTED_VERSION = '2.9'

MAX_SUPPORTED_VERSION = '2.11'

classmethod validate(estimator)

Checks if SageMaker Training Compiler is configured correctly.

Parameters:

estimator (sagemaker.tensorflow.estimator.TensorFlow) – An estimator object. If SageMaker Training Compiler is enabled, this method validates whether the estimator is configured to be compatible with Training Compiler.

Raises:

ValueError – Raised if the requested configuration is not compatible with SageMaker Training Compiler.
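A simplified illustration of the kind of instance-type check implied by the SUPPORTED_INSTANCE_CLASS_PREFIXES constant above; this is not the SDK's actual validate() implementation, and the helper function name is hypothetical:

```python
# Simplified illustration only; not the SDK's actual validate() logic.
# SageMaker instance types follow the "ml.<class>.<size>" naming scheme.
SUPPORTED_INSTANCE_CLASS_PREFIXES = ["p3", "p3dn", "g4dn", "p4d", "g5"]

def is_supported_instance(instance_type):
    """Return True if the instance class is in the supported list."""
    parts = instance_type.split(".")
    if len(parts) < 3 or parts[0] != "ml":
        return False
    return parts[1] in SUPPORTED_INSTANCE_CLASS_PREFIXES
```

A configuration failing such a check is the sort of case where validate() raises ValueError.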

TensorFlow Serving Model

class sagemaker.tensorflow.model.TensorFlowModel(model_data, role=None, entry_point=None, image_uri=None, framework_version=None, container_log_level=None, predictor_cls=<class 'sagemaker.tensorflow.model.TensorFlowPredictor'>, **kwargs)

Bases: FrameworkModel

A FrameworkModel implementation for inference with TensorFlow Serving.

Initialize a Model.

Parameters:

Tip

You can find additional parameters for initializing this class at FrameworkModel and Model.

LOG_LEVEL_PARAM_NAME = 'SAGEMAKER_TFS_NGINX_LOGLEVEL'

LOG_LEVEL_MAP = {10: 'debug', 20: 'info', 30: 'warn', 40: 'error', 50: 'crit'}
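The keys of LOG_LEVEL_MAP are Python's standard logging level values, so the container_log_level passed to the model maps directly to an NGINX log-level name. A small sketch:

```python
import logging

# The map's keys match Python's standard logging levels
# (logging.DEBUG == 10, logging.INFO == 20, ...), and the values are
# the NGINX log-level names used by the TensorFlow Serving container.
LOG_LEVEL_MAP = {10: "debug", 20: "info", 30: "warn", 40: "error", 50: "crit"}

nginx_level = LOG_LEVEL_MAP[logging.WARNING]
```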

LATEST_EIA_VERSION = [2, 3]

register(content_types=None, response_types=None, inference_instances=None, transform_instances=None, model_package_name=None, model_package_group_name=None, image_uri=None, model_metrics=None, metadata_properties=None, marketplace_cert=False, approval_status=None, description=None, drift_check_baselines=None, customer_metadata_properties=None, domain=None, sample_payload_url=None, task=None, framework=None, framework_version=None, nearest_model_name=None, data_input_configuration=None, skip_model_validation=None, source_uri=None, model_card=None, model_life_cycle=None)

Creates a model package for creating SageMaker models or listing on Marketplace.

Parameters:

Returns:

A sagemaker.model.ModelPackage instance.

deploy(initial_instance_count=None, instance_type=None, serializer=None, deserializer=None, accelerator_type=None, endpoint_name=None, tags=None, kms_key=None, wait=True, data_capture_config=None, async_inference_config=None, serverless_inference_config=None, volume_size=None, model_data_download_timeout=None, container_startup_health_check_timeout=None, inference_recommendation_id=None, explainer_config=None, update_endpoint=False, **kwargs)

Deploy a TensorFlow Model to a SageMaker Endpoint.

Parameters:

update_endpoint (bool | None) –

prepare_container_def(instance_type=None, accelerator_type=None, serverless_inference_config=None, accept_eula=None, model_reference_arn=None)

Prepare the container definition.

Parameters:

Returns:

A container definition for deploying a Model to an Endpoint.

serving_image_uri(region_name, instance_type, accelerator_type=None, serverless_inference_config=None)

Create a URI for the serving image.

Parameters:

Returns:

The appropriate image URI based on the given parameters.

Return type:

str

TensorFlow Serving Predictor

class sagemaker.tensorflow.model.TensorFlowPredictor(endpoint_name, sagemaker_session=None, serializer=<sagemaker.base_serializers.JSONSerializer object>, deserializer=<sagemaker.base_deserializers.JSONDeserializer object>, model_name=None, model_version=None, component_name=None, **kwargs)

Bases: Predictor

A Predictor implementation for inference against TensorFlow Serving endpoints.

Initialize a TensorFlowPredictor.

See Predictor for more info about parameters.

Parameters:

classify(data)

Placeholder docstring.

regress(data)

Placeholder docstring.

predict(data, initial_args=None)

Placeholder docstring.
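Since the predictor's default serializer is JSONSerializer, predict() sends a JSON request body. The TensorFlow Serving REST API conventionally wraps inputs in an "instances" list; a sketch of such a payload, with illustrative feature values:

```python
import json

# Illustrative request payload in the TensorFlow Serving REST API shape.
# The default JSONSerializer would serialize it like this before sending
# it to the endpoint; the feature values are placeholders.
payload = {"instances": [[1.0, 2.0, 3.0]]}
body = json.dumps(payload)
```

Calling predictor.predict(payload) against a live endpoint would transmit this body; the endpoint itself is not created here.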

TensorFlow Processor

class sagemaker.tensorflow.processing.TensorFlowProcessor(framework_version, role=None, instance_count=None, instance_type=None, py_version='py3', image_uri=None, command=None, volume_size_in_gb=30, volume_kms_key=None, output_kms_key=None, code_location=None, max_runtime_in_seconds=None, base_job_name=None, sagemaker_session=None, env=None, tags=None, network_config=None)

Bases: FrameworkProcessor

Handles Amazon SageMaker processing tasks for jobs using TensorFlow containers.

This processor executes a Python script in a TensorFlow execution environment.

Unless image_uri is specified, the TensorFlow environment is an Amazon-built Docker container that executes functions defined in the supplied code Python script.

The arguments have the exact same meaning as in FrameworkProcessor.

Parameters:

estimator_cls

alias of TensorFlow