TF SavedModel

Understand How to Export to TF SavedModel Format From YOLO11

Deploying machine learning models can be challenging. However, using an efficient and flexible model format can make your job easier. TF SavedModel is TensorFlow's standard format for saving and loading complete models in a consistent way. It is like a suitcase for TensorFlow models: it packs the architecture, weights, and metadata together, making models easy to carry and use on different devices and systems.
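Concretely, a SavedModel is just a directory on disk. A typical layout (following the TensorFlow SavedModel convention; your export may contain additional files) looks like this:

```text
yolo11n_saved_model/
├── saved_model.pb        # serialized computation graph and signatures
├── variables/            # trained model weights
│   ├── variables.data-00000-of-00001
│   └── variables.index
└── assets/               # optional extra files used by the model
```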

Learning how to export to TF SavedModel from Ultralytics YOLO11 models can help you deploy models easily across different platforms and environments. In this guide, we'll walk through how to convert your models to the TF SavedModel format, simplifying the process of running inferences with your models on different devices.

Why Should You Export to TF SavedModel?

The TensorFlow SavedModel format is part of the TensorFlow ecosystem developed by Google. It is designed to save and serialize TensorFlow models seamlessly, encapsulating the complete details of a model: its architecture, weights, and even compilation information. This makes it straightforward to share, deploy, and continue training across different environments.


The TF SavedModel has a key advantage: its compatibility. It works well with TensorFlow Serving, TensorFlow Lite, and TensorFlow.js. This compatibility makes it easier to share and deploy models across various platforms, including web and mobile applications. The TF SavedModel format is useful both for research and production. It provides a unified way to manage your models, ensuring they are ready for any application.
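As a minimal, self-contained sketch of that save-and-load round trip (assuming TensorFlow is installed; the toy `Doubler` module below is a stand-in for a real model, not a YOLO network):

```python
import tempfile

import tensorflow as tf


class Doubler(tf.Module):
    """Toy module standing in for a real model."""

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return x * 2.0


# Save the module to a SavedModel directory, then load it back the same way
# TF Serving, TFLite conversion, or TF.js tooling would consume it.
export_dir = tempfile.mkdtemp()
tf.saved_model.save(Doubler(), export_dir)
loaded = tf.saved_model.load(export_dir)

print(loaded(tf.constant([1.0, 2.0])).numpy())
```

The loaded object restores the traced `__call__` signature, so downstream tools can run it without the original Python class.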

Key Features of TF SavedModels

Here are the key features that make TF SavedModel a great option for AI developers:

Deployment Options with TF SavedModel

Before we dive into the process of exporting YOLO11 models to the TF SavedModel format, let's explore some typical deployment scenarios where this format is used.

TF SavedModel provides a range of options to deploy your machine learning models:

Exporting YOLO11 Models to TF SavedModel

By exporting YOLO11 models to the TF SavedModel format, you enhance their adaptability and ease of deployment across various platforms.

Installation

To install the required package, run:

CLI

```bash
# Install the required package for YOLO11
pip install ultralytics
```

For detailed instructions and best practices related to the installation process, check our Ultralytics Installation guide. While installing the required packages for YOLO11, if you encounter any difficulties, consult our Common Issues guide for solutions and tips.

Usage

All Ultralytics YOLO11 models are designed to support export out of the box, making it easy to integrate them into your preferred deployment workflow. You can view the full list of supported export formats and configuration options to choose the best setup for your application.

Python

```python
from ultralytics import YOLO

# Load the YOLO11 model
model = YOLO("yolo11n.pt")

# Export the model to TF SavedModel format
model.export(format="saved_model")  # creates '/yolo11n_saved_model'

# Load the exported TF SavedModel model
tf_savedmodel_model = YOLO("./yolo11n_saved_model")

# Run inference
results = tf_savedmodel_model("https://ultralytics.com/images/bus.jpg")
```

CLI

```bash
# Export a YOLO11n PyTorch model to TF SavedModel format
yolo export model=yolo11n.pt format=saved_model  # creates '/yolo11n_saved_model'

# Run inference with the exported model
yolo predict model='./yolo11n_saved_model' source='https://ultralytics.com/images/bus.jpg'
```

Export Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| `format` | `str` | `'saved_model'` | Target format for the exported model, defining compatibility with various deployment environments. |
| `imgsz` | `int` or `tuple` | `640` | Desired image size for the model input. Can be an integer for square images or a tuple `(height, width)` for specific dimensions. |
| `keras` | `bool` | `False` | Enables export to Keras format, providing compatibility with TensorFlow serving and APIs. |
| `int8` | `bool` | `False` | Activates INT8 quantization, further compressing the model and speeding up inference with minimal accuracy loss, primarily for edge devices. |
| `nms` | `bool` | `False` | Adds Non-Maximum Suppression (NMS), essential for accurate and efficient detection post-processing. |
| `batch` | `int` | `1` | Specifies the export batch size, i.e., the maximum number of images the exported model will process concurrently in `predict` mode. |
| `device` | `str` | `None` | Specifies the device for exporting: CPU (`device=cpu`), MPS for Apple silicon (`device=mps`). |
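For instance, several of the arguments above can be combined in a single export call. The sketch below assembles them as a plain dictionary so the choices are easy to inspect; the final (commented-out) call assumes the ultralytics package is installed and a yolo11n.pt checkpoint is available locally:

```python
# Export options drawn from the argument table above
export_args = dict(
    format="saved_model",  # target format
    imgsz=640,             # int for square input; a (height, width) tuple also works
    int8=False,            # set True for INT8 quantization on edge devices
    nms=True,              # bake NMS post-processing into the exported model
    batch=4,               # export for batched inference of up to 4 images
)

# from ultralytics import YOLO
# YOLO("yolo11n.pt").export(**export_args)
```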

For more details about the export process, visit the Ultralytics documentation page on exporting.

Deploying Exported YOLO11 TF SavedModel Models

Now that you have exported your YOLO11 model to the TF SavedModel format, the next step is to deploy it. The primary and recommended way to run a TF SavedModel model is to load it with `YOLO("yolo11n_saved_model/")`, as shown in the usage snippet above.
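Outside the Ultralytics API, the same directory can also be opened with TensorFlow directly. A hedged sketch, assuming TensorFlow is installed and the export step above has been run (the `yolo11n_saved_model` path matches the earlier export output):

```python
from pathlib import Path

import tensorflow as tf

# Directory created by model.export(format="saved_model")
export_dir = Path("yolo11n_saved_model")

if export_dir.is_dir():
    loaded = tf.saved_model.load(str(export_dir))
    # SavedModels expose their entry points as named signatures
    print(list(loaded.signatures))
else:
    print(f"Run the export step first to create {export_dir}/")
```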

However, for in-depth instructions on deploying your TF SavedModel models, take a look at the following resources:

Summary

In this guide, we explored how to export Ultralytics YOLO11 models to the TF SavedModel format. By exporting to TF SavedModel, you gain the flexibility to optimize, deploy, and scale your YOLO11 models on a wide range of platforms.

For further details on usage, visit the TF SavedModel official documentation.

For more information on integrating Ultralytics YOLO11 with other platforms and frameworks, don't forget to check out our integration guide page. It's packed with great resources to help you make the most of YOLO11 in your projects.

FAQ

How do I export an Ultralytics YOLO model to TensorFlow SavedModel format?

Exporting an Ultralytics YOLO model to the TensorFlow SavedModel format is straightforward. You can use either Python or CLI to achieve this:

Exporting YOLO11 to TF SavedModel

Python

```python
from ultralytics import YOLO

# Load the YOLO11 model
model = YOLO("yolo11n.pt")

# Export the model to TF SavedModel format
model.export(format="saved_model")  # creates '/yolo11n_saved_model'

# Load the exported TF SavedModel for inference
tf_savedmodel_model = YOLO("./yolo11n_saved_model")
results = tf_savedmodel_model("https://ultralytics.com/images/bus.jpg")
```

CLI

```bash
# Export the YOLO11 model to TF SavedModel format
yolo export model=yolo11n.pt format=saved_model  # creates '/yolo11n_saved_model'

# Run inference with the exported model
yolo predict model='./yolo11n_saved_model' source='https://ultralytics.com/images/bus.jpg'
```

Refer to the Ultralytics Export documentation for more details.

Why should I use the TensorFlow SavedModel format?

The TensorFlow SavedModel format offers several advantages for model deployment:

For more benefits and deployment options, check out the Ultralytics YOLO model deployment options.

What are the typical deployment scenarios for TF SavedModel?

TF SavedModel can be deployed in various environments, including:

For detailed deployment options, visit the official guides on deploying TensorFlow models.

How can I install the necessary packages to export YOLO11 models?

To export YOLO11 models, you need to install the ultralytics package. Run the following command in your terminal:
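The command is the same one shown in the Installation section above:

```shell
# Install the required package for YOLO11
pip install ultralytics
```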

For more detailed installation instructions and best practices, refer to our Ultralytics Installation guide. If you encounter any issues, consult our Common Issues guide.

What are the key features of the TensorFlow SavedModel format?

TF SavedModel format is beneficial for AI developers due to the following features:

For further details, explore the official TensorFlow documentation.