Quickstart for Linux-based devices with Python

Using LiteRT with Python is great for embedded devices based on Linux, such as Raspberry Pi and Coral devices with Edge TPU, among many others.

This page shows how you can start running LiteRT models with Python in just a few minutes. All you need is a TensorFlow model converted to TensorFlow Lite. (If you don't have a model converted yet, you can experiment using the model provided with the example linked below.)
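If you still need to convert a model, the full TensorFlow package provides a converter. The following is a minimal sketch, assuming a SavedModel directory at a hypothetical path:

import tensorflow as tf

# Convert a SavedModel (hypothetical path) into the .tflite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
tflite_model = converter.convert()

# Write the converted model to disk so the interpreter can load it later.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

Note that conversion requires the full tensorflow package; only running the converted model needs the smaller runtime described below.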

About the LiteRT runtime package

To quickly start executing LiteRT models with Python, you can install just the LiteRT interpreter, instead of all TensorFlow packages. We call this simplified Python package tflite_runtime.

The tflite_runtime package is a fraction of the size of the full tensorflow package and includes the bare minimum code required to run inferences with LiteRT: primarily the Interpreter Python class. This small package is ideal when all you want to do is execute .tflite models and avoid wasting disk space with the large TensorFlow library.

Install LiteRT for Python

You can install on Linux with pip:

python3 -m pip install tflite-runtime
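To confirm the installation succeeded, you can check that the Interpreter class imports cleanly (a quick sanity check, not part of the package itself):

python3 -c "from tflite_runtime.interpreter import Interpreter; print('tflite_runtime OK')"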

Supported platforms

The tflite-runtime Python wheels are pre-built and provided only for a limited set of Linux platforms and Python versions.

If you want to run LiteRT models on other platforms, you should either use the full TensorFlow package, or build the tflite-runtime package from source.

If you're using TensorFlow with the Coral Edge TPU, you should instead follow the appropriate Coral setup documentation.

Run an inference using tflite_runtime

Instead of importing Interpreter from the tensorflow module, you now need to import it from tflite_runtime.

For example, after you install the package above, copy and run the label_image.py file. It will (probably) fail because you don't have the tensorflow library installed. To fix it, edit this line of the file:

import tensorflow as tf

So it instead reads:

import tflite_runtime.interpreter as tflite

And then change this line:

interpreter = tf.lite.Interpreter(model_path=args.model_file)

So it reads:

interpreter = tflite.Interpreter(model_path=args.model_file)

Now run label_image.py again. That's it! You're now executing LiteRT models.
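If you want to run an inference outside of label_image.py, the same pattern works for any .tflite model. The following is a minimal sketch, assuming a model file named model.tflite and random input data shaped to match the model's first input tensor:

import numpy as np
import tflite_runtime.interpreter as tflite

# Load the model and allocate its input and output tensors.
interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed random data matching the model's expected input shape and dtype.
input_shape = input_details[0]["shape"]
input_data = np.random.random_sample(input_shape).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)

# Run inference and read back the result.
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]["index"])
print(output_data)

This is the same Interpreter API that the full tensorflow package exposes as tf.lite.Interpreter, so code written this way is portable between the two packages.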

Learn more