Importing Models into Lucid

The Lucid modelzoo provides dozens of models that can be visualized without any setup. But if you're looking at this page, you probably want to visualize your own model.

In order to visualize your model, Lucid needs to know a number of things about your model. Importing your model into Lucid creates a special file describing your model which contains all the necessary information. Once your model is imported, anyone can easily visualize it without additional information -- whether another researcher, an artist, or you five years from now when you've forgotten all the details.

Overview

As of 2019, the recommended way to import a model for visualization is to use Lucid's Model.save(). Your code will look something like this:

import tensorflow as tf
from lucid.modelzoo.vision_models import Model

with tf.Graph().as_default() as graph, tf.Session() as sess:
    images = tf.placeholder("float32", [None, 224, 224, 3], name="input")

    # <Code to construct & load your model inference graph goes here>

    Model.save( ... )

You can now import your model into Lucid for visualization!

from lucid.optvis import render

model = Model.load("saved_model.pb")
render.render_vis(model, "layer_name:0")

Detailed Description

Note: This tutorial assumes you know how to make an inference graph of your model, like the one you use when you run your model over your test set.

As long as you know how to construct and load your inference graph, the main challenge in importing your model is determining the correct metadata to give Lucid.

Lucid needs four pieces of metadata:

  1. input_name -- the name of the input placeholder in your graph.
  2. image_shape -- the shape of a single input image, e.g. [224, 224, 3].
  3. output_names -- the names of the output ops you care about (typically the final softmax).
  4. image_value_range -- the range of values your model expects input pixels to be in, e.g. [0, 1] or [-117, 138].

If you don't know some of this metadata offhand, Lucid can sometimes infer it for you using Model.suggest_save_args():

import tensorflow as tf
from lucid.modelzoo.vision_models import Model

with tf.Graph().as_default() as graph, tf.Session() as sess:
    images = tf.placeholder("float32", [None, 224, 224, 3], name="input")

    # <Code to construct & load your model inference graph goes here>

    Model.suggest_save_args()

Typical output looks like this:
Inferred: input_name = input (because it was the only Placeholder in the graph_def)
Inferred: image_shape = [224, 224, 3]
Inferred: output_names = ['Softmax']  (because those are all the Softmax ops)
# Please sanity check all inferred values before using this code.
Incorrect `image_value_range` is the most common cause of feature visualization bugs! Most methods will fail silently with incorrect visualizations!
Model.save(
    input_name='input',
    image_shape=[224, 224, 3],
    output_names=['Softmax'],
    image_value_range=_,                   # TODO (eg. '[-1, 1], [0, 1], [0, 255], or [-117, 138]')
  )

However, you'll likely need to fill in some information yourself.

Note: Getting image_value_range wrong is the single most common cause of feature visualization bugs, and is really annoying to catch later on. All the code will run, but your results will be off because the model is getting values that don't make sense to it. We recommend double checking that you got image_value_range correct.
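If you're not sure which range is right, look at your input preprocessing: whatever transformation you apply to raw [0, 255] pixel values before they reach the network determines image_value_range. Below is a small illustration with made-up preprocessing steps (none of this comes from Lucid); substitute whatever your own pipeline does.

import numpy as np

raw = np.random.randint(0, 256, size=(224, 224, 3)).astype("float32")  # raw pixels in [0, 255]

# Each preprocessing choice implies a different image_value_range:
scaled   = raw / 255.0          # model sees values in [0, 1]      -> image_value_range=[0, 1]
centered = raw / 127.5 - 1.0    # model sees values in [-1, 1]     -> image_value_range=[-1, 1]
mean_sub = raw - 117.0          # model sees values in [-117, 138] -> image_value_range=[-117, 138]

print(mean_sub.min(), mean_sub.max())   # quick sanity check of the actual range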

Your final code should look something like this:

import tensorflow as tf
from lucid.modelzoo.vision_models import Model

with tf.Graph().as_default() as graph, tf.Session() as sess:
    images = tf.placeholder("float32", [None, 224, 224, 3], name="input")

    # <Code to construct & load your model inference graph goes here>
    # ...

    Model.save(
        "saved_model.pb",
        image_shape=[W, W, 3],        # your model's input size, e.g. [224, 224, 3]
        input_name='input',
        output_names=['Softmax'],
        image_value_range=[0, 1],     # the value range your model was trained on
    )

Keras-Specific Advice

Keras doesn't register its session as the default session. As such, you'll want to do something like this:

with K.get_session().as_default():
    ...
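Putting that together, a complete Keras export might look roughly like the sketch below. It assumes a TF1-era setup where K.get_session() exists, and it uses tf.keras.applications.MobileNet purely as a stand-in for your own model; the input/output names and the image_value_range depend entirely on how your model was built and preprocessed.

import tensorflow as tf
from tensorflow.keras import backend as K
from lucid.modelzoo.vision_models import Model

keras_model = tf.keras.applications.MobileNet()   # stand-in for your own Keras model

with K.get_session().as_default():
    Model.save(
        "keras_model.pb",
        image_shape=[224, 224, 3],
        input_name=keras_model.input.name.split(":")[0],   # e.g. 'input_1'
        output_names=[keras_model.output.op.name],          # e.g. the final softmax op
        image_value_range=[-1, 1],   # MobileNet's preprocess_input maps pixels to [-1, 1]; check yours!
    )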

Exporting Directly to Google Cloud Storage

Model.save() can save directly to a Google Cloud bucket if your computer is properly authenticated with Google Cloud. This can be useful for permanently archiving your model for future visualization and analysis, making it accessible across servers, and sharing with others.

Model.save("gs://bucket-name/saved_model_path/model_name.pb", ...)

Debugging Common Issues

I can run feature visualization, but my model features don't seem right

Debugging check list:

  1. Double-check image_value_range. As noted above, getting it wrong is the most common cause of feature visualization bugs, and everything will still run without errors.
  2. Sanity check the metadata you passed to Model.save() (input_name, image_shape, output_names), especially any values that were inferred by Model.suggest_save_args().
  3. Run the exported graph on a real, correctly preprocessed image and confirm its predictions are still sensible (one way to do this is sketched below).

For more detailed help with these problems, see Failure Modes.
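For that last check, one option is to run the saved GraphDef directly with plain TensorFlow and look at the predictions. This is a minimal sketch: it assumes the tensor names used in the examples above ('input', 'Softmax'), a 224x224 RGB input, that you feed an image already scaled into your image_value_range, and that the saved file is a frozen GraphDef protobuf; adjust all of these to your own model.

import numpy as np
import tensorflow as tf

# Read the saved GraphDef.
graph_def = tf.GraphDef()
with tf.gfile.Open("saved_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default(), tf.Session() as sess:
    tf.import_graph_def(graph_def, name="")
    # Replace this with a real image, preprocessed into your image_value_range.
    img = np.zeros([1, 224, 224, 3], dtype=np.float32)
    probs = sess.run("Softmax:0", feed_dict={"input:0": img})
    print(probs.argmax(), probs.max())    # top class index and its probability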

model.layers is empty

This is expected. The layer list only exists in manually defined classes, like those in modelzoo. Models returned by Model.load() will always have an empty layer list.

This is because model.layers is a human-defined list of layers of interest. There are a few reasons for this:

  1. For most purposes, you do not need layers defined. We didn't wish to make exporting models much more complicated for something only needed in some cases.
  2. Automatically creating a list of layers would require us to heuristically determine which nodes in the graph are "layers." We now have pretty good heuristics for guessing which layers are of interest, but we didn't when we made this originally. Even these improved heuristics break for unusual models.
  3. The most important aspect of having layers is having layer.activations, an array of how the layer responds to a fixed set of ImageNet classes. Creating these requires access to ImageNet, and would make importing models much slower and more difficult.

If you want to create a manually defined Model class, look at examples in modelzoo.
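For orientation, a hand-defined class mostly just restates the same metadata as class attributes plus a curated layer list. The sketch below is illustrative rather than copied from the library: the attribute names (model_path, image_shape, image_value_range, input_name) mirror the metadata discussed above, the layer name 'conv2d0' is made up, and the exact schema of the layers entries has varied between Lucid versions, so copy the pattern from a current modelzoo class rather than from here.

from lucid.modelzoo.vision_models import Model
from lucid.optvis import render

class MyModel(Model):
    # Where the frozen GraphDef lives (a local path or a gs:// URL).
    model_path = "gs://bucket-name/saved_model_path/model_name.pb"
    image_shape = [224, 224, 3]
    image_value_range = (0, 1)
    input_name = "input"

    # Hand-curated layers of interest; the exact entry format is version-dependent.
    layers = [
        {"name": "conv2d0", "type": "conv", "size": 64},
    ]

model = MyModel()
render.render_vis(model, "conv2d0:0")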