imageLIME - Explain network predictions using LIME - MATLAB

Explain network predictions using LIME

Since R2020b

Syntax

Description

scoreMap = imageLIME(net,X,channelIdx) uses the locally interpretable model-agnostic explanations (LIME) technique to compute a map of the importance of the features in the input image X when the network net evaluates the activation score for the channel given by channelIdx. For classification tasks, specify channelIdx as the channel in the softmax layer corresponding to the class label of interest.

The LIME technique approximates the behavior of the network net using a simpler, more interpretable model. By generating synthetic data from input X, computing network predictions for the synthetic data using net, and then using the results to fit a simple regression model, the imageLIME function determines the importance of each feature of X to the network's activation score for the channel given by channelIdx.

This function requires Statistics and Machine Learning Toolbox™.


[scoreMap,featureMap,featureImportance] = imageLIME(net,X,channelIdx) also returns a map of the features used to compute the LIME results and the calculated importance of each feature.


___ = imageLIME(___,Name=Value) specifies options using one or more name-value arguments in addition to the input arguments in previous syntaxes. For example, NumFeatures=100 sets the target number of features to 100.


Examples


Use imageLIME to visualize which parts of an image are important to a network for a classification decision.

Import the pretrained network SqueezeNet.

[net, classNames] = imagePretrainedNetwork("squeezenet");

Import the image and resize to match the input size for the network.

X = imread("laika_grass.jpg");
inputSize = net.Layers(1).InputSize(1:2);
X = imresize(X,inputSize);

Display the image. The image is of a dog named Laika.

imshow(X)

Compute the channel corresponding to the maximum class score. For single observation input, make predictions using the predict function. To make predictions using the GPU, first convert the data to gpuArray. Making predictions on a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox).

score = predict(net,single(X));
[~,channel] = max(score);

Use imageLIME to determine which parts of the image are important to the classification result.

scoreMap = imageLIME(net,X,channel);

Plot the result over the original image with transparency to see which areas of the image affect the classification score.

figure
imshow(X)
hold on
imagesc(scoreMap,AlphaData=0.5)
colormap jet

The network focuses predominantly on Laika's head and back to make the classification decision. Laika's eye and ear are also important to the classification result.

Use imageLIME to determine the most important features in an image and isolate them from the unimportant features.

Load a pretrained SqueezeNet network and the corresponding class names. For a list of all available networks, see Pretrained Deep Neural Networks.

[net, classNames] = imagePretrainedNetwork("squeezenet");

Import the image and resize to match the input size for the network.

X = imread("sherlock.jpg");
inputSize = net.Layers(1).InputSize(1:2);
X = imresize(X,inputSize);

Classify the image. To make predictions with a single observation, use the predict function. To convert the prediction scores to labels, use the scores2label function. To use a GPU, first convert the data to gpuArray. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox).

if canUseGPU
    X = gpuArray(X);
end
scores = predict(net,single(X));
[label,score] = scores2label(scores,classNames);

Compute the map of the feature importance and also obtain the map of the features and the feature importance. Set the image segmentation method to "grid", the number of features to 64, and the number of synthetic images to 3072.

channel = find(label == categorical(classNames));
[scoreMap,featureMap,featureImportance] = imageLIME(net,X,channel, ...
    Segmentation="grid",NumFeatures=64,NumSamples=3072);

Plot the result over the original image with transparency to see which areas of the image affect the classification score.

figure
imshow(X)
hold on
imagesc(scoreMap,AlphaData=0.5)
colormap jet
colorbar


Use the feature importance to find the indices of the most important five features.

numTopFeatures = 5;
[~,idx] = maxk(featureImportance,numTopFeatures);

Use the map of the features to mask out the image so only the most important five features are visible. Display the masked image.

mask = ismember(featureMap,idx);
maskedImg = uint8(mask).*X;
figure
imshow(maskedImg)


Use imageLIME with a custom segmentation map to view the most important features for a classification decision.

Import the pretrained network GoogLeNet.

[net,classNames] = imagePretrainedNetwork("googlenet");

Import the image and resize to match the input size for the network.

X = imread("sherlock.jpg");
inputSize = net.Layers(1).InputSize(1:2);
X = imresize(X,inputSize);

Classify the image. To make predictions with multiple observations, use the minibatchpredict function. To convert the prediction scores to labels, use the scores2label function. The minibatchpredict function automatically uses a GPU if one is available. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). Otherwise, the function uses the CPU.

scores = minibatchpredict(net,X);
label = scores2label(scores,classNames);

Create a matrix defining a custom segmentation map which divides the image into triangular segments. Each triangular segment represents a feature.

Start by defining a matrix with size equal to the input size of the image.

segmentationMap = zeros(inputSize(1));

Next, create a smaller segmentation map which divides a 56-by-56 pixel region into two triangular features. Assign values 1 and 2 to the upper and lower segments, representing the first and second features, respectively.

blockSize = 56;

segmentationSubset = ones(blockSize);
segmentationSubset = tril(segmentationSubset) + segmentationSubset;

% Set the diagonal elements to alternate values 1 and 2.
segmentationSubset(1:(blockSize+1):end) = repmat([1 2],1,blockSize/2)';

To create a custom segmentation map for the whole image, repeat the small segmentation map. Each time you repeat the smaller map, increase the feature index values so that the pixels in each triangular segment correspond to a unique feature. In the final matrix, value 1 indicates the first feature, value 2 the second feature, and so on for each segment in the image.

blocksPerSide = inputSize(1)/blockSize;
subset = 0;
for i = 1:blocksPerSide
    for j = 1:blocksPerSide
        xidx = (blockSize*(i-1))+1:(blockSize*i);
        yidx = (blockSize*(j-1))+1:(blockSize*j);
        segmentationMap(xidx,yidx) = segmentationSubset + 2*subset;
        subset = subset + 1;
    end
end

View the segmentation map. This map divides the image into 32 triangular regions.

figure
imshow(X)
hold on
imagesc(segmentationMap,AlphaData=0.8)
title("Custom Segmentation Map")
colormap gray

Use imageLIME with the custom segmentation map to determine which parts of the image are most important to the classification result.

channel = find(label == categorical(classNames));
scoreMap = imageLIME(net,X,channel, ...
    Segmentation=segmentationMap);

Plot the result of imageLIME over the original image to see which areas of the image affect the classification score.

figure
imshow(X)
hold on
imagesc(scoreMap,AlphaData=0.5)
title("Image LIME (Golden Retriever)")
colormap jet

Red areas of the map have a higher importance: when these areas are removed, the score for the golden retriever class goes down. The most important feature for this classification is the ear.

Input Arguments


Trained network, specified as a dlnetwork object.

net must contain a single input layer. The input layer must be an imageInputLayer.

Input image, specified as a numeric array.

The image must be the same size as the image input size of the network net. The input size is specified by the InputSize property of the imageInputLayer in the network.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Channel index, specified as a scalar or a vector of channel indices. The possible choices for channelIdx depend on the selected layer. The function computes the scores using the layer specified by the OutputNames property of the dlnetwork object net and the channel specified by channelIdx.

If channelIdx is specified as a vector, the feature importance map for each specified channel is calculated independently. In that case, scoreMap(:,:,i) corresponds to the map for the ith element in channelIdx.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
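For example, to compare explanations for the two highest scoring classes, you can pass a vector of channel indices. This is a sketch, assuming net, X, and the scores vector from the earlier classification examples:

[~,topChannels] = maxk(scores,2);
scoreMap = imageLIME(net,X,topChannels);

scoreMap(:,:,1) then holds the map for the highest scoring channel, and scoreMap(:,:,2) holds the map for the second.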

Name-Value Arguments


Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: NumFeatures=100,Segmentation="grid",OutputUpsampling="bicubic",ExecutionEnvironment="gpu" segments the input image into a grid of approximately 100 features, executes the calculation on the GPU, and upsamples the resulting map to the same size as the input image using bicubic interpolation.

Target number of features to divide the input image into, specified as a positive integer.

A larger value divides the input image into more, smaller features. To get the best results when using a larger number of features, also increase the number of synthetic images using the NumSamples option.

The exact number of features depends on the input image and segmentation method specified using the Segmentation option and can be less than the target number of features.

Example: NumFeatures=100

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Number of synthetic images to generate, specified as a positive integer.

A larger number of synthetic images gives better results but takes more time to compute.

Example: NumSamples=1024

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Segmentation method to use to divide the input image into features, specified as "superpixels", "grid", or a two-dimensional segmentation matrix.

The function segments the input image into features in the following way: the "superpixels" option groups pixels into irregularly shaped regions of similar value using superpixel segmentation, the "grid" option divides the image into a regular grid of rectangular features, and a two-dimensional numeric matrix the same size as the input image defines a custom map in which each integer value labels one feature.

For photographic image data, the "superpixels" option usually gives better results. In this case, features are based on the contents of the image, obtained by segmenting the image into regions of similar pixel value. For other types of images, such as spectrograms, the more regular "grid" option or a custom segmentation map can provide more useful results.

Example: Segmentation="grid"

Type of simple model to fit, specified as "tree" or"linear".

The imageLIME function generates a network prediction for the synthetic images using the network net and then uses the results to fit a simple, interpretable model. The methods used to fit the results and determine the importance of each feature depend on the type of simple model used.

Example: Model="linear"

Data Types: char | string

Output upsampling method, specified as "bicubic", "nearest", or "none".

If OutputUpsampling is "bicubic" or "nearest", the computed map is upsampled to the size of the input data using the imresize function.

Example: OutputUpsampling="none"
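For example, with Segmentation="grid" and OutputUpsampling="none", the returned map has one element per grid feature, and you can upsample it yourself using imresize. This is a sketch, assuming net, X, and channel are defined as in the earlier examples:

scoreMap = imageLIME(net,X,channel, ...
    Segmentation="grid",NumFeatures=64,OutputUpsampling="none");
scoreMapFull = imresize(scoreMap,size(X,[1 2]),"nearest");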

Size of the mini-batch to use to compute the score map, specified as a positive integer.

The mini-batch size specifies the number of images that are passed to the network at once. Larger mini-batch sizes lead to faster computation, at the cost of more memory.

Example: MiniBatchSize=256

Hardware resource, specified as "auto" (use a GPU if one is available, otherwise use the CPU), "gpu", or "cpu".

Output Arguments


Map of feature importance, returned as a numeric array. Areas in the map with higher positive values correspond to regions of input data that contribute positively to the total activation.

The value of scoreMap(i,j) denotes the importance of the image pixel (i,j) to the simple model, except when you use the options Segmentation="grid" and OutputUpsampling="none". In that case, scoreMap is smaller than the input image, and the value of scoreMap(i,j) denotes the importance of the feature at position (i,j) in the grid of features.

If channelIdx is specified as a vector, then the change in total activation for each specified channel is calculated independently. In that case, scoreMap(:,:,i) corresponds to the score map for the ith element in channelIdx.

Map of features, returned as a numeric array.

For each pixel (i,j) in the input image, idx = featureMap(i,j) is an integer corresponding to the index of the feature containing that pixel.

Feature importance, returned as a numeric array.

The value of featureImportance(idx) is the calculated importance of the feature specified by idx. If you provide channelIdx as a vector of numeric indices, then featureImportance(idx,k) corresponds to the importance of feature idx for channelIdx(k).

More About


The locally interpretable model-agnostic explanations (LIME) technique is an explainability technique used to explain the decisions made by a deep neural network.

Given the decision of a deep network for a piece of input data, the LIME technique calculates the importance of each feature of the input data with respect to the network output.

The LIME technique approximates the behavior of a deep neural network using a simpler, more interpretable model, such as a regression tree. To map the importance of different parts of the input image, the imageLIME function performs these steps:

1. Segment the input image into features, using the method specified by the Segmentation option.
2. Generate synthetic images by randomly including or excluding features, replacing the pixels of excluded features with an average pixel value.
3. Compute network predictions for the synthetic images and record the activation score for the channel of interest.
4. Fit a simple, interpretable regression model, specified by the Model option, that relates the presence of each feature to the recorded scores.
5. Use the fitted model to assign an importance value to each feature.
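The procedure can be sketched in a few lines of MATLAB. This is an illustrative simplification, not the imageLIME implementation: it assumes a regular grid of features, masks excluded features with the mean pixel value, and fits a linear model using fitlm from Statistics and Machine Learning Toolbox. All variable names here are hypothetical, and net, X (resized to the network input size), and channel are assumed to exist.

% Illustrative sketch only; not the imageLIME implementation.
gridSize = 8;                                      % 8-by-8 grid of features
numFeatures = gridSize^2;
numSamples = 512;
rowEdges = round(linspace(0,size(X,1),gridSize+1));
colEdges = round(linspace(0,size(X,2),gridSize+1));
featureOn = rand(numSamples,numFeatures) > 0.5;    % random include/exclude patterns
meanPix = mean(single(X),[1 2]);                   % replacement for excluded features
responses = zeros(numSamples,1);
for s = 1:numSamples
    Xs = single(X);
    for f = find(~featureOn(s,:))
        [r,c] = ind2sub([gridSize gridSize],f);
        rows = rowEdges(r)+1:rowEdges(r+1);
        cols = colEdges(c)+1:colEdges(c+1);
        Xs(rows,cols,:) = repmat(meanPix,numel(rows),numel(cols));
    end
    scores = predict(net,Xs);
    responses(s) = scores(channel);
end
mdl = fitlm(double(featureOn),responses);          % simple interpretable model
featureImportance = mdl.Coefficients.Estimate(2:end);

The fitted coefficients indicate how much the presence of each feature raises or lowers the channel score, which is the quantity the score map visualizes.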

The resulting map can be used to determine which features were most important to a particular output. This can be especially useful for making sure your network is focusing on the appropriate features when making predictions.

Extended Capabilities


The imageLIME function fully supports GPU acceleration.

By default, imageLIME uses a GPU if one is available. You can specify the hardware that the imageLIME function uses by specifying theExecutionEnvironment name-value argument.

For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).

Version History

Introduced in R2020b


DAGNetwork and SeriesNetwork objects are not recommended. Use dlnetwork objects instead.

The syntax scoreMap = imageLIME(net,X,label) is supported for DAGNetwork and SeriesNetwork objects only, where label is the class label used to calculate the change in classification score, specified as a categorical, a character array, or a string array. To use a dlnetwork object with the imageLIME function, you must specify the channel index instead. To find the channel index, you must know the order of the classes that the network was trained on.

Use the trainnet function to create a dlnetwork object. To convert an existing DAGNetwork or SeriesNetwork object to a dlnetwork object, use the dag2dlnetwork function.

This table shows how to convert code that uses a DAGNetwork object to code that uses a dlnetwork object. You can use the same syntaxes to convert a SeriesNetwork object.

Not recommended (DAGNetwork object):

map = imageLIME(DAGnet,X,label);

Recommended (dlnetwork object):

net = dag2dlnetwork(DAGnet);
channelIdx = find(label == classNames);
map = imageLIME(net,X,channelIdx);

Here, classNames contains the classes on which the network was trained. For example, you can extract the class names from a trained classification DAGNetwork using this code.

classNames = DAGnet.Layers(end).Classes;

Starting in R2024a, you can use the imageLIME function to generate score maps for nonclassification tasks, such as regression.

The Segmentation name-value argument of imageLIME now accepts a two-dimensional segmentation matrix the same size as the input image. Custom segmentation maps are useful when applying LIME to tasks involving non-natural images, such as spectrogram or floor plan data.