Custom LRP Rules
One of the design goals of RelevancePropagation.jl is to combine ease of use with extensibility for research purposes.
This example will show you how to implement custom LRP rules.
This package is part of the Julia-XAI ecosystem and builds on the basics shown in the Getting started guide.
We start out by loading the same pre-trained LeNet-5 model and MNIST input data:
using RelevancePropagation
using VisionHeatmaps
using Flux
using MLDatasets
using ImageCore
using BSON
index = 10
x, y = MNIST(Float32, :test)[index]
input = reshape(x, 28, 28, 1, :)
model = BSON.load("../model.bson", @__MODULE__)[:model] # load pre-trained LeNet-5 model
Chain(
Conv((5, 5), 1 => 6, relu), # 156 parameters
MaxPool((2, 2)),
Conv((5, 5), 6 => 16, relu), # 2_416 parameters
MaxPool((2, 2)),
Flux.flatten,
Dense(256 => 120, relu), # 30_840 parameters
Dense(120 => 84, relu), # 10_164 parameters
Dense(84 => 10), # 850 parameters
) # Total: 10 arrays, 44_426 parameters, 174.344 KiB.
Implementing a custom rule
Step 1: Define rule struct
Let's define a rule that modifies the weights and biases of our layer on the forward pass. The rule has to be of supertype AbstractLRPRule.
struct MyGammaRule <: AbstractLRPRule end
Step 2: Implement rule behavior
It is then possible to dispatch on the following four utility functions with our rule type MyGammaRule to define custom rules without writing boilerplate code.
modify_input(rule::MyGammaRule, input)
modify_parameters(rule::MyGammaRule, parameter)
modify_denominator(rule::MyGammaRule, denominator)
is_compatible(rule::MyGammaRule, layer)
By default:
- modify_input doesn't change the input
- modify_parameters doesn't change the parameters
- modify_denominator avoids division by zero by adding a small epsilon term (1.0f-9)
- is_compatible returns true if a layer has fields weight and bias
To extend internal functions, import them explicitly:
import RelevancePropagation: modify_parameters
modify_parameters(::MyGammaRule, param) = param + 0.25f0 * relu.(param)
modify_parameters (generic function with 7 methods)
Note that we didn't implement three of the four functions. This is because the defaults are sufficient to implement the GammaRule.
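The other utility functions can be extended the same way. As a hypothetical illustration (the rule name MyDenseOnlyRule and its behavior are made up for this sketch and don't correspond to a standard LRP rule), a rule could dispatch on modify_input and is_compatible to drop negative activations on the forward pass and restrict itself to Dense layers:
import RelevancePropagation: modify_input, is_compatible

struct MyDenseOnlyRule <: AbstractLRPRule end  # hypothetical rule, for illustration only

modify_input(::MyDenseOnlyRule, input) = relu.(input)     # drop negative activations on the forward pass
is_compatible(::MyDenseOnlyRule, layer) = layer isa Dense  # restrict the rule to Dense layers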
Step 3: Use rule in LRP analyzer
We can directly use our rule to make an analyzer!
rules = [
ZPlusRule(),
EpsilonRule(),
MyGammaRule(), # our custom GammaRule
EpsilonRule(),
ZeroRule(),
ZeroRule(),
ZeroRule(),
ZeroRule(),
]
analyzer = LRP(model, rules)
heatmap(input, analyzer) # using VisionHeatmaps.jl
We just implemented our own version of the γ-rule in 2 lines of code. The heatmap perfectly matches the pre-implemented GammaRule:
rules = [
ZPlusRule(),
EpsilonRule(),
GammaRule(), # XAI.jl's GammaRule
EpsilonRule(),
ZeroRule(),
ZeroRule(),
ZeroRule(),
ZeroRule(),
]
analyzer = LRP(model, rules)
heatmap(input, analyzer)
Performance tips
- Make sure functions like modify_parameters don't promote the type of weights (e.g. from Float32 to Float64).
- If your rule MyRule doesn't modify weights or biases, defining modify_layer(::MyRule, layer) = nothing can reduce memory allocations and improve performance (see the sketch below).
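The following is a minimal sketch combining both tips; the rule name MyDoubledInputRule is hypothetical and only serves as illustration. The rule rescales the input with a Float32 constant (so types aren't promoted) and signals via modify_layer that the layer itself is never modified:
import RelevancePropagation: modify_input, modify_layer

struct MyDoubledInputRule <: AbstractLRPRule end  # hypothetical rule, for illustration only

modify_input(::MyDoubledInputRule, input) = 2.0f0 .* input  # Float32 constant keeps inputs in Float32
modify_layer(::MyDoubledInputRule, layer) = nothing         # layer is left unmodified, reducing allocations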
Advanced layer modification
For more granular control over weights and biases, modify_weight and modify_bias can be used.
If the layer doesn't use weights (layer.weight) and biases (layer.bias), RelevancePropagation provides a lower-level variant of modify_parameters called modify_layer. This function is expected to take a layer and return a new, modified layer. To add compatibility checks between rule and layer types, extend is_compatible.
Defining a custom modify_layer method overrides modify_parameters, modify_weight and modify_bias for that combination of rule and layer types. This is because the default implementation of modify_layer calls modify_weight and modify_bias, which in turn call modify_parameters.
The default call structure looks as follows:
┌─────────────────────────────────────────┐
│ modify_layer │
└─────────┬─────────────────────┬─────────┘
│ calls │ calls
┌─────────▼─────────┐ ┌─────────▼─────────┐
│ modify_weight │ │ modify_bias │
└─────────┬─────────┘ └─────────┬─────────┘
│ calls │ calls
┌─────────▼─────────┐ ┌─────────▼─────────┐
│ modify_parameters │ │ modify_parameters │
└───────────────────┘ └───────────────────┘
Therefore, modify_layer should only be extended for a specific rule and a specific layer type.
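As a sketch of this pattern (the rule name MyFlatBiasRule is hypothetical, and we assume Flux's Dense fields weight, bias and σ), a modify_layer method for a specific rule and layer type could look like this:
import RelevancePropagation: modify_layer

struct MyFlatBiasRule <: AbstractLRPRule end  # hypothetical rule, for illustration only

function modify_layer(::MyFlatBiasRule, layer::Dense)
    w = layer.weight + 0.25f0 * relu.(layer.weight)  # same weight modification as MyGammaRule
    b = zero(layer.bias)                             # drop the bias contribution
    return Dense(w, b, layer.σ)                      # return a new, modified layer
end
Because this method is defined for Dense layers only, all other layer types keep the default behavior.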
Advanced LRP rules
To implement custom LRP rules that require more than modify_layer, modify_input and modify_denominator, take a look at the LRP developer documentation.
This page was generated using Literate.jl.