Visualizing activations of DL models
The filters of any layer, in any architecture, can be visualized directly, but only the initial layers are interpretable with this technique. The last layer is more useful for a nearest neighbour approach: images whose last-layer embeddings lie close together tend to contain similar objects, so inspecting an image's neighbours helps us understand the model's prediction. The same last-layer embeddings can also be visualized in two dimensions with PCA or t-SNE, for example through TensorBoard's embedding projector.
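As a minimal sketch of the last-layer approach, suppose we have already run a trained model over a set of images and collected their last-layer embeddings into an array (random numbers stand in for them here). We can then query nearest neighbours and compute a 2-D t-SNE projection with scikit-learn:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.manifold import TSNE

# Stand-in for last-layer features: in practice, run the trained model
# on your images and collect one embedding vector per image.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 64))  # 100 images, 64-dim embeddings

# Nearest-neighbour lookup: images whose embeddings are close should
# look semantically similar if the model has learned useful features.
nn = NearestNeighbors(n_neighbors=5).fit(features)
distances, indices = nn.kneighbors(features[:1])
print(indices)  # the 5 images closest to image 0 (image 0 itself first)

# 2-D t-SNE projection of the same embeddings, ready for plotting.
projected = TSNE(n_components=2, perplexity=10,
                 random_state=0).fit_transform(features)
print(projected.shape)  # one (x, y) point per image
```

The projected points can be scattered with the original images as thumbnails, which is essentially what TensorBoard's embedding projector does interactively.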
Guided backpropagation
Visualizing the features of deeper layers directly can be uninformative. Instead, we can use backpropagation itself: we propagate gradients from a chosen filter back to the input to find the patterns that activate it. Because we select which activations are allowed to pass gradient backwards, the technique is called guided backpropagation.
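The selection happens at the ReLU units: ordinary backprop zeroes the gradient where the forward activation was negative, while guided backprop additionally zeroes negative incoming gradients, so only positive evidence for the chosen neuron reaches the input. A NumPy sketch of this gradient rule (with hypothetical toy values) might look like:

```python
import numpy as np

def guided_backprop_grad(x, upstream_grad):
    """Backward pass of a ReLU under the guided-backprop rule."""
    # Plain backprop through ReLU would mask with (x > 0) only.
    # Guided backprop also masks with (upstream_grad > 0), so only
    # positive gradient signal flows back toward the input.
    return upstream_grad * (x > 0) * (upstream_grad > 0)

# Toy pre-activations and upstream gradients (hypothetical values).
x = np.array([-1.0, 2.0, 3.0])
g = np.array([0.5, -0.5, 1.0])
guided = guided_backprop_grad(x, g)
print(guided)  # only the entry with x > 0 and g > 0 survives
```

In a real network, this rule replaces the standard ReLU gradient at every layer during the backward pass, and the resulting input-space gradient is rendered as an image.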