Mission Overview

Interpreting Convolutional Neural Networks

Machine learning techniques can be powerful tools for categorizing data and answering data analysis questions. However, they often involve a great deal of hidden computation that is not immediately meaningful, and the black-box nature of their intermediate processes, especially in layered neural networks, can make them difficult to interpret and understand. The goal of this notebook is to familiarize you with some of the techniques used to make sense of machine learning models, and of convolutional neural networks (CNNs) in particular, which can be especially difficult to interpret due to their multi-layered structure and convolutional layers. In this notebook, we will examine two methods for visualizing CNN results (backpropagation-based saliency and Grad-CAM) and another method for testing model architecture.
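To give a sense of what the backpropagation (saliency) approach involves, the sketch below computes a saliency map by backpropagating a class score to the input pixels and taking the gradient magnitude at each pixel. This is a minimal, self-contained illustration in PyTorch; the tiny stand-in network and the 75x75, 3-channel input shape are placeholders for illustration, not the notebook's actual DeepMerge model.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained merger classifier;
# any CNN that maps an image tensor to class scores would work here.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

# Placeholder galaxy cutout; requires_grad lets us backpropagate to the pixels.
image = torch.randn(1, 3, 75, 75, requires_grad=True)

scores = model(image)
scores[0, scores.argmax()].backward()  # backpropagate the top class score

# The saliency map is the per-pixel gradient magnitude: pixels with large
# gradients most strongly influence the model's prediction.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
```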

Data: The DeepMerge HLSP

Notebook: Interpreting Convolutional Neural Networks

Released: 2024-06-05

Updated: 2024-06-05

Tags: deep learning, 2d data, interpretation, convolutional neural networks

Figure (interp_map.png): Illustration of saliency, a method for identifying the pixels in an input 2-D image that most strongly influence the predictions of a convolutional neural network. The image shows a largely blank field with a ring of bright pixels surrounding a galaxy cluster in the center (not pictured). Credit: Oliver Lin (2024).
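The figure above illustrates the saliency approach; the second visualization method mentioned in the overview, Grad-CAM, instead weights the activation maps of a convolutional layer by their spatially averaged gradients. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the ResNet-18 stand-in, the choice of layer4 as the target layer, and the 224x224 input are assumptions for illustration, not the notebook's actual model.

```python
import torch
import torch.nn.functional as F
import torchvision

# Hypothetical example: a small untrained CNN stands in for the notebook's model.
model = torchvision.models.resnet18(weights=None)
model.eval()

# Capture activations and gradients from the last convolutional block.
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0]

target_layer = model.layer4
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)        # placeholder input image
scores = model(image)
class_idx = scores.argmax(dim=1).item()
scores[0, class_idx].backward()            # backpropagate the predicted class score

# Weight each activation channel by its average gradient, sum the channels,
# and apply ReLU to obtain the Grad-CAM heatmap; then upsample to image size.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```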