Revelio: Interpreting and leveraging visual semantic information in diffusion models

Explore how rich visual semantic information is represented within various layers and denoising timesteps of different diffusion architectures.


About Revelio

We study how rich visual semantic information is represented within various layers and denoising timesteps of different diffusion architectures. We uncover monosemantic interpretable features by leveraging k-sparse autoencoders (k-SAE). We substantiate our mechanistic interpretations via transfer learning, training lightweight classifiers on features from off-the-shelf diffusion models. On 4 datasets, we demonstrate the effectiveness of diffusion features for representation learning. We provide an in-depth analysis of how different diffusion architectures, pre-training datasets, and language model conditioning impact visual representation granularity, inductive biases, and transfer learning capabilities. Our work is a critical step towards deepening the interpretability of black-box diffusion models.
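
To make the core technique concrete, here is a minimal sketch of a k-sparse autoencoder in PyTorch, assuming a TopK activation over a linear dictionary. The dimensions, dictionary size, k, and the probe's class count are illustrative placeholders, not the configuration used in the paper:

```python
import torch
import torch.nn as nn

class KSparseAutoencoder(nn.Module):
    """k-SAE: keep only the top-k latent activations per sample, zero the rest,
    then reconstruct the input from the sparse code."""

    def __init__(self, d_model: int, n_latents: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model)

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)                                    # dense latent codes
        topk = torch.topk(z, self.k, dim=-1)                   # k largest activations
        z_sparse = torch.zeros_like(z).scatter(-1, topk.indices, topk.values)
        return self.decoder(z_sparse), z_sparse                # reconstruction, codes

# Hypothetical shapes: 1280-dim diffusion features, a 16384-atom dictionary, k = 32.
sae = KSparseAutoencoder(d_model=1280, n_latents=16384, k=32)
features = torch.randn(8, 1280)                  # stand-in for extracted diffusion features
recon, codes = sae(features)
loss = nn.functional.mse_loss(recon, features)   # reconstruction objective

# The transfer-learning evaluation amounts to a lightweight (e.g. linear) probe
# trained on the same frozen features; the class count here is a placeholder.
probe = nn.Linear(1280, 100)
logits = probe(features)
```

Sparsity here comes from the hard TopK selection rather than an L1 penalty, which is what encourages each latent to fire on a narrow, interpretable visual concept.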


Figure 2: Interactive image grid. Click on an image to explore what each layer captures. Panels show the original image alongside features from the Bottleneck, Up_ft0, Up_ft1, and Up_ft2 layers.