Using Lucent with smaller images (CIFAR-100) #10

@Randophilus

Description

I am currently trying to use Lucent with a VGG model that I have trained on CIFAR-100 (32x32x3 images). I modified the network by removing the global average pool and replacing the linear layers with a single 512→100 linear layer before training from scratch. I was previously using ONNX to transfer my trained models to TensorFlow and visualizing them with Lucid. Unfortunately, newer versions of PyTorch are no longer compatible with TensorFlow 1.x, and Lucid is not built for TensorFlow 2.x. I found your library recently, and working through the example code I was getting great results. However, when I use Lucent on my trained models with fixed_image_size=32, the visualizations come out blurrier, less colorful, and less semantic than Lucid's.

Here is an example of a network visualization from Lucid (all 512 filters of the last layer of vgg11_bn):
[image]

and here is an example of the same network visualized with Lucent:
[image]

Both images use the same parametrization:
param_f = lambda: param.image(32, fft=True, decorrelate=True, batch=1)
and with Lucent I also pass fixed_image_size=32.
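Put together, my Lucent call looks roughly like this (a sketch; the layer name "features_28" is illustrative — the real names come from lucent.modelzoo.util.get_model_layers):

```python
def visualize_channel(model, channel, size=32):
    """Render one channel of my modified VGG with Lucent (sketch only)."""
    from lucent.optvis import render, param

    # Same parametrization as in Lucid: 32px, FFT, decorrelated colors.
    param_f = lambda: param.image(size, fft=True, decorrelate=True, batch=1)

    return render.render_vis(
        model,
        f"features_28:{channel}",  # layer:channel objective (name illustrative)
        param_f=param_f,
        fixed_image_size=size,     # keep optimization at the native 32x32
        show_image=False,
    )
```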

I already looked at transform.py in both Lucid and Lucent, and their standard_transforms appear identical. I also ran tests in a Jupyter notebook toggling decorrelate, fft, and the transforms separately, and none of them noticeably affected the visualization quality.

No transforms, no FFT, no decorrelate:
[image]

Standard transforms, no FFT, no decorrelate:
[image]

Standard transforms, FFT, no decorrelate:
[image]

Standard transforms, no FFT, decorrelate:
[image]

No transforms, FFT, decorrelate:
[image]
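For context on what the fft toggle is doing in the tests above: as I understand the parametrization both libraries share, the image is stored as spectral coefficients scaled by roughly 1/frequency, so at 32x32 there are only a handful of low-frequency bins to work with. A simplified numpy sketch (not the actual library code; the helper name is mine):

```python
import numpy as np

def fft_image(size=32, channels=3, decay_power=1.0, seed=0):
    """Roughly how the fft parametrization turns random spectral
    coefficients into an image (simplified sketch)."""
    rng = np.random.default_rng(seed)

    # 2D frequency magnitudes for an rfft2 of a size x size image.
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.rfftfreq(size)[None, :]
    freqs = np.sqrt(fx ** 2 + fy ** 2)

    # Scale random complex coefficients by ~1/frequency so low
    # frequencies dominate; clamp the DC term to avoid division by zero.
    scale = 1.0 / np.maximum(freqs, 1.0 / size) ** decay_power
    spectrum = (rng.standard_normal((channels, size, size // 2 + 1))
                + 1j * rng.standard_normal((channels, size, size // 2 + 1)))

    # Inverse real FFT over the two spatial axes gives the image.
    return np.fft.irfft2(spectrum * scale, s=(size, size))

img = fft_image()
print(img.shape)  # (3, 32, 32)
```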

Labels: bug