Probing experiments #356

Closed
odunkel opened this issue Mar 23, 2024 · 1 comment

Comments


odunkel commented Mar 23, 2024

Hello,

Thanks for the very interesting work 'Towards Understanding Cross and Self-Attention in Stable Diffusion for Text-Guided Image Editing'. Do you also plan to release the code, trained classifiers, etc. for performing the probing analysis?

Thanks in advance for your support.

@Bingyan-Liu
Collaborator

Thank you for your attention. Unfortunately, for the probing experiments we evaluated on the test data directly after each training run and did not save the trained classifiers. The classifier is a two-layer MLP, shown below, which can easily be trained with torch once the data is prepared.

import torch

class Net(torch.nn.Module):
    def __init__(self, input_size: int = 64 * 64, num_classes: int = 10):
        super().__init__()
        self.input_size = input_size
        # Hidden width: one eighth of the input size, floored at 2048.
        self.hidden_size = max(input_size // 8, 2048)
        self.mlp = torch.nn.Sequential(
            torch.nn.Flatten(),
            torch.nn.Dropout(0.1),
            torch.nn.Linear(in_features=self.input_size, out_features=self.hidden_size),
            torch.nn.Tanh(),
            # torch.nn.ReLU(),  # alternative activation
            torch.nn.Linear(in_features=self.hidden_size, out_features=num_classes),
        )

    def forward(self, x):
        return self.mlp(x)
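
For completeness, here is a minimal training-and-evaluation sketch of the workflow described above (train, then validate on the test split immediately, without saving the classifier). It assumes the probing features have already been extracted as a float tensor of shape (N, 64, 64) with integer class labels; the function names, batch size, learning rate, and epoch count are illustrative, not the authors' exact configuration.

import torch

def train_probe(features: torch.Tensor, labels: torch.Tensor,
                num_classes: int = 10, epochs: int = 10) -> "Net":
    # Illustrative hyperparameters; not necessarily the authors' setup.
    model = Net(input_size=features[0].numel(), num_classes=num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(features, labels),
        batch_size=64, shuffle=True)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model

@torch.no_grad()
def evaluate_probe(model: "Net", features: torch.Tensor, labels: torch.Tensor) -> float:
    # Validate on held-out data right after training, as described above.
    model.eval()
    preds = model(features).argmax(dim=-1)
    return (preds == labels).float().mean().item()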
