
Accuracy on CIFAR is not similar to that in the paper #41

Closed
Karami-m opened this issue Nov 29, 2023 · 4 comments

Comments

@Karami-m

First of all, thanks for your great work and for maintaining this repository.

I have tried to reproduce Hyena's results on CIFAR, but the accuracy only reaches ~60% after 100 epochs. According to the Appendix, the model dimension is 128, which differs from the value in `experiment/cifar/hyena-vit-cifar.yaml`. So I wonder: is this the only setting in this config file that needs to be changed to get the results reported in the paper (accuracy = 91%)?

@Karami-m changed the title from "Performance on CIFAR is much lower than that in the paper" to "Accuracy on CIFAR is not similar to that in the paper" on Nov 29, 2023
@Karami-m
Author

Karami-m commented Dec 1, 2023

I also tried running `python -m train experiment=cifar/hyena-vit-cifar model.d_model=128` to match the description in the Appendix of the Hyena paper. As a result, the accuracy improved to $\sim 70$%, but it is still far from the $\sim 90$% reported in the paper. Am I missing something in the config for this experiment?
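For reference, the same override can also be made persistent in the experiment config itself. A minimal sketch, assuming the standard Hydra layout of the experiment files in this repo:

```yaml
# Hypothetical addition to configs/experiment/cifar/hyena-vit-cifar.yaml,
# equivalent to the model.d_model=128 command-line override above.
model:
  d_model: 128
```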

@DanFu09
Contributor

DanFu09 commented Dec 2, 2023

Hi, thanks for the issue.

It looks like the config and code in this version are quite different from the ones used to run CIFAR for the paper; @exnx might be able to help a bit more with recreating the exact config.

One thing I can tell is different is the order of the filter: try setting `model.layer.filter_order=128` in the config.
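(Putting that together with the `d_model` override tried above, a single run would look something like `python -m train experiment=cifar/hyena-vit-cifar model.d_model=128 model.layer.filter_order=128`.)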

Probably more important: I think the scheduler in the `hyena-vit-cifar` config may not be set correctly, likely a copy-paste problem. Try changing the values at the top:

```yaml
# @package _global_
defaults:
  - /pipeline: cifar-2d
  - /model: vit
  - override /model/layer: hyena
  - override /scheduler: cosine_warmup_timm
```

to:

```yaml
# @package _global_
defaults:
  - /pipeline: cifar-2d
  - /model: vit
  - override /model/layer: hyena
  - override /scheduler: cosine_warmup

scheduler:
  num_training_steps: 100000
```

(following this config for the scheduler: https://github.com/HazyResearch/safari/blob/main/configs/experiment/cifar/s4-simple-cifar.yaml)

EDIT: never mind, I think this config is not supposed to be used for this comparison. Eric will comment with a correction soon.

@exnx

exnx commented Dec 2, 2023

Hello!

So the current vit-cifar config was only meant for testing the pipeline; it's not meant for reproducing the paper's CIFAR results. (E.g., testing the ViT pipeline on CIFAR is faster than running it on ImageNet.)

In the Hyena paper, the CIFAR results are for a *2D* version of Hyena (notably, not a ViT model).

We still need to port the 2D version over (for CIFAR), which we'll do at some point soon!

@Karami-m
Author

Karami-m commented Dec 4, 2023


Thanks for your response. With the current setup, I got an accuracy of 87% using the following settings:

```yaml
scheduler:
  t_in_epochs: False
  t_initial: ${eval:${div_up:50000, ${train.global_batch_size}} * ${trainer.max_epochs}}
  warmup_lr_init: 1e-6
  warmup_t: 500
  lr_min: ${eval:0.1 * ${optimizer.lr}}

model:
  _name_: vit_b_16
  d_model: 128
```
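(To unpack the `t_initial` interpolation: it computes $\lceil 50000 / \text{global\_batch\_size} \rceil \times \text{max\_epochs}$, i.e. the number of optimizer steps per epoch over CIFAR's 50,000 training images, times the number of epochs. With a hypothetical global batch size of 50 and the 100 epochs mentioned above, that resolves to $1000 \times 100 = 100000$ steps, consistent with the `num_training_steps: 100000` suggested earlier.)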

@exnx closed this as completed on Dec 30, 2023