
MNIST: ODEBlock possibly redundant? #32

Closed
simonvary opened this issue Feb 15, 2019 · 8 comments

@simonvary

Hello,

Thank you for your work. It introduces a very interesting concept.

I have a question regarding the experimental section that serves as verification of the ODE model for MNIST classification.

Your ODE MNIST model in the paper is the following:

`model = nn.Sequential(*downsampling_layers, *feature_layers, *fc_layers).to(device)`

with the ODE block in the middle as `feature_layers` and `downsampling_method == 'conv'`. It has 208266 parameters overall and achieves a test error of 0.42%.

However, if you get rid of the middle block altogether and instead construct

`model = nn.Sequential(*downsampling_layers, *fc_layers).to(device)`

with `downsampling_layers` and `fc_layers` exactly as before, you get a model with 132874 parameters that achieves a similar test error of under 0.6% after roughly 100 epochs.
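For reference, here is a minimal sketch of that second construction; the layer definitions below are my reconstruction of the conv-downsampling variant of the `odenet_mnist.py` example, so treat them as an approximation rather than the exact script:

```python
# Reconstruction of the "downsampling + FC only" model; layer definitions
# are an approximation of the odenet_mnist.py example, not the exact script.
import torch.nn as nn

def norm(dim):
    # GroupNorm, as used in the example.
    return nn.GroupNorm(min(32, dim), dim)

downsampling_layers = [
    nn.Conv2d(1, 64, 3, 1),
    norm(64), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 4, 2, 1),
    norm(64), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 4, 2, 1),
]
fc_layers = [
    norm(64), nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(),
    nn.Linear(64, 10),
]

def count_params(model):
    # Count trainable parameters.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Downsampling + classifier head only, no feature_layers in between.
model = nn.Sequential(*downsampling_layers, *fc_layers)
print(count_params(model))  # 132874 with these definitions
```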

Could it be that your experiment demonstrates the remarkable efficiency of the `downsampling_layers` rather than of the ODE block?

Thanks,

Simon

@tacocat21

I've tried the ODEnet on CIFAR-10 and got ~85% accuracy using the conv for downsampling. I didn't try too many different hyperparameters but the ODE block seems to be working.

@zlannnn

zlannnn commented May 30, 2019

> I've tried the ODEnet on CIFAR-10 and got ~85% accuracy using the conv for downsampling. I didn't try too many different hyperparameters but the ODE block seems to be working.

I tried a few hyperparameters and small tricks on the same structure. ODENet achieved a stable result of around 91-92% accuracy on CIFAR-10 after something like 130-150 epochs. It was just a quick test due to my equipment limitations. With settings that don't take training time into consideration, I'd guess 93-94% is possible.

@jjjjjie

jjjjjie commented Aug 8, 2019

@zlannnn Could you share the hyperparameters and tricks you used to reach 91% accuracy on CIFAR-10? I am running these experiments, but my best accuracy is ~88%. Thanks!

@zlannnn

zlannnn commented Aug 14, 2019

@jjjjjie I just used the same structure as the official MNIST example (which means the 2 ResBlocks are involved), with some regular augmentation, SGD, and a few more filters. I think the major improvements come from the augmentation and the additional feature channels; if you try this with more channels, you should be able to reach higher accuracy (a rough sketch follows below).
ODENet is really great and works well. But as this issue points out, the two ResBlocks in the example also work very well.
If you try to use a pure ODE model, it might be a little hard to reach 93-94% on CIFAR-10.
On the other hand, if you replace the two ResBlocks with some modern shallow structure (also around 4 layers, like the two ResBlocks), you will get somewhat better results. However, that would drift away from why the ODE approach was proposed.
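A rough illustration of that kind of augmentation and channel widening; the rotation angle and channel count here are illustrative guesses, not the exact settings used in this thread:

```python
# Flip/rotation augmentation and a wider downsampling stack for CIFAR-10.
# The rotation angle and channel count are illustrative guesses only.
import torch.nn as nn
from torchvision import datasets, transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),   # small random rotations
    transforms.ToTensor(),
])
train_set = datasets.CIFAR10('./data', train=True, download=True,
                             transform=train_transform)

# More feature channels than the 64 used for MNIST, and 3 input channels.
dim = 128
downsampling_layers = [
    nn.Conv2d(3, dim, 3, 1),
    nn.GroupNorm(32, dim), nn.ReLU(inplace=True),
    nn.Conv2d(dim, dim, 4, 2, 1),
    nn.GroupNorm(32, dim), nn.ReLU(inplace=True),
    nn.Conv2d(dim, dim, 4, 2, 1),
]
```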

@jjjjjie

jjjjjie commented Aug 14, 2019

@zlannnn Thanks for your reply! I still have some questions about your tricks:

  1. Are the 2 ResBlocks in the downsampling layers? In this issue everyone used conv downsampling layers, so I never thought about this and used conv downsampling. :joy: If they are, is the network in odefunc still two ConcatConv2d layers in your model?
  2. Does 'some regular augmentation' in your model mean data augmentation, like flips and rotations?
  3. The feature channels are 64 now; do you mean I should use more channels, like 128 or more? Doing this will increase the number of parameters dramatically, but if it can improve the accuracy, I will give it a try. :smile:

Thank you in advance for your answer.

@zlannnn

zlannnn commented Aug 14, 2019

@jjjjjie Hi

  1. Yes, the downsampling layers are just the same as in the official example.
  2. Yes, just flips and rotations.
  3. Also yes, more channels will give you higher accuracy but slow down your training.
    The ODE block is great but kind of slow with an accurate tol; my machine could not handle it very well, so I did not test it too much. I personally suggest you set 'tol' a little higher at first; that may save some time (see the sketch below).
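To show where that tolerance enters, here is a minimal ODE block in the spirit of the example code, using torchdiffeq's adjoint solver; the tol value is only a placeholder:

```python
# Minimal ODE block sketch; `tol` is the solver tolerance discussed above,
# and the default value here is only a placeholder.
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint

class ODEBlock(nn.Module):
    def __init__(self, odefunc, tol=1e-3):
        super().__init__()
        self.odefunc = odefunc   # nn.Module whose forward takes (t, x)
        self.tol = tol           # looser tol -> fewer solver steps -> faster training
        self.integration_time = torch.tensor([0.0, 1.0])

    def forward(self, x):
        t = self.integration_time.type_as(x)
        # rtol/atol control solver accuracy and, with adaptive solvers, speed.
        out = odeint(self.odefunc, x, t, rtol=self.tol, atol=self.tol)
        return out[1]  # state at the end of integration (t = 1)
```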

@jjjjjie

jjjjjie commented Aug 15, 2019

@zlannnn Thanks a lot! I will give it a try.

@HolmesShuan

I evaluated three structures on CIFAR-100 and got the following results:

- downsampling + ODE blocks + FC layers (0.22M params) => 56.16%
- downsampling + Res blocks + FC layers (0.58M params) => 57.34%
- downsampling + FC layers (0.14M params) => 48.38%

The ODE block works well.
