
change the candidate's input resolution #20

Closed
Mshz2 opened this issue Aug 5, 2021 · 5 comments
Assignees
Labels
question Further information is requested

Comments


Mshz2 commented Aug 5, 2021

Hi, I would like to use your NATS-Bench for datasets other than CIFAR and ImageNet, with higher resolutions such as 256×256. Is it possible to sample a network as you did for CIFAR below, and then change the input resolution of the cells?

import xautodl, nats_bench

from nats_bench import create
from xautodl.models import get_cell_based_tiny_net

api = create(None, 'tss', fast_mode=True, verbose=True)

config = api.get_net_config(12, 'cifar10')
network = get_cell_based_tiny_net(config)

# ... then some code to change the input resolution to the target size of 256x256

Thanks for your response

@D-X-Y D-X-Y self-assigned this Aug 5, 2021
@D-X-Y D-X-Y added the enhancement New feature or request label Aug 5, 2021
D-X-Y (Owner) commented Aug 5, 2021

Thanks for your question.
The code above will execute these lines, creating the macro structure defined in NATS-Bench (https://github.com/D-X-Y/AutoDL-Projects/blob/58733c18becf18cd5c66392eb0ca6a80e2d14d23/xautodl/models/cell_infers/tiny_network.py#L10).

This macro structure downsamples twice and has a global pooling layer before the last FC layer. Therefore, the structure is resolution-agnostic: you can use 16×16 inputs as in ImageNet-16-120, 32×32 as in CIFAR, or 256×256 for your own datasets.

That said, for a 256×256 input resolution, two downsampling layers may not be enough in terms of model capacity, but that is a separate question.

In sum, the config you obtained from the code above is resolution-agnostic; you can use it directly for inputs of different resolutions.
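The resolution-agnostic claim can be sanity-checked with plain shape arithmetic, without installing xautodl: two stride-2 reductions shrink the spatial size, but the global average pooling before the FC layer collapses whatever remains to 1×1, so the FC input length depends only on the channel count. The helper names below are illustrative, not part of the NATS-Bench API.

```python
# Pure-Python sanity check (no xautodl dependency): the NATS-Bench macro
# structure applies two stride-2 downsample blocks, then global average
# pooling, so the FC layer always sees a fixed-length feature vector
# regardless of the input resolution.

def spatial_size_after_backbone(input_size, num_downsamples=2):
    """Spatial side length after `num_downsamples` stride-2 reductions."""
    size = input_size
    for _ in range(num_downsamples):
        size = (size + 1) // 2  # stride-2 conv with 'same'-style padding
    return size

def fc_input_length(input_size, channels):
    """Feature length seen by the final FC layer after global pooling."""
    # Global average pooling collapses H x W to 1 x 1, so only the
    # channel count survives -- the input resolution drops out entirely.
    _ = spatial_size_after_backbone(input_size)
    return channels

for res in (16, 32, 256):
    assert fc_input_length(res, channels=64) == 64  # same FC size for all
```

This is exactly why the same sampled architecture works for ImageNet-16-120, CIFAR, and larger custom inputs without any config change.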

Mshz2 (Author) commented Aug 6, 2021


Thanks a lot for your fast response.
So, the config below

config = api.get_net_config(12, 'cifar10')

obtains a config appropriate for CIFAR-10 that is completely agnostic to the input resolution pre-defined in the get_datasets function, right? And if I simply resize CIFAR-10 to e.g. 64 or 128 in get_datasets, can appropriate results still be obtained (apart from the number-of-downsamples issue you mentioned)?

D-X-Y (Owner) commented Aug 7, 2021

Yes, you are right.

BTW, if you want to resize CIFAR-10, you need to add a resize transform here, and also revise the shape here, which is used to compute the FLOPs.
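The two edits described above can be sketched as follows. This is a hedged sketch, assuming the usual xautodl conventions: a resize step in the dataset transforms, and a shape record of the form (1, C, H, W) used for FLOP counting. The function name `resize_xshape` is hypothetical; the transform line is shown only as a comment since it needs torchvision at runtime.

```python
# Hedged sketch of the two edits: (1) resize the images in the data
# pipeline, (2) revise the recorded input shape used to compute FLOPs.

def resize_xshape(xshape, target_resolution):
    """Return a copy of a (batch, C, H, W) shape tuple at a new resolution."""
    batch, channels, _, _ = xshape
    return (batch, channels, target_resolution, target_resolution)

# Inside the dataset-construction code, one would additionally prepend
# something like:
#   transforms.Resize((128, 128))
# to both the train and test transform lists (torchvision assumed).

new_shape = resize_xshape((1, 3, 32, 32), 128)  # (1, 3, 128, 128)
```

Keeping the shape record in sync with the actual transform matters only for reported FLOPs; the network itself accepts the resized inputs either way, per the discussion above.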

@D-X-Y D-X-Y added question Further information is requested and removed enhancement New feature or request labels Aug 7, 2021
@D-X-Y D-X-Y closed this as completed Aug 8, 2021
Mshz2 (Author) commented Oct 12, 2021


Hi! I was thinking about some changes in my project and came back to this topic again :)
You mentioned that the structure is resolution-agnostic, so I could use the same networks designed for 16×16 inputs on my custom datasets with 128×128 resolution. But wouldn't the downsampling reduce the dimensionality of my images so much that small objects in them disappear?

Also, where and how can I increase the number of downsample layers here? How many do you suggest for input sizes of 128 and 256?

  • I changed the input channel size via config['C'] = 32 after config = api.get_net_config(arch, 'cifar10'). Do you consider config['C'] = 32, or other values, an effective way to increase accuracy on custom datasets with higher resolutions?

Thanks a lot for your help! <3

D-X-Y (Owner) commented Oct 12, 2021

To increase the number of downsample layers, you need to change the definition of TinyNetwork. Please see here: https://github.com/D-X-Y/AutoDL-Projects/blob/58733c18becf18cd5c66392eb0ca6a80e2d14d23/xautodl/models/cell_infers/tiny_network.py#L21; each True entry in that list denotes a downsample layer.
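The reduction schedule at the linked line can be sketched in pure Python. Per the comment above, each True marks a stride-2 reduction block between cell stages; the stock macro structure has two such entries. The helper below and the stage count are assumptions based on the linked code, shown only to illustrate how a third downsample stage would be added.

```python
# Sketch of the reduction schedule in TinyNetwork: N cells per stage,
# with a True (stride-2 reduction block) inserted between stages.
# The stock schedule is [False]*N + [True] + [False]*N + [True] + [False]*N,
# i.e. two downsamples; adding a third stage may suit 128x128 or 256x256
# inputs better. Treat the exact construction as an assumption about the
# linked code, not its verbatim contents.

def build_reductions(num_cells_per_stage, num_downsamples):
    """Build a cell/reduction layout mirroring TinyNetwork's schedule."""
    schedule = [False] * num_cells_per_stage
    for _ in range(num_downsamples):
        schedule += [True] + [False] * num_cells_per_stage
    return schedule

stock = build_reductions(num_cells_per_stage=5, num_downsamples=2)
deeper = build_reductions(num_cells_per_stage=5, num_downsamples=3)
```

Note that adding a downsample stage changes the macro structure, so accuracies from the NATS-Bench lookup tables no longer apply to the modified networks; they would need to be retrained.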
