
ConvNeXt Architecture #68

Closed

LukeWood opened this issue Jan 30, 2022 · 16 comments

Comments

@LukeWood
Contributor

Sayak is working on this

@sayakpaul
Contributor

I have implemented it: https://github.com/sayakpaul/ConvNeXt-TF/. Should be available here soon: https://tfhub.dev/sayakpaul/collections/convnext/1.
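
For context on what such an implementation involves, here is a minimal sketch of a single ConvNeXt block in Keras, following the structure described in the paper (7x7 depthwise conv, LayerNorm, 1x1 expansion with GELU, 1x1 projection, layer scale, residual). This is illustrative only, not the code from the linked repo, and it omits stochastic depth:

```python
import tensorflow as tf
from tensorflow.keras import layers

class ConvNeXtBlock(layers.Layer):
    """One ConvNeXt block, per the paper's description (a sketch)."""

    def __init__(self, dim, layer_scale_init=1e-6, **kwargs):
        super().__init__(**kwargs)
        self.dwconv = layers.DepthwiseConv2D(7, padding="same")  # 7x7 depthwise
        self.norm = layers.LayerNormalization(epsilon=1e-6)
        self.pwconv1 = layers.Dense(4 * dim)                     # 1x1 expansion
        self.act = layers.Activation("gelu")
        self.pwconv2 = layers.Dense(dim)                         # 1x1 projection
        # Layer scale: learnable per-channel multiplier, initialized small.
        self.gamma = self.add_weight(
            shape=(dim,),
            initializer=tf.keras.initializers.Constant(layer_scale_init),
            trainable=True,
            name="layer_scale",
        )

    def call(self, x):
        shortcut = x
        x = self.dwconv(x)
        x = self.norm(x)
        x = self.pwconv1(x)
        x = self.act(x)
        x = self.pwconv2(x)
        return shortcut + self.gamma * x

# Example: one ConvNeXt-T stage-1 block on a 56x56x96 feature map.
y = ConvNeXtBlock(dim=96)(tf.random.normal((1, 56, 56, 96)))
```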

@bhack
Contributor

bhack commented Feb 1, 2022

I suppose that one took an initial approach of converting the PyTorch reference implementation's weights.
It would be nice to see how we will handle this given the points we have at #71

@sayakpaul
Contributor

Yeah, absolutely. If you serialize the weights of the converted models, you will have something similar to what's expected here.
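
To make the conversion idea concrete, here is a rough sketch of porting a single depthwise-conv tensor from PyTorch's layout to Keras and checking numerical parity. The randomly initialized conv stands in for a tensor read from the reference checkpoint; the real checkpoint keys and the full model mapping are omitted:

```python
import numpy as np
import tensorflow as tf
import torch

# Stand-in for one tensor from the reference checkpoint: a 7x7 depthwise
# conv over 96 channels, as in ConvNeXt-T's first stage.
pt_conv = torch.nn.Conv2d(96, 96, kernel_size=7, groups=96)
pt_kernel = pt_conv.weight.detach().numpy()  # (96, 1, 7, 7)
pt_bias = pt_conv.bias.detach().numpy()      # (96,)

# PyTorch stores the kernel as (C, 1, kH, kW); Keras DepthwiseConv2D
# expects (kH, kW, C, depth_multiplier).
tf_kernel = np.transpose(pt_kernel, (2, 3, 0, 1))  # -> (7, 7, 96, 1)

keras_conv = tf.keras.layers.DepthwiseConv2D(7, padding="valid")
keras_conv.build((None, 56, 56, 96))
keras_conv.set_weights([tf_kernel, pt_bias])

# Sanity check: both frameworks should now agree up to float error.
x = np.random.rand(1, 56, 56, 96).astype("float32")
out_pt = pt_conv(torch.from_numpy(x).permute(0, 3, 1, 2)).detach().numpy()
out_tf = keras_conv(x).numpy()
assert np.allclose(np.transpose(out_pt, (0, 2, 3, 1)), out_tf, atol=1e-4)
```

Once every tensor is mapped this way, calling `save_weights` on the assembled Keras model yields the TF-native serialized artifact described above.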

@bhack
Contributor

bhack commented Feb 1, 2022

More or less. I think some of the differences in this repo could be:

  • As a library, we could take care to offer a reusable components API for the network's building blocks, probably more generic than what is needed just to build the original network and the variants found in the same paper (see the sketch after this list).

  • The training scripts, dataset interface APIs, and Markdown docs needed to reproduce the weights from scratch.

  • Extra (IMHO): fine-tuning.

  • Extra++: having a community CI infra (GitHub self-hosted Actions on GKE) to launch user-contributed training/fine-tuning jobs approved through maintainer reviews.
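
As one concrete instance of the first point: ConvNeXt trains with stochastic depth, a regularizer shared by many modern vision backbones, so it is a natural candidate for a standalone layer API rather than living inside one model file. A minimal sketch of what such a component could look like (an assumption for illustration, not KerasCV's actual implementation):

```python
import tensorflow as tf
from tensorflow.keras import layers

class StochasticDepth(layers.Layer):
    """Randomly zeroes a residual branch per sample during training
    (Huang et al., 2016), scaling survivors to keep expectations equal."""

    def __init__(self, drop_rate=0.0, **kwargs):
        super().__init__(**kwargs)
        self.drop_rate = drop_rate

    def call(self, x, training=None):
        if not training or self.drop_rate == 0.0:
            return x
        keep_prob = 1.0 - self.drop_rate
        # One keep/drop decision per example, broadcast over all other axes.
        shape = [tf.shape(x)[0]] + [1] * (len(x.shape) - 1)
        mask = tf.floor(keep_prob + tf.random.uniform(shape, 0, 1))
        return (x / keep_prob) * mask

# Usage inside a residual block: drop the branch, never the shortcut.
# out = shortcut + StochasticDepth(0.1)(branch, training=True)
```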

@sayakpaul
Contributor

I am not absolutely sure about any of these points. I am also not sure how individual contributors could train the models from scratch without involved support from the maintainers.

For implementing architectures, there could be model-specific bits, while the generic bits could still benefit from what the library already offers. If you look at what I have implemented, you'd probably notice there aren't too many specific bits there.

Also, it's helpful to have examples illustrating your points. For example, if you could give an example of what you meant by your first point in the context of this issue thread, that would be super helpful.

@bhack
Contributor

bhack commented Feb 1, 2022

> I am not absolutely sure about any of these points. I am also not sure how individual contributors could train the models from scratch without involved support from the maintainers.

We don't have this infra in place right now, so we cannot contribute a GitHub Action orchestrating the training job for reproducibility.
It was just a prospective feature request on my part (that's why it was tagged as Extra++).

> For implementing architectures, there could be model-specific bits, while the generic bits could still benefit from what the library already offers. If you look at what I have implemented, you'd probably notice there aren't too many specific bits there. Also, it's helpful to have examples illustrating your points. For example, if you could give an example of what you meant by your first point in the context of this issue thread, that would be super helpful.

Yes. Besides the obvious reusable components a specific network could introduce and expose here as API (new layers, optimizers, losses, and metrics), there was also:
#59

So mainly it is about thinking as a library, with the network itself serving as an end-to-end integration example of the newly introduced (if needed) component APIs.

@LukeWood
Contributor Author

LukeWood commented Feb 1, 2022

Hey @sayakpaul, just a heads up: we are planning to hold off on incorporating models for a little bit longer.

@qlzh727 has some great ideas on changing the structure for Keras applications a bit, and we'd like to iron those out before adding any models.

@sayakpaul
Contributor

Thanks for letting me know. Does this also mean keras.applications will be held off from accepting new models for now?

@LukeWood
Contributor Author

LukeWood commented Feb 2, 2022

> Thanks for letting me know. Does this also mean keras.applications will be held off from accepting new models for now?

I'd guess so; this should only last a month or so before we have the new sample model ready, though.

@sayakpaul
Contributor

Understood. Thank you.

@Tony363

Tony363 commented Apr 13, 2022

@LukeWood Is this issue open for contributions as well? Next to Swin Transformers, ConvNeXt boasts even higher performance with a similarly robust training recipe! I definitely want to learn more about ConvNeXt in KerasCV, as one of my research projects would benefit directly from having ConvNeXt integrated into Keras!

@LukeWood
Contributor Author

It is not open for contributions as of now. I believe Sayak will be contributing these to Keras core; is that correct, @sayakpaul? For now, KerasCV won't be accepting models until some details are ironed out on our end.

@sayakpaul
Contributor

I am working on it, yes.

@LukeWood
Contributor Author

@sayakpaul we can migrate this to KerasCV when you are ready.

@sayakpaul
Contributor

Yes sure. I will start working on it very soon.

@LukeWood
Contributor Author

Sayak fixed this!
