Update the README with some basic information #30
Conversation
Previously the README mentioned we would share more details with end users in Q4 2021. I think the details we currently have are appropriate to share with users.
Thanks for the update. Please wait for the final LGTM from Francois.
We may also want to include information on tangible goals (e.g. ImageNet-1k top-1 80% classification accuracy using only keras-cv built-ins). For now, having this is probably better than having the outdated under-construction blurb.
As we are going to see more and more multimodal networks, I suppose we need strong coordination with keras-nlp, since we could have components that are a little hard to put in the vision-only or NLP-only bucket.
Also, I would like us to describe how we evaluate a new component request/PR proposed by a community member. Do we want a paper citation threshold or some other evaluation metric? What about the component maintainership policy? Can we scale over community maintainership/ownership of the components? I also think that we cannot just accumulate components over time; how are we going to evaluate and handle deprecations? These are quite basic topics that we could define for transparency in a community/open-source repo without infinite resources. I suppose that we could partially use the …
We want an adoption metric of some kind, and a citation threshold (~50) is a convenient metric.
We'll try. Someone who contributes a component will be called on to fix issues with it if any arise.
Deprecation decisions are always the result of a cost/benefit analysis. The picture varies from component to component and over the lifetime of the repo, so it's decided on a case-by-case basis.
Yes, this is mostly information that should go in the contributors' guide.
@fchollet Thanks. For deprecations, if we scale over community maintainership we need to figure out how to handle MIA maintainers/proxy maintainership; that isn't strictly a deprecation, but it could leave a component orphaned and eventually lead to deprecation. About the threshold, this is what we have in TF Addons: https://github.com/tensorflow/addons/blob/master/CONTRIBUTING.md
So I suppose that with CV+NLP (or the mentioned multimodal networks) here, we really need to figure out what we want from TF Addons, as we have almost the same barrier. I suppose that using TF Addons only to collect custom ops (currently C++/CUDA) and the related Python bindings will not work.