
Update the README with some basic information #30

Merged — 1 commit merged into master, Jan 10, 2022

Conversation

@LukeWood (Contributor) commented Jan 6, 2022

Previously the README mentioned we would share more details with end
users in Q4 2021. I think that these details we currently have are
appropriate to share with users.

@LukeWood (Contributor, Author)

@fchollet @qlzh727 let me know what you both think about this! I think it's good to start fleshing out a real README over time.

@qlzh727 (Member) left a comment

Thanks for the update. Please wait for the final LGTM from Francois.

@LukeWood (Contributor, Author)

We may also want to include information on tangible goals (e.g., 80% top-1 classification accuracy on ImageNet-1k using only keras-cv built-ins). For now, having this is probably better than the outdated under-construction blurb.

@bhack (Contributor) commented Jan 10, 2022

As we are going to see more and more multimodal networks, I suppose we need strong coordination with keras-nlp, since we could have components that are a little bit hard to put in the vision-only or NLP-only bucket.

https://blog.google/products/search/introducing-mum/

https://arxiv.org/abs/2105.11087

@bhack (Contributor) commented Jan 10, 2022

I would also like us to describe how we evaluate a new component request/PR proposed by a community member.

Do we want a paper-citation threshold or some other evaluation metric?

What about the component maintainership policy? Can we scale community maintainership/ownership of the components?

I also think we cannot simply accumulate components over time; how are we going to evaluate and handle deprecations?

I think these are quite basic topics that we should define for transparency in a community/open-source repo without infinite resources.

I suppose we could put this partially in README.md and partially in CONTRIBUTING.md.

@fchollet (Member)

"Do we want a paper-citation threshold or some other evaluation metric?"

We want an adoption metric of some kind, and a citation threshold (~50) is a convenient metric.

"What about the component maintainership policy? Can we scale community maintainership/ownership of the components?"

We'll try. Someone who contributes a component will be called to fix issues with it if any arise.

"I also think we cannot simply accumulate components over time; how are we going to evaluate and handle deprecations?"

Deprecation decisions are always the result of a cost/benefit analysis. The picture varies from component to component and over the lifetime of the repo, so it's decided on a case-by-case basis.

"I suppose we could put this partially in README.md and partially in CONTRIBUTING.md."

Yes, this is mostly information that should go in the contributors' guide.

@LukeWood LukeWood merged commit cca343e into master Jan 10, 2022
@bhack (Contributor) commented Jan 10, 2022

@fchollet Thanks. For deprecations, if we scale via community maintainership we need to figure out how to handle MIA/proxy cases; that isn't strictly a deprecation, but it could leave a component orphaned and eventually lead to deprecation.

About the threshold, this is what we have in TF Addons:

https://github.com/tensorflow/addons/blob/master/CONTRIBUTING.md

Suggested guidelines for new feature requests:
The feature contains an official reference implementation.
Should be able to reproduce the same results in a published paper.
The academic paper exceeds 50 citations.

So I suppose that with CV+NLP (or the multimodal work mentioned above), we really need to figure out what we want from TF Addons, since we have almost the same barrier to entry.

I suppose that using TF Addons only to collect custom ops (currently C++/CUDA) and the related Python bindings will not work.

@LukeWood LukeWood deleted the readme-update branch February 21, 2022 19:35
ianstenbit pushed a commit to ianstenbit/keras-cv that referenced this pull request Aug 6, 2022
Update the README with some basic information
adhadse pushed a commit to adhadse/keras-cv that referenced this pull request Sep 17, 2022
Update the README with some basic information
@bhack bhack mentioned this pull request Sep 23, 2022
freedomtan pushed a commit to freedomtan/keras-cv that referenced this pull request Jul 20, 2023
* Added confusion metrics -- still using TF ops

* Fixed structure + tests pass for TF (still need to port to multi-backend)

* Got rid of most tf deps, still a few more to go

* Full removal of TF. Tests pass for both Jax and TF

* Full removal of TF. Tests pass for both Jax and TF

* Formatting

* Formatting

* Review comments

* More review comments + formatting
4 participants