
Add unetr #1

Draft: wants to merge 383 commits into base: main

Conversation

@caleb-vicente (Owner) commented Nov 6, 2022

What does this PR do?

Adds the UNETR model. Fixes huggingface#17309

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

daspartho and others added 30 commits October 14, 2022 17:21
* [Whisper] Fix gradient checkpointing (again!)

* [Whisper] Fix checkpointing (again!)
* ResNet Config for doctest

* added empty lines as suggested

* ran make style
* update feature extractor params

* update attention mask handling

* fix doc and pipeline test

* add warning when skipping test

* add whisper translation and transcription test

* fix build doc test

* Correct whisper processor

* make fix copies

* remove sample docstring as it does not fit whisper model

* Update src/transformers/models/whisper/modeling_whisper.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix, doctests are passing

* Nit

* last nit

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
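
As context for the Whisper fixes above, a minimal transcription sketch; the `openai/whisper-tiny` checkpoint and the silent dummy audio are illustrative choices, not part of this PR.

```python
# Minimal Whisper transcription sketch (checkpoint and dummy audio are illustrative).
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# 16 kHz mono waveform; one second of silence stands in for real audio.
waveform = torch.zeros(16000).numpy()
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```
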
… name (huggingface#19124)

* simplify loop

* fix layer map split

* update

* update for special variables

* add rag test

* fixup

* revert change : for next PR
)

* adds vision_encoder_decoder to Doc tests

* keep the initial order
…9582)

* initial changes

* update the suggested order of import
* Data2Vec Text Config for doctest

* typo fix

* made suggested changes
* trocr Config for doctest

* ran make style
* Remove keyword argument X from pipeline predict and transform methods

Since __call__ of pipeline classes requires one positional argument, passing
the input as a keyword argument inside the predict and transform methods
caused __call__ to fail. This commit therefore passes the input as a
positional argument instead.

* Implement basic tests for scikitcompat pipeline interface

* Separate tests instead of running parameterized over the framework, as both frameworks will not be active at the same time
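
To make the change above concrete, a small sketch of the scikit-learn-style interface: `predict` and `transform` forward their input to `__call__` positionally. The sentiment checkpoint is only an example.

```python
from transformers import pipeline

clf = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")

# Both scikit-learn-style aliases route through Pipeline.__call__ with the
# texts passed as a positional argument.
print(clf.predict(["This library is great", "This bug is annoying"]))
print(clf.transform(["Another example sentence"]))
```
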
* add type hints to mctct

* run auto style corrections

* change torch.bool to bool

* Update src/transformers/models/mctct/modeling_mctct.py

Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>

* Remove optional tags for attention_mask and head_mask

* fix optional tags

* Update src/transformers/models/mctct/modeling_mctct.py

Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
* Partial TF port for ESM model

* Add ESM-TF tests

* Add the various imports for TF-ESM

* TF weight conversion almost ready

* Stop ignoring the decoder weights in PT

* Add tests and lots of fixes

* fix-copies

* Fix imports, add model docs

* Add get_vocab() to tokenizer

* Fix vocab links for pretrained files

* Allow multiple inputs with a sep

* Use EOS as SEP token because ESM vocab lacks SEP

* Correctly return special tokens mask from ESM tokenizer

* make fixup

* Stop testing unsupported embedding resizing

* Handle TF bias correctly

* Skip all models with slow tokenizers in the token classification test

* Fixing the batch/unbatcher of pipelines to accommodate the `None` being passed around.

* Fixing a pipeline bug caused by the slow tokenizer being different.

* Update src/transformers/models/esm/modeling_tf_esm.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update src/transformers/models/esm/modeling_tf_esm.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update src/transformers/models/esm/modeling_tf_esm.py

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update set_input_embeddings and the copyright notices

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
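
For the TF ESM port above, a brief usage sketch; the `facebook/esm2_t6_8M_UR50D` checkpoint and the protein sequence are illustrative.

```python
from transformers import AutoTokenizer, TFEsmModel

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = TFEsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")

# Protein sequences are tokenized per residue; EOS doubles as the separator
# because the ESM vocabulary has no dedicated SEP token.
inputs = tokenizer("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```
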
* added type hints for Yolos Pytorch model

* make fixup

* Update src/transformers/models/yolos/convert_yolos_to_pytorch.py

* Update src/transformers/models/yolos/convert_yolos_to_pytorch.py

* Update src/transformers/models/yolos/convert_yolos_to_pytorch.py

Co-authored-by: Matt <rocketknight1@gmail.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
…9584)

* Fixes

* update expected values

* style

* fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
* Removed Bert interdependency from Funnel transformer

* passed consistency check

* Revert "passed consistency check"

This reverts commit ba55a08.

* Fixed docstrings

Co-authored-by: mukesh663 <mukesh13034@gmail.com>
* fix warnings in deberta

* fix copies

* Revert "fix copies"

This reverts commit 324cb3f.

* fix copies

* fix copies again

* revert the whitespace changes that make style made, since they result in an infinite chain of fix-copies

* argh

Co-authored-by: Sander Land <sander@chatdesk.com>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
* add return_tensors parameter for feature_extraction w/ test

add return_tensors parameter for feature extraction

Revert "Merge branch 'feature-extraction-return-tensor' of https://github.com/ajsanjoaquin/transformers into feature-extraction-return-tensor"

This reverts commit d559da7, reversing
changes made to bbef892.

* call parameter directly

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>

* Fixup.

* Update src/transformers/pipelines/feature_extraction.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
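
A short sketch of the `return_tensors` flag added to the feature-extraction pipeline above; the checkpoint name is an example.

```python
from transformers import pipeline

extractor = pipeline("feature-extraction", model="distilbert-base-uncased")

# Default behaviour: nested Python lists of floats.
as_lists = extractor("Hello world")
# With return_tensors=True the framework tensor is returned directly.
as_tensor = extractor("Hello world", return_tensors=True)
print(type(as_lists), as_tensor.shape)
```
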
sanchit-gandhi and others added 22 commits November 3, 2022 14:22
* [Whisper Tokenizer] Make more user-friendly

* use property

* make indexing rigorous

* small clean-up

* tests

* skip seq2seq tests

* remove multilingual arg

* reorder args

* collapse to one function

Co-authored-by: ArthurZucker <arthur@huggingface.co>

* option to override attributes

Co-authored-by: ArthurZucker <arthur@huggingface.co>

* add to docs

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* make comment more clear

Co-authored-by: sgugger <sylvain@huggingface.co>

* don't add special tokens in get_decoder_prompt_ids

* add test for set_prefix_tokens

Co-authored-by: ArthurZucker <arthur@huggingface.co>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: sgugger <sylvain@huggingface.co>
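
A compact sketch of the friendlier Whisper tokenizer API described above; the checkpoint, language, and task values are illustrative.

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Builds the language/task prefix tokens without adding special tokens.
forced_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
model.config.forced_decoder_ids = forced_ids
```
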
…ggingface#19066)

* fix led eos_mask

* add Future Warning

* revert useless changes

* Update src/transformers/models/led/modeling_led.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
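
For orientation, a generic sketch of the EOS-based pooling pattern that the `eos_mask` in LED's classification head follows; the token ids and sizes are made up.

```python
import torch

eos_token_id = 2
input_ids = torch.tensor([[0, 42, 43, eos_token_id, 1, 1]])  # illustrative ids, 1 = pad
hidden_states = torch.randn(1, 6, 16)                        # illustrative hidden size

# Pool the decoder output at the final EOS position of each sequence.
eos_mask = input_ids.eq(eos_token_id)
sentence_repr = hidden_states[eos_mask].view(
    hidden_states.size(0), -1, hidden_states.size(-1)
)[:, -1, :]
print(sentence_repr.shape)  # (1, 16)
```
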
…t.trace tuple input sequence, update related doc (huggingface#19891)

* fix jit trace error for the classification use case, update related doc

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* add implementation in torch 1.14.0

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* update_doc

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* update_doc

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
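
To illustrate the traced-classification use case fixed above, a minimal sketch of `torch.jit.trace` with a tuple of example inputs; the checkpoint name is an assumption.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, torchscript=True)
model.eval()

enc = tokenizer("Tracing works with tuple inputs", return_tensors="pt")
# jit.trace takes positional example inputs, so pass the tensors as a tuple.
traced = torch.jit.trace(model, (enc["input_ids"], enc["attention_mask"]))
logits = traced(enc["input_ids"], enc["attention_mask"])[0]
```
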
* Update ESM conversion script for ESMFold

* Fix bug in ESMFold example

* make fixup and move restypes to one line
* Only resize embeddings when necessary

* Add comment
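
The "only resize when necessary" idea above amounts to a guard like the following sketch; the gpt2 checkpoint and added token are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokenizer.add_tokens(["<extra_token>"])
# Skip the (potentially expensive) resize when the embedding matrix already fits the vocab.
if model.get_input_embeddings().weight.shape[0] != len(tokenizer):
    model.resize_token_embeddings(len(tokenizer))
```
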
…e tensors to numpy (huggingface#19976)

* Speed up TF postprocessing by converting to numpy beforehand

* Fix bug that was triggered when offset_mapping was None

Co-authored-by: Patrick Deutschmann <patrick.deutschmann@dedalus.com>
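
A toy sketch of the speed-up above: convert the TF tensors to numpy once, then do the per-token postprocessing with numpy ops; shapes are made up.

```python
import numpy as np
import tensorflow as tf

logits = tf.random.uniform((1, 128, 9))          # stand-in for token-classification logits
scores = tf.nn.softmax(logits, axis=-1).numpy()  # single conversion to numpy
labels = np.argmax(scores, axis=-1)              # cheap numpy indexing from here on
print(labels.shape)
```
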
* Fix esm lm head test

* make fixup
…0048)

* Poolformer image processor defaults to previous FE

* Remove unnecessary math.floor
* Fix Swin

* Remove file

* Update code snippet

* Add copied from to maskformer

* Fix docstring

* Add whole name to replace

Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
* Update READMEs for ESMFold and add notebooks

* Fix PyCharm formatting

* make fix-copies
…dings, to be in line with Tips section for BERT and GPT2 (huggingface#20068)

Co-authored-by: jordiclive <jordiclive19@imperial.ac.uk>
…gface#20044)

* POC

* For more CLIP-like models

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
…0069)

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
* Update defaults and logic to match old FE

* Use docker run rest values
@caleb-vicente self-assigned this Nov 6, 2022
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
