Commit 5f0c6f1

add transformers with code update of t5 model

1 parent 170cc3f commit 5f0c6f1

1,792 files changed: 658,821 lines added, 0 removed

Lines changed: 7 additions & 0 deletions
# Troubleshooting

This is a document explaining how to deal with various issues on Circle-CI. The entries may include actual solutions or pointers to Issues that cover them.

## Circle CI

* pytest worker runs out of resident RAM and gets killed by `cgroups`: https://github.com/huggingface/transformers/issues/11408
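For context (not part of the committed file), one common mitigation for this kind of OOM kill is to cap the number of parallel pytest workers. The sketch below assumes the CI job runs pytest through pytest-xdist; the worker count and test path are illustrative, not the project's actual settings.

```bash
# Hypothetical mitigation sketch: fewer xdist workers -> lower peak resident RAM per container.
# "-n 2" and "./tests/" are illustrative values only.
python -m pytest -n 2 --dist=loadfile -s -v ./tests/
```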

transformers/.circleci/config.yml

Lines changed: 1006 additions & 0 deletions
Large diffs are not rendered by default.

transformers/.coveragerc

Lines changed: 12 additions & 0 deletions
[run]
source=transformers
omit =
    # skip conversion scripts from testing for now
    */convert_*
    */__main__.py
[report]
exclude_lines =
    pragma: no cover
    raise
    except
    register_parameter
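As a hedged usage sketch (not part of the commit): coverage.py picks up a `.coveragerc` in the working directory automatically, so running from the repository root a typical invocation looks like the following; the test path is illustrative.

```bash
pip install coverage pytest
coverage run -m pytest tests/   # measures only the "transformers" source, skipping the omitted convert_* scripts
coverage report                 # the [report] exclude_lines rules above are applied here
```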

transformers/.gitattributes

Lines changed: 3 additions & 0 deletions
*.py eol=lf
*.rst eol=lf
*.md eol=lf
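A hedged follow-up sketch (not part of the commit): once these `eol=lf` rules are in place, an existing checkout can be renormalized so tracked files actually match the attributes.

```bash
git add --renormalize .   # rewrites line endings of tracked files to match .gitattributes (git >= 2.16)
git status                # lists the files whose endings changed
```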
Lines changed: 22 additions & 0 deletions
---
name: "\U0001F5A5 New benchmark"
about: Benchmark a part of this library and share your results
title: "[Benchmark]"
labels: ''
assignees: ''

---

# 🖥 Benchmarking `transformers`

## Benchmark

Which part of `transformers` did you benchmark?

## Set-up

What did you run your benchmarks on? Please include details such as CPU and GPU; if you used multiple GPUs, mention which kind of parallelization you used.

## Results

Put your results here!
Lines changed: 20 additions & 0 deletions
---
name: "\U0001F31F New model addition"
about: Submit a proposal/request to implement a new Transformer-based model
title: ''
labels: New model
assignees: ''

---

# 🌟 New model addition

## Model description

<!-- Important information -->

## Open source status

* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
Lines changed: 106 additions & 0 deletions
---
name: "\U0001F41B Bug Report"
about: Submit a bug report to help us improve transformers
title: ''
labels: ''
assignees: ''

---

## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->

- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:

### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way; otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.

Models:

- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l

If the model isn't in the list, ping @LysandreJik, who will redirect you to the correct contributor.

Library:

- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten, @Narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger

Documentation: @sgugger

Model hub:

- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.

HF projects:

- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)

Examples:

- maintained examples (not research project or legacy): @sgugger, @patil-suraj

For research projects, please ping the contributor directly. For example, on the following projects:

- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh

-->

## Information

Model I am using (Bert, XLNet ...):

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)

## To reproduce

Steps to reproduce the behavior:

1.
2.
3.

<!-- If you have code snippets, error messages, or stack traces, please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code. -->

## Expected behavior

<!-- A clear and concise description of what you would expect to happen. -->
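As a hedged aside (not part of the template itself), the "Environment info" block above is filled by running the CLI command mentioned in the comment; the exact output fields vary by installation.

```bash
# Prints the transformers version, platform, Python/PyTorch/TensorFlow versions,
# and GPU availability, matching the fields requested under "Environment info".
transformers-cli env
```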
Lines changed: 25 additions & 0 deletions
---
name: "\U0001F680 Feature request"
about: Submit a proposal/request for a new transformers feature
title: ''
labels: ''
assignees: ''

---

# 🚀 Feature request

<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->

## Motivation

<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link to it here too. -->

## Your contribution

<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.md guide:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
Lines changed: 58 additions & 0 deletions
---
name: "\U0001F4DA Migration from pytorch-pretrained-bert or pytorch-transformers"
about: Report a problem when migrating from pytorch-pretrained-bert or pytorch-transformers
  to transformers
title: ''
labels: Migration
assignees: ''

---

# 📚 Migration

## Information

<!-- Important information -->

Model I am using (Bert, XLNet ...):

Language I am using the model on (English, Chinese ...):

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)

## Details

<!-- A clear and concise description of the migration issue.
If you have code snippets, please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
-->

## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->

- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:

<!-- IMPORTANT: which version of the former library do you use? -->
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch):

## Checklist

- [ ] I have read the migration guide in the readme.
  ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
  [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ ] I checked whether a related official extension example runs on my machine.
Lines changed: 26 additions & 0 deletions
---
name: "❓ Questions & Help"
about: Post your general questions on the Hugging Face forum: https://discuss.huggingface.co/
title: ''
labels: ''
assignees: ''

---

# ❓ Questions & Help

<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models, benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
-->

## Details

<!-- Description of your issue -->

<!-- You should first ask your question on the forum, and only if
you didn't get an answer after a few days ask it here on GitHub. -->

**A link to the original question on the forum**:

<!-- Your issue will be closed if you don't fill in this part. -->
Lines changed: 74 additions & 0 deletions
# What does this PR do?

<!--
Congratulations! You've made it this far! You're not quite done yet though.

Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.

Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies that are required for this change.

Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->

<!-- Remove if not applicable -->

Fixes # (issue)


## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
      Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link
      to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
      [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
      [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?


## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @

If you know how to use git blame, that is the easiest way; otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.

Models:

- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik

Library:

- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik

Documentation: @sgugger

HF projects:

- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)

Examples:

- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh

-->

transformers/.github/conda/build.sh

Lines changed: 1 addition & 0 deletions
$PYTHON setup.py install    # conda-build exports $PYTHON; this installs the package into the build environment
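A hedged sketch of how a build script like this is usually exercised; it assumes a `meta.yaml` recipe sits alongside `build.sh` in the same directory (that file is not shown in this excerpt), and it is run from the repository root.

```bash
pip install conda-build || conda install -y conda-build
conda build .github/conda   # conda-build reads the recipe and runs build.sh with $PYTHON set
```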
