The model trains correctly. It is also connected to W&B.
Trace from the model card step once the model is trained:
[INFO|modelcard.py:452] 2023-09-02 23:08:32,386 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Sequence-to-sequence Language Modeling', 'type': 'text2text-generation'}, 'dataset': {'name': 'opus100', 'type': 'opus100', 'config': 'en-hi', 'split': 'validation', 'args': 'en-hi'}}
The model is pushed to the Hub.
Expected behavior
Correct task recognition and inference. Somehow the task is uploaded to the Hub as text-generation and not as a translation task.
Inference shows text-generation as well, and the model card seems to point to that too.
While searching, I visited and read the forum, but I think that discussion refers to the BLEU generation metric and not the task (if I'm understanding it well). I've also checked the Tasks docs, but I think they are a guide on how to add a task, not change one (please let me know if I should follow that path), and the Troubleshoot page, but I couldn't find anything.
Tangential note:
I'm aware that the BLEU score is 0. I tried other languages and modified some logic in the compute_metrics function, including trying a language for which BLEU computed well; however, that model was also loaded as text-generation. If keeping the experimentation up can prove some hypotheses I have about this logic and BLEU (which seem to impact languages with non-Latin alphabets), I will let you know, but I ran those experiments to test whether the task issue was somehow related to the metric.
Any help clarifying this and pointing the model to the translation task would be much appreciated.
And if some change in the script or docs comes out of this, I'd be happy to contribute.
Thanks for making transformers, for the time dedicated to this issue, and have a nice day!
SoyGema changed the title from "[Pytorch] Unexpected task example translation : Generation instead of Translation in model card and Hub" to "[Pytorch] Unexpected task example translation : text-generation instead of Translation in model card and Hub" on Sep 3, 2023.
If you want to change the task that a model is mapped to, you can do so by clicking on the Edit model card button and then selecting the desired pipeline_tag.
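Equivalently, the tag can be set directly in the model card's YAML front matter. A minimal sketch, assuming the standard Hub metadata format (the language list here is inferred from the en-hi setup in this issue):

```yaml
---
language:
- en
- hi
pipeline_tag: translation
---
```

With this in the README, the Hub widget and inference API should treat the model as a translation model rather than falling back to the inferred default.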
Why this is auto-mapped to text2text-generation when the task is specified in the script, I'm not sure. However, this tag isn't technically incorrect: T5 is an encoder-decoder model and this is a text generation task. cc @Narsil @muellerzr, do either of you know?
With regard to your questions about BLEU, that is a question best placed in our forums; we try to reserve GitHub issues for feature requests and bug reports.
The Hub infers the task automatically from the `architectures` field in config.json when it is missing from the README.
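To illustrate that inference, here is a simplified sketch of how a Hub-style service might map an `architectures` entry to a default task. This is not the Hub's actual implementation, and the mapping table is my assumption, but it shows why a T5 checkpoint lands on text2text-generation:

```python
import json

# Illustrative mapping from architecture-name suffixes to default tasks.
# The real Hub mapping is internal; this table is an assumption.
ARCHITECTURE_SUFFIX_TO_TASK = {
    "ForConditionalGeneration": "text2text-generation",
    "ForCausalLM": "text-generation",
    "ForSequenceClassification": "text-classification",
}

def infer_task(config_json: str) -> str:
    """Return a default task for the first matching architecture in config.json."""
    config = json.loads(config_json)
    for arch in config.get("architectures", []):
        for suffix, task in ARCHITECTURE_SUFFIX_TO_TASK.items():
            if arch.endswith(suffix):
                return task
    return "unknown"

# A T5 checkpoint's config lists T5ForConditionalGeneration, so the
# inferred default is text2text-generation rather than translation.
print(infer_task('{"architectures": ["T5ForConditionalGeneration"]}'))
```

Because nothing in `architectures` distinguishes translation from summarization or any other text2text use, an explicit pipeline_tag in the README is what disambiguates the task.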
Thanks for the support given in this issue. I consider it complete, as the main challenge has been addressed and some derivative questions as well. With that, and the fact that I tend to own the issues I open, I'm proceeding to close it. Feel free to reopen if necessary. Thanks so much!
System Info
Hello there!
Thanks for making the translation example with PyTorch.
The documentation is amazing and the script is very well structured!
Who can help?
@patil-suraj
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
Context
Fine-tuning an English-Hindi translation model with t5-small and the opus100 dataset.
Running the run_translation.py example from the transformers repository.
Small modification to make the dataset a little smaller for end-to-end testing.
Checked recommendations from README.md when using T5-family models: the --source_lang, --target_lang and --source_prefix flags.
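For context, the steps above correspond to an invocation roughly like the following, adapted from the translation example's README. Only the model, dataset, language pair and prefix come from this issue; the output directory and batch sizes are placeholder assumptions:

```shell
python run_translation.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --source_lang en \
    --target_lang hi \
    --source_prefix "translate English to Hindi: " \
    --dataset_name opus100 \
    --dataset_config_name en-hi \
    --output_dir ./tst-translation \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --overwrite_output_dir \
    --predict_with_generate \
    --push_to_hub
```

The --source_prefix flag is the T5-specific recommendation from the README, since T5 expects a task prefix on its inputs.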