No more text2text
#1590
Conversation
Heads up, this is pure vibe-coding _pre-LLM_, i.e. I'm not sure what I'm doing but I'm still doing it, manually (though I tried to take inspiration from #457). The goal is to address https://discuss.huggingface.co/t/no-0-models-returned-by-text2text-search-filter/161546 following huggingface-internal/moon-landing#14258.
just dropping this here, anyone feel free to push on top of it x)
"Looks ok"
@@ -1,7 +1,6 @@
import type { TaskDataCustom } from "../index.js";

const taskData: TaskDataCustom = {
	canonicalId: "text2text-generation",
shouldn't we replace it with `canonicalId: "text-generation",` instead?
well no, because since those examples were created, we added dedicated pipelines for both summarization and translation, iiuc, cc @SBrandeis
or is it not how it works? ^^'
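For reference, a minimal sketch of the dedicated transformers pipelines being referred to; the task names are the ones transformers exposes, and the default model each one loads (plus the exact example inputs) are assumptions here:

```python
from transformers import pipeline

# Dedicated pipelines that now cover what the old text2text examples showed
summarizer = pipeline("summarization")
translator = pipeline("translation_en_to_fr")

# Each returns a list of dicts, e.g. [{'summary_text': ...}] / [{'translation_text': ...}]
print(summarizer(
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and the tallest structure in Paris.",
    max_length=20,
    min_length=5,
))
print(translator("I'm very happy"))
```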
ok so this canonical thing was added in #449 actually, so quite recently, so ok with `text-generation`, even though the whole thing is still confusing to me since it seems we have top-level pipelines for summarization and translation
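If that's the direction we land on, the net change in the affected task data files would presumably be a one-line swap rather than a deletion (a sketch; the exact files, e.g. the summarization and translation entries, are assumed):

```diff
-	canonicalId: "text2text-generation",
+	canonicalId: "text-generation",
```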
@@ -1,7 +1,6 @@
import type { TaskDataCustom } from "../index.js";

const taskData: TaskDataCustom = {
	canonicalId: "text2text-generation",
same here?
Nice!
nice!
[Text-to-Text generation models](https://huggingface.co/models?pipeline_tag=text2text-generation&sort=downloads) have a separate pipeline called `text2text-generation`. This pipeline takes an input containing the sentence including the task and returns the output of the accomplished task.

```python
from transformers import pipeline

text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is 42 ? context: 42 is the answer to life, the universe and everything")
[{'generated_text': 'the answer to life, the universe and everything'}]

text2text_generator("translate from English to French: I'm very happy")
[{'generated_text': 'Je suis très heureux'}]
```
This still exists in transformers, so I'm not entirely sure we should remove it
I'd suggest we keep it here but add a "historical note" comment. cc @LysandreJik @ArthurZucker for viz
remember to re-add that
switched all the links to `other` instead of `pipeline_tag` then
let's go?
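For illustration, the kind of link change meant here, i.e. pointing the model search links at the `other` filter instead of `pipeline_tag` (the exact markdown line is an assumption, modeled on the snippet quoted above):

```diff
-[Text-to-Text generation models](https://huggingface.co/models?pipeline_tag=text2text-generation&sort=downloads)
+[Text-to-Text generation models](https://huggingface.co/models?other=text2text-generation&sort=downloads)
```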
did we want to redirect https://huggingface.co/models?pipeline_tag=text2text-generation, or do we not care?
@@ -14,7 +14,7 @@ const taskData: TaskDataCustom = {
	widgetModels: [],
	youtubeId: undefined,
	/// If this is a subtask, link to the most general task ID
-	/// (eg, text2text-generation is the canonical ID of translation)
+	/// (eg, text-generation is the canonical ID of text-simplification)
👍
cc @merveenoyan for viz btw (tasks)