
Support for non-text modalities (images, speech, video) #316

Open
neubig opened this issue Sep 2, 2023 · 4 comments
Labels
enhancement New feature or request

Comments

neubig (Collaborator) commented Sep 2, 2023

Currently, prompt2model is limited to text-input, text-output tasks. The underlying framework can certainly handle different modalities, and it would be great to see prompt2model handle those types of tasks as well (such as image classification/generation, speech tasks, etc.).

But we'll probably need to think through several things, such as:

  1. How do we pick appropriate base models and datasets for each modality?
  2. What do we do about dataset generation?
  3. For non-text output, how do we adjust our evaluation?

We can start discussing the necessary steps on this issue and implement the pieces bit by bit. We'd be happy to receive contributions!

@neubig neubig added the enhancement New feature or request label Sep 2, 2023
@neubig neubig changed the title Support for other modalities Support for non-text modalities (images, speech, video) Sep 2, 2023
MahamedDucale commented Sep 2, 2023

For model selection, we could use an LLM to determine the modality from the user's prompt and then retrieve an appropriate dataset and model.
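To make this concrete, here's a minimal sketch of what the modality-detection step could look like. Everything below (the modality list, the prompt template, and the `parse_modality` helper) is hypothetical and not part of prompt2model — the actual LLM call is left out, since any provider's API would work:

```python
# Hypothetical sketch: classify a user's task prompt into a modality with an
# LLM. The modality list and helper names are illustrative only.
MODALITIES = ["text", "image", "speech", "video"]


def build_modality_prompt(task_description: str) -> str:
    """Build a one-word classification prompt to send to the LLM."""
    options = ", ".join(MODALITIES)
    return (
        f"Which input modality does this task use? "
        f"Answer with one word from: {options}.\n\n"
        f"Task: {task_description}\nModality:"
    )


def parse_modality(llm_reply: str) -> str:
    """Map a free-form LLM reply onto one of the known modalities."""
    reply = llm_reply.strip().lower()
    for modality in MODALITIES:
        if modality in reply:
            return modality
    return "text"  # conservative fallback: the existing text-only path
```

The detected modality could then key into per-modality model and dataset retrievers, so the rest of the pipeline stays unchanged.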

I think dataset generation would entail using another model-retriever module to select a generative model for the modality of interest, but only if that actually improves model performance; otherwise, only dataset retrieval would be used.

For evaluating non-text output, perhaps we could retrieve an appropriate evaluation metric from the Hugging Face evaluate library as well.
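One hedged sketch of that evaluation idea: keep a small task-to-metric table and load the chosen metric by name from the Hugging Face `evaluate` library. The table below is an illustrative guess at sensible defaults, not an established mapping in prompt2model:

```python
# Illustrative task -> metric-name table. The metric names ("accuracy",
# "mean_iou", "wer", "rouge") are real Hugging Face `evaluate` metrics,
# but the pairing with tasks here is only a suggested default.
TASK_METRICS = {
    "image-classification": "accuracy",
    "image-segmentation": "mean_iou",
    "speech-recognition": "wer",  # word error rate
    "text-generation": "rouge",
}


def metric_name_for(task: str) -> str:
    """Look up a metric name; fall back to accuracy for unknown tasks."""
    return TASK_METRICS.get(task, "accuracy")


# Loading would then be uniform across modalities, e.g.:
#   import evaluate
#   metric = evaluate.load(metric_name_for("speech-recognition"))
```

Because `evaluate.load` takes a plain metric name, the mapping itself could even be produced by the same LLM that detects the modality, with this table as a fallback.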

zhaochenyang20 (Collaborator) commented

Cool. Some HCI faculty at Tsinghua have also talked with me about a multi-modal prompt2model.

pieris98 commented Mar 5, 2024

For other modalities (e.g. visual QA, video anomaly detection, image generation, speech-to-text, text-to-speech, etc.), it would be nice, as a start, to simply propose existing datasets and/or models, since prompt2model is advertised as a better way to retrieve datasets and models than search engines or manual searching.

neubig (Collaborator, Author) commented Mar 5, 2024

Thanks @pieris98 ! In theory it should already be able to do this, but we might have to include these datasets in the dataset index. CC @ritugala who has recently re-created the dataset index and might be able to give additional guidance.
