
Feature: Llava executor #609

Merged · 11 commits merged into SciSharp:master on Mar 29, 2024
Conversation

SignalRT (Collaborator)

This is just a preview; the code is not clean or finished, but I think it is enough to discuss the implementation.

Introduction to the example

  • Changes to the Interactive Executor: an additional constructor that enables support for vision models.
  • An example that uses the Interactive Executor to create and initialize the llava model.
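The two points above can be sketched roughly as follows. This is an illustrative, hedged sketch, not the exact PR code: the file paths are placeholders, and the vision-aware constructor shown is the one discussed in this PR, whose final signature may differ.

```csharp
using LLama;
using LLama.Common;

// Hypothetical paths; replace with real model files.
var modelPath = "llava-model.gguf";
var mmprojPath = "mmproj-model.gguf"; // multimodal projection weights

var parameters = new ModelParams(modelPath);
using var weights = LLamaWeights.LoadFromFile(parameters);
using var clip = LLavaWeights.LoadFromFile(mmprojPath);
using var context = weights.CreateContext(parameters);

// The additional constructor described above: an InteractiveExecutor
// that also takes the llava (clip) weights to enable vision support.
var executor = new InteractiveExecutor(context, clip);
```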

This is just a raw capture of the example working:

[Llava demo capture]

Things to talk about:

  • The interface to the executors. I just created some properties in ILLavaExecutor (I know some of them are duplicated), basically to identify whether it is a vision model and to hold the current image. This was the easiest approach to get an example working. I would like your suggestions on how to integrate this into the ILLamaExecutor interface. In particular, I have not reviewed the implications for the Chat or Semantic Kernel integrations, which would be needed to support this kind of model in all the layers.
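As a minimal sketch of the interface question raised above (the property names here are illustrative assumptions, not the final LLamaSharp API), the executor-side additions might look like:

```csharp
using LLama.Abstractions;

// Hypothetical sketch: a vision-aware executor interface layered on top
// of ILLamaExecutor. The duplicate-property concern mentioned above is
// visible here: IsMultiModal and the image state could arguably live on
// ILLamaExecutor itself rather than on a separate interface.
public interface ILLavaExecutor : ILLamaExecutor
{
    // Whether the loaded model supports image input.
    bool IsMultiModal { get; }

    // The image (raw bytes) to feed alongside the next prompt.
    byte[]? Image { get; set; }
}
```

Whether these members belong on a separate ILLavaExecutor or directly on ILLamaExecutor is exactly the design choice left open for discussion here.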

@AsakusaRinne (Collaborator)

LGTM. Please convert it from a draft to a formal PR if it is ready to merge.

@AsakusaRinne AsakusaRinne added this to the v0.11.0 milestone Mar 23, 2024
@SignalRT (Collaborator, Author)

I will resolve the current conflicts today and convert the draft to a formal PR.

@SignalRT (Collaborator, Author)

The latest changes are:

  • Changed the example to render a "preview" of the image in the console, to give an idea of the file that was loaded.
  • Allowed image files to be specified by path in the console (in the example).
  • In theory, more than one file can be handled.

Note: the example has only been tested with llava-v1.6-mistral. The prompt may need to change if a different model is used.

This is an execution of the example:

[Llava demo capture]

@SignalRT SignalRT marked this pull request as ready for review March 26, 2024 22:37
@SignalRT SignalRT mentioned this pull request Mar 26, 2024
@AsakusaRinne AsakusaRinne merged commit 156f369 into SciSharp:master Mar 29, 2024
3 checks passed
@SignalRT SignalRT deleted the LlavaExecutor branch April 4, 2024 19:16