
support api_endpoint for ollama #15

Merged (7 commits) on Jun 12, 2024

Conversation

@luixiao0 (Contributor)

No description provided.

@SilasMarvin (Owner) commented Jun 10, 2024

This is really awesome, thank you!

What do you think about explicitly adding a chat_endpoint and a generate_endpoint in the config options for Ollama instead of just endpoint? We do this for every other API backend. If a user uses Ollama remotely, are we sure they will have chat at /api/chat and generate at /api/generate?

SilasMarvin added the "in review" label on Jun 10, 2024
@luixiao0 (Contributor, Author) commented Jun 11, 2024

> This is really awesome, thank you!
>
> What do you think about explicitly adding a chat_endpoint and a generate_endpoint in the config options for Ollama instead of just endpoint? We do this for every other API backend. If a user uses Ollama remotely, are we sure they will have chat at /api/chat and generate at /api/generate?

I think adding separate chat_endpoint and generate_endpoint options for Ollama isn't strictly necessary, because a running Ollama instance serves both chat and generate at /api/chat and /api/generate. But it is fine to have two config options, since that gives better consistency across providers.
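
For concreteness, a hypothetical sketch of what the two options could look like on the Ollama backend config (field names follow this thread; this is not the actual lsp-ai struct):

struct OllamaConfig {
    model: String,
    // Optional override; fallback: http://localhost:11434/api/chat
    chat_endpoint: Option<String>,
    // Optional override; fallback: http://localhost:11434/api/generate
    generate_endpoint: Option<String>,
}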

@SilasMarvin (Owner)

If you can make the suggested changes and resolve the merge conflict, I will merge it in! Thank you!

@luixiao0 (Contributor, Author) commented Jun 11, 2024

resolved, 3a351ff237f4cf

@SilasMarvin (Owner)

> resolved, 3a351ff237f4cf

Awesome, thank you. If you can implement the changes I suggested above, I will merge it.

@luixiao0 (Contributor, Author)

Sorry, I misunderstood your suggestion. The suggested changes have been committed.

@SilasMarvin (Owner)

> Sorry, I misunderstood your suggestion. The suggested changes have been committed.

Don't apologize, you're totally fine. I think we are having some issues communicating. If you scroll to the top of the page, you will see I have added some suggestions for changes. You can click commit on those directly, or you can add them yourself. Please verify they work and I will merge the PR. Thank you for your contributions!

@luixiao0 (Contributor, Author) left a comment

Checked, it works in my env; I can set it up with this config.

Generate example:
{
    "lsp-ai.serverConfiguration": {
        "memory": {
            "file_store": {}
        },
        "models": {
            "model1": {
                "type": "ollama",
                "completions_endpoint": "http://remote:11434/api/generate",
                "model": "deepseek-coder"
            }
        }
    }
}

Chat example:
{
    "lsp-ai.serverConfiguration": {
        "memory": {
            "file_store": {}
        },
        "models": {
            "model1": {
                "type": "ollama",
                "chat_endpoint": "http://localhost:11434/api/chat",
                "model": "deepseek-coder"
            }
        }
    },
    "lsp-ai.generationConfiguration": {
        "model": "model1",
        "parameters": {
            "messages": [
                {
                    "role": "system",
                    "content": "Instructions:"
                }
            ]
        }
    }
}
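
For a quick sanity check that an endpoint like the ones above is reachable, a minimal sketch (assuming the reqwest, tokio, and serde_json crates; not part of the config itself):

// Smoke test: POST a generate request directly to Ollama and print
// the raw response. Assumes reqwest with its "json" feature enabled.
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let body = serde_json::json!({
        "model": "deepseek-coder",
        "prompt": "fn main() {",
        "stream": false
    });
    let text = reqwest::Client::new()
        .post("http://localhost:11434/api/generate")
        .json(&body)
        .send()
        .await?
        .text()
        .await?;
    println!("{text}");
    Ok(())
}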

I didn't find a comment in the "Files changed" tab. Are you talking about "explicitly adding a chat_endpoint and a generate_endpoint in the config options for Ollama"?
Or do you mean that when only chat_endpoint is set, lsp-ai should use chat completions instead of generate completions?
Both cases
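
Whichever way "both cases" lands, a hypothetical sketch of the dispatch under discussion (not the actual lsp-ai code): prefer chat when a chat endpoint is configured, otherwise fall back to generate with its stock default.

// Sketch only: choose between Ollama's chat and generate endpoints
// based on which option the user configured.
struct OllamaConfig {
    chat_endpoint: Option<String>,
    generate_endpoint: Option<String>,
}

enum Endpoint<'a> {
    Chat(&'a str),
    Generate(&'a str),
}

fn pick_endpoint(config: &OllamaConfig) -> Endpoint<'_> {
    match config.chat_endpoint.as_deref() {
        // A configured chat endpoint wins.
        Some(chat) => Endpoint::Chat(chat),
        // Otherwise generate, with the stock local default.
        None => Endpoint::Generate(
            config
                .generate_endpoint
                .as_deref()
                .unwrap_or("http://localhost:11434/api/generate"),
        ),
    }
}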

@SilasMarvin (Owner)

Thanks for testing it locally. These are the suggestions I am talking about:

[Screenshot: suggested changes (2024-06-12, 12:14 PM)]

We shouldn't call it completions_endpoint as Ollama calls it generate. I liked the defaults you had before.

If you can commit those suggestions and test it, I will merge it!

@luixiao0 (Contributor, Author) commented Jun 12, 2024

Is GitHub not working as intended? I cannot see them on my side, nor in my mailbox. I will make the change shown in your screenshot.
[screenshot]

> I liked the defaults you had before.

I will revert them in the next commit.

Suggested by gpt-4o: try "submitreview"

@SilasMarvin (Owner)

> Is GitHub not working as intended? I cannot see them on my side, nor in my mailbox. I will make the change shown in your screenshot. [screenshot]
>
> > I liked the defaults you had before.
>
> I will revert them in the next commit.
>
> Suggested by gpt-4o: try "submitreview"

Oh super strange. I have attached the two other suggestions that I made.

[Screenshots: the two other suggested changes (2024-06-12, 12:38 PM)]

@luixiao0 (Contributor, Author)

> Oh super strange. I have attached the two other suggestions that I made.
> [screenshots]

I've addressed the suggested modifications, and it's ready for your feedback.

@SilasMarvin (Owner)

Looks pretty good!

You should be able to change:

.post(self
    .configuration
    .generate_endpoint
    .as_ref()
    .unwrap_or(&"http://localhost:11434/api/generate".to_string()))

To:

.post(self.configuration.generate_endpoint.as_deref().unwrap_or("http://localhost:11434/api/generate"))
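
For background, Option::as_deref turns an Option<String> into an Option<&str>, so the fallback passed to unwrap_or can be a plain string literal instead of a &String built with .to_string(). A minimal illustration:

fn main() {
    // Nothing configured: as_deref yields None and the literal
    // fallback is used, with no String allocation.
    let configured: Option<String> = None;
    let url = configured
        .as_deref()
        .unwrap_or("http://localhost:11434/api/generate");
    assert_eq!(url, "http://localhost:11434/api/generate");

    // Configured: as_deref borrows the owned String as a &str.
    let configured = Some("http://remote:11434/api/generate".to_string());
    let url = configured
        .as_deref()
        .unwrap_or("http://localhost:11434/api/generate");
    assert_eq!(url, "http://remote:11434/api/generate");
}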

@luixiao0 (Contributor, Author) commented Jun 12, 2024

My bad, it's done. I had never heard of deref before.

@SilasMarvin (Owner) commented Jun 12, 2024

No worries! Awesome work, I'm going to test locally and merge it tonight!

Thanks for contributing!
