
[Feature]: On-Demand Ollama Service Initialization #2328

Closed
tibrezus opened this issue Mar 9, 2024 · 2 comments
Labels: enhancement (New feature or request), Stale

Comments

tibrezus commented Mar 9, 2024

The Feature

Overview

Currently, integrating Ollama requires manually starting the Ollama service by running the ollama run llama2 command after uncommenting the necessary configuration in the .env file (i.e., OLLAMA_API_BASE_URL). This approach keeps the Ollama service running in the background, potentially consuming resources even when it is not in use.

Suggestion for Improvement

I propose enhancing this integration by adopting a design similar to Langchain's, where the client checks whether the Ollama service is running and, if not, initializes it on demand. This would streamline the user experience by removing the manual step of starting the service beforehand, and it would ensure that Ollama runs only when needed, optimizing resource utilization.
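
As a rough illustration of the proposed behavior, the sketch below probes the Ollama endpoint and, if it is unreachable, launches ollama serve and waits for it to come up. This is a minimal sketch under stated assumptions, not an existing API: the ensure_ollama_running name, the default endpoint, and the timeout are all illustrative.

    import subprocess
    import time

    import requests  # assumed available; used only for the health probe

    OLLAMA_API_BASE_URL = "http://localhost:11434"  # Ollama's default endpoint

    def ensure_ollama_running(timeout: float = 15.0) -> None:
        """Start the local Ollama server if it is not already reachable."""
        try:
            requests.get(OLLAMA_API_BASE_URL, timeout=1)
            return  # service already up, nothing to do
        except requests.exceptions.ConnectionError:
            pass

        # "ollama serve" starts Ollama's HTTP API server; detach it so it
        # keeps running after this process exits.
        subprocess.Popen(
            ["ollama", "serve"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )

        # Poll until the server answers or the timeout expires.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                requests.get(OLLAMA_API_BASE_URL, timeout=1)
                return
            except requests.exceptions.ConnectionError:
                time.sleep(0.5)
        raise RuntimeError("Ollama did not become reachable within the timeout")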

Benefits

  1. Improved User Experience: Automating service initiation simplifies setup for users, making it easier to get started with Ollama.
  2. Resource Efficiency: By only running Ollama when necessary, we minimize idle resource consumption on the user's system.
  3. Consistency with Best Practices: Adopting a resource-as-a-service model aligns with modern software design principles, offering scalability and efficiency.

Implementation Considerations

  • Investigate how Langchain detects and initiates the Ollama service on demand.
  • Ensure that on-demand service initialization does not significantly delay requests to Ollama; a sketch of one mitigation follows this list.
  • Update documentation to reflect the new automated process, including any new environment variables or configuration options.
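
On the latency point above, one way to keep the startup check off the per-request path is to run it once per process and cache the result. A minimal sketch, reusing the hypothetical ensure_ollama_running helper from the earlier snippet; functools.lru_cache is an illustrative choice here, and a real implementation would need to invalidate the cache if the server later dies:

    import functools

    @functools.lru_cache(maxsize=1)
    def ollama_ready() -> bool:
        # Probes (and, if needed, starts) Ollama only on the first call;
        # every later call returns the cached True immediately.
        ensure_ollama_running()
        return True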

Motivation, pitch

I believe this enhancement will significantly benefit users by providing a smoother setup process and more efficient use of resources. Looking forward to the team's thoughts on this proposal.

Twitter / LinkedIn details

No response

tibrezus added the enhancement label on Mar 9, 2024

dosubot bot commented Mar 9, 2024

It seems like you've got this under control; if you want help or have specific questions, let me know what I can do for you!

About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

github-actions bot commented Jun 7, 2024

Thanks for your contributions, we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.

github-actions bot added the Stale label on Jun 7, 2024
github-actions bot closed this as not planned on Jun 12, 2024