
fix: add --base_url CLI option for custom LLM endpoints #229

Open
octo-patch wants to merge 1 commit into VectifyAI:main from octo-patch:fix/issue-224-add-base-url-support

Conversation

@octo-patch

Fixes #224

Problem

Users running local LLM servers (Ollama, vLLM, LM Studio, etc.) have no way to specify a custom API base URL through the CLI. The OPENAI_API_BASE environment variable works when exported manually, but the absence of a --base_url argument makes the workflow unnecessarily awkward.

Solution

  • run_pageindex.py: Added --base_url argument that sets os.environ["OPENAI_API_BASE"] before any LLM calls are made.
  • pageindex/utils.py: Updated llm_completion and llm_acompletion to read OPENAI_API_BASE from the environment and forward it to litellm as api_base, so all LLM calls in the pipeline respect the custom endpoint.

The change is fully backward-compatible: if --base_url is not provided and OPENAI_API_BASE is not set, behaviour is identical to before.
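The CLI side of the change can be sketched as follows. This is a minimal, hypothetical reduction of run_pageindex.py's parser (the real script has more options, and the `gpt-4o` default is an assumption): the key point is that the environment variable is exported before any LLM call is made.

```python
import argparse
import os

def parse_args(argv=None):
    # Hypothetical reduced version of run_pageindex.py's argument parser.
    parser = argparse.ArgumentParser()
    parser.add_argument("--pdf_path", required=True)
    parser.add_argument("--model", default="gpt-4o")  # assumed default
    parser.add_argument("--base_url", default=None,
                        help="custom OpenAI-compatible API base URL")
    args = parser.parse_args(argv)
    # Export before any LLM call so every downstream request sees it.
    if args.base_url:
        os.environ["OPENAI_API_BASE"] = args.base_url
    return args
```

When --base_url is omitted, the environment is left untouched, which is what keeps the change backward-compatible.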

Usage

# Ollama
python run_pageindex.py --pdf_path doc.pdf \
  --model ollama/llama3 \
  --base_url http://localhost:11434

# vLLM / LM Studio or any OpenAI-compatible server
python run_pageindex.py --pdf_path doc.pdf \
  --model openai/my-model \
  --base_url http://localhost:8000/v1

Testing

Verified that the argument is parsed correctly and OPENAI_API_BASE is set before any LLM call. The api_base kwarg is only injected into litellm when the env var is present, so there is no overhead for standard OpenAI usage.
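The conditional injection described above can be illustrated with a small helper. This is a hypothetical sketch, not the actual utils.py code: the real llm_completion / llm_acompletion would forward the resulting kwargs to litellm.completion / litellm.acompletion.

```python
import os

def build_llm_kwargs(model, messages, **kwargs):
    # Hypothetical helper mirroring the conditional injection: api_base is
    # only added when OPENAI_API_BASE is set, so the default OpenAI code
    # path is completely unchanged.
    api_base = os.environ.get("OPENAI_API_BASE")
    if api_base:
        kwargs["api_base"] = api_base
    return {"model": model, "messages": messages, **kwargs}
```

With the env var unset, the returned kwargs contain no `api_base` key at all, so standard OpenAI usage pays no cost.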

…yAI#224)

Expose OPENAI_API_BASE through a --base_url CLI argument so users can
point PageIndex at Ollama, vLLM, LM Studio, or any other
OpenAI-compatible local server without manually setting environment
variables.

- run_pageindex.py: add --base_url argument that sets OPENAI_API_BASE
- utils.py: read OPENAI_API_BASE in llm_completion / llm_acompletion
  and pass as api_base to litellm, enabling custom endpoints for all
  LLM calls

Usage:
  python run_pageindex.py --pdf_path doc.pdf \
    --model ollama/llama3 \
    --base_url http://localhost:11434

@claude claude bot left a comment


Claude Code Review

This pull request is from a fork — automated review is disabled. A repository maintainer can comment @claude review to run a one-time review.

