
Conversation


@tisnik tisnik commented Aug 21, 2025

Description

A simple script for querying the LLM. It is easier to type than the equivalent curl command with all its quoting and escaping on the CLI.

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement
  • Other

Summary by CodeRabbit

  • New Features
    • Adds a lightweight CLI tool to query a local AI service: send a query and optional system prompt, receive and display the model’s response, and show measured response time.
    • Configurable endpoint URL and timeout via options or environment variable, with clear error messages on failures for easy local testing and validation.


coderabbitai bot commented Aug 21, 2025

Warning

Rate limit exceeded

@tisnik has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 22 minutes and 55 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between b9f7ad5 and ae8f979.

📒 Files selected for processing (1)
  • scripts/query_llm.py (1 hunks)

Walkthrough

Adds a new CLI Python script at scripts/query_llm.py that posts {"query","system_prompt"} to a configurable local LLM endpoint (DEFAULT_URL from LLM_URL or http://localhost:8080/v1/query/), measures elapsed time, handles HTTP/JSON/missing-field errors with specific exit codes, and prints the LLM response.

Changes

Cohort / File(s): LLM test client script — scripts/query_llm.py
Summary of Changes: New CLI script with shebang and module docstring; defines DEFAULT_URL (from the LLM_URL env var or a fallback) and main() -> int; accepts CLI options --query, --system-prompt, --url, --timeout; POSTs JSON payload {"query", "system_prompt"}; measures elapsed time; prints response and elapsed time on success; robust error handling with distinct stderr messages and exit codes for HTTP errors (exit 1), invalid JSON (exit 2), and missing "response" field (exit 3).
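
The script itself is not quoted in this thread. As a rough sketch of the client the walkthrough describes (option names, the LLM_URL fallback, and the exit codes are taken from the summary above; defaults and message wording are assumptions):

```python
#!/usr/bin/env python3
"""Sketch of a minimal CLI client for a local LLM service."""

import argparse
import json
import os
import sys
import time

import requests

# Endpoint from the LLM_URL environment variable, falling back to localhost.
DEFAULT_URL = os.environ.get("LLM_URL", "http://localhost:8080/v1/query/")


def main() -> int:
    parser = argparse.ArgumentParser(description="Send a query to a local LLM service.")
    parser.add_argument("--query", default="Say hello", help="query to send")
    parser.add_argument("--system-prompt", default="You are a helpful assistant")
    parser.add_argument("--url", default=DEFAULT_URL, help="service endpoint URL")
    parser.add_argument("--timeout", type=float, default=60.0, help="timeout in seconds")
    args = parser.parse_args()

    payload = {"query": args.query, "system_prompt": args.system_prompt}

    start = time.perf_counter()
    try:
        resp = requests.post(args.url, json=payload, timeout=args.timeout)
        resp.raise_for_status()
    except requests.RequestException as exc:
        # HTTP-level failure: report elapsed time and exit 1.
        print(f"HTTP request failed after {time.perf_counter() - start:.2f}s: {exc}",
              file=sys.stderr)
        return 1
    elapsed = time.perf_counter() - start

    try:
        body = resp.json()
    except ValueError:  # requests raises a JSONDecodeError, a ValueError subclass
        print(f"Invalid JSON in response: {resp.text[:200]}", file=sys.stderr)
        return 2

    if "response" not in body:
        print("Missing 'response' field in JSON:", file=sys.stderr)
        print(json.dumps(body, indent=2), file=sys.stderr)
        return 3

    print(body["response"])
    print(f"Response time: {elapsed:.2f} s")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Assuming that shape, a typical invocation would be `./scripts/query_llm.py --query "Hello"`, with `LLM_URL` overriding the default endpoint.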

Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Script as query_llm.py
    participant API as Local LLM API (DEFAULT_URL)
    Note over Script: Build JSON payload {query, system_prompt}<br/>start timer
    User->>Script: Run script (CLI opts)
    Script->>API: POST /v1/query/ with JSON + timeout
    alt HTTP error (non-2xx)
        API-->>Script: HTTP error
        Script-->>User: stderr failure msg (elapsed) and exit 1
    else 200 OK
        API-->>Script: 200 OK + body (text)
        Script->>Script: parse JSON
        alt invalid JSON
            Script-->>User: stderr JSON parse error + response snippet and exit 2
        else JSON parsed
            alt missing "response"
                Script-->>User: stderr missing-field + print JSON and exit 3
            else has "response"
                Script-->>User: stdout response + "Response time ..." and exit 0
            end
        end
    end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

I tapped my paws on port eight-oh-eight,
Sent a query quick—couldn’t bear to wait.
JSON hopped back, tidy and bright,
I counted the seconds, snug in the night.
Thump! said the rabbit—response in sight. 🐇



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
scripts/query_llm.py (4)

1-1: Use env-based shebang for portability.

Switching to /usr/bin/env python3 makes the script work across environments where Python may not live at /usr/bin/python3. Covered in the comprehensive diff below.


3-3: Tighten the docstring (what/why/how).

A short “what” plus a hint on usage helps future readers; consider mentioning the default URL and that it’s a quick CLI helper. Covered in the comprehensive diff below.


5-5: Prefer perf_counter() over time() for elapsed timing.

perf_counter() is monotonic and higher resolution; better for measuring request latency. Covered in the comprehensive diff below.


9-9: Minor grammar and configurability.

  • Grammar nit: “You are helpful assistant” → “You are a helpful assistant”.
  • Hardcoding the payload makes the script less useful; add argparse to pass query/system prompt, with sensible defaults.
    Covered in the comprehensive diff below.
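
The comprehensive diff these nitpicks refer to is not expanded in this thread; a compact sketch of the suggested patterns (illustrative only, not the reviewer's actual diff):

```python
#!/usr/bin/env python3  # env-based shebang: works when Python is not at /usr/bin/python3
import argparse
import time

# argparse replaces the hardcoded payload with configurable options.
parser = argparse.ArgumentParser(description="Query a local LLM service.")
parser.add_argument("--query", default="Say hello")
parser.add_argument("--system-prompt", default="You are a helpful assistant")  # grammar fixed
args = parser.parse_args()

# perf_counter() is monotonic and high-resolution, unlike time.time(),
# which makes it the better clock for measuring request latency.
start = time.perf_counter()
# ... perform the HTTP request with args.query / args.system_prompt ...
print(f"Elapsed: {time.perf_counter() - start:.3f} s")
```
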
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6ecd1d3 and ecd03b1.

📒 Files selected for processing (1)
  • scripts/query_llm.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests
🔇 Additional comments (1)
scripts/query_llm.py (1)

7-7: All set: dependency and executable bit are correctly configured

The requests library is already declared in pyproject.toml (requests>=2.32.4), and scripts/query_llm.py has the executable bit set (-rwxr-xr-x). No further changes needed.

@tisnik tisnik force-pushed the simple-script-to-ask-llm branch from ecd03b1 to b9f7ad5 on August 21, 2025 at 07:24
@tisnik (Contributor, Author)

tisnik commented Aug 21, 2025

@CodeRabbit fullreview


coderabbitai bot commented Aug 21, 2025

@tisnik I'll conduct a full review of all the changes in this PR.

✅ Actions performed

Full review triggered.

@tisnik tisnik force-pushed the simple-script-to-ask-llm branch from b9f7ad5 to ae8f979 on August 21, 2025 at 07:27
@tisnik tisnik merged commit 8fbd8f5 into lightspeed-core:main Aug 21, 2025
18 checks passed
