
Dev > main; LLM response added #90

Merged
ovchynnikov merged 3 commits into main from dev on Apr 18, 2025

Conversation

ovchynnikov (Owner) commented Apr 18, 2025

Summary by CodeRabbit

  • New Features
    • Added integration with an external language model API to generate bot responses when mentioned, configurable via new environment variables.
  • Chores
    • Added a new dependency to support asynchronous HTTP requests.

USE_LLM=;
LLM_MODEL=;
LLM_API_ADDR=;
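
For reference, a minimal sketch of how these variables might be read in src/main.py; the parsing and the default values shown here are assumptions for illustration, not the repository's actual code:

import os

# Assumed parsing: any of these defaults may differ in the real src/main.py
USE_LLM = os.getenv("USE_LLM", "false").lower() == "true"  # feature flag for the LLM integration
LLM_MODEL = os.getenv("LLM_MODEL", "llama3")  # Ollama model name (example default)
LLM_API_ADDR = os.getenv("LLM_API_ADDR", "http://localhost:11434")  # Ollama API base URL (example default)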
ovchynnikov self-assigned this on Apr 18, 2025
coderabbitai (bot, Contributor) commented Apr 18, 2025

Walkthrough

The changes introduce integration with an external large language model (LLM) API for generating bot responses when the bot is mentioned. This integration is controlled by new environment variables: USE_LLM, LLM_MODEL, and LLM_API_ADDR. The aiohttp library is added as a dependency to support asynchronous HTTP requests. The message handling logic is updated to conditionally use the LLM API for responses based on the configuration. A new asynchronous function is implemented for communicating with the LLM API, including comprehensive error handling for various failure scenarios.

Changes

File(s)              | Change Summary
src/main.py          | Added integration with external LLM API for bot responses, controlled by new env variables. Introduced respond_with_llm_message async function with error handling. Updated message handler to use LLM when enabled.
src/requirements.txt | Added aiohttp==3.9.3 to support asynchronous HTTP requests for LLM API integration.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Bot
    participant LLM_API

    User->>Bot: Send message (mentions bot)
    Bot->>Bot: Check USE_LLM flag
    alt USE_LLM is enabled
        Bot->>LLM_API: POST prompt + model to LLM API
        LLM_API-->>Bot: Return generated response or error
        Bot->>User: Reply with LLM-generated text or error message
    else USE_LLM is disabled
        Bot->>User: Reply with standard bot message
    end


ovchynnikov merged commit bf657fc into main on Apr 18, 2025
6 of 7 checks passed
coderabbitai (bot, Contributor) left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/main.py (1)

448-457: Simplify nested with statements.

The static analysis tool correctly identified that these nested with statements can be simplified into a single statement with multiple contexts.

- async with aiohttp.ClientSession() as session:
-     async with session.post(
+ async with aiohttp.ClientSession() as session, session.post(
🧰 Tools
🪛 Ruff (0.8.2)

448-457: Use a single with statement with multiple contexts instead of nested with statements

(SIM117)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2a70a66 and a1212db.

📒 Files selected for processing (2)
  • src/main.py (4 hunks)
  • src/requirements.txt (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
src/main.py

448-457: Use a single with statement with multiple contexts instead of nested with statements

(SIM117)

🔇 Additional comments (4)
src/requirements.txt (1)

5-5: LGTM: aiohttp dependency added.

I see you've added the aiohttp package (v3.9.3) to support the new LLM integration functionality.

src/main.py (3)

7-7: LGTM: aiohttp import added.

This import is needed for the async HTTP requests to the LLM API.


35-37: LGTM: LLM configuration environment variables.

Good use of environment variables with sensible defaults for the LLM integration.


174-177: LGTM: Conditional LLM response handling.

Clean implementation that preserves backward compatibility by using the LLM integration only when enabled.
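
The handler change itself (src/main.py lines 174-177) is not quoted in this thread. As a rough sketch, the conditional might look like the following; the handler and fallback function names are assumptions for illustration, not the repository's actual identifiers:

async def handle_mention(update, context):
    """Hypothetical mention handler (name assumed)."""
    if USE_LLM:
        # Delegate to the LLM-backed responder added in this PR
        await respond_with_llm_message(update)
    else:
        # Fall back to the pre-existing canned reply (function name assumed)
        await update.message.reply_text(get_standard_response(update))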

Comment thread src/main.py
Comment on lines +440 to +474
async def respond_with_llm_message(update):
    """Handle LLM responses when bot is mentioned."""
    message_text = update.message.text
    # Remove bot mention from the message
    prompt = message_text.replace("@your_bot_username", "").strip()

    try:
        # Make request to Ollama API
        async with aiohttp.ClientSession() as session:
            async with session.post(
                f"{LLM_API_ADDR}/api/generate",
                json={
                    "model": LLM_MODEL,
                    "prompt": prompt,
                    "stream": False,
                    "num_predict": 200,  # Limit response to approximately 200 tokens
                },
            ) as response:
                if response.status == 200:
                    result = await response.json()
                    bot_response = result.get("response", "Sorry, I couldn't generate a response.")
                else:
                    bot_response = "Sorry, I encountered an error while processing your request."

        await update.message.reply_text(bot_response)
    except (aiohttp.ClientResponseError, aiohttp.ContentTypeError) as e:
        print(f"Response error in LLM request: {e}")
        await update.message.reply_text("Sorry, I received an invalid response from the AI service.")
    except aiohttp.ClientError as e:
        print(f"Network error in LLM request: {e}")
        await update.message.reply_text("Sorry, I couldn't connect to the AI service.")
    except ValueError as e:
        print(f"Data processing error in LLM request: {e}")
        await update.message.reply_text("Sorry, I had trouble processing the AI service response.")


🛠️ Refactor suggestion

⚠️ Potential issue

Fix several issues in the LLM integration.

There are several issues that need to be addressed in the LLM response function:

  1. The hardcoded "@your_bot_username" should be replaced with the actual bot username
  2. Error logging should use the existing logging system instead of print statements
  3. The response should maintain consistency with other functions by replying to the original message
  4. Error messages should support multiple languages like the rest of the application
async def respond_with_llm_message(update):
    """Handle LLM responses when bot is mentioned."""
    message_text = update.message.text
    # Remove bot mention from the message
-   prompt = message_text.replace("@your_bot_username", "").strip()
+   # Get the bot's username dynamically
+   bot_username = update.get_bot().username
+   prompt = message_text.replace(f"@{bot_username}", "").replace("ботяра", "").replace("bot_health", "").strip()

    try:
        # Make request to Ollama API
        async with aiohttp.ClientSession() as session, session.post(
            f"{LLM_API_ADDR}/api/generate",
            json={
                "model": LLM_MODEL,
                "prompt": prompt,
                "stream": False,
                "num_predict": 200,  # Limit response to approximately 200 tokens
            },
        ) as response:
            if response.status == 200:
                result = await response.json()
                bot_response = result.get("response", "Sorry, I couldn't generate a response.")
            else:
-               bot_response = "Sorry, I encountered an error while processing your request."
+               error(f"LLM API returned status code {response.status}")
+               bot_response = responses.get("llm_error", "Sorry, I encountered an error while processing your request.")

-       await update.message.reply_text(bot_response)
+       await update.message.reply_text(
+           bot_response,
+           reply_to_message_id=update.message.message_id,
+       )
+       info("LLM response sent to [Chat ID]: %s by the user %s", update.effective_chat.id, update.effective_user.username)
    except (aiohttp.ClientResponseError, aiohttp.ContentTypeError) as e:
-       print(f"Response error in LLM request: {e}")
-       await update.message.reply_text("Sorry, I received an invalid response from the AI service.")
+       error(f"Response error in LLM request: {e}")
+       await update.message.reply_text(
+           responses.get("llm_invalid_response", "Sorry, I received an invalid response from the AI service."),
+           reply_to_message_id=update.message.message_id,
+       )
    except aiohttp.ClientError as e:
-       print(f"Network error in LLM request: {e}")
-       await update.message.reply_text("Sorry, I couldn't connect to the AI service.")
+       error(f"Network error in LLM request: {e}")
+       await update.message.reply_text(
+           responses.get("llm_connection_error", "Sorry, I couldn't connect to the AI service."),
+           reply_to_message_id=update.message.message_id,
+       )
    except ValueError as e:
-       print(f"Data processing error in LLM request: {e}")
-       await update.message.reply_text("Sorry, I had trouble processing the AI service response.")
+       error(f"Data processing error in LLM request: {e}")
+       await update.message.reply_text(
+           responses.get("llm_processing_error", "Sorry, I had trouble processing the AI service response."),
+           reply_to_message_id=update.message.message_id,
+       )

Also, consider adding a timeout parameter to the aiohttp request to prevent hanging if the LLM service is slow to respond:

async with aiohttp.ClientSession() as session, session.post(
    f"{LLM_API_ADDR}/api/generate",
    json={
        "model": LLM_MODEL,
        "prompt": prompt,
        "stream": False,
        "num_predict": 200,  # Limit response to approximately 200 tokens
    },
+   timeout=aiohttp.ClientTimeout(total=30),  # 30 second timeout
) as response:

You'll also need to update your responses JSON files to include these new message keys for proper localization.
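
For illustration, the new entries might look like this in one of the responses files; the key names come from the suggested diff above, while the file path and surrounding structure are assumptions:

{
  "llm_error": "Sorry, I encountered an error while processing your request.",
  "llm_invalid_response": "Sorry, I received an invalid response from the AI service.",
  "llm_connection_error": "Sorry, I couldn't connect to the AI service.",
  "llm_processing_error": "Sorry, I had trouble processing the AI service response."
}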

