Dev > main; LLM response added #90
Conversation
Walkthrough

The changes introduce integration with an external large language model (LLM) API for generating bot responses when the bot is mentioned. The integration is controlled by new environment variables (USE_LLM, LLM_API_ADDR, and LLM_MODEL).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Bot
    participant LLM_API
    User->>Bot: Send message (mentions bot)
    Bot->>Bot: Check USE_LLM flag
    alt USE_LLM is enabled
        Bot->>LLM_API: POST prompt + model to LLM API
        LLM_API-->>Bot: Return generated response or error
        Bot->>User: Reply with LLM-generated text or error message
    else USE_LLM is disabled
        Bot->>User: Reply with standard bot message
    end
```
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/main.py (1)
448-457: Simplify nested `with` statements.

The static analysis tool correctly identified that these nested `with` statements can be simplified into a single statement with multiple contexts:

```diff
-        async with aiohttp.ClientSession() as session:
-            async with session.post(
+        async with aiohttp.ClientSession() as session, session.post(
```

🧰 Tools
🪛 Ruff (0.8.2)

448-457: Use a single `with` statement with multiple contexts instead of nested `with` statements (SIM117)
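For reference beyond the diff above, here is a minimal self-contained sketch of the combined form; the endpoint address and payload values are placeholders, not the project's actual configuration:

```python
import aiohttp


async def fetch_generation(prompt: str) -> str:
    # One `with` statement, two context managers (what Ruff SIM117 asks for);
    # `session` is already bound when `session.post(...)` is evaluated, and
    # both the session and the response are closed automatically on exit.
    async with aiohttp.ClientSession() as session, session.post(
        "http://localhost:11434/api/generate",  # placeholder address
        json={"model": "example-model", "prompt": prompt, "stream": False},
    ) as response:
        result = await response.json()
        return result.get("response", "")
```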
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/main.py (4 hunks)
- src/requirements.txt (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
src/main.py
448-457: Use a single `with` statement with multiple contexts instead of nested `with` statements
(SIM117)
🔇 Additional comments (4)
src/requirements.txt (1)
5-5: LGTM: aiohttp dependency added.

I see you've added the aiohttp package (v3.9.3) to support the new LLM integration functionality.
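Based on the version mentioned above, the added requirements line is presumably a pin along these lines (whether the project pins exactly or uses a version range is an assumption):

```text
aiohttp==3.9.3
```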
src/main.py (3)
7-7: LGTM: aiohttp import added.

This import is needed for the async HTTP requests to the LLM API.
35-37: LGTM: LLM configuration environment variables.

Good use of environment variables with sensible defaults for the LLM integration.
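For illustration, configuration like this typically reads the variables once at import time. The variable names come from this review; the exact default values in src/main.py are assumptions:

```python
import os

# Assumed reading of the new environment variables; the real defaults
# in src/main.py may differ from the ones shown here.
USE_LLM = os.getenv("USE_LLM", "false").lower() == "true"
LLM_API_ADDR = os.getenv("LLM_API_ADDR", "http://localhost:11434")
LLM_MODEL = os.getenv("LLM_MODEL", "llama3")
```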
174-177: LGTM: Conditional LLM response handling.

Clean implementation that preserves backward compatibility by using the LLM integration only when enabled.
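A hedged sketch of what such a dispatch might look like; `handle_mention` and the fallback text are hypothetical stand-ins, and only `USE_LLM` and `respond_with_llm_message` appear in the PR:

```python
async def handle_mention(update, context):
    # Route mentions through the LLM only when the feature flag is on;
    # otherwise fall back to the pre-existing canned reply, keeping the
    # old behavior intact for deployments that don't enable the LLM.
    # USE_LLM and respond_with_llm_message are defined in src/main.py.
    if USE_LLM:
        await respond_with_llm_message(update)
    else:
        await update.message.reply_text("Standard bot message")  # placeholder text
```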
```python
async def respond_with_llm_message(update):
    """Handle LLM responses when bot is mentioned."""
    message_text = update.message.text
    # Remove bot mention from the message
    prompt = message_text.replace("@your_bot_username", "").strip()

    try:
        # Make request to Ollama API
        async with aiohttp.ClientSession() as session:
            async with session.post(
                f"{LLM_API_ADDR}/api/generate",
                json={
                    "model": LLM_MODEL,
                    "prompt": prompt,
                    "stream": False,
                    "num_predict": 200,  # Limit response to approximately 200 tokens
                },
            ) as response:
                if response.status == 200:
                    result = await response.json()
                    bot_response = result.get("response", "Sorry, I couldn't generate a response.")
                else:
                    bot_response = "Sorry, I encountered an error while processing your request."

        await update.message.reply_text(bot_response)
    except (aiohttp.ClientResponseError, aiohttp.ContentTypeError) as e:
        print(f"Response error in LLM request: {e}")
        await update.message.reply_text("Sorry, I received an invalid response from the AI service.")
    except aiohttp.ClientError as e:
        print(f"Network error in LLM request: {e}")
        await update.message.reply_text("Sorry, I couldn't connect to the AI service.")
    except ValueError as e:
        print(f"Data processing error in LLM request: {e}")
        await update.message.reply_text("Sorry, I had trouble processing the AI service response.")
```
🛠️ Refactor suggestion
Fix several issues in the LLM integration.
The following issues need to be addressed in the LLM response function:
- The hardcoded "@your_bot_username" should be replaced with the actual bot username
- Error logging should use the existing logging system instead of print statements
- The response should maintain consistency with other functions by replying to the original message
- Error messages should support multiple languages like the rest of the application
```diff
 async def respond_with_llm_message(update):
     """Handle LLM responses when bot is mentioned."""
     message_text = update.message.text
     # Remove bot mention from the message
-    prompt = message_text.replace("@your_bot_username", "").strip()
+    # Get the bot's username dynamically
+    bot_username = update.get_bot().username
+    prompt = message_text.replace(f"@{bot_username}", "").replace("ботяра", "").replace("bot_health", "").strip()
     try:
         # Make request to Ollama API
         async with aiohttp.ClientSession() as session, session.post(
             f"{LLM_API_ADDR}/api/generate",
             json={
                 "model": LLM_MODEL,
                 "prompt": prompt,
                 "stream": False,
                 "num_predict": 200,  # Limit response to approximately 200 tokens
             },
         ) as response:
             if response.status == 200:
                 result = await response.json()
                 bot_response = result.get("response", "Sorry, I couldn't generate a response.")
             else:
-                bot_response = "Sorry, I encountered an error while processing your request."
+                error(f"LLM API returned status code {response.status}")
+                bot_response = responses.get("llm_error", "Sorry, I encountered an error while processing your request.")
-        await update.message.reply_text(bot_response)
+        await update.message.reply_text(
+            bot_response,
+            reply_to_message_id=update.message.message_id,
+        )
+        info("LLM response sent to [Chat ID]: %s by the user %s", update.effective_chat.id, update.effective_user.username)
     except (aiohttp.ClientResponseError, aiohttp.ContentTypeError) as e:
-        print(f"Response error in LLM request: {e}")
-        await update.message.reply_text("Sorry, I received an invalid response from the AI service.")
+        error(f"Response error in LLM request: {e}")
+        await update.message.reply_text(
+            responses.get("llm_invalid_response", "Sorry, I received an invalid response from the AI service."),
+            reply_to_message_id=update.message.message_id,
+        )
     except aiohttp.ClientError as e:
-        print(f"Network error in LLM request: {e}")
-        await update.message.reply_text("Sorry, I couldn't connect to the AI service.")
+        error(f"Network error in LLM request: {e}")
+        await update.message.reply_text(
+            responses.get("llm_connection_error", "Sorry, I couldn't connect to the AI service."),
+            reply_to_message_id=update.message.message_id,
+        )
     except ValueError as e:
-        print(f"Data processing error in LLM request: {e}")
-        await update.message.reply_text("Sorry, I had trouble processing the AI service response.")
+        error(f"Data processing error in LLM request: {e}")
+        await update.message.reply_text(
+            responses.get("llm_processing_error", "Sorry, I had trouble processing the AI service response."),
+            reply_to_message_id=update.message.message_id,
+        )
```

Also, consider adding a timeout parameter to the aiohttp request to prevent hanging if the LLM service is slow to respond:
```diff
         async with aiohttp.ClientSession() as session, session.post(
             f"{LLM_API_ADDR}/api/generate",
             json={
                 "model": LLM_MODEL,
                 "prompt": prompt,
                 "stream": False,
                 "num_predict": 200,  # Limit response to approximately 200 tokens
             },
+            timeout=aiohttp.ClientTimeout(total=30),  # 30 second timeout
         ) as response:
```

You'll also need to update your responses JSON files to include these new message keys for proper localization, for example as sketched below.
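As a hedged illustration of that last point, the file layout and surrounding structure of the responses files are assumptions; only the four keys and their English fallback texts come from the diff above:

```json
{
  "llm_error": "Sorry, I encountered an error while processing your request.",
  "llm_invalid_response": "Sorry, I received an invalid response from the AI service.",
  "llm_connection_error": "Sorry, I couldn't connect to the AI service.",
  "llm_processing_error": "Sorry, I had trouble processing the AI service response."
}
```

Each per-language responses file would carry its own translations of these values.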
🧰 Tools
🪛 Ruff (0.8.2)

448-457: Use a single `with` statement with multiple contexts instead of nested `with` statements (SIM117)