added Deep Researcher using LinkUp #113
base: main
Conversation
Walkthrough

This update introduces the initial structure and core functionality for the "Agentic Deep Researcher" project. It adds configuration files for Python versioning, dependency management, and Git ignore rules. The main application logic is implemented in a new Streamlit interface that interacts with a multi-agent research system built using CrewAI and the LinkUp API. The agents are defined to perform web search, analysis, and technical writing, forming a sequential workflow. The README provides an overview and setup instructions, while environment variables and API keys are managed for secure configuration.
Sequence Diagram(s)

sequenceDiagram
participant User
participant StreamlitApp
participant AgentsModule
participant CrewAI
participant LinkUpAPI
User->>StreamlitApp: Submit research query
StreamlitApp->>AgentsModule: run_research(query)
AgentsModule->>CrewAI: Create and execute Crew (agents & tasks)
CrewAI->>LinkUpAPI: Web Searcher performs search
LinkUpAPI-->>CrewAI: Search results
CrewAI->>CrewAI: Research Analyst analyzes results
CrewAI->>CrewAI: Technical Writer drafts summary
CrewAI-->>AgentsModule: Final output
AgentsModule-->>StreamlitApp: Return result
StreamlitApp-->>User: Display assistant's response
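For orientation, here is a minimal sketch of how a sequential crew like the one in the diagram is typically assembled with CrewAI. The role names mirror the diagram, but the goals, prompts, and wiring are illustrative assumptions rather than the PR's actual agents.py (which also attaches the LinkUp tool and an OpenRouter LLM).

from crewai import Agent, Crew, Process, Task

def build_crew(query: str) -> Crew:
    # Three agents matching the diagram's roles (illustrative prompts)
    searcher = Agent(role="Web Searcher",
                     goal=f"Find current, relevant sources for: {query}",
                     backstory="Specialist in web search via LinkUp.")
    analyst = Agent(role="Research Analyst",
                    goal="Analyze and verify the search results.",
                    backstory="Synthesizes raw results into key findings.")
    writer = Agent(role="Technical Writer",
                   goal="Draft a clear, sourced summary.",
                   backstory="Turns analysis into readable prose.")

    # Tasks chained so each step sees the previous step's output
    search = Task(description=f"Search the web for: {query}",
                  expected_output="Raw search results with source URLs.",
                  agent=searcher)
    analyze = Task(description="Analyze the search results for the query.",
                   expected_output="Key findings with citations.",
                   agent=analyst, context=[search])
    write = Task(description="Write the final answer to the user's query.",
                 expected_output="A well-structured, sourced summary.",
                 agent=writer, context=[analyze])

    return Crew(agents=[searcher, analyst, writer],
                tasks=[search, analyze, write],
                process=Process.sequential)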
Actionable comments posted: 5
🧹 Nitpick comments (7)
Multi-Agent Deep Researcher/README.md (3)
12-21: Setup instructions need heading level correction

The heading structure jumps from H1 (main title) to H3 (SetUp), skipping the H2 level.

-### SetUp
+## SetUp

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
12-12: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3 (MD001, heading-increment)

16-16: Fenced code blocks should have a language specified
null (MD040, fenced-code-language)
16-20: Add language specifier to code block

The fenced code block is missing a language specifier, which enables syntax highlighting.

-```
+```bash
uv venv # creates a virtual environment (if not yet)
source .venv/bin/activate # or `.venv\Scripts\activate` on Windows
uv pip install -e . # installs the project in editable mode

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
16-16: Fenced code blocks should have a language specified
null (MD040, fenced-code-language)
30-38: Simplify contribution section language

The contribution section uses excessive exclamation marks and could be more professional.

-Contributions are welcome! Feel free to fork this repository and submit pull requests with your improvements.
+Contributions are welcome. Please fork this repository and submit pull requests with your improvements.

🧰 Tools
🪛 LanguageTool
[style] ~38-~38: Using many exclamation marks might seem excessive (in this case: 4 exclamation marks for a text that’s 893 characters long)
Context: ... Contribution Contributions are welcome! Feel free to fork this repository and s...
(EN_EXCESSIVE_EXCLAMATION)
[style] ~38-~38: Consider using a less frequent alternative to set your writing apart from others and make it sound more professional.
Context: ...ontribution Contributions are welcome! Feel free to fork this repository and submit pull re...
(FEEL_FREE_TO_STYLE_ME)
Multi-Agent Deep Researcher/app.py (2)
65-65: Improve the chat input placeholder text

The current placeholder text references documents, but the app appears to be a general research tool rather than a document-specific search.

-if prompt := st.chat_input("Ask a question about your documents..."):
+if prompt := st.chat_input("Enter a research question or topic..."):
73-78: Enhance error handling and feedback

Consider providing more informative error messages and logging the full error details for debugging.

with st.spinner("Researching... This may take a moment..."):
    try:
        result = run_research(prompt)
        response = result
    except Exception as e:
-       response = f"An error occurred: {str(e)}"
+       error_message = str(e)
+       print(f"Research error: {error_message}")  # For server-side logging
+       response = f"I encountered an error while researching: {error_message}\n\nPlease try rephrasing your question or check your API key."

Multi-Agent Deep Researcher/agents.py (2)
24-31: Use Enum/Literal for validated fields

depth and output_type accept only a few fixed strings; encoding them as free-form str loses type checking and runtime validation. A small Enum (or typing.Literal on Python 3.11+) will prevent accidental typos and improve editor auto-completion.

Example:

from enum import Enum

class SearchDepth(str, Enum):
    standard = "standard"
    deep = "deep"

class OutputType(str, Enum):
    searchResults = "searchResults"
    sourcedAnswer = "sourcedAnswer"
    structured = "structured"

class LinkUpSearchInput(BaseModel):
    query: str
    depth: SearchDepth = SearchDepth.standard
    output_type: OutputType = OutputType.searchResults
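For comparison, the same constraint can be expressed with typing.Literal, which Pydantic validates the same way (a sketch, not code from the PR):

from typing import Literal

from pydantic import BaseModel

class LinkUpSearchInput(BaseModel):
    query: str
    depth: Literal["standard", "deep"] = "standard"
    output_type: Literal["searchResults", "sourcedAnswer", "structured"] = "searchResults"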
59-75: Minor: share the single LinkUpSearchTool across tasks only – not per Agent

Because only the Web Searcher uses the tool directly, you don’t need to attach it to the search_task again (CrewAI passes the agent’s tools to its tasks). Removing the duplication avoids confusion about which instance is actually invoked.
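A minimal sketch of the suggested arrangement, assuming the LinkUpSearchTool class from this PR and the standard CrewAI Agent/Task API (field values are illustrative): register the tool on the agent once and leave tools off the task.

from crewai import Agent, Task

linkup_tool = LinkUpSearchTool()  # single shared instance

web_searcher = Agent(
    role="Web Searcher",
    goal="Find relevant, current sources for the user's query",
    backstory="Expert at crafting targeted web searches.",
    tools=[linkup_tool],          # the only place the tool needs to be registered
)

search_task = Task(
    description="Search the web for the user's query and collect sources.",
    expected_output="A list of relevant results with URLs.",
    agent=web_searcher,           # no tools=[...] here; CrewAI reuses the agent's tools
)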
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Multi-Agent Deep Researcher/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
Multi-Agent Deep Researcher/.gitignore (1 hunks)
Multi-Agent Deep Researcher/.python-version (1 hunks)
Multi-Agent Deep Researcher/README.md (1 hunks)
Multi-Agent Deep Researcher/agents.py (1 hunks)
Multi-Agent Deep Researcher/app.py (1 hunks)
Multi-Agent Deep Researcher/pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
Multi-Agent Deep Researcher/app.py (1)
  Multi-Agent Deep Researcher/agents.py (1)
    run_research (130-137)
🪛 LanguageTool
Multi-Agent Deep Researcher/README.md
[style] ~38-~38: Using many exclamation marks might seem excessive (in this case: 4 exclamation marks for a text that’s 893 characters long)
Context: ... Contribution Contributions are welcome! Feel free to fork this repository and s...
(EN_EXCESSIVE_EXCLAMATION)
[style] ~38-~38: Consider using a less frequent alternative to set your writing apart from others and make it sound more professional.
Context: ...ontribution Contributions are welcome! Feel free to fork this repository and submit pull re...
(FEEL_FREE_TO_STYLE_ME)
🪛 markdownlint-cli2 (0.17.2)
Multi-Agent Deep Researcher/README.md
12-12: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3
(MD001, heading-increment)
16-16: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
🔇 Additional comments (4)
Multi-Agent Deep Researcher/.gitignore (1)
1-11: Standard Python .gitignore configuration ✓

The .gitignore file correctly excludes common Python-generated files and virtual environments. This is essential for keeping the repository clean and focused on source code only.
Multi-Agent Deep Researcher/.python-version (1)

1-1: Python 3.11 requirement aligns with dependencies

The specified Python version matches the requires-python = ">=3.11" in the pyproject.toml file, ensuring consistency across the project.

Multi-Agent Deep Researcher/pyproject.toml (1)

1-14: Dependency configuration looks good

The project configuration includes all required dependencies for a multi-agent research system using CrewAI, LinkUp API, and Streamlit. Version constraints are appropriately specified.
Multi-Agent Deep Researcher/README.md (1)
1-11: README provides a clear project overview

The introduction and technology stack explanation are concise and informative.
linkup_api_key = st.text_input(
    "Enter your Linkup API Key", type="password")
if linkup_api_key:
    st.session_state.linkup_api_key = linkup_api_key
    # Update the environment variable
    os.environ["LINKUP_API_KEY"] = linkup_api_key
    st.success("API Key stored successfully!")
🛠️ Refactor suggestion
Validate API key before indicating success
The code stores any non-empty input as the API key without validation. Consider verifying the key format or making a test request before showing a success message.
if linkup_api_key:
    st.session_state.linkup_api_key = linkup_api_key
    # Update the environment variable
    os.environ["LINKUP_API_KEY"] = linkup_api_key
-   st.success("API Key stored successfully!")
+   # Validate the API key format (basic check)
+   if linkup_api_key.startswith("lk-") and len(linkup_api_key) > 10:
+       st.success("API Key stored successfully!")
+   else:
+       st.warning("API key format might be incorrect. Please verify your key.")
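If you would rather validate with a real request than a format check, one option is a tiny probe search using the same LinkupClient.search call the PR already makes; the helper name and the blanket exception handling below are assumptions, not part of the PR.

from linkup import LinkupClient

def is_valid_linkup_key(api_key: str) -> bool:
    """Best-effort check: issue a minimal search and treat any failure as invalid."""
    try:
        LinkupClient(api_key=api_key).search(
            query="ping",
            depth="standard",
            output_type="searchResults",
        )
        return True
    except Exception:
        return False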
import streamlit as st
from agents import run_research
import os

# Set up page configuration
st.set_page_config(page_title="🔍 Agentic Deep Researcher", layout="wide")

# Initialize session state variables
if "linkup_api_key" not in st.session_state:
    st.session_state.linkup_api_key = ""
if "messages" not in st.session_state:
    st.session_state.messages = []

def reset_chat():
    st.session_state.messages = []

# Sidebar: Linkup Configuration with updated logo link
with st.sidebar:
    col1, col2 = st.columns([1, 3])
    with col1:
        st.write("")
        st.image(
            "https://avatars.githubusercontent.com/u/175112039?s=200&v=4", width=65)
    with col2:
        st.header("Linkup Configuration")
        st.write("Deep Web Search")

    st.markdown("[Get your API key](https://app.linkup.so/sign-up)",
                unsafe_allow_html=True)

    linkup_api_key = st.text_input(
        "Enter your Linkup API Key", type="password")
    if linkup_api_key:
        st.session_state.linkup_api_key = linkup_api_key
        # Update the environment variable
        os.environ["LINKUP_API_KEY"] = linkup_api_key
        st.success("API Key stored successfully!")

# Main Chat Interface Header with powered by logos from original code links
col1, col2 = st.columns([6, 1])
with col1:
    st.markdown("<h2 style='color: #0066cc;'>🔍 Agentic Deep Researcher</h2>",
                unsafe_allow_html=True)
    powered_by_html = """
    <div style='display: flex; align-items: center; gap: 10px; margin-top: 5px;'>
        <span style='font-size: 20px; color: #666;'>Powered by</span>
        <img src="https://cdn.prod.website-files.com/66cf2bfc3ed15b02da0ca770/66d07240057721394308addd_Logo%20(1).svg" width="80">
        <span style='font-size: 20px; color: #666;'>and</span>
        <img src="https://framerusercontent.com/images/wLLGrlJoyqYr9WvgZwzlw91A8U.png?scale-down-to=512" width="100">
    </div>
    """
    st.markdown(powered_by_html, unsafe_allow_html=True)
with col2:
    st.button("Clear ↺", on_click=reset_chat)

# Add spacing between header and chat history
st.markdown("<div style='height: 30px;'></div>", unsafe_allow_html=True)

# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Accept user input and process the research query
if prompt := st.chat_input("Ask a question about your documents..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    if not st.session_state.linkup_api_key:
        response = "Please enter your Linkup API Key in the sidebar."
    else:
        with st.spinner("Researching... This may take a moment..."):
            try:
                result = run_research(prompt)
                response = result
            except Exception as e:
                response = f"An error occurred: {str(e)}"

    with st.chat_message("assistant"):
        st.markdown(response)
    st.session_state.messages.append(
        {"role": "assistant", "content": response})
💡 Verification agent
🧩 Analysis chain
Consider implementing environment variable file loading
Since you're using python-dotenv as a dependency, consider implementing proper .env file loading for API keys rather than relying solely on session state.
import streamlit as st
from agents import run_research
import os
+from dotenv import load_dotenv
+# Load environment variables from .env file if present
+load_dotenv()
# Set up page configuration
st.set_page_config(page_title="🔍 Agentic Deep Researcher", layout="wide")
# Initialize session state variables
+# Try to get API key from environment first
+initial_api_key = os.environ.get("LINKUP_API_KEY", "")
if "linkup_api_key" not in st.session_state:
-    st.session_state.linkup_api_key = ""
+    st.session_state.linkup_api_key = initial_api_key
This change would allow users to store their API key in a .env file for development, providing a better experience for returning users and aligning with the dependencies declared in pyproject.toml.
🏁 Script executed:
#!/bin/bash
# Check if dotenv is imported anywhere in the codebase
grep -r "from dotenv import" --include="*.py" .
# Check if .env is mentioned in any files
grep -r "\.env" --include="*.py" .
Length of output: 3539
🏁 Script executed:
#!/bin/bash
# Check if python-dotenv is listed as a dependency in pyproject.toml
grep -R "dotenv" -n pyproject.toml || echo "No dotenv entry in pyproject.toml"
# Check if python-dotenv appears in requirements.txt
if [ -f requirements.txt ]; then
grep -R "dotenv" -n requirements.txt || echo "No dotenv entry in requirements.txt"
else
echo "requirements.txt not found"
fi
Length of output: 252
🏁 Script executed:
#!/bin/bash
# Search for project metadata files in repo (2 levels deep)
echo "Searching for pyproject.toml, requirements.txt, Pipfile..."
fd --type f --max-depth 2 pyproject.toml || echo "No pyproject.toml found under 2 levels"
fd --type f --max-depth 2 requirements.txt || echo "No requirements.txt found under 2 levels"
fd --type f --max-depth 2 Pipfile || echo "No Pipfile found under 2 levels"
# If a pyproject.toml exists in the Multi-Agent Deep Researcher directory, check for dotenv
if [ -f "Multi-Agent Deep Researcher/pyproject.toml" ]; then
echo "Checking for dotenv in Multi-Agent Deep Researcher/pyproject.toml..."
grep -R "dotenv" -n "Multi-Agent Deep Researcher/pyproject.toml" || echo "No dotenv entry in that file"
fi
Length of output: 856
Load environment variables from your .env file

Since python-dotenv (≥1.1.0) is already declared in Multi-Agent Deep Researcher/pyproject.toml, add a call to load_dotenv() at the top of app.py so users can store their LINKUP_API_KEY in a .env file.
• File: Multi-Agent Deep Researcher/app.py
• Suggested changes:
import streamlit as st
from agents import run_research
import os
+from dotenv import load_dotenv
+# Load variables from .env into os.environ
+load_dotenv()
# Set up page configuration
st.set_page_config(page_title="🔍 Agentic Deep Researcher", layout="wide")
# Initialize session state variables
- if "linkup_api_key" not in st.session_state:
- st.session_state.linkup_api_key = ""
+ # Prefill from environment if available
+ initial_api_key = os.environ.get("LINKUP_API_KEY", "")
+ if "linkup_api_key" not in st.session_state:
+ st.session_state.linkup_api_key = initial_api_key
This ensures returning users can simply drop their API key into a .env file without retyping it each session.
def run_research(query: str):
    """Run the research process and return results"""
    try:
        crew = create_research_crew(query)
        result = crew.kickoff()
        return result.raw
    except Exception as e:
        return f"Error: {str(e)}"
crew.kickoff() already returns the final answer – .raw may not exist

In current CrewAI versions kickoff() returns a plain str. Accessing .raw will raise AttributeError, masking the real result.
-        result = crew.kickoff()
-        return result.raw
+        return crew.kickoff()
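If you want to tolerate both behaviors (versions whose kickoff() returns a CrewOutput with .raw as well as versions that return a plain string), a hedged middle ground is to normalize the result; the version difference is an assumption here, not a confirmed API guarantee.

def run_research(query: str) -> str:
    """Run the research crew and normalize the result to a string."""
    try:
        crew = create_research_crew(query)  # factory already defined in this PR's agents.py
        result = crew.kickoff()
        # CrewOutput exposes .raw; fall back to str() for plain-string returns.
        return getattr(result, "raw", None) or str(result)
    except Exception as e:
        return f"Error: {str(e)}"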
def get_llm_client():
    """Initialize and return the LLM client"""
    return LLM(
        model="openrouter/deepseek/deepseek-chat:free",
        base_url="https://openrouter.ai/api/v1",
        api_key=os.getenv("OPENROUTER_API_KEY")
    )
🛠️ Refactor suggestion
Guard against missing OPENROUTER_API_KEY and cache the LLM instance

get_llm_client() will silently pass None to LLM if the environment variable is unset, leading to a late‑failing HTTP 401 that is hard to trace. In addition, a new client is created for every call, which is wasteful.
+from functools import lru_cache
+
-def get_llm_client():
+@lru_cache(maxsize=1)
+def get_llm_client() -> LLM:
     """Initialize and return the LLM client"""
-    return LLM(
+    api_key = os.getenv("OPENROUTER_API_KEY")
+    if not api_key:
+        raise EnvironmentError("OPENROUTER_API_KEY is not set")
+
+    return LLM(
         model="openrouter/deepseek/deepseek-chat:free",
         base_url="https://openrouter.ai/api/v1",
-        api_key=os.getenv("OPENROUTER_API_KEY")
+        api_key=api_key,
     )
class LinkUpSearchTool(BaseTool):
    name: str = "LinkUp Search"
    description: str = "Search the web for information using LinkUp and return comprehensive results"
    args_schema: Type[BaseModel] = LinkUpSearchInput

    def __init__(self):
        super().__init__()

    def _run(self, query: str, depth: str = "standard", output_type: str = "searchResults") -> str:
        """Execute LinkUp search and return results."""
        try:
            # Initialize LinkUp client with API key from environment variables
            linkup_client = LinkupClient(api_key=os.getenv("LINKUP_API_KEY"))

            # Perform search
            search_response = linkup_client.search(
                query=query,
                depth=depth,
                output_type=output_type
            )

            return str(search_response)
        except Exception as e:
            return f"Error occurred while searching: {str(e)}"
🛠️ Refactor suggestion
Improve error handling & signature of _run
- Returning a stringified exception hides the stack trace from CrewAI and makes automated retries impossible.
- If LINKUP_API_KEY is missing, the call should fail fast rather than creating a client that raises later.
- Consider returning the native search_response object; downstream agents can still cast to str if needed.
- CrewAI expects an _arun coroutine for async tools—worth adding for network I/O (a sketch follows the diff below).
-        try:
-            # Initialize LinkUp client with API key from environment variables
-            linkup_client = LinkupClient(api_key=os.getenv("LINKUP_API_KEY"))
+        api_key = os.getenv("LINKUP_API_KEY")
+        if not api_key:
+            raise EnvironmentError("LINKUP_API_KEY is not set")
+
+        try:
+            linkup_client = LinkupClient(api_key=api_key)
             ...
-            return str(search_response)
-        except Exception as e:
-            return f"Error occurred while searching: {str(e)}"
+            return search_response
+        except Exception as e:
+            # Re‑raise so CrewAI can surface the root cause
+            raise RuntimeError("LinkUp search failed") from e
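For the async point, a minimal sketch of an _arun coroutine that simply delegates the blocking search to a worker thread via asyncio.to_thread; it does not assume the LinkUp SDK ships an async client, and it belongs inside the existing LinkUpSearchTool class.

import asyncio

# Inside LinkUpSearchTool, alongside the synchronous _run defined in this PR:
async def _arun(self, query: str, depth: str = "standard",
                output_type: str = "searchResults") -> str:
    """Async entry point: run the blocking LinkUp search off the event loop."""
    return await asyncio.to_thread(self._run, query, depth, output_type)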
Summary by CodeRabbit
New Features
Documentation
Chores