Files added Deep Researcher using LinkUp #113


Open
DarkRaiderCB wants to merge 1 commit into main

Conversation

@DarkRaiderCB (Contributor) commented Apr 18, 2025

Summary by CodeRabbit

  • New Features

    • Introduced an interactive Streamlit app for multi-agent deep research, allowing users to submit queries and receive AI-generated research outputs.
    • Added a multi-agent research workflow that automates web searching, analysis, and technical writing using AI.
    • Enabled API key configuration within the app for secure access to research tools.
  • Documentation

    • Added a comprehensive README with project overview, setup instructions, and contribution guidelines.
  • Chores

    • Added project configuration files for Python version management, dependency management, and version control exclusions.

coderabbitai bot (Contributor) commented Apr 18, 2025

Walkthrough

This update introduces the initial structure and core functionality for the "Agentic Deep Researcher" project. It adds configuration files for Python versioning, dependency management, and Git ignore rules. The main application logic is implemented in a new Streamlit interface that interacts with a multi-agent research system built using CrewAI and the LinkUp API. The agents are defined to perform web search, analysis, and technical writing, forming a sequential workflow. The README provides an overview and setup instructions, while environment variables and API keys are managed for secure configuration.

Changes

| File(s) | Change Summary |
| --- | --- |
| .gitignore | Added file to exclude Python artifacts, build outputs, and virtual environments from version control. |
| .python-version | Added file specifying Python version 3.11 for environment consistency. |
| README.md | Introduced project documentation with overview, setup instructions, component descriptions, and contribution guidelines. |
| pyproject.toml | Added project configuration with metadata and dependencies including CrewAI, LinkUp SDK, OpenAI, dotenv, Streamlit, and Streamlit CrewAI process output. |
| agents.py | Implemented multi-agent research system: defined LLM client initialization, a LinkUp search tool, three specialized agents (Web Searcher, Research Analyst, Technical Writer), their tasks, and a function to run the workflow using CrewAI. |
| app.py | Created Streamlit app: manages user input, API key configuration, chat history, and calls the multi-agent research workflow; includes UI elements for chat and error handling. |

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant StreamlitApp
    participant AgentsModule
    participant CrewAI
    participant LinkUpAPI

    User->>StreamlitApp: Submit research query
    StreamlitApp->>AgentsModule: run_research(query)
    AgentsModule->>CrewAI: Create and execute Crew (agents & tasks)
    CrewAI->>LinkUpAPI: Web Searcher performs search
    LinkUpAPI-->>CrewAI: Search results
    CrewAI->>CrewAI: Research Analyst analyzes results
    CrewAI->>CrewAI: Technical Writer drafts summary
    CrewAI-->>AgentsModule: Final output
    AgentsModule-->>StreamlitApp: Return result
    StreamlitApp-->>User: Display assistant's response
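
To make the workflow concrete, the sketch below shows how a sequential three-agent crew of this shape is typically wired with CrewAI. It is a minimal illustration, not the exact code in agents.py: the goals, backstories, and task descriptions are placeholder text, and the model/base URL simply mirror the values quoted later in this review.

```python
from crewai import LLM, Agent, Crew, Process, Task

# LLM client; the real agents.py reads OPENROUTER_API_KEY from the environment.
llm = LLM(model="openrouter/deepseek/deepseek-chat:free",
          base_url="https://openrouter.ai/api/v1")

# Three specialized agents, matching the roles named in the walkthrough.
searcher = Agent(role="Web Searcher", goal="Find relevant sources for the query",
                 backstory="Placeholder backstory", llm=llm)
analyst = Agent(role="Research Analyst", goal="Distill the search results into findings",
                backstory="Placeholder backstory", llm=llm)
writer = Agent(role="Technical Writer", goal="Draft a clear technical summary",
               backstory="Placeholder backstory", llm=llm)

# One task per agent; in a sequential process each task sees the previous task's output.
search_task = Task(description="Search the web for: {query}",
                   expected_output="Raw search results with sources", agent=searcher)
analysis_task = Task(description="Analyze the search results",
                     expected_output="Key findings", agent=analyst)
writing_task = Task(description="Write the final research summary",
                    expected_output="A well-structured report", agent=writer)

crew = Crew(agents=[searcher, analyst, writer],
            tasks=[search_task, analysis_task, writing_task],
            process=Process.sequential)

result = crew.kickoff(inputs={"query": "example research topic"})
```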

Poem

In a warren of code, deep research begins,
With agents and queries, the journey spins.
Streamlit brings chat, CrewAI the brains,
LinkUp fetches knowledge, like gentle spring rains.
Python and teamwork, dependencies set—
A rabbit’s delight in this digital net!
🐇✨


@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 5

🧹 Nitpick comments (7)
Multi-Agent Deep Researcher/README.md (3)

12-21: Setup instructions need heading level correction

The heading structure jumps from H1 (main title) to H3 (SetUp), skipping H2 level.

-### SetUp
+## SetUp
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

12-12: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)


16-16: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


16-20: Add language specifier to code block

The fenced code block is missing a language specifier, which would enable syntax highlighting.

-```
+```bash
 uv venv  # creates a virtual environment (if not yet)
 source .venv/bin/activate  # or `.venv\Scripts\activate` on Windows
 uv pip install -e .  # installs the project in editable mode
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

16-16: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


30-38: Simplify contribution section language

The contribution section uses excessive exclamation marks and could be more professional.

-Contributions are welcome! Feel free to fork this repository and submit pull requests with your improvements.
+Contributions are welcome. Please fork this repository and submit pull requests with your improvements.
🧰 Tools
🪛 LanguageTool

[style] ~38-~38: Using many exclamation marks might seem excessive (in this case: 4 exclamation marks for a text that’s 893 characters long)
Context: ... Contribution Contributions are welcome! Feel free to fork this repository and s...

(EN_EXCESSIVE_EXCLAMATION)


[style] ~38-~38: Consider using a less frequent alternative to set your writing apart from others and make it sound more professional.
Context: ...ontribution Contributions are welcome! Feel free to fork this repository and submit pull re...

(FEEL_FREE_TO_STYLE_ME)

Multi-Agent Deep Researcher/app.py (2)

65-65: Improve the chat input placeholder text

The current placeholder text references documents, but the app appears to be a general research tool rather than a document-specific search.

-if prompt := st.chat_input("Ask a question about your documents..."):
+if prompt := st.chat_input("Enter a research question or topic..."):

73-78: Enhance error handling and feedback

Consider providing more informative error messages and logging the full error details for debugging.

with st.spinner("Researching... This may take a moment..."):
    try:
        result = run_research(prompt)
        response = result
    except Exception as e:
-       response = f"An error occurred: {str(e)}"
+       error_message = str(e)
+       print(f"Research error: {error_message}")  # For server-side logging
+       response = f"I encountered an error while researching: {error_message}\n\nPlease try rephrasing your question or check your API key."
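
If print-based logging feels too bare, a hedged alternative is the standard logging module, which also records the traceback; the logger name is illustrative and the snippet assumes the same surrounding app.py context as the diff above.

```python
import logging

logger = logging.getLogger("deep_researcher")

with st.spinner("Researching... This may take a moment..."):
    try:
        response = run_research(prompt)
    except Exception:
        # logger.exception captures the full traceback for server-side debugging.
        logger.exception("Research failed for prompt: %s", prompt)
        response = ("I encountered an error while researching. "
                    "Please try rephrasing your question or check your API key.")
```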
Multi-Agent Deep Researcher/agents.py (2)

24-31: Use Enum / Literal for validated fields
depth and output_type accept only a few fixed strings; encoding them as free‑form str loses type checking and runtime validation. A small Enum (or typing.Literal on Python 3.11+) will prevent accidental typos and improve editor auto‑completion.

Example:

from enum import Enum

class SearchDepth(str, Enum):
    standard = "standard"
    deep = "deep"

class OutputType(str, Enum):
    searchResults = "searchResults"
    sourcedAnswer = "sourcedAnswer"
    structured = "structured"

class LinkUpSearchInput(BaseModel):
    query: str
    depth: SearchDepth = SearchDepth.standard
    output_type: OutputType = OutputType.searchResults

59-75: Minor: attach the single LinkUpSearchTool to the Web Searcher agent only – not to the task as well
Because only the Web Searcher uses the tool directly, you don’t need to attach it to the search_task again (CrewAI passes the agent’s tools to its tasks). Removing the duplication avoids confusion about which instance is actually invoked.
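
Concretely, that means keeping the tool on the agent and dropping the tools argument from the task. A hedged sketch, with placeholder role/goal text, assuming (as the comment states) that CrewAI passes an agent's tools to its tasks:

```python
from crewai import Agent, Task
# LinkUpSearchTool and get_llm_client are the helpers already defined in agents.py.

linkup_tool = LinkUpSearchTool()

web_searcher = Agent(
    role="Web Searcher",
    goal="Find relevant web sources for the query",
    backstory="Placeholder backstory",
    llm=get_llm_client(),
    tools=[linkup_tool],   # the single tool instance lives on the agent
)

search_task = Task(
    description="Search the web for: {query}",
    expected_output="Relevant search results with sources",
    agent=web_searcher,    # no tools= here; the agent's tools are used
)
```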

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 07f2d33 and 6ef9cd6.

⛔ Files ignored due to path filters (1)
  • Multi-Agent Deep Researcher/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • Multi-Agent Deep Researcher/.gitignore (1 hunks)
  • Multi-Agent Deep Researcher/.python-version (1 hunks)
  • Multi-Agent Deep Researcher/README.md (1 hunks)
  • Multi-Agent Deep Researcher/agents.py (1 hunks)
  • Multi-Agent Deep Researcher/app.py (1 hunks)
  • Multi-Agent Deep Researcher/pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
Multi-Agent Deep Researcher/app.py (1)
Multi-Agent Deep Researcher/agents.py (1)
  • run_research (130-137)
🪛 LanguageTool
Multi-Agent Deep Researcher/README.md

[style] ~38-~38: Using many exclamation marks might seem excessive (in this case: 4 exclamation marks for a text that’s 893 characters long)
Context: ... Contribution Contributions are welcome! Feel free to fork this repository and s...

(EN_EXCESSIVE_EXCLAMATION)


[style] ~38-~38: Consider using a less frequent alternative to set your writing apart from others and make it sound more professional.
Context: ...ontribution Contributions are welcome! Feel free to fork this repository and submit pull re...

(FEEL_FREE_TO_STYLE_ME)

🪛 markdownlint-cli2 (0.17.2)
Multi-Agent Deep Researcher/README.md

12-12: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)


16-16: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

🔇 Additional comments (4)
Multi-Agent Deep Researcher/.gitignore (1)

1-11: Standard Python .gitignore configuration ✓

The .gitignore file correctly excludes common Python-generated files and virtual environments. This is essential for keeping the repository clean and focused on source code only.

Multi-Agent Deep Researcher/.python-version (1)

1-1: Python 3.11 requirement aligns with dependencies

The specified Python version matches the requires-python = ">=3.11" in the pyproject.toml file, ensuring consistency across the project.

Multi-Agent Deep Researcher/pyproject.toml (1)

1-14: Dependency configuration looks good

The project configuration includes all required dependencies for a multi-agent research system using CrewAI, LinkUp API, and Streamlit. Version constraints are appropriately specified.

Multi-Agent Deep Researcher/README.md (1)

1-11: README provides a clear project overview

The introduction and technology stack explanation are concise and informative.

Comment on lines +31 to +37
    linkup_api_key = st.text_input(
        "Enter your Linkup API Key", type="password")
    if linkup_api_key:
        st.session_state.linkup_api_key = linkup_api_key
        # Update the environment variable
        os.environ["LINKUP_API_KEY"] = linkup_api_key
        st.success("API Key stored successfully!")

🛠️ Refactor suggestion

Validate API key before indicating success

The code stores any non-empty input as the API key without validation. Consider verifying the key format or making a test request before showing a success message.

if linkup_api_key:
    st.session_state.linkup_api_key = linkup_api_key
    # Update the environment variable
    os.environ["LINKUP_API_KEY"] = linkup_api_key
-   st.success("API Key stored successfully!")
+   # Validate the API key format (basic check)
+   if linkup_api_key.startswith("lk-") and len(linkup_api_key) > 10:
+       st.success("API Key stored successfully!")
+   else:
+       st.warning("API key format might be incorrect. Please verify your key.")
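
Prefix checks can go stale if Linkup changes its key format, so another hedged option is to validate with a tiny live call, reusing the same LinkupClient API that agents.py already uses; the helper name and the "ping" query are illustrative, and treating any exception as an invalid key is a deliberate simplification.

```python
from linkup import LinkupClient  # import path assumed from the linkup-sdk package used in agents.py


def is_linkup_key_valid(api_key: str) -> bool:
    """Best-effort check: run a minimal search and treat any failure as an invalid key."""
    try:
        client = LinkupClient(api_key=api_key)
        client.search(query="ping", depth="standard", output_type="searchResults")
        return True
    except Exception:
        return False


if linkup_api_key:
    st.session_state.linkup_api_key = linkup_api_key
    os.environ["LINKUP_API_KEY"] = linkup_api_key
    if is_linkup_key_valid(linkup_api_key):
        st.success("API Key stored successfully!")
    else:
        st.warning("The key could not be verified with Linkup. Please double-check it.")
```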

Comment on lines +1 to +83
import streamlit as st
from agents import run_research
import os

# Set up page configuration
st.set_page_config(page_title="🔍 Agentic Deep Researcher", layout="wide")

# Initialize session state variables
if "linkup_api_key" not in st.session_state:
    st.session_state.linkup_api_key = ""
if "messages" not in st.session_state:
    st.session_state.messages = []


def reset_chat():
    st.session_state.messages = []


# Sidebar: Linkup Configuration with updated logo link
with st.sidebar:
    col1, col2 = st.columns([1, 3])
    with col1:
        st.write("")
        st.image(
            "https://avatars.githubusercontent.com/u/175112039?s=200&v=4", width=65)
    with col2:
        st.header("Linkup Configuration")
        st.write("Deep Web Search")

    st.markdown("[Get your API key](https://app.linkup.so/sign-up)",
                unsafe_allow_html=True)

    linkup_api_key = st.text_input(
        "Enter your Linkup API Key", type="password")
    if linkup_api_key:
        st.session_state.linkup_api_key = linkup_api_key
        # Update the environment variable
        os.environ["LINKUP_API_KEY"] = linkup_api_key
        st.success("API Key stored successfully!")

# Main Chat Interface Header with powered by logos from original code links
col1, col2 = st.columns([6, 1])
with col1:
    st.markdown("<h2 style='color: #0066cc;'>🔍 Agentic Deep Researcher</h2>",
                unsafe_allow_html=True)
    powered_by_html = """
    <div style='display: flex; align-items: center; gap: 10px; margin-top: 5px;'>
        <span style='font-size: 20px; color: #666;'>Powered by</span>
        <img src="https://cdn.prod.website-files.com/66cf2bfc3ed15b02da0ca770/66d07240057721394308addd_Logo%20(1).svg" width="80">
        <span style='font-size: 20px; color: #666;'>and</span>
        <img src="https://framerusercontent.com/images/wLLGrlJoyqYr9WvgZwzlw91A8U.png?scale-down-to=512" width="100">
    </div>
    """
    st.markdown(powered_by_html, unsafe_allow_html=True)
with col2:
    st.button("Clear ↺", on_click=reset_chat)

# Add spacing between header and chat history
st.markdown("<div style='height: 30px;'></div>", unsafe_allow_html=True)

# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Accept user input and process the research query
if prompt := st.chat_input("Ask a question about your documents..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    if not st.session_state.linkup_api_key:
        response = "Please enter your Linkup API Key in the sidebar."
    else:
        with st.spinner("Researching... This may take a moment..."):
            try:
                result = run_research(prompt)
                response = result
            except Exception as e:
                response = f"An error occurred: {str(e)}"

    with st.chat_message("assistant"):
        st.markdown(response)
    st.session_state.messages.append(
        {"role": "assistant", "content": response})

💡 Verification agent

🧩 Analysis chain

Consider implementing environment variable file loading

Since you're using python-dotenv as a dependency, consider implementing proper .env file loading for API keys rather than relying solely on session state.

import streamlit as st
from agents import run_research
import os
+from dotenv import load_dotenv

+# Load environment variables from .env file if present
+load_dotenv()

# Set up page configuration
st.set_page_config(page_title="🔍 Agentic Deep Researcher", layout="wide")

# Initialize session state variables
+# Try to get API key from environment first
+initial_api_key = os.environ.get("LINKUP_API_KEY", "")
if "linkup_api_key" not in st.session_state:
-    st.session_state.linkup_api_key = ""
+    st.session_state.linkup_api_key = initial_api_key

This change would allow users to store their API key in a .env file for development, providing a better experience for returning users and aligning with the dependencies declared in pyproject.toml.


🏁 Script executed:

#!/bin/bash
# Check if dotenv is imported anywhere in the codebase
grep -r "from dotenv import" --include="*.py" .
# Check if .env is mentioned in any files
grep -r "\.env" --include="*.py" .

Length of output: 3539


🏁 Script executed:

#!/bin/bash
# Check if python-dotenv is listed as a dependency in pyproject.toml
grep -R "dotenv" -n pyproject.toml || echo "No dotenv entry in pyproject.toml"

# Check if python-dotenv appears in requirements.txt
if [ -f requirements.txt ]; then
  grep -R "dotenv" -n requirements.txt || echo "No dotenv entry in requirements.txt"
else
  echo "requirements.txt not found"
fi

Length of output: 252


🏁 Script executed:

#!/bin/bash
# Search for project metadata files in repo (2 levels deep)
echo "Searching for pyproject.toml, requirements.txt, Pipfile..."
fd --type f --max-depth 2 pyproject.toml || echo "No pyproject.toml found under 2 levels"
fd --type f --max-depth 2 requirements.txt || echo "No requirements.txt found under 2 levels"
fd --type f --max-depth 2 Pipfile || echo "No Pipfile found under 2 levels"

# If a pyproject.toml exists in the Multi-Agent Deep Researcher directory, check for dotenv
if [ -f "Multi-Agent Deep Researcher/pyproject.toml" ]; then
  echo "Checking for dotenv in Multi-Agent Deep Researcher/pyproject.toml..."
  grep -R "dotenv" -n "Multi-Agent Deep Researcher/pyproject.toml" || echo "No dotenv entry in that file"
fi

Length of output: 856


Load environment variables from your .env file
Since python-dotenv (≥1.1.0) is already declared in Multi-Agent Deep Researcher/pyproject.toml, add a call to load_dotenv() at the top of app.py so users can store their LINKUP_API_KEY in a .env file.

• File: Multi-Agent Deep Researcher/app.py
• Suggested changes:

 import streamlit as st
 from agents import run_research
 import os
+from dotenv import load_dotenv

+# Load variables from .env into os.environ
+load_dotenv()

 # Set up page configuration
 st.set_page_config(page_title="🔍 Agentic Deep Researcher", layout="wide")

 # Initialize session state variables
- if "linkup_api_key" not in st.session_state:
-     st.session_state.linkup_api_key = ""
+ # Prefill from environment if available
+ initial_api_key = os.environ.get("LINKUP_API_KEY", "")
+ if "linkup_api_key" not in st.session_state:
+     st.session_state.linkup_api_key = initial_api_key

This ensures returning users can simply drop their API key into a .env file without retyping it each session.


Comment on lines +130 to +137
def run_research(query: str):
    """Run the research process and return results"""
    try:
        crew = create_research_crew(query)
        result = crew.kickoff()
        return result.raw
    except Exception as e:
        return f"Error: {str(e)}"

⚠️ Potential issue

crew.kickoff() already returns the final answer – .raw may not exist
In current CrewAI versions kickoff() returns a plain str. Accessing .raw will raise AttributeError, masking the real result.

-        result = crew.kickoff()
-        return result.raw
+        return crew.kickoff()
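
If the project has to tolerate CrewAI releases on either side of this API difference (a plain string vs. an object exposing .raw), a defensive variant is possible; this is only a sketch of that compromise, not a claim about which behaviour the pinned version has.

```python
def run_research(query: str) -> str:
    """Run the research crew and normalize the result to a string."""
    try:
        crew = create_research_crew(query)
        result = crew.kickoff()
        # Works whether kickoff() returns a str or a CrewOutput-style object with .raw
        return getattr(result, "raw", None) or str(result)
    except Exception as e:
        return f"Error: {e}"
```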

Comment on lines +13 to +19
def get_llm_client():
    """Initialize and return the LLM client"""
    return LLM(
        model="openrouter/deepseek/deepseek-chat:free",
        base_url="https://openrouter.ai/api/v1",
        api_key=os.getenv("OPENROUTER_API_KEY")
    )

🛠️ Refactor suggestion

Guard against missing OPENROUTER_API_KEY and cache the LLM instance
get_llm_client() will silently pass None to LLM if the environment variable is unset, leading to a late‑failing HTTP 401 that is hard to trace. In addition, a new client is created for every call, which is wasteful.

+from functools import lru_cache
+
-def get_llm_client():
+@lru_cache(maxsize=1)
+def get_llm_client() -> LLM:
     """Initialize and return the LLM client"""
-    return LLM(
+    api_key = os.getenv("OPENROUTER_API_KEY")
+    if not api_key:
+        raise EnvironmentError("OPENROUTER_API_KEY is not set")
+
+    return LLM(
         model="openrouter/deepseek/deepseek-chat:free",
         base_url="https://openrouter.ai/api/v1",
-        api_key=os.getenv("OPENROUTER_API_KEY")
+        api_key=api_key,
     )

Comment on lines +33 to +56
class LinkUpSearchTool(BaseTool):
    name: str = "LinkUp Search"
    description: str = "Search the web for information using LinkUp and return comprehensive results"
    args_schema: Type[BaseModel] = LinkUpSearchInput

    def __init__(self):
        super().__init__()

    def _run(self, query: str, depth: str = "standard", output_type: str = "searchResults") -> str:
        """Execute LinkUp search and return results."""
        try:
            # Initialize LinkUp client with API key from environment variables
            linkup_client = LinkupClient(api_key=os.getenv("LINKUP_API_KEY"))

            # Perform search
            search_response = linkup_client.search(
                query=query,
                depth=depth,
                output_type=output_type
            )

            return str(search_response)
        except Exception as e:
            return f"Error occurred while searching: {str(e)}"

🛠️ Refactor suggestion

Improve error handling & signature of _run

  1. Returning a stringified exception hides the stack trace from CrewAI and makes automated retries impossible.
  2. If LINKUP_API_KEY is missing, the call should fail fast rather than creating a client that raises later.
  3. Consider returning the native search_response object; downstream agents can still cast to str if needed.
  4. CrewAI expects an _arun coroutine for async tools—worth adding for network I/O (see the async sketch after the diff below).
-        try:
-            # Initialize LinkUp client with API key from environment variables
-            linkup_client = LinkupClient(api_key=os.getenv("LINKUP_API_KEY"))
+        api_key = os.getenv("LINKUP_API_KEY")
+        if not api_key:
+            raise EnvironmentError("LINKUP_API_KEY is not set")
+
+        try:
+            linkup_client = LinkupClient(api_key=api_key)
             ...
-            return str(search_response)
-        except Exception as e:
-            return f"Error occurred while searching: {str(e)}"
+            return search_response
+        except Exception as e:
+            # Re‑raise so CrewAI can surface the root cause
+            raise RuntimeError("LinkUp search failed") from e
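
For point 4, a minimal async wrapper could look like the sketch below. It assumes the tool base class accepts a LangChain-style _arun coroutine, as the comment implies, and simply offloads the blocking SDK call to a worker thread:

```python
import asyncio


class LinkUpSearchTool(BaseTool):
    # ... name, description, args_schema and the synchronous _run as defined above ...

    async def _arun(self, query: str, depth: str = "standard",
                    output_type: str = "searchResults") -> str:
        # Run the blocking LinkUp call in a thread so an async crew run is not stalled.
        return await asyncio.to_thread(self._run, query, depth, output_type)
```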

@DarkRaiderCB changed the title from "Files added" to "Files added Deep Researcher using LinkUp" on Apr 18, 2025