21 changes: 21 additions & 0 deletions ai/gen-ai-agents/crewai-oci-integration/LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Luigi Saetta

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
107 changes: 107 additions & 0 deletions ai/gen-ai-agents/crewai-oci-integration/README.md
@@ -0,0 +1,107 @@
# CrewAI ↔ OCI Generative AI Integration

This repository provides examples and configuration guidelines for integrating **[CrewAI](https://github.com/joaomdmoura/crewAI)** with **Oracle Cloud Infrastructure (OCI) Generative AI** services.
The goal is to demonstrate how CrewAI agents can seamlessly leverage OCI-hosted models through the **LiteLLM gateway**.

Reviewed: 31.10.2025

---

## 🔐 Security Configuration

Before running the demos, you must configure access credentials for OCI.

In these examples, we use a **locally stored key pair** for authentication.
Ensure your local OCI configuration (`~/.oci/config` and private key) is correctly set up and accessible to the Python SDK.
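For reference, a standard `~/.oci/config` has the following shape (all values below are placeholders to replace with your own):

```
[DEFAULT]
user=ocid1.user.oc1..<your-user-ocid>
fingerprint=<your-api-key-fingerprint>
tenancy=ocid1.tenancy.oc1..<your-tenancy-ocid>
region=us-chicago-1
key_file=~/.oci/oci_api_key.pem
```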

To start the **LiteLLM gateway**, you need to create and properly configure a **config.yml** file. Use the [template](./config_template.yml) as a starting point.

In addition, your tenancy must be **enabled** to use the OCI Generative AI service. If you haven't used OCI GenAI yet, ask your tenancy administrator to set up the **required policies**.
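As a reference, such a policy typically looks like the statement below (the group and compartment names are placeholders):

```
allow group <your-genai-group> to use generative-ai-family in compartment <your-compartment>
```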

---

## 🧩 Demos Included

- [Simple CrewAI Agent](./simple_test_crewai_agent.py) — basic CrewAI agent interacting with an LLM through OCI
- [OCI Consumption Report](./crew_agent_mcp02.py) — CrewAI agent that generates a tenant consumption report via an MCP server
- *(More demos to be added soon)*

---

## 📦 Dependencies

The project relies on the following main packages:

| Dependency | Purpose |
|-------------|----------|
| **CrewAI** | Framework for creating multi-agent workflows |
| **OCI Python SDK** | Access OCI services programmatically |
| **LiteLLM (Gateway)** | OpenAI-compatible proxy for accessing OCI Generative AI models |

To connect CrewAI to OCI models, we use a **LiteLLM gateway**, which exposes OCI GenAI via an **OpenAI-compatible** REST API.
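For example, once the gateway is running (see the setup below), you can test the endpoint with a plain `curl` call. This sketch assumes the default port (4000) and no proxy master key, in which case any bearer token is accepted; `grok4-oci` is one of the model aliases defined in the config template:

```bash
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-local-any" \
  -d '{"model": "grok4-oci", "messages": [{"role": "user", "content": "Hello!"}]}'
```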

---

## ⚙️ Environment Setup

1. **Create a Conda environment**
```bash
conda create -n crewai python=3.11
```

2. **Activate** the environment
```bash
conda activate crewai
```

3. **Install** the required **packages**
```bash
pip install -U oci litellm "litellm[proxy]" crewai
```

4. **Run** the **LiteLLM gateway**

Start the LiteLLM gateway using your configuration file (`config.yml`):
```bash
./start_gateway.sh
```
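The script is a thin wrapper around the LiteLLM proxy CLI; a minimal sketch of what it likely contains, assuming the default port:

```bash
#!/bin/bash
# Start the LiteLLM proxy with the OCI configuration (sketch; the actual script may differ)
litellm --config config.yml --port 4000
```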

Make sure the gateway starts successfully and is listening on the configured port (e.g., http://localhost:4000/v1).
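A quick check is to list the models exposed by the proxy:

```bash
curl http://localhost:4000/v1/models -H "Authorization: Bearer sk-local-any"
```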

## 🧠 Test the Integration

Run the sample CrewAI agent to verify that CrewAI can connect to OCI through LiteLLM:

```bash
python simple_test_crewai_agent.py
```

If the setup is correct, you should see the agent’s output using an OCI model.
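The key pattern, used by all the demos in this repo, is to point CrewAI's `LLM` class at the local gateway instead of a remote endpoint. A minimal sketch (the actual `simple_test_crewai_agent.py` may differ):

```python
from crewai import Agent, Task, Crew, LLM

# Point CrewAI at the local LiteLLM gateway (OpenAI-compatible)
llm = LLM(
    model="grok4-oci",  # alias defined in config.yml
    base_url="http://localhost:4000/v1",
    api_key="sk-local-any",  # any value works if no master key is set
)

agent = Agent(
    role="Assistant",
    goal="Answer questions clearly and concisely",
    backstory="A helpful assistant backed by an OCI-hosted model",
    llm=llm,
)

task = Task(
    description="Say hello and summarize what you can do.",
    expected_output="A short greeting.",
    agent=agent,
)

result = Crew(agents=[agent], tasks=[task]).kickoff()
print(result)
```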

## 🔌 Integrate Agents with MCP Servers
Install this additional package:

```bash
pip install 'crewai-tools[mcp]'
```

You can test the **MCP** integration using the [OCI Consumption Report](./crew_agent_mcp02.py) demo, which generates a report
of the consumption in your tenancy (top 5 compartments, over 4 weeks).

To get this demo up and running:
* download the code for the MCP server from [here](https://github.com/oracle-devrel/technology-engineering/blob/main/ai/gen-ai-agents/mcp-oci-integration/mcp_consumption.py)
* start the MCP server on a free port (for example, 9500)
* register the server URL in the [source](./crew_agent_mcp02.py), in this section:
```python
server_params = {
    "url": "http://localhost:9500/mcp",
    "transport": "streamable-http"
}
```

If you don't want to secure the communication with the MCP server (with JWT), set
```python
ENABLE_JWT_TOKEN = False
```
in the MCP server's `config.py` file.
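With the gateway and the MCP server running, launch the demo; as the code below shows, the report is saved under a `reports/` directory with a timestamped filename:

```bash
python crew_agent_mcp02.py
```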

36 changes: 36 additions & 0 deletions ai/gen-ai-agents/crewai-oci-integration/config_template.yml
@@ -0,0 +1,36 @@
# config.yaml for litellm with OCI Grok models
litellm_settings:
  drop_params: true
  # drop unsupported params instead of 500 errors
  additional_drop_params: ["max_retries"]

# Common OCI connection parameters
common_oci: &common_oci
  provider: oci
  oci_region: us-chicago-1
  oci_serving_mode: ON_DEMAND
  supports_tool_calls: true
  oci_user: your-oci-user-ocid
  oci_fingerprint: your-oci-api-key-fingerprint
  oci_tenancy: your-oci-tenancy-ocid
  oci_compartment_id: your-oci-compartment-ocid
  oci_key_file: /path/to/your/oci_api_key.pem
  api_key: key4321


# List of models
model_list:
  - model_name: grok4-oci
    litellm_params:
      <<: *common_oci # merge common OCI params
      model: oci/xai.grok-4

  - model_name: grok4-fast-oci
    litellm_params:
      <<: *common_oci
      model: oci/xai.grok-4-fast-reasoning

general_settings:
  telemetry: false
  proxy_logging: false
  allow_model_alias: true
58 changes: 58 additions & 0 deletions ai/gen-ai-agents/crewai-oci-integration/crew_agent_mcp01.py
@@ -0,0 +1,58 @@
"""
CrewAI agent with MCP

This one is doing Deep research using internet search tools via MCP server.

see:
https://docs.crewai.com/en/mcp/overview
https://docs.crewai.com/en/mcp/multiple-servers
"""
import os
from crewai import Agent, Task, Crew, LLM
from crewai_tools import MCPServerAdapter

# Disable telemetry, tracing, and logging
os.environ["CREWAI_LOGGING_ENABLED"] = "false"
os.environ["CREWAI_TELEMETRY_ENABLED"] = "false"
os.environ["CREWAI_TRACING_ENABLED"] = "false"

llm = LLM(
model="grok4-fast-oci",
# LiteLLM proxy endpoint
base_url="http://localhost:4000/v1",
api_key="sk-local-any",
temperature=0.2,
max_tokens=4000,
)

server_params = {
"url": "http://localhost:8500/mcp",
"transport": "streamable-http"
}

# Create agent with MCP tools
with MCPServerAdapter(server_params, connect_timeout=60) as mcp_tools:
print(f"Available tools: {[tool.name for tool in mcp_tools]}")

research_agent = Agent(
role="Research Analyst",
goal="Find and analyze information using advanced search tools",
backstory="Expert researcher with access to multiple data sources",
llm=llm,
tools=mcp_tools,
verbose=True
)

# Create task
research_task = Task(
description="Research the latest developments in AI agent frameworks",
expected_output="Comprehensive research report with citations",
agent=research_agent
)

# Create and run crew
crew = Crew(agents=[research_agent], tasks=[research_task])

result = crew.kickoff()

print(result)
79 changes: 79 additions & 0 deletions ai/gen-ai-agents/crewai-oci-integration/crew_agent_mcp02.py
@@ -0,0 +1,79 @@
"""
CrewAI agent with MCP

This one is analyzing tenant consumption via MCP server.

see:
https://docs.crewai.com/en/mcp/overview
https://docs.crewai.com/en/mcp/multiple-servers
"""
import os
from datetime import datetime
from crewai import Agent, Task, Crew, LLM
from crewai_tools import MCPServerAdapter

# Disable telemetry, tracing, and logging
os.environ["CREWAI_LOGGING_ENABLED"] = "false"
os.environ["CREWAI_TELEMETRY_ENABLED"] = "false"
os.environ["CREWAI_TRACING_ENABLED"] = "false"

llm = LLM(
model="grok4-oci",
# LiteLLM proxy endpoint
base_url="http://localhost:4000/v1",
api_key="sk-local-any",
temperature=0.,
max_tokens=4000,
)

# OCI consumption
server_params = {
"url": "http://localhost:9500/mcp",
"transport": "streamable-http"
}

# Create agent with MCP tools
with MCPServerAdapter(server_params, connect_timeout=60) as mcp_tools:
print(f"Available tools: {[tool.name for tool in mcp_tools]}")

research_agent = Agent(
role="OCI Consumption Analyst",
goal="Find and analyze information about OCI tenant consumption.",
backstory="Expert analyst with access to multiple data sources",
llm=llm,
tools=mcp_tools,
max_iter=30,
max_retry_limit=5,
verbose=True
)

# Create task
research_task = Task(
description="Identify the top 5 compartments by consumption (amount) for the OCI tenant "
"in the weeks of the month of september 2025, analyze the trends and provide insights on usage patterns."
"Analyze fully the top 5 compartments. Use only the amount, not the quantity.",
expected_output="Comprehensive report with data-backed insights.",
agent=research_agent
)

# Create and run crew
crew = Crew(agents=[research_agent], tasks=[research_task])

result = crew.kickoff()

print(result)

# --- Save the result to a Markdown file ---
# Create an output directory if it doesn’t exist
output_dir = "reports"
os.makedirs(output_dir, exist_ok=True)

# Use timestamped filename for clarity
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
output_path = os.path.join(output_dir, f"oci_consumption_report_{timestamp}.md")

# Write the result
with open(output_path, "w", encoding="utf-8") as f:
f.write(str(result))

print(f"\n✅ Report saved successfully to: {output_path}")