
TradingAgents


🚀 TradingAgents is a multi-agent LLM financial trading framework that leverages large language models to simulate analyst teams, research debates, and portfolio management decisions for stock trading analysis.

Other Languages: English | 繁體中文 | 简体中文

✨ Highlights

  • Built on LangGraph for robust multi-agent orchestration
  • Multi-agent architecture: Analyst Team → Research Team → Trader → Risk Management → Portfolio Management
  • Powered by langchain.chat_models.init_chat_model; supports any provider keyed via an explicit llm_provider field plus a model name (OpenAI, Anthropic, Google Gemini, xAI (Grok), OpenRouter, Ollama, HuggingFace, LiteLLM)
  • Unified reasoning_effort knob (low / medium / high / xhigh / max) mapped per provider to native parameters (Anthropic effort, OpenAI reasoning_effort, Google thinking_level)
  • Market data powered by yfinance for OHLCV, fundamentals, technical indicators, news, and insider transactions
  • Pydantic-based configuration with strict typing and validation
  • Analysis results automatically saved to results/ with organized subfolders
  • Modern src/ layout with full type-annotated code
  • Fast dependency management via uv
  • Pre-commit suite: ruff, mdformat, codespell, mypy, uv hooks
  • Pytest with coverage; MkDocs Material documentation
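
The per-provider mapping behind the unified reasoning_effort knob can be sketched as a small lookup table. The names below (EFFORT_PARAM_BY_PROVIDER, map_reasoning_effort) are illustrative only, not the framework's actual internals:

```python
# Illustrative sketch: map one unified effort value to the provider-native
# parameter name. Not the framework's real implementation.
EFFORT_PARAM_BY_PROVIDER = {
    "anthropic": "effort",             # Anthropic effort
    "openai": "reasoning_effort",      # OpenAI reasoning_effort
    "google_genai": "thinking_level",  # Google thinking_level
}

VALID_EFFORTS = ("low", "medium", "high", "xhigh", "max")


def map_reasoning_effort(provider: str, effort: str) -> dict:
    """Return provider-specific kwargs for a unified effort setting."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"unknown effort: {effort!r}")
    param = EFFORT_PARAM_BY_PROVIDER.get(provider)
    # Providers without a native reasoning knob simply get no extra kwargs.
    return {param: effort} if param else {}
```
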

🚀 Quick Start

git clone https://github.com/Mai0313/TradingAgents.git
cd TradingAgents
make uv-install               # Install uv (only needed once)
uv sync                       # Install dependencies
cp .env.example .env          # Configure your API keys

Configure API Keys

Edit .env and set your LLM provider keys:

# LLM Providers (set the one you use)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIza...
XAI_API_KEY=...
OPENROUTER_API_KEY=...

Usage

Command line / interactive

The package ships a tradingagents console script with two subcommands:

uv run tradingagents tui                     # interactive questionary prompts
uv run tradingagents cli                     # run with all defaults
uv run tradingagents cli --ticker AAPL \
    --deep_think_llm gpt-5 \
    --quick_think_llm gpt-5-mini             # override flags
uv run tradingagents --help                  # rich-rendered top-level help
uv run tradingagents cli --help              # rich-rendered per-command flags

tradingagents tui walks you through every parameter (ticker, date, provider, models, debate rounds, analyst selection, ...) via interactive prompts. tradingagents cli runs the same flow driven entirely by command-line flags, so it composes with shell scripts and CI. Both routes stream LangGraph agent messages through Rich panels (Markdown for prose, pretty-printed JSON for tool output, truncated when payloads exceed a screenful). python -m tradingagents <subcommand> works as well.

Programmatic

from tradingagents.config import TradingAgentsConfig
from tradingagents.graph.trading_graph import TradingAgentsGraph

config = TradingAgentsConfig(
    llm_provider="openai",
    deep_think_llm="gpt-5",
    quick_think_llm="gpt-5-mini",
    max_debate_rounds=1,
    max_risk_discuss_rounds=1,
    max_recur_limit=100,
    reasoning_effort="medium",
    response_language="en-US",
)

ta = TradingAgentsGraph(debug=True, config=config)
_, decision = ta.propagate("NVDA", "2024-05-10")
print(decision)

response_language is a BCP 47 tag from the ResponseLanguage Literal (zh-TW, zh-CN, en-US, ja-JP, ko-KR, de-DE); pick the closest one to the language you want the agents to reason in.

TradingAgentsGraph.propagate also accepts an optional on_message callback (Callable[[AnyMessage], None]) that fires once per streamed LangGraph message — useful for plugging in your own renderer; the bundled CLI / TUI use this hook to drive the Rich panels.
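
A minimal custom renderer for that hook might look like the following. format_message is a hypothetical helper, and the real CLI / TUI rendering is considerably richer than this sketch:

```python
def format_message(msg) -> str:
    """Hypothetical minimal renderer for an on_message callback.

    Works with any object exposing .type and .content, as LangChain
    messages do; the truncation mirrors the bundled CLI's screenful
    limit only in spirit (the 2000-character cutoff is an assumption).
    """
    role = getattr(msg, "type", "message")
    content = getattr(msg, "content", "")
    text = content if isinstance(content, str) else str(content)
    if len(text) > 2000:
        text = text[:2000] + " ..."
    return f"[{role}] {text}"


# Usage sketch:
#   ta.propagate("NVDA", "2024-05-10",
#                on_message=lambda m: print(format_message(m)))
```
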

llm_provider is one of the langchain.chat_models.init_chat_model registry keys (openai, anthropic, google_genai, xai, openrouter, ollama, huggingface, litellm); deep_think_llm / quick_think_llm take the model name as accepted by that provider (gpt-5, claude-sonnet-4-6, gemini-3-pro-preview, grok-4, ...).

Set response_language to control the language requested in all agent prompts. Tickers without exchange suffixes are resolved automatically with Yahoo Finance Search. For Taiwan stocks, pass the numeric stock code directly, such as 2330, 2408, or 8069; explicit Yahoo Finance symbols such as 2330.TW, 8069.TWO, AAPL, and TSM are also supported.

config = TradingAgentsConfig(
    llm_provider="openai",
    deep_think_llm="gpt-5",
    quick_think_llm="gpt-5-mini",
    max_debate_rounds=1,
    max_risk_discuss_rounds=1,
    max_recur_limit=100,
    response_language="zh-TW",
)

ta = TradingAgentsGraph(config=config)
_, decision = ta.propagate("2330", "2024-05-10")
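
The suffix resolution described above, which the framework performs via Yahoo Finance Search, could be approximated offline like this. The helper name and the .TW-before-.TWO preference are assumptions for illustration:

```python
def guess_yahoo_symbol(ticker: str, known_symbols: set) -> str:
    """Hypothetical offline sketch: map a bare Taiwan stock code to a
    Yahoo Finance symbol. The real framework resolves tickers through
    Yahoo Finance Search rather than a static symbol set."""
    if not ticker.isdigit():
        return ticker  # already an explicit symbol like AAPL or 2330.TW
    for suffix in (".TW", ".TWO"):  # TWSE first, then TPEx (assumed order)
        candidate = ticker + suffix
        if candidate in known_symbols:
            return candidate
    return ticker
```
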

📁 Project Structure

src/
└── tradingagents/
    ├── agents/           # Agent implementations
    │   ├── analysts/     # Market, News, Social, Fundamentals analysts
    │   ├── managers/     # Research & Portfolio managers
    │   ├── researchers/  # Bull & Bear researchers
    │   ├── risk_mgmt/    # Risk management agents
    │   ├── trader/       # Trader agent
    │   └── utils/        # Shared agent utilities
    ├── dataflows/        # Data ingestion via yfinance
    ├── graph/            # LangGraph trading graph setup
    ├── interface/        # CLI / TUI implementations
    │   ├── cli.py        # fire-driven flag runner (run_cli)
    │   ├── tui.py        # questionary-driven interactive runner (run_tui)
    │   ├── display.py    # rich-based LangChain message renderer
    │   └── help.py       # rich-based replacement for fire's pager help
    ├── llm.py            # Chat model construction (init_chat_model wrapper + reasoning_effort mapping)
    ├── config.py         # TradingAgentsConfig schema + global singleton
    ├── __init__.py       # Top-level public API (TradingAgentsConfig, TradingAgentsGraph)
    └── __main__.py       # Console script entry point (fire dispatcher with rich help)

🤖 Agent Workflow

TradingAgents orchestrates 12 LLM agents plus 2 supporting components through a LangGraph StateGraph. Every run goes through 4 sequential phases, and the state (reports, debate transcripts, trade decisions) is persisted through a Pydantic AgentState shared across all nodes.

Phase 1 — Analyst Team (Data Collection)

Four analysts run in sequence. Each analyst has its LLM bound to a specific set of yfinance-backed @tool functions, and loops with its own ToolNode until no more tool calls are emitted. Between analysts a Msg Clear node resets the conversation history (emitting RemoveMessage + a HumanMessage("Continue") placeholder for Anthropic compatibility).

| Analyst | LLM-bound tools | Writes to state |
| --- | --- | --- |
| Market Analyst | get_stock_data, get_indicators | market_report |
| Social Media Analyst | get_news | sentiment_report |
| News Analyst | get_news, get_global_news | news_report |
| Fundamentals Analyst | get_fundamentals, get_balance_sheet, get_cashflow, get_income_statement | fundamentals_report |

Supported technical indicators (selected by the Market Analyst, up to 8 per run): close_50_sma, close_200_sma, close_10_ema, macd, macds, macdh, rsi, boll, boll_ub, boll_lb, atr, vwma.
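
As a rough illustration of one of those indicators, a simple moving average over closing prices could be computed as below. This is a plain-Python sketch; the framework actually obtains indicator values through its yfinance-backed tools:

```python
def close_sma(closes, window=50):
    """Simple moving average of closes; None until the window fills.

    Sketch of what close_50_sma represents, not the framework's code.
    """
    out = []
    running = 0.0
    for i, price in enumerate(closes):
        running += price
        if i >= window:
            running -= closes[i - window]  # drop the price leaving the window
        out.append(running / window if i >= window - 1 else None)
    return out
```
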

Phase 2 — Research Debate

  • Bull Researcher and Bear Researcher debate for max_debate_rounds rounds (default: 1 round each), taking turns based on who spoke last. Each researcher retrieves top-k BM25 matches from its own FinancialSituationMemory before arguing.
  • Termination: count >= 2 * max_debate_rounds routes the graph to Research Manager (deep-thinking LLM), which evaluates the full debate, produces the investment_plan, and populates investment_debate_state.judge_decision.
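
The turn-taking and termination rule above can be sketched as a pure routing function. The node names are hypothetical; the real graph encodes this as LangGraph conditional edges:

```python
def next_debate_node(count: int, last_speaker: str, max_debate_rounds: int) -> str:
    """Route the research debate: alternate bull/bear until each side has
    spoken max_debate_rounds times, then hand off to the Research Manager."""
    if count >= 2 * max_debate_rounds:
        return "research_manager"
    return "bear_researcher" if last_speaker == "bull" else "bull_researcher"
```
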

Phase 3 — Trader

Trader (quick-thinking LLM) consumes investment_plan plus the top-k trader_memory matches and produces trader_investment_plan. Its output must end with FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL**.

Phase 4 — Risk Control Debate

Three debaters rotate in a fixed order — Aggressive → Conservative → Neutral → Aggressive → … — for max_risk_discuss_rounds rounds (default: 1 round per stance). Termination: count >= 3 * max_risk_discuss_rounds routes to the Risk Judge (deep-thinking LLM via create_risk_manager), which revises the trader's plan and writes the final_trade_decision. A lightweight SignalProcessor LLM then extracts the canonical BUY / SELL / HOLD token from that natural-language decision.
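
A regex-based extraction in the spirit of that SignalProcessor step might look like this. The real component uses an LLM; this pure-Python fallback is only a sketch:

```python
import re


def extract_signal(decision_text: str):
    """Pull the canonical BUY / SELL / HOLD token from a final decision,
    or return None if no proposal line is found."""
    match = re.search(
        r"FINAL TRANSACTION PROPOSAL:\s*\**\s*(BUY|SELL|HOLD)\b",
        decision_text,
        flags=re.IGNORECASE,
    )
    return match.group(1).upper() if match else None
```
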

Supporting components

  • FinancialSituationMemory — BM25Okapi-backed per-agent memory (5 instances: bull, bear, trader, invest_judge, risk_manager). Purely lexical, no external embeddings API required.
  • Reflector — After the trade outcome is known, TradingAgentsGraph.reflect_and_remember(returns_losses) runs post-trade reflection against each of the 5 memories so future runs can learn from past decisions.
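
A toy version of that lexical top-k retrieval, with BM25 replaced by plain token-overlap scoring to stay dependency-free (the real FinancialSituationMemory uses BM25Okapi):

```python
def top_k_matches(query: str, memories: list, k: int = 2) -> list:
    """Toy lexical retrieval: rank stored situations by how many query
    tokens they share. A stand-in for the real BM25Okapi scoring."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        memories,
        key=lambda m: len(q_tokens & set(m.lower().split())),
        reverse=True,
    )
    return scored[:k]
```
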

Flow Diagram

START
  │
  ▼
[Market Analyst ⇄ tools_market] → Msg Clear
  │
  ▼
[Social Analyst ⇄ tools_social] → Msg Clear
  │
  ▼
[News Analyst ⇄ tools_news] → Msg Clear
  │
  ▼
[Fundamentals Analyst ⇄ tools_fundamentals] → Msg Clear
  │
  ▼
[Bull Researcher ⇄ Bear Researcher] × max_debate_rounds
  │
  ▼
Research Manager  →  Trader
                        │
                        ▼
[Aggressive → Conservative → Neutral] × max_risk_discuss_rounds
  │
  ▼
Risk Judge  →  SignalProcessor  →  END

Per-run logs are written under results/<TICKER>/: full_states_log_<TICKER>_<DATE>.json, conversation_log_<TICKER>_<DATE>.txt, and conversation_log_<TICKER>_<DATE>.json (the base path resolves from TradingAgentsConfig.results_dir, which defaults to ./results).

🤝 Contributing

For development instructions including documentation, testing, and Docker services, please see CONTRIBUTING.md.

  • Open issues/PRs
  • Follow the coding style (ruff, type hints)
  • Use Conventional Commit messages and descriptive PR titles

📄 License

MIT — see LICENSE.
