This project now has:
- AgentField reasoners for due diligence + swarm coordination
- Feedback loop that reweights specialist agents
- Flask frontend for a live demo
- Optional OpenAI opinions (if `OPENAI_API_KEY` is set)
- Optional Tavily news search (if `TAVILY_API_KEY` is set)
Specialist agents:
- `finance_risk_agent`
- `career_market_agent`
- `family_stability_agent`
- `linkedin_positioning_agent`
- `peer_opinion_agent`
- `job_search_agent`
- `news_agent`
- `knowledge_synth_agent`
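The README doesn't spell out how the swarm merges these specialists, but as a hedged sketch (the function name, scores, and weights below are illustrative, not the project's actual internals), a weighted combination of per-agent scores could look like:

```python
def merge_opinions(opinions: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-agent quit/stay scores (0..1) into one weighted score.

    Agents missing from `weights` default to weight 1.0.
    """
    total = sum(weights.get(name, 1.0) for name in opinions)
    return sum(score * weights.get(name, 1.0)
               for name, score in opinions.items()) / total

# Illustrative inputs: finance is weighted more heavily than the others.
opinions = {"finance_risk_agent": 0.3, "career_market_agent": 0.8,
            "family_stability_agent": 0.5}
weights = {"finance_risk_agent": 2.0, "career_market_agent": 1.0,
           "family_stability_agent": 1.0}
score = merge_opinions(opinions, weights)  # 1.9 / 4.0 = 0.475
```

Upweighting an agent (here `finance_risk_agent`) pulls the merged score toward its opinion, which is the lever the feedback loop below turns.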
Memory:
- Stored in `/Users/lorky/Documents/New project 3/swarm_memory.json`
- Feedback updates weights so future decisions are influenced by real outcomes
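As a minimal sketch of such a feedback loop (the `apply_feedback` helper and the `{"weights": ...}` JSON shape are assumptions, not the project's actual schema), each real outcome nudges one agent's weight up or down and persists the result:

```python
import json

def apply_feedback(memory_path: str, agent: str, outcome_good: bool,
                   lr: float = 0.1) -> dict:
    """Nudge one agent's weight based on a real outcome, persisting to JSON."""
    try:
        with open(memory_path) as f:
            memory = json.load(f)
    except FileNotFoundError:
        memory = {"weights": {}}  # first run: start every agent at 1.0
    w = memory["weights"].get(agent, 1.0)
    # Reward agents whose advice matched reality; penalize the rest,
    # clamped so no agent is silenced entirely.
    memory["weights"][agent] = max(0.1, w + lr if outcome_good else w - lr)
    with open(memory_path, "w") as f:
        json.dump(memory, f, indent=2)
    return memory["weights"]
```

Over repeated feedback rounds the weights drift toward the agents whose advice has actually panned out, which is the "collective learning" the demo shows.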
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
```

Set the LLM and news keys (optional) in `.env`:

```bash
OPENAI_API_KEY=...
OPENAI_MODEL=gpt-4o-mini
TAVILY_API_KEY=...
```
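Since both integrations are optional, the app can gate them on whether the keys are present. A minimal sketch (the `enabled_integrations` helper is illustrative; the variable names match the `.env` example above):

```python
import os

def enabled_integrations() -> dict[str, bool]:
    # OpenAI opinions and Tavily news search are only active
    # when their respective API keys are set in the environment.
    return {
        "openai": bool(os.environ.get("OPENAI_API_KEY")),
        "tavily": bool(os.environ.get("TAVILY_API_KEY")),
    }
```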
Terminal A:

```bash
af server
```

Terminal B:

```bash
source .venv/bin/activate
python main.py
```

Sample calls:

```bash
curl -X POST "http://localhost:8080/api/v1/execute/quit-job-due-diligence-agent.import_from_singpass" \
  -H "Content-Type: application/json" \
  -d @sample_singpass_import.json

curl -X POST "http://localhost:8080/api/v1/execute/quit-job-due-diligence-agent.recommend_with_memory" \
  -H "Content-Type: application/json" \
  -d @sample_input.json

curl -X POST "http://localhost:8080/api/v1/execute/quit-job-due-diligence-agent.submit_feedback" \
  -H "Content-Type: application/json" \
  -d @sample_feedback.json
```

Frontend:

```bash
source .venv/bin/activate
python frontend.py
```

Open the URL printed in the terminal.
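The curl calls above can also be made from Python. A stdlib-only sketch (the `call_agent` helper is illustrative, not part of the project):

```python
import json
import urllib.request

BASE = "http://localhost:8080/api/v1/execute/quit-job-due-diligence-agent"

def endpoint(method: str) -> str:
    # Full execute URL for one agent method, e.g. "recommend_with_memory".
    return f"{BASE}.{method}"

def call_agent(method: str, payload: dict) -> dict:
    # POST a JSON payload to the running af server and parse the JSON reply.
    req = urllib.request.Request(
        endpoint(method),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires the server from Terminal A to be running):
# with open("sample_input.json") as f:
#     print(call_agent("recommend_with_memory", json.load(f)))
```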
Frontend flow:
- Your Details: connect LinkedIn + Singpass, run own-agent opinion.
- Simulated Personas: paste LinkedIn URLs for boss/coworker opinions.
- Jobs + News Agents: job search and news horizon (Tavily) with opinion.
- Agentic Swarm + Memory: merges self + peers + jobs + news; memory in `swarm_memory.json`.
Manual side-investment inputs:
- `other_investments_usd`
- `expected_investment_monthly_income_usd`
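One way these two inputs can feed a finance check is a simple runway estimate. A hedged sketch (the `monthly_runway` helper and its formula are illustrative, not the project's actual calculation):

```python
def monthly_runway(savings_usd: float, monthly_burn_usd: float,
                   other_investments_usd: float = 0.0,
                   expected_investment_monthly_income_usd: float = 0.0) -> float:
    """Months you can last after quitting: liquid assets / net monthly burn."""
    liquid = savings_usd + other_investments_usd
    net_burn = monthly_burn_usd - expected_investment_monthly_income_usd
    if net_burn <= 0:
        return float("inf")  # investment income alone covers expenses
    return liquid / net_burn

# $30k savings + $10k investments, $4k/month burn offset by $1k/month income:
months = monthly_runway(30_000, 4_000, 10_000, 1_000)  # 40000 / 3000 ≈ 13.3
```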
- This is decision support, not financial/legal advice.
- For a hackathon demo, this local JSON memory loop is enough to show collective learning.