
serverless-yoda/langchain-engineering


🎯 Project #1: Local Multi-Provider Chat Agent (Ollama-first)

Source code: /chat-agent-api

Build a simple HTTP service that:

  • Uses Ollama as the primary model (e.g., llama3), but keeps the code ready to switch providers via init_chat_model-style config.
  • Exposes single-turn Q&A (/ask) and multi-turn chat (/chat) with message history using HumanMessage and AIMessage.
  • Wraps everything in a small "agent" abstraction (no tools yet) using create_agent, so tools can be added later.

Example usage flow:

POST /chat with { 
  "session_id": "abc", 
  "message": "Tell me about Luna City" 
}
  • Service loads past messages for session_id, calls the agent, streams tokens back to the client.

  • All LLM calls go through Ollama instead of OpenAI.
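The per-session history behind POST /chat can be sketched as a small in-memory store, assuming simple dict-backed storage. HumanMessage and AIMessage below are minimal stand-ins for the LangChain classes of the same names, and SessionStore is a hypothetical helper, not code from the repo:

```python
from dataclasses import dataclass, field

# Stand-ins for LangChain's HumanMessage / AIMessage message types.
@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

@dataclass
class SessionStore:
    """In-memory message history keyed by session_id (assumption: no persistence)."""
    sessions: dict = field(default_factory=dict)

    def history(self, session_id: str) -> list:
        # Load past messages for a session, creating it on first use.
        return self.sessions.setdefault(session_id, [])

    def record_turn(self, session_id: str, user_text: str, ai_text: str) -> None:
        # Append one user/assistant exchange to the session's history.
        self.history(session_id).extend([HumanMessage(user_text), AIMessage(ai_text)])

store = SessionStore()
store.record_turn("abc", "Tell me about Luna City", "Luna City is ...")
# store.history("abc") now holds one HumanMessage/AIMessage pair
```

In the real service the handler would pass `store.history(session_id)` plus the new HumanMessage to the agent, then record the streamed reply as an AIMessage.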

🎯 Project #2: YouTube Content Researcher API

Source code: /youtube-researcher-api

Runs 100% locally on Ollama - no external APIs.

What it does:

  • /research → Analyzes video topics, suggests titles/thumbnails
  • /trending → Finds hot YouTube niches (AI/IoT/embedded)
  • /competitors → Analyzes competitor channels
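The three endpoints can be sketched as plain handlers behind a route table; a minimal sketch under the assumption of one handler per path, with placeholder bodies where the real service would call the local Ollama model (function names and return shapes are illustrative, not the repo's actual code):

```python
def research(topic: str) -> dict:
    # Would prompt the local model for title/thumbnail suggestions.
    return {"topic": topic, "titles": [], "thumbnails": []}

def trending(niche: str = "AI") -> dict:
    # Would prompt the model for hot niches (AI/IoT/embedded).
    return {"niche": niche, "ideas": []}

def competitors(channel: str) -> dict:
    # Would prompt the model to analyze a competitor channel.
    return {"channel": channel, "findings": []}

# Route table: one handler per endpoint, mirrored by the HTTP layer.
ROUTES = {"/research": research, "/trending": trending, "/competitors": competitors}

def dispatch(path: str, **kwargs) -> dict:
    handler = ROUTES.get(path)
    if handler is None:
        raise KeyError(f"unknown endpoint: {path}")
    return handler(**kwargs)
```

In the actual service these would be FastAPI-style routes; the table form just shows that each endpoint is an independent prompt pipeline over the same local model.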

About

A collection of hands-on LangChain experiments for self-study, covering chat agents, RAG pipelines, tool integration, and deployment patterns. Each project demonstrates practical AI engineering workflows using LangChain's core components, model integrations, and production-ready patterns.
