This repository contains five advanced AI projects demonstrating different approaches to building intelligent conversational systems, ranging from simple tool integration to sophisticated multi-agent orchestration and retrieval-augmented generation (RAG) systems.
| Feature | Career Agent | Sidekick Assistant | Dataset Generator | Personal Knowledge Worker | RAG Insurance LLM |
|---|---|---|---|---|---|
| Primary Use | Career chatbot | Research & automation | Data generation | Knowledge base Q&A | Insurance knowledge Q&A |
| Interface | Gradio web UI | Gradio web UI | CLI (Command-line) | Gradio web UI | Gradio web UI |
| Architecture | Simple tool integration | Multi-agent with evaluator | Iterative with validation | RAG with image processing | RAG with vector store |
| Key Capability | Answer questions + record leads | Browse web, execute code, plan trips | Generate synthetic datasets | Index personal knowledge + chat | Query insurance documents |
| User Input | Natural language chat | Natural language chat | Structured prompts | Natural language chat | Natural language chat |
| Output | Conversational responses | Task completion + artifacts | JSON datasets (50 records) | Chat responses with context | Answers from knowledge base |
| Knowledge Source | LinkedIn PDF | Web/Wikipedia/Code | Generated | MHT files + images | Markdown documents |
| Feedback Loop | None | Evaluator retries | Quality-based regeneration | Conversation memory | Conversation memory |
| Deployment | HuggingFace Spaces | Cloud/Docker | Local/Scheduled script | Local/Docker | Local/Docker |
| Complexity | ⭐ Beginner | ⭐⭐⭐ Advanced | ⭐⭐ Intermediate | ⭐⭐ Intermediate | ⭐ Beginner |
- Project 1: Career Conversation Agent (ChatWithPushNotifications)
- Project 2: Sidekick Multi-Agent Assistant
- Project 3: Synthetic Dataset Generator
- Project 4: Personal Knowledge Worker
- Project 5: RAG Insurance LLM
- Installation & Setup
- Architecture Patterns
- Deployment
A specialized Gradio-based chatbot that impersonates a professional (powered by your LinkedIn profile and summary). The agent:
- Answers questions about your background, skills, and experience
- Records user details when people express interest in connecting
- Tracks unanswered questions for follow-up
- Sends push notifications for each user interaction
- Uses OpenAI tools to intelligently manage conversations
✅ Azure OpenAI Integration - Uses GPT-4o with custom authentication
✅ Tool Integration - Two custom tools:
- record_user_details: Captures interested users' email and notes
- record_unknown_question: Records questions you can't answer
✅ Push Notifications - Integrates with Pushover API for real-time alerts
✅ PDF Resume Parsing - Reads LinkedIn profile as PDF
✅ Simple Web Interface - Built with Gradio, easy to chat with
✅ Deployable - Ready for HuggingFace Spaces deployment
📁 1_foundations/
├── ChatWithPushNotifications.ipynb (Main notebook)
├── app.py (Deployment script)
├── me/
│   ├── linkedin.pdf (Your LinkedIn profile)
│   └── summary.txt (Your professional summary)
└── requirements.txt
# Navigate to project
cd 1_foundations
# Install dependencies (from root)
cd ..
uv sync
cd 1_foundations
# Run the notebook
jupyter notebook ChatWithPushNotifications.ipynb
# Or run the app directly
python app.py

Create a .env file with:
OPENAI_API_KEY=sk-...
AUTOX_API_KEY=... (if using Azure)
PUSHOVER_USER=...
PUSHOVER_TOKEN=...
NTNET_USERNAME=...
User Question
  ↓
LLM with Tools (GPT-4o)
  ↓
├─ If has answer → Return response
├─ If user interested → Call record_user_details tool
└─ If question unanswered → Call record_unknown_question tool
  ↓
Push Notification Sent
  ↓
Response Displayed
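Under the hood this is a standard OpenAI tool-calling loop: the model either answers directly or requests tool calls, the tools run, their results go back to the model, and a Pushover notification fires along the way. The sketch below illustrates that loop with the plain OpenAI client; the actual notebook routes through Azure OpenAI with custom authentication, and the push helper plus the exact tool signatures are illustrative assumptions rather than the notebook's code.

```python
# Minimal sketch of the tool-call loop, assuming the standard OpenAI chat-completions
# API; the notebook's Azure setup and exact tool schemas may differ.
import json, os, requests
from openai import OpenAI

def push(message: str) -> None:
    """Send a Pushover notification (PUSHOVER_TOKEN / PUSHOVER_USER come from .env)."""
    requests.post("https://api.pushover.net/1/messages.json",
                  data={"token": os.getenv("PUSHOVER_TOKEN"),
                        "user": os.getenv("PUSHOVER_USER"),
                        "message": message})

def record_user_details(email: str, name: str = "", notes: str = "") -> dict:
    push(f"New lead: {name} ({email}) - {notes}")
    return {"recorded": "ok"}

def record_unknown_question(question: str) -> dict:
    push(f"Couldn't answer: {question}")
    return {"recorded": "ok"}

TOOL_FUNCS = {"record_user_details": record_user_details,
              "record_unknown_question": record_unknown_question}

def chat_turn(client: OpenAI, messages: list, tools: list) -> str:
    """Keep calling the model until it stops requesting tools, then return its reply."""
    while True:
        response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content
        messages.append(message)
        for call in message.tool_calls:
            args = json.loads(call.function.arguments)
            result = TOOL_FUNCS[call.function.name](**args)
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": json.dumps(result)})
```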
uv tool install 'huggingface_hub[cli]'
hf auth login
cd 1_foundations
uv run gradio deploy

Follow the prompts to:
- Name it "career_conversation"
- Specify app.py as the entry point
- Choose CPU-basic hardware
- Provide your secrets (API keys, Pushover credentials)
A sophisticated LangGraph-based AI agent that can:
- Research information using web search and Wikipedia
- Browse websites using Playwright with color-coded location markers
- Plan trips with interactive Google Maps
- Execute Python code for data analysis
- Manage files in a sandboxed directory
- Provide feedback loops - agents improve through evaluator feedback
- Maintain conversation memory with persistent state
✅ Multi-Agent Pattern - Worker, Tools, and Evaluator nodes
✅ Feedback Loops - Agents learn and improve from evaluator feedback
✅ 7 Built-in Tools:
- Playwright browser automation (navigate, extract text, click, get links)
- Web search (Google Serper)
- Wikipedia queries
- Python code execution (REPL)
- File management (read/write/create)
- Push notifications
- Trip planning with Google Maps (NEW!)
✅ Structured Outputs - Pydantic models for type-safe responses
✅ Conversation Threading - MemorySaver for persistent state
✅ Gradio Web UI - Beautiful interface with real-time updates
✅ Async/Await - Non-blocking I/O for responsive UI
📁 4_langgraph/
├── app.py (Gradio launcher)
├── sidekick.py (Core agent logic)
├── sidekick_tools.py (Tool definitions - includes Google Maps)
├── sandbox/ (Working directory for file operations)
│   └── trip_map.html (Generated trip maps)
└── pyproject.toml (Dependencies)
# Navigate to project
cd 4_langgraph
# Install dependencies (from root)
cd ..
uv sync
cd 4_langgraph
# Run the Sidekick
python app.py

Create a .env file with:
AUTOX_API_KEY=...
NTNET_USERNAME=...
PUSHOVER_USER=...
PUSHOVER_TOKEN=...
START
  ↓
WORKER NODE (LLM with tools)
  ↓
├─ Has tool_calls? → YES → TOOLS NODE
│        ↓
│   Execute tool
│        ↓
│   Back to WORKER
│
└─ No tool_calls? → YES → EVALUATOR NODE
         ↓
   Success criteria met?
         ↓
   ├─ YES → END ✅
   └─ NO → Back to WORKER (retry with feedback)
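A rough sketch of how such a worker/tools/evaluator graph can be wired in LangGraph is shown below. It uses a Pydantic model for the evaluator's structured output and MemorySaver for conversation threading; the node names, state fields, and placeholder tool are assumptions for illustration and will not match sidekick.py line for line.

```python
# Rough sketch of a worker/tools/evaluator graph; node names, state fields, and the
# placeholder tool are illustrative - sidekick.py differs in detail.
from typing import Annotated, List, TypedDict
from pydantic import BaseModel, Field
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    messages: Annotated[List, add_messages]
    success_criteria: str
    success_criteria_met: bool
    feedback: str

class EvaluatorOutput(BaseModel):
    criteria_met: bool = Field(description="Did the last answer satisfy the success criteria?")
    feedback: str = Field(description="What the worker should improve on the next attempt")

@tool
def count_words(text: str) -> int:
    """Count words in a text (stand-in for the real search/browser/file tools)."""
    return len(text.split())

tools = [count_words]
worker_llm = ChatOpenAI(model="gpt-4o").bind_tools(tools)
evaluator_llm = ChatOpenAI(model="gpt-4o").with_structured_output(EvaluatorOutput)

def worker(state: State) -> dict:
    return {"messages": [worker_llm.invoke(state["messages"])]}

def evaluator(state: State) -> dict:
    verdict = evaluator_llm.invoke(
        f"Criteria: {state['success_criteria']}\nAnswer: {state['messages'][-1].content}")
    update = {"success_criteria_met": verdict.criteria_met, "feedback": verdict.feedback}
    if not verdict.criteria_met:
        update["messages"] = [("user", f"Evaluator feedback: {verdict.feedback}")]
    return update

def after_worker(state: State) -> str:
    return "tools" if getattr(state["messages"][-1], "tool_calls", None) else "evaluator"

def after_evaluator(state: State) -> str:
    return "done" if state["success_criteria_met"] else "retry"

builder = StateGraph(State)
builder.add_node("worker", worker)
builder.add_node("tools", ToolNode(tools))
builder.add_node("evaluator", evaluator)
builder.add_edge(START, "worker")
builder.add_conditional_edges("worker", after_worker, {"tools": "tools", "evaluator": "evaluator"})
builder.add_edge("tools", "worker")
builder.add_conditional_edges("evaluator", after_evaluator, {"done": END, "retry": "worker"})
graph = builder.compile(checkpointer=MemorySaver())
```

Because of the MemorySaver checkpointer, each chat session invokes the graph with a thread id (e.g. config={"configurable": {"thread_id": "abc"}}), which is what gives the Sidekick persistent conversation state across turns.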
User: "Find Bitcoin price"
β
Worker: Uses search tool
β
Evaluator: Checks criteria met
β
Returns: Price information β
User: "Write a summary"
β
Worker: Asks "Summary of what?"
β
Evaluator: Needs user input
β
Returns: Question to user β
User: "Create file with 100+ word summary"
β
Worker: Creates file (20 words)
β
Evaluator: REJECTS - too short, provides feedback
β
Worker: Reads feedback, creates longer version (250 words)
β
Evaluator: ACCEPTS β
The Sidekick now includes a color-coded trip planner:
# Sidekick generates locations:
[
{"name": "Hotel", "address": "Times Square, NYC", "color": "blue"},
{"name": "Breakfast", "address": "Joe's Pizza, NYC", "color": "green"},
{"name": "Museum", "address": "MoMA, NYC", "color": "red"}
]
# Tool creates: sandbox/trip_map.html
# Interactive map with markers, routes, and zoom

Try it: Ask Sidekick: "Plan a 5-stop NYC day trip and create a map"
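A plausible shape for the trip-map tool, using geopy for geocoding and folium for the interactive HTML map (both appear in the technology stack below). The function name, route line, and error handling are illustrative guesses, not the exact sidekick_tools.py implementation.

```python
# Illustrative sketch of a trip-map tool using geopy + folium; the real
# implementation in sidekick_tools.py may differ in naming and details.
import folium
from geopy.geocoders import Nominatim

def create_trip_map(locations: list[dict], path: str = "sandbox/trip_map.html") -> str:
    """Geocode each stop and save an interactive HTML map with colored markers."""
    geolocator = Nominatim(user_agent="sidekick-trip-planner")
    points = []
    for stop in locations:
        place = geolocator.geocode(stop["address"])
        if place is None:
            continue  # skip addresses that can't be geocoded
        points.append((stop, (place.latitude, place.longitude)))

    if not points:
        return "No locations could be geocoded."

    trip_map = folium.Map(location=points[0][1], zoom_start=13)
    for stop, coords in points:
        folium.Marker(coords, popup=stop["name"],
                      icon=folium.Icon(color=stop.get("color", "blue"))).add_to(trip_map)
    # Draw a simple route line connecting the stops in order
    folium.PolyLine([coords for _, coords in points]).add_to(trip_map)
    trip_map.save(path)
    return f"Map saved to {path}"
```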
A Super Step = one iteration through the graph nodes:
Super Step 1: User → Worker → Router → Decision
Super Step 2: Tools execute → Worker → Router → Decision
Super Step 3: Evaluator checks → Decision (End or retry)
Each user interaction = separate super steps. The loop continues until success or user input needed.
The Sidekick uses async/await for non-blocking I/O:
# Without async (BLOCKS UI):
result = graph.invoke(state) # UI freezes for 10+ seconds
# With async (UI RESPONSIVE):
result = await graph.ainvoke(state) # UI stays responsive
# Other coroutines can run while waiting

Everything runs on the same thread; execution is just multiplexed!
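Because Gradio accepts async callback functions, the compiled graph can simply be awaited inside the chat handler. A minimal illustration (not the actual app.py wiring):

```python
# Minimal illustration (not the actual app.py): Gradio accepts async callbacks,
# so the agent can be awaited without freezing the UI. `graph` is assumed to be
# a compiled LangGraph agent such as the earlier worker/evaluator sketch.
import gradio as gr

async def respond(message, history):
    config = {"configurable": {"thread_id": "demo"}}   # reuse one thread so memory persists
    result = await graph.ainvoke({"messages": [("user", message)]}, config=config)
    return result["messages"][-1].content

gr.ChatInterface(respond).launch()
```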
An intelligent LangGraph-based system that generates high-quality synthetic datasets for testing, training, and development purposes. The agent:
- Generates realistic data based on your use case description
- Validates quality automatically using AI-powered evaluation
- Provides feedback loops for iterative improvement
- Allows manual editing of specific records
- Exports to JSON with 50 customizable records
- Ensures consistency across data types and structure
✅ AI-Powered Generation - Uses GPT-4o to create contextually appropriate data
✅ Automatic Quality Evaluation - Built-in evaluator checks 6 quality criteria:
- Structure Consistency
- Data Type Compliance
- Realistic Values
- Logical Coherence
- Data Variety
- Use Case Alignment
✅ Iterative Improvement - Regenerates data based on validation feedback
✅ Interactive CLI - User-friendly command-line interface
✅ Manual Override - Edit specific records or accept with quality warnings
✅ Structured Graph Flow - LangGraph nodes for each stage
✅ JSON Export - Clean, ready-to-use output format
📁 4_langgraph/
└── DatasetGenerator.py (Complete standalone application)
# Navigate to project
cd 4_langgraph
# Install dependencies (from root)
cd ..
uv sync
cd 4_langgraph
# Run the Dataset Generator
python DatasetGenerator.py

Uses the same .env configuration as Sidekick:
AUTOX_API_KEY=...
NTNET_USERNAME=...
User Input (Use Case + Example)
  ↓
GENERATE DATASET NODE (50 records)
  ↓
EVALUATE DATASET NODE
  ↓
├─ Quality Score ≥ 70? → YES → DISPLAY & EDIT
│        ↓
│   User accepts? → EXPORT ✅
│        ↓
│   User edits? → Back to DISPLAY
│
└─ Quality Score < 70? → HANDLE FEEDBACK
         ↓
   Regenerate or Accept?
         ↓
   Back to GENERATE
START
  ↓
INPUT COLLECTION NODE
  ↓
GENERATE DATASET NODE
  ↓
EVALUATE DATASET NODE
  ↓
├─ PASS (score ≥ 70) → DISPLAY & EDIT NODE
│        ↓
│   ├─ Accept → EXPORT NODE → END ✅
│   ├─ Regenerate → Back to GENERATE
│   ├─ Edit → Stay in DISPLAY
│   └─ Feedback → Back to GENERATE
│
└─ FAIL (score < 70) → HANDLE EVALUATION FEEDBACK NODE
         ↓
   ├─ Regenerate → Back to GENERATE
   ├─ Accept anyway → DISPLAY & EDIT
   └─ Restart → Back to INPUT COLLECTION
1. Provide Use Case:
Describe your use case: Customer data for e-commerce platform
2. Provide Example Structure:
Provide example data format:
{
"customer_id": "CUST-001",
"name": "John Doe",
"email": "john@example.com",
"age": 35,
"total_purchases": 1250.50,
"membership_tier": "Gold"
}
3. AI Generates 50 Records:
Generated 50 synthetic records successfully.
4. Automatic Validation:
=== DATASET VALIDATION REPORT ===
Overall Score: 92/100
Recommendation: PASS
Compliance Checks:
- Structure Compliant: True
- Data Types Correct: True
- Realistic Data: True
- Logically Coherent: True
- Sufficient Variety: True
- Use Case Aligned: True
5. Review and Export:
Preview of generated data (first 3 records):
Record 1: {...}
Record 2: {...}
Record 3: {...}
Options:
1. Accept dataset and export
2. Regenerate dataset
3. Edit specific records
4. Provide feedback for improvement
6. Dataset Saved:
Dataset saved to 'synthetic_dataset.json'
The AI evaluator automatically checks:
| Criterion | Description |
|---|---|
| Structure Consistency | All records follow the example structure |
| Data Type Compliance | Values match expected types (string, number, date) |
| Realistic Values | Data is plausible and contextually appropriate |
| Logical Coherence | Related fields are logically consistent |
| Data Variety | Sufficient diversity, not repetitive |
| Use Case Alignment | Data fits the described use case |
Passing Score: 70/100 + "PASS" recommendation
The system supports multiple improvement paths:
Automatic Regeneration:
- Triggered when quality score < 70
- AI receives detailed validation report
- Addresses specific issues in next iteration
User Feedback:
- Provide custom instructions
- Example: "Make ages more diverse" or "Add international customers"
- AI incorporates feedback in regeneration
Manual Editing:
- Edit individual records by number
- Update specific fields with JSON input
- Useful for final tweaks
Perfect for:
- Testing: Generate test data for applications
- ML Training: Create training datasets
- API Mocking: Populate mock API responses
- Prototyping: Quickly create realistic demo data
- Database Seeding: Generate initial database records
- Documentation: Create example data for docs
State Management:
class DatasetGeneratorState(TypedDict):
messages: list[BaseMessage] # Conversation history
use_case: str # User's use case
example_data: str # Example structure
generated_dataset: list[dict] # Current dataset
dataset_count: int # Number of records
feedback: str # User feedback
success_criteria_met: bool # Quality check status
validation_report: str # Evaluation details
retry_count: int # Generation attempts

6 Graph Nodes:
- Input Collection - Gather use case and example
- Generate Dataset - Create 50 synthetic records
- Evaluate Dataset - AI quality assessment
- Handle Evaluation Feedback - Decision on failed validation
- Display & Edit - Preview, edit, or regenerate
- Export Dataset - Save to JSON file
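Putting these together, the node wiring implied by the flow diagrams above might look roughly like the sketch below; the node functions are placeholders and the routing names are assumptions, so DatasetGenerator.py will differ in detail.

```python
# Sketch of the node wiring implied by the flow above, using placeholder node functions;
# names and routing details are assumptions and may not match DatasetGenerator.py.
from langgraph.graph import StateGraph, START, END
# DatasetGeneratorState is the TypedDict shown in the State Management section above.

def placeholder(state: DatasetGeneratorState) -> dict:
    return {}  # real nodes prompt the user, call GPT-4o, or write the JSON file

builder = StateGraph(DatasetGeneratorState)
for name in ["input_collection", "generate_dataset", "evaluate_dataset",
             "handle_feedback", "display_edit", "export_dataset"]:
    builder.add_node(name, placeholder)

builder.add_edge(START, "input_collection")
builder.add_edge("input_collection", "generate_dataset")
builder.add_edge("generate_dataset", "evaluate_dataset")

def after_evaluation(state: DatasetGeneratorState) -> str:
    # PASS (score >= 70) goes to review; FAIL goes to the feedback handler
    return "display_edit" if state.get("success_criteria_met") else "handle_feedback"

builder.add_conditional_edges("evaluate_dataset", after_evaluation,
                              {"display_edit": "display_edit", "handle_feedback": "handle_feedback"})
builder.add_edge("handle_feedback", "generate_dataset")  # regenerate using the validation report
builder.add_edge("display_edit", "export_dataset")       # simplified: accept -> export
builder.add_edge("export_dataset", END)
dataset_graph = builder.compile()
```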
A sophisticated RAG (Retrieval Augmented Generation) system that converts your personal knowledge base (OneNote exports, MHT files) into an intelligent AI chatbot. The system:
- Processes images - Automatically analyzes and describes images using Azure OpenAI Vision API
- Extracts links - Captures and indexes hyperlinks from your documents
- Creates vector embeddings - Converts text chunks into searchable semantic vectors
- Enables smart search - Uses semantic similarity to find relevant information
- Maintains conversation history - Remembers context across multiple questions
- Caches intelligently - Avoids redundant API calls through smart caching
✅ Image Processing - Azure OpenAI Vision automatically describes images in your notes
✅ Smart Caching - Processed images cached to avoid redundant API calls
✅ RAG Architecture - Vector embeddings for semantic search across your knowledge
✅ Link Extraction - Automatically captures hyperlinks from documents
✅ Gradio Web UI - Beautiful, responsive chat interface
✅ Conversation Memory - Maintains context across multiple queries
✅ Offline Indexing - Uses local HuggingFace embeddings (no external API for embeddings)
📁 PersonalKnowledgeWorker/
├── main.py (Setup and initialization script)
├── gradio_app.py (Web interface launcher)
├── config.py (Configuration and Azure setup)
├── image_processor.py (Image extraction and AI description)
├── chunk_creator.py (Text chunking with enrichment)
├── vector_store.py (Vector store and conversation management)
├── requirements.txt
└── amdocsKnowledgeBase/ (Your knowledge base folder)
    ├── Company.mht (Your OneNote export)
    ├── images_cache.pkl (Generated cache)
    └── knowledge_base_db/ (Generated vector store)
# Navigate to project
cd PersonalKnowledgeWorker
# Install dependencies
pip install -r requirements.txt
# First time setup (processes images and builds vector store)
python main.py
# Launch web interface
python gradio_app.py

Create a .env file with:
AUTOX_API_KEY=...
NTNET_USERNAME=...
Your MHT Knowledge Base
  ↓
Image Processing (Azure Vision)
  ↓
Text Chunking & Enrichment
  ↓
Vector Embeddings (HuggingFace)
  ↓
ChromaDB Vector Store
  ↓
Semantic Search + LLM
  ↓
Chat Response with Context
- Image Extraction: Scans MHT file for embedded images
- Vision Analysis: Sends to Azure OpenAI Vision API for descriptions
- Caching: Stores descriptions locally to avoid re-processing
- Text Parsing: Extracts text sections and HTML structure
- Link Extraction: Captures all hyperlinks from document
- Chunk Creation: Combines text with image descriptions
- Embeddings: Converts to vector embeddings using HuggingFace
- Vector Store: Indexes in ChromaDB for fast retrieval
- Chat Interface: Enables semantic search conversations
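As a rough sketch of the indexing half of this pipeline (steps 4-8 above), cached image descriptions can be appended to each text chunk before embedding with a local HuggingFace model and persisting to Chroma. The function name, chunk sizes, and enrichment logic are assumptions, and LangChain import paths vary by version.

```python
# Rough sketch of the indexing step; function names, chunk sizes, and the exact
# enrichment logic are assumptions - chunk_creator.py / vector_store.py may differ.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_chroma import Chroma
from langchain_core.documents import Document

def build_vector_store(sections: list[str], image_descriptions: dict[str, str],
                       persist_dir: str = "amdocsKnowledgeBase/knowledge_base_db") -> Chroma:
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    docs = []
    for section in sections:
        for chunk in splitter.split_text(section):
            # Enrich the chunk with AI descriptions of any images it references
            extras = [desc for name, desc in image_descriptions.items() if name in chunk]
            text = chunk + ("\n\n[Images] " + " | ".join(extras) if extras else "")
            docs.append(Document(page_content=text, metadata={"source": "Company.mht"}))
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    return Chroma.from_documents(docs, embeddings, persist_directory=persist_dir)
```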
- First run: ~15-30 minutes (processes 150+ images)
- Subsequent runs: Instant (uses cached descriptions)
- Vector store persists to disk (no rebuild needed)
- Supports multi-turn conversations with memory
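The image pipeline and cache might look roughly like the sketch below: each embedded image is described once through a vision-capable chat completion, and the result is pickled so later runs reuse the cache instead of calling the API again. The file layout, prompt, and function signature are illustrative assumptions, not image_processor.py itself.

```python
# Hedged sketch of the image pipeline: describe each image once via a vision-capable
# chat completion and cache the result so subsequent runs skip the API entirely.
import base64, os, pickle

CACHE_FILE = "amdocsKnowledgeBase/images_cache.pkl"

def describe_images(client, image_bytes_by_name: dict[str, bytes]) -> dict[str, str]:
    cache = pickle.load(open(CACHE_FILE, "rb")) if os.path.exists(CACHE_FILE) else {}
    for name, data in image_bytes_by_name.items():
        if name in cache:
            continue  # already processed on a previous run
        b64 = base64.b64encode(data).decode()
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": [
                {"type": "text", "text": "Describe this image from my notes in detail."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ]}])
        cache[name] = response.choices[0].message.content
    with open(CACHE_FILE, "wb") as f:
        pickle.dump(cache, f)
    return cache
```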
A comprehensive RAG system specifically designed for insurance company knowledge bases. This Jupyter notebook demonstrates how to:
- Load structured documents - Processes employees, products, contracts, and company information
- Create semantic search - Builds vector embeddings for intelligent retrieval
- Enable conversational Q&A - Answers questions about company knowledge
- Visualize embeddings - Explores knowledge distribution in 2D/3D space
- Maintain context - Remembers conversation history for multi-turn Q&A
- Deploy with Gradio - Web interface for easy access
✅ Multi-Document Type Support - Handles employees, products, contracts, company data
✅ Semantic Search - Uses vector embeddings for intelligent retrieval
✅ Conversation Memory - Maintains context across multiple queries
✅ Document Visualization - 2D/3D t-SNE plots of knowledge distribution
✅ Gradio Chat UI - Easy-to-use web interface
✅ Azure OpenAI Integration - Uses corporate-compatible LLM setup
✅ HuggingFace Embeddings - Local embeddings for privacy
✅ Chroma Vector Database - Persistent vector storage
📁 5_RAG/
├── RAGInusranceLLM.ipynb (Main notebook)
├── knowledge-base/ (Knowledge base folder)
│   ├── company/ (Company information)
│   ├── products/ (Product descriptions)
│   ├── employees/ (Employee profiles)
│   └── contracts/ (Business contracts)
└── amdocsKnowledgeBase/ (Indexed data)
# Navigate to project
cd 5_RAG
# Run the notebook
jupyter notebook RAGInusranceLLM.ipynb
# Or run it directly with Python
python -m jupyter notebook RAGInusranceLLM.ipynb

Create a .env file with:
AUTOX_API_KEY=...
NTNET_USERNAME=...
OPENAI_API_KEY=... (if using OpenAI instead of Azure)
The system indexes 4 document types:
📚 Company Knowledge
├── 👥 Employees (12 profiles)
│   └── Roles, career progression, performance ratings
├── 📦 Products (4 offerings)
│   └── Features, pricing, specifications
├── 📄 Contracts (13 agreements)
│   └── Terms, pricing, support details
└── ℹ️ Company (Overview)
    └── Mission, history, locations
# The system can answer questions like:
"Who is the CEO of our company?"
→ Avery Lancaster, Co-Founder & CEO

"What are our main insurance products?"
→ Carllm, Homellm, Markellm, Rellm

"Who received the IIOTY award in 2023?"
→ Maxine Thompson (Insurellm Innovator of the Year)

"What are the key features of Rellm?"
→ AI-powered risk assessment, Dynamic pricing, Instant claims...

Knowledge Base Files (Markdown)
  ↓
Document Loading & Chunking
  ↓
Text Splitting (1000 char chunks)
  ↓
HuggingFace Embeddings
  ↓
Chroma Vector Store (123 chunks)
  ↓
ConversationalRetrievalChain
  ↓
Query + Memory
  ↓
LLM Response with Retrieved Context
- Load Documents: Reads all .md files from knowledge-base/
- Split Text: Creates 1000-char chunks with 200-char overlap
- Create Embeddings: Uses HuggingFace sentence-transformers model
- Build Vector Store: Indexes in ChromaDB (123 vectors × 384 dimensions)
- Setup Retriever: Configures k=25 for semantic search
- Create Chain: ConversationalRetrievalChain with memory
- Visualize: Optional 2D/3D t-SNE visualization
- Chat: Launch Gradio interface for Q&A
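Condensed into code, the pipeline above corresponds roughly to the following sketch (the notebook's Azure client setup is omitted here, import paths vary across LangChain versions, and the actual cells may differ in detail):

```python
# Condensed sketch of the RAG pipeline the notebook builds; Azure-specific setup
# omitted, and import paths depend on your LangChain version.
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_chroma import Chroma
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain_openai import ChatOpenAI

# 1-2. Load every .md file and tag it with its folder name as doc_type
documents = []
for doc_type in ["company", "products", "employees", "contracts"]:
    loader = DirectoryLoader(f"knowledge-base/{doc_type}", glob="**/*.md", loader_cls=TextLoader)
    for doc in loader.load():
        doc.metadata["doc_type"] = doc_type
        documents.append(doc)
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(documents)

# 3-4. Local embeddings + persistent Chroma index
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = Chroma.from_documents(chunks, embeddings, persist_directory="amdocsKnowledgeBase")

# 5-6. Retriever (k=25) + conversational chain with memory
retriever = vectorstore.as_retriever(search_kwargs={"k": 25})
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = ConversationalRetrievalChain.from_llm(llm=ChatOpenAI(model="gpt-4o"),
                                              retriever=retriever, memory=memory)

print(chain.invoke({"question": "Who is the CEO of our company?"})["answer"])
```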
📊 Knowledge Base Statistics:
- Total documents: 123 chunks
- Vector dimensions: 384
- Document types: 4 categories
- Retrieval k: 25 (top-k similarity search)
- Overlap: 200 characters between chunks
To add your own knowledge base:
# 1. Create knowledge-base/ folder with subfolders
# 2. Add .md files to appropriate folders
# 3. Run the notebook cells in order
# 4. The system automatically:
# - Discovers all .md files
# - Adds doc_type metadata
# - Creates embeddings
# - Builds searchable index

| Issue | Solution |
|---|---|
| Proxy errors | Set NO_PROXY before creating HTTP clients |
| Import errors | Run uv sync to install dependencies |
| Embedding errors | Ensure HuggingFace model is downloaded |
| Low quality answers | Try increasing k value or adjusting chunks |
- Python 3.12+
- uv package manager
- API Keys:
  - Azure OpenAI / OpenAI
  - Pushover (for notifications)
  - Google Serper (for web search - optional; Sidekick works without it)
# Clone this repository
git clone https://github.com/yourusername/ChatWithMeAgent.git
cd ChatWithMeAgent
# Install dependencies
uv sync
# Create .env file
cp .env.example .env
# Edit .env with your API keys

If behind a corporate proxy (like Amdocs):
# Already configured in sidekick_tools.py:
os.environ["NO_PROXY"] = ",".join([
".autox.corp.amdocs.azr",
"chat.autox.corp.amdocs.azr",
"localhost", "127.0.0.1"
])

Key Rule: Set environment variables BEFORE creating HTTP clients!
LLM + Tools → Tool Calls → Execution → Results → LLM Response
Simple and Direct: One pass through LLM with tools
Worker → Router → Tools/Evaluator → Decision → Worker or END
Complex but Powerful: Feedback loops, evaluators, retries
Input → Generate → Evaluate → Feedback Loop → Regenerate or Accept → Export
Quality-Focused: Automatic evaluation with regeneration based on quality criteria
Knowledge Base → Chunking → Embeddings → Vector Store → Semantic Search → LLM → Response
Context-Aware: Grounds LLM responses in actual documents, reduces hallucinations
All projects use LangChain State:
class State(TypedDict):
messages: Annotated[List, add_messages] # Reducer for concatenation
other_fields: str # Simple overwrite

add_messages automatically appends new messages to history!
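A quick way to see the reducer in action (assuming a recent langgraph version that exposes add_messages):

```python
# Quick demonstration of the add_messages reducer: new messages are appended
# to the existing history rather than replacing it.
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

history = [HumanMessage(content="Hi")]
update = [AIMessage(content="Hello! How can I help?")]
print(add_messages(history, update))   # -> both messages, in order
```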
cd 1_foundations
uv run gradio deploy
# Follow prompts for app configuration

Result: Public URL like huggingface.co/spaces/username/career_conversation
Option 1: Docker
FROM python:3.12
WORKDIR /app
COPY . .
RUN uv sync
CMD ["python", "4_langgraph/app.py"]Option 2: Railway/Render
- Connect GitHub repo
- Set environment variables
- Deploy with one click
Option 3: Local + Ngrok (for testing)
python 4_langgraph/app.py
# In another terminal:
ngrok http 7860

Local Execution:
cd 4_langgraph
python DatasetGenerator.py

Scheduled Generation (Cron/Task Scheduler):
# Linux/Mac cron example
0 0 * * * cd /path/to/ChatWithMeAgent/4_langgraph && python DatasetGenerator.py

Containerized:
FROM python:3.12
WORKDIR /app
COPY . .
RUN uv sync
CMD ["python", "4_langgraph/DatasetGenerator.py"]Local Execution:
cd PersonalKnowledgeWorker
python main.py # Setup and indexing
python gradio_app.py # Launch web interface

Docker Deployment:
FROM python:3.12
WORKDIR /app
COPY . .
RUN uv sync
CMD ["python", "PersonalKnowledgeWorker/gradio_app.py"]Jupyter Notebook:
cd 5_RAG
jupyter notebook RAGInusranceLLM.ipynb

Docker Deployment:
FROM python:3.12
WORKDIR /app
COPY . .
RUN uv sync
CMD ["jupyter", "notebook", "5_RAG/RAGInusranceLLM.ipynb", "--ip=0.0.0.0", "--allow-root"]| Technology | Purpose |
|---|---|
| LangChain | LLM framework & tools |
| LangGraph | Graph-based agent orchestration |
| Gradio | Web UI for agents |
| Playwright | Web automation & scraping |
| Folium | Interactive map generation |
| Geopy | Address geocoding |
| Azure OpenAI | LLM with corporate proxy support |
| Pydantic | Structured outputs & validation |
| ChromaDB | Vector database for embeddings |
| HuggingFace Transformers | Local embedding models |
| t-SNE | Dimensionality reduction for visualization |
| Issue | Solution |
|---|---|
| 504 Proxy Error | Set NO_PROXY BEFORE creating HTTP clients |
| Playwright NotImplementedError | Comment out Windows event loop policy (see Cell 9, lab3) |
| Nested Event Loop Error | Use nest_asyncio.apply() in Jupyter |
| Map not generating | Ensure sandbox/ folder exists and geopy/folium installed |
| Pushover notifications not sending | Verify PUSHOVER_USER and PUSHOVER_TOKEN in .env |
| Dataset Generator JSON parse error | AI output may include markdown formatting - check regex extraction in code |
| Low quality scores repeatedly | Provide more specific example data structure or detailed use case description |
| RAG: Vector store not found | Run python main.py (Personal KW) or notebook cells in order (Insurance LLM) |
| RAG: Image processing timeout | First run takes 15-30 min for 150+ images; subsequent runs use cache |
| RAG: Poor search results | Increase k parameter for retrieval or provide more specific queries |
| RAG: Out of memory errors | Reduce chunk size or embedding batch size in config |
- Notes4.md - Comprehensive documentation on:
- LangGraph architecture & scenarios
- Async/Await in Python
- Proxy configuration
- Google Maps tool implementation
- Troubleshooting guide
- Project Files:
  - 1_foundations/ChatWithPushNotifications.ipynb - Tool integration basics
  - 4_langgraph/app.py - Gradio UI with async/await
  - 4_langgraph/sidekick.py - Multi-agent orchestration
  - 4_langgraph/sidekick_tools.py - Custom tool implementations
  - 4_langgraph/DatasetGenerator.py - Complete iterative generation system
  - PersonalKnowledgeWorker/main.py - Personal Knowledge Worker setup
  - 5_RAG/RAGInusranceLLM.ipynb - RAG Insurance LLM notebook
- Notebooks:
  - 4_langgraph/3_lab3.ipynb - Browser automation intro
  - 4_langgraph/4_lab4.ipynb - Multi-agent flows
Have improvements? Found bugs?
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
MIT License - See LICENSE file for details
- Issues: Create an issue in this repository
- Questions: Open a discussion
- LinkedIn: Connect with me
- Email: ed@edwarddonner.com
Happy coding! 🚀