LincolnBot is an intelligent assistant designed to help users find accurate information about parking rules, permits, and penalties in Lincoln City. By combining semantic search capabilities with advanced language models, it provides contextually relevant and accurate responses to user queries.
The system follows a three-layer architecture:
-
Embedding Layer:
- Processes and vectorizes text data
- Uses SentenceTransformers for semantic understanding
- Optimized for parking-related content
-
Search Layer:
- Performs semantic search with ChromaDB
- Implements re-ranking for better relevance
- Utilizes Redis caching for performance
-
Generative Layer:
- Generates natural language responses using GPT-4
- Ensures context-aware and accurate answers
- Validates responses for completeness
- Python 3.9+
- Redis server
- OpenAI API key
- 2GB+ RAM for embedding models
- Clone the Repository:
git clone https://github.com/vaibhavcssbb/LincolnBot.git
cd LincolnBot- Create and Activate Virtual Environment:
# On macOS/Linux
python -m venv venv
source venv/bin/activate
# On Windows
python -m venv venv
venv\Scripts\activate- Install Dependencies:
pip install -r requirements.txt- Configure Environment Variables:
cp .env.example .envEdit .env with your:
- OpenAI API key
- Redis configuration
- Other settings as needed
- Start Redis Server:
redis-server- Ensure Redis is running
- Execute the main script:
python main.py# The bot can answer questions like:
"What are the parking permit requirements?"
"How much does a resident parking permit cost?"
"What are the penalties for parking violations?"Run the test suite:
# Run all tests
python -m pytest tests/
# Run specific test file
python -m pytest tests/test_search_engine.py
# Run with coverage report
python -m pytest --cov=. tests/LincolnBot/
├── README.md # Project documentation
├── requirements.txt # Python dependencies
├── .env.example # Environment variables template
├── .gitignore # Git ignore patterns
├── main.py # Application entry point
├── search_engine.py # Search functionality
├── response_generator.py # Response generation
├── response_validator.py # Response validation
└── tests/ # Test suite
├── test_search_engine.py
└── test_response_generator.py
- Embedding Model: all-MiniLM-L6-v2
- Vector Store: ChromaDB
- Cache: Redis
- Re-ranker: cross-encoder/ms-marco-MiniLM-L-6-v2
- Generative Model: GPT-4
- Average search latency: < 0.3 seconds
- Cache hit rate: ~40%
- Response accuracy: > 95%
- Context relevance: High
- Verify Redis server is running:
redis-cli ping - Check Redis port (default 6379) is not blocked
- Ensure Redis password is correctly set in .env
- Verify API key in .env file
- Check API rate limits at OpenAI dashboard
- Ensure sufficient credits available
- Verify all dependencies are installed
- Check environment variables are set
- Ensure Redis connection is active
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
This project is licensed under the MIT License.
- OpenAI for GPT-4
- SentenceTransformers team
- ChromaDB developers
- Redis team
For questions or support, please open an issue on GitHub.