A backend service designed to fetch, cache, and analyze GitHub repository issues using Python, FastAPI, SQLite, and Google Gemini LLM.
Clone the repository and install the dependencies:

```bash
pip install -r requirements.txt
```

Create a `.env` file in the root directory (you can copy `.env.example`).
Edit `.env` and add your keys:
```
GITHUB_TOKEN=your_github_personal_access_token
GEMINI_API_KEY=your_google_gemini_api_key
```

Start the application using Uvicorn:

```bash
uvicorn main:app --reload
```

By default, the server will be available at http://127.0.0.1:8000
Swagger UI for testing the API: http://127.0.0.1:8000/docs
I chose SQLite because it fits a local caching workload well:
- Persistence: Data remains persistent even after restarts (unlike in-memory).
- Zero-Config: No extra server to manage (unlike Postgres/Redis).
- SQL Integration: Easy to query and filter data efficiently.
- Type Safety: Works great with SQLModel + Pydantic.
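To illustrate the caching idea without pulling in dependencies, here is a stdlib `sqlite3` sketch (the project itself uses SQLModel, and the exact table and column names here are assumptions mirroring the fields the service stores):

```python
import sqlite3

# In-memory DB for the sketch; the real cache would use a file (e.g. "issues.db")
# so the data survives restarts.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS issues (
        id INTEGER PRIMARY KEY,
        title TEXT,
        body TEXT,
        html_url TEXT,
        created_at TEXT
    )"""
)
conn.execute(
    "INSERT OR IGNORE INTO issues VALUES (?, ?, ?, ?, ?)",
    (1, "Crash on startup", "Traceback ...",
     "https://github.com/owner/repo/issues/1", "2024-01-01T00:00:00Z"),
)
conn.commit()

# Plain SQL makes filtering the cache cheap, e.g. newest issues first.
rows = conn.execute(
    "SELECT id, title FROM issues ORDER BY created_at DESC LIMIT 10"
).fetchall()
```

The same queries work unchanged whether the connection points at `:memory:` or an on-disk file, which is what makes SQLite convenient for both tests and the persistent cache.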
"Scaffold a modular FastAPI project with `services`, `models`, and `database` directories. Use SQLModel for SQLite and `httpx` for async requests. Include a lifespan context manager in `main.py` for DB initialization."
"Create a `GitHubService` to fetch issues from an `owner/repo` string. Extract `id`, `title`, `body`, `html_url`, and `created_at`. Handle API errors and 404s gracefully."
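The field extraction that prompt describes is plain dictionary work. The helper below is a hypothetical sketch (in the real `GitHubService` the JSON would come from an `httpx` response first):

```python
def parse_issue(raw: dict) -> dict:
    """Keep only the fields the cache stores, tolerating missing bodies."""
    return {
        "id": raw["id"],
        "title": raw["title"],
        "body": raw.get("body") or "",  # GitHub returns null for empty bodies
        "html_url": raw["html_url"],
        "created_at": raw["created_at"],
    }

sample = {
    "id": 42,
    "title": "Bug: crash on scan",
    "body": None,
    "html_url": "https://github.com/owner/repo/issues/42",
    "created_at": "2024-05-01T12:00:00Z",
    "state": "open",  # extra API fields are simply dropped
}
issue = parse_issue(sample)
```

Normalizing the payload at the service boundary keeps the rest of the app independent of the GitHub API's response shape.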
"Implement the `/scan` endpoint to fetch issues via `GitHubService` and upsert them into SQLite using SQLModel. Return a summary with the count of new/updated issues."
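The upsert-and-count behaviour can be sketched with stdlib `sqlite3` (the project uses SQLModel, so these names are assumptions). SQLite's `INSERT ... ON CONFLICT DO UPDATE` performs the upsert, and checking for the row first lets the endpoint report new vs. updated:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, title TEXT)")

def upsert_issues(conn, issues):
    """Insert or update each issue, returning how many were new vs. updated."""
    new = updated = 0
    for issue in issues:
        exists = conn.execute(
            "SELECT 1 FROM issues WHERE id = ?", (issue["id"],)
        ).fetchone()
        conn.execute(
            "INSERT INTO issues (id, title) VALUES (:id, :title) "
            "ON CONFLICT(id) DO UPDATE SET title = excluded.title",
            issue,
        )
        if exists:
            updated += 1
        else:
            new += 1
    conn.commit()
    return {"new": new, "updated": updated}

upsert_issues(conn, [{"id": 1, "title": "first"}])
summary = upsert_issues(conn, [{"id": 1, "title": "renamed"},
                               {"id": 2, "title": "second"}])
```

Re-scanning the same repository is then idempotent: existing rows are refreshed in place rather than duplicated.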
"Build an `analyze_issues` service using Google Gemini. It should accept a list of issues and a user prompt, truncating issue bodies to 2000 chars to fit the context window. Support both text and markdown output formats."
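The truncation step is simple string slicing. This sketch takes the function's purpose and the 2000-character limit from the prompt above, but the exact prompt layout is assumed:

```python
MAX_BODY_CHARS = 2000  # keeps the combined prompt inside the model's context window

def format_issues_for_llm(issues: list[dict], user_prompt: str) -> str:
    """Flatten cached issues into a single prompt string for the LLM."""
    parts = [user_prompt, ""]
    for issue in issues:
        body = (issue.get("body") or "")[:MAX_BODY_CHARS]
        parts.append(f"### Issue #{issue['id']}: {issue['title']}\n{body}")
    return "\n".join(parts)

prompt = format_issues_for_llm(
    [{"id": 1, "title": "Crash", "body": "A" * 5000}],
    "Summarize the open issues.",
)
```

Truncating per issue (rather than truncating the final string) keeps every issue represented in the prompt even when one issue has a very long body.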
"Expose `/analyze` (text) and `/analyze-md` (markdown) endpoints. Retrieve cached issues from SQLite, format them for the LLM service, and return the generated analysis."
I reviewed all AI-generated code to verify that it was correct, robust, and behaved as expected, and I corrected or redirected the AI whenever it headed in the wrong direction.