A simple Flask backend for fetching and managing GPU listings from various providers.
- Create a virtual environment and activate it:

  ```bash
  python -m venv .startup
  source .startup/bin/activate
  ```

- Install dependencies from `requirements.txt`:

  ```bash
  pip install -r requirements.txt
  ```

- Install PostgreSQL if not already installed:

  ```bash
  sudo pacman -S postgresql
  ```

  Note: `pacman` is for Arch Linux. On macOS, install PostgreSQL with Homebrew instead (`brew install postgresql`); on Windows, use the official PostgreSQL installer.
- Create a `.env` file in the root directory:

  ```
  DATABASE_URI=postgresql://postgres:postgres@localhost:5432/neotix
  ```

- Run the `setup_postgres.py` script in the `scripts` folder.

  Note: This script has not been tested on Windows. If you are on Windows, read the script carefully and create the database manually.
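The `DATABASE_URI` above follows the standard PostgreSQL connection-URL form (`postgresql://user:password@host:port/dbname`). A quick stdlib-only sketch of how its parts map, using the example value from `.env`:

```python
from urllib.parse import urlsplit

# The example URI from .env; swap in your own credentials and database name.
uri = "postgresql://postgres:postgres@localhost:5432/neotix"
parts = urlsplit(uri)

user, password = parts.username, parts.password  # "postgres", "postgres"
host, port = parts.hostname, parts.port          # "localhost", 5432
database = parts.path.lstrip("/")                # "neotix"
```

This is handy for sanity-checking the URI before pointing the app (or `setup_postgres.py`) at a different database.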
- Initialize the database:

  ```bash
  flask db upgrade
  ```

- Run the application:

  ```bash
  flask run
  ```

- In parallel, fetch GPU data:

  ```bash
  flask fetch-gpu-data
  ```

  If this fails on the first run, the `gpuhunt` repository is probably missing, as it is not downloaded automatically. Clone it, then re-run `flask fetch-gpu-data`:

  ```bash
  git clone https://github.com/Neotix-Dev/gpuhunt.git
  ```
Most endpoints require a Firebase authentication token in the `Authorization` header.
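A minimal sketch of attaching the token to a request. This assumes the backend expects a Bearer-style `Authorization` header, which is a common convention but not spelled out here; check the auth middleware if requests come back `401`:

```python
def auth_headers(id_token: str) -> dict:
    """Build request headers carrying a Firebase ID token.

    Assumes a Bearer-style Authorization header; adjust the prefix
    if the backend's auth middleware expects a different format.
    """
    return {"Authorization": f"Bearer {id_token}"}


# Pass these headers with any authenticated request.
headers = auth_headers("YOUR_FIREBASE_ID_TOKEN")
```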
- `GET /get_all` - Get all GPU listings
- `GET /get_gpus/<page_number>` - Get a paginated list of GPUs, starting from page 1. Returns more detail per GPU and pulls 200 GPUs at a time, so prefer this endpoint.
- `GET /<id>` - Get a specific GPU by ID
- `GET /search?q=<query>` - Search GPUs by name/specs with fuzzy matching
- `GET /filtered` - Get filtered GPUs with pagination
  - Query params: `gpu_name`, `gpu_vendor`, `min/max_gpu_count`, `min/max_gpu_memory`, `min/max_cpu`, `min/max_memory`, `min/max_price`, `min/max_gpu_score`, `provider`, `sort_by`, `sort_order`, `page`, `per_page`
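A sketch of building a `/filtered` request URL from the query params above. The base URL is a placeholder (Flask's default dev address), not something this document specifies:

```python
from urllib.parse import urlencode

# Hypothetical base URL; the real host/port depends on your deployment.
BASE = "http://localhost:5000"


def filtered_url(**params) -> str:
    """Build a GET /filtered URL from the query params listed above."""
    return f"{BASE}/filtered?{urlencode(params)}"


url = filtered_url(gpu_vendor="NVIDIA", min_gpu_memory=24, max_price=2.5,
                   sort_by="price", sort_order="asc", page=1, per_page=50)
```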
- `GET /<gpu_id>/price-points` - Get current price points for a GPU
- `GET /<gpu_id>/price-history` - Get price history with an optional date range
- `GET /vendors` - List all GPU vendors
- `GET /hosts` - List all GPU providers
- `POST /register` - Register a new user with Firebase
  - Required: `email`, `password`, `first_name`, `last_name`, `organization`, `experience_level`
- `POST /sync` - Sync a Firebase user with the backend
- `GET /profile` - Get user profile and preferences
- `PUT /profile` - Update user profile
- `POST /update` - Update user data
- `DELETE /` - Delete user account
- `GET /` - Get the user's projects
- `POST /` - Create a new project (requires: `name`)
- `PUT /<project_id>` - Update project details
- `DELETE /<project_id>` - Delete a project
- `POST /<project_id>/gpus` - Add a GPU to a project
- `DELETE /<project_id>/gpus/<gpu_id>` - Remove a GPU from a project
- `GET /selected-gpus` - Get selected GPUs
- `POST /selected-gpus` - Add a GPU to the selection
- `DELETE /selected-gpus/<gpu_id>` - Remove a GPU from the selection
- `GET /favorite-gpus` - Get favorite GPUs
- `POST /favorite-gpus` - Add a GPU to favorites
- `DELETE /favorite-gpus/<gpu_id>` - Remove a GPU from favorites
- `GET /rented-gpus` - Get active rentals
- `POST /rented-gpus` - Add a rented GPU
- `DELETE /rented-gpus/<gpu_id>` - End a GPU rental
- `GET /price-alerts` - Get price alerts
- `POST /price-alerts` - Create a price alert
  - Required: `gpuId`/`gpuType`, `targetPrice`, `isTypeAlert`
- `DELETE /price-alerts/<alert_id>` - Remove a price alert
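A sketch of the two alert payload shapes implied by the required fields. The semantics (an ID-based alert on one listing vs. a type-based alert on a GPU model, with `isTypeAlert` distinguishing them) are inferred from the field names, not confirmed by this document:

```python
def price_alert_payload(target_price, gpu_id=None, gpu_type=None):
    """Build a POST /price-alerts payload.

    Give exactly one of gpu_id (alert on one listing) or gpu_type
    (alert on a GPU model); isTypeAlert flags which form this is.
    """
    if (gpu_id is None) == (gpu_type is None):
        raise ValueError("provide exactly one of gpu_id or gpu_type")
    payload = {"targetPrice": target_price,
               "isTypeAlert": gpu_type is not None}
    if gpu_type is not None:
        payload["gpuType"] = gpu_type
    else:
        payload["gpuId"] = gpu_id
    return payload
```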
The backend implements A/B testing functionality to track and analyze user preferences between different view types (grid vs. table view).
- `POST /view-preference` - Record a user's view preference
  - Required payload:

    ```json
    {
      "sessionId": "unique_session_id",
      "timestamp": "ISO timestamp",
      "viewType": "grid|table",
      "initialView": "grid|table"
    }
    ```
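A client-side sketch of building that payload with the standard library. Using a UUID for `sessionId` and a UTC ISO timestamp are illustrative choices; the backend only requires the fields shown above:

```python
import uuid
from datetime import datetime, timezone


def view_preference_payload(view_type, initial_view):
    """Build the POST /view-preference payload described above."""
    assert view_type in ("grid", "table")
    assert initial_view in ("grid", "table")
    return {
        "sessionId": uuid.uuid4().hex,  # any unique session id works
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "viewType": view_type,
        "initialView": initial_view,
    }
```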
- `GET /view-preference/summary` - Get a summary of view preferences
  - Returns:

    ```json
    {
      "total_sessions": "number of unique sessions",
      "view_preferences": {
        "grid": "number of users who preferred grid view",
        "table": "number of users who preferred table view"
      },
      "conversion_rates": {
        "grid": "percentage who stayed with grid view",
        "table": "percentage who stayed with table view"
      }
    }
    ```
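A sketch of how those summary fields can be derived from raw records, under the assumption that the conversion rate is the share of sessions whose final view matches the initially assigned one (inferred from the field descriptions, not confirmed by this document):

```python
def summarize(records):
    """Aggregate view-preference records into the summary shape above.

    records: dicts with sessionId, viewType (final choice) and
    initialView (randomly assigned view); the last record per
    session wins.
    """
    sessions = {r["sessionId"]: r for r in records}
    prefs = {"grid": 0, "table": 0}
    assigned = {"grid": 0, "table": 0}
    stayed = {"grid": 0, "table": 0}
    for r in sessions.values():
        prefs[r["viewType"]] += 1
        assigned[r["initialView"]] += 1
        if r["viewType"] == r["initialView"]:
            stayed[r["initialView"]] += 1
    rates = {v: (100 * stayed[v] / assigned[v] if assigned[v] else 0)
             for v in ("grid", "table")}
    return {"total_sessions": len(sessions),
            "view_preferences": prefs,
            "conversion_rates": rates}
```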
- Users are randomly assigned either grid or table view on their first visit
- The system tracks:
- Initial view type assigned
- View type changes during the session
- Final view type preference
- Analytics are stored in JSON format at `data/analytics/view_preferences.json`
- Ensure the `data/analytics` directory exists:

  ```bash
  mkdir -p data/analytics
  ```

- Initialize the analytics file:

  ```bash
  echo "[]" > data/analytics/view_preferences.json
  ```

- Ensure write permissions:

  ```bash
  chmod 644 data/analytics/view_preferences.json
  ```
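The setup steps above can also be sketched in Python (useful on platforms without `mkdir -p`). The append helper assumes the file holds a flat JSON list of records, matching the `echo "[]"` initialization; it is not the backend's actual implementation:

```python
import json
from pathlib import Path

ANALYTICS_FILE = Path("data/analytics/view_preferences.json")


def init_analytics():
    """Create data/analytics/ and an empty JSON list if missing."""
    ANALYTICS_FILE.parent.mkdir(parents=True, exist_ok=True)
    if not ANALYTICS_FILE.exists():
        ANALYTICS_FILE.write_text("[]")


def append_record(record):
    """Append one view-preference record to the analytics file."""
    init_analytics()
    records = json.loads(ANALYTICS_FILE.read_text())
    records.append(record)
    ANALYTICS_FILE.write_text(json.dumps(records, indent=2))
```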
All analytics data is stored in `data/analytics/view_preferences.json`.
All endpoints return JSON with a standard structure:

- Success: `{ "data": [result] }`
- Error: `{ "error": "error message" }`
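A minimal client-side sketch of unwrapping that envelope (the helper name is illustrative, not part of this API):

```python
def unwrap(body: dict):
    """Return the data from a response envelope, or raise on error."""
    if "error" in body:
        raise RuntimeError(body["error"])
    return body["data"]
```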
- 200: Success
- 201: Created
- 400: Bad Request
- 401: Unauthorized
- 404: Not Found
- 500: Server Error