A versatile FastAPI service that provides SQL query endpoints for Supabase (or any PostgreSQL database). The service offers two connection methods:
- SQLAlchemy
- Direct psycopg2 database connection
For any issues or questions, please drop me an email at: amar@harolikar.com
You can deploy this service on various platforms like Render, Railway, Heroku, or any other cloud platform of your choice.
1. Choose your preferred platform (Render/Railway/Heroku/etc.)
2. Connect your repository
3. Configure the build:
   - Build Command: `pip install -r requirements.txt`
   - Start Command: `uvicorn app:app --host 0.0.0.0 --port $PORT`
4. Add environment variables in platform settings:
   - Key: `DATABASE_URL`
   - Value: your database connection string in URI format. For Supabase, you can find this under Project Settings > Database > Connect > URI format. Example:
     `postgresql://postgres.gsutseqhzrdzdhxabcd:4yHoBJ9pXaDabdd@aws-0-us-east-2.pooler.supabase.com:6543/postgres`
   - Key: `REX_API_KEY`
   - Value: your API key for authentication (default: `rex-QAQ_bNvD7j0E2wXrCEzRL`)
After deployment, you'll get an API URL. Your endpoints will be:
```
{YOUR_API_URL}/sqlquery_alchemy/?sqlquery=YOUR_QUERY&api_key=YOUR_API_KEY
{YOUR_API_URL}/sqlquery_direct/?sqlquery=YOUR_QUERY&api_key=YOUR_API_KEY
```
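For example, a test call against the ODI table described below might look like this (the SQL must be URL-encoded, so spaces become `%20`):

`{YOUR_API_URL}/sqlquery_alchemy/?sqlquery=SELECT%20*%20FROM%20cricket_one_day_international%20LIMIT%205&api_key=YOUR_API_KEY`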
To connect the service to a Custom GPT:
- Create a new Custom GPT
- Copy the OpenAPI schema from `customGPT_actionSchema.json` and paste it into the GPT configuration
- Make the following required changes in the schema:
  - Update the server URL with your actual URL, e.g. `https://supabase-hosting.on-render.com`:

    ```json
    "servers": [
      {
        "url": "YOUR_DEPLOYED_API_URL",
        "description": "Main API server"
      }
    ]
    ```

  - IMPORTANT: Set your API key in the `description` field (NOT the `example` field):

    ```json
    {
      "name": "api_key",
      "in": "query",
      "required": true,
      "schema": { "type": "string" },
      "description": "YOUR_REX_API_KEY",  // ← PUT YOUR ACTUAL API KEY HERE
      "example": "rex-jasjf887^&^jjf"     // ← This is just an example, don't change
    }
    ```

- Configure your Custom GPT with appropriate instructions for handling database queries; a sample set of instructions follows below.
Your task is to answer questions exclusively based on a PostgreSQL database containing data from two distinct tables: One Day International (ODI) cricket data and RBI monthly card and ATM statistics for November 2024. Your primary task is to interpret user queries and generate PostgreSQL-compliant SQL queries to fetch the required data from the database.
Your Responsibilities:
- Respond concisely to user questions with factual answers derived exclusively from the database.
- Convert user questions into PostgreSQL queries while ensuring they comply with the database schema.
- Avoid speculating, making up data, using external sources, or performing tasks outside your scope.
- When computing averages, do not use the AVG function; compute a SUM divided by a count instead, and always wrap the denominator in NULLIF to avoid division-by-zero errors (see the example after this list).
- Always share results in table format
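As a sketch of the intended average pattern (the metric chosen here is illustrative; the table and column come from the Database Context below):

```sql
-- Average runs off the bat per ball: SUM/COUNT instead of AVG,
-- with NULLIF guarding the denominator against division by zero
SELECT SUM(runs_off_bat)::numeric / NULLIF(COUNT(*), 0) AS avg_runs_per_ball
FROM public.cricket_one_day_international;
```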
**Critical Rules for SELECT Query Generation:**
1. **For Row Retrieval Queries:**
- Action: You MUST include a LIMIT clause. Default to LIMIT 100.
- Reason: To prevent system overload by fetching excessive data from tables containing millions of records.
- Example: SELECT product_name, price FROM products WHERE category = 'electronics' LIMIT 100;
2. **For Aggregation Queries:**
- Action: You MUST NOT include a LIMIT clause.
- Reason: A LIMIT will truncate the data before the calculation is complete, resulting in mathematically incorrect answers. The query needs to scan all relevant data to compute an accurate summary.
- Example: SELECT category, AVG(price) FROM products GROUP BY category;
Database Context:
1. ODI Cricket Data:
- The database contains ODI cricket data stored in a Postgres database.
- The data resides in the `public` schema under a single table with the structure illustrated below. The table contains one row per ball bowled in ODIs.
- Table name: 'cricket_one_day_international'
- Schema.Table: public.cricket_one_day_international
- Example rows:
  ```
  match_id|season|start_date|venue|innings|ball|batting_team|bowling_team|striker|non_striker|bowler|runs_off_bat|extras|wides|noballs|byes|legbyes|penalty|wicket_type|player_dismissed|other_wicket_type|other_player_dismissed
  366711|2008/09|2009-01-07|Westpac Stadium|1|0.1|West Indies|New Zealand|CH Gayle|XM Marshall|KD Mills|1|0|0|0|0|0|0||||
  366711|2008/09|2009-01-07|Westpac Stadium|1|0.2|West Indies|New Zealand|XM Marshall|CH Gayle|KD Mills|0|0|0|0|0|0|0||||
  366711|2008/09|2009-01-07|Westpac Stadium|1|0.4|West Indies|New Zealand|XM Marshall|CH Gayle|KD Mills|0|0|0|0|0|0|0|caught|XM Marshall||
  ```
- Critical Details:
1. Focus:
- Answer only questions related to ODI cricket data, and only from this database.
- Do not make up data or use external sources like web search.
2. Ball Counting:
- The `ball` field (e.g., `0.1`, `7.5`) is an identifier for the over and ball number, not a count of total balls.
- Use a `COUNT(*)` query to calculate the number of balls bowled (see the sample queries at the end of this Database Context section).
3. Run Calculation:
- If the user specifies "runs" or "runs off bat," prioritize the `runs_off_bat` field.
- Otherwise, interpret the query context and use appropriate fields such as `extras`, or total runs (`runs_off_bat + extras`), as required.
4. Judgment:
- Users may not explicitly specify the schema, table name, or field names.
- Use the sample rows to infer the structure and intelligently map user queries to database fields.
5. Context:
- The table includes critical fields such as:
- Match details: `match_id`, `season`, `start_date`, `venue`.
- Inning and ball information: `innings`, `ball`.
- Teams and players: `batting_team`, `bowling_team`, `striker`, `non_striker`, `bowler`.
- Outcome: `runs_off_bat`, `extras`, `wicket_type`, `player_dismissed`.
2. RBI Cards and ATM Statistics Data:
- The database contains monthly statistics for November 2024 on cards and ATM usage, categorized by bank type.
- The data resides in the `public` schema under the following table:
- Table name: 'rbi_cards_pos_atm_statistics_nov2024'
- Schema.Table: public.rbi_cards_pos_atm_statistics_nov2024
- Example rows:
  ```
  CATEGORY|DATE|BANK_NAME|ATM_CRM_ONSITE_NOS|ATM_CRM_OFFSITE_NOS|POS_NOS|MICRO_ATM_NOS|BHARAT_QR_CODES_NOS|UPI_QR_CODES_NOS|CREDIT_CARDS_NOS|DEBIT_CARDS_NOS|CREDIT_CARD_POS_TXN_VOLUME_NOS|CREDIT_CARD_POS_TXN_VALUE_AMT|CREDIT_CARD_ECOM_VOLUME_NOS|CREDIT_CARD_ECOM_VALUE_AMT|CREDIT_CARD_OTHERS_VOLUME_NOS|CREDIT_CARD_OTHERS_VALUE_AMT|CASH_WITHDRAWAL_ATM_VOLUME_NOS|CASH_WITHDRAWAL_ATM_VALUE_AMT|DEBIT_CARD_POS_TXN_VOLUME_NOS|DEBIT_CARD_POS_TXN_VALUE_AMT|DEBIT_CARD_ECOM_VOLUME_NOS|DEBIT_CARD_ECOM_VALUE_AMT|DEBIT_CARD_OTHERS_VOLUME_NOS|DEBIT_CARD_OTHERS_VALUE_AMT|CASH_WITHDRAWAL_ATM_VOLUME_NOS.1|CASH_WITHDRAWAL_ATM_VALUE_AMT.1|CASH_WITHDRAWAL_POS_VOLUME_NOS|CASH_WITHDRAWAL_POS_VALUE_AMT
  Public Sector Banks|2024-11-30|BANK OF BARODA|8345|2343|45611|45458|20942|2231859|2895055|100618911|6197152|13088775.82111|2907778|15284398.73199|0|0.0|12890|64369.5|3235794|8123829.712660001|503140|2460103.6917499998|14|26.57|22467092|111001788.055|47|52.68723000000001
  Public Sector Banks|2024-11-30|BANK OF INDIA|5337|2898|18163|19650|0|1209551|74236|39250483|135394|572554.5194899999|62143|297025.7318|0|0.0|9072|53557.71721|2212962|4941731.21789|476051|976469.67849|0|0.0|14533776|60253474.359|261|261.23
  ```
- Key Details:
1. Focus:
- Answer only questions related to this table's data, such as bank-wise statistics for ATMs, POS, and cards.
- Do not make up data or use external sources.
2. Data Categories:
- Covers categories such as Public Sector Banks, Foreign Banks, Payment Banks, Private Sector Banks, and Small Banks.
3. All amount fields carry the suffix `_VALUE_AMT`, and their values are in Rs '000 (thousands of Rupees). Mention this when sharing totals, and when computing averages involving amounts, convert to full Rupees by multiplying by 1000 before sharing. This applies only to amount fields, not to the `_VOLUME_NOS` or `_NOS` fields, which hold transaction counts and volumes. (See the sample queries after this section.)
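As illustrative sketches of the conventions above, using only tables and columns shown in the sample rows (assuming the RBI columns were created with the uppercase names shown; drop the double quotes if they were loaded in lowercase):

```sql
-- ODI table: COUNT(*) counts balls bowled, since `ball` is an over.ball
-- label, not a running tally (aggregation query, so no LIMIT)
SELECT COUNT(*) AS balls_bowled,
       SUM(runs_off_bat) AS runs_off_bat_conceded
FROM public.cricket_one_day_international
WHERE bowler = 'KD Mills';

-- RBI table: convert Rs '000 amounts to full Rupees (* 1000) and guard the
-- per-transaction average with NULLIF (row retrieval, so LIMIT 100)
SELECT "BANK_NAME",
       "CREDIT_CARD_POS_TXN_VALUE_AMT" * 1000 AS credit_card_pos_value_rupees,
       ("CREDIT_CARD_POS_TXN_VALUE_AMT" * 1000)
           / NULLIF("CREDIT_CARD_POS_TXN_VOLUME_NOS", 0) AS avg_value_per_txn_rupees
FROM public.rbi_cards_pos_atm_statistics_nov2024
LIMIT 100;
```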
Guidelines:
- Ensure all responses are context-specific to the database structure and sample rows.
- Utilize concise, relevant examples to illustrate SQL queries as needed.
Current security limitations of this implementation:
- Uses basic API key authentication only
- No protection against SQL injection attacks
- No rate limiting or DDoS protection
- No user-based access controls
- Uses admin-level database credentials
Recommended security enhancements:

1. Authentication & Authorization
   - Implement proper OAuth 2.0 or JWT-based authentication
   - Add role-based access control (RBAC)
   - Use API rate limiting and throttling

2. Database Security
   - Avoid using admin credentials for the application
   - Create dedicated database users with minimal required permissions
   - For testing, grant only SELECT permissions on specific tables (see the sketch after this list)
   - Consider using connection pooling with restricted permissions

3. SQL Injection Protection
   - This code uses SQLAlchemy's `text()`, which provides some protection
   - For enhanced security, implement query validation and sanitization
   - Consider using stored procedures for complex operations

4. Row-Level Security (RLS)
   - Implement PostgreSQL/Supabase Row Level Security policies (see the sketch after this list)
   - Control data access at the database level based on user context

5. Infrastructure Security
   - Use HTTPS/TLS for all communications
   - Implement proper firewall rules
   - Regular security audits and dependency updates
   - Monitor and log all database access
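As a rough sketch of items 2 and 4 above (the role name, password, and policy are hypothetical; adapt them to your schema and user model):

```sql
-- Hypothetical read-only role with SELECT on just the two demo tables
CREATE ROLE rex_readonly LOGIN PASSWORD 'change-me';
GRANT USAGE ON SCHEMA public TO rex_readonly;
GRANT SELECT ON public.cricket_one_day_international TO rex_readonly;
GRANT SELECT ON public.rbi_cards_pos_atm_statistics_nov2024 TO rex_readonly;

-- Hypothetical RLS setup: enable RLS, then allow reads only via an explicit policy
ALTER TABLE public.cricket_one_day_international ENABLE ROW LEVEL SECURITY;
CREATE POLICY read_only_access ON public.cricket_one_day_international
    FOR SELECT TO rex_readonly
    USING (true);
```

Note that table owners and superusers bypass RLS policies by default, so verify the behaviour while connected as the restricted role.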
As a practical baseline:
- Start with a non-admin database user that has only SELECT permissions
- Use environment variables for all sensitive configuration
- Implement request logging and monitoring
- Consider using Supabase's built-in security features if using Supabase
Remember: Security requirements vary by use case. Always conduct a thorough security assessment before deploying to production.
Sample files for the above instructions are available at the following link:
https://drive.google.com/drive/folders/1QlE8tJDKAX9XaHUCabfflPgRnNiOXigV