✅ Repo verified: Hackathon_7March — pushed from version1.1 branch
This repository contains everything you need to run the app locally.
Prerequisites: Node.js
- Install dependencies:
  `npm install`
- Set the required API keys in `.env`:
  - `GEMINI_API_KEY`: For Google Gemini AI.
  - `EXA_API_KEY`: For Exa search grounding.
  - `GMI_CLOUD_API_KEY`: (Optional) For GMI Cloud models.
- Run the app (Frontend + Backend):
  `npm run dev`
The app will be available at http://localhost:3000. The backend runs on localhost:8080 (proxied via /api).
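The three keys above can be collected in a root `.env` file. A minimal placeholder example (values are illustrative, not real keys):

```
GEMINI_API_KEY=your_gemini_key_here
EXA_API_KEY=your_exa_key_here
# Optional: only needed if you use GMI Cloud models
GMI_CLOUD_API_KEY=your_gmi_key_here
```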
This project is optimized for deployment on Vercel. It uses Vercel Serverless Functions for the backend logic.
- Connect to Vercel: Push your code to GitHub/GitLab/Bitbucket and import it as a New Project on Vercel.
- Environment Variables: Add the following secrets in the Vercel Dashboard:
  - `GEMINI_API_KEY`
  - `EXA_API_KEY`
  - `GMI_CLOUD_API_KEY` (if using GMI models)
- Build Settings: Vercel should automatically detect Vite. Use the default settings:
  - Framework Preset: Vite
  - Build Command: `npm run build`
  - Output Directory: `dist`
- Deploy: Vercel will build the frontend and automatically set up the serverless functions in the `api/` directory.
- `src/`: React frontend (Vite).
- `api/`: Serverless functions (Express) handling `/api/*` routes.
- `vercel.json`: Routing and configuration.
- `package.json`: Unified dependencies and scripts.
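For this layout, the `vercel.json` typically rewrites all `/api/*` traffic into the serverless entry point. The sketch below is an assumption about its contents (the destination path depends on how the Express handler is exported), not a copy of the project's actual file:

```json
{
  "rewrites": [
    { "source": "/api/(.*)", "destination": "/api/index" }
  ]
}
```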
Atlas supports GMI Cloud as an alternative LLM provider alongside Google Gemini. GMI Cloud offers an OpenAI-compatible inference API with models such as DeepSeek-R1, Llama 3.3 70B, and Qwen 2.5 72B.
Browser → POST /api/gmi/chat → Express backend → https://api.gmi-serving.com/v1
The GMI_CLOUD_API_KEY is kept on the backend so it is never exposed in the browser bundle.
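To illustrate the flow above, here is a minimal sketch of how the backend might assemble the proxied request to GMI Cloud's OpenAI-compatible endpoint. The base URL and env-var name come from this README; `buildGmiRequest` is a hypothetical helper, not the project's actual code:

```typescript
// Base URL for GMI Cloud's OpenAI-compatible API (from this README).
const GMI_BASE_URL = "https://api.gmi-serving.com/v1";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the outgoing request. The API key is read server-side only,
// so it never appears in the browser bundle.
function buildGmiRequest(model: string, messages: ChatMessage[], apiKey: string) {
  return {
    url: `${GMI_BASE_URL}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // key stays on the backend
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}
```

The backend would pass `req.body.model` and `req.body.messages` from the browser's `POST /api/gmi/chat` call into this helper and forward the result with `fetch`.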
- Obtain a GMI Cloud API key at https://app.gmi-serving.com/api-keys
- Add it to your root `.env` file or Vercel environment variables:
  `GMI_CLOUD_API_KEY=your_key_here`
- GMI Cloud logic is now integrated into the `api/` serverless functions. Local development automatically starts the server.
- In the Atlas chat UI, open the model selector and choose any GMI Cloud model:
  - `deepseek-ai/DeepSeek-R1`
  - `meta-llama/Meta-Llama-3.3-70B-Instruct`
  - `Qwen/Qwen2.5-72B-Instruct`
| Method | Path | Description |
|---|---|---|
| POST | `/api/gmi/chat` | Proxies a chat completion request to GMI Cloud |
Request body:

```json
{
  "model": "deepseek-ai/DeepSeek-R1",
  "messages": [
    { "role": "system", "content": "..." },
    { "role": "user", "content": "Suggest beach destinations under $2000" }
  ]
}
```

Response:

```json
{ "content": "{ ... atlas JSON response ... }" }
```
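Since the response wraps the model output as a JSON string inside `content`, a frontend caller has to parse that string before using it. A small sketch, assuming the `{ content: string }` shape shown above (`parseAtlasResponse` is a hypothetical helper):

```typescript
// Response shape returned by POST /api/gmi/chat (from this README).
interface GmiChatResponse {
  content: string;
}

// Parse the stringified Atlas payload inside `content`.
// Returns null instead of throwing when the model emits malformed JSON.
function parseAtlasResponse(res: GmiChatResponse): unknown {
  try {
    return JSON.parse(res.content);
  } catch {
    return null;
  }
}
```

A caller would `fetch("/api/gmi/chat", ...)`, read the JSON body, and hand it to this helper before rendering.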