Rica is an intelligent code editor powered by DeepSeek Coder, providing advanced code editing, explanation, and automation capabilities.
- AI-powered code editing and suggestions
- Intelligent code analysis and explanations
- Real-time code completion
- Automated refactoring
- Smart debugging assistance
- Code documentation generation
- Test case generation
- Python 3.8 or later
- Node.js 14 or later
- 16GB RAM minimum (32GB recommended)
- NVIDIA GPU with 8GB VRAM minimum for optimal performance
- Clone this repository
- Run `setup.bat` (Windows) or `setup.sh` (Linux/Mac)
- Follow the terminal prompts
If the automatic setup doesn't work, you can set up the components manually:
- Set up the model server:

```bash
python -m venv venv
source venv/bin/activate  # or `venv\Scripts\activate` on Windows
python setup.py
cd rica-server
pip install -r requirements.txt
python server.py
```
- Set up the API server:

```bash
cd rica-api
npm install
npm start
```
- Set up the frontend:

```bash
cd rica-ui
npm install
npm start
```
- Model settings can be configured in `rica-server/server.py`
- API settings can be configured in `rica-api/.env`
- Frontend settings can be configured in `rica-ui/.env`
- Open http://localhost:3000 in your browser
- Use the code editor as you normally would
- Access AI features through:
- Command palette (Ctrl/Cmd + Shift + P)
- Right-click menu
- Starry AI sidebar
- `explain` - Get an explanation of selected code
- `refactor` - Get suggestions for code improvement
- `debug` - Get help with debugging
- `test` - Generate test cases
- `find` - Search for similar code patterns
- `create` - Generate new code
- `edit` - Get suggestions for code changes
- Ctrl/Cmd + Shift + E - Explain code
- Ctrl/Cmd + Shift + R - Refactor code
- Ctrl/Cmd + Shift + D - Debug code
- Ctrl/Cmd + Shift + T - Generate tests
- Ctrl/Cmd + Shift + Space - Open AI suggestions
- If the model server fails to start (a loading sketch follows this list):
  - Check if you have enough RAM and VRAM
  - Try running with `--load_in_8bit` instead of `--load_in_4bit`
  - Use CPU-only mode by removing `device_map="auto"`
- If the API server fails to start:
  - Check if port 3001 is available
  - Verify environment variables in `.env`
- If the frontend fails to start:
  - Check if port 3000 is available
  - Clear the npm cache and delete `node_modules`
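
The quantization flags above correspond to standard Hugging Face loading options. Below is a hypothetical sketch of how `rica-server/server.py` might load DeepSeek Coder, assuming the `transformers` and `bitsandbytes` packages; the model id and the exact argument handling in the real server may differ.

```python
# Hypothetical sketch of the model-loading step in rica-server/server.py.
# Assumes transformers + bitsandbytes are installed; adapt to the real script.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "deepseek-ai/deepseek-coder-6.7b-instruct"  # example model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    load_in_8bit=True,   # needs less VRAM than fp16; use load_in_4bit=True to go lower
    device_map="auto",   # remove this argument (and the quantization) to run CPU-only
)
```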
Contributions are welcome! Please read our contributing guidelines and submit pull requests.
MIT License - feel free to use this in your projects!

MVP - AI-native Cyber Cockpit (Starry + Swarm + Sims)

This repository is an MVP scaffold that wires a lightweight Node.js middleware (rica-api) and a React frontend (rica-ui) to act as a polished UI for headless engines:
- OpenCTI (knowledge graph) - headless, run your own instance.
- OpenBAS / Camoufox (Breach & Attack Simulation) - headless engine.
- Ollama (local LLM runtime) used by Starry / DeepSeek copilot.
This scaffold is intended for fast MVP launch. It assumes you already have OpenCTI and OpenBAS (Camoufox) running in your environment (Docker or Kubernetes). The scaffold exposes:
- `rica-api` - the orchestration / fusion layer. Routes AI queries to Ollama and proxies calls to OpenCTI/OpenBAS.
- `rica-ui` - React-based Gotham-style UI shell (left nav, graph area placeholder, Starry right panel).
- `rica-api/`: Node.js Express middleware with endpoints (see the usage sketch after this list):
  - `GET /api/threat-actors` -> proxies OpenCTI GraphQL
  - `POST /api/simulate` -> posts to OpenBAS/Camoufox
  - `POST /api/starry` -> sends prompts to Ollama (or OpenAI if configured)
  - Credit wallet simulation (in-memory)
- `rica-ui/`: React frontend (Create React App style) with sidebar + center + Starry right panel; connects to rica-api.
- `docker-compose.yml`: service definitions for `rica-api` and `rica-ui` (development). Engines (OpenCTI/OpenBAS) are expected to be external.
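
To exercise those endpoints quickly, here is a hedged client sketch in Python. It assumes the API listens on `localhost:3001` and that the demo `API_KEY` is passed in an `x-api-key` header; the header name and payload fields are assumptions, so check `README_API.md` for the actual contract.

```python
# Hypothetical client for the rica-api endpoints; header and payload shapes are assumptions.
import requests

BASE = "http://localhost:3001"
HEADERS = {"x-api-key": "demo-key"}  # assumed header name; see README_API.md

# GET /api/threat-actors -> proxied to OpenCTI GraphQL
actors = requests.get(f"{BASE}/api/threat-actors", headers=HEADERS).json()

# POST /api/simulate -> forwarded to OpenBAS/Camoufox
sim = requests.post(
    f"{BASE}/api/simulate",
    json={"scenario": "example-scenario"},  # assumed payload shape
    headers=HEADERS,
).json()

# POST /api/starry -> prompt routed to Ollama (or OpenAI if configured)
reply = requests.post(
    f"{BASE}/api/starry",
    json={"prompt": "Summarize the latest threat actors"},
    headers=HEADERS,
).json()
print(reply)
```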
- Ensure OpenCTI and OpenBAS (Camoufox) are running and reachable:
  - OpenCTI GraphQL: e.g. `http://opencti:4000/graphql` or `http://localhost:4000/graphql`
  - OpenBAS API: e.g. `http://openbas:8080/api` or `http://localhost:8080/api`
- Optional: Run Ollama locally for a private LLM: `ollama serve`, or follow the Ollama docs.
- From this repo root:
```bash
# build & run via docker-compose (dev)
docker-compose up --build
```

This will start `rica-api` (port 3001) and `rica-ui` (port 3000).

If you prefer to run locally:

```bash
# Run API
cd rica-api
npm install
npm run start

# In a separate terminal: Run UI
cd rica-ui
npm install
npm start
```
Create `rica-api/.env` from `.env.example` and set:

- `OPENCTI_GRAPHQL_URL` - e.g. http://localhost:4000/graphql
- `OPENBAS_API_URL` - e.g. http://localhost:8080/api
- `OLLAMA_URL` - e.g. http://localhost:11434 (Ollama default)
- `API_KEY` - a token used to protect the Rica API (for demo only)
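
For reference, a minimal `rica-api/.env` for a local setup might look like the sketch below; all values are placeholders to adapt to your environment.

```
OPENCTI_GRAPHQL_URL=http://localhost:4000/graphql
OPENBAS_API_URL=http://localhost:8080/api
OLLAMA_URL=http://localhost:11434
API_KEY=demo-key-change-me
```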
- Docker and Docker Compose installed
- Access to a Docker registry (optional for production)
- Domain name and SSL certificate (for production)
- Configure environment variables in `.env` files:
  - `rica-api/.env`: API settings, external service URLs, and security keys
  - `rica-ui/.env`: Frontend settings and API URL
- For production, ensure these critical variables are set:
  - `NODE_ENV=production`
  - `API_KEY` with a strong, unique value
  - `DEEPSEEK_API_KEY` with your valid API key
  - `FRONTEND_URL` with your production domain
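
A hedged example of how those production overrides might look in `rica-api/.env` (placeholder values only):

```
NODE_ENV=production
API_KEY=<strong-unique-secret>
DEEPSEEK_API_KEY=<your-deepseek-api-key>
FRONTEND_URL=https://rica.example.com
```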
```bash
# Build and run in production mode
NODE_ENV=production docker-compose up --build -d
```
- Build and push Docker images to your registry:

```bash
docker build -t your-registry/rica-api:latest ./rica-api
docker build -t your-registry/rica-ui:latest ./rica-ui
docker push your-registry/rica-api:latest
docker push your-registry/rica-ui:latest
```
- Apply Kubernetes manifests (examples in the `k8s/` directory):

```bash
kubectl apply -f k8s/
```
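
As an illustration only (the actual manifests in `k8s/` may differ), a minimal Deployment for the API could look like this:

```yaml
# Hypothetical example of a manifest the k8s/ directory might contain.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rica-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rica-api
  template:
    metadata:
      labels:
        app: rica-api
    spec:
      containers:
        - name: rica-api
          image: your-registry/rica-api:latest
          ports:
            - containerPort: 3001
          envFrom:
            - secretRef:
                name: rica-api-env   # Secret created from rica-api/.env
```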
- AWS: Use ECS or EKS with Application Load Balancer
- Azure: Use AKS with Azure Container Registry
- GCP: Use GKE with Container Registry
- Use a reverse proxy (Nginx, Traefik) with SSL termination
- Implement proper network segmentation
- Set up monitoring and alerting
- Configure regular backups of data
- Use secrets management for sensitive values
- Replace in-memory credit wallet with persistent DB (Postgres / Redis)
- Add authentication (OIDC / SSO, JWT), rate-limits, RBAC
- Move LLM inference to a managed inference cluster (vLLM / Ollama at scale)
- Run OpenCTI/OpenBAS behind private networks with secure service-to-service auth
- Use horizontal pod autoscaling in Kubernetes
A production-ready guide and runnable demo code are included. Read the README and `README_API.md` for endpoint details.
-- End of quickstart --