API and MCP services that generate fully functional Rust projects from natural language.

- **Vector Search** 🔍 - Search for similar projects and errors.
- **Docker Containerization** 🐳 - Easy deployment with Docker.
- **Asynchronous Processing** ⏳ - Handle long-running operations efficiently.
- **Multiple Service Interfaces** 🔄 - REST API and MCP (Model-Compiler-Processor) interfaces.

---

The easiest way to run the services is with Docker (see below). Or, if you want to run the services directly on your own computer, you will need:

- **Python 3.8+** 🐍
- **Rust Compiler and cargo tools** 🦀

---

## 📦 Install

```bash
git clone https://github.com/WasmEdge/Rust_coder
cd Rust_coder
```

## 🚀 Configure and run
If you are running with Docker Compose, stop the services when you are done:

```bash
docker-compose stop
```

By default, you will need a [Qdrant server](https://qdrant.tech/documentation/quickstart/) running on `localhost` port `6333`. You also need a [local Gaia node](https://github.com/GaiaNet-AI/node-configs/tree/main/qwen-2.5-coder-3b-instruct-gte). Set the following environment variables in your terminal to point to the Qdrant and Gaia instances, as well as your Rust compiler tools.

```
QDRANT_HOST=localhost
QDRANT_PORT=6333
LLM_API_BASE=http://localhost:8080/v1
LLM_MODEL=Qwen2.5-Coder-3B-Instruct
LLM_EMBED_MODEL=nomic-embed
LLM_API_KEY=your_api_key
LLM_EMBED_SIZE=768
CARGO_PATH=/path/to/cargo
RUST_COMPILER_PATH=/path/to/rustc
```
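
If you do not yet have a Qdrant instance running, the quickstart command from the Qdrant documentation starts one on the default port:

```bash
docker run -p 6333:6333 qdrant/qdrant
```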

Start the services.
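
For a local (non-Docker) run, a minimal sketch, assuming the FastAPI app object in `app/main.py` is named `app` (both are assumptions based on the project structure below):

```bash
# Serve the REST API on port 8000; module path and app name are assumptions.
uvicorn app.main:app --host 0.0.0.0 --port 8000
```
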
The API provides endpoints for project generation, compilation, and error fixing.

A generated project is returned as plain text, with each file introduced by a `[filename: ...]` marker:

```
[filename: Cargo.toml]
[package]
name = "calculator"
version = "0.1.0"
edition = "2021"

[dependencies]
... ...

[filename: src/main.rs]
fn main() {
    // Calculator implementation
}
... ...
```
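
For illustration, a hedged request sketch against a project-generation endpoint (the path and field names are assumptions, not taken from this README):

```bash
# Generate a calculator project; the service responds in the format above.
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"description": "A command-line calculator", "requirements": "support +, -, *, /"}'
```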

## 📁 Project structure

```
Rust_coder/
├── app/                              # Application code
│   ├── compiler.py                   # Rust compilation handling
│   ├── llm_client.py                 # LLM API client
│   ├── llm_tools.py                  # Tools for LLM interactions
│   ├── load_data.py                  # Data loading utilities
│   ├── main.py                       # FastAPI application & endpoints
│   ├── mcp_server.py                 # MCP server implementation
│   ├── mcp_service.py                # Model-Compiler-Processor service
│   ├── mcp_tools.py                  # MCP-specific tools
│   ├── prompt_generator.py           # LLM prompt generation
│   ├── response_parser.py            # Parse LLM responses into files
│   ├── utils.py                      # Utility functions
│   └── vector_store.py               # Vector database interface
├── data/                             # Data storage
│   ├── error_examples/               # Error examples for vector search
│   └── project_examples/             # Project examples for vector search
├── docker-compose.yml                # Docker Compose configuration
├── Dockerfile                        # Docker configuration
├── examples/                         # Example scripts for using the API
│   ├── compile_endpoint.txt          # Example for compile endpoint
│   ├── compile_and_fix_endpoint.txt  # Example for compile-and-fix endpoint
│   ├── mcp_client_example.py         # Example MCP client usage
│   └── run_mcp_server.py             # Example for running MCP server
├── mcp-proxy-config.json             # MCP proxy configuration
├── output/                           # Generated project output
├── parse_and_save_qna.py             # Q&A parsing utility
├── qdrant_data/                      # Vector database storage
├── requirements.txt                  # Python dependencies
├── templates/                        # Prompt templates
│   └── project_prompts.txt           # Templates for project generation
└── .env                              # Environment variables
```

---
Expand All @@ -384,10 +404,46 @@ Compilation Feedback Loop: Automatically compiles, detects errors, and fixes the

**File Parsing:** Converts LLM responses into project files with `response_parser.py`.
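
As a rough illustration of that parsing step, a minimal shell sketch of the same idea (not the project's actual parser):

```bash
# Split a saved LLM response on "[filename: ...]" markers into files.
# Mimics what app/response_parser.py does conceptually (an assumption).
awk '
  /^\[filename: .*\]$/ {
    out = $0
    sub(/^\[filename: /, "", out)
    sub(/\]$/, "", out)
    system("mkdir -p \"$(dirname \"" out "\")\"")
    next
  }
  out != "" { print > out }
' response.txt
```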

#### Architecture

- **REST API Interface** (`app/main.py`): FastAPI application exposing HTTP endpoints for project generation, compilation, and error fixing.
- **MCP Interface** (`mcp_server.py`, `app/mcp_service.py`): Server-Sent Events interface offering the same functionality (see the example below).
- **Vector Database** (`app/vector_store.py`): Qdrant stores and searches similar projects and error examples.
- **LLM Integration** (`app/llm_client.py`): Communicates with LLM APIs (such as Gaia nodes) for code generation and error fixing.
- **Compilation Pipeline** (`app/compiler.py`): Compiles Rust code, detects errors, and feeds them back for fixing.
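
The repository ships example scripts for the MCP interface. A hedged sketch, assuming they run unmodified from the repository root:

```bash
# Start the MCP server, then exercise it from the example client
# (script paths come from the project structure above; any required
# arguments are an assumption left to the scripts themselves).
python examples/run_mcp_server.py &
python examples/mcp_client_example.py
```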

#### Process Flow

**Project Generation:**

1. The user provides a description and requirements.
2. The system builds a prompt from templates (`templates/project_prompts.txt`).
3. The LLM generates a complete Rust project.
4. The response is parsed into individual files (`app/response_parser.py`).
5. The project is compiled to verify correctness.

**Error Fixing:**

1. The system attempts to compile the provided code.
2. If errors occur, they are extracted and analyzed.
3. Vector search may surface similar past errors.
4. The LLM receives the errors and the original code and generates fixes.
5. The process repeats until compilation succeeds or the maximum number of attempts is reached (see the request sketch below).
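
As a concrete illustration of this loop, a hedged request sketch (the endpoint name mirrors `examples/compile_and_fix_endpoint.txt`; the payload field names are assumptions):

```bash
# Ask the service to compile the snippet and fix it if compilation fails.
curl -X POST http://localhost:8000/compile-and-fix \
  -H "Content-Type: application/json" \
  -d '{"code": "fn main() { println!(\"hello\") }", "max_attempts": 3}'
```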

---

## 🤝 Contributing
Contributions are welcome! Feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

---
