A distributed file storage system built on IPFS (InterPlanetary File System) with a FastAPI backend and web interface for seamless file management. This system serves as the storage layer for the Mod-Net module registry, providing decentralized metadata storage for blockchain modules.
- 🚀 FastAPI Backend: High-performance async API with automatic documentation
- 📁 IPFS Integration: Distributed file storage with content addressing
- 🌐 Web Interface: Modern UI for file upload, browsing, and management
- 🔍 Search & Filter: Find files by name, type, or metadata
- 📊 Metadata Storage: SQLite database for file information and indexing
- 🔒 Security Ready: JWT authentication support (optional)
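The content addressing mentioned above means a file's identifier is derived from its bytes, so identical content always maps to the same address. A toy illustration with SHA-256 (real IPFS CIDs use multihash and CIDv0/v1 encodings, so this is not an actual CID):

```python
import hashlib


def toy_content_address(data: bytes) -> str:
    """Hash the raw bytes; identical content yields an identical address."""
    return hashlib.sha256(data).hexdigest()


a = toy_content_address(b"hello ipfs")
b = toy_content_address(b"hello ipfs")
c = toy_content_address(b"hello ipfs!")

assert a == b  # same content, same address
assert a != c  # any change yields a new address
```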
- Python 3.8+
- IPFS node (local or remote)
- uv package manager
- Clone the repository

  ```bash
  git clone https://github.com/Bakobiibizo/commune-ipfs.git
  cd commune-ipfs
  ```

- Install dependencies using uv (recommended)

  ```bash
  # Install uv if not already installed
  curl -LsSf https://astral.sh/uv/install.sh | sh

  # Install project dependencies
  uv sync

  # Or install in development mode
  uv pip install -e .
  ```

  Alternative: using pip

  ```bash
  pip install -r requirements.txt
  ```
- Set up IPFS (Kubo)

  Option A: Install IPFS Kubo (required for full functionality)

  ```bash
  # Download the latest Kubo release
  wget https://dist.ipfs.tech/kubo/v0.28.0/kubo_v0.28.0_linux-amd64.tar.gz
  tar -xzf kubo_v0.28.0_linux-amd64.tar.gz
  cd kubo
  sudo bash install.sh

  # Initialize the IPFS repository
  ipfs init

  # Start the IPFS daemon
  ipfs daemon
  ```

  Note: the IPFS daemon must be running for full functionality. You should see:

  ```
  RPC API server listening on /ip4/127.0.0.1/tcp/5001
  Gateway server listening on /ip4/127.0.0.1/tcp/8080
  Daemon is ready
  ```

  Option B: Use a public gateway (for development only)

  ```bash
  # Configure via environment variable
  export IPFS_API_URL="https://ipfs.infura.io:5001"
  ```
- Run the application

  ```bash
  uv run main.py
  ```
- Access the application

  - API Documentation: http://localhost:8000/docs
  - Web Interface: http://localhost:8000
```
ipfs/
├── main.py              # FastAPI application entry point
├── app/
│   ├── __init__.py
│   ├── api/             # API route handlers
│   │   ├── __init__.py
│   │   └── files.py     # File management endpoints
│   ├── models/          # Pydantic data models
│   │   ├── __init__.py
│   │   └── file.py      # File-related models
│   ├── services/        # Business logic
│   │   ├── __init__.py
│   │   ├── ipfs.py      # IPFS integration
│   │   └── database.py  # Database operations
│   └── static/          # Web UI assets
│       ├── index.html
│       ├── style.css
│       └── script.js
├── tests/               # Test suite
├── pyproject.toml       # Project configuration
└── README.md            # This file
```
This IPFS storage system includes a Module Registry integration for storing and managing blockchain module metadata. The Module Registry provides a decentralized way to store module information off-chain while keeping only content identifiers (CIDs) on-chain.
- 🔗 Multi-chain Support: Compatible with Ed25519, Ethereum, Solana, and other blockchain public key formats
- 📦 Metadata Storage: Store rich module metadata (name, version, dependencies, etc.) on IPFS
- 🔍 Search & Discovery: Find modules by name, author, chain type, tags, and more
- 📊 Statistics: Get IPFS storage statistics for module metadata
- 🗃️ Database Indexing: Local SQLite database for fast search and retrieval
- Register Module Metadata: Store module information on IPFS
- Get CID: Receive IPFS Content Identifier for the metadata
- Register On-Chain: Store the CID on the Substrate pallet (performed outside this service)
- Retrieve: Query pallet for CID, then fetch metadata from IPFS
- `POST /api/modules/register` - Register module metadata on IPFS
- `GET /api/modules/{cid}` - Retrieve module metadata by CID
- `POST /api/modules/search` - Search modules by various criteria
- `DELETE /api/modules/{cid}` - Unregister a module (remove from database, optionally unpin)
- `GET /api/modules/{cid}/stats` - Get IPFS statistics for module metadata
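As a sketch of how the search endpoint might be called from Python, the snippet below builds and sends a JSON `POST` request with the standard library. The base URL and the query field names (`chain_type`, `tags`) are assumptions based on the defaults and metadata fields described in this README, not a verified client API:

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"  # assumed default backend address


def build_search_request(query: dict, base: str = API_BASE) -> urllib.request.Request:
    """Build a POST request for /api/modules/search (query field names assumed)."""
    return urllib.request.Request(
        f"{base}/api/modules/search",
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def search_modules(query: dict) -> dict:
    """Send the search request; requires the backend to be running."""
    with urllib.request.urlopen(build_search_request(query)) as resp:
        return json.load(resp)


# Usage (backend must be running):
# results = search_modules({"chain_type": "ed25519", "tags": ["defi"]})
```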
Use the provided integration client to test the complete workflow:
```bash
# Run the integration demo
cd commune-ipfs
uv run python integration_client.py

# Run integration tests
uv run python test_module_integration.py
```

```python
import asyncio

from integration_client import ModuleRegistryClient, ModuleMetadata


async def register_module():
    metadata = ModuleMetadata(
        name="my-awesome-module",
        version="1.0.0",
        description="An awesome blockchain module",
        author="developer@example.com",
        license="MIT",
        repository="https://github.com/user/my-awesome-module",
        dependencies=["substrate-api"],
        tags=["defi", "substrate"],
        public_key="0x1234567890abcdef...",
        chain_type="ed25519",
    )

    async with ModuleRegistryClient() as client:
        # Register metadata on IPFS
        result = await client.register_module_metadata(metadata)
        print(f"CID: {result['cid']}")

        # Register the CID on the Substrate pallet (placeholder)
        substrate_result = client.register_on_substrate(
            public_key=bytes.fromhex(metadata.public_key.replace("0x", "")),
            cid=result["cid"],
        )

    return result["cid"]


# Run the example
cid = asyncio.run(register_module())
```

- IPFS Daemon Running: the IPFS daemon must be active (see installation above)
- Backend Running: start the commune-ipfs backend with `uv run python main.py`
- Dependencies: install all dependencies with `uv pip install -e .`
- `POST /api/files/upload` - Upload a file to IPFS
- `GET /api/files/{cid}` - Download a file by CID
- `GET /api/files/` - List all files with metadata
- `DELETE /api/files/{cid}` - Remove a file from local storage
- `GET /api/files/{cid}/info` - Get file metadata
- `POST /api/files/search` - Search files by criteria
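The download endpoint can also be called programmatically. A minimal sketch using only the standard library; the server address is the documented default, and `file_url`/`download_file` are hypothetical helper names, not part of this repo:

```python
import urllib.request
from pathlib import Path

API_BASE = "http://localhost:8000"  # assumed default server address


def file_url(cid: str, base: str = API_BASE) -> str:
    """URL for downloading a stored file by its CID."""
    return f"{base}/api/files/{cid}"


def download_file(cid: str, dest: Path) -> Path:
    """Download a file from the storage API and write it to dest."""
    with urllib.request.urlopen(file_url(cid)) as resp:
        dest.write_bytes(resp.read())
    return dest


# Usage (requires the backend to be running):
# download_file("QmYourCidHere", Path("downloaded_file.txt"))
```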
```bash
# Upload a file
curl -X POST "http://localhost:8000/api/files/upload" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@example.txt"

# Get file info
curl "http://localhost:8000/api/files/{cid}/info"

# Download file
curl "http://localhost:8000/api/files/{cid}" -o downloaded_file.txt
```

Environment variables:
```bash
# IPFS Configuration
IPFS_API_URL=http://localhost:5001        # IPFS API endpoint
IPFS_GATEWAY_URL=http://localhost:8080    # IPFS gateway for file access

# Database
DATABASE_URL=sqlite:///./files.db         # SQLite database path

# Server
HOST=0.0.0.0
PORT=8000
DEBUG=false

# Security (optional)
SECRET_KEY=your-secret-key-here
ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
```

Run the test suite:

```bash
uv run pytest
```

Format and type-check the code:

```bash
uv run black .
uv run isort .
uv run mypy .
```

```dockerfile
# Dockerfile example
FROM python:3.11-slim

WORKDIR /app
COPY . .

RUN pip install uv
RUN uv sync

EXPOSE 8000
CMD ["uv", "run", "main.py"]
```

- Use PostgreSQL instead of SQLite for better performance
- Set up IPFS cluster for redundancy
- Configure reverse proxy (nginx) for static file serving
- Enable HTTPS with SSL certificates
- Set up monitoring and logging
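The environment variables listed in the configuration section above can be read with fallbacks to their documented defaults. A minimal sketch; `load_settings` is a hypothetical helper, not the application's actual config loader:

```python
import os


def load_settings() -> dict:
    """Read service settings from the environment, using the documented defaults."""
    return {
        "ipfs_api_url": os.environ.get("IPFS_API_URL", "http://localhost:5001"),
        "ipfs_gateway_url": os.environ.get("IPFS_GATEWAY_URL", "http://localhost:8080"),
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///./files.db"),
        "host": os.environ.get("HOST", "0.0.0.0"),
        "port": int(os.environ.get("PORT", "8000")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }


settings = load_settings()
```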
```bash
# Clone the repository
git clone https://github.com/Bakobiibizo/commune-ipfs.git
cd commune-ipfs

# Install dependencies
uv sync

# Install pre-commit hooks (optional but recommended)
pre-commit install
```

```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=app --cov-report=html

# Run specific test files
uv run python test_core.py
uv run python test_module_integration.py
```

```bash
# Format code
uv run black .
uv run isort . --profile black

# Lint code
uv run ruff check .
uv run mypy .

# Run all quality checks
uv run python run_all_tests.py
```

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes and add tests
- Run the test suite and ensure all tests pass
- Run code quality checks (black, isort, ruff, mypy)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
We follow the Conventional Commits specification:
- `feat:` - New features
- `fix:` - Bug fixes
- `docs:` - Documentation changes
- `style:` - Code style changes (formatting, etc.)
- `refactor:` - Code refactoring
- `test:` - Adding or updating tests
- `chore:` - Maintenance tasks
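These prefixes can be checked mechanically before committing. A minimal sketch of a commit-message validator; this helper is illustrative and not part of the repository's tooling:

```python
import re

# Conventional Commit types accepted by this project, with an optional scope.
COMMIT_RE = re.compile(r"^(feat|fix|docs|style|refactor|test|chore)(\([\w-]+\))?: .+")


def is_conventional(message: str) -> bool:
    """Return True if the first line of the message follows Conventional Commits."""
    return bool(COMMIT_RE.match(message.splitlines()[0]))


assert is_conventional("feat: add amazing feature")
assert is_conventional("fix(api): handle missing CID")
assert not is_conventional("added some stuff")
```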
This project is licensed under the MIT License - see the LICENSE file for details.
See CHANGELOG.md for a detailed history of changes.
- PostgreSQL support for production deployments
- IPFS cluster integration for high availability
- Advanced search and filtering capabilities
- Webhook support for file events
- Integration with more blockchain networks
- Performance optimizations and caching
- Docker Compose setup for easy deployment
IPFS Connection Failed
- Ensure the IPFS daemon is running: `ipfs daemon`
- Check the IPFS API URL in the configuration
- Verify firewall settings for port 5001
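A quick way to test the last two points is to probe the API port directly. A minimal sketch assuming the default API address `127.0.0.1:5001`; `ipfs_api_reachable` is a hypothetical helper:

```python
import socket


def ipfs_api_reachable(host: str = "127.0.0.1", port: int = 5001, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the IPFS API port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: daemon not reachable on this port.
        return False


print("IPFS API reachable:", ipfs_api_reachable())
```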
File Upload Errors
- Check file size limits in FastAPI configuration
- Ensure sufficient disk space
- Verify IPFS node has write permissions
Database Errors
- Check SQLite file permissions
- Ensure database directory exists
- Run database migrations if applicable
For support and questions:
- Create an issue on GitHub
- Check the IPFS documentation
- Review FastAPI documentation