Everything is a file. Even your cloud. (Eventually)
GNOS is currently a demonstration of what's possible, not a working product. Most operations are simulated to show the potential of the concept.
Imagine if all your cloud infrastructure worked like this:
```shell
# Deploy to cloud
cp api.py /mnt/gnos/cloud/aws/lambda/functions/

# Run AI inference
echo "Explain quantum computing" > /mnt/gnos/proc/llama3
cat /mnt/gnos/proc/llama3

# Call APIs
echo '{"message": "Hello"}' > /mnt/gnos/net/http/api.example.com/webhook
```

That's what we're building: a filesystem interface for all infrastructure.
- FUSE filesystem that mounts at `/mnt/gnos`
- Basic directory structure (`/proc`, `/cloud`, `/net`, `/dev`)
- Simulated AI responses when you write to `/proc/llama3`
- Security framework with capability tokens (partially working)
- Driver architecture ready for real implementations
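The capability-token model, scoping a token to a path, a permission set, and an expiry (the three things the `gnos-mount token` command takes), fits in a few lines. This is an illustrative Python sketch, not GNOS's actual token schema:

```python
import time
from dataclasses import dataclass

@dataclass
class CapabilityToken:
    """Illustrative token: grants `perms` under `path` until `expires_at`."""
    path: str
    perms: str          # e.g. "r", "w", "rw"
    expires_at: float   # UNIX timestamp

    def allows(self, path: str, perm: str) -> bool:
        # Valid only for its path subtree, its listed perms, and its lifetime.
        return (
            time.time() < self.expires_at
            and path.startswith(self.path)
            and perm in self.perms
        )

# 24-hour read/write token for /proc/llama3, mirroring:
#   gnos-mount token -p "/proc/llama3" -p "rw" -e 24
token = CapabilityToken("/proc/llama3", "rw", time.time() + 24 * 3600)
```

Checking a token on every file operation is what lets a plain `echo` carry real authorization.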
Not yet implemented:

- Real AI model integration (current responses are fake)
- Actual cloud storage operations (S3, GCS, etc.)
- Real HTTP/API calls
- Database operations
- Any actual infrastructure integration
```shell
# Mount GNOS
sudo ./target/release/gnos-mount mount -m /mnt/gnos -f

# Create a capability token
./target/release/gnos-mount token -p "/proc/llama3" -p "rw" -e 24

# Write to the simulated AI
echo "Hello AI" > /mnt/gnos/proc/llama3

# Read the simulated response
cat /mnt/gnos/proc/llama3
# Output: "GNOS AI Model: LLaMA3-7B (Simulated)..."

# That's about it for now!
```
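The `echo`/`cat` round-trip above is nothing more than write-then-read on one file. A toy in-memory stand-in for the simulated `/proc/llama3` node (not the real FUSE code) makes the contract concrete:

```python
class SimulatedAiNode:
    """Toy stand-in for GNOS's simulated /proc/llama3 file node."""

    def __init__(self):
        self.last_prompt = None

    def write(self, data: bytes) -> None:
        # Writing a prompt replaces the pending request, like `echo ... >`.
        self.last_prompt = data.decode()

    def read(self) -> bytes:
        # Reading returns a canned response, like `cat` does today.
        return f"GNOS AI Model: LLaMA3-7B (Simulated): {self.last_prompt}".encode()

node = SimulatedAiNode()
node.write(b"Hello AI")
print(node.read().decode())
```

A real driver would replace the canned string with an API call; the file interface stays identical.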
Modern development requires juggling dozens of SDKs:
```javascript
// The pain is real
import AWS from 'aws-sdk';
import { OpenAI } from 'openai';
import { MongoClient } from 'mongodb';
// ... 20 more imports

// Different auth for each
const s3 = new AWS.S3({ credentials: {...} });
const openai = new OpenAI({ apiKey: process.env.OPENAI_KEY });
// ... etc
```

GNOS proposes a radical simplification: what if everything was just file I/O?
```
Your Code → File Operation → GNOS → Driver → Actual Service
    ↓                                  ↓
fs.writeFile()              (Currently returns simulated data)
```
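The "File Operation → GNOS → Driver" hop is essentially longest-prefix routing on the path. A minimal dispatch sketch (the driver classes and prefixes here are hypothetical, not GNOS's real driver table):

```python
class EchoDriver:
    """Base stub: real drivers would talk to an actual service."""
    def write(self, path: str, data: bytes) -> str:
        return f"{type(self).__name__} handled write to {path}"

class AiDriver(EchoDriver): pass
class S3Driver(EchoDriver): pass

# Longest matching prefix wins, so /cloud/aws/s3/... routes to S3Driver.
DRIVERS = {
    "/proc/": AiDriver(),
    "/cloud/aws/s3/": S3Driver(),
}

def route(path: str) -> EchoDriver:
    matches = [prefix for prefix in DRIVERS if path.startswith(prefix)]
    if not matches:
        raise FileNotFoundError(path)
    return DRIVERS[max(matches, key=len)]

print(route("/proc/llama3").write("/proc/llama3", b"hi"))
```

Longest-prefix matching lets specific drivers (an S3 driver) shadow generic ones (a catch-all cloud driver) without any central if/else chain.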
```
/mnt/gnos/
├── proc/          # AI Models (simulated)
│   └── llama3     # Fake AI responses
├── cloud/         # Cloud Storage (not implemented)
│   ├── aws/s3/    # Would connect to S3
│   └── gcp/       # Would connect to GCS
├── net/           # HTTP APIs (not implemented)
│   └── http/      # Would make real HTTP calls
└── dev/           # Devices (not implemented)
    └── sensors/   # Would read IoT sensors
```
```shell
# Clone and build
git clone https://github.com/ahammed867/gnos
cd gnos
cargo build --release

# Mount the filesystem
sudo mkdir -p /mnt/gnos
sudo ./target/release/gnos-mount mount -m /mnt/gnos -f

# In another terminal, explore
ls /mnt/gnos
echo "What is GNOS?" > /mnt/gnos/proc/llama3
cat /mnt/gnos/proc/llama3

# Run the demo scripts (they show simulated operations)
./examples/medical_workflow.sh
./examples/real_dev_workflow.sh
```

| What We Claim | Reality |
|---|---|
| "AI inference in 2.8s" | Fake response in ~100ms |
| "S3 upload at 120MB/s" | Not implemented |
| "10x faster development" | Theoretical - needs real drivers |
We need help implementing actual drivers:
Instead of simulating everything, we should pick one integration and make it real:
```rust
// src/drivers/openai.rs - This doesn't exist yet!
impl GnosDriver for OpenAiDriver {
    async fn write(&self, path: &Path, data: &[u8]) -> Result<()> {
        // Actually call the OpenAI API
        let prompt = String::from_utf8(data.to_vec())?;
        let response = self.client.completions().create(prompt).await?;
        self.cache.insert(path.to_owned(), response);
        Ok(())
    }
}
```

- Pick a driver (OpenAI, S3, Postgres, etc.)
- Implement real API calls
- Test it works
- Submit a PR
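The write-then-read driver contract is easy to prototype in any language before committing to Rust. Here is the same idea in Python with a stubbed client so it runs offline; `FakeClient` is a placeholder for a real SDK, not an actual GNOS component:

```python
class FakeClient:
    """Placeholder for a real API client (e.g. an OpenAI SDK)."""
    def complete(self, prompt: str) -> str:
        return f"completion for: {prompt}"

class OpenAiLikeDriver:
    """Sketch of the driver contract: write = request, read = cached response."""

    def __init__(self, client):
        self.client = client
        self.cache = {}  # path -> last response, served on read()

    def write(self, path: str, data: bytes) -> None:
        # A write is a request: call the backing API and cache the result.
        prompt = data.decode()
        self.cache[path] = self.client.complete(prompt)

    def read(self, path: str) -> bytes:
        # A read returns the cached response for that path.
        return self.cache[path].encode()

driver = OpenAiLikeDriver(FakeClient())
driver.write("/proc/gpt4", b"Hello")
print(driver.read("/proc/gpt4").decode())
```

Swapping `FakeClient` for a real client is the whole job of a driver PR; the filesystem layer never changes.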
This is a research project exploring a new paradigm. We welcome:
- Feedback on the concept
- Real driver implementations
- Use case ideas
- Architecture improvements
Q: Is this production-ready?
A: No! This is a proof-of-concept. Most features are simulated.
Q: Why Rust?
A: FUSE requires low-level control, and Rust provides safety without sacrificing performance.
Q: Will this actually work for real infrastructure?
A: That's what we're trying to find out! The concept is sound, but implementation has challenges.
Q: What's the biggest challenge?
A: Mapping stateful operations (websockets, transactions) to a stateless file interface.
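One way to attack that challenge, purely as an illustration and not GNOS's design: model a session or transaction as a directory whose `ctl` file drives a state machine, a pattern borrowed from Plan 9-style file servers. The names here are hypothetical:

```python
class TxDirectory:
    """Sketch: a transaction exposed as a directory with a `ctl` file."""

    def __init__(self):
        self.pending = {}
        self.committed = {}
        self.open = False

    def write(self, name: str, data: str) -> None:
        if name == "ctl":
            # Writing "begin"/"commit"/"abort" to ctl drives the state machine.
            if data == "begin":
                self.open, self.pending = True, {}
            elif data == "commit":
                self.committed.update(self.pending)
                self.open, self.pending = False, {}
            elif data == "abort":
                self.open, self.pending = False, {}
        elif self.open:
            self.pending[name] = data  # buffered until commit
        else:
            raise IOError("no transaction open")

tx = TxDirectory()
tx.write("ctl", "begin")
tx.write("rows/1", "alice")
tx.write("ctl", "commit")
print(tx.committed)
```

Whether this scales to websockets and long-lived connections is exactly the open research question.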
Current Reality:
```python
# Today: Multiple SDKs, complex error handling
import os

import boto3
import openai
from slack_sdk import WebClient

def process_patient_scan(scan_file):
    # Upload scan to secure storage (AWS specific)
    s3 = boto3.client('s3')
    s3.upload_file(scan_file, 'hospital-bucket', f'scans/{scan_file}')

    # Extract text from scan (need another library)
    text = extract_medical_text(scan_file)  # Custom OCR code

    # Get AI analysis (OpenAI specific)
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Analyze: {text}"}]
    )

    # Notify doctor (Slack specific)
    slack = WebClient(token=os.environ["SLACK_TOKEN"])
    slack.chat_postMessage(channel="#urgent", text=response.choices[0].message)
```

With GNOS Vision:
```shell
# Future: One interface, composable operations
cat patient_scan.pdf > /mnt/gnos/cloud/secure/scans/patient123.pdf
cat patient_scan.pdf | ocr > /mnt/gnos/ai/medical-gpt4
cat /mnt/gnos/ai/medical-gpt4 > /mnt/gnos/notify/slack/urgent-care
```

Current Reality:
```javascript
// Today: SDK hell for financial data
const plaid = require('plaid');
const stripe = require('stripe');
const sendgrid = require('@sendgrid/mail');

async function generateFinancialReport(userId) {
  // Get bank data (Plaid SDK)
  const plaidClient = new plaid.Client({...});
  const accounts = await plaidClient.getAccounts(userId);

  // Get payment data (Stripe SDK)
  const stripeClient = stripe(process.env.STRIPE_KEY);
  const charges = await stripeClient.charges.list({customer: userId});

  // Generate report with AI (yet another SDK)
  const report = await openai.createCompletion({...});

  // Email report (SendGrid SDK)
  await sendgrid.send({
    to: user.email,
    subject: 'Monthly Report',
    html: report
  });
}
```

With GNOS Vision:
```shell
# Future: Unified data pipeline
cat /mnt/gnos/finance/plaid/accounts/$USER_ID > /tmp/finance.json
cat /mnt/gnos/finance/stripe/charges/$USER_ID >> /tmp/finance.json
cat /tmp/finance.json > /mnt/gnos/ai/financial-analyst
cat /mnt/gnos/ai/financial-analyst > /mnt/gnos/notify/email/$USER_EMAIL
```

Current Reality:
```shell
# Today: Different tools for each cloud
# .github/workflows/deploy.yml (GitHub specific)
# buildspec.yml (AWS specific)
# cloudbuild.yaml (GCP specific)
# azure-pipelines.yml (Azure specific)
# Plus terraform/pulumi/CDK for each...
```

With GNOS Vision:
```shell
# Future: Cloud-agnostic deployment
# Deploy to ANY cloud with the same commands
cp app.docker /mnt/gnos/build/container
cp container.tar /mnt/gnos/cloud/compute/deploy

# Switch clouds by changing paths, not rewriting code
DEPLOY_TARGET="/mnt/gnos/cloud/aws/ecs"  # or gcp/run or azure/containers
```

Current Reality:
```python
# Today: Complex ML pipelines
import pandas as pd
from sqlalchemy import create_engine
import wandb
import mlflow

def train_model():
    # Get data from database
    engine = create_engine('postgresql://...')
    df = pd.read_sql('SELECT * FROM users', engine)

    # Track with WandB
    wandb.init(project="my-model")

    # Train model
    model = train_complex_model(df)

    # Log to MLflow
    mlflow.log_model(model, "model")

    # Deploy to SageMaker
    sagemaker_client.create_endpoint(...)
```

With GNOS Vision:
```shell
# Future: Composable ML operations
cat /mnt/gnos/db/postgres/users |
  python train.py |
  tee /mnt/gnos/ml/wandb/experiments/run-42 |
  tee /mnt/gnos/ml/models/user-predictor \
  > /mnt/gnos/cloud/sagemaker/endpoints/prod
```

- Healthcare: Hospitals use 10-20 different systems. GNOS could unify them.
- Finance: Banks juggle dozens of APIs. GNOS could simplify compliance.
- DevOps: Multi-cloud is painful. GNOS could make it far simpler.
- Data Science: ML pipelines are complex. GNOS could make them composable.
The beauty is that GNOS would tackle these real problems through an interface everyone already knows: files.
- Pick ONE use case and implement it fully
- Implement ONE real driver (probably OpenAI or S3)
- Measure actual performance with real API calls
- Build community around specific use cases
Apache 2.0 - This is open research. Take the ideas and run with them!
⭐ Star if you think infrastructure should be simpler!
This is a research project. Expect rough edges and wild dreams.