
Add cloud-agnostic resource provisioning layer (databases, caches, queues, storage, LLM, search)#4

Draft
Copilot wants to merge 3 commits into copilot/implement-enterprise-grade-extensions from copilot/add-cloud-resource-provisioning

Conversation


Copilot AI commented Feb 25, 2026

Adds fastops/resources.py: a unified provisioning layer for application services. While azure.py and aws.py handle infrastructure (VMs, networks), resources.py handles application dependencies: databases, caches, queues, storage buckets, LLM endpoints, serverless functions, and search engines.

Core Design

Every resource function returns (env_dict, compose_svc_kwargs_or_None):

  • env_dict: environment variables for app connectivity (DATABASE_URL, REDIS_URL, etc.)
  • compose_svc_kwargs: Docker service config when provider='docker'; None for cloud providers

App code reads env vars uniformly regardless of provider; switching from local dev to production is just a matter of changing the provider string.
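As an illustration of that contract (a hypothetical sketch, not the shipped code), an llm() resource might look like:

```python
def llm(name='gpt-4o', provider='docker', **kw):
    """Sketch of the (env_dict, compose_svc_kwargs_or_None) contract."""
    if provider == 'docker':
        # Local dev: run Ollama in a container and point the app at it.
        env = {'LLM_ENDPOINT': 'http://ollama:11434', 'LLM_MODEL': name}
        svc = {'image': 'ollama/ollama', 'ports': {'11434': '11434'},
               'restart': 'unless-stopped'}
        return env, svc
    if provider == 'openai':
        # Managed service: env vars only, no container to run.
        env = {'LLM_ENDPOINT': 'https://api.openai.com/v1', 'LLM_MODEL': name}
        return env, None
    raise NotImplementedError(provider)
```

The app reads LLM_ENDPOINT either way; only the provider string differs between dev and prod.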

Implementation

New file: fastops/resources.py

  • database(name, engine, provider, **kw) - postgres/mysql/mongo on docker/aws/azure
  • cache(name, provider, **kw) - Redis on docker/aws/azure
  • queue(name, provider, **kw) - RabbitMQ/SQS/Service Bus/Pub/Sub on docker/aws/azure/gcp
  • bucket(name, provider, **kw) - MinIO/S3/Azure Storage/GCS on docker/aws/azure/gcp
  • llm(name, provider, **kw) - Ollama/OpenAI/Azure OpenAI/Bedrock on docker/openai/azure/aws
  • function(name, runtime, handler, provider, **kw) - Lambda/Azure Functions/Cloud Functions on aws/azure/gcp
  • search(name, provider, **kw) - Elasticsearch/OpenSearch/Azure Search on docker/aws/azure
  • stack(resources, provider) - composability helper that merges a set of resources into (env, Compose, volumes)
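A minimal sketch of the merge logic stack() implies, assuming each resource callable returns the (env, svc) tuple described above. Plain dicts stand in for the fastops Compose object here, and the volume hoisting is a guess at the described behavior, not the merged code:

```python
def stack(resources, provider=None):
    """Merge resource tuples into (env, services, volumes) dicts."""
    env, services, volumes = {}, {}, {}
    for key, make in resources.items():
        r_env, r_svc = make()
        env.update(r_env)
        if r_svc is not None:
            # Hoist named volumes out of the service kwargs so the
            # top-level compose file can declare them.
            for vol in r_svc.get('volumes', {}):
                volumes[vol] = None
            services[key] = r_svc
    return env, services, volumes
```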

Updated: fastops/__init__.py

  • Export all resource functions at top level

Updated: fastops/ship.py

  • Add resources parameter to ship()
  • Auto-provision resources based on deployment target (to param)
  • Inject resource env vars into app container
  • Merge resource services into docker-compose

Example Usage

from fastops import ship, database, cache, queue, bucket, llm

resources = {
    'db': lambda: database('myapp', engine='postgres', provider='docker'),
    'cache': lambda: cache('redis', provider='docker'),
    'queue': lambda: queue('tasks', provider='docker'),
    'storage': lambda: bucket('uploads', provider='docker'),
    'ai': lambda: llm('gpt-4o', provider='openai')  # cloud service
}

# Local dev
ship(path='./app', to='docker', port=8000, resources=resources)
# → docker-compose with postgres + redis + rabbitmq + minio + app
# → app gets DATABASE_URL, REDIS_URL, QUEUE_URL, S3_ENDPOINT, LLM_ENDPOINT

# Production
ship(path='./app', to='azure', port=8000, resources=resources)
# → provisions Azure Database for PostgreSQL, Azure Cache, etc.
# → same env vars, different endpoints

Pattern

from fastops import ship, database, stack

# Define once
resources = {
    'db': lambda: database('prod', provider='docker')
}

# Compose manually
env, compose, volumes = stack(resources)
compose = compose.svc('app', build='.', environment=env)
compose.save('docker-compose.yml')

# Or integrate with ship()
ship(path='.', to='docker', resources=resources)

All Docker services include restart: unless-stopped. Cloud-provider imports are lazy-loaded, so importing the module never requires the cloud CLIs.
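The lazy-loading mentioned here typically looks like the following sketch, shown for a hypothetical bucket(): the cloud helper import sits inside the function body, so merely importing the module costs nothing, and a missing CLI wrapper only surfaces when a cloud provider is actually requested:

```python
def bucket(name='uploads', provider='docker', **kw):
    """Sketch: lazy-load cloud helpers so module import needs no cloud CLI."""
    if provider == 'docker':
        env = {'S3_ENDPOINT': 'http://minio:9000', 'S3_BUCKET': name}
        svc = {'image': 'minio/minio', 'ports': {'9000': '9000'},
               'restart': 'unless-stopped'}
        return env, svc
    # Deferred import: inside the package this resolves to fastops/aws.py.
    # At module import time this line is never executed.
    from .aws import callaws  # noqa: F401
    raise NotImplementedError('cloud branch elided in this sketch')
```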

Original prompt

Add Cloud Resource Provisioning Module (fastops/resources.py)

Build on top of the enterprise extensions already in this branch to add a unified cloud resource provisioning module. This is the next layer: while azure.py and aws.py handle infrastructure (VMs, containers, networks), resources.py handles services — databases, caches, queues, storage buckets, LLM endpoints, serverless functions, and search engines.

Core Design Principle

Every resource function returns a tuple: (env_dict, compose_svc_kwargs_or_None):

  • env_dict: environment variables the app needs to connect (e.g. DATABASE_URL, REDIS_URL)
  • compose_svc_kwargs: if not None, a dict suitable for Compose.svc() (Docker-based resource for local dev)

When provider='docker', resources run as containers locally. When provider='azure'|'aws'|'gcp', resources are provisioned as managed cloud services. The app code stays identical — it reads from env vars either way.

This follows the existing fastops pattern where caddy(), swag(), crowdsec() return Compose service kwargs as dicts.

File to Create: fastops/resources.py

Module docstring: """Cloud-agnostic resource provisioning: databases, caches, queues, storage, LLM endpoints, and serverless functions."""

__all__ should export: ['database', 'cache', 'queue', 'bucket', 'llm', 'function', 'search', 'stack']

Import: import os, json, subprocess and from pathlib import Path


Function 1: database(name='db', engine='postgres', provider='docker', **kw)

Returns (env_dict, compose_svc_kwargs_or_None)

Docker provider — supports three engines:

engine='postgres':

  • image: postgres:{version} (default version='16')
  • env: POSTGRES_PASSWORD (from kw.get('password', os.environ.get('DB_PASSWORD', 'secret'))) and POSTGRES_DB = name
  • ports: {'5432': '5432'}
  • volumes: {'pgdata': '/var/lib/postgresql/data'}
  • env_dict: {'DATABASE_URL': f'postgresql://postgres:{password}@db:5432/{name}', 'DB_PROVIDER': 'docker'}
  • restart: 'unless-stopped'

engine='mysql':

  • image: mysql:{version} (default '8')
  • env: MYSQL_ROOT_PASSWORD, MYSQL_DATABASE
  • ports: {'3306': '3306'}
  • volumes: {'mysqldata': '/var/lib/mysql'}
  • env_dict: {'DATABASE_URL': f'mysql://root:{password}@db:3306/{name}', 'DB_PROVIDER': 'docker'}

engine='mongo':

  • image: mongo:{version} (default '7')
  • env: MONGO_INITDB_ROOT_USERNAME: 'admin', MONGO_INITDB_ROOT_PASSWORD
  • ports: {'27017': '27017'}
  • volumes: {'mongodata': '/data/db'}
  • env_dict: {'DATABASE_URL': f'mongodb://admin:{password}@db:27017/{name}?authSource=admin', 'DB_PROVIDER': 'docker'}

AWS provider: Import callaws from .aws, call aws rds create-db-instance with --db-instance-identifier name, --engine postgres, --db-instance-class from kw (default 'db.t3.micro'), --master-username from kw (default 'appadmin'), --master-user-password from env DB_PASSWORD, --allocated-storage from kw (default 20), --no-publicly-accessible, --storage-encrypted. Return env with DATABASE_URL constructed from the result endpoint, DB_PROVIDER: 'rds'.

Azure provider: Import callaz from .azure, call az postgres flexible-server create with --name name, --resource-group from kw, --sku-name from kw (default 'Standard_B1ms'), --version from kw (default '16'), --storage-size from kw (default 32), --admin-user from kw (default 'appadmin'), --admin-password from env DB_PASSWORD, --public-access None. Return env with connection string, DB_PROVIDER: 'azure_postgres'.
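Following the spec above, the Docker/postgres branch of database() might be sketched like this (other engines and the cloud branches elided; an illustration of the described behavior, not the merged code):

```python
import os

def database(name='db', engine='postgres', provider='docker', **kw):
    # Password precedence per spec: kwarg, then DB_PASSWORD env, then default.
    password = kw.get('password', os.environ.get('DB_PASSWORD', 'secret'))
    if provider == 'docker' and engine == 'postgres':
        version = kw.get('version', '16')
        env = {
            'DATABASE_URL': f'postgresql://postgres:{password}@db:5432/{name}',
            'DB_PROVIDER': 'docker',
        }
        svc = {
            'image': f'postgres:{version}',
            'environment': {'POSTGRES_PASSWORD': password,
                            'POSTGRES_DB': name},
            'ports': {'5432': '5432'},
            'volumes': {'pgdata': '/var/lib/postgresql/data'},
            'restart': 'unless-stopped',
        }
        return env, svc
    raise NotImplementedError(f'{provider}/{engine}')  # other branches elided
```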


Function 2: cache(name='redis', provider='docker', **kw)

Returns (env_dict, compose_svc_kwargs_or_None)

Docker: Redis 7-alpine, port 6379, appendonly yes, volume redis-data:/data. Env: REDIS_URL: 'redis://redis:6379', CACHE_PROVIDER: 'redis'.

AWS: aws elasticache create-cache-cluster with --cache-cluster-id name, --cache-node-type from kw (default 'cache.t3.micro'), --engine redis, --num-cache-nodes 1. Env: REDIS_URL, CACHE_PROVIDER: 'elasticache'.

Azure: az redis create with --name, --resource-group, --sku from kw (default 'Basic'), --vm-size from kw (default 'C0'). Get host and key. Env: REDIS_URL: f'rediss://:{key}@{host}:6380', CACHE_PROVIDER: 'azure_redis'.
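The Docker branch of cache() per the spec above might be sketched as follows (cloud branches elided; hypothetical, not the merged implementation):

```python
def cache(name='redis', provider='docker', **kw):
    """Sketch of the Docker branch: Redis 7 with AOF persistence."""
    if provider == 'docker':
        env = {'REDIS_URL': 'redis://redis:6379', 'CACHE_PROVIDER': 'redis'}
        svc = {
            'image': 'redis:7-alpine',
            'command': 'redis-server --appendonly yes',
            'ports': {'6379': '6379'},
            'volumes': {'redis-data': '/data'},
            'restart': 'unless-stopped',
        }
        return env, svc
    raise NotImplementedError(provider)  # ElastiCache / Azure Redis elided
```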


Function 3: queue(name='tasks', provider='docker', **kw)

Returns (env_dict, compose_svc_kwargs_or_None)

Docker: RabbitMQ 3-management, ports 5672 + 15672, volume rabbitmq-data:/var/lib/rabbitmq, password from kw (default 'guest'). Env: QUEUE_URL: f'amqp://guest:{password}@rabbitmq:5672/', QUEUE_NAME: name.

AWS: aws sqs create-queue --queue-name name --attributes with VisibilityTimeout=30, MessageRetentionPeriod=345600. Env: QUEUE_URL from result, QUEUE_NAME, QUEUE_PROVIDER: 'sqs'.

Azure: Create Service Bus namespace + queue using az servicebus namespace create and az servicebus queue create. Get connection string via auth...
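The Docker branch of queue() per the spec above might be sketched as follows (SQS and Service Bus branches elided; an illustration, not the merged code):

```python
def queue(name='tasks', provider='docker', **kw):
    """Sketch of the Docker branch: RabbitMQ with the management UI."""
    if provider == 'docker':
        password = kw.get('password', 'guest')
        env = {
            'QUEUE_URL': f'amqp://guest:{password}@rabbitmq:5672/',
            'QUEUE_NAME': name,
        }
        svc = {
            'image': 'rabbitmq:3-management',
            'ports': {'5672': '5672', '15672': '15672'},
            'volumes': {'rabbitmq-data': '/var/lib/rabbitmq'},
            'restart': 'unless-stopped',
        }
        return env, svc
    raise NotImplementedError(provider)  # SQS / Service Bus branches elided
```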

This pull request was created from Copilot chat.



Copilot AI and others added 2 commits February 25, 2026 04:14
Co-authored-by: Karthik777 <7102951+Karthik777@users.noreply.github.com>
Copilot AI changed the title [WIP] Add cloud resource provisioning module Add cloud-agnostic resource provisioning layer (databases, caches, queues, storage, LLM, search) Feb 25, 2026
Copilot AI requested a review from Karthik777 February 25, 2026 04:23
