Add cloud-agnostic resource provisioning layer (databases, caches, queues, storage, LLM, search) #4
Draft
Copilot wants to merge 3 commits into `copilot/implement-enterprise-grade-extensions` from
Conversation
Co-authored-by: Karthik777 <7102951+Karthik777@users.noreply.github.com>
Copilot changed the title from "[WIP] Add cloud resource provisioning module" to "Add cloud-agnostic resource provisioning layer (databases, caches, queues, storage, LLM, search)" on Feb 25, 2026.
Adds `fastops/resources.py`: a unified provisioning layer for application services. While `azure.py`/`aws.py` handle infrastructure (VMs, networks), this handles dependencies (databases, caches, queues, storage buckets, LLM endpoints, serverless functions, search engines).

Core Design

Every resource function returns `(env_dict, compose_svc_kwargs_or_None)`:

- `env_dict`: environment variables for app connectivity (`DATABASE_URL`, `REDIS_URL`, etc.)
- `compose_svc_kwargs`: Docker service config when `provider='docker'`; `None` for cloud providers

App code reads env vars uniformly regardless of provider; switch from local dev to production by changing the provider string.
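As a minimal illustration of that convention (a hypothetical sketch, not code from this PR; the function and values are invented):

```python
def resource(name='svc', provider='docker'):
    """Hypothetical resource function showing the (env, svc) return shape."""
    if provider == 'docker':
        env = {'SVC_URL': f'tcp://{name}:9000'}   # app-facing connection vars
        svc = {'image': 'example:latest',          # kwargs for Compose.svc()
               'restart': 'unless-stopped'}
        return env, svc
    # Cloud providers create a managed service instead; no local container.
    return {'SVC_URL': 'https://managed.example'}, None

env, svc = resource()                  # local dev: env vars plus a Docker service
env2, svc2 = resource(provider='aws')  # production: env vars only, svc2 is None
```

The app only ever consumes `env`; `svc` is plumbing for the local Compose file.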
Implementation
New file:
fastops/resources.pydatabase(name, engine, provider, **kw)- postgres/mysql/mongo on docker/aws/azurecache(name, provider, **kw)- Redis on docker/aws/azurequeue(name, provider, **kw)- RabbitMQ/SQS/Service Bus/Pub/Sub on docker/aws/azure/gcpbucket(name, provider, **kw)- MinIO/S3/Azure Storage/GCS on docker/aws/azure/gcpllm(name, provider, **kw)- Ollama/OpenAI/Azure OpenAI/Bedrock on docker/openai/azure/awsfunction(name, runtime, handler, provider, **kw)- Lambda/Azure Functions/Cloud Functions on aws/azure/gcpsearch(name, provider, **kw)- Elasticsearch/OpenSearch/Azure Search on docker/aws/azurestack(resources, provider)- composability function, merges resources into(env, Compose, volumes)Updated:
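The merge logic of `stack()` might look roughly like this (a sketch only: it assumes `resources` is a list of callables accepting `provider=`, and returns a plain list of service dicts where the real function builds a `Compose` object):

```python
def stack(resources, provider='docker'):
    """Merge several resources' (env, svc) results into one env/services/volumes set."""
    env, services, volumes = {}, [], {}
    for make in resources:
        r_env, svc = make(provider=provider)
        env.update(r_env)                          # env vars accumulate across resources
        if svc is not None:                        # only Docker resources add services
            volumes.update(svc.get('volumes', {}))
            services.append(svc)
    return env, services, volumes

# Stub factories standing in for database()/cache():
pg = lambda provider: ({'DATABASE_URL': 'postgresql://postgres:secret@db:5432/app'},
                       {'image': 'postgres:16',
                        'volumes': {'pgdata': '/var/lib/postgresql/data'}})
rd = lambda provider: ({'REDIS_URL': 'redis://redis:6379'}, None)  # e.g. managed cache

env, services, volumes = stack([pg, rd])
```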
Updated: `fastops/__init__.py`

Updated: `fastops/ship.py` (adds a `resources` parameter to `ship()`)

Example Usage
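The example block itself did not survive this page capture. The consuming side of the pattern, independent of provider, is simply env-var reads (the values below are hypothetical stand-ins for what `database()`/`cache()` would export):

```python
import os

# Whichever provider provisioned the resources, the app connects the same way:
# it reads the env vars the provisioning layer emitted.
os.environ.setdefault('DATABASE_URL', 'postgresql://postgres:secret@db:5432/app')
os.environ.setdefault('REDIS_URL', 'redis://redis:6379')

db_url = os.environ['DATABASE_URL']
cache_url = os.environ['REDIS_URL']
```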
Pattern
All Docker services include `restart: unless-stopped`. Cloud provider imports are lazy-loaded to avoid requiring the cloud CLIs at import time.

Original prompt
Add Cloud Resource Provisioning Module (`fastops/resources.py`)

Build on top of the enterprise extensions already in this branch to add a unified cloud resource provisioning module. This is the next layer: while `azure.py` and `aws.py` handle infrastructure (VMs, containers, networks), `resources.py` handles services: databases, caches, queues, storage buckets, LLM endpoints, serverless functions, and search engines.

Core Design Principle
Every resource function returns a tuple `(env_dict, compose_svc_kwargs_or_None)`:

- `env_dict`: environment variables the app needs to connect (e.g. `DATABASE_URL`, `REDIS_URL`)
- `compose_svc_kwargs`: if not None, a dict suitable for `Compose.svc()` (a Docker-based resource for local dev)

When `provider='docker'`, resources run as containers locally. When `provider='azure'|'aws'|'gcp'`, resources are provisioned as managed cloud services. The app code stays identical either way: it reads from env vars.

This follows the existing fastops pattern where `caddy()`, `swag()`, and `crowdsec()` return Compose service kwargs as dicts.

File to create: `fastops/resources.py`

Module docstring: `"""Cloud-agnostic resource provisioning: databases, caches, queues, storage, LLM endpoints, and serverless functions."""`

`__all__` should export `['database', 'cache', 'queue', 'bucket', 'llm', 'function', 'search', 'stack']`.

Imports: `import os, json, subprocess` and `from pathlib import Path`.

Function 1:
`database(name='db', engine='postgres', provider='docker', **kw)`

Returns `(env_dict, compose_svc_kwargs_or_None)`.

Docker provider supports three engines:

- `engine='postgres'`: image `postgres:{version}` (default version='16'); env `POSTGRES_PASSWORD` (from `kw.get('password', os.environ.get('DB_PASSWORD', 'secret'))`) and `POSTGRES_DB` = name; ports `{'5432': '5432'}`; volumes `{'pgdata': '/var/lib/postgresql/data'}`; returns env `{'DATABASE_URL': f'postgresql://postgres:{password}@db:5432/{name}', 'DB_PROVIDER': 'docker'}`
- `engine='mysql'`: image `mysql:{version}` (default '8'); env `MYSQL_ROOT_PASSWORD`, `MYSQL_DATABASE`; ports `{'3306': '3306'}`; volumes `{'mysqldata': '/var/lib/mysql'}`; returns env `{'DATABASE_URL': f'mysql://root:{password}@db:3306/{name}', 'DB_PROVIDER': 'docker'}`
- `engine='mongo'`: image `mongo:{version}` (default '7'); env `MONGO_INITDB_ROOT_USERNAME: 'admin'`, `MONGO_INITDB_ROOT_PASSWORD`; ports `{'27017': '27017'}`; volumes `{'mongodata': '/data/db'}`; returns env `{'DATABASE_URL': f'mongodb://admin:{password}@db:27017/{name}?authSource=admin', 'DB_PROVIDER': 'docker'}`

AWS provider: import `aws` from `.aws`, call `aws rds create-db-instance` with `--db-instance-identifier name`, `--engine postgres`, `--db-instance-class` from kw (default 'db.t3.micro'), `--master-username` from kw (default 'appadmin'), `--master-user-password` from env `DB_PASSWORD`, `--allocated-storage` from kw (default 20), `--no-publicly-accessible`, `--storage-encrypted`. Return env with `DATABASE_URL` constructed from the result endpoint and `DB_PROVIDER: 'rds'`.

Azure provider: import `az` from `.azure`, call `az postgres flexible-server create` with `--name name`, `--resource-group` from kw, `--sku-name` from kw (default 'Standard_B1ms'), `--version` from kw (default '16'), `--storage-size` from kw (default 32), `--admin-user` from kw (default 'appadmin'), `--admin-password` from env `DB_PASSWORD`, `--public-access None`. Return env with connection string and `DB_PROVIDER: 'azure_postgres'`.

Function 2:
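Following the spec above, the Docker branch of `database()` might read as below (a sketch; the AWS/Azure branches, which shell out to the cloud CLIs, are omitted):

```python
import os

def database(name='db', engine='postgres', provider='docker', **kw):
    """Docker branch of database(), reconstructed from the spec; cloud branches omitted."""
    if provider != 'docker':
        raise NotImplementedError('sketch covers provider="docker" only')
    password = kw.get('password', os.environ.get('DB_PASSWORD', 'secret'))
    common = {'restart': 'unless-stopped'}
    if engine == 'postgres':
        svc = dict(common, image=f"postgres:{kw.get('version', '16')}",
                   environment={'POSTGRES_PASSWORD': password, 'POSTGRES_DB': name},
                   ports={'5432': '5432'},
                   volumes={'pgdata': '/var/lib/postgresql/data'})
        url = f'postgresql://postgres:{password}@db:5432/{name}'
    elif engine == 'mysql':
        svc = dict(common, image=f"mysql:{kw.get('version', '8')}",
                   environment={'MYSQL_ROOT_PASSWORD': password, 'MYSQL_DATABASE': name},
                   ports={'3306': '3306'},
                   volumes={'mysqldata': '/var/lib/mysql'})
        url = f'mysql://root:{password}@db:3306/{name}'
    elif engine == 'mongo':
        svc = dict(common, image=f"mongo:{kw.get('version', '7')}",
                   environment={'MONGO_INITDB_ROOT_USERNAME': 'admin',
                                'MONGO_INITDB_ROOT_PASSWORD': password},
                   ports={'27017': '27017'},
                   volumes={'mongodata': '/data/db'})
        url = f'mongodb://admin:{password}@db:27017/{name}?authSource=admin'
    else:
        raise ValueError(f'unknown engine: {engine}')
    return {'DATABASE_URL': url, 'DB_PROVIDER': 'docker'}, svc
```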
`cache(name='redis', provider='docker', **kw)`

Returns `(env_dict, compose_svc_kwargs_or_None)`.

Docker: Redis 7-alpine, port 6379, appendonly yes, volume `redis-data:/data`. Env: `REDIS_URL: 'redis://redis:6379'`, `CACHE_PROVIDER: 'redis'`.

AWS: `aws elasticache create-cache-cluster` with `--cache-cluster-id name`, `--cache-node-type` from kw (default 'cache.t3.micro'), `--engine redis`, `--num-cache-nodes 1`. Env: `REDIS_URL`, `CACHE_PROVIDER: 'elasticache'`.

Azure: `az redis create` with `--name`, `--resource-group`, `--sku` from kw (default 'Basic'), `--vm-size` from kw (default 'C0'). Get host and key. Env: `REDIS_URL: f'rediss://:{key}@{host}:6380'`, `CACHE_PROVIDER: 'azure_redis'`.

Function 3:
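The Docker branch of `cache()` per the spec above could be sketched as follows (cloud branches omitted; wiring `appendonly yes` through the container command is an assumption, since the spec only says it is enabled):

```python
def cache(name='redis', provider='docker', **kw):
    """Docker branch of cache() per the spec; ElastiCache/Azure Redis omitted."""
    if provider != 'docker':
        raise NotImplementedError('sketch covers provider="docker" only')
    svc = {'image': 'redis:7-alpine',
           'command': 'redis-server --appendonly yes',  # assumed mechanism
           'ports': {'6379': '6379'},
           'volumes': {'redis-data': '/data'},
           'restart': 'unless-stopped'}
    return {'REDIS_URL': 'redis://redis:6379', 'CACHE_PROVIDER': 'redis'}, svc
```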
`queue(name='tasks', provider='docker', **kw)`

Returns `(env_dict, compose_svc_kwargs_or_None)`.

Docker: RabbitMQ 3-management, ports 5672 + 15672, volume `rabbitmq-data:/var/lib/rabbitmq`, password from kw (default 'guest'). Env: `QUEUE_URL: f'amqp://guest:{password}@rabbitmq:5672/'`, `QUEUE_NAME: name`.

AWS: `aws sqs create-queue --queue-name name --attributes` with VisibilityTimeout=30, MessageRetentionPeriod=345600. Env: `QUEUE_URL` from result, `QUEUE_NAME`, `QUEUE_PROVIDER: 'sqs'`.
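The Docker (RabbitMQ) branch described above can be sketched as below (SQS/Service Bus branches omitted; passing the password via `RABBITMQ_DEFAULT_PASS` is an assumption about how the kw password reaches the container):

```python
def queue(name='tasks', provider='docker', **kw):
    """Docker (RabbitMQ) branch of queue() per the spec; cloud branches omitted."""
    if provider != 'docker':
        raise NotImplementedError('sketch covers provider="docker" only')
    password = kw.get('password', 'guest')
    svc = {'image': 'rabbitmq:3-management',
           'environment': {'RABBITMQ_DEFAULT_PASS': password},  # assumed wiring
           'ports': {'5672': '5672', '15672': '15672'},         # AMQP + management UI
           'volumes': {'rabbitmq-data': '/var/lib/rabbitmq'},
           'restart': 'unless-stopped'}
    env = {'QUEUE_URL': f'amqp://guest:{password}@rabbitmq:5672/',
           'QUEUE_NAME': name}
    return env, svc
```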
az servicebus namespace createandaz servicebus queue create. Get connection string via auth...This pull request was created from Copilot chat.