diff --git a/orchestration/banana.yaml b/orchestration/banana.yaml
index 4bd97be..2151685 100644
--- a/orchestration/banana.yaml
+++ b/orchestration/banana.yaml
@@ -9,11 +9,11 @@ banana:
   url: https://www.banana.dev/
 
   description: "Banana is a serverless GPU service tailored for AI applications.
-  It allows users to quickly deploy AI models using custom Python framework, connect to data stores, and run inference efficiently.
-  Banana offers built-in CI/CD support, facilitating Docker image creation and deployment on their serverless GPU infrastructure.
-  The service excels in autoscaling applications, minimizing cold boot times to ensure rapid response to changing traffic patterns."
+  It allows users to quickly deploy AI models using a custom Python framework, connect to data stores, and run inference efficiently.
+  Banana offers built-in CI/CD support, facilitating Docker image creation and deployment on their serverless GPU infrastructure.
+  The service excels in autoscaling applications, minimizing cold boot times to ensure rapid response to changing traffic patterns."
 
   features:
     - "Efficient GPU Resource Billing."
     - "Dynamic GPU Allocation."
-    - "Rapid Scalable Inference."
+    - "Rapid Scalable Inference."
\ No newline at end of file
diff --git a/orchestration/cerebrium.yaml b/orchestration/cerebrium.yaml
new file mode 100644
index 0000000..e08ce6b
--- /dev/null
+++ b/orchestration/cerebrium.yaml
@@ -0,0 +1,18 @@
+cerebrium:
+  name: "Cerebrium"
+
+  image_url: https://uploads-ssl.webflow.com/63f3d4a9e05fc85e733e1610/63f40e88b105ee11832e827c_full-logo-colour-white-transparent.svg
+
+  tags:
+    - model-binary
+
+  url: https://www.cerebrium.ai/
+
+  description: "Cerebrium simplifies the process of training, deploying, and monitoring machine learning models using a minimal amount of code.
+  It offers seamless deployment with support for major ML frameworks, including PyTorch, ONNX, and HuggingFace models, as well as prebuilt models optimized for sub-second latency.
+  Additionally, it provides effortless model fine-tuning and monitoring capabilities with integration into top ML observability platforms, enabling alerts about feature or prediction drift, model version comparisons, and in-depth insights into model performance."
+
+  features:
+    - "Serverless GPU Model Deployment."
+    - "Support for All Major Frameworks."
+    - "Automatic Versioning."
\ No newline at end of file
diff --git a/orchestration/launchflow.yaml b/orchestration/launchflow.yaml
new file mode 100644
index 0000000..5325b63
--- /dev/null
+++ b/orchestration/launchflow.yaml
@@ -0,0 +1,20 @@
+launchflow:
+  name: "LaunchFlow"
+
+  image_url: https://www.launchflow.com/images/logo.svg
+
+  tags:
+    - model-endpoint
+
+  url: https://www.launchflow.com/
+
+  description: "LaunchFlow is a versatile platform designed for building and deploying real-time applications directly from your code editor.
+  With its seamless integration into popular code editors like VSCode, it allows users to create and deploy applications in less than 60 seconds.
+  This platform is 100% Python-powered and offers a drag-and-drop editor, preloaded templates, connectors for various cloud providers, and VSCode extensions for both local and remote development.
+  LaunchFlow automates cloud infrastructure provisioning from application code, ensuring serverless operation without the need for server management.
+  It offers ready-to-use templates for real-time IoT, machine learning, and security applications, enabling effortless scaling and efficient data analysis."
+
+  features:
+    - "Efficient Deployment."
+    - "Integrated VSCode Extension."
+    - "Serverless Operation."
\ No newline at end of file
diff --git a/orchestration/replicate.yaml b/orchestration/replicate.yaml
new file mode 100644
index 0000000..2253a9e
--- /dev/null
+++ b/orchestration/replicate.yaml
@@ -0,0 +1,19 @@
+replicate:
+  name: "Replicate"
+
+  image_url: https://replicate.com/static/favicon.e390e65c9599.png
+
+  tags:
+    - model-binary
+
+  url: https://replicate.com/
+
+  description: "Replicate offers a simplified approach to running machine learning models at scale, making it accessible even to those without deep machine learning knowledge.
+  Users can run models with just a few lines of code using Replicate's Python library or query the API directly with their preferred tool. The platform hosts a vast library of machine learning models contributed by the community, including language models, video creation and editing models, super-resolution models, image restoration models, and more.
+  Replicate also introduces Cog, an open-source tool for packaging machine learning models in production-ready containers.
+  It streamlines the deployment process, automatically generating scalable API servers for defined models and offering automatic scaling to handle varying traffic loads."
+
+  features:
+    - "Simplified Machine Learning Deployment."
+    - "Extensive Model Library."
+    - "Users are billed only for the time their code is running."
\ No newline at end of file