containers-ai/alameda
What is Alameda

The Brain of Resource Orchestration for Kubernetes

Alameda is a prediction engine that foresees future resource usage of your Kubernetes cluster from the cloud layer down to the pod level. We use machine learning to provide intelligence that enables dynamic scaling and scheduling of your containers, effectively making Alameda the “brain” of Kubernetes resource orchestration. By providing full foresight of resource availability, demand, health, impact, and SLA, we enable cloud strategies that adjust provisioned resources in real time.
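Alameda's actual forecasting models are not described in this README; as a minimal sketch of the underlying idea, the hypothetical snippet below forecasts the next value of a pod-level metric series (e.g. CPU millicores) using simple exponential smoothing as a stand-in for a real prediction model. All names and numbers here are illustrative assumptions.

```python
# Hypothetical sketch only: Alameda's real models are not documented here.
# Illustrates forecasting a pod's resource usage from recent metric samples,
# using simple exponential smoothing as a stand-in.

def forecast_usage(samples, alpha=0.5):
    """Forecast the next value of a metric series (e.g. pod CPU in millicores)."""
    if not samples:
        raise ValueError("need at least one sample")
    level = samples[0]
    for s in samples[1:]:
        # Blend each new observation into the running level.
        level = alpha * s + (1 - alpha) * level
    return level

# Example: CPU usage (millicores) sampled over the last few minutes.
cpu_samples = [220, 240, 260, 300, 340]
print(round(forecast_usage(cpu_samples)))
```

A real engine would use richer models and longer histories, but the shape is the same: recent observed demand in, predicted demand out.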

Alameda provides cross-cluster and multi-cloud intelligence to determine when and where users' workloads should be deployed. Alameda agents in your cluster collect compute and I/O metrics and send them to our engine, which learns the continually changing resource demands and generates configuration recommendations that can be used by other container and storage orchestrators. We aim to provide the predictions and recommendations needed to automate pod scaling and scheduling, persistent volume provisioning, and similar tasks, replacing manual configuration and orchestration across clusters and clouds.
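To make the prediction-to-recommendation step concrete, here is a hypothetical sketch of how a predicted peak demand could be turned into a resource recommendation for an orchestrator to apply. The headroom factor, limit policy, and output shape are assumptions for illustration, not Alameda's actual recommendation format.

```python
# Hypothetical sketch: translate a predicted peak demand into a resource
# recommendation an orchestrator could apply. Headroom and limit policy
# are illustrative, not Alameda's actual recommendation format.

def recommend(predicted_peak_millicores, headroom=0.2):
    """Recommend a CPU request/limit pair with safety headroom."""
    request = int(predicted_peak_millicores * (1 + headroom))
    limit = request * 2  # simple policy: cap bursts at twice the request
    return {"cpu_request_m": request, "cpu_limit_m": limit}

# Example: a predicted peak of 306 millicores.
print(recommend(306))
```

In practice such a recommendation would be written to a cluster object that a scaler or scheduler consumes, rather than printed.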

The intelligence of automated orchestration (pod scaling and scheduling, persistent volume provisioning, etc.) means your cluster’s time spent reactively addressing resource failure and unavailability is reduced to a minimum. With Alameda, container and storage orchestrators can proactively make cluster-wide resource optimizations and reallocations before those problems arise.

Additionally, we predict failures of physical drives in your cluster before they die. This opens up many options for operators to manage their persistent volumes accordingly, avoiding redundancy warnings, performance issues, or, even worse, data loss.
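As a rough illustration of the kind of drive-health signal such predictions draw on (Alameda's actual disk-failure model is not specified in this README), the hypothetical sketch below flags drives based on SMART counters that are classic early indicators of failure. The attribute names and thresholds are illustrative assumptions.

```python
# Hypothetical sketch: flag drives whose SMART counters suggest imminent
# failure. Attribute names and thresholds are illustrative only; Alameda's
# actual disk-failure model is not described in this README.

def at_risk(smart):
    """Return True if a drive's SMART counters look unhealthy."""
    # Reallocated or pending sectors are classic early failure signals.
    return (smart.get("reallocated_sector_count", 0) > 0
            or smart.get("current_pending_sector", 0) > 0)

drives = {
    "sda": {"reallocated_sector_count": 0, "current_pending_sector": 0},
    "sdb": {"reallocated_sector_count": 12, "current_pending_sector": 3},
}
failing = [name for name, s in drives.items() if at_risk(s)]
print(failing)
```

A production model would combine many attributes over time rather than a single threshold, but the operator-facing outcome is the same: a list of drives to migrate volumes off before they fail.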

On the node level, we detect anomalies and errors to give operators better insight into the health of the cluster.

Our vision is to build the core AI engine behind optimization for multi-cloud environments and become a household name in the cloud management platform community.

You’re welcome to join and contribute to our community. We will be continually adding more support based on community demand and engagement in future releases.

Contact

Please use the following to reach members of the community:

Slack: Join our Slack channel

Email: Click

Community Meeting:

Join meeting: every third Wednesday of the month, 8:00 am Pacific Time

Any changes to the meeting schedule will be added to the agenda doc and posted to Slack #announcements

Getting Started and Documentation

See our Documentation.

Contributing

We welcome contributions. See Contributing to get started.

Report a Bug

For filing bugs, suggesting improvements, or requesting new features, please open an issue.
