AutoScaling


Manage your worker count

Introduction

Elastic computing resource allocation is a cornerstone of cloud computing. Windows Azure lets you decide how many virtual machines (VMs) should be allocated to support the workload of your cloud app.

Lokad.Cloud emphasizes a design where the workload capacity of your cloud app can be incrementally increased by adding VMs and, conversely, incrementally decreased by removing them.

Manual setup through the console

Lokad.Cloud embeds the Management API and lets you control the number of workers allocated to your cloud app. The screenshot below illustrates how the number of worker instances can be set manually from the Lokad.Cloud Console.

IProvisioningProvider for auto-scalable apps

Within a QueueService or a ScheduledService, you can access the property CloudService.Providers.Provisioning, which grants you programmatic access to the cloud fabric in order to adjust the number of workers running for your app.

The IProvisioningProvider interface is rather straightforward:

public interface IProvisioningProvider
{
    // Indicates whether provisioning can be controlled from the current deployment.
    bool IsAvailable { get; }

    // Sets the number of worker instances allocated to the Role.
    void SetWorkerInstanceCount(int count);

    // Gets the current number of worker instances, if available.
    Maybe<int> GetWorkerInstanceCount();
}
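
For instance, from within a QueueService or ScheduledService, the provider can be queried as follows. This is only a minimal sketch; Maybe&lt;int&gt; is assumed to expose HasValue and Value as in the Lokad shared libraries:

// Inside a service method; Providers.Provisioning is the property described above.
var provisioning = Providers.Provisioning;
if (provisioning.IsAvailable)
{
    Maybe<int> workers = provisioning.GetWorkerInstanceCount();
    if (workers.HasValue)
    {
        // workers.Value is the number of VMs currently allocated to the Role.
    }
}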

The implicit design behind this approach is that each Windows Azure Role is responsible for its own scalability. In other words, the Role itself has to decide whether more VMs are needed or not, as opposed to having a centralized system controlling the resources allocated to each component.

In order to fine-tune the number of VMs allocated to your Role, you can rely (among other things) on the execution statistics gathered by Lokad.Cloud.
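
For illustration, a Role could compare such a statistic against simple thresholds to adjust its own worker count. The sketch below assumes the StartOnSchedule override of ScheduledService; the EstimateBacklog helper, the thresholds and the 8-worker cap are hypothetical placeholders, not part of Lokad.Cloud:

// Sketch of a Role managing its own scalability. EstimateBacklog(),
// the thresholds and the 8-worker cap are illustrative only.
public class AutoScalingService : ScheduledService
{
    protected override void StartOnSchedule()
    {
        var provisioning = Providers.Provisioning;
        if (!provisioning.IsAvailable)
            return;

        int backlog = EstimateBacklog(); // hypothetical metric
        Maybe<int> current = provisioning.GetWorkerInstanceCount();
        if (!current.HasValue)
            return;

        if (backlog > 1000 && current.Value < 8)
        {
            // Backlog is piling up: request one more worker VM.
            provisioning.SetWorkerInstanceCount(current.Value + 1);
        }
        else if (backlog < 50 && current.Value > 1)
        {
            // Backlog nearly drained: release one worker VM.
            provisioning.SetWorkerInstanceCount(current.Value - 1);
        }
    }

    int EstimateBacklog()
    {
        // Placeholder: return a measure of pending work, for instance
        // derived from queue length or the execution statistics
        // gathered by Lokad.Cloud.
        return 0;
    }
}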

Limitations

Lokad.Cloud only supports a single VM size for now: the size referred to as Small in Windows Azure.
