
Xenon Application Deployment Strategy


Work in progress


About this document

This document provides best practices for deploying a Xenon-based application in production. Some of the best practices also touch on how a Xenon-based application should be authored, because authoring choices affect the ability to later deploy the application in a way that adheres to these best practices.


Separation of Data from Business Logic

In case of a sizable data set, it is recommended to deploy the persistence layer in a separate nodegroup from the nodegroup containing the business logic (API processing and orchestration logic).

This allows your business logic to evolve independently of your data. This is important because business logic tends to change often, and you don't want to deal with massive amounts of data every time you update the business logic.

When packaging your application's classes into jar files, you need to remember to:

  • Place your factory classes in the persistence layer's jar file, as they will be needed to create documents.
  • Share the persistent services' state classes between the business logic jar and the data layer jar. The same applies to any other classes that are part of the contract between the business logic and the data layer, and are therefore needed by both.

For example, consider the BankAccount sample from the xenon-samples repo:

  • BankAccountService is a persistent Stateful service that needs to be packaged and deployed separately from the business logic layer.
  • BankAccountFactoryService, if it exists, needs to be packaged and deployed as part of the data layer, and the data layer's ServiceHost needs to start that factory (see the host sketch after this list).
  • BankAccountServiceState and BankAccountServiceRequest are part of the contract between the business logic and the data layers, hence it's recommended to factor them out into a common library that is packaged and deployed in both layers.
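To make this concrete, a data-layer host might look roughly like the following sketch. The host class name, package, and command-line handling are illustrative and not part of the sample; BankAccountService is assumed to be on the classpath via the data layer jar.

```java
import com.vmware.xenon.common.ServiceHost;

// Illustrative data-layer host: it starts the core services plus the
// persistent BankAccount factory, and nothing from the business logic layer.
public class DataLayerHost extends ServiceHost {

    public static void main(String[] args) throws Throwable {
        DataLayerHost host = new DataLayerHost();
        host.initialize(args);
        host.start();
    }

    @Override
    public ServiceHost start() throws Throwable {
        super.start();
        // Core Xenon services (node group, document index, etc.)
        startDefaultCoreServicesSynchronously();
        // The persistent factory is started on the data nodegroup only;
        // business logic hosts do not start it
        startFactory(BankAccountService.class, BankAccountService::createFactory);
        return this;
    }
}
```

The business logic nodegroup would run a different host that starts only the orchestration and API services, referencing the shared state classes from the common library.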

Plain persistent services

It is recommended to author your persistent services as "plain" - they should contain only simple, straightforward logic that evolves state, like state validation during creation and state merging during updates. Leave more complex validations and/or orchestration to the business logic layer. Think: PODO (plain old data object).

This will minimize changes to your persistent services' classes, which in turn will minimize changes to your data nodegroup deployment. Remember: every time you modify a persistent service class you need to update your production data nodegroup, which could be managing a large amount of data.

For example, consider the BankAccount sample from the xenon-samples repo:

  • BankAccountService should have straightforward start and patch logic. Some simple validations make sense (e.g. to disallow creation of a bank account with a negative balance, or to disallow withdrawing more than the current balance), however more complex logic (including validations) should be deferred to the business logic layer, for example: transferring funds between accounts, special authorization logic, transactional boundaries, etc. When in doubt, err on the side of keeping your data classes simple.
  • BankAccountService has a static createFactory() method, which is preferred over having a separate factory class. Again: less is more here.
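As a rough sketch of what such a "plain" service might look like (simplified relative to the actual sample; the factory link, state field, request handling, and validation messages here are illustrative):

```java
import com.vmware.xenon.common.FactoryService;
import com.vmware.xenon.common.Operation;
import com.vmware.xenon.common.ServiceDocument;
import com.vmware.xenon.common.StatefulService;

public class BankAccountService extends StatefulService {

    // Illustrative factory path
    public static final String FACTORY_LINK = "/samples/bank-accounts";

    // Preferred over a separate factory class: less code to package and rev
    public static FactoryService createFactory() {
        return FactoryService.create(BankAccountService.class);
    }

    public static class BankAccountServiceState extends ServiceDocument {
        public double balance;
    }

    public BankAccountService() {
        super(BankAccountServiceState.class);
        toggleOption(ServiceOption.PERSISTENCE, true);
        toggleOption(ServiceOption.REPLICATION, true);
        toggleOption(ServiceOption.OWNER_SELECTION, true);
    }

    @Override
    public void handleStart(Operation start) {
        // Simple creation-time validation only; complex rules belong in the
        // business logic layer
        if (!start.hasBody()) {
            start.fail(new IllegalArgumentException("initial state is required"));
            return;
        }
        BankAccountServiceState state = start.getBody(BankAccountServiceState.class);
        if (state.balance < 0) {
            start.fail(new IllegalArgumentException("balance cannot be negative"));
            return;
        }
        start.complete();
    }

    @Override
    public void handlePatch(Operation patch) {
        // Straightforward state merge: the body carries a balance delta
        // (positive for deposit, negative for withdrawal) in this sketch
        BankAccountServiceState currentState = getState(patch);
        BankAccountServiceState body = patch.getBody(BankAccountServiceState.class);
        if (currentState.balance + body.balance < 0) {
            patch.fail(new IllegalArgumentException("not enough funds to withdraw"));
            return;
        }
        currentState.balance += body.balance;
        patch.setBody(currentState);
        patch.complete();
    }
}
```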

Evolution from one version to the next

When it's time to rev your deployed application, you can take one of the following approaches:

  • A "blue/green" migration
  • A (potentially rolling) in-place upgrade

The blue/green migration approach copies data from the existing ("blue") nodegroup running the old version to a new ("green") nodegroup running the new version, then switches clients to use the new nodegroup. When clients start to use the new nodegroup, that nodegroup is running the new bits and serving the migrated data.

The in-place upgrade approach revs the application bits in place, without copying the data. This is often done in a rolling fashion, to maintain availability from a client perspective. While the upgrade is in progress, a deployment might have a mix of nodes, some running version N and others version N+1. In general, it's better to design and implement a change with a rolling evolution in mind, so that the change can be deployed incrementally throughout the environment.

The approach you should take depends on a number of factors, including:

  • The nature of the change: is it a change in business logic, in data, or in both?
  • The current deployment of your application - have you deployed business logic and data separately or in a single nodegroup?
  • The size of your data
  • Whether downtime is acceptable
  • Whether additional resources (nodes) can be allocated for the duration of the migration

Here are some prominent examples:

  • If the change includes a modification to business logic, does the change need to appear 'atomically' across all nodes serving client requests? If the answer is yes, you need to take a migration-based approach, unless you are willing to suffer some downtime, because an in-place rolling upgrade would result in a mixed-version deployment. Conversely, if the change includes only modifications to business logic, and it can be applied in a rolling fashion, consider a rolling in-place upgrade approach.
  • If the change includes modification to data, and your data set is very large, you should consider an in-place upgrade approach because migration could take a long time.
  • If the change includes modifications to both data and business logic, and you have deployed separate nodegroups for business logic and data, consider an orchestrated in-place upgrade of the data first and then of the business logic that uses it. For example, assume a change that adds a field to a persistent service state class and modifies a stateless orchestrator service that uses it. You can orchestrate the deployment of the change in a 'bottom-up' manner as follows (see the sketch after this list):
      1. Deploy the change to the data nodegroup first, in a rolling in-place fashion. This ensures that all nodes containing application factory services have the updated schema before the business logic is updated, and are ready to create new state with the additional field. Don't forget to use the Kryo @Since annotation on the new field, otherwise existing persistent state would fail to be deserialized.
      2. Deploy the change to the business logic nodegroup. Assuming the business logic change can be deployed incrementally across business logic nodes, you can deploy it in a rolling in-place fashion. A node whose bits have been updated and that serves a client request will start executing its updated business logic, including use of the new state field. A business logic node that has yet to be updated, but that processes a document coming from the data tier, should still be able to deserialize the document, effectively ignoring the new field.
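For instance, the new field on the shared state class might be declared as in the following sketch. The ownerEmail field and the version number are hypothetical; the point is only the @Since usage.

```java
import com.esotericsoftware.kryo.serializers.VersionFieldSerializer.Since;

import com.vmware.xenon.common.ServiceDocument;

public class BankAccountServiceState extends ServiceDocument {
    public double balance;

    // Hypothetical field added in the new version. @Since lets Kryo
    // deserialize documents that were serialized before the field existed.
    @Since(1)
    public String ownerEmail;
}
```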

For more details about using blue/green migration, see Side-by-Side-Upgrade.


Multiple Geo system

Use Case

  • Cross-datacenter replication (e.g. east coast and west coast)

TBD


Active/Standby with different node group

Scenario

Two node groups: ACTIVE and STANDBY. The ACTIVE group receives all requests. The STANDBY group receives data in near real time, so both groups hold identical data.

Use Case

  • backup node group
  • Usage for read transactions, such as a data warehouse

Recommendation

Continuous Migration

Use MigrationTaskService to periodically pull all data from ACTIVE to STANDBY. MigrationTaskService has a continuousMigration flag that makes it continuously query data and reflect it to the STANDBY node group.
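A sketch of starting such a task is shown below. Depending on the Xenon version, continuous migration is expressed either as a flag or as a migration option; this sketch uses the MigrationOption.CONTINUOUS form, and the helper class, node URIs, and factory link parameter are illustrative.

```java
import java.net.URI;
import java.util.EnumSet;

import com.vmware.xenon.common.Operation;
import com.vmware.xenon.common.ServiceHost;
import com.vmware.xenon.common.UriUtils;
import com.vmware.xenon.services.common.MigrationTaskService;
import com.vmware.xenon.services.common.ServiceUriPaths;

// Illustrative helper: start a continuous migration task on the STANDBY node
// group that pulls documents of one factory from the ACTIVE node group.
public final class ContinuousMigrationSetup {

    public static void start(ServiceHost localHost, URI activeNode, URI standbyNode,
            String factoryLink) {
        MigrationTaskService.State task = new MigrationTaskService.State();
        task.sourceNodeGroupReference =
                UriUtils.buildUri(activeNode, ServiceUriPaths.DEFAULT_NODE_GROUP);
        task.destinationNodeGroupReference =
                UriUtils.buildUri(standbyNode, ServiceUriPaths.DEFAULT_NODE_GROUP);
        task.sourceFactoryLink = factoryLink;
        task.destinationFactoryLink = factoryLink;
        // CONTINUOUS keeps the task re-querying the source and reflecting
        // updated documents to the destination node group
        task.migrationOptions = EnumSet.of(MigrationTaskService.MigrationOption.CONTINUOUS);

        Operation post = Operation
                .createPost(UriUtils.buildUri(standbyNode, MigrationTaskService.FACTORY_LINK))
                .setBody(task)
                .setReferer(localHost.getUri());
        localHost.sendRequest(post);
    }
}
```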

Manual Replication

Subscribe to a service on the ACTIVE node group, then manually replicate updates to the corresponding service on the other node group (issuing updates in an eventually consistent way).
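A sketch of this approach, assuming the subscription is established with ServiceHost.startSubscriptionService and that the same service path exists on both node groups; the helper class, URIs, and forwarding policy (only PATCH and PUT are replicated here) are illustrative.

```java
import java.net.URI;
import java.util.function.Consumer;

import com.vmware.xenon.common.Operation;
import com.vmware.xenon.common.ServiceHost;
import com.vmware.xenon.common.UriUtils;

// Illustrative helper: subscribe to a service on the ACTIVE node group and
// re-issue each update against the same service path on the STANDBY node group.
public final class ManualReplicator {

    public static void replicate(ServiceHost localHost, URI activeNode, URI standbyNode,
            String servicePath) {
        Consumer<Operation> notificationConsumer = (notifyOp) -> {
            // Always complete the notification so the publisher is not blocked
            notifyOp.complete();
            if (!notifyOp.hasBody()) {
                return;
            }
            Operation forward;
            switch (notifyOp.getAction()) {
            case PATCH:
                forward = Operation.createPatch(UriUtils.buildUri(standbyNode, servicePath));
                break;
            case PUT:
                forward = Operation.createPut(UriUtils.buildUri(standbyNode, servicePath));
                break;
            default:
                return;
            }
            // Updates are applied eventually; there is no transactional guarantee
            forward.setBodyNoCloning(notifyOp.getBodyRaw())
                    .setReferer(localHost.getUri());
            localHost.sendRequest(forward);
        };

        Operation subscribe = Operation
                .createPost(UriUtils.buildUri(activeNode, servicePath))
                .setReferer(localHost.getUri());
        localHost.startSubscriptionService(subscribe, notificationConsumer);
    }
}
```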


Multiple nodegroup system

Scenario


Read-operation-intensive system

Scenario

Use Case

  • A high-traffic, content-oriented website such as a blog

Document index backup/restore

Scenario

Xenon provides an API to back up and restore the document index from a Xenon service.

How to

see Backup-Restore
