Overview
If you're already familiar with real-time, stream-based BigData engines, the following features distinguish Dempsy from the others:
- Fine grained "actor model": Dempsy provides for the fine grained distribution and lifecycle management of (potentially) millions of "actors" ("message processors" in Dempsy parlance) across a large cluster of machines allowing developers to write code that concentrates on handling individual data points in a stream without any concern for concurrency.
- Inversion of control programming paradigm: Dempsy allows developers to construct these large-scale processing applications decoupled from all infrastructure concerns providing a means for simple and testable POJO implementations.
- Dynamic Topology Autoconfiguration: Dempsy doesn't require the prior definition of a topology for the stream processing. Not only does it dynamically discover the topology, but an application's topology can morph at runtime. Therefore, there's no topology defining DSL or configuration necessary.
- Integrates with available IAAS and PAAS tools: Dempsy doesn't pretend to be an application server or a batch processing system and therefore the provisioning, deployment and management of nodes in an application are provided by cloud management tools already part of your DevOps infrastructure. It fully integrates with various monitoring solutions including: JMX, Graphite, and Ganglia.
- Fully elastic: Expanding Dempsy's cooperation with existing IAAS/PAAS tools, elasticity allows for the dynamic provisioning and deprovisioning of nodes of the application at runtime. This allows for optimizing the use of computational resources over time (Currently available in the trunk revision).
In a nutshell, Dempsy is a framework that provides for the easy implementation of stream-based, real-time, BigData applications.
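To make the "message processor" idea concrete, here is a minimal sketch of one, assuming a word-count style example and annotation names (@MessageProcessor, @MessageHandler, @MessageKey) chosen to illustrate Dempsy's style rather than to document its exact API. The point is that the class is a plain POJO: no threading, no messaging, no routing code.

```java
// A hypothetical keyed message. The @MessageKey annotation name is an
// illustrative assumption; it marks the value used to route the message.
public class Word implements java.io.Serializable {
    private final String text;

    public Word(String text) { this.text = text; }

    // All messages with the same key are delivered to the same
    // message-processor instance.
    @MessageKey
    public String getText() { return text; }
}

// A hypothetical message processor. It holds only the state for its own key
// and handles one message at a time, so there is no concurrency code here.
@MessageProcessor
public class WordCounter implements Cloneable {
    private long count = 0;

    @MessageHandler
    public void handle(Word word) {
        count++;
    }

    public long getCount() { return count; }

    // The framework clones a prototype to create one instance per key.
    @Override
    public WordCounter clone() throws CloneNotSupportedException {
        return (WordCounter) super.clone();
    }
}
```

Note that nothing here references a transport, a thread pool, or a downstream destination; those concerns belong to the framework.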
Dempsy was originally developed at Nokia for processing streaming GPS data from vehicles in its vehicle traffic products for its "HERE" division. It stands for "Distributed Elastic Message Processing System."
- Dempsy is Distributed. That is to say, a Dempsy application can run on multiple JVMs on multiple physical machines.
- Dempsy is Elastic. That is, it is relatively simple to scale an application to more (or fewer) nodes. This does not require code or configuration changes but allows the dynamic insertion and removal of processing nodes.
- Dempsy is Message Processing. Dempsy fundamentally works by message passing. It moves messages between message processors, which act on the messages to perform simple atomic operations such as enrichment, transformation, or other processing. Generally, an application is intended to be broken down into many small, simple processors rather than a few large, complex ones.
- Dempsy is a Framework. It is not an application container like a J2EE container, nor a simple library. Instead, like the Spring Framework it is a collection of patterns, the libraries to enable those patterns, and the interfaces one must implement to use those libraries to implement the patterns.
Dempsy is not designed to be a general purpose framework, but is intended to solve two specific classes of "stream processing" problems while encouraging the use of the best software development practices. These two classes include:
Dempsy can be used to solve the problem of processing large amounts of "near real time" stream data with the lowest lag possible; problems where latency is more important than "guaranteed delivery." This class of problems includes use cases such as:
- Real time monitoring of large distributed systems
- Processing complete rich streams of social networking data
- Real time analytics on log information generated from widely distributed systems
- Statistical analytics on real-time vehicle traffic information on a global basis
It is meant to provide developers with a tool that allows them to solve these problems in a simple, straightforward manner by allowing them to concentrate on the analytics themselves rather than the infrastructure. Dempsy heavily emphasizes "separation of concerns" through "dependency injection" and supports both Spring and Guice out of the box. It does all of this by supporting what can be (almost) described as a "distributed actor model."
In short, Dempsy is a framework to enable decomposing a large class of message processing applications into flows of messages to relatively simple processing units implemented as POJOs.
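Because message processors are plain objects, they can be wired with an ordinary dependency-injection container. The following is a minimal Spring Java-config sketch, reusing the hypothetical WordCounter from above; how the resulting bean is handed to a Dempsy cluster is intentionally omitted, since that part of the API is version specific.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical Spring configuration: the message-processor prototype is just
// another bean. Dempsy (not shown here) would clone this prototype per key.
@Configuration
public class WordCountConfig {

    @Bean
    public WordCounter wordCounterPrototype() {
        return new WordCounter();
    }
}
```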
Important features of Dempsy include:
- Built to support an “inversion of control” programming paradigm, making it extremely easy to use, and resulting in smaller and easier to test application codebases, reducing the total cost of ownership of applications that employ Dempsy.
- Fine grained message processing allows the developer to decompose complex analytics into simple small steps.
- Invisible Scalability. While Dempsy is completely horizontally scalable and multithreaded, the development paradigm supported by Dempsy removes all scalability and threading concerns from the application developer.
- Dynamic topologies. There is no need to hard wire application stages into a configuration or into the code. Topologies (by default) are discovered at run-time.
- Full elasticity (targeted for the May 2012 release). Dempsy can handle the dynamic provisioning and decommissioning of computational resources.
- Fault Tolerant. While Dempsy doesn't manage the application state in the case of a failure, its elasticity provides fault tolerance by automatically rebalancing the load among the remaining available servers. It cooperates with IaaS auto-scaling tools by rebalancing when more servers are added.
Dempsy is intentionally not an “Application Server” and runs in a completely distributed manner relying on Apache ZooKeeper for coordination. In sticking with one of the development team's guiding principles, it doesn’t try to solve problems already well solved in other parts of the industry. As DevOps tools become the norm in cloud computing environments, where it’s “easier to re-provision than to repair,” Dempsy defers the management of computational resources to such systems; however, being fully elastic (May 2012), it cooperates with them to produce true dynamic fault tolerance.
Dempsy has been described as a distributed actor framework. While it is not, strictly speaking, an actor framework in the sense of Erlang or Akka (where actors typically address messages directly to other actors), the Message Processors in Dempsy are "actor-like POJOs," similar to "Processor Elements" in S4 and, less so, to Bolts in Storm. Message Processors are similar to actors in that they act on a single message at a time and need not deal with concurrency directly. Unlike actors, Message Processors are also relieved of the need to know the destination(s) of their output messages, as this is handled inside Dempsy itself.
The Actor model is an approach to concurrent programming that has the following features:
- Fine-grained processing
A traditional (linear) programming model processes input sequentially, maintaining whatever state is needed to represent the entire input space. In a "Fine-Grained Actor" model, input is divided into messages and distributed to a large number of independent actors. An individual actor maintains only the state needed to process the messages that it receives.
- Shared-Nothing
Each actor maintains its own state, and does not expose that state to any other actor. This eliminates concurrency bottlenecks and the potential for deadlocks. Immutable state may be shared between actors.
- Message-Passing
Actors communicate by sending immutable messages to one another. Each message has a key, and the framework is responsible for directing the message to the actor responsible for that key; a sketch of such a keyed message follows this list.
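As an illustration of a keyed, immutable message in the spirit of the vehicle-traffic use case described earlier, consider the following sketch. The class and method names, and the @MessageKey annotation, are assumptions for illustration only.

```java
// Hypothetical immutable message keyed by vehicle ID. Because the fields are
// final and there are no setters, the message can be shared safely.
public final class VehiclePosition implements java.io.Serializable {
    private final String vehicleId;
    private final double latitude;
    private final double longitude;

    public VehiclePosition(String vehicleId, double latitude, double longitude) {
        this.vehicleId = vehicleId;
        this.latitude = latitude;
        this.longitude = longitude;
    }

    // All positions for the same vehicle are routed to the same actor.
    @MessageKey
    public String getVehicleId() { return vehicleId; }

    public double getLatitude() { return latitude; }
    public double getLongitude() { return longitude; }
}
```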
A distributed actor model takes an additional step, of allowing actors to exist on multiple nodes in a cluster, and supporting communication of messages between nodes. It adds the following complexities to the Actor model:
- Distribution of Messages
A message may or may not be consumed by an actor residing in the same JVM as the actor that sent the message. The required network communication will add delay to processing, and require physical network configuration to support bandwidth requirements and minimize impact to other consumers.
- Load-Balancing
The framework must distribute work evenly between nodes, potentially using different strategies for different message types (e.g., regional grouping for a map-matcher, simple round-robin for vehicles); a toy sketch of key-based routing follows this list.
- Node Failure
If a node fails, the workload on that node must be shifted to other nodes. All state maintained by actors on the failed node is presumed lost.
- Network Partition
If the network connection to a node temporarily drops, it will appear as a node failure to other nodes in the cluster. The node must itself recognize that it is no longer part of the cluster, and its actors must stop sending messages (which may conflict with those sent by the cluster's "replacement" node).
- Node Addition
To support elastic scalability (adding nodes on demand to service load, as well as re-integration of a previously failed node), the framework must support redistribution of actors and their state based on changes to the cluster.
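To see the load-balancing problem in its simplest form, here is a deliberately naive key-to-node assignment. It is a conceptual toy only: a plain modulo scheme reshuffles most keys whenever a node is added or removed, which is exactly what the elasticity and node-failure points above require a real strategy (such as Dempsy's) to avoid.

```java
import java.util.List;

// Toy illustration of key-based routing: every message key maps to exactly one
// node, so all messages for a given key reach the same processor instance.
public final class NaiveKeyRouter {
    private final List<String> nodes; // e.g. "host:port" identifiers

    public NaiveKeyRouter(List<String> nodes) {
        this.nodes = List.copyOf(nodes);
    }

    // floorMod keeps the bucket non-negative even for negative hash codes.
    public String nodeFor(Object messageKey) {
        int bucket = Math.floorMod(messageKey.hashCode(), nodes.size());
        return nodes.get(bucket);
    }
}
```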
Dempsy is not any flavor of "Complex Event Processing" (nor other names CEP may be known as, such as "Event Processing System," etc.). It does not support the ability to query streams with all of the attending bells-and-whistles.
These systems have their place and if your particular use case has need of their features, then Dempsy is not the right tool for you. Dempsy aims at satisfying the particular class of use-cases in stream processing that don't require the complexity of these systems in a manner that makes these applications much easier to build and maintain.
We believe there are a large number of use-cases that Dempsy fits, but there are also many that it does not.
Above all, and in many ways, Dempsy is meant to be SIMPLE. It doesn't try to be the solution for every problem. It tries to do one thing well, and it is meant to support developers who think this way. Dempsy is built to emphasize several interrelated principles, which are meant to reduce the longer-term total cost of ownership of the software written using Dempsy. These include:
- Separation of Concerns (SoC) - Dempsy expects the developer to be able to concentrate on writing the analytics and business logic with (virtually) no consideration for framework or infrastructure.
- Decoupling - SoC provides the means to isolate cross-cutting concerns so that code written for Dempsy will have little to no (with due respect to annotations) dependence on even the framework itself. The developer's code can easily be run separately from the framework and, in the spirit of Dependency Injection, the framework uses the developer's code rather than the developer having to use the framework. This type of decoupling provides for analytics/business code that has no infrastructure concerns: no framework dependencies, no messaging code, no threading code, etc.
- Testability - All of this provides a basis for writing code that's more testable.
- "Do one thing well" - Dempsy is written to provide one service: support for this type of "Distributed Actor Model" (with all of the acknowledged caveats) programming paradigm. For this reason it does not pretend to be an Application Server, nor does it substitute for the lack of an automated provisioning/deployment system.
CEP is really trying to solve a different problem. If you have a large stream of data you want to mine by separating it into subsets and executing different analytics on each subset (which can include ignoring entire subsets), then CEP solutions make sense. If, however, you’re going to do the same thing to every message, then you will be underutilizing the power of CEP. Underutilized functionality usually means an increased total cost of ownership, and Dempsy is ALL ABOUT reducing the total cost of ownership for systems that do this type of processing.
There are several pure "Actor Model" frameworks and languages that have been posed as alternatives to Dempsy. Dempsy is not a pure actor model and primarily solves a different problem. As described above, Dempsy is primarily a message-routing mechanism for "fine-grained" actors. The reason we still (though loosely) call it an "actor model" is because Dempsy supports concurrency the way a typical Actor Model does.
Dempsy emphasizes reducing the total cost of ownership of real-time analytics applications and as a direct result we feel it has some advantages over the alternatives.
First, as mentioned, Dempsy supports “fine grained” message processing. Because of this, when we write the same use-cases in Dempsy and in alternatives that don't support this programming model, we find that Dempsy leads to a lower code-line count.
Also, because of Dempsy’s emphasis on “Inversion of Control” the resulting applications are more easily testable. With the exception of annotations, Message Processors, which are the atomic unit of work in Dempsy, have no dependency on the framework itself. Every alternative we've found requires that the application be written against and written to use that framework.
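As an illustration of that testability claim, the hypothetical WordCounter sketched earlier can be exercised with an ordinary JUnit test and no Dempsy runtime at all.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Plain JUnit test of the hypothetical WordCounter sketch: no Dempsy runtime,
// no messaging, no threads. The POJO is driven directly.
public class WordCounterTest {

    @Test
    public void countsEachHandledWord() {
        WordCounter counter = new WordCounter();
        counter.handle(new Word("dempsy"));
        counter.handle(new Word("dempsy"));
        assertEquals(2, counter.getCount());
    }
}
```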
Also, in following the adage to never require the system to be told something that it can deduce, the topology of a Dempsy application’s pipeline is discovered at runtime and doesn’t need to be preconfigured. This is primarily a by-product of the fact that Dempsy was designed from the ground up to be “elastic” and as a result, the topology can morph dynamically.
This means that applications with complicated topologies with many branches and merges can be trivially configured since the dependency relationship between stages is discovered by the framework.
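One way to picture that runtime discovery, continuing the hypothetical word-count sketch: the pipeline's shape falls out of the message types each processor consumes, not out of a wiring file. The types below, and the idea that a downstream stage is identified by the handler it declares, are illustrative assumptions rather than a description of Dempsy's exact mechanism.

```java
// Hypothetical downstream message and stage. Because WordRanker declares a
// handler for WordCount, a framework can infer the WordCounter -> WordRanker
// edge of the pipeline at runtime from the types alone.
public class WordCount implements java.io.Serializable {
    private final String word;
    private final long count;

    public WordCount(String word, long count) {
        this.word = word;
        this.count = count;
    }

    @MessageKey
    public String getWord() { return word; }

    public long getCount() { return count; }
}

@MessageProcessor
public class WordRanker implements Cloneable {

    @MessageHandler
    public void handle(WordCount wordCount) {
        // ... maintain a ranking of the most frequently seen words ...
    }

    @Override
    public WordRanker clone() throws CloneNotSupportedException {
        return (WordRanker) super.clone();
    }
}
```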