Edits for O'Reilly release
* replaced chapter references with bookmarks
hintjens committed Nov 4, 2012
1 parent b9f8477 commit ef7af7b
Showing 143 changed files with 27,088 additions and 21,072 deletions.
12 changes: 11 additions & 1 deletion bin/mkbook
@@ -72,12 +72,13 @@ print OUTPUT "<!DOCTYPE book PUBLIC \"-//OASIS//DTD DocBook XML V4.5//EN\"\n";
print OUTPUT "\"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\">\n";
print OUTPUT "<book>\n";

-# Include bookinfo if publisher directory exists
+# Include bookinfo if book directory exists
if ($format eq "book" && open (BOOKINFO, "book/bookinfo.xml")) {
print OUTPUT "<title>ZeroMQ - Connecting your Code</title>\n";
while (<BOOKINFO>) {
print OUTPUT $_ unless /DOCTYPE/;
}
close (BOOKINFO);
}
else {
print OUTPUT "<title>The ZeroMQ Guide - for $source Developers</title>\n";
@@ -107,6 +108,15 @@ while (<>) {
elsif (/^\.prelude\s+(\w.*)/) {
$prelude = $1;
}
+elsif (/^\.inbook\s+(\w.*)/) {
+# Include book boilerplate if book directory exists
+if ($format eq "book" && open (BOILER, "book/$1")) {
+while (<BOILER>) {
+print OUTPUT $_;
+}
+close (BOILER);
+}
+}
elsif (/^\.filter\s+(\w.*)/) {
$filter = ($1 eq $format);
}
6 changes: 6 additions & 0 deletions bin/z2w
@@ -129,6 +129,12 @@ while (<>) {
elsif (/^\.prelude\s+(\w.*)/) {
$prelude = $1;
}
+elsif (/^\.inbook\s+(\w.*)/) {
+# Skip .inbook directive
+}
+elsif (/^\.bookmark\s+(\w.*)/) {
+# No handling for bookmark yet
+}
elsif (/^\.filter\s+(\w.*)/) {
$filter = ($1 eq "online");
}
16,594 changes: 11,122 additions & 5,472 deletions book.xml

Large diffs are not rendered by default.

130 changes: 65 additions & 65 deletions chapter1.txt

Large diffs are not rendered by default.

208 changes: 104 additions & 104 deletions chapter2.txt

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions chapter3.txt
@@ -1,8 +1,8 @@
.output chapter3.wd
-[[advanced_request_reply_patterns]]
+.bookmark advanced-request-reply
++ Advanced Request-Reply Patterns

-In Chapter Two we worked through the basics of using 0MQ by developing a series of small applications, each time exploring new aspects of 0MQ. We'll continue this approach in this chapter, as we explore advanced patterns built on top of 0MQ's core request-reply pattern.
+In <<sockets-and-patterns>> we worked through the basics of using 0MQ by developing a series of small applications, each time exploring new aspects of 0MQ. We'll continue this approach in this chapter, as we explore advanced patterns built on top of 0MQ's core request-reply pattern.

We'll cover:

10 changes: 5 additions & 5 deletions chapter4.txt
@@ -1,8 +1,8 @@
.output chapter4.wd
-[[reliable_request_reply]]
-++ Chapter Four - Reliable Request-Reply
+.bookmark reliable-request-reply
+++ Reliable Request-Reply

-In Chapter Three we looked at advanced use of 0MQ's request-reply pattern with worked examples. In this chapter we'll look at the general question of reliability and build a set of reliable messaging patterns on top of 0MQ's core request-reply pattern.
+In <<advanced-request-reply>> we looked at advanced use of 0MQ's request-reply pattern with worked examples. In this chapter we'll look at the general question of reliability and build a set of reliable messaging patterns on top of 0MQ's core request-reply pattern.

In this chapter we focus heavily on user-space request-reply 'patterns', reusable models that help you design your own 0MQ architectures:

@@ -150,7 +150,7 @@ Our second approach takes Lazy Pirate pattern and extends it with a queue device

In all these Pirate patterns, workers are stateless, or have some shared state we don't know about, e.g. a shared database. Having a queue device means workers can come and go without clients knowing anything about it. If one worker dies, another takes over. This is a nice simple topology with only one real weakness, namely the central queue itself, which can become a problem to manage, and a single point of failure.

-The basis for the queue device is the least-recently-used (LRU) routing queue from Chapter Three. What is the very //minimum// we need to do to handle dead or blocked workers? Turns out, it's surprisingly little. We already have a retry mechanism in the client. So using the standard LRU queue will work pretty well. This fits with 0MQ's philosophy that we can extend a peer-to-peer pattern like request-reply by plugging naive devices in the middle!figref().
+The basis for the queue device is the least-recently-used (LRU) routing queue from <<advanced-request-reply>>. What is the very //minimum// we need to do to handle dead or blocked workers? Turns out, it's surprisingly little. We already have a retry mechanism in the client. So using the standard LRU queue will work pretty well. This fits with 0MQ's philosophy that we can extend a peer-to-peer pattern like request-reply by plugging naive devices in the middle!figref().
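
That client-side retry boils down to a poll-with-timeout loop around the REQ socket. Here is a minimal sketch of the idea (libzmq 3.x API; the endpoint, timeout, and retry count are illustrative placeholders):

[[code]]
#include <zmq.h>
#include <string.h>

#define REQUEST_TIMEOUT  2500       //  msecs (zmq_poll uses msecs in 3.x)
#define REQUEST_RETRIES  3
#define SERVER_ENDPOINT  "tcp://localhost:5555"

//  Returns a REQ socket with a reply waiting, or NULL if we gave up
static void *
request_with_retry (void *ctx, const char *request)
{
    void *client = zmq_socket (ctx, ZMQ_REQ);
    zmq_connect (client, SERVER_ENDPOINT);
    int retries_left = REQUEST_RETRIES;
    while (retries_left--) {
        zmq_send (client, request, strlen (request), 0);
        zmq_pollitem_t items [] = { { client, 0, ZMQ_POLLIN, 0 } };
        zmq_poll (items, 1, REQUEST_TIMEOUT);
        if (items [0].revents & ZMQ_POLLIN)
            return client;          //  Reply is waiting to be read
        //  No reply in time: a REQ socket can't simply resend, so we
        //  discard it, connect a fresh one, and try again
        int linger = 0;
        zmq_setsockopt (client, ZMQ_LINGER, &linger, sizeof (linger));
        zmq_close (client);
        client = zmq_socket (ctx, ZMQ_REQ);
        zmq_connect (client, SERVER_ENDPOINT);
    }
    zmq_close (client);
    return NULL;                    //  Server unreachable; give up
}
[[/code]]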

[[code type="textdiagram" title="The Simple Pirate Pattern"]]
+-----------+ +-----------+ +-----------+
@@ -208,7 +208,7 @@ The Simple Pirate Queue pattern works pretty well, especially since it's just a

We'll fix these in a properly pedantic Paranoid Pirate Pattern.

-We previously used a REQ socket for the worker. For the Paranoid Pirate worker we'll switch to a DEALER socket!figref(). This has the advantage of letting us send and receive messages at any time, rather than the lock-step send/receive that REQ imposes. The downside of DEALER is that we have to do our own envelope management. If you don't know what I mean, please re-read Chapter Three.
+We previously used a REQ socket for the worker. For the Paranoid Pirate worker we'll switch to a DEALER socket!figref(). This has the advantage of letting us send and receive messages at any time, rather than the lock-step send/receive that REQ imposes. The downside of DEALER is that we have to do our own envelope management. If you don't know what I mean, please re-read <<advanced-request-reply>>.
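
To make "envelope management" concrete, here is a minimal sketch (libzmq 3.x API; the worker socket is assumed to be an already-connected DEALER) that receives a request and sends every frame straight back, so the address envelope added by the queue's ROUTER socket survives the round trip. A real worker would replace the final body frame with its reply:

[[code]]
#include <zmq.h>

//  Echo one request back over a DEALER socket, frame by frame, keeping
//  the routing envelope intact
static void
echo_request (void *worker)
{
    int more;
    size_t more_size = sizeof (more);
    do {
        zmq_msg_t frame;
        zmq_msg_init (&frame);
        zmq_msg_recv (&frame, worker, 0);
        zmq_getsockopt (worker, ZMQ_RCVMORE, &more, &more_size);
        zmq_msg_send (&frame, worker, more? ZMQ_SNDMORE: 0);
    } while (more);
}
[[/code]]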

[[code type="textdiagram" title="The Paranoid Pirate Pattern"]]
+-----------+ +-----------+ +-----------+
13 changes: 7 additions & 6 deletions chapter5.txt
@@ -1,7 +1,8 @@
.output chapter5.wd
-++ Chapter Five - Advanced Publish-Subscribe
+.bookmark advanced-publish-subscribe
+++ Advanced Publish-Subscribe

-In Chapters Three and Four we looked at advanced use of 0MQ's request-reply pattern. If you managed to digest all that, congratulations. In this chapter we'll focus on publish-subscribe, and extend 0MQ's core pub-sub pattern with higher-level patterns for performance, reliability, state distribution, and monitoring.
+In <<advanced-request-reply>> and <<reliable-request-reply>> we looked at advanced use of 0MQ's request-reply pattern. If you managed to digest all that, congratulations. In this chapter we'll focus on publish-subscribe, and extend 0MQ's core pub-sub pattern with higher-level patterns for performance, reliability, state distribution, and monitoring.

We'll cover:

@@ -193,7 +194,7 @@ The key aspect of the Clone pattern is that clients talk back to servers, which

++++ Distributing Key-Value Updates

-We'll develop Clone in stages, solving one problem at a time. First, let's look at how to distribute key-value updates from a server to a set of clients. We'll take our weather server from Chapter One and refactor it to send messages as key-value pairs!figref(). We'll modify our client to store these in a hash table.
+We'll develop Clone in stages, solving one problem at a time. First, let's look at how to distribute key-value updates from a server to a set of clients. We'll take our weather server from <<basics>> and refactor it to send messages as key-value pairs!figref(). We'll modify our client to store these in a hash table.
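
The core of this first model is tiny. As a stripped-down sketch (libzmq 3.x; kvmap_insert() is a hypothetical stand-in for whatever hash table the client keeps):

[[code]]
#include <zmq.h>
#include <string.h>

void kvmap_insert (const char *key, const char *value);    //  Hypothetical

//  Server side: send one update as a two-frame [key][value] message
//  over an already-bound PUB socket
static void
send_update (void *publisher, const char *key, const char *value)
{
    zmq_send (publisher, key, strlen (key), ZMQ_SNDMORE);
    zmq_send (publisher, value, strlen (value), 0);
}

//  Client side: receive one update from a SUB socket (subscribed to
//  everything) and store it in the hash table
static void
recv_update (void *subscriber)
{
    char key [256], value [256];
    int key_size = zmq_recv (subscriber, key, sizeof (key) - 1, 0);
    int value_size = zmq_recv (subscriber, value, sizeof (value) - 1, 0);
    if (key_size < 0 || value_size < 0)
        return;                     //  Interrupted
    if (key_size > (int) sizeof (key) - 1)
        key_size = sizeof (key) - 1;
    if (value_size > (int) sizeof (value) - 1)
        value_size = sizeof (value) - 1;
    key [key_size] = 0;
    value [value_size] = 0;
    kvmap_insert (key, value);
}
[[/code]]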

[[code type="textdiagram" title="Simplest Clone Model"]]
+-------------+
@@ -436,7 +437,7 @@ Let's list the failures we want to be able to handle:

* Clone server process or machine gets disconnected from the network, e.g. a switch dies. It may come back at some point, but in the meantime clients need an alternate server.

-Our first step is to add a second server. We can use the Binary Star pattern from Chapter four to organize these into primary and backup. Binary Star is a reactor, so it's useful that we already refactored the last server model into a reactor style.
+Our first step is to add a second server. We can use the Binary Star pattern from <<reliable-request-reply>> to organize these into primary and backup. Binary Star is a reactor, so it's useful that we already refactored the last server model into a reactor style.

We need to ensure that updates are not lost if the primary server crashes. The simplest technique is to send them to both servers.
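
Because a PUB socket fans each message out to every connected peer, one simple way to sketch this (assuming the servers collect updates on SUB sockets subscribed to everything, as in the port layout described below; the endpoints here are placeholders) is to connect a single PUB socket to both servers:

[[code]]
#include <zmq.h>
#include <string.h>

//  Connect one PUB socket to both servers' update collectors so every
//  update reaches primary and backup alike
static void *
connect_update_publisher (void *ctx)
{
    void *updates = zmq_socket (ctx, ZMQ_PUB);
    zmq_connect (updates, "tcp://primary:5558");    //  Placeholder endpoints
    zmq_connect (updates, "tcp://backup:5558");
    return updates;
}

//  Send one key-value update to both servers at once
static void
publish_update (void *updates, const char *key, const char *value)
{
    zmq_send (updates, key, strlen (key), ZMQ_SNDMORE);
    zmq_send (updates, value, strlen (value), 0);
}
[[/code]]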

@@ -567,7 +568,7 @@ Which has the advantage of simplicity (one server sits at one endpoint) but has
* The server updates publisher (PUB) is at port P + 1.
* The server updates subscriber (SUB) is at port P + 2.

-The clone class has the same structure as the flcliapi class from Chapter Four. It consists of two parts:
+The clone class has the same structure as the flcliapi class from <<reliable-request-reply>>. It consists of two parts:

* An asynchronous clone agent that runs in a background thread. The agent handles all network I/O, talking to servers in real-time, no matter what the application is doing.

@@ -678,4 +679,4 @@ And now run as many instances of the subscriber as you want to try, each time co

Each subscriber happily reports "Save Roger", and Roth the Escaped Convict slinks back to his cell for dinner and a nice cup of hot milk, which is all he really wanted anyhow and could someone call his mum and tell her his clean socks are almost all up.

-One note: the XPUB socket by default does not report duplicate subscriptions, which is what you want when you're naively connecting an XPUB to an XSUB. Our example sneakily gets around this by using random topics so the chance of it not working is one in a million. In a real LVC proxy you'll want to use the {{ZMQ_XPUB_VERBOSE}} option that we implement in Chapter seven as an exercise.
+One note: the XPUB socket by default does not report duplicate subscriptions, which is what you want when you're naively connecting an XPUB to an XSUB. Our example sneakily gets around this by using random topics so the chance of it not working is one in a million. In a real LVC proxy you'll want to use the {{ZMQ_XPUB_VERBOSE}} option that we implement later in <<the-0MQ-community>> as an exercise.
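
In libzmq builds that provide the option, switching it on is a one-liner on the proxy's XPUB socket (the {{xpub}} variable here is assumed):

[[code]]
//  Ask the XPUB socket to pass every subscription message through,
//  including duplicates
int verbose = 1;
zmq_setsockopt (xpub, ZMQ_XPUB_VERBOSE, &verbose, sizeof (verbose));
[[/code]]
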
17 changes: 9 additions & 8 deletions chapter6.txt
@@ -1,5 +1,6 @@
.output chapter6.wd
-++ Chapter Six - The Human Scale
+.bookmark the-human-scale
+++ The Human Scale

If you've survived the first five chapters, congratulations. It was hard for me too. Happily the jokes and the code mostly write themselves, so we'll continue with our journey of exploring 0MQ. In this chapter I'm going to step back from the nuts and bolts of 0MQ's technical machinery, and look more at how to use 0MQ successfully in larger projects.

@@ -201,7 +202,7 @@ Speaking of embarrassments, just as 0MQ lets us aim for really massive architect

So MOPED is meant to save us from such mistakes. Partly it's about slowing down, partly it's about ensuring that when you move fast, you go - and this is essential, dear reader - in the //right direction//. It's my standard interview riddle: what's the rarest property of any software system, the absolute hardest thing to get right, the lack of which causes the slow or fast death of the vast majority of projects? The answer is not code quality, funding, performance, or even (though it's a close answer), popularity. The answer is "accuracy".

-If you've read the Guide observantly you'll have seen MOPED in action already. The development of Majordomo in Chapter four is a near-perfect case. But cute names are worth a thousand words.
+If you've read the Guide observantly you'll have seen MOPED in action already. The development of Majordomo in <<reliable-request-reply>> is a near-perfect case. But cute names are worth a thousand words.

The goal of MOPED is to define a process, a pattern by which we can take a rough use case for a new distributed application, and go from "hello world" to fully-working prototype in any language in under a week.

@@ -269,7 +270,7 @@ Now, I've nothing personal against committees. The useless folk need a place to

It used to be, decades ago, when the Internet was a young modest thing, that protocols were short and sweet. They weren't even "standards", but "requests for comments", which is as modest as you can get. It's been one of my goals since we started iMatix in 1995 to find a way for ordinary people like me to write small, accurate protocols without the overhead of the committees.

-Now, 0MQ does appear to provide a living, successful protocol abstraction layer with its "we'll carry multi-part messages over random transports" way of working. Since 0MQ deals silently with framing, connections, and routing, it's surprisingly easy to write full protocol specs on top of 0MQ, and in Chapters four and five I showed how to do this.
+Now, 0MQ does appear to provide a living, successful protocol abstraction layer with its "we'll carry multi-part messages over random transports" way of working. Since 0MQ deals silently with framing, connections, and routing, it's surprisingly easy to write full protocol specs on top of 0MQ, and in <<reliable-request-reply>> and <<advanced-publish-subscribe>> I showed how to do this.
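
It's easier to picture with a concrete frame layout. Here is a sketch of sending a command for an invented protocol (the protocol signature, command name, and body are made up for illustration, not taken from any of the Guide's real protocols):

[[code]]
#include <zmq.h>
#include <string.h>

//  Send a command for a made-up "EXAMPLE01" protocol as a three-frame
//  message: [protocol signature][command name][body]. 0MQ takes care of
//  framing and delivery; the spec only has to say what goes in each frame.
static void
send_example_command (void *socket, const char *command, const char *body)
{
    zmq_send (socket, "EXAMPLE01", 9, ZMQ_SNDMORE);
    zmq_send (socket, command, strlen (command), ZMQ_SNDMORE);
    zmq_send (socket, body, strlen (body), 0);
}
[[/code]]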

Somewhere around mid-2007, I kicked-off the Digital Standards Organization to define new simpler ways of producing little standards, protocols, specifications. In my defense, it was a quiet summer. At the time [http://www.digistan.org/spec:1 I wrote that] a new specification should take "minutes to explain, hours to design, days to write, weeks to prove, months to become mature, and years to replace."

@@ -528,11 +529,11 @@ What if I told you of a way to build custom IDL generators cheaply and quickly?

At iMatix, until a few years ago, we used code generation to build ever larger and more ambitious systems until we decided the technology (GSL) was too dangerous for common use, and we sealed the archive and locked it, with heavy chains, in a deep dungeon. Well, we actually posted it on github. If you want to try the examples that are coming up, grab [https://github.com/imatix/gsl the repository] and build yourself a {{gsl}} command. Typing "make" in the src subdirectory should do it (and if you're that guy who loves Windows, I'm sure you'll send a patch with project files).

-This section isn't really about GSL at all, but about a little-known trick that's useful for ambitious architects who want to scale themselves, as well as their work. Once you learn the trick, you can whip up your own code generators in a short time. The code generators most software engineers know about come with a single hard-coded model. For instance, Ragel "compiles executable finite state machines from regular languages", i.e. Ragel's model is a regular language. This certainly works for a good set of problems but it's far from universal. How do you describe an API in Ragel? Or a project makefile? Or even a finite-state machine like the one we used to design the Binary Star pattern in Chapter four?
+This section isn't really about GSL at all, but about a little-known trick that's useful for ambitious architects who want to scale themselves, as well as their work. Once you learn the trick, you can whip up your own code generators in a short time. The code generators most software engineers know about come with a single hard-coded model. For instance, Ragel "compiles executable finite state machines from regular languages", i.e. Ragel's model is a regular language. This certainly works for a good set of problems but it's far from universal. How do you describe an API in Ragel? Or a project makefile? Or even a finite-state machine like the one we used to design the Binary Star pattern in <<reliable-request-reply>>?

All these would benefit from code generation, but there's no universal model. So the trick is to design your own models as you need them, then make code generators as cheap compilers for that model. You need some experience in how to make good models, and you need a technology that makes it cheap to build custom code generators. Scripting languages like Perl and Python are a good option. However we actually built GSL specifically for this, and that's what I prefer.

-Let's take a simple example that ties into what we already know. We'll see more extensive examples later, because I really do believe that code generation is crucial knowledge for large-scale work. In Chapter four, we developed the [http://rfc.zeromq.org/spec:7 Majordomo Protocol (MDP)], and wrote clients, brokers, and workers for that. Now could we generate those pieces mechanically, by building our own interface description language and code generators?
+Let's take a simple example that ties into what we already know. We'll see more extensive examples later, because I really do believe that code generation is crucial knowledge for large-scale work. In <<reliable-request-reply>>, we developed the [http://rfc.zeromq.org/spec:7 Majordomo Protocol (MDP)], and wrote clients, brokers, and workers for that. Now could we generate those pieces mechanically, by building our own interface description language and code generators?

When we write a GSL model, we can use //any// semantics we like, in other words we can invent domain-specific languages on the spot. I'll invent a couple - see if you can guess what they represent:

@@ -668,7 +669,7 @@ bytes, $(field.:))
.endfor
[[/code]]

-The XML models and this script are in the subdirectory examples/Chapter6. To do the code generation I give this command:
+The XML models and this script are in the subdirectory examples/models. To do the code generation I give this command:

[[code]]
gsl -script:specs mdp_client.xml mdp_worker.xml
@@ -720,7 +721,7 @@ frames:
* Frame 3: 0x05 (1 byte, DISCONNECT)
[[/code]]

-Which as you can see is close to what I wrote by hand in the original spec. Now, if you have cloned the Guide repository and you are looking at the code in examples/Chapter6, you can generate the MDP client and worker codecs. We pass the same two models to a different code generator:
+Which as you can see is close to what I wrote by hand in the original spec. Now, if you have cloned the Guide repository and you are looking at the code in examples/models, you can generate the MDP client and worker codecs. We pass the same two models to a different code generator:

[[code]]
gsl -script:codec_c mdp_client.xml mdp_worker.xml
@@ -1108,7 +1109,7 @@ Now, the original source for these pretty pictures is an XML model:
</class>
[[/code]]

-The code generator is in examples/Chapter6/server_c.gsl. It is a fairly complete tool that I'll use and expand for more serious work later. It generates:
+The code generator is in examples/models/server_c.gsl. It is a fairly complete tool that I'll use and expand for more serious work later. It generates:

* A server class in C (nom_server.c, nom_server.h) that implements the whole protocol flow.
* A selftest method that runs the selftest steps listed in the XML file.