zooko/TwistedRust
This is the summary of a long conversation in which I convinced Claude that the fully-synchronous concurrency model is better for local reasoning and for testability than the async/await model (as used in Rust's Tokio and in JavaScript, for example). Along the way I had to redirect him back onto the true path a couple of times, and I got him to implement a little bit of Twisted Python in Rust to use as examples for our discussion. At the end I asked him to summarize:

Generated by Claude 4.7 Opus at my (Zooko's) request. See ./initial_conversation.md for the context.

My prompt

Okay, now please help me summarize this conversation to communicate the critical bits to someone without them reading through our entire back and forth. Here are the parts that I'd like to include:

  1. The fully-synchronous model allows easier local reasoning than the .await model does, and Rust's type system does not fix this.

  2. The fully-synchronous model makes it easier to write thorough and deterministic unit tests.

  3. It is pretty straightforward to add abstraction and composition on top of the fully-synchronous model, once you stop trying to make functions that block until they can return what the caller is looking for, and instead pass handlers that get called once the thing that the handler is looking for is synchronously available.

  4. There is no currently-maintained library or framework to support this model (a la Twisted Python) in Rust, but on the other hand it is little-enough source code that you can write it yourself for your purpose.

  5. If someone wrote one that provided some commonly-needed functionality (a la Twisted Python), it would fill a gap in the Rust ecosystem.

Please summarize all that, including code examples as necessary to demonstrate the points. I'm especially fond of the gap between the 9-case unit tests for the fully-synchronous version versus the attempt to test the same functionality for the Tokio version, so maybe include that, although maybe move it into a footnote/appendix so as not to disrupt the reader reading your summary.


Assistant

5/2/2026, 11:36:16 AM

Synchronous reactor programming in Rust: a case for the missing middle

Rust's concurrency story is dominated by two options: `std::thread` for parallelism and `async`/`await` for concurrent I/O. There is a third model that the ecosystem has largely forgotten — the fully synchronous, single-threaded reactor pattern, of the kind Twisted Python popularised — and it has properties that the `async` model does not, properties that matter for code where local reasoning and testability are paramount.

1. Local reasoning: the synchronous model wins, and the type system does not save `async`

In a fully synchronous reactor, the application code runs in a single thread, and the only place control returns to the runtime is at the top of the event loop. Inside any handler — `on_bytes`, `on_frame`, `on_type0` — you can read top to bottom and know that nothing else in the process ran between any two lines. The `&mut self` you hold is the only `&mut self` in the universe for the duration of the call, by construction.

In the `async`/`await` model, every `.await` is a yield point. Any task waiting on the same runtime can run there, mutate any state it has access to, drop guards, complete or cancel futures, and return control to you with the world rearranged. Rust's borrow checker does not protect you here, because the data those other tasks mutate is their data — they hold their own `&mut` references. The invariant the borrow checker enforces (no aliased mutation at any instant) is preserved; the invariant you actually want when reading code (nothing else changes between these two lines) is not the same invariant, and Rust does not enforce it.

This is not hypothetical. Consider:

```rust
async fn handle(&mut self, msg: Msg) {
    let count = self.shared.read_count().await;  // yield point
    // Another task may have run here, including code that
    // increments the count, removes entries from self.shared,
    // or sends messages that this task assumed couldn't arrive yet.
    self.shared.set_count(count + 1).await;      // yield point
}
```

The compiler accepts this. Two tasks running this concurrently produce a lost update. The fix requires either a mutex held across the yield (which serialises and risks deadlock) or a CAS-style retry loop or a redesign — none of which the type system prompts you to do. In the synchronous model the equivalent code is correct as written, because there is no yield point.

2. Testability: deterministic and thorough vs. runtime-dependent and approximate

The synchronous model makes the protocol layer a plain state machine: a struct with methods that take bytes and produce frames. Unit tests construct it, call methods with chosen byte slices, and assert on observable state. There is no runtime, no executor, no time, no scheduling.

Consider a length-prefixed framer that reads a 4-byte big-endian length followed by a body. Here is the synchronous version:

```rust
pub struct LengthPrefixed {
    len_buf: [u8; 4],
    len_filled: u8,
    body: Vec<u8>,
    body_needed: usize,
}

impl LengthPrefixed {
    pub fn new() -> Self {
        LengthPrefixed {
            len_buf: [0; 4],
            len_filled: 0,
            body: Vec::new(),
            body_needed: 0,
        }
    }

    pub fn on_bytes(&mut self, mut bytes: &[u8],
                    mut on_frame: impl FnMut(&[u8])) {
        while !bytes.is_empty() {
            if self.body_needed == 0 {
                let take = (4 - self.len_filled as usize).min(bytes.len());
                self.len_buf[self.len_filled as usize..][..take]
                    .copy_from_slice(&bytes[..take]);
                self.len_filled += take as u8;
                bytes = &bytes[take..];
                if self.len_filled == 4 {
                    self.body_needed = u32::from_be_bytes(self.len_buf) as usize;
                    self.len_filled = 0;
                    self.body.clear();
                    self.body.reserve(self.body_needed);
                    if self.body_needed == 0 {
                        // a zero-length frame is complete as soon as its
                        // length is, so deliver it immediately
                        on_frame(&self.body);
                    }
                }
            } else {
                let take = (self.body_needed - self.body.len()).min(bytes.len());
                self.body.extend_from_slice(&bytes[..take]);
                bytes = &bytes[take..];
                if self.body.len() == self.body_needed {
                    on_frame(&self.body);
                    self.body_needed = 0;
                }
            }
        }
    }
}
```

A test exercises it like this:

```rust
#[test]
fn length_split_across_three_reads() {
    let mut f = LengthPrefixed::new();
    let mut frames: Vec<Vec<u8>> = Vec::new();
    let mut push = |b: &[u8]| frames.push(b.to_vec());
    // the test lives in the same module, so the private fields are visible
    f.on_bytes(&[0x00, 0x00], &mut push);
    assert_eq!(f.len_filled, 2);
    f.on_bytes(&[0x00], &mut push);
    assert_eq!(f.len_filled, 3);
    f.on_bytes(&[0x03, b'a', b'b', b'c'], &mut push);
    assert_eq!(frames, vec![b"abc".to_vec()]);
}
```

The Tokio version of the same logic — a `pump` function reading from an `AsyncRead` and dispatching frames — requires, for each test case, constructing a mock `AsyncRead` that produces bytes in the chunks you want, spawning the pump on a runtime, awaiting completion, and inspecting outputs. The test is longer, depends on the runtime, and cannot directly observe `len_filled` mid-stream because there is no mid-stream — the pump runs to completion or is cancelled, and partial-read state lives in stack locals inside an opaque future. Asserting "after exactly two bytes arrived, the framer is in this specific intermediate state" is straightforward synchronously and awkward-to-impossible asynchronously.

The appendix lists nine specific unit-test scenarios for `LengthPrefixed` and contrasts them with the corresponding Tokio test attempts.

3. Composition: pass handlers, do not block

The intuition that synchronous code cannot compose well comes from a habit: writing functions that block until they can return what the caller asked for. `read_frame() -> Frame` cannot exist in a single-threaded non-blocking reactor, because there is no thread to block. The fix is to invert: instead of returning the thing, accept a handler that gets called when the thing is synchronously available.

`LengthPrefixed::on_bytes` takes `impl FnMut(&[u8])` and calls it once per complete frame. The caller — a `Conn` — has its own `on_frame` that does type dispatch. The `Server` owns the connection table and cross-cutting state. The composition looks like this:

```rust
use std::collections::HashMap;

pub type ConnId = u64;

pub struct Server {
    conns: HashMap<ConnId, Conn>,
    type0_total: u64,
}

pub struct Conn {
    framer: LengthPrefixed,
    sent_type0: bool,
}

impl Server {
    pub fn on_bytes(&mut self, id: ConnId, bytes: &[u8]) {
        let conn = self.conns.get_mut(&id).unwrap();
        let mut framer = std::mem::replace(&mut conn.framer,
                                           LengthPrefixed::new());
        framer.on_bytes(bytes, |frame| Conn::on_frame(self, id, frame));
        self.conns.get_mut(&id).unwrap().framer = framer;
    }

    fn on_type0(&mut self, id: ConnId) {
        self.type0_total += 1;
        self.conns.get_mut(&id).unwrap().sent_type0 = true;
    }
}

impl Conn {
    fn on_frame(server: &mut Server, id: ConnId, frame: &[u8]) {
        // assumes every frame carries at least a 4-byte type tag
        let ty = u32::from_be_bytes(frame[..4].try_into().unwrap());
        match ty {
            0x00000000 => server.on_type0(id),
            _ => panic!("unknown type"),
        }
    }
}
```

The `mem::replace` is the one piece of plumbing: the framer lives inside the server, so calling `framer.on_bytes(...)` with a closure that captures `&mut self` while the framer is still a field of `self` would alias `self`. Pulling the framer out, running it against `&mut self`, and putting it back is local, explicit, costs nothing at runtime, and lets the borrow checker prove soundness rather than punting to `RefCell`. This pattern composes indefinitely: more layers, more message types, broadcasting to all connections — all are ordinary `&mut self` method calls.

The mental shift required is small but real: stop looking for `let frame = read_frame().await;` and start writing `framer.on_bytes(bytes, |frame| ...);`. Once made, the abstraction story is no worse than `async`'s, and the local-reasoning property is preserved because no method below the event loop yields.

4. The ecosystem gap

There is no currently-maintained Rust library that provides Twisted-shaped abstractions: a reactor, a `Protocol` trait with `on_bytes`/`on_close`, a `Transport` for writing, composable framers (`LengthPrefixed`, `LineDelimited`, `Netstring`), a connection table. The libraries that tried — `rotor`, `mioco`, early `tokio-core` — were abandoned around 2017–2018 when `async`/`await` consumed the ecosystem's attention. `mio` itself is maintained and stable (Tokio depends on it) but is deliberately just the reactor primitive — `Poll`, `Events`, `Token` — with no protocol layer. `polling` (from the smol project) is similar. `may` exists and provides stackful coroutines, but that is a different programming model (Go-style goroutines) with the same yield-point local-reasoning caveat as `async`.

The pragmatic situation: writing the missing layer for one project takes roughly 200 lines (framer ~40, connection table ~30, runtime glue ~50, plus protocol-specific code). It is feasible to roll your own and many projects effectively have. The cost is that no one's version became canonical, so each project reinvents it.

A published crate — something like `reactor` or `synio` — exposing the Twisted-shaped abstractions on top of `mio` would fill a real gap. It would not appeal to the `async` mainstream, but the constituency that wants single-threaded local reasoning (trading systems, game servers, embedded-adjacent work, anyone porting from Twisted) is real and currently underserved. The work is bounded — perhaps 800 lines of library code plus documentation — and the API surface is well-understood from twenty years of Twisted's existence.


Appendix: nine test scenarios, two models

The following nine scenarios fully specify the behaviour of `LengthPrefixed` for one frame's worth of input. In the synchronous model each is a five-to-ten-line `#[test]` function that constructs the framer, calls `on_bytes` with chosen slices, and asserts on collected frames and on internal state (`len_filled`, `body.len()`, `body_needed`).

| # | Scenario | Synchronous test | Tokio test |
|---|----------|------------------|------------|
| 1 | All bytes in one read | trivial | requires mock `AsyncRead`, runtime |
| 2 | Length split 2+2, body whole | direct; assert `len_filled` after first call | cannot observe intermediate state |
| 3 | Length whole, body split 1+rest | direct; assert `body.len()` mid-stream | cannot observe intermediate state |
| 4 | Length split 1+1+1+1 | direct | mock must yield four times |
| 5 | Two frames in one read | direct; assert two frames collected | runtime-dependent |
| 6 | Frame boundary mid-buffer | direct | requires careful mock |
| 7 | Zero-length body | direct; assert frame fires immediately | requires runtime |
| 8 | Bytes after frame remain buffered (next frame's length) | direct; assert state across calls | requires multi-step mock |
| 9 | Empty `on_bytes` call is a no-op | direct | not naturally expressible |

The synchronous tests run in microseconds, are fully deterministic, observe internal state directly, and require no runtime or mock. The Tokio tests require a mock `AsyncRead` whose chunking schedule must be carefully designed for each scenario, depend on the runtime's scheduling for ordering, cannot observe partial-read state because that state lives in stack locals inside the pump's future, and add roughly 15–30 lines of harness per case. Scenarios 2, 3, and 8 — the ones that assert on intermediate state — cannot be written for the Tokio version at all without changing the implementation to expose state it would not naturally have.


User

5/2/2026, 11:48:43 AM

Correct me if I'm wrong: Javascript today uses this model? Excluding the worker-threads option. And Javascript uses what in Twisted Python were "deferreds" and in Javascript are "promises", for the purpose of easing ergonomics. (Historical footnote: it's no accident, the Javascript people learned it from Twisted Python, and/or from the E programming language by Mark S. Miller and Marc Stiegler, and Twisted Python learned it from the E programming language.)


Assistant

5/2/2026, 11:48:43 AM

You're right on both the technical claim and the history.

The technical claim

JavaScript (excluding Web Workers and Node's worker_threads) is a single-threaded event loop with non-blocking I/O, which is the same model as Twisted and the same model as the synchronous reactor pattern we've been discussing. Every JavaScript function runs to completion before any other JavaScript runs — the "run-to-completion" guarantee. I/O completions, timers, and `Promise` resolutions enqueue callbacks that the event loop dispatches one at a time, at the top of the loop, never in the middle of another function. This is precisely the local-reasoning property: between any two synchronous statements in a JavaScript function, nothing else in the program ran.

`Promise` (and `async`/`await`, which is sugar over `Promise`) is the ergonomic layer on top, exactly analogous to Twisted's `Deferred`. The underlying model is still callbacks-on-a-reactor; promises are a composable representation of "a value that will be delivered to a callback later," with chaining (`.then`) and error propagation built in so you do not have to hand-write the callback pyramid.

The one caveat worth stating precisely: `await` in JavaScript is a yield point in the same sense as Rust's `.await`, so the local-reasoning property weakens at `await` boundaries within an `async` function — other queued tasks can run there. But the underlying model is still single-threaded and run-to-completion between yield points, which is stronger than what you get with OS threads, and the yield points are syntactically explicit. It is the same trade-off Twisted made when it added `inlineCallbacks` and the same one Rust made with `async`/`await`.

The history

The lineage is well-documented and goes roughly: the term and concept of "promise" for concurrent computation traces to Daniel P. Friedman and David Wise's 1976 work and Henry Baker and Carl Hewitt's 1977 "futures," but the modern form — promise pipelining, the resolver/promise split, the chaining semantics — was developed by Mark S. Miller, Dean Tribble, and Rob Jellinghaus in the Project Xanadu work around 1989, and then refined substantially in the E programming language by Miller and others through the late 1990s and 2000s. E established the vocabulary and semantics that everything downstream inherited.

Twisted's `Deferred` (Glyph Lefkowitz, around 2002) was directly inspired by E's promises. The Python community got the abstraction from E via Twisted.

JavaScript got it next, but through Twisted rather than directly from E. The Dojo Toolkit added `dojo.Deferred` in 2007, explicitly modelled on Twisted's `Deferred` (Alex Russell cited Twisted and MochiKit, and MochiKit's author Bob Ippolito has confirmed he ported the design from Twisted). Kris Zyp's 2009 CommonJS Promises/A proposal cited `dojo.Deferred` as its influence. Promises/A+ and eventually the ES2015 `Promise` standard descend from that line.

So the chain is: E (Miller, Tribble, Stiegler, et al.) → Twisted (Lefkowitz) → Dojo (Russell, via Ippolito's MochiKit) → CommonJS Promises/A (Zyp) → Promises/A+ → ES2015 `Promise` → `async`/`await`. Mark Miller himself was later directly involved in the JavaScript standardisation work (he was on TC39 and contributed to the promise specification), so the influence is not just genealogical — he showed up in person to make sure JavaScript got it right.

The Rust `async`/`await` design absorbed lessons from the JavaScript and C# experience but chose a different runtime model (zero-cost futures, polling, no built-in executor), so the family resemblance is at the syntax-and-ergonomics level rather than the underlying mechanism. But the conceptual lineage — "represent a not-yet-available value as a first-class object you can chain operations onto" — runs through all of them and originates with E.

Your footnote is correct and, if anything, understates how direct the influence was: it was not parallel evolution or convergent design, it was Mark Miller's ideas propagating through specific people who cited their sources.


Added by me (Zooko) manually afterward:

But what if you need to use multiple CPUs in the same process! The above works only for a single thread!

You can have multiple threads, with exactly one reactor per thread and exactly one thread per reactor, and you will retain all of the safety and testability benefits shown above. :-)

About

A framework for a safer and more testable concurrency model, in Rust.
