- title: Portemine
- dated: Q1 2026
The goal of this study is to explore the use of propagator networks as the computational model for Playbook. The question of a computational model has been a long-standing one for the ink track, and different projects have explored different options.
Around the time of Habitat, we came to the conclusion that there is no single programming model that would be a great fit for all the different use cases we have in mind, and that we would need to support multiple models. I don’t think we’ve ever found a satisfying answer to this question, and it’s something we’ve been circling around ever since.
My hunch is that propagator networks might be a possible answer for a few reasons.
- There’s a simple visual mapping for propagator networks, which means we can construct programs in user space without needing to fall back on text.
- It seems general enough to act as a kind of “glue” that would allow us to combine very different kinds of semantics, like reactive dataflow, bidirectional flow, constraint solving, rewriting, or even more “imperative” styles of programming.
- I imagine this would act like a kind of “assembly language” for Playbook, that we could build other “visual languages” on top of in user space. For example, I imagine TrapDoor-like semantics could be implemented using this under the hood.
- I suspect there’s an angle here that would trivially allow us to support distributed execution across different devices: a kind of “local-first VM”.
19 Feb 2026
Got a simple playground/editor set up that I’ll be using to explore different ideas. The first thing I tried is to recreate a simple Crosscut-style diagram. I realised that we can simplify the implementation by getting rid of the implicit bi-directionality we have in Crosscut, and instead making all flow explicit. This gives us basically spreadsheet-style one-way reactive dataflow. But, unlike spreadsheets, the user can construct cycles to get bi-directionality when they need it.
Here’s an example of a simple Celsius-to-Fahrenheit temperature converter in this style:
Interactive Demo (shift+drag to scrub numbers)
This system explicitly models time as a global monotonically increasing timestamp. Each time the user scrubs a number, the global timer goes up. Whenever a propagator updates a cell, it also tags it with the current timestamp (shown as the small grey number in the corner of the cell). This is how we can support cycles without getting into infinite loops. A propagator compares the timestamps of all the cells it’s connected to, and only if one cell is newer than the others does it process the update and send out new updates with the newer timestamp. This means cells can never be updated more than once per tick.
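Here’s a minimal sketch of that timestamp rule (all names are hypothetical, and the real system wires this up as a graph rather than direct function calls): a propagator fires only when its source cell is strictly newer than its destination, and it stamps the destination with the source’s tick, so the cycle dies out after one round trip.

```python
clock = 0  # global monotonically increasing timestamp

class Cell:
    def __init__(self, value=0.0, stamp=0):
        self.value = value
        self.stamp = stamp

def scrub(cell, value):
    """User interaction: bump the global clock and stamp the cell."""
    global clock
    clock += 1
    cell.value, cell.stamp = value, clock

def propagate(src, dst, fn):
    """Fire only if the source cell is strictly newer than the destination."""
    if src.stamp > dst.stamp:
        dst.value, dst.stamp = fn(src.value), src.stamp

# Celsius <-> Fahrenheit as an explicit cycle:
c, f = Cell(), Cell()
scrub(c, 100.0)
propagate(c, f, lambda v: v * 9 / 5 + 32)    # c is newer, so f updates to 212
propagate(f, c, lambda v: (v - 32) * 5 / 9)  # f now has the same stamp: no-op
```

The second `propagate` is the interesting part: because `f` was stamped with the same tick as `c`, the reverse edge does nothing, and the cycle terminates.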
This approach allows the user to construct flows that behave very differently in different directions (which I imagine can be both very useful and a massive footgun).
24 Feb 2026
One thing that we never managed to figure out in Crosscut is how to constrain values to a range. In normal dataflow, you can do comparisons (is x larger than y?) but you can’t really enforce constraints (x must be larger than y). But, because propagator networks aren’t limited to basic dataflow, this is actually doable. You can have a propagator that looks at the value of a cell and, if it’s out of bounds, updates it to be within the bounds.
You can think of these propagators as ‘pushing back’ on updates. If there is a path from A to B, and a path from B to A, then you can get a kind of ‘negotiation’ between cells.
Here’s the Celsius-to-Fahrenheit converter, but now with the constraint that the temperature in Celsius must be between 0 and 100 degrees. If you try to scrub the temperature in Fahrenheit, the constraints will push back if the corresponding Celsius value would go out of bounds.
In order for this ‘pushing back’ to work with the timestamp system, I needed to update its logic a bit. Now, a timestamp has a major and a minor component. The major component is the global timer, and the minor component is a local counter that resets every time the global timer ticks. Interacting with a cell increments the global timer, but if a propagator needs to push back on an update, it can increment the minor counter to send out a new update without advancing the global timer. This allows the ‘negotiation’ to happen within a single tick, while still preventing cycles.
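A sketch of the two-part timestamp (names hypothetical): representing stamps as `(major, minor)` tuples, which compare lexicographically, lets a clamping propagator rewrite an out-of-bounds value while bumping only the minor counter, so the corrected value still flows back through the network within the same tick.

```python
major = 0  # the global timer

class Cell:
    def __init__(self, value=0.0):
        self.value = value
        self.stamp = (0, 0)  # (major, minor); tuples compare lexicographically

def scrub(cell, value):
    """User interaction advances the global timer."""
    global major
    major += 1
    cell.value, cell.stamp = value, (major, 0)

def propagate(src, dst, fn):
    if src.stamp > dst.stamp:
        dst.value, dst.stamp = fn(src.value), src.stamp

def clamp(cell, lo, hi):
    """Push back: fix an out-of-bounds value, advancing only the minor counter."""
    if not lo <= cell.value <= hi:
        m, n = cell.stamp
        cell.value = min(max(cell.value, lo), hi)
        cell.stamp = (m, n + 1)

c, f = Cell(), Cell()
scrub(f, 300.0)                                # 300 °F ~ 148.9 °C: out of bounds
propagate(f, c, lambda v: (v - 32) * 5 / 9)
clamp(c, 0.0, 100.0)                           # c pushed back to 100 °C, stamp (1, 1)
propagate(c, f, lambda v: v * 9 / 5 + 32)      # the newer stamp flows back to f
```

Because `(1, 1) > (1, 0)`, the clamped value wins the negotiation and `f` ends up at 212 °F, all within a single major tick.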
25 Feb 2026
The importance of “simulation” as a use case is somewhat contentious within the group. Nevertheless, it seems to me that we need a story for how this fits into the computational model. In my opinion, we should prioritise a declarative feel, which, I think, is something we get from the propagator network approach. But, at the same time, there are certain kinds of things that are more naturally expressed as an evolution of state over time (simulations).
Here’s an initial sketch of how we might support this. The idea is that we have a special kind of propagator called Future: a propagator that takes the value at its input and pushes it to its output, but only when it receives a tick. This is a way of saying “the value of this cell in the future should be the value of this other cell now”.
By combining an Add propagator with a Future propagator, you can build a simple counter. You can even combine this with the constraints from earlier to count to a hundred.
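As a rough sketch of that counter (names hypothetical; the real system wires these up as separate cells and propagators, here compressed into one class): the Future holds its input until a tick arrives, the Add propagator proposes the incremented value, and a clamp keeps the count at the bound.

```python
class Future:
    def __init__(self, initial=0):
        self.input = initial   # "the value this cell should have next tick"
        self.output = initial  # the cell's current value

    def tick(self):
        """On a clock tick, push the precomputed future value to the output."""
        self.output = self.input

def add_one(fut):
    """Add propagator: propose current value + 1 as the future value."""
    fut.input = fut.output + 1

def clamp_to(fut, hi):
    """Constraint from the earlier section: keep the proposed value in bounds."""
    fut.input = min(fut.input, hi)

counter = Future(0)
for _ in range(150):
    add_one(counter)
    clamp_to(counter, 100)
    counter.tick()

# counter.output is now 100: it counted up once per tick, then stuck at the bound
```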
I added an explicit notion of time by adding a play/pause button, a speed control, and a timer cell that shows the current time. When you hit play, the timer starts ticking. Note that the timer cell has a different sense of time than the global timestamps I talked about in the previous posts. This means the user can scrub a cell between individual clock ticks, and the system will still update responsively, even if the timer is paused or running very slowly. In other words, the system treats the clock tick and the user scrubbing a number as equivalent updates.
Some notes:
- I think we can trivially extend the Future idea to support more triggers than just a timer. For example, we could have a MouseDown cell that becomes true when the mouse is down, and false when it’s up. If you plug this into a Future propagator instead of the timer, then you’re basically saying: “The value of this cell should be the value of this other cell when the mouse goes down”. This is a very simple way of supporting event-driven programming. Orion Reed’s idea of Scoped Propagators achieves a similar effect, by putting all update logic into a single propagator and adding the notion of a “scope”. My approach is simpler, but it does so by precomputing future values inside the current timestep. This is kinda weird from a traditional programming pov, but I quite like it, because it makes concrete what would otherwise be hidden.
- I think we should be able to “rewind time” in much the same way we did in Crosscut, or in Alex’s worlds explorations. One question that I’m interested in is how to deal with multiple timelines. I could imagine wanting to scope a simulation to a page, so you could have multiple simulations running in parallel on different pages. But that implies you could probably also nest simulations inside each other? How would that work?
- This approach gives us the equivalent of Discrete FRP. Another thing I’d like to explore is adding support for continuous time semantics, like the kind explained in Functional Reactive Animation.
26 Feb 2026
Added boolean expressions to the system. This is pretty straightforward, and you can build bi-directionality in much the same way as with the arithmetic operations.
I also added basic comparators (>, <, =), which let us interop between boolean expressions and numbers. The inverse of the comparators can actually be done using the clamping constraints I implemented earlier:
This circuit implements x < y = true in a bi-directional way. If you update false->true, it will clamp x to be less than 10 (we need to do some math to decide what that means; in this case: 10-1=9). If you update true->false, it will clamp x to be greater than or equal to 10.
I also needed to update the clamping propagators to take a boolean input to determine whether the constraint is active or not.
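A minimal sketch of running the comparator in reverse (names hypothetical): writing to the boolean output clamps the numeric input so that the comparison holds, and the boolean `active` flag corresponds to the enable input just mentioned. The `y - 1` case is the “some math” from above.

```python
def enforce_less_than(x, y, should_hold, active=True):
    """Inverse comparator: clamp integer x so that (x < y) == should_hold."""
    if not active:
        return x          # constraint disabled: leave the value alone
    if should_hold and not x < y:
        return y - 1      # largest integer strictly below y
    if not should_hold and x < y:
        return y          # smallest integer making x < y false
    return x              # comparison already holds: nothing to push back on

# Flipping the boolean false -> true pushes x from 15 down to 9 (10 - 1):
a = enforce_less_than(15, 10, should_hold=True)
# Flipping true -> false pushes x from 3 up to 10:
b = enforce_less_than(3, 10, should_hold=False)
# A value already in bounds passes through untouched:
c = enforce_less_than(5, 10, should_hold=True)
```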
While trying to get this to work, I noticed the subticking logic I had earlier wasn’t quite right. The clamping propagators were sometimes firing before any other propagators had a chance to update, which would cause a kind of ‘short circuit’ behaviour. The fix is this: the clamping propagators are now enqueued in a deferred way. Once the regular queue is empty, we subtick and process all the deferred propagators (and any other propagators that update as a result).
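The scheduling fix can be sketched like this (names hypothetical): clamps go on a separate deferred queue, and only once the regular queue drains do we subtick and let them fire; anything they wake up would re-enter the regular queue.

```python
from collections import deque

regular, deferred = deque(), deque()

def schedule(prop, is_clamp=False):
    """Clamping propagators are enqueued in a deferred way."""
    (deferred if is_clamp else regular).append(prop)

def run():
    fired = []
    while regular or deferred:
        while regular:                # drain every ordinary propagator first
            fired.append(regular.popleft()())
        if deferred:                  # subtick: only now may a clamp fire
            fired.append(deferred.popleft()())
    return fired

schedule(lambda: "clamp", is_clamp=True)  # enqueued first...
schedule(lambda: "add")
schedule(lambda: "compare")
fired = run()
# ...but the clamp still fires last, after the regular queue has emptied
```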
We can now combine this with the Gate propagator to make a ping pong counter that bounces between 0 and 100:
2 March 2026
Last week I explored using basic propagators to build bi-directional, Crosscut-style primitives in user space. I also explored some simple constraints, as well as a simple way of modelling time. This is surprisingly expressive, but it isn’t quite as powerful as a full solver, like the ones we used in Inkling or Untangle.
One of the main insights I got from Alexey Radul’s PhD thesis is that you shouldn’t think of cells as just having a value; instead, cells should always accumulate information. This is a very interesting idea, because it allows multiple propagators to contribute conflicting or incomplete information to the same cell, and the cell can then refine its value over time as it receives more information. If you squint at the implementation I have so far, you could think of the timestamp system as a very crude version of this. Each timestep, the cell just takes the first update it receives and ignores the rest. But we can do much better than that.
One way of solving constraints in physics engines is by having each constraint project variables to valid values. If we simply use whatever the latest value of the cell is as the input for the next projection, this is called the Gauss-Seidel method, and is, as far as I can tell, equivalent to what we’ve been calling relaxation in other studies. This works if the constraints converge, but if they don’t, you can get into oscillations. One way to solve this problem is by using some kind of gradient descent approach, where the propagators only update the cell by a small amount in the direction of the valid value. But here I’m interested in an alternative approach, which is to have the cell itself keep track of all the different values being proposed by the propagators, and then average them out to produce a new value. This is called Jacobi-style iteration, and it has some nice properties. It’s order independent, which is what we want in a propagator network. It also ‘dampens out’ when constraints are incompatible, which can be a nice property in a user-facing system, because it prevents oscillations and divergence.
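A toy comparison of the two iteration styles (names hypothetical), using two deliberately incompatible constraints, “x must be 0” and “x must be 10”: Gauss-Seidel projects in sequence, reading the latest value, so the value ping-pongs between the two targets; Jacobi projects both from the same old value and averages, so it damps to a compromise.

```python
def gauss_seidel(x0, rounds):
    """Each constraint projects in turn, reading the latest value of x."""
    x, trace = x0, []
    for _ in range(rounds):
        for project in (lambda _: 0.0, lambda _: 10.0):
            x = project(x)
            trace.append(x)
    return trace

def jacobi(x0, rounds):
    """Both constraints project from the same old value; the cell averages."""
    x = x0
    for _ in range(rounds):
        proposals = [0.0, 10.0]  # projections of the old x under each constraint
        x = sum(proposals) / len(proposals)
    return x

gs = gauss_seidel(3.0, 3)  # oscillates: [0.0, 10.0, 0.0, 10.0, 0.0, 10.0]
j = jacobi(3.0, 3)         # damps to the compromise value 5.0
```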
In order to implement this in our propagator network, we need to change how cells work. Instead of just keeping track of a single value, we also need to keep track of a list of proposed “guesses” from each propagator. Whenever a propagator updates a cell, instead of directly updating the cell’s value, it adds its proposed value to the list of guesses.
We also need a way of implementing the averaging logic, which we can do with a special kind of propagator that’s attached to each cell. This propagator waits until each connected propagator has proposed a value. Once this happens, it averages the guesses and updates the cell’s value. It then checks if the new value is different enough from the previous value to warrant another round of updates. If the value needs more refinement, it clears out all the guesses, which in turn triggers all the constraints to propose new guesses based on the new value of the cell. This process continues until the values converge, or we hit a maximum number of iterations.
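The cell-plus-averager mechanics might look roughly like this (a sketch, with hypothetical names): the cell collects guesses, and the averager waits for all of them, installs the mean, clears the guesses, and reports whether another round is needed.

```python
class Cell:
    def __init__(self, value, n_propagators):
        self.value = value
        self.n_propagators = n_propagators  # how many propagators feed this cell
        self.guesses = []

    def propose(self, guess):
        """A propagator contributes a guess instead of writing the value directly."""
        self.guesses.append(guess)

    def average(self, eps=1e-6):
        """The hidden Averager: returns True if another round is needed."""
        if len(self.guesses) < self.n_propagators:
            return False  # wait until every propagator has proposed
        new = sum(self.guesses) / len(self.guesses)
        self.guesses = []  # clear guesses so the constraints re-propose
        changed = abs(new - self.value) > eps
        self.value = new
        return changed

# Two incompatible clamps on one cell: "x must be >= 10" and "x must be <= 4".
x = Cell(3.0, n_propagators=2)
for _ in range(100):
    x.propose(max(x.value, 10.0))  # projection onto "at least 10"
    x.propose(min(x.value, 4.0))   # projection onto "at most 4"
    if not x.average():
        break
# the incompatible bounds dampen out: x settles at 7.0 instead of oscillating
```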
Here’s a demo using this technique to solve distance constraints between points. Each point is represented as a cell. Each cell also has a hidden ‘Averager’ propagator attached to it. Each constraint is implemented as a propagator that proposes new positions for the points that satisfy the distance constraint.
I think this is pretty exciting, because it shows that we can implement this kind of constraint solving entirely within the framework of propagator networks, potentially entirely in user space, without needing to fall back on some kind of external system.