Patchwork Notes

A Local-First Task Framework

Alex Warth

At the lab, we use Patchwork for pretty much everything: it’s where we keep our weekly planning boards, write about our research, and host most of our prototypes. All of these things are represented as Automerge documents, so they enjoy the many benefits of local-first software, including built-in support for collaboration.

Even a throw-away code sketch of a boxes-and-arrows interface for a new programming language is multi-user right out of the box. If I send you the URL of my prototype and you make an arrow point to a different box, I’ll see that change right away on my computer. This kind of collaboration is enabled by shared state. Now that Patchwork has grown to have a substantial number of users, we’re exploring a different kind: collaboration on computation itself.

This is the first in a series of lab notes in which I’ll outline work I’ve been doing on a local-first task framework. The goal of this framework is to enable the burden of computation to be distributed among users’ computers — including servers in the cloud. Because it’s local-first, the framework doesn’t stop working when you’re offline: you can still create tasks and they’ll get done, but only by task workers that are running on your computer. And when you get back online, the effects and results of those tasks will be visible to everyone else.
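To make this concrete, here is one way a task queue like this could be modeled as plain shared state. The type and field names below are illustrative assumptions, not Patchwork’s actual schema (the next note covers the real implementation); and where Patchwork would use an Automerge document, this sketch uses an ordinary object so it stays self-contained.

```typescript
// A hypothetical task queue modeled as shared state. In Patchwork the
// TaskDoc would live in an Automerge document; any peer's edits to it
// would merge and sync automatically.

type TaskStatus = "pending" | "claimed" | "done";

interface Task {
  id: string;
  kind: string;       // e.g. "generate-embedding" — illustrative name
  input: unknown;     // task-specific payload
  status: TaskStatus;
  claimedBy?: string; // worker id, set when a worker picks the task up
  result?: unknown;   // written back when the task completes
}

interface TaskDoc {
  tasks: Task[];
}

// A worker running on any peer (a laptop or a cloud server) claims the
// next pending task. Because the state is local-first, this also works
// offline: only workers on this machine see the claim until we sync.
function claimNextTask(doc: TaskDoc, workerId: string): Task | undefined {
  const task = doc.tasks.find((t) => t.status === "pending");
  if (task) {
    task.status = "claimed";
    task.claimedBy = workerId;
  }
  return task;
}

function completeTask(doc: TaskDoc, taskId: string, result: unknown): void {
  const task = doc.tasks.find((t) => t.id === taskId);
  if (task && task.status === "claimed") {
    task.status = "done";
    task.result = result;
  }
}

// Usage: a local worker drains whatever tasks it can see.
const doc: TaskDoc = {
  tasks: [
    { id: "t1", kind: "generate-embedding", input: "some-doc", status: "pending" },
  ],
};
const t = claimNextTask(doc, "laptop-worker");
if (t) completeTask(doc, t.id, [0.1, 0.2, 0.3]);
console.log(doc.tasks[0].status); // "done"
```

When the queue is a real Automerge document, the interesting questions are about concurrency — what happens when two offline workers claim the same task — which is exactly the kind of detail the implementation note will address.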

We’ve already started to put this framework to good use. For example, we’re using tasks to aggregate the edit histories of Patchwork documents — not a heavy lift computationally, but often enough to break our “next frame or your money back” responsiveness goal. As another example, we’re using tasks to generate embeddings that capture the meanings of documents, to enable semantic search. This can take a substantial amount of time even for a single document, so spreading that work across many users’ computers can dramatically cut the time required to process a large folder.
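The embedding example can be sketched in miniature: a folder of documents fans out into one independent task per document, so any number of workers can chip away at the folder in parallel. Everything here is illustrative — the function names and the stand-in embedding are assumptions, not Patchwork’s API.

```typescript
// Hypothetical fan-out: turning a folder of documents into independent
// embedding tasks that any available worker can pick up.

interface EmbeddingTask {
  docUrl: string;
  status: "pending" | "done";
  embedding?: number[];
}

function fanOut(docUrls: string[]): EmbeddingTask[] {
  return docUrls.map((docUrl) => ({ docUrl, status: "pending" }));
}

// A cheap stand-in for real (slow) embedding generation.
function fakeEmbed(docUrl: string): number[] {
  return [docUrl.length, 0, 1];
}

// Each worker processes its share of the tasks. With k workers running
// in parallel, wall-clock time for the folder drops roughly by a
// factor of k — which is the point of distributing the work.
function runWorker(tasks: EmbeddingTask[], workerIndex: number, workerCount: number): void {
  tasks.forEach((task, i) => {
    if (i % workerCount === workerIndex && task.status === "pending") {
      task.embedding = fakeEmbed(task.docUrl);
      task.status = "done";
    }
  });
}

const tasks = fanOut(["automerge:abc", "automerge:def", "automerge:ghi"]);
runWorker(tasks, 0, 2); // worker 0 of 2
runWorker(tasks, 1, 2); // worker 1 of 2
console.log(tasks.every((task) => task.status === "done")); // true
```

In the real system the workers wouldn’t partition tasks by index up front; they’d claim pending tasks opportunistically, which is what makes the scheme robust to peers coming and going.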

We also plan to use tasks in Ambsheet v2, where distributing the generation of sample scenarios across multiple computers could make a user’s model “come into focus” much faster — but more on that in a future note.

The next note in this series will get into the implementation of the framework.