title: Etui Ink Model Studies
dated: 2024-2025
2024-07-29 (From playbook daily logs)
- We should talk about how we want to think about strokes. Here’s a suggestion that might maintain the feel we want, but with a more efficient & compact representation.
- We only persist information we get from the pen, so x, y, pressure, tilt etc.
- We store them as strokes: arrays of points. Each stroke has a brush associated with it.
- A brush is a pure function from pen input to stroke geometry.
- We can do interpolation of points inside the brush as well.
- If we make a subselection, we might have to split up strokes.
- We can do this either by splitting a stroke into two new strokes, or by letting a stroke have multiple brushes, each with an associated range.
- This could still allow us to do things like brush overloading with flux.
- Quick code sketch of this idea: red dots are (simplified) data points that we persist. Brushes still use an inklet-style approach for rendering.
- This (obviously) puts less pressure on Automerge, which would make it sync a lot faster.
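Roughly, the shape of that sketch in TypeScript (hypothetical names like `PenSample` and `Brush`, not the actual Playbook code):

```ts
// Only raw pen input is persisted; everything visual is derived from it.
type PenSample = { x: number; y: number; pressure: number; tiltX: number; tiltY: number };

// A persisted stroke: the samples plus a reference to the brush that drew it.
type Stroke = { id: string; brush: string; samples: PenSample[] };

// A brush is a pure function from pen input to stroke geometry.
// Interpolation between samples happens inside the brush.
type Inklet = { x: number; y: number; radius: number };
type Brush = (samples: PenSample[]) => Inklet[];

// Example: a trivial brush that maps pressure to inklet size.
const roundBrush: Brush = (samples) =>
  samples.map((s) => ({ x: s.x, y: s.y, radius: 1 + 4 * s.pressure }));
```

Because only the samples are stored, re-rendering with a different brush is just a different pure function over the same data, and the synced document stays small.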
2024-12-10 (From playbook reflection doc)
- The Ink model is the material that has been developed furthest
- I’m really happy with some of the feel we’ve gotten.
- I think we settled on Inklets too early as the base primitive. I already felt this way all the way back in July.
- We eventually decided to stick with Inklets, for reasons I don’t quite remember, mostly I think in order to make progress on other parts of the system.
- I would like to do a serious redesign soon, as I think this will have a lot of impact on all other parts of the system. It might seem like we’re spending too much time on this, but I think it’s actually much more important than we’re giving it credit for.
2025-01-10
I’m going to start working on a study called Etui to further develop the ink model in Playbook.
The ink model that is currently in Playbook is based on the idea of Inklets; small particle-like blobs of ink. As an approach, it has some nice properties:
- The data model is simple.
- It enables us to have rich brush styles, emulating things like pencils and markers.
- It gives us a very flat data model, which aligns well with some of our design phase ideas (link to figjam board?) about ink and setting properties on parts of strokes.
- It’s geometrically simple, which means the math for things like containment checks is very straightforward.
But, it also has some significant downsides:
- In order to get reasonable anti-aliasing, we need a lot of inklets.
- It’s data-intensive, which means storage is costly, making it unsuitable for Automerge.
- While the geometry is simple, the tradeoff is that we need a lot of compute.
- It’s not ideal for scaling or ink deformation.
Additionally, some aspects of the Inklet model remain underdeveloped. Proper affordances for setting and getting properties on ink are completely missing. Partial stroke selection and property-based selection refinement are also absent.
Etui is an attempt at an ink model that keeps the simple particle-like qualities of inklets, while trying to solve some of their pitfalls by doing two things:
- Using a richer data structure to model ink that reduces the amount of data being stored and is more amenable to deformation.
- Abstracting away most of that complexity by introducing a simple interface that gives the user constrained, well-behaved ways of setting and getting properties on ink (a rough sketch follows below).
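As a first stab at what that interface could feel like (purely a sketch; the names and signatures are mine, not a settled API):

```ts
// The internal representation can be rich; the user only ever sees something like this.
interface InkSelection {
  // Reading a property may yield several values if the selected ink is mixed.
  get(property: "color" | "weight" | "brush"): string[];
  // Writing a property applies it to everything in the selection.
  set(property: "color" | "weight" | "brush", value: string): void;
}

interface Ink {
  // Select whatever ink falls inside a lasso, regardless of stroke boundaries.
  select(lasso: { x: number; y: number }[]): InkSelection;
}
```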
2025-01-11
On-the-fly Stroke simplification & Inklet generation test. The green area is returning substroke geometry.

2025-01-20 Ink isn’t a thing, it’s stuff
One of the great properties of pen and paper is how informal it is. Drawing is more like manipulating a material than handling a thing. When you’re making a sketch, or even when you’re writing some text, it doesn’t usually make sense to consider each pen stroke as an individual object. Rather, as ink flows onto the page, the individual strokes blend together to form a shape. Indeed, when sketching, it isn’t uncommon to visually suggest a single line by sketching multiple strokes.
In Etui, my idea is that ink is surfaced to the user as stuff. Under the hood, there is quite a rich representation, but this is entirely abstracted away. For example, we internally maintain a model of each individual stroke, but you can trivially select any piece of ink on the screen and you get exactly the ink you indicated. This makes it partially behave like pixels, but still allows us to do things like getting & setting properties or deformations.
2025-01-22 Limit properties & choices
In the Etui model, ink has a single property: “Style”.
We want our environment to be distraction-free, and to encourage the user to stay sketchy. Twiddling values is probably one of the most pernicious distractions. Pen & paper works best if you limit the choice of pens and colors: it’s preferable to simply have a red and a blue pen rather than a pen tool + a color picker.
There is a tension though, because we still want our system to be malleable. So for this reason, a style breaks down into three properties: Color, Weight & Brush.
I’m choosing not to surface these properties as continuous values. Instead, it’s better to give the user a limited number of options. Again, we want to encourage low fidelity, so limiting the colors frees you from getting distracted by trying to pick the perfect color.

Another advantage of limiting choices is that it encourages uniform values; one thing that happens a lot in software with a color picker is that (unless you’re very careful) you end up with 15 ever so slightly different shades of red. This is not to say that the user shouldn’t be able to add a new color, but the point here is to not make it the default.
We also shouldn’t represent the values as numbers. Representing things as numbers is programmer-brain. We should bias towards representing things using domain concepts instead. Of course, the user might want to use a numeric value and map it to a color. Even in that case, I would argue surfacing RGB (or HSL or whatever) values is a bad default. Instead we should support making a sensible mapping from numbers to colors as a built-in primitive.
Color and Weight are relatively straightforward; Brush is a slightly more complex property that I’ll go into more later.
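Written down as types, it might look something like this (the concrete option sets are placeholders, and Brush is flattened to a plain name here even though it’s richer than that):

```ts
// Style is the only property on ink, and it breaks down into three parts.
// Each part is a small, named set of choices rather than a continuous value.
type InkColor = "black" | "red" | "blue";
type InkWeight = "fine" | "medium" | "bold";
type BrushName = "pen" | "pencil" | "marker";

type Style = { color: InkColor; weight: InkWeight; brush: BrushName };
```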
2025-01-24 Multi-property selections
It’s kind of unavoidable that the user will select ink with different properties. Since selection is the main way of getting & setting those properties, we need a sensible way of handling that.
Two BAD ways:
- Only allow selections of ink with the same properties.
- Make properties write-only if they’re not homogeneous. This is the most common pattern in other tools.
Instead, we should allow the user to access all the possible properties. This gives us a few benefits:
- You can still update properties, even if strokes are overlapping
- You can easily refine the selection based on a property
- You can still override all values, and squash them down to one if you want to.
INSERT VIDEO OF MULTISELECTION
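A sketch of how a mixed selection could behave (illustrative names, not the real Etui code): reading a property surfaces every distinct value in the selection, which is what makes refining and squashing cheap.

```ts
type Style = { color: string; weight: string; brush: string };
type SelectedStroke = { id: string; style: Style };

class Selection {
  constructor(private strokes: SelectedStroke[]) {}

  // Reading a property returns all distinct values, instead of "mixed"/nothing.
  get<K extends keyof Style>(key: K): Style[K][] {
    return [...new Set(this.strokes.map((s) => s.style[key]))];
  }

  // Refine the selection to the strokes matching one value of a property.
  refine<K extends keyof Style>(key: K, value: Style[K]): Selection {
    return new Selection(this.strokes.filter((s) => s.style[key] === value));
  }

  // Override: squash a property down to a single value for everything selected.
  set<K extends keyof Style>(key: K, value: Style[K]): void {
    for (const s of this.strokes) s.style[key] = value;
  }
}
```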
Making Etui strokes local-first
I’ve been thinking a bit about how to make the Etui model local-first. We can just put our data into Automerge, but:
- Automerge ends up spending a lot of effort trying to maintain invariants that we don’t care about.
- It doesn’t maintain the invariants that we do care about.
Strokes afford a few operations (sketched as a tagged union after this list):
- Create & Delete (A stroke with a given Id can only be created and deleted once)
- Set a property, like color, weight etc (these can be last write wins)
- Split
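Or, written out roughly (names are illustrative):

```ts
type StrokeOp =
  | { kind: "create"; strokeId: string; points: { x: number; y: number }[] } // at most once per id
  | { kind: "delete"; strokeId: string }                                     // at most once per id
  | { kind: "set"; strokeId: string; key: "color" | "weight" | "brush"; value: string } // last write wins
  | { kind: "split"; strokeId: string; offset: number };
```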
Naively, you can model stroke splitting as removing the stroke and creating two new ones. However, this can cause an inconsistent state in a multiplayer context: if two people split a stroke concurrently, you’ll end up with four new strokes.
Here’s a relatively simple CRDT I came up with that would work and be super performant.
We model strokes by using two separate concepts:
- StrokeData: a list of x,y positions that doesn’t ever change after creation (deformations are entirely derived)
- Rendered StrokeSlices, which refer to segments of that data
Ink that’s rendered to the screen references StrokeData and applies a series of transformations (including deformation) to it to figure out what should be shown on screen. This is done as a pure function; there’s no mutation of the original data.
We can identify a StrokeSlice uniquely by using a StrokeDataId, as well as a start-offset and an end-offset.
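In types, roughly (field names are mine, and offsets are treated as point indices here for simplicity):

```ts
type Point = { x: number; y: number };

// Immutable geometry captured at draw time; never mutated afterwards.
type StrokeData = {
  id: string; // StrokeDataId
  points: Point[];
};

// A slice is just a StrokeDataId plus a start- and end-offset into that geometry.
type StrokeSlice = {
  dataId: string;
  start: number; // start-offset
  end: number;   // end-offset
};

// Rendering is a pure function over the immutable data: take the referenced
// segment, apply transforms/deformations, and return fresh geometry.
type Transform = (points: Point[]) => Point[];

function render(data: StrokeData, slice: StrokeSlice, transforms: Transform[]): Point[] {
  const segment = data.points.slice(slice.start, slice.end);
  return transforms.reduce((points, t) => t(points), segment);
}
```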
A split, then, works as follows:
A split takes a StrokeSlice and an offset: a number that indicates a point along the length of the StrokeData.
Again, note that the new strokes simply point to different slices of the original stroke geometry; we don’t mutate that data. So if we simply record the list of all of the splits, we always end up with the same StrokeSlices, no matter the order in which they’re applied.
Finally, if we record properties in the same way (using slices), we can apply them in any order and keep things consistent. For example, one user might change the color of a stroke while a second user splits the stroke in two. We can make this converge:
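A sketch of why this converges (minimal types repeated here; this is not the actual Automerge integration): splits and slice-scoped property edits are both resolved as pure functions over recorded offsets, so the order they arrive in doesn’t matter.

```ts
type Slice = { dataId: string; start: number; end: number };

// Splitting only produces two narrower slices; the underlying StrokeData is untouched.
function split(slice: Slice, offset: number): [Slice, Slice] {
  return [{ ...slice, end: offset }, { ...slice, start: offset }];
}

// Replay every recorded split against the original full-length slice.
// Sorting the offsets makes the result independent of the order the splits
// happened in, so concurrent splits converge instead of producing four strokes.
function applySplits(full: Slice, offsets: number[]): Slice[] {
  return [...offsets]
    .sort((a, b) => a - b)
    .reduce<Slice[]>(
      (slices, offset) =>
        slices.flatMap((s) => (s.start < offset && offset < s.end ? split(s, offset) : [s])),
      [full],
    );
}

// Property edits recorded against offset ranges can be resolved per slice in the
// same way, so "A recolors the stroke" and "B splits it" commute.
type ColorEdit = { start: number; end: number; color: string };

function colorOf(slice: Slice, edits: ColorEdit[], fallback: string): string {
  let color = fallback;
  for (const edit of edits) {
    if (edit.start <= slice.start && slice.end <= edit.end) color = edit.color; // last write wins
  }
  return color;
}
```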

2025-07-22 Heat diffusion weights
Really stoked about getting this working! Automatic weight-painting for deformation using a diffusion algorithm. This should (hopefully) enable deformation without explicit splining, as well as improve deformation with explicit splining, particularly for concave shapes, which is surprisingly non-trivial.
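I’m only guessing at the details here, but a minimal grid version of the idea could look like this: handle cells hold fixed weights, and heat diffuses through the cells covered by ink, so weights can’t leak across gaps in concave shapes.

```ts
// Illustrative sketch, not the Etui implementation.
// mask[y][x] is true where there is ink; seeds pins weights at handle cells.
function diffuseWeights(
  mask: boolean[][],
  seeds: Map<string, number>, // "x,y" -> fixed weight in [0, 1]
  iterations = 200,
): number[][] {
  const h = mask.length;
  const w = mask[0].length;
  let grid = mask.map((row, y) => row.map((_, x) => seeds.get(`${x},${y}`) ?? 0));

  for (let i = 0; i < iterations; i++) {
    const next = grid.map((row) => row.slice());
    for (let y = 0; y < h; y++) {
      for (let x = 0; x < w; x++) {
        if (!mask[y][x] || seeds.has(`${x},${y}`)) continue; // outside ink, or pinned
        // Jacobi step of the heat equation: average the in-mask neighbours.
        const neighbours = [[x - 1, y], [x + 1, y], [x, y - 1], [x, y + 1]]
          .filter(([nx, ny]) => nx >= 0 && ny >= 0 && nx < w && ny < h && mask[ny][nx]);
        if (neighbours.length > 0) {
          next[y][x] =
            neighbours.reduce((sum, [nx, ny]) => sum + grid[ny][nx], 0) / neighbours.length;
        }
      }
    }
    grid = next;
  }
  return grid;
}
```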


2025-07-23
Here’s some actual deformation. The colors on top show the diffusion process. To speed this up, it’s run first at a low resolution and then progressively upscaled and diffused again. If we like this, we could also ‘just’ run this on the GPU, which would make it a lot faster.
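A rough outline of that coarse-to-fine schedule (again just a sketch; `solveLevel` stands in for a diffusion pass like the one above at a given resolution):

```ts
type Grid = number[][];

// Nearest-neighbour 2x upscale: duplicate every cell in x and y.
function upscale2x(grid: Grid): Grid {
  return grid.flatMap((row) => {
    const wide = row.flatMap((v) => [v, v]);
    return [wide, [...wide]];
  });
}

// Run the diffusion at each resolution level, coarsest first, warm-starting each
// level with the upscaled result of the previous one.
function coarseToFine(
  levels: number,
  solveLevel: (level: number, warmStart?: Grid) => Grid,
): Grid {
  let weights: Grid | undefined;
  for (let level = 0; level < levels; level++) {
    weights = solveLevel(level, weights && upscale2x(weights));
  }
  return weights!; // assumes at least one level
}
```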