Programmable Ink Lab Notes
Title: Etui Ink Model Studies
Date: 2024-2025
Author: Marcel Goethals

2024-07-29 (From Playbook daily logs)

2024-12-10 (From Playbook reflection doc)

2025-01-10

I’m going to start working on a study to further develop the ink model in Playbook. The new model is called Etui.

The ink model that is currently in Playbook is based on the idea of Inklets: small, particle-like blobs of ink. As an approach, it has some nice properties:

But, it also has some significant downsides:

Additionally, some aspects of the Inklet model remain underdeveloped. Proper affordances for setting and getting properties on ink are completely missing. Selecting partial strokes, as well as refining selections based on properties, are also absent.

Etui is an attempt at an ink model that keeps the simple, particle-like qualities of Inklets while trying to solve some of their pitfalls by doing two things:

  1. Using a richer data structure to model ink, one that reduces the amount of data being stored and is more amenable to deformation.
  2. Abstracting most of that complexity away behind a simple interface that gives the user constrained, well-behaved ways of setting and getting properties on ink.

2025-01-11

On-the-fly Stroke simplification & Inklet generation test. The green area is returning substroke geometry.

2025-01-20 Ink isn’t a thing, it’s stuff

One of the great properties of pen and paper is how informal it is. Drawing is more like manipulating a material than manipulating discrete objects. When you’re making a sketch, or even when you’re writing some text, it doesn’t usually make sense to consider each pen stroke as an individual object. Rather, as ink flows onto the page, the individual strokes blend together to form a shape. Indeed, when sketching, it isn’t uncommon to suggest a single line by drawing multiple strokes.

In Etui, my idea is that ink is surfaced to the user as stuff. Under the hood there is quite a rich representation, but it is entirely abstracted away. For example, we internally maintain a model of each individual stroke, yet you can trivially select any piece of ink on the screen and get exactly the ink you selected. This makes ink partially behave like pixels, while still allowing us to do things like getting and setting properties, or deformations.

2025-01-22 Limit properties & choices

In the Etui model, ink has a single property: “Style”.

We want our environment to be distraction-free, and to encourage the user to stay sketchy. Twiddling values is probably one of the most pernicious distractions. Pen and paper works best if you limit the choice of pens and colors: it’s preferable to simply have a red and a blue pen than to have a pen tool plus a color picker.

There is a tension though, because we still want our system to be malleable. For this reason, a style breaks down into three properties: Color, Weight & Brush.

I’m choosing not to surface these properties as continuous values. Instead, it’s better to give the user a limited number of options. Again, we want to encourage low fidelity, so limiting the colors frees you from getting distracted by trying to pick the perfect color.
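As a concrete sketch of what “a limited number of options” could look like in code, each property can be a small closed union rather than a continuous value. The specific names below are placeholders of mine, not Etui’s actual palette:

```typescript
// Each property is a small closed set of named options, not a continuous
// value. The specific names are illustrative, not Etui's actual palette.
type Color = "black" | "red" | "blue";
type Weight = "fine" | "medium" | "heavy";
type Brush = "pen" | "marker" | "pencil";

interface Style {
  color: Color;
  weight: Weight;
  brush: Brush;
}

// Because each union is closed, the UI can enumerate every choice
// directly; there is nothing to twiddle.
const COLORS: Color[] = ["black", "red", "blue"];

function defaultStyle(): Style {
  return { color: "black", weight: "medium", brush: "pen" };
}
```

The point of the closed unions is that there is no in-between value to fiddle with: the UI can show every option at once, like a tray of pens.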

Another advantage of limiting choices is that it encourages uniform values. One thing that happens a lot in software with a color picker is that, unless you’re very careful, you end up with 15 ever-so-slightly different shades of red. This is not to say that the user shouldn’t be able to add a new color; the point is to not make it the default.

We also shouldn’t represent the values as numbers. Representing things as numbers is programmer-brain; we should bias towards representing things using domain concepts instead. Of course, the user might want to take a numeric value and map it to a color. Even in that case, I would argue surfacing RGB (or HSL, or whatever) values is a bad default. Instead, we should support a sensible mapping from numbers to colors as a built-in primitive.
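A minimal sketch of what such a built-in mapping primitive might look like, assuming a fixed shared palette (the palette, function name, and signature are mine, not Etui’s):

```typescript
// Bucket a numeric value into an existing palette instead of exposing
// raw RGB/HSL values. Palette and signature are illustrative.
const PALETTE = ["blue", "green", "yellow", "red"] as const;

function colorFor(value: number, min: number, max: number): string {
  // Normalize into [0, 1], clamping out-of-range values.
  const t = Math.max(0, Math.min(1, (value - min) / (max - min)));
  // Pick the palette bucket; the top of the range maps to the last entry.
  const index = Math.min(PALETTE.length - 1, Math.floor(t * PALETTE.length));
  return PALETTE[index];
}
```

Something like `colorFor(temperature, 0, 100)` always lands on one of the shared palette entries, which keeps values uniform across the document instead of producing fifteen shades of red.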

Color and Weight are relatively straightforward; Brush is a slightly more complex property that I’ll go into later.

2025-01-24 Multi-property selections

It’s kind of unavoidable that the user will select ink with different properties. Since selection is the main way of getting & setting those properties, we need a sensible way of handling that.

Two BAD ways:

  1. Only allow selections of ink with the same properties.
  2. Make properties write-only when they’re not homogeneous. This is the most common pattern in other tools.

Instead, we should allow the user to access all the possible properties. This gives us a few benefits:

INSERT VIDEO OF MULTISELECTION
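A rough sketch of that third option: reading a property surfaces every distinct value in the selection, and writing applies to all of it. The `Selection` shape here is my illustration, not Etui’s API:

```typescript
// Reading a property on a mixed selection yields every distinct value
// present; writing applies to all selected ink. Illustrative only.
interface InkPiece {
  color: string;
  weight: string;
}

class Selection {
  constructor(private pieces: InkPiece[]) {}

  // All distinct values of a property across the selection.
  get(prop: keyof InkPiece): string[] {
    return [...new Set(this.pieces.map((p) => p[prop]))];
  }

  // Writing collapses the selection to a single value for that property.
  set(prop: keyof InkPiece, value: string): void {
    for (const p of this.pieces) p[prop] = value;
  }
}
```

With this shape, a mixed selection shows e.g. both “red” and “blue” for Color, so the user can see what’s there and pick from it, rather than facing a blank, write-only field.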

Making Etui strokes local-first

I’ve been thinking a bit about how to make the Etui model local-first. We can just put our data into Automerge, but:

Strokes afford a few operations:

  1. Create & Delete (A stroke with a given Id can only be created and deleted once)
  2. Set a property, like color, weight, etc. (these can be last-write-wins)
  3. Split
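These operations could be encoded as a small tagged op type, with creation and deletion keyed by stroke ID and property sets resolved last-write-wins. This encoding is a sketch of mine, not how Playbook actually stores ops:

```typescript
// A sketch of the three operation kinds as tagged ops. The shape is
// illustrative; only the affordances come from the notes above.
interface Point { x: number; y: number }

type StrokeOp =
  | { kind: "create"; strokeId: string; points: Point[] }
  | { kind: "delete"; strokeId: string }
  // Property sets merge last-write-wins.
  | { kind: "set"; strokeId: string; prop: "color" | "weight"; value: string }
  | { kind: "split"; strokeId: string; offset: number };

// Last-write-wins merge for concurrent property sets on the same stroke.
function lastWriteWins(ops: { value: string; timestamp: number }[]): string {
  return ops.reduce((a, b) => (b.timestamp >= a.timestamp ? b : a)).value;
}
```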

Naively, you could model stroke splitting as removing the stroke and creating two new ones. However, this can cause an inconsistent state in a multiplayer context: if two people split a stroke concurrently, you’ll end up with four new strokes.

Here’s a relatively simple CRDT I came up with that would work and be super performant.

We model strokes using two separate concepts: StrokeData, the immutable recorded geometry of a stroke, and StrokeSlice, a reference to a range of that geometry.

Ink that’s rendered to the screen references StrokeData, and applies a series of transformations (including deformation) to it, to figure out what should be shown on screen. This is done as a pure function, there’s no mutation of the original data.

We can identify a StrokeSlice uniquely by using a StrokeDataId as well as a start-offset and end-offset.
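A minimal sketch of the two structures under those definitions. The field names are my guess, and offsets are treated as point indices for simplicity:

```typescript
// StrokeData is immutable recorded geometry; a StrokeSlice is identified
// by a StrokeDataId plus a start- and end-offset. Field names are guesses.
interface Point { x: number; y: number }

interface StrokeData {
  id: string;       // the StrokeDataId
  points: Point[];  // never mutated after creation
}

interface StrokeSlice {
  strokeDataId: string;
  start: number;    // start-offset
  end: number;      // end-offset
}

// Rendering resolves a slice to geometry as a pure function of the
// original data; nothing is mutated.
function resolve(data: StrokeData, slice: StrokeSlice): Point[] {
  return data.points.slice(slice.start, slice.end);
}
```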

A split, then, works as follows:

A split takes a StrokeSlice and an offset: a number that indicates a point along the length of the StrokeData. It produces two new StrokeSlices, one covering the range from the original start to the offset, and one from the offset to the original end.

Again, note that the new strokes simply point to different slices of the original stroke geometry; we don’t mutate that data. So if we simply record the list of all the splits, we always end up with the same StrokeSlices, no matter the order in which they’re applied.

Finally, if we record properties in the same way (using slices) we can apply them in any order, and make it consistent. For example, one user might change the color of a stroke, while a second user splits the stroke in two. We can make this converge:
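To make the convergence argument concrete, here is a toy model (mine, not the actual implementation) where the final slices are derived as a pure function of the set of recorded splits and property edits, so the order in which concurrent ops arrive cannot matter:

```typescript
// Derive slices from the *set* of split offsets and recorded color
// edits. Because this is a pure function of sets, any merge order of
// concurrent ops produces the same result. Illustrative sketch only.
interface SliceView { start: number; end: number; color?: string }

function deriveSlices(
  length: number,
  splits: number[],
  colorEdits: { start: number; end: number; color: string }[],
): SliceView[] {
  // Boundaries come from the deduplicated, sorted set of splits.
  const bounds = [0, ...[...new Set(splits)].sort((a, b) => a - b), length];
  const slices: SliceView[] = [];
  for (let i = 0; i < bounds.length - 1; i++) {
    const slice: SliceView = { start: bounds[i], end: bounds[i + 1] };
    // A color edit applies to every slice its range covers.
    for (const edit of colorEdits) {
      if (edit.start <= slice.start && slice.end <= edit.end) {
        slice.color = edit.color;
      }
    }
    slices.push(slice);
  }
  return slices;
}
```

In the example above: one user colors the whole stroke red while another splits it at an offset; merging the two recorded ops in either order derives the same red slices.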

2025-07-22 Heat diffusion weights

Really stoked about getting this working! Automatic weight-painting for deformation using a diffusion algorithm. This should (hopefully) enable deformation without explicit splining, as well as improve deformation with explicit splining, particularly for concave shapes, which is surprisingly non-trivial.

2025-07-23

Here’s some actual deformation. The colors on top show the diffusion process. To speed this up, it’s run first at a low resolution and then progressively upscaled and diffused again. If we like this, we could also ‘just’ run it on the GPU, which would make it a lot faster.
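A 1-D toy version of that coarse-to-fine schedule. The real system diffuses weights over 2-D ink geometry; this only illustrates the “low resolution first, then upscale and diffuse again” idea, with made-up function names:

```typescript
// Diffuse a field with a simple averaging kernel; the endpoints stay
// fixed, acting as the pinned weight values.
function diffuse(field: number[], steps: number): number[] {
  let f = field.slice();
  for (let s = 0; s < steps; s++) {
    const next = f.slice();
    for (let i = 1; i < f.length - 1; i++) {
      next[i] = (f[i - 1] + f[i] + f[i + 1]) / 3;
    }
    f = next;
  }
  return f;
}

// Linear 2x upsample between grid levels.
function upsample(field: number[]): number[] {
  const out: number[] = [];
  for (let i = 0; i < field.length - 1; i++) {
    out.push(field[i], (field[i] + field[i + 1]) / 2);
  }
  out.push(field[field.length - 1]);
  return out;
}

// Coarse-to-fine: cheap steps at low resolution spread the weights far,
// then a refinement pass smooths at full resolution.
function multiResDiffuse(coarse: number[], steps: number): number[] {
  return diffuse(upsample(diffuse(coarse, steps)), steps);
}
```

The win is that a few cheap steps on the small grid propagate influence across the whole shape, which would take many expensive steps at full resolution.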