Making a new medium and other recaps

Hello all,

We’re finding ourselves in a reflective mood as the end of 2023 approaches. This year, we published three essays, communed at a couple of events, and organized several workshops. It’s always nice to celebrate publishing and presenting our research in public, but much of our work is an ongoing journey of continuous exploration, prototyping, and discovery. So, we wanted to start this end-of-year dispatch by talking a bit about one of our longest-standing research tracks: programmable ink.

Making a conversational medium

The ink team at the lab is dedicated to making something that combines what we love about sketching on paper with the power of computers. The tool we envision will let you add dynamic behavior to imperfect and often fuzzy sketches. More importantly, it should be intuitive to use; working with it should feel conversational, like working with pen and paper or with a trusted collaborator.

The lab has been working on this idea through different projects with various team members (see Untangle, Inkbase, or Crosscut). Making these prototypes feel conversational and intuitive has been an ongoing research problem and one of our (many) obsessions. This year, Marcel Goethals, Alex Warth, and Ivan Reese made a breakthrough: their current prototype brings us closer to the conversational, intuitive feel we’re after.


Part of what makes pen and paper a versatile medium for thinking is its flexibility and informality. You can start scribbling anywhere and use any set of symbols or words to represent your thinking. You can decide on the relationship between things you scribble as you go, and you can go back and change parts of what you’re scribbling. The meaning of anything you put on the paper can easily transform and evolve as you think, and there’s no friction between your thought process and the medium. The ongoing conversation between what’s on the paper and what’s in your head is what helps you make sense of your ideas and push them further.

Computers are a dynamic medium that lets us perform calculations, copy and reuse things, and simulate ideas. But computational tools lack the kind of flexibility and informality inherent to pen and paper. They add too much structure and formality (think structured tables or vector shapes) too soon for informal, often hazy thinking. They also typically rely on you knowing the input(s) so that you can arrive at a desired output. A spreadsheet can compute your net profit if you enter or change your revenue and expenses. CAD can produce an accurate floor plan if you know the measurements and positions of the items.

Tools like spreadsheets use unidirectional computation: data flows through a formula in only one direction. A spreadsheet formula can compute a result from the numbers in other cells, but it can’t propose changes to those numbers if you edit the result. This is good for answering questions like: what will my monthly payment be if interest rates go up? But sometimes we want to work backwards and ask: what interest rate do I need to keep my monthly payment at a certain amount? For those kinds of questions we’d have to do extra work ourselves, and that distracts us from the thinking that truly matters. We can’t have a conversational experience with a tool that adds too much structure too soon and requires us to think for it. So how do we create a new computational medium that gives us the kind of conversational experience pen and paper does?
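To make that concrete, here’s a hypothetical sketch (our illustration, not any particular tool’s code) of what the extra work looks like in a purely unidirectional world: the forward question is a one-line formula, but the backwards question forces you to write a search yourself.

```ts
// Hypothetical illustration of unidirectional computation: it only runs "forwards".
// Forward: monthly payment from principal, annual rate, and term (standard amortization formula).
function monthlyPayment(principal: number, annualRate: number, months: number): number {
  const r = annualRate / 12;
  if (r === 0) return principal / months;
  return (principal * r) / (1 - Math.pow(1 + r, -months));
}

// Backwards: "what rate keeps my payment at X?" has no cell to type into,
// so we have to write the search ourselves (here, a simple bisection on the rate).
function rateForPayment(principal: number, months: number, targetPayment: number): number {
  let lo = 0, hi = 1; // search annual rates between 0% and 100%
  for (let i = 0; i < 100; i++) {
    const mid = (lo + hi) / 2;
    if (monthlyPayment(principal, mid, months) > targetPayment) hi = mid;
    else lo = mid;
  }
  return (lo + hi) / 2;
}

console.log(monthlyPayment(300_000, 0.06, 360).toFixed(2));        // ≈ 1798.65
console.log((rateForPayment(300_000, 360, 1500) * 100).toFixed(2)); // ≈ 4.39 (percent)
```

The forward direction takes one formula; the backwards direction takes a loop, a tolerance, and a decision about which variable to solve for, which is exactly the kind of bookkeeping that pulls you out of the problem you were actually thinking about.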

One of the key starting points for solving this problem was making bidirectional computation work in our prototype. Any point, symbol, or line you draw on paper is like an output that is simultaneously an input you can do more with. We want the tool we’re making to give you that same freedom: the freedom to manipulate any object and add dynamic behavior to it. We implemented a version of bidirectional computation in Crosscut, a tool for drawing vector-based dynamic models. It’s what allowed users to wire a relation between two points and experience that relationship immediately by wiggling either of the points, like in the example below.

Wiggling a point on a line that stays vertical no matter what. Each point can be moved up and down, but the line stays vertical when you try to move the points horizontally (achieved by constraining the horizontal position (x) of the two points to be equal).
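Here’s a minimal sketch of that vertical-line relation as a bidirectional constraint (hypothetical code and names, not Crosscut’s implementation): whichever point you’re dragging acts as the source of truth, and the constraint pushes its x value to the other point, so data can flow in either direction depending on what you grab.

```ts
// Hypothetical sketch of a bidirectional equality constraint between two points' x values.
type Point = { x: number; y: number };

const a: Point = { x: 100, y: 50 };
const b: Point = { x: 100, y: 200 };

// The point that was just dragged "wins"; the constraint copies its x to the other point.
function keepVertical(dragged: Point, other: Point) {
  other.x = dragged.x;
}

a.x = 140; keepVertical(a, b); // drag a: b follows, the line stays vertical
b.x = 90;  keepVertical(b, a); // drag b: a follows, still vertical
```

Even this tiny sketch hints at a limitation: someone (or something) still has to decide which point wins, and that decision is where the grain described below comes from.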

When designing, we’re constantly thinking about materiality: what is the material that things are made out of? If you’re a designer you may have thought about this when it comes to UI elements (remember brushed metal?), but we also think about the materiality of our programming constructs. In Crosscut, the programming model had a feeling of grain, like wood: data flowed back and forth smoothly in certain directions but poorly in others. As a user you would run into this grain while using the tool, and so you couldn’t manipulate objects as freely as we’d like.

The challenge of bidirectional computation is that sometimes there’s more than one way to get to a result. If you change the result of an addition, which input should you update? In our latest prototypes, we’ve developed a computational model powered by a new constraint system. It can find solutions that satisfy your constraints, and it feels grainless to interact with. We’ve started calling this approach omnidirectional computation, because there aren’t really inputs or outputs at all: there are only relationships between objects. You can edit any, or all, of the pieces of the system simultaneously and still get a result, even for complex problems. By default, our system changes every object a little bit instead of choosing a single input to change a whole lot. We like to think of this as “spreading the change”.
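As a toy illustration of “spreading the change” (our own hypothetical sketch, not the prototype’s actual solver), here’s a relaxation loop that treats each constraint as an equation to zero out and distributes the constraint’s error across all of its variables, rather than routing it to a designated output:

```ts
// Toy "spread the change" solver (a hypothetical sketch, not our actual constraint system).
// Each constraint is an expression that should equal zero; each pass distributes a
// constraint's error across all of its variables instead of dumping it on one "output".

type Vars = Record<string, number>;
type Constraint = {
  vars: string[];              // the variables this constraint touches
  error: (v: Vars) => number;  // how far from satisfied (0 = satisfied)
  signs: number[];             // which way to nudge each variable to shrink the error
};

function relax(v: Vars, constraints: Constraint[], iterations = 100): Vars {
  for (let i = 0; i < iterations; i++) {
    for (const c of constraints) {
      const err = c.error(v);
      const share = err / c.vars.length; // split the correction evenly: "spread the change"
      c.vars.forEach((name, j) => { v[name] -= c.signs[j] * share; });
    }
  }
  return v;
}

// Example: sum = a + b, and the user has grabbed the result and dragged it to 20.
// No single input is picked to absorb the whole edit; a and b each move a little instead.
const vars: Vars = { a: 3, b: 5, sum: 8 };
const constraints: Constraint[] = [
  { vars: ["a", "b", "sum"], error: v => v.a + v.b - v.sum, signs: [1, 1, -1] },
  { vars: ["sum"],           error: v => v.sum - 20,        signs: [1] }, // the user's edit
];
console.log(relax(vars, constraints)); // ≈ { a: 9, b: 11, sum: 20 }
```

A real solver has to cope with nonlinear relations, weights, and pinned values, but the core idea is the same: nothing is designated as an input or an output, and every variable absorbs a share of the correction.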

This model of computation opens up new opportunities for how our system can evolve, both computationally and interactionally, and it brings us closer to the conversational medium we want to make. For example, it provides robust support for multitouch interactions and for problems like this pulley system:

Demo of a pulley system with dynamic behavior.

We’ve made several attempts at implementing this constraint-based computation, but until recently we weren’t able to make the constraint system stable enough. Some of our early prototypes felt like working with jelly. Our new constraint system feels fast and solid, and it’s much more intuitive and conversational to work with. It’s been a journey, but we’re excited to keep improving it and look forward to sharing more of our findings in the near future.

Out and about

It’s the end of the year, and so I (Peter) wanted to share some recaps from a few events we attended this year. As a distributed research group, occasional in-person meetups help us keep in touch with each other as well as the rest of you. There’s simply no substitute for meeting people in person, sharing a meal, and getting into the weeds on our shared interests.

This September, many of us attended the last Strange Loop in St. Louis, MO, where Martin talked about new algorithms for collaborative text editing. It was bittersweet: Strange Loop was my personal favorite conference, bringing together a mix of industry and academic folks with an eclectic program that combined useful information with feats of “stunt computing.” Picking favorite talks would be impossible, but here are our own talks from the previous two years: Programmable Ink by Szymon Kaliski and Backchannel by Rae McKelvey.

After the main event, we co-hosted a one day Local-First Unconference with our friends at Fission and DXOS. It filled up quickly, so we expanded the capacity and then it filled up again. Local-first software is on the move! I’m still hoping to edit something together from the community notes one of these days but there’s plenty of raw material to peruse here.

More recently, we participated in the LIVE Programming and Programming Local-First academic workshops in Cascais, Portugal as part of SPLASH 2023 and then hosted another unconference at a beautiful resort. Together these three events capture the breadth of our interests: LIVE is full of wild open-ended programming language experiments, PLF is a mixture of field reports and crunchy algorithms talks, and the unconference attracted roughly fifty renegade developers from across Europe to discuss optimistic visions of a new kind of computing.

What’s a few more open tabs?

Till next time

We’re looking forward to lots of new projects next year. Thanks for following along with our work. Happy holidays!