Summit 010 Berlin

The Ink & Switch team gathered in Berlin for the Programming Local-First Software workshop hosted at ECOOP 2022. Since we were in town, we took advantage of the opportunity to invite a few folks to join us for a day of unconference to discuss our mutual interests.

Programming Local-First at ECOOP 2022

The program for PLF was full of interesting talks, including a presentation by Adam Wulf of the Muse team on how they’ve implemented local-first synchronization, a surprise talk by Marc Shapiro about his recent work on consistency, and the introduction of a new project from Socket Supply designed to be a lighter-weight alternative to Electron. Ink & Switch was represented directly by Peter, who opened the day with a keynote about local-first software.

The full day of talks is available on YouTube.

Unconference

We gathered at the Alte Kantine in Gesundbrunnen, a former workers’ cafeteria in what was once a factory in the GDR. Today it is a hidden treasure: a charismatic shabby-chic venue fit for a small group like ours to gather for coffee and the exchange of ideas.

Welcome

The format was simple: we began with lightning presentations of attendee projects. In particular, we wanted to hear from people whose projects were not represented at PLF the day before. After that, we gathered topics of interest for discussion from attendees and divided into smaller groups to have those conversations.

Peter reviews the agenda and takes a poll for afternoon discussion topics.

Lightning Talks

Our initial presentations were an open call for attendees to briefly share a project they were working on.

Live Literate Programming (Interactive Documents)

Evan recalls: We started the session off by explaining what everyone’s interests in Live Programming were, and Gilad gave us a demo of the latest environment he’s working on, another attempt to “boil the ocean” as he said. It was a webpage, but all of the elements of the webpage could be inspected within the page itself, and they were all backed by objects whose source you could edit (in a Smalltalk variant?), and when you edited them, the page itself would immediately reflect the edits and the connections between components.

Szymon sparked discussion by questioning whether one of the inherent constraints on these live programming environments is that, in production, they can never be as performant as non-live, compiled code. This got some pushback, since some of the original Smalltalk and Interlisp environments had techniques for addressing this, but there seemed to be a general concession that today’s VMs are black-boxed enough that this remains a problem.

Szymon voiced that we need some more specific term than ‘end-user programming’, one which could differentiate the Smalltalk-like environments Gilad and Jack & Martin are working on, which ultimately target professional software developers, from things like the Untangle demo he showed earlier, which targets users with no programming experience at all. During this discussion Jack made the observation which has most stuck with me, on the difficulty of the space Szymon is exploring:

“It’s comparatively easy to give visual representation to computing’s nouns. It’s quite hard to give a visual representation to computing’s verbs.”

Jack

Data Interop Across Apps & Portability Through Sync

This session split into two groups because of the level of interest.

One of the two sessions talked about real-life example use cases for data interoperability across applications. We landed on an important distinction between two motivations for interop: 1) having smaller tools each perform an independent part of a workflow (like a UNIX pipeline), 2) letting collaborators each pick their preferred tool when collaborating.

Automerge & Automerge-RS

Peter recalls: This session went deep on the work on Automerge from the last year, with Martin Kleppmann describing the design process behind the Automerge binary file format, and Orion Henry introducing the work that went into the new Automerge-RS. Kevin Jahns, the author of the Yjs CRDT, had plenty of good questions and insightful critique.

Martin’s OpSet approach seems to me to have unappreciated power and flexibility. I think and hope that some variant of it is the solution to many of the problems that people are debating (such as permissions and connecting CRDTs). I’m exploring this now and have a thread going with Martin and Marc.

Malcolm Handley
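
For context on why the OpSet approach is seen as flexible, here is a rough TypeScript sketch of the core idea from Kleppmann et al.’s OpSets paper as I understand it: every operation carries a Lamport timestamp, the set of operations is totally ordered by those timestamps, and the current state is a pure function of that ordered set. The resettable counter is my own toy datatype, chosen for brevity.

```typescript
// An OpSet sketch: state = interpretation of a totally ordered set of ops.
// Replicas holding the same ops converge regardless of delivery order.

type OpId = { counter: number; node: string }; // Lamport timestamp

const before = (a: OpId, b: OpId): boolean =>
  a.counter < b.counter || (a.counter === b.counter && a.node < b.node);

interface Op {
  id: OpId;
  kind: "increment" | "reset";
}

function interpret(ops: Op[]): number {
  // Total order: sort by Lamport timestamp, breaking ties by node id.
  const ordered = [...ops].sort((a, b) => (before(a.id, b.id) ? -1 : 1));
  let value = 0;
  for (const op of ordered) {
    value = op.kind === "reset" ? 0 : value + 1;
  }
  return value;
}
```

Malcolm’s point, as I read it, is that anything expressible as an interpretation function over the op history, perhaps even permission checks, could in principle be layered on this same ordered set.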

Peter recalls: Kevin and I discussed this briefly and I claimed that links can just be to stable document identifiers. It’s okay if they don’t hold together with dependencies; they’re meant to be kind of loosely coupled. The hard part (I claim) is to figure out when to put more things in the same CRDT and when to split it up, and which CRDTs you need to send another user (or fetch from them) to ensure you’re synchronized. This evolved into a discussion about synchronization without running server-side code.

Evan recalls: This session went by quite fast. There was a lot of back and forth between Peter & Kevin around how to deal with CRDT graphs when they reach the size of “millions of documents”, with Peter somewhat unconvinced this is even a problem users have, and Kevin concerned that we should have ways to address it. The relationship to “links” here is that having millions of documents implies you can’t download the entire graph, so the graph needs to be partitioned and re-joined using some linking mechanism.
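
A tiny sketch of the linking idea under discussion, assuming a link is nothing more than a stable document identifier plus optional display metadata; the `DocumentId` and `repo` shapes here are illustrative, not an actual API from the session.

```typescript
// Partition a huge graph into many CRDT documents; re-join them with links
// that are just stable identifiers, fetched lazily when followed.

type DocumentId = string; // stable, e.g. derived from the document's first change

interface Link {
  target: DocumentId; // loosely coupled: may point at a doc we haven't synced yet
  title?: string;     // cached display text; allowed to go stale
}

// Following a link syncs/loads that one document on demand, so a client never
// needs to download the whole multi-million-document graph up front.
async function follow(
  link: Link,
  repo: { find: (id: DocumentId) => Promise<unknown> }
): Promise<unknown> {
  return repo.find(link.target);
}
```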

Interleaved somewhere in here was a really compelling idea of Peter’s: that techniques could be developed in the CRDT application space for building applications which effectively don’t need a custom server, by re-using the existing document-based HTTP methods any standard web host provides. The technique that emerged from discussion, IIRC, is:

  1. Write-permissioned applications all run-length encode their CRDT deltas and append them to a chosen location using HTTP PUT. To ensure updates don’t clobber one another, the PUT should include an If-Match header carrying the ETag of the last version of the file the client saw. (If this precondition fails, the application needs to “pull”/read the file again for the latest changes and resubmit the PUT.)
  2. On “pull”/read, applications check Accept-Ranges and only issue Range requests to that chosen location, so that they are only ever tailing the file from its last-read offset, i.e. only ever reading other clients’ deltas.
  3. There was a slight variation on this which involves using two files; IIRC the second file is a compacted version, so that new clients don’t have to download all the deltas on initial load. That file can contain the correct byte offset at which to start reading the delta file in #1. (Writing this compacted version would likely require the If-Match technique too.)

I suspect it’s not okay to let the delta file grow infinitely, but it does seem like you could simply add some kind of marker record at the end to indicate that log rotation to a newly chosen location had occurred, if file size became an issue.
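
To make the moving parts concrete, here is a minimal TypeScript sketch of the push and pull halves of this idea. It assumes a host that honours PUT with If-Match and serves Range requests (a WebDAV-style static host); `deltaUrl`, the retry loop, and the status-code handling are my own illustrative choices, not details from the discussion.

```typescript
// Push: append run-length-encoded deltas with a conditional PUT.
async function pushDeltas(deltaUrl: string, newDeltas: Uint8Array): Promise<void> {
  for (;;) {
    const current = await fetch(deltaUrl);
    const etag = current.headers.get("ETag");
    const body = new Uint8Array(await current.arrayBuffer());

    // Append our deltas to the last-seen contents.
    const updated = new Uint8Array(body.length + newDeltas.length);
    updated.set(body);
    updated.set(newDeltas, body.length);

    // If-Match: only succeed if nobody else wrote since we read.
    const res = await fetch(deltaUrl, {
      method: "PUT",
      headers: etag ? { "If-Match": etag } : {},
      body: updated,
    });
    if (res.status !== 412) return; // 412 Precondition Failed: pull again & retry
  }
}

// Pull: tail the file from the byte offset we last read.
async function pullDeltas(
  deltaUrl: string,
  lastOffset: number
): Promise<{ deltas: Uint8Array; nextOffset: number }> {
  const res = await fetch(deltaUrl, { headers: { Range: `bytes=${lastOffset}-` } });
  if (res.status === 416) {
    // Range not satisfiable: nothing new has been appended.
    return { deltas: new Uint8Array(0), nextOffset: lastOffset };
  }
  const deltas = new Uint8Array(await res.arrayBuffer());
  return { deltas, nextOffset: lastOffset + deltas.length };
}
```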

UX Patterns for Local-First Applications

First we discussed common UX challenges that emerge in local-first software, like sync status, visualizing change over time, and collaborative undo stacks. Then we listed out solutions that have been proven to work in real apps (like Muse) and could be used more broadly in local-first apps.

Eileen recalls: The goal of the session was ambitious: we wanted to collect recurring UX issues we have bumped into while developing local-first apps, and also gather existing solutions to those thorny questions to see where we can help each other. I was not surprised by the number of issues that people listed (much of it can be filed under “version control without git”), but some of the ideas were genuinely new and exciting. The example that comes to mind (from Max, I believe) is to distinguish local from global search in a search dropdown. There was another good discussion on when and how to show previews before merging branches. I personally wanted more time to talk about Entangled Files as a pattern, but thankfully Johannes hosted the follow-on session on the file system! As a next step, I would like to document and test these patterns to see where more work is needed.

Annette “found the UX discussion very interesting, and wonder[ed] how to integrate this into these frameworks, also provenance and tracing back who did what.”

Eileen led a conversation to help identify common user experience patterns and potential challenges for local-first applications. A few areas of exploration were identified, including linear and non-linear history; learning from git without recreating git; communicating current status; and the concept of user spaces.

Programming ✕ Drawing

Human-First Programming

Human-first programming is an approach to the design of programming systems where the human, with their perception, motor skills, social background, social context, skills, motives, etc., comes before machine architecture, generalisability, performance, and mathematical elegance.

Human-first programming systems will allow us to express ourselves in a computationally empowered manner, with means and concepts that are suitable for the particular context we are in and the domain we are working on.

Human-first programming systems dialectically evolve with us within the zone of proximal development. Concretely, this means that constructs beyond what we can reasonably comprehend are not exposed unless there is a perceivable path towards understanding them.

The concept of “human-first programming” is an attempt to articulate a phenomenon, one that can describe, e.g., the approach taken with systems such as Szymon’s Inkbase, but also to name a movement that less technically minded people can rally behind.

Human-first programming is not end-user development. A human-first programming system is usable by the novice and expert alike.

Clemens Klokmose

Customizable Software

Issues were collected in blue and red, three categories in purple, and solutions in yellow/white, with votes on which are most significant.

Geoffrey Litt recalls: Clemens and Rosano facilitated a wide-ranging discussion about the challenges and opportunities of customizable software. As a group we zoomed out from the mechanisms of customization, and talked a lot about the broader ecosystem of software. A few themes I remember:

I argue that today we have no software — what we have is merely the simulation of software. What we have today bears the same relationship to real software as the set of a Hollywood movie does to the real places and scenes that it portrays. The movie set creates the impression of a particular scene which is good just for an observer in a carefully controlled place and for a limited set of purposes (the camera and its optics). Similarly, our software meets a set of needs which are good for a tiny set of users under a limited range of contexts — often this set is so idealised that the software doesn’t actually adequately meet the needs of any real users. A small change in perspective of the camera or a small change in usage (pushing against a prop wall that wasn’t designed to be rigid, for example) instantly exposes the sham of the movie set world.

Rosano recalls: A larger idea of ‘software vandalism’ emerged as shorthand for the boundary between what should be possible and what should be prohibited: we like graffiti but not broken benches.

J Ryan Stinnett recalls: All of the sessions were quite interesting overall, but the one on software customisation had the most overlap with my own interests. One of the discussion threads there was around extension mechanisms. There are roughly two known paths to extension support today:

  1. Define some extension API; extensions can only make use of that pre-defined interface
  2. Expose the entire application to extensions

App vendors typically prefer defining an API (path 1) so they can retain control and reduce maintenance impact. Extension authors eventually desire more and more features; full access to the app (path 2) makes that possible, but it effectively turns the entire application into an “API” boundary, requiring careful consideration of every app change, since any change could impact extensions. (Web browsers have tried both of these paths over the years.)
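
A hypothetical TypeScript sketch of the two shapes (every name here is invented for illustration, not drawn from any real app):

```typescript
// Path 1: a narrow, pre-defined extension API. The vendor controls exactly
// what extensions can reach, so internals can change freely behind it.
interface ExtensionApi {
  registerCommand(name: string, run: () => void): void;
  showNotification(message: string): void;
}
type NarrowExtension = (api: ExtensionApi) => void;

// Path 2: hand the whole application object to extensions. Maximally powerful,
// but now every internal field is effectively public API surface that the
// vendor must consider before changing anything.
interface App {
  documents: Map<string, unknown>;
  ui: { panels: unknown[] };
  // ...everything else an extension might reach into
}
type FullAccessExtension = (app: App) => void;
```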

I have a vague hope of finding some “middle road” that offers greater than typical power to extension authors while avoiding the maintenance nightmare for app vendors, perhaps relying on things like capabilities for security, auto API evolution (similar to Cambria), and other bits, but it’s still quite hazy for now.

I hope we can keep these customization discussions going in the future!

Space & World & Stuff

[Editor’s note: This session must have been completely engrossing since nobody present had time to take notes.]

Bring Back the Filesystem

This session was proposed by Johannes Schickling, who is working with Geoffrey Litt on Overtone. He reports:

Participant interest in file systems. Left: positive aspects. Right: challenges/questions.

Access Control Within Documents

This topic was inspired by a question from Nik Graf. How could you give “comment access” to a document author but not “edit access”?

Nik recalls: While advanced comment structures would need to be part of the document itself, a simple model is just a separate CRDT document for every comment, using start and end anchors that reference stable identifiers in the original model. This also allows for “comment-only” permissions.
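
A rough sketch of this model (my own illustration, with hypothetical names), assuming the main document exposes stable per-element identifiers the way Automerge and Yjs do:

```typescript
// Each comment is its own CRDT document, anchored into the main document by
// stable identifiers rather than by (fragile) character offsets.

type AnchorId = string; // stable ID of an element/character in the main doc

interface CommentDoc {
  start: AnchorId; // where the commented range begins
  end: AnchorId;   // where the commented range ends
  author: string;
  body: string;
  replies: { author: string; body: string }[];
}

// Comment-only permissions fall out of the document boundary: grant write
// access to CommentDocs while keeping the main document read-only.
```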

Annette suggests: For the access control, the discussion in the unconference […] circled around some imho obscure issues regarding “changing the past”, i.e. rewriting histories. Often the damage will then already be done. We did some work on eventual consistency and access control with a focus on what is possible (whether the available guarantees based on causal consistency are too weak for practical purposes is a related discussion, but doesn’t invalidate our results). I would love to see some more principled approach or discussion on this topic. [Emphasis added by the editor.]

Martin Kleppmann's nearly filled notebook, with diagrams of a 'many universes' approach to decentralized access control.

Summit 010 Notebooks

As a welcome gift to community attendees, we offered a small bag of analog thinking tools, including a notebook, multi-pen, sticky notes, stickers, and postcard for later correspondence.

Measure twice, cut once, or in the case of print design: prototype, test, prototype, test, and repeat until a prototype stamp is no longer necessary. It looks like Todd was having fun. ;)

Here is one of many possible compositions that can be created with the included transfer sticker. How did you install yours?

Martin Kleppmann processes the events of the day as he transcribes notes across systems.

Unsorted Recollections

Szymon Kaliski:

Santiago Bazerque:

WASM @ ECOOP: do the new reference types, which enable efficient two-way access from outside the VM, present any opportunity for app interoperability? Especially w.r.t. sandboxed / untrusted apps.

Combining reactive programming with a local store: everybody seems to be doing it and it seems to rock.

Content addressing for metadata: this is catching on too (e.g. Orion’s representation of changes in automerge-rs, Martin’s Byzantine fault tolerant CRDTs paper, etc.)

Hypermedia <-> Linked Data: I learnt the concept of “transclusion”; are there any insights for sharing data between apps there?

And one final thought that sprouted up on the flight back: the “causal set” concept I presented is a natural extension of an observed-remove set. In the OR-set, deletions include tombstones to indicate which additions are being invalidated, thus using the local state of the “deleter” to prevent any conflicts later. The causal set extends this idea one step further by making the “checks for membership” explicit in the causal history in the form of attestations, and (now the other way around) defining the set of valid attestations (by using a transitive closure into the past through the causal history).
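
For readers who want the baseline concrete, here is a minimal TypeScript sketch of a standard observed-remove set; the attestation machinery of Santiago’s causal set is his own extension and is deliberately not modeled here.

```typescript
// OR-set: every add gets a unique tag; a remove tombstones exactly the tags
// the deleter has observed, so a concurrent (unobserved) add survives.

type Tag = string; // unique per add, e.g. `${replicaId}:${counter}`

class ORSet<T> {
  private adds = new Map<Tag, T>();
  private tombstones = new Set<Tag>();

  add(value: T, tag: Tag): void {
    this.adds.set(tag, value);
  }

  remove(value: T): void {
    // Invalidate only the additions this replica has seen.
    for (const [tag, v] of this.adds) {
      if (v === value) this.tombstones.add(tag);
    }
  }

  has(value: T): boolean {
    for (const [tag, v] of this.adds) {
      if (v === value && !this.tombstones.has(tag)) return true;
    }
    return false;
  }

  merge(other: ORSet<T>): void {
    // Union both sides; a tombstone permanently wins over the tag it names.
    for (const [tag, v] of other.adds) this.adds.set(tag, v);
    for (const tag of other.tombstones) this.tombstones.add(tag);
  }
}
```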

Michiel de Jong:

The project I presented is https://federatedbookkeeping.org - a “local-first” world where “local” means “our on-prem or cloud-hosted business software system” (such as SAP, Oracle, etc.), and sync between nodes is not just a question of reconciling concurrent edits, but also of data portability.

My most memorable takeaways were:

  1. talking with Santiago Bazerque, Marc Shapiro and others about Causal Consistency and how it relates to Access Control changes (e.g. having to undo actions from someone whose admin rights were revoked)
  2. talking with Geoffrey Litt and others about Cambria and storing original message logs (in their original language and context) as something you can always go back to, as well as keeping explicit maps, one per neighbour in the sync network, that record the foreign identifier (nickname) for each identifier you use in your own data structure (see the sketch after this list)
  3. talking with Max Schoening, Geoffrey Litt and others about the long now and obsolete software, and whether it would be possible to describe software and its dependencies in such a way that it can in theory stay functional forever
  4. talking with you, Kevin Jahns, and others about CRDT linking and the comparison between “multiple CRDTs” vs “one big data structure containing CRDTs as substructures”
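
A small sketch of the per-neighbour nickname maps from point 2 (my own illustration with invented names): each node keeps one bidirectional translation table per sync neighbour and translates identifiers at the boundary.

```typescript
type LocalId = string;   // identifier in our own data structure
type ForeignId = string; // the neighbour's name ("nickname") for the same thing

class NicknameMap {
  private toForeign = new Map<LocalId, ForeignId>();
  private toLocal = new Map<ForeignId, LocalId>();

  learn(local: LocalId, foreign: ForeignId): void {
    this.toForeign.set(local, foreign);
    this.toLocal.set(foreign, local);
  }

  // Translate on the way out and on the way in.
  outgoing(local: LocalId): ForeignId | undefined { return this.toForeign.get(local); }
  incoming(foreign: ForeignId): LocalId | undefined { return this.toLocal.get(foreign); }
}

// One translation table per neighbour in the sync network.
const nicknames = new Map<string /* neighbourId */, NicknameMap>();
```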

Marc Shapiro:

One small nugget I brought back is to distinguish between shared data that needs to persist (e.g., document state) and shared data that can remain volatile (e.g., UI interactions). Maybe obvious in retrospect, but a useful lesson learned.

Another nugget (I think from your PLF talk) is the difference between real-time collaboration, for which automated reconciliation approaches are fine, and offline collaboration à la git, which has larger impact and cannot be left to automation alone. I had seen that before, but the discussion really drove it home.

Maybe I’d rephrase that as small-scale vs. large-scale conflict.

The problem of course is (1) where do you put the boundary; (2) how do you present this to the user in a not-too-painful way.

More generally, a lot of the discussion was about the limitations of CRDTs and what can be done about them.

One thing that was generally missing from the discussion was taking a step back from low-level solutions, towards correctness. What are the important invariants of your system? How do these invariants impact the user experience? It’s important to realise that there are whole classes of invariants (which we can clearly identify) that cannot be upheld online in an AP manner. Then it’s a matter of deciding what you give up on: the invariant? AP? Online progress? (Maybe there’s an impossibility theorem here waiting to be proved.)

The limitations of the offline/local-first approach are immediately apparent in the security area. Let’s say you have a security invariant that says that a user may perform an update only if he has the proper authorisation. Suppose Alice posts a photo; Bob comments on the photo; Cindy responds to the comment; David reads the response and eats a sandwich. Concurrently, Marc removes Alice’s authorisation. Peter receives Marc’s update before Alice’s; from his perspective, Alice’s update is illegal (Marc wins) and therefore is not included in his visible state. Eventually all updates reach all replicas and converge; who wins? If Alice wins, the invariant is violated. If Marc wins, then David must un-eat his sandwich :-). If both lose, the system is not very useful.

The point of this example is that arbitration after the fact (i.e., giving up on monotonic online progress) can have sweeping consequences, not necessarily as absurd as un-eating, but equally undesirable.  Time to bring back the difference between small-scale and large-scale conflicts, I guess, but you don’t necessarily know in advance which it will be.

Nik Graf:

Interestingly, Muse and Riffle arrived at a similar DB structure by having one event log for the whole app. The big difference, though, is that Muse only persists the absolutely necessary changes, while with Riffle the goal is to have as much state in the DB as possible.

Document-based CRDTs like Automerge & Yjs are very different in that sense. I’m wondering how Muse/Riffle would handle text, but it’s not really a concern for them.

An event log makes it really straightforward to sync between clients. That said, I’m curious how to handle situations where you want to pre-load a specific “scope”, for example in Muse. They could pre-load it, but then shouldn’t mark it as synced, since older changes might not be synced yet.
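
A generic sketch of the event-log sync Nik describes (not Muse’s or Riffle’s actual schema): each client only has to remember how far into the shared log it has read, which is also what makes a partially pre-loaded “scope” awkward to mark as fully synced.

```typescript
interface LogEvent {
  seq: number;      // position in the shared, append-only log
  payload: unknown; // app-specific change description
}

class EventLogClient {
  private lastSeq = 0;
  readonly events: LogEvent[] = [];

  // Sync = "give me everything after the last sequence number I've seen".
  async sync(fetchSince: (seq: number) => Promise<LogEvent[]>): Promise<void> {
    for (const ev of await fetchSince(this.lastSeq)) {
      this.events.push(ev);
      this.lastSeq = Math.max(this.lastSeq, ev.seq);
    }
  }
}
```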

For Automerge, a lot of interesting features are in the making, e.g. history cut-off.

Really looking forward to the essay on “Upwelling”(?), the Ink & Switch UX research by Eileen.

An interesting backup strategy for local-first apps might be to have your peers back up your (encrypted) data while you back up theirs. Here the incentives around supporting & trusting a peer are aligned.

Closing

It was our great privilege and joy to host so many folks for such stimulating discussions. In particular, we appreciated seeing such a broad spectrum of people in one room engaging as peers: academics and practitioners, engineers and hackers, Berlin locals and travelers, and perhaps most of all folks new to the industry and full of enthusiasm as well as those with long careers and plenty of accomplishments. Thank you all for sharing your insights, creativity, and experience.

Although each of us brought our own ideas about what the specific shape of the future of computing might be and how best to help pursue that vision, all of us share a vision of a future where computers are better tools that serve our needs instead of the other way around.

A very big thank-you to Eileen Wagner for being our local organizer and for her help and advice running the unconference, to Todd Matthews for photography and for producing the lovely notebooks, and to our hosts at Alte Kantine.

Until next time,

Peter van Hardenberg

Lab Director