Wasm GC is a wonderful thing that is now available in all major web browsers, since slowpoke Safari/WebKit finally shipped it in December. It provides a hierarchy of heap-allocated reference types and a set of instructions to operate on them. Wasm GC enables managed memory languages to take advantage of the advanced garbage collectors inside web browser engines. It’s now possible to implement a managed memory language without having to ship a GC inside the binary. The result is smaller binaries, better performance, and better integration with the host runtime. However, Wasm GC has some serious drawbacks when compared to linear memory. I enjoy playing around with realtime graphics programming in my free time, but I was disappointed to discover that Wasm GC just isn’t a good fit for that right now. I decided to write this post because I’d like to see Wasm GC on more or less equal footing with linear memory when it comes to binary data manipulation.

Hello triangle

For starters, let's...
2 months ago


More from dthompson

Guile-websocket 0.2.0 released

I'm happy to announce that guile-websocket 0.2.0 has been released! Guile-websocket is an implementation of the WebSocket protocol, both the client and server sides, for Guile Scheme. This release introduces breaking changes that overhaul the client and server implementations in order to support non-blocking I/O and TLS encrypted connections.

source tarball: https://files.dthompson.us/guile-websocket/guile-websocket-0.2.0.tar.gz

signature: https://files.dthompson.us/guile-websocket/guile-websocket-0.2.0.tar.gz.asc

See the guile-websocket project page for more information. Bug reports, bug fixes, feature requests, and patches are welcomed.

2 months ago
Guile-Bstructs 0.1.0 released

I'm pleased to announce the very first release of guile-bstructs, version 0.1.0! This is a library I've been working on for quite some time, and after more than one rewrite and many smaller refactors I think it's finally ready to release publicly. Let's hope I'm not wrong about that!

About guile-bstructs

Guile-bstructs is a library that provides structured read/write access to binary data for Guile. A bstruct (short for “binary structure”) is a data type that encapsulates a bytevector and a byte offset which interprets that bytevector based on a specified layout. Some use cases for bstructs are:

- manipulating C structs when using the foreign function interface
- packing GPU vertex buffers when using graphics APIs such as OpenGL
- implementing data types that benefit from Guile's unboxed math optimizations, such as vectors and matrices

This library was initially inspired by guile-opengl's define-packed-struct syntax but is heavily based on "Ftypes: Structured foreign types" by Andy Keep and R. Kent Dybvig. The resulting interface is quite similar but the implementation is completely original.

This library provides a syntax-heavy interface; nearly all of the public API is syntax. This is done to ensure that bstruct types are static and well-known at compile time, resulting in efficient bytecode and minimal runtime overhead. A subset of the interface deals in raw bytevector access for accessing structured data in bytevectors directly without going through an intermediary bstruct wrapper. This low-level interface is useful for certain batch processing situations where the overhead of creating wrapper bstructs would hinder throughput.

Example

Here are some example type definitions to give you an idea of what it’s like to use guile-bstructs:

```scheme
;; Struct
(define-bstruct <vec2>
  (padded (struct (x float) (y float))))

;; Type group with a union
(define-bstruct (<mouse-move-event>
                 (struct (type uint8) (x int32) (y int32)))
                (<mouse-button-event>
                 (struct (type uint8) (button uint8) (state uint8)
                         (x int32) (y int32)))
                (<event>
                 (union (type uint8)
                        (mouse-move <mouse-move-event>)
                        (mouse-button <mouse-button-event>))))

;; Array
(define-bstruct <matrix4> (array 16 float))

;; Bit fields
(define-bstruct <date>
  (bits (year 32 s) (month 4 u) (day 5 u)))

;; Pointer
(define-bstruct (<item> (struct (type int)))
                (<chest>
                 (struct (opened? uint8) (item (* <item>)))))

;; Packed struct modifier
(define-bstruct <enemy>
  (packed (struct (type uint8) (health uint32))))

;; Endianness modifier
(define-bstruct <big-float> (endian big float))

;; Recursive type
(define-bstruct <node>
  (struct (item int) (next (* <node>))))

;; Mutually recursive type group
(define-bstruct (<forest> (struct (children (* <tree>))))
                (<tree>
                 (struct (value int) (forest (* <forest>)) (next (* <tree>)))))

;; Opaque type
(define-bstruct SDL_GPUTexture)
```

Download

Source tarball: guile-bstructs-0.1.0.tar.gz

GPG signature: guile-bstructs-0.1.0.tar.gz.asc

This release was signed with this GPG key. See the guile-bstructs project page for more information.

2 months ago
Functional reactive user interfaces with propagators

I’ve been interested in functional reactive programming (FRP) for about a decade now. I even wrote a couple of blog posts back in 2014 describing my experiments. My initial source of inspiration was Elm, the Haskell-like language for the web that once had FRP as a core part of the language. Evan Czaplicki’s Strange Loop 2013 talk really impressed me, especially that Mario demo. From there, I explored the academic literature on the subject. Ultimately, I created and then abandoned a library that focused on FRP for games. It was a neat idea, but the performance was terrible. The overhead of my kinda-sorta FRP system was part of the problem, but mostly it was my own inexperience. I didn’t know how to optimize effectively, and my implementation language, Guile, did not have as many optimization passes as it does now. Also, realtime simulations like games require much more careful use of heap allocation. I found that, overhead aside, FRP is a bad fit for things like scripting sequences of actions in a game. I don’t want to give up things like coroutines that make that easy.

I’ve learned how different layers of a program may call for different programming paradigms. Functional layers rest upon imperative foundations. Events are built on top of polling. Languages with expression trees run on machines that only understand linear sequences. You get the idea. A good general-purpose language will allow you to compose many paradigms in the same program. I’m still a big fan of functional programming, but single paradigm languages do not appeal to me.

Fast forward 10 years, and I find myself thinking about FRP again in a new context. I now work for the Spritely Institute, where we’re researching and building the next generation of secure, distributed application infrastructure. We want to demo our tech through easy-to-use web applications, which means we need to do some UI programming. So, the back burner of my brain has been mulling over the possibilities. What’s the least painful way to build web UIs? Is this FRP thing worth revisiting?

The reason FRP is so appealing to me (on paper, at least) is that it allows for writing interactive programs declaratively. With FRP, I can simply describe the relationships between the various time-varying components, and the system wires it all together for me. I’m spared from callback hell, one of the more frightening layers of hell, which forces programs to be written in a kind of continuation-passing style where timing and state bugs consume more development time as the project grows.

What about React?

In the time during and since I last experimented with FRP, a different approach to declarative UI programming has swept the web development world: React. From React, many other similar libraries emerged. On the minimalist side there are things like Mithril (a favorite of mine), and then there are bigger players like Vue. The term “reactive” has become overloaded, but in the mainstream software world it is associated with React and friends. FRP is quite different, despite sharing the declarative and reactive traits. Both help free programmers from callback hell, but they achieve their results differently.

The React model describes an application as a tree of “components”. Each component represents a subset of the complete UI element tree. For each component, there is a template function that takes some inputs and returns the new desired state of the UI. This function is called whenever an event occurs that might change the state of the UI. The template produces a data structure known as a “virtual DOM”. To realize this new state in the actual DOM, React diffs the previous tree with the new one and updates, creates, and deletes elements as necessary.

With FRP, you describe your program as an acyclic graph of nodes that contain time-varying values. The actual value of any given node is determined by a function that maps the current values of some input nodes into an output value. The system is bootstrapped by handling a UI event and updating the appropriate root node, which kicks off a cascade of updates throughout the graph. At the leaf nodes, side effects occur that realize the desired state of the application. Racket’s FrTime is one example of such a system, which is based on Greg Cooper’s 2008 PhD dissertation “Integrating Dataflow Evaluation into a Practical Higher-Order Call-by-Value Language”. In FrTime, time-varying values are called “signals”. Elm borrowed this language, too, and there’s currently a proposal to add signals to JavaScript. Research into FRP goes back quite a bit further. Notably, Conal Elliott and Paul Hudak wrote “Functional Reactive Animation” in 1997.

On jank

The scope of potential change for any given event is larger in React than in FRP. An FRP system flows data through an acyclic graph, updating only the nodes affected by the event. React requires re-evaluating the template for each component that needs refreshing and applying a diff algorithm to determine what needs changing in the currently rendered UI. The virtual DOM diffing process can be quite wasteful in terms of both memory usage and processing time, leading to jank on systems with limited resources like phones. Andy Wingo has done some interesting analyses of things like React Native and Flutter and covers the subject of jank well.

So, while I appreciate the greatly improved developer experience of React-likes (I wrote my fair share of frontend code in the jQuery days), I’m less pleased by the overhead that it pushes onto each user’s computer. React feels like an important step forward on the declarative UI trail, but it’s not the destination. FRP has the potential for less jank because side effects (the UI widget state updates) can be more precise. For example, if a web page has a text node that displays the number of times the user has clicked a mouse button, an FRP system could produce a program that would do the natural thing: register a click event handler that replaces the text node with a new one containing the updated count. We don’t need to diff the whole page, nor do we need to create a wrapper component to update a subset of the page. The scope is narrow, so we can apply smaller updates. No virtual DOM necessary.

There is, of course, overhead to maintaining the graph of time-varying values. The underlying runtime is free to use mutable state, but the user layer must take care to use pure functions and persistent, immutable data structures. This has a cost, but the per-event cost to refresh the UI feels much more reasonable when compared to React. From here on, I will stop talking about React and start exploring whether FRP might really offer a more expressive way to do declarative UI without too much overhead. But first, we need to talk about a serious problem.

FRP is acyclic

FRP is no silver bullet. As mentioned earlier, FRP graphs are typically of the acyclic flavor. This limits the set of UIs that are possible to create with FRP. Is this the cost of declarativeness?
To demonstrate the problem, consider a color picker tool that has sliders for both the red-green-blue and hue-saturation-value representations of color:

[Diagram: a propagator network for RGB and HSV color selection, from Alexey Radul's dissertation.]

In this program, updating the sliders on the RGB side should change the values of the sliders on the HSV side, and vice versa. This forms a cycle between the two sets of sliders. It’s possible to express cycles like this with event callbacks, though it’s messy and error-prone to do manually. We’d like a system built on top of event callbacks that can do the right thing without strange glitches or, worse, unbounded loops.

Propagators to the rescue

Fortunately, I didn’t create that diagram above. It’s from Alexey Radul’s 2009 PhD dissertation: “Propagation Networks: A Flexible and Expressive Substrate for Computation”. This paper dedicates a section to explaining how FRP can be built on top of a more general paradigm called propagation networks, or just propagators for short. The paper is lengthy, naturally, but it is written in an approachable style. There isn’t any terse math notation and there are plenty of code examples. As far as PhD dissertations go, this one is a real page turner! Here is a quote from section 5.5 about FrTime (with emphasis added by me):

> FrTime is built around a custom propagation infrastructure; it nicely achieves both non-recomputation and glitch avoidance, but unfortunately, the propagation system is nontrivially complicated, and specialized for the purpose of supporting functional reactivity. In particular, the FrTime system imposes the invariant that the propagation graph be acyclic, and guarantees that it will execute the propagators in topological-sort order. This simplifies the propagators themselves, but greatly complicates the runtime system, especially because it has to dynamically recompute the sort order when the structure of some portion of the graph changes (as when the predicate of a conditional changes from true to false, and the other branch must now be computed). That complexity, in turn, makes that runtime system unsuitable for other kinds of propagation, and even makes it difficult for other kinds of propagation to interoperate with it.

So, the claim is that FRP-on-propagators can remove the acyclic restriction, reduce complexity, and improve interoperability. But what are propagators? I like how the book “Software Design for Flexibility” (2021) defines them (again, with emphasis added by me):

> The propagator model is built on the idea that the basic computational elements are propagators, autonomous independent machines interconnected by shared cells through which they communicate. Each propagator machine continuously examines the cells it is connected to, and adds information to some cells based on computations it can make from information it can get from others. Cells accumulate information and propagators produce information.

Research on propagators goes back a long way (you’ll even find a form of propagators in the all-time classic “Structure and Interpretation of Computer Programs”), but it was Alexey Radul who discovered how to unify many different types of special-purpose propagation systems so that they could share a generic substrate and interoperate. Perhaps the most exciting application of the propagator model is AI, where it can be used to create “explainable AI” that keeps track of how a particular output was computed. This type of AI stands in stark contrast to the much hyped mainstream machine learning models that hoover up our planet’s precious natural resources to produce black boxes that generate impressive bullshit. But anyway!

The diagram above can also be found in section 5.5 of the dissertation. Here’s the description:

> A network for a widget for RGB and HSV color selection. Traditional functional reactive systems have qualms about the circularity, but general-purpose propagation handles it automatically.

This color picker felt like a good, achievable target for a prototype. The propagator network is small and there are only a handful of UI elements, yet it will test if the FRP system is working correctly.

The prototype

I read Alexey Radul’s dissertation, and then read chapter 7 of Software Design for Flexibility, which is all about propagators. Both use Scheme as the implementation language. The latter makes no mention of FRP, and while the former explains how FRP can be implemented in terms of propagators, there is (understandably) no code included. So, I had to implement it for myself to test my understanding. Unsurprisingly, I had misunderstood many things along the way, and my iterations of broken code let me know that. Implementation is the best teacher.

After much code fiddling, I was able to create a working prototype of the color picker. This prototype is written in Scheme and uses Hoot to compile it to WebAssembly so I can embed it right here in my blog. Sure beats a screenshot or video! This prototype contains a minimal propagator implementation that is sufficient to bootstrap a similarly minimal FRP implementation.

Propagator implementation

Let’s take a look at the code and see how it all works, starting with propagation. There are two essential data types: cells and propagators. Cells accumulate information about a value, ranging from nothing, to some form of partial information, to a complete value. The concept of partial information is Alexey Radul’s major contribution to the propagator model. It is through partial information structures that general-purpose propagators can be used to implement logic programming, probabilistic programming, type inference, and FRP, among others. I’m going to keep things as simple as possible in this post (it’s a big enough infodump already), but do read the propagator literature if phrases like “dependency directed backtracking” and “truth maintenance system” sound like a good time to you.

Cells start out knowing nothing, so we need a special, unique value to represent nothing:

```scheme
(define-record-type <nothing>
  (make-nothing)
  %nothing?)

(define nothing (make-nothing))

(define (nothing? x)
  (eq? x nothing))
```

Any unique (as in eq?) object would do, such as (list 'nothing), but I chose to use a record type because I like the way it prints.

In addition to nothing, the propagator model also has a notion of contradictions. If one source of information says there are four lights, but another says there are five, then we have a contradiction. Propagation networks do not fall apart in the presence of contradictory information. There’s a data type that captures information about them, and they can be resolved in a context-specific manner. I mention contradictions only for the sake of completeness, as a general-purpose propagator system needs to handle them. This prototype does not create any contradictions, so I won’t mention them again.
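That said, the code below does call a contradiction? predicate, and its definition is not included in the post's excerpts. Purely so the later snippets read as self-contained, here is a minimal placeholder; this is my assumption, not the post's actual code:

```scheme
;; Assumed placeholder for the contradiction type referenced by the
;; code below.  A real implementation would carry information about
;; which sources of information conflict; this sketch only provides
;; the contradiction? predicate the later snippets rely on.
(define-record-type <contradiction>
  (make-contradiction)
  contradiction?)
```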
Now we can define a cell type:

```scheme
(define-record-type <cell>
  (%make-cell relations neighbors content strongest
              equivalent? merge find-strongest handle-contradiction)
  cell?
  (relations cell-relations)
  (neighbors cell-neighbors set-cell-neighbors!)
  (content cell-content set-cell-content!)
  (strongest cell-strongest set-cell-strongest!)
  ;; Dispatch table:
  (equivalent? cell-equivalent?)
  (merge cell-merge)
  (find-strongest cell-find-strongest)
  (handle-contradiction cell-handle-contradiction))
```

The details of how a cell does things like merge old information with new information are left intentionally unanswered at this level of abstraction. Different systems built on propagators will want to handle things in different ways. In the propagator literature, you’ll see generic procedures used extensively for this purpose. For the sake of simplicity, I use a dispatch table instead. It would be easy to layer generic merge on top later, if we wanted.

The constructor for cells sets the default contents to nothing:

```scheme
(define default-equivalent? equal?)

;; But what about partial information???
(define (default-merge old new) new)

(define (default-find-strongest content) content)

(define (default-handle-contradiction cell) (values))

(define* (make-cell name #:key
                    (equivalent? default-equivalent?)
                    (merge default-merge)
                    (find-strongest default-find-strongest)
                    (handle-contradiction default-handle-contradiction))
  (let ((cell (%make-cell (make-relations name) '() nothing nothing
                          equivalent? merge find-strongest
                          handle-contradiction)))
    (add-child! (current-parent) cell)
    cell))
```

The default procedures used for the dispatch table are either no-ops or trivial. The default merge doesn’t merge at all; it just clobbers the old with the new. It’s up to the layers on top to provide more useful implementations.

Cells can have neighbors (which will be propagators):

```scheme
(define (add-cell-neighbor! cell neighbor)
  (set-cell-neighbors! cell (lset-adjoin eq? (cell-neighbors cell) neighbor)))
```

Since cells accumulate information, there are procedures for adding new content and finding the current strongest value contained within:

```scheme
(define (add-cell-content! cell new)
  (match cell
    (($ <cell> _ neighbors content strongest
        equivalent? merge find-strongest handle-contradiction)
     (let ((content* (merge content new)))
       (set-cell-content! cell content*)
       (let ((strongest* (find-strongest content*)))
         (cond
          ;; New strongest value is equivalent to the old one.  No need
          ;; to alert propagators.
          ((equivalent? strongest strongest*)
           (set-cell-strongest! cell strongest*))
          ;; Uh oh, a contradiction!  Call handler.
          ((contradiction? strongest*)
           (set-cell-strongest! cell strongest*)
           (handle-contradiction cell))
          ;; Strongest value has changed.  Alert the propagators!
          (else
           (set-cell-strongest! cell strongest*)
           (for-each alert-propagator! neighbors))))))))
```

Next up is the propagator type. Propagators can be activated to create information using content stored in cells and store their results in some other cells, forming a graph. Data flow is not forced to be directional. Cycles are not only permitted, but very common in practice. So, propagators keep track of both their input and output cells:

```scheme
(define-record-type <propagator>
  (%make-propagator relations inputs outputs activate)
  propagator?
  (relations propagator-relations)
  (inputs propagator-inputs)
  (outputs propagator-outputs)
  (activate propagator-activate))
```

Propagators can be alerted to schedule themselves to be re-evaluated:

```scheme
(define (alert-propagator! propagator)
  (queue-task! (propagator-activate propagator)))
```

The constructor for propagators adds the new propagator as a neighbor to all input cells and then calls alert-propagator! to bootstrap it:

```scheme
(define (make-propagator name inputs outputs activate)
  (let ((propagator (%make-propagator (make-relations name)
                                      inputs outputs activate)))
    (add-child! (current-parent) propagator)
    (for-each (lambda (cell)
                (add-cell-neighbor! cell propagator))
              inputs)
    (alert-propagator! propagator)
    propagator))
```

There are two main classes of propagators that will be used: primitive propagators and constraint propagators. Primitive propagators are directional; they apply a function to the values of some input cells and write the result to an output cell:

```scheme
(define (unusable-value? x)
  (or (nothing? x) (contradiction? x)))

(define (primitive-propagator name f)
  (match-lambda*
    ((inputs ... output)
     (define (activate)
       (let ((args (map cell-strongest inputs)))
         (unless (any unusable-value? args)
           (add-cell-content! output (apply f args)))))
     (make-propagator name inputs (list output) activate))))
```

We can use primitive-propagator to lift standard Scheme procedures into the realm of propagators. Here’s how we’d make and use an addition propagator:

```scheme
(define p:+ (primitive-propagator 'p:+ +))

(define a (make-cell 'a))
(define b (make-cell 'b))
(define c (make-cell 'c))

(p:+ a b c)
(add-cell-content! a 1)
(add-cell-content! b 3)
;; After the scheduler runs…
(cell-strongest c) ;; => 4
```

It is from these primitive propagators that we can build more complicated, compound propagators. Compound propagators compose primitive propagators (or other compound propagators) and lazily construct their section of the network upon first activation:

```scheme
(define (compound-propagator name inputs outputs build)
  (let ((built? #f))
    (define (maybe-build)
      (unless (or built?
                  (and (not (null? inputs))
                       (every unusable-value?
                              (map cell-strongest inputs))))
        (parameterize ((current-parent (propagator-relations propagator)))
          (build)
          (set! built? #t))))
    (define propagator (make-propagator name inputs outputs maybe-build))
    propagator))
```

By this point you may be wondering what all the references to current-parent are about. They are for tracking the parent/child relationships of the cells and propagators in the network. This could be helpful for things like visualizing the network, but we aren’t going to do anything with it today. I’ve omitted all of the other relation code for this reason.

Constraint propagators are compound propagators whose inputs and outputs are the same, which results in bidirectional propagation:

```scheme
(define (constraint-propagator name cells build)
  (compound-propagator name cells cells build))
```

Using primitive propagators for addition and subtraction, we can build a constraint propagator for the equation a + b = c:

```scheme
(define p:+ (primitive-propagator 'p:+ +))
(define p:- (primitive-propagator 'p:- -))

(define (c:sum a b c)
  (define (build)
    (p:+ a b c)
    (p:- c a b)
    (p:- c b a))
  (constraint-propagator 'sum (list a b c) build))

(define a (make-cell 'a))
(define b (make-cell 'b))
(define c (make-cell 'c))

(c:sum a b c)
(add-cell-content! a 1)
(add-cell-content! c 4)
;; After the scheduler runs…
(cell-strongest b) ;; => 3
```

With a constraint, we can populate any two cells and the propagation system will figure out the value of the third. Pretty cool! This is a good enough propagation system for the FRP prototype.
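One loose end: the examples say “after the scheduler runs…”, but the scheduler itself, including queue-task!, is never shown in the post. As a rough sketch of the shape it must have, assuming a simple single-threaded design, a FIFO queue of thunks drained until the network quiesces would do:

```scheme
;; Assumed sketch of the task scheduler the propagator code relies on;
;; the prototype's actual scheduler is not shown in the post.
;; alert-propagator! enqueues activation thunks with queue-task!, and
;; run-tasks! drains the queue, running tasks that may themselves
;; enqueue more tasks, until the network settles.
(define %task-queue '())

(define (queue-task! thunk)
  (set! %task-queue (append %task-queue (list thunk))))

(define (run-tasks!)
  (unless (null? %task-queue)
    (let ((task (car %task-queue)))
      (set! %task-queue (cdr %task-queue))
      (task)
      (run-tasks!))))
```

In the browser, something equivalent presumably runs after each UI event so the network settles before the next repaint.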
FRP implementation

If you’re familiar with terminology from other FRP systems like “signals” and “behaviors” then set that knowledge aside for now. We need some new nouns. But first, a bit about the problems that need solving in order to implement FRP on top of general-purpose propagators:

- The propagator model does not enforce any ordering of when propagators will be re-activated in relation to each other. If we’re not careful, something in the network could compute a value using a mix of fresh and stale data, resulting in a momentary “glitch” that could be noticeable in the UI.
- The presence of cycles introduces a crisis of identity. It’s not sufficient for every time-varying value to be treated as its own self. In the color picker, the RGB values and the HSV values are two representations of the same thing. We need a new notion of identity to capture this and prevent unnecessary churn and glitches in the network.

For starters, we will create a “reactive identifier” (needs a better name) data type that serves two purposes:

- To create shared identity between different information sources that are logically part of the same thing
- To create localized timestamps for values associated with this identity

```scheme
(define-record-type <reactive-id>
  (%make-reactive-id name clock)
  reactive-id?
  (name reactive-id-name)
  (clock reactive-id-clock set-reactive-id-clock!))

(define (make-reactive-id name)
  (%make-reactive-id name 0))

(define (reactive-id-tick! id)
  (let ((t (1+ (reactive-id-clock id))))
    (set-reactive-id-clock! id t)
    `((,id . ,t))))
```

Giving each logical identity in the FRP system its own clock eliminates the need for a global clock, avoiding a potentially troublesome source of centralization. This is kind of like how Lamport timestamps are used in distributed systems.

We also need a data type that captures the value of something at a particular point in time. Since the cruel march of time is unceasing, these are ephemeral values:

```scheme
(define-record-type <ephemeral>
  (make-ephemeral value timestamps)
  ephemeral?
  (value ephemeral-value)
  ;; Association list mapping identity -> time
  (timestamps ephemeral-timestamps))
```

Ephemerals are boxes that contain some arbitrary data with a bunch of shipping labels slapped onto the outside explaining from whence they came. This is the partial information structure that our propagators will manipulate and add to cells. Here’s how to say “the mouse position was (1, 2) at time 3” in code:

```scheme
(define mouse-position (make-reactive-id 'mouse-position))

(make-ephemeral #(1 2) `((,mouse-position . 3)))
```

We need to perform a few operations with ephemerals:

- Test if one ephemeral is fresher (more recent) than another
- Merge two ephemerals when cell content is added
- Compose the timestamps from several inputs to form an aggregate timestamp for an output, but only if all timestamps for each distinct identifier match (no mixing of fresh and stale values)

```scheme
(define (ephemeral-fresher? a b)
  (let ((b-inputs (ephemeral-timestamps b)))
    (let lp ((a-inputs (ephemeral-timestamps a)))
      (match a-inputs
        (() #t)
        (((key . a-time) . rest)
         (match (assq-ref b-inputs key)
           (#f (lp rest))
           (b-time
            (and (> a-time b-time)
                 (lp rest)))))))))

(define (merge-ephemerals old new)
  (cond
   ((nothing? old) new)
   ((nothing? new) old)
   ((ephemeral-fresher? new old) new)
   (else old)))

(define (merge-ephemeral-timestamps ephemerals)
  (define (adjoin-keys alist keys)
    (fold (lambda (key+value keys)
            (match key+value
              ((key . _) (lset-adjoin eq? keys key))))
          keys
          alist))
  (define (check-timestamps id)
    (let lp ((ephemerals ephemerals) (t #f))
      (match ephemerals
        (() t)
        ((($ <ephemeral> _ timestamps) . rest)
         (match (assq-ref timestamps id)
           ;; No timestamp for this id in this ephemeral.  Continue.
           (#f (lp rest t))
           (t*
            (if t
                ;; If timestamps don't match then we have a mix of
                ;; fresh and stale values, so return #f.  Otherwise,
                ;; continue.
                (and (= t t*) (lp rest t))
                ;; Initialize timestamp and continue.
                (lp rest t*))))))))
  ;; Build a set of all reactive identifiers across all ephemerals.
  (let ((ids (fold (lambda (ephemeral ids)
                     (adjoin-keys (ephemeral-timestamps ephemeral) ids))
                   '()
                   ephemerals)))
    (let lp ((ids ids) (timestamps '()))
      (match ids
        (() timestamps)
        ((id . rest)
         ;; Check for consistent timestamps.  If they are consistent
         ;; then add it to the alist and continue.  Otherwise, return
         ;; #f.
         (let ((t (check-timestamps id)))
           (and t (lp rest (cons (cons id t) timestamps)))))))))
```

Example usage:

```scheme
(define e1 (make-ephemeral #(3 4) `((,mouse-position . 4))))
(define e2 (make-ephemeral #(1 2) `((,mouse-position . 3))))

(ephemeral-fresher? e1 e2) ;; => #t
(merge-ephemerals e1 e2) ;; => e1
(merge-ephemeral-timestamps (list e1 e2)) ;; => #f

(define (mouse-max-coordinate e)
  (match e
    (($ <ephemeral> #(x y) timestamps)
     (make-ephemeral (max x y) timestamps))))

(define e3 (mouse-max-coordinate e1))

(merge-ephemeral-timestamps (list e1 e3)) ;; => ((mouse-position . 4))
```

Now we can build a primitive propagator constructor that lifts ordinary Scheme procedures so that they work with ephemerals:

```scheme
(define (ephemeral-wrap proc)
  (match-lambda*
    ((and ephemerals (($ <ephemeral> args) ...))
     (match (merge-ephemeral-timestamps ephemerals)
       (#f nothing)
       (timestamps (make-ephemeral (apply proc args) timestamps))))))

(define* (primitive-reactive-propagator name proc)
  (primitive-propagator name (ephemeral-wrap proc)))
```

Reactive UI implementation

Now we need some propagators that live at the edges of our network, know how to interact with the DOM, and can do the following:

- Sync a DOM element attribute with the value of a cell
- Create a two-way data binding between an element’s value attribute and a cell
- Render the markup in a cell and place it into the DOM tree in the right location

Syncing an element attribute is a directional operation and the easiest to implement:

```scheme
(define (r:attribute input elem attr)
  (let ((attr (symbol->string attr)))
    (define (activate)
      (match (cell-strongest input)
        (($ <ephemeral> val)
         (attribute-set! elem attr (obj->string val)))
        ;; Ignore unusable values.
        (_ (values))))
    (make-propagator 'r:attribute (list input) '() activate)))
```

Two-way data binding is more involved. First, a new data type is used to capture the necessary information:

```scheme
(define-record-type <binding>
  (make-binding id cell default group)
  binding?
  (id binding-id)
  (cell binding-cell)
  (default binding-default)
  (group binding-group))

(define* (binding id cell #:key (default nothing) (group '()))
  (make-binding id cell default group))
```

And then a reactive propagator applies that binding to a specific DOM element:

```scheme
(define* (r:binding binding elem)
  (match binding
    (($ <binding> id cell default group)
     (define (update new)
       (unless (nothing? new)
         (let ((timestamp (reactive-id-tick! id)))
           (add-cell-content! cell (make-ephemeral new timestamp))
           ;; Freshen timestamps for all cells in the same group.
           (for-each (lambda (other)
                       (unless (eq? other cell)
                         (match (cell-strongest other)
                           (($ <ephemeral> val)
                            (add-cell-content! other (make-ephemeral val timestamp)))
                           (_ #f))))
                     group))))
     ;; Sync the element's value with the cell's value.
     (define (activate)
       (match (cell-strongest cell)
         (($ <ephemeral> val)
          (set-value! elem (obj->string val)))
         (_ (values))))
     ;; Initialize element value with the default value.
     (update default)
     ;; Sync the cell's value with the element's value.
     (add-event-listener! elem "input"
                          (procedure->external
                           (lambda (event)
                             (update (string->obj (value elem))))))
     (make-propagator 'r:binding (list cell) '() activate))))
```

A simple method for rendering to the DOM is to replace some element with a newly created element based on the current ephemeral value of a cell:

```scheme
(define (r:dom input elem)
  (define (activate)
    (match (cell-strongest input)
      (($ <ephemeral> exp)
       (let ((new (sxml->dom exp)))
         (replace-with! elem new)
         (set! elem new)))
      (_ (values))))
  (make-propagator 'dom (list input) '() activate))
```

The sxml->dom procedure deserves some further explanation. To create a subtree of new elements, we have two options:

1. Use something like the innerHTML element property to insert arbitrary HTML as a string and let the browser parse and build the elements.
2. Use a Scheme data structure that we can iterate over, making the relevant document.createTextNode, document.createElement, etc. calls.

Option 1 might be a shortcut and would be fine for a quick prototype, but it would mean that to generate the HTML we’d be stuck using raw strings. While string-based templating is commonplace, we can certainly do better in Scheme. Option 2 is actually not too much work, and we get to use Lisp’s universal templating system, quasiquote, to write our markup. Thankfully, SXML already exists for this purpose. SXML is an alternative XML syntax that uses s-expressions. Since Scheme uses s-expression syntax, it’s a natural fit. Example:

```scheme
'(article
  (h1 "SXML is neat")
  (img (@ (src "cool.jpg") (alt "cool image")))
  (p "Yeah, SXML is " (em "pretty neato!")))
```

Instead of using it to generate HTML text, we’ll instead generate a tree of DOM elements. Furthermore, because we’re now in full control of how the element tree is built, we can build in support for reactive propagators! Check it out:

```scheme
(define (sxml->dom exp)
  (match exp
    ;; The simple case: a string representing a text node.
    ((? string? str)
     (make-text-node str))
    ((? number? num)
     (make-text-node (number->string num)))
    ;; A cell containing SXML (or nothing)
    ((? cell? cell)
     (let ((elem (cell->elem cell)))
       (r:dom cell elem)
       elem))
    ;; An element tree.  The first item is the HTML tag.
    (((? symbol? tag) . body)
     ;; Create a new element with the given tag.
     (let ((elem (make-element (symbol->string tag))))
       (define (add-children children)
         ;; Recursively call sxml->dom for each child node and
         ;; append it to elem.
         (for-each (lambda (child)
                     (append-child! elem (sxml->dom child)))
                   children))
       (match body
         ((('@ . attrs) . children)
          (for-each (lambda (attr)
                      (match attr
                        (((? symbol? name) (? string? val))
                         (attribute-set! elem (symbol->string name) val))
                        (((? symbol? name) (? number? val))
                         (attribute-set! elem (symbol->string name)
                                         (number->string val)))
                        (((? symbol? name) (? cell? cell))
                         (r:attribute cell elem name))
                        ;; The value attribute is special and can be
                        ;; used to set up a 2-way data binding.
                        (('value (? binding? binding))
                         (r:binding binding elem))))
                    attrs)
          (add-children children))
         (children (add-children children)))
       elem))))
```

Notice the calls to r:dom, r:attribute, and r:binding. A cell can be used in either the context of an element (r:dom) or an attribute (r:attribute). The value attribute gets the additional superpower of r:binding. We will make use of this when it is time to render the color picker UI!

Color picker implementation

Alright, I’ve spent a lot of time explaining how I built a minimal propagator and FRP system from first principles on top of Hoot-flavored Scheme.
Let’s finally write the dang color picker! First we need some data types to represent RGB and HSV colors:

```scheme
(define-record-type <rgb-color>
  (rgb-color r g b)
  rgb-color?
  (r rgb-color-r)
  (g rgb-color-g)
  (b rgb-color-b))

(define-record-type <hsv-color>
  (hsv-color h s v)
  hsv-color?
  (h hsv-color-h)
  (s hsv-color-s)
  (v hsv-color-v))
```

And procedures to convert RGB to HSV and vice versa:

```scheme
(define (rgb->hsv rgb)
  (match rgb
    (($ <rgb-color> r g b)
     (let* ((cmax (max r g b))
            (cmin (min r g b))
            (delta (- cmax cmin)))
       (hsv-color (cond
                   ((= delta 0.0) 0.0)
                   ((= cmax r)
                    (let ((h (* 60.0 (fmod (/ (- g b) delta) 6.0))))
                      (if (< h 0.0) (+ h 360.0) h)))
                   ((= cmax g)
                    (* 60.0 (+ (/ (- b r) delta) 2.0)))
                   ((= cmax b)
                    (* 60.0 (+ (/ (- r g) delta) 4.0))))
                  (if (= cmax 0.0) 0.0 (/ delta cmax))
                  cmax)))))

(define (hsv->rgb hsv)
  (match hsv
    (($ <hsv-color> h s v)
     (let* ((h' (/ h 60.0))
            (c (* v s))
            (x (* c (- 1.0 (abs (- (fmod h' 2.0) 1.0)))))
            (m (- v c)))
       (define-values (r' g' b')
         (cond
          ((<= 0.0 h 60.0) (values c x 0.0))
          ((<= h 120.0) (values x c 0.0))
          ((<= h 180.0) (values 0.0 c x))
          ((<= h 240.0) (values 0.0 x c))
          ((<= h 300.0) (values x 0.0 c))
          ((<= h 360.0) (values c 0.0 x))))
       (rgb-color (+ r' m) (+ g' m) (+ b' m))))))
```

We also need some procedures to convert colors into the hexadecimal representations we’re used to seeing:

```scheme
(define (uniform->byte x)
  (inexact->exact (round (* x 255.0))))

(define (rgb->int rgb)
  (match rgb
    (($ <rgb-color> r g b)
     (+ (* (uniform->byte r) (ash 1 16))
        (* (uniform->byte g) (ash 1 8))
        (uniform->byte b)))))

(define (rgb->hex-string rgb)
  (list->string
   (cons #\#
         (let lp ((i 0) (n (rgb->int rgb)) (out '()))
           (if (= i 6)
               out
               (lp (1+ i)
                   (ash n -4)
                   (cons (integer->char
                          (let ((digit (logand n 15)))
                            (+ (if (< digit 10)
                                   (char->integer #\0)
                                   (- (char->integer #\a) 10))
                               digit)))
                         out)))))))

(define (rgb-hex->style hex)
  (string-append "background-color: " hex ";"))
```

Now we can lift the color API into primitive reactive propagator constructors:

```scheme
(define-syntax-rule (define-primitive-reactive-propagator name proc)
  (define name (primitive-reactive-propagator 'name proc)))

(define-primitive-reactive-propagator r:rgb-color rgb-color)
(define-primitive-reactive-propagator r:rgb-color-r rgb-color-r)
(define-primitive-reactive-propagator r:rgb-color-g rgb-color-g)
(define-primitive-reactive-propagator r:rgb-color-b rgb-color-b)
(define-primitive-reactive-propagator r:hsv-color hsv-color)
(define-primitive-reactive-propagator r:hsv-color-h hsv-color-h)
(define-primitive-reactive-propagator r:hsv-color-s hsv-color-s)
(define-primitive-reactive-propagator r:hsv-color-v hsv-color-v)
(define-primitive-reactive-propagator r:rgb->hsv rgb->hsv)
(define-primitive-reactive-propagator r:hsv->rgb hsv->rgb)
(define-primitive-reactive-propagator r:rgb->hex-string rgb->hex-string)
(define-primitive-reactive-propagator r:rgb-hex->style rgb-hex->style)
```

From those primitive propagators, we can build the necessary constraint propagators:

```scheme
(define (r:components<->rgb r g b rgb)
  (define (build)
    (r:rgb-color r g b rgb)
    (r:rgb-color-r rgb r)
    (r:rgb-color-g rgb g)
    (r:rgb-color-b rgb b))
  (constraint-propagator 'r:components<->rgb (list r g b rgb) build))

(define (r:components<->hsv h s v hsv)
  (define (build)
    (r:hsv-color h s v hsv)
    (r:hsv-color-h hsv h)
    (r:hsv-color-s hsv s)
    (r:hsv-color-v hsv v))
  (constraint-propagator 'r:components<->hsv (list h s v hsv) build))

(define (r:rgb<->hsv rgb hsv)
  (define (build)
    (r:rgb->hsv rgb hsv)
    (r:hsv->rgb hsv rgb))
  (constraint-propagator 'r:rgb<->hsv (list rgb hsv) build))
```

At long last, we are ready to define the UI!
Here it is:

```scheme
(define (render exp)
  (append-child! (document-body) (sxml->dom exp)))

(define* (slider id name min max default #:optional (step "any"))
  `(div (@ (class "slider"))
        (label (@ (for ,id)) ,name)
        (input (@ (id ,id)
                  (type "range")
                  (min ,min)
                  (max ,max)
                  (step ,step)
                  (value ,default)))))

(define (uslider id name default) ; [0,1] slider
  (slider id name 0 1 default))

(define-syntax-rule (with-cells (name ...) body . body*)
  (let ((name (make-cell 'name #:merge merge-ephemerals)) ...)
    body . body*))

(with-cells (r g b rgb h s v hsv hex style)
  (define color (make-reactive-id 'color))
  (define rgb-group (list r g b))
  (define hsv-group (list h s v))
  (r:components<->rgb r g b rgb)
  (r:components<->hsv h s v hsv)
  (r:rgb<->hsv rgb hsv)
  (r:rgb->hex-string rgb hex)
  (r:rgb-hex->style hex style)
  (render
   `(div
     (h1 "Color Picker")
     (div (@ (class "preview"))
          (div (@ (class "color-block") (style ,style)))
          (div (@ (class "hex")) ,hex))
     (fieldset
      (legend "RGB")
      ,(uslider "red" "Red"
                (binding color r #:default 1.0 #:group rgb-group))
      ,(uslider "green" "Green"
                (binding color g #:default 0.0 #:group rgb-group))
      ,(uslider "blue" "Blue"
                (binding color b #:default 1.0 #:group rgb-group)))
     (fieldset
      (legend "HSV")
      ,(slider "hue" "Hue" 0 360 (binding color h #:group hsv-group))
      ,(uslider "saturation" "Saturation" (binding color s #:group hsv-group))
      ,(uslider "value" "Value" (binding color v #:group hsv-group))))))
```

Each color channel (R, G, B, H, S, and V) has a cell which is bound to a slider (<input type="range">). All six sliders are identified as color, so adjusting any of them increments color’s timestamp. The R, G, and B sliders form one input group, and the H, S, and V sliders form another. By grouping the related sliders together, whenever one of the sliders is moved, all members of the group will have their ephemeral value refreshed with the latest timestamp. This behavior is crucial because otherwise the r:components<->rgb and r:components<->hsv propagators would see that one color channel has a fresher value than the other two and do nothing. Since the underlying propagator infrastructure does not enforce activation order, reactive propagators must wait until their inputs reach a consistent state where the timestamps for a given reactive identifier are all the same.

With this setup, changing a slider on the RGB side will cause a new color value to propagate over to the HSV side. Because the relationship is cyclical, the HSV side will then attempt to propagate an equivalent color value back to the RGB side! This could be bad news, but since the current RGB value is equally fresh (same timestamp), the propagation stops right there. Redundant work is minimized and an unbounded loop is avoided.

And that’s it! Phew! Complete source code can be found here.

Reflections

I think the results of this prototype are promising. I’d like to try building some larger demos to see what new problems arise. Since propagation networks include cycles, they cannot be garbage collected until there are no references to any part of the network from the outside. Is this acceptable? I didn’t optimize, either. A more serious implementation would want to do things like use case-lambda for all n-ary procedures to avoid consing an argument list in the common cases of 1, 2, 3, etc. arguments. There is also a need for a more pleasing domain-specific language, using Scheme’s macro system, for describing FRP graphs.

Alexey Radul’s dissertation was published in 2009.
Has anyone made an FRP system based on propagators since then that’s used in real software? I don’t know of anything, but it’s a big information superhighway out there.

Update: Since publishing, I have learned about the following:

- Holograph: A visual editor for propagator networks! Amazing!
- Scoped Propagators: A WIP propagator system with some notable differences from “traditional” propagators.

I wish I had read Alexey Radul's dissertation 10 years ago when I was first exploring FRP. It would have saved me a lot of time spent running into problems that had already been solved and that I was not equipped to solve on my own. I have even talked to Gerald Sussman (a key figure in propagator research) in person about the propagator model. That conversation was focused on AI, though, and I didn’t realize that propagators could also be used for FRP. It wasn’t until more recently that friend and colleague Christine Lemmer-Webber, who was present for the aforementioned conversation with Sussman, told me about it. Christine has her own research project for propagators. There are so many interesting things to learn out there, but I am also so tired. Better late than never, I guess!

Anyway, if you made it this far then I hope you have enjoyed reading about propagators and FRP. ’Til next time!

8 months ago
Lisp: Icing or Cake?

The Spring Lisp Game Jam 2024 ended one week ago. 48 games were submitted, a new record for the jam! This past week has been a time for participants to play and rate each other’s games. As I explored the entries, I noticed two distinct meta-patterns in how people approached building games with Lisp. I think these patterns apply more broadly to all applications of Lisp. Let’s talk about these patterns in some detail, with examples. But first! Here’s the breakdown of the jam submissions by language:

```
lang     entries  % (rounded)
----     -------  -----------
guile    15       31
fennel   10       21
clojure  5        10
cl       5        10
racket   4        8
elisp    4        8
s7       3        6
kawa     1        2
owl      1        2
```

I haven’t rolled up the various Schemes (Guile, Racket, S7, Kawa) into a general scheme category because Scheme is so minimally specified and they are all very distinct implementations for different purposes, not to mention that Racket has a lot more going on than just Scheme.

For the first time ever, Guile came out on top with the most submissions! There’s a very specific reason for this outcome. 11 out of the 15 Guile games were built for the web with Hoot, a Scheme-to-WebAssembly compiler that I work on at the Spritely Institute. 2 of those 11 were official Spritely projects. We put out a call for people to try making games with Hoot before the jam started, and a lot of people took us up on it! Very cool! The next most popular language, which is typically the most popular language in these jams, is Fennel. Fennel is a Lisp that compiles to Lua. It’s very cool, too! Also of note, at least to me as a Schemer, is that three games used S7. Hmm, there might be something relevant to this post going on there.

The patterns I’m about to talk about could sort of be framed as “The Guile Way vs. The Fennel Way”, but I don’t want to do that. It's not an “us vs. them” thing. It’s wonderful that there are so many flavors of Lisp these days that anyone can find a great implementation that suits their preferences. Not only that, but many of these implementations can be used to make games that anyone can easily play in their web browser! That was not the case several years ago. Incredible! I want to preface the rest of this post by saying that both patterns are valid, and while I prefer one over the other, that is not to say that the other is inferior. I'll also show how these patterns can be thought of as two ends of a spectrum and how, in the end, compromises must be made. Okay, let’s get into it!

Lisp as icing

The icing pattern is using Lisp as a “scripting” language on top of a cake that is made from C, Rust, and other static languages. The typical way to do this is by embedding a Lisp interpreter into the larger program. If you’re most interested in writing the high-level parts of an application in Lisp, then this pattern is the fastest way to get there. All you need is a suitable interpreter/compiler and a way to add the necessary hooks into your application. Since the program is mainly C/Rust/whatever, you can then use emscripten to compile it to WebAssembly and deploy to the web. Instant gratification, but strongly tied to static languages and their toolchains.

S7 is an example of an embeddable Scheme. Guile is also used for extending C programs, though typically that involves dynamically linking to libguile rather than embedding the interpreter into the program’s executable. Fennel takes a different approach, recognizing that there are many existing applications that are already extensible through Lua, and provides a lispy language that compiles to Lua.
Lisp as cake

The cake pattern is using Lisp to implement as much of the software stack as possible. It’s Lisp all the way down... sorta. Rather than embedding Lisp into a non-Lisp program, the cake pattern does the inverse: the majority of the program is written in Lisp. When necessary, shared libraries can be called via a foreign function interface, but this should be kept to a minimum. This approach takes longer to yield results. Time is spent implementing missing libraries for your Lisp of choice and writing wrappers around the C shared libraries you can’t avoid using. Web deployment gets trickier, too, since the project is not so easily emscriptenable. (You may recognize this as the classic embed vs. extend debate. You’re correct! I'm just adding my own thoughts and applying it specifically to some real-world Lisp projects.)

I mentioned Guile as an option for icing, but Guile really shines best as cake. The initial vision for Guile was to Emacsify other programs by adding a Scheme interpreter to them. These days, the best practice is to write your program in Scheme to begin with. Common Lisp is probably the best example, though. Implementations like SBCL have good C FFIs and can compile efficient native executables, minimizing the desire to use some C for performance reasons.

Case studies

Let’s take a look at some of the languages and libraries used for the Lisp Game Jam and evaluate their icing/cake-ness.

Fennel + love2d

love2d has been a popular choice for solo or small team game development for many years. It is a C++ program that embeds a Lua interpreter, which means it’s a perfect target for Fennel. Most Linux distributions package love2d, so it’s easy to run .love files natively. Additionally, thanks to emscripten, love2d games can be deployed to the web. Thus most Fennel games use love2d. ./soko.bin and Gnomic Vengeance are two games that use this stack.

Fennel + love2d is a perfect example of Lisp as icing. Fennel sits at the very top of the stack, but there’s not really a path to spread Lisp into the layers below. It is also the most successful Lisp game development stack to date.

S7 + raylib

This stack is new to me, but two games used it this time around: GhostHop and Life Predictor. (You really gotta play GhostHop, btw. It’s a great little puzzle game and it is playable on mobile devices.) Raylib is a C library with bindings for many higher-level languages that has become quite popular in recent years. S7 is also implemented in C and is easily embeddable. This makes the combination easy to deploy on the web with emscripten. S7 + raylib is another example of Lisp as icing. I’m curious to see if this stack becomes more popular in future jams.

Guile + Chickadee

This is the stack that I helped build. Chickadee is a game library for Guile that implements almost all of the interesting parts in Scheme, including rendering. Two games were built with Chickadee in the most recent jam: Turbo Racer 3000 and Bloatrunner.

Guile + Chickadee is an example of Lisp as cake. Chickadee wraps some C libraries for low-level tasks such as loading images, audio, and fonts, but it is written in pure Scheme. All the matrix and vector math is in Scheme. Chickadee comes with a set of rendering primitives comparable to love2d and raylib, but they’re all implemented in Scheme. I’ve even made progress on rendering vector graphics with Scheme, whereas most other Lisp game libraries use a C library such as nanosvg.
Chickadee has pushed the limits of Guile’s compiler and virtual machine, and Guile has been improved as a result. But it’s the long road. Chickadee is mostly developed by me, alone, in my very limited spare time. It is taking a long time to reach feature parity with more popular game development libraries, but it works quite well for what it is.

Hoot + HTML5 canvas

I also helped build this one. Hoot is a Scheme-to-WebAssembly compiler. Rather than compile the Guile VM (written in C) to Wasm using emscripten, Hoot implements a complete Wasm toolchain and a new backend for Guile’s compiler that emits Wasm directly. Hoot is written entirely in Scheme. Unlike C programs compiled with emscripten that target Wasm 1.0 with linear memory, Hoot targets Wasm 2.0 with GC managed heap types. This gives Hoot a significant advantage: Hoot binaries do not ship a garbage collector and thus are much smaller than Lisp runtimes compiled via emscripten. The Wasm binary for my game weighs in at < 2 MiB, whereas the love2d game I checked had a nearly 6 MiB love.wasm. Hoot programs can also easily interoperate with JavaScript. Scheme objects can easily be passed to JavaScript, and vice versa, as they are managed in the same heap. With all of the browser APIs just a Wasm import away, an obvious choice for games was the built-in HTML5 canvas API for easy 2D rendering. 11 games used Hoot in the jam, including (shameless plug) Cirkoban and Lambda Dungeon.

Hoot + HTML5 canvas is mostly dense cake with a bit of icing. On one hand, it took a year and significant funding to boot Hoot. We said “no” to emscripten, built our own toolchain, and extended Guile’s compiler. It's Lisp all the way until you hit the browser runtime! We even have a Wasm interpreter that runs on the Guile VM! Hoot rules! It was a risk but it paid off. On the other hand, the canvas API is very high-level. The more cake thing to do would be to use Hoot’s JS FFI to call WebGL and/or WebGPU. Indeed, this is the plan for the future! Wasm GC needs some improvements to make this feasible, but my personal goal is to get Chickadee ported to Hoot. I want Chickadee games to be easy to play natively and in browsers, just like love2d games.

The cake/icing spectrum

I must acknowledge the limitations of the cake approach. We’re not living in a world of Lisp machines, but a world of glorified PDP-11s. Even the tallest of Lisp cakes sits atop an even larger cake made mostly of C. All modern Lisp systems bottom out at some point. Emacs rests on a C core. Guile’s VM is written in C. Hoot runs on mammoth JavaScript engines written in C++ like V8. Games on Hoot currently render with HTML5 canvas rather than WebGL/WebGPU. Good luck using OpenGL without libGL; Chickadee uses guile-opengl, which uses the C FFI to call into libGL. Then there’s libpng, FreeType, and more. Who the heck wants to rewrite all this in Lisp? Who even has the resources? Does spending all this time taking the scenic route matter at all, or are we just deluding ourselves because we have fun writing Lisp code?

I think it does matter. Every piece of the stack that can be reclaimed from the likes of C is a small victory. The parts written in Lisp are much easier to hack on, and some of those things become live hackable while our programs are running. They are also memory safe, typically, thanks to GC managed runtimes. Fewer FFI calls means less overhead from traversing the Lisp/C boundary and more safety. As more of the stack becomes Lisp, it starts looking less like icing and more like cake.
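As an aside, here is roughly what that “a Wasm import away” boundary looks like in Hoot. This sketch is based on Hoot's define-foreign form; the specific import names and type signatures below are my assumptions rather than code from any of the games mentioned:

```scheme
;; Assumed sketch of binding HTML5 canvas calls with Hoot's JS FFI.
;; Each define-foreign declares a JS import and its Wasm type
;; signature; the host page is responsible for supplying the actual
;; JavaScript functions under these import names.
(define-foreign set-fill-color!
  "canvas" "setFillColor"
  (ref extern) (ref string) -> none)

(define-foreign fill-rect
  "canvas" "fillRect"
  (ref extern) f64 f64 f64 f64 -> none)

;; Paint a colored square on a canvas 2D context obtained from JS.
(define (draw-square! ctx color x y size)
  (set-fill-color! ctx color)
  (fill-rect ctx x y size size))
```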
Moving beyond games, we can look to the Guix project as a great example of just how tasty the cake can get. Guix took the functional packaging model from the Nix project and made a fresh implementation, replacing the Nix language with Guile. Why? For code staging, code sharing, and improved hackability. Guix also uses an init system written in Guile rather than systemd. Why? For code staging, code sharing, and improved hackability. These are real advantages that make the trade-off of not using the industry-standard thing worth it. I’ve been using Guix since the early days, and back then it was easy to make the argument that Guix was just reinventing wheels for no reason. But now, over 10 years later, the insistence on maximizing the usage of Lisp has been key to the success of the project. As a user, once you learn the Guix idioms and a bit of Guile, you unlock extraordinary power to craft your OS to your liking. It’s the closest thing you can get to a Lisp machine on modern hardware. The cake approach paid off for Guix, and it could pay off for other projects, too.

If Common Lisp is more your thing, and even if it isn’t, you’ll be amazed by the Trial game engine and how much of it is implemented in Common Lisp rather than wrapping C libraries. There’s also projects like Pre-Scheme that give me hope that one day the layers below the managed GC runtime can be implemented in Lisp. Pre-Scheme was developed and successfully used for Scheme 48, and I am looking forward to a modern revival of it thanks to an NLnet grant.

I'm a cake boy

That’s right, I said it: I’m a cake boy. I want to see projects continue to push the boundaries of what Lisp can do. When it comes to the Lisp Game Jam, what excites me most are not the games themselves, but the small advances made to reclaim another little slice of the cake from stale, dry C. I intend to keep pushing the limits for Guile game development with my Chickadee project. It’s not a piece of cake to bake a lispy cake, and the way is often hazy, but I know we can’t be lazy and just do the cooking by the book. Rewrite it in Rust? No way! Rewrite it in Lisp!

10 months ago 127 votes

More in programming

syntax highlighting with tree-sitter

I have added syntax highlighting to my blog using tree-sitter. Here are some notes about what I learned, with some complaining.

Contents: static site generator · markdown ingestion · highlighting · incompatible?! · highlight names · class names · styling code · results · future work · frontmatter · templates · feed style · highlight quality

static site generator

I moved my blog to my own web site a few years ago. It is produced using a scruffy Rust program that converts a bunch of Markdown files to HTML using pulldown-cmark, and produces complete pages from Handlebars templates. Why did I write another static site generator? Well, partly as an exercise when learning Rust. Partly, since I wrote my own page templates, I’m not going to benefit from a library of existing templates. On the contrary, it’s harder to create new templates that work with a general-purpose SSG than to write my own simpler site-specific SSG. It’s miserable to write programs in template languages. My SSG can keep the logic in the templates to a minimum, and do all the fiddly stuff in Rust. (Which is not very fiddly, because my site doesn’t have complicated navigation – compared to the multilevel menus on www.dns.cam.ac.uk, for instance.)

markdown ingestion

There are a few things to do to each Markdown file: split off and deserialize the YAML frontmatter; find the <cut> or <toc> marker that indicates the end of the teaser / where the table of contents should be inserted; and augment headings with self-linking anchors (which are also used by the ToC). Before this work I was using regexes to do all these jobs, because that allowed me to treat pulldown-cmark as a black box: Markdown in, HTML out. But for syntax highlighting I had to be able to find fenced code blocks, so it was time to put some code into the pipeline between pulldown-cmark’s parser and renderer. And if I’m using a proper parser I can get rid of a few regexes: after some hacking, now only the YAML frontmatter is handled with a regex. Sub-heading linkification and ToC construction are fiddly, and more complicated than they were before. But they are also less buggy: markup in headings actually works now! Compared to the ToC, it’s fairly simple to detect code blocks and pass them through a highlighter; a sketch of that step follows below. You can look at my Markdown munger here. (I am not very happy with the way it uses state, but it works.)
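Roughly, that interception step has the following shape. (A minimal sketch, not the actual munger code: it assumes pulldown-cmark’s 0.9-era event types – newer releases moved Event::End to a separate TagEnd type – and a hypothetical highlight helper standing in for the tree-sitter wrapper.)

use pulldown_cmark::{CodeBlockKind, Event, Tag};

// Filter a pulldown-cmark event stream, swallowing fenced code blocks
// and re-emitting each one as pre-rendered HTML. `highlight` is a
// hypothetical (language, code) -> HTML helper.
fn hilite_code_blocks<'a>(
    events: impl Iterator<Item = Event<'a>> + 'a,
    highlight: impl Fn(&str, &str) -> String + 'a,
) -> impl Iterator<Item = Event<'a>> + 'a {
    let mut lang: Option<String> = None; // fence info, if inside a block
    let mut code = String::new(); // accumulated contents of the block

    events.filter_map(move |event| match event {
        Event::Start(Tag::CodeBlock(CodeBlockKind::Fenced(info))) => {
            lang = Some(info.to_string());
            None // swallow; we emit our own HTML at the end of the block
        }
        Event::Text(text) if lang.is_some() => {
            code.push_str(&text); // collect the code inside the fence
            None
        }
        Event::End(Tag::CodeBlock(_)) if lang.is_some() => {
            let html = highlight(&lang.take().unwrap(), &code);
            code.clear();
            Some(Event::Html(html.into())) // replace the whole block
        }
        other => Some(other), // everything else passes through untouched
    })
}

The filtered iterator then feeds into pulldown_cmark::html::push_html exactly as before; the renderer never sees the original code block.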
highlighting

As well as the tree-sitter-highlight documentation, I used femark as an example implementation. I encountered a few problems.

incompatible?!

I could not get the latest tree-sitter-highlight to work as described in its documentation. I thought the current tree-sitter crates were incompatible with each other! For a while I downgraded to an earlier version, but eventually I solved the problem. Where the docs say,

let javascript_language = tree_sitter_javascript::language();

They should say:

let javascript_language = tree_sitter::Language::new(
    tree_sitter_javascript::LANGUAGE
);

highlight names

I was offended that tree-sitter-highlight seems to expect me to hardcode a list of highlight names, without explaining where they come from or what they mean. I was doubly offended that there’s an array of STANDARD_CAPTURE_NAMES but it isn’t exported, and doesn’t match the list in the docs. You mean I have to copy and paste it? Which one?! There’s some discussion of highlight names in the tree-sitter manual’s “syntax highlighting” chapter, but that is aimed at people who are writing a tree-sitter grammar, not at people who are using one.

Eventually I worked out that tree_sitter_javascript::HIGHLIGHT_QUERY in the tree-sitter-highlight example corresponds to the contents of a highlights.scm file. Each @name in highlights.scm is a highlight name that I might be interested in. In principle I guess different tree-sitter grammars should use similar highlight names in their highlights.scm files? (Only to a limited extent, it turns out.) I decided the obviously correct list of highlight names is the list of every name defined in the HIGHLIGHT_QUERY. The query is just a string, so I can throw a regex at it and build an array of the matches. This should make the highlighter produce <span> wrappers for as many tokens as possible in my code, which might be more than necessary – but I don’t have to style them all.

class names

The tree-sitter-highlight crate comes with a lightly-documented HtmlRenderer, which does much of the job fairly straightforwardly. The fun part is the attribute_callback. When the HtmlRenderer is wrapping a token, it emits the start of a <span, then expects the callback to append whatever HTML attributes it thinks might be appropriate. Uh, I guess I want a class="..." here? Well, the highlight names work a little bit like class names: they have dot-separated parts, which tree-sitter-highlight can match more or less specifically. (However, I am telling it to match all of them.) So I decided to turn each dot-separated highlight name into a space-separated class attribute. The nice thing about this is that my Rust code doesn’t need to know anything about a language’s tree-sitter grammar or its highlight query. The grammar’s highlight names become CSS class names automatically.
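Concretely, the name scraping plus the class-attribute callback might look something like this. (Again a sketch, not the wrapper linked below: it assumes roughly the 0.23-era tree-sitter / tree-sitter-highlight APIs – the ones with Language::new and an appending attribute callback – plus the regex crate; highlight_to_html and its error handling are illustrative.)

use regex::Regex;
use tree_sitter_highlight::{HighlightConfiguration, Highlighter, HtmlRenderer};

// Scrape every @name capture out of a highlights.scm query string.
// These become both the recognized highlight names and the CSS classes.
fn highlight_names(query: &str) -> Vec<String> {
    let re = Regex::new(r"@([\w.]+)").unwrap();
    let mut names: Vec<String> =
        re.captures_iter(query).map(|c| c[1].to_string()).collect();
    names.sort();
    names.dedup();
    names
}

fn highlight_to_html(code: &str) -> Result<String, Box<dyn std::error::Error>> {
    let query = tree_sitter_javascript::HIGHLIGHT_QUERY;
    let names = highlight_names(query);

    let mut config = HighlightConfiguration::new(
        tree_sitter::Language::new(tree_sitter_javascript::LANGUAGE),
        "javascript",
        query,
        "", // no injections query
        "", // no locals query
    )?;
    config.configure(&names); // match every name we scraped

    let mut highlighter = Highlighter::new();
    let events = highlighter.highlight(&config, code.as_bytes(), None, |_| None)?;

    // "function.builtin" becomes class="hilite function builtin"
    let attrs: Vec<String> = names
        .iter()
        .map(|n| format!("class=\"hilite {}\"", n.replace('.', " ")))
        .collect();

    let mut renderer = HtmlRenderer::new();
    renderer.render(events, code.as_bytes(), &|highlight, out| {
        // `highlight.0` indexes into the names passed to configure()
        out.extend_from_slice(attrs[highlight.0].as_bytes());
    })?;
    Ok(renderer.lines().collect())
}

Adding a language then means scraping another grammar’s HIGHLIGHT_QUERY; there are no hand-maintained per-language name lists.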
styling code

Now I can write some simple CSS to add some colours to my code. I can make type names green,

code span.hilite.type { color: #aca; }

If I decide builtin types should be cyan like keywords, I can write,

code span.hilite.type.builtin,
code span.hilite.keyword { color: #9cc; }

results

You can look at my tree-sitter-highlight wrapper here. Getting it to work required a bit more creativity than I would have preferred, but it turned out OK. I can add support for a new language by adding a crate to Cargo.toml and a couple of lines to hilite.rs – and maybe some CSS, if I have not yet covered its highlight names. (Like I just did to highlight the CSS above!)

future work

While writing this blog post I found myself complaining about things that I really ought to fix instead.

frontmatter

I might simplify the per-page source format knob so that I can use pulldown-cmark’s support for YAML frontmatter instead of a separate regex pass. This change will be easier if I can treat the html pages as Markdown without mangling them too much (is Markdown even supposed to be idempotent?). More tricky are a couple of special-case pages whose source is Handlebars instead of Markdown.

templates

I’m not entirely happy with Handlebars. It’s a more powerful language than I need – I chose Handlebars instead of Mustache because Handlebars works neatly with serde. But it has a dynamic type system that makes the templates more error-prone than I would like. Perhaps I can find a more static Rust template system that takes advantage of the close coupling between my templates and the data structure that describes the web site. However, I like my templates to be primarily HTML with a sprinkling of insertions, not something weird that’s neither HTML nor Rust.

feed style

There’s no CSS in my Atom feed, so code blocks there will remain unstyled. I don’t know if feed readers accept <style> tags or if it has to be inline styles. (That would make a mess of my neat setup!)

highlight quality

I’m not entirely satisfied with the level of detail and consistency provided by the tree-sitter language grammars and highlight queries. For instance, in the CSS above, the class names and property names have the same colour because the CSS highlights.scm gives them the same highlight name. The C grammar is good at identifying variables, but the Rust grammar is not. Oh well, I guess it’s good enough for now. At least it doesn’t involve Javascript.

21 hours ago 3 votes
How to simplify complex decisions by cleaving the facts

Simplify complex decisions by separating upsides from downsides, investing in upsides, vetoing with downsides, and using an appropriate decision framework.

23 hours ago 2 votes
Adventures in pixel space
3 hours ago 2 votes
The Year on Linux

I've been running Linux, Neovim, and Framework for a year now, but it easily feels like a decade or more. That's the funny thing about habits: they can be so hard to break, but once you do, they're also easily forgotten. That's how it feels having left the Apple realm after two decades inside the walled garden. It was hard for the first couple of weeks, but since then, it’s rarely crossed my mind.

Humans are rigid in the short term, but flexible in the long term. Blessed are the few who can retain the grit to push through that early mental resistance and reach new maxima. That is something that gets harder with age. I can feel it. It takes more of me now to wipe a mental slate clean and start over. To go back to being a beginner. But the reward for learning something new is as satisfying as ever.

That difficulty is also why I've tried to be modest with the advocacy. I don't know if most developers are better off on Linux. I mean, I believe they are, at some utopian level, especially if they work for the web, using open source tooling. But I don't know if they are better off as humans with limited will or capacity for change. Of course, it's fair to say that one simply doesn't want to change. Either because one remains a fan of Apple, is in dire need of the edge MacBooks still retain on efficiency and battery, or is simply content inside the ecosystem. There are plenty of reasons why someone might not want to change. It's not just about rigidity.

Besides, it's a dead end trying to convince anyone of an alternative with the sharp end of a religious argument. That kind of crusading just seeds resentment and stubbornness. I know that all too well. What I've found to work much better is planting seeds and showing off your plowshare. Let whatever curiosity blooms find its own way towards your blue sky. The mimetic engine of persuasion runs much cleaner anyway.

And for me, it's primarily about my personal computing workbench, regardless of what the world does or doesn't. It was the same with finding Ruby. It's great when others come along for the ride, but I'd be happy taking the trip solo too.

So consider this a postcard from a year into the Linux, Neovim, and Framework journey. The sun is still shining, the wind is in my hair, and the smile on my lips hasn't been this big since the earliest days of OS X.

yesterday 2 votes
Monki Gras 2025: What I’ve Learned by Building to Last

Yesterday I gave a talk at Monki Gras 2025. This year, the theme is Sustaining Software Development Craft, and here’s the description from the conference website:

The big question we want to explore is – how can we keep doing the work we do, when it sustains us, provides meaning and purpose, and sometimes pays the bills? We’re in a period of profound change, technically, politically, socially, economically, which has huge implications for us as practitioners, the makers and doers, but also for the culture at large.

I gave a talk about the first decade of my career, which I’ve spent working on projects that are designed to last. I’m pleased with my talk, and I got a lot of nice comments. Monki Gras is always a pleasure to attend and speak at – it’s such a lovely, friendly vibe, and the organisers James Governor and Jessica West do a great job of making it a nice day. When I left yesterday, I felt warm and fuzzy and appreciated. I also have a front-row photo of me speaking, courtesy of my dear friend Eriol Fox. Naturally, I chose my outfit to match my slides (and this blog post!).

Key points

How do you create something that lasts?
You can’t predict the future, but there are patterns in what lasts.
People skills sustain a career more than technical skills.
Long-lasting systems cannot grow without bound; they need weeding.

Links/recommended reading

Sibyl Schaefer presented a paper, Energy, Digital Preservation, and the Climate, at iPres 2024, which is about how digital preservation needs to change in anticipation of the climate crisis. This was a major inspiration for this talk.

Simon Willison gave a talk, Coping strategies for the serial project hoarder, at DjangoCon US in 2022, which is another inspiration for me. I’m not as prolific as Simon, but I do see parallels between his approach and what I remember of Metaswitch.

Most of the photos in the talk come from the Flickr Commons, a collection of historical photographs from over 100 international cultural heritage organisations. You can learn more about the Commons, browse the photos, and see who’s involved using the Commons Explorer https://commons.flickr.org/. (Which I helped to build!)

Slides and notes

Photo: dry stone wall building in South Wales. Taken by Wikimedia Commons user TR001, used under CC BY‑SA 3.0.

[Make introductory remarks; name and pronouns; mention slides on my website]

I’ve been a software developer for ten years, and I’ve spent my career working on projects that are designed to last – first telecoms and networking, now cultural heritage – so when I heard this year’s theme “sustaining craft”, I thought about creating things that last a long time. The key question I want to address in this talk is: how do you create something that lasts? I want to share a few thoughts I’ve had from working on decade- and century-scale projects.

Part of this is about how we sustain ourselves as software developers, as the individuals who create software, especially with the skill threat of AI and the shifting landscape of funding software. I also want to go broader, and talk about how we sustain the craft, the skill, the projects. Let’s go through my career, and see what we can learn.

Photo: women working at a Bell System telephone switchboard. From the U.S. National Archives, no known copyright restrictions.

My first software developer job was at a company called Metaswitch. Not a household name – they made telecoms equipment, and you’d probably have heard of their customers.
They sold equipment to carriers like AT&T, Vodafone, and O2, who’d use that equipment to sell you telephone service. Telecoms infrastructure is designed to last a long time. I spent most of my time at Metaswitch working with BGP, a routing protocol designed on a pair of napkins in 1989.

BGP is sometimes known as the "two-napkin protocol", because of the two napkins on which Kirk Lougheed and Yakov Rekhter wrote the original design. From the Computer History Museum.

These are those napkins. This design is basically still the backbone of the Internet. A lot of the building blocks of the telephone network and the Internet are fundamentally the same today as when they were created. I was working in a codebase that had been actively developed for most of my life, and was expected to outlast me. This was my first job, so I didn’t really appreciate it at the time, but Metaswitch did a lot of stuff designed to keep that codebase going, to sustain it into the future. Let’s talk about a few of them.

Photo: a programmer testing electronic equipment. From the San Diego Air & Space Museum Archives, no known copyright restrictions.

Metaswitch was very careful about adopting new technologies. Most of their code was written in C, a little C++, and Rust was being adopted very slowly. They didn’t add new technology quickly. Anything they added, they had to support for a long time – so they wanted to pick technologies that weren’t a flash in the pan. I learnt about something called “the Lindy effect” – this is the idea that any technology is about halfway through its expected life. An open-source library that’s been developed for decades? That’ll probably be around a while longer. A brand-new JavaScript framework? That’s a riskier long-term bet. The Lindy effect is about how software that’s been around a long time has already proven its staying power.

And talking of AI specifically – I’ve been waiting for things to settle. There’s so much churn and change in this space; if I’d learnt a tool six months ago, most of that would be obsolete today. I don’t hate AI – I love that people are trying all these new tools – but I’m tired, and learning new things is exhausting. I’m waiting for things to calm down before really diving deep on these tools.

Metaswitch was very cautious about third-party code, and they didn’t have much of it. Again, anything they used would have to be supported for a long time – will that third-party code, that open-source project, stick around? They preferred to take the short-term hit of writing their own code, but then having complete control over it. To give you some idea of how seriously they took this: every third-party dependency had to be reviewed and vetted by lawyers before it could be added to the codebase. Imagine doing that for a modern Node.js project!

They had a lot of safety nets: manual and automated testing, a dedicated QA team, lots of checks and reviews. These were large codebases which had to be reliable. Long-lived systems can’t afford to “move fast and break things”. This was a lot of extra work, but it meant more stability, less churn, and not much risk of outside influences breaking things. This isn’t the only way to build software – Metaswitch is at one extreme of a spectrum – but it did seem to work.

I think this is a lesson for building software, but also in what we choose to learn as individuals. Focusing on software that’s likely to last means less churn in our careers. If you learn the fundamentals of the web today, that knowledge will still be useful in five years.
If you learn the JavaScript framework du jour? Maybe less so. How do you know what’s going to last? That’s the key question! It’s difficult, but it’s not impossible. This is my first thought for you all: you can’t predict the future, but there are patterns in what lasts.

I’ve given you some examples of coding practices that can help the longevity of a codebase; these are just a few. Maybe I have rose-tinted spectacles, but I’ve taken the lessons from Metaswitch and brought them into my current work, and I do like them. I’m careful about external dependencies, I write a lot of my own code, and I create lots of safety nets, and stuff doesn’t tend to churn so much. My code lasts because it isn’t constantly being broken by external forces.

Photo: a child in nursery school cutting a plank of wood with a saw. From the Community Archives of Belleville and Hastings County, no known copyright restrictions.

So that’s what the smart people were doing at Metaswitch. What was I doing? I joined Metaswitch when I was a young twenty-something graduate, so I knew everything. I knew software development was easy, these old fuddy-duddies were making it all far too complicated, and I was gonna waltz in and show them how it was done. And obviously, that happened. (Please imagine me reading that paragraph in a very sarcastic voice.)

I started doing the work, and it was a lot harder than I expected – who knew that software development was difficult? But I was coming from a background as a solo dev who’d only done hobby projects. I’d never worked in a team before. I didn’t know how to say that I was struggling, to ask for help. I kept making bold promises about what I could do, based on how quickly I thought I should be able to do the work – but I was making promises my skills couldn’t match. I kept missing self-imposed deadlines. You can do that once, but you can’t make it a habit. About six months before I left, my manager said to me: “Alex, you have a reputation for being unreliable”.

Photo: a boy with a pudding bowl haircut, photographed by Elinor Wiltshire, 1964. From the National Library of Ireland, no known copyright restrictions.

He was right! I had such a history of making promises that I couldn’t keep that people stopped trusting me. I didn’t get to work on interesting features or the exciting projects, because nobody trusted me to deliver. That was part of why I left that job – I’d ploughed my reputation into the ground, and I needed to reset.

Photo: the library stores at Wellcome Collection. Taken by Thomas SG Farnetti, used under CC BY‑NC 4.0.

I got that reset at Wellcome Collection, a London museum and library that some of you might know. I was working a lot with their collections, a lot of data and metadata. Wellcome Collection builds on the long tradition of libraries and archives, which go back thousands of years. Long-term thinking is in their DNA. To give you one example: there’s stuff in the archive that won’t be made public until the turn of the century. Everybody who works there today will be long gone, but they assume that those records will exist in some shape or form when that time comes, and they’re planning for those files to eventually be opened. This is century-scale thinking.

Photo: Bob Hoover. From the San Diego Air & Space Museum Archives, no known copyright restrictions.

When I started, I sat next to a guy called Chris. (I couldn’t find a good picture of him, but I feel like this photo captures his energy.) Chris was a senior archivist.
He’d been at Wellcome Collection about twenty-five years, and there were very few people – if anyone – who knew more about the archive than he did. He absolutely knew his stuff, and he could have swaggered around like he owned the place. But he didn’t. Something I was struck by, from my very first day, was how curious and humble he was. A bit of a rarity, if you work in software. He was the experienced veteran of the organisation, but he cared about what other people had to say and wanted to learn from them. Twenty-five years in, and he still wanted to learn. He was a nice guy. He was a pleasure to work with, and I think that’s a big part of why he was able to stay in that job as long as he did. We were all quite disappointed when he left for another job!

This is my second thought for you: people skills sustain a career more than technical ones. Being a pleasure to work with opens doors and creates opportunities that technical skill alone cannot. We could do another conference just on what those people skills are, but for now I just want to give you a few examples to think about.

Photo: Lt.(jg.) Harriet Ida Pickens and Ens. Frances Wills, first Negro Waves to be commissioned in the US Navy. From the U.S. National Archives, no known copyright restrictions.

Be a respectful and reliable teammate. You want to be seen as a safe pair of hands. Reliability isn’t about avoiding mistakes; it’s about managing expectations. If you’re consistently overpromising and underdelivering, people stop trusting you (which I learnt the hard way). If you want people to trust you, you have to keep your promises. Good teammates communicate early when things aren’t going to plan; they ask for help and offer it in return. Good teammates respect the work that went before. It’s tempting to dismiss it as “legacy”, but somebody worked hard on it, and it was the best they knew how to do – recognise that effort and skill, don’t dismiss it.

Listen with curiosity and intent. My colleague Chris had decades of experience, but he never acted like he knew everything. He asked thoughtful questions and genuinely wanted to learn from everyone. So many of us aren’t really listening when we’re “listening” – we’re just waiting for the next silence, where we can interject with the next thing we’ve already thought of. We aren’t responding to what other people are saying. When we listen, we get to learn, and other people feel heard – and that makes collaboration much smoother and more enjoyable.

Finally, and this is a big one: don’t give people unsolicited advice. We are very bad at this as an industry. We all have so many opinions and ideas, but sometimes, sharing isn’t caring. Feedback is only useful when somebody wants to hear it – otherwise, it feels like criticism, it feels like an attack. Saying “um, actually” when nobody asked for feedback isn’t helpful; it just puts people on the defensive. Asking whether somebody wants feedback, and what sort of feedback they want, will go a long way towards making it useful.

So again: people skills sustain a career more than technical skills. There aren’t many truly solo careers in software development – we all have to work with other people – and for many of us, that’s the joy of it! If you’re a nice person to work with, other people will want to work with you, to collaborate on projects; they’ll offer you opportunities; it opens doors. Your technical skills won’t sustain your career if you can’t work with other people.

Photo: "The Keeper", an exhibition at the New Museum in New York. Taken by Daniel Doubrovkine, used under CC BY‑NC‑SA 4.0.
When I went to Wellcome Collection, it was my first time getting up close and personal with a library and archive, and I didn’t really know how they worked. If you’d asked me, I’d have guessed they just keep … everything? And it was gently explained to me that “No, Alex, that’s hoarding.” “Your overflowing yarn stash does not count as an archive.”

Big collecting institutions are actually super picky – they have guidelines about what sort of material they collect, what’s in scope, what isn’t, and they’ll aggressively reject anything that isn’t a good match. At Wellcome Collection, their remit was “the history of health and human experience”. You have medical papers? Definitely interesting! Your dad’s old pile of car magazines? Less so.

Photo: a dumpster full of books that have been discarded. From brewbooks on Flickr, used under CC BY‑SA 2.0.

Collecting institutions also engage in the practice of “weeding” or “deaccessioning”, which is removing material, pruning the collection. For example, in lending libraries, books will be removed from the shelves if they’ve become old, damaged, or unpopular. They may be donated, or sold, or just thrown away – but whatever happens, they’re gotten rid of. That space is reclaimed for other books. Getting rid of material is a fundamental part of professional collecting, because professionals know that storing something has an ongoing cost. They know they can’t keep everything.

Photo: a box full of printed photos. From Miray Bostancı on Pexels, used under the Pexels license.

This is something I think about in my current job as well. I currently work at the Flickr Foundation, where we’re thinking about how to keep Flickr’s pictures visible for 100 years. How do we preserve social media? How do we maintain our digital legacy? When we talk to people, one thing that comes up regularly is that almost everybody has too many photos. Modern smartphones have made it so easy to snap, snap, snap, and we end up with enormous libraries of thousands of images, but we can’t find the photos we care about. We can’t find the meaningful memories. We’re collecting too much stuff. Digital photos aren’t expensive to store, but we feel the cost in other ways – the cognitive load of having to deal with so many images, of having to sift through a disorganised collection.

Photo: a wheelbarrow in a garden. From Hans Middendorp on Pexels, used under the Pexels license.

I think there’s a lesson here for the software industry. What’s the cost of all the code that we’re keeping? We construct these enormous edifices of code, but when do we turn things off? When do we delete code? We’re more focused on new code, new ideas, new features. I’m personally quite concerned by how much generative AI has focused on writing more code, and not on dealing with the code we already have. Code is text, so it’s cheap to store, but it still has a cost – more cognitive load, more maintenance, more room for bugs and vulnerabilities. We can keep all our software forever, but we shouldn’t.

Photo: Open Garbage Dump on Highway 112, North of San Sebastian. Taken by John Vachon, 1973. From the U.S. National Archives, no known copyright restrictions.

I think this is going to become a bigger issue for us. We live in an era of abundance, where we can get more computing resources at the push of a button. But that can’t last forever. What happens when our current assumptions about endless compute no longer hold?
The climate crisis – where’s all our electricity and hardware coming from? The economics of AI – who’s paying for all these GPU-intensive workloads? And politics – how many of us are dependent on cloud computing based in the US? How many of us feel as good about that as we did three months ago?

Libraries are good at making a little go a long way, at eking out their resources, at deciding what’s a good use of resources and what’s waste. Often the people who are good with money are the people who don’t have much of it, and we have a lot of money. It’s easier to make decisions about what to prune and what to keep when things are going well – it’s harder to make decisions in an emergency.

This is my third thought for you: long-lasting systems cannot grow without bound; they need weeding. It isn’t sustainable to grow forever, because eventually you get overwhelmed by the weight of everything that came before. We need to get better at writing software efficiently, at turning off things we don’t need. It’s a skill we’ve neglected. We used to be really good at it – when computers were the size of a room, programmers could eke out every last bit of performance. We can’t do that any more, but it’s so important when building something to last, and I think it’s a skill we’ll have to re-learn soon.

Photo: Val Weaver and Vera Askew running in a relay race, Brisbane, 1939. From the State Library of Queensland, no known copyright restrictions.

Weeding is a term that comes from the preservation world, so let’s stay there. People who work in digital preservation often describe it as a relay race. There is no permanent digital media, no digital parchment or stone tablets – everything we have today will be unreadable in a few decades. We’re constantly migrating from one format to another, trying to stay ahead of obsolete technology.

Software is also a bit of a relay race – there is no “write it once and you’re done”. We’re constantly upgrading, editing, improving. And that can be frustrating, but it also means we have regular opportunities to learn and improve. We have that chance to reflect, to do things better.

Photo: broken computer monitor found in the woods. By Jeff Myers on Flickr, used under CC BY‑NC 2.0.

I think we do our best reflections when computers go bust. When something goes wrong, we spring into action – we do retrospectives, root cause analysis; we work out what went wrong and how to stop it happening again. This is a great way to build software that lasts, to make it more resilient. It’s a period of intense reflection – what went wrong, and how do we stop it happening again?

What I’ve noticed is that the best systems do this sort of reflection all the time – they aren’t waiting for something to go wrong. They know that prevention is better than cure, and they embody it. They give themselves regular time to reflect, to think about what’s working and what’s not – and when we do, great stuff can happen.

Photo: statue of Astrid Lindgren. By Tobias Barz on Flickr, used under CC BY‑ND 2.0.

I want to give you one more example. As a sidebar to my day job, I’ve been writing a blog for thirteen years. It’s the longest job – asterisk – I’ve ever had. (The indie web is still cool!) A lot of what I write, especially when I was starting, was sharing bits of code: “Here’s something I wrote, here’s what it does, here’s how it works and why it’s cool.” Writing about my code has been an incredible learning experience.
You might have heard the saying: “Ask a developer to review 5 lines of code, she’ll find 5 issues; ask her to review 500 lines and she’ll say it looks good.” When I sit back and deeply read and explain short snippets of my code, I see how to do things better. I get better at programming. Writing this blog has single-handedly had the biggest impact on my skill as a programmer.

Photo: Midnight sun in Advent Bay, Spitzbergen, Norway. From the Library of Congress, no known copyright restrictions.

There are so many ways to reflect on our work, opportunities to look back and ask how we can do better – but we have to make the most of them. I think we are, in some ways, very lucky that our work isn’t set in stone, that we get to keep working on the same things, that we have the opportunity to do better.

Writing this talk has been, in some sense, a reflection on the first decade of my career, and it’s made me think about what I want the next decade to look like. In this talk, I’ve tried to distill some of those things, tried to give you some of the ideas that I want to keep, that I think will help my career and my software to last. Be careful about what you create, what you keep, and how you interact with other people. That care, that process of reflection – that is what creates things that last.

2 days ago 3 votes