More from dthompson
I'm happy to announce that guile-websocket 0.2.0 has been released! Guile-websocket is an implementation of the WebSocket protocol, both the client and server sides, for Guile Scheme. This release introduces breaking changes that overhaul the client and server implementations in order to support non-blocking I/O and TLS encrypted connections.

source tarball: https://files.dthompson.us/guile-websocket/guile-websocket-0.2.0.tar.gz
signature: https://files.dthompson.us/guile-websocket/guile-websocket-0.2.0.tar.gz.asc

See the guile-websocket project page for more information. Bug reports, bug fixes, feature requests, and patches are welcomed.
Wasm GC is a wonderful thing that is now available in all major web browsers since slowpoke Safari/WebKit finally shipped it in December. It provides a hierarchy of heap allocated reference types and a set of instructions to operate on them. Wasm GC enables managed memory languages to take advantage of the advanced garbage collectors inside web browser engines. It’s now possible to implement a managed memory language without having to ship a GC inside the binary. The result is smaller binaries, better performance, and better integration with the host runtime. However, Wasm GC has some serious drawbacks when compared to linear memory. I enjoy playing around with realtime graphics programming in my free time, but I was disappointed to discover that Wasm GC just isn’t a good fit for that right now. I decided to write this post because I’d like to see Wasm GC on more or less equal footing with linear memory when it comes to binary data manipulation. Hello triangle For starters, let's take a look at what a “hello triangle” WebGL demo looks like with Wasm GC. I’ll use Hoot, the Scheme to Wasm compiler that I work on, to build it. Below is a Scheme program that declares imports for the subset of the WebGL, HTML5 Canvas, etc. APIs that are necessary and then renders a single triangle: (use-modules (hoot ffi)) ;; Document (define-foreign get-element-by-id "document" "getElementById" (ref string) -> (ref null extern)) ;; Element (define-foreign element-width "element" "width" (ref extern) -> i32) (define-foreign element-height "element" "height" (ref extern) -> i32) ;; Canvas (define-foreign get-canvas-context "canvas" "getContext" (ref extern) (ref string) -> (ref null extern)) ;; WebGL (define GL_VERTEX_SHADER 35633) (define GL_FRAGMENT_SHADER 35632) (define GL_COMPILE_STATUS 35713) (define GL_LINK_STATUS 35714) (define GL_ARRAY_BUFFER 34962) (define GL_STATIC_DRAW 35044) (define GL_COLOR_BUFFER_BIT 16384) (define GL_TRIANGLES 4) (define GL_FLOAT 5126) (define-foreign gl-create-shader "gl" "createShader" (ref extern) i32 -> (ref extern)) (define-foreign gl-delete-shader "gl" "deleteShader" (ref extern) (ref extern) -> none) (define-foreign gl-shader-source "gl" "shaderSource" (ref extern) (ref extern) (ref string) -> none) (define-foreign gl-compile-shader "gl" "compileShader" (ref extern) (ref extern) -> none) (define-foreign gl-get-shader-parameter "gl" "getShaderParameter" (ref extern) (ref extern) i32 -> i32) (define-foreign gl-get-shader-info-log "gl" "getShaderInfoLog" (ref extern) (ref extern) -> (ref string)) (define-foreign gl-create-program "gl" "createProgram" (ref extern) -> (ref extern)) (define-foreign gl-delete-program "gl" "deleteProgram" (ref extern) (ref extern) -> none) (define-foreign gl-attach-shader "gl" "attachShader" (ref extern) (ref extern) (ref extern) -> none) (define-foreign gl-link-program "gl" "linkProgram" (ref extern) (ref extern) -> none) (define-foreign gl-use-program "gl" "useProgram" (ref extern) (ref extern) -> none) (define-foreign gl-get-program-parameter "gl" "getProgramParameter" (ref extern) (ref extern) i32 -> i32) (define-foreign gl-get-program-info-log "gl" "getProgramInfoLog" (ref extern) (ref extern) -> (ref string)) (define-foreign gl-create-buffer "gl" "createBuffer" (ref extern) -> (ref extern)) (define-foreign gl-delete-buffer "gl" "deleteBuffer" (ref extern) (ref extern) -> (ref extern)) (define-foreign gl-bind-buffer "gl" "bindBuffer" (ref extern) i32 (ref extern) -> none) (define-foreign gl-buffer-data "gl" "bufferData" (ref extern) i32 (ref 
eq) i32 -> none) (define-foreign gl-enable-vertex-attrib-array "gl" "enableVertexAttribArray" (ref extern) i32 -> none) (define-foreign gl-vertex-attrib-pointer "gl" "vertexAttribPointer" (ref extern) i32 i32 i32 i32 i32 i32 -> none) (define-foreign gl-draw-arrays "gl" "drawArrays" (ref extern) i32 i32 i32 -> none) (define-foreign gl-viewport "gl" "viewport" (ref extern) i32 i32 i32 i32 -> none) (define-foreign gl-clear-color "gl" "clearColor" (ref extern) f64 f64 f64 f64 -> none) (define-foreign gl-clear "gl" "clear" (ref extern) i32 -> none) (define (compile-shader gl type source) (let ((shader (gl-create-shader gl type))) (gl-shader-source gl shader source) (gl-compile-shader gl shader) (unless (= (gl-get-shader-parameter gl shader GL_COMPILE_STATUS) 1) (let ((info (gl-get-shader-info-log gl shader))) (gl-delete-shader gl shader) (error "shader compilation failed" info))) shader)) (define (link-shader gl vertex-shader fragment-shader) (let ((program (gl-create-program gl))) (gl-attach-shader gl program vertex-shader) (gl-attach-shader gl program fragment-shader) (gl-link-program gl program) (unless (= (gl-get-program-parameter gl program GL_LINK_STATUS) 1) (let ((info (gl-get-program-info-log gl program))) (gl-delete-program gl program) (error "program linking failed" info))) program)) ;; Setup GL context (define canvas (get-element-by-id "canvas")) (define gl (get-canvas-context canvas "webgl")) (when (external-null? gl) (error "unable to create WebGL context")) ;; Compile shader (define vertex-shader-source "attribute vec2 position; attribute vec3 color; varying vec3 fragColor; void main() { gl_Position = vec4(position, 0.0, 1.0); fragColor = color; }") (define fragment-shader-source "precision mediump float; varying vec3 fragColor; void main() { gl_FragColor = vec4(fragColor, 1); }") (define vertex-shader (compile-shader gl GL_VERTEX_SHADER vertex-shader-source)) (define fragment-shader (compile-shader gl GL_FRAGMENT_SHADER fragment-shader-source)) (define shader (link-shader gl vertex-shader fragment-shader)) ;; Create vertex buffer (define stride (* 4 5)) (define buffer (gl-create-buffer gl)) (gl-bind-buffer gl GL_ARRAY_BUFFER buffer) (gl-buffer-data gl GL_ARRAY_BUFFER #f32(-1.0 -1.0 1.0 0.0 0.0 1.0 -1.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0) GL_STATIC_DRAW) ;; Draw (gl-viewport gl 0 0 (element-width canvas) (element-height canvas)) (gl-clear gl GL_COLOR_BUFFER_BIT) (gl-use-program gl shader) (gl-enable-vertex-attrib-array gl 0) (gl-vertex-attrib-pointer gl 0 2 GL_FLOAT 0 stride 0) (gl-enable-vertex-attrib-array gl 1) (gl-vertex-attrib-pointer gl 1 3 GL_FLOAT 0 stride 8) (gl-draw-arrays gl GL_TRIANGLES 0 3) Note that in Scheme, the equivalent of a Uint8Array is a bytevector. Hoot uses a packed array, an (array i8) specifically, for the contents of a bytevector. 
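As an aside, it may help to see where the 20-byte stride and the byte offsets 0 and 8 in the attribute pointers come from. Below is a small sketch of my own (the demo itself just uses an #f32(...) literal) that packs the same interleaved vertex data into a bytevector by hand, assuming Guile's standard (rnrs bytevectors) float setters and (ice-9 match) are available:

(use-modules (rnrs bytevectors) (ice-9 match))

;; Each vertex is five 32-bit floats: x, y position followed by r, g, b color,
;; so one vertex occupies 20 bytes and the color starts at byte offset 8.
(define (pack-vertices vertices)
  (let ((bv (make-bytevector (* (length vertices) 20))))
    (let loop ((vertices vertices) (offset 0))
      (match vertices
        (() bv)
        (((x y r g b) . rest)
         (bytevector-ieee-single-native-set! bv offset x)
         (bytevector-ieee-single-native-set! bv (+ offset 4) y)
         (bytevector-ieee-single-native-set! bv (+ offset 8) r)
         (bytevector-ieee-single-native-set! bv (+ offset 12) g)
         (bytevector-ieee-single-native-set! bv (+ offset 16) b)
         (loop rest (+ offset 20)))))))

;; The same triangle as above, one (x y r g b) row per vertex:
(pack-vertices '((-1.0 -1.0 1.0 0.0 0.0)
                 ( 1.0 -1.0 0.0 1.0 0.0)
                 ( 0.0  1.0 0.0 0.0 1.0)))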
And here is the JavaScript code necessary to boot the resulting Wasm binary:

window.addEventListener("load", async () => {
  function bytevectorToUint8Array(bv) {
    let len = reflect.bytevector_length(bv);
    let array = new Uint8Array(len);
    for (let i = 0; i < len; i++) {
      array[i] = reflect.bytevector_ref(bv, i);
    }
    return array;
  }
  let mod = await SchemeModule.fetch_and_instantiate("triangle.wasm", {
    reflect_wasm_dir: 'reflect-wasm',
    user_imports: {
      document: {
        getElementById: (id) => document.getElementById(id)
      },
      element: {
        width: (elem) => elem.width,
        height: (elem) => elem.height
      },
      canvas: {
        getContext: (elem, type) => elem.getContext(type)
      },
      gl: {
        createShader: (gl, type) => gl.createShader(type),
        deleteShader: (gl, shader) => gl.deleteShader(shader),
        shaderSource: (gl, shader, source) => gl.shaderSource(shader, source),
        compileShader: (gl, shader) => gl.compileShader(shader),
        getShaderParameter: (gl, shader, param) => gl.getShaderParameter(shader, param),
        getShaderInfoLog: (gl, shader) => gl.getShaderInfoLog(shader),
        createProgram: (gl, type) => gl.createProgram(type),
        deleteProgram: (gl, program) => gl.deleteProgram(program),
        attachShader: (gl, program, shader) => gl.attachShader(program, shader),
        linkProgram: (gl, program) => gl.linkProgram(program),
        useProgram: (gl, program) => gl.useProgram(program),
        getProgramParameter: (gl, program, param) => gl.getProgramParameter(program, param),
        getProgramInfoLog: (gl, program) => gl.getProgramInfoLog(program),
        createBuffer: (gl) => gl.createBuffer(),
        deleteBuffer: (gl, buffer) => gl.deleteBuffer(buffer),
        bindBuffer: (gl, target, buffer) => gl.bindBuffer(target, buffer),
        bufferData: (gl, buffer, data, usage) => {
          let bv = new Bytevector(reflect, data);
          gl.bufferData(buffer, bytevectorToUint8Array(bv), usage);
        },
        enableVertexAttribArray: (gl, index) => gl.enableVertexAttribArray(index),
        vertexAttribPointer: (gl, index, size, type, normalized, stride, offset) => {
          gl.vertexAttribPointer(index, size, type, normalized, stride, offset);
        },
        drawArrays: (gl, mode, first, count) => gl.drawArrays(mode, first, count),
        viewport: (gl, x, y, w, h) => gl.viewport(x, y, w, h),
        clearColor: (gl, r, g, b, a) => gl.clearColor(r, g, b, a),
        clear: (gl, mask) => gl.clear(mask)
      }
    }
  });
  let reflect = await mod.reflect({ reflect_wasm_dir: 'reflect-wasm' });
  let proc = new Procedure(reflect, mod.get_export("$load").value);
  proc.call();
});

Hello problems

There are two major performance issues with this program. One is visible in the source above, the other is hidden in the language implementation.

Heap objects are opaque on the other side

Wasm GC heap objects are opaque to the host. Likewise, heap objects from the host are opaque to the Wasm guest. Thus the contents of an (array i8) object are not visible from JavaScript and the contents of a Uint8Array are not visible from Wasm. This is a good security property in the general case, but it’s a hindrance in this specific case. Let’s say we have an (array i8) full of vertex data we want to put into a WebGL buffer. To do this, we must make one JS->Wasm call for each byte in the array and store it into a Uint8Array. This is what the bytevectorToUint8Array function above is doing. Copying any significant amount of data per frame is going to tank performance. Hope you aren’t trying to stream vertex data! Contrast the previous paragraph with Wasm linear memory. A WebAssembly.Memory object can be easily accessed from JavaScript as an ArrayBuffer.
To get a blob of vertex data out of a memory object, you just need to know the byte offset and length and you’re good to go. There are many Wasm linear memory applications using WebGL successfully. Manipulating multi-byte binary data is inefficient To read a multi-byte number such as an unsigned 32-bit integer from an (array i8), you have to fetch each individual byte and combine them together. Here’s a self-contained example that uses Guile-flavored WAT format: (module (type $bytevector (array i8)) (data $init #u32(123456789)) (func (export "main") (result i32) (local $a (ref $bytevector)) (local.set $a (array.new_data $bytevector $init (i32.const 0) (i32.const 4))) (array.get_u $bytevector (local.get $a) (i32.const 0)) (i32.shl (array.get_u $bytevector (local.get $a) (i32.const 1)) (i32.const 8)) (i32.or) (i32.shl (array.get_u $bytevector (local.get $a) (i32.const 2)) (i32.const 16)) (i32.or) (i32.shl (array.get_u $bytevector (local.get $a) (i32.const 3)) (i32.const 24)) (i32.or))) By contrast, Wasm linear memory needs but a single i32.load instruction: (module (memory 1) (func (export "main") (result i32) (i32.store (i32.const 0) (i32.const 123456789)) (i32.load (i32.const 0)))) Easy peasy. Not only is it less code, it's a lot more efficient. Unsatisfying workarounds There’s no way around the multi-byte problem at the moment, but for byte access from JavaScript there are some things we could try to work with what we have been given. Spoiler alert: None of them are pleasant. Use Uint8Array from the host This approach makes all binary operations from within the Wasm binary slow since we’d have to cross the Wasm->JS bridge for each read/write. Since most of the binary data manipulation is happening in the Wasm module, this approach will just make things slower overall. Use linear memory for bytevectors This would require a little malloc/free implementation and a way to reclaim memory for GC'd bytevectors. You could register every bytevector in a FinalizationRegistry in order to be notified upon GC and free the memory. Now you have to deal with memory fragmentation. This is Wasm GC, we shouldn’t have to do any of this! Use linear memory as a scratch space This avoids crossing the Wasm/JS boundary for each byte, but still involves a byte-by-byte copy from (array i8) to linear memory within the Wasm module. So far this feels like the least worst option, but the extra copy is still going to greatly reduce throughput. Wasm GC needs some fixin' I’ve used realtime graphics as an example because it’s a use case that is very sensitive to performance issues, but this unfortunate need to copy binary data byte-by-byte is also the reason why strings are trash on Wasm GC right now. Stringref is a good proposal and the Wasm community group made a mistake by rejecting it. Anyway, there has been some discussion about both multi-byte and ArrayBuffer access on GitHub, but as far as I can tell neither issue is anywhere close to a resolution. Can these things be implemented efficiently? How can the need for direct access to packed arrays from JS be reconciled with Wasm heap object opaqueness? I hope the Wasm community group can arrive at solutions sooner than later because it will take a long time to get the proposal(s) to phase 4 and shipped in all browsers, perhaps years. It would be a shame to be effectively shut out from using WebGPU when it finally reaches stable browser releases.
I'm pleased to announce the very first release of guile-bstructs, version 0.1.0! This is a library I've been working on for quite some time and after more than one rewrite and many smaller refactors I think it's finally ready to release publicly. Let's hope I'm not wrong about that!

About guile-bstructs

Guile-bstructs is a library that provides structured read/write access to binary data for Guile. A bstruct (short for “binary structure”) is a data type that encapsulates a bytevector and a byte offset which interprets that bytevector based on a specified layout. Some use cases for bstructs are:

- manipulating C structs when using the foreign function interface
- packing GPU vertex buffers when using graphics APIs such as OpenGL
- implementing data types that benefit from Guile's unboxed math optimizations such as vectors and matrices

This library was initially inspired by guile-opengl's define-packed-struct syntax but is heavily based on "Ftypes: Structured foreign types" by Andy Keep and R. Kent Dybvig. The resulting interface is quite similar but the implementation is completely original. This library provides a syntax-heavy interface; nearly all of the public API is syntax. This is done to ensure that bstruct types are static and well-known at compile time, resulting in efficient bytecode and minimal runtime overhead. A subset of the interface deals in raw bytevector access for accessing structured data in bytevectors directly without going through an intermediary bstruct wrapper. This low-level interface is useful for certain batch processing situations where the overhead of creating wrapper bstructs would hinder throughput.

Example

Here are some example type definitions to give you an idea of what it’s like to use guile-bstructs:

;; Struct
(define-bstruct <vec2> (padded (struct (x float) (y float))))

;; Type group with a union
(define-bstruct (<mouse-move-event> (struct (type uint8) (x int32) (y int32))) (<mouse-button-event> (struct (type uint8) (button uint8) (state uint8) (x int32) (y int32))) (<event> (union (type uint8) (mouse-move <mouse-move-event>) (mouse-button <mouse-button-event>))))

;; Array
(define-bstruct <matrix4> (array 16 float))

;; Bit fields
(define-bstruct <date> (bits (year 32 s) (month 4 u) (day 5 u)))

;; Pointer
(define-bstruct (<item> (struct (type int))) (<chest> (struct (opened? uint8) (item (* <item>)))))

;; Packed struct modifier
(define-bstruct <enemy> (packed (struct (type uint8) (health uint32))))

;; Endianness modifier
(define-bstruct <big-float> (endian big float))

;; Recursive type
(define-bstruct <node> (struct (item int) (next (* <node>))))

;; Mutually recursive type group
(define-bstruct (<forest> (struct (children (* <tree>)))) (<tree> (struct (value int) (forest (* <forest>)) (next (* <tree>)))))

;; Opaque type
(define-bstruct SDL_GPUTexture)

Download

Source tarball: guile-bstructs-0.1.0.tar.gz
GPG signature: guile-bstructs-0.1.0.tar.gz.asc

This release was signed with this GPG key. See the guile-bstructs project page for more information.
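To give a feel for what the generated accessors do under the hood, here is a hand-written sketch of the <vec2> layout above using plain bytevector operations. This is my own illustration, not guile-bstructs' actual API: a bstruct field access ultimately boils down to a read or write at a fixed byte offset, which the syntax-driven interface can compute at compile time.

(use-modules (rnrs bytevectors))

;; Hand-rolled equivalent of the <vec2> layout: two 32-bit floats,
;; x at byte offset 0 and y at byte offset 4 within some bytevector.
(define (vec2-x bv offset)
  (bytevector-ieee-single-native-ref bv offset))

(define (vec2-y bv offset)
  (bytevector-ieee-single-native-ref bv (+ offset 4)))

(define (make-vec2 x y)
  (let ((bv (make-bytevector 8)))
    (bytevector-ieee-single-native-set! bv 0 x)
    (bytevector-ieee-single-native-set! bv 4 y)
    bv))

;; (vec2-y (make-vec2 1.0 2.0) 0) => 2.0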
I’ve been interested in functional reactive programming (FRP) for about a decade now. I even wrote a couple of blog posts back in 2014 describing my experiments. My initial source of inspiration was Elm, the Haskell-like language for the web that once had FRP as a core part of the language. Evan Czaplicki’s Strange Loop 2013 talk really impressed me, especially that Mario demo. From there, I explored the academic literature on the subject. Ultimately, I created and then abandoned a library that focused on FRP for games. It was a neat idea, but the performance was terrible. The overhead of my kinda-sorta FRP system was part of the problem, but mostly it was my own inexperience. I didn’t know how to optimize effectively and my implementation language, Guile, did not have as many optimization passes as it does now. Also, realtime simulations like games require much more careful use of heap allocation. I found that, overhead aside, FRP is a bad fit for things like scripting sequences of actions in a game. I don’t want to give up things like coroutines that make it easy. I’ve learned how different layers of a program may call for different programming paradigms. Functional layers rest upon imperative foundations. Events are built on top of polling. Languages with expression trees run on machines that only understand linear sequences. You get the idea. A good general-purpose language will allow you to compose many paradigms in the same program. I’m still a big fan of functional programming, but single paradigm languages do not appeal to me. Fast forward 10 years, I find myself thinking about FRP again in a new context. I now work for the Spritely Institute where we’re researching and building the next generation of secure, distributed application infrastructure. We want to demo our tech through easy-to-use web applications, which means we need to do some UI programming. So, the back burner of my brain has been mulling over the possibilities. What’s the least painful way to build web UIs? Is this FRP thing worth revisiting? The reason why FRP is so appealing to me (on paper, at least) is that it allows for writing interactive programs declaratively. With FRP, I can simply describe the relationships between the various time-varying components, and the system wires it all together for me. I’m spared from callback hell, one of the more frightening layers of hell that forces programs to be written in a kind of continuation-passing style where timing and state bugs consume more development time as the project grows. What about React? In the time during and since I last experimented with FRP, a different approach to declarative UI programming has swept the web development world: React. From React, many other similar libraries emerged. On the minimalist side there are things like Mithril (a favorite of mine), and then there are bigger players like Vue. The term “reactive” has become overloaded, but in the mainstream software world it is associated with React and friends. FRP is quite different, despite sharing the declarative and reactive traits. Both help free programmers from callback hell, but they achieve their results differently. The React model describes an application as a tree of “components”. Each component represents a subset of the complete UI element tree. For each component, there is a template function that takes some inputs and returns the new desired state of the UI. This function is called whenever an event occurs that might change the state of the UI. 
The template produces a data structure known as a “virtual DOM”. To realize this new state in the actual DOM, React diffs the previous tree with the new one and updates, creates, and deletes elements as necessary. With FRP, you describe your program as an acyclic graph of nodes that contain time-varying values. The actual value of any given node is determined by a function that maps the current values of some input nodes into an output value. The system is bootstrapped by handling a UI event and updating the appropriate root node, which kicks off a cascade of updates throughout the graph. At the leaf nodes, side-effects occur that realize the desired state of the application. Racket’s FrTime is one example of such a system, which is based on Greg Cooper’s 2008 PhD dissertation “Integrating Dataflow Evaluation into a Practical Higher-Order Call-by-Value Language”. In FrTime, time-varying values are called “signals”. Elm borrowed this language, too, and there’s currently a proposal to add signals to JavaScript. Research into FRP goes back quite a bit further. Notably, Conal Elliot and Paul Hudak wrote “Functional Reactive Animation” in 1997. On jank The scope of potential change for any given event is larger in React than FRP. An FRP system flows data through an acyclic graph, updating only the nodes affected by the event. React requires re-evaluating the template for each component that needs refreshing and applying a diff algorithm to determine what needs changing in the currently rendered UI. The virtual DOM diffing process can be quite wasteful in terms of both memory usage and processing time, leading to jank on systems with limited resources like phones. Andy Wingo has done some interesting analyses of things like React Native and Flutter and covers the subject of jank well. So, while I appreciate the greatly improved developer experience of React-likes (I wrote my fair share of frontend code in the jQuery days), I’m less pleased by the overhead that it pushes onto each user’s computer. React feels like an important step forward on the declarative UI trail, but it’s not the destination. FRP has the potential for less jank because side-effects (the UI widget state updates) can be more precise. For example, if a web page has a text node that displays the number of times the user has clicked a mouse button, an FRP system could produce a program that would do the natural thing: Register a click event handler that replaces the text node with a new one containing the updated count. We don’t need to diff the whole page, nor do we need to create a wrapper component to update a subset of the page. The scope is narrow, so we can apply smaller updates. No virtual DOM necessary. There is, of course, overhead to maintaining the graph of time-varying values. The underlying runtime is free to use mutable state, but the user layer must take care to use pure functions and persistent, immutable data structures. This has a cost, but the per-event cost to refresh the UI feels much more reasonable when compared to React. From here on, I will stop talking about React and start exploring if FRP might really offer a more expressive way to do declarative UI without too much overhead. But first, we need to talk about a serious problem. FRP is acyclic FRP is no silver bullet. As mentioned earlier, FRP graphs are typically of the acyclic flavor. This limits the set of UIs that are possible to create with FRP. Is this the cost of declarativeness? 
To demonstrate the problem, consider a color picker tool that has sliders for both the red-green-blue and hue-saturation-value representations of color: In this program, updating the sliders on the RGB side should change the values of the sliders on the HSV side, and vice versa. This forms a cycle between the two sets of sliders. It’s possible to express cycles like this with event callbacks, though it’s messy and error-prone to do manually. We’d like a system built on top of event callbacks that can do the right thing without strange glitches or worse, unbounded loops.

Propagators to the rescue

Fortunately, I didn’t create that diagram above. It’s from Alexey Radul’s 2009 PhD dissertation: “Propagation Networks: A Flexible and Expressive Substrate for Computation”. This paper dedicates a section to explaining how FRP can be built on top of a more general paradigm called propagation networks, or just propagators for short. The paper is lengthy, naturally, but it is written in an approachable style. There isn’t any terse math notation and there are plenty of code examples. As far as PhD dissertations go, this one is a real page turner! Here is a quote from section 5.5 about FrTime (with emphasis added by me):

FrTime is built around a custom propagation infrastructure; it nicely achieves both non-recomputation and glitch avoidance, but unfortunately, the propagation system is nontrivially complicated, and specialized for the purpose of supporting functional reactivity. In particular, the FrTime system imposes the invariant that the propagation graph be acyclic, and guarantees that it will execute the propagators in topological-sort order. This simplifies the propagators themselves, but greatly complicates the runtime system, especially because it has to dynamically recompute the sort order when the structure of some portion of the graph changes (as when the predicate of a conditional changes from true to false, and the other branch must now be computed). That complexity, in turn, makes that runtime system unsuitable for other kinds of propagation, and even makes it difficult for other kinds of propagation to interoperate with it.

So, the claim is that FRP-on-propagators can remove the acyclic restriction, reduce complexity, and improve interoperability. But what are propagators? I like how the book “Software Design for Flexibility” (2021) defines them (again, with emphasis added by me):

“The propagator model is built on the idea that the basic computational elements are propagators, autonomous independent machines interconnected by shared cells through which they communicate. Each propagator machine continuously examines the cells it is connected to, and adds information to some cells based on computations it can make from information it can get from others. Cells accumulate information and propagators produce information.”

Research on propagators goes back a long way (you’ll even find a form of propagators in the all-time classic “Structure and Interpretation of Computer Programs”), but it was Alexey Radul who discovered how to unify many different types of special-purpose propagation systems so that they could share a generic substrate and interoperate. Perhaps the most exciting application of the propagator model is AI, where it can be used to create “explainable AI” that keeps track of how a particular output was computed.
This type of AI stands in stark contrast to the much hyped mainstream machine learning models that hoover up our planet’s precious natural resources to produce black boxes that generate impressive bullshit. But anyway! The diagram above can also be found in section 5.5 of the dissertation. Here’s the description: “A network for a widget for RGB and HSV color selection. Traditional functional reactive systems have qualms about the circularity, but general-purpose propagation handles it automatically.” This color picker felt like a good, achievable target for a prototype. The propagator network is small and there are only a handful of UI elements, yet it will test if the FRP system is working correctly. The prototype I read Alexey Radul’s dissertation, and then read chapter 7 of Software Design for Flexibility, which is all about propagators. Both use Scheme as the implementation language. The latter makes no mention of FRP, and while the former explains how FRP can be implemented in terms of propagators, there is (understandably) no code included. So, I had to implement it for myself to test my understanding. Unsurprisingly, I had misunderstood many things along the way and my iterations of broken code let me know that. Implementation is the best teacher. After much code fiddling, I was able to create a working prototype of the color picker. Here it is below: This prototype is written in Scheme and uses Hoot to compile it to WebAssembly so I can embed it right here in my blog. Sure beats a screenshot or video! This prototype contains a minimal propagator implementation that is sufficient to bootstrap a similarly minimal FRP implementation. Propagator implementation Let’s take a look at the code and see how it all works, starting with propagation. There are two essential data types: Cells and propagators. Cells accumulate information about a value, ranging from nothing, some form of partial information, or a complete value. The concept of partial information is Alexey Radul’s major contribution to the propagator model. It is through partial information structures that general-purpose propagators can be used to implement logic programming, probabilistic programming, type inference, and FRP, among others. I’m going to keep things as simple as possible in this post (it’s a big enough infodump already), but do read the propagator literature if phrases like “dependency directed backtracking” and “truth maintenance system” sound like a good time to you. Cells start out knowing nothing, so we need a special, unique value to represent nothing: (define-record-type <nothing> (make-nothing) %nothing?) (define nothing (make-nothing)) (define (nothing? x) (eq? x nothing)) Any unique (as in eq?) object would do, such as (list ’nothing), but I chose to use a record type because I like the way it prints. In addition to nothing, the propagator model also has a notion of contradictions. If one source of information says there are four lights, but another says there are five, then we have a contradiction. Propagation networks do not fall apart in the presence of contradictory information. There’s a data type that captures information about them and they can be resolved in a context-specific manner. I mention contradictions only for the sake of completeness, as a general-purpose propagator system needs to handle them. This prototype does not create any contradictions, so I won’t mention them again. 
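For completeness, here is a sketch of what contradiction handling could look like. This is not part of the prototype (which never produces contradictions); the record type and merge strategy below are my own illustration, though the later code does assume a contradiction? predicate exists:

;; Sketch only: a record that captures disagreeing values, plus a merge
;; strategy that records the disagreement instead of clobbering the old value.
(define-record-type <contradiction>
  (make-contradiction values)
  contradiction?
  (values contradiction-values))

(define (merge-or-contradict old new)
  (cond ((nothing? old) new)
        ((nothing? new) old)
        ((equal? old new) old)
        (else (make-contradiction (list old new)))))

With something like this in place, a cell's handle-contradiction hook would receive a value it can actually inspect.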
Now we can define a cell type: (define-record-type <cell> (%make-cell relations neighbors content strongest equivalent? merge find-strongest handle-contradiction) cell? (relations cell-relations) (neighbors cell-neighbors set-cell-neighbors!) (content cell-content set-cell-content!) (strongest cell-strongest set-cell-strongest!) ;; Dispatch table: (equivalent? cell-equivalent?) (merge cell-merge) (find-strongest cell-find-strongest) (handle-contradiction cell-handle-contradiction)) The details of how a cell does things like merge old information with new information is left intentionally unanswered at this level of abstraction. Different systems built on propagators will want to handle things in different ways. In the propagator literature, you’ll see generic procedures used extensively for this purpose. For the sake of simplicity, I use a dispatch table instead. It would be easy to layer generic merge on top later, if we wanted. The constructor for cells sets the default contents to nothing: (define default-equivalent? equal?) ;; But what about partial information??? (define (default-merge old new) new) (define (default-find-strongest content) content) (define (default-handle-contradiction cell) (values)) (define* (make-cell name #:key (equivalent? default-equivalent?) (merge default-merge) (find-strongest default-find-strongest) (handle-contradiction default-handle-contradiction)) (let ((cell (%make-cell (make-relations name) '() nothing nothing equivalent? merge find-strongest handle-contradiction))) (add-child! (current-parent) cell) cell)) The default procedures used for the dispatch table are either no-ops or trivial. The default merge doesn’t merge at all, it just clobbers the old with the new. It’s up to the layers on top to provide more useful implementations. Cells can have neighbors (which will be propagators): (define (add-cell-neighbor! cell neighbor) (set-cell-neighbors! cell (lset-adjoin eq? (cell-neighbors cell) neighbor))) Since cells accumulate information, there are procedures for adding new content and finding the current strongest value contained within: (define (add-cell-content! cell new) (match cell (($ <cell> _ neighbors content strongest equivalent? merge find-strongest handle-contradiction) (let ((content* (merge content new))) (set-cell-content! cell content*) (let ((strongest* (find-strongest content*))) (cond ;; New strongest value is equivalent to the old one. No need ;; to alert propagators. ((equivalent? strongest strongest*) (set-cell-strongest! cell strongest*)) ;; Uh oh, a contradiction! Call handler. ((contradiction? strongest*) (set-cell-strongest! cell strongest*) (handle-contradiction cell)) ;; Strongest value has changed. Alert the propagators! (else (set-cell-strongest! cell strongest*) (for-each alert-propagator! neighbors)))))))) Next up is the propagator type. Propagators can be activated to create information using content stored in cells and store their results in some other cells, forming a graph. Data flow is not forced to be directional. Cycles are not only permitted, but very common in practice. So, propagators keep track of both their input and output cells: (define-record-type <propagator> (%make-propagator relations inputs outputs activate) propagator? (relations propagator-relations) (inputs propagator-inputs) (outputs propagator-outputs) (activate propagator-activate)) Propagators can be alerted to schedule themselves to be re-evaluted: (define (alert-propagator! propagator) (queue-task! 
(propagator-activate propagator))) The constructor for propagators adds the new propagator as a neighbor to all input cells and then calls alert-propagator! to bootstrap it: (define (make-propagator name inputs outputs activate) (let ((propagator (%make-propagator (make-relations name) inputs outputs activate))) (add-child! (current-parent) propagator) (for-each (lambda (cell) (add-cell-neighbor! cell propagator)) inputs) (alert-propagator! propagator) propagator)) There are two main classes of propagators that will be used: primitive propagators and constraint propagators. Primitive propagators are directional; they apply a function to the values of some input cells and write the result to an output cell: (define (unusable-value? x) (or (nothing? x) (contradiction? x))) (define (primitive-propagator name f) (match-lambda* ((inputs ... output) (define (activate) (let ((args (map cell-strongest inputs))) (unless (any unusable-value? args) (add-cell-content! output (apply f args))))) (make-propagator name inputs (list output) activate)))) We can use primitive-propagator to lift standard Scheme procedures into the realm of propagators. Here’s how we’d make and use an addition propagator: (define p:+ (primitive-propagator +)) (define a (make-cell)) (define b (make-cell)) (define c (make-cell)) (p:+ a b c) (add-cell-content! a 1) (add-cell-content! b 3) ;; After the scheduler runs… (cell-strongest c) ;; => 4 It is from these primitive propagators that we can build more complicated, compound propagators. Compound propagators compose primitive propagators (or other compound propagators) and lazily construct their section of the network upon first activation: (define (compound-propagator name inputs outputs build) (let ((built? #f)) (define (maybe-build) (unless (or built? (and (not (null? inputs)) (every unusable-value? (map cell-strongest inputs)))) (parameterize ((current-parent (propagator-relations propagator))) (build) (set! built? #t)))) (define propagator (make-propagator name inputs outputs maybe-build)) propagator)) By this point you may be wondering what all the references to current-parent are about. It is for tracking the parent/child relationships of the cells and propagators in the network. This could be helpful for things like visualizing the network, but we aren’t going to do anything with it today. I’ve omitted all of the other relation code for this reason. Constraint propagators are compound propagators whose inputs and outputs are the same, which results in bidirectional propagation: (define (constraint-propagator name cells build) (compound-propagator name cells cells build)) Using primitive propagators for addition and subtraction, we can build a constraint propagator for the equation a + b = c: (define p:+ (primitive-propagator +)) (define p:- (primitive-propagator -)) (define (c:sum a b c) (define (build) (p:+ a b c) (p:- c a b) (p:- c b a)) (constraint-propagator 'sum (list a b c) build)) (define a (make-cell)) (define b (make-cell)) (define c (make-cell)) (c:sum a b c) (add-cell-content! a 1) (add-cell-content! c 4) ;; After the scheduler runs… (cell-strongest b) ;; => 3 With a constraint, we can populate any two cells and the propagation system will figure out the value of the third. Pretty cool! This is a good enough propagation system for the FRP prototype. FRP implementation If you’re familiar with terminology from other FRP systems like “signals” and “behaviors” then set that knowledge aside for now. We need some new nouns. 
But first, a bit about the problems that need solving in order to implement FRP on top of general-purpose propagators: The propagator model does not enforce any ordering of when propagators will be re-activated in relation to each other. If we’re not careful, something in the network could compute a value using a mix of fresh and stale data, resulting in a momentary “glitch” that could be noticeable in the UI. The presence of cycles introduce a crisis of identity. It’s not sufficient for every time-varying value to be treated as its own self. In the color picker, the RGB values and the HSV values are two representations of the same thing. We need a new notion of identity to capture this and prevent unnecessary churn and glitches in the network. For starters, we will create a “reactive identifier” (needs a better name) data type that serves two purposes: To create shared identity between different information sources that are logically part of the same thing To create localized timestamps for values associated with this identity (define-record-type <reactive-id> (%make-reactive-id name clock) reactive-id? (name reactive-id-name) (clock reactive-id-clock set-reactive-id-clock!)) (define (make-reactive-id name) (%make-reactive-id name 0)) (define (reactive-id-tick! id) (let ((t (1+ (reactive-id-clock id)))) (set-reactive-id-clock! id t) `((,id . ,t)))) Giving each logical identity in the FRP system its own clock eliminates the need for a global clock, avoiding a potentially troublesome source of centralization. This is kind of like how Lamport timestamps are used in distributed systems. We also need a data type that captures the value of something at a particular point in time. Since the cruel march of time is unceasing, these are ephemeral values: (define-record-type <ephemeral> (make-ephemeral value timestamps) ephemeral? (value ephemeral-value) ;; Association list mapping identity -> time (timestamps ephemeral-timestamps)) Ephemerals are boxes that contain some arbitrary data with a bunch of shipping labels slapped onto the outside explaining from whence they came. This is the partial information structure that our propagators will manipulate and add to cells. Here’s how to say “the mouse position was (1, 2) at time 3” in code: (define mouse-position (make-reactive-id ’mouse-position)) (make-ephemeral #(1 2) `((,mouse-position . 3))) We need to perform a few operations with ephemerals: Test if one ephemeral is fresher (more recent) than another Merge two ephemerals when cell content is added Compose the timestamps from several inputs to form an aggregate timestamp for an output, but only if all timestamps for each distinct identifier match (no mixing of fresh and stale values) (define (ephemeral-fresher? a b) (let ((b-inputs (ephemeral-timestamps b))) (let lp ((a-inputs (ephemeral-timestamps a))) (match a-inputs (() #t) (((key . a-time) . rest) (match (assq-ref b-inputs key) (#f (lp rest)) (b-time (and (> a-time b-time) (lp rest))))))))) (define (merge-ephemerals old new) (cond ((nothing? old) new) ((nothing? new) old) ((ephemeral-fresher? new old) new) (else old))) (define (merge-ephemeral-timestamps ephemerals) (define (adjoin-keys alist keys) (fold (lambda (key+value keys) (match key+value ((key . _) (lset-adjoin eq? keys key)))) keys alist)) (define (check-timestamps id) (let lp ((ephemerals ephemerals) (t #f)) (match ephemerals (() t) ((($ <ephemeral> _ timestamps) . rest) (match (assq-ref timestamps id) ;; No timestamp for this id in this ephemeral. Continue. 
(#f (lp rest t)) (t* (if t ;; If timestamps don't match then we have a mix of ;; fresh and stale values, so return #f. Otherwise, ;; continue. (and (= t t*) (lp rest t)) ;; Initialize timestamp and continue. (lp rest t*)))))))) ;; Build a set of all reactive identifiers across all ephemerals. (let ((ids (fold (lambda (ephemeral ids) (adjoin-keys (ephemeral-timestamps ephemeral) ids)) '() ephemerals))) (let lp ((ids ids) (timestamps '())) (match ids (() timestamps) ((id . rest) ;; Check for consistent timestamps. If they are consistent ;; then add it to the alist and continue. Otherwise, return ;; #f. (let ((t (check-timestamps id))) (and t (lp rest (cons (cons id t) timestamps))))))))) Example usage: (define e1 (make-ephemeral #(3 4) `((,mouse-position . 4)))) (define e2 (make-ephemeral #(1 2) `((,mouse-position . 3)))) (ephemeral-fresher? e1 e2) ;; => #t (merge-ephemerals e1 e2) ;; => e1 (merge-ephemeral-timestamps (list e1 e2)) ;; => #f (define (mouse-max-coordinate e) (match e (($ <ephemeral> #(x y) timestamps) (make-ephemeral (max x y) timestamps)))) (define e3 (mouse-max-coordinate e1)) (merge-ephemeral-timestamps (list e1 e3)) ;; => ((mouse-position . 4)) Now we can build a primitive propagator constructor that lifts ordinary Scheme procedures so that they work with ephemerals: (define (ephemeral-wrap proc) (match-lambda* ((and ephemerals (($ <ephemeral> args) ...)) (match (merge-ephemeral-timestamps ephemerals) (#f nothing) (timestamps (make-ephemeral (apply proc args) timestamps)))))) (define* (primitive-reactive-propagator name proc) (primitive-propagator name (ephemeral-wrap proc))) Reactive UI implementation Now we need some propagators that live at the edges of our network that know how to interact with the DOM and can do the following: Sync a DOM element attribute with the value of a cell Create a two-way data binding between an element’s value attribute and a cell Render the markup in a cell and place it into the DOM tree in the right location Syncing an element attribute is a directional operation and the easiest to implement: (define (r:attribute input elem attr) (let ((attr (symbol->string attr))) (define (activate) (match (cell-strongest input) (($ <ephemeral> val) (attribute-set! elem attr (obj->string val))) ;; Ignore unusable values. (_ (values)))) (make-propagator 'r:attribute (list input) '() activate))) Two-way data binding is more involved. First, a new data type is used to capture the necessary information: (define-record-type <binding> (make-binding id cell default group) binding? (id binding-id) (cell binding-cell) (default binding-default) (group binding-group)) (define* (binding id cell #:key (default nothing) (group '())) (make-binding id cell default group)) And then a reactive propagator applies that binding to a specific DOM element: (define* (r:binding binding elem) (match binding (($ <binding> id cell default group) (define (update new) (unless (nothing? new) (let ((timestamp (reactive-id-tick! id))) (add-cell-content! cell (make-ephemeral new timestamp)) ;; Freshen timestamps for all cells in the same group. (for-each (lambda (other) (unless (eq? other cell) (match (cell-strongest other) (($ <ephemeral> val) (add-cell-content! other (make-ephemeral val timestamp))) (_ #f)))) group)))) ;; Sync the element's value with the cell's value. (define (activate) (match (cell-strongest cell) (($ <ephemeral> val) (set-value! elem (obj->string val))) (_ (values)))) ;; Initialize element value with the default value. 
(update default) ;; Sync the cell's value with the element's value. (add-event-listener! elem "input" (procedure->external (lambda (event) (update (string->obj (value elem)))))) (make-propagator 'r:binding (list cell) '() activate)))) A simple method for rendering to the DOM is to replace some element with a newly created element based on the current ephemeral value of a cell: (define (r:dom input elem) (define (activate) (match (cell-strongest input) (($ <ephemeral> exp) (let ((new (sxml->dom exp))) (replace-with! elem new) (set! elem new))) (_ (values)))) (make-propagator 'dom (list input) '() activate)) The sxml->dom procedure deserves some further explanation. To create a subtree of new elements, we have two options: Use something like the innerHTML element property to insert arbitrary HTML as a string and let the browser parse and build the elements. Use a Scheme data structure that we can iterate over and make the relevant document.createTextNode, document.createElement, etc. calls. Option 1 might be a shortcut and would be fine for a quick prototype, but it would mean that to generate the HTML we’d be stuck using raw strings. While string-based templating is commonplace, we can certainly do better in Scheme. Option 2 is actually not too much work and we get to use Lisp’s universal templating system, quasiquote, to write our markup. Thankfully, SXML already exists for this purpose. SXML is an alternative XML syntax that uses s-expressions. Since Scheme uses s-expression syntax, it’s a natural fit. Example: '(article (h1 "SXML is neat") (img (@ (src "cool.jpg") (alt "cool image"))) (p "Yeah, SXML is " (em "pretty neato!"))) Instead of using it to generate HTML text, we’ll instead generate a tree of DOM elements. Furthermore, because we’re now in full control of how the element tree is built, we can build in support for reactive propagators! Check it out: (define (sxml->dom exp) (match exp ;; The simple case: a string representing a text node. ((? string? str) (make-text-node str)) ((? number? num) (make-text-node (number->string num))) ;; A cell containing SXML (or nothing) ((? cell? cell) (let ((elem (cell->elem cell))) (r:dom cell elem) elem)) ;; An element tree. The first item is the HTML tag. (((? symbol? tag) . body) ;; Create a new element with the given tag. (let ((elem (make-element (symbol->string tag)))) (define (add-children children) ;; Recursively call sxml->dom for each child node and ;; append it to elem. (for-each (lambda (child) (append-child! elem (sxml->dom child))) children)) (match body ((('@ . attrs) . children) (for-each (lambda (attr) (match attr (((? symbol? name) (? string? val)) (attribute-set! elem (symbol->string name) val)) (((? symbol? name) (? number? val)) (attribute-set! elem (symbol->string name) (number->string val))) (((? symbol? name) (? cell? cell)) (r:attribute cell elem name)) ;; The value attribute is special and can be ;; used to setup a 2-way data binding. (('value (? binding? binding)) (r:binding binding elem)))) attrs) (add-children children)) (children (add-children children))) elem)))) Notice the calls to r:dom, r:attribute, and r:binding. A cell can be used in either the context of an element (r:dom) or an attribute (r:attribute). The value attribute gets the additional superpower of r:binding. We will make use of this when it is time to render the color picker UI! Color picker implementation Alright, I’ve spent a lot of time explaining how I built a minimal propagator and FRP system from first principles on top of Hoot-flavored Scheme. 
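Before moving on to the color picker itself, here is a minimal sketch of my own (not from the original prototype) showing a cell in each of the three reactive positions; the element returned by sxml->dom would still need to be attached to the document:

;; A sketch: one cell rendered as an element (r:dom), one driving an
;; attribute (r:attribute), and one bound to an input's value (r:binding).
(define label (make-cell 'label #:merge merge-ephemerals))
(define css-class (make-cell 'css-class #:merge merge-ephemerals))
(define amount (make-cell 'amount #:merge merge-ephemerals))
(define amount-id (make-reactive-id 'amount))

(sxml->dom
 `(div (@ (class ,css-class))  ; cell as attribute value -> r:attribute
       (span ,label)           ; cell as element content -> r:dom
       ;; a binding on the value attribute -> r:binding
       (input (@ (type "number")
                 (value ,(binding amount-id amount #:default 0))))))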
Let’s finally write the dang color picker! First we need some data types to represent RGB and HSV colors: (define-record-type <rgb-color> (rgb-color r g b) rgb-color? (r rgb-color-r) (g rgb-color-g) (b rgb-color-b)) (define-record-type <hsv-color> (hsv-color h s v) hsv-color? (h hsv-color-h) (s hsv-color-s) (v hsv-color-v)) And procedures to convert RGB to HSV and vice versa: (define (rgb->hsv rgb) (match rgb (($ <rgb-color> r g b) (let* ((cmax (max r g b)) (cmin (min r g b)) (delta (- cmax cmin))) (hsv-color (cond ((= delta 0.0) 0.0) ((= cmax r) (let ((h (* 60.0 (fmod (/ (- g b) delta) 6.0)))) (if (< h 0.0) (+ h 360.0) h))) ((= cmax g) (* 60.0 (+ (/ (- b r) delta) 2.0))) ((= cmax b) (* 60.0 (+ (/ (- r g) delta) 4.0)))) (if (= cmax 0.0) 0.0 (/ delta cmax)) cmax))))) (define (hsv->rgb hsv) (match hsv (($ <hsv-color> h s v) (let* ((h' (/ h 60.0)) (c (* v s)) (x (* c (- 1.0 (abs (- (fmod h' 2.0) 1.0))))) (m (- v c))) (define-values (r' g' b') (cond ((<= 0.0 h 60.0) (values c x 0.0)) ((<= h 120.0) (values x c 0.0)) ((<= h 180.0) (values 0.0 c x)) ((<= h 240.0) (values 0.0 x c)) ((<= h 300.0) (values x 0.0 c)) ((<= h 360.0) (values c 0.0 x)))) (rgb-color (+ r' m) (+ g' m) (+ b' m)))))) We also need some procedures to convert colors into the hexadecimal representations we’re used to seeing: (define (uniform->byte x) (inexact->exact (round (* x 255.0)))) (define (rgb->int rgb) (match rgb (($ <rgb-color> r g b) (+ (* (uniform->byte r) (ash 1 16)) (* (uniform->byte g) (ash 1 8)) (uniform->byte b))))) (define (rgb->hex-string rgb) (list->string (cons #\# (let lp ((i 0) (n (rgb->int rgb)) (out '())) (if (= i 6) out (lp (1+ i) (ash n -4) (cons (integer->char (let ((digit (logand n 15))) (+ (if (< digit 10) (char->integer #\0) (- (char->integer #\a) 10)) digit))) out))))))) (define (rgb-hex->style hex) (string-append "background-color: " hex ";")) Now we can lift the color API into primitive reactive propagator constructors: (define-syntax-rule (define-primitive-reactive-propagator name proc) (define name (primitive-reactive-propagator 'name proc))) (define-primitive-reactive-propagator r:rgb-color rgb-color) (define-primitive-reactive-propagator r:rgb-color-r rgb-color-r) (define-primitive-reactive-propagator r:rgb-color-g rgb-color-g) (define-primitive-reactive-propagator r:rgb-color-b rgb-color-b) (define-primitive-reactive-propagator r:hsv-color hsv-color) (define-primitive-reactive-propagator r:hsv-color-h hsv-color-h) (define-primitive-reactive-propagator r:hsv-color-s hsv-color-s) (define-primitive-reactive-propagator r:hsv-color-v hsv-color-v) (define-primitive-reactive-propagator r:rgb->hsv rgb->hsv) (define-primitive-reactive-propagator r:hsv->rgb hsv->rgb) (define-primitive-reactive-propagator r:rgb->hex-string rgb->hex-string) (define-primitive-reactive-propagator r:rgb-hex->style rgb-hex->style) From those primitive propagators, we can build the necessary constraint propagators: (define (r:components<->rgb r g b rgb) (define (build) (r:rgb-color r g b rgb) (r:rgb-color-r rgb r) (r:rgb-color-g rgb g) (r:rgb-color-b rgb b)) (constraint-propagator 'r:components<->rgb (list r g b rgb) build)) (define (r:components<->hsv h s v hsv) (define (build) (r:hsv-color h s v hsv) (r:hsv-color-h hsv h) (r:hsv-color-s hsv s) (r:hsv-color-v hsv v)) (constraint-propagator 'r:components<->hsv (list h s v hsv) build)) (define (r:rgb<->hsv rgb hsv) (define (build) (r:rgb->hsv rgb hsv) (r:hsv->rgb hsv rgb)) (constraint-propagator 'r:rgb<->hsv (list rgb hsv) build)) At long last, we are ready to define the UI! 
Here it is: (define (render exp) (append-child! (document-body) (sxml->dom exp))) (define* (slider id name min max default #:optional (step "any")) `(div (@ (class "slider")) (label (@ (for ,id)) ,name) (input (@ (id ,id) (type "range") (min ,min) (max ,max) (step ,step) (value ,default))))) (define (uslider id name default) ; [0,1] slider (slider id name 0 1 default)) (define-syntax-rule (with-cells (name ...) body . body*) (let ((name (make-cell 'name #:merge merge-ephemerals)) ...) body . body*)) (with-cells (r g b rgb h s v hsv hex style) (define color (make-reactive-id 'color)) (define rgb-group (list r g b)) (define hsv-group (list h s v)) (r:components<->rgb r g b rgb) (r:components<->hsv h s v hsv) (r:rgb<->hsv rgb hsv) (r:rgb->hex-string rgb hex) (r:rgb-hex->style hex style) (render `(div (h1 "Color Picker") (div (@ (class "preview")) (div (@ (class "color-block") (style ,style))) (div (@ (class "hex")) ,hex)) (fieldset (legend "RGB") ,(uslider "red" "Red" (binding color r #:default 1.0 #:group rgb-group)) ,(uslider "green" "Green" (binding color g #:default 0.0 #:group rgb-group)) ,(uslider "blue" "Blue" (binding color b #:default 1.0 #:group rgb-group))) (fieldset (legend "HSV") ,(slider "hue" "Hue" 0 360 (binding color h #:group hsv-group)) ,(uslider "saturation" "Saturation" (binding color s #:group hsv-group)) ,(uslider "value" "Value" (binding color v #:group hsv-group)))))) Each color channel (R, G, B, H, S, and V) has a cell which is bound to a slider (<input type="range">). All six sliders are identified as color, so adjusting any of them increments color’s timestamp. The R, G, and B sliders form one input group, and the H, S, and V sliders form another. By grouping the related sliders together, whenever one of the sliders is moved, all members of the group will have their ephemeral value refreshed with the latest timestamp. This behavior is crucial because otherwise the r:components<->rgb and r:components<->hsv propagators would see that one color channel has a fresher value than the other two and do nothing. Since the underlying propagator infrastructure does not enforce activation order, reactive propagators must wait until their inputs reach a consistent state where the timestamps for a given reactive identifier are all the same. With this setup, changing a slider on the RGB side will cause a new color value to propagate over to the HSV side. Because the relationship is cyclical, the HSV side will then attempt to propagate an equivalent color value back to the RGB side! This could be bad news, but since the current RGB value is equally fresh (same timestamp), the propagation stops right there. Redundant work is minimized and an unbounded loop is avoided. And that’s it! Phew! Complete source code can be found here. Reflections I think the results of this prototype are promising. I’d like to try building some larger demos to see what new problems arise. Since propagation networks include cycles, they cannot be garbage collected until there are no references to any part of the network from the outside. Is this acceptable? I didn’t optimize, either. A more serious implementation would want to do things like use case-lambda for all n-ary procedures to avoid consing an argument list in the common cases of 1, 2, 3, etc. arguments. There is also a need for a more pleasing domain-specific language, using Scheme’s macro system, for describing FRP graphs. Alexey Radul’s dissertation was published in 2009. 
Has anyone made an FRP system based on propagators since then that’s used in real software? I don’t know of anything but it’s a big information superhighway out there.

Update: Since publishing, I have learned about the following:

- Holograph: A visual editor for propagator networks! Amazing!
- Scoped Propagators: A WIP propagator system with some notable differences from “traditional” propagators.

I wish I had read Alexey Radul's dissertation 10 years ago when I was first exploring FRP. It would have saved me a lot of time spent running into already-solved problems that I was not equipped to solve on my own. I have even talked to Gerald Sussman (a key figure in propagator research) in person about the propagator model. That conversation was focused on AI, though, and I didn’t realize that propagators could also be used for FRP. It wasn’t until more recently that friend and colleague Christine Lemmer-Webber, who was present for the aforementioned conversation with Sussman, told me about it. Christine has her own research project for propagators. There are so many interesting things to learn out there, but I am also so tired. Better late than never, I guess! Anyway, if you made it this far then I hope you have enjoyed reading about propagators and FRP. ’Til next time!
More in programming
I started writing this early last week but Real Life Stuff happened and now you're getting the first-draft late this week. Warning, unedited thoughts ahead! New Logic for Programmers release! v0.9 is out! This is a big release, with a new cover design, several rewritten chapters, online code samples and much more. See the full release notes at the changelog page, and get the book here! Write the cleverest code you possibly can There are millions of articles online about how programmers should not write "clever" code, and instead write simple, maintainable code that everybody understands. Sometimes the example of "clever" code looks like this (src): # Python p=n=1 exec("p*=n*n;n+=1;"*~-int(input())) print(p%n) This is code-golfing, the sport of writing the most concise code possible. Obviously you shouldn't run this in production for the same reason you shouldn't eat dinner off a Rembrandt. Other times the example looks like this: def is_prime(x): if x == 1: return True return all([x%n != 0 for n in range(2, x)] This is "clever" because it uses a single list comprehension, as opposed to a "simple" for loop. Yes, "list comprehensions are too clever" is something I've read in one of these articles. I've also talked to people who think that datatypes besides lists and hashmaps are too clever to use, that most optimizations are too clever to bother with, and even that functions and classes are too clever and code should be a linear script.1. Clever code is anything using features or domain concepts we don't understand. Something that seems unbearably clever to me might be utterly mundane for you, and vice versa. How do we make something utterly mundane? By using it and working at the boundaries of our skills. Almost everything I'm "good at" comes from banging my head against it more than is healthy. That suggests a really good reason to write clever code: it's an excellent form of purposeful practice. Writing clever code forces us to code outside of our comfort zone, developing our skills as software engineers. Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you [will get excellent debugging practice at exactly the right level required to push your skills as a software engineer] — Brian Kernighan, probably There are other benefits, too, but first let's kill the elephant in the room:2 Don't commit clever code I am proposing writing clever code as a means of practice. Being at work is a job with coworkers who will not appreciate if your code is too clever. Similarly, don't use too many innovative technologies. Don't put anything in production you are uncomfortable with. We can still responsibly write clever code at work, though: Solve a problem in both a simple and a clever way, and then only commit the simple way. This works well for small scale problems where trying the "clever way" only takes a few minutes. Write our personal tools cleverly. I'm a big believer of the idea that most programmers would benefit from writing more scripts and support code customized to their particular work environment. This is a great place to practice new techniques, languages, etc. If clever code is absolutely the best way to solve a problem, then commit it with extensive documentation explaining how it works and why it's preferable to simpler solutions. Bonus: this potentially helps the whole team upskill. Writing clever code... 
Writing clever code...

...teaches simple solutions

Usually, code that's called too clever composes several powerful features together — the "not a single list comprehension or function" people are the exception. Josh Comeau's "don't write clever code" article gives this example of "too clever":

    const extractDataFromResponse = (response) => {
      const [Component, props] = response;
      const resultsEntries = Object.entries({ Component, props });
      const assignIfValueTruthy = (o, [k, v]) => (
        v ? { ...o, [k]: v } : o
      );
      return resultsEntries.reduce(assignIfValueTruthy, {});
    }

What makes this "clever"? I count eight language features composed together, among them entries, argument unpacking, implicit objects, splats, ternaries, higher-order functions, and reductions. Would code that used only one or two of these features still be "clever"? I don't think so. These features exist for a reason, and oftentimes they make code simpler than not using them.

We can, of course, learn these features one at a time. Writing the clever version (but not committing it) gives us practice with all eight at once and also with how they compose together. That knowledge comes in handy when we want to apply a single one of the ideas.

I've recently had to do a bit of pandas for a project. Whenever I have to do a new analysis, I try to write it as a single chain of transformations, and then as a more balanced set of updates. (A rough sketch of the two styles appears further below.)

...helps us master concepts

Even if the composite parts of a "clever" solution aren't by themselves useful, it still makes us better at the overall language, and that's inherently valuable. A few years ago I wrote Crimes with Python's Pattern Matching. It involves writing horrible code like this:

    from abc import ABC

    class NotIterable(ABC):
        @classmethod
        def __subclasshook__(cls, C):
            return not hasattr(C, "__iter__")

    def f(x):
        match x:
            case NotIterable():
                print(f"{x} is not iterable")
            case _:
                print(f"{x} is iterable")

    if __name__ == "__main__":
        f(10)
        f("string")
        f([1, 2, 3])

This composes Python match statements, which are broadly useful, and abstract base classes, which are incredibly niche. But even if I never use ABCs in real production code, it helped me understand Python's match semantics and Method Resolution Order better.

...prepares us for necessity

Sometimes the clever way is the only way. Maybe we need something faster than the simplest solution. Maybe we are working with constrained tools or frameworks that demand cleverness. Peter Norvig argued that design patterns compensate for missing language features. I'd argue that cleverness is another means of compensating: if our tools don't have an easy way to do something, we need to find a clever way.

You see this a lot in formal methods like TLA+. Need to check a hyperproperty? Cast your state space to a directed graph. Need to compose ten specifications together? Combine refinements with state machines. Most difficult problems have a "clever" solution.

The real problem is that clever solutions have a skill floor. If normal use of the tool is at difficulty 3 out of 10, then basic clever solutions are at 5 out of 10, and it's hard to jump those two steps in the moment you need the cleverness. But if you've practiced writing overly clever code, you're used to working at a 7-out-of-10 level in short bursts, and then you can "drop down" to 5/10. I don't know if that makes too much sense, but I see it happen a lot in practice.
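Here is the promised sketch of the pandas habit mentioned above (hypothetical; the DataFrame, its columns, and the analysis are invented for illustration). The same small analysis is written once as a chained pipeline and once as a plainer sequence of named updates:

    # Hypothetical example: one analysis, written "clever" (chained) and
    # "simple" (step by step). The data and column names are made up.
    import pandas as pd

    df = pd.DataFrame({
        "city": ["Oslo", "Oslo", "Bergen", "Bergen"],
        "price": [10, 12, 7, 9],
    })

    # Chained: a single pipeline of transformations.
    chained = (
        df[df["price"] > 8]
        .assign(price_with_tax=lambda d: d["price"] * 1.25)
        .groupby("city", as_index=False)["price_with_tax"]
        .mean()
    )

    # Step by step: the same analysis as separate, named updates.
    filtered = df[df["price"] > 8].copy()
    filtered["price_with_tax"] = filtered["price"] * 1.25
    by_city = filtered.groupby("city", as_index=False)["price_with_tax"].mean()

    assert chained.equals(by_city)

Neither version is wrong; writing both is the practice.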
...builds comradery

On a few occasions, after getting a pull request merged, I pulled the reviewer over and said "check out this horrible way of doing the same thing". I find that as long as people know they're not going to be subjected to a clever solution in production, they enjoy seeing it!

Next week's newsletter will probably also be late; after that we should be back to a regular schedule for the rest of the summer.

[1] Mostly grad students outside of CS who have to write scripts to do research. And in more than one data scientist. I think it's correlated with using Jupyter.

[2] If I don't put this at the beginning, I'll get a bajillion responses like "your team will hate you".
Whether we like it or not, email is widely used to identify a person. A code sent to your email address is used as authentication, and sometimes as authorisation, for certain actions. I’m not comfortable with Google having that much power over me, especially given that they have practically no support you can appeal to. If your Google account is blocked, that’s it. Maybe you know someone at Google who can help you, but for most of us mortals that’s not an option.
In his book “The Order of Time,” Carlo Rovelli notes how we often ask ourselves questions about the fundamental nature of reality, such as “What is real?” and “What exists?” But those are bad questions, he says. Why?

the adjective “real” is ambiguous; it has a thousand meanings. The verb “to exist” has even more. To the question “Does a puppet whose nose grows when he lies exist?” it is possible to reply: “Of course he exists! It’s Pinocchio!”; or: “No, it doesn’t, he’s only part of a fantasy dreamed up by Collodi.” Both answers are correct, because they are using different meanings of the verb “to exist.”

He notes how Pinocchio “exists” and is “real” in terms of a literary character, but not so far as any official Italian registry office is concerned.

To ask oneself in general “what exists” or “what is real” means only to ask how you would like to use a verb and an adjective. It’s a grammatical question, not a question about nature.

The point he goes on to make is that our language has to evolve and adapt with our knowledge. Our grammar developed from our limited experience, before we knew what we know now and before we became aware of how imprecise it was in describing the richness of the natural world.

Rovelli gives an example of this from a text of antiquity which uses confusing grammar to get at the idea of the Earth having a spherical shape:

For those standing below, things above are below, while things below are above, and this is the case around the entire earth.

On its face, that is a very confusing sentence full of contradictions. But the idea in there is profound: the Earth is round and direction is relative to the observer. Here’s Rovelli:

How is it possible that “things above are below, while things below are above”? It makes no sense…But if we reread it bearing in mind the shape and the physics of the Earth, the phrase becomes clear: its author is saying that for those who live at the Antipodes (in Australia), the direction “upward” is the same as “downward” for those who are in Europe. He is saying, that is, that the direction “above” changes from one place to another on the Earth. He means that what is above with respect to Sydney is below with respect to us. The author of this text, written two thousand years ago, is struggling to adapt his language and his intuition to a new discovery: the fact that the Earth is a sphere, and that “up” and “down” have a meaning that changes between here and there. The terms do not have, as previously thought, a single and universal meaning.

So language needs innovation as much as any technological or scientific achievement. Otherwise we find ourselves arguing over questions of deep import in a way that ultimately amounts to merely a question of grammar.
In mid-March we released a big bug fix update—elementary OS 8.0.1—and since then we’ve been hard at work on even more bug fixes and some new exciting features that I’m excited to share with you today! Read ahead to find out what we’ve released recently and what you can help us test in Early Access.

Quick Settings

[Image: Quick Settings has a new “Prevent Sleep” toggle]

Leo added a new “Prevent Sleep” toggle. This is useful when you’re giving a presentation or have a long-running background task where you want to temporarily avoid letting the computer go to sleep on its normal schedule.

We also fixed a bug where the “Dark Mode” toggle would cancel the dark mode schedule when used. We now have proper schedule snoozing, so when you manually toggle Dark Mode on or off while using a timed or sunset-to-sunrise schedule, your schedule will resume on the next schedule change instead of being canceled completely.

Vishal also fixed an issue that caused some apps to report being improperly closed on system shutdown or restart, and on the lock screen we now show the “Suspend” button rather than the “Lock” button.

System Settings

Locale settings has a fresh layout thanks to Alain, with its options aligned more cleanly and improved links to additional settings.

[Image: Locale Settings has a more responsive design]

We’ve also added the phrase “about this device” as a search term for the System page and, based on your feedback, improved the interface copy shown when a restart is required to finish installing updates. Plus, Stanisław improved stylus detection in Wacom settings, preventing a crash when no stylus is found.

AppCenter

We now show a small label next to the download button for apps which contain in-app purchases. This is especially useful for easily identifying free-to-play games or alt stores like Steam or Heroic Games Launcher.

[Image: AppCenter now shows when apps have in-app purchases]

Plus, we now reload app icons on the fly as their data is processed, thanks to Italo. That means you’ll no longer occasionally get stuck with an AppCenter that shows missing images for apps whose data has taken a bit longer than usual to load.

Get These Updates

As always, pop open System Settings → System on elementary OS 8 and hit “Update All” to get these updates plus your regular security, bug fix, and translation updates. Or set up automatic updates and get a notification when updates are ready to install!

Early Access

Our development focus recently has been on some of the bigger features that will likely land in either elementary OS 8.1 or 9. We’ve got a new app, big changes to the design of our desktop itself, a whole lot of under-the-hood cleanup, and the return of some key system services thanks to a new open source project.

Monitor

[Image: We’re now shipping a System Monitor app by default]

By popular demand—and thanks to the hard work of Stanisław—we have a new system monitor app called “Monitor” shipping in Early Access. Monitor provides usage information for your processor, GPU, memory, storage, network, and currently running processes.

[Image: You can optionally see system information in the panel with Monitor]

You can also optionally get a ton of glanceable information shown in the panel. There’s currently a lot of work happening to port Monitor to GTK4 and improve its functionality under the Secure Session, so make sure to report any issues you find!

Multitasking

[Image: The Dock is getting a workspace switcher]

Probably the biggest change to the Pantheon shell since its early inception: the Dock is getting a new workspace switcher!
The workspace switcher works much like the one you may have seen in the Multitasking View:

Your currently open workspaces are represented as tiles with the icons of the apps running on them;
You can select a workspace to switch to it;
You can drag-and-drop workspaces to rearrange them;
And you can use the “+” button to create a new blank workspace.

One new trick, however, is that selecting the workspace you’re already on will launch Multitasking View. The new workspace switcher makes it so much more accessible to multitask with just the mouse and get an overview of your workflows without having to first enter the Multitasking View. We’re really excited to hear what people think about it!

[Image: You can close apps from Multitasking View by swiping up]

Another very satisfying feature for folks using touch input: you can now swipe up windows in the Multitasking View to close them. This is a really familiar gesture for those of us with Android and iOS devices and feels really natural for managing a big stack of windows without having to aim for a small “x” button.

GTK4 Porting

We’ve recently landed the port of Tasks to GTK4. So far that comes with a few fixes to tighten up its design, with much more possible in the future. Please make sure to help us test it thoroughly for any regressions!

[Image: Tasks has a slightly tightened up design]

We’re also making great progress on porting the panel to GTK4. So far we have branches in review for the Nightlight, Bluetooth, Datetime, and Network indicators. The Power, Keyboard, and Quick Settings indicators all have in-progress branches. That leaves just Applications, Sound, and Notifications. So far these ports don’t come with major feature changes, but they do involve lots of cleaning up and modernizing of these code bases and, in some cases, fixing bugs! When the port is finished, we should see immediate performance gains and we’ll have a much better foundation for future releases. You can follow along with our progress porting everything to GTK4 in this GitHub Project.

And More

When you take a screenshot using keyboard shortcuts or by secondary-clicking an app’s window handle, we now send a notification letting you know that it was successful and where to find the resulting image. Plus there’s a handy button that opens Files with your screenshot pre-selected.

We’re also testing beaconDB as a replacement for Mozilla Location Services (MLS). If you’re not aware, we relied on MLS in previous versions of elementary OS to provide location information for devices that don’t have a GPS radio. Unfortunately Mozilla discontinued the service last June and we’ve been left without a replacement until now. Without these services, not only did maps and weather apps cease to function, but system features like automatic timezone detection and features that rely on sunset and sunrise times no longer worked properly.

beaconDB offers a drop-in replacement for MLS that uses wireless networks, Bluetooth devices, and cell towers to provide location data when requested. All of its data is crowd-sourced and opt-in, and several distributions are now defaulting to using it as their location services data provider. I’ve set up a small sponsorship from elementary on Liberapay to support the project. If you can help support beaconDB either by sponsoring or providing stumbler data, I’d highly encourage you to do so!

Sponsors

At the moment we’re at 23% of our monthly funding goal and 336 Sponsors on GitHub! Shoutouts to everyone helping us reach our goals here.
Your monthly sponsorship funds development and makes sure we have the resources we need to give you the best version of elementary OS we can! Monthly release candidate builds and daily Early Access builds are available to GitHub Sponsors from any tier! Beware that Early Access builds are not considered stable and you will encounter fresh issues when you run them. We’d really appreciate it if you report any problems you encounter, either with the Feedback app or directly on GitHub.
Via Jeremy Keith’s link blog I found this article: Elizabeth Goodspeed on why graphic designers can’t stop joking about hating their jobs. It’s about the disillusionment of designers since the ~2010s. Having ridden that wave myself, there’s a lot of very relatable stuff in there about how design has evolved as a profession.

But before we get into the meat of the article, there’s some bangers worth acknowledging, like this:

Amazon – the most used website in the world – looks like a bunch of pop-up ads stitched together.

lol, burn. Haven’t heard Amazon described this way, but it’s spot on.

The hard truth, as pointed out in the article, is this: bad design doesn’t hurt profit margins. Or at least there’s no immediately obvious, concrete data or correlation that proves it. So most decision makers don’t care.

You know what does help profit margins? Spending less money. Cost-savings initiatives. Those always provide a direct, immediate, seemingly obvious correlation. So those initiatives get prioritized. Fuzzy human-centered initiatives (humanities-adjacent stuff) are difficult to quantitatively (and monetarily) measure.

“Let’s stop printing paper and sending people stuff in the mail. It’s expensive. Send them emails instead.” Boom! Money saved for everyone. That’s easier to prioritize than asking, “How do people want us to communicate with them — if at all?” Nobody ever asks that last part.

Designers quickly realized that in most settings they serve the business first, customers second — or third, or fourth, or... Shar Biggers [says] designers are “realising that much of their work is being used to push for profit rather than change.” Meet the new boss. Same as the old boss.

As students, designers are encouraged to make expressive, nuanced work, and rewarded for experimentation and personal voice. The implication, of course, is that this is what a design career will look like: meaningful, impactful, self-directed. But then graduation hits, and many land their first jobs building out endless Google Slides templates or resizing banner ads...no one prepared them for how constrained and compromised most design jobs actually are.

Reality hits hard. And here’s the part Jeremy quotes:

We trained people to care deeply and then funnelled them into environments that reward detachment. And the longer you stick around, the more disorienting the gap becomes – especially as you rise in seniority. You start doing less actual design and more yapping: pitching to stakeholders, writing brand strategy decks, performing taste. Less craft, more optics; less idealism, more cynicism.

Less work advocating for your customers, more work advocating for yourself and your team within the organization itself. Then the cynicism sets in. We’re not making software for others. We’re making company numbers go up, so our numbers ($$$) will go up.

Which reminds me: Stephanie Stimac wrote about reaching 1 year at Igalia, and what stood out to me in her post was that she didn’t feel a pressing requirement to create visibility into her work and measure (i.e. prove) its impact. I’ve never been good at that. I’ve seen its necessity, but am just not good at doing it. Being good at building is great. But being good at the optics of building is often better — for you, your career, and your standing in many orgs.

Anyway, back to Elizabeth’s article. She notes you’ll burn out trying to monetize something you love — especially when it’s in pursuit of maintaining a cost of living.
Once your identity is tied up in the performance, it’s hard to admit when it stops feeling good.

It’s a great article, and if you’ve been in the profession of designing software, it’s worth your time.