Wasm GC is a wonderful thing that is now available in all major web browsers since slowpoke Safari/WebKit finally shipped it in December. It provides a hierarchy of heap-allocated reference types and a set of instructions to operate on them. Wasm GC enables managed memory languages to take advantage of the advanced garbage collectors inside web browser engines. It’s now possible to implement a managed memory language without having to ship a GC inside the binary. The result is smaller binaries, better performance, and better integration with the host runtime. However, Wasm GC has some serious drawbacks when compared to linear memory. I enjoy playing around with realtime graphics programming in my free time, but I was disappointed to discover that Wasm GC just isn’t a good fit for that right now. I decided to write this post because I’d like to see Wasm GC on more or less equal footing with linear memory when it comes to binary data manipulation.

Hello triangle

For starters, let's...
4 months ago


More from dthompson

Guile-websocket 0.2.0 released

I'm happy to announce that guile-websocket 0.2.0 has been released! Guile-websocket is an implementation of the WebSocket protocol, both the client and server sides, for Guile Scheme. This release introduces breaking changes that overhaul the client and server implementations in order to support non-blocking I/O and TLS encrypted connections.

source tarball: https://files.dthompson.us/guile-websocket/guile-websocket-0.2.0.tar.gz
signature: https://files.dthompson.us/guile-websocket/guile-websocket-0.2.0.tar.gz.asc

See the guile-websocket project page for more information. Bug reports, bug fixes, feature requests, and patches are welcomed.

4 months ago 67 votes
Guile-Bstructs 0.1.0 released

I'm pleased to announce the very first release of guile-bstructs, version 0.1.0! This is a library I've been working on for quite some time and after more than one rewrite and many smaller refactors I think it's finally ready to release publicly. Let's hope I'm not wrong about that!

About guile-bstructs

Guile-bstructs is a library that provides structured read/write access to binary data for Guile. A bstruct (short for “binary structure”) is a data type that encapsulates a bytevector and a byte offset which interprets that bytevector based on a specified layout. Some use cases for bstructs are:

- manipulating C structs when using the foreign function interface
- packing GPU vertex buffers when using graphics APIs such as OpenGL
- implementing data types that benefit from Guile's unboxed math optimizations, such as vectors and matrices

This library was initially inspired by guile-opengl's define-packed-struct syntax but is heavily based on "Ftypes: Structured foreign types" by Andy Keep and R. Kent Dybvig. The resulting interface is quite similar but the implementation is completely original.

This library provides a syntax-heavy interface; nearly all of the public API is syntax. This is done to ensure that bstruct types are static and well-known at compile time, resulting in efficient bytecode and minimal runtime overhead. A subset of the interface deals in raw bytevector access for accessing structured data in bytevectors directly without going through an intermediary bstruct wrapper. This low-level interface is useful for certain batch processing situations where the overhead of creating wrapper bstructs would hinder throughput.

Example

Here are some example type definitions to give you an idea of what it’s like to use guile-bstructs:

;; Struct
(define-bstruct <vec2>
  (padded
   (struct
    (x float)
    (y float))))

;; Type group with a union
(define-bstruct (<mouse-move-event>
                 (struct
                  (type uint8)
                  (x int32)
                  (y int32)))
                (<mouse-button-event>
                 (struct
                  (type uint8)
                  (button uint8)
                  (state uint8)
                  (x int32)
                  (y int32)))
                (<event>
                 (union
                  (type uint8)
                  (mouse-move <mouse-move-event>)
                  (mouse-button <mouse-button-event>))))

;; Array
(define-bstruct <matrix4>
  (array 16 float))

;; Bit fields
(define-bstruct <date>
  (bits
   (year 32 s)
   (month 4 u)
   (day 5 u)))

;; Pointer
(define-bstruct (<item>
                 (struct (type int)))
                (<chest>
                 (struct
                  (opened? uint8)
                  (item (* <item>)))))

;; Packed struct modifier
(define-bstruct <enemy>
  (packed
   (struct
    (type uint8)
    (health uint32))))

;; Endianness modifier
(define-bstruct <big-float>
  (endian big float))

;; Recursive type
(define-bstruct <node>
  (struct
   (item int)
   (next (* <node>))))

;; Mutually recursive type group
(define-bstruct (<forest>
                 (struct (children (* <tree>))))
                (<tree>
                 (struct
                  (value int)
                  (forest (* <forest>))
                  (next (* <tree>)))))

;; Opaque type
(define-bstruct SDL_GPUTexture)

Download

Source tarball: guile-bstructs-0.1.0.tar.gz
GPG signature: guile-bstructs-0.1.0.tar.gz.asc

This release was signed with this GPG key. See the guile-bstructs project page for more information.

4 months ago 71 votes
Functional reactive user interfaces with propagators

I’ve been interested in functional reactive programming (FRP) for about a decade now. I even wrote a couple of blog posts back in 2014 describing my experiments. My initial source of inspiration was Elm, the Haskell-like language for the web that once had FRP as a core part of the language. Evan Czaplicki’s Strange Loop 2013 talk really impressed me, especially that Mario demo. From there, I explored the academic literature on the subject.

Ultimately, I created and then abandoned a library that focused on FRP for games. It was a neat idea, but the performance was terrible. The overhead of my kinda-sorta FRP system was part of the problem, but mostly it was my own inexperience. I didn’t know how to optimize effectively and my implementation language, Guile, did not have as many optimization passes as it does now. Also, realtime simulations like games require much more careful use of heap allocation. I found that, overhead aside, FRP is a bad fit for things like scripting sequences of actions in a game. I don’t want to give up things like coroutines that make it easy.

I’ve learned how different layers of a program may call for different programming paradigms. Functional layers rest upon imperative foundations. Events are built on top of polling. Languages with expression trees run on machines that only understand linear sequences. You get the idea. A good general-purpose language will allow you to compose many paradigms in the same program. I’m still a big fan of functional programming, but single-paradigm languages do not appeal to me.

Fast forward 10 years: I find myself thinking about FRP again in a new context. I now work for the Spritely Institute where we’re researching and building the next generation of secure, distributed application infrastructure. We want to demo our tech through easy-to-use web applications, which means we need to do some UI programming. So, the back burner of my brain has been mulling over the possibilities. What’s the least painful way to build web UIs? Is this FRP thing worth revisiting?

The reason why FRP is so appealing to me (on paper, at least) is that it allows for writing interactive programs declaratively. With FRP, I can simply describe the relationships between the various time-varying components, and the system wires it all together for me. I’m spared from callback hell, one of the more frightening layers of hell that forces programs to be written in a kind of continuation-passing style where timing and state bugs consume more development time as the project grows.

What about React?

In the time during and since I last experimented with FRP, a different approach to declarative UI programming has swept the web development world: React. From React, many other similar libraries emerged. On the minimalist side there are things like Mithril (a favorite of mine), and then there are bigger players like Vue. The term “reactive” has become overloaded, but in the mainstream software world it is associated with React and friends. FRP is quite different, despite sharing the declarative and reactive traits. Both help free programmers from callback hell, but they achieve their results differently.

The React model describes an application as a tree of “components”. Each component represents a subset of the complete UI element tree. For each component, there is a template function that takes some inputs and returns the new desired state of the UI. This function is called whenever an event occurs that might change the state of the UI.
The template produces a data structure known as a “virtual DOM”. To realize this new state in the actual DOM, React diffs the previous tree with the new one and updates, creates, and deletes elements as necessary.

With FRP, you describe your program as an acyclic graph of nodes that contain time-varying values. The actual value of any given node is determined by a function that maps the current values of some input nodes into an output value. The system is bootstrapped by handling a UI event and updating the appropriate root node, which kicks off a cascade of updates throughout the graph. At the leaf nodes, side-effects occur that realize the desired state of the application.

Racket’s FrTime is one example of such a system, which is based on Greg Cooper’s 2008 PhD dissertation “Integrating Dataflow Evaluation into a Practical Higher-Order Call-by-Value Language”. In FrTime, time-varying values are called “signals”. Elm borrowed this language, too, and there’s currently a proposal to add signals to JavaScript. Research into FRP goes back quite a bit further. Notably, Conal Elliott and Paul Hudak wrote “Functional Reactive Animation” in 1997.

On jank

The scope of potential change for any given event is larger in React than FRP. An FRP system flows data through an acyclic graph, updating only the nodes affected by the event. React requires re-evaluating the template for each component that needs refreshing and applying a diff algorithm to determine what needs changing in the currently rendered UI. The virtual DOM diffing process can be quite wasteful in terms of both memory usage and processing time, leading to jank on systems with limited resources like phones. Andy Wingo has done some interesting analyses of things like React Native and Flutter and covers the subject of jank well.

So, while I appreciate the greatly improved developer experience of React-likes (I wrote my fair share of frontend code in the jQuery days), I’m less pleased by the overhead that it pushes onto each user’s computer. React feels like an important step forward on the declarative UI trail, but it’s not the destination.

FRP has the potential for less jank because side-effects (the UI widget state updates) can be more precise. For example, if a web page has a text node that displays the number of times the user has clicked a mouse button, an FRP system could produce a program that would do the natural thing: register a click event handler that replaces the text node with a new one containing the updated count. We don’t need to diff the whole page, nor do we need to create a wrapper component to update a subset of the page. The scope is narrow, so we can apply smaller updates. No virtual DOM necessary.

There is, of course, overhead to maintaining the graph of time-varying values. The underlying runtime is free to use mutable state, but the user layer must take care to use pure functions and persistent, immutable data structures. This has a cost, but the per-event cost to refresh the UI feels much more reasonable when compared to React. From here on, I will stop talking about React and start exploring if FRP might really offer a more expressive way to do declarative UI without too much overhead. But first, we need to talk about a serious problem.

FRP is acyclic

FRP is no silver bullet. As mentioned earlier, FRP graphs are typically of the acyclic flavor. This limits the set of UIs that it is possible to create with FRP. Is this the cost of declarativeness?
To demonstrate the problem, consider a color picker tool that has sliders for both the red-green-blue and hue-saturation-value representations of color:

[Diagram: the RGB/HSV color selection network, reproduced from Alexey Radul's dissertation.]

In this program, updating the sliders on the RGB side should change the values of the sliders on the HSV side, and vice versa. This forms a cycle between the two sets of sliders. It’s possible to express cycles like this with event callbacks, though it’s messy and error-prone to do manually. We’d like a system built on top of event callbacks that can do the right thing without strange glitches or, worse, unbounded loops.

Propagators to the rescue

Fortunately, I didn’t create that diagram above. It’s from Alexey Radul’s 2009 PhD dissertation: “Propagation Networks: A Flexible and Expressive Substrate for Computation”. This paper dedicates a section to explaining how FRP can be built on top of a more general paradigm called propagation networks, or just propagators for short. The paper is lengthy, naturally, but it is written in an approachable style. There isn’t any terse math notation and there are plenty of code examples. As far as PhD dissertations go, this one is a real page turner!

Here is a quote from section 5.5 about FrTime (with emphasis added by me):

  FrTime is built around a custom propagation infrastructure; it nicely achieves both non-recomputation and glitch avoidance, but unfortunately, the propagation system is nontrivially complicated, and specialized for the purpose of supporting functional reactivity. In particular, the FrTime system imposes the invariant that the propagation graph be acyclic, and guarantees that it will execute the propagators in topological-sort order. This simplifies the propagators themselves, but greatly complicates the runtime system, especially because it has to dynamically recompute the sort order when the structure of some portion of the graph changes (as when the predicate of a conditional changes from true to false, and the other branch must now be computed). That complexity, in turn, makes that runtime system unsuitable for other kinds of propagation, and even makes it difficult for other kinds of propagation to interoperate with it.

So, the claim is that FRP-on-propagators can remove the acyclic restriction, reduce complexity, and improve interoperability. But what are propagators? I like how the book “Software Design for Flexibility” (2021) defines them (again, with emphasis added by me):

  “The propagator model is built on the idea that the basic computational elements are propagators, autonomous independent machines interconnected by shared cells through which they communicate. Each propagator machine continuously examines the cells it is connected to, and adds information to some cells based on computations it can make from information it can get from others. Cells accumulate information and propagators produce information.”

Research on propagators goes back a long way (you’ll even find a form of propagators in the all-time classic “Structure and Interpretation of Computer Programs”), but it was Alexey Radul who discovered how to unify many different types of special-purpose propagation systems so that they could share a generic substrate and interoperate. Perhaps the most exciting application of the propagator model is AI, where it can be used to create “explainable AI” that keeps track of how a particular output was computed.
This type of AI stands in stark contrast to the much-hyped mainstream machine learning models that hoover up our planet’s precious natural resources to produce black boxes that generate impressive bullshit. But anyway!

The diagram above can also be found in section 5.5 of the dissertation. Here’s the description: “A network for a widget for RGB and HSV color selection. Traditional functional reactive systems have qualms about the circularity, but general-purpose propagation handles it automatically.”

This color picker felt like a good, achievable target for a prototype. The propagator network is small and there are only a handful of UI elements, yet it will test if the FRP system is working correctly.

The prototype

I read Alexey Radul’s dissertation, and then read chapter 7 of Software Design for Flexibility, which is all about propagators. Both use Scheme as the implementation language. The latter makes no mention of FRP, and while the former explains how FRP can be implemented in terms of propagators, there is (understandably) no code included. So, I had to implement it for myself to test my understanding. Unsurprisingly, I had misunderstood many things along the way and my iterations of broken code let me know that. Implementation is the best teacher.

After much code fiddling, I was able to create a working prototype of the color picker. Here it is below:

[Interactive color picker demo embedded in the original post.]

This prototype is written in Scheme and uses Hoot to compile it to WebAssembly so I can embed it right here in my blog. Sure beats a screenshot or video! This prototype contains a minimal propagator implementation that is sufficient to bootstrap a similarly minimal FRP implementation.

Propagator implementation

Let’s take a look at the code and see how it all works, starting with propagation. There are two essential data types: cells and propagators. Cells accumulate information about a value, ranging from nothing, to some form of partial information, to a complete value. The concept of partial information is Alexey Radul’s major contribution to the propagator model. It is through partial information structures that general-purpose propagators can be used to implement logic programming, probabilistic programming, type inference, and FRP, among others. I’m going to keep things as simple as possible in this post (it’s a big enough infodump already), but do read the propagator literature if phrases like “dependency directed backtracking” and “truth maintenance system” sound like a good time to you.

Cells start out knowing nothing, so we need a special, unique value to represent nothing:

(define-record-type <nothing>
  (make-nothing)
  %nothing?)

(define nothing (make-nothing))

(define (nothing? x)
  (eq? x nothing))

Any unique (as in eq?) object would do, such as (list 'nothing), but I chose to use a record type because I like the way it prints.

In addition to nothing, the propagator model also has a notion of contradictions. If one source of information says there are four lights, but another says there are five, then we have a contradiction. Propagation networks do not fall apart in the presence of contradictory information. There’s a data type that captures information about them and they can be resolved in a context-specific manner. I mention contradictions only for the sake of completeness, as a general-purpose propagator system needs to handle them. This prototype does not create any contradictions, so I won’t mention them again.
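(The code below does call a contradiction? predicate, though, so for the sake of a self-contained read, here is a minimal placeholder in the same style as <nothing>. This is my sketch, not the post's actual definition:)

;; Minimal placeholder (my sketch, not the original post's definition):
;; just enough for contradiction? to exist. A real system would store
;; the irreconcilable pieces of content for later resolution.
(define-record-type <contradiction>
  (make-contradiction contents)
  contradiction?
  (contents contradiction-contents))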
Now we can define a cell type:

(define-record-type <cell>
  (%make-cell relations neighbors content strongest
              equivalent? merge find-strongest handle-contradiction)
  cell?
  (relations cell-relations)
  (neighbors cell-neighbors set-cell-neighbors!)
  (content cell-content set-cell-content!)
  (strongest cell-strongest set-cell-strongest!)
  ;; Dispatch table:
  (equivalent? cell-equivalent?)
  (merge cell-merge)
  (find-strongest cell-find-strongest)
  (handle-contradiction cell-handle-contradiction))

The details of how a cell does things like merge old information with new information are left intentionally unanswered at this level of abstraction. Different systems built on propagators will want to handle things in different ways. In the propagator literature, you’ll see generic procedures used extensively for this purpose. For the sake of simplicity, I use a dispatch table instead. It would be easy to layer generic merge on top later, if we wanted.

The constructor for cells sets the default contents to nothing:

(define default-equivalent? equal?)

;; But what about partial information???
(define (default-merge old new) new)

(define (default-find-strongest content) content)

(define (default-handle-contradiction cell) (values))

(define* (make-cell name #:key
                    (equivalent? default-equivalent?)
                    (merge default-merge)
                    (find-strongest default-find-strongest)
                    (handle-contradiction default-handle-contradiction))
  (let ((cell (%make-cell (make-relations name) '() nothing nothing
                          equivalent? merge find-strongest
                          handle-contradiction)))
    (add-child! (current-parent) cell)
    cell))

The default procedures used for the dispatch table are either no-ops or trivial. The default merge doesn’t merge at all; it just clobbers the old with the new. It’s up to the layers on top to provide more useful implementations.

Cells can have neighbors (which will be propagators):

(define (add-cell-neighbor! cell neighbor)
  (set-cell-neighbors! cell (lset-adjoin eq? (cell-neighbors cell) neighbor)))

Since cells accumulate information, there are procedures for adding new content and finding the current strongest value contained within:

(define (add-cell-content! cell new)
  (match cell
    (($ <cell> _ neighbors content strongest
        equivalent? merge find-strongest handle-contradiction)
     (let ((content* (merge content new)))
       (set-cell-content! cell content*)
       (let ((strongest* (find-strongest content*)))
         (cond
          ;; New strongest value is equivalent to the old one. No need
          ;; to alert propagators.
          ((equivalent? strongest strongest*)
           (set-cell-strongest! cell strongest*))
          ;; Uh oh, a contradiction! Call handler.
          ((contradiction? strongest*)
           (set-cell-strongest! cell strongest*)
           (handle-contradiction cell))
          ;; Strongest value has changed. Alert the propagators!
          (else
           (set-cell-strongest! cell strongest*)
           (for-each alert-propagator! neighbors))))))))

Next up is the propagator type. Propagators can be activated to create information using content stored in cells and store their results in some other cells, forming a graph. Data flow is not forced to be directional. Cycles are not only permitted, but very common in practice. So, propagators keep track of both their input and output cells:

(define-record-type <propagator>
  (%make-propagator relations inputs outputs activate)
  propagator?
  (relations propagator-relations)
  (inputs propagator-inputs)
  (outputs propagator-outputs)
  (activate propagator-activate))

Propagators can be alerted to schedule themselves to be re-evaluated:

(define (alert-propagator! propagator)
  (queue-task! (propagator-activate propagator)))

The constructor for propagators adds the new propagator as a neighbor to all input cells and then calls alert-propagator! to bootstrap it:

(define (make-propagator name inputs outputs activate)
  (let ((propagator (%make-propagator (make-relations name)
                                      inputs outputs activate)))
    (add-child! (current-parent) propagator)
    (for-each (lambda (cell)
                (add-cell-neighbor! cell propagator))
              inputs)
    (alert-propagator! propagator)
    propagator))

There are two main classes of propagators that will be used: primitive propagators and constraint propagators. Primitive propagators are directional; they apply a function to the values of some input cells and write the result to an output cell:

(define (unusable-value? x)
  (or (nothing? x) (contradiction? x)))

(define (primitive-propagator name f)
  (match-lambda*
    ((inputs ... output)
     (define (activate)
       (let ((args (map cell-strongest inputs)))
         (unless (any unusable-value? args)
           (add-cell-content! output (apply f args)))))
     (make-propagator name inputs (list output) activate))))

We can use primitive-propagator to lift standard Scheme procedures into the realm of propagators. Here’s how we’d make and use an addition propagator:

(define p:+ (primitive-propagator 'p:+ +))

(define a (make-cell 'a))
(define b (make-cell 'b))
(define c (make-cell 'c))

(p:+ a b c)
(add-cell-content! a 1)
(add-cell-content! b 3)

;; After the scheduler runs…
(cell-strongest c) ;; => 4

It is from these primitive propagators that we can build more complicated, compound propagators. Compound propagators compose primitive propagators (or other compound propagators) and lazily construct their section of the network upon first activation:

(define (compound-propagator name inputs outputs build)
  (let ((built? #f))
    (define (maybe-build)
      (unless (or built?
                  (and (not (null? inputs))
                       (every unusable-value? (map cell-strongest inputs))))
        (parameterize ((current-parent (propagator-relations propagator)))
          (build)
          (set! built? #t))))
    (define propagator (make-propagator name inputs outputs maybe-build))
    propagator))

By this point you may be wondering what all the references to current-parent are about. It is for tracking the parent/child relationships of the cells and propagators in the network. This could be helpful for things like visualizing the network, but we aren’t going to do anything with it today. I’ve omitted all of the other relation code for this reason.

Constraint propagators are compound propagators whose inputs and outputs are the same, which results in bidirectional propagation:

(define (constraint-propagator name cells build)
  (compound-propagator name cells cells build))

Using primitive propagators for addition and subtraction, we can build a constraint propagator for the equation a + b = c:

(define p:+ (primitive-propagator 'p:+ +))
(define p:- (primitive-propagator 'p:- -))

(define (c:sum a b c)
  (define (build)
    (p:+ a b c)
    (p:- c a b)
    (p:- c b a))
  (constraint-propagator 'sum (list a b c) build))

(define a (make-cell 'a))
(define b (make-cell 'b))
(define c (make-cell 'c))

(c:sum a b c)
(add-cell-content! a 1)
(add-cell-content! c 4)

;; After the scheduler runs…
(cell-strongest b) ;; => 3

With a constraint, we can populate any two cells and the propagation system will figure out the value of the third. Pretty cool!
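The same recipe extends to any invertible relationship. As a quick extra example (mine, not from the original post), here is the analogous constraint for the equation a × b = c:

;; My own example following the same pattern as c:sum: a * b = c.
;; Note that the division directions assume non-zero divisors.
(define p:* (primitive-propagator 'p:* *))
(define p:/ (primitive-propagator 'p:/ /))

(define (c:product a b c)
  (define (build)
    (p:* a b c)   ; a * b -> c
    (p:/ c a b)   ; c / a -> b
    (p:/ c b a))  ; c / b -> a
  (constraint-propagator 'product (list a b c) build))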
This is a good enough propagation system for the FRP prototype.

FRP implementation

If you’re familiar with terminology from other FRP systems like “signals” and “behaviors” then set that knowledge aside for now. We need some new nouns. But first, a bit about the problems that need solving in order to implement FRP on top of general-purpose propagators:

- The propagator model does not enforce any ordering of when propagators will be re-activated in relation to each other. If we’re not careful, something in the network could compute a value using a mix of fresh and stale data, resulting in a momentary “glitch” that could be noticeable in the UI.
- The presence of cycles introduces a crisis of identity. It’s not sufficient for every time-varying value to be treated as its own self. In the color picker, the RGB values and the HSV values are two representations of the same thing. We need a new notion of identity to capture this and prevent unnecessary churn and glitches in the network.

For starters, we will create a “reactive identifier” (needs a better name) data type that serves two purposes:

- To create shared identity between different information sources that are logically part of the same thing
- To create localized timestamps for values associated with this identity

(define-record-type <reactive-id>
  (%make-reactive-id name clock)
  reactive-id?
  (name reactive-id-name)
  (clock reactive-id-clock set-reactive-id-clock!))

(define (make-reactive-id name)
  (%make-reactive-id name 0))

(define (reactive-id-tick! id)
  (let ((t (1+ (reactive-id-clock id))))
    (set-reactive-id-clock! id t)
    `((,id . ,t))))

Giving each logical identity in the FRP system its own clock eliminates the need for a global clock, avoiding a potentially troublesome source of centralization. This is kind of like how Lamport timestamps are used in distributed systems.

We also need a data type that captures the value of something at a particular point in time. Since the cruel march of time is unceasing, these are ephemeral values:

(define-record-type <ephemeral>
  (make-ephemeral value timestamps)
  ephemeral?
  (value ephemeral-value)
  ;; Association list mapping identity -> time
  (timestamps ephemeral-timestamps))

Ephemerals are boxes that contain some arbitrary data with a bunch of shipping labels slapped onto the outside explaining from whence they came. This is the partial information structure that our propagators will manipulate and add to cells. Here’s how to say “the mouse position was (1, 2) at time 3” in code:

(define mouse-position (make-reactive-id 'mouse-position))

(make-ephemeral #(1 2) `((,mouse-position . 3)))

We need to perform a few operations with ephemerals:

- Test if one ephemeral is fresher (more recent) than another
- Merge two ephemerals when cell content is added
- Compose the timestamps from several inputs to form an aggregate timestamp for an output, but only if all timestamps for each distinct identifier match (no mixing of fresh and stale values)

(define (ephemeral-fresher? a b)
  (let ((b-inputs (ephemeral-timestamps b)))
    (let lp ((a-inputs (ephemeral-timestamps a)))
      (match a-inputs
        (() #t)
        (((key . a-time) . rest)
         (match (assq-ref b-inputs key)
           (#f (lp rest))
           (b-time
            (and (> a-time b-time)
                 (lp rest)))))))))

(define (merge-ephemerals old new)
  (cond
   ((nothing? old) new)
   ((nothing? new) old)
   ((ephemeral-fresher? new old) new)
   (else old)))

(define (merge-ephemeral-timestamps ephemerals)
  (define (adjoin-keys alist keys)
    (fold (lambda (key+value keys)
            (match key+value
              ((key . _) (lset-adjoin eq? keys key))))
          keys alist))
  (define (check-timestamps id)
    (let lp ((ephemerals ephemerals) (t #f))
      (match ephemerals
        (() t)
        ((($ <ephemeral> _ timestamps) . rest)
         (match (assq-ref timestamps id)
           ;; No timestamp for this id in this ephemeral. Continue.
           (#f (lp rest t))
           (t* (if t
                   ;; If timestamps don't match then we have a mix of
                   ;; fresh and stale values, so return #f. Otherwise,
                   ;; continue.
                   (and (= t t*) (lp rest t))
                   ;; Initialize timestamp and continue.
                   (lp rest t*))))))))
  ;; Build a set of all reactive identifiers across all ephemerals.
  (let ((ids (fold (lambda (ephemeral ids)
                     (adjoin-keys (ephemeral-timestamps ephemeral) ids))
                   '() ephemerals)))
    (let lp ((ids ids) (timestamps '()))
      (match ids
        (() timestamps)
        ((id . rest)
         ;; Check for consistent timestamps. If they are consistent
         ;; then add it to the alist and continue. Otherwise, return
         ;; #f.
         (let ((t (check-timestamps id)))
           (and t (lp rest (cons (cons id t) timestamps)))))))))

Example usage:

(define e1 (make-ephemeral #(3 4) `((,mouse-position . 4))))
(define e2 (make-ephemeral #(1 2) `((,mouse-position . 3))))

(ephemeral-fresher? e1 e2)                ;; => #t
(merge-ephemerals e1 e2)                  ;; => e1
(merge-ephemeral-timestamps (list e1 e2)) ;; => #f

(define (mouse-max-coordinate e)
  (match e
    (($ <ephemeral> #(x y) timestamps)
     (make-ephemeral (max x y) timestamps))))

(define e3 (mouse-max-coordinate e1))

(merge-ephemeral-timestamps (list e1 e3)) ;; => ((mouse-position . 4))

Now we can build a primitive propagator constructor that lifts ordinary Scheme procedures so that they work with ephemerals:

(define (ephemeral-wrap proc)
  (match-lambda*
    ((and ephemerals (($ <ephemeral> args) ...))
     (match (merge-ephemeral-timestamps ephemerals)
       (#f nothing)
       (timestamps (make-ephemeral (apply proc args) timestamps))))))

(define* (primitive-reactive-propagator name proc)
  (primitive-propagator name (ephemeral-wrap proc)))

Reactive UI implementation

Now we need some propagators that live at the edges of our network that know how to interact with the DOM and can do the following:

- Sync a DOM element attribute with the value of a cell
- Create a two-way data binding between an element’s value attribute and a cell
- Render the markup in a cell and place it into the DOM tree in the right location

Syncing an element attribute is a directional operation and the easiest to implement:

(define (r:attribute input elem attr)
  (let ((attr (symbol->string attr)))
    (define (activate)
      (match (cell-strongest input)
        (($ <ephemeral> val)
         (attribute-set! elem attr (obj->string val)))
        ;; Ignore unusable values.
        (_ (values))))
    (make-propagator 'r:attribute (list input) '() activate)))

Two-way data binding is more involved. First, a new data type is used to capture the necessary information:

(define-record-type <binding>
  (make-binding id cell default group)
  binding?
  (id binding-id)
  (cell binding-cell)
  (default binding-default)
  (group binding-group))

(define* (binding id cell #:key (default nothing) (group '()))
  (make-binding id cell default group))

And then a reactive propagator applies that binding to a specific DOM element:

(define* (r:binding binding elem)
  (match binding
    (($ <binding> id cell default group)
     (define (update new)
       (unless (nothing? new)
         (let ((timestamp (reactive-id-tick! id)))
           (add-cell-content! cell (make-ephemeral new timestamp))
           ;; Freshen timestamps for all cells in the same group.
           (for-each (lambda (other)
                       (unless (eq? other cell)
                         (match (cell-strongest other)
                           (($ <ephemeral> val)
                            (add-cell-content! other
                                               (make-ephemeral val timestamp)))
                           (_ #f))))
                     group))))
     ;; Sync the element's value with the cell's value.
     (define (activate)
       (match (cell-strongest cell)
         (($ <ephemeral> val)
          (set-value! elem (obj->string val)))
         (_ (values))))
     ;; Initialize element value with the default value.
     (update default)
     ;; Sync the cell's value with the element's value.
     (add-event-listener! elem "input"
                          (procedure->external
                           (lambda (event)
                             (update (string->obj (value elem))))))
     (make-propagator 'r:binding (list cell) '() activate))))

A simple method for rendering to the DOM is to replace some element with a newly created element based on the current ephemeral value of a cell:

(define (r:dom input elem)
  (define (activate)
    (match (cell-strongest input)
      (($ <ephemeral> exp)
       (let ((new (sxml->dom exp)))
         (replace-with! elem new)
         (set! elem new)))
      (_ (values))))
  (make-propagator 'dom (list input) '() activate))

The sxml->dom procedure deserves some further explanation. To create a subtree of new elements, we have two options:

1. Use something like the innerHTML element property to insert arbitrary HTML as a string and let the browser parse and build the elements.
2. Use a Scheme data structure that we can iterate over, making the relevant document.createTextNode, document.createElement, etc. calls.

Option 1 might be a shortcut and would be fine for a quick prototype, but it would mean that to generate the HTML we’d be stuck using raw strings. While string-based templating is commonplace, we can certainly do better in Scheme. Option 2 is actually not too much work and we get to use Lisp’s universal templating system, quasiquote, to write our markup. Thankfully, SXML already exists for this purpose. SXML is an alternative XML syntax that uses s-expressions. Since Scheme uses s-expression syntax, it’s a natural fit. Example:

'(article
  (h1 "SXML is neat")
  (img (@ (src "cool.jpg") (alt "cool image")))
  (p "Yeah, SXML is " (em "pretty neato!")))

Instead of using it to generate HTML text, we’ll instead generate a tree of DOM elements. Furthermore, because we’re now in full control of how the element tree is built, we can build in support for reactive propagators! Check it out:

(define (sxml->dom exp)
  (match exp
    ;; The simple case: a string representing a text node.
    ((? string? str)
     (make-text-node str))
    ((? number? num)
     (make-text-node (number->string num)))
    ;; A cell containing SXML (or nothing)
    ((? cell? cell)
     (let ((elem (cell->elem cell)))
       (r:dom cell elem)
       elem))
    ;; An element tree. The first item is the HTML tag.
    (((? symbol? tag) . body)
     ;; Create a new element with the given tag.
     (let ((elem (make-element (symbol->string tag))))
       (define (add-children children)
         ;; Recursively call sxml->dom for each child node and
         ;; append it to elem.
         (for-each (lambda (child)
                     (append-child! elem (sxml->dom child)))
                   children))
       (match body
         ((('@ . attrs) . children)
          (for-each (lambda (attr)
                      (match attr
                        (((? symbol? name) (? string? val))
                         (attribute-set! elem (symbol->string name) val))
                        (((? symbol? name) (? number? val))
                         (attribute-set! elem (symbol->string name)
                                         (number->string val)))
                        (((? symbol? name) (? cell? cell))
                         (r:attribute cell elem name))
                        ;; The value attribute is special and can be
                        ;; used to setup a 2-way data binding.
                        (('value (? binding? binding))
                         (r:binding binding elem))))
                    attrs)
          (add-children children))
         (children
          (add-children children)))
       elem))))

Notice the calls to r:dom, r:attribute, and r:binding. A cell can be used in either the context of an element (r:dom) or an attribute (r:attribute). The value attribute gets the additional superpower of r:binding. We will make use of this when it is time to render the color picker UI!
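As a tiny illustration (my own, not from the post), a cell dropped into quasiquoted markup becomes a live piece of the page:

;; My own mini example: message is an ordinary cell, so any propagator
;; that writes a new ephemeral into it will re-render this paragraph.
;; The returned element would then be appended to the document.
(define message (make-cell 'message #:merge merge-ephemerals))
(sxml->dom `(p (@ (class "status")) "Status: " ,message))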
Color picker implementation

Alright, I’ve spent a lot of time explaining how I built a minimal propagator and FRP system from first principles on top of Hoot-flavored Scheme. Let’s finally write the dang color picker!

First we need some data types to represent RGB and HSV colors:

(define-record-type <rgb-color>
  (rgb-color r g b)
  rgb-color?
  (r rgb-color-r)
  (g rgb-color-g)
  (b rgb-color-b))

(define-record-type <hsv-color>
  (hsv-color h s v)
  hsv-color?
  (h hsv-color-h)
  (s hsv-color-s)
  (v hsv-color-v))

And procedures to convert RGB to HSV and vice versa:

(define (rgb->hsv rgb)
  (match rgb
    (($ <rgb-color> r g b)
     (let* ((cmax (max r g b))
            (cmin (min r g b))
            (delta (- cmax cmin)))
       (hsv-color (cond
                   ((= delta 0.0) 0.0)
                   ((= cmax r)
                    (let ((h (* 60.0 (fmod (/ (- g b) delta) 6.0))))
                      (if (< h 0.0) (+ h 360.0) h)))
                   ((= cmax g)
                    (* 60.0 (+ (/ (- b r) delta) 2.0)))
                   ((= cmax b)
                    (* 60.0 (+ (/ (- r g) delta) 4.0))))
                  (if (= cmax 0.0) 0.0 (/ delta cmax))
                  cmax)))))

(define (hsv->rgb hsv)
  (match hsv
    (($ <hsv-color> h s v)
     (let* ((h' (/ h 60.0))
            (c (* v s))
            (x (* c (- 1.0 (abs (- (fmod h' 2.0) 1.0)))))
            (m (- v c)))
       (define-values (r' g' b')
         (cond
          ((<= 0.0 h 60.0) (values c x 0.0))
          ((<= h 120.0) (values x c 0.0))
          ((<= h 180.0) (values 0.0 c x))
          ((<= h 240.0) (values 0.0 x c))
          ((<= h 300.0) (values x 0.0 c))
          ((<= h 360.0) (values c 0.0 x))))
       (rgb-color (+ r' m) (+ g' m) (+ b' m))))))

We also need some procedures to convert colors into the hexadecimal representations we’re used to seeing:

(define (uniform->byte x)
  (inexact->exact (round (* x 255.0))))

(define (rgb->int rgb)
  (match rgb
    (($ <rgb-color> r g b)
     (+ (* (uniform->byte r) (ash 1 16))
        (* (uniform->byte g) (ash 1 8))
        (uniform->byte b)))))

(define (rgb->hex-string rgb)
  (list->string
   (cons #\#
         (let lp ((i 0) (n (rgb->int rgb)) (out '()))
           (if (= i 6)
               out
               (lp (1+ i) (ash n -4)
                   (cons (integer->char
                          (let ((digit (logand n 15)))
                            (+ (if (< digit 10)
                                   (char->integer #\0)
                                   (- (char->integer #\a) 10))
                               digit)))
                         out)))))))

(define (rgb-hex->style hex)
  (string-append "background-color: " hex ";"))

Now we can lift the color API into primitive reactive propagator constructors:

(define-syntax-rule (define-primitive-reactive-propagator name proc)
  (define name (primitive-reactive-propagator 'name proc)))

(define-primitive-reactive-propagator r:rgb-color rgb-color)
(define-primitive-reactive-propagator r:rgb-color-r rgb-color-r)
(define-primitive-reactive-propagator r:rgb-color-g rgb-color-g)
(define-primitive-reactive-propagator r:rgb-color-b rgb-color-b)
(define-primitive-reactive-propagator r:hsv-color hsv-color)
(define-primitive-reactive-propagator r:hsv-color-h hsv-color-h)
(define-primitive-reactive-propagator r:hsv-color-s hsv-color-s)
(define-primitive-reactive-propagator r:hsv-color-v hsv-color-v)
(define-primitive-reactive-propagator r:rgb->hsv rgb->hsv)
(define-primitive-reactive-propagator r:hsv->rgb hsv->rgb)
(define-primitive-reactive-propagator r:rgb->hex-string rgb->hex-string)
(define-primitive-reactive-propagator r:rgb-hex->style rgb-hex->style)

From those primitive propagators, we can build the necessary constraint propagators:

(define (r:components<->rgb r g b rgb)
  (define (build)
    (r:rgb-color r g b rgb)
    (r:rgb-color-r rgb r)
    (r:rgb-color-g rgb g)
    (r:rgb-color-b rgb b))
  (constraint-propagator 'r:components<->rgb (list r g b rgb) build))

(define (r:components<->hsv h s v hsv)
  (define (build)
    (r:hsv-color h s v hsv)
    (r:hsv-color-h hsv h)
    (r:hsv-color-s hsv s)
    (r:hsv-color-v hsv v))
  (constraint-propagator 'r:components<->hsv (list h s v hsv) build))

(define (r:rgb<->hsv rgb hsv)
  (define (build)
    (r:rgb->hsv rgb hsv)
    (r:hsv->rgb hsv rgb))
  (constraint-propagator 'r:rgb<->hsv (list rgb hsv) build))
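Before wiring up the UI, it's worth sanity-checking the plain conversion procedures on their own. (These examples are mine, with the expected values worked out by hand:)

;; Magenta: full red and blue.
(rgb->hex-string (rgb-color 1.0 0.0 1.0)) ;; => "#ff00ff"

;; Pure red is hue 0, full saturation, full value.
(rgb->hsv (rgb-color 1.0 0.0 0.0))        ;; => hsv-color with h=0.0, s=1.0, v=1.0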
At long last, we are ready to define the UI! Here it is:

(define (render exp)
  (append-child! (document-body) (sxml->dom exp)))

(define* (slider id name min max default #:optional (step "any"))
  `(div (@ (class "slider"))
        (label (@ (for ,id)) ,name)
        (input (@ (id ,id)
                  (type "range")
                  (min ,min)
                  (max ,max)
                  (step ,step)
                  (value ,default)))))

;; [0,1] slider
(define (uslider id name default)
  (slider id name 0 1 default))

(define-syntax-rule (with-cells (name ...) body . body*)
  (let ((name (make-cell 'name #:merge merge-ephemerals)) ...)
    body . body*))

(with-cells (r g b rgb h s v hsv hex style)
  (define color (make-reactive-id 'color))
  (define rgb-group (list r g b))
  (define hsv-group (list h s v))
  (r:components<->rgb r g b rgb)
  (r:components<->hsv h s v hsv)
  (r:rgb<->hsv rgb hsv)
  (r:rgb->hex-string rgb hex)
  (r:rgb-hex->style hex style)
  (render
   `(div
     (h1 "Color Picker")
     (div (@ (class "preview"))
          (div (@ (class "color-block") (style ,style)))
          (div (@ (class "hex")) ,hex))
     (fieldset
      (legend "RGB")
      ,(uslider "red" "Red"
                (binding color r #:default 1.0 #:group rgb-group))
      ,(uslider "green" "Green"
                (binding color g #:default 0.0 #:group rgb-group))
      ,(uslider "blue" "Blue"
                (binding color b #:default 1.0 #:group rgb-group)))
     (fieldset
      (legend "HSV")
      ,(slider "hue" "Hue" 0 360
               (binding color h #:group hsv-group))
      ,(uslider "saturation" "Saturation"
                (binding color s #:group hsv-group))
      ,(uslider "value" "Value"
                (binding color v #:group hsv-group))))))

Each color channel (R, G, B, H, S, and V) has a cell which is bound to a slider (<input type="range">). All six sliders are identified as color, so adjusting any of them increments color’s timestamp. The R, G, and B sliders form one input group, and the H, S, and V sliders form another. By grouping the related sliders together, whenever one of the sliders is moved, all members of the group will have their ephemeral value refreshed with the latest timestamp. This behavior is crucial because otherwise the r:components<->rgb and r:components<->hsv propagators would see that one color channel has a fresher value than the other two and do nothing. Since the underlying propagator infrastructure does not enforce activation order, reactive propagators must wait until their inputs reach a consistent state where the timestamps for a given reactive identifier are all the same.

With this setup, changing a slider on the RGB side will cause a new color value to propagate over to the HSV side. Because the relationship is cyclical, the HSV side will then attempt to propagate an equivalent color value back to the RGB side! This could be bad news, but since the current RGB value is equally fresh (same timestamp), the propagation stops right there. Redundant work is minimized and an unbounded loop is avoided.

And that’s it! Phew! Complete source code can be found here.

Reflections

I think the results of this prototype are promising. I’d like to try building some larger demos to see what new problems arise. Since propagation networks include cycles, they cannot be garbage collected until there are no references to any part of the network from the outside. Is this acceptable? I didn’t optimize, either. A more serious implementation would want to do things like use case-lambda for all n-ary procedures to avoid consing an argument list in the common cases of 1, 2, 3, etc. arguments. There is also a need for a more pleasing domain-specific language, using Scheme’s macro system, for describing FRP graphs.
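To make that last point concrete, here is one purely hypothetical shape such a DSL could take (my sketch, not something the prototype implements): a single form that declares the cells and the constraints between them.

;; Hypothetical sketch only: collapse the cell and constraint
;; declarations into one form.
(define-syntax-rule (define-network (cell ...) (constraint arg ...) ...)
  (begin
    (define cell (make-cell 'cell #:merge merge-ephemerals)) ...
    (constraint arg ...) ...))

;; The color network above would then read:
;; (define-network (r g b rgb h s v hsv hex style)
;;   (r:components<->rgb r g b rgb)
;;   (r:components<->hsv h s v hsv)
;;   (r:rgb<->hsv rgb hsv)
;;   (r:rgb->hex-string rgb hex)
;;   (r:rgb-hex->style hex style))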
Alexey Radul’s dissertation was published in 2009. Has anyone made an FRP system based on propagators since then that’s used in real software? I don’t know of anything, but it’s a big information superhighway out there.

Update: Since publishing, I have learned about the following:

- Holograph: A visual editor for propagator networks! Amazing!
- Scoped Propagators: A WIP propagator system with some notable differences from “traditional” propagators.

I wish I had read Alexey Radul's dissertation 10 years ago when I was first exploring FRP. It would have saved me a lot of time spent running into problems that have already been solved that I was not equipped to solve on my own. I have even talked to Gerald Sussman (a key figure in propagator research) in person about the propagator model. That conversation was focused on AI, though, and I didn’t realize that propagators could also be used for FRP. It wasn’t until more recently that friend and colleague Christine Lemmer-Webber, who was present for the aforementioned conversation with Sussman, told me about it. Christine has her own research project for propagators. There are so many interesting things to learn out there, but I am also so tired. Better late than never, I guess!

Anyway, if you made it this far then I hope you have enjoyed reading about propagators and FRP. ’Til next time!

10 months ago 149 votes
Lisp: Icing or Cake?

The Spring Lisp Game Jam 2024 ended one week ago. 48 games were submitted, a new record for the jam! This past week has been a time for participants to play and rate each other’s games. As I explored the entries, I noticed two distinct meta-patterns in how people approached building games with Lisp. I think these patterns apply more broadly to all applications of Lisp. Let’s talk about these patterns in some detail, with examples. But first! Here’s the breakdown of the jam submissions by language:

lang      entries   % (rounded)
----      -------   -----------
guile     15        31
fennel    10        21
clojure    5        10
cl         5        10
racket     4         8
elisp      4         8
s7         3         6
kawa       1         2
owl        1         2

I haven’t rolled up the various Schemes (Guile, Racket, S7, Kawa) into a general scheme category because Scheme is so minimally specified and they are all very distinct implementations for different purposes, not to mention that Racket has a lot more going on than just Scheme.

For the first time ever, Guile came out on top with the most submissions! There’s a very specific reason for this outcome. 11 out of the 15 Guile games were built for the web with Hoot, a Scheme-to-WebAssembly compiler that I work on at the Spritely Institute. 2 of those 11 were official Spritely projects. We put out a call for people to try making games with Hoot before the jam started, and a lot of people took us up on it! Very cool!

The next most popular language, which is typically the most popular language in these jams, is Fennel. Fennel is a Lisp that compiles to Lua. It’s very cool, too! Also of note, at least to me as a Schemer, is that three games used S7. Hmm, there might be something relevant to this post going on there.

The patterns I’m about to talk about could sort of be framed as “The Guile Way vs. The Fennel Way”, but I don’t want to do that. It's not an “us vs. them” thing. It’s wonderful that there are so many flavors of Lisp these days that anyone can find a great implementation that suits their preferences. Not only that, but many of these implementations can be used to make games that anyone can easily play in their web browser! That was not the case several years ago. Incredible!

I want to preface the rest of this post by saying that both patterns are valid, and while I prefer one over the other, that is not to say that the other is inferior. I'll also show how these patterns can be thought of as two ends of a spectrum and how, in the end, compromises must be made. Okay, let’s get into it!

Lisp as icing

The icing pattern is using Lisp as a “scripting” language on top of a cake that is made from C, Rust, and other static languages. The typical way to do this is by embedding a Lisp interpreter into the larger program. If you’re most interested in writing the high-level parts of an application in Lisp then this pattern is the fastest way to get there. All you need is a suitable interpreter/compiler and a way to add the necessary hooks into your application. Since the program is mainly C/Rust/whatever, you can then use emscripten to compile it to WebAssembly and deploy to the web. Instant gratification, but strongly tied to static languages and their toolchains.

S7 is an example of an embeddable Scheme. Guile is also used for extending C programs, though typically that involves dynamically linking to libguile rather than embedding the interpreter into the program’s executable. Fennel takes a different approach, recognizing that there are many existing applications that are already extensible through Lua, and provides a lispy language that compiles to Lua.
Lisp as cake

The cake pattern is using Lisp to implement as much of the software stack as possible. It’s Lisp all the way down... sorta. Rather than embedding Lisp into a non-Lisp program, the cake pattern does the inverse: the majority of the program is written in Lisp. When necessary, shared libraries can be called via a foreign function interface, but this should be kept to a minimum.

This approach takes longer to yield results. Time is spent implementing missing libraries for your Lisp of choice and writing wrappers around the C shared libraries you can’t avoid using. Web deployment gets trickier, too, since the project is not so easily emscriptenable. (You may recognize this as the classic embed vs. extend debate. You’re correct! I'm just adding my own thoughts and applying it specifically to some real-world Lisp projects.)

I mentioned Guile as an option for icing, but Guile really shines best as cake. The initial vision for Guile was to Emacsify other programs by adding a Scheme interpreter to them. These days, the best practice is to write your program in Scheme to begin with. Common Lisp is probably the best example, though. Implementations like SBCL have good C FFIs and can compile efficient native executables, minimizing the desire to use some C for performance reasons.

Case studies

Let’s take a look at some of the languages and libraries used for the Lisp Game Jam and evaluate their icing/cake-ness.

Fennel + love2d

love2d has been a popular choice for solo or small team game development for many years. It is a C++ program that embeds a Lua interpreter, which means it’s a perfect target for Fennel. Most Linux distributions package love2d, so it’s easy to run .love files natively. Additionally, thanks to emscripten, love2d games can be deployed to the web. Thus most Fennel games use love2d. ./soko.bin and Gnomic Vengeance are two games that use this stack.

Fennel + love2d is a perfect example of Lisp as icing. Fennel sits at the very top of the stack, but there’s not really a path to spread Lisp into the layers below. It is also the most successful Lisp game development stack to date.

S7 + raylib

This stack is new to me, but two games used it this time around: GhostHop and Life Predictor. (You really gotta play GhostHop, btw. It’s a great little puzzle game and it is playable on mobile devices.) Raylib is a C library with bindings for many higher-level languages that has become quite popular in recent years. S7 is also implemented in C and is easily embeddable. This makes the combination easy to deploy on the web with emscripten. S7 + raylib is another example of Lisp as icing. I’m curious to see if this stack becomes more popular in future jams.

Guile + Chickadee

This is the stack that I helped build. Chickadee is a game library for Guile that implements almost all of the interesting parts in Scheme, including rendering. Two games were built with Chickadee in the most recent jam: Turbo Racer 3000 and Bloatrunner.

Guile + Chickadee is an example of Lisp as cake. Chickadee wraps some C libraries for low-level tasks such as loading images, audio, and fonts, but it is written in pure Scheme. All the matrix and vector math is in Scheme. Chickadee comes with a set of rendering primitives comparable to love2d and raylib but they’re all implemented in Scheme. I’ve even made progress on rendering vector graphics with Scheme, whereas most other Lisp game libraries use a C library such as nanosvg.
Chickadee has pushed the limits of Guile’s compiler and virtual machine, and Guile has been improved as a result. But it’s the long road. Chickadee is mostly developed by me, alone, in my very limited spare time. It is taking a long time to reach feature parity with more popular game development libraries, but it works quite well for what it is.

Hoot + HTML5 canvas

I also helped build this one. Hoot is a Scheme-to-WebAssembly compiler. Rather than compile the Guile VM (written in C) to Wasm using emscripten, Hoot implements a complete Wasm toolchain and a new backend for Guile’s compiler that emits Wasm directly. Hoot is written entirely in Scheme. Unlike C programs compiled with emscripten that target Wasm 1.0 with linear memory, Hoot targets Wasm 2.0 with GC managed heap types. This gives Hoot a significant advantage: Hoot binaries do not ship a garbage collector and thus are much smaller than Lisp runtimes compiled via emscripten. The Wasm binary for my game weighs in at < 2MiB whereas the love2d game I checked had a nearly 6MiB love.wasm.

Hoot programs can also easily interoperate with JavaScript. Scheme objects can easily be passed to JavaScript, and vice versa, as they are managed in the same heap. With all of the browser APIs just a Wasm import away, an obvious choice for games was the built-in HTML5 canvas API for easy 2D rendering. 11 games used Hoot in the jam, including (shameless plug) Cirkoban and Lambda Dungeon.

Hoot + HTML5 canvas is mostly dense cake with a bit of icing. On one hand, it took a year and significant funding to boot Hoot. We said “no” to emscripten, built our own toolchain, and extended Guile’s compiler. It's Lisp all the way until you hit the browser runtime! We even have a Wasm interpreter that runs on the Guile VM! Hoot rules! It was a risk but it paid off. On the other hand, the canvas API is very high-level. The more cake thing to do would be to use Hoot’s JS FFI to call WebGL and/or WebGPU. Indeed, this is the plan for the future! Wasm GC needs some improvements to make this feasible, but my personal goal is to get Chickadee ported to Hoot. I want Chickadee games to be easy to play natively and in browsers, just like love2d games.

The cake/icing spectrum

I must acknowledge the limitations of the cake approach. We’re not living in a world of Lisp machines, but a world of glorified PDP-11s. Even the tallest of Lisp cakes sits atop an even larger cake made mostly of C. All modern Lisp systems bottom out at some point. Emacs rests on a C core. Guile’s VM is written in C. Hoot runs on mammoth JavaScript engines written in C++ like V8. Games on Hoot currently render with HTML5 canvas rather than WebGL/WebGPU. Good luck using OpenGL without libGL; Chickadee uses guile-opengl which uses the C FFI to call into libGL. Then there’s libpng, FreeType, and more. Who the heck wants to rewrite all this in Lisp? Who even has the resources? Does spending all this time taking the scenic route matter at all, or are we just deluding ourselves because we have fun writing Lisp code?

I think it does matter. Every piece of the stack that can be reclaimed from the likes of C is a small victory. The parts written in Lisp are much easier to hack on, and some of those things become live hackable while our programs are running. They are also memory safe, typically, thanks to GC managed runtimes. Fewer FFI calls means less overhead from traversing the Lisp/C boundary and more safety. As more of the stack becomes Lisp, it starts looking less like icing and more like cake.
Moving beyond games, we can look to the Guix project as a great example of just how tasty the cake can get. Guix took the functional packaging model from the Nix project and made a fresh implementation, replacing the Nix language with Guile. Why? For code staging, code sharing, and improved hackability. Guix also uses an init system written in Guile rather than systemd. Why? For code staging, code sharing, and improved hackability. These are real advantages that make the trade-off of not using the industry-standard thing worth it.

I’ve been using Guix since the early days, and back then it was easy to make the argument that Guix was just reinventing wheels for no reason. But now, over 10 years later, the insistence on maximizing the usage of Lisp has been key to the success of the project. As a user, once you learn the Guix idioms and a bit of Guile, you unlock extraordinary power to craft your OS to your liking. It’s the closest thing you can get to a Lisp machine on modern hardware. The cake approach paid off for Guix, and it could pay off for other projects, too.

If Common Lisp is more your thing, and even if it isn’t, you’ll be amazed by the Trial game engine and how much of it is implemented in Common Lisp rather than wrapping C libraries. There are also projects like Pre-Scheme that give me hope that one day the layers below the managed GC runtime can be implemented in Lisp. Pre-Scheme was developed and successfully used for Scheme 48 and I am looking forward to a modern revival of it thanks to an NLnet grant.

I'm a cake boy

That’s right, I said it: I’m a cake boy. I want to see projects continue to push the boundaries of what Lisp can do. When it comes to the Lisp Game Jam, what excites me most are not the games themselves, but the small advances made to reclaim another little slice of the cake from stale, dry C. I intend to keep pushing the limits for Guile game development with my Chickadee project. It’s not a piece of cake to bake a lispy cake, and the way is often hazy, but I know we can’t be lazy and just do the cooking by the book. Rewrite it in Rust? No way! Rewrite it in Lisp!

12 months ago 144 votes

More in programming

Retreating to Safety

Ten years ago, Apple’s Phil Schiller surprised Apple enthusiasts and developers by walking out on stage at John Gruber’s The Talk Show Live WWDC event and giving an open, human, honest interview to a somewhat jaded community. I wrote this in response:

    Both Apple and Phil Schiller himself took a huge risk in doing this. That they agreed at all is a noteworthy gift to this community of long-time enthusiasts, many of whom have felt under-appreciated as the company has grown. […] Phil’s appearance on the show was warm, genuine, informative, and entertaining. It was human. And humanizing the company and its decisions, especially to developers — remember, developer relations is all under Phil — might be worth the PR risk.

This started a ten-year run of interviews by Apple executives on The Talk Show every year at WWDC that proved to be great, surprisingly safe PR for Apple. No executive ever said something they shouldn’t have (they’re pros), no sensational or negative news stories ever resulted from them, and Apple’s enthusiastic fans and developers felt seen, heard, and appreciated.

*     *     *

For unspecified reasons, Apple has declined to participate this year, ending what had become a beloved tradition in our community — and I can’t help but suspect that it won’t come back. (A lot has changed in the meantime.)

Maybe Apple has good reasons. Maybe not. We’ll see what their WWDC PR strategy looks like in a couple of weeks. In the absence of any other information, it’s easy to assume that Apple no longer wants its executives to be interviewed in a human, unscripted, unedited context that may contain hard questions, and that Apple no longer feels it necessary to show their appreciation to our community and developers in this way.

I hope that’s either not the case, or it doesn’t stay the case for long.

This will be the first WWDC I’m not attending since 2009 (excluding the remote 2020 one, of course). Given my realizations about my relationship with Apple and how they view developers, I’ve decided that it’s best for me to take a break this year, gain some perspective, and decide what my future relationship should look like. Maybe Apple’s leaders are doing that, too.

2 days ago 3 votes
A Developer’s Crash Course in Coming to Japan

Thinking about moving to Japan? You’re not alone—Japan is a popular destination for those hoping to move abroad. What’s more, Japan actually needs more international developers. But how easy is it to immigrate to and work in Japan? Scores of videos on social media warn that living in Japan is quite different from holidaying here, and graphic descriptions of exploitative companies also create doubt.

The truth is that Japan is not the easiest country to immigrate to, nor is it the hardest. Some Japanese tech companies and developer roles offer great work-life balance and good compensation; others do not. Based on other developers’ experiences, you’ll thrive here if you:

- Are an experienced developer
- Value safety, good food, and convenience over a high salary
- Are willing to invest time and effort into learning Japanese over the long term

Read on to discover if Japan is a good fit for you, and the best ways to get a visa and begin your life here.

What is it like working as a developer in Japan?

TokyoDev conducts an annual survey of international developers living in Japan. Many of the questions in TokyoDev’s 2024 survey specifically addressed respondents’ work environments.

Compensation

When TokyoDev asked about “workplace difficulties” in the 2024 survey, 45% of respondents said that “compensation” was their number one problem at work. Overall, compensation for developers in Japan is far lower than the US developer median salary of 120,000 USD (currently 17.5 million yen), but higher than the Indian developer median salary of 640,000 rupees (currently around 1.1 million yen).

Yet evaluating compensation for international developers in Japan, specifically, is trickier than you might expect. It’s hard to define an expected salary range because international developers tend to work in different companies and roles than the average Japanese developer. According to a 2024 survey conducted by the Japanese Ministry of Health, Labor and Welfare, the average annual salary of software engineers in Japan was 5.69 million yen. In a survey conducted that same year by TokyoDev, though, English-speaking international software developers in Japan enjoyed a median salary of 8.5 million yen. Of those international developers who responded, only 71% of them worked at a company headquartered in Japan, and almost 80% of them used English always or frequently, with 79% belonging to an engineering team with many other non-Japanese members.

Wages, then, are heavily influenced by a range of factors, but particularly by whether you’re working for a Japanese or international company. In general, 75% of the international developers surveyed made 6 million yen or more. The real question is, is that enough for you to be comfortable in Japan? The answer is likely to be yes, if you don’t have overseas financial obligations or dependents. If you do, you’ll want to look carefully at rent, grocery, and education prices in your area of choice to guesstimate the expense of your Japanese lifestyle.

Work-life balance

Japan has a tradition of long hours and overtime. The Financial Times reports that the Japanese government has taken many measures to reduce the phenomenon of death from overwork (過労死, karoushi), from capping overtime to 100 hours a month, to setting up a national hotline for employees to report abusive companies. The results seem mixed. The Financial Times article adds that in 2024, employees at 44.5% of 26,000 organizations reported working illegal overtime.
On the other hand, average working hours for men fell to below 45 hours per week, and for women to below 35, which is similar to average working hours in the US. Still, 72% of the developers surveyed by TokyoDev worked for less than 40 hours a week.

In addition, 70% of TokyoDev respondents cited work-life balance as their top workplace perk. The number of respondents happy with their working conditions came in just below that, at 69%. There was some correlation between hours worked and the type of employer, though. Employees at international subsidiaries were slightly more likely to enjoy shorter work weeks than those at Japanese companies.

Remote work

Remote work is still relatively new in Japan. Although more offices adopted the practice during Covid, many of them are now backtracking and requiring employees to return to the office, often with a hybrid schedule. While only 9% of TokyoDev respondents weren’t allowed any remote work, 76% of those required to work in-office were employed by Japan-headquartered companies. By contrast, of the 16% who worked fully remotely, only 57% worked for a Japanese company.

Those with the option to work remotely really enjoy it. When asked what their most important workplace benefit was, 49% of respondents answered “remote work,” outstripping every other answer by far.

Job security

A major plus of working in Japan is job security—which, given the waves of layoffs at American tech companies, may now seem extra appealing. It’s overwhelmingly difficult to fire or lay off an employee with a permanent contract (正社員, seishain) in Japan, due to labor laws designed to protect the individual. This may be why 54% of TokyoDev survey respondents named “job security” as their most important workplace perk.

Not every company will adhere to labor protection laws, and sometimes businesses pressure employees to “voluntarily” resign. Nonetheless, employees have significant legal recourse when companies attempt to fire them, change their contracts, or alter the current workplace conditions (sometimes, even if those conditions were never stated in writing).

Developer stories

TokyoDev regularly interviews developers working at our client companies, for information on both their specific positions and their general work environment. Our interviewees work with a variety of technology in many different roles, and at companies ranging from fintech enterprises like PayPay to game companies like Wizcorp.

Why do developers choose Japan?

In 2024 TokyoDev also asked developers, “What’s your favorite thing about Japan?” The results were:

- Safety: 21%
- Food: 13%
- Convenience: 11%
- Culture: 8%
- Peacefulness: 7%
- Nature: 5%

Interestingly, there was a strong correlation between the amount of time someone had lived in Japan and their answer. Those who had been in Japan three years or less more frequently chose “food” or “culture.” Those who’d lived in Japan for four or more years were significantly more likely to answer “safety” or “peacefulness.”

Safety

It’s true that Japan enjoys a lower crime rate than many developed nations. The Security Journal UK ranked it ninth in a list of the world’s twenty safest countries. In 2024, World Population Review selected Tokyo as the safest city in the world. The homicide rate in 2023 was only 0.23 per 100,000 people, and has been steadily declining since the nineties. There are a few women-specific concerns, such as sexual violence.
Nonetheless, the subjective experience of many women in the TokyoDev audience is that Japan feels safe; for example, they experience no trepidation walking around late at night.

Of course, crime statistics don’t take into account natural disasters, of which Japan has more than its fair share. Thanks to being located on the Ring of Fire, Japan regularly copes with earthquakes and volcanic activity, and its location in the Pacific means that it is also affected by typhoons and tsunamis. To compensate, Japan also takes natural disaster countermeasures extremely seriously. It’s certainly the only country I’ve been to that posts large-scale evacuation maps on the side of the street, stores emergency supply stockpiles in public parks, and often requires schoolchildren to keep earthquake safety headgear at their desks.

Food

Food is another major draw. Many respondents simply wrote that “food” or “fresh, affordable food” was their favorite thing about Japan, but a few listed specific dishes. Favorite Japanese foods of the TokyoDev audience include:

- Yakiniku (self-grilled meat)
- Ramen
- Peaches
- Sushi
- Hiroshima-style okonomiyaki (savory pancake)
- Curry rice
- Onigiri (rice balls)

Of those, sushi was mentioned most often. One respondent also answered the question with “drinking,” if you think that should count!

Personal experiences

Our contributors have also shared their personal experiences of moving to and working in Japan. We’ve got articles from Filipino, Indonesian, Australian, Vietnamese, and Mongolian developers, as well as others sharing what it’s like to work as a female software developer in Japan, or to live in Japan with a disability.

Why shouldn’t you live in Japan?

Safety, food, convenience, and culture are the most commonly-cited upsides of living in Japan. The downsides are the necessity of learning the language and some strict, yet often-unspoken, cultural expectations.

Language

Fluency in Japanese is not strictly necessary to live or work in Japan. Access to government services for you and your family, such as Japanese public school, is possible even if you speak little Japanese. (That doesn’t mean that most city hall clerks speak English; usually they’ll either locate a translator, or work with you via a translation app.)

Nonetheless, TokyoDev’s 2024 survey showed that language ability was highly correlated to social success in Japan. In particular, 56% of all respondents were happy or very happy with their adjustment to Japanese culture. Breaking down that number, though, 76% of those with fluent or native Japanese ability reported being happy with their cultural adjustment, while only 34% of those with little or no Japanese ability were similarly happy. The same held true for social life satisfaction: 59% of those with fluent or native Japanese ability were happy or very happy with their social life, compared to 42% of those who don’t speak much Japanese.

While English study is compulsory in Japan and starts in elementary school, as of 2025, only 28% of Japanese people speak English, and most of them can’t converse with high fluency. Living and working in Japan is possible without Japanese, but it’s hard to integrate, make friends, and participate in cultural activities if you can’t communicate with the locals.

Cultural expectations

As mentioned above, fluency in Japanese is closely allied to fluency in Japanese culture. At the same time, one does not necessarily imply the other.
It’s possible to be fluent in Japanese, but still not grasp many of the unspoken rules your Japanese friends, neighbors, and coworkers operate by. Japan’s culture is both high-context and specifically averse to confrontation and outspokenness; if you get it “wrong,” people aren’t likely to tell you so.

Japanese culture also values conformity: as the saying goes, “the nail that sticks up, gets hammered down.” While there are hints of things changing, with many Japanese companies saying support for greater diversity is necessary, minorities or those who are different may experience pressure to fit in.

Introspection is required: are you the kind of person who’s adept at “reading the room,” a highly-valued quality in Japan? Conversely, are you self-confident enough to not sweat the small stuff? Either of these personality types may do well in Japan, but if social acceptance is very important to you, and you’re also uncomfortable with feeling occasionally awkward or uncertain, then you may struggle more to adjust.

I want to go! How can I get there?

If you’ve decided to immigrate to Japan, there are a number of ways to acquire a work visa. The simplest way is to get hired by a company operating in Japan. Alternatively, you can start your own business in Japan, come over on a Working Holiday, or even—if you’re very determined—arrive first as an English teacher. Let’s begin with the most straightforward route: getting hired as a developer.

Getting a developer job in Japan

As mentioned before, Japan needs more international developers. Some types of developers, though, will find it easier to get a job in Japan. In particular, companies in Japan are looking for the following:

- Senior developers. Companies are particularly interested in those with management experience and soft skills such as communication and leadership.
- Backend developers. This is one of the most widely-available roles for those who don’t speak Japanese.
- Developers who know Python. Python is one of Japan’s top in-demand languages.
- AI and Machine Learning Specialists. Japan is leaning hard on AI to help cope with demographic changes.
- Those who already know, or are willing to learn, Japanese.

Combining those criteria, an experienced developer who speaks Japanese should have little difficulty finding a job! If you’re none of these things, you don’t need to give up—you just need to be patient, flexible, and willing to think outside the box. As Mercari Senior Technical Recruiter Clement Chidiac told me, “I know a bunch of people that managed to land a job because they’ve tried harder, going to meetups, reaching out to people, networking, that kind of thing.” Edmund Ho, Principal Consultant at Talisman Corporation, agreed that overseas candidates hoping to work in Japan for the first time face a tough road. He believes candidates should maintain a realistic, but optimistic, view of the process. “Keep a longer mindset,” he suggested. “Maybe you don’t get an offer the first year, but you do the second year.”

“Stepping-stone” jobs

Candidates from overseas do face a severe disadvantage: many companies, even those founded by non-Japanese people, are only open to developers who already live in Japan. Although getting a work visa for an overseas employee is cheaper and easier in Japan than in many countries, it still presents a barrier some organizations are reluctant to overcome. By contrast, once you’re already on the ground, more companies will be interested in your skills.
This is why some developers settle on a “stepping-stone” position—in other words, a job that may not be all you hoped for, but that is willing to sponsor your visa and bring you into the country. Here’s where some important clarification on Japanese work visas is required.

Work visas

The most common visa for developers is the Engineer/Specialist in Humanities/International Services visa, a broad-category visa for foreign workers in those fields. To qualify, a developer must have a college degree, or have ten years of work experience, or have passed an approved IT exam.

Another relatively common visa for high-level developers is the Highly-Skilled Professional (HSP) visa. To acquire it, applicants must score at least 70 points on an assessment scale that addresses age, education level, Japanese level, income, and more. The HSP visa has many advantages, but there is one important difference between it and the more standard Engineer visa.

The Engineer/Specialist in Humanities/International Services visa is not tied to a specific company. It grants you the legal right to work within those fields for a specific period of time in Japan. The Highly-Skilled Professional visa, on the other hand, is tied to a specific employer. If you want to change jobs, you’ll need to update your residency status with immigration.

Some unscrupulous companies will try to claim that because they sponsored your Engineer/Specialist in Humanities/International Services visa, you are obligated to remain with their company or risk being deported. This is not the case. If you do leave your job without another one lined up, you have three months to find another before you may be at risk for deportation.

In addition, the fields of work covered by the Engineer/Specialist in Humanities/International Services visa are incredibly broad, and include everything from sales to product development to language instruction. As TokyoDev specifically confirmed with immigration, you can even come to Japan as an English instructor, then later work as a developer, without needing to alter your visa. Those with the HSP visa will need to go to immigration and alter their residency status each time they change roles.

However, if you have the points and qualifications for an HSP visa, that means you’re also eligible for Permanent Residency within one to three years. Once you’ve obtained Permanent Residency, you’re free to pursue whatever sort of employment you like.

International or Japanese company?

As you begin your job hunt, you’ll hopefully receive responses from several sorts of companies: Japanese companies that also primarily hire Japanese people, Japanese companies with designated multinational developer teams, companies that were founded in Japan but nonetheless hire international developers for a variety of positions, and international subsidiaries. There are advantages and disadvantages to working with mostly-Japanese or mostly-international companies.

Japanese companies

The more Japanese a company is—both in philosophy and personnel—the more you’ll need Japanese language skills to thrive there. It’s true that a number of well-established Japanese tech companies are now creating developer teams designed to be multinational from the outset: typically, these are very English-language friendly. Some organizations, such as Money Forward, have even adopted English as the official company language.
However, this often results in an institutional language barrier between development teams and the rest of the company, which is usually staffed by Japanese speakers. Developers are still encouraged to learn Japanese, particularly as they climb the promotional ladder, to help facilitate interdepartmental communication. Some companies, such as DeepX and Beatrust, either offer language classes themselves or provide a stipend for language learning.

In addition to the language, you’ll also need to become “fluent” in Japanese business norms, which can be much more rigid and hierarchical than American or European company cultures. For example, at introductory drinking parties (themselves a potential surprise for many!), it is customary for new employees or women employees to go around with a bottle of beer and pour glasses for their managers and the company’s senior management. As mentioned in the cultural expectations section, most Japanese people won’t correct you even if you’re doing it all wrong, which leaves foreigners to discover their gaffes via trial-and-error.

The advantage here is that you’ll be pressured, hopefully in a good way, to adapt swiftly to the Japanese language and business culture. There’s a sink-or-swim element to this approach, but if you’re serious about settling in Japan, then this “downside” could benefit you in the long run.

Finally, there is the above-mentioned issue of compensation. On average, international companies pay more than Japanese ones; the median salary difference is around three million yen per year. Specific roles may be paid at higher rates, though, and most Japanese companies do offer bonuses. Many Japanese companies also offer other perks, such as housing stipends, spouse and child allowances, etc. If you receive an offer, it’s worth examining the whole compensation package before you make a decision.

International companies

The advantages of working either for an international company, or for a Japanese company that already employs many non-Japanese people, are straightforward: you can usually communicate in English, you already understand most of the business norms, and such companies typically pay developers more.

You do run the risk of getting stuck in a rut, though. As mentioned earlier, TokyoDev found in its own survey that the correlation between Japanese language skills and social life satisfaction is high. You can of course study Japanese in your free time—and many do—but the more your work environment and social life revolve around English, the more difficult acquiring Japanese becomes.

Want a job? Start here!

If you’re ready to begin your job hunt, you can start with the TokyoDev job board. TokyoDev only works with companies we feel good about sending applicants to, and the job board includes positions that don’t require Japanese and that accept candidates from abroad.

Other alternatives

These visas don’t lead directly to working as a software developer in Japan, but can still help you get your foot in the door.

DIY options

If you prefer to be your own boss, there are several visas that allow you to set up a business in Japan. The Business Manager visa is typically good for one year, although repeated applicants may get longer terms. Applicants should have five million yen in a bank account when they apply, and there are some complicated requirements for getting and keeping the visa, such as maintaining an office, paying yourself a minimum salary, following proper accounting procedures, etc.
The Startup visa is another option if the Business Manager visa appeals to you, but you don’t yet have the funds or connections to make it happen. You’ll be granted the equivalent of a Business Manager visa for up to one year so that you can launch your business in Japan.

Working Holiday visa

This is the path our own founder Paul McMahon took to get his first developer job in Japan. If you meet various qualifications, and you belong to a country that has a Working Holiday visa agreement with Japan, you can come to Japan for a period of one year and do work that is “incidental” to your holiday. In practice, this means you can work almost any job except for those that are considered “immoral” (bars, clubs, gambling, etc.).

The Working Holiday visa is a great opportunity for those who have the option. It allows you to experience living and working in Japan without any long-term commitments, and also permits you to job-hunt freely without time or other visa constraints.

J-Find visa

The J-Find visa is a one-year visa, intended to let graduates of top universities job-hunt or prepare to found a start-up in Japan. To qualify, applicants should have:

- A degree from a university ranked in the top 100 by at least two world university rankings, or completed a graduate course there
- Graduated within five years of the application date
- At least 200,000 yen for initial living expenses

TokyoDev contributor Oguzhan Karagözoglu received a J-Find visa, though he did run into some difficulties, particularly given immigration’s unfamiliarity with this relatively new type of visa.

Digital Nomad visa

This is another new visa category that allows foreigners from specific countries, who must make 10 million yen or more a year, to work remotely from Japan for six months. Given that the application process alone can take months, the visa isn’t extendable or renewable, and you’re not granted residency, it’s questionable whether the pay-off is worth the effort. Still, if you have the option to work remotely and want to test out living in Japan before committing long-term, this is one way to do that. TokyoDev contributor Christian Mack was not only one of the first to acquire the Digital Nomad visa, but has since opened a consultancy to help others through the process.

Conclusion

If your takeaway from this article is, “Japan, here I come!” then there are more TokyoDev articles that can help you on your way. For example, if you want to bring your pets with you, you should know that you need to start preparing the import paperwork up to seven months in advance. If you’re ready now to start applying for jobs, check out the TokyoDev job board. You’ll also want to look at how to write a resume for a job in Japan, and our industry insider advice on passing the resume screening process. These tips for interviewing at Japanese tech companies would be useful, and when you’re ready for it, see this guide to salary negotiations.

Once you’ve landed that job, we’ve got articles on everything from bringing your family with you, to getting your first bank account and apartment. In addition, the TokyoDev Discord hosts regular discussions on all these topics and more. It’s a great chance to make developer friends in Japan before you ever set foot in the country. Once you are here, you can join some of Japan’s top tech meetups, including many organized by TokyoDev itself. We look forward to seeing you soon!

3 days ago 6 votes
Let's talk about the future of Remix and react-router (tip)

We go over the "Wake up, Remix!" article by the Remix team, talk about their decisions moving forward, and speculate on what is coming next.

3 days ago 2 votes
the algebra of dependent types

TIL (or this week-ish I learned) why big-sigma and big-pi turn up in the notation of dependent type theory. I’ve long been aware of the zoo of more obscure Greek letters that turn up in papers about type system features of functional programming languages, μ, Λ, Π, Σ. Their meaning is usually clear from context but the reason for the choice of notation is usually not explained. I recently stumbled on an explanation for Π (dependent functions) and Σ (dependent pairs) which turn out to be nicer than I expected, and closely related to every-day algebraic data types.

sizes of types

The easiest way to understand algebraic data types is by counting the inhabitants of a type. For example: the unit type () has one inhabitant, (), and the number 1 is why it’s called the unit type; the bool type has two inhabitants, false and true. I have even seen these types called 1 and 2 (cruelly, without explanation) in occasional papers.

product types

Or pairs or (more generally) tuples or records. Usually written,

    (A, B)

The pair contains an A and a B, so the number of possible values is the number of possible A values multiplied by the number of possible B values. So it is spelled in type theory (and in Standard ML) like,

    A * B

sum types

Or disjoint union, or variant record. Declared in Haskell like,

    data Either a b = Left a | Right b

Or in Rust like,

    enum Either<A, B> {
        Left(A),
        Right(B),
    }

A value of the type is either an A or a B, so the number of possible values is the number of A values plus the number of B values. So it is spelled in type theory like,

    A + B

dependent pairs

In a dependent pair, the type of the second element depends on the value of the first. The classic example is a slice, roughly,

    struct IntSlice {
        len: usize,
        elem: &[i64; len],
    }

(This might look a bit circular, but the idea is that an array [i64; N] must be told how big it is – its size is an explicit part of its type – but an IntSlice knows its own size. The traditional dependent “vector” type is a sized linked list, more like my array type than my slice type.)

The classic way to write a dependent pair in type theory is like,

    Σ len: usize . Array(Int, len)

The big sigma binds a variable that has a type annotation, with a scope covering the expression after the dot – similar syntax to a typed lambda expression. We can expand a simple example like this into a many-armed sum type: either an array of length zero, or an array of length 1, or an array of length 2, … but in a sigma type the discriminant is user-defined instead of hidden. The number of possible values of the type comes from adding up all the alternatives, a summation just like the big sigma summation we were taught in school.

    ∑ (a ∈ A) B(a)

When the second element doesn’t depend on the first element, we can count the inhabitants like,

    ∑ (a ∈ A) B = A * B

And the sigma type simplifies to a product type.

telescopes

An aside from the main topic of these notes, I also recently encountered the name “telescope” for a multi-part dependent tuple or record. The name “telescope” comes from de Bruijn’s AUTOMATH, one of the first computerized proof assistants. (I first encountered de Bruijn as the inventor of numbered lambda bindings.)
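Before moving on to functions, here is a small Rust sketch of my own (not from the original post) that makes the sigma-type counting concrete, by expanding a tiny dependent pair into the many-armed sum type described above:

    // A dependent pair Σ len ∈ {0,1,2} . [bool; len], expanded into
    // a plain sum type: one variant per possible value of `len`.
    #[allow(dead_code)]
    enum BoolSlice {
        Len0([bool; 0]), // 2^0 = 1 possible payload
        Len1([bool; 1]), // 2^1 = 2 possible payloads
        Len2([bool; 2]), // 2^2 = 4 possible payloads
    }

    fn main() {
        // Summing over the alternatives, school-style: ∑ 2^len for len in 0..3.
        let inhabitants: u32 = (0..3).map(|len| 2u32.pow(len)).sum();
        println!("Σ len ∈ {{0,1,2}} . [bool; len] has {inhabitants} inhabitants"); // 7
    }

If every arm carried the same payload type B instead, the sum would collapse to 3 * B, which is exactly the simplification to a product type noted above.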
dependent functions

The return type of a dependent function can vary according to the argument it is passed. For example, to construct an array we might write something like,

    fn repeat_zero(len: usize) -> [i64; len] {
        [0; len]
    }

The classic way to write the type of repeat_zero() is very similar to the IntSlice dependent pair, but with a big pi instead of a big sigma:

    Π len: usize . Array(Int, len)

Mmm, pie.

To count the number of possible (pure, total) functions A ➞ B, we can think of each function as a big lookup table with A entries each containing a B. That is, a big tuple (B, B, … B), that is, B * B * … * B, that is, Bᴬ. Functions are exponential types. We can count a dependent function, where the number of possible Bs depends on which A we are passed,

    ∏ (a ∈ A) B(a)

danger

I have avoided the terms “dependent sum” and “dependent product”, because they seem perfectly designed to cause confusion over whether I am talking about variants, records, or functions. It kind of makes me want to avoid algebraic data type jargon, except that there isn’t a good alternative for “sum type”. Hmf.
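As a sanity check on the functions-as-lookup-tables counting above, here is another sketch of my own (again, not from the post): there are exactly 2² = 4 pure total functions bool ➞ bool, one per possible (f(false), f(true)) pair, and we can enumerate them all.

    // Each pure total function bool -> bool is determined by the pair
    // (f(false), f(true)), so there are 2^2 = 4 of them in total.
    fn main() {
        let tables = [(false, false), (false, true), (true, false), (true, true)];
        for (on_false, on_true) in tables {
            // Reconstruct the function encoded by this row of the lookup table.
            let f = move |b: bool| if b { on_true } else { on_false };
            println!("f(false) = {}, f(true) = {}", f(false), f(true));
        }
    }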

3 days ago 4 votes
What does "Undecidable" mean, anyway

Systems Distributed

I'll be speaking at Systems Distributed next month! The talk is brand new and will aim to showcase some of the formal methods mental models that would be useful in mainstream software development. It has added some extra stress on my schedule, though, so expect the next two monthly releases of Logic for Programmers to be mostly minor changes.

What does "Undecidable" mean, anyway

Last week I read Against Curry-Howard Mysticism, which is a solid article I recommend reading. But this newsletter is actually about one comment:

    I like to see posts like this because I often feel like I can’t tell the difference between BS and a point I’m missing. Can we get one for questions like “Isn’t XYZ (Undecidable|NP-Complete|PSPACE-Complete)?”

I've already written one of these for NP-complete, so let's do one for "undecidable". Step one is to pull a technical definition from the book Automata and Computability:

    A property P of strings is said to be decidable if ... there is a total Turing machine that accepts input strings that have property P and rejects those that do not. (pg 220)

Step two is to translate the technical computer science definition into more conventional programmer terms. Warning, because this is a newsletter and not a blog post, I might be a little sloppy with terms.

Machines and Decision Problems

In automata theory, all inputs to a "program" are strings of characters, and all outputs are "true" or "false". A program "accepts" a string if it outputs "true", and "rejects" if it outputs "false". You can think of this as automata studying all pure functions of type f :: string -> boolean. Problems solvable by finding such an f are called "decision problems".

This covers more than you'd think, because we can bootstrap more powerful functions from these. First, as anyone who's programmed in bash knows, strings can represent any other data. Second, we can fake non-boolean outputs by instead checking if a certain computation gives a certain result. For example, I can reframe the function add(x, y) = x + y as a decision problem like this:

    IS_SUM(str) {
        x, y, z = split(str, "#")
        return x + y == z
    }

Then because IS_SUM("2#3#5") returns true, we know 2 + 3 == 5, while IS_SUM("2#3#6") is false. Since we can bootstrap parameters out of strings, I'll just say it's IS_SUM(x, y, z) going forward.

A big part of automata theory is studying different models of computation with different strengths. One of the weakest is called "DFA". I won't go into any details about what DFA actually can do, but the important thing is that it can't solve IS_SUM. That is, if you give me a DFA that takes inputs of form x#y#z, I can always find an input where the DFA returns true when x + y != z, or an input which returns false when x + y == z.

It's really important to keep this model of "solve" in mind: a program solves a problem if it correctly returns true on all true inputs and correctly returns false on all false inputs.

(total) Turing Machines

A Turing Machine (TM) is a particular type of computation model. It's important for two reasons:

1. By the Church-Turing thesis, a Turing Machine is the "upper bound" of how powerful (physically realizable) computational models can get. This means that if an actual real-world programming language can solve a particular decision problem, so can a TM.
Conversely, if the TM can't solve it, neither can the programming language.1

2. It's possible to write a Turing machine that takes a textual representation of another Turing machine as input, and then simulates that Turing machine as part of its computations.

Property (1) means that we can move between different computational models of equal strength, proving things about one to learn things about another. That's why I'm able to write IS_SUM in a pseudocode instead of writing it in terms of the TM computational model (and why I was able to use split for convenience).

Property (2) does several interesting things. First of all, it makes it possible to compose Turing machines. Here's how I can roughly ask if a given number is the sum of two primes, with "just" addition and boolean functions:

    IS_SUM_TWO_PRIMES(z):
        x := 1
        y := 1
        loop {
            if x > z {return false}
            if IS_PRIME(x) {
                if IS_PRIME(y) {
                    if IS_SUM(x, y, z) {
                        return true;
                    }
                }
            }
            y := y + 1
            if y > x {
                x := x + 1
                y := 0
            }
        }

Notice that without the if x > z {return false}, the program would loop forever on z=2. A TM that always halts for all inputs is called total.

Property (2) also makes "Turing machines" a possible input to functions, meaning that we can now make decision problems about the behavior of Turing machines. For example, "does the TM M either accept or reject x within ten steps?"2

    IS_DONE_IN_TEN_STEPS(M, x) {
        for (i = 0; i < 10; i++) {
            `simulate M(x) for one step`
            if (`M accepted or rejected`) {
                return true
            }
        }
        return false
    }

Decidability and Undecidability

Now we have all of the pieces to understand our original definition:

    A property P of strings is said to be decidable if ... there is a total Turing machine that accepts input strings that have property P and rejects those that do not. (220)

Let IS_P be the decision problem "Does the input satisfy P?" Then IS_P is decidable if it can be solved by a Turing machine, ie, I can provide some IS_P(x) machine that always accepts if x has property P, and always rejects if x doesn't have property P. If I can't do that, then IS_P is undecidable.

IS_SUM(x, y, z) and IS_DONE_IN_TEN_STEPS(M, x) are decidable properties. Is IS_SUM_TWO_PRIMES(z) decidable? Some analysis shows that our corresponding program will either find a solution, or have x > z and return false. So yes, it is decidable.

Notice there's an asymmetry here. To prove some property is decidable, I just need to find one program that correctly solves it. To prove some property is undecidable, I need to show that any possible program, no matter what it is, doesn't solve it.

So with that asymmetry in mind, are there any undecidable problems? Yes, quite a lot. Recall that Turing machines can accept encodings of other TMs as input, meaning we can write a TM that checks properties of Turing machines. And, by Rice's Theorem, almost every nontrivial semantic3 property of Turing machines is undecidable. The conventional way to prove this is to first find a single undecidable property H, and then use that to bootstrap undecidability of other properties.

The canonical and most famous example of an undecidable problem is the Halting problem: "does machine M halt on input i?" It's pretty easy to prove undecidable, and easy to use it to bootstrap other undecidability properties. But again, any nontrivial property is undecidable. Checking a TM is total is undecidable. Checking a TM accepts any inputs is undecidable. Checking a TM solves IS_SUM is undecidable. Etc etc etc.
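To ground the string-encoding trick in something executable, here's a quick Rust rendering of IS_SUM, my own translation of the pseudocode above (not code from the newsletter). It is a total decision procedure: it accepts exactly the strings of form x#y#z where x + y == z, and rejects everything else, including malformed input.

    // IS_SUM as a total decision procedure over strings:
    // parse "x#y#z" and accept iff x + y == z.
    fn is_sum(input: &str) -> bool {
        let parts: Vec<&str> = input.split('#').collect();
        if parts.len() != 3 {
            return false; // malformed inputs are rejected, never looped on
        }
        match (parts[0].parse::<i64>(), parts[1].parse::<i64>(), parts[2].parse::<i64>()) {
            (Ok(x), Ok(y), Ok(z)) => x + y == z,
            _ => false,
        }
    }

    fn main() {
        assert!(is_sum("2#3#5"));  // accepts, so we know 2 + 3 == 5
        assert!(!is_sum("2#3#6")); // rejects, so we know 2 + 3 != 6
    }

Because is_sum returns on every input, it's the programming-language analogue of a total Turing machine, which is what makes IS_SUM a decidable property.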
What this doesn't mean in practice

I often see the halting problem misconstrued as "it's impossible to tell if a program will halt before running it." This is wrong. The halting problem says that we cannot create an algorithm that, when applied to an arbitrary program, tells us whether the program will halt or not. It is absolutely possible to tell if many programs will halt or not. It's possible to find entire subcategories of programs that are guaranteed to halt. It's possible to say "a program constructed following constraints XYZ is guaranteed to halt."

The actual consequence of undecidability is more subtle. If we want to know if a program has property P, undecidability tells us:

1. We will have to spend time and mental effort to determine if it has P
2. We may not be successful.

This is subtle because we're so used to living in a world where everything's undecidable that we don't really consider what the counterfactual would be like. In such a world there might be no need for Rust, because "does this C program guarantee memory-safety" is a decidable property. The entire field of formal verification could be unnecessary, as we could just check properties of arbitrary programs directly. We could automatically check if a change in a program preserves all existing behavior. Lots of famous math problems could be solved overnight.

(This to me is a strong "intuitive" argument for why the halting problem is undecidable: a halt detector can be trivially repurposed as a program optimizer / theorem-prover / bcrypt cracker / chess engine. It's too powerful, so we should expect it to be impossible.)

But because we don't live in that world, all of those things are hard problems that take effort and ingenuity to solve, and even then we often fail.

1. To be pedantic, a TM can't do things like "scrape a webpage" or "render a bitmap", but we're only talking about computational decision problems here. ↩

2. One notation I've adopted in Logic for Programmers is marking abstract sections of pseudocode with backticks. It's really handy! ↩

3. Nontrivial meaning "at least one TM has this property and at least one TM doesn't have this property". Semantic meaning "related to whether the TM accepts, rejects, or runs forever on a class of inputs". IS_DONE_IN_TEN_STEPS is not a semantic property, as it doesn't tell us anything about inputs that take longer than ten steps. ↩

4 days ago 4 votes