I’ve been interested in functional reactive programming (FRP) for about a decade now. I even wrote a couple of blog posts back in 2014 describing my experiments. My initial source of inspiration was Elm, the Haskell-like language for the web that once had FRP as a core part of the language. Evan Czaplicki’s Strange Loop 2013 talk really impressed me, especially that Mario demo. From there, I explored the academic literature on the subject. Ultimately, I created and then abandoned a library that focused on FRP for games. It was a neat idea, but the performance was terrible. The overhead of my kinda-sorta FRP system was part of the problem, but mostly it was my own inexperience. I didn’t know how to optimize effectively and my implementation language, Guile, did not have as many optimization passes as it does now. Also, realtime simulations like games require much more careful use of heap allocation. I found that, overhead aside, FRP is a bad fit for things like scripting...
10 months ago


More from dthompson

Guile-websocket 0.2.0 released

I'm happy to announce that guile-websocket 0.2.0 has been released! Guile-websocket is an implementation of the WebSocket protocol, both the client and server sides, for Guile Scheme. This release introduces breaking changes that overhaul the client and server implementations in order to support non-blocking I/O and TLS encrypted connections. source tarball: https://files.dthompson.us/guile-websocket/guile-websocket-0.2.0.tar.gz signature: https://files.dthompson.us/guile-websocket/guile-websocket-0.2.0.tar.gz.asc See the guile-websocket project page for more information. Bug reports, bug fixes, feature requests, and patches are welcomed.

4 months ago 67 votes
Wasm GC isn’t ready for realtime graphics

Wasm GC is a wonderful thing that is now available in all major web browsers since slowpoke Safari/WebKit finally shipped it in December. It provides a hierarchy of heap allocated reference types and a set of instructions to operate on them. Wasm GC enables managed memory languages to take advantage of the advanced garbage collectors inside web browser engines. It’s now possible to implement a managed memory language without having to ship a GC inside the binary. The result is smaller binaries, better performance, and better integration with the host runtime.

However, Wasm GC has some serious drawbacks when compared to linear memory. I enjoy playing around with realtime graphics programming in my free time, but I was disappointed to discover that Wasm GC just isn’t a good fit for that right now. I decided to write this post because I’d like to see Wasm GC on more or less equal footing with linear memory when it comes to binary data manipulation.

Hello triangle

For starters, let’s take a look at what a “hello triangle” WebGL demo looks like with Wasm GC. I’ll use Hoot, the Scheme to Wasm compiler that I work on, to build it. Below is a Scheme program that declares imports for the subset of the WebGL, HTML5 Canvas, etc. APIs that are necessary and then renders a single triangle:

(use-modules (hoot ffi))

;; Document
(define-foreign get-element-by-id "document" "getElementById" (ref string) -> (ref null extern))

;; Element
(define-foreign element-width "element" "width" (ref extern) -> i32)
(define-foreign element-height "element" "height" (ref extern) -> i32)

;; Canvas
(define-foreign get-canvas-context "canvas" "getContext" (ref extern) (ref string) -> (ref null extern))

;; WebGL
(define GL_VERTEX_SHADER 35633)
(define GL_FRAGMENT_SHADER 35632)
(define GL_COMPILE_STATUS 35713)
(define GL_LINK_STATUS 35714)
(define GL_ARRAY_BUFFER 34962)
(define GL_STATIC_DRAW 35044)
(define GL_COLOR_BUFFER_BIT 16384)
(define GL_TRIANGLES 4)
(define GL_FLOAT 5126)
(define-foreign gl-create-shader "gl" "createShader" (ref extern) i32 -> (ref extern))
(define-foreign gl-delete-shader "gl" "deleteShader" (ref extern) (ref extern) -> none)
(define-foreign gl-shader-source "gl" "shaderSource" (ref extern) (ref extern) (ref string) -> none)
(define-foreign gl-compile-shader "gl" "compileShader" (ref extern) (ref extern) -> none)
(define-foreign gl-get-shader-parameter "gl" "getShaderParameter" (ref extern) (ref extern) i32 -> i32)
(define-foreign gl-get-shader-info-log "gl" "getShaderInfoLog" (ref extern) (ref extern) -> (ref string))
(define-foreign gl-create-program "gl" "createProgram" (ref extern) -> (ref extern))
(define-foreign gl-delete-program "gl" "deleteProgram" (ref extern) (ref extern) -> none)
(define-foreign gl-attach-shader "gl" "attachShader" (ref extern) (ref extern) (ref extern) -> none)
(define-foreign gl-link-program "gl" "linkProgram" (ref extern) (ref extern) -> none)
(define-foreign gl-use-program "gl" "useProgram" (ref extern) (ref extern) -> none)
(define-foreign gl-get-program-parameter "gl" "getProgramParameter" (ref extern) (ref extern) i32 -> i32)
(define-foreign gl-get-program-info-log "gl" "getProgramInfoLog" (ref extern) (ref extern) -> (ref string))
(define-foreign gl-create-buffer "gl" "createBuffer" (ref extern) -> (ref extern))
(define-foreign gl-delete-buffer "gl" "deleteBuffer" (ref extern) (ref extern) -> (ref extern))
(define-foreign gl-bind-buffer "gl" "bindBuffer" (ref extern) i32 (ref extern) -> none)
(define-foreign gl-buffer-data "gl" "bufferData" (ref extern) i32 (ref eq) i32 -> none)
(define-foreign gl-enable-vertex-attrib-array "gl" "enableVertexAttribArray" (ref extern) i32 -> none)
(define-foreign gl-vertex-attrib-pointer "gl" "vertexAttribPointer" (ref extern) i32 i32 i32 i32 i32 i32 -> none)
(define-foreign gl-draw-arrays "gl" "drawArrays" (ref extern) i32 i32 i32 -> none)
(define-foreign gl-viewport "gl" "viewport" (ref extern) i32 i32 i32 i32 -> none)
(define-foreign gl-clear-color "gl" "clearColor" (ref extern) f64 f64 f64 f64 -> none)
(define-foreign gl-clear "gl" "clear" (ref extern) i32 -> none)

(define (compile-shader gl type source)
  (let ((shader (gl-create-shader gl type)))
    (gl-shader-source gl shader source)
    (gl-compile-shader gl shader)
    (unless (= (gl-get-shader-parameter gl shader GL_COMPILE_STATUS) 1)
      (let ((info (gl-get-shader-info-log gl shader)))
        (gl-delete-shader gl shader)
        (error "shader compilation failed" info)))
    shader))

(define (link-shader gl vertex-shader fragment-shader)
  (let ((program (gl-create-program gl)))
    (gl-attach-shader gl program vertex-shader)
    (gl-attach-shader gl program fragment-shader)
    (gl-link-program gl program)
    (unless (= (gl-get-program-parameter gl program GL_LINK_STATUS) 1)
      (let ((info (gl-get-program-info-log gl program)))
        (gl-delete-program gl program)
        (error "program linking failed" info)))
    program))

;; Setup GL context
(define canvas (get-element-by-id "canvas"))
(define gl (get-canvas-context canvas "webgl"))
(when (external-null? gl)
  (error "unable to create WebGL context"))

;; Compile shader
(define vertex-shader-source
  "attribute vec2 position;
attribute vec3 color;
varying vec3 fragColor;
void main() {
  gl_Position = vec4(position, 0.0, 1.0);
  fragColor = color;
}")
(define fragment-shader-source
  "precision mediump float;
varying vec3 fragColor;
void main() {
  gl_FragColor = vec4(fragColor, 1);
}")
(define vertex-shader (compile-shader gl GL_VERTEX_SHADER vertex-shader-source))
(define fragment-shader (compile-shader gl GL_FRAGMENT_SHADER fragment-shader-source))
(define shader (link-shader gl vertex-shader fragment-shader))

;; Create vertex buffer
(define stride (* 4 5))
(define buffer (gl-create-buffer gl))
(gl-bind-buffer gl GL_ARRAY_BUFFER buffer)
(gl-buffer-data gl GL_ARRAY_BUFFER
                #f32(-1.0 -1.0 1.0 0.0 0.0
                      1.0 -1.0 0.0 1.0 0.0
                      0.0  1.0 0.0 0.0 1.0)
                GL_STATIC_DRAW)

;; Draw
(gl-viewport gl 0 0 (element-width canvas) (element-height canvas))
(gl-clear gl GL_COLOR_BUFFER_BIT)
(gl-use-program gl shader)
(gl-enable-vertex-attrib-array gl 0)
(gl-vertex-attrib-pointer gl 0 2 GL_FLOAT 0 stride 0)
(gl-enable-vertex-attrib-array gl 1)
(gl-vertex-attrib-pointer gl 1 3 GL_FLOAT 0 stride 8)
(gl-draw-arrays gl GL_TRIANGLES 0 3)

Note that in Scheme, the equivalent of a Uint8Array is a bytevector. Hoot uses a packed array, an (array i8) specifically, for the contents of a bytevector.
And here is the JavaScript code necessary to boot the resulting Wasm binary:

window.addEventListener("load", async () => {
  function bytevectorToUint8Array(bv) {
    let len = reflect.bytevector_length(bv);
    let array = new Uint8Array(len);
    for (let i = 0; i < len; i++) {
      array[i] = reflect.bytevector_ref(bv, i);
    }
    return array;
  }

  let mod = await SchemeModule.fetch_and_instantiate("triangle.wasm", {
    reflect_wasm_dir: 'reflect-wasm',
    user_imports: {
      document: {
        getElementById: (id) => document.getElementById(id)
      },
      element: {
        width: (elem) => elem.width,
        height: (elem) => elem.height
      },
      canvas: {
        getContext: (elem, type) => elem.getContext(type)
      },
      gl: {
        createShader: (gl, type) => gl.createShader(type),
        deleteShader: (gl, shader) => gl.deleteShader(shader),
        shaderSource: (gl, shader, source) => gl.shaderSource(shader, source),
        compileShader: (gl, shader) => gl.compileShader(shader),
        getShaderParameter: (gl, shader, param) => gl.getShaderParameter(shader, param),
        getShaderInfoLog: (gl, shader) => gl.getShaderInfoLog(shader),
        createProgram: (gl, type) => gl.createProgram(type),
        deleteProgram: (gl, program) => gl.deleteProgram(program),
        attachShader: (gl, program, shader) => gl.attachShader(program, shader),
        linkProgram: (gl, program) => gl.linkProgram(program),
        useProgram: (gl, program) => gl.useProgram(program),
        getProgramParameter: (gl, program, param) => gl.getProgramParameter(program, param),
        getProgramInfoLog: (gl, program) => gl.getProgramInfoLog(program),
        createBuffer: (gl) => gl.createBuffer(),
        deleteBuffer: (gl, buffer) => gl.deleteBuffer(buffer),
        bindBuffer: (gl, target, buffer) => gl.bindBuffer(target, buffer),
        bufferData: (gl, buffer, data, usage) => {
          let bv = new Bytevector(reflect, data);
          gl.bufferData(buffer, bytevectorToUint8Array(bv), usage);
        },
        enableVertexAttribArray: (gl, index) => gl.enableVertexAttribArray(index),
        vertexAttribPointer: (gl, index, size, type, normalized, stride, offset) => {
          gl.vertexAttribPointer(index, size, type, normalized, stride, offset);
        },
        drawArrays: (gl, mode, first, count) => gl.drawArrays(mode, first, count),
        viewport: (gl, x, y, w, h) => gl.viewport(x, y, w, h),
        clearColor: (gl, r, g, b, a) => gl.clearColor(r, g, b, a),
        clear: (gl, mask) => gl.clear(mask)
      }
    }
  });
  let reflect = await mod.reflect({ reflect_wasm_dir: 'reflect-wasm' });
  let proc = new Procedure(reflect, mod.get_export("$load").value);
  proc.call();
});

Hello problems

There are two major performance issues with this program. One is visible in the source above; the other is hidden in the language implementation.

Heap objects are opaque on the other side

Wasm GC heap objects are opaque to the host. Likewise, heap objects from the host are opaque to the Wasm guest. Thus the contents of an (array i8) object are not visible from JavaScript, and the contents of a Uint8Array are not visible from Wasm. This is a good security property in the general case, but it’s a hindrance in this specific case.

Let’s say we have an (array i8) full of vertex data we want to put into a WebGL buffer. To do this, we must make one JS->Wasm call for each byte in the array and store it into a Uint8Array. This is what the bytevectorToUint8Array function above is doing. Copying any significant amount of data per frame is going to tank performance. Hope you aren’t trying to stream vertex data!

Contrast the previous paragraph with Wasm linear memory. A WebAssembly.Memory object can be easily accessed from JavaScript as an ArrayBuffer. To get a blob of vertex data out of a memory object, you just need to know the byte offset and length and you’re good to go. There are many Wasm linear memory applications using WebGL successfully.
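To make the contrast concrete, here is a rough host-side sketch of my own (not from the post) showing the same buffer upload when the vertex data lives in exported linear memory. The instance, vertexOffset, and vertexLength exports are hypothetical stand-ins for a hand-written linear-memory module; the only remaining copy is the one gl.bufferData itself performs.

// Sketch only: assumes `instance` is a WebAssembly.Instance whose module
// exports its linear memory plus two i32 globals locating the vertex data,
// and `gl` is a WebGLRenderingContext. None of these exports exist in the
// Hoot example above.
const memory = instance.exports.memory;              // WebAssembly.Memory
const offset = instance.exports.vertexOffset.value;  // WebAssembly.Global (i32)
const length = instance.exports.vertexLength.value;  // WebAssembly.Global (i32)

// A typed-array view over the module's heap is cheap to create: no per-byte
// Wasm->JS calls and no intermediate copy.
const vertexBytes = new Uint8Array(memory.buffer, offset, length);
gl.bufferData(gl.ARRAY_BUFFER, vertexBytes, gl.STATIC_DRAW);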
Manipulating multi-byte binary data is inefficient

To read a multi-byte number such as an unsigned 32-bit integer from an (array i8), you have to fetch each individual byte and combine them together. Here’s a self-contained example that uses Guile-flavored WAT format:

(module
  (type $bytevector (array i8))
  (data $init #u32(123456789))
  (func (export "main") (result i32)
    (local $a (ref $bytevector))
    (local.set $a (array.new_data $bytevector $init (i32.const 0) (i32.const 4)))
    (array.get_u $bytevector (local.get $a) (i32.const 0))
    (i32.shl (array.get_u $bytevector (local.get $a) (i32.const 1))
             (i32.const 8))
    (i32.or)
    (i32.shl (array.get_u $bytevector (local.get $a) (i32.const 2))
             (i32.const 16))
    (i32.or)
    (i32.shl (array.get_u $bytevector (local.get $a) (i32.const 3))
             (i32.const 24))
    (i32.or)))

By contrast, Wasm linear memory needs but a single i32.load instruction:

(module
  (memory 1)
  (func (export "main") (result i32)
    (i32.store (i32.const 0) (i32.const 123456789))
    (i32.load (i32.const 0))))

Easy peasy. Not only is it less code, it's a lot more efficient.

Unsatisfying workarounds

There’s no way around the multi-byte problem at the moment, but for byte access from JavaScript there are some things we could try to work with what we have been given. Spoiler alert: None of them are pleasant.

Use Uint8Array from the host

This approach makes all binary operations from within the Wasm binary slow since we’d have to cross the Wasm->JS bridge for each read/write. Since most of the binary data manipulation is happening in the Wasm module, this approach will just make things slower overall.

Use linear memory for bytevectors

This would require a little malloc/free implementation and a way to reclaim memory for GC'd bytevectors. You could register every bytevector in a FinalizationRegistry in order to be notified upon GC and free the memory. Now you have to deal with memory fragmentation. This is Wasm GC, we shouldn’t have to do any of this!

Use linear memory as a scratch space

This avoids crossing the Wasm/JS boundary for each byte, but still involves a byte-by-byte copy from (array i8) to linear memory within the Wasm module. So far this feels like the least worst option, but the extra copy is still going to greatly reduce throughput.

Wasm GC needs some fixin'

I’ve used realtime graphics as an example because it’s a use case that is very sensitive to performance issues, but this unfortunate need to copy binary data byte-by-byte is also the reason why strings are trash on Wasm GC right now. Stringref is a good proposal and the Wasm community group made a mistake by rejecting it.

Anyway, there has been some discussion about both multi-byte and ArrayBuffer access on GitHub, but as far as I can tell neither issue is anywhere close to a resolution. Can these things be implemented efficiently? How can the need for direct access to packed arrays from JS be reconciled with Wasm heap object opaqueness? I hope the Wasm community group can arrive at solutions sooner than later because it will take a long time to get the proposal(s) to phase 4 and shipped in all browsers, perhaps years. It would be a shame to be effectively shut out from using WebGPU when it finally reaches stable browser releases.

4 months ago 63 votes
Guile-Bstructs 0.1.0 released

I'm pleased to announce the very first release of guile-bstructs, version 0.1.0! This is a library I've been working on for quite some time and after more than one rewrite and many smaller refactors I think it's finally ready to release publicly. Let's hope I'm not wrong about that!

About guile-bstructs

Guile-bstructs is a library that provides structured read/write access to binary data for Guile. A bstruct (short for “binary structure”) is a data type that encapsulates a bytevector and a byte offset which interprets that bytevector based on a specified layout. Some use cases for bstructs are: manipulating C structs when using the foreign function interface, packing GPU vertex buffers when using graphics APIs such as OpenGL, and implementing data types that benefit from Guile's unboxed math optimizations such as vectors and matrices.

This library was initially inspired by guile-opengl's define-packed-struct syntax but is heavily based on "Ftypes: Structured foreign types" by Andy Keep and R. Kent Dybvig. The resulting interface is quite similar but the implementation is completely original.

This library provides a syntax-heavy interface; nearly all of the public API is syntax. This is done to ensure that bstruct types are static and well-known at compile time, resulting in efficient bytecode and minimal runtime overhead. A subset of the interface deals in raw bytevector access for accessing structured data in bytevectors directly without going through an intermediary bstruct wrapper. This low-level interface is useful for certain batch processing situations where the overhead of creating wrapper bstructs would hinder throughput.

Example

Here are some example type definitions to give you an idea of what it’s like to use guile-bstructs:

;; Struct
(define-bstruct <vec2>
  (padded
   (struct
    (x float)
    (y float))))

;; Type group with a union
(define-bstruct (<mouse-move-event>
                 (struct
                  (type uint8)
                  (x int32)
                  (y int32)))
                (<mouse-button-event>
                 (struct
                  (type uint8)
                  (button uint8)
                  (state uint8)
                  (x int32)
                  (y int32)))
                (<event>
                 (union
                  (type uint8)
                  (mouse-move <mouse-move-event>)
                  (mouse-button <mouse-button-event>))))

;; Array
(define-bstruct <matrix4> (array 16 float))

;; Bit fields
(define-bstruct <date>
  (bits
   (year 32 s)
   (month 4 u)
   (day 5 u)))

;; Pointer
(define-bstruct (<item>
                 (struct (type int)))
                (<chest>
                 (struct
                  (opened? uint8)
                  (item (* <item>)))))

;; Packed struct modifier
(define-bstruct <enemy>
  (packed
   (struct
    (type uint8)
    (health uint32))))

;; Endianness modifier
(define-bstruct <big-float> (endian big float))

;; Recursive type
(define-bstruct <node>
  (struct
   (item int)
   (next (* <node>))))

;; Mutually recursive type group
(define-bstruct (<forest>
                 (struct (children (* <tree>))))
                (<tree>
                 (struct
                  (value int)
                  (forest (* <forest>))
                  (next (* <tree>)))))

;; Opaque type
(define-bstruct SDL_GPUTexture)

Download

Source tarball: guile-bstructs-0.1.0.tar.gz
GPG signature: guile-bstructs-0.1.0.tar.gz.asc

This release was signed with this GPG key. See the guile-bstructs project page for more information.
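For readers coming from JavaScript rather than Guile, a rough analogue of my own (not part of guile-bstructs) is doing the same layout bookkeeping by hand with a DataView over an ArrayBuffer. The offsets below mirror the <vec2> example above; this is the kind of offset arithmetic that a declared bstruct layout takes care of at compile time.

// Hand-rolled <vec2>: two 32-bit floats at byte offsets 0 and 4.
// Endianness is chosen explicitly here (little-endian) for illustration.
const buffer = new ArrayBuffer(8);
const view = new DataView(buffer);

function setVec2(byteOffset, x, y) {
  view.setFloat32(byteOffset + 0, x, true);  // true = little-endian
  view.setFloat32(byteOffset + 4, y, true);
}

function getVec2(byteOffset) {
  return {
    x: view.getFloat32(byteOffset + 0, true),
    y: view.getFloat32(byteOffset + 4, true),
  };
}

setVec2(0, 1.5, -2.0);
console.log(getVec2(0)); // { x: 1.5, y: -2 }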

4 months ago 72 votes
Lisp: Icing or Cake?

The Spring Lisp Game Jam 2024 ended one week ago. 48 games were submitted, a new record for the jam! This past week has been a time for participants to play and rate each other’s games. As I explored the entries, I noticed two distinct meta-patterns in how people approached building games with Lisp. I think these patterns apply more broadly to all applications of Lisp. Let’s talk about these patterns in some detail, with examples. But first! Here’s the breakdown of the jam submissions by language:

lang     entries   % (rounded)
----     -------   -----------
guile    15        31
fennel   10        21
clojure   5        10
cl        5        10
racket    4         8
elisp     4         8
s7        3         6
kawa      1         2
owl       1         2

I haven’t rolled up the various Schemes (Guile, Racket, S7, Kawa) into a general scheme category because Scheme is so minimally specified and they are all very distinct implementations for different purposes, not to mention that Racket has a lot more going on than just Scheme.

For the first time ever, Guile came out on top with the most submissions! There’s a very specific reason for this outcome. 11 out of the 15 Guile games were built for the web with Hoot, a Scheme-to-WebAssembly compiler that I work on at the Spritely Institute. 2 of those 11 were official Spritely projects. We put out a call for people to try making games with Hoot before the jam started, and a lot of people took us up on it! Very cool!

The next most popular language, which is typically the most popular language in these jams, is Fennel. Fennel is a Lisp that compiles to Lua. It’s very cool, too! Also of note, at least to me as a Schemer, is that three games used S7. Hmm, there might be something relevant to this post going on there.

The patterns I’m about to talk about could sort of be framed as “The Guile Way vs. The Fennel Way”, but I don’t want to do that. It's not an “us vs. them” thing. It’s wonderful that there are so many flavors of Lisp these days that anyone can find a great implementation that suits their preferences. Not only that, but many of these implementations can be used to make games that anyone can easily play in their web browser! That was not the case several years ago. Incredible!

I want to preface the rest of this post by saying that both patterns are valid, and while I prefer one over the other, that is not to say that the other is inferior. I'll also show how these patterns can be thought of as two ends of a spectrum and how, in the end, compromises must be made. Okay, let’s get into it!

Lisp as icing

The icing pattern is using Lisp as a “scripting” language on top of a cake that is made from C, Rust, and other static languages. The typical way to do this is by embedding a Lisp interpreter into the larger program. If you’re most interested in writing the high-level parts of an application in Lisp then this pattern is the fastest way to get there. All you need is a suitable interpreter/compiler and a way to add the necessary hooks into your application. Since the program is mainly C/Rust/whatever, you can then use emscripten to compile it to WebAssembly and deploy to the web. Instant gratification, but strongly tied to static languages and their toolchains.

S7 is an example of an embeddable Scheme. Guile is also used for extending C programs, though typically that involves dynamically linking to libguile rather than embedding the interpreter into the program’s executable. Fennel takes a different approach, recognizing that there are many existing applications that are already extensible through Lua, and provides a lispy language that compiles to Lua.

Lisp as cake

The cake pattern is using Lisp to implement as much of the software stack as possible. It’s Lisp all the way down... sorta. Rather than embedding Lisp into a non-Lisp program, the cake pattern does the inverse: the majority of the program is written in Lisp. When necessary, shared libraries can be called via a foreign function interface, but this should be kept to a minimum. This approach takes longer to yield results. Time is spent implementing missing libraries for your Lisp of choice and writing wrappers around the C shared libraries you can’t avoid using. Web deployment gets trickier, too, since the project is not so easily emscriptenable.

(You may recognize this as the classic embed vs. extend debate. You’re correct! I'm just adding my own thoughts and applying it specifically to some real-world Lisp projects.)

I mentioned Guile as an option for icing, but Guile really shines best as cake. The initial vision for Guile was to Emacsify other programs by adding a Scheme interpreter to them. These days, the best practice is to write your program in Scheme to begin with. Common Lisp is probably the best example, though. Implementations like SBCL have good C FFIs and can compile efficient native executables, minimizing the desire to use some C for performance reasons.

Case studies

Let’s take a look at some of the languages and libraries used for the Lisp Game Jam and evaluate their icing/cake-ness.

Fennel + love2d

love2d has been a popular choice for solo or small team game development for many years. It is a C++ program that embeds a Lua interpreter, which means it’s a perfect target for Fennel. Most Linux distributions package love2d, so it’s easy to run .love files natively. Additionally, thanks to emscripten, love2d games can be deployed to the web. Thus most Fennel games use love2d. ./soko.bin and Gnomic Vengeance are two games that use this stack.

Fennel + love2d is a perfect example of Lisp as icing. Fennel sits at the very top of the stack, but there’s not really a path to spread Lisp into the layers below. It is also the most successful Lisp game development stack to date.

S7 + raylib

This stack is new to me, but two games used it this time around: GhostHop and Life Predictor. (You really gotta play GhostHop, btw. It’s a great little puzzle game and it is playable on mobile devices.) Raylib is a C library with bindings for many higher-level languages that has become quite popular in recent years. S7 is also implemented in C and is easily embeddable. This makes the combination easy to deploy on the web with emscripten.

S7 + raylib is another example of Lisp as icing. I’m curious to see if this stack becomes more popular in future jams.

Guile + Chickadee

This is the stack that I helped build. Chickadee is a game library for Guile that implements almost all of the interesting parts in Scheme, including rendering. Two games were built with Chickadee in the most recent jam: Turbo Racer 3000 and Bloatrunner.

Guile + Chickadee is an example of Lisp as cake. Chickadee wraps some C libraries for low-level tasks such as loading images, audio, and fonts, but it is written in pure Scheme. All the matrix and vector math is in Scheme. Chickadee comes with a set of rendering primitives comparable to love2d and raylib but they’re all implemented in Scheme. I’ve even made progress on rendering vector graphics with Scheme, whereas most other Lisp game libraries use a C library such as nanosvg.

Chickadee has pushed the limits of Guile’s compiler and virtual machine, and Guile has been improved as a result. But it’s the long road. Chickadee is mostly developed by me, alone, in my very limited spare time. It is taking a long time to reach feature parity with more popular game development libraries, but it works quite well for what it is.

Hoot + HTML5 canvas

I also helped build this one. Hoot is a Scheme-to-WebAssembly compiler. Rather than compile the Guile VM (written in C) to Wasm using emscripten, Hoot implements a complete Wasm toolchain and a new backend for Guile’s compiler that emits Wasm directly. Hoot is written entirely in Scheme. Unlike C programs compiled with emscripten that target Wasm 1.0 with linear memory, Hoot targets Wasm 2.0 with GC managed heap types. This gives Hoot a significant advantage: Hoot binaries do not ship a garbage collector and thus are much smaller than Lisp runtimes compiled via emscripten. The Wasm binary for my game weighs in at < 2MiB whereas the love2d game I checked had a nearly 6MiB love.wasm. Hoot programs can also easily interoperate with JavaScript. Scheme objects can easily be passed to JavaScript, and vice versa, as they are managed in the same heap. With all of the browser APIs just a Wasm import away, an obvious choice for games was the built-in HTML5 canvas API for easy 2D rendering. 11 games used Hoot in the jam, including (shameless plug) Cirkoban and Lambda Dungeon.

Hoot + HTML5 canvas is mostly dense cake with a bit of icing. On one hand, it took a year and significant funding to boot Hoot. We said “no” to emscripten, built our own toolchain, and extended Guile’s compiler. It's Lisp all the way until you hit the browser runtime! We even have a Wasm interpreter that runs on the Guile VM! Hoot rules! It was a risk but it paid off. On the other hand, the canvas API is very high-level. The more cake thing to do would be to use Hoot’s JS FFI to call WebGL and/or WebGPU. Indeed, this is the plan for the future! Wasm GC needs some improvements to make this feasible, but my personal goal is to get Chickadee ported to Hoot. I want Chickadee games to be easy to play natively and in browsers, just like love2d games.

The cake/icing spectrum

I must acknowledge the limitations of the cake approach. We’re not living in a world of Lisp machines, but a world of glorified PDP-11s. Even the tallest of Lisp cakes sits atop an even larger cake made mostly of C. All modern Lisp systems bottom out at some point. Emacs rests on a C core. Guile’s VM is written in C. Hoot runs on mammoth JavaScript engines written in C++ like V8. Games on Hoot currently render with HTML5 canvas rather than WebGL/WebGPU. Good luck using OpenGL without libGL; Chickadee uses guile-opengl which uses the C FFI to call into libGL. Then there’s libpng, FreeType, and more. Who the heck wants to rewrite all this in Lisp? Who even has the resources? Does spending all this time taking the scenic route matter at all, or are we just deluding ourselves because we have fun writing Lisp code?

I think it does matter. Every piece of the stack that can be reclaimed from the likes of C is a small victory. The parts written in Lisp are much easier to hack on, and some of those things become live hackable while our programs are running. They are also memory safe, typically, thanks to GC managed runtimes. Fewer FFI calls mean less overhead from traversing the Lisp/C boundary and more safety. As more of the stack becomes Lisp, it starts looking less like icing and more like cake.

Moving beyond games, we can look to the Guix project as a great example of just how tasty the cake can get. Guix took the functional packaging model from the Nix project and made a fresh implementation, replacing the Nix language with Guile. Why? For code staging, code sharing, and improved hackability. Guix also uses an init system written in Guile rather than systemd. Why? For code staging, code sharing, and improved hackability. These are real advantages that make the trade-off of not using the industry-standard thing worth it. I’ve been using Guix since the early days, and back then it was easy to make the argument that Guix was just reinventing wheels for no reason. But now, over 10 years later, the insistence on maximizing the usage of Lisp has been key to the success of the project. As a user, once you learn the Guix idioms and a bit of Guile, you unlock extraordinary power to craft your OS to your liking. It’s the closest thing you can get to a Lisp machine on modern hardware. The cake approach paid off for Guix, and it could pay off for other projects, too.

If Common Lisp is more your thing, and even if it isn’t, you’ll be amazed by the Trial game engine and how much of it is implemented in Common Lisp rather than wrapping C libraries. There are also projects like Pre-Scheme that give me hope that one day the layers below the managed GC runtime can be implemented in Lisp. Pre-Scheme was developed and successfully used for Scheme 48 and I am looking forward to a modern revival of it thanks to an NLnet grant.

I'm a cake boy

That’s right, I said it: I’m a cake boy. I want to see projects continue to push the boundaries of what Lisp can do. When it comes to the Lisp Game Jam, what excites me most are not the games themselves, but the small advances made to reclaim another little slice of the cake from stale, dry C. I intend to keep pushing the limits for Guile game development with my Chickadee project. It’s not a piece of cake to bake a lispy cake, and the way is often hazy, but I know we can’t be lazy and just do the cooking by the book. Rewrite it in Rust? No way! Rewrite it in Lisp!

12 months ago 144 votes

More in programming

Retreating to Safety

Ten years ago, Apple’s Phil Schiller surprised Apple enthusiasts and developers by walking out on stage at John Gruber’s The Talk Show Live WWDC event and giving an open, human, honest interview to a somewhat jaded community. I wrote this in response: Both Apple and Phil Schiller himself took a huge risk in doing this. That they agreed at all is a noteworthy gift to this community of long-time enthusiasts, many of whom have felt under-appreciated as the company has grown. […] Phil’s appearance on the show was warm, genuine, informative, and entertaining. It was human. And humanizing the company and its decisions, especially to developers — remember, developer relations is all under Phil — might be worth the PR risk. This started a ten-year run of interviews by Apple executives on The Talk Show every year at WWDC that proved to be great, surprisingly safe PR for Apple. No executive ever said something they shouldn’t have (they’re pros), no sensational or negative news stories ever resulted from them, and Apple’s enthusiastic fans and developers felt seen, heard, and appreciated. *     *     * For unspecified reasons, Apple has declined to participate this year, ending what had become a beloved tradition in our community — and I can’t help but suspect that it won’t come back. (A lot has changed in the meantime.) Maybe Apple has good reasons. Maybe not. We’ll see what their WWDC PR strategy looks like in a couple of weeks. In the absence of any other information, it’s easy to assume that Apple no longer wants its executives to be interviewed in a human, unscripted, unedited context that may contain hard questions, and that Apple no longer feels it necessary to show their appreciation to our community and developers in this way. I hope that’s either not the case, or it doesn’t stay the case for long. This will be the first WWDC I’m not attending since 2009 (excluding the remote 2020 one, of course). Given my realizations about my relationship with Apple and how they view developers, I’ve decided that it’s best for me to take a break this year, gain some perspective, and decide what my future relationship should look like. Maybe Apple’s leaders are doing that, too.

2 days ago 3 votes
A Developer’s Crash Course in Coming to Japan

Thinking about moving to Japan? You’re not alone—Japan is a popular destination for those hoping to move abroad. What’s more, Japan actually needs more international developers. But how easy is it to immigrate to and work in Japan? Scores of videos on social media warn that living in Japan is quite different from holidaying here, and graphic descriptions of exploitative companies also create doubt. The truth is that Japan is not the easiest country to immigrate to, nor is it the hardest. Some Japanese tech companies and developer roles offer great work-life balance and good compensation; others do not. Based on other developers’ experiences, you’ll thrive here if you: Are an experienced developer Value safety, good food, and convenience over a high salary Are willing to invest time and effort into learning Japanese over the long term Read on to discover if Japan is a good fit for you, and the best ways to get a visa and begin your life here. What is it like working as a developer in Japan? TokyoDev conducts an annual survey of international developers living in Japan. Many of the questions in TokyoDev’s 2024 survey specifically addressed respondents’ work environments. Compensation When TokyoDev asked about “workplace difficulties” in the 2024 survey, 45% of respondents said that “compensation” was their number one problem at work. Overall, compensation for developers in Japan is far lower than the US developer median salary of 120,000 USD (currently 17.5 million yen), but higher than the Indian developer median salary of 640,000 rupees (currently around 1.1 million yen). Yet evaluating compensation for international developers in Japan, specifically, is trickier than you might expect. It’s hard to define an expected salary range because international developers tend to work in different companies and roles than the average Japanese developer. According to a 2024 survey conducted by the Japanese Ministry of Health, Labor and Welfare, the average annual salary of software engineers in Japan was 5.69 million yen. In a survey conducted that same year by TokyoDev, though, English-speaking international software developers in Japan enjoyed a median salary of 8.5 million yen. Of those international developers who responded, only 71% of them worked at a company headquartered in Japan, and almost 80% of them used English always or frequently, with 79% belonging to an engineering team with many other non-Japanese members. Wages, then, are heavily influenced by a range of factors, but particularly by whether you’re working for a Japanese or international company. In general, 75% of the international developers surveyed made 6 million yen or more. The real question is, is that enough for you to be comfortable in Japan? The answer is likely to be yes, if you don’t have overseas financial obligations or dependents. If you do, you’ll want to look carefully at rent, grocery, and education prices in your area of choice to guesstimate the expense of your Japanese lifestyle. Work-life balance Japan has a tradition of long hours and overtime. The Financial Times reports that the Japanese government has taken many measures to reduce the phenomenon of death from overwork (過労死, karoushi), from capping overtime to 100 hours a month, to setting up a national hotline for employees to report abusive companies. The results seem mixed. The Financial Times article adds that in 2024, employees at 26,000 organizations reported working illegal overtime at 44.5% of those businesses. 
On the other hand, average working hours for men fell to below 45 hours per week, and for women to below 35, which is similar to average working hours in the US. Still, 72% of the developers surveyed by TokyoDev worked for less than 40 hours a week. In addition, 70% of TokyoDev respondents cited work-life balance as their top workplace perk. The number of respondents happy with their working conditions came in just below that, at 69%. There was some correlation between hours worked and the type of employer, though. Employees at international subsidiaries were slightly more likely to enjoy shorter work weeks than those at Japanese companies. Remote work Remote work is still relatively new in Japan. Although more offices adopted the practice during Covid, many of them are now backtracking and requiring employees to return to the office, often with a hybrid schedule. While only 9% of TokyoDev respondents weren’t allowed any remote work, 76% of those required to work in-office were employed by Japan-headquartered companies. By contrast, of the 16% who worked fully remotely, only 57% worked for a Japanese company. Those with the option to work remotely really enjoy it. When asked what their most important workplace benefit was, 49% of respondents answered “remote work,” outstripping every other answer by far. Job security A major plus of working in Japan is job security—which, given the waves of layoffs at American tech companies, may now seem extra appealing. It’s overwhelmingly difficult to fire or lay off an employee with a permanent contract (正社員, seishain) in Japan, due to labor laws designed to protect the individual. This may be why 54% of TokyoDev survey respondents named “job security” as their most important workplace perk. Not every company will adhere to labor protection laws, and sometimes businesses pressure employees to “voluntarily” resign. Nonetheless, employees have significant legal recourse when companies attempt to fire them, change their contracts, or alter the current workplace conditions (sometimes, even if those conditions were never stated in writing). Developer stories TokyoDev regularly interviews developers working at our client companies, for information on both their specific positions and their general work environment. Our interviewees work with a variety of technology in many different roles, and at companies ranging from fintech enterprises like PayPay to game companies like Wizcorp. Why do developers choose Japan? In 2024 TokyoDev also asked developers, “What’s your favorite thing about Japan?” The results were: Safety: 21% Food: 13% Convenience: 11% Culture: 8% Peacefulness: 7% Nature: 5% Interestingly, there was a strong correlation between the amount of time someone had lived in Japan and their answer. Those who had been in Japan three years or less more frequently chose “food” or “culture.” Those who’d lived in Japan for four or more years were significantly more likely to answer “safety” or “peacefulness.” Safety It’s true that Japan enjoys a lower crime rate than many developed nations. The Security Journal UK ranked it ninth in a list of the world’s twenty safest countries. In 2024, World Population Review selected Tokyo as the safest city in the world. The homicide rate in 2023 was only 0.23 per 100,000 people, and has been steadily declining since the nineties. There are a few women-specific concerns, such as sexual violence. 
Nonetheless, the subjective experience of many women in the TokyoDev audience is that Japan feels safe; for example, they experience no trepidation walking around late at night. Of course, crime statistics don’t take into account natural disasters, of which Japan has more than its fair share. Thanks to being located on the Ring of Fire, Japan regularly copes with earthquakes and volcanic activity, and its location in the Pacific means that it is also affected by typhoons and tsunamis. To compensate, Japan also takes natural disaster countermeasures extremely seriously. It’s certainly the only country I’ve been to that posts large-scale evacuation maps on the side of the street, stores emergency supply stockpiles in public parks, and often requires schoolchildren to keep earthquake safety headgear at their desks. Food Food is another major draw. Many respondents simply wrote that “food” or “fresh, affordable food” was their favorite thing about Japan, but a few listed specific dishes. Favorite Japanese foods of the TokyoDev audience include: Yakiniku (self-grilled meat) Ramen Peaches Sushi Hiroshima-style okonomiyaki (savory pancake) Curry rice Onigiri (rice balls) Of those, sushi was mentioned most often. One respondent also answered the question with “drinking,” if you think that should count! Personal experiences Our contributors have also shared their personal experiences of moving to and working in Japan. We’ve got articles from Filipino, Indonesian, Australian, Vietnamese, and Mongolian developers, as well as others sharing what it’s like to work as a female software developer in Japan, or to live in Japan with a disability. Why shouldn’t you live in Japan? Safety, food, convenience, and culture are the most commonly-cited upsides of living in Japan. The downsides are the necessity of learning the language and some strict, yet often-unspoken, cultural expectations. Language Fluency in Japanese is not strictly necessary to live or work in Japan. Access to government services for you and your family, such as Japanese public school, is possible even if you speak little Japanese. (That doesn’t mean that most city hall clerks speak English; usually they’ll either locate a translator, or work with you via a translation app.) Nonetheless, TokyoDev’s 2024 survey showed that language ability was highly correlated to social success in Japan. In particular, 56% of all respondents were happy or very happy with their adjustment to Japanese culture. Breaking down that number, though, 76% of those with fluent or native Japanese ability reported being happy with their cultural adjustment, while only 34% of those with little or no Japanese ability were similarly happy. The same held true for social life satisfaction: 59% of those with fluent or native Japanese ability were happy or very happy with their social life, compared to 42% of those who don’t speak much Japanese. While English study is compulsory in Japan and starts in elementary school, as of 2025, only 28% of Japanese people speak English, and most of them can’t converse with high fluency. Living and working in Japan is possible without Japanese, but it’s hard to integrate, make friends, and participate in cultural activities if you can’t communicate with the locals. Cultural expectations As mentioned above, fluency in Japanese is closely allied to fluency in Japanese culture. At the same time, one does not necessarily imply the other. 
It’s possible to be fluent in Japanese, but still not grasp many of the unspoken rules your Japanese friends, neighbors, and coworkers operate by. Japan’s culture is both high-context and specifically averse to confrontation and outspokenness; if you get it “wrong,” people aren’t likely to tell you so. Japanese culture also values conformity: as the saying goes, “the nail that sticks up, gets hammered down.” While there are hints of things changing, with many Japanese companies saying support for greater diversity is necessary, minorities or those who are different may experience pressure to fit in. Introspection is required: are you the kind of person who’s adept at “reading the room,” a highly-valued quality in Japan? Conversely, are you self-confident enough to not sweat the small stuff? Either of these personality types may do well in Japan, but if social acceptance is very important to you, and you’re also uncomfortable with feeling occasionally awkward or uncertain, then you may struggle more to adjust. I want to go! How can I get there? If you’ve decided to immigrate to Japan, there are a number of ways to acquire a work visa. The simplest way is to get hired by a company operating in Japan. Alternatively, you can start your own business in Japan, come over on a Working Holiday, or even—if you’re very determined—arrive first as an English teacher. Let’s begin with the most straightforward route: getting hired as a developer. Getting a developer job in Japan As mentioned before, Japan needs more international developers. Some types of developers, though, will find it easier to get a job in Japan. In particular, companies in Japan are looking for the following: Senior developers. Companies are particularly interested in those with management experience and soft skills such as communication and leadership. Backend developers. This is one of the most widely-available roles for those who don’t speak Japanese. Developers who know Python. Python is one of Japan’s top in-demand languages. AI and Machine Learning Specialists. Japan is leaning hard on AI to help cope with demographic changes. Those who already know, or are willing to learn, Japanese. Combining those criteria, an experienced developer who speaks Japanese should have little difficulty finding a job! If you’re none of these things, you don’t need to give up—you just need to be patient, flexible, and willing to think outside the box. As Mercari Senior Technical Recruiter Clement Chidiac told me, “I know a bunch of people that managed to land a job because they’ve tried harder, going to meetups, reaching out to people, networking, that kind of thing.” Edmund Ho, Principal Consultant at Talisman Corporation, agreed that overseas candidates hoping to work in Japan for the first time face a tough road. He believes candidates should maintain a realistic, but optimistic, view of the process. “Keep a longer mindset,” he suggested. “Maybe you don’t get an offer the first year, but you do the second year.” “Stepping-stone” jobs Candidates from overseas do face a severe disadvantage: many companies, even those founded by non-Japanese people, are only open to developers who already live in Japan. Although getting a work visa for an overseas employee is cheaper and easier in Japan than in many countries, it still presents a barrier some organizations are reluctant to overcome. By contrast, once you’re already on the ground, more companies will be interested in your skills. 
This is why some developers settle on a “stepping-stone” position—in other words, a job that may not be all you hoped for, but that is willing to sponsor your visa and bring you into the country. Here’s where some important clarification on Japanese work visas is required. Work visas The most common visa for developers is the Engineer/Specialist in Humanities/International Services visa, a broad-category visa for foreign workers in those fields. To qualify, a developer must have a college degree, or have ten years of work experience, or have passed an approved IT exam. Another relatively common visa for high-level developers is the Highly-Skilled Professional (HSP) visa. To acquire it, applicants must score at least 70 points on an assessment scale that addresses age, education level, Japanese level, income, and more. The HSP visa has many advantages, but there is one important difference between it and the more standard Engineer visa. The Engineer/Specialist in Humanities/International Services visa is not tied to a specific company. It grants you the legal right to work within those fields for a specific period of time in Japan. The Highly-Skilled Foreign Professional visa, on the other hand, is tied to a specific employer. If you want to change jobs, you’ll need to update your residency status with immigration. Some unscrupulous companies will try to claim that because they sponsored your Engineer/Specialist in Humanities/International Services visa, you are obligated to remain with their company or risk being deported. This is not the case. If you do leave your job without another one lined up, you have three months to find another before you may be at risk for deportation. In addition, the fields of work covered by the Engineer/Specialist in Humanities/International Services visa are incredibly broad, and include everything from sales to product development to language instruction. As TokyoDev specifically confirmed with immigration, you can even come to Japan as an English instructor, then later work as a developer, without needing to alter your visa. Those with the HSP visa will need to go to immigration and alter their residency status each time they change roles. However, if you have the points and qualifications for an HSP visa, that means you’re also eligible for Permanent Residency within one to three years. Once you’ve obtained Permanent Residency, you’re free to pursue whatever sort of employment you like. International or Japanese company? As you begin your job hunt, you’ll hopefully receive responses from several sorts of companies: Japanese companies that also primarily hire Japanese people, Japanese companies with designated multinational developer teams, companies that were founded in Japan but nonetheless hire international developers for a variety of positions, and international subsidiaries. There are advantages and disadvantages to working with mostly-Japanese or mostly-international companies. Japanese companies The more Japanese a company is—both in philosophy and personnel—the more you’ll need Japanese language skills to thrive there. It’s true that a number of well-established Japanese tech companies are now creating developer teams designed to be multinational from the outset: typically, these are very English-language friendly. Some organizations, such as Money Forward, have even adopted English as the official company language.
However, this often results in an institutional language barrier between development teams and the rest of the company, which is usually staffed by Japanese speakers. Developers are still encouraged to learn Japanese, particularly as they climb the promotional ladder, to help facilitate interdepartmental communication. Some companies, such as DeepX and Beatrust, either offer language classes themselves or provide a stipend for language learning. In addition to the language, you’ll also need to become “fluent” in Japanese business norms, which can be much more rigid and hierarchical than American or European company cultures. For example, at introductory drinking parties (themselves a potential surprise for many!), it is customary for new employees or women employees to go around with a bottle of beer and pour glasses for their managers and the company’s senior management. As mentioned in the cultural expectations section, most Japanese people won’t correct you even if you’re doing it all wrong, which leaves foreigners to discover their gaffes via trial-and-error. The advantage here is that you’ll be pressured, hopefully in a good way, to adapt swiftly to the Japanese language and business culture. There’s a sink-or-swim element to this approach, but if you’re serious about settling in Japan, then this “downside” could benefit you in the long run. Finally, there is the above-mentioned issue of compensation. On average, international companies pay more than Japanese ones; the median salary difference is around three million yen per year. Specific roles may be paid at higher rates, though, and most Japanese companies do offer bonuses. Many Japanese companies also offer other perks, such as housing stipends, spouse and child allowances, etc. If you receive an offer, it’s worth examining the whole compensation package before you make a decision. International companies The advantages of working either for an international company, or for a Japanese company that already employs many non-Japanese people, are straightforward: you can usually communicate in English, you already understand most of the business norms, and such companies typically pay developers more. You do run the risk of getting stuck in a rut, though. As mentioned earlier, TokyoDev found in its own survey that the correlation between Japanese language skills and social life satisfaction is high. You can of course study Japanese in your free time—and many do—but the more your work environment and social life revolve around English, the more difficult acquiring Japanese becomes. Want a job? Start here! If you’re ready to begin your job hunt, you can start with the TokyoDev job board. TokyoDev only works with companies we feel good about sending applicants to, and the job board includes positions that don’t require Japanese and that accept candidates from abroad. Other alternatives These visas don’t lead directly to working as a software developer in Japan, but can still help you get your foot in the door. DIY options If you prefer to be your own boss, there are several visas that allow you to set up a business in Japan. The Business Manager visa is typically good for one year, although repeated applicants may get longer terms. Applicants should have five million yen in a bank account when they apply, and there are some complicated requirements for getting and keeping the visa, such as maintaining an office, paying yourself a minimum salary, following proper accounting procedures, etc. 
The Startup visa is another option if the Business Manager visa appeals to you, but you don’t yet have the funds or connections to make it happen. You’ll be granted the equivalent of a Business Manager visa for up to one year so that you can launch your business in Japan. Working Holiday visa This is the path our own founder Paul McMahon took to get his first developer job in Japan. If you meet various qualifications, and you belong to a country that has a Working Holiday visa agreement with Japan, you can come to Japan for a period of one year and do work that is “incidental” to your holiday. In practice, this means you can work almost any job except for those that are considered “immoral” (bars, clubs, gambling, etc.). The Working Holiday visa is a great opportunity for those who have the option. It allows you to experience living and working in Japan without any long-term commitments, and also permits you to job-hunt freely without time or other visa constraints. J-Find visa The J-Find visa is a one-year visa, intended to let graduates of top universities job-hunt or prepare to found a start-up in Japan. To qualify, applicants should have: A degree from a university ranked in the top 100 by at least two world university rankings, or completed a graduate course there Graduated within five years of the application date At least 200,000 yen for initial living expenses TokyoDev contributor Oguzhan Karagözoglu received a J-Find visa, though he did run into some difficulties, particularly given immigration’s unfamiliarity with this relatively new type of visa. Digital Nomad visa This is another new visa category that allows foreigners from specific countries, who must make over 10 million yen or more a year, to work remotely from Japan for six months. Given that the application process alone can take months, the visa isn’t extendable or renewable, and you’re not granted residency, it’s questionable whether the pay-off is worth the effort. Still, if you have the option to work remotely and want to test out living in Japan before committing long-term, this is one way to do that. TokyoDev contributor Christian Mack was not only one of the first to acquire the Digital Nomad visa, but has since opened a consultancy to help others through the process. Conclusion If your takeaway from this article is, “Japan, here I come!” then there are more TokyoDev articles that can help you on your way. For example, if you want to bring your pets with you, you should know that you need to start preparing the import paperwork up to seven months in advance. If you’re ready now to start applying for jobs, check out the TokyoDev job board. You’ll also want to look at how to write a resume for a job in Japan, and our industry insider advice on passing the resume screening process. These tips for interviewing at Japanese tech companies would be useful, and when you’re ready for it, see this guide to salary negotiations. Once you’ve landed that job, we’ve got articles on everything from bringing your family with you, to getting your first bank account and apartment. In addition, the TokyoDev Discord hosts regular discussions on all these topics and more. It’s a great chance to make developer friends in Japan before you ever set foot in the country. Once you are here, you can join some of Japan’s top tech meetups, including many organized by TokyoDev itself. We look forward to seeing you soon!

3 days ago 6 votes
Let's talk about the future of Remix and react-router (tip)

We go over the "Wake up, Remix!" article by the Remix team, discuss their decisions moving forward, and speculate on what's coming next.

3 days ago 2 votes
the algebra of dependent types

TIL (or this week-ish I learned) why big-sigma and big-pi turn up in the notation of dependent type theory. I’ve long been aware of the zoo of more obscure Greek letters that turn up in papers about type system features of functional programming languages: μ, Λ, Π, Σ. Their meaning is usually clear from context but the reason for the choice of notation is usually not explained. I recently stumbled on an explanation for Π (dependent functions) and Σ (dependent pairs) which turn out to be nicer than I expected, and closely related to every-day algebraic data types.

sizes of types

The easiest way to understand algebraic data types is by counting the inhabitants of a type. For example: the unit type () has one inhabitant, (), and the number 1 is why it’s called the unit type; the bool type has two inhabitants, false and true. I have even seen these types called 1 and 2 (cruelly, without explanation) in occasional papers.

product types

Or pairs or (more generally) tuples or records. Usually written,

    (A, B)

The pair contains an A and a B, so the number of possible values is the number of possible A values multiplied by the number of possible B values. So it is spelled in type theory (and in Standard ML) like,

    A * B

sum types

Or disjoint union, or variant record. Declared in Haskell like,

    data Either a b = Left a | Right b

Or in Rust like,

    enum Either<A, B> {
        Left(A),
        Right(B),
    }

A value of the type is either an A or a B, so the number of possible values is the number of A values plus the number of B values. So it is spelled in type theory like,

    A + B
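To make the counting concrete, here is a tiny Python sketch of my own (an illustration, not something from the post) that models a finite type as the plain list of its values and checks the product and sum counts directly:

    from itertools import product

    unit = [()]                 # 1 inhabitant
    boolean = [False, True]     # 2 inhabitants

    # Product type (A, B): one value per combination, so |A| * |B| of them.
    pairs = list(product(boolean, unit))
    assert len(pairs) == len(boolean) * len(unit)      # 2 * 1 == 2

    # Sum type Either A B: a tagged A or a tagged B, so |A| + |B| of them.
    eithers = [("Left", a) for a in boolean] + [("Right", b) for b in unit]
    assert len(eithers) == len(boolean) + len(unit)    # 2 + 1 == 3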
dependent pairs

In a dependent pair, the type of the second element depends on the value of the first. The classic example is a slice, roughly,

    struct IntSlice {
        len: usize,
        elem: &[i64; len],
    }

(This might look a bit circular, but the idea is that an array [i64; N] must be told how big it is – its size is an explicit part of its type – but an IntSlice knows its own size. The traditional dependent “vector” type is a sized linked list, more like my array type than my slice type.)

The classic way to write a dependent pair in type theory is like,

    Σ len: usize . Array(Int, len)

The big sigma binds a variable that has a type annotation, with a scope covering the expression after the dot – similar syntax to a typed lambda expression. We can expand a simple example like this into a many-armed sum type: either an array of length zero, or an array of length 1, or an array of length 2, … but in a sigma type the discriminant is user-defined instead of hidden. The number of possible values of the type comes from adding up all the alternatives, a summation just like the big sigma summation we were taught in school.

    ∑ a∈A B(a)

When the second element doesn’t depend on the first element, we can count the inhabitants like,

    ∑ a∈A B = A*B

And the sigma type simplifies to a product type.

telescopes

An aside from the main topic of these notes, I also recently encountered the name “telescope” for a multi-part dependent tuple or record. The name “telescope” comes from de Bruijn’s AUTOMATH, one of the first computerized proof assistants. (I first encountered de Bruijn as the inventor of numbered lambda bindings.)

dependent functions

The return type of a dependent function can vary according to the argument it is passed. For example, to construct an array we might write something like,

    fn repeat_zero(len: usize) -> [i64; len] {
        [0; len]
    }

The classic way to write the type of repeat_zero() is very similar to the IntSlice dependent pair, but with a big pi instead of a big sigma:

    Π len: usize . Array(Int, len)

Mmm, pie.

To count the number of possible (pure, total) functions A ➞ B, we can think of each function as a big lookup table with A entries each containing a B. That is, a big tuple (B, B, … B), that is, B * B * … * B, that is, B^A. Functions are exponential types. We can count a dependent function, where the number of possible Bs depends on which A we are passed,

    ∏ a∈A B(a)

danger

I have avoided the terms “dependent sum” and “dependent product”, because they seem perfectly designed to cause confusion over whether I am talking about variants, records, or functions. It kind of makes me want to avoid algebraic data type jargon, except that there isn’t a good alternative for “sum type”. Hmf.
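Tying the Σ and Π formulas above back to small numbers, here is a continuation of the earlier Python sketch (again my own illustration, not the post’s), enumerating functions and their dependent counterparts for tiny finite types:

    from itertools import product

    A = ["x", "y", "z"]                      # 3 inhabitants
    B = [False, True]                        # 2 inhabitants

    # A plain function A -> B is a lookup table with one B per A: |B| ** |A|.
    tables = list(product(B, repeat=len(A)))
    assert len(tables) == len(B) ** len(A)   # 2 ** 3 == 8

    # Now let the "return type" vary with the argument.
    B_of = {"x": [0], "y": [0, 1], "z": [0, 1, 2]}

    # Dependent pair  Σ a: A . B(a)  -- add up the alternatives.
    dep_pairs = [(a, b) for a in A for b in B_of[a]]
    assert len(dep_pairs) == 1 + 2 + 3       # the Σ count

    # Dependent function  Π a: A . B(a)  -- one choice per argument, multiplied.
    dep_tables = list(product(*(B_of[a] for a in A)))
    assert len(dep_tables) == 1 * 2 * 3      # the Π count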

3 days ago 4 votes
What does "Undecidable" mean, anyway

Systems Distributed

I'll be speaking at Systems Distributed next month! The talk is brand new and will aim to showcase some of the formal methods mental models that would be useful in mainstream software development. It has added some extra stress on my schedule, though, so expect the next two monthly releases of Logic for Programmers to be mostly minor changes.

What does "Undecidable" mean, anyway

Last week I read Against Curry-Howard Mysticism, which is a solid article I recommend reading. But this newsletter is actually about one comment:

    I like to see posts like this because I often feel like I can’t tell the difference between BS and a point I’m missing. Can we get one for questions like “Isn’t XYZ (Undecidable|NP-Complete|PSPACE-Complete)?”

I've already written one of these for NP-complete, so let's do one for "undecidable". Step one is to pull a technical definition from the book Automata and Computability:

    A property P of strings is said to be decidable if ... there is a total Turing machine that accepts input strings that have property P and rejects those that do not. (pg 220)

Step two is to translate the technical computer science definition into more conventional programmer terms. Warning, because this is a newsletter and not a blog post, I might be a little sloppy with terms.

Machines and Decision Problems

In automata theory, all inputs to a "program" are strings of characters, and all outputs are "true" or "false". A program "accepts" a string if it outputs "true", and "rejects" if it outputs "false". You can think of this as automata studying all pure functions of type f :: string -> boolean. Problems solvable by finding such an f are called "decision problems".

This covers more than you'd think, because we can bootstrap more powerful functions from these. First, as anyone who's programmed in bash knows, strings can represent any other data. Second, we can fake non-boolean outputs by instead checking if a certain computation gives a certain result. For example, I can reframe the function add(x, y) = x + y as a decision problem like this:

    IS_SUM(str) {
        x, y, z = split(str, "#")
        return x + y == z
    }

Then because IS_SUM("2#3#5") returns true, we know 2 + 3 == 5, while IS_SUM("2#3#6") is false. Since we can bootstrap parameters out of strings, I'll just say it's IS_SUM(x, y, z) going forward.

A big part of automata theory is studying different models of computation with different strengths. One of the weakest is called "DFA". I won't go into any details about what DFA actually can do, but the important thing is that it can't solve IS_SUM. That is, if you give me a DFA that takes inputs of form x#y#z, I can always find an input where the DFA returns true when x + y != z, or an input which returns false when x + y == z.

It's really important to keep this model of "solve" in mind: a program solves a problem if it correctly returns true on all true inputs and correctly returns false on all false inputs.
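As an aside, IS_SUM above translates almost directly into runnable Python; this rendering is mine, not the newsletter's:

    def is_sum(s: str) -> bool:
        # Decode the "#"-separated string, then check the property.
        x, y, z = (int(part) for part in s.split("#"))
        return x + y == z

    assert is_sum("2#3#5")        # 2 + 3 == 5
    assert not is_sum("2#3#6")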
(total) Turing Machines

A Turing Machine (TM) is a particular type of computation model. It's important for two reasons:

1. By the Church-Turing thesis, a Turing Machine is the "upper bound" of how powerful (physically realizable) computational models can get. This means that if an actual real-world programming language can solve a particular decision problem, so can a TM. Conversely, if the TM can't solve it, neither can the programming language.[1]
2. It's possible to write a Turing machine that takes a textual representation of another Turing machine as input, and then simulates that Turing machine as part of its computations.

Property (1) means that we can move between different computational models of equal strength, proving things about one to learn things about another. That's why I'm able to write IS_SUM in a pseudocode instead of writing it in terms of the TM computational model (and why I was able to use split for convenience).

Property (2) does several interesting things. First of all, it makes it possible to compose Turing machines. Here's how I can roughly ask if a given number is the sum of two primes, with "just" addition and boolean functions:

    IS_SUM_TWO_PRIMES(z):
        x := 1
        y := 1
        loop {
            if x > z { return false }
            if IS_PRIME(x) {
                if IS_PRIME(y) {
                    if IS_SUM(x, y, z) {
                        return true;
                    }
                }
            }
            y := y + 1
            if y > x {
                x := x + 1
                y := 0
            }
        }

Notice that without the if x > z {return false}, the program would loop forever on z=2. A TM that always halts for all inputs is called total.
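For a runnable, if compressed, Python sketch of the same composition idea, here is my own version, using the three-argument IS_SUM form the post switches to and a simple trial-division primality check standing in for IS_PRIME. The bounded search is what keeps it total:

    def is_prime(n: int) -> bool:
        # Trial division; good enough for a toy decision procedure.
        if n < 2:
            return False
        return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

    def is_sum(x: int, y: int, z: int) -> bool:
        return x + y == z

    def is_sum_two_primes(z: int) -> bool:
        # Total: the search space is bounded by z, so this always halts.
        for x in range(1, z + 1):
            for y in range(1, x + 1):
                if is_prime(x) and is_prime(y) and is_sum(x, y, z):
                    return True
        return False

    assert is_sum_two_primes(10)        # 5 + 5 (or 3 + 7)
    assert not is_sum_two_primes(11)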
Property (2) also makes "Turing machines" a possible input to functions, meaning that we can now make decision problems about the behavior of Turing machines. For example, "does the TM M either accept or reject x within ten steps?"[2]

    IS_DONE_IN_TEN_STEPS(M, x) {
        for (i = 0; i < 10; i++) {
            `simulate M(x) for one step`
            if (`M accepted or rejected`) {
                return true
            }
        }
        return false
    }

Decidability and Undecidability

Now we have all of the pieces to understand our original definition:

    A property P of strings is said to be decidable if ... there is a total Turing machine that accepts input strings that have property P and rejects those that do not. (220)

Let IS_P be the decision problem "Does the input satisfy P?" Then IS_P is decidable if it can be solved by a Turing machine, ie, I can provide some IS_P(x) machine that always accepts if x has property P, and always rejects if x doesn't have property P. If I can't do that, then IS_P is undecidable.

IS_SUM(x, y, z) and IS_DONE_IN_TEN_STEPS(M, x) are decidable properties. Is IS_SUM_TWO_PRIMES(z) decidable? Some analysis shows that our corresponding program will either find a solution, or have x > z and return false. So yes, it is decidable.

Notice there's an asymmetry here. To prove some property is decidable, I just need to find one program that correctly solves it. To prove some property is undecidable, I need to show that any possible program, no matter what it is, doesn't solve it.

So with that asymmetry in mind, are there any undecidable problems? Yes, quite a lot. Recall that Turing machines can accept encodings of other TMs as input, meaning we can write a TM that checks properties of Turing machines. And, by Rice's Theorem, almost every nontrivial semantic[3] property of Turing machines is undecidable. The conventional way to prove this is to first find a single undecidable property H, and then use that to bootstrap undecidability of other properties.

The canonical and most famous example of an undecidable problem is the Halting problem: "does machine M halt on input i?" It's pretty easy to prove undecidable, and easy to use it to bootstrap other undecidability properties. But again, any nontrivial property is undecidable. Checking a TM is total is undecidable. Checking a TM accepts any inputs is undecidable. Checking a TM solves IS_SUM is undecidable. Etc etc etc.

What this doesn't mean in practice

I often see the halting problem misconstrued as "it's impossible to tell if a program will halt before running it." This is wrong. The halting problem says that we cannot create an algorithm that, when applied to an arbitrary program, tells us whether the program will halt or not. It is absolutely possible to tell if many programs will halt or not. It's possible to find entire subcategories of programs that are guaranteed to halt. It's possible to say "a program constructed following constraints XYZ is guaranteed to halt."

The actual consequence of undecidability is more subtle. If we want to know if a program has property P, undecidability tells us:

1. We will have to spend time and mental effort to determine if it has P.
2. We may not be successful.

This is subtle because we're so used to living in a world where everything's undecidable that we don't really consider what the counterfactual would be like. In such a world there might be no need for Rust, because "does this C program guarantee memory-safety" is a decidable property. The entire field of formal verification could be unnecessary, as we could just check properties of arbitrary programs directly. We could automatically check if a change in a program preserves all existing behavior. Lots of famous math problems could be solved overnight.

(This to me is a strong "intuitive" argument for why the halting problem is undecidable: a halt detector can be trivially repurposed as a program optimizer / theorem-prover / bcrypt cracker / chess engine. It's too powerful, so we should expect it to be impossible.)

But because we don't live in that world, all of those things are hard problems that take effort and ingenuity to solve, and even then we often fail.

[1] To be pedantic, a TM can't do things like "scrape a webpage" or "render a bitmap", but we're only talking about computational decision problems here.

[2] One notation I've adopted in Logic for Programmers is marking abstract sections of pseudocode with backticks. It's really handy!

[3] Nontrivial meaning "at least one TM has this property and at least one TM doesn't have this property". Semantic meaning "related to whether the TM accepts, rejects, or runs forever on a class of inputs". IS_DONE_IN_TEN_STEPS is not a semantic property, as it doesn't tell us anything about inputs that take longer than ten steps.

4 days ago 4 votes