
More from David Crawshaw

How I program with LLMs

2025-01-06

This document is a summary of my personal experiences using generative models while programming over the past year. It has not been a passive process. I have intentionally sought ways to use LLMs while programming to learn about them. The result has been that I now regularly use LLMs while working and I consider their benefits net-positive on my productivity. (My attempts to go back to programming without them are unpleasant.) Along the way I have found oft-repeated steps that can be automated, and a few of us are working on building those into a tool specifically for Go programming: sketch.dev. It’s very early but so far the experience has been positive.

Background

I am typically curious about new technology. It took very little experimentation with LLMs for me to want to see if I could extract practical value. There is an allure to a technology that can (at least some of the time) craft sophisticated responses to challenging questions. It is even more exciting to watch a computer attempt to write a piece of a program as requested, and make solid progress. The only technological shift I have experienced that feels similar to me happened in 1995, when we first configured our LAN with a usable default route. We replaced the shared computer in the other room running Trumpet Winsock with a machine that could route a dialup connection, and all at once I had The Internet on tap. Having the internet all the time was astonishing, and felt like the future. Probably far more to me in that moment than to many who had been on the internet longer at universities, because I was immediately dropped into high internet technology: web browsers, JPEGs, and millions of people. Access to a powerful LLM feels like that.

So I followed this curiosity, to see if a tool that can generate something mostly not wrong most of the time could be a net benefit in my daily work. The answer appears to be yes, generative models are useful for me when I program. It has not been easy to get to this point. My underlying fascination with the new technology is the only way I have managed to figure it out, so I am sympathetic when other engineers claim LLMs are “useless.” But as I have been asked more than once how I can possibly use them effectively, this post is my attempt to describe what I have found so far.

Overview

There are three ways I use LLMs in my day-to-day programming:

1. Autocomplete. This makes me more productive by doing a lot of the more-obvious typing for me. It turns out the current state of the art can be improved on here, but that’s a conversation for another day. Even the standard products you can get off the shelf are better for me than nothing. I convinced myself of that by trying to give them up. I could not go a week without getting frustrated by how much mundane typing I had to do before having a FIM model. This is the place to experiment first.
2. Search. If I have a question about a complex environment, say “how do I make a button transparent in CSS,” I will get a far better answer asking any consumer-based LLM (o1, sonnet 3.5, etc.) than I do using an old-fashioned web search engine and trying to parse the details out of whatever page I land on. (Sometimes the LLM is wrong. So are people. The other day I put my shoe on my head and asked my two-year-old what she thought of my hat. She dealt with it and gave me a proper scolding. I can deal with LLMs being wrong sometimes too.)
3. Chat-driven programming. This is the hardest of the three. This is where I get the most value out of LLMs, but also the one that bothers me the most. It involves learning a lot and adjusting how you program, and on principle I don’t like that. It requires at least as much messing about to get value out of LLM chat as it does to learn to use a slide rule, with the added annoyance that it is a non-deterministic service that is regularly changing its behavior and user interface. Indeed, the long-term goal in my work is to replace the need for chat-driven programming, to bring the power of these models to a developer in a way that is not so off-putting. But as of now I am dedicated to approaching the problem incrementally, which means figuring out how to do best with what we have and improve it.

As this is about the practice of programming, this has been a fundamentally qualitative process that is hard to write about with quantitative rigor. The closest I will get to data is to say: it appears from my records that for every two hours of programming I do now, I accept more than 10 autocomplete suggestions, use an LLM for a search-like task once, and program in a chat session once.

The rest of this is about extracting value from chat-driven programming.

Why use chat at all?

Let me try to motivate this for the skeptical. A lot of the value I personally get out of chat-driven programming is I reach a point in the day when I know what needs to be written, I can describe it, but I don’t have the energy to create a new file, start typing, then start looking up the libraries I need. (I’m an early-morning person, so this is usually any time after 11am for me, though it can also be any time I context-switch into a different language/framework/etc.) LLMs perform that service for me in programming. They give me a first draft, with some good ideas, with several of the dependencies I need, and often some mistakes. Often, I find fixing those mistakes is a lot easier than starting from scratch.

This means chat-based programming may not be for you. I am doing a particular kind of programming, product development, which could be roughly described as trying to bring programs to a user through a robust interface. That means I am building a lot, throwing away a lot, and bouncing around between environments. Some days I mostly write TypeScript, some days mostly Go. I spent a week in a C++ codebase last month exploring an idea, and just had an opportunity to learn the HTTP server-side events format. I am all over the place, constantly forgetting and relearning. If you spend more time proving your optimization of a cryptographic algorithm is not vulnerable to timing attacks than you do writing the code, I don’t think any of my observations here are going to be useful to you.

Chat-based LLMs do best with exam-style questions

Give an LLM a specific objective and all the background material it needs so it can craft a well-contained code review packet, and expect it to adjust as you question it. There are two major elements to this:

1. Avoid creating a situation with so much complexity and ambiguity that the LLM gets confused and produces bad results. This is why I have had little success with chat inside my IDE. My workspace is often messy, the repository I am working on is by default too large, it is filled with distractions. One thing humans appear to be much better than LLMs at (as of January 2025) is not getting distracted. That is why I still use an LLM via a web browser, because I want a blank slate on which to craft a well-contained request.
2. Ask for work that is easy to verify. Your job as a programmer using an LLM is to read the code it produces, think about it, and decide if the work is good. You can ask an LLM to do things you would never ask a human to do. “Rewrite all of your new tests introducing an <intermediate concept designed to make the tests easier to read>” is an appalling thing to ask a human; you’re going to have days of tense back-and-forth about whether the cost of the work is worth the benefit. An LLM will do it in 60 seconds and not make you fight to get it done. Take advantage of the fact that redoing work is extremely cheap.

The ideal task for an LLM is one where it needs to use a lot of common libraries (more than a human can remember, so it is doing a lot of small-scale research for you), working to an interface you designed or producing a small interface you can verify as sensible quickly, and where it can write readable tests. Sometimes this means choosing the library for it, if you want something obscure (though with open source code LLMs are quite good at this).

You always need to pass an LLM’s code through a compiler and run the tests before spending time reading it. They all produce code that doesn’t compile sometimes. (Always making errors I find surprisingly human; every time I see one I think, there but for the grace of God go I.) The better LLMs are very good at recovering from their mistakes; often all they need is for you to paste the compiler error or test failure into the chat and they fix the code.
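That check-then-paste loop is mechanical enough to script. As a rough sketch of the idea (my illustration, not tooling from this post; the package path is a made-up placeholder), a few lines of Go can run the compiler and tests and collect whatever output is worth pasting back into the chat:

```go
// buildcheck runs the compiler and tests over a package and prints any
// failure output in a form ready to paste back into an LLM chat session.
// A sketch only; the package path below is a hypothetical placeholder.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pkg := "./quantile" // hypothetical package under review

	for _, sub := range []string{"build", "vet", "test"} {
		out, err := exec.Command("go", sub, pkg).CombinedOutput()
		if err != nil {
			// This is the text worth pasting back into the chat.
			fmt.Printf("go %s failed:\n%s", sub, out)
			return
		}
	}
	fmt.Println("compiles and tests pass; worth reading now")
}
```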
Extra code structure is much cheaper

There are vague tradeoffs we make every day around the cost of writing, the cost of reading, and the cost of refactoring code. Let’s take Go package boundaries as an example. The standard library has a package “net/http” that contains some fundamental types for dealing with wire format encoding, MIME types, etc. It contains an HTTP client, and an HTTP server. Should it be one package, or several? Reasonable people can disagree! So much so, I do not know if there is a correct answer today. What we have works; after 15 years of use it is still not clear to me that some other package arrangement would work better.

Advantages of a larger package include: centralized documentation for callers, easier initial writing, easier refactoring, easier sharing of helper code without devising robust interfaces for them (which often involves pulling the fundamental types of a package out into yet another leaf package filled with types). The disadvantages include the package being harder to read because many different things are going on (try reading the net/http client implementation without tripping up and finding yourself in the server code for a few minutes), or it being harder to use because there is too much going on in it. For example, I have a codebase that uses a C library in some fundamental types, but parts of the codebase need to be in a binary widely distributed to many platforms that does not technically need the C library, so we have more packages than you might expect in the codebase isolating the use of the C library to avoid cgo in the multi-platform binary.

There are no right answers here; instead we are trading off different types of work that an engineer will have to do (upfront and ongoing). LLMs influence those tradeoffs:

- As LLMs do better with exam-style questions, more and smaller packages make it easier to give a complete and yet isolated context for a piece of work. This is true for humans too, which is why we use packages at all, but we trade off package size against the extra typing/plumbing/filing to make more readable code. With an LLM both doing and benefiting from a big chunk of that extra work, the tradeoff shifts. (As a bonus, we humans get more readable code!)
- Smaller and more numerous packages can be compiled and tested independently of unrelated code. This is helpful for the LLM dev cycle as it means changes can be compiled and tested before other dependent packages are refactored, and the package has a simpler test environment.

An example

Let me work an example to combine a few of the discussed ideas: Write a reservoir sampler for the quartiles of floats.

First off, package structure. Were I doing this before LLMs, I would have chosen to have some sort of streamstat package that contained several algorithms, maybe one per file. This does not seem to be a unique opinion; here is an open source package following that model. Now, I want just this one algorithm in its own package, quantile. Other variants or related algorithms can have their own package.

Next up, what do we get from an LLM. The first pass is not bad. That prompt, with some details about wanting it in Go, got me quartile_sampler.go:

```go
// QuartileSampler maintains a reservoir sample to estimate quartiles of a stream
// of float64 values. It uses reservoir sampling to maintain a representative
// sample when processing large streams of data.
type QuartileSampler struct {
	reservoir []float64
	seen      int
	maxSize   int
	rng       *rand.Rand
}

// NewQuartileSampler creates a new sampler that maintains up to maxSize samples.
// A larger maxSize provides more accurate estimates but uses more memory.
func NewQuartileSampler(maxSize int) *QuartileSampler { … }

// Add adds a new value to the sampler.
func (qs *QuartileSampler) Add(value float64) { … }

// Quartiles returns the estimated 25th, 50th (median), and 75th percentiles
// of the data seen so far. Returns (0,0,0) if no data has been added.
func (qs *QuartileSampler) Quartiles() (q1, median, q3 float64) {
	if len(qs.reservoir) == 0 {
		return 0, 0, 0
	}
	// Make a copy to avoid modifying the reservoir
	sorted := make([]float64, len(qs.reservoir))
	copy(sorted, qs.reservoir)
	slices.Sort(sorted)

	n := len(sorted)
	q1 = percentile(sorted, 0.25)
	median = percentile(sorted, 0.50)
	q3 = percentile(sorted, 0.75)
	return q1, median, q3
}
```

The core interface is good too. Great! There are also tests.

An aside: this may be the place to stop. Sometimes I use LLM codegen as a form of specialized search. E.g. I’m curious about reservoir sampling, but want to see how the algorithm would be applied under some surprising constraint, for example time-windowed sampling. Instead of doing a literature search I might amend my prompt for an implementation that tracks freshness. (I could also ask it to include references to the literature in the comments, which I could manually check to see if it’s making things up or if there’s some solid research to work from.) Often I spend 60 seconds reading some generated code, see an obvious trick I hadn’t thought of, then throw it away and start over. Now I know the trick is possible. This is why it is so hard to attribute value generated by LLMs. Yes, sometimes it makes bad code, gets stuck in a rut, makes up something impossible (it hallucinated a part of the monaco API I wish existed the other day) and wastes my time. It can also save me hours by pointing out something relevant I don’t know.

Back to the code. Fascinatingly, the initial code produced didn’t compile. In the middle of the Quartiles implementation there was the line:

```go
n := len(sorted)
```

Which is a fine line; sorted is a slice defined a few lines earlier. But the value is never used, so gopls (and the Go compiler if you run go build) immediately says:

```
declared and not used: n
```

This is a very easy fix. If I paste the error back into the LLM it will correct it. Though in this case, as I’m reading the code, it’s quite clear to me that I can just delete the line myself, so I do.

Now the tests. I got what I expected. In quartile_sampler_test.go:

```go
func TestQuartileSampler(t *testing.T) {
	tests := []struct {
		name    string
		data    []float64
		wantQ1  float64
		wantMed float64
		wantQ3  float64
		epsilon float64 // allowed error for floating point comparison
	}{
		{
			name:    "basic sequence",
			data:    []float64{1, 2, 3, 4, 5, 6, 7},
			wantQ1:  2,
			wantMed: 4,
			wantQ3:  6,
			epsilon: 0.1,
		},
		…
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			qs := NewQuartileSampler(100) // Use large reservoir for accurate testing
			for _, v := range tt.data {
				qs.Add(v)
			}

			q1, med, q3 := qs.Quartiles()
			if math.Abs(q1-tt.wantQ1) > tt.epsilon {
				t.Errorf("Q1 = %v, want %v (±%v)", q1, tt.wantQ1, tt.epsilon)
			}
			if math.Abs(med-tt.wantMed) > tt.epsilon {
				t.Errorf("Median = %v, want %v (±%v)", med, tt.wantMed, tt.epsilon)
			}
			if math.Abs(q3-tt.wantQ3) > tt.epsilon {
				t.Errorf("Q3 = %v, want %v (±%v)", q3, tt.wantQ3, tt.epsilon)
			}
		})
	}
}
```

Exactly the sort of thing I would write! I would run some cases through another implementation to generate expected outputs and copy them into a test like this. But there are two issues with this.

The first is the LLM did not run these numbers through another implementation. (To the best of my knowledge. When using a sophisticated LLM service, it is hard to say for sure what is happening behind the scenes.) It made them up, and LLMs have a reputation for being weak at arithmetic. So this sort of test, while reasonable for a human to write because we base it on the output of another tool, or if we are particularly old-school do some arithmetic ourselves, is not great from an LLM.

The second issue with this is we can do better. I am happy we now live in a time when programmers write their own tests, but we do not hold ourselves to the same standards with tests as we do with production code. That is a reasonable tradeoff; there are only so many hours in the day. But what LLMs lack in arithmetical prowess, they make up for in enthusiasm. Let’s ask for an even better test:

“In the tests, implement the simplest, most readable version of the standard code for quartiles over a fixed set of known values in a slice. Then pass the test cases through the standard code and the reservoir sampler and confirm they are within an epsilon of each other. Structure the comparison code such that it can be used in a fuzz test too.”

This got us some new test code:

```go
// referenceQuartiles calculates the exact quartiles for a slice of float64 values
// using linear interpolation, matching the behavior expected from the sampler.
func referenceQuartiles(data []float64) (q1, median, q3 float64) { … }

// compareQuartiles checks if two sets of quartiles are within epsilon of each other.
// Returns true if they match within the tolerance, false otherwise.
func compareQuartiles(q1a, meda, q3a, q1b, medb, q3b, epsilon float64) bool { … }

// checkQuartiles is a test helper that compares sampler output against the reference
// implementation and reports any differences.
func checkQuartiles(t *testing.T, data []float64, epsilon float64) {
	t.Helper()

	// Get reference values
	wantQ1, wantMed, wantQ3 := referenceQuartiles(data)

	// Get sampler values using a large reservoir for accuracy
	qs := NewQuartileSampler(1000)
	for _, v := range data {
		qs.Add(v)
	}
	gotQ1, gotMed, gotQ3 := qs.Quartiles()

	if !compareQuartiles(gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon) {
		t.Errorf("Quartiles mismatch:\ngot (q1=%v, med=%v, q3=%v)\nwant (q1=%v, med=%v, q3=%v)\nepsilon=%v",
			gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon)
	}
}
```

The original test from above has been reworked to use checkQuartiles and we have something new:

```go
func FuzzQuartileSampler(f *testing.F) {
	// Add some seed corpus
	f.Add([]float64{1, 2, 3, 4, 5})

	f.Fuzz(func(t *testing.T, data []float64) {
		// Use a larger epsilon for fuzzing since we might get more extreme values
		checkQuartiles(t, data, 0.2)
	})
}
```

This is fun, because it’s wrong. My running gopls tool immediately says:

```
fuzzing arguments can only have the following types: string, bool, float32, float64, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64, []byte
```

Pasting that error back into the LLM gets it to regenerate the fuzz test such that it is built around a func(t *testing.T, data []byte) function that uses math.Float64frombits to extract floats from the data slice. Interactions like this point us towards automating the feedback from tools: all it needed was the obvious error message to make solid progress towards something useful. I was not needed.

Doing a quick survey of the last few weeks of my LLM chat history shows (which, as I mentioned earlier, is not a proper quantitative analysis by any measure) that more than 80% of the time there is a tooling error, the LLM can make useful progress without me adding any insight. About half the time it can completely resolve the issue without me saying anything of note; I am just acting as the messenger.

Where are we going?

Better tests, maybe even less DRY

There was a programming movement some 25 years ago focused around the principle “don’t repeat yourself.” As is so often the case with short snappy principles taught to undergrads, it got taken too far. There is a lot of cost associated with abstracting out a piece of code so it can be reused: it requires creating intermediate abstractions that must be learned, and it requires adding features to the factored-out code to make it maximally useful to the maximum number of people, which means we depend on libraries filled with useless distracting features.

The past 10-15 years have seen a far more tempered approach to writing code, with many programmers understanding it is better to reimplement a concept if the cost of sharing the implementation is higher than the cost of implementing and maintaining separate code. It is far less common for me to write on a code review “this isn’t worth it, separate the implementations.” (Which is fortunate, because people really don’t want to hear things like that after they have done all the work.) Programmers are getting better at tradeoffs.

What we have now is a world where the tradeoffs have shifted. It is now easier to write more comprehensive tests. You can have the LLM write the fuzz test implementation you want but didn’t have the hours to build properly. You can spend a lot more time writing tests to be readable, because the LLM is not sitting there constantly thinking “it would be better for the company if I went and picked another bug off the issue tracker than doing this.” So the tradeoff shifts in favor of having more specialized implementations.

The place where I expect this to be most visible is language-specific REST API wrappers. Every major company API comes with dozens of these, usually low quality, wrappers written by people who aren’t actually using their implementations for a specific goal, instead are trying to capture every nook and cranny of an API in a large and complex interface. Even when it is done well, I have found it easier to go to the REST documentation (usually a set of curl commands), and implement a language wrapper for the 1% of the API I actually care about. It cuts down the amount of the API I need to learn upfront, and it cuts down how much future programmers (myself) reading the code need to understand.

For example, as part of my recent work on sketch.dev I implemented a Gemini API wrapper in Go. Even though the official wrapper in Go has been carefully handcrafted by people who know the language well and clearly care, there is a lot to read to understand it:

```
$ go doc -all genai | wc -l
1155
```

My simplistic initial wrapper was 200 lines of code total, one method, three types. Reading the entire implementation is 20% of the work of reading the documentation of the official package, and if you decide to try digging into its implementation you will discover that it is a wrapper around another largely code-generated implementation with protos and grpc and the works. All I want is to cURL and parse a JSON object.

There obviously comes a point in a project, where Gemini is the foundation of the entire app, where nearly every feature is used, where building on gRPC aligns well with the telemetry system elsewhere in your organization, where you should use the large official wrapper. But most of the time it is so much more time consuming, both upfront and ongoing, to do so, given we almost always want only some wafer-thin sliver of whatever API we need to use today, that custom clients, largely written by a GPU, are far more effective for getting work done.

So I foresee a world with far more specialized code, with fewer generalized packages, and more readable tests. Reusable code will continue to thrive around small robust interfaces and otherwise will be pulled apart into specialized code. Depending how well this is done, it will lead to either better software or worse software. I would expect both, with a long-term trend towards better software by the metrics that matter.

Automating these observations: sketch.dev

As a programmer my instinct is to make computers do work for me. It is a lot of work getting value out of LLMs; how can a computer do it? I believe the key to solving a problem is not to overgeneralize. Solve a particular problem and then expand slowly. So instead of building a general-purpose UI for chat programming that is just as good at COBOL as it is for Haskell, we want to focus on one particular environment. The bulk of my programming is in Go, and so what I want is easy to imagine for a Go programmer:

- something like the Go playground, built around editing a package and tests
- with a chat interface onto editable code
- a little UNIX env where we can run go get and go test
- goimports integration
- gopls integration
- automatic model feedback: on model edit, run go get, go build, go test, and feed back missing packages, compiler errors, and test failures to the model to try and get them fixed automatically

A few of us have built an early prototype of this: sketch.dev.

The goal is not a “Web IDE,” but rather to challenge the notion that chat-based programming even belongs in what is traditionally called an IDE. IDEs are collections of tools arranged for people. It is a delicate environment where I know what is going on. I do not want an LLM spewing its first draft all over my current branch. While an LLM is ultimately a developer tool, it is one that needs its own IDE to get the feedback it needs to operate effectively. Put another way: we didn’t embed goimports into sketch for it to be used by humans, but to get Go code closer to compiling using automatic signals, so that the compiler can provide better error feedback to the LLM driving it. It might be better to think of sketch.dev as a “Go IDE for LLMs.”

This is all very recent work with a lot left to do, e.g. git integration so we can load existing packages for editing and drop the results on a branch. Better test feedback. More console control. (If the answer is to run sed, run sed. Be you the human or the LLM.) We are still exploring, but we are convinced that focusing an environment for a particular kind of programming will give us better results than the generalized tool.
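One footnote on the wafer-thin wrapper idea above: the whole shape fits in a sketch. The following is not the sketch.dev Gemini wrapper described in the post, just a hypothetical illustration of the “cURL and parse a JSON object” style, with an invented endpoint and invented request/response types:

```go
// Package thinclient is a hypothetical thin API client in the
// "cURL and parse JSON" style. The endpoint, field names, and
// types are invented for illustration.
package thinclient

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type GenerateRequest struct {
	Prompt string `json:"prompt"`
}

type GenerateResponse struct {
	Text string `json:"text"`
}

// Generate calls the one endpoint we actually care about and decodes the reply.
func Generate(apiURL, key, prompt string) (*GenerateResponse, error) {
	body, err := json.Marshal(GenerateRequest{Prompt: prompt})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", apiURL, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+key)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("api: unexpected status %s", resp.Status)
	}

	var out GenerateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return &out, nil
}
```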

jsonfile: a quick hack for tinkering

2024-02-06

The year is 2024. I am on vacation and dream up a couple of toy programs I would like to build. It has been a few years since I built a standalone toy; I have been busy. So instead of actually building any of the toys I think of, I spend my time researching if anything has changed since the last time I did it. Should I pick up new tools or techniques?

It turns out lots of things have changed! There’s some great stuff out there, including decent quorum-write regional cloud databases now. Oh, and the ability to have a fascinating hour-long novel conversation with transistors. But things are still awkward for small fast tinkering.

Going back in time, I struggled constantly rewriting the database for the prototype for Tailscale, so I ended up writing my in-memory objects out as a JSON file. It went far further than I planned. Somewhere in the intervening years I convinced myself it must have been a bad idea even for toys, given all the pain migrating away from it caused. But now that I find myself in an empty text editor wanting to write a little web server, I am not so sure. The migration was painful, and a lot of that pain was born by others (which is unfortunate, I find handing a mess to someone else deeply unpleasant). Much of that pain came from the brittle design of the caching layers on top (also my doing), which came from not moving to an SQL system soon enough.

I suspect, considering the process in retrospect, a great deal of that pain can be avoided by committing to migrating directly to an SQL system the moment you need an index. You can pay down a lot of exploratory design work in a prototype before you need an index, because while n is small, full scans are fine. But you don’t make it very far into production before one of your values of n crosses something around a thousand and you long for an index.

With a clear exit strategy for avoiding big messes, the JSON file as database is still a valid technique for prototyping. And having spent a couple of days remembering what a misery it is to write a unit test for software that uses postgresql (mocks? docker?? for a database program I first ran on a computer with less power than my 2024 wrist watch?) and struggling to figure out how to make my cgo sqlite cross-compile to Windows, I’m firmly back to thinking a JSON file can be a perfectly adequate database for a 200-line toy.

Consider your requirements!

Before you jump into this and discover it won’t work, or just as bad, dismiss the small and unscaling as always a bad idea, consider the requirements of your software. Using a JSON file as a database means your software:

- Doesn’t have a lot of data. Keep it to a few megabytes.
- The data structure is boring enough not to require indexes. You don’t need something interesting like full-text search.
- You do plenty of reads, but writes are infrequent. Ideally no more than one every few seconds.

Programming is the art of tradeoffs. You have to decide what matters and what does not. Some of those decisions need to be made early, usually with imperfect information. You may very well need a powerful SQL DBMS from the moment you start programming, depending on the kind of program you’re writing!

A reference implementation

An implementation of jsonfile (which Brad called JSONMutexDB, which is cooler because it has an x in it, but requires more typing) can fit in about 70 lines of Go. But there are a couple of lessons we ran into in the early days of Tailscale that can be paid down relatively easily, growing the implementation to 85 lines. (More with comments!) I think it’s worth describing the interesting things we ran into, both in code and here. You can find the implementation of jsonfile here: https://github.com/crawshaw/jsonfile/blob/main/jsonfile.go. The interface is:

```go
type JSONFile[Data any] struct { … }

func New[Data any](path string) (*JSONFile[Data], error)
func Load[Data any](path string) (*JSONFile[Data], error)
func (p *JSONFile[Data]) Read(fn func(data *Data))
func (p *JSONFile[Data]) Write(fn func(*Data) error) error
```

There is some experience behind this design. In no particular order:

Transactions. One of the early pain points in the transition was figuring out the equivalent of when to BEGIN, COMMIT, and ROLLBACK. The first version exposed the mutex directly (which was later converted into a RWMutex). There is no advantage to paying this transition cost later. It is easy to box up read/write transactions with a callback. This API does that, and provides a great point to include other safety mechanisms.

Database corruption through partial writes. There are two forms of this. The first is if the write fn fails half-way through, having edited the db object in some way. To avoid this, the implementation first creates an entirely new copy of the DB before applying the edit, so the entire change set can be thrown away on error. Yes, this is inefficient. No, it doesn’t matter. Inefficiency in this design is dominated by the I/O required to write the entire database on every edit. If you are concerned about the duplicate-on-write cost, you are not modeling I/O cost appropriately (which is important, because if I/O matters, switch to SQL).

The second is from a full disk. The easy way to write a file in Go is to call os.WriteFile, which the first implementation did. But that means:

- truncating the database file,
- making multiple system calls to write(2),
- calling close(2).

A failure can occur in any of those system calls, resulting in a corrupt DB. So this implementation creates a new file, loads the DB into it, and when that has all succeeded, uses rename(2). It is not a panacea; our operating systems do not make all the promises we wish they would about rename. But it is much better than the default.

Memory aliasing. A nasty issue I have run into twice is aliasing memory. This involves doing something like:

```go
list := []int{1, 2, 3}
db.Write(func() {
	db.List = list
})
list[0] = 10 // editing the database!
```

Some changes you may want to consider

Backups. An intermediate version of this code kept the previous database file on write. But there’s an easier and even more robust strategy: never rename the file back to the original. Always create a new file, mydb.json.<timestamp>. On starting, load the most recent file. Then when your data is worth backing up (if ever), have a separate program prune down the number of files and send them somewhere robust.

Constant memory. Not in this implementation, but something you may want to consider, is removing the risk of a Read function editing memory. You can do that with View* types generated by the viewer tool. It’s neat, but more than quadruples the complexity of JSONFileDB, complicates the build system, and initially isn’t very important in the sorts of programs I write. I have found several memory aliasing bugs in all the code I’ve written on top of a JSON file, but have yet to accidentally write when reading. Still, for large code bases Views are quite pleasant and well worth considering about the point when a project should move to a real SQL.

There is some room for performance improvements too (using cloner instead of unmarshalling a fresh copy of the data for writing), though I must point out again that needing more performance is a good sign it is time to move on to SQLite, or something bigger.

It’s a tiny library. Copy and edit as needed. It is an all-new implementation so I will be fixing bugs as I find them. (As a bonus: this was my first time using a Go generic! 👴 It went fine. Parametric polymorphism is ok.)

A final thought

Why go out of my way to devise an inadequate replacement for a database? Most projects fail before they start. They fail because the activation energy is too high. Our dreams are big and usually too much, as dreams should be. But software is not building a house or traveling the world. You can realize a dream with the tools you have on you now, in a few spare hours. This is the great joy of it: you are free from physical and economic constraint. If you start. Be willing to compromise almost everything to start.
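To make the interface above concrete, here is a hypothetical sketch of how a 200-line toy might use jsonfile, based only on the signatures shown; the Todo type and its field are invented for illustration:

```go
// A sketch of using the jsonfile interface shown above.
// The Todo type and its field are hypothetical.
package main

import (
	"fmt"
	"log"

	"github.com/crawshaw/jsonfile"
)

type Todo struct {
	Items []string
}

func main() {
	// New creates the backing file; on later runs you would use Load instead.
	db, err := jsonfile.New[Todo]("todo.json")
	if err != nil {
		log.Fatal(err)
	}

	// A write transaction: edit the data only inside the callback,
	// so the library can persist (or discard) the whole change set.
	err = db.Write(func(data *Todo) error {
		data.Items = append(data.Items, "water the plants")
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}

	// A read transaction.
	db.Read(func(data *Todo) {
		fmt.Println(len(data.Items), "item(s)")
	})
}
```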

new year, same plan

2022-12-31

Some months ago, the bill from GCE for hosting this blog jumped from nearly nothing to far too much for what it is, so I moved provider and needed to write a blog post to test it all. I could have figured out why my current provider hiked the price. Presumably I was Holding It Wrong and with just a few grip adjustments I could get the price dropped. But if someone mysteriously starts charging you more money, and there are other people who offer the same service, why would you stay?

This has not been a particularly easy year, for a variety of reasons. But here I am at the end of it, and beyond a few painful mistakes that in retrospect I did not have enough information to get right, I made mostly the same decisions I would again. There were a handful of truly wonderful moments. So the plan for 2023 is the same: keep the kids intact, and make programming more fun.

There is also the question of Twitter. It took me a few years to develop the skin to handle the generally unpleasant environment. (I can certainly see why almost no old Twitter employees used their product.) The experience recently has degraded; there are still plenty of funny tweets, but far fewer moments of interesting content. Here is a recent exception, but it is notable because it's the first time in weeks I learned anything from Twitter: https://twitter.com/lrocket/status/1608883621980704768. I now find more new ideas hiding in HN comments than on Twitter.

Many people I know have sort-of moved to Mastodon, but it has a pretty horrible UX that is just enough work that I, on the whole, don't enjoy it much. And the fascinating insights don't seem to be there yet, but I'm still reading and waiting.

On the writing side, it might be a good idea to lower the standards (and length) of my blog posts to replace writing tweets. But maybe there isn't much value in me writing short notes anyway; are my contributions as fascinating as the ones I used to sift through Twitter to read? Not really. So maybe the answer is to give up the format entirely. That might be something new for 2023.

Here is something to think about for the new year: http://www.shoppbs.pbs.org/now/transcript/transcriptNOW140_full.html

DAVID BRANCACCIO: There's a little sweet moment, I've got to say, in a very intense book– your latest– in which you're heading out the door and your wife says what are you doing? I think you say– I'm getting– I'm going to buy an envelope.

KURT VONNEGUT: Yeah.

DAVID BRANCACCIO: What happens then?

KURT VONNEGUT: Oh, she says well, you're not a poor man. You know, why don't you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I'm going to have a hell of a good time in the process of buying one envelope. I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don't know. The moral of the story is, is we're here on Earth to fart around. And, of course, the computers will do us out of that. And, what the computer people don't realize, or they don't care, is we're dancing animals. You know, we love to move around. And, we're not supposed to dance at all anymore.

log4j: between a rock and a hard place

2021-12-11

There is more than enough written on the mechanics of and mitigations for the recent severe RCE in log4j. On prevention, this is the most interesting widely-reshared insight I have seen:

"Log4j maintainers have been working sleeplessly on mitigation measures; fixes, docs, CVE, replies to inquiries, etc. Yet nothing is stopping people to bash us, for work we aren't paid for, for a feature we all dislike yet needed to keep due to backward compatibility concerns."

This is making the rounds because highly-profitable companies are using infrastructure they do not pay for. That is a worthy topic, but not the most interesting thing in this particular case because it would not clearly have contributed to preventing this bug. It is the second statement in this tweet that is worthy of attention: the maintainers of log4j would have loved to remove this bad feature long ago, but could not because of the backwards compatibility promises they are held to.

What does backwards compatibility mean to me?

I am often heard to say that I love backwards compatibility, and that it is underrated. But what exactly do I mean? I don't mean that whenever I upgrade a dependency, I expect zero side effects. If a library function gets two times faster in an upgrade, that is a change in behavior that might break my software! But obviously the exact timings of functions can change between versions. In some extreme cases I need libraries to promise the algorithmic complexity of run time or memory usage, where I am providing extremely large inputs, or need constant-time algorithms to avoid timing attacks. But I don't need that from a logging library. So let me back up and describe what is important:

- I want to not spend much time upgrading a dependency.
- I want any problems caused by the upgrade to be caught early, not in production.
- I want to be able to build knowledge of the library over a long time, to hone my craft.

The ideal version of the first is I run my package manager's upgrade command, execute the tests, commit the output, and not think about it any more. This means the API/ABI stays similar enough that the compiler won't break, the behavior of the library routines is similar enough the tests will pass, and no other constraints, such as total binary size limits, are exceeded. This is impossible in the general case. The only way to achieve it is to not make any changes at all. When we write down a promise, we leave lots of definitional holes in the promise. E.g. take the (generally excellent) Go compatibility promise:

"It is intended that programs written to the Go 1 specification will continue to compile and run correctly, unchanged, over the lifetime of that specification."

Here "correctly" means according to the Go language specification and the API documentation. The spec and the docs do not cover run time, memory use, or binary size. The next version of Go can be 10x slower and be compatible! But I can assure you if that were the case I would fail my goal of not spending much time upgrading a dependency. But the Go team know this, and work to the spirit of their promise. Very occasionally they break things, for security reasons, and when they do I have to spend time upgrading a dependency for a really good reason: my program needs it.

If I want my program to work correctly I should write tests for all the behaviors I care about. But like all programmers, I am short on hours in the day to do all that needs doing, and never have enough tests. So whenever a change in behavior happens in an upstream library that my tests don't catch but makes it into production, my instinct is to blame upstream. This is of course unfair; the burden for releasing good programs is borne by the person pressing the release button. But it is an expression of a programming social contract that has taken hold: a good software project tries to break downstream as little as possible, and when we do break downstream, we should do our best to make the breakage obvious and easy to fix.

No compatibility promise I have seen covers the spirit of minimizing breakage and moving it to the right part of the process. As far as I can tell, engineers aren't taught this in school, and many have never heard the concept articulated. So much of best practice in releasing libraries is learned on the job and not well communicated (yet). Good upstream dependencies are maintained by people who have figured this out the hard way and do their best by their users. As a user, it is extremely hard to know what kind of library you are getting when you first consider a dependency, unless it is a very old and well established project.

This is where software goes wrong the most for me. I want, year after year, to come back to a tool and be able to apply the knowledge I acquired the last time I used it, to new things I learn, and build on it. I want to hone my craft by growing a deep understanding of the tools I use.

Some new features are additive. If I buy a new speed square for framing, and it has a notch on it my old one didn't that I can use as a shortcut in marking up a beam, its presence does not invalidate my old knowledge. If the new interior notch replaces a marking that was on the outside of the square, then when I go to find my trusty marking I remember from years ago, and it's missing, I need to stop and figure out a new way to solve this old problem. Maybe I will notice the new feature, or, more likely, I'll pull out the tape measure I know how to use and find my mark that (slower) way. If someone who knew what they were doing saw me they could correct me! But like programming, I'm usually making a mess with wood alone in a few spare hours on a Saturday.

When software "upgrades" invalidate my old knowledge, it makes me a worse programmer. I can spend time getting back to where I was, but that's time I am not spending improving on where I was. To give a concrete example: I will never be an expert at developing for macOS or iOS. I bounce into and out of projects for Apple devices, spending no more than 10% of my hours on their platform. Their APIs change constantly. The buttons in Xcode move so quickly I sometimes wonder if it's happening before my eyes. Try looking up some Swift syntax on Stack Overflow and you'll find the answers are constantly edited for the latest version of Swift. At this point, I assume every time I come back to macOS/iOS that I know nothing and am learning the platform for the first time.

Compare the shifting sands of Swift with the stability of awk. I have spent not a tenth of the time learning awk that I have spent relearning Swift, and yet I am about as capable in each language. An awk one-liner I learned 20 years ago still works today! When I see someone use awk to solve a problem, I'm enthusiastic to learn how they did it, because I know that 20 years from now the trick will work.

Backwards compatibility should not have forced log4j to keep LDAP/JNDI URLs

By what backwards compatibility means to me, a project like log4j will break fewer people by removing a feature like the JNDI URLs than by marking an old API method with some mechanical deprecation notice that causes a build process's equivalent of -Wall to fail and moving it to a new name. They will, in practice, break fewer people removing this feature than they would by slowing down a critical path by 10%, which is the sort of thing that can trivially slip into a release unnoticed.

But the spirit of compatibility promises appears to be poorly understood across our industry (as software updates demonstrate to me every week), and so we lean on the pseudo-legalistic wording of project documentation to write strongly worded emails or snarky tweets any time a project makes work for us (because most projects don't get it, so surely every example of a breakage must be a project that doesn't get it, not a good reason), and upstream maintainers become defensive and overly conservative. The result is now everyone's Java software is broken!

We as a profession misunderstand and misuse the concept of backwards compatibility, both upstream and downstream, by focusing on narrow legalistic definitions instead of outcomes. This is a harder, longer topic that maybe I'll find enough clarity to write properly about one day.

The other side of compatibility: being cautious adding features

It should be easy to hack up code and share it! We should also be cautious about adding burdensome features. This particular bug feels impossibly strange to me, because my idea of a logging API is file descriptor number 2 with the write system call. None of the bells and whistles are necessary and we should be conservative about our core libraries. Indeed, libraries like these are why I have been growing ever-more skeptical of using any dependencies, and now force myself to read a big chunk of any library before adding it to a project.

But I have also written my share of misfeatures, as much as I would like to forget them. I am thankful my code I don't like has never achieved the success or wide use of log4j, and I cannot fault diligent (and unpaid!) maintainers doing their best under those circumstances.

Software I’m thankful for

2021-11-25

A few of the things that come to mind, this thanksgiving.

open/read/write/close

Most Unix-ish APIs, from files to sockets, are a bit of a mess today. Endless poorly documented sockopts, unexpected changes in write semantics across FSs and OSes, good luck trying to figure out mtimes. But despite the mess, I can generally wrap my head around open/read/write/close. I can strace a binary and figure out the sequence and decipher what’s going on. Sprinkle in some printfs and state is quickly debuggable. Stack traces are useful!

Enormous effort has been spent on many projects to replace this style of I/O programming, for efficiency or aesthetics, often with an asynchronous bent. I am thankful for this old reliable standby of synchronous open/read/write/close, and hope to see it revived and reinvented throughout my career to be cleaner and simpler.

goroutines

Goroutines are coroutines with compiler/runtime optimized yielding, to make them behave like threads. This breathes new life into the previous technology I’m thankful for: simple blocking I/O. With goroutines it becomes cheap to write large-scale blocking servers without running out of OS resources (like heavy threads, on OSes where they’re heavy, or FDs). It also makes it possible to use blocking interfaces between “threads” within a process without paying the ever-growing price of a context switch in the post-spectre world.

Tailscale

This is the first year where the team working on Tailscale has outgrown and eclipsed me to the point where I can be thankful for Tailscale without feeling like I’m thanking myself. Many of the wonderful new features that let me easily wire machines together wherever they are, like userspace networking or MagicDNS, are not my doing. I’m thankful for the product, and the opportunity to work with the best engineering team I’ve ever had the privilege of being part of.

SQLite

Much like open/read/write/close, SQLite is an island of stability in a constantly changing technical landscape. Techniques I learned 10 or 15 years ago using SQLite work today. As a bonus, it does so much more than then: WAL mode for highly-concurrent servers, advanced SQL like window functions, excellent ATTACH semantics. It has done all of this while keeping the number of, in the project's own language, “goofy design” decisions to a minimum and holding true to its mission of being “lite”. I aspire to write such wonderful software.

JSON

JSON is the worst form of encoding — except for all the others that have been tried. It’s complicated, but not too complicated. It’s not easily read by humans, but it can be read by humans. It is possible to extend it in intuitive ways. When it gets printed onto your terminal, you can figure out what’s going on without going and finding the magic decoder ring of the week. It makes some things that are extremely hard with XML or INI easy, without introducing accidental Turing completeness or turning country codes into booleans. Writing software is better for it, and shows the immense effect carefully describing something can do for programming. JSON was everywhere in our JavaScript before the term was defined; the definition let us see it and use it elsewhere.

WireGuard

WireGuard is a great demonstration of why the total complexity of the implementation ends up affecting the UX of the product. In theory I could have been making tunnels between my devices for years with IPSec or TLS; in practice I’d completely given it up until something came along that made it easier. It didn’t make it easier by putting a slick UI over complex technology; it made the underlying technology simpler, so even I could (eventually) figure out the configuration. Most importantly, by not eating my entire complexity budget with its own internals, I could suddenly see it as a building block in larger projects. Complexity makes more things possible, and fewer things possible, simultaneously. WireGuard is a beautiful example of simplicity and I’m thankful for it.

The speed of the Go compiler

Before Go became popular, the fast programming language compilers of the 90s had mostly fallen by the wayside, to be replaced with a bimodal world of interpreters/JITs on one side and creaky slow compilers attempting to produce extremely optimal code on the other. The main Go toolchain found, or rediscovered, a new optimal point in the plane of tradeoffs for programming languages to sit: ahead-of-time compiled, but with a fast less-than-optimal compiler. It has managed to continue to hold that interesting, unstable equilibrium for a decade now, which is incredibly impressive. (E.g. I personally would love to improve its inliner, but know that it’s nearly impossible to get too far into that project without sacrificing a lot of the compiler’s speed.)

GCC

I’ve always been cranky about GCC: I find its codebase nearly impossible to modify, it’s slow, the associated ducks I need to line up to make it useful (binutils, libc, etc.) blow out the complexity budget on any project I try to start before I get far, and it is associated with GNU, which I used to view as an oddity and now view as a millstone around the neck of an otherwise excellent software project. But these are all the sorts of complaints you only make when using something truly invaluable. GCC is invaluable. I would never have learned to program if a free C compiler hadn’t been available in the 90s, so I owe it my career. To this day, it vies neck-and-neck with LLVM for best performing object code. Without the competition between them, compiler technology would stagnate. And while LLVM now benefits from $10s or $100s of millions a year in Silicon Valley salaries working on it, GCC does it all with far less investment. I’m thankful it keeps on going.

vim

I keep trying to quit vim. I keep ending up inside a terminal, inside vim, writing code. Like SQLite, vim is an island of stability over my career. While I wish IDEs were better, I am extremely thankful for tools that work and respect the effort I have taken to learn them, decade after decade.

ssh

SSH gets me from here to there, and has done since ~1999. There is a lot about ssh that needs reinventing, but I am thankful for stable, reliable tools. It takes a lot of work to keep something like ssh working and secure, and if the maintainers are ever looking for someone to buy them a round they know where to find me.

The public web and search engines

How would I get anything done without all the wonderful information on the public web and search engines to find it? What an amazing achievement. Thanks everyone, for making computers so great.


More in programming

Notes from Alexander Petros’ “Building the Hundred-Year Web Service”

I loved this talk from Alexander Petros titled “Building the Hundred-Year Web Service”. What follows is a summation of my note-taking from watching the talk on YouTube.

Is what you’re building for future generations:

- Useful for them?
- Maintainable by them?
- Adaptable by them?

Actually, forget about future generations. Is what you’re building for future you, 6 months or 6 years from now, aligning with those goals?

While we’re building codebases which may not be useful, maintainable, or adaptable by someone two years from now, the Romans built a bridge thousands of years ago that is still being used today. It should be impossible to imagine building something in Roman times that’s still useful today. But if you look at [Trajan’s Bridge in Portugal, which is still used today] you can see there’s a little car on it and a couple pedestrians. They couldn’t have anticipated the automobile, but nevertheless it is being used for that today.

That’s a conundrum. How do you build for something you can’t anticipate? You have to think resiliently. Ask yourself: what’s true today that was true for a software engineer in 1991? One simple answer is: sharing and accessing information with a uniform resource identifier. That was true 30+ years ago, and I would venture to bet it will be true in another 30 years — and more!

There [isn’t] a lot of source code that can run unmodified in software that is 30 years apart. And yet, the first web site ever made can do precisely that. The source code of the very first web page — which was written for a line mode browser — still runs today on a touchscreen smartphone, which is not a device that Tim Berners-Lee could have anticipated.

Alexander goes on to point out how interaction with web pages has changed over time:

- In the original line mode browser, links couldn’t be represented as blue underlined text. They were represented more like footnotes on screen, where you’d see something like this[1] and then this[2]. If you wanted to follow that link, there was no GUI to point and click. Instead, you would hit that number on your keyboard.
- In desktop browsers and GUI interfaces, we got blue underlines to represent something you could point and click on to follow a link.
- On touchscreen devices, we got “tap” with your finger to follow a link.

While these methods for interaction have changed over the years, the underlying medium remains unchanged: information via uniform resource identifiers. The core representation of a hypertext document is adaptable to things that were not at all anticipated in 1991. The durability guarantees of the web are absolutely astounding if you take a moment to think about it.

If you’re sprinting you might beat the browser, but it’s running a marathon and you’ll never beat it in the long run. If your page is fast enough, [refreshes] won’t even repaint the page. The experience of refreshing a page, or clicking on a “hard link,” is identical to the experience of partially updating the page. That is something that quietly happened in the last ten years with no fanfare. All the people who wrote basic HTML got a huge performance upgrade in their browser. And everybody who tried to beat the browser now has to reckon with all the JavaScript they wrote to emulate these basic features.

20 hours ago 2 votes
Modeling Awkward Social Situations with TLA+

You're walking down the street and need to pass someone going the opposite way. You take a step left, but they're thinking the same thing and take a step to their right, aka your left. You're still blocking each other. Then you take a step to the right, and they take a step to their left, and you're back to where you started. I've heard this called "walkwarding".

Let's model this in TLA+. TLA+ is a formal methods tool for finding bugs in complex software designs, most often involving concurrency. Two people trying to get past each other also happens to be a concurrent system. A gentler introduction to TLA+'s capabilities is here; an in-depth guide teaching the language is here.

The spec

---- MODULE walkward ----
EXTENDS Integers

VARIABLES pos
vars == <<pos>>

Double equals defines a new operator; single equals is an equality check. <<pos>> is a sequence, aka array.

you == "you"
me == "me"
People == {you, me}
MaxPlace == 4
left == 0
right == 1

I've gotten into the habit of assigning string "symbols" to operators so that the compiler complains if I misspelled something. left and right are numbers so we can shift position with right - pos.

direction == [you |-> 1, me |-> -1]
goal == [you |-> MaxPlace, me |-> 1]

Init ==
    \* left-right, forward-backward
    pos = [you |-> [lr |-> left, fb |-> 1], me |-> [lr |-> left, fb |-> MaxPlace]]

direction, goal, and pos are "records", or hash tables with string keys. I can get my left-right position with pos.me.lr or pos["me"]["lr"] (or pos[me].lr, as me == "me").

Juke(person) == pos' = [pos EXCEPT ![person].lr = right - @]

TLA+ breaks the world into a sequence of steps. In each step, pos is the value of pos in the current step and pos' is the value in the next step. The main outcome of this semantics is that we "assign" a new value to pos by declaring pos' equal to something. But the semantics also open up lots of cool tricks, like swapping two values with x' = y /\ y' = x.

TLA+ is a little weird about updating functions. To set f[x] = 3, you gotta write f' = [f EXCEPT ![x] = 3]. To make things a little easier, the rhs of a function update can contain @ for the old value. ![me].lr = right - @ is the same as right - pos[me].lr, so it swaps left and right. ("Juke" comes from here.)

Move(person) ==
    LET new_pos == [pos[person] EXCEPT !.fb = @ + direction[person]]
    IN  /\ pos[person].fb # goal[person]
        /\ \A p \in People: pos[p] # new_pos
        /\ pos' = [pos EXCEPT ![person] = new_pos]

The EXCEPT syntax can be used in regular definitions, too. This lets someone move one step in their goal direction unless they are at the goal or someone is already in that space. /\ means "and".

Next == \E p \in People: \/ Move(p) \/ Juke(p)

I really like how TLA+ represents concurrency: "In each step, there is a person who either moves or jukes." It can take a few uses to really wrap your head around, but it can express extraordinarily complicated distributed systems.

Spec == Init /\ [][Next]_vars
Liveness == <>(pos[me].fb = goal[me])
====

Spec is our specification: we start at Init and take a Next step every step. Liveness is the generic term for "something good is guaranteed to happen"; see here for more. <> means "eventually", so Liveness means "eventually my forward-backward position will be my goal". I could extend it to "both of us eventually reach our goal" but I think this is good enough for a demo.

Checking the spec

Four years ago, everybody in TLA+ used the toolbox.
Now the community has collectively shifted over to using the VSCode extension.[1] VSCode requires we write a configuration file, which I will call walkward.cfg.

SPECIFICATION Spec
PROPERTY Liveness

I then check the model with the VSCode command "TLA+: Check model with TLC". Unsurprisingly, it finds an error.

The reason it fails is "stuttering": I can get one step away from my goal and then just stop moving forever. We say the spec is unfair: it does not guarantee that if progress is always possible, progress will be made. If I want the spec to always make progress, I have to make some of the steps weakly fair.

+ Fairness == WF_vars(Next)

- Spec == Init /\ [][Next]_vars
+ Spec == Init /\ [][Next]_vars /\ Fairness

Now the spec is weakly fair, so someone will always do something. New error:

\* First six steps cut
7: <Move("me")>
   pos = [you |-> [lr |-> 0, fb |-> 4], me |-> [lr |-> 1, fb |-> 2]]
8: <Juke("me")>
   pos = [you |-> [lr |-> 0, fb |-> 4], me |-> [lr |-> 0, fb |-> 2]]
9: <Juke("me")> (back to state 7)

In this failure, I've successfully gotten past you, and then I spend the rest of my life endlessly juking back and forth. The Next step keeps happening, so weak fairness is satisfied. What I actually want is for my Move and my Juke to each be weakly fair, independently of each other.

- Fairness == WF_vars(Next)
+ Fairness == WF_vars(Move(me)) /\ WF_vars(Juke(me))

If my liveness property also specified that you reached your goal, I could instead write \A p \in People: WF_vars(Move(p)) etc. I could also swap the \A with a \E to mean at least one of us is guaranteed to have fair actions, but not necessarily both of us. New error:

3: <Move("me")>
   pos = [you |-> [lr |-> 0, fb |-> 2], me |-> [lr |-> 0, fb |-> 3]]
4: <Juke("you")>
   pos = [you |-> [lr |-> 1, fb |-> 2], me |-> [lr |-> 0, fb |-> 3]]
5: <Juke("me")>
   pos = [you |-> [lr |-> 1, fb |-> 2], me |-> [lr |-> 1, fb |-> 3]]
6: <Juke("me")>
   pos = [you |-> [lr |-> 1, fb |-> 2], me |-> [lr |-> 0, fb |-> 3]]
7: <Juke("you")> (back to state 3)

Now we're getting somewhere! This is the original walkwarding situation we wanted to capture. We're in each other's way; you juke, I juke, then we each juke back, and we're back where we started. We can repeat this forever, trapped in a social hell.

Wait, but doesn't WF(Move(me)) guarantee I will eventually move? Yes, but only if a move is permanently available. In this case, it's not permanently available, because every couple of steps it's made temporarily unavailable.

How do I fix this? I can't add a rule saying that we only juke if we're blocked, because the whole point of walkwarding is that we're not coordinated. In the real world, walkwarding can go on for agonizing seconds. What I can do instead is say that Liveness holds as long as Move is strongly fair. Unlike weak fairness, strong fairness guarantees something happens if it keeps becoming possible, even with interruptions.

  Liveness ==
+     SF_vars(Move(me)) =>
      <>(pos[me].fb = goal[me])

This makes the spec pass. Even if we weave back and forth for five minutes, as long as we eventually pass each other, I will reach my goal. Note we could also do this by making Move in Fairness strongly fair, which is preferable if we have a lot of different liveness properties to check.

A small exercise for the reader

There is a presumed invariant that is violated. Identify what it is, write it as a property in TLA+, and show the spec violates it. Then fix it.

Answer (in rot13): Gur vainevnag vf "ab gjb crbcyr ner va gur rknpg fnzr ybpngvba". Zbir thnenagrrf guvf ohg Whxr qbrf abg.

More TLA+ Exercises

I've started work on an exercises repo. There's only a handful of specific problems now but I'm planning on adding more over the summer.

[1] learntla is still on the toolbox, but I'm hoping to get it all moved over this summer.
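For reference, here is the complete module with both fairness fixes folded in, assembled purely from the fragments above (a sketch of the spec's final state, paired with the walkward.cfg shown earlier):

---- MODULE walkward ----
EXTENDS Integers

VARIABLES pos
vars == <<pos>>

you == "you"
me == "me"
People == {you, me}
MaxPlace == 4
left == 0
right == 1

direction == [you |-> 1, me |-> -1]
goal == [you |-> MaxPlace, me |-> 1]

Init ==
    \* left-right, forward-backward
    pos = [you |-> [lr |-> left, fb |-> 1], me |-> [lr |-> left, fb |-> MaxPlace]]

Juke(person) == pos' = [pos EXCEPT ![person].lr = right - @]

Move(person) ==
    LET new_pos == [pos[person] EXCEPT !.fb = @ + direction[person]]
    IN  /\ pos[person].fb # goal[person]
        /\ \A p \in People: pos[p] # new_pos
        /\ pos' = [pos EXCEPT ![person] = new_pos]

Next == \E p \in People: \/ Move(p) \/ Juke(p)

\* Second fix: weak fairness on my actions separately, not on Next as a whole.
Fairness == WF_vars(Move(me)) /\ WF_vars(Juke(me))

Spec == Init /\ [][Next]_vars /\ Fairness

\* Final fix: liveness holds only under strong fairness on Move.
Liveness == SF_vars(Move(me)) => <>(pos[me].fb = goal[me])
====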

23 hours ago 2 votes
the penultimate conditional syntax

About half a year ago I encountered a paper bombastically titled “the ultimate conditional syntax”. It has the attractive goal of unifying pattern matching with boolean if tests, and its solution is in some ways very nice. But it seems over-complicated to me, especially for something that’s a basic work-horse of programming. I couldn’t immediately see how to cut it down to manageable proportions, but recently I had an idea. I’ll outline it under the “penultimate conditionals” heading below, after reviewing the UCS and explaining my motivation.

what the UCS?
whence UCS
out of scope
penultimate conditionals
dangling syntax
examples
antepenultimate breath

what the UCS?

The ultimate conditional syntax does several things which are somewhat intertwined and support each other.

An “expression is pattern” operator allows you to do pattern matching inside boolean expressions. Like “match” but unlike most other expressions, “is” binds variables whose scope is the rest of the boolean expression that might be evaluated when the “is” is true, and the consequent “then” clause.

You can “split” tests to avoid repeating parts that are the same in successive branches. For example,

if num < 0 then -1
else if num > 0 then +1
else 0

can be written

if num < 0 then -1
       > 0 then +1
   else 0

The example shows a split before an operator, where the left-hand operand is the same and the rest of the expression varies. You can split after the operator when the operator is the same, which is common for “is” pattern match clauses.

Indentation-based syntax (an offside rule) reduces the amount of punctuation that splits would otherwise need. An explicit version of the example above is

if { x { { < { 0 then −1 } };
         { > { 0 then +1 } };
         else 0 } }

(This example is written in the paper on one line. I’ve split it for narrow screens, which exposes what I think is a mistake in the nesting.)

You can also intersperse let bindings between splits. I doubt the value of this feature, since “is” can also bind values, but interspersed let does have its uses. The paper has an example using let to avoid rightward drift:

if let tp1_n = normalize(tp1)
   tp1_n is Bot then Bot
   let tp2_n = normalize(tp2)
   tp2_n is Bot then Bot
   let m = merge(tp1_n, tp2_n)
   m is Some(tp) then tp
   m is None then glb(tp1_n, tp2_n)

It’s probably better to use early return to avoid rightward drift. The desugaring uses let bindings when lowering the UCS to simpler constructions.

whence UCS

Pattern matching in the tradition of functional programming languages supports nested patterns that are compiled in a way that eliminates redundant tests. For example, this example checks that e1 is Some(_) once, not twice as written:

if e1 is Some(Left(lv)) then e2
         Some(Right(rv)) then e3
   None then e4

Being cheeky, I’d say UCS introduces more causes of redundant checks, then goes to great effort to eliminate redundant checks again. Splits reduce redundant code at the source level; the bulk of the paper is about eliminating redundant checks in the lowering from source to core language.

I think the primary cause of this extra complexity is treating the is operator as a two-way test rather than a multi-way match. Splits are introduced as a more general (more complicated) way to build multi-way conditions out of two-way tests. There’s a secondary cause: the tradition of expression-oriented functional languages doesn’t like early returns.
A nice pattern in imperative code is to write a function as a series of preliminary calculations and guards with early returns that set things up for the main work of the function. Rust’s ? operator and let-else statement support this pattern directly. UCS addresses the same pattern by wedging calculate-check sequences into if statements, as in the normalize example above.

out of scope

I suspect UCS’s indentation-based syntax will make programmers more likely to make mistakes, and make compilers have more trouble producing nice error messages. (YAML has put me off syntax that doesn’t have enough redundancy to support good error recovery.)

So I wondered if there’s a way to have something like an “is pattern” operator in a Rust-like language, without an offside rule, and without the excess of punctuation in the UCS desugaring. But I couldn’t work out how to make the scope of variable bindings in patterns cover all the code that might need to use them. The scope needs to extend into the consequent then clause, but also into any follow-up tests – and those tests can branch, so the scope might need to reach into multiple then clauses.

The problem was the way I was still thinking of the then and else clauses as part of the outer if. That implied the expression has to be closed off before the then, which troublesomely closes off the scope of any is-bound variables. The solution – part of it, at least – is actually in the paper, where then and else are nested inside the conditional expression.

penultimate conditionals

There are two ingredients.

First, the then and else clauses become operators that cause early return from a conditional expression. They can be lowered to a vaguely Rust syntax with the following desugaring rules. The 'if label denotes the closest-enclosing if; you can’t use then or else inside the expr of a then or else unless there’s another intervening if.

then expr ⟼ && break 'if expr
else expr ⟼ || break 'if expr
else expr ⟼ || _ && break 'if expr

There are two desugarings for else, depending on whether it appears in an expression or a pattern. If you prefer a less wordy syntax, you might spell then as => (like match in Rust) and else as || =>. (For symmetry we might allow && => for then as well.)

Second, an is operator for multi-way pattern-matching that binds variables whose scope covers the consequent part of the expression. The basic form is like the UCS,

scrutinee is pattern

which matches the scrutinee against the pattern, returning a boolean result. For example,

foo is None

Guarded patterns are like

scrutinee is pattern && consequent

where the scope of the variables bound by the pattern covers the consequent. The consequent might be a simple boolean guard, for example,

foo is Some(n) && n < 0

or inside an if expression it might end with a then clause,

if foo is Some(n) && n < 0 => -1
// ...

Simple multi-way patterns are like

scrutinee is { pattern || pattern || … }

If there is a consequent then the patterns must all bind the same set of variables (if any) with the same types. More typically, a multi-way match will have consequent clauses, like

scrutinee is {
      pattern && consequent
   || pattern && consequent
   || => otherwise
}

When a consequent is false, we go on to try other alternatives of the match, like we would when the first operand of boolean || is false. To help with layout, you can include a redundant || before the first alternative.
For example,

if foo is {
   || Some(n) && n < 0 => -1
   || Some(n) && n > 0 => +1
   || Some(n) => 0
   || None => 0
}

Alternatively,

if foo is {
      Some(n) && ( n < 0 => -1
                || n > 0 => +1
                || => 0 )
   || None => 0
}

(They should compile the same way.)

The evaluation model is like familiar shortcutting && and ||, and the syntax is supposed to reinforce that intuition. The UCS paper spends a lot of time discussing backtracking and how to eliminate it, but penultimate conditionals evaluate straightforwardly from left to right.

The paper briefly mentions as patterns, like

Some(Pair(x, y) as p)

which in Rust would be written

Some(p @ Pair(x, y))

The is operator doesn’t need a separate syntax for this feature:

Some(p is Pair(x, y))

For large examples, the penultimate conditional syntax is about as noisy as Rust’s match, but it scales down nicely to smaller matches. However, there are differences in how consequences and alternatives are punctuated which need a bit more discussion.

dangling syntax

The precedence and associativity of the is operator is tricky: it has two kinds of dangling-else problem.

The first kind occurs with a surrounding boolean expression. For example, when b = false, what is the value of this?

b is true || false

It could bracket to the left, yielding false:

(b is true) || false

Or to the right, yielding true:

b is { true || false }

This could be disambiguated by using different spellings for boolean or and pattern alternatives. But that doesn’t help for the second kind, which occurs with an inner match.

foo is Some(_) && bar is Some(_) || None

Does that check foo is Some(_) with an always-true look at bar,

( foo is Some(_) ) && bar is { Some(_) || None }

or does it check bar is Some(_) and waste time with foo?

foo is { Some(_) && ( bar is Some(_) ) || None }

I have chosen to resolve the ambiguity by requiring curly braces {} around groups of alternative patterns. This allows me to use the same spelling || for all kinds of alternation. (Compare Rust, which uses || for boolean expressions, | in a pattern, and , between the arms of a match.) Curlies around multi-way matches can be nested, so the example in the previous section can also be written,

if foo is {
   || Some(n) && n < 0 => -1
   || Some(n) && n > 0 => +1
   || { Some(0) || None } => 0
}

The is operator binds tighter than && on its left, but looser than && on its right (so that a chain of && is gathered into a consequent) and tighter than || on its right, so that outer || alternatives don’t need extra brackets.

examples

I’m going to finish these notes by going through the ultimate conditional syntax paper to translate most of its examples into the penultimate syntax, to give it some exercise. Here we use is to name a value n, as a replacement for the |> abs pipe operator, and we use range patterns instead of split relational operators:

if foo(args) is {
   || 0 => "null"
   || n && abs(n) is {
         || 101.. => "large"
         || ..10 => "small"
         || => "medium"
      }
}

In both the previous example and the next one, we have some extra brackets where UCS relies purely on an offside rule.

if x is {
   || Right(None) => defaultValue
   || Right(Some(cached)) => f(cached)
   || Left(input) && compute(input) is {
         || None => defaultValue
         || Some(result) => f(result)
      }
}

This one is almost identical to UCS apart from the spellings of and, then, else.
if name.startsWith("_")
&& name.tailOption is Some(namePostfix)
&& namePostfix.toIntOption is Some(index)
&& 0 <= index && index < arity
&& => Right([index, name])
|| => Left("invalid identifier: " + name)

Here are some nested multi-way matches with overlapping patterns and bound values:

if e is {
   // ...
   || Lit(value) && Map.find_opt(value) is Some(result) => Some(result)
   // ...
   || { Lit(value) || Add(Lit(0), value) || Add(value, Lit(0)) } => {
         print_int(value);
         Some(value)
      }
   // ...
}

The next few examples show UCS splits without the is operator. In my syntax I need to press a few more buttons, but I think that’s OK.

if predicate(0, 1) => "A"
|| predicate(2, 3) => "B"
|| => "C"

if x == 0 => "zero"
|| x == 1 => "unit"
|| => "?"

if x == 0 => "null"
|| x > 0 => "positive"
|| => "negative"

The last two can be written with is instead, but it’s not briefer:

if x is {
   || 0 => "zero"
   || 1 => "unit"
   || => "?"
}

if x is {
   || 0 => "null"
   || 1.. => "positive"
   || => "negative"
}

There’s little need for a split-anything feature when we have multi-way matches.

if foo(u, v, w) is {
   || Some(x) && x is {
         || Left(_) => "left-defined"
         || Right(_) => "right-defined"
      }
   || None => "undefined"
}

A more complete function:

fn zip_with(f, xs, ys) {
    if [xs, ys] is {
       || [x :: xs, y :: ys] && zip_with(f, xs, ys) is Some(tail)
             => Some(f(x, y) :: tail)
       || [Nil, Nil] => Some(Nil)
       || => None
    }
}

Another fragment of the expression evaluator:

if e is {
   // ...
   || Var(name) && Map.find_opt(env, name) is {
         || Some(Right(value)) => Some(value)
         || Some(Left(thunk)) => Some(thunk())
      }
   || App(lhs, rhs) => // ...
   // ...
}

This expression is used in the paper to show how a UCS split is desugared:

if Pair(x, y) is {
   || Pair(Some(xv), Some(yv)) => xv + yv
   || Pair(Some(xv), None) => xv
   || Pair(None, Some(yv)) => yv
   || Pair(None, None) => 0
}

The desugaring in the paper introduces a lot of redundant tests. I would desugar straightforwardly, then rely on later optimizations to eliminate other redundancies such as the construction and immediate destruction of the pair:

if Pair(x, y) is Pair(xx, yy)
&& xx is {
   || Some(xv) && yy is {
         || Some(yv) => xv + yv
         || None => xv
      }
   || None && yy is {
         || Some(yv) => yv
         || None => 0
      }
}

Skipping ahead to the “non-trivial example” in the paper’s fig. 11:

if e is {
   || Var(x) && context.get(x) is {
         || Some(IntVal(v)) => Left(v)
         || Some(BoolVal(v)) => Right(v)
      }
   || Lit(IntVal(v)) => Left(v)
   || Lit(BoolVal(v)) => Right(v)
   // ...
}

The next example in the paper uses C# relational patterns. Rust’s range patterns do a similar job, with the caveat that Rust’s ranges don’t have a syntax for exclusive lower bounds.

fn classify(value) {
    if value is {
       || .. -4.0 => "too low"
       || 10.0 .. => "too high"
       || NaN => "unknown"
       || => "acceptable"
    }
}

I tend to think relational patterns are a better syntax than ranges. With relational patterns I can rewrite an earlier example like,

if foo is {
   || Some(< 0) => -1
   || Some(> 0) => +1
   || { Some(0) || None } => 0
}

I think with the UCS I would have to name the Some(_) value to be able to compare it, which suggests that relational patterns can be better than UCS split relational operators. Prefix-unary relational operators are also a nice way to write single-ended ranges in expressions. We could simply write both ends to get a complete range, like

>= lo < hi

or like

if value is > -4.0 < 10.0 => "acceptable"
|| => "far out"

Near the start I quoted a normalize example that illustrates a left-aligned UCS expression.
The penultimate version drifts right like the Scala version:

if normalize(tp1) is {
   || Bot => Bot
   || tp1_n && normalize(tp2) is {
         || Bot => Bot
         || tp2_n && merge(tp1_n, tp2_n) is {
               || Some(tp) => tp
               || None => glb(tp1_n, tp2_n)
            }
      }
}

But a more Rusty style shows the benefits of early returns (especially the terse ? operator) and monadic combinators.

let tp1 = normalize(tp1)?;
let tp2 = normalize(tp2)?;
merge(tp1, tp2)
    .unwrap_or_else(|| glb(tp1, tp2))

antepenultimate breath

When I started writing these notes, my penultimate conditional syntax was little more than a sketch of an idea. Having gone through the previous section’s exercise, I think it has turned out better than I thought it might. The extra nesting from multi-way match braces doesn’t seem to be unbearably heavyweight. However, none of the examples have bulky then or else blocks, which are where the extra nesting is more likely to be annoying. But then, as I said before, it’s comparable to a Rust match:

match scrutinee {
    pattern => { consequent }
}

if scrutinee is {
   || pattern => { consequent }
}

The || lines down the left margin are noisy, but hard to get rid of in the context of a curly-brace language. I can’t reduce them to | like OCaml, because what would I use for bitwise OR? I don’t want the presence or absence of flow control to depend on types or context. I kind of like the Prolog / Erlang , for && and ; for ||, but that’s well outside what’s legible to mainstream programmers. So, dunno.

Anyway, I think I’ve successfully found a syntax that does most of what UCS does, but in a much simpler fashion.
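As a concrete, compilable counterpart to that last Rusty fragment, here is a minimal sketch in today's Rust. The Tp type and the bodies of normalize, merge, and glb are hypothetical stand-ins I've invented so the example runs; only the shape of lub comes from the fragment above.

// A hypothetical lattice type standing in for the paper's types.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Tp {
    Bot,
    Val(i32),
}

// normalize returns None for Bot so that `?` can short-circuit,
// mirroring the `is Bot then Bot` guards in the UCS example.
fn normalize(tp: Tp) -> Option<Tp> {
    match tp {
        Tp::Bot => None,
        v => Some(v),
    }
}

// merge succeeds only when both sides agree (a stand-in definition).
fn merge(a: Tp, b: Tp) -> Option<Tp> {
    match (a, b) {
        (Tp::Val(x), Tp::Val(y)) if x == y => Some(Tp::Val(x)),
        _ => None,
    }
}

// glb takes the smaller value (again a stand-in definition).
fn glb(a: Tp, b: Tp) -> Tp {
    match (a, b) {
        (Tp::Val(x), Tp::Val(y)) => Tp::Val(x.min(y)),
        _ => Tp::Bot,
    }
}

fn lub(tp1: Tp, tp2: Tp) -> Option<Tp> {
    // Each `?` is an early return: the guard-and-continue shape that
    // UCS encodes with interspersed `let` bindings.
    let tp1 = normalize(tp1)?;
    let tp2 = normalize(tp2)?;
    Some(merge(tp1, tp2).unwrap_or_else(|| glb(tp1, tp2)))
}

fn main() {
    println!("{:?}", lub(Tp::Val(2), Tp::Val(2))); // Some(Val(2))
    println!("{:?}", lub(Tp::Bot, Tp::Val(2)));    // None: Bot short-circuits
}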

2 days ago 5 votes
Coding should be a vibe!

The appeal of "vibe coding" — where programmers lean back and prompt their way through an entire project with AI — appears partly to be based on the fact that so many development environments are deeply unpleasant to work with. So it's no wonder that all these programmers stuck working with cumbersome languages and frameworks can't wait to give up on the coding part of software development. If I found writing code a chore, I'd be looking for retirement too. But I don't.

I mean, I used to! When I started programming, it was purely because I wanted programs. Learning to code was a necessary but inconvenient step toward bringing systems to life. That all changed when I learned Ruby and built Rails. Ruby's entire premise is "programmer happiness": that writing code should be a joy. And historically, the language was willing to trade run-time performance, memory usage, and other machine sympathies against the pursuit of said programmer happiness. These days, it seems like you can eat your cake and have it too, though. Ruby, after thirty years of constant improvement, is now incredibly fast and efficient, yet remains a delight to work with.

That ethos couldn't shine brighter now. Disgruntled programmers have finally realized that an escape from nasty syntax, boilerplate galore, and ecosystem hyper-churn is possible. That's the appeal of AI: having it hide away all that unpleasantness. Only it's like cleaning your room by stuffing the mess under the bed — it doesn't make it go away!

But the instinct is correct: Programming should be a vibe! It should be fun! It should resemble English closely enough that line noise doesn't obscure the underlying ideas and decisions. It should allow a richness of expression that serves the human reader instead of favoring the strictness preferred by the computer. Ruby does. And given that, I have no interest in giving up writing code. That's not the unpleasant part that I want AI to take off my hands. Just so I can — what? — become a project manager for a murder of AI crows?

I've had the option to retreat up the manager ladder for most of my career, but I've steadily refused, because I really like writing Ruby! It's the most enjoyable part of the job! That doesn't mean AI doesn't have a role to play when writing Ruby. I'm conversing and collaborating with LLMs all day long — looking up APIs, clarifying concepts, and asking stupid questions. AI is a superb pair programmer, but I'd retire before permanently handing it the keyboard to drive the code.

Maybe one day, wanting to write code will be a quaint concept. Like tending to horses for transportation in the modern world — done as a hobby but devoid of any economic value. I don't think anyone knows just how far we can push the intelligence and creativity of these insatiable token munchers. And I wouldn't bet against their advance, but it's clear to me that a big part of their appeal to programmers is the wisdom that Ruby was founded on: Programming should favor and flatter the human.

2 days ago 8 votes
Tempest Rising is a great game

I really like RTS games. I pretty much grew up on them, starting with Command & Conquer 3: Kane's Wrath, moving on to the StarCraft 2 trilogy, and witnessing the downfall of Command & Conquer 4. I never had the disks for any other RTS games during my teenage years. Yes, the disks, the ones you go to the store to buy! I didn't know Steam existed back then, so this was my only source of games. There is something magical in owning a physical copy of a game. I always liked the art on the front (a mandatory huge face for all RTS!), the game description and screenshots on the back, even the smell of the plastic disk case.

2 days ago 4 votes