How I program with LLMs

2025-01-06

This document is a summary of my personal experiences using generative models while programming over the past year. It has not been a passive process. I have intentionally sought out ways to use LLMs while programming, to learn about them. The result is that I now regularly use LLMs while working and I consider their benefits net-positive on my productivity. (My attempts to go back to programming without them are unpleasant.)

Along the way I have found oft-repeated steps that can be automated, and a few of us are working on building those into a tool specifically for Go programming: sketch.dev. It's very early, but so far the experience has been positive.

Background

I am typically curious about new technology. It took very little experimentation with LLMs for me to want to see if I could extract practical value. There is an allure to a technology that can (at least some of the time) craft sophisticated responses to challenging questions. It is even more exciting to watch a computer attempt to write a piece of a program as requested, and make solid progress.

The only technological shift I have experienced that feels similar happened in 1995, when we first configured our LAN with a usable default route. We replaced the shared computer in the other room running Trumpet Winsock with a machine that could route a dialup connection, and all at once I had The Internet on tap. Having the internet all the time was astonishing, and felt like the future. It probably meant far more to me in that moment than to many who had been on the internet longer at universities, because I was immediately dropped into high internet technology: web browsers, JPEGs, and millions of people. Access to a powerful LLM feels like that.

So I followed this curiosity, to see if a tool that can generate something mostly not wrong most of the time could be a net benefit in my daily work. The answer appears to be yes: generative models are useful for me when I program. It has not been easy to get to this point. My underlying fascination with the new technology is the only way I have managed to figure it out, so I am sympathetic when other engineers claim LLMs are "useless." But as I have been asked more than once how I can possibly use them effectively, this post is my attempt to describe what I have found so far.

Overview

There are three ways I use LLMs in my day-to-day programming:

1. Autocomplete. This makes me more productive by doing a lot of the more-obvious typing for me. It turns out the current state of the art can be improved on here, but that's a conversation for another day. Even the standard products you can get off the shelf are better for me than nothing. I convinced myself of that by trying to give them up; I could not go a week without getting frustrated by how much mundane typing I had to do before having a FIM model. This is the place to experiment first.

2. Search. If I have a question about a complex environment, say "how do I make a button transparent in CSS," I will get a far better answer asking any consumer-based LLM (o1, Sonnet 3.5, etc.) than I do using an old-fashioned web search engine and trying to parse the details out of whatever page I land on. (Sometimes the LLM is wrong. So are people. The other day I put my shoe on my head and asked my two-year-old what she thought of my hat. She dealt with it and gave me a proper scolding. I can deal with LLMs being wrong sometimes too.)

3. Chat-driven programming. This is the hardest of the three. It is where I get the most value out of LLMs, but also the one that bothers me the most. It involves learning a lot and adjusting how you program, and on principle I don't like that. It requires at least as much messing about to get value out of LLM chat as it does to learn to use a slide rule, with the added annoyance that it is a non-deterministic service that is regularly changing its behavior and user interface. Indeed, the long-term goal in my work is to replace the need for chat-driven programming, to bring the power of these models to a developer in a way that is not so off-putting. But as of now I am dedicated to approaching the problem incrementally, which means figuring out how to do best with what we have and improve it.

As this is about the practice of programming, this has been a fundamentally qualitative process that is hard to write about with quantitative rigor. The closest I will get to data is to say: it appears from my records that for every two hours of programming I do now, I accept more than 10 autocomplete suggestions, use an LLM for a search-like task once, and program in a chat session once.

The rest of this is about extracting value from chat-driven programming.
Why use chat at all?

Let me try to motivate this for the skeptical. A lot of the value I personally get out of chat-driven programming is that I reach a point in the day when I know what needs to be written, I can describe it, but I don't have the energy to create a new file, start typing, then start looking up the libraries I need. (I'm an early-morning person, so this is usually any time after 11am for me, though it can also be any time I context-switch into a different language/framework/etc.) LLMs perform that service for me in programming. They give me a first draft, with some good ideas, with several of the dependencies I need, and often some mistakes. Often, I find fixing those mistakes is a lot easier than starting from scratch.

This means chat-based programming may not be for you. I am doing a particular kind of programming, product development, which could be roughly described as trying to bring programs to a user through a robust interface. That means I am building a lot, throwing away a lot, and bouncing around between environments. Some days I mostly write TypeScript, some days mostly Go. I spent a week in a C++ codebase last month exploring an idea, and just had an opportunity to learn the HTTP server-sent events format. I am all over the place, constantly forgetting and relearning. If you spend more time proving your optimization of a cryptographic algorithm is not vulnerable to timing attacks than you do writing the code, I don't think any of my observations here are going to be useful to you.

Chat-based LLMs do best with exam-style questions

Give an LLM a specific objective and all the background material it needs so it can craft a well-contained code review packet, and expect it to adjust as you question it. There are two major elements to this:

1. Avoid creating a situation with so much complexity and ambiguity that the LLM gets confused and produces bad results. This is why I have had little success with chat inside my IDE. My workspace is often messy, the repository I am working on is by default too large, and it is filled with distractions. One thing humans appear to be much better than LLMs at (as of January 2025) is not getting distracted. That is why I still use an LLM via a web browser: I want a blank slate on which to craft a well-contained request.

2. Ask for work that is easy to verify. Your job as a programmer using an LLM is to read the code it produces, think about it, and decide if the work is good. You can ask an LLM to do things you would never ask a human to do. "Rewrite all of your new tests introducing an <intermediate concept designed to make the tests easier to read>" is an appalling thing to ask a human; you're going to have days of tense back-and-forth about whether the cost of the work is worth the benefit. An LLM will do it in 60 seconds and not make you fight to get it done. Take advantage of the fact that redoing work is extremely cheap.

The ideal task for an LLM is one where it needs to use a lot of common libraries (more than a human can remember, so it is doing a lot of small-scale research for you), where it works to an interface you designed or produces a small interface you can quickly verify as sensible, and where it can write readable tests. Sometimes this means choosing the library for it, if you want something obscure (though with open source code LLMs are quite good at this).

You always need to pass an LLM's code through a compiler and run the tests before spending time reading it. They all produce code that doesn't compile sometimes. (Always making errors I find surprisingly human; every time I see one I think, there but for the grace of God go I.) The better LLMs are very good at recovering from their mistakes; often all they need is for you to paste the compiler error or test failure into the chat and they fix the code.

Extra code structure is much cheaper

There are vague tradeoffs we make every day around the cost of writing, the cost of reading, and the cost of refactoring code. Let's take Go package boundaries as an example. The standard library has a package "net/http" that contains some fundamental types for dealing with wire format encoding, MIME types, etc. It contains an HTTP client, and an HTTP server. Should it be one package, or several? Reasonable people can disagree! So much so, I do not know if there is a correct answer today. What we have works; after 15 years of use it is still not clear to me that some other package arrangement would work better.

Advantages of a larger package include: centralized documentation for callers, easier initial writing, easier refactoring, and easier sharing of helper code without devising robust interfaces for it (which often involves pulling the fundamental types of a package out into yet another leaf package filled with types). The disadvantages include the package being harder to read because many different things are going on (try reading the net/http client implementation without tripping up and finding yourself in the server code for a few minutes), or being harder to use because there is too much going on in it.
For example, I have a codebase that uses a C library in some fundamental types, but parts of the codebase need to be in a binary widely distributed to many platforms that does not technically need the C library. So the codebase has more packages than you might expect, isolating the use of the C library to avoid cgo in the multi-platform binary.

There are no right answers here; instead we are trading off different types of work that an engineer will have to do (upfront and ongoing). LLMs influence those tradeoffs:

- As LLMs do better with exam-style questions, more and smaller packages make it easier to give a complete and yet isolated context for a piece of work. This is true for humans too, which is why we use packages at all, but we trade off package size against the extra typing/plumbing/filing needed to make more readable code. With an LLM both doing and benefiting from a big chunk of that extra work, the tradeoff shifts. (As a bonus, we humans get more readable code!)

- Smaller and more numerous packages can be compiled and tested independently of unrelated code. This is helpful for the LLM dev cycle, as it means changes can be compiled and tested before other dependent packages are refactored, and the package has a simpler test environment.

An example

Let me work an example to combine a few of the discussed ideas:

Write a reservoir sampler for the quartiles of floats.

First off, package structure. Were I doing this before LLMs, I would have chosen to have some sort of streamstat package that contained several algorithms, maybe one per file. This does not seem to be a unique opinion; here is an open source package following that model. Now, I want just this one algorithm in its own package: quantile. Other variants or related algorithms can have their own packages.

Next up, what do we get from an LLM? The first pass is not bad. That prompt, with some details about wanting it in Go, got me quartile_sampler.go:

// QuartileSampler maintains a reservoir sample to estimate quartiles of a stream
// of float64 values. It uses reservoir sampling to maintain a representative
// sample when processing large streams of data.
type QuartileSampler struct {
	reservoir []float64
	seen      int
	maxSize   int
	rng       *rand.Rand
}

// NewQuartileSampler creates a new sampler that maintains up to maxSize samples.
// A larger maxSize provides more accurate estimates but uses more memory.
func NewQuartileSampler(maxSize int) *QuartileSampler { … }

// Add adds a new value to the sampler.
func (qs *QuartileSampler) Add(value float64) { … }

// Quartiles returns the estimated 25th, 50th (median), and 75th percentiles
// of the data seen so far. Returns (0,0,0) if no data has been added.
func (qs *QuartileSampler) Quartiles() (q1, median, q3 float64) {
	if len(qs.reservoir) == 0 {
		return 0, 0, 0
	}
	// Make a copy to avoid modifying the reservoir.
	sorted := make([]float64, len(qs.reservoir))
	copy(sorted, qs.reservoir)
	slices.Sort(sorted)

	n := len(sorted)
	q1 = percentile(sorted, 0.25)
	median = percentile(sorted, 0.50)
	q3 = percentile(sorted, 0.75)
	return q1, median, q3
}

The core interface is good too:

func NewQuartileSampler(maxSize int) *QuartileSampler
func (qs *QuartileSampler) Add(value float64)
func (qs *QuartileSampler) Quartiles() (q1, median, q3 float64)

Great! There are also tests.

An aside: this may be the place to stop. Sometimes I use LLM codegen as a form of specialized search. E.g., I'm curious about reservoir sampling, but want to see how the algorithm would be applied under some surprising constraint, for example time-windowed sampling. Instead of doing a literature search I might amend my prompt for an implementation that tracks freshness. (I could also ask it to include references to the literature in the comments, which I could manually check to see if it's making things up or if there's some solid research to work from.) Often I spend 60 seconds reading some generated code, see an obvious trick I hadn't thought of, then throw it away and start over. Now I know the trick is possible. This is why it is so hard to attribute value generated by LLMs. Yes, sometimes it makes bad code, gets stuck in a rut, makes up something impossible (it hallucinated a part of the monaco API I wish existed the other day), and wastes my time. It can also save me hours by pointing out something relevant I don't know.

Back to the code. Fascinatingly, the initial code produced didn't compile. In the middle of the Quartiles implementation there was the line:

n := len(sorted)

Which is a fine line; sorted is a slice defined a few lines earlier. But the value is never used, so gopls (and the Go compiler if you run go build) immediately says:

declared and not used: n

This is a very easy fix. If I paste the error back into the LLM it will correct it. Though in this case, as I'm reading the code, it's quite clear to me that I can just delete the line myself, so I do.
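One thing the generated excerpt above does not show is the percentile helper that Quartiles calls. A plausible version, assuming linear interpolation between adjacent sorted samples (the same behavior the reference implementation below is described as using), might look like this sketch; it is my reconstruction, not the code the model actually produced:

// percentile returns the value at fraction p (0 <= p <= 1) of an
// already-sorted, non-empty slice, linearly interpolating between
// the two nearest samples. Assumes `import "math"`.
func percentile(sorted []float64, p float64) float64 {
	pos := p * float64(len(sorted)-1)
	lo := int(math.Floor(pos))
	hi := int(math.Ceil(pos))
	if lo == hi {
		return sorted[lo]
	}
	frac := pos - float64(lo)
	return sorted[lo]*(1-frac) + sorted[hi]*frac
}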
Now the tests. I got what I expected. In quartile_sampler_test.go:

func TestQuartileSampler(t *testing.T) {
	tests := []struct {
		name    string
		data    []float64
		wantQ1  float64
		wantMed float64
		wantQ3  float64
		epsilon float64 // allowed error for floating point comparison
	}{
		{
			name:    "basic sequence",
			data:    []float64{1, 2, 3, 4, 5, 6, 7},
			wantQ1:  2,
			wantMed: 4,
			wantQ3:  6,
			epsilon: 0.1,
		},
		…
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			qs := NewQuartileSampler(100) // Use large reservoir for accurate testing
			for _, v := range tt.data {
				qs.Add(v)
			}

			q1, med, q3 := qs.Quartiles()
			if math.Abs(q1-tt.wantQ1) > tt.epsilon {
				t.Errorf("Q1 = %v, want %v (±%v)", q1, tt.wantQ1, tt.epsilon)
			}
			if math.Abs(med-tt.wantMed) > tt.epsilon {
				t.Errorf("Median = %v, want %v (±%v)", med, tt.wantMed, tt.epsilon)
			}
			if math.Abs(q3-tt.wantQ3) > tt.epsilon {
				t.Errorf("Q3 = %v, want %v (±%v)", q3, tt.wantQ3, tt.epsilon)
			}
		})
	}
}

Exactly the sort of thing I would write! I would run some cases through another implementation to generate expected outputs and copy them into a test like this. But there are two issues with this.

The first is that the LLM did not run these numbers through another implementation. (To the best of my knowledge. When using a sophisticated LLM service, it is hard to say for sure what is happening behind the scenes.) It made them up, and LLMs have a reputation for being weak at arithmetic. So this sort of test, while reasonable for a human to write because we base it on the output of another tool, or if we are particularly old-school do some arithmetic ourselves, is not great coming from an LLM.

The second issue is that we can do better. I am happy we now live in a time when programmers write their own tests, but we do not hold ourselves to the same standards with tests as we do with production code. That is a reasonable tradeoff; there are only so many hours in the day. But what LLMs lack in arithmetical prowess, they make up for in enthusiasm. Let's ask for an even better test:

In the tests, implement the simplest, most readable version of the standard code for quartiles over a fixed set of known values in a slice. Then pass the test cases through the standard code and the reservoir sampler and confirm they are within an epsilon of each other. Structure the comparison code such that it can be used in a fuzz test too.

This got us some new test code:

// referenceQuartiles calculates the exact quartiles for a slice of float64 values
// using linear interpolation, matching the behavior expected from the sampler.
func referenceQuartiles(data []float64) (q1, median, q3 float64) { … }

// compareQuartiles checks if two sets of quartiles are within epsilon of each other.
// Returns true if they match within the tolerance, false otherwise.
func compareQuartiles(q1a, meda, q3a, q1b, medb, q3b, epsilon float64) bool { … }

// checkQuartiles is a test helper that compares sampler output against the reference
// implementation and reports any differences.
func checkQuartiles(t *testing.T, data []float64, epsilon float64) {
	t.Helper()

	// Get reference values.
	wantQ1, wantMed, wantQ3 := referenceQuartiles(data)

	// Get sampler values using a large reservoir for accuracy.
	qs := NewQuartileSampler(1000)
	for _, v := range data {
		qs.Add(v)
	}
	gotQ1, gotMed, gotQ3 := qs.Quartiles()

	if !compareQuartiles(gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon) {
		t.Errorf("Quartiles mismatch:\ngot (q1=%v, med=%v, q3=%v)\nwant (q1=%v, med=%v, q3=%v)\nepsilon=%v",
			gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon)
	}
}

The original test from above has been reworked to use checkQuartiles, and we have something new:

func FuzzQuartileSampler(f *testing.F) {
	// Add some seed corpus
	f.Add([]float64{1, 2, 3, 4, 5})

	f.Fuzz(func(t *testing.T, data []float64) {
		// Use a larger epsilon for fuzzing since we might get more extreme values
		checkQuartiles(t, data, 0.2)
	})
}

This is fun, because it's wrong. My gopls tooling immediately says:

fuzzing arguments can only have the following types: string, bool, float32, float64, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64, []byte

Pasting that error back into the LLM gets it to regenerate the fuzz test such that it is built around a func(t *testing.T, data []byte) function that uses math.Float64frombits to extract floats from the data slice. Interactions like this point us towards automating the feedback from tools: all it needed was the obvious error message to make solid progress towards something useful. I was not needed.
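The regenerated test isn't shown above, but the shape described, a fuzz function taking a byte slice and decoding float64s out of it with math.Float64frombits, looks roughly like the following. This is my reconstruction, not the model's actual output, and it assumes encoding/binary and math are imported:

func FuzzQuartileSampler(f *testing.F) {
	f.Add([]byte{1, 2, 3, 4, 5, 6, 7, 8}) // seed corpus: 8 raw bytes per float64

	f.Fuzz(func(t *testing.T, data []byte) {
		// Decode the fuzzer's bytes into float64s, 8 bytes at a time.
		var floats []float64
		for ; len(data) >= 8; data = data[8:] {
			v := math.Float64frombits(binary.LittleEndian.Uint64(data[:8]))
			if math.IsNaN(v) || math.IsInf(v, 0) {
				continue // quartiles of NaN/Inf inputs are not meaningful
			}
			floats = append(floats, v)
		}
		if len(floats) == 0 {
			t.Skip()
		}
		checkQuartiles(t, floats, 0.2)
	})
}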
Doing a quick survey of the last few weeks of my LLM chat history (which, as I mentioned earlier, is not a proper quantitative analysis by any measure) shows that more than 80% of the time there is a tooling error, the LLM can make useful progress without me adding any insight. About half the time it can completely resolve the issue without me saying anything of note; I am just acting as the messenger.

Where are we going?

Better tests, maybe even less DRY

There was a programming movement some 25 years ago focused around the principle "don't repeat yourself." As is so often the case with short snappy principles taught to undergrads, it got taken too far. There is a lot of cost associated with abstracting out a piece of code so it can be reused: it requires creating intermediate abstractions that must be learned, and it requires adding features to the factored-out code to make it maximally useful to the maximum number of people, which means we depend on libraries filled with useless distracting features.

The past 10-15 years have seen a far more tempered approach to writing code, with many programmers understanding that it is better to reimplement a concept if the cost of sharing the implementation is higher than the cost of implementing and maintaining separate code. It is far less common for me to write on a code review "this isn't worth it, separate the implementations." (Which is fortunate, because people really don't want to hear things like that after they have done all the work.) Programmers are getting better at tradeoffs.

What we have now is a world where the tradeoffs have shifted. It is now easier to write more comprehensive tests. You can have the LLM write the fuzz test implementation you want but didn't have the hours to build properly. You can spend a lot more time writing tests to be readable, because the LLM is not sitting there constantly thinking "it would be better for the company if I went and picked another bug off the issue tracker than doing this." So the tradeoff shifts in favor of having more specialized implementations.

The place where I expect this to be most visible is language-specific REST API wrappers. Every major company API comes with dozens of these, usually low-quality wrappers written by people who aren't actually using their implementations for a specific goal, but are instead trying to capture every nook and cranny of an API in a large and complex interface. Even when it is done well, I have found it easier to go to the REST documentation (usually a set of curl commands) and implement a language wrapper for the 1% of the API I actually care about. It cuts down the amount of the API I need to learn upfront, and it cuts down how much future programmers (myself) reading the code need to understand.

For example, as part of my recent work on sketch.dev I implemented a Gemini API wrapper in Go. Even though the official wrapper in Go has been carefully handcrafted by people who know the language well and clearly care, there is a lot to read to understand it:

$ go doc -all genai | wc -l
1155

My simplistic initial wrapper was 200 lines of code total: one method, three types. Reading the entire implementation is 20% of the work of reading the documentation of the official package, and if you decide to try digging into its implementation you will discover that it is a wrapper around another largely code-generated implementation, with protos and gRPC and the works. All I want is to cURL and parse a JSON object.
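To give a sense of what "one method, three types" can look like, here is a sketch of a minimal wrapper in that spirit. The endpoint and JSON field names are from my reading of the Gemini REST documentation and may have drifted, so treat them as illustrative rather than authoritative; this is not the actual sketch.dev wrapper:

package gemini

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Part and Content mirror the request/response JSON shapes.
// (Field names are from memory of the REST docs; verify before use.)
type Part struct {
	Text string `json:"text"`
}

type Content struct {
	Parts []Part `json:"parts"`
}

type Client struct {
	APIKey string
	Model  string // e.g. "gemini-1.5-pro"
}

// GenerateContent sends a single prompt and returns the first candidate's text.
func (c *Client) GenerateContent(prompt string) (string, error) {
	body, err := json.Marshal(map[string]any{
		"contents": []Content{{Parts: []Part{{Text: prompt}}}},
	})
	if err != nil {
		return "", err
	}
	url := fmt.Sprintf(
		"https://generativelanguage.googleapis.com/v1beta/models/%s:generateContent?key=%s",
		c.Model, c.APIKey)
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		Candidates []struct {
			Content Content `json:"content"`
		} `json:"candidates"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	if len(out.Candidates) == 0 || len(out.Candidates[0].Content.Parts) == 0 {
		return "", fmt.Errorf("gemini: empty response")
	}
	return out.Candidates[0].Content.Parts[0].Text, nil
}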
There obviously comes a point in a project, where Gemini is the foundation of the entire app, where nearly every feature is used, where building on gRPC aligns well with the telemetry system elsewhere in your organization, where you should use the large official wrapper. But most of the time it is so much more time-consuming, both upfront and ongoing, to do so (given we almost always want only some wafer-thin sliver of whatever API we need to use today) that custom clients, largely written by a GPU, are far more effective for getting work done.

So I foresee a world with far more specialized code, fewer generalized packages, and more readable tests. Reusable code will continue to thrive around small robust interfaces and otherwise will be pulled apart into specialized code. Depending how well this is done, it will lead to either better software or worse software. I would expect both, with a long-term trend towards better software by the metrics that matter.

Automating these observations: sketch.dev

As a programmer, my instinct is to make computers do work for me. It is a lot of work getting value out of LLMs; how can a computer do it? I believe the key to solving a problem is not to overgeneralize. Solve a particular problem and then expand slowly. So instead of building a general-purpose UI for chat programming that is just as good at COBOL as it is for Haskell, we want to focus on one particular environment. The bulk of my programming is in Go, and so what I want is easy to imagine for a Go programmer:

- something like the Go playground, built around editing a package and tests
- with a chat interface onto editable code
- a little UNIX env where we can run go get and go test
- goimports integration
- gopls integration
- automatic model feedback: on model edit, run go get, go build, and go test, and feed missing packages, compiler errors, and test failures back to the model to try and get them fixed automatically (a sketch of this loop follows below)

A few of us have built an early prototype of this: sketch.dev.
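The "automatic model feedback" item is the heart of it. A minimal sketch of that loop, assuming a hypothetical askLLM function standing in for whatever model API is wired up; everything here other than the go toolchain invocations is invented for illustration. Assumes imports: fmt, os, os/exec, path/filepath.

// run executes a command in dir and returns its combined output.
func run(dir, name string, args ...string) (string, error) {
	cmd := exec.Command(name, args...)
	cmd.Dir = dir
	out, err := cmd.CombinedOutput()
	return string(out), err
}

// fixLoop writes src into dir, builds and tests it, and on failure feeds
// the raw tool output back to the model for another attempt. askLLM is a
// placeholder for a call into a model API: prompt in, revised source out.
func fixLoop(dir, file, src string, askLLM func(prompt string) string, maxTries int) (string, error) {
	for i := 0; i < maxTries; i++ {
		if err := os.WriteFile(filepath.Join(dir, file), []byte(src), 0o644); err != nil {
			return "", err
		}
		out, err := run(dir, "go", "build", "./...")
		if err == nil {
			out, err = run(dir, "go", "test", "./...")
		}
		if err == nil {
			return src, nil // it compiles and the tests pass
		}
		// The message that fixes most problems is simply the tool output.
		src = askLLM("Fix this Go code so it builds and its tests pass.\n\nCode:\n" +
			src + "\n\nTool output:\n" + out)
	}
	return "", fmt.Errorf("no fix after %d attempts", maxTries)
}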
The goal is not a "Web IDE" but rather to challenge the notion that chat-based programming even belongs in what is traditionally called an IDE. IDEs are collections of tools arranged for people. It is a delicate environment where I know what is going on. I do not want an LLM spewing its first draft all over my current branch. While an LLM is ultimately a developer tool, it is one that needs its own IDE to get the feedback it needs to operate effectively. Put another way: we didn't embed goimports into sketch for it to be used by humans, but to get Go code closer to compiling using automatic signals, so that the compiler can provide better error feedback to the LLM driving it. It might be better to think of sketch.dev as a "Go IDE for LLMs."

This is all very recent work with a lot left to do, e.g. git integration so we can load existing packages for editing and drop the results on a branch. Better test feedback. More console control. (If the answer is to run sed, run sed. Be you the human or the LLM.) We are still exploring, but we are convinced that focusing an environment on a particular kind of programming will give us better results than a generalized tool.
More from David Crawshaw

jsonfile: a quick hack for tinkering

2024-02-06

The year is 2024. I am on vacation and dream up a couple of toy programs I would like to build. It has been a few years since I built a standalone toy; I have been busy. So instead of actually building any of the toys I think of, I spend my time researching whether anything has changed since the last time I did it. Should I pick up new tools or techniques?

It turns out lots of things have changed! There's some great stuff out there, including decent quorum-write regional cloud databases now. Oh, and the ability to have a fascinating hour-long novel conversation with transistors. But things are still awkward for small fast tinkering.

Going back in time, I struggled constantly rewriting the database for the prototype of Tailscale, so I ended up writing my in-memory objects out as a JSON file. It went far further than I planned. Somewhere in the intervening years I convinced myself it must have been a bad idea even for toys, given all the pain migrating away from it caused. But now that I find myself in an empty text editor wanting to write a little web server, I am not so sure. The migration was painful, and a lot of that pain was borne by others (which is unfortunate, I find handing a mess to someone else deeply unpleasant). Much of that pain came from the brittle design of the caching layers on top (also my doing), which came from not moving to an SQL system soon enough.

I suspect, considering the process in retrospect, that a great deal of that pain can be avoided by committing to migrating directly to an SQL system the moment you need an index. You can pay down a lot of exploratory design work in a prototype before you need an index; while n is small, full scans are fine. But you don't make it very far into production before one of your values of n crosses something around a thousand and you long for an index.

With a clear exit strategy for avoiding big messes, the JSON file as database is still a valid technique for prototyping. And having spent a couple of days remembering what a misery it is to write a unit test for software that uses postgresql (mocks? docker?? for a database program I first ran on a computer with less power than my 2024 wrist watch?) and struggling to figure out how to make my cgo sqlite cross-compile to Windows, I'm firmly back to thinking a JSON file can be a perfectly adequate database for a 200-line toy.

Consider your requirements!

Before you jump into this and discover it won't work, or just as bad, dismiss the small and unscaling as always a bad idea, consider the requirements of your software. Using a JSON file as a database means your software:

- Doesn't have a lot of data. Keep it to a few megabytes.
- Has a data structure boring enough not to require indexes. You don't need something interesting like full-text search.
- Does plenty of reads, but writes are infrequent. Ideally no more than one every few seconds.

Programming is the art of tradeoffs. You have to decide what matters and what does not. Some of those decisions need to be made early, usually with imperfect information. You may very well need a powerful SQL DBMS from the moment you start programming, depending on the kind of program you're writing!

A reference implementation

An implementation of jsonfile (which Brad called JSONMutexDB, which is cooler because it has an x in it, but requires more typing) can fit in about 70 lines of Go. But there are a couple of lessons we ran into in the early days of Tailscale that can be paid down relatively easily, growing the implementation to 85 lines. (More with comments!) I think it's worth describing the interesting things we ran into, both in code and here. You can find the implementation of jsonfile here: https://github.com/crawshaw/jsonfile/blob/main/jsonfile.go

The interface is:

type JSONFile[Data any] struct { … }

func New[Data any](path string) (*JSONFile[Data], error)
func Load[Data any](path string) (*JSONFile[Data], error)
func (p *JSONFile[Data]) Read(fn func(data *Data))
func (p *JSONFile[Data]) Write(fn func(*Data) error) error
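For orientation, here is what using that interface looks like in practice. The Data type is invented for the example; it assumes imports fmt, log, and the jsonfile package above:

// A toy schema; any JSON-marshalable struct works.
type Data struct {
	Users map[string]string
}

func main() {
	db, err := jsonfile.New[Data]("app.json") // use Load for an existing file
	if err != nil {
		log.Fatal(err)
	}
	// All writes happen in a callback; returning an error discards the change.
	err = db.Write(func(d *Data) error {
		if d.Users == nil {
			d.Users = make(map[string]string)
		}
		d.Users["dave"] = "dave@example.com"
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	// Reads are callback-scoped too.
	db.Read(func(d *Data) {
		fmt.Println(len(d.Users), "users")
	})
}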
There is some experience behind this design. In no particular order:

Transactions

One of the early pain points in the transition was figuring out the equivalent of when to BEGIN, COMMIT, and ROLLBACK. The first version exposed the mutex directly (which was later converted into a RWMutex). There is no advantage to paying this transition cost later. It is easy to box up read/write transactions with a callback. This API does that, and provides a great point to include other safety mechanisms.

Database corruption through partial writes

There are two forms of this. The first is if the write fn fails half-way through, having edited the db object in some way. To avoid this, the implementation first creates an entirely new copy of the DB before applying the edit, so the entire change set can be thrown away on error. Yes, this is inefficient. No, it doesn't matter. Inefficiency in this design is dominated by the I/O required to write the entire database on every edit. If you are concerned about the duplicate-on-write cost, you are not modeling I/O cost appropriately (which is important, because if I/O matters, switch to SQL).

The second is from a full disk. The easy way to write a file in Go is to call os.WriteFile, which the first implementation did. But that means:

- Truncating the database file
- Making multiple system calls to write(2)
- Calling close(2)

A failure can occur in any of those system calls, resulting in a corrupt DB. So this implementation creates a new file, loads the DB into it, and when that has all succeeded, uses rename(2). It is not a panacea; our operating systems do not make all the promises we wish they would about rename. But it is much better than the default.
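Written out, the write-to-a-new-file-then-rename pattern is short. This standalone sketch shows the shape of the technique; it is not the jsonfile code itself. Assumes imports: encoding/json, os, path/filepath.

// saveJSON atomically replaces the file at path with the JSON encoding of v.
func saveJSON(path string, v any) error {
	buf, err := json.MarshalIndent(v, "", "\t")
	if err != nil {
		return err
	}
	// Create the temp file in the same directory so the final
	// rename(2) cannot cross a filesystem boundary.
	tmp, err := os.CreateTemp(filepath.Dir(path), filepath.Base(path)+".tmp*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // a no-op once the rename succeeds
	if _, err := tmp.Write(buf); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Sync(); err != nil { // push the bytes to disk before renaming
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// Only after the new file is fully written does it replace the old one.
	return os.Rename(tmp.Name(), path)
}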
Memory aliasing

A nasty issue I have run into twice is aliasing memory. This involves doing something like:

list := []int{1, 2, 3}
db.Write(func(d *Data) error {
	d.List = list
	return nil
})
list[0] = 10 // editing the database!

Some changes you may want to consider

Backups. An intermediate version of this code kept the previous database file on write. But there's an easier and even more robust strategy: never rename the file back to the original. Always create a new file, mydb.json.<timestamp>. On starting, load the most recent file. Then when your data is worth backing up (if ever), have a separate program prune down the number of files and send them somewhere robust.

Constant memory. Not in this implementation, but something you may want to consider, is removing the risk of a Read function editing memory. You can do that with View* types generated by the viewer tool. It's neat, but it more than quadruples the complexity of JSONFileDB, complicates the build system, and initially isn't very important in the sorts of programs I write. I have found several memory aliasing bugs in all the code I've written on top of a JSON file, but have yet to accidentally write when reading. Still, for large code bases Views are quite pleasant and well worth considering about the point when a project should move to a real SQL.

There is some room for performance improvements too (using cloner instead of unmarshalling a fresh copy of the data for writing), though I must point out again that needing more performance is a good sign it is time to move on to SQLite, or something bigger.

It's a tiny library. Copy and edit as needed. It is an all-new implementation, so I will be fixing bugs as I find them. (As a bonus: this was my first time using a Go generic! 👴 It went fine. Parametric polymorphism is ok.)

A final thought

Why go out of my way to devise an inadequate replacement for a database? Most projects fail before they start. They fail because the activation energy is too high. Our dreams are big and usually too much, as dreams should be. But software is not building a house or traveling the world. You can realize a dream with the tools you have on you now, in a few spare hours. This is the great joy of it; you are free from physical and economic constraint. If you start. Be willing to compromise almost everything to start.
new year, same plan

2022-12-31

Some months ago, the bill from GCE for hosting this blog jumped from nearly nothing to far too much for what it is, so I moved provider and needed to write a blog post to test it all. I could have figured out why my provider hiked the price. Presumably I was Holding It Wrong and with just a few grip adjustments I could get the price dropped. But if someone mysteriously starts charging you more money, and there are other people who offer the same service, why would you stay?

This has not been a particularly easy year, for a variety of reasons. But here I am at the end of it, and beyond a few painful mistakes that in retrospect I did not have enough information to get right, I made mostly the same decisions I would make again. There were a handful of truly wonderful moments. So the plan for 2023 is the same: keep the kids intact, and make programming more fun.

There is also the question of Twitter. It took me a few years to develop the skin to handle the generally unpleasant environment. (I can certainly see why almost no old Twitter employees used their product.) The experience recently has degraded; there are still plenty of funny tweets, but far fewer moments of interesting content. Here is a recent exception, but it is notable because it's the first time in weeks I learned anything from Twitter: https://twitter.com/lrocket/status/1608883621980704768. I now find more new ideas hiding in HN comments than on Twitter.

Many people I know have sort-of moved to Mastodon, but it has a pretty horrible UX that is just enough work that I, on the whole, don't enjoy it much. And the fascinating insights don't seem to be there yet, but I'm still reading and waiting.

On the writing side, it might be a good idea to lower the standards (and length) of my blog posts to replace writing tweets. But maybe there isn't much value in me writing short notes anyway; are my contributions as fascinating as the ones I used to sift through Twitter to read? Not really. So maybe the answer is to give up the format entirely. That might be something new for 2023.

Here is something to think about for the new year: http://www.shoppbs.pbs.org/now/transcript/transcriptNOW140_full.html

DAVID BRANCACCIO: There's a little sweet moment, I've got to say, in a very intense book– your latest– in which you're heading out the door and your wife says what are you doing? I think you say– I'm getting– I'm going to buy an envelope.

KURT VONNEGUT: Yeah.

DAVID BRANCACCIO: What happens then?

KURT VONNEGUT: Oh, she says well, you're not a poor man. You know, why don't you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I'm going to have a hell of a good time in the process of buying one envelope. I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don't know. The moral of the story is, is we're here on Earth to fart around. And, of course, the computers will do us out of that. And, what the computer people don't realize, or they don't care, is we're dancing animals. You know, we love to move around. And, we're not supposed to dance at all anymore.
log4j: between a rock and a hard place

2021-12-11

There is more than enough written on the mechanics of and mitigations for the recent severe RCE in log4j. On prevention, this is the most interesting widely-reshared insight I have seen:

Log4j maintainers have been working sleeplessly on mitigation measures; fixes, docs, CVE, replies to inquiries, etc. Yet nothing is stopping people to bash us, for work we aren't paid for, for a feature we all dislike yet needed to keep due to backward compatibility concerns.

This is making the rounds because highly-profitable companies are using infrastructure they do not pay for. That is a worthy topic, but not the most interesting thing in this particular case, because it would not clearly have contributed to preventing this bug. It is the second statement in this tweet that is worthy of attention: the maintainers of log4j would have loved to remove this bad feature long ago, but could not because of the backwards compatibility promises they are held to.

What does backwards compatibility mean to me?

I am often heard to say that I love backwards compatibility, and that it is underrated. But what exactly do I mean? I don't mean that whenever I upgrade a dependency, I expect zero side effects. If a library function gets two times faster in an upgrade, that is a change in behavior that might break my software! But obviously the exact timings of functions can change between versions. In some extreme cases I need libraries to promise the algorithmic complexity of run time or memory usage, where I am providing extremely large inputs, or need constant-time algorithms to avoid timing attacks. But I don't need that from a logging library. So let me back up and describe what is important.

I want to not spend much time upgrading a dependency

The ideal version of this is I run my package manager's upgrade command, execute the tests, commit the output, and not think about it any more. This means the API/ABI stays similar enough that the compiler won't break, the behavior of the library routines is similar enough that the tests will pass, and no other constraints, such as total binary size limits, are exceeded. This is impossible in the general case. The only way to achieve it is to not make any changes at all. When we write down a promise, we leave lots of definitional holes in it. E.g., take the (generally excellent) Go compatibility promise:

It is intended that programs written to the Go 1 specification will continue to compile and run correctly, unchanged, over the lifetime of that specification.

Here "correctly" means according to the Go language specification and the API documentation. The spec and the docs do not cover run time, memory use, or binary size. The next version of Go can be 10x slower and be compatible! But I can assure you that if that were the case, I would fail my goal of not spending much time upgrading a dependency. But the Go team know this, and work to the spirit of their promise. Very occasionally they break things, for security reasons, and when they do I have to spend time upgrading a dependency for a really good reason: my program needs it.

I want any problems caused by the upgrade to be caught early, not in production

If I want my program to work correctly I should write tests for all the behaviors I care about. But like all programmers, I am short on hours in the day to do all that needs doing, and never have enough tests. So whenever a change in behavior happens in an upstream library that my tests don't catch but makes it into production, my instinct is to blame upstream. This is of course unfair; the burden for releasing good programs is borne by the person pressing the release button. But it is an expression of a programming social contract that has taken hold: a good software project tries to break downstream as little as possible, and when we do break downstream, we should do our best to make the breakage obvious and easy to fix.
No compatibility promise I have seen covers this spirit of minimizing breakage and moving it to the right part of the process. As far as I can tell, engineers aren't taught this in school, and many have never heard the concept articulated. So much of best practice in releasing libraries is learned on the job and not well communicated (yet). Good upstream dependencies are maintained by people who have figured this out the hard way and do their best by their users. As a user, it is extremely hard to know what kind of library you are getting when you first consider a dependency, unless it is a very old and well-established project.

I want to be able to build knowledge of the library over a long time, to hone my craft

This is where software goes wrong the most for me. I want, year after year, to come back to a tool and be able to apply the knowledge I acquired the last time I used it to new things I learn, and build on it. I want to hone my craft by growing a deep understanding of the tools I use.

Some new features are additive. If I buy a new speed square for framing, and it has a notch on it my old one didn't that I can use as a shortcut in marking up a beam, its presence does not invalidate my old knowledge. If the new interior notch replaces a marking that was on the outside of the square, then when I go to find my trusty marking I remember from years ago, and it's missing, I need to stop and figure out a new way to solve this old problem. Maybe I will notice the new feature, or, more likely, I'll pull out the tape measure I know how to use and find my mark that (slower) way. If someone who knew what they were doing saw me they could correct me! But like programming, I'm usually making a mess with wood alone in a few spare hours on a Saturday.

When software "upgrades" invalidate my old knowledge, they make me a worse programmer. I can spend time getting back to where I was, but that's time I am not spending improving on where I was. To give a concrete example: I will never be an expert at developing for macOS or iOS. I bounce into and out of projects for Apple devices, spending no more than 10% of my hours on their platform. Their APIs change constantly. The buttons in Xcode move so quickly I sometimes wonder if it's happening before my eyes. Try looking up some Swift syntax on Stack Overflow and you'll find the answers are constantly edited for the latest version of Swift. At this point, I assume every time I come back to macOS/iOS that I know nothing and am learning the platform for the first time.

Compare the shifting sands of Swift with the stability of awk. I have spent not a tenth of the time learning awk that I have spent relearning Swift, and yet I am about as capable in each language. An awk one-liner I learned 20 years ago still works today! When I see someone use awk to solve a problem, I'm enthusiastic to learn how they did it, because I know that 20 years from now the trick will still work.

Backwards compatibility should not have forced log4j to keep LDAP/JNDI URLs

By what backwards compatibility means to me, a project like log4j will break fewer people by removing a feature like the JNDI URLs than by marking an old API method with some mechanical deprecation notice that causes a build process's equivalent of -Wall to fail, and moving it to a new name.
They will, in practice, break fewer people by removing this feature than they would by slowing down a critical path by 10%, which is the sort of thing that can trivially slip into a release unnoticed.

But the spirit of compatibility promises appears to be poorly understood across our industry (as software updates demonstrate to me every week), and so we lean on the pseudo-legalistic wording of project documentation to write strongly worded emails or snarky tweets any time a project makes work for us (because most projects don't get it, so surely every example of a breakage must be a project that doesn't get it, not a good reason), and upstream maintainers become defensive and overly conservative. The result is that now everyone's Java software is broken! We as a profession misunderstand and misuse the concept of backwards compatibility, both upstream and downstream, by focusing on narrow legalistic definitions instead of outcomes. This is a harder, longer topic that maybe I'll find enough clarity to write properly about one day.

The other side of compatibility: being cautious adding features

It should be easy to hack up code and share it! But we should also be cautious about adding burdensome features. This particular bug feels impossibly strange to me, because my idea of a logging API is file descriptor number 2 with the write system call. None of the bells and whistles are necessary, and we should be conservative about our core libraries. Indeed, libraries like these are why I have been growing ever more skeptical of using any dependencies, and now force myself to read a big chunk of any library before adding it to a project.

But I have also written my share of misfeatures, as much as I would like to forget them. I am thankful my code I don't like has never achieved the success or wide use of log4j, and I cannot fault diligent (and unpaid!) maintainers doing their best under those circumstances.
Software I'm thankful for

2021-11-25

A few of the things that come to mind, this thanksgiving.

open/read/write/close

Most Unix-ish APIs, from files to sockets, are a bit of a mess today. Endless poorly documented sockopts, unexpected changes in write semantics across FSs and OSes, good luck trying to figure out mtimes. But despite the mess, I can generally wrap my head around open/read/write/close. I can strace a binary, figure out the sequence of calls, and decipher what's going on. Sprinkle in some printfs and state is quickly debuggable. Stack traces are useful!

Enormous effort has been spent on many projects to replace this style of I/O programming, for efficiency or aesthetics, often with an asynchronous bent. I am thankful for this old reliable standby of synchronous open/read/write/close, and hope to see it revived and reinvented throughout my career to be cleaner and simpler.

goroutines

Goroutines are coroutines with compiler/runtime-optimized yielding, to make them behave like threads. This breathes new life into the previous technology I'm thankful for: simple blocking I/O. With goroutines it becomes cheap to write large-scale blocking servers without running out of OS resources (like heavy threads, on OSes where they're heavy, or FDs). It also makes it possible to use blocking interfaces between "threads" within a process without paying the ever-growing price of a context switch in the post-spectre world.
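The cheapness is easiest to see in a tiny example. Here is a sketch of a blocking echo server, one goroutine per connection, with no event loop in sight (the port number is arbitrary):

package main

import (
	"bufio"
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":7000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept() // blocks until a client connects
		if err != nil {
			log.Fatal(err)
		}
		// One cheap goroutine per connection; the runtime multiplexes
		// many thousands of these onto a handful of OS threads.
		go func(c net.Conn) {
			defer c.Close()
			s := bufio.NewScanner(c)
			for s.Scan() { // blocking reads, written as straight-line code
				c.Write(append(s.Bytes(), '\n'))
			}
		}(conn)
	}
}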
Tailscale

This is the first year where the team working on Tailscale has outgrown and eclipsed me to the point where I can be thankful for Tailscale without feeling like I'm thanking myself. Many of the wonderful new features that let me easily wire machines together wherever they are, like userspace networking or MagicDNS, are not my doing. I'm thankful for the product, and the opportunity to work with the best engineering team I've ever had the privilege of being part of.

SQLite

Much like open/read/write/close, SQLite is an island of stability in a constantly changing technical landscape. Techniques I learned 10 or 15 years ago using SQLite work today. As a bonus, it does so much more than it did then: WAL mode for highly-concurrent servers, advanced SQL like window functions, excellent ATTACH semantics. It has done all of this while keeping the number of, in the project's own language, "goofy design" decisions to a minimum and holding true to its mission of being "lite". I aspire to write such wonderful software.

JSON

JSON is the worst form of encoding — except for all the others that have been tried. It's complicated, but not too complicated. It's not easily read by humans, but it can be read by humans. It is possible to extend it in intuitive ways. When it gets printed onto your terminal, you can figure out what's going on without going and finding the magic decoder ring of the week. It makes some things that are extremely hard with XML or INI easy, without introducing accidental Turing completeness or turning country codes into booleans. Writing software is better for it, and it shows the immense effect carefully describing something can have on programming. JSON was everywhere in our JavaScript before the term was defined; the definition let us see it and use it elsewhere.

WireGuard

WireGuard is a great demonstration of why the total complexity of the implementation ends up affecting the UX of the product. In theory I could have been making tunnels between my devices for years with IPSec or TLS; in practice I'd completely given it up until something came along that made it easier. It didn't make it easier by putting a slick UI over complex technology; it made the underlying technology simpler, so even I could (eventually) figure out the configuration. Most importantly, by not eating my entire complexity budget with its own internals, I could suddenly see it as a building block in larger projects. Complexity makes more things possible and fewer things possible, simultaneously. WireGuard is a beautiful example of simplicity and I'm thankful for it.

The speed of the Go compiler

Before Go became popular, the fast programming language compilers of the 90s had mostly fallen by the wayside, replaced by a bimodal world of interpreters/JITs on one side and creaky slow compilers attempting to produce extremely optimal code on the other. The main Go toolchain found, or rediscovered, a new optimal point in the plane of tradeoffs for programming languages to sit: ahead-of-time compiled, but with a fast, less-than-optimal compiler. It has managed to hold that interesting, unstable equilibrium for a decade now, which is incredibly impressive. (E.g., I personally would love to improve its inliner, but know that it's nearly impossible to get too far into that project without sacrificing a lot of the compiler's speed.)

GCC

I've always been cranky about GCC: I find its codebase nearly impossible to modify, it's slow, the associated ducks I need to line up to make it useful (binutils, libc, etc.) blow out the complexity budget on any project I try to start before I get far, and it is associated with GNU, which I used to view as an oddity and now view as a millstone around the neck of an otherwise excellent software project. But these are all the sorts of complaints you only make about something truly invaluable. GCC is invaluable. I would never have learned to program if a free C compiler hadn't been available in the 90s, so I owe it my career. To this day, it vies neck-and-neck with LLVM for best-performing object code. Without the competition between them, compiler technology would stagnate. And while LLVM now benefits from $10s or $100s of millions a year in Silicon Valley salaries working on it, GCC does it all with far less investment. I'm thankful it keeps on going.

vim

I keep trying to quit vim. I keep ending up inside a terminal, inside vim, writing code. Like SQLite, vim is an island of stability over my career. While I wish IDEs were better, I am extremely thankful for tools that work and respect the effort I have taken to learn them, decade after decade.

ssh

SSH gets me from here to there, and has done since ~1999. There is a lot about ssh that needs reinventing, but I am thankful for stable, reliable tools. It takes a lot of work to keep something like ssh working and secure, and if the maintainers are ever looking for someone to buy them a round they know where to find me.

The public web and search engines

How would I get anything done without all the wonderful information on the public web, and search engines to find it? What an amazing achievement. Thanks everyone, for making computers so great.
Do you feel that the number of applications needed to land a role has skyrocketed? If so, your instincts are correct. According to a Workday Global Workforce Report in September 2024, job applications are growing at a rate four times faster than job openings. This growth is fuelled by a tight job market as well as the new availability of remote work and online job boards. It’s also one of the results of improved generative AI. Around half of all job seekers use AI tools to create their resumes or fill out applications. More than that, a 2024 survey found that 29 percent of applicants were using AI tools to complete skills tests, while 26 percent employed AI tools to mass apply to positions, regardless of fit or qualifications. This never-before-seen flood of applications poses new hardships for both job candidates and recruiters. Candidates must ensure that their applications stand out enough from the pile to receive a recruiter’s attention. Recruiters, meanwhile, are struggling to manage the sheer number of resumes they receive, and winnow through heaps of irrelevant or unqualified applicants to find the ones they need. These problems worsen if you’re an overseas candidate hoping to find a role in Japan. Japan is a popular country for migrants, thereby increasing the competition for each open position. In addition, recruiters here have set expectations and criteria, some of which can be triggered unknowingly by candidates unfamiliar with the Japanese market. With all this in mind, how can you ensure your resume stands out from the crowd—and is there anything else you can do to pass the screening stage? I interviewed nine recruiters, both external and in-house, to learn how applicants can increase their chances of success. Below are their detailed suggestions on improving your resume, avoiding Japan-specific red flags, and persisting even in the face of rejection. The competition The first questions I asked each recruiter were: How many resumes do you review in a month? How long does it take you to review a resume? Some interviewees work for agencies or independently, while others are employed by the companies they screen applicants for. Surprisingly, where they work doesn’t consistently affect how many resumes they receive. What does affect their numbers is whether they accept candidates from overseas. One anonymous contributor stated the case plainly: “The volume of applications depends on whether the job posting targets candidates in Japan or internationally.” In Japan: we receive around 20–100+ applications within the first three days. Outside of Japan: a single job posting can attract 200–1,000 applications within three days. ”[Because] we are generally only open to current residents of Japan, our total applicant count is around 100 or so in a month,” said Caleb McClain, who is both a Senior Software Engineer and a hiring manager at Lunaris. “In the past, when we accepted applications from abroad it was much higher, though I unfortunately don’t have stats for that period. It was unmanageable for a single person (me) reviewing the applications, though! “Given that I deal with 100 or so per month, I probably spend a bit more time than others screening applications, but it depends. I’ll give every candidate a quick read through within a minute or so and, if I didn’t find a reason to immediately reject them, I’ll spend a few more minutes reading about their experience more deeply. 
I’ll check out the companies they have listed for their experience if I’m not familiar with them and, if they have a Github or personal projects listed, I’ll also spend a few minutes checking those out.” For companies that accept overseas candidates, the workload is greater. Laine Takahashi, a Talent Acquisition employee at HENNGE, estimated that every month they receive around 200 completed applications for engineering mid-career roles and 270 applications for their Global Internship program. Since their application process starts with a coding test as well as a resume and cover letter, it can take up to two weeks to review, score, and respond to each application. Clement Chidiac, Senior Technical Recruiter at Mercari, explained that the number of resumes he reviews monthly varies widely. “As an example, one of the current roles I am working on received 250+ applications in three weeks. Typically a recruiter at Mercari can work from 5–20 positions at a time, so this gives you an idea.” He also said that his initial quick scan of each resume might take between 5–30 seconds. External recruiters process resumes at a similar rate. Edmund Ho, Principal Consultant for Talisman Corporation, works with around 15 clients a month. To find them, he looks at 20–30 resumes a day, or 600–700 a month, and can only spend 30 seconds to 2 minutes on each one before coming to a decision. Axel Algoet, founder and CEO of InnoHyve, only reviews 200 resumes a month—but “if you count LinkedIn profiles, it’s probably around 1,000.” Why LinkedIn? “I usually start by looking at LinkedIn—the companies they’ve worked at and the roles they’ve had,” Algoet explained. “From there, I can quickly tell whether I’m open to talking with them or not. Since I focus on a very specific segment of roles, I can rapidly identify if a candidate might be a fit for my clients.” Applicant Tracking Systems (ATS) Given the sheer volume of resumes to review and respond to, it’s not surprising that companies are using Applicant Tracking Systems. What’s more unexpected is how few recruiters personally use an ATS or AI when evaluating candidates. Both Ho and Algoet reported that though a high percentage of their clients use an ATS—as many as 90 percent, according to Ho—they themselves don’t use one. Ho in particular emphasized that he manually reads every resume he receives. Lunaris doesn’t use an ATS, “unless you count Notion,” joked McClain. “Open to recommendations!” Koji Hamane, Vice President of Human Resources at KOMOJU, said, “Up to 2023, we were managing the pipeline on a spreadsheet basis, and you cannot do it anymore with 3,000 applications [a year]. So it’s more effective and efficient in terms of tracking where each applicant sits in the recruiting process, but it also facilitates communication among [the members of] the interview panel.” The ATS KOMOJU uses is Workable. “Workable, I mean, you know, it works,” Hamane joked. “It’s much better than nothing. . . . Workable actually shows the valid points of the candidates, highlights characteristics, and evaluates the fit for the required positions, like from a 0 to 100 point basis. It helps, but actually you need to go through the details anyway, to properly assess the candidates.” Chidiac explained that Mercari also uses Workable, which has a feature that matches keywords from the job description to the resume, giving the resume a score. “I’ve never made a decision based on that,” said Chidiac. 
How to format your resume

The good news is that, according to our interviewees, passing the resume screening doesn’t involve trying to master ATS algorithms. However, since many recruiters manually evaluate a high number of resumes every day, they can spend at most only a few minutes on each one. That’s why it’s critical to make your resume stand out positively from the rest. You can see tips on formatting and good practices in our article on the subject, but below recruiters offer detailed explanations of exactly what they’re looking for—and, importantly, what red flags lead to rejection.

Red flags

The biggest red flags called out by recruiters are frequent job changes, not having the skills required by the position, applications from abroad when no visa support is available, mismatches in salary expectations, and a lack of required Japanese language ability.

Frequent job changes

Jumpiness. Job-hopping. Career-switching. Although they had different names for it, nearly everyone listed frequent job changes as the number one red flag on a candidate’s resume—at least, when applying to jobs in Japan.

“There’s a term HR in Japan uses: ‘Oh, this guy is jumpy,’” Clement Chidiac told me. When he asked what they meant by that, they told him it referred to a candidate who had only been in their last job for two years or less. “And my first reaction was like, ‘Is that a bad thing?’ I think in the US, and in most tech companies, people change over every two to three years. I remember at my university in France, I was told you need to change your job externally or internally every three years to grow. But in Japan, there’s still the element of loyalty, right?

“It’s changing a little bit, but when I have a candidate, a good candidate, that has had four jobs in the past ten years, I know I’m going to get questioned. . . . If I get a candidate that’s changed jobs three times in the past three years, they’re not likely to pass the screening, especially if they’re overseas.

“Which is fair, right?” he added. “Because it’s a bit expensive, it’s a bit of a risk, and [it takes] a bit of time.”

Why do Japanese companies feel so strongly about this issue? Some of it is simply history—lifetime employment at a single company was the Japanese ideal until quite recently.
But as Chidiac pointed out, hiring overseas candidates also represents an additional investment of both money and time spent navigating the visa system, so it makes sense for Japanese companies to move more cautiously when doing so.

Sayaka Sasaki, who was previously employed as a Sourcing Specialist by Tech Japan Inc., told me that recruiters attempt to use past job history to foresee the future. “A lack of consistency in career history can also lead to rejection,” she said. “Recruiters can often predict a candidate’s future career plans and job-switching tendencies based on their past job-change patterns.”

Koji Hamane has another reason for considering job tenure. “When you try to leave some achievement or visible impact, [you have to] take some time in the same job, in the same company. So from that perspective, the tenure of each position on a resume really matters. Even though you say, ‘I have this capability and I have this strength,’ [if] your tenure at each company is very short, [you] don’t leave an impact on those workplaces.”

In this sense, Hamane is not evaluating loyalty for its own sake, but treating tenure as a variable for assessing the reproducibility of meaningful achievement. For him, achievement and impact—rather than tenure length itself—are the true signals of qualities such as leadership and resilience.

Long-time or regular freelancers may face similar scrutiny. Though Chidiac is reluctant to call freelancing a red flag, he acknowledged that it can cause problems. “[With] an engineer that’s been doing freelance for the past three or four years, I know I’m going to get pushback from the hiring team, because they might have worked on three-, four-, five-month projects. They might not have the depth of knowledge that companies on a large scale might want to hire.

“Also my question is, if that person has been working on their own for three or four years, how are they going to work in the team? How long are they going to stay with us? Are they going to be happy being part of a company and then maybe having to come to the office, that kind of thing?”

He gave an example: “If you get 100 applicants for backend engineer roles, it’s sad, but you’re going to go with the ones that fit the most traditional background. If I’m hiring and I’m getting five candidates from PayPay . . . I might prioritize these people as opposed to a freelancer that’s based out of Spain and wants to relocate to Japan, because there are a lot of question marks. That’s the reality of the candidate pool.

“Now, if the freelancer in Spain has the exact experience that I want, and I don’t have other applicants, then yeah, of course I’ll talk to that person. I’ll take time to understand [their reasons].”

How to “fix” job-hopping on your resume

If you have changed jobs frequently, is rejection guaranteed? Not necessarily. These recruiters also offered a host of tips to compensate for job-hopping, freelancing stints, or gaps in your work history.

The biggest tip: include an explanation on your resume. Edmund Ho advises offering a “reason for leaving” for short-term jobs, defining short-term as “less than three years.” For example, if the job was a limited contract role, then labelling it as such will prevent Japanese companies from drawing the conclusion that you left prematurely. Lay-offs and failed start-ups will also be looked upon more benevolently than simply quitting.

In addition, Ho suggested that those with difficult resumes avail themselves of an agent or recruiter.
Since the recruiter will contact the company directly, they have the chance to advocate and explain your job history better than the resume alone can.

Sasaki also feels that explanations can help, but added a caveat: “Being honest about what you did during a gap period is not a bad thing. However, it is important to present it in a positive light. For example, if you traveled abroad or spent time at your family home during the gap period, you could write something like this: ‘Once I start a new job, it will be difficult to take a long vacation. So, I took advantage of this break to visit [destination], which I had always dreamed of seeing. Experiencing [specific highlight] was a lifelong goal, and it helped me refresh myself while boosting my motivation for work.’

“If the gap period lasted for more than a year, it is necessary to provide a convincing explanation for the hiring manager. For instance, you could write, ‘I used this time to enhance my skills by studying [specific subject] and preparing for [certification].’ If you have actually obtained a qualification, that would be a perfect way to present your time productively.”

Hamane answered the question quite differently. “Do you gamble?” he asked me. He went on: “When I say ‘gamble,’ ultimately recruiting is decision-making under uncertainty, right? It comes with risks. But the most important question is, what are the downside risks and upside risks?”

“In the game of hiring,” Hamane explained, “employers are looking for indicators of future performance. Tenure, to me, is not inherently valuable, but serves as a variable to assess whether a candidate had the opportunity to leave a meaningful impact. It’s not about loyalty or raw length of time, but about whether qualities like resilience or leadership had the chance to emerge. Those qualities often require time. However, I don’t judge the number of years on its own—what matters is whether there is evidence of real contributions.

“A shorter tenure with clear impact can be just as strong a signal as longer service. That’s why I view tenure not categorically, but contextually—as one indicator among others.”

If possible, then, a candidate should focus on highlighting their work contributions and unique strengths in their resume, which can counterbalance the perceived “downside risk” of job-hopping.

Incompatibility with the job description

Most other red flags can be categorized as “incompatible with the job description.” This includes:

- Not possessing the required skills
- Applying from abroad when the position doesn’t offer visa support
- A mismatch in salary expectations
- Not speaking Japanese

Many of the resumes recruiters receive are wholly unsuited for the position. Hamane estimated that 70 percent of the resumes his department reviews are essentially “random applications.”

“Almost all the applications are basically not qualified. One of the major reasons why is the Internet. The Internet enables us to apply for any job from anywhere, right? So there are so many applications with no required skills. . . . From my perspective, they are applying on a batch basis, like mass applications.”

Even if the candidate has the required job skills, if they’re overseas and the position doesn’t offer visa support, their resume almost certainly won’t pass. Caleb McClain, whose company is currently hiring only domestically, said, “The most common reason [for rejection] is the person is applying from abroad. . . .
After that, if there’s just a clear skills mismatch, we won’t move forward with them.”

Axel Algoet pointed out that nationality can be a problem even if the company is open to hiring from overseas. “I support many companies in the space, aerospace, and defense industries,” he said, “and they are not allowed to hire candidates from certain countries.” It’s important to understand any legal issues surrounding sensitive industries before applying, to save both your own and the company’s time.

He also mentioned that, while companies do look for candidates with experience at top enterprises, a prestigious background can actually be a red flag—mostly in terms of compensation. Japanese tech companies on average pay lower wages than American businesses, and a mismatch in expectations can become a major stumbling block in the application process overall. “Especially [for] candidates coming from companies like Indeed or some foreign firms,” Algoet said, “if I know I won’t be able to match or beat their current salary, I tell them upfront.”

Not speaking Japanese is another common stumbling block. Companies have different expectations of candidates when it comes to Japanese language ability. Algoet said that, although in his own niche Japanese often isn’t required at all, a Japanese level below JLPT N2 can be a problem for other roles. Sasaki agreed that speaking Japanese to at least the JLPT N3 level would open more doors.

Anticipating potential rejection points

If you can anticipate why recruiters might reject you, you can structure your resume accordingly, highlighting your strengths while deemphasizing any weak points.

For example, if you don’t live in Japan but do speak Japanese, it’s important to draw attention to that fact. “Something that’s annoying,” said Chidiac, “that I’m seeing a lot from a hiring manager point of view, is that they sort of anticipate or presume things. . . . ‘That person has only been in Japan for a year, they can’t speak Japanese.’ But there are some people that have been [going to] Japanese school back home.” That’s why he urges candidates to clearly state both their language ability and their connections to Japan in their resume whenever possible.

Chidiac also mentioned seniority issues. “It’s important that you highlight any elements of seniority.” However, he added, “Seniority means different things depending on the environment.”

That’s why context is critical in your resume. If you’ve worked for a company in another country or another industry, the recruiter may not intuitively know much about the scale or complexity of the projects you’ve worked on. Without some context—the size of the project, the size of the team, the technologies involved, etc.—it’s difficult for recruiters to judge. If you contextualize your projects properly, though, Chidiac believes that even someone with relatively few years of experience may still be viewed favorably for higher roles.

“If you’ve led a very strong project, you might have the seniority we want.”

Finally, Edmund Ho suggested an easy trick for those without a STEM degree: just put down the university you graduated from, and not your major. “It’s cheating!” he said with a chuckle.

Green flags

Creating a great resume isn’t just about avoiding pitfalls. Your resume may also be missing some of the green flags recruiters get excited to see, which can open doors or lead to unexpected offers.
Niche skills

Niche skills were cited by several recruiters as not only valuable in and of themselves, but also a great way to open otherwise closed doors. Even when the job description doesn’t call for your unusual ability or experience, it’s probably worth including it in your resume.

“I’ll of course take into consideration the requirements as written in our current open listings,” said McClain, “as that represents the core of what we are looking for at any given time. However, I also try to keep an eye out for interesting individuals with skills or experience that may benefit us in ways we haven’t considered yet, or match well with projects that aren’t formally planned but we are excited about starting when we have the time or the right people.”

Chidiac agreed that he takes special note of rare skills or very senior candidates on a resume. “We might be able to create an unseeable headcount to secure a rare talent. . . . I think it’s important to have that mindset, especially for niche areas. Machine learning is one that comes to mind, but it could also be very senior [candidates], like staff level or principal level engineers, or people coming from very strong companies, or people that solve problems that we want to solve at the moment, that kind of thing.

“I call it the opportunistic approach, like the unusual path, but it’s important to have that in mind when you apply to a company, because you might not be a fit for a role now, but you might not be aware that a role is going to open soon.”

Sasaki pointed out that niche skills can compensate for an otherwise relatively weak resume, or one that would be bypassed by more traditional Japanese companies. “If the company you are applying to is looking for a niche skill set that only you possess, they will want to speak with you in an interview. So don’t lose hope!”

Tailoring to the job description

“I don’t think there’s a secret recipe to automatically pass the resume screening, because at the end of the day, you need to match the job, right?” said Chidiac. “But I’ve seen people that use the same resume for different roles, and sometimes it’s missing [relevant] experience or specific keywords. So I think it’s important to really read the job description and think about, ‘Okay, these are all the main skills they want. Let me highlight these in some way.’

“If you’re a cloud infrastructure engineer, but you’ve done a lot of coding in the past, or you use a specific technology but it doesn’t show on your CV, you may be automatically rejected either by the recruiter or by the [ATS]. But if you make sure that, ‘Oh yeah, I’ve seen the need for coding skill. I’m going to add that I was a software engineer when I started and I’m doing coding on my side project,’ that will help you with the screening.”

It’s not necessary to entirely remake your resume each time, Chidiac believes, but you should at least ensure that the top of your resume highlights the skills that match the job description.

Connections to Japan

While most of this advice would be relevant anywhere in the world, recruiters did offer one additional tip for applying in Japan—emphasizing your connection to the country.

“Whenever a candidate overseas writes a little thing about any ties to Japan, it usually helps,” said Chidiac. For example, he believes that it helps to highlight your Japanese language ability at the top of your resume. “[If] someone writes like, ‘I want to come to Japan,’ ‘I’ve been going to Japanese school for the last five years,’ ‘I’ve got family in Japan,’ . . .
that kind of stuff usually helps.”

Laine Takahashi confirmed that HENNGE shows extra interest in those kinds of candidates. “Either in the cover letter or the CV,” she said, “if they’re not living in Japan, we want them to write about their passion for coming to Japan.”

Ho went so far as to state that every overseas candidate he’d helped land a job in Japan had either already learned some Japanese, or had an interest in Japanese culture. Tourists who’d just enjoyed traveling in Japan were less successful, he’d found.

How important is a cover letter?

Most recruiters had similar advice for candidates, but one serious point of contention arose: cover letters. Depending on their company and hiring style, interviewees’ opinions ranged widely on whether cover letters were necessary or helpful.

Cover letters aren’t important

“I was trying to remember the last time I read a cover letter,” said Clement Chidiac, “and I honestly don’t think I’ve ever screened an application based on the cover letter.” Instead, Mercari typically requests a resume and poses some screening questions. Chidiac thought this might be a controversial opinion to take, but it was echoed strongly by around half of the other interviewees.

When applying to jobs in Japan, there’s no need to write a cover letter, Edmund Ho told me. “Companies in Japan don’t care!” He then added, “One company, HENNGE, uses cover letters. But you don’t need,” he advised, “to write a fancy cover letter.”

“I never ask for cover letters,” said Axel Algoet. “Instead, I usually set up a casual twenty-minute call between the hiring manager and the candidate, as a quick intro to decide if it’s worth moving forward with the interview process.” Getting to skip the cover letter and go straight to an early-stage interview is a major advantage Algoet is able to offer his candidates. “That said,” he added, “if a candidate is rejected at the screening stage and I feel the client is making a mistake, we sometimes work on a cover letter together to give it another shot.”

Cover letters are extremely important

According to Sayaka Sasaki, though, Japanese companies don’t just expect cover letters—they read them quite closely. “Some people may find this hard to believe,” said Sasaki, “but many Japanese companies carefully analyze aspects of a candidate’s personality that cannot be directly read from the text of a cover letter. They expect to see respect, humility, enthusiasm, and sincerity reflected in the writing.”

Such companies also expect, or at least hope for, brevity and clarity. “Long cover letters are not a good sign,” said Koji Hamane. “You need to be clear and concise.” He does appreciate cover letters, though, especially for junior candidates, who have less information on their resume. “It supplements [our knowledge of] the candidate’s objectives, and helps us to verify the fit between the candidate’s motivation and the job and the company.”

Caleb McClain feels strongly that a good cover letter is the best way for a candidate to stand out from a crowd. “After looking at enough resumes,” he said, “you start to notice similarities and patterns, and as the resume screener I feel a bit of exhaustion over trying to pick out what makes a person unique or better-suited for the position than another.

“A well-written and personal cover letter that expresses genuine interest in joining ‘our’ team and company and working on ‘our’ projects will make you stand out and, assuming you meet the requirements otherwise, I will take that interest into serious consideration.”
“For example,” McClain continued, “we had an applicant in the past who wrote about his experience using our e-commerce site, SolarisJapan, many years ago, and his positive impressions of shopping there. Others wrote about their interests which clearly align with our businesses, or about details from our TokyoDev company profile that appealed to them.”

McClain urged candidates to “really tie your experience and interests into what the company does, show us why you’re the best fit! Use the cover letter to stand out in the crowd and show us who you are in ways that a standard resume cannot. If you have interesting projects on GitHub or blogs on technical topics, share them! But of course,” he added, “make sure they are in a state where you’d want others to read them.”

What to avoid in your cover letter

“However,” McClain also cautioned, “[cover letters are] a double-edged sword, and for as many times as they’ve caused an application to rise to the top, they’ve also sunk that many.” For this reason, it’s best not to attach a cover letter unless one is specifically requested.

Since cover letters are extremely important to some recruiters, however, you should have a good one prepared in advance—and not one authored by an AI tool. “I sometimes receive cover letters,” McClain told me, “that are very clearly written by AI, even going so far as to leave the prompt in the cover letter. Others simply rehash points from their resume, which is a shame and feels like a waste. This is your chance to really sell yourself!”

He wasn’t the only recruiter who frowned on using AI. “Avoid simply copying and pasting AI-generated content into your cover letter,” Sasaki advised. “At the very least, you should write the base structure yourself. Using AI to refine your writing is acceptable, but hiring managers tend to dislike cover letters that clearly appear to be AI-written.”

Laine Takahashi and Sonam Choden at HENNGE have also received their share of AI-generated letters. Sometimes, Choden explained, the use of AI is blatantly obvious, because the places where the company or applicant’s name should be written aren’t filled out.

That doesn’t mean they’re opposed to all use of AI, though. “[The screeners] do not have a problem with the usage of AI technology. It’s just that [you should] show a bit more of your personality,” Takahashi said. She thinks it’s acceptable to use AI “just for making the sentences a bit more pretty, for example, but the story itself is still yours.”

A bigger mistake would be not writing a cover letter at all. “There are cases,” Takahashi explained, “where perhaps the candidate thought that we actually don’t look at or read the cover letter.

“They sent the CV, and then the cover letter was like, ‘Whatever, you’re not going to read this anyway.’ That’s an automatic fail from our side.”

“We do understand,” said Choden, “that most developers now think cover letters are an outdated type of process. But for us, there is a lot of benefit in actually going through with the cover letter, because it’s really hard to judge someone by one piece like a resume, right? So the cover letter is perfect to supplement with things that you might not be able to express in a one-page CV.”

Other tips for success

The interviewees offered a host of other tips to help candidates advance in the application process.

Recruiters vs job boards

There are pros and cons to working with a recruiter as opposed to applying directly.
Partnering with a recruiter can be a complex process in its own right, and candidates should not expect recruiters to guarantee a specific placement or job. Still, Edmund Ho pointed out some of the advantages of working with a recruiter from the start of your job search. Not only can they help fix your resume, or call a company’s HR directly if you’re rejected, but these services are free: external recruiters are paid only if they successfully place you with a company.

Axel Algoet also recommended candidates find a recruiter, but he offered a few caveats to this general advice. “Many candidates are unaware of the candidate ownership rule—which means that when a recruiter submits your application, they ‘own’ it for the next 12–18 months. There’s nothing you can do about it after that point.” By that, he means that the agency you work with will be eligible for a fee if you are hired within that timeframe. Other agencies typically won’t submit your application if it is currently “owned” by another.

This affects TokyoDev as well: if you apply to a company with a recruiter, and then later apply to another role at that company via TokyoDev within 12 months of the original application, the recruiter receives the hiring fee rather than TokyoDev.

That’s why, Algoet said, you should make sure your recruiter is a good fit and can represent you properly. “If you feel they can’t,” he suggested, “walk away.” And if you have less than three years of experience, he suggests skipping a recruiter entirely. “Many companies don’t want to pay recruitment fees for junior candidates,” he added, “but that doesn’t mean they won’t hire you. Reach out to hiring managers directly.”

From the internal recruiter’s perspective, Sonam Choden is in favor of candidates who come through job boards. “I think we definitely have more success with job boards where people are actively directly applying, rather than candidates from agents. In terms of the requirements, the candidates introduced by agents have the experience and what we’re looking for, but those candidates introduced by agents might not necessarily be looking for work, or even if they are . . . [HENNGE] might not be their first choice.”

Laine Takahashi agreed and cited TokyoDev as one of HENNGE’s best sources for candidates.

“We’ve been using TokyoDev for the longest time . . . before the [other] job boards that we’re using now. I think TokyoDev was the one that gave us a good head start for hiring inside Japan.”

“And now we’re expanding to other job boards as well,” she said, “but still, TokyoDev is [at] the top, definitely.”

Follow up

Ho neatly summarized the dilemma around sending a message or email to follow up on your application. “It’s always best to follow up if you don’t hear back,” he said, “but if you follow up too much, it’s irritating.” The question is, how much is too much? And how soon is too soon to message a recruiter or hiring manager? Ho gave a concrete suggestion: “Send a message after three days to one week.”

For Chidiac, following up is a strategy he’s used himself to great effect. “Something that I’ve always done when I look for a job is ping people on LinkedIn, trying to anticipate who is the hiring manager for that role, or who’s the recruiter for that role, and say ‘Hey, I want to apply,’ or ‘I’ve applied.’

“[I’ve said] ‘I know I might not be able to do this and this and that, but I’ve done this and this and this. Can we have a quick chat? Do you need me to tailor my CV differently?
Do you have any other roles that you think would be a good fit?’ And then, follow up frequently.”

“This is something that’s important,” he added, “showing that you’ve researched the company, showing that you’ve attended meetups from time to time, checking the [company] blogs as well. I’ve had people that just said, ‘Hey, I’ve seen on the blogs that you’re working on this. This is what I’ve done in my company. If you’re hiring [for] this team, let me know, right?’ So that could be a good tip to stand out from other applicants. [But] I think there’s no rule. It’s just going to be down to individuals.

“You might,” he continued, “end up talking to someone who’s like, ‘Hey, don’t ever contact me again.’ As an agency recruiter that happened to me, someone said, ‘How did you get my phone [number]? Don’t ever call me again.’ . . . [But] then a lot of the time it’s like, ‘Oh, we’re both French, let’s help each other out,’ or, ‘Oh, yeah, we were at the same university,’ or ‘Hey, I know you know that person.’”

Chidiac gave a recent example of a highly effective follow-up message. “He used to work in top US tech companies for the past 25 years. [After he applied to Mercari], the person messaged me out of the blue: ‘I’m in Japan, I’m semi-retired, I don’t care about money. I really like what Mercari is doing. I’ve done X and Y at these companies.’ . . . So yeah, I was like, I don’t have a role, but this is an exceptional CV. I’ll show it to the hiring team.”

There are a few caveats to this advice, however. First, a well-researched, well-crafted follow-up message is necessary to stand out from the crowd—and these days, there is quite a crowd. “Oh my goodness,” Choden exclaimed when I brought up the subject. “I actually wanted to write a post on LinkedIn, apologizing to people for not being able to get back to them, because of the amount of requests to connect and all, related to the positions that we have at HENNGE.”

Takahashi and Choden explained that many of these messages are attempts to get around the actual hiring process. “Sometimes,” Choden said, “when I do have the time, I try to redirect them. ‘Oh, please, apply here, or go directly to the site,’ because we can’t really do anything—they have to start with the coding test itself. . . . I do look at them,” Choden went on, “and if they’re actually asking a question that I can help with, then I’m more than happy to reply.”

Nonetheless, a few candidates have attempted to go over their heads.

“Sometimes we have some candidates who are asking for updates on their application directly from our CEO. It’s quite shocking, because they send it to his work email as well.”

“And then he’s like, ‘Is anybody handling this? Why am I getting this email?’” Choden related. Other applicants have emailed random HENNGE employees, or even members of the overseas branch in Taiwan. Needless to say, such candidates don’t endear themselves to anyone on the hiring team.

Be persistent

“I know a bunch of people,” Chidiac told me, “that managed to land a job because they’ve tried harder—going to meetups, reaching out to people, networking, that kind of thing.”

One of those people was Chidiac himself, who in 2021 was searching for an in-house recruiter position in Japan, while not speaking Japanese. In his job hunt, Chidiac was well aware that he faced some major disadvantages. “So I went the extra mile by contacting the company directly and being like, ‘This is what I’ve done, I’ve solved these problems, I’ve done this, I’ve done that, I know the Japanese market . . .
[but] I don’t speak Japanese.’

“There’s a bit of a reality check that everyone has to have on what they can bring to the table and how much effort they need to [put forth]. You’re going to have to sell yourself and reach out and find your people.

“Does it always work? No. Does it often work? No. But it works, right?” said Chidiac with a laugh. “Like five percent of the time it works every time. But you need to understand that there are some markets that are tougher than others.”

Ho agreed that job-hunters, particularly candidates who are overseas and hoping to work in Japan for the first time, face a tough road. He recommended applying to as many jobs as possible, but in a strictly organized way. “Make an Excel sheet for your applications,” he urged. Such a spreadsheet should track your applications, when you followed up on them, and the probation period for reapplying to a company after receiving a rejection.

Most importantly, Ho believes candidates should maintain a realistic but optimistic view of the process. “Keep a longer mindset,” he suggested. “Maybe you don’t get an offer the first year, but you do the second year.”

Conclusion

Given the staggering number of applications recruiters must process, and the increasing competition for good roles—especially those open to candidates overseas—it’s easy to become discouraged. Nonetheless, Japan needs international developers. Given Japan’s demographics, as well as the government’s interest in implementing AI and digital transformation (DX) solutions for social problems, that fact won’t change anytime soon.

We at TokyoDev suggest that candidates interested in working in Japan adopt two basic approaches. First, follow the advice in this article and in our resume-writing guide to prevent your resume from being rejected for common flaws. You can highlight niche skills, write an original cover letter, and send appropriate follow-up messages to the recruiters and hiring managers you hope to impress.

Second, persistence is key. The work culture in Japan is evolving and there are more openings for new candidates. Japan’s startup scene is also burgeoning, and modern tech companies—such as Mercari—continue to grow and hire. If your long-term goal is to work in Japan, it’s worth investing the time to keep applying, and hopefully the suggestions offered above will help turn what might have been a lengthy job hunt into a quicker and more successful search.

To apply to open positions right now, see our job board. If you want to hear more tips from other international developers in Japan, check out the TokyoDev Discord. We also have articles with more advice on job hunting, relocating to Japan, and life in Japan.
Can Directories Rise Again?

With search getting worse by the day, maybe it’s time we rebounded in the other direction: the long-forgotten directory.
In his post about “Vibe Driven Development”, Robin Rendle warns against what I’ll call the pseudoscientific approach to product building prevalent across the software industry:

“When folks at tech companies talk about data they’re not talking about a well-researched study from a lab but actually wildly inconsistent and untrustworthy data scraped from an analytics dashboard.”

This approach has all the theater of science—“we measured and made decisions on the data, the numbers don’t lie,” etc.—but is missing the rigor of science. Like, for example, corroboration.

Independent corroboration is a vital practice of science that we in tech conveniently gloss over in our (self-proclaimed) objective, data-driven decision making. In science you can observe something, measure it, analyze the results, and draw conclusions, but nobody accepts it as fact until there are multiple instances of independent corroboration. Meanwhile in product, corroboration is often merely a group of people nodding along in support of a PowerPoint with some numbers supporting a foregone conclusion: “We should do X, that’s what the numbers say!”

(What’s worse is when we have the hubris to think our experiments, anecdotal evidence, and conclusions should extend to others outside of our own teams, despite zero independent corroboration. Looking at you, Medium articles.)

Don’t get me wrong: experimentation and measurement are great. But let’s not pretend there is (or should be) a science to everything we do. We don’t hold a candle to the rigor of science. Software is as much art as science. Embrace the vibe.
Our battle with Apple over their gangster attempt to extort 30% of our HEY revenues was one of the defining moments of my career. It was the kind of test that calls you to account for what you believe and asks what you’re willing to risk to see it through. Well, we risked everything, but also secured a four-year truce, and now near-total victory is at hand: HEY is finally for sale on the iPhone in the US!

Credit for this amazing turn of events goes to Epic Games founders Tim Sweeney and Mark Rein, who did what no small developer like us could ever dream of doing: they spent over $100 million to sue Apple in court. And while the first round yielded very little progress, Apple’s (possibly criminal) contempt of court is what ultimately delivered the resolution. Thanks to their fight for Fortnite, app developers everywhere are now allowed to link out of apps to their own web-based payment system in the US store (but, sadly, nowhere else yet).

This is all we ever wanted from Apple: to have a way to distribute our iPhone apps and keep the customer relationship by billing directly. The 30% toll gets all the attention, and it is ludicrously egregious, but to us, it’s just as much about retaining that direct customer relationship, so we can help folks with refunds, and so they don’t tie their billing for a multi-platform email system to a single manufacturer.

Apple always claims to put the needs of the users first, and that whatever hardship developers have to carry is justified by their customer-focused obsession. But in this case, it’s clear that the obsession was with collecting the easiest billions Apple has ever made, by taking an obscene cut of all software and subscription sales on the platform.

This obsession with squeezing every last dollar from developers has produced countless customer-hostile experiences on the iPhone. Like how you couldn’t buy a book in the Kindle app before this (now you can!). Or sign up for a Netflix subscription (now you can!). Before, users would hunt in vain for an explanation inside these apps, and thanks to Apple’s gag orders, developers were not even allowed to explain the confusing situation.

It’s been the same deal with HEY. While we successfully fought off Apple’s attempt to extort us into using their in-app payment system (IAP), we’ve been stuck with an awkward user experience ever since. One that prevented new customers from signing up for a real email address in the application, and instead sent them down a bizarre burner-account setup. All so the app would “do something”, in order to please an argument that App Store chief Phil Schiller made up on the fly in an interview.

That’s what we can now get rid of. No more weird burner accounts. Now you can sign up directly for a real email address in HEY, and if you like what we have to offer (and I think you will!), you’ll be able to pay the $99/year for a subscription via a web-based flow that it’s now kosher to link to from the app itself.

What a journey, and what a needless torching of the developer relationship from Apple’s side. We’ve always been happy to pay Apple for hosting our application on the App Store, as all developers have always needed to do via the $99/year developer fee. But being forced to hand over 30% of the business, as well as the direct customer relationship, was always an unacceptable overreach.
Now that’s been arrested by Judge Yvonne Gonzalez Rogers of the United States District Court for the Northern District of California, who has delivered app developers the only real relief we’ve seen in this whole sordid monopoly affair that’s been boiling since 2020. It’s a beautiful thing.

It also offers Apple an opportunity to bury the hatchet with developers. They can choose to accept the court’s decision in full and worldwide: allow developers everywhere the right to link to their own billing flow, so they can retain their own customer relationship, and so business models that can’t carry a 30% toll can flourish.

Besides, Apple’s own offering will likely still have plenty of pull. I’m sure many small developers would continue to consider IAP to avoid having to worry about international taxes or even direct customer service. Nobody is taking that away from Apple or those developers. All Judge Rogers is demanding is that Apple compete fairly with alternative arrangements.

In case Apple doesn’t accept the court’s decision—and there’s sadly some evidence of that—I hope the European antitrust regulators watch the simple yet powerful mechanism that Judge Rogers has imposed on Apple. While I’d love sideloading as much as the next sovereign techie who wants to own the hardware I buy, I think we can get the lion’s share of independence by simply being allowed to link out of the apps, just as this District Court has ordered.

I do hope, though, that Apple does accept the court’s decision. Both because it would be a stain on their reputation to get convicted of criminal contempt of court, and because I really want Apple to return to being a shining city on the hill. To show that you can win in the market merely by making better products. Something Apple never used to be afraid of doing. That they don’t need these gangster extortion techniques to make the numbers that Cook has promised Wall Street.

Despite moving on to Linux and Android, I have a real soft spot for Apple’s taste, aesthetics, and engineering prowess. They’ve lost their way and their moral compass over the last half decade or so, but that’s only one leadership pivot away from being found again. That won’t win back all the trust and good faith that was squandered right away, but they’ll at least be on the long road to recovery. Who knows, maybe developers would even be inclined to assist Apple next time they need help launching a new device in need of third-party software to succeed.