More from Computer Things
I realize that for all I've talked about Logic for Programmers in this newsletter, I never once explained basic logical quantifiers. They're both simple and incredibly useful, so let's do that this week!

Sets and quantifiers

A set is a collection of unordered, unique elements. {1, 2, 3, …} is a set, as are "every programming language", "every programming language's Wikipedia page", and "every function ever defined in any programming language's standard library". You can put whatever you want in a set, with some very specific limitations to avoid certain paradoxes.2

Once we have a set, we can ask "is something true for all elements of the set?" and "is something true for at least one element of the set?" For example, is it true that every programming language has a set collection type in the core language? We would write it like this:

# all of them
all l in ProgrammingLanguages: HasSetType(l)

# at least one
some l in ProgrammingLanguages: HasSetType(l)

This is the notation I use in the book because it's easy to read, type, and search for. Mathematicians historically had a few different formats; the one I grew up with was ∀x ∈ set: P(x) to mean all x in set, and ∃ to mean some. I use these when writing for just myself, but find them confusing to programmers when communicating. "All" and "some" are respectively referred to as "universal" and "existential" quantifiers.

Some cool properties

We can simplify expressions with quantifiers, in the same way that we can simplify !(x && y) to !x || !y.

First of all, quantifiers are commutative with themselves. some x: some y: P(x,y) is the same as some y: some x: P(x, y). For this reason we can write some x, y: P(x,y) as shorthand. We can even do this when quantifying over different sets, writing some x, x' in X, y in Y instead of some x, x' in X: some y in Y. We cannot do this with "alternating quantifiers": all p in Person: some m in Person: Mother(m, p) says that every person has a mother. some m in Person: all p in Person: Mother(m, p) says that someone is every person's mother.

Second, existentials distribute over || while universals distribute over &&. "There is some url which returns a 403 or 404" is the same as "there is some url which returns a 403 or some url that returns a 404", and "all PRs pass the linter and the test suites" is the same as "all PRs pass the linter and all PRs pass the test suites".

Finally, some and all are duals: some x: P(x) == !(all x: !P(x)), and vice versa. Intuitively: if some file is malicious, it's not true that all files are benign.

All these rules together mean we can manipulate quantifiers almost as easily as we can manipulate regular booleans, putting them in whatever form is easiest to use in programming. Speaking of which, how do we use this in programming?

How we use this in programming

First of all, people clearly have a need for directly using quantifiers in code. If we have something of the form:

for x in list:
    if P(x):
        return true
return false

That's just some x in list: P(x). And this is a prevalent pattern, as you can see by using GitHub code search. It finds over 500k examples of this pattern in Python alone! That can be simplified by using the language's built-in quantifiers: the Python would be any(P(x) for x in list). (Note this is not quantifying over sets but iterables. But the idea translates cleanly enough.)
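To see the two forms side by side, here's a quick runnable sketch; P is an arbitrary predicate picked for illustration, and the last line checks the duality rule from earlier:

def P(x):
    # arbitrary predicate, just for illustration
    return x % 2 == 0

xs = [1, 3, 5, 8]

def some_loop(xs):
    # the explicit-loop version of "some x in xs: P(x)"
    for x in xs:
        if P(x):
            return True
    return False

assert some_loop(xs) == any(P(x) for x in xs)   # some == any()
assert not all(P(x) for x in xs)                # all == all()

# duality: some x: P(x) == !(all x: !P(x))
assert any(P(x) for x in xs) == (not all(not P(x) for x in xs))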
More generally, quantifiers are a key way we express higher-level properties of software. What does it mean for a list to be sorted in ascending order? That all i, j in 0..<len(l): if i < j then l[i] <= l[j]. When should a ratchet test fail? When some f in functions - exceptions: Uses(f, bad_function). Should the image classifier work upside down? all i in images: classify(i) == classify(rotate(i, 180)). These are the properties we verify with tests and types and MISU and whatnot;1 it helps to be able to make them explicit!

One cool use case that'll be in the book's next version: database invariants are universal statements over the set of all records, like all a in accounts: a.balance > 0. That's enforceable with a CHECK constraint. But what about something like all i, i' in intervals: NoOverlap(i, i')? That isn't covered by CHECK, since it spans two rows. Quantifier duality to the rescue! The invariant is equivalent to !(some i, i' in intervals: Overlap(i, i')), so it is preserved if the query SELECT COUNT(*) FROM intervals CROSS JOIN intervals … returns 0 rows. This means we can test it via a database trigger.3

There are a lot more use cases for quantifiers, but this is enough to introduce the ideas! Next week's the one year anniversary of the book entering early access, so I'll be writing a bit about that experience and how the book changed. It's crazy how crude v0.1 was compared to the current version.

MISU ("make illegal states unrepresentable") means using data representations that rule out invalid values. For example, if you have a location -> Optional(item) lookup and want to make sure that each item is in exactly one location, consider instead changing the map to item -> location. This is a means of implementing the property all i in item, l, l' in location: if ItemIn(i, l) && l != l' then !ItemIn(i, l'). ↩

Specifically, a set can't be an element of itself, which rules out constructing things like "the set of all sets" or "the set of sets that don't contain themselves". ↩

Though note that when you're inserting or updating an interval, you already have that row's fields in the trigger's NEW keyword. So you can just query !(some i in intervals: Overlap(new, i)), which is more efficient. ↩
Hi nerds, I'm back from Systems Distributed! I'd heartily recommend it, wildest conference I've been to in years. I have a lot of work to catch up on, so this will be a short newsletter.

In an earlier version of my talk, I had a gag about unit tests. First I showed the test f([1,2,3]) == 3, then said that this was satisfied by f(l) = 3, f(l) = l[-1], f(l) = len(l), f(l) = (129*l[0]-34*l[1]-617)*l[2] - 443*l[0] + 1148*l[1] - 182. Then I progressively ruled them out one by one with more unit tests, except the last polynomial, which stubbornly passed every single test. If you're given some function f(x: int, y: int, …): int and a set of unit tests asserting that specific inputs give specific outputs, then you can find a polynomial that passes every single unit test.

To construct the gag, and as SMT practice, I wrote a Python program that finds a polynomial that passes a test suite meant for max. It's hardcoded for three parameters and only finds 2nd-order polynomials, but I think it could be generalized with enough effort.

The code

Full code here, breakdown below.

from z3 import *  # type: ignore

s1, s2 = Solver(), Solver()

Z3 is just the particular SMT solver we use, as it has good language bindings and a lot of affordances. As part of learning SMT I wanted to do this two ways: first by putting the polynomial "outside" of the SMT solver in a Python function, second by doing it "natively" in Z3. I created two solvers so I could test both versions in one run.

a0, a, b, c, d, e, f = Consts('a0 a b c d e f', IntSort())
x, y, z = Ints('x y z')
t = "a*x+b*y+c*z+d*x*y+e*x*z+f*y*z+a0"

Both Const('x', IntSort()) and Int('x') do the exact same thing, the latter being syntactic sugar for the former. I did not know this when I wrote the program. To keep the two versions in sync I represented the equation as a string, which I later eval. This is one of the rare cases where eval is a good idea, to help us experiment more quickly while learning. The polynomial is a "2nd-order polynomial", even though it doesn't have x^2 terms, because it has xy and xz terms.

lambdamax = lambda x, y, z: eval(t)

z3max = Function('z3max', IntSort(), IntSort(), IntSort(), IntSort())
s1.add(ForAll([x, y, z], z3max(x, y, z) == eval(t)))

lambdamax is pretty straightforward: create a lambda with three parameters and eval the string. The string "a*x" then becomes the Python expression a*x, where a is an SMT symbol, while the x SMT symbol is shadowed by the lambda parameter. To reiterate, a terrible idea in practice, but a good way to learn faster.

The z3max function is a little more complex. Function takes an identifier string and N "sorts" (roughly the same as programming types). The first N-1 sorts define the parameters of the function, while the last becomes the output. So here I assign the string identifier "z3max" to be a function with signature (int, int, int) -> int. I can load the function into the model by specifying constraints on what z3max could be. This could either be a strict input/output, as will be done later, or a ForAll over all possible inputs. Here I just use that directly to say "for all inputs, the function should match this polynomial." But I could do more complicated constraints, like commutativity (f(x, y) == f(y, x)) or monotonicity (Implies(x < y, f(x) <= f(y))). Note ForAll takes a list of z3 symbols to quantify over. That's the only reason we need to define x, y, z in the first place. The lambda version doesn't need them.

inputs = [(1,2,3), (4, 2, 2), (1, 1, 1), (3, 5, 4)]
for g in inputs:
    s1.add(z3max(*g) == max(*g))
    s2.add(lambdamax(*g) == max(*g))

This sets up the joke: adding constraints to each solver that the polynomial it finds must, for a fixed list of triplets, return the max of each triplet.

for s, func in [(s1, z3max), (s2, lambdamax)]:
    if s.check() == sat:
        m = s.model()
        for x, y, z in inputs:
            print(f"max([{x}, {y}, {z}]) =", m.evaluate(func(x, y, z)))
        print(f"max([x, y, z]) = {m[a]}x + {m[b]}y",
              f"+ {m[c]}z +",  # linebreaks added for newsletter rendering
              f"{m[d]}xy + {m[e]}xz + {m[f]}yz + {m[a0]}\n")

Output:

max([1, 2, 3]) = 3
# etc
max([x, y, z]) = -133x + 130y + -10z + -2xy + 62xz + -46yz + 0

max([1, 2, 3]) = 3
# etc
max([x, y, z]) = -17x + 16y + 0z + 0xy + 8xz + -6yz + 0

I find that z3max (top) consistently finds larger coefficients than lambdamax does. I don't know why.
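Naturally, the found polynomial only agrees with max on the four test inputs; that's the whole joke. Here's a quick check in plain Python using the lambdamax coefficients printed above (a sketch, not part of the original script):

def fake_max(x, y, z):
    # the coefficients lambdamax found above
    return -17*x + 16*y + 0*z + 0*x*y + 8*x*z + -6*y*z + 0

assert fake_max(1, 2, 3) == 3  # passes every test in `inputs`...
assert fake_max(3, 5, 4) == 5
assert fake_max(2, 2, 2) == 6  # ...but the real max([2, 2, 2]) is 2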
Practical Applications

Test-Driven Development recommends a strict "red-green-refactor" cycle. Write a new failing test, make the new test pass, then go back and refactor. Well, the easiest way to make the new test pass would be to paste in a new polynomial, so that's what you should be doing. You can even do this all automatically: have a script read the set of test cases, pass them to the solver, and write the new polynomial to your code file. All you need to do is write the tests!

Pedagogical Notes

Writing the script took me a couple of hours. I'm sure an LLM could have whipped it all up in five minutes, but I really want to learn SMT, and LLMs may decrease learning retention.1 Z3 documentation is not... great for non-academics, though, and most other SMT solvers have even worse docs. One useful trick I rely on regularly is using GitHub code search to find code using the same APIs and studying how it works. Turns out reading API-heavy code is a lot easier than writing it!

Anyway, I'm very, very slowly feeling like I'm getting the basics of how to use SMT. I don't have any practical use cases yet, but I've wanted to learn this skill for a while and I'm glad I finally did.

Caveat: I have not actually read the study; for all I know it could have a sample size of three people. I'll get around to it eventually. ↩
No newsletter next week

I'll be speaking at Systems Distributed. My talk isn't close to done yet, which is why this newsletter is both late and short.

Solving LinkedIn Queens in SMT

The article Modern SAT solvers: fast, neat and underused claims that SAT solvers1 are "criminally underused by the industry". A while back on the newsletter I asked "why": how come they're so powerful and yet nobody uses them? Many experts responded saying the reason is that encoding problems directly in SAT kinda sucks, and they'd rather use tools that compile to SAT. I was reminded of this when I read Ryan Berger's post on solving "LinkedIn Queens" as a SAT problem.

A quick overview of Queens. You're presented with an NxN grid divided into N regions, and have to place N queens so that there is exactly one queen in each row, column, and region. While queens can be on the same diagonal, they cannot be diagonally adjacent. (Important note: LinkedIn "Queens" is a variation on the puzzle game Star Battle, which is the same except the number of stars you place in each row/column/region varies per puzzle, and is usually two. This is also why "queens" don't capture like chess queens.)

Ryan solved this by writing Queens as a SAT problem, expressing properties like "there is exactly one queen in row 3" as a large number of boolean clauses. Go read his post, it's pretty cool. What leapt out to me was that he used CVC5, an SMT solver.2 SMT solvers are "higher-level" than SAT, capable of handling more data types than just boolean variables. It's a lot easier to solve the problem at the SMT level than at the SAT level. To show this, I whipped up a short demo of solving the same problem in Z3 (via the Python API). Full code here, which you can compare to Ryan's SAT solution here. I didn't do a whole lot of cleanup on it (again, time crunch!), but a short explanation is below.

The code

from z3 import *  # type: ignore
from itertools import combinations, chain, product

solver = Solver()
size = 9  # N

Initial setup and imports. size is the number of rows/columns/regions in the board, which I'll call N below.

# queens[n] = col of queen on row n
# by construction, not on same row
queens = IntVector('q', size)

SAT represents the queen positions via N² booleans: q_00 means that a queen is on row 0 and column 0, !q_05 means a queen isn't on row 0 col 5, etc. In SMT we can instead encode it as N integers: q_0 = 5 means that the queen on row 0 is positioned at column 5. This immediately enforces one class of constraints for us: we don't need any constraints saying "exactly one queen per row", because that's embedded in the definition of queens! (Incidentally, using 0-based indexing for the board was a mistake on my part; it makes correctly encoding the regions later really painful.) To actually make the variables [q_0, q_1, …], we use the Z3 affordance IntVector(str, n) for making n variables at once.

solver.add([And(0 <= i, i < size) for i in queens])

# not on same column
solver.add(Distinct(queens))

First we constrain all the integers to [0, N), then use the incredibly handy Distinct constraint to force all the integers to have different values. This guarantees at most one queen per column, which by the pigeonhole principle means there is exactly one queen per column.

# not diagonally adjacent
for i in range(size-1):
    q1, q2 = queens[i], queens[i+1]
    solver.add(Abs(q1 - q2) != 1)

One of the rules is that queens can't be adjacent. We already know that they can't be horizontally or vertically adjacent via other constraints, which leaves the diagonals.
We only need to add constraints saying that, for each queen, there is no queen in the lower-left or lower-right corner, aka q_3 != q_2 ± 1. We don't need to check the top corners because if q_1 is in the upper-left corner of q_2, then q_2 is in the lower-right corner of q_1!

That covers everything except the "one queen per region" constraint. But the regions are the tricky part, which we should expect because we vary the difficulty of Queens games by varying the regions.

regions = {
    "purple": [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8),
               (1, 0), (2, 0), (3, 0), (4, 0), (5, 0), (6, 0), (7, 0), (8, 0),
               (1, 1), (8, 1)],
    "red": [(1, 2), (2, 2), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1), (6, 2),
            (7, 1), (7, 2), (8, 2), (8, 3),],
    # you get the picture
}

# Some checking code left out, see below

The regions have to be manually coded in, which is a huge pain. (In the link, some validation code follows. Since it breaks up explaining the model, I put it in the next section.)

for r in regions.values():
    solver.add(Or(
        *[queens[row] == col for (row, col) in r]
    ))

Finally we have the region constraint. The easiest way I found to say "there is exactly one queen in each region" is to say "there is a queen in region 1 and a queen in region 2 and a queen in region 3", etc. Then to say "there is a queen in region purple" I wrote "q_0 = 0 OR q_0 = 1 OR … OR q_1 = 0", etc. Why iterate over every position in the region instead of doing something like (0, q[0]) in r? I tried that, but it's not an expression that Z3 supports.

if solver.check() == sat:
    m = solver.model()
    print([(l, m[l]) for l in queens])

Finally, we solve and print the positions. Running this gives me:

[(q__0, 0), (q__1, 5), (q__2, 8), (q__3, 2), (q__4, 7), (q__5, 4), (q__6, 1), (q__7, 3), (q__8, 6)]

Which is the correct solution to the queens puzzle.
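If you'd rather see that as a board than as a list of pairs, a couple of extra lines will draw it. This is a sketch that reuses the m and queens from above:

# draw the board: one row per line, Q marks the queen's column
for row in range(size):
    col = m[queens[row]].as_long()
    print("." * col + "Q" + "." * (size - col - 1))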
I didn't benchmark the solution times, but I imagine it's considerably slower than a raw SAT solver. Glucose is really, really fast. But even so, solving the problem with SMT was a lot easier than solving it with SAT. That satisfies me as an explanation for why people prefer it to SAT.

Sanity checks

One bit I glossed over earlier was the sanity checking code. I knew for sure that I was going to make a mistake encoding the region, and the solver wasn't going to provide useful information about what I did wrong. In cases like these, I like adding small tests and checks to catch mistakes early, because the solver certainly isn't going to catch them!

all_squares = set(product(range(size), repeat=2))

def test_i_set_up_problem_right():
    assert all_squares == set(chain.from_iterable(regions.values()))
    for r1, r2 in combinations(regions.values(), 2):
        assert not set(r1) & set(r2), set(r1) & set(r2)

The first check was a quick test that I didn't leave any squares out, or accidentally put the same square in two regions. Converting the values into sets makes both checks a lot easier. Honestly I don't know why I didn't just use sets from the start, sets are great.

def render_regions():
    colormap = ["purple", "red", "brown", "white", "green", "yellow", "orange", "blue", "pink"]
    board = [[0 for _ in range(size)] for _ in range(size)]
    for (row, col) in all_squares:
        for color, region in regions.items():
            if (row, col) in region:
                board[row][col] = colormap.index(color) + 1
    for row in board:
        print("".join(map(str, row)))

render_regions()

The second check is something that prints out the regions. It produces something like this:

111111111
112333999
122439999
124437799
124666779
124467799
122467899
122555889
112258899

I can compare this to the picture of the board to make sure I got it right. I guess a more advanced solution would be to print emoji squares like 🟥 instead. Neither check is quality code, but it's throwaway and it gets the job done, so eh.

"Boolean SATisfiability Solver", aka a solver that can find assignments that make complex boolean expressions true. I write a bit more about them here. ↩

"Satisfiability Modulo Theories" ↩
New Logic for Programmers Release

v0.10 is now available! This is a minor release, mostly focused on logic-based refactoring, with new material on set types and testing that refactors are correct. See the full release notes at the changelog page. Due to conference pressure, v0.11 will also likely be a minor release.

AI is a gamechanger for TLA+ users

TLA+ is a specification language to model and debug distributed systems. While very powerful, it's also hard for programmers to learn, and there are always questions of connecting specifications with actual code. That's why The Coming AI Revolution in Distributed Systems caught my interest. In the post, Cheng Huang claims that Azure successfully used LLMs to examine an existing codebase, derive a TLA+ spec, and find a production bug in that spec. "After a decade of manually crafting TLA+ specifications", he wrote, "I must acknowledge that this AI-generated specification rivals human work".

This inspired me to experiment with LLMs in TLA+ myself. My goals are a little less ambitious than Cheng's: I wanted to see how LLMs could help junior specifiers write TLA+, rather than handle the entire spec automatically. Details on what did and didn't work below, but my takeaway is that LLMs are an immense specification force multiplier. All tests were done with a standard VSCode Copilot subscription, running Claude 3.7 in Agent mode. Other LLMs or IDEs may be more or less effective, etc.

Things Claude was good at

Fixing syntax errors

TLA+ uses a very different syntax than mainstream programming languages, meaning beginners make a lot of mistakes where they write "programming syntax" instead of TLA+ syntax:

NotThree(x) =  \* should be ==, not =
    x != 3     \* should be #, not !=

The problem is that the TLA+ syntax checker, SANY, is 30 years old and doesn't provide good information. Here's what it says for that snippet:

Was expecting "==== or more Module body"
Encountered "NotThree" at line 6, column 1

That only isolates one error and doesn't tell us what the problem is, only where it is. Experienced TLA+ users get "error eyes" and can quickly see what the problem is, but beginners really struggle with this. The TLA+ foundation has made LLM integration a priority, so the VSCode extension naturally supports several agent actions. One of these is running SANY, meaning an agent can get an error, fix it, get another error, fix it, etc. Provided the above sample and asked to make it work, Claude successfully fixed both errors. It also fixed many errors in a larger spec, as well as figuring out why PlusCal specs weren't compiling to TLA+. This by itself is already enough to make LLMs a worthwhile tool, as it fixes one of the biggest barriers to entry.

Understanding error traces

When TLA+ finds a violated property, it outputs the sequence of steps that leads to the error. This starts in plaintext, and VSCode parses it into an interactive table. Learning to read these error traces is a skill in itself. You have to understand what's happening in each step and how it relates back to the actually broken property. It takes a long time for people to learn how to do this well. Claude was successful here, too, accurately reading 20+ step error traces and giving a high-level explanation of what went wrong. It could also condense error traces: if ten steps of the error trace could be condensed into a one-sentence summary (which can happen if you're modeling a lot of process internals), Claude would do it.
I did have issues here with doing this in agent mode: while the extension does provide a "run model checker" command, the agent would regularly ignore this and prefer to run a terminal command instead. This would be fine except that the LLM consistently hallucinated invalid commands. I had to amend every prompt with "run the model checker via vscode, do not use a terminal command". You can skip this if you're willing to copy and paste the error trace into the prompt. As with syntax checking, if this were the only thing LLMs could effectively do, that would already be enough1 to earn a strong recommend. Even as a TLA+ expert, I expect I'll be using this trick regularly.

Boilerplate tasks

TLA+ has a lot of boilerplate. One of the most notorious examples is UNCHANGED rules. Specifications are extremely precise — so precise that you have to specify what variables don't change in every step. This takes the form of an UNCHANGED clause at the end of relevant actions:

RemoveObjectFromStore(srv, o, s) ==
    /\ o \in stored[s]
    /\ stored' = [stored EXCEPT ![s] = @ \ {o}]
    /\ UNCHANGED <<capacity, log, objectsize, pc>>

Writing this is really annoying. Updating these whenever you change an action, or add a new variable to the spec, is doubly so. Syntax checking and error analysis are important for beginners, but this is what I wanted for myself. I took a spec and prompted Claude: Add UNCHANGED <> for each variable not changed in an action. And it worked! It successfully updated the UNCHANGED in every action. (Note, though, that it was a "well-behaved" spec in this regard: only one "action" happened at a time. In TLA+ you can have two actions happen simultaneously that each update half of the variables, meaning neither of them should have an UNCHANGED clause. I haven't tested how Claude handles that!)

That's the most obvious win, but Claude was good at handling other tedious work, too. Some examples include updating vars (the conventional collection of all state variables), lifting a hard-coded value into a model parameter, and changing data formats. Most impressive to me, though, was rewriting a spec designed for one process to instead handle multiple processes. This means taking all of the process variables, which originally have types like Int, converting them to types like [Process -> Int], and then updating the uses of all of those variables in the spec. It didn't account for race conditions in the new concurrent behavior, but it was an excellent scaffold to do more work.

Writing properties from an informal description

You have to be pretty precise with your intended property description, but Claude handles converting that precise description into TLA+'s formalized syntax, which is something beginners often struggle with.

Things it is less good at

Generating model config files

To model check TLA+, you need both a specification (.tla) and a model config file (.cfg), which have separate syntaxes. Asking the agent to generate the second often led to it using TLA+ syntax. It automatically fixed this after getting parsing errors, though.

Fixing specs

Whenever it ran model checking and discovered a bug, Claude would naturally propose a change to either the invalid property or the spec. Sometimes the changes were good, other times the changes were not physically realizable. For example, if it found that a bug was due to a race condition between processes, it would often suggest fixing it by saying race conditions were okay. I mean yes, if you say bugs are okay, then the spec finds that bugs are okay!
Or it would alternatively suggest adding a constraint to the spec saying that race conditions don't happen. But that's a huge mistake in specification, because race conditions happen if we don't have coordination. We need to specify the mechanism that is supposed to prevent them.

Finding properties of the spec

After seeing how capable it was at translating my properties to TLA+, I started prompting Claude to come up with properties on its own. Unfortunately, almost everything I got back was either trivial, uninteresting, or too coupled to implementation details. I haven't tested if it would work better to ask it for "properties that may be violated".

Generating code from specs

I have to be specific here: Claude could sometimes convert Python into a passable spec, and vice versa. It wasn't good at recognizing abstraction. For example, TLA+ specifications often represent sequential operations with a state variable, commonly called pc. If modeling code that nonatomically retrieves a counter value and increments it, we'd have one action that requires pc = "Get" and sets the new value to "Inc", then another that requires it be "Inc" and sets it to "Done".
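For concreteness, here's a sketch of the kind of Python being modeled; the pc variable labels which step each process is on and has no counterpart in the program itself:

import threading

counter = 0

def increment():
    global counter
    tmp = counter       # the pc = "Get" step: read the value
    counter = tmp + 1   # the pc = "Inc" step: write it back, then "Done"

# two threads interleaving these steps can lose an update,
# which is exactly the race the spec's two actions make visible
threads = [threading.Thread(target=increment) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()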
I found that Claude would try to somehow convert pc into part of the Python program's state, rather than recognize it as a TLA+ abstraction. On the other side, when converting Python code to TLA+ it would often try to translate things like sleep into some part of the spec, not recognizing that it is abstractable into a distinct action. I didn't test other possible misconceptions, like converting randomness to nondeterminism. For the record, when converting TLA+ to Python, Claude tended to make simulators of the spec, rather than possible production code implementing the spec. I really wasn't expecting otherwise, though.

Unexplored Applications

Things I haven't explored thoroughly but could possibly be effective, based on what I know about TLA+ and AI:

Writing Java Overrides

Most TLA+ operators are resolved via TLA+ interpreters, but you can also implement them in "native" Java. This lets you escape the standard language semantics and add capabilities like executing programs during model-checking or dynamically constraining the depth of the searched state space. There are a lot of cool things I think would be possible with overrides. The problem is there's only a handful of people in the world who know how to write them. But that handful has written quite a few overrides, and I think there's enough there for Claude to work with.

Writing specs, given a reference mechanism

In all my experiments, the LLM only had my prompts and the occasional Python script as information. That makes me suspect that some of its problems with writing and fixing specs come down to not having a system model. Maybe it wouldn't suggest fixes like "these processes never race" if it had a design doc saying that the processes can't coordinate. (Could a Sufficiently Powerful LLM derive some TLA+ specification from a design document?)

Connecting specs and code

This is the holy grail of TLA+: taking a codebase and showing it correctly implements a spec. Currently the best ways to do this are by either using TLA+ to generate a test suite, or by taking logged production traces and matching them to TLA+ behaviors. This blog post discusses both. While I've seen a lot of academic research into these approaches, there are no industry-ready tools. So if you want trace validation you have to do a lot of manual labour tailored to your specific product. If LLMs could do some of this work for us, that would really amplify the usefulness of TLA+ to many companies.

Thoughts

Right now, agents seem good at the tedious and routine parts of TLA+ and worse at the strategic and abstraction parts. But since the routine parts are often a huge barrier to beginners, this means that LLMs have the potential to make TLA+ far, far more accessible than it previously was.

I have mixed thoughts on this. As an advocate, this is incredible. I want more people using formal specifications, because I believe it leads to cheaper, safer, more reliable software. Anything that gets people comfortable with specs is great for our industry. As a professional TLA+ consultant, I'm worried that this obsoletes me. Most of my income comes from training and coaching, for which companies will now have far less demand. Then again, maybe this is an opportunity to pitch "agentic TLA+ training" to companies!

Anyway, if you're interested in TLA+, there has never been a better time to try it. I mean it, these tools handle so much of the hard part now. I've got a free book available online, as does the inventor of TLA+. I like this guide too. Happy modeling!

Dayenu. ↩
Systems Distributed

I'll be speaking at Systems Distributed next month! The talk is brand new and will aim to showcase some of the formal methods mental models that would be useful in mainstream software development. It has added some extra stress on my schedule, though, so expect the next two monthly releases of Logic for Programmers to be mostly minor changes.

What does "Undecidable" mean, anyway

Last week I read Against Curry-Howard Mysticism, which is a solid article I recommend reading. But this newsletter is actually about one comment:

I like to see posts like this because I often feel like I can't tell the difference between BS and a point I'm missing. Can we get one for questions like "Isn't XYZ (Undecidable|NP-Complete|PSPACE-Complete)?"

I've already written one of these for NP-complete, so let's do one for "undecidable". Step one is to pull a technical definition from the book Automata and Computability:

A property P of strings is said to be decidable if ... there is a total Turing machine that accepts input strings that have property P and rejects those that do not. (pg 220)

Step two is to translate the technical computer science definition into more conventional programmer terms. Warning: because this is a newsletter and not a blog post, I might be a little sloppy with terms.

Machines and Decision Problems

In automata theory, all inputs to a "program" are strings of characters, and all outputs are "true" or "false". A program "accepts" a string if it outputs "true", and "rejects" if it outputs "false". You can think of this as automata theory studying all pure functions of type f :: string -> boolean. Problems solvable by finding such an f are called "decision problems".

This covers more than you'd think, because we can bootstrap more powerful functions from these. First, as anyone who's programmed in bash knows, strings can represent any other data. Second, we can fake non-boolean outputs by instead checking if a certain computation gives a certain result. For example, I can reframe the function add(x, y) = x + y as a decision problem like this:

IS_SUM(str) {
    x, y, z = split(str, "#")
    return x + y == z
}

Then because IS_SUM("2#3#5") returns true, we know 2 + 3 == 5, while IS_SUM("2#3#6") is false. Since we can bootstrap parameters out of strings, I'll just write it as IS_SUM(x, y, z) going forward.

A big part of automata theory is studying different models of computation with different strengths. One of the weakest is called the "DFA". I won't go into any details about what a DFA actually can do, but the important thing is that it can't solve IS_SUM. That is, if you give me a DFA that takes inputs of the form x#y#z, I can always find an input where the DFA returns true when x + y != z, or an input which returns false when x + y == z. It's really important to keep this model of "solve" in mind: a program solves a problem if it correctly returns true on all true inputs and correctly returns false on all false inputs.

(total) Turing Machines

A Turing Machine (TM) is a particular type of computation model. It's important for two reasons. First, by the Church-Turing thesis, a Turing Machine is the "upper bound" of how powerful (physically realizable) computational models can get. This means that if an actual real-world programming language can solve a particular decision problem, so can a TM.
Conversely, if the TM can't solve it, neither can the programming language.1 Second, it's possible to write a Turing machine that takes a textual representation of another Turing machine as input, and then simulates that Turing machine as part of its computations.

Property (1) means that we can move between different computational models of equal strength, proving things about one to learn things about another. That's why I'm able to write IS_SUM in pseudocode instead of writing it in terms of the TM computational model (and why I was able to use split for convenience).

Property (2) does several interesting things. First of all, it makes it possible to compose Turing machines. Here's how I can roughly ask if a given number is the sum of two primes, with "just" addition and boolean functions:

IS_SUM_TWO_PRIMES(z):
    x := 1
    y := 1
    loop {
        if x > z { return false }
        if IS_PRIME(x) {
            if IS_PRIME(y) {
                if IS_SUM(x, y, z) { return true }
            }
        }
        y := y + 1
        if y > x {
            x := x + 1
            y := 0
        }
    }

Notice that without the if x > z { return false }, the program would loop forever on z=2. A TM that always halts for all inputs is called total.

Property (2) also makes "Turing machines" a possible input to functions, meaning that we can now make decision problems about the behavior of Turing machines. For example, "does the TM M either accept or reject x within ten steps?"2

IS_DONE_IN_TEN_STEPS(M, x) {
    for (i = 0; i < 10; i++) {
        `simulate M(x) for one step`
        if (`M accepted or rejected`) {
            return true
        }
    }
    return false
}

Decidability and Undecidability

Now we have all of the pieces to understand our original definition:

A property P of strings is said to be decidable if ... there is a total Turing machine that accepts input strings that have property P and rejects those that do not. (220)

Let IS_P be the decision problem "Does the input satisfy P?" Then IS_P is decidable if it can be solved by a Turing machine, i.e., I can provide some IS_P(x) machine that always accepts if x has property P, and always rejects if x doesn't have property P. If I can't do that, then IS_P is undecidable.

IS_SUM(x, y, z) and IS_DONE_IN_TEN_STEPS(M, x) are decidable properties. Is IS_SUM_TWO_PRIMES(z) decidable? Some analysis shows that our corresponding program will either find a solution, or have x > z and return false. So yes, it is decidable.

Notice there's an asymmetry here. To prove some property is decidable, I just need to find one program that correctly solves it. To prove some property is undecidable, I need to show that any possible program, no matter what it is, doesn't solve it. So with that asymmetry in mind, are there any undecidable problems? Yes, quite a lot. Recall that Turing machines can accept encodings of other TMs as input, meaning we can write a TM that checks properties of Turing machines. And, by Rice's Theorem, almost every nontrivial semantic3 property of Turing machines is undecidable. The conventional way to prove this is to first find a single undecidable property H, and then use that to bootstrap undecidability of other properties.

The canonical and most famous example of an undecidable problem is the Halting problem: "does machine M halt on input i?" It's pretty easy to prove undecidable, and easy to use it to bootstrap other undecidability properties. But again, any nontrivial property is undecidable. Checking that a TM is total is undecidable. Checking that a TM accepts any inputs is undecidable. Checking that a TM solves IS_SUM is undecidable. Etc etc etc.
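The classic proof is a short diagonalization argument, and it fits in a few lines of the same pseudocode. HALTS below is the hypothetical halting decider; the snippet is deliberately not runnable, because the whole point is that no such machine can exist:

PARADOX(M) {
    if HALTS(M, M) { `loop forever` }
    else { return true }
}

Now ask what PARADOX(PARADOX) does. If HALTS(PARADOX, PARADOX) returns true, then PARADOX loops forever on itself, so HALTS was wrong. If it returns false, then PARADOX halts, so HALTS was wrong again. Either way, no total HALTS machine can exist.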
What this doesn't mean in practice

I often see the halting problem misconstrued as "it's impossible to tell if a program will halt before running it." This is wrong. The halting problem says that we cannot create an algorithm that, when applied to an arbitrary program, tells us whether the program will halt or not. It is absolutely possible to tell if many programs will halt or not. It's possible to find entire subcategories of programs that are guaranteed to halt. It's possible to say "a program constructed following constraints XYZ is guaranteed to halt."

The actual consequence of undecidability is more subtle. If we want to know if a program has property P, undecidability tells us:

We will have to spend time and mental effort to determine if it has P.
We may not be successful.

This is subtle because we're so used to living in a world where everything's undecidable that we don't really consider what the counterfactual would be like. In such a world there might be no need for Rust, because "does this C program guarantee memory-safety" is a decidable property. The entire field of formal verification could be unnecessary, as we could just check properties of arbitrary programs directly. We could automatically check if a change in a program preserves all existing behavior. Lots of famous math problems could be solved overnight.

(This to me is a strong "intuitive" argument for why the halting problem is undecidable: a halt detector can be trivially repurposed as a program optimizer / theorem-prover / bcrypt cracker / chess engine. It's too powerful, so we should expect it to be impossible.)

But because we don't live in that world, all of those things are hard problems that take effort and ingenuity to solve, and even then we often fail.

To be pedantic, a TM can't do things like "scrape a webpage" or "render a bitmap", but we're only talking about computational decision problems here. ↩

One notation I've adopted in Logic for Programmers is marking abstract sections of pseudocode with backticks. It's really handy! ↩

Nontrivial meaning "at least one TM has this property and at least one TM doesn't have this property". Semantic meaning "related to whether the TM accepts, rejects, or runs forever on a class of inputs". IS_DONE_IN_TEN_STEPS is not a semantic property, as it doesn't tell us anything about inputs that take longer than ten steps. ↩
More in programming
It has been a year since I set up my System76 Merkaat with Linux Mint. In July of 2024 I migrated from ChromeOS and the Merkaat has been my daily driver on the desktop. A year later I have nothing major to report, which is the point. Despite the occasional unplanned reinstallation I have been enjoying the stability of Linux and just using the PC. This stability finally enabled me to burn bridges with mainstream operating systems and fully embrace Linux and open systems. I'm ready to handle the worst and get back to work. Just a few years ago the frustration of troubleshooting a broken system would have made me seriously consider the switch to a proprietary solution. But a year of regular use, with an ordinary mix of quiet moments and glitches, gave me the confidence to stop worrying and learn to love Linux.
There’s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they stop being useful. Sort of like saying that your company could be much better if you merely adopted software. That’s certainly true, but it’s not a particularly helpful claim.

This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and make the case that the potential of AI agents is equivalent to the potential of this generation of AI. By the end of this writeup, my hope is that you’ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company.

How do agents work?

At its core, using an LLM is an API call that includes a prompt. For example, you might call Anthropic’s /v1/message with a prompt: How should I adopt LLMs in my company? That prompt is used to fill the LLM’s context window, which conditions the model to generate certain kinds of responses. This is the first important thing that agents can do: use an LLM to evaluate a context window and get a result.

Prompt engineering, or context engineering as it’s being called now, is deciding what to put into the context window to best generate the responses you’re looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine if a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely.

However, composing the perfect context window is very time intensive, benefiting from techniques like metaprompting to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to do a good job of providing relevant context. For example, if you prompt, Who is going to become the next mayor of New York City?, then you are in no position to include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you’re asking the question to begin with! This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don’t have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window. This is the second important thing that agents can do: use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response.

However, it’s important to clarify how “tool usage” actually works. An LLM does not actually call a tool. (You can skim OpenAI’s function calling documentation if you want to see a specific real-world example of this.) Instead there is a five-step process to calling tools that can be a bit counter-intuitive:

The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using.
Every API call to the LLM includes that defined set of tools as options that the LLM is allowed to recommend.

The response from the API call with defined functions is either generated text, as any other call to an LLM might provide, or a recommendation to call a specific tool with a specific set of parameters. E.g. an LLM that knows about a get_weather tool, when prompted about the weather in Paris, might return this response:

[{
    "type": "function_call",
    "name": "get_weather",
    "arguments": "{\"location\":\"Paris, France\"}"
}]

The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it’s been used too frequently recently (e.g. rate limiting), it might check if the associated user has permission to use the tool (e.g. maybe it’s a premium-only tool), it might check if the parameters match the user’s role-based permissions as well (e.g. the user can check weather, but only admin users are allowed to check weather in France).

If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call’s context window.

The important thing about this loop is that the LLM itself can still only do one interesting thing: taking a context window and returning generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools’ output to the LLM to generate more context. What’s magical is that LLMs plus tools start to really improve how you can generate context windows. Instead of having to have a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context.

This brings us to the third important thing that agents can do: they manage flow control for tool usage. Let’s think about the different scenarios:

Flow control via rules has concrete rules about how tools can be used. Some examples: it might only allow a given tool to be used once in a given workflow (or a usage limit of a tool for each user, etc); it might require that a human-in-the-loop approves parameters over a certain value (e.g. refunds more than $100 require human approval); it might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails); it might apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data); a tool to escalate to a human representative can only be called after five back and forths with the LLM agent.

Flow control via statistics can use statistics to identify and act on abnormal behavior: if the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human; if a user has used a tool more than 99% of other users, then you might want to reject usage for the rest of the day; it might escalate to a human representative if tool parameters are more similar to prior parameters that required escalation to a human agent.

LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is the mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can always find a way to shift the complexity to a tool to avoid that issue.
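Here’s what that loop can look like as code. This is a minimal sketch rather than any particular vendor’s SDK: call_llm, the message format, and the get_weather tool are all hypothetical stand-ins:

import json

# a hypothetical agent loop: the LLM suggests tool calls,
# and the program decides whether to honor them
TOOLS = {"get_weather": lambda location: f"18C and cloudy in {location}"}

def run_agent(prompt, call_llm, max_tool_calls=5):
    context = [{"role": "user", "content": prompt}]
    calls = 0
    while True:
        # every call offers the defined tools (step 2)
        response = call_llm(context, tools=list(TOOLS))
        if response["type"] == "text":
            # plain generated text: we're done (step 3a)
            return response["content"]
        # the LLM merely *recommended* a tool (step 3b);
        # flow control happens here, in the program (step 4)
        calls += 1
        if calls > max_tool_calls or response["name"] not in TOOLS:
            result = "tool call rejected"
        else:
            args = json.loads(response["arguments"])
            result = TOOLS[response["name"]](**args)  # invoke the tool (step 5)
        # append the tool output and loop back to the LLM
        context.append({"role": "tool", "content": result})

Note that the LLM never executes anything itself; it only returns text or a recommendation, and the surrounding program owns every decision about whether a tool actually runs.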
As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters.

At this point, there is one final important thing that agents do: they are software programs. This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but generally these include: building general context to add to the context window, sometimes thought of as maintaining memory; initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc; periodically initiating workflows at a certain time, such as hourly review of incoming tickets.

Alright, we’ve now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are:

Use an LLM to evaluate a context window and get a result.
Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response.
Manage flow control for tool usage via rules or statistical analysis.
Agents are software programs, and can do anything other software programs do.

Armed with these four capabilities, we’ll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities.

Use Case 1: Customer Support Agent

One of the first scenarios that people often talk about deploying AI agents in is customer support, so let’s start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let’s set a goal of taking over the easiest tier first, with the goal of moving up tiers over time as we show impact. Our approach might be:

Allow tickets (or support chats) to flow into an AI agent.

Provide a variety of tools to the agent to support: retrieving information about the user (recent customer support tickets, account history, account state, and so on); escalating to the next tier of customer support; refunding a purchase (almost certainly implemented as “refund purchase” referencing a specific purchase by the user, rather than “refund amount”, to prevent scenarios where the agent can be fooled into refunding too much); closing the user account on request.

Include customer support guidelines in the context window, describe customer problems, and map those problems to specific tools that should be used to solve the problems.

Flow control rules that ensure all calls escalate to a human if not resolved within a certain time period, after a certain number of back-and-forth exchanges, if they run into an error in the agent, and so on. These rules should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience.

Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to a human) and some degree of random sampling.

Review hourly, then daily, and then weekly metrics of agent performance.

Based on your learnings from the metric reviews, you should set baselines for alerts which require more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodic review.
Note that even when you’ve moved “Customer Support to AI agents”, you still have: a tier of human agents dealing with the most complex calls; humans reviewing the periodic performance statistics; humans performing quality control on AI agent-customer interactions.

You absolutely can replace each of those downstream steps (reviewing performance statistics, etc) with its own AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where over time you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn’t how they work, it’s how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones.

Applied with care, the above series of actions will work successfully. However, it’s important to recognize that this is building an entire software pipeline, and then learning to operate that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent.

Use Case 2: Triaging incoming bug reports

When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it’s potentially quite severe, then you want on-call engineers immediately investigating; if it’s certainly not severe, then you want to triage it in a less urgent process of some sort. It’s interesting to think about how an AI agent might support this triaging workflow. The process might work as follows:

Pipe all created incidents and all created tickets to this agent for review.

Expose these tools to the agent: open an incident; retrieve current incidents; retrieve recently created tickets; retrieve production metrics; retrieve deployment logs; retrieve feature flag change logs; toggle known-safe feature flags; propose merging an incident with another for human approval; propose merging a ticket with another ticket for human approval.

Redundant LLM providers for critical workflows. If the LLM provider’s API is unavailable, retry three times over ten seconds, then resort to using a second model provider (e.g. Anthropic first, if unavailable try OpenAI), and then finally create an incident that the triaging mechanism is unavailable. For critical workflows, we can’t simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues.

Merge duplicates. When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow.

Assess impact. If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it’s high priority, open an incident. If it’s low priority, create a ticket.

Propose cause. Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error. In some cases this will be obvious (e.g.
spiking errors with a traceback of a line of code that changed recently), and in other cases it will only be proximity in time.

Apply known-safe feature flags. Establish an allow list of known-safe feature flags that the system is allowed to activate itself. For example, if there are expensive features that are safe to disable, it could be allowed to disable them, e.g. restricting paginating through deeper search results when under load might be a reasonable tradeoff between stability and user experience.

Defer to humans. At this point, rely on humans to drive incident, or ticket, remediation to completion.

Draft initial incident report. If an incident was opened, the agent should draft an initial incident report including the timeline, related changes, and the human activities taken over the course of the incident. This report should then be finalized by the human involved in the incident.

Run incident review. Your existing incident review process should take the incident review and determine how to modify your systems, including the triaging agent, to increase reliability over time.

Safeguard to reenable feature flags. Since we now have an agent disabling feature flags, we also need to add a periodic check (agent-driven or otherwise) to reenable the “known safe” feature flags if there isn’t an ongoing incident, to avoid accidentally disabling them for long periods of time.

This is another AI agent that will absolutely work as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation to make this flow work includes:

The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent cannot solve the review process because it’s about actively learning what to change based on the incident. You can make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low risk changes (e.g. a copy change) being automatically added to a ticket for human approval. Doing this for more complex, or riskier changes, is possible but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea of “just add agents and things get easy.” Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and is able to represent many customer-specific features solely through that DSL.

Expanding the list of known-safe feature flags to make incidents remediable. To do this widely will require enforcing very specific requirements for how software is developed. Even doing this narrowly will require changes to ensure the known-safe feature flags remain safe as software is developed.

Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, this should decrease. If the agent isn’t driving a reduction in MTTR, then something is rotten in the details of the implementation.

Even a very effective agent doesn’t relieve the responsibility of careful system design.
Even a very effective agent doesn't relieve you of the responsibility for careful system design. Rather, agents are a multiplier on the quality of your system design: done well, agents can make you significantly more effective. Done poorly, they'll only amplify your problems even more widely.

Do AI Agents Represent the Entirety of This Generation of AI?

If you accept my definition that AI agents are any combination of LLMs and software, then I think it's true that there's not much this generation of AI can express that doesn't fit this definition. I'd readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be better. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.

Closing thoughts

LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers and company executives are in the midst of learning how these tools work. Software isn't magic; it's very logical. But what it can accomplish is magical. The same goes for agents and LLMs. The more we can accelerate that learning curve, the better for our industry.
After 18 months of living in Karuizawa, a resort town about an hour away from Tokyo via the Shinkansen, I have bought a house here. This article describes my experience of purchasing a house, and contains tips that are useful both if you're considering buying in Karuizawa specifically, and in Japan more generally. In this article, I'll cover:

- My personal journey to Karuizawa
- Why you might choose to live in Karuizawa
- My process of buying a house
- Tips for potential homebuyers in Japan

From Tokyo to Karuizawa: my own journey

In April 2020 I was living in central Tokyo. Three days after my one-year-old son entered daycare, a state of emergency was declared because of COVID-19. The daycare was closed, so my wife and I took turns looking after him while working remotely from our apartment. The small park next to our apartment was also closed due to the state of emergency, and so my time with my son included such fun activities as walking up and down the stairs, and looking at ants in the parking lot.

After a week or two, we'd had enough. We moved in with my in-laws, who were living in a house in a small commuter town in Saitama. Our lives improved dramatically. The nearby parks were not closed. The grandparents could help look after my son. I could enjoy excellent cycling in the nearby mountains. We kept our apartment in Tokyo, as we didn't know how long we'd stay, but let my brother-in-law use it. It was more spacious than his own, and gave him a better place to work remotely from.

Leaving Tokyo for good

In June of 2020 the state of emergency was lifted, yet we decided not to move back to Tokyo. Before having a kid, Tokyo had been a wonderful place to live. There was always something new to try, and opportunities to make new professional connections abounded. But as a father, my lifestyle had changed dramatically, even before COVID. No longer was I attending multiple events per week, but at most one per month. I basically stopped drinking alcohol, as maximizing the quality of what little sleep I could get was too important. With a stroller, getting around by train was no longer easy. My life had become much more routine: excitement came from watching my son grow, not my own activities.

Though we'd enjoyed living in an area that was suburban bordering on rural, we didn't want to live with my in-laws indefinitely. Still, seeing how useful it was to have them around with a small child, we decided to look for a house to rent nearby. We found a relatively modern one in the next town over, about a 10-minute drive from my in-laws. The rent was about half of what our Tokyo apartment's had been, yet this house was twice as big, and even had a small garden. Living there was a wonderful change of pace. When my second child was born a year later, having the in-laws nearby became even more helpful. But as the COVID restrictions began rolling back, we started to think about where we wanted to settle down permanently.

Choosing Karuizawa

There were some definite downsides to the town we were living in. Practically all our neighbours were retirees. There were few restaurants that were worth going to. The equipment at the playgrounds was falling apart, and there was no sign it would be replaced. The summers were brutally hot, making it unpleasant to be outside. There wasn't anything for kids to do inside. We'd visited Karuizawa before, and it seemed like it had the access to nature we appreciated, while also offering more options for dining out and other cultural activities.
The cooler climate was also a big motivating factor. So in June 2022, we started looking at places. Most of the rental options were incredibly expensive seasonal properties, and so buying a house seemed like the most realistic option. But the market moved extremely quickly, and any property that seemed like a good deal was snatched up before we even had a chance to look at it. After about half a year of searching, we gave up.

We then considered buying a house in Gotenba, a town at the base of Mount Fuji. We'd visited there in the past, and though it didn't seem quite as nice an option as Karuizawa, we were able to find a decent house that was cheap enough that buying it was a relatively small risk. We placed a lowball offer on the property, and were rejected.

Frustrated, I took a look at the rental options for Karuizawa again. This time we saw two houses for rent that were good candidates! We heard from the real estate agent that one of the properties already had someone coming to view it the next afternoon, and so we went the next morning. Visiting the house, it seemed like exactly what we were looking for. We immediately put in a rental application, and moved in the next month.

Why Karuizawa?

I've already covered some of the reasons why we chose Karuizawa, but I'll go into more detail about them, along with other unexpected good points.

Mild summers

According to historical data, the average temperature in Karuizawa in August is 20.8°C. While daytime temperatures get hotter than that, you're unlikely to risk heat stroke by being active outside, something that isn't true for much of Japan. That being said, climate change is inescapable, and summers in Karuizawa are getting hotter than historical data would suggest. This is an issue because while it cools down at night, many of the houses are built only with the cold winters in mind. For instance, despite the house I rented being built in 2019, it only had a single AC unit. This meant that many days in July and August were unbearably hot inside, and I had trouble sleeping at night as it wouldn't cool down until well past midnight.

Cold but cheerful winters

While summers are cooler, this also means winters are cold. The average temperature in January is -3.3°C. However, much like Tokyo, winters here tend to be quite sunny, and there have only been a few heavy snowfalls per year while I've lived here. That's little enough that you can enjoy the snow without having to worry about shovelling it daily.

Proximity to Tokyo

The Shinkansen makes it quicker for me to get into central Tokyo than when I was living in suburban Saitama. It's about a 70-minute ride from Karuizawa station to Tokyo station. While this is not something I'd want to do daily, I now typically go into Tokyo only a couple of times per month, which is reasonable.

Greenery and nature

As a resort community, Karuizawa places a lot of emphasis on greenery. Throughout the town, there are plenty of areas where residential and commercial spaces mingle with forest. Many houses have nice gardens. Shade exists everywhere. Wilder nature is close by, too. While you can do day hikes from Tokyo, they involve an hour-plus on the train, followed by a climb where you're usually surrounded by other people. In Karuizawa, I can walk 15 minutes to a trailhead where I'll see at most a couple of groups per hour.

Cheaper than Tokyo, but not significantly so

Money spent on housing goes a lot further in Karuizawa than in Tokyo.
For the same price as a used Tokyo apartment, you can get a nice used house on a relatively large plot of land in Karuizawa. However, it's hard to find truly affordable housing. For starters, Karuizawa's plots tend to be large by Japanese standards, so even if the per-unit price of the land is cheaper, the overall cost isn't so cheap. For instance, a centrally located 100 tsubo (330 m²) plot in Nakakaruizawa, which is on the small side of what you can find, will currently sell for at least 30 million yen. It may be possible to get a deal on a more remote piece of land, but true steals are hard to come by.

Dining out is another area where it's easy to spend lots of money. Prices are on par with Tokyo, and there are few cheap options, with Sukiya, McDonald's, and a couple of ramen shops being practically the only sub-1,000 yen choices. Groceries, at least, are cheaper. While the Tsuruya grocery store does sell a lot of high-end items, the basics like milk, eggs, vegetables, and chicken are significantly cheaper than in Tokyo or even suburban Saitama.

All this means that many residents of Karuizawa are wealthy compared to Japan's overall population. Sometimes that makes them take for granted that others might struggle financially. For instance, when my son graduated from public daycare, the PTA organized a graduation party. We paid 16,000 yen for myself, my wife, and two children to attend. Since that's equivalent to two eight-hour days at minimum wage, I think it was too expensive. I noticed that some of the children didn't attend, while others came with just one parent, perhaps because of the prohibitive cost.

Liquid housing market

While we were living in suburban Saitama, we considered buying a house, and even looked at a couple of places. It was very affordable, with even mansions (in the traditional English sense of the word) being cheaper than Tokyo mansions (in the Japanese sense of a condominium). But the reason they were so cheap was that there wasn't a secondary market for them. So even if we could get a good deal, we were worried that should we ever want to sell it again, it might not even be possible to give it away.

Karuizawa's real estate market moves fast. This sucks as a buyer but is great for sellers. Furthermore, because it's perceived as a place to have a second home by wealthy Tokyoites, houses that are by our standards very nice are nowhere near the top of the market. This reduces the risk of buying a property and not being able to sell it later.

Great public facilities

If you have young children, Karuizawa offers many free or cheap public facilities. This includes nice playgrounds, children's halls stocked with toys and activities, a beautiful library with a decent selection of English picture books, and a recreation centre with a gym, training room, pool, indoor and outdoor skating rinks, and a curling rink (built for the 1998 Winter Olympics).

A welcoming and international community

Some areas of Japan have a reputation for being insular and not friendly toward newcomers. Perhaps because much of Karuizawa's population has moved here from elsewhere, the people I've met have been quite welcoming. I'm also always happy to meet new residents, and pass on the helpful tips I've picked up. In addition, I've discovered that many residents have some sort of international background. While non-Japanese residents are very much the minority, a large percentage of the Japanese parents I've spoken with have either studied or worked abroad.
There is also an International Association of Karuizawa, whose members include many long-term non-Japanese residents. Their Facebook group makes it easy to ask for advice in English.

You can get around without a car

Most days I don't use a car, and get around by walking or bicycling. The lack of heavy snow makes it possible to do this year round (though I'm one of the few people who still uses a bicycle when it is -10°C out). I'm able to get around without a car mostly because I live in a relatively central area, so this isn't feasible for every resident. It can be quite helpful, though, as traffic becomes absolutely horrible during the peak seasons. For instance, the house I was renting was a kilometer away from the one I bought. It was less than a 5-minute drive normally, but during Golden Week it took 30 minutes. That being said, it's practically required to use a car sometimes. For instance, there's a great pediatrician, but they're only accessible by car. Similarly, I use our car to take my kids to the pool, pick up things from the home centre, and so on.

My process for buying a house

After a year of renting in Karuizawa, we decided we wanted to continue living there indefinitely. Continuing to rent the same house wasn't an option: the owners were temporarily abroad, and so we had a fixed-term contract. While we could have looked for another place to rent, we figured we'd be settling down, and so wanted to buy a house.

Starting the search

On September 1st, 2024, we started our search and immediately found our first candidate: a recently built house near Asama Fureai Park. That park is the nicest one for young children in Karuizawa, as it has a good playground but is never too crowded. The property was listed with Royal Resort, which has the most listings for Karuizawa. Rather than contacting Royal Resort directly, though, we asked Masaru Takeyama of Resort Innovation to do so on our behalf. In Japan, there is a buyer's agent and a seller's agent. Often the seller's agent wants to act as the buyer's agent as well, as it earns them double the commission, despite the obvious conflict of interest. Of the agents we'd contacted previously, Masaru seemed the most trustworthy. I may have also been a bit biased, as he speaks English, whereas the other agents only spoke Japanese.

He set up a viewing for us, but we discovered it wasn't quite our ideal house. While the house was well built, the layout wasn't our preference. It was also on a road with many large, noisy trucks, so it was kind of unpleasant to be outside.

Making the first offer

Over the next three months, we'd look at new listings online practically every day. I set up automated alerts for when the websites of different real estate agents were updated. We considered dozens of properties, and actually visited four plots and three used houses. None of them were a good match.

On December 3rd, 2024, I found a great-looking property. It was being advertised as 5.1 km from Nakakaruizawa station, which would have made it inconvenient, but it was also listed as 中部小学校近隣戸建 (Chubu Elementary Neighborhood Detached House). As the elementary school isn't that far from the station, I assumed there was an error. The "house" was more of an old cabin, but it was priced cheaply for just the land itself. I sent Masaru an email asking him to give us the address. Meanwhile, my wife and I had gotten pretty good at identifying the locations of properties based on clues in the listing.
The startup of a fellow Karuizawa international resident has a good tool for searching lots. My wife managed to identify this one, and so the following morning we went to look at it. It wasn't perfect, but it was a great location and a good deal, so before Masaru had even responded to us with the address, we sent another email saying we'd like to buy it.

Masaru responded later that morning that he had a concern: the access to the house was a private road divided into two parcels, one owned by a real estate company, and the other by a pair of individuals. We'd need to negotiate with those parties about things like usage rights, waterworks, and so on. While he wasn't worried about dealing with the real estate company, as it should just be a question of compensating them, the individuals owning the other parcel might theoretically be unwilling to negotiate. Based on his analysis, the following day (December 5th) we submitted a non-binding purchase application, with the condition that the road issue be resolved.

That evening Masaru told us that he had discovered that the seller was a woman in her nineties. Because things like dementia are common among people that old, he added another condition: that a judicial scrivener testify to her mental capacity to enter into a contract. He warned that if we didn't do that, and the seller was later found to be mentally unfit, the sale could be reversed, even if we had already proceeded with the construction of a new house. Given the cost of building a new home, this seemed like a prudent measure to take.

A few days later we heard that the seller's son was discussing the sale with his mother, and that they'd be having a family meeting the following weekend to decide whether to make the sale. To try to encourage them to sell to us, my wife wrote a letter (in Japanese) to them, and attached some photos of our family in Karuizawa.

On December 16th, we received word that there was actually another potential buyer for the property who'd made an identical offer. In Japan, once someone submits a purchase application, it is normally handled on a first-come, first-served basis. However, I noted that the seller didn't mark the property as "under negotiation" on their website. I suspect we were the first to make an offer, as we did it literally the day after it was published. Apparently the seller's son wanted to do business with us, but the daughter preferred the other family. We were not told why the daughter wanted to go with the other potential buyers, but it may have been because of our request to have the seller's mental capacity tested. Dropping that requirement seemed too risky, though.

On December 18th, we were informed that the seller had chosen the other buyer. This was quite a disappointing turn of events. We'd initially thought it was a sure thing, because we were so fast to submit the offer and were giving them what they'd asked for. We'd even talked with a couple of home builders already. Furthermore, as winter had arrived, it seemed like the real estate market had also frozen. Whereas we'd previously seen new properties appear almost daily, now we spotted one or two a week at most.

Making a successful offer

Over the holidays, we looked at a couple more plots. On January 6th, we identified a used house in Nakakaruizawa that seemed to meet our criteria. It was by no means perfect, but close enough to be a candidate, so on January 10th we visited it. The house was pretty much as expected. Our one concern was the heating situation.
It had previously had central heating based on a kerosene boiler, but that had been replaced with central air conditioning. While visiting the house, we turned it on, but it didn't seem to make much difference. After some back and forth, we got the seller to run the AC constantly for a couple of days, and then on the morning of January 14th, we visited the house again. It was a grey and cold day, with little sunlight: basically the worst conditions possible for keeping the house warm, which I suppose was a good test. It was about 1°C outside and 10°C inside. We heard that the house had previously been rented out, and that the prior tenants had needed to use extra kerosene heaters to stay warm. This wasn't ideal, but we also saw that with some renovations, we could improve the insulation.

We considered making a lowball offer, but Masaru thought it was unlikely to be accepted. Instead, on January 17th, we made an offer at the asking price, conditional on it passing a home inspection and on us getting an estimate for renovation costs. We heard that, once again, there was another potential buyer. They were "preparing" to submit an offer, but hadn't done so yet. This time, the seller followed the standard process of considering our offer first, and accepting it in a non-binding manner. We learned that the seller was itself a real estate company, whose main business seemed to be developing hotels and golf courses. The sale of this property was a relatively minor deal for them, and their main concern was that everything went smoothly.

On January 23rd, we had a housing inspection carried out. The inspector found several issues, but they were all cosmetic or otherwise minor, so we said we'd proceed with the contract.

Signing the contract

The seller's agent didn't appear to be in a hurry to finalize the contract. It took a while for them to present us with the paperwork, and after that there were some minor changes to it, so it wasn't until March 19th that we finally signed. One reason for the delay was that we'd suggested signing the contract electronically, something the seller had never done before. Signing a contract electronically was not only simpler from our perspective, but also avoided the need to affix a revenue stamp to the contract, saving some money. But even though Masaru was experienced with electronic contracts, it seemed to take the seller some time to confirm that everything was okay.

When we signed the contract, we agreed to immediately make a 10 percent down payment, with the outstanding amount to be transferred by May 16th. We'd already gotten pre-approval on a housing loan, but needed the extra time for the final approval.

Obtaining the loan

On January 22nd, we visited Hachijuni Bank, which is headquartered in Nagano and has a branch in Karuizawa, and started the loan pre-approval process. This involved submitting a fair amount of paperwork, especially as I was the director of a company rather than a regular employee. The whole process took about two weeks. In parallel, we started the pre-approval process with PayPay Bank. That application involved minimal paperwork, and we were able to quickly get pre-approved by them too.

To receive the actual approval for the loan, though, we needed to have already signed the purchase contract, so we couldn't begin until March 19th. This time, the situation was reversed, and PayPay Bank required much more documentation than Hachijuni Bank. On April 9th, we met again with Hachijuni Bank.
There was a problem with our application: my middle names hadn't been properly entered into the contract, and so we'd have to go through the approval process again. The representative said he'd make sure they reviewed it quickly, because it was his fault. That evening, we got the result of our application to PayPay Bank: rejected! No reason was given, but as they offer quite low rates, I'm guessing they only accept the lowest-risk loans. Maybe the reason was the house itself, maybe it was that I was the director of a small private company, maybe it was something else. While the Hachijuni Bank representative seemed positive about our application, I was worried. On April 15th we got the result from Hachijuni Bank: approved! Two days later we signed the paperwork with the bank and scheduled the handover for April 30th.

Toilet troubles

When the previous tenants had moved out of the house, the seller had decided to drain the water. Karuizawa temperatures are cold enough that if the house isn't being heated, the pipes can freeze and burst. This meant that when we had the housing inspection performed, the inspector couldn't confirm anything about the plumbing. Given that by mid-April temperatures were no longer below zero, we asked that the seller put the water back in. They agreed, and on April 23rd we heard that there was a problem with the toilets: some sort of leak. Likely some of the rubber sealing had dried out while the water had been removed. The seller offered to replace the toilets, or to pay us the cost of replacing them with equivalent ones. My understanding was that because they hadn't declared any issues with the toilets in the original purchase agreement, they were obligated to remedy this. My wife and I had already talked about upgrading the toilets, but I hadn't been convinced it'd be worth it. With the seller offering to pay two-thirds of the cost of the more premium toilets we wanted, though, it seemed like a stroke of luck.

Handover

On April 30th, the handover went smoothly. A representative from the seller's company met with myself, the real estate agents, and a judicial scrivener. Technically I didn't need to be there for it, but as the representative had come from Hiroshima, I thought I should be present just in case anything happened. Nothing went awry, however, and they gave me the keys and some omiyage (a courtesy gift).

Renovations

We had already decided to do some renovations before moving in. With the toilets not working, that became an immediate necessity. One of the challenges of buying a used house is deciding how extensively to renovate. We settled on doing the minimum: replacing the toilets, putting up a wall to split the kids' bedroom in two (it already had two separate doors and was designed to be split), switching the lights in the bedroom from fluorescent tubes to LEDs, and adding inner windows (内窓, uchi mado) to improve the insulation. We'd kicked off the process shortly after completing the home inspection, and had already had contractors visit to give estimates in advance. Still, we weren't able to finalize any plans until we'd decided on the exact handover date. After some negotiating, we got an agreement to wrap up the renovations by the beginning of June, and set a move-in date of June 5th.

Moving in

We managed to move in as scheduled, almost half a year after we first saw the house, and 10 months after we began our initial search. We weren't exactly in a rush with the whole process, but it did take a lot of our time.
Tips for potential homebuyers in Japan

Here's what I'd recommend if you're looking to buy a house in Japan.

Don't rely on real estate agents to find properties for you

My impression of real estate agents in Karuizawa is that they're quite passive. I think this is a combination of the market being hot, and the fact that many inquiries they get are from people who dream about living here but will never make it a reality. I'd suggest being proactive about the search yourself. All of the best properties we found were ones we discovered ourselves, not listings an agent sent us.

Find an agent you can trust

The main value I got out of using a real estate agent came not from him introducing specific properties, but from him pointing out potential issues with them. Checking the ownership of the road leading up to the property, or whether a contract could be reversed if the signer was years later deemed to be mentally incompetent, weren't issues that had ever even occurred to me. Every property I considered purchasing had some non-obvious problems. While there are some legal obligations placed upon the buyer's agent, my understanding is that those don't extend to every possible situation. What's more, real estate seems to attract unscrupulous people, and some agents aren't averse to behaving unethically or even illegally if it helps them close more deals. Because of this, I think the most important quality in an agent is trustworthiness. That can be hard to evaluate, but go with your gut.

You don't need your company to be profitable three years in a row

I'd previously heard that it's challenging to get a loan as the director of a company, and that the last three years of a business needed to be profitable to receive approval. That made me apprehensive, as TokyoDev was only incorporated in 2022, and 2024 was an unprofitable year for us, as it was the first year the number of hires decreased. However, when Hachijuni Bank reviewed TokyoDev's finances, they said it was a great business. I was the sole employee, and the business was only unprofitable because I had set my director's compensation too aggressively. We'd had previous profitable years, and the company didn't have any debts. In other words, one bad year for your company isn't going to tank your chances of getting a loan.

Go through the final approval process with multiple banks

Some will advise you not to go through the final approval process with multiple banks at the same time. Since there's a central database that allows banks to see that you're doing this, it apparently increases your risk of rejection. If you're only applying at a few banks, though, the risk remains fairly low. I guess PayPay Bank could theoretically have rejected us for this reason, but I doubt it. If we had initially applied only with them, however, and had waited to be rejected before applying with Hachijuni Bank, we would have risked not having the final approval go through before the payment deadline.

Get a housing inspection

If you're buying a used house of any significant value, I highly recommend getting an inspection. We used Sakura Home Inspection, the biggest inspection company in Japan. While the base price for the inspection was only 66,000 yen, we ended up paying about 240,000 yen due to additional options, the size of the house, and transportation fees.
While that's not exactly cheap, it was less than one percent of the total purchase price, and seemed worth it to mitigate the risk of serious issues appearing after the purchase.

Rent locally before buying

Our plan of first renting, then buying worked out quite well for us. Not only did it give me confidence that Karuizawa was somewhere I wanted to live for the long haul, it also made visiting potential properties much easier: I lived a 10-minute bicycle ride away from them, instead of a 3-hour drive.

If you're considering buying a house in Karuizawa…

Please get in touch with me! I'm happy to answer any questions you might have. In my previous article on coworking spaces in Karuizawa, I invited people to get in touch, and have already made several new connections through it. I'd love to continue to grow my community, and to help transform Karuizawa from a resort town into one that's focused on its full-time residents.
There's a video on YouTube from "Technology Connections" (whom I'd never heard of or watched until now) called Algorithms are breaking how we think. I learned of this video from Gedeon Maheux of The Iconfactory fame. Speaking in the context of why they made Tapestry, he said the ideas in this video would be their manifesto. So I gave it a watch.

Generally speaking, the video asks: does anyone care to have a self-directed experience online, or with a computer more generally?

I'm not sure how infrequently we're actually deciding for ourselves these days [how we decide what we want to see, watch, and do on the internet]

Ironically, we spend more time than ever on computing devices, but less time than ever curating our own experiences with them. Which, again ironically, is the inverse of many things in our lives. Generally speaking, the more time we spend with something, the more we invest in making it our own, customizing it to our own idiosyncrasies.

But how much time do you spend curating, customizing, and personalizing your digital experience? (If you're reading this in an RSS reader, high five!) I'm not talking about "I liked that post, or saved that video, so the algorithm is personalizing things for me". Do you know what you'd like to get more of? Do you know where to find it? Do you even ask yourself these questions?

"That sounds like too much work," you might say. And you're right, it is work. As the guy in the video says:

I'm one of those weirdos who think the most rewarding things in life take effort

Me too.
Overanalyzing a minor quirk of Espressif's reset circuit

The mystery

In the previous article, I briefly mentioned a slight difference between the ESP-Prog and the reproduced circuit, when it comes to EN: Focusing on EN, it looks like the voltage level goes back to 3.3V much faster on the ESP-Prog than on the breadboard circuit. The grid is horizontally spaced at 2ms, so …