New blogpost! Planner programming blows my mind, Patreon here. Next essay out should be the graph project. The post is about Picat and more specifically, planner programming. Planning, very roughly, is:

- You provide a start state, a set of goals, and a set of state transitions (actions).
- The planner uses the actions to generate a state space.
- The planner reports a sequence of actions that reaches a goal, if any.

A lot of formal methods stuff like TLA+ is verified with model checking, which very roughly is:

- You provide initial states, a set of properties, and a set of state transitions.
- The model checker uses the transitions to generate a state space.
- The checker reports a sequence of actions that violates a property, if any.

These look extremely similar, and in fact they are mathematically the same thing. Any model checker can find a plan that reaches goal P by writing !P as a property to check. Then the "violation" returns a trace that ends with P. That's how you can use TLA+ to solve...
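To make that correspondence concrete, here is a minimal sketch of mine (not from the post): a breadth-first "model checker" over a toy state space. Asking it to violate the property "the state never equals the goal" makes the counterexample trace double as a plan.

from collections import deque

def check(init, actions, bad):
    """Return a trace ending in a state where `bad` holds, if any."""
    seen, queue = {init}, deque([(init, [])])
    while queue:
        state, trace = queue.popleft()
        if bad(state):                  # a "property violation" is exactly a plan
            return trace
        for name, step in actions:
            nxt = step(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [name]))
    return None

# Toy domain: a counter we can increment or double; the "goal" is 10,
# so the property to violate is "the counter never equals 10".
actions = [("inc", lambda s: s + 1), ("dbl", lambda s: s * 2)]
print(check(1, actions, bad=lambda s: s == 10))  # ['inc', 'dbl', 'inc', 'dbl']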
a year ago


More from Computer Things

Modeling Awkward Social Situations with TLA+

You're walking down the street and need to pass someone going the opposite way. You take a step left, but they're thinking the same thing and take a step to their right, aka your left. You're still blocking each other. Then you take a step to the right, and they take a step to their left, and you're back to where you started. I've heard this called "walkwarding".

Let's model this in TLA+. TLA+ is a formal methods tool for finding bugs in complex software designs, most often involving concurrency. Two people trying to get past each other just also happens to be a concurrent system. A gentler introduction to TLA+'s capabilities is here, an in-depth guide teaching the language is here.

The spec

---- MODULE walkward ----
EXTENDS Integers

VARIABLES pos
vars == <<pos>>

Double equals defines a new operator, single equals is an equality check. <<pos>> is a sequence, aka array.

you == "you"
me == "me"
People == {you, me}
MaxPlace == 4
left == 0
right == 1

I've gotten into the habit of assigning string "symbols" to operators so that the compiler complains if I misspelled something. left and right are numbers so we can shift position with right - pos.

direction == [you |-> 1, me |-> -1]
goal == [you |-> MaxPlace, me |-> 1]

Init ==
  \* left-right, forward-backward
  pos = [you |-> [lr |-> left, fb |-> 1], me |-> [lr |-> left, fb |-> MaxPlace]]

direction, goal, and pos are "records", or hash tables with string keys. I can get my left-right position with pos.me.lr or pos["me"]["lr"] (or pos[me].lr, as me == "me").

Juke(person) == pos' = [pos EXCEPT ![person].lr = right - @]

TLA+ breaks the world into a sequence of steps. In each step, pos is the value of pos in the current step and pos' is the value in the next step. The main outcome of this semantics is that we "assign" a new value to pos by declaring pos' equal to something. But the semantics also open up lots of cool tricks, like swapping two values with x' = y /\ y' = x.

TLA+ is a little weird about updating functions. To set f[x] = 3, you gotta write f' = [f EXCEPT ![x] = 3]. To make things a little easier, the rhs of a function update can contain @ for the old value. ![me].lr = right - @ is the same as right - pos[me].lr, so it swaps left and right. ("Juke" comes from here)

Move(person) ==
  LET new_pos == [pos[person] EXCEPT !.fb = @ + direction[person]]
  IN
    /\ pos[person].fb # goal[person]
    /\ \A p \in People: pos[p] # new_pos
    /\ pos' = [pos EXCEPT ![person] = new_pos]

The EXCEPT syntax can be used in regular definitions, too. This lets someone move one step in their goal direction unless they are at the goal or someone is already in that space. /\ means "and".

Next ==
  \E p \in People:
    \/ Move(p)
    \/ Juke(p)

I really like how TLA+ represents concurrency: "In each step, there is a person who either moves or jukes." It can take a few uses to really wrap your head around, but it can express extraordinarily complicated distributed systems.

Spec == Init /\ [][Next]_vars
Liveness == <>(pos[me].fb = goal[me])
====

Spec is our specification: we start at Init and take a Next step every step. Liveness is the generic term for "something good is guaranteed to happen", see here for more. <> means "eventually", so Liveness means "eventually my forward-backward position will be my goal". I could extend it to "both of us eventually reach our goal" but I think this is good enough for a demo.

Checking the spec

Four years ago, everybody in TLA+ used the toolbox.
Now the community has collectively shifted over to using the VSCode extension.1 VSCode requires we write a configuration file, which I will call walkward.cfg.

SPECIFICATION Spec
PROPERTY Liveness

I then check the model with the VSCode command TLA+: Check model with TLC. Unsurprisingly, it finds an error: The reason it fails is "stuttering": I can get one step away from my goal and then just stop moving forever. We say the spec is unfair: it does not guarantee that if progress is always possible, progress will be made. If I want the spec to always make progress, I have to make some of the steps weakly fair.

+ Fairness == WF_vars(Next)
- Spec == Init /\ [][Next]_vars
+ Spec == Init /\ [][Next]_vars /\ Fairness

Now the spec is weakly fair, so someone will always do something. New error:

\* First six steps cut
7: <Move("me")>
pos = [you |-> [lr |-> 0, fb |-> 4], me |-> [lr |-> 1, fb |-> 2]]
8: <Juke("me")>
pos = [you |-> [lr |-> 0, fb |-> 4], me |-> [lr |-> 0, fb |-> 2]]
9: <Juke("me")> (back to state 7)

In this failure, I've successfully gotten past you, and then spend the rest of my life endlessly juking back and forth. The Next step keeps happening, so weak fairness is satisfied. What I actually want is for my Move and my Juke to each be weakly fair independently of the other.

- Fairness == WF_vars(Next)
+ Fairness == WF_vars(Move(me)) /\ WF_vars(Juke(me))

If my liveness property also specified that you reached your goal, I could instead write \A p \in People: WF_vars(Move(p)) etc. I could also swap the \A with a \E to mean at least one of us is guaranteed to have fair actions, but not necessarily both of us. New error:

3: <Move("me")>
pos = [you |-> [lr |-> 0, fb |-> 2], me |-> [lr |-> 0, fb |-> 3]]
4: <Juke("you")>
pos = [you |-> [lr |-> 1, fb |-> 2], me |-> [lr |-> 0, fb |-> 3]]
5: <Juke("me")>
pos = [you |-> [lr |-> 1, fb |-> 2], me |-> [lr |-> 1, fb |-> 3]]
6: <Juke("me")>
pos = [you |-> [lr |-> 1, fb |-> 2], me |-> [lr |-> 0, fb |-> 3]]
7: <Juke("you")> (back to state 3)

Now we're getting somewhere! This is the original walkwarding situation we wanted to capture. We're in each other's way, then you juke, but before either of us can move I juke, then we both juke back. We can repeat this forever, trapped in a social hell.

Wait, but doesn't WF(Move(me)) guarantee I will eventually move? Yes, but only if a move is permanently available. In this case, it's not permanently available, because every couple of steps it's made temporarily unavailable.

How do I fix this? I can't add a rule saying that we only juke if we're blocked, because the whole point of walkwarding is that we're not coordinated. In the real world, walkwarding can go on for agonizing seconds. What I can do instead is say that Liveness holds as long as Move is strongly fair. Unlike weak fairness, strong fairness guarantees something happens if it keeps becoming possible, even with interruptions.

Liveness ==
+  SF_vars(Move(me)) =>
    <>(pos[me].fb = goal[me])

This makes the spec pass. Even if we weave back and forth for five minutes, as long as we eventually pass each other, I will reach my goal. Note we could also do this by making Move in Fairness strongly fair, which is preferable if we have a lot of different liveness properties to check.

A small exercise for the reader

There is a presumed invariant that is violated. Identify what it is, write it as a property in TLA+, and show the spec violates it. Then fix it. Answer (in rot13): Gur vainevnag vf "ab gjb crbcyr ner va gur rknpg fnzr ybpngvba". Zbir thnenagrrf guvf ohg Whxr qbrf abg.

More TLA+ Exercises

I've started work on an exercises repo. There's only a handful of specific problems now but I'm planning on adding more over the summer.

learntla is still on the toolbox, but I'm hoping to get it all moved over this summer. ↩
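If you want to poke at the state space without installing TLC, here is a toy Python enumeration of the spec's reachable states — an illustration of the Init/Move/Juke definitions above only; it is not a TLA+ tool and checks no temporal properties:

from collections import deque

MAXPLACE = 4
DIRECTION = {"you": 1, "me": -1}
GOAL = {"you": MAXPLACE, "me": 1}

def successors(state):
    pos = dict(state)
    for p in pos:
        lr, fb = pos[p]
        # Juke: flip left/right (right - @, with left == 0, right == 1)
        yield frozenset({**pos, p: (1 - lr, fb)}.items())
        # Move: step toward the goal unless at it or the square is taken
        new = (lr, fb + DIRECTION[p])
        if fb != GOAL[p] and new not in pos.values():
            yield frozenset({**pos, p: new}.items())

init = frozenset({"you": (0, 1), "me": (0, MAXPLACE)}.items())
seen, queue = {init}, deque([init])
while queue:
    for nxt in successors(queue.popleft()):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)
print(len(seen), "reachable states")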

yesterday 2 votes
Write the most clever code you possibly can

I started writing this early last week but Real Life Stuff happened and now you're getting the first draft late this week. Warning, unedited thoughts ahead!

New Logic for Programmers release! v0.9 is out! This is a big release, with a new cover design, several rewritten chapters, online code samples and much more. See the full release notes at the changelog page, and get the book here!

Write the cleverest code you possibly can

There are millions of articles online about how programmers should not write "clever" code, and instead write simple, maintainable code that everybody understands. Sometimes the example of "clever" code looks like this (src):

# Python
p=n=1
exec("p*=n*n;n+=1;"*~-int(input()))
print(p%n)

This is code-golfing, the sport of writing the most concise code possible. Obviously you shouldn't run this in production for the same reason you shouldn't eat dinner off a Rembrandt. Other times the example looks like this:

def is_prime(x):
    if x == 1:
        return False
    return all([x%n != 0 for n in range(2, x)])

This is "clever" because it uses a single list comprehension, as opposed to a "simple" for loop. Yes, "list comprehensions are too clever" is something I've read in one of these articles. I've also talked to people who think that datatypes besides lists and hashmaps are too clever to use, that most optimizations are too clever to bother with, and even that functions and classes are too clever and code should be a linear script.1

Clever code is anything using features or domain concepts we don't understand. Something that seems unbearably clever to me might be utterly mundane for you, and vice versa. How do we make something utterly mundane? By using it and working at the boundaries of our skills. Almost everything I'm "good at" comes from banging my head against it more than is healthy. That suggests a really good reason to write clever code: it's an excellent form of purposeful practice. Writing clever code forces us to code outside of our comfort zone, developing our skills as software engineers.

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you [will get excellent debugging practice at exactly the right level required to push your skills as a software engineer] — Brian Kernighan, probably

There are other benefits, too, but first let's kill the elephant in the room:2

Don't commit clever code

I am proposing writing clever code as a means of practice. Work is a job with coworkers who will not appreciate it if your code is too clever. Similarly, don't use too many innovative technologies. Don't put anything in production you are uncomfortable with. We can still responsibly write clever code at work, though:

- Solve a problem in both a simple and a clever way, and then only commit the simple way. This works well for small scale problems where trying the "clever way" only takes a few minutes.
- Write our personal tools cleverly. I'm a big believer of the idea that most programmers would benefit from writing more scripts and support code customized to their particular work environment. This is a great place to practice new techniques, languages, etc.
- If clever code is absolutely the best way to solve a problem, then commit it with extensive documentation explaining how it works and why it's preferable to simpler solutions. Bonus: this potentially helps the whole team upskill.

Writing clever code...
...teaches simple solutions

Usually, code that's called too clever composes several powerful features together — the "not a single list comprehension or function" people are the exception. Josh Comeau's "don't write clever code" article gives this example of "too clever":

const extractDataFromResponse = (response) => {
  const [Component, props] = response;
  const resultsEntries = Object.entries({ Component, props });
  const assignIfValueTruthy = (o, [k, v]) => (v ? { ...o, [k]: v } : o);
  return resultsEntries.reduce(assignIfValueTruthy, {});
}

What makes this "clever"? I count seven language features composed together: entries, argument unpacking, implicit objects, splats, ternaries, higher-order functions, and reductions. Would code that used only one or two of these features still be "clever"? I don't think so. These features exist for a reason, and oftentimes they make code simpler than not using them.

We can, of course, learn these features one at a time. Writing the clever version (but not committing it) gives us practice with all seven at once and also with how they compose together. That knowledge comes in handy when we want to apply a single one of the ideas. I've recently had to do a bit of pandas for a project. Whenever I have to do a new analysis, I try to write it as a single chain of transformations, and then as a more balanced set of updates (see the pandas sketch below).

...helps us master concepts

Even if the composite parts of a "clever" solution aren't by themselves useful, it still makes us better at the overall language, and that's inherently valuable. A few years ago I wrote Crimes with Python's Pattern Matching. It involves writing horrible code like this:

from abc import ABC

class NotIterable(ABC):
    @classmethod
    def __subclasshook__(cls, C):
        return not hasattr(C, "__iter__")

def f(x):
    match x:
        case NotIterable():
            print(f"{x} is not iterable")
        case _:
            print(f"{x} is iterable")

if __name__ == "__main__":
    f(10)
    f("string")
    f([1, 2, 3])

This composes Python match statements, which are broadly useful, and abstract base classes, which are incredibly niche. But even if I never use ABCs in real production code, it helped me understand Python's match semantics and Method Resolution Order better.

...prepares us for necessity

Sometimes the clever way is the only way. Maybe we need something faster than the simplest solution. Maybe we are working with constrained tools or frameworks that demand cleverness. Peter Norvig argued that design patterns compensate for missing language features. I'd argue that cleverness is another means of compensating: if our tools don't have an easy way to do something, we need to find a clever way.

You see this a lot in formal methods like TLA+. Need to check a hyperproperty? Cast your state space to a directed graph. Need to compose ten specifications together? Combine refinements with state machines. Most difficult problems have a "clever" solution. The real problem is that clever solutions have a skill floor. If normal use of the tool is at difficulty 3 out of 10, then basic clever solutions are at 5 out of 10, and it's hard to jump those two steps in the moment you need the cleverness. But if you've practiced with writing overly clever code, you're used to working at a 7 out of 10 level in short bursts, and then you can "drop down" to 5/10. I don't know if that makes too much sense, but I see it happen a lot in practice.
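Here is the kind of pandas exercise I mean — the dataset and column names are invented for illustration, but the two styles are real: a single "clever" chain, then the same analysis as a balanced set of updates.

import pandas as pd

df = pd.DataFrame({"team": ["a", "a", "b"], "score": [3, 5, 4]})

# One "clever" chain of transformations...
chained = (
    df[df["score"] > 3]
    .groupby("team", as_index=False)["score"]
    .mean()
    .rename(columns={"score": "avg_score"})
)

# ...and the same analysis as a balanced set of updates.
filtered = df[df["score"] > 3]
grouped = filtered.groupby("team", as_index=False)["score"].mean()
balanced = grouped.rename(columns={"score": "avg_score"})

assert chained.equals(balanced)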
...builds comradery

On a few occasions, after getting a pull request merged, I pulled the reviewer over and said "check out this horrible way of doing the same thing". I find that as long as people know they're not going to be subjected to a clever solution in production, they enjoy seeing it!

Next week's newsletter will probably also be late, after that we should be back to a regular schedule for the rest of the summer.

1. Mostly grad students outside of CS who have to write scripts to do research. And in more than one data scientist. I think it's correlated with using Jupyter. ↩
2. If I don't put this at the beginning, I'll get a bajillion responses like "your team will hate you" ↩
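And a parting sketch of the "solve it both ways" advice — a toy example of mine, not from the newsletter: the plain loop you'd commit, next to the itertools one-liner you'd write for practice.

from itertools import accumulate

def running_max_simple(xs):
    # The version to commit: plain loop, no surprises.
    out, best = [], float("-inf")
    for x in xs:
        best = max(best, x)
        out.append(best)
    return out

def running_max_clever(xs):
    # The version to practice on: accumulate with a binary function.
    return list(accumulate(xs, max))

assert running_max_simple([3, 1, 4, 1, 5]) == running_max_clever([3, 1, 4, 1, 5])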

a week ago 8 votes
Requirements change until they don't

Recently I got a question on formal methods1: how does it help to mathematically model systems when the system requirements are constantly changing? It doesn't make sense to spend a lot of time proving a design works, and then deliver the product and find out it's not at all what the client needs. As the saying goes, the hard part is "building the right thing", not "building the thing right".

One possible response: "why write tests?" You shouldn't write tests, especially lots of unit tests ahead of time, if you might just throw them all away when the requirements change. This is a bad response because we all know the difference between writing tests and formal methods: testing is easy and FM is hard. Testing requires low cost for moderate correctness, FM requires high(ish) cost for high correctness. And when requirements are constantly changing, "high(ish) cost" isn't affordable and "high correctness" isn't worthwhile, because a kinda-okay solution that solves a customer's problem is infinitely better than a solid solution that doesn't.

But eventually you get something that solves the problem, and what then? Most of us don't work for Google; we can't axe features and products on a whim. If the client is happy with your solution, you are expected to support it. It should work when your customers run into new edge cases, or migrate all their computers to the next OS version, or expand into a market with shoddy internet. It should work when 10x as many customers are using 10x as many features. It should work when you add new features that come into conflict. And just as importantly, it should never stop solving their problem.

Canonical example: your feature involves processing requested tasks synchronously. At scale, this doesn't work, so to improve latency you make it asynchronous. Now it's eventually consistent, but your customers were depending on it being always consistent. Now it no longer does what they need, and has stopped solving their problems. Every successful requirement met spawns a new requirement: "keep this working". That requirement is permanent, or close enough to decide our long-term strategy. It takes active investment to keep a feature behaving the same as the world around it changes. (Is this all a pretentious way of saying "software maintenance is hard"? Maybe!)

Phase changes

In physics there's a concept of a phase transition. To raise the temperature of a gram of liquid water by 1°C, you have to add 4.184 joules of energy.2 This continues until you raise it to 100°C, then it stops. After you've added two thousand joules to that gram, it suddenly turns into steam. The energy of the system changes continuously but the form, or phase, changes discretely.

Software isn't physics but the idea works as a metaphor. A certain architecture handles a certain level of load, and past that you need a new architecture. Or a bunch of similar features are independently hardcoded until the system becomes too messy to understand, and you remodel the internals into something unified and extendable. Etc etc etc. It doesn't have to be a totally discrete phase transition, but there's definitely a "before" and "after" in the system form. Phase changes tend to lead to more intricacy/complexity in the system, meaning it's likely that a phase change will introduce new bugs into existing behaviors. Take the synchronous vs asynchronous case.
A very simple toy model of synchronous updates would be Set(key, val), which updates db[key] to val.3 A model of asynchronous updates would be AsyncSet(key, val, priority), which adds a (key, val, priority, server_time()) tuple to a tasks set; another process then asynchronously pulls a tuple (ordered by highest priority, then earliest time) and calls Set(key, val). Here are some properties the client may need preserved as a requirement:

- If AsyncSet(key, val, _, _) is called, then eventually db[key] = val (possibly violated if higher-priority tasks keep coming in).
- If someone calls AsyncSet(key1, val1, low) and then AsyncSet(key2, val2, low), they should see the first update and then the second (linearizability, possibly violated if the requests go to different servers with different clock times).
- If someone calls AsyncSet(key, val, _) and immediately reads db[key] they should get val (obviously violated, though the client may accept a slightly weaker property).

If the new system doesn't satisfy an existing customer requirement, it's prudent to fix the bug before releasing the new system. The customer doesn't notice or care that your system underwent a phase change. They'll just see that one day your product solves their problems, and the next day it suddenly doesn't.

This is one of the most common applications of formal methods. Both of those systems, and every one of those properties, is formally specifiable in a specification language. We can then automatically check that the new system satisfies the existing properties, and from there do things like automatically generate test suites. This does take a lot of work, so if your requirements are constantly changing, FM may not be worth the investment. But eventually requirements stop changing, and then you're stuck with them forever. That's where models shine.

1. As always, I'm using formal methods to mean the subdiscipline of formal specification of designs, leaving out the formal verification of code. Mostly because "formal specification" is really awkward to say. ↩
2. Also called a "calorie". The US "dietary Calorie" is actually a kilocalorie. ↩
3. This is all directly translatable to a TLA+ specification, I'm just describing it in English to avoid paying the syntax tax. ↩
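To make the toy model above executable — a minimal Python sketch of mine (names and details invented; the post describes the model only in English) showing the third property failing the moment Set becomes AsyncSet:

import heapq
from itertools import count

db, tasks, clock = {}, [], count()  # clock stands in for server_time()

def set_sync(key, val):
    db[key] = val

def async_set(key, val, priority):
    # Highest priority first, then earliest time: negate priority for heapq.
    heapq.heappush(tasks, (-priority, next(clock), key, val))

def drain_one():
    _, _, key, val = heapq.heappop(tasks)
    set_sync(key, val)

async_set("x", 42, priority=1)
print(db.get("x"))  # None — read-your-own-write is "obviously violated"
drain_one()
print(db.get("x"))  # 42 — but eventually the write lands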

3 weeks ago 22 votes
The Halting Problem is a terrible example of NP-Harder

Short one this time because I have a lot going on this week.

In computational complexity, NP is the class of all decision problems (yes/no) where a potential proof (or "witness") for "yes" can be verified in polynomial time. For example, "does this set of numbers have a subset that sums to zero" is in NP. If the answer is "yes", you can prove it by presenting such a subset. We would then verify the witness by 1) checking that all the numbers are present in the set (~linear time) and 2) adding up all the numbers (also linear). NP-complete is the class of "hardest possible" NP problems. Subset sum is NP-complete. NP-hard is the set of all problems at least as hard as NP-complete. Notably, NP-hard is not a subset of NP, as it contains problems that are harder than NP-complete. A natural question to ask is "like what?" And the canonical example of "NP-harder" is the halting problem (HALT): does program P halt on input C? As the argument goes, it's undecidable, so obviously not in NP. I think this is a bad example for two reasons:

1. All NP requires is that witnesses for "yes" can be verified in polynomial time. It does not require anything for the "no" case! And even though HALT is undecidable, there is a decidable way to verify a "yes": let the witness be "it halts in N steps", then run the program for that many steps and see if it halted by then. To prove HALT is not in NP, you have to show that this verification process grows faster than polynomially. It does (as busy beaver is uncomputable), but this all makes the example needlessly confusing.1
2. "What's bigger than a dog? THE MOON"

Really (2) bothers me a lot more than (1) because it's just so inelegant. It suggests that NP-complete is the upper bound of "solvable" problems, and after that you're in full-on undecidability. I'd rather show intuitive problems that are harder than NP but not that much harder. But in looking for a "slightly harder" problem, I ran into an, ah, problem. It seems like the next-hardest class would be EXPTIME, except we don't know for sure that NP != EXPTIME. We know for sure that NP != NEXPTIME, but NEXPTIME doesn't have any intuitive, easily explainable problems. Most "definitely harder than NP" problems require a nontrivial background in theoretical computer science or mathematics to understand.

There is one problem, though, that I find easily explainable. Place a token at the bottom left corner of a grid that extends infinitely up and right; call that point (0, 0). You're given a list of valid displacement moves for the token, like (+1, +0), (-20, +13), (-5, -6), etc, and a target point like (700, 1). You may make any sequence of moves in any order, as long as no move ever puts the token off the grid. Does any sequence of moves bring you to the target? This is PSPACE-complete, I think, which still isn't proven to be harder than NP-complete (though it's widely believed). But what if you increase the number of dimensions of the grid? Past a certain number of dimensions the problem jumps to being EXPSPACE-complete, then TOWER-complete (grows tetrationally), and then it keeps going. Some might recognize this as looking a lot like the Ackermann function, and in fact this problem is ACKERMANN-complete on the number of available dimensions. A friend wrote a Quanta article about the whole mess, you should read it. This problem is ludicrously bigger than NP ("Chicago" instead of "The Moon"), but at least it's clearly decidable, easily explainable, and definitely not in NP.
It's less confusing if you're taught the alternate (and original!) definition of NP, "the class of problems solvable in polynomial time by a nondeterministic Turing machine". Then HALT can't be in NP because otherwise runtime would be bounded by an exponential function. ↩
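The subset sum verification described at the top is short enough to write out — a sketch of mine, with a multiset-containment check standing in for "all the numbers are present":

from collections import Counter

def verify_subset_sum(numbers, witness):
    """Polynomial-time check of a 'yes' witness: the subset itself."""
    present = not (Counter(witness) - Counter(numbers))   # ~linear containment
    return bool(witness) and present and sum(witness) == 0  # summing is linear too

print(verify_subset_sum([3, -2, 7, -1], [3, -2, -1]))  # True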

4 weeks ago 17 votes
Solving a "Layton Puzzle" with Prolog

I have a lot in the works for this month's Logic for Programmers release. Among other things, I'm completely rewriting the chapter on Logic Programming Languages. I originally showcased the paradigm with puzzle solvers, like eight queens or four-coloring. Lots of other demos do this too! It takes creativity and insight for humans to solve them, so a program doing it feels magical. But I'm trying to write a book about practical techniques and I want everything I talk about to be useful. So in v0.9 I'll be replacing these examples with a couple of new programs that might get people thinking that Prolog could help them in their day-to-day work.

On the other hand, for a newsletter, showcasing a puzzle solver is pretty cool. And recently I stumbled into this post by my friend Pablo Meier, where he solves a videogame puzzle with Prolog:1

Summary for the text-only readers: We have a test with 10 true/false questions (denoted a/b) and four student attempts. Given the scores of the first three students, we have to figure out the fourth student's score.

bbababbabb = 7
baaababaaa = 5
baaabbbaba = 3
bbaaabbaaa = ???

You can see Pablo's solution here, and try it in SWI-prolog here. Pretty cool! But after way too long studying Prolog just to write this dang book chapter, I wanted to see if I could do it more elegantly than him. Code and puzzle spoilers to follow. (Normally here's where I'd link to a gentler introduction I wrote, but I think this is my first time writing about Prolog online? Uh, here's a Picat intro instead.)

The Program

You can try this all online at SWISH or just jump to my final version here.

:- use_module(library(dif)).    % Sound inequality
:- use_module(library(clpfd)).  % Finite domain constraints

First some imports. dif lets us write dif(A, B), which is true if A and B are not equal. clpfd lets us write A #= B + 1 to say "A is 1 more than B".2 We'll say both the student submission and the key will be lists, where each value is a or b. In Prolog, lowercase identifiers are atoms (like symbols in other languages) and identifiers that start with a capital are variables. Prolog finds values for variables that match equations (unification). The pattern matching is real real good.

% ?- means query
?- L = [a,B,c], [Y|X] = [1,2|L], B + 1 #= 7.
B = 6,
L = [a, 6, c],
X = [2, a, 6, c],
Y = 1

Next, we define score/3 recursively.3

% The student's test score
% score(student answers, answer key, score)
score([], [], 0).
score([A|As], [A|Ks], N) :- N #= M + 1, score(As, Ks, M).
score([A|As], [K|Ks], N) :- dif(A, K), score(As, Ks, N).

The first argument is the student's answers, the second is the answer key, and the third is the final score. The base case is the empty test, which has score 0. Otherwise, we take the head values of each list and compare them. If they're the same, we add one to the score, otherwise we keep the same score. Notice we couldn't write if x then y else z; we instead used pattern matching to effectively express (x && y) || (!x && z). Prolog does have a conditional operator, but it prevents backtracking so what's the point???

A quick break about bidirectionality

One of the coolest things about Prolog: all purely logical predicates are bidirectional. We can use score to check if our expected score is correct:

?- score([a, b, b], [b, b, b], 2).
true

But we can also give it answers and a key and ask it for the score:

?- score([a, b, b], [b, b, b], X).
X = 2

Or we could give it a key and a score and ask "what test answers would have this score?"

?- score(X, [b, b, b], 2).
X = [b, b, _A], dif(_A,b)
X = [b, _A, b], dif(_A,b)
X = [_A, b, b], dif(_A,b)

The different value is written _A because we never told Prolog that the array can only contain a and b. We'll fix this later.

Okay back to the program

Now that we have a way of computing scores, we want to find a possible answer key that matches all of our observations, ie gives everybody the correct scores.

key(Key) :-
  % Figure it out
  score([b, b, a, b, a, b, b, a, b, b], Key, 7),
  score([b, a, a, a, b, a, b, a, a, a], Key, 5),
  score([b, a, a, a, b, b, b, a, b, a], Key, 3).

So far we haven't explicitly said that the Key length matches the student answer lengths. This is implicitly verified by score (both lists need to be empty at the same time) but it's a good idea to explicitly add length(Key, 10) as a clause of key/1. We should also explicitly say that every element of Key is either a or b.4 Now we could write a second predicate saying Key had the right 'type':

keytype([]).
keytype([K|Ks]) :- member(K, [a, b]), keytype(Ks).

But "generating lists that match a constraint" is a thing that comes up often enough that we don't want to write a separate predicate for each constraint! So after some digging, I found a more elegant solution: maplist. Let L=[l1, l2]. Then maplist(p, L) is equivalent to the clause p(l1), p(l2). It also accepts partial predicates: maplist(p(x), L) is equivalent to p(x, l1), p(x, l2). So we could write5

contains(L, X) :- member(X, L).

key(Key) :-
  length(Key, 10),
  maplist(contains([a,b]), Key),
  % the score stuff

Now, let's query for the Key:

?- key(Key)
Key = [a, b, a, b, a, a, b, a, a, b]
Key = [b, b, a, b, a, a, a, a, a, b]
Key = [b, b, a, b, a, a, b, b, a, b]
Key = [b, b, b, b, a, a, b, a, a, b]

So there are actually four different keys that all explain our data. Does this mean the puzzle is broken and has multiple different answers?

Nope

The puzzle wasn't to find out what the answer key was, the point was to find the fourth student's score. And if we query for it, we see all four solutions give him the same score:

?- key(Key), score([b, b, a, a, a, b, b, a, a, a], Key, X).
X = 6
X = 6
X = 6
X = 6

Huh! I really like it when puzzles look like they're broken, but every "alternate" solution still gives the same puzzle answer. Total program length: 15 lines of code, compared to the original's 80 lines. Suck it, Pablo. (Incidentally, you can get all of the answers at once by writing findall(X, (key(Key), score($answer-array, Key, X)), L).)

I still don't like puzzles for teaching

The actual examples I'm using in the book are "analyzing a version control commit graph" and "planning a sequence of infrastructure changes", which are somewhat more likely to occur at work than needing to solve a puzzle. You'll see them in the next release!

1. I found it because he wrote Gamer Games for Lite Gamers as a response to my Gamer Games for Non-Gamers. ↩
2. These are better versions of the core Prolog expressions \+ (A = B) and A is B + 1, because they can defer unification. ↩
3. Prolog-descendants have a convention of writing the arity of the function after its name, so score/3 means "score has three parameters". I think they do this because you can overload predicates with multiple different arities. Also Joe Armstrong used Prolog for prototyping, so Erlang and Elixir follow the same convention. ↩
4. It still gets the right answers without this type restriction, but I had no idea it did until I checked for myself. Probably better not to rely on this! ↩
5. We could make this even more compact by using a lambda function.
First import module yall, then write maplist([X]>>member(X, [a,b]), Key). But (1) it's not a shorter program because you replace the extra definition with an extra module import, and (2) yall is SWI-Prolog specific and not an ISO-standard Prolog module. Using contains is more portable. ↩
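For readers without a Prolog handy, the same puzzle fits in a few lines of brute-force Python, where itertools.product plays the role of backtracking over the 2^10 candidate keys:

from itertools import product

def score(answers, key):
    return sum(a == k for a, k in zip(answers, key))

observed = {"bbababbabb": 7, "baaababaaa": 5, "baaabbbaba": 3}
keys = [k for k in product("ab", repeat=10)
        if all(score(s, k) == n for s, n in observed.items())]
print(len(keys), {score("bbaaabbaaa", k) for k in keys})
# The post finds four consistent keys, all scoring the fourth student at 6.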

a month ago 75 votes

More in programming

Notes from Alexander Petros’ “Building the Hundred-Year Web Service”

I loved this talk from Alexander Petros titled “Building the Hundred-Year Web Service”. What follows is a summation of my note-taking from watching the talk on YouTube.

Is what you’re building for future generations: Useful for them? Maintainable by them? Adaptable by them? Actually, forget about future generations. Is what you’re building for future you, 6 months or 6 years from now, aligning with those goals?

While we’re building codebases which may not be useful, maintainable, or adaptable by someone two years from now, the Romans built a bridge thousands of years ago that is still being used today.

It should be impossible to imagine building something in Roman times that’s still useful today. But if you look at [Trajan’s Bridge in Portugal, which is still used today] you can see there’s a little car on it and a couple of pedestrians. They couldn’t have anticipated the automobile, but nevertheless it is being used for that today.

That’s a conundrum. How do you build for something you can’t anticipate? You have to think resiliently. Ask yourself: what’s true today that was true for a software engineer in 1991? One simple answer is: sharing and accessing information with a uniform resource identifier. That was true 30+ years ago, and I would venture to bet it will be true in another 30 years — and more!

There [isn’t] a lot of source code that can run unmodified in software that is 30 years apart. And yet, the first web site ever made can do precisely that. The source code of the very first web page — which was written for a line mode browser — still runs today on a touchscreen smartphone, which is not a device that Tim Berners-Lee could have anticipated.

Alexander goes on to point out how interaction with web pages has changed over time:

- In the original line mode browser, links couldn’t be represented as blue underlined text. They were represented more like footnotes on screen where you’d see something like this[1] and then this[2]. If you wanted to follow that link, there was no GUI to point and click. Instead, you would hit that number on your keyboard.
- In desktop browsers and GUI interfaces, we got blue underlines to represent something you could point and click on to follow a link.
- On touchscreen devices, we got “tap” with your finger to follow a link.

While these methods for interaction have changed over the years, the underlying medium remains unchanged: information via uniform resource identifiers. The core representation of a hypertext document is adaptable to things that were not at all anticipated in 1991. The durability guarantees of the web are absolutely astounding if you take a moment to think about it. If you’re sprinting you might beat the browser, but it’s running a marathon and you’ll never beat it in the long run.

If your page is fast enough, [refreshes] won’t even repaint the page. The experience of refreshing a page, or clicking on a “hard link”, is identical to the experience of partially updating the page. That is something that quietly happened in the last ten years with no fanfare. All the people who wrote basic HTML got a huge performance upgrade in their browser. And everybody who tried to beat the browser now has to reckon with all the JavaScript they wrote to emulate these basic features.

21 hours ago 2 votes
the penultimate conditional syntax

About half a year ago I encountered a paper bombastically titled “the ultimate conditional syntax”. It has the attractive goal of unifying pattern matching with boolean if tests, and its solution is in some ways very nice. But it seems over-complicated to me, especially for something that’s a basic work-horse of programming. I couldn’t immediately see how to cut it down to manageable proportions, but recently I had an idea. I’ll outline it under the “penultimate conditionals” heading below, after reviewing the UCS and explaining my motivation.

- what the UCS?
- whence UCS
- out of scope
- penultimate conditionals
- dangling syntax
- examples
- antepenultimate breath

what the UCS?

The ultimate conditional syntax does several things which are somewhat intertwined and support each other.

An “expression is pattern” operator allows you to do pattern matching inside boolean expressions. Like “match” but unlike most other expressions, “is” binds variables whose scope is the rest of the boolean expression that might be evaluated when the “is” is true, and the consequent “then” clause.

You can “split” tests to avoid repeating parts that are the same in successive branches. For example,

if num < 0 then -1
else if num > 0 then +1
else 0

can be written

if num < 0 then -1
       > 0 then +1
   else 0

The example shows a split before an operator, where the left hand operand is the same and the rest of the expression varies. You can split after the operator when the operator is the same, which is common for “is” pattern match clauses.

Indentation-based syntax (an offside rule) reduces the amount of punctuation that splits would otherwise need. An explicit version of the example above is

if { x { { < { 0 then −1 } }; { > { 0 then +1 } }; else 0 } }

(This example is written in the paper on one line. I’ve split it for narrow screens, which exposes what I think is a mistake in the nesting.)

You can also intersperse let bindings between splits. I doubt the value of this feature, since “is” can also bind values, but interspersed let does have its uses. The paper has an example using let to avoid rightward drift:

if let tp1_n = normalize(tp1)
   tp1_n is Bot then Bot
   let tp2_n = normalize(tp2)
   tp2_n is Bot then Bot
   let m = merge(tp1_n, tp2_n)
   m is Some(tp) then tp
   m is None then glb(tp1_n, tp2_n)

It’s probably better to use early return to avoid rightward drift. The desugaring uses let bindings when lowering the UCS to simpler constructions.

whence UCS

Pattern matching in the tradition of functional programming languages supports nested patterns that are compiled in a way that eliminates redundant tests. For example, this example checks that e1 is Some(_) once, not twice as written.

if e1 is Some(Left(lv)) then e2
         Some(Right(rv)) then e3
   None then e4

Being cheeky, I’d say UCS introduces more causes of redundant checks, then goes to great effort to eliminate redundant checks again. Splits reduce redundant code at the source level; the bulk of the paper is about eliminating redundant checks in the lowering from source to core language. I think the primary cause of this extra complexity is treating the is operator as a two-way test rather than a multi-way match. Splits are introduced as a more general (more complicated) way to build multi-way conditions out of two-way tests. There’s a secondary cause: the tradition of expression-oriented functional languages doesn’t like early returns.
A nice pattern in imperative code is to write a function as a series of preliminary calculations and guards with early returns that set things up for the main work of the function. Rust’s ? operator and let-else statement support this pattern directly. UCS addresses the same pattern by wedging calculate-check sequences into if statements, as in the normalize example above.

out of scope

I suspect UCS’s indentation-based syntax will make programmers more likely to make mistakes, and make compilers have more trouble producing nice error messages. (YAML has put me off syntax that doesn’t have enough redundancy to support good error recovery.) So I wondered if there’s a way to have something like an “is pattern” operator in a Rust-like language, without an offside rule, and without the excess of punctuation in the UCS desugaring. But I couldn’t work out how to make the scope of variable bindings in patterns cover all the code that might need to use them. The scope needs to extend into the consequent then clause, but also into any follow-up tests – and those tests can branch so the scope might need to reach into multiple then clauses.

The problem was the way I was still thinking of the then and else clauses as part of the outer if. That implied the expression has to be closed off before the then, which troublesomely closes off the scope of any is-bound variables. The solution – part of it, at least – is actually in the paper, where then and else are nested inside the conditional expression.

penultimate conditionals

There are two ingredients:

The then and else clauses become operators that cause early return from a conditional expression. They can be lowered to a vaguely Rust syntax with the following desugaring rules. The 'if label denotes the closest-enclosing if; you can’t use then or else inside the expr of a then or else unless there’s another intervening if.

then expr ⟼ && break 'if expr
else expr ⟼ || break 'if expr
else expr ⟼ || _ && break 'if expr

There are two desugarings for else depending on whether it appears in an expression or a pattern. If you prefer a less wordy syntax, you might spell then as => (like match in Rust) and else as || =>. (For symmetry we might allow && => for then as well.)

An is operator for multi-way pattern-matching that binds variables whose scope covers the consequent part of the expression. The basic form is like the UCS,

scrutinee is pattern

which matches the scrutinee against the pattern returning a boolean result. For example, foo is None.

Guarded patterns are like,

scrutinee is pattern && consequent

where the scope of the variables bound by the pattern covers the consequent. The consequent might be a simple boolean guard, for example,

foo is Some(n) && n < 0

or inside an if expression it might end with a then clause,

if foo is Some(n) && n < 0 => -1
// ...

Simple multi-way patterns are like,

scrutinee is { pattern || pattern || … }

If there is a consequent then the patterns must all bind the same set of variables (if any) with the same types. More typically, a multi-way match will have consequent clauses, like

scrutinee is {
    pattern && consequent
 || pattern && consequent
 || => otherwise
}

When a consequent is false, we go on to try other alternatives of the match, like we would when the first operand of boolean || is false. To help with layout, you can include a redundant || before the first alternative.
For example,

if foo is {
 || Some(n) && n < 0 => -1
 || Some(n) && n > 0 => +1
 || Some(n) => 0
 || None => 0
}

Alternatively,

if foo is {
    Some(n) && ( n < 0 => -1
             ||  n > 0 => +1
             ||  => 0 )
 || None => 0
}

(They should compile the same way.)

The evaluation model is like familiar shortcutting && and || and the syntax is supposed to reinforce that intuition. The UCS paper spends a lot of time discussing backtracking and how to eliminate it, but penultimate conditionals evaluate straightforwardly from left to right.

The paper briefly mentions as patterns, like

Some(Pair(x, y) as p)

which in Rust would be written

Some(p @ Pair(x, y))

The is operator doesn’t need a separate syntax for this feature:

Some(p is Pair(x, y))

For large examples, the penultimate conditional syntax is about as noisy as Rust’s match, but it scales down nicely to smaller matches. However, there are differences in how consequences and alternatives are punctuated which need a bit more discussion.

dangling syntax

The precedence and associativity of the is operator is tricky: it has two kinds of dangling-else problem. The first kind occurs with a surrounding boolean expression. For example, when b = false, what is the value of this?

b is true || false

It could bracket to the left, yielding false:

(b is true) || false

Or to the right, yielding true:

b is { true || false }

This could be disambiguated by using different spellings for boolean or and pattern alternatives. But that doesn’t help for the second kind, which occurs with an inner match.

foo is Some(_) && bar is Some(_) || None

Does that check foo is Some(_) with an always-true look at bar,

( foo is Some(_) ) && bar is { Some(_) || None }

Or does it check bar is Some(_) and waste time with foo?

foo is { Some(_) && ( bar is Some(_) ) || None }

I have chosen to resolve the ambiguity by requiring curly braces {} around groups of alternative patterns. This allows me to use the same spelling || for all kinds of alternation. (Compare Rust, which uses || for boolean expressions, | in a pattern, and , between the arms of a match.) Curlies around multi-way matches can be nested, so the example in the previous section can also be written,

if foo is {
 || Some(n) && n < 0 => -1
 || Some(n) && n > 0 => +1
 || { Some(0) || None } => 0
}

The is operator binds tighter than && on its left, but looser than && on its right (so that a chain of && is gathered into a consequent) and tighter than || on its right so that outer || alternatives don’t need extra brackets.

examples

I’m going to finish these notes by going through the ultimate conditional syntax paper to translate most of its examples into the penultimate syntax, to give it some exercise. Here we use is to name a value n, as a replacement for the |> abs pipe operator, and we use range patterns instead of split relational operators:

if foo(args) is {
 || 0 => "null"
 || n && abs(n) is {
     || 101.. => "large"
     || ..10 => "small"
     || => "medium" } }

In both the previous example and the next one, we have some extra brackets where UCS relies purely on an offside rule.

if x is {
 || Right(None) => defaultValue
 || Right(Some(cached)) => f(cached)
 || Left(input) && compute(input) is {
     || None => defaultValue
     || Some(result) => f(result)
    }
}

This one is almost identical to UCS apart from the spellings of and, then, else.
if name.startsWith("_")
&& name.tailOption is Some(namePostfix)
&& namePostfix.toIntOption is Some(index)
&& 0 <= index && index < arity
&& => Right([index, name])
|| => Left("invalid identifier: " + name)

Here are some nested multi-way matches with overlapping patterns and bound values:

if e is {
 // ...
 || Lit(value) && Map.find_opt(value) is Some(result) => Some(result)
 // ...
 || { Lit(value) || Add(Lit(0), value) || Add(value, Lit(0)) } =>
      { print_int(value); Some(value) }
 // ...
}

The next few examples show UCS splits without the is operator. In my syntax I need to press a few more buttons but I think that’s OK.

if x == 0 => "zero"
|| x == 1 => "unit"
|| => "?"

if x == 0 => "null"
|| x > 0 => "positive"
|| => "negative"

if predicate(0, 1) => "A"
|| predicate(2, 3) => "B"
|| => "C"

The first two can be written with is instead, but it’s not briefer:

if x is {
 || 0 => "zero"
 || 1 => "unit"
 || => "?"
}

if x is {
 || 0 => "null"
 || 1.. => "positive"
 || => "negative"
}

There’s little need for a split-anything feature when we have multi-way matches.

if foo(u, v, w) is {
 || Some(x) && x is {
     || Left(_) => "left-defined"
     || Right(_) => "right-defined"
    }
 || None => "undefined"
}

A more complete function:

fn zip_with(f, xs, ys) {
    if [xs, ys] is {
     || [x :: xs, y :: ys] && zip_with(f, xs, ys) is Some(tail) =>
          Some(f(x, y) :: tail)
     || [Nil, Nil] => Some(Nil)
     || => None
    }
}

Another fragment of the expression evaluator:

if e is {
 // ...
 || Var(name) && Map.find_opt(env, name) is {
     || Some(Right(value)) => Some(value)
     || Some(Left(thunk)) => Some(thunk())
    }
 || App(lhs, rhs) => // ...
 // ...
}

This expression is used in the paper to show how a UCS split is desugared:

if Pair(x, y) is {
 || Pair(Some(xv), Some(yv)) => xv + yv
 || Pair(Some(xv), None) => xv
 || Pair(None, Some(yv)) => yv
 || Pair(None, None) => 0
}

The desugaring in the paper introduces a lot of redundant tests. I would desugar straightforwardly, then rely on later optimizations to eliminate other redundancies such as the construction and immediate destruction of the pair:

if Pair(x, y) is Pair(xx, yy)
&& xx is {
 || Some(xv) && yy is {
     || Some(yv) => xv + yv
     || None => xv
    }
 || None && yy is {
     || Some(yv) => yv
     || None => 0
    }
}

Skipping ahead to the “non-trivial example” in the paper’s fig. 11:

if e is {
 || Var(x) && context.get(x) is {
     || Some(IntVal(v)) => Left(v)
     || Some(BoolVal(v)) => Right(v)
    }
 || Lit(IntVal(v)) => Left(v)
 || Lit(BoolVal(v)) => Right(v)
 // ...
}

The next example in the paper compares C# relational patterns. Rust’s range patterns do a similar job, with the caveat that Rust’s ranges don’t have a syntax for exclusive lower bounds.

fn classify(value) {
    if value is {
     || .. -4.0 => "too low"
     || 10.0 .. => "too high"
     || NaN => "unknown"
     || => "acceptable"
    }
}

I tend to think relational patterns are a better syntax than ranges. With relational patterns I can rewrite an earlier example like,

if foo is {
 || Some(< 0) => -1
 || Some(> 0) => +1
 || { Some(0) || None } => 0
}

I think with the UCS I would have to name the Some(_) value to be able to compare it, which suggests that relational patterns can be better than UCS split relational operators. Prefix-unary relational operators are also a nice way to write single-ended ranges in expressions. We could simply write both ends to get a complete range, like

>= lo < hi

or like

if value is > -4.0 < 10.0 => "acceptable"
|| => "far out"

Near the start I quoted a normalize example that illustrates left-aligned UCS expressions.
The penultimate version drifts right like the Scala version:

if normalize(tp1) is {
 || Bot => Bot
 || tp1_n && normalize(tp2) is {
     || Bot => Bot
     || tp2_n && merge(tp1_n, tp2_n) is {
         || Some(tp) => tp
         || None => glb(tp1_n, tp2_n)
        }
    }
}

But a more Rusty style shows the benefits of early returns (especially the terse ? operator) and monadic combinators.

let tp1 = normalize(tp1)?;
let tp2 = normalize(tp2)?;
merge(tp1, tp2)
    .unwrap_or_else(|| glb(tp1, tp2))

antepenultimate breath

When I started writing these notes, my penultimate conditional syntax was little more than a sketch of an idea. Having gone through the previous section’s exercise, I think it has turned out better than I thought it might. The extra nesting from multi-way match braces doesn’t seem to be unbearably heavyweight. However, none of the examples have bulky then or else blocks, which are where the extra nesting is more likely to be annoying. But then, as I said before, it’s comparable to a Rust match:

match scrutinee {
    pattern => { consequent }
}

if scrutinee is {
 || pattern => { consequent }
}

The || lines down the left margin are noisy, but hard to get rid of in the context of a curly-brace language. I can’t reduce them to | like OCaml because what would I use for bitwise OR? I don’t want presence or absence of flow control to depend on types or context. I kind of like Prolog / Erlang’s , for && and ; for ||, but that’s well outside what’s legible to mainstream programmers. So, dunno. Anyway, I think I’ve successfully found a syntax that does most of what UCS does, but in a much simpler fashion.
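As a point of comparison rather than part of the proposal — a sketch of mine, not from the notes: Python's match gets close to the classify example, but has to fake the relational patterns with guards.

import math

def classify(value: float) -> str:
    match value:
        case v if math.isnan(v):
            return "unknown"
        case v if v < -4.0:   # boundary handling approximate
            return "too low"
        case v if v >= 10.0:
            return "too high"
        case _:
            return "acceptable"

print(classify(float("nan")), classify(-5.0), classify(12.0), classify(3.0))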

2 days ago 5 votes
Coding should be a vibe!

The appeal of "vibe coding" — where programmers lean back and prompt their way through an entire project with AI — appears partly to be based on the fact that so many development environments are deeply unpleasant to work with. So it's no wonder that all these programmers stuck working with cumbersome languages and frameworks can't wait to give up on the coding part of software development. If I found writing code a chore, I'd be looking for retirement too. But I don't. I mean, I used to! When I started programming, it was purely because I wanted programs. Learning to code was a necessary but inconvenient step toward bringing systems to life. That all changed when I learned Ruby and built Rails. Ruby's entire premise is "programmer happiness": that writing code should be a joy. And historically, the language was willing to trade run-time performance, memory usage, and other machine sympathies against the pursuit of said programmer happiness. These days, it seems like you can eat your cake and have it too, though. Ruby, after thirty years of constant improvement, is now incredibly fast and efficient, yet remains a delight to work with. That ethos couldn't shine brighter now. Disgruntled programmers have finally realized that an escape from nasty syntax, boilerplate galore, and ecosystem hyper-churn is possible. That's the appeal of AI: having it hide away all that unpleasantness. Only it's like cleaning your room by stuffing the mess under the bed — it doesn't make it go away! But the instinct is correct: Programming should be a vibe! It should be fun! It should resemble English closely enough that line noise doesn't obscure the underlying ideas and decisions. It should allow a richness of expression that serves the human reader instead of favoring the strictness preferred by the computer. Ruby does. And given that, I have no interest in giving up writing code. That's not the unpleasant part that I want AI to take off my hands. Just so I can — what? — become a project manager for a murder of AI crows? I've had the option to retreat up the manager ladder for most of my career, but I've steadily refused, because I really like writing Ruby! It's the most enjoyable part of the job! That doesn't mean AI doesn't have a role to play when writing Ruby. I'm conversing and collaborating with LLMs all day long — looking up APIs, clarifying concepts, and asking stupid questions. AI is a superb pair programmer, but I'd retire before permanently handing it the keyboard to drive the code. Maybe one day, wanting to write code will be a quaint concept. Like tending to horses for transportation in the modern world — done as a hobby but devoid of any economic value. I don't think anyone knows just how far we can push the intelligence and creativity of these insatiable token munchers. And I wouldn't bet against their advance, but it's clear to me that a big part of their appeal to programmers is the wisdom that Ruby was founded on: Programming should favor and flatter the human.

2 days ago 8 votes
Tempest Rising is a great game

I really like RTS games. I pretty much grew up on them, starting with Command & Conquer 3: Kane's Wrath, moving on to the StarCraft 2 trilogy and witnessing the downfall of Command & Conquer 4. I never had the disks for any other RTS games during my teenage years. Yes, the disks, the ones you go to the store to buy! I didn't know Steam existed back then, so this was my only source of games. There is something magical in owning a physical copy of the game. I always liked the art on the front (a mandatory huge face for all RTS!), the game description and screenshots on the back, even the smell of the plastic disk case.

2 days ago 4 votes