I know I said we'd be back to normal newsletters this week and in fact had 80% of one already written. Then I unearthed something that was better left buried. Blog post here, Patreon notes here (Mostly an explanation of how I found this horror in the first place). Next week I'll send what was supposed to be this week's piece. (PS: April Cools in three weeks!)


More from Computer Things

Five Kinds of Nondeterminism

No newsletter next week, I'm teaching a TLA+ workshop. Speaking of which: I spend a lot of time thinking about formal methods (and TLA+ specifically) because it's the source of almost all my revenue. But I don't share most of the details because 90% of my readers don't use FM and never will. I think it's more interesting to talk about ideas from FM that would be useful to people outside that field. For example, the idea of "property strength" translates to the idea that some tests are stronger than others. Another possible export is how FM approaches nondeterminism.

A nondeterministic algorithm is one that, from the same starting conditions, has multiple possible outputs. This is nondeterministic:

# Pseudocode
def f() {
  return rand()+1;
}

When specifying systems, I may not encounter nondeterminism more often than in real systems, but I am definitely more aware of its presence. Modeling nondeterminism is a core part of formal specification. I mentally categorize nondeterminism into five buckets. Caveat: this is specifically about nondeterminism from the perspective of system modeling, not computer science as a whole. If I tried to include stuff on NFAs and amb operations this would be twice as long.[1]

1. True Randomness

Programs that literally make calls to a random function and then use the results. This is the simplest type of nondeterminism and one of the most ubiquitous.

Most of the time, though, random isn't truly nondeterministic. Most computer randomness is actually pseudorandom, meaning we seed a deterministic algorithm that behaves "randomly-enough" for some use. You could "lift" a nondeterministic random function into a deterministic one by adding a fixed seed to the starting state.

# Python
from random import random, seed

def f(x):
    seed(x)
    return random()

>>> f(3)
0.23796462709189137
>>> f(3)
0.23796462709189137

Often we don't do this because the point of randomness is to provide nondeterminism! We deliberately abstract out the starting state of the seed from our program, because it's easier to think about it as locally nondeterministic. (There's also "true" randomness, like using thermal noise as an entropy source, which I think is mainly used for cryptography and seeding PRNGs.)

Most formal specification languages don't deal with randomness (though some deal with probability more broadly). Instead, we treat it as a nondeterministic choice:

# software
if rand > 0.001 then return a else crash

# specification
either return a or crash

This is because we're looking at worst-case scenarios, so it doesn't matter if crash happens 50% of the time or 0.0001% of the time, it's still possible.

2. Concurrency

# Pseudocode
global x = 1, y = 0;

def thread1() {
  x++; x++; x++;
}

def thread2() {
  y := x;
}

If thread1() and thread2() run sequentially, then (assuming the sequence is fixed) the final value of y is deterministic. If the two functions are started and run simultaneously, then depending on when thread2 executes, y can be 1, 2, 3, or 4.

Both functions are locally sequential, but running them concurrently leads to global nondeterminism. Concurrency is arguably the most dramatic source of nondeterminism: small amounts of concurrency lead to huge explosions in the state space. We have words for the specific kinds of nondeterminism caused by concurrency, like "race condition" and "dirty write". Often we think about it as a separate topic from nondeterminism.
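Here's the same sketch as a minimal runnable Python version, using nothing but the standard library. Run it a few times: y can come out as 1, 2, 3, or 4 depending on when thread2 reads x, though on any given machine one interleaving tends to dominate.

# Python
import threading

x, y = 1, 0

def thread1():
    global x
    x += 1; x += 1; x += 1

def thread2():
    global y
    y = x

t1 = threading.Thread(target=thread1)
t2 = threading.Thread(target=thread2)
t1.start(); t2.start()   # start both "simultaneously"
t1.join(); t2.join()
print(y)                 # 1, 2, 3, or 4, depending on the interleaving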
To some extent it "overshadows" the other kinds: I have a much easier time teaching students about concurrency in models than nondeterminism in models. Many formal specification languages have special syntax/machinery for the concurrent aspects of a system, and generic syntax for other kinds of nondeterminism. In P that's choose. Others don't special-case concurrency, instead representing it as nondeterministic choices by a global coordinator. This is more flexible but also more inconvenient, as you have to implement process-local sequencing code yourself.

3. User Input

One of the most famous and influential programming books is The C Programming Language by Kernighan and Ritchie. The first example of a nondeterministic program appears on page 14. For the newsletter readers who get text-only emails,[2] here's the program:

#include <stdio.h>

/* copy input to output; 1st version */
main() {
    int c;

    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}

Yup, that's nondeterministic. Because the user can enter any string, any call of main() could have any output, meaning the number of possible outcomes is infinite.

Okay, that seems a little cheap, and I think it's because we tend to think of determinism in terms of how the user experiences the program. Yes, main() has an infinite number of possible inputs, but for each input the user will experience only one possible output. It starts to feel more nondeterministic when modeling a long-standing system that's reacting to user input, for example a server that runs a script whenever the user uploads a file. This can be modeled with nondeterminism and concurrency: we have one execution that's the system, and one nondeterministic execution that represents the effects of our user.

(One intrusive thought I sometimes have: any "yes/no" dialogue actually has three outcomes: yes, no, or the user getting up and walking away without picking a choice, permanently stalling the execution.)

4. External forces

The more general version of "user input": anything where either 1) some part of the execution outcome depends on retrieving external information, or 2) the external world can change some state outside of your system. I call the distinction between internal and external components of the system the world and the machine.

Simple examples: code that at some point reads an external temperature sensor. Unrelated code running on a system which quits programs if it gets too hot. API requests to a third-party vendor. Code processing files, where users can delete files before the script gets to them.

Like with PRNGs, some of these cases don't have to be nondeterministic; we can argue that "the temperature" should be a virtual input into the function. Like with PRNGs, we treat it as nondeterministic because it's useful to think in that way. Also, what if the temperature changes between starting a function and reading the sensor?

External forces are also a source of nondeterminism as uncertainty. Measurements in the real world often come with errors, so repeating a measurement twice can give two different answers. Sometimes operations fail for no discernible reason, or for a non-programmatic reason (like something physically blocking the sensor). All of these situations can be modeled in the same way as user input: a concurrent execution making nondeterministic choices.
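To make the file-processing example concrete, here's a minimal Python sketch (the uploads directory and process_file are hypothetical stand-ins): whether each open succeeds is, from the script's point of view, a nondeterministic choice made by the outside world.

# Python
import os

def process_file(text):   # hypothetical stand-in for the real work
    return len(text)

for name in os.listdir("uploads"):
    path = os.path.join("uploads", name)
    try:
        with open(path) as f:
            process_file(f.read())
    except FileNotFoundError:
        # the file was deleted between listing and opening:
        # the "crash" branch of the nondeterministic choice
        pass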
5. Abstraction

This is where nondeterminism in system models and in "real software" differ the most. I said earlier that pseudorandomness is arguably deterministic, but we abstract it into nondeterminism. More generally, nondeterminism hides implementation details of deterministic processes.

In one consulting project, we had a machine that received a message, parsed a lot of data from the message, went into a complicated workflow, and then entered one of three states. The final state was totally deterministic on the content of the message, but the actual process of determining that final state took tons and tons of code. None of that mattered at the scope we were modeling, so we abstracted it all away: "on receiving a message, nondeterministically enter state A, B, or C."

Doing this makes the system easier to model. It also makes the model more sensitive to possible errors. What if the workflow is bugged and sends us to the wrong state? That's already covered by the nondeterministic choice! Nondeterministic abstraction gives us the potential to pick the worst-case scenario for our system, so we can prove it's robust even under those conditions.

I know I beat the "nondeterminism as abstraction" drum a whole lot, but that's because it's the insight from formal methods I personally value the most: nondeterminism is a powerful tool to simplify reasoning about things. You can see the same approach in how I approach modeling users and external forces: complex realities black-boxed and simplified into nondeterministic forces on the system.

Anyway, I hope this collection of ideas I got from formal methods is useful to my broader readership. Lemme know if it somehow helps you out!

[1] I realized after writing this that I already wrote an essay about nondeterminism in formal specification just under a year ago. I hope this one covers enough new ground to be interesting!

[2] There is a surprising number of you.

Are Efficiency and Horizontal Scalability at odds?

Sorry for missing the newsletter last week! I started writing on Monday as normal, and by Wednesday the piece (about the hierarchy of controls) was 2000 words and not close to done. So now it'll be a blog post sometime later this month. I also just released a new version of Logic for Programmers! 0.7 adds a bunch of new content (type invariants, modeling access policies, rewrites of the first chapters) but more importantly has new fonts that are more legible than the old ones. Go check it out!

For this week's newsletter I want to brainstorm an idea I've been noodling over for a while. Say we have a computational task, like running a simulation or searching a very large graph, and it's taking too long to complete on a computer. There are generally three things we can do to make it faster:

1. Buy a faster computer ("vertical scaling")
2. Modify the software to use the computer's resources better ("efficiency")
3. Modify the software to use multiple computers ("horizontal scaling")

(Splitting single-threaded software across multiple threads/processes is sort of a blend of (2) and (3).)

The big benefit of (1) is that we (usually) don't have to make any changes to the software to get a speedup. The downside is that for the past couple of decades computers haven't gotten much faster, except in ways that require recoding (like GPUs and multicore). This means we rely on (2) and (3), and we can do both to a point. I've noticed, though, that horizontal scaling seems to conflict with efficiency. Software optimized to scale well tends to be worse for the N=1 case than software optimized to, um, be optimized. Are there reasons to expect this?

It seems reasonable that design goals of software are generally in conflict, purely because exclusively optimizing for one property means making decisions that impede other properties. But is there something in the nature of "efficiency" and "horizontal scalability" that makes them especially disjoint? This isn't me trying to explain a fully coherent idea, more me trying to figure this all out for myself. Also, I'm probably getting some hardware stuff wrong.

Amdahl's Law

According to Amdahl's Law, the maximum speedup from parallelization is constrained by the proportion of the work that can be parallelized. If 80% of algorithm X is parallelizable, the maximum speedup from horizontal scaling is 5x. If algorithm Y is 25% parallelizable, the maximum speedup is only 1.3x. If you need horizontal scalability, you want to use algorithm X, even if Y is naturally 3x faster. But if Y was 4x faster, you'd prefer it to X. Maximal scalability means finding the optimal balance between baseline speed and parallelizability. Maximal efficiency means just optimizing baseline speed.

Coordination Overhead

Distributed algorithms require more coordination. To add a list of numbers in parallel via fork-join, we'd do something like this (a concrete sketch follows the list):

1. Split the list into N sublists
2. Fork a new thread/process for each sublist
3. Wait for each thread/process to finish
4. Add the sums together

(1), (2), and (3) all add overhead to the algorithm. At the very least it's extra lines of code to execute, but it can also mean inter-process communication or network hops. Distribution also means you have fewer natural correctness guarantees, so you need more administrative overhead to avoid race conditions.
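Here's a minimal Python sketch of that fork-join sum, plus a tiny Amdahl helper that reproduces the 5x and 1.3x numbers above. The chunking, the worker pool, and collecting the partial sums are exactly the coordination overhead in question.

# Python
from multiprocessing import Pool

def amdahl_limit(p):
    # maximum speedup when a fraction p of the work is parallelizable
    return 1 / (1 - p)
# amdahl_limit(0.80) -> 5.0, amdahl_limit(0.25) -> ~1.33

def parallel_sum(numbers, n_workers=4):
    chunk = (len(numbers) + n_workers - 1) // n_workers
    sublists = [numbers[i:i + chunk] for i in range(0, len(numbers), chunk)]  # (1) split
    with Pool(n_workers) as pool:                 # (2) fork a worker per sublist
        partial_sums = pool.map(sum, sublists)    # (3) wait for each to finish
    return sum(partial_sums)                      # (4) add the sums together

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))   # 499999500000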
Real-world example: historically, CPython has had a "global interpreter lock" (GIL). In multithreaded code, only one thread could execute Python code at a time (others could execute C code). The newest version supports disabling the GIL, which currently comes at a 40% overhead for single-threaded programs. Supposedly the difference is because the specializing adaptive interpreter optimization isn't thread-safe yet. The Python team is hoping to get it down to "only" 10%.

Scaling loses shared resources

I'd say that intra-machine scaling (multiple threads/processes) feels qualitatively different from inter-machine scaling. Part of that is that intra-machine scaling is "capped" while inter-machine scaling is not. But there's also a difference in what assumptions you can make about shared resources. Starting from the baseline of a single-threaded program:

1. Threads have a much harder time sharing CPU caches (you have to manually mess with affinities)
2. Processes have a much harder time sharing RAM (I think you have to use mmap?)
3. Machines can't share cache, RAM, or disk, period.

It's a lot easier to solve a problem when the whole thing fits in RAM. But if you split a 50 GB problem across three machines, it doesn't fit in RAM by default, even if the machines have 64 GB each. Scaling also means that separate machines can't reuse resources like database connections.

Efficiency comes from limits

I think the two previous points tie together in the idea that maximal efficiency comes from being able to make assumptions about the system. If we know the exact sequence of computations, we can aim to minimize cache misses. If we don't have to worry about thread-safety, tracking references is dramatically simpler. If we have all of the data in a single database, our query planner has more room to work with. At various tiers of scaling these assumptions are no longer guaranteed, and we lose the corresponding optimizations.

Sometimes these assumptions are implicit and crop up in odd places. Like if you're working at a scale where you need multiple synced databases, you might want to use UUIDs instead of sequential numbers for keys. But then you lose the assumption "recently inserted rows are close together in the index", which I've read can lead to significant slowdowns.

This suggests that if you can find a limit somewhere else, you can get both high horizontal scaling and high efficiency. Supposedly the TigerBeetle database has both, but that could be because they limit all records to accounts and transfers. This means every record fits in exactly 128 bytes. Does this mean that "assumptions" could be both "assumptions about the computing environment" and "assumptions about the problem"?

In the famous essay Scalability! But at what COST, Frank McSherry shows that his single-threaded laptop could outperform 128-node "big data systems" on PageRank and graph connectivity (via label propagation). Afterwards, he discusses how a different algorithm solves graph connectivity even faster:

[Union find] is more lines of code than label propagation, but it is 10x faster and 100x less embarrassing. … The union-find algorithm is fundamentally incompatible with the graph computation approaches Giraph, GraphLab, and GraphX put forward (the so-called "think like a vertex" model).

The interesting thing to me is that his alternative makes more "assumptions" than what he's comparing to. He can "assume" a fixed goal and optimize the code for that goal. The "big data systems" are trying to be general-purpose compute platforms and have to pick a model that supports the widest range of possible problems.
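For the curious, here's roughly what union-find for graph connectivity looks like (a sketch, not McSherry's actual code). The point: everything hinges on one shared, mutable parent array, which is exactly the kind of global state the "think like a vertex" model doesn't give you.

# Python
def connected_components(n, edges):
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    for a, b in edges:                      # union step
        parent[find(a)] = find(b)

    return len({find(v) for v in range(n)})

print(connected_components(5, [(0, 1), (1, 2), (3, 4)]))   # -> 2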
A few years back I wrote clever vs insightful code. I think what I'm trying to say here is that efficiency comes from having insight into your problem and environment. (Last thought to shove in here: to exploit assumptions, you need control. Carefully arranging your data to fit in L1 doesn't matter if your programming language doesn't let you control where things are stored!)

Is there a cultural aspect?

Maybe there's also a cultural element to this conflict. What if the engineers interested in "efficiency" are different from the engineers interested in "horizontal scaling"? At my first job the data scientists set up a Hadoop cluster for their relatively small dataset, only a few dozen gigabytes or so. One of the senior software engineers saw this and said "big data is stupid." To prove it, he took one of their example queries, wrote a script in Go to compute the same thing, and optimized it to run faster on his machine.

At the time I was like "yeah, you're right, big data IS stupid!" But I think now that we both missed something obvious: with the "scalable" solution, the data scientists didn't have to write an optimized script for every single query. Optimizing code is hard; adding more machines is easy!

The highest tier of horizontal scaling is usually something large businesses want, and large businesses like problems that can be solved purely with money. Maximizing efficiency requires a lot of knowledge-intensive human labour, so it's less appealing as an investment. Then again, I've seen a lot of work on making the scalable systems more efficient, such as evenly balancing heterogeneous workloads. Maybe in the largest systems intra-machine efficiency is just too small-scale a problem.

I'm not sure where this fits in, but scaling a volume of tasks conflicts less than scaling individual tasks

If you have 1,000 machines and need to crunch one big graph, you probably want the most scalable algorithm. If you instead have 50,000 small graphs, you probably want the most efficient algorithm, which you then run on all 1,000 machines. When we call a problem embarrassingly parallel, we usually mean it's easy to horizontally scale. But it's also one that's easy to make more efficient, because local optimizations don't affect the scaling!

Okay, that's enough brainstorming for one week.

Blog Rec

Whenever I think about optimization as a skill, the first article that comes to mind is matklad's Push Ifs Up And Fors Down. I'd never have considered on my own that inlining loops into functions could be such a huge performance win. The blog has a lot of other posts on the nuts-and-bolts of systems languages, optimization, and concurrency.

What hard thing does your tech make easy?

I occasionally receive emails asking me to look at the writer's new language/library/tool. Sometimes it's in an area I know well, like formal methods. Other times, I'm a complete stranger to the field. Regardless, I'm generally happy to check it out.

When starting out, this is the biggest question I'm looking to answer: What does this technology make easy that's normally hard? What justifies me learning and migrating to a new thing, as opposed to fighting through my problems with the tools I already know? The new thing has to have some sort of value proposition, which could be something like "better performance" or "more secure". The most universal value, and the most direct to show, is "takes less time and mental effort to do something". I can't accurately judge two benchmarks, but I can see two demos or code samples and compare which one feels easier to me.

Examples

Functional programming

What drew me originally to functional programming was higher-order functions.

# Without HOFs
out = []
for x in input {
    if test(x) {
        out.append(x)
    }
}

# With HOFs
filter(test, input)

We can also compare the easiness of various tasks between examples within the same paradigm. If I know FP via Clojure, what could be appealing about Haskell or F#? For one, null safety is a lot easier when I've got option types.

Array Programming

Array programming languages like APL or J make certain classes of computation easier. For example, finding all of the indices where two arrays match. Here it is in Python:

x = [1, 4, 2, 3, 4, 1, 0, 0, 0, 4]
y = [2, 3, 1, 1, 2, 3, 2, 0, 2, 4]

>>> [i for i, (a, b) in enumerate(zip(x, y)) if a == b]
[7, 9]

And here it is in J:

x =: 1 4 2 3 4 1 0 0 0 4
y =: 2 3 1 1 2 3 2 0 2 4

I. x = y
7 9

Not every tool is meant for every programmer, because you might not have any of the problems a tool makes easier. What comes up more often for you: filtering a list or finding all the indices where two lists match? Statistically speaking, functional programming is more useful to you than array programming. But I have this problem enough to justify learning array programming.
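For what it's worth, NumPy brings the same array-at-a-time style to Python, so you can get a taste of the idea without learning J (a small sketch, assuming NumPy is installed):

# Python
import numpy as np

x = np.array([1, 4, 2, 3, 4, 1, 0, 0, 0, 4])
y = np.array([2, 3, 1, 1, 2, 3, 2, 0, 2, 4])
print(np.flatnonzero(x == y))   # [7 9]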
LLMs

I think a lot of the appeal of LLMs is they make a lot of specialist tasks easy for nonspecialists. One thing I recently did was convert some rst list tables to csv tables. Normally I'd have to write some tricky parsing and serialization code to automatically convert between the two. With LLMs, it's just:

Convert the following rst list-table into a csv-table: [table]

"Easy" can trump "correct" as a value. The LLM might get some translations wrong, but it's so convenient I'd rather manually review all the translations for errors than write a specialized script that is correct 100% of the time.

Let's not take this too far

A college friend once claimed that he cracked the secret of human behavior: humans do whatever makes them happiest. "What about the martyr who dies for their beliefs?" "Well, in their last second of life they get REALLY happy." We can do the same here, fitting every value proposition into the frame of "easy". CUDA makes it easier to do matrix multiplication. Rust makes it easier to write low-level code without memory bugs. TLA+ makes it easier to find errors in your design. Monads make it easier to sequence computations in a lazy environment. Making everything about "easy" obscures other reasons for adopting new things.

That whole "simple vs easy" thing

Sometimes people think that "simple" is better than "easy", because "simple" is objective and "easy" is subjective. This comes from the famous talk Simple Made Easy. I'm not sure I agree that simple is better or more objective: the speaker claims that polymorphism and typeclasses are "simpler" than conditionals, and I doubt everybody would agree with that. The problem is that "simple" is used to mean both "not complicated" and "not complex". And everybody agrees that "complicated" and "complex" are different, even if they can't agree on what the difference is. This idea should probably be expanded into its own newsletter.

It's also a lot harder to pitch a technology on being "simpler". Simplicity by itself doesn't make a tool better equipped to solve problems. Simplicity can unlock other benefits, like compositionality or tractability, that provide the actual value. And often that value is in the form of "makes some tasks easier".

The Juggler's Curse

I'm making a more focused effort to juggle this year. Mostly boxes, but also classic balls.[1] I've gotten to the point where I can almost consistently do a five-ball cascade, which I thought was the cutoff to being a "good juggler". "Thought" because I now know a "good juggler" is one who can do the five-ball cascade with outside throws. I know this because I can't do the outside five-ball cascade... yet. But it's something I can see myself eventually mastering, unlike the slightly more difficult trick of the five-ball mess, which is impossible for mere mortals like me.

In theory there is a spectrum of trick difficulties and skill levels, and I could place myself somewhere on that axis. In practice, there are three tiers:

1. Toddlers
2. Good jugglers who practice hard
3. Genetic freaks and actual wizards

And the graph always, always looks the same. This is the juggler's curse, and it's a three-parter:

1. The threshold between you and "good" is the next trick you cannot do.
2. Everything below that level is trivial. Once you've gotten a trick down, you can never go back to not knowing it, to appreciating how difficult it was to learn in the first place.[2]
3. Everything above that level is just "impossible". You don't have the knowledge needed to recognize the different tiers.[3]

So as you get better, the stuff that was impossible becomes differentiable, and you can see that some of it is possible. And everything you learned becomes trivial. So you're never a good juggler until you learn "just one more hard trick". The more you know, the more you know you don't know, and the less you know you know.

This is supposed to be a software newsletter

A monad is a monoid in the category of endofunctors, what's the problem? (src)

I think this applies to any difficult topic? Most fields don't have the same stark spectral lines as juggling, but there are still tiers of difficulty to techniques, which get compressed the further in either direction they are from your current level.

Like, I'm not good at formal methods. I've written two books on it, but I've never mastered a dependently-typed language or a theorem prover. Those are equally hard. And I'm not good at modeling concurrent systems because I don't understand the formal definition of bisimulation and haven't implemented a Raft. Those are also equally hard, in fact exactly as hard as mastering a theorem prover. At the same time, the skills I've already developed are easy: properly using refinement is exactly as easy as writing a wrapped counter. Then I get surprised when I try to explain strong fairness to someone and they just don't get how □◇(ENABLED〈A〉ᵥ) is obviously different from ◇□(ENABLED〈A〉ᵥ). Juggler's curse!

Now I don't know if this is actually how everybody experiences expertise or if it's just my particular personality (I was a juggler long before I was a software developer). Then again, I'd argue that lots of people talk about one consequence of the juggler's curse: imposter syndrome. If you constantly think what you know is "trivial" and what you don't know is "impossible", then yeah, you'd start feeling like an imposter at work real quick.

I wonder if part of the cause is that a lot of skills you have to learn are invisible. One of my favorite blog posts ever is In Defense of Blub Studies, which argues that software expertise comes through understanding "boring" topics like "what all of the error messages mean" and "how to use a debugger well". Blub is a critical part of expertise and takes a lot of hard work to learn, but it feels like trivia.
So looking back on a skill I mastered, I might think it was "easy" because I'm not including all of the blub that I had to learn, too. The takeaway, of course, is that the outside five-ball cascade is objectively the cutoff between good jugglers and toddlers.

[1] Rant time: I love cigar box juggling. It's fun, it's creative, it's totally unlike any other kind of juggling. And it's so niche I straight up cannot find anybody in Chicago to practice with. I once went to a juggling convention and was the only person with a cigar box set there.

[2] This particular part of the juggler's curse is also called the curse of knowledge or "expert blindness".

[3] This isn't Dunning-Kruger, because DK says that people think they are better than they actually are, and also may not actually be real.


More in programming

Notes on Improving Churn

Ask any B2C SaaS founder what metric they’d like to improve and most will say reducing churn. However, proactively reducing churn is a difficult task. I’ll outline the approach we’ve taken at Jenni AI to go from ~17% to 9% churn over the past year. We are still a work in progress but hopefully you’ll […]

Our switch to Kamal is complete

In a fit of frustration, I wrote the first version of Kamal in six weeks at the start of 2023. Our plan to get out of the cloud was getting bogged down in enterprisey pricing and Kubernetes complexity. And I refused to accept that running our own hardware had to be that expensive or that convoluted. So I got busy building a cheap and simple alternative.

Now, just two years later, Kamal is deploying every single application in our entire heritage fleet, and everything in active development. Finalizing a perfectly uniform mode of deployment for every web app we've built over the past two decades and still maintain.

See, we have this obsession at 37signals: That the modern build-boost-discard cycle of internet applications is a scourge. That users ought to be able to trust that when they adopt a system like Basecamp or HEY, they don't have to fear eviction from the next executive re-org. We call this obsession Until The End Of The Internet.

That obsession isn't free, but it's worth it. It means we're still operating the very first version of Basecamp for thousands of paying customers. That's the OG code base from 2003! Which hasn't seen any updates since 2010, beyond security patches, bug fixes, and performance improvements. But we're still operating it, and, along with every other app in our heritage collection, deploying it with Kamal.

That just makes me smile, knowing that we have customers who adopted Basecamp in 2004, and are still able to use the same system some twenty years later. In the meantime, we've relaunched and dramatically improved Basecamp many times since. But for customers happy with what they have, there's no forced migration to the latest version.

I very much had all of this in mind when designing Kamal. That's one of the reasons I really love Docker. It allows you to encapsulate an entire system, with all of its dependencies, and run it until the end of time. Kind of how modern gaming emulators can run the original ROM of Pac-Man or Pong to perfection and eternity. Kamal seeks to be but a simple wrapper and workflow around this wondrous simplicity. Complexity is but a bridge — and a fragile one at that. To build something durable, you have to make it simple.

Catching grace

Meditation is easy when you know what to do: absolutely nothing! It's hard at first, like trying to look at the back of your own head, but there's a knack to it.

The Power of Principles in Web Development Decision-Making (article)

Discover how The Epic Programming Principles can transform your web development decision-making, boost your career, and help you build better software.
