Sorry for missing the newsletter last week! I started writing on Monday as normal, and by Wednesday the piece (about the hierarchy of controls) was 2000 words and not close to done. So now it'll be a blog post sometime later this month. I also just released a new version of Logic for Programmers! 0.7 adds a bunch of new content (type invariants, modeling access policies, rewrites of the first chapters) but more importantly has new fonts that are more legible than the old ones. Go check it out!

For this week's newsletter I want to brainstorm an idea I've been noodling over for a while. Say we have a computational task, like running a simulation or searching a very large graph, and it's taking too long to complete on a computer. There are generally three things we can do to make it faster:

1. Buy a faster computer ("vertical scaling")
2. Modify the software to use the computer's resources better ("efficiency")
3. Modify the software to use multiple computers ("horizontal scaling")

(Splitting single-threaded software across multiple threads/processes is sort of a blend of (2) and (3).)

The big benefit of (1) is that we (usually) don't have to make any changes to the software to get a speedup. The downside is that for the past couple of decades computers haven't gotten much faster, except in ways that require recoding (like GPUs and multicore). This means we rely on (2) and (3), and we can do both to a point. I've noticed, though, that horizontal scaling seems to conflict with efficiency. Software optimized to scale well tends to be worse for the N=1 case than software optimized to, um, be optimized.

Are there reasons to expect this? It seems reasonable that design goals of software are generally in conflict, purely because exclusively optimizing for one property means making decisions that impede other properties. But is there something in the nature of "efficiency" and "horizontal scalability" that makes them especially disjoint? This isn't me trying to explain a fully coherent idea, more me trying to figure this all out for myself. Also, I'm probably getting some hardware stuff wrong.

Amdahl's Law

According to Amdahl's Law, the maximum speedup from parallelization is constrained by the proportion of the work that can be parallelized: with parallelizable fraction p and n processors, the speedup is 1 / ((1 - p) + p/n), which approaches 1 / (1 - p) no matter how many processors you add. If 80% of algorithm X is parallelizable, the maximum speedup from horizontal scaling is 5x. If algorithm Y is 25% parallelizable, the maximum speedup is only 1.3x. If you need horizontal scalability, you want to use algorithm X, even if Y is naturally 3x faster. But if Y was 4x faster, you'd prefer it to X. Maximal scalability means finding the optimal balance between baseline speed and parallelizability. Maximal efficiency means just optimizing baseline speed.

Coordination Overhead

Distributed algorithms require more coordination. To add a list of numbers in parallel via fork-join, we'd do something like this:

1. Split the list into N sublists
2. Fork a new thread/process for each sublist
3. Wait for each thread/process to finish
4. Add the sums together

(1), (2), and (3) all add overhead to the algorithm. At the very least, it's extra lines of code to execute, but it can also mean inter-process communication or network hops. Distribution also means you have fewer natural correctness guarantees, so you need more administrative overhead to avoid race conditions.
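To make the overhead visible, here's a minimal sketch of the fork-join sum in Python (my illustration, with multiprocessing chosen arbitrarily; a real system might use network workers instead):

```python
from multiprocessing import Pool

def fork_join_sum(xs, n=4):
    # (1) Split the list into n sublists -- pure coordination overhead
    size = (len(xs) + n - 1) // n
    sublists = [xs[i:i + size] for i in range(0, len(xs), size)]
    # (2) Fork a worker per sublist and (3) wait for the results.
    # More overhead, including pickling each sublist to a child process.
    with Pool(n) as pool:
        partial_sums = pool.map(sum, sublists)
    # (4) Add the sums together
    return sum(partial_sums)

if __name__ == "__main__":
    print(fork_join_sum(list(range(1_000_000))))  # 499999500000
```

Only step (4) plus the inner `sum` calls are "real work"; everything else is coordination, and for a list this size the serialization alone may well cost more than just summing the list in one thread.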
Real world example: historically, CPython has had a "global interpreter lock" (GIL). In multithreaded code, only one thread could execute Python code at a time (others could execute C code). The newest version supports disabling the GIL, which comes at a 40% overhead for single-threaded programs. Supposedly the difference is because the specializing adaptive interpreter optimization isn't thread-safe yet. The Python team is hoping to get it down to "only" 10%.

Scaling loses shared resources

I'd say that intra-machine scaling (multiple threads/processes) feels qualitatively different than inter-machine scaling. Part of that is that intra-machine scaling is "capped" while inter-machine is not. But there's also a difference in what assumptions you can make about shared resources. Starting from the baseline of a single-threaded program:

- Threads have a much harder time sharing CPU caches (you have to manually mess with affinities)
- Processes have a much harder time sharing RAM (I think you have to use mmap?)
- Machines can't share cache, RAM, or disk, period.

It's a lot easier to solve a problem when the whole thing fits in RAM. But if you split a 50 GB problem across three machines, it doesn't fit in RAM by default, even if the machines have 64 GB each. Scaling also means that separate machines can't reuse resources like database connections.

Efficiency comes from limits

I think the two previous points tie together in the idea that maximal efficiency comes from being able to make assumptions about the system. If we know the exact sequence of computations, we can aim to minimize cache misses. If we don't have to worry about thread-safety, tracking references is dramatically simpler. If we have all of the data in a single database, our query planner has more room to work with. At various tiers of scaling these assumptions are no longer guaranteed and we lose the corresponding optimizations.

Sometimes these assumptions are implicit and crop up in odd places. Like if you're working at a scale where you need multiple synced databases, you might want to use UUIDs instead of numbers for keys. But then you lose the assumption "recently inserted rows are close together in the index", which I've read can lead to significant slowdowns.

This suggests that if you can find a limit somewhere else, you can get both high horizontal scaling and high efficiency. Supposedly the TigerBeetle database has both, but that could be because they limit all records to accounts and transfers. This means every record fits in exactly 128 bytes.

Does this mean that "assumptions" could be both "assumptions about the computing environment" and "assumptions about the problem"? In the famous essay Scalability! But at what COST, Frank McSherry shows that his single-threaded laptop could outperform 128-node "big data systems" on PageRank and graph connectivity (via label propagation). Afterwards, he discusses how a different algorithm solves graph connectivity even faster:

[Union find] is more lines of code than label propagation, but it is 10x faster and 100x less embarrassing. … The union-find algorithm is fundamentally incompatible with the graph computation approaches Giraph, GraphLab, and GraphX put forward (the so-called "think like a vertex" model).

The interesting thing to me is that his alternative makes more "assumptions" than what he's comparing to. He can "assume" a fixed goal and optimize the code for that goal. The "big data systems" are trying to be general-purpose compute platforms and have to pick a model that supports the widest range of possible problems.
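McSherry's implementation is his own; as a rough illustration of why the algorithm is so cheap single-threaded, here's a minimal union-find sketch in Python (mine, not his code):

```python
def count_components(n, edges):
    """Count connected components with union-find (path halving).

    Fast precisely because it freely mutates one shared parent array --
    an assumption a "think like a vertex" system can't make.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv  # merge the two components

    return sum(1 for x in range(n) if find(x) == x)

print(count_components(5, [(0, 1), (1, 2), (3, 4)]))  # 2
```

The whole data structure is one array indexed by vertex id, with no message passing and no per-superstep barriers; that's the kind of design the vertex-centric model rules out.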
A few years back I wrote clever vs insightful code. I think what I'm trying to say here is that efficiency comes from having insight into your problem and environment. (Last thought to shove in here: to exploit assumptions, you need control. Carefully arranging your data to fit in L1 doesn't matter if your programming language doesn't let you control where things are stored!)

Is there a cultural aspect?

Maybe there's also a cultural element to this conflict. What if the engineers interested in "efficiency" are different from the engineers interested in "horizontal scaling"? At my first job the data scientists set up a Hadoop cluster for their relatively small dataset, only a few dozen gigabytes or so. One of the senior software engineers saw this and said "big data is stupid." To prove it, he took one of their example queries, wrote a script in Go to compute the same thing, and optimized it to run faster on his machine. At the time I was like "yeah, you're right, big data IS stupid!" But I think now that we both missed something obvious: with the "scalable" solution, the data scientists didn't have to write an optimized script for every single query. Optimizing code is hard, adding more machines is easy!

The highest tier of horizontal scaling is usually something large businesses want, and large businesses like problems that can be solved purely with money. Maximizing efficiency requires a lot of knowledge-intensive human labour, so it is less appealing as an investment. Then again, I've seen a lot of work on making the scalable systems more efficient, such as evenly balancing heterogeneous workloads. Maybe in the largest systems intra-machine efficiency is just too small-scale a problem.

I'm not sure where this fits in, but scaling a volume of tasks conflicts less than scaling individual tasks

If you have 1,000 machines and need to crunch one big graph, you probably want the most scalable algorithm. If you instead have 50,000 small graphs, you probably want the most efficient algorithm, which you then run on all 1,000 machines. When we call a problem embarrassingly parallel, we usually mean it's easy to horizontally scale. But it's also one that's easy to make more efficient, because local optimizations don't affect the scaling!

Okay, that's enough brainstorming for one week.

Blog Rec

Whenever I think about optimization as a skill, the first article that comes to mind is matklad's Push Ifs Up And Fors Down. I'd never have considered on my own that inlining loops into functions could be such a huge performance win. The blog has a lot of other posts on the nuts-and-bolts of systems languages, optimization, and concurrency.
I occasionally receive emails asking me to look at the writer's new language/library/tool. Sometimes it's in an area I know well, like formal methods. Other times, I'm a complete stranger to the field. Regardless, I'm generally happy to check it out. When starting out, this is the biggest question I'm looking to answer: What does this technology make easy that's normally hard? What justifies me learning and migrating to a new thing as opposed to fighting through my problems with the tools I already know? The new thing has to have some sort of value proposition, which could be something like "better performance" or "more secure". The most universal value, and the most direct to show, is "takes less time and mental effort to do something". I can't accurately judge two benchmarks, but I can see two demos or code samples and compare which one feels easier to me.

Examples

Functional programming

What drew me originally to functional programming was higher-order functions.

```python
# Without HOFs
out = []
for x in input:
    if test(x):
        out.append(x)

# With HOFs
filter(test, input)
```

We can also compare the easiness of various tasks between examples within the same paradigm. If I know FP via Clojure, what could be appealing about Haskell or F#? For one, null safety is a lot easier when I've got option types.

Array Programming

Array programming languages like APL or J make certain classes of computation easier. For example, finding all of the indices where two arrays match. Here it is in Python:

```python
x = [1, 4, 2, 3, 4, 1, 0, 0, 0, 4]
y = [2, 3, 1, 1, 2, 3, 2, 0, 2, 4]

>>> [i for i, (a, b) in enumerate(zip(x, y)) if a == b]
[7, 9]
```

And here it is in J:

```
x =: 1 4 2 3 4 1 0 0 0 4
y =: 2 3 1 1 2 3 2 0 2 4
I. x = y
7 9
```

Not every tool is meant for every programmer, because you might not have any of the problems a tool makes easier. What comes up more often for you: filtering a list or finding all the indices where two lists match? Statistically speaking, functional programming is more useful to you than array programming. But I have this problem enough to justify learning array programming.

LLMs

I think a lot of the appeal of LLMs is that they make a lot of specialist tasks easy for nonspecialists. One thing I recently did was convert some rst list tables to csv tables. Normally I'd have to write some tricky parsing and serialization code to automatically convert between the two. With LLMs, it's just

Convert the following rst list-table into a csv-table: [table]

"Easy" can trump "correct" as a value. The LLM might get some translations wrong, but it's so convenient I'd rather manually review all the translations for errors than write a specialized script that is correct 100% of the time.

Let's not take this too far

A college friend once claimed that he cracked the secret of human behavior: humans do whatever makes them happiest. "What about the martyr who dies for their beliefs?" "Well, in their last second of life they get REALLY happy." We can do the same here, fitting every value proposition into the frame of "easy". CUDA makes it easier to do matrix multiplication. Rust makes it easier to write low-level code without memory bugs. TLA+ makes it easier to find errors in your design. Monads make it easier to sequence computations in a lazy environment. Making everything about "easy" obscures other reasons for adopting new things.
That whole "simple vs easy" thing

Sometimes people think that "simple" is better than "easy", because "simple" is objective and "easy" is subjective. This comes from the famous talk Simple Made Easy. I'm not sure I agree that simple is better or more objective: the speaker claims that polymorphism and typeclasses are "simpler" than conditionals, and I doubt everybody would agree with that. The problem is that "simple" is used to mean both "not complicated" and "not complex". And everybody agrees that "complicated" and "complex" are different, even if they can't agree on what the difference is. This idea should probably be expanded into its own newsletter.

It's also a lot harder to pitch a technology on being "simpler". Simplicity by itself doesn't make a tool better equipped to solve problems. Simplicity can unlock other benefits, like compositionality or tractability, that provide the actual value. And often that value is in the form of "makes some tasks easier".
I'm making a more focused effort to juggle this year. Mostly boxes, but classic balls too.¹ I've gotten to the point where I can almost consistently do a five-ball cascade, which I thought was the cutoff to being a "good juggler". "Thought" because I now know a "good juggler" is one who can do the five-ball cascade with outside throws. I know this because I can't do the outside five-ball cascade... yet. But it's something I can see myself eventually mastering, unlike the slightly more difficult trick of the five-ball mess, which is impossible for mere mortals like me.

In theory there is a spectrum of trick difficulties and skill levels. I could place myself on the axis like this:

In practice, there are three tiers:

1. Toddlers
2. Good jugglers who practice hard
3. Genetic freaks and actual wizards

And the graph always, always looks like this:

This is the juggler's curse, and it's a three-parter:

1. The threshold between you and "good" is the next trick you cannot do.
2. Everything below that level is trivial. Once you've gotten a trick down, you can never go back to not knowing it, to appreciating how difficult it was to learn in the first place.²
3. Everything above that level is just "impossible". You don't have the knowledge needed to recognize the different tiers.³

So as you get better, the stuff that was impossible becomes differentiable, and you can see that some of it is possible. And everything you learned becomes trivial. So you're never a good juggler until you learn "just one more hard trick". The more you know, the more you know you don't know and the less you know you know.

This is supposed to be a software newsletter

A monad is a monoid in the category of endofunctors, what's the problem? (src)

I think this applies to any difficult topic? Most fields don't have the same stark spectral lines as juggling, but there's still tiers of difficulty to techniques, which get compressed the further in either direction they are from your current level. Like, I'm not good at formal methods. I've written two books on it but I've never mastered a dependently-typed language or a theorem prover. Those are equally hard. And I'm not good at modeling concurrent systems because I don't understand the formal definition of bisimulation and haven't implemented a Raft. Those are also equally hard, in fact exactly as hard as mastering a theorem prover. At the same time, the skills I've already developed are easy: properly using refinement is exactly as easy as writing a wrapped counter. Then I get surprised when I try to explain strong fairness to someone and they just don't get how □◇(ENABLED 〈A〉ᵥ) is obviously different from ◇□(ENABLED 〈A〉ᵥ). Juggler's curse!

Now I don't know if this is actually how everybody experiences expertise or if it's just my particular personality— I was a juggler long before I was a software developer. Then again, I'd argue that lots of people talk about one consequence of the juggler's curse: imposter syndrome. If you constantly think what you know is "trivial" and what you don't know is "impossible", then yeah, you'd start feeling like an imposter at work real quick. I wonder if part of the cause is that a lot of skills you have to learn are invisible. One of my favorite blog posts ever is In Defense of Blub Studies, which argues that software expertise comes through understanding "boring" topics like "what all of the error messages mean" and "how to use a debugger well". Blub is a critical part of expertise and takes a lot of hard work to learn, but it feels like trivia.
So looking back on a skill I mastered, I might think it was "easy" because I'm not including all of the blub that I had to learn, too. The takeaway, of course, is that the outside five-ball cascade is objectively the cutoff between good jugglers and toddlers.

¹ Rant time: I love cigar box juggling. It's fun, it's creative, it's totally unlike any other kind of juggling. And it's so niche I straight up cannot find anybody in Chicago to practice with. I once went to a juggling convention and was the only person with a cigar box set there. ↩

² This particular part of the juggler's curse is also called the curse of knowledge or "expert blindness". ↩

³ This isn't Dunning-Kruger, because DK says that people think they are better than they actually are, and also may not actually be real. ↩
First of all, I just released version 0.6 of Logic for Programmers! You can get it here. Release notes in the footnote.¹

I've been thinking about my next project after the book's done. One idea is to do a survey of new formal specification languages. There's been a lot of new ones in the past few years (P, Quint, etc), plus some old ones I haven't critically examined (SPIN, mCRL2). I'm thinking of a brief overview of each, what's interesting about it, and some examples of the corresponding models. For this I'd want a set of "Rosetta" examples. Rosetta Code is a collection of programming tasks done in different languages. For example, "99 bottles of beer on the wall" in over 300 languages. If I wanted to make a Rosetta Code for specifications of concurrent systems, what examples would I use?

What makes a good Rosetta example?

A good Rosetta example would be simple enough to understand and implement but also showcase the differences between the languages. A good example of a Rosetta example is leftpad for code verification. Proving leftpad correct is short in whatever verification language you use. But the proofs themselves are different enough that you can compare what it's like to use code contracts vs dependent types, etc. A bad Rosetta example is "hello world". While it's good for showing how to run a language, it doesn't clearly differentiate languages. Haskell's "hello world" is almost identical to BASIC's "hello world".

Rosetta examples don't have to be flashy, but I want mine to be flashy. Formal specification is niche enough that regardless of my medium, most of my audience hasn't used it and may be skeptical. I always have to be selling. This biases me away from using things like dining philosophers or two-phase commit. So with that in mind, three ideas:

1. Wrapped Counter

A counter that starts at 1 and counts to N, after which it wraps around to 1 again.

Why it's good

This is a good introductory formal specification: it's a minimal possible stateful system without concurrency or nondeterminism. You can use it to talk about the basic structure of a spec, how a verifier works, etc. It's also a good way of introducing "boring" semantics, like conditionals and arithmetic, and checking if the language does anything unusual with them. Alloy, for example, defaults to 4-bit signed integers, so you run into problems if you set N too high.²

At the same time, wrapped counters are a common building block of complex systems. Lots of things can be represented this way: N=2 is a flag or blinker, N=3 is a traffic light, N=24 is a clock, etc. The next example is better for showing basic safety and liveness properties, but this will do in a pinch.

2. Threads

A counter starts at 0. N threads each simultaneously try to update the counter. They do this nonatomically: first they read the value of the counter and store that in a thread-local tmp, then they increment tmp, then they set the counter to tmp. The expected behavior is that the final value of the counter will be N.

Why it's good

The system as described is bugged. If two threads interleave the setlocal commands, one thread's update can "clobber" the other and the counter can go backwards. To my surprise, most people do not see this error. So it's a good showcase of how the language actually finds real bugs, and how it can verify fixes. As to actual language topics: the spec covers concurrency and tracks process-local state. A good spec language should make it possible to adjust N without having to add any new variables. And it "naturally" introduces safety, liveness, and action properties. Finally, the thread spec is endlessly adaptable. I've used variations of it to teach refinement, resource starvation, fairness, livelocks, and hyperproperties. Tweak it a bit and you get dining philosophers.
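For reference, here's the buggy protocol as a quick Python sketch (my illustration, not a spec):

```python
import threading

N = 100
counter = 0

def worker():
    global counter
    tmp = counter   # read the shared counter
    tmp += 1        # increment the thread-local copy
    counter = tmp   # write back -- can clobber a concurrent update

threads = [threading.Thread(target=worker) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected N, but bad interleavings can make it less
```

Under CPython's GIL the bad interleaving is rare in practice, which is exactly why a model checker that explores every interleaving beats testing here.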
And it "naturally" introduces safety, liveness, and action properties. Finally, the thread spec is endlessly adaptable. I've used variations of it to teach refinement, resource starvation, fairness, livelocks, and hyperproperties. Tweak it a bit and you get dining philosophers. 3. Bounded buffer We have a bounded buffer with maximum length X. We have R reader and W writer processes. Before writing, writers first check if the buffer is full. If full, the writer goes to sleep. Otherwise, the writer wakes up a random sleeping process, then pushes an arbitrary value. Readers work the same way, except they pop from the buffer (and go to sleep if the buffer is empty). The only way for a sleeping process to wake up is if another process successfully performs a read or write. Why it's good This shows process-local nondeterminism (in choosing which sleeping process to wake up), different behavior for different types of processes, and deadlocks: it's possible for every reader and writer to be asleep at the same time. The beautiful thing about this example: the spec can only deadlock if X . This is the kind of bug you'd struggle to debug in real code. An in fact, people did struggle: even when presented with a minimal code sample and told there was a bug, many testing experts couldn't find it. Whereas a formal model of the same code finds the bug in seconds. If a spec language can model the bounded buffer, then it's good enough for production systems. On top of that, the bug happens regardless of what writers actually put in the buffer, so you can abstract that all away. This example can demonstrate that you can leave implementation details out of a spec and still find critical errors. Caveat This is all with a heavy TLA+ bias. I've modeled all of these systems in TLA+ and it works pretty well for them. That is to say, none of these do things TLA+ is bad at: reachability, subtyping, transitive closures, unbound spaces, etc. I imagine that as I cover more specification languages I'll find new Rosettas. Exercises are more compact, answers now show name of exercise in title "Conditionals" chapter has new section on nested conditionals "Crash course" chapter significantly rewritten Starting migrating to use consistently use == for equality and = for definition. Not everything is migrated yet "Beyond Logic" appendix does a slightly better job of covering HOL and constructive logic Addressed various reader feedback Two new exercises ↩ You can change the int size in a model run, so this is more "surprising footgun and inconvenience" than "fundamental limit of the specification language." Something still good to know! ↩
As well as changing the way I organise my writing, last year I made some cosmetic improvements to this site. I design everything on this site myself, and I write the CSS by hand – I don't use any third-party styles or frameworks. I don't have any design training, and I don't do design professionally, so I use this site as a place to learn and practice my design skills. It's a continual work-in-progress, but I'd like to think it's getting better over time.

I design this site for readers. I write long, text-heavy posts with the occasional illustration or diagram, so I want something that will be comfortable to read and look good on a wide variety of browsers and devices. I get a lot of that "for free" by using semantic HTML and the default styles – most of my CSS is just cosmetic.

Let's go through some of the changes.

Cleaning up the link styles

This is what links used to look like:

Every page has a tint colour, and then I was deriving different shades to style different links – a darker shade for visited links, a lighter shade for visited links in dark mode, and a background that appears on hover. I'm generating these new colours programmatically, and I was so proud of getting that code working that I didn't stop to think whether it was a good idea.

In hindsight, I see several issues. The tint colour is meant to give the page a consistent visual appearance, but the different shades diluted that effect. I don't think their meaning was especially obvious. How many readers ever worked it out? And the hover styles are actively unhelpful – just as you hover over a link you're interested in, I'm making it harder to read! (At least in light mode – in dark mode, the hover style is barely legible.)

One thing I noticed is that for certain tint colours, the "visited" colour I generated was barely distinguishable from the text colour. So I decided to lean into that in the new link styles: visited links are now the same colour as regular text. This new set of styles feels more coherent. I'm only using one shade of the tint colour, and I think the meaning is a bit clearer – only new-to-you links will get the pop of colour to stand out from the rest of the text. I'm happy to rely on underlines for the links you've already visited. And when you hover, the thick underline means you can see where you are, but the link text remains readable.

Swapping out the font

I swapped out the font, replacing Georgia with Charter. The difference is subtle, so I'd be surprised if anyone noticed:

I've always used web safe fonts for this site – the fonts that are built into web browsers, and don't need to be downloaded first. I've played with custom fonts from time to time, but there's no font I like enough to justify the hassle of loading a custom font. I still like Georgia, but I felt it was showing its age – it was designed in 1993 to look good on low-resolution screens, but looks a little chunky on modern displays. I think Charter looks nicer on high-resolution screens, but if you don't have it installed then I fall back to Georgia.

Making all the roundrects consistent

I use a lot of rounded rectangles for components on this site, including article cards, blockquotes, and code blocks. For a long time they had similar but not identical styles, because I designed them all at different times. There were weird inconsistencies. For example, why does one roundrect have a 2px border, but another one is 3px? These are small details that nobody will ever notice directly, but they undermine the sense of visual together-ness.
I've done a complete overhaul of these styles, to make everything look more consistent. I'm leaning heavily on CSS variables, a relatively new CSS feature that I've really come to like. Variables make it much easier to use consistent values in different rules.

I also tweaked the appearance: I've removed another two shades of the tint colour. (Yes, those shades were different from the ones used in links.) Colour draws your attention, so I'm trying to use it more carefully. A link says "click here". A heading says "start here". What does a blockquote or code snippet say? It's just part of the text, so it shouldn't be grabbing your attention. I think the neutral background also makes the syntax highlighting easier to read, because the tint colour isn't clashing with the code colours. I could probably consolidate the shades of grey I'm using, but that's a task for another day.

I also removed the left indent on blockquotes and code blocks – I think it looks nicer to have a flush left edge for everything, and it means you can read more text on mobile screens. (That's where I really felt the issues with the old design.)

What's next?

By tidying up the design and reducing the number of unique elements, I've got a bit of room to add something new. For a while now I've wanted a place at the bottom of posts for common actions, or links to related and follow-up posts. As I do more and more long-form, reflective writing, I want to be able to say "if you liked this, you should read this too". I want something that catches your eye, but doesn't distract from the article you're already reading. Louie Mantia has a version of this that I quite like:

I've held off designing this because the existing pages felt too busy, but now I feel like I have space to add this – there aren't as many clashing colours and components to compete for your attention. I'm still sketching out designs – my current idea is my rounded rectangle blocks, but with a coloured border instead of a subtle grey. When I did a prototype, I felt like it was missing something. I need to try a few more ideas. Watch this space!
One of the biggest mistakes that new startup founders make is trying to get away from the customer-facing roles too early. Whether it's customer support or sales, it's an incredible advantage to have the founders doing that work directly, and for much longer than they find comfortable. The absolute worst thing you can do is hire a sales person or a customer service agent too early. You'll miss all the golden nuggets that customers throw at you for free when they're rejecting your pitch or complaining about the product. Seeing these reasons paraphrased or summarized destroys all the nutrients in their insights. You want that whole-grain feedback straight from the customer's mouth!

When we launched Basecamp in 2004, Jason was doing all the customer service himself. And he kept doing it like that for three years!! By the time we hired our first customer service agent, Jason was doing 150 emails/day. The business was doing millions of dollars in ARR. And Basecamp got infinitely better, both as a market proposition and as a product, because Jason could funnel all that feedback into decisions and positioning.

For a long time after that, we did "Everyone on Support". Frequently rotating programmers, designers, and founders through a day of answering emails directly to customers. The dividends of doing this were almost as high as having Jason run it all in the early years. We fixed an incredible number of minor niggles and annoying bugs because programmers found it easier to solve the problem than to apologize for why it was there.

It's not easy doing this! Customers often offer their valuable insights wrapped in rude language, unreasonable demands, and bad suggestions. That's why many founders quit the business of dealing with them at the first opportunity. That's why few companies ever do "Everyone On Support". That's why there's such eagerness to reduce support to an AI-only interaction.

But quitting dealing with customers early, not just in support but also in sales, is an incredible handicap for any startup. You don't have to do everything that every customer demands of you, but you should certainly listen to them. And you can't listen well if the sound is being muffled by early layers of indirection.
I'm sitting in a small coffee shop in Brooklyn. I have a warm drink, and it's just started to snow outside. I'm visiting New York to see Operation Mincemeat on Broadway – I was at the dress rehearsal yesterday, and I'll be at the opening preview tonight. I've seen this show more times than I care to count, and I hope US theater-goers love it as much as Brits.

The people who make the show will tell you that it's about a bunch of misfits who thought they could do something ridiculous, who had the audacity to believe in something unlikely. That's certainly one way to see it. The musical tells the true story of a group of British spies who tried to fool Hitler with a dead body, fake papers, and an outrageous plan that could easily have failed. Decades later, the show's creators would mirror that same spirit of unlikely ambition. Four friends, armed with their creativity, determination, and a wardrobe full of hats, created a new musical in a small London theatre. And after a series of transfers, they're about to open the show under the bright lights of Broadway.

But when I watch the show, I see a story about friendship. It's about how we need our friends to help us, to inspire us, to push us to be the best versions of ourselves. I see the swaggering leader who needs a team to help him truly achieve. The nervous scientist who stands up for himself with the support of his friends. The enthusiastic secretary who learns wisdom and resilience from her elder. And so, I suppose, it's fitting that I'm not in New York on my own. I'm here with friends – dozens of wonderful people who I met through this ridiculous show.

At first, I was just an audience member. I sat in my seat, I watched the show, and I laughed and cried with equal measure. After the show, I waited at stage door to thank the cast. Then I came to see the show a second time. And a third. And a fourth. After a few trips, I started to see familiar faces waiting with me at stage door. So before the cast came out, we started chatting. Those conversations became a Twitter community, then a Discord, then a WhatsApp. We swapped fan art, merch, and stories of our favourite moments. We went to other shows together, and we hung out outside the theatre. I spent New Year's Eve with a few of these friends, sitting on somebody's floor and laughing about a bowl of limes like it was the funniest thing in the world.

And now we're together in New York. Meeting this kind, funny, and creative group of people might seem as unlikely as the premise of Mincemeat itself. But I believed it was possible, and here we are. I feel so lucky to have met these people, to take this ridiculous trip, to share these precious days with them. I know what a privilege this is – the time, the money, the ability to say let's do this and make it happen. How many people can gather a dozen friends for even a single evening, let alone a trip halfway round the world?

You might think it's silly to travel this far for a theatre show, especially one we've seen plenty of times in London. Some people would never see the same show twice, and most of us are comfortably into double or triple-figures. Whenever somebody asks why, I don't have a good answer. Because it's fun? Because it's moving? Because I enjoy it? I feel the need to justify it, as if there's some logical reason that will make all of this okay. But maybe I don't have to. Maybe joy doesn't need justification.

A theatre show doesn't happen without people who care. Neither does a friendship.
So much of our culture tells us that it's not cool to care. It's better to be detached, dismissive, disinterested. Enthusiasm is cringe. Sincerity is weakness. I've certainly felt that pressure – the urge to play it cool, to pretend I'm above it all. To act as if I only enjoy something a "normal" amount. Well, fuck that.

I don't know where the drive to be detached comes from. Maybe it's to protect ourselves, a way to guard against disappointment. Maybe it's to seem sophisticated, as if having passions makes us childish or less mature. Or perhaps it's about control – if we stay detached, we never have to depend on others, we never have to trust in something bigger than ourselves. Being detached means you can't get hurt – but you'll also miss out on so much joy.

I'm a big fan of being a big fan of things. So many of the best things in my life have come from caring, from letting myself be involved, from finding people who are a big fan of the same things as me. If I pretended not to care, I wouldn't have any of that. Caring – deeply, foolishly, vulnerably – is how I connect with people. My friends and I care about this show, we care about each other, and we care about our joy. That care and love for each other is what brought us together, and without it we wouldn't be here in this city.

I know this is a once-in-a-lifetime trip. So many stars had to align – for us to meet, for the show we love to be successful, for us to be able to travel together. But if we didn't care, none of those stars would have aligned. I know so many other friends who would have loved to be here but can't be, for all kinds of reasons. Their absence isn't for lack of caring, and they want the show to do well whether or not they're here. I know they care, and that's the important thing.

To butcher Tennyson: I think it's better to care about something you cannot affect, than to care about nothing at all. In a world that's full of cynicism and spite and hatred, I feel that now more than ever.

I'd recommend you go to the show if you haven't already, but that's not really the point of this post. Maybe you've already seen Operation Mincemeat, and it wasn't for you. Maybe you're not a theatre kid. Maybe you aren't into musicals, or history, or war stories. That's okay. I don't mind if you care about different things to me. (Imagine how boring the world would be if we all cared about the same things!) But I want you to care about something. I want you to find it, find people who care about it too, and hold on to them.

Because right now, in this city, with these people, at this show? I'm so glad I did. And I hope you find that sort of happiness too.

Some of the people who made this trip special. Photo by Chloe, and taken from her Twitter.

Timing note: I wrote this on February 15th, but I delayed posting it because I didn't want to highlight the fact I was away from home.
Humanity's Last Exam by Center for AI Safety (CAIS) and Scale AI
Most of our cultural virtues, celebrated heroes, and catchy slogans align with the idea of "never give up". That's a good default! Most people are inclined to give up too easily, as soon as the going gets hard. But it's also worth remembering that sometimes you really should fold, admit defeat, and accept that your plan didn't work out.

But how to distinguish between a bad plan and insufficient effort? It's not easy. Plenty of plans look foolish at first glance, especially to people without skin in the game. That's the essence of a disruptive startup: the idea ought to look a bit daft at first glance or it probably doesn't carry the counter-intuitive kernel needed to really pop. Yet it's also obviously true that not every daft idea holds the potential to be a disruptive startup. That's why even the best venture capital investors in the world are wrong far more than they're right. Not because they aren't smart, but because nobody is smart enough to predict (the disruption of) the future consistently. The best they can do is make long bets, and then hope enough of them pay off to fund the ones that don't.

So far, so logical, so conventional. A million words have been written by a million VCs about how their shrewd eyes let them see those hidden disruptive kernels before anyone else could. Good for them. What I'm more interested in knowing more about is how and when you pivot from a promising bet to folding your hand. When do you accept that no amount of additional effort is going to get that turkey to soar?

I'm asking because I don't have any great heuristics here, and I'd really like to know! Because the ability to fold your hand, and live to play your remaining chips another day, isn't just about startups. It's also about individual projects. It's about work methods. Hell, it's even about politics and societies at large.

I'll give you just one small example. In 2017, Rails 5.1 shipped with new tooling for doing end-to-end system tests, using a headless browser to validate the functionality, as a user would in their own browser. Since then, we've spent an enormous amount of time and effort trying to make this approach work. Far too much time, if you ask me now. This year, we finalized our decision to fold, and to give up on using these types of system tests on the scale we had previously thought made sense. In fact, just last week, we deleted 5,000 lines of code from the Basecamp code base by dropping literally all the system tests that we had carried so diligently for all these years.

I really like this example, because it draws parallels to investing and entrepreneurship so well. The problem with our approach to system tests wasn't that it didn't work at all. If that had been the case, bailing on the approach would have been a no-brainer long ago. The trouble was that it sorta-kinda did work! Some of the time. With great effort. But ultimately it wasn't worth the squeeze.

I've seen this trap snap on startups time and again. The idea finds some traction. Enough for the founders to muddle through for years and years. Stuck with an idea that sorta-kinda does work, but not well enough to be worth a decade of their life. That's a tragic trap.

The only antidote I've found to this on the development side is time boxing. Programmers are just as liable as anyone to believe a flawed design can work if given just a bit more time. And then a bit more. And then just double of what we've already spent. The time box provides a hard stop. In Shape Up, it's six weeks. Do or die. Ship or don't. That works.
But what's the right amount of time to give a startup or a methodology or a societal policy? There's obviously no universal answer, but I'd argue that whatever the answer, it's "less than you think, less than you want". Having the grit to stick with the effort when the going gets hard is a key trait of successful people. But having the humility to give up on good bets turned bad might be just as important.