More from Jorge Arango
Nicolay Gerold interviewed me for his How AI is Built podcast. Our conversation focused on information architecture – with an interesting angle: Nicolay’s audience consists primarily of engineers developing AI products. What can these folks learn from IA to create better AI products? Conversely, what can IAs learn from engineers? And does information architecture matter at all in a world where these technologies exist? Tune in to find out on Spotify, Apple Podcasts, or YouTube.
In Episode 6 of the Traction Heroes podcast, Harry and I explored Chesterton’s fence – a simple yet profound idea with important implications for leaders navigating complex, high-stakes changes. The gist: when change is needed, don’t start by destroying what you don’t understand. Assume things are the way they are for reasons. Once you understand the reasons, you’re more likely to avoid unintended consequences when making changes.

Here’s the passage I read from Chesterton’s The Thing:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable. It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease.

Catastrophic outcomes happen for many reasons. One of the worst is what Harry called stupidity: “a result of a series of actions that lead to an outcome that’s the opposite of what you say you want, under conditions of self-deception.” Perhaps if more people knew about Chesterton’s fence, there would be less suffering caused by stupidity.

As always, I learned a lot from bouncing these ideas off Harry. Among other things, he suggested an intriguing follow-up book. Perhaps that will be the subject of a future episode. Stay tuned for more!
Louis Rosenfeld interviewed Harry Max and me for his Rosenfeld Review podcast. The subject? Harry’s and my podcast, Traction Heroes. We recorded this conversation late in 2024, before we’d shared the first episode, so this interview lays out Traction Heroes’ backstory. It’s fitting that it appeared on Lou’s show, since he published Harry’s book Managing Priorities and my books Living in Information and Duly Noted.

Listen on SoundCloud: The Rosenfeld Review Podcast (Rosenfeld Media) · Traction Heroes with Harry Max & Jorge Arango
For as long as we’ve had computers, they’ve produced predictable outputs. But AI – in the form of large language models – represents a new kind of unpredictable computing. The key to implementing useful AI solutions is making the most of both paradigms.

One of the oldest known computers is the Antikythera mechanism, an ancient device for calculating astronomical events. Given certain inputs, it computed positions based on logic hard-coded in its gears. Traditional software is kind of like that: it determines what to do based on pre-defined conditions. You give the computer input and get predictable outcomes. If a program produces unexpected results, it’s either because the programmer introduced randomness or because there are bugs. Both can be replicated by mirroring the exact conditions that led to the outcome. Because of this, traditional computation is deterministic.

Modern AI, such as large language models, represents a new computing paradigm. If you’ve used ChatGPT or Claude, you know you seldom get the same results given the same input. Unlike traditional programs, LLMs don’t follow explicit instructions. Instead, they generate responses by weighting probabilities across a vast network of linguistic relationships. Many different responses can be plausible for the same input. This is a new kind of probabilistic computing.

Much of what we value about computers is due to their predictability. That’s one reason why so many people find LLMs baffling or objectionable: probabilistic behavior breaks our mental models of how computers work.

Probabilistic computing is good for some tasks but not others. Brainstorming is a good use case, since you’re explicitly asking for divergent thinking. On the flip side, math requires deterministic approaches. LLMs can handle math by offloading computations to deterministic systems such as Wolfram Alpha.

Prompt engineering is an attempt to constrain probabilistic processing to make LLMs behave more predictably. But it only goes so far: you can’t force LLMs to behave like traditional programs. A better approach is building deterministic software that uses AI at particular junctures for specific tasks. An example is my approach to re-categorizing blog posts: a deterministic program iterates through files, offloading pattern matching to an LLM. The LLM is used only for what probabilistic systems do well – the inverse of the Wolfram Alpha approach. (A sketch of this pattern appears below.)

This new paradigm offers unprecedented opportunities. But taking advantage of probabilistic systems requires adding some determinism to the mix. You can’t ask ChatGPT to re-organize a website, but you can build scaffolding using traditional approaches that takes advantage of what each paradigm does best.

If you work with content, it behooves you to learn how to combine AI’s probabilistic approach with the traditional deterministic approach. That’s what I’ll be teaching in my hands-on workshop at the IA Conference in Philadelphia in late April. Join me there to learn how to do it.
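To make that concrete, here’s a minimal sketch of the pattern in Python. The deterministic script drives the iteration; the LLM is consulted only for the categorization judgment. The category list, file layout, and model are illustrative assumptions, not the specifics of my actual script.

```python
# Minimal sketch: a deterministic loop that offloads only the fuzzy
# pattern matching (categorization) to an LLM.
# Assumptions: posts live as Markdown files in a "posts" folder, the
# OpenAI Python SDK is installed, and OPENAI_API_KEY is set.
from pathlib import Path

from openai import OpenAI

CATEGORIES = ["architecture", "design", "technology", "books"]  # hypothetical

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def categorize(text: str) -> str:
    """Ask the LLM to pick the single best-fitting category for a post."""
    prompt = (
        "Classify the following blog post into exactly one of these "
        f"categories: {', '.join(CATEGORIES)}. "
        "Reply with the category name only.\n\n" + text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content.strip().lower()
    # Deterministic guardrail: reject anything outside the fixed list.
    return answer if answer in CATEGORIES else "uncategorized"


# The deterministic part: iterate over every post in a predictable order.
for post in sorted(Path("posts").glob("*.md")):
    print(f"{post.name}: {categorize(post.read_text())}")
```

The guardrail at the end is the point: the probabilistic system proposes an answer, and the deterministic code verifies it against a fixed list before accepting it.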