One of the most common questions I’m asked is, “is information architecture still relevant now that we have AI?” Of course, not everyone puts it like that. Instead, they’ll say things like “we won’t need navigation if we have chat” or “AI will organize the website” or “in a world with smart agents, we won’t need UI” or something like that. The gist is the same: Do we need structured information in a world with AI?

My unequivocal answer is yes. But given my pivot, you might think this is self-serving. So I’d better explain my reasoning.

What Information Does

Let’s start here: what is information? A surprisingly tricky question! “Information” is one of those squishy words we use without fully grasping its meaning. Over the years, I’ve narrowed it down to a pithy (and hopefully, practical) definition: Information is a means for skillful decision-making.

Imagine you reach a fork in the road. A sign points left to Buenas Peras (750 km) and right to Pelotillehue (650 km). If you want to go to Pelotillehue and trust the sign, you go right. The sign gives you what you need to make a choice. That’s information.

Note a few things:

- Information must be understandable. To make a choice, the actor must understand the options. You’ll know this firsthand if you’ve tried to drive in a country where you can’t read the local language.
- Informing happens in context. The sign only makes sense at that particular junction and is only useful to an actor trying to get to either destination. (And with the means to do so: a pedestrian is unlikely to care about a destination 650 km away.)
- The decider needn’t be human. Nothing about this definition says the choice must be made by a person: it could be an autonomous vehicle driving to Pelotillehue; the “sign” could be data in its software (sketched in code below). Whether human or artificial, the decider needs information.
- While information may be derived algorithmically, it’s not arbitrary. The distinction on the sign is relevant and understandable to particular drivers on particular journeys. The sign isn’t at some random point in the road, but at the fork.
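To make the third point concrete, here’s a minimal sketch of the fork as data a non-human decider might consume. It’s in Python, the names (SignOption, choose_direction) are hypothetical, and it illustrates the definition of information rather than any real navigation system:

```python
from dataclasses import dataclass

@dataclass
class SignOption:
    """One branch of the signpost: a destination, its distance, a direction."""
    destination: str
    distance_km: float
    direction: str  # e.g. "left" or "right"

def choose_direction(sign: list[SignOption], goal: str) -> str:
    """Pick a direction by matching the goal against the sign's options.

    The choice is only as skillful as the information allows: if the
    sign's labels don't match the decider's vocabulary for its goal,
    no choice can be made (the "understandable" requirement above).
    """
    for option in sign:
        if option.destination == goal:
            return option.direction
    raise ValueError(f"Sign offers no route to {goal!r}")

# The fork from the example, as a "sign" in the decider's software.
fork_sign = [
    SignOption("Buenas Peras", 750, "left"),
    SignOption("Pelotillehue", 650, "right"),
]

print(choose_direction(fork_sign, "Pelotillehue"))  # -> right
```

Note what the program can’t do: it can choose, but it can’t decide what belongs on the sign, or where the sign should stand.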
Someone (something?) must decide what should be shown for the right actor to choose at the right time and place. That’s the essence of information architecture. I’ll argue that current AIs can’t yet do this on their own. But before I do, let’s dive a bit deeper into the practice of IA and what it entails.

What Information Architecture Does

Few people have heard the phrase “information architecture” at all. Of those who have, many misunderstand what it does. At best, they think IA is about setting rigid top-down categorization schemes for information systems. At worst, they think it’s about drawing site maps. While these things are outcomes of information architecture, they’re not what IA is about.

I’ve boiled down IA to three basic activities: organize information, set the context, and plan for change. Let’s unpack them:

- Organize information. People only understand things relative to other things they already understand. We make distinctions between things (Buenas Peras/Pelotillehue) and group others (vehicles, cities, roads, signs). We grok concepts through clustering and contrasting.
- Set the context. We don’t experience information in a void. Choices only make sense in particular contexts. Moreover, sets of choices create particular contexts. IA isn’t just about organizing information, but defining contexts that influence how we perceive where we are and what’s on offer.
- Plan for change. The idea that IA traffics in “static” information structures is misguided. There are no static information structures – only structures that change at different paces. IA enables changes to happen without compromising intended meaning. Basically, this means governance.

Taken together, these activities differentiate IA from other disciplines. And although AI changes how they’re done (and experienced), it doesn’t make them obsolete. At least not yet.

What Artificial Intelligence Does (and Does Not)

For non-human systems to independently organize information, set the context, and plan for change, they’ll need capabilities current AI systems don’t provide. That’s not to say they never will. But I don’t see how current architectures get us there anytime soon.

Yes, AI vendors are promising that AGI (artificial general intelligence) is imminent. I’m skeptical. The more I work in the space, the more convinced I am that current architectures won’t scale to AGI. It only seems they might because LLMs are so effective at languaging. ChatGPT is vastly more capable, useful, and sophisticated than ELIZA. But at their core, both work by matching patterns in language rather than developing real-world understanding – a prerequisite for the skills needed to architect information: empathy, planning, goal-oriented behavior, learning and adapting, and improvising when needed. Not to mention embodiment, which is essential for true contextual understanding. AFAIK, current AI systems aren’t close to acquiring these abilities. (But there’s much I don’t know; I’m going by my real-world experience.)

This isn’t to say these systems aren’t useful. Far from it! I wouldn’t be betting my career on the technology if I didn’t believe these systems have incredible potential. But I also think expectations that they can effectively organize information, set contexts, and manage change in ways that are truly useful for humans (or even other AIs) are wildly optimistic, if not outright magical thinking. At least with the current technology – which, of course, might change.

Bottom Line: Play for Today

If AGI ever arrives in the way many imagine – fully autonomous systems that reason, plan, and adapt like humans – then yes, we might no longer need human information architects. But that’s an if, not a when. We’re certainly not there yet. So I’m keeping my mind open and learning as much as I can.

But I also understand that organizations want to deliver better products and experiences now. Current technologies can help. But they won’t do it on their own. For the foreseeable future, they’ll need guidance. That’s why I’m pivoting to architecting structures that allow organizations to use these systems more effectively; there’s a small sketch of what that can look like at the end of this post.

Taking a page from Pascal, you can think of it as “Jorge’s wager”: AGI may be imminent, but we may as well act as if it isn’t. When (if) it arrives, we’ll have bigger issues to deal with. For now, we have amazing systems that can take us a long way – with human architects guiding them.
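As a closing illustration of what “human architects guiding them” can mean in practice, here’s a minimal sketch, in Python, of a human-designed taxonomy constraining what an AI classifier may return. Everything here is hypothetical – the taxonomy, the labels, and ask_model, a stand-in for whatever model call you use – so read it as one narrow picture of the idea, not a prescription:

```python
# A human-defined structure: sections and topics someone decided
# are the relevant distinctions for this (hypothetical) site.
TAXONOMY = {
    "Plan": ["Retirement", "College savings"],
    "Borrow": ["Mortgages", "Personal loans"],
    "Learn": ["Guides", "Glossary"],
}
VALID_LABELS = {
    f"{section} > {topic}"
    for section, topics in TAXONOMY.items()
    for topic in topics
}

def classify(page_text: str, ask_model) -> str:
    """Ask a model to file a page, but accept only labels the
    human-designed taxonomy contains (organize + set the context)."""
    prompt = (
        "File this page under exactly one of these labels:\n"
        + "\n".join(sorted(VALID_LABELS))
        + "\n\nPage:\n" + page_text
    )
    answer = ask_model(prompt).strip()
    if answer not in VALID_LABELS:
        # Governance hook: route off-taxonomy answers to a human
        # instead of silently growing new categories (plan for change).
        raise ValueError(f"Off-taxonomy label: {answer!r}")
    return answer

# Usage with a canned stand-in for a real model call:
print(classify("A guide to saving for college", lambda prompt: "Plan > College savings"))
```

The model does the languaging; the structure it works within – and the rule for what happens when it strays – comes from a human architect.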