More from the singularity is nearer
Intel is sitting on a huge amount of card inventory they can't move, largely because of bad software. Most of this is a summary of the public #intel-hardware channel in the tinygrad discord. Intel currently is sitting on:

- 15,000 Gaudi 2 cards (with baseboards)
- 5,100 Intel Data Center GPU Max 1450s (without baseboards)

If you were Intel, what would you do with them?

First, starting with the Gaudi cards. The open source repo needed to control them was archived on Feb 4, 2025. There's a closed source version of this that's maybe still maintained, but eww, closed source, and do you think it's really maintained? The architecture is kind of tragic, and that's likely why they didn't open source it. Unlike every other accelerator I have seen, the MMEs, which is where all the FLOPS are, are not controllable by the TPCs. While the TPCs have an LLVM port, the MME is not documented. After some poking around, I found the spec. It's highly fixed function and looks very similar to the Apple ANE.

But that's not even the real problem with it. The problem is that it is controlled by queues, not by the TPCs. Unpacking habanalabs-dkms-1.19.2-32.all.deb you can find the queues. There is some way to push a command stream to the device so you don't actually have to deal with the host itself for the queues. But that doesn't prevent you having to decompose the network you are trying to run into something you can put on this fixed function block. Programmability is on a spectrum, ranging from CPUs being the easiest, to GPUs, to things like the Qualcomm DSP / Google TPU (where at least you drive the MME from the program), to this and the Apple ANE being the hardest. While it's impressive that they actually got on MLPerf Training v4.0 training GPT-3, I suspect it's all hand coded, and if you deviate off the trodden path you'll get almost no perf. Accelerators like this are okay for low power inference where you can adjust the model architecture for the target; Apple does a great job of this. But this will never be acceptable for a training chip.

Then there's the Data Center GPU Max 1450. Intel actually sent us a few of these. You quickly run into a problem: how do you plug them in? They need OAM sockets, 48V power, and a cooling solution that can sink 600W. As far as I can tell, they were only ever deployed in two systems, the Aurora Supercomputer and the Dell XE9640. It's hard to know, but I really doubt many of these Dell systems were sold.

Intel then sent us this carrier board. In some ways it's helpful, but in other ways it's not at all. It still doesn't solve cooling or power, and you need to buy 16x MCIO cables (cheap in quantity, but expensive and hard to find off the shelf). Also, I never got a straight answer, but I really doubt Intel has many of these boards. And that board doesn't look cheap to manufacture more of. The connectors alone, which you need four of per GPU, cost $26 each. That's $104 for just the OAM connectors.

tiny corp was in discussions to buy these GPUs. How much would you pay for one of these on a PCIe card? The specs look great: 839 TFLOPS, 128 GB of RAM, 3.3 TB/s of bandwidth. However…read this article. Even in simple synthetic benchmarks, the chip doesn't get anywhere near its max performance, and it looks to be for fundamental reasons like memory latency. We estimate we could sell PCIe versions of these GPUs for $1,000; I don't think most people know how hard it is to move non-NVIDIA hardware. Before you say you'd pay more, ask yourself: do you really want to deal with the software?

An adapter card has four pieces: a PCB for the card, a 12V-to-48V voltage converter, a heatsink, and a fan. My quote from the guy who makes an OAM adapter board was $310 for 10+ PCBs and $75 for the voltage converter. A heatsink that can handle 600W (heat pipes + vapor chamber) is going to cost $100, then maybe $20 more for the fan. That's $505, and you still need to assemble and test them, oh, and now there's tariffs. Maybe you can get this down to $400 in ~1000 quantity. So $200 for the GPU, $400 for the adapter, $100 for shipping/fulfillment/returns (more if you use Amazon), and 30% profit if you sell at $1k. tiny would net $1M on this, which has to cover NRE, and you have the risk of unsold inventory. We offered Intel $200 per GPU (a $680k wire) and they said no. They wanted $600.
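Running those numbers (my arithmetic, just restating the figures above; note that a $680k wire at $200/GPU implies the deal covered roughly 3,400 of the 5,100 cards, which is how the net comes out around $1M):

```latex
\begin{aligned}
\text{adapter BOM} &= \$310 + \$75 + \$100 + \$20 = \$505,\quad \text{maybe } \$400 \text{ at volume} \\
\text{margin at \$1k} &= \$1000 - (\$200 + \$400 + \$100) = \$300 \approx 30\% \\
\text{offer size} &= \$680\text{k} \div \$200/\text{GPU} = 3400 \text{ GPUs} \\
\text{net} &\approx 3400 \times \$300 \approx \$1\text{M}
\end{aligned}
```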
I suspect that unless a supercomputer person who already uses these GPUs wants to buy more, they will ride it to zero.

tl;dr: there's 5,100 of these GPUs with no simple way to plug them in. It's unclear if they're worth the cost of the slot they go in. I bet they end up shredded, or maybe dumped on eBay for $50 each in a year like the Xeon Phi cards. If you buy one, good luck plugging it in!

The reason Meta and friends buy some AMD is as a hedge against NVIDIA. Even if it's not usable, AMD has progressed on a solid, steady roadmap, with a clear continuation from the 2018 MI50 (which you can now buy for 99% off), to the MI325X, which is a super exciting chip (AMD is king of chiplets). They are even showing signs of finally investing in software, which makes me bullish. If NVIDIA stumbles for a generation, this is AMD's game. The ROCm "copy each NVIDIA repo" strategy actually works if your competition stumbles. They can win GPUs with slow and steady improvement plus competition stumbling; that's how AMD won server CPUs.

With these Intel chips, I'm not sure who they would appeal to. Ponte Vecchio is cancelled. There's no point in investing in the platform if there's not going to be a next generation, and therefore nobody can justify the cost of developing software, therefore there won't be software, therefore they aren't worth plugging in.

Where does this leave Intel's AI roadmap? The successor to Ponte Vecchio was Rialto Bridge, but that was cancelled. The successor to that was Falcon Shores, but that was also cancelled. Intel claims the next GPU will be "Jaguar Shores", but fool me once… To quote JazzLord1234 from reddit: "No point even bothering to listen to their roadmaps anymore. They have squandered all their credibility." Gaudi 3 is a flop due to "unbaked software", but as much as I usually do blame software, nothing has changed from Gaudi 2 and it's just a really hard chip to program for. So there's no future there either. I can't say that "Jaguar Shores" square instills confidence. It didn't inspire confidence for "Joseph B." on LinkedIn either.

From my interactions with Intel people, it seems there are no individuals with power there; it's all committee-like leadership. The problem with this is there's nobody who can say yes, just many people who can say no. Hence all the cancellations and the nonsense strategy. AMD's dysfunction is different. From the beginning they had leadership that can do things (Lisa Su replied to my first e-mail), they just didn't see the value in investing in software until recently. They sort of had a point if they were only targeting hyperscalers, but it seems like SemiAnalysis got through to them that hyperscalers aren't going to deal with bad software either.
It remains to be seen if they can shift culture to actually deliver good software, but there's movement in that direction, and if they succeed, AMD is so undervalued. Their hardware is good. With Intel, until that committee-style leadership is gone, there's zero chance of success. Committee leadership is fine if you are trying to maintain, but Intel's AI situation is even more hopeless than AMD's, and you'd need something major to turn it around. At least with AMD, you can try installing ROCm and be frustrated when there are bugs. Every time I have tried Intel's software, I can't even recall getting the import to work, and the card wasn't powerful enough that I cared. Intel needs actual leadership to turn this around, or there's no future in Intel AI.
If you give some monkeys a slice of cucumber each, they are all pretty happy. Then you give one monkey a grape, and nobody is happy with their cucumber any more. They might even throw the slices back at the experimenter. He got a god damned grape, this is bullshit, I don't want a cucumber anymore! Nobody was worse off in absolute terms, but that doesn't prevent the monkeys from being upset.

And this isn't unique to monkeys. I see this same behavior on display when I hear about billionaires. It's not about what I have; they got a grape.

The tweet is here. What do you do about this? Of course, you can fire this woman, but what percent of people in American society feel the same way? How much of this can you tolerate and still have a functioning society? What's particularly absurd about the critique in the video is that it hasn't been thought through very far. If that house and its friends stopped "ordering shit", the company would stop making money and she wouldn't have that job. There's nothing preventing her from quitting today and getting the same outcome for herself. But of course, that isn't what it's about, because then somebody else would be delivering the packages. You see, that house got a grape.

So how do we get through this? I'll propose something, but it's sort of horrible. Bring people to power based on this feeling. Let everyone indulge fully in their resentment. Kill the bourgeois. They got grapes, kill them all! Watch the situation not improve. Realize that this must be because there are still counterrevolutionaries in the mix, still a few grapefuckers. Some billionaire is trying to hide his billions! Let the purge continue! And still, things are not improving. People are starving. The economy isn't even tracked anymore. Things are bad. Millions are dead. The demoralization is complete.

Starvation and real poverty are more powerful emotions than resentment. It was bad when people were getting grapes, but now there aren't even cucumbers anymore. In the face of true poverty for all, the resentment fades. Society begins to heal. People are grateful to have food; they are grateful for what they have. Expectations are back in line with market value.

You have another way to fix this? Cause this is what seems to happen in history, and it takes a generation. The demoralization is just beginning.
AMD is sending us the two MI300X boxes we asked for. They are in the mail. It took a bit, but AMD passed my cultural test. I now believe they aren't going to shoot themselves in the foot on software, and if that's true, there's absolutely no reason they should be worth only 1/16th of NVIDIA. CUDA isn't really the moat people think it is; it was just an early ecosystem. tiny corp has a fully sovereign AMD stack, and soon we'll port it to the MI300X. You won't even have to use tinygrad proper; tinygrad has a torch frontend now.

Either NVIDIA is super overvalued or AMD is undervalued. If the petaflop gets commoditized (tiny corp's mission), the current situation doesn't make any sense. The hardware is similar; AMD even got the double throughput Tensor Cores on RDNA4 (NVIDIA artificially halves this on their cards, soon they won't be able to). I'm betting on AMD being undervalued, and that the demand for AI has barely started. With good software, the MI300X should outperform the H100. In for a quarter million. Long term. It can always dip short term, but check back in 5 years.
This is a map of primary trading partners, US vs China, and how it has evolved over the last 20 years. Think about it, and realize this probably reflects your experience. I know there was a similar panic about Japan in the 80s, but Japan by population has always been 3x smaller than the US, whereas China is 3x larger. In addition, we had and have military bases in Japan. This is not the same situation.

The US, since I have been born, has been coasting. The main product made by the US is the dollar, and it used those manufactured dollars to outsource everything. Most jobs in the US are now basically fake. It's basically an economy in which five people stick a pipe in the ground, but that pipe is the Fed and the oil was the goodwill built up over 1870-1970. In 2008, with the bailouts, it was made clear that the US has no interest in reform. The next decade, in perhaps a spit-in-your-face move, the Fed made the interest rate 0. Known as ZIRP, this had never been done before. This led to insane perversions.

When I got into business, I didn't understand that business in America was mostly a total scam. Sure, you might look at a single business and think, oh, that sounds reasonable, but then you zoom out and look at the entire system, and it doesn't really make sense. It's scams feeding other scams. Wanna each start a business, pass dollars back and forth over and over again, and drive both our revenues super high? Sure, we don't produce anything, but we have companies with high revenues and we can raise money based on those revenues. We'll both be rich! Let's do it with a bunch of extra steps so people don't catch on though. They'll only see it reflected in the lack of movement of real macro metrics.

You see, the US is a "developed" country, which means real growth is over? You do understand that guns and boats are made of steel, right? Oh, airplanes aren't, they are made of aluminum. Oh…right, yeah, it's not just steel, it is absolutely everything. The future is chips, you say? All the good chips are made in the Republic of China, you say? This 2021 article lays it out clearly, and it also explains why nothing I saw in Silicon Valley made any sense. I'm not going to go into the personal stories, but I just had an underlying assumption that the goal was growth and value production. It isn't. It's self-licking ice cream cone scams, and any growth or value is incidental to that. It isn't until you understand this that people's behavior starts to make sense.

America really is at a fork in the road. In one world, it abandons all hope of being an empire, becoming a regional power with highly protectionist economics. This happened before, and it's called Europe. I know it's hard to believe now, but Europe used to be the seat of power for the whole world. The sun never set on the British empire. Now they put you in jail for memes. Protectionist America is a boring place and not somewhere I want to be. It kicks the can of poverty further down the road, basically embraces socialism, is stagnant, is stale, is a museum…etc; again, there's a contemporary example of this. When I said on Lex they were gonna nationalize NVIDIA, look at the AI Diffusion Framework, and notice how Trump hasn't repealed it. It allows export of GPUs to only 18 countries. Nationalization with American characteristics. It tells the other 177 countries that they should plan on purchasing their AI infrastructure from China.

The other path, which is the exciting path, is the attempt to maintain an empire.
An empire has to compete on its merits. There are two simple steps to restore American greatness:

1) Brain drain the world. Work visas for every person who can produce more than they consume. I'm talking doubling the US population, bringing in all the factory workers, farmers, miners, engineers, literally anyone who produces value. Can we raise the average IQ of America to be higher than China's?

2) Back the dollar by gold (not socially constructed crypto), and bring major crackdowns to finance to tie it to real-world value. Trading is not a job. Passive income is not a thing. Instead, go produce something real and exchange it for gold.

The first will bring the value of "American" labor in line with its global market value. It is a particularly unique advantage of the US over China: the US has a potentially much larger pool of talent. Unironically, diversity is our strength. Unfortunately, there's a lot of resistance to American labor finding its market value.

The second will prevent a lot of the scams. The reason the banking industry is so big is that it is close to the source of the made-up dollars. If currency is gold backed, you could imagine something similar happening to the mining industry instead. However, the mining industry is real! It uses steel and aluminum to build physical things. And imagine when we start to mine space. That's a way better reward function than scamming politicians out of fake dollars.

Unfortunately, I doubt either will happen. They very much both can, but people haven't been demoralized enough yet.
A lot of smooth brains on Hacker News about the last post. I'm sorry if you spent your whole life worshipping money, but hey, the Bible warned you about false idols; don't shoot the messenger.

"It's easier to imagine the end of the world than the end of capitalism" – Mark Fisher

It's actually very easy to imagine the end of capitalism. Imagine capitalism as a game of sharks, where eventually the biggest shark ends up gobbling up all the fish, and that one shark is the last player left standing with all the money. When one person (or company) has all the money, do you see how the money would be worthless?

I'll spell this out clearly. Money is a map; it is not the territory. Please understand what I mean by this before continuing to read. You can erase the mountains from the map, but you still have to climb over them in real life, and even worse, now you don't have a map!

"Everything around you that you call 'life' was made up by people who were no smarter than you" – Steve Jobs

So, if money is the map, what territory is it attempting to capture? Presumably something having to do with value, but increasingly, as we are buying and selling baskets of derivatives of memecoins, nothing. A map that doesn't accurately capture a territory is not a Schelling point. It's not a useful map. And maps are only as good as their usefulness. Useless maps die out. Do you agree or disagree that money is supposed to be a map of value? If you disagree, that's an ought and I can't use logic to convince you otherwise; I can just call you a moron who refuses to burn paper $100 bills for warmth on a deserted island.

Many capitalists I meet are as stupid as communists, trying to give a moral justification for their system. This is my money, I deserve it. I should be able to passively deploy my capital into the markets and live off the returns.

"Moral victories are for minor league coaches." – JAY-Z

An economic system is only good insofar as it effectively deploys capital for real growth. If real economic growth is only 3 percent, any time you are earning beyond that, somebody else is losing. And yet somehow, today, you can put your money in money market accounts and earn a "risk-free" 5 percent…hmm, something doesn't make sense. Who is losing? You will eventually be unable to squeeze the productive people any further.

The worst was an e-mail I got from someone who supposedly agreed with me. "Value creation (for all stakeholders) is at the core of the organization/business model I am putting together…Anyway I wanted to let you know others out there who share your vision." – anon email

Fuck your stakeholders. Fuck your business model. You don't understand me at all. Stop worrying so much about the distribution of the pie. Start thinking about how to make the pie bigger. With exponential (which is what 3 percent year over year is) growth, the latter outstrips the former by so much. The right distribution is simply: from each according to his ability, to each according to his ability to effectively deploy capital to achieve real economic growth. Communism is dumb cause it goes to the poor (who routinely demonstrate that they poorly deploy capital). Capitalism is dumb cause it goes to the rent-seekers (who frequently deploy capital to increase their moat). Acceleration is the way.
More in programming
I have added syntax highlighting to my blog using tree-sitter. Here are some notes about what I learned, with some complaining.

Contents: static site generator · markdown ingestion · highlighting · incompatible?! · highlight names · class names · styling code · results · future work · frontmatter · templates · feed style · highlight quality

static site generator

I moved my blog to my own web site a few years ago. It is produced using a scruffy Rust program that converts a bunch of Markdown files to HTML using pulldown-cmark, and produces complete pages from Handlebars templates. Why did I write another static site generator? Well, partly as an exercise when learning Rust. Partly, since I wrote my own page templates, I'm not going to benefit from a library of existing templates. On the contrary, it's harder to create new templates that work with a general-purpose SSG than write my own simpler site-specific SSG. It's miserable to write programs in template languages. My SSG can keep the logic in the templates to a minimum, and do all the fiddly stuff in Rust. (Which is not very fiddly, because my site doesn't have complicated navigation – compared to the multilevel menus on www.dns.cam.ac.uk for instance.)

markdown ingestion

There are a few things to do to each Markdown file:

- split off and deserialize the YAML frontmatter
- find the <cut> or <toc> marker that indicates the end of the teaser / where the table of contents should be inserted
- augment headings with self-linking anchors (which are also used by the ToC)

Before this work I was using regexes to do all these jobs, because that allowed me to treat pulldown-cmark as a black box: Markdown in, HTML out. But for syntax highlighting I had to be able to find fenced code blocks. It was time to put some code into the pipeline between pulldown-cmark's parser and renderer. And if I'm using a proper parser I can get rid of a few regexes: after some hacking, now only the YAML frontmatter is handled with a regex. Sub-heading linkification and ToC construction are fiddly and more complicated than they were before. But they are also less buggy: markup in headings actually works now! Compared to the ToC, it's fairly simple to detect code blocks and pass them through a highlighter. You can look at my Markdown munger here. (I am not very happy with the way it uses state, but it works.)
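The interception itself can be small. Here is a minimal sketch of the idea (not my actual munger, which is linked above; the `render` function and the `highlight` callback are made up for illustration, and the event names assume the pulldown-cmark 0.10+ API):

```rust
use pulldown_cmark::{html, CodeBlockKind, Event, Parser, Tag, TagEnd};

// Render Markdown to HTML, diverting each fenced code block to a
// `highlight` callback (e.g. a tree-sitter wrapper) instead of letting
// the renderer emit it as escaped plain text.
fn render(markdown: &str, highlight: impl Fn(&str, &str) -> String) -> String {
    let mut events = Vec::new();
    let mut code: Option<(String, String)> = None; // (language, source)

    for event in Parser::new(markdown) {
        match event {
            Event::Start(Tag::CodeBlock(CodeBlockKind::Fenced(lang))) => {
                code = Some((lang.to_string(), String::new()));
            }
            // Accumulate the text inside the code block instead of
            // passing it through to the renderer.
            Event::Text(text) if code.is_some() => {
                code.as_mut().unwrap().1.push_str(&text);
            }
            Event::End(TagEnd::CodeBlock) if code.is_some() => {
                let (lang, source) = code.take().unwrap();
                events.push(Event::Html(highlight(&lang, &source).into()));
            }
            other => events.push(other),
        }
    }

    let mut out = String::new();
    html::push_html(&mut out, events.into_iter());
    out
}
```

Everything else in the event stream passes through untouched, which is what keeps the rest of pulldown-cmark's rendering a black box.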
highlighting

As well as the tree-sitter-highlight documentation I used femark as an example implementation. I encountered a few problems.

incompatible?!

I could not get the latest tree-sitter-highlight to work as described in its documentation. I thought the current tree-sitter crates were incompatible with each other! For a while I downgraded to an earlier version, but eventually I solved the problem. Where the docs say,

```rust
let javascript_language = tree_sitter_javascript::language();
```

They should say:

```rust
let javascript_language = tree_sitter::Language::new(
    tree_sitter_javascript::LANGUAGE
);
```

highlight names

I was offended that tree-sitter-highlight seems to expect me to hardcode a list of highlight names, without explaining where they come from or what they mean. I was doubly offended that there's an array of STANDARD_CAPTURE_NAMES but it isn't exported, and doesn't match the list in the docs. You mean I have to copy and paste it? Which one?! There's some discussion of highlight names in the tree-sitter manual's "syntax highlighting" chapter, but that is aimed at people who are writing a tree-sitter grammar, not people who are using one. Eventually I worked out that tree_sitter_javascript::HIGHLIGHT_QUERY in the tree-sitter-highlight example corresponds to the contents of a highlights.scm file. Each @name in highlights.scm is a highlight name that I might be interested in. In principle I guess different tree-sitter grammars should use similar highlight names in their highlights.scm files? (Only to a limited extent, it turns out.) I decided the obviously correct list of highlight names is the list of every name defined in the HIGHLIGHT_QUERY. The query is just a string so I can throw a regex at it and build an array of the matches. This should make the highlighter produce <span> wrappers for as many tokens as possible in my code, which might be more than necessary but I don't have to style them all.

class names

The tree-sitter-highlight crate comes with a lightly-documented HtmlRenderer, which does much of the job fairly straightforwardly. The fun part is the attribute_callback. When the HtmlRenderer is wrapping a token, it emits the start of a <span then expects the callback to append whatever HTML attributes it thinks might be appropriate. Uh, I guess I want a class="..." here? Well, the highlight names work a little bit like class names: they have dot-separated parts which tree-sitter-highlight can match more or less specifically. (However I am telling it to match all of them.) So I decided to turn each dot-separated highlight name into a space-separated class attribute. The nice thing about this is that my Rust code doesn't need to know anything about a language's tree-sitter grammar or its highlight query. The grammar's highlight names become CSS class names automatically.
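Here's a rough sketch of how the two tricks fit together: the regex over the query, and the class-name callback. This is not the real hilite.rs (linked below); the helper names are mine, it assumes the `regex` crate, and the tree-sitter-highlight signatures are as I understand the 0.23-era crates, so check the docs:

```rust
use std::collections::BTreeSet;

use tree_sitter_highlight::{Highlight, HighlightConfiguration, Highlighter, HtmlRenderer};

// Every "@name" capture in a highlights.scm query string, deduplicated.
fn capture_names(query: &str) -> Vec<String> {
    let re = regex::Regex::new(r"@([\w.]+)").unwrap();
    let names: BTreeSet<String> = re
        .captures_iter(query)
        .map(|c| c[1].to_string())
        .collect();
    names.into_iter().collect()
}

// Highlight one code block and wrap its tokens in classed <span>s.
fn highlight_javascript(source: &str) -> Result<String, Box<dyn std::error::Error>> {
    let query = tree_sitter_javascript::HIGHLIGHT_QUERY;
    let names = capture_names(query);

    let mut config = HighlightConfiguration::new(
        tree_sitter::Language::new(tree_sitter_javascript::LANGUAGE),
        "javascript",
        query,
        tree_sitter_javascript::INJECTIONS_QUERY,
        tree_sitter_javascript::LOCALS_QUERY,
    )?;
    config.configure(&names);

    let mut highlighter = Highlighter::new();
    let events = highlighter.highlight(&config, source.as_bytes(), None, |_| None)?;

    // A Highlight is an index into `names`; turn the dot-separated
    // highlight name into space-separated class names.
    let mut renderer = HtmlRenderer::new();
    renderer.render(events, source.as_bytes(), &|h: Highlight, out: &mut Vec<u8>| {
        let classes = names[h.0].replace('.', " ");
        out.extend_from_slice(format!("class=\"hilite {classes}\"").as_bytes());
    })?;
    Ok(renderer.lines().collect())
}
```

With the CSS below, a token highlighted as type.builtin gets class="hilite type builtin", so both the generic span.hilite.type rule and the more specific span.hilite.type.builtin rule can match it.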
styling code

Now I can write some simple CSS to add some colours to my code. I can make type names green,

```css
code span.hilite.type { color: #aca; }
```

If I decide builtin types should be cyan like keywords I can write,

```css
code span.hilite.type.builtin,
code span.hilite.keyword { color: #9cc; }
```

results

You can look at my tree-sitter-highlight wrapper here. Getting it to work required a bit more creativity than I would have preferred, but it turned out OK. I can add support for a new language by adding a crate to Cargo.toml and a couple of lines to hilite.rs – and maybe some CSS if I have not yet covered its highlight names. (Like I just did to highlight the CSS above!)

future work

While writing this blog post I found myself complaining about things that I really ought to fix instead.

frontmatter

I might simplify the per-page source format knob so that I can use pulldown-cmark's support for YAML frontmatter instead of a separate regex pass. This change will be easier if I can treat the html pages as Markdown without mangling them too much (is Markdown even supposed to be idempotent?). More tricky are a couple of special case pages whose source is Handlebars instead of Markdown.

templates

I'm not entirely happy with Handlebars. It's a more powerful language than I need – I chose Handlebars instead of Mustache because Handlebars works neatly with serde. But it has a dynamic type system that makes the templates more error-prone than I would like. Perhaps I can find a more static Rust template system that takes advantage of the close coupling between my templates and the data structure that describes the web site. However, I like my templates to be primarily HTML with a sprinkling of insertions, not something weird that's neither HTML nor Rust.

feed style

There's no CSS in my Atom feed, so code blocks there will remain unstyled. I don't know if feed readers accept <style> tags or if it has to be inline styles. (That would make a mess of my neat setup!)

highlight quality

I'm not entirely satisfied with the level of detail and consistency provided by the tree-sitter language grammars and highlight queries. For instance, in the CSS above the class names and property names have the same colour because the CSS highlights.scm gives them the same highlight name. The C grammar is good at identifying variables, but the Rust grammar is not. Oh well, I guess it's good enough for now. At least it doesn't involve Javascript.
Simplify complex decisions by separating upsides from downsides, investing in upsides, vetoing with downsides, and using an appropriate decision framework.
I've been running Linux, Neovim, and Framework for a year now, but it easily feels like a decade or more. That's the funny thing about habits: they can be so hard to break, but once you do, they're also easily forgotten. That's how it feels having left the Apple realm after two decades inside the walled garden. It was hard for the first couple of weeks, but since then, it's rarely crossed my mind.

Humans are rigid in the short term, but flexible in the long term. Blessed are the few who can retain the grit to push through that early mental resistance and reach new maxima. That is something that gets harder with age. I can feel it. It takes more of me now to wipe a mental slate clean and start over. To go back to being a beginner. But the reward for learning something new is as satisfying as ever.

But it's also why I've tried to be modest with the advocacy. I don't know if most developers are better off on Linux. I mean, I believe they are, at some utopian level, especially if they work for the web, using open source tooling. But I don't know if they are as humans with limited will or capacity for change. Of course, it's fair to say that one simply doesn't want to. Either because one remains a fan of Apple, in dire need of the edge MacBooks still retain on efficiency/battery, or simply content inside the ecosystem. There are plenty of reasons why someone might not want to change. It's not just about rigidity.

Besides, it's a dead end trying to convince anyone of an alternative with the sharp end of a religious argument. That kind of crusading just seeds resentment and stubbornness. I know that all too well. What I've found to work much better is planting seeds and showing off your plowshare. Let whatever curiosity blooms find its own way towards your blue sky. The mimetic engine of persuasion runs much cleaner anyway. And for me, it's primarily about my personal computing workbench, regardless of what the world does or doesn't do. It was the same with finding Ruby. It's great when others come along for the ride, but I'd be happy taking the trip solo too.

So consider this a postcard from a year into the Linux, Neovim, and Framework journey. The sun is still shining, the wind is in my hair, and the smile on my lips hasn't been this big since the earliest days of OS X.
Yesterday I gave a talk at Monki Gras 2025. This year, the theme is Sustaining Software Development Craft, and here's the description from the conference website:

The big question we want to explore is – how can we keep doing the work we do, when it sustains us, provides meaning and purpose, and sometimes pays the bills? We're in a period of profound change, technically, politically, socially, economically, which has huge implications for us as practitioners, the makers and doers, but also for the culture at large.

I did a talk about the first decade of my career, which I've spent working on projects that are designed to last. I'm pleased with my talk, and I got a lot of nice comments. Monki Gras is always a pleasure to attend and speak at – it's such a lovely, friendly vibe, and the organisers James Governor and Jessica West do a great job of making it a nice day. When I left yesterday, I felt warm and fuzzy and appreciated. I also have a front-row photo of me speaking, courtesy of my dear friend Eriol Fox. Naturally, I chose my outfit to match my slides (and this blog post!).

Key points

- How do you create something that lasts?
- You can't predict the future, but there are patterns in what lasts
- People skills sustain a career more than technical skills
- Long-lasting systems cannot grow without bound; they need weeding

Links/recommended reading

- Sibyl Schaefer presented a paper, Energy, Digital Preservation, and the Climate, at iPres 2024, which is about how digital preservation needs to change in anticipation of the climate crisis. This was a major inspiration for this talk.
- Simon Willison gave a talk, Coping strategies for the serial project hoarder, at DjangoCon US in 2022, which is another inspiration for me. I'm not as prolific as Simon, but I do see parallels between his approach and what I remember of Metaswitch.
- Most of the photos in the talk come from the Flickr Commons, a collection of historical photographs from over 100 international cultural heritage organisations. You can learn more about the Commons, browse the photos, and see who's involved using the Commons Explorer https://commons.flickr.org/. (Which I helped to build!)

Slides and notes

Photo: dry stone wall building in South Wales. Taken by Wikimedia Commons user TR001, used under CC BY‑SA 3.0.

[Make introductory remarks; name and pronouns; mention slides on my website]

I've been a software developer for ten years, and I've spent my career working on projects that are designed to last – first telecoms and networking, now cultural heritage – so when I heard this year's theme "sustaining craft", I thought about creating things that last a long time. The key question I want to address in this talk is: how do you create something that lasts? I want to share a few thoughts I've had from working on decade- and century-scale projects. Part of this is about how we sustain ourselves as software developers, as the individuals who create software, especially with the skill threat of AI and the shifting landscape of funding software. I also want to go broader, and talk about how we sustain the craft, the skill, the projects. Let's go through my career, and see what we can learn.

Photo: women working at a Bell System telephone switchboard. From the U.S. National Archives, no known copyright restrictions.

My first software developer job was at a company called Metaswitch. Not a household name; they made telecoms equipment, and you'd probably have heard of their customers.
They sold equipment to carriers like AT&T, Vodafone, and O2, who'd use that equipment to sell you telephone service. Telecoms infrastructure is designed to last a long time. I spent most of my time at Metaswitch working with BGP, a routing protocol designed on a pair of napkins in 1989.

BGP is sometimes known as the "two-napkin protocol", because of the two napkins on which Kirk Lougheed and Yakov Rekhter wrote the original design. From the Computer History Museum.

These are those napkins. This design is basically still the backbone of the Internet. A lot of the building blocks of the telephone network and the Internet are fundamentally the same today as when they were created. I was working in a codebase that had been actively developed for most of my life, and was expected to outlast me. This was my first job so I didn't really appreciate it at the time, but Metaswitch did a lot of stuff designed to keep that codebase going, to sustain it into the future. Let's talk about a few of them.

Photo: a programmer testing electronic equipment. From the San Diego Air & Space Museum Archives, no known copyright restrictions.

Metaswitch was very careful about adopting new technologies. Most of their code was written in C, a little C++, and Rust was being adopted very slowly. They didn't add new technology quickly. Anything they added, they had to support for a long time – so they wanted to pick technologies that weren't a flash in the pan. I learnt about something called "the Lindy effect" – this is the idea that any technology is about halfway through its expected life. An open-source library that's been developed for decades? That'll probably be around a while longer. A brand new JavaScript framework? That's a riskier long-term bet. The Lindy effect is about how software that's been around a long time has already proven its staying power. And talking of AI specifically – I've been waiting for things to settle. There's so much churn and change in this space; if I'd learnt a tool six months ago, most of that would be obsolete today. I don't hate AI – I love that people are trying all these new tools – but I'm tired, and learning new things is exhausting. I'm waiting for things to calm down before really diving deep on these tools.

Metaswitch was very cautious about third-party code, and they didn't have much of it. Again, anything they use will have to be supported for a long time – will that third-party code, that open-source project, stick around? They preferred to take the short-term hit of writing their own code, but then having complete control over it. To give you some idea of how seriously they took this: every third-party dependency had to be reviewed and vetted by lawyers before it could be added to the codebase. Imagine doing that for a modern Node.js project!

They had a lot of safety nets. Manual and automated testing, a dedicated QA team, lots of checks and reviews. These were large codebases which had to be reliable. Long-lived systems can't afford to "move fast and break things". This was a lot of extra work, but it meant more stability, less churn, and not much risk of outside influences breaking things. This isn't the only way to build software – Metaswitch is at one extreme of a spectrum – but it did seem to work.

I think this is a lesson for building software, but also in what we choose to learn as individuals. Focusing on software that's likely to last means less churn in our careers. If you learn the fundamentals of the web today, that knowledge will still be useful in five years.
If you learn the JavaScript framework du jour? Maybe less so. How do you know what's going to last? That's the key question! It's difficult, but it's not impossible. This is my first thought for you all: you can't predict the future, but there are patterns in what lasts. I've given you some examples of coding practices that can help the longevity of a codebase; these are just a few. Maybe I have rose-tinted spectacles, but I've taken the lessons from Metaswitch and brought them into my current work, and I do like them. I'm careful about external dependencies, I write a lot of my own code, and I create lots of safety nets, and stuff doesn't tend to churn so much. My code lasts because it isn't constantly being broken by external forces.

Photo: a child in nursery school cutting a plank of wood with a saw. From the Community Archives of Belleville and Hastings County, no known copyright restrictions.

So that's what the smart people were doing at Metaswitch. What was I doing? I joined Metaswitch when I was a young twenty-something graduate, so I knew everything. I knew software development was easy, these old fuddy-duddies were making it all far too complicated, and I was gonna waltz in and show them how it was done. And obviously, that happened. (Please imagine me reading that paragraph in a very sarcastic voice.) I started doing the work, and it was a lot harder than I expected – who knew that software development was difficult? But I was coming from a background as a solo dev who'd only done hobby projects. I'd never worked in a team before. I didn't know how to say that I was struggling, to ask for help. I kept making bold promises about what I could do, based on how quickly I thought I should be able to do the work – but I was making promises my skills couldn't match. I kept missing self-imposed deadlines. You can do that once, but you can't make it a habit. About six months before I left, my manager said to me, "Alex, you have a reputation for being unreliable".

Photo: a boy with a pudding bowl haircut, photographed by Elinor Wiltshire, 1964. From the National Library of Ireland, no known copyright restrictions.

He was right! I had such a history of making promises that I couldn't keep, people stopped trusting me. I didn't get to work on interesting features or the exciting projects, because nobody trusted me to deliver. That was part of why I left that job – I'd ploughed my reputation into the ground, and I needed to reset.

Photo: the library stores at Wellcome Collection. Taken by Thomas SG Farnetti, used under CC BY‑NC 4.0.

I got that reset at Wellcome Collection, a London museum and library that some of you might know. I was working a lot with their collections, a lot of data and metadata. Wellcome Collection is building on a long tradition of libraries and archives, which go back thousands of years. Long-term thinking is in their DNA. To give you one example: there's stuff in the archive that won't be made public until the turn of the century. Everybody who works there today will be long gone, but they assume that those records will exist in some shape or form when that time comes, and they're planning for those files to eventually be opened. This is century-scale thinking.

Photo: Bob Hoover. From the San Diego Air & Space Museum Archives, no known copyright restrictions.

When I started, I sat next to a guy called Chris. (I couldn't find a good picture of him, but I feel like this photo captures his energy.) Chris was a senior archivist.
He'd been at Wellcome Collection about twenty-five years, and there were very few people – if anyone – who knew more about the archive than he did. He absolutely knew his stuff, and he could have swaggered around like he owned the place. But he didn't. Something I was struck by, from my very first day, was how curious and humble he was. A bit of a rarity, if you work in software. He was the experienced veteran of the organisation, but he cared about what other people had to say and wanted to learn from them. Twenty-five years in, and he still wanted to learn. He was a nice guy. He was a pleasure to work with, and I think that's a big part of why he was able to stay in that job as long as he did. We were all quite disappointed when he left for another job!

This is my second thought for you: people skills sustain a career more than technical ones. Being a pleasure to work with opens so many doors and opportunities that technical skill alone cannot. We could do another conference just on what those people skills are, but for now I just want to give you a few examples to think about.

Photo: Lt.(jg.) Harriet Ida Pickens and Ens. Frances Wills, first Negro Waves to be commissioned in the US Navy. From the U.S. National Archives, no known copyright restrictions.

Be a respectful and reliable teammate. You want to be seen as a safe pair of hands. Reliability isn't about avoiding mistakes, it's about managing expectations. If you're consistently overpromising and underdelivering, people stop trusting you (which I learnt the hard way). If you want people to trust you, you have to keep your promises. Good teammates communicate early when things aren't going to plan, they ask for help and offer it in return. Good teammates respect the work that went before. It's tempting to dismiss it as "legacy", but somebody worked hard on it, and it was the best they knew how to do – recognise that effort and skill, don't dismiss it.

Listen with curiosity and intent. My colleague Chris had decades of experience, but he never acted like he knew everything. He asked thoughtful questions and genuinely wanted to learn from everyone. So many of us aren't really listening when we're "listening" – we're just waiting for the next silence, where we can interject with the next thing we've already thought of. We aren't responding to what other people are saying. When we listen, we get to learn, and other people feel heard – and that makes collaboration much smoother and more enjoyable.

Finally, and this is a big one: don't give people unsolicited advice. We are very bad at this as an industry. We all have so many opinions and ideas, but sometimes, sharing isn't caring. Feedback is only useful when somebody wants to hear it – otherwise, it feels like criticism, it feels like an attack. Saying "um, actually" when nobody asked for feedback isn't helpful; it just puts people on the defensive. Asking whether somebody wants feedback, and what sort of feedback they want, will go a long way towards it being useful.

So again: people skills sustain a career more than technical skills. There aren't many truly solo careers in software development – we all have to work with other people – and for many of us, that's the joy of it! If you're a nice person to work with, other people will want to work with you, to collaborate on projects; they'll offer you opportunities; it opens doors. Your technical skills won't sustain your career if you can't work with other people.

Photo: "The Keeper", an exhibition at the New Museum in New York.
Taken by Daniel Doubrovkine, used under CC BY‑NC‑SA 4.0.

When I went to Wellcome Collection, it was my first time getting up-close and personal with a library and archive, and I didn't really know how they worked. If you'd asked me, I'd have guessed they just keep … everything? And it was gently explained to me that "No Alex, that's hoarding." "Your overflowing yarn stash does not count as an archive." Big collecting institutions are actually super picky – they have guidelines about what sort of material they collect, what's in scope, what isn't, and they'll aggressively reject anything that isn't a good match. At Wellcome Collection, their remit was "the history of health and human experience". You have medical papers? Definitely interesting! Your dad's old pile of car magazines? Less so.

Photo: a dumpster full of books that have been discarded. From brewbooks on Flickr, used under CC BY‑SA 2.0.

Collecting institutions also engage in the practice of "weeding" or "deaccessioning", which is removing material, pruning the collection. For example, in lending libraries, books will be removed from the shelves if they've become old, damaged, or unpopular. They may be donated, or sold, or just thrown away – but whatever happens, they're gotten rid of. That space is reclaimed for other books. Getting rid of material is a fundamental part of professional collecting, because professionals know that storing something has an ongoing cost. They know they can't keep everything.

Photo: a box full of printed photos. From Miray Bostancı on Pexels, used under the Pexels license.

This is something I think about in my current job as well. I currently work at the Flickr Foundation, where we're thinking about how to keep Flickr's pictures visible for 100 years. How do we preserve social media, how do we maintain our digital legacy? When we talk to people, one thing that comes up regularly is that almost everybody has too many photos. Modern smartphones have made it so easy to snap, snap, snap, and we end up with enormous libraries with thousands of images, but we can't find the photos we care about. We can't find the meaningful memories. We're collecting too much stuff. Digital photos aren't expensive to store, but we feel the cost in other ways – the cognitive load of having to deal with so many images, of having to sift through a disorganised collection.

Photo: a wheelbarrow in a garden. From Hans Middendorp on Pexels, used under the Pexels license.

I think there's a lesson here for the software industry. What's the cost of all the code that we're keeping? We construct these enormous edifices of code, but when do we turn things off? When do we delete code? We're more focused on new code, new ideas, new features. I'm personally quite concerned by how much generative AI has focused on writing more code, and not on dealing with the code we already have. Code is text, so it's cheap to store, but it still has a cost – it's more cognitive load, more maintenance, more room for bugs and vulnerabilities. We can keep all our software forever, but we shouldn't.

Photo: Open Garbage Dump on Highway 112, North of San Sebastian. Taken by John Vachon, 1973. From the U.S. National Archives, no known copyright restrictions.

I think this is going to become a bigger issue for us. We live in an era of abundance, where we can get more computing resources at the push of a button. But that can't last forever. What happens when our current assumptions about endless compute no longer hold?
The climate crisis – where's all our electricity and hardware coming from? The economics of AI – who's paying for all these GPU-intensive workloads? And politics – how many of us are dependent on cloud computing based in the US? How many of us feel as good about that as we did three months ago? Libraries are good at making a little go a long way, at eking out their resources, at deciding what's a good use of resources and what's waste. Often the people who are good with money are the people who don't have much of it, and we have a lot of money. It's easier to make decisions about what to prune and what to keep when things are going well – it's harder to make decisions in an emergency.

This is my third thought for you: long-lasting systems cannot grow without bound; they need weeding. It isn't sustainable to grow forever, because eventually you get overwhelmed by the weight of everything that came before. We need to get better at writing software efficiently, at turning things off that we don't need. It's a skill we've neglected. We used to be really good at it – when computers were the size of a room, programmers could eke out every last bit of performance. We can't do that any more, but it's so important when building something to last, and I think it's a skill we'll have to re-learn soon.

Photo: Val Weaver and Vera Askew running in a relay race, Brisbane, 1939. From the State Library of Queensland, no known copyright restrictions.

Weeding is a term that comes from the preservation world, so let's stay there. When you talk to people who work in digital preservation, we often describe it as a relay race. There is no permanent digital media, there's no digital parchment or stone tablets – everything we have today will be unreadable in a few decades. We're constantly migrating from one format to another, trying to stay ahead of obsolete technology. Software is also a bit of a relay race – there is no "write it once and you're done". We're constantly upgrading, editing, improving. And that can be frustrating, but it also means we have regular opportunities to learn and improve. We have that chance to reflect, to do things better.

Photo: Broken computer monitor found in the woods. By Jeff Myers on Flickr, used under CC BY‑NC 2.0.

I think we do our best reflections when computers go bust. When something goes wrong, we spring into action – we do retrospectives, root cause analysis, we work out what went wrong and how to stop it happening again. This is a great way to build software that lasts, to make it more resilient. It's a period of intense reflection – what went wrong, how do we stop it happening again? What I've noticed is that the best systems are doing this sort of reflection all the time – they aren't waiting for something to go wrong. They know that prevention is better than cure, and they embody it. They give themselves regular time to reflect, to think about what's working and what's not – and when we do, great stuff can happen.

Photo: Statue of Astrid Lindgren. By Tobias Barz on Flickr, used under CC BY‑ND 2.0.

I want to give you one more example. As a sidebar to my day job, I've been writing a blog for thirteen years. It's the longest job – asterisk – I've ever had. The indie web is still cool! A lot of what I write, especially when I was starting, was sharing bits of code. "Here's something I wrote, here's what it does, here's how it works and why it's cool." Writing about my code has been an incredible learning experience.
You might have heard the saying: "ask a developer to review 5 lines of code, she'll find 5 issues; ask her to review 500 lines and she'll say it looks good". When I sit back and deeply read and explain short snippets of my code, I see how to do things better. I get better at programming. Writing this blog has single-handedly had the biggest impact on my skill as a programmer.

Photo: Midnight sun in Advent Bay, Spitzbergen, Norway. From the Library of Congress, no known copyright restrictions.

There are so many ways to reflect on our work, opportunities to look back and ask how we can do better – but we have to make the most of them. I think we are, in some ways, very lucky that our work isn't set in stone, that we do keep doing the same thing, that we have the opportunity to do better. Writing this talk has been, in some sense, a reflection on the first decade of my career, and it's made me think about what I want the next decade to look like. In this talk, I've tried to distill some of those things, tried to give you some of the ideas that I want to keep, that I think will help my career and my software to last. Be careful about what you create, what you keep, and how you interact with other people. That care, that process of reflection – that is what creates things that last.

[If the formatting of this post looks odd in your feed reader, visit the original article]