I’m trying something a bit different today – fiction. I had an idea for a short story the other evening, and I fleshed it out into a proper piece. I want to get better at writing fiction, and the only way to do that is with practice. I hope you like what I’ve written! When the fire starts, I am already running for the exit. When the fire starts, the world is thrown into sharp relief. I have worked in this theatre since it opened its doors. When the fire starts, my work begins – and in a way, it also ends. When the fire starts, they run beneath me. When the fire starts, they leave their bags behind. Their coats. Their tickets. They hear me, though I have no voice. When the fire starts, I know I will never leave. When the fire starts, I will keep running. I will always be running for the exit, because somebody must. A “running man” exit sign. Photo by Mateusz Dach on Pexels, used under the Pexels license. Hopefully it’s clear that this isn’t a story...
3 days ago


More from alexwlchan

Monki Gras 2025: What I’ve Learned by Building to Last

Yesterday I gave a talk at Monki Gras 2025. This year, the theme is Sustaining Software Development Craft, and here’s the description from the conference website: The big question we want to explore is – how can we keep doing the work we do, when it sustains us, provides meaning and purpose, and sometimes pays the bills? We’re in a period of profound change, technically, politically, socially, economically, which has huge implications for us as practitioners, the makers and doers, but also for the culture at large. I did a talk about the first decade of my career, which I’ve spent working on projects that are designed to last. I’m pleased with my talk, and I got a lot of nice comments. Monki Gras is always a pleasure to attend and speak at – it’s such a lovely, friendly vibe, and the organisers James Governor and Jessica West do a great job of making it a nice day. When I left yesterday, I felt warm and fuzzy and appreciated. I also have a front-row photo of me speaking, courtesy of my dear friend Eriol Fox. Naturally, I chose my outfit to match my slides (and this blog post!). Key points How do you create something that lasts? You can’t predict the future, but there are patterns in what lasts People skills sustain a career more than technical skills Long-lasting systems cannot grow without bound; they need weeding Links/recommended reading Sibyl Schaefer presented a paper Energy, Digital Preservation, and the Climate at iPres 2024, which is about how digital preservation needs to change in anticipation of the climate crisis. This was a major inspiration for this talk. Simon Willison gave a talk Coping strategies for the serial project hoarder at DjangoCon US in 2022, which is another inspiration for me. I’m not as prolific as Simon, but I do see parallels between his approach and what I remember of Metaswitch. Most of the photos in the talk come from the Flickr Commons, a collection of historical photographs from over 100 international cultural heritage organisations. You can learn more about the Commons, browse the photos, and see who’s involved using the Commons Explorer https://commons.flickr.org/. (Which I helped to build!) Slides and notes Photo: dry stone wall building in South Wales. Taken by Wikimedia Commons user TR001, used under CC BY‑SA 3.0. [Make introductory remarks; name and pronouns; mention slides on my website] I’ve been a software developer for ten years, and I’ve spent my career working on projects that are designed to last – first telecoms and networking, now cultural heritage – so when I heard this year’s theme “sustaining craft”, I thought about creating things that last a long time. The key question I want to address in this talk is how do you create something that lasts? I want to share a few thoughts I’ve had from working on decade- and century-scale projects. Part of this is about how we sustain ourselves as software developers, as the individuals who create software, especially with the skill threat of AI and the shifting landscape of funding software. I also want to go broader, and talk about how we sustain the craft, the skill, the projects. Let’s go through my career, and see what we can learn. Photo: women working at a Bell System telephone switchboard. From the U.S. National Archives, no known copyright restrictions. My first software developer job was at a company called Metaswitch. Not a household name, they made telecoms equipment, and you’d probably have heard of their customers. 
They sold equipment to carriers like AT&T, Vodafone, and O2, who’d use that equipment to sell you telephone service. Telecoms infrastructure is designed to last a long time. I spent most of my time at Metaswitch working with BGP, a routing protocol designed on a pair of napkins in 1989. BGP is sometimes known as the “two-napkin protocol”, because of the two napkins on which Kirk Lougheed and Yakov Rekhter wrote the original design. From the Computer History Museum. These are those napkins. This design is basically still the backbone of the Internet. A lot of the building blocks of the telephone network and the Internet are fundamentally the same today as when they were created. I was working in a codebase that had been actively developed for most of my life, and was expected to outlast me. This was my first job so I didn’t really appreciate it at the time, but Metaswitch did a lot of stuff designed to keep that codebase going, to sustain it into the future. Let’s talk about a few of them. Photo: a programmer testing electronic equipment. From the San Diego Air & Space Museum Archives, no known copyright restrictions. Metaswitch was very careful about adopting new technologies. Most of their code was written in C, a little C++, and Rust was being adopted very slowly. They didn’t add new technology quickly. Anything they added, they had to support for a long time – so they wanted to pick technologies that weren’t a flash in the pan. I learnt about something called “the Lindy effect” – this is the idea that any technology is about halfway through its expected life. An open-source library that’s been developed for decades? That’ll probably be around a while longer. A brand new JavaScript framework? That’s a riskier long-term bet. The Lindy effect is about how software that’s been around a long time has already proven its staying power. And talking of AI specifically – I’ve been waiting for things to settle. There’s so much churn and change in this space that if I’d learnt a tool six months ago, most of that would be obsolete today. I don’t hate AI, I love that people are trying all these new tools – but I’m tired, and learning new things is exhausting. I’m waiting for things to calm down before really diving deep on these tools. Metaswitch was very cautious about third-party code, and they didn’t have much of it. Again, anything they used would have to be supported for a long time – will that third-party code, that open-source project, stick around? They preferred to take the short-term hit of writing their own code, in exchange for having complete control over it. To give you some idea of how seriously they took this: every third-party dependency had to be reviewed and vetted by lawyers before it could be added to the codebase. Imagine doing that for a modern Node.js project! They had a lot of safety nets. Manual and automated testing, a dedicated QA team, lots of checks and reviews. These were large codebases which had to be reliable. Long-lived systems can’t afford to “move fast and break things”. This was a lot of extra work, but it meant more stability, less churn, and not much risk of outside influences breaking things. This isn’t the only way to build software – Metaswitch is at one extreme of a spectrum – but it did seem to work. I think this is a lesson for building software, but also for what we choose to learn as individuals. Focusing on software that’s likely to last means less churn in our careers. If you learn the fundamentals of the web today, that knowledge will still be useful in five years.
If you learn the JavaScript framework du jour? Maybe less so. How do you know what’s going to last? That’s the key question! It’s difficult, but it’s not impossible. This is my first thought for you all: you can’t predict the future, but there are patterns in what lasts. I’ve given you some examples of coding practices that can help the longevity of a codebase – these are just a few. Maybe I have rose-tinted spectacles, but I’ve taken the lessons from Metaswitch and brought them into my current work, and I do like them. I’m careful about external dependencies, I write a lot of my own code, and I create lots of safety nets, and stuff doesn’t tend to churn so much. My code lasts because it isn’t constantly being broken by external forces. Photo: a child in nursery school cutting a plank of wood with a saw. From the Community Archives of Belleville and Hastings County, no known copyright restrictions. So that’s what the smart people were doing at Metaswitch. What was I doing? I joined Metaswitch as a young twenty-something graduate, so I knew everything. I knew software development was easy, these old fuddy-duddies were making it all far too complicated, and I was gonna waltz in and show them how it was done. And obviously, that happened. (Please imagine me reading that paragraph in a very sarcastic voice.) I started doing the work, and it was a lot harder than I expected – who knew that software development was difficult? But I was coming from a background as a solo dev who’d only done hobby projects. I’d never worked in a team before. I didn’t know how to say that I was struggling, to ask for help. I kept making bold promises about what I could do, based on how quickly I thought I should be able to do the work – but I was making promises my skills couldn’t match. I kept missing self-imposed deadlines. You can do that once, but you can’t make it a habit. About six months before I left, my manager said to me “Alex, you have a reputation for being unreliable”. Photo: a boy with a pudding bowl haircut, photographed by Elinor Wiltshire, 1964. From the National Library of Ireland, no known copyright restrictions. He was right! I had such a history of making promises I couldn’t keep that people stopped trusting me. I didn’t get to work on interesting features or the exciting projects, because nobody trusted me to deliver. That was part of why I left that job – I’d ploughed my reputation into the ground, and I needed to reset. Photo: the library stores at Wellcome Collection. Taken by Thomas SG Farnetti, used under CC BY‑NC 4.0. I got that reset at Wellcome Collection, a London museum and library that some of you might know. I was working a lot with their collections, a lot of data and metadata. Wellcome Collection is building on a long tradition of libraries and archives, which go back thousands of years. Long-term thinking is in their DNA. To give you one example: there’s stuff in the archive that won’t be made public until the turn of the century. Everybody who works there today will be long gone, but they assume that those records will exist in some shape or form when that time comes, and they’re planning for those files to eventually be opened. This is century-scale thinking. Photo: Bob Hoover. From the San Diego Air & Space Museum Archives, no known copyright restrictions. When I started, I sat next to a guy called Chris. (I couldn’t find a good picture of him, but I feel like this photo captures his energy.) Chris was a senior archivist.
He’d been at Wellcome Collection about twenty-five years, and there were very few people – if anyone – who knew more about the archive than he did. He absolutely knew his stuff, and he could have swaggered around like he owned the place. But he didn’t. Something I was struck by, from my very first day, was how curious and humble he was. A bit of a rarity, if you work in software. He was the experienced veteran of the organisation, but he cared about what other people had to say and wanted to learn from them. Twenty-five years in, and he still wanted to learn. He was a nice guy. He was a pleasure to work with, and I think that’s a big part of why he was able to stay in that job as long as he did. We were all quite disappointed when he left for another job! This is my second thought for you: people skills sustain a career more than technical ones. Being a pleasure to work with opens doors and opportunities that technical skill alone cannot. We could do another conference just on what those people skills are, but for now I just want to give you a few examples to think about. Photo: Lt.(jg.) Harriet Ida Pickens and Ens. Frances Wills, first Negro Waves to be commissioned in the US Navy. From the U.S. National Archives, no known copyright restrictions. Be a respectful and reliable teammate. You want to be seen as a safe pair of hands. Reliability isn’t about avoiding mistakes, it’s about managing expectations. If you’re consistently overpromising and underdelivering, people stop trusting you (which I learnt the hard way). If you want people to trust you, you have to keep your promises. Good teammates communicate early when things aren’t going to plan, they ask for help and offer it in return. Good teammates respect the work that went before. It’s tempting to dismiss it as “legacy”, but somebody worked hard on it, and it was the best they knew how to do – recognise that effort and skill, don’t dismiss it. Listen with curiosity and intent. My colleague Chris had decades of experience, but he never acted like he knew everything. He asked thoughtful questions and genuinely wanted to learn from everyone. So many of us aren’t really listening when we’re “listening” – we’re just waiting for the next silence, where we can interject with the next thing we’ve already thought of. We aren’t responding to what other people are saying. When we listen, we get to learn, and other people feel heard – and that makes collaboration much smoother and more enjoyable. Finally, and this is a big one: don’t give people unsolicited advice. We are very bad at this as an industry. We all have so many opinions and ideas, but sometimes, sharing isn’t caring. Feedback is only useful when somebody wants to hear it – otherwise, it feels like criticism, it feels like an attack. Saying “um, actually” when nobody asked for feedback isn’t helpful, it just puts people on the defensive. Asking whether somebody wants feedback, and what sort of feedback they want, will go a long way towards it being useful. So again: people skills sustain a career more than technical skills. There aren’t many truly solo careers in software development – we all have to work with other people – for many of us, that’s the joy of it! If you’re a nice person to work with, other people will want to work with you, to collaborate on projects, they’ll offer you opportunities, it opens doors. Your technical skills won’t sustain your career if you can’t work with other people. Photo: "The Keeper", an exhibition at the New Museum in New York.
Taken by Daniel Doubrovkine, used under CC BY‑NC‑SA 4.0. When I went to Wellcome Collection, it was my first time getting up-close and personal with a library and archive, and I didn’t really know how they worked. If you’d asked me, I’d have guessed they just keep … everything? And it was gently explained to me that “No Alex, that’s hoarding.” “Your overflowing yarn stash does not count as an archive.” Big collecting institutions are actually super picky – they have guidelines about what sort of material they collect, what’s in scope, what isn’t, and they’ll aggressively reject anything that isn’t a good match. At Wellcome Collection, their remit was “the history of health and human experience”. You have medical papers? Definitely interesting! Your dad’s old pile of car magazines? Less so. Photo: a dumpster full of books that have been discarded. From brewbooks on Flickr, used under CC BY‑SA 2.0. Collecting institutions also engage in the practice of “weeding” or “deaccessioning”, which is removing material, pruning the collection. For example, in lending libraries, books will be removed from the shelves if they’ve become old, damaged, or unpopular. They may be donated, or sold, or just thrown away – but whatever happens, they’re gotten rid of. That space is reclaimed for other books. Getting rid of material is a fundamental part of professional collecting, because professionals know that storing something has an ongoing cost. They know they can’t keep everything. Photo: a box full of printed photos. From Miray Bostancı on Pexels, used under the Pexels license. This is something I think about in my current job as well. I currently work at the Flickr Foundation, where we’re thinking about how to keep Flickr’s pictures visible for 100 years. How do we preserve social media, how do we maintain our digital legacy? When we talk to people, one thing that comes up regularly is that almost everybody has too many photos. Modern smartphones have made it so easy to snap, snap, snap, and we end up with enormous libraries with thousands of images, but we can’t find the photos we care about. We can’t find the meaningful memories. We’re collecting too much stuff. Digital photos aren’t expensive to store, but we feel the cost in other ways – the cognitive load of having to deal with so many images, of having to sift through a disorganised collection. Photo: a wheelbarrow in a garden. From Hans Middendorp on Pexels, used under the Pexels license. I think there’s a lesson here for the software industry. What’s the cost of all the code that we’re keeping? We construct these enormous edifices of code, but when do we turn things off? When do we delete code? We’re more focused on new code, new ideas, new features. I’m personally quite concerned by how much generative AI has focused on writing more code, and not on dealing with the code we already have. Code is text, so it’s cheap to store, but it still has a cost – it’s more cognitive load, more maintenance, more room for bugs and vulnerabilities. We can keep all our software forever, but we shouldn’t. Photo: Open Garbage Dump on Highway 112, North of San Sebastian. Taken by John Vachon, 1973. From the U.S. National Archives no known copyright restrictions. I think this is going to become a bigger issue for us. We live in an era of abundance, where we can get more computing resources at the push of a button. But that can’t last forever. What happens when our current assumptions about endless compute no longer hold? 
The climate crisis – where’s all our electricity and hardware coming from? The economics of AI – who’s paying for all these GPU-intensive workloads? And politics – how many of us are dependent on cloud computing based in the US? How many of us feel as good about that as we did three months ago? Libraries are good at making a little go a long way, at eking out their resources, at deciding what’s a good use of resources and what’s waste. Often the people who are good with money are the people who don’t have much of it, and we have a lot of money. It’s easier to make decisions about what to prune and what to keep when things are going well – it’s harder to make decisions in an emergency. This is my third thought for you: long-lasting systems cannot grow without bound; they need weeding. It isn’t sustainable to grow forever, because eventually you get overwhelmed by the weight of everything that came before. We need to get better at writing software efficiently, at turning things off that we don’t need. It’s a skill we’ve neglected. We used to be really good at it – when computers were the size of a room, programmers could eke out every last bit of performance. We can’t do that any more, but it’s so important when building something to last, and I think it’s a skill we’ll have to re-learn soon. Photo: Val Weaver and Vera Askew running in a relay race, Brisbane, 1939. From the State Library of Queensland, no known copyright restrictions. Weeding is a term that comes from the preservation world, so let’s stay there. When you talk to people who work in digital preservation, they often describe it as a relay race. There is no permanent digital media, there’s no digital parchment or stone tablets – everything we have today will be unreadable in a few decades. We’re constantly migrating from one format to another, trying to stay ahead of obsolete technology. Software is also a bit of a relay race – there is no “write it once and you’re done”. We’re constantly upgrading, editing, improving. And that can be frustrating, but it also means we have regular opportunities to learn and improve. We have that chance to reflect, to do things better. Photo: Broken computer monitor found in the woods. By Jeff Myers on Flickr, used under CC BY‑NC 2.0. I think we do our best reflections when computers go bust. When something goes wrong, we spring into action – we do retrospectives, root cause analysis, we work out what went wrong and how to stop it happening again. This is a great way to build software that lasts, to make it more resilient. It’s a period of intense reflection – what went wrong, how do we stop it happening again? What I’ve noticed is that the best systems are doing this sort of reflection all the time – they aren’t waiting for something to go wrong. They know that prevention is better than cure, and they embody it. They give themselves regular time to reflect, to think about what’s working and what’s not – and when they do, great stuff can happen. Photo: Statue of Astrid Lindgren. By Tobias Barz on Flickr, used under CC BY‑ND 2.0. I want to give you one more example. As a sidebar to my day job, I’ve been writing a blog for thirteen years. It’s the longest job – asterisk – I’ve ever had. The indie web is still cool! A lot of what I write, especially when I was starting, was sharing bits of code. “Here’s something I wrote, here’s what it does, here’s how it works and why it’s cool.” Writing about my code has been an incredible learning experience.
You might have heard the saying “ask a developer to review 5 lines of code, she’ll find 5 issues; ask her to review 500 lines and she’ll say it looks good”. When I sit back and deeply read and explain short snippets of my code, I see how to do things better. I get better at programming. Writing this blog has single-handedly had the biggest impact on my skill as a programmer. Photo: Midnight sun in Advent Bay, Spitzbergen, Norway. From the Library of Congress, no known copyright restrictions. There are so many ways to reflect on our work, opportunities to look back and ask how we can do better – but we have to make the most of them. I think we are, in some ways, very lucky that our work isn’t set in stone, that we do keep doing the same thing, that we have the opportunity to do better. Writing this talk has been, in some sense, a reflection on the first decade of my career, and it’s made me think about what I want the next decade to look like. In this talk, I’ve tried to distill some of those things, tried to give you some of the ideas that I want to keep, that I think will help my career and my software to last. Be careful about what you create, what you keep, and how you interact with other people. That care, that process of reflection – that is what creates things that last. [If the formatting of this post looks odd in your feed reader, visit the original article]

a week ago 11 votes
Whose code am I running in GitHub Actions?

A week ago, somebody added malicious code to the tj-actions/changed-files GitHub Action. If you used the compromised action, it would leak secrets to your build log. Those build logs are public for public repositories, so anybody could see your secrets. Scary! Mutable vs immutable references This attack was possible because it’s common practice to refer to tags in a GitHub Actions workflow, for example:

jobs:
  changed_files:
    ...
    steps:
      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v2
        ...

At a glance, this looks like an immutable reference to an already-released “version 2” of this action, but actually this is a mutable Git tag. If somebody changes the v2 tag in the tj-actions/changed-files repo to point to a different commit, this action will run different code the next time it runs. If you specify a Git commit ID instead (e.g. a5b3abf), that’s an immutable reference that will run the same code every time. Tags vs commit IDs is a tradeoff between convenience and security. Specifying an exact commit ID means the code won’t change unexpectedly, but tags are easier to read and compare. Do I have any mutable references? I wasn’t worried about this particular attack because I don’t use tj-actions, but I was curious about what other GitHub Actions I’m using. I ran a short shell script in the folder where I have local clones of all my repos:

find . -path '*/.github/workflows/*' -type f -name '*.yml' -print0 \
  | xargs -0 grep --no-filename "uses:" \
  | sed 's/\- uses:/uses:/g' \
  | tr '"' ' ' \
  | awk '{print $2}' \
  | sed 's/\r//g' \
  | sort \
  | uniq --count \
  | sort --numeric-sort

This prints a tally of all the actions I’m using. Here’s a snippet of the output:

   1 hashicorp/setup-terraform@v3
   2 dtolnay/rust-toolchain@v1
   2 taiki-e/create-gh-release-action@v1
   2 taiki-e/upload-rust-binary-action@v1
   4 actions/setup-python@v4
   6 actions/cache@v4
   9 ruby/setup-ruby@v1
  31 actions/setup-python@v5
  58 actions/checkout@v4

I went through the entire list and thought about how much I trust each action and its author. Is it from a large organisation like actions or ruby? They’re not perfect, but they’re likely to have good security procedures in place to protect against malicious changes. Is it from an individual developer or small organisation? Here I tend to be more wary, especially if I don’t know the author personally. That’s not to say that individuals can’t have good security, but there’s more variance in the security setup of random developers on the Internet than among big organisations. Do I need to use somebody else’s action, or could I write my own script to replace it? This is what I generally prefer, especially if I’m only using a small subset of the functionality offered by the action. It’s a bit more work upfront, but then I know exactly what it’s doing and there’s less churn and risk from upstream changes. I feel pretty good about my list. Most of my actions are from large organisations, and the rest are a few actions specific to my Rust command-line tools which are non-critical toys, where the impact of a compromised GitHub repo would be relatively slight. How this script works This is a classic use of Unix pipelines, where I’m chaining together a bunch of built-in text processing tools. Let’s step through how it works.
find . -path '*/.github/workflows/*' -type f -name '*.yml' -print0

This looks for files that have .yml as a file extension in a folder like .github/workflows/. It prints a list of filenames, like:

./alexwlchan.net/.github/workflows/build_site.yml

The -print0 flag means it prints the filenames with a null byte (\0) between them, which makes it possible to split the filenames in the next step. By default it uses a newline, but a null byte is a bit safer, in case you have filenames which include newline characters. I’m only looking for .yml as a file extension, but if you sometimes use .yaml, you can replace -name '*.yml' with \( -name '*.yml' -o -name '*.yaml' \). If there are folders you want to skip, you can add -not -path rules, like -not -path './cpython/*'.

xargs -0 grep --no-filename "uses:"

This uses xargs to go through the filenames one-by-one. The -0 flag tells it to split on the null byte, and then it runs grep to look for lines that include "uses:" – this is how you use an action in your workflow file. The --no-filename option means this just prints the matching line, and not the name of the file it comes from. Not all of my files are formatted or indented consistently, so the output is quite messy:

    - uses: actions/checkout@v4

sed 's/\- uses:/uses:/g'

If uses: is the first key in the YAML dictionary, the line starts with a leading dash. This sed command replaces "- uses:" with "uses:" to start tidying up the data:

    uses: actions/checkout@v4

sed is a pretty powerful tool for making changes to text, but I only know a couple of simple commands, like this pattern for replacing text: sed 's/old/new/g'.

tr '"' ' '

Some of the lines wrap the value in double quotes; this replaces those quotes with spaces:

    uses: actions/checkout@v4

I could have used sed to make this substitution as well. I reached for tr because I've been using it for longer, and the syntax is simpler for doing single character substitutions: tr '<oldchar>' '<newchar>'

awk '{print $2}'

This prints the second word on each line, which is the name of the action:

actions/checkout@v4

awk is another powerful text utility that I’ve never learnt properly – I only know how to print the nth word in a string. It has a lot of pattern-matching features I’ve never tried.

sed 's/\r//g'

Some of the lines end with a carriage return (\r), and those were included in the awk output. This command gets rid of them, which makes the data more consistent for the final step.

sort | uniq --count | sort --numeric-sort

This sorts the lines, counts how many times each one appears, and sorts again by that count to produce the tally. Each line looks like:

   6 actions/cache@v4

This step-by-step approach is how I build Unix text pipelines: I can write a step at a time, and gradually refine and tweak the output until I get the result I want. There are lots of ways to do it, and because this is a script I’ll use once and then discard, I don’t have to worry too much about doing it in the “purest” way – as long as it gets the right result, that’s good enough. If you use GitHub Actions, you might want to use this script to check your own actions, and see what you’re using. But more than that, I recommend becoming familiar with the Unix text processing tools and pipelines – even in the age of AI, they’re still a powerful and flexible way to cobble together one-off scripts for processing data. [If the formatting of this post looks odd in your feed reader, visit the original article]
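If you prefer Python to a shell pipeline, here’s a rough sketch of the same tally. It makes the same assumptions as the shell script above (local clones under the current directory, workflows stored as .yml files); the script itself is my own illustration, not something from the original post.

from collections import Counter
from pathlib import Path

# Tally the GitHub Actions referenced in local workflow files,
# assuming repo clones live somewhere under the current directory.
tally = Counter()

for path in Path(".").glob("**/.github/workflows/*.yml"):
    for raw_line in path.read_text().splitlines():
        # Normalise "- uses: ..." and quoted values, then grab the action name
        parts = raw_line.strip().lstrip("- ").replace('"', "").split()
        if len(parts) >= 2 and parts[0] == "uses:":
            tally[parts[1]] += 1

# Print the tally, least-used actions first, like the shell version
for count, action in sorted((c, a) for a, c in tally.items()):
    print(f"{count:4d} {action}")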

2 weeks ago 12 votes
Fast and random sampling in SQLite

I was building a small feature for the Flickr Commons Explorer today: show a random selection of photos from the entire collection. I wanted a fast and varied set of photos. This meant getting a random sample of rows from a SQLite table (because the Explorer stores all its data in SQLite). I’m happy with the code I settled on, but it took several attempts to get right. Approach #1: ORDER BY RANDOM() My first attempt was pretty naïve – I used an ORDER BY RANDOM() clause to sort the table, then limit the results: SELECT * FROM photos ORDER BY random() LIMIT 10 This query works, but it was slow – about half a second to sample a table with 2 million photos (which is very small by SQLite standards). This query would run on every request for the homepage, so that latency is unacceptable. It’s slow because it forces SQLite to generate a value for every row, then sort all the rows, and only then does it apply the limit. SQLite is fast, but there’s only so fast you can sort millions of values. I found a suggestion from Stack Overflow user Ali to do a random sort on the id column first, pick my IDs from that, and only fetch the whole row for the photos I’m selecting: SELECT * FROM photos WHERE id IN ( SELECT id FROM photos ORDER BY RANDOM() LIMIT 10 ) This means SQLite only has to load the rows it’s returning, not every row in the database. This query was over three times faster – about 0.15s – but that’s still slower than I wanted. Approach #2: WHERE rowid > (…) Scrolling down the Stack Overflow page, I found an answer by Max Shenfield with a different approach: SELECT * FROM photos WHERE rowid > ( ABS(RANDOM()) % (SELECT max(rowid) FROM photos) ) LIMIT 10 The rowid is a unique identifier that’s used as a primary key in most SQLite tables, and it can be looked up very quickly. SQLite automatically assigns a unique rowid unless you explicitly tell it not to, or create your own integer primary key. This query works by picking a point between the biggest and smallest rowid values used in the table, then getting the rows with rowids which are higher than that point. If you want to know more, Max’s answer has a more detailed explanation. This query is much faster – around 0.0008s – but I didn’t go this route. The result is more like a random slice than a random sample. In my testing, it always returned contiguous rows – 101, 102, 103, … – which isn’t what I want. The photos in the Commons Explorer database were inserted in upload order, so photos with adjacent row IDs were uploaded at around the same time and are probably quite similar. I’d get one photo of an old plane, then nine more photos of other planes. I want more variety! (This behaviour isn’t guaranteed – if you don’t add an ORDER BY clause to a SELECT query, then the order of results is undefined. SQLite is returning rows in rowid order in my table, and a quick Google suggests that’s pretty common, but that may not be true in all cases. It doesn’t affect whether I want to use this approach, but I mention it here because I was confused about the ordering when I read this code.) Approach #3: Select random rowid values outside SQLite Max’s answer was the first time I’d heard of rowid, and it gave me an idea – what if I chose random rowid values outside SQLite? This is a less “pure” approach because I’m not doing everything in the database, but I’m happy with that if it gets the result I want. Here’s the procedure I came up with: Create an empty list to store our sample. 
Find the highest rowid that’s currently in use: sqlite> SELECT MAX(rowid) FROM photos; 1913389 Use a random number generator to pick a rowid between 1 and the highest rowid: >>> import random >>> random.randint(1, max_rowid) 196476 If we’ve already got this rowid, discard it and generate a new one. (The rowid is a signed, 64-bit integer, so the minimum possible value is always 1.) Look for a row with that rowid: SELECT * FROM photos WHERE rowid = 196476 If such a row exists, add it to our sample. If we have enough items in our sample, we’re done. Otherwise, return to step 3 and generate another rowid. If such a row doesn’t exist, return to step 3 and generate another rowid. This requires a bit more code, but it returns a diverse sample of photos, which is what I really care about. It’s a bit slower, but still plenty fast enough (about 0.001s). This approach is best for tables where the rowid values are mostly contiguous – it would be slower if there are lots of rowids between 1 and the max that don’t exist. If there are large gaps in rowid values, you might try multiple missing entries before finding a valid row, slowing down the query. You might want to try something different, like tracking valid rowid values separately. This is a good fit for my use case, because photos don’t get removed from Flickr Commons very often. Once a row is written, it sticks around, and over 97% of the possible rowid values do exist. Summary Here are the four approaches I tried:

Approach                   | Performance (for 2M rows) | Notes
ORDER BY RANDOM()          | ~0.5s                     | Slowest, easiest to read
WHERE id IN (SELECT id …)  | ~0.15s                    | Faster, still fairly easy to understand
WHERE rowid > ...          | ~0.0008s                  | Returns clustered results
Random rowid in Python     | ~0.001s                   | Fast and returns varied results, requires code outside SQL

I’m using the random rowid in Python in the Commons Explorer, trading code complexity for speed. I’m using this random sample to render a web page, so it’s important that it returns quickly – when I was testing ORDER BY RANDOM(), I could feel myself waiting for the page to load. But I’ve used ORDER BY RANDOM() in the past, especially for asynchronous data pipelines where I don’t care about absolute performance. It’s simpler to read and easier to see what’s going on. Now it’s your turn – visit the Commons Explorer and see what random gems you can find. Let me know if you spot anything cool! [If the formatting of this post looks odd in your feed reader, visit the original article]
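To make the procedure above concrete, here’s a minimal Python sketch of approach #3 using the standard library’s sqlite3 module. The photos table and rowid column come from the post; the function name and the photos.db filename are placeholders of my own.

import random
import sqlite3

def random_sample(conn: sqlite3.Connection, sample_size: int) -> list:
    # Find the highest rowid currently in use
    max_rowid = conn.execute("SELECT MAX(rowid) FROM photos").fetchone()[0]

    sample = []
    tried = set()

    while len(sample) < sample_size:
        # Pick a random rowid between 1 and the highest rowid,
        # discarding any value we've already tried
        rowid = random.randint(1, max_rowid)
        if rowid in tried:
            continue
        tried.add(rowid)

        # Look for a row with that rowid; rowids can have gaps,
        # so a missing row just means we go round the loop again
        row = conn.execute(
            "SELECT * FROM photos WHERE rowid = ?", (rowid,)
        ).fetchone()
        if row is not None:
            sample.append(row)

    return sample

# Example usage, assuming a local photos.db database:
# conn = sqlite3.connect("photos.db")
# print(random_sample(conn, 10))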

3 weeks ago 15 votes
We all lose when art is anonymised

One rabbit hole I can never resist going down is finding the original creator of a piece of art. This sounds simple, but it’s often quite difficult. The Internet is a maze of social media accounts that only exist to repost other people’s art, usually with minimal or non-existent attribution. A popular image spawns a thousand copies, each a little further from the original. Signatures get cropped, creators’ names vanish, and we’re left with meaningless phrases like “no copyright intended”, as if that magically absolves someone of artistic theft. Why do I do this? I’ve always been a bit obsessive, a bit completionist. I’ve worked in cultural heritage for eight years, which has made me more aware of copyright and more curious about provenance. And it’s satisfying to know I’ve found the original source, that I can’t dig any further. This takes time. It’s digital detective work, using tools like Google Lens and TinEye, and it’s not always easy or possible. Sometimes the original pops straight to the top, but other times it takes a lot of digging to find the source of an image. So many of us have become accustomed to art as an endless, anonymous stream of “content”. A beautiful image appears in our feed, we give it a quick heart, and scroll on, with no thought for the human who sweated blood and tears to create it. That original artist feels distant, disconnected. Whatever benefit they might get from the “exposure” of their work going viral, they don’t get any if their name has been removed first. I came across two examples recently that remind me it’s not just artists who miss out – it’s everyone who enjoys art. I saw a photo of some traffic lights on Tumblr. I love their misty, nighttime aesthetic, the way the bright colours of the lights cut through the fog, the totality of the surrounding darkness. But there was no name – somebody had just uploaded the image to their Tumblr page, it was reblogged a bunch of times, and then it appeared on my dashboard. Who took it? I used Google Lens to find the original photographer: Lucas Zimmerman. Then I discovered it was part of a series. And there was a sequel. I found interviews. Context. Related work. I found all this cool stuff, but only because I knew Lucas’s name. Traffic Lights, by Lucas Zimmerman. Published on Behance.net under a CC BY‑NC 4.0 license, and reposted here in accordance with that license. The second example was a silent video of somebody making tiny chess pieces, just captioned “wow”. It was clearly an edit of another video, with fast-paced cuts to make it accommodate a short attention span – and again with no attribution. This was a little harder to find – I had to search several frames in Google Lens before I found a summary on a Russian website, which had a link to a YouTube video by metalworker and woodworker Левша (Levsha). This video is four times longer than the cut-up version I found, in higher resolution, and with commentary from the original creator. I don’t speak Russian, but YouTube has auto-translated subtitles. Now I know how this amazing set was made, and I have a much better understanding of the materials and techniques involved. (This includes the delightful name Wenge wood, which I’d never heard before.) https://youtube.com/watch?v=QoKdDK3y-mQ A piece of art is more than just a single image or video. It’s a process, a human story. When art is detached from its context and creator, we lose something fundamental.
Creators lose the chance to benefit from their work, and we lose the opportunity to engage with it in a deeper way. We can’t learn how it was made, find their other work, or discover how to make similar art for ourselves. The Internet has done many wonderful things for art, but it’s also a machine for endless copyright infringement. It’s not just about generative AI and content scraping – those are serious issues, but this problem existed long before any of us had heard of ChatGPT. It’s a thousand tiny paper cuts. How many of us have used an image from the Internet because it showed up in a search, without a second thought for its creator? When Google Images says “images may be subject to copyright”, how many of us have really thought about what that means? Next time you want to use an image from the web, look to see if it’s shared under a license that allows reuse, and make sure you include the appropriate attribution – and if not, look for a different image. Finding the original creator is hard, sometimes impossible. The Internet is full of shadows: copies of things that went offline years ago. But when I succeed, it feels worth the effort – both for the original artist and myself. When I read a book or watch a TV show, the credits guide me to the artists, and I can appreciate both them and the rest of their work. I wish the Internet was more like that. I wish the platforms we rely on put more emphasis on credit and attribution, and the people behind art. The next time an image catches your eye, take a moment. Who made this? What does it mean? What’s their story? [If the formatting of this post looks odd in your feed reader, visit the original article]

4 weeks ago 14 votes

More in programming

1995 Was the Most Important Year for the Web

The world changed a lot in 1995. And for the web, it was a transformational year. The post 1995 Was the Most Important Year for the Web appeared first on The History of the Web.

23 hours ago 3 votes
Solving a "Layton Puzzle" with Prolog

I have a lot in the works for this month’s Logic for Programmers release. Among other things, I’m completely rewriting the chapter on Logic Programming Languages. I originally showcased the paradigm with puzzle solvers, like eight queens or four-coloring. Lots of other demos do this too! It takes creativity and insight for humans to solve them, so a program doing it feels magical. But I’m trying to write a book about practical techniques, and I want everything I talk about to be useful. So in v0.9 I’ll be replacing these examples with a couple of new programs that might get people thinking that Prolog could help them in their day-to-day work. On the other hand, for a newsletter, showcasing a puzzle solver is pretty cool. And recently I stumbled into this post by my friend Pablo Meier, where he solves a videogame puzzle with Prolog:1 Summary for the text-only readers: We have a test with 10 true/false questions (denoted a/b) and four student attempts. Given the scores of the first three students, we have to figure out the fourth student's score. bbababbabb = 7 baaababaaa = 5 baaabbbaba = 3 bbaaabbaaa = ??? You can see Pablo's solution here, and try it in SWI-Prolog here. Pretty cool! But after way too long studying Prolog just to write this dang book chapter, I wanted to see if I could do it more elegantly than him. Code and puzzle spoilers to follow. (Normally here's where I'd link to a gentler introduction I wrote but I think this is my first time writing about Prolog online? Uh here's a Picat intro instead) The Program You can try this all online at SWISH or just jump to my final version here. :- use_module(library(dif)). % Sound inequality :- use_module(library(clpfd)). % Finite domain constraints First some imports. dif lets us write dif(A, B), which is true if A and B are not equal. clpfd lets us write A #= B + 1 to say "A is 1 more than B".2 We'll say both the student submission and the key will be lists, where each value is a or b. In Prolog, lowercase identifiers are atoms (like symbols in other languages) and identifiers that start with a capital are variables. Prolog finds values for variables that match equations (unification). The pattern matching is real real good. % ?- means query ?- L = [a,B,c], [Y|X] = [1,2|L], B + 1 #= 7. B = 6, L = [a, 6, c], X = [2, a, 6, c], Y = 1 Next, we define score/3 recursively. % The student's test score % score(student answers, answer key, score) score([], [], 0). score([A|As], [A|Ks], N) :- N #= M + 1, score(As, Ks, M). score([A|As], [K|Ks], N) :- dif(A, K), score(As, Ks, N). The first argument is the student's answers, the second is the answer key, and the third is the final score. The base case is the empty test, which has score 0. Otherwise, we take the head values of each list and compare them. If they're the same, we add one to the score, otherwise we keep the same score. Notice we couldn't write if x then y else z, we instead used pattern matching to effectively express (x && y) || (!x && z). Prolog does have a conditional operator, but it prevents backtracking so what's the point??? A quick break about bidirectionality One of the coolest things about Prolog: all purely logical predicates are bidirectional. We can use score to check if our expected score is correct: ?- score([a, b, b], [b, b, b], 2). true But we can also give it answers and a key and ask it for the score: ?- score([a, b, b], [b, b, b], X). X = 2 Or we could give it a key and a score and ask "what test answers would have this score?" ?- score(X, [b, b, b], 2).
X = [b, b, _A], dif(_A,b) X = [b, _A, b], dif(_A,b) X = [_A, b, b], dif(_A,b) The different value is written _A because we never told Prolog that the array can only contain a and b. We'll fix this later. Okay back to the program Now that we have a way of computing scores, we want to find a possible answer key that matches all of our observations, i.e. gives everybody the correct scores. key(Key) :- % Figure it out score([b, b, a, b, a, b, b, a, b, b], Key, 7), score([b, a, a, a, b, a, b, a, a, a], Key, 5), score([b, a, a, a, b, b, b, a, b, a], Key, 3). So far we haven't explicitly said that the Key length matches the student answer lengths. This is implicitly verified by score (both lists need to be empty at the same time) but it's a good idea to explicitly add length(Key, 10) as a clause of key/1. We should also explicitly say that every element of Key is either a or b.4 Now we could write a second predicate saying Key had the right 'type': keytype([]). keytype([K|Ks]) :- member(K, [a, b]), keytype(Ks). But "generating lists that match a constraint" is a thing that comes up often enough that we don't want to write a separate predicate for each constraint! So after some digging, I found a more elegant solution: maplist. Let L=[l1, l2]. Then maplist(p, L) is equivalent to the clause p(l1), p(l2). It also accepts partial predicates: maplist(p(x), L) is equivalent to p(x, l1), p(x, l2). So we could write5 contains(L, X) :- member(X, L). key(Key) :- length(Key, 10), maplist(contains([a,b]), Key), % the score stuff Now, let's query for the Key: ?- key(Key) Key = [a, b, a, b, a, a, b, a, a, b] Key = [b, b, a, b, a, a, a, a, a, b] Key = [b, b, a, b, a, a, b, b, a, b] Key = [b, b, b, b, a, a, b, a, a, b] So there are actually four different keys that all explain our data. Does this mean the puzzle is broken and has multiple different answers? Nope The puzzle wasn't to find out what the answer key was, the point was to find the fourth student's score. And if we query for it, we see all four solutions give him the same score: ?- key(Key), score([b, b, a, a, a, b, b, a, a, a], Key, X). X = 6 X = 6 X = 6 X = 6 Huh! I really like it when puzzles look like they're broken, but every "alternate" solution still gives the same puzzle answer. Total program length: 15 lines of code, compared to the original's 80 lines. Suck it, Pablo. (Incidentally, you can get all of the answers at once by writing findall(X, (key(Key), score($answer-array, Key, X)), L).) I still don't like puzzles for teaching The actual examples I'm using in the book are "analyzing a version control commit graph" and "planning a sequence of infrastructure changes", which are somewhat more likely to occur at work than needing to solve a puzzle. You'll see them in the next release! I found it because he wrote Gamer Games for Lite Gamers as a response to my Gamer Games for Non-Gamers. ↩ These are better versions of the core Prolog expressions \+ (A = B) and A is B + 1, because they can defer unification. ↩ Prolog-descendants have a convention of writing the arity of the function after its name, so score/3 means "score has three parameters". I think they do this because you can overload predicates with multiple different arities. Also Joe Armstrong used Prolog for prototyping, so Erlang and Elixir follow the same convention. ↩ It still gets the right answers without this type restriction, but I had no idea it did until I checked for myself. Probably better not to rely on this! ↩ We could make this even more compact by using a lambda function.
First import module yall, then write maplist([X]>>member(X, [a,b]), Key). But (1) it's not a shorter program because you replace the extra definition with an extra module import, and (2) yall is SWI-Prolog specific and not an ISO-standard prolog module. Using contains is more portable. ↩
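As a quick sanity check on the puzzle itself (not on the Prolog above, which is from the post), here’s a brute-force sketch in Python: enumerate all 2^10 possible answer keys, keep the ones consistent with the three known scores, and see what score they give the fourth student.

from itertools import product

def score(answers, key):
    # Number of questions where the student's answer matches the key
    return sum(a == k for a, k in zip(answers, key))

known = [
    ("bbababbabb", 7),
    ("baaababaaa", 5),
    ("baaabbbaba", 3),
]
student4 = "bbaaabbaaa"

# Collect the fourth student's score under every key consistent
# with the three known scores.
possible = {
    score(student4, key)
    for key in product("ab", repeat=10)
    if all(score(answers, key) == expected for answers, expected in known)
}
print(possible)  # {6}: every consistent key gives the same score, matching the post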

15 hours ago 3 votes
Market Ending Moves

Startup CEOs should ask themselves what crazy ideas can turn into a move that just ends a market's competitive dynamic

15 hours ago 3 votes
A Data Engineering Perspective of LLMs

Data engineering is a field I would categorize as a subspecialty of software engineering. It shares the same concerns as software engineering—scalability, maintainability, and other “-ilities”—but its primary focus is on data. It’s a unique discipline because data is inherently messy, and as a result, no standard enterprise framework has emerged to dominate the space—and […]

10 hours ago 2 votes
Why we won't hire a junior with five years of experience

We just opened a search for a new junior programmer at 37signals. It's been years since we last hired a junior, but the real reason the listing is turning heads is because we're open about the yearly salary: $145,849*. That's high enough that programmers with lots of experience are asking whether they could apply, even if they aren't technically "junior". The answer is no. The reason we're willing to pay a junior more than most is because we're looking for a junior who's better than most. Not better in "what do they already know", but in "how far could they go". We're hiring for peak promise — and such promise only remains until it's revealed. Maybe it sounds a little harsh, but a programmer who's been working professionally for five years has likely already revealed their potential. What you're going to get is roughly what you see. That doesn't mean that people can't get better after that, but it means that the trajectory by which they improve has already been plotted. Whereas a programmer who's either straight out of school or fresh off their first internship or short-stint job is essentially all potential. So you draw their line on the basis of just a few early dots, but the line can be steep. It's not that different from something like the NFL scouting combine. Teams fight to find the promise of The Next All-Star. These rookies won't have the experience that someone who's already played in the league for years would have, but they have the potential to be the best. Someone who's already played for several seasons will have shown what they have and be weighed accordingly. This is not easy to do! Plenty of rookies, in sports and programming, may show some early potential, then fail to elevate their game to where the buyer is betting it could be. But that's the chance you take to land someone extraordinary. So if you know a junior programmer with less than three years of industry experience who is sparkling with potential, do let them know of our listing. And if you know someone awesome who's already a senior programmer, we also have an opening for them. *It's a funnily precise number because it's pulled directly from the Radford salary database, which we query for the top 10% of San Francisco salaries for junior programmers.

yesterday 2 votes