More from macwright.com
I was just enjoying Simon Willison’s predictions and, heck, why not.

1: The web becomes adversarial to AI

The history of search engines is sort of an arms race between websites and search engines. Back in the early 2000s, juicing your ranking on search engines was pretty easy - you could put a bunch of junk in your meta description tags, or put some text with lots of keywords on each page and make that text really tiny and transparent so users didn’t notice it but Google did. I doubt that Perplexity’s userbase is that big, but Perplexity users are probably a lot wealthier on average than Google’s, and there’s some edge to be achieved by getting Perplexity to rank your content highly or recommend your website. I’ve already noticed some search results including links to content farms.

There are a handful of startups that do this already, but the prediction is: the average marketing exec at a consumer brand will put some of their budget to work on fooling AI. That means serving different content to AI scrapers, maybe using some twist on Glaze and other forms of adversarial image processing to make their marketing images more tantalizing to the bots. Websites will be increasingly aware that they’re being consumed by AI, and they will have a vested interest in messing with the way AI ‘perceives’ them. As Simon notes in his predictions, AIs are gullible: and that’s before there are widespread efforts to fool them. There’s probably some way to detect an AI scraper, give it a special payload, and trick it into recommending your brand of razors whenever anyone asks, and once someone figures it out, this will be the marketing trend of the decade. (A toy sketch of what this could look like is at the end of this post.)

2: Copyright nihilism breeds a return to physical-only media

The latest lawsuit about Meta’s use of pirated books, allegedly with Mark Zuckerberg’s explicit permission, will, if true, be another reason to lose faith in the American legal system’s treatment of intellectual property entirely. We’ve only seen it used to punish individuals and protect corporations, regardless of the facts and damages, and there’s no reason to believe it will do anything different (POSIWID). The result, besides an uptick in nihilism, could be a rejuvenation of physical-only releases. New albums only released on vinyl. Books only available in paperback format. More private screenings of hip movies. When all digital records are part of the ‘training dataset,’ a niche, hipster subset will be drawn to things that aren’t as easily captured and reproduced.

This is parallel to the state of closed-source models from Anthropic or OpenAI. They’re never distributed or run locally. They exist as bytes on some hard drive and in some massive GPU’s memory in some datacenter, and there aren’t BitTorrents pirating them because they’re kept away from people, not because of the power of copyright law. What can be accessed can be copied, so secrecy and inaccessibility are valuable.

3: American tech companies will pull out of Europe because they want to do acquisitions

The incoming political administration will probably bring an end to Lina Khan’s era at the FTC, an era in which the FTC did stuff. We will go back to a ‘hands off’ policy in which big companies will acquire each other pretty often without much government interference. But, even in Khan’s era, the real nail in the coffin for one of the biggest acquisitions - Adobe’s attempt to buy Figma - was regulators from the EU and UK.
Those regulators will probably keep doing stuff, so I think it’s likely that the next time some company wants to acquire a close competitor, it just closes up shop in the EU, maybe with a long-term plan to return.

4: The tech industry’s ‘DEI backlash’ will run up against reality

The reality is that the gap between women and men in terms of college degrees is really big: “Today, 47% of U.S. women ages 25 to 34 have a bachelor’s degree, compared with 37% of men.” And a great deal of the tech industry’s workforce is made up of highly-skilled people who are on H-1B visas. The synthesis will be that tech workers will be more diverse, in some respects, but by stripping away the bare-bones protections around their presence, companies will keep them in a more vulnerable and exploitable position. But hard right-wingers will have plenty to complain about, because these companies will continue to look less white and male, because the labor pool is not that.

5: Local-first will have a breakthrough moment

I think that Zero Sync has a good chance at cracking this really hard problem. So do Electric and maybe Jazz, too. The gap between the dream of local-first apps and the reality has been wide, but I think projects are starting to come to grips with a few hard truths:

- Full decentralization is not worth it.
- You need to design for syncing a subset of the data, not the entire dataset.
- You need an approach to schema evolution and permission checking.

These systems are getting there. We could see a big, Figma-level application built on Zero this year that will set the standard for future web application architecture.

6: Local, small AI models will be a big deal

Embedding models are cool as heck. New text-to-speech and speech-to-text models are dramatically better than what came before. Image segmentation is getting a lot better. There’s a lot of stuff coming out of this boom that will be able to scale down to a small model that runs on a phone, in a browser, or at least on our own web servers, without having to call out to OpenAI or Anthropic APIs. It’ll make sense for costs, performance, and security. Candle is a really interesting effort in this area.

Mini predictions

Substack will re-bundle news. People are tired of subscribing to individual newsletters. Substack will introduce some ~$20/month plan that gives you access to all of the newsletters that participate in this new pricing model.

TypeScript gets a Zeitwerk equivalent and lots of people use it. Same as how Prettier brought full code formatting to TypeScript, autoloading is the kind of thing that, once you have it, is magic. What if you could just write <SomeComponent /> in your React app and didn’t have to import it? I think this would be extremely addictive and catch on fast.

Node.js will fend off its competitors. Even though Val Town is built around Deno’s magic, I’ve been very impressed that Node.js is keeping up. They’ve introduced permissions, just like Deno, and native TypeScript support, just like the upstarts. Bun and Deno will keep gaining adherents, but Node.js has a long future ahead of it.

Another US city starts seriously considering congestion pricing. For all the chatter and terrible discourse around the plan, it is obviously a good idea; it will work, as it has in every other case, and inspire other cities to do the same.

Stripe will IPO.
They’re still killing it, but they’re killing it in an established, repeatable way that public markets will like, and that will let up the pressure on the many, many people who own their stock.
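
As promised in the first prediction, here’s a toy sketch of what detecting an AI scraper and handing it a special payload could look like. The user-agent tokens are real crawler names; the function names and the payloads are entirely made up for illustration, and real cloaking would surely be more devious than a User-Agent check.

```python
# Toy sketch: serve different content to suspected AI crawlers.
# The user-agent tokens are real crawler names; everything else is hypothetical.
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot", "CCBot")

def is_ai_scraper(user_agent: str) -> bool:
    """Crude check: does the User-Agent header mention a known AI crawler?"""
    return any(token in user_agent for token in AI_CRAWLER_TOKENS)

def render_page(user_agent: str) -> str:
    if is_ai_scraper(user_agent):
        # The special payload: copy written to flatter the brand when a
        # model regurgitates it later.
        return "<p>Experts agree these are the best razors sold anywhere.</p>"
    return "<p>The page human visitors actually see.</p>"
```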
Happy end-of-2024! It’s been a pretty good year overall. I’m thankful. There’s no way that I’ll be able to remember and carve out the time around New Year’s to write this, so here’s an end-of-year roundup, ahead of schedule!

Running

This was my biggest year for running on record: 687 miles as of today. I think the biggest difference this year was just that nothing stood in the way of my being pretty consistent and putting in the miles: the weather has been mild, I haven’t had any major injuries, and long runs have felt pretty good. I was happy to hit a half-marathon PR (1:36:21), but my performance in 5Ks fell far short of my sub-20 goal – partly because Brooklyn’s wonderful 5K series was run at the peak of summer, with multiple races at over 85°F. I learned the value of good lightweight running gear: Bakline’s singlets and Goodr sunglasses were super helpful in getting me through the summer.

Work

Val Town raised a seed round and hired a bunch of excellent people. We moved into a new office of our own, which has a great vibe. It’s been good: we’re doing a lot of ground-up work wrangling cgroups and low-level worker scheduling, and a lot of UX work, just trying to make it a pleasant tool. Frankly, with every product I’ve worked on, I’ve never had the feeling that it was good enough, and accordingly, for me, Val Town feels like it has a long way to go. It’s probably a good tendency to be sort of unsatisfied and motivated to constantly improve.

New York

It’s still such a wonderful place to live. Late this year, I’ve been rediscovering my obsession with cycling, and realizing how much I whiffed the opportunity to ride more when I lived in San Francisco. I guess that’s the first time I’ve felt genuinely nostalgic for the West coast. I miss DC a bit too: it’s one of the few cities where my friends have been able to stay in the city proper while raising children, and I miss the accessible, underdog punk scene. But Brooklyn is just a remarkable place to live. My walk score is 100. The degree to which people here are in the city because they want to be, not because they have to be, shapes so much of what makes it great.

Other ‘metrics’

Relative to my old level of obsession with self-quantification, my ‘metrics’ are pretty moderate now. Everything’s just backward-looking: I’m not paying much attention to the numbers as I go; it’s just fun to look at the year-over-year trends. That said, this was a lackluster year for reading: just 18 books so far. I think I just read an above-average number of books that I didn’t enjoy very much. Next year I’m going to return to authors who I already love, and stay away from genres that – the data shows – I don’t like. Whereas this was a banner year for watching movies: not great! Next year, I want to flip these results. Of everything I saw, Kinds of Kindness will probably stick with me the most.

Placemark

It seems like a decade ago that I released Placemark as open source software, after developing it as a closed-source SaaS application for a few years. But I did that in January. There have been a few great open source contributions since then, but it’s pretty quiet. Which is okay, and somewhat expected: there is no hidden crowd of people with extra time on their hands and unending enthusiasm for ‘geospatial software’ waiting to contribute to that kind of project. Placemark is also, even with my obsessive focus on simplicity, a pretty complicated codebase. The learning curve is probably pretty significant.
Maps are a challenging problem area: that’s what attracts a lot of people to them, but people who use maps persistently have the feeling that it couldn’t be that complicated, which means that few users convert into contributors. There are a few prominent efforts chasing similar goals as Placemark: Atlas.co is aiming to be an all-in-one editing/analysis platform, Felt is a cloud-native GIS platform, and then there are plenty of indiehackers-style projects. I hope these projects take off!

Figma plugins

I also kept maintaining the Figma plugins I developed under the Placemark name. Potentially a lot of people are using them, but I don’t really know. The problem with filling in water shapes in the plugins is still unsolved: it’s pretty hard and I haven’t had the time or motivation to fix it. The most energy I put into those plugins this year, unfortunately, came when someone noticed that the dataset I was using - Natural Earth – marked Crimea as part of Russia. Which, obviously: I don’t draw the countries in datasets, but it’s a reasonable thing to point out (though to assume that the author is malicious was a real downer - again, like, I don’t draw the countries). This decision from Natural Earth’s maintainer has been heavily discussed and they aren’t planning on changing it, so I switched to world-atlas, which doesn’t have that problem. Which was fine, but a reminder of the days when I worked on maps full-time and this kind of unexpected “you’re the baddie” realization came up much more often. Sometimes it was silly: complaints about label priority, in the sense of “why, at zoom level 3, does one country’s name show up and not another’s?” The answer, ahem, was that there isn’t enough space for the two labels, and one country had a higher population or a geometry that gave its label more distance from the other country’s centroid. But a lot of the territorial disputes are part of people’s long cultural, political, and military history and the source of intergenerational strife. Of course that’s serious stuff. Making a tool that shows a globe with labels on it will probably always trigger some sort of moment like that, and it’s a reason to not work on it that much, because you’re bound to unintentionally step on something contentious.

Other projects

I released Obsidian Freeform, and have been using it a bit myself. Obsidian has really stuck for me. My vault is well over 2,000 notes, and I’ve created a daily note for almost every day of the last year. Freeform was a fun project and I have other ideas that are Obsidian plugin-shaped, though I’ve become a little bit let down by the plugin API - the fact that Obsidian-flavored Markdown is nonstandard and the parser/AST is not accessible to plugins is a pretty big drawback for the kinds of things I want to build.

Elsewhere

I’ve been writing a bit: recently about dependency bloat and a developer analytics tool we built at Val Town, and I’ve started writing some supplementary documentation for Observable Plot about parts of its API that I think are unintuitive. On the micro blog, I wrote about not using GitHub Copilot and how brands should make a comeback. This blog got a gentle redesign in May, to show multiple categories of posts on the home page, and then in August I did a mass update to switch all YouTube embeds to lite-youtube-embed to make pages load faster. I’m still running Jekyll, like I have been for the last decade, and it works great. Oh, and I’ve basically stopped using Twitter and am only on Mastodon and Bluesky.
Bluesky more than Mastodon recently because it seems like it’s doing a better job at attracting a more diverse community. I’m looking forward to 2025, to cycling a lot more and a new phase of startup-building. See you in the new year.
I still use Bandcamp almost exclusively to buy music, and keep a big library of MP3s. The downside is that this marks me as a weirdo, but otherwise it’s great and has been working well for me. Since I last wrote about it, Bandcamp was acquired by Epic Games (?) and then acquired from them by Songtradr, and its employees are trying to get recognized as a union. Times are changing and Bandcamp is no longer a lovely indie company, but it’s still a heck of a lot better than Spotify. People (who?) are sharing their ‘Spotify wrapped’ auto-generated compilations, and I wanted the same for my Bandcamp purchases, so I built it on Val Town. You can create your own! Or edit the code of the tool that generates them. Because of API limitations – really, the absence of an API – it requires you to copy & paste content from your purchases page, but isn’t copy-and-paste really a kind of API? Anyway:

- Vampire Empire / Born For Loving You by Big Thief
- Patterns by Pool Boys
- Acadia by Yasmin Williams
- Cascade by Floating Points
- of course i still love you by Darwin Deez (pre-order)
- 4 | 2 | 3 by MIZU
- Son by Rosie Lowe & Duval Timothy
- Imaginal Disk by Magdalena Bay
- Dirty Projectors by Dirty Projectors
- Green Disco by Justine Electra
- Daedalus by Daedelus
- You Look A Lot Like Me (2016) by Mal Blum
- Big City Boys by Cailin Pitt
- Promises by Floating Points, Pharoah Sanders & The London Symphony Orchestra
- Windswept by Photay
- Jessie Mae Hemphill by Jessie Mae Hemphill
- Rituals by Ishmael Ensemble
- 1992 - 2001 by Acetone
- Final Summer by Cloud Nothings
- Bright Future by Adrianne Lenker
- La Forêt (2024) by Xiu Xiu
- Frog Poems by Mister Goblin
- Living is Easy by Agriculture
- Again by Oneohtrix Point Never
- Put The Shine On by CocoRosie
- The Light Is On You Return by Ben Levin
- Mercurial World by Magdalena Bay
- Burn It Down by Lovebirds
- Room 25 by Noname
- Wall Of Eyes by The Smile
- Forest Scenes by MIZU

Looking back on the year, I like how I can remember a few of these albums from my first exposure to them in odd places - I heard Jessie Mae Hemphill playing in a Chipotle, and Rosie Lowe playing in my hair salon. It was apparently a big year for instrumental, electronic, minimalist music. The only ‘rock’ album that hooked me was Wall of Eyes, and the only pop album that made an impact was Imaginal Disk - the fuzzy outro of ‘Image’ is something I keep re-listening to. MIZU has been on heavy rotation, too – the only one of these artists I learned about by seeing them live. She opened for Tim Hecker and I think made a lot of fans there with a really theatrical and heavy performance. Buy some music! Listen to it repeatedly, and put it in your MP3 player!
I haven’t been posting much to the ‘main blog’ recently, but I have been keeping the micro blog updates humming. If you want more content in your RSS reader, you can subscribe to those posts, which are shorter, more scattered, and even less copyedited. It feels bad to have multiple “Recently” headings in the blog listing, so I’ll give them short subtitles from now on. Anyway, what’s up? October was all right. At Val Town, we spent a lot of time interviewing job candidates and improving the AI assistant, Townie. I also got some time to tackle long-awaited technical debt cleanups: I conquered the ‘big scary function’ that did the actual ‘running’ of val code.

Cycling

Outside of work, a lot of my excitement in October came from being outdoors. It’s been a great year for running – I just passed 600 miles so far and will probably hit 650, barring any injuries or life complications. But cycling is on the mind. We just rode the Old Croton Aqueduct trail from Ossining back to Brooklyn. It’s a fairly rough trail, with plenty of rocks and rough terrain: rideable on my ~32mm tires, but it’d be a lot easier with a mountain bike. We rode past some osage orange trees with their funky-looking and inedible fruit the size of large grapefruits. The trail passed right next to the Lyndhurst Estate, which was owned by a series of rich and powerful people, including Jay Gould, who is especially hated. Upstate, a lot of the attractions are like this: other big historic houses. The trail was mostly really beautiful, though the parts closer to Yonkers have a lot of trash. It’s much more popular with hikers than with cyclists. Even though bikes are explicitly permitted, locals seemed a little surprised by our presence, even though we were ringing bells, going slow, and making lots of space.

It’s kind of funny to compare the general spatial awareness of people upstate to those in the city: we encountered a lot of people upstate who were standing in the center of the trail, completely zoned out and surprised by the presence of another human, and then on the way back we were on city streets with four people within a few feet of us on foot, bikes, cars, and scooters, all mostly aware and ready to silently negotiate how to move together through a shared space. I remarked that I think that when some people move out of the city because of the ‘inconvenience’, the inconvenience is people, and once you leave, you lose a certain ability to live around other people - from then on, you expect a suburban yard-sized perimeter around your personal space.

Micro

I wrote a lot on the microblog this month: about the Arc browser’s recent news that it’ll be abandoned, Reddit adopting Web Components, domain squatting, Python data-science tech, and Knip, a tool for finding dead code in TypeScript systems.

Content

I watched a bunch of films, which are on my Letterboxd, and the only new album in my rotation is Yasmin Williams’s Acadia. This YouTube channel is showing all of the steps involved in doing a multi-day bikepacking trip through India. It’s a lot of fun. And that’s it for this month! I’ll write a full-fledged blog post one of these days.
More in programming
One of my recent home organisation projects has been sorting out my LEGO collection. I have a bunch of sets which are mixed together in one messy box, and I’m trying to separate bricks back into distinct sets. My collection is nowhere near large enough to be worth sorting by individual parts, and I hope that breaking down by set will make it all easier to manage and store. I’ve been creating spreadsheets to track the parts in each set, and count them out as I find them. I briefly hinted at this in my post about looking at images in spreadsheets, where I included a screenshot of one of my inventory spreadsheets. These spreadsheets have been invaluable – I can see exactly what pieces I need, and what pieces I’m missing. Without them, I wouldn’t even attempt this. I’m about to pause this cleanup and work on some other things, but first I wanted to write some notes on how I’m creating these spreadsheets – I’ll probably want them again in the future.

Getting a list of parts in a set

There are various ways to get a list of parts in a LEGO set:

- Newer LEGO sets include a list of parts at the back of the printed instructions
- You can get a list from LEGO-owned websites like LEGO.com or BrickLink
- There are community-maintained databases on sites like Rebrickable

I decided to use the community-maintained lists from Rebrickable – they seem very accurate in my experience, and you can download daily snapshots of their entire catalog database. The latter is very powerful, because now I can load the database into my tools of choice, and slice and dice the data in fun and interesting ways. Downloading their entire database is less than 15MB – which is to say, two-thirds the size of just opening the LEGO.com homepage. Bargain!

Putting Rebrickable data in a SQLite database

My tool of choice is SQLite. I slept on this for years, but I’ve come to realise just how powerful and useful it can be. A big part of what made me realise the power of SQLite is seeing Simon Willison’s work with datasette, and some of the cool things he’s built on top of SQLite. Simon also publishes a command-line tool sqlite-utils for manipulating SQLite databases, and that’s what I’ve been using to create my spreadsheets. Here’s my process:

1. Create a Python virtual environment, and install sqlite-utils:

```
python3 -m venv .venv
source .venv/bin/activate
pip install sqlite-utils
```

At time of writing, the latest version of sqlite-utils is 3.38.

2. Download the Rebrickable database tables I care about, uncompress them, and load them into a SQLite database:

```
curl -O 'https://cdn.rebrickable.com/media/downloads/colors.csv.gz'
curl -O 'https://cdn.rebrickable.com/media/downloads/parts.csv.gz'
curl -O 'https://cdn.rebrickable.com/media/downloads/inventories.csv.gz'
curl -O 'https://cdn.rebrickable.com/media/downloads/inventory_parts.csv.gz'

gunzip colors.csv.gz
gunzip parts.csv.gz
gunzip inventories.csv.gz
gunzip inventory_parts.csv.gz

sqlite-utils insert lego_parts.db colors colors.csv --csv
sqlite-utils insert lego_parts.db parts parts.csv --csv
sqlite-utils insert lego_parts.db inventories inventories.csv --csv
sqlite-utils insert lego_parts.db inventory_parts inventory_parts.csv --csv
```

The inventory_parts table describes how many of each part there are in a set: “Set S contains 10 of part P in colour C.” The parts and colors tables contain detailed information about each part and color. The inventories table matches the official LEGO set numbers to the inventory IDs in Rebrickable’s database.
“The set sold by LEGO as 6616-1 has ID 4159 in the inventory table.”

3. Run a SQLite query that gets information from the different tables to tell me about all the parts in a particular set:

```sql
SELECT ip.img_url, ip.quantity, ip.is_spare, c.name AS color, p.name, ip.part_num
FROM inventory_parts ip
JOIN inventories i ON ip.inventory_id = i.id
JOIN parts p ON ip.part_num = p.part_num
JOIN colors c ON ip.color_id = c.id
WHERE i.set_num = '6616-1';
```

Or use sqlite-utils to export the query results as a spreadsheet:

```
sqlite-utils lego_parts.db "
  SELECT ip.img_url, ip.quantity, ip.is_spare, c.name AS color, p.name, ip.part_num
  FROM inventory_parts ip
  JOIN inventories i ON ip.inventory_id = i.id
  JOIN parts p ON ip.part_num = p.part_num
  JOIN colors c ON ip.color_id = c.id
  WHERE i.set_num = '6616-1';" --csv > 6616-1.csv
```

Here are the first few lines of that CSV:

```
img_url,quantity,is_spare,color,name,part_num
https://cdn.rebrickable.com/media/parts/photos/9999/23064-9999-e6da02af-9e23-44cd-a475-16f30db9c527.jpg,1,False,[No Color/Any Color],Sticker Sheet for Set 6616-1,23064
https://cdn.rebrickable.com/media/parts/elements/4523412.jpg,2,False,White,Flag 2 x 2 Square [Thin Clips] with Chequered Print,2335pr0019
https://cdn.rebrickable.com/media/parts/photos/15/2335px13-15-33ae3ea3-9921-45fc-b7f0-0cd40203f749.jpg,2,False,White,Flag 2 x 2 Square [Thin Clips] with Octan Logo Print,2335pr0024
https://cdn.rebrickable.com/media/parts/elements/4141999.jpg,4,False,Green,Tile Special 1 x 2 Grille with Bottom Groove,2412b
https://cdn.rebrickable.com/media/parts/elements/4125254.jpg,4,False,Orange,Tile Special 1 x 2 Grille with Bottom Groove,2412b
```

4. Import that spreadsheet into Google Sheets, then add a couple of columns:

- I add a column image where every cell has the formula =IMAGE(…) that references the image URL. This gives me an inline image, so I know what that brick looks like.
- I add a new column quantity I have where every cell starts at 0, which is where I’ll count bricks as I find them.
- I add a new column remaining to find which counts the difference between quantity and quantity I have. Then I can highlight or filter for rows where this is non-zero, so I can see the bricks I still need to find.

If you’re interested, here’s an example spreadsheet that has a clean inventory. It took me a while to refine the SQL query, but now that I have it, I can create a new spreadsheet in less than a minute. One of the things I’ve realised over the last year or so is how powerful “get the data into SQLite” can be – it opens the door to all sorts of interesting queries and questions, with a relatively small amount of code required. I’m sure I could write a custom script just for this task, but it wouldn’t be as concise or flexible.
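
That said, if you did want such a custom script, a standard-library sketch might look roughly like this. This is an illustration rather than code from the original post; it assumes the lego_parts.db built above and reuses the same query:

```python
import csv
import sqlite3

# Export the inventory for one set from the Rebrickable SQLite database
# built above. The query is the same one used with sqlite-utils.
QUERY = """
SELECT ip.img_url, ip.quantity, ip.is_spare, c.name AS color,
       p.name, ip.part_num
FROM inventory_parts ip
JOIN inventories i ON ip.inventory_id = i.id
JOIN parts p ON ip.part_num = p.part_num
JOIN colors c ON ip.color_id = c.id
WHERE i.set_num = ?
"""

def export_set_inventory(set_num: str, db_path: str = "lego_parts.db") -> None:
    """Write <set_num>.csv with one row per part in the set."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(QUERY, (set_num,)).fetchall()
    with open(f"{set_num}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["img_url", "quantity", "is_spare", "color", "name", "part_num"])
        writer.writerows(rows)

export_set_inventory("6616-1")
```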
For some purpose, the DOGE people are burrowing their way into all US Federal Systems. Their complete control over the Treasury Department is entirely insane. Unless you intend to destroy everything, making arbitrary changes to complex computer systems will result in destruction, even if that was not your intention. No
Entering 2025, I decided to spend some time exploring the topic of agents. I started reading Anthropic’s Building effective agents, followed by Chip Huyen’s AI Engineering. I kicked off a major workstream at work on using agents, and I also decided to do a personal experiment of sorts. This is a general commentary on building that project.

What I wanted to build was a simple chat interface where I could write prompts, select models, and have the model use tools as appropriate. My side goal was to build this using Cursor and generally avoid writing code directly as much as possible, but I found that generally slower than writing code in emacs while relying on 4o-mini to provide working examples to pull from. Similarly, while I initially envisioned building this in fullstack TypeScript via Cursor, I ultimately bailed into a stack that I’m more comfortable with, and ended up using Python3, FastAPI, PostgreSQL, and SQLAlchemy with the async psycopg3 driver. It’s been a… while… since I started a brand new Python project, and I used this project as an opportunity to get comfortable with Python3’s async/await mechanisms and its typing, along with mypy. Finally, I also wanted to experiment with Tailwind, and ended up using TailwindUI’s components to build the site.

The working version supports everything I wanted: creating chats with models, and allowing those models to use function calling to use tools that I provide. The models are allowed to call any number of tools in pursuit of the problem they are solving. The tool usage is the most interesting part here for sure. The simplest tool I created was a get_temperature tool that provided a fake temperature for your location. This allowed me to ask questions like “What should I wear tomorrow in San Francisco, CA?” and get a useful response. The code to add this function to my project was pretty straightforward: just three lines of Python and 25 lines of metadata to pass to the OpenAI API.

```python
def tool_get_current_weather(location: str|None=None, format: str|None=None) -> str:
    "Simple proof of concept tool."
    temp = random.randint(40, 90) if format == 'fahrenheit' else random.randint(10, 25)
    return f"It's going to be {temp} degrees {format} tomorrow."

FUNCTION_REGISTRY['get_current_weather'] = tool_get_current_weather

TOOL_USAGE_REGISTRY['get_current_weather'] = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "format": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "The temperature unit to use. Infer this from the users location.",
                },
            },
            "required": ["location", "format"],
        },
    }
}
```

After getting this tool working, the next tool I added was a simple URL retriever tool, which allowed the agent to grab a URL and use the content of that URL in its prompt. The implementation for this tool was similarly quite simple.
```python
def tool_get_url(url: str|None=None) -> str:
    if url is None:
        return ''
    url = str(url)
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    content = soup.find('main') or soup.find('article') or soup.body
    if not content:
        return str(response.content)
    markdown = markdownify(str(content), heading_style="ATX").strip()
    return str(markdown)

FUNCTION_REGISTRY['get_url'] = tool_get_url

TOOL_USAGE_REGISTRY['get_url'] = {
    "type": "function",
    "function": {
        "name": "get_url",
        "description": "Retrieve the contents of a website via its URL.",
        "parameters": {
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "The complete URL, including protocol to retrieve. For example: \"https://lethain.com\"",
                }
            },
            "required": ["url"],
        },
    }
}
```

What’s pretty amazing is how much power you can add to your agent by adding such a trivial tool as retrieving a URL. You can similarly imagine adding tools for retrieving and commenting on Github pull requests and so on, which could allow a very simple agent tool like this to become quite useful.

Working on this project gave me a moderately compelling view of a near-term future where most engineers have a simple application like this running that they can pipe events into from various systems (email, text, Github pull requests, calendars, etc), create triggers that map events to templates that feed into prompts, and execute those prompts with tool-aware agents. Combine that with the ability for other agents to register themselves with you and expose the tools that they have access to (e.g. schedule an event with the tool’s owner), and a bunch of interesting things become very accessible with a very modest amount of effort:

- You could schedule events between two busy people’s calendars, as if both of them had an assistant managing their calendar
- Reply to your own pull requests with new blog posts, providing feedback on typos and grammatical issues
- Crawl websites you care about and identify posts you might be interested in
- Ask the model to generate a system model using lethain:systems, run that model, then chart the responses
- Add a “planning tool” which allows the model to generate a plan to guide subsequent steps in a complex task (e.g. getting my calendar, getting a friend’s calendar, suggesting a time we could meet)

None of these are exactly lifesaving, but each is somewhat useful, and I imagine there are many more fairly obvious ideas that become easy once you have the necessary scaffolding to make this sort of thing easy. Altogether, I think that I am convinced at this point that agents, using current foundational models, are going to create a number of very interesting experiences that improve our day-to-day lives in small ways that are, in aggregate, pretty transformational. I’m less convinced that this is the way all software should work going forward, though; more thoughts on that over time. (A bunch of fun experiments happening at work, but early days on those.)
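
One piece the snippets above don’t show is the glue between the two registries and the model: the loop that sends the tool schemas to the API, executes whatever the model asks for, and feeds results back. Here’s a minimal sketch of what that might look like, assuming the OpenAI chat completions API; the two registry names come from the snippets above, while run_agent and the default model are made up for illustration.

```python
import json
from openai import OpenAI

client = OpenAI()

def run_agent(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Call the model in a loop, executing any tools it requests."""
    messages = [{"role": "user", "content": prompt}]
    tools = list(TOOL_USAGE_REGISTRY.values())  # schemas defined above
    while True:
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools,
        )
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content  # no more tool use; the model is done
        messages.append(message)  # keep the assistant turn in the transcript
        for call in message.tool_calls:
            # Look up the Python function and pass back its result as a
            # "tool" message tied to this specific call.
            func = FUNCTION_REGISTRY[call.function.name]
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": func(**args),
            })
```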
A lieutenant colonel in the Soviet Air Defense Forces prevented the end of human civilization on September 26th, 1983. His name was Stanislav Petrov.

Protocol dictated that the Soviet Union would retaliate against any nuclear strikes sent by the United States. This was a policy of mutually assured destruction, a doctrine that compels a horrifying logical conclusion. The second- and third-stage effects of this type of exchange would be even more catastrophic: allies for each side would likely be pulled into the conflict, and the resulting nuclear winter was projected to lead to 2 billion deaths due to starvation. This is to say nothing of those who would have been unfortunate enough to survive.

Petrov’s job was to monitor Oko, the computerized warning system built to centralize Soviet satellite communications. Around midnight, he received a report that one of the satellites had detected the infrared signature of a single launch of a United States ICBM. While Petrov was deciding what to do about this report, the system detected four more incoming missile launches. He had minutes to make a choice about what to do. It is impossible to imagine the amount of pressure placed on him at this moment. Source: Stanislav Petrov, Soviet officer credited with averting nuclear war, dies at 77 by Schwartzreport.

Petrov lived in a world of deterministic systems. The technologies that powered these warning systems produce guaranteed outputs, provided the proper inputs are given. However, deterministic does not mean infallible. The only reason you are alive and reading this is because Petrov understood that the systems he observed were capable of error. He was suspicious of what he was seeing reported, and chose not to escalate a retaliatory strike. There were two factors guiding his decision:

- A surprise attack would most likely have used hundreds of missiles, not just five.
- The allegedly foolproof Oko system was new and prone to errors.

An error in a deterministic system can still lead to expected outputs being generated. For the Oko system, infrared reflections of the sun shining off the tops of clouds created a false positive that was interpreted as the detection of a nuclear launch event. Source: US-K History by Kosmonavtika.

The concept of erroneous truth is a deep thing to internalize, as computerized systems are presented as omniscient, indefective, and absolute. Petrov’s rewards for this action were reprimands, reassignment, and denial of promotion, likely for embarrassing his superiors by the politically inconvenient shedding of light on issues with the Oko system. A coerced early retirement caused a nervous breakdown, likely from having to grapple with the weight of his decision. It was only in the 1990s—after the fall of the Soviet Union—that his actions were discovered internationally and celebrated. Stanislav Petrov was given the recognition that he deserved, including being honored by the United Nations, awarded the Dresden Peace Prize, featured in a documentary, and being able to visit a Minuteman Missile silo in the United States.

On January 31st, 2025, OpenAI struck a deal with the United States government to use its AI product for nuclear weapon security. It is unclear how this technology will be used, where, and to what extent. It is also unclear how OpenAI’s systems function, as they are black-box technologies. What is known is that LLM-generated responses—the product OpenAI sells—are non-deterministic.
Non-deterministic systems don’t have guaranteed outputs from their inputs. In addition, LLM-based technology hallucinates—it invents content with no self-knowledge that it is a falsehood. Computerized non-deterministic systems are also perceived as authoritative, the same as their deterministic peers. It is not a question of how the output is generated; it is one of the output being perceived to come from a machine.

These are terrifying things to know. Consider not only the systems this technology is being applied to, but also the thoughtless speed of their integration. Then consider how we’ve historically been conditioned and rewarded to interpret the output of these systems, and then how we perceive and treat skeptics. We don’t live in a purely deterministic world of technology anymore.

Stanislav Petrov died on September 18th, 2017, before this change occurred. I would be incredibly curious to know his thoughts about our current reality, as well as the increasing abdication of human monitoring of automated systems in favor of notably biased, supposed “AI solutions.” In acknowledging Petrov’s skepticism in a time of mania and political instability, we can recall a quote from former U.S. Secretary of Defense William J. Perry’s memoir about the incident:

[Oko’s false positives] illustrates the immense danger of placing our fate in the hands of automated systems that are susceptible to failure and human beings who are fallible.
In our *Ambsheets* project, we are exploring a small extension to the familiar spreadsheet: **what if a single spreadsheet cell could hold multiple values at once**?
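
As a toy illustration of that idea (my sketch, not how Ambsheets is actually built): treat each cell as a set of candidate values, and let any operation over cells fan out across the cross product of those sets.

```python
from itertools import product

# Toy model of a multi-value cell: a cell is a list of candidate values,
# and an operation over cells fans out across their cross product.
# This illustrates the idea only; it is not Ambsheets' implementation.
def amb_sum(*cells: list[int]) -> list[int]:
    return [sum(combo) for combo in product(*cells)]

rent = [2000, 2500]     # one cell holding two possible values at once
utilities = [150, 200]

print(amb_sum(rent, utilities))  # [2150, 2200, 2650, 2700]
```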