An evergreen topic is something like “why are websites so big and slow and hard, and video games so amazing and fast?” I’ve thought about it more than I’d like. Anyway, here are some reasons: Web pages are just-in-time delivered, with no installation required. Modern video games typically require a long install process, downloading tens of gigabytes onto the console. Even after that, they take minutes to boot up: when I play Cyberpunk on my XBox S, the loading screen takes at least a minute. It’s fast after that, but it’s fast because of that loading phase. Video game development cycles are long and extraordinarily expensive. A recent failed game that didn’t make a serious media stir cost over 140 million dollars to make. There is no startup that will pour over $100 million into a website before even launching it. And AAA video games, which are often the ones that people have in mind, take years to develop. Video games are generally single-tasked: you don’t have 10 of them open...
a year ago


More from macwright.com

Reading Zanzibar

Google published Zanzibar: Google’s Consistent, Global Authorization System in 2019. It describes a system for authorization – enforcing who can do what – which maxes out both flexibility and scalability. Google has lots of different apps that rely on Zanzibar, and bigger scale than practically any other company, so it needed Zanzibar.

The Zanzibar paper made quite a stir. There are at least four companies that advertise products as being inspired by or based on Zanzibar. It says a lot for everyone to loudly reference this paper on homepages and marketing materials: companies aren’t advertising their own innovation as much as simply saying they’re following the gospel. A short list of companies & OSS products I found:

Companies

- WorkOS FGA
- Authzed
- auth0 FGA
- Ory
- Permify

Open source

- Ory Keto (Go)
- Warrant (Go), probably the basis for WorkOS FGA, since WorkOS acquired Warrant
- SpiceDB (Go), the basis for Authzed
- Permify (Go)
- OpenFGA (Go), the basis of auth0 FGA

I read the paper, and have a few notes, but the Google Zanzibar Paper, annotated by AuthZed is the same thing from a real domain expert (albeit one who works for one of these companies), so read that too, or instead.

Features

My brief summary is that the Zanzibar paper describes the features of the system succinctly, and those features are really appealing. They’ve figured out a few primitives from which developers can build really flexible authorization rules for almost any kind of application. They avoid making assumptions about ID formats, or any particular relations, or how groups are set up. It’s abstract and beautiful. The gist of the system is:

- Objects: things in your data model, like documents
- Users: needs no explanation
- Namespaces: for isolating applications
- Usersets: groups of users
- Userset rewrite rules: allow usersets to inherit from each other or have other kinds of set relationships
- Tuples, which are like (object)#(relation)@(user), and are sort of the core ‘rule’ construct for saying who can access what

There’s then a neat configuration language, which looks like this in an example:

```
name: "doc"

relation { name: "owner" }

relation {
  name: "editor"
  userset_rewrite {
    union {
      child { _this {} }
      child { computed_userset { relation: "owner" } }
    }
  }
}

relation {
  name: "viewer"
  userset_rewrite {
    union {
      child { _this {} }
      child { computed_userset { relation: "editor" } }
      child {
        tuple_to_userset {
          tupleset { relation: "parent" }
          computed_userset {
            object: $TUPLE_USERSET_OBJECT  # parent folder
            relation: "viewer"
          }
        }
      }
    }
  }
}
```

It’s pretty neat. At this point in the paper I was sold on Zanzibar: I could see this being a much nicer way to represent authorization than burying it in a bunch of queries.

Specifications & Implementation details

And then the paper discusses specifications: how much scale it can handle, and how it manages consistency. This is where it becomes much more noticeably Googley. With Google’s scale and international footprint, all of their services need to be globally distributed, so Zanzibar is a distributed system. It is also a system that needs good consistency guarantees, so that it avoids the “new enemy” problem, nobody is able to access resources that they shouldn’t, and applications that are relying on Zanzibar can get a consistent view of its data. Pages 5-11 are about this challenge, and it is a big one, with a complex, high-end solution and a lot of details that are very specific to Google.
Most noticeably, Zanzibar is built on Spanner, Google’s distributed database. Spanner has the ability to order timestamps using TrueTime, which relies on atomic clocks and GPS antennae: this is not standard equipment for a server. Even CockroachDB, which is explicitly modeled off of Spanner, can’t rely on having GPS & atomic clocks around, so it has to take a very different approach. But this time-accuracy idea is pretty central to Zanzibar’s idea of zookies, which are sort of like tokens that get sent around in its API and indicate what time reference the client expects, so that a follow-up response doesn’t accidentally include stale data.

To achieve scalability, Zanzibar is also a multi-server architecture: there are aclservers, watchservers, and a Leopard indexing system that creates compressed skip-list-based representations of usersets. There’s also a clever solution to the caching & hot-spot problem, in which certain objects or tuples get lots of requests all at once and their database shard gets overwhelmed.

Conclusions

Zanzibar is two things:

- A flexible, relationship-based access control model
- A system to provide that model to applications at enormous scale and with consistency guarantees

My impressions of these things match with AuthZed’s writeup, so I’ll just quote & link them:

There seems to be a lot of confusion about Zanzibar. Some people think all relationship-based access control is “Zanzibar”. This section really brings to light that the ReBAC concepts have already been explored in depth, and that Zanzibar is really the scaling achievement of bringing those concepts to Google’s scale needs. link

And:

Zookies are very clearly important to Google. They get a significant amount of attention in the paper and are called out as a critical component in the conclusion. Why then do so many of the Zanzibar-like solutions that are cropping up give them essentially no thought? link

I finished the paper having absorbed a lot of tricky ideas about how to solve the distributed-consistency problems, and if I were to describe Zanzibar, those would be a big part of the story. But maybe that’s not what people mean when they say Zanzibar, and it’s more a description of features? I did find that Permify has a zookie-like Snap Token, AuthZed/SpiceDB has ZedTokens, and Warrant has Warrant-Tokens, whereas OpenFGA doesn’t have anything like zookies, and neither does Ory Keto. So it’s kind of mixed whether these Zanzibar-inspired products have Zanzibar-inspired implementations, or focus more on exposing the same API surface.

For my own needs, zookies and distributed consistency to the degree described in the Zanzibar paper are overkill. There’s no way that we’d deploy a sharded five-server system for authorization when the main application is doing just fine with single-instance Postgres. I want the API surface that Zanzibar describes, but would trade some scalability for simplicity. Or use a third-party service for authorization. Ideally, I wish there were something like these products but smaller, or delivered as a library rather than a server.
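To make the tuple model concrete, here’s a toy sketch of relation tuples and a recursive check. To be clear, this is my own illustration, not code from the paper or any of these products: the names, the in-memory set, and the hard-coded parent-folder rule are all assumptions, and it ignores namespaces, userset rewrite configuration, and the whole consistency apparatus.

```python
# Toy ReBAC check over (object, relation, user) tuples, in the spirit of
# Zanzibar's (object)#(relation)@(user) construct; an illustration, not a real API.
tuples = {
    ("doc:readme", "owner", "alice"),
    ("doc:readme", "parent", "folder:docs"),  # "user" can be another object
    ("folder:docs", "viewer", "bob"),
}

def check(obj: str, relation: str, user: str) -> bool:
    """Walk the tuple graph and answer: does `user` have `relation` on `obj`?"""
    if (obj, relation, user) in tuples:
        return True
    # Stand-in for one tuple_to_userset rewrite: viewer on the parent folder
    # grants viewer here. Real systems derive this from configuration like
    # the example above instead of hard-coding it.
    if relation == "viewer":
        for o, r, parent in tuples:
            if o == obj and r == "parent" and check(parent, "viewer", user):
                return True
    return False

assert check("doc:readme", "owner", "alice")
assert check("doc:readme", "viewer", "bob")   # inherited via folder:docs
assert not check("doc:readme", "viewer", "eve")
```

Everything hard about Zanzibar is what this sketch leaves out: evaluating rewrite rules from config, and answering checks like these consistently at global scale.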

3 months ago 24 votes
Recently

I watched a large part of All Watched Over By Machines of Loving Grace this month. This also counts as a “listening” item, because the theme song, “Baby Love Child” by Pizzicato Five, is also spectacular.

Guitar Moves is a good series of interviews by Matt Sweeney, who I mostly know via his involvement in Bonnie Prince Billy. It’s a really cool format. I like how he interviews guitarists with recognizable sounds, and you get to see how little they need to play to sound just like themselves. The episode with St. Vincent is excellent too: she’s one of my guitar heroes: check out the guitar solo in Just The Same But Brand New, or her version of Dig a Pony.

I also watched No Other Land. Everyone should watch No Other Land.

AI thoughts roundup

I don’t have a conclusion. Really, that’s my current state: ambivalence. I acknowledge that these tools are incredibly powerful, I’ve even started incorporating them into my work in certain limited ways (low-stakes code like POCs and unit tests seem like an ideal use case), but I absolutely hate them. I hate the way they’ve taken over the software industry, I hate how they make me feel while I’m using them, and I hate the human-intelligence-insulting postulation that a glorified Excel spreadsheet can do what I can but better.

Nolan Lawson: AI ambivalence

As I always say, the purpose of the system is what it does. Or, in this case, how I think about AI stuff is mostly affected by how people use AI stuff, and how people use AI stuff is a real mixed bag. There’s the tidal wave of spam, the aesthetic of fascism, the low-effort marketing materials with nonsense images, the non-consensual AI porn. I see all of the bad stuff every day, both online and in the odd subway ad. The good stuff seems pretty theoretical, though: the press releases about AI-driven medical advances never seem to break into the real world. The stories about engineers 10x’ing their ability seem pretty mixed: we’re already at the hangover-and-regret phase, with programmers bemoaning how they’ve generated so much slop and lost so much knowledge.

Anyway, I’m mildly optimistic about the potential! But it’s a lot like crypto in that you could theoretically use the technology for something good, but most people loudly used it for bad stuff, and people, including me, judged it based on what it did. AI has to start doing some good stuff soon. Potential isn’t enough.

I think one thing chatGPT’s invention has revealed is how many people - including some very important people in society - find just basic reading and writing to be laborious and cumbersome to perform, and how oddly closely that type of strained literacy correlates with having other shitty opinions.

From mtsw on Bluesky, about this story about Andrew Cuomo using ChatGPT to half-assedly write a policy platform. Right off the bat I should say that judging people for their level of literacy reeks of classism and so on. My own ability to read & write has a lot to do with my place in society: I went to good schools, had a stable home life, and smart parents. However, “the way that society was set up” kind of evened this out. Extremely social people with cultural capital and chiseled jawlines and biceps would get their rewards, and people like… myself, we would get rewarded for literacy and critical thinking. When one group needed the other, it was usually some kind of payment or partnership: Cuomo pays his scriptwriter, the TV show creator pays the actors. And some people can do both sides of the equation.

But LLMs definitely indicate that people do not like this deal. Whew, they don’t like writing, but they also don’t like paying the writers or reading what they write. Maybe they could rejigger the system so that they could do it all. They have ideas for music and art but no interest in learning about music, practicing instruments, going to art school, or concentrating on a task for a long time, so why not generate it all? Why not, well - there are reasons, those reasons being that the generated output usually passes their own vibe check, but once someone who looks closely at things or reads all the words encounters it, everyone points at the slop and it’s embarrassing. (Cuomo will never be embarrassed.) Plus, you’re always going to get average results by asking a device that is incapable of creativity or thought. Also, you’ll miss out on the human experience of creating. And you’ll be indirectly feeding output data into training data for future LLMs, consequently making their output worse. (Cuomo does not care about consequences.)

Colophon update

I’ve moved the images for this website to Bunny (that’s an affiliate link, here’s a non-affiliate link if that’s what you prefer). When I initially moved my photos to this website, I set them up with Amazon S3 for storage and CloudFront to serve them with a CDN. Using AWS is painful for me, so I moved them to Cloudflare R2, which is Cloudflare’s equivalent to S3, with Cloudflare as a CDN. Thanks to owning my own domains, swapping out image hosts is pretty quick: switching to Bunny took all of five minutes.

So what’s the deal with Bunny? Partly I’ve become a little more negative on Cloudflare and R2: I think Cloudflare’s technology is neat, but R2 has iffy reliability and Cloudflare has iffy politics. I’m also intrigued by diversifying my dependencies geographically. Bunny is a Slovenian company, and my email is from an Australian company. This probably won’t have any practical effect, but it feels kind of good for obvious reasons to even minutely hedge my bets here.

So far Bunny has been great. They don’t support the S3 protocol, but they do support SFTP, which works just as well for my purposes and works great with the beautiful Transmit app. Before, with R2, I was using the significantly less beautiful Cyberduck application, because Cloudflare R2 doesn’t support all of the S3 protocol. It seems to be just as fast as Cloudflare was, too. And I’m somewhat reassured by the prospect of paying Bunny. I don’t like the feeling of getting “free” services like I can from Cloudflare. I want that customer relationship.

Reading

Then again, pop culture is powerful, and even the dumbest marketing both affects and reflects it. Busch Light’s can holder shaped like a cup that holds beer is dumb, which is fine, because most beer promos are. But the fact that the brand frames it as a functional, masculine alternative to Stanley’s H2.0 Flowstate affirms a similarly retrograde outlook on gender roles to the one that young American men are seeking out on the political right.

From my friend Dave’s article about Busch Light’s weird attempt to riff on the Stanley Tumbler trend.

I was once a loyal listener of the Chapo Trap House podcast, but fell off of it in 2020 when their support of Bernie Sanders led them to be jerks about Elizabeth Warren. But reading this Vanity Fair article about the cohosts of the podcast endeared them to me a bit.

“Like to thank” is linguistic phlegm. “I’d like to thank the Academy.” They’d “like to thank” me. Well I’d like to be 6’3” and drive a G Wagon, thanks. I’d like you to accept my novella. I’d like to quit paying three dollars to Submittable every time I want to send a story out. The world is full of actions I would like to do. The most direct way to say thank you is just to say it: “thank you, name, for doing X.” “I’d like to thank” is a performative thanks, a thanks with a smirk and a blink, eyeing for extra credit. Just because people say it in their award show acceptance speeches doesn’t mean you should say it, too. In fact, that’s the reason you shouldn’t say it.

Loved this article “Close reading my rejections” from friend of the blog Barrett Hathcock.

3 months ago 23 votes
Tidbyt without the company

Remember the Tidbyt? It’s a super low-resolution, internet-connected, wood-paneled display that I wrote a review of back in 2022. It’s been on my shelf for years now, showing the time and weather, and warning me when the UV is going to be high. In 2023 I used it as an excuse to learn some Rust, to render custom graphics. It’s a toy, a distraction, a worry stone for me to work on when I need something open-ended and low-stakes.

Anyway, the company that made the Tidbyt is no more. They got acquihired by Modal, a company that makes serverless AI compute hosting. So, they aren’t making devices right now, and the blog post promises that their cloud services will keep working. I don’t hold anything against the Tidbyt team: in fact, our Val Town office was coincidentally right next to theirs in a WeWork, and we met in real life! They’re very nice folks, and were doing so much with a small team. Lots of respect to them. Modal made a smart choice acquiring Tidbyt. But realistically, it’s time to make sure my device doesn’t become e-waste.

The Tidbyt is ready for this

One of the biggest critiques of the Tidbyt was that it was just an LED matrix and an ESP chip. You could buy an LED matrix on Sparkfun, the ESP, a power supply, some wood for the enclosure, and you’d have your own DIY Tidbyt. Maybe you could do it for half the price! But that’s also a strength. The Tidbyt is not some custom SoC with an exotic custom software stack and boutique hardware. It is what it looks like: a neat combination of commonplace parts. That makes it kind of future-proof and flexible.

The first step is to replace the firmware. Tidbyt’s stock firmware routes all of its requests through the Tidbyt company’s servers. I want to eliminate that hop.

Replacing the firmware

Thankfully, Tidbyt published their ‘HDK’, which is an open source version of their stock firmware. It’s remarkably simple:

- It connects to Wifi
- It downloads a WebP image from a URL
- It displays that WebP image

The HDK contains the code to do this stuff. There’s very little code required, but it does drag in a WebP decoder, a Wifi library, and a library for running the LED matrix. But, setting up the HDK, I ran into issues both small and large: it had issues with HTTPS URLs and Wifi passwords that contain spaces. Plus, nobody has been added as a contributor to the HDK repository, so Pull Requests aren’t being accepted and it hasn’t had a change in 7 months.
But the community came to the rescue with tronbyt’s firmware-http, a fork of the HDK that fixes every issue I experienced. Open source works!

Back in 2022 I included a chart of the Tidbyt network. With an updated HDK, that workflow is a lot simpler. Instead of sending images to the Tidbyt servers and those Tidbyt servers delivering them to my device, the device makes requests directly of the server that generates the images.

Replacing pixlet

The Tidbyt team wrote pixlet, a little framework for generating the pixel graphics that the Tidbyt displays. It lets you define a React-like tree of components - some text in a stack, a rectangle, images, and so on - and does all of the layout and rendering. The tronbyt community also forked pixlet and is actively developing it, which is fantastic. But this part of the stack I really never liked. That’s why I spent so much time reimplementing it in Rust and JavaScript.

Partly it’s the language - pixlet apps are written in Starlark, which is kind of an outgrowth of the Bazel build system from Google. Starlark is sort of like Python, but isn’t actually compatible with anything in the Python ecosystem. It’s very niche, limited, and overall just weird. I think I understand why Tidbyt would choose Starlark - it’s fast and has hermetic execution - making it safe to run untrusted Starlark programs because they can’t access the filesystem, network, or even the system clock without being given explicit, controlled APIs to do those things. If you’re building a cloud service that runs a lot of untrusted user code, dictating that code is all Starlark is a really good cheat code - I know firsthand how hard it is to run untrusted JavaScript. But I’m not building a cloud service full of untrusted code. People who are self-hosting their Tidbyt devices (dozens of us!) don’t benefit from the tradeoffs of the Starlark language. They’d be better off with something normal.

I rewrote pixlet again

It’s called indiepixel, and it’s a Python reimplementation of pixlet. It supports almost the entire pixlet API, and comes with the added benefit of being Python. You can use Python modules! You can read from the filesystem, parse CSVs, do all of your usual Python stuff. You can embed it in a Python application to render some graphics. What does indiepixel do currently?

- Renders text in the glorious retro BDF pixel font format
- Renders pixelated pie charts, rectangles, and boxes
- Supports animation for its WebP outputs
- Provides a nice UI for browsing your selection of screens

It’ll probably never be finished, but it works well enough to power my Tidbyt. I’m running indiepixel on a free Render server instance, but it should run pretty much the same on any Python-compatible hosting: the only tricky dependency is Pillow, which it uses for image parsing and rendering.

My free time for computer-oriented side projects has been limited, due to other commitments and an intention to get offline on the weekends. I’ve been sewing, biking, and running more. So I really want a side project I can enjoy, and indiepixel has fit the bill. It’s really satisfying to implement a new widget and see it rendered in blocky 64x32 pixels. The Pillow image rendering library for Python is mostly wonderful and very powerful.

Why Python?

Why is indiepixel written in Python? Well - I learned from tidbyt-rs that Rust would be an awkward fit as a scripting language for rendering graphics.
The well-known Rust complexities around memory management made simple things difficult for me, which would make them totally unacceptable for others. Besides the attraction of being able to compile a small binary that might be able to run on the Tidbyt itself, Rust didn’t have many other advantages. The Pillow module really is such an advantage for Python. JavaScript doesn’t have a real alternative: there’s sharp, a great module for image conversion, but nothing that has such a great canvas interface. node-canvas is fine, but it doesn’t support WebP or animation, which are critical features for this project.

I also wanted to test out the amazing new Python tooling that Astral is cooking up, like uv. I now have a better grasp of the Python ecosystem than I did a few months ago, and my impression is optimistic but mixed. uv is amazing, but Python has a lot of legacy cruft around packaging. People are critical of NPM, but I think it did benefit from being established after PyPI and learning from its lessons. Thank you Steven Loria for a PR that fixed everything and made it all work and saved me months of tweaking settings.

The graphic

I watercolored that Tidbyt a while ago and have been seriously dragging my feet on finishing this blog post. Sometimes the watercolor-illustration tail wags the technical-blog-post dog? Anyway, it’s a callback to that little world, with some small tweaks: this time I thought it’d be nice to have it be both watercolored and interactive. That ‘cybernetic’ feel. The secret recipe: a nice palette from lospec, creating a black & white mask of areas in Affinity Photo and vectorizing it with potrace, and then just some JavaScript that recolors based on hover handling.

If you’re using the Tidbyt or some similar pixel-displaying device, try out indiepixel! It’s niche and has required a silly amount of effort to generate a glorified weather clock in my apartment, but it was a fun time chasing another interest.
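For a flavor of what the self-hosted loop looks like, here’s a minimal sketch of an image server that a Tidbyt-style device could poll. To be clear, this is not indiepixel’s actual API: Flask, the route, and the drawing calls are my assumptions. It just shows the core idea described above, Pillow rendering a 64x32 frame and serving it as WebP.

```python
# Minimal sketch of a self-hosted screen server for a Tidbyt-style display.
# The firmware polls a URL; we answer with a freshly rendered 64x32 WebP.
# Flask and the endpoint name are assumptions, not indiepixel's API.
from io import BytesIO

from flask import Flask, Response
from PIL import Image, ImageDraw

app = Flask(__name__)

def render_screen() -> Image.Image:
    """Draw one 64x32 frame with Pillow."""
    img = Image.new("RGB", (64, 32), "black")
    draw = ImageDraw.Draw(img)
    draw.rectangle((0, 0, 63, 31), outline="#4d9be6")
    draw.text((6, 12), "HELLO", fill="#fbff86")  # Pillow's default bitmap font
    return img

@app.route("/screen.webp")
def screen() -> Response:
    buf = BytesIO()
    render_screen().save(buf, format="WEBP", lossless=True)
    return Response(buf.getvalue(), mimetype="image/webp")

if __name__ == "__main__":
    app.run(port=8080)
```

Point a device running the tronbyt firmware at a URL like that, and it has pixels to show with no company cloud in the loop.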

4 months ago 51 votes
Recently

Reading

Whether it’s cryptocurrency scammers mining with FOSS compute resources or Google engineers too lazy to design their software properly or Silicon Valley ripping off all the data they can get their hands on at everyone else’s expense… I am sick and tired of having all of these costs externalized directly into my fucking face.

Drew DeVault on the annoyance and cost of AI scrapers. I share some of that pain: Val Town is routinely hammered by some AI company’s poorly-coded scraping bot. I think it’s like this for everyone, and it’s hard to tell if AI companies even care that everyone hates them.

And perhaps most recently, when a person who publishes their work under a free license discovers that work has been used by tech mega-giants to train extractive, exploitative large language models? Wait, no, not like that.

Molly White wrote a more positive article about the LLM scraping problem, but I have my doubts about its positivity. For example, she suggests that Wikimedia’s approach with “Wikimedia Enterprise” gives LLM companies a way to scrape the site without creating too much cost. But that doesn’t seem like it’s working. The problem is that these companies really truly do not care.

Harberger taxes represent an elegant theoretical solution that fails in practice for immobile property. Just as mobile home residents face exploitation through sudden ground rent increases, property owners under a Harberger system would face similar hold-up problems. This creates an impossible dilemma: pay increasingly burdensome taxes or surrender investments at below-market values.

Progress and Poverty, a blog about Georgism, has this post about Harberger taxes, which are a super neat idea. The gist is that you would be in charge of saying how much your house is worth, but the added wrinkle is that by naming a price you are bound to be open to selling your house at that price. So if you go too low, someone will buy it; too high, and you’re paying too much in taxes. It’s clever but doesn’t work, and the analysis points to the vital difference between housing and other goods: that buying, selling, and moving between houses is anything but simple.

I’ve always been a little skeptical of the line that the AI crowd feels contempt for artists, or that such a sense is particularly widespread—because certainly they all do not!—but it’s hard to take away any other impression from a trend so widely cheered in its halls as AI Ghiblification.

Brian Merchant on the OpenAI Studio Ghibli ‘trend’ is a good read. I can’t stop thinking that AI is in danger of being right-wing coded; the examples of this, like the horrifying White House tweet mentioned in that article, are multiplying. I feel bad when I recoil at innocent usage of the tool by good people who just want something cute. It is kind of fine, on the micro level. But with context, it’s so bad in so many ways. Already the joy and attachment I’ve felt to the graphic style is fading, as more shitty Studio Ghibli knockoffs have been created in the last month than in all of the studio’s work.

Two days later, at a state dinner in the White House, Mark gets another chance to speak with Xi. In Mandarin, he asks Xi if he’ll do him the honor of naming his unborn child. Xi refuses.

Careless People was a good read. It’s devastating for Zuckerberg, Joel Kaplan, and Sheryl Sandberg, as well as a bunch of global leaders who are eager to provide tax loopholes for Facebook. Perhaps the only person who ends the book as a hero is President Obama, who sees through it all.
In a March 26 Slack message, Lavingia also suggested that the agency should do away with paper forms entirely, aiming for “full digitization.” “There are over 400 vet-facing forms that the VA supports, and only about 10 percent of those are digitized,” says a VA worker, noting that digitizing forms “can take years because of the sensitivity of the data” they contain. Additionally, many veterans are elderly and prefer using paper forms because they lack the technical skills to navigate digital platforms. “Many vets don’t have computers or can’t see at all,” they say. “My skin is crawling thinking about the nonchalantness of this guy.”

Perhaps because of proximity, the story that Sahil Lavingia has been working for DOGE seems important. It was a relief when a few other people noticed it and started retelling the story to the tech sphere, like Dan Brown’s “Gumroad is not open source” and Ernie Smith’s “Gunkroad”, but I have to nitpick the structure here: using a non-compliant open source license is not the headline; collaborating with fascists and carelessly endangering disabled veterans is.

Listening

Septet by John Carroll Kirby. I saw John Carroll Kirby play at Public Records and have been listening to them constantly ever since. The music is such a paradox: the components sound like elevator music or incredibly cheesy jazz if you listen for a few seconds, but if you keep listening it’s a unique, deep sound.

Sierra Tracks by Vega Trails. More new jazz! Mammoth Hands and Portico Quartet overlap with Vega Trails, which is a beautiful minimalist band.

Watching

This short video with John Wilson was great. He says a bit about having a real physical video camera, not just a phone, which reminded me of an old post of mine, Carrying a Camera.

4 months ago 43 votes
Personal tools

I used to make little applications just for myself. Sixteen years ago (oof) I wrote a habit tracking application, and a keylogger that let me keep track of when I was using a computer, and generate some pretty charts.

I’ve taken a long break from those kinds of things. I love my hobbies, but they’ve drifted toward the non-technical, and the idea of keeping a server online for a fun project is unappealing (which is something that I hope Val Town, where I work, fixes). Some folks maintain whole ‘homelab’ setups and run Kubernetes in their basement. Not me, at least for now. But I have been tiptoeing back into some little custom tools that only I use, with a focus on just my own computing experience. Here’s a quick tour.

Hammerspoon

Hammerspoon is an extremely powerful scripting tool for macOS that lets you write custom keyboard shortcuts, UIs, and more with the very friendly little language Lua. Right now my Hammerspoon configuration is very simple, but I think I’ll use it for a lot more as time progresses. Here it is:

```lua
hs.hotkey.bind({"cmd", "shift"}, "return", function()
  local frontmost = hs.application.frontmostApplication()
  if frontmost:name() == "Ghostty" then
    frontmost:hide()
  else
    hs.application.launchOrFocus("Ghostty")
  end
end)
```

Not much! But I recently switched to Ghostty as my terminal, and I heavily relied on iTerm2’s global show/hide shortcut. Ghostty doesn’t have an equivalent, and Mikael Henriksson suggested a script like this in GitHub discussions, so I ran with it. Hammerspoon can do practically anything, so it’ll probably be useful for other stuff too.

SwiftBar

I review a lot of PRs these days. I wanted an easy way to see how many were in my review queue and go to them quickly. So, this script runs with SwiftBar, which is a flexible way to put any script’s output into your menu bar. It uses the GitHub CLI to list the PRs, and jq to massage that output into a friendly list, which I can click on to go directly to each PR on GitHub.

```bash
#!/bin/bash
# <xbar.title>GitHub PR Reviews</xbar.title>
# <xbar.version>v0.0</xbar.version>
# <xbar.author>Tom MacWright</xbar.author>
# <xbar.author.github>tmcw</xbar.author.github>
# <xbar.desc>Displays PRs that you need to review</xbar.desc>
# <xbar.image></xbar.image>
# <xbar.dependencies>Bash GNU AWK</xbar.dependencies>
# <xbar.abouturl></xbar.abouturl>

DATA=$(gh search prs --state=open -R val-town/val.town --review-requested=@me --json url,title,number,author)

echo "$(echo "$DATA" | jq 'length') PR"
echo '---'

echo "$DATA" | jq -c '.[]' | while IFS= read -r pr; do
  TITLE=$(echo "$pr" | jq -r '.title')
  AUTHOR=$(echo "$pr" | jq -r '.author.login')
  URL=$(echo "$pr" | jq -r '.url')
  echo "$TITLE ($AUTHOR) | href=$URL"
done
```

Tampermonkey

Tampermonkey is essentially a twist on Greasemonkey: both let you run your own JavaScript on anybody’s webpage. Sidenote: Greasemonkey was created by Aaron Boodman, who went on to write Replicache, which I used in Placemark, and is now working on Zero, the successor to Replicache.

Anyway, I have a few fancy credit cards which have ‘offers’ which only work if you ‘activate’ them. This is an annoying dark pattern! And there’s a solution to it - CardPointers - but I neither spend enough nor care enough about points hacking to justify the cost. Plus, I’d like to know what code is running on my bank website. So, Tampermonkey to the rescue! I wrote userscripts for Chase, American Express, and Citi.
You can check them out on this Gist, but I strongly recommend reading through all the code, because of the aforementioned risks around running untrusted code on your bank account’s website!

Obsidian Freeform

This is a plugin for Obsidian, the notetaking tool that I use every day. Freeform is pretty cool, if I can say so myself (I wrote it), but could be much better. The development experience is lackluster because you can’t preview output at the same time as writing code: you have to toggle between the two states. I’ll fix that eventually, or perhaps Obsidian will add a new API that makes it all work.

I use Freeform for a lot of private health & financial data, almost always with an Observable Plot visualization as an eventual output. For example, when I was switching banks and one of the considerations was mortgage discounts in case I ever buy a house (ha 😢), it was fun to chart out the % discounts versus the required AUM. It’s been really nice to have this kind of visualization as ‘just another document’ in my notetaking app. It doesn’t need another server, and Obsidian is pretty secure and private.

4 months ago 52 votes

More in programming

The future of large files in Git is Git

If Git had a nemesis, it’d be large files. Large files bloat Git’s storage, slow down git clone, and wreak havoc on Git forges. In 2015, GitHub released Git LFS—a Git extension that hacked around problems with large files. But Git LFS added new complications and storage costs. Meanwhile, the Git project has been quietly working on large files. And while LFS ain’t dead yet, the latest Git release shows the path towards a future where LFS is, finally, obsolete.

What you can do today: replace Git LFS with Git partial clone

Git LFS works by storing large files outside your repo. When you clone a project via LFS, you get the repo’s history and small files, but skip large files. Instead, Git LFS downloads only the large files you need for your working copy. In 2017, the Git project introduced partial clones that provide the same benefits as Git LFS:

Partial clone allows us to avoid downloading [large binary assets] in advance during clone and fetch operations and thereby reduce download times and disk usage. – Partial Clone Design Notes, git-scm.com

Git’s partial clone and LFS both make for:

- Small checkouts – On clone, you get the latest copy of big files instead of every copy.
- Fast clones – Because you avoid downloading large files, each clone is fast.
- Quick setup – Unlike shallow clones, you get the entire history of the project—you can get to work right away.

What is a partial clone?

A Git partial clone is a clone with a --filter. For example, to avoid downloading files bigger than 100KB, you’d use:

```
git clone --filter='blob:limit=100k' <repo>
```

Later, Git will lazily download any files over 100KB you need for your checkout. By default, if I git clone a repo with many revisions of a noisome 25 MB PNG file, then cloning is slow and the checkout is obnoxiously large:

```
$ time git clone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (153/153), 1.19 GiB

real    3m49.052s
```

Almost four minutes to check out a single 25MB file!

```
$ du --max-depth=0 --human-readable noise-over-git/.
1.3G    noise-over-git/.
  ^ 🤬
```

And 50 revisions of that single 25MB file eat 1.3GB of space. But a partial clone side-steps these problems:

```
$ git config --global alias.pclone 'clone --filter=blob:limit=100k'
$ time git pclone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (1/1), 24.03 MiB

real    0m6.132s

$ du --max-depth=0 --human-readable noise-over-git/.
49M     noise-over-git/
  ^ 😻 (the same size as a git lfs checkout)
```

My filter made cloning 97% faster (3m 49s → 6s), and it reduced my checkout size by 96% (1.3GB → 49M)!

But there are still some caveats here. If you run a command that needs data you filtered out, Git will need to make a trip to the server to get it. So, commands like git diff, git blame, and git checkout will require a trip to your Git host to run. But, for large files, this is the same behavior as Git LFS. Plus, I can’t remember the last time I ran git blame on a PNG 🙃.

Why go to the trouble? What’s wrong with Git LFS?

Git LFS foists Git’s problems with large files onto users. And the problems are significant:

- 🖕 High vendor lock-in – When GitHub wrote Git LFS, the other large file systems—Git Fat, Git Annex, and Git Media—were agnostic about the server side. But GitHub locked users to their proprietary server implementation and charged folks to use it.[1]
- 💸 Costly – GitHub won because it let users host repositories for free. But Git LFS started as a paid product. Nowadays, there’s a free tier, but you’re dependent on the whims of GitHub to set pricing. Today, a 50GB repo on GitHub will cost $40/year for storage. In contrast, storing 50GB on Amazon’s S3 standard storage is $13/year.
- 😰 Hard to undo – Once you’ve moved to Git LFS, it’s impossible to undo the move without rewriting history.
- 🌀 Ongoing set-up costs – All your collaborators need to install Git LFS. Without Git LFS installed, your collaborators will get confusing, metadata-filled text files instead of the large files they expect.

The future: Git large object promisors

Large files create problems for Git forges, too. GitHub and GitLab put limits on file size[2] because big files cost more money to host. Git LFS keeps server-side costs low by offloading large files to CDNs. But the Git project has a new solution. Earlier this year, Git merged a new feature: large object promisors. Large object promisors aim to provide the same server-side benefits as LFS, minus the hassle to users.

This effort aims to especially improve things on the server side, and especially for large blobs that are already compressed in a binary format. This effort aims to provide an alternative to Git LFS – Large Object Promisors, git-scm.com

What is a large object promisor?

Large object promisors are special Git remotes that only house large files. In the bright, shiny future, large object promisors will work like this:

1. You push a large file to your Git host.
2. In the background, your Git host offloads that large file to a large object promisor.
3. When you clone, the Git host tells your Git client about the promisor.
4. Your client will clone from the Git host, and automagically nab large files from the promisor remote.

But we’re still a ways off from that bright, shiny future. Git large object promisors are still a work in progress. Pieces of large object promisors merged to Git in March of 2025. But there’s more to do and open questions yet to answer. And so, for today, you’re stuck with Git LFS for giant files. But once large object promisors see broad adoption, maybe GitHub will let you push files bigger than 100MB.

The future of large files in Git is Git. The Git project is thinking hard about large files, so you don’t have to. Today, we’re stuck with Git LFS. But soon, the only obstacle for large files in Git will be your half-remembered, ominous hunch that it’s a bad idea to stow your MP3 library in Git.

Edited by Refactoring English

[1] Later, other Git forges made their own LFS servers. Today, you can push to multiple Git forges or use an LFS transfer agent, but all this makes setup harder for contributors. You’re pretty much locked in unless you put in extra effort to get unlocked.
[2] File size limits: 100MB for GitHub, 100MB for GitLab.com.

23 hours ago 7 votes
Just a Little More Context Bro, I Promise, and It’ll Fix Everything

Conrad Irwin has an article on the Zed blog, “Why LLMs Can't Really Build Software”. He says it boils down to this: the distinguishing factor of effective engineers is their ability to build and maintain clear mental models. We do this by:

- Building a mental model of what you want to do
- Building a mental model of what the code does
- Reducing the difference between the two

It’s kind of an interesting observation about how we (as humans) problem solve vs. how we use LLMs to problem solve. With LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution. With your brain, you tweak, revise, or simplify your mental model more and more until the solution presents itself. One adds information — complexity, you might even say — to solve a problem. The other eliminates it.

You know what that sort of makes me think of? NPM-driven development. Solving problems with LLMs is like solving front-end problems with NPM: the “solution” comes through installing more and more things — adding more and more context, i.e. more and more packages.

LLM: Problem? Add more context.
NPM: Problem? There’s a package for that.

Contrast that with a solution that comes through simplification. You don’t add more context. You simplify your mental model so you need less to solve a problem; if you solve it at all, perhaps you eliminate the problem entirely! Rather than install another package to fix what ails you, you simplify your mental model, which often eliminates the problem you had in the first place, thus eliminating the need to solve any problem at all, or to add any additional context or complexity (or dependency).

As I’m typing this, I’m thinking of that image of the evolution of the Raptor engine, where it evolved in simplicity. This stands in contrast to my experience working with LLMs, which often want more and more context from me to get to a generative solution.

I know, I know. There’s probably a false equivalence here. This entire post started as a note and I just kept going. This post itself needs further thought and simplification. But that’ll have to come in a subsequent post, otherwise this never gets published lol.

yesterday 3 votes
How to Leverage the CPU’s Micro-Op Cache for Faster Loops

Measuring, analyzing, and optimizing loops using Linux perf, Top-Down Microarchitectural Analysis, and the CPU’s micro-op cache

yesterday 5 votes
Omarchy micro-forks Chromium

You can just change things! That's the power of open source. But for a lot of people, it might seem like a theoretical power. Can you really change, say, Chrome? Well, yes! We've made a micro fork of Chromium for Omarchy (our new 37signals Linux distribution). Just to add one feature needed for live theming. And now it's released as a package anyone can install on any flavor of Arch using the AUR (Arch User Repository). We got it all done in just four days. From idea, to solicitation, to successful patch, to release, to incorporation. And now it'll be part of the next release of Omarchy. There are no speed limits in open source. Nobody to ask for permission. You have the code, so you can make the change. All you need is skill and will (and maybe, if you need someone else to do it for you, a $5,000 incentive 😄).

2 days ago 4 votes
Choosing Tools To Make Websites

Jan Miksovsky lays out his idea for website creation as content transformation. He starts by talking about tools that hide what’s happening “under the hood”:

A framework’s marketing usually pretends it is unnecessary for you to understand how its core transformation works — but without that knowledge, you can’t achieve the beautiful range of results you see in the framework’s sample site gallery.

This is a great callout. Tools will say, “You don’t have to worry about the details.” But the reality is, you end up worrying about the details — at least to some degree. Why? Because what you want to build is full of personalization. That’s how you differentiate yourself, which means you’re going to need a tool that’s expressive enough to help you. So the question becomes: how hard is it to understand the details that are being intentionally hidden away?

A lot of the time those details are not exposed directly. Instead they’re exposed through configuration. But configuration doesn’t really help you learn how something works. I mean, how many of you have learned how TypeScript works under the hood by using tsconfig.json? As Jan says:

Configuration can lead to as many problems as it solves

Nailed it. He continues:

Configuring software is itself a form of programming, in fact a rather difficult and often baroque form. It can take more data files or code to configure a framework’s transformation than to write a program that directly implements that transformation itself.

I’m not a DevOps person, but that sounds like DevOps in a nutshell right there. (It also perfectly encapsulates my feelings on trying to set up configuration in GitHub Actions.)

Jan moves beyond site creation to also discuss site hosting. He gives good reasons for keeping your website’s architecture simple and decoupled from your hosting provider (something I’ve been a longtime proponent of):

These site hosting platforms typically charge an ongoing subscription fee. (Some offer a free tier that may meet your needs.) The monthly fee may not be large, but it’s forever. Ten years from now you’ll probably still want your content to be publicly available, but will you still be happy paying that monthly fee? If you stop paying, your site disappears.

In subscription pricing, any price (however small) is recurring. Stated differently: pricing is forever.

Anyhow, it’s a good read from Jan, and it lays out his vision for why he’s building Web Origami: a tool that encourages you to understand (and customize) how you transform content into a website. He just launched version 0.4.0, which has some exciting stuff I’m eager to try out (I’ll have to write about all that later).

3 days ago 5 votes