The other day in our morning rush before school, my wife asked for help figuring out how to put lunch money on our kids’ school accounts. For some time she’s been doing it “the hard way”: talk to the people in the front office of the school every few months and swipe a credit card. Every time she did it, they would remind her there was an “easier and more convenient” way to do it via an app. But that seemed like the hard way of doing it: find an app, download it, create yet another account (and another password to remember), enter a credit card, etc. So she asked for my help. First I had to find the app in the App Store. This sounds like an easy task, but all I had was the name of a generic, three-letter service. Have you tried searching for an app with a generic name in the App Store? The results are legion, and you have no heuristics for gauging the authenticity of the app. To be honest, this was an app for a county school lunch program, which means it would probably look and feel...
4 months ago

More from Jim Nielsen’s Blog

Notes on Google Search Now Requiring JavaScript

John Gruber has a post about how Google’s search results now require JavaScript[1]. Why? Here’s Google:

    the change is intended to “better protect” Google Search against malicious activity, such as bots and spam

Lol, the irony. Let’s turn to JavaScript for protection, as if the entire ad-based tracking/analytics world born out of JavaScript’s capabilities isn’t precisely what led to a less secure, less private, more exploited web. But whatever, “the web” is Google’s product so they can do what they want with it — right? Here’s John:

    Old original Google was a company of and for the open web. Post 2010-or-so Google is a company that sees the web as a de facto proprietary platform that it owns and controls. Those who experience the web through Google Chrome and Google Search are on that proprietary not-closed-per-se-but-not-really-open web.

Search that requires JavaScript won’t cause the web to die. But it’s a sign of what’s to come (emphasis mine):

    Requiring JavaScript for Google Search is not about the fact that 99.9 percent of humans surfing the web have JavaScript enabled in their browsers. It’s about taking advantage of that fact to tightly control client access to Google Search results. But the nature of the true open web is that the server sticks to the specs for the HTTP protocol and the HTML content format, and clients are free to interpret that as they see fit. Original, novel, clever ways to do things with website output is what made the web so thrilling, fun, useful, and amazing. This JavaScript mandate is Google’s attempt at asserting that it will only serve search results to exactly the client software that it sees fit to serve.

Requiring JavaScript is all about control. The web was founded on the idea of open access for all. But since that’s been completely and utterly abused (see LLM training datasets) we’re gonna lose it. The whole “freemium with ads” model that underpins the web was exploited for profit by AI at an industrial scale and that’s causing the “free and open web” to become the “paid and private web”. Universal access is quickly becoming select access — Google search results included.

If you want to go down a rabbit hole of reading more about this, there’s the TechCrunch article John cites, a Hacker News thread, and this post from a company founded on providing search APIs.

Email :: Mastodon :: Bluesky #generalNotes

4 days ago 9 votes
Missed Connections

Let me tell you about one of the best feelings. You have a problem. You bang your head on it for a while. Through the banging, you formulate a string of keywords describing the problem. You put those words into a search engine. You land on a forum or a blog post and read someone else’s words containing those keywords and more. Their words resonate with you deeply. They’re saying the exact same things you were saying to yourself in your head. You immediately know, “This person gets it!” You know they have an answer to your problem. They’ve seen what you’re seeing. And on top of it all, they provide a solution which fixes your problem!

A sense of connection is now formed. You feel validated, understood, seen. They’ve been through what you’re going through, and they wrote about it to reach out to you — across time and space. I fell in love with the web for this reason, this feeling of connection. You could search the world and find someone who saw what you see, felt what you feel, went through what you’re going through.

Contrast that with today. Today you have a problem. You bang your head on it. You ask a question in a prompt. And you get back something. But there’s no human behind it. Just a machine which takes human voices and de-personalizes them until the individual point of view is annihilated. And with it goes the sense of connection — the feeling of being validated, understood, seen.

Every prompt a connection that could have been. A world of missed connections. Email :: Mastodon :: Bluesky

5 days ago 6 votes
HTML Minification for Static Sites

This is a note to my future self, as I’ve set up HTML minification on a few different projects and each time I ask myself, “How did I do that again?” So here’s your guide, future Jim (and anyone else on the internet who finds this).

I use html-minifier to minify HTML files created by my static site generator. Personally, I use the CLI tool because it's easy to add a CLI command as an npm postbuild step. Example package.json:

    {
      "scripts": {
        "build": "<BUILD-COMMAND>",
        "postbuild": "html-minifier --input-dir <BUILD-DIR> --output-dir <BUILD-DIR> --file-ext html <OPTIONS>"
      }
    }

All the minification options are off by default, so you have to turn them on one-by-one (HTML minification is a tricky concern). Me personally, I’m using the ones exemplified in the project README:

    --collapse-whitespace
    --remove-comments
    --remove-optional-tags
    --remove-redundant-attributes
    --remove-script-type-attributes
    --remove-tag-whitespace
    --use-short-doctype
    --minify-css true
    --minify-js true

So, for a site folder named build, the entire command looks like this:

    html-minifier --input-dir ./build --output-dir ./build --file-ext html --collapse-whitespace --remove-comments --remove-optional-tags --remove-redundant-attributes --remove-script-type-attributes --remove-tag-whitespace --use-short-doctype --minify-css true --minify-js true

That’s it — that’s the template.

What Kind of Results Do I Get?

I use this on a few of my sites, including my notes site and this blog. When testing it locally for my blog’s build, I:

    1. Run a build and put the files in ./build
    2. Copy ./build to ./build-min (command: cp -R build build-min)
    3. Run html-minifier on build-min and compare the resulting folders in macOS Finder

Here are my results for my blog (2,501 items in ./build):

    Directory size:
      Before: 37MB
      After: 28.4MB
      Difference: ▼ -8.6MB (-23.24%)

    Main index.html file lines of code:
      Before: 1,484
      After: 15
      Difference: ▼ -1,469 lines (-99%)

    Main index.html file size over the network:
      Before: 30.6kB
      After: 17.6kB
      Difference: ▼ -13kB (-42.48%)

And the results for my notes (one big index.html file):

    File size:
      Before: 1.5MB
      After: 1.1MB
      Difference: ▼ -0.4MB (-26.67%)

    Lines of code:
      Before: 25,974
      After: 1
      Difference: ▼ -25,973 lines (-99.996%)

Email :: Mastodon :: Bluesky #html
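P.S. If you’d rather call the minifier from your own build script instead of the CLI, the html-minifier package also exposes a minify() function, and its options are the camelCase versions of the flags above. A rough sketch, not something I’ve actually wired into my own builds (the path is just an example):

    // Sketch: minify one built HTML file in place using html-minifier's API
    const fs = require("fs");
    const { minify } = require("html-minifier");

    const path = "./build/index.html"; // adjust to your own build output
    const input = fs.readFileSync(path, "utf8");
    const output = minify(input, {
      collapseWhitespace: true,
      removeComments: true,
      removeOptionalTags: true,
      removeRedundantAttributes: true,
      removeScriptTypeAttributes: true,
      removeTagWhitespace: true,
      useShortDoctype: true,
      minifyCSS: true,
      minifyJS: true,
    });
    fs.writeFileSync(path, output);

The CLI is still less code to own, which is why it’s what I actually use.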

6 days ago 14 votes
Consistency For Who? Thoughts on Overriding Basic Computing Controls

A note before we start: I don’t know how much of this I believe. I’m sketching out some feelings in this post and thinking through whether it actually makes any sense. I’d be curious where other folks land on this.

I’m not sure I totally understand this impulse we have on the web to override the default style and appearance of fundamental computing controls. Everyone wants their own checkboxes, radios, and select menus that fit their brand. But websites aren’t about you or your brand. They’re about the people you’re serving who have to use them, i.e. the users. And their needs vary from one person to the next, based on their unique context and environment (operating system, device, etc.)

For them, a checkbox that’s visually and functionally uniform across every website is a good thing. It provides consistency and sets expectations — “Oh hey, a checkbox, I know how to use this. It looks and functions the same as a checkbox on every other website, app, or system preference on my computer.” But where we’ve arrived on the web is a place where consistency for brands matters more than consistency for end users.

Take Radios, For Example

Imagine a radio control in macOS. There are some design considerations in how that system-level control looks and functions that are unique to macOS. For example, when a window loses focus in favor of another window, radio controls are de-emphasized visually because the user is now focused on something else in a different window. This is a unique solution for a specific computing experience where multiple windows may be on the screen at the same time and, as the user shifts focus from one window to another, additional visual help is provided to emphasize and de-emphasize the user’s focal point in the user interface. The beauty of leveraging a system-level element is that you’re tapping into these kinds of solutions which are tailored to solve problems unique to their context and environment.

Contrast that with a radio somebody re-implemented on the web to match their brand. I highly doubt many have taken into consideration a de-emphasized state for windowed computing experiences.

Or Take Select, For Example

As another example, consider how the <select> element can break outside of the browser window because it is an OS-level control. Say you have a list with a lot of options? A <select> element can provide users something your custom select never could: an adaptation to its environment, the operating system. If the browser window is small on screen (because, say, the user is trying to do something else within their computing environment like side-by-side windows) the <select> can break out of the browser window and accommodate more space. Similarly, though perhaps not as advantageous, on mobile devices like iOS the <select> can break outside of the browser window. Something a custom element could never do.

Additionally, these native controls are incredibly forward looking. If new hardware or OS appears on the scene (see visionOS), how the <select> works is handled for you. When it ships, you’re up to date (vs. a design system where now you have to go consider how, if at all, things change for your entire system and every site it supports).

Business case: there’s no more economical way to ship websites than using the platform. You get outside engineering resources to build your UIs at no cost to you! Every component you build is a liability, so what’s the least you can do to deliver value? I get it, there are trade-offs.
But when building UIs, how often do we stop to ask: What’s lost when we refuse to consider the context and environment of our users because we instead force upon them the context and environment of our brand?

Two Cents on Design Systems

We extoll the virtues of a “design system” within our brands and organizations — consistency, familiarity, uniformity, all for our users! But once they leave the walled garden of our brand, it’s ok that they suddenly lose this privilege? If the inconsistencies across design systems for basic computing controls were within our own organizational systems, we would be enraged! But since they’re across brands (e.g. websites), it’s fine? (Below is an example of radios and checkboxes and selects across various popular design systems.)

In the end, it’s the user who has to deal with these inconsistencies. But isn’t that what “systems” are meant to solve in the first place? In other words, the default, un-styled, system-level controls for radios, switches, checkboxes, etc., are the original design system before our branded design systems overrode them.

Are Organizational Design Systems User-Centric?

Your organization’s design system lacks the sensibilities of your users’ platforms. “We made our own radios! They’re great! They’re ‘on-brand’ and consistent across all our stuff.” But they’re not consistent across all your users’ stuff. In other words, you made a radio for your company without considering what makes a radio a radio on the computer it will be used on. You oriented a visual and functional experience around you and your environment, rather than the person you’re serving and their context and environment. And I just tend to think we’re losing out on something with that choice — to say nothing of its cost.

Disclaimers

Disclaimer 1: I know I’m cheating here. Not all native system controls have been standardized in a way that serves the varied needs of complex applications. But, on the other side of this coin, a simple healthcare form that would be perfectly suited to some basic radio controls and a plain <select> menu instead rolls its own UI for no other reason than to make it “on-brand” and it’s worse in almost every way: visually, functionally, accessibly.

Disclaimer 2: Yeah I know, this puts us as developers at the mercy of browser vendors and OS platforms and the paltry level of access they give us to system controls. For example, it’s still not easy to mark a checkbox with an indeterminate state in HTML alone (there’s a tiny example of this after the disclaimers). I get that. But perhaps if we spent more time advocating for these kinds of enhancements (instead of re-theming a checkbox for the nth time) maybe we’d get what we ask for?

Disclaimer 3: In case it’s not clear, I am not advocating every website everywhere should only use form controls provided by the web platform. The web is a big place; it’s silly to make universal statements for something so big. What I’m trying to do is bring attention to the fact that maybe you don’t need to roll your own. Maybe design systems should consider the computing context and environment of their users over the context and environment of their own brand.

Disclaimer 4: I get that system-level consistency is a kind of branded consistency. If you choose an Apple product, you’re choosing an Apple-branded experience for native form controls. I realize these things are not totally brand-agnostic. But consumers make a choice when they buy a computing device, and maybe we should honor that choice rather than try overriding it.
Disclaimer 5: Having disclaimers clears me of any and all criticism lol. Email :: Mastodon :: Bluesky
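An addendum to Disclaimer 2: the indeterminate state is exposed only as a DOM property, there is no HTML attribute for it, so showing the “mixed” state takes a line of script. A minimal, framework-free sketch:

    // There's no indeterminate="" attribute in HTML; it can only be set from script.
    const checkbox = document.querySelector('input[type="checkbox"]');
    checkbox.indeterminate = true; // renders the "mixed" appearance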

a week ago 82 votes
Relationship Advice for AI

You know what’s really helpful in solving my own problems? Writing them down, sending them to someone, and not hearing back. You ever do that? For me, it’s a bulletproof method for solving problems. It’s akin to those moments when you go to someone with a problem, you talk it through, you find a solution, you thank them for their help, and they say, “Well I didn’t even say anything, but you’re welcome.”

If I have a friend, co-worker, or collaborator who I know is on the other end of a chat box, typing out my problem and not hearing back from them can be a tremendous help. Here’s an example of how it often goes:

Jim Nielsen, Friday at 12:53 PM
I’m having an issue where the deployment isn’t working. Failures are coming from lines 123-125 of the build script...

Jim Nielsen, Friday at 12:59 PM
Oh, it looks like something changed in commit abc123e in the lock file...

Jim Nielsen, Friday at 1:02 PM
This is so weird, I hate troubleshooting this crap. Why is everything in the world garbage?

Jim Nielsen, Friday at 1:03 PM
Ok, I can’t figure this out. I'm going to need your help when you have a second.

Jim Nielsen, Friday at 1:09 PM
Oh hey, actually I think I know what the problem is...

Jim Nielsen, Friday at 1:11 PM
Ok, it’s fixed now. Nevermind, I don’t need your help. Thanks!

Co-worker, Friday at 4:03 PM
You're welcome, glad I could help!

In contrast, AI is too eager to respond back with something when nothing would be much more helpful. Knowing another human is there to connect with — available, listening, but not speaking — has helped me many times as I express my thinking step-by-step. So let me give you some relationship advice, AI. Sometimes you don’t need to say or do anything. You just need to listen. Cool? Thanks. Email :: Mastodon :: Bluesky #ai

a week ago 32 votes

More in programming

Adding auto-generated cover images to EPUBs downloaded from AO3

I was chatting with a friend recently, and she mentioned an annoyance when reading fanfiction on her iPad. She downloads fic from AO3 as EPUB files, and reads it in the Kindle app – but the files don’t have a cover image, and so the preview thumbnails aren’t very readable. She’s downloaded several hundred stories, and these thumbnails make it difficult to find things in the app’s “collections” view.

This felt like a solvable problem. There are tools to add cover images to EPUB files, if you already have the image. The EPUB file embeds some key metadata, like the title and author. What if you had a tool that could extract that metadata, auto-generate an image, and use it as the cover?

So I built that. It’s a small site where you upload EPUB files you’ve downloaded from AO3, the site generates a cover image based on the metadata, and it gives you an updated EPUB to download. The new covers show the title and author in large text on a coloured background, so they’re much easier to browse in the Kindle app.

If you’d find this helpful, you can use it at alexwlchan.net/my-tools/add-cover-to-ao3-epubs/ Otherwise, I’m going to explain how it works, and what I learnt from building it. There are three steps to this process:

    1. Open the existing EPUB to get the title and author
    2. Generate an image based on that metadata
    3. Modify the EPUB to insert the new cover image

Let’s go through them in turn.

Open the existing EPUB

I’ve not worked with EPUB before, and I don’t know much about it. My first instinct was to look for Python EPUB libraries on PyPI, but there was nothing appealing. The results were either very specific tools (convert EPUB to/from format X) or very unmaintained (the top result was last updated in April 2014). I decided to try writing my own code to manipulate EPUBs, rather than using somebody else’s library.

I had a vague memory that EPUB files are zips, so I changed the extension from .epub to .zip and tried unzipping one – and it turns out that yes, it is a zip file, and the internal structure is fairly simple. I found a file called content.opf which contains metadata as XML, including the title and author I’m looking for:

    <?xml version='1.0' encoding='utf-8'?>
    <package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="uuid_id">
      <metadata xmlns:opf="http://www.idpf.org/2007/opf" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:calibre="http://calibre.kovidgoyal.net/2009/metadata">
        <dc:title>Operation Cameo</dc:title>
        <meta name="calibre:timestamp" content="2025-01-25T18:01:43.253715+00:00"/>
        <dc:language>en</dc:language>
        <dc:creator opf:file-as="alexwlchan" opf:role="aut">alexwlchan</dc:creator>
        <dc:identifier id="uuid_id" opf:scheme="uuid">13385d97-35a1-4e72-830b-9757916d38a7</dc:identifier>
        <meta name="calibre:title_sort" content="operation cameo"/>
        <dc:description><p>Some unusual orders arrive at Operation Mincemeat HQ.</p></dc:description>
        <dc:publisher>Archive of Our Own</dc:publisher>
        <dc:subject>Fanworks</dc:subject>
        <dc:subject>General Audiences</dc:subject>
        <dc:subject>Operation Mincemeat: A New Musical - SpitLip</dc:subject>
        <dc:subject>No Archive Warnings Apply</dc:subject>
        <dc:date>2023-12-14T00:00:00+00:00</dc:date>
      </metadata>
      …

That dc: prefix was instantly familiar from my time working at Wellcome Collection – this is Dublin Core, a standard set of metadata fields used to describe books and other objects.
I’m unsurprised to see it in an EPUB; this is exactly how I’d expect it to be used.

I found an article that explains the structure of an EPUB file, which told me that I can find the content.opf file by looking at the root-path element inside the mandatory META-INF/container.xml file, which every EPUB has. I wrote some code to find the content.opf file, then a few XPath expressions to extract the key fields, and I had the metadata I needed.

Generate a cover image

I sketched a simple cover design which shows the title and author. I wrote the first version of the drawing code in Pillow, because that’s what I’m familiar with. It was fine, but the code was quite flimsy – it didn’t wrap properly for long titles, and I couldn’t get custom fonts to work.

Later I rewrote the app in JavaScript, so I had access to the HTML canvas element. This is another tool that I haven’t worked with before, so a fun chance to learn something new. The API felt fairly familiar, similar to other APIs I’ve used to build HTML elements.

This time I did implement some line wrapping – there’s a measureText() API for canvas, so you can see how much space text will take up before you draw it. I break the text into words, and keep adding words to a line until measureText tells me the line is going to overflow the page. I have lots of ideas for how I could improve the line wrapping, but it’s good enough for now. I was also able to get fonts working, so I picked Georgia to match the font used for titles on AO3. Here are some examples:

I had several ideas for choosing the background colour. I’m trying to help my friend browse her collection of fic, and colour would be a useful way to distinguish things – so how do I use it? I realised I could get the fandom from the EPUB file, so I decided to use that. I use the fandom name as a seed to a random number generator, then I pick a random colour. This means that all the fics in the same fandom will get the same colour – for example, all the Star Wars stories are a shade of red, while Star Trek are a bluey-green. This was a bit harder than I expected, because it turns out that JavaScript doesn’t have a built-in seeded random number generator – I ended up using some snippets from a Stack Overflow answer, where bryc has written several pseudorandom number generators in plain JavaScript.

I didn’t realise until later, but I designed something similar to the placeholder book covers in the Apple Books app. I don’t use Apple Books that often so it wasn’t a deliberate choice to mimic this style, but clearly it was somewhere in my subconscious. One difference is that Apple’s app seems to be picking from a small selection of background colours, whereas my code can pick a much nicer variety of colours. Apple’s choices will have been pre-approved by a designer to look good, but I think mine is more fun.
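For the note-to-self value, the wrapping and colour logic looks roughly like the sketch below. This is a simplified sketch, not the code from the tool: the names are illustrative, and the colour picker here just hashes the fandom name to a hue rather than seeding one of bryc’s generators.

    // Sketch: greedy line wrapping with measureText (names are illustrative)
    function wrapTitle(ctx, text, maxWidth) {
      const lines = [];
      let line = "";
      for (const word of text.split(/\s+/)) {
        const candidate = line ? line + " " + word : word;
        if (ctx.measureText(candidate).width > maxWidth && line) {
          lines.push(line); // the candidate would overflow, so close the current line
          line = word;
        } else {
          line = candidate;
        }
      }
      if (line) lines.push(line);
      return lines;
    }

    // Sketch: deterministic colour per fandom. The real tool seeds a proper PRNG;
    // hashing straight to a hue is a simpler stand-in.
    function colourForFandom(fandom) {
      let hash = 0;
      for (const char of fandom) {
        hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // simple unsigned string hash
      }
      return `hsl(${hash % 360}, 60%, 40%)`; // same fandom -> same background colour
    }

Drawing is then just a matter of calling ctx.fillText for each wrapped line, stepping down by the line height.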
Add the cover image to the EPUB

My first attempt to add a cover image used pandoc:

    pandoc input.epub --output output.epub --epub-cover-image cover.jpeg

This approach was no good: although it added the cover image, it destroyed the formatting in the rest of the EPUB. This made it easier to find the fic, but harder to read once you’d found it. (An EPUB file I downloaded from AO3, before/after it was processed by pandoc.)

So I tried to do it myself, and it turned out to be quite easy! I unzipped another EPUB which already had a cover image. I found the cover image in OPS/images/cover.jpg, and then I looked for references to it in content.opf. I found two elements that referred to cover images:

    <?xml version="1.0" encoding="UTF-8"?>
    <package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="PrimaryID">
      <metadata xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:opf="http://www.idpf.org/2007/opf">
        <meta name="cover" content="cover-image"/>
        …
      </metadata>
      <manifest>
        <item id="cover-image" href="images/cover.jpg" media-type="image/jpeg" properties="cover-image"/>
        …
      </manifest>
    </package>

This gave me the steps for adding a cover image to an EPUB file: add the image file to the zipped bundle, then add these two elements to the content.opf.

Where am I going to deploy this?

I wrote the initial prototype of this in Python, because that’s the language I’m most familiar with. Python has all the libraries I need:

    - The zipfile module can unpack and modify the EPUB/ZIP
    - The xml.etree or lxml modules can manipulate XML
    - The Pillow library can generate images

I built a small Flask web app: you upload the EPUB to my server, my server does some processing, and sends the EPUB back to you. But for such a simple app, do I need a server?

I tried rebuilding it as a static web page, doing all the processing in client-side JavaScript. That’s simpler for me to host, and it doesn’t involve a round-trip to my server. That has lots of other benefits – it’s faster, less of a privacy risk, and doesn’t require a persistent connection. I love static websites, so can they do this? Yes! I just had to find a different set of libraries:

    - The JSZip library can unpack and modify the EPUB/ZIP, and is the only third-party code I’m using in the tool
    - Browsers include DOMParser for manipulating XML
    - I’ve already mentioned the HTML <canvas> element for rendering the image

This took a bit longer because I’m not as familiar with JavaScript, but I got it working. As a bonus, this makes the tool very portable. Everything is bundled into a single HTML file, so if you download that file, you have the whole tool. If my friend finds this tool useful, she can save the file and keep a local copy of it – she doesn’t have to rely on my website to keep using it.
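And here’s roughly what the repackaging step looks like in the browser. Again, this is a simplified sketch rather than the code from the tool: it assumes the EPUB is already in memory as an ArrayBuffer, the generated cover is a JPEG Blob, and the content.opf path and image filename are hard-coded for brevity.

    // Sketch: add a cover image to an EPUB in the browser with JSZip + DOMParser
    async function addCover(epubArrayBuffer, coverJpegBlob) {
      const zip = await JSZip.loadAsync(epubArrayBuffer);

      // 1. Add the image file to the zipped bundle
      zip.file("images/cover.jpg", coverJpegBlob);

      // 2. Add the two cover elements to content.opf
      //    (path hard-coded here; the real tool reads it from META-INF/container.xml)
      const opfXml = await zip.file("content.opf").async("string");
      const doc = new DOMParser().parseFromString(opfXml, "application/xml");
      const OPF = "http://www.idpf.org/2007/opf";

      const meta = doc.createElementNS(OPF, "meta");
      meta.setAttribute("name", "cover");
      meta.setAttribute("content", "cover-image");
      doc.getElementsByTagNameNS(OPF, "metadata")[0].appendChild(meta);

      const item = doc.createElementNS(OPF, "item");
      item.setAttribute("id", "cover-image");
      item.setAttribute("href", "images/cover.jpg");
      item.setAttribute("media-type", "image/jpeg");
      item.setAttribute("properties", "cover-image");
      doc.getElementsByTagNameNS(OPF, "manifest")[0].appendChild(item);

      zip.file("content.opf", new XMLSerializer().serializeToString(doc));

      // 3. Repackage the EPUB as a downloadable Blob
      return zip.generateAsync({ type: "blob", mimeType: "application/epub+zip" });
    }

One caveat if you go down this road: the EPUB spec expects the mimetype entry to be the first file in the archive, stored without compression, so it’s worth checking the repackaged file still opens in your reader of choice.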
What should it look like?

My first design was very “engineer brain” – I just put the basic controls on the page. It was fine, but it wasn’t good. That might be okay, because the only person I need to be able to use this app is my friend – but wouldn’t it be nice if other people were able to use it? If they’re going to do that, they need to know what it is – most people aren’t going to read a 2,500-word blog post to understand a tool they’ve never heard of. (Although if you have read this far, I appreciate you!)

I started designing a proper page, including some explanations and descriptions of what the tool is doing. I got something that felt pretty good, including FAQs and acknowledgements, and I added a grey area for the part where you actually upload and download your EPUBs, to draw the user’s eye and make it clear this is the important stuff. But even with that design, something was missing. I realised I was telling you I’d create covers, but not showing you what they’d look like. Aha! I sat down and made up a bunch of amusing titles for fanfic and fanfic authors, so now you see a sample of the covers before you upload your first EPUB. This makes it clearer what the app will do, and was a fun way to wrap up the project.

What did I learn from this project?

Don’t be scared of new file formats

My first instinct was to look for a third-party library that could handle the “complexity” of EPUB files. In hindsight, I’m glad I didn’t find one – it forced me to learn more about how EPUBs work, and I realised I could write my own code using built-in libraries. EPUB files are essentially ZIP files, and I only had basic needs. I was able to write my own code. Because I didn’t rely on a library, now I know more about EPUBs, I have code that’s simpler and easier for me to understand, and I don’t have a dependency that may cause problems later. There are definitely some file formats where I need existing libraries (I’m not going to write my own JPEG parser, for example) – but I should be more open to writing my own code, and not jumping to add a dependency.

Static websites can handle complex file manipulations

I love static websites and I’ve used them for a lot of tasks, but mostly read-only display of information – not anything more complex or interactive. But modern JavaScript is very capable, and you can do a lot of things with it. Static pages aren’t just for static data. One of the first things I made that got popular was find untagged Tumblr posts, which was built as a static website because that’s all I knew how to build at the time. Somewhere in the intervening years, I forgot just how powerful static sites can be. I want to build more tools this way.

Async JavaScript calls require careful handling

The JSZip library I’m using has a lot of async functions, and this is my first time using async JavaScript. I got caught out several times, because I forgot to wait for async calls to finish properly. For example, I’m using canvas.toBlob to render the image, which is an async function. I wasn’t waiting for it to finish, and so the zip would be repackaged before the cover image was ready to add, and I got an EPUB with a missing image. Oops. (A short sketch of the fix is at the end of this post.) I think I’ll always prefer the simplicity of synchronous code, but I’m sure I’ll get better at async JavaScript with practice.

Final thoughts

I know my friend will find this helpful, and that feels great. Writing software that’s designed for one person is my favourite software to write. It’s not hyper-scale, it won’t launch the next big startup, and it’s usually not breaking new technical ground – but it is useful. I can see how I’m making somebody’s life better, and isn’t that what computers are for? If other people like it, that’s a nice bonus, but I’m really thinking about that one person. Normally the one person I’m writing software for is me, so it’s extra nice when I can do it for somebody else.

If you want to try this tool yourself, go to alexwlchan.net/my-tools/add-cover-to-ao3-epubs/ If you want to read the code, it’s all on GitHub.
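A small postscript on that async gotcha, mostly for future me: canvas.toBlob takes a callback rather than returning a Promise, so the easy fix is to wrap it yourself and await the wrapper. A minimal sketch, not the exact code from the tool:

    // Wrap the callback-based canvas.toBlob in a Promise so it can be awaited
    function canvasToBlob(canvas, type = "image/jpeg") {
      return new Promise((resolve, reject) => {
        canvas.toBlob(
          (blob) => (blob ? resolve(blob) : reject(new Error("toBlob returned null"))),
          type
        );
      });
    }

    // Usage: make sure the cover exists *before* repackaging the zip
    // const coverJpegBlob = await canvasToBlob(canvas);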

5 hours ago 3 votes
Non-alcoholic apéritifs

I’ve been doing Dry January this year. One thing I missed was something for apéro hour, a beverage to mark the start of the evening. Something complex and maybe bitter, not like a drink you’d have with lunch. I found some good options.

Ghia sodas are my favorite. Ghia is an NA apéritif based on grape juice but with enough bitterness (gentian) and sourness (yuzu) to be interesting. You can buy a bottle and mix it with soda yourself but I like the little cans with extra flavoring. The Ginger and the Sumac & Chili are both great.

Another thing I like is low-sugar fancy soda pops. Not diet drinks, they still have a little sugar, but typically 50 calories a can. De La Calle Tepache is my favorite. Fermented pineapple is delicious and they have some fun flavors. Culture Pop is also good.

A friend gave me the Zero book, a drinks cookbook from the fancy restaurant Alinea. This book is a little aspirational but the recipes are doable, it’s just a lot of labor. Very fancy high-end drink mixing, really beautiful flavor ideas. The only thing I made was their gin substitute (mostly junipers extracted in glycerin) and it was too sweet for me. Need to find the right use for it, a martini definitely ain’t it.

An easier homemade drink is this Nonalcoholic Dirty Lemon Tonic. It’s basically a lemonade heavily flavored with salted preserved lemons, then mixed with tonic. I love the complexity and freshness of this drink and enjoy it on its own merits.

Finally, non-alcoholic beer has gotten a lot better in the last few years thanks to manufacturing innovations. I’ve been enjoying NA Black Butte Porter, Stella Artois 0.0, Heineken 0.0. They basically all taste just like their alcoholic uncles, no compromise.

One thing to note about non-alcoholic substitutes is they are not cheap. They’ve become a big high-end business. Expect to pay the same for an NA drink as one with alcohol even though they aren’t taxed nearly as much.

2 days ago 5 votes
It burns

The first time we had to evacuate Malibu this season was during the Franklin fire in early December. We went to bed with our bags packed, thinking they'd probably get it under control. But by 2am, the roaring blades of fire choppers shaking the house got us up. As we sped down the canyon towards Pacific Coast Highway (PCH), the fire had reached the ridge across from ours, and flames were blazing large out the car windows. It felt like we had left the evacuation a little too late, but they eventually did get Franklin under control before it reached us.

Humans have a strange relationship with risk and disasters. We're so prone to wishful thinking and bad pattern matching. I remember people being shocked when the flames jumped the PCH during the Woolsey fire in 2018. IT HAD NEVER DONE THAT! So several friends of ours had to suddenly escape a nightmare scenario, driving through burning streets, in heavy smoke, with literally their lives on the line. Because the past had failed to predict the future.

I fell into that same trap for a moment with the dramatic proclamations of wind and fire weather in the days leading up to January 7. Warning after warning of "extremely dangerous, life-threatening wind" coming from the City of Malibu, and that overly-bureaucratic-but-still-ominous "Particularly Dangerous Situation" designation. Because, really, how much worse could it be? Turns out, a lot.

It was a little before noon on the 7th when we first saw the big plumes of smoke rise from the Palisades fire. And immediately the pattern matching ran astray. Oh, it's probably just like Franklin. It's not big yet, they'll get it out. They usually do. Well, they didn't.

By the late afternoon, we had once more packed our bags, and by then it was also clear that things actually were different this time. Different worse. Different enough that even Santa Monica didn't feel like it was assured to be safe. So we headed far North, to be sure that we wouldn't have to evacuate again. Turned out to be a good move. Because by now, into the evening, few people in the connected world hadn't started to see the catastrophic images emerging from the Palisades and Eaton fires. Well over 10,000 houses would ultimately burn. Entire neighborhoods leveled. Pictures that could be mistaken for World War II. Utter and complete destruction.

By the night of the 7th, the fire reached our canyon, and it tore through the chaparral and brush that'd been building since the last big fire that area saw in 1993. Out of some 150 houses in our immediate vicinity, nearly a hundred burned to the ground. Including the first house we moved to in Malibu back in 2009. But thankfully not ours. That's of course a huge relief. This was and is our Malibu Dream House. The site of that gorgeous home office I'm so fond to share views from. Our home.

But a house left standing in a disaster zone is still a disaster. The flames reached all the way up to the base of our construction, incinerated much of our landscaping, and devoured the power poles around it to dysfunction. We have burnt-out buildings every which way the eye looks. The national guard is still stationed at road blocks on the access roads. Utility workers are tearing down the entire power grid to rebuild it from scratch. It's going to be a long time before this is comfortably habitable again. So we left.

That in itself feels like defeat. There's an urge to stay put, and to help, in whatever helpless ways you can.
But with three school-age children who've already missed over a month's worth of learning from power outages, fire threats, actual fires, and now mudslide dangers, it was time to go. None of this came as a surprise, mind you. After Woolsey in 2018, Malibu life always felt like living on borrowed time to us. We knew it, even accepted it. Beautiful enough to be worth the risk, we said.

But even if it wasn't a surprise, it's still a shock. The sheer devastation, especially in the Palisades, went far beyond our normal range of comprehension. Bounded, as it always is, by past experiences.

Thus, we find ourselves back in Copenhagen. A safe haven for calamities of all sorts. We lived here for three years during the pandemic, so it just made sense to use it for refuge once more. The kids' old international school accepted them right back in, and past friendships were quickly rebooted.

I don't know how long it's going to be this time. And that's an odd feeling to have, just as America has been turning a corner, and just as the optimism is back in so many areas. Of the twenty years I've spent in America, this feels like the most exciting time to be part of the exceptionalism that the US of A offers.

And of course we still are. I'll still be in the US all the time on business, racing, and family trips. But it won't be exclusively so for a while, and it won't be from our Malibu Dream House. And that burns.

2 days ago 6 votes
Slow, flaky, and failing

Thou shalt not suffer a flaky test to live, because it’s annoying, counterproductive, and dangerous: one day it might fail for real, and you won’t notice. Here’s what to do.

3 days ago 7 votes
Name that Ware, January 2025

The ware for January 2025 is shown below. Thanks to brimdavis for contributing this ware! …back in the day when you would get wares that had “blue wires” in them… One thing I wonder about this ware is…where are the ROMs? Perhaps I’ll find out soon! Happy year of the snake!

3 days ago 5 votes