After I posted that link to my latest podcast with Rene Ritchie, several folks alerted me via Twitter that all my colorful metaphors had been “bleeped” on the audio. I didn’t realize that because I hadn’t listened to the recording myself. And I don’t normally listen to my own podcasts because… that’s just sort of creepy, isn’t it? Obviously, that means I don’t mix the audio either. I don’t do that because 1) I don’t have relevant experience at it, 2) I’m really lazy and 3) fine folks elsewhere do all the hard work for me. My apologies if you didn’t get the whole “Melton” experience you were expecting. Rene tells me that episode was an accident and our next podcast won’t be censored. “Let Melton be Melton,” as he likes to say. Plus, we might just release an explicit version of the current show. Has everyone calmed the fuck down now? OK, here’s the thing—I was not upset at all about being censored. The show might be called “Melton” but that’s only because 1) Rene Ritchie is a generous...
over a year ago


More from Don Melton

Sorry, we’re closed

For reasons that will soon become obvious, I’m shutting the doors on this website. Everything will remain online for now, but I don’t plan on returning to write anything new here. Not that I’ve added any content in almost two years anyway. I still have a passion for making observations, telling stories and recording my thoughts as they happen. I’ll just be doing it elsewhere. Thank you for reading.

over a year ago 87 votes
Happy twentieth to Safari and WebKit

Safari and WebKit aren’t teenagers anymore. I just want to make note of that. To quote a previous post: On June 25, 2001, I arrived at Apple Computer to lead the effort in building a new Web browser. It was also Ken Kocienda’s first day on the job, both at Apple and on that same project with me. For that reason, Ken and I have always considered our start date to be when Safari and WebKit were born. Not any other position on the calendar. Only June 25, 2001. We were there. We should know. That was 20 years ago today. Twenty years! Of course, it’s been over nine years since I retired from Apple. Obviously, I’m not a teenager anymore either. But I still remember that first day clearly. So, happy birthday to Safari and WebKit and the team now tasked with their adult supervision.

over a year ago 40 votes
Cranking up the blogging machine again

For whatever reason I started blogging again last week. Not knowing why isn’t due to a lack of introspection on my part. Maybe the nauseating weight of the Trump administration was suppressing my desire to write for the previous three-and-a-half years? Or maybe I’m just arbitrary and lazy? It’s also unclear how long I can keep this up. Inspiration and a willingness to type are not something you can purchase online or install with a package manager. I suppose we’ll find out.

However, the mechanics of blogging again are simpler to understand. For one thing, as I write here:

… this website is only free-range, handcrafted, artisanal HTML. With a little CSS, of course. No JavaScript—that’s just crazy talk.

Technically, it’s all created using software. I don’t actually type all that markup manually, like some filthy animal. And since the site remained unchanged from the time I generated it during June of 2017, it was still working fine as of last week. Keep that in mind when you consider the architecture for your own blog. Once you’ve created it, static HTML is pretty much maintenance-free.

However, there’s that whole problem of generating it again. With new content. Yeah.

I had all the publishing software, content, configuration, etc. installed on my Mac originally. But since we all know I’m using a Windows PC now, I had to migrate everything. That meant just copying my blog posts, since they’re simply Markdown documents with YAML frontmatter. Easy. But my content management system is Nanoc, a Ruby-based generator. And while it’s reasonably cross-platform and mostly runs on Windows, it’s not officially supported there. More importantly, the scripts and other tools I built on top of Nanoc were kinda Unix-adjacent, if you know what I mean.

This is where the Windows Subsystem for Linux (WSL) came to the rescue. Normally, I use the Windows-specific version of ruby.exe for my other projects. But with WSL, you really need to apt-get ruby and shove that baby into Ubuntu as well. After that it was just a gem install of nanoc and kramdown, my Markdown parser of choice. At least, I thought that’s all I needed. Turns out the kramdown-parser-gfm Gem is required too, since I depend on GitHub-flavored Markdown and the kramdown developers removed support for it from the main project back in 2019. Surprise, surprise. But that’s what I get for not parsing any Markdown for so damn long.
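Putting those steps together, the install sequence amounts to something like this (a sketch; the --user-install flag is explained below):

```bash
# Inside WSL/Ubuntu: install Ruby, then the gems (locally, not with sudo).
sudo apt-get install ruby
gem install --user-install nanoc kramdown kramdown-parser-gfm
```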
By the way, for any of you also installing Ruby Gems in WSL or other Unix-like environments, don’t preface gem install with sudo. This is both unnecessary and unwise. It’s unnecessary because you can simply append --user-install to those installation commands. This will place them in ~/.gem, your local Gem directory. And it’s unwise because you don’t want them placed in your system-wide Gem directory. Doing so will delete, overwrite or otherwise fuck them up whenever you update ruby itself.

Of course, you’ll need to add that local Gem directory to your $PATH variable in ~/.bash_profile or whatever the equivalent is for your shell. Otherwise the shell can’t find those Gems. Duh. Here’s an example ~/.bash_profile showing how to do just that:

```bash
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

PATH=$HOME/.gem/ruby/2.7.0/bin:$PATH
```

Obviously, the version of ruby in that path will need to be adjusted if yours is different.

So after getting the correct Gems installed in the correct places, I then had to make a few changes to my Nanoc configuration files and various homebuilt Unix-y scripts. These were mostly just converting some hard-coded macOS-specific directory names to their Windows-specific equivalents. And then… it all worked. Flawlessly. Which means migration was not really much of a problem at all. Sure, thinking ahead on what I needed to do took a while, but the actual typing necessary to make it happen was just a matter of minutes. Kind of anticlimactic, really.

Of course, now I have to figure out what to write. Dammit.

over a year ago 39 votes
Waiting four years to exhale

Today is a good day. Joseph Robinette Biden Jr. has been inaugurated as our 46th president. And Kamala Devi Harris as our 49th vice president. While they cannot immediately undo the American carnage inflicted upon us by the previous administration, at least the vindictive malevolence has stopped now. Finally, and ironically, fulfilling the promise made four years ago by Donald Trump, deposed tyrant and career criminal. So let’s take a moment to unload that uncomfortable weight off our chests and shout in celebration. Fuck yeah! ‘Murica!

over a year ago 42 votes
Our long national nightmare is not over

I have faith in Joe Biden. And Kamala Harris. They’re good people. They and the team they’ve selected know what they’re doing. It’s obvious just listening to them. So I can barely wait for them to take over the White House tomorrow. Because real governance will be back in residence. And we need all of that to make it through this pandemic. Along with a crushing number of other crises. But even after Trump slithers back to Florida—with a few of his favorite swamp creatures in tow—his enablers in federal, state and local government aren’t going anywhere. And they don’t believe in accountability for him or themselves. Then there’s 75% of Republican voters out there who still think the election was stolen and that Biden is an illegitimate president. Which means The Big Lie isn’t going anywhere either. Don’t ever assume it’s just a small minority that suddenly developed a taste for bullshit. Worse, Trump might be without a platform but he’ll continue to incite his army of insurrectionists with more grievance and more lies. Not all of these people are silly cosplayers. There are enough with military training and weapons to cause significant damage. God only knows what they’ll do the next time. Possibly pose as real troops or law enforcement. If we’re lucky, Trump will just blow up the Republican Party instead of the whole country. But let’s not bet on being lucky. We need to be vigilant. This isn’t over yet.

over a year ago 42 votes

More in programming

An Analysis of Links From The White House’s “Wire” Website

A little while back I heard about the White House launching their version of a Drudge Report style website called White House Wire. According to Axios, a White House official said the site’s purpose was to serve as “a place for supporters of the president’s agenda to get the real news all in one place”. So a link blog, if you will. As a self-professed connoisseur of websites and link blogs, this got me thinking: “I wonder what kind of links they’re considering as ‘real news’ and what they’re linking to?”

So I decided to do a quick analysis using Quadratic, a programmable spreadsheet where you can write code and return values to a 2d interface of rows and columns. I wrote some JavaScript to:

- Fetch the HTML page at whitehouse.gov/wire
- Parse it with cheerio
- Select all the external links on the page
- Return a list of links and their headline text

In a few minutes I had a quick analysis of what kind of links were on the page. This immediately sparked my curiosity to know more about the meta information around the links, like:

- If you grouped all the links together, which sites get linked to the most?
- What kind of interesting data could you pull from the headlines they’re writing, like the most frequently used words?
- What if you did this analysis, but with snapshots of the website over time (rather than just the current moment)?

So I got to building. Quadratic today doesn’t yet have the ability for your spreadsheet to run in the background on a schedule and append data, so I had to look elsewhere for a little extra functionality. My mind went to val.town, which lets you write little scripts that can 1) run on a schedule (cron), 2) store information (blobs), and 3) retrieve stored information via their API. After a quick read of their docs, I figured out how to write a little script that’ll run once a day, scrape the site, and save the resulting HTML page in their key/value storage.

From there, I was back in Quadratic writing code to talk to val.town’s API and retrieve my HTML, parse it, and turn it into good, structured data. There were some things I had to do, like:

- Fine-tune how I select all the editorial links on the page from the source HTML (I didn’t want, for example, to include external links to the White House’s social pages which appear on every page). This required a little finessing, but I eventually got a collection of links that corresponded to what I was seeing on the page. A sketch of that step is below.
- Parse the links and pull out the top-level domains so I could group links by domain occurrence.
- Create charts and graphs to visualize the structured data I had created.
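Here’s roughly what that link-selection step looks like (an illustrative sketch, not the author’s actual code, assuming cheerio and a fetch-capable JavaScript runtime):

```javascript
// Hypothetical sketch: collect external links from the Wire page with cheerio,
// skipping links that point back to whitehouse.gov itself (e.g. social pages).
import * as cheerio from "cheerio";

async function fetchWireLinks() {
  const res = await fetch("https://www.whitehouse.gov/wire/");
  const html = await res.text();
  const $ = cheerio.load(html);

  return $("a[href^='http']")
    .toArray()
    .map((el) => ({ href: $(el).attr("href"), text: $(el).text().trim() }))
    .filter((link) => link.href && !link.href.includes("whitehouse.gov"));
}
```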
Selfish plug: Quadratic made this all super easy, as I could program in JavaScript and use third-party tools like tldts to do the analysis, all while visualizing my output on a 2d grid in real time, which made for a super fast feedback loop! Once I got all that done, I just had to sit back and wait for the HTML snapshots to begin accumulating!

It’s been about a month and a half since I started this, and I have about fifty days’ worth of data. The results? Here are the top 10 domains the White House Wire links to (by occurrence), from May 8 to June 24, 2025:

1. youtube.com (133)
2. foxnews.com (72)
3. thepostmillennial.com (67)
4. foxbusiness.com (66)
5. breitbart.com (64)
6. x.com (63)
7. reuters.com (51)
8. truthsocial.com (48)
9. nypost.com (47)
10. dailywire.com (36)

And here are the most commonly recurring words in the link headlines (originally a word cloud):

- “trump” (343)
- “president” (145)
- “us” (134)
- “big” (131)
- “bill” (127)
- “beautiful” (113)
- “trumps” (92)
- “one” (72)
- “million” (57)
- “house” (56)

The data and these graphs are all in my spreadsheet, so I can open it up whenever I want to see the latest data and re-run my script to pull the latest from val.town. In response to the new data that comes in, the spreadsheet automatically parses it, turns it into links, and updates the graphs. Cool! If you want to check out the spreadsheet — sorry! My API key for val.town is in it (“secrets management” is on the roadmap). But I created a duplicate where I inlined the data from the API (rather than the code which dynamically pulls it) which you can check out here at your convenience.

Email · Mastodon · Bluesky

23 hours ago 2 votes
Building a container orchestrator

Kubernetes is not exactly the most fun piece of technology around. Learning it isn’t easy, and learning the surrounding ecosystem is even harder. Even those who have managed to tame it are still afraid of getting paged by an etcd cluster corruption, a kubelet certificate expiration, or the DNS breaking down (and somehow, it’s always the DNS).

If you’re like me, the thought of making your own orchestrator has crossed your mind a few times. The result would, of course, be a magical piece of technology that is both simple to learn and wouldn’t break down every weekend. Sadly, the task seems daunting. Kubernetes is a multi-million-line project which has been worked on for more than a decade. The good thing is someone wrote a book that can serve as a good starting point to explore the idea of building our own container orchestrator. The book is “Build an Orchestrator in Go”, written by Tim Boring and published by Manning.

The tasks

The basic unit of our container orchestrator is called a “task”. A task represents a single container. It contains configuration data, like the container’s name, image and exposed ports. Most importantly, it indicates the container’s state, and so acts as a state machine. The state of a task can be Pending, Scheduled, Running, Completed or Failed. Each task will need to interact with a container runtime, through a client. In the book, we use Docker (aka Moby). The client will get its configuration from the task and then proceed to pull the image, create the container and start it. When it is time to finish the task, it will stop the container and remove it.
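Here’s what that might look like in Go (a minimal sketch with illustrative names, not necessarily the book’s types):

```go
package task

// State enumerates the lifecycle of a task, which acts as a state machine.
type State int

const (
	Pending State = iota
	Scheduled
	Running
	Completed
	Failed
)

// Task represents a single container and its configuration.
type Task struct {
	ID    string
	Name  string
	Image string
	Ports []string // exposed ports, e.g. "8080/tcp"
	State State
}
```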
The workers

Above the task, we have workers. Each machine in the cluster runs a worker. Workers expose an API through which they receive commands. Those commands are added to a queue to be processed asynchronously. When the queue gets processed, the worker will start or stop tasks using the container client. In addition to exposing the ability to start and stop tasks, the worker must be able to list all the tasks running on it. This demands keeping a task database in the worker’s memory and updating it every time a task changes state. The worker also needs to be able to provide information about its resources, like the available CPU and memory. The book suggests reading the /proc Linux file system using goprocinfo, but since I use a Mac, I used gopsutil.

The manager

On top of our cluster of workers, we have the manager. The manager also exposes an API, which allows us to start, stop, and list tasks on the cluster. Every time we want to create a new task, the manager will call a scheduler component. The scheduler has to list the workers that can accept more tasks, assign them a score by suitability and return the best one. When this is done, the manager will send the work to be done using the worker’s API. In the book, the author also suggests that the manager component should keep track of every task’s state by performing regular health checks. Health checks typically consist of querying an HTTP endpoint (e.g. /ready) and checking if it returns 200. In case a health check fails, the manager asks the worker to restart the task. I’m not sure I agree with this idea. It could lead to the manager and worker having differing opinions about a task’s state. It will also cause scaling issues: the manager’s workload will have to grow linearly as we add tasks, and not just when we add workers. As far as I know, in Kubernetes the kubelet (the equivalent of the worker here) is responsible for performing health checks.

The CLI

The last part of the project is to create a CLI to make sure our new orchestrator can be used without having to resort to firing up curl. The CLI needs to implement the following features:

- start a worker
- start a manager
- run a task in the cluster
- stop a task
- get the task status
- get the worker node status

Using cobra makes this part fairly straightforward. It lets you create very modern-feeling command-line apps, with properly formatted help commands and easy argument parsing.
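A skeleton of such a CLI might look like this (a sketch of mine; the command names are illustrative, not the book’s):

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "orchestrator"}

	// One subcommand per feature; "worker" shown here as an example.
	root.AddCommand(&cobra.Command{
		Use:   "worker",
		Short: "Start a worker node",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("starting worker...")
		},
	})

	if err := root.Execute(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```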
Once this is done, we almost have a fully functional orchestrator. We just need to add authentication. And maybe some kind of DaemonSet implementation would be nice. And a way to handle mounting volumes…

11 hours ago 2 votes
Digital hygiene: Emails

Email is your most important online account, so keep it clean.

7 hours ago 1 vote
AmigaGuide Reference Library

As I slowly but surely work towards the next release of my setcmd project for the Amiga (see the 68k branch for the gory details and my total noob-like C flailing around), I’ve made heavy use of documentation in the AmigaGuide format. Despite its age, it’s a great Amiga-native format and there’s a wealth of great information out there for things like the C API, as well as language guides and tutorials for tools like the Installer utility - and the AmigaGuide markup syntax itself. The only snag is, I had to have access to an Amiga (real or emulated), or install one of the various viewer programs on my laptops. Because, like many, I spend a lot of time in a web browser and occasionally want to check something on my mobile phone, this is less than convenient. Fortunately, there’s a great AmigaGuideJS online viewer which renders AmigaGuide format documents using Javascript. I’ve started building up a collection of useful developer guides and other files in my own reference library so that I can access this documentation whenever I’m not at my Amiga or am coding in my “modern” dev environment. It’s really just for my own personal use, but I’ll be adding to it whenever I come across a useful piece of documentation, so I hope it’s of some use to others as well!

And on a related note, I now have a “unified” code-base, so that SetCmd now builds and runs on 68k-based OS 3.x systems as well as OS 4.x PPC systems like my X5000. I need to:

- Tidy up my code and fix all the “TODO” stuff
- Update the Installer to run on OS 3.x systems
- Update the documentation
- Build a new package and upload to Aminet/OS4Depot

Hopefully I’ll get that done in the next month or so. With the pressures of work and family life (and my other hobbies), progress has been a lot slower these last few years, but I’m still really enjoying working on Amiga code and it’s great to have a fun personal project that’s there for me whenever I want to hack away at something for the sheer hell of it. I’ve learned a lot along the way and the AmigaOS is still an absolute joy to develop for. I even brought my X5000 to the most recent Kickstart Amiga User Group BBQ/meetup and had a fun day working on the code with fellow Amigans and enjoying some classic gaming & demos - there was also a MorphOS machine there, which I think will be my next target as the codebase is slowly becoming more portable. Just got to find some room in the “retro cave” now… This stuff is addictive :)

yesterday 4 votes
That boolean should probably be something else

One of the first types we learn about is the boolean. It's pretty natural to use, because boolean logic underpins much of modern computing. And yet, it's one of the types we should probably be using a lot less of. In almost every single instance when you use a boolean, it should be something else. The trick is figuring out what "something else" is. Doing this is worth the effort. It tells you a lot about your system, and it will improve your design (even if you end up using a boolean). There are a few possible types that come up often, hiding as booleans. Let's take a look at each of these, as well as the case where using a boolean does make sense. This isn't exhaustive—[1] there are surely other types that can make sense, too.

Datetimes

A lot of boolean data is representing a temporal event having happened. For example, websites often have you confirm your email. This may be stored as a boolean column, is_confirmed, in the database. It makes a lot of sense. But you're throwing away data: when the confirmation happened. You can instead store when the user confirmed their email in a nullable column. You can still get the same information by checking whether the column is null. But you also get richer data for other purposes. Maybe you find out down the road that there was a bug in your confirmation process. You can use these timestamps to check which users would be affected by that, based on when their confirmation was stored. This is the one I've seen discussed the most of all these. We run into it with almost every database we design, after all. You can detect it by asking if an action has to occur for the boolean to change values, and if values can only change one time. If you have both of these, then it really looks like a datetime being transformed into a boolean. Store the datetime!
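Here's a minimal sketch of the idea in Rust (my example, not the author's; the names are illustrative):

```rust
use std::time::SystemTime;

// Instead of `is_confirmed: bool`, store when the event happened.
struct User {
    email: String,
    // None = not confirmed; Some(t) = confirmed at time t.
    email_confirmed_at: Option<SystemTime>,
}

impl User {
    // The old boolean is still one method call away.
    fn is_confirmed(&self) -> bool {
        self.email_confirmed_at.is_some()
    }
}
```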
Enums

Much of the remaining boolean data indicates either what type something is, or its status. Is a user an admin or not? Check the is_admin column! Did that job fail? Check the failed column! Is the user allowed to take this action? Return a boolean for that, yes or no! These usually make more sense as an enum. Consider the admin case: this is really a user role, and you should have an enum for it. If it's a boolean, you're going to eventually need more columns, and you'll keep adding on other statuses. Oh, we had users and admins, but now we also need guest users and we need super-admins. With an enum, you can add those easily.

```rust
enum UserRole {
    User,
    Admin,
    Guest,
    SuperAdmin,
}
```

And then you can usually use your tooling to make sure that all the new cases are covered in your code. With a boolean, you have to add more booleans, and then you have to make sure you find all the places where the old booleans were used and make sure they handle these new cases, too. Enums help you avoid these bugs.

Job status is one that's pretty clearly an enum as well. If you use booleans, you'll have is_failed, is_started, is_queued, and on and on. Or you could just have one single field, status, which is an enum with the various statuses. (Note, though, that you probably do want timestamp fields for each of these events—but you're still best having the status stored explicitly as well.) This begins to resemble a state machine once you store the status, and it means that you can make much cleaner code and analyze things along state transition lines.

And it's not just for storing in a database, either. If you're checking a user's permissions, you often return a boolean for that.

```rust
fn check_permissions(user: User) -> bool {
    false // no one is allowed to do anything i guess
}
```

In this case, true means the user can do it and false means they can't. Usually. I think. But you can really start to have doubts here, and with any boolean, because the application logic meaning of the value cannot be inferred from the type. Instead, this can be represented as an enum, even when there are just two choices.

```rust
enum PermissionCheck {
    Allowed,
    NotPermitted { reason: String },
}
```

As a bonus, though, if you use an enum? You can end up with richer information, like returning a reason for a permission check failing. And you are safe for future expansions of the enum, just like with roles.

You can detect when something should be an enum by a proliferation of booleans which are mutually exclusive or depend on one another. You'll see multiple columns which are all changed at the same time. Or you'll see a boolean which is returned and used for a long time. It's important to use enums here to keep your program maintainable and understandable.

Conditionals

But when should we use a boolean? I've mainly run into one case where it makes sense: when you're (temporarily) storing the result of a conditional expression for evaluation. This is in some ways an optimization, either for the computer (reuse a variable[2]) or for the programmer (make it more comprehensible by giving a name to a big conditional) by storing an intermediate value. Here's a contrived example of using a boolean as an intermediate value:

```rust
fn calculate_user_data(user: User, records: RecordStore) {
    // this would be some nice long conditional,
    // but I don't have one. So variables it is!
    let user_can_do_this: bool = (a && b) && (c || !d);

    if user_can_do_this && records.ready() {
        // do the thing
    } else if user_can_do_this && records.in_progress() {
        // do another thing
    } else {
        // and something else!
    }
}
```

But even here in this contrived example, some enums would make more sense. I'd keep the boolean, probably, simply to give a name to what we're calculating. But the rest of it should be a match on an enum!
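Here's one way that could look (again a sketch of mine, with invented names, not the author's code):

```rust
// Replace the boolean pair `records.ready()` / `records.in_progress()`
// with a single enum that names each state explicitly.
enum RecordsPhase {
    Ready,
    InProgress,
    Idle,
}

fn handle(user_can_do_this: bool, phase: RecordsPhase) {
    match (user_can_do_this, phase) {
        (true, RecordsPhase::Ready) => { /* do the thing */ }
        (true, RecordsPhase::InProgress) => { /* do another thing */ }
        _ => { /* and something else! */ }
    }
}
```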
* * *

Sure, not every boolean should go away. There's probably no single rule in software design that is always true. But we should be paying a lot more attention to booleans. They're sneaky. They feel like they make sense for our data, but really they make sense for our logic. The data is usually something different underneath. By storing a boolean as our data, we're coupling that data tightly to our application logic. Instead, we should remain critical and ask what data the boolean depends on, and whether we should store that instead. It comes easier with practice. Really, all good design does. A little thinking up front saves you a lot of time in the long run.

[1] I know that using an em-dash is treated as a sign of using LLMs. LLMs are never used for my writing. I just really like em-dashes and have a dedicated key for them on one of my keyboard layers. ↩

[2] This one is probably best left to the compiler. ↩

yesterday 4 votes