My Static Blog Publishing Setup and an Apology to RSS Subscribers

2022-03-21

In case you missed it, this website is now generated with pure HTML & CSS. Although, "generated" isn't the proper way to describe it anymore. "Written" is a better description. No more Markdown files. No more build scripts. No more Jekyll. Clean, simple, static HTML & CSS is my "CMS". More on that in a moment. First, I must apologize.

I'm Sorry, Dear RSS Subscribers

RSS feeds are tricky things for me personally. I always botch them with a site redesign or a restructure of my previous posts. Those of you subscribed via RSS were likely bombarded with post spam when I rebuilt this website. Sorry about that - I know how annoying that can be.

Fortunately, that all stops today. Moving forward, my RSS feed (Atom) will be edited manually with every new post I write. Each entry will feature the post title, post URL, and post date. No summaries or full inline content will be included (since that would involve a great amount...
over a year ago


More from Making software better without sacrificing user experience.

Bringing dwm Shortcuts to GNOME

Bringing dwm Shortcuts to GNOME

2023-11-02

The dwm window manager is my standard "go-to" for most of my personal laptop environments. For desktops with larger, higher-resolution monitors I tend to lean towards GNOME. The GNOME DE is fairly solid for my own purposes. This article isn't going to deep-dive into GNOME itself, but instead highlight some minor configuration changes I make to mimic a few dwm shortcuts.

For reference, I'm running GNOME 45.0 on Ubuntu 23.10.

Setting Up Fixed Workspaces

When I use dwm I tend to have a hard-set number of tags to cycle through (normally 4-5). Unfortunately, dynamic workspaces (i.e. tags) are the default in GNOME. My personal preference is a fixed number. We can achieve this by opening Settings > Multitasking and selecting "Fixed number of workspaces".

Screenshot of GNOME's Multitasking Settings GUI

Setting Our Keybindings

Now all that is left is to mimic the dwm keyboard shortcuts, in this case: ALT + $num for switching between workspaces and ALT + SHIFT + $num for moving windows across workspaces. These keyboard shortcuts can be altered under Settings > Keyboard > View and Customize Shortcuts > Navigation. You'll want to edit both "Switch to workspace n" and "Move window to workspace n".

Screenshot of GNOME's keyboard shortcut GUI: switch to workspace

Screenshot of GNOME's keyboard shortcut GUI: move window to workspace

That's it. You're free to include even more custom keyboard shortcuts (open web browser, lock screen, hibernate, etc.), but this is a solid starting point. Enjoy tweaking GNOME!
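If you'd rather skip the Settings GUI, the same changes can also be made from a terminal with gsettings. This is only a rough sketch of the same idea, not part of the original walkthrough - the schema keys below are GNOME's standard ones, but double-check them against your GNOME version:

    # Use a fixed number of workspaces instead of dynamic ones
    gsettings set org.gnome.mutter dynamic-workspaces false
    gsettings set org.gnome.desktop.wm.preferences num-workspaces 4

    # ALT+1 switches to workspace 1, ALT+SHIFT+1 moves the focused window there
    # (repeat for workspaces 2-4)
    gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-1 "['<Alt>1']"
    gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-1 "['<Shift><Alt>1']"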

a year ago 93 votes
The X220 ThinkPad is the Best Laptop in the World

The X220 ThinkPad is the Best Laptop in the World

2023-09-26

The X220 ThinkPad is the greatest laptop ever made and you're wrong if you think otherwise. No laptop hardware has since surpassed the nearly perfect build of the X220. New devices continue to get thinner and more fragile. Useful ports are constantly discarded for the sake of "design". Functionality is no longer important to manufacturers. Repairability is purposefully removed to prevent users from truly "owning" their hardware. It's a mess out there. But thank goodness I still have my older, second-hand X220.

Specs

Before I get into the details explaining why this laptop is the very best of its kind, let's first take a look at my machine's basic specifications:

CPU: Intel i7-2640M (4) @ 3.500GHz
GPU: Intel 2nd Generation Core Processor
Memory: 16GB DDR3
OS: Arch Linux / OpenBSD
Resolution: 1366x768

With that out of the way, I will break down my thoughts on the X220 into five major sections: build quality, available ports, the keyboard, battery life, and repairability.

Build Quality

The X220 (like most of Lenovo's older X/T models) is built like a tank. Although made mostly of plastic, the device is still better equipped to handle drops and mishandling than more fragile devices (such as the MacBook Air or Framework). This is all the more impressive since the X220 is actually composed of many smaller interconnected pieces (more on this later).

A good litmus test I perform on most laptops is the "corner test": grab the base corner of the laptop in its open state and see if the device displays any noticeable give or flex. In the X220's case, it feels rock solid. The base remains stiff and bobbing the device causes no movement of the opened screen. I'm aware that holding a laptop in this position is certainly not a normal use case, but knowing it is built well enough to do so speaks volumes about its construction.

The X220 is also not a lightweight laptop. This might be viewed as a negative by most users, but I actually prefer it. I often become too cautious and end up "babying" thinner laptops out of fear of breakage. A minor drop from even the smallest height will severely damage these lighter devices. I have no such worries with my X220.

As for the laptop's screen and resolution: your mileage may vary. I have zero issues with the default display or the smaller aspect ratio. I wrote about how I stopped using an external monitor, so I might be a little biased.

Overall, this laptop is a device you can snatch up off your desk, whip into your travel bag and be on your way. The rugged design and bulkier weight help put my mind at ease - which is something I can't say for newer laptop builds.

Ports

Ports. Ports everywhere. I don't think I need to explain how valuable it is to have functional ports on a laptop. Needing to carry around a bunch of dongles for ports that should already be on the device just seems silly. The X220 comes equipped with:

3 USB ports (one of them USB 3.0 on the i7 model)
DisplayPort
VGA
Ethernet
SD card reader
3.5mm jack
Ultrabay (SATA)
Wi-Fi hardware kill-switch

Incredibly versatile and ready for anything I throw at it!

Keyboard

The classic ThinkPad keyboards are simply that: classic. I don't think anyone could argue against these keyboards being the golden standard for laptops. It's commendable how Lenovo managed to package so much functionality into such a small amount of real estate. Most modern laptops lack helpful keys such as Print Screen, Home, End, and Screen Lock.

They're also an absolute joy to type on. The fact that so many people go out of their way to mod X230 ThinkPad models with X220 keyboards should tell you something... Why Lenovo moved away from these keyboards will always baffle me. (I know why they did it - I just think it's stupid.) Did I mention these classic keyboards come with the extremely useful TrackPoint as well?

Battery Life

Author's note: this section is very subjective. The age, quality, and size of the X220's battery can have a massive impact on benchmarks. I should also mention that I run very lightweight operating systems and use dwm as opposed to a heavier desktop environment. Just something to keep in mind.

The battery life on my own X220 is fantastic. I have a brand-new 9-cell that lasts for roughly 5-6 hours of daily work. Obviously these numbers don't come close to the incredible battery life of Apple's M1/M2 chip devices, but it's still quite competitive against other "newer" laptops on the market.

Although, even if the uptime were lower than 5-6 hours, you have the ability to carry extra batteries with you. The beauty of swapping out your laptop's battery without needing to open up the device itself is fantastic. Others might whine about the annoyance of carrying an extra battery in their travel bag, but doing so is completely optional. A core part of what makes the X220 so wonderful is user control and choice. The X220's battery is another great example of that.

Repairability

The ability to completely disassemble and replace almost everything on the X220 has to be one of its biggest advantages over newer laptops. No glue to rip apart. No special proprietary tools required. Just some screws and plastic snaps. If someone as monkey-brained as me can completely strip down this laptop and put it back together again without issue, then the hardware designers have done something right! Best of all, Lenovo provides a very detailed hardware maintenance manual to help guide you through the entire process.

My disassembled X220 when I was reapplying the CPU thermal paste.

Bonus Round: Price

I didn't list this in my initial section "breakdown" but it's something to consider. I purchased my X220 off eBay for $175 Canadian. While this machine came with an HDD instead of an SSD and only 8GB of total memory, that was still an incredible deal. I simply swapped out the hard drive for an SSD I had on hand, along with upgrading the DDR3 memory to its max of 16GB. Even if you needed to buy those components separately you would be hard-pressed to find such a good deal for a decent machine. Not to mention you would be helping to prevent more e-waste!

What More Can I Say?

Obviously the title and tone of this article are all in good fun. Try not to take things so seriously! But I still personally believe the X220 is one of, if not the best laptop in the world.

a year ago 120 votes
Installing Older Versions of MongoDB on Arch Linux

Installing Older Versions of MongoDB on Arch Linux

2023-09-11

I've recently been using Arch Linux for my main work environment on my ThinkPad X260. It's been great. As someone who is constantly drawn to minimalist operating systems such as Alpine or OpenBSD, it's nice to use something like Arch that boasts that same minimalist approach but with greater documentation/support.

Another major reason for the switch was the need to run older versions of "services" locally. Most people would simply suggest using Docker or vmm, but I personally run projects in self-contained, personalized directories on my system itself. I am aware of the irony in that statement... but that's just my personal preference.

So I thought I would share my process of setting up an older version of MongoDB (3.4 to be precise) on Arch Linux.

AUR to the Rescue

You will need to target the specific version of MongoDB using the very awesome AUR packages:

    yay -S mongodb34-bin

Follow the instructions and you'll be good to go. Don't forget to create the /data/db directory and give it proper permissions:

    mkdir -p /data/db/
    chmod -R 777 /data/db

What About My "Tools"?

If you plan to use MongoDB, then you most likely want to utilize the core database tools (restore, dump, etc.). The problem is you can't use the default mongodb-tools package when trying to work with older versions of MongoDB itself. The package will complain about conflicts and ask you to override your existing version. This is not what we want. So, you'll have to build from source locally:

    git clone https://github.com/mongodb/mongo-tools
    cd mongo-tools
    ./make build

Then you'll need to copy the built executables into the proper directory in order to use them from the terminal:

    cp bin/* /usr/local/bin/

And that's it! Now you can run mongod directly or use systemctl to enable it by default. Hopefully this helps anyone else curious about running older (or even outdated!) versions of MongoDB.
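For reference, running mongod directly or enabling it with systemctl would look roughly like this. The --dbpath flag is standard mongod, but the exact systemd unit name depends on what mongodb34-bin actually ships, so treat that part as an assumption and check it first:

    # Run the server in the foreground against the data directory created above
    mongod --dbpath /data/db

    # Or let systemd manage it (unit name assumed; confirm with: pacman -Ql mongodb34-bin | grep service)
    sudo systemctl enable --now mongodb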

a year ago 59 votes
Converting HEIF Images with macOS Automator

Converting HEIF Images with macOS Automator

2023-07-21

Often, when you save or export photos from iOS to iCloud, they render themselves into heif or heic formats. Both macOS and iOS have no problem working with these formats, but a lot of software programs will not even recognize these filetypes. The obvious step would just be to convert them via an application or online service, right? Not so fast! Wouldn't it be much cleaner if we could simply right-click our heif or heic files and convert them directly in Finder? Well, I've got some good news for you...

Basic Requirements

You will need to have Homebrew installed
You will need to install the libheif package through Homebrew: brew install libheif

Creating our custom Automator script

For this example script we are going to convert the image to JPG format. You can freely change this to whatever format you wish (PNG, TIFF, etc.). We're just keeping things basic for this tutorial. Don't worry if you've never worked with Automator before, because setting things up is incredibly simple.

1. Open the macOS Automator from the Applications folder
2. Select Quick Action from the first prompt
3. Set "Workflow receives current" to image files
4. Set the label "in" to Finder
5. From the left pane, select "Library > Utilities"
6. From the presented choices in the next pane, drag and drop Run Shell Script into the far right pane
7. Set the area "Pass input" to as arguments
8. Enter the following code below as your script and type ⌘-S to save (name it something like "Convert HEIC/HEIF to JPG")

    for f in "$@"
    do
        /opt/homebrew/bin/heif-convert "$f" "${f%.*}.jpg"
    done

Making Edits

If you ever have the need to edit this script (for example, changing the default format to png), you will need to navigate to your ~/Library/Services folder and open your custom heif Quick Action in the Automator application. Simple as that.

Happy converting! If you're interested, I also have some other Automator scripts available:

Batch Converting Images to webp with macOS Automator
Convert Files to HTML with macOS Automator Quick Actions
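And if you just want to sanity-check that libheif is working before building the Quick Action, heif-convert can be run straight from a terminal. A quick sketch - the filenames here are only placeholders:

    # Convert a single image (output lands next to the original)
    /opt/homebrew/bin/heif-convert IMG_0001.heic IMG_0001.jpg

    # Or convert every HEIC file in the current folder
    for f in *.heic; do
        /opt/homebrew/bin/heif-convert "$f" "${f%.*}.jpg"
    done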

a year ago 33 votes
Blogging for 7 Years

Blogging for 7 Years

2023-06-24

My first public article was posted on June 28th 2016. That was seven years ago. In that time, quite a lot has changed in my life both personally and professionally. So, I figured it would be interesting to reflect on these years and document it for my own personal records. My hope is that this is something I could start doing every 5 or 10 years (if I can keep going that long!). This way, my blog also serves as a "time capsule" or museum of the past...

Fun Facts

This Blog:

I originally started blogging on bradleytaunt.com using WordPress, but since then I have changed both my main domain and blog infrastructure multiple times. At a glance I have used:

Jekyll
Hugo
Blot
Static HTML/CSS
PHPetite
Shinobi
pblog
barf (currently using!)

Personal:

As with anyone over time, the personal side of my life has seen the biggest updates:

Married the love of my life (after knowing each other for ~14 years!)
Moved out into rural Ontario for some peace and quiet
Had three wonderful kids with said wife (two boys and a girl)
Started noticing grey sprinkles in my stubble (I guess I can officially call myself a "grey beard"?)

Professionally:

Pivoted heavily into UX research and design for a handful of years (after working mostly with web front-ends)
Recently switched back into a more fullstack development role to challenge myself and learn more

Nothing Special

This post isn't anything ground-breaking, but for me it's nice to reflect on the time passed and remember how much can change in such little time. Hopefully I'll be right back here in another 7 years and maybe you'll still be reading along with me!

over a year ago 56 votes

More in programming

Digital hygiene: Emails

Email is your most important online account, so keep it clean.

11 hours ago 3 votes
Building a container orchestrator

Kubernetes is not exactly the most fun piece of technology around. Learning it isn't easy, and learning the surrounding ecosystem is even harder. Even those who have managed to tame it are still afraid of getting paged by an ETCD cluster corruption, a Kubelet certificate expiration, or the DNS breaking down (and somehow, it's always the DNS).

If you're like me, the thought of making your own orchestrator has crossed your mind a few times. The result would, of course, be a magical piece of technology that is both simple to learn and wouldn't break down every weekend. Sadly, the task seems daunting. Kubernetes is a project with millions of lines of code that has been worked on for more than a decade.

The good thing is someone wrote a book that can serve as a good starting point to explore the idea of building our own container orchestrator. This book is named "Build an Orchestrator in Go", written by Tim Boring, published by Manning.

The tasks

The basic unit of our container orchestrator is called a "task". A task represents a single container. It contains configuration data, like the container's name, image and exposed ports. Most importantly, it indicates the container state, and so acts as a state machine. The state of a task can be Pending, Scheduled, Running, Completed or Failed.

Each task will need to interact with a container runtime, through a client. In the book, we use Docker (aka Moby). The client will get its configuration from the task and then proceed to pull the image, create the container and start it. When it is time to finish the task, it will stop the container and remove it.

The workers

Above the task, we have workers. Each machine in the cluster runs a worker. Workers expose an API through which they receive commands. Those commands are added to a queue to be processed asynchronously. When the queue gets processed, the worker will start or stop tasks using the container client.

In addition to exposing the ability to start and stop tasks, the worker must be able to list all the tasks running on it. This demands keeping a task database in the worker's memory and updating it every time a task changes state. The worker also needs to be able to provide information about its resources, like the available CPU and memory. The book suggests reading the /proc Linux file system using goprocinfo, but since I use a Mac, I used gopsutil.

The manager

On top of our cluster of workers, we have the manager. The manager also exposes an API, which allows us to start, stop, and list tasks on the cluster. Every time we want to create a new task, the manager will call a scheduler component. The scheduler has to list the workers that can accept more tasks, assign them a score by suitability and return the best one. When this is done, the manager will send the work to be done using the worker's API.

In the book, the author also suggests that the manager component should keep track of every task's state by performing regular health checks. Health checks typically consist of querying an HTTP endpoint (e.g. /ready) and checking if it returns 200. In case a health check fails, the manager asks the worker to restart the task. I'm not sure I agree with this idea. It could lead to the manager and worker having differing opinions about a task's state. It will also cause scaling issues: the manager's workload will have to grow linearly as we add tasks, and not just when we add workers. As far as I know, in Kubernetes, the Kubelet (the equivalent of the worker here) is responsible for performing health checks.

The CLI

The last part of the project is to create a CLI to make sure our new orchestrator can be used without having to resort to firing up curl. The CLI needs to implement the following features:

start a worker
start a manager
run a task in the cluster
stop a task
get the task status
get the worker node status

Using cobra makes this part fairly straightforward. It lets you create very modern-feeling command-line apps, with properly formatted help commands and easy argument parsing.

Once this is done, we almost have a fully functional orchestrator. We just need to add authentication. And maybe some kind of DaemonSet implementation would be nice. And a way to handle mounting volumes…
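For a taste of what talking to the worker API directly feels like before the CLI exists, here is a purely hypothetical curl session. The port and routes are my assumption of a typical REST layout for such a worker, not necessarily the book's exact endpoints:

    # List the tasks a worker currently knows about (port and path are assumptions)
    curl -s http://localhost:5555/tasks

    # Submit a new task; the JSON shape is illustrative only
    curl -s -X POST http://localhost:5555/tasks \
      -H "Content-Type: application/json" \
      -d '{"Name": "test-task", "Image": "nginx:latest"}'

    # Ask the worker to stop a task by ID
    curl -s -X DELETE http://localhost:5555/tasks/<task-id>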

15 hours ago 2 votes
Bugs I fixed in SumatraPDF

"The unexamined life is not worth living," said Socrates. I don't know about that, but to become a better, faster, more productive programmer it pays to examine what makes you un-productive.

Fixing bugs is one of those un-productive activities. You have to fix them, but it would be even better if you didn't write them in the first place. Therefore it's good to reflect after fixing a bug. Why did the bug happen? Could I have done something to not write the bug in the first place? If I did write the bug, could I do something to diagnose or fix it faster?

This seems like a great idea that I wasn't doing. Until now. Here's a random selection of bugs I found and fixed in SumatraPDF, with some reflections.

SumatraPDF is a C++ win32 Windows app. It's a small, fast, open-source, multi-format PDF/eBook/Comic Book reader. To keep the app small and fast I generally avoid using other people's code. As a result most code is mine and most bugs are mine. Let's reflect on those bugs.

TabWidth doesn't work

A user reported that the TabWidth advanced setting doesn't work in 3.5.2 but worked in 3.4.6. I looked at the code and indeed: the setting was not used anywhere. The fix was to use it.

Why did the bug happen? It was a refactoring. I heavily refactored the tabs control. Somehow during the rewrite I forgot to use the advanced setting when creating the new tabs control, even though I did write the code to support it in the control. I guess you could call it sloppiness.

How could I not write the bug? I could review the changes more carefully. There's no one else working on this project so there's no one else to do additional code reviews. I typically do a code review by myself with webdiff, but let's face it: reviewing changes right after writing them is the worst possible time. I'm biased to think that the code I just wrote is correct and I'm often mentally exhausted. Maybe I should adopt a process where I review changes made yesterday with fresh, un-tired eyes?

How could I detect the bug earlier? The 3.5.2 release happened over a year ago. Could I have found it sooner? I knew I was refactoring tabs code. I knew I have a setting for changing the look of tabs. If I connected the dots at the time, I could have tested if the setting still works. I don't make releases too often. I could do more testing before each release and at the very least verify all advanced settings work as expected.

The real problem: in retrospect, I shouldn't have implemented that feature at all. I like Sumatra's customizability and I think it's a non-trivial contributor to its popularity, but it took over a year for someone to notice and report that particular bug. It's clear it's not a frequently used feature. I implemented it because someone asked and it was easy. I should have said no to that particular request.

Fix printing crash by correctly ref-counting engine

Bugs can crash your program. Users rarely report crashes even though I did put effort into making it easy. When a crash happens I have a crash handler that saves the diagnostic info to a file, and I show a message box asking users to report the crash; with a press of a button I launch a notepad with the diagnostic info and a browser with a page describing how to submit that as a GitHub issue. The other button is to ignore my pleas for help. Most users overwhelmingly choose to ignore. I know that because I also have a crash reporting system that sends me a crash report. I get thousands of crash reports for every crash reported by a user.

Therefore I'm convinced that the single most impactful thing for making software that doesn't crash is to have a crash reporting system, look at the crashes and fix them. This is not a perfect system because all I have is a call stack of the crashed thread, info about the computer and very limited logs. Nevertheless, sometimes all it takes is a look at the crash call stack and inspection of the code.

I saw a crash in printing code which I fixed after some code inspection. The clue was that I was accessing a seemingly destroyed instance of Engine. That was easy to diagnose because I had just refactored the code to add ref-counting to Engine, so it was easy to connect the dots.

I'm not a fan of ref-counting. It's easy to mess up ref-counting (add too many refs, which leads to memory leaks, or too many releases, which leads to premature destruction). I've seen codebases where developers were crazy in love with ref-counting: every little thing, even objects with obvious lifetimes. In contrast, this was the first ref-counted object in over 100k loc of SumatraPDF code. It was necessary in this case because I would potentially hand off the object to a printing thread, so its lifetime could outlast the lifetime of the window for which it was created.

How could I not write the bug? It's another case of sloppiness, but I don't feel bad. I think the bug existed there before the refactoring, and this is the hard part about programming: complex interactions between parts of the program that are distant in space and time. Again, more time spent reviewing the change could have prevented it. As a bonus, I managed to simplify the logic a bit. Writing software is an incremental process. I could feel bad about not writing the perfect code from the beginning, but I choose to enjoy the process of finding and implementing improvements. Making the code and the program better over time.

Tracking down a chm thumbnail crash

Not all crashes can be fixed given the information in a crash report. I saw a report with a crash related to creating a thumbnail. I couldn't figure out why it crashes, but I could add more logging to help figure out the issue if it happens again. If it doesn't happen again, then I win. If it does happen again, I will have more context in the log to help me figure out the issue. Update: I did fix the crash.

Fix crash when viewing favorites menu

A user reported a crash. I was able to reproduce the crash and fix it. This is the best case scenario: a bug report with instructions to reproduce a crash. If I can reproduce the crash when running a debug build under the debugger, it's typically very easy to figure out the problem and fix it.

In this case I had recently implemented an improved version of the StrVec (vector of strings) class. It had a compatibility bug compared to the previous implementation in that StrVec::InsertAt(0) into an empty vector would crash. Arguably it's not a correct usage, but existing code used it, so I've added support to InsertAt() for inserting at the end of the vector.

How could I not write the bug? I should have written a unit test (which I did in the fix). I don't blindly advocate unit tests. Writing tests has a productivity cost, but for such low-level, relatively tricky code, unit tests are good. I don't feel too bad about it. I did write lots of tests for StrVec, and arguably this particular usage of InsertAt() was borderline correct, so it didn't occur to me to test that condition.

Use after free

I saw a crash in crash reports, close to DeleteThumbnailForFile(). I looked at the code:

    if (!fs->favorites->IsEmpty()) {
        // only hide documents with favorites
        gFileHistory.MarkFileInexistent(fs->filePath, true);
    } else {
        gFileHistory.Remove(fs);
        DeleteDisplayState(fs);
    }
    DeleteThumbnailForFile(fs->filePath);

I immediately spotted the suspicious part: we call DeleteDisplayState(fs) and then might use fs->filePath. I looked at DeleteDisplayState and it does, in fact, delete fs and all its data, including filePath. So we use freed data in a classic use-after-free bug. The fix was simple: make a copy of fs->filePath before calling DeleteDisplayState and use that.

How could I not write the bug? Same story: be more careful when reviewing the changes, test the changes more. If I fail that, crash reporting saves my ass. The bug didn't last more than a few days and affected only one user. I immediately fixed it and published an update.

Summary of being more productive and writing bug-free software

If many people use your software, a crash reporting system is a must. Crashes happen and few of them are reported by users. Code reviews can catch bugs, but they are also costly, and reviewing your own code right after you write it is not a good time. You're tired and biased to think your code is correct. Maybe reviewing the code a day after, with fresh eyes, would be better. I don't know, I haven't tried it.

22 hours ago 1 votes
An Analysis of Links From The White House’s “Wire” Website

A little while back I heard about the White House launching their version of a Drudge Report style website called White House Wire. According to Axios, a White House official said the site's purpose was to serve as "a place for supporters of the president's agenda to get the real news all in one place". So a link blog, if you will.

As a self-professed connoisseur of websites and link blogs, this got me thinking: "I wonder what kind of links they're considering as 'real news' and what they're linking to?" So I decided to do a quick analysis using Quadratic, a programmable spreadsheet where you can write code and return values to a 2d interface of rows and columns.

I wrote some JavaScript to:

Fetch the HTML page at whitehouse.gov/wire
Parse it with cheerio
Select all the external links on the page
Return a list of links and their headline text

In a few minutes I had a quick analysis of what kind of links were on the page. This immediately sparked my curiosity to know more about the meta information around the links, like:

If you grouped all the links together, which sites get linked to the most?
What kind of interesting data could you pull from the headlines they're writing, like the most frequently used words?
What if you did this analysis, but with snapshots of the website over time (rather than just the current moment)?

So I got to building.

Quadratic today doesn't yet have the ability for your spreadsheet to run in the background on a schedule and append data. So I had to look elsewhere for a little extra functionality. My mind went to val.town, which lets you write little scripts that can 1) run on a schedule (cron), 2) store information (blobs), and 3) retrieve stored information via their API.

After a quick read of their docs, I figured out how to write a little script that'll run once a day, scrape the site, and save the resulting HTML page in their key/value storage. From there, I was back in Quadratic writing code to talk to val.town's API and retrieve my HTML, parse it, and turn it into good, structured data. There were some things I had to do, like:

Fine-tune how I select all the editorial links on the page from the source HTML (I didn't want, for example, to include external links to the White House's social pages which appear on every page). This required a little finessing, but I eventually got a collection of links that corresponded to what I was seeing on the page.
Parse the links and pull out the top-level domains so I could group links by domain occurrence.
Create charts and graphs to visualize the structured data I had created.

Selfish plug: Quadratic made this all super easy, as I could program in JavaScript and use third-party tools like tldts to do the analysis, all while visualizing my output on a 2d grid in real-time, which made for a super fast feedback loop!

Once I got all that done, I just had to sit back and wait for the HTML snapshots to begin accumulating! It's been about a month and a half since I started this and I have about fifty days' worth of data. The results?

Here's the top 10 domains that the White House Wire links to (by occurrence), from May 8 to June 24, 2025:

youtube.com (133)
foxnews.com (72)
thepostmillennial.com (67)
foxbusiness.com (66)
breitbart.com (64)
x.com (63)
reuters.com (51)
truthsocial.com (48)
nypost.com (47)
dailywire.com (36)

From the links, here's a word cloud of the most commonly recurring words in the link headlines:

"trump" (343)
"president" (145)
"us" (134)
"big" (131)
"bill" (127)
"beautiful" (113)
"trumps" (92)
"one" (72)
"million" (57)
"house" (56)

The data and these graphs are all in my spreadsheet, so I can open it up whenever I want to see the latest data and re-run my script to pull the latest from val.town. In response to the new data that comes in, the spreadsheet automatically parses it, turns it into links, and updates the graphs. Cool!

If you want to check out the spreadsheet - sorry! My API key for val.town is in it ("secrets management" is on the roadmap). But I created a duplicate where I inlined the data from the API (rather than the code which dynamically pulls it) which you can check out here at your convenience.
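For anyone who wants a rough, no-spreadsheet approximation of the link-extraction step, something like the shell pipeline below gives a quick domain count. This is only a sketch of the same idea, not the Quadratic/cheerio code described above, and the filtering is far cruder than the fine-tuning mentioned:

    # Pull the page, extract external link hrefs, and count linked domains
    curl -s https://www.whitehouse.gov/wire/ \
      | grep -Eo 'href="https?://[^"]+"' \
      | sed -E 's/^href="//; s/"$//' \
      | grep -v 'whitehouse\.gov' \
      | awk -F/ '{print $3}' \
      | sort | uniq -c | sort -rn | head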

yesterday 2 votes
AmigaGuide Reference Library

As I slowly but surely work towards the next release of my setcmd project for the Amiga (see the 68k branch for the gory details and my total noob-like C flailing around), I've made heavy use of documentation in the AmigaGuide format. Despite its age, it's a great Amiga-native format and there's a wealth of great information out there for things like the C API, as well as language guides and tutorials for tools like the Installer utility - and the AmigaGuide markup syntax itself.

The only snag is, I had to have access to an Amiga (real or emulated), or install one of the various viewer programs on my laptops. Because, like many, I spend a lot of time in a web browser and occasionally want to check something on my mobile phone, this is less than convenient. Fortunately, there's a great AmigaGuideJS online viewer which renders AmigaGuide format documents using Javascript.

I've started building up a collection of useful developer guides and other files in my own reference library so that I can access this documentation whenever I'm not at my Amiga or am coding in my "modern" dev environment. It's really just for my own personal use, but I'll be adding to it whenever I come across a useful piece of documentation, so I hope it's of some use to others as well!

And on a related note, I now have a "unified" code-base so that SetCmd now builds and runs on 68k-based OS 3.x systems as well as OS 4.x PPC systems like my X5000. I need to:

Tidy up my code and fix all the "TODO" stuff
Update the Installer to run on OS 3.x systems
Update the documentation
Build a new package and upload to Aminet/OS4Depot

Hopefully I'll get that done in the next month or so. With the pressures of work and family life (and my other hobbies), progress has been a lot slower these last few years, but I'm still really enjoying working on Amiga code and it's great to have a fun personal project that's there for me whenever I want to hack away at something for the sheer hell of it. I've learned a lot along the way and the AmigaOS is still an absolute joy to develop for.

I even brought my X5000 to the most recent Kickstart Amiga User Group BBQ/meetup and had a fun day working on the code with fellow Amigans and enjoying some classic gaming & demos - there was also a MorphOS machine there, which I think will be my next target as the codebase is slowly becoming more portable. Just got to find some room in the "retro cave" now… This stuff is addictive :)

yesterday 4 votes