Tom Scott is playing to win. The intro music made me wonder if the Wendover team watch c90dventures. At one point in the episode, Adam breaks out some Scatman. This is going to be a great season.
a month ago


More from anderegg.ca

Skylight and the AT Protocol

Since my last piece about Bluesky, I’ve been using the service a lot more. Just about everyone I followed on other services is there now, and it’s way more fun than late-stage Twitter ever was. Halifax is particularly into Bluesky, which reminds me of our local scene during the late-2000s/early-2010s Twitter era. That said, I still have reservations about the service, primarily around the whole decentralized/federated piece. The Bluesky team continues to work toward the goal of creating a decentralized and open protocol, but they’ve got quite a way to go.

Part of my fascination with Bluesky is due to its radical openness. There is no similar service that allows users unauthenticated access to the firehose, or that publishes in-depth stats around user behaviour and retention. I like watching numbers go up, so I enjoy following those stats and collecting some of my own.

A few days ago I noticed that the rate of user growth was accelerating, after dropping off steadily since late January. As of this writing, around 5 users a second are signing up for the service. The spike happened around the same time as tariff news was dropping, but that didn’t seem like a major driver. It turned out that the bigger cause was a new TikTok-like video sharing app called Skylight Social. I was a bit behind on tech news, so I missed when TechCrunch covered the app. It’s gathered more steam since then, and today is one of the highest days for new Bluesky signups since the US election.

As per the TechCrunch story, Skylight has been given some initial funding by Mark Cuban. It’s also selling itself as “decentralized” and “unbannable”. I’m happy for their success, especially given how unclear the TikTok situation is, but I continue to feel like everyone’s getting credit for work they haven’t done yet. Skylight Social goes out of its way to say that it’s powered by the AT Protocol.
They’re not lying, but I think it’s truer at the moment to say that the app is powered by Bluesky. In fact, the first thing you see when launching the app is a prompt to sign up for a “BlueSky” account [1] if you don’t already have one. The Bluesky team are working on better ways to handle this, but that work isn’t finished. At the moment, Skylight is not decentralized.

I decided to sign up and test the service out, but this wasn’t a smooth experience. I started by creating an App Password, and tried logging in using the “Continue with Bluesky” button. I used both my username and email address along with the app password, but both failed with a “wrong identifier or password” error. I saw a few other people having the same issue. It wasn’t until later that I tried the “Sign in to your PDS” route, which ended up working fine. The only issue: I don’t run my own PDS! I just use a custom domain name on top of Bluesky’s first-party PDS. In fact, it looks like third-party PDSs might not even be supported at the moment. Even if/when you can sign up with a third-party PDS, a PDS is just a data storage and authentication platform. You’re still relying on Skylight and Bluesky’s services to shuttle the data around and show it to you.

I’m not trying to beat up on Skylight specifically. I want more apps to be built with open standards, and I think TikTok could use a replacement, especially given that something is about to happen tomorrow. I honestly wish them luck! I just think the “decentralized” and “unbannable” copy on their website should currently be taken with a shaker or two of salt.

[1] I don’t know why, but seeing “BlueSky” camel-cased drives me nuts. Most of the Skylight Social marketing material doesn’t make this mistake, but I find it irritating to see during the first launch experience.

a week ago 12 votes
Maze Mice

I was late to the party, but I played Luck Be a Landlord last year and really enjoyed it. It’s a deckbuilder where you build combos by manipulating the icons in your custom slot machine. I linked above to the Steam page, but it’s on just about every platform — I played through it on iOS. TrampolineTales, the indie developer behind Luck Be a Landlord, released a demo for Maze Mice as part of Steam Next Fest. I got around to giving the demo a shot today and I loved it! The game is a slightly weird mix of Pac-Man and Vampire Survivors where time only progresses when you move. You pilot a mouse around a cardboard maze and collect XP gems to earn new weapons and passive effects. You’re being chased by cats and ghosts — the cats follow your path exactly around the maze, and the ghosts ignore walls as they move directly toward you. The time progression system is fun, and I found myself just squeezing through some tight spots by tapping the arrow keys. There’s some light strategy required to herd your foes away from the gems you want to collect. You can check out the Steam page for Maze Mice and give the demo a go on macOS or Windows. If you liked Vampire Survivors, I think you’ll have a good time with this as well.

a month ago 17 votes
Sequoia’s “Macintosh” screen saver and old Control Panels

I was recently on-site with a client and noticed that one person was using the new “Macintosh” screen saver that was added in macOS Sequoia. If you haven’t seen it, here’s a video of it in action. I knew that the screen saver had been released, but I was very happy with Relay’s St. Jude screen saver by James Thomson. Happily, it turns out that you can run two different screen savers on macOS if you have more than one monitor.

To get this working under macOS Sequoia, first make sure your monitors are set up as different “Spaces”. You can do this by heading to System Settings ➔ Desktop & Dock and, under the “Mission Control” section, making sure “Displays have separate Spaces” is enabled. Then you can head to System Settings ➔ Screen Saver, and turn off “Show on all Spaces” to the left of the preview thumbnail. Now you can use the drop-down below the thumbnail to choose which monitor you want to configure.

I chose to set up the Macintosh screen saver on my secondary monitor, which is in portrait orientation. I set it to the “Spectrum” colour setting (same as in the example video linked above), and also enabled “Show as wallpaper”. This has the nice effect of having the screen saver ease out of its animation and into the desktop wallpaper for that monitor when you wake your machine.

I switched to the Mac in 2002 with the release of Mac OS X Jaguar. Previously, I lived in the PC world and didn’t have much love for anything Apple-related. After I switched, I found myself curious about the earlier days of the Mac. This screen saver made me want to dig further into some of the details.

A nice effect of the screen saver and its wallpaper mode is the subtle shadowing on the chunky pixels. I’m assuming this is a nod to the Macintosh Portable and its early active-matrix LCD. The screen on the Portable had a distinctive “floating pixel” look. I love how this looks, though I think it would have been a pain to use day-to-day.
Colin Wirth produced an excellent video about the machine on his channel “This Does Not Compute”. You can see some close-ups of the effect starting around the 2:30 mark.

Watching the screen saver also had me curious about which version of the system software was being shown off. Turns out it’s more than one. Two tools I used to start looking into this were GUIdebook’s screenshots section and Infinite Mac, a site that lets you run fully-loaded versions of classic Macs in your browser.

I was most fascinated when the screen saver scrolled over versions of the Control Panel, especially the version from System 1. You can see this starting at 0:12 in the example video. This thing is a marvel of user interface design. Pretty much everything that can be configured about the original Macintosh is shown, without words, in this gem of a screen. Low End Mac has a good overview of what’s going on here, but I feel like it’s the sort of thing you could intuit if you played with it for a minute or two. One thing I learned while writing this is that you can click the menu bar in the desktop background preview to cycle through some presets!

I have only two nitpicks about this screen. First, it uses a strange XOR’d cross instead of the default mouse pointer. I’m assuming this was to make it easier to edit the desktop background, but it still feels like an odd choice. Second, the box that controls how many times the menu blinks is one pixel narrower than the two boxes below it. This would have driven me insane, and I’m amazed it still looked this way in System 2.1.

The Macintosh screen saver shows its time based on your system clock. I use 24-hour time, and that’s respected in the screen saver even when it’s showing the original Control Panel. This, ironically, is an anachronism: 24-hour time wasn’t an option until System 4. The screen saver also includes a version of the Control Panel from System 6. You can see this at around 9:08 in the example video.
This Control Panel shows its version as 3.3.3 in the bottom left. I believe this makes it System 6.0.7 or 6.0.8. You can run System 6.0.8 using an emulator on Archive.org. While this version allows for many more options, it’s far less playful. This general style — with the scrollable list of setting sections on the left — started with System 4. System 3 had the last all-in-one Control Panel layout. System 7 migrated to the Control Panels folder, where each panel is its own file, and you could easily add third-party panels to the system. Anyway, this has been far too many words about a screen saver released eight months ago. If you find this interesting, I encourage you to give the Macintosh screen saver a go. I also recommend poking around at old versions of classic Mac OS. I had a lot of fun digging into this!

a month ago 20 votes
Algorithms are breaking how we think

Today, Alec Watson posted a video titled “Algorithms are breaking how we think” on his YouTube channel, Technology Connections. The whole thing is excellent and very well argued. The main thrust is: people seem increasingly less mindful about the stuff they engage with. Watson argues that this is bad, and I agree.

A little while ago I watched a video by Hank Green called “$4.5M to Spray Alcoholic Rats with Bobcat Urine”. Green has been banging this drum for a while. He hits some of the same notes as Watson, but from a different angle.

This last month has been a lot, and I’ve withdrawn from news and social media quite a bit because of it. Part of this is because I’ve been very busy with work, but it’s also because I’ve felt overwhelmed. There are now a lot of bad-faith actors in positions of power. Part of their game plan is to spray a mass of obviously false, intellectually shallow, enraging nonsense into the world as quickly as possible. At a certain point the bullshit seeps in if you’re soaking in it.

The ability to control what you see next is powerful. I think it would be great if more people started being a bit more choosy about who they give that control to.

a month ago 33 votes

More in technology

Securing My Web Infrastructure

A few months ago, I very briefly mentioned that I've migrated all my web infrastructure off Cloudflare, as well as having built a custom web service to host it all. I call this new web service WebCentral, and I'd like to talk about some of the steps I've taken and lessons I've learned about how I secure my infrastructure.

Building a Threat Model

Before you can work to secure any service, you need to understand what your threat model is. This sounds more complicated than it really is; all you must do is consider what your risks are, how likely those risks are to be realized, and what damage or impact they could have. My websites don't store or process any user data, so I'm not terribly concerned about exfiltration; instead, my primary risks are unauthorized access to the server, exploitation of my code, and denial of service. Although the risks of denial of service are self-explanatory, the primary risk I see needing to protect against is malicious code running on the machine. Malicious actors are always looking for places to run their cryptocurrency miners or spam botnets, and falling victim to that is simply out of the question. While I can do my best to try and ensure I'm writing secure code, there's always the possibility that I or someone else makes a mistake that turns into an exploitable weakness. Therefore, my focus is on minimizing the potential impact should this occur.

VPS Security

The server that powers the very blog you're reading is a VPS (virtual private server) hosted by Azure. A VPS is just a fancy way to say a virtual machine that you have near-total control over. A secure web service must start with a secure server hosting it, so let's go into detail about all the steps I take to keep the server safe.

Network Security

Minimizing internet-facing exposure is critical for any VPS and can be one of the most effective ways to keep a machine safe.
My rule is simple: no open ports other than what is required for user traffic. In effect this means one thing: I cannot expose SSH to the internet. Keeping SSH closed protects me against a wide range of threats and also reduces the impact from scanners (more on them later). While Azure itself offers several ways to interact with a running VPS, I've chosen to disable most of those features and instead rely on my own.

I personally need to be able to access the machine over SSH, however, so how do I do that if SSH is blocked? I use a VPN. On my home network is a WireGuard VPN server, as well as a Dynamic DNS setup to work around my rotating residential IP address. The VM connects to the WireGuard VPN server on my home network and establishes a private tunnel between them. Since the VM is the one initiating the connection (acting as a client), no inbound port needs to be exposed. With this configuration I can effortlessly access and manage the machine without exposing SSH to the internet.

I'm also experimenting with, but have not yet fully rolled out, an outbound firewall. Outbound firewalls are far, far more difficult to set up than inbound ones, because you must first have a very good understanding of what and where your machine talks to.

OS-Level Security

Although the internet footprint of my VPS is restricted to only HTTP and HTTPS, I still must face the risk of someone exploiting a vulnerability in my code. I've taken a few steps to help minimize the impact of a compromise to my web application's security.

Automatic Updates

First is one of the most basic things everyone should be doing: automatic updates and reboots. Every day I download and install any updates and restart the VPS if needed. All of this is trivially easy with a cron job and built-in tooling. I use this script, run by a cron job:

#!/bin/bash
# Check for updates
dnf check-update > /dev/null
if [[ $? == 0 ]]; then
    # Nothing to update
    exit 0
fi

# Install updates
dnf -y update

# Check if a reboot is needed
dnf needs-restarting -r
if [[ $? == 1 ]]; then
    reboot
fi

Low-Privileged Accounts

Second, the actual process serving traffic does not run as root; instead, it runs as a dedicated service user without a shell and without sudo permission. Doing this limits what an attacker would be able to do, should they somehow gain the ability to execute shell code on the machine. A challenge with using non-root users for web services is a security restriction enforced by Linux: only the root user can bind to ports below 1024. Thankfully, SystemD services can be granted additional capabilities, one of which is the capability to bind to privileged ports. A single line in the service file is all it takes to overcome this challenge.

Filesystem Isolation

Lastly, the process also uses a virtualized root filesystem via chroot(). Chrooting is a method where the Linux kernel effectively lies to the process about where the root of the filesystem is, by prepending a path to every filesystem access. To chroot a process, you provide a directory that will act as the filesystem root for that process; if the process were to try to list the contents of the root (/), it would instead be listing the contents of the directory you specified. When configured properly, this has the effect of a filesystem allowlist: the process is only allowed to access data in the filesystem that you have specifically granted to it, and all of this without complicated permissions. It's important to note, however, that chrooting is often misunderstood as a more involved security control, because it's often incorrectly called a "jail" (referring to BSD's jails). Chrooting a process only isolates the filesystem from the process, nothing else.
In my specific use case it serves as an added layer of protection to guard against simple path traversal bugs. If an attacker were somehow able to trick the server into serving a sensitive file like /etc/passwd, it would fail because that file doesn't exist as far as the process knows. For those wondering, my SystemD service file looks like this:

[Unit]
Description=webcentral
After=syslog.target
After=network.target

[Service]
# I utilize systemd's heartbeat feature, sd-notify
Type=notify
NotifyAccess=main
WatchdogSec=5

# This is the directory that serves as the virtual root for the process
RootDirectory=/opt/webcentral/root

# The working directory for the process, this is automatically mapped to the
# virtual root so while the process sees this path, in actuality it would be
# /opt/webcentral/root/opt/webcentral
WorkingDirectory=/opt/webcentral

# Additional directories to pass through to the process
BindReadOnlyPaths=/etc/letsencrypt

# Remember all of the paths here are being mapped to the virtual root
ExecStart=/opt/webcentral/live/webcentral -d /opt/webcentral/data --production
ExecReload=/bin/kill -USR2 "$MAINPID"
TimeoutSec=5000
Restart=on-failure

# The low-privilege service user to run the process as
User=webcentral
Group=webcentral

# The additional capability to allow this process to bind to privileged ports
CapabilityBoundingSet=CAP_NET_BIND_SERVICE

[Install]
WantedBy=default.target

To quickly summarize: remote access (SSH) is blocked from the internet and a VPN must be used to reach the VM; updates are automatically installed; the web process runs as a low-privileged service account; and that same process is chroot()-ed to shield the VM's filesystem.

Service Availability

Now it's time to shift focus away from the VPS to the application itself. One of the biggest benefits, if not the biggest, of running my own entire web server is that I can deeply integrate security controls as I best see fit.
For this, I focus on detection and rejection of malicious clients. Being on the internet means you will be constantly exposed to malicious traffic; it's just a fact of life. The overwhelming majority of this traffic is just scanners: people going over every available IP address, looking for widely known and exploitable vulnerabilities, things like credentials left out in the open or web shells. Generally, these scanners are one-and-done; you'll see a small handful of requests from a single address and then never again. I find that trying to block or prevent these scanners is a bit of a fool's errand, but by tracking them over time I can begin to identify patterns and proactively block them early, saving resources. This matters not because of the one-and-done scanners, but because of the truly malicious ones, the ones that don't just send a handful of requests but hundreds, if not thousands, all at once. These scanners risk degrading the service for others by occupying server resources that would be better used for legitimate visitors. To detect malicious hosts, I employ some basic heuristics, focusing on the headers sent by the client and the paths they're trying to access.

Banned Paths

Having collected months of data from the traffic I served, I was able to identify some of the most common paths these scanners look for. One of the more common trends I see is scanning for weak and vulnerable WordPress configurations. WordPress is an incredibly common content management platform, which also makes it a prime target for attackers. Since I don't use WordPress (and perhaps you shouldn't either...), this made it a good candidate for scanner tracking. Therefore, any request where the path contains any of "wp-admin", "wp-content", "wp-includes", or "xmlrpc.php" is flagged as malicious and recorded.
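As a minimal sketch (not WebCentral's actual code, which isn't shown in this post), a banned-path check like the one just described could look like this in shell; the function name and patterns are my illustrative stand-ins:

```shell
#!/bin/bash
# Hypothetical sketch of the banned-path heuristic described above.
# Returns 0 (flagged) when the path contains a known WordPress probe.
is_banned_path() {
  case "$1" in
    *wp-admin*|*wp-content*|*wp-includes*|*xmlrpc.php*)
      return 0 ;;  # flag as malicious and record
    *)
      return 1 ;;  # nothing suspicious about this path
  esac
}
```

Any hit would then be recorded against the client's IP address for later blocking.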
User Agents

The User-Agent header is data sent by your web browser to the server that provides a vague description of the browser and the device it's running on. For example, my user agent as I wrote this post was:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0

All this really tells the server is that I'm on a Mac running macOS 15 and using Firefox 128. One of the most effective measures I've found to block malicious traffic early is to do some very basic filtering by user agent. The simplest and most effective measure thus far has been to block requests that have no User-Agent header at all. I also have a growing list of bogus user agent values, where the header looks valid, but if you check the version numbers of the system or browser, nothing lines up.

IP Firewall

When clients start getting a bit too rowdy, they get put into the naughty corner: temporarily blocked from connecting. Blocking happens during the TCP handshake, which saves resources by skipping TLS negotiation entirely. Addresses are blocked for 24 hours, and I've found this duration to be perfectly adequate, as most clients quickly give up and move on.

ASN Blocks

In some extreme situations, it's necessary to block entire networks and all of their addresses from accessing my server. This happens when a network provider, such as an ISP, VPN, or cloud provider, fails to do their job of preventing abuse of their services, and malicious actors find a home there. Cloud providers have a responsibility to ensure that if a malicious customer is using their service, their accounts are terminated and the service stops. For the most part, cloud providers do a decent enough job at that. Some providers, however, don't care at all, and quickly become popular amongst malicious actors. Cloudflare and Alibaba are two great examples. Because of the sheer volume of malicious traffic and total lack of valid user traffic, I block all of Cloudflare's and Alibaba's address space.
Specifically, I block AS13335 and AS45102.

Putting It All Together

Summarized, this is the path a request takes when connecting to my server: upon receiving a TCP connection, the client's IP address is checked against the blocked ASNs and the individually blocked addresses. If it matches, the request is quickly rejected. Otherwise, TLS is negotiated, allowing the server to see the details of the actual HTTP request. We then check whether the request is for a banned path or carries a banned user agent; if so, the IP is blocked for 24 hours and the request is rejected. Otherwise, the request is served as normal.

The Result

I feel this graph speaks for itself: it shows the number of requests that were blocked per minute. The bursts are the malicious scanners that I'm working to block, and all of these were successful defences against them. This will be a never-ending fight, but that's part of the fun, innit?
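To make the ordering of those checks concrete, here is a hedged shell sketch of the decision flow; the function name, the in-memory block table, and the 24-hour constant are all my illustrative stand-ins, not WebCentral's real internals (which do this in-process, before TLS):

```shell
#!/bin/bash
# Illustrative sketch of the request flow summarized above.
declare -A BLOCKED_UNTIL   # client IP -> epoch second when its block expires

handle_request() {
  local ip="$1" path="$2" ua="$3"
  local now; now=$(date +%s)

  # 1. Reject already-blocked addresses at the handshake, before any TLS work.
  if [[ -n "${BLOCKED_UNTIL[$ip]:-}" ]] && (( now < BLOCKED_UNTIL[$ip] )); then
    echo "reject"; return
  fi

  # 2. (TLS would be negotiated here, revealing the path and headers.)

  # 3. Banned path, or a missing User-Agent header: block the IP for 24 hours.
  case "$path" in
    *wp-admin*|*wp-content*|*wp-includes*|*xmlrpc.php*)
      BLOCKED_UNTIL[$ip]=$(( now + 86400 ))
      echo "block"; return ;;
  esac
  if [[ -z "$ua" ]]; then
    BLOCKED_UNTIL[$ip]=$(( now + 86400 ))
    echo "block"; return
  fi

  # 4. Nothing suspicious: serve the request as normal.
  echo "serve"
}
```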

6 hours ago 2 votes
Comics from January/February 1983 Issue of Today Mag

Time for some oldie levity.

21 hours ago 2 votes
Resistors, Johnson-Nyquist, nV/√Hz

A major source of noise in electronic circuits is easy to understand. The unit we use to measure it is not.

13 hours ago 2 votes
tinyML in Malawi: Empowering local communities through technology

Dr. David Cuartielles, co-founder of Arduino, recently participated in a workshop titled “TinyML for Sustainable Development” in Zomba, organized by the International Centre for Theoretical Physics (ICTP), a category 1 UNESCO institute, and the University of Malawi. Bringing together students, educators, and professionals from Malawi and neighboring countries, as well as international experts from Brazil, […] The post tinyML in Malawi: Empowering local communities through technology appeared first on Arduino Blog.

17 hours ago 2 votes
Odds and Ends #66: The winner of the 2040 US Presidential election is in space

Plus ultra-grim Nazi revisionism, why Kemi is right about Adolescence, and my Gladiators conspiracy theory

18 hours ago 2 votes