Installing OpenBSD on Linveo KVM VPS
2024-10-21

I recently came across an amazing deal for a VPS on Linveo. For just $15 a year, the "AMD KVM 1GB" plan provides:

1024 MB RAM
1 CPU Core
25 GB NVMe SSD
2000 GB Bandwidth

It's a pretty great deal and I suggest you look into it if you're interested! But this post is more focused on setting up OpenBSD via the custom ISO option in the KVM dashboard. Linveo already provides several Linux OS options, along with FreeBSD by default (which is great!). Since there is no OpenBSD template, we need to do things manually.

Getting Started

Once you have your initial VPS up and running, log in to the main dashboard and navigate to the Media tab. Under CD/DVD-ROM, click "Custom CD/DVD" and enter the direct link to install76.iso:

https://cdn.openbsd.org/pub/OpenBSD/7.6/amd64/install76.iso

The "Media" tab of the Linveo Dashboard. Use the official ISO link and set the Boot Order to CD/DVD.

Select "Insert", then set your Boot Order to CD/DVD and click "Apply". Once complete, restart your server.

Installing via VNC

With the server rebooting, jump over to Options and click "Browser VNC" to launch the web-based VNC client. From here we will boot into the OpenBSD installer and get things going!

Follow the installer as you normally would when installing OpenBSD (if you're unsure, I have a step-by-step walkthrough) until you reach the IPv4 selection. At this point you will want to input your server's IPv4 and IPv6 addresses, found under the Network section of your dashboard. Next, set the IPv6 route to the first default option listed (not "none"). After that is complete, choose cd0 for your install media (don't worry about http yet).

Continue with the rest of the install (create users if desired, etc.) until it tells you to reboot the machine. Go back to the Linveo dashboard, switch your Boot Order back to "Harddrive", and reboot the machine directly.

Booting into OpenBSD

Load into the VNC client again. If you did everything correctly you should be greeted with the OpenBSD login prompt. There are a few tweaks we still need to make, so log in as the root user.

Remember how we installed our sets directly from cd0? We'll want to change that. Since we are running OpenBSD "virtually" through KVM, our target network interface will be vio0. Edit the /etc/hostname.vio0 file and add the following:

dhcp
!route add default <your_gateway_ip>

The <your_gateway_ip> can be found under the Network tab of your dashboard.

The next file we need to tweak is /etc/resolv.conf. Add the following to it:

nameserver 8.8.8.8
nameserver 1.1.1.1

These nameservers are based on your selected IPs under the Resolvers section of Network in the Linveo dashboard. Change these as you see fit, so long as they match what you place in the resolv.conf file.

Finally, the last file we need to edit is /etc/pf.conf. Like the others, add the following:

pass out proto { tcp, udp } from any to any port 53

Final Stretch

Now just reboot the server. Log back in as your desired user and everything should be working as expected! You can perform a simple test to check:

ping openbsd.org

This should work, meaning your network is up and running! (A few extra checks are sketched just below, in case it doesn't.) Now you're free to enjoy the beauty that is OpenBSD.
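If that ping fails, a quick look at the interface, default route, and DNS usually narrows things down. This is a generic troubleshooting sketch, nothing Linveo-specific; it assumes the vio0 interface and gateway from the config above:

# generic post-install checks; adjust interface and addresses to your own setup
ifconfig vio0            # should show "status: active" and your assigned address
netstat -rn              # the default route should point at <your_gateway_ip>
cat /etc/resolv.conf     # confirm the nameserver lines survived the reboot
ping -c 3 8.8.8.8        # tests routing without DNS
ping -c 3 openbsd.org    # tests routing plus DNS resolution

If pinging the raw IP works but the hostname does not, the problem is with resolv.conf or the pf rule for port 53 rather than the route.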
Vertical Tabs in Safari
2024-09-26

I use Firefox as my main browser (specifically the Nightly build), which has vertical tabs built in. There are instances where I need to use Safari, such as debugging or testing on iOS devices, and in those cases I prefer to have a similar experience to that of Firefox. Luckily, Apple has finally made it fairly straightforward to do so.

Click the Sidebar icon in the top left of the Safari browser.
Right click and group your current tab(s). (I normally name mine something uninspired like "My Tabs" or simply "Tabs".)
For an extra "clean look", remove the horizontal tabs by right clicking the top bar, selecting Customize Toolbar, and dragging the tabs out.

When everything is set properly, you'll have something that looks like this:

One minor drawback is not having access to a direct URL input, since we have removed the horizontal tab bar altogether. Using a set of curated bookmarks could help avoid the need for direct input, along with setting our new tab page to DuckDuckGo or any other search engine.
Build and Deploy Websites Automatically with Git
2024-09-20

I recently began the process of setting up my self-hosted[1] cgit server as my main code forge. Updating repos via cgit on NearlyFreeSpeech on its own has been simple enough, but it lacked the "wow factor" of having some sort of automated build process. I looked into a bunch of different tools that I could add to my workflow to automate deploying changes. The problem was they all seemed fairly bloated or overly complex for my needs.

Then I realized I could simply use post-receive hooks, which are already built into git! You can't get more simple than that... So I thought it would be best to document my full process. These notes are more for my future self when I inevitably forget this, but hopefully others can benefit from them!

Before We Begin

This "tutorial" assumes that you already have a git server set up. It shouldn't matter what kind of forge you're using, so long as you have access to the hooks/ directory and the ability to write a custom post-receive script. For my purposes I will be running standard git via the web through cgit, hosted on NearlyFreeSpeech (FreeBSD based).

Overview

Here is a quick rundown of what we plan to do:

Write a custom post-receive script in the repo of our choice
Build and deploy our project when a remote push to master is made

Nothing crazy. Once you get the hang of things it's really simple.

Prepping Our Servers

Before we get into the nitty-gritty, there are a few items we need to take care of first:

Your main git repo needs ssh access to your web hosting (deploy) server. Make sure to add your public key and run a connection test first (before running the post-receive hook) in order to approve the "fingerprinting".
You will need to git clone your main git repo in a private/admin area of your deploy server. In the examples below, mine is cloned under /home/private/_deploys.

Once you do both of those tasks, continue with the rest of the article!

The post-receive Script

I will be using my own personal website as the main project for this example. My site is built with wruby, so the build instructions are specific to that generator. If you use Jekyll or something similar, you will need to tweak those commands for your own purposes (see the sketch at the end of this post).

Head into your main git repo (not the cloned one on your deploy server), navigate into the hooks/ directory, and create a new file named post-receive containing the following:

#!/bin/bash

# Get the branch that was pushed
while read oldrev newrev ref
do
    branch=$(echo $ref | cut -d/ -f3)

    if [ "$branch" == "master" ]; then
        echo "Deploying..."

        # Build on the remote server
        ssh user@deployserver.net << EOF
set -e  # Stop on any error
cd /home/private/_deploys/btxx.org
git pull origin master
gem install 'kramdown:2.4.0' 'rss:0.3.0'
make build
rsync -a build/* ~/public/btxx.org/
EOF

        echo "Build synced to the deployment server."
        echo "Deployment complete."
    fi
done

Let's break everything down.

First we check if the branch being pushed to the remote server is master. Only if this is true do we proceed. (Feel free to change this if you prefer something like production or deploy.)

if [ "$branch" == "master" ]; then

Then we ssh into the server (ie. deployserver.net), which will perform the build commands and also host these built files.

ssh user@deployserver.net << EOF

Setting set -e ensures that the script stops if any errors are triggered.

set -e  # Stop on any error

Next, we navigate into the previously mentioned "private" directory, pull the latest changes from master, and run the required build commands (in this case installing gems and running make build):

cd /home/private/_deploys/btxx.org
git pull origin master
gem install 'kramdown:2.4.0' 'rss:0.3.0'
make build

Finally, rsync is run to copy just the build directory to our public-facing site directory.

rsync -a build/* ~/public/btxx.org/

With that saved and finished, be sure to give this file proper permissions:

chmod +x post-receive

That's all there is to it!

Time to Test!

Now make changes to your main git project and push those up into master. You should see the post-receive commands printing out into your terminal successfully. Now check out your website to see the changes. Good stuff.

Still Using sourcehut

My go-to code forge was previously handled through sourcehut, which will now be used for mirroring my repos and handling mailing lists (since I don't feel like hosting something like that myself - yet!). This switchover was nothing against sourcehut itself but more of a "I want to control all aspects of my projects" mentality.

I hope this was helpful and please feel free to reach out with suggestions or improvements!

[1] By self-hosted I mean a NearlyFreeSpeech instance
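Since the build commands above are specific to wruby, here is roughly what the remote block might look like for a Jekyll site. This is only a sketch: the host, site path, and output directory are assumptions, so adjust them for your own setup.

# Everything here is illustrative: replace the host, paths, and site name with your own.
ssh user@deployserver.net << EOF
set -e
cd /home/private/_deploys/example.org
git pull origin master
bundle install
bundle exec jekyll build
rsync -a _site/ ~/public/example.org/
EOF

The only real differences are the build command (bundle exec jekyll build) and the output directory (_site/ by default for Jekyll) that gets synced to the public directory.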
Burning & Playing PS2 Games without a Modded Console
2024-09-02

Important: I do not support pirating or obtaining illegal copies of video games. This process should only be used to copy your existing PS2 games for backup, in case of accidental damage to the original disc.

Requirements

Note: This tutorial is tailored towards macOS users, but most things should work similarly on Windows or Linux. (A rough Linux alternative for the disc-ripping step is sketched at the end of this post.)

You will need:

An official PS2 game disc (the one you wish to copy)
A PS2 Slim console
An Apple device with an optical DVD drive (or a portable USB DVD drive)
Some time and a coffee! (or tea)

Create an ISO Image of Your PS2 Disc

Insert your PS2 disc into your optical drive.
Open Disk Utility (Applications > Utilities).
In Disk Utility, select your PS2 disc from the sidebar.
Click on the File menu, then select New Image > Image from [Disc Name].
Choose a destination to save the image and select the format DVD/CD Master.
Name your file and click Save. Disk Utility will create a .cdr file, which is essentially an ISO file.

Before we move on, we need to convert that newly created .cdr file into an ISO. Navigate to the directory where the .cdr file is located and use the hdiutil command to convert it:

hdiutil convert yourfile.cdr -format UDTO -o yourfile.iso

You'll end up with a file named yourfile.iso.cdr. Rename it by removing the .cdr extension to make it an .iso file:

mv yourfile.iso.cdr yourfile.iso

Done and done.

Getting Started

For Mac and Linux users, you will need to install Wine in order to run the patcher:

# macOS
brew install wine-stable

# Linux (Debian)
apt install wine

Clone & Run the Patcher

Clone the FreeDVDBoot ESR Patcher:

git clone https://git.sr.ht/~bt/fdvdb-esr

Navigate to the cloned project folder:

cd /path/to/fdvdb-esr

Then run the executable:

wine FDVDB_ESR_Patcher.exe

Now you need to select your previously created ISO file, use the default Payload setting, and then click Patch!. After a few seconds your file should be patched.

Burning Our ISO to DVD

It's time for the main event! Insert a blank DVD-R into your disc drive and mount it. Then right click on your patched ISO file and run "Burn Disk Image to Disc...". From here, you want to make sure you select the slowest write speed and enable verification. Once the file is written to the disc and verified (verification might fail - it is safe to ignore), you can remove the disc from the drive.

Before Playing the Game

Make sure you change the PS2 disc speed from Standard to Fast in the main "Browser" setting before you put the game into your console. After that, enjoy playing your cloned PS2 game!
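As promised above, here is a rough Linux equivalent for the disc-ripping step, since Disk Utility is macOS-only. It assumes your DVD drive shows up as /dev/sr0 (device names vary, so check yours first):

# replace /dev/sr0 with your actual optical drive device (check with: lsblk)
dd if=/dev/sr0 of=game.iso bs=2048 status=progress

The resulting game.iso can then go through the same FreeDVDBoot patching and burning steps described above.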
"This Key is Useless Now. Discard?"
2024-08-28

The title of this article probably triggers nostalgic memories for old school Resident Evil veterans like myself. My personal favourite in the series (not that anyone asked) was the original, 1998 version of Resident Evil 2 (RE2). I believe that game stands the test of time and is very close to a masterpiece. The recent remake lost a lot of the charm and nuance that made the original so great, which is why I consistently fire up the PS1 version on my PS2 Slim.

Resident Evil 2 (PS1) running on my PS2, hooked up to my Toshiba CRT TV.

But the point of this post isn't to gush over RE2. Instead I would like to discuss how well RE2 handled its interface and user experience across multiple in-game systems.

HUD? What HUD?

Just like the first Resident Evil that came before it, RE2 has no in-game HUD (heads-up display) to speak of. It's just your playable character and the environment. No ammo counters. No health bars. No "quest" markers. Nothing.

This is how the game looks while you play. Zero HUD elements.

The game does provide you with an inventory system, which holds your core items, weapons and notes you find along your journey. Opening up this sub-menu allows you to heal, reload weapons, combine objects or puzzle items, or read through previously collected documents. Not only is this more immersive (HUDs don't exist for us in the real world; we need to look through our packs as well...), it also gets out of the way.

The main inventory screen. Shows everything you need to know, only when you need it. (I can hear this screenshot...)

I don't need a visual element in the bottom corner showing me a list of "items" I can cycle through. I don't want an ammo counter cluttering up my screen with information I only need to see in combat or while manually reloading. If those are pieces of information I need, I'll explicitly open and look for them. Don't make assumptions about what is important to me on screen.

Capcom took this concept of less visual clutter even further with regard to maps and the character health status.

Where We're Going, We Don't Need Roads

Mini-Maps

A great many newer games come pre-packaged with a mini-map on the main interface. In certain instances this works just fine, but most are 100% UI clutter. Something to add to the screen. I can only assume some devs believe it is "helpful". Most times it's simply a distraction. Thank goodness most games allow you to disable them.

As for RE2, you collect maps throughout your adventure and, just like most other systems in the game, you need to consciously open the map menu to view them. You know, just like in real life. This creates a higher tension as well, since you need to constantly reference your map (on initial playthroughs) to figure out where the heck to go. You feel the pressure of someone frantically pulling out a physical map and scanning their surroundings. It also helps the player build a mental model in their head, thus providing even more of that sweet, sweet immersion.

The map of the Raccoon City Police Station.

No Pain, No Gain

The game doesn't display any health bar or player status information. In order to view your current status (symbolized by "Fine", "Caution" or "Danger") you need to open your inventory screen. From here you can heal yourself (if needed) and see the status type change in real time.

The "condition" health status. This is fine.

But that isn't the only way to visually see your current status.
Here's a scenario: you're traveling down a hallway, turn a corner and run right into the arms of a zombie. She takes a couple good bites out of your neck before you push her aside. You unload some handgun rounds into her and down she goes. As you run over her body she reaches out and chomps on your leg as a final "goodbye". You break free and move along, but notice something different in your character's movement - they're holding their stomach and limping.

Here we can see the character "Hunk" holding his stomach and limping, indicating an injury without the need for a custom HUD element.

If this was your first time playing, you would most likely instinctively open the inventory menu, where your character's health is displayed, and (in this instance) be greeted with a "Caution" status. This is another example of subtle UX design. I don't need to know the health status of my character until an action is required (in this example: healing). The health system is out of the way but not hidden. This keeps the focus on immersion, not baby-sitting the game's interface.

A Key to Every Lock

Hey! This section is in reference to the title of the article. We made it!

...But yes, discarding keys in RE2 is a subtle example of fantastic user experience. As a player, I know for certain this key is no longer needed. I can safely discard it and free up precious space in my inventory. There is also a sense of accomplishment, a feeling of "I've completed a task" or an internal checkbox being ticked. Progress has been made!

Don't overlook how powerful an interaction this little text prompt is. Ask any veteran of the series and they will tell you this prompt is almost euphoric.

The game's prompt asking if you'd like to discard a useless key. Perfection.

Inspiring Greatness

RE2 is certainly not the first or last game to implement these "minimal" game systems. A more "modern" example is Dead Space (DS), along with its very faithful remake. In DS the character's health is displayed directly on the character model itself, and a similar inventory screen is used to manage items. An ammo counter is visible, but only when the player aims their weapon. Pretty great stuff and another masterpiece of survival horror.

In Dead Space, the character's health bar is set as part of their spacesuit.

The Point

I guess my main takeaway is that designers and developers should try their best to keep user experience intuitive. I know that sounds extremely generic, but it is a lot more complex than one might think. Try to be as direct as possible while remaining subtle. It's a delicate balance, but experiences like RE2 show us it is attainable.

I'd love to talk more, but I have another play-through of RE2 to complete...
More in programming
Explore Remix's new React Server Components (RSC) preview in react-router! Learn usage, different approaches, and trade-offs.
Have you thought about doing the opposite of whatever you're doing or considering? It's a really helpful way to test your assumptions and your values. What does the opposite look like? How would it work?

It's so easy to get stuck in a groove of what works, what you believe to be right. But helpful assumptions have a half-life, just like facts. And it's ever so easy to miss the shift when circumstances change, if you're not regularly stress-testing your core beliefs.

That doesn't mean you're just a flag in the wind, blowing whichever way. But it does mean having enough intellectual humility and creative flexibility to consider that what you believe to be true about your business, about your team, about your technology might not be so.

We did this a while back with full-time managers. We'd been working for nearly two decades without any, but exactly because it'd been so long, we were drawn to try the opposite, just to see what we might have missed. So we did. Hired a few full-time managers to help us test that assumption for a few years. In the end, we decided that our managers-of-one culture worked better, but it wasn't a given at the outset.

To try the opposite, you really have to believe that you might have been wrong. Because you're wrong about something. I guarantee it. We all are.
I've spent so much time, had so many headaches, and encountered so much complexity from what, in my estimation, boils down to this: trying to get something to work on multiple computers.

It might be time to just go back to having one computer — a personal laptop — do everything. No more commit, push, and let the cloud build and deploy. No more making it possible to do a task on my phone and tablet too. No more striving to make it possible to do anything from anywhere. Instead, I should accept the constraint of doing specific kinds of tasks when I'm at my laptop. No laptop? Don't do it. Save it for later. Is it really that important? I think I'd save myself a lot of time and headache with that constraint. No more continuous over-investment of my time in making it possible to do some particular task across multiple computers.

It's a subtle, but fundamental, shift in thinking about my approach to computing tasks. Today, my default posture is to defer control of tasks to cloud computing platforms. Let them do the work, and I can access and monitor that work from any device. Like, for example, publishing a version of my website: git commit, push, and let the cloud build and deploy it.

But beware, there be possible dragons! The build fails. It's not clear why, but it "works on my machine". Something is different between my computer and the computer in the cloud. Now I'm troubleshooting an issue unrelated to my website itself. I'm troubleshooting an issue with the build and deployment of my website across multiple computers. It's easy to say: build works on my machine, deploy it! It's deceivingly time-consuming to take that one more step and say: let another computer build it and deploy it.

So rather than taking the default posture of "cloud-first", i.e. push to the cloud and let it handle everything, I'd rather take a "local-first" approach where I choose one primary device to do tasks on, and ensure I can do them from there. Everything else beyond that, i.e. getting it to work on multiple computers, is a "progressive enhancement" in my workflow. I can invest the time, if I want to, but I don't have to. This stands in contrast to where I am today, which is: if a build fails in the cloud, I have to invest the time because that's how I've set up my workflow. I can only deploy via the cloud. So I have to figure out how to get the cloud's computer to build my site, even when my laptop is doing it just fine.

It's hard to make things work identically across multiple computers. I get it, that's a program, not software. And that's the work. But sometimes a program is just fine. Wisdom is knowing the difference.
For the past week, I've been working off the Minisforum UM870. A tiny mini PC with an 8-core/16-thread AMD 8745H CPU, which retails for $343 (or €379) as a bare-bone unit, and stays below $550 even after adding 48GB of RAM and 1TB of storage. I'm shocked to report that I really don't need more than this!

I mean, I knew that Apple's Mac Mini, which is equally petite to the Minisforum, had plenty of power for macOS. But somehow I thought Apple had some special sauce that made this possible, and that PCs were forever condemned to be bigger, louder, and slower. How 2020 of me.

The UM870 is a little beast. It runs our full HEY test suite in just 2m28s. In comparison, it takes a 14-core M4 Pro 2m49s, and such an Apple costs $2,199, once you've given it 48GB of RAM and 1TB of storage.

Now, that M4 Mac Mini can probably do things with, say, 8K video editing that the UM870 can't touch. But on the other hand, the UM870 can play the latest video games. Everything from Fortnite to Cyberpunk 2077 to Forza Horizon. It won't trouble a modern, dedicated Nvidia card for max FPS, but it's perfectly playable at 1080p at medium settings in a ton of games.

In raw CPU power, the AMD 8745H will match a regular M4 in multi-core. They both clock in right around 13,000 points on Geekbench 6, though the M4 is a fair bit quicker in single-core. The AMD is also far behind an M4 Pro in raw multi-core power (13K vs 22K), but at less than a quarter the cost, it's hard to complain.

But as with the example of video games, it's often deceiving just to compare the Geekbench numbers, because it all depends on what you're doing. If you're really into video games, it's no use to have extra grunt if your favorite games won't run. The same is true if you're a developer working with Docker containers and a Linux toolchain. As quoted above, the UM870 handily defeats the M4 Pro in our all-cores-buzzing HEY test suite. That's partly because we run databases and accessories, like MySQL, Redis, and ElasticSearch, in Docker containers. Even though we run the Ruby code natively on both platforms, the Docker dependencies put the Mac further behind than it otherwise would have been, because Linux runs Docker natively, and the Mac has to deal with the file-system tax and other drawbacks.

The irony is that it was partly Apple's volume with and investment in TSMC that got us these incredible AMD chips, as they're riding the same improvements in TSMC manufacturing prowess as Apple's M chips. The Zen 4 cores in the 8745H are all forged on the same 5nm process as the M2, so it's no surprise that the AMD cores are dead-on-the-money for Apple's in Geekbench single-core performance.

And Zen 4 is even the last generation! The insane new (and insanely named) AMD Ryzen AI Max 395+ chip that's used in the upcoming Framework Desktop runs on Zen 5 cores. And with 16 of those, the 395+ is faster in Geekbench multi-core than an M4 Pro, and only ever so slightly behind the M4 Max. On my HEY test suite, it completes the run in an insane 1m21s — more than twice as fast as the 14-core M4 Pro!

But I digress. The 395+ chip isn't cheap, even if it's still a great deal. The Framework Desktop with 64GB/1TB, which is twice as fast as the M4 Pro with our HEY tests, is $1,744. That's still less than the $2,199 Mac Mini, which only has 48GB of RAM. But obviously way more than a $550 Minisforum! And while it's quite small, it's not tiny, like the UM870.

Regardless, this is what I love about technology.
I love when our assumptions are tested: just how small and cheap can an awesome developer machine become? I love that open-source Linux is able to run laps around Apple in the workloads that many developers need (like working with Docker containers). I love that this tiny little silent $550 mini PC on my desk is capable of putting out computing power that only a decade ago would have been reserved for loud, honking metal in a data center.

Mini PCs have gotten really good. AMD is on a roll. Linux is a blast. These are my conclusions.

Check out the Minisforum UM870 or the Beelink SER8. Anything with an AMD 7745H and up to an 8945HS should be a great deal. If you want to splurge (yet still get a bargain compared to the Macs), you could have a look at the new HX370 in the Beelink SER9 or Minisforum X1, but I'd save my money, buy a Lofree Flow84 keyboard to go with the new mini rig, and put the rest of the money towards a KEF LSX II savings fund!
For many years now, I’ve been using an old YubiKey along with the free tier of Duo Security to add a second factor to my SSH logins. This is klunky, and has a number of drawbacks (dependency on a cloud service and Internet among them). I decided it was time to upgrade, so I recently … Continue reading How to Use SSH with FIDO2/U2F Security Keys →