More from Jonas Hietala
About 1.5 years ago I ventured into 3D printing by building a VORON Trident. It was a very fun project and I've even used the printer quite a bit. Naturally, I had to build another one, and this time I opted for the cute VORON 0.

Why another printer?

I really like my VORON Trident and it'll continue to be my main printer for the foreseeable future, but a second printer would do two important things for me:

Act as a backup printer if my Trident breaks. A printer made partially of printed parts is great as you can easily repair it… But only if you have a working printer to print the parts. It would also be very annoying if I disassemble the printer because I want to mod it and realize I've forgotten to print a part I needed.

Building printers is really fun. Building the VORON Trident is one of the most fun and rewarding projects I've done.

Why a VORON 0?

These properties make the VORON 0 an ideal secondary printer for me:

You need to assemble the VORON 0 yourself (a feature, not a bug)
Prints ABS/ASA well (for printer parts)
Very moddable and truly open source
It's tiny

The VORON 0 to the left and the VORON Trident 250 to the right.

It's really small, which is perfect for me as I have a limited amount of space. It would be very fun to build a VORON 2.4 (or even a VORON Phoenix) but I really don't have space for more printers.

Getting the parts

I opted to buy a kit instead of self-sourcing the parts as it's usually cheaper and requires a lot less work, even if you replace some parts. This is what I ended up getting:

A VORON 0 kit from Lecktor
Parts for a Dragon Burner toolhead
Parts for a Nevermore V4 active carbon filter
Later on, I replaced the SKR Mini E3 V2 that came with the kit with the V3

Lots of delays

I ordered a VORON 0 from Lecktor in February 2024 and it took roughly 4 months before I got the first shipment of parts, and it wasn't until the end of 2024 that I had received all the parts needed to complete the build. The wait was annoying… While I can't complain about the quality of the parts, given the massive delays I regret ordering from Lecktor and in hindsight I should've ordered an LDO kit from 3DJake, like I was first considering.

Printing parts myself

So what do you do when you can't start the build? You print parts!

A box of some of the printed parts for the build (and many I later threw away).

There's something very satisfying about printing parts you then build a printer with. This time I wanted to make a colorful printer and I came up with this mix of filament:

PolyLite ASA Yellow
Formfutura EasyFil ABS Light Green
Formfutura EasyFil ABS Light Blue
Formfutura EasyFil ABS Magenta

I think they made the printer look great.

The build

I won't do as detailed a build log as I did when building the VORON Trident, but I tried to take some pictures. Scroll on!

Frames and bed

The linear Y-rails. The kit comes with the Kirigami bed mod. The frame with A/B motors. Building the bottom of the printer with feet, power supply, and display.

MGN9 instead of MGN7 X-axis

After I assembled the X-axis I noticed a problem: the carriage collides with the stock A drive. The reason is that the kit comes with MGN9 rails for the X-axis instead of the standard MGN7 rails. This required me to reprint modified A/B drives, the X-carriage, and alignment tools.

The carriage passes the modded B drive.

Belts

Starting to install the belt. The belt is tight.
Dragon Burner toolhead

I got the parts needed to build the standard Mini Stealthburner… But I'm attracted to playing around with new stuff and I decided to try out the Dragon Burner instead. I went with it because it's quite popular, it has good cooling (I print a bunch of PLA), and I haven't tried it out yet.

The fans are inserted. I don't care about LEDs so I inserted an opaque magenta part instead. I think it looks really good. The back of the Dragon Burner.

I opted for the Rapido 2 instead of the Dragon that came with the kit because the Dragon has problems printing PLA. I was a bit confused about how to route the wires as there was very little space when mounting the toolhead on the carriage. Routing the wires close to the fans, clipping off the ears of the fans, and holding it all together with cable ties in this way worked for me.

Galileo 2 standalone

Dragon Burner together with the Galileo 2 extruder mounted on the printer.

For the extruder I opted for the standalone version of Galileo 2. I've used Galileo 2 on the Trident but I hated the push-down latch it uses in the Stealthburner configuration. The latch eventually broke by pulling out a heat-set insert, so I went back to the Clockwork 2 on the Trident, giving me the parts to rebuild the Galileo for the VORON 0 in a standalone configuration.

The parts for Galileo 2. There will be leftovers from the Stealthburner variant.

The build was really fast and simple—compared to the Stealthburner variant it's night and day. I didn't even think to take a break for pictures.

Nevermore filter

Since I want to be able to print ABS I feel I need to have an activated carbon filter. I wanted to have an exhaust fan with a HEPA filter as well, but I'll leave that to a mod in the future. The Nevermore V4 is an activated carbon filter that fits well in the VORON 0. I fastened the fan using a strip of VHB—it was a struggle to position it in the middle.

The Nevermore is mounted standing in the side of the printer. Just remember to preload the extrusion with extra M3 nuts when you assemble the printer. (I've heard LDO has nuts you can insert afterwards… Sounds great.)

Panels

With the panel and spool holder at the back. Please ignore the filament path in this picture; it'll interfere with the rear belt when routed behind the umbilical cable. With the tophat and door installed.

I'm slightly annoyed with the small gaps and holes the printer has (mainly between the tophat and the panels at the bottom half). I later changed some of the parts related to the tophat to better match the color scheme.

Wiring

Wiring was simpler than for the Trident but it was harder to make the wiring pretty. Thank god I could cover it up.

The underside of the printer with the power, 5V converter, display, and Z-motor. The back of the printer with the Raspberry Pi and MCU.

Raspberry Pi

The Raspberry Pi only has two cables: power and communication over the GPIO pins, and a display via USB. The Pi communicates and gets power over the TFT connection on the MCU.

Toolhead

The kit came with a toolhead board and breakout board for an umbilical setup:

The toolhead board. The breakout board.

I did run into an issue where the polarity of the fans on the toolhead board did not match the polarity of the fans on the MCU, leading to some frustration where the fans refused to spin. I ended up swapping the polarity using the cables from the breakout board to the MCU.

Chamber thermistor

The MCU only has two thermistor ports and they're used for the hotend and bed thermistors.
For the chamber thermistor (that's integrated into the breakout board) I use the MOSI pin on the SPI1 8-pin header:

The chamber thermistor connected to MOSI and ground on the SPI1 header.

SKR mini E3 v3

I got an SKR mini E3 v2 with the kit but I replaced it with the v3 for two reasons:

A FAN output, used for the Nevermore filter
A filament runout sensor

There's not much to say about the extra FAN output. For the runout sensor, the E0-stop header on the MCU has 3 pins while the VORON 0.2 style runout sensor connection only uses 2. I reused the prepared y-endstop cable I got with the kit and scratched away some of the plastic to make the 2-pin connector fit the 3-pin header on the MCU (the +5V pin isn't needed):

The filament runout sensor connected to E0-stop.

Klipper setup

I followed the VORON documentation and chose Mainsail as I've been happy with it on my Trident. I'm not going to describe everything, only call out some issues I had or extra steps I had to take.

MCU firmware

The VORON documentation assumes USB communication so the default firmware instructions didn't work for me. According to BigTreeTech's documentation, if you communicate over USART2 (the TFT port) then you need to compile the firmware with Communication interface set to Serial (on USART2 PA3/PA2). You then need to use this Klipper configuration:

    [mcu]
    serial: /dev/ttyAMA0
    restart_method: command

It took a long time for me to figure this out as I had a display connected via USB, so I thought the display was the MCU and got stuck at a "Your Klipper version is: xxx MCU(s) which should be updated: xxx" error.

Filament runout

    [filament_switch_sensor Filament_Runout_Sensor]
    pause_on_runout: True
    runout_gcode: PAUSE
    switch_pin: PC15

Chamber thermistor

According to this comment, this is the config to use the SPI header for a thermistor:

    [temperature_sensor chamber_temp]
    sensor_type: Generic 3950
    sensor_pin: PA7
    pullup_resistor: 10000

Works for me™

Display

It's easy to flash the display directly from the Raspberry Pi, although the first firmware I built was too large. There are optional features you can remove, but I removed too many so the configuration for the buttons wasn't accepted. These were the features that ended up working for me:

    [*] Support GPIO "bit-banging" devices
    [*] Support LCD devices
    [ ] Support thermocouple MAX sensors
    [ ] Support adxl accelerometers
    [ ] Support lis2dw and lis3dh 3-axis accelerometers
    [ ] Support MPU accelerometers
    [*] Support HX711 and HX717 ADC chips
    [ ] Support ADS 1220 ADC chip
    [ ] Support ldc1612 eddy current sensor
    [ ] Support angle sensors
    [*] Support software based I2C "bit-banging"
    [*] Support software based SPI "bit-banging"

Sensorless homing

I was nervous setting up sensorless homing, fearing that without a physical switch the printer might decide to burn the motor against the edge or something. (I really have no idea how it works, hence my fear.) In the end it was straightforward. The VORON 0 example firmware was already configured for sensorless homing and the only things I had to do were:

Set the jumpers for the X-DIAG and Y-DIAG pins on the board
Tweak the driver_SGTHRS values (I landed on 85, down from 255)

And now I have sensorless homing working consistently. What confused me was that the sensorless homing guide and the homing macros it links to were slightly different from the VORON 0 example firmware, and it wasn't clear if I had to make all the changes or not. (I did not.)
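For reference, on the board's onboard TMC2209 drivers this boils down to a couple of lines in the Klipper config. The following is only a minimal sketch for the X axis with assumed pin names, and everything else in the stepper section omitted; it's not copied from my config, and the VORON 0 example config plus the Klipper docs are the real source of truth:

```
# Sketch of the sensorless-homing bits for the X axis (pins are assumptions,
# check your board's pinout). The DIAG jumper routes the driver's stall
# output to the endstop pin, which Klipper then uses as a virtual endstop.
[tmc2209 stepper_x]
diag_pin: ^PC0            # X endstop pin, fed by the driver's DIAG output
driver_SGTHRS: 85         # stall threshold: 255 = most sensitive, 0 = least

[stepper_x]
endstop_pin: tmc2209_stepper_x:virtual_endstop
homing_retract_dist: 0    # required for sensorless homing
```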
I don’t know what I did but it only happened a couple of times at beginning. The filament sensor had some consistency issues. Some extra tape on the bearing seemed to fix it. The filament keeps getting stuck in the extruder after unload. I’m still having issues but forgetting to tighten the nozzle and using a too short PTFE tube didn’t help. I had trouble getting the filament to stick to bed. Super frustrating to be honest. I re-calibrated the z offset and thumb screws a bunch of times and (right now) it seems to work fairly well. Even though you’re not supposed to need automatic bed leveling for a printer this small, I can’t help but miss the “just works” feeling I have with the Trident. Initial thoughts on the printer A model I printed for one of my kids. It came out really well. I haven’t printed that much with the printer yet but I have some positive things to say about it: Dragon Burner is great when printing PLA (which I use a lot). But I have some negative things to say too: horribly loud but the print movement is also too loud for my taste. It’s poorly insulated. For example there are gaps between the top hat and the rest of the printer that I don’t see a good way to cover up. Overall though I’m very happy with it. I wouldn’t recommend it as a first printer or to someone who just wants a tool that works out of the box, but for people like me who wanted to build a backup/secondary printer I think it’s great. What’s next? With a secondary printer finally up and running I can now start working on some significant mods for my Trident! This is the tentative plan right now: Inverted electronics mod. Replace Stealthburner with another toolhead, most likely A4T-toolhead. Build a BoxTurtle for multi-color support. But we’ll see when I manage to get to it. I’m not in a rush and I should take a little break and play with my VORON 0 and perhaps work on my other dozen or so projects that lie dormant.
I recently came upon a horror story where a developer was forced to switch editors from Neovim to Cursor, and I felt I had to write a little to cleanse myself of the disgust I felt.

Two different ways of approaching an editor

I think there are two opposing ways of thinking about the tool that is an editor:

Refuse to personalize anything and only use the basic features. "An editor is a simple tool I use to get the job done."

Get stuck in configuration hell and spend tons of time tweaking minor things. "An editor is a highly personalized tool that works the way I want."

These are the extreme ends of the spectrum to make a point, and most developers will fall somewhere in between. It's not a static proposition; I've had periods in my life where I've used the same Vim configuration for years, and other times I've spent more time rewriting my Neovim config than doing useful things.

I don't differentiate between text editors and IDEs as I don't find the distinction very meaningful. They're all just editors.

Freedom of choice is important

Freedom of choice is more to be treasured than any possession earth can give.
David O. McKay

Some developers want zero configuration while others want to configure their editor so it's just right. Either way is fine, and I've met excellent developers from both sides. But removing the power of choice is a horrible idea, as you're forcing developers to work in a way they're not comfortable with, not productive with, or simply don't like. You're bound to make some of the developers miserable or see them leave (usually the best ones, who can easily find another job).

To explain how important an editor might be to some people, I give you this story about Stephen Hendry—one of the most successful snooker players ever—and how important his cue was to him:

In all the years I've been playing I've never considered changing my cue. It was the first cue I ever bought, aged 13, picked from a cabinet in a Dunfermline snooker centre just because I liked the Rex Williams signature on it. I saved £40 to buy it. It's a cheap bit of wood and it's been the butt of other players' jokes for ages. Alex Higgins said it was 'only good for holdin' up f*g tomatoes!' But I insist on sticking with it. And I've won a lot of silverware, including seven World Championship trophies, with it. It's a one-piece which I carry in a wooden, leather-bound case that's much more expensive than the cue it houses. But in 2003, at Glasgow airport after a flight from Bangkok, it emerges through the rubber flaps on the carousel and even at twenty yards I can see that both case and cue are broken. Snapped almost clean in two, the whole thing now resembling some form of shepherd's crook. The cue comes to where I'm standing, and I pick it up, the broken end dangling down forlornly. I could weep. Instead, I laugh. 'Well,' I say to my stunned-looking friend John, 'that's my career over.'
Stephen Hendry, The Mirror

Small improvements lead to large long-term gains

Kaizen isn't about massive overhauls or overnight success. Instead, it focuses on small, continuous improvements that add up to significant long-term gains.
What is Kaizen? A Guide to Continuous Improvement

I firmly believe that even small improvements are worth it as they add up over time (also see compound interest and how it relates to financial investments). An editor is a great example where even small improvements may have a big effect, for the simple reason that you spend so much time in your editor.
I've spent hours almost every day inside (neo)vim since I started using it 15+ years ago. Even simple things like quickly changing text inside brackets (ci[) instead of selecting text with your mouse might save hundreds of hours during a programming career—and that's just one example.

Naturally, as a developer you can find small but worthwhile improvements in other areas too, for instance:

Learning the programming languages and libraries you use a little better.

Customizing your keyboard and keyboard layout. This is more for comfort and health than speed, but that makes it even more important, not less.

Increasing your typing speed. Some people dismiss typing speed, saying they're limited by their thinking, not their typing. But the benefit of typing faster (and more fluidly) isn't really the overall time spent typing vs thinking; it's so you can continue thinking with as little interruption as possible. On some level you want to reduce the time spent typing in this chain: think… edit, think… edit, think… It's also why the Vim way of editing is so good—it's based on making small edits and returning quickly to normal (thinking) mode.

Some people ask how you can afford to spend time practicing Vim commands or configuring your editor, as it takes away time from work. But I ask you: with a programming career of several decades and tens of thousands of hours to spend in front of your computer, how can you afford not to?

Neovim is versatile

Over the years I've done different things:

Switched keyboard and keyboard layout multiple times.
Been blogging and wrote a book.

The one constant through all of this has been Neovim. Neovim may not have the best language-specific integrations but it does everything well, and the benefit of having the same setup for everything you do is not to be underestimated. It pairs nicely with the idea of adding up small improvements over time; every small improvement that I add to my Neovim workflow will stay with me no matter what I work with.

I did use Emacs at work for years because their proprietary language only had an Emacs integration and I didn't have the time nor energy to create one for Neovim. While Evil made the experience survivable, I realized then that I absolutely hate having my work setup be different from my setup at home. People weren't overjoyed with being unable to choose their own editor, and I've heard rumors that there's now an extension for Visual Studio.

Neovim is easily extensible

Neovim: a Personalized Development Environment
TJ DeVries

A different take on editing code

I've always felt that Vimscript is the worst part of Vim. Maybe that's a weird statement, as the scriptability of Vim is one of its strengths; and to be fair, simple things are very nice:

    nnoremap j gj
    set expandtab

But writing complex things in Vimscript is simply not a great experience.

One of the major benefits of Neovim is the addition of Lua as a first-class scripting language. Yes, Lua isn't perfect and it's often too verbose, but it's so much better than Vimscript. Lua is the main reason that the Neovim plugin ecosystem is currently a lot more vibrant than Vim's. Making it easier to write plugins is of course a boon, but the real benefit is in how it makes it even easier to write more complex customizations for yourself. Just plop down some Lua in the configuration files you already have and you're done. (Emacs worked this out to an even greater extent decades ago.)
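To give a taste of what "plopping down some Lua" looks like, here's a minimal sketch of the kind of small, personal customization I mean: disabling autoformat-on-save for Markdown files inside a blog folder. The path, the filetype, and the disable_autoformat buffer flag (a convention that formatter plugins such as conform.nvim check) are illustrative assumptions, not an excerpt from my actual config:

```lua
-- Sketch: skip autoformat-on-save for Markdown files under a blog directory.
-- The buffer-local `disable_autoformat` flag is a convention used by e.g.
-- conform.nvim; adapt it to whatever your formatter setup checks.
vim.api.nvim_create_autocmd("FileType", {
  pattern = "markdown",
  callback = function(args)
    local path = vim.api.nvim_buf_get_name(args.buf)
    if path:find("/blog/", 1, true) then
      vim.b[args.buf].disable_autoformat = true
    end
  end,
})
```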
One way I use this customizability is to help me when I'm blogging. Maybe you don't need to create something this big, but even small things such as disabling autoformat for certain file types in specific folders can be incredibly useful. Approachability should not be underestimated.

While plugins in Lua are understandably the focus today, Neovim can still use plugins written in Vimscript, and 99% of your old Vim configuration will still work in Neovim.

Neovim won't go anywhere

The old is expected to stay longer than the young in proportion to their age.
Nassim Nicholas Taleb, "Antifragile"

The last big benefit of Neovim I'll highlight—and why I feel fine with investing even more time into Neovim—is that Neovim will most likely continue to exist and thrive for years if not decades to come.

While Vim has—after an impressive 30 years of development—recently entered maintenance mode, activity in Neovim has steadily increased since the fork from Vim more than a decade ago. The number of high-quality plugins, interest in Google Trends, and GitHub activity have all been trending upwards. Neovim was also the most desired editor according to the latest Stack Overflow developer survey, and the overall buzz and excitement in the community is at an all-time high.

With the self-reinforcing benefits of investing in a versatile and flexible editor with a huge plugin ecosystem such as Neovim, I see no reason for the trend to taper off anytime soon. Neovim will probably never be as popular as something like VSCode, but as an open source project backed by excited developers, Neovim will probably be around long after VSCode has been discontinued for The Next Big Thing.
I've been with Veronica for over a decade now and I think I'm starting to know her fairly well. Yet she still manages to surprise me. For instance, a couple of weeks ago she came and asked me about email security:

I worry that my email password is too weak. Can you help me change email address and make it secure?

It was completely unexpected—but I'm all for it.

The action plan

All heroic journeys need a plan; here's mine:

Buy her a domain (the .com of her surname was available).
Migrate her email to Fastmail.
Set up Bitwarden as a password manager.
Use a YubiKey to secure the important services.

Why a domain?

If you ever want (or need) to change email providers it's very nice to have your own domain. For instance, Veronica has a hotmail.com address but she can't bring that with her if she moves to Fastmail. Worse, what if she gets locked out of her Outlook account for some reason? It might happen if you forget your password, someone breaks into your account, or even by accident. For example, Apple users recently got locked out of their Apple IDs without any apparent reason, and Gmail has been notorious for locking out users for no reason. Some providers may be better, but this is a systemic problem that can happen at any service.

In almost all cases, your email is your key to the rest of your digital life. The email address is your username and to reset your password you use your email. If you lose access to your email, you lose everything.

When you control your domain, you can point the domain to a new email provider and continue with your life.
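In practice, "pointing the domain" mostly means updating a couple of DNS records at the registrar. As a rough sketch (example.com is a placeholder, and the MX hosts below are the ones Fastmail documents; check your provider's current instructions):

```
; Sketch: DNS records that route mail for a custom domain to Fastmail.
; Switching providers later means replacing these records, nothing more.
example.com.    IN  MX   10 in1-smtp.messagingengine.com.
example.com.    IN  MX   20 in2-smtp.messagingengine.com.
; SPF, so receiving servers accept mail sent through the provider:
example.com.    IN  TXT  "v=spf1 include:spf.messagingengine.com ?all"
```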
Why pay for email?

One of the first things Veronica told me when I proposed that she'd change providers was that she didn't want to pay. It's a common sentiment online that email must be cheap (or even free). I don't think that email is the area where cost should be the most significant factor. As I argued in why you should own your email's domain, your email is your most important digital asset. If email is so important, why try to be cheap about it? You should spend your money on the important things, not on the unimportant things.

Paying for email gives you a couple of nice things:

Human support. It's all too easy to get shafted by algorithms where you might get banned because you triggered some edge case (such as resetting your password outside your usual IP address).

Ability to use your own domain. Having a custom domain is a paid feature at most email providers.

A long-term viable business. How do you run an email company if you don't charge for it? (You sell out your users or you close your business.)

Why a password manager?

The best thing you can do security-wise is to adopt a password manager. Then you don't have to try to remember dozens of passwords (leading to easy-to-remember and duplicate passwords) and can focus on remembering a single (stronger) password, confident that the password manager will remember all the rest. "Putting all your passwords in one basket" is a concern of course, but I think the pros outweigh the cons.

Why a YubiKey?

To take digital security to the next level you should use two-factor authentication (2FA). 2FA is an extra "thing" in addition to your password that you need to be able to log in. It could be a code sent to your phone over SMS (insecure), a code sent to your email (slightly better), a code from a 2FA app on your phone such as Aegis Authenticator (good), or a hardware token (most secure).

It's easy to think that I went with a YubiKey because it's the most secure option, but the biggest reason is that a YubiKey is more convenient than a 2FA app. With a 2FA app you have to whip out your phone, open the 2FA app, locate the correct site, and then copy the TOTP code into the website (quickly, before the code changes). It's honestly not that convenient, even for someone like me who's used this setup for years. With a YubiKey you plug it into a USB port and press it when it flashes. Or on the phone you can use NFC. NFC is slightly more annoying compared to plugging it in, as you need to move/hold it in a specific spot, yet it's still preferable to having to jump between apps on the phone.

There are hardware keys other than YubiKey of course. I've used YubiKeys for years and have had a good experience. Don't fix what isn't broken.

The setup

Here are a few quick notes on how I set up her new accounts.

Password management with Bitwarden

The first thing we did was set up Bitwarden as her password manager. I chose the family plan so I can handle the billing, and I installed Bitwarden on her devices to give her access.

I gave her a YubiKey and registered it with Bitwarden for additional security. As a backup I also registered my own YubiKeys on her account; if she loses her key we still have others she can use. Although it was a bit confusing for her, I think she appreciates not having to remember a dozen different passwords and can simply remember one (stronger) password. We can also share passwords easily via Bitwarden (for newspapers, Spotify, etc.). The YubiKey itself is very user friendly and she hasn't run into any usability issues.

Email on Fastmail

With the core security up and running, the next step was to change her email:

Gave her an email address on Fastmail with her own domain (<firstname>@<lastname>.com). She has a basic account that I manage (there's a Duo plan that I couldn't migrate to at this time).
Secured the account with our YubiKeys and a generated password stored in Bitwarden.
Bolstered the security of her old Hotmail account by generating a new password and registering our YubiKeys.
Forwarded all email from her old Hotmail address to her new address.

With this done she has a secure email account with an email address that she owns. As is proper, she's been updating her contact information and changing her email address in her other services. It's a slow process but I can't be too critical—I still have a few services that use my old Gmail address even though I migrated to my own domain more than a decade ago.

Notes on recovery and redundancy

It's great to worry about phishing, weak passwords, and getting hacked. But for most people the much bigger risk is forgetting your password or losing your second factor, and getting locked out that way. To reduce the risk of losing access to her accounts we have:

YubiKeys registered for all accounts.
The recovery codes for all accounts written down and secured.
My own accounts, which can recover her Bitwarden and Fastmail accounts via their built-in recovery functionality.

Perfect is the enemy of good

Some go further than we've done here, others do less, and I think that's fine. It's important not to compare yourself with others too much; even small security measures make a big difference in practice. Not doing anything at all because you feel overwhelmed is worse than doing something, even something as simple as making sure you're using a strong password for your email account.
There are two conflicting forces in play when setting up your computer environment. It's common to find people stuck at the extreme ends of the spectrum: some programmers refuse to configure or learn their tools at all, while others get stuck re-configuring their setups constantly without any productivity gains to show for it. Finding a balance can be tricky.

With regards to terminals I've been using alacritty for many years. It gets the job done, but I don't know if I'm missing out on anything. I've been meaning to look at alternatives like wezterm and kitty but I never got far enough to try them out. On one hand it's just a terminal, what difference could it make?

Enter Ghostty, a terminal so hyped up it made me drop any useful things I was working on and see what the fuss was about. I don't quite get why people hype up a terminal of all things, but here we are. Ghostty didn't revolutionize my setup or anything, but I admit that Ghostty is quite nice and it has replaced alacritty as my terminal.

I just want a blank canvas without any decorations

One of the big selling points of Ghostty is its native platform integration. It's supposed to integrate well with your window manager so it looks the same and gives you some extra functionality… But I don't know why I should care—I just want a big square without decorations of any kind.

You're supposed to be able to simply turn off any window decorations:

    window-decoration = false

At the moment there's a bug that requires you to set some weird GTK settings to fully remove the borders:

    gtk-titlebar = false
    gtk-adwaita = false

It's unfortunate, as I haven't done any GTK configuration on my machine (I use XMonad as my window manager and I don't have any window decorations anywhere).

There might be some useful native features I don't know about. The password input style is neat for instance, although I'm not sure it does anything functionally different compared to other terminals.

Cursor invert

    cursor-invert-fg-bg = true

In alacritty I've had the cursor invert the background and foreground, and you can do that in Ghostty too. I ran into an issue where it interferes with indent-blankline.nvim, making the cursor very hard to spot in indents (it takes the color of the indent guides, which is by design low contrast with the background). Annoying, but it gave me the shove I needed to try out different plugins to see if the problem persisted. I ended up with (an even nicer) setup using snacks.nvim that doesn't hide the cursor:

Left: indent-blankline.nvim (cursor barely visible). Right: snacks.nvim (cursor visible, and it highlights the scope).

Minimum contrast

Unreadable ls output is a staple of the excellent Linux UX, and it's super annoying. You can of course configure the ls output colors, but that's just for one program and it won't automatically follow when you ssh to another server. Ghostty's minimum-contrast option ensures that the text and background always have enough contrast to be visible:

    minimum-contrast = 1.05

Most excellent. This feature has the potential to break "eye candy" features, such as the Neovim indent-line plugins, if you use a low contrast configuration. I still run into minor issues from time to time.

Hide cursor while typing

    mouse-hide-while-typing = true

A small quality-of-life feature is the ability to hide the mouse cursor when typing. I didn't know I needed this in my life.
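For reference, all of these knobs live in Ghostty's plain-text configuration file (typically ~/.config/ghostty/config on Linux). Here's a minimal sketch gathering the settings discussed above; treat it as a starting point rather than a copy of my full config:

```
# ~/.config/ghostty/config: window, cursor, and contrast settings from above.
window-decoration = false
# Workaround for the border bug mentioned earlier:
gtk-titlebar = false
gtk-adwaita = false

cursor-invert-fg-bg = true
minimum-contrast = 1.05
mouse-hide-while-typing = true
```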
Consistent font sizing between desktop and laptop

With alacritty I have an annoying problem where I need to use very different font sizes on my laptop and my desktop (8 and 12). This wasn't always the case and I think something may have changed in alacritty, but I'm not sure. Ghostty doesn't have this problem and I can now use the same font settings across my machines (font-size = 16).

Ligature support

The issue for adding ligatures to alacritty was closed eight years ago and even though I wanted to try ligatures I couldn't be bothered to "run a low quality fork". Ghostty seems like the opposite of "low quality" and it renders Iosevka's ligatures very well:

My configured ligatures of Iosevka, rendered in Ghostty.

Overall I feel that the font rendering in Ghostty is a little better than in alacritty, although that might be recency bias. I'm still undecided on ligatures but I love that I don't have to feel limited by the terminal. I use a custom Iosevka build with these Ghostty settings:

    font-family = IosevkaTreeLig Nerd Font
    font-style = Medium
    font-style-bold = Bold
    font-style-italic = Medium Italic
    font-style-bold-italic = Bold Italic
    font-size = 16

Colorscheme

While Ghostty has an absolutely excellent theme selector with a bunch of included themes (ghostty +list-themes), melange-nvim wasn't included, so I had to configure the colorscheme myself. It was fairly straightforward even though the palette = 0= syntax was a bit surprising:

    # The dark variant of melange
    background = #292522
    foreground = #ECE1D7

    palette = 0=#867462
    palette = 1=#D47766
    palette = 2=#85B695
    palette = 3=#EBC06D
    palette = 4=#A3A9CE
    palette = 5=#CF9BC2
    palette = 6=#89B3B6
    palette = 7=#ECE1D7
    palette = 8=#34302C
    palette = 9=#BD8183
    palette = 10=#78997A
    palette = 11=#E49B5D
    palette = 12=#7F91B2
    palette = 13=#B380B0
    palette = 14=#7B9695
    palette = 15=#C1A78E

    # I think it's nice to colorize the selection too
    selection-background = #403a36
    selection-foreground = #c1a78e

I'm happy with Ghostty

In the end Ghostty has improved my setup and I'm happy I took the time to try it out. It took a little more time than "just launch it" but it absolutely wasn't a big deal. The reward was a few pleasant improvements that have improved my life a little. And perhaps most important of all: I'm now an alpha Nerd that uses a terminal written in Zig.

Did I create a custom highlighter for the Ghostty configuration file just to have proper syntax highlighting for this one blog post? You bet I did. (It's a simple treesitter grammar.)
More in technology
I moved recently, and so did my home server. You might have noticed it due to the downtime. This time I have built a dedicated shelf for it, which allows for more flexibility and room for additional expensive ideas.

The internet connection is a fiber line, which is fantastic for a place that's generally considered to be in the countryside. At the last place, in Tallinn (the capital of Estonia), I had to hire a guy to pull a fiber line from the basement to the apartment, with my own money, so I'm very happy that I don't have to do that here.

And yes, the ThinkPad T430 is still a solid home server. I had an issue with my battery calibration script resulting in the machine being turned off, but I fixed it by disabling the script, at the cost of the battery probably dying soon. It seems like a tlp and/or Linux kernel issue that has surfaced recently, as it also happened on a different ThinkPad laptop when I last tried it. I can't really remove the battery, because the "power on with AC attach" setting only works when the battery is connected and charged.

The server/wardrobe/closet room is slightly chillier than the rest of the apartment, which means the server runs slightly cooler too. I also have an option to do some crazy ventilation experiments in the winter, but that will have to wait for a bit, mainly because it's spring.

I'm genuinely surprised that the Wi-Fi 5 signal is coming through the closet quite adequately, with the whole apartment being covered with at least 50 Mbit/s speeds, and over 300 Mbit/s when near the closet, which is about the maximum speed that I can achieve from the access point in ideal conditions.
When your machines run smoothly, your business can go far. That's why condition monitoring – once a "nice to have" – is quickly becoming a must in maintenance strategies across industrial settings. But most dedicated systems can be complex to set up or difficult to scale. To make things easier, we're introducing the Arduino Rileva ME Opta […]
They discuss interface design and Raskin's hatred of the mouse.
Air traffic control has been in the news lately, on account of my country's declining ability to do it. Well, that's a long-term trend, resulting from decades of under-investment, severe capture by our increasingly incompetent defense-industrial complex, no small degree of management incompetence in the FAA, and long-lasting effects of Reagan crushing the PATCO strike. But that's just my opinion, you know, maybe airplanes got too woke. In any case, it's an interesting time to consider how weird parts of air traffic control are. The technical, administrative, and social aspects of ATC all seem two notches more complicated than you would expect. ATC is heavily influenced by its peculiar and often accidental development, a product of necessity that perpetually trails behind the need, and a beneficiary of hand-me-down military practices and technology.

Aviation Radio

In the early days of aviation, there was little need for ATC---there just weren't many planes, and technology didn't allow ground-based controllers to do much of value. There was some use of flags and signal lights to clear aircraft to land, but for the most part ATC had to wait for the development of aviation radio. The impetus for that work came mostly from the First World War. Here we have to note that the history of aviation is very closely intertwined with the history of warfare. Aviation technology has always rapidly advanced during major conflicts, and as we will see, ATC is no exception.

By 1913, the US Army Signal Corps was experimenting with the use of radio to communicate with aircraft. This was pretty early in radio technology, and the aircraft radios were huge and awkward to operate, but it was also early in aviation and "huge and awkward to operate" could be similarly applied to the aircraft of the day. Even so, radio had obvious potential in aviation. The first military application for aircraft was reconnaissance. Pilots could fly past the front to find artillery positions and otherwise provide useful information, and then return with maps. Well, even better than returning with a map was providing the information in real-time, and by the end of the war medium-frequency AM radios were well developed for aircraft.

Radios in aircraft led naturally to another wartime innovation: ground control. Military personnel on the ground used radio to coordinate the schedules and routes of reconnaissance planes, and later to inform on the positions of fighters and other enemy assets. Without any real way to know where the planes were, this was all pretty primitive, but it set the basic pattern that people on the ground could keep track of aircraft and provide useful information.

Post-war, civil aviation rapidly advanced. The early 1920s saw numerous commercial airlines adopting radio, mostly for business purposes like schedule coordination. Once you were in contact with someone on the ground, though, it was only logical to ask about weather and conditions. Many of our modern practices like weather briefings, flight plans, and route clearances originated as more or less formal practices within individual airlines.

Air Mail

The government was not left out of the action. The Post Office operated what may have been the largest commercial aviation operation in the world during the early 1920s, in the form of Air Mail. The Post Office itself did not have any aircraft; all of the flying was contracted out---initially to the Army Air Service, and later to a long list of regional airlines.
Air Mail was considered a high priority by the Post Office and proved very popular with the public. When the transcontinental route began proper operation in 1920, it became possible to get a letter from New York City to San Francisco in just 33 hours by transferring it between airplanes in a nearly non-stop relay race.

The Post Office's largesse in contracting the service to private operators provided not only the funding but the very motivation for much of our modern aviation industry. Air travel was not very popular at the time, being loud and uncomfortable, but the mail didn't complain. The many contract mail carriers of the 1920s grew and consolidated into what are now some of the United States' largest companies. For around a decade, the Post Office almost singlehandedly bankrolled civil aviation, and passengers were a side hustle [1].

The Air Mail ambition was not only of economic benefit. Air mail routes were often longer and more challenging than commercial passenger routes. Transcontinental service required regular flights through sparsely populated parts of the interior, challenging the navigation technology of the time and making rescue of downed pilots a major concern. Notably, air mail operators did far more nighttime flying than any other commercial aviation in the 1920s.

The Post Office became the government's de facto technical leader in civil aviation. Besides the network of beacons and markers built to guide air mail between cities, the Post Office built 17 Air Mail Radio Stations along the transcontinental route. The Air Mail Radio Stations were the company radio system for the entire air mail enterprise, and the closest thing to a nationwide, public air traffic control service to then exist. They did not, however, provide what we would now call control. Their role was mainly to provide pilots with information (including, critically, weather reports) and to keep loose tabs on air mail flights so that a disappearance would be noticed in time to send search and rescue.

In 1926, the Air Commerce Act created the Aeronautics Branch of the Department of Commerce. The Aeronautics Branch assumed a number of responsibilities, but one of them was the maintenance of the Air Mail routes. Similarly, the Air Mail Radio Stations became Aeronautics Branch facilities, and took on the new name of Flight Service Stations. No longer just for the contract mail carriers, the Flight Service Stations made up a nationwide network of government-provided services to aviators. They were the first edifices in what we now call the National Airspace System (NAS): a complex combination of physical facilities, technologies, and operating practices that enable safe aviation.

In 1935, the first en-route air traffic control center opened, a facility in Newark owned by a group of airlines. The Aeronautics Branch, since renamed the Bureau of Air Commerce, supported the airlines in developing this new concept of en-route control that used radio communications and paperwork to track which aircraft were in which airways. The rising number of commercial aircraft made mid-air collisions a bigger problem, so the Newark control center was quickly followed by more facilities built on the same pattern. In 1936, the Bureau of Air Commerce took ownership of these centers, and ATC became a government function alongside the advisory and safety services provided by the flight service stations.
En route center controllers worked off of position reports from pilots via radio, but needed a way to visualize and track aircraft's positions and their intended flight paths. Several techniques helped: first, airlines shared their flight planning paperwork with the control centers, establishing "flight plans" that corresponded to each aircraft in the sky. Controllers adopted a work aid called a "flight strip," a small piece of paper with the key information about an aircraft's identity and flight plan that could easily be handed between stations. By arranging the flight strips on display boards full of slots, controllers could visualize the ordering of aircraft in terms of altitude and airway. Second, each center was equipped with a large plotting table map where controllers pushed markers around to correspond to the position reports from aircraft. A small flag on each marker gave the flight number, so it could easily be correlated to a flight strip on one of the boards mounted around the plotting table. This basic concept of air traffic control, of a flight strip and a position marker, is still in use today.

Radar

The Second World War changed aviation more than any other event of history. Among the many advancements were two British inventions of particular significance: first, the jet engine, which would make modern passenger airliners practical. Second, the radar, and more specifically the magnetron. This was a development of such significance that the British government treated it as a secret akin to nuclear weapons; indeed, the UK effectively traded radar technology to the US in exchange for participation in US nuclear weapons research.

Radar created radical new possibilities for air defense, and complemented previous air defense development in Britain. During WWI, the organization tasked with defending London from aerial attack had developed a method called "ground-controlled interception" or GCI. Under GCI, ground-based observers identify possible targets and then direct attack aircraft towards them via radio. The advent of radar made GCI tremendously more powerful, allowing a relatively small number of radar-assisted air defense centers to monitor for inbound attack and then direct defenders with real-time vectors.

In the first implementation, radar stations reported contacts via telephone to "filter centers" that correlated tracks from separate radars to create a unified view of the airspace---drawn in grease pencil on a preprinted map. Filter center staff took radar and visual reports and updated the map by moving the marks. This consolidated information was then provided to air defense bases, once again by telephone.

Later technical developments in the UK made the process more automated. The invention of the "plan position indicator" or PPI, the type of radar scope we are all familiar with today, made the radar far easier to operate and interpret. Radar sets that automatically swept over 360 degrees allowed each radar station to see all activity in its area, rather than just aircraft passing through a defensive line. These new capabilities eliminated the need for much of the manual work: radar stations could see attacking aircraft and defending aircraft on one PPI, and communicated directly with defenders by radio. It became routine for a radar operator to give a pilot navigation vectors by radio, based on real-time observation of the pilot's position and heading. A controller took strategic command of the airspace, effectively steering the aircraft from a top-down view.
The ease and efficiency of this workflow was a significant factor in the end of the Battle of Britain, and its remarkable efficacy was noticed in the US as well. At the same time, changes were afoot in the US. WWII was tremendously disruptive to civil aviation; while aviation technology rapidly advanced due to wartime needs, those same pressing demands led to a slowdown in nonmilitary activity. A heavy volume of military logistics flights and flight training, as well as growing concerns about defending the US from an invasion, meant that ATC was still a priority. A reorganization of the Bureau of Air Commerce replaced it with the Civil Aeronautics Authority, or CAA. The CAA's role greatly expanded as it assumed responsibility for airport control towers and commissioned new en route centers.

As WWII came to a close, CAA en route control centers began to adopt GCI techniques. By 1955, the name Air Route Traffic Control Center (ARTCC) had been adopted for en route centers and the first air surveillance radars were installed. In a radar-equipped ARTCC, the map where controllers pushed markers around was replaced with a large tabletop PPI built to a Navy design. The controllers still pushed markers around to track the identities of aircraft, but they moved them based on their corresponding radar "blips" instead of radio position reports.

Air Defense

After WWII, post-war prosperity and wartime technology like the jet engine led to huge growth in commercial aviation. During the 1950s, radar was adopted by more and more ATC facilities (both "terminal" at airports and "en route" at ARTCCs), but there were few major changes in ATC procedure. With more and more planes in the air, tracking flight plans and their corresponding positions became labor intensive and error-prone. A particular problem was the increasing range and speed of aircraft, and correspondingly longer passenger flights, which meant that many aircraft passed from the territory of one ARTCC into another. This required that controllers "hand off" the aircraft, informing the "next" ARTCC of the flight plan and position at which the aircraft would enter their airspace.

In 1956, 128 people died in a mid-air collision of two commercial airliners over the Grand Canyon. In 1958, 49 people died when a military fighter struck a commercial airliner over Nevada. These were not the only such incidents in the mid-1950s, and public trust in aviation started to decline. Something had to be done.

First, in 1958 the CAA gave way to the Federal Aviation Administration. This was more than just a name change: the FAA's authority was greatly increased compared to the CAA, most notably by granting it authority over military aviation. This is a difficult topic to explain succinctly, so I will only give broad strokes. Prior to 1958, military aviation was completely distinct from civil aviation, with no coordination and often no communication at all between the two. This was, of course, a factor in the 1958 collision. Further, the 1956 collision, while it did not involve the military, did result in part from communications issues between separate distinct CAA facilities and the airline's own control facilities. After 1958, ATC was completely unified into one organization, the FAA, which assumed the work of the military controllers of the time and some of the role of the airlines.
The military continues to have its own air controllers to this day, and military aircraft continue to include privileges such as (practical but not legal) exemption from transponder requirements, but military flights over the US are still beholden to the same ATC as civil flights. Some exceptions apply, void where prohibited, etc.

The FAA's suddenly increased scope only made the practical challenges of ATC more difficult, and commercial aviation numbers continued to rise. As soon as the FAA was formed, it was understood that there needed to be major investments in improving the National Airspace System. While the first couple of years were dominated by the transition, the FAA's second director (Najeeb Halaby) prepared two lengthy reports examining the situation and recommending improvements. One of these, the Beacon report (also called Project Beacon), specifically addressed ATC. The Beacon report's recommendations included massive expansion of radar-based control (called "positive control" because of the controller's access to real-time feedback on aircraft movements) and new control procedures for airways and airports. Even better, for our purposes, it recommended the adoption of general-purpose computers and software to automate ATC functions.

Meanwhile, the Cold War was heating up. US air defense, a minor concern in the few short years after WWII, became a higher priority than ever before. The Soviet Union had long-range aircraft capable of reaching the United States, and nuclear weapons meant that only a few such aircraft had to make it through to cause massive destruction. The vast size of the United States (and, considering the new unified air defense command between the United States and Canada, all of North America) made this a formidable challenge.

During the 1950s, the newly minted Air Force worked closely with MIT's Lincoln Laboratory (an important center of radar research) and IBM to design a computerized, integrated, networked system for GCI. When the Air Force committed to purchasing the system, it was christened the Semi-Automated Ground Environment, or SAGE. SAGE is a critical juncture in the history of the computer and computer communications, the first system to demonstrate many parts of modern computer technology and, moreover, perhaps the first large-scale computer system of any kind. SAGE is an expansive topic that I will not take on here; I'm sure it will be the focus of a future article, but it's a pretty well-known and well-covered topic. I have not so far felt like I had much new to contribute, despite it being the first item on my "list of topics" for the last five years.

But one of the things I want to tell you about SAGE, which is perhaps not so well known, is that SAGE was not used for ATC. SAGE was a purely military system. It was commissioned by the Air Force, and its numerous operating facilities (called "direction centers") were located on Air Force bases along with the interceptor forces they would direct. However, there was obvious overlap between the functionality of SAGE and the needs of ATC. SAGE direction centers continuously received tracks from remote data sites using modems over leased telephone lines, and automatically correlated multiple radar tracks to a single aircraft. Once an operator entered information about an aircraft, SAGE stored that information for retrieval by other radar operators.
When an aircraft with associated data passed from the territory of one direction center to another, the aircraft's position and related information were automatically transmitted to the next direction center by modem. One of the key demands of air defense is the identification of aircraft---any unknown track might be routine commercial activity, or it could be an inbound attack. The air defense command received flight plan data on commercial flights (and more broadly all flights entering North America) from the FAA and entered them into SAGE, allowing radar operators to retrieve "flight strip" data on any aircraft on their scope. Recognizing this interconnection with ATC, as soon as SAGE direction centers were being installed the Air Force started work on an upgrade called SAGE Air Traffic Integration, or SATIN. SATIN would extend SAGE to serve the ATC use-case as well, providing SAGE consoles directly in ARTCCs and enhancing SAGE to perform non-military safety functions like conflict warning and forward projection of flight plans for scheduling. Flight strips would be replaced by teletype output, and in general made less necessary by the computer's ability to filter the radar scope. Experimental trial installations were made, and the FAA participated readily in the research efforts. Enhancement of SAGE to meet ATC requirements seemed likely to meet the Beacon report's recommendations and radically improve ARTCC operations, sooner and cheaper than development of an FAA-specific system. As it happened, well, it didn't happen. SATIN became interconnected with another planned SAGE upgrade to the Super Combat Centers (SCC), deep underground combat command centers with greatly enhanced SAGE computer equipment. SATIN and SCC planners were so confident that the last three Air Defense Sectors scheduled for SAGE installation, including my own Albuquerque, were delayed under the assumption that the improved SATIN/SCC equipment should be installed instead of the soon-obsolete original system. SCC cost estimates ballooned, and the program's ambitions were reduced month by month until it was canceled entirely in 1960. Albuquerque never got a SAGE installation, and the Albuquerque air defense sector was eliminated by reorganization later in 1960 anyway. Flight Service Stations Remember those Flight Service Stations, the ones that were originally built by the Post Office? One of the oddities of ATC is that they never went away. FSS were transferred to the CAB, to the CAA, and then to the FAA. During the 1930s and 1940s many more were built, expanding coverage across much of the country. Throughout the development of ATC, the FSS remained responsible for non-control functions like weather briefing and flight plan management. Because aircraft operating under instrument flight rules must closely comply with ATC, the involvement of FSS in IFR flights is very limited, and FSS mostly serve VFR traffic. As ATC became common, the FSS gained a new and somewhat odd role: playing go-between for ATC. FSS were more numerous and often located in sparser areas between cities (while ATC facilities tended to be in cities), so especially in the mid-century, pilots were more likely to be able to reach an FSS than ATC. It was, for a time, routine for FSS to relay instructions between pilots and controllers. This is still done today, although improved communications have made the need much less common. 
As weather dissemination improved (another topic for a future post), FSS gained access to extensive weather conditions and forecasting information from the Weather Service. This connectivity is bidirectional; during the midcentury FSS not only received weather forecasts by teletype but transmitted pilot reports of weather conditions back to the Weather Service. Today these communications have, of course, been computerized, although the legacy teletype format doggedly persists.

There has always been an odd schism between the FSS and ATC: they are operated by different departments, out of different facilities, with different functions and operating practices. In 2005, the FAA cut costs by privatizing the FSS function entirely. Flight service is now operated by Leidos, one of the largest government contractors. All FSS operations have been centralized to one facility that communicates via remote radio sites. While flight service is still available, increasing automation has made the stations far less important, and the general perception is that flight service is in its last years. Last I looked, Leidos was not hiring for flight service and the expectation was that they would never hire again, retiring the service along with its staff.

Flight service does maintain one of my favorite internet phenomena, the phone number domain name: 1800wxbrief.com. One of the odd manifestations of the FSS/ATC schism and the FAA's very partial privatization is that Leidos maintains an online aviation weather portal that is separate from, and competes with, the Weather Service's aviationweather.gov. Since Flight Service traditionally has the responsibility for weather briefings, it is honestly unclear to what extent Leidos vs. the National Weather Service should be investing in aviation weather information services. For its part, the FAA seems to consider aviationweather.gov the official source, while it pays for 1800wxbrief.com. There's also weathercams.faa.gov, which duplicates a very large portion (maybe all?) of the weather information on Leidos's portal and some of the NWS's. It's just one of those things. Or three of those things, rather.

Speaking of duplication due to poor planning...

The National Airspace System

Left in the lurch by the Air Force, the FAA launched its own program for ATC automation. While the Air Force was deploying SAGE, the FAA had mostly been waiting, and various ARTCCs had adopted a hodgepodge of methods ranging from one-off computer systems to completely paper-based tracking. By 1960 radar was ubiquitous, but different radar systems were used at different facilities, and correlation between radar contacts and flight plans was completely manual. The FAA needed something better, and with growing congressional support for ATC modernization, they had the money to fund what they called National Airspace System En Route Stage A.

Further bolstering historical confusion between SAGE and ATC, the FAA decided on a practical, if ironic, solution: buy their own SAGE.

In an upcoming article, we'll learn about the FAA's first fully integrated computerized air traffic control system. While the failed detour through SATIN delayed the development of this system, the nearly decade-long delay between the design of SAGE and the FAA's contract allowed significant technical improvements.
This "New SAGE," while directly based on SAGE at a functional level, used later off-the-shelf computer equipment including the IBM System/360, giving it far more resemblance to our modern world of computing than SAGE with its enormous, bespoke AN/FSQ-7. And we're still dealing with the consequences today! [1] It also laid the groundwork for the consolidation of the industry, with a 1930 decision that took air mail contracts away from most of the smaller companies and awarded them instead to the precursors of United, TWA, and American Airlines.
Exploring a peculiar bit-twiddling hack at the intersection of 1980s geek sensibilities.