I recently bought an Intuos Manga drawing tablet, because I got this idée fixe that I want to learn how to draw. And what better way to do it than with a drawing tablet, while satisfying my need for new things? With little experience I boldly set forth and found a lovely and free drawing program, Krita. I’ve used Photoshop earlier but it’s just so expensive, and Krita seems like a good replacement. Inkscape is another good alternative for vector graphics.

To install Krita you need KDE, which should come preinstalled with Slackware. If you’re like me and decided to skip it, you can install it with slackpkg install kde. I had some trouble installing Krita, but ultimately this guide with cats worked, once I also changed krita/plugins/formats/tiff/kis_tiff_converter.cc from

```
#if TIFFLIB_VERSION < 20111221
typedef size_t tmsize_t;
#endif
```

to

```
typedef size_t tmsize_t;
```

which I filed as a bug report over at KDE. More troubling was the fact that drawing on the canvas was offset...
over a year ago 11 votes


More from Jonas Hietala

Let's build a VORON 0

About 1.5 years ago I ventured into 3D printing by building a VORON Trident. It was a very fun project and I’ve even used the printer quite a bit. Naturally, I had to build another one, and this time I opted for the cute VORON 0.

Why another printer? I really like my VORON Trident and it’ll continue to be my main printer for the foreseeable future, but a second printer would do two important things for me:

Act as a backup printer if my Trident breaks. A printer made partially of printed parts is great, as you can easily repair it… But only if you have a working printer to print the parts. It would also be very annoying if I disassemble the printer because I want to mod it and realize I’ve forgotten to print a part I needed.

Building printers is really fun. Building the VORON Trident is one of the most fun and rewarding projects I’ve done.

Why a VORON 0? These properties make the VORON 0 an ideal secondary printer for me:

You need to assemble the VORON 0 yourself (a feature, not a bug).
It prints ABS/ASA well (for printer parts).
It’s very moddable and truly open source.
It’s tiny.

The VORON 0 to the left and the VORON Trident 250 to the right. It’s really small, which is perfect for me as I have a limited amount of space. It would be very fun to build a VORON 2.4 (or even a VORON Phoenix) but I really don’t have space for more printers.

Getting the parts: I opted to buy a kit instead of self-sourcing the parts, as it’s usually cheaper and requires a lot less work, even if you replace some parts. This is what I ended up getting:

A VORON 0 kit from Lecktor.
Parts for a Dragon Burner toolhead.
Parts for a Nevermore V4 active carbon filter.
Later on, I replaced the SKR Mini E3 V2 that came with the kit with the V3.

Lots of delays: I ordered a VORON 0 from Lecktor in February 2024 and it took roughly 4 months before I got the first shipment of parts; it wasn’t until the end of 2024 that I had received all the parts needed to complete the build. The wait was annoying… While I can’t complain about the quality of the parts, with the massive delays I regret ordering from Lecktor, and in hindsight I should’ve ordered an LDO kit from 3DJake, like I was first considering.

Printing parts myself: So what do you do when you can’t start the build? You print parts! A box of some of the printed parts for the build (and many I later threw away). There’s something very satisfying about printing parts you then build a printer with. This time I wanted to make a colorful printer and I came up with this mix of filament:

PolyLite ASA Yellow
Formfutura EasyFil ABS Light Green
Formfutura EasyFil ABS Light Blue
Formfutura EasyFil ABS Magenta

I think they made the printer look great.

The build: I won’t do as detailed a build log as I did when building the VORON Trident, but I tried to take some pictures. Scroll on!

Frames and bed: The linear Y-rails. The kit comes with the Kirigami bed mod. The frame with A/B motors. Building the bottom of the printer with feet, power supply, and display.

MGN9 instead of MGN7 X-axis: After I assembled the X-axis I noticed a problem: the carriage collides with the stock A drive. The reason is that the kit comes with MGN9 rails for the X-axis instead of the standard MGN7 rails. This required me to reprint modified A/B drives, the X-carriage, and alignment tools. The carriage passes the modded B drive.

Belts: Starting to install the belt. The belt is tight.
Dragon Burner toolhead: I got the parts needed to build the standard mini stealthburner… But I’m attracted to playing around with new stuff and I decided to try out the Dragon Burner instead. I went with it because it’s quite popular, it has good cooling (I print a bunch of PLA), and I hadn’t tried it out yet. The fans are inserted; I don’t care about LEDs so I inserted an opaque magenta part instead. I think it looks really good. The back of the Dragon Burner. I opted for the Rapido 2 instead of the Dragon that came with the kit because the Dragon has problems printing PLA. I was a bit confused about how to route the wires, as there was very little space when mounting the toolhead on the carriage. Routing the wires close to the fans, clipping off the ears of the fans, and holding it together with cable ties in this way worked for me.

Galileo 2 standalone: Dragon Burner together with the Galileo 2 extruder mounted on the printer. For the extruder I opted for the standalone version of Galileo 2. I’ve used Galileo 2 on the Trident but I hated the push-down latch it uses in the Stealthburner configuration. The latch eventually broke by pulling out a heat-set insert, so I went back to the Clockwork 2 on the Trident, giving me the parts to rebuild the Galileo for the VORON 0 in a standalone configuration. The parts for Galileo 2 (there will be left-overs from the Stealthburner variant). The build was really fast and simple—compared to the Stealthburner variant it’s night and day. I didn’t even think to take a break for pictures.

Nevermore filter: Since I want to be able to print ABS, I feel I need an activated carbon filter. I wanted an exhaust fan with a HEPA filter as well, but I’ll leave that to a mod in the future. The Nevermore V4 is an activated carbon filter that fits well in the VORON 0. I fastened the fan using a strip of VHB—it was a struggle to position it in the middle. The Nevermore is mounted standing in the side of the printer. Just remember to preload the extrusion with extra M3 nuts when you assemble the printer. (I’ve heard LDO has nuts you can insert after… Sounds great.)

Panels: With the panel and spool holder at the back. Please ignore the filament path in this picture; it’ll interfere with the rear belt when routed behind the umbilical cable. With the tophat and door installed. I’m slightly annoyed with the small gaps and holes the printer has (mainly between the tophat and the panels at the bottom half). I later changed some of the parts related to the tophat to match the colorscheme better.

Wiring: Wiring was simpler than for the Trident, but it was harder to make the wiring pretty. Thank god I could cover it up. The underside of the printer with the power, 5V converter, display, and Z-motor. Back of the printer with the Raspberry Pi and MCU.

Raspberry Pi: The Raspberry Pi only has two cables: power and communication over the GPIO pins, and a display via USB. The Pi communicates and gets power over the TFT connection on the MCU.

Toolhead: The kit came with a toolhead board and breakout board for an umbilical setup: the toolhead board and the breakout board. I did run into an issue where the polarity of the fans on the toolhead board did not match the polarity of the fans on the MCU, leading to some frustration where the fans refused to spin. I ended up swapping the polarity using the cables from the breakout board to the MCU.

Chamber thermistor: The MCU only has two thermistor ports, and they’re used for the hotend and bed thermistors.
For the chamber thermistor (which is integrated into the breakout board) I use the MOSI pin on the SPI1 8-pin header: the chamber thermistor is connected to MOSI and ground on the SPI1 header.

SKR mini E3 v3: I got an SKR mini E3 v2 with the kit but I replaced it with the v3 for two reasons:

A FAN output, used for the Nevermore filter.
A filament runout sensor.

There’s not much to say about the extra FAN output, but the filament runout port on the MCU has 3 pins, while the VORON 0.2 style runout sensor only uses 2. I reused the prepared y-endstop I got with the kit and scratched away some of the plastic to make the 2-pin connector fit the 3 pins on the MCU (the +5V pin isn’t needed): the filament runout sensor is connected to E0-stop.

Klipper setup: I followed the VORON documentation and chose Mainsail, as I’ve been happy with it on my Trident. I’m not going to describe everything, only call out some issues I had or extra steps I had to take.

MCU firmware: The VORON documentation assumes USB communication, so the default firmware instructions didn’t work for me. According to BigTreeTech’s documentation, if you communicate over USART2 (the TFT port) then you need to compile the firmware with Communication interface set to Serial (on USART2 PA3/PA2). You then need to use this Klipper configuration:

```
[mcu]
serial: /dev/ttyAMA0
restart_method: command
```

It took a long time for me to figure this out, as I had a display connected via USB, so I thought the display was the MCU and got stuck at a “Your Klipper version is: xxx MCU(s) which should be updated: xxx” error.

Filament runout:

```
[filament_switch_sensor Filament_Runout_Sensor]
pause_on_runout: True
runout_gcode: PAUSE
switch_pin: PC15
```

Chamber thermistor: According to this comment, this is the config to use the SPI header for a thermistor:

```
[temperature_sensor chamber_temp]
sensor_type: Generic 3950
sensor_pin: PA7
pullup_resistor: 10000
```

Works for me™.

Display: It’s easy to flash the display directly from the Raspberry Pi, although the first firmware I built was too large. There are optional features you can remove, but I removed too many, so the configuration for the buttons wasn’t accepted. These were the features that ended up working for me:

```
[*] Support GPIO "bit-banging" devices
[*] Support LCD devices
[ ] Support thermocouple MAX sensors
[ ] Support adxl accelerometers
[ ] Support lis2dw and lis3dh 3-axis accelerometers
[ ] Support MPU accelerometers
[*] Support HX711 and HX717 ADC chips
[ ] Support ADS 1220 ADC chip
[ ] Support ldc1612 eddy current sensor
[ ] Support angle sensors
[*] Support software based I2C "bit-banging"
[*] Support software based SPI "bit-banging"
```

Sensorless homing: I was nervous setting up sensorless homing, fearing that without a physical switch the printer might decide to burn the motor against the edge or something. (I really have no idea how it works, hence my fear.) In the end it was straightforward. The VORON 0 example firmware was already configured for sensorless homing, and the only things I had to do were:

Enable the X-DIAG and Y-DIAG pins on the board.
Tweak the driver_SGTHRS values (I landed on 85, down from 255).

And now I have sensorless homing working consistently, as sketched below. What confused me was that the sensorless homing guide and the homing macros it links to were slightly different from the VORON 0 example firmware, and it wasn’t clear if I had to make all the changes or not. (I did not.)
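For reference, this is roughly what the relevant Klipper sections look like. A minimal sketch, not my exact config; the diag_pin name is an assumption (PC0 as the X endstop pin is board-specific), so check your board’s pinout:

```
# Sketch: sensorless homing via the TMC2209 DIAG output.
# Pin names are assumptions; check your board's pinout.
[tmc2209 stepper_x]
diag_pin: ^PC0             # X-DIAG routed to the X endstop pin
driver_SGTHRS: 85          # 0-255; higher is more sensitive (I landed on 85)

[stepper_x]
endstop_pin: tmc2209_stepper_x:virtual_endstop
homing_retract_dist: 0     # required for sensorless homing
```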
Some random issues I encountered: In typical 3D printer fashion, you’ll always run into various issues, for example:

I got the “mcu shutdown: Timer too close” error a few times. I don’t know what I did, but it only happened a couple of times at the beginning.

The filament sensor had some consistency issues. Some extra tape on the bearing seemed to fix it.

The filament keeps getting stuck in the extruder after unload. I’m still having issues, but forgetting to tighten the nozzle and using a too-short PTFE tube didn’t help.

I had trouble getting the filament to stick to the bed. Super frustrating, to be honest. I re-calibrated the z offset and thumb screws a bunch of times and (right now) it seems to work fairly well. Even though you’re not supposed to need automatic bed leveling for a printer this small, I can’t help but miss the “just works” feeling I have with the Trident.

Initial thoughts on the printer: A model I printed for one of my kids. It came out really well. I haven’t printed that much with the printer yet, but I have some positive things to say about it: the Dragon Burner is great when printing PLA (which I use a lot). But I have some negative things to say too: the fans are horribly loud, and the print movement is also too loud for my taste. It’s also poorly insulated; for example, there are gaps between the tophat and the rest of the printer that I don’t see a good way to cover up.

Overall though, I’m very happy with it. I wouldn’t recommend it as a first printer, or to someone who just wants a tool that works out of the box, but for people like me who wanted to build a backup/secondary printer, I think it’s great.

What’s next? With a secondary printer finally up and running, I can now start working on some significant mods for my Trident! This is the tentative plan right now:

Inverted electronics mod.
Replace Stealthburner with another toolhead, most likely A4T-toolhead.
Build a BoxTurtle for multi-color support.

But we’ll see when I manage to get to it. I’m not in a rush, and I should take a little break and play with my VORON 0, and perhaps work on my other dozen or so projects that lie dormant.

3 weeks ago 15 votes
I'll give up Neovim when you pry it from my cold, dead hands

I recently came upon a horror story where a developer was forced to switch editors from Neovim to Cursor, and I felt I had to write a little to cleanse myself of the disgust I felt.

Two different ways of approaching an editor: I think there are two opposing ways of thinking about the tool that is an editor:

Refuse to personalize anything and only use the basic features. “An editor is a simple tool I use to get the job done.”

Get stuck in configuration hell and spend tons of time tweaking minor things. “An editor is a highly personalized tool that works the way I want.”

These are the extreme ends of the spectrum to make a point, and most developers will fall somewhere in between. It’s not a static proposition; I’ve had periods in my life where I’ve used the same Vim configuration for years, and other times I’ve spent more time rewriting my Neovim config than doing useful things. I don’t differentiate between text editors and IDEs, as I don’t find the distinction very meaningful. They’re all just editors.

Freedom of choice is important:

“Freedom of choice is more to be treasured than any possession earth can give.” David O. McKay

Some developers want zero configuration while others want to configure their editor so it’s just right. Either way is fine, and I’ve met excellent developers from both sides. But removing the power of choice is a horrible idea, as you’re forcing developers to work in a way they’re not comfortable with, not productive with, or simply don’t like. You’re bound to make some of the developers miserable or see them leave (usually the best ones, who can easily find another job).

To explain how important an editor might be to some people, I give you this story about Stephen Hendry—one of the most successful snooker players ever—and how important his cue was to him:

“In all the years I’ve been playing I’ve never considered changing my cue. It was the first cue I ever bought, aged 13, picked from a cabinet in a Dunfermline snooker centre just because I liked the Rex Williams signature on it. I saved £40 to buy it. It’s a cheap bit of wood and it’s been the butt of other players’ jokes for ages. Alex Higgins said it was ‘only good for holdin’ up f*g tomatoes!’ But I insist on sticking with it. And I’ve won a lot of silverware, including seven World Championship trophies, with it. It’s a one-piece which I carry in a wooden, leather-bound case that’s much more expensive than the cue it houses. But in 2003, at Glasgow airport after a flight from Bangkok, it emerges through the rubber flaps on the carousel and even at twenty yards I can see that both case and cue are broken. Snapped almost clean in two, the whole thing now resembling some form of shepherd’s crook. The cue comes to where I’m standing, and I pick it up, the broken end dangling down forlornly. I could weep. Instead, I laugh. ‘Well,’ I say to my stunned-looking friend John, ‘that’s my career over.’” Stephen Hendry, The Mirror

Small improvements lead to large long-term gains:

“Kaizen isn’t about massive overhauls or overnight success. Instead, it focuses on small, continuous improvements that add up to significant long-term gains.” What is Kaizen? A Guide to Continuous Improvement

I firmly believe that even small improvements are worth it, as they add up over time (also see compound interest and how it relates to financial investments). An editor is a great example where even small improvements may have a big effect, for the simple reason that you spend so much time in your editor.
I’ve spent hours almost every day inside (neo)vim since I started using it 15+ years ago. Even simple things like quickly changing text inside brackets (ci[) instead of selecting text with your mouse might save hundreds of hours during a programming career—and that’s just one example.

Naturally, as a developer you can find small but worthwhile improvements in other areas too, for instance:

Learning the programming languages and libraries you use a little better.

Customizing your keyboard and keyboard layout. This is more for comfort and health than speed, but that makes it even more important, not less.

Increasing your typing speed. Some people dismiss typing speed, saying they’re limited by their thinking, not their typing. But the benefit of typing faster (and more fluidly) isn’t really the overall time spent typing vs thinking; it’s so you can continue thinking with as little interruption as possible. On some level you want to reduce the time spent typing in this chain: think… edit, think… edit, think… It’s also why the Vim way of editing is so good—it’s based on making small edits and returning quickly to normal (thinking) mode.

Some people ask how you can afford to spend time practicing Vim commands or configuring your editor, as it takes away time from work. But I ask you: with a programming career of several decades and tens of thousands of hours to spend in front of your computer, how can you afford not to?

Neovim is versatile: Over the years I’ve done different things: switched keyboard and keyboard layout multiple times, blogged, and written a book. The one constant through all of this has been Neovim. Neovim may not have the best language-specific integrations, but it does everything well, and the benefit of having the same setup for everything you do is not to be underestimated. It pairs nicely with the idea of adding up small improvements over time; every small improvement that I add to my Neovim workflow will stay with me no matter what I work with.

I did use Emacs at work for years, because their proprietary language only had an Emacs integration and I didn’t have the time nor energy to create one for Neovim. While Evil made the experience survivable, I realized then that I absolutely hate having my work setup be different from my setup at home. (People weren’t overjoyed with being unable to choose their own editor, and I’ve heard rumors that there’s now an extension for Visual Studio.)

Neovim is easily extensible:

“Neovim: a Personalized Development Environment” TJ DeVries

A different take on editing code: I’ve always felt that Vimscript is the worst part of Vim. Maybe that’s a weird statement, as the scriptability of Vim is one of its strengths; and to be fair, simple things are very nice:

```
nnoremap j gj
set expandtab
```

But writing complex things in Vimscript is simply not a great experience. One of the major benefits of Neovim is the addition of Lua as a first-class scripting language. Yes, Lua isn’t perfect and it’s often too verbose, but it’s so much better than Vimscript. Lua is the main reason that the Neovim plugin ecosystem is currently a lot more vibrant than Vim’s. Making it easier to write plugins is of course a boon, but the real benefit is in how it makes it even easier to make more complex customizations for yourself. Just plop down some Lua in the configuration files you already have and you’re done. (Emacs worked this out to an even greater extent decades ago.)
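To make that concrete, here is a small sketch of what the same kind of configuration looks like in Lua. The two mappings mirror the Vimscript above; the autocmd is a toy example of my own, not something from the post:

```lua
-- The two Vimscript one-liners above, in Lua:
vim.keymap.set("n", "j", "gj")
vim.opt.expandtab = true

-- Slightly more complex customization stays readable.
-- A hypothetical example: enable spell checking for markdown only.
vim.api.nvim_create_autocmd("FileType", {
  pattern = "markdown",
  callback = function()
    vim.opt_local.spell = true
  end,
})
```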
One way I use this customizability is with helpers for when I’m blogging. Maybe you don’t need to create something that big, but even small things, such as disabling autoformat for certain file types in specific folders, can be incredibly useful. Approachability should not be underestimated. While plugins in Lua are understandably the focus today, Neovim can still use plugins written in Vimscript, and 99% of your old Vim configuration will still work in Neovim.

Neovim won’t go anywhere:

“The old is expected to stay longer than the young in proportion to their age.” Nassim Nicholas Taleb, “Antifragile”

The last big benefit of Neovim I’ll highlight—and why I feel fine investing even more time into Neovim—is that Neovim will most likely continue to exist and thrive for years, if not decades, to come. While Vim has—after an impressive 30 years of development—recently entered maintenance mode, activity in Neovim has steadily increased since the fork from Vim more than a decade ago. The number of high-quality plugins, interest in Google Trends, and GitHub activity have all been trending upwards. Neovim was also the most desired editor according to the latest Stack Overflow developer survey, and the overall buzz and excitement in the community is at an all-time high.

With the self-reinforcing benefits of investing in a versatile and flexible editor with a huge plugin ecosystem such as Neovim, I see no reason for the trend to taper off anytime soon. Neovim will probably never be as popular as something like VSCode, but as an open source project backed by excited developers, Neovim will probably be around long after VSCode has been discontinued for The Next Big Thing.

2 months ago 22 votes
Securing my partner's digital life

I’ve been with Veronica for over a decade now and I think I’m starting to know her fairly well. Yet she still manages to surprise me. For instance, a couple of weeks ago she came and asked me about email security: “I worry that my email password is too weak. Can you help me change email address and make it secure?” It was completely unexpected—but I’m all for it.

The action plan: All heroic journeys need a plan; here’s mine:

Buy her own domain (the .com of her surname was available).
Migrate her email to Fastmail.
Set up Bitwarden as a password manager.
Use a YubiKey to secure the important services.

Why a domain? If you ever want (or need) to change email providers, it’s very nice to have your own domain. For instance, Veronica has a hotmail.com address, but she can’t bring that with her if she moves to Fastmail. Worse, what if she gets locked out of her Outlook account for some reason? It might happen if you forget your password, someone breaks into your account, or even by accident. For example, Apple users recently got locked out of their Apple IDs without any apparent reason, and Gmail has been notorious for locking out users for no reason. Some providers may be better, but this is a systemic problem that can happen at any service.

In almost all cases, your email is your key to the rest of your digital life. The email address is your username, and to reset your password you use your email. If you lose access to your email, you lose everything. When you control your domain, you can point the domain to a new email provider and continue with your life.

Why pay for email? One of the first things Veronica told me when I proposed that she change providers was that she didn’t want to pay. It’s a common sentiment online that email must be cheap (or even free). I don’t think email is the area where cost should be the most significant factor. As I argued in why you should own your email’s domain, your email is your most important digital asset. If email is so important, why try to be cheap about it? You should spend your money on the important things and shouldn’t spend money on the unimportant things. Paying for email gives you a couple of nice things:

Human support. It’s all too easy to get shafted by algorithms, where you might get banned because you triggered some edge case (such as resetting your password outside your usual IP address).
The ability to use your own domain. Having a custom domain is a paid feature at most email providers.
A long-term viable business. How do you run an email company if you don’t charge for it? (You sell out your users or you close your business.)

Why a password manager? The best thing you can do security-wise is to adopt a password manager. Then you don’t have to try to remember dozens of passwords (leading to easy-to-remember and duplicated passwords) and can focus on remembering a single (stronger) password, confident that the password manager will remember all the rest. “Putting all your passwords in one basket” is a concern, of course, but I think the pros outweigh the cons.

Why a YubiKey? To take digital security to the next level, you should use two-factor authentication (2FA). 2FA is an extra “thing”, in addition to your password, that you need to be able to log in. It could be a code sent to your phone over SMS (insecure), to your email (slightly better), a code from a 2FA app on your phone such as Aegis Authenticator (good), or from a hardware token (most secure).
It’s easy to think that I went with a YubiKey because it’s the most secure option, but the biggest reason is that a YubiKey is more convenient than a 2FA app. With a 2FA app you have to whip out your phone, open the 2FA app, locate the correct site, and then copy the TOTP code into the website (quickly, before the code changes). It’s honestly not that convenient, even for someone like me who’s used this setup for years. With a YubiKey, you plug it into a USB port and press it when it flashes. Or on the phone you can use NFC. NFC is slightly more annoying than plugging it in, as you need to move/hold it in a specific spot, yet it’s still preferable to having to jump between apps on the phone. There are hardware keys other than YubiKey, of course. I’ve used YubiKeys for years and have had a good experience. Don’t fix what isn’t broken.

The setup: Here are a few quick notes on how I set up her new accounts.

Password management with Bitwarden: The first thing we did was set up Bitwarden as the password manager for her. I chose the family plan so I can handle the billing. To give her access, I installed Bitwarden on her devices. I gave her a YubiKey and registered it with Bitwarden for additional security. As a backup, I also registered my own YubiKeys on her account; if she loses her key, we still have others she can use. Although it was a bit confusing for her, I think she appreciates not having to remember a dozen different passwords and can simply remember one (stronger) password. We can also share passwords easily via Bitwarden (for newspapers, Spotify, etc). The YubiKey itself is very user friendly, and she hasn’t run into any usability issues.

Email on Fastmail: With the core security up and running, the next step was to change her email:

I gave her an email address on Fastmail with her own domain (<firstname>@<lastname>.com). She has a basic account that I manage (there’s a Duo plan that I couldn’t migrate to at this time).
I secured the account with our YubiKeys and a generated password stored in Bitwarden.
We bolstered the security of her old Hotmail account by generating a new password and registering our YubiKeys.
We forward all email from her old Hotmail address to her new address.

With this done, she has a secure email account with an email address that she owns. As is proper, she’s been changing her contact information and email address in her other services. It’s a slow process, but I can’t be too critical—I still have a few services that use my old Gmail address even though I migrated to my own domain more than a decade ago.

Notes on recovery and redundancy: It’s good to worry about phishing, weak passwords, and getting hacked. But for most people, the much bigger risk is forgetting your password or losing your second factor, and getting locked out that way. To reduce the risk of losing access to her accounts, we have:

YubiKeys for all accounts.
The recovery codes for all accounts, written down and secured.
My own accounts, which can recover her Bitwarden and Fastmail accounts via their built-in recovery functionality.

Perfect is the enemy of good: Some go further than we’ve done here, others do less, and I think that’s fine. It’s important not to compare yourself with others too much; even small security measures make a big difference in practice. Not doing anything at all because you feel overwhelmed is worse than doing something, even something as simple as making sure you’re using a strong password for your email account.

3 months ago 47 votes
First impressions of Ghostty

There are two conflicting forces at play in setting up your computer environment, and it’s common to find people stuck at the extreme ends of the spectrum: some programmers refuse to configure or learn their tools at all, while others get stuck re-configuring their setups constantly without any productivity gains to show for it. Finding a balance can be tricky.

With regards to terminals, I’ve been using alacritty for many years. It gets the job done, but I don’t know if I’m missing out on anything. I’ve been meaning to look at alternatives like wezterm and kitty, but I never got far enough to try them out. On one hand, it’s just a terminal; what difference could it make?

Enter Ghostty, a terminal so hyped up it made me drop any useful things I was working on and see what the fuss was about. I don’t quite get why people hype up a terminal of all things, but here we are. Ghostty didn’t revolutionize my setup or anything, but I admit that Ghostty is quite nice, and it has replaced alacritty as my terminal.

I just want a blank canvas without any decorations: One of the big selling points of Ghostty is its native platform integration. It’s supposed to integrate well with your window manager so it looks the same and gives you some extra functionality… But I don’t know why I should care—I just want a big square without decorations of any kind. You’re supposed to be able to simply turn off any window decorations:

```
window-decoration = false
```

At the moment there’s a bug that requires you to set some weird GTK settings to fully remove the borders:

```
gtk-titlebar = false
gtk-adwaita = false
```

It’s unfortunate, as I haven’t done any GTK configuration on my machine (I use XMonad as my window manager and I don’t have any window decorations anywhere). There might be some useful native features I don’t know about. The password input style is neat, for instance, although I’m not sure it does anything functionally different compared to other terminals.

Cursor invert:

```
cursor-invert-fg-bg = true
```

In alacritty I’ve had the cursor invert the background and foreground, and you can do that in Ghostty too. I ran into an issue where it interferes with indent-blankline.nvim, making the cursor very hard to spot in indents (it takes the color of the indent guides, which is by design low contrast with the background). Annoying, but it gave me the shove I needed to try out different plugins to see if the problem persisted. I ended up with (an even nicer) setup using snacks.nvim that doesn’t hide the cursor. Left: indent-blankline.nvim (cursor barely visible). Right: snacks.nvim (cursor visible, and it highlights scope).

Minimum contrast: Unreadable ls output is a staple of the excellent Linux UX, and it’s super annoying. You can of course configure the ls output colors, but that’s just for one program, and it won’t automatically follow when you ssh to another server. Ghostty’s minimum-contrast option ensures that the text and background always have enough contrast to be visible:

```
minimum-contrast = 1.05
```

Most excellent. This feature has the potential to break “eye candy” features, such as the Neovim indent-line plugins, if you use a low-contrast configuration. I still run into minor issues from time to time.

Hide cursor while typing:

```
mouse-hide-while-typing = true
```

A small quality-of-life feature is the ability to hide the cursor when typing. I didn’t know I needed this in my life.
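Collected in one place, the options discussed so far make for a tiny configuration file. This is just the settings from above; the path is Ghostty’s default config location on Linux, if I’m not mistaken:

```
# ~/.config/ghostty/config
window-decoration = false
gtk-titlebar = false
gtk-adwaita = false
cursor-invert-fg-bg = true
minimum-contrast = 1.05
mouse-hide-while-typing = true
```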
Consistent font sizing between desktop and laptop: With alacritty I had an annoying problem where I needed to use very different font sizes on my laptop and my desktop (8 and 12). This wasn’t always the case, and I think something may have changed in alacritty, but I’m not sure. Ghostty doesn’t have this problem, and I can now use the same font settings across my machines (font-size = 16).

Ligature support: The issue for adding ligatures to alacritty was closed eight years ago, and even though I wanted to try ligatures, I couldn’t be bothered to “run a low quality fork”. Ghostty seems like the opposite of “low quality”, and it renders Iosevka’s ligatures very well: my configured ligatures of Iosevka, rendered in Ghostty. Overall I feel that the font rendering in Ghostty is a little better than in alacritty, although that might be recency bias. I’m still undecided on ligatures, but I love that I don’t have to feel limited by the terminal. I use a custom Iosevka build with these Ghostty settings:

```
font-family = IosevkaTreeLig Nerd Font
font-style = Medium
font-style-bold = Bold
font-style-italic = Medium Italic
font-style-bold-italic = Bold Italic
font-size = 16
```

Colorscheme: While Ghostty has an absolutely excellent theme selector with a bunch of included themes (ghostty +list-themes), melange-nvim wasn’t included, so I had to configure the colorscheme myself. It was fairly straightforward, even though the palette = 0= syntax was a bit surprising:

```
# The dark variant of melange
background = #292522
foreground = #ECE1D7

palette = 0=#867462
palette = 1=#D47766
palette = 2=#85B695
palette = 3=#EBC06D
palette = 4=#A3A9CE
palette = 5=#CF9BC2
palette = 6=#89B3B6
palette = 7=#ECE1D7
palette = 8=#34302C
palette = 9=#BD8183
palette = 10=#78997A
palette = 11=#E49B5D
palette = 12=#7F91B2
palette = 13=#B380B0
palette = 14=#7B9695
palette = 15=#C1A78E

# I think it's nice to colorize the selection too
selection-background = #403a36
selection-foreground = #c1a78e
```

I’m happy with Ghostty: In the end, Ghostty has improved my setup and I’m happy I took the time to try it out. It took a little more time than “just launch it”, but it absolutely wasn’t a big deal. The reward was a few pleasant improvements that have improved my life a little. And perhaps most important of all: I’m now an alpha Nerd who uses a terminal written in Zig.

Did I create a custom highlighter for the Ghostty configuration file just to have proper syntax highlighting for this one blog post? You bet I did. (It’s a simple treesitter grammar.)

3 months ago 58 votes
2024 in review

It’s time for my 15th yearly review.

Nerdy things I enjoyed: I read a lot of fantasy books this year! My favorite new series were The Kingkiller Chronicle, the Gentleman Bastard series, and The Stormlight Archive. If you’re curious about Sanderson’s books but a little apprehensive about jumping into a massive series such as The Stormlight Archive, then I’ll recommend The Emperor’s Soul as an excellent little introduction. The standalone book Warbreaker is also fantastic (available for free on Brandon’s website). Customizing Neovim was fun and rewarding; it’s amazing I got anything productive done this year… I really enjoy working with Rust in my own hobby projects. Types are coming to Elixir and I’m loving it. (I recently migrated some small projects to v1.18 and found a bunch of errors.) The Gleam programming language shows a lot of promise. My one gripe is the pain of manually encoding/decoding JSON (even with the various libraries); compared to, for example, Elixir’s dynamic encoding or Rust’s #[derive(Deserialize)], it just feels so bad that I’ve avoided Gleam for some projects. Shame on me? CSS is alive and better than ever.

Things I accomplished:

I quit my job and started my own company. At the moment I’m focusing on consulting, but maybe something else can grow from it one day?
I wrote 26 blog posts—it was quite a productive blogging year for me.
I built a custom keyboard together with a custom keyboard layout.
I made the eBook for Why Cryptocurrencies? freely available and finally finished the How I wrote ‘Why Cryptocurrencies?’ series.
I wrote a Tree-sitter grammar for Djot. I need to be more active maintaining it… But it’s hard to get motivated, as I’ve got so many other interesting things I’d like to work on. Managing an open source project is not for the faint of heart (or the easily distracted).
I finished the blog series about building my first 3D printer. I’m up to over 2100 printing hours with the machine, so it’s safe to say I’ve been using it, not only playing around with it.
I rewrote my lighting home automation from Python to Elixir.
I realized that meta-blogging is a great way to get virtual points on Hacker News.

Tentative plans/goals/wishes for 2025:

For some reason, the idea of writing a fantasy novel got stuck in my head. I had a stint where I listened to dozens of hours of advice for aspiring writers and started planning a series. The excitement tapered off a bit during the Christmas holidays, and I don’t know if this was just a temporary sidetrack or if it’s something I’ll actually end up doing.
Design a one-handed keyboard layout. Again, this was just something my brain got stuck thinking about, and I’m not sure if it’s just a fleeting idea or something I need to do so I can stop thinking about it. (Sometimes just a little planning plus solving the most difficult problems is enough—I don’t have to finish all the crazy/dumb ideas I get for my mind to consider them “done”.)
Complete my second 3D printer. There’s no point in having a single printer; what if I break it and can no longer print replacement parts for it?
Develop my home automation system more. I’ve got a ton of things I’d like to improve (or play around with). For instance, yesterday I received the Home Assistant Voice Preview Edition, which I hope works as well as advertised.

3 months ago 53 votes

More in technology

Greatest Hits

I’ve been blogging now for approximately 8,465 days since my first post on Movable Type. My colleague Dan Luu helped me compile some of the “greatest hits” from the archives of ma.tt; perhaps some posts will stir some memories for you as well: Where Did WordCamps Come From? (2023) A look back at how Foo …

21 hours ago 2 votes
Let's give PRO/VENIX a barely adequate, pre-C89 TCP/IP stack (featuring Slirp-CK)

Years ago I bought TCP/IP Illustrated (what would now be called the first edition, prior to the 2011 update) for a hundred-odd bucks on sale, and it has now sat on my bookshelf, encased in its original shrinkwrap, for at least twenty years. It would be fun to put up the 4.4BSD data structures poster it came with, but that would require opening it. Fortunately, today we have many more excellent and comprehensive documents on the subject, and more importantly, we've recently brought back up an oddball platform that doesn't have networking either: our DEC Professional 380 running the System V-based PRO/VENIX V2.0, which you met a couple of articles back. The DEC Professionals are a notoriously incompatible member of the PDP-11 family and, short of DECnet (DECNA) support in its unique Professional Operating System, there's officially no other way you can get one on a network — let alone the modern Internet. Are we going to let that stop us? Of course not: we'll write our own stack, with a Crypto Ancienne proxy handling TLS 1.3. And, as we'll discuss, if you can get this thing on the network, you can get almost anything on the network! Easily portable and painfully verbose source code is included.

Recall from our lengthy history of DEC's early misadventures with personal computers that, in Digital's ill-advised plan to avoid the DEC Pros cannibalizing low-end sales from their categorical PDP-11 minicomputers, Digital's Small Systems Group deliberately made the DEC Professional series nearly totally incompatible despite the fact they used the same CPUs. In their initial roll-out strategy in 1982, the Pros (as well as their sibling systems, the Rainbow and the DECmate II) were only supposed to be mere desktop office computers — the fact the Pros were PDP-11s internally was mostly treated as an implementation detail. The idea backfired spectacularly against the IBM PC when the Pros and their promised office software failed to arrive on time, and in 1984 DEC retooled around a new concept of explicitly selling the Pros as desktop PDP-11s. This required porting the operating systems that PDP-11 minis typically ran: RSX-11M Plus was already there as the low-level layer of the Professional Operating System (P/OS), and DEC internally ported RT-11 (as PRO/RT-11) and COS. PDP-11s were also famous for running Unix, so DEC needed a Unix for the Pro as well, though eventually only one official option was ever available: a port of VenturCom's Venix, based on V7 Unix and later System V Release 2.0, called PRO/VENIX.

After the last article, I had the distinct pleasure of being contacted by Paul Kleppner, the company's first paid employee in 1981, who was part of the group at VenturCom that did the Pro port and stayed at the company until 1988. Venix was originally developed from V6 Unix on the PDP-11/23, incorporating Myron Zimmerman's real-time extensions to the kernel (such as semaphores and asynchronous I/O); Zimmerman was then a postdoc in physics at MIT, and Kleppner's father was the professor of the lab Zimmerman worked in. Zimmerman founded VenturCom in 1981 to capitalize on the emerging Unix market, becoming one of the earliest commercial Unix licensees. Venix-11 was subsequently based on the later V7 Unix, as was Venix/86, which was the first Unix on the IBM PC in January 1983 and was ported to the DEC Rainbow as Venix/86R. In addition to its real-time extensions and enhanced segmentation capability, critical for memory management in smaller 16-bit address spaces, it also included a full desktop graphics package.
Notably, DEC themselves were also a Unix licensee through their Unix Engineering Group and already had an enhanced V7 Unix of their own running on the PDP-11, branded initially as V7M. Subsequently the UEG developed a port of 4.2BSD with some System V components for the VAX and planned to release it as Ultrix-32, simultaneously retconning V7M as Ultrix-11 even though it had little in common with the VAX release. Paul recalls that DEC did attempt a port of Ultrix-11 to the Pro 350 themselves but ran into intractable performance problems. By then the clock was ticking on the Pro relaunch, and the issues with Ultrix-11 likely prompted DEC to look for alternatives.

Crucially, Zimmerman had managed to upgrade Venix-11's kernel while still keeping it small, a vital aspect on his 11/23, which lacked split instruction and data addressing and would otherwise have had to page a larger kernel in and out. Moreover, the 11/23 used an F-11 CPU — the same CPU as the original Professional 350 and 325. DEC quickly commissioned VenturCom to port their own system over to the Pro, which Paul says was a real win for VenturCom, and the first release came out in July 1984, complete with its real-time features intact and graphics support for the Pro's bitmapped screen. It was upgraded ("PRO/VENIX Rev 2.0") in October 1984, adding support for the new top-of-the-line DEC Professional 380, and then switched to System V (SVR2) in July 1985 with PRO/VENIX V2.0. (For its part, Ultrix-11 was released as such in 1984 as well, but never for the Pro series.) Keep that kernel version history in mind for when we get to the oddiments of the C compiler.

As for networking, though, with the exception of UUCP over serial, none of these early versions of Venix on either the PDP-11 or 8086 supported any kind of network connectivity out of the box — officially the only Pro operating system to support its Ethernet upgrade option was P/OS 2.0. Although all Pros have a 15-pin AUI network port, it isn't activated until an Ethernet CTI card is installed. (While Stan P. found mention of a third-party networking product called Fusion by Network Research Corporation which could run on PRO/VENIX, Paul's recollection is that this package ran into technical problems with kernel size during development. No examples of the PRO/VENIX version have so far been located and it may never have actually been released. You'll hear about it if a copy is found. The unofficial Pro 2.9BSD port also supports the network card, but that was always an under-the-table thing.)

Since we run Venix on our Pro, our only realistic option to get it on the 'Nets is also over a serial port — and we'll use the printer port, the lower-speed port, for our serial IP implementation. PRO/VENIX supports using only the RS-423 port as a remote terminal, and because it's twice as fast, it's more convenient for logins and file exchange over Kermit (which also has no TCP/IP overhead). Using the printer port also provides us with a nice challenge: if our stack works acceptably well at 4800bps, it should do even better at higher speeds if we port it elsewhere. On the Pro, we connect to our upstream host using a BCC05 cable (in the middle of this photograph), which terminates in a regular 25-pin RS-232 on the other end.

Now for the software part. There are other small TCP/IP stacks, notably things like Adam Dunkels' lwIP and so on.
But even SVR2 Venix is by present standards an old Unix with a much less extensive libc and a more primitive C compiler — in a short while you'll see just how primitive — and relatively modern code like lwIP's would require a lot of porting. Ideally we'd like a very minimal, indeed barely adequate, stack that can do simple tasks and can be expressed in a fashion acceptable to a now antiquated compiler. Once we've written it, it would be nice if it were also easily portable to other very limited systems, even by directly translating it to assembly language if necessary.

What we want this barebones stack to accomplish will inform its design. We're not going to run it as a server: we'd need a fixed, bindable address and the hardware running 24-7 to make such a use case meaningful. The Ethernet option was reportedly competent at server tasks, but Ethernet has more bandwidth, and that card also has additional on-board hardware. Let's face the cold reality: as a server, we'd find interacting with it over the serial port unsatisfactory at best, and we'd use up a lot of power and MTBF keeping it on more than we'd like to. Therefore, we really should optimize for the client case, which means we also only need to run the client when we're performing a network task.

Similarly, on a machine with no remote login capacity (like, I dunno, a C64), the person on the console gets it all. Therefore, we really should optimize for the single user case, which means we can simplify our code substantially by merely dealing with sockets sequentially, one at a time, without having to worry about routing packets we get on the serial port to other tasks or multiplexing them. Doing so would require extra work for dual-socket protocols like FTP, but we're already going to use directly-attached Kermit for that, and if we really want file transfer over TCP/IP there are other choices. (On a larger antique system with multiple serial ports, we could consider a setup where each user uses a separate outgoing serial port as their own link, which would also work under this scheme.) Some of you may find this conflicts hard with your notion of what a "stack" should provide, but I also argue that the breadth of a full-service driver would be wasted on a limited configuration like this and be unnecessarily more complex to write and test. Worse, in many cases, is better, and I assert this particular case is one of them.

Keeping the above in mind, what are appropriate client tasks for a microcomputer from 1984, now over 40 years old — even a fairly powerful one by the standards of the time — to do over a slow TCP/IP link? Fetching or sending small amounts of data over simple protocols is the obvious category (Crypto Ancienne's carl can serve as an HTTP-to-HTTPS proxy to handle the TLS part, if necessary). We could use protocols like these to download and/or view files from systems that aren't directly connected, or to send and receive status information. One task that is also likely common is an interactive terminal connection (e.g., Telnet, rlogin) to another host. However, as a client, this particular deployment is still likely to hit the same sorts of latency problems, for the same reasons we would experience them connecting to it as a server. The other tasks here are not highly sensitive to latency, require only a single "connection" and no multiplexing, and are simple protocols which are easy to implement. Let's call this feature set our minimum viable product.

Because we're writing only for a couple of specific use cases, and to make them even more explicit and easy to translate, we're going to take the unusual approach of having each of these clients handle their own raw packets in a bytewise manner.
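As a taste of what "bytewise, assuming nothing about endianness" means in practice, here is a sketch of the standard RFC 1071 Internet checksum written under the same constraints we'll hit below (K&R declarations, no unsigned char, every byte masked into an int). It's my own illustration, not code from the stack itself:

```c
/* Sketch: RFC 1071 Internet checksum for a pre-C89 compiler
   with signed chars. Bytes are widened and masked with & 0xff,
   and the running sum lives in a long. */
long cksum(buf, len)
char *buf;
int len;
{
        long sum;
        int i;

        sum = 0;
        for (i = 0; i + 1 < len; i += 2)        /* 16-bit words */
                sum += ((long)(buf[i] & 0xff) << 8) |
                        (buf[i + 1] & 0xff);
        if (i < len)                            /* odd trailing byte */
                sum += (long)(buf[i] & 0xff) << 8;
        while (sum >> 16)                       /* fold the carries */
                sum = (sum & 0xffff) + (sum >> 16);
        return ~sum & 0xffff;
}
```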
For the actual serial link, we're going to go even more barebones and use old-school RFC 1055 SLIP instead of PPP (uncompressed, too, not even Van Jacobson CSLIP). This is trivial to debug and straightforward to write, and if we do so in a relatively encapsulated fashion, we could consider swapping in CSLIP or PPP later on. A couple of utility functions will do the IP checksum algorithm and read and write the serial port, and DNS and some aspects of TCP also get their own utility subroutines, but otherwise all of the programs we will create will read and write their own network datagrams, using the SLIP code to send and receive over the wire.

The C we will write will also be intentionally very constrained, using bytewise operations assuming nothing about endianness and using as little of the C standard library as possible. For types, you only need some sort of 32-bit long, which need not be native, an int of at least 16 bits, and a char type — which can be signed, and in fact has to be to run on earlier Venices (read on). You can run the entirety of the code with just malloc/free, read/write/open/close, strlen/strcat, sleep, rand/srand and time for the srand seed (and fprintf for printing debugging information, if desired). On a system with little or no operating system support, almost all of these primitive library functions are easy to write or simulate, and we won't even assume we're capable of non-blocking reads, despite the fact Venix can do so. After all, from that which little is demanded, even less is expected.

The traditional way to run SLIP is slattach, which effectively makes a serial port directly into a network interface. Such an arrangement would be the most flexible approach from the user's perspective, because you necessarily have a fixed, bindable external address, but obviously such a scheme didn't scale over time. With the proliferation of dialup Unix shell accounts in the late 1980s and early 1990s, closed-source tools like 1993's The Internet Adapter ("TIA") could provide the SLIP and later PPP link just by running them from a shell prompt. Because they synthesize artificial local IP addresses, sort of NAT before the concept explicitly existed, the architecture of such tools prevented directly creating listening sockets — though for some situations this could be considered more of a feature than a bug. Any needed external ports could be proxied by the software anyway, and later network clients tended not to require it, so for most tasks it was more than sufficient.

Closed-source and proprietary SLIP/PPP-over-shell solutions like TIA were eventually displaced by open source alternatives, most notably SLiRP. SLiRP (hereafter Slirp so I don't gouge my eyes out) emerged in 1995 and used a similar architecture to TIA, handing out virtual addresses on a synthetic network and bridging that network to the Internet through the host system. It rapidly became the SLIP/PPP shell solution of choice, leading to its outright ban by some shell ISPs who claimed it violated their terms of service. As direct SLIP/PPP dialup became more common than shell accounts, during which time yours truly upgraded to a 56K Mac modem I still have around here somewhere, Slirp eventually became most useful for connecting small devices via their serial ports (PDAs and mobile phones especially, but really anything — subsets of Slirp are still used in emulators today, like QEMU, for a similar purpose) to a LAN. By a shocking and completely contrived coincidence, that's exactly what we'll be doing!
Slirp has not been officially maintained since 2006. There is no package in Fedora, which is my usual desktop Linux, and the one in Debian reportedly has issues. A stack of patch sets circulated thereafter, but the planned 1.1 release never happened, and other crippling bugs remain, some of which were addressed in other patches that don't seem to have made it into any release, source or otherwise. If you tried to build Slirp from source on a modern system and it just immediately exits, you got bit. I have incorporated those patches and a couple of my own to port naming and the configure script, plus some additional fixes, into an unofficial "Slirp-CK" which is on Github. It builds the same way as prior versions and is tested on Fedora Linux. I'm working on getting it functional on current macOS also.

Next, I wrote up our four basic functional clients: ping, DNS lookup, an NTP client (it doesn't set the clock, just shows you the stratum, refid and time, which you can use for your own purposes), and a TCP client. The TCP client accepts strings up to a defined maximum length, opens the connection, sends those strings (optionally separated by CRLF), and then reads the reply until the connection closes. This all seemed to work great on the Linux box, which you yourself can play with as a toy stack (directions at the end). Unfortunately, I then pushed it over to the Pro with Kermit, and the compiler immediately started complaining.

SLIP is a very thin layer on IP packets. There are exactly four metabytes, which I created preprocessor defines for. A SLIP packet ends with SLIP_END, or hex $c0. Where this byte must occur within a packet, it is replaced by a two-byte sequence for unambiguity, SLIP_ESC SLIP_ESC_END, or hex $db $dc; and where the escape byte must occur within a packet, it gets a different two-byte sequence, SLIP_ESC SLIP_ESC_ESC, or hex $db $dd. Although I initially set out to use defines and symbols everywhere instead of naked bytes, and wrote slip.c on that basis, I eventually settled on raw bytes afterwards, using copious comments so it was clear what was intended to be sent. That probably saved me a lot of work renaming everything, because I dimly recalled that early C compilers, including System V, limit their identifiers to eight characters (the so-called "Ritchie limit"). At this point I probably should have simply removed the defines entirely for consistency with their absence elsewhere, but I went ahead and trimmed them down to more opaque, pithy identifiers.

That wasn't the only problem, though. I originally had two functions in slip.c, slip_start and slip_stop, and the compiler didn't like that either, despite each appearing to have a unique eight-character prefix. That's because their symbols in the object file are actually prepended with various metacharacters like _ and ~, so effectively you only get seven characters in function identifiers, an issue the error message fails to explain clearly.
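For reference, the four metabytes look like this as defines; a sketch with the full RFC 1055 names, before trimming them down for the Ritchie limit:

```c
/* Sketch: the four SLIP metabytes from RFC 1055, with their
   traditional (pre-trimming) names. */
#define SLIP_END     0xc0   /* ends a datagram */
#define SLIP_ESC     0xdb   /* escape byte */
#define SLIP_ESC_END 0xdc   /* $db $dc == a literal $c0 */
#define SLIP_ESC_ESC 0xdd   /* $db $dd == a literal $db */
```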
The next problem: there's no unsigned char, at least not in PRO/VENIX Rev. 2.0, which I want to support because it's more common, and presumably not in the original versions of PRO/VENIX and Venix-11 either. (This type does exist in PRO/VENIX V2.0, but that's because it's System V and has a later C compiler.) In fact, the unsigned keyword didn't exist at all in the earliest C compilers, and even when it did, it couldn't be applied to every basic type. Although unsigned char was introduced in V7 Unix and is documented as legal in the PRO/VENIX manual, and it does exist in Venix/86 2.1, which is also a V7 Unix derivative, the PDP-11 and 8086 C compilers have different lineages, and Venix's V7 PDP-11 compiler definitely doesn't support it. I suspect this may not have been intended, because unsigned int works (unsigned long would be pointless on this architecture, and indeed correctly generates Misplaced 'long' on both versions of PRO/VENIX). Regardless of why, however, the plain char type on the PDP-11 is signed, and for compatibility reasons here we'll have no choice but to use it.

Recall that when C89 was being codified, plain char was left as an ambiguous type, since some platforms (notably PDP-11 and VAX) made it signed by default and others made it unsigned, and C89 was more about codifying existing practice than establishing new ones. That's why, on a modern 64-bit platform, e.g., my POWER9 workstation, plain char is unsigned; if we change the original type explicitly to signed char, the behaviour differs. Accounting for different sizes of int, PRO/VENIX V2.0 (again, which is System V) seems similar, but the exact same program on PRO/VENIX Rev. 2.0 behaves a bit differently still. The differences in int size we expect, but there's other weird stuff going on here. The PRO/VENIX manual lists all the various permutations of type conversions and what gets turned into what where, but since the manual is already wrong about unsigned char, I don't think we can trust the documentation for this part either. Our best bet is to move values into an int and mask off any propagated sign bits before doing comparisons or math, which is agonizing, but reliable. That means throwing around a lot of seemingly superfluous & 0xff to make sure we don't get negative numbers where we don't want them.

Once I got it built, however, there were lots of bugs. Many were because it turns out the compiler isn't too good with 32-bit long, which is not a native type on the 16-bit PDP-11. Part of the NTP client worked on my regular Linux desktop but didn't work in Venix. The first problem is that the intermediate shifts are too large and overshoot, even though they should be in range for a long: on the POWER9 (accounting for the different semantics of %lx) the value comes out as expected, but on Venix the second, larger shift blows out the value. We can get an idea of why from the generated assembly in the adb debugger (here from PRO/VENIX V2.0, since I could cut and paste from the Kermit session). (Parenthetical notes: csav is a small subroutine that pushes volatiles r2 through r4 on the stack and turns r5 into the frame pointer; the corresponding cret unwinds this. The initial branch in this main is used to reserve additional stack space, but is often practically a no-op.) The first shift is at ~main+024. Remember the values are octal, so 010 == 8. r0 is 16 bits wide — there are no 32-bit registers — so an eight-bit shift is fine. When we get to the second shift, however, it's the same instruction on just one register (030 == 24) and the overflow is never checked. In fact, the compiler never shifts the second part of the long at all, so the result is zero. The second problem in this example is that the compiler never treats the constant as a long, even though statically there's no way it can fit in a 16-bit int.
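The post's inline listings didn't survive the excerpt, so here is a reconstruction of the shape of the problem rather than the original code. buf and the variable names are mine; 2208988800 is the usual offset between the NTP epoch (1900) and the Unix epoch (1970), and bytes 40-43 of an NTP reply hold the transmit timestamp's seconds:

```c
long t;
int hi, lo;
char buf[48];   /* hypothetical receive buffer for the NTP reply */

/* Naive version: on Venix the 24- and 16-bit shifts are done on
   a single 16-bit register and the epoch constant stays an int,
   so the result is garbage. */
t = ((buf[40] & 0xff) << 24) | ((buf[41] & 0xff) << 16) |
    ((buf[42] & 0xff) << 8)  |  (buf[43] & 0xff);
t -= 2208988800;

/* Workaround: assemble int-sized halves, widen them explicitly,
   and force the constant to be long. */
hi = ((buf[40] & 0xff) << 8) | (buf[41] & 0xff);
lo = ((buf[42] & 0xff) << 8) | (buf[43] & 0xff);
t = ((long)hi << 16) | ((long)lo & 0xffffL);
t -= 2208988800L;
```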
Once I got it built, however, there were lots of bugs. Many were because it turns out the compiler isn't too good with 32-bit long, which is not a native type on the 16-bit PDP-11. Part of the NTP client worked on my regular Linux desktop but didn't work in Venix. The first problem is that the intermediate shifts are too large and overshoot, even though they should be in range for a long. Consider an expression that shifts a long left by 8 bits and then by 24: on the POWER9, accounting for the different semantics of %lx, both values come out as expected, but on Venix the second shift blows out the value. We can get an idea of why from the generated assembly in the adb debugger (here from PRO/VENIX V2.0, since I could cut and paste from the Kermit session).

(Parenthetical notes: csav is a small subroutine that pushes volatiles r2 through r4 on the stack and turns r5 into the frame pointer; the corresponding cret unwinds this. The initial branch in this main is used to reserve additional stack space, but is often practically a no-op.)

The first shift is at ~main+024. Remember the values are octal, so 010 == 8. r0 is 16 bits wide — no 32-bit registers — so an eight-bit shift is fine. When we get to the second shift, however, it's the same instruction on just one register (030 == 24) and the overflow is never checked. In fact, the compiler never shifts the second half of the long at all, so the result is zero. The second problem in this example is that the compiler never treats the constant as a long, even though statically there's no way it can fit in a 16-bit int. To get around those two gotchas on both Venices, I rewrote that section of the NTP client.
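The shape of the fix is something like the following sketch (my reconstruction, with illustrative names like buf; the principle of keeping every intermediate in a long, and of holding the epoch constant in its own long variable, is the one described above):

    /* Assemble the 32-bit NTP seconds field through a long accumulator
       so no single shift exceeds 8 bits, and keep the epoch offset in a
       second long so the compiler can't quietly treat it as a 16-bit int. */
    long secs, epoch;
    int i;

    secs = 0L;
    for (i = 0; i < 4; i++) {
        secs <<= 8;                     /* shift the full long, 8 bits at a time */
        secs |= (long)(buf[i] & 0xff);  /* mask the signed char, then widen */
    }
    epoch = (long)2208988800;           /* seconds from 1900 to 1970 */
    secs -= epoch;                      /* NTP time to Unix time */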
An alternative to a second variable is to explicitly mark the epoch constant itself as long, e.g., by casting it, which also works.

Here's another example for your entertainment. At least some sort of pseudo-random number generator is crucial, especially for TCP when selecting the pseudo-source port and initial sequence numbers; otherwise Slirp seemed to get very confused because we would "reuse" things a lot. Unfortunately, the typical idiom to seed it, srand(time(NULL)), doesn't work: srand() expects a 16-bit int but time(NULL) returns a 32-bit long, and it turns out the compiler only passes the 16 most significant bits of the time — i.e., the ones least likely to change — to srand(). The disassembly proves it (and since this is a static binary, we can see everything we're calling). At the point we call the glue code for time from main, the value under the stack pointer (i.e., r6) is cleared immediately beforehand since we're passing NULL (at ~main+06). We then invoke the system call, which per the Venix manual for time(2) uses two registers for the 32-bit result, namely r0 (high bits) and r1 (low bits). We passed a null pointer, so the values remain in those registers and aren't written anywhere (branch at _time+014). When we return to ~main+014, however, we only put r0 on the stack for srand (remember that r5 is being used as the frame pointer; see the disassembly I provided for csav) and r1 is completely ignored.

Why would this happen? It's because time(2) isn't declared anywhere in /usr/include or /usr/include/sys (the two C include directories), nor for that matter rand(3) or srand(3). This is true of both Rev. 2.0 and V2.0. Since the symbols are statically present in the standard library, linking will still work, but since the compiler doesn't know what it's supposed to be working with, it assumes int and fails to handle both halves of the long. One option is to manually declare everything ourselves. However, from the assembly at _time+016 we know that if we pass a pointer, the entire long value will get placed there. That means we can also pass a pointer and take the low bits ourselves.
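In sketch form (my reconstruction of the approach, not the project's exact code):

    /* time(2) writes the full 32-bit result through the pointer, so the
       low-order, fast-changing bits survive; the cast from long to int
       then keeps those low 16 bits for srand(). */
    long now;

    time(&now);
    srand((int)now);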
Now this gets the lower bits, and there is sufficient entropy for our purpose (though obviously not a cryptographically secure PRNG). Interestingly, the Venix manual recommends using the time as the seed, but doesn't include any sample code.

At any rate, this was enough to make the pieces work for IP, ICMP and UDP, but TCP would bug out after just a handful of packets. As it happens, Venix has rather small serial buffers by modern standards: tty(7), based on the TIOCQCNT ioctl(2), appears to have just a 256-byte read buffer (sg_ispeed is only char-sized). If we don't make adjustments for this, we'll start losing framing when the buffer gets overrun, as in one extract I captured from a test build with debugging dumps on and a maximum segment size/window of 512 bytes. In that dump, the bytes marked by dashes are from the remote end and the bytes separated by dots are what the SLIP driver is scanning for framing and/or throwing away; you'll note there is obvious ASCII data in them. If we make the TCP MSS and window on our client side 256 bytes, there is still retransmission, but the connection is more reliable since overrun occurs less often, and this seems to work better than a hard cap on the maximum transmission unit (e.g., "mtu 256") from Slirp's side.

The only consequence of dropping the TCP MSS and window size is that the TCP client is currently hard-coded to just send one packet at the beginning (this aligns with how you'd do finger, HTTP/1.x, gopher, etc.), and that datagram uses the same size, which necessarily limits how much can be sent. If I did the extra work to split this over several datagrams, it obviously wouldn't be a problem anymore, but I'm lazy and worse is better!

The connection can be made somewhat more reliable still by improving the SLIP driver's notion of framing. RFC 1055 only specifies that the SLIP end byte (i.e., $c0) occur at the end of a SLIP datagram, though it also notes that it was proposed very early on that it could also start datagrams — i.e., if two occur back to back, then it just looks like a zero-length or otherwise obviously invalid entity which can be trivially discarded. However, since there's no guarantee or requirement that the remote link will do this, we can't assume it either. We also can't just look for a $45 byte (i.e., IPv4 with a 20-byte header) because that's an ASCII character ('E') and appears frequently in text payloads. However, $45 followed by a valid DSCP/ECN byte is much less frequent, and most of the time this second byte will be $00, $08 or $10; we don't currently support ECN (maybe we should) and we wouldn't find other DSCP values meaningful anyway. The SLIP driver uses these two-byte sequences to find the start of a datagram and $c0 to end it. While that doesn't solve the overflow issue, it means the SLIP driver is less likely to go out of framing when the buffer does overrun, and thus can better recover when the remote side retransmits.
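In code, the heuristic looks something like this (my own sketch with illustrative names, not the actual driver source):

    /* Detect a plausible IPv4 datagram start inside the SLIP byte
       stream. prev and cur are successive bytes, already masked with
       & 0xff. $45 means version 4 with a 20-byte header; the second
       byte is the DSCP/ECN field, almost always $00, $08 or $10 here. */
    int
    looks_like_ip_start(prev, cur)
    int prev, cur;
    {
        if (prev != 0x45)
            return 0;
        return cur == 0x00 || cur == 0x08 || cur == 0x10;
    }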
And, well, that's it. There are still glitches to bang out, but it's good enough to grab Hacker News. To try the Linux side yourself, grab Slirp-CK from Github, go into the src/ directory, run configure and then run make (parallel make is fine; I use -j24 on my POWER9). Connect your two serial ports together with a null modem; I'll assume they are /dev/ttyUSB0 and /dev/ttyUSB1. Start Slirp-CK with a command line like ./slirp -b 4800 "tty /dev/ttyUSB1", adjusting the baud and path to your serial port, and take note of the virtual and nameserver addresses it reports. Unlike the given directions, you can just kill it with Control-C when you're done; the five zeroes are only if you're running your connection over standard output, such as a direct shell dial-in (this is a retrocomputing blog, so some of you might).

To see the debug version in action, next go to the BASS directory and just do a make. You'll get a billion warnings, but it should still work with current gcc and clang because I specifically request -std=c89. If you use a different path for your serial port (i.e., not /dev/ttyUSB0), edit slip.c before you compile. You don't do anything like ifconfig with these tools; you always provide the tools the client IP address they'll use (or create an alias or script to do so). Try this initial example, with slirp already running:
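(What follows is a hypothetical rendering of that example; the client name and argument form are my assumptions based on the description below.)

    ./ping 10 0 2 2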
Because I'm super-lazy, you separate the components of the IPv4 address with spaces, not dots. In Slirp-land, 10.0.2.2 is always the host you are connected to. You can see the ICMP packet being sent, the bytes being scanned by the SLIP driver for framing (the ones with dots), and then the reply (with dashes). These datagram dumps have already been pre-processed for SLIP metabytes. Unfortunately, you may not be able to ping other hosts through Slirp because there's no backroute, but you could try this with a direct SLIP connection, an exercise left for the reader.

If Slirp doesn't want to respond and you're sure your serial port works (try testing both ends with Kermit?), you can recompile it with -DDEBUG (change this in the generated Makefile) and pass your intended debug level, like -d 1 or -d 3. You'll get a file called slirp_debug with some agonizingly detailed information, so you can see if it's actually getting the datagrams and/or liking the datagrams it gets. For nslookup, ntp and minisock, the second address becomes your accessible recursive nameserver (or use -i to provide an IP). The DNS dump is also given in the debug mode, with slashes for the DNS answer section. nslookup and ntp are otherwise self-explanatory. minisock takes a server name (or IP) and port, followed by optional strings. The strings, up to 255 characters total (in this version), are immediately sent with CR-LFs between them unless you specify -n; if you specify no strings, none are sent. It then waits on that port for data and exits when the socket closes. This is how we did the HTTP/1.0 requests in the screenshots.

On the DEC Pro side, this has been tested on my trusty DEC Professional 380 running PRO/VENIX V2.0. It should compile and run on a 325 or 350, and on at least PRO/VENIX Rev. 2.0, though I don't have hardware to verify this, and Xhomer's serial port emulation is not good enough for this purpose (so unfortunately you'll need a real DEC Pro until I or Tarek get around to fixing it). The easiest way to get the files over there is Kermit. Assuming you have it already, connect your host and the Pro on the "real" serial port at 9600bps, make sure both sides are set to binary, and just push all the files over (except the Markdown documentation, unless you really want it); then do a make -f Makefile.venix (it may have been renamed to makefile.venix; adjust accordingly). Establishing the link is as simple as connecting your server's serial port to the other end of the BCC05 or equivalent from the Pro and starting Slirp to talk to that port (on my system it's even the same port, so the same command line suffices). If you experience issues with the connection, the easiest fix is to just bounce Slirp: because there are no timeouts, there are also no retransmits. I don't know if this is hitting bugs in Slirp or in my code, though it's probably the latter. Nevertheless, I've been able to run stuff most of the day without issue. It's nice to have a simple network option, and there's the personal satisfaction of having written it myself.

There are many acknowledged deficiencies, mostly because I assume little about the system itself and tried to keep everything very simplistic. There are no timeouts and thus no retransmits, and if you break the TCP connection in the middle, there will be no proper teardown. Also, because I used Slirp for the other side (as many others will), and because my internal network is full of machines that have no idea what IPv6 is, there is no IPv6 support. I agree there should be, and SLIP itself doesn't care whether it carries IPv4 or IPv6, but for now that would require patching Slirp, which is a job I just don't feel up to at the moment. I'd also like to support at least CSLIP in the future.

In the meantime, if you want to try this on other operating systems, the system-dependent portions are in compat.h and slip.c, with a small amount in ntp.c for handling time values. You will likely want to change where your serial ports are, the speed they run at, and how to make that port "raw" in slip.c. You should also add any extra #includes to compat.h that your system requires. I'd love to hear about it running other places. Slirp-CK remains under the original modified Slirp license and BASS is under the BSD 2-clause license. You can get Slirp-CK and BASS on Github.
Transactions are a protocol

Transactions are not an intrinsic part of a storage system. Any storage system can be made transactional: Redis, S3, the filesystem, etc. Delta Lake and Orleans demonstrated techniques to make S3 (or cloud storage in general) transactional. Epoxy demonstrated techniques to make Redis (and any other system) transactional. And of course there's always good old Two-Phase Commit. If you don't want to read those papers, I wrote about a simplified implementation of Delta Lake and also wrote about a simplified MVCC implementation over a generic key-value storage layer.

It is both the beauty and the burden of transactions that they are not intrinsic to a storage system. Postgres and MySQL and SQLite have transactions, but you don't need to use them; it isn't possible to require you to use them. Many developers, myself a few years ago included, do not know why you should use them. (Hint: read Designing Data-Intensive Applications.)

And you can take it even further by ignoring the transaction layer of an existing transactional database and implementing your own transaction layer, as Convex has done (the Epoxy paper above also does this). It isn't entirely clear that you have a lot to lose by implementing your own transaction layer, since the indexes you'd want on the version field of a value would only be as expensive or slow as any other secondary index in a transactional database. Though why you'd do this isn't entirely clear either (I would like to read about this from Convex some time).

It's useful to see transaction protocols as another tool in your system design tool chest when you care about consistency, atomicity, and isolation, especially as you build systems that span data systems. Maybe, as Ben Hindman hinted at the last NYC Systems, even proprietary APIs will eventually provide something like two-phase commit, so physical systems outside our control can become transactional too.
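To make "transactions are a protocol" concrete, here is a minimal two-phase commit coordinator sketch (my own illustration; the participant structure and its prepare/commit/abort callbacks are hypothetical, and a real coordinator would also durably log its decision before phase two):

    /* Each participant votes in phase one; any "no" vote (or failure
       to respond, folded into a "no" here) aborts the transaction. */
    struct participant {
        const char *name;
        int  (*prepare)(void);   /* returns nonzero to vote yes */
        void (*commit)(void);
        void (*abort)(void);
    };

    int
    two_phase_commit(struct participant *p, int n)
    {
        int i, voted;

        /* Phase 1: collect votes until someone says no. */
        for (voted = 0; voted < n; voted++)
            if (!p[voted].prepare())
                break;

        if (voted < n) {
            /* Phase 2a: roll back the participants that prepared. */
            for (i = 0; i < voted; i++)
                p[i].abort();
            return 0;
        }

        /* Phase 2b: unanimous yes; everyone commits. */
        for (i = 0; i < n; i++)
            p[i].commit();
        return 1;
    }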

Humanities Crash Course Week 16: The Art of War

In week 16 of the humanities crash course, I revisited the Tao Te Ching and The Art of War. I just re-read the Tao Te Ching last year, so I only revisited my notes now. I've also read The Art of War a few times, but decided to revisit it now anyway.

Readings

Both books are related. The Art of War is older; Sun Tzu wrote it around 500 BCE, at a time when war was becoming more "professionalized" in China. The book aims to convey what had (or hadn't) worked on the battlefield. The starting point is conflict: there's an enemy we're looking to defeat. The best victory is achieved without engagement. That's not always possible, so the book offers pragmatic suggestions on tactical maneuvers and such. It gives good advice for situations involving conflict, which is why it has influenced leaders (including businesspeople) through the centuries: it's better to win before any shots are fired (i.e., through cunning and calculation). Use deception. Don't let conflicts drag on. Understand the context to use it to your advantage. Keep your forces unified and disciplined. Adapt to changing conditions on the ground. Consider economics and logistics. Gather intelligence on the opposition. The goal is winning through foresight rather than brute force — good advice!

The Tao Te Ching, written by Lao Tzu around the late 4th century BCE, is the central text in Taoism, a philosophy that aims for skillful action by aligning with the natural order of the universe — i.e., doing through "non-doing" and transcending distinctions (which aren't present in reality but layered onto experiences by humans). Tao means Way, as in the Way to achieve such alignment. The book is a guide to living the Tao. (Living in Tao?) But as it makes clear from its very first lines, you can't really talk about it: the Tao precedes language. It's a practice — and the practice entails non-striving.

Audiovisual

Music: Gioia recommended the Beatles (The White Album, Sgt. Pepper's, and Abbey Road) and the Rolling Stones (Let It Bleed, Beggars Banquet, and Exile on Main Street). I'd heard all three Rolling Stones albums before, but don't know them by heart (like I do the Beatles), so I revisited all three. Some songs sounded a bit cringe-y, especially after having heard "real" blues a few weeks ago. Of the three albums, Exile on Main Street sounds the most authentic. (Perhaps because of the band members' altered states?) In any case, it sounded most "in the Tao" to me — that is, as though the musicians surrendered to the experience of making this music. It's about as rock 'n roll as it gets.

Arts: Gioia recommended looking at Chinese architecture. As usual, my first thought was to look for short documentaries or lectures on YouTube. I was surprised by how little there was, so instead I read the webpage Gioia suggested.

Cinema: Since we headed again to China, I took in another classic Chinese film that had long been on my to-watch list: Wong Kar-wai's IN THE MOOD FOR LOVE. I found it more Confucian than Taoist, although its slow pacing, gentleness, focus on details, and passivity strike something of a Taoist mood.

Reflections

When reading the Tao Te Ching, I'm often reminded of this passage from the Gospel of Matthew:

No man can serve two masters: for either he will hate the one, and love the other; or else he will hold to the one, and despise the other. Ye cannot serve God and mammon. Therefore I say unto you, Take no thought for your life, what ye shall eat, or what ye shall drink; nor yet for your body, what ye shall put on.
Is not the life more than meat, and the body than raiment? Behold the fowls of the air: for they sow not, neither do they reap, nor gather into barns; yet your heavenly Father feedeth them. Are ye not much better than they? Which of you by taking thought can add one cubit unto his stature? And why take ye thought for raiment? Consider the lilies of the field, how they grow; they toil not, neither do they spin: And yet I say unto you, That even Solomon in all his glory was not arrayed like one of these. Wherefore, if God so clothe the grass of the field, which to day is, and to morrow is cast into the oven, shall he not much more clothe you, O ye of little faith? Therefore take no thought, saying, What shall we eat? or, What shall we drink? or, Wherewithal shall we be clothed? (For after all these things do the Gentiles seek:) for your heavenly Father knoweth that ye have need of all these things. But seek ye first the kingdom of God, and his righteousness; and all these things shall be added unto you. Take therefore no thought for the morrow: for the morrow shall take thought for the things of itself. Sufficient unto the day is the evil thereof.

The Tao Te Ching is older and from a different culture, but "Consider the lilies of the field, how they grow; they toil not, neither do they spin" has always struck me as very Taoistic: both texts emphasize non-striving and putting your trust in a higher order.

Even though it's even older, that spirit is also evident in The Art of War. It's not merely letting things happen, but aligning mindfully with the needs of the time. Sometimes we must fight. Best to do it quickly and efficiently. And best yet if the conflict can be settled before it begins.

Notes on Note-taking

This week, I started using ChatGPT's new o3 model. Its answers are a bit better than what I got with previous models, but there are downsides. For one thing, o3 tends to format answers in tables rather than lists. This works well if you use ChatGPT in a wide window, but is less useful on a mobile device or (as in my case) in a narrow window to the side, which is how I usually use ChatGPT on my Mac; o3's responses often include tables that get cut off there. For another, replies take much longer as the AI does more "research" in the background. As a result, it feels less conversational than 4o — which changes how I interact with it. I'll play more with o3 for work, but for this use case, I'll revert to 4o.

Up Next

Gioia recommends Apuleius's The Golden Ass. I've never read this, and frankly feel wary about returning to the period of Roman decline. (Too close to home?) But I'll approach it with an open mind. Again, there's a YouTube playlist for the videos I'm sharing here. I'm also sharing these posts via Substack if you'd like to subscribe and comment. See you next week!

My approach to teaching electronics

Explaining the reasoning behind my series of articles on electronics -- and asking for your thoughts.
