More from Jonas Hietala
I recently completed my VORON 0 build and I was determined to leave it as-is for a while and to start modding my VORON Trident… So before embarking on my larger Trident modding journey I decided to work on the VORON 0 just a little bit more.

HEPA filter

With the Nevermore Micro V4 I had active carbon filtering but I also wanted a HEPA filter that would provide negative air pressure to the printer. I found the HEPA filter by JNP for the VORON 0.1 and a mount for the VORON 0.2 that I installed. For the fans I used two Noctua NF-A4x10 FLX fans and I spliced them together with the Nevermore filter, allowing the MCU to control all the filter fans together. It might have been better to buy the 5V versions and connect them to the 5V output to have them always on, but by then I had already ordered the other version. Oh well.

Back meshed panel

The small 5V fan for the Raspberry Pi was super loud and I wanted to replace it with something. Because the Raspberry Pi Zero doesn't get that hot I removed the fan and replaced the back panel with a meshed variant, which I hope provides enough airflow to keep the electronics cool. (There are other variants with integrated fans if I realize this wasn't enough.)

Modesty mesh

The wiring is super ugly and I stumbled upon the modesty mesh that hides the wires well from the sides. Not at all necessary but they make the printer a little prettier.

Full size panels

One thing that bothered me with the stock VORON 0.2 was the gaps between the tophat and the side panels and front door. I went looking for a mod with full-sized panels and found the ZeroPanels mod. Instead of magnets the printed parts clip into the extrusions pretty hard while still allowing you to pull them off when you want to. It works really well honestly. The clips were slightly difficult to print but manageable.

I was looking at the BoxZero mod for a proper full-sized panels mod but I didn't want to tear apart the printer and rebuild the belt path, so I simply replaced the stock panels with full sized ones. This does leave some air gaps at the back and front of the printer right next to the belt that I simply covered with some tape:

Some tape to cover the gaps around the belts.

While the clips are good for panels you don't remove that often, they're too much to use for the front door. They have some magnetic clips you can use but I'm honestly perplexed about how to use them to good effect. The standard VORON 0 handles don't consider the extra 3mm the foam tape adds, leaving a gap that severely reduces the pulling force of the magnets. Similarly, the magnet clips included in ZeroPanels surprisingly have the same issue. For the door handle I used the stealth handle found in the Voron 0.2 fullsize ZeroPanel mod that does take the foam tape into consideration.

Three different magnet holders; at the top the Stealth handle's holders that come out 3mm, in the middle the 6mm holder, and at the bottom the standard magnet holder.

There's a variant of the clips for 6mm magnets in the pull requests that I used by pushing in two 3x2mm magnets and super gluing one 10x3mm magnet on top, so it sticks out the 3mm extra distance the foam tape adds. (Yes, maybe just the 10x3mm magnet would be enough.) For the outside I used the standard ZeroPanels holders for 10x3mm magnets, allowing the magnets to close really tightly against each other.

Extra magnets at the top of the printer to get a proper seal.
The panels I bought were just slightly too wide, causing the side panels to bend a little and making it hard to get a close seal for the front and side panels. I had to file down the clips on the front door to keep them from colliding with the side panel clips, and I had to add extra clips and magnets for the panels to close tightly against the foam tape.
About 1.5 years ago I ventured into 3D printing by building a VORON Trident. It was a very fun project and I've even used the printer quite a bit. Naturally, I had to build another one and this time I opted for the cute VORON 0.

Why another printer?

I really like my VORON Trident and it'll continue to be my main printer for the foreseeable future, but a second printer would do two important things for me:

- Act as a backup printer if my Trident breaks. A printer made partially of printed parts is great as you can easily repair it… But only if you have a working printer to print the parts. It would also be very annoying to disassemble the printer for a mod only to realize I've forgotten to print a part I needed.
- Building printers is really fun. Building the VORON Trident is one of the most fun and rewarding projects I've done.

Why a VORON 0?

These properties make the VORON 0 an ideal secondary printer for me:

- You need to assemble the VORON 0 yourself (a feature, not a bug)
- Prints ABS/ASA well (for printer parts)
- Very moddable and truly open source
- It's tiny

The VORON 0 to the left and the VORON Trident 250 to the right.

It's really small, which is perfect for me as I have a limited amount of space. It would be very fun to build a VORON 2.4 (or even a VORON Phoenix) but I really don't have space for more printers.

Getting the parts

I opted to buy a kit instead of self-sourcing the parts as it's usually cheaper and requires a lot less work, even if you replace some parts. This is what I ended up getting:

- A VORON 0 kit from Lecktor
- Parts for a Dragon Burner toolhead
- Parts for a Nevermore V4 active carbon filter
- Later on, I replaced the SKR Mini E3 V2 that came with the kit with the V3

Lots of delays

I ordered a VORON 0 from Lecktor in February 2024 and it took roughly 4 months before I got the first shipment of parts, and it wasn't until the end of 2024 that I had received all the parts needed to complete the build. The wait was annoying… While I can't complain about the quality of parts, with the massive delays I regret ordering from Lecktor and in hindsight I should've ordered an LDO kit from 3DJake, like I was first considering.

Printing parts myself

So what do you do when you can't start the build? You print parts!

A box of some of the printed parts for the build (and many I later threw away).

There's something very satisfying about printing parts you then build a printer with. This time I wanted to make a colorful printer and I came up with this mix of filament:

- PolyLite ASA Yellow
- Formfutura EasyFil ABS Light Green
- Formfutura EasyFil ABS Light Blue
- Formfutura EasyFil ABS Magenta

I think they made the printer look great.

The build

I won't do as detailed a build log as I did when building the VORON Trident but I tried to take some pictures. Scroll on!

Frames and bed

The linear Y-rails.
The kit comes with the Kirigami bed mod.
The frame with A/B motors.
Building the bottom of the printer with feet, power supply, and display.

MGN9 instead of MGN7 X-axis

After I assembled the X-axis I noticed a problem:

The carriage collides with the stock A drive.

The reason is that the kit comes with MGN9 rails for the X-axis instead of the standard MGN7 rails. This required me to reprint modified A/B drives, X-carriage, and alignment tools.

The carriage passes the modded B drive.

Belts

Starting to install the belt.
The belt is tight.
Dragon Burner toolhead

I got the parts needed to build the standard mini stealthburner… But I'm attracted to playing around with new stuff and I decided to try out the Dragon Burner instead. I went with it because it's quite popular, it has good cooling (I print a bunch of PLA), and I haven't tried it out yet.

The fans are inserted. I don't care about LEDs so I inserted an opaque magenta part instead. I think it looks really good.
The back of the Dragon Burner.

I opted for the Rapido 2 instead of the Dragon that came with the kit because the Dragon has problems printing PLA. I was a bit confused about how to route the wires as there was very little space when mounting the toolhead on the carriage. Routing the wires close to the fans, clipping off the ears of the fans, and holding it together with cable ties in this way worked for me.

Galileo 2 standalone

Dragon Burner together with the Galileo 2 extruder mounted on the printer.

For the extruder I opted for the standalone version of Galileo 2. I've used Galileo 2 on the Trident but I hated the push-down latch it uses in the Stealthburner configuration. The latch eventually broke by pulling out a heat-set insert so I went back to the Clockwork 2 on the Trident, giving me the parts to rebuild the Galileo for the VORON 0 in a standalone configuration.

The parts for Galileo 2. There will be left-overs from the Stealthburner variant.

The build was really fast and simple—compared to the Stealthburner variant it's night and day. I didn't even think to take a break for pictures.

Nevermore filter

Since I want to be able to print ABS I feel I need to have an activated carbon filter. I wanted to have an exhaust fan with a HEPA filter as well, but I'll leave that to a mod in the future. The Nevermore V4 is an activated carbon filter that fits well in the VORON 0. I fastened the fan using a strip of VHB—it was a struggle to position it in the middle.

The Nevermore is mounted standing in the side of the printer. Just remember to preload the extrusion with extra M3 nuts when you assemble the printer. (I've heard LDO has nuts you can insert after… Sounds great.)

Panels

With the panel and spool holder at the back. Please ignore the filament path in this picture, it'll interfere with the rear belt when routed behind the umbilical cable.
With the tophat and door installed.

I'm slightly annoyed with the small gaps and holes the printer has (mainly between the tophat and the panels at the bottom half). I later changed some of the parts related to the top hat to match the colorscheme better.

Wiring

Wiring was simpler than for the Trident but it was harder to make the wiring pretty. Thank god I could cover it up.

The underside of the printer with the power, 5V converter, display, and Z-motor.
Back of the printer with the Raspberry Pi and MCU.

Raspberry Pi

The Raspberry Pi only has two cables: power and communication over the GPIO pins, and a display via USB. The Pi communicates and gets power over the TFT connection on the MCU.

Toolhead

The kit came with a toolhead board and breakout board for an umbilical setup:

The toolhead board.
The breakout board.

I did run into an issue where the polarity of the fans on the toolhead board did not match the polarity of the fans on the MCU, leading to some frustration where the fans refused to spin. I ended up swapping the polarity using the cables from the breakout board to the MCU.

Chamber thermistor

The MCU only has two thermistor ports and they're used for the hotend and bed thermistors.
For the chamber thermistor (that's integrated into the breakout board) I use the MOSI pin on the SPI1 8-pin header:

The chamber thermistor connected to MOSI and ground on the SPI1 header.

SKR mini E3 v3

I got an SKR mini E3 v2 with the kit but I replaced it with the v3 for two reasons:

- A FAN output, used for the Nevermore filter
- A filament runout sensor

There's not much to say about the extra FAN output, but the filament runout sensor port on the MCU has 3 pins while the VORON 0.2 style runout sensor only needs 2. I reused the prepared y-endstop I got with the kit, scratched away some of the plastic to make the 2-pin connector fit the 3 pins on the MCU (the +5V pin isn't needed):

The filament runout sensor connected to E0-stop.

Klipper setup

I followed the VORON documentation and chose Mainsail as I've been happy with it on my Trident. I'm not going to describe everything and only call out some issues I had or extra steps I had to take.

MCU firmware

The VORON documentation assumes USB communication so the default firmware instructions didn't work for me. According to BigTreeTech's documentation, if you communicate over USART2 (the TFT port) then you need to compile the firmware with Communication interface set to Serial (on USART2 PA3/PA2). You then need to use this Klipper configuration:

    [mcu]
    serial: /dev/ttyAMA0
    restart_method: command

It took a long time for me to figure out as I had a display connected via USB, so I thought the display was the MCU and got stuck at a Your Klipper version is: xxx MCU(s) which should be updated: xxx error.

Filament runout

    [filament_switch_sensor Filament_Runout_Sensor]
    pause_on_runout: True
    runout_gcode: PAUSE
    switch_pin: PC15

Chamber thermistor

According to this comment this is the config to use the SPI header for a thermistor:

    [temperature_sensor chamber_temp]
    sensor_type: Generic 3950
    sensor_pin: PA7
    pullup_resistor: 10000

Works for me™

Display

It's easy to flash the display directly from the Raspberry Pi although the first firmware I built was too large. There are optional features you can remove but I removed too many, so the configuration for the buttons wasn't accepted. These were the features that ended up working for me:

    [*] Support GPIO "bit-banging" devices
    [*] Support LCD devices
    [ ] Support thermocouple MAX sensors
    [ ] Support adxl accelerometers
    [ ] Support lis2dw and lis3dh 3-axis accelerometers
    [ ] Support MPU accelerometers
    [*] Support HX711 and HX717 ADC chips
    [ ] Support ADS 1220 ADC chip
    [ ] Support ldc1612 eddy current sensor
    [ ] Support angle sensors
    [*] Support software based I2C "bit-banging"
    [*] Support software based SPI "bit-banging"

Sensorless homing

I was nervous setting up sensorless homing, fearing that without a physical switch the printer might decide to burn the motor against the edge or something. (I really have no idea how it works, hence my fear.) In the end it was straightforward. The VORON 0 example firmware was already configured for sensorless homing and the only things I had to do were:

- Enable the X-DIAG and Y-DIAG pins on the board
- Tweak the driver_SGTHRS values (I landed on 85, down from 255)

And now I have sensorless homing working consistently. What confused me was that the sensorless homing guide and the homing macros it links to were slightly different from the VORON 0 example firmware and it wasn't clear if I had to make all the changes or not. (I did not.)
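For reference, sensorless homing in Klipper comes down to a few lines in the TMC2209 driver sections of printer.cfg. Take the following as a rough sketch rather than a drop-in config: the pin names are placeholders (the VORON 0 example config already has the real ones for your board), and only the driver_SGTHRS value reflects my tuning.

    # Sketch of the settings sensorless homing relies on (pins are placeholders)
    [tmc2209 stepper_x]
    uart_pin: PC11              # placeholder, use your board's UART pin
    diag_pin: ^PC0              # placeholder, DIAG output routed to the X endstop pin
    driver_SGTHRS: 85           # StallGuard sensitivity, 255 is most sensitive

    [stepper_x]
    endstop_pin: tmc2209_stepper_x:virtual_endstop
    homing_retract_dist: 0      # recommended by Klipper for sensorless homing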
Some random issues I encountered

In typical 3D printer fashion, you'll always run into various issues, for example:

- I got the mcu shutdown: Timer too close error a few times. I don't know what I did but it only happened a couple of times at the beginning.
- The filament sensor had some consistency issues. Some extra tape on the bearing seemed to fix it.
- The filament keeps getting stuck in the extruder after unload. I'm still having issues, but forgetting to tighten the nozzle and using a too-short PTFE tube didn't help.
- I had trouble getting the filament to stick to the bed. Super frustrating to be honest. I re-calibrated the z offset and thumb screws a bunch of times and (right now) it seems to work fairly well. Even though you're not supposed to need automatic bed leveling for a printer this small, I can't help but miss the "just works" feeling I have with the Trident.

Initial thoughts on the printer

A model I printed for one of my kids. It came out really well.

I haven't printed that much with the printer yet but I have some positive things to say about it:

- Dragon Burner is great when printing PLA (which I use a lot).

But I have some negative things to say too:

- It's loud. The small 5V fan for the Raspberry Pi is horribly loud, but the print movement is also too loud for my taste.
- It's poorly insulated. For example there are gaps between the top hat and the rest of the printer that I don't see a good way to cover up.

Overall though I'm very happy with it. I wouldn't recommend it as a first printer or to someone who just wants a tool that works out of the box, but for people like me who wanted to build a backup/secondary printer I think it's great.

What's next?

With a secondary printer finally up and running I can now start working on some significant mods for my Trident! This is the tentative plan right now:

- Inverted electronics mod.
- Replace Stealthburner with another toolhead, most likely A4T-toolhead.
- Build a BoxTurtle for multi-color support.

But we'll see when I manage to get to it. I'm not in a rush and I should take a little break and play with my VORON 0 and perhaps work on my other dozen or so projects that lie dormant.
I recently came upon a horror story where a developer was forced to switch editors from Neovim to Cursor and I felt I had to write a little to cleanse myself of the disgust I felt.

Two different ways of approaching an editor

I think there are two opposing ways of thinking about the tool that is an editor:

- Refuse to personalize anything and only use the basic features. "An editor is a simple tool I use to get the job done."
- Get stuck in configuration hell and spend tons of time tweaking minor things. "An editor is a highly personalized tool that works the way I want."

These are the extreme ends of the spectrum to make a point and most developers will fall somewhere in between. It's not a static proposition; I've had periods in my life where I've used the same Vim configuration for years and other times I've spent more time rewriting my Neovim config than doing useful things. I don't differentiate between text editors and IDEs as I don't find the distinction very meaningful. They're all just editors.

Freedom of choice is important

Freedom of choice is more to be treasured than any possession earth can give.
David O. McKay

Some developers want zero configuration while others want to configure their editor so it's just right. Either way is fine and I've met excellent developers from both sides. But removing the power of choice is a horrible idea as you're forcing developers to work in a way they're not comfortable with, not productive with, or simply don't like. You're bound to make some of the developers miserable or see them leave (usually the best ones, who can easily find another job).

To explain how important an editor might be to some people, I give you this story about Stephen Hendry—one of the most successful snooker players ever—and how important his cue was to him:

In all the years I've been playing I've never considered changing my cue. It was the first cue I ever bought, aged 13, picked from a cabinet in a Dunfermline snooker centre just because I liked the Rex Williams signature on it. I saved £40 to buy it. It's a cheap bit of wood and it's been the butt of other players' jokes for ages. Alex Higgins said it was 'only good for holdin' up f*g tomatoes!' But I insist on sticking with it. And I've won a lot of silverware, including seven World Championship trophies, with it. It's a one-piece which I carry in a wooden, leather-bound case that's much more expensive than the cue it houses. But in 2003, at Glasgow airport after a flight from Bangkok, it emerges through the rubber flaps on the carousel and even at twenty yards I can see that both case and cue are broken. Snapped almost clean in two, the whole thing now resembling some form of shepherd's crook. The cue comes to where I'm standing, and I pick it up, the broken end dangling down forlornly. I could weep. Instead, I laugh. 'Well,' I say to my stunned-looking friend John, 'that's my career over.'
Stephen Hendry, The Mirror

Small improvements lead to large long-term gains

Kaizen isn't about massive overhauls or overnight success. Instead, it focuses on small, continuous improvements that add up to significant long-term gains.
What is Kaizen? A Guide to Continuous Improvement

I firmly believe that even small improvements are worth it as they add up over time (also see compound interest and how it relates to financial investments). An editor is a great example where even small improvements may have a big effect for the simple reason that you spend so much time in your editor.
I've spent hours almost every day inside (neo)vim since I started using it 15+ years ago. Even simple things like quickly changing text inside brackets (ci[) instead of selecting text with your mouse might save hundreds of hours during a programming career—and that's just one example. Naturally, as a developer you can find small but worthwhile improvements in other areas too, for instance:

- Learning the programming languages and libraries you use a little better.
- Customizing your keyboard and keyboard layout. This is more for comfort and health than speed but that makes it even more important, not less.
- Increasing your typing speed. Some people dismiss typing speed as they say they're limited by their thinking, not typing. But the benefit of typing faster (and more fluidly) isn't really the overall time spent typing vs thinking; it's so you can continue thinking with as little interruption as possible. On some level you want to reduce the time typing in this chain: think… edit, think… edit, think… It's also why the Vim way of editing is so good—it's based on making small edits and returning quickly to normal (thinking) mode.

Some people ask how you can afford to spend time practicing Vim commands or configuring your editor when it takes away time from work. But I ask you: with a programming career of several decades and tens of thousands of hours to spend in front of your computer, how can you afford not to?

Neovim is versatile

During the years I've done different things:

- Switched keyboard and keyboard layout multiple times.
- Been blogging and wrote a book.

The one constant through all of this has been Neovim. Neovim may not have the best language-specific integrations but it does everything well, and the benefit of having the same setup for everything you do is not to be underestimated. It pairs nicely with the idea of adding up small improvements over time; every small improvement that I add to my Neovim workflow will stay with me no matter what I work with.

I did use Emacs at work for years because their proprietary language only had an Emacs integration and I didn't have the time nor energy to create one for Neovim. While Evil made the experience survivable I realized then that I absolutely hate having my work setup be different from my setup at home. People weren't overjoyed with being unable to choose their own editor and I've heard rumors that there's now an extension for Visual Studio.

Neovim is easily extensible

Neovim: a Personalized Development Environment
TJ DeVries

A different take on editing code

I've always felt that Vimscript is the worst part of Vim. Maybe that's a weird statement as the scriptability of Vim is one of its strengths; and to be fair, simple things are very nice:

    nnoremap j gj
    set expandtab

But writing complex things in Vimscript is simply not a great experience. One of the major benefits of Neovim is the addition of Lua as a first-class scripting language. Yes, Lua isn't perfect and it's often too verbose but it's so much better than Vimscript. Lua is the main reason that the Neovim plugin ecosystem is currently a lot more vibrant than in Vim. Making it easier to write plugins is of course a boon, but the real benefit is in how it makes it even easier to make more complex customization for yourself. Just plop down some Lua in the configuration files you already have and you're done. (Emacs worked this out to an even greater extent decades ago.)
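To make that concrete, here is a small, hypothetical sketch of the kind of customization I mean, such as turning off autoformatting for Markdown files inside a blog folder. The vim.b.autoformat flag is an assumption about how your formatting setup is wired; the point is only how little Lua it takes:

    -- Sketch: turn off autoformatting for Markdown files under a blog folder.
    -- Assumes your formatting setup honors a buffer-local vim.b.autoformat flag.
    vim.api.nvim_create_autocmd({ "BufRead", "BufNewFile" }, {
      pattern = "*/blog/*.md",
      callback = function()
        vim.b.autoformat = false
      end,
    })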
One way I use this customizability is to help me when I'm blogging. Maybe you don't need to create something this big, but even small things such as disabling autoformat for certain file types in specific folders can be incredibly useful. Approachability should not be underestimated. While plugins in Lua are understandably the focus today, Neovim can still use plugins written in Vimscript and 99% of your old Vim configuration will still work in Neovim.

Neovim won't go anywhere

The old is expected to stay longer than the young in proportion to their age.
Nassim Nicholas Taleb, "Antifragile"

The last big benefit with Neovim I'll highlight—and why I feel fine with investing even more time into Neovim—is that Neovim will most likely continue to exist and thrive for years if not decades to come. While Vim has—after an impressive 30 years of development—recently entered maintenance mode, activity in Neovim has steadily increased since the fork from Vim more than a decade ago. The amount of high quality plugins, interest in Google trends, and GitHub activity have all been trending upwards. Neovim was also the most desired editor according to the latest Stack Overflow developer survey and the overall buzz and excitement in the community is at an all-time high.

With the self-reinforcing behavior and benefits of investing in a versatile and flexible editor with a huge plugin ecosystem such as Neovim, I see no reason for the trend to taper off anytime soon. Neovim will probably never be as popular as something like VSCode but as an open source project backed by excited developers, Neovim will probably be around long after VSCode has been discontinued for The Next Big Thing.
I've been with Veronica for over a decade now and I think I'm starting to know her fairly well. Yet she still manages to surprise me. For instance, a couple of weeks ago she came and asked me about email security:

I worry that my email password is too weak. Can you help me change email address and make it secure?

It was completely unexpected—but I'm all for it.

The action plan

All heroic journeys need a plan; here's mine:

- Register her own domain (the .com of her surname was available).
- Migrate her email to Fastmail.
- Set up Bitwarden as a password manager.
- Use a YubiKey to secure the important services.

Why a domain?

If you ever want (or need) to change email providers it's very nice to have your own domain. For instance, Veronica has a hotmail.com address but she can't bring that with her if she moves to Fastmail. Worse, what if she gets locked out of her Outlook account for some reason? It might happen if you forget your password, someone breaks into your account, or even by accident. For example, Apple users recently got locked out of their Apple IDs without any apparent reason and Gmail has been notorious for locking out users for no reason. Some providers may be better but this is a systemic problem that can happen at any service.

In almost all cases, your email is your key to the rest of your digital life. The email address is your username and to reset your password you use your email. If you lose access to your email you lose everything. When you control your domain, you can point the domain to a new email provider and continue with your life.

Why pay for email?

One of the first things Veronica told me when I proposed that she'd change providers was that she didn't want to pay. It's a common sentiment online that email must be cheap (or even free). I don't think that email is the area where cost should be the most significant factor. As I argued for in why you should own your email's domain, your email is your most important digital asset. If email is so important, why try to be cheap about it? You should spend your money on the important things and shouldn't spend money on the unimportant things.

Paying for email gives you a couple of nice things:

- Human support. It's all too easy to get shafted by algorithms where you might get banned because you triggered some edge case (such as resetting your password outside your usual IP address).
- Ability to use your own domain. Having a custom domain is a paid feature at most email providers.
- A long-term viable business. How do you run an email company if you don't charge for it? (You sell out your users or you close your business.)

Why a password manager?

The best thing you can do security-wise is to adopt a password manager. Then you don't have to try to remember dozens of passwords (leading to easy-to-remember and duplicate passwords) and can focus on remembering a single (stronger) password, confident that the password manager will remember all the rest. "Putting all your passwords in one basket" is a concern of course, but I think the pros outweigh the cons.

Why a YubiKey?

To take digital security to the next level you should use two-factor authentication (2FA). 2FA is an extra "thing" in addition to your password that you need to be able to log in. It could be a code sent to your phone over SMS (insecure), to your email (slightly better), a code from a 2FA app on your phone such as Aegis Authenticator (good), or from a hardware token (most secure).
It's easy to think that I went with a YubiKey because it's the most secure option, but the biggest reason is that a YubiKey is more convenient than a 2FA app. With a 2FA app you have to whip out your phone, open the 2FA app, locate the correct site, and then copy the TOTP code into the website (quickly, before the code changes). It's honestly not that convenient, even for someone like me who's used this setup for years. With a YubiKey you plug it into a USB port and press it when it flashes. Or on the phone you can use NFC. NFC is slightly more annoying compared to plugging it in as you need to move/hold it in a specific spot, yet it's still preferable to having to jump between apps on the phone.

There are hardware keys other than YubiKey of course. I've used YubiKeys for years and have had a good experience. Don't fix what isn't broken.

The setup

Here are a few quick notes on how I set up her new accounts.

Password management with Bitwarden

The first thing we did was set up Bitwarden as the password manager for her. I chose the family plan so I can handle the billing. To give her access I installed Bitwarden on her devices. I gave her a YubiKey and registered it with Bitwarden for additional security. As a backup I also registered my own YubiKeys on her account; if she loses her key we still have others she can use.

Although it was a bit confusing for her, I think she appreciates not having to remember a dozen different passwords and can simply remember one (stronger) password. We can also share passwords easily via Bitwarden (for newspapers, Spotify, etc). The YubiKey itself is very user friendly and she hasn't run into any usability issues.

Email on Fastmail

With the core security up and running the next step was to change her email:

- Gave her an email address on Fastmail with her own domain (<firstname>@<lastname>.com). She has a basic account that I manage (there's a Duo plan that I couldn't migrate to at this time).
- Secured the account with our YubiKeys and a generated password stored in Bitwarden.
- Bolstered the security of her old Hotmail account by generating a new password and registering our YubiKeys.
- Forwarded all email from her old Hotmail address to her new address.

With this done she has a secure email account with an email address that she owns. As is proper she's been changing her contact information and changing email address in her other services. It's a slow process but I can't be too critical—I still have a few services that use my old Gmail address even though I migrated to my own domain more than a decade ago.

Notes on recovery and redundancy

It's great to worry about phishing, weak passwords, and getting hacked. But for most people the much bigger risk is to forget your password or lose your second factor, and get locked out that way. To reduce the risk of losing access to her accounts we have:

- YubiKeys for all accounts.
- The recovery codes for all accounts written down and secured.
- My own accounts, which can recover her Bitwarden and Fastmail accounts via their built-in recovery functionality.

Perfect is the enemy of good

Some go further than we've done here, others do less, and I think that's fine. It's important to not compare yourself with others too much; even small security measures make a big difference in practice. Not doing anything at all because you feel overwhelmed is worse than doing something, even something as simple as making sure you're using a strong password for your email account.
There are two conflicting forces in play when setting up your computer environment, and it's common to find people stuck at the extreme ends of the spectrum: some programmers refuse to configure or learn their tools at all, while others get stuck re-configuring their setups constantly without any productivity gains to show for it. Finding a balance can be tricky.

With regards to terminals I've been using alacritty for many years. It gets the job done but I don't know if I'm missing out on anything? I've been meaning to look at alternatives like wezterm and kitty but I never got far enough to try them out. On one hand it's just a terminal, what difference could it make?

Enter Ghostty, a terminal so hyped up it made me drop any useful things I was working on and see what the fuss was about. I don't quite get why people hype up a terminal of all things but here we are. Ghostty didn't revolutionize my setup or anything but I admit that Ghostty is quite nice and it has replaced alacritty as my terminal.

I just want a blank canvas without any decorations

One of the big selling points of Ghostty is its native platform integration. It's supposed to integrate well with your window manager so it looks the same and gives you some extra functionality… But I don't know why I should care—I just want a big square without decorations of any kind. You're supposed to be able to simply turn off any window decorations:

    window-decoration = false

At the moment there's a bug that requires you to set some weird GTK settings to fully remove the borders:

    gtk-titlebar = false
    gtk-adwaita = false

It's unfortunate as I haven't done any GTK configuration on my machine (I use XMonad as my window manager and I don't have any window decorations anywhere). There might be some useful native features I don't know about. The password input style is neat for instance, although I'm not sure it does anything functionally different compared to other terminals.

Cursor invert

    cursor-invert-fg-bg = true

In alacritty I've had the cursor invert the background and foreground and you can do that in Ghostty too. I ran into an issue where it interferes with indent-blankline.nvim, making the cursor very hard to spot in indents (taking the color of the indent guides, which is by design low contrast with the background). Annoying, but it gave me the shove I needed to try out different plugins to see if the problem persisted. I ended up with (an even nicer) setup using snacks.nvim that doesn't hide the cursor:

Left: indent-blankline.nvim (cursor barely visible). Right: snacks.nvim (cursor visible and it highlights scope).

Minimum contrast

Unreadable ls output is a staple of the excellent Linux UX. It might look like this:

Super annoying. You can of course configure the ls output colors but that's just for one program and it won't automatically follow when you ssh to another server. Ghostty's minimum-contrast option ensures that the text and background always have enough contrast to be visible:

    minimum-contrast = 1.05

Most excellent. This feature has the potential to break "eye candy" features, such as the Neovim indent line plugins, if you use a low contrast configuration. I still run into minor issues from time to time.

Hide cursor while typing

    mouse-hide-while-typing = true

A small quality-of-life feature is the ability to hide the cursor when typing. I didn't know I needed this in my life.
Consistent font sizing between desktop and laptop

With alacritty I have an annoying problem where I need to use very different font sizes on my laptop and my desktop (8 and 12). This wasn't always the case and I think something may have changed in alacritty, but I'm not sure. Ghostty doesn't have this problem and I can now use the same font settings across my machines (font-size = 16).

Ligature support

The issue for adding ligatures to alacritty was closed eight years ago and even though I wanted to try ligatures I couldn't be bothered to "run a low quality fork". Ghostty seems like the opposite of "low quality" and it renders Iosevka's ligatures very well:

My configured ligatures of Iosevka, rendered in Ghostty.

Overall I feel that the font rendering in Ghostty is a little better than in alacritty, although that might be recency bias. I'm still undecided on ligatures but I love that I don't have to feel limited by the terminal. I use a custom Iosevka build with these Ghostty settings:

    font-family = IosevkaTreeLig Nerd Font
    font-style = Medium
    font-style-bold = Bold
    font-style-italic = Medium Italic
    font-style-bold-italic = Bold Italic
    font-size = 16

Colorscheme

While Ghostty has an absolutely excellent theme selector with a bunch of included themes (ghostty +list-themes), melange-nvim wasn't included, so I had to configure the colorscheme myself. It was fairly straightforward even though the palette = 0= syntax was a bit surprising:

    # The dark variant of melange
    background = #292522
    foreground = #ECE1D7

    palette = 0=#867462
    palette = 1=#D47766
    palette = 2=#85B695
    palette = 3=#EBC06D
    palette = 4=#A3A9CE
    palette = 5=#CF9BC2
    palette = 6=#89B3B6
    palette = 7=#ECE1D7
    palette = 8=#34302C
    palette = 9=#BD8183
    palette = 10=#78997A
    palette = 11=#E49B5D
    palette = 12=#7F91B2
    palette = 13=#B380B0
    palette = 14=#7B9695
    palette = 15=#C1A78E

    # I think it's nice to colorize the selection too
    selection-background = #403a36
    selection-foreground = #c1a78e

I'm happy with Ghostty

In the end Ghostty has improved my setup and I'm happy I took the time to try it out. It took a little more time than "just launch it" but it absolutely wasn't a big deal. The reward was a few pleasant tweaks that have improved my life a little. And perhaps most important of all: I'm now an alpha Nerd that uses a terminal written in Zig.

Did I create a custom highlighter for the Ghostty configuration file just to have proper syntax highlighting for this one blog post? You bet I did. (It's a simple treesitter grammar.)
More in technology
Welcome back to the Prior Art Department; today we'll consider a forgotten yet still extant sidebar of the early 1990s Internet. If you had Internet access at home back then, it was almost certainly dialup modem (like I did); only the filthy rich had T1 lines or ISDN. Moreover, from a user perspective, the hosts you connected to were their own universe. You got your shell account or certain interactive services over Telnet (and, for many people including yours truly, E-mail), you got your news postings from the spool either locally or over NNTP, and you got your files over FTP. It may have originated elsewhere, but everything on the host you connected to was a local copy: the mail you received, the files you could access, the posts you could read. Exceptional circumstances like NFS notwithstanding, what you could see and access was local — it didn't point somewhere else.

Around this time, however, was when sites started referencing other sites, much like the expulsion from Eden. In 1990 both HYTELNET and Archie appeared, which were early search engines for Telnet and FTP resources. Since they relied on accurate information about sites they didn't control, both of them had to regularly update their databases. Gopher, when it emerged in 1991, consciously tried to be a friendlier FTP by presenting files and resources hung from a hierarchy of menus, which could even point to menus on other hosts. That meant you didn't have to locally mirror a service to point people at it, but if the referenced menu was relocated or removed, the link to it was broken and the reference's one-way nature meant there was no automated way to trace back and fix it. And then there was that new World Wide Web thing introduced to the public in 1993: a powerful soup of media and hypertext with links that could point to nearly anything, but they were unidirectional as well, and the sheer number even in modest documents could quickly overwhelm users in a rapidly expanding environment. Not for nothing was the term "linkrot" first attested around 1996, to say nothing of how disoriented a user might get following even perfectly valid links down a seemingly infinite rabbithole.

The idea of richly interlinked documents was hardly new, going back at least to Vannevar Bush's 1945 "memex" idea, imagining not only literature but photographs, sketches and notes all interconnected with various "trails." The concept was exceedingly speculative and never implemented (nor was Ted Nelson's Xanadu "docuverse" in 1965) but Douglas Engelbart's oN-Line System "NLS" at the Stanford Research Institute was heavily inspired by it, leading to the development of the mouse and the 1968 Mother of All Demos. The notion wasn't new on computers, either, with systems such as 1967's Hypertext Editing System on an IBM System/360 Model 50, and early microcomputer implementations like OWL Guide appeared in the mid-1980s on workstations and the Macintosh.

Hermann Maurer, then a professor at the Graz University of Technology in Austria, had been interested in early computer-based information systems for some time, pioneering work on early graphic terminals instead of the pure text ones commonly in use. One of these was the MUPID series, a range of Z80-based systems first introduced in 1981, ostensibly for the West German videotex service Bildschirmtext but also standalone home computers in their own right. This and other work happened at what was the Institutes for Information Processing Graz, or IIG, later the Institute for Information Processing and Computer-Supported New Media (IICM).
Subsequently the IIG started researching new methods of computer-aided instruction by developing an early frame-based hypermedia system called COSTOC (originally "COmputer Supported Teaching Of Computer-Science" and later "COmputer Supported Teaching? Of Course!") in 1985, which by 1989 had been commercialized, was in use at about twenty institutions on both sides of the Atlantic, and contained hundreds of one-hour lessons. COSTOC's successful growth also started to make it unwieldy, and a planned upgrade in 1989 called HyperCOSTOC proposed various extensions to improve authoring, delivery, navigation and user annotation. Meanwhile, it was only natural that Maurer's interest would shift to the growing early Internet, at that time under the U.S. National Science Foundation and by late that year numbering over 150,000 hosts.

Maurer's group decided to consolidate their experiences with COSTOC and HyperCOSTOC into what they termed "the optimal large-scale hypermedia system," code-named Hyper-G (the G, natürlich, for Graz). It would be networked and searchable, preserve user orientation, and maintain correct and up-to-date linkages between the resources it managed. In January 1990, the Austrian Ministry of Science agreed to fund a prototype for which Maurer's grad student Frank Kappe formally wrote the architectural design as his PhD dissertation. Other new information technologies like Gopher and the Web were emerging at the same time, at the University of Minnesota and CERN respectively, and the Hyper-G team worked with the Gopher and W3 teams so that the Hyper-G server could also speak to those servers and clients.

The prototype emerged in January 1992 as the University's new information system TUGinfo. Because Hyper-G servers could also speak Gopher and HTTP, TUGinfo was fully accessible by the clients of the day, but it could also be used with various Hyper-G line-mode clients. One of these was a bespoke tool named UniInfo which doesn't appear to have been distributed outside the University and is likely lost. The other is called the Hyper-G Terminal Viewer, or hgtv (not to be confused with the vapid cable channel), which became a standard part of the server for administration tasks. The success of TUGinfo convinced the European Space Agency to adopt Hyper-G for its Guide and Directory in the fall, after which came a beta native Windows client called Amadeus in 1993 and a beta Unix client called Harmony in 1994. Yours truly remembers accessing some of these servers through a web browser around this time, which is how this whole entry got started trying to figure out where Hyper-G ended up.

While the Internet Archive holds a partial copy of these files, it lacks, for example, any of the executables for the Harmony client. Fortunately there were also at least two books on Hyper-G, one by Hermann Maurer himself, and a second by Wolfgang Dalitz and Gernot Heyer, two partnering researchers then at the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB). Happily these two books have CDs with full software kits, and the later CD from Dalitz and Heyer's book is what we'll use here. I've already uploaded its contents to the Floodgap Gopher server to serve as a supreme case of historical irony.

The fundamental organizing unit in Hyper-G is the collection. A resource must belong to at least one collection, but it may belong to multiple collections, and a collection can span more than one server.
A special type of collection is the cluster, where semantically related materials are grouped together, such as multiple translations, alternate document formats, or multimedia aggregates (e.g., text and various related images or video clips). We'll look at how this appears practically when we fire the system up.

Any resource may link to another resource. Like HTML, these links are called anchors, but unlike HTML, anchors are bidirectional and can occur in any media type like PostScript documents, images, or even audio/video. Because they can be followed backwards, clients can walk the chains to construct a link map, like so:

A link map centered on the man page for grep(1), showing what it connects to, and what those pages connect to.

Hyper-G clients could construct such maps on demand and all of the resources shown can of course be jumped to directly. This was an obvious aid to navigation because you could always find out where you were in relation to anything else. Under the hood, anchors aren't part of the document, or even hidden within it; they're part of the metadata. Here's a real section of a serialized Hyper-G database:

This textual export format (HIF, the Hyper-G Interchange Format) is how a database could be serialized and backed up or transmitted to another server, including internal resources. Everything is an object and has an ID, with resources existing at a specified path (either a global ID based on its IPv4 address or a filesystem path), and the parent indicating the name of the collection the resource belongs to. These fields are all searchable, as are text resources via full-text search, all of which is indexed immediately. You don't need to do anything to set up a site search facility — it comes built-in.

Anchors are connected at either byte ranges or spatial/time coordinates within their resource. This excerpt defines three source anchors, i.e., links that go out to another resource. After uudecoding the text fragment and dumping it, the byte offsets in the anchor sections mean the text ranges for hg_comm.h, hg_comm.c and hg_who.c will be linked to those respective entries as destination anchors in the database. For example, here is the HIF header for hg_comm.h:

These fields are indexed, so the server can walk them backwards or forwards, and the operation is very fast. The title and its contents and even its location can change; the link will always be valid as long as the object exists, and if it's later deleted, the server can automatically find and remove all anchors to it. Analogous to an HTML text fragment, destination anchors can provide a target covering a specific position and/or portion within a text resource. As the process requires creating and maintaining various unique IDs, Hyper-G clients have authoring capability as well, allowing a user to authenticate and then insert or update resources and anchors as permitted. We're going to do exactly that.

Since resources don't have to be modified to create an anchor, even read-only resources such as those residing on a mounted CD-ROM could be linked and have anchors of their own. Instead of having their content embedded in the database, however, they can also appear as external entities pointed to by conventional filesystem paths. This would have been extremely useful for multimedia in particular, considering the typical hard disk size of the early 1990s.
Similarly, Internet resources on external servers could also be part of the collection: while resources that are not Hyper-G will break the link chain, the connection can still be expressed, and at least the object itself can be tracked by the database. The protocol could be Hyper-G, Gopher, HTTP, WAIS, Telnet or FTP. It was also possible to create SQL queries this way, which would be performed live. Later versions of the server even had a CGI-compatible scripting ability.

I mentioned that the user can authenticate to the server, as well as being anonymous. When logged in, authenticated access allows not only authoring and editing but also commenting through annotations (and annotating the annotations). This feature is obviously useful for things like document review, but could also have served as a means for a blog with comments, well before the concept existed formally, or a message board or BBS. Authenticated access is also required for resources with limited permissions, or those that can only be viewed for a limited time or require payment (yes, all this was built-in).

In the text file you can also see markup tags that resemble, and in some cases are the same as, but in fact are not, HTML. These markup tags are part of HTF, or the Hyper-G Text Format, Hyper-G's native text document format. HTF is dynamically converted for Gopher or Web clients; there is a corresponding HTML tag for most HTF tags, eventually supporting much of HTML 3.0 except for tables and forms, and most HTML entities are the same in HTF. Anchor tags in an HTF document are handled specially: upon upload the server strips them off and turns them into database entries, which the server then maintains. In turn, anchor tags are automatically re-inserted according to their specified positions with current values when the HTF resource is fetched or translated.

The server side is split into several processes, including the database server (dbserver) that handles the database and the full-text index server (ftserver) used for document search. The document cache server (dcserver), however, has several functions: it serves and stores local documents on request, it runs CGI scripts (using the same Common Gateway Interface standard as a webserver of the era would have), and it requests and caches resources from remote servers referenced on this one, indicated by the upper 32 bits of the global ID. In earlier versions of the server, clients were responsible for other protocols: a Hyper-G client, if presented with a Gopher or HTTP URL, would have to go fetch it.

The central process is hgserver (no relation to Mercurial). This talks directly to other Hyper-G servers (using TCP port 418), and also directly to clients, with port 418 as a control connection and a dynamically assigned port number for document transfer (not unlike FTP). Since links are bidirectional, Hyper-G servers contact other Hyper-G servers to let them know a link has been made (or, possibly, removed), and then those servers will send them updates.

There are hazards with this approach. One is that it introduces an inevitable race condition between the change occurring on the upstream and any downstream(s) knowing about it, so earlier implementations would wait until all the downstream(s) acknowledged the change before actually making it effective. Unfortunately this ran into a second problem: particularly for major Hyper-G sites like IIG/IICM itself, an upstream server could end up sending thousands of update notifications after making any change at all, and some downstreams might not respond in a timely fashion for any number of reasons.
Later servers use a probabilistic version of the "flood" algorithm from the Harvest resource discovery system (perhaps a future Prior Art entry) where downstreams pass the update along to a smaller subset of hosts, who in turn do the same to another subset, until the message has propagated throughout the network (p-flood). Any temporary inconsistency is simply tolerated until the message makes the rounds. This process was facilitated because all downstreams knew about all other Hyper-G servers, and updates to this master list were sent in the same manner. A new server could get this list from IICM after installation to bootstrap itself, becoming part of a worldwide collection called the Hyper Root.

Then there was the question of money. Gopher had already stumbled here: in 1993 the University of Minnesota infamously announced it would be requiring license fees for commercial use of their Gopher server implementation. Subsequent posts were made to clarify this only applied to UMN gopherd, and then only to commercial users, nor is it clear exactly how much that license fee was or whether anybody actually paid, but the damage was done and the Web — freely available from the beginning — continued unimpeded on its meteoric rise. (UMN eventually relicensed 2.3.1 under the GNU Public License in 2000.) Hyper-G's principals would no doubt have known of this cautionary tale. On the other hand, they also clearly believed that they possessed a fundamentally superior product to existing servers that people would be willing to pay good money for. Indeed, just like they did with COSTOC, the intention of spinning Hyper-G/HyperWave off as a commercial enterprise had been there from the very beginning.

Hyper-G, now renamed HyperWave, officially became a commercial product in June 1996. This shift was facilitated by the fact that no publicly available version had ever been open-source. Early server versions of Hyper-G had no limit on users, but once HyperWave was productized, its free unregistered tier imposed document restrictions and a single-digit login cap (anonymous users could of course still view HyperWave sites without logging in, but they couldn't post anything either). Non-commercial entities could apply for a free license key, something that is obviously no longer possible, but commercial use required a full paid license starting at US$3600 for a 30-user license (in 2025 dollars about $6900) or $30,000 for an unlimited one ($57,600). An early 1997 version of this commercial release appears to be what's available from the partial mirror at the Internet Archive, which comes with a license key limiting you to four users and 500 documents — that expired on July 31, 1997. This license is signed with a 128-bit checksum that might be brute-forceable on a modern machine but you get to do that yourself.

Fortunately, the CD from our HyperWave book, although also published in 1996, predates the commercial shift; it is an offline and complete copy of the Hyper-G FTP server as it existed on April 13, 1996 with all the clients and server software then available. We'll start with the Hyper-G server portion, which on disc offers official builds for SunOS 4.1.3, Solaris 2.2 (SPARC only), HP-UX 9.01 (PA-RISC only), Ultrix 4.2 (MIPS DECstation "PMAX" only), IRIX 3.0 (SGI MIPS), Linux/x86 1.2, OSF/1 3.2 (on Alpha) and a beta build for IBM AIX 4.1.

My first thought was that my Apple Network Server 500 would have been perfect: it has oodles of disk space (like, whole gigabytes, man), a full 200MHz PowerPC 604e upgrade, zippy 20MB/s SCSI-2 and a luxurious 512MB of parity RAM.
I'll just stop here and say that it ended in failure because both the available AIX versions on disc completely lack the Gopher and Web gateways, without which the server will fail to start. I even tried the Internet Archive pay-per-u-ser beta version and it still lacked the Gopher gateway, without which it also failed to start, and the new Web gateway in that release seemed to have glitches of its own (though the expired license key may have been a factor). Although there are ways to hack around the startup problems, doing so only made it into a pure Hyper-G system with no other protocols which doesn't make a very good demo for our purposes, and I ended up spending the rest of the afternoon manually uninstalling it. In fairness it doesn't appear AIX was ever officially supported. Otherwise, I don't have a PA-RISC HP-UX server up and running right now (just a 68K one running HP-UX 8.0), and while the SunOS 4 version should be binary compatible with my Solbourne S3000 running OS/MP 4.1C, I wasn't sure if the 56MB of RAM it has was enough if I really wanted to stress-test it. and it has 256MB of RAM. It runs IRIX 6.5.22 but that should still start these binaries. That settled the server part. For the client hardware, however, I wanted something particularly special. My original Power Macintosh 7300 (now with a G4/800) sitting on top will play a supporting role running Windows 98 in emulation for Amadeus, and also testing our Hyper-G's Gopher gateway with UMN TurboGopher, which is appropriate because when it ran NetBSD it was gopher.floodgap.com. Today, though, it runs Mac OS 9.1, and the planned native Mac OS client for Hyper-G was never finished nor released. Our other choices for Harmony are the same as for the server, sans AIX 4.1, which doesn't seem to have been supported as a client at all. Unfortunately the S3000 is only 36MHz, so it wouldn't be particularly fast at the hypermedia features, and I was concerned about the Indy running as client and server at the same time. But while we don't have any PA-RISC servers running, we do have a couple choices in PA-RISC workstations, and one of them is a especially rare bird. Let's meet ... ruby, named for HP chief architect Ruby B. Lee, who was a key designer of the PA-RISC architecture and its first single-chip implementation. This is an RDI PrecisionBook 160 laptop with a 160MHz PA-7300LC CPU, one of the relatively few PA-RISC chips with support for a categorical L2 cache (1MB, in this case), and the last and most powerful family of 32-bit PA-RISC 1.1 chips. Visualize B160L, even appearing as the same exact model number to HP-UX, it came in the same case as its better-known SPARC UltraBook siblings (I have an UltraBook IIi here as well) and was released in 1998, just prior to RDI's buyout by Tadpole. This unit has plenty of free disk space, 512MB of RAM and runs HP-UX 11.00, all of which should run Harmony splendidly, and its battery incredibly still holds some charge. Although the on-board HP Visualize-EG graphics don't have 3D acceleration, neither does the XL24 in our Indy, and its PA-7300LC will be better at software rendering than the Indy's R4400. Fortunately, the Visualize-EG has very good 2D performance for the time. With our hardware selected, it's time to set up the server side. We'll do this by the literal book, and the book in this case recommends creating a regular user hgsystem belonging to a new group hyperg under which the server processes should run. IRIX makes this very easy. hyperg as the user's sole group membership, ... 
tcsh, which is fine by me because other shells are for people who don't know any better. Logging in and checking our prerequisites: this is the Perl that came with 6.5.22. Hyper-G uses Perl scripts for installation, but they will work under 4.036 and later (Perl 5 isn't required), and pre-built Perl distributions are also included on the CD.

Ordinarily, and this is heavily encouraged in the book and existing documentation, you would run one of these scripts to download, unpack and install the server. At the time you had to first manually request permission from an E-mail address at IICM to download it, including the IPv4 address you were going to connect from, the operating system and of course the local contact. Fortunately some forethought was applied and an alternative offline method was also made available if you already had the tarchive in your possession, or else this entire article might not have been possible. Since the CD is a precise copy of the FTP site, even including the READMEs, we'll just pretend to be the FTP site for dramatic purposes. The description files you see here are exactly what you would have seen accessing TU Graz's FTP site in 1996.

quote PASV
227 Entering Passive Mode (XXX).
ftp> cd /run/media/spectre/Hyper-G/unix/Hyper-G
250-You have entered the Hyper-G archive (ftp://ftp.iicm.tu-graz.ac.at/pub/Hyper-G).
250-================================================================================
250-
250-What's where:
250-
250- Server      Hyper-G Server Installation Script
250- UnixClient  the vt100 client for UNIX - Installation Script
250- Harmony     Harmony (the UNIX/X11 client)
250- Amadeus     Amadeus (the PC/Windows client)
250- VRweb       VRweb (free VRML browser for Hyper-G, Mosaic & Gopher)
250- papers      documentation on Hyper-G (mainly PostScript)
250- talk        slides & illustrations we use for Hyper-G talks
250-
250-Note: this directory is mirrored daily (nightly) to:
250-
250- Australia   ftp://ftp.cinemedia.com.au/pub/Hyper-G
250-             ftp://gatekeeper.digital.com.au/pub/Hyper-G
250- Austria     ftp://ftp.tu-graz.ac.at/pub/Hyper-G
250- Czech Rep.  ftp://sunsite.mff.cuni.cz/Net/Infosystems/Hyper-G
250- Germany     ftp://elib.zib-berlin.de/pub/InfoSystems/Hyper-G
250-             ftp://ftp.ask.uni-karlsruhe.de/pub/infosystems/Hyper-G
250- Italy       ftp://ftp.esrin.esa.it/pub/Hyper-G
250- Poland      ftp://sunsite.icm.edu.pl/pub/Hyper-G
250- Portugal    ftp://ftp.ua.pt/pub/infosystems/www/Hyper-G
250- Spain       ftp://ftp.etsimo.uniovi.es/pub/Hyper-G
250- Sweden      ftp://ftp.sunet.se/pub/Networked.Information.Retrieval/Hyper-G
250- UK          ftp://unix.hensa.ac.uk/mirrors/Hyper-G
250- USA         ftp://ftp.ncsa.uiuc.edu/Hyper-G
250-             ftp://mirror1.utdallas.edu/pub/Hyper-G
250 Directory successfully changed.
ftp> cd Server
250 Directory successfully changed.
ftp> get Hyper-G_Server_21.03.96.SGI.tar.gz
local: Hyper-G_Server_21.03.96.SGI.tar.gz remote: Hyper-G_Server_21.03.96.SGI.tar.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for Hyper-G_Server_21.03.96.SGI.tar.gz (2582212 bytes).
226 Transfer complete.
2582212 bytes received in 3.12 seconds (808.12 Kbytes/s)
ftp> get Hyper-G_Tools_21.03.96.SGI.tar.gz
local: Hyper-G_Tools_21.03.96.SGI.tar.gz remote: Hyper-G_Tools_21.03.96.SGI.tar.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for Hyper-G_Tools_21.03.96.SGI.tar.gz (3337367 bytes).
226 Transfer complete.
3337367 bytes received in 3.95 seconds (825.82 Kbytes/s)
ftp> ^D
221 Goodbye.
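Before unpacking anything, it doesn't hurt to confirm the transfers survived intact. A minimal sketch on the Indy, assuming gzip is somewhere on the path (stock IRIX tar doesn't decompress gzip on its own, hence the pipe):

% gzip -t Hyper-G_Server_21.03.96.SGI.tar.gz && echo server OK
% gzip -t Hyper-G_Tools_21.03.96.SGI.tar.gz && echo tools OK
% gzip -dc Hyper-G_Server_21.03.96.SGI.tar.gz | tar tvf - | head

gzip -t exits zero only if the whole file decompresses cleanly, and the quick tar listing is just a sanity check that the archive looks like a Hyper-G tree before we hand it to the installer.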
The server software proper gets installed under ~hgsystem, but a central directory (by default /usr/local/Hyper-G) holds links to it as a repository. We'll create that and sign it over to hgsystem as well.

Next, we unpack the server (first) package and start the offline installation script. This package includes the server binaries, server documentation and HTML templates. Text in italics was my response to prompts, which the script stores in configuration files and also in your environment variables, and patches the startup scripts for hgsystem to instantiate them on login.

Floodgap Hyper-G
Full internet host name of this machine: indy.floodgap.com
installed bin/scripts/hginstserver
installed bin/SGI/dbcontrol
[...]
installed HTML/ge/options.html
installed HTML/ge/result_head.html
installed HTML/ge/search.html
installed HTML/ge/search_simple.html
installed HTML/ge/status.html

The developers did make this piece open-source so a paranoid sysadmin could see what they were running as root (in this case setuid).

Now for the tools. This includes administration utilities but also the hgtv client and additional documentation. The install script is basically the same for the tools as for the server. Last but not least, we will log out and log back in to ensure that our environment is properly set up, and then set the password on the internal hgsystem user (which is not the same as hgsystem, the Unix login). This account is set up by default in the database and to modify it we'll use the hgadmin tool. This tool is always accessible from the hgsystem login in case the database gets horribly munged.

That should be all that was necessary (NARRATOR: It wasn't.), but starting up the server still failed. It's possible the tar offline install utility wasn't updated as often as the usual one; regardless, it seemed out of sync with what the startup script was actually looking for. Riffling through the Perl and shell-script code to figure out the missing piece, it turns out I had to manually create ~hgsystem/HTF and ~hgsystem/server, then add two more environment variables to ~hgsystem/.hgrc (nothing to do with Mercurial). Logging out and logging back in to refresh the environment, ... we're up!

Immediately I decided to see if the webserver would answer. It did, buuuuut ...

(uname identifies this PrecisionBook 160 as a 9000/778, which is the same model number as the Visualize B160L workstation.)

Netscape Navigator Gold 3.01 is installed on this machine, and we're going to use it later, but I figured you'd enjoy a crazier choice. Yes, you read that right ... on those platforms as well as Tru64, but no version for them ever emerged. After releasing 5.0 SP1 in 2001, Microsoft cited low uptake of the browser and ended all support for IE Unix the following year. As for Mainsoft, they became notorious for the 2004 Microsoft source code leak when a Linux core in the file dump fingered them as the source; Microsoft withdrew WISE completely, eliminating MainWin's viability as a commercial product, though Mainsoft remains in business today as Harmon.ie since 2010. IE Unix was a completely different codebase from what became Internet Explorer 5 on Mac OS X (and a completely different layout engine, Tasman) and of course is not at all related to modern Microsoft Edge either.

Because there are Y2K issues, the server fails to calculate its own uptime, but everything else basically works.
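If you'd rather watch the gateway talk without a browser in the way, plain old telnet works too. A sketch, assuming the Web gateway ended up on the standard HTTP port (adjust the port if you configured it differently):

% telnet indy.floodgap.com 80
GET / HTTP/1.0

Send the request followed by a blank line and you should get an ordinary HTTP response back with the gateway's HTML rendition of the server's contents, which is also a quick way to confirm it is reachable from the other machines on the network before we move over to the client.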
ruby:/pro/harmony/% uname -a
HP-UX ruby B.11.00 A 9000/778 2000295180 two-user license
ruby:/pro/harmony/% model
9000/778/B160L
ruby:/pro/harmony/% grep -i b160l /usr/sam/lib/mo/sched.models
B160L 1.1e PA7300
ruby:/pro/harmony/% su
Password:
# echo itick_per_usec/D | adb -k /stand/vmunix /dev/mem
itick_per_usec:
itick_per_usec: 160
# ^D
ruby:/pro/harmony/% cat /var/opt/ignite/local/hw.info
disk: 8/16/5.0.0 0 sdisk 188 31 0 ADTX_AXSITS2532R_014C 4003776 /dev/rdsk/c0t0d0 /dev/dsk/c0t0d0 -1 -1 5 1 9
disk: 8/16/5.1.0 1 sdisk 188 31 1000 ADTX_AXSITS2532R_014C 6342840 /dev/rdsk/c0t1d0 /dev/dsk/c0t1d0 -1 -1 4 1 9
cdrom: 8/16/5.4.0 2 sdisk 188 31 4000 TOSHIBA_CD-ROM_XM-5701TA 0 /dev/rdsk/c0t4d0 /dev/dsk/c0t4d0 -1 -1 0 1 0
lan: 8/16/6 0 lan0 lan2 0060B0C00809 Built-in_LAN 0
graphics: 8/24 0 graph3 /dev/crt0 INTERNAL_EG_DX1024 1024 768 16 755548327
ext_bus: 8/16/0 1 CentIf n/a Built-in_Parallel_Interface
ext_bus: 8/16/5 0 c720 n/a Built-in_SCSI
ps2: 8/16/7 0 ps2 /dev/ps2_0 Built-in_Keyboard/Mouse
processor: 62 0 processor n/a Processor

an old version ready to run. Otherwise images and most other media types are handled by Harmony itself, so let's grab and set up the client now. We'll want both Harmony proper and, later when we play a bit with the VRML tools, VRweb. Notionally these both come in Mesa and IRIX GL or OpenGL versions, but this laptop has no 3D acceleration, so we'll use the Mesa builds, which are software-rendered and require no additional 3D support.

cd /run/media/spectre/Hyper-G/unix/Hyper-G/Harmony
250-
250-You have entered the Harmony archive.
250-
250-The current version is Release 1.1
250-and there are a few patched binaries in the
250-patched-bins directory.
250-
250-Please read INSTALLATION for full installation instructions.
250-
250-Mirrors can be found at:
250-
250- Australia    ftp://ftp.cinemedia.com.au/pub/Hyper-G
250- Austria      ftp://ftp.tu-graz.ac.at/pub/Hyper-G/
250- Germany      ftp://elib.zib-berlin.de/pub/InfoSystems/Hyper-G/
250-              ftp://ftp.ask.uni-karlsruhe.de/pub/infosystems/Hyper-G
250- Italy        ftp://ftp.esrin.esa.it/pub/Hyper-G
250- Spain        ftp://ftp.etsimo.uniovi.es/pub/Hyper-G
250- Sweden       ftp://ftp.sunet.se/pub/Networked.Information.Retrieval/Hyper-G
250- New Zealand  ftp://ftp.cs.auckland.ac.nz/pub/HMU/Hyper-G
250- UK           ftp://unix.hensa.ac.uk/mirrors/Hyper-G
250- USA          ftp://ftp.utdallas.edu/pub/Hyper-G
250-              ftp://ftp.ncsa.uiuc.edu/Hyper-G
250-              ftp://ftp.ua.pt/pub/infosystems/www/Hyper-G
250-
250-and a distributing WWW server:
250-
250- http://ftp.ua.pt/infosystems/www/Hyper-G
250-
250 Directory successfully changed.
ftp> get harmony-1.1-HP-UX-A.09.01-mesa.tar.gz
get harmony-1.1-HP-UX-A.09.01-mesa.tar.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for harmony-1.1-HP-UX-A.09.01-mesa.tar.gz (11700275 bytes).
226 Transfer complete.
11700275 bytes received in 11.95 seconds (956.02 Kbytes/s)
ftp> cd ../VRweb
250-
250-ftp://ftp.iicm.tu-graz.ac.at/pub/Hyper-G/VRweb/
250-... here you find the VRweb (VRML 3D Viewer) distribution.
250-
250-The current release is 1.1.2 of Mar 13 1996.
250-
250-Note: this directory is mirrored daily (nightly) to:
250-
250- Australia    ftp://ftp.cinemedia.com.au/pub/Hyper-G/VRweb
250-              ftp://gatekeeper.digital.com.au/pub/Hyper-G/VRweb
250- Austria      ftp://ftp.tu-graz.ac.at/pub/Hyper-G/VRweb
250- Czech Rep.   ftp://sunsite.mff.cuni.cz/Net/Infosystems/Hyper-G/VRweb
250- Germany      ftp://elib.zib-berlin.de/pub/InfoSystems/Hyper-G/VRweb
250-              ftp://ftp.ask.uni-karlsruhe.de/pub/infosystems/Hyper-G/VRweb
250- Italy        ftp://ftp.esrin.esa.it/pub/Hyper-G/VRweb
250- Poland       ftp://sunsite.icm.edu.pl/pub/Hyper-G/VRweb
250- Portugal     ftp://ftp.ua.pt/pub/infosystems/www/Hyper-G/VRweb
250- Spain        ftp://ftp.etsimo.uniovi.es/pub/Hyper-G/VRweb
250- Sweden       ftp://ftp.sunet.se/pub/Networked.Information.Retrieval/Hyper-G/VRweb
250- UK           ftp://unix.hensa.ac.uk/mirrors/Hyper-G/VRweb
250- USA          ftp://ftp.ncsa.uiuc.edu/Hyper-G/VRweb
250-              ftp://mirror1.utdallas.edu/pub/Hyper-G/VRweb
250 Directory successfully changed.
ftp> cd UNIX
250-This directory contains the VRweb 1.1.2e distribution for UNIX/X11
250-
250-
250-vrweb-1.1.2e-[GraphicLibrary]-[Architecture]:
250- VRweb scene viewer for viewing VRML files
250- as external viewer for your WWW client.
250-
250-harscened-[GraphicLibrary]-[Architecture]:
250- VRweb for Harmony. Only usable with Harmony, the Hyper-G
250- client for UNIX/X11.
250-
250-[GraphicLibry]: ogl ... OpenGL (available for SGI, DEC Alpha)
250- mesa ... Mesa (via X protocol; for all platforms)
250-
250-help.tar.gz
250- on-line Help, includes installation guide
250-
250-vrweb.src-1.1.2e.tar.gz
250- VRweb source code
250-
250 Directory successfully changed.
ftp> get vrweb-1.1.2e-mesa-HPUX9.05.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for vrweb-1.1.2e-mesa-HPUX9.05.gz (1000818 bytes).
226 Transfer complete.
1000818 bytes received in 1.23 seconds (794.05 Kbytes/s)
ftp> ^D
221 Goodbye.

Everything gets unpacked into the /pro logical volume, which has ample space. Because this is for an earlier version of HP-UX, although it should run, we'd want to make sure it wasn't using outdated libraries or paths. Unfortunately, checking for this in advance is made difficult by the fact that ldd in HP-UX 11.00 will only show dependencies for 64-bit binaries and this is a 32-bit binary on a 32-bit CPU. So we have to do it the hard way (see the aside below). For some reason symlinks for the shared libraries below didn't exist on this machine, and I had to discover that one by one.

/usr/lib/X11R5/libX11.1
lrwxr-xr-x 1 root sys 23 Jan 17 2001 /usr/lib/libX11.2 -> /usr/lib/X11R6/libX11.2
lrwxr-xr-x 1 root sys 23 Jan 17 2001 /usr/lib/libX11.3 -> /usr/lib/X11R6/libX11.3
ruby:/pro/% su
Password:
# cd /usr/lib
# ln -s libX11.1 libX11.sl
# ^D
ruby:/pro/% harmony/bin/harmony
/usr/lib/dld.sl: Can't open shared library: /usr/lib/libXext.sl
/usr/lib/dld.sl: No such file or directory
Abort
ruby:/pro/% ls -l /usr/lib/libXext*
lrwxr-xr-x 1 root sys 24 Jan 17 2001 /usr/lib/libXext.1 -> /usr/lib/X11R5/libXext.1
lrwxr-xr-x 1 root sys 24 Jan 17 2001 /usr/lib/libXext.2 -> /usr/lib/X11R6/libXext.2
lrwxr-xr-x 1 root sys 24 Jan 17 2001 /usr/lib/libXext.3 -> /usr/lib/X11R6/libXext.3
ruby:/pro/% su
Password:
# cd /usr/lib
# ln -s libXext.1 libXext.sl
# ^D
ruby:/pro/% harmony/bin/harmony
--- Harmony Version 1.1 (MESA) of Fri 15 Dec 1995 ---
Enviroment variable HARMONY_HOME not set
ruby:/pro/harmony/% gunzip vrweb-1.1.2e-mesa-HPUX9.05.gz
ruby:/pro/harmony/% file vrweb-1.1.2e-mesa-HPUX9.05
vrweb-1.1.2e-mesa-HPUX9.05: PA-RISC1.1 shared executable dynamically linked
ruby:/pro/harmony/% chmod +x vrweb-1.1.2e-mesa-HPUX9.05
ruby:/pro/harmony/% ./vrweb-1.1.2e-mesa-HPUX9.05
can't open DISPLAY
ruby:/pro/harmony/% mv vrweb-1.1.2e-mesa-HPUX9.05 bin/vrweb

We also need the -hghost option passed to Harmony or it will connect to the IICM by default.
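As an aside on the ldd problem above: HP-UX does give you another way to see what a 32-bit executable wants. chatr dumps the attributes recorded in the executable's header, including its shared library list, so you can spot the missing .sl names up front instead of letting dld.sl abort on them one at a time. A sketch:

% chatr /pro/harmony/bin/harmony | more

Anything in that list ending in .sl that doesn't yet exist under /usr/lib needs a symlink along the lines of the libX11.sl and libXext.sl ones above.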
starmony
#!/bin/csh
setenv HARMONY_HOME /pro/harmony
set path=($HARMONY_HOME/bin $path)
setenv XAPPLRESDIR $HARMONY_HOME/misc/
$HARMONY_HOME/bin/harmony -hghost indy &
^D
ruby:/pro/harmony/% ps -fu spectre
UID PID PPID C STIME TTY TIME COMMAND
spectre 2172 2170 0 14:55:25 pts/0 0:00 /usr/bin/tcsh
spectre 1514 1 0 12:34:19 ? 0:00 /usr/dt/bin/ttsession -s
spectre 1535 1534 0 13:18:41 pts/ta 0:01 -tcsh
spectre 1523 1515 0 12:34:20 ? 0:04 dtwm
spectre 1515 1510 0 12:34:19 ? 0:00 /usr/dt/bin/dtsession
spectre 2210 1535 3 15:28:44 pts/ta 0:00 ps -fu spectre
spectre 1483 1459 0 12:34:13 ? 0:00 /usr/dt/bin/Xsession /usr/dt/bin/Xsession
spectre 1510 1483 0 12:34:15 ? 0:00 /usr/bin/tcsh -c unsetenv _ PWD;
spectre 2169 1523 0 14:55:24 ? 0:00 /usr/dt/bin/dtexec -open 0 -ttprocid 1.1eAEIw 01 1514 134217
spectre 2170 2169 0 14:55:24 ? 0:00 /usr/dt/bin/dtterm
spectre 2194 2193 0 15:01:47 pts/0 0:01 hartextd -c 49285
spectre 2193 1 0 15:01:42 pts/0 0:06 /pro/harmony/bin/harmony -hghost indy

We could just do everything as hgsystem anyway, but in a larger deployment you'd of course have multiple users with appropriate permissions. Hyper-G users are specific to the server; they do not have a Unix uid. Users may be in groups and may have multiple simultaneously valid passwords (this is to facilitate automatic login from known hosts, where the password can be unique to each host). Each user gets their own "home collection" that they may maintain, like a home directory. Each user also has a credit account which is automatically billed when pay resources are accessed, though the Hyper-G server is agnostic about how account value is added. We can certainly whip out hgadmin again and do it from the command line, but we can also create users from a graphical administration tool that comes as part of Harmony. This tool is haradmin, or the Harmony Administrator.

DocumentType is what we'd consider the "actual" object type. By default, all users, including anonymous ones, can view objects, but cannot write or delete ("unlink") anything; only the owner of the object and the system administrators can do those. In practical terms the unprivileged user with no group memberships that we created has the same permissions as an anonymous drive-by right now. However, because this user is authenticated, we can add permissions to it later.

I've censored the most significant word in this and other screenshots with global IDs for this machine because it contains the Indy's IPv4 address and you naughty little people out there don't need to know the details of my test network.

% hifimport rootcollection cleaned-tech.hif
Username:hgsystem
Password:
hifimport: HIF 1.0
hifimport: #
hifimport: #
hifimport: Collection en:Technical Documentation on Hyper-G
hifimport: Text en:Hyper-G Anchor Specification Version 1.0
[...]
hifimport: # END COLLECTION obj.rights
hifimport: # END COLLECTION hg_server
hifimport: Collection en:Software User Manuals (man-pages)
hifimport: Text en:dbserver.control (1)
hifimport: # already visited: 0x000003b7 (en:dcserver (1))
hifimport: # already visited: 0x000003bf (en:ftmkmirror (1))
hifimport: # already visited: 0x000003c2 (en:ftquery (1))
hifimport: # already visited: 0x000003c1 (en:ftserver (1))
hifimport: # already visited: 0x000003be (en:ftunzipmirror (1))
hifimport: # already visited: 0x000003bd (en:ftzipmirror (1))
hifimport: Text en:gophgate (1)
hifimport: Text en:hgadmin (1)
[...]
hifimport: Text en:Clark J.: SGMLS
hifimport: * Object already exists. Not replaced.
hifimport: Text en:Goldfarb C. F.: The SGML Handbook
hifimport: Text en:ISO: Information Processing - 8-bit single-byte coded graphic character sets - Part 1: Latin alphabet No. 1, ISO IS 8859-1
[...]
hifimport: # END COLLECTION HTFdoc
hifimport: Text en:Hyper-G Interchange Format (HIF)
hifimport: # END COLLECTION hyperg/tech
PASS 2: Additional Collection Memberships
C 0x00000005 hyperglinks(0xa1403d02)
hifimport. Error: No Collection hyperglinks(0xa1403d02)
C 0x00000005 technik-speziell(0x83f65901)
hifimport. Error: No Collection technik-speziell(0x83f65901)
C 0x0000015c ~bolle(0x83ea6001)
hifimport. Error: No Collection ~bolle(0x83ea6001)
C 0x00000193 ~smitter
hifimport. Error: No Collection ~smitter
[...]
PASS 3: Source Anchors
SRC Doc=0x00000007 GDest=0x811b9908 0x00187b01
SRC Doc=0x00000008 GDest=0x811b9908 0x00064f74
[...]
hifimport: Warning: Link Destination outside HIF file: 0x00000323
hifimport: ... linking to remote object.
hifimport. Error: Could not make src anchor: remote server not responding
[...]
SRC Doc=0x000002e1 GDest=0x811b9908 0x000b5c5b
SRC Doc=0x000002e1 GDest=0x811b9908 0x000b5c5a
hifimport: Inserted 75 collections.
hifimport: Inserted 528 documents.
hifimport: Inserted 596 anchors.

(The first argument is the collection to import into, here the rootcollection). The import then proceeds in three passes. The first part just loads the objects and the second one sets up additional collections. This dump included everything, including user collections that did not exist, so those additional collections were (in this case desirably) not created. In the third and final pass, all new anchors added to the database are processed for validity. Notice that one of them referred to an outside Hyper-G server that the import process duly attempted to contact, as it was intended to. In the end, the import process successfully added 75 new collections and 528 documents with 596 anchors. Instant content!

This is hgtv, the line-mode Hyper-G client, on the Indy. This client, or at least this version of the client, does not start at the Hyper Root and we are immediately within our own root collection. The number is the total number of documents and subdocuments within it. hgtv understands HTF, so we can view the documents directly. Looks like it all worked. Let's jump back into Harmony and see how it looks there too, this time logged in as hgsystem.

You could use emacs, but this house is Team vi, and we will brook no deviations. For CDE, though, dtpad would be better. You can change the X resource for Harmony.Text.editcommand accordingly (a sketch follows at the end of this section). Here I've written a very basic HTF file that will suffice for the purpose. The source material is the Floodgap machine room page (which hasn't been updated since 2019, but I'll get around to it soon enough). As a point of contrast I've elected to do this as a cluster rather than a collection so that you can see the difference. Recall from the introduction that a cluster is a special kind of collection intended for use where the contents are semantically equivalent or related, like alternative translations of text or alternative formats of an image. This is a rather specific case so for most instances, admittedly including this one, you'd want a collection. In practice, however, Hyper-G doesn't really impose this distinction rigidly and as you're about to see, a cluster mostly acts like a collection by a different term — except where it doesn't.

<map> for imagemaps.

The link here uses alex, our beige-box Am5x86 DOS games machine, as the destination.

Both members of the cluster can be open at the same time — I guess for people who might do text descriptions for the visually impaired, or something.
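Since XAPPLRESDIR in the starmony wrapper points at $HARMONY_HOME/misc/, that is presumably where Harmony's app-defaults live, and swapping the editor is a one-line resource change. A sketch only; I haven't verified whether the resource wants a bare command or a full command line, so treat the value as illustrative:

Harmony.Text.editcommand: dtpad

Drop a line like that into the Harmony resources there, or into your own X resources, and restart Harmony to pick it up.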
Clusters appear differently in two other respects we'll get to a little later on.

VRML (then the Virtual Reality Markup Language) was nearly a first-class citizen. Most modern browsers don't support VRML and its technological niche is now mostly occupied by X3D, which is largely backwards-compatible with it. Like older browsers, Hyper-G needs an external viewer (in this case VRweb, which we loaded onto the system as part of the Harmony client install), but once installed VRML becomes just as smoothly integrated into the client as PostScript documents. Let's create a new collection with the sample VRML documents that came with VRweb.

Harmony's 3D landscape view is reminiscent of SGI's file system navigator (fsn), as most famously seen in 1993's Jurassic Park. Jurassic Park is a candy store for vintage technology sightings, notably the SGI Crimson, Macintosh Quadra 700 and what is likely an early version of the Motorola Envoy, all probably due to Michael Crichton's influence. This sort of visualization of online resources was made possible by the fact that edges could be walked in both directions and retrieved rapidly from the database. It also reminds me of GopherVR, which came out in 1995 and post-dates both FSN and earlier versions of Harmony, but it now renders a lot better with some content. (I do need to get around to updating GopherVR for 64-bit.)

hgtv did. This problem likely never got fixed because the beta versions on the Internet Archive unfortunately appear to have removed Gopher support entirely.

Rodney Anonymous.

C:\ ... \Program Files as \PROGRA~1. This version of Amadeus is distributed in multiple floppy-sized archives. That was a nice thing at the time but today it's also really obnoxious. Here are all the points at which you'll need to "switch disks" (e.g., by unzipping them to the installation folder):

Simpsons music).

In some ways Amadeus owes more to hgtv than it does to Harmony, which to be sure was getting the majority of development resources. In particular, there isn't a tree view, just going from collection to collection like individual menus.

For the next demonstration I created two new users (blofeld and spectre) and put them into the lusers group. The rights get set to W:g lusers, which keeps the default read and unlink permissions, but specifically allows users in group lusers to create and modify documents here. As blofeld, because you can never say never again, I will now annotate that "thread." (Oddly, this version of Harmony appears to lack an option to directly annotate from the text view. I suspect this oversight was corrected later.) spectre can post too. Because blofeld and spectre have the same permissions, and the default is to allow anyone in the group to write, without taking some explicit steps they can then edit each other's posts with impunity. To wit, we'll deface blofeld's comment.

now?

That concludes our demonstration, so on the Indy we'll type dbstop to bring down the database and finish our story.

Hyper-G was spun off commercially as HyperWave, with offices in Germany and Austria, later expanding to the US and UK. Gopher no longer had any large-scale relevance and the Web had clearly become dominant, causing Hyperwave to also gradually de-emphasize its own native client in lieu of uploading and managing content with more typical tools like WebDAV, Windows Explorer and Microsoft Word, and administering and accessing it with a regular web browser (offline operation was still supported), as depicted in this screenshot from the same year. Along the way the capital W got dropped, becoming merely Hyperwave. In all of these later incarnations, however, the bidirectional linkages and strict hierarchy remained intact as foundational features in some form, even though the massive Hyper Root concept contemplated by earlier versions ultimately fell by the wayside.
Hyperwave continues to be sold as a commercial product today, with the company revived after a 2005 reorganization, and the underlying technology of Hyper-G still seems to be a part of the most current release. As proof, at the IICM — now, after several name changes, called the Institute of Human-Centred Computing, with Professor Frank Kappe as its first deputy — there's still a HyperWave [sic] IS/7 server. It has a home collection just like ours with exactly one item, the home page of Hermann Maurer, who as of this writing still remains on Hyperwave's advisory board. Although later products have attempted to do similar sorts of large-scale document and resource management, Hyper-G pioneered the field by years, and even smaller such tools owe it a debt either directly or by independent convergent evolution. That makes it more than appropriate to appear in the Prior Art Department, especially since some of its more innovative solutions to hypermedia's intrinsic navigational issues have largely been forgotten — or badly reinvented. That said, unlike many such examples of prior art, it has managed to quietly evolve and survive to the present, even if by doing so it lost much of its unique nature and some of its wildest ideas. Of course, without those wild ideas, this article would have been a great deal less interesting. You can access the partial mirror on the Internet Archive, or our copy of the CD with everything I've demonstrated here and more on the Floodgap gopher server.
At some point in your life, you’ve probably had a doctor or dentist ask you if you clench your jaw or grind your teeth — particularly at night. That is called bruxism and it can have serious effects, such as tooth damage, jaw pain, headaches, and more. But because it often happens when you’re asleep […] The post This DIY bruxism detector prevents jaw clenching during sleep appeared first on Arduino Blog.
I don’t like laptops with loud cooling fans in them. Quite a controversial position, I know. But really, they do suck. A laptop can be great to use, have a fantastic keyboard, sharp display, lots of storage and a fast CPU, and all of that can be ruined by one component: the cooling fan. Laptop fans are small, meaning that they have to run faster to have any meaningful cooling effect, which means that they are usually very loud and often have a high-pitched whine to them, making them especially obnoxious. Sometimes it feels like a deliberate attack on one of my senses.

Fans introduce a maintenance burden. They keep taking in dust, which tends to accumulate at the heat sink. If you skip maintenance, then you’ll see your performance drop and the laptop will get notably hot, which may contribute to a complete hardware failure.

We’ve seen tremendous progress in the world of consumer CPUs over the last decade. Power consumption is much lower while idle, processors can do a lot more work in the same power envelope, and yet most laptops that I see in use are still actively cooled by an annoying-ass cooling fan.1 And yet we keep buying them. But it doesn’t have to be this way.

My colleagues who have switched to Apple Silicon laptops are sometimes surprised to hear the fan on their laptop because it’s a genuinely rare occurrence for them. Most of the time it just sits there doing nothing, and when it does come on, it’s whisper-quiet. And to top it off, some models, such as the MacBook Air series, are completely fanless. Meanwhile, those colleagues who run Lenovo ThinkPads with Ryzen 5000 and 7000 series APUs (that includes me) have audible fans, and at the same time the build times for the big Java monolith that we maintain are significantly slower (~15%) compared to the fan-equipped MacBooks.2

We can fix this, if we really want to. As a first step, you can change to a power saving mode on your current laptop. This will likely result in your CPU and GPU running more efficiently, which also helps avoid turning the cooling fan on. You will have to sacrifice some performance as a result of this change, which will not be a worthwhile trade-off for everyone. If you are OK with risking damage to your hardware, you can also play around with setting your own fan curve. The CPU and GPU throttling technology is quite advanced nowadays, so you will likely be fine in this area, but other components in the laptop, such as the battery, may not be very happy with higher temperatures.

After doing all that, the next step is to avoid buying a laptop that abuses your sense of hearing. That’s the only signal that we can send to manufacturers that they will actually listen to. Money speaks louder than words. What alternative options do we have? Well, there are the Apple Silicon MacBooks, and, uhh, that one ThinkPad with an ARM CPU, and a bunch of Chromebooks, and a few Windows tablets I guess. I’ll be honest, I have not kept a keen eye on recent developments, but a quick search online for fanless laptops pretty much looks as I described. Laptops that you’d actually want to get work done on are completely missing from that list, unless you like Apple.3 In a corporate environment the choice of laptop might not be fully up to you, but you can do your best to influence the decision-makers.

There’s one more alternative: ask your software vendor to not write shoddily thrown together software that performs like shit. Making a doctor appointment should not make my cooling fan go crazy.
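As a practical aside before moving on: the power saving switch mentioned earlier is a one-liner on a Linux ThinkPad like mine. A sketch, assuming either power-profiles-daemon is installed or the kernel is recent enough to expose the ACPI platform profile:

# with power-profiles-daemon
powerprofilesctl set power-saver

# or directly via sysfs on machines that expose it
echo low-power | sudo tee /sys/firmware/acpi/platform_profile

It trades some peak performance for less heat, which is exactly the trade-off described above.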
Not only is slow and inefficient software discriminatory towards those who cannot afford decent computer hardware, it’s also directly contributing to the growing e-waste problem by continuously raising the minimum hardware requirements for the software that we rely on every day.

Written on a Lenovo ThinkPad X395 that just won’t stop heating up and making annoying fan noises.

1. Passive vs active cooling? More like passive vs annoying cooling. ↩︎
2. I dream of a day where Asahi Linux runs perfectly on an Apple Silicon MacBook. It’s not production ready right now, but the developers have done an amazing job so far! ↩︎
3. I like the hardware that Apple produces; it’s the operating system that I heavily dislike. ↩︎
We’re excited to introduce two tiny additions to the Arduino ecosystem that will make a big difference: the Nano Connector Carrier and seven new Modulino® nodes, now available individually in the Arduino Store! These products are designed to make your prototyping experience faster, easier, and more fun – whether you’re building interactive installations, automating tasks, […] The post New arrivals: Nano Connector Carrier + 7 Modulino® nodes to supercharge your projects appeared first on Arduino Blog.