More from Krzysztof Kowalczyk blog
Go is the most hated programming language. Compared to other languages, it provides 80% of the utility with 20% of the complexity. The hate comes from people who want 81% of the utility, or 85% or 97%. As Rob Pike said, no one denies that 87% of utility provides more utility than 80%. The problem is that the additional 7% of utility requires 36% more work.

Here are some examples.

Someone complained on HN that struct tags are not as powerful as annotations or macros. I explained that this is 80⁄20 design.

Go's testing standard library is a couple hundred lines of code, hasn't changed much over the years and yet provides all the basic testing features you might need. It doesn't provide all the convenience features you might think of. That's what Java's JUnit library does, at the cost of tens of thousands of lines of code and years of never-ending development. Go is 80⁄20 design.

Goroutines are an 80⁄20 design for concurrency compared to async in C# or Rust. Not as many features and knobs, but only a fraction of the complexity (for users and implementors).

When Go launched it didn't have user-defined generics, but the built-in types that needed them were generic: arrays/slices, maps, channels. That 80⁄20 design served Go well for over a decade.

Most languages can't resist driving towards 100% design at 400% the cost. C#, Swift, Rust - they all seem to be on a never-ending treadmill of adding features. Even JavaScript, which started as a 70⁄30 language, has been captured by people whose job became adding more features to JavaScript.

If 80⁄20 is good, wouldn't 70⁄30 be even better? No, it wouldn't. Go has shown that you can have a popular language without enums. I don't think you could have a popular language without structs. There's a line below which the language is just not useful enough.

Finally, what does "work" mean?

There's the work done by the users of the language. Every additional feature of the language requires the programmer to learn about it. It's more work than it seems. If you make functions first class concepts, the work is not just learning the syntax and functionality. You need to learn new coding patterns, like functions that return functions. You need to learn about currying and about passing functions as arguments. You need to learn not only how but when: when you should use that powerful functionality and when you shouldn't. You can't skip that complexity. Even if you decide not to learn how to use functions as first class concepts, your co-worker might and you have to be able to understand his code. Or a library you use does, or a tutorial talks about it.

That's why 80+% languages need coding guidelines. Google has one for C++ because hundreds of programmers couldn't effectively work on a shared C++ codebase if there was no restriction on what features any individual programmer could use. Google's C++ style guide exists to lower C++ from a 95% language to a 90% language.

The other work is done by the implementors of the language. Swift is a cautionary tale here. Despite over 10 years of development by very smart people with a practically unlimited budget, on a project that is a priority for Apple, the Swift compiler is still slow, crashy and not meaningfully cross-platform. They designed a language that they cannot implement properly. In contrast Go, a much simpler but still very capable language, was fast, cross-platform and robust from version 1.0.
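To make the testing example concrete, here is roughly what a complete test written against Go's standard testing package looks like. This is a minimal sketch with a hypothetical Add function standing in for real code, not something from the post:

package mathx

import "testing"

// Add is a hypothetical function under test.
func Add(a, b int) int { return a + b }

// TestAdd is all the standard library asks of a test: a function named
// Test*, taking *testing.T, that reports failures on it.
func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Fatalf("Add(2, 3) = %d, want 5", got)
	}
}

You run it with go test; there are no runners, annotations or assertion DSLs to learn.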
In Chrome Dev Tools you can set up a mapping between the files the web server sends to the browser and files on disk. This allows editing files in dev tools and having those changes saved to a file, which is handy. Early in 2025 they added a way to automatically configure this mapping.

It's quite simple. Your server needs to serve a /.well-known/appspecific/com.chrome.devtools.json file with content that looks like:

{
  "workspace": {
    "root": "C:/Users/kjk/src/edna/src",
    "uuid": "8f6e3d9a-4b7c-4c1e-a2d5-7f9b1e3c1384"
  }
}

The uuid should be unique for each project. I got mine by asking Grok to generate a random version 4 uuid.

On Windows the path has to use Unix slashes. It didn't work when I sent a Windows path. Shocking incompetence of the devs.

This only works from localhost. Security.

This file is read when you open dev tools in Chrome. If you already have it open, you have to close and re-open. The first time, you'll have to give Chrome permission to access that source directory on disk. You need to:

switch to the sources tab in dev tools
reveal the left side panel (if hidden)
switch to the workspace tab
there you should see the mapping is already configured, but you still need to click the connect button (annoying security theater)

Apparently bun sends this automatically. Here's how I send it in my Go server:

func serveChromeDevToolsJSON(w http.ResponseWriter, r *http.Request) {
	// your directory might be different than "src"
	srcDir, err := filepath.Abs("src")
	must(err)
	// stupid Chrome doesn't accept windows-style paths
	srcDir = filepath.ToSlash(srcDir)
	// uuid should be unique for each workspace
	uuid := "8f6e3d9a-4b7c-4c1e-a2d5-7f9b0e3c1284"
	s := `{
  "workspace": {
    "root": "{{root}}",
    "uuid": "{{uuid}}"
  }
}`
	s = strings.ReplaceAll(s, "{{root}}", srcDir)
	s = strings.ReplaceAll(s, "{{uuid}}", uuid)
	logf("serveChromeDevToolsJSON:\n\n%s\n\n", s)
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte(s))
}
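One detail the snippet above doesn't show is wiring the handler to the path Chrome probes. A minimal sketch, assuming the standard net/http default mux; the listen address is an assumption, so use whatever your dev server already binds to:

// register the handler on the exact well-known path Chrome requests
http.HandleFunc("/.well-known/appspecific/com.chrome.devtools.json", serveChromeDevToolsJSON)
// remember: Chrome only honors this file when served from localhost
log.Fatal(http.ListenAndServe("localhost:3000", nil))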
You don't want your software to crash, do you? This post describes my experiences in making SumatraPDF crash less. SumatraPDF is a Windows desktop app. It's a fast viewer for PDF, ePub, comic books etc. It's small and yet full of features.

Know thy crashes

The most important step in fixing crashes is knowing about them. There are variations between different Windows versions, in how people customize Windows and in how they use your software. Sometimes in ways you would never think of. Those variations can lead to bugs or crashes. I have no hope of testing my software in all possible configurations. If you're Microsoft or Adobe you can reinvest some of the revenue to hire an army of testers, set up compatibility labs etc., but for a single developer this is not realistic. Bugs most often lurk in untested code, and even a very good testing effort won't encounter all the things that can go wrong in real life.

Get the crash reports automatically

Very few people bother to submit bug reports and crashes. When a program crashes, they just shrug and restart it. The only realistic way to be informed about crashes is to automatically gather crash reports without user involvement. This is a proven idea. Microsoft did it for Windows. Mozilla and Google did it for their browsers.

How to get crashes

Regardless of the platform, the solution involves two parts:

code in the software itself. When a crash happens it runs a crash handler which creates a crash report and sends it to the server
a server, which accepts crash reports from the software

The server

The server part is simple: it's a Go server running on Hetzner that accepts crash reports in text form via HTTP POST requests, saves them to disk and provides a simple UI for browsing them. Crash reports are deleted after a week because there's no point keeping them. If a crash stops happening, it was fixed. If it keeps happening, I'll get a new crash report.

Intercepting crashes in C++ on Windows

SumatraPDF is written in C++. We want our code to be executed when a crash or other fatal thing happens.

To be notified about fatal things in the C runtime: use signal(SIGABRT, onSignalAbort);. This registers onSignalAbort to be called on SIGABRT:

void __cdecl onSignalAbort(int) {
    // put the signal back because can be called many times
    // (from multiple threads) and raise() resets the handler
    signal(SIGABRT, onSignalAbort);
    CrashMe();
}

I just induce a hardware crash (referencing invalid memory location 0) so that it's handled by the crash handler:

inline void CrashMe() {
    char* p = nullptr;
    // cppcheck-suppress nullPointer
    *p = 0; // NOLINT
}

To register with the C++ runtime: ::set_terminate(onTerminate);.
Similarly:

void onTerminate() {
    CrashMe();
}

To register for exceptions generated by the CPU:

gDumpEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
if (!gDumpEvent) {
    log("InstallCrashHandler: skipping because !gDumpEvent\n");
    return;
}
gDumpThread = CreateThread(nullptr, 0, CrashDumpThread, nullptr, 0, nullptr);
if (!gDumpThread) {
    log("InstallCrashHandler: skipping because !gDumpThread\n");
    return;
}
gPrevExceptionFilter = SetUnhandledExceptionFilter(CrashDumpExceptionHandler);
// 1 means that our handler will be called first, 0 would be: last
AddVectoredExceptionHandler(1, CrashDumpVectoredExceptionHandler);

Generating a crash report

The most important part of a crash report is a readable call stack of the thread that crashed, which looks like:

sumatrapdf.exe!RectF::IsEmpty+0x0 \src\utils\GeomUtil.cpp+314
sumatrapdf.exe!DisplayModel::GetContentStart+0x3f \src\DisplayModel.cpp+1196
sumatrapdf.exe!DisplayModel::GoToPrevPage+0x7e \src\DisplayModel.cpp+1386
sumatrapdf.exe!CanvasOnMouseWheel+0x191 \src\Canvas.cpp+1318
sumatrapdf.exe!WndProcCanvasFixedPageUI+0x300 \src\Canvas.cpp+1672
sumatrapdf.exe!WndProcCanvas+0xa7 \src\Canvas.cpp+1993
user32.dll!CallWindowProcW+0x589
user32.dll!TranslateMessage+0x292
sumatrapdf.exe!RunMessageLoop+0x15c \src\SumatraStartup.cpp+531
sumatrapdf.exe!WinMain+0x11fd \src\SumatraStartup.cpp+1407
sumatrapdf.exe!__scrt_common_main_seh+0x106

To get a call stack you can use StackWalk64() from dbghelp.dll. But those are just addresses in memory. You then have to map each address to a loaded dll and an offset in that dll. Then you have to match the offset in the dll to a function and an offset in that function. And then match that offset in the function to the source code file and line that generated that code. To do all of that you need symbols in .pdb format.

Because SumatraPDF is open source I decided on an unorthodox approach: for each version of the SumatraPDF executable, I store .pdb symbols in online storage (currently it happens to be Cloudflare's S3-compatible R2). When a crash happens I download those symbols locally, unpack them, and initialize dbghelp.dll with their locations. I then resolve addresses in memory to dll name, function name, offset in the function and source code file name and line number.

Other stuff in the crash report:

info about OS version, processor etc., in case those correlate with the crash
list of loaded dlls. It's quite common that other software injects its dlls into all running executables, and those dlls might have bugs that cause the crash
SumatraPDF configuration. To my detriment I've made SumatraPDF quite customizable and sometimes bugs only happen when certain options are used; if I'm not using the same settings, I can't reproduce the bug even if I execute the same steps
logs. When I can't figure out a certain crash, I can add additional logging to help me understand what leads to the crash

I include the Git hash revision in the executable, and it's included in the crash report. That way I can post-process the crash report on the server and for each stack frame generate a link to the source code on GitHub.

When a crash happens the program is compromised, so I take care to pre-compute as much info as possible before the crash handler executes. If the crash handler crashes, I won't get the crash report.

SumatraPDF experience

How does it work in practice? I've implemented the system described here in Sumatra 1.5. Sumatra is a rather complicated piece of C++ code and quite popular (several thousand downloads per day).
Before 1.5 we had a system where we would save the minidump to disk and after a crash we would ask the user to report it in our bug tracker and attach the minidump to the bug report. Almost no one did that. I only got a few crash reports from users over a few months. The automated system was sending tens of crash reports per day.

Once I knew about the problems, I would try to fix them. Some problems I could fix just by looking at the crash report. Some required writing stress tests to make them easier to reproduce locally. Some of them I can't fix (e.g. because they are caused by buggy printer drivers or other software that injects buggy dlls into the SumatraPDF process). I do know that I fixed some of the bugs. I can see that a new release generates fewer crashes, and by looking at crash reports I can tell that some crashes that happened frequently in previous releases do not happen anymore.

Building an automated crash reporting system was the best investment I could have made for improving the reliability of SumatraPDF.

The alternatives

While the general idea is always the same, there are different ways of implementing it.

On Windows a simpler solution is to capture so-called minidumps (using the MiniDumpWriteDump() Windows API) instead of going to the trouble of generating human-readable crash reports client side. I did that too. The problem with that approach is that you have to inspect each crash dump manually in a debugger (e.g. WinDBG). I wrote a Python script that automated the process (you can script it by launching the cdb debugger with the right parameters and making it run !analyze -v). Unfortunately, cdb is buggy and was hanging on some dump files. It's probably possible to work around that with a timeout in the Python script, but at that point I stopped caring.

Windows provides native support for minidumps. Google took the minidump design and provided a cross-platform implementation for Windows, Mac and Linux as part of the Breakpad project, which was later replaced by Crashpad. Breakpad is the crash reporting system used by Google for Chrome and Mozilla for Firefox. It contains both client and server parts for native (C/C++ or Objective C) code. I used it once for a Mac app. For Objective C I prefer the approach described above as it's simpler to implement, but I'm sure Breakpad is a solid and well tested approach.

On Windows, crash reports from your app are already sent to Microsoft as part of Windows Error Reporting. Apparently it's possible for third party developers to get access to those reports, but I never did that so I don't know how.

References

CrashHandler.cpp is the crash handling code in SumatraPDF
Chrome's crash reporting
Mozilla crash reporting
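To make the server side described above concrete, here is a minimal sketch in Go of an endpoint that accepts crash reports via HTTP POST and saves them to disk. This is not the actual SumatraPDF crash server; the route, port and file naming scheme are assumptions for illustration:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"
	"time"
)

// handleCrashReport reads a text crash report from the POST body and
// writes it to a timestamped file under ./crashes.
func handleCrashReport(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "POST only", http.StatusMethodNotAllowed)
		return
	}
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "failed to read body", http.StatusBadRequest)
		return
	}
	if err := os.MkdirAll("crashes", 0755); err != nil {
		http.Error(w, "server error", http.StatusInternalServerError)
		return
	}
	name := fmt.Sprintf("crash-%d.txt", time.Now().UnixNano())
	if err := os.WriteFile(filepath.Join("crashes", name), body, 0644); err != nil {
		http.Error(w, "server error", http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/crash-report", handleCrashReport) // hypothetical route
	log.Fatal(http.ListenAndServe(":8080", nil))        // hypothetical port
}

A browsing UI and the delete-after-a-week cleanup would sit on top of this, but the core of such a server really is this small.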
More in programming
My latest love letter to Linux has been published. It's called Omarchy, and it's an opinionated setup of the Arch Linux distribution and the Hyprland tiling window manager, with everything configured out-of-the-box to give you exactly the same setup that I now run every day. My Platonic ideal of what a developer environment should look like.

It's not for everyone, though. Arch has a reputation for being difficult, but while I think that's vastly overstated, I still think it's fair to say that Ubuntu is an easier landing for someone new to Linux. And that's why this exists as a sister project to Omakub — my opinionated setup for Ubuntu — and not a replacement of it.

Because I do think that Hyprland deserves its reputation of being difficult! Not because the core tiling window manager is hard, but because it comes incredibly bare-boned in the box. You have to figure out everything yourself. Even how to get a lock screen or idle timing or a menu bar or bluetooth settings or... you get the idea.

Omarchy is an attempt to solve for all that. To give you a default set of great, beautiful configurations for Hyprland, and to install all the common tooling you'd normally want. You could set this up, not change a thing, and you'd have exactly what I run every day.

But you can also just use this as a paved path into the glorious world of Linux ricing. The flip side of Hyprland being so atomized is that it's infinitely configurable. You can really, really make it yours. No wonder it's the preferred platform for r/unixporn, and even what PewDiePie picked up for his amazing Russian nuclear core build.

I don't know when we'll literally get "The Year of Linux on the Desktop", but I've never been as convinced that it's coming as I am now. There's enough dissent in the water. Enough dissatisfaction with both Apple and Microsoft. And between Valve going all-in on Steam on Linux (the Steam Deck runs Arch!), major creators (like PewDiePie) switching to Linux, and incredible projects like Hyprland — which offer not just a cheap visual copy of the two major commercial operating systems, but something much more unique and compelling — I think all the factors are in place for a big switch. At least among developers.

But broad adoption or not, I'm in love with Linux, and thrilled to share my work to make it easier to enjoy.
Here's a story from nearly 10 years ago.

the bug

I think it was my friend Richard Kettlewell who told me about a bug he encountered with Let's Encrypt in its early days in autumn 2015: it was failing to validate mail domains correctly.

the context

At the time I had previously been responsible for Cambridge University's email anti-spam system for about 10 years, and in 2014 I had been given responsibility for Cambridge University's DNS. So I knew how Let's Encrypt should validate mail domains.

Let's Encrypt was about one year old. Unusually, the code that runs their operations, Boulder, is free software and open to external contributors. Boulder is written in Golang, and I had not previously written any code in Golang. But it has a reputation for being easy to get to grips with. So, in principle, the bug was straightforward for me to fix. How difficult would it be as a Golang newbie? And what would Let's Encrypt's contribution process be like?

the hack

I cloned the Boulder repository and had a look around the code. As is pretty typical, there are a couple of stages to fixing a bug in an unfamiliar codebase:

work out where the problem is
try to understand if the obvious fix could be better

In this case, I remember discovering a relatively substantial TODO item that intersected with the bug. I can't remember the details, but I think there were wider issues with DNS lookups in Boulder. I decided it made sense to fix the immediate problem without getting involved in things that would require discussion with Let's Encrypt staff.

I faffed around with the code and pushed something that looked like it might work. A fun thing about this hack is that I never got a working Boulder test setup on my workstation (or even Golang, I think!) – I just relied on the Let's Encrypt cloud test setup. The feedback time was very slow, but it was tolerable for a simple one-off change.

the fix

My pull request was small, +48-14. After a couple of rounds of review and within a few days, it was merged and put into production! A pleasing result.

the upshot

I thought Golang (at least as it was used in the Boulder codebase) was as easy to get to grips with as promised. I did not touch it again until several years later, because there was no need to, but it seemed fine.

I was very impressed by the Let's Encrypt continuous integration and automated testing setup, and by their low-friction workflow for external contributors. One of my fastest drive-by patches to get into worldwide production.

My fix was always going to be temporary, and all trace of it was overwritten years ago. It's good when "temporary" turns out to be true!

the point

I was reminded of this story in the pub this evening, and I thought it was worth writing down. It demonstrated to me that Let's Encrypt really were doing all the good stuff they said they were doing. So thank you to Let's Encrypt for providing an exemplary service and for giving me a happy little anecdote.
Hello! After many months of writing deep dive blog posts about the terminal, on Tuesday I released a new zine called "The Secret Rules of the Terminal"! You can get it for $12 here: https://wizardzines.com/zines/terminal, or get a 15-pack of all my zines here.

Here's the cover:

the table of contents

Here's the table of contents:

why the terminal?

At first when I thought about writing about the terminal I was a bit baffled. After all – you just type in a command and run it, right? How hard could it be?

But then I ran a terminal workshop for some folks who were new to the terminal, and somebody asked this question: "how do I quit? Ctrl+C isn't working!"

This question has a very simple answer (they'd run man pngquant, so they just needed to press q to quit). But it made me think about how even though different situations in the terminal look extremely similar (it's all text!), the way they behave can be very different. Something as simple as "quitting" is different depending on whether you're in a REPL (Ctrl+D), a full screen program like less (q), or a noninteractive program (Ctrl+C). And then I realized that the terminal was way more complicated than I'd been giving it credit for.

there are a million tiny inconsistencies

The more I thought about using the terminal, the more I realized that the terminal has a lot of tiny inconsistencies like:

sometimes you can use the arrow keys to move around, but sometimes pressing the arrow keys just prints ^[[D
sometimes you can use the mouse to select text, but sometimes you can't
sometimes your commands get saved to a history when you run them, and sometimes they don't
some shells let you use the up arrow to see the previous command, and some don't

If you use the terminal daily for 10 or 20 years, even if you don't understand exactly why these things happen, you'll probably build an intuition for them. But having an intuition for them isn't the same as understanding why they happen. When writing this zine I actually had to do a lot of work to figure out exactly what was happening in the terminal to be able to talk about how to reason about it.

the rules aren't written down anywhere

It turns out that the "rules" for how the terminal works (how do you edit a command you type in? how do you quit a program? how do you fix your colours?) are extremely hard to fully understand, because "the terminal" is actually made of many different pieces of software (your terminal emulator, your operating system, your shell, the core utilities like grep, and every other random terminal program you've installed) which are written by different people with different ideas about how things should work. So I wanted to write something that would explain:

how the 4 pieces of the terminal (your shell, terminal emulator, programs, and TTY driver) fit together to make everything work
some of the core conventions for how you can expect things in your terminal to work
lots of tips and tricks for how to use terminal programs

this zine explains the most useful parts of terminal internals

Terminal internals are a mess. A lot of it is just the way it is because someone made a decision in the 80s and now it's impossible to change, and honestly I don't think learning everything about terminal internals is worth it. But some parts are not that hard to understand and can really make your experience in the terminal better, like:

if you understand what your shell is responsible for, you can configure your shell (or use a different one!) to access your history more easily, get great tab completion, and so much more
if you understand escape codes, it's much less scary when cat-ing a binary to stdout messes up your terminal, you can just type reset and move on
if you understand how colour works, you can get rid of bad colour contrast in your terminal so you can actually read the text

I learned a surprising amount writing this zine

When I wrote How Git Works, I thought I knew how Git worked, and I was right. But the terminal is different. Even though I feel totally confident in the terminal and even though I've used it every day for 20 years, I had a lot of misunderstandings about how the terminal works and (unless you're the author of tmux or something) I think there's a good chance you do too.

A few things I learned that are actually useful to me:

I understand the structure of the terminal better and so I feel more confident debugging weird terminal stuff that happens to me (I was even able to suggest a small improvement to fish!). Identifying exactly which piece of software is causing a weird thing to happen in my terminal still isn't easy but I'm a lot better at it now.
you can write a shell script to copy to your clipboard over SSH
how reset works under the hood (it does the equivalent of stty sane; sleep 1; tput reset) – basically I learned that I don't ever need to worry about remembering stty sane or tput reset and I can just run reset instead
how to look at the invisible escape codes that a program is printing out (run unbuffer program > out; less out)
why the builtin REPLs on my Mac like sqlite3 are so annoying to use (they use libedit instead of readline)

blog posts I wrote along the way

As usual these days I wrote a bunch of blog posts about various side quests:

How to add a directory to your PATH
"rules" that terminal problems follow
why pipes sometimes get "stuck": buffering
some terminal frustrations
ASCII control characters in my terminal, on "what's the deal with Ctrl+A, Ctrl+B, Ctrl+C, etc?"
entering text in the terminal is complicated
what's involved in getting a "modern" terminal setup?
reasons to use your shell's job control
standards for ANSI escape codes, which is really me trying to figure out if I think the terminfo database is serving us well today

people who helped with this zine

A long time ago I used to write zines mostly by myself, but with every project I get more and more help. I met with Marie Claire LeBlanc Flanagan every weekday from September to June to work on this one. The cover is by Vladimir Kašiković, Lesley Trites did copy editing, Simon Tatham (who wrote PuTTY) did technical review, our Operations Manager Lee did the transcription as well as a million other things, and Jesse Luehrs (who is one of the very few people I know who actually understands the terminal's cursed inner workings) had so many incredibly helpful conversations with me about what is going on in the terminal.

get the zine

Here are some links to get the zine again:

get The Secret Rules of the Terminal
get a 15-pack of all my zines

As always, you can get either a PDF version to print at home or a print version shipped to your house. The only caveat is print orders will ship in August – I need to wait for orders to come in to get an idea of how many I should print before sending it to the printer.