More from Julia Evans
Hello! After many months of writing deep dive blog posts about the terminal, on Tuesday I released a new zine called “The Secret Rules of the Terminal”! You can get it for $12 here: https://wizardzines.com/zines/terminal, or get a 15-pack of all my zines here. Here’s the cover, and here’s the table of contents. why the terminal? At first when I thought about writing about the terminal I was a bit baffled. After all – you just type in a command and run it, right? How hard could it be? But then I ran a terminal workshop for some folks who were new to the terminal, and somebody asked this question: “how do I quit? Ctrl+C isn’t working!” This question has a very simple answer (they’d run man pngquant, so they just needed to press q to quit). But it made me think about how even though different situations in the terminal look extremely similar (it’s all text!), the way they behave can be very different. Something as simple as “quitting” is different depending on whether you’re in a REPL (Ctrl+D), a full screen program like less (q), or a noninteractive program (Ctrl+C). And then I realized that the terminal was way more complicated than I’d been giving it credit for. there are a million tiny inconsistencies The more I thought about using the terminal, the more I realized that the terminal has a lot of tiny inconsistencies like: sometimes you can use the arrow keys to move around, but sometimes pressing the arrow keys just prints ^[[D sometimes you can use the mouse to select text, but sometimes you can’t sometimes your commands get saved to a history when you run them, and sometimes they don’t some shells let you use the up arrow to see the previous command, and some don’t If you use the terminal daily for 10 or 20 years, even if you don’t understand exactly why these things happen, you’ll probably build an intuition for them. But having an intuition for them isn’t the same as understanding why they happen. When writing this zine I actually had to do a lot of work to figure out exactly what was happening in the terminal to be able to talk about how to reason about it. the rules aren’t written down anywhere It turns out that the “rules” for how the terminal works (how do you edit a command you type in? how do you quit a program? how do you fix your colours?) are extremely hard to fully understand, because “the terminal” is actually made of many different pieces of software (your terminal emulator, your operating system, your shell, the core utilities like grep, and every other random terminal program you’ve installed) which are written by different people with different ideas about how things should work. So I wanted to write something that would explain: how the 4 pieces of the terminal (your shell, terminal emulator, programs, and TTY driver) fit together to make everything work some of the core conventions for how you can expect things in your terminal to work lots of tips and tricks for how to use terminal programs this zine explains the most useful parts of terminal internals Terminal internals are a mess. A lot of it is just the way it is because someone made a decision in the 80s and now it’s impossible to change, and honestly I don’t think learning everything about terminal internals is worth it. But some parts are not that hard to understand and can really make your experience in the terminal better, like: if you understand what your shell is responsible for, you can configure your shell (or use a different one!)
to access your history more easily, get great tab completion, and so much more if you understand escape codes, it’s much less scary when cating a binary to stdout messes up your terminal, you can just type reset and move on if you understand how colour works, you can get rid of bad colour contrast in your terminal so you can actually read the text I learned a surprising amount writing this zine When I wrote How Git Works, I thought I knew how Git worked, and I was right. But the terminal is different. Even though I feel totally confident in the terminal and even though I’ve used it every day for 20 years, I had a lot of misunderstandings about how the terminal works and (unless you’re the author of tmux or something) I think there’s a good chance you do too. A few things I learned that are actually useful to me: I understand the structure of the terminal better and so I feel more confident debugging weird terminal stuff that happens to me (I was even able to suggest a small improvement to fish!). Identifying exactly which piece of software is causing a weird thing to happen in my terminal still isn’t easy but I’m a lot better at it now. you can write a shell script to copy to your clipboard over SSH how reset works under the hood (it does the equivalent of stty sane; sleep 1; tput reset) – basically I learned that I don’t ever need to worry about remembering stty sane or tput reset and I can just run reset instead how to look at the invisible escape codes that a program is printing out (run unbuffer program > out; less out) why the builtin REPLs on my Mac like sqlite3 are so annoying to use (they use libedit instead of readline) blog posts I wrote along the way As usual these days I wrote a bunch of blog posts about various side quests: How to add a directory to your PATH “rules” that terminal problems follow why pipes sometimes get “stuck”: buffering some terminal frustrations ASCII control characters in my terminal on “what’s the deal with Ctrl+A, Ctrl+B, Ctrl+C, etc?” entering text in the terminal is complicated what’s involved in getting a “modern” terminal setup? reasons to use your shell’s job control standards for ANSI escape codes, which is really me trying to figure out if I think the terminfo database is serving us well today people who helped with this zine A long time ago I used to write zines mostly by myself but with every project I get more and more help. I met with Marie Claire LeBlanc Flanagan every weekday from September to June to work on this one. The cover is by Vladimir Kašiković, Lesley Trites did copy editing, Simon Tatham (who wrote PuTTY) did technical review, our Operations Manager Lee did the transcription as well as a million other things, and Jesse Luehrs (who is one of the very few people I know who actually understands the terminal’s cursed inner workings) had so many incredibly helpful conversations with me about what is going on in the terminal. get the zine Here are some links to get the zine again: get The Secret Rules of the Terminal get a 15-pack of all my zines here. As always, you can get either a PDF version to print at home or a print version shipped to your house. The only caveat is print orders will ship in August – I need to wait for orders to come in to get an idea of how many I should print before sending it to the printer.
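A couple of the tips mentioned above (escape codes, unbuffer, reset) are easy to try out right now. Here’s a minimal sketch, assuming a fairly standard terminal emulator and that you have unbuffer installed (it ships with the expect package); "some-program" is just a placeholder:

printf '\033[31mhello\033[0m\n'   # \033[31m and \033[0m are ANSI escape codes: "switch to red", "reset attributes"
unbuffer some-program > out       # capture a program's output with its escape codes intact
less out                          # less displays the codes (as ^[[31m etc.) instead of interpreting them
reset                             # if cat-ing a binary has garbled your terminal, this usually puts it right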
I have never been a C programmer but every so often I need to compile a C/C++ program from source. This has been kind of a struggle for me: for a long time, my approach was basically “install the dependencies, run make, if it doesn’t work, either try to find a binary someone has compiled or give up”. “Hope someone else has compiled it” worked pretty well when I was running Linux but since I’ve been using a Mac for the last couple of years I’ve been running into more situations where I have to actually compile programs myself. So let’s talk about what you might have to do to compile a C program! I’ll use a couple of examples of specific C programs I’ve compiled and talk about a few things that can go wrong. Here are three programs we’ll be talking about compiling: paperjam sqlite qf (a pager you can run to quickly open files from a search with rg -n THING | qf) step 1: install a C compiler This is pretty simple: on an Ubuntu system if I don’t already have a C compiler I’ll install one with: sudo apt-get install build-essential This installs gcc, g++, and make. The situation on a Mac is more confusing but it’s something like “install xcode command line tools”. step 2: install the program’s dependencies Unlike some newer programming languages, C doesn’t have a dependency manager. So if a program has any dependencies, you need to hunt them down yourself. Thankfully because of this, C programmers usually keep their dependencies very minimal and often the dependencies will be available in whatever package manager you’re using. There’s almost always a section explaining how to get the dependencies in the README, for example in paperjam’s README, it says: To compile PaperJam, you need the headers for the libqpdf and libpaper libraries (usually available as libqpdf-dev and libpaper-dev packages). You may need a2x (found in AsciiDoc) for building manual pages. So on a Debian-based system you can install the dependencies like this. sudo apt install -y libqpdf-dev libpaper-dev If a README gives a name for a package (like libqpdf-dev), I’d basically always assume that they mean “in a Debian-based Linux distro”: if you’re on a Mac brew install libqpdf-dev will not work. I still have not 100% gotten the hang of developing on a Mac yet so I don’t have many tips there yet. I guess in this case it would be brew install qpdf if you’re using Homebrew. step 3: run ./configure (if needed) Some C programs come with a Makefile and some instead come with a script called ./configure. For example, if you download sqlite’s source code, it has a ./configure script in it instead of a Makefile. My understanding of this ./configure script is: You run it, it prints out a lot of somewhat inscrutable output, and then it either generates a Makefile or fails because you’re missing some dependency The ./configure script is part of a system called autotools that I have never needed to learn anything about beyond “run it to generate a Makefile”. I think there might be some options you can pass to get the ./configure script to produce a different Makefile but I have never done that. step 4: run make The next step is to run make to try to build a program. Some notes about make: Sometimes you can run make -j8 to parallelize the build and make it go faster It usually prints out a million compiler warnings when compiling the program. I always just ignore them. I didn’t write the software! The compiler warnings are not my problem. 
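To put steps 1 through 4 together, here’s roughly what the whole flow looks like on a Debian-based system (a sketch: the dependency packages are paperjam’s from above, and the source directory name is hypothetical):

sudo apt-get install build-essential           # step 1: gcc, g++, and make
sudo apt install -y libqpdf-dev libpaper-dev   # step 2: whatever dependencies the README lists
cd some-c-program/                             # a hypothetical checkout of the program's source
./configure                                    # step 3: only if the project ships one (sqlite does; paperjam just has a Makefile)
make -j8                                       # step 4: build, with 8 parallel jobs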
compiler errors are often dependency problems Here’s an error I got while compiling paperjam on my Mac: /opt/homebrew/Cellar/qpdf/12.0.0/include/qpdf/InputSource.hh:85:19: error: function definition does not declare parameters 85 | qpdf_offset_t last_offset{0}; | ^ Over the years I’ve learned it’s usually best not to overthink problems like this: if it’s talking about qpdf, there’s a good chance it just means that I’ve done something wrong with how I’m including the qpdf dependency. Now let’s talk about some ways to get the qpdf dependency included in the right way. the world’s shortest introduction to the compiler and linker Before we talk about how to fix dependency problems: building C programs is split into 2 steps: Compiling the code into object files (with gcc or clang) Linking those object files into a final binary (with ld) It’s important to know this when building a C program because sometimes you need to pass the right flags to the compiler and linker to tell them where to find the dependencies for the program you’re compiling. make uses environment variables to configure the compiler and linker If I run make on my Mac to install paperjam, I get this error: c++ -o paperjam paperjam.o pdf-tools.o parse.o cmds.o pdf.o -lqpdf -lpaper ld: library 'qpdf' not found This is not because qpdf is not installed on my system (it actually is!). But the compiler and linker don’t know how to find the qpdf library. To fix this, we need to: pass "-I/opt/homebrew/include" to the compiler (to tell it where to find the header files) pass "-L/opt/homebrew/lib -liconv" to the linker (to tell it where to find library files and to link in iconv) And we can get make to pass those extra parameters to the compiler and linker using environment variables! To see how this works: inside paperjam’s Makefile you can see a bunch of environment variables, like LDLIBS here: paperjam: $(OBJS) $(LD) -o $@ $^ $(LDLIBS) Everything you put into the LDLIBS environment variable gets passed to the linker (ld) as a command line argument. secret environment variable: CPPFLAGS Makefiles sometimes define their own environment variables that they pass to the compiler/linker, but make also has a bunch of “implicit” environment variables which it will automatically pass to the C compiler and linker. There’s a full list of implicit environment variables here, but one of them is CPPFLAGS, which gets automatically passed to the C compiler. (technically it would be more normal to use CXXFLAGS for this, but this particular Makefile hardcodes CXXFLAGS so setting CPPFLAGS was the only way I could find to set the compiler flags without editing the Makefile) how to use CPPFLAGS and LDLIBS to fix this compiler error Now that we’ve talked about how CPPFLAGS and LDLIBS get passed to the compiler and linker, here’s the final incantation that I used to get the program to build successfully! CPPFLAGS="-I/opt/homebrew/include" LDLIBS="-L/opt/homebrew/lib -liconv" make paperjam This passes -I/opt/homebrew/include to the compiler and -L/opt/homebrew/lib -liconv to the linker. Also I don’t want to pretend that I “magically” knew that those were the right arguments to pass, figuring them out involved a bunch of confused Googling that I skipped over in this post.
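For what it’s worth, with those two variables set, the commands that make ends up running look roughly like this (a sketch: the link line is the one from the error above, while the compile line’s source file name and .cc extension are guesses):

# compile step: CPPFLAGS ends up here, so the compiler can find headers like qpdf/QPDF.hh
c++ -I/opt/homebrew/include -c pdf-tools.cc -o pdf-tools.o
# link step: LDLIBS ends up here, so the linker can find libqpdf in /opt/homebrew/lib (and links in iconv)
c++ -o paperjam paperjam.o pdf-tools.o parse.o cmds.o pdf.o -lqpdf -lpaper -L/opt/homebrew/lib -liconv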
I will say that: the -I compiler flag tells the compiler which directory to find header files in, like /opt/homebrew/include/qpdf/QPDF.hh the -L linker flag tells the linker which directory to find libraries in, like /opt/homebrew/lib/libqpdf.a the -l linker flag tells the linker which libraries to link in, like -liconv means “link in the iconv library”, or -lm means “link math” tip: how to just build 1 specific file: make $FILENAME Yesterday I discovered this cool tool called qf which you can use to quickly open files from the output of ripgrep. qf is in a big directory of various tools, but I only wanted to compile qf. So I just compiled qf, like this: make qf Basically if you know (or can guess) the output filename of the file you’re trying to build, you can tell make to just build that file by running make $FILENAME tip: look at how other packaging systems built the same C program If you’re having trouble building a C program, maybe other people had problems building it too! Every Linux distribution has build files for every package that they build, so even if you can’t install packages from that distribution directly, maybe you can get tips from that Linux distro for how to build the package. Realizing this (thanks to my friend Dave) was a huge ah-ha moment for me. For example, this line from the nix package for paperjam says: env.NIX_LDFLAGS = lib.optionalString stdenv.hostPlatform.isDarwin "-liconv"; This is basically saying “pass the linker flag -liconv to build this on a Mac”, so that’s a clue we could use to build it. That same file also says env.NIX_CFLAGS_COMPILE = "-DPOINTERHOLDER_TRANSITION=1";. I’m not sure what this means, but when I try to build the paperjam package I do get an error about something called a PointerHolder, so I guess that’s somehow related to the “PointerHolder transition”. step 5: installing the binary Once you’ve managed to compile the program, probably you want to install it somewhere! Some Makefiles have an install target that lets you install the tool on your system with make install. I’m always a bit scared of this (where is it going to put the files? what if I want to uninstall them later?), so if I’m compiling a pretty simple program I’ll often just manually copy the binary to install it instead, like this: cp qf ~/bin step 6: maybe make your own package! Once I figured out how to do all of this, I realized that I could use my new make knowledge to contribute a paperjam package to Homebrew! Then I could just brew install paperjam on future systems. The good thing is that even if the details of how all of the different packaging systems work are different, they fundamentally all use C compilers and linkers. it can be useful to understand a little about C even if you’re not a C programmer I think all of this is an interesting example of how it can be useful to understand some basics of how C programs work (like “they have header files”) even if you’re never planning to write a nontrivial C program in your life. It feels good to have some ability to compile C/C++ programs myself, even though I’m still not totally confident about all of the compiler and linker flags and I still plan to never learn anything about how autotools works other than “you run ./configure to generate the Makefile”. Also one important thing I left out is LD_LIBRARY_PATH / DYLD_LIBRARY_PATH (which you use to tell the dynamic linker at runtime where to find dynamically linked files) because I can’t remember the last time I ran into an LD_LIBRARY_PATH issue and couldn’t find an example.
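For the record, those runtime variables get used roughly like this (a hypothetical sketch; the paths and program name are made up):

LD_LIBRARY_PATH=/usr/local/lib ./some-program        # Linux: tell the dynamic linker where to look for .so files at runtime
DYLD_LIBRARY_PATH=/opt/homebrew/lib ./some-program   # Mac: same idea, different variable name (and .dylib files)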
I was talking to a friend about how to add a directory to your PATH today. It’s something that feels “obvious” to me since I’ve been using the terminal for a long time, but when I searched for instructions for how to do it, I actually couldn’t find something that explained all of the steps – a lot of them just said “add this to ~/.bashrc”, but what if you’re not using bash? What if your bash config is actually in a different file? And how are you supposed to figure out which directory to add anyway? So I wanted to try to write down some more complete directions and mention some of the gotchas I’ve run into over the years. Here’s a table of contents: step 1: what shell are you using? step 2: find your shell’s config file a note on bash’s config file step 3: figure out which directory to add step 3.1: double check it’s the right directory step 4: edit your shell config step 5: restart your shell problems: problem 1: it ran the wrong program problem 2: the program isn’t being run from your shell notes: a note on source a note on fish_add_path step 1: what shell are you using? If you’re not sure what shell you’re using, here’s a way to find out. Run this: ps -p $$ -o pid,comm= if you’re using bash, it’ll print out 97295 bash if you’re using zsh, it’ll print out 97295 zsh if you’re using fish, it’ll print out an error like “In fish, please use $fish_pid” ($$ isn’t valid syntax in fish, but in any case the error message tells you that you’re using fish, which you probably already knew) Also bash is the default on Linux and zsh is the default on Mac OS (as of 2024). I’ll only cover bash, zsh, and fish in these directions. step 2: find your shell’s config file in zsh, it’s probably ~/.zshrc in bash, it might be ~/.bashrc, but it’s complicated, see the note in the next section in fish, it’s probably ~/.config/fish/config.fish (you can run echo $__fish_config_dir if you want to be 100% sure) a note on bash’s config file Bash has three possible config files: ~/.bashrc, ~/.bash_profile, and ~/.profile. If you’re not sure which one your system is set up to use, I’d recommend testing this way: add echo hi there to your ~/.bashrc Restart your terminal If you see “hi there”, that means ~/.bashrc is being used! Hooray! Otherwise remove it and try the same thing with ~/.bash_profile You can also try ~/.profile if the first two options don’t work. (there are a lot of elaborate flow charts out there that explain how bash decides which config file to use but IMO it’s not worth it and just testing is the fastest way to be sure) step 3: figure out which directory to add Let’s say that you’re trying to install and run a program called http-server and it doesn’t work, like this: $ npm install -g http-server $ http-server bash: http-server: command not found How do you find what directory http-server is in? Honestly in general this is not that easy – often the answer is something like “it depends on how npm is configured”. A few ideas: Often when setting up a new installer (like cargo, npm, homebrew, etc), when you first set it up it’ll print out some directions about how to update your PATH. So if you’re paying attention you can get the directions then. 
Sometimes installers will automatically update your shell’s config file to update your PATH for you Sometimes just Googling “where does npm install things?” will turn up the answer Some tools have a subcommand that tells you where they’re configured to install things, like: Homebrew: brew --prefix (and then append /bin/ and /sbin/ to what that gives you) Node/npm: npm config get prefix (then append /bin/) Go: go env | grep GOPATH (then append /bin/) asdf: asdf info | grep ASDF_DIR (then append /bin/ and /shims/) step 3.1: double check it’s the right directory Once you’ve found a directory you think might be the right one, make sure it’s actually correct! For example, I found out that on my machine, http-server is in ~/.npm-global/bin. I can make sure that it’s the right directory by trying to run the program http-server in that directory like this: $ ~/.npm-global/bin/http-server Starting up http-server, serving ./public It worked! Now that you know what directory you need to add to your PATH, let’s move to the next step! step 4: edit your shell config Now we have the 2 critical pieces of information we need: Which directory you’re trying to add to your PATH (like ~/.npm-global/bin/) Where your shell’s config is (like ~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish) Now what you need to add depends on your shell: bash and zsh instructions: Open your shell’s config file, and add a line like this: export PATH=$PATH:~/.npm-global/bin/ (obviously replace ~/.npm-global/bin with the actual directory you’re trying to add) fish instructions: In fish, the syntax is different: set PATH $PATH ~/.npm-global/bin (in fish you can also use fish_add_path, some notes on that further down) step 5: restart your shell Now, an extremely important step: updating your shell’s config won’t take effect if you don’t restart it! Two ways to do this: open a new terminal (or terminal tab), and maybe close the old one so you don’t get confused Run bash to start a new shell (or zsh if you’re using zsh, or fish if you’re using fish) I’ve found that both of these usually work fine. And you should be done! Try running the program you were trying to run and hopefully it works now. If not, here are a couple of problems that you might run into: problem 1: it ran the wrong program If the wrong version of a program is running, you might need to add the directory to the beginning of your PATH instead of the end. For example, on my system I have two versions of python3 installed, which I can see by running which -a: $ which -a python3 /usr/bin/python3 /opt/homebrew/bin/python3 The one your shell will use is the first one listed. If you want to use the Homebrew version, you need to add that directory (/opt/homebrew/bin) to the beginning of your PATH instead, by putting this in your shell’s config file (it’s /opt/homebrew/bin/:$PATH instead of the usual $PATH:/opt/homebrew/bin/) export PATH=/opt/homebrew/bin/:$PATH or in fish: set PATH /opt/homebrew/bin $PATH problem 2: the program isn’t being run from your shell All of these directions only work if you’re running the program from your shell. If you’re running the program from an IDE, from a GUI, in a cron job, or some other way, you’ll need to add the directory to your PATH in a different way, and the exact details might depend on the situation. in a cron job Some options: use the full path to the program you’re running, like /home/bork/bin/my-program put the full PATH you want as the first line of your crontab (something like PATH=/bin:/usr/bin:/usr/local/bin:….).
You can get the full PATH you’re using in your shell by running echo "PATH=$PATH". I’m honestly not sure how to handle it in an IDE/GUI because I haven’t run into that in a long time, will add directions here if someone points me in the right direction. a note on source When you install cargo (Rust’s installer) for the first time, it gives you these instructions for how to set up your PATH, which don’t mention a specific directory at all. This is usually done by running one of the following (note the leading DOT): . "$HOME/.cargo/env" # For sh/bash/zsh/ash/dash/pdksh source "$HOME/.cargo/env.fish" # For fish The idea is that you add that line to your shell’s config, and their script automatically sets up your PATH (and potentially other things) for you. This is pretty common (Homebrew and asdf have something similar), and there are two ways to approach this: Just do what the tool suggests (add . "$HOME/.cargo/env" to your shell’s config) Figure out which directories the script they’re telling you to run would add to your PATH, and then add those manually. Here’s how I’d do that: Run . "$HOME/.cargo/env" in my shell (or the fish version if using fish) Run echo "$PATH" | tr ':' '\n' | grep cargo to figure out which directories it added See that it says /Users/bork/.cargo/bin and shorten that to ~/.cargo/bin Add the directory ~/.cargo/bin to PATH (with the directions in this post) I don’t think there’s anything wrong with doing what the tool suggests (it might be the “best way”!), but personally I usually use the second approach because I prefer knowing exactly what configuration I’m changing. a note on fish_add_path fish has a handy function called fish_add_path that you can run to add a directory to your PATH like this: fish_add_path /some/directory This will add the directory to your PATH, and automatically update all running fish shells with the new PATH. You don’t have to update your config at all! This is EXTREMELY convenient, but one downside (and the reason I’ve personally stopped using it) is that if you ever need to remove the directory from your PATH a few weeks or months later because maybe you made a mistake, it’s kind of hard to do (there are instructions in the comments of this github issue though). that’s all Hopefully this will help some people. Let me know (on Mastodon or Bluesky) if there are other major gotchas that have tripped you up when adding a directory to your PATH, or if you have questions about this post!
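As a quick recap of the happy path, here’s the whole thing end to end for bash, using the http-server example from earlier (a sketch; substitute your own directory and config file):

~/.npm-global/bin/http-server                             # step 3.1: confirm the directory really contains the program
echo 'export PATH=$PATH:~/.npm-global/bin' >> ~/.bashrc   # step 4: add the directory to your shell config
bash                                                      # step 5: start a new shell so the change takes effect
which http-server                                         # should now print something like /home/you/.npm-global/bin/http-server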
A few weeks ago I ran a terminal survey (you can read the results here) and at the end I asked: What’s the most frustrating thing about using the terminal for you? 1600 people answered, and I decided to spend a few days categorizing all the responses. Along the way I learned that classifying qualitative data is not easy but I gave it my best shot. I ended up building a custom tool to make it faster to categorize everything. As with all of my surveys the methodology isn’t particularly scientific. I just posted the survey to Mastodon and Twitter, ran it for a couple of days, and got answers from whoever happened to see it and felt like responding. Here are the top categories of frustrations! I think it’s worth keeping in mind while reading these comments that 40% of people answering this survey have been using the terminal for 21+ years 95% of people answering the survey have been using the terminal for at least 4 years These comments aren’t coming from total beginners. Here are the categories of frustrations! The number in brackets is the number of people with that frustration. Honestly I don’t know how interesting this is to other people – I’m just writing this up for myself because I’m trying to write a zine about the terminal and I wanted to get a sense for what people are having trouble with. remembering syntax (115) People talked about struggles remembering: the syntax for CLI tools like awk, jq, sed, etc the syntax for redirects keyboard shortcuts for tmux, text editing, etc One example comment: There are just so many little “trivia” details to remember for full functionality. Even after all these years I’ll sometimes forget where it’s 2 or 1 for stderr, or forget which is which for > and >>. switching terminals is hard (91) People talked about struggling with switching systems (for example home/work computer or when SSHing) and running into: OS differences in keyboard shortcuts (like Linux vs Mac) systems which don’t have their preferred text editor (“no vim” or “only vim”) different versions of the same command (like Mac OS grep vs GNU grep) no tab completion a shell they aren’t used to (“the subtle differences between zsh and bash”) as well as differences inside the same system like pagers being not consistent with each other (git diff pagers, other pagers). One example comment: I got used to fish and vi mode which are not available when I ssh into servers, containers. color (85) Lots of problems with color, like: programs setting colors that are unreadable with a light background color finding a colorscheme they like (and getting it to work consistently across different apps) color not working inside several layers of SSH/tmux/etc not liking the defaults not wanting color at all and struggling to turn it off This comment felt relatable to me: Getting my terminal theme configured in a reasonable way between the terminal emulator and fish (I did this years ago and remember it being tedious and fiddly and now feel like I’m locked into my current theme because it works and I dread touching any of that configuration ever again). keyboard shortcuts (84) Half of the comments on keyboard shortcuts were about how on Linux/Windows, the keyboard shortcut to copy/paste in the terminal is different from in the rest of the OS.
Some other issues with keyboard shortcuts other than copy/paste: using Ctrl-W in a browser-based terminal and closing the window the terminal only supports a limited set of keyboard shortcuts (no Ctrl-Shift-, no Super, no Hyper, lots of ctrl- shortcuts aren’t possible like Ctrl-,) the OS stopping you from using a terminal keyboard shortcut (like by default Mac OS uses Ctrl+left arrow for something else) issues using emacs in the terminal backspace not working (2) other copy and paste issues (75) Aside from “the keyboard shortcut for copy and paste is different”, there were a lot of OTHER issues with copy and paste, like: copying over SSH how tmux and the terminal emulator both do copy/paste in different ways dealing with many different clipboards (system clipboard, vim clipboard, the “middle click” clipboard on Linux, tmux’s clipboard, etc) and potentially synchronizing them random spaces added when copying from the terminal pasting multiline commands which automatically get run in a terrifying way wanting a way to copy text without using the mouse discoverability (55) There were lots of comments about this, which all came down to the same basic complaint – it’s hard to discover useful tools or features! This comment kind of summed it all up: How difficult it is to learn independently. Most of what I know is an assorted collection of stuff I’ve been told by random people over the years. steep learning curve (44) A lot of comments about it generally having a steep learning curve. A couple of example comments: After 15 years of using it, I’m not much faster than using it than I was 5 or maybe even 10 years ago. and That I know I could make my life easier by learning more about the shortcuts and commands and configuring the terminal but I don’t spend the time because it feels overwhelming. history (42) Some issues with shell history: history not being shared between terminal tabs (16) limits that are too short (4) history not being restored when terminal tabs are restored losing history because the terminal crashed not knowing how to search history One example comment: It wasted a lot of time until I figured it out and still annoys me that “history” on zsh has such a small buffer; I have to type “history 0” to get any useful length of history. bad documentation (37) People talked about: documentation being generally opaque lack of examples in man pages programs which don’t have man pages Here’s a representative comment: Finding good examples and docs. Man pages often not enough, have to wade through stack overflow scrollback (36) A few issues with scrollback: programs printing out too much data making you lose scrollback history resizing the terminal messes up the scrollback lack of timestamps GUI programs that you start in the background printing stuff out that gets in the way of other programs’ outputs One example comment: When resizing the terminal (in particular: making it narrower) leads to broken rewrapping of the scrollback content because the commands formatted their output based on the terminal window width. “it feels outdated” (33) Lots of comments about how the terminal feels hampered by legacy decisions and how users often end up needing to learn implementation details that feel very esoteric. One example comment: Most of the legacy cruft, it would be great to have a green field implementation of the CLI interface. shell scripting (32) Lots of complaints about POSIX shell scripting.
There’s a general feeling that shell scripting is difficult but also that switching to a different less standard scripting language (fish, nushell, etc) brings its own problems. Shell scripting. My tolerance to ditch a shell script and go to a scripting language is pretty low. It’s just too messy and powerful. Screwing up can be costly so I don’t even bother. more issues Some more issues that were mentioned at least 10 times: (31) inconsistent command line arguments: is it -h or help or --help? (24) keeping dotfiles in sync across different systems (23) performance (e.g. “my shell takes too long to start”) (20) window management (potentially with some combination of tmux tabs, terminal tabs, and multiple terminal windows. Where did that shell session go?) (17) generally feeling scared/uneasy (“The debilitating fear that I’m going to do some mysterious Bad Thing with a command and I will have absolutely no idea how to fix or undo it or even really figure out what happened”) (16) terminfo issues (“Having to learn about terminfo if/when I try a new terminal emulator and ssh elsewhere.”) (16) lack of image support (sixel etc) (15) SSH issues (like having to start over when you lose the SSH connection) (15) various tmux/screen issues (for example lack of integration between tmux and the terminal emulator) (15) typos & slow typing (13) the terminal getting messed up for various reasons (pressing Ctrl-S, cating a binary, etc) that’s all! I’m not going to make a lot of commentary on these results, but here are a couple of categories that feel related to me: remembering syntax & history (often the thing you need to remember is something you’ve run before!) discoverability & the learning curve (the lack of discoverability is definitely a big part of what makes it hard to learn)
More in programming
I signed up for Hinge. Holy shit with the boosts. How does someone who works on this wake up every morning and feel okay about themselves? Similarly with the tip screens, Uber algorithm, all the zero sum bullshit using all the tricks of psychology to extract a little bit more from every interaction in society. Nudge. Nudge. NUDGE. Want to partake in normal society like buying a coffee, going on a date, getting a ride, paying a friend. Oh, there’s a middle man now. An evil ominous middleman using state of the art AI algorithms to extract just a little bit more from you. But eventually the market will fix this, right? People will feel sick of being manipulated and move elsewhere? Ahhh, but they see that coming long before you do. They have dashboards. Quick Jeeves, tune the AI to make people feel less manipulated. Give them a little bit more for now, we have to think about maximizing lifetime customer value here. Oh the AI already did this on its own? Jeeves you’ve been replaced! People perpetually on the edge. You want to opt out of this all you say? Good luck running a competitive business! Every metric is now a target. You better maximize engagement or you will lose engagement this is a red queen’s race we can’t afford to lose! Burn all the social capital, burn all your values, FEED IT ALL TO MOLOCH! Someday, people will have to realize we live in a society. What will it take? A complete self cannibalization to the point you can’t eat your own mouth? It sure as hell isn’t going to be people opting out, that’s a collective action problem you can’t solve. Democracy, haha, you think the algorithms will let you vote to kill them? Your vote is as decoupled from action as the amount Uber pays the driver is decoupled from the fare that you pay. There’s no reform here, there’s only revolution. Will it simply be a huge financial collapse? Or do we need World War 3? And even World War 3 is on a spectrum. Will mass starvation fix this? Or will the attitude of thinking it’s okay to manipulate others at scale persist even past that? He’s got his, and I’ve got mine… If you open a government S&P 500 account for everyone with $1,000 at birth that’ll pay their social security cause it like…goes up…wait who’s creating this value again? It’s not okay. Advertising is not okay. Price discrimination is not okay. Using big data, machine learning, and psychology to manipulate others at scale is not okay. But you aren’t going to learn this lesson until you have fed a huge majority of your customers to Moloch. Modern capitalism is wireheading. Release the hypnodrones. How many cans of Pepsi did you want them to consume an hour again?
I've never seen so many developers curious about leaving the Mac and giving Linux a go. Something has really changed in the last few years. Maybe Linux just got better? Maybe powerful mini PCs made it easier? Maybe Apple just fumbled their relationship with developers one too many times? Maybe it's all of it. But whatever the reason, the vibe shift is noticeable. This is why the future is so hard to predict! People have been joking about "The Year of Linux on the Desktop" since the late 90s. Just like self-driving cars were supposed to be a thing back in 2017. And now, in the year of our Lord 2025, it seems like we're getting both! I also wouldn't underestimate the cultural influence of a few key people. PewDiePie sharing his journey into Arch and Hyprland with his 110 million followers is important. ThePrimeagen moving to Arch and Hyprland is important. Typecraft teaching beginners how to build an Arch and Hyprland setup from scratch is important (and who I just spoke to about Omarchy). Gabe Newell's Steam Deck being built on Arch and pushing Proton to over 20,000 compatible Linux games is important. You'll notice a trend here, which is that Arch Linux, a notoriously "difficult" distribution, is at the center of much of this new engagement. Despite the fact that it's been around since 2003! There's nothing new about Arch, but there's something new about the circles of people it's engaging. I've put Arch at the center of Omarchy too. Originally just because that was what Hyprland recommended. Then, after living with the wonders of 90,000+ packages on the community-driven AUR package repository, for its own sake. It's really good! But while Arch (and Hyprland) are having a moment amongst a new crowd, it's also "just" Linux at its core. And Linux really is the star of the show. The perfect, free, and open alternative that was just sitting around waiting for developers to finally have had enough of the commercial offerings from Apple and Microsoft. Now obviously there's a taste of "new vegan sees vegans everywhere" here. You start talking about Linux, and you'll hear from folks already in the community or those considering the move too. It's easy to confuse what you'd like to be true with what is actually true. And it's definitely true that Linux is still a niche operating system on the desktop. Even among developers. Apple and Microsoft sit on the lion's share of the market share. But the mind share? They've been losing that fast. The window is open for a major shift to happen. First gradually, then suddenly. It feels like morning in Linux land!
Snippets are a useful addition to Svelte 5. I use them in my Svelte 5 projects like Edna. Snippet basics A snippet is a function that renders html based on its arguments. Here’s how to define and use a snippet: {#snippet hello(name)} <div>Hello {name}!</div> {/snippet} {@render hello("Andrew")} {@render hello("Amy")} You can re-use snippets by exporting them: <script module> export { hello }; </script> {#snippet hello(name)}<div>Hello {name}!</div>{/snippet} Snippets use cases Snippets for less nesting Deeply nested html is hard to read. You can use snippets to extract some parts to make the structure clearer. For example, you can transform: <div> <div class="flex justify-end mt-2"> <button onclick={onclose} class="mr-4 px-4 py-1 border border-black hover:bg-gray-100" >Cancel</button > <button onclick={() => emitRename()} disabled={!canRename} class="px-4 py-1 border border-black hover:bg-gray-50 disabled:text-gray-400 disabled:border-gray-400 disabled:bg-white default:bg-slate-700" >Rename</button > </div> </div> into: {#snippet buttonCancel()} <button onclick={onclose} class="mr-4 px-4 py-1 border border-black hover:bg-gray-100" >Cancel</button > {/snippet} {#snippet buttonRename()}...{/snippet} To make this easier to read: <div> <div class="flex justify-end mt-2"> {@render buttonCancel()} {@render buttonRename()} </div> </div> snippets replace default <slot/> In Svelte 4, if you wanted to place some HTML inside the component, you used <slot />. Let’s say you have an Overlay.svelte component used like this: <Overlay> <MyDialog></MyDialog> </Overlay> In Svelte 4, you would use <slot /> to render children: <div class="overlay-wrapper"> <slot /> </div> <slot /> would be replaced with <MyDialog></MyDialog>. In Svelte 5 <MyDialog></MyDialog> is passed to Overlay.svelte as a children property so you would change Overlay.svelte to: <script> let { children } = $props(); </script> <div class="overlay-wrapper"> {@render children()} </div> The children property is created by the Svelte compiler so you should avoid naming your own props children. snippets replace named slots A component can have a default slot for rendering children and additional named slots. In Svelte 5 instead of named slots you pass snippets as props. An example of Dialog.svelte: <script> let { title, children } = $props(); </script> <div class="dialog"> <div class="title"> {@render title()} </div> {@render children()} </div> And use: {#snippet title()} <div class="fancy-title">My fancy title</div> {/snippet} <Dialog title={title}> <div>Body of the dialog</div> </Dialog> passing snippets as implicit props You can pass the title snippet prop implicitly: <Dialog> {#snippet title()} <div class="fancy-title">My fancy title</div> {/snippet} <div>Body of the dialog</div> </Dialog> Because {#snippet title()} is a child of <Dialog>, we don’t have to pass it as an explicit title={title} prop. The compiler does it for us. snippets to reduce repetition Here’s part of how I render https://tools.arslexis.io/ {#snippet row(name, url, desc)} <tr> <td class="text-left align-top" ><a class="font-semibold whitespace-nowrap" href={url}>{name}</a> </td> <td class="pl-4 align-top">{@html desc}</td> </tr> {/snippet} {@render row("unzip", "/unzip/", "unzip a file in the browser")} {@render row("wc", "/wc/", "like <tt>wc</tt>, but in the browser")} It saves me copy & paste of the same HTML and makes the structure more readable. snippets for recursive rendering Sometimes you need to render a recursive structure, like nested menus or a file tree.
In Svelte 4 you could use <svelte:self> but the downside of that is that you create multiple instances of the component. That means that the state is also split among multiple instances. That makes it harder to implement functionality that requires a global view of the structure, like keyboard navigation. With snippets you can render things recursively in a single instance of the component. I used it to implement nested context menus (there’s a small sketch of the idea at the end of this post). snippets to customize rendering Let’s say you’re building a Menu component. Each menu item is a <div> with some non-trivial children. To allow the client of Menu to customize how items are rendered, you could provide props for things like colors, padding etc. or you could allow ultimate flexibility by accepting an optional menuitem prop that is a snippet that renders the item. You can think of it as a headless UI i.e. you provide the necessary structure and difficult logic like keyboard navigation etc. and allow the client lots of control over how things are rendered. snippets for a library of icons Before snippets every SVG icon I used was a Svelte component. Many icons means many files. Now I have a single Icons.svelte file, like: <script module> export { IconMenu, IconSettings }; </script> {#snippet IconMenu(arg1, arg2, ...)} <svg>... icon svg</svg> {/snippet} {#snippet IconSettings()} <svg>... icon svg</svg> {/snippet}
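Here’s a minimal sketch of the recursive rendering idea mentioned above, using a made-up nested menu structure:

<script>
  let items = [
    { label: "File", children: [{ label: "Open" }, { label: "Save" }] },
    { label: "Help" },
  ];
</script>

{#snippet menu(entries)}
  <ul>
    {#each entries as entry}
      <li>
        {entry.label}
        {#if entry.children}
          {@render menu(entry.children)}
        {/if}
      </li>
    {/each}
  </ul>
{/snippet}

{@render menu(items)}

The menu snippet calls itself via {@render menu(entry.children)}, so arbitrarily deep nesting renders inside a single component instance.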
I realize that for all I've talked about Logic for Programmers in this newsletter, I never once explained basic logical quantifiers. They're both simple and incredibly useful, so let's do that this week! Sets and quantifiers A set is a collection of unordered, unique elements. {1, 2, 3, …} is a set, as are "every programming language", "every programming language's Wikipedia page", and "every function ever defined in any programming language's standard library". You can put whatever you want in a set, with some very specific limitations to avoid certain paradoxes.[2] Once we have a set, we can ask "is something true for all elements of the set" and "is something true for at least one element of the set?" IE, is it true that every programming language has a set collection type in the core language? We would write it like this: # all of them all l in ProgrammingLanguages: HasSetType(l) # at least one some l in ProgrammingLanguages: HasSetType(l) This is the notation I use in the book because it's easy to read, type, and search for. Mathematicians historically had a few different formats; the one I grew up with was ∀x ∈ set: P(x) to mean all x in set, and ∃ to mean some. I use these when writing for just myself, but find them confusing to programmers when communicating. "All" and "some" are respectively referred to as "universal" and "existential" quantifiers. Some cool properties We can simplify expressions with quantifiers, in the same way that we can simplify !(x && y) to !x || !y. First of all, quantifiers are commutative with themselves. some x: some y: P(x,y) is the same as some y: some x: P(x, y). For this reason we can write some x, y: P(x,y) as shorthand. We can even do this when quantifying over different sets, writing some x, x' in X, y in Y instead of some x, x' in X: some y in Y. We can not do this with "alternating quantifiers": all p in Person: some m in Person: Mother(m, p) says that every person has a mother. some m in Person: all p in Person: Mother(m, p) says that someone is every person's mother. Second, existentials distribute over || while universals distribute over &&. "There is some url which returns a 403 or 404" is the same as "there is some url which returns a 403 or some url that returns a 404", and "all PRs pass the linter and the test suites" is the same as "all PRs pass the linter and all PRs pass the test suites". Finally, some and all are duals: some x: P(x) == !(all x: !P(x)), and vice-versa. Intuitively: if some file is malicious, it's not true that all files are benign. All these rules together mean we can manipulate quantifiers almost as easily as we can manipulate regular booleans, putting them in whatever form is easiest to use in programming. Speaking of which, how do we use this in programming? How we use this in programming First of all, people clearly have a need for directly using quantifiers in code. If we have something of the form: for x in list: if P(x): return true return false That's just some x in list: P(x). And this is a prevalent pattern, as you can see by using GitHub code search. It finds over 500k examples of this pattern in Python alone! That can be simplified via using the language's built-in quantifiers: the Python would be any(P(x) for x in list). (Note this is not quantifying over sets but iterables. But the idea translates cleanly enough.) More generally, quantifiers are a key way we express higher-level properties of software. What does it mean for a list to be sorted in ascending order?
That all i, j in 0..<len(l): if i < j then l[i] <= l[j]. When should a ratchet test fail? When some f in functions - exceptions: Uses(f, bad_function). Should the image classifier work upside down? all i in images: classify(i) == classify(rotate(i, 180)). These are the properties we verify with tests and types and MISU and whatnot;[1] it helps to be able to make them explicit! One cool use case that'll be in the book's next version: database invariants are universal statements over the set of all records, like all a in accounts: a.balance > 0. That's enforceable with a CHECK constraint. But what about something like all i, i' in intervals: NoOverlap(i, i')? That isn't covered by CHECK, since it spans two rows. Quantifier duality to the rescue! The invariant is equivalent to !(some i, i' in intervals: Overlap(i, i')), so is preserved if the query SELECT COUNT(*) FROM intervals CROSS JOIN intervals … returns 0 rows. This means we can test it via a database trigger.[3] There are a lot more use cases for quantifiers, but this is enough to introduce the ideas! Next week's the one year anniversary of the book entering early access, so I'll be writing a bit about that experience and how the book changed. It's crazy how crude v0.1 was compared to the current version. [1] MISU ("make illegal states unrepresentable") means using data representations that rule out invalid values. For example, if you have a location -> Optional(item) lookup and want to make sure that each item is in exactly one location, consider instead changing the map to item -> location. This is a means of implementing the property all i in item, l, l' in location: if ItemIn(i, l) && l != l' then !ItemIn(i, l'). [2] Specifically, a set can't be an element of itself, which rules out constructing things like "the set of all sets" or "the set of sets that don't contain themselves". [3] Though note that when you're inserting or updating an interval, you already have that row's fields in the trigger's NEW keyword. So you can just query !(some i in intervals: Overlap(new, i')), which is more efficient.
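Going back to the point above about a language's built-in quantifiers, here's a small Python sketch of a couple of these properties (the function names are made up):

# all i, j in 0..<len(l): if i < j then l[i] <= l[j]
def is_sorted_ascending(l):
    n = len(l)
    return all(l[i] <= l[j] for i in range(n) for j in range(n) if i < j)

# some f in files: Malicious(f), which by duality equals !(all f in files: !Malicious(f))
def any_malicious(files, is_malicious):
    return any(is_malicious(f) for f in files)

assert is_sorted_ascending([1, 2, 2, 5])
assert not is_sorted_ascending([3, 1])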