Today I was thinking about: what happens when you run a simple “Hello World” Python program on Linux, like this one?

    print("hello world")

Here’s what it looks like at the command line:

    $ python3 hello.py
    hello world

But behind the scenes, there’s a lot more going on. I’ll describe some of what happens, and (much much more importantly!) explain some tools you can use to see what’s going on behind the scenes yourself. We’ll use readelf, strace, ldd, debugfs, /proc, ltrace, dd, and stat. I won’t talk about the Python-specific parts at all – just what happens when you run any dynamically linked executable. Here’s a table of contents:

- parse “python3 hello.py”
- figure out the full path to python3
- stat, under the hood
- time to fork
- the shell calls execve
- get the binary’s contents
- find the interpreter
- dynamic linking
- go to _start
- write a string

before execve

Before we even start the Python interpreter, there are a lot of things that have to happen. What executable are we even running? Where...
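If you want to poke at the execve and dynamic-linking steps yourself before reading further, here’s a minimal sketch (assuming strace and ldd are installed; the exact output varies by system):

    # watch the shell exec python3, and the files the dynamic linker opens
    strace -f -e trace=execve,openat python3 hello.py 2>&1 | head -30

    # list the shared libraries the dynamic linker would load for python3
    ldd "$(which python3)"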
a year ago


More from Julia Evans

How to add a directory to your PATH

I was talking to a friend about how to add a directory to your PATH today. It’s something that feels “obvious” to me since I’ve been using the terminal for a long time, but when I searched for instructions for how to do it, I actually couldn’t find something that explained all of the steps – a lot of them just said “add this to ~/.bashrc”, but what if you’re not using bash? What if your bash config is actually in a different file? And how are you supposed to figure out which directory to add anyway? So I wanted to try to write down some more complete directions and mention some of the gotchas I’ve run into over the years. Here’s a table of contents:

- step 1: what shell are you using?
- step 2: find your shell’s config file
  - a note on bash’s config file
- step 3: figure out which directory to add
  - step 3.1: double check it’s the right directory
- step 4: edit your shell config
- step 5: restart your shell
- problems:
  - problem 1: it ran the wrong program
  - problem 2: the program isn’t being run from your shell
- notes:
  - a note on source
  - a note on fish_add_path

step 1: what shell are you using?

If you’re not sure what shell you’re using, here’s a way to find out. Run this:

    ps -p $$ -o pid,comm=

- if you’re using bash, it’ll print out 97295 bash
- if you’re using zsh, it’ll print out 97295 zsh
- if you’re using fish, it’ll print out an error like “In fish, please use $fish_pid” ($$ isn’t valid syntax in fish, but in any case the error message tells you that you’re using fish, which you probably already knew)

Also, bash is the default on Linux and zsh is the default on Mac OS (as of 2024). I’ll only cover bash, zsh, and fish in these directions.

step 2: find your shell’s config file

- in zsh, it’s probably ~/.zshrc
- in bash, it might be ~/.bashrc, but it’s complicated, see the note in the next section
- in fish, it’s probably ~/.config/fish/config.fish (you can run echo $__fish_config_dir if you want to be 100% sure)

a note on bash’s config file

Bash has three possible config files: ~/.bashrc, ~/.bash_profile, and ~/.profile. If you’re not sure which one your system is set up to use, I’d recommend testing this way:

1. add echo hi there to your ~/.bashrc
2. restart your terminal
3. if you see “hi there”, that means ~/.bashrc is being used! Hooray!
4. otherwise remove it and try the same thing with ~/.bash_profile
5. you can also try ~/.profile if the first two options don’t work

(there are a lot of elaborate flow charts out there that explain how bash decides which config file to use but IMO it’s not worth it and just testing is the fastest way to be sure)
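As a concrete version of that test (a sketch – swap in ~/.bash_profile or ~/.profile on the later rounds):

    echo 'echo hi there' >> ~/.bashrc
    # restart your terminal; if it prints "hi there", bash is reading ~/.bashrc
    # once you know, remember to remove the test line again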
step 3: figure out which directory to add

Let’s say that you’re trying to install and run a program called http-server and it doesn’t work, like this:

    $ npm install -g http-server
    $ http-server
    bash: http-server: command not found

How do you find what directory http-server is in? Honestly in general this is not that easy – often the answer is something like “it depends on how npm is configured”. A few ideas:

- Often, when you first set up a new installer (like cargo, npm, homebrew, etc), it’ll print out some directions about how to update your PATH. So if you’re paying attention you can get the directions then.
- Sometimes installers will automatically update your shell’s config file to update your PATH for you.
- Sometimes just Googling “where does npm install things?” will turn up the answer.
- Some tools have a subcommand that tells you where they’re configured to install things, like:
  - Homebrew: brew --prefix (and then append /bin/ and /sbin/ to what that gives you)
  - Node/npm: npm config get prefix (then append /bin/)
  - Go: go env | grep GOPATH (then append /bin/)
  - asdf: asdf info | grep ASDF_DIR (then append /bin/ and /shims/)

step 3.1: double check it’s the right directory

Once you’ve found a directory you think might be the right one, make sure it’s actually correct! For example, I found out that on my machine, http-server is in ~/.npm-global/bin. I can make sure that it’s the right directory by trying to run the program http-server in that directory like this:

    $ ~/.npm-global/bin/http-server
    Starting up http-server, serving ./public

It worked! Now that you know what directory you need to add to your PATH, let’s move to the next step!

step 4: edit your shell config

Now we have the 2 critical pieces of information we need:

1. which directory you’re trying to add to your PATH (like ~/.npm-global/bin/)
2. where your shell’s config is (like ~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish)

Now what you need to add depends on your shell.

bash and zsh instructions: open your shell’s config file, and add a line like this:

    export PATH=$PATH:~/.npm-global/bin/

(obviously replace ~/.npm-global/bin with the actual directory you’re trying to add)

fish instructions: in fish, the syntax is different:

    set PATH $PATH ~/.npm-global/bin

(in fish you can also use fish_add_path, some notes on that further down)

step 5: restart your shell

Now, an extremely important step: updating your shell’s config won’t take effect if you don’t restart it! Two ways to do this:

1. open a new terminal (or terminal tab), and maybe close the old one so you don’t get confused
2. run bash to start a new shell (or zsh if you’re using zsh, or fish if you’re using fish)

I’ve found that both of these usually work fine. And you should be done! Try running the program you were trying to run and hopefully it works now. If not, here are a couple of problems that you might run into:

problem 1: it ran the wrong program

If the wrong version of a program is running, you might need to add the directory to the beginning of your PATH instead of the end. For example, on my system I have two versions of python3 installed, which I can see by running which -a:

    $ which -a python3
    /usr/bin/python3
    /opt/homebrew/bin/python3

The one your shell will use is the first one listed. If you want to use the Homebrew version, you need to add that directory (/opt/homebrew/bin) to the beginning of your PATH instead, by putting this in your shell’s config file (it’s /opt/homebrew/bin/:$PATH instead of the usual $PATH:/opt/homebrew/bin/):

    export PATH=/opt/homebrew/bin/:$PATH

or in fish:

    set PATH /opt/homebrew/bin $PATH

problem 2: the program isn’t being run from your shell

All of these directions only work if you’re running the program from your shell. If you’re running the program from an IDE, from a GUI, in a cron job, or some other way, you’ll need to add the directory to your PATH in a different way, and the exact details might depend on the situation.

In a cron job, some options:

- use the full path to the program you’re running, like /home/bork/bin/my-program
- put the full PATH you want as the first line of your crontab (something like PATH=/bin:/usr/bin:/usr/local/bin:….). You can get the full PATH you’re using in your shell by running echo "PATH=$PATH".
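For example, a crontab might look like this (a sketch – the schedule and backup-my-files are made up):

    # edit with: crontab -e
    PATH=/bin:/usr/bin:/usr/local/bin:/home/bork/.npm-global/bin

    # every day at 2am; cron can now find backup-my-files (a made-up
    # program) in the directories listed above
    0 2 * * * backup-my-files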
I’m honestly not sure how to handle it in an IDE/GUI because I haven’t run into that in a long time, will add directions here if someone points me in the right direction.

a note on source

When you install cargo (Rust’s installer) for the first time, it gives you these instructions for how to set up your PATH, which don’t mention a specific directory at all:

    This is usually done by running one of the following (note the leading DOT):
    . "$HOME/.cargo/env"            # For sh/bash/zsh/ash/dash/pdksh
    source "$HOME/.cargo/env.fish"  # For fish

The idea is that you add that line to your shell’s config, and their script automatically sets up your PATH (and potentially other things) for you. This is pretty common (Homebrew and asdf have something similar), and there are two ways to approach this:

1. just do what the tool suggests (add . "$HOME/.cargo/env" to your shell’s config)
2. figure out which directories the script they’re telling you to run would add to your PATH, and then add those manually. Here’s how I’d do that:
   - run . "$HOME/.cargo/env" in my shell (or the fish version if using fish)
   - run echo "$PATH" | tr ':' '\n' | grep cargo to figure out which directories it added
   - see that it says /Users/bork/.cargo/bin and shorten that to ~/.cargo/bin
   - add the directory ~/.cargo/bin to PATH (with the directions in this post)

I don’t think there’s anything wrong with doing what the tool suggests (it might be the “best way”!), but personally I usually use the second approach because I prefer knowing exactly what configuration I’m changing.

a note on fish_add_path

fish has a handy function called fish_add_path that you can run to add a directory to your PATH like this:

    fish_add_path /some/directory

This will add the directory to your PATH, and automatically update all running fish shells with the new PATH. You don’t have to update your config at all! This is EXTREMELY convenient, but one downside (and the reason I’ve personally stopped using it) is that if you ever need to remove the directory from your PATH a few weeks or months later because maybe you made a mistake, it’s kind of hard to do (there are instructions in the comments of this github issue though).

that’s all

Hopefully this will help some people. Let me know (on Mastodon or Bluesky) if there are other major gotchas that have tripped you up when adding a directory to your PATH, or if you have questions about this post!

a week ago 9 votes
Some terminal frustrations

A few weeks ago I ran a terminal survey (you can read the results here) and at the end I asked:

    What’s the most frustrating thing about using the terminal for you?

1600 people answered, and I decided to spend a few days categorizing all the responses. Along the way I learned that classifying qualitative data is not easy but I gave it my best shot. I ended up building a custom tool to make it faster to categorize everything.

As with all of my surveys the methodology isn’t particularly scientific. I just posted the survey to Mastodon and Twitter, ran it for a couple of days, and got answers from whoever happened to see it and felt like responding.

Here are the top categories of frustrations! I think it’s worth keeping in mind while reading these comments that:

- 40% of people answering this survey have been using the terminal for 21+ years
- 95% of people answering the survey have been using the terminal for at least 4 years

These comments aren’t coming from total beginners. The number in brackets is the number of people with that frustration. Honestly I don’t know how interesting this is to other people – I’m just writing this up for myself because I’m trying to write a zine about the terminal and I wanted to get a sense for what people are having trouble with.

remembering syntax (115)

People talked about struggles remembering:

- the syntax for CLI tools like awk, jq, sed, etc
- the syntax for redirects
- keyboard shortcuts for tmux, text editing, etc

One example comment:

    There are just so many little “trivia” details to remember for full functionality. Even after all these years I’ll sometimes forget whether it’s 2 or 1 for stderr, or forget which is which for > and >>.

switching terminals is hard (91)

People talked about struggling with switching systems (for example home/work computer or when SSHing) and running into:

- OS differences in keyboard shortcuts (like Linux vs Mac)
- systems which don’t have their preferred text editor (“no vim” or “only vim”)
- different versions of the same command (like Mac OS grep vs GNU grep)
- no tab completion
- a shell they aren’t used to (“the subtle differences between zsh and bash”)

as well as differences inside the same system, like pagers being not consistent with each other (git diff pagers, other pagers). One example comment:

    I got used to fish and vi mode which are not available when I ssh into servers, containers.

color (85)

Lots of problems with color, like:

- programs setting colors that are unreadable with a light background color
- finding a colorscheme they like (and getting it to work consistently across different apps)
- color not working inside several layers of SSH/tmux/etc
- not liking the defaults
- not wanting color at all and struggling to turn it off

This comment felt relatable to me:

    Getting my terminal theme configured in a reasonable way between the terminal emulator and fish (I did this years ago and remember it being tedious and fiddly and now feel like I’m locked into my current theme because it works and I dread touching any of that configuration ever again).

keyboard shortcuts (84)

Half of the comments on keyboard shortcuts were about how on Linux/Windows, the keyboard shortcut to copy/paste in the terminal is different from in the rest of the OS.
Some other issues with keyboard shortcuts other than copy/paste:

- using Ctrl-W in a browser-based terminal and closing the window
- the terminal only supports a limited set of keyboard shortcuts (no Ctrl-Shift-, no Super, no Hyper, lots of ctrl- shortcuts aren’t possible like Ctrl-,)
- the OS stopping you from using a terminal keyboard shortcut (like by default Mac OS uses Ctrl+left arrow for something else)
- issues using emacs in the terminal
- backspace not working (2)

other copy and paste issues (75)

Aside from “the keyboard shortcut for copy and paste is different”, there were a lot of OTHER issues with copy and paste, like:

- copying over SSH
- how tmux and the terminal emulator both do copy/paste in different ways
- dealing with many different clipboards (system clipboard, vim clipboard, the “middle click” clipboard on Linux, tmux’s clipboard, etc) and potentially synchronizing them
- random spaces added when copying from the terminal
- pasting multiline commands which automatically get run in a terrifying way
- wanting a way to copy text without using the mouse

discoverability (55)

There were lots of comments about this, which all came down to the same basic complaint – it’s hard to discover useful tools or features! This comment kind of summed it all up:

    How difficult it is to learn independently. Most of what I know is an assorted collection of stuff I’ve been told by random people over the years.

steep learning curve (44)

A lot of comments about it generally having a steep learning curve. A couple of example comments:

    After 15 years of using it, I’m not much faster using it than I was 5 or maybe even 10 years ago.

and

    That I know I could make my life easier by learning more about the shortcuts and commands and configuring the terminal but I don’t spend the time because it feels overwhelming.

history (42)

Some issues with shell history:

- history not being shared between terminal tabs (16)
- limits that are too short (4)
- history not being restored when terminal tabs are restored
- losing history because the terminal crashed
- not knowing how to search history

One example comment:

    It wasted a lot of time until I figured it out and still annoys me that “history” on zsh has such a small buffer; I have to type “history 0” to get any useful length of history.

bad documentation (37)

People talked about:

- documentation being generally opaque
- lack of examples in man pages
- programs which don’t have man pages

Here’s a representative comment:

    Finding good examples and docs. Man pages often not enough, have to wade through stack overflow

scrollback (36)

A few issues with scrollback:

- programs printing out too much data making you lose scrollback history
- resizing the terminal messes up the scrollback
- lack of timestamps
- GUI programs that you start in the background printing stuff out that gets in the way of other programs’ outputs

One example comment:

    When resizing the terminal (in particular: making it narrower) leads to broken rewrapping of the scrollback content because the commands formatted their output based on the terminal window width.

“it feels outdated” (33)

Lots of comments about how the terminal feels hampered by legacy decisions and how users often end up needing to learn implementation details that feel very esoteric. One example comment:

    Most of the legacy cruft, it would be great to have a green field implementation of the CLI interface.

shell scripting (32)

Lots of complaints about POSIX shell scripting.
There’s a general feeling that shell scripting is difficult but also that switching to a different less standard scripting language (fish, nushell, etc) brings its own problems.

    Shell scripting. My tolerance to ditch a shell script and go to a scripting language is pretty low. It’s just too messy and powerful. Screwing up can be costly so I don’t even bother.

more issues

Some more issues that were mentioned at least 10 times:

- (31) inconsistent command line arguments: is it -h or help or --help?
- (24) keeping dotfiles in sync across different systems
- (23) performance (e.g. “my shell takes too long to start”)
- (20) window management (potentially with some combination of tmux tabs, terminal tabs, and multiple terminal windows. Where did that shell session go?)
- (17) generally feeling scared/uneasy (“The debilitating fear that I’m going to do some mysterious Bad Thing with a command and I will have absolutely no idea how to fix or undo it or even really figure out what happened”)
- (16) terminfo issues (“Having to learn about terminfo if/when I try a new terminal emulator and ssh elsewhere.”)
- (16) lack of image support (sixel etc)
- (15) SSH issues (like having to start over when you lose the SSH connection)
- (15) various tmux/screen issues (for example lack of integration between tmux and the terminal emulator)
- (15) typos & slow typing
- (13) the terminal getting messed up for various reasons (pressing Ctrl-S, cat-ing a binary, etc)

that’s all!

I’m not going to make a lot of commentary on these results, but here are a couple of categories that feel related to me:

- remembering syntax & history (often the thing you need to remember is something you’ve run before!)
- discoverability & the learning curve (the lack of discoverability is definitely a big part of what makes it hard to learn)

2 weeks ago 11 votes
What's involved in getting a "modern" terminal setup?

Hello! Recently I ran a terminal survey and I asked people what frustrated them. One person commented:

    There are so many pieces to having a modern terminal experience. I wish it all came out of the box.

My immediate reaction was “oh, getting a modern terminal experience isn’t that hard, you just need to….”, but the more I thought about it, the longer the “you just need to…” list got, and I kept thinking about more and more caveats. So I thought I would write down some notes about what it means to me personally to have a “modern” terminal experience and what I think can make it hard for people to get there.

what is a “modern terminal experience”?

Here are a few things that are important to me, with which part of the system is responsible for them:

- multiline support for copy and paste: if you paste 3 commands in your shell, it should not immediately run them all! That’s scary! (shell, terminal emulator)
- infinite shell history: if I run a command in my shell, it should be saved forever, not deleted after 500 history entries or whatever. Also I want commands to be saved to the history immediately when I run them, not only when I exit the shell session (shell)
- a useful prompt: I can’t live without having my current directory and current git branch in my prompt (shell)
- 24-bit colour: this is important to me because I find it MUCH easier to theme neovim with 24-bit colour support than in a terminal with only 256 colours (terminal emulator)
- clipboard integration between vim and my operating system so that when I copy in Firefox, I can just press p in vim to paste (text editor, maybe the OS/terminal emulator too)
- good autocomplete: for example commands like git should have command-specific autocomplete (shell)
- having colours in ls (shell config)
- a terminal theme I like: I spend a lot of time in my terminal, I want it to look nice and I want its theme to match my terminal editor’s theme. (terminal emulator, text editor)
- automatic terminal fixing: if a program prints out some weird escape codes that mess up my terminal, I want that to automatically get reset so that my terminal doesn’t get messed up (shell)
- keybindings: I want Ctrl+left arrow to work (shell or application)
- being able to use the scroll wheel in programs like less (terminal emulator and applications)

There are a million other terminal conveniences out there and different people value different things, but those are the ones that I would be really unhappy without.

how I achieve a “modern experience”

My basic approach is:

- use the fish shell. Mostly don’t configure it, except to:
  - set the EDITOR environment variable to my favourite terminal editor
  - alias ls to ls --color=auto
- use any terminal emulator with 24-bit colour support. In the past I’ve used GNOME Terminal, Terminator, and iTerm, but I’m not picky about this. I don’t really configure it other than to choose a font.
- use neovim, with a configuration that I’ve been very slowly building over the last 9 years or so (the last time I deleted my vim config and started from scratch was 9 years ago)
- use the base16 framework to theme everything
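The fish part of that setup is small enough to write out in full – here’s a sketch (nvim as the editor is just an example):

    # ~/.config/fish/config.fish
    set -gx EDITOR nvim          # -gx: global and exported to child processes
    alias ls 'ls --color=auto'   # colours in ls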
some “out of the box” options for a “modern” experience

What if you want a nice experience, but don’t want to spend a lot of time on configuration? Figuring out how to configure vim in a way that I was satisfied with really did take me like ten years, which is a long time! My best ideas for how to get a reasonable terminal experience with minimal config are:

- shell: either fish or zsh with oh-my-zsh
- terminal emulator: almost anything with 24-bit colour support, for example all of these are popular:
  - linux: GNOME Terminal, Konsole, Terminator, xfce4-terminal
  - mac: iTerm (Terminal.app doesn’t have 24-bit colour support)
  - cross-platform: kitty, alacritty, wezterm, or ghostty
- shell config:
  - set the EDITOR environment variable to your favourite terminal text editor
  - maybe alias ls to ls --color=auto
- text editor: this is a tough one, maybe micro or helix? I haven’t used either of them seriously but they both seem like very cool projects and I think it’s amazing that you can just use all the usual GUI editor commands (Ctrl-C to copy, Ctrl-V to paste, Ctrl-A to select all) in micro and they do what you’d expect. I would probably try switching to helix except that retraining my vim muscle memory seems way too hard and I have a working vim config already.

Personally I wouldn’t use xterm, rxvt, or Terminal.app as a terminal emulator, because I’ve found in the past that they’re missing core features (like 24-bit colour in Terminal.app’s case) that make the terminal harder to use for me.

I don’t want to pretend that getting a “modern” terminal experience is easier than it is though – I think there are two issues that make it hard. Let’s talk about them!

issue 1 with getting to a “modern” experience: the shell

bash and zsh are by far the two most popular shells, and neither of them provides a default experience that I would be happy using out of the box, for example:

- you need to customize your prompt
- they don’t come with git completions by default, you have to set them up
- by default, bash only stores 500 (!) lines of history and (at least on Mac OS) zsh is only configured to store 2000 lines, which is still not a lot
- I find bash’s tab completion very frustrating, if there’s more than one match then you can’t tab through them

And even though I love fish, the fact that it isn’t POSIX does make it hard for a lot of folks to make the switch.

Of course it’s totally possible to learn how to customize your prompt in bash or whatever, and it doesn’t even need to be that complicated (in bash I’d probably start with something like export PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ ', or maybe use starship). But each of these “not complicated” things really does add up, and it’s especially tough if you need to keep your config in sync across several systems.

An extremely popular solution to getting a “modern” shell experience is oh-my-zsh. It seems like a great project and I know a lot of people use it very happily, but I’ve struggled with configuration systems like that in the past – it looks like right now the base oh-my-zsh adds about 3000 lines of config, and often I find that having an extra configuration system makes it harder to debug what’s happening when things go wrong. I personally have a tendency to use the system to add a lot of extra plugins, make my system slow, get frustrated that it’s slow, and then delete it completely and write a new config from scratch.
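To make the bash side of issue 1 concrete, here’s a minimal ~/.bashrc sketch (the git-prompt.sh path is an assumption – it varies by OS and package manager):

    # much more history than the 500-line default
    HISTSIZE=100000
    HISTFILESIZE=100000
    shopt -s histappend   # append to the history file instead of overwriting it

    # __git_ps1 comes from git's git-prompt.sh; this is one common location,
    # yours may differ
    source /usr/share/git-core/contrib/completion/git-prompt.sh
    export PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '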
issue 2 with getting to a “modern” experience: the text editor

In the terminal survey I ran recently, the most popular terminal text editors by far were vim, emacs, and nano. I think the main options for terminal text editors are:

- use vim or emacs and configure it to your liking; you can probably have any feature you want if you put in the work
- use nano and accept that you’re going to have a pretty limited experience (for example, as far as I can tell, if you want to copy some text from nano and put it in your system clipboard you just… can’t?)
- use micro or helix, which seem to offer a pretty good out-of-the-box experience, and potentially occasionally run into issues with using a less mainstream text editor
- just avoid using a terminal text editor as much as possible: maybe use VSCode, use VSCode’s terminal for all your terminal needs, and mostly never edit files in the terminal

issue 3: individual applications

The last issue is that sometimes individual programs that I use are kind of annoying. For example on my Mac OS machine, /usr/bin/sqlite3 doesn’t support the Ctrl+Left Arrow keyboard shortcut. Fixing this to get a reasonable terminal experience in SQLite was a little complicated, I had to:

- realize why this is happening (Mac OS doesn’t ship GNU tools, and “Ctrl-Left arrow” support comes from GNU readline)
- find a workaround (install sqlite from homebrew, which does have readline support)
- adjust my environment (put Homebrew’s sqlite3 in my PATH – sketch below)

I find that debugging application-specific issues like this is really not easy and often it doesn’t feel “worth it” – often I’ll end up just dealing with various minor inconveniences because I don’t want to spend hours investigating them. The only reason I was even able to figure this one out at all is that I’ve been spending a huge amount of time thinking about the terminal recently.

A big part of having a “modern” experience using terminal programs is just using newer terminal programs, for example I can’t be bothered to learn a keyboard shortcut to sort the columns in top, but in htop I can just click on a column heading with my mouse to sort it. So I use htop instead! But discovering new more “modern” command line tools isn’t easy (though I made a list here), finding ones that I actually like using in practice takes time, and if you’re SSHed into another machine, they won’t always be there.
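For the record, the sqlite fix from issue 3 boils down to something like this (a sketch – Homebrew marks sqlite “keg-only”, so it doesn’t get linked into the default bin directory):

    # install the Homebrew build of sqlite (linked against readline)
    brew install sqlite

    # put its bin directory at the front of PATH, in your shell config
    export PATH="$(brew --prefix)/opt/sqlite/bin:$PATH"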
everything affects everything else

Something I find tricky about configuring my terminal to make everything “nice” is that changing one seemingly small thing about my workflow can really affect everything else. For example right now I don’t use tmux. But if I needed to use tmux again (for example because I was doing a lot of work SSHed into another machine), I’d need to think about a few things, like:

- if I wanted tmux’s copy to synchronize with my system clipboard over SSH, I’d need to make sure that my terminal emulator has OSC 52 support
- if I wanted to use iTerm’s tmux integration (which makes tmux tabs into iTerm tabs), I’d need to change how I configure colours – right now I set them with a shell script that I run when my shell starts, but that means the colours get lost when restoring a tmux session
- and probably more things I haven’t thought of

“Using tmux means that I have to change how I manage my colours” sounds unlikely, but that really did happen to me and I decided “well, I don’t want to change how I manage colours right now, so I guess I’m not using that feature!”.

It’s also hard to remember which features I’m relying on – for example maybe my current terminal does have OSC 52 support and because copying from tmux over SSH has always Just Worked I don’t even realize that that’s something I need, and then it mysteriously stops working when I switch terminals.

change things slowly

Personally even though I think my setup is not that complicated, it’s taken me 20 years to get to this point! Because terminal config changes are so likely to have unexpected and hard-to-understand consequences, I’ve found that if I change a lot of terminal configuration all at once it makes it much harder to understand what went wrong if there’s a problem, which can be really disorienting. So I prefer to make pretty small changes, and accept that changes might take me a REALLY long time to get used to. For example I switched from using ls to eza a year or two ago and while I think I like it I’m still not quite sure about it.

getting a “modern” terminal is not that easy

Trying to explain how “easy” it is to configure your terminal really just made me think that it’s kind of hard and that I still sometimes get confused. I’ve found that there’s never one perfect way to configure things in the terminal that will be compatible with every single other thing. I just need to try stuff, figure out some kind of locally stable state that works for me, and accept that if I start using a new tool it might disrupt the system and I might need to rethink things.

a month ago 42 votes
"Rules" that terminal programs follow

Recently I’ve been thinking about how everything that happens in the terminal is some combination of:

1. your operating system’s job
2. your shell’s job
3. your terminal emulator’s job
4. the job of whatever program you happen to be running (like top or vim or cat)

The first three (your operating system, shell, and terminal emulator) are all kind of known quantities – if you’re using bash in GNOME Terminal on Linux, you can more or less reason about how all of those things interact, and some of their behaviour is standardized by POSIX. But the fourth one (“whatever program you happen to be running”) feels like it could do ANYTHING. How are you supposed to know how a program is going to behave?

This post is kind of long so here’s a quick table of contents:

- programs behave surprisingly consistently
- these are meant to be descriptive, not prescriptive
- it’s not always obvious which “rules” are the program’s responsibility to implement
- rule 1: noninteractive programs should quit when you press Ctrl-C
- rule 2: TUIs should quit when you press q
- rule 3: REPLs should quit when you press Ctrl-D on an empty line
- rule 4: don’t use more than 16 colours
- rule 5: vaguely support readline keybindings
  - rule 5.1: Ctrl-W should delete the last word
- rule 6: disable colours when writing to a pipe
- rule 7: - means stdin/stdout
- these “rules” take a long time to learn

programs behave surprisingly consistently

As far as I know, there are no real standards for how programs in the terminal should behave – the closest things I know of are:

- POSIX, which mostly dictates how your terminal emulator / OS / shell should work together. It does specify a few things about how core utilities like cp should work but AFAIK it doesn’t have anything to say about how for example htop should behave.
- these command line interface guidelines

But even though there are no standards, in my experience programs in the terminal behave in a pretty consistent way. So I wanted to write down a list of “rules” that in my experience programs mostly follow.

these are meant to be descriptive, not prescriptive

My goal here isn’t to convince authors of terminal programs that they should follow any of these rules. There are lots of exceptions to these and often there’s a good reason for those exceptions. But it’s very useful for me to know what behaviour to expect from a random new terminal program that I’m using. Instead of “uh, programs could do literally anything”, it’s “ok, here are the basic rules I expect, and then I can keep a short mental list of exceptions”. So I’m just writing down what I’ve observed about how programs behave in my 20 years of using the terminal, why I think they behave that way, and some examples of cases where that rule is “broken”.

it’s not always obvious which “rules” are the program’s responsibility to implement

There are a bunch of common conventions that I think are pretty clearly the program’s responsibility to implement, like:

- config files should go in ~/.BLAHrc or ~/.config/BLAH/FILE or /etc/BLAH/ or something
- --help should print help text
- programs should print “regular” output to stdout and errors to stderr

But in this post I’m going to focus on things that it’s not 100% obvious are the program’s responsibility. For example it feels to me like a “law of nature” that pressing Ctrl-D should quit a REPL (“rule 3” below), but there’s no reason that that has to always be true. The reason that Ctrl-D almost always works is that interactive REPLs almost all implement that keyboard shortcut.
rule 1: noninteractive programs should quit when you press Ctrl-C

The main reason for this rule is that noninteractive programs will quit by default on Ctrl-C if they don’t set up a SIGINT signal handler, so this is kind of a “you should act like the default” rule.

Something that trips a lot of people up is that this doesn’t apply to interactive programs like python3 or bc or less. This is because in an interactive program, Ctrl-C has a different job – if the program is running an operation (like for example a search in less or some Python code in python3), then Ctrl-C will interrupt that operation but not stop the program. As an example of how this works in an interactive program: here’s the code in prompt-toolkit (the library that iPython uses for handling input) that aborts a search when you press Ctrl-C.
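You can see both halves of this rule in a few lines of shell (a sketch – run each snippet as its own script):

    # quits on Ctrl-C: no SIGINT handler installed, so the default applies
    while true; do sleep 1; done

    # doesn't quit: `trap` installs a SIGINT handler, the way an interactive
    # program would (press Ctrl-C to see the message; Ctrl-\ really quits)
    trap 'echo "interrupted, but staying alive"' INT
    while true; do sleep 1; done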
rule 2: TUIs should quit when you press q

TUI programs (like less or htop) will usually quit when you press q. This rule doesn’t apply to any program where pressing q to quit wouldn’t make sense, like tmux or text editors.

rule 3: REPLs should quit when you press Ctrl-D on an empty line

REPLs (like python3 or ed) will usually quit when you press Ctrl-D on an empty line. This rule is similar to the Ctrl-C rule – the reason for this is that by default if you’re running a program (like cat) in “cooked mode”, then the operating system will return an EOF when you press Ctrl-D on an empty line.

Most of the REPLs I use (sqlite3, python3, fish, bash, etc) don’t actually use cooked mode, but they all implement this keyboard shortcut anyway to mimic the default behaviour. For example, here’s the code in prompt-toolkit that quits when you press Ctrl-D, and here’s the same code in readline. I actually thought that this one was a “Law of Terminal Physics” until very recently because I’ve basically never seen it broken, but you can see that it’s just something that each individual input library has to implement in the links above. Someone pointed out that the Erlang REPL does not quit when you press Ctrl-D, so I guess not every REPL follows this “rule”.

rule 4: don’t use more than 16 colours

Terminal programs rarely use colours other than the base 16 ANSI colours. This is because if you specify colours with a hex code, it’s very likely to clash with some users’ background colour. For example if I print out some text as #EEEEEE, it would be almost invisible on a white background, though it would look fine on a dark background. But if you stick to the default 16 base colours, you have a much better chance that the user has configured those colours in their terminal emulator so that they work reasonably well with their background color. Another reason to stick to the default base 16 colours is that it makes fewer assumptions about what colours the terminal emulator supports.

The only programs I usually see breaking this “rule” are text editors, for example Helix by default will use a custom colorscheme with this very nice purple background which is not a default ANSI colour. It seems fine for Helix to break this rule since Helix isn’t a “core” program and I assume any Helix user who doesn’t like that colorscheme will just change the theme.

rule 5: vaguely support readline keybindings

Almost every program I use supports readline keybindings if it would make sense to do so. For example, here are a bunch of different programs and a link to where they define Ctrl-E to go to the end of the line:

- ipython (Ctrl-E defined here)
- atuin (Ctrl-E defined here)
- fzf (Ctrl-E defined here)
- zsh (Ctrl-E defined here)
- fish (Ctrl-E defined here)
- tmux’s command prompt (Ctrl-E defined here)

None of those programs actually uses readline directly, they just sort of mimic emacs/readline keybindings. They don’t always mimic them exactly: for example atuin seems to use Ctrl-A as a prefix, so Ctrl-A doesn’t go to the beginning of the line. The exceptions to this are:

- some programs (like git, cat, and nc) don’t have any line editing support at all (except for backspace, Ctrl-W, and Ctrl-U)
- as usual, text editors are an exception: every text editor has its own approach to editing text

I wrote more about this “what keybindings does a program support?” question in entering text in the terminal is complicated.

rule 5.1: Ctrl-W should delete the last word

I’ve never seen a program (other than a text editor) where Ctrl-W doesn’t delete the last word. This is similar to the Ctrl-C rule – by default if a program is in “cooked mode”, the OS will delete the last word if you press Ctrl-W, and delete the whole line if you press Ctrl-U. So usually programs will imitate that behaviour. I can’t think of any exceptions to this other than text editors but if there are I’d love to hear about them!

rule 6: disable colours when writing to a pipe

Most programs will disable colours when writing to a pipe. For example:

- rg blah will highlight all occurrences of blah in the output, but if the output is to a pipe or a file, it’ll turn off the highlighting.
- ls --color=auto will use colour when writing to a terminal, but not when writing to a pipe

Both of those programs will also format their output differently when writing to the terminal: ls will organize files into columns, and ripgrep will group matches with headings.

If you want to force the program to use colour (for example because you want to look at the colour), you can use unbuffer to force the program’s output to be a tty like this:

    unbuffer rg blah | less -R

I’m sure that there are some programs that “break” this rule but I can’t think of any examples right now. Some programs have an --color flag that you can use to force colour to be on.
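The usual way programs implement rule 6 is by calling isatty on stdout; the shell equivalent is the [ -t 1 ] test, so you can sketch the behaviour like this:

    # colours.sh: decide output style the way rg and ls do (a sketch)
    if [ -t 1 ]; then
      echo "stdout is a terminal: colours on"
    else
      echo "stdout is a pipe or file: colours off"
    fi

    # try it both ways:
    #   bash colours.sh          -> "colours on"
    #   bash colours.sh | cat    -> "colours off"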
rule 7: - means stdin/stdout

Usually if you pass - to a program instead of a filename, it’ll read from stdin or write to stdout (whichever is appropriate). For example, if you want to format the Python code that’s on your clipboard with black and then copy it, you could run:

    pbpaste | black - | pbcopy

(pbpaste is a Mac program, you can do something similar on Linux with xclip)

My impression is that most programs implement this if it would make sense and I can’t think of any exceptions right now, but I’m sure there are many exceptions.

these “rules” take a long time to learn

These rules took me a long time to learn because I had to:

1. learn that the rule applied anywhere at all (“Ctrl-C will exit programs”)
2. notice some exceptions (“okay, Ctrl-C will exit find but not less”)
3. subconsciously figure out what the pattern is (“Ctrl-C will generally quit noninteractive programs, but in interactive programs it might interrupt the current operation instead of quitting the program”)
4. eventually maybe formulate it into an explicit rule that I know

A lot of my understanding of the terminal is honestly still in the “subconscious pattern recognition” stage. The only reason I’ve been taking the time to make things explicit at all is because I’ve been trying to explain how it works to others. Hopefully writing down these “rules” explicitly will make learning some of this stuff a little bit faster for others.

2 months ago 60 votes
Why pipes sometimes get "stuck": buffering

Here’s a niche terminal problem that has bothered me for years but that I never really understood until a few weeks ago. Let’s say you’re running this command to watch for some specific output in a log file:

    tail -f /some/log/file | grep thing1 | grep thing2

If log lines are being added to the file relatively slowly, the result I’d see is… nothing! It doesn’t matter if there were matches in the log file or not, there just wouldn’t be any output. I internalized this as “uh, I guess pipes just get stuck sometimes and don’t show me the output, that’s weird”, and I’d handle it by just running grep thing1 /some/log/file | grep thing2 instead, which would work. So as I’ve been doing a terminal deep dive over the last few months I was really excited to finally learn exactly why this happens.

why this happens: buffering

The reason why “pipes get stuck” sometimes is that it’s VERY common for programs to buffer their output before writing it to a pipe or file. So the pipe is working fine, the problem is that the program never even wrote the data to the pipe! This is for performance reasons: writing all output immediately as soon as you can uses more system calls, so it’s more efficient to save up data until you have 8KB or so of data to write (or until the program exits) and THEN write it to the pipe. In this example:

    tail -f /some/log/file | grep thing1 | grep thing2

the problem is that grep thing1 is saving up all of its matches until it has 8KB of data to write, which might literally never happen.

programs don’t buffer when writing to a terminal

Part of why I found this so disorienting is that tail -f file | grep thing will work totally fine, but then when you add the second grep, it stops working!! The reason for this is that the way grep handles buffering depends on whether it’s writing to a terminal or not. Here’s how grep (and many other programs) decides to buffer its output:

1. check if stdout is a terminal or not using the isatty function
2. if it’s a terminal, use line buffering (print every line immediately as soon as you have it)
3. otherwise, use “block buffering” – only print data if you have at least 8KB or so of data to print

So if grep is writing directly to your terminal then you’ll see the line as soon as it’s printed, but if it’s writing to a pipe, you won’t. Of course the buffer size isn’t always 8KB for every program, it depends on the implementation. For grep the buffering is handled by libc, and libc’s buffer size is defined in the BUFSIZ variable. Here’s where that’s defined in glibc.

(as an aside: “programs do not use 8KB output buffers when writing to a terminal” isn’t, like, a law of terminal physics, a program COULD use an 8KB buffer when writing output to a terminal if it wanted, it would just be extremely weird if it did that, I can’t think of any program that behaves that way)
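You can reproduce the whole thing without a log file by using a slow producer (a sketch):

    # one matching line per second: the first grep block-buffers because its
    # stdout is a pipe, so nothing shows up for a very long time
    while true; do echo "thing1 thing2"; sleep 1; done | grep thing1 | grep thing2

    # same pipeline with line buffering forced: a line appears every second
    while true; do echo "thing1 thing2"; sleep 1; done | grep --line-buffered thing1 | grep thing2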
commands that buffer & commands that don’t

One annoying thing about this buffering behaviour is that you kind of need to remember which commands buffer their output when writing to a pipe. Some commands that don’t buffer their output:

- tail
- cat
- tee

I think almost everything else will buffer output, especially if it’s a command where you’re likely to be using it for batch processing. Here’s a list of some common commands that buffer their output when writing to a pipe, along with the flag that disables block buffering:

- grep (--line-buffered)
- sed (-u)
- awk (there’s a fflush() function)
- tcpdump (-l)
- jq (-u)
- tr (-u)
- cut (can’t disable buffering)

Those are all the ones I can think of, lots of unix commands (like sort) may or may not buffer their output but it doesn’t matter because sort can’t do anything until it finishes receiving input anyway.

programming languages where the default “print” statement buffers

Also, here are a few programming languages where the default print statement will buffer output when writing to a pipe, and some ways to disable buffering if you want:

- C (disable with setvbuf)
- Python (disable with python -u, or PYTHONUNBUFFERED=1, or sys.stdout.reconfigure(line_buffering=True), or print(x, flush=True))
- Ruby (disable with STDOUT.sync = true)
- Perl (disable with $| = 1)

I assume that these languages are designed this way so that the default print function will be fast when you’re doing batch processing. Also, whether output is buffered or not might depend on what print function you use, for example in Rust print! buffers when writing to a pipe but println! will flush its output.
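Here’s a quick way to watch the Python behaviour from the shell (a sketch):

    # silent for ages: Python's print block-buffers because stdout is a pipe
    python3 -c 'import time; [(print(i), time.sleep(1)) for i in range(100)]' | cat

    # with -u (unbuffered), a number appears every second instead
    python3 -u -c 'import time; [(print(i), time.sleep(1)) for i in range(100)]' | cat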
when you press Ctrl-C on a pipe, the contents of the buffer are lost

Let’s say you’re running this command as a hacky way to watch for DNS requests to example.com, and you forgot to pass -l to tcpdump:

    sudo tcpdump -ni any port 53 | grep example.com

When you press Ctrl-C, what happens? In a magical perfect world, what I would want to happen is for tcpdump to flush its buffer, grep would search for example.com, and I would see all the output I missed. But in the real world, what happens is that all the programs get killed and the output in tcpdump’s buffer is lost. I think this problem is probably unavoidable – I spent a little time with strace to see how this works and grep receives the SIGINT before tcpdump anyway, so even if tcpdump tried to flush its buffer grep would already be dead.

redirecting to a file also buffers

It’s not just pipes, this will also buffer:

    sudo tcpdump -ni any port 53 > output.txt

Redirecting to a file doesn’t have the same “Ctrl-C will totally destroy the contents of the buffer” problem though – in my experience it usually behaves more like you’d want, where the contents of the buffer get written to the file before the program exits. I’m not 100% sure whether this is something you can always rely on or not.

a bunch of potential ways to avoid buffering

Okay, let’s talk solutions. Let’s say you’re running this command:

    tail -f /some/log/file | grep thing1 | grep thing2

I asked people on Mastodon how they would solve this in practice and there were 5 basic approaches. Here they are:

solution 1: run a program that finishes quickly

Historically my solution to this has been to just avoid the “command writing to pipe slowly” situation completely and instead run a program that will finish quickly like this:

    cat /some/log/file | grep thing1 | grep thing2 | tail

This doesn’t do the same thing as the original command but it does mean that you get to avoid thinking about these weird buffering issues. (you could also do grep thing1 /some/log/file but I often prefer to use an “unnecessary” cat)

solution 2: remember the “line buffer” flag to grep

You could remember that grep has a flag to avoid buffering and pass it like this:

    tail -f /some/log/file | grep --line-buffered thing1 | grep thing2

solution 3: use awk

Some people said that if they’re specifically dealing with a multiple greps situation, they’ll rewrite it to use a single awk instead, like this:

    tail -f /some/log/file | awk '/thing1/ && /thing2/'

Or you could write a more complicated grep, like this:

    tail -f /some/log/file | grep -E 'thing1.*thing2'

(awk also buffers, so for this to work you’ll want awk to be the last command in the pipeline)

solution 4: use stdbuf

stdbuf uses LD_PRELOAD to turn off libc’s buffering, and you can use it to turn off output buffering like this:

    tail -f /some/log/file | stdbuf -o0 grep thing1 | grep thing2

Like any LD_PRELOAD solution it’s a bit unreliable – it doesn’t work on static binaries, I think it won’t work if the program isn’t using libc’s buffering, and it doesn’t always work on Mac OS. Harry Marr has a really nice How stdbuf works post.

solution 5: use unbuffer

unbuffer program will force the program’s output to be a TTY, which means that it’ll behave the way it normally would on a TTY (less buffering, colour output, etc). You could use it in this example like this:

    tail -f /some/log/file | unbuffer grep thing1 | grep thing2

Unlike stdbuf it will always work, though it might have unwanted side effects, for example grep thing1 will also colour its matches. If you want to install unbuffer, it’s in the expect package.

that’s all the solutions I know about!

It’s a bit hard for me to say which one is “best”, I think personally I’m most likely to use unbuffer because I know it’s always going to work. If I learn about more solutions I’ll try to add them to this post.

I’m not really sure how often this comes up

I think it’s not very common for me to have a program that slowly trickles data into a pipe like this, normally if I’m using a pipe a bunch of data gets written very quickly, processed by everything in the pipeline, and then everything exits. The only examples I can come up with right now are:

- tcpdump
- tail -f
- watching log files in a different way like with kubectl logs
- the output of a slow computation

what if there were an environment variable to disable buffering?

I think it would be cool if there were a standard environment variable to turn off buffering, like PYTHONUNBUFFERED in Python. I got this idea from a couple of blog posts by Mark Dominus in 2018. Maybe NO_BUFFER like NO_COLOR? The design seems tricky to get right; Mark points out that NetBSD has environment variables called STDBUF, STDBUF1, etc. which give you a ton of control over buffering, but I imagine most developers don’t want to implement many different environment variables to handle a relatively minor edge case.

stuff I left out

Some things I didn’t talk about in this post, since these posts have been getting pretty long recently and seriously, does anyone REALLY want to read 3000 words about buffering?

- the difference between line buffering and having totally unbuffered output
- how buffering to stderr is different from buffering to stdout
- this post is only about buffering that happens inside the program; your operating system’s TTY driver also does a little bit of buffering sometimes
- other reasons you might need to flush your output other than “you’re writing to a pipe”

2 months ago 52 votes

More in programming

Diagnosis in engineering strategy.

Once you’ve written your strategy’s exploration, the next step is working on its diagnosis. Diagnosis is understanding the constraints and challenges your strategy needs to address. In particular, it’s about building that understanding while slowing yourself down from deciding how to solve the problem at hand before you know the problem’s nuances and constraints.

If you ever find yourself wanting to skip the diagnosis phase – let’s get to the solution already! – then maybe it’s worth acknowledging that every strategy that I’ve seen fail did so due to a lazy or inaccurate diagnosis. It’s very challenging to fail with a proper diagnosis, and almost impossible to succeed without one.

The topics this chapter will cover are:

- Why diagnosis is the foundation of effective strategy, on which effective policy depends. Conversely, how skipping the diagnosis phase consistently ruins strategies
- A step-by-step approach to diagnosing your strategy’s circumstances
- How to incorporate data into your diagnosis effectively, and where to focus on adding data
- Dealing with controversial elements of your diagnosis, such as pointing out that your own executive is one of the challenges to solve
- Why it’s more effective to view difficulties as part of the problem to be solved, rather than a blocking issue that prevents making forward progress
- The near impossibility of an effective diagnosis if you don’t bring humility and self-awareness to the process

Into the details we go! This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Diagnosis is strategy’s foundation

One of the challenges in evaluating strategy is that, after the fact, many effective strategies are so obvious that they’re pretty boring. Similarly, most ineffective strategies are so clearly flawed that their authors look lazy. That’s because, as a strategy is operated, the reality around it becomes clear. When you’re writing your strategy, you don’t know if you can convince your colleagues to adopt a new approach to specifying APIs, but a year later you know very definitively whether it’s possible.

Building your strategy’s diagnosis is your attempt to correctly recognize the context that the strategy needs to solve before deciding on the policies to address that context. Done well, the subsequent steps of writing strategy often feel like an afterthought, which is why I think of diagnosis as strategy’s foundation.

Where exploration was an evaluation-free activity, diagnosis is all about evaluation. How do teams feel today? Why did that project fail? Why did the last strategy go poorly? What will be the distractions to overcome to make this new strategy successful? That said, not all evaluation is equal. If you state your judgment directly, it’s easy to dispute. An effective diagnosis is hard to argue against, because it’s a web of interconnected observations, facts, and data. Even for folks who dislike your conclusions, the weight of evidence should be hard to shift.

Strategy testing, explored in the Refinement section, takes advantage of the reality that it’s easier to diagnose by doing than by speculating. It proposes a recursive diagnosis process until you have real-world evidence that the strategy is working.

How to develop your diagnosis

Your strategy is almost certain to fail unless you start from an effective diagnosis, but how to build a diagnosis is often left unspecified.
That’s because, for most folks, building the diagnosis is indeed a dark art: unspecified, undiscussed, and uncontrollable. I’ve been guilty of this as well, with The Engineering Executive’s Primer’s chapter on strategy staying silent on the details of how to diagnose for your strategy. So, yes, there is some truth to the idea that forming your diagnosis is an emergent, organic process rather than a structured, mechanical one. However, over time I’ve come to adopt a fairly structured approach:

1. Braindump: starting from a blank sheet of paper, write down your best understanding of the circumstances that inform your current strategy. Then set that piece of paper aside for the moment.
2. Summarize exploration: on a new piece of paper, review the contents of your exploration. Pull in every piece of diagnosis from similar situations that resonates with you. This is true for both internal and external works! For each diagnosis, tag whether it fits perfectly, or needs to be adjusted for your current circumstances. Then, once again, set the piece of paper aside.
3. Mine for distinct perspectives: on yet another blank page, talk to different stakeholders and colleagues who you know are likely to disagree with your early thinking. Your goal is not to agree with this feedback. Instead, it’s to understand their view. The Crux by Richard Rumelt anchors diagnosis in this approach, emphasizing the importance of “testing, adjusting, and changing the frame, or point of view.”
4. Synthesize views into one internally consistent perspective. Sometimes the different perspectives you’ve gathered don’t mesh well. They might well explicitly differ in what they believe the underlying problem is, as is typical in tension between platform and product engineering teams. The goal is to competently represent each of these perspectives in the diagnosis, even the ones you disagree with, so that later on you can evaluate your proposed approach against each of them. When synthesizing feedback goes poorly, it tends to fail in one of two ways. First, the author’s opinion shines through so strongly that it renders the author suspect. Your goal is never to agree with every team’s perspective, just as your diagnosis should typically avoid crowning any perspective as correct: a reader should generally be apprised of the details and unaware of the author. The second common issue is when a group tries to jointly own the synthesis, but creates a fractured perspective rather than a unified one. I generally find that having one author who is accountable for representing all views works best to address both of these issues.
5. Test drafts across perspectives. Once you’ve written your initial diagnosis, you want to sit down with the people who you expect to disagree most fervently. Iterate with them until they agree that you’ve accurately captured their perspective. It might be that they disagree with some other viewpoints, but they should be able to agree that others hold those views. They might argue that the data you’ve included doesn’t capture their full reality, in which case you can caveat the data by saying that their team disagrees that it’s a comprehensive lens.

Don’t worry about getting the details perfectly right in your initial diagnosis. You’re trying to get the right crumbs to feed into the next phase, strategy refinement. Allowing yourself to be directionally correct, rather than perfectly correct, makes it possible to cover a broad territory quickly.
Getting caught up in perfecting details is an easy way to anchor yourself into one perspective prematurely. At this point, I hope you’re starting to predict how I’ll conclude any recipe for strategy creation: if these steps feel overly mechanical to you, adjust them to something that feels more natural and authentic. There’s no perfect way to understand complex problems. That said, if you feel uncertain, or are skeptical of your own track record, I do encourage you to start with the above approach as a launching point. Incorporating data into your diagnosis The strategy for Navigating Private Equity ownership’s diagnosis includes a number of details to help readers understand the status quo. For example, the section on headcount growth explains how headcount costs have grown, how that compares to the prior year, and provides a mental model for readers to translate engineering headcount into engineering headcount costs: Our Engineering headcount costs have grown by 15% YoY this year, and 18% YoY the prior year. Headcount grew 7% and 9% respectively, with the difference between headcount and headcount costs explained by salary band adjustments (4%), a focus on hiring senior roles (3%), and increased hiring in higher cost geographic regions (1%). If everyone evaluating a strategy shares the same foundational data, then evaluating the strategy becomes vastly simpler. Data is also your mechanism for supporting or critiquing the various views that you’ve gathered when drafting your diagnosis; to an impartial reader, data will speak louder than passion. If you’re confident that a perspective is true, then include a data narrative that supports it. If you believe another perspective is overstated, then include the data a reader would need to come to the same conclusion. Do your best to include data analysis with a link out to the full data, rather than requiring readers to interpret the data themselves while they are reading. As your strategy document travels further, there will be inevitable requests for different cuts of data to help readers understand your thinking, and this is somewhat preventable by linking to your original sources. If much of the data you want doesn’t exist today, know that this is a fairly common scenario for strategy work: if the data to make the decision easy already existed, you probably would have already made a decision rather than needing to run a structured thinking process. The next chapter on refining strategy covers a number of tools that are useful for building confidence in low-data environments. 
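To make the arithmetic in the headcount example above concrete, here is a minimal sketch in Python. The variable names and the additive decomposition are my own framing of the quoted figures, not something taken from the strategy document itself, and an additive model is only an approximation of the true multiplicative growth (close enough at percentages this small):

# Rough decomposition of YoY engineering headcount cost growth,
# using the percentages quoted in the example above.
headcount_growth = 0.07         # 7% more engineers
salary_band_adjustments = 0.04  # 4% from compensation adjustments
senior_hiring_mix = 0.03        # 3% from hiring more senior roles
geo_cost_mix = 0.01             # 1% from higher-cost regions

cost_growth = (headcount_growth + salary_band_adjustments
               + senior_hiring_mix + geo_cost_mix)
print(f"estimated headcount cost growth: {cost_growth:.0%}")  # prints 15%

The point isn’t the arithmetic itself: it’s that spelling out each component gives every reader the same mental model for translating headcount into headcount costs.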
Whisper the controversial parts At one time, the company I worked at rolled out a bar raiser program styled after Amazon’s, where an interviewer from outside the team had to approve every hire. I spent some time arguing against adding this additional step, as I didn’t understand what we were solving for, and I was surprised at how uninterested management was in knowing whether the new process actually improved outcomes. What I didn’t realize until much later was that most of the senior leadership distrusted one of their peers, and had rolled out the bar raiser program solely to create a mechanism to control that manager’s hiring bar when the CTO was uninterested in holding that leader accountable. (I also learned that these leaders didn’t care much about implementing this policy, resulting in bar raiser rejections being frequently ignored, but that’s a discussion for the Operations for strategy chapter.) This is a good example of a strategy that makes sense with the full diagnosis, but makes little sense without it, and where stating part of the diagnosis out loud is nearly impossible. Even senior leaders are not generally allowed to write a document that says, “The Director of Product Engineering is a bad hiring manager.” When you’re writing a strategy, you’ll often find yourself choosing between two awkward options: Say something awkward or uncomfortable about your company or someone working within it Omit a critical piece of your diagnosis that’s necessary to understand the wider thinking Whenever you encounter this sort of debate, my advice is to find a way to include the diagnosis, but to reframe it into a palatable statement that avoids casting blame too narrowly. It’s helpful to discuss a few concrete examples of this, starting with the strategy for navigating private equity, whose diagnosis includes: Based on general practice, it seems likely that our new Private Equity ownership will expect us to reduce R&D headcount costs through a reduction. However, we don’t have any concrete details to make a structured decision on this, and our approach would vary significantly depending on the size of the reduction. There are many things the authors of this strategy likely feel about their state of reality. First, they are probably upset that their new private equity ownership is likely to eliminate colleagues. Second, they are likely upset that there is no clear plan around what they need to do, so they are stuck preparing for a wide range of potential outcomes. Whatever they feel, they don’t say any of it; they stick to precise, factual statements. For a second example, we can look to the Uber service migration strategy: Within infrastructure engineering, there is a team of four engineers responsible for service provisioning today. While our organization is growing at a similar rate as product engineering, none of that additional headcount is being allocated directly to the team working on service provisioning. We do not anticipate this changing. The team didn’t agree that their headcount should stay flat, but it was the reality they were operating in. They acknowledged their reality as a factual statement, without any additional commentary. In both of these examples, the authors found a professional, non-judgmental way to acknowledge the circumstances they were solving for. They would have preferred that the leaders behind those decisions take explicit accountability for them, but attempting to force that within the strategy writeup would have undermined the work. Excluding critical parts of your diagnosis makes your strategies particularly hard to evaluate, copy, or recreate. Find a way to say things politely so the strategy stays effective. As always, strategies are much more about realities than ideals. Reframe blockers as part of diagnosis When I work on strategy with early-career leaders, an idea that comes up a lot is that an identified problem means that strategy is not possible. For example, they might argue that doing strategy work is impossible at their current company because the executive team changes their mind too often. That observation is almost certainly true, but it’s much more powerful to reframe it as a diagnosis: if we don’t find a way to show concrete progress quickly, and use that to excite the executive team, our strategy is likely to fail. 
This transforms the thing preventing your strategy into a condition your strategy needs to address. Whenever you run into a reason why your strategy seems unlikely to work, or why strategy overall seems difficult, you’ve found an important piece of your diagnosis to include. There are never reasons why strategy simply cannot succeed, only diagnoses you’ve failed to recognize. For example, in our work on Uber’s service provisioning strategy, we knew that we weren’t getting more headcount for the team, that the product engineering team was going to continue growing rapidly, and that engineering leadership was unwilling to constrain how product engineering worked. Rather than preventing us from implementing a strategy, those components clarified what sort of approach could actually succeed. The role of self-awareness Every problem of today is partially rooted in the decisions of yesterday. If you’ve been with your organization for any duration at all, this means that you are directly or indirectly responsible for a portion of the problems that your diagnosis ought to recognize. Recognizing the impact of your prior actions in your diagnosis is therefore a powerful demonstration of self-awareness, and it suggests that your next strategy’s success is rooted in being honest about your prior choices. Don’t be afraid to recognize the failures in your past work. While changing your mind without new data is a sign of chaotic leadership, changing your mind with new data is a sign of thoughtful leadership. Summary Because diagnosis is the foundation of effective strategy, I’ve always found it the most intimidating phase of strategy work. While I think that’s a somewhat unavoidable reality, my hope is that this chapter has somewhat prepared you for that challenge. The four most important things to remember are simply: form your diagnosis before deciding how to solve it, try especially hard to capture perspectives you initially disagree with, supplement intuition with data where you can, and accept that sometimes the data you need to fully understand your situation doesn’t exist yet. That last piece in particular is why many good strategies never get shared, and it’s the topic we’ll address in the next chapter on strategy refinement.

10 hours ago 3 votes
My friend, JT

I’ve had a cat for almost a third of my life.

2 hours ago 3 votes
[Course Launch] Hands-on Introduction to x86 Assembly

A Live, Interactive Course for Systems Engineers

4 hours ago 2 votes
It’s cool to care

I’m sitting in a small coffee shop in Brooklyn. I have a warm drink, and it’s just started to snow outside. I’m visiting New York to see Operation Mincemeat on Broadway – I was at the dress rehearsal yesterday, and I’ll be at the opening preview tonight. I’ve seen this show more times than I care to count, and I hope US theater-goers love it as much as Brits do. The people who make the show will tell you that it’s about a bunch of misfits who thought they could do something ridiculous, who had the audacity to believe in something unlikely. That’s certainly one way to see it. The musical tells the true story of a group of British spies who tried to fool Hitler with a dead body, fake papers, and an outrageous plan that could easily have failed. Decades later, the show’s creators would mirror that same spirit of unlikely ambition. Four friends, armed with their creativity, determination, and a wardrobe full of hats, created a new musical in a small London theatre. And after a series of transfers, they’re about to open the show under the bright lights of Broadway. But when I watch the show, I see a story about friendship. It’s about how we need our friends to help us, to inspire us, to push us to be the best versions of ourselves. I see the swaggering leader who needs a team to help him truly achieve. The nervous scientist who stands up for himself with the support of his friends. The enthusiastic secretary who learns wisdom and resilience from her elder. And so, I suppose, it’s fitting that I’m not in New York on my own. I’m here with friends – dozens of wonderful people who I met through this ridiculous show. At first, I was just an audience member. I sat in my seat, I watched the show, and I laughed and cried in equal measure. After the show, I waited at stage door to thank the cast. Then I came to see the show a second time. And a third. And a fourth. After a few trips, I started to see familiar faces waiting with me at stage door. So before the cast came out, we started chatting. Those conversations became a Twitter community, then a Discord, then a WhatsApp. We swapped fan art, merch, and stories of our favourite moments. We went to other shows together, and we hung out outside the theatre. I spent New Year’s Eve with a few of these friends, sitting on somebody’s floor and laughing about a bowl of limes like it was the funniest thing in the world. And now we’re together in New York. Meeting this kind, funny, and creative group of people might seem as unlikely as the premise of Mincemeat itself. But I believed it was possible, and here we are. I feel so lucky to have met these people, to take this ridiculous trip, to share these precious days with them. I know what a privilege this is – the time, the money, the ability to say let’s do this and make it happen. How many people can gather a dozen friends for even a single evening, let alone a trip halfway round the world? You might think it’s silly to travel this far for a theatre show, especially one we’ve seen plenty of times in London. Some people would never see the same show twice, and most of us are comfortably into double or triple figures. Whenever somebody asks why, I don’t have a good answer. Because it’s fun? Because it’s moving? Because I enjoy it? I feel the need to justify it, as if there’s some logical reason that will make all of this okay. But maybe I don’t have to. Maybe joy doesn’t need justification. A theatre show doesn’t happen without people who care. Neither does a friendship. 
So much of our culture tells us that it’s not cool to care. It’s better to be detached, dismissive, disinterested. Enthusiasm is cringe. Sincerity is weakness. I’ve certainly felt that pressure – the urge to play it cool, to pretend I’m above it all. To act as if I only enjoy something a “normal” amount. Well, fuck that. I don’t know where the drive to be detached comes from. Maybe it’s to protect ourselves, a way to guard against disappointment. Maybe it’s to seem sophisticated, as if having passions makes us childish or less mature. Or perhaps it’s about control – if we stay detached, we never have to depend on others, we never have to trust in something bigger than ourselves. Being detached means you can’t get hurt – but you’ll also miss out on so much joy. I’m a big fan of being a big fan of things. So many of the best things in my life have come from caring, from letting myself be involved, from finding people who are a big fan of the same things as me. If I pretended not to care, I wouldn’t have any of that. Caring – deeply, foolishly, vulnerably – is how I connect with people. My friends and I care about this show, we care about each other, and we care about our joy. That care and love for each other is what brought us together, and without it we wouldn’t be here in this city. I know this is a once-in-a-lifetime trip. So many stars had to align – for us to meet, for the show we love to be successful, for us to be able to travel together. But if we didn’t care, none of those stars would have aligned. I know so many other friends who would have loved to be here but can’t be, for all kinds of reasons. Their absence isn’t for lack of caring, and they want the show to do well whether or not they’re here. I know they care, and that’s the important thing. To butcher Tennyson: I think it’s better to care about something you cannot affect, than to care about nothing at all. In a world that’s full of cynicism and spite and hatred, I feel that now more than ever. I’d recommend you go to the show if you haven’t already, but that’s not really the point of this post. Maybe you’ve already seen Operation Mincemeat, and it wasn’t for you. Maybe you’re not a theatre kid. Maybe you aren’t into musicals, or history, or war stories. That’s okay. I don’t mind if you care about different things to me. (Imagine how boring the world would be if we all cared about the same things!) But I want you to care about something. I want you to find it, find people who care about it too, and hold on to them. Because right now, in this city, with these people, at this show? I’m so glad I did. And I hope you find that sort of happiness too. Some of the people who made this trip special. Photo by Chloe, and taken from her Twitter. Timing note: I wrote this on February 15th, but I delayed posting it because I didn’t want to highlight the fact I was away from home. 

yesterday 3 votes
Stick with the customer

One of the biggest mistakes that new startup founders make is trying to get away from the customer-facing roles too early. Whether it's customer support or sales, it's an incredible advantage to have the founders doing that work directly, and for much longer than they find comfortable. The absolute worst thing you can do is hire a sales person or a customer service agent too early. You'll miss all the golden nuggets that customers throw at you for free when they're rejecting your pitch or complaining about the product. Seeing these reasons paraphrased or summarized destroys all the nutrients in their insights. You want that whole-grain feedback straight from the customer's mouth!  When we launched Basecamp in 2004, Jason was doing all the customer service himself. And he kept doing it like that for three years!! By the time we hired our first customer service agent, Jason was doing 150 emails/day. The business was doing millions of dollars in ARR. And Basecamp got infinitely better, both as a market proposition and as a product, because Jason could funnel all that feedback into decisions and positioning. For a long time after that, we did "Everyone on Support", frequently rotating programmers, designers, and founders through a day of answering emails directly to customers. The dividends of doing this were almost as high as having Jason run it all in the early years. We fixed an incredible number of minor niggles and annoying bugs, because programmers found it easier to solve the problem than to apologize for why it was there. It's not easy doing this! Customers often offer their valuable insights wrapped in rude language, unreasonable demands, and bad suggestions. That's why many founders quit the business of dealing with them at the first opportunity. That's why few companies ever do "Everyone on Support". That's why there's such eagerness to reduce support to an AI-only interaction. But quitting dealing with customers early, not just in support but also in sales, is an incredible handicap for any startup. You don't have to do everything that every customer demands of you, but you should certainly listen to them. And you can't listen well if the sound is being muffled by layers of indirection.

yesterday 4 votes