The other day I asked what folks on Mastodon find confusing about working in the terminal, and one thing that stood out to me was “editing a command you already typed in”. This really resonated with me: even though entering some text and editing it is a very “basic” task, it took me maybe 15 years of using the terminal every single day to get used to using Ctrl+A to go to the beginning of the line (or Ctrl+E for the end).

So let’s talk about why entering text might be hard! I’ll also share a few tips that I wish I’d learned earlier.

it’s very inconsistent between programs

A big part of what makes entering text in the terminal hard is the inconsistency between how different programs handle entering text. For example:

- some programs (the dash shell, cat, nc, git commit --interactive, etc) don’t support using arrow keys at all: if you press arrow keys, you’ll just see ^[[D^[[D^[[C^[[C^
- many programs (like irb, python3 on a Linux machine and many many more) use the readline library, which...
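A quick way to poke at those readline bindings yourself (my addition, assuming you’re in bash, whose line editing comes from readline): bash’s bind -p prints every key binding, so you can see where Ctrl+A and Ctrl+E come from:

bind -p | grep -E 'beginning-of-line|end-of-line'
# "\C-a": beginning-of-line
# "\C-e": end-of-line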
a year ago


More from Julia Evans

New zine: The Secret Rules of the Terminal

Hello! After many months of writing deep dive blog posts about the terminal, on Tuesday I released a new zine called “The Secret Rules of the Terminal”! You can get it for $12 here: https://wizardzines.com/zines/terminal, or get a 15-pack of all my zines here. Here’s the cover:

the table of contents

Here’s the table of contents:

why the terminal?

At first when I thought about writing about the terminal I was a bit baffled. After all – you just type in a command and run it, right? How hard could it be?

But then I ran a terminal workshop for some folks who were new to the terminal, and somebody asked this question: “how do I quit? Ctrl+C isn’t working!”

This question has a very simple answer (they’d run man pngquant, so they just needed to press q to quit). But it made me think about how even though different situations in the terminal look extremely similar (it’s all text!), the way they behave can be very different. Something as simple as “quitting” is different depending on whether you’re in a REPL (Ctrl+D), a full screen program like less (q), or a noninteractive program (Ctrl+C).

And then I realized that the terminal was way more complicated than I’d been giving it credit for.

there are a million tiny inconsistencies

The more I thought about using the terminal, the more I realized that the terminal has a lot of tiny inconsistencies like:

- sometimes you can use the arrow keys to move around, but sometimes pressing the arrow keys just prints ^[[D
- sometimes you can use the mouse to select text, but sometimes you can’t
- sometimes your commands get saved to a history when you run them, and sometimes they don’t
- some shells let you use the up arrow to see the previous command, and some don’t

If you use the terminal daily for 10 or 20 years, even if you don’t understand exactly why these things happen, you’ll probably build an intuition for them. But having an intuition for them isn’t the same as understanding why they happen. When writing this zine I actually had to do a lot of work to figure out exactly what was happening in the terminal to be able to talk about how to reason about it.

the rules aren’t written down anywhere

It turns out that the “rules” for how the terminal works (how do you edit a command you type in? how do you quit a program? how do you fix your colours?) are extremely hard to fully understand, because “the terminal” is actually made of many different pieces of software (your terminal emulator, your operating system, your shell, the core utilities like grep, and every other random terminal program you’ve installed) which are written by different people with different ideas about how things should work.

So I wanted to write something that would explain:

- how the 4 pieces of the terminal (your shell, terminal emulator, programs, and TTY driver) fit together to make everything work
- some of the core conventions for how you can expect things in your terminal to work
- lots of tips and tricks for how to use terminal programs

this zine explains the most useful parts of terminal internals

Terminal internals are a mess. A lot of it is just the way it is because someone made a decision in the 80s and now it’s impossible to change, and honestly I don’t think learning everything about terminal internals is worth it. But some parts are not that hard to understand and can really make your experience in the terminal better, like:

- if you understand what your shell is responsible for, you can configure your shell (or use a different one!) to access your history more easily, get great tab completion, and so much more
- if you understand escape codes, it’s much less scary when cating a binary to stdout messes up your terminal, you can just type reset and move on
- if you understand how colour works, you can get rid of bad colour contrast in your terminal so you can actually read the text

I learned a surprising amount writing this zine

When I wrote How Git Works, I thought I knew how Git worked, and I was right. But the terminal is different. Even though I feel totally confident in the terminal and even though I’ve used it every day for 20 years, I had a lot of misunderstandings about how the terminal works and (unless you’re the author of tmux or something) I think there’s a good chance you do too.

A few things I learned that are actually useful to me:

- I understand the structure of the terminal better and so I feel more confident debugging weird terminal stuff that happens to me (I was even able to suggest a small improvement to fish!). Identifying exactly which piece of software is causing a weird thing to happen in my terminal still isn’t easy but I’m a lot better at it now.
- you can write a shell script to copy to your clipboard over SSH
- how reset works under the hood (it does the equivalent of stty sane; sleep 1; tput reset) – basically I learned that I don’t ever need to worry about remembering stty sane or tput reset and I can just run reset instead
- how to look at the invisible escape codes that a program is printing out (run unbuffer program > out; less out)
- why the builtin REPLs on my Mac like sqlite3 are so annoying to use (they use libedit instead of readline)

blog posts I wrote along the way

As usual these days I wrote a bunch of blog posts about various side quests:

- How to add a directory to your PATH
- “rules” that terminal problems follow
- why pipes sometimes get “stuck”: buffering
- some terminal frustrations
- ASCII control characters in my terminal, on “what’s the deal with Ctrl+A, Ctrl+B, Ctrl+C, etc?”
- entering text in the terminal is complicated
- what’s involved in getting a “modern” terminal setup?
- reasons to use your shell’s job control
- standards for ANSI escape codes, which is really me trying to figure out if I think the terminfo database is serving us well today

people who helped with this zine

A long time ago I used to write zines mostly by myself but with every project I get more and more help. I met with Marie Claire LeBlanc Flanagan every weekday from September to June to work on this one. The cover is by Vladimir Kašiković, Lesley Trites did copy editing, Simon Tatham (who wrote PuTTY) did technical review, our Operations Manager Lee did the transcription as well as a million other things, and Jesse Luehrs (who is one of the very few people I know who actually understands the terminal’s cursed inner workings) had so many incredibly helpful conversations with me about what is going on in the terminal.

get the zine

Here are some links to get the zine again:

- get The Secret Rules of the Terminal
- get a 15-pack of all my zines here.

As always, you can get either a PDF version to print at home or a print version shipped to your house. The only caveat is print orders will ship in August – I need to wait for orders to come in to get an idea of how many I should print before sending it to the printer.

a month ago 39 votes
Using `make` to compile C programs (for non-C-programmers)

I have never been a C programmer but every so often I need to compile a C/C++ program from source. This has been kind of a struggle for me: for a long time, my approach was basically “install the dependencies, run make, if it doesn’t work, either try to find a binary someone has compiled or give up”.

“Hope someone else has compiled it” worked pretty well when I was running Linux but since I’ve been using a Mac for the last couple of years I’ve been running into more situations where I have to actually compile programs myself.

So let’s talk about what you might have to do to compile a C program! I’ll use a couple of examples of specific C programs I’ve compiled and talk about a few things that can go wrong. Here are three programs we’ll be talking about compiling:

- paperjam
- sqlite
- qf (a pager you can run to quickly open files from a search with rg -n THING | qf)

step 1: install a C compiler

This is pretty simple: on an Ubuntu system if I don’t already have a C compiler I’ll install one with:

sudo apt-get install build-essential

This installs gcc, g++, and make. The situation on a Mac is more confusing but it’s something like “install xcode command line tools”.

step 2: install the program’s dependencies

Unlike some newer programming languages, C doesn’t have a dependency manager. So if a program has any dependencies, you need to hunt them down yourself. Thankfully because of this, C programmers usually keep their dependencies very minimal and often the dependencies will be available in whatever package manager you’re using.

There’s almost always a section explaining how to get the dependencies in the README, for example in paperjam’s README, it says:

To compile PaperJam, you need the headers for the libqpdf and libpaper libraries (usually available as libqpdf-dev and libpaper-dev packages). You may need a2x (found in AsciiDoc) for building manual pages.

So on a Debian-based system you can install the dependencies like this:

sudo apt install -y libqpdf-dev libpaper-dev

If a README gives a name for a package (like libqpdf-dev), I’d basically always assume that they mean “in a Debian-based Linux distro”: if you’re on a Mac brew install libqpdf-dev will not work. I still have not 100% gotten the hang of developing on a Mac, so I don’t have many tips there yet. I guess in this case it would be brew install qpdf if you’re using Homebrew.

step 3: run ./configure (if needed)

Some C programs come with a Makefile and some instead come with a script called ./configure. For example, if you download sqlite’s source code, it has a ./configure script in it instead of a Makefile.

My understanding of this ./configure script is: you run it, it prints out a lot of somewhat inscrutable output, and then it either generates a Makefile or fails because you’re missing some dependency.

The ./configure script is part of a system called autotools that I have never needed to learn anything about beyond “run it to generate a Makefile”. I think there might be some options you can pass to get the ./configure script to produce a different Makefile but I have never done that.

step 4: run make

The next step is to run make to try to build a program. Some notes about make:

- Sometimes you can run make -j8 to parallelize the build and make it go faster
- It usually prints out a million compiler warnings when compiling the program. I always just ignore them. I didn’t write the software! The compiler warnings are not my problem.
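Putting steps 3 and 4 together, a hypothetical from-source build might look like this (my addition; --prefix is a standard autotools flag, and it also answers the “where will make install put the files?” worry that comes up in step 5):

./configure --prefix="$HOME/.local"   # generate the Makefile; install under ~/.local later
make -j8                              # build, with 8 parallel jobs
make install                          # optional: copies the binary into ~/.local/bin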
compiler errors are often dependency problems

Here’s an error I got while compiling paperjam on my Mac:

/opt/homebrew/Cellar/qpdf/12.0.0/include/qpdf/InputSource.hh:85:19: error: function definition does not declare parameters
   85 |     qpdf_offset_t last_offset{0};
      |                   ^

Over the years I’ve learned it’s usually best not to overthink problems like this: if it’s talking about qpdf, there’s a good chance it just means that I’ve done something wrong with how I’m including the qpdf dependency. Now let’s talk about some ways to get the qpdf dependency included in the right way.

the world’s shortest introduction to the compiler and linker

Before we talk about how to fix dependency problems: building C programs is split into 2 steps:

1. Compiling the code into object files (with gcc or clang)
2. Linking those object files into a final binary (with ld)

It’s important to know this when building a C program because sometimes you need to pass the right flags to the compiler and linker to tell them where to find the dependencies for the program you’re compiling.

make uses environment variables to configure the compiler and linker

If I run make on my Mac to install paperjam, I get this error:

c++ -o paperjam paperjam.o pdf-tools.o parse.o cmds.o pdf.o -lqpdf -lpaper
ld: library 'qpdf' not found

This is not because qpdf is not installed on my system (it actually is!). But the compiler and linker don’t know how to find the qpdf library. To fix this, we need to:

- pass "-I/opt/homebrew/include" to the compiler (to tell it where to find the header files)
- pass "-L/opt/homebrew/lib -liconv" to the linker (to tell it where to find library files and to link in iconv)

And we can get make to pass those extra parameters to the compiler and linker using environment variables! To see how this works: inside paperjam’s Makefile you can see a bunch of environment variables, like LDLIBS here:

paperjam: $(OBJS)
	$(LD) -o $@ $^ $(LDLIBS)

Everything you put into the LDLIBS environment variable gets passed to the linker (ld) as a command line argument.

secret environment variable: CPPFLAGS

Makefiles sometimes define their own environment variables that they pass to the compiler/linker, but make also has a bunch of “implicit” environment variables which it will automatically pass to the C compiler and linker. There’s a full list of implicit environment variables here, but one of them is CPPFLAGS, which gets automatically passed to the C compiler.

(technically it would be more normal to use CXXFLAGS for this, but this particular Makefile hardcodes CXXFLAGS so setting CPPFLAGS was the only way I could find to set the compiler flags without editing the Makefile)

how to use CPPFLAGS and LDLIBS to fix this compiler error

Now that we’ve talked about how CPPFLAGS and LDLIBS get passed to the compiler and linker, here’s the final incantation that I used to get the program to build successfully!

CPPFLAGS="-I/opt/homebrew/include" LDLIBS="-L/opt/homebrew/lib -liconv" make paperjam

This passes -I/opt/homebrew/include to the compiler and -L/opt/homebrew/lib -liconv to the linker.

Also I don’t want to pretend that I “magically” knew that those were the right arguments to pass, figuring them out involved a bunch of confused Googling that I skipped over in this post.
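To see how those variables flow through a build, here’s a minimal sketch of a Makefile (my addition; this is not paperjam’s real Makefile). It relies on make’s built-in rules: make compiles hello.cc into hello.o with $(CXX) $(CPPFLAGS) $(CXXFLAGS), then links it with the rule below, so values you set in the environment reach both tools without editing the file:

# minimal Makefile sketch (the recipe line must be indented with a tab)
hello: hello.o
	$(CXX) $(LDFLAGS) -o $@ $^ $(LDLIBS)
# hypothetical usage, mirroring the paperjam incantation:
#   CPPFLAGS="-I/opt/homebrew/include" LDLIBS="-L/opt/homebrew/lib -lqpdf" make hello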
I will say that:

- the -I compiler flag tells the compiler which directory to find header files in, like /opt/homebrew/include/qpdf/QPDF.hh
- the -L linker flag tells the linker which directory to find libraries in, like /opt/homebrew/lib/libqpdf.a
- the -l linker flag tells the linker which libraries to link in, like -liconv means “link in the iconv library”, or -lm means “link math”

tip: how to just build 1 specific file: make $FILENAME

Yesterday I discovered this cool tool called qf which you can use to quickly open files from the output of ripgrep. qf is in a big directory of various tools, but I only wanted to compile qf. So I just compiled qf, like this:

make qf

Basically if you know (or can guess) the output filename of the file you’re trying to build, you can tell make to just build that file by running make $FILENAME

tip: look at how other packaging systems built the same C program

If you’re having trouble building a C program, maybe other people had problems building it too! Every Linux distribution has build files for every package that they build, so even if you can’t install packages from that distribution directly, maybe you can get tips from that Linux distro for how to build the package. Realizing this (thanks to my friend Dave) was a huge ah-ha moment for me.

For example, this line from the nix package for paperjam says:

env.NIX_LDFLAGS = lib.optionalString stdenv.hostPlatform.isDarwin "-liconv";

This is basically saying “pass the linker flag -liconv to build this on a Mac”, so that’s a clue we could use to build it.

That same file also says env.NIX_CFLAGS_COMPILE = "-DPOINTERHOLDER_TRANSITION=1";. I’m not sure what this means, but when I try to build the paperjam package I do get an error about something called a PointerHolder, so I guess that’s somehow related to the “PointerHolder transition”.

step 5: installing the binary

Once you’ve managed to compile the program, probably you want to install it somewhere! Some Makefiles have an install target that lets you install the tool on your system with make install. I’m always a bit scared of this (where is it going to put the files? what if I want to uninstall them later?), so if I’m compiling a pretty simple program I’ll often just manually copy the binary to install it instead, like this:

cp qf ~/bin

step 6: maybe make your own package!

Once I figured out how to do all of this, I realized that I could use my new make knowledge to contribute a paperjam package to Homebrew! Then I could just brew install paperjam on future systems. The good thing is that even if the details of all of the different packaging systems differ, they fundamentally all use C compilers and linkers.

it can be useful to understand a little about C even if you’re not a C programmer

I think all of this is an interesting example of how it can be useful to understand some basics of how C programs work (like “they have header files”) even if you’re never planning to write a nontrivial C program in your life. It feels good to have some ability to compile C/C++ programs myself, even though I’m still not totally confident about all of the compiler and linker flags and I still plan to never learn anything about how autotools works other than “you run ./configure to generate the Makefile”.

Also one important thing I left out is LD_LIBRARY_PATH / DYLD_LIBRARY_PATH (which you use to tell the dynamic linker at runtime where to find dynamically linked files) because I can’t remember the last time I ran into an LD_LIBRARY_PATH issue and couldn’t find an example.
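A related trick that can save some confusion (my addition): once a binary builds, you can ask which dynamic libraries it actually ended up linked against, which helps debug -l and library-path problems:

ldd ./paperjam       # on Linux
otool -L ./paperjam  # on a Mac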

a month ago 27 votes
Standards for ANSI escape codes

Hello! Today I want to talk about ANSI escape codes.

For a long time I was vaguely aware of ANSI escape codes (“that’s how you make text red in the terminal and stuff”) but I had no real understanding of where they were supposed to be defined or whether or not there were standards for them. I just had a kind of vague “there be dragons” feeling around them. While learning about the terminal this year, I’ve learned that:

- ANSI escape codes are responsible for a lot of usability improvements in the terminal (did you know there’s a way to copy to your system clipboard when SSHed into a remote machine?? It’s an escape code called OSC 52!)
- They aren’t completely standardized, and because of that they don’t always work reliably. And because they’re also invisible, it’s extremely frustrating to troubleshoot escape code issues.

So I wanted to put together a list for myself of some standards that exist around escape codes, because I want to know if they have to feel unreliable and frustrating, or if there’s a future where we could all rely on them with more confidence.

- what’s an escape code?
- ECMA-48
- xterm control sequences
- terminfo
- should programs use terminfo?
- is there a “single common set” of escape codes?
- some reasons to use terminfo
- some more documents/standards
- why I think this is interesting

what’s an escape code?

Have you ever pressed the left arrow key in your terminal and seen ^[[D? That’s an escape code! It’s called an “escape code” because the first character is the “escape” character, which is usually written as ESC, \x1b, \E, \033, or ^[.

Escape codes are how your terminal emulator communicates various kinds of information (colours, mouse movement, etc) with programs running in the terminal. There are two kinds of escape codes:

- input codes which your terminal emulator sends for keypresses or mouse movements that don’t fit into Unicode. For example “left arrow key” is ESC[D, “Ctrl+left arrow” might be ESC[1;5D, and clicking the mouse might be something like ESC[M :3.
- output codes which programs can print out to colour text, move the cursor around, clear the screen, hide the cursor, copy text to the clipboard, enable mouse reporting, set the window title, etc.

Now let’s talk about standards!

ECMA-48

The first standard I found relating to escape codes was ECMA-48, which was originally published in 1976. ECMA-48 does two things:

1. Define some general formats for escape codes (like “CSI” codes, which are ESC[ + something and “OSC” codes, which are ESC] + something)
2. Define some specific escape codes, like how “move the cursor to the left” is ESC[D, or “turn text red” is ESC[31m. In the spec, the “cursor left” one is called CURSOR LEFT and the one for changing colours is called SELECT GRAPHIC RENDITION.

The formats are extensible, so there’s room for others to define more escape codes in the future. Lots of escape codes that are popular today aren’t defined in ECMA-48: for example it’s pretty common for terminal applications (like vim, htop, or tmux) to support using the mouse, but ECMA-48 doesn’t define escape codes for the mouse.
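To make CURSOR LEFT and SELECT GRAPHIC RENDITION less abstract, here’s a tiny demo you can paste into a shell (my addition; printf turns \033 into the ESC character):

printf '\033[31mthis text is red\033[0m\n'   # ESC[31m = red, ESC[0m = reset (SGR)
printf 'abc\033[2Dxx\n'                      # ESC[2D = cursor left twice, so this prints "axx"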
xterm control sequences

There are a bunch of escape codes that aren’t defined in ECMA-48, for example:

- enabling mouse reporting (where did you click in your terminal?)
- bracketed paste (did you paste that text or type it in?)
- OSC 52 (which terminal applications can use to copy text to your system clipboard)

I believe (correct me if I’m wrong!) that these and some others came from xterm, are documented in XTerm Control Sequences, and have been widely implemented by other terminal emulators. This list of “what xterm supports” is not a standard exactly, but xterm is extremely influential and so it seems like an important document.

terminfo

In the 80s (and to some extent today, but my understanding is that it was MUCH more dramatic in the 80s) there was a huge amount of variation in what escape codes terminals actually supported. To deal with this, there’s a database of escape codes for various terminals called “terminfo”.

It looks like the standard for terminfo is called X/Open Curses, though you need to create an account to view that standard for some reason. It defines the database format as well as a C library interface (“curses”) for accessing the database.

For example you can run this bash snippet to see every possible escape code for “clear screen” for all of the different terminals your system knows about:

for term in $(toe -a | awk '{print $1}')
do
  echo $term
  infocmp -1 -T "$term" 2>/dev/null | grep 'clear=' | sed 's/clear=//g;s/,//g'
done

On my system (and probably every system I’ve ever used?), the terminfo database is managed by ncurses.

should programs use terminfo?

I think it’s interesting that there are two main approaches that applications take to handling ANSI escape codes:

1. Use the terminfo database to figure out which escape codes to use, depending on what’s in the TERM environment variable. Fish does this, for example.
2. Identify a “single common set” of escape codes which works in “enough” terminal emulators and just hardcode those.

Some examples of programs/libraries that take approach #2 (“don’t use terminfo”) include:

- kakoune
- python-prompt-toolkit
- linenoise
- libvaxis
- chalk

I got curious about why folks might be moving away from terminfo and I found this very interesting and extremely detailed rant about terminfo from one of the fish maintainers, which argues that:

[the terminfo authors] have done a lot of work that, at the time, was extremely important and helpful. My point is that it no longer is.

I’m not going to do it justice so I’m not going to summarize it, I think it’s worth reading.

is there a “single common set” of escape codes?

I was just talking about the idea that you can use a “common set” of escape codes that will work for most people. But what is that set? Is there any agreement?

I really do not know the answer to this at all, but from doing some reading it seems like it’s some combination of:

- The codes that the VT100 supported (though some aren’t relevant on modern terminals)
- what’s in ECMA-48 (which I think also has some things that are no longer relevant)
- What xterm supports (though I’d guess that not everything in there is actually widely supported enough)
- and maybe ultimately “identify the terminal emulators you think your users are going to use most frequently and test in those”, the same way web developers do when deciding which CSS features are okay to use

I don’t think there are any resources like Can I use…? or Baseline for the terminal though. (in theory terminfo is supposed to be the “caniuse” for the terminal but it seems like it often takes 10+ years to add new terminal features when people invent them which makes it very limited)
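Here’s what the two approaches look like side by side (my addition): tput asks the terminfo database for the right code for your current TERM, while the hardcoded style just prints an xterm-ish code and hopes:

tput setaf 1; echo "red via terminfo"; tput sgr0   # approach 1: look the codes up
printf '\033[31mred, hardcoded\033[0m\n'           # approach 2: assume the common set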
some reasons to use terminfo

I also asked on Mastodon why people found terminfo valuable in 2025 and got a few reasons that made sense to me:

- some people expect to be able to use the TERM environment variable to control how programs behave (for example with TERM=dumb), and there’s no standard for how that should work in a post-terminfo world
- even though there’s less variation between terminal emulators than there was in the 80s, there’s far from zero variation: there are graphical terminals, the Linux framebuffer console, the situation you’re in when connecting to a server via its serial console, Emacs shell mode, and probably more that I’m missing
- there is no one standard for what the “single common set” of escape codes is, and sometimes programs use escape codes which aren’t actually widely supported enough

some more documents/standards

A few more documents and standards related to escape codes, in no particular order:

- the Linux console_codes man page documents escape codes that Linux supports
- how the VT 100 handles escape codes & control sequences
- the kitty keyboard protocol
- OSC 8 for links in the terminal (and notes on adoption)
- A summary of ANSI standards from tmux
- this terminal features reporting specification from iTerm
- sixel graphics

why I think this is interesting

I sometimes see people saying that the unix terminal is “outdated”, and since I love the terminal so much I’m always curious about what incremental changes might make it feel less “outdated”.

Maybe if we had a clearer standards landscape (like we do on the web!) it would be easier for terminal emulator developers to build new features and for authors of terminal applications to more confidently adopt those features so that we can all benefit from them and have a richer experience in the terminal.

Obviously standardizing ANSI escape codes is not easy (ECMA-48 was first published almost 50 years ago and we’re still not there!). But the situation with HTML/CSS/JS used to be extremely bad too and now it’s MUCH better, so maybe there’s hope.

4 months ago 45 votes
How to add a directory to your PATH

I was talking to a friend about how to add a directory to your PATH today. It’s something that feels “obvious” to me since I’ve been using the terminal for a long time, but when I searched for instructions for how to do it, I actually couldn’t find something that explained all of the steps – a lot of them just said “add this to ~/.bashrc”, but what if you’re not using bash? What if your bash config is actually in a different file? And how are you supposed to figure out which directory to add anyway?

So I wanted to try to write down some more complete directions and mention some of the gotchas I’ve run into over the years. Here’s a table of contents:

- step 1: what shell are you using?
- step 2: find your shell’s config file (with a note on bash’s config file)
- step 3: figure out which directory to add (step 3.1: double check it’s the right directory)
- step 4: edit your shell config
- step 5: restart your shell
- problems: problem 1: it ran the wrong program; problem 2: the program isn’t being run from your shell
- notes: a note on source; a note on fish_add_path

step 1: what shell are you using?

If you’re not sure what shell you’re using, here’s a way to find out. Run this:

ps -p $$ -o pid,comm=

- if you’re using bash, it’ll print out 97295 bash
- if you’re using zsh, it’ll print out 97295 zsh
- if you’re using fish, it’ll print out an error like “In fish, please use $fish_pid” ($$ isn’t valid syntax in fish, but in any case the error message tells you that you’re using fish, which you probably already knew)

Also bash is the default on Linux and zsh is the default on Mac OS (as of 2024). I’ll only cover bash, zsh, and fish in these directions.

step 2: find your shell’s config file

- in zsh, it’s probably ~/.zshrc
- in bash, it might be ~/.bashrc, but it’s complicated, see the note in the next section
- in fish, it’s probably ~/.config/fish/config.fish (you can run echo $__fish_config_dir if you want to be 100% sure)

a note on bash’s config file

Bash has three possible config files: ~/.bashrc, ~/.bash_profile, and ~/.profile. If you’re not sure which one your system is set up to use, I’d recommend testing this way:

1. add echo hi there to your ~/.bashrc
2. Restart your terminal
3. If you see “hi there”, that means ~/.bashrc is being used! Hooray!
4. Otherwise remove it and try the same thing with ~/.bash_profile
5. You can also try ~/.profile if the first two options don’t work.

(there are a lot of elaborate flow charts out there that explain how bash decides which config file to use but IMO it’s not worth it and just testing is the fastest way to be sure)

step 3: figure out which directory to add

Let’s say that you’re trying to install and run a program called http-server and it doesn’t work, like this:

$ npm install -g http-server
$ http-server
bash: http-server: command not found

How do you find what directory http-server is in? Honestly in general this is not that easy – often the answer is something like “it depends on how npm is configured”. A few ideas:

- Often when setting up a new installer (like cargo, npm, homebrew, etc), when you first set it up it’ll print out some directions about how to update your PATH. So if you’re paying attention you can get the directions then.
- Sometimes installers will automatically update your shell’s config file to update your PATH for you
- Sometimes just Googling “where does npm install things?” will turn up the answer
- Some tools have a subcommand that tells you where they’re configured to install things, like:
  - Homebrew: brew --prefix (and then append /bin/ and /sbin/ to what that gives you)
  - Node/npm: npm config get prefix (then append /bin/)
  - Go: go env | grep GOPATH (then append /bin/)
  - asdf: asdf info | grep ASDF_DIR (then append /bin/ and /shims/)

step 3.1: double check it’s the right directory

Once you’ve found a directory you think might be the right one, make sure it’s actually correct! For example, I found out that on my machine, http-server is in ~/.npm-global/bin. I can make sure that it’s the right directory by trying to run the program http-server in that directory like this:

$ ~/.npm-global/bin/http-server
Starting up http-server, serving ./public

It worked! Now that you know what directory you need to add to your PATH, let’s move to the next step!

step 4: edit your shell config

Now we have the 2 critical pieces of information we need:

1. Which directory you’re trying to add to your PATH (like ~/.npm-global/bin/)
2. Where your shell’s config is (like ~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish)

Now what you need to add depends on your shell:

bash and zsh instructions: Open your shell’s config file, and add a line like this:

export PATH=$PATH:~/.npm-global/bin/

(obviously replace ~/.npm-global/bin with the actual directory you’re trying to add)

fish instructions: In fish, the syntax is different:

set PATH $PATH ~/.npm-global/bin

(in fish you can also use fish_add_path, some notes on that further down)

step 5: restart your shell

Now, an extremely important step: updating your shell’s config won’t take effect if you don’t restart it! Two ways to do this:

1. open a new terminal (or terminal tab), and maybe close the old one so you don’t get confused
2. Run bash to start a new shell (or zsh if you’re using zsh, or fish if you’re using fish)

I’ve found that both of these usually work fine. And you should be done! Try running the program you were trying to run and hopefully it works now. If not, here are a couple of problems that you might run into:

problem 1: it ran the wrong program

If the wrong version of a program is running, you might need to add the directory to the beginning of your PATH instead of the end. For example, on my system I have two versions of python3 installed, which I can see by running which -a:

$ which -a python3
/usr/bin/python3
/opt/homebrew/bin/python3

The one your shell will use is the first one listed. If you want to use the Homebrew version, you need to add that directory (/opt/homebrew/bin) to the beginning of your PATH instead, by putting this in your shell’s config file (it’s /opt/homebrew/bin/:$PATH instead of the usual $PATH:/opt/homebrew/bin/):

export PATH=/opt/homebrew/bin/:$PATH

or in fish:

set PATH /opt/homebrew/bin $PATH

problem 2: the program isn’t being run from your shell

All of these directions only work if you’re running the program from your shell. If you’re running the program from an IDE, from a GUI, in a cron job, or some other way, you’ll need to add the directory to your PATH in a different way, and the exact details might depend on the situation.

in a cron job

Some options:

- use the full path to the program you’re running, like /home/bork/bin/my-program
- put the full PATH you want as the first line of your crontab (something like PATH=/bin:/usr/bin:/usr/local/bin:….). You can get the full PATH you’re using in your shell by running echo "PATH=$PATH".
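A hypothetical crontab showing both options side by side (my addition; the paths are made up for illustration):

PATH=/usr/local/bin:/usr/bin:/bin:/home/bork/.npm-global/bin
0 * * * * http-server                  # found via the PATH line above
30 * * * * /home/bork/bin/my-program   # or skip PATH and use the full path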
I’m honestly not sure how to handle it in an IDE/GUI because I haven’t run into that in a long time, will add directions here if someone points me in the right direction.

a note on source

When you install cargo (Rust’s installer) for the first time, it gives you these instructions for how to set up your PATH, which don’t mention a specific directory at all.

This is usually done by running one of the following (note the leading DOT):

. "$HOME/.cargo/env"            # For sh/bash/zsh/ash/dash/pdksh
source "$HOME/.cargo/env.fish"  # For fish

The idea is that you add that line to your shell’s config, and their script automatically sets up your PATH (and potentially other things) for you. This is pretty common (Homebrew and asdf have something similar), and there are two ways to approach this:

1. Just do what the tool suggests (add . "$HOME/.cargo/env" to your shell’s config)
2. Figure out which directories the script they’re telling you to run would add to your PATH, and then add those manually. Here’s how I’d do that:
   - Run . "$HOME/.cargo/env" in my shell (or the fish version if using fish)
   - Run echo "$PATH" | tr ':' '\n' | grep cargo to figure out which directories it added
   - See that it says /Users/bork/.cargo/bin and shorten that to ~/.cargo/bin
   - Add the directory ~/.cargo/bin to PATH (with the directions in this post)

I don’t think there’s anything wrong with doing what the tool suggests (it might be the “best way”!), but personally I usually use the second approach because I prefer knowing exactly what configuration I’m changing.

a note on fish_add_path

fish has a handy function called fish_add_path that you can run to add a directory to your PATH like this:

fish_add_path /some/directory

This will add the directory to your PATH, and automatically update all running fish shells with the new PATH. You don’t have to update your config at all! This is EXTREMELY convenient, but one downside (and the reason I’ve personally stopped using it) is that if you ever need to remove the directory from your PATH a few weeks or months later because maybe you made a mistake, it’s kind of hard to do (there are instructions in the comments of this github issue though).

that’s all

Hopefully this will help some people. Let me know (on Mastodon or Bluesky) if there are other major gotchas that have tripped you up when adding a directory to your PATH, or if you have questions about this post!

5 months ago 47 votes
Some terminal frustrations

A few weeks ago I ran a terminal survey (you can read the results here) and at the end I asked:

What’s the most frustrating thing about using the terminal for you?

1600 people answered, and I decided to spend a few days categorizing all the responses. Along the way I learned that classifying qualitative data is not easy but I gave it my best shot. I ended up building a custom tool to make it faster to categorize everything.

As with all of my surveys the methodology isn’t particularly scientific. I just posted the survey to Mastodon and Twitter, ran it for a couple of days, and got answers from whoever happened to see it and felt like responding.

Here are the top categories of frustrations! I think it’s worth keeping in mind while reading these comments that

- 40% of people answering this survey have been using the terminal for 21+ years
- 95% of people answering the survey have been using the terminal for at least 4 years

These comments aren’t coming from total beginners. Here are the categories of frustrations! The number in brackets is the number of people with that frustration. Honestly I don’t know how interesting this is to other people – I’m just writing this up for myself because I’m trying to write a zine about the terminal and I wanted to get a sense for what people are having trouble with.

remembering syntax (115)

People talked about struggles remembering:

- the syntax for CLI tools like awk, jq, sed, etc
- the syntax for redirects
- keyboard shortcuts for tmux, text editing, etc

One example comment:

There are just so many little “trivia” details to remember for full functionality. Even after all these years I’ll sometimes forget whether it’s 2 or 1 for stderr, or forget which is which for > and >>.

switching terminals is hard (91)

People talked about struggling with switching systems (for example home/work computer or when SSHing) and running into:

- OS differences in keyboard shortcuts (like Linux vs Mac)
- systems which don’t have their preferred text editor (“no vim” or “only vim”)
- different versions of the same command (like Mac OS grep vs GNU grep)
- no tab completion
- a shell they aren’t used to (“the subtle differences between zsh and bash”)

as well as differences inside the same system like pagers being not consistent with each other (git diff pagers, other pagers). One example comment:

I got used to fish and vi mode which are not available when I ssh into servers, containers.

color (85)

Lots of problems with color, like:

- programs setting colors that are unreadable with a light background color
- finding a colorscheme they like (and getting it to work consistently across different apps)
- color not working inside several layers of SSH/tmux/etc
- not liking the defaults
- not wanting color at all and struggling to turn it off

This comment felt relatable to me:

Getting my terminal theme configured in a reasonable way between the terminal emulator and fish (I did this years ago and remember it being tedious and fiddly and now feel like I’m locked into my current theme because it works and I dread touching any of that configuration ever again).

keyboard shortcuts (84)

Half of the comments on keyboard shortcuts were about how on Linux/Windows, the keyboard shortcut to copy/paste in the terminal is different from in the rest of the OS.
Some other issues with keyboard shortcuts other than copy/paste:

- using Ctrl-W in a browser-based terminal and closing the window
- the terminal only supports a limited set of keyboard shortcuts (no Ctrl-Shift-, no Super, no Hyper, lots of ctrl- shortcuts aren’t possible like Ctrl-,)
- the OS stopping you from using a terminal keyboard shortcut (like by default Mac OS uses Ctrl+left arrow for something else)
- issues using emacs in the terminal
- backspace not working (2)

other copy and paste issues (75)

Aside from “the keyboard shortcut for copy and paste is different”, there were a lot of OTHER issues with copy and paste, like:

- copying over SSH
- how tmux and the terminal emulator both do copy/paste in different ways
- dealing with many different clipboards (system clipboard, vim clipboard, the “middle click” clipboard on Linux, tmux’s clipboard, etc) and potentially synchronizing them
- random spaces added when copying from the terminal
- pasting multiline commands which automatically get run in a terrifying way
- wanting a way to copy text without using the mouse

discoverability (55)

There were lots of comments about this, which all came down to the same basic complaint – it’s hard to discover useful tools or features! This comment kind of summed it all up:

How difficult it is to learn independently. Most of what I know is an assorted collection of stuff I’ve been told by random people over the years.

steep learning curve (44)

A lot of comments about it generally having a steep learning curve. A couple of example comments:

After 15 years of using it, I’m not much faster than using it than I was 5 or maybe even 10 years ago.

and

That I know I could make my life easier by learning more about the shortcuts and commands and configuring the terminal but I don’t spend the time because it feels overwhelming.

history (42)

Some issues with shell history:

- history not being shared between terminal tabs (16)
- limits that are too short (4)
- history not being restored when terminal tabs are restored
- losing history because the terminal crashed
- not knowing how to search history

One example comment:

It wasted a lot of time until I figured it out and still annoys me that “history” on zsh has such a small buffer; I have to type “history 0” to get any useful length of history.

bad documentation (37)

People talked about:

- documentation being generally opaque
- lack of examples in man pages
- programs which don’t have man pages

Here’s a representative comment:

Finding good examples and docs. Man pages often not enough, have to wade through stack overflow

scrollback (36)

A few issues with scrollback:

- programs printing out too much data making you lose scrollback history
- resizing the terminal messes up the scrollback
- lack of timestamps
- GUI programs that you start in the background printing stuff out that gets in the way of other programs’ outputs

One example comment:

When resizing the terminal (in particular: making it narrower) leads to broken rewrapping of the scrollback content because the commands formatted their output based on the terminal window width.

“it feels outdated” (33)

Lots of comments about how the terminal feels hampered by legacy decisions and how users often end up needing to learn implementation details that feel very esoteric. One example comment:

Most of the legacy cruft, it would be great to have a green field implementation of the CLI interface.

shell scripting (32)

Lots of complaints about POSIX shell scripting.
There’s a general feeling that shell scripting is difficult but also that switching to a different less standard scripting language (fish, nushell, etc) brings its own problems.

Shell scripting. My tolerance to ditch a shell script and go to a scripting language is pretty low. It’s just too messy and powerful. Screwing up can be costly so I don’t even bother.

more issues

Some more issues that were mentioned at least 10 times:

- (31) inconsistent command line arguments: is it -h or -help or --help?
- (24) keeping dotfiles in sync across different systems
- (23) performance (e.g. “my shell takes too long to start”)
- (20) window management (potentially with some combination of tmux tabs, terminal tabs, and multiple terminal windows. Where did that shell session go?)
- (17) generally feeling scared/uneasy (“The debilitating fear that I’m going to do some mysterious Bad Thing with a command and I will have absolutely no idea how to fix or undo it or even really figure out what happened”)
- (16) terminfo issues (“Having to learn about terminfo if/when I try a new terminal emulator and ssh elsewhere.”)
- (16) lack of image support (sixel etc)
- (15) SSH issues (like having to start over when you lose the SSH connection)
- (15) various tmux/screen issues (for example lack of integration between tmux and the terminal emulator)
- (15) typos & slow typing
- (13) the terminal getting messed up for various reasons (pressing Ctrl-S, cating a binary, etc)

that’s all!

I’m not going to make a lot of commentary on these results, but here are a couple of categories that feel related to me:

- remembering syntax & history (often the thing you need to remember is something you’ve run before!)
- discoverability & the learning curve (the lack of discoverability is definitely a big part of what makes it hard to learn)
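As a footnote to the “remembering syntax” category (my addition), the specific redirect trivia that first comment mentions looks like this:

echo hi >  out.txt   # > truncates the file before writing
echo hi >> out.txt   # >> appends instead
ls /nope 2> err.txt  # stderr is file descriptor 2, stdout is 1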

5 months ago 41 votes

More in programming

Building competency is better than therapy

The world is waking to the fact that talk therapy is neither the only nor the best way to cure a garden-variety petite depression. Something many people will encounter at some point in their lives. Studies have shown that exercise, for example, is a more effective treatment than talk therapy (and pharmaceuticals!) when dealing with such episodes. But I'm just as interested in the role building competence can have in warding off the demons. And partly because of this meme:

I've talked about it before, but I keep coming back to the fact that it's exactly backwards. That signing up for an educational quest into Linux, history, or motorcycle repair actually is an incredibly effective alternative to therapy! At least for men who'd prefer to feel useful over being listened to, which, in my experience, is most of them.

This is why I find it so misguided when people who undertake those quests sell their journey short with self-effacing jibes about how much of an unattractive nerd it makes them to care about their hobby. Mihaly Csikszentmihalyi detailed back in 1990 how peak human happiness arrives exactly in these moments of flow when your competence is stretched by a difficult-but-doable challenge. Don't tell me those endorphins don't also help counter the darkness.

But it's just as much about the fact that these pursuits of competence usually offer a great opportunity for community as well that seals the deal. I've found time and again that people are starved for the kind of topic-based connections that, say, learning about Linux offers in spades. You're not just learning, you're learning with others. That is a time-tested antidote to depression: Forming and cultivating meaningful human connections. Yes, doing so over the internet isn't as powerful as doing it in person, but it's still powerful. It still offers community, involvement, and plenty of invitation to carry a meaningful burden.

Open source nails this trifecta of motivations to a T. There are endless paths of discovery and mastery available. There are tons of fellow travelers with whom to connect and collaborate. And you'll find an unlimited number of meaningful burdens in maintainerships open for the taking.

So next time you see that meme, you should cheer that the talk therapy table is empty. Leave it available for the severe, pathological cases that exercise and the pursuit of competence can't cure. Most people just don't need therapy, they need purpose, they need competence, they need exercise, and they need community.

2 days ago 5 votes
Programming Language Escape Hatches

The excellent-but-defunct blog Programming in the 21st Century defines "puzzle languages" as languages where part of the appeal is in figuring out how to express a program idiomatically, like a puzzle. As examples, he lists Haskell, Erlang, and J. All puzzle languages, the author says, have an "escape" out of the puzzle model that is pragmatic but stigmatized. But many mainstream languages have escape hatches, too.

Languages have a lot of properties. One of these properties is the language's capabilities, roughly the set of things you can do in the language. Capability is desirable but comes into conflict with a lot of other desirable properties, like simplicity or efficiency. In particular, reducing the capability of a language means that all remaining programs share more in common, meaning there's more assumptions the compiler and programmer can make ("tractability"). Assumptions are generally used to reason about correctness, but can also be about things like optimization: J's assumption that everything is an array leads to high-performance "special combinations".

Rust is the most famous example of a mainstream language that trades capability for tractability.[1] Rust has a lot of rules designed to prevent common memory errors, like keeping a reference to deallocated memory or modifying memory while something else is reading it. As a consequence, there's a lot of things that cannot be done in (safe) Rust, like interfacing with an external C function (as it doesn't have these guarantees). To do this, you need to use unsafe Rust, which lets you do additional things forbidden by safe Rust, such as dereferencing a raw pointer. Everybody tells you not to use unsafe unless you absolutely 100% know what you're doing, and possibly not even then. Sounds like an escape hatch to me!

To extrapolate, an escape hatch is a feature (either in the language itself or a particular implementation) that deliberately breaks core assumptions about the language in order to add capabilities. This explains both Rust and most of the so-called "puzzle languages": they need escape hatches because they have very strong conceptual models of the language which leads to lots of assumptions about programs. But plenty of "kitchen sink" mainstream languages have escape hatches, too:

- Some compilers let C++ code embed inline assembly.
- Languages built on .NET or the JVM have some sort of interop with C# or Java, and many of those languages make assumptions about programs that C#/Java do not.
- The SQL language has stored procedures as an escape hatch and vendors create a second escape hatch of user-defined functions.
- Ruby lets you bypass any form of encapsulation with send.
- Frameworks have escape hatches, too! React has an entire page on them.

(Does eval in interpreted languages count as an escape hatch? It feels different, but it does add a lot of capability. Maybe they don't "break assumptions" in the same way?)

The problem with escape hatches

In all languages with escape hatches, the rule is "use this as carefully and sparingly as possible", to the point where a messy solution without an escape hatch is preferable to a clean solution with one. Breaking a core assumption is a big deal! If the language is operating as if it's still true, it's going to do incorrect things.

I recently had this problem in a TLA+ contract. TLA+ is a language for modeling complicated systems, and assumes that the model is a self-contained universe. The client wanted to use TLA+ to test a real system.
The model checker should send commands to a test device and check the next states were the same. This is straightforward to set up with the IOExec escape hatch.[2] But the model checker assumed that state exploration was pure and it could skip around the state randomly, meaning it would do things like set x = 10, then skip to set x = 1, then skip back to inc x; assert x == 11. Oops!

We eventually found workarounds but it took a lot of clever tricks to pull off. I'll probably write up the technique when I'm less busy with The Book.

The other problem with escape hatches is the rest of the language is designed around not having said capabilities, meaning it can't support the feature as well as a language designed for them from the start. Even if your escape hatch code is clean, it might not cleanly integrate with the rest of your code. This is why people complain about unsafe Rust so often.

[1] It should be noted though that all languages with automatic memory management are trading capability for tractability, too. If you can't dereference pointers, you can't dereference null pointers.
[2] From the Community Modules (which come default with the VSCode extension).

3 days ago 10 votes
How We Migrated the Parse API From Ruby to Golang (Resurrected)

I wrote a lot of blog posts over my time at Parse, but they all evaporated after Facebook killed the product. Most of them I didn’t care about (there were, ahem, a lot of status updates and “service reliability announcements”), but I was mad about losing this one in particular, a deceptively casual retrospective of […]

3 days ago 11 votes
It's a Beelink, baby

It's only been two months since I discovered the power and joy of this new generation of mini PCs. My journey started out with a Minisforum UM870, which is a lovely machine, but since then, I've come to really appreciate the work of Beelink.

In a crowded market for mini PCs, Beelink stands out with their superior build quality, their class-leading cooling and silent operation, and their use of fully Linux-compatible components (the UM870 shipped with a MediaTek bluetooth/wifi card that doesn't work with Linux!). It's the complete package at three super compelling price points.

For $289, you can get the EQR5, which runs an 8-core AMD Zen3 5825U that puts out 1723/6419 in Geekbench, and comes with 16GB RAM and 500GB NVMe. I've run Omarchy on it, and it flies. For me, the main drawback was the lack of a DisplayPort, which kept me from using it with an Apple display, and the fact that the SER8 exists. But if you're on a budget, and you're fine with HDMI only, it's a wild bargain.

For $499, you can get the SER8. That's the price-to-performance sweet spot in the range. It uses the excellent 8-core AMD Zen4 8745HS that puts out 2595/12985 in Geekbench (~M4 multi-core numbers!), and runs our HEY test suite with 30,000 assertions almost as fast as an M4 Max! At that price, you get 32GB RAM + 1TB NVMe, as well as a DisplayPort, so it works with both the Apple 5K Studio Display and the Apple 6K XDR Display (you just need the right cable). Main drawback is limited wifi/bluetooth range, but Beelink tells me there's a fix on the way for that.

For $929, you can get the SER9 HX370. This is the top dog in this form factor. It uses the incredible 12-core AMD Zen5 HX370 that hits 2990/15611 in Geekbench, and runs our HEY test suite faster than any Apple M chip I've ever tested. The built-in graphics are also very capable. Enough to play a ton of games at 1080p. It also sorted the SER8's current wifi/bluetooth range issue.

I ran the SER8 as my main computer for a while, but now I'm using the SER9, and I just about never feel like I need anything more. Yes, the Framework Desktop, with its insane AMD Max 395+ chip, is even more bonkers. It almost cuts the HEY test suite time in half(!), but it's also $1,795, and not yet generally available. (But preorders are open for the ballers!)

Whichever machine fits your budget, it's frankly incredible that we have this kind of performance and efficiency available at these prices, with all of these Beelinks drawing less than 10 watts at idle and no more than 100 watts at peak! So it's no wonder that Beelink has been selling these units like hotcakes since I started talking about them on X as the ideal, cheap Omarchy desktop computers.

It's such a symbiotic relationship. There are a ton of programmers who have become Linux curious, and Beelink offers no-brainer options to give that a try at a bargain. I just love when that happens. The perfect intersection of hardware, software, and timing. That's what we got here. It's a Beelink, baby!

(And no, before you ask, I don't get any royalties, there's no affiliate link, and I don't own any shares in Beelink. I just love discovering great technology and seeing people start their Linux journey with an awesome, affordable computer!)

4 days ago 13 votes
How to Network as a Developer (Without Feeling Sleazy)

“One of the comments that sparked this article,” our founder Paul McMahon told me, “was someone saying, ‘I don’t really want to do networking because it seems kind of sleazy. I’m not that kind of person.’”

I guess that’s the key misconception people have when they hear ‘networking.’ They think it’s like some used car salesman kind of approach where you have to go and get something out of the person.

That’s a serious error, according to Paul, and it worries him that so many developers share that mindset. Instead, Paul considers networking a mix of making new friends, growing a community, and enjoying serendipitous connections that might not bear fruit until years later, but which could prove to be make-or-break career moments.

It’s something that you don’t get quick results on and that doesn’t make a difference at all until it does. And it’s just because of the one connection you happen to make at an event you went to once, this rainy Tuesday night when you didn’t really feel like going, but told yourself you have to go—and that can make all the difference.

As Paul has previously shared, he can attribute much of his own career success—and, interestingly enough, his peace of mind—to the huge amount of networking he’s done over the years. This is despite the fact that Paul is, in his own words, “not such a talkative person when it comes to small talk or whatever.”

Recently I sat down with Paul to discuss exactly how developers are networking “wrong,” and how they can get it right instead. In our conversation, we covered:

- What networking really is, and why you need to start ASAP
- Paul’s top tip for anyone who wants to network
- Advice for networking as an introvert
- Online vs offline networking—which is more effective?
- And how to network in Japan, even when you don’t speak Japanese

What is networking, really, and why should you start now?

“Sometimes,” Paul explained, “people think of hiring fairs and various exhibitions as the way to network, but that’s not networking to me. It’s purely transactional. Job seekers are focused on getting interviews, recruiters on making hires. There’s no chance to make friends or help people outside of your defined role.”

Networking is getting to know other people, understanding how maybe you can help them and how they can help you. And sometime down the road, maybe something comes out of it, maybe it doesn’t, but it’s just expanding your connections to people.

One reason developers often avoid or delay networking is that, at its core, networking is a long game. Dramatic impacts on your business or career are possible—even probable—but they don’t come to fruition immediately.

“A very specific example would be TokyoDev,” said Paul. “One of our initial clients that posted to the list came through networking.”

Sounds like a straightforward result? It’s a bit more complicated than that.

“There was a Belgian guy, Peter, whom I had known through the Ruby and tech community in Japan for a while,” Paul explained. “We knew each other, and Peter had met another Canadian guy, Jack, who [was] looking to hire a Ruby developer.

“So Peter knew about me and TokyoDev and introduced me to Jack, and that was the founder of Degica, who became one of our first clients. . . . And that just happened because I had known Peter through attending events over the years.”

Another example is how Paul’s connection to the Ruby community helped him launch Doorkeeper.
His participation in Ruby events played a critical role in helping the product succeed, but only because he’d already volunteered at them for years. “Because I knew those people,” he said, “they wanted to support me, and I guess they also saw that I was genuine about this stuff, and I wasn’t participating in these events with some big plan about, ‘If I do this, then they’re going to use my system,’ or whatever. Again, it was people helping each other out.”

These delayed and indirect impacts are why Paul thinks you should start networking right now. “You need to do it in advance of when you actually need it,” he said. “People say they’re looking for a job, and they’re told ‘You could network!’ Yeah, that could potentially help, but it’s almost too late.”

“You should have been networking a couple of years ago, when you didn’t need to be doing it, because then you’ve already built up the relationships. You can have this karma you’re building over time. . . . Networking has given me a lot of wealth. I don’t mean so much in money per se, but more that it’s given me a safety net.”

“Now I’m confident,” he said, “that if tomorrow TokyoDev disappeared, I could easily find something just through my connections. I don’t think I’ll, at least in Japan, ever have to apply for a job again.”

“I think my success with networking is something that anyone can replicate,” Paul went on, “provided they put in the time. I don’t consider myself to be especially skilled at networking; it’s just that I’ve spent over a decade making connections with people.”

How to network (the non-sleazy way)

Paul has a fair amount of advice for those who want to network in an effective yet genuine fashion. His first and most important tip: be interested in other people.

Asking questions rather than delivering your own talking points is Paul’s number one method for forging connections. It also helps avoid those “used car salesman” vibes.

“That’s why, at TokyoDev,” Paul explained, “we typically bar recruiters from attending our developer events. Because there are these kinds of people who are just going around wanting to get business cards from everyone, wanting to get their contact information, wanting to then sell them on something later. It’s quite obvious that they’re like that, and that leads to a bad environment, [if] someone’s trying to sell you on something.”

Networking for introverts

The other reason Paul likes asking questions is that it helps him network as an introvert. “That’s actually one of the things that makes networking easier for someone who isn’t naturally so talkative. . . . When you meet new people, there are some standard questions you can ask them, and it’s like a blank slate where you’re filling in the details about this person.”

He explained further that going to events and being social can be fun for him, but he doesn’t exactly find it relaxing. “When it comes to talking about something I’m really interested in, I can do it, but I stumble in these social situations. Despite that, I think I have been pretty successful when it comes to networking.”

“What has worked well for me,” he went on, “has been putting myself in situations that require me to do some networking, like going to an event. Even if you aren’t that proactive, you’re going to meet a couple of people there. You’re making more connections than you would if you stayed home and played video games.”

The more often you do it, the easier it gets, and not just because of practice: there’s a cumulative effect to making connections.
“Say you’re going to an event, and maybe last time you met a couple of people. You could just say ‘Hi’ to those people again, and maybe they’re talking with someone else they can introduce you to.”

Or, you can be the one making the introductions. “What has also worked well for me is that I like to introduce other people,” Paul said. “It’s always a great feeling when I’m talking to someone at an event, and I hear about what they’re doing or what they’re wanting to do, and then I can introduce someone else who maybe matches that.”

“And it’s also good for me, because then I can just be kind of passive there,” Paul joked. “I don’t have to be out there myself so much, if they’re talking to each other.”

His last piece of advice for introverts is somewhat counterintuitive. “Paradoxically,” he told me, “it helps if you’re in some sort of leadership position. If you’re an introvert, my advice would be one, just do it, but then also look for opportunities for helping in some more formal capacity, whether it’s organizing an event yourself, volunteering at an event . . . [or] making presentations.”

“Like for me, when I’ve organized a Tokyo Rubyist Meetup,” Paul said, “[then] naturally as the organizer people come to talk to me and ask me questions. . . . And it’s been similar when I’ve presented at an event, because then people know you know something about a topic, and maybe they want to know more about it, and so then they can ask you more questions and lead the conversation that way.”

Offline vs online networking

When it comes to offline vs online networking, Paul prefers offline. “In-person events are great for networking because they create serendipity. You meet people through events you wouldn’t meet otherwise, just because you’re in the same physical space as them.”

Those time and space constraints add pressure to make conversation—in a good way. “It’s natural when you are meeting someone, you ask about what they’re doing, and you make that small connection there. Then, after seeing them at multiple different events, you get a bit of a stronger connection to them.”

“Physical events are [also] much more constrained in the number of people, so it’s easier to help people,” he added. “Like with TokyoDev, I can’t help every single person online there, but if someone meets me at the event [and is] asking for advice or something like that, of course I’ve got to answer them. And I have more time for them there, because we’re in the same place at the same time. As humans, we’re more likely to help other people we have met in person, I think just because that’s how our brains work.”

That being said, Paul’s also found success with online networking. For example, several TokyoDev contributors—myself included—started working with Paul after interacting with him online:

- I commented on TokyoDev’s Dungeons and Dragons article, which led to Paul checking my profile and asking to chat about my experience.
- Scott, our community moderator and editor, joined TokyoDev in a paid position after being active on the TokyoDev Discord.
- Michelle was also active on the Discord; Paul initially asked her to write an article for TokyoDev on being a woman software engineer in Japan, before later bringing her onto the team.

Key to these results was that they involved no stereotypical “networking” strategies on either side: we all connected simply by playing a role in a shared, online community.
Consistent interactions with others, particularly over a longer period of time, build mutual trust and understanding. Your online presence can also help with offline networking.

“As TokyoDev became bigger and more people knew about me through my blog, it became a lot easier to network with people at events, because they’re like, ‘Hey, you’re Paul from TokyoDev. I like that site.’”

“It just leads to more opportunities,” he continued. “If you’ve interacted with someone before online, and then you meet them offline, you already do have a bit of a relationship with them, so you’re more likely to have a place to start the conversation. [And] if you’re someone who is struggling with doing in-person networking, the more you can produce or put out there [online], the more opportunities that can lead to.”

Networking in Japanese

While there are a number of events throughout Japan that are primarily in English, for the best networking results developers should take advantage of Japanese events as well—even if their Japanese isn’t that good.

In 2010, Paul created the Tokyo Rubyist Meetup with the intention of bringing together Japanese and international developers. To ensure it succeeded, he knew he needed more connections to the Japanese development community.

“So I started attending a lot of Japanese developer events where I was the only non-Japanese person there,” said Paul. “I didn’t have such great Japanese skills. I couldn’t understand all the presentations. But it still gave me a chance to make lots of connections, both with people who would later present at [Tokyo Rubyist Meetup], but also with other Japanese developers whom I would work with either on my own products or on other client projects.”

“I think it helped being kind of a visible minority. People were curious about me, about why I was attending these events.”

Their curiosity not only helped him network, but also gave him a helping hand when it came to Japanese conversation. “It’s a lot easier for me in Japanese to be asked questions and answer them,” he admitted.

But Paul wasn’t just attending those seminars and events in a passive manner. He soon started delivering presentations himself, usually as part of Lightning Talks—again, despite his relatively low level of Japanese. “It doesn’t matter if you do a bad job of it,” he said. “Japanese people I think are really receptive to people trying to speak in Japanese and making an effort. I think they’re happy to have someone who isn’t Japanese present, even if they don’t do a great job.”

He also quickly learned that the most important networking doesn’t take place at the meetup itself. “At least in the past,” he explained, “it was really split . . . [there’s the] seminar time where everyone goes and watches someone present. Everyone’s pretty passive there, and there isn’t much conversation going on between attendees.

“Then afterwards—and maybe less than half of the people attend—they go to a restaurant and have drinks after the event. And that’s where all the real socialization happens, and so that’s where I was able to really make the most connections.”

That said, Paul noted that the actual “drinking” part of the process has noticeably diminished. “Drinking culture in Japan is changing a lot,” he told me. “I noticed that even when hosting the Tokyo Rubyist Meetup. When we were first hosting it, we [had] an average of 2.5 beers per participant. More recently, the average is one or less per participant.

“I think there is not so much of an expectation for people to drink a lot.”
“Young Japanese people don’t drink at the same rate, so don’t feel like you actually have to get drunk at these events. You probably shouldn’t,” he added with a laugh.

What you should do is be persistent and patient. It took Paul about a year of very regularly attending events before he felt he was treated as a member of the community. “Literally I was attending more than the typical Japanese person,” he said. “At the peak, there were a couple of events per week.”

His hard work paid off, though. “I think one thing about Japanese culture,” he said, “is that it’s really group-based. Initially, as foreigners, we see ourselves in the foreign group versus the Japanese group, and there’s kind of a barrier there. But if you can find some other connection, like in my case Ruby, then with these developers I became part of the ‘Ruby developer group,’ and then I felt much more accepted.”

Eventually he experienced another benefit. “I think it was after a year of volunteering, maybe two years. . . . RubyKaigi, the biggest Ruby conference in Japan and one of the biggest developer conferences in Japan [in general], used Doorkeeper, the event registration system [I created], to manage their event.

“That was a big win for us, because it showed lots of people there that we were a serious system. It exposed us to lots of potential users and was one of the things that I think led to us, for a time, being the most popular event registration system among the tech community in Japan.”

Based on his experiences, Paul would urge more developers to try attending Japanese dev events. “Because I think a lot of non-Japanese people are still too intimidated to go to these events, even if they have better Japanese ability than I did. If you look at most of the Japanese developer events happening now, I think the participants are almost exclusively Japanese, but still, that doesn’t need to be the case.”

Takeaways

What Paul hopes other developers will take away from this article is that networking shouldn’t feel sleazy. Instead, good networking looks like:

- Being interested in other people. Asking them questions is the easiest way to start a conversation and make a genuine connection.
- Occasionally just making yourself go to that in-person event. Serendipity can’t happen if you don’t create opportunities for it.
- Introducing people to each other—it’s a fast and stress-free way to make more connections.
- Volunteering for events, or organizing your own.
- Supporting offline events with a solid online presence as well.
- Not being afraid to attend Japanese events, even if your Japanese isn’t good.

Above all, Paul stressed, don’t overcomplicate what networking is at its core: “Really, what networking comes down to is learning about what other people are doing, and how you can help them or how they can help you.”

Whether you’re online, offline, or doing it in Japanese, that mindset can turn networking from an awkward, sleazy-feeling experience into something you actually enjoy—even on a rainy Tuesday night.
