Hello! While talking with folks about Git, I’ve been seeing a comment over and over to the effect of “I hate rebase”. People seemed to feel pretty strongly about this, and I was really surprised because I don’t run into a lot of problems with rebase. I’ve found that if many people have a very strong opinion that’s different from mine, usually it’s because they have different experiences around that thing from me. So I asked on Mastodon: today I’m thinking about the tradeoffs of using git rebase a bit. I think the goal of rebase is to have a nice linear commit history, which is something I like. but what are the costs of using rebase? what problems has it caused for you in practice? I’m really only interested in specific bad experiences you’ve had here – not opinions or general statements like “rewriting history is bad” I got a huge number of incredible answers to this, and I’m going to do my best to summarize them here. I’ll also mention solutions or workarounds to those problems in...

More from Julia Evans

New zine: The Secret Rules of the Terminal

Hello! After many months of writing deep dive blog posts about the terminal, on Tuesday I released a new zine called “The Secret Rules of the Terminal”! You can get it for $12 here: https://wizardzines.com/zines/terminal, or get an 15-pack of all my zines here. Here’s the cover: the table of contents Here’s the table of contents: why the terminal? At first when I thought about writing about the terminal I was a bit baffled. After all – you just type in a command and run it, right? How hard could it be? But then I ran a terminal workshop for some folks who were new to the terminal, and somebody asked this question: “how do I quit? Ctrl+C isn’t working!” This question has a very simple answer (they’d run man pngquant, so they just needed to press q to quit). But it made me think about how even though different situations in the terminal look extremely similar (it’s all text!), the way they behave can be very different. Something as simple as “quitting” is different depending on whether you’re in a REPL (Ctrl+D), a full screen program like less (q), or a noninteractive program (Ctrl+C). And then I realized that the terminal was way more complicated than I’d been giving it credit for. there are a million tiny inconsistencies The more I thought about using the terminal, the more I realized that the terminal has a lot of tiny inconsistencies like: sometimes you can use the arrow keys to move around, but sometimes pressing the arrow keys just prints ^[[D sometimes you can use the mouse to select text, but sometimes you can’t sometimes your commands get saved to a history when you run them, and sometimes they don’t some shells let you use the up arrow to see the previous command, and some don’t If you use the terminal daily for 10 or 20 years, even if you don’t understand exactly why these things happen, you’ll probably build an intuition for them. But having an intuition for them isn’t the same as understanding why they happen. When writing this zine I actually had to do a lot of work to figure out exactly what was happening in the terminal to be able to talk about how to reason about it. the rules aren’t written down anywhere It turns out that the “rules” for how the terminal works (how do you edit a command you type in? how do you quit a program? how do you fix your colours?) are extremely hard to fully understand, because “the terminal” is actually made of many different pieces of software (your terminal emulator, your operating system, your shell, the core utilities like grep, and every other random terminal program you’ve installed) which are written by different people with different ideas about how things should work. So I wanted to write something that would explain: how the 4 pieces of the terminal (your shell, terminal emulator, programs, and TTY driver) fit together to make everything work some of the core conventions for how you can expect things in your terminal to work lots of tips and tricks for how to use terminal programs this zine explains the most useful parts of terminal internals Terminal internals are a mess. A lot of it is just the way it is because someone made a decision in the 80s and now it’s impossible to change, and honestly I don’t think learning everything about terminal internals is worth it. But some parts are not that hard to understand and can really make your experience in the terminal better, like: if you understand what your shell is responsible for, you can configure your shell (or use a different one!) 
to access your history more easily, get great tab completion, and so much more if you understand escape codes, it’s much less scary when cating a binary to stdout messes up your terminal, you can just type reset and move on if you understand how colour works, you can get rid of bad colour contrast in your terminal so you can actually read the text I learned a surprising amount writing this zine When I wrote How Git Works, I thought I knew how Git worked, and I was right. But the terminal is different. Even though I feel totally confident in the terminal and even though I’ve used it every day for 20 years, I had a lot of misunderstandings about how the terminal works and (unless you’re the author of tmux or something) I think there’s a good chance you do too. A few things I learned that are actually useful to me: I understand the structure of the terminal better and so I feel more confident debugging weird terminal stuff that happens to me (I was even able to suggest a small improvement to fish!). Identifying exactly which piece of software is causing a weird thing to happen in my terminal still isn’t easy but I’m a lot better at it now. you can write a shell script to copy to your clipboard over SSH how reset works under the hood (it does the equivalent of stty sane; sleep 1; tput reset) – basically I learned that I don’t ever need to worry about remembering stty sane or tput reset and I can just run reset instead how to look at the invisible escape codes that a program is printing out (run unbuffer program > out; less out) why the builtin REPLs on my Mac like sqlite3 are so annoying to use (they use libedit instead of readline) blog posts I wrote along the way As usual these days I wrote a bunch of blog posts about various side quests: How to add a directory to your PATH “rules” that terminal problems follow why pipes sometimes get “stuck”: buffering some terminal frustrations ASCII control characters in my terminal on “what’s the deal with Ctrl+A, Ctrl+B, Ctrl+C, etc?” entering text in the terminal is complicated what’s involved in getting a “modern” terminal setup? reasons to use your shell’s job control standards for ANSI escape codes, which is really me trying to figure out if I think the terminfo database is serving us well today people who helped with this zine A long time ago I used to write zines mostly by myself but with every project I get more and more help. I met with Marie Claire LeBlanc Flanagan every weekday from September to June to work on this one. The cover is by Vladimir Kašiković, Lesley Trites did copy editing, Simon Tatham (who wrote PuTTY) did technical review, our Operations Manager Lee did the transcription as well as a million other things, and Jesse Luehrs (who is one of the very few people I know who actually understands the terminal’s cursed inner workings) had so many incredibly helpful conversations with me about what is going on in the terminal. get the zine Here are some links to get the zine again: get The Secret Rules of the Terminal get a 15-pack of all my zines here. As always, you can get either a PDF version to print at home or a print version shipped to your house. The only caveat is print orders will ship in August – I need to wait for orders to come in to get an idea of how many I should print before sending it to the printer.
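If you want to try the escape-code inspection trick mentioned above, here's a minimal sketch of what it looks like in practice (it assumes you have unbuffer installed, which comes with the expect package, and ls is just a stand-in for whatever program you're curious about):

    # run the program under unbuffer so it still thinks it's talking to a terminal,
    # then capture its output, escape codes and all
    unbuffer ls --color=always > out
    less out        # escape codes show up as ESC[01;34m etc.
    cat -v out      # another way to make the invisible bytes visible

    # and if cat-ing a binary ever scrambles your terminal:
    reset           # roughly equivalent to: stty sane; sleep 1; tput reset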

Using `make` to compile C programs (for non-C-programmers)

I have never been a C programmer but every so often I need to compile a C/C++ program from source. This has been kind of a struggle for me: for a long time, my approach was basically “install the dependencies, run make, if it doesn’t work, either try to find a binary someone has compiled or give up”. “Hope someone else has compiled it” worked pretty well when I was running Linux but since I’ve been using a Mac for the last couple of years I’ve been running into more situations where I have to actually compile programs myself. So let’s talk about what you might have to do to compile a C program! I’ll use a couple of examples of specific C programs I’ve compiled and talk about a few things that can go wrong. Here are three programs we’ll be talking about compiling: paperjam sqlite qf (a pager you can run to quickly open files from a search with rg -n THING | qf) step 1: install a C compiler This is pretty simple: on an Ubuntu system if I don’t already have a C compiler I’ll install one with: sudo apt-get install build-essential This installs gcc, g++, and make. The situation on a Mac is more confusing but it’s something like “install xcode command line tools”. step 2: install the program’s dependencies Unlike some newer programming languages, C doesn’t have a dependency manager. So if a program has any dependencies, you need to hunt them down yourself. Thankfully because of this, C programmers usually keep their dependencies very minimal and often the dependencies will be available in whatever package manager you’re using. There’s almost always a section explaining how to get the dependencies in the README, for example in paperjam’s README, it says: To compile PaperJam, you need the headers for the libqpdf and libpaper libraries (usually available as libqpdf-dev and libpaper-dev packages). You may need a2x (found in AsciiDoc) for building manual pages. So on a Debian-based system you can install the dependencies like this. sudo apt install -y libqpdf-dev libpaper-dev If a README gives a name for a package (like libqpdf-dev), I’d basically always assume that they mean “in a Debian-based Linux distro”: if you’re on a Mac brew install libqpdf-dev will not work. I still have not 100% gotten the hang of developing on a Mac yet so I don’t have many tips there yet. I guess in this case it would be brew install qpdf if you’re using Homebrew. step 3: run ./configure (if needed) Some C programs come with a Makefile and some instead come with a script called ./configure. For example, if you download sqlite’s source code, it has a ./configure script in it instead of a Makefile. My understanding of this ./configure script is: You run it, it prints out a lot of somewhat inscrutable output, and then it either generates a Makefile or fails because you’re missing some dependency The ./configure script is part of a system called autotools that I have never needed to learn anything about beyond “run it to generate a Makefile”. I think there might be some options you can pass to get the ./configure script to produce a different Makefile but I have never done that. step 4: run make The next step is to run make to try to build a program. Some notes about make: Sometimes you can run make -j8 to parallelize the build and make it go faster It usually prints out a million compiler warnings when compiling the program. I always just ignore them. I didn’t write the software! The compiler warnings are not my problem. 
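Putting steps 1 through 4 together, here's roughly what the whole flow looks like on a Debian-based system, using paperjam's dependencies from above as the example (a sketch, not a universal recipe):

    # step 1: compiler + make
    sudo apt-get install build-essential
    # step 2: whatever dependencies the README asks for
    sudo apt install -y libqpdf-dev libpaper-dev
    # step 3: only if the project ships a ./configure script instead of a Makefile
    ./configure
    # step 4: build (add -j8 to parallelize if you like)
    make -j8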
compiler errors are often dependency problems Here’s an error I got while compiling paperjam on my Mac: /opt/homebrew/Cellar/qpdf/12.0.0/include/qpdf/InputSource.hh:85:19: error: function definition does not declare parameters 85 | qpdf_offset_t last_offset{0}; | ^ Over the years I’ve learned it’s usually best not to overthink problems like this: if it’s talking about qpdf, there’s a good change it just means that I’ve done something wrong with how I’m including the qpdf dependency. Now let’s talk about some ways to get the qpdf dependency included in the right way. the world’s shortest introduction to the compiler and linker Before we talk about how to fix dependency problems: building C programs is split into 2 steps: Compiling the code into object files (with gcc or clang) Linking those object files into a final binary (with ld) It’s important to know this when building a C program because sometimes you need to pass the right flags to the compiler and linker to tell them where to find the dependencies for the program you’re compiling. make uses environment variables to configure the compiler and linker If I run make on my Mac to install paperjam, I get this error: c++ -o paperjam paperjam.o pdf-tools.o parse.o cmds.o pdf.o -lqpdf -lpaper ld: library 'qpdf' not found This is not because qpdf is not installed on my system (it actually is!). But the compiler and linker don’t know how to find the qpdf library. To fix this, we need to: pass "-I/opt/homebrew/include" to the compiler (to tell it where to find the header files) pass "-L/opt/homebrew/lib -liconv" to the linker (to tell it where to find library files and to link in iconv) And we can get make to pass those extra parameters to the compiler and linker using environment variables! To see how this works: inside paperjam’s Makefile you can see a bunch of environment variables, like LDLIBS here: paperjam: $(OBJS) $(LD) -o $@ $^ $(LDLIBS) Everything you put into the LDLIBS environment variable gets passed to the linker (ld) as a command line argument. secret environment variable: CPPFLAGS Makefiles sometimes define their own environment variables that they pass to the compiler/linker, but make also has a bunch of “implicit” environment variables which it will automatically pass to the C compiler and linker. There’s a full list of implicit environment variables here, but one of them is CPPFLAGS, which gets automatically passed to the C compiler. (technically it would be more normal to use CXXFLAGS for this, but this particular Makefile hardcodes CXXFLAGS so setting CPPFLAGS was the only way I could find to set the compiler flags without editing the Makefile) how to use CPPFLAGS and LDLIBS to fix this compiler error Now that we’ve talked about how CPPFLAGS and LDLIBS get passed to the compiler and linker, here’s the final incantation that I used to get the program to build successfully! CPPFLAGS="-I/opt/homebrew/include" LDLIBS="-L/opt/homebrew/lib -liconv" make paperjam This passes -I/opt/homebrew/include to the compiler and -L/opt/homebrew/lib -liconv to the linker. Also I don’t want to pretend that I “magically” knew that those were the right arguments to pass, figuring them out involved a bunch of confused Googling that I skipped over in this post. 
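One small trick that helps when you're fiddling with CPPFLAGS and LDLIBS like in the incantation above: make -n does a dry run, so it prints the compiler and linker commands make would run without actually running them. A quick sketch using the same Homebrew paths as above:

    # print the commands without running them, to check the flags got through
    CPPFLAGS="-I/opt/homebrew/include" LDLIBS="-L/opt/homebrew/lib -liconv" make -n paperjam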
I will say that: the -I compiler flag tells the compiler which directory to find header files in, like /opt/homebrew/include/qpdf/QPDF.hh the -L linker flag tells the linker which directory to find libraries in, like /opt/homebrew/lib/libqpdf.a the -l linker flag tells the linker which libraries to link in, like -liconv means “link in the iconv library”, or -lm means “link math” tip: how to just build 1 specific file: make $FILENAME Yesterday I discovered this cool tool called qf which you can use to quickly open files from the output of ripgrep. qf is in a big directory of various tools, but I only wanted to compile qf. So I just compiled qf, like this: make qf Basically if you know (or can guess) the output filename of the file you’re trying to build, you can tell make to just build that file by running make $FILENAME tip: look at how other packaging systems built the same C program If you’re having trouble building a C program, maybe other people had problems building it too! Every Linux distribution has build files for every package that they build, so even if you can’t install packages from that distribution directly, maybe you can get tips from that Linux distro for how to build the package. Realizing this (thanks to my friend Dave) was a huge ah-ha moment for me. For example, this line from the nix package for paperjam says: env.NIX_LDFLAGS = lib.optionalString stdenv.hostPlatform.isDarwin "-liconv"; This is basically saying “pass the linker flag -liconv to build this on a Mac”, so that’s a clue we could use to build it. That same file also says env.NIX_CFLAGS_COMPILE = "-DPOINTERHOLDER_TRANSITION=1";. I’m not sure what this means, but when I try to build the paperjam package I do get an error about something called a PointerHolder, so I guess that’s somehow related to the “PointerHolder transition”. step 5: installing the binary Once you’ve managed to compile the program, probably you want to install it somewhere! Some Makefiles have an install target that lets you install the tool on your system with make install. I’m always a bit scared of this (where is it going to put the files? what if I want to uninstall them later?), so if I’m compiling a pretty simple program I’ll often just manually copy the binary to install it instead, like this: cp qf ~/bin step 6: maybe make your own package! Once I figured out how to do all of this, I realized that I could use my new make knowledge to contribute a paperjam package to Homebrew! Then I could just brew install paperjam on future systems. The good thing is that even though the details of all of the different packaging systems are different, they fundamentally all use C compilers and linkers. it can be useful to understand a little about C even if you’re not a C programmer I think all of this is an interesting example of how it can be useful to understand some basics of how C programs work (like “they have header files”) even if you’re never planning to write a nontrivial C program in your life. It feels good to have some ability to compile C/C++ programs myself, even though I’m still not totally confident about all of the compiler and linker flags and I still plan to never learn anything about how autotools works other than “you run ./configure to generate the Makefile”. Also one important thing I left out is LD_LIBRARY_PATH / DYLD_LIBRARY_PATH (which you use to tell the dynamic linker at runtime where to find dynamically linked files) because I can’t remember the last time I ran into an LD_LIBRARY_PATH issue and couldn’t find an example.

Standards for ANSI escape codes

Hello! Today I want to talk about ANSI escape codes. For a long time I was vaguely aware of ANSI escape codes (“that’s how you make text red in the terminal and stuff”) but I had no real understanding of where they were supposed to be defined or whether or not there were standards for them. I just had a kind of vague “there be dragons” feeling around them. While learning about the terminal this year, I’ve learned that: ANSI escape codes are responsible for a lot of usability improvements in the terminal (did you know there’s a way to copy to your system clipboard when SSHed into a remote machine?? It’s an escape code called OSC 52!) They aren’t completely standardized, and because of that they don’t always work reliably. And because they’re also invisible, it’s extremely frustrating to troubleshoot escape code issues. So I wanted to put together a list for myself of some standards that exist around escape codes, because I want to know if they have to feel unreliable and frustrating, or if there’s a future where we could all rely on them with more confidence. what’s an escape code? ECMA-48 xterm control sequences terminfo should programs use terminfo? is there a “single common set” of escape codes? some reasons to use terminfo some more documents/standards why I think this is interesting what’s an escape code? Have you ever pressed the left arrow key in your terminal and seen ^[[D? That’s an escape code! It’s called an “escape code” because the first character is the “escape” character, which is usually written as ESC, \x1b, \E, \033, or ^[. Escape codes are how your terminal emulator communicates various kinds of information (colours, mouse movement, etc) with programs running in the terminal. There are two kind of escape codes: input codes which your terminal emulator sends for keypresses or mouse movements that don’t fit into Unicode. For example “left arrow key” is ESC[D, “Ctrl+left arrow” might be ESC[1;5D, and clicking the mouse might be something like ESC[M :3. output codes which programs can print out to colour text, move the cursor around, clear the screen, hide the cursor, copy text to the clipboard, enable mouse reporting, set the window title, etc. Now let’s talk about standards! ECMA-48 The first standard I found relating to escape codes was ECMA-48, which was originally published in 1976. ECMA-48 does two things: Define some general formats for escape codes (like “CSI” codes, which are ESC[ + something and “OSC” codes, which are ESC] + something) Define some specific escape codes, like how “move the cursor to the left” is ESC[D, or “turn text red” is ESC[31m. In the spec, the “cursor left” one is called CURSOR LEFT and the one for changing colours is called SELECT GRAPHIC RENDITION. The formats are extensible, so there’s room for others to define more escape codes in the future. Lots of escape codes that are popular today aren’t defined in ECMA-48: for example it’s pretty common for terminal applications (like vim, htop, or tmux) to support using the mouse, but ECMA-48 doesn’t define escape codes for the mouse. xterm control sequences There are a bunch of escape codes that aren’t defined in ECMA-48, for example: enabling mouse reporting (where did you click in your terminal?) bracketed paste (did you paste that text or type it in?) OSC 52 (which terminal applications can use to copy text to your system clipboard) I believe (correct me if I’m wrong!) 
that these and some others came from xterm, are documented in XTerm Control Sequences, and have been widely implemented by other terminal emulators. This list of “what xterm supports” is not a standard exactly, but xterm is extremely influential and so it seems like an important document. terminfo In the 80s (and to some extent today, but my understanding is that it was MUCH more dramatic in the 80s) there was a huge amount of variation in what escape codes terminals actually supported. To deal with this, there’s a database of escape codes for various terminals called “terminfo”. It looks like the standard for terminfo is called X/Open Curses, though you need to create an account to view that standard for some reason. It defines the database format as well as a C library interface (“curses”) for accessing the database. For example you can run this bash snippet to see every possible escape code for “clear screen” for all of the different terminals your system knows about: for term in $(toe -a | awk '{print $1}') do echo $term infocmp -1 -T "$term" 2>/dev/null | grep 'clear=' | sed 's/clear=//g;s/,//g' done On my system (and probably every system I’ve ever used?), the terminfo database is managed by ncurses. should programs use terminfo? I think it’s interesting that there are two main approaches that applications take to handling ANSI escape codes: Use the terminfo database to figure out which escape codes to use, depending on what’s in the TERM environment variable. Fish does this, for example. Identify a “single common set” of escape codes which works in “enough” terminal emulators and just hardcode those. Some examples of programs/libraries that take approach #2 (“don’t use terminfo”) include: kakoune python-prompt-toolkit linenoise libvaxis chalk I got curious about why folks might be moving away from terminfo and I found this very interesting and extremely detailed rant about terminfo from one of the fish maintainers, which argues that: [the terminfo authors] have done a lot of work that, at the time, was extremely important and helpful. My point is that it no longer is. I’m not going to do it justice so I’m not going to summarize it, I think it’s worth reading. is there a “single common set” of escape codes? I was just talking about the idea that you can use a “common set” of escape codes that will work for most people. But what is that set? Is there any agreement? I really do not know the answer to this at all, but from doing some reading it seems like it’s some combination of: The codes that the VT100 supported (though some aren’t relevant on modern terminals) what’s in ECMA-48 (which I think also has some things that are no longer relevant) What xterm supports (though I’d guess that not everything in there is actually widely supported enough) and maybe ultimately “identify the terminal emulators you think your users are going to use most frequently and test in those”, the same way web developers do when deciding which CSS features are okay to use I don’t think there are any resources like Can I use…? or Baseline for the terminal though. 
(in theory terminfo is supposed to be the “caniuse” for the terminal but it seems like it often takes 10+ years to add new terminal features when people invent them which makes it very limited) some reasons to use terminfo I also asked on Mastodon why people found terminfo valuable in 2025 and got a few reasons that made sense to me: some people expect to be able to use the TERM environment variable to control how programs behave (for example with TERM=dumb), and there’s no standard for how that should work in a post-terminfo world even though there’s less variation between terminal emulators than there was in the 80s, there’s far from zero variation: there are graphical terminals, the Linux framebuffer console, the situation you’re in when connecting to a server via its serial console, Emacs shell mode, and probably more that I’m missing there is no one standard for what the “single common set” of escape codes is, and sometimes programs use escape codes which aren’t actually widely supported enough some more documents/standards A few more documents and standards related to escape codes, in no particular order: the Linux console_codes man page documents escape codes that Linux supports how the VT 100 handles escape codes & control sequences the kitty keyboard protocol OSC 8 for links in the terminal (and notes on adoption) A summary of ANSI standards from tmux this terminal features reporting specification from iTerm sixel graphics why I think this is interesting I sometimes see people saying that the unix terminal is “outdated”, and since I love the terminal so much I’m always curious about what incremental changes might make it feel less “outdated”. Maybe if we had a clearer standards landscape (like we do on the web!) it would be easier for terminal emulator developers to build new features and for authors of terminal applications to more confidently adopt those features so that we can all benefit from them and have a richer experience in the terminal. Obviously standardizing ANSI escape codes is not easy (ECMA-48 was first published almost 50 years ago and we’re still not there!). But the situation with HTML/CSS/JS used to be extremely bad too and now it’s MUCH better, so maybe there’s hope.
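To make some of this concrete, here's a small sketch of a few of these escape codes printed by hand (OSC 52 in particular only works if your terminal emulator supports it):

    # ECMA-48 SGR ("select graphic rendition"): turn text red, then reset
    printf '\033[31mthis is red\033[0m\n'

    # the same thing via terminfo, letting tput pick the right codes for $TERM
    tput setaf 1; echo "still red"; tput sgr0

    # an xterm-style extension: OSC 52, copy "hello" to the system clipboard
    printf '\033]52;c;%s\a' "$(printf 'hello' | base64)"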

How to add a directory to your PATH

I was talking to a friend about how to add a directory to your PATH today. It’s something that feels “obvious” to me since I’ve been using the terminal for a long time, but when I searched for instructions for how to do it, I actually couldn’t find something that explained all of the steps – a lot of them just said “add this to ~/.bashrc”, but what if you’re not using bash? What if your bash config is actually in a different file? And how are you supposed to figure out which directory to add anyway? So I wanted to try to write down some more complete directions and mention some of the gotchas I’ve run into over the years. Here’s a table of contents: step 1: what shell are you using? step 2: find your shell’s config file a note on bash’s config file step 3: figure out which directory to add step 3.1: double check it’s the right directory step 4: edit your shell config step 5: restart your shell problems: problem 1: it ran the wrong program problem 2: the program isn’t being run from your shell notes: a note on source a note on fish_add_path step 1: what shell are you using? If you’re not sure what shell you’re using, here’s a way to find out. Run this: ps -p $$ -o pid,comm= if you’re using bash, it’ll print out 97295 bash if you’re using zsh, it’ll print out 97295 zsh if you’re using fish, it’ll print out an error like “In fish, please use $fish_pid” ($$ isn’t valid syntax in fish, but in any case the error message tells you that you’re using fish, which you probably already knew) Also bash is the default on Linux and zsh is the default on Mac OS (as of 2024). I’ll only cover bash, zsh, and fish in these directions. step 2: find your shell’s config file in zsh, it’s probably ~/.zshrc in bash, it might be ~/.bashrc, but it’s complicated, see the note in the next section in fish, it’s probably ~/.config/fish/config.fish (you can run echo $__fish_config_dir if you want to be 100% sure) a note on bash’s config file Bash has three possible config files: ~/.bashrc, ~/.bash_profile, and ~/.profile. If you’re not sure which one your system is set up to use, I’d recommend testing this way: add echo hi there to your ~/.bashrc Restart your terminal If you see “hi there”, that means ~/.bashrc is being used! Hooray! Otherwise remove it and try the same thing with ~/.bash_profile You can also try ~/.profile if the first two options don’t work. (there are a lot of elaborate flow charts out there that explain how bash decides which config file to use but IMO it’s not worth it and just testing is the fastest way to be sure) step 3: figure out which directory to add Let’s say that you’re trying to install and run a program called http-server and it doesn’t work, like this: $ npm install -g http-server $ http-server bash: http-server: command not found How do you find what directory http-server is in? Honestly in general this is not that easy – often the answer is something like “it depends on how npm is configured”. A few ideas: Often when setting up a new installer (like cargo, npm, homebrew, etc), when you first set it up it’ll print out some directions about how to update your PATH. So if you’re paying attention you can get the directions then. 
Sometimes installers will automatically update your shell’s config file to update your PATH for you Sometimes just Googling “where does npm install things?” will turn up the answer Some tools have a subcommand that tells you where they’re configured to install things, like: Homebrew: brew --prefix (and then append /bin/ and /sbin/ to what that gives you) Node/npm: npm config get prefix (then append /bin/) Go: go env | grep GOPATH (then append /bin/) asdf: asdf info | grep ASDF_DIR (then append /bin/ and /shims/) step 3.1: double check it’s the right directory Once you’ve found a directory you think might be the right one, make sure it’s actually correct! For example, I found out that on my machine, http-server is in ~/.npm-global/bin. I can make sure that it’s the right directory by trying to run the program http-server in that directory like this: $ ~/.npm-global/bin/http-server Starting up http-server, serving ./public It worked! Now that you know what directory you need to add to your PATH, let’s move to the next step! step 4: edit your shell config Now we have the 2 critical pieces of information we need: Which directory you’re trying to add to your PATH (like ~/.npm-global/bin/) Where your shell’s config is (like ~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish) Now what you need to add depends on your shell: bash and zsh instructions: Open your shell’s config file, and add a line like this: export PATH=$PATH:~/.npm-global/bin/ (obviously replace ~/.npm-global/bin with the actual directory you’re trying to add) fish instructions: In fish, the syntax is different: set PATH $PATH ~/.npm-global/bin (in fish you can also use fish_add_path, some notes on that further down) step 5: restart your shell Now, an extremely important step: updating your shell’s config won’t take effect if you don’t restart it! Two ways to do this: open a new terminal (or terminal tab), and maybe close the old one so you don’t get confused Run bash to start a new shell (or zsh if you’re using zsh, or fish if you’re using fish) I’ve found that both of these usually work fine. And you should be done! Try running the program you were trying to run and hopefully it works now. If not, here are a couple of problems that you might run into: problem 1: it ran the wrong program If the wrong version of a program is running, you might need to add the directory to the beginning of your PATH instead of the end. For example, on my system I have two versions of python3 installed, which I can see by running which -a: $ which -a python3 /usr/bin/python3 /opt/homebrew/bin/python3 The one your shell will use is the first one listed. If you want to use the Homebrew version, you need to add that directory (/opt/homebrew/bin) to the beginning of your PATH instead, by putting this in your shell’s config file (it’s /opt/homebrew/bin/:$PATH instead of the usual $PATH:/opt/homebrew/bin/) export PATH=/opt/homebrew/bin/:$PATH or in fish: set PATH /opt/homebrew/bin $PATH problem 2: the program isn’t being run from your shell All of these directions only work if you’re running the program from your shell. If you’re running the program from an IDE, from a GUI, in a cron job, or some other way, you’ll need to add the directory to your PATH in a different way, and the exact details might depend on the situation. in a cron job Some options: use the full path to the program you’re running, like /home/bork/bin/my-program put the full PATH you want as the first line of your crontab (something like PATH=/bin:/usr/bin:/usr/local/bin:….).
You can get the full PATH you’re using in your shell by running echo "PATH=$PATH". I’m honestly not sure how to handle it in an IDE/GUI because I haven’t run into that in a long time, will add directions here if someone points me in the right direction. a note on source When you install cargo (Rust’s installer) for the first time, it gives you these instructions for how to set up your PATH, which don’t mention a specific directory at all. This is usually done by running one of the following (note the leading DOT): . "$HOME/.cargo/env" # For sh/bash/zsh/ash/dash/pdksh source "$HOME/.cargo/env.fish" # For fish The idea is that you add that line to your shell’s config, and their script automatically sets up your PATH (and potentially other things) for you. This is pretty common (Homebrew and asdf have something similar), and there are two ways to approach this: Just do what the tool suggests (add . "$HOME/.cargo/env" to your shell’s config) Figure out which directories the script they’re telling you to run would add to your PATH, and then add those manually. Here’s how I’d do that: Run . "$HOME/.cargo/env" in my shell (or the fish version if using fish) Run echo "$PATH" | tr ':' '\n' | grep cargo to figure out which directories it added See that it says /Users/bork/.cargo/bin and shorten that to ~/.cargo/bin Add the directory ~/.cargo/bin to PATH (with the directions in this post) I don’t think there’s anything wrong with doing what the tool suggests (it might be the “best way”!), but personally I usually use the second approach because I prefer knowing exactly what configuration I’m changing. a note on fish_add_path fish has a handy function called fish_add_path that you can run to add a directory to your PATH like this: fish_add_path /some/directory This will add the directory to your PATH, and automatically update all running fish shells with the new PATH. You don’t have to update your config at all! This is EXTREMELY convenient, but one downside (and the reason I’ve personally stopped using it) is that if you ever need to remove the directory from your PATH a few weeks or months later because maybe you made a mistake, it’s kind of hard to do (there are instructions in the comments of this github issue though). that’s all Hopefully this will help some people. Let me know (on Mastodon or Bluesky) if there are other major gotchas that have tripped you up when adding a directory to your PATH, or if you have questions about this post!
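For reference, here's the whole dance from this post compressed into a few commands (a sketch that assumes bash and the ~/.npm-global/bin example from above; adjust the directory and config file for your setup):

    ps -p $$ -o pid,comm=                    # step 1: which shell am I in?
    ~/.npm-global/bin/http-server            # step 3.1: confirm the binary really lives there
    echo 'export PATH=$PATH:~/.npm-global/bin/' >> ~/.bashrc    # step 4: edit the config
    bash                                     # step 5: start a new shell so it takes effect
    echo "$PATH" | tr ':' '\n'               # sanity check: the directory should show up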

Some terminal frustrations

A few weeks ago I ran a terminal survey (you can read the results here) and at the end I asked: What’s the most frustrating thing about using the terminal for you? 1600 people answered, and I decided to spend a few days categorizing all the responses. Along the way I learned that classifying qualitative data is not easy but I gave it my best shot. I ended up building a custom tool to make it faster to categorize everything. As with all of my surveys the methodology isn’t particularly scientific. I just posted the survey to Mastodon and Twitter, ran it for a couple of days, and got answers from whoever happened to see it and felt like responding. Here are the top categories of frustrations! I think it’s worth keeping in mind while reading these comments that 40% of people answering this survey have been using the terminal for 21+ years 95% of people answering the survey have been using the terminal for at least 4 years These comments aren’t coming from total beginners. Here are the categories of frustrations! The number in brackets is the number of people with that frustration. Honestly I don’t how how interesting this is to other people – I’m just writing this up for myself because I’m trying to write a zine about the terminal and I wanted to get a sense for what people are having trouble with. remembering syntax (115) People talked about struggles remembering: the syntax for CLI tools like awk, jq, sed, etc the syntax for redirects keyboard shortcuts for tmux, text editing, etc One example comment: There are just so many little “trivia” details to remember for full functionality. Even after all these years I’ll sometimes forget where it’s 2 or 1 for stderr, or forget which is which for > and >>. switching terminals is hard (91) People talked about struggling with switching systems (for example home/work computer or when SSHing) and running into: OS differences in keyboard shortcuts (like Linux vs Mac) systems which don’t have their preferred text editor (“no vim” or “only vim”) different versions of the same command (like Mac OS grep vs GNU grep) no tab completion a shell they aren’t used to (“the subtle differences between zsh and bash”) as well as differences inside the same system like pagers being not consistent with each other (git diff pagers, other pagers). One example comment: I got used to fish and vi mode which are not available when I ssh into servers, containers. color (85) Lots of problems with color, like: programs setting colors that are unreadable with a light background color finding a colorscheme they like (and getting it to work consistently across different apps) color not working inside several layers of SSH/tmux/etc not liking the defaults not wanting color at all and struggling to turn it off This comment felt relatable to me: Getting my terminal theme configured in a reasonable way between the terminal emulator and fish (I did this years ago and remember it being tedious and fiddly and now feel like I’m locked into my current theme because it works and I dread touching any of that configuration ever again). keyboard shortcuts (84) Half of the comments on keyboard shortcuts were about how on Linux/Windows, the keyboard shortcut to copy/paste in the terminal is different from in the rest of the OS. 
Some other issues with keyboard shortcuts other than copy/paste: using Ctrl-W in a browser-based terminal and closing the window the terminal only supports a limited set of keyboard shortcuts (no Ctrl-Shift-, no Super, no Hyper, lots of ctrl- shortcuts aren’t possible like Ctrl-,) the OS stopping you from using a terminal keyboard shortcut (like by default Mac OS uses Ctrl+left arrow for something else) issues using emacs in the terminal backspace not working (2) other copy and paste issues (75) Aside from “the keyboard shortcut for copy and paste is different”, there were a lot of OTHER issues with copy and paste, like: copying over SSH how tmux and the terminal emulator both do copy/paste in different ways dealing with many different clipboards (system clipboard, vim clipboard, the “middle click” keyboard on Linux, tmux’s clipboard, etc) and potentially synchronizing them random spaces added when copying from the terminal pasting multiline commands which automatically get run in a terrifying way wanting a way to copy text without using the mouse discoverability (55) There were lots of comments about this, which all came down to the same basic complaint – it’s hard to discover useful tools or features! This comment kind of summed it all up: How difficult it is to learn independently. Most of what I know is an assorted collection of stuff I’ve been told by random people over the years. steep learning curve (44) A lot of comments about it generally having a steep learning curve. A couple of example comments: After 15 years of using it, I’m not much faster than using it than I was 5 or maybe even 10 years ago. and That I know I could make my life easier by learning more about the shortcuts and commands and configuring the terminal but I don’t spend the time because it feels overwhelming. history (42) Some issues with shell history: history not being shared between terminal tabs (16) limits that are too short (4) history not being restored when terminal tabs are restored losing history because the terminal crashed not knowing how to search history One example comment: It wasted a lot of time until I figured it out and still annoys me that “history” on zsh has such a small buffer; I have to type “history 0” to get any useful length of history. bad documentation (37) People talked about: documentation being generally opaque lack of examples in man pages programs which don’t have man pages Here’s a representative comment: Finding good examples and docs. Man pages often not enough, have to wade through stack overflow scrollback (36) A few issues with scrollback: programs printing out too much data making you lose scrollback history resizing the terminal messes up the scrollback lack of timestamps GUI programs that you start in the background printing stuff out that gets in the way of other programs’ outputs One example comment: When resizing the terminal (in particular: making it narrower) leads to broken rewrapping of the scrollback content because the commands formatted their output based on the terminal window width. “it feels outdated” (33) Lots of comments about how the terminal feels hampered by legacy decisions and how users often end up needing to learn implementation details that feel very esoteric. One example comment: Most of the legacy cruft, it would be great to have a green field implementation of the CLI interface. shell scripting (32) Lots of complaints about POSIX shell scripting. 
There’s a general feeling that shell scripting is difficult but also that switching to a different less standard scripting language (fish, nushell, etc) brings its own problems. Shell scripting. My tolerance to ditch a shell script and go to a scripting language is pretty low. It’s just too messy and powerful. Screwing up can be costly so I don’t even bother. more issues Some more issues that were mentioned at least 10 times: (31) inconsistent command line arguments: is it -h or help or –help? (24) keeping dotfiles in sync across different systems (23) performance (e.g. “my shell takes too long to start”) (20) window management (potentially with some combination of tmux tabs, terminal tabs, and multiple terminal windows. Where did that shell session go?) (17) generally feeling scared/uneasy (“The debilitating fear that I’m going to do some mysterious Bad Thing with a command and I will have absolutely no idea how to fix or undo it or even really figure out what happened”) (16) terminfo issues (“Having to learn about terminfo if/when I try a new terminal emulator and ssh elsewhere.”) (16) lack of image support (sixel etc) (15) SSH issues (like having to start over when you lose the SSH connection) (15) various tmux/screen issues (for example lack of integration between tmux and the terminal emulator) (15) typos & slow typing (13) the terminal getting messed up for various reasons (pressing Ctrl-S, cating a binary, etc) that’s all! I’m not going to make a lot of commentary on these results, but here are a couple of categories that feel related to me: remembering syntax & history (often the thing you need to remember is something you’ve run before!) discoverability & the learning curve (the lack of discoverability is definitely a big part of what makes it hard to learn)


More in programming

Single-Use Disposable Applications

As search gets worse and “working code” gets cheaper, apps get easier to make from scratch than to find.

Thoughts on Motivation and My 40-Year Career

I’ve never published an essay quite like this. I’ve written about my life before, reams of stuff actually, because that’s how I process what I think, but never for public consumption. I’ve been pushing myself to write more lately because my co-authors and I have a whole fucking book to write between now and October. […]

Desktop UI frameworks written by a single person

Less known desktop UI frameworks Writing desktop software is hard. The UI technologies of Windows or MacOS are awful compared to web technology. What can trivially be done with HTML/CSS/JavaScript in a few minutes can take hours using Windows’s win32 APIs or Mac’s Cocoa. That’s why the default technology for desktop apps, especially cross-platform, is Electron: a Chrome browser combined with a Node runtime. The problem is that it’s bloaty: each app is a unique build of Chrome with a little bit of application code. Chrome is over 100MB so many apps ship less than 1MB of code in a 100M wrapper. People tried to address the problem of poor OS APIs by writing UI frameworks, often meant to be cross-platform. You’ve heard about Qt, GTK, wxWindows. The problem with those is that they are also old, their APIs are not the greatest either and they are bloaty as well. There just doesn’t seem to be a good option. Writing your own framework seems impossible due to the size of the task. But is it? I’ll show a couple of less-known UI frameworks written mostly by a single person, often done simply to enable writing an application. SWELL in WDL WDL is interesting. Justin Frankel, the guy who created Winamp, has a repository of C++ code he uses in different projects. After selling Winamp to AOL, a side quest of writing a file sharing application, getting fired from AOL for writing a file sharing application, he started a company building Reaper, a digital audio workstation for Windows. Winamp is a win32 API program and so is Reaper. At some point Justin decided to make a Mac version but by then he had a lot of code heavily using win32 APIs. So he did what anyone in his position would: he implemented win32 APIs for Mac OS and Linux and called it SWELL - Simple Windows Emulation Layer. Ok, actually no-one else would do it. It was an insane idea but it worked. It’s important to not over-state SWELL’s capabilities. It’s not Wine. You can’t take any win32 program and recompile for Mac with SWELL. Frankel is insanely pragmatic and so is his code. SWELL only implements the subset of APIs he uses in Reaper. At the same time Reaper is a big app so if SWELL works for Reaper, it could work for your app. WDL is open-source using permissive MIT license. Sublime Text For a few years Sublime Text was THE programmer’s editor. It was written by a single developer in C++ and he wrote a custom UI toolkit for it. Not open source but its existence shows it can be done. RAD Debugger RAD Debugger is an open-source Windows debugger for C/C++ apps written in C by mostly a single person. It implements a custom UI framework based on a 3D renderer. The UI is an integral part of the app but the code is well structured so you probably can take just their UI / render code and use it in your own C / C++ app. Currently the app / UI is only for Windows but it’s designed to be cross-platform and they are working on porting the renderer to Mac OS / Linux. They use permissive MIT license and everything is written in C. Dear ImGui Dear ImGui is a newer cross-platform UI framework in C++. Open source, permissive MIT license. Written by mostly a single person. Ghostty Ghostty is a cross-platform terminal emulator and UI. It’s written in Zig by mostly a single person and uses its own low-level GPU renderer for the UI. You too can write your own UI framework At first the idea of writing your own UI framework seems impossibly daunting.
What I’m hoping to show is that if you’re ambitious enough it’s possible to build cross-platform desktop apps that are not just bloated 100MB Chrome wrappers around a few kilobytes of custom code. I’m not saying it’s a simple thing, just that enough people did it that it’s possible. It shouldn’t be necessary but both Microsoft and Apple have tragically dropped the ball on providing decent, high-performance UI libraries for their OS. Microsoft even writes their own apps, like Teams, in web technologies. Thanks to open source you’re not at the starting line. You can just use Dear ImGui or WDL’s SWELL. Or you can extract the UI code from RAD Debugger or Ghostty (if you write in Zig). Or you can look at their implementations to speed up your own design and implementation.

Logic for Programmers Turns One

I released Logic for Programmers exactly one year ago today. It feels weird to celebrate the anniversary of something that isn't 1.0 yet, but software projects have a proud tradition of celebrating a dozen anniversaries before 1.0. I wanted to share about what's changed in the past year and the work for the next six+ months. The Road to 0.1 I had been noodling on the idea of a logic book since the pandemic. The first time I wrote about it on the newsletter was in 2021! Then I said that it would be done by June and would be "under 50 pages". The idea was to cover logic as a "soft skill" that helped you think about things like requirements and stuff. That version sucked. If you want to see how much it sucked, I put it up on Patreon. Then I slept on the next draft for three years. Then in 2024 a lot of business fell through and I had a lot of free time, so with the help of Saul Pwanson I rewrote the book. This time I emphasized breadth over depth, trying to cover a lot more techniques. I also decided to self-publish it instead of pitching it to a publisher. Not going the traditional route would mean I would be responsible for paying for editing, advertising, graphic design etc, but I hoped that would be compensated by much higher royalties. It also meant I could release the book in early access and use early sales to fund further improvements. So I wrote up a draft in Sphinx, compiled it to LaTeX, and uploaded the PDF to leanpub. That was in June 2024. Since then I kept to a monthly cadence of updates, missing once in November (short-notice contract) and once last month (Systems Distributed). The book's now on v0.10. What's changed? A LOT v0.1 was very obviously an alpha, and I have made a lot of improvements since then. For one, the book no longer looks like a Sphinx manual. Compare! Also, the content is very, very different. v0.1 was 19,000 words, v.10 is 31,000.1 This comes from new chapters on TLA+, constraint/SMT solving, logic programming, and major expansions to the existing chapters. Originally, "Simplifying Conditionals" was 600 words. Six hundred words! It almost fit in two pages! The chapter is now 2600 words, now covering condition lifting, quantifier manipulation, helper predicates, and set optimizations. All the other chapters have either gotten similar facelifts or are scheduled to get facelifts. The last big change is the addition of book assets. Originally you had to manually copy over all of the code to try it out, which is a problem when there are samples in eight distinct languages! Now there are ready-to-go examples for each chapter, with instructions on how to set up each programming environment. This is also nice because it gives me breaks from writing to code instead. How did the book do? Leanpub's all-time visualizations are terrible, so I'll just give the summary: 1180 copies sold, $18,241 in royalties. That's a lot of money for something that isn't fully out yet! By comparison, Practical TLA+ has made me less than half of that, despite selling over 5x as many books. Self-publishing was the right choice! In that time I've paid about $400 for the book cover (worth it) and maybe $800 in Leanpub's advertising service (probably not worth it). Right now that doesn't come close to making back the time investment, but I think it can get there post-release. I believe there's a lot more potential customers via marketing. I think post-release 10k copies sold is within reach. Where is the book going? 
The main content work is rewrites: many of the chapters have not meaningfully changed since 1.0, so I am going through and rewriting them from scratch. So far four of the ten chapters have been rewritten. My (admittedly ambitious) goal is to rewrite three of them by the end of this month and another three by the end of next. I also want to do final passes on the rewritten chapters; as most of them have a few TODOs left lying around. (Also somehow in starting this newsletter and publishing it I realized that one of the chapters might be better split into two chapters, so there could well-be a tenth technique in v0.11 or v0.12!) After that, I will pass it to a copy editor while I work on improving the layout, making images, and indexing. I want to have something worthy of printing on a dead tree by 1.0. In terms of timelines, I am very roughly estimating something like this: Summer: final big changes and rewrites Early Autumn: graphic design and copy editing Late Autumn: proofing, figuring out printing stuff Winter: final ebook and initial print releases of 1.0. (If you know a service that helps get self-published books "past the finish line", I'd love to hear about it! Preferably something that works for a fee, not part of royalties.) This timeline may be disrupted by official client work, like a new TLA+ contract or a conference invitation. Needless to say, I am incredibly excited to complete this book and share the final version with you all. This is a book I wished for years ago, a book I wrote because nobody else would. It fills a critical gap in software educational material, and someday soon I'll be able to put a copy on my bookshelf. It's exhilarating and terrifying and above all, satisfying. It's also 150 pages vs 50 pages, but admittedly this is partially because I made the book smaller with a larger font. ↩

Implementing UI translation in SumatraPDF, a C++ Windows application

Translating user interface of SumatraPDF SumatraPDF is the best PDF/eBook/Comic Book viewer for Windows. It’s small, fast, full of features, free and open-source. It became popular enough that it made sense to translate the UI for non-English users. Currently we support 72 languages. This article describes how I designed and implemented a translation system in SumatraPDF, a native win32 C++ Windows application. Hard things about translating the UI There are 2 hard things about translating an application: adding code for a translation system (extracting strings to translate, translating strings from English to the user’s language), and translating them into many languages. Extracting strings to translate from source code Currently there are 381 strings in SumatraPDF subject to translation. It’s important that the system requires the least amount of effort when adding new strings to translate. Every string that needs to be translated is marked in a .cpp or .h file with one of two macros: _TRA("Rename") _TRN("Open") I have a script that extracts those strings from source files. Mine is written in Go but it could just as well be Python or JavaScript. It’s a simple regex job. _TR stands for “translation”. _TRA(s) expands into a call to the const char* trans::GetTranslation(const char* str) function, which returns str translated to the current UI language. We auto-detect language at startup based on Windows settings and allow the user to explicitly set the UI language. For English we just return the original string. If a string to be translated is e.g. a part of const char* array[], we can’t use trans::GetTranslation(). For cases like that we have _TRN() which expands to the English string. We have to write code to translate it at some point. Adding new strings is therefore as simple as wrapping them in _TRA() or _TRN() macros. Translating strings into many languages Now that we’ve extracted strings to be translated, we need to translate them into 72 languages. SumatraPDF is a free, open-source program. I don’t have a budget to hire translators. I don’t have a budget, period. The only option was to get help from SumatraPDF users. It was vital to make it very easy for users to send me translations. I didn’t want to ask them, for example, to download some translation software. Design and implementation of AppTranslator web app I couldn’t find really simple software for crowdsourcing translations so I wrote my own: https://github.com/kjk/apptranslator You can see it in action: https://www.apptranslator.org/app/SumatraPDF I designed it to be generic but I don’t think anyone else is using it. AppTranslator is simple. Per https://tools.arslexis.io/wc/: 4k lines of Go server code 451 lines of html code a single dependency: bootstrap CSS framework (the project is old) It’s simple because I don’t want to spend a lot of time writing translation software. It’s just a side project in service of the goal of translating SumatraPDF. Login is exclusively via GitHub. It doesn’t even use a database. Like in Redis, changes are stored as a series of operations in an append-only log. We keep the whole state in memory and re-create it from the log at startup. The main operation is translating a string from English to language X, represented as [kOpTranslation, english string, language, translation, user who provided translation]. When a user provides a translation in the web UI, we send an API call to the server which appends the translation operation to the log. Simple and reliable. Because the code is written in Go, it’s very fast and memory efficient.
Translating strings into many languages

Now that we've extracted the strings to be translated, we need to translate them into 72 languages. SumatraPDF is a free, open-source program. I don't have a budget to hire translators. I don't have a budget, period. The only option was to get help from SumatraPDF users.

It was vital to make it very easy for users to send me translations. I didn't want to ask them, for example, to download some translation software.

Design and implementation of the AppTranslator web app

I couldn't find really simple software for crowd-sourcing translations, so I wrote my own: https://github.com/kjk/apptranslator

You can see it in action: https://www.apptranslator.org/app/SumatraPDF

I designed it to be generic but I don't think anyone else is using it.

AppTranslator is simple. Per https://tools.arslexis.io/wc/ it's:
- 4k lines of Go server code
- 451 lines of HTML code
- a single dependency: the Bootstrap CSS framework (the project is old)

It's simple because I don't want to spend a lot of time writing translation software. It's just a side project in service of the goal of translating SumatraPDF.

Login is exclusively via GitHub. It doesn't even use a database. Like in Redis, changes are stored as a series of operations in an append-only log. We keep the whole state in memory and re-create it from the log at startup.

The main operation is "translate a string from English to language X", represented as [kOpTranslation, english string, language, translation, user who provided the translation]. When a user provides a translation in the web UI, we send an API call to the server, which appends the translation operation to the log. Simple and reliable.

Because the code is written in Go, it's very fast and memory efficient. When running it uses mere megabytes of RAM and can comfortably run on the smallest 256 MB VPS server. I back up the log to S3, so if the server ever fails I can re-install the program on a new server and re-download the translations from S3.

I provide an RSS feed for each language so that people who provide translations can monitor for new strings to be translated.

Sending strings for translation and receiving translations

So I have a web app for collecting translations and a script that extracts strings to be translated from source code. How do they connect?

AppTranslator has an API for submitting the current set of strings to be translated in the simplest possible format: one line per string (I ensure there are no newlines in the string itself by escaping them with \n). The API is password protected because only I can submit the strings.

The server compares the submitted strings with the current set and records the difference in the log. It also sends back a response with the translations, again in the simplest possible format:

AppTranslator: SumatraPDF
651b739d7fa110911f25563c933f42b1d37590f8
:%s annotation. Ctrl+click to edit.
am:%s մեկնաբանություն: Ctrl+քլիք՝ խմբագրելու համար:
ar:ملاحظة %s. اضغط Ctrl للتحرير.
az:Qeyd %s. Düzəliş etmək üçün Ctrl+düyməyə basın.

As you can see, a string to translate is on a line starting with :, followed by translations of that string in the format ${lang}: ${translation}.

An optimization: 651b739d7fa110911f25563c933f42b1d37590f8 is a hash of this response. If I submit this hash with my request and the translations haven't changed on the server, the response is empty.

Implementing the C++ part of the translation system

So now I have a text file with translations downloaded from the server. How do I get a translation in my C++ code?

As with everything in SumatraPDF, I try to do things in a simple and efficient way. The whole Translation.cpp is only 239 lines of code.

The core of the translation system is the const char* trans::GetTranslation(const char* s) function. I embed the translations in the executable, as a data file in resources, in exactly the same format as received from AppTranslator.

If the UI language is English, we do nothing: trans::GetTranslation() returns its argument. When we switch the language, we load the translations from resources and build an index:
- an array of English strings
- an array of corresponding translations

Both arrays use my own StrVec class, optimized for storing an array of strings. To find a translation we scan the first array to find the index of the string and return the translation from the second array at the same index. A linear scan seems like it would be slow but it isn't.
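To make the lookup concrete, here is a simplified sketch of that index, with std::vector<std::string> standing in for StrVec and a couple of hard-coded entries in place of the data parsed from resources. It's an illustration of the approach, not the actual Translation.cpp code:

#include <string>
#include <vector>

// English strings and their translations at matching indexes
static std::vector<std::string> gEnglish;
static std::vector<std::string> gTranslations;

// pretend we just parsed the embedded data for the current UI language
// (in the real code this happens when the user switches language)
void LoadTranslationsForCurrentLanguage() {
    gEnglish      = { "Rename", "Open" };
    gTranslations = { "Umbenennen", "Öffnen" }; // e.g. German
}

namespace trans {
const char* GetTranslation(const char* s) {
    // English UI (or nothing loaded): return the argument unchanged
    if (gEnglish.empty()) {
        return s;
    }
    // linear scan; fast enough for a few hundred strings
    for (size_t i = 0; i < gEnglish.size(); i++) {
        if (gEnglish[i] == s) {
            return gTranslations[i].c_str();
        }
    }
    return s; // no translation yet: fall back to English
}
}

With a few hundred strings the scan is effectively free compared to the cost of updating the UI, which is presumably why nothing fancier like a hash table was ever needed.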
Resizing dialogs

I have a few dialogs defined in the SumatraPDF.rc file. The problem with dialogs is that the positions of UI elements are fixed. A translated string will almost certainly have a different size than the English string, which will mess up the fixed layout. Thankfully someone wrote DialogSizer, which smartly resizes dialogs and solves this problem.

The evolution of a solution

No AppTranslator

My initial implementation was simpler. I didn't yet have AppTranslator, so I stored the strings in a text file in the repository, in the same format as described above. People would download it, make changes using a text editor and send me the file via email, which I would then check in.

It worked for a while but got worse over time: more strings and more languages meant more work for me manually managing e-mail submissions. I decided to automate the process.

Code generation

My first implementation of the C++ side used code generation instead of embedding the text file in resources. My Go script would generate C++ source code files with static const char*[] arrays.

This worked well but I decided to improve it further by making the code use the text file with translations embedded in the app. The main motivation for the change was to open up the possibility of downloading the latest translations from the server, to fix the problem of translations not all being ready when I build the release executable. I haven't done that yet, but it's now easier to implement given that the format of the strings embedded in the exe is the same as the one I can download from AppTranslator.

Only utf-8

SumatraPDF started out using both WCHAR* Unicode strings and char* utf8 strings. For that reason the translation system had to support returning translations in both WCHAR* and char* versions. Over time I refactored the code to use mostly utf8, and at some point I no longer needed to support the WCHAR* version. That made the code even smaller and reduced memory usage.

The experience

I'm happy with how things turned out. AppTranslator proved to be reliable and hassle-free. It has been running for many years now and has collected 35440 string translations from users.

I automated everything, so all I need to do is periodically re-run the script that extracts strings from source code, uploads them to AppTranslator and downloads the latest translations.

One problem is that translations are not always ready in time for a release, so I make a release and then people start translating the strings added since the last release. I've considered downloading the latest translations from the server, in addition to embedding them in the executable at the time of building the app.

Would I do the same today?

While AppTranslator is reliable and doesn't require ongoing work, it would be better to not have to run a server at all. The world has changed since I started SumatraPDF. Namely: people are comfortable using GitHub and you can edit files directly in the GitHub UI. It's not a great experience but it works.

One option would be to generate a translation text file for each language, in this format:

:first untranslated string
:second untranslated string
:first translated string
translation of first string
:second translated string
translation of second string

Untranslated strings are listed at the top to make them easier to find. A link would send a translator directly to edit this file in the GitHub UI. When the translator saves their changes, it creates a PR for me to review and merge.

The roads not taken

"But why did you re-invent everything? You should do X instead." All the Xs that I know about suck.

Using per-language .rc resource files

The traditional way of localizing / translating Windows GUI apps is to store all strings and dialog definitions in an .rc file. Each language gets its own .rc file (or files) and the program picks the right resource based on the language.

This doesn't solve the 2 hard problems:
- having an easy way to add strings for translation
- having an easy way for users to provide translations

XML horror show

There was a dark time when the world was under the iron grip of XML fanaticism. Everything had to be an XML file, even when it was the worst possible solution for the problem. XML doesn't solve the 2 hard problems either, and as a string storage format it's an absolute nightmare for human editing.
GNU gettext

There's a C library, gettext, that uses .po files. This is a much saner solution than the XML horror show: .po files are a relatively simple text format and the code is already written.

Warning: tooting my own horn, but my format is better. It's easier for people to edit and it's easier to write code to parse. gettext also looks like many times more than my 239 lines of code. OK, gettext probably does a bit more than my code, but clearly nothing that I need.

It also doesn't solve the 2 hard problems: I would still have to write code to extract strings from source code and build a way to allow users to translate them easily.
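For comparison, here is roughly what the gettext road looks like from C/C++ code. This is a generic usage sketch (not anything from SumatraPDF); the full workflow also relies on the xgettext and msgfmt tools to extract strings and compile .po files into binary .mo catalogs.

#include <libintl.h>
#include <locale.h>
#include <stdio.h>

// conventional shorthand; xgettext knows to extract strings wrapped in _()
#define _(s) gettext(s)

int main() {
    setlocale(LC_ALL, "");                        // pick up the user's locale
    bindtextdomain("myapp", "/usr/share/locale"); // where the compiled .mo catalogs live
    textdomain("myapp");                          // select this app's catalog
    printf("%s\n", _("Open"));                    // prints "Open" translated, if a catalog exists
    return 0;
}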
