Any code of your own that you haven’t looked at for six or more months might as well have been written by someone else. – Eagleson’s Law

After scouring git history, I found the correct config file, but someone removed it. Their full commit message read:

Remove config. Don't bring it back.

Very. helpful.

But I get it; it’s hard to care about commit messages when you’re making a quick change. Git commit templates can help. Commit templates provide a scaffold for your commit messages, reminding you to answer questions like:

What problem are you solving?
Why is this the solution?
What alternatives did you consider?
Where can I read more?

What is a git commit template?

When you type git commit, git pops open your text editor1. Git can pre-fill your editor with a commit template—a form that reminds you of everything it’s easy to forget when writing a commit.

Creating a commit template is simple.

Create a plaintext file – mine lives at ~/.config/git/message.txt
Tell git to use it:

git config --global \
    commit.template '~/.config/git/message.txt'

My template packs everything I know about writing a commit.

Project-specific templates

Large projects, such as the Linux kernel, git, and MediaWiki, have their own commit guidelines. Git templates can remind you about these per-project requirements if you add a commit template to a project’s .git/config file.

Another way to do this is git’s includeIf configuration setting. includeIf lets you override git config settings when you’re working under directories you define. For example, all my Wikimedia work lives in ~/Projects/Wikimedia and at the bottom of my ~/.config/git/config I have:

[includeIf "gitdir:~/Projects/Wikimedia/**"]
    path = ~/.config/git/config.wikimedia

In config.wikimedia, I point to my Wikimedia-specific commit template (along with other necessary git settings: my user.email, core.hooksPath, and a pushInsteadOf url to push to ssh even when I clone via https). A sketch of such a file appears later in this post.

Forge-specific templates

Personal git commit templates lead to better commits, which make for a better history. The forge-specific pull-request templates are a band-aid, the cheap kind that falls off in the shower. There’s no incentive for GitHub to make git history better: the worse your commit history, the more you rely on GitHub. Still, all the major pull-request-style forges let you foist a pull-request template on your contributors. As a contributor, I dislike filling those out—they add unnecessary friction.

Commit message contents

Your commit template allows you to do the hard thinking upfront. Then, when you make a commit, you simply follow the template. My template asks questions I answer with my commit message:

72ch. wide
-------------------------------------------------------- BODY
#                                                                 |
# - Why should this change be made?                               |
# - What problem are you solving?                                 |
# - Why this solution?                                            |
# - What's wrong with the current code?                           |
# - Are there other ways to do it?                                |
# - How can the reviewer confirm it works?                        |
#                                                                 |
# ---------------------------------------------------------------- /BODY

But other clever folks cooked up conventions you could incorporate:

Conventional commits – how do your commits relate to semantic versioning? This makes it easier for SRE and downstream users.
Problem/Solution format – first pioneered by ZeroMQ2, this format anticipates the questions of future developers and reviewers.
Gitmoji – developed for the GitHub crowd, this format defines an emoji shorthand that makes it easy to spot changes of a particular type.
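Here, as promised, is a rough sketch of what a per-project include file like config.wikimedia might contain. Every value below is a placeholder I made up for illustration, not the actual Wikimedia configuration:

# ~/.config/git/config.wikimedia  (illustrative values only)
[commit]
    template = ~/.config/git/message.wikimedia.txt
[user]
    email = you@example.org
[core]
    hooksPath = ~/.config/git/hooks-wikimedia
[url "ssh://you@gerrit.example.org:29418/"]
    pushInsteadOf = https://gerrit.example.org/r/

With an include like this in place, any repository under ~/Projects/Wikimedia picks up the project template and settings automatically, while everything else keeps using the global message.txt.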
Commit message formatting

How you format text affects how people read it. My template also deals with text formatting rules3:

Subject – 50 characters or less, capitalized, no end punctuation.
Body – Wrap at 72 characters with a blank line separating it from the subject.
Trailers – Standard formats with a blank line separating them from the body.

People will read your commit in different contexts: git log, git shortlog, and git rebase. But git’s pager has no line wrapping by default. I hard wrap at 72 characters because that makes text easier to read in wide terminals.4

Finally, my template addresses trailers, reminding me about standard trailers supported in the projects I’m working on. Git can interpret trailers, which can be useful later. For example, if I wanted a tab-separated list of commits and their related tasks I could find that with git log:

$ TAB=%x09
$ BUG_TRAILER='%(trailers:key=Bug,valueonly=true,separator=%x2C )'
$ SHORT_HASH=%h
$ SUBJ=%s
$ FORMAT="${SHORT_HASH}${TAB}${BUG_TRAILER}${TAB}${SUBJ}"
$ git log --topo-order --no-merges \
    --format="$FORMAT"
d2b09deb12f T359762 Rewrite Kurdish (ku) Latin to Arabic converter
28123a6a262 T332865 tests: Remove non-static fallback in HookRunnerTestBase
4e919a307a4 T328919 tests: Remove unused argument from data provider in PageUpdaterTest
bedd0f685f9 objectcache: Improve `RESTBagOStuff::handleError()`
2182a0c4490 T393219 tests: Remove two data provider in RestStructureTest

Git commit templates free your brain from remembering what you should write, allowing you to focus on the story you should tell. Your future self will thank you for the effort.

Starting with core.editor in your git config, $VISUAL or $EDITOR in your shell, finally falling back to vi.↩︎
I think…↩︎
All cribbed from Tim Pope↩︎
Another story I’ve heard: a standard terminal allows 80 characters per line. git log indents commit messages with 4 spaces. A 72-character-per-line commit centers text on an 80-character-per-line terminal. To me, readability in modern terminals is a better reason to wrap than kowtowing to antiquated terminals.↩︎
[The] Linux kernel uses GPLv2, and if you distribute GPLv2 code, you have to provide a copy of the source (and modifications) once someone asks for it. And now I’m asking nicely for you to do so 🙂 – Joga, bbs.onyx-international.com

Boox in split screen, typewriter mode

In January, I bought a Boox Go 10.3—a 10.3-inch, 300-ppi, e-ink Android tablet. After two months, I use the Boox daily—it’s replaced my planner, notebook, countless PDF print-offs, and the good parts of my phone.

But Boox’s parent company, Onyx, is sketchy. I’m conflicted. The Boox Go is a beautiful, capable tablet that I use every day, but I recommend avoiding it as long as Onyx continues to disregard the rights of its users.

How I’m using my Boox

My e-ink floor desk

Each morning, I plop down in front of my MagicHold laptop stand and journal on my Boox with Obsidian. I use Syncthing to back up my planner and sync my Zotero library between my Boox and laptop. In the evening, I review my PDF planner and plot for tomorrow.

I use these apps:

Obsidian – a markdown editor that syncs between all my devices with no fuss for $8/mo.
Syncthing – I love Syncthing—it’s an encrypted, continuous file sync-er without a centralized server.
Meditation apps1 – Guided meditation away from the blue light glow of my phone or computer is better.

Before buying the Boox, I considered a reMarkable. The reMarkable Paper Pro has a beautiful color screen with a frontlight, a nice pen, and a “type folio,” plus it’s certified by the Calm Tech Institute. But the reMarkable is a distraction-free e-ink tablet. Meanwhile, I need distraction-lite.

What I like

Calm(ish) technology – The Boox is an intentional device. Browsing the internet, reading emails, and watching videos is hard, but that’s good.
Apps – Google Play works out of the box. I can install F-Droid and change my launcher without difficulty.
Split screen – The built-in launcher has a split screen feature. I use it to open a PDF side-by-side with a notes doc.
Reading – The screen is a 300ppi Carta 1200, making text crisp and clear.

What I dislike

I filmed myself typing at 240fps; each frame is 4.17ms. Boox’s typing latency is between 150ms and 275ms at the fastest refresh rate inside Obsidian.

Typing – Typing latency is noticeable. At Boox’s highest refresh rate, after hitting a key, text takes between 150ms and 275ms to appear. I can still type, though it’s distracting at times.

The horror of the default pen

Accessories
Pen – The default pen looks like a child’s whiteboard marker and feels cheap. I replaced it with the Kindle Scribe Premium pen, and the writing experience is vastly improved.
Cover – It’s impossible to find a nice cover. I’m using a $15 cover that I’m encasing in stickers.

Tool switching – Swapping between apps is slow and clunky. I blame Android and the current limitations of e-ink more than Boox.
No frontlight – The Boox’s lack of frontlight prevents me from reading more with it. I knew this when I bought my Boox, but devices with frontlights seem to make other compromises.

Onyx

The Chinese company behind Boox, Onyx International, Inc., runs the servers where Boox shuttles tracking information. I block this traffic with Pi-Hole2.

pihole-ing whatever telemetry Boox collects

I inspected this traffic via a MITM proxy—most traffic was benign, though I never opted into sending any telemetry (nor am I logged in to a Boox account). But it’s also an Android device, so it’s feeding telemetry into Google’s gaping maw, too.
Worse, Onyx is flouting the terms of the GNU General Public License, declining to release Linux kernel modifications to users. This is anathema to me—GPL violations are tantamount to theft. Onyx’s disregard for user rights makes me regret buying the Boox.

Verdict

I’ll continue to use the Boox and feel bad about it. I hope my digging in this post will help the next person. Unfortunately, the e-ink tablet market is too niche to support the kind of solarpunk future I’d always imagined. But there’s an opportunity for an open, Linux-based tablet to dominate e-ink. Linux is playing catch-up on phones with PostmarketOS. Meanwhile, the best that e-ink tablets have to offer is an old, unupdateable version of Android, like the OS on the Boox. In the future, I’d love to pay a license- and privacy-respecting company for beautiful, calm technology and recommend their product to everyone. But today is not the future.

I go back and forth between “Waking Up” and “Calm”↩︎
Using github.com/JordanEJ/Onyx-Boox-Blocklist↩︎
Spending for October, generated by piping hledger → R

Over the past six months, I’ve tracked my money with hledger—a plain text double-entry accounting system written in Haskell. It’s been surprisingly painless.

My previous attempts to pick up real accounting tools floundered. Hosted tools are privacy nightmares, and my stint with GnuCash didn’t last. But after stumbling on Dmitry Astapov’s “Full-fledged hledger” wiki1, it clicked—eventually consistent accounting. Instead of modeling your money all at once, take it one hacking session at a time.

It should be easy to work towards eventual consistency. […] I should be able to [add financial records] bit by little bit, leaving things half-done, and picking them up later with little (mental) effort. – Dmitry Astapov, Full-Fledged Hledger

Principles of my system

I’ve cobbled together a system based on these principles:

Avoid manual entry – Avoid typing in each transaction. Instead, rely on CSVs from the bank.
CSVs as truth – CSVs are the only things that matter. Everything else can be blown away and rebuilt anytime.
Embrace version control – Keep everything under version control in Git for easy comparison and safe experimentation.

Learn hledger in five minutes

hledger concepts are heady, but its use is simple. I divide the core concepts into two categories:

Stuff hledger cares about:

Transactions – how hledger moves money between accounts.
Journal files – files full of transactions.

Stuff I care about:

Rules files – how I set up accounts, import CSVs, and move money between accounts.
Reports – help me see where my money is going and if I messed up my rules.

Transactions move money between accounts:

2024-01-01 Payday
    income:work         $-100.00
    assets:checking      $100.00

This transaction shows that on Jan 1, 2024, money moved from income:work into assets:checking—Payday.

The sum of each transaction should be $0. Money comes from somewhere, and the same amount goes somewhere else—double-entry accounting. This is powerful technology—it makes mistakes impossible to ignore.

Journal files are text files containing one or more transactions:

2024-01-01 Payday
    income:work         $-100.00
    assets:checking      $100.00

2024-01-02 QUANSHENG UVK5
    assets:checking      $-29.34
    expenses:fun:radio    $29.34

Rules files transform CSVs into journal files via regex matching.
Here’s a CSV from my bank:

Transaction Date,Description,Category,Type,Amount,Memo
09/01/2024,DEPOSIT Paycheck,Payment,Payment,1000.00,
09/04/2024,PizzaPals Pizza,Food & Drink,Sale,-42.31,
09/03/2024,Amazon.com*XXXXXXXXY,Shopping,Sale,-35.56,
09/03/2024,OBSIDIAN.MD,Shopping,Sale,-10.00,
09/02/2024,Amazon web services,Personal,Sale,-17.89,

And here’s a checking.rules to transform that CSV into a journal file so I can use it with hledger:

# checking.rules
# --------------
# Map CSV fields → hledger fields[0]
fields date,description,category,type,amount,memo,_

# `account1`: the account for the whole CSV.[1]
account1 assets:checking
account2 expenses:unknown

skip 1
date-format %m/%d/%Y
currency $

if %type Payment
    account2 income:unknown

if %category Food & Drink
    account2 expenses:food:dining

# [0]: <https://hledger.org/hledger.html#field-names>
# [1]: <https://hledger.org/hledger.html#account-field>

With these two files (checking.rules and 2024-09_checking.csv), I can make the CSV into a journal:

$ > 2024-09_checking.journal \
    hledger print \
    --rules-file checking.rules \
    -f 2024-09_checking.csv
$ head 2024-09_checking.journal
2024-09-01 DEPOSIT Paycheck
    assets:checking        $1000.00
    income:unknown        $-1000.00

2024-09-02 Amazon web services
    assets:checking         $-17.89
    expenses:unknown         $17.89

Reports are interesting ways to view transactions between accounts. There are registers, balance sheets, and income statements:

$ hledger incomestatement \
    --depth=2 \
    --file=2024-09_bank.journal
Revenues:
    $1000.00  income:unknown
-----------------------
    $1000.00
Expenses:
      $42.31  expenses:food
      $63.45  expenses:unknown
-----------------------
     $105.76
-----------------------
Net: $894.24

At the beginning of September, I spent $105.76 and made $1000, leaving me with $894.24. But a good chunk is going to the default expense account, expenses:unknown. I can use the hledger aregister to see what those transactions are:

$ hledger areg expenses:unknown \
    --file=2024-09_checking.journal \
    -O csv | \
    csvcut -c description,change | \
    csvlook
| description          | change |
| -------------------- | ------ |
| OBSIDIAN.MD          |  10.00 |
| Amazon web services  |  17.89 |
| Amazon.com*XXXXXXXXY |  35.56 |

Then, I can add some more rules to my checking.rules:

if OBSIDIAN.MD
    account2 expenses:personal:subscriptions

if Amazon web services
    account2 expenses:personal:web:hosting

if Amazon.com
    account2 expenses:personal:shopping:amazon

Now, I can reprocess my data to get a better picture of my spending:

$ > 2024-09_bank.journal \
    hledger print \
    --rules-file bank.rules \
    -f 2024-09_bank.csv
$ hledger bal expenses \
    --depth=3 \
    --percent \
    -f 2024-09_checking2.journal
   30.0 %  expenses:food:dining
   33.6 %  expenses:personal:shopping
    9.5 %  expenses:personal:subscriptions
   16.9 %  expenses:personal:web
--------------------
  100.0 %

For the Amazon.com purchase, I lumped it into the expenses:personal:shopping account. But I could dig deeper—download my order history from Amazon and categorize that spending. This is the power of working bit-by-bit—the data guides you to the next, deeper rabbit hole.

Goals and non-goals

Why am I doing this? For years, I maintained a monthly spreadsheet of account balances. I had a balance sheet. But I still had questions.

Spending over six months, generated by piping hledger → gnuplot

Before diving into accounting software, these were my goals:

Granular understanding of my spending – The big one. This is where my monthly spreadsheet fell short. I knew I had money in the bank—I kept my monthly balance sheet.
I budgeted up-front the % of my income I was saving. But I had no idea where my other money was going.
Data privacy – I’m unwilling to hand the keys to my accounts to YNAB or Mint.
Increased value over time – The more time I put in, the more value I want to get out—this is what you get from professional tools built for nerds. While I wished for low-effort setup, I wanted the tool to be able to grow to more uses over time.

Non-goals—these are the parts I never cared about:

Investment tracking – For now, I left this out of scope. Between monthly balances in my spreadsheet and online investing tools’ ability to drill down, I was fine.2
Taxes – Folks smarter than me help me understand my yearly taxes.3
Shared system – I may want to share reports from this system, but no one will have to work in it except me.
Cash – Cash transactions are unimportant to me. I withdraw money from the ATM sometimes. It evaporates.

hledger can track all these things. My setup is flexible enough to support them someday. But that’s unimportant to me right now.

Monthly maintenance

I spend about an hour a month checking in on my money, which frees me to spend time making fancy charts—an activity I perversely enjoy.

Income vs. Expense, generated by piping hledger → gnuplot

Here’s my setup:

$ tree ~/Documents/ledger
.
├── export
│   ├── 2024-balance-sheet.txt
│   └── 2024-income-statement.txt
├── import
│   ├── in
│   │   ├── amazon
│   │   │   └── order-history.csv
│   │   ├── credit
│   │   │   ├── 2024-01-01_2024-02-01.csv
│   │   │   ├── ...
│   │   │   └── 2024-10-01_2024-11-01.csv
│   │   └── debit
│   │       ├── 2024-01-01_2024-02-01.csv
│   │       ├── ...
│   │       └── 2024-10-01_2024-11-01.csv
│   └── journal
│       ├── amazon
│       │   └── order-history.journal
│       ├── credit
│       │   ├── 2024-01-01_2024-02-01.journal
│       │   ├── ...
│       │   └── 2024-10-01_2024-11-01.journal
│       └── debit
│           ├── 2024-01-01_2024-02-01.journal
│           ├── ...
│           └── 2024-10-01_2024-11-01.journal
├── rules
│   ├── amazon
│   │   └── journal.rules
│   ├── credit
│   │   └── journal.rules
│   ├── debit
│   │   └── journal.rules
│   └── common.rules
├── 2024.journal
├── Makefile
└── README

Process:

Import – download a CSV for the month from each account and plop it into import/in/<account>/<dates>.csv
Make – run make
Squint – Look at git diff; if it looks good, git add . && git commit -m "💸" otherwise review hledger areg to see details.

The Makefile generates everything under import/journal:

journal files from my CSVs using their corresponding rules.
reports in the export folder

I include all the journal files in the 2024.journal with the line:

include ./import/journal/*/*.journal

Here’s the Makefile:

SHELL := /bin/bash
RAW_CSV = $(wildcard import/in/**/*.csv)
JOURNALS = $(foreach file,$(RAW_CSV),$(subst /in/,/journal/,$(patsubst %.csv,%.journal,$(file))))

.PHONY: all
all: $(JOURNALS)
	hledger is -f 2024.journal > export/2024-income-statement.txt
	hledger bs -f 2024.journal > export/2024-balance-sheet.txt

.PHONY: clean
clean:
	rm -rf import/journal/**/*.journal

import/journal/%.journal: import/in/%.csv
	@echo "Processing csv $< to $@"
	@echo "---"
	@mkdir -p $(shell dirname $@)
	@hledger print --rules-file rules/$(shell basename $$(dirname $<))/journal.rules -f "$<" > "$@"

If I find anything amiss (e.g., if my balances are different than what the bank tells me), I look at hledger areg. I may tweak my rules or my CSVs and then I run make clean && make and try again. Simple, plain text accounting made simple. And if I ever want to dig deeper, hledger’s docs have more to teach. But for now, the balance of effort vs. reward is perfect.
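One optional refinement to the Squint step: hledger supports balance assertions, so a transaction like the sketch below (the date and amount are invented) makes hledger error out whenever the imported data disagrees with the bank statement, instead of relying on me to eyeball the diff:

2024-10-31 * Bank statement check    ; hypothetical closing balance from the bank
    assets:checking           $0 = $1234.56

hledger checks = assertions every time it reads the journal, so a drifting balance surfaces as a loud error during make rather than a subtle discrepancy in a report.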
while reading a blog post from Jonathan Dowland↩︎
Note, this is covered by full-fledged hledger – Investments↩︎
Also covered in full-fledged hledger – Tax returns↩︎
Luckily, I speak Leet. – Amita Ramanujan, Numb3rs, CBS’s IRC Drama

There’s an episode of the CBS prime-time drama Numb3rs that plumbs the depths of Dr. Joel Fleischman’s1 knowledge of IRC. In one scene, Fleischman wonders, “What’s ‘leet’”?

“Leet” is writing that replaces letters with numbers, e.g., “Numb3rs,” where 3 stands in for e. In short, leet is like the heavy-metal “S” you drew in middle school:

Sweeeeet.
  / \
 / | \
 | | |
  \ \
 | | |
  \ | /
   \ /

ASCII art version of your misspent youth.

Following years of keen observation, I’ve noticed Git commit hashes are also letters and numbers. Git commit hashes are, as Fleischman might say, prime targets for l33tification.

What can I spell with a git commit?

(DenITDao via orlybooks)

With hexadecimal we can spell any word containing the set of letters {A, B, C, D, E, F}—DEADBEEF (a classic) or ABBABABE (for Mamma Mia aficionados). This is because hexadecimal is a base-16 numbering system—a single “digit” represents 16 numbers:

Base-10: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Base-16: 0 1 2 3 4 5 6 7 8 9  A  B  C  D  E  F

Leet expands our palette of words—using 0, 1, and 5 to represent O, I, and S, respectively.

I created a script that scours a few word lists for valid words and phrases (a sketch of the idea appears at the end of this post). With it, I found masterpieces like DADB0D (dad bod), BADA55 (bad ass), and 5ADBAB1E5 (sad babies).

Manipulating commit hashes for fun and no profit

Git commit hashes are no mystery. A commit hash is the SHA-1 of a commit object. And a commit object is the commit message with some metadata.

$ mkdir /tmp/BADA55-git && cd /tmp/BADA55-git
$ git init
Initialized empty Git repository in /tmp/BADA55-git/.git/
$ echo '# BADA55 git repo' > README.md && git add README.md && git commit -m 'Initial commit'
[main (root-commit) 68ec0dd] Initial commit
 1 file changed, 1 insertion(+)
 create mode 100644 README.md
$ git log --oneline
68ec0dd (HEAD -> main) Initial commit

Let’s confirm we can recreate the commit hash:

$ git cat-file -p 68ec0dd > commit-msg
$ sha1sum <(cat \
    <(printf "commit ") \
    <(wc -c < commit-msg | tr -d '\n') \
    <(printf '%b' '\0') commit-msg)
68ec0dd6dead532f18082b72beeb73bd828ee8fc  /dev/fd/63

Our repo’s first commit has the hash 68ec0dd. My goal is:

Make 68ec0dd be BADA55.
Keep the commit message the same, visibly at least.

But I’ll need to change the commit to change the hash. To keep those changes invisible in the output of git log, I’ll add a \t and see what happens to the hash.

$ truncate -s -1 commit-msg   # remove final newline
$ printf '\t\n' >> commit-msg # Add a tab
$ # Check the new SHA to see if it's BADA55
$ sha1sum <(cat \
    <(printf "commit ") \
    <(wc -c < commit-msg | tr -d '\n') \
    <(printf '%b' '\0') commit-msg)
27b22ba5e1c837a34329891c15408208a944aa24  /dev/fd/63

Success! I changed the SHA-1. Now to do this until we get to BADA55. Fortunately, user not-an-aardvark created a tool for that—lucky-commit that manipulates a commit message, adding a combination of \t and [:space:] characters until you hit a desired SHA-1.

Written in Rust, lucky-commit computes all 256 unique 8-bit strings composed of only tabs and spaces. And then pads out commits up to 48-bits with those strings, using worker threads to quickly compute the SHA-12 of each commit. It’s pretty fast:

$ time lucky_commit BADA555

real	0m0.091s
user	0m0.653s
sys	0m0.007s
$ git log --oneline
bada555 (HEAD -> main) Initial commit
$ xxd -c1 <(git cat-file -p 68ec0dd) | grep -cPo ': (20|09)'
12
$ xxd -c1 <(git cat-file -p HEAD) | grep -cPo ': (20|09)'
111

Now we have more than an initial commit.
We have a BADA555 initial commit. All that’s left to do is to make ALL our commits BADA55 by abusing git hooks.

$ cat > .git/hooks/post-commit && chmod +x .git/hooks/post-commit
#!/usr/bin/env bash
echo 'L337-ifying!'
lucky_commit BADA55
$ echo 'A repo that is very l33t.' >> README.md && git commit -a -m 'l33t'
L337-ifying!
[main 0e00cb2] l33t
 1 file changed, 1 insertion(+)
$ git log --oneline
bada552 (HEAD -> main) l33t
bada555 Initial commit

And now I have a git repo almost as cool as the sweet “S” I drew in middle school.

This is a Northern Exposure spin off, right? I’ve only seen 1:48 of the show…↩︎
or SHA-256 for repos that have made the jump to a more secure hash function↩︎
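As an aside, the word-list search mentioned earlier needs nothing fancy. A rough sketch for single words (the dictionary path varies by system, and it skips multi-word phrases like “dad bod”):

$ # Keep only words spelled with hex letters plus o, i, s; then leetify them
$ grep -xiE '[abcdefois]{5,}' /usr/share/dict/words \
    | tr 'a-z' 'A-Z' \
    | tr 'OIS' '015'

Filtering to the hex letters plus o, i, and s, upper-casing, and then swapping in 0, 1, and 5 prints every leetable word in the list; what you actually find depends on the word list you feed it.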
A brief and biased history. Oh yeah, there’s pull requests now – GitHub blog, Sat, 23 Feb 2008 When GitHub launched, it had no code review. Three years after launch, in 2011, GitHub user rtomayko became the first person to make a real code comment, which read, in full: “+1”. Before that, GitHub lacked any way to comment on code directly. Instead, pull requests were a combination of two simple features: Cross repository compare view – a feature they’d debuted in 2010—git diff in a web page. A comments section – a feature most blogs had in the 90s. There was no way to thread comments, and the comments were on a different page than the diff. GitHub pull requests circa 2010. This is from the official documentation on GitHub. Earlier still, when the pull request debuted, GitHub claimed only that pull requests were “a way to poke someone about code”—a way to direct message maintainers, but one that lacked any web view of the code whatsoever. For developers, it worked like this: Make a fork. Click “pull request”. Write a message in a text form. Send the message to someone1 with a link to your fork. Wait for them to reply. In effect, pull requests were a limited way to send emails to other GitHub users. Ten years after this humble beginning—seven years after the first code comment—when Microsoft acquired GitHub for $7.5 Billion, this cobbled-together system known as “GitHub flow” had become the default way to collaborate on code via Git. And I hate it. Pull requests were never designed. They emerged. But not from careful consideration of the needs of developers or maintainers. Pull requests work like they do because they were easy to build. In 2008, GitHub’s developers could have opted to use git format-patch instead of teaching the world to juggle branches. Or they might have chosen to generate pull requests using the git request-pull command that’s existed in Git since 2005 and is still used by the Linux kernel maintainers today2. Instead, they shrugged into GitHub flow, and that flow taught the world to use Git. And commit histories have sucked ever since. For some reason, github has attracted people who have zero taste, don’t care about commit logs, and can’t be bothered. – Linus Torvalds, 2012 “Someone” was a person chosen by you from a checklist of the people who had also forked this repository at some point.↩︎ Though to make small, contained changes you’d use git format-patch and git am.↩︎
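For readers who have never seen it, git request-pull does not touch a forge at all; it just prints a message you can mail to a maintainer. A quick sketch (the tag, URL, and branch names here are made up):

$ git request-pull v1.0 https://git.example.org/widget.git main > pull.txt

pull.txt now holds a short summary of the changes since v1.0, a diffstat, and the repository URL and branch to pull from, ready to paste into an email, which is still how the Linux kernel does it.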
More in programming
I realize that for all I've talked about Logic for Programmers in this newsletter, I never once explained basic logical quantifiers. They're both simple and incredibly useful, so let's do that this week!

Sets and quantifiers

A set is a collection of unordered, unique elements. {1, 2, 3, …} is a set, as are "every programming language", "every programming language's Wikipedia page", and "every function ever defined in any programming language's standard library". You can put whatever you want in a set, with some very specific limitations to avoid certain paradoxes.2

Once we have a set, we can ask "is something true for all elements of the set" and "is something true for at least one element of the set?" IE, is it true that every programming language has a set collection type in the core language? We would write it like this:

# all of them
all l in ProgrammingLanguages: HasSetType(l)

# at least one
some l in ProgrammingLanguages: HasSetType(l)

This is the notation I use in the book because it's easy to read, type, and search for. Mathematicians historically had a few different formats; the one I grew up with was ∀x ∈ set: P(x) to mean all x in set, and ∃ to mean some. I use these when writing for just myself, but find them confusing to programmers when communicating. "All" and "some" are respectively referred to as "universal" and "existential" quantifiers.

Some cool properties

We can simplify expressions with quantifiers, in the same way that we can simplify !(x && y) to !x || !y.

First of all, quantifiers are commutative with themselves. some x: some y: P(x,y) is the same as some y: some x: P(x, y). For this reason we can write some x, y: P(x,y) as shorthand. We can even do this when quantifying over different sets, writing some x, x' in X, y in Y instead of some x, x' in X: some y in Y.

We can not do this with "alternating quantifiers": all p in Person: some m in Person: Mother(m, p) says that every person has a mother. some m in Person: all p in Person: Mother(m, p) says that someone is every person's mother.

Second, existentials distribute over || while universals distribute over &&. "There is some url which returns a 403 or 404" is the same as "there is some url which returns a 403 or some url that returns a 404", and "all PRs pass the linter and the test suites" is the same as "all PRs pass the linter and all PRs pass the test suites".

Finally, some and all are duals: some x: P(x) == !(all x: !P(x)), and vice-versa. Intuitively: if some file is malicious, it's not true that all files are benign.

All these rules together mean we can manipulate quantifiers almost as easily as we can manipulate regular booleans, putting them in whatever form is easiest to use in programming. Speaking of which, how do we use this in programming?

How we use this in programming

First of all, people clearly have a need for directly using quantifiers in code. If we have something of the form:

for x in list:
    if P(x): return true
return false

That's just some x in list: P(x). And this is a prevalent pattern, as you can see by using GitHub code search. It finds over 500k examples of this pattern in Python alone! That can be simplified via using the language's built-in quantifiers: the Python would be any(P(x) for x in list). (Note this is not quantifying over sets but iterables. But the idea translates cleanly enough.)

More generally, quantifiers are a key way we express higher-level properties of software. What does it mean for a list to be sorted in ascending order?
That all i, j in 0..<len(l): if i < j then l[i] <= l[j]. When should a ratchet test fail? When some f in functions - exceptions: Uses(f, bad_function). Should the image classifier work upside down? all i in images: classify(i) == classify(rotate(i, 180)). These are the properties we verify with tests and types and MISU and whatnot;1 it helps to be able to make them explicit!

One cool use case that'll be in the book's next version: database invariants are universal statements over the set of all records, like all a in accounts: a.balance > 0. That's enforceable with a CHECK constraint. But what about something like all i, i' in intervals: NoOverlap(i, i')? That isn't covered by CHECK, since it spans two rows. Quantifier duality to the rescue! The invariant is equivalent to !(some i, i' in intervals: Overlap(i, i')), so is preserved if the query SELECT COUNT(*) FROM intervals CROSS JOIN intervals … returns 0 rows. This means we can test it via a database trigger.3

There are a lot more use cases for quantifiers, but this is enough to introduce the ideas! Next week's the one year anniversary of the book entering early access, so I'll be writing a bit about that experience and how the book changed. It's crazy how crude v0.1 was compared to the current version.

MISU ("make illegal states unrepresentable") means using data representations that rule out invalid values. For example, if you have a location -> Optional(item) lookup and want to make sure that each item is in exactly one location, consider instead changing the map to item -> location. This is a means of implementing the property all i in item, l, l' in location: if ItemIn(i, l) && l != l' then !ItemIn(i, l'). ↩

Specifically, a set can't be an element of itself, which rules out constructing things like "the set of all sets" or "the set of sets that don't contain themselves". ↩

Though note that when you're inserting or updating an interval, you already have that row's fields in the trigger's NEW keyword. So you can just query !(some i in intervals: Overlap(new, i)), which is more efficient. ↩
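To make the trigger idea above concrete, here is a rough sketch using the sqlite3 shell. The table layout and the half-open definition of Overlap are my own assumptions, not something from the book:

$ sqlite3 intervals.db <<'SQL'
CREATE TABLE intervals (lo INTEGER NOT NULL, hi INTEGER NOT NULL);
-- Reject an insert when  some i in intervals: Overlap(NEW, i)
CREATE TRIGGER no_overlap BEFORE INSERT ON intervals
WHEN EXISTS (SELECT 1 FROM intervals i WHERE NEW.lo < i.hi AND i.lo < NEW.hi)
BEGIN
    SELECT RAISE(ABORT, 'overlapping interval');
END;
INSERT INTO intervals VALUES (1, 5);
INSERT INTO intervals VALUES (4, 8);  -- overlaps (1, 5), so the trigger aborts it
SQL

The second insert fails with the "overlapping interval" error, which is exactly the !(some i in intervals: Overlap(new, i)) form of the invariant from the footnote above.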
After shipping my work transforming HTML with Netlify’s edge functions I realized I have a little bug: the order of the icons specified in the URL doesn’t match the order in which they are displayed on screen.

Why’s this happening?

I have a bunch of links in my HTML document, like this:

<icon-list>
  <a href="/1/">…</a>
  <a href="/2/">…</a>
  <a href="/3/">…</a>
  <!-- 2000+ more -->
</icon-list>

I use html-rewriter in my edge function to strip out the HTML for icons not specified in the URL. So for a request to:

/lookup?id=1&id=2

My HTML will be transformed like so:

<icon-list>
  <!-- Parser keeps these two -->
  <a href="/1/">…</a>
  <a href="/2/">…</a>

  <!-- But removes this one -->
  <a href="/3/">…</a>
</icon-list>

Resulting in less HTML over the wire to the client.

But what about the order of the IDs in the URL? What if the request is to:

/lookup?id=2&id=1

Instead of:

/lookup?id=1&id=2

In the source HTML document containing all the icons, they’re marked up in reverse chronological order. But the request for this page may specify a different order for icons in the URL. So how do I rewrite the HTML to match the URL’s ordering?

The problem is that html-rewriter doesn’t give me a fully-parsed DOM to work with. I can’t do things like “move this node to the top” or “move this node to position x”. With html-rewriter, you only “see” each element as it streams past. Once it passes by, your chance at modifying it is gone. (It seems that’s just the way these edge function tools are designed to work, keeps them lean and performant and I can’t shoot myself in the foot).

So how do I change the icon’s display order to match what’s in the URL if I can’t modify the order of the elements in the HTML?

CSS to the rescue!

Because my markup is just a bunch of <a> tags inside a custom element and I’m using CSS grid for layout, I can use the order property in CSS! All the IDs are in the URL, and their position as parameters has meaning, so I assign their ordering to each element as it passes by html-rewriter. Here’s some pseudo code:

// Get all the IDs in the URL
const ids = url.searchParams.getAll("id");

// Select all the icons in the HTML
rewriter.on("icon-list a", {
  element: (element) => {
    // Get the ID
    const id = element.getAttribute('id');

    // If it's in our list, set its order
    // position from the URL
    if (ids.includes(id)) {
      const order = ids.indexOf(id);
      element.setAttribute(
        "style",
        `order: ${order}`
      );
    // Otherwise, remove it
    } else {
      element.remove();
    }
  },
});

Boom! I didn’t have to change the order in the source HTML document, but I can still get the display ordering to match what’s in the URL. I love shifty little workarounds like this!
In the previous article, we peeked at the reset circuit of ESP-Prog with an oscilloscope, and reproduced it with basic components. We observed that it did not behave quite as expected. In this article, we’ll look into the missing pieces.

An incomplete circuit

For a hint, we’ll first look a bit more closely at the … Continue reading The missing part of Espressif’s reset circuit →
Here are a few tangentially-related ideas vaguely near the theme of comparison operators.

comparison style
clamp style
clamp is median
clamp in range
range style
style clash?

comparison style

Some languages such as BCPL, Icon, Python have chained comparison operators, like

if min <= x <= max: ...

In languages without chained comparison, I like to write comparisons as if they were chained, like,

if min <= x && x <= max {
    // ...
}

A rule of thumb is to prefer less than (or equal) operators and avoid greater than. In a sequence of comparisons, order values from (expected) least to greatest.

clamp style

The clamp() function ensures a value is between some min and max,

def clamp(min, x, max):
    if x < min: return min
    if max < x: return max
    return x

I like to order its arguments matching the expected order of the values, following my rule of thumb for comparisons. (I used that flavour of clamp() in my article about GCRA.) But I seem to be unusual in this preference, based on a few examples I have seen recently.

clamp is median

Last month, Fabian Giesen pointed out a way to resolve this difference of opinion: A function that returns the median of three values is equivalent to a clamp() function that doesn’t care about the order of its arguments. This version is written so that it returns NaN if any of its arguments is NaN. (When an argument is NaN, both of its comparisons will be false.)

fn med3(a: f64, b: f64, c: f64) -> f64 {
    match (a <= b, b <= c, c <= a) {
        (false, false, false) => f64::NAN,
        (false, false, true)  => b, // a > b > c
        (false, true,  false) => a, // c > a > b
        (false, true,  true)  => c, // b <= c <= a
        (true,  false, false) => c, // b > c > a
        (true,  false, true)  => a, // c <= a <= b
        (true,  true,  false) => b, // a <= b <= c
        (true,  true,  true)  => b, // a == b == c
    }
}

When two of its arguments are constant, med3() should compile to the same code as a simple clamp(); but med3()’s misuse-resistance comes at a small cost when the arguments are not known at compile time.

clamp in range

If your language has proper range types, there is a nicer way to make clamp() resistant to misuse:

fn clamp(x: f64, r: RangeInclusive<f64>) -> f64 {
    let (&min, &max) = (r.start(), r.end());
    if x < min { return min }
    if max < x { return max }
    return x;
}

let x = clamp(x, MIN..=MAX);

range style

For a long time I have been fond of the idea of a simple counting for loop that matches the syntax of chained comparisons, like

for min <= x <= max: ...

By itself this is silly: too cute and too ad-hoc. I’m also dissatisfied with the range or slice syntax in basically every programming language I’ve seen. I thought it might be nice if the cute comparison and iteration syntaxes were aspects of a more generally useful range syntax, but I couldn’t make it work. Until recently when I realised I could make use of prefix or mixfix syntax, instead of confining myself to infix. So now my fantasy pet range syntax looks like

>= min < max   // half-open
>= min <= max  // inclusive

And you might use it in a pattern match

if x is >= min < max {
    // ...
}

Or as an iterator

for x in >= min < max {
    // ...
}

Or to take a slice

xs[>= min < max]

style clash?

It’s kind of ironic that these range examples don’t follow the left-to-right, lesser-to-greater rule of thumb that this post started off with. (x is not lexically between min and max!) But that rule of thumb is really intended for languages such as C that don’t have ranges. Careful stylistic conventions can help to avoid mistakes in nontrivial conditional expressions.
It’s much better if language and library features reduce the need for nontrivial conditions and catch mistakes automatically.
SumatraPDF is a medium size (120k+ loc, not counting dependencies) Windows GUI (win32) C++ code base started by me and written by mostly 2 people.

The goals of SumatraPDF are to be:

fast
small
packed with features and yet with thoughtfully minimal UI

It’s not just a matter of pride in craftsmanship of writing code. I believe being fast and small are a big reason for SumatraPDF’s success. People notice when an app starts in an instant because that’s sadly not the norm in modern software.

The engineering goals of SumatraPDF are:

reliable (no crashes)
fast compilation to enable fast iteration

SumatraPDF has been successful achieving those objectives so I’m writing up my C++ implementation decisions.

I know those decisions are controversial. Maybe not Terry Davis level of controversial but still. You probably won’t adopt them. Even if you wanted to, you probably couldn’t. There’s no way code like this would pass Google review. Not because it’s bad but because it’s different. Diverging from mainstream this much is only feasible if you have total control: it’s your company or your own open-source project.

If my ideas were just like everyone else’s ideas, there would be little point in writing about them, would there?

Use UTF8 strings internally

My app only runs on Windows and a string native to Windows is WCHAR* where each character consumes 2 bytes. Despite that I mostly use char* assumed to be utf8-encoded. I only decided on that after lots of code was written so it was a refactoring odyssey that is still ongoing. My initial impetus was to be able to compile non-GUI parts under Linux and Mac. I abandoned that goal but I think that’s a good idea anyway. WCHAR* strings are 2x larger than char*. That’s more memory used which also makes the app slower. Binaries are bigger if string constants are WCHAR*. The implementation rule is simple: I only convert to WCHAR* when calling Windows API. When Windows API returns WCHAR* I convert it to utf-8.

No exceptions

Do you want to hear a joke? “Zero-cost exceptions”. Throwing and catching exceptions generate bloated code. Exceptions are a non-local control flow that makes it hard to reason about a program. Every memory allocation becomes a potential leak. But RAII, you protest. RAII is a “solution” to a problem created by exceptions. How about I don’t create the problem in the first place.

Hard core #include discipline

I wrote about it in depth.

My objects are not shy

I don’t bother with private and protected. struct is just class with guts exposed by default, so I use that. While intellectually I understand the reasoning behind hiding implementation details, in practice it becomes busy work of typing noise and then even more typing when you change your mind about visibility. I’m the only person working on the code so I don’t need to force those of lesser intellect to write the code properly.

My objects are shy

At the same time I minimize what goes into a class, especially methods. The smaller the class, the faster the build. A common problem is adding too many methods to a class. You have a StrVec class for array of strings. A lesser programmer is tempted to add Join(const char* sep) method to StrVec. A wise programmer makes it a stand-alone function: Join(const StrVec& v, const char* sep). This is enabled by making everything in a class public. If you limit visibility you then have to use friend to allow Join() function access what it needs. Another example of “solution” to self-inflicted problems.
Minimize #ifdef

#ifdef is problematic because it creates code paths that I don’t always build. I provide arm64, intel 32-bit and 64-bit builds but typically only develop with 64-bit intel build. Every #ifdef that branches on architecture introduces potential for compilation error which I’ll only know about when my daily ci build fails. Consider 2 possible implementations of IsProcess64Bit():

Bad:

bool IsProcess64Bit() {
#ifdef _WIN64
    return true;
#else
    return false;
#endif
}

Good:

bool IsProcess64Bit() {
    return sizeof(uintptr_t) == 8;
}

The bad version has a bug: it was correct when I was only doing intel builds but became buggy when I added arm64 builds. This conflicts with the goal of smallest possible size but it’s worth it.

Stress testing

SumatraPDF supports a lot of very complex document and image formats. Complex formats require complex code that is likely to have bugs. I also have lots of files in those formats. I’ve added stress testing functionality where I point SumatraPDF to a folder with files and tell it to render all of them. For greater coverage, I also simulate some of the possible UI actions users can take like searching, switching view modes etc.

Crash reporting

I wrote about it in depth.

Heavy use of CrashIf()

C/C++ programmers are familiar with assert() macro. CrashIf() is my version of that, tailored to my needs. The purpose of assert / CrashIf is to add checks to detect incorrect use of APIs or invalid states in the program. For example, if the code tries to access an element of an array at an invalid index (negative or larger than size of the array), it indicates a bug in the program. I want to be notified about such bugs both when I test SumatraPDF and when it runs on user’s computers. As the name implies, it’ll crash (by de-referencing null pointer) and therefore generate a crash report. It’s enabled in debug and pre-release builds but not in release builds. Release builds have many, many users so I worry about too many crash reports.

premake to generate Visual Studio solution

Visual Studio uses XML files as a list of files in the project and build format. The format is impossible to work with in a text editor so you have no choice but to use Visual Studio to edit the project / solution. To add a new file: find the right UI element, click here, click there, pick a file using file picker, click again. To change a compilation setting of a project or a file? Find the right UI element, click here, click there, type this, confirm that. You accidentally changed compilation settings of 1 file out of a hundred? Good luck figuring out which one. Go over all files in UI one by one. In other words: managing project files using Visual Studio UI is a nightmare.

Premake is a solution. It’s a meta-build system. You define your build using lua scripts, which look like text configuration files. Premake then can generate Visual Studio projects, XCode project, makefiles etc. That’s the meta part. It was truly a life saver on a project with lots of files (SumatraPDF’s own are over 300, many times more for third party libraries).

Using /analyze and cppcheck

cppcheck and /analyze flag in cl.exe are tools to find bugs in C++ code via static analysis. They are like a C++ compiler but instead of generating code, they analyze control flow in a program to find potential problems. It’s a cheap way to find some bugs, so there’s no excuse to not run them from time to time on your code.
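For anyone who has not tried them, both tools are easy to wire into an occasional check. A hypothetical invocation (the paths and flag choices are mine, not SumatraPDF's):

$ cppcheck --enable=warning,style --quiet src/ 2> cppcheck.txt
$ cl.exe /analyze /c /W4 src\*.cpp

cppcheck reports its findings on stderr, and /analyze makes cl.exe emit extra static-analysis warnings during an ordinary compile, so both fit naturally into a CI job that runs less often than the main build.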
Using asan builds

Address Sanitizer (asan) is a compiler flag /fsanitize=address that instruments the code with checks for common memory-related bugs like using an object after freeing it, over-writing values on the stack, freeing an object twice, writing past allocated memory. The downside of this instrumentation is that the code is much slower due to overhead of instrumentation. I’ve created a project for release build with asan and run it occasionally, especially in stress test.

Write for the debugger

Programmers love to code golf i.e. put as much code on one line as possible. As if lines of code were expensive. Many would write:

Bad:

// ...
return (char*)(start + offset);

I write:

Good:

// ...
char* s = (char*)(start + offset);
return s;

Why? Imagine you’re in a debugger stepping through a debug build of your code. The second version makes it trivial to set a breakpoint at return s line and look at the value of s. The first doesn’t. I don’t optimize for smallest number of lines of code but for how easy it is to inspect the state of the program in the debugger. In practice it means that I intentionally create intermediary variables like s in the example above.

Do it yourself standard library

I’m not using STL. Yes, I wrote my own string and vector class. There are several reasons for that.

Historical reason

When I started SumatraPDF over 15 years ago STL was crappy.

Bad APIs

Today STL is still crappy. STL implementations improved greatly but the APIs still suck. There’s no API to insert something in the middle of a string or a vector. I understand the intent of separation of data structures and algorithms but I’m a pragmatist and to my pragmatist eyes v.insert (v.begin(), myarray, myarray+3); is just stupid compared to v.insert(3, el).

Code bloat

STL is bloated. Heavy use of templates leads to lots of generated code i.e. surprisingly large binaries for supposedly low-level language. That bloat is invisible i.e. you won’t know unless you inspect generated binaries, which no one does. The bloat is out of my control. Even if I notice, I can’t fix STL classes. All I can do is to write my non-bloaty alternative, which is what I did.

Slow compilation times

Compilation of C code is not fast but it feels zippy compared to compilation of C++ code. Heavy use of templates is big part of it. STL implementations are over-templatized and need to provide all the C++ support code (operators, iterators etc.). As a pragmatist, I only implement the absolute minimum functionality I use in my code. I minimize use of templates. For example Str and WStr could be a single template but are 2 implementations.

I don’t understand C++

I understand the subset of C++ I use but the whole of C++ is impossibly complicated. For example I’ve read a bunch about std::move() and I’m not confident I know how to use it correctly and that’s just one of many complicated things in C++. C++ is too subtle and I don’t want my code to be a puzzle.

Possibility of optimized implementations

I wrote a StrVec class that is optimized for storing vector of strings. It’s more efficient than std::vector<std::string> by a large margin and I use it extensively.

Temporary allocator and pool allocators

I use temporary allocators heavily. They make the code faster and smaller. Technically STL has support for non-standard allocators but the API is so bad that I would rather not. My temporary allocator and pool allocators are very small and simple and I can add support for them only when beneficial.
Minimize unsigned int

STL and standard C library like to use size_t and other unsigned integers. I think it was a mistake. Go shows that you can just use int. Having two types leads to cast-apalooza. I don’t like visual noise in my code. Unsigned are also more dangerous. When you subtract you can end up with a bigger value. Indexing from end is subtle: for (size_t i = n; i >= 0; i--) is buggy because i >= 0 is always true for unsigned. Sadly I only realized this recently so there’s a lot of code still to refactor to change use of size_t to int.

Mostly raw pointers

No std::unique_ptr for me.

Warnings are errors

C++ makes a distinction between compilation errors and compilation warnings. I don’t like sloppy code and polluting build output with warning messages so for my own code I use a compiler flag that turns warnings into errors, which forces me to fix the warnings.