Exploiting a long-standing git bug for my own amusement

    And I think there is one known race: the index mtime itself is not race-free.
    – Linus Torvalds, Re: git bugs, 2008

A well-known race condition skulks through git’s plumbing. And I can demo it via a git magic trick 🪄[1]

    $ tree -L 1 -a .
    .
    ├── file
    └── .git
    $ cat file
    okbye
    $ git status
    On branch main
    Changes to be committed:
      new file: file
    $ git ls-files --modified
    # No output. A clean working directory.
    $ git commit --message="The file sez $(cat file)"
    $ git log --oneline HEAD -1
    600fcac (HEAD -> main) The file sez okbye
    $ git status
    On branch main
    nothing to commit, working tree clean

Nothing up my sleeves:

- The git repo had one staged file, file, containing okbye—nothing else
- I committed it
- And the commit message is The file sez okbye

Now, the big reveal:

    $ cat file
    okbye
    $ git show HEAD:file
    hello
    $ git status
    nothing to commit, working tree clean

Boom. ✨Magic✨ Git is clueless that the wrong file is in your work...
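The post is cut off above, but the state it demonstrates can be reproduced. Here is a rough sketch of one way to stage this kind of illusion; this is my reconstruction rather than the author's exact trick, and it assumes same-size replacement content, a backdated mtime, and core.trustctime disabled so the index's cached stat data still matches:

    git init /tmp/magic && cd /tmp/magic
    git config core.trustctime false     # ignore the ctime change that touch causes below
    printf 'hello\n' > file
    touch -d '2024-01-01 00:00' file     # give the file a known, old mtime
    git add file                         # the index caches the blob "hello" plus size + mtime
    printf 'okbye\n' > file              # same size (6 bytes), different content
    touch -d '2024-01-01 00:00' file     # restore the mtime the index remembers
    git ls-files --modified              # prints nothing: the cached stat data still matches
    git commit -m "The file sez $(cat file)"   # commits the *index* copy: hello
    git show HEAD:file                   # hello
    cat file                             # okbye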
a year ago 35 votes


More from Tyler Cipriani: blog

Eventually consistent plain text accounting

Spending for October, generated by piping hledger → R

Over the past six months, I’ve tracked my money with hledger—a plain text double-entry accounting system written in Haskell. It’s been surprisingly painless.

My previous attempts to pick up real accounting tools floundered. Hosted tools are privacy nightmares, and my stint with GnuCash didn’t last. But after stumbling on Dmitry Astapov’s “Full-fledged hledger” wiki[1], it clicked—eventually consistent accounting. Instead of modeling your money all at once, take it one hacking session at a time.

    It should be easy to work towards eventual consistency. […] I should be able to [add financial records] bit by little bit, leaving things half-done, and picking them up later with little (mental) effort.
    – Dmitry Astapov, Full-Fledged Hledger

Principles of my system

I’ve cobbled together a system based on these principles:

- Avoid manual entry – Avoid typing in each transaction. Instead, rely on CSVs from the bank.
- CSVs as truth – CSVs are the only things that matter. Everything else can be blown away and rebuilt anytime.
- Embrace version control – Keep everything under version control in Git for easy comparison and safe experimentation.

Learn hledger in five minutes

hledger concepts are heady, but its use is simple. I divide the core concepts into two categories:

Stuff hledger cares about:

- Transactions – how hledger moves money between accounts.
- Journal files – files full of transactions

Stuff I care about:

- Rules files – how I set up accounts, import CSVs, and move money between accounts.
- Reports – help me see where my money is going and if I messed up my rules.

Transactions move money between accounts:

    2024-01-01 Payday
        income:work        $-100.00
        assets:checking     $100.00

This transaction shows that on Jan 1, 2024, money moved from income:work into assets:checking—Payday.

The sum of each transaction should be $0. Money comes from somewhere, and the same amount goes somewhere else—double-entry accounting. This is powerful technology—it makes mistakes impossible to ignore.

Journal files are text files containing one or more transactions:

    2024-01-01 Payday
        income:work        $-100.00
        assets:checking     $100.00

    2024-01-02 QUANSHENG UVK5
        assets:checking     $-29.34
        expenses:fun:radio   $29.34

Rules files transform CSVs into journal files via regex matching.
Here’s a CSV from my bank:

    Transaction Date,Description,Category,Type,Amount,Memo
    09/01/2024,DEPOSIT Paycheck,Payment,Payment,1000.00,
    09/04/2024,PizzaPals Pizza,Food & Drink,Sale,-42.31,
    09/03/2024,Amazon.com*XXXXXXXXY,Shopping,Sale,-35.56,
    09/03/2024,OBSIDIAN.MD,Shopping,Sale,-10.00,
    09/02/2024,Amazon web services,Personal,Sale,-17.89,

And here’s a checking.rules to transform that CSV into a journal file so I can use it with hledger:

    # checking.rules
    # --------------
    # Map CSV fields → hledger fields[0]
    fields date,description,category,type,amount,memo,_

    # `account1`: the account for the whole CSV.[1]
    account1 assets:checking
    account2 expenses:unknown

    skip 1
    date-format %m/%d/%Y
    currency $

    if %type Payment
        account2 income:unknown

    if %category Food & Drink
        account2 expenses:food:dining

    # [0]: <https://hledger.org/hledger.html#field-names>
    # [1]: <https://hledger.org/hledger.html#account-field>

With these two files (checking.rules and 2024-09_checking.csv), I can make the CSV into a journal:

    $ > 2024-09_checking.journal \
        hledger print \
        --rules-file checking.rules \
        -f 2024-09_checking.csv

    $ head 2024-09_checking.journal
    2024-09-01 DEPOSIT Paycheck
        assets:checking       $1000.00
        income:unknown       $-1000.00

    2024-09-02 Amazon web services
        assets:checking        $-17.89
        expenses:unknown        $17.89

Reports are interesting ways to view transactions between accounts. There are registers, balance sheets, and income statements:

    $ hledger incomestatement \
        --depth=2 \
        --file=2024-09_bank.journal
    Revenues:
        $1000.00  income:unknown
    -----------------------
        $1000.00
    Expenses:
          $42.31  expenses:food
          $63.45  expenses:unknown
    -----------------------
         $105.76
    -----------------------
    Net: $894.24

At the beginning of September, I spent $105.76 and made $1000, leaving me with $894.24. But a good chunk is going to the default expense account, expenses:unknown. I can use hledger aregister to see what those transactions are:

    $ hledger areg expenses:unknown \
        --file=2024-09_checking.journal \
        -O csv | \
        csvcut -c description,change | \
        csvlook
    | description          | change |
    | -------------------- | ------ |
    | OBSIDIAN.MD          |  10.00 |
    | Amazon web services  |  17.89 |
    | Amazon.com*XXXXXXXXY |  35.56 |

Then, I can add some more rules to my checking.rules:

    if OBSIDIAN.MD
        account2 expenses:personal:subscriptions

    if Amazon web services
        account2 expenses:personal:web:hosting

    if Amazon.com
        account2 expenses:personal:shopping:amazon

Now, I can reprocess my data to get a better picture of my spending:

    $ > 2024-09_bank.journal \
        hledger print \
        --rules-file bank.rules \
        -f 2024-09_bank.csv

    $ hledger bal expenses \
        --depth=3 \
        --percent \
        -f 2024-09_checking2.journal
         30.0 %  expenses:food:dining
         33.6 %  expenses:personal:shopping
          9.5 %  expenses:personal:subscriptions
         16.9 %  expenses:personal:web
    --------------------
        100.0 %

For the Amazon.com purchase, I lumped it into the expenses:personal:shopping account. But I could dig deeper—download my order history from Amazon and categorize that spending. This is the power of working bit-by-bit—the data guides you to the next, deeper rabbit hole.

Goals and non-goals

Why am I doing this? For years, I maintained a monthly spreadsheet of account balances. I had a balance sheet. But I still had questions.

Spending over six months, generated by piping hledger → gnuplot

Before diving into accounting software, these were my goals:

- Granular understanding of my spending – The big one. This is where my monthly spreadsheet fell short. I knew I had money in the bank—I kept my monthly balance sheet.
  I budgeted up-front the % of my income I was saving. But I had no idea where my other money was going.
- Data privacy – I’m unwilling to hand the keys to my accounts to YNAB or Mint.
- Increased value over time – The more time I put in, the more value I want to get out—this is what you get from professional tools built for nerds. While I wished for low-effort setup, I wanted the tool to be able to grow to more uses over time.

Non-goals—these are the parts I never cared about:

- Investment tracking – For now, I left this out of scope. Between monthly balances in my spreadsheet and online investing tools’ ability to drill down, I was fine.[2]
- Taxes – Folks smarter than me help me understand my yearly taxes.[3]
- Shared system – I may want to share reports from this system, but no one will have to work in it except me.
- Cash – Cash transactions are unimportant to me. I withdraw money from the ATM sometimes. It evaporates.

hledger can track all these things. My setup is flexible enough to support them someday. But that’s unimportant to me right now.

Monthly maintenance

I spend about an hour a month checking in on my money, which frees me to spend time making fancy charts—an activity I perversely enjoy.

Income vs. Expense, generated by piping hledger → gnuplot

Here’s my setup:

    $ tree ~/Documents/ledger
    .
    ├── export
    │   ├── 2024-balance-sheet.txt
    │   └── 2024-income-statement.txt
    ├── import
    │   ├── in
    │   │   ├── amazon
    │   │   │   └── order-history.csv
    │   │   ├── credit
    │   │   │   ├── 2024-01-01_2024-02-01.csv
    │   │   │   ├── ...
    │   │   │   └── 2024-10-01_2024-11-01.csv
    │   │   └── debit
    │   │       ├── 2024-01-01_2024-02-01.csv
    │   │       ├── ...
    │   │       └── 2024-10-01_2024-11-01.csv
    │   └── journal
    │       ├── amazon
    │       │   └── order-history.journal
    │       ├── credit
    │       │   ├── 2024-01-01_2024-02-01.journal
    │       │   ├── ...
    │       │   └── 2024-10-01_2024-11-01.journal
    │       └── debit
    │           ├── 2024-01-01_2024-02-01.journal
    │           ├── ...
    │           └── 2024-10-01_2024-11-01.journal
    ├── rules
    │   ├── amazon
    │   │   └── journal.rules
    │   ├── credit
    │   │   └── journal.rules
    │   ├── debit
    │   │   └── journal.rules
    │   └── common.rules
    ├── 2024.journal
    ├── Makefile
    └── README

Process:

- Import – download a CSV for the month from each account and plop it into import/in/<account>/<dates>.csv
- Make – run make
- Squint – Look at git diff; if it looks good, git add . && git commit -m "💸" otherwise review hledger areg to see details.

The Makefile generates everything under import/journal:

- journal files from my CSVs using their corresponding rules.
- reports in the export folder

I include all the journal files in the 2024.journal with the line:

    include ./import/journal/*/*.journal

Here’s the Makefile:

    SHELL := /bin/bash
    RAW_CSV = $(wildcard import/in/**/*.csv)
    JOURNALS = $(foreach file,$(RAW_CSV),$(subst /in/,/journal/,$(patsubst %.csv,%.journal,$(file))))

    .PHONY: all
    all: $(JOURNALS)
    	hledger is -f 2024.journal > export/2024-income-statement.txt
    	hledger bs -f 2024.journal > export/2024-balance-sheet.txt

    .PHONY: clean
    clean:
    	rm -rf import/journal/**/*.journal

    import/journal/%.journal: import/in/%.csv
    	@echo "Processing csv $< to $@"
    	@echo "---"
    	@mkdir -p $(shell dirname $@)
    	@hledger print --rules-file rules/$(shell basename $$(dirname $<))/journal.rules -f "$<" > "$@"

If I find anything amiss (e.g., if my balances are different than what the bank tells me), I look at hledger areg. I may tweak my rules or my CSVs and then I run make clean && make and try again.

Simple, plain text accounting made simple.
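For the Squint step, a concrete reconciliation check might look like the following; the dates and account names here are illustrative, not a prescribed part of the workflow above:

    # Does hledger's running balance agree with the bank's closing balance for October?
    hledger bal assets:checking -f 2024.journal --end 2024-11-01

    # If the totals disagree, list October's postings to find the culprit:
    hledger areg assets:checking -f 2024.journal --begin 2024-10-01 --end 2024-11-01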
And if I ever want to dig deeper, hledger’s docs have more to teach. But for now, the balance of effort vs. reward is perfect.

1. while reading a blog post from Jonathan Dowland↩︎
2. Note, this is covered by full-fledged hledger – Investements↩︎
3. Also covered in full-fledged hledger – Tax returns↩︎

4 months ago 35 votes
Subliminal git commits

    Luckily, I speak Leet.
    – Amita Ramanujan, Numb3rs, CBS’s IRC Drama

There’s an episode of the CBS prime-time drama Numb3rs that plumbs the depths of Dr. Joel Fleischman’s[1] knowledge of IRC. In one scene, Fleischman wonders, “What’s ‘leet’”?

“Leet” is writing that replaces letters with numbers, e.g., “Numb3rs,” where 3 stands in for e. In short, leet is like the heavy-metal “S” you drew in middle school:

      Sweeeeet.
       / \
      / | \
     |  |  |
      \ \ |
     |  |  |
      \ | /
       \ /

    ASCII art version of your misspent youth.

Following years of keen observation, I’ve noticed Git commit hashes are also letters and numbers. Git commit hashes are, as Fleischman might say, prime targets for l33tification.

What can I spell with a git commit?

(DenITDao via orlybooks)

With hexadecimal we can spell any word built from the letters {A, B, C, D, E, F}—DEADBEEF (a classic) or ABBABABE (for Mamma Mia aficionados). This is because hexadecimal is a base-16 numbering system—a single “digit” represents 16 numbers:

    Base-10: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
    Base-16: 0 1 2 3 4 5 6 7 8 9  A  B  C  D  E  F

Leet expands our palette of words—using 0, 1, and 5 to represent O, I, and S, respectively.

I created a script that scours a few word lists for valid words and phrases. With it, I found masterpieces like DADB0D (dad bod), BADA55 (bad ass), and 5ADBAB1E5 (sad babies).

Manipulating commit hashes for fun and no profit

Git commit hashes are no mystery. A commit hash is the SHA-1 of a commit object. And a commit object is the commit message with some metadata.

    $ mkdir /tmp/BADA55-git && cd /tmp/BADA55-git
    $ git init
    Initialized empty Git repository in /tmp/BADA55-git/.git/
    $ echo '# BADA55 git repo' > README.md && git add README.md && git commit -m 'Initial commit'
    [main (root-commit) 68ec0dd] Initial commit
     1 file changed, 1 insertion(+)
     create mode 100644 README.md
    $ git log --oneline
    68ec0dd (HEAD -> main) Initial commit

Let’s confirm we can recreate the commit hash:

    $ git cat-file -p 68ec0dd > commit-msg
    $ sha1sum <(cat \
        <(printf "commit ") \
        <(wc -c < commit-msg | tr -d '\n') \
        <(printf '%b' '\0') commit-msg)
    68ec0dd6dead532f18082b72beeb73bd828ee8fc  /dev/fd/63

Our repo’s first commit has the hash 68ec0dd. My goal is:

- Make 68ec0dd be BADA55.
- Keep the commit message the same, visibly at least.

But I’ll need to change the commit to change the hash. To keep those changes invisible in the output of git log, I’ll add a \t and see what happens to the hash.

    $ truncate -s -1 commit-msg # remove final newline
    $ printf '\t\n' >> commit-msg # Add a tab
    $ # Check the new SHA to see if it's BADA55
    $ sha1sum <(cat \
        <(printf "commit ") \
        <(wc -c < commit-msg | tr -d '\n') \
        <(printf '%b' '\0') commit-msg)
    27b22ba5e1c837a34329891c15408208a944aa24  /dev/fd/63

Success! I changed the SHA-1. Now to do this until we get to BADA55.

Fortunately, user not-an-aardvark created a tool for that—lucky-commit—that manipulates a commit message, adding a combination of \t and [:space:] characters until you hit a desired SHA-1.

Written in Rust, lucky-commit computes all 256 unique 8-bit strings composed of only tabs and spaces. And then pads out commits up to 48-bits with those strings, using worker threads to quickly compute the SHA-1[2] of each commit. It’s pretty fast:

    $ time lucky_commit BADA555

    real 0m0.091s
    user 0m0.653s
    sys  0m0.007s

    $ git log --oneline
    bada555 (HEAD -> main) Initial commit

    $ xxd -c1 <(git cat-file -p 68ec0dd) | grep -cPo ': (20|09)'
    12
    $ xxd -c1 <(git cat-file -p HEAD) | grep -cPo ': (20|09)'
    111

Now we have more than an initial commit.
We have a BADA555 initial commit. All that’s left to do is to make ALL our commits BADA55 by abusing git hooks.

    $ cat > .git/hooks/post-commit && chmod +x .git/hooks/post-commit
    #!/usr/bin/env bash
    echo 'L337-ifying!'
    lucky_commit BADA55

    $ echo 'A repo that is very l33t.' >> README.md && git commit -a -m 'l33t'
    L337-ifying!
    [main 0e00cb2] l33t
     1 file changed, 1 insertion(+)

    $ git log --oneline
    bada552 (HEAD -> main) l33t
    bada555 Initial commit

And now I have a git repo almost as cool as the sweet “S” I drew in middle school.

1. This is a Northern Exposure spin off, right? I’ve only seen 1:48 of the show…↩︎
2. or SHA-256 for repos that have made the jump to a more secure hash function↩︎
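As an aside on the word-list search mentioned earlier in this post: this is not the author's script, but a rough filter in the same spirit, assuming a dictionary file at /usr/share/dict/words:

    # Keep dictionary words that become valid hex after the leet substitutions
    # O→0, I→1, S→5 (uppercasing everything first).
    tr 'a-z' 'A-Z' < /usr/share/dict/words \
      | sed 'y/OIS/015/' \
      | grep -E '^[0-9A-F]{4,}$' \
      | sort -u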

5 months ago 50 votes
The Pull Request

A brief and biased history.

    Oh yeah, there’s pull requests now
    – GitHub blog, Sat, 23 Feb 2008

When GitHub launched, it had no code review. Three years after launch, in 2011, GitHub user rtomayko became the first person to make a real code comment, which read, in full: “+1”.

Before that, GitHub lacked any way to comment on code directly. Instead, pull requests were a combination of two simple features:

- Cross repository compare view – a feature they’d debuted in 2010—git diff in a web page.
- A comments section – a feature most blogs had in the 90s.

There was no way to thread comments, and the comments were on a different page than the diff.

GitHub pull requests circa 2010. This is from the official documentation on GitHub.

Earlier still, when the pull request debuted, GitHub claimed only that pull requests were “a way to poke someone about code”—a way to direct message maintainers, but one that lacked any web view of the code whatsoever. For developers, it worked like this:

- Make a fork.
- Click “pull request”.
- Write a message in a text form.
- Send the message to someone[1] with a link to your fork.
- Wait for them to reply.

In effect, pull requests were a limited way to send emails to other GitHub users.

Ten years after this humble beginning—seven years after the first code comment—when Microsoft acquired GitHub for $7.5 billion, this cobbled-together system known as “GitHub flow” had become the default way to collaborate on code via Git.

And I hate it.

Pull requests were never designed. They emerged. But not from careful consideration of the needs of developers or maintainers. Pull requests work like they do because they were easy to build.

In 2008, GitHub’s developers could have opted to use git format-patch instead of teaching the world to juggle branches. Or they might have chosen to generate pull requests using the git request-pull command that’s existed in Git since 2005 and is still used by the Linux kernel maintainers today[2].

Instead, they shrugged into GitHub flow, and that flow taught the world to use Git. And commit histories have sucked ever since.

    For some reason, github has attracted people who have zero taste, don’t care about commit logs, and can’t be bothered.
    – Linus Torvalds, 2012

1. “Someone” was a person chosen by you from a checklist of the people who had also forked this repository at some point.↩︎
2. Though to make small, contained changes you’d use git format-patch and git am.↩︎
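For contrast with GitHub flow, this is roughly what the git request-pull path mentioned above looks like; the branch name and remote URL are illustrative:

    # Publish the work somewhere the maintainer can fetch from
    git checkout -b fix-typo
    # ...hack, commit...
    git push https://example.com/me/project.git fix-typo

    # Generate a summary (URL, branch, diffstat, shortlog) to paste into an email
    git request-pull origin/main https://example.com/me/project.git fix-typo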

6 months ago 65 votes
Git the stupid password store

    GIT - the stupid content tracker

    “git” can mean anything, depending on your mood.
    – Linus Torvalds, Initial revision of “git”, the information manager from hell

Like most git features, gitcredentials(7) are obscure, byzantine, and incredibly useful. And, for me, they’re a nice, hacky solution to a simple problem.

Problem: Home directories teeming with tokens. Too many programs store cleartext credentials in config files in my home directory, making exfiltration all too easy.

Solution: For programs I write, I can use git credential fill – the password library I never knew I installed.

    #!/usr/bin/env bash

    input="\
    protocol=https
    host=example.com
    username=thcipriani
    "

    eval "$(echo "$input" | git credential fill)"
    echo "The password is: $password"

Which looks like this when you run it:

    $ ./prompt.sh
    Password for 'https://thcipriani@example.com':
    The password is: hunter2

What did git credential fill do?

- Accepted a protocol, username, and host on standard input.
- Called out to my git credential helper.
- My credential helper checked for credentials matching https://thcipriani@example.com and found nothing.
- Since my credential helper came up empty, it prompted me for my password.
- Finally, it echoed <key>=<value>\n pairs for the keys protocol, host, username, and password to standard output.

If I want, I can tell my credential helper to store the information I entered:

    git credential approve <<EOF
    protocol=$protocol
    username=$username
    host=$host
    password=$password
    EOF

If I do that, the next time I run the script, it finds the password without prompting:

    $ ./prompt.sh
    The password is: hunter2

What are git credentials?

Surprisingly, the intended purpose of git credentials is NOT “a weird way to prompt for passwords.” The problem git credentials solve is this: With git over ssh, you use your keys. With git over https, you type a password. Over and over and over.

Beleaguered git maintainers solved this dilemma with the credential storage system—git credentials. With the right configuration, git will stop asking for your password when you push to an https remote. Instead, git credentials retrieve and send auth info to remotes.

On the labyrinthine options of git credentials

My mind initially refused to learn git credentials due to its twisty maze of terms that all sound alike:

- git credential fill: how you invoke a user’s configured git credential helper
- git credential approve: how you save git credentials (if this is supported by the user’s git credential helper)
- credential.helper: the git config that points to a script that poops out usernames and passwords. These helper scripts are often named git-credential-<something>.
- git-credential-cache: a specific, built-in git credential helper that caches credentials in memory for a while.
- git-credential-store: STOP. DON’T TOUCH. This is a specific, built-in git credential helper that stores credentials in cleartext in your home directory. Whomp whomp.
- git-credential-manager: a specific and confusingly named git credential helper from Microsoft®. If you’re on Linux or Mac, feel free to ignore it.

But once I mapped the terms, I only needed to pick a git credential helper.
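To make the helper side of that contract concrete, here is a toy credential helper. It is purely illustrative (the hard-coded password is exactly what a real helper should never do), but it shows the get/store/erase calls and the key=value protocol git expects:

    #!/usr/bin/env bash
    # Toy helper: save as git-credential-toy somewhere on PATH, then run
    #   git config --global credential.helper toy
    # git invokes it as `git credential-toy <operation>` with key=value
    # lines on stdin and expects key=value lines on stdout.
    case "$1" in
      get)
        cat > /dev/null                    # ignore the description git sends
        printf 'username=%s\n' 'thcipriani'
        printf 'password=%s\n' 'hunter2'
        ;;
      store|erase)
        cat > /dev/null                    # nothing to persist or delete here
        ;;
    esac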
Configuring good credential helpers

The built-in git-credential-store is a bad credential helper—it saves your passwords in cleartext in ~/.git-credentials.[1]

If you’re on a Mac, you’re in luck[2]—one command points git credentials to your keychain:

    git config --global credential.helper osxkeychain

Third-party developers have contributed helpers for popular password stores:

- 1Password
- pass: the standard Unix password manager
- OAuth

Git’s documentation contains a list of credential-helpers, too.

Meanwhile, Linux and Windows have standard options. Git’s source repo includes helpers for these options in the contrib directory.

On Linux, you can use libsecret. Here’s how I configured it on Debian:

    sudo apt install libsecret-1-0 libsecret-1-dev
    cd /usr/share/doc/git/contrib/credential/libsecret/
    sudo make
    sudo mv git-credential-libsecret /usr/local/bin/
    git config --global credential.helper libsecret

On Windows, you can use the confusingly named git credential manager. I have no idea how to do this, and I refuse to learn.

Now, if you clone a repo over https, you can push over https without pain[3]. Plus, you have a handy trick for shell scripts.

1. git-credential-store is not a git credential helper of honor. No highly-esteemed passwords should be stored with it. This message is a warning about danger. The danger is still present, in your time, as it was in ours.↩︎
2. I think. I only have Linux computers to test this on, sorry ;_;↩︎
3. Or the config option pushInsteadOf, which is what I actually do.↩︎

7 months ago 50 votes
Hexadecimal Sucks

    Humans do not operate on hexadecimal symbols effectively […] there are exceptions.
    – Dan Kaminsky

When SSH added ASCII art fingerprints (AKA, randomart), the author credited a talk by Dan Kaminsky. As a refresher, randomart looks like this:

    $ ssh-keygen -lv -f ~/.ssh/id_ed25519.pub
    256 SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0 thcipriani@foo.bar (ED25519)
    +--[ED25519 256]--+
    |      .++ ...    |
    |      o+....  o  |
    |E .oo=.o   .     |
    |  . .+.= .       |
    |   o= .S.o.o     |
    |  o o.o+.= +     |
    |   . . .o B *    |
    |      . . + & .  |
    |       ..+o*.=   |
    +----[SHA256]-----+

Ben Cox describes the algorithm for generating random art on his blog. Here’s a slo-mo version of the algorithm in action:

ASCII art ssh fingerprints slo-mo algorithm

But in Dan’s talk, he never mentions anything about ASCII art. Instead, his talk was about exploiting our brain’s hardware acceleration to make it easier for us to recognize SSH fingerprints.

The talk is worth watching, but I’ll attempt a summary.

What’s the problem?

We’ll never memorize SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0—hexadecimal and base64 were built to encode large amounts of information rather than be easy to remember.

But that’s ok for SSH keys because there are different kinds of memory:

- Rejection: I’ve never seen that before!
- Recognition: I know it’s that one—not the other one.
- Recollection: rote recall, like a phone number or address.

For SSH you’ll use recognition—do you recognize this key? Of course, SSH keys are still a problem because our working memory is too small to recognize such long strings of letters and numbers.

Hacks abound to shore up our paltry working memory—what Dan called “brain hardware acceleration.”

Randomart attempts to tap into our hardware acceleration for pattern recognition—the visuo-spatial sketchpad, where we store pictures. Dan’s idea tapped into a different aspect of hardware acceleration, one often cited by memory competition champions: chunking.

Memory chunking and sha256

The web service what3words maps every 3 m × 3 m square on Earth to three words. The White House’s Oval Office is ///curve.empty.buzz. Three words encode the same information as latitude and longitude—38.89, -77.03—chunking the information to be small enough to fit in our working memory.

The mapping of locations to words uses a list of 40 thousand common English words, so each word encodes 15.29 bits of information—45.9 bits of information, identifying 64 trillion unique places. Meanwhile sha256 is 256 bits of information: ~116 quindecillion unique combinations.

    64000000000000
    # 64 trillion (what3words)

    115792089237316195423570985008687907853269984665640564039457584007913129639936
    # 116 (ish) quindecillion (sha256)

For SHA256, we need more than three words or a dictionary larger than 40,000 words.

Dan’s insight was we can identify SSH fingerprints using pairs of human names—couples. The math works like this[1]:

- 131,072 first names: 17 bits per name (×2)
- 524,288 last names: 19 bits per name
- 2,048 cities: 11 bits per city
- 17 + 17 + 19 + 11 = 64 bits

With 64 bits per couple, you could uniquely identify 116 quindecillion items with four couples.

Turning this:

    $ ssh foo.bar
    The authenticity of host 'foo.bar' can't be established.
    ED25519 key fingerprint is SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0.
    Are you sure you want to continue connecting (yes/no/[fingerprint])?

Into this[2]:

    $ ssh foo.bar
    The authenticity of host 'foo.bar' can't be established.
    SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0
    Key Data:
      Svasse and Tainen Jesudasson from Fort Wayne, Indiana, United States
      Illma and Sibeth Primack from Itārsi, Madhya Pradesh, India
      Maarja and Nisim Balyeat from Mukilteo, Washington, United States
      Hsu-Heng and Rasim Haozi from Manali, Tamil Nadu, India
    Are you sure you want to continue connecting (yes/no/[fingerprint])?

With enough exposure, building recognition for these names and places should be possible—at least more possible than memorizing host keys.

1. I’ve modified this from the original talk: in 2006 we were using 128-bit MD5 fingerprints. Now we’re using 256-bit fingerprints, so we needed to encode even more information, but the idea still works.↩︎
2. A (very) rough code implementation is on my github.↩︎
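The bit arithmetic above is easy to sanity-check from a shell (bc's l() is the natural logarithm):

    echo 'l(131072)/l(2)'  | bc -l   # ≈ 17 bits per first name
    echo 'l(524288)/l(2)'  | bc -l   # ≈ 19 bits per last name
    echo 'l(2048)/l(2)'    | bc -l   # ≈ 11 bits per city
    echo '(17+17+19+11)*4' | bc      # = 256 bits from four couples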

9 months ago 61 votes

More in programming

Closing the borders alone won't fix the problems

Denmark has been reaping lots of delayed accolades from its relatively strict immigration policy lately. The Swedes and the Germans in particular are now eager to take inspiration from The Danish Model, given their predicaments. The very same countries that until recently condemned Denmark's lack of the open-arms/open-border policies they themselves championed as Moral Superpowers.

But even in Denmark, thirty years after the public opposition to mass immigration started getting real political representation, the consequences of culturally-incompatible descendants from MENAPT continue to stress the high-trust societal model. Here are just three major cases that have been covered in the Danish media in 2025 alone:

- Danish public schools are increasingly struggling with violence and threats against students and teachers, primarily from descendants of MENAPT immigrants. In schools with 30% or more immigrants, violence is twice as prevalent. This is causing a flight to private schools from parents who can afford it (including some Syrians!). Some teachers are quitting the profession as a result, saying "the Quran run the class room".
- Danish women are increasingly feeling unsafe in the nightlife. The mayor of the country's third largest city, Odense, says he knows why: "It's groups of young men with an immigrant background that's causing it. We might as well be honest about that." But unfortunately, the only suggestion he had to deal with the problem was that "when [the women] meet these groups... they should take a big detour around them".
- A soccer club from the infamous ghetto area of Vollsmose got national attention because every other team in their league refused to play them, due to the team's long history of violent assaults and death threats against opposing teams and referees. Bizarrely, this led to the situation where the team got to the top of its division because they'd "win" every forfeited match.

Problems of this sort have existed in Denmark for well over thirty years. So in a way, none of this should be surprising. But it actually is. Because it shows that long-term assimilation just isn't happening at a scale to tackle these problems. In fact, data shows the opposite: Descendants of MENAPT immigrants are more likely to be violent and troublesome than their parents.

That's an explosive point because it blows up the thesis that time will solve these problems. Showing instead that it actually just makes it worse. And then what?

This is particularly pertinent in the analysis of Sweden. After the "far right" party of the Swedish Democrats got into government, the new immigrant arrivals have plummeted. But unfortunately, the net share of immigrants is still increasing, in part because of family reunifications, and thus the problems continue. Meaning even if European countries "close the borders", they're still condemned to deal with the damning effects of maladjusted MENAPT immigrant descendants for decades to come. If the intervention stops there.

There are no easy answers here. Obviously, if you're in a hole, you should stop digging. And Sweden has done just that. But just because you aren't compounding the problem doesn't mean you've found a way out. Denmark proves to be both a positive example of minimizing the digging while also a cautionary tale that the hole is still there.

4 hours ago 1 votes
An unexpected lesson in CSS stacking contexts

I’ve made another small tweak to the site – I’ve added “new” banners to articles I’ve written recently, and any post marked as “new” will be pinned to the homepage. Previously, the homepage was just a random selection of six articles I’d written at any time.

Last year I made some changes to de-emphasise sorting by date and reduce recency bias. I stand by that decision, but now I see I went too far. Nobody comes to my site asking “what did Alex write on a specific date”, but there are people who ask “what did Alex write recently”. I’d made it too difficult to find my newest writing, and that’s what this tweak is trying to fix.

This should have been a simple change, but it became a lesson about the inner workings of CSS.

Absolute positioning and my first attempt

I started with some code I wrote last year. Let’s step through it in detail.

    <div class="container">
      <div class="banner">NEW</div>
      <img src="computer.jpg">
    </div>

    .banner { position: absolute; }

This rule gives the banner absolute positioning, which removes the banner from the normal document flow and allows it to be placed anywhere on the page. Now it sits alone, and it doesn't affect the layout of other elements on the page – in particular, the image no longer has to leave space for it.

    .container { position: relative; }

    .banner {
      transform: rotate(45deg);
      right: 16px;
      top: 20px;
    }

I chose the transform, right, and top values by tweaking until I got something that looked correct. They move the banner to the corner, and then the transform rotates it diagonally.

The relative position of the container element is vital. The absolutely positioned banner still needs a reference point for the top and right, and it uses the closest ancestor with an explicit position – or if it doesn’t find one, the root <html> element. Setting position: relative; means the offsets are measured against the sides of the container, not the entire HTML document.

This is a CSS feature called positioning context, which I’d never heard of until I started writing this blog post. I’d been copying the position: relative; line from other examples without really understanding what it did, or why it was necessary.

(What made this particularly confusing to me is that if you only add position: absolute to the banner, it seems like the image is the reference point – notice how, with just that property, the text is in the top left-hand corner of the image. It’s not until you set top or right that the banner starts using the entire page as a reference point. This is because an absolutely positioned element takes its initial position from where it would be in the normal flow, and doesn’t look for a positioned ancestor until you set an offset.)

    .banner {
      background: red;
      color: white;
    }

    .banner {
      right: -34px;
      top: 18px;
      padding: 2px 50px;
    }

    .container { overflow: hidden; }

I also added a box-shadow on my homepage to make it stand out further, but cosmetic details like that aren’t important for the rest of this post.

As a reminder, here’s the HTML:

    <div class="container">
      <div class="banner">NEW</div>
      <img src="computer.jpg">
    </div>

and here’s the complete CSS:

    .container {
      position: relative;
      overflow: hidden;
    }

    .banner {
      position: absolute;
      background: red;
      color: white;
      transform: rotate(45deg);
      right: -34px;
      top: 18px;
      padding: 2px 50px;
    }

It’s only nine CSS properties, but it contains a surprising amount of complexity. I had this CSS and I knew it worked, but I didn’t really understand it – and especially the way absolute positioning worked – until I wrote this post.
This worked when I wrote it as a standalone snippet, and then I deployed it on this site, and I found a bug. (The photo I used in the examples is from Viktorya Sergeeva on Pexels.)

Dark mode, filters, and stacking contexts

I added dark mode support to this site a couple of years ago – the background changes from white to black, the text colour flips, and a few other changes. I’m a light mode person, but I know a lot of people prefer dark mode and it was a fun bit of CSS work, so it’s there.

The code I described above breaks if you’re using this site in dark mode. What.

I started poking around in my browser’s developer tools, and I could see that the banner was being rendered, but it was under the image instead of on top of it. All my positioning code that worked in light mode was broken in dark mode. I was baffled.

I discovered that by adding a z-index property to the banner, I could make it reappear. I knew that elements with a higher z-index will appear above an element with a lower z-index – so I was moving my banner back out from under the image. I had a fix, but it felt uncomfortable because I couldn’t explain why it worked, or why it was only necessary in dark mode. I wanted to go deeper.

I knew the culprit was in the CSS I’d written. I could see the issue if I tried my code in this site, but not if I copied it to a standalone HTML file. To find the issue, I created a local branch of the site, and I started deleting CSS until I could no longer reproduce the issue. I eventually tracked it down to the following rule:

    @media (prefers-color-scheme: dark) {
      /* see https://web.dev/articles/prefers-color-scheme#re-colorize_and_darken_photographic_images */
      img:not([src*='.svg']):not(.dark_aware) {
        filter: grayscale(10%);
      }
    }

This applies a slight darkening to any images when dark mode is enabled – unless they’re an SVG, or I’ve added the dark_aware class that means an image looks okay in dark mode. This makes images a bit less vibrant in dark mode, so they’re not too visually loud. This is a suggestion from Thomas Steiner, from an article with a lot of useful advice about supporting dark mode.

When this rule is present, the banner vanishes. When I delete it, the banner looks fine.

Eventually I found the answer: I’d not thought about (or heard of!) the stacking context. The stacking context is a way of thinking about HTML elements in three dimensions. It introduces a z‑axis that determines which elements appear above or below each other. It’s affected by properties like z-index, but also less obvious ones like filter.

In light mode, the banner and the image are both part of the same stacking context. This means that both elements can be rendered together, and the positioning rules are applied together – so the banner appears on top of the image.

In dark mode, my filter property creates a new stacking context. Applying a filter to an element forces it into a new stacking context, and in this case that means the image and the banner will be rendered separately. Browsers render elements in DOM order, and because the banner appears before the image in the HTML, the stacking context with the banner is rendered first, then the stacking context with the image is rendered separately and covers it up.

The correct fix is not to set a z-index, but to swap the order of DOM elements so the banner is rendered after the image:

    <div class="container">
      <img src="computer.jpg">
      <div class="banner">NEW</div>
    </div>

This is the code I’m using now, and now the banner looks correct in dark mode.
In hindsight, this ordering makes more sense anyway – the banner is an overlay on the image, and it feels right to me that it should appear later in the HTML. If I was laying this out with bits of paper, I’d put down the image, then the banner. One example is nowhere near enough for me to properly understand stacking contexts or rendering order, but now I know it’s a thing I need to consider. I have a vague recollection that I made another mistake with filter and rendering order in the past, but I didn’t investigate properly – this time, I wanted to understand what was happening. I’m still not done – now I have the main layout working, I’m chasing a hairline crack that’s started appearing in the cards, but only on WebKit. There’s an interaction between relative positioning and border-radius that’s throwing everything off. CSS is hard. I stick to a small subset of CSS properties, but that doesn’t mean I can avoid the complexity of the web. There are lots of moving parts that interact in non-obvious ways, and my understanding is rudimentary at best. I have a lot of respect for front-end developers who work on much larger and more complex code bases. I’m getting better, but CSS keeps reminding me how much more I have to learn. [If the formatting of this post looks odd in your feed reader, visit the original article]

yesterday 2 votes
Rohit Chess

fun little board game

yesterday 4 votes
Top Coworking Spaces in Karuizawa

Since November 2023, I’ve been living in Karuizawa, a small resort town that’s 70 minutes away from Tokyo by Shinkansen. The elevation is approximately 1000 meters above sea level, making the summers relatively mild. Unlike other colder places in Japan, it doesn’t get much snow, and has the same sunny winters I came to love in Tokyo. With COVID and the remote work boom, it’s also become popular among professionals such as myself who want to live somewhere with an abundance of nature, but who still need to commute into Tokyo on a semi-regular basis. While I have a home office, I sometimes like to work outside. So I thought I’d share my impressions of the coworking spaces in town that I’ve personally visited, and a few other places where you can get some work done when you’re in town. Sawamura Roastery 11am on a Friday morning and there was only one other customer. Sawamura Roastery is technically a cafe, but it’s my personal favourite coworking space. It has free wifi, outlets, and comfortable chairs. While their coffees are on the expensive side, at about 750 yen for a cafe latte, they are also some of Karuizawa’s best. It’s empty enough on weekday mornings that I feel fine about staying there for hours, making it a deal compared to official drop-in coworking spaces. Another bonus is that it opens early: 7 a.m. (or 8 a.m. during the winter months). This allows me to start working right after I drop off my kids at daycare, rather than having 20 odd minutes to kill before heading to the other places that open at 9 a.m. If you’re having an online meeting, you can make use of the outdoor seating. It’s perfect when the weather is nice, but they also have heating for when it isn’t. The downsides are that their playlist is rather short, so I’m constantly hearing the same songs, and their roasting machine sometimes gets quite noisy. Gokalab Gokalab is my favourite dedicated coworking space in Karuizawa. Technically it is in Miyota, the next town over, which is sometimes called “Nishikaruizawa”. But it’s the only coworking space in the area I’ve been to that feels like it has a real community. When you want to work here, you have three options: buy a drink (600 yen for a cafe au lait—no cafe lattes, unfortunately, but if you prefer black coffee they have a good selection) and work out of the cafe area on the first floor; pay their daily drop-in fee of 1,000 yen; or become a “researcher” (研究員, kenkyuin) for 3,000 yen per month and enjoy unlimited usage. Now you may be thinking that the last option is a steal. That’s because it is. However, to become a researcher you need to go through a workshop that involves making something out of LEGO, and submit an essay about why you want to use the space. The thinking behind this is that they want to support people who actually share their vision, and aren’t just after a cheap space to work or study. Kind of zany, but that sort of out-of-the-box thinking is exactly what I want in a coworking space. When I first moved to Karuizawa, my youngest child couldn’t get into the local daycare. However, we found out that in Miyota, Suginoko Kindergarten had part-time spots available for two year olds. My wife and I ended up taking turns driving my kid there, and then spending the morning working out of Gokalab. Since my youngest is now in a local daycare, I haven’t made it out to Gokalab much. It’s just a bit too far for me (about a 15-minute drive from my house, while other options on this list are at most a 15-minute bicycle ride). 
But if I was living closer, I’d be a regular there. 232 Coworking Space & Hotel Noon on a Monday morning at 232 Coworking Space. If you’re looking for a coworking space near Karuizawa station, 232 Coworking Space & Hotel is the best option I’ve come across. The “hotel” part of the name made me think they were focused on “workcations,” but the space seems like it caters to locals as well. The space offers free coffee via an automatic espresso machine, along with other drinks, and a decent number of desks. When I used it on a Monday morning in the off-season, it was moderately occupied at perhaps a quarter capacity. Everyone spoke in whispers, so it felt a bit like a library. There were two booths for calls, but unfortunately they were both occupied when I wanted to have mine, so I had to sit in the hall instead. If the weather was a bit warmer I would have taken it outside, as there was some nice covered seating available. The decor was nice, though the chairs weren’t that comfortable. After a couple of hours I was getting sore. It was also too dimly lit for me, without much natural light. The price for drop-ins is reasonable, starting at 1,500 yen for four hours. They also have monthly plans starting from 10,000 yen for five days per month. What I found missing was a feeling of community. I didn’t see any small talk between the people working there, though I was only there for a couple hours, and maybe this occurs at other times. Their webpage also mentioned that they host events, but apparently they don’t have any upcoming ones planned and haven’t had any in a while. Shozo Coffee Karuizawa The latte is just okay here, but the atmosphere is nice. Shozo Coffee Karuizawa is a cafe on the first floor of the bookstore in Karuizawa Commongrounds. The second floor has a dedicated coworking space, but for me personally, the cafe is a better deal. Their cafe latte is mid-tier and 700 yen. In the afternoons I’ll go for their chai to avoid over-caffeination. They offer free wifi and have signs posted asking you not to hold online meetings, implicitly making it clear that otherwise they don’t mind you working there. Location-wise, this place is very convenient for me, but it suffers from a fatal flaw that prevents me from working there for an extended amount of time: the tables are way too low for me to type comfortably. I’m tall though (190 cm), so they aren’t designed with me in mind. Sheridan Coffee and a popover - my entrance fee to this “coworking space”. Sheridan is a western breakfast and brunch restaurant. They aren’t that busy on weekdays and have free wifi, plus the owner was happy to let me work there. The coffee comes in a pot with enough for at least one refill. There’s also some covered outdoor seating. I used this spot to get some work done when my child was sick and being looked after at the wonderful Hochi Lodge (ほっちのロージ). It’s a clinic and sick childcare facility that does its best to not let on that it’s a medical facility. The doctors and nurses don’t wear uniforms, and appointments there feel more like you’re visiting someone’s home. Sheridan is within walking distance of it. Natural Cafeina An excellent cappuccino but only an okay place to work. If you’d like to get a bit of work done over an excellent cappuccino, Natural Cafeina is a good option. This cafe feels a bit cramped, and as there isn’t much seating, I wouldn’t want to use it for an extended period of time. The music was also a bit loud.
But they do have free wifi, and when I visited, there were a couple of other customers besides myself working there. Nakakaruizawa Library The Nakakaruizawa Library is a beautiful space with plenty of desks facing the windows and free wifi. Anyone can use it for free, making it the most economical coworking space in town. I’ve tried working out of it, but found that, for me personally, it wasn’t conducive to work. It is still a library, and there’s something about the vibes that just doesn’t inspire me. Karuizawa Commongrounds Bookstore Coworking Space The renowned bookstore Tsutaya operates Karuizawa Books in the Karuizawa Commongrounds development. The second floor has a coworking space that features the “cheap chic” look common among hip coworking spaces. Unfinished plywood is everywhere, as are books. I’d never actually worked at this space until writing this article. The price is just too high for me to justify it, as it starts at 1,100 yen for a mere hour, to a max of 4,000 yen per day. At 22,000 yen per month, it’s a more reasonable price for someone using it as an office full time. But I already have a home office and just want somewhere I can drop in at occasionally. There are a couple options, seating-wise. Most of the seats are in booths, which I found rather dark but with comfortable chairs. Then there’s a row of stools next to the window, which offer a good view, but are too uncomfortable for me. Depending on your height, the bar there may work as a standing desk. Lastly, there are two coveted seats with office chairs by a window, but they were both occupied when I visited. The emphasis here seems to be on individual deep work, and though there were a number of other people working, I’d have felt uncomfortable striking up a conversation with one of them. That’s enough to make me give it a pass. Coworking Space Ikoi Villa Coworking Space Ikoi Villa is located in Naka-Karuizawa, relatively close to my home. I’ve only used it once though. It’s part of a hotel, and they converted the lobby to a coworking space by putting a bunch of desks and chairs in it. If all you need is wifi and space to work, it gets the job done. But it’s a shame they didn’t invest a bit more in making it feel like a nice place to work. I went during the summer on one of the hottest days. My house only had one AC unit and couldn’t keep up, so I was hoping to find somewhere cooler to work. But they just had the windows open with some fans going, which left me disappointed. This was ostensibly the peak season for Karuizawa, but only a couple of others were working there that day. Maybe the regulars knew it’d be too hot, but it felt kind of lonely for a coworking space. The drop-in fee starts at 1,000 yen for four hours. It comes with free drinks from a machine: green tea, coffee, and water, if I recall correctly. Karuizawa Prince The Workation Core Do you like corporate vibes? Then this is the place for you. Karuizawa Prince The Workation Core is a coworking space located in my least favourite part of the town—the outlet mall. The throngs of shoppers and rampant commercialism are in stark contrast to the serenity found farther away from the station. This is another coworking space I visited expressly for this article. The fee is 660 yen per 30 minutes, to a maximum of 6,336 yen per day. Even now, just reading that maximum, my heart skipped a beat. This is certainly the most expensive coworking space I’ve ever worked from—I better get this article done fast. 
The facilities include a large open space with reasonably comfortable seating. There are a number of booths with monitors. As they are 23.8 inch monitors with 1,920 x 1,080 resolution, they’re a step down from the resolution of modern laptops, and so not of much use. Though there was room for 40 plus people, I was the only person working. Granted this was on a Sunday morning, so not when most people would typically attend. I don’t think I’ll be back here again. The price and sterile corporate vibe just aren’t for me. If you’re staying at The Prince Hotel, I think you get a discount. In that case, maybe it’s worth it, but otherwise I think there are better options. Sawamura Bakery & Restaurant Kyukaruizawa Sawamura Bakery & Restaurant is across the street from the Roastery. It offers slightly cheaper prices, with about 100 yen off the cafe latte, though the quality is worse, as is the vibe of the place as a whole. They do have a bigger selection of baked goods, though. As a cafe for doing some work, there’s nothing wrong with it per se. The upstairs cafe area has ample seating outside of peak hours. But I just don’t have a good reason to work here over the Roastery. The Pie Hole Los Angeles Karuizawa The best (and only) pecan pie that I’ve had in Japan. The name of this place is a mouthful. Technically, it shouldn’t be on this list because I’ve never worked out of it. But they have wonderful pie, free wifi, and not many customers, so I could see working here. The chairs are a bit uncomfortable though, so I wouldn’t want to stop by for more than an hour or two. While this place had been on my radar for a while, I’d avoided it because there’s no good bicycle parking nearby—or so I thought. I just found that the relatively close Church Street shopping street has a bit of bicycle parking off to the side. If you come to Karuizawa… When I was living in Tokyo, there were just too many opportunities to meet people, and so I found myself having to frequently turn down offers to go out for coffee. Since moving here, I’ve made some local connections, but the pace has been a lot slower. If you’re ever passing through Karuizawa, do get in touch, and I’d be happy to meet up for a cafe latte and possibly some pie.

yesterday 4 votes
Stewardship over ownership

Code ownership is a popular concept, but it emphasizes the wrong thing. It can bring out the worst in a person or a team: defensiveness, control-seeking, power struggles. Instead, we should be focusing on stewardship.

How code ownership manifests

Code ownership as a concept means that a particular person or team "owns" a section of the codebase. This gives them certain rights and responsibilities:

- They control what goes into the code, and can approve or deny changes
- They are responsible for fixing bugs in that part of the code
- They are responsible for maintaining and improving that part of the code

There are tools that help with these, like the CODEOWNERS file on GitHub. This file lets you define a group or list of individuals who own a section of the repository. Then you can require reviews/approvals from them before anything gets merged.

These are all coming from a good place. We want our code to be well-maintained, and we want to make sure that someone is responsible for its direction. It really helps to know who to go to with questions or requests. Without these, changes can grind to a halt, mired in confusion and tech debt.

But the concept in practice brings challenges. If you've worked on a team using code ownership before, you've probably run into:

- that engineer who guards the code against anyone else's changes, wanting all the credit for themselves
- that engineer who refuses to add anything else to their codebase, because they don't want to maintain it
- that engineer who tries to gain code ownership over more areas, to control more of the code and more of the company

I've certainly acted badly due to code ownership, without realizing what I was doing or why I was doing it at the time. There are almost endless ways that code ownership can bring out the worst in people. And it all makes sense. We can do better by shifting to stewardship instead of ownership.

Stewardship is about service

We are all stewards of things we own or are responsible for. I have stewardship over the house I live in with my family, for example. I also have stewardship over the espresso machine I use every day: It's a big piece of machinery, and it's my responsibility to take good care of it and to ensure that as long as it's mine, it operates well and lasts a long time. That reduces expense, reduces waste, and reduces impact on the world—but it also means that the object (an espresso machine) is serving its purpose to bring joy and connection.

Code is no different. By focusing on stewardship rather than ownership, we are focusing on the responsible, sustainable maintenance of the code. We focus on taking good care of that which we're entrusted with. A steward doesn't jealously guard, or struggle to gain more power. A steward watches what her responsibilities are, ensuring enough to contribute but not so many as to burn out. And she nurtures and cares for the code, to make sure that it continues to serve its purpose. Instead of an adversarial relationship, stewardship promotes partnership: It promotes working with others to figure out how to make the best use of resources, instead of hoarding them for yourself.

Stewardship can solve many of the same problems that code ownership does:

- It gives you someone who's a main point of contact for some code
- It grants someone responsibility for bug fixes and maintenance of that code

And in some ways, they look alike. You're going to do a lot of the same things, controlling what goes in or out. But they are very different in the focus.
Owners are concerned with the value of what they own. Stewards are concerned with how well it can serve the group. And this makes all the difference in producing better outcomes.
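For reference, the GitHub CODEOWNERS file mentioned above is just a list of path patterns mapped to owning users or teams; a hypothetical example (the paths and team names are made up):

    # Lives at CODEOWNERS, .github/CODEOWNERS, or docs/CODEOWNERS in the repo
    /src/billing/   @acme/payments-team
    /docs/          @alice
    *.sql           @acme/data-team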

yesterday 3 votes