2017 solar eclipse—obscuration 93.8%

In 2017, I opted to skip the crowds and the drive and settle for a 94% solar eclipse. I fully regret that decision. Weather permitting, I’ll be photographing the full solar eclipse from the path of totality next Monday. While I’ve amassed a ton of gear, the main resource I’ve dumped into this project is time—time planning, practicing, and hacking. After investing all that time, here’s my plan.

Why

I’m never going to produce an eclipse photo comparable to the work of Miloslav Druckmüller—so why bother with photography at all? Photography is my hobby, and what’s a hobby without a challenge? Sure, the siren song of cool gear is part of it—I do love gear—but it also takes planning, hacking, and editing skills to create a great picture. I got to spend time rooting around inside libgphoto2, breaking out the soldering iron to jury-rig a custom ESP32-based release cable, and practicing every move I’ll make on...
10 months ago


More from Tyler Cipriani: blog

Eventually consistent plain text accounting

Spending for October, generated by piping hledger → R

Over the past six months, I’ve tracked my money with hledger—a plain text double-entry accounting system written in Haskell. It’s been surprisingly painless.

My previous attempts to pick up real accounting tools floundered. Hosted tools are privacy nightmares, and my stint with GnuCash didn’t last. But after stumbling on Dmitry Astapov’s “Full-fledged hledger” wiki1, it clicked—eventually consistent accounting. Instead of modeling your money all at once, take it one hacking session at a time.

    It should be easy to work towards eventual consistency. […] I should be able to [add financial records] bit by little bit, leaving things half-done, and picking them up later with little (mental) effort. – Dmitry Astapov, Full-Fledged Hledger

Principles of my system

I’ve cobbled together a system based on these principles:

- Avoid manual entry – Avoid typing in each transaction. Instead, rely on CSVs from the bank.
- CSVs as truth – CSVs are the only things that matter. Everything else can be blown away and rebuilt anytime.
- Embrace version control – Keep everything under version control in Git for easy comparison and safe experimentation.

Learn hledger in five minutes

hledger concepts are heady, but its use is simple. I divide the core concepts into two categories:

Stuff hledger cares about:

- Transactions – how hledger moves money between accounts.
- Journal files – files full of transactions.

Stuff I care about:

- Rules files – how I set up accounts, import CSVs, and move money between accounts.
- Reports – help me see where my money is going and if I messed up my rules.

Transactions move money between accounts:

    2024-01-01 Payday
        income:work          $-100.00
        assets:checking       $100.00

This transaction shows that on Jan 1, 2024, money moved from income:work into assets:checking—Payday. The sum of each transaction should be $0. Money comes from somewhere, and the same amount goes somewhere else—double-entry accounting. This is powerful technology—it makes mistakes impossible to ignore.

Journal files are text files containing one or more transactions:

    2024-01-01 Payday
        income:work          $-100.00
        assets:checking       $100.00

    2024-01-02 QUANSHENG UVK5
        assets:checking       $-29.34
        expenses:fun:radio     $29.34

Rules files transform CSVs into journal files via regex matching.
Here’s a CSV from my bank:

    Transaction Date,Description,Category,Type,Amount,Memo
    09/01/2024,DEPOSIT Paycheck,Payment,Payment,1000.00,
    09/04/2024,PizzaPals Pizza,Food & Drink,Sale,-42.31,
    09/03/2024,Amazon.com*XXXXXXXXY,Shopping,Sale,-35.56,
    09/03/2024,OBSIDIAN.MD,Shopping,Sale,-10.00,
    09/02/2024,Amazon web services,Personal,Sale,-17.89,

And here’s a checking.rules to transform that CSV into a journal file so I can use it with hledger:

    # checking.rules
    # --------------

    # Map CSV fields → hledger fields[0]
    fields date,description,category,type,amount,memo,_

    # `account1`: the account for the whole CSV.[1]
    account1 assets:checking
    account2 expenses:unknown

    skip 1
    date-format %m/%d/%Y
    currency $

    if %type Payment
        account2 income:unknown

    if %category Food & Drink
        account2 expenses:food:dining

    # [0]: <https://hledger.org/hledger.html#field-names>
    # [1]: <https://hledger.org/hledger.html#account-field>

With these two files (checking.rules and 2024-09_checking.csv), I can make the CSV into a journal:

    $ > 2024-09_checking.journal \
        hledger print \
        --rules-file checking.rules \
        -f 2024-09_checking.csv
    $ head 2024-09_checking.journal
    2024-09-01 DEPOSIT Paycheck
        assets:checking        $1000.00
        income:unknown        $-1000.00

    2024-09-02 Amazon web services
        assets:checking         $-17.89
        expenses:unknown         $17.89

Reports are interesting ways to view transactions between accounts. There are registers, balance sheets, and income statements:

    $ hledger incomestatement \
        --depth=2 \
        --file=2024-09_bank.journal
    Revenues:
       $1000.00  income:unknown
    -----------------------
       $1000.00
    Expenses:
         $42.31  expenses:food
         $63.45  expenses:unknown
    -----------------------
        $105.76
    -----------------------
    Net: $894.24

At the beginning of September, I spent $105.76 and made $1000, leaving me with $894.24. But a good chunk is going to the default expense account, expenses:unknown. I can use hledger’s aregister to see what those transactions are:

    $ hledger areg expenses:unknown \
        --file=2024-09_checking.journal \
        -O csv | \
        csvcut -c description,change | \
        csvlook
    | description          | change |
    | -------------------- | ------ |
    | OBSIDIAN.MD          |  10.00 |
    | Amazon web services  |  17.89 |
    | Amazon.com*XXXXXXXXY |  35.56 |

Then, I can add some more rules to my checking.rules:

    if OBSIDIAN.MD
        account2 expenses:personal:subscriptions

    if Amazon web services
        account2 expenses:personal:web:hosting

    if Amazon.com
        account2 expenses:personal:shopping:amazon

Now, I can reprocess my data to get a better picture of my spending:

    $ > 2024-09_bank.journal \
        hledger print \
        --rules-file bank.rules \
        -f 2024-09_bank.csv
    $ hledger bal expenses \
        --depth=3 \
        --percent \
        -f 2024-09_checking2.journal
         30.0 %  expenses:food:dining
         33.6 %  expenses:personal:shopping
          9.5 %  expenses:personal:subscriptions
         16.9 %  expenses:personal:web
    --------------------
        100.0 %

For the Amazon.com purchase, I lumped it into the expenses:personal:shopping account. But I could dig deeper—download my order history from Amazon and categorize that spending. This is the power of working bit-by-bit—the data guides you to the next, deeper rabbit hole.

Goals and non-goals

Why am I doing this? For years, I maintained a monthly spreadsheet of account balances. I had a balance sheet. But I still had questions.

Spending over six months, generated by piping hledger → gnuplot

Before diving into accounting software, these were my goals:

- Granular understanding of my spending – The big one. This is where my monthly spreadsheet fell short. I knew I had money in the bank—I kept my monthly balance sheet.
  I budgeted up-front the % of my income I was saving. But I had no idea where my other money was going.
- Data privacy – I’m unwilling to hand the keys to my accounts to YNAB or Mint.
- Increased value over time – The more time I put in, the more value I want to get out—this is what you get from professional tools built for nerds. While I wished for low-effort setup, I wanted the tool to be able to grow to more uses over time.

Non-goals—these are the parts I never cared about:

- Investment tracking – For now, I left this out of scope. Between monthly balances in my spreadsheet and online investing tools’ ability to drill down, I was fine.2
- Taxes – Folks smarter than me help me understand my yearly taxes.3
- Shared system – I may want to share reports from this system, but no one will have to work in it except me.
- Cash – Cash transactions are unimportant to me. I withdraw money from the ATM sometimes. It evaporates.

hledger can track all these things. My setup is flexible enough to support them someday. But that’s unimportant to me right now.

Monthly maintenance

I spend about an hour a month checking in on my money, which frees me to spend time making fancy charts—an activity I perversely enjoy.

Income vs. Expense, generated by piping hledger → gnuplot

Here’s my setup:

    $ tree ~/Documents/ledger
    .
    ├── export
    │   ├── 2024-balance-sheet.txt
    │   └── 2024-income-statement.txt
    ├── import
    │   ├── in
    │   │   ├── amazon
    │   │   │   └── order-history.csv
    │   │   ├── credit
    │   │   │   ├── 2024-01-01_2024-02-01.csv
    │   │   │   ├── ...
    │   │   │   └── 2024-10-01_2024-11-01.csv
    │   │   └── debit
    │   │       ├── 2024-01-01_2024-02-01.csv
    │   │       ├── ...
    │   │       └── 2024-10-01_2024-11-01.csv
    │   └── journal
    │       ├── amazon
    │       │   └── order-history.journal
    │       ├── credit
    │       │   ├── 2024-01-01_2024-02-01.journal
    │       │   ├── ...
    │       │   └── 2024-10-01_2024-11-01.journal
    │       └── debit
    │           ├── 2024-01-01_2024-02-01.journal
    │           ├── ...
    │           └── 2024-10-01_2024-11-01.journal
    ├── rules
    │   ├── amazon
    │   │   └── journal.rules
    │   ├── credit
    │   │   └── journal.rules
    │   ├── debit
    │   │   └── journal.rules
    │   └── common.rules
    ├── 2024.journal
    ├── Makefile
    └── README

Process:

1. Import – download a CSV for the month from each account and plop it into import/in/<account>/<dates>.csv
2. Make – run make
3. Squint – Look at git diff; if it looks good, git add . && git commit -m "💸"; otherwise, review hledger areg to see details.

The Makefile generates everything under import/journal:

- journal files from my CSVs using their corresponding rules.
- reports in the export folder

I include all the journal files in the 2024.journal with the line:

    include ./import/journal/*/*.journal

Here’s the Makefile:

    SHELL := /bin/bash
    RAW_CSV = $(wildcard import/in/**/*.csv)
    JOURNALS = $(foreach file,$(RAW_CSV),$(subst /in/,/journal/,$(patsubst %.csv,%.journal,$(file))))

    .PHONY: all
    all: $(JOURNALS)
        hledger is -f 2024.journal > export/2024-income-statement.txt
        hledger bs -f 2024.journal > export/2024-balance-sheet.txt

    .PHONY: clean
    clean:
        rm -rf import/journal/**/*.journal

    import/journal/%.journal: import/in/%.csv
        @echo "Processing csv $< to $@"
        @echo "---"
        @mkdir -p $(shell dirname $@)
        @hledger print --rules-file rules/$(shell basename $$(dirname $<))/journal.rules -f "$<" > "$@"

If I find anything amiss (e.g., if my balances are different than what the bank tells me), I look at hledger areg. I may tweak my rules or my CSVs and then run make clean && make and try again. Plain text accounting made simple.
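About those fancy charts: here’s a minimal sketch of one hledger → gnuplot pipeline (illustrative file names, not the exact script behind the figures above):

    # Collapse expenses to one summary row per month, strip the currency
    # symbol, and hand the CSV to gnuplot for a quick ASCII bar chart.
    $ hledger register expenses --monthly --depth 1 -O csv -f 2024.journal \
        | csvcut -c date,amount \
        | sed 's/\$//g' > /tmp/monthly-expenses.csv
    $ gnuplot -e "set datafile separator ','; set terminal dumb; \
        plot '/tmp/monthly-expenses.csv' every ::1 using 2:xtic(1) with boxes title 'expenses'"

A second series from hledger register income --monthly would get you the income vs. expense variant.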
And if I ever want to dig deeper, hledger’s docs have more to teach. But for now, the balance of effort vs. reward is perfect.

1. While reading a blog post from Jonathan Dowland.↩︎
2. Note, this is covered by full-fledged hledger – Investments.↩︎
3. Also covered in full-fledged hledger – Tax returns.↩︎

3 months ago 32 votes
Subliminal git commits

Luckily, I speak Leet. – Amita Ramanujan, Numb3rs, CBS’s IRC Drama

There’s an episode of the CBS prime-time drama Numb3rs that plumbs the depths of Dr. Joel Fleischman’s1 knowledge of IRC. In one scene, Fleischman wonders, “What’s ‘leet’”?

“Leet” is writing that replaces letters with numbers, e.g., “Numb3rs,” where 3 stands in for e. In short, leet is like the heavy-metal “S” you drew in middle school:

    Sweeeeet.
    [flattened ASCII art of the pointy heavy-metal “S”]
    ASCII art version of your misspent youth.

Following years of keen observation, I’ve noticed Git commit hashes are also letters and numbers. Git commit hashes are, as Fleischman might say, prime targets for l33tification.

What can I spell with a git commit?

(DenITDao via orlybooks)

With hexadecimal we can spell any word built from the set of letters {A, B, C, D, E, F}—DEADBEEF (a classic) or ABBABABE (for Mamma Mia aficionados). This is because hexadecimal is a base-16 numbering system—a single “digit” represents 16 numbers:

    Base-10: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
    Base-16: 0 1 2 3 4 5 6 7 8 9  A  B  C  D  E  F

Leet expands our palette of words—using 0, 1, and 5 to represent O, I, and S, respectively. I created a script that scours a few word lists for valid words and phrases. With it, I found masterpieces like DADB0D (dad bod), BADA55 (bad ass), and 5ADBAB1E5 (sad babies).

Manipulating commit hashes for fun and no profit

Git commit hashes are no mystery. A commit hash is the SHA-1 of a commit object. And a commit object is the commit message with some metadata.

    $ mkdir /tmp/BADA55-git && cd /tmp/BADA55-git
    $ git init
    Initialized empty Git repository in /tmp/BADA55-git/.git/
    $ echo '# BADA55 git repo' > README.md && git add README.md && git commit -m 'Initial commit'
    [main (root-commit) 68ec0dd] Initial commit
     1 file changed, 1 insertion(+)
     create mode 100644 README.md
    $ git log --oneline
    68ec0dd (HEAD -> main) Initial commit

Let’s confirm we can recreate the commit hash:

    $ git cat-file -p 68ec0dd > commit-msg
    $ sha1sum <(cat \
        <(printf "commit ") \
        <(wc -c < commit-msg | tr -d '\n') \
        <(printf '%b' '\0') commit-msg)
    68ec0dd6dead532f18082b72beeb73bd828ee8fc /dev/fd/63

Our repo’s first commit has the hash 68ec0dd. My goal is:

- Make 68ec0dd be BADA55.
- Keep the commit message the same, visibly at least.

But I’ll need to change the commit to change the hash. To keep those changes invisible in the output of git log, I’ll add a \t and see what happens to the hash.

    $ truncate -s -1 commit-msg   # remove final newline
    $ printf '\t\n' >> commit-msg # add a tab
    $ # check the new SHA to see if it's BADA55
    $ sha1sum <(cat \
        <(printf "commit ") \
        <(wc -c < commit-msg | tr -d '\n') \
        <(printf '%b' '\0') commit-msg)
    27b22ba5e1c837a34329891c15408208a944aa24 /dev/fd/63

Success! I changed the SHA-1. Now to do this until we get to BADA55. Fortunately, user not-an-aardvark created a tool for that—lucky-commit—which manipulates a commit message, adding a combination of \t and [:space:] characters until you hit a desired SHA-1.

Written in Rust, lucky-commit computes all 256 unique 8-bit strings composed of only tabs and spaces. And then pads out commits up to 48 bits with those strings, using worker threads to quickly compute the SHA-12 of each commit. It’s pretty fast:

    $ time lucky_commit BADA555

    real    0m0.091s
    user    0m0.653s
    sys     0m0.007s
    $ git log --oneline
    bada555 (HEAD -> main) Initial commit
    $ xxd -c1 <(git cat-file -p 68ec0dd) | grep -cPo ': (20|09)'
    12
    $ xxd -c1 <(git cat-file -p HEAD) | grep -cPo ': (20|09)'
    111

Now we have more than an initial commit.
We have a BADA555 initial commit. All that’s left to do is to make ALL our commits BADA55 by abusing git hooks.

    $ cat > .git/hooks/post-commit && chmod +x .git/hooks/post-commit
    #!/usr/bin/env bash
    echo 'L337-ifying!'
    lucky_commit BADA55
    $ echo 'A repo that is very l33t.' >> README.md && git commit -a -m 'l33t'
    L337-ifying!
    [main 0e00cb2] l33t
     1 file changed, 1 insertion(+)
    $ git log --oneline
    bada552 (HEAD -> main) l33t
    bada555 Initial commit

And now I have a git repo almost as cool as the sweet “S” I drew in middle school.

1. This is a Northern Exposure spin-off, right? I’ve only seen 1:48 of the show…↩︎
2. Or SHA-256 for repos that have made the jump to a more secure hash function.↩︎

4 months ago 46 votes
The Pull Request

A brief and biased history.

Oh yeah, there’s pull requests now – GitHub blog, Sat, 23 Feb 2008

When GitHub launched, it had no code review. Three years after launch, in 2011, GitHub user rtomayko became the first person to make a real code comment, which read, in full: “+1”.

Before that, GitHub lacked any way to comment on code directly. Instead, pull requests were a combination of two simple features:

- Cross-repository compare view – a feature they’d debuted in 2010—git diff in a web page.
- A comments section – a feature most blogs had in the 90s.

There was no way to thread comments, and the comments were on a different page than the diff.

GitHub pull requests circa 2010. This is from the official documentation on GitHub.

Earlier still, when the pull request debuted, GitHub claimed only that pull requests were “a way to poke someone about code”—a way to direct message maintainers, but one that lacked any web view of the code whatsoever. For developers, it worked like this:

1. Make a fork.
2. Click “pull request”.
3. Write a message in a text form.
4. Send the message to someone1 with a link to your fork.
5. Wait for them to reply.

In effect, pull requests were a limited way to send emails to other GitHub users.

Ten years after this humble beginning—seven years after the first code comment—when Microsoft acquired GitHub for $7.5 billion, this cobbled-together system known as “GitHub flow” had become the default way to collaborate on code via Git. And I hate it.

Pull requests were never designed. They emerged. But not from careful consideration of the needs of developers or maintainers. Pull requests work like they do because they were easy to build.

In 2008, GitHub’s developers could have opted to use git format-patch instead of teaching the world to juggle branches. Or they might have chosen to generate pull requests using the git request-pull command that’s existed in Git since 2005 and is still used by the Linux kernel maintainers today2. Instead, they shrugged into GitHub flow, and that flow taught the world to use Git. And commit histories have sucked ever since.

For some reason, github has attracted people who have zero taste, don’t care about commit logs, and can’t be bothered. – Linus Torvalds, 2012

1. “Someone” was a person chosen by you from a checklist of the people who had also forked this repository at some point.↩︎
2. Though to make small, contained changes you’d use git format-patch and git am.↩︎

5 months ago 61 votes
Git the stupid password store

GIT - the stupid content tracker

“git” can mean anything, depending on your mood. – Linus Torvalds, Initial revision of “git”, the information manager from hell

Like most git features, gitcredentials(7) are obscure, byzantine, and incredibly useful. And, for me, they’re a nice, hacky solution to a simple problem.

Problem: Home directories teeming with tokens. Too many programs store cleartext credentials in config files in my home directory, making exfiltration all too easy.

Solution: For programs I write, I can use git credential fill – the password library I never knew I installed.

    #!/usr/bin/env bash
    input="\
    protocol=https
    host=example.com
    username=thcipriani
    "

    eval "$(echo "$input" | git credential fill)"
    echo "The password is: $password"

Which looks like this when you run it:

    $ ./prompt.sh
    Password for 'https://thcipriani@example.com':
    The password is: hunter2

What did git credential fill do?

1. Accepted a protocol, username, and host on standard input.
2. Called out to my git credential helper.
3. My credential helper checked for credentials matching https://thcipriani@example.com and found nothing.
4. Since my credential helper came up empty, it prompted me for my password.
5. Finally, it echoed <key>=<value>\n pairs for the keys protocol, host, username, and password to standard output.

If I want, I can tell my credential helper to store the information I entered:

    git credential approve <<EOF
    protocol=$protocol
    username=$username
    host=$host
    password=$password
    EOF

If I do that, the next time I run the script, it finds the password without prompting:

    $ ./prompt.sh
    The password is: hunter2

What are git credentials?

Surprisingly, the intended purpose of git credentials is NOT “a weird way to prompt for passwords.” The problem git credentials solve is this: With git over ssh, you use your keys. With git over https, you type a password. Over and over and over.

Beleaguered git maintainers solved this dilemma with the credential storage system—git credentials. With the right configuration, git will stop asking for your password when you push to an https remote. Instead, git credentials retrieve and send auth info to remotes.

On the labyrinthine options of git credentials

My mind initially refused to learn git credentials due to its twisty maze of terms that all sound alike:

- git credential fill: how you invoke a user’s configured git credential helper.
- git credential approve: how you save git credentials (if this is supported by the user’s git credential helper).
- credential.helper: the git config that points to a script that poops out usernames and passwords. These helper scripts are often named git-credential-<something>.
- git-credential-cache: a specific, built-in git credential helper that caches credentials in memory for a while.
- git-credential-store: STOP. DON’T TOUCH. This is a specific, built-in git credential helper that stores credentials in cleartext in your home directory. Whomp whomp.
- git-credential-manager: a specific and confusingly named git credential helper from Microsoft®. If you’re on Linux or Mac, feel free to ignore it.

But once I mapped the terms, I only needed to pick a git credential helper.
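Of the built-ins above, git-credential-cache is the safest one-liner to try: credentials live in memory and expire on a timer. A minimal sketch (the hour-long timeout is an arbitrary choice):

    # Keep credentials in memory; forget them after 3600 seconds.
    $ git config --global credential.helper 'cache --timeout=3600'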
Configuring good credential helpers

The built-in git-credential-store is a bad credential helper—it saves your passwords in cleartext in ~/.git-credentials.1

If you’re on a Mac, you’re in luck2—one command points git credentials to your keychain:

    git config --global credential.helper osxkeychain

Third-party developers have contributed helpers for popular password stores:

- 1Password
- pass: the standard Unix password manager
- OAuth

Git’s documentation contains a list of credential helpers, too.

Meanwhile, Linux and Windows have standard options. Git’s source repo includes helpers for these options in the contrib directory.

On Linux, you can use libsecret. Here’s how I configured it on Debian:

    sudo apt install libsecret-1-0 libsecret-1-dev
    cd /usr/share/doc/git/contrib/credential/libsecret/
    sudo make
    sudo mv git-credential-libsecret /usr/local/bin/
    git config --global credential.helper libsecret

On Windows, you can use the confusingly named git credential manager. I have no idea how to do this, and I refuse to learn.

Now, if you clone a repo over https, you can push over https without pain3. Plus, you have a handy trick for shell scripts.

1. git-credential-store is not a git credential helper of honor. No highly-esteemed passwords should be stored with it. This message is a warning about danger. The danger is still present, in your time, as it was in ours.↩︎
2. I think. I only have Linux computers to test this on, sorry ;_;↩︎
3. Or the config option pushInsteadOf, which is what I actually do.↩︎

6 months ago 47 votes
Hexadecimal Sucks

Humans do not operate on hexadecimal symbols effectively […] there are exceptions. – Dan Kaminsky

When SSH added ASCII art fingerprints (AKA, randomart), the author credited a talk by Dan Kaminsky. As a refresher, randomart looks like this:

    $ ssh-keygen -lv -f ~/.ssh/id_ed25519.pub
    256 SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0 thcipriani@foo.bar (ED25519)
    +--[ED25519 256]--+
    |      .++ ...    |
    |     o+....    o |
    |E  .oo=.o     .  |
    |  .  .+.=   .    |
    |   o=  .S.o.o    |
    |  o o.o+.=  +    |
    | . .  .o B *     |
    |  . . +  &  .    |
    |     ..+o*.=     |
    +----[SHA256]-----+

Ben Cox describes the algorithm for generating random art on his blog. Here’s a slo-mo version of the algorithm in action:

ASCII art ssh fingerprints slo-mo algorithm

But in Dan’s talk, he never mentions anything about ASCII art. Instead, his talk was about exploiting our brain’s hardware acceleration to make it easier for us to recognize SSH fingerprints. The talk is worth watching, but I’ll attempt a summary.

What’s the problem?

We’ll never memorize SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0—hexadecimal and base64 were built to encode large amounts of information rather than be easy to remember. But that’s ok for SSH keys because there are different kinds of memory:

- Rejection: I’ve never seen that before!
- Recognition: I know it’s that one—not the other one.
- Recollection: rote recall, like a phone number or address.

For SSH you’ll use recognition—do you recognize this key? Of course, SSH keys are still a problem because our working memory is too small to recognize such long strings of letters and numbers. Hacks abound to shore up our paltry working memory—what Dan called “brain hardware acceleration.”

Randomart attempts to tap into our hardware acceleration for pattern recognition—the visuo-spatial sketchpad, where we store pictures. Dan’s idea tapped into a different aspect of hardware acceleration, one often cited by memory competition champions: chunking.

Memory chunking and sha256

The web service what3words maps every three-meter square (3 m × 3 m) on Earth to three words. The White House’s Oval Office is ///curve.empty.buzz. Three words encode the same information as latitude and longitude—38.89, -77.03—chunking the information to be small enough to fit in our working memory. The mapping of locations to words uses a list of 40 thousand common English words, so each word encodes 15.29 bits of information—45.9 bits across three words, identifying 64 trillion unique places.

Meanwhile, sha256 is 256 bits of information: ~116 quindecillion unique combinations.

    64000000000000  # 64 trillion (what3words)
    115792089237316195423570985008687907853269984665640564039457584007913129639936  # 116 (ish) quindecillion (sha256)

For SHA256, we need more than three words or a dictionary larger than 40,000 words. Dan’s insight was we can identify SSH fingerprints using pairs of human names—couples. The math works like this1:

- 131,072 first names: 17 bits per name (×2)
- 524,288 last names: 19 bits per name
- 2,048 cities: 11 bits per city
- 17 + 17 + 19 + 11 = 64 bits

With 64 bits per couple, you could uniquely identify 116 quindecillion items with four couples.
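To sanity-check that arithmetic, a quick sketch with bc (in bc’s math library, log2(x) is l(x)/l(2)):

    $ echo 'l(40000)/l(2)' | bc -l         # bits per what3words word: ~15.29
    $ echo '(17 + 17 + 19 + 11) * 4' | bc  # bits in four couples
    256

And 256 bits is exactly the size of a sha256 digest, which is why four couples suffice.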
Turning this:

    $ ssh foo.bar
    The authenticity of host 'foo.bar' can't be established.
    ED25519 key fingerprint is SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0.
    Are you sure you want to continue connecting (yes/no/[fingerprint])?

Into this2:

    $ ssh foo.bar
    The authenticity of host 'foo.bar' can't be established.
    SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0
    Key Data:
        Svasse and Tainen Jesudasson from Fort Wayne, Indiana, United States
        Illma and Sibeth Primack from Itārsi, Madhya Pradesh, India
        Maarja and Nisim Balyeat from Mukilteo, Washington, United States
        Hsu-Heng and Rasim Haozi from Manali, Tamil Nadu, India
    Are you sure you want to continue connecting (yes/no/[fingerprint])?

With enough exposure, building recognition for these names and places should be possible—at least more possible than memorizing host keys.

1. I’ve modified this from the original talk; in 2006 we were using md5 fingerprints of 160 bits. Now we’re using 256-bit fingerprints, so we needed to encode even more information, but the idea still works.↩︎
2. A (very) rough code implementation is on my github.↩︎

8 months ago 58 votes

More in programming

How to select your first marketing channel

When you're brand new, how do you select your first marketing channel?

21 hours ago 4 votes
Europeans don't have or understand free speech

The new American vice president JD Vance just gave a remarkable talk at the Munich Security Conference on free speech and mass immigration. It did not go over well with many European politicians, some of whom immediately proved Vance's point, and labeled the speech "not acceptable". All because Vance dared poke at two of the holiest taboos in European politics.

Let's start with his points on free speech, because they're the foundation for understanding how Europe got into such a mess on mass immigration. See, Europeans by and large simply do not understand "free speech" as a concept the way Americans do. There is no first amendment-style guarantee in Europe, yet the European mind desperately wants to believe it has the same kind of free speech as the US, despite endless evidence to the contrary.

It's quite like how every dictator around the world pretends to believe in democracy. Sure, they may repress the opposition and rig their elections, but they still crave the imprimatur of the concept. So too "free speech" and the Europeans.

Vance illustrated his point with several examples from the UK. A country that pursues thousands of yearly wrong-speech cases, threatens foreigners with repercussions should they dare say too much online, and has no qualms about handing down draconian sentences for online utterances. It's completely totalitarian and completely nuts.

Germany is not much better. It's illegal to insult elected officials, and if you say the wrong thing, or post the wrong meme, you may well find yourself the subject of a raid at dawn. Just crazy stuff.

I'd love to say that Denmark is different, but sadly it is not. You can be put in prison for up to two years for mocking or degrading someone on the basis of their race. It recently became illegal to burn the Quran (which sadly only serves to legitimize crazy Muslims killing or stabbing those who do). And you may face up to three years in prison for posting online in a way that can be construed as morally supporting terrorism.

But despite all of these examples and laws, I'm constantly arguing with Europeans who cling to the idea that they do have free speech like Americans. Many of them mistakenly think that "hate speech" is illegal in the US, for example. It is not. America really takes the first amendment quite seriously. Even when it comes to hate speech. Famously, the Jewish lawyers of the (now unrecognizable) ACLU defended the right of literal, actual Nazis to march for their hateful ideology in the streets of Skokie, Illinois in 1979 and won.

Another common misconception is that "misinformation" is illegal over there too. It also is not. That's why the Twitter Files proved to be so scandalous. Because it showed the US government under Biden laundering an illegal censorship regime -- in grave violation of the first amendment -- through private parties, like the social media networks.

In America, your speech is free to be wrong, free to be hateful, free to insult religions and celebrities alike. All because the founding fathers correctly saw that asserting the power to determine otherwise leads to a totalitarian darkness.

We've seen vivid illustrations of both in recent years. At the height of the trans mania, questioning whether men who said they were women should be allowed in women's sports or bathrooms or prisons was frequently labeled "hate speech". During the pandemic, questioning whether the virus might have escaped from a lab instead of a wet market got labeled "misinformation".
So too did any questions about the vaccine's inability to stop spread or infection, or whether surgical masks or lockdowns were effective interventions. Now we know that having a public debate about all of these topics was of course completely legitimate. Covid escaping from a lab is currently the most likely explanation, according to American intelligence services, and many European countries, including the UK, have stopped allowing puberty blockers for children.

Which brings us to that last bugaboo: mass immigration. Vance identified it as one of the key threats to Europe at the moment, and I have to agree. So should anyone who's been paying attention to the statistics showing the abject failure of this thirty-year policy utopia of a multi-cultural Europe. The fast-changing winds in European politics suggest that's exactly what's happening.

These are not separate issues. It's the lack of free speech, and a catastrophically narrow Overton window, which has led Europe into such a mess with mass immigration in the first place. In Denmark, the first popular political party that dared to question the wisdom of importing massive numbers of culturally incompatible foreigners was frequently charged with racism back in the 90s. The same "that's racist!" playbook is now being run on political parties across Europe who dare challenge the mass immigration taboo.

But making plain observations that some groups of immigrants really do commit vastly more crime and contribute vastly less economically to society is not racist. It wasn't racist when the Danish Folkparty did it in Denmark in the 1990s, and it isn't racist now when the mainstream center-left parties have followed suit.

I've drawn the contrast to Sweden many times, and I'll do it again here. Unlike Denmark, Sweden kept its Overton window shut on the consequences of mass immigration all the way up through the 90s, 00s, and 10s. As a prize, it now has bombs going off daily, the European record in gun homicides, and a government that admits that the immigrant violence is out of control. The state of Sweden today is a direct consequence of suppressing any talk of the downsides of mass immigration for decades. And while that taboo has recently been broken, it may well be decades more before the problems are tackled at their root. It's tragic beyond belief.

The rest of Europe should look to Sweden as a cautionary tale, and the Danish alternative as a precautionary one. It's never too late to fix tomorrow. You can't fix today, but you can always fix tomorrow.

So Vance was right to wag his finger at all this nonsense. The lack of free speech and the problems with mass immigration. He was right to assert that America and Europe have a shared civilization to advance and protect. Whether the current politicians of Europe want to hear it or not, I'm convinced that average Europeans actually are listening.

yesterday 5 votes
Using Xcode's AI Is Like Pair Programming With A Monkey

I've never used any other AI "assistant," although I've talked with those who have, most of whom are not very positive. My experience using Xcode's AI is that it occasionally offers a line of code that works, but you mostly get junk.

2 days ago 6 votes
tracepoints: gnarly but worth it

Hey all, quick post today to mention that I added tracing support to the Whippet GC library. If the support library for LTTng is available when Whippet is compiled, Whippet embedders can visualize the GC process. Like this!

Click above for a full-scale screenshot of the Perfetto trace explorer processing the nboyer microbenchmark with the parallel copying collector on a 2.5x heap. Of course no image will have all the information; the nice thing about trace visualizers like Perfetto is that you can zoom in to sub-microsecond spans to see exactly what is happening, have nice mouseovers and clicky-clickies. Fun times!

on adding tracepoints

Adding tracepoints to a library is not too hard in the end. You need to pull in the lttng-ust library, which has a pkg-config file. You need to declare your tracepoints in one of your header files. Then you have a minimal C file that includes the header, to generate the code needed to emit tracepoints.

Annoyingly, this header file you write needs to be in one of the -I directories; it can’t be just in the source directory, because lttng includes it seven times (!!) using computed includes (!!!) and because the LTTng file header that does all the computed including isn’t in your directory, GCC won’t find it. It’s pretty ugly. Ugliest part, I would say. But, grit your teeth, because it’s worth it.

Finally you pepper your source with tracepoints, which probably you wrap in some macro so that you don’t have to require LTTng, and so you can switch to other tracepoint libraries, and so on.

using the thing

I wrote up a little guide for Whippet users about how to actually get traces. It’s not as easy as perf record, which I think is an error. Another ugly point. Buck up, though, you are so close to graphs!

By which I mean, so close to having to write a Python script to make graphs! Because LTTng writes its logs in so-called Common Trace Format, which as you might guess is not very common. I have a colleague who swears by it, that for him it is the lowest-overhead system, and indeed in my case it has no measurable overhead when trace data is not being collected, but his group uses custom scripts to convert the CTF data that he collects to GTKWave (?!?!?!!).

In my case I wanted to use Perfetto’s UI, so I found a script to convert from CTF to the JSON-based tracing format that Chrome profiling used to use. But, it uses an old version of Babeltrace that wasn’t available on my system, so I had to write a new script (!!?!?!?!!), probably the most Python I have written in the last 20 years.
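For a sense of the workflow, recording a userspace trace with the LTTng command-line tools goes roughly like this (a sketch; 'whippet:*' is an assumed provider:event glob and the program name is made up):

    $ lttng create gc-trace                       # new tracing session
    $ lttng enable-event --userspace 'whippet:*'  # assumed provider name
    $ lttng start
    $ ./embedder-benchmark                        # hypothetical instrumented program
    $ lttng stop
    $ lttng destroy                               # session ends; the trace stays in ~/lttng-traces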
is it worth it?

Yes. God I love blinkenlights. As long as it’s low-maintenance going forward, I am satisfied with the tradeoffs. Even the fact that I had to write a script to process the logs isn’t so bad, because it let me get nice nested events, which most stock tracing tools don’t allow you to do. I fixed a small performance bug because of it – a worker thread was spinning waiting for a pool to terminate instead of helping out. A win, and one that never would have shown up on a sampling profiler too. I suspect that as I add more tracepoints, more bugs will be found and fixed.

fin

I think the only thing that would be better is if tracepoints were a part of Linux system ABIs – that there would be header files to emit tracepoint metadata in all binaries, that you wouldn’t have to link to any library, and the actual tracing tools would be intermediated by that ABI in such a way that you wouldn’t depend on those tools at build-time or distribution-time. But until then, I will take what I can get. Happy tracing!

2 days ago 3 votes
Evaluating overlay-adjacent accessibility products

I get asked about my opinion on overlay-adjacent accessibility products with enough frequency that I thought it could be helpful to write about it.

There’s a category of third party products out there that are almost, but not quite an accessibility overlay. By this I mean that they seem a little less predatory, and a little more grounded in terms of the promises they make. Some of these products are widgets. Some are browser extensions. Some are apps. Some are an odd fourth thing.

Sometimes it’s a case of a solutioneering disability dongle grift, sometimes it’s a case of good intentions executed in a less-than-optimal way, and sometimes it’s something legitimately helpful. Oftentimes it’s something that lies in the middle area of all of this. Many of them also have some sort of “AI” integration, which is the unfortunate upsell du jour we have to collectively endure for the time being.

The rubric I use to evaluate these products remains very similar to how I scrutinize overlays. Hopefully it’s something that can be helpful for your own efforts.

Should the product’s functionality be patented?

I’m not very happy with the idea that the mechanism to operate something in an accessible way is inhibited by way of legal restriction. This artificially limits who can use it, which is in opposition to the overall mission of digital accessibility. Ideally the technology is the free bit, and the service that facilitates it is what generates the profit.

Do I need to subscribe to use it?

A subscription-based model is a great way to run a business, but you don’t need to pay a recurring fee to use an accessible website. The nature of the web’s technology means it can be operated via keyboard, voice control, and other assistive technology if constructed properly. Workarounds and community support also exist for some things where it’s not built well. Here I’d also like you to consider the disability tax, and how that factors into a rental model. It’s not great.

Does the browser or operating system already have this functionality?

A lot of the time this boils down to an issue of discovery, digital literacy, or identity. As touched on in the previous section, browsers and operating systems offer a lot to help you self-serve. Notable examples are reading mode, on-screen narration, color filters, interface and text zoom, and forced color inversion.

Can it be used across multiple experiences, or just one website?

Stability and predictability of operation and output are vital for technology like this. It’s why I am so bullish on utilizing existing browser and operating system features. Products built to “enhance” the accessibility of a single website or app can’t contribute towards this. Ironically, their presence may actually contribute friction towards someone’s existing method of using things. A tricky little twist here is products that target a single website are often advertised towards the website owner, and not the people who will be using said website.

Can I use the keyboard to operate it?

I’ve gotten in the habit of pressing Tab a few times when I first check out the product’s website and see if anything happens. It’s a quick and easy test to see if the company walks the walk in addition to talking the talk. Here, I regrettably encounter missing focus indicators and non-semantic interactive controls more often than not. I might also sometimes run the homepage through axe DevTools, to see if there are other egregious errors. I then try to use the product itself with a keyboard if a demo is offered.
The products themselves are usually found wanting here.

How reliable is the AI?

There are two broad considerations here: How reliable is the output? How can bias affect someone’s interpretation of things?

While I am a skeptic, I can also acknowledge that there are some good use cases for LLMs and related technology when it comes to disability. I think about reliability of the output in terms of the “assistive” part of assistive technology. By this, I mean it actually helps you do what you need to get done. Here, I’d point to Salma Alam-Naylor’s experience with newer startups in this space versus established, community supported solutions.

Then consider LLM-based image description products. Here we want to make sure the content is accurate and relevant. Remember that image descriptions are the mechanism that some people rely on to help them understand the world. If that description is not accurate, it impacts how they form an understanding of their environment.

A step past that thought is the biases inherent in, and perpetuated by, LLM-based technology. I recall Ben Myers’ thoughts on implicit, hegemonic normalization, as well as the sobering truth that this technology can exert influence over its users’ worldview at scale.

Can the company be trusted with your data?

A lot of assistive technology is purposely designed to not announce the fact that it is being used. This is to stave off things like discrimination or ineffective, separate-yet-equal “accessibility only” sites. There’s also the murky world of data brokerage, and if the company is selling off this information or not. AccessiBe comes to mind here, and not in a good way.

Also consider if the product has access to everything you visit and interact with, and who has access to that information. As a companion concern, it is also worth considering the product’s data security practices—or lack thereof. Here, I would like to point out that startups tend to deprioritize this boring kind of infrastructure work in favor of feature creation. Not having any personal information present in a system is the best way to guard against its theft. Also know that there is no way to undo a data breach once it occurs. Leaked information stays leaked.

Will the company last?

Speaking of startups, know that more fail than succeed. Are you prepared for an outcome where the product you rely on is no longer updated or supported because the company that made it went out of business? It could also be a case where the company still exists, but ceases to support the product you use. Here, know that sometimes these companies will actively squash attempts for community-based resurrection and support of the service because it represents potential liability.

This concern is another reason why I’m bullish on operating system and browser functionality. They have a lot more resiliency and focus on the long view in this particular area.

But also I’m not the arbiter of who can use what. In the spirit of “the best camera is the one you have on you:” if something works for your specific access needs, by all means use it.

2 days ago 6 votes