Spending for October, generated by piping hledger → R

Over the past six months, I've tracked my money with hledger - a plain text double-entry accounting system written in Haskell. It's been surprisingly painless.

My previous attempts to pick up real accounting tools floundered. Hosted tools are privacy nightmares, and my stint with GnuCash didn't last. But after stumbling on Dmitry Astapov's "Full-fledged hledger" wiki[1], it clicked - eventually consistent accounting. Instead of modeling your money all at once, take it one hacking session at a time.

It should be easy to work towards eventual consistency. […] I should be able to [add financial records] bit by little bit, leaving things half-done, and picking them up later with little (mental) effort.
- Dmitry Astapov, Full-Fledged Hledger

Principles of my system

I've cobbled together a system based on these principles:

Avoid manual entry - Avoid typing in each transaction. Instead, rely on CSVs from the bank.
CSVs as truth - CSVs are the only things that matter. Everything else can be blown away and rebuilt anytime.
Embrace version control - Keep everything under version control in Git for easy comparison and safe experimentation.

Learn hledger in five minutes

hledger concepts are heady, but its use is simple. I divide the core concepts into two categories:

Stuff hledger cares about:

Transactions - how hledger moves money between accounts.
Journal files - files full of transactions.

Stuff I care about:

Rules files - how I set up accounts, import CSVs, and move money between accounts.
Reports - help me see where my money is going and if I messed up my rules.

Transactions move money between accounts:

2024-01-01 Payday
    income:work        $-100.00
    assets:checking     $100.00

This transaction shows that on Jan 1, 2024, money moved from income:work into assets:checking - Payday.

The sum of each transaction should be $0. Money comes from somewhere, and the same amount goes somewhere else - double-entry accounting. This is powerful technology - it makes mistakes impossible to ignore.

Journal files are text files containing one or more transactions:

2024-01-01 Payday
    income:work        $-100.00
    assets:checking     $100.00

2024-01-02 QUANSHENG UVK5
    assets:checking     $-29.34
    expenses:fun:radio   $29.34

Rules files transform CSVs into journal files via regex matching.
Here's a CSV from my bank:

Transaction Date,Description,Category,Type,Amount,Memo
09/01/2024,DEPOSIT Paycheck,Payment,Payment,1000.00,
09/04/2024,PizzaPals Pizza,Food & Drink,Sale,-42.31,
09/03/2024,Amazon.com*XXXXXXXXY,Shopping,Sale,-35.56,
09/03/2024,OBSIDIAN.MD,Shopping,Sale,-10.00,
09/02/2024,Amazon web services,Personal,Sale,-17.89,

And here's a checking.rules to transform that CSV into a journal file so I can use it with hledger:

# checking.rules
# --------------
# Map CSV fields → hledger fields[0]
fields date,description,category,type,amount,memo,_

# `account1`: the account for the whole CSV.[1]
account1 assets:checking
account2 expenses:unknown

skip 1
date-format %m/%d/%Y
currency $

if %type Payment
  account2 income:unknown

if %category Food & Drink
  account2 expenses:food:dining

# [0]: <https://hledger.org/hledger.html#field-names>
# [1]: <https://hledger.org/hledger.html#account-field>

With these two files (checking.rules and 2024-09_checking.csv), I can make the CSV into a journal:

$ > 2024-09_checking.journal \
    hledger print \
    --rules-file checking.rules \
    -f 2024-09_checking.csv

$ head 2024-09_checking.journal
2024-09-01 DEPOSIT Paycheck
    assets:checking        $1000.00
    income:unknown        $-1000.00

2024-09-02 Amazon web services
    assets:checking         $-17.89
    expenses:unknown         $17.89

Reports are interesting ways to view transactions between accounts. There are registers, balance sheets, and income statements:

$ hledger incomestatement \
    --depth=2 \
    --file=2024-09_bank.journal

Revenues:
        $1000.00  income:unknown
-----------------------
        $1000.00

Expenses:
          $42.31  expenses:food
          $63.45  expenses:unknown
-----------------------
         $105.76
-----------------------
Net:     $894.24

At the beginning of September, I spent $105.76 and made $1000, leaving me with $894.24. But a good chunk is going to the default expense account, expenses:unknown. I can use hledger aregister to see what those transactions are:

$ hledger areg expenses:unknown \
    --file=2024-09_checking.journal \
    -O csv | \
    csvcut -c description,change | \
    csvlook

| description          | change |
| -------------------- | ------ |
| OBSIDIAN.MD          |  10.00 |
| Amazon web services  |  17.89 |
| Amazon.com*XXXXXXXXY |  35.56 |

Then, I can add some more rules to my checking.rules:

if OBSIDIAN.MD
  account2 expenses:personal:subscriptions

if Amazon web services
  account2 expenses:personal:web:hosting

if Amazon.com
  account2 expenses:personal:shopping:amazon

Now, I can reprocess my data to get a better picture of my spending:

$ > 2024-09_bank.journal \
    hledger print \
    --rules-file bank.rules \
    -f 2024-09_bank.csv

$ hledger bal expenses \
    --depth=3 \
    --percent \
    -f 2024-09_checking2.journal
       30.0 %  expenses:food:dining
       33.6 %  expenses:personal:shopping
        9.5 %  expenses:personal:subscriptions
       16.9 %  expenses:personal:web
--------------------
      100.0 %

For the Amazon.com purchase, I lumped it into the expenses:personal:shopping account. But I could dig deeper - download my order history from Amazon and categorize that spending. This is the power of working bit-by-bit - the data guides you to the next, deeper rabbit hole.

Goals and non-goals

Why am I doing this?

For years, I maintained a monthly spreadsheet of account balances. I had a balance sheet. But I still had questions.

Spending over six months, generated by piping hledger → gnuplot

Before diving into accounting software, these were my goals:

Granular understanding of my spending - The big one. This is where my monthly spreadsheet fell short. I knew I had money in the bank - I kept my monthly balance sheet.
I budgeted up-front the % of my income I was saving. But I had no idea where my other money was going.

Data privacy - I'm unwilling to hand the keys to my accounts to YNAB or Mint.

Increased value over time - The more time I put in, the more value I want to get out - this is what you get from professional tools built for nerds. While I wished for low-effort setup, I wanted the tool to be able to grow to more uses over time.

Non-goals - these are the parts I never cared about:

Investment tracking - For now, I left this out of scope. Between monthly balances in my spreadsheet and online investing tools' ability to drill down, I was fine.[2]

Taxes - Folks smarter than me help me understand my yearly taxes.[3]

Shared system - I may want to share reports from this system, but no one will have to work in it except me.

Cash - Cash transactions are unimportant to me. I withdraw money from the ATM sometimes. It evaporates.

hledger can track all these things. My setup is flexible enough to support them someday. But that's unimportant to me right now.

Monthly maintenance

I spend about an hour a month checking in on my money. Which frees me to spend time making fancy charts - an activity I perversely enjoy.

Income vs. Expense, generated by piping hledger → gnuplot

Here's my setup:

$ tree ~/Documents/ledger
.
├── export
│   ├── 2024-balance-sheet.txt
│   └── 2024-income-statement.txt
├── import
│   ├── in
│   │   ├── amazon
│   │   │   └── order-history.csv
│   │   ├── credit
│   │   │   ├── 2024-01-01_2024-02-01.csv
│   │   │   ├── ...
│   │   │   └── 2024-10-01_2024-11-01.csv
│   │   └── debit
│   │       ├── 2024-01-01_2024-02-01.csv
│   │       ├── ...
│   │       └── 2024-10-01_2024-11-01.csv
│   └── journal
│       ├── amazon
│       │   └── order-history.journal
│       ├── credit
│       │   ├── 2024-01-01_2024-02-01.journal
│       │   ├── ...
│       │   └── 2024-10-01_2024-11-01.journal
│       └── debit
│           ├── 2024-01-01_2024-02-01.journal
│           ├── ...
│           └── 2024-10-01_2024-11-01.journal
├── rules
│   ├── amazon
│   │   └── journal.rules
│   ├── credit
│   │   └── journal.rules
│   ├── debit
│   │   └── journal.rules
│   └── common.rules
├── 2024.journal
├── Makefile
└── README

Process:

Import - download a CSV for the month from each account and plop it into import/in/<account>/<dates>.csv
Make - run make
Squint - look at git diff; if it looks good, git add . && git commit -m "💸", otherwise review hledger areg to see details.

The Makefile generates everything under import/journal:

journal files from my CSVs using their corresponding rules.
reports in the export folder.

I include all the journal files in the 2024.journal with the line:

include ./import/journal/*/*.journal

Here's the Makefile:

SHELL := /bin/bash
RAW_CSV = $(wildcard import/in/**/*.csv)
JOURNALS = $(foreach file,$(RAW_CSV),$(subst /in/,/journal/,$(patsubst %.csv,%.journal,$(file))))

.PHONY: all
all: $(JOURNALS)
	hledger is -f 2024.journal > export/2024-income-statement.txt
	hledger bs -f 2024.journal > export/2024-balance-sheet.txt

.PHONY: clean
clean:
	rm -rf import/journal/**/*.journal

import/journal/%.journal: import/in/%.csv
	@echo "Processing csv $< to $@"
	@echo "---"
	@mkdir -p $(shell dirname $@)
	@hledger print --rules-file rules/$(shell basename $$(dirname $<))/journal.rules -f "$<" > "$@"

If I find anything amiss (e.g., if my balances are different than what the bank tells me), I look at hledger areg. I may tweak my rules or my CSVs, and then I run make clean && make and try again.
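Those fancy charts, by the way, are just more piping. A minimal sketch of the idea - not the exact script behind the charts above; the hledger CSV column names and the date format may need adjusting for your version, and it assumes csvkit and gnuplot are installed and amounts without thousands separators:

hledger reg expenses --monthly -f 2024.journal -O csv \
  | csvcut -c date,amount \
  | tail -n +2 \
  | tr -d '$' \
  | tr ',' ' ' > monthly-expenses.dat

gnuplot -persist <<'EOF'
set xdata time
set timefmt "%Y-%m-%d"
set format x "%b"
plot 'monthly-expenses.dat' using 1:2 with linespoints title 'expenses'
EOF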
Simple, plain text accounting made simple.

And if I ever want to dig deeper, hledger's docs have more to teach. But for now, the balance of effort vs. reward is perfect.

[1] Which I stumbled on while reading a blog post from Jonathan Dowland.
[2] Note, this is covered by Full-Fledged Hledger - Investments.
[3] Also covered in Full-Fledged Hledger - Tax returns.
Luckily, I speak Leet.
- Amita Ramanujan, Numb3rs, CBS's IRC Drama

There's an episode of the CBS prime-time drama Numb3rs that plumbs the depths of Dr. Joel Fleischman's[1] knowledge of IRC. In one scene, Fleischman wonders, "What's 'leet'"?

"Leet" is writing that replaces letters with numbers, e.g., "Numb3rs," where 3 stands in for e. In short, leet is like the heavy-metal "S" you drew in middle school:

Sweeeeet.

 / \
/ | \
| | |
\ \ |
| | |
\ | /
 \ /

ASCII art version of your misspent youth.

Following years of keen observation, I've noticed Git commit hashes are also letters and numbers. Git commit hashes are, as Fleischman might say, prime targets for l33tification.

What can I spell with a git commit?

(Image: DenITDao via orlybooks)

With hexadecimal we can spell any word built from the letters {A, B, C, D, E, F} - DEADBEEF (a classic) or ABBABABE (for Mamma Mia aficionados). This is because hexadecimal is a base-16 numbering system - a single "digit" represents 16 numbers:

Base-10: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Base-16: 0 1 2 3 4 5 6 7 8 9  A  B  C  D  E  F

Leet expands our palette of words - using 0, 1, and 5 to represent O, I, and S, respectively.

I created a script that scours a few word lists for valid words and phrases. With it, I found masterpieces like DADB0D (dad bod), BADA55 (bad ass), and 5ADBAB1E5 (sad babies).

Manipulating commit hashes for fun and no profit

Git commit hashes are no mystery. A commit hash is the SHA-1 of a commit object. And a commit object is the commit message with some metadata.

$ mkdir /tmp/BADA55-git && cd /tmp/BADA55-git
$ git init
Initialized empty Git repository in /tmp/BADA55-git/.git/
$ echo '# BADA55 git repo' > README.md && git add README.md && git commit -m 'Initial commit'
[main (root-commit) 68ec0dd] Initial commit
 1 file changed, 1 insertion(+)
 create mode 100644 README.md
$ git log --oneline
68ec0dd (HEAD -> main) Initial commit

Let's confirm we can recreate the commit hash:

$ git cat-file -p 68ec0dd > commit-msg
$ sha1sum <(cat \
    <(printf "commit ") \
    <(wc -c < commit-msg | tr -d '\n') \
    <(printf '%b' '\0') commit-msg)
68ec0dd6dead532f18082b72beeb73bd828ee8fc  /dev/fd/63

Our repo's first commit has the hash 68ec0dd. My goal is:

Make 68ec0dd be BADA55.
Keep the commit message the same, visibly at least.

But I'll need to change the commit to change the hash. To keep those changes invisible in the output of git log, I'll add a \t and see what happens to the hash.

$ truncate -s -1 commit-msg   # remove final newline
$ printf '\t\n' >> commit-msg # Add a tab
$ # Check the new SHA to see if it's BADA55
$ sha1sum <(cat \
    <(printf "commit ") \
    <(wc -c < commit-msg | tr -d '\n') \
    <(printf '%b' '\0') commit-msg)
27b22ba5e1c837a34329891c15408208a944aa24  /dev/fd/63

Success! I changed the SHA-1. Now to do this until we get to BADA55.

Fortunately, user not-an-aardvark created a tool for that - lucky-commit - which manipulates a commit message, adding a combination of \t and [:space:] characters until you hit a desired SHA-1.

Written in Rust, lucky-commit computes all 256 unique 8-bit strings composed of only tabs and spaces. And then pads out commits up to 48 bits with those strings, using worker threads to quickly compute the SHA-1[2] of each commit. It's pretty fast:

$ time lucky_commit BADA555

real	0m0.091s
user	0m0.653s
sys	0m0.007s

$ git log --oneline
bada555 (HEAD -> main) Initial commit

$ xxd -c1 <(git cat-file -p 68ec0dd) | grep -cPo ': (20|09)'
12
$ xxd -c1 <(git cat-file -p HEAD) | grep -cPo ': (20|09)'
111

Now we have more than an initial commit.
We have a BADA555 initial commit.

All that's left to do is to make ALL our commits BADA55 by abusing git hooks.

$ cat > .git/hooks/post-commit && chmod +x .git/hooks/post-commit
#!/usr/bin/env bash
echo 'L337-ifying!'
lucky_commit BADA55

$ echo 'A repo that is very l33t.' >> README.md && git commit -a -m 'l33t'
L337-ifying!
[main 0e00cb2] l33t
 1 file changed, 1 insertion(+)

$ git log --oneline
bada552 (HEAD -> main) l33t
bada555 Initial commit

And now I have a git repo almost as cool as the sweet "S" I drew in middle school.

[1] This is a Northern Exposure spin-off, right? I've only seen 1:48 of the show…
[2] Or SHA-256 for repos that have made the jump to a more secure hash function.
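As an aside on the word hunt mentioned earlier: the search boils down to a leet substitution plus a hexadecimal filter. A rough sketch (not the actual script from this post), assuming a word list at /usr/share/dict/words:

awk 'length($0) >= 4 {
  leet = toupper($0)
  gsub(/O/, "0", leet); gsub(/I/, "1", leet); gsub(/S/, "5", leet)
  if (leet ~ /^[0-9A-F]+$/) print $0, leet
}' /usr/share/dict/words
# prints word/leet pairs for anything that survives the hex filter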
A brief and biased history.

Oh yeah, there's pull requests now
- GitHub blog, Sat, 23 Feb 2008

When GitHub launched, it had no code review. Three years after launch, in 2011, GitHub user rtomayko became the first person to make a real code comment, which read, in full: "+1".

Before that, GitHub lacked any way to comment on code directly. Instead, pull requests were a combination of two simple features:

Cross-repository compare view - a feature they'd debuted in 2010: git diff in a web page.
A comments section - a feature most blogs had in the 90s.

There was no way to thread comments, and the comments were on a different page than the diff.

GitHub pull requests circa 2010. This is from the official documentation on GitHub.

Earlier still, when the pull request debuted, GitHub claimed only that pull requests were "a way to poke someone about code" - a way to direct message maintainers, but one that lacked any web view of the code whatsoever. For developers, it worked like this:

Make a fork.
Click "pull request".
Write a message in a text form.
Send the message to someone[1] with a link to your fork.
Wait for them to reply.

In effect, pull requests were a limited way to send emails to other GitHub users.

Ten years after this humble beginning - seven years after the first code comment - when Microsoft acquired GitHub for $7.5 billion, this cobbled-together system known as "GitHub flow" had become the default way to collaborate on code via Git. And I hate it.

Pull requests were never designed. They emerged. But not from careful consideration of the needs of developers or maintainers. Pull requests work like they do because they were easy to build.

In 2008, GitHub's developers could have opted to use git format-patch instead of teaching the world to juggle branches. Or they might have chosen to generate pull requests using the git request-pull command that's existed in Git since 2005 and is still used by the Linux kernel maintainers today[2]. Instead, they shrugged into GitHub flow, and that flow taught the world to use Git. And commit histories have sucked ever since.

For some reason, github has attracted people who have zero taste, don't care about commit logs, and can't be bothered.
- Linus Torvalds, 2012

[1] "Someone" was a person chosen by you from a checklist of the people who had also forked this repository at some point.
[2] Though to make small, contained changes you'd use git format-patch and git am.
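For a sense of what those email-driven alternatives look like, here's a sketch of the commands involved (illustrative only - the URL and branch names are made up):

# Contributor: turn local commits into mailable patches.
git format-patch origin/main        # writes 0001-*.patch, one file per commit
git send-email *.patch              # or attach the patches to a plain email
                                    # (send-email needs SMTP configuration)

# Maintainer: apply the series, preserving authorship and messages.
git am *.patch

# Or, kernel-style, ask a maintainer to pull from a published branch:
git request-pull v1.0 https://example.com/me/repo.git main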
GIT - the stupid content tracker

"git" can mean anything, depending on your mood.
- Linus Torvalds, Initial revision of "git", the information manager from hell

Like most git features, gitcredentials(7) are obscure, byzantine, and incredibly useful. And, for me, they're a nice, hacky solution to a simple problem.

Problem: Home directories teeming with tokens. Too many programs store cleartext credentials in config files in my home directory, making exfiltration all too easy.

Solution: For programs I write, I can use git credential fill - the password library I never knew I installed.

#!/usr/bin/env bash

input="\
protocol=https
host=example.com
username=thcipriani
"

eval "$(echo "$input" | git credential fill)"
echo "The password is: $password"

Which looks like this when you run it:

$ ./prompt.sh
Password for 'https://thcipriani@example.com':
The password is: hunter2

What did git credential fill do?

Accepted a protocol, username, and host on standard input.
Called out to my git credential helper.
My credential helper checked for credentials matching https://thcipriani@example.com and found nothing.
Since my credential helper came up empty, it prompted me for my password.
Finally, it echoed <key>=<value>\n pairs for the keys protocol, host, username, and password to standard output.

If I want, I can tell my credential helper to store the information I entered:

git credential approve <<EOF
protocol=$protocol
username=$username
host=$host
password=$password
EOF

If I do that, the next time I run the script, it finds the password without prompting:

$ ./prompt.sh
The password is: hunter2

What are git credentials?

Surprisingly, the intended purpose of git credentials is NOT "a weird way to prompt for passwords." The problem git credentials solve is this:

With git over ssh, you use your keys.
With git over https, you type a password. Over and over and over.

Beleaguered git maintainers solved this dilemma with the credential storage system - git credentials.

With the right configuration, git will stop asking for your password when you push to an https remote. Instead, git credentials retrieve and send auth info to remotes.

On the labyrinthine options of git credentials

My mind initially refused to learn git credentials due to its twisty maze of terms that all sound alike:

git credential fill: how you invoke a user's configured git credential helper.
git credential approve: how you save git credentials (if this is supported by the user's git credential helper).
git credential.helper: the git config that points to a script that poops out usernames and passwords. These helper scripts are often named git-credential-<something>.
git-credential-cache: a specific, built-in git credential helper that caches credentials in memory for a while.
git-credential-store: STOP. DON'T TOUCH. This is a specific, built-in git credential helper that stores credentials in cleartext in your home directory. Whomp whomp.
git-credential-manager: a specific and confusingly named git credential helper from Microsoft®. If you're on Linux or Mac, feel free to ignore it.

But once I mapped the terms, I only needed to pick a git credential helper.
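To make those moving parts concrete, here's a toy credential helper - a sketch only, since it hard-codes a password, which is exactly what you don't want in real life. Save it as git-credential-example somewhere on your PATH, make it executable, and point git at it with git config --global credential.helper example.

#!/usr/bin/env bash
# git-credential-example: a toy, read-only credential helper (sketch only).
# git invokes helpers with an action argument: get, store, or erase.
action="$1"

# git sends a key=value credential description (protocol, host, ...) on stdin,
# terminated by a blank line. A real helper would use it to look up a secret.
while IFS= read -r line && [[ -n "$line" ]]; do
  :
done

if [[ "$action" = "get" ]]; then
  # Answer with key=value pairs on stdout; git folds them into the credential.
  printf 'username=%s\n' thcipriani
  printf 'password=%s\n' hunter2
fi
# store and erase are no-ops for this toy helper.

With that configured, the prompt.sh script above prints "The password is: hunter2" without ever prompting.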
Configuring good credential helpers

The built-in git-credential-store is a bad credential helper - it saves your passwords in cleartext in ~/.git-credentials.[1]

If you're on a Mac, you're in luck[2] - one command points git credentials to your keychain:

git config --global credential.helper osxkeychain

Third-party developers have contributed helpers for popular password stores:

1Password
pass: the standard Unix password manager
OAuth

Git's documentation contains a list of credential helpers, too.

Meanwhile, Linux and Windows have standard options. Git's source repo includes helpers for these options in the contrib directory.

On Linux, you can use libsecret. Here's how I configured it on Debian:

sudo apt install libsecret-1-0 libsecret-1-dev
cd /usr/share/doc/git/contrib/credential/libsecret/
sudo make
sudo mv git-credential-libsecret /usr/local/bin/
git config --global credential.helper libsecret

On Windows, you can use the confusingly named git credential manager. I have no idea how to do this, and I refuse to learn.

Now, if you clone a repo over https, you can push over https without pain[3]. Plus, you have a handy trick for shell scripts.

[1] git-credential-store is not a git credential helper of honor. No highly-esteemed passwords should be stored with it. This message is a warning about danger. The danger is still present, in your time, as it was in ours.
[2] I think. I only have Linux computers to test this on, sorry ;_;
[3] Or the config option pushInsteadOf, which is what I actually do.
Humans do not operate on hexadecimal symbols effectively […] there are exceptions.
- Dan Kaminsky

When SSH added ASCII art fingerprints (AKA, randomart), the author credited a talk by Dan Kaminsky.

As a refresher, randomart looks like this:

$ ssh-keygen -lv -f ~/.ssh/id_ed25519.pub
256 SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0 thcipriani@foo.bar (ED25519)
+--[ED25519 256]--+
| .++ ...         |
| o+.... o        |
|E .oo=.o .       |
| . .+.= .        |
| o= .S.o.o       |
| o o.o+.= +      |
| . . .o B *      |
| . . + & .       |
| ..+o*.=         |
+----[SHA256]-----+

Ben Cox describes the algorithm for generating random art on his blog. Here's a slo-mo version of the algorithm in action:

ASCII art ssh fingerprints: slo-mo algorithm

But in Dan's talk, he never mentions anything about ASCII art. Instead, his talk was about exploiting our brain's hardware acceleration to make it easier for us to recognize SSH fingerprints.

The talk is worth watching, but I'll attempt a summary.

What's the problem?

We'll never memorize SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0 - hexadecimal and base64 were built to encode large amounts of information rather than be easy to remember.

But that's OK for SSH keys, because there are different kinds of memory:

Rejection: I've never seen that before!
Recognition: I know it's that one - not the other one.
Recollection: rote recall, like a phone number or address.

For SSH you'll use recognition - do you recognize this key? Of course, SSH keys are still a problem because our working memory is too small to recognize such long strings of letters and numbers.

Hacks abound to shore up our paltry working memory - what Dan called "brain hardware acceleration."

Randomart attempts to tap into our hardware acceleration for pattern recognition - the visuo-spatial sketchpad, where we store pictures.

Dan's idea tapped into a different aspect of hardware acceleration, one often cited by memory competition champions: chunking.

Memory chunking and sha256

The web service what3words maps every 3-metre square (3 m × 3 m) on Earth to three words. The White House's Oval Office is ///curve.empty.buzz.

Three words encode the same information as latitude and longitude - 38.89, -77.03 - chunking the information to be small enough to fit in our working memory. The mapping of locations to words uses a list of 40 thousand common English words, so each word encodes 15.29 bits of information - 45.9 bits across three words, identifying 64 trillion unique places.

Meanwhile sha256 is 256 bits of information: ~116 quindecillion unique combinations.

64000000000000
# 64 trillion (what3words)

115792089237316195423570985008687907853269984665640564039457584007913129639936
# 116 (ish) quindecillion (sha256)

For SHA256, we need more than three words or a dictionary larger than 40,000 words.

Dan's insight was we can identify SSH fingerprints using pairs of human names - couples. The math works like this[1]:

131,072 first names: 17 bits per name (×2)
524,288 last names: 19 bits per name
2,048 cities: 11 bits per city
17 + 17 + 19 + 11 = 64 bits

With 64 bits per couple, you could uniquely identify 116 quindecillion items with four couples. Turning this:

$ ssh foo.bar
The authenticity of host 'foo.bar' can't be established.
ED25519 key fingerprint is SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0.
Are you sure you want to continue connecting (yes/no/[fingerprint])?

Into this[2]:

$ ssh foo.bar
The authenticity of host 'foo.bar' can't be established.
SHA256:XrvNnhQuG1ObprgdtPiqIGXUAsHT71SKh9/WAcAKoS0
Key Data:
  Svasse and Tainen Jesudasson from Fort Wayne, Indiana, United States
  Illma and Sibeth Primack from Itārsi, Madhya Pradesh, India
  Maarja and Nisim Balyeat from Mukilteo, Washington, United States
  Hsu-Heng and Rasim Haozi from Manali, Tamil Nadu, India
Are you sure you want to continue connecting (yes/no/[fingerprint])?

With enough exposure, building recognition for these names and places should be possible - at least more possible than memorizing host keys.

[1] I've modified this from the original talk; in 2006 we were using 128-bit MD5 fingerprints. Now we're using 256-bit fingerprints, so we need to encode even more information, but the idea still works.
[2] A (very) rough code implementation is on my github.
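A quick back-of-the-envelope check of the bit arithmetic above, with awk standing in for a calculator:

$ awk 'BEGIN { print "what3words: ", 3 * log(40000) / log(2), "bits" }'
what3words:  45.8631 bits

$ awk 'BEGIN { print "one couple: ", (2 * log(131072) + log(524288) + log(2048)) / log(2), "bits" }'
one couple:  64 bits

$ awk 'BEGIN { print "four couples:", 4 * 64, "bits" }'
four couples: 256 bits

Four couples cover the full 256 bits of a sha256 fingerprint.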
More in programming
The concept of Product-Market Fit (PMF) collapse has gained renewed attention with the rise of large language models (LLMs), as highlighted in a recent Reforge article. The article argues we're witnessing unprecedented market disruption. In this post, I propose we're experiencing an acceleration of a familiar pattern rather than a fundamentally new phenomenon.

Adoption Curves

[…]

The post The Exodus Curve appeared first on Marc Astbury.
In 1940, President Roosevelt tapped William S. Knudsen to run the government's production of military equipment. Knudsen had spent a pivotal decade at Ford during the mass-production revolution, and was president of General Motors when he was drafted as a civilian into service as a three-star general. Not bad for a Dane, born just ten minutes by bike from where I'm writing this in Copenhagen!

Knudsen's leadership raised the productive capacity of the US war machine by 100x in areas like plane production, where it went from producing 3,000 planes in 1939 to over 300,000 by 1945. He was quoted on his achievement: "We won because we smothered the enemy in an avalanche of production, the like of which he had never seen, nor dreamed possible".

Knudsen wasn't an elected politician. He wasn't even a military man. But Roosevelt saw that this remarkable Dane had the skills needed to reform a puny war effort into one capable of winning the Second World War.

Do you see where I'm going with this?

Elon Musk is a modern day William S. Knudsen. Only even more accomplished in efficiency management, factory optimization, and first-order systems thinking. No, America isn't in a hot war with the Axis powers, but for the sake of the West, it damn well better be prepared for one in the future. Or better still, be so formidable that no other country or alliance would even think to start one. And this requires a strong, confident, and sound state with its affairs in order. If you look at the government budget alone, this is direly not so. The US was knocking on a two-trillion-dollar budget deficit in 2024! Adding to a towering debt that's now north of 36 trillion. A burden that's already consuming $881 billion in yearly interest payments. More than what's spent on the military or Medicare. Second only to Social Security on the list of line items. Clearly, this is not sustainable.

This is the context of DOGE. The program, led by Musk, that's been deputized by Trump to turn the ship around. History doesn't repeat, but it rhymes, and Musk is dropping beats that Knudsen would have surely been tapping his foot to. And just like Knudsen in his time, it's hard to think of any other American entrepreneur more qualified to tackle exactly this two-trillion dollar problem.

It is through The Musk Algorithm that SpaceX lowered the cost of sending a kilo of goods into lower orbit from the US by well over a magnitude. And now America's share of worldwide space transit has risen from less than 30% in 2010 to about 85%. Thanks to reusable rockets and chopstick-catching landing towers. Thanks to Musk.

Or to take a more earthly example with Twitter. Before Musk took over, Twitter had revenues of $5 billion and earned $682 million. After the takeover, X has managed to earn $1.25 billion on $2.7 billion in revenue. Mostly thanks to the fact that Musk cut 80% of the staff out of the operation, and savaged the cloud costs of running the service.

This is not what people expected at the time of the takeover! Not only did many commentators believe that Twitter was going to collapse from the drastic cuts in staff, they also thought that the financing for the deal would implode, chiefly as a result of advertisers withdrawing from the platform under intense media pressure. But that just didn't happen. Today, the debt used to take over Twitter and turn it into X is trading at 97 cents on the dollar. The business is twice as profitable as it was before, and arguably as influential as ever.
All with just a fifth of the staff required to run it. Whatever you think of Musk and his personal tweets, it's impossible to deny what an insane achievement of efficiency this has been! These are just two examples of Musk's incredible ability to defy the odds and deliver the most unbelievable efficiency gains known to modern business records. And we haven't even talked about taking Tesla from producing 35,000 cars in 2014 to making 1.7 million in 2024. Or turning xAI into a major force in AI by assembling a 100,000 H100 cluster at "superhuman" pace. Who wouldn't want such a capacity involved in finding the waste, sloth, and squander in the US budget? Well, his political enemies, of course! And I get it. Musk's magic is balanced with mania and even a dash of madness. This is usually the case with truly extraordinary humans. The taller they stand, the longer the shadow. Expecting Musk to do what he does and then also be a "normal, chill dude" is delusional. But even so, I think it's completely fair to be put off by his tendency to fire tweets from the hip, opine on world affairs during all hours of the day, and offer his support to fringe characters in politics, business, and technology. I'd be surprised if even the most ardent Musk super fans don't wince a little every now and then at some of the antics. And yet, I don't have any trouble weighing those antics against the contributions he's made to mankind, and finding an easy and overwhelming balance in favor of his positive achievements. Musk is exactly the kind of formidable player you want on your team when you're down two trillion to nothing, needing a Hail Mary pass for the destiny of America, and eager to see the West win the future. He's a modern-day Knudsen on steroids (or Ketamine?). Let him cook.
Last week, James Truitt asked a question on Mastodon:

James Truitt (he/him)
@linguistory@code4lib.social

#digipres folks happen to have a handy repo of small invalid bags for testing purposes? I'm trying to automate our ingest process, and want to make sure I'm accounting for as many broken expectations as possible.

Jan 31, 2025 at 07:49 PM

The "bags" he's referring to are BagIt bags. BagIt is an open format developed by the Library of Congress for packaging digital files. Bags include manifests and checksums that describe their contents, and they're often used by libraries and archives to organise files before transferring them to permanent storage.

Although I don't use BagIt any more, I spent a lot of time working with it when I was a software developer at Wellcome Collection. We used BagIt as the packaging format for files saved to our cloud storage service, and we built a microservice very similar to what James is describing.

The "bag verifier" would look for broken bags, and reject them before they were copied to long-term storage. I wrote a lot of bag verifier test cases to confirm that it would spot invalid or broken bags, and that it would give a useful error message when it did.

All of the code for Wellcome's storage service is shared on GitHub under an MIT license, including the bag verifier tests. They're wrapped in a Scala test framework that might not be the easiest thing to read, so I'm going to describe the test cases in a more human-friendly way.

Before diving into specific examples, it's worth remembering: context is king.

BagIt is described by RFC 8493, and you could create invalid bags by doing a line-by-line reading and deliberately ignoring every "MUST" or "SHOULD", but I wouldn't recommend this approach. You'd get a long list of test cases, but you'd be overwhelmed by examples, and you might miss specific requirements for your system.

The BagIt RFC is written for the most general case, but if you're actually building a storage service, you'll have more concrete requirements and context. It's helpful to look at that context, and how it affects the data you want to store. Who's creating the bags? How will they name files? Where are you going to store bags? How do bags fit into your wider systems? And so on.

Understanding your context will allow you to skip verification steps that you don't need, and to add verification steps that are important to you. I doubt any two systems implement the exact same set of checks, because every system has different context.

Here are examples of potential validation issues drawn from the BagIt specification and my real-world experience. You won't need to check for everything on this list, and this list isn't exhaustive - but it should help you think about bag validation in your own context.

The Bag Declaration bagit.txt

This file declares that this is a BagIt bag, and the version of BagIt you're using (RFC 8493 §2.1.1). It looks the same in almost every bag, for example:

BagIt-Version: 1.0
Tag-File-Character-Encoding: UTF-8

This tightly prescribed format means it can only be invalid in a few ways:

What if the bag doesn't have a bag declaration? It's a required element of every BagIt bag; it has to be there.

What if the bag declaration is the wrong format? It should contain exactly two lines: a version number and a character encoding, in that order.

What if the bag declaration has an unexpected version number? If you see a BagIt version that you've not seen before, the bag might have a different structure than what you expect.
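Those three checks translate almost directly into code. A rough sketch in shell - nothing like Wellcome's Scala verifier, just the shape of the checks:

#!/usr/bin/env bash
# check-declaration.sh <bag-directory>  (sketch only)
decl="$1/bagit.txt"

# The declaration must exist.
[ -f "$decl" ] || { echo "missing bag declaration: bagit.txt"; exit 1; }

# It must be exactly two lines: a version, then a character encoding.
[ "$(wc -l < "$decl")" -eq 2 ] || { echo "bag declaration is not exactly two lines"; exit 1; }
grep -q '^BagIt-Version: [0-9][0-9]*\.[0-9][0-9]*$' "$decl" \
  || { echo "missing or malformed BagIt-Version line"; exit 1; }
grep -q '^Tag-File-Character-Encoding: ' "$decl" \
  || { echo "missing Tag-File-Character-Encoding line"; exit 1; }

# Flag version numbers we have never tested against.
grep -q '^BagIt-Version: 1\.0$' "$decl" || echo "warning: unexpected BagIt version"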
The Payload Files and Payload Manifest

The payload files are the actual content you want to save and preserve. They get saved in the payload directory data/ (RFC 8493 §2.1.2), and there's a payload manifest manifest-<algorithm>.txt that lists them, along with their checksums (RFC 8493 §2.1.3).

Here's an example of a payload manifest with MD5 checksums:

37d0b74d5300cf839f706f70590194c3  data/waterfall.jpg

This tells us that the bag contains a single file data/waterfall.jpg, and it has the MD5 checksum 37d0…. These checksums can be used to verify that the files have transferred correctly, and haven't been corrupted in the process.

There are lots of ways a payload manifest could be invalid (a shell sketch of two of these checks appears at the end of this post):

What if the bag doesn't have a payload manifest? Every BagIt bag must have at least one Payload Manifest file.

What if the payload manifest is the wrong format? These files have a prescribed format - one file per line, with a checksum and file path.

What if the payload manifest refers to a file that isn't in the bag? Either one of the files in the bag has been deleted, or the manifest has an erroneous entry.

What if the bag has a file that isn't listed in the payload manifest? The manifest should be a complete listing of all the payload files in the bag. If the bag has a file which isn't in the payload manifest, either that file isn't meant to be there, or the manifest is missing an entry. Checking for unlisted files is how I spotted unwanted .DS_Store and Thumbs.db files.

What if the checksum in the payload manifest doesn't match the checksum of the file? Either the file has been corrupted, or the checksum is incorrect.

What if there are payload files outside the data/ directory? All the payload files should be stored in data/. Anything outside that is an error.

What if there are duplicate entries in the payload manifest? Every payload file must be listed exactly once in the manifest. This avoids ambiguity - suppose a file is listed twice, with two different checksums. Is the bag valid if one of those checksums is correct? Requiring unique entries avoids this sort of issue.

What if the payload directory is empty? This is perfectly acceptable in the BagIt RFC, but it may not be what you want. If you know that you will always be sending bags that contain files, you should flag empty payload directories as an error.

What if the payload manifest contains paths outside data/, or relative paths that try to escape the bag? (e.g. ../file.txt) Now we're into "malicious bag" territory - a bag uploaded by somebody who's trying to compromise your ingest pipeline. Any such bags should be treated with suspicion and rejected.

If you're concerned about malicious bags, you need a more thorough test suite to catch other shenanigans. We never went this far at Wellcome Collection, because we didn't ingest bags from arbitrary sources. The bags only came from internal systems, and our verification was mainly about spotting bugs in those systems, not defending against malicious actors.

A bag can contain multiple payload manifests - for example, it might contain both MD5 and SHA1 checksums. Every payload manifest must be valid for the overall bag to be valid.

Payload filenames

There are lots of gotchas around filenames and paths. It's a complicated problem, and I definitely don't understand all of it.

It's worth understanding the filename rules of any filesystem where you will be storing bags. For example, Azure Blob Storage has a number of rules around how you can name files, and Amazon S3 has different rules.
We stored files in both at Wellcome Collection, and so the storage service had to enforce the superset of these rules.

I've listed some edge cases of filenames you might want to consider, but it's not a complete list. There are lots of ways that unexpected filenames could cause you issues, but whether you care depends on the source of your bags. If you control the bags and you know you're not going to include any weird filenames, you can probably skip most of these.

We only checked for one of these conditions at Wellcome Collection, because we had a pre-ingest step that normalised filenames. It converted filenames to ASCII, and saved a mapping between original and normalised filename in the bag. However, the normalisation was only designed for one filesystem, and produced filenames with trailing dots that were still disallowed in Azure Blob.

What if a filename is too long? Some systems have a maximum path length, and an excessively deep directory structure or long filename could cause issues.

What if a filename contains special characters? Spaces, emoji, or special characters (\, :, *, etc.) can cause problems for some tools. You should also think about characters that need to be URL-encoded.

What if a filename has trailing spaces or dots? Some filesystems can't support filenames ending in a dot or a space. What happens if your bag contains such a file, and you try to save it to the filesystem? This caused us issues at Wellcome Collection. We initially stored bags just in Amazon S3, which is happy to take filenames with a trailing dot - then we added backups to Azure Blob, which doesn't. One of the bags we'd stored in Amazon S3 had a trailing dot in the filename, and caused us headaches when we tried to copy it to Azure.

What if a filename contains a mix of path separators? The payload manifest uses a forward slash (/) as a path separator. If you have a filename with an alternative path separator, it might behave differently on different systems. For example, consider the payload file a\b\c. This would be a single file on macOS or Linux, but it would be nested inside two folders on Windows.

What if the filenames are a mix of uppercase and lowercase characters? Some filesystems are case-sensitive, others aren't. This can cause issues when you move bags between systems. For example, suppose a bag contains two different files Macrodata.txt and macrodata.txt. When you save that bag on a case-insensitive filesystem, only one file will be saved.

What if the same filename appears twice with different Unicode normalisations? This is similar to filenames which only differ in upper/lowercase. They might be treated as two files on one filesystem, but collapsed into one file on another. The classic example is the word "café": this can be encoded as caf\xc3\xa9 (UTF-8 encoded é) or cafe\xcc\x81 (e + combining acute accent).

What if a filename contains a directory reference? A directory reference is /./ (current directory) or /../ (parent directory). It's used on both Unix and Windows-like systems, and it's another case of two filenames that look different but can resolve to the same path. For example: a/b, a/./b and a/subdir/../b all resolve to the same path under these rules. This can cause particular issues if you're moving between local filesystems and cloud storage. Local filesystems treat filenames as hierarchical paths, where cloud storage like Amazon S3 often treats them as opaque strings.
This can cause issues if you try to copy files from cloud storage to a local system - if you're not careful, you could lose files in the process.

The Tag Manifest tagmanifest-<algorithm>.txt

Similar to the payload manifest, the tag manifest lists the tag files and their checksums. A "tag file" is the BagIt term for any metadata file that isn't part of the payload (RFC 8493 §2.2.1).

Unlike the payload manifest, the tag manifest is optional. A bag without a tag manifest can still be a valid bag.

If the tag manifest is present, then many of the ways that a payload manifest can invalidate a bag - malformed contents, unreferenced files, or incorrect checksums - can also apply to tag manifests. There are some additional things to consider:

What if a tag manifest lists payload files? The tag manifest lists tag files; the payload manifest lists payload files in the data/ directory. A tag manifest that lists files in the data/ directory is incorrect.

What if the bag has a file that isn't listed in either manifest? Every file in a bag (except the tag manifests) should be listed in either a payload or a tag manifest. A file that appears in neither could mean an unexpected file, or a missing manifest entry.

Although the tag manifest is optional in the BagIt spec, at Wellcome Collection we made it a required file. Every bag had to have at least one tag manifest file, or our storage service would refuse to ingest it.

The Bag Metadata bag-info.txt

This is an optional metadata file that describes the bag and its contents (RFC 8493 §2.2.2). It's a list of metadata elements, as simple label-value pairs, one per line.

Here's an example of a bag metadata file:

Source-Organization: Lumon Industries
Organization-Address: 100 Main Street, Kier, PE, 07043
Contact-Name: Harmony Cobel

Unlike the manifest files, this is primarily intended for human readers. You can put arbitrary metadata in here, so you can add fields specific to your organisation. Although this file is more flexible, there are still ways it can be invalid:

What if the bag metadata is the wrong format? It should have one metadata entry per line, with a label-value pair that's separated by a colon.

What if the Payload-Oxum is incorrect? The Payload-Oxum contains some concise statistics about the payload files: their total size in bytes, and how many there are. For example:

Payload-Oxum: 517114.42

This tells us that the bag contains 42 payload files, and their total size is 517,114 bytes. If these stats don't match the rest of the bag, something is wrong.

What if non-repeatable metadata element names are repeated? The BagIt RFC defines a small number of reserved metadata element names which have a standard meaning. Although most metadata element names can be repeated, there are some which can't, because they can only have one value. In particular: Bagging-Date, Bag-Size, Payload-Oxum and Bag-Group-Identifier.

Although the bag metadata file is optional in a general BagIt bag, you may want to add your own rules based on how you use it. For example, at Wellcome Collection, we required all bags to have an External-Identifier value that matched a specific schema. This allowed us to link bags to records in other databases, and our bag verifier would reject bags that didn't include it.

The Fetch File fetch.txt

This is an optional element that allows you to reference files stored elsewhere (RFC 8493 §2.2.3). It tells the person reading the bag that a file hasn't been included in this copy of the bag; they have to go and fetch it from somewhere else.
The file is still recorded in the payload manifest (with a checksum you can verify), but you don't have a complete bag until you've downloaded all the files.

Here's an example of a fetch.txt:

https://topekastar.com/~daria/article.txt  1841  data/article.txt

This tells us that data/article.txt isn't included in this copy of the bag, but we can download it from https://topekastar.com/~daria/article.txt. (The number 1841 is the size of the file in bytes. It's optional.)

Using fetch.txt allows you to send a bag with "holes", which saves disk space and network bandwidth, but at a cost - we're now relying on the remote location to remain available. From a preservation standpoint, this is scary! If topekastar.com goes away, this bag will be broken. I know some people don't use fetch.txt for precisely this reason.

If you do use fetch.txt, here are some things to consider:

What if the fetch file is the wrong format? There's a prescribed format - one file per line, with a URL, optional file size, and file path.

What if the fetch file lists a file which isn't in the payload manifest? The fetch.txt should only tell us that a file is stored elsewhere, and shouldn't be introducing otherwise unreferenced files. If a file appears in fetch.txt but not the payload manifest, then we can't verify the remote file because we don't have a checksum for it. There's either an erroneous fetch file entry or a missing manifest entry.

What if the fetch file points to a file at an unusable URL? The URL is only useful if the person who receives the bag can use it to download the file. If they can't, the bag might technically be valid, but it's functionally broken. For example, you might reject URLs that don't start with http:// or https://.

What if the fetch file points to a file with the wrong length? The fetch.txt can optionally specify the size of a file, so you know how much storage you need to download it. If you download the file, the actual size should match the stated size.

What if the fetch file points to a file that's already included in the bag? Now you have two ways to get this file: you can read it from the bag, or from the remote URL. If a file is listed in both fetch.txt and included in the bag, either that file isn't meant to be in the bag, or the fetch file has an erroneous entry.

We used fetch files at Wellcome Collection to implement versioning, and we added extra rules about what remote URLs were allowed. In particular, we didn't allow fetching a file from just anywhere - you could fetch from our S3 buckets, but not the general Internet. The bag verifier would reject a fetch file entry that pointed elsewhere.

These examples illustrate just how many ways a BagIt bag can be invalid, from simple structural issues to complex edge cases.

Remember: the key is to understand your specific needs and requirements. By considering your context - who creates your bags, where they'll be stored, and how they fit into your wider systems - you can build a validation process to catch the issues that matter to you, while avoiding unnecessary complexity.

I can give you my ideas, but only you can build your system.

[If the formatting of this post looks odd in your feed reader, visit the original article]
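And, as promised in the payload-manifest section, here is the shape of two of those checks in shell - a sketch only, nothing like the Scala verifier; it assumes an MD5 manifest, GNU coreutils, and no whitespace in payload filenames (which, as the filename section shows, is its own can of worms):

#!/usr/bin/env bash
# check-payload.sh <bag-directory>  (sketch only)
cd "$1" || exit 1

# 1. Every checksum in the manifest matches the file on disk.
#    md5sum exits non-zero on a mismatch or a missing file.
md5sum --check --quiet manifest-md5.txt || exit 1

# 2. Every file under data/ is listed in the payload manifest.
unlisted=$(comm -13 <(awk '{ print $2 }' manifest-md5.txt | sort) <(find data -type f | sort))
if [ -n "$unlisted" ]; then
  echo "files in data/ missing from the manifest:"
  echo "$unlisted"
  exit 1
fi

echo "payload manifest looks OK"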
We bought sixty-one servers for the launch of Basecamp 3 back in 2015. Dell R430s and R630s, packing thousands of cores and terabytes of RAM. Enough to fill all the app, job, cache, and database duties we needed. The entire outlay for this fleet was about half a million dollars, and it's only now, almost a decade later, that we're finally retiring the bulk of them for a full hardware refresh. What a bargain!

That's over 3,500 days of service from this fleet, at a fully amortized cost of just $142/day. For everything needed to run Basecamp. A software service that has grossed hundreds of millions of dollars in that decade.

We've of course had other expenses beyond hardware from operating Basecamp over the past decade. The ops team, the bandwidth, the power, and the cabinet rental across both our data centers. But none the less, owning our own iron has been a fantastically profitable proposition. Millions of dollars saved over renting in the cloud.

And we aren't even done deriving value from this venerable fleet! The database servers, Dell R630s w/ Xeon E5-2699 CPUs and 768G of RAM, are getting handed down to some of our heritage apps. They will keep on trucking until they give up the ghost.

When we did the public accounting for our cloud exit, it was based on five years of useful life from the hardware. But as this example shows, that's pretty conservative. Most servers can easily power your applications much longer than that.

Owning your own servers has easily been one of our most effective cost advantages. Together with running a lean team. And managing our costs remains key to reaping the profitable fruit from the business. The dollar you keep at the end of the year is just as real whether you earn it or save it.

So you just might want to run those cloud-exit numbers once more with a longer server lifetime value. It might just tip the equation, and motivate you to become a server owner rather than a renter.
At some point in a startup's lifecycle, they decide that they need to be ready to go public in 18 months, and a flurry of IPO-readiness activity kicks off. This strategy focuses on a company working on IPO readiness, which has identified a gap in their internal controls for managing access to their users' data.

It's a company that wants to meaningfully improve their security posture around user data access, but which has had a number of failed security initiatives over the years. Most of those initiatives have failed because they significantly degraded internal workflows for teams like customer support, such that the initial progress was reverted and subverted over time, to little long-term effect.

This strategy represents the Chief Information Security Officer's (CISO) attempt to acknowledge and overcome those historical challenges while meeting their IPO readiness obligations, and - most importantly - doing right by their users.

This is an exploratory, draft chapter for a book on engineering strategy that I'm brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore, then Diagnose and so on. Relative to the default structure, this document has been refactored in two ways to improve readability: first, Operation has been folded into Policy; second, Refine has been embedded in Diagnose. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operations

Our new policies, and the mechanisms to operate them, are:

Controls for accessing user data must be significantly stronger prior to our IPO. Senior leadership, legal, compliance and security have decided that we are not comfortable accepting the status quo of our user data access controls as a public company, and must meaningfully improve the quality of resource-level access controls as part of our pre-IPO readiness efforts. Our Security team is accountable for the exact mechanisms and approach to addressing this risk.

We will continue to prioritize a hybrid solution to resource-access controls. This has been our approach thus far, and the fastest available option.

Directly expose the log of our resource-level accesses to our users. We will build towards a user-accessible log of all company accesses of user data, and ensure we are comfortable explaining each and every access. In addition, it means that each rationale for access must be comprehensible and reasonable from a user perspective. This is important because it aligns our approach with our users' perspectives. They will be able to evaluate how we access their data, and make decisions about continuing to use our product based on whether they agree with our use.

Good security discussions don't frame decisions as a compromise between security and usability. We will pursue multi-dimensional tradeoffs to simultaneously improve security and efficiency. Whenever we frame a discussion on trading off between security and utility, it's a sign that we are having the wrong discussion, and that we should rethink our approach.

We will prioritize mechanisms that can both automatically authorize and automatically document the rationale for accesses to customer data.
The most obvious example of this is automatically granting access to a customer support agent for users who have an open support ticket assigned to that agent. (And removing that access when that ticket is reassigned or resolved.)

Measure progress on percentage of customer data access requests justified by a user-comprehensible, automated rationale. This will anchor our approach on simultaneously improving the security of user data and the usability of our colleagues' internal tools. If we only expand requirements for accessing customer data, we won't view this as progress because it's not automated (and consequently is likely to encourage workarounds as teams try to solve problems quickly). Similarly, if we only improve usability, charts won't represent this as progress, because we won't have increased the number of supported requests. As part of this effort, we will create a private channel where the security and compliance team has visibility into all manual rationales for user-data access, and will directly message the manager of any individual who relies on a manual justification for accessing user data.

Expire unused roles to move towards principle of least privilege. Today we have a number of roles granted in our role-based access control (RBAC) system to users who do not use the granted permissions. To address that issue, we will automatically remove roles from colleagues after 90 days of not using the role's permissions. Engineers in an active on-call rotation are the exception to this automated permission pruning.

Weekly reviews until we see progress; monthly access reviews in perpetuity. Starting now, there will be a weekly sync between the security engineering team, teams working on customer data access initiatives, and the CISO. This meeting will focus on rapid iteration and problem solving. This is explicitly a forum for ongoing strategy testing, with the CISO serving as the meeting's sponsor, and their Principal Security Engineer serving as the meeting's guide. It will continue until we have clarity on the path to 100% coverage of user-comprehensible, automated rationales for access to customer data. Separately, we are also starting a monthly review of sampled accesses to customer data to ensure the proper usage and function of the rationale-creation mechanisms we build. This meeting's goal is to review access rationales for quality and appropriateness, both by reviewing sampled rationales in the short-term, and identifying more automated mechanisms for identifying high-risk accesses to review in the future.

Exceptions must be granted in writing by the CISO. While our overarching Engineering Strategy states that we follow an advisory architecture process as described in Facilitating Software Architecture, the customer data access policy is an exception and must be explicitly approved, with documentation, by the CISO. Start that process in the #ciso channel.

Diagnose

We have a strong baseline of role-based access controls (RBAC) and audit logging. However, we have limited mechanisms for ensuring assigned roles follow the principle of least privilege. This is particularly true in cases where individuals change teams or roles over the course of their tenure at the company: some individuals have collected numerous unused roles over five-plus years at the company. Similarly, our audit logs are durable and pervasive, but we have limited proactive mechanisms for identifying anomalous usage.
Instead, they are typically used to understand what occurred after an incident is identified by other mechanisms.

For resource-level access controls, we rely on a hybrid approach between a 3rd-party platform for incoming user requests, and approval mechanisms within our own product. Providing a rationale for access across these two systems requires manual work, and those rationales are later manually reviewed for appropriateness in a batch fashion.

There are two major ongoing problems with our current approach to resource-level access controls. First, the teams making requests view them as a burdensome obligation without much benefit to them or on behalf of the user. Second, because the rationale review steps are manual, there is no verifiable evidence of the quality of the review.

We've found no evidence of misuse of user data. When colleagues do access user data, we have uniformly and consistently found that there is a clear and reasonable rationale for that access. For example, a ticket in the user support system where the user has raised an issue. However, the quality of our documented rationales is consistently low because it depends on busy people manually copying over significant information many times a day. Because the rationales are of low quality, the verification of these rationales is somewhat arbitrary. From a literal compliance perspective, we do provide rationales and auditing of these rationales, but it's unclear if the majority of these audits increase the security of our users' data.

Historically, we've made significant security investments that caused temporary spikes in our security posture. However, looking at those initiatives a year later, in many cases we see a pattern of increased scrutiny, followed by a gradual repeal or avoidance of the new mechanisms. We have found that most of them involved increased friction for essential work performed by other internal teams. In the natural order of performing work, those teams would subtly subvert the improvements because it interfered with their immediate goals (e.g. supporting customer requests).

As such, we have high conviction from our track record that our historical approach can create optical wins internally. We have limited conviction that it can create long-term improvements outside of significant, unlikely internal changes (e.g. colleagues are markedly less busy a year from now than they are today). It seems likely we need a new approach to meaningfully shift our stance on these kinds of problems.

Explore

Our experience is that best practices around managing internal access to user data are widely available through our networks, and otherwise hard to find. The exact rationale for this is hard to determine, but it seems possible that it's a topic that folks are generally uncomfortable discussing in public on account of potential future liability and compliance issues.

In our exploration, we found two standardized dimensions (role-based access controls, audit logs), and one highly divergent dimension (resource-specific access controls):

Role-based access controls (RBAC) are a highly standardized approach at this point. The core premise is that users are mapped to one or more roles, and each role is granted a certain set of permissions. For example, a role representing the customer support agent might be granted permission to deactivate an account, whereas a role representing the sales engineer might be able to configure a new account.

Audit logs are similarly standardized.
All access and mutation of resources should be tied in a durable log to the human who performed the action. These logs should be accumulated in a centralized, queryable solution. One of the core challenges is determining how to utilize these logs proactively to detect issues rather than reactively when an issue has already been flagged.

Resource-level access controls are significantly less standardized than RBAC or audit logs. We found three distinct patterns adopted by companies, with little consistency across companies on which is adopted. Those three patterns for resource-level access control were:

3rd-party enrichment, where access to resources is managed in a 3rd-party system such as Zendesk. This requires enriching objects within those systems with data and metadata from the product(s) where those objects live. It also requires implementing actions on the platform, such as archiving or configuration, allowing them to live entirely in that platform's permission structure. The downside of this approach is tight coupling with the platform vendor, any limitations inherent to that platform, and the overhead of maintaining engineering teams familiar with both your internal technology stack and the platform vendor's technology stack.

1st-party tool implementation, where all activity, including creation and management of user issues, is managed within the core product itself. This pattern is most common in earlier stage companies or companies whose customer support leadership "grew up" within the organization without much exposure to the approach taken by peer companies. The advantage of this approach is that there is a single, tightly integrated and infinitely extensible platform for managing interactions. The downside is that you have to build and maintain all of that work internally rather than pushing it to a vendor that ought to be able to invest more heavily into their tooling.

Hybrid solutions, where a 3rd-party platform is used for most actions, and is further used to permit resource-level access within the 1st-party system. For example, you might be able to access a user's data only while there is an open ticket created by that user, and assigned to you, in the 3rd-party platform. The advantage of this approach is that it allows supporting complex workflows that don't fit within the platform's limitations, and allows you to avoid complex coupling between your product and the vendor platform.

Generally, our experience is that all companies implement RBAC, audit logs, and one of the resource-level access control mechanisms. Most companies pursue either 3rd-party enrichment with a sizable, long-standing team owning the platform implementation, or rely on a hybrid solution where they are able to avoid a long-standing dedicated team by lumping that work into existing teams.