More from Tyler Cipriani’s blog
Any code of your own that you haven’t looked at for six or more months might as well have been written by someone else. – Eagleson’s Law

After scouring git history, I found the correct config file, but someone removed it. Their full commit message read: "Remove config. Don't bring it back."

Very. Helpful.

But I get it; it’s hard to care about commit messages when you’re making a quick change. Git commit templates can help. Commit templates provide a scaffold for your commit messages, reminding you to answer questions like:

- What problem are you solving?
- Why is this the solution?
- What alternatives did you consider?
- Where can I read more?

What is a git commit template?

When you type git commit, git pops open your text editor1. Git can pre-fill your editor with a commit template—a form that reminds you of everything it’s easy to forget when writing a commit.

Creating a commit template is simple:

1. Create a plaintext file – mine lives at ~/.config/git/message.txt
2. Tell git to use it:

git config --global \
    commit.template '~/.config/git/message.txt'

My template packs everything I know about writing a commit.

Project-specific templates

Large projects, such as the Linux kernel, git, and MediaWiki, have their own commit guidelines. Git templates can remind you about these per-project requirements if you add a commit template to a project’s .git/config file.

Another way to do this is git’s includeIf configuration setting. includeIf lets you override git config settings when you’re working under directories you define. For example, all my Wikimedia work lives in ~/Projects/Wikimedia, and at the bottom of my ~/.config/git/config I have:

[includeIf "gitdir:~/Projects/Wikimedia/**"]
    path = ~/.config/git/config.wikimedia

In config.wikimedia, I point to my Wikimedia-specific commit template (along with other necessary git settings: my user.email, core.hooksPath, and a pushInsteadOf url to push to ssh even when I clone via https). (A sketch of what that file might contain appears at the end of this section.)

Forge-specific templates

Personal git commit templates lead to better commits, which make for a better history. The forge-specific pull-request templates are a band-aid, the cheap kind that falls off in the shower. There’s no incentive for GitHub to make git history better: the worse your commit history, the more you rely on GitHub.

Still, all the major pull-request-style forges let you foist a pull-request template on your contributors. As a contributor, I dislike filling those out—they add unnecessary friction.

Commit message contents

Your commit template lets you do the hard thinking upfront. Then, when you make a commit, you simply follow the template. My template asks questions I answer with my commit message:

72ch. wide --------------------------------------------------------
BODY
# - Why should this change be made?
# - What problem are you solving?
# - Why this solution?
# - What's wrong with the current code?
# - Are there other ways to do it?
# - How can the reviewer confirm it works?
# ---------------------------------------------------------------- /BODY

But other clever folks cooked up conventions you could incorporate:

- Conventional commits – how do your commits relate to semantic versioning? This makes life easier for SREs and downstream users.
- Problem/Solution format – first pioneered by ZeroMQ2, this format anticipates the questions of future developers and reviewers.
- Gitmoji – developed for the GitHub crowd, this format defines an emoji shorthand that makes it easy to spot changes of a particular type.
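Here is that sketch: a rough illustration of what a config.wikimedia include file might contain. The paths and URLs below are illustrative placeholders, not my actual settings:

[user]
    email = you@example.org
[commit]
    template = ~/.config/git/message.wikimedia.txt
[core]
    hooksPath = ~/.config/git/hooks-wikimedia
[url "ssh://gerrit.wikimedia.org:29418/"]
    pushInsteadOf = https://gerrit.wikimedia.org/r/

With an include like this, any repository under ~/Projects/Wikimedia picks up the Wikimedia commit template, identity, and hooks automatically, and pushes go over ssh even when the clone URL is https.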
Commit message formatting

How you format text affects how people read it. My template also deals with text formatting rules3:

- Subject – 50 characters or less, capitalized, no end punctuation.
- Body – Wrap at 72 characters with a blank line separating it from the subject.
- Trailers – Standard formats with a blank line separating them from the body.

People will read your commit in different contexts: git log, git shortlog, and git rebase. But git’s pager has no line wrapping by default. I hard wrap at 72 characters because that makes text easier to read in wide terminals.4

Finally, my template addresses trailers, reminding me about standard trailers supported in the projects I’m working on. Git can interpret trailers, which can be useful later. For example, if I wanted a tab-separated list of commits and their related tasks, I could find that with git log:

$ TAB=%x09
$ BUG_TRAILER='%(trailers:key=Bug,valueonly=true,separator=%x2C )'
$ SHORT_HASH=%h
$ SUBJ=%s
$ FORMAT="${SHORT_HASH}${TAB}${BUG_TRAILER}${TAB}${SUBJ}"
$ git log --topo-order --no-merges \
    --format="$FORMAT"
d2b09deb12f T359762 Rewrite Kurdish (ku) Latin to Arabic converter
28123a6a262 T332865 tests: Remove non-static fallback in HookRunnerTestBase
4e919a307a4 T328919 tests: Remove unused argument from data provider in PageUpdaterTest
bedd0f685f9         objectcache: Improve `RESTBagOStuff::handleError()`
2182a0c4490 T393219 tests: Remove two data provider in RestStructureTest

Git commit templates free your brain from remembering what you should write, allowing you to focus on the story you should tell. Your future self will thank you for the effort.

1. Starting with core.editor in your git config, then $VISUAL or $EDITOR in your shell, finally falling back to vi.↩︎
2. I think…↩︎
3. All cribbed from Tim Pope.↩︎
4. Another story I’ve heard: a standard terminal allows 80 characters per line. git log indents commit messages with 4 spaces. A 72-character-per-line commit centers text on an 80-character-per-line terminal. To me, readability in modern terminals is a better reason to wrap than kowtowing to antiquated terminals.↩︎
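Putting the template, the formatting rules, and the trailers together, a finished commit might look something like the following. Everything here is invented for illustration: the change, the bug number, and the trailers are hypothetical, and the exact trailer keys depend on the project.

config: Restore site logo cache settings

The logo cache config was removed in an earlier cleanup, which forces
every page view to regenerate the logo thumbnail. Restoring the setting
cuts page-render time and removes load from the thumbnailer.

Alternatives considered: caching at the CDN layer instead, but that
would leave logged-in users uncached.

To confirm it works: load any article twice and check the thumbnailer
logs for a cache hit on the second request.

Bug: T123456
Signed-off-by: A. Contributor <contributor@example.org>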
[The] Linux kernel uses GPLv2, and if you distribute GPLv2 code, you have to provide a copy of the source (and modifications) once someone asks for it. And now I’m asking nicely for you to do so 🙂 – Joga, bbs.onyx-international.com

Boox in split screen, typewriter mode

In January, I bought a Boox Go 10.3—a 10.3-inch, 300-ppi, e-ink Android tablet. After two months, I use the Boox daily—it’s replaced my planner, notebook, countless PDF print-offs, and the good parts of my phone.

But Boox’s parent company, Onyx, is sketchy. I’m conflicted. The Boox Go is a beautiful, capable tablet that I use every day, but I recommend avoiding it as long as Onyx continues to disregard the rights of its users.

How I’m using my Boox

My e-ink floor desk

Each morning, I plop down in front of my MagicHold laptop stand and journal on my Boox with Obsidian. I use Syncthing to back up my planner and sync my Zotero library between my Boox and laptop. In the evening, I review my PDF planner and plot for tomorrow.

I use these apps:

- Obsidian – a markdown editor that syncs between all my devices with no fuss for $8/mo.
- Syncthing – I love Syncthing—it’s an encrypted, continuous file sync-er without a centralized server.
- Meditation apps1 – Guided meditation away from the blue light glow of my phone or computer is better.

Before buying the Boox, I considered a reMarkable. The reMarkable Paper Pro has a beautiful color screen with a frontlight, a nice pen, and a “type folio,” plus it’s certified by the Calm Tech Institute. But the reMarkable is a distraction-free e-ink tablet. Meanwhile, I need distraction-lite.

What I like

- Calm(ish) technology – The Boox is an intentional device. Browsing the internet, reading emails, and watching videos is hard, but that’s good.
- Apps – Google Play works out of the box. I can install F-Droid and change my launcher without difficulty.
- Split screen – The built-in launcher has a split screen feature. I use it to open a PDF side-by-side with a notes doc.
- Reading – The screen is a 300ppi Carta 1200, making text crisp and clear.

What I dislike

I filmed myself typing at 240fps; each frame is 4.17ms. Boox’s typing latency is between 150ms and 275ms at the fastest refresh rate inside Obsidian.

- Typing – Typing latency is noticeable. At Boox’s highest refresh rate, after hitting a key, text takes between 150ms and 275ms to appear. I can still type, though it’s distracting at times.

The horror of the default pen

Accessories

- Pen – The default pen looks like a child’s whiteboard marker and feels cheap. I replaced it with the Kindle Scribe Premium pen, and the writing experience is vastly improved.
- Cover – It’s impossible to find a nice cover. I’m using a $15 cover that I’m encasing in stickers.
- Tool switching – Swapping between apps is slow and clunky. I blame Android and the current limitations of e-ink more than Boox.
- No frontlight – The Boox’s lack of a frontlight prevents me from reading more with it. I knew this when I bought my Boox, but devices with frontlights seem to make other compromises.

Onyx

The Chinese company behind Boox, Onyx International, Inc., runs the servers where Boox shuttles tracking information. I block this traffic with Pi-Hole2.

pihole-ing whatever telemetry Boox collects

I inspected this traffic via mitmproxy—most traffic was benign, though I never opted into sending any telemetry (nor am I logged in to a Boox account). But it’s also an Android device, so it’s feeding telemetry into Google’s gaping maw, too.
Worse, Onyx is flouting the terms of the GNU General Public License, declining to release Linux kernel modifications to users. This is anathema to me—GPL violations are tantamount to theft. Onyx’s disregard for user rights makes me regret buying the Boox.

Verdict

I’ll continue to use the Boox and feel bad about it. I hope my digging in this post will help the next person.

Unfortunately, the e-ink tablet market is too niche to support the kind of solarpunk future I’d always imagined. But there’s an opportunity for an open, Linux-based tablet to dominate e-ink. Linux is playing catch-up on phones with PostmarketOS. Meanwhile, the best that e-ink tablets have to offer is old, unupdateable versions of Android, like the OS on the Boox.

In the future, I’d love to pay a license- and privacy-respecting company for beautiful, calm technology and recommend their product to everyone. But today is not the future.

1. I go back and forth between “Waking Up” and “Calm”.↩︎
2. Using github.com/JordanEJ/Onyx-Boox-Blocklist.↩︎
Spending for October, generated by piping hledger → R

Over the past six months, I’ve tracked my money with hledger—a plain text double-entry accounting system written in Haskell. It’s been surprisingly painless.

My previous attempts to pick up real accounting tools floundered. Hosted tools are privacy nightmares, and my stint with GnuCash didn’t last. But after stumbling on Dmitry Astapov’s “Full-fledged hledger” wiki1, it clicked—eventually consistent accounting. Instead of modeling your money all at once, take it one hacking session at a time.

It should be easy to work towards eventual consistency. […] I should be able to [add financial records] bit by little bit, leaving things half-done, and picking them up later with little (mental) effort. – Dmitry Astapov, Full-Fledged Hledger

Principles of my system

I’ve cobbled together a system based on these principles:

- Avoid manual entry – Avoid typing in each transaction. Instead, rely on CSVs from the bank.
- CSVs as truth – CSVs are the only things that matter. Everything else can be blown away and rebuilt anytime.
- Embrace version control – Keep everything under version control in Git for easy comparison and safe experimentation.

Learn hledger in five minutes

hledger concepts are heady, but its use is simple. I divide the core concepts into two categories.

Stuff hledger cares about:

- Transactions – how hledger moves money between accounts.
- Journal files – files full of transactions.

Stuff I care about:

- Rules files – how I set up accounts, import CSVs, and move money between accounts.
- Reports – help me see where my money is going and if I messed up my rules.

Transactions move money between accounts:

2024-01-01 Payday
    income:work         $-100.00
    assets:checking      $100.00

This transaction shows that on Jan 1, 2024, money moved from income:work into assets:checking—Payday.

The sum of each transaction should be $0. Money comes from somewhere, and the same amount goes somewhere else—double-entry accounting. This is powerful technology—it makes mistakes impossible to ignore.

Journal files are text files containing one or more transactions:

2024-01-01 Payday
    income:work         $-100.00
    assets:checking      $100.00

2024-01-02 QUANSHENG UVK5
    assets:checking      $-29.34
    expenses:fun:radio    $29.34

Rules files transform CSVs into journal files via regex matching.
Here’s a CSV from my bank:

Transaction Date,Description,Category,Type,Amount,Memo
09/01/2024,DEPOSIT Paycheck,Payment,Payment,1000.00,
09/04/2024,PizzaPals Pizza,Food & Drink,Sale,-42.31,
09/03/2024,Amazon.com*XXXXXXXXY,Shopping,Sale,-35.56,
09/03/2024,OBSIDIAN.MD,Shopping,Sale,-10.00,
09/02/2024,Amazon web services,Personal,Sale,-17.89,

And here’s a checking.rules to transform that CSV into a journal file so I can use it with hledger:

# checking.rules
# --------------
# Map CSV fields → hledger fields[0]
fields date,description,category,type,amount,memo,_

# `account1`: the account for the whole CSV.[1]
account1 assets:checking
account2 expenses:unknown

skip 1
date-format %m/%d/%Y
currency $

if %type Payment
    account2 income:unknown

if %category Food & Drink
    account2 expenses:food:dining

# [0]: <https://hledger.org/hledger.html#field-names>
# [1]: <https://hledger.org/hledger.html#account-field>

With these two files (checking.rules and 2024-09_checking.csv), I can make the CSV into a journal:

$ > 2024-09_checking.journal \
    hledger print \
    --rules-file checking.rules \
    -f 2024-09_checking.csv
$ head 2024-09_checking.journal
2024-09-01 DEPOSIT Paycheck
    assets:checking        $1000.00
    income:unknown        $-1000.00

2024-09-02 Amazon web services
    assets:checking         $-17.89
    expenses:unknown         $17.89

Reports are interesting ways to view transactions between accounts. There are registers, balance sheets, and income statements:

$ hledger incomestatement \
    --depth=2 \
    --file=2024-09_checking.journal
Revenues:
            $1000.00  income:unknown
-----------------------
            $1000.00
Expenses:
              $42.31  expenses:food
              $63.45  expenses:unknown
-----------------------
             $105.76
-----------------------
Net: $894.24

At the beginning of September, I spent $105.76 and made $1000, leaving me with $894.24. But a good chunk is going to the default expense account, expenses:unknown. I can use hledger aregister to see what those transactions are:

$ hledger areg expenses:unknown \
    --file=2024-09_checking.journal \
    -O csv | \
    csvcut -c description,change | \
    csvlook
| description          | change |
| -------------------- | ------ |
| OBSIDIAN.MD          |  10.00 |
| Amazon web services  |  17.89 |
| Amazon.com*XXXXXXXXY |  35.56 |

Then, I can add some more rules to my checking.rules:

if OBSIDIAN.MD
    account2 expenses:personal:subscriptions

if Amazon web services
    account2 expenses:personal:web:hosting

if Amazon.com
    account2 expenses:personal:shopping:amazon

Now, I can reprocess my data to get a better picture of my spending:

$ > 2024-09_checking.journal \
    hledger print \
    --rules-file checking.rules \
    -f 2024-09_checking.csv
$ hledger bal expenses \
    --depth=3 \
    --percent \
    -f 2024-09_checking.journal
           40.0 %  expenses:food:dining
           33.6 %  expenses:personal:shopping
            9.5 %  expenses:personal:subscriptions
           16.9 %  expenses:personal:web
--------------------
          100.0 %

For the Amazon.com purchase, I lumped it into the expenses:personal:shopping account. But I could dig deeper—download my order history from Amazon and categorize that spending. This is the power of working bit-by-bit—the data guides you to the next, deeper rabbit hole.

Goals and non-goals

Why am I doing this? For years, I maintained a monthly spreadsheet of account balances. I had a balance sheet. But I still had questions.

Spending over six months, generated by piping hledger → gnuplot

Before diving into accounting software, these were my goals:
- Granular understanding of my spending – The big one. This is where my monthly spreadsheet fell short. I knew I had money in the bank—I kept my monthly balance sheet. I budgeted up-front the percentage of my income I was saving. But I had no idea where my other money was going.
- Data privacy – I’m unwilling to hand the keys to my accounts to YNAB or Mint.
- Increased value over time – The more time I put in, the more value I want to get out—this is what you get from professional tools built for nerds. While I wished for low-effort setup, I wanted the tool to be able to grow to more uses over time.

Non-goals—these are the parts I never cared about:

- Investment tracking – For now, I left this out of scope. Between monthly balances in my spreadsheet and online investing tools’ ability to drill down, I was fine.2
- Taxes – Folks smarter than me help me understand my yearly taxes.3
- Shared system – I may want to share reports from this system, but no one will have to work in it except me.
- Cash – Cash transactions are unimportant to me. I withdraw money from the ATM sometimes. It evaporates.

hledger can track all these things. My setup is flexible enough to support them someday. But that’s unimportant to me right now.

Monthly maintenance

I spend about an hour a month checking in on my money, which frees me to spend time making fancy charts—an activity I perversely enjoy.

Income vs. Expense, generated by piping hledger → gnuplot

Here’s my setup:

$ tree ~/Documents/ledger
.
├── export
│   ├── 2024-balance-sheet.txt
│   └── 2024-income-statement.txt
├── import
│   ├── in
│   │   ├── amazon
│   │   │   └── order-history.csv
│   │   ├── credit
│   │   │   ├── 2024-01-01_2024-02-01.csv
│   │   │   ├── ...
│   │   │   └── 2024-10-01_2024-11-01.csv
│   │   └── debit
│   │       ├── 2024-01-01_2024-02-01.csv
│   │       ├── ...
│   │       └── 2024-10-01_2024-11-01.csv
│   └── journal
│       ├── amazon
│       │   └── order-history.journal
│       ├── credit
│       │   ├── 2024-01-01_2024-02-01.journal
│       │   ├── ...
│       │   └── 2024-10-01_2024-11-01.journal
│       └── debit
│           ├── 2024-01-01_2024-02-01.journal
│           ├── ...
│           └── 2024-10-01_2024-11-01.journal
├── rules
│   ├── amazon
│   │   └── journal.rules
│   ├── credit
│   │   └── journal.rules
│   ├── debit
│   │   └── journal.rules
│   └── common.rules
├── 2024.journal
├── Makefile
└── README

Process:

1. Import – download a CSV for the month from each account and plop it into import/in/<account>/<dates>.csv
2. Make – run make
3. Squint – Look at git diff; if it looks good, git add . && git commit -m "💸"; otherwise, review hledger areg to see details.

The Makefile generates everything under import/journal:

- journal files from my CSVs, using their corresponding rules
- reports in the export folder

I include all the journal files in the 2024.journal with the line:

include ./import/journal/*/*.journal

Here’s the Makefile:

SHELL := /bin/bash
RAW_CSV = $(wildcard import/in/**/*.csv)
JOURNALS = $(foreach file,$(RAW_CSV),$(subst /in/,/journal/,$(patsubst %.csv,%.journal,$(file))))

.PHONY: all
all: $(JOURNALS)
	hledger is -f 2024.journal > export/2024-income-statement.txt
	hledger bs -f 2024.journal > export/2024-balance-sheet.txt

.PHONY: clean
clean:
	rm -rf import/journal/**/*.journal

import/journal/%.journal: import/in/%.csv
	@echo "Processing csv $< to $@"
	@echo "---"
	@mkdir -p $(shell dirname $@)
	@hledger print --rules-file rules/$(shell basename $$(dirname $<))/journal.rules -f "$<" > "$@"

If I find anything amiss (e.g., if my balances are different than what the bank tells me), I look at hledger areg. I may tweak my rules or my CSVs, run make clean && make, and try again.

Plain text accounting, made simple. And if I ever want to dig deeper, hledger’s docs have more to teach. But for now, the balance of effort vs. reward is perfect.
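As for those charts: the chart-making step is just more piping. Here is a hedged sketch of an hledger → gnuplot pipeline. The hledger and gnuplot options are standard ones, but the query, file names, and column choices are illustrative and may need adjusting for your own journal:

$ hledger balance expenses --depth 2 --no-total -O csv -f 2024.journal \
    | sed 's/\$//g' > /tmp/expenses.csv   # strip "$" so gnuplot can parse the numbers
$ cat > /tmp/expenses.gp <<'EOF'
set datafile separator ","
set key autotitle columnheader
set style data histograms
set style fill solid
# Plot the balance column, labeling each bar with its account name.
plot "/tmp/expenses.csv" using 2:xtic(1)
EOF
$ gnuplot -persist /tmp/expenses.gp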
1. While reading a blog post from Jonathan Dowland.↩︎
2. Note, this is covered by full-fledged hledger – Investments.↩︎
3. Also covered in full-fledged hledger – Tax returns.↩︎
A brief and biased history.

Oh yeah, there’s pull requests now – GitHub blog, Sat, 23 Feb 2008

When GitHub launched, it had no code review. Three years after launch, in 2011, GitHub user rtomayko became the first person to make a real code comment, which read, in full: “+1”.

Before that, GitHub lacked any way to comment on code directly. Instead, pull requests were a combination of two simple features:

- Cross-repository compare view – a feature they’d debuted in 2010—git diff in a web page.
- A comments section – a feature most blogs had in the 90s.

There was no way to thread comments, and the comments were on a different page than the diff.

GitHub pull requests circa 2010. This is from the official documentation on GitHub.

Earlier still, when the pull request debuted, GitHub claimed only that pull requests were “a way to poke someone about code”—a way to direct message maintainers, but one that lacked any web view of the code whatsoever. For developers, it worked like this:

1. Make a fork.
2. Click “pull request”.
3. Write a message in a text form.
4. Send the message to someone1 with a link to your fork.
5. Wait for them to reply.

In effect, pull requests were a limited way to send emails to other GitHub users.

Ten years after this humble beginning—seven years after the first code comment—when Microsoft acquired GitHub for $7.5 billion, this cobbled-together system known as “GitHub flow” had become the default way to collaborate on code via Git.

And I hate it.

Pull requests were never designed. They emerged. But not from careful consideration of the needs of developers or maintainers. Pull requests work like they do because they were easy to build.

In 2008, GitHub’s developers could have opted to use git format-patch instead of teaching the world to juggle branches. Or they might have chosen to generate pull requests using the git request-pull command that’s existed in Git since 2005 and is still used by the Linux kernel maintainers today2.

Instead, they shrugged into GitHub flow, and that flow taught the world to use Git. And commit histories have sucked ever since.

For some reason, github has attracted people who have zero taste, don’t care about commit logs, and can’t be bothered. – Linus Torvalds, 2012

1. “Someone” was a person chosen by you from a checklist of the people who had also forked this repository at some point.↩︎
2. Though to make small, contained changes you’d use git format-patch and git am.↩︎
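For reference, here is roughly what that looks like in practice. This is a sketch with a placeholder URL and branch names, not a real project:

$ git request-pull origin/main https://git.example.org/my-fork.git my-topic

The output is a plain-text summary of the changes: the base commit, the URL to fetch from, a shortlog, and a diffstat, ready to paste into an email to a maintainer.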
More in programming
If your executive calendar is packed back to back, you have no room for fires, customers, or serendipities. You've traded all your availability for efficiency. That's a bad deal.

Executives of old used to know this! That's what the long lunches, early escapes to the golf course, and reading the paper at work were all about. A great fictional example of this is Bert Cooper from Mad Men. He knew his value was largely in his network. He didn't have to grind every minute of every day to prove otherwise. His function was to leap into action when the critical occasion arose or a decision needed to be made.

But modern executives are so insecure about seeming busy 24/7 that they'll wreck their business trying to prove it. Trying to outwork everyone. Spreading themselves thin so they can run a squirrel-brain operation that's constantly chasing every nutty idea.

Now someone is inevitably going to say "but what about Elon!!". He's actually a perfect illustration of doing this right. Even if he works 100-hour weeks, he's the CEO of 3 companies, has a Diablo IV addiction, and keeps a busy tweeting schedule too. In all of that, I'd be surprised if there was more than 20-30h per company per week on average. And your boss is not Elon.

Wide open calendars should not be seen as lazy, but as intentional availability. It's time we brought them back into vogue.
I'm way too discombobulated from getting next month's release of Logic for Programmers ready, so I'm pulling an idea from the slush pile. Basically I wanted to come up with a mental model of arrays as a concept that explained APL-style multidimensional arrays and tables, but also why there weren't multitables.

So, arrays. In all languages they are basically the same: they map a sequence of numbers (I'll use 1..N)1 to homogeneous values (values of a single type). This is in contrast to the other two foundational types, associative arrays (which map an arbitrary type to homogeneous values) and structs (which map a fixed set of keys to heterogeneous values). Arrays appear in PLs earlier than the other two, possibly because they have the simplest implementation and the most obvious application to scientific computing. The OG FORTRAN had arrays.

I'm interested in two structural extensions to arrays. The first, found in languages like nushell and frameworks like Pandas, is the table. Tables have string keys like a struct and indexes like an array. Each row is a struct, so you can get "all values in this column" or "all values for this row". They're heavily used in databases and data science.

The other extension is the N-dimensional array, mostly seen in APLs like Dyalog and J. Think of this like arrays-of-arrays(-of-arrays), except all arrays at the same depth have the same length. So [[1,2,3],[4]] is not a 2D array, but [[1,2,3],[4,5,6]] is. This means that N-arrays can be queried on any axis.

   ]x =: i. 3 3
0 1 2
3 4 5
6 7 8
   0 { x     NB. first row
0 1 2
   0 {"1 x   NB. first column
0 3 6

So, I've had some ideas on a conceptual model of arrays that explains all of these variations and possibly predicts new variations. I wrote up my notes and did the bare minimum of editing and polishing. Somehow it ended up being 2000 words.

1-dimensional arrays

A one-dimensional array is a function over 1..N for some N. To be clear this is math functions, not programming functions. Programming functions take values of a type and perform computations on them. Math functions take values of a fixed set and return values of another set. So the array [a, b, c, d] can be represented by the function (1 -> a ++ 2 -> b ++ 3 -> c ++ 4 -> d).

Let's write the set of all four-element character arrays as 1..4 -> char. 1..4 is the function's domain. The set of all character arrays is the empty array + the functions with domain 1..1 + the functions with domain 1..2 + ... Let's call this set Array[Char]. Our compilers can enforce that a type belongs to Array[Char], but some operations care about the more specific type, like matrix multiplication. This is either checked with the runtime type or, in exotic enough languages, with static dependent types.

(This is actually how TLA+ does things: the basic collection types are functions and sets, and a function with domain 1..N is a sequence.)

2-dimensional arrays

Now take the 3x4 matrix

   i. 3 4
0 1  2  3
4 5  6  7
8 9 10 11

There are two equally valid ways to represent the array function:

1. A function that takes a row and a column and returns the value at that index, so it would look like f(r: 1..3, c: 1..4) -> Int.
2. A function that takes a row and returns that row as an array, aka another function: f(r: 1..3) -> g(c: 1..4) -> Int.2

Man, (2) looks a lot like currying! In Haskell, functions can only have one parameter. If you write (+) 6 10, (+) 6 first returns a new function f y = y + 6, and then applies f 10 to get 16.
So (+) has the type signature Int -> Int -> Int: it's a function that takes an Int and returns a function of type Int -> Int.3 Similarly, our 2D array can be represented as an array function that returns array functions: it has type 1..3 -> 1..4 -> Int, meaning it takes a row index and returns 1..4 -> Int, aka a single array. (This differs from conventional array-of-arrays because it forces all of the subarrays to have the same domain, aka the same length. If we wanted to permit ragged arrays, we would instead have the type 1..3 -> Array[Int].)

Why is this useful? A couple of reasons. First of all, we can apply function transformations to arrays, like "combinators". For example, we can flip any function of type a -> b -> c into a function of type b -> a -> c. So given a function that takes rows and returns columns, we can produce one that takes columns and returns rows. That's just a matrix transposition! Second, we can extend this to any number of dimensions: a three-dimensional array is one with type 1..M -> 1..N -> 1..O -> V. We can still use function transformations to rearrange the array along any ordering of axes.

Speaking of dimensions:

What are dimensions, anyway

Okay, so now imagine we have a Row × Col grid of pixels, where each pixel is a struct of type Pixel(R: int, G: int, B: int). So the array is

Row -> Col -> Pixel

But we can also represent the Pixel struct with a function: Pixel(R: 0, G: 0, B: 255) is the function where f(R) = 0, f(G) = 0, f(B) = 255, making it a function of type {R, G, B} -> Int. So the array is actually the function

Row -> Col -> {R, G, B} -> Int

And then we can rearrange the parameters of the function like this:

{R, G, B} -> Row -> Col -> Int

Even though the set {R, G, B} is not of form 1..N, this clearly has a real meaning: f[R] is the function mapping each coordinate to that coordinate's red value. What about Row -> {R, G, B} -> Col -> Int? That's for each row, the 3 × Col array mapping each color to that row's intensities.

Really any finite set can be a "dimension". Recording the monitor over a span of time? Frame -> Row -> Col -> Color -> Int. Recording a bunch of computers over some time? Computer -> Frame -> Row …. This is pretty common in constraint satisfaction! Like if you're a conference organizer trying to assign talks to talk slots, your array might be type (Day, Time, Room) -> Talk, where Day/Time/Room are enumerations.

An implementation constraint is that most programming languages only allow integer indexes, so we have to replace Rooms and Colors with numerical enumerations over the set. As long as the set is finite, this is always possible, and for struct-functions, we can always choose the indexing on the lexicographic ordering of the keys. But we lose type safety.

Why tables are different

One more example: Day -> Hour -> Airport(name: str, flights: int, revenue: USD). Can we turn the struct into a dimension like before? In this case, no. We were able to make Color an axis because we could turn Pixel into a Color -> Int function, and we could only do that because all of the fields of the struct had the same type. This time, the fields are different types. So we can't convert {name, flights, revenue} into an axis.4

One thing we can do is convert it to three separate functions:

airport: Day -> Hour -> Str
flights: Day -> Hour -> Int
revenue: Day -> Hour -> USD

But we want to keep all of the data in one place.
That's where tables come in: an array-of-structs is isomorphic to a struct-of-arrays:

AirportColumns(
    airport: Day -> Hour -> Str,
    flights: Day -> Hour -> Int,
    revenue: Day -> Hour -> USD,
)

The table is, in a sense, both representations simultaneously. If this was a pandas dataframe, df["airport"] would get the airport column, while df.loc[day1] would get the first day's data.

I don't think many table implementations support more than one axis dimension, but there's no reason they couldn't. These are also possible transforms:

Hour -> NamesAreHard(
    airport: Day -> Str,
    flights: Day -> Int,
    revenue: Day -> USD,
)

Day -> Whatever(
    airport: Hour -> Str,
    flights: Hour -> Int,
    revenue: Hour -> USD,
)

In my mental model, the heterogeneous struct acts as a "block" in the array. We can't remove it, we can only push an index into the fields or pull a shared column out. But there's no way to convert a heterogeneous table into an array.

Actually there is a terrible way

Most languages have unions or sum types that let us say "this is a string OR integer". So we can make our airport data Day -> Hour -> AirportKey -> Int | Str | USD. Heck, might as well just say it's Day -> Hour -> AirportKey -> Any. But would anybody really be mad enough to use that in practice?

Oh wait, J does exactly that. J has an opaque datatype called a "box". A "table" is a function Dim1 -> Dim2 -> Box. You can see some examples of what that looks like here.

Misc Thoughts and Questions

- The heterogeneity barrier seems like it explains why we don't see multiple axes of table columns, while we do see multiple axes of array dimensions. But is that actually why? Is there a system out there that does have multiple columnar axes?
- The array x = [[a, b, a], [b, b, b]] has type 1..2 -> 1..3 -> {a, b}. Can we rearrange it to 1..2 -> {a, b} -> 1..3? No. But we can rearrange it to 1..2 -> {a, b} -> PowerSet(1..3), which maps rows and characters to columns with that character: [(a -> {1, 3} ++ b -> {2}), (a -> {} ++ b -> {1, 2, 3})]. We can also transform Row -> PowerSet(Col) into Row -> Col -> Bool, aka a boolean matrix. This makes sense to me as both forms are means of representing directed graphs.
- Are other function combinators useful for thinking about arrays?
- Does this model cover pivot tables? Can we extend it to relational data with multiple tables?

Systems Distributed Talk (will be) Online

The premiere will be August 6 at 12 CST, here! I'll be there to answer questions / mock my own performance / generally make a fool of myself.

1. Sacrilege! But it turns out in this context, it's easier to use 1-indexing than 0-indexing. In the years since I wrote that article I've settled on "each indexing choice matches different kinds of mathematical work", so mathematicians and computer scientists are best served by being able to choose their index. But software engineers need consistency, and 0-indexing is overall a net better consistency pick. ↩
2. This is right-associative: a -> b -> c means a -> (b -> c), not (a -> b) -> c. (1..3 -> 1..4) -> Int would be the associative array that maps length-3 arrays to integers. ↩
3. Technically it has type Num a => a -> a -> a, since (+) works on floats too. ↩
4. Notice that if each Airport had a unique name, we could pull it out into AirportName -> Airport(flights, revenue), but we still are stuck with two different values. ↩
We don’t want to bury the lede: we have raised a $100M Series B, led by a new strategic partner in USIT with participation from all existing Oxide investors.

To put that number in perspective: over the nearly six year lifetime of the company, we have raised $89M; our $100M Series B more than doubles our total capital raised to date — and positions us to make Oxide the generational company that we have always aspired it to be.

If this aspiration seems heady now, it seemed absolutely outlandish when we were first raising venture capital in 2019. Our thesis was that cloud computing was the future of all computing; that running on-premises would remain (or become!) strategically important for many; that the entire stack — hardware and software — needed to be rethought from first principles to serve this market; and that a large, durable, public company could be built by whoever pulled it off.

This scope wasn’t immediately clear to all potential investors, some of whom seemed to latch on to one aspect or another without understanding the whole. Their objections were revealing: "We know you can build this," began more than one venture capitalist (at which we bit our tongue; were we not properly explaining what we intended to build?!), "but we don’t think that there is a market." Entrepreneurs must become accustomed to rejection, but this flavor was particularly frustrating because it was exactly backwards: we felt that there was in fact substantial technical risk in the enormity of the task we put before ourselves — but we also knew that if we could build it (a huge if!) there was a huge market, desperate for cloud computing on-premises.

Fortunately, in Eclipse Ventures we found investors who saw what we saw: that the most important products come when we co-design hardware and software together, and that the on-premises market was sick of being told that they either don’t exist or that they don’t deserve modernity. These bold investors — like the customers we sought to serve — had been waiting for this company to come along; we raised seed capital, and started building.

And build it we did, making good on our initial technical vision:

- We did our own board designs, allowing for essential system foundation like a true hardware root-of-trust and end-to-end power observability.
- We did our own microcontroller operating system, and used it to replace the traditional BMC.
- We did our own platform enablement software, eliminating the traditional UEFI BIOS and its accompanying flotilla of vulnerabilities.
- We did our own host hypervisor, assuring an integrated and seamless user experience — and eliminating the need for a third-party hypervisor and its concomitant rapacious software licensing.
- We did our own switch — and our own switch runtime — eliminating entire universes of integration complexity and operational nightmares.
- We did our own integrated storage service, allowing the rack-scale system to have reliable, available, durable, elastic instance storage without necessitating a dependency on a third party.
- We did our own control plane, a sophisticated distributed system building on the foundation of our hardware and software components to deliver the API-driven services that modernity demands: elastic compute, virtual networking, and virtual storage.

While these technological components are each very important (and each is in service to specific customer problems when deploying infrastructure on-premises), the objective is the product, not its parts.
The journey to a product was long, but we ticked off the milestones. We got the boards brought up. We got the switch transiting packets. We got the control plane working. We got the rack manufactured. We passed FCC compliance. And finally, two years ago, we shipped our first system!

Shortly thereafter, more milestones of the variety you can only get after shipping: our first update of the software in the field; our first update-delivered performance improvements; our first customer-requested features added as part of an update. Later that year, we hit general commercial availability, and things started accelerating.

We had more customers — and our first multi-rack customer. We had customers go on the record about why they had selected Oxide — and customers describing the wins that they had seen deploying Oxide. Customers started landing faster now: enterprise sales cycles are infamously long, but we were finding that we were going from first conversations to a delivered product surprisingly quickly. The quickening pace always seemed to be due in some way to our transparency: new customers were listeners to our podcast, or they had read our RFDs, or they had perused our documentation, or they had looked at the source code itself.

With growing customer enthusiasm, we were increasingly getting questions about what it would look like to buy a large number of Oxide racks. Could we manufacture them? Could we support them? Could we make them easy to operate together?

Into this excitement, a new potential investor, USIT, got to know us. They asked terrific questions, and we found a shared disposition towards building lasting value and doing it the right way. We learned more about them, too, and especially USIT’s founder, Thomas Tull. The more we each learned about the other, the more there was to like. And importantly, USIT had the vision for us that we had for ourselves: that there was a big, important market here — and that it was uniquely served by Oxide.

We are elated to announce this new, exciting phase of the company. It’s not necessarily in our nature to celebrate fundraising, but this is a big milestone, because it will allow us to address our customers' most pressing questions around scale (manufacturing scale, system scale, operations scale) and roadmap scope. We have always believed in our mission, but this raise gives us a new sense of confidence when we say it: we’re going to kick butt, have fun, not cheat (of course!), love our customers — and change computing forever.