I’ve written about how I don’t love the idea of overriding basic computing controls. Instead, I generally favor respecting user choice and providing the controls their platform does. Of course, this means platforms need to surface better primitives rather than supplying basic ones with an ability to opt out.

What am I even talking about? Let me give an example. The WebKit team just shipped a new API for <input type=color> which gives users the ability to pick colors with wide gamut P3 and alpha transparency. The entire API is just a little bit of declarative HTML:

<label>
  Select a color:
  <input type="color" colorspace="display-p3" alpha>
</label>

From that simple markup (on iOS) you get this beautiful, robust color picker. That’s a great color picker, and if you’re picking colors a lot on iOS and encountering this particular UI a lot, that’s even better — like, “Oh hey, I know how to use this thing!” With a picker like that, how many folks really want...
5 days ago


More from Jim Nielsen’s Blog

Could I Have Some More Friction in My Life, Please?

A clip from “Buy Now! The Shopping Conspiracy” features a former executive of an online retailer explaining how motivated they were to make buying easy. Like, incredibly easy. So easy, in fact, that their goal was to “reduce your time to think a little bit more critically about a purchase you thought you wanted to make.” Why? Because if you pause for even a moment, you might realize you don’t actually want whatever you’re about to buy.

Been there. Ready to buy something and the slightest inconvenience surfaces — like when I can’t remember the precise order of my credit card’s CVV digits and realize I’ll have to find my credit card and look it up — and that’s enough for me to say, “Wait a second, do I actually want to move my slug of a body and find my credit card? Nah.”

That feels like the socials too. The algorithms. The endless feeds. The social interfaces. All engineered to make you think less about what you’re consuming, to think less critically about reacting or responding or engaging. Don’t think, just scroll. Don’t think, just like. Don’t think, just repost. And now with AI, don’t think at all.[1]

Because if you have to think, that’s friction. Friction is an engagement killer on content, especially the low-grade stuff. Friction makes people ask, “Is this really worth my time?”

Maybe we need a little more friction in the world. More things that merit our time. Fewer things that don’t. It’s kind of ironic how the things we need present so much friction in our lives (like getting healthcare) while the things we don’t need that siphon money from our pockets (like online gambling[2]) present so little friction you could almost inadvertently slip right into them. It’s as if The Good Things™️ in life are full of friction while the hollow ones are frictionless.

[1] Nicholas Carr said, “The endless labor of self-expression cries out for the efficiency of automation.” Why think when you can prompt a probability machine to stitch together a facade of thinking for you?
[2] John Oliver did a segment on sports betting if you want to feel sad.

3 days ago 3 votes
Product Pseudoscience

In his post about “Vibe Driven Development”, Robin Rendle warns against what I’ll call the pseudoscientific approach to product building prevalent across the software industry: when folks at tech companies talk about data, they’re not talking about a well-researched study from a lab but actually wildly inconsistent and untrustworthy data scraped from an analytics dashboard.

This approach has all the theater of science — “we measured and made decisions on the data, the numbers don’t lie,” etc. — but is missing the rigor of science. Like, for example, corroboration.

Independent corroboration is a vital practice of science that we in tech conveniently gloss over in our (self-proclaimed) objective, data-driven decision making. In science you can observe something, measure it, analyze the results, and draw conclusions, but nobody accepts it as fact until there are multiple instances of independent corroboration.

Meanwhile in product, corroboration is often merely a group of people nodding along in support of a PowerPoint with some numbers supporting a foregone conclusion — “We should do X, that’s what the numbers say!” (What’s worse is when we have the hubris to think our experiments, anecdotal evidence, and conclusions should extend to others outside of our own teams, despite zero independent corroboration — looking at you, Medium articles.)

Don’t get me wrong, experimentation and measurement are great. But let’s not pretend there is (or should be) a science to everything we do. We don’t hold a candle to the rigor of science. Software is as much art as science. Embrace the vibe.

a week ago 7 votes
Multiple Computers

I’ve spent so much time, had so many headaches, and encountered so much complexity from what, in my estimation, boils down to this: trying to get something to work on multiple computers.

It might be time to just go back to having one computer — a personal laptop — do everything. No more commit, push, and let the cloud build and deploy. No more making it possible to do a task on my phone and tablet too. No more striving to make it possible to do anything from anywhere. Instead, I should accept the constraint of doing specific kinds of tasks when I’m at my laptop. No laptop? Don’t do it. Save it for later. Is it really that important?

I think I’d save myself a lot of time and headache with that constraint. No more continuous over-investment of my time in making it possible to do some particular task across multiple computers.

It’s a subtle, but fundamental, shift in thinking about my approach to computing tasks. Today, my default posture is to defer control of tasks to cloud computing platforms. Let them do the work, and I can access and monitor that work from any device. Like, for example, publishing a version of my website: git commit, push, and let the cloud build and deploy it.

But beware, there be possible dragons! The build fails. It’s not clear why, but it “works on my machine”. Something is different between my computer and the computer in the cloud. Now I’m troubleshooting an issue unrelated to my website itself. I’m troubleshooting an issue with the build and deployment of my website across multiple computers.

It’s easy to say: the build works on my machine, deploy it! It’s deceptively time-consuming to take that one more step and say: let another computer build it and deploy it.

So rather than taking the default posture of “cloud-first”, i.e. push to the cloud and let it handle everything, I’d rather take a “local-first” approach where I choose one primary device to do tasks on, and ensure I can do them from there. Everything else beyond that, i.e. getting it to work on multiple computers, is a “progressive enhancement” in my workflow. I can invest the time, if I want to, but I don’t have to. This stands in contrast to where I am today, which is: if a build fails in the cloud, I have to invest the time, because that’s how I’ve set up my workflow. I can only deploy via the cloud. So I have to figure out how to get the cloud’s computer to build my site, even when my laptop is doing it just fine.

It’s hard to make things work identically across multiple computers. I get it, that’s a program, not software. And that’s the work. But sometimes a program is just fine. Wisdom is knowing the difference.

a week ago 14 votes
Notes from Alexander Petros’ “Building the Hundred-Year Web Service”

I loved this talk from Alexander Petros titled “Building the Hundred-Year Web Service”. What follows is a summation of my note-taking from watching the talk on YouTube.

Is what you’re building for future generations: useful for them? Maintainable by them? Adaptable by them? Actually, forget about future generations. Is what you’re building useful, maintainable, and adaptable for future you, 6 months or 6 years from now?

While we’re building codebases which may not be useful, maintainable, or adaptable by someone two years from now, the Romans built a bridge thousands of years ago that is still being used today.

It should be impossible to imagine building something in Roman times that’s still useful today. But if you look at [Trajan’s Bridge in Portugal, which is still used today] you can see there’s a little car on it and a couple pedestrians. They couldn’t have anticipated the automobile, but nevertheless it is being used for that today.

That’s a conundrum. How do you build for something you can’t anticipate? You have to think resiliently. Ask yourself: what’s true today that was true for a software engineer in 1991? One simple answer is: sharing and accessing information with a uniform resource identifier. That was true 30+ years ago, and I would venture to bet it will be true in another 30 years — and more!

There [isn’t] a lot of source code that can run unmodified in software that is 30 years apart. And yet, the first web site ever made can do precisely that.

The source code of the very first web page — which was written for a line mode browser — still runs today on a touchscreen smartphone, which is not a device that Tim Berners-Lee could have anticipated.

Alexander goes on to point out how interaction with web pages has changed over time. In the original line mode browser, links couldn’t be represented as blue underlined text. They were represented more like footnotes on screen where you’d see something like this[1] and then this[2]. If you wanted to follow that link, there was no GUI to point and click. Instead, you would hit that number on your keyboard. In desktop browsers and GUI interfaces, we got blue underlines to represent something you could point and click on to follow a link. On touchscreen devices, we got “tap” with your finger to follow a link.

While these methods of interaction have changed over the years, the underlying medium remains unchanged: information via uniform resource identifiers.

The core representation of a hypertext document is adaptable to things that were not at all anticipated in 1991. The durability guarantees of the web are absolutely astounding if you take a moment to think about it. If you’re sprinting you might beat the browser, but it’s running a marathon and you’ll never beat it in the long run.

If your page is fast enough, [refreshes] won’t even repaint the page. The experience of refreshing a page, or clicking on a “hard link”, is identical to the experience of partially updating the page. That is something that quietly happened in the last ten years with no fanfare. All the people who wrote basic HTML got a huge performance upgrade in their browser. And everybody who tried to beat the browser now has to reckon with all the JavaScript they wrote to emulate these basic features.

a week ago 10 votes

More in programming

Understanding the cause of hydration issues in react-router (tip)

Today we go over how hydration errors happen in react-router and how to fix them.

12 hours ago 2 votes
Rust streams and timeouts gotcha

Imagine we have a list of paths to Parquet files on R2. We need to fetch the Parquet footer of each file. However, we don’t know in advance whether we will need the footers of all the files, and we want to avoid fetching more than we need. Rust has a streams abstraction. It is kind of like an iterator, but with async operations permitted. Like iterators, streams are lazy: they don’t do anything unless explicitly polled. This sounds like it ticks all the boxes, so let’s try to implement it:
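The post’s implementation is truncated in this excerpt, but here is a minimal sketch of the setup it describes (lazy footer fetches with bounded concurrency and a per-request timeout), assuming the tokio and futures crates; fetch_footer is a hypothetical stand-in for the real R2 ranged read:

use std::time::Duration;

use futures::stream::{self, StreamExt};

// Hypothetical stand-in for the real R2 call, which would issue a
// ranged GET for the footer bytes at the end of the Parquet file.
async fn fetch_footer(path: String) -> std::io::Result<Vec<u8>> {
    Ok(path.into_bytes())
}

#[tokio::main]
async fn main() {
    let paths: Vec<String> = vec!["a.parquet".into(), "b.parquet".into()];

    // Lazy stream: nothing is fetched until the stream is polled, so a
    // consumer that stops early never pays for the remaining files.
    let mut footers = stream::iter(paths)
        .map(|path| tokio::time::timeout(Duration::from_secs(5), fetch_footer(path)))
        .buffered(4); // at most 4 fetches in flight at once

    while let Some(result) = footers.next().await {
        match result {
            Ok(Ok(bytes)) => println!("footer: {} bytes", bytes.len()),
            Ok(Err(e)) => eprintln!("fetch failed: {e}"),
            Err(_) => eprintln!("fetch timed out"),
        }
        // Breaking out of the loop here skips the unfetched footers entirely.
    }
}

Because the stream is lazy and buffered(4) caps concurrency, stopping early means the remaining footers are never requested, which is exactly the property the post is after.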

yesterday 3 votes
Debugging X86-64 Assembly with GDB

Watch now (20 mins) | Learn how to inspect registers, step through instructions, and investigate crashes using GDB.

2 days ago 4 votes
Stuff I learned at Carta.

Today’s my last day at Carta, where I got the chance to serve as their CTO for the past two years. I’ve learned so much working there, and I wanted to end my chapter there by collecting my thoughts on what I learned. (I am heading somewhere, and will share news in a week or two after firming up the communication plan with my new team there.) The most important things I learned at Carta were:

Working in the details – if you took a critical lens towards my historical leadership style, I think the biggest issue you’d point at is my being too comfortable operating at a high level of abstraction. Utilizing the expertise of others to fill in your gaps is a valuable skill, but – like any single approach – it’s limiting when utilized too frequently. One of the strengths of Carta’s “house leadership style” is expecting leaders to go deep into the details to get informed and push pace. What I practiced there turned into the pieces on strategy testing and developing domain expertise.

Refining my approach to engineering strategy – over the past 18 months, I’ve written a book on engineering strategy (posts are all in #eng-strategy-book), with initial chapters becoming available for early release with O’Reilly next month. Fingers crossed, the book will be released around October. Coming into Carta, I already had much of my core thesis about how to do engineering strategy, but Carta gave me a number of complex projects to practice on, and excellent people to practice with: thank you to Dan, Shawna and Vogl in particular! More on this project in the next few weeks.

Extract the kernel – everywhere I’ve ever worked, teams have struggled to understand executives. In every case, the executives could be clearer, but it’s not particularly interesting to frame these problems as something the executives need to fix. Sure, it’s true they could communicate better, but that framing makes you powerless, when you have a great deal of power to understand confusing communication. After all, even good communicators communicate poorly sometimes.

Meaningfully adopting LLMs – a year ago I wrote up notes on adopting LLMs in your products, based on what we’d learned so far. Since then, we’ve learned a lot more, and LLMs themselves have significantly improved. Carta has been using LLMs in real, business-impacting workflows for over a year. That’s continuing to expand into solving more complex internal workflows, and even more interestingly into creating net-new product capabilities that ought to roll out more widely in the next few months (currently released to small beta groups). This is the first major technology transition that I’ve experienced in a senior leadership role (I was earlier in my career when the mobile internet transitioned from novelty to commodity). The immense pressure to adopt faster, combined with the immense uncertainty about whether it’s a meaningful change or a brief blip, was a lot of fun, and was the inspiration for this strategy document around LLM adoption.

Multi-dimensional tradeoffs – a phrase that Henry Ward uses frequently is that “everyone’s right, just at a different altitude.” That idea resonates with me, and meshes well with the ideas of multi-dimensional tradeoffs and layers of context that I find improve decision making for folks in roles that require making numerous, complex decisions. Working at Carta, these ideas formalized from something I intuited into something I could explain clearly.

Navigators – I think our most successful engineering strategy at Carta was rolling out the Navigator program, which ensured the senior-most engineers had context and direct representation, rather than relying exclusively on indirect representation via engineering management. Carta’s engineering managers are excellent, but there’s always something lost as discussions extend across layers. The Navigator program probably isn’t a perfect fit for particularly small companies, but I think any company with more than 100-150 engineers would benefit from something along these lines.

How to create software quality – I’ve evolved my thinking about software quality quite a bit over time, but Carta was particularly helpful in distinguishing why some pieces of software are so hard to build despite having little-to-no scale from a data or concurrency perspective. These systems, which I label as “high essential complexity”, deserve more credit for their complexity, even if they have little in the way of complexity from infrastructure scaling.

Shaping eng org costs – a few years ago, I wrote about my mental model for managing infrastructure costs. At Carta, I got to refine my thinking about engineering salary costs, with most of those ideas getting incorporated in the Navigating Private Equity ownership strategy, and the eng org seniority mix model. The three biggest levers are (1) “N-1 backfills”, (2) requiring a business rationale for promotions into senior-most levels, and (3) shifting hiring into cost-efficient hiring regions. None of these are the sort of inspiring topics that excite folks, but they are all essential to the long-term stability of your organization.

Explaining engineering costs to boards/execs – similarly, I finally have a clear perspective on how to represent R&D investment to boards in the same language that they speak, which I wrote up here, and know how to do it quickly without relying on any manually curated internal datasets.

Lots of smaller stuff, like the no wrong doors policy for routing colleagues to appropriate channels, how to request headcount in a way that is convincing to executives, Act Two rationales for how people’s motivations evolve over the course of long careers (and my own personal career mission to advance the industry), and why friction isn’t velocity even though many folks act like it is.

I’ve also learned quite a bit about venture capital, fund administration, cap tables, non-social network products, operating a multi-business-line company, and various operating models. Figuring out how to sanitize those learnings to share the interesting tidbits without leaking internal details is a bit too painful, so I’m omitting them for now. Maybe some will be shareable in four or five years after my context goes sufficiently stale.

As a closing thought, I just want to say how much I’ve appreciated the folks I’ve gotten to work with at Carta. From the executive team (Ali, April, Charly, Davis, Henry, Jeff, Nicole, Vrushali) to my directs (Adi, Ciera, Dan, Dave, Jasmine, Javier, Jayesh, Karen, Madhuri, Sam, Shawna) to the navigators (there’s a bunch of y’all). The people truly are always the best part, and that was certainly true at Carta.

5 days ago 9 votes