Quoting myself from former days on Twitter: Businesses have a mental model of what they do. Businesses build software to help them do it—a concrete manifestation of their mental model. A gap always exists between these two. What makes a great software business is their ability to keep that gap very small. I think this holds up. And I still think about this idea (hence this post). Software is an implementation of human understanding — people need X, so we made Y. But people change. Businesses change. So software must also change. One of your greatest strengths will be your ability to adapt and evolve your understanding of people’s needs and implement it in your software. In a sense, technical debt is the other side of this coin of change: an inability to keep up with your own metamorphosis and understanding. In a way, you could analogize this to the conundrum of rocket science: you need fuel to get to space, but the more fuel you add, the more weight you add, and the more weight you...
3 months ago


More from Jim Nielsen’s Blog

Notes from Alexander Petros’ “Building the Hundred-Year Web Service”

I loved this talk from Alexander Petros titled “Building the Hundred-Year Web Service”. What follows is a summation of my note-taking from watching the talk on YouTube.

Is what you’re building for future generations: Useful for them? Maintainable by them? Adaptable by them? Actually, forget about future generations. Does what you’re building align with those goals for future you, 6 months or 6 years from now?

While we’re building codebases which may not be useful, maintainable, or adaptable by someone two years from now, the Romans built a bridge thousands of years ago that is still being used today. It seems impossible to imagine building something in Roman times that’s still useful today. But if you look at [Trajan’s Bridge in Portugal, which is still used today] you can see there’s a little car on it and a couple of pedestrians. They couldn’t have anticipated the automobile, but nevertheless it is being used for that today.

That’s a conundrum. How do you build for something you can’t anticipate? You have to think resiliently. Ask yourself: What’s true today, that was true for a software engineer in 1991? One simple answer is: sharing and accessing information with a uniform resource identifier. That was true 30+ years ago, and I would venture to bet it will be true in another 30 years — and more!

There [isn’t] a lot of source code that can run unmodified in software that is 30 years apart. And yet, the first web site ever made can do precisely that. The source code of the very first web page — which was written for a line mode browser — still runs today on a touchscreen smartphone, which is not a device that Tim Berners-Lee could have anticipated.

Alexander goes on to point out how interaction with web pages has changed over time: in the original line mode browser, links couldn’t be represented as blue underlined text. They were represented more like footnotes on screen, where you’d see something like this[1] and then this[2].
If you wanted to follow that link, there was no GUI to point and click. Instead, you would hit that number on your keyboard. In desktop browsers and GUI interfaces, we got blue underlines to represent something you could point and click on to follow a link. On touchscreen devices, we got “tap” with your finger to follow a link.

While these methods for interaction have changed over the years, the underlying medium remains unchanged: information via uniform resource identifiers. The core representation of a hypertext document is adaptable to things that were not at all anticipated in 1991. The durability guarantees of the web are absolutely astounding if you take a moment to think about it.

If you’re sprinting, you might beat the browser, but it’s running a marathon and you’ll never beat it in the long run. If your page is fast enough, [refreshes] won’t even repaint the page. The experience of refreshing a page, or clicking on a “hard link”, is identical to the experience of partially updating the page. That is something that quietly happened in the last ten years with no fanfare. All the people who wrote basic HTML got a huge performance upgrade in their browser. And everybody who tried to beat the browser now has to reckon with all the JavaScript they wrote to emulate these basic features.
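The “uniform resource identifier” point is easy to see concretely. As a quick sketch (mine, not from the talk): the address of the very first web page parses into the same scheme/host/path anatomy as any URL minted today, which is part of why 1991-era links still work.

```python
# The first web page ever made is still served at its original
# address. Python's standard library can take the URI apart into
# the same components a modern URL has.
from urllib.parse import urlparse

first_page = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(first_page)

print(parts.scheme)  # http
print(parts.netloc)  # info.cern.ch
print(parts.path)    # /hypertext/WWW/TheProject.html
```

Same three-part anatomy, 30+ years apart — the medium outlives every interaction style layered on top of it.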

17 hours ago 2 votes
Notes from the Chrome Team’s “Blink principles of web compatibility”

Following up on a previous article I wrote about backwards compatibility, I came across this document from Rick Byers of the Chrome team titled “Blink principles of web compatibility”, which outlines how they navigate introducing breaking changes.

“Hold up,” you might say. “Breaking changes? But there are no breaking changes on the web!?” Well, as outlined in their Google Doc, “don’t break anyone ever” is a bit unrealistic. Here’s their rationale:

The Chromium project aims to reduce the pain of breaking changes on web developers. But Chromium’s mission is to advance the web, and in some cases it’s realistically unavoidable to make a breaking change in order to do that. Since the web is expected to continue to evolve incrementally indefinitely, it’s essential to its survival that we have some mechanism for shedding some of the mistakes of the past.

Fair enough. We all need ways of shedding mistakes from the past. But let’s not get too personal. That’s a different post.

So when it comes to the web, how do you know when to break something and when not to? The Chrome team looks at the data collected via Chrome's anonymous usage statistics (you can take a peek at that data yourself) to understand how often “mistake” APIs are still being used. This helps them categorize breaking changes as low-risk or high-risk. What’s wild is that, given Chrome’s ubiquity as a browser, a number like 0.1% is classified as “high-risk”!

As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial. There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!

But the usage stats are merely a guide — a partially blind one at that.
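That “frustrated every 3 seconds” figure checks out. Here’s the back-of-the-envelope arithmetic (mine, not Chrome’s, assuming a 30-day month):

```python
# Sanity-check the quoted numbers: 771 billion monthly page views
# in Chrome, and a "tiny" breakage rate of 0.0001% of page visits.
monthly_views = 771e9
breakage_rate = 0.0001 / 100        # 0.0001% expressed as a fraction
seconds_per_month = 30 * 24 * 3600  # assuming a 30-day month

broken_per_month = monthly_views * breakage_rate
seconds_between_breaks = seconds_per_month / broken_per_month

print(round(broken_per_month))           # 771000 broken page views/month
print(round(seconds_between_breaks, 1))  # one every ~3.4 seconds
```

Even a rate five times below their “trivial” threshold still means hundreds of thousands of broken page views a month. Scale is unforgiving.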
The Chrome team openly acknowledges their dataset doesn’t tell the whole story (e.g. enterprise clients have metrics recording disabled, Google’s metrics servers are blocked in China, and Chromium derivatives don’t record metrics at all).

And Chrome itself is only part of the story. They acknowledge that a change that would break Chrome but align it with other browsers is a good thing, because it’s advancing the whole web while perhaps costing Chrome specifically in the short term — community > corporation??

Breaking changes which align Chromium’s behavior with other engines are much less risky than those which cause it to deviate…In general if a change will break only sites coding specifically for Chromium (eg. via UA sniffing), then it’s likely to be net-positive towards Chromium’s mission of advancing the whole web.

Yay for advancing the web! And the web is open, which is why they also state they’ll opt for open formats where possible over closed, proprietary, “patent-encumbered” ones.

The chromium project is committed to a free and open web, enabling innovation and competition by anyone in any size organization or of any financial means or legal risk tolerance. In general the chromium project will accept an increased level of compatibility risk in order to reduce dependence in the web ecosystem on technologies which cannot be implemented on a royalty-free basis.

One example of a breaking change that dropped a proprietary technology in favor of the open web was Flash. One way of dealing with a breaking change like that is to provide an opt-out. In the case of Flash, users were given the ability to “opt-out” of Flash being deprecated via site settings (in other words, to opt in to using Flash on a page-by-page basis). That was an important step in phasing out that behavior completely over time. But not all changes get that kind of heads-up.
there is a substantial portion of the web which is unmaintained and will effectively never be updated…It may be useful to look at how long chromium has had the behavior in question to get some idea of the risk that a lot of unmaintained code will depend on it…In general we believe in the principle that the vast majority of websites should continue to function forever.

There’s a lot going on with Chrome right now, but you gotta love seeing the people who work on it making public statements like that — “we believe…that the vast majority of websites should continue to function forever.” There’s some good stuff in this document that gives you hope that people really do care and work incredibly hard to not break the web! (It’s an ecosystem, after all.)

It’s important for [us] browser engineers to resist the temptation to treat breaking changes in a paternalistic fashion. It’s common to think we know better than web developers, only to find out that we were wrong and didn’t know as much about the real world as we thought we did. Providing at least a temporary developer opt-out is an act of humility and respect for developers which acknowledges that we’ll only succeed in really improving the web for users long-term via healthy collaborations between browser engineers and web developers.

More 👏 acts 👏 of 👏 humility 👏 in tech 👏 please!

3 days ago 5 votes
Language Needs Innovation

In his book “The Order of Time”, Carlo Rovelli notes how we often ask ourselves questions about the fundamental nature of reality, such as “What is real?” and “What exists?” But those are bad questions, he says. Why?

the adjective “real” is ambiguous; it has a thousand meanings. The verb “to exist” has even more. To the question “Does a puppet whose nose grows when he lies exist?” it is possible to reply: “Of course he exists! It’s Pinocchio!”; or: “No, it doesn’t, he’s only part of a fantasy dreamed up by Collodi.” Both answers are correct, because they are using different meanings of the verb “to exist.”

He notes how Pinocchio “exists” and is “real” in terms of a literary character, but not so far as any official Italian registry office is concerned.

To ask oneself in general “what exists” or “what is real” means only to ask how you would like to use a verb and an adjective. It’s a grammatical question, not a question about nature.

The point he goes on to make is that our language has to evolve and adapt with our knowledge. Our grammar developed from our limited experience, before we knew what we know now and before we became aware of how imprecise it was in describing the richness of the natural world.

Rovelli gives an example of this from a text of antiquity which uses confusing grammar to get at the idea of the Earth having a spherical shape:

For those standing below, things above are below, while things below are above, and this is the case around the entire earth.

On its face, that is a very confusing sentence full of contradictions. But the idea in there is profound: the Earth is round and direction is relative to the observer. Here’s Rovelli:

How is it possible that “things above are below, while things below are above”?
It makes no sense…But if we reread it bearing in mind the shape and the physics of the Earth, the phrase becomes clear: its author is saying that for those who live at the Antipodes (in Australia), the direction “upward” is the same as “downward” for those who are in Europe. He is saying, that is, that the direction “above” changes from one place to another on the Earth. He means that what is above with respect to Sydney is below with respect to us. The author of this text, written two thousand years ago, is struggling to adapt his language and his intuition to a new discovery: the fact that the Earth is a sphere, and that “up” and “down” have a meaning that changes between here and there. The terms do not have, as previously thought, a single and universal meaning.

So language needs innovation as much as any technological or scientific achievement. Otherwise we find ourselves arguing over questions of deep import in a way that ultimately amounts to merely a question of grammar.

a week ago 8 votes
The Tumultuous Evolution of the Design Profession

Via Jeremy Keith’s link blog I found this article: Elizabeth Goodspeed on why graphic designers can’t stop joking about hating their jobs. It’s about the disillusionment of designers since the ~2010s. Having ridden that wave myself, there’s a lot of very relatable stuff in there about how design has evolved as a profession. But before we get into the meat of the article, there are some bangers worth acknowledging, like this:

Amazon – the most used website in the world – looks like a bunch of pop-up ads stitched together.

lol, burn. Haven’t heard Amazon described this way, but it’s spot on.

The hard truth, as pointed out in the article, is this: bad design doesn’t hurt profit margins. Or at least there’s no immediately-obvious, concrete data or correlation proving that it does. So most decision makers don’t care. You know what does help profit margins? Spending less money. Cost-savings initiatives. Those always provide a direct, immediate, seemingly-obvious correlation. So those initiatives get prioritized. Fuzzy human-centered initiatives (humanities-adjacent stuff) are difficult to quantitatively (and monetarily) measure. “Let’s stop printing paper and sending people stuff in the mail. It’s expensive. Send them emails instead.” Boom! Money saved for everyone. That’s easier to prioritize than asking, “How do people want us to communicate with them — if at all?” Nobody ever asks that last part.

Designers quickly realized that in most settings they serve the business first, customers second — or third, or fourth, or... Shar Biggers [says] designers are “realising that much of their work is being used to push for profit rather than change.” Meet the new boss. Same as the old boss.

As students, designers are encouraged to make expressive, nuanced work, and rewarded for experimentation and personal voice. The implication, of course, is that this is what a design career will look like: meaningful, impactful, self-directed.
But then graduation hits, and many land their first jobs building out endless Google Slides templates or resizing banner ads...no one prepared them for how constrained and compromised most design jobs actually are.

Reality hits hard. And here’s the part Jeremy quotes:

We trained people to care deeply and then funnelled them into environments that reward detachment. And the longer you stick around, the more disorienting the gap becomes – especially as you rise in seniority. You start doing less actual design and more yapping: pitching to stakeholders, writing brand strategy decks, performing taste.

Less craft, more optics; less idealism, more cynicism. Less work advocating for your customers, more work advocating for yourself and your team within the organization itself. Then the cynicism sets in. We’re not making software for others. We’re making company numbers go up, so our numbers ($$$) will go up.

Which reminds me: Stephanie Stimac wrote about reaching 1 year at Igalia, and what stood out to me in her post was that she didn’t feel a pressing requirement to create visibility into her work and measure (i.e. prove) its impact. I’ve never been good at that. I’ve seen its necessity, but am just not good at doing it. Being good at building is great. But being good at the optics of building is often better — for you, your career, and your standing in many orgs.

Anyway, back to Elizabeth’s article. She notes you’ll burn out trying to monetize something you love — especially when it’s in pursuit of maintaining a cost of living.

Once your identity is tied up in the performance, it’s hard to admit when it stops feeling good.

It’s a great article and if you’ve been in the design profession of building software, it’s worth your time.

a week ago 9 votes
Backwards Compatibility in the Web, but Not Its Tools

After reading an article, I ended up on HackerNews and stumbled on this comment:

The most frustrating thing about dipping in to the FE is that it seems like literally everything is deprecated.

Lol, so true. From the same comment, here’s a description of a day in the life of a front-end person:

Oh, you used the apollo CLI in 2022? Bam, deprecated, go learn how to use graphql-client or whatever, which has a totally different configuration and doesn’t support all the same options. Okay, so we just keep the old one and disable the node engine check in pnpm that makes it complain. Want to do a patch upgrade to some dependency? Hope you weren’t relying on any of its type signatures! Pin that as well, with a todo in the codebase hoping someone will update the signatures. Finally get things running, watch the stream of hundreds of deprecation warnings fly by during the install. Eventually it builds, and I get the hell out of there.

Apt. It’s ironic that the web platform itself has an ethos of zero breaking changes. But the tooling for building stuff on the web platform? The complete opposite. Breaking changes are a way of life. Is there some mystical correlation here, like the tools remain in such flux because the platform is so stable — stability taken for granted breeds instability? Either way, as Morpheus says in The Matrix: Fate, it seems, is not without a sense of irony.

2 weeks ago 5 votes

More in design

ZARA flagship store by AIM Architecture & Art Recherche Industrie

For a mass fashion retailer of many decades’ standing, it’s somewhat surprising that ZARA has quietly pulled the architecture card to...

19 hours ago 2 votes
Order is Always More Important than Action in Design

Before users can meaningfully act, they must understand — a principle our metrics-obsessed design culture has forgotten. Today’s design culture is too fixated on action. Clicks, conversions, and other easily quantified metrics have become our purpose. We’re so focused on outcomes that we’ve lost sight of what makes them valuable and what even makes them possible in the first place: order and understanding.

The primary function of design is not to prompt action. It’s to bring form to intent through order: arranging and prioritizing information so that those who encounter it can see it, perceive it, and understand it.

Why has action become our focus? Simple: it’s easier to measure than understanding. We can track how many people clicked a button but not how many people grasped the meaning behind it. We can measure time spent on a page but not comprehension gained during that time. And so, following the path of least resistance, we’ve collectively decided that what’s easy to measure must be what’s most important to optimize, leaving action metrics the only means by which the success of design is determined.

This is backward. Action without understanding is merely manipulation — a short-term victory that creates long-term problems. Users who take actions without fully comprehending why become confused, frustrated, and ultimately distrustful of both the design and the organization behind it. A dirty little secret of action metrics is how often the success signal — a button click or a form submission — is immediately followed by a meandering session of actions that obviously signals confusion and possibly even regret. Confusion is often easier to read in session data than almost anything else.

Even when action is an appropriate goal, it’s not a guaranteed outcome. Information can be perfectly clear and remain unpersuasive, because persuasion is not entirely within the designer’s control.
Information is at its most persuasive when it is (1) clear, (2) truthful, and (3) aligned with the intent of the recipient. As designers, we can only directly control the first two factors. As for alignment with user intent, we can attempt to influence this through audience targeting, but let’s be honest about the limitations. Audience targeting relies on data that we choose to believe is far more accurate than it actually is. We have geolocation, sentiment analysis, rich profiling, and nearly criminally invasive tracking, and yet most networks think I am an entirely different kind of person than I am. And even if they got the facts right, they still couldn’t promise intent-alignment at the accuracy they claim without mind-reading.

The other dirty secret of most marketing is that we attempt to close the gap with manipulation designed to work on most people. We rationalize this by saying, “yeah, it’s cringe, but it works.” Because we prioritize action over understanding, we encourage designs that exploit psychological triggers rather than foster comprehension. Dark patterns, artificial scarcity, misleading comparisons, straight-up negging — these are the tools of action-obsessed design. They may drive short-term metrics, but they erode trust and damage relationships with users.

This misplaced emphasis also distorts our design practice. Specific tactics like button placement and styling, form design, and conventional call-to-action patterns carry disproportionate weight in our approach. These elements are important, but fixating on them distracts designers from the craft of order: information architecture, information design, typography, and layout — the foundational elements essential to clear communication.

What might design look like if we properly valued order over action? First, we would invest more in information architecture and content strategy — the disciplines most directly concerned with creating meaningful order.
These would not be phases to rush through, but central aspects of the design process. We would trust words more rather than chasing layout and media trends.

Second, we would develop better ways to evaluate understanding. Qualitative methods like comprehension testing would be given as much weight as conversion rates. We would ask not just “Did users do what we wanted?” but “Did users understand what we were communicating?” This isn’t difficult or labor intensive, but it does require actually talking to people.

Third, we would respect the user’s right not to act. We would recognize that sometimes the appropriate response to even the clearest information is to walk away or do nothing.

None of this means that action isn’t important. Of course it is. A skeptic might ask: “What is the purpose of understanding if no action is taken?” In many cases, this is a fair question. The entire purpose of certain designs — like landing pages — may be to engage an audience and motivate their action. In such cases, measuring success through clicks and conversions not only makes sense, it’s really the only signal that can be quantified. But this doesn’t diminish the foundational role that understanding plays in supporting meaningful action, or the fact that overemphasis on action metrics can undercut the effectiveness of communication. Actions built on misunderstanding are like houses built on sand — they will inevitably collapse.

When I say that order is more important than action, I don’t mean that action isn’t important. But there is no meaningful action without understanding, and there is no understanding without order. By placing order first in our design priorities, we don’t abandon action — we create the necessary foundation for it. We align our practice with our true purpose: not to trick people into doing things, but to help them see, know, and comprehend so they can make informed decisions about what to do next.

yesterday 3 votes
Glenmorangie whisky collection by Butterfly Cannon

Glenmorangie wanted to celebrate their Head of Whisky Creation’s combined passion for whisky and wine, through the release of three...

2 days ago 3 votes
Printing Everything and Owning Nothing

Something is starting to happen. As of right now, 3D printer ownership is niche. Many know what it is, but very few people have one. This will change rapidly over the next few years. Plenty of contemporary sci-fi has depicted futures where everything is “printed.” The exact recipe of the “ink” is very much TBD, but the idea has taken hold.

But I’ve been waiting for the consumer-level signals. I just saw one — an article about how Philips, the maker of my electric shaver, will be releasing printable accessories. You won’t be able to print a razor itself, but you will be able to print the blade guards — the fragile plastic snap-ons that enable you to control the depth of your cut. This seems neat, right? But it’s really an ingenious monthly recurring revenue strategy for Philips. The idea is: how many people own our electric shavers? What’s the lifespan of those shavers? Can we close the gap between purchase events? Obviously, yes. I have many well-worn guards for my shaver. Would I spend, say, a couple of dollars to print a fresh replacement that snaps in like new? Probably. If I had a printer.

That’s going to start to be the pitch. The printer will be a utility. Not having one will be…weird, backward, luddite. Give it a few years. But the distance between discretionary accessories and the actual thing you need is quite short. Once major manufacturers demonstrate sustainable demand for printables as MRR, it’s going to be a fast transition to printing the actual thing and, therefore, most-objects-as-a-service. Regulating the supply chain will have as much to do with this as all the paranoid plutocrat energy I can muster in my imagination, obviously. I’m not into it; just calling it now.

3 days ago 4 votes
Fang Eyewear Showroom by M-D Design Studio

The Fang Eyewear Showroom by architecture firm M-D Design Studio, a project which reimagines the traditional showroom in the town...

6 days ago 8 votes