When my site analytics reported a large amount of inbound traffic from Hacker News clones, I got curious and started clicking links.[1] I like to visit links. I am a connoisseur of it. I love the feeling of landing on something you didn’t expect — which is precisely what happened. I landed on a site that had one of those Cloudflare-esque “prove you're human” captchas. That didn’t seem particularly abnormal. Lots of website owners these days use them for protection against malicious activities like DDoS attacks. Anyhow, the page had a little graphic that said: “Press ‘Allow’ to prove you are not a robot.” I sat there for a moment looking for a button, but couldn’t find one. “Where’s the ‘Allow’ button?” I thought. A few seconds later, Safari’s native permission dialog popped up asking for permission to send me notifications! I immediately thought, “Ah, hell no!” and ran away from that website. That’s sneaky, leveraging tools site owners use to protect themselves — and therefore...
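For context on the mechanism (not from the post itself): the fake “captcha” has no button to press; a script on the page asks the browser for notification permission, and that is what summons Safari's native dialog. A minimal sketch, assuming a standard use of the Notifications API in modern browsers:

```ts
// Sketch of the trick the post describes: any page script can ask the browser
// to show its native notification-permission dialog. The "captcha" graphic is
// just a decoy; the real prompt comes from a call like this one.
if ("Notification" in window) {
  Notification.requestPermission().then((result) => {
    // "granted" lets the site push notifications (often spam or scams) later on
    console.log("permission:", result);
  });
}
```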
6 months ago

More from Jim Nielsen’s Blog

Tradeoffs to Continuous Software?

I came across this post from the tech collective crftd. about how software is in a process of “continuous disintegration”:

One of the uncomfortable truths we sometimes have to break to people is that software isn't just never “done”. Worse even, it rots… The practices of continuous integration act as enablers for us to keep adding value and keeping development maintainable, but they cannot stop the inevitable: The system will eventually fail in unexpected ways, as is the nature of complex systems.

That all resonates with me — software is rarely “done”, it generally has a shelf life and starts rotting the moment you ship it — but what really made me pause was this line:

The practices of continuous integration act as enablers for us

I read “enabler” there in the negative sense of the word, like in addiction, where an “enabler” is someone who encourages another’s pattern of self-destructive behavior. Is CI/CD an enabler? I’d only ever thought of moving towards CI/CD as a net positive thing. Is it possible that, like everything, CI/CD has its tradeoffs and isn’t always the Best Thing Ever™️? What are the trade-offs of CI/CD?

The thought occurred to me that CI stands for “continuous investment” because that’s what it requires to keep it working — a continuous investment in both the infrastructure that delivers the software and the software itself.

Everybody complains nowadays about how software requires a subscription. Why is that? Could it be, perhaps, because of CI/CD? If you want continuous updates to your software, you’re going to have to pay for it continuously. We’ve made delivering software continuously easy, which means we’ve made creating software that’s “done” hard — be careful what you make easy.

In some sense — at least on the web — I think you could argue that we don’t know how to make software that’s done (e.g. software that ships on a CD). We’re inundated with tools and practices and norms that enable the opposite of that. And, perhaps, we’ve traded something there? When something comes along and enables new capabilities, it often severs others.

2 days ago 3 votes
Could I Have Some More Friction in My Life, Please?

A clip from “Buy Now! The Shopping Conspiracy” features a former executive of an online retailer explaining how motivated they were to make buying easy. Like, incredibly easy. So easy, in fact, that their goal was to “reduce your time to think a little bit more critically about a purchase you thought you wanted to make.” Why? Because if you pause for even a moment, you might realize you don’t actually want whatever you’re about to buy.

Been there. Ready to buy something and the slightest inconvenience surfaces — like when I can’t remember the precise digits of my credit card’s CVV and realize I’ll have to find my credit card and look it up — and that’s enough for me to say, “Wait a second, do I actually want to move my slug of a body and find my credit card? Nah.”

That feels like the socials too. The algorithms. The endless feeds. The social interfaces. All engineered to make you think less about what you’re consuming, to think less critically about reacting or responding or engaging. Don’t think, just scroll. Don’t think, just like. Don’t think, just repost. And now, with AI, don’t think at all.[1]

Because if you have to think, that’s friction. Friction is an engagement killer on content, especially the low-grade stuff. Friction makes people ask, “Is this really worth my time?”

Maybe we need a little more friction in the world. More things that merit our time. Fewer things that don’t. It’s kind of ironic how the things we need present so much friction in our lives (like getting healthcare) while the things we don’t need that siphon money from our pockets (like online gambling[2]) present so little friction you could almost inadvertently slip right into them. It’s as if The Good Things™️ in life are full of friction while the hollow ones are frictionless.

[1] Nicholas Carr said, “The endless labor of self-expression cries out for the efficiency of automation.” Why think when you can prompt a probability machine to stitch together a facade of thinking for you?

[2] John Oliver did a segment on sports betting if you want to feel sad.

5 days ago 4 votes
WebKit’s New Color Picker as an Example of Good Platform Defaults

I’ve written about how I don’t love the idea of overriding basic computing controls. Instead, I generally favor respecting user choice and providing the controls their platform does. Of course, this means platforms need to surface better primitives rather than supplying basic ones with an ability to opt out.

What am I even talking about? Let me give an example. The WebKit team just shipped a new API for <input type=color> which gives users the ability to pick colors with wide-gamut P3 and alpha transparency. The entire API is just a little bit of declarative HTML:

<label>
  Select a color:
  <input type="color" colorspace="display-p3" alpha>
</label>

From that simple markup (on iOS) you get this beautiful, robust color picker. That’s a great color picker, and if you’re choosing colors a lot on iOS and encountering this particular UI a lot, that’s even better — like, “Oh hey, I know how to use this thing!” With a picker like that, how many folks really want additional APIs to override that interface and style it themselves?

This is the kind of better platform default I’m talking about. A little bit of HTML markup, and boom, a great interface to a common computing task that’s tailored to my device and uniform in appearance and functionality across the websites and applications I use. What more could I want? You might want more, like shoving your brand down my throat, but I really don’t need to see BigFinanceCorp Green™️ as a themed element in my color or date picker.

If I could give HTML an aspirational slogan, it would be something along the lines of Mastercard’s old one: There are a few use cases platform defaults can’t solve; for everything else, there’s HTML.
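Not from the post, but a minimal sketch of how a page script might read what the user picked from that input, assuming only standard DOM APIs; depending on engine support, the value may come back as a familiar hex string or, with the new colorspace/alpha attributes, as a CSS color() string:

```ts
// Hypothetical companion script for the <input type="color"> markup above.
const input = document.querySelector<HTMLInputElement>('input[type="color"]');

input?.addEventListener("change", () => {
  // Engines supporting colorspace="display-p3" may report a value such as
  // "color(display-p3 1 0 0 / 0.5)"; older engines fall back to "#rrggbb".
  console.log("picked color:", input.value);
});
```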

a week ago 9 votes
Product Pseudoscience

In his post about “Vibe Driven Development”, Robin Rendle warns against what I’ll call the pseudoscientific approach to product building prevalent across the software industry:

when folks at tech companies talk about data they’re not talking about a well-researched study from a lab but actually wildly inconsistent and untrustworthy data scraped from an analytics dashboard

This approach has all the theater of science — “we measured and made decisions on the data, the numbers don’t lie,” etc. — but is missing the rigor of science. Like, for example, corroboration. Independent corroboration is a vital practice of science that we in tech conveniently gloss over in our (self-proclaimed) objective, data-driven decision making. In science you can observe something, measure it, analyze the results, and draw conclusions, but nobody accepts it as fact until there are multiple instances of independent corroboration.

Meanwhile in product, corroboration is often merely a group of people nodding along in support of a PowerPoint with some numbers supporting a foregone conclusion — “We should do X, that’s what the numbers say!” (What’s worse is when we have the hubris to think our experiments, anecdotal evidence, and conclusions should extend to others outside of our own teams, despite zero independent corroboration — looking at you, Medium articles.)

Don’t get me wrong, experimentation and measurement are great. But let’s not pretend there is (or should be) a science to everything we do. We don’t hold a candle to the rigor of science. Software is as much art as science. Embrace the vibe.

a week ago 10 votes
Multiple Computers

I’ve spent so much time, had so many headaches, and encountered so much complexity from what, in my estimation, boils down to this: trying to get something to work on multiple computers.

It might be time to just go back to having one computer — a personal laptop — do everything. No more commit, push, and let the cloud build and deploy. No more making it possible to do a task on my phone and tablet too. No more striving to make it possible to do anything from anywhere. Instead, I should accept the constraint of doing specific kinds of tasks when I’m at my laptop. No laptop? Don’t do it. Save it for later. Is it really that important?

I think I’d save myself a lot of time and headache with that constraint. No more continuous over-investment of my time in making it possible to do some particular task across multiple computers. It’s a subtle, but fundamental, shift in thinking about my approach to computing tasks.

Today, my default posture is to defer control of tasks to cloud computing platforms. Let them do the work, and I can access and monitor that work from any device. Like, for example, publishing a version of my website: git commit, push, and let the cloud build and deploy it. But beware, there be possible dragons! The build fails. It’s not clear why, but it “works on my machine”. Something is different between my computer and the computer in the cloud. Now I’m troubleshooting an issue unrelated to my website itself. I’m troubleshooting an issue with the build and deployment of my website across multiple computers. It’s easy to say: the build works on my machine, deploy it! It’s deceptively time-consuming to take that one more step and say: let another computer build it and deploy it.

So rather than taking the default posture of “cloud-first”, i.e. push to the cloud and let it handle everything, I’d rather take a “local-first” approach where I choose one primary device to do tasks on and ensure I can do them from there. Everything else beyond that, i.e. getting it to work on multiple computers, is a “progressive enhancement” in my workflow. I can invest the time, if I want to, but I don’t have to. This stands in contrast to where I am today, where if a build fails in the cloud I have to invest the time, because that’s how I’ve set up my workflow. I can only deploy via the cloud. So I have to figure out how to get the cloud’s computer to build my site, even when my laptop is doing it just fine.

It’s hard to make things work identically across multiple computers. I get it, that’s a program, not software. And that’s the work. But sometimes a program is just fine. Wisdom is knowing the difference.
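Not from the post, but to make the contrast concrete, here is a minimal sketch of what a “local-first” publish step could look like; the build command, output directory, and server path are hypothetical placeholders rather than the author's actual setup:

```ts
// Hypothetical local-first publish script (Node + TypeScript).
// Build on this one machine, then copy the output straight to the web server,
// skipping the cloud build/deploy step the post describes.
import { execSync } from "node:child_process";

const BUILD_CMD = "npm run build";                 // placeholder build step
const OUTPUT_DIR = "dist/";                        // placeholder output folder
const REMOTE = "user@example.com:/var/www/site/";  // placeholder server

// Build locally: "works on my machine" is the whole story here.
execSync(BUILD_CMD, { stdio: "inherit" });

// Sync the built files to the server.
execSync(`rsync -az --delete ${OUTPUT_DIR} ${REMOTE}`, { stdio: "inherit" });

console.log("Published from this laptop, no cloud build involved.");
```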

a week ago 14 votes

More in design

Junshanye × Googol by 古戈品牌

Unlocking the Code of Eastern Beauty in National Tea Where Mountains and Waters Sing in Harmony Nature holds the secrets...

yesterday 2 votes
Why AI Makes Craft More Valuable, Not Less

For the past twenty to thirty years, the creative services industry has pursued a strategy of elevating the perceived value of knowledge work over production work. Strategic thinking became the premium offering, while actual making was reframed as “tactical” and “commoditized.” Creative professionals steered their careers toward decision-making roles rather than making roles. Firms adjusted their positioning to sell ideas, not assets — strategy became the product, while labor became nearly anonymous. After twenty years in my own career, I believe this has been a fundamental mistake, especially for those who have so distanced themselves from craft that they can no longer make things.

The Unintended Consequences

The strategic pivot created two critical vulnerabilities that are now being exposed by AI:

For individuals: AI is already perceived as delivering ideas faster and with greater accuracy than traditional strategic processes, repositioning much of what passed for strategy as little better than educated guesswork. The consultant who built their career on frameworks and insights suddenly finds themselves competing with a tool that can generate similar outputs in seconds.

For firms: Those who focused staff on strategy and account management while “offshoring” production cannot easily pivot to new means of production, AI-assisted or otherwise. They’ve created organizations optimized for talking about work rather than doing it.

The Canary in the Coal Mine

In hindsight, the homogeneity of interaction design systems should have been our warning. We became so eager to accept tools that reduced labor — style guides that eliminated design decisions, component libraries that standardized interfaces, templates that streamlined production — that we literally cleared the decks for AI replacement. Many creative services firms now accept AI in the same way an army-less nation might surrender to an invader: they have no other choice. They’ve systematically dismantled their capacity to make things in favor of their capacity to think about things. Now they’re hoping they can just re-boot production with bots. I don’t think that will work.

AI, impressive as it is, still cannot make anything and everything. More importantly, it cannot produce things for existing systems as efficiently and effectively as a properly equipped person who understands both the tools and the context. The real world still requires:

Understanding client systems and constraints
Navigating technical limitations and possibilities
Iterating based on real feedback from real users
Adapting to changing requirements mid-project
Solving the thousand small problems that emerge during implementation

These aren’t strategic challenges — they’re craft challenges. They require the kind of deep, hands-on knowledge that comes only from actually making things, repeatedly, over time.

The New Premium

I see the evidence everywhere in my firm’s client accounts: there’s a desperate need to move as quickly as ever, motivated by the perception that AI has created about the overall pace of the market. But there’s also an acknowledgment that meaningful progress doesn’t come at the push of a button. The value of simply doing something — competently, efficiently, and with an understanding of how it fits into larger systems — has never been higher. This is why I still invest energy in my own craft and in communicating design fundamentals to anyone who will listen. Not because I’m nostalgic for pre-digital methods, but because I believe craft represents a sustainable competitive advantage in an AI-augmented world.

Action vs. Advice

The fundamental issue is that we confused talking about work with doing work. We elevated advice-giving over action-taking. We prioritized the ability to diagnose problems over the ability to solve them. But clients don’t ultimately pay for insights — they pay for outcomes. And outcomes require action. They require the messy, iterative, problem-solving work of actually building something that works in the real world.

The firms and individuals who will thrive in the coming years won’t be those with the best strategic frameworks or the most sophisticated AI prompts. They’ll be those who can take an idea — whether it comes from a human strategist or an AI system — and turn it into something real, functional, and valuable.

In my work, I regularly review design output from teams across the industry. I encounter both good ideas and bad ones, skillful craft and poor execution. Here’s what I’ve learned: it’s better to have a mediocre idea executed with strong craft than a brilliant idea executed poorly. When craft is solid, you know the idea can be refined — the execution capability exists, so iteration is possible. But when a promising idea is rendered poorly, it will miss its mark entirely, not because the thinking was wrong, but because no one possessed the skills to bring it to life effectively.

The pendulum that swung so far toward strategy needs to swing back toward craft. Not because technology is going away, but because technology makes the ability to actually build things more valuable, not less. In a world where everyone can generate ideas, the people who can execute those ideas become invaluable.

2 days ago 3 votes
Noise Beer by Kate Minchenok

Noise Beer is a collection of craft and dark beers whose visual identity is inspired by the noise music genre...

4 days ago 4 votes
visual journal – 2025 May 25

Many Grids I’ve been making very small collages, trying to challenge myself to create new patterns and new ways of connecting form and creating space. Well, are we? The last page in a book I started last year.

5 days ago 9 votes
9¾ by Constantin Bolimond

This gin is based on the industrial revolution that began in England in the last third of the 18th century...

a week ago 8 votes