It’s Christmas circa 2004. My teenage brothers, sisters, and I have all finished opening presents and we’re more than content to have absolutely nothing to do — it’s Christmas day after all! But not Dad. He’s in the bathroom laying tile. Again, this is Christmas day and Dad is on his hands and knees, in the bathroom, laying tile. We all laugh at him. “Look at this guy, he can’t not do work on Christmas Day. He has a problem.” Fast-forward two decades and, when our family gets together, we still get a good laugh poking fun at Dad — “Remember when he was laying tile in the bathroom on Christmas? Lol!” And yet. Here it is, Christmas day 2024. The kids are all asleep after a hectic day, and where do I find myself? Working on my personal website. I ask myself, “What am I doing? It’s the end of Christmas day and I’m working on my website? I have a problem!” But then I think of Dad on Christmas. Maybe we’re more similar than I thought. What I saw as “work” I now see as finding solace in...
5 months ago


More from Jim Nielsen’s Blog

Tradeoffs to Continuous Software?

I came across this post from the tech collective crftd. about how software is in a process of “continuous disintegration”:

One of the uncomfortable truths we sometimes have to break to people is that software isn’t just never “done”. Worse even, it rots… The practices of continuous integration act as enablers for us to keep adding value and keeping development maintainable, but they cannot stop the inevitable: The system will eventually fail in unexpected ways, as is the nature of complex systems.

That all resonates with me — software is rarely “done”; it generally has a shelf life and starts rotting the moment you ship it — but what really made me pause was this line:

The practices of continuous integration act as enablers for us

I read “enabler” there in the negative sense of the word, like in addiction, where the word “enabler” refers to someone who exploits others by encouraging a pattern of self-destructive behavior. Is CI/CD an enabler? I’d only ever thought of moving toward CI/CD as a net positive. Is it possible that, like everything, CI/CD has its tradeoffs and isn’t always the Best Thing Ever™️? What are the trade-offs of CI/CD?

The thought occurred to me that CI stands for “continuous investment”, because that’s what it requires to keep it working — a continuous investment in both the infrastructure that delivers the software and the software itself.

Everybody complains nowadays about how software requires a subscription. Why is that? Could it be, perhaps, because of CI/CD? If you want continuous updates to your software, you’re going to have to pay for it continuously. We’ve made delivering software continuously easy, which means we’ve made creating software that’s “done” hard — be careful what you make easy.

In some sense — at least on the web — I think you could argue that we don’t know how to make software that’s done (e.g. software that ships on a CD). We’re inundated with tools and practices and norms that enable the opposite of that. And, perhaps, we’ve traded something away there? When something comes along and enables new capabilities, it often severs others.

4 days ago 3 votes
Could I Have Some More Friction in My Life, Please?

A clip from “Buy Now! The Shopping Conspiracy” features a former executive of an online retailer explaining how motivated they were to make buying easy. Like, incredibly easy. So easy, in fact, that their goal was to “reduce your time to think a little bit more critically about a purchase you thought you wanted to make.” Why? Because if you pause for even a moment, you might realize you don’t actually want whatever you’re about to buy.

Been there. Ready to buy something and the slightest inconvenience surfaces — like when I can’t remember my credit card’s CVV and realize I’ll have to find my credit card and look it up — and that’s enough for me to say, “Wait a second, do I actually want to move my slug of a body and find my credit card? Nah.”

That feels like the socials too. The algorithms. The endless feeds. The social interfaces. All engineered to make you think less about what you’re consuming, to think less critically about reacting or responding or engaging. Don’t think, just scroll. Don’t think, just like. Don’t think, just repost. And now, with AI, don’t think at all.[1]

Because if you have to think, that’s friction. Friction is an engagement killer on content, especially the low-grade stuff. Friction makes people ask, “Is this really worth my time?”

Maybe we need a little more friction in the world. More things that merit our time. Fewer things that don’t. It’s kind of ironic how the things we need present so much friction in our lives (like getting healthcare) while the things we don’t need, which siphon money from our pockets (like online gambling[2]), present so little friction you could almost inadvertently slip right into them. It’s as if The Good Things™️ in life are full of friction while the hollow ones are frictionless.

[1] Nicholas Carr said, “The endless labor of self-expression cries out for the efficiency of automation.” Why think when you can prompt a probability machine to stitch together a facade of thinking for you?
[2] John Oliver did a segment on sports betting if you want to feel sad.

a week ago 6 votes
WebKit’s New Color Picker as an Example of Good Platform Defaults

I’ve written about how I don’t love the idea of overriding basic computing controls. Instead, I generally favor respecting user choice and providing the controls their platform provides. Of course, this means platforms need to surface better primitives rather than supplying basic ones with an ability to opt out.

What am I even talking about? Let me give an example. The WebKit team just shipped a new API for <input type=color> which gives users the ability to pick colors with wide-gamut P3 and alpha transparency. The entire API is just a little bit of declarative HTML:

<label>
  Select a color:
  <input type="color" colorspace="display-p3" alpha>
</label>

From that simple markup (on iOS) you get a beautiful, robust color picker. That’s a great color picker, and if you’re choosing colors on iOS and encountering this particular UI a lot, that’s even better — like, “Oh hey, I know how to use this thing!”

With a picker like that, how many folks really want additional APIs to override that interface and style it themselves? This is the kind of better platform default I’m talking about. A little bit of HTML markup, and boom, a great interface to a common computing task that’s tailored to my device and uniform in appearance and functionality across the websites and applications I use.

What more could I want? You might want more, like shoving your brand down my throat, but I really don’t need to see BigFinanceCorp Green™️ as a themed element in my color or date picker.

If I could give HTML an aspirational slogan, it would be something along the lines of Mastercard’s old one: there are a few use cases platform defaults can’t solve; for everything else, there’s HTML.
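To actually use the chosen color somewhere, nothing new is needed beyond the standard input and change events the element already fires. Here is a rough sketch; the id and the --accent custom property are made up for illustration, and the exact serialization of wide-gamut values is an assumption on my part:

<label>
  Select a color:
  <input id="accent-picker" type="color" colorspace="display-p3" alpha>
</label>
<script>
  // Read the current value whenever the user picks a new color.
  // Engines without the new options report a hex string like "#ff8800";
  // engines that support the colorspace/alpha options may report a CSS
  // color() string instead, e.g. "color(display-p3 1 0.5 0 / 0.5)".
  const picker = document.getElementById("accent-picker");
  picker.addEventListener("input", () => {
    document.documentElement.style.setProperty("--accent", picker.value);
    console.log("picked:", picker.value);
  });
</script>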

a week ago 10 votes
Product Pseudoscience

In his post about “Vibe Driven Development”, Robin Rendle warns against what I’ll call the pseudoscientific approach to product building prevalent across the software industry:

when folks at tech companies talk about data they’re not talking about a well-researched study from a lab but actually wildly inconsistent and untrustworthy data scraped from an analytics dashboard

This approach has all the theater of science — “we measured and made decisions on the data, the numbers don’t lie” etc. — but is missing the rigor of science. Like, for example, corroboration.

Independent corroboration is a vital practice of science that we in tech conveniently gloss over in our (self-proclaimed) objective, data-driven decision making. In science you can observe something, measure it, analyze the results, and draw conclusions, but nobody accepts it as fact until there are multiple instances of independent corroboration. Meanwhile in product, corroboration is often merely a group of people nodding along in support of a PowerPoint with some numbers supporting a foregone conclusion — “We should do X, that’s what the numbers say!” (What’s worse is when we have the hubris to think our experiments, anecdotal evidence, and conclusions should extend to others outside of our own teams, despite zero independent corroboration — looking at you, Medium articles.)

Don’t get me wrong, experimentation and measurement are great. But let’s not pretend there is (or should be) a science to everything we do. We don’t hold a candle to the rigor of science. Software is as much art as science. Embrace the vibe.

a week ago 10 votes
Multiple Computers

I’ve spent so much time, had so many headaches, and encountered so much complexity from what, in my estimation, boils down to this: trying to get something to work on multiple computers.

It might be time to just go back to having one computer — a personal laptop — do everything. No more commit, push, and let the cloud build and deploy. No more making it possible to do a task on my phone and tablet too. No more striving to make it possible to do anything from anywhere. Instead, I should accept the constraint of doing specific kinds of tasks when I’m at my laptop. No laptop? Don’t do it. Save it for later. Is it really that important?

I think I’d save myself a lot of time and headache with that constraint. No more continuous over-investment of my time in making it possible to do some particular task across multiple computers.

It’s a subtle, but fundamental, shift in thinking about my approach to computing tasks. Today, my default posture is to defer control of tasks to cloud computing platforms. Let them do the work, and I can access and monitor that work from any device. Like, for example, publishing a version of my website: git commit, push, and let the cloud build and deploy it.

But beware, there be possible dragons! The build fails. It’s not clear why, but it “works on my machine”. Something is different between my computer and the computer in the cloud. Now I’m troubleshooting an issue unrelated to my website itself. I’m troubleshooting an issue with the build and deployment of my website across multiple computers.

It’s easy to say: the build works on my machine, deploy it! It’s deceptively time-consuming to take that one more step and say: let another computer build it and deploy it.

So rather than taking the default posture of “cloud-first”, i.e. push to the cloud and let it handle everything, I’d rather take a “local-first” approach where I choose one primary device to do tasks on, and ensure I can do them from there. Everything else beyond that, i.e. getting it to work on multiple computers, is a “progressive enhancement” in my workflow. I can invest the time, if I want to, but I don’t have to. This stands in contrast to where I am today, which is: if a build fails in the cloud, I have to invest the time, because that’s how I’ve set up my workflow. I can only deploy via the cloud. So I have to figure out how to get the cloud’s computer to build my site, even when my laptop is doing it just fine.

It’s hard to make things work identically across multiple computers. I get it, that’s a program not software. And that’s the work. But sometimes a program is just fine. Wisdom is knowing the difference.

a week ago 15 votes

More in design

PAC – Personal Ambient Computing

The next evolution of personal computing won’t replace your phone — it will free computing from any single device.

Like most technologists of a certain age, many of my expectations for the future of computing were set by Star Trek production designers. It’s quite easy to connect many of the devices we have today to props designed 30-50 years ago: the citizens of the Federation had communicators before we had cellphones, tricorders before we had smartphones, PADDs before we had tablets, wearables before the Humane AI pin, and voice interfaces before we had Siri. You could easily make a case that Silicon Valley owes everything to Star Trek.

But now, there seems to be a shared notion that the computing paradigm established over the last half-century has run its course. The devices all work well, of course, but they all come with costs to culture that are worth correcting. We want to be more mobile, less distracted, less encumbered by particular modes of use in certain contexts. And it’s also worth pointing out that the creation of generative AI has made corporate shareholders ravenous for new products and new revenue streams. So whether we want them or not, we’re going to get new devices. The question is, will they be as revolutionary as they’re already hyped up to be?

I’m old enough to remember the gasping “city-changing” hype of the Segway before everyone realized it was just a scooter with a gyroscope. But in fairness, there was equal hype around the iPhone, and it isn’t overreaching to say that it remade culture even more extensively than a vehicle ever could have. So time will tell.

I do think there is room for a new approach to computing, but I don’t expect it to be a new device that renders all others obsolete. The smartphone didn’t do that to desktop or laptop computers, nor did the tablet. We shouldn’t expect a screenless, sensor-ridden device to replace anyone’s phone entirely, either. But done well, such a thing could be a welcome addition to a person’s kit. The question is whether that means just making a new thing or rethinking how the various computers in our life work together.

As I’ve been pondering that idea, I keep thinking back to Star Trek, and how the device that probably inspired the least wonder in me as a child is the one that seems most relevant now: the Federation’s wearables. Every officer wore a communicator pin — a kind of Humane Pin light — but they also all wore smaller pins at their collars signifying rank. In hindsight, it seems like those collar pins, which were discs the size of a watch battery, could have formed some kind of wearable, personal mesh network. And that idea got me going…

The future isn’t a zero-sum game between old and new interaction modes. Rather than being defined by a single new computing paradigm, the future will be characterized by an increase in computing: more devices doing more things. I’ve been thinking of this as PAC — Personal Ambient Computing.

Personal Ambient Computing

At its core is a modular component I’ve been envisioning as a small, disc-shaped computing unit roughly the diameter of a silver dollar but considerably thicker. This disc would contain processing power, storage, connectivity, sensors, and microphones. The disc could be worn as jewelry, embedded in a wristwatch with its own display, housed in a handheld device like a phone or reader, integrated into a desktop or portable (laptop or tablet) display, or even embedded in household appliances.

This approach would create a personal mesh network of PAC modules, each optimized for its context, rather than forcing every function in our lives through a smartphone. The key innovation lies in the standardized form factor. I imagine a magnetic edge system that allows the disc to snap into various enclosures — wristwatches, handhelds, desktop displays, wearable bands, necklaces, clips, and chargers. By getting the physical interface right from the start, the PAC hardware wouldn’t need significant redesign over time, while an entirely new ecosystem of enclosures could evolve more gradually and be created by anyone.

A worthy paradigm shift in computing is one that makes the most of modularity, open-source software and hardware, and context. Open-sourcing hardware enclosures, especially, would offer a massive leap forward for repairability and sustainability. In my illustration above, I even went as far as sketching a smaller handheld — exactly the sort of device I’d prefer over the typical smartphone. Mine would be proudly boxy with a larger top bezel to enable greater repair access to core components, like the camera, sensors, microphone, speakers, and a smaller, low-power screen I’d depend upon heavily for info throughout the day. Hey, a man can dream. The point is, a PAC approach would make niche devices much more likely.

Power

The disc itself could operate at lower power than a smartphone, while device pairings would benefit from additional power housed in larger enclosures, especially those with screens. This creates an elegant hierarchy where the disc provides your personal computing core and network connectivity, while housings add context-specific capabilities like high-resolution displays, enhanced processing, or extended battery life. Simple housings like jewelry would provide form factor and maybe extend battery life. More complex housings would add significant power and specialized components. People wouldn’t pay for screen-driving power in every disc they own, just in the housings that need it.

This modularity solves the chicken-and-egg problem that kills many new computing platforms. Instead of convincing people to buy an entirely new device that lacks an established software ecosystem, PAC could give us familiar form factors — watches, phones, desktop accessories — powered by a new paradigm. Third-party manufacturers could create housings without rebuilding core computing components.

Privacy

This vision of personal ambient computing aligns with what major corporations already want to achieve, but with a crucial difference: privacy. The current trajectory toward ambient computing comes at the cost of unprecedented surveillance. Apple, Google, Meta, OpenAI, and all the others envision futures where computing is everywhere, but where they monitor, control, and monetize the flow of information. PAC demands a different future — one that leaves these corporate gatekeepers behind.

A personal mesh should be just that: personal. Each disc should be configurable to sense or not sense based on user preferences, allowing contextual control over privacy settings. Users could choose which sensors are active in which contexts, which data stays local versus shared across their mesh, and which capabilities are enabled in different environments. A PAC unit should be as personal as your crypto vault.

Obviously, this is an idea with a lot of technical and practical hand-waving at work. And at this vantage point, it isn’t really about technical capability — I’m making a lot of assumptions about continued miniaturization. It is about computing power returning to individuals rather than being concentrated in corporate silos. PAC represents ambient computing without ambient surveillance. And it is about computing graduating from its current form and becoming more humanely and elegantly integrated into our day-to-day lives.

Next

The smartphone isn’t going anywhere. And we’re going to get re-dos of the AI devices that have already spectacularly failed. But we won’t get anywhere especially exciting until we look at the personal computing ecosystem holistically. PAC offers a more distributed, contextual approach that enhances rather than replaces effective interaction modes. It’s additive rather than replacement-based, which historically tends to drive successful technology adoption.

I know I’m not alone in imagining something like this. I’d just like to feel more confident that people with the right kind of resources would be willing to invest in it. By distributing computing across multiple form factors while maintaining continuity of experience, PAC could deliver on the promise of ubiquitous computing without sacrificing the privacy, control, and interaction diversity that make technology truly personal.

The future of computing shouldn’t be about choosing between old and new paradigms. It should be about computing that adapts to us, not the other way around.

2 days ago 2 votes
Junshanye × Googol by 古戈品牌

Unlocking the Code of Eastern Beauty in National Tea: Where Mountains and Waters Sing in Harmony. Nature holds the secrets...

3 days ago 4 votes
Why AI Makes Craft More Valuable, Not Less

For the past twenty to thirty years, the creative services industry has pursued a strategy of elevating the perceived value of knowledge work over production work. Strategic thinking became the premium offering, while actual making was reframed as “tactical” and “commoditized.” Creative professionals steered their careers toward decision-making roles rather than making roles. Firms adjusted their positioning to sell ideas, not assets — strategy became the product, while labor became nearly anonymous.

After twenty years in my own career, I believe this has been a fundamental mistake, especially for those who have so distanced themselves from craft that they can no longer make things.

The Unintended Consequences

The strategic pivot created two critical vulnerabilities that are now being exposed by AI:

For individuals: AI is already perceived as delivering ideas faster and with greater accuracy than traditional strategic processes, repositioning much of what passed for strategy as little better than educated guesswork. The consultant who built their career on frameworks and insights suddenly finds themselves competing with a tool that can generate similar outputs in seconds.

For firms: Those who focused staff on strategy and account management while “offshoring” production cannot easily pivot to new means of production, AI-assisted or otherwise. They’ve created organizations optimized for talking about work rather than doing it.

The Canary in the Coal Mine

In hindsight, the homogeneity of interaction design systems should have been our warning. We became so eager to accept tools that reduced labor — style guides that eliminated design decisions, component libraries that standardized interfaces, templates that streamlined production — that we literally cleared the decks for AI replacement.

Many creative services firms now accept AI in the same way an army-less nation might surrender to an invader: they have no other choice. They’ve systematically dismantled their capacity to make things in favor of their capacity to think about things. Now they’re hoping they can just re-boot production with bots.

I don’t think that will work. AI, impressive as it is, still cannot make anything and everything. More importantly, it cannot produce things for existing systems as efficiently and effectively as a properly equipped person who understands both the tools and the context. The real world still requires:

- Understanding client systems and constraints
- Navigating technical limitations and possibilities
- Iterating based on real feedback from real users
- Adapting to changing requirements mid-project
- Solving the thousand small problems that emerge during implementation

These aren’t strategic challenges — they’re craft challenges. They require the kind of deep, hands-on knowledge that comes only from actually making things, repeatedly, over time.

The New Premium

I see the evidence everywhere in my firm’s client accounts: there’s a desperate need to move as quickly as ever, motivated by the perception that AI has created about the overall pace of the market. But there’s also an acknowledgment that meaningful progress doesn’t come at the push of a button. The value of simply doing something — competently, efficiently, and with an understanding of how it fits into larger systems — has never been higher.

This is why I still invest energy in my own craft and in communicating design fundamentals to anyone who will listen. Not because I’m nostalgic for pre-digital methods, but because I believe craft represents a sustainable competitive advantage in an AI-augmented world.

Action vs. Advice

The fundamental issue is that we confused talking about work with doing work. We elevated advice-giving over action-taking. We prioritized the ability to diagnose problems over the ability to solve them.

But clients don’t ultimately pay for insights — they pay for outcomes. And outcomes require action. They require the messy, iterative, problem-solving work of actually building something that works in the real world. The firms and individuals who will thrive in the coming years won’t be those with the best strategic frameworks or the most sophisticated AI prompts. They’ll be those who can take an idea — whether it comes from a human strategist or an AI system — and turn it into something real, functional, and valuable.

In my work, I regularly review design output from teams across the industry. I encounter both good ideas and bad ones, skillful craft and poor execution. Here’s what I’ve learned: it’s better to have a mediocre idea executed with strong craft than a brilliant idea executed poorly. When craft is solid, you know the idea can be refined — the execution capability exists, so iteration is possible. But when a promising idea is rendered poorly, it will miss its mark entirely, not because the thinking was wrong, but because no one possessed the skills to bring it to life effectively.

The pendulum that swung so far toward strategy needs to swing back toward craft. Not because technology is going away, but because technology makes the ability to actually build things more valuable, not less. In a world where everyone can generate ideas, the people who can execute those ideas become invaluable.

4 days ago 4 votes
Noise Beer by Kate Minchenok

Noise Beer is a collection of craft and dark beers whose visual identity is inspired by the noise music genre...

6 days ago 5 votes
visual journal – 2025 May 25

Many Grids
I’ve been making very small collages, trying to challenge myself to create new patterns and new ways of connecting form and creating space. Well, are we? The last page in a book I started last year.

a week ago 10 votes