The other day I saw a meme that went something like this: Isn’t it crappy how basic human activities like singing, dancing, and making art have been turned into skills instead of being recognized as behaviors? The point of doing these things has become to get good at them. But they should be recognized as things humans do innately, like how birds sing or bees make hives. I thought about that for a minute, then decided: making websites should be the same! The original vision for the web, according to Tim Berners-Lee, was to make it a collaborative medium where everyone could read and write. Social media sort of achieved this, but the incentives are off. And it’s not just about ownership of the content you produce and who can monetize it, but the context in which you produce it. Mandy nails this in her recent piece “Coming home”: While one of the reasons oft declared for using POSSE is the ability to own your content, I’m less interested in ownership than I am in context. Writing on my own...
5 months ago


More from Jim Nielsen’s Blog

Building Websites With LLMS

And by LLMS I mean: (L)ots of (L)ittle ht(M)l page(S). I recently shipped some updates to my blog. Through the design/development process, I had some insights which made me question my knee-jerk reaction to building pieces of a page as JS-powered interactions on top of the existing document. With cross-document view transitions getting broader and broader support, I’m realizing that building in-page, progressively-enhanced interactions is more work than simply building two HTML pages and linking them. I’m calling this approach “lots of little HTML pages” in my head.

As I find myself trying to build progressively-enhanced features with JavaScript — like a fly-out navigation menu, or an on-page search, or filtering content — I stop and ask myself: “Can I build this as a separate HTML page triggered by a link, rather than JavaScript-injected content built from a button?” I kinda love the results. I build separate, small HTML pages for each “interaction” I want, then I let CSS transitions take over and I get something that feels better than its JS counterpart for way less work. Allow me two quick examples.

Example 1: Filtering. Working on my homepage, I found myself wanting a list of posts filtered by some kind of criteria, like: the most recent posts, the ones being trafficked the most, the ones that’ve had lots of Hacker News traffic in the past. My first impulse was to have a list of posts you can filter with JavaScript. But the more I built it, the more complicated it got. Each “list” of posts needed a slightly different set of data. And each one had a different sort order. What I thought was going to be “stick a bunch of <li>s in the DOM, and show/hide some based on the current filter” turned into lots of data-x attributes, per-list sorting logic, etc. I realized quickly this wasn’t a trivial, progressively-enhanced feature. I didn’t want to write a bunch of client-side JavaScript for what would take me seconds to write on “the server” (my static site generator).

Then I thought: why don’t I just do this with my static site generator? Each filter can be its own, separate HTML page, and with CSS view transitions I’ll get a nice transition effect for free! Minutes later I had it all working — mostly, I had to learn a few small things about aspect ratio in transitions — plus I had fancy transitions between “tabs” for free! This really feels like a game-changer for simple sites: if you can keep your site simple, it’s easier to build what would traditionally be JavaScript-powered on-page interactions as small, linked HTML pages.

Example 2: Navigation. This got me thinking: maybe I should do the same thing for my navigation? Usually I think, “Ok, so I’ll have a hamburger icon with a bunch of navigational elements in it, and when it’s clicked you gotta reveal it, etc.” And then I thought, “What if it’s just a new HTML page?” Because I’m using a static site generator, it’s really easy to create a new HTML page. A few minutes later and I had it. No client-side JS required. You navigate to the “Menu” and you get a page of options, with an “x” to simulate closing the menu and going back to where you were.

I liked it so much for my navigation, I did the same thing with search. Clicking the icon doesn’t use JavaScript to inject new markup and animate things on screen. Nope. It’s just a link to a new page with CSS supporting a cross-document view transition. Granted, there are some trade-offs to this approach. But on the whole, I really like it. It was so easy to build, and I know it’s going to be incredibly easy to maintain!
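To make the pattern concrete, here is a minimal sketch of what one of these "little HTML pages" might look like, assuming a same-origin static site. The file names and markup are invented for illustration, not Jim's actual code; the `@view-transition` at-rule is the CSS opt-in for cross-document view transitions in browsers that support them.

```html
<!-- index.html: the "list" page. File names and markup are illustrative only. -->
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Posts</title>
  <style>
    /* Opt this document in to cross-document view transitions
       (same-origin navigations, in supporting browsers). */
    @view-transition {
      navigation: auto;
    }
  </style>
</head>
<body>
  <!-- "Filtering" and the "menu" are just links to other small pages, no JS. -->
  <nav>
    <a href="/recent.html">Recent</a>
    <a href="/popular.html">Popular</a>
    <a href="/menu.html">Menu</a>
  </nav>
  <h1>Recent posts</h1>
  <ul>
    <li><a href="/posts/example.html">An example post</a></li>
  </ul>
</body>
</html>
```

The hypothetical menu.html and popular.html would be near-identical documents carrying the same `@view-transition` rule, with the "x" close button being nothing more than a link back to the page you came from.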
I think this is a good example of leveraging the grain of the web. It’s really easy to build a simple website when you can shift your perspective to viewing on-page interactivity as simple HTML page navigations powered by cross-document CSS view transitions (rather than doing all of that with client-side JS). Jason Bradberry has a neat article that’s tangential to this idea over at Piccalilli. It’s more from the design standpoint, but functionally it could work pretty much the same as this: your “menu” or “navigation” is its own page.

a week ago
AX, DX, UX

Matt Biilmann, CEO of Netlify, published an interesting piece called “Introducing AX: Why Agent Experience Matters” where he argues for the coming importance of a new “X” (experience) in software: the agent experience, meaning the experience your users’ AI agents will have as automated users of products/platforms.

Too many companies are focusing on adding shallow AI features all over their products or building yet another AI agent. The real breakthrough will be thinking about how your customers’ favorite agents can help them derive more value from your product. This requires thinking deeply about agents as a persona your team is building and developing for.

In this future, software that can’t be used by an automated agent will feel less powerful and more burdensome to deal with, whereas software that AI agents can use on your behalf will become incredibly capable and efficient. So you have to start thinking about these new “users” of your product: Is it simple for an Agent to get access to operating a platform on behalf of a user? Are there clean, well described APIs that agents can operate? Are there machine-ready documentation and context for LLMs and agents to properly use the available platform and SDKs? Addressing the distinct needs of agents through better AX, will improve their usefulness for the benefit of the human user.

In summary: we need to start focusing on AX or “agent experience” — the holistic experience AI agents will have as the users of a product or platform.

The idea is: teams focus more time and attention on “AX” (agent experience) so that human end-users can bring their favorite agents to our platforms/products and increase productivity. But I’m afraid the reality will be that the limited time and resources teams spend today building stuff for humans will instead get spent building stuff for robots, and as a byproduct everything human-centric about software will become increasingly subpar as we rationalize to ourselves, “Software doesn’t need to be good for humans, because humans don’t use software anymore. Their robots do!” In that world, anybody complaining about bad UX will be told to shift to using the AX because “that’s where we spent all our time and effort to make your experience great”.

Prior Art: DX. DX in theory: make the DX for the people who are building UX really great and they’ll be able to deliver more value faster. DX in practice: DX requires trade-offs, and a spotlight on DX concerns means UX concerns take a back seat. Some DX concerns end up trumping UX concerns because “we’ll ship more value faster”, but the result is an overall degradation of UX because DX was prioritized first. Ultimately, time and resources are constraining factors and trade-offs have to be made somewhere, so they’re made for and on behalf of the people who make the software, because they’re the ones who feel the pain directly. User pain is only indirect.

Future Art: AX. AX in theory: build great stuff for agents (AX) so people can use stuff more efficiently by bringing their own tools. AX in practice: time and resources being finite, AX trumps UX, with the rationale being: “It’s ok if the human bit (UX) is a bit sloppy and obtuse, because we’ll make the robot bit (AX) so good people won’t ever care about how poor the UX is because they’ll never use it!” But I think we know how that plays out. A few companies may do that well, but most software will become even more confusing and obtuse to humans because most thought and care is poured into the robot experience of the product.

The thinking will be: “No need to pour extra care and thought into the inefficient experience some humans might have. Better to make the agent experience really great, so humans won’t want to interface with our thing manually.” In other words: we don’t have the time or resources to worry about the manual human experience because we’ve got all these robots to worry about! It appears there’s no need to fear AI becoming sentient and replacing us humans. We’ll phase ourselves out long before the robots ever become self-aware.

All that said, I’m not against the idea of “AX”, but I do think the North Star of any “X” should remain centered on the (human) end-user. UX over AX over DX.

a week ago
Can You Get Better Doing a Bad Job?

Rick Rubin has an interview with Woody Harrelson on his podcast Tetragrammaton. Right at the beginning, Woody talks about his experience acting and how he’s had roles that didn’t turn out very well. He says sometimes he comes away from those experiences feeling dirty, like “I never connected to that, it never resonated, and now I feel like I sold myself... Why did I do that?!”

Then Rick asks him: even in those cases, do you feel like you got better at your craft because you did your job? Woody’s response: I think when you do your job badly you never really get better at your craft.

Seems relevant to making websites. I’ve built websites on technology stacks I knew didn’t feel fit for their context, and Woody’s experience rings true. You just don’t feel right, like a little voice that says, “You knew that wasn’t going to turn out very good. Why did you do that??” I don’t know if I’d go so far as to say I didn’t get better because of it. Experience is a hard teacher. Perhaps, from a technical standpoint, my skillset didn’t get any better. But from an experiential standpoint, my judgement got better. I learned to avoid (or try to re-structure) work that’s being carried out in a way that doesn’t align with its own purpose and essence.

Granted, that kind of alignment is difficult. If it makes you feel any better, even Woody admits this is not an easy thing to do: I would think after all this time, surely I’m not going to be doing stuff I’m not proud of. Or be a part of something I’m not proud of. But damn... it still happens.

a week ago
Limitations vs. Capabilities

Andy Jiang over on the Deno blog writes “If you're not using npm specifiers, you're doing it wrong”: During the early days of Deno, we recommended importing npm packages via HTTP with transpile services such as esm.sh and unpkg.com. However, there are limitations to importing npm packages this way, such as lack of install hooks, duplicate dependency resolution issues, loading data files, etc.

I know, I know, here I go harping on HTTP imports again, but this article reinforces to me that one man’s “limitations” are another man’s “features”. For me, the limitations (i.e. constraints) of HTTP imports in Deno were a feature. I loved them precisely because they encouraged me to do something different than what node/npm encouraged. They encouraged me to 1) do less, and 2) be more web-like. Trying to do more with less is a great way to foster creativity. Plus, doing less means you have less to worry about.

Take, for example, install hooks (since they’re mentioned in the article). Install hooks are a security vector. Use them and you’re trading ease for additional security concerns. Don’t use them and you have zero additional security concerns. (In the vein of being webby: browsers don’t offer install hooks on <script> tags.)

I get it, though. It’s hard to advocate for restraint and simplicity in the face of gaining adoption within the web-industrial-complex. Giving people what they want — what they’re used to — is easier than teaching them to change their ways.

Note to self: when you choose to use tools with practices, patterns, and recommendations designed for industrial-level use, you’re gonna get industrial-level side effects, industrial-level problems, and industrial-level complexity as a byproduct.

As much as it’s grown, the web still has grassroots in being a programming platform accessible to regular people, because making a website was meant to be for everyone. I would love a JavaScript runtime aligned with that ethos. Maybe with initiatives like Project Fugu that runtime will actually be the browser.
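For contrast, here is a rough sketch of the "webby" flavor of dependency loading the post is defending, in a plain HTML page via esm.sh (the package and version are arbitrary examples of my own choosing, not anything the post uses). The dependency arrives over HTTP at runtime, so there is no install step and therefore no install hooks.

```html
<!-- A sketch of HTTP-style imports in the browser itself.
     canvas-confetti is an arbitrary example package served by esm.sh. -->
<!doctype html>
<html lang="en">
<body>
  <button id="party">Celebrate</button>
  <script type="module">
    // No package.json, no node_modules, no install hooks:
    // the module is fetched over HTTP and cached by the browser.
    import confetti from "https://esm.sh/canvas-confetti@1.6.0";

    document.querySelector("#party").addEventListener("click", () => {
      confetti(); // fire the default confetti burst
    });
  </script>
</body>
</html>
```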

a week ago
Sanding UI, pt. II

Let’s say you make a UI to gather some user feedback. Nothing complicated. Just a thumbs up/down widget. It starts out neutral, but when the user clicks up or down, you highlight what they clicked and de-emphasize/disable the other (so it requires an explicit toggle to change your mind). So you implement it. Ship it. Cool. Works, right?

Well, per my previous article about “sanding” a user interface by clicking around a lot, did you click on it a lot? If you do, you’ll find that doing so selects the thumbs up/down icon as if it were text. So now you have this weird text selection that’s a bit of an eyesore. It’s not a relevant text selection because it’s not text. It’s an SVG. So the selection UI that appears is misleading and distracting.

One possible fix: leverage the user-select: none property in CSS, which makes the icon not selectable. When the user clicks multiple times to toggle, no text selection UI will appear.

Cool. Great! Another reason to click around a lot. You can ensure any rough edges are smoothed out, and any “UI splinters” are ones you get (and fix) in place of your users.
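A minimal sketch of that fix, with invented markup and class names (the post's actual widget will differ, and uses SVG icons rather than emoji); the key declaration is `user-select: none` on the clickable icons.

```html
<!-- A sketch of the thumbs up/down widget. Markup, class names, and the use of
     emoji instead of SVG icons are invented for illustration. -->
<style>
  .feedback button {
    user-select: none; /* repeated clicks won't trigger the text-selection UI */
    font-size: 1.5rem;
    background: none;
    border: none;
    cursor: pointer;
  }
  .feedback button[aria-pressed="true"] {
    outline: 2px solid currentColor; /* highlight the chosen option */
  }
  .feedback button[disabled] {
    opacity: 0.5; /* de-emphasize the other option */
    cursor: default;
  }
</style>

<div class="feedback">
  <button type="button" aria-pressed="false" aria-label="Thumbs up">👍</button>
  <button type="button" aria-pressed="false" aria-label="Thumbs down">👎</button>
</div>
```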

2 weeks ago

More in programming

Supa Pecha Kucha


17 hours ago
The Power of Principles in Web Development Decision-Making (article)

Discover how The Epic Programming Principles can transform your web development decision-making, boost your career, and help you build better software.

9 hours ago
Closing the borders alone won't fix the problems

Denmark has been reaping lots of delayed accolades from its relatively strict immigration policy lately. The Swedes and the Germans in particular are now eager to take inspiration from The Danish Model, given their predicaments. The very same countries that until recently condemned Denmark for not adopting the open-arms/open-border policies they themselves championed as Moral Superpowers.

But even in Denmark, thirty years after public opposition to mass immigration started getting real political representation, the consequences of culturally-incompatible descendants from MENAPT countries continue to stress the high-trust societal model. Here are just three major cases that have been covered in the Danish media in 2025 alone:

Danish public schools are increasingly struggling with violence and threats against students and teachers, primarily from descendants of MENAPT immigrants. In schools with 30% or more immigrants, violence is twice as prevalent. This is causing a flight to private schools among parents who can afford it (including some Syrians!). Some teachers are quitting the profession as a result, saying "the Quran runs the classroom".

Danish women are increasingly feeling unsafe in the nightlife. The mayor of the country's third-largest city, Odense, says he knows why: "It's groups of young men with an immigrant background that's causing it. We might as well be honest about that." But unfortunately, the only suggestion he had to deal with the problem was that "when [the women] meet these groups... they should take a big detour around them".

A soccer club from the infamous ghetto area of Vollsmose got national attention because every other team in their league refused to play them, due to the team's long history of violent assaults and death threats against opposing teams and referees. Bizarrely, this led to a situation where the team got to the top of its division because they'd "win" every forfeited match.

Problems of this sort have existed in Denmark for well over thirty years. So in a way, none of this should be surprising. But it actually is, because it shows that long-term assimilation just isn't happening at a scale that tackles these problems. In fact, data shows the opposite: descendants of MENAPT immigrants are more likely to be violent and troublesome than their parents. That's an explosive point, because it blows up the thesis that time will solve these problems. It shows instead that time actually just makes them worse. And then what?

This is particularly pertinent in the analysis of Sweden. After the "far right" party of the Sweden Democrats got into government, new immigrant arrivals have plummeted. But unfortunately, the net share of immigrants is still increasing, in part because of family reunifications, and thus the problems continue. Meaning even if European countries "close the borders", they're still condemned to deal with the damning effects of maladjusted MENAPT immigrant descendants for decades to come. If the intervention stops there.

There are no easy answers here. Obviously, if you're in a hole, you should stop digging. And Sweden has done just that. But just because you aren't compounding the problem doesn't mean you've found a way out. Denmark proves to be both a positive example of minimizing the digging and a cautionary tale that the hole is still there.

19 hours ago
We all lose when art is anonymised

One rabbit hole I can never resist going down is finding the original creator of a piece of art. This sounds simple, but it’s often quite difficult. The Internet is a maze of social media accounts that only exist to repost other people’s art, usually with minimal or non-existent attribution. A popular image spawns a thousand copies, each a little further from the original. Signatures get cropped, creators’ names vanish, and we’re left with meaningless phrases like “no copyright intended”, as if that magically absolves someone of artistic theft.

Why do I do this? I’ve always been a bit obsessive, a bit completionist. I’ve worked in cultural heritage for eight years, which has made me more aware of copyright and more curious about provenance. And it’s satisfying to know I’ve found the original source, that I can’t dig any further.

This takes time. It’s digital detective work, using tools like Google Lens and TinEye, and it’s not always easy or possible. Sometimes the original pops straight to the top, but other times it takes a lot of digging to find the source of an image.

So many of us have become accustomed to art as an endless, anonymous stream of “content”. A beautiful image appears in our feed, we give it a quick heart, and scroll on, with no thought for the human who sweated blood and tears to create it. That original artist feels distant, disconnected. Whatever benefit they might get from the “exposure” of their work going viral, they don’t get any if their name has been removed first.

I came across two examples recently that remind me it’s not just artists who miss out – it’s everyone who enjoys art.

I saw a photo of some traffic lights on Tumblr. I love their misty, nighttime aesthetic, the way the bright colours of the lights cut through the fog, the totality of the surrounding darkness. But there was no name – somebody had just uploaded the image to their Tumblr page, it was reblogged a bunch of times, and then it appeared on my dashboard. Who took it? I used Google Lens to find the original photographer: Lucas Zimmerman. Then I discovered it was part of a series. And there was a sequel. I found interviews. Context. Related work. I found all this cool stuff, but only because I knew Lucas’s name.

Traffic Lights, by Lucas Zimmerman. Published on Behance.net under a CC BY‑NC 4.0 license, and reposted here in accordance with that license.

The second example was a silent video of somebody making tiny chess pieces, just captioned “wow”. It was clearly an edit of another video, with fast-paced cuts to make it accommodate a short attention span – and again with no attribution. This was a little harder to find – I had to search several frames in Google Lens before I found a summary on a Russian website, which had a link to a YouTube video by metalworker and woodworker Левша (Levsha). This video is four times longer than the cut-up version I found, in higher resolution, and with commentary from the original creator. I don’t speak Russian, but YouTube has auto-translated subtitles. Now I know how this amazing set was made, and I have a much better understanding of the materials and techniques involved. (This includes the delightful name Wenge wood, which I’d never heard before.) https://youtube.com/watch?v=QoKdDK3y-mQ

A piece of art is more than just a single image or video. It’s a process, a human story. When art is detached from its context and creator, we lose something fundamental. Creators lose the chance to benefit from their work, and we lose the opportunity to engage with it in a deeper way. We can’t learn how it was made, find their other work, or discover how to make similar art for ourselves.

The Internet has done many wonderful things for art, but it’s also a machine for endless copyright infringement. It’s not just about generative AI and content scraping – those are serious issues, but this problem existed long before any of us had heard of ChatGPT. It’s a thousand tiny paper cuts. How many of us have used an image from the Internet because it showed up in a search, without a second thought for its creator? When Google Images says “images may be subject to copyright”, how many of us have really thought about what that means?

Next time you want to use an image from the web, look to see if it’s shared under a license that allows reuse, and make sure you include the appropriate attribution – and if not, look for a different image.

Finding the original creator is hard, sometimes impossible. The Internet is full of shadows: copies of things that went offline years ago. But when I succeed, it feels worth the effort – both for the original artist and for myself. When I read a book or watch a TV show, the credits guide me to the artists, and I can appreciate both them and the rest of their work. I wish the Internet was more like that. I wish the platforms we rely on put more emphasis on credit and attribution, and the people behind art.

The next time an image catches your eye, take a moment. Who made this? What does it mean? What’s their story?

yesterday
Apple does AI as Microsoft did mobile

When the iPhone first appeared in 2007, Microsoft was sitting pretty with their mobile strategy. They'd been early to the market with Windows CE, and they were fast-following the iPod with their Zune. They also had the dominant operating system, the dominant office package, and control of the enterprise. The future on mobile must have looked so bright! But of course, now we know it wasn't. Steve Ballmer infamously dismissed the iPhone with a chuckle, as he believed all of Microsoft's past glory would guarantee them mobile victory. He wasn't worried at all. He clearly should have been!

After reliving that Ballmer moment, it's uncanny to watch this CNBC interview from one year ago with Johny Srouji and John Ternus from Apple on their AI strategy. Ternus even repeats the chuckle!! Exuding the same delusional confidence that lost Ballmer's Microsoft any serious part in the mobile game.

But somehow, Apple's problems with AI seem even more dire. Because there's apparently no one steering the ship. Apple has been promising customers a bag of vaporware since last fall, and they're nowhere close to being able to deliver on the shiny concept demos. The ones that were going to make Apple Intelligence worthy of its name, and not just terrible image generation that is years behind the state of the art. Nobody at Apple seems able or courageous enough to face the music: Apple Intelligence sucks. Siri sucks. None of the vaporware is anywhere close to happening. Yet as late as last week, you have Cook promoting the new MacBook Air with "Apple Intelligence". Yikes.

This is partly down to the org chart. John Giannandrea is Apple's VP of ML/AI, and he reports directly to Tim Cook. He's been in the seat since 2018. But Cook evidently does not have the product savvy to be able to tell bullshit from benefit, so he keeps giving Giannandrea more rope. Now the fella has hung Apple's reputation on vaporware, promised all iPhone 16 customers something magical that just won't happen, and even spec-bumped all their devices with more RAM for nothing but diminished margins. Ouch.

This is what regression to the mean looks like. This is what fiefdom management looks like. This is what having a company run by a logistics guy looks like. Apple needs a leadership reboot, stat. That asterisk is a stain.

2 days ago