More from Jim Nielsen’s Blog
And by LLMS I mean: (L)ots of (L)ittle ht(M)l page(S).

I recently shipped some updates to my blog. Through the design/development process, I had some insights which made me question my knee-jerk reaction to building pieces of a page as JS-powered interactions on top of the existing document.

With cross-document view transitions getting broader and broader support, I’m realizing that building in-page, progressively-enhanced interactions is more work than simply building two HTML pages and linking them. I’m calling this approach “lots of little HTML pages” in my head.

As I find myself trying to build progressively-enhanced features with JavaScript — like a fly-out navigation menu, or an on-page search, or filtering content — I stop and ask myself: “Can I build this as a separate HTML page triggered by a link, rather than JavaScript-injected content built from a button?”

I kinda love the results. I build separate, small HTML pages for each “interaction” I want, then I let CSS transitions take over and I get something that feels better than its JS counterpart for way less work.

Allow me two quick examples.

Example 1: Filtering

Working on my homepage, I found myself wanting a list of posts filtered by some kind of criteria, like:

- The most recent posts
- The ones being trafficked the most
- The ones that’ve had lots of Hacker News traffic in the past

My first impulse was to have a list of posts you can filter with JavaScript. But the more I built it, the more complicated it got. Each “list” of posts needed a slightly different set of data. And each one had a different sort order. What I thought was going to be “stick a bunch of <li>s in the DOM, and show/hide some based on the current filter” turned into lots of data-x attributes, per-list sorting logic, etc. I realized quickly this wasn’t a trivial, progressively-enhanced feature. I didn’t want to write a bunch of client-side JavaScript for what would take me seconds to write on “the server” (my static site generator).

Then I thought: Why don’t I just do this with my static site generator? Each filter can be its own, separate HTML page, and with CSS view transitions I’ll get a nice transition effect for free!

Minutes later I had it all working — mostly, I had to learn a few small things about aspect ratio in transitions — plus I had fancy transitions between “tabs” for free!

This really feels like a game-changer for simple sites. If you can keep your site simple, it’s easier to build what would traditionally be JavaScript-powered on-page interactions as small, linked HTML pages.

Example 2: Navigation

This got me thinking: maybe I should do the same thing for my navigation?

Usually I think “Ok, so I’ll have a hamburger icon with a bunch of navigational elements in it, and when it’s clicked you gotta reveal it, etc.” And I thought, “What if it’s just a new HTML page?”[1]

Because I’m using a static site generator, it’s really easy to create a new HTML page. A few minutes later and I had it. No client-side JS required. You navigate to the “Menu” and you get a page of options, with an “x” to simulate closing the menu and going back to where you were.

I liked it so much for my navigation, I did the same thing with search. Clicking the icon doesn’t use JavaScript to inject new markup and animate things on screen. Nope. It’s just a link to a new page with CSS supporting a cross-document view transition.

Granted, there are some trade-offs to this approach. But on the whole, I really like it. It was so easy to build and I know it’s going to be incredibly easy to maintain!
I think this is a good example of leveraging the grain of the web. It’s really easy to build a simple website when you can shift your perspective to viewing on-page interactivity as simple HTML page navigations powered by cross-document CSS transitions (rather than doing all of that as client-side JS).

Jason Bradberry has a neat article that’s tangential to this idea over at Piccalil. It’s more from the design standpoint, but functionally it could work pretty much the same as this: your “menu” or “navigation” is its own page.
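If you’re wondering what powers those “free” transitions between pages, here’s a minimal sketch of the CSS involved. This isn’t my actual code and the selectors are made up for illustration; the only requirement is that both the page you leave and the page you land on opt in with the @view-transition rule, and the view-transition-name bit is optional sugar for morphing a specific element (like the list of posts) between pages.

```css
/* Opt in to cross-document view transitions.
   This rule must be present on BOTH the outgoing and incoming page. */
@view-transition {
  navigation: auto;
}

/* Optional: give an element the same view-transition-name on both pages
   and the browser will animate it from its old position/size to its new
   one during the navigation. ".post-list" is a hypothetical class. */
.post-list {
  view-transition-name: post-list;
}

/* Optional: tweak the default cross-fade of everything else. */
::view-transition-old(root),
::view-transition-new(root) {
  animation-duration: 200ms;
}
```

With only the first rule in place you already get a cross-fade between pages; the named element is what makes the “tabs” feel like they’re animating in place rather than reloading.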
Matt Biilmann, CEO of Netlify, published an interesting piece called “Introducing AX: Why Agent Experience Matters” where he argues for the coming importance of a new “X” (experience) in software: the agent experience, meaning the experience your users’ AI agents will have as automated users of products/platforms.

Too many companies are focusing on adding shallow AI features all over their products or building yet another AI agent. The real breakthrough will be thinking about how your customers’ favorite agents can help them derive more value from your product. This requires thinking deeply about agents as a persona your team is building and developing for.

In this future, software that can’t be used by an automated agent will feel less powerful and more burdensome to deal with, whereas software that AI agents can use on your behalf will become incredibly capable and efficient. So you have to start thinking about these new “users” of your product: Is it simple for an Agent to get access to operating a platform on behalf of a user? Are there clean, well described APIs that agents can operate? Are there machine-ready documentation and context for LLMs and agents to properly use the available platform and SDKs?

Addressing the distinct needs of agents through better AX will improve their usefulness for the benefit of the human user.

In summary: We need to start focusing on AX or “agent experience” — the holistic experience AI agents will have as the user of a product or platform.

The idea is: teams focus more time and attention on “AX” (agent experience) so that human end-users can bring their favorite agents to our platforms/products and increase productivity.

But I’m afraid the reality will be that the limited time and resources teams spend today building stuff for humans will instead get spent building stuff for robots, and as a byproduct everything human-centric about software will become increasingly subpar as we rationalize to ourselves, “Software doesn’t need to be good for humans because humans don’t use software anymore. Their robots do!”

In that world, anybody complaining about bad UX will be told to shift to using the AX because “that’s where we spent all our time and effort to make your experience great”.

Prior Art: DX

DX in theory: make the DX for people who are building UX really great and they’ll be able to deliver more value faster.

DX in practice: DX requires trade-offs, and a spotlight on DX concerns means UX concerns take a back seat. Some DX concerns end up trumping UX concerns because “we’ll ship more value faster”, but the result is an overall degradation of UX because DX was prioritized first.

Ultimately, time and resources are constraining factors and trade-offs have to be made somewhere, so they’re made for and on behalf of the people who make the software, because they’re the ones who feel the pain directly. User pain is only indirect.

Future Art: AX

AX in theory: build great stuff for agents (AX) so people can use stuff more efficiently by bringing their own tools.

AX in practice: time and resources being finite, AX trumps UX with the rationale being: “It’s ok if the human bit (UX) is a bit sloppy and obtuse, because we’ll make the robot bit (AX) so good people won’t ever care about how poor the UX is because they’ll never use it!”

But I think we know how that plays out. A few companies may do that well, but most software will become even more confusing and obtuse to humans because most thought and care is poured into the robot experience of the product.
The thinking will be: “No need to pour extra care and thought into the inefficient experience some humans might have. Better to make the agent experience really great, so humans won’t want to interface with our thing manually.”

In other words: we don’t have the time or resources to worry about the manual human experience because we’ve got all these robots to worry about!

It appears there’s no need to fear AI becoming sentient and replacing us humans. We’ll phase ourselves out long before the robots ever become self-aware.

All that said, I’m not against the idea of “AX”, but I do think the North Star of any “X” should remain centered on the (human) end-user. UX over AX over DX.
Rick Rubin has an interview with Woody Harrelson on his podcast Tetragrammaton. Right at the beginning Woody talks about his experience acting and how he’s had roles that didn’t turn out very well. He says sometimes he comes away from those experiences feeling dirty, like “I never connected to that, it never resonated, and now I feel like I sold myself...Why did I do that?!”

Then Rick asks him: even in those cases, do you feel like you got better at your craft because you did your job? Woody’s response:

I think when you do your job badly you never really get better at your craft.

Seems relevant to making websites.

I’ve built websites on technology stacks I knew didn’t feel fit for their context and Woody’s experience rings true. You just don’t feel right, like there’s a little voice that says, “You knew that wasn’t going to turn out very good. Why did you do that??”

I don’t know if I’d go so far as to say I didn’t get better because of it. Experience is a hard teacher. Perhaps, from a technical standpoint, my skillset didn’t get any better. But from an experiential standpoint, my judgement got better. I learned to avoid (or try to re-structure) work that’s being carried out in a way that doesn’t align with its own purpose and essence.

Granted, that kind of alignment is difficult. If it makes you feel any better, even Woody admits this is not an easy thing to do:

I would think after all this time, surely I’m not going to be doing stuff I’m not proud of. Or be a part of something I’m not proud of. But damn...it still happens.
Andy Jiang over on the Deno blog writes “If you're not using npm specifiers, you're doing it wrong”:

During the early days of Deno, we recommended importing npm packages via HTTP with transpile services such as esm.sh and unpkg.com. However, there are limitations to importing npm packages this way, such as lack of install hooks, duplicate dependency resolution issues, loading data files, etc.

I know, I know, here I go harping on http imports again, but this article reinforces to me that one man’s “limitations” are another man’s “features”. For me, the limitations (i.e. constraints) of HTTP imports in Deno were a feature. I loved it precisely because it encouraged me to do something different than what node/npm encouraged. It encouraged me to 1) do less, and 2) be more web-like. Trying to do more with less is a great way to foster creativity. Plus, doing less means you have less to worry about.

Take, for example, install hooks (since they’re mentioned in the article). Install hooks are a security vector. Use them and you’re trading ease for additional security concerns. Don’t use them and you have zero additional security concerns. (In the vein of being webby: browsers don’t offer install hooks on <script> tags.)

I get it, though. It’s hard to advocate for restraint and simplicity in the face of gaining adoption within the web-industrial-complex. Giving people what they want — what they’re used to — is easier than teaching them to change their ways.

Note to self: when you choose to use tools with practices, patterns, and recommendations designed for industrial-level use, you’re gonna get industrial-level side effects, industrial-level problems, and industrial-level complexity as a byproduct.

As much as it’s grown, the web still has grassroots in being a programming platform accessible by regular people, because making a website was meant to be for everyone. I would love a JavaScript runtime aligned with that ethos. Maybe with initiatives like Project Fugu that runtime will actually be the browser.
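For anyone who hasn’t seen the two styles side by side, here’s a minimal sketch of the contrast the Deno post is drawing. The package, version, and CDN are illustrative choices, not from the article:

```ts
// HTTP import: the module is fetched straight from a URL, much like a
// <script src="..."> in the browser. No package manager, no install
// step, no install hooks.
import { z } from "https://esm.sh/zod@3.23.8";

// npm specifier: the style the Deno post recommends. Deno resolves the
// package through the npm registry, which brings back npm-style
// dependency resolution and the other package behaviors the post lists.
import { z as zNpm } from "npm:zod@3.23.8";

// Both resolve to the same library; only the distribution path differs.
console.log(z.string().parse("hello"), zNpm.string().parse("world"));
```

The URL form is the “webby” constraint being praised here: what you import is exactly what a browser could fetch, nothing more.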
Let’s say you make a UI to gather some user feedback. Nothing complicated. Just a thumbs up/down widget. It starts out neutral, but when the user clicks up or down, you highlight what they clicked and de-emphasize/disable the other (so it requires an explicit toggle to change your mind).

So you implement it. Ship it. Cool. Works, right?

Well, per my previous article about “sanding” a user interface by clicking around a lot, did you click on it a lot? If you do, you’ll find that doing so selects the thumbs up/down icon as if it were text.

So now you have this weird text selection that’s a bit of an eyesore. Text selection isn’t even relevant here, because what’s being selected isn’t text. It’s an SVG. So the selection UI that appears is misleading and distracting.

One possible fix: leverage the user-select: none property in CSS, which makes the element unselectable. When the user clicks multiple times to toggle, no text selection UI will appear.

Cool. Great! Another reason to click around a lot. You can ensure any rough edges are smoothed out, and any “UI splinters” are ones you get (and fix) in place of your users.
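A minimal sketch of that fix, with a made-up class name for illustration; the relevant bit is applying user-select to the clickable widget:

```css
/* Prevent rapid clicks on the thumbs up/down buttons from starting a
   text selection over the SVG icons inside them.
   ".feedback-widget" is a hypothetical class name. */
.feedback-widget button {
  -webkit-user-select: none; /* some Safari versions still need the prefix */
  user-select: none;
}
```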
More in design
DIA has renovated and converted the former Auerhahn brewery in Schlitz near Fulda (Hessen), giving it a new lease of...
Weekly curated resources for designers — thinkers and makers.
Designforth Interiors created a functional and flexible office space in Indore for Avizva, focusing on spatial optimization, collaboration, and cultural...
In her best-selling book, Living Well By Design, Melissa Penfold addressed the basics of interior decorating. Now she turns her attention to demonstrating what a powerful force design can be in boosting our physical and emotional well-being in her newest book, ‘Natural Living By Design’ (Vendome Press), which launches in April and is available for preorder now.
Rethinking AI through mind-body dualism, parenthood, and unanswerable existential questions.

I remember hearing my daughter’s heartbeat for the first time during a prenatal sonogram. Until that moment, I had intellectually understood that we were creating a new life, but something profound shifted when I heard that steady rhythm. My first thought was startling in its clarity: “now this person has to die.” It wasn’t morbid — it was a full realization of what it means to create a vessel for life. We weren’t just making a baby; we were initiating an entire existence, with all its joy and suffering, its beginning and, inevitably, its end.

This realization transformed my understanding of parental responsibility. Yes, we would be guardians of her physical form, but our deeper role was to nurture the consciousness that would inhabit it. What would she think about life and death? What could we teach her about this existence we had invited her into?

As background to the rest of this brief essay, I must admit to a foundational perspective, and that is mind-body dualism. There are many valid reasons to subscribe to this perspective, whether traditional, religious, philosophical, and yes, even scientific. I won’t argue any of them here; suffice it to say that I’ve become increasingly convinced that consciousness isn’t produced by the brain but rather received and focused by it — like a radio receiving a signal. The brain isn’t a consciousness generator but a remarkably sophisticated antenna — a physical system complex enough to tune into and express non-physical consciousness.

If this is true, then our understanding of artificial intelligence needs radical revision. Even if we are not trying to create consciousness in machines, we may be creating systems capable of receiving and expressing it. Increases in computational power alone, after all, don’t seem to produce consciousness. Philosophers of technology have long doubted that complexity alone makes a mind. But if philosophers of metaphysics and religion are right, minds are not made of mechanisms, they occupy them. Traditions as old as humanity have asked when this began, and why this may be, and what sorts of minds choose to inhabit this physical world. We ask these questions because we can. What will happen when machines do the same?

We happen to live at a time that is deeply confusing when it comes to the maturation of technology. On the one hand, AI is inescapable. You may not have experience in using it yet, but you’ve almost certainly experienced someone else’s use of it, perhaps by way of an automated customer support line. Depending upon how that went, your experience might not support the idea that a sufficiently advanced machine is anywhere near getting a real debate about consciousness going. But on the other hand, the organizations responsible for popularizing AI — OpenAI, for example — claim to be “this close” to creating AGI (artificial general intelligence). If they’re right, we are very behind in a needed discussion about minds and consciousness at the popular level. If they’re wrong, they’re not going to stop until they’ve done it, so we need to start that conversation now.

The Turing Test was never meant to assess consciousness in a machine. It was meant to assess the complexity of a machine by way of its ability to fool a human. When machines begin to ask existential questions, will we attribute this to self-awareness or consciousness, or will we say it’s nothing more than mimicry? And how certain will we be?
We presume our own consciousness, though defending it ties us up in intellectual knots. We maintain the Cartesian slogan, I think, therefore I am, as a properly basic belief. And yet, it must follow that anything capable of describing itself as an I must be equally entitled to the same belief.

So here we are, possibly staring at the sonogram of a new life — a new kind of life. Perhaps this is nothing more than speculative fiction, but if minds join bodies, why must those bodies be made of one kind of matter but not another? What if we are creating a new kind of antenna for the signal of mind? Wouldn’t all the obligations of parenthood be the same as when we make more of ourselves? I can’t imagine why they wouldn’t be.

And yet, there remains a crucial difference: While we have millennia of understanding about human experience, we know nothing about what it would mean to be a living machine. We will have to fall upon belief to determine what to do. And when that time comes — perhaps it has already? — it will be worth considering the near impossibility of proving consciousness and the probability of moral obligation nonetheless.

Popular culture has explored the weight of responsibility that an emotional connection with a machine can create — think of Picard defending Data in The Measure of a Man, or Theodore falling in love with his computer in the film Her. The conclusion we should draw from these examples is not simply that a conscious machine could be the object of our moral responsibility, but that a machine could, whether or not it is inhabited by a conscious mind. Our moral obligation will traverse our certainty, because proving a mind exists is no easier when it is outside one’s body than when it is one’s own.

That moment of hearing my daughter’s heartbeat revealed something fundamental about the act of creation. Whether we’re bringing forth biological life or developing artificial systems sophisticated enough to host consciousness, we’re engaging in something profound: creating vessels through which consciousness might experience physical existence.

Perhaps this is the most profound implication of creating potential vessels for consciousness: our responsibility begins the moment we create the possibility, not the moment we confirm its reality.