More from Jim Nielsen’s Blog
Matt Biilmann, CEO of Netlify, published an interesting piece called “Introducing AX: Why Agent Experience Matters” where he argues for the coming importance of a new “X” (experience) in software: the agent experience, meaning the experience your users’ AI agents will have as automated users of products/platforms.

Too many companies are focusing on adding shallow AI features all over their products or building yet another AI agent. The real breakthrough will be thinking about how your customers’ favorite agents can help them derive more value from your product. This requires thinking deeply about agents as a persona your team is building and developing for.

In this future, software that can’t be used by an automated agent will feel less powerful and more burdensome to deal with, whereas software that AI agents can use on your behalf will become incredibly capable and efficient. So you have to start thinking about these new “users” of your product:

Is it simple for an Agent to get access to operating a platform on behalf of a user? Are there clean, well described APIs that agents can operate? Is there machine-ready documentation and context for LLMs and agents to properly use the available platform and SDKs?

Addressing the distinct needs of agents through better AX will improve their usefulness for the benefit of the human user.

In summary:

We need to start focusing on AX or “agent experience” — the holistic experience AI agents will have as the user of a product or platform.

The idea is: teams focus more time and attention on “AX” (agent experience) so that human end-users can bring their favorite agents to our platforms/products and increase productivity.

But I’m afraid the reality will be that the limited time and resources teams spend today building stuff for humans will instead get spent building stuff for robots, and as a byproduct everything human-centric about software will become increasingly subpar as we rationalize to ourselves, “Software doesn’t need to be good for humans because humans don’t use software anymore. Their robots do!”

In that world, anybody complaining about bad UX will be told to shift to using the AX because “that’s where we spent all our time and effort to make your experience great”.

Prior Art: DX

DX in theory: make the DX for people who are building UX really great and they’ll be able to deliver more value faster.

DX in practice: DX requires trade-offs, and a spotlight on DX concerns means UX concerns take a back seat. Ultimately, some DX concerns end up trumping UX concerns because “we’ll ship more value faster”, but the result is an overall degradation of UX because DX was prioritized first.

Ultimately, time and resources are constraining factors and trade-offs have to be made somewhere, so they’re made for and on behalf of the people who make the software because they’re the ones who feel the pain directly. User pain is only indirect.

Future Art: AX

AX in theory: build great stuff for agents (AX) so people can use stuff more efficiently by bringing their own tools.

AX in practice: time and resources being finite, AX trumps UX with the rationale being: “It’s ok if the human bit (UX) is a bit sloppy and obtuse because we’ll make the robot bit (AX) so good people won’t ever care about how poor the UX is because they’ll never use it!”

But I think we know how that plays out. A few companies may do that well, but most software will become even more confusing and obtuse to humans because most thought and care is poured into the robot experience of the product.

The thinking will be: “No need to pour extra care and thought into the inefficient experience some humans might have. Better to make the agent experience really great, so humans won’t want to interface with our thing manually.” In other words: we don’t have the time or resources to worry about the manual human experience because we’ve got all these robots to worry about!

It appears there’s no need to fear AI becoming sentient and replacing us humans. We’ll phase ourselves out long before the robots ever become self-aware.

All that said, I’m not against the idea of “AX”, but I do think the North Star of any “X” should remain centered on the (human) end-user. UX over AX over DX.
Rick Rubin has an interview with Woody Harrelson on his podcast Tetragrammaton. Right at the beginning Woody talks about his experience acting and how he’s had roles that didn’t turn out very well. He says sometimes he comes away from those experiences feeling dirty, like “I never connected to that, it never resonated, and now I feel like I sold myself...Why did I do that?!”

Then Rick asks him: even in those cases, do you feel like you got better at your craft because you did your job? Woody’s response:

I think when you do your job badly you never really get better at your craft.

Seems relevant to making websites.

I’ve built websites on technology stacks I knew didn’t feel fit for their context and Woody’s experience rings true. You just don’t feel right, like a little voice that says, “You knew that wasn’t going to turn out very good. Why did you do that??”

I don’t know if I’d go so far as to say I didn’t get better because of it. Experience is a hard teacher. Perhaps, from a technical standpoint, my skillset didn’t get any better. But from an experiential standpoint, my judgement got better. I learned to avoid (or try to re-structure) work that’s being carried out in a way that doesn’t align with its own purpose and essence.

Granted, that kind of alignment is difficult. If it makes you feel any better, even Woody admits this is not an easy thing to do:

I would think after all this time, surely I’m not going to be doing stuff I’m not proud of. Or be a part of something I’m not proud of. But damn...it still happens.
Andy Jiang over on the Deno blog writes “If you're not using npm specifiers, you're doing it wrong”:

During the early days of Deno, we recommended importing npm packages via HTTP with transpile services such as esm.sh and unpkg.com. However, there are limitations to importing npm packages this way, such as lack of install hooks, duplicate dependency resolution issues, loading data files, etc.

I know, I know, here I go harping on http imports again, but this article reinforces to me that one man’s “limitations” are another man’s “features”.

For me, the limitations (i.e. constraints) of HTTP imports in Deno were a feature. I loved it precisely because it encouraged me to do something different than what node/npm encouraged. It encouraged me to 1) do less, and 2) be more web-like.

Trying to do more with less is a great way to foster creativity. Plus, doing less means you have less to worry about. Take, for example, install hooks (since they’re mentioned in the article). Install hooks are a security vector. Use them and you’re trading ease for additional security concerns. Don’t use them and you have zero additional security concerns. (In the vein of being webby: browsers don’t offer install hooks on <script> tags.)

I get it, though. It’s hard to advocate for restraint and simplicity in the face of gaining adoption within the web-industrial-complex. Giving people what they want — what they’re used to — is easier than teaching them to change their ways.

Note to self: when you choose to use tools with practices, patterns, and recommendations designed for industrial-level use, you’re gonna get industrial-level side effects, industrial-level problems, and industrial-level complexity as a byproduct.

As much as it’s grown, the web still has grassroots in being a programming platform accessible by regular people because making a website was meant to be for everyone. I would love a JavaScript runtime aligned with that ethos. Maybe with initiatives like Project Fugu that runtime will actually be the browser.
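To make the contrast concrete, here’s a rough sketch of the two import styles in Deno. The package (lodash-es) and version pin are arbitrary examples of my own, not ones from the article:

// HTTP import: the module is fetched straight from a URL, served (and
// transformed as needed) by a CDN such as esm.sh, no package manager involved
import { camelCase } from "https://esm.sh/lodash-es@4";

// npm specifier: Deno resolves the same package through the npm registry,
// which is the approach the article recommends
import { camelCase as camelCaseNpm } from "npm:lodash-es@4";

console.log(camelCase("hello npm specifiers"), camelCaseNpm("hello npm specifiers"));

Either form fetches and caches its dependencies the first time you deno run the file; the difference is where the module comes from and which ecosystem conventions come along with it.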
I’ve been working on a transition to using the light-dark() function in CSS. What this boils down to is, rather than CSS that looks like this:

:root {
  color-scheme: light;
  --text: #000;
}
@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
    --text: #fff;
  }
}

I now have this:

:root {
  color-scheme: light;
  --text: light-dark(#000, #fff);
}
@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
  }
}

That probably doesn’t look that interesting. That’s what I thought when I first learned about light-dark() — “Oh hey, that’s cool, but it’s just different syntax. Six of one, half dozen of another kind of thing.” But it does unlock some interesting ways to handle theming, which I will have to cover in another post. Suffice it to say, I think I’m starting to drink the light-dark() koolaid.

Anyhow, using the above pattern, I want to compose CSS variables to make a light/dark theme based on a configurable hue. Something like this:

:root {
  color-scheme: light;
  /* configurable via JS */
  --accent-hue: 56;
  /* which then cascades to other derivations */
  --accent: light-dark(
    hsl(var(--accent-hue) 50% 100%),
    hsl(var(--accent-hue) 50% 0%)
  );
}
@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
  }
}

The problem is that the --accent-hue value doesn’t quite look right in dark mode. It needs more contrast. I need a slightly different hue for dark mode. So my thought is: I’ll put that value in a light-dark() function.

:root {
  --accent-hue: light-dark(56, 47);
  --my-color: light-dark(
    hsl(var(--accent-hue) 50% 100%),
    hsl(var(--accent-hue) 50% 0%)
  );
}

Unfortunately, that doesn’t work. You can’t put arbitrary values in light-dark(). It only accepts color values.

I asked what you could do instead and Roma Komarov told me about CSS “space toggles”. I’d never heard about these, so I looked them up. First I found Chris Coyier’s article which made me feel good because even Chris admits he didn’t fully understand them. Then Christopher Kirk-Nielsen linked me to his article which helped me understand this idea of “space toggles” even more.

I ended up following the pattern Christopher mentions in his article and it works like a charm in my implementation! The gist of the code works like this:

When the user hasn’t specified a theme, default to “system”, which is light by default, or dark if their device reports a dark preference via prefers-color-scheme. When a user explicitly sets the color theme, set an attribute on the root element to denote that.

/* Default preferences when "unset" or "system" */
:root {
  --LIGHT: initial;
  --DARK: ;
  color-scheme: light;
}
@media (prefers-color-scheme: dark) {
  :root {
    --LIGHT: ;
    --DARK: initial;
    color-scheme: dark;
  }
}

/* Handle explicit user overrides */
:root[data-theme-appearance="light"] {
  --LIGHT: initial;
  --DARK: ;
  color-scheme: light;
}
:root[data-theme-appearance="dark"] {
  --LIGHT: ;
  --DARK: initial;
  color-scheme: dark;
}

/* Now set my variables */
:root {
  /* Set the "space toggles" */
  --accent-hue: var(--LIGHT, 56) var(--DARK, 47);
  /* Then use them */
  --my-color: light-dark(
    hsl(var(--accent-hue) 50% 90%),
    hsl(var(--accent-hue) 50% 10%)
  );
}

So what is the value of --accent-hue? That line sort of reads like this:

If --LIGHT has a value, return 56
Else if --DARK has a value, return 47

And it works like a charm! Now I can set arbitrary values for things like accent color hue, saturation, and lightness, then leverage them elsewhere. And when the color scheme or accent color changes, all these values recalculate and cascade through the entire website — cool!
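For completeness, here’s a minimal sketch of the JavaScript side of that pattern. It assumes the data-theme-appearance attribute from the CSS above; the function name and the localStorage key are purely illustrative, since the post doesn’t show this part:

// Minimal sketch (illustrative, not from the post): apply an explicit
// theme choice, or fall back to the prefers-color-scheme defaults.
function setAppearance(appearance) {
  const root = document.documentElement;
  if (appearance === "light" || appearance === "dark") {
    // Matches the :root[data-theme-appearance="..."] selectors above
    root.setAttribute("data-theme-appearance", appearance);
  } else {
    // "system"/unset: let the @media (prefers-color-scheme) block decide
    root.removeAttribute("data-theme-appearance");
  }
  // Hypothetical persistence so the choice survives a reload
  localStorage.setItem("theme-appearance", appearance);
}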
A Note on Minification

A quick tip: if you’re minifying your HTML and you’re using this space toggle trick, beware of minifying your CSS! Stuff like this:

selector {
  --ON: ;
  --OFF: initial;
}

Could get minified to:

selector{--OFF:initial}

And this “space toggles trick” won’t work at all. Trust me, I learned from experience.
More in design
Alexandra Pyzhik is an interior designer from Astana and co-founder of the design studio “A-Studio”, founded in 2009. Alexandra’s main specialization...
Weekly curated resources for designers — thinkers and makers.
Designforth Interiors created a functional and flexible office space in Indore for Avizva, focusing on spatial optimization, collaboration, and cultural...
In her best-selling book, Living Well By Design, Melissa Penfold addressed the basics of interior decorating. Now she turns her attention to demonstrating what a powerful force design can be in boosting our physical and emotional well-being in her newest book, ‘Natural Living By Design’ (Vendome Press), which launches in April and is available for preorder now.