I’m a fan of what Ink & Switch is doing with local-first web development. I’ve got a few harebrained ideas myself I want to build. And I’ve written notes from a talk by Peter before, which is all a preface for this set of notes from another talk by Peter. Here’s the talk on Vimeo and here are my notes (and opinions) from the talk.

Be Scale Appropriate

Increasing scale changes everything about a system. Tools that work for one magnitude of scale break down at other magnitudes of scale — in both directions! Peter’s advice? Plan to rebuild systems, with new tools, as you hit new orders of magnitude. There is no way around this.

Complexity

Independent dimensions multiply problems. For example, building a web application entails accounting for (at minimum):

Browsers: Chrome, Safari, Firefox
Platforms: macOS, Windows, Android, iOS
Screen sizes: desktop, tablet, mobile
Network speeds: Wi-Fi, 5G, 4G, 3G

The variability in each of these factors multiplies against the others: 3...
a year ago


More from Jim Nielsen’s Blog

Building WebSites With LLMS

And by LLMS I mean: (L)ots of (L)ittle ht(M)l page(S).

I recently shipped some updates to my blog. Through the design/development process, I had some insights which made me question my knee-jerk reaction to building pieces of a page as JS-powered interactions on top of the existing document. With cross-document view transitions getting broader and broader support, I’m realizing that building in-page, progressively-enhanced interactions is more work than simply building two HTML pages and linking them. I’m calling this approach “lots of little HTML pages” in my head.

As I find myself trying to build progressively-enhanced features with JavaScript — like a fly-out navigation menu, or an on-page search, or filtering content — I stop and ask myself: “Can I build this as a separate HTML page triggered by a link, rather than JavaScript-injected content built from a button?”

I kinda love the results. I build separate, small HTML pages for each “interaction” I want, then I let CSS transitions take over and I get something that feels better than its JS counterpart for way less work. Allow me two quick examples.

Example 1: Filtering

Working on my homepage, I found myself wanting a list of posts filtered by some kind of criteria, like:

The most recent posts
The ones being trafficked the most
The ones that’ve had lots of Hacker News traffic in the past

My first impulse was to have a list of posts you can filter with JavaScript. But the more I built it, the more complicated it got. Each “list” of posts needed a slightly different set of data. And each one had a different sort order. What I thought was going to be “stick a bunch of <li>s in the DOM, and show/hide some based on the current filter” turned into lots of data-x attributes, per-list sorting logic, etc. I realized quickly this wasn’t a trivial, progressively-enhanced feature. I didn’t want to write a bunch of client-side JavaScript for what would take me seconds to write on “the server” (my static site generator).

Then I thought: why don’t I just do this with my static site generator? Each filter can be its own, separate HTML page, and with CSS view transitions I’ll get a nice transition effect for free! Minutes later I had it all working — mostly, I had to learn a few small things about aspect ratio in transitions — plus I had fancy transitions between “tabs” for free!

This really feels like a game-changer for simple sites. If you can keep your site simple, it’s easier to build what would traditionally be JavaScript-powered on-page interactions as small, linked HTML pages.

Example 2: Navigation

This got me thinking: maybe I should do the same thing for my navigation? Usually I think, “Ok, so I’ll have a hamburger icon with a bunch of navigational elements in it, and when it’s clicked you gotta reveal it, etc.” And I thought, “What if it’s just a new HTML page?”[1]

Because I’m using a static site generator, it’s really easy to create a new HTML page. A few minutes later and I had it. No client-side JS required. You navigate to the “Menu” and you get a page of options, with an “x” to simulate closing the menu and going back to where you were.

I liked it so much for my navigation, I did the same thing with search. Clicking the icon doesn’t use JavaScript to inject new markup and animate things on screen. Nope. It’s just a link to a new page with CSS supporting a cross-document view transition.

Granted, there are some trade-offs to this approach. But on the whole, I really like it. It was so easy to build and I know it’s going to be incredibly easy to maintain!

I think this is a good example of leveraging the grain of the web. It’s really easy to build a simple website when you can shift your perspective to viewing on-page interactivity as simple HTML page navigations powered by cross-document CSS view transitions (rather than doing all of that as client-side JS). Jason Bradberry has a neat article that’s tangential to this idea over at Piccalil. It’s more from the design standpoint, but functionally it could work pretty much the same as this: your “menu” or “navigation” is its own page.

Email · Mastodon · Bluesky
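If you want to try the technique yourself, here’s a minimal sketch of the CSS that makes it work. This isn’t Jim’s actual code; the selector and transition name are made up for illustration, and the at-rule needs to be present on both of the linked pages:

```css
/* Opt both documents into cross-document view transitions.
   This is a progressive enhancement: browsers without support
   simply perform a normal page navigation. */
@view-transition {
  navigation: auto;
}

/* Optionally, name an element that should appear to carry over
   between the two pages so the browser can morph it. */
.post-list {
  view-transition-name: post-list;
}
```

Everything else is just regular links between regular HTML pages, which is the whole appeal.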

a week ago 10 votes
AX, DX, UX

Matt Biilmann, CEO of Netlify, published an interesting piece called “Introducing AX: Why Agent Experience Matters” where he argues for the coming importance of a new “X” (experience) in software: the agent experience, meaning the experience your users’ AI agents will have as automated users of products/platforms.

Too many companies are focusing on adding shallow AI features all over their products or building yet another AI agent. The real breakthrough will be thinking about how your customers’ favorite agents can help them derive more value from your product. This requires thinking deeply about agents as a persona your team is building and developing for.

In this future, software that can’t be used by an automated agent will feel less powerful and more burdensome to deal with, whereas software that AI agents can use on your behalf will become incredibly capable and efficient. So you have to start thinking about these new “users” of your product: Is it simple for an Agent to get access to operating a platform on behalf of a user? Are there clean, well described APIs that agents can operate? Are there machine-ready documentation and context for LLMs and agents to properly use the available platform and SDKs? Addressing the distinct needs of agents through better AX will improve their usefulness for the benefit of the human user.

In summary: we need to start focusing on AX or “agent experience”, the holistic experience AI agents will have as the users of a product or platform.

The idea is: teams focus more time and attention on “AX” (agent experience) so that human end-users can bring their favorite agents to our platforms/products and increase productivity. But I’m afraid the reality will be that the limited time and resources teams spend today building stuff for humans will instead get spent building stuff for robots, and as a byproduct everything human-centric about software will become increasingly subpar as we rationalize to ourselves, “Software doesn’t need to be good for humans because humans don’t use software anymore. Their robots do!” In that world, anybody complaining about bad UX will be told to shift to using the AX because “that’s where we spent all our time and effort to make your experience great”.

Prior Art: DX

DX in theory: make the DX for people who are building UX really great and they’ll be able to deliver more value faster.

DX in practice: DX requires trade-offs, and a spotlight on DX concerns means UX concerns take a back seat. Some DX concerns end up trumping UX concerns because “we’ll ship more value faster”, but the result is an overall degradation of UX because DX was prioritized first. Ultimately, time and resources are constraining factors and trade-offs have to be made somewhere, so they’re made for and on behalf of the people who make the software, because they’re the ones who feel the pain directly. User pain is only indirect.

Future Art: AX

AX in theory: build great stuff for agents (AX) so people can use stuff more efficiently by bringing their own tools.

AX in practice: time and resources being finite, AX trumps UX, with the rationale being: “It’s ok if the human bit (UX) is a bit sloppy and obtuse because we’ll make the robot bit (AX) so good people won’t ever care about how poor the UX is because they’ll never use it!”

But I think we know how that plays out. A few companies may do that well, but most software will become even more confusing and obtuse to humans because most thought and care is poured into the robot experience of the product.

The thinking will be: “No need to pour extra care and thought into the inefficient experience some humans might have. Better to make the agent experience really great, so humans won’t want to interface with our thing manually.” In other words: we don’t have the time or resources to worry about the manual human experience because we’ve got all these robots to worry about!

It appears there’s no need to fear AI becoming sentient and replacing us humans. We’ll phase ourselves out long before the robots ever become self-aware.

All that said, I’m not against the idea of “AX”, but I do think the North Star of any “X” should remain centered on the (human) end-user. UX over AX over DX.

Email · Mastodon · Bluesky

a week ago 13 votes
Can You Get Better Doing a Bad Job?

Rick Rubin has an interview with Woody Harrelson on his podcast Tetragrammaton. Right at the beginning, Woody talks about his experience acting and how he’s had roles that didn’t turn out very well. He says sometimes he comes away from those experiences feeling dirty, like “I never connected to that, it never resonated, and now I feel like I sold myself...Why did I do that?!”

Then Rick asks him: even in those cases, do you feel like you got better at your craft because you did your job? Woody’s response:

I think when you do your job badly you never really get better at your craft.

Seems relevant to making websites.

I’ve built websites on technology stacks I knew didn’t feel fit for their context, and Woody’s experience rings true. You just don’t feel right, like a little voice that says, “You knew that wasn’t going to turn out very good. Why did you do that??”

I don’t know if I’d go so far as to say I didn’t get better because of it. Experience is a hard teacher. Perhaps, from a technical standpoint, my skillset didn’t get any better. But from an experiential standpoint, my judgement got better. I learned to avoid (or try to re-structure) work that’s being carried out in a way that doesn’t align with its own purpose and essence.

Granted, that kind of alignment is difficult. If it makes you feel any better, even Woody admits this is not an easy thing to do:

I would think after all this time, surely I’m not going to be doing stuff I’m not proud of. Or be a part of something I’m not proud of. But damn...it still happens.

Email · Mastodon · Bluesky

a week ago 10 votes
Limitations vs. Capabilities

Andy Jiang over on the Deno blog writes “If you're not using npm specifiers, you're doing it wrong”:

During the early days of Deno, we recommended importing npm packages via HTTP with transpile services such as esm.sh and unpkg.com. However, there are limitations to importing npm packages this way, such as lack of install hooks, duplicate dependency resolution issues, loading data files, etc.

I know, I know, here I go harping on HTTP imports again, but this article reinforces to me that one man’s “limitations” are another man’s “features”. For me, the limitations (i.e. constraints) of HTTP imports in Deno were a feature. I loved them precisely because they encouraged me to do something different than what node/npm encouraged. They encouraged me to 1) do less, and 2) be more web-like. Trying to do more with less is a great way to foster creativity. Plus, doing less means you have less to worry about.

Take, for example, install hooks (since they’re mentioned in the article). Install hooks are a security vector. Use them and you’re trading ease for additional security concerns. Don’t use them and you have zero additional security concerns. (In the vein of being webby: browsers don’t offer install hooks on <script> tags.)

I get it, though. It’s hard to advocate for restraint and simplicity in the face of gaining adoption within the web-industrial-complex. Giving people what they want — what they’re used to — is easier than teaching them to change their ways.

Note to self: when you choose to use tools with practices, patterns, and recommendations designed for industrial-level use, you’re gonna get industrial-level side effects, industrial-level problems, and industrial-level complexity as a byproduct.

As much as it’s grown, the web still has grassroots in being a programming platform accessible to regular people, because making a website was meant to be for everyone. I would love a JavaScript runtime aligned with that ethos. Maybe with initiatives like Project Fugu that runtime will actually be the browser.

Email · Mastodon · Bluesky
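For anyone who hasn’t seen the two styles side by side, here’s a rough sketch of what the article is contrasting. This isn’t code from the post; the package and version are just illustrative:

```js
// HTTP import via a CDN/transpile service such as esm.sh: the older,
// more "webby" style this post is fond of.
import chalk from "https://esm.sh/chalk@5";

// npm specifier: the style the Deno blog now recommends, resolved through
// the npm registry (bringing npm's machinery along with it).
// import chalk from "npm:chalk@5";

console.log(chalk.green("hello from Deno"));
```

Both lines pull in the same package; the difference is which ecosystem’s rules (and side effects) you opt into.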

2 weeks ago 13 votes
Sanding UI, pt. II

Let’s say you make a UI to gather some user feedback. Nothing complicated. Just a thumbs up/down widget. It starts out neutral, but when the user clicks up or down, you highlight what they clicked and de-emphasize/disable the other (so it requires an explicit toggle to change your mind).

So you implement it. Ship it. Cool. Works, right?

Well, per my previous article about “sanding” a user interface by clicking around a lot, did you click on it a lot? If you do, you’ll find that doing so selects the thumbs up/down icon as if it were text.

So now you have this weird text selection that’s a bit of an eyesore. Text selection isn’t relevant here because it isn’t text. It’s an SVG. So the selection UI that appears is misleading and distracting.

One possible fix: leverage the user-select: none property in CSS, which makes the icon unselectable. When the user clicks multiple times to toggle, no text selection UI will appear.

Cool. Great! Another reason to click around a lot: you can ensure any rough edges are smoothed out, and any “UI splinters” are ones you get (and fix) in place of your users.

Email · Mastodon · Bluesky
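As a concrete sketch of that fix (the post doesn’t show its markup, so the selector here is made up):

```css
/* Keep repeated clicks from selecting the icon as if it were text.
   The class name is illustrative, not from the original post. */
.thumbs button svg {
  -webkit-user-select: none; /* older Safari */
  user-select: none;
}
```

Depending on the markup, putting the rule on the button (or the whole widget) works just as well.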

2 weeks ago 16 votes

More in programming

New Blog Post: "A Perplexing Javascript Parsing Puzzle"

I know I said we'd be back to normal newsletters this week and in fact had 80% of one already written. Then I unearthed something that was better left buried. Blog post here, Patreon notes here (Mostly an explanation of how I found this horror in the first place). Next week I'll send what was supposed to be this week's piece. (PS: April Cools in three weeks!)

17 hours ago 3 votes
Notes on Improving Churn

Ask any B2C SaaS founder what metric they’d like to improve and most will say reducing churn. However, proactively reducing churn is a difficult task. I’ll outline the approach we’ve taken at Jenni AI to go from ~17% to 9% churn over the past year. We are still a work in progress but hopefully you’ll […]

20 hours ago 3 votes
Catching grace

Meditation is easy when you know what to do: absolutely nothing! It's hard at first, like trying to look at the back of your own head, but there's a knack to it.

17 hours ago 3 votes
Python Performance: Why 'if not list' is 2x Faster Than Using len()

Discover why 'if not mylist' is twice as fast as 'len(mylist) == 0' by examining CPython's VM instructions and object memory access patterns.
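If you want to poke at the claim yourself, here’s a rough sketch using only the standard library (the function and variable names are just for illustration):

```python
import dis
import timeit

def check_with_not(items):
    # Truthiness check: CPython can answer this from the list's stored size.
    return not items

def check_with_len(items):
    # Explicit length check: looks up and calls the len() builtin first.
    return len(items) == 0

# Compare the bytecode each form compiles to.
dis.dis(check_with_not)
dis.dis(check_with_len)

# Time both forms against an empty list.
print(timeit.timeit("not mylist", setup="mylist = []"))
print(timeit.timeit("len(mylist) == 0", setup="mylist = []"))
```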

12 hours ago 2 votes
Our switch to Kamal is complete

In a fit of frustration, I wrote the first version of Kamal in six weeks at the start of 2023. Our plan to get out of the cloud was getting bogged down in enterprisey pricing and Kubernetes complexity. And I refused to accept that running our own hardware had to be that expensive or that convoluted. So I got busy building a cheap and simple alternative.

Now, just two years later, Kamal is deploying every single application in our entire heritage fleet, and everything in active development, finalizing a perfectly uniform mode of deployment for every web app we've built over the past two decades and still maintain.

See, we have this obsession at 37signals: that the modern build-boost-discard cycle of internet applications is a scourge. That users ought to be able to trust that when they adopt a system like Basecamp or HEY, they don't have to fear eviction from the next executive re-org. We call this obsession Until The End Of The Internet.

That obsession isn't free, but it's worth it. It means we're still operating the very first version of Basecamp for thousands of paying customers. That's the OG code base from 2003! Which hasn't seen any updates since 2010, beyond security patches, bug fixes, and performance improvements. But we're still operating it, and, along with every other app in our heritage collection, deploying it with Kamal.

That just makes me smile, knowing that we have customers who adopted Basecamp in 2004 and are still able to use the same system some twenty years later. In the meantime, we've relaunched and dramatically improved Basecamp many times since. But for customers happy with what they have, there's no forced migration to the latest version.

I very much had all of this in mind when designing Kamal. That's one of the reasons I really love Docker. It allows you to encapsulate an entire system, with all of its dependencies, and run it until the end of time. Kind of how modern gaming emulators can run the original ROM of Pac-Man or Pong to perfection and eternity. Kamal seeks to be but a simple wrapper and workflow around this wondrous simplicity.

Complexity is but a bridge — and a fragile one at that. To build something durable, you have to make it simple.

23 hours ago 2 votes