I was reading Baldur’s article (which I took notes on) and he suggests an interesting overlap between AI enthusiasts and “idea people”:

That algogen fans are predominantly idea people—the lot who think that 99% of the value delivered by any given form of media comes from the idea—isn’t a new observation, but it’s apt. If you don’t think the form or structure of the medium delivers any value, then it has to be a uniform commodity that can, and should, be generated algorithmically to save people from the tedious work of pointless creation.

If all that matters is the idea, then AI is everything and execution doesn’t really matter. From this perspective, when it comes to generating text it’s the destination that matters, not the journey. Or so some might believe.

There’s an interesting parallel here, I think, to claims about how fast you can scaffold a website. X framework or Y host allows you to go from zero to a beautiful, functional (probably cloned from a template) website in “three...

More from Jim Nielsen’s Blog

Building WebSites With LLMS

And by LLMS I mean: (L)ots of (L)ittle ht(M)l page(S).

I recently shipped some updates to my blog. Through the design/development process, I had some insights which made me question my knee-jerk reaction to building pieces of a page as JS-powered interactions on top of the existing document.

With cross-document view transitions getting broader and broader support, I’m realizing that building in-page, progressively-enhanced interactions is more work than simply building two HTML pages and linking them. I’m calling this approach “lots of little HTML pages” in my head.

As I find myself trying to build progressively-enhanced features with JavaScript — like a fly-out navigation menu, or an on-page search, or filtering content — I stop and ask myself: “Can I build this as a separate HTML page triggered by a link, rather than JavaScript-injected content built from a button?”

I kinda love the results. I build separate, small HTML pages for each “interaction” I want, then I let CSS transitions take over and I get something that feels better than its JS counterpart for way less work. Allow me two quick examples.

Example 1: Filtering

Working on my homepage, I found myself wanting a list of posts filtered by some kind of criteria, like:

- The most recent posts
- The ones being trafficked the most
- The ones that’ve had lots of Hacker News traffic in the past

My first impulse was to have a list of posts you can filter with JavaScript. But the more I built it, the more complicated it got. Each “list” of posts needed a slightly different set of data, and each one had a different sort order. What I thought was going to be “stick a bunch of <li>s in the DOM, and show/hide some based on the current filter” turned into lots of data-x attributes, per-list sorting logic, etc. I realized quickly this wasn’t a trivial, progressively-enhanced feature. I didn’t want to write a bunch of client-side JavaScript for what would take me seconds to write on “the server” (my static site generator).

Then I thought: why don’t I just do this with my static site generator? Each filter can be its own, separate HTML page, and with CSS view transitions I’ll get a nice transition effect for free! Minutes later I had it all working — mostly; I had to learn a few small things about aspect ratio in transitions — plus I had fancy transitions between “tabs” for free!

This really feels like a game-changer for simple sites. If you can keep your site simple, it’s easier to build traditional, JavaScript-powered on-page interactions as small, linked HTML pages.

Example 2: Navigation

This got me thinking: maybe I should do the same thing for my navigation? Usually I think, “OK, so I’ll have a hamburger icon with a bunch of navigational elements in it, and when it’s clicked you gotta reveal it, etc.” But instead I thought, “What if it’s just a new HTML page?”[1]

Because I’m using a static site generator, it’s really easy to create a new HTML page. A few minutes later and I had it. No client-side JS required. You navigate to the “Menu” and you get a page of options, with an “x” to simulate closing the menu and going back to where you were.

I liked it so much for my navigation, I did the same thing with search. Clicking the icon doesn’t use JavaScript to inject new markup and animate things on screen. Nope. It’s just a link to a new page with CSS supporting a cross-document view transition.

Granted, there are some trade-offs to this approach. But on the whole, I really like it. It was so easy to build and I know it’s going to be incredibly easy to maintain!
I think this is a good example of leveraging the grain of the web. It’s really easy to build a simple website when you can shift your perspective to viewing on-page interactivity as simple HTML page navigations powered by cross-document CSS transitions (rather than doing all of that as client-side JS).

Jason Bradberry has a neat article that’s tangential to this idea over at Piccalilli. It’s more from the design standpoint, but functionally it could work pretty much the same as this: your “menu” or “navigation” is its own page.
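For reference, the CSS opt-in for cross-document view transitions is tiny. A minimal sketch (the selector and transition name below are placeholders, not taken from this site):

```css
/* Both the old and the new page opt in to cross-document
   view transitions with this at-rule: */
@view-transition {
  navigation: auto;
}

/* Optionally, give an element that appears on both pages a shared
   name so it morphs across the navigation instead of cross-fading: */
.post-list {
  view-transition-name: post-list;
}
```

With that in place, plain link navigations between the little HTML pages get the animated transition, no client-side JS required.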

AX, DX, UX

Matt Biilmann, CEO of Netlify, published an interesting piece called “Introducing AX: Why Agent Experience Matters” where he argues for the coming importance of a new “X” (experience) in software: the agent experience, meaning the experience your users’ AI agents will have as automated users of products/platforms.

Too many companies are focusing on adding shallow AI features all over their products or building yet another AI agent. The real breakthrough will be thinking about how your customers’ favorite agents can help them derive more value from your product. This requires thinking deeply about agents as a persona your team is building and developing for.

In this future, software that can’t be used by an automated agent will feel less powerful and more burdensome to deal with, whereas software that AI agents can use on your behalf will become incredibly capable and efficient. So you have to start thinking about these new “users” of your product:

- Is it simple for an agent to get access to operating a platform on behalf of a user?
- Are there clean, well-described APIs that agents can operate?
- Is there machine-ready documentation and context for LLMs and agents to properly use the available platform and SDKs?

Addressing the distinct needs of agents through better AX will improve their usefulness for the benefit of the human user.

In summary: we need to start focusing on AX or “agent experience” — the holistic experience AI agents will have as the user of a product or platform.

The idea is: teams focus more time and attention on “AX” (agent experience) so that human end-users can bring their favorite agents to our platforms/products and increase productivity. But I’m afraid the reality will be that the limited time and resources teams spend today building stuff for humans will instead get spent building stuff for robots, and as a byproduct everything human-centric about software will become increasingly subpar as we rationalize to ourselves, “Software doesn’t need to be good for humans because humans don’t use software anymore. Their robots do!” In that world, anybody complaining about bad UX will be told to shift to using the AX because “that’s where we spent all our time and effort to make your experience great”.

Prior Art: DX

DX in theory: make the DX for people who are building UX really great and they’ll be able to deliver more value faster.

DX in practice: DX requires trade-offs, and a spotlight on DX concerns means UX concerns take a back seat. Some DX concerns end up trumping UX concerns because “we’ll ship more value faster”, but the result is an overall degradation of UX because DX was prioritized first. Ultimately, time and resources are constraining factors and trade-offs have to be made somewhere, so they’re made for and on behalf of the people who make the software, because they’re the ones who feel the pain directly. User pain is only indirect.

Future Art: AX

AX in theory: build great stuff for agents (AX) so people can use stuff more efficiently by bringing their own tools.

AX in practice: time and resources being finite, AX trumps UX, with the rationale being: “It’s OK if the human bit (UX) is a bit sloppy and obtuse, because we’ll make the robot bit (AX) so good people won’t ever care about how poor the UX is — they’ll never use it!”

But I think we know how that plays out. A few companies may do that well, but most software will become even more confusing and obtuse to humans because most thought and care is poured into the robot experience of the product.
The thinking will be: “No need to pour extra care and thought into the inefficient experience some humans might have. Better to make the agent experience really great, so humans won’t want to interface with our thing manually.” In other words: we don’t have the time or resources to worry about the manual human experience because we’ve got all these robots to worry about!

It appears there’s no need to fear AI becoming sentient and replacing us humans. We’ll phase ourselves out long before the robots ever become self-aware.

All that said, I’m not against the idea of “AX”, but I do think the North Star of any “X” should remain centered on the (human) end-user. UX over AX over DX.

Can You Get Better Doing a Bad Job?

Rick Rubin has an interview with Woody Harrelson on his podcast Tetragrammaton. Right at the beginning, Woody talks about his experience acting and how he’s had roles that didn’t turn out very well. He says sometimes he comes away from those experiences feeling dirty, like “I never connected to that, it never resonated, and now I feel like I sold myself... Why did I do that?!”

Then Rick asks him: even in those cases, do you feel like you got better at your craft because you did your job? Woody’s response:

I think when you do your job badly you never really get better at your craft.

Seems relevant to making websites. I’ve built websites on technology stacks I knew didn’t feel fit for their context, and Woody’s experience rings true. You just don’t feel right, like a little voice that says, “You knew that wasn’t going to turn out very good. Why did you do that??”

I don’t know if I’d go so far as to say I didn’t get better because of it. Experience is a hard teacher. Perhaps, from a technical standpoint, my skillset didn’t get any better. But from an experiential standpoint, my judgement got better. I learned to avoid (or try to re-structure) work that’s being carried out in a way that doesn’t align with its own purpose and essence.

Granted, that kind of alignment is difficult. If it makes you feel any better, even Woody admits this is not an easy thing to do:

I would think after all this time, surely I’m not going to be doing stuff I’m not proud of. Or be a part of something I’m not proud of. But damn... it still happens.

Limitations vs. Capabilities

Andy Jiang over on the Deno blog writes “If you're not using npm specifiers, you're doing it wrong”:

During the early days of Deno, we recommended importing npm packages via HTTP with transpile services such as esm.sh and unpkg.com. However, there are limitations to importing npm packages this way, such as lack of install hooks, duplicate dependency resolution issues, loading data files, etc.

I know, I know, here I go harping on HTTP imports again, but this article reinforces to me that one man’s “limitations” are another man’s “features”. For me, the limitations (i.e. constraints) of HTTP imports in Deno were a feature. I loved them precisely because they encouraged me to do something different than what node/npm encouraged. They encouraged me to 1) do less, and 2) be more web-like.

Trying to do more with less is a great way to foster creativity. Plus, doing less means you have less to worry about. Take, for example, install hooks (since they’re mentioned in the article). Install hooks are a security vector. Use them and you’re trading ease for additional security concerns. Don’t use them and you have zero additional security concerns. (In the vein of being webby: browsers don’t offer install hooks on <script> tags.)

I get it, though. It’s hard to advocate for restraint and simplicity in the face of gaining adoption within the web-industrial-complex. Giving people what they want — what they’re used to — is easier than teaching them to change their ways.

Note to self: when you choose to use tools with practices, patterns, and recommendations designed for industrial-level use, you’re gonna get industrial-level side effects, industrial-level problems, and industrial-level complexity as a byproduct.

As much as it’s grown, the web still has grassroots in being a programming platform accessible by regular people, because making a website was meant to be for everyone. I would love a JavaScript runtime aligned with that ethos. Maybe with initiatives like Project Fugu that runtime will actually be the browser.
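For readers who haven’t seen the two styles side by side, here is a minimal sketch of both in Deno; the particular packages are arbitrary placeholders, not ones named in the article:

```js
// HTTP import: the module is fetched straight off the web;
// no package manager, no node_modules, no install hooks.
import { join } from "https://deno.land/std@0.224.0/path/mod.ts";

// npm specifier: resolved through the npm registry, with npm's
// dependency-resolution semantics (and npm-style features).
import chalk from "npm:chalk@5";

console.log(chalk.green(join("blog", "index.html")));
```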

Sanding UI, pt. II

Let’s say you make a UI to gather some user feedback. Nothing complicated. Just a thumbs up/down widget. It starts out neutral, but when the user clicks up or down, you highlight what they clicked and de-emphasize/disable the other (so it requires an explicit toggle to change your mind).

So you implement it. Ship it. Cool. Works, right?

Well, per my previous article about “sanding” a user interface by clicking around a lot: did you click on it a lot? If you do, you’ll find that doing so selects the thumbs up/down icon as if it were text. So now you have this weird text selection that’s a bit of an eyesore. Text selection isn’t even relevant here, because it’s not text — it’s an SVG. So the selection UI that appears is misleading and distracting.

One possible fix: leverage the user-select: none property in CSS, which makes the element unselectable. When the user clicks multiple times to toggle, no text selection UI will appear.

Cool. Great! Another reason to click around a lot. You can ensure any rough edges are smoothed out, and any “UI splinters” are ones you get (and fix) in place of your users.
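A minimal sketch of that fix; the selector is hypothetical, not from the post’s actual markup:

```css
/* Repeated clicks on the thumbs up/down widget shouldn't start a
   text selection on the SVG icons. */
.feedback-widget button {
  user-select: none;
  -webkit-user-select: none; /* Safari still wants the prefix */
}
```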


More in programming

How to write exceptional documentation

Writing high-quality developer documentation is a challenging task. This is my personal approach to crafting holistic, comprehensive documentation.

Expanding Access: The History of Ecommerce Part 1

The earliest work with selling things online was all about reaching a shopping public ready to log on and start. But along the way, they found a whole new audience for shopping, which changed the way we think about commerce on the web.

whippet lab notebook: on untagged mallocs

Salutations, populations. Today’s note is more of a work-in-progress than usual; I have been finally starting to look at getting Whippet into Guile, and there are some open questions.

I started by taking a look at how Guile uses the Boehm-Demers-Weiser collector’s API, to make sure I had all my bases covered for an eventual switch to something that was not BDW. I think I have a good overview now, and have divided the parts of BDW-GC used by Guile into seven categories.

Firstly there are the ways in which Guile’s run-time and compiler depend on BDW-GC’s behavior, without actually using BDW-GC’s API. By this I mean principally that we assume that any reference to a GC-managed object from any thread’s stack will keep that object alive. The same goes for references originating in global variables, or static data segments more generally. Additionally, we rely on GC objects not to move: references to GC-managed objects in registers or stacks are valid across a GC boundary, even if those references are outside the GC-traced graph: all objects are pinned.

Some of these “uses” are internal to Guile’s implementation itself, and thus amenable to being changed, albeit with some effort. However some escape into the wild via Guile’s API, or, as in this case, as implicit behaviors; these are hard to change or evolve, which is why I am putting my hopes on Whippet’s mostly-marking collector, which allows for conservative roots.

Then there are the uses of BDW-GC’s API, not to accomplish a task, but to protect the mutator from the collector: GC_call_with_alloc_lock, explicitly enabling or disabling GC, calls to sigmask that take BDW-GC’s use of POSIX signals into account, and so on. BDW-GC can stop any thread at any time, between any two instructions; for most users this is anodyne, but if ever you use weak references, things start to get really gnarly.

Of course a new collector would have its own constraints, but switching to cooperative instead of pre-emptive safepoints would be a welcome relief from this mess. On the other hand, we will require client code to explicitly mark their threads as inactive during calls in more cases, to ensure that all threads can promptly reach safepoints at all times. Swings and roundabouts?

Did you know that the Boehm collector allows for precise tracing? It does! It’s slow and truly gnarly, but when you need precision, precise tracing is nice to have. (This is the GC_new_kind interface.) Guile uses it to mark Scheme stacks, allowing it to avoid treating unboxed locals as roots. When it loads compiled files, Guile also adds some slices of the mapped files to the root set. These interfaces will need to change a bit in a switch to Whippet but are ultimately internal, so that’s fine.

What is not fine is that Guile allows C users to hook into precise tracing, notably via scm_smob_set_mark. This is not only the wrong interface, not allowing for copying collection, but these functions are just truly gnarly. I don’t know what to do with them yet; are our external users ready to forgo this interface entirely? We have been working on them over time, but I am not sure.

Weak references, weak maps of various kinds: the implementation of these in terms of BDW’s API is incredibly gnarly and ultimately unsatisfying. We will be able to replace all of these with ephemerons and tables of ephemerons, which are natively supported by Whippet. The same goes with finalizers.
The same goes for constructs built on top of finalizers, such as guardians; we’ll get to reimplement these on top of nice Whippet-supplied primitives. Whippet allows for resuscitation of finalized objects, so all is good here.

There is a long list of miscellanea: the interfaces to explicitly trigger GC, to get statistics, to control the number of marker threads, to initialize the GC; these will change, but all uses are internal, making it not a terribly big deal.

I should mention one API concern, which is that BDW’s state is all implicit. For example, when you go to allocate, you don’t pass the API a handle which you have obtained for your thread, and which might hold some thread-local freelists; BDW will instead load thread-local variables in its API. That’s not as efficient as it could be, and Whippet goes the explicit route, so there is some additional plumbing to do.

Finally I should mention the true miscellaneous BDW-GC function: GC_free. Guile exposes it via an API, scm_gc_free. It was already vestigial and we should just remove it, as it has no sensible semantics or implementation.

That brings me to what I wanted to write about today, but am going to have to finish tomorrow: the actual allocation routines. BDW-GC provides two, essentially: GC_malloc and GC_malloc_atomic. The difference is that “atomic” allocations don’t refer to other GC-managed objects, and as such are well-suited to raw data. Otherwise you can think of atomic allocations as a pure optimization, given that BDW-GC mostly traces conservatively anyway.

From the perspective of a user of BDW-GC looking to switch away, there are two broad categories of allocations, tagged and untagged. Tagged objects have attached metadata bits allowing their type to be inspected by the user later on. This is the happy path! We’ll be able to write a gc_trace_object function that takes any object, does a switch on, say, some bits in the first word, dispatching to type-specific tracing code. As long as the object is sufficiently initialized by the time the next safepoint comes around, we’re good, and given cooperative safepoints, the compiler should be able to ensure this invariant.

Then there are untagged allocations. Generally speaking, these are of two kinds: temporary and auxiliary. An example of a temporary allocation would be growable storage used by a C run-time routine, perhaps as an unbounded-sized alternative to alloca. Guile uses these a fair amount, as they compose well with non-local control flow as occurring for example in exception handling.

An auxiliary allocation on the other hand might be a data structure only referred to by the internals of a tagged object, but which itself never escapes to Scheme, so you never need to inquire about its type; it’s convenient to have the lifetimes of these values managed by the GC, and when desired to have the GC automatically trace their contents. Some of these should just be folded into the allocations of the tagged objects themselves, to avoid pointer-chasing. Others are harder to change, notably for mutable objects. And the trouble is that for external users of scm_gc_malloc, I fear that we won’t be able to migrate them over, as we don’t know whether they are making tagged mallocs or not.

One conventional way to handle untagged allocations is to manage to fit your data into other tagged data structures; V8 does this in many places with instances of FixedArray, for example, and Guile should do more of this. Otherwise, you make new tagged data types. In either case, all auxiliary data should be tagged.
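To make the happy path concrete, here is a hedged C sketch of that kind of tag-dispatching trace function. The 3-bit tag layout, type names, and visitor signature are all invented for illustration; they are not Guile’s or Whippet’s actual definitions:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical object layouts: the type tag lives in the low bits
   of the first word. */
enum tag { TAG_PAIR = 1, TAG_VECTOR = 2, TAG_STRING = 3 };

struct header { uintptr_t word; };
struct pair   { struct header hdr; void *car, *cdr; };
struct vector { struct header hdr; size_t len; void *vals[]; };

typedef void (*visit_edge)(void **edge, void *trace_data);

/* Switch on the tag bits in the first word, then visit each field
   that can hold a reference to another GC-managed object. */
static void gc_trace_object(void *obj, visit_edge visit, void *data) {
  struct header *hdr = obj;
  switch ((enum tag)(hdr->word & 0x7)) {
  case TAG_PAIR: {
    struct pair *p = obj;
    visit(&p->car, data);
    visit(&p->cdr, data);
    break;
  }
  case TAG_VECTOR: {
    struct vector *v = obj;
    for (size_t i = 0; i < v->len; i++)
      visit(&v->vals[i], data);
    break;
  }
  case TAG_STRING:
    /* Raw bytes: no outgoing edges, like a BDW "atomic" allocation. */
    break;
  }
}
```

An untagged allocation is precisely one whose first word gives a tracer like this nothing to switch on, which is what makes the migration question tricky.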
I think there may be an alternative, which would be just to support the equivalent of untagged GC_malloc and GC_malloc_atomic; but for that, I am out of time today, so type at y’all tomorrow. Happy hacking!

IndexedDB is Weird

Why? Well: the IndexedDB API is callback-based. With JavaScript being single-threaded, a blocking API would mean fully blocking the page, render and basic user interaction included, while the request is being processed. Although this is apparently good enough for JSON.parse(), the W3C decided to make the IndexedDB API non-blocking. The first drafts for IndexedDB are from …
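The usual coping pattern for that callback style is to wrap each IDBRequest in a Promise. A minimal sketch, with placeholder database and store names and the onupgradeneeded path omitted:

```js
// Adapt a callback-based IDBRequest to a Promise.
function promisify(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Read one value; assumes the "store" object store already exists.
async function readValue(key) {
  const db = await promisify(indexedDB.open("example-db", 1));
  try {
    const store = db.transaction("store", "readonly").objectStore("store");
    return await promisify(store.get(key));
  } finally {
    db.close();
  }
}
```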

Human service is luxury

Maybe one day AI will answer every customer question flawlessly, but we're nowhere near that reality right now. I can't tell you how often I've been stuck in some god-forsaken AI loop or phone tree WHEN ALL I WANT IS A HUMAN. So I end up either just yelling "operator", "operator", "operator" (the modern-day mayday!) or smashing zero over and over. It's an unworthy interaction for any premium service.

Don't get me wrong. I'm pretty excited about AI. I've seen it do some incredible things. And of course it's just going to keep getting better. But in our excitement about the technical promise, I think we're forgetting that humans need more than correct answers. Customer service at its best also offers understanding and reassurance. It offers a human connection.

Especially as AI eats the low-end, commodity-style customer support. The sort that was always done poorly, by disinterested people, rapidly churning through a perceived dead-end job, inside companies that only ever saw support as a cost center. Yeah, nobody is going to cry a tear for losing that.

But you know that isn't all there is to customer service. Hopefully you've had a chance to experience what it feels like when a cheerful, engaged human is interested in helping you figure out what's wrong or how to do something right. Because they know exactly what they're talking about. Because they've helped thousands of others through exactly the same situation. That stuff is gold.

Partly because it feels bespoke. A customer service agent who's good at their job knows how to tailor the interaction not just to your problem, but to your temperament. Because they've seen all the shapes. They can spot an angry-but-actually-just-frustrated fit a thousand miles away. They can tell a timid-but-curious type too. And then deliver exactly what either needs in that moment.

That's luxury. That's our thesis for Basecamp, anyway. That by treating customer service as a career, we'll end up with the kind of agents that embody this luxury, and our customers will feel the difference.
