When working with CSS transitions, you may need to detect whether the browser supports them. This is particularly useful when working with the transitionend event, which won't fire in browsers that lack support. After finding a number of questionable solutions, I came across this gist that extends jQuery's $.support nicely.

```js
$.support.transition = (function () {
  var thisBody = document.body || document.documentElement,
    thisStyle = thisBody.style,
    support =
      thisStyle.transition !== undefined ||
      thisStyle.WebkitTransition !== undefined ||
      thisStyle.MozTransition !== undefined ||
      thisStyle.MsTransition !== undefined ||
      thisStyle.OTransition !== undefined;
  return support;
})();
```

The value of $.support.transition will be true or false depending on the browser's capabilities.
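For illustration, here's a hedged usage sketch, not from the original gist: the fade-out class and the fadeOut helper are hypothetical. It uses the flag to decide whether to wait for transitionend or to finish immediately.

```js
// Hypothetical helper: fade an element out, then run a callback.
// Relies on $.support.transition from the snippet above.
function fadeOut($el, done) {
  if ($.support.transition) {
    // Transitions are supported, so transitionend will fire.
    // Prefixed event names cover older WebKit/Opera builds.
    $el.addClass('fade-out').one(
      'transitionend webkitTransitionEnd oTransitionEnd',
      done
    );
  } else {
    // No transition support: apply the end state and finish now,
    // since transitionend would never fire.
    $el.addClass('fade-out');
    done();
  }
}

// Usage: fadeOut($('#toast'), function () { $('#toast').remove(); });
```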
over a year ago

More from A Beautiful Site

Revisiting FOUCE

It's been a while since I wrote about FOUCE, and I've since come up with an improved solution that I think is worth a post. This approach is similar to hiding the page content and then fading it in, but I've noticed it's far less distracting without the fade. It also adds a two-second timeout to prevent network issues or latency from rendering an "empty" page. First, we'll add a class called reduce-fouce to the <html> element.

```html
<html class="reduce-fouce">
  ...
</html>
```

Then we'll add this rule to the CSS.

```html
<style>
  html.reduce-fouce {
    opacity: 0;
  }
</style>
```

Finally, we'll wait until all the custom elements have loaded or two seconds have elapsed, whichever comes first, and we'll remove the class, causing the content to show immediately.

```html
<script type="module">
  await Promise.race([
    // Load all custom elements
    Promise.allSettled([
      customElements.whenDefined('my-button'),
      customElements.whenDefined('my-card'),
      customElements.whenDefined('my-rating')
      // ...
    ]),

    // Resolve after two seconds
    new Promise(resolve => setTimeout(resolve, 2000))
  ]);

  // Remove the class, showing the page content
  document.documentElement.classList.remove('reduce-fouce');
</script>
```

This approach seems to work especially well and won't end up "stranding" the user if network issues occur.

7 months ago 88 votes
If Edgar Allan Poe was into Design Systems

Once upon a midnight dreary, while I pondered, weak and weary,
While I nodded, nearly napping, suddenly there came a tapping,
"'Tis a design system," I muttered, "bringing order to the core—
Ah, distinctly I remember, every button, every splendor,
Each component, standardized, like a raven's watchful eyes,
Unified in system's might, like patterns we restore—
And each separate style injection, linked with careful introspection,
'Tis a design system, nothing more.

9 months ago 93 votes
Web Components Are Not the Future — They’re the Present

It’s disappointing that some of the most outspoken individuals against Web Components are framework maintainers. These individuals are, after all, in some of the best positions to provide valuable feedback. They have a lot of great ideas! Alas, there’s little incentive for them because standards evolve independently and don’t necessarily align with framework opinions. How could they? Opinions are one of the things that make frameworks unique. And therein lies the problem. If you’re convinced that your way is the best and only way, it’s natural to feel disenchanted when a decision is made that you don’t fully agree with. This is my open response to Ryan Carniato’s post from yesterday called “Web Components Are Not the Future.”

WTF is a component anyway?

The word component is a loaded term, but I like to think of it in relation to interoperability. If I write a component in Framework A, I would like to be able to use it in Framework B, C, and D without having to rewrite it or include its entire framework. I don’t think many will disagree with that objective. We’re not there yet, but the road has been paved, and instead of learning to drive on it, frameworks are building…different roads. Ryan states:

> If the sheer number of JavaScript frameworks is any indicator we are nowhere near reaching a consensus on how one should author components on the web. And even if we were a bit closer today we were nowhere near there a decade ago.

The thing is, we don’t need to agree on how to write components; we just need to agree on the underlying implementation. Then you can use classes, hooks, or whatever flavor you want to create them. Turns out, we have a very well-known, ubiquitous technology that we’ve chosen to do this with: HTML. Ryan continues:

> But it also can have a negative effect. If too many assumptions are made it becomes harder to explore alternative space because everything gravitates around the establishment.

What is more established than a web standard that can never change? If the concern is premature standardization, well, it’s a bit late for that. So let’s figure out how to get from where we are now to where we want to be. The solution isn’t to start over at the specification level; it’s to rethink how front end frameworks engage with current and emerging standards and work to improve them. Respectfully, it’s time to stop complaining, move on, and fix the things folks perceive as suboptimal.

The definition of component

That said, we also need to realize that Web Components aren’t a 1:1 replacement for framework components. They’re tangentially related things, and I think a lot of confusion stems from this. We should really fix the definition of component.

> So the fundamental problem with Web Components is that they are built on Custom Elements. Elements !== Components. More specifically, Elements are a subset of Components. One could argue that every Element could be a Component but not all Components are Elements.

To be fair, I’ve never really liked the term “Web Components” because it competes with the concept of framework components, but that’s what caught on and that's what most people are familiar with these days. Alas, there is a very important distinction here. Sure, a button and a text field can be components, but there are other types. For example, many frameworks support a concept of renderless components that exist in your code, but not in the final HTML. You can’t do that with Web Components, because every custom element results in an actual DOM element.
(FWIW I don’t think this is a bad thing — but I digress…) As to why Web Components don’t do all the things framework components do, that’s because they’re a lower-level implementation of an interoperable element. They’re not trying to do everything framework components do. That’s what frameworks are for.

It’s ok to be shiny

In fact, this is where frameworks excel. They let you go above and beyond what the platform can do on its own. I fully support this trial-and-error way of doing things. After all, it’s fun to explore new ideas and live on the bleeding edge. We got a lot of cool stuff from doing that. We got document.querySelector() from jQuery. CSS Custom Properties were inspired by Sass. Tagged template literals were inspired by JSX. Soon we’re getting signals from Preact. And from all the component-based frameworks that came before them, we got Web Components: custom HTML elements that can be authored in many different ways (because we know people like choices) and are fully interoperable (if frameworks and metaframeworks would continue to move towards the standard instead of protecting their own).

Frameworks are a testbed for new ideas that may or may not work out. We all need to be OK with that. Even framework authors. Especially framework authors. More importantly, we all need to stop being salty when our way isn’t what makes it into the browser. There will always be a better way to do something, but none of us have the foresight to know what a perfect solution looks like right now. Hindsight is 20/20.

As humans, we’re constantly striving to make things better. We’re really good at it, by the way. But we must have the discipline to reach various checkpoints to pause, reflect, and gather feedback before continuing. Even the cheapest cars on the road today will outperform the Model T in every way. I’m sure Ford could have made the original Model T way better if they had spent another decade working on it, but do you know what made the next version even better than 10 more years of work would have? The feedback they got from actual users who bought them, sat in them, and drove them around on actual roads.

Web Standards offer a promise of stability, and we need to move forward to improve them together. Using one’s influence to rally users against the very platform you’ve built your success on is damaging to both the platform and the community. We need these incredible minds to be less divisive and more collaborative.

The right direction

Imagine if we had applied the same arguments against HTML early on. What if we never standardized it at all? Would the Web be a better place if every site required a specific browser? (Narrator: it wasn't.) Would it be better if every site was Flash or a Java applet? (Remember Silverlight? lol) Sure, there are often better alternatives for every use case, but we have to pick something that works for the majority; then we can iterate on it. Web Components are a huge step in the direction of standardization, and we should all be excited about that.

But the Web Component implementation isn’t compatible with existing frameworks, and therein lies an existential problem. Web Components are a threat to the peaceful, proprietary way of life for frameworks that have amassed millions of users — the majority of web developers. Because opinions vary so wildly, when a new standard emerges, frameworks often can’t adapt to it without breaking changes. And breaking changes can be detrimental to a user base. Have you spotted the issue?
You can’t possibly champion Web Standards when you’ve built a non-standard thing that will break if you align with the emerging standard. It’s easier to oppose the threat than to adapt to it. And of course Web Components don’t do everything a framework does. How can the platform possibly add all the features every framework added last week? That would be absolutely reckless. And no, the platform doesn’t move as fast as your framework, and that’s sometimes painful. But it’s by design. This process is what gives us APIs that continue to work for decades.

As users, we need to get over this hurdle and start thinking about how frameworks can adapt to current standards and how to evolve them as new ones emerge. Let’s identify shortcomings in the spec and work together to improve the ecosystem instead of arguing about whose shit smells worse. Reinventing the wheel isn’t the answer. Lock-in isn’t the answer. This is why I believe the next generation of frameworks will converge on custom elements as an interoperable component model, enhance that model by sprinkling in awesome features of their own, and focus more on flavors (class-based, functional, signals, etc.) and higher-level functionality. As for today's frameworks? How they adapt will determine how relevant they remain.

Living dangerously

Ryan concludes:

> So in a sense there are nothing wrong with Web Components as they are only able to be what they are. It's the promise that they are something that they aren't which is so dangerous. The way their existence warps everything around them that puts the whole web at risk. It's a price everyone has to pay.

So Web Components aren’t the specific vision you had for components. That's fine. But that's how it is. They're not Solid components. They’re not React components. They’re not Svelte components. They’re not Vue components. They’re standards-based Web Components that work in all of the above. And they could work even better in all of the above if all of the above were interested in advancing the platform instead of locking users in.

I’m not a conspiracy theorist, but I find it interesting how many people are and have been sponsored and/or hired by for-profit companies whose platforms rely heavily on said frameworks. Do you think it’s in their best interest to follow Web Standards if that means making their service less relevant and less lucrative? Of course not. If you’ve built an empire on top of something, there’s absolutely zero incentive to tear it down for the betterment of humanity. That’s not how capitalism works. It’s far more profitable to lock users in and keep them paying. But you know what…? Web Standards don't give a fuck about monetization.

Longevity supersedes ingenuity

The last thing I’d like to talk about is this line:

> Web Components possibly pose the biggest risk to the future of the web that I can see.

Of course, this is from the perspective of a framework author, not from the people actually shipping and maintaining software built using these frameworks. And the people actually shipping software are the majority, but that’s not prestigious, so they rarely get the high follower counts. The people actually shipping software are tired of framework churn. They're tired of shit they wrote last month being outdated already. They want stability. They want to know that the stuff they build today will work tomorrow. As history has proven, no framework can promise that. You know what framework I want to use?
I want a framework that aligns with the platform, not one that replaces it. I want a framework that values incremental innovation over user lock-in. I want a framework that says it's OK to break things if it means making the Web a better place for everyone. Yes, that comes at a cost, but almost every good investment does, and I would argue that cost will be less expensive than learning a new framework and rebuilding buttons for the umpteenth time.

The Web platform may not be perfect, but it continuously gets better. I don’t think frameworks are bad, but as a community, we need to recognize that a fundamental piece of the platform has changed, and it's time to embrace the interoperable component model that Web Component APIs have given us…even if that means breaking things to get there. The component war is over.
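To ground the interoperability argument in code, here is a minimal sketch of the kind of standards-based component the essay describes; the <fancy-button> element is hypothetical, defined once in vanilla JavaScript and usable from any framework or from plain HTML:

```js
// Define once, with no framework runtime.
class FancyButton extends HTMLElement {
  constructor() {
    super();
    // Shadow DOM keeps internal markup and styles encapsulated.
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <style>
        button { font: inherit; padding: 0.5em 1em; cursor: pointer; }
      </style>
      <button><slot></slot></button>
    `;
  }
}

customElements.define('fancy-button', FancyButton);

// Usage anywhere HTML works, whether React, Vue, Svelte, Solid,
// or no framework at all:
//
//   <fancy-button>Save</fancy-button>
```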

10 months ago 91 votes
Component Machines

Components are like little machines. You build them once. Use them whenever you need them. Every now and then you open them up to oil them or replace a part, then you send them back to work. And work, they do. Little component machines just chugging along so you never have to write them from scratch ever again. Adapted from this tweet.

11 months ago 84 votes
Styling Custom Elements Without Reflecting Attributes

I've been struggling with the idea of reflecting attributes in custom elements and when it's appropriate. I think I've identified a gap in the platform, but I'm not sure exactly how we should fill it. I'll explain with an example. Let's say I want to make a simple badge component with primary, secondary, and tertiary variants.

```html
<my-badge variant="primary">foo</my-badge>
<my-badge variant="secondary">bar</my-badge>
<my-badge variant="tertiary">baz</my-badge>
```

This is a simple component, but one that demonstrates the problem well. I want to style the badge based on the variant property, but sprouting attributes (which occurs as a result of reflecting a property back to an attribute) is largely considered a bad practice. A lot of web component libraries do it out of necessity to facilitate styling — including Shoelace — but is there a better way?

The problem

I need to style the badge without relying on reflected attributes. This means I can't use :host([variant="..."]) because the attribute may or may not be set by the user. For example, if the component is rendered in a framework that sets properties instead of attributes, or if the property is set or changed programmatically, the attribute will be out of sync and my styles will be broken. So how can I style the badge based on its variant without reflection? Let's assume we have the following internals, which are all we really need for the badge.

```html
<my-badge>
  #shadowRoot
  <slot></slot>
</my-badge>
```

What can we do about it?

I can't add classes to the slot, because :host(:has(.slot-class)) won't match. I can't set a data attribute on the host element, because that's the same as reflection and might cause issues with SSR and DOM-morphing libraries. I could add a wrapper element around the slot and apply classes to it, but I'd prefer not to bloat the internals with additional elements. With a wrapper, users would have to use ::part(wrapper) to target it. Without the wrapper, they can set background, border, and other CSS properties directly on the host element, which is more desirable. I could add custom states for each variant, but this gets messy for non-Boolean values and feels like an abuse of the API.

Filling the gap

I'm not sure what the best solution is or could be, but one thing that comes to mind is a way to provide some kind of cross-root version of :has that works with :host. Something akin to:

```css
:host(:has-in-shadow-root(.some-selector)) {
  /* maybe one day… */
}
```

If you have any thoughts on this one, hit me up on Twitter.
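For contrast, here's a hedged sketch of the reflection-based workaround the post pushes back on, roughly the Shoelace-style approach; the styles and the 'primary' default are made up for illustration. Setting the property writes the attribute back purely so :host([variant="..."]) can match:

```js
class MyBadge extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <style>
        :host { display: inline-block; padding: 0.25em 0.5em; }
        :host([variant='primary']) { background: royalblue; color: white; }
        :host([variant='secondary']) { background: gray; color: white; }
        :host([variant='tertiary']) { background: transparent; }
      </style>
      <slot></slot>
    `;
  }

  get variant() {
    return this.getAttribute('variant') ?? 'primary';
  }

  // The "sprouting" in question: a property write mutates the DOM,
  // which is what trips up SSR and DOM-morphing libraries.
  set variant(value) {
    this.setAttribute('variant', value);
  }
}

customElements.define('my-badge', MyBadge);
```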

a year ago 81 votes

More in programming

The future of large files in Git is Git

If Git had a nemesis, it’d be large files. Large files bloat Git’s storage, slow down git clone, and wreak havoc on Git forges. In 2015, GitHub released Git LFS—a Git extension that hacked around problems with large files. But Git LFS added new complications and storage costs. Meanwhile, the Git project has been quietly working on large files. And while LFS ain’t dead yet, the latest Git release shows the path towards a future where LFS is, finally, obsolete.

What you can do today: replace Git LFS with Git partial clone

Git LFS works by storing large files outside your repo. When you clone a project via LFS, you get the repo’s history and small files, but skip large files. Instead, Git LFS downloads only the large files you need for your working copy. In 2017, the Git project introduced partial clones that provide the same benefits as Git LFS:

> Partial clone allows us to avoid downloading [large binary assets] in advance during clone and fetch operations and thereby reduce download times and disk usage. – Partial Clone Design Notes, git-scm.com

Git’s partial clone and LFS both make for:

- Small checkouts – On clone, you get the latest copy of big files instead of every copy.
- Fast clones – Because you avoid downloading large files, each clone is fast.
- Quick setup – Unlike shallow clones, you get the entire history of the project—you can get to work right away.

What is a partial clone?

A Git partial clone is a clone with a --filter. For example, to avoid downloading files bigger than 100KB, you’d use:

```sh
git clone --filter='blob:limit=100k' <repo>
```

Later, Git will lazily download any files over 100KB you need for your checkout. By default, if I git clone a repo with many revisions of a noisome 25MB PNG file, then cloning is slow and the checkout is obnoxiously large:

```sh
$ time git clone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (153/153), 1.19 GiB

real    3m49.052s
```

Almost four minutes to check out a single 25MB file!

```sh
$ du --max-depth=0 --human-readable noise-over-git/.
1.3G    noise-over-git/.
```

^ 🤬

And 50 revisions of that single 25MB file eat 1.3GB of space. But a partial clone side-steps these problems:

```sh
$ git config --global alias.pclone 'clone --filter=blob:limit=100k'
$ time git pclone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (1/1), 24.03 MiB

real    0m6.132s

$ du --max-depth=0 --human-readable noise-over-git/
49M     noise-over-git/
```

^ 😻 (the same size as a git lfs checkout)

My filter made cloning 97% faster (3m 49s → 6s), and it reduced my checkout size by 96% (1.3GB → 49M)! But there are still some caveats here. If you run a command that needs data you filtered out, Git will need to make a trip to the server to get it. So, commands like git diff, git blame, and git checkout will require a trip to your Git host to run. But, for large files, this is the same behavior as Git LFS. Plus, I can’t remember the last time I ran git blame on a PNG 🙃.

Why go to the trouble? What’s wrong with Git LFS?

Git LFS foists Git’s problems with large files onto users. And the problems are significant:

- 🖕 High vendor lock-in – When GitHub wrote Git LFS, the other large file systems—Git Fat, Git Annex, and Git Media—were agnostic about the server side. But GitHub locked users to their proprietary server implementation and charged folks to use it.[1]
- 💸 Costly – GitHub won because it let users host repositories for free. But Git LFS started as a paid product. Nowadays, there’s a free tier, but you’re dependent on the whims of GitHub to set pricing. Today, a 50GB repo on GitHub will cost $40/year for storage. In contrast, storing 50GB on Amazon’s S3 standard storage is $13/year.
- 😰 Hard to undo – Once you’ve moved to Git LFS, it’s impossible to undo the move without rewriting history.
- 🌀 Ongoing set-up costs – All your collaborators need to install Git LFS. Without Git LFS installed, your collaborators will get confusing, metadata-filled text files instead of the large files they expect.

The future: Git large object promisors

Large files create problems for Git forges, too. GitHub and GitLab put limits on file size[2] because big files cost more money to host. Git LFS keeps server-side costs low by offloading large files to CDNs. But the Git project has a new solution. Earlier this year, Git merged a new feature: large object promisors. Large object promisors aim to provide the same server-side benefits as LFS, minus the hassle to users.

> This effort aims to especially improve things on the server side, and especially for large blobs that are already compressed in a binary format. This effort aims to provide an alternative to Git LFS – Large Object Promisors, git-scm.com

What is a large object promisor?

Large object promisors are special Git remotes that only house large files. In the bright, shiny future, large object promisors will work like this:

1. You push a large file to your Git host.
2. In the background, your Git host offloads that large file to a large object promisor.
3. When you clone, the Git host tells your Git client about the promisor.
4. Your client will clone from the Git host and automagically nab large files from the promisor remote.

But we’re still a ways off from that bright, shiny future. Git large object promisors are still a work in progress. Pieces of large object promisors merged to Git in March of 2025. But there’s more to do and open questions yet to answer. And so, for today, you’re stuck with Git LFS for giant files. But once large object promisors see broad adoption, maybe GitHub will let you push files bigger than 100MB.

The future of large files in Git is Git. The Git project is thinking hard about large files, so you don’t have to. Today, we’re stuck with Git LFS. But soon, the only obstacle for large files in Git will be your half-remembered, ominous hunch that it’s a bad idea to stow your MP3 library in Git.

Edited by Refactoring English

[1] Later, other Git forges made their own LFS servers. Today, you can push to multiple Git forges or use an LFS transfer agent, but all this makes setup harder for contributors. You’re pretty much locked in unless you put in extra effort to get unlocked.
[2] File size limits: 100MB for GitHub, 100MB for GitLab.com.
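As a hedged aside on the demo above (the flags are standard Git; the PNG's path inside the repo is a guess), you can list exactly which blobs a partial clone skipped and watch Git fetch one lazily:

```sh
# Inside the partial clone: objects the server "promised" but we never
# downloaded are printed with a leading "?".
git rev-list --objects --all --missing=print | grep '^?'

# Any command that needs a filtered blob triggers a lazy download from
# the promisor remote, e.g. restoring an old revision of the big file:
git checkout HEAD~10 -- noise.png
```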

yesterday 7 votes
Just a Little More Context Bro, I Promise, and It’ll Fix Everything

Conrad Irwin has an article on the Zed blog, “Why LLMs Can't Really Build Software”. He says it boils down to this:

> the distinguishing factor of effective engineers is their ability to build and maintain clear mental models

We do this by:

- Building a mental model of what you want to do
- Building a mental model of what the code does
- Reducing the difference between the two

It’s kind of an interesting observation about how we (as humans) problem-solve vs. how we use LLMs to problem-solve. With LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution. With your brain, you tweak, revise, or simplify your mental model more and more until the solution presents itself. One adds information — complexity you might even say — to solve a problem. The other eliminates it.

You know what that sort of makes me think of? NPM-driven development. Solving problems with LLMs is like solving front-end problems with NPM: the “solution” comes through installing more and more things — adding more and more context, i.e. more and more packages.

LLM: Problem? Add more context.
NPM: Problem? There’s a package for that.

Contrast that with a solution that comes through simplification. You don’t add more context; you simplify your mental model so you need less to solve a problem — if you solve it at all. Perhaps you eliminate the problem entirely! Rather than install another package to fix what ails you, you simplify your mental model, which often eliminates the problem you had in the first place, and with it the need to solve any problem at all or to add any additional context or complexity (or dependency).

As I’m typing this, I’m thinking of that image of the evolution of the Raptor engine, where each iteration gets visibly simpler. That stands in contrast to my experience working with LLMs, which often want more and more context from me to get to a generated solution.

I know, I know. There’s probably a false equivalence here. This entire post started as a note and I just kept going. This post itself needs further thought and simplification. But that’ll have to come in a subsequent post, otherwise this never gets published lol.

yesterday 4 votes
How to Leverage the CPU’s Micro-Op Cache for Faster Loops

Measuring, analyzing, and optimizing loops using Linux perf, Top-Down Microarchitectural Analysis, and the CPU’s micro-op cache
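The post is linked here only as a teaser, but as a hedged sketch of where such an analysis typically starts (assuming Linux perf on an Intel CPU that exposes top-down metrics; the benchmark name is a placeholder):

```sh
# Level-1 Top-Down breakdown: frontend bound, backend bound,
# bad speculation, and retiring.
perf stat -M TopdownL1 -- ./loop-benchmark

# Then sample the hottest instructions in the loop itself.
perf record -e cycles:pp -- ./loop-benchmark
perf report
```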

yesterday 6 votes
Omarchy micro-forks Chromium

You can just change things! That's the power of open source. But for a lot of people, it might seem like a theoretical power. Can you really change, say, Chrome? Well, yes! We've made a micro fork of Chromium for Omarchy (our new 37signals Linux distribution). Just to add one feature needed for live theming. And now it's released as a package anyone can install on any flavor of Arch using the AUR (Arch User Repository). We got it all done in just four days. From idea, to solicitation, to successful patch, to release, to incorporation. And now it'll be part of the next release of Omarchy. There are no speed limits in open source. Nobody to ask for permission. You have the code, so you can make the change. All you need is skill and will (and maybe, if you need someone else to do it for you, a $5,000 incentive 😄).

2 days ago 4 votes
Choosing Tools To Make Websites

Jan Miksovsky lays out his idea for website creation as content transformation. He starts by talking about tools that hide what’s happening “under the hood”:

> A framework’s marketing usually pretends it is unnecessary for you to understand how its core transformation works — but without that knowledge, you can’t achieve the beautiful range of results you see in the framework’s sample site gallery.

This is a great callout. Tools will say, “You don’t have to worry about the details.” But the reality is, you end up worrying about the details — at least to some degree. Why? Because what you want to build is full of personalization. That’s how you differentiate yourself, which means you’re going to need a tool that’s expressive enough to help you. So the question becomes: how hard is it to understand the details that are being intentionally hidden away?

A lot of the time those details are not exposed directly. Instead they’re exposed through configuration. But configuration doesn’t really help you learn how something works. I mean, how many of you have learned how TypeScript works under the hood by using tsconfig.json? As Jan says:

> Configuration can lead to as many problems as it solves

Nailed it. He continues:

> Configuring software is itself a form of programming, in fact a rather difficult and often baroque form. It can take more data files or code to configure a framework’s transformation than to write a program that directly implements that transformation itself.

I’m not a DevOps person, but that sounds like DevOps in a nutshell right there. (It also perfectly encapsulates my feelings on trying to set up configuration in GitHub Actions.)

Jan moves beyond site creation to also discuss site hosting. He gives good reasons for keeping your website’s architecture simple and decoupled from your hosting provider (something I’ve been a long-time proponent of):

> These site hosting platforms typically charge an ongoing subscription fee. (Some offer a free tier that may meet your needs.) The monthly fee may not be large, but it’s forever. Ten years from now you’ll probably still want your content to be publicly available, but will you still be happy paying that monthly fee? If you stop paying, your site disappears.

In subscription pricing, any price (however small) is recurring. Stated differently: pricing is forever.

Anyhow, it’s a good read from Jan that lays out his vision for why he’s building Web Origami: a tool that encourages you to understand (and customize) how you transform content into a website. He just launched version 0.4.0, which has some exciting stuff I want to try out (I’ll have to write about all that later).

3 days ago 5 votes