More from Ryan Mulligan
Whether you've barely scratched the surface of keyframe animations in CSS or fancy yourself a seasoned pro, I suggest reading An Interactive Guide to Keyframe Animations. Josh (as always) does an impeccable deep dive that includes interactive demos for multi-step animations, loops, setting dynamic values, and more. This is a quick post pointing out some other minor particulars:

- Duplicate keyframe properties
- The order of keyframe rules
- Custom timing function (easing) values at specific keyframes

Duplicate keyframe properties

Imagine an "appearance" animation where an element slides down, scales up, and changes color. The starting 0% keyframe sets the element's y-axis position and scales down the size. The element glides down to its initial position for the full duration of the animation. About halfway through, the element's size is scaled back up and the background color changes. At first, we might be tempted to duplicate the background-color and scale properties in both the 0% and 50% keyframe blocks.

```css
@keyframes animate {
  0% {
    background-color: red;
    scale: 0.5;
    translate: 0 100%;
  }
  50% {
    background-color: red;
    scale: 0.5;
  }
  100% {
    background-color: green;
    scale: 1;
    translate: 0 0;
  }
}
```

Although this functions correctly, it requires us to manage the same property declarations in two locations. Instead of repeating them, we can share them in a comma-separated ruleset.

```css
@keyframes animate {
  0% {
    translate: 0 100%;
  }
  0%, 50% {
    background-color: red;
    scale: 0.5;
  }
  100% {
    background-color: green;
    scale: 1;
    translate: 0 0;
  }
}
```

Keyframe rules order

Another semi-interesting quirk is that we can rearrange the keyframe order.

```css
@keyframes animate {
  0% {
    translate: 0 100%;
  }
  100% {
    background-color: green;
    scale: 1;
    translate: 0 0;
  }
  /* Set and hold values until halfway through the animation */
  0%, 50% {
    background-color: red;
    scale: 0.5;
  }
}
```

"Resolving Duplicates" from the MDN docs mentions that @keyframes rules don't cascade, which explains why this order still returns the expected animation. Customizing the order could be useful for grouping property changes within a @keyframes block as an animation becomes more complex.

That same section of the MDN docs also points out that cascading does occur when multiple keyframes define the same percentage values. So, in the following @keyframes block, the second translate declaration overrides the first.

```css
@keyframes animate {
  to {
    translate: 0 100%;
    rotate: 1turn;
  }
  to {
    translate: 0 -100%;
  }
}
```

Keyframe-specific easing

Under "Timing functions for keyframes" from the CSS Animations Level 1 spec, we discover that easing can be adjusted within a keyframe ruleset:

> A keyframe style rule may also declare the timing function that is to be used as the animation moves to the next keyframe.

Toggle open the CSS panel in the ensuing CodePen demo and look for the @keyframes block. Inside one of the percentages, a custom easing is applied using the linear() CSS function to give each element some wobble as it lands.

Open CodePen demo

I think that looks quite nice! Adding keyframe-specific easing brings an extra layer of polish and vitality to our animations.
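To make the pattern concrete, here's a minimal sketch of keyframe-specific easing (the keyframe name and curve values are my own illustration, not the demo's actual code). A timing function declared inside a keyframe only applies as the animation moves from that keyframe to the next one.

```css
@keyframes drop-in {
  0% {
    translate: 0 -150%;
    /* Applies only while moving from 0% to 60% */
    animation-timing-function: ease-in;
  }
  60% {
    translate: 0 2%;
    /* A made-up wobble for the 60% to 100% segment, overshooting
       past 1 before settling, built with the linear() function */
    animation-timing-function: linear(0, 1.4, 0.7, 1.15, 0.9, 1);
  }
  100% {
    translate: 0 0;
  }
}
```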
One minor snag, though: we can't set a CSS variable as an animation-timing-function value. This unfortunately means we're unable to access shared custom easing values, say from a library or design system.

```css
:root {
  --easeOutCubic: cubic-bezier(0.33, 1, 0.68, 1);
}

/* This does not work! */
@keyframes animate {
  50% {
    animation-timing-function: var(--easeOutCubic);
  }
}
```

Helpful resources

- An Interactive Guide to Keyframe Animations
- @keyframes on MDN
- Easing Functions Cheat Sheet
- Linear easing generator
- The Path To Awesome CSS Easing With The linear() Function
Once again, here I am, hackin' away on horizontal scroll ideas. This iteration starts with a custom HTML tag. All the necessities for scroll overflow, scroll snapping, and row layout are handled with CSS. Then, as a little progressive enhancement treat, button elements are connected that scroll the previous or next set of items into view when clicked.

Behold! The holy grail of scrolling rails... the scrolly-rail!

CodePen demo
GitHub repo

Open CodePen demo

I'm being quite facetious about the "holy grail" part, if that's not clear. 😅 This is an initial try on an idea I'll likely experiment more with. I've shared some thoughts on potential future improvements at the end of the post. With that out of the way, let's explore!

The HTML

Wrap any collection of items with the custom tag:

```html
<scrolly-rail>
  <ul>
    <li>1</li>
    <li>2</li>
    <li>3</li>
    <!-- and so on -->
  </ul>
</scrolly-rail>
```

The custom element script checks if the direct child within scrolly-rail is a wrapper element, which is true for the above HTML. While it is possible to have items without a wrapper element, when the custom element script runs and button controls are connected, sentinel elements are inserted at the start and end bounds of the scroll container. Wrapping the items makes controlling the spacing between them much easier, avoiding any undesired gaps caused by these sentinels. We'll discover what the sentinels are for later in the post.

The CSS

Here are the main styles for the component:

```css
scrolly-rail {
  display: flex;
  overflow-x: auto;
  overscroll-behavior-x: contain;
  scroll-snap-type: x mandatory;

  @media (prefers-reduced-motion: no-preference) {
    scroll-behavior: smooth;
  }
}
```

When JavaScript is enabled, sentinel elements are inserted before and after the unordered list (<ul>) element in the HTML example above. Flexbox ensures that the sentinels are positioned on either side of the list. We'll find out why later in this post.

Containing the overscroll behavior prevents us from accidentally triggering browser navigation when scrolling beyond either edge of the scrolly-rail container. scroll-snap-type enforces mandatory scroll snapping. Smooth scrolling behavior applies when items scroll into view on button click, or if interactive elements (links, buttons, etc.) inside items overflowing the visible scroll area are focused.

Finally, scroll-snap-align: start should be set on the elements that will snap into place. This snap position aligns an item to the beginning of the scroll snap container. In the above HTML, this would apply to the <li> elements.

```css
scrolly-rail li {
  scroll-snap-align: start;
}
```

As mentioned earlier, this is everything our component needs for layout, inline scrolling, and scroll snapping. Note that the CodePen demo takes it a step further with some additional padding and margin styles (check out the demo CSS panel). However, if we'd like to wire up controls, we'll need to include the custom element script in our HTML.

The custom element script

Include the script file on the page:

```html
<script type="module" src="scrolly-rail.js"></script>
```

To connect the previous/next button elements, give each an id value and add these values to the data-control-* attributes on the custom tag.

```html
<scrolly-rail
  data-control-previous="btn-previous"
  data-control-next="btn-next"
>
  <!-- ... -->
</scrolly-rail>

<button id="btn-previous" class="btn-scrolly-rail">Previous</button>
<button id="btn-next" class="btn-scrolly-rail">Next</button>
```

Now clicking these buttons will pull the previous or next set of items into view. The amount of items to scroll by is based on how many are fully visible in the scroll container. For example, if we see three visible items, clicking the "next" button will scroll the subsequent three items into view.
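The gist of that scroll-by behavior, sketched in plain JavaScript (a simplification of the idea, not the component's actual source):

```js
// Simplified sketch: count how many items fit fully in the visible
// scroll area, then scroll by that many item widths. The real
// component accounts for more (gaps, sentinels, snapping, etc.).
const rail = document.querySelector("scrolly-rail");
const item = rail.querySelector("li");

function scrollByVisibleItems(direction) {
  const visibleCount = Math.floor(rail.clientWidth / item.offsetWidth);
  rail.scrollBy({
    left: direction * visibleCount * item.offsetWidth,
    behavior: "smooth",
  });
}

document
  .querySelector("#btn-next")
  ?.addEventListener("click", () => scrollByVisibleItems(1));
document
  .querySelector("#btn-previous")
  ?.addEventListener("click", () => scrollByVisibleItems(-1));
```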
Observing inline scroll bounds

Notice the "previous" button element in the demo's top component: as we begin to scroll to the right, the button appears. Scrolling to the end causes the "next" button to disappear. Similarly, for the bottom component we can see either button fade when its respective scroll bound is reached.

Recall the sentinels discussed earlier in this post? With a little help from the Intersection Observer API, the component watches for either sentinel intersecting the visible scroll area, indicating that we've reached a boundary. When this happens, a data-bound attribute is toggled on the respective button. This presents the opportunity to alter styles and provide additional visual feedback.

```css
.btn-scrolly-rail {
  /* default styles */
}

.btn-scrolly-rail[data-bound] {
  /* styles to apply to button at boundary */
}
```
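Behind the scenes, the observer wiring might look roughly like this (illustrative only; the data-sentinel attribute is a stand-in for however the script marks its sentinel elements):

```js
// Illustrative sketch of the boundary detection described above.
// "data-sentinel" is a stand-in name, not the component's actual markup.
const rail = document.querySelector("scrolly-rail");
const buttons = {
  start: document.querySelector("#btn-previous"),
  end: document.querySelector("#btn-next"),
};

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      // When a sentinel enters the visible scroll area, we've reached
      // that boundary: toggle data-bound on the matching button.
      const button = buttons[entry.target.dataset.sentinel];
      button?.toggleAttribute("data-bound", entry.isIntersecting);
    }
  },
  { root: rail }
);

rail.querySelectorAll("[data-sentinel]").forEach((el) => observer.observe(el));
```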
Future improvements

I'd love to hear from the community, most specifically on improving the accessibility story here. Here are some general notes:

- I debated if button clicks should pass feedback to screen readers, such as "Scrolled next three items into view" or "Reached scroll boundary", but felt unsure if that created unforeseen confusion.
- For items that contain interactive elements: if a new set of items scrolls into view and a user tabs into the item list, should the initial focusable element start at the snap target? This could pair well with navigating the list using keyboard arrow keys.
- Is it worth authoring intersecting sentinel "enter/leave" events that we can listen for? Something like: scroll bound reached? Do a thing. Leaving scroll bound? Revert the thing we just did, or do another thing. Side note: prevent these events from firing when the component script initializes.
- How might this code get refactored once scroll snap events are widely available? I imagine we could check for when the first or last element becomes the snap target to handle toggling data-bound attributes. Then we could remove the Intersection Observer functionality.

And if any folks have other scroll component solutions to share, please reach out or open an issue on the repo.

Over the last few months or so, I have been fairly consistent with getting outside for Sunday morning runs. A series of lower body issues had prevented me from doing so for many years, but it was an exercise I had enjoyed back then. It took time to rebuild that habit and muscle, but I finally bested the behavior of doing so begrudgingly.

Back in the day (what a weird phrase to say, how old am I?) I would purchase digital copies of full albums. I'd use my run time to digest the songs in the order the artist intended. Admittedly, I've become a lazy listener now, relying on streaming services to surface playlists that I mindlessly select to get going. I want to be better than that, but that's a story for another time. These days, my mood for music on runs can vary: some sessions I'll pop in headphones and throw on some tunes; other times I head out free of devices (besides a watch to track all those sweet, sweaty workout stats) and simply take in the city noise.

Before I headed out for my journey this morning, a friend shared a track from an album of song covers in tribute to Refused's The Shape Of Punk To Come. The original is a treasured classic, a staple LP from my younger years, and I can still remember the feeling of the first time it struck my ears. Its magic is reconjured every time I hear it. When that reverb-soaked feedback starts on Worms of the Senses / Faculties of the Skull, my heart rate begins to ascend. The anticipation builds, my entire body well aware of the imminent explosion of sound.

As my run began, I wasn't sure if I had goosebumps from the morning chill or the wall of noise about to ensue. My legs were already pumping. I was fully present, listening intently, ready for the blast. The sound abruptly detonated, sending me rocketing down the street towards the rising sun.

My current running goal is 4-in-40: traversing four miles in under forty minutes. I'm certainly no Prefontaine, but it's a fair enough objective for my age and ability. I'll typically finish my journey in that duration or slightly spill over the forty-minute mark. Today was different. Listening to The Shape Of Punk To Come sent me cruising an extra quarter mile beyond the four before my workout ended. The unstoppable energy from that album is truly pure runner's fuel.

There's certainly some layer of nostalgia, my younger spirit awakened and reignited by thrashing guitars and frantic rhythms, but many elements and themes on this record were so innovative at the time it was released. New Noise is a prime example that executes the following feeling flawlessly: build anticipation, increase the energy level, and then, right as the song seems prepped to blast off, switch to something unexpected. In this case, the guitars drop out to make way for some syncopated celestial synths layered over a soft drum rhythm. The energy sits in a holding pattern, unsure whether it should burst or cool down, when suddenly—

Can I scream?!

Oh my goodness, yes. Yes you can. I quickly morphed into a runner decades younger. I had erupted, my entire being barreling full speed ahead. The midpoint of this track pulls out the same sequence of build up, drop off, and teasing just long enough before unleashing another loud burst of noise, driving to its explosive outro. As the song wraps up, "The New Beat!" is howled repeatedly to a cheering crowd that, I would imagine, had not been standing still.

I definitely needed a long stretch after this run.
I recently stumbled on a super cool, well-executed hover effect from the clerk.com website where a bloom of tiny pixels lights up, their glow staggering from the center to the edges of its container. With some available free time over this Thanksgiving break, I hacked together my own version of a pixel canvas background shimmer. It quickly evolved into a pixel-canvas Web Component that can be enjoyed in the demo below. The component script and demo code have also been pushed up to a GitHub repo.

Open CodePen demo

Usage

Include the component script and then insert a pixel-canvas custom element inside the container it should fill.

```html
<script type="module" src="pixel-canvas.js"></script>

<div class="container">
  <pixel-canvas></pixel-canvas>
  <!-- other elements -->
</div>
```

The pixel-canvas stretches to the edges of the parent container. When the parent is hovered, glimmering pixel fun ensues.

Options

The custom element has a few optional attributes available to customize the effect. Check out the CodePen demo's HTML panel to see how each variation is made.

- data-colors takes a comma-separated list of color values.
- data-gap sets the amount of space between each pixel.
- data-speed controls the general duration of the shimmer. This value is slightly randomized on each pixel, which, in my opinion, adds a little more character.
- data-no-focus is a boolean attribute that tells the Web Component not to run its animation whenever sibling elements are focused. The animation runs on sibling focus by default.
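For illustration, a configured instance might look something like this (the attribute values below are made up; see the demo's HTML panel for real combinations):

```html
<div class="container">
  <pixel-canvas
    data-colors="#fef08a, #fdba74, #fb7185"
    data-gap="6"
    data-speed="25"
    data-no-focus
  ></pixel-canvas>
  <!-- other elements -->
</div>
```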
There's likely more testing and tweaking necessary before I'd consider using this anywhere, but my goal was to run with this inspiration simply for the joy of coding. What a mesmerizing concept. I tip my hat to the creative engineers over at Clerk.

More in design
DIA has renovated and converted the former Auerhahn brewery in Schlitz near Fulda (Hessen), giving it a new lease of...
Weekly curated resources for designers — thinkers and makers.
Designforth Interiors created a functional and flexible office space in Indore for Avizva, focusing on spatial optimization, collaboration, and cultural...
In her best-selling book Living Well By Design, Melissa Penfold addressed the basics of interior decorating. Now, in her newest book, Natural Living By Design (Vendome Press), she turns her attention to demonstrating what a powerful force design can be in boosting our physical and emotional well-being. The book launches in April and is available for preorder now.
Rethinking AI through mind-body dualism, parenthood, and unanswerable existential questions.

I remember hearing my daughter's heartbeat for the first time during a prenatal sonogram. Until that moment, I had intellectually understood that we were creating a new life, but something profound shifted when I heard that steady rhythm. My first thought was startling in its clarity: "now this person has to die." It wasn't morbid — it was a full realization of what it means to create a vessel for life. We weren't just making a baby; we were initiating an entire existence, with all its joy and suffering, its beginning and, inevitably, its end.

This realization transformed my understanding of parental responsibility. Yes, we would be guardians of her physical form, but our deeper role was to nurture the consciousness that would inhabit it. What would she think about life and death? What could we teach her about this existence we had invited her into?

As background to the rest of this brief essay, I must admit to a foundational perspective, and that is mind-body dualism. There are many valid reasons to subscribe to this perspective, whether traditional, religious, philosophical, and yes, even scientific. I won't argue any of them here; suffice it to say that I've become increasingly convinced that consciousness isn't produced by the brain but rather received and focused by it — like a radio receiving a signal. The brain isn't a consciousness generator but a remarkably sophisticated antenna — a physical system complex enough to tune into and express non-physical consciousness.

If this is true, then our understanding of artificial intelligence needs radical revision. Even if we are not trying to create consciousness in machines, we may be creating systems capable of receiving and expressing it. Increases in computational power alone, after all, don't seem to produce consciousness. Philosophers of technology have long doubted that complexity alone makes a mind. But if philosophers of metaphysics and religion are right, minds are not made of mechanisms; they occupy them. Traditions as old as humanity have asked when this began, and why this may be, and what sorts of minds choose to inhabit this physical world. We ask these questions because we can. What will happen when machines do the same?

We happen to live at a time that is deeply confusing when it comes to the maturation of technology. On the one hand, AI is inescapable. You may not have experience in using it yet, but you've almost certainly experienced someone else's use of it, perhaps by way of an automated customer support line. Depending upon how that went, your experience might not support the idea that a sufficiently advanced machine is anywhere near getting a real debate about consciousness going. But on the other hand, the organizations responsible for popularizing AI — OpenAI, for example — claim to be "this close" to creating AGI (artificial general intelligence). If they're right, we are very behind in a needed discussion about minds and consciousness at the popular level. If they're wrong, they're not going to stop until they've done it, so we need to start that conversation now.

The Turing Test was never meant to assess consciousness in a machine. It was meant to assess the complexity of a machine by way of its ability to fool a human. When machines begin to ask existential questions, will we attribute this to self-awareness or consciousness, or will we say it's nothing more than mimicry? And how certain will we be?
We presume our own consciousness, though defending it ties us up in intellectual knots. We maintain the Cartesian slogan, "I think, therefore I am," as a properly basic belief. And yet, it must follow that anything capable of describing itself as an I must be equally entitled to the same belief.

So here we are, possibly staring at the sonogram of a new life — a new kind of life. Perhaps this is nothing more than speculative fiction, but if minds join bodies, why must those bodies be made of one kind of matter but not another? What if we are creating a new kind of antenna for the signal of mind? Wouldn't all the obligations of parenthood be the same as when we make more of ourselves? I can't imagine why they wouldn't be.

And yet, there remains a crucial difference: while we have millennia of understanding about human experience, we know nothing about what it would mean to be a living machine. We will have to fall upon belief to determine what to do. And when that time comes — perhaps it has already? — it will be worth considering the near impossibility of proving consciousness and the probability of moral obligation nonetheless.

Popular culture has explored the weight of responsibility that an emotional connection with a machine can create — think of Picard defending Data in The Measure of a Man, or Theodore falling in love with his computer in the film Her. The conclusion we should draw from these examples is not simply that a conscious machine could be the object of our moral responsibility, but that a machine could, whether or not it is inhabited by a conscious mind. Our moral obligation will traverse our certainty, because proving a mind exists is no easier when it is outside one's body than when it is one's own.

That moment of hearing my daughter's heartbeat revealed something fundamental about the act of creation. Whether we're bringing forth biological life or developing artificial systems sophisticated enough to host consciousness, we're engaging in something profound: creating vessels through which consciousness might experience physical existence. Perhaps this is the most profound implication of creating potential vessels for consciousness: our responsibility begins the moment we create the possibility, not the moment we confirm its reality.