Care ethics is closely related to the rise of feminist voices in philosophy starting in the 1970s. Writers at the time were inspired by the vanguard of feminism — Mary Astell in 1694, Mary Wollstonecraft in 1792, and Harriet Taylor Mill in 1869. They pointed out the dominance of men in the history of ethics and sought to understand the impact that such a one-sided approach might have on ethical theories. Care ethics emerged as the counterpoint to consequentialism and deontology. It emphasizes relationships — not tabulated outcomes or logical rulebooks — as the primary test of ethical behavior.

A note from the author

Before I dive into the history and practice of care ethics, I should address a few things. I’m a white, cisgender, heterosexual male living in a country and a community that provide people like me significant privilege at the exclusion of others. I was conflicted about writing an essay on feminist ethics. I asked friends: is it right for me to try? Can I faithfully...

More from The personal website of Matthew Ström

UI Density

Interfaces are becoming less dense. I’m usually one to be skeptical of nostalgia and “we liked it that way” bias, but comparing websites and applications of 2024 to their 2000s-era counterparts, the spreading out of software is hard to ignore.

To explain this trend, and suggest how we might regain density, I started by asking what, exactly, UI density is. It’s not just the way an interface looks at one moment in time; it’s about the amount of information an interface can provide over a series of moments. It’s about how those moments are connected through design decisions, and how those decisions are connected to the value the software provides.

I’d like to share what I found. Hopefully this exploration helps you define UI density in concrete and useable terms. If you’re a designer, I’d like you to question the density of the interfaces you’re creating; if you’re not a designer, use the lens of UI density to understand the software you use.

Visual density

We think about density first with our eyes. At first glance, density is just how many things we see in a given space. This is visual density. A visually dense software interface puts a lot of stuff on the screen. A visually sparse interface puts less stuff on the screen.

Bloomberg’s Terminal is perhaps the most common example of this kind of density. On just a single screen, you’ll see scrolling sparklines of the major market indices, detailed trading volume breakdowns, tables with dozens of rows and columns, and scrolling headlines containing the latest news from agencies around the world, along with UI signposts for all the above with keyboard shortcuts and quick actions to take.

A screenshot of Terminal’s interface. Via Objective Trade on YouTube

Craigslist is another visually dense example, with its hundreds of plain links to categories and spartan search-and-filter interface. McMaster-Carr’s website shares similar design cues, listing out details for many product variations in a very small space.

Screenshots of Craigslist's homepage and McMaster-Carr's product page circa 2024.

You can form an opinion about the density of these websites simply by looking at an image for a fraction of a second. This opinion comes from our subconscious, so it’s fast and intuitive. But like other snap judgements, it’s biased and unreliable. For example, which of these images is more dense? Both images have the same number of dots (500). Both take up the same amount of space. But at first glance, most people say image B looks more dense.[1]

What about these two images? Again, both images have the same number of dots, and are the same size. But organizing the dots into groups changes our perception of density.

Visual density — our first, instinctual judgement of density — is unpredictable. It’s impossible to be fully objective in matters of design. But if we want to have conversations about density, we should aim for the most consistent, meaningful, and useful definition possible.

Information density

In The Visual Display of Quantitative Information, Edward Tufte approaches the design of charts and graphs from the ground up:

“Every bit of ink on a graphic requires reason. And nearly always that reason should be that the ink presents new information.”

Tufte introduces the idea of “data-ink,” defined as the useful parts of a given visualization. Tufte argues that visual elements that don’t strictly communicate data — whether it’s a scale value, a label, or the data itself — should be eliminated. Data-ink isn’t just the space a chart takes up. Some charts use very little extraneous ink, but still take up a lot of physical space. Tufte is talking about information density, not visual density.

Information density is a measurable quantity: to calculate it, you simply divide the amount of “data-ink” in a chart by the total amount of ink it takes to print it.
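Written as a formula, Tufte’s definition is:

$$\text{data-ink ratio} = \frac{\text{data-ink}}{\text{total ink used to print the graphic}}$$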
Of course what is and is not data-ink is somewhat subjective, but that’s not the point. The point is to get the ratio as close to 1 as possible. You can increase the ratio in two ways:

- Add data-ink: provide additional (useful) data
- Remove non-data-ink: erase the parts of the graphic that don’t communicate data

Tufte's examples of graphics with a low data-ink ratio (first) and a high one (second). Reproduced from Edward Tufte's The Visual Display of Quantitative Information

There’s an upper limit to information density, which means you can subtract too much ink, or add too much information. The audience matters, too: a bond trader at their 4-monitor desk will have a pretty high threshold; a 2nd grader reading a textbook will have a low one.

Information density is related to visual density. Usually, the higher the information density is, the more dense a visualization will look. For example, take the train schedule published by E.J. Marey in 1885.[2] It shows the arrival and departure times of dozens of trains across 13 stops from Paris to Lyon. The horizontal axis is time, and the vertical axis is space. The distance between stops on the chart reflects how far apart they are in the real world. The data-ink ratio is close to 1, allowing a huge amount of information — more than 260 arrival and departure times — to be packed into a relatively small space.

The train schedule visualization published by E.J. Marey in 1885. Reproduced from Edward Tufte's The Visual Display of Quantitative Information

Tufte makes this idea explicit:

“Maximize data density and the [amount of data], within reason (but at the same time exploiting the maximum resolution of the available data-display technology).”

He puts it more succinctly as the “Shrink Principle”: graphics can be shrunk way down.

Information density is clearly useful for charts and graphs. But can we apply it to interfaces? The first half of the equation — information — applies to screens. We should maximize the amount of information that each part of our interface shows. But the second half of the equation — ink — is a bit harder to translate.

It’s tempting to think that pixels and ink are equivalent. But any interface with more than a few elements needs separators, structural elements, and signposts to help a user understand the relationship each piece has to the other. It’s also tempting to follow Tufte’s Shrink Principle and try to eliminate all the whitespace in UI. But some whitespace has meaning almost as salient as the darker pixels of graphic elements. And we haven’t even touched on shadows, gradients, or color highlights; what role do they play in the data-ink equation?

So, while information density is a helpful stepping stone, it’s clear that it’s only part of the bigger picture. How can we incorporate all of the design decisions in an interface into a more objective, quantitative understanding of density?

Design density

You might have already seen the first challenge in defining density in terms of design decisions: what counts as a design decision? In UI, UX, and product design, we make many decisions, consciously and subconsciously, in order to communicate information and ideas.
But why do those particular choices convey the meaning that they do? Which ones are superfluous or simply aesthetic, and which are actually doing the heavy lifting?

These questions sparked 20th-century German psychologists to explore how humans understand and interpret shapes and patterns. They called this field “gestalt,” which in German means “form.” In the course of their exploration, Gestalt psychologists described principles that explain how some things appear orderly, symmetrical, or simple, while others do not. While these psychologists weren’t designers, in some sense, they discovered the fundamental laws of design:

- Proximity: we perceive things that are close together as comprising a single group
- Similarity: objects that are similar in shape, size, color, or in other ways appear related to one another
- Closure: our minds fill in gaps in designs so that we tend to see whole shapes, even if there are none
- Symmetry: if we see shapes that are symmetrical to each other, we perceive them as a group formed around a center point
- Common fate: when objects move, we mentally group the ones that move in the same way
- Continuity: we can perceive objects as separate even when they overlap
- Past experience: we recognize familiar shapes and patterns even in unfamiliar contexts. Our expectations are based on what we’ve learned from our past experience of those shapes and patterns
- Figure-ground relationship: we interpret what we see in a three-dimensional way, allowing even flat 2D images to have foreground and background elements

Examples of the principles of proximity (left), similarity (center), and closure (right).

Gestalt principles explain why UI design goes beyond the pixels on the screen. For example:

- Because of the principle of similarity, users will understand that text with the same size, font, and color serves the same purpose in the interface.
- The principle of proximity explains why, when a chart is close to a headline, it’s apparent that the headline refers to the chart. For the same reasons, a tightly packed grid of elements will look related, and separate from a menu above it that’s set off by ample space.
- Thanks to our past experience with switches, combined with the figure-ground principle, a skeuomorphic design for a toggle switch will make it obvious to a user how to instantly turn on a feature.

So, instead of focusing on the pixels, we think of design decisions as how we intentionally use gestalt principles to communicate meaning. And like Tufte’s data-ink ratio compares the strictly necessary ink to the total ink used to print a chart, we can calculate a gestalt ratio which compares the strictly necessary design decisions to the total decisions used in a design. This is design density.
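In the same form as Tufte’s data-ink ratio:

$$\text{gestalt ratio} = \frac{\text{strictly necessary design decisions}}{\text{total design decisions}}$$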
Four different treatments of the same information, using different types and amounts of gestalt principles. Which is the most dense?

This is still subjective: a design decision that seems necessary to some might be superfluous to others. Our biases will skew our assessment, whether they’re personal tastes or cultural norms. But when it comes to user interfaces, counting design decisions is much more useful than counting the amount of data or “ink” alone.

Design density isn’t perfect. User interfaces exist to do work, to have fun, to waste time, to create understanding, to facilitate personal connections, and more. Those things require the user to take one or more actions, and so density needs to look beyond components, layouts, and screens. Density should comprise all the actions a user takes in their journey — it should count in space and time.

Density in time

Just like the amount of stuff in a given space dictates visual density, the amount of things a user can do in a given amount of time dictates temporal — time-wise — density.

Loading times are the biggest factor in temporal density. The faster the interface responds to actions and loads new pages or screens, the more dense the UI is. And unlike 2-dimensional whitespace, there’s almost no lower limit to the space needed between moments in time.

Bloomberg’s Terminal loads screens full of data instantaneously

With today’s bloated software, making a UI more dense in time is more impactful than just squeezing more stuff onto each screen. That’s why Bloomberg’s Terminal is still such a dominant tool in the financial analysis space; it loads data almost instantaneously. A skilled Terminal user can navigate between dozens of charts and graphs in milliseconds. There are plenty of ways to cram tons of financial data into a table, but loading it with no latency is Terminal’s real superpower.

But say you’ve squeezed every second out of the loading times of your app. What next? There are some things that just can’t be sped up: you can’t change a user’s internet connection speed, or the computing speed of their CPU. Some operations, like uploading a file, waiting for a customer support response, or processing a payment, involve complex systems with unpredictable variables. In these cases, instead of changing the amount of time between tasks, you can change the perception of that time:

Actions less than 100 milliseconds apart will feel simultaneous. If you tap on an icon and, 100ms later, a menu appears, it feels like no time at all passed between the two actions. So, if there’s an animation between the two actions — the menu slides in, for example — the illusion of simultaneity might be broken. For the smallest temporal spaces, animations and transitions can make the app feel slower.[3]

Between 100 milliseconds and 1 second, the connection between two actions is broken. If you tap on a link and there’s no change for a second, doubt creeps in: did you actually tap on anything? Is the app broken? Is your internet working? Animations and transitions can bridge this perceptual gap. Visual cues in these spaces make the UI feel more dense in time.

Gaps between 1 and 10 seconds can’t be bridged with animations alone; research[4] shows that users are most likely to abandon a page within the first 10 seconds. This means that if two actions are far enough apart, a user will leave the page instead of waiting for the second action. If you can’t decrease the time between these actions, show an indeterminate loading indicator — a small animation that tells the user that the system is operating normally.

Gaps between 10 seconds and 1 minute are even harder to fill. After seeing an indeterminate loader for more than 10 seconds, a user is likely to see it as static, not dynamic, and start to assume that the page isn’t working as expected. Instead, you can use a determinate loading indicator — like a larger progress bar — that clearly indicates how much time is left until the next action happens.
In fact, the right design can make the waiting time seem shorter than it actually is; the backwards-moving stripes that featured prominently in Apple’s “Aqua” design system made waiting times seem 11% shorter.[5]

For gaps longer than 1 minute, it’s best to let the user leave the page (or otherwise do something else), then notify them when the next action has occurred. Blocking someone from doing anything useful for longer than a minute creates frustration. Plus, long, complex processes are also susceptible to error, which can compound the frustration.
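Sketched as code, these thresholds might turn into a small helper like the one below. The breakpoints come from the research cited above, but the function and its return values are hypothetical, not from any particular library:

```javascript
// A rough sketch of the perceptual thresholds above as a decision helper.
function feedbackForExpectedWait(ms) {
  if (ms < 100) return "none";                   // feels simultaneous; skip the animation
  if (ms < 1000) return "transition";            // bridge the gap with an animation
  if (ms < 10000) return "indeterminate-loader"; // show that the system is working
  if (ms < 60000) return "determinate-loader";   // show how much time is left
  return "notify-when-done";                     // let the user leave, then notify them
}
```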
In the end, though, making a UI dense in time and space is just a means to an end. No UI is valuable because of the way it looks. Interfaces are valuable in the outcomes they enable — whether directly associated with some dollar value, in the case of business software, or tied to some intangible value like entertainment or education. So what is density really about, then? It’s about providing the highest value outcomes in the smallest amount of time, space, pixels, and ink.

Density in value

Here’s an example of how value density is manifested: a common suggestion for any form-based interface is to break long forms into smaller chunks, then put those chunks together in a wizard-type interface that saves your progress as you go. That’s because there’s no value in a partly-filled-in form; putting all the questions on a single page might look more visually dense, but if it takes longer to fill out, many users won’t submit it at all.

This form is broken up into multiple parts, with clear errors and instructions for resolution.

Making it possible for users to get to the end of a form with fewer errors might require the design to take up more space. It might require more steps, and take more time. But if the tradeoffs in visual and temporal density make the outcome more valuable — either by increasing submission rate or making the effort more worth the user’s time — then we’ve increased the overall value density. Likewise, if we can increase the visual and temporal density by making the form more compact, load faster, and less error-prone, without subtracting value for the user or the business, then that’s an overall increase in density. Channeling Tufte, we should try to increase value density as much as possible.

Solving this optimization problem can have some counterintuitive results. When the internet was young, companies like Craigslist created value density by aggregating and curating information and displaying it in pages of links. Companies like Yahoo and Altavista made it possible to search for that information, but still put aggregation at the fore. Google took a radically different approach: use information gleaned from the internet’s long chains of linked lists to power a search box. Information was aggregating itself; a single text input was all users needed to access the entire web.

Google and Yahoo's approach to data, design, and value density hasn't changed from 2001 (when the first screenshots were archived) to 2024 (when the second set of screenshots were taken). The value of the two companies' stocks reflects the result of these differing approaches.

The UI was much less visually dense, but more value-dense by orders of magnitude. The results speak for themselves: Google went from a $23B valuation in 2004 to being worth over $2T today — closing in on a 100x increase. Yahoo went from being worth $125B in 2000 to being sold for $4.8B — less than 3% of its peak value.[6]

Conclusion

Designing for UI density goes beyond the visual aspects of an interface. It includes all the implicit and explicit design decisions we make, and all the information we choose to show on the screen. It includes all the time and actions a user takes to get something valuable out of the software.

So, finally, a concrete definition of UI density:

UI density is the value a user gets from the interface divided by the time and space the interface occupies.

Speed, usability, consistency, predictability, information richness, and functionality all play an important role in this equation. By taking account of all these aspects, we can understand why some interfaces succeed and others fail. And by designing for density, we can help people get more value out of the software we build.

Footnotes & References

1. This is a very unscientific statement based on a poll of 20 of my coworkers. Repeatability is questionable. ↩︎
2. The provenance of the chart is interesting. Not much is known about the original designer, Charles Ibry; but what we do know points to even earlier iterations of the design. If you’re interested, read Sandra Rendgen’s fascinating history of the train schedule. ↩︎
3. I have no scientific backing for this claim, but I believe it’s because a typical blink occurs in 100ms. When we blink, our brains fill in the gap with the last thing we saw, so we don’t notice the blink. That’s why we don’t notice the gap between two actions that are less than 100ms apart. You can read more about this effect here: Visual Perception: Saccadic Omission — Suppression or Temporal Masking? ↩︎
4. Nielsen, Jakob. “How Long Do Users Stay on Web Pages?” Nielsen Norman Group, 11 Sept. 2011, https://www.nngroup.com/articles/how-long-do-users-stay-on-web-pages/ ↩︎
5. Harrison, Chris, Zhiquan Yeo, and Scott E. Hudson. “Faster Progress Bars: Manipulating Perceived Duration with Visual Augmentations.” Carnegie Mellon University, 2010, https://www.chrisharrison.net/projects/progressbars2/ProgressBarsHarrison.pdf ↩︎
6. HackerNews has pointed out that this is a ridiculous statement. And it is. Of course, value density isn’t the only reason why Google succeeded where Yahoo failed. But as a reflection of how each company thought about their products, it was a good leading indicator. ↩︎

The polish paradox

Polish is a word that gets thrown around in conversations about craft, quality, and beauty. We talk about it at the end of the design process, before the work goes out the door: let’s polish this up. Let’s do a polish sprint. Could this use more polish?

https://twitter.com/svlleyy/status/1780215102064452068

A tweet (xeet?) on my timeline asked: “what does polish in an app mean? fancy animations? clear consistent design patterns? hierarchy and colour? all the above?”

I thought about it for a moment and got a familiar itch in the back of my brain. It’s a feeling that I associate with a zen kōan that goes (paraphrased): a monk asked a zen master, “Does a dog have Buddha-nature?” The master answered “無.”

無 (pronounced “wú” in Mandarin or “mu” in Japanese) literally translates to “not,” as in “I have not done my chores today.” It’s a negation of something, and in the kōan’s case, it’s the master’s way of saying — paradoxically — that there’s no point in answering the question.

In the case of the tweet, my 無-sense was tingling as I wrote a response: polish is something only the person who creates it will notice. It’s a paradox; polishing something makes it invisible. Which also means that pointing out examples of polish almost defeats the purpose. But in the spirit of learning, here are a few things that come to mind when I think of polish:

Note the direction the screws are facing. Photo by Chris Campbell, CC BY-NC 2.0 DEED

Next time you flip a wall switch or plug something into an outlet, take a second and look at the two screws holding the face plate down. Which direction are the slots in the screws facing? Professional electricians will (almost) always line the screw slots up vertically. This has no functional purpose, and isn’t determined by the hardware itself; the person who put the plate on had to make a conscious decision to do it.

Julian Baumgartner’s art restoration videos always include a note about his process for repairing or rebuilding the frame that the canvas is stretched over. When he puts the keys back into the frame to create extra tension, he attaches some fishing wire, wound around a tack, and threaded through each key; this, he says, “ensures the keys will never be lost.” How many of these details lie hidden in the backs of the paintings hung on the walls of the world’s most famous museums and galleries?

A traditional go board, with a 15:14 aspect ratio.

A traditional go board isn’t square. It’s very slightly longer than it is wide, with a 15:14 aspect ratio. This accounts for the optical foreshortening that happens when looking across the board. For similar reasons, traditionally, black go stones are slightly larger than white ones, as equal-sized stones would look unequal when seen next to each other on the board.

The same subtle adjustments go into the shape of letters in a typeface: round letters like ‘e’ and ‘a’ are slightly taller than square letters like ‘x’ or ‘v’. The crossbars of the x don’t usually line up perfectly, either.

The success of these demonstrations of polish is dictated by just how hard they are to see. So how should polish manifest in product design?

One example is in UI animation. It is tempting to put transitions and animations on every component in the interface; when done right, an animated UI feels responsive and pleasant to use. But the polish required to reach that point of being “intuitive” or “natural” is immense:

- Animations should happen fast enough to be perceived as instantaneous. The threshold for this is commonly cited at 100ms; anything happening faster than this is indistinguishable from something happening right away.
- The speed of the animation has to be tuned to accelerate or decelerate at precise rates depending on how far the element is moving and what kind of transition is taking place. Changing a popover from the default linear animation to an ease-out curve will make it seem more natural.
- Often an animation should be faster or slower depending on whether it’s an “in” or “out” animation; a faster animation at the start of an interaction makes the interface feel snappy and responsive. A slower animation at the end of an interaction helps a user stay oriented to the result of their actions.
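To make that tuning concrete, here is a rough sketch (mine, not a canonical recipe) using the Web Animations API, pairing a fast ease-out entrance with a slower exit; the durations and curves are illustrative, not prescriptive:

```javascript
// A sketch of asymmetric in/out timing for a hypothetical menu element.
const menu = document.querySelector(".menu");

function showMenu() {
  menu.hidden = false;
  // Fast "in" animation with an ease-out curve: feels snappy and responsive.
  menu.animate(
    [
      { opacity: 0, transform: "translateY(-4px)" },
      { opacity: 1, transform: "translateY(0)" },
    ],
    { duration: 120, easing: "ease-out", fill: "forwards" }
  );
}

function hideMenu() {
  // Slower "out" animation: helps the user stay oriented to the result.
  const exit = menu.animate([{ opacity: 1 }, { opacity: 0 }], {
    duration: 200,
    easing: "ease-in",
    fill: "forwards",
  });
  exit.onfinish = () => {
    menu.hidden = true;
  };
}
```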
Another example is in anticipating the user’s intent. A reactive UI should be constantly responding to a user’s input, with no lag between clicks and hovers and visual, audible, or tactile feedback. But with some interaction patterns, responding too quickly can make the interface feel twitchy or delicate. For this reason, nested dropdown menus often have invisible bridges connecting your cursor and the menu associated with what you’ve selected. This allows you to smoothly move to the next item, without the sub-menu disappearing. These bridges are invisible, but drawing them accurately requires pixel precision nonetheless.

An example of Amazon’s mega dropdown menu, with invisible bridges connecting the top-level menu to the sub-menu. Image credit: Ben Kamens

You benefit from this kind of anticipatory design every day. While designing the original iPhone’s keyboard, Ken Kocienda explored new form factors that took advantage of the unique properties of the phone’s touch screen. But breaking away from the familiarity of a QWERTY keyboard proved challenging; users had a hard time learning new formats. Instead, Kocienda had the keyboard’s touch targets invisibly adjust based on what is being typed, preventing users from making errors in the first place. The exact coordinates of each tap on the screen are adjusted, too, to account for the fact that we can’t see what’s underneath our fingers while typing.

Early prototypes of the iPhone keyboard sacrificed familiarity in order to make the touchscreen interaction more finger-friendly. Images from Ken Kocienda's Creative Selection, via Commoncog Case Library

The iPhone’s keyboard was one of the most crucial components to the success of such a risky innovation. Polish wasn’t a nice-to-have; it was the linchpin. The final design of the keyboard used a familiar QWERTY layout and hid all the complexity of the touch targets and error correction behind the scenes.

Image from Apple’s getting started series on the original iPhone. Retrieved from the Internet Archive

The polish paradox is that the highest degrees of craft and quality are in the spaces we can’t see, the places we don’t necessarily look. Polish can’t be an afterthought. It must be an integral part of the process, a commitment to excellence from the beginning. The unseen effort to perfect every hidden aspect elevates products from good to great.

How to generate color palettes for design systems

It used to be easy to pick colors for design systems. Years ago, you could pick a handful of colors to match your brand’s ethos, or start with an off-the-shelf palette (remember flatuicolors.com?). Each hue and shade served a purpose, and usually had a quirky name like “idea yellow” or “innovation blue”. This hands-on approach allowed for control and creativity, resulting in color schemes that could convey any mood or style.

But as design systems have grown to keep up with ever-expanding software needs, the demands on color palettes have grown exponentially too. Modern software needs accessibility, adaptability, and consistency across dozens of devices, themes, and contexts. Picking colors by hand is practically impossible.

This is a familiar problem to the Stripe design team. In “Designing accessible color systems,” Daryl Koopersmith and Wilson Miner presented Stripe’s approach: using perceptually uniform color spaces to create aesthetically pleasing and accessible systems. Their method, grounded in a scientific understanding of human vision, offered a new approach to color selection that enhances beauty and usability.

In the four years since that post, Stripe has stretched those colors to the limit. The design system’s resilience through massive growth is a testament to the team’s original approach, but last year we started to see the need for a more flexible, scalable, and inclusive color system. This meant both an expansion of our color palette and a rethinking of how we generate and apply these colors to accommodate our still-growing products.

This essay will take you through my attempts to solve these problems. Through this process, I’ve created a tool for generating expressive, functional, and accessible color systems for any design system. I’ll share the full code of my solution at the end of the essay; it represents not just a technical solution but a philosophical shift in how we think about color in design systems, emphasizing the balance between creativity and inclusivity.

Contents:

- Why don’t the existing tools work?
- So what makes a good color palette?
- Through the looking glass: perceptual uniformity
  - Picking the right color space
  - Using OKHsl
- First steps with generated scales
- Scaling up
- Making scales expressive: Leveraging hue and saturation
  - Hue
  - Saturation and Chroma
- In practice: Crafting colors with functions
  - Pick base hues
  - Add functions for hue, saturation, and lightness
  - Calculate the colors for each scale number
- Making scales adaptive: Using background color as an input
- Making scales accessible: Building in the WCAG contrast calculation
  - Step 1: Calculate a target contrast ratio based on scale step
  - Step 2: Calculate lightness based on a target contrast ratio
  - Step 3: Translate from XYZ Y to OKHsl L
- Putting it all together: All the code you need
- What does it look like in practice?
- What we’ve learned and where we’re going

Why don’t the existing tools work?

In the past few years, I’ve come across dozens of tools that promise to generate color palettes for design systems. Some are simple, like Adobe Color, which generates color palettes based on a single input color, or even an image. Others are more complex, like Colorbox, which generates color scales based on a long list of parameters, easing curves, and input hues. But I’ve found that each of these tools has critical limitations.

Complex tools like Colorbox or color x color allow for a high degree of customization, but they require a lot of manual input and don’t provide guidelines for accessibility.
Simple tools like Adobe’s Color and Leonardo provide more constraints and accessibility features, but they do so at the expense of flexibility and extensibility.

None of the tools I’ve found can integrate tightly with an existing design system; all are simply apps that generate an initial set of colors. None can respond to the unique constraints of your design system, or adapt as you add more themes, modes, or components. That’s why I ended up going back to first principles, and decided to build up a framework that can be adapted to any codebase, design tool, or end user interface.

So what makes a good color palette?

To build palettes from first principles, we need a strong conceptual foundation. A great color palette is like a Swiss Army knife, built to address a wide array of needs. But that same flexibility can make the system unwieldy and clunky. Through years of working on design systems, two principles have emerged as a constant benchmark for quality color palettes: utility and consistency.

A color palette with high utility is vital for a robust design system, encompassing both adaptability and functionality. It should offer a wide array of shades and hues to cater to diverse use cases, such as status changes — reds for errors, greens for successes, and yellows for warnings — and interaction states like hovering, disabled, or active selections. It’s also essential for signifying actionable items like links and buttons. Beyond functionality, an adaptable palette enables smooth transitions between light, dark, and high contrast modes, supporting the evolution of your product and differing brand expressions. This ensures that your user interfaces remain consistent and recognizable across various platforms and usage contexts. Moreover, an adaptable palette underscores a commitment to accessibility — it should provide accessible contrast ratios across all components, accommodating users with visual impairments, and offer high-contrast modes that enhance visibility and functionality without sacrificing style.

Consistency is another crucial aspect of a well-designed color palette. Despite the diverse range of components and their variants, a consistent palette maintains a coherent visual language throughout the system. This coherence ensures that elements like badges retain a consistent visual weight, avoiding misleading emphasis, and that the relative contrast of components remains balanced between dark and light modes. This consistency helps preserve clarity and hierarchy, further enhancing the user experience and the overall aesthetics of the design system.

As you’ll see, even simple questions about these goals reveal a deep rabbit hole of possible solutions.

Through the looking glass: perceptual uniformity

The principles of utility and consistency make selecting a color palette more complex. There’s a question at the heart of both constraints: what makes two colors look different? We have an intuitive sense that yellow and orange are more similar than green and blue, but can we prove it objectively? Scientists and artists have spent the last two centuries puzzling this out, and their answer is the concept of perceptual uniformity.

Perceptual uniformity is rooted in how our eyes work. Humans see colors because of the interaction between wavelengths of light and cells in our eyes. In 1850, before we could look at cells under a microscope, scientist Hermann von Helmholtz theorized that there were three color vision cells (now known as cones) for blue, green, and red light.
Thomas Young and Hermann von Helmholtz assumed that the eye’s retina consists of three different kinds of light receptors for red, green and blue. Public Domain via Wikipedia

Most modern screens depend on this century-old theory, mixing red, green, and blue light to produce colors. Every combination of these colors produces a distinct one; 10% red, 50% green, and 25% blue light create the cartoon green of The Simpsons’ yard. 75% red, 85% green, and 95% blue is the blindingly pale blue of the snow in Game of Thrones.

Von Helmholtz was amazingly close to the truth, but until 1983, we didn’t have a full understanding of the exact way that each cell in our eyes responds to light. While it’s true that we have three kinds of color vision cells, and that each responds strongly to either red, green, or blue light, the full mechanism of color vision is much more nuanced. So, while it’s technologically simple to mix red, green, and blue lights to reproduce color, the red, green, and blue coordinate system — the RGB color space — isn’t perceptually uniform.

Picking the right color space

Despite not being perceptually uniform, many design systems still use the RGB color space (and its derivative, HSL space) for picking colors. But over the past century, scientists and artists have invented more useful ways to map the landscape of color. Whether it’s capturing skin tones accurately in photographs or creating smooth gradients for data visualization, these different color spaces give us perceptually uniform paths through a gamut.

Lab is an example of a perceptually uniform color space. Developed by the International Commission on Illumination, or CIE, the Lab color space is designed to be device-independent, encompassing all perceivable colors. Its three dimensions depict lightness (L) and color opponents (a and b) — the latter two varying between green-red and blue-yellow axes respectively. This makes it useful for measuring the differences between colors. However, it’s not very intuitive; for example, unless you’ve spent a lot of time working with the Lab color space, it’s probably hard to imagine what a pair of (a, b) values like (70, -15) represents.[1]

LCh (Luminosity, Chroma, hue) is a more ergonomic, but still perceptually uniform, color space. It’s a cylindrical color space, which means that along the hue axis, colors change from red to blue to green, and then back to red — like traveling on a roundabout. Along the way, each color appears equally bright and colorful. Moving along the luminosity axis, a color appears brighter or dimmer but equally colorful, like adjusting a flashlight’s distance from a painted wall. Along the chroma axis, a color stays equally bright but looks more or less colorful, like it’s being mixed with different amounts of gray paint.

The LCh color space. Note the uneven peaks of chroma at different hues. via Hueplot

LCh trades off some of Lab’s functionality for being more intuitive. But LCh can be clunky, too, because the C (chroma) axis starts at 0 and doesn’t have a strict upper limit. Chroma is meant to be a relative measure of a color’s “colorfulness”. Some colors are brighter and more colorful than others: is a light aqua blue as colorful as a neon lime green? How does a brick red compare to a grape soda purple? The chroma scale is meant to make these comparisons possible. But try for a moment to imagine a sea green as rich and deep as an ultraviolet blue. Lab and LCh both let you specify these “impossible” colors that don’t have a real-world representation.
In technical parlance, they’re called “out of gamut,” since they can’t be produced by screens, or seen by human eyes. The existence of out-of-gamut colors makes it hard to reliably build a color system in LCh or Lab color space. Finding colors with consistent properties is a manual process; when Stripe was building its previous color system using Lab, the team made a specialized tool for visualizing the boundaries of possible colors, allowing designers to tweak each shade to maximize its saturation.

This isn’t a tenable solution for most teams; what if there was a color space that combined the simplicity of RGB and HSL with the perceptual uniformity of Lab and LCh? Björn Ottosson, creator of the OKLab color space, did just that in his blog post “OKHsv and OKHsl — two new color spaces for color picking.”

OKHsl is similar to LCh in that it has three components, one for hue, one for colorfulness, and one for lightness. Like LCh, the hue axis is a circle with 360 degrees. The lightness axis is similar to LCh’s luminosity, going from 0 to 1 for every hue. In place of LCh’s chroma channel, though, OKHsl uses an absolute saturation axis that goes from 0 to 1 for every hue, at every lightness. 0 represents the least saturated color (grays ranging from white to black), and 1 represents the most saturated color available in the sRGB gamut.

The OKHsl color space. It’s a cylinder, which makes it much better for generating color palettes. via Hueplot

Practically, OKHsl allows for easier color selection and manipulation. It bypasses the issues found in LCh or Lab, creating an intuitive, straightforward, and user-friendly system that can produce the desired colors without worrying about out-of-gamut colors. That’s why it’s the best space for generating color palettes for design systems.

Using OKHsl

Practically speaking, to use OKHsl, you need to be able to convert colors to and from sRGB. This is a fairly straightforward calculation, but it’s not built into most design tools. Björn Ottosson linked the JavaScript code to do this conversion in his blog post, and the library colorjs.io will soon have support for OKHsl. Going forward, I’ll assume you have a way to convert colors to and from OKHsl. If you don’t, you can use the code I’ve written to generate a color palette in OKHsl, and then convert it to sRGB for use in your design system.

First steps with generated scales

To get started generating our color scales, we need a few values:

- The hue of the color we want to generate
- The saturation of the color we want to generate
- A list of lightness values we want to generate

For example, we can generate a cool neutral color scale by choosing these values:

- Hue: 250
- Saturation: 5
- Lightness values: Light: 85, Medium: 50, Dark: 15

Using those values to pick colors in the OKHsl color space, we get the following palette:

| Neutral (OKHsl) | sRGB Hex |
| --- | --- |
| 250, 5, 85 | #d2d5d8 |
| 250, 5, 50 | #73787c |
| 250, 5, 15 | #212325 |

We can do the same thing for all our colors, picking numbers to build out the entire system.

| OKHsl | sRGB Hex | OKHsl | sRGB Hex |
| --- | --- | --- | --- |
| 250, 5, 85 | #d2d5d8 | 250, 90, 85 | #b6d9fd |
| 250, 5, 50 | #73787c | 250, 90, 50 | #1a7acb |
| 250, 5, 15 | #252628 | 250, 90, 15 | #022342 |

| OKHsl | sRGB Hex | OKHsl | sRGB Hex | OKHsl | sRGB Hex |
| --- | --- | --- | --- | --- | --- |
| 145, 90, 85 | #6af778 | 20, 90, 85 | #fec3ca | 100, 90, 85 | #eed63d |
| 145, 90, 50 | #388b3f | 20, 90, 50 | #d32d43 | 100, 90, 50 | #877814 |
| 145, 90, 15 | #0c2a0e | 20, 90, 15 | #45060f | 100, 90, 15 | #282302 |
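As a minimal sketch of those scales in code: convertOkhslToHex below is a hypothetical stand-in for whatever OKHsl-to-sRGB conversion you have available (Ottosson’s code or a library like culori), and note that most libraries expect saturation and lightness as 0–1 values rather than percentages:

```javascript
// A sketch of the three-shade scales above. convertOkhslToHex is a placeholder
// for your OKHsl-to-sRGB conversion; saturation and lightness are 0-1 here.
const lightnessValues = { light: 0.85, medium: 0.5, dark: 0.15 };

const makeScale = (hue, saturation) => {
  const scale = {};
  for (const [name, lightness] of Object.entries(lightnessValues)) {
    scale[name] = convertOkhslToHex({ h: hue, s: saturation, l: lightness });
  }
  return scale;
};

const neutral = makeScale(250, 0.05); // ≈ { light: "#d2d5d8", medium: "#73787c", dark: "#212325" }
const blue = makeScale(250, 0.9);
```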
Scaling up

For bigger projects, you’ll often need more than just three shades per color. Choosing the right number can be tricky: too few shades limit your options, but too many can cause confusion. This can seem daunting, particularly in the early stages of your design system. But there’s a method to simplify this: use a consistent numbering system, ensuring your color choices remain versatile no matter how your system evolves.

This system is often referred to as ‘magic numbers.’ If you’re familiar with Tailwind CSS or Material Design, you’ve seen this in action. Instead of naming shades like ‘light’ or ‘dark,’ each shade gets a number. For instance, in Tailwind, the scale goes from 0 to 1,000, and in Material Design, it’s 0 to 100. The extremes often correspond to near-white or near-black, with middle numbers denoting pure hues.

The beauty of this system is its flexibility. If you initially use shades named ‘red 500’ and ‘red 700’, and later need something in between, you can simply introduce ‘red 600’. This keeps your design adaptable and intuitive. Another bonus of magic numbers is that we can often plug the number directly into a color picker to scale the lightness of the shade. That’s why, for the rest of this essay, I’ll call these scale numbers.

For example, if we wanted to create a more extensive color scale for our blues, we could use the following values in the OKHsl color space:

| Scale Number | Blue (OKHsl) | sRGB Hex |
| --- | --- | --- |
| 0 | 250, 90, 100 | #ffffff |
| 10 | 250, 90, 90 | #cfe5fe |
| 20 | 250, 90, 80 | #9dccfd |
| 30 | 250, 90, 70 | #68b1f9 |
| 40 | 250, 90, 60 | #3395ed |
| 50 | 250, 90, 50 | #1b7acb |
| 60 | 250, 90, 40 | #0f60a3 |
| 70 | 250, 90, 30 | #08477c |
| 80 | 250, 90, 20 | #032f55 |
| 90 | 250, 90, 10 | #01172e |
| 100 | 250, 90, 0 | #000000 |

We’ve turned the scale number into the lightness value with the function $$L(n) = 1 - n$$. In this formula, n is a normalized value — one that goes from 0 to 1 — that represents our scale number, and L(n) is the lightness value in OKHsl. It turns out that using functions and formulas in combination with scale numbers is a powerful way to create expressive color scales that can power beautiful design systems.
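In code, that first function is a one-liner (a sketch; the helper name is mine):

```javascript
// L(n) = 1 - n, with n the scale number normalized to 0-1.
// e.g. blue 30 on a 0-100 scale → n = 0.3 → lightness 0.7, matching the table.
const lightnessForScaleNumber = (scaleNumber, maxScaleNumber) =>
  1 - scaleNumber / maxScaleNumber;
```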
Making scales expressive: Leveraging hue and saturation

One advantage of scale numbers is the ability to plug them directly into a color picker to dictate the lightness of a shade. But scale numbers really show their usefulness and versatility when you leverage them across every component of your color system. That means venturing beyond lightness to explore hue and saturation, too.

Hue

When using scale numbers to control lightness, it’s easy to assume hue and saturation will behave consistently across the lightness range. However, our perception of color doesn’t work that simply. Hue can appear to shift dramatically between light and dark shades of the same color due to a phenomenon called the Bezold–Brücke effect: colors tend to look more purplish in shadows and more yellowish in highlights.

So if we want to maintain consistent hue perception, we can use scale numbers for adapting the hues of our color scales. As lightness decreases, blues and reds should shift slightly towards more violet/purple tones to counteract the Bezold–Brücke effect. Likewise, as lightness increases, yellows, oranges, and reds should shift towards more yellowish hues.[2][3]

Purple without (top) and with (bottom) accounting for the Bezold–Brücke shift

Red without (top) and with (bottom) accounting for the Bezold–Brücke shift

In both examples above, we’ve used the scale number to shift the hue slightly as the lightness increases. This looks like the following formula: $$H(n) = H_{base} + 5(1 - n)$$

H(n) is the hue at a given normalized scale value; H_base is the “base hue” of the color. The 5(1 - n) term means the hue will change by 5 degrees as the scale number goes from one end to the other. If you’re using this formula, you should tweak the numbers to your liking. By making hue a function of lightness, with the scale number adjusting hue accordingly, hues look more consistent and harmonious across the entire scale. The shifts don’t need to be large – even subtle hue variations of a few degrees can perceptually compensate for natural hue shifts with changing brightness.

Saturation and Chroma

From our understanding of the CIE LCh color space and its sibling, the OKHsl color space, we know that colors generally attain their peak chroma around the middle of the lightness scale.[4] In design, this presents a fantastic opportunity. By designing our color scales such that the midpoint is the most chromatically rich, we can make sure that our colors are the most vibrant and saturated where it matters most. Conversely, as we veer towards the lightness extremes, we can have chroma values that taper off, ensuring that our brightest and darkest shades remain subtle and balanced.

OKHsl gives us a saturation component that goes from 0% to 100% of the possible chroma at a given hue and lightness value. We can take advantage of this by using the normalized scale number as an input to a function that goes from a minimum saturation to a maximum and back again.

Green with constant saturation (top) and varying saturation (bottom)

In practice, the formula for achieving this looks like this: $$S(n) = -4n^2 + 4n$$, where S(n) is the saturation at a given (normalized, as before) scale value n. The formula is an upside-down parabola, which starts at 0% and peaks at 100% when the scale value is 0.5. You can add a few terms to adjust the minimum and maximum saturation if you’d like to adjust the scale further: neutrals, for example, don’t need a high maximum saturation. But most colors do well moving between 0% and 100% saturation.
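Here is a quick sanity check of both formulas (a throwaway sketch, separate from the final library below), using blue’s base hue of 250 and saturation on a 0–1 scale in place of 0%–100%:

```javascript
// H(n) = H_base + 5(1 - n); S(n) = -4n^2 + 4n
const H = (n, baseHue) => baseHue + 5 * (1 - n);
const S = (n) => -4 * n * n + 4 * n;

for (const n of [0.1, 0.5, 0.9]) {
  console.log(n, H(n, 250), S(n).toFixed(2));
}
// 0.1 → hue 254.5, saturation 0.36
// 0.5 → hue 252.5, saturation 1.00
// 0.9 → hue 250.5, saturation 0.36
```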
In practice: Crafting colors with functions

Let’s put this into practice and generate an extensive color scale with only a handful of functions. Functions allow us to build a flexible framework that is resilient to change and can be easily adapted to new requirements; if we need to add colors, tweak hues, or adjust saturation, we can do so without rewriting the entire system.

Pick base hues

First, let’s pick a handful of base hues. At the very least, you’ll need a blue for interactive elements like links and buttons, and green, red, and yellow for statuses. Your neutrals need a hue, too; though it won’t show up much, a cool neutral and a warm neutral have very different effects on the overall system.

| | Neutral | Blue | Green | Red | Yellow |
| --- | --- | --- | --- | --- | --- |
| Base hue (H_base) | 250 | 250 | 145 | 20 | 70 |

Add functions for hue, saturation, and lightness

Next, let’s use the functions we came up with earlier to indicate how the colors should change depending on scale numbers.

| | Neutral | Blue, Green, Red, Yellow |
| --- | --- | --- |
| Hue function | $$H(n) = 250$$ | $$H(n) = H_{base} + 5(1 - n)$$ |
| Saturation function | $$S(n) = -0.8n^2 + 0.8n$$ | $$S(n) = -4n^2 + 4n$$ |
| Lightness function | $$L(n) = 1 - n$$ | $$L(n) = 1 - n$$ |

The hue function is a constant for neutrals, and for the colors we use the function that accounts for the Bezold–Brücke shift. As for saturation, the neutral colors have a maximum saturation of 20% instead of the full 100%; the rest of the colors use the function that goes from 0% to 100% and back. The lightness function is the same for all colors.

Calculate the colors for each scale number

Now let’s let the math work its magic. For each scale number, for every color, we have all the information we need from the base hue, hue function, saturation function, and lightness function. All values are sRGB hex:

| Scale Number | Neutral | Blue | Green | Red | Yellow |
| --- | --- | --- | --- | --- | --- |
| 0 | #ffffff | #ffffff | #ffffff | #ffffff | #ffffff |
| 10 | #e0e3e6 | #dae4f0 | #d8e8d4 | #f9d1d6 | #f7e9a3 |
| 20 | #bfc8d1 | #aacaf1 | #9adb90 | #f1b5b7 | #ebbe83 |
| 30 | #9fadbd | #73aff6 | #67c55b | #f7838c | #e09c34 |
| 40 | #8193a6 | #2e92f9 | #39ac30 | #fa405e | #c3810a |
| 50 | #67798c | #0077d8 | #009100 | #dd0042 | #a26900 |
| 60 | #506070 | #065faa | #227021 | #ae0f33 | #815304 |
| 70 | #3c4752 | #0e477c | #255125 | #7e1a28 | #5f3e0b |
| 80 | #292f35 | #12304d | #1c351c | #4e1b1e | #3e290f |
| 90 | #141619 | #0d1722 | #101910 | #221111 | #1d150b |
| 100 | #000000 | #000000 | #000000 | #000000 | #000000 |

Of course, this palette is fairly basic and might not be optimal for your needs. But using formulas and functions to calculate colors from scale numbers has a powerful advantage over manually picking each color; you can make tweaks to the formulas themselves and instantly see the entire palette adapt.

Making scales adaptive: Using background color as an input

Today, color modes like dark mode and high-contrast accessibility mode are table stakes in design systems. So, if you’re picking colors manually, you have to pick an additional 50 colors for each mode, carefully balancing the unique perception of color against each different background. However, with the functions-and-formulas approach to picking colors, we can abstract a color palette to respond to any background color we might want to apply.

Let’s go back to the lightness formula we used in the previous palettes: $$L(n) = 1 - n$$

Using this formula, the lightness will decrease as the scale number increases. In dark mode, we want the opposite: lightness should increase as the scale number increases. We can use a more detailed formula to switch the direction of our scale if the lightness of a background color is less than a specific value:

$$L(n) = \begin{cases} 1 - n & \text{if } Y_b > 0.18 \\ n & \text{if } Y_b \le 0.18 \end{cases}$$

The Y_b in this equation is the background color’s Y value in the XYZ color space. As I explained at the beginning of this essay, color spaces are different ways of mapping all the colors in a gamut; XYZ is an extremely precise and comprehensive color space. While the X and Z components don’t map neatly to phenomenological aspects of a color (like a and b in the Lab color space), the Y component represents the luminosity of a color.

You may be wondering why we’re using another color space (in addition to OKHsl) to dictate lightness. This is because the WCAG (Web Content Accessibility Guidelines) color contrast algorithm compares Y values in XYZ space, which will be more relevant in the next section. A color with a Y value of 0.18 will have the particular quality of passing WCAG contrast level AA[5] on both pure white (#ffffff) and pure black (#000000). That makes it a good test to see if a color is a light background (Y_b > 0.18) or a dark background (Y_b < 0.18).

Using this equation for our color system, we can now get both dark mode and light mode colors, calculated automatically based on the background color we choose.
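As code, the direction switch is a small tweak to the lightness function (a sketch, using the same normalized scale number as before):

```javascript
// Light backgrounds (Yb > 0.18) get darker shades as n grows;
// dark backgrounds get lighter shades as n grows.
const adaptiveLightness = (n, backgroundY) =>
  backgroundY > 0.18 ? 1 - n : n;
```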
The color palette calculated with a background color of #000000 (Y_b = 0). All values are sRGB hex:

| Scale Number | Neutral | Blue | Green | Red | Yellow |
| --- | --- | --- | --- | --- | --- |
| 0 | #000000 | #000000 | #000000 | #000000 | #000000 |
| 10 | #141619 | #0e1722 | #10190f | #221112 | #1c150b |
| 20 | #292f35 | #132f4f | #1e351a | #4e1a20 | #3d2a0f |
| 30 | #3c4752 | #10467f | #275122 | #7e192b | #5e3e0b |
| 40 | #506070 | #075eac | #25701e | #ae0e36 | #805304 |
| 50 | #67798c | #0077d8 | #009100 | #dd0042 | #a26900 |
| 60 | #8193a6 | #2993f8 | #35ac35 | #fa405c | #c4810a |
| 70 | #9fadbd | #6fb0f6 | #61c660 | #f78489 | #e29b35 |
| 80 | #bfc8d1 | #a7caf1 | #96db94 | #f1b5b5 | #ecbd86 |
| 90 | #e0e3e6 | #d9e4f0 | #d6e9d6 | #f0dedd | #eee0d1 |
| 100 | #ffffff | #ffffff | #ffffff | #ffffff | #ffffff |

Making scales accessible: Building in the WCAG contrast calculation

One of the most helpful aspects of scale numbers is that they can simplify accessibility substantially. The first time I saw this feature was with the US Web Design System’s (USWDS) design tokens. The USWDS color tokens have scale numbers from 0–100; using any two tokens whose scale numbers differ by 50 or more guarantees that the pair will meet the WCAG color contrast criteria at AA level. This makes designing accessible interfaces much easier. Instead of manually running each color pairing through a color contrast check, you can compare the scale numbers of the design tokens and instantly know if the combination meets accessibility criteria.

When I first set out to build out a system of functions for Stripe’s color palette, this was the most daunting part of the challenge. Going in, I wasn’t even sure if it was possible to systematically target contrast ratios across all hues. However, after seeing the technique used in Adobe’s Leonardo, I had some degree of hope that such a function existed. After many false starts and dead ends, I found the right set of operations.

Step 1: Calculate a target contrast ratio based on scale step

Stripe’s color scales follow the lead of the USWDS; when scale numbers differ by 500 or greater, those two colors conform to the AA-level contrast ratio of 4.5:1. This means that when neutral.500 is used on top of neutral.0 (or vice versa), the color combination should be accessible.

To accomplish this with calculated colors, it’s important to understand how WCAG’s contrast ratio is measured. A contrast ratio like 4.5:1 is the output (R) of the following formula, which compares what the WCAG calls “relative luminance”:[6]

$$R = \frac{L_1 + 0.05}{L_2 + 0.05}$$

In this equation, L1 is the luminance (i.e., the Y value of the color in XYZ color space) of the lighter color, and L2 is the luminance of the darker color.

So how do we use this knowledge to transform scale steps into contrast ratios? Well, we know step 0 and 500 need to have a ratio of 4.5. Step 100 and step 600 also need to have a ratio of 4.5, and so on, up the scale. This is a feature of exponential equations; equally-spaced points along the function have consistent ratios. Exponential equations also model the growth of a population, or the spread of a virus. It happens that luminosity is also an exponential function of scale step, which shouldn’t be surprising if you know a bit of calculus.

Exponential functions take the form $$f(x) = e^{kx}$$, where k is some constant. In our case, we’ll call the function r(x) (for contrast ratio), where x is a number between 0 and 1 that represents our scale step; we need to solve for k to find the exact constant that produces the correct contrast ratios. Since r(0.5) should be 4.5 — that is, scale step 500 has a contrast ratio of 4.5:1 with step 0 — we start with $$4.5 = e^{0.5k}$$. Solving for k yields $$k = \ln(20.25)$$.
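A quick numerical check (mine, not from the essay) that this exponential has the property we want, namely that any two steps 0.5 apart have a 4.5:1 ratio:

```javascript
const k = Math.log(20.25); // ≈ 3.008
const r = (x) => Math.exp(k * x);

console.log(r(0.5) / r(0));   // 4.5 — step 500 vs. step 0
console.log(r(0.8) / r(0.3)); // 4.5 — step 800 vs. step 300
```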
To make this a little easier to work with, we can use a close approximation of this value, 3.008. And if browsers were perfect pieces of software, that would be that.

But color in web browsers is a tricky technical problem. Specifically, when you convert an RGB color like rgb(129.3, 129.3, 129.3) to a hex color, it’s rounded off; the result is #818181, which is exactly rgb(129, 129, 129). The formula we derived, $$r(x) = e^{3.008x}$$, is exact, so if you round a color’s values at all after calculating it, you may end up with inaccessible colors. Therefore, in testing this function, I’ve found that adding a little extra contrast to the overall system helps guard against rounding errors. The final formula I used to calculate the contrast ratio from a scale step is as follows:

$$r(x) = e^{3.04x}$$

Where r(x) is the target contrast ratio and x is a number from 0 to 1 that represents the scale number. If your scale numbers (like Stripe’s) go from 0 to 1,000, then a scale number of 500 correlates to x = 0.5.

Step 2: Calculate lightness based on a target contrast ratio

Now that we have a function to calculate a contrast ratio based on our scale number, let’s return to the contrast ratio equation:

$$R = \frac{L_1 + 0.05}{L_2 + 0.05}$$

If we solve this equation for L2, we get an equation for the luminosity of a color with the desired contrast ratio with a given color:

$$L_2 = \frac{L_1 + 0.05}{R} - 0.05$$

This is true as long as L1 is greater than L2. Put another way, this covers cases where we’re generating a darker color than our given (background) color. For the opposite case, we can use the same formula, solved for L1 instead of L2:

$$L_1 = R\,(L_2 + 0.05) - 0.05$$

This gets us the following piecewise equation:

$$Y_f = \begin{cases} \dfrac{Y_b + 0.05}{R} - 0.05 & \text{if } Y_b > 0.18 \\ R\,(Y_b + 0.05) - 0.05 & \text{if } Y_b \le 0.18 \end{cases}$$

As explained earlier, the 0.18 in this equation represents the luminosity of “middle gray,” a color equally contrasting with #000000 and #ffffff; each case depends on whether the background color is dark or light.[7]

So, for example, if I want a foreground to have a 4.5:1 contrast ratio with the background color, I can calculate the luminosity of that color by inputting the luminosity of the background as Y_b and the contrast ratio as 4.5. If the background is #ffffff, which has a luminosity of 1, Y_f comes out to 0.183.

We can substitute in our function for r(x) to get the following:

$$Y_f(x) = \begin{cases} \dfrac{Y_b + 0.05}{e^{3.04x}} - 0.05 & \text{if } Y_b > 0.18 \\ e^{3.04x}\,(Y_b + 0.05) - 0.05 & \text{if } Y_b \le 0.18 \end{cases}$$

This is a function that takes:

- a number from 0 to 1 that represents a scale number, and
- the Y value of a background color,

and provides the Y value (i.e., luminance) of a color at the given scale number.

Step 3: Translate from XYZ Y to OKHsl L

Despite its scientific accuracy, XYZ is not a great color space to work in for generating color scales for design systems — while we can step through the Y values in a fairly straightforward way, calculating X and Z values of a given color requires matrix multiplication. Instead, we can translate XYZ’s Y value into OKHsl’s L value with the following two-step process.

First, we can use the following formula to convert the Y value to the lightness value in Lab:[8]

$$L_{Lab}(Y) = \begin{cases} 903.2963 \cdot Y & \text{if } Y \le 0.0088564516 \\ 116\,Y^{1/3} - 16 & \text{if } Y > 0.0088564516 \end{cases}$$

Then, OKHsl uses a “toe” function to map the Lab lightness value to a perceptually accurate lightness value. Essentially it adds a little space to the dark end of the spectrum.
This function is a little complicated:

$$\mathrm{toe}(l) = \tfrac{1}{2}\left(k_3 l - k_1 + \sqrt{(k_3 l - k_1)^2 + 4 k_2 k_3 l}\right), \quad k_1 = 0.206,\; k_2 = 0.03,\; k_3 = \frac{1 + k_1}{1 + k_2}$$

The math gets a lot more manageable if we put it all into a JavaScript function:

const YtoL = (Y) => {
  if (Y <= 0.0088564516) {
    return Y * 903.2962962;
  } else {
    return 116 * Math.pow(Y, 1 / 3) - 16;
  }
};

const toe = (l) => {
  const k_1 = 0.206;
  const k_2 = 0.03;
  const k_3 = (1 + k_1) / (1 + k_2);
  return (
    0.5 *
    (k_3 * l - k_1 + Math.sqrt((k_3 * l - k_1) * (k_3 * l - k_1) + 4 * k_2 * k_3 * l))
  );
};

const computeScaleLightness = (scaleValue, backgroundY) => {
  let foregroundY;
  if (backgroundY > 0.18) {
    // light background: generate a darker foreground
    foregroundY = (backgroundY + 0.05) / Math.exp(3.04 * scaleValue) - 0.05;
  } else {
    // dark background: generate a lighter foreground
    foregroundY = Math.exp(3.04 * scaleValue) * (backgroundY + 0.05) - 0.05;
  }
  // YtoL returns L* on a 0–100 scale; normalize to 0–1 so the toe function
  // (whose constants assume 0–1 lightness) returns OKHsl's 0–1 l value
  return toe(YtoL(foregroundY) / 100);
};

The function computeScaleLightness takes two values — the normalized scale value and the Y value of your background color — and returns an OKHsl l (lightness) value for the color at that scale step. With this, we have all the pieces we need to generate a complete accessible color palette for any design system.

Putting it all together: All the code you need

Now we have all the components to write a complete color generation library.

// utility functions
const YtoL = (Y) => {
  if (Y <= 0.0088564516) {
    return Y * 903.2962962;
  } else {
    return 116 * Math.pow(Y, 1 / 3) - 16;
  }
};

const toe = (l) => {
  const k_1 = 0.206;
  const k_2 = 0.03;
  const k_3 = (1 + k_1) / (1 + k_2);
  return (
    0.5 *
    (k_3 * l - k_1 + Math.sqrt((k_3 * l - k_1) * (k_3 * l - k_1) + 4 * k_2 * k_3 * l))
  );
};

const normalizeScaleNumber = (scaleNumber, maxScaleNumber) =>
  scaleNumber / maxScaleNumber;

// hue, chroma, and lightness functions
const computeScaleHue = (scaleValue, baseHue) => baseHue + 5 * (1 - scaleValue);

const computeScaleChroma = (scaleValue, minChroma, maxChroma) => {
  const chromaDifference = maxChroma - minChroma;
  return (
    -4 * chromaDifference * Math.pow(scaleValue, 2) +
    4 * chromaDifference * scaleValue +
    minChroma
  );
};

const computeScaleLightness = (scaleValue, backgroundY) => {
  let foregroundY;
  if (backgroundY > 0.18) {
    foregroundY = (backgroundY + 0.05) / Math.exp(3.04 * scaleValue) - 0.05;
  } else {
    foregroundY = Math.exp(3.04 * scaleValue) * (backgroundY + 0.05) - 0.05;
  }
  // normalize L* (0–100) to 0–1 before applying the toe
  return toe(YtoL(foregroundY) / 100);
};

// color generator function
const computeColorAtScaleNumber = (
  scaleNumber,
  maxScaleNumber,
  baseHue,
  minChroma,
  maxChroma,
  backgroundY,
) => {
  // create an OKHsl color object; this might look different depending on what library you use
  const okhslColor = {};
  // normalize scale number
  const scaleValue = normalizeScaleNumber(scaleNumber, maxScaleNumber);
  // compute color values
  okhslColor.h = computeScaleHue(scaleValue, baseHue);
  okhslColor.s = computeScaleChroma(scaleValue, minChroma, maxChroma);
  okhslColor.l = computeScaleLightness(scaleValue, backgroundY);
  // convert OKHsl to sRGB hex; this will look different depending on what library you use
  return convertToHex(okhslColor);
};

For this code to work, you’ll need a library to convert from OKHsl to sRGB hex. The upcoming version of colorjs.io supports this, as does culori. I’ve marked where that matters, in case you’d like to use a different color conversion utility.

What does it look like in practice? Here are some examples of the same design in a number of themes, with different background colors:

Three generated color palettes

By adjusting the hue and chroma used when we generate our colors, we can get a broad and expressive range of hues, while ensuring each shade is accessible when used in the same context.
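As a concrete illustration of how the pieces fit together, here is a hypothetical usage sketch. It assumes the functions above are in scope and uses culori (one of the libraries named above) for the final OKHsl-to-hex conversion; the convertToHex wrapper and the hue/chroma parameters are my own illustrative choices, not Stripe’s values:

import { formatHex } from 'culori';

// Hypothetical convertToHex built on culori, which can format colors in its
// 'okhsl' mode; any converter that handles OKHsl → sRGB hex would work here.
const convertToHex = (okhslColor) => formatHex({ mode: 'okhsl', ...okhslColor });

// An illustrative blue scale on a white background (Y = 1), scale 0–1000.
// baseHue = 250, minChroma = 0.1, maxChroma = 0.9 are made-up parameters.
for (let step = 0; step <= 1000; step += 100) {
  const hex = computeColorAtScaleNumber(step, 1000, 250, 0.1, 0.9, 1);
  console.log(step, hex); // expect step 0 ≈ #ffffff, step 1000 ≈ near black
}

Because lightness is derived from the background’s luminance, regenerating the same scale with backgroundY = 0 should yield a dark-mode palette whose steps keep the same contrast guarantees.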
What we’ve learned and where we’re going

At Stripe, we’ve implemented this approach to generating color palettes. It’s now the foundation of the colors in our design system, Sail. The color generation function is also available to the users of our design system; this means that teams can offer theming features to end users, which is especially useful when Stripe’s merchants embed our UI in their own applications.

One important lesson I learned on this journey is the importance of token APIs. This is a bit of an esoteric topic and might be worthy of its own essay. The short version: using color aliases (like color.button.background referring to color.action.500 referring to color.base.blue.500) allows theming to happen “behind the scenes,” and ensures that components don’t need to update their code when switching themes.

So where do we go from here? There are two features I’d like to explore in the future to make this approach to color even more robust.

First, I’d like to develop an alternative color lightness scale for APCA. The APCA color contrast function is an alternative to the current WCAG contrast ratio function. It purports to more accurately reflect contrast between colors, taking into account the “polarity” of the colors (e.g., dark-on-light or light-on-dark) and the font size of any text. The math behind the APCA contrast function is a bit more complicated than the WCAG function, and my early experiments weren’t very successful.

Second, I’d like to extend this approach to work in wide-gamut color spaces like Display P3. Currently, OKHsl only covers the sRGB gamut; more and more screens are capable of displaying colors beyond sRGB, offering even more possibilities for accessible color palettes. Calculating a P3 version of OKHsl should be possible, but it’s definitely outside the scope of my current ability/comprehension.

Ultimately, however, the approach outlined in this essay should be a solid basis for generating colors for any design system. No matter how many hues you need, how expressive you’d like to be, how many shades your system consists of, or what kinds of themes you design, the set of functions I’ve covered will provide accessible color combinations.

Special thanks to Dmitry Belyaev for providing feedback on a draft of this essay.

Footnotes & References

1. (70, -15) is the coordinate for pink in lab color space.
2. R. W. Pridmore, “Bezold–Brücke Hue-Shift as Functions of Luminance Level, Luminance Ratio, Interstimulus Interval and Adapting White for Aperture and Object Colors,” Vision Research 39, no. 19 (1999): 3873–3891.
3. Jesús Lillo et al., “Lightness and Hue Perception: The Bezold-Brücke Effect and Colour Basic Categories,” Psicológica 25, no. 1 (2004): 23–43.
4. However, it’s important to note that this peak can vary slightly depending on the specific hue in question.
5. AA is generally accepted as the standard for accessibility. A and AAA ratings also exist; they are more lax and more strict, respectively. You can read more about conformance levels on the W3C website.
6. https://www.w3.org/WAI/GL/wiki/Contrast_ratio
7. This isn’t extremely rigorous; you might want a “light theme” that starts from a dark gray background and gets darker as the scale number increases. I’ll leave that as an exercise for the reader. This formula will cover the typical dark and light mode calculations.
8. If you’re like me and get suspicious when you see oddly specific numbers like 903.2962962 in equations like these, a quick explanation: unlike the RGB color space, the XYZ color space has no “true white.” Because our eyes can perceive true white differently according to what light source is used, to transfer colors in and out of XYZ color space we often need to also define true white. The most common values are defined by something cryptically called the “CIE standard illuminant D65,” which corresponds roughly to what white looks like on a clear day in northern Europe. I am not making this up.

Creating a positive workplace community

Your workplace community — the way you interact with your coworkers every day — can have a major impact on your productivity, happiness, and self-worth. It’s natural to want to shape the community in ways that might make you feel more comfortable. But how can you shape it?

Over my career I’ve developed a framework that strikes a balance between authority and autonomy; it’s neither omnipotent intelligent design nor chaotic natural selection. The framework consists of three components: culture, policy, and enforcement. Each shapes and influences the others in an endless feedback loop. By understanding them in turn, and seeing how they intersect, we can be intentional in how we design our community.

What is culture?

For most of my career, I’ve held that culture is all that mattered. Specifically, I believed the quote often misattributed to Peter Drucker: “Culture eats strategy for breakfast.” Which is to say: if your team’s culture isn’t aligned with your strategy, you’ll never succeed.

But what is culture? “Culture” refers to the shared values, beliefs, attitudes, and rituals that shape the interactions among employees within an organization. If you were to draw a big Venn diagram of every single coworker’s mental model of the company, culture would be the part in the middle where they all intersect.

In 2009, Patty McCord and Reed Hastings (chief talent officer and CEO of Netflix, respectively) wrote the book on modern tech company culture. More accurately, they wrote a 129-slide PowerPoint deck on the company’s culture; Sheryl Sandberg called it “one of the most important documents ever to come out of Silicon Valley.” It defined seven aspects of the culture, including its values, expectations for employees, approach to policy, ways of making decisions, compensation, and career progression frameworks.

But culture can’t be written down. In the very same deck, McCord and Hastings cited Enron’s company values (“Integrity, communication, respect, excellence”). The values, they noted, were chiseled in marble in the lobby of Enron’s office. But history shows that Enron’s real company culture contained none of those things.

What is policy?

When I was running my own company, I genuinely enjoyed thinking about company policies. At the time, I felt that even though the company was small and relatively poor, our policies could attract the best talent in the world.

“Policy” refers to the guidelines, rules, and procedures that govern employees. Some policies are bound to legal requirements: discrimination, harassment, and security policies are in place to ensure that employees don’t break the law. Other policies aren’t backed by laws, but apply to the whole company equally. Vacation policies, for example, usually dictate the number of days an employee can take paid leave from work, and how employees should schedule and coordinate those days. Still other policies are put in place by smaller teams of coworkers to govern functional or cross-functional units as they do their work. These are policies like requiring regular critiques and approvals of creative work, getting peer code reviews, or doing postmortems after technical issues.
Generally, I’m an acolyte of the McCord school of policy, which is to say I don’t think we need much at all. According to Netflix’s culture deck, in 2004 she said: “There is no clothing policy at Netflix, but no one has come to work naked lately.” In 2009, GM’s current CEO Mary Barra (then the VP of global human resources) demonstrated this approach in dramatic fashion, rewriting the company’s clothing policy from a 10-page manifesto to the two-word maxim “dress appropriately.” However, I’ve seen the minimal-policy approach go awry; when not supported by cultural norms or consistent enforcement, the lack of policy can reinforce a status quo of privilege, bias, and hierarchy.

What is enforcement?

I’ve always struggled with enforcement. I believed that if culture and policy were strong, then there was no need for enforcement; everyone would feel compelled to follow the high standard they held for each other. But recently, I’ve come to understand its importance. That’s why it’s the third piece of this puzzle, the last one to fall into place.

Culture is an unwritten belief. Policy is a recorded norm. “Enforcement” is an action that demonstrates those beliefs and norms. It can take many forms, like counseling, coaching, or discipline. It can be as light and casual as an emoji in a group chat, or as grave and serious as termination without notice.

Effective enforcement is hard. It requires being both consistent and flexible. Every situation is unique; good enforcement is fair and equitable, with an emphasis on clear communication and collaboration. While HR is traditionally the group that enforces a company’s policies, the highest-performing teams police themselves.

Enforcement can positively reflect cultural values and policy beliefs. For instance, Kayak requires its engineers and designers to occasionally handle customer support, a task usually reserved for trained associates. Instead of merely suggesting this practice, Kayak enforces it. Kayak co-founder Paul English says: “once they take those calls and realize that they have the authority and permission to give the customer their opinion of what is going on and then to make a fix and release that fix, it’s a pretty motivating part of the job.”

Balancing the feedback loop

Culture, policy, and enforcement constitute a web of forces in tension, holding the workplace community in balance. If any of the three pulls too hard, the others can break, and the community can fall apart. So how do you keep the tension working for you?

Culture can influence policy by first acknowledging and valuing policy. This doesn’t mean that policy has to be exhaustively written down; Mary Barra’s rewrite of GM’s dress code wasn’t about removing policy altogether. She was asking managers and employees to think carefully about the policy, to consider how it shaped (and was shaped by) the company’s culture, and to make decisions together. At Wharton’s 2018 People Analytics Conference, Barra said: “if you let people own policies themselves, it helps develop them.”

Culture can influence enforcement by changing the manner of enforcement altogether. In a positive culture, enforcement is likely to be carried out in a fair and consistent manner. In a negative workplace culture, enforcement may be carried out in a punitive or arbitrary manner, which can lead to resentment. If your team’s mechanisms of enforcement are unclear, ask: “How do our cultural values result in action?”

Policy influences culture by creating common knowledge.
It’s a kind of mythos, an origin story, or a shared language. On most teams, one of the first things any new member does is learn the team’s policies; the first week of an employee’s tenure is usually the only time they read the company handbook. This sets the tone for the rest of their time with the company or team. Take advantage of those moments to build your culture up.

Policy can influence enforcement by setting expectations, creating consistency, and guaranteeing fairness. Without clear policy, consistent enforcement is impossible and may seem arbitrary. If there is no policy at all, enforcement is entirely subjective and personal. Sometimes, the key to enforcement lies in simply defining, discussing, and committing to a policy. In the event that enforcement is necessary, the shared understanding created by clear policy will make it easy for the team to act.

Enforcement shapes culture by buttressing the shared values of the team. Negative aspects of culture like privilege and bias are, in part, a result of inconsistent enforcement of policy: unfair enforcement creates a culture where some people expect to be exempt from some rules. Leaders should be just as beholden to a team’s values as those they lead, or else the culture will splinter along the fault lines of management layers.

Enforcement shapes policy by creating (or reducing) “shadow policy.” That is, if not all policies are enforced, and if there are expectations that are enforced but not written or communicated, team members will tend to ignore policies altogether. In many cases of white-collar crime or malfeasance, shadow policies overwhelmed the written rules, undermining them entirely.

Conclusion

Culture, policy, and enforcement are three aspects of every workplace community. The ways in which they interact define the health of that community. When they’re in balance, the community can grow and adapt to challenges without losing its identity, like an animal reacting to its environment by adapting over generations. If those aspects of community are out of balance, teams, functions, and entire companies become brittle and self-destructive. Bad culture undermines well-intentioned policy. Unclear, unwritten policy leads to unfair and inconsistent enforcement. Too much enforcement, or not enough, or the wrong kind at the wrong time, can fracture culture into in-groups and out-groups.

In these ways, the balance of culture, policy, and enforcement is vital. Being vigilant about the balance, regardless of your role, will help you shape and guide your workplace community. The more your team works to understand these components, and the more intentional its choices to keep them in healthy tension, the happier, more productive, and more fulfilled you’ll be.

Design-by-wire

There’s a lot of fear in the air. As AI gets better at design, it’s natural for designers to be worried about their jobs. But I think the question — will AI replace designers? — is a waste of time. Humans have always invented technology to do their work for them and will continue to do so as long as we exist. Let’s use our curiosity and creativity to imagine how technology will help us be better, more efficient, and more impactful. So, in that spirit, I’d like to share a metaphor that I think paints a picture of how the job of design will change in the next decade.

Flying by wire

Commercial airplanes are some of the most complicated machines humans have ever built. It took all of Wilbur Wright’s skill to fly the first powered airplane for a minute, just 10 feet off the ground. That plane, the Wright Flyer, could carry one person; it weighed 745 pounds with fuel and could reach a height of 30 feet at a maximum speed of 30 miles per hour.1

The Airbus A380, currently the world’s largest commercial airliner, weighs over a million pounds when fully loaded. It can carry up to 853 people, flying up to 43,000 feet at a cruising speed of 561 mph — 85% of the speed of sound. Just two people pilot the A380.2

The cockpit of an Airbus A380. Photo by Steve Jurvetson, CC BY 2.0

The A380, and all modern commercial airplanes, wouldn’t exist without something called “fly-by-wire.” Fly-by-wire is a system that translates a pilot’s inputs — changing the throttle to speed up or slow down, controlling pitch and roll with the yoke, turning knobs and dials in the cockpit — into coordinated movements of the airplane’s engines and control surfaces. The first fly-by-wire systems were a veritable nervous system of electric relays and motors; today, they’re sophisticated computers in the belly of the plane.

Originally, fly-by-wire had nothing to do with automation. As airplanes got larger, the cables, rods, and hydraulic links connecting the cockpit to the rest of the plane became a monumental design challenge. By replacing those complex, bulky components with electrical wires and switches, airplanes would be lighter and easier to maintain, with more room for passengers and cargo.

The first commercial airplane with a fly-by-wire system was the supersonic Concorde. At speeds over Mach 1, it would be almost impossible for a pilot to move the control surfaces of the airplane through sheer mechanical force; fly-by-wire allowed pilots to smoothly operate the plane at any speed. And because sudden changes at top speed could be catastrophic, the fly-by-wire system could use analog circuitry to smooth out a pilot’s inputs.

An experimental fly-by-wire system in the Vought F-8 Crusader, using data-processing equipment adapted from the Apollo Guidance Computer

As fly-by-wire systems became more common, they went from faithfully transferring pilots’ inputs to interpreting and adjusting them. The Airbus A320, introduced in 1988, featured the first digital (computerized) fly-by-wire system in a commercial airliner; it included “flight envelope protection,” a system that prevents pilots from taking any action that would cause damage to the airplane. Depending on the speed, altitude, and phase of flight, the fly-by-wire system will ignore certain pilot inputs altogether.

Fly-by-wire has been the focus of both scrutiny and praise since its introduction. On one hand, it has saved lives: when US Airways Flight 1549 (an Airbus A320) flew through a flock of birds on takeoff, it lost all power.
The pilots had to make an emergency landing in the Hudson River, flying the airplane unusually low and slow and risking putting the plane into an uncontrollable stall. The fly-by-wire system, with its flight envelope protection, ensured the plane could maneuver at the very edge of its capability, leading to a controlled landing with only a few serious injuries among those aboard.

On the other hand, fly-by-wire has been criticized for replacing parts of pilots’ expertise. In 2009, Air France Flight 447 (an Airbus A330) crashed in the Atlantic Ocean, killing all 228 passengers and crew. An investigation into the cause of the crash concluded that the autopilot and fly-by-wire protections started to malfunction when ice crystals interfered with the aircraft’s sensors; the pilots, used to flying with the safety of flight envelope protection, couldn’t correct for the errors, stalled the plane, and crashed into the ocean.

Whether you think fly-by-wire is a crucial innovation or a crutch, its effect on the airline industry is easy to demonstrate. Bigger planes that fly farther can carry more passengers to more destinations. From 1970 to 2019, the number of airline passengers worldwide grew over 1,400%, from 310 million to 4.4 billion.3 In the same period, the number of commercial pilots — pilots holding “commercial” or “airline transport” licenses — increased 27%, from 208,027 in 1969 to 265,810 in 2019. Pilots’ salaries have stayed consistently high: an average airline captain made about $51,750 a year in 1975,4 the equivalent of $287,770 in 2023.5 An airline captain with six years of experience can expect to make $285,460 today.6

Designing by wire

Just as fly-by-wire systems have made pilots more efficient (not redundant), AI and automation will make designers more effective. Imagine a design-by-wire system. The job of the designer is to indicate the desired outcome. Like a pilot pushing the throttle to make the airplane accelerate, a designer could assemble a wireframe or configure a screen to enable a user to accomplish a task.

The design-by-wire system could then interpret the designer’s instructions. The system could change aspects of the design to use the latest design system components in the correct way. It could optimize the design to make implementation cheaper, faster, or less prone to bugs. It could automatically fix accessibility issues, or add information to address accessibility concerns like keyboard shortcuts, screen reader labels, or high-contrast and reduced-motion variations. A designer could list out hypotheses about the design (like “will this convert the most users to paid plans?”), and the system could provide designs for multivariate testing, along with test or research plans. The system could automate QA by testing designs against simulated user behavior, adjusting the designs to cover the wide and unpredictable range of real user interaction.

You can already see these kinds of systems taking shape. Noya promises to take wireframes and turn them into production code using an existing design system. Galileo AI claims to be able to create fully-editable designs from a single text description. Diagram’s Genius aims to provide contextual suggestions in Figma, filling out designs with the click of a button. These are just early tech previews, but they paint a picture of AI becoming a core component of our design tools.

At the end of the day, design-by-wire systems are centered around the designer.
Like the pilot of an Airbus A380, the designer becomes the operator of a fantastically complicated machine. There is a risk in designing by wire: if designers don’t understand how the system works, they risk losing control, becoming less effective than they were before. That’s why it’s important that we become experts in AI; we don’t have to be able to write the code that drives these tools, but we need to understand the way the systems work. To use the machine to its full potential, the designer has to understand the intricacies of its operation. Pilots, by analogy, train for years in simulators before they set foot in the cockpit of a real airliner.

Here are a few resources you can use to learn more about GPTs, the systems driving the current boom in AI:

- Stephen Wolfram’s “What is ChatGPT Doing … and Why Does It Work?” is an incredibly in-depth exploration of the technology and concepts, with interactive code.
- Ted Chiang’s “ChatGPT Is a Blurry JPEG of the Web” uses analogies to explain the strengths and weaknesses of GPT technology.
- 3Blue1Brown has a 5-part video series explaining the basics of neural networks, including how they are trained.
- The Coding Train has 26 videos on neural networks, in which Daniel Shiffman builds and explains various components and variations of neural nets. There’s also an accompanying chapter in Shiffman’s book The Nature of Code.

All four of the above authors are amazing teachers who mix mathematical depth with intuitive analogies and mental models.

Conclusion

AI will change our jobs in ways we can’t imagine. This has been happening to airline pilots since the advent of fly-by-wire systems. In the transition to newer, faster, larger, and more efficient airplanes, pilots have needed more and more technical understanding and skill in interacting with the computers that fly their planes. But pilots are still needed.

Likewise, designers won’t be replaced; they’ll become operators of increasingly complicated AI-powered machines. New tools will enable designers to be more productive, designing applications and interfaces that can be implemented faster and with fewer bugs. These tools will expand our brains, helping us cover accessibility and usability concerns that previously took hours of effort from UX specialists and QA engineers.

It’ll take years of training to become an expert at designing with these new AI-powered systems. Start now, and you’ll stay ahead of the curve. Wait, and the challenge won’t come from the AI itself; it’ll be other designers — ones who are skilled at AI-powered design — who will come for your job.

Footnotes & References

1. “Wright Flyer.” In Wikipedia, February 19, 2023. https://en.wikipedia.org/w/index.php?title=Wright_Flyer&oldid=1140234161.
2. “Airbus A380.” In Wikipedia, February 10, 2023. https://en.wikipedia.org/w/index.php?title=Airbus_A380&oldid=1138648822.
3. “Top 15 Countries with Departures by Air Transport - 1970/2020.” Accessed February 20, 2023. https://statisticsanddata.org/data/top-15-countries-with-departures-by-air-transport-1970-2020/.
4. “Industry Wage Survey: Scheduled Airlines.” Bulletin / Bureau of Labor Statistics, 1972–1977. https://catalog.hathitrust.org/Record/009881222.
5. Calculated with https://www.in2013dollars.com/us/inflation/1975?amount=51750
6. “Major Airline Pilot Salary: First Officer and Captain Pay in 2023.” ATP Flight School. Accessed February 18, 2023. https://atpflightschool.com/become-a-pilot/airline-career/major-airline-pilot-salary.html.


More in design

Ten Books About AI Written Before the Year 2000

This is by no means a definitive list, so don’t @ me! AI is an inescapable subject. There’s obviously an incredible tailwind behind the computing progress of the last handful of years — not to mention the usual avarice — but there has also been nearly a century of thought put toward artificial intelligence. If you want a more robust understanding of what is at work beneath, say, the OpenAI chat box, pick any one of these texts. Each one would be worth a read — even a skim (this is by no means light reading). At the very least, familiarizing yourself with the intellectual path leading to now will help you navigate the funhouse of overblown marketing bullshit filling the internet right now, especially as it pertains to AGI. Read what the heavyweights had to say about it and you’ll see how many semantic games are being played while the goalposts are moved.

Steps to an Ecology of Mind (1972) — Gregory Bateson. Through imagined dialogues with his daughter, Bateson explores how minds emerge from systems of information and communication, providing crucial insights for understanding artificial intelligence.

The Sciences of the Artificial (1969) — Herbert Simon. Examines how artificial systems, including AI, differ from natural ones and introduces key concepts about bounded rationality.

The Emperor’s New Mind (1989) — Roger Penrose. While arguing against strong AI, Penrose provides valuable insights into consciousness and computation that remain relevant to current AI discussions.

Gödel, Escher, Bach: An Eternal Golden Braid (1979) — Douglas Hofstadter. Weaves together mathematics, art, and music to explore consciousness, self-reference, and emergent intelligence. Though not explicitly about AI, it provides fundamental insights into how complex cognition might emerge from simple rules and patterns.

Perceptrons (1969) — Marvin Minsky & Seymour Papert. This controversial critique of neural networks temporarily halted research in the field but ultimately helped establish its theoretical foundations. Minsky and Papert’s mathematical analysis revealed both the limitations and potential of early neural networks.

The Society of Mind (1986) — Marvin Minsky. Proposes that intelligence emerges from the interaction of simple agents working together, rather than from a single unified system. This theoretical framework remains relevant to understanding both human cognition and artificial intelligence.

Computers and Thought (1963) — Edward Feigenbaum & Julian Feldman (editors). The first collection of articles about artificial intelligence, featuring contributions from pioneers like Herbert Simon and Allen Newell. It captures the foundational ideas and optimism of early AI research.

Artificial Intelligence: A Modern Approach (1995) — Stuart Russell & Peter Norvig. This comprehensive textbook defined how AI would be taught for decades. It presents AI as rational agent design rather than human intelligence simulation, a framework that still influences the field.

Computing Machinery and Intelligence (1950) — Alan Turing. This paper introduces the Turing Test and addresses fundamental questions about machine intelligence that we’re still grappling with today. It’s remarkable how many current AI debates were anticipated in this work.

Cybernetics: Or Control and Communication in the Animal and the Machine (1948) — Norbert Wiener. Established the theoretical groundwork for understanding control systems in both machines and living things. His insights about feedback loops and communication remain crucial to understanding AI systems.

The Zettelkasten note taking methodology.

My thoughts about the Zettelkasten (slip box) note-taking methodology invented by the German sociologist Niklas Luhmann.

DJI flagship store by Various Associates

Chinese interior studio Various Associates has completed an irregular pyramid-shaped flagship store for drone brand DJI in Shenzhen, China. Located...

Notes on Google Search Now Requiring JavaScript

John Gruber has a post about how Google’s search results now require JavaScript.[1] Why? Here’s Google:

the change is intended to “better protect” Google Search against malicious activity, such as bots and spam

Lol, the irony. Let’s turn to JavaScript for protection, as if the entire ad-based tracking/analytics world born out of JavaScript’s capabilities isn’t precisely what led to a less secure, less private, more exploited web. But whatever, “the web” is Google’s product, so they can do what they want with it — right? Here’s John:

Old original Google was a company of and for the open web. Post 2010-or-so Google is a company that sees the web as a de facto proprietary platform that it owns and controls. Those who experience the web through Google Chrome and Google Search are on that proprietary not-closed-per-se-but-not-really-open web.

Search that requires JavaScript won’t cause the web to die. But it’s a sign of what’s to come (emphasis mine):

Requiring JavaScript for Google Search is not about the fact that 99.9 percent of humans surfing the web have JavaScript enabled in their browsers. It’s about taking advantage of that fact to tightly control client access to Google Search results. But the nature of the true open web is that the server sticks to the specs for the HTTP protocol and the HTML content format, and clients are free to interpret that as they see fit. Original, novel, clever ways to do things with website output is what made the web so thrilling, fun, useful, and amazing. This JavaScript mandate is Google’s attempt at asserting that it will only serve search results to exactly the client software that it sees fit to serve.

Requiring JavaScript is all about control. The web was founded on the idea of open access for all. But since that’s been completely and utterly abused (see LLM training datasets), we’re gonna lose it. The whole “freemium with ads” model that underpins the web was exploited for profit by AI at an industrial scale, and that’s causing the “free and open web” to become the “paid and private web”. Universal access is quickly becoming select access — Google search results included.

[1] If you want to go down a rabbit hole of reading more about this, there’s the TechCrunch article John cites, a Hacker News thread, and this post from a company founded on providing search APIs.

Kedrovka cedar milk by Maria Korneva

Kedrovka is a brand of plant-based milk crafted for those who care about their health, value natural ingredients, and seek...
