Your workplace community — the way you interact with your coworkers every day — can have a major impact on your productivity, happiness, and self-worth. It’s natural to want to shape the community in ways that might make you feel more comfortable. But how can you shape it? Over my career I’ve developed a framework that strikes a balance between authority and autonomy; it’s neither omnipotent intelligent design nor chaotic natural selection. The framework consists of three components: culture, policy, and enforcement. Each shapes and influences the others in an endless feedback loop. By understanding them in turn, and seeing how they intersect, we can be intentional in how we design our community.

What is culture?

For most of my career, I’ve held that culture is all that mattered. Specifically, I believed the quote often misattributed to Peter Drucker: “Culture eats strategy for breakfast.” Which is to say, if your team’s culture isn’t aligned with your strategy, you’ll never...



The history of album art

Album art didn’t always exist. In the early 1900s, recorded music was still a novelty, overshadowed by sales of sheet music. Early records were vastly different from what we think of today: discs were sold individually and could only hold up to four minutes of music per side. Sometimes, only one side of the record was used. One of the most popular records of 1910, for example, was “Come, Josephine, in My Flying Machine”: it clocked in at two minutes and 39 seconds.

via Wikipedia

The packaging of these records was strictly utilitarian: a brown paper sleeve to protect the record from dust, printed with the name of the record label or the retailer. Rarely did the packaging include any information on the disc inside; the label on the center of the disc was all there was to differentiate one record from another.

But as record sales started to show signs of life, music publishers took note. Columbia Records, one of the first companies to sell music on discs, was especially successful. They pioneered the sale of songs in bundles: the individual discs were bound together in packages, partly to protect the delicate shellac that the records were made of, partly to increase their sales. Because the packages resembled photo albums, Columbia called them “record albums.”

There were many more technological breakthroughs that made it possible to mass-manufacture and distribute music throughout the world at affordable prices. The five-minute-long 78 rpm discs were replaced by 20-minute discs that ran at 33 ⅓ rpm, which were replaced by the hour-long 12″ LP we know today. Delicate shellac was replaced by the more resilient (and cheaper) vinyl. Both recording technology and consumer electronics were always evolving, allowing more dynamic music to fit into smaller packages and be played on smaller, higher-fidelity stereos.

The invention of album art can get lost in the story of technological mastery. But among all the factors that contributed to the rise of recorded music, it stands as one of the few that was wholly driven by creators themselves. Album art — first as marketing material, then as pure creative expression — turned an audio-only medium into a multi-sensory experience. This is the story of the people who made music visible.

The prophet: Alex Steinweiss

Alex Steinweiss was born in 1917, the son of eastern European immigrants. Growing up in Brooklyn, New York, Steinweiss took an early interest in art and earned a scholarship to Parsons School of Design. On graduating, he worked for Austrian designer Joseph Binder, whose bold, graphic posters had influenced design for the first decades of the 1900s.

The Most Important Wheels in America, Association of American Railroads (1952) via MoMA
Joseph Binder, Österreichs Wiederaufbau Ausstellung Salzburg (1933) via MoMA
Joseph Binder, Air Corps U.S. Army (winning entry for the MoMA National Defense Poster Competition [Army Air Corps Recruiting]) via MoMA

After his work with Binder, Steinweiss was hired by Columbia Records to produce promotional displays and ads, but the job didn’t stick. At the outbreak of World War II, he went to work for the Navy’s Training and Development Center in New York City, designing teaching material and cautionary posters. When the war ended, Steinweiss went back to freelancing for Columbia. At a lunch meeting in 1948, company president Ted Wallerstein mentioned that Columbia would soon introduce a new kind of record that, spinning at a slower speed of 33 ⅓ rpm, could hold more music than the older 78 rpm discs.
But there was a problem: the smaller, more intricate grooves on the discs were being damaged by the heavy paper sleeves used for the 78s. After the lunch, Steinweiss went to work to create a new, safer jacket for the records. But his vision for the new packaging went beyond just its construction. “The way records were sold was ridiculous,” Steinweiss said. “The covers were brown, tan or green paper. They were not attractive, and lacked sales appeal.” He suggested that Columbia should spend more money on packaging, convinced that eye-catching designs would help sell records.1

His first chance to prove his case was a 1940 compilation by the songwriters Rodgers and Hart — one of the first releases on the new microgroove 33 ⅓ records. For it, he asked the Imperial Theater (located one block west of Times Square) to change the lettering on their marquee to read “SMASH SONG HITS BY RODGERS & HART.” Steinweiss had a photographer take a photo, and back in his studio, superimposed “COLUMBIA RECORDS” on the image to match the perspective and style of the signage. The last touch, a nod to the graphic abstraction of his mentor Joseph Binder, was a pair of orange lines arcing around the marquee in the exact size of the record underneath. Album art was born.

Smash Song Hits by Rodgers & Hart via RateYourMusic

Steinweiss would go on to design hundreds of covers for Columbia from 1940 to 1945. His methodology was rigorous; the covers went beyond nice pictures to become visual representations of the music itself. Before most people owned a TV set, Steinweiss’s album covers were affordable multi-sensory entertainment. Looking at the album cover and listening to the music created an experience that was more than the sum of its parts. “I tried to get into the subject,” he explains, “either through the music, or the life and times of the composer. For example, with a Bartók piano concerto, I took the elements of the piano — the hammers, keys, and strings — and composed them in a contemporary setting using appropriate color and rendering. Since Bartók is Hungarian, I also put in the suggestion of a peasant figure.”

via RateYourMusic

Steinweiss was prophetic: his colorful compositions sold records. Newsweek reported that sales of Bruno Walter’s recording of Beethoven’s “Eroica” symphony increased 895% with its new Steinweiss cover.2

Eroica

The challenger: Reid Miles

From 1940 to 1950, Columbia Records was the dominant force in music sales. Buoyed by Steinweiss’s initial successes, Columbia hired more artists and designers to produce album art. Jim Flora led the charge from 1947–1950 with irreverent illustrations and more daring explorations of typography; like Steinweiss, his work mirrored the music on the records. During the era, Columbia began to focus much more on popular music. Flora’s campy compositions screamed “this isn’t your parents’ music.”

Gene Krupa and His Orchestra via JimFlora.com
Jim Flora's cover for Bix and Tram via JimFlora.com
Jim Flora's cover for Kid Ory and His Creole Jazz Band via JimFlora.com

But while Columbia was focusing on making it into the hit parade, an upstart label was homing in on a sound that would come to define the era. Blue Note Records, founded in 1939, was fixated on the jazz underground. From its founding and throughout the 1950s, Blue Note focused on “hot jazz,” a mutant strain of jazz descending from the big band swing era, often including twangy banjos, wailing clarinets, and rambunctious New Orleans second-line-style drumming.
Founder Alfred Lion wrote the label’s manifesto:

Blue Note Records are designed simply to serve the uncompromising expressions of hot jazz or swing, in general. Any particular style of playing which represents an authentic way of musical feeling is genuine expression. … Blue Note records are concerned with identifying its impulse, not its sensational and commercial adornments.3

One way Blue Note stood out from labels like Columbia was their dedication to their artists. Many of the working musicians of the ’50s lived like vampires, waking up after dusk and playing gigs into the early hours of the morning, then rehearsing until dawn. Blue Note would record their artists in the pre-dawn hours, giving musicians time to rest up before their next night’s gigs started. Art Blakey, Thelonious Monk, Charlie Parker, Dizzy Gillespie, and John Coltrane are household names now; but then, because of their drinking, drug use, and frenetic schedules, labels wouldn’t work with them. Blue Note embraced them, feeding their fires of creative innovation and creating an updraft for the insurgency of jazz to come.

Album art was one more revolutionary way for Blue Note to explore “genuine expression.” Just as they fostered talented musicians, they’d give young designers a chance to shine. Alfred Lion’s childhood friend Francis Wolff had joined the label as a producer and photographer; he’d shoot candid portraits of the musicians as they worked. Then, designers like Paul Bacon, Gil Mellé (himself a musician), and John Hermansader would pair Wolff’s black-and-white photos with a single, bright color, then juxtapose them with stark, sans-serif type.

Genius of Modern Music Vol. 1 via Deep Groove Mono
Gil Mellé's cover featuring Francis Wolff's photography for his band's New Faces — New Sounds via Deep Groove Mono
John Hermansader's cover featuring Francis Wolff's photography for George Wallington's Showcase via Deep Groove Mono

As the 1960s approached, the musicians Blue Note worked so hard to cultivate were forging new styles, leaving behind the swing-era pretense of jazz as dance music. Charlie Parker and Bud Powell kept speeding up the tempo and stuffing more chords into progressions. Max Roach started playing the drums like a boxer, bobbing and weaving around the beat with skittering cymbals, waiting for the right moment to land a single monumental “thud” of a kick drum. Without the drums keeping a steady rhythm, bass players like Milt Hinton and Gene Ramey had to furiously mark out time with eighth notes, traversing chords by plucking up and down the scale. This was bebop, and it was musicians’ music. Blue Note’s ethos of artistic integrity was the perfect Petri dish for virtuosic musicians to develop innovative sounds — they worked in small ensembles, often just five players, constantly scrambling and re-arranging instrumentation, playing harder and faster and louder.

Then, around 1955, just as Blue Note was hitting its stride, Wolff met a 28-year-old designer named Reid Miles. Miles had recently moved to New York and had been working for John Hermansader at Esquire magazine. He was a big fan of classical music but wasn’t so interested in jazz. Wolff convinced Miles to start designing covers for Blue Note all the same and kicked off one of the most influential partnerships in modern design. The first cover Miles created was for vibraphone player Milt Jackson; it picked up from the established art style, with Wolff’s photos and a single bright hue.
But the type was even more exaggerated, and the photo took up more than half the cover. White dots overlaid on Jackson’s mallets were the perfect abstraction of the staccato tones of the vibraphone. It’s a great cover, but it was just a hint of what was to come.

via Ariel Salminen

A common theme of Miles’ covers was the emphasis on Wolff’s photography. We’re familiar with these iconic images today, but at the time they were revolutionary; before, black musicians like Louis Armstrong and Ella Fitzgerald were portrayed in tuxedos and evening gowns, posed smiling genially or laughing, rendered so as to not offend the largely white listening audience. Wolff’s portraits were candid, realistic, showing black musicians at work. For example, the cover for Art Blakey’s The Freedom Rider shows Blakey lost in a moment, almost entirely obscured by a cymbal. The drummer is smoking a cigarette, but it’s barely hanging onto the corner of his lip — his mouth is half-open, his brows clenched in a moment of agony or ecstasy. Miles would let the photo fill up the entire cover, cramming the name of the record into whatever empty space was available.

The Freedom Rider via London Jazz Collector

Miles sometimes reversed this relationship, pioneering the use of typography to convey the spirit of the music. His cover for Jackie McLean’s It’s Time! is composed of an edge-to-edge grid of 243 exclamation marks; a postage-stamp picture of McLean graces the upper corner, almost a punchline. Lee Morgan’s The Rumproller is another type-only cover, this time with the title smeared out from corner to corner, like it was left on a hot dashboard for the day. Larry Young’s Unity has no photo at all; the four members of the quartet become orange dots resting in (or bubbling out of) the bowl of the U.

It's Time via Ariel Salminen
Reid Miles' cover for Lee Morgan's The Rumproller via Fonts in Use
Reid Miles' cover for Larry Young's Unity

Miles fulfilled the Blue Note manifesto. His album covers pushed the envelope of graphic design just as the artists on the records inside continued to break new ground in jazz. With the partnership of Miles and Wolff, alongside Alfred Lion’s commitment to artistic integrity, Blue Note became the standard-bearer for jazz. Columbia Records couldn’t help but notice. Even though Blue Note wasn’t nearly as commercially successful as Columbia, their willingness to take risks had established them as a much more sophisticated, innovative, and creative label; to compete for the best talent, Columbia would need to find a way to win the attention of both artists and listeners.

The master: S. Neil Fujita

Sadamitsu Fujita was born in 1921 in Waimea, Hawaii. He was assigned the name Neil in boarding school — leading up to World War II, anti-Japanese sentiment was rampant, especially in Hawaii. Fujita moved to LA to attend art school, but his studies were cut short in 1942 when Franklin Roosevelt signed Executive Order 9066, allowing the imprisonment of Japanese Americans living on the west coast. Fujita was sent to Wyoming, where he enlisted in the 442nd Regimental Combat Team. Before the war was over, he’d see combat in Italy, France, and the Pacific theater. After the war, Fujita finished his studies in LA. He quickly made a name for himself in the advertising world; his résumé landed on the desk of Bill Golden, the art director for CBS, which owned Columbia Records. Alex Steinweiss, the first album artist and Columbia’s ace in the hole, had moved on to RCA. Columbia needed a new direction.
Golden called Fujita and asked him to run the art department. Fujita would be building a whole new team, replacing the relationships that Columbia had built with art studios for hire. But this wasn’t going to be the hardest part of Fujita’s work; when offering him the job, Golden warned him that he’d experience a lot of racist attitudes still simmering in the wake of World War II.4 Still, Fujita agreed to take the job. Fujita’s first covers fit in with the work that Reid Miles was doing at Blue Note: single-color accents set against black-and-white photography.

The Jazz Messengers via Discogs
Fujita's cover for Miles Davis' 'Round About Midnight via Discogs

In 1959, jazz was leaving the stratosphere. Ornette Coleman was performing what he called “free jazz,” frenetic, inscrutable compositions that drew backlash and praise in equal parts. John Coltrane recorded Giant Steps with a level of virtuosity that even his own bandmates struggled to keep up with. Miles Davis recorded Kind of Blue, which would go on to be regarded as one of the best recordings of all time.

Fujita was also breaking ground at Columbia. He was one of the first directors to hire both men and women in a racially integrated office.5 He delegated work, tapping painters, illustrators, and photographers to contribute to covers. Fujita himself had trained as a painter before starting his career in design; he started looking for ways to incorporate his own original paintings into the covers: “We thought about what the picture was saying about the music,” Fujita recalled, “and how we could use that to sell the record. And abstract art was getting popular so we used a lot more abstraction in the designs—with jazz records especially.”

He got the perfect opportunity to make his mark with two albums released in 1959: Charles Mingus’s Mingus Ah Um and Dave Brubeck’s Time Out.

Mingus Ah Um
Fujita's cover for Dave Brubeck's Time Out

Fujita’s abstract paintings reflected the pure exuberance of Mingus’ and Brubeck’s music. In the case of Mingus Ah Um, the divisions and intersections spanning the cover read like a beam of light passing through exotic lenses, magnifiers, refractors, and prisms; through his music, Mingus was reflecting on the transition of jazz from popular entertainment to mind-expanding creative exercise. For Time Out, the wheels and rollers spooling out across the page echo the way that Brubeck’s quartet was experimenting with how time signatures could be interlocked, multiplied, and divided to create completely new textures and musical patterns. Fujita’s covers made it plain: jazz was art.

’59 turned out to be a watershed for both jazz and album art. Brubeck’s Time Out went to #2 on the pop charts in 1961, and was the first jazz LP to sell more than a million copies; “Take Five,” the album’s standout hit, would also become the first jazz single to sell a million copies. For a unique moment in time, the music and art worlds were being propelled forward by a commercially successful record. Fujita’s paintings were making their way into millions of homes, driving sales of records by the vanguards of jazz.

Fujita left Columbia Records shortly after these major successes. “I wanted to be something other than just a record designer,” he said, “so I left to go on my own.” He’d go on to design the book covers for Truman Capote’s In Cold Blood and Mario Puzo’s The Godfather — when the latter was turned into Francis Coppola’s breakthrough film, Fujita’s design was used for its title and promotional art.
But he’d continue to design album covers, creating paintings for each one.

Far Out, Near In
Fujita's cover for Donald Byrd and Gigi Gryce's Modern Jazz Perspective
Fujita's cover for Columbia's recording of Glenn Gould performing Berg, Křenek, and Schoenberg

The next generation

As jazz continued to evolve throughout the ’60s and ’70s, melding with rock ’n’ roll to produce punk, electronic, R&B, and rap, album art evolved alongside. Packaging became more sophisticated: multi-disc albums came in folding cases called gatefolds, accompanied by booklets of photography and art. New printing techniques allowed for brighter colors, shiny foil stamps, and textured finishes. Budgets for production grew larger and larger.

The Beatles’ Sgt. Pepper’s Lonely Hearts Club Band featured an elaborate photo of the band members, 57 life-sized photograph cutouts, and nine wax sculptures. For the first time for a rock LP, the lyrics to the songs were printed on the back of the cover. In another first, the paper sleeve inside was not white but a colorful abstract pattern instead. Also inside was a sheet of cardboard cutouts, including a postcard portrait of Sgt. Pepper, a fake mustache, sergeant stripes, lapel badges, and a stand-up cutout of the Beatles themselves. The zany campiness of Sgt. Pepper’s could only be matched by an absurd gift box full of toys and games. The stark loneliness of the Beatles’ next album would be paired with a plain white cover, without even ink to fill in the impression of the words “The Beatles” on the front.

Sgt. Pepper's Lonely Hearts Club Band, designed by Jann Haworth and Peter Blake and photographed by Michael Cooper
The cover of The Beatles, designed by Richard Hamilton via Reddit

The most famous artists and designers of each generation would try their hand at album art. Salvador Dalí, Andy Warhol, Saul Bass, Keith Haring, Annie Leibovitz, Jeff Koons, Shepard Fairey, and Banksy would all create work for albums. Some of those pieces would become the most recognizable ones in an artist’s catalog.

Greatest Hits by The Modern Jazz Quartet
Andy Warhol's cover for The Velvet Underground & Nico via Leo Reynolds
Saul Bass's cover for Frank Sinatra Conducts Tone Poems of Color via MoMA
Keith Haring's cover for David Bowie's Without You
Annie Leibovitz and Andrea Klein's cover for Bruce Springsteen's Born in the U.S.A.
Jeff Koons' cover for Lady Gaga's Artpop
Shepard Fairey's cover for The Smashing Pumpkins' Zeitgeist
Banksy's cover for Blur's Think Tank

None of this would have been possible without the contributions of Alex Steinweiss, Jim Flora, Paul Bacon, Gil Mellé, John Hermansader, Reid Miles, S. Neil Fujita, and others. If not for the arms race between Columbia Records and Blue Note for the best art and the best artists of the ’50s, many artists would never have found their careers. And in some cases, an album like The Rolling Stones’ Sticky Fingers would be remembered more for its art than for its music. When music was first pressed into discs, design was less than an afterthought. Today, album art is an extension of music itself.

Footnotes & References

1. https://www.nytimes.com/2011/07/20/business/media/alex-steinweiss-originator-of-artistic-album-covers-dies-at-94.html ↩︎
2. https://web.archive.org/web/20120412033422/http://www.adcglobal.org/archive/hof/1998/?id=318 ↩︎
3. https://web.archive.org/web/20080503055603/https://www.bluenote.com/History.aspx ↩︎
4. https://www.hellerbooks.com/pdfs/voice_s_neil_fujita.pdf ↩︎
5. https://www.nationalww2museum.org/war/articles/s-neil-fujita ↩︎

UI Density

Interfaces are becoming less dense. I’m usually one to be skeptical of nostalgia and “we liked it that way” bias, but comparing websites and applications of 2024 to their 2000s-era counterparts, the spreading out of software is hard to ignore.

To explain this trend, and suggest how we might regain density, I started by asking what, exactly, UI density is. It’s not just the way an interface looks at one moment in time; it’s about the amount of information an interface can provide over a series of moments. It’s about how those moments are connected through design decisions, and how those decisions are connected to the value the software provides. I’d like to share what I found. Hopefully this exploration helps you define UI density in concrete and usable terms. If you’re a designer, I’d like you to question the density of the interfaces you’re creating; if you’re not a designer, use the lens of UI density to understand the software you use.

Visual density

We think about density first with our eyes. At first glance, density is just how many things we see in a given space. This is visual density. A visually dense software interface puts a lot of stuff on the screen. A visually sparse interface puts less stuff on the screen.

Bloomberg’s Terminal is perhaps the most common example of this kind of density. On just a single screen, you’ll see scrolling sparklines of the major market indices, detailed trading volume breakdowns, tables with dozens of rows and columns, and scrolling headlines containing the latest news from agencies around the world, along with UI signposts for all the above, keyboard shortcuts, and quick actions to take.

A screenshot of Terminal’s interface. Via Objective Trade on YouTube

Craigslist is another visually dense example, with its hundreds of plain links to categories and spartan search-and-filter interface. McMaster-Carr’s website shares similar design cues, listing out details for many product variations in a very small space.

Screenshots of Craigslist's homepage and McMaster-Carr's product page circa 2024.

You can form an opinion about the density of these websites simply by looking at an image for a fraction of a second. This opinion comes from our subconscious, so it’s fast and intuitive. But like other snap judgements, it’s biased and unreliable. For example, which of these images is more dense? Both images have the same number of dots (500). Both take up the same amount of space. But at first glance, most people say image B looks more dense.1

What about these two images? Again, both images have the same number of dots, and are the same size. But organizing the dots into groups changes our perception of density.

Visual density — our first, instinctual judgement of density — is unpredictable. It’s impossible to be fully objective in matters of design. But if we want to have conversations about density, we should aim for the most consistent, meaningful, and useful definition possible.

Information density

In The Visual Display of Quantitative Information, Edward Tufte approaches the design of charts and graphs from the ground up:

Every bit of ink on a graphic requires reason. And nearly always that reason should be that the ink presents new information.

Tufte introduces the idea of “data-ink,” defined as the useful parts of a given visualization. Tufte argues that visual elements that don’t strictly communicate data — whether it’s a scale value, a label, or the data itself — should be eliminated.
Data-ink isn’t just the space a chart takes up. Some charts use very little extraneous ink, but still take up a lot of physical space. Tufte is talking about information density, not visual density. Information density is a measurable quantity: to calculate it, you simply divide the amount of “data-ink” in a chart by the total amount of ink it takes to print it. Of course, what is and is not data-ink is somewhat subjective, but that’s not the point. The point is to get the ratio as close to 1 as possible. You can increase the ratio in two ways:

Add data-ink: provide additional (useful) data
Remove non-data-ink: erase the parts of the graphic that don’t communicate data
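Tufte’s definition reduces to a simple ratio:

$$\text{data-ink ratio} = \frac{\text{data-ink}}{\text{total ink used to print the graphic}}$$

As a worked illustration (the numbers here are mine): if 40% of a chart’s ink goes to gridlines, borders, and decoration, the ratio is 0.6; erase half of that decoration and it rises to 0.6 / 0.8 = 0.75.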
Tufte's examples of graphics with a low data-ink ratio (first) and a high one (second). Reproduced from Edward Tufte's The Visual Display of Quantitative Information

There’s an upper limit to information density, which means you can subtract too much ink, or add too much information. The audience matters, too: a bond trader at their 4-monitor desk will have a pretty high threshold; a 2nd grader reading a textbook will have a low one.

Information density is related to visual density. Usually, the higher the information density is, the more dense a visualization will look. For example, take the train schedule published by E.J. Marey in 1885.2 It shows the arrival and departure times of dozens of trains across 13 stops from Paris to Lyon. The horizontal axis is time, and the vertical axis is space. The distance between stops on the chart reflects how far apart they are in the real world. The data-ink ratio is close to 1, allowing a huge amount of information — more than 260 arrival and departure times — to be packed into a relatively small space.

The train schedule visualization published by E.J. Marey in 1885. Reproduced from Edward Tufte's The Visual Display of Quantitative Information

Tufte makes this idea explicit:

Maximize data density and the [amount of data], within reason (but at the same time exploiting the maximum resolution of the available data-display technology).

He puts it more succinctly as the “Shrink Principle”: graphics can be shrunk way down.

Information density is clearly useful for charts and graphs. But can we apply it to interfaces? The first half of the equation — information — applies to screens. We should maximize the amount of information that each part of our interface shows. But the second half of the equation — ink — is a bit harder to translate. It’s tempting to think that pixels and ink are equivalent. But any interface with more than a few elements needs separators, structural elements, and signposts to help a user understand the relationship each piece has to the others. It’s also tempting to follow Tufte’s Shrink Principle and try to eliminate all the whitespace in UI. But some whitespace has meaning almost as salient as the darker pixels of graphic elements. And we haven’t even touched on shadows, gradients, or color highlights; what role do they play in the data-ink equation? So, while information density is a helpful stepping stone, it’s clear that it’s only part of the bigger picture. How can we incorporate all of the design decisions in an interface into a more objective, quantitative understanding of density?

Design density

You might have already seen the first challenge in defining density in terms of design decisions: what counts as a design decision? In UI, UX, and product design, we make many decisions, consciously and subconsciously, in order to communicate information and ideas. But why do those particular choices convey the meaning that they do? Which ones are superlative or simply aesthetic, and which are actually doing the heavy lifting?

These questions sparked 20th-century German psychologists to explore how humans understand and interpret shapes and patterns. They called this field “gestalt,” which in German means “form.” In the course of their exploration, Gestalt psychologists described principles that explain how some things appear orderly, symmetrical, or simple, while others do not. While these psychologists weren’t designers, in some sense they discovered the fundamental laws of design:

Proximity: we perceive things that are close together as comprising a single group
Similarity: objects that are similar in shape, size, color, or in other ways appear related to one another
Closure: our minds fill in gaps in designs so that we tend to see whole shapes, even if there are none
Symmetry: if we see shapes that are symmetrical to each other, we perceive them as a group formed around a center point
Common fate: when objects move, we mentally group the ones that move in the same way
Continuity: we can perceive objects as separate even when they overlap
Past experience: we recognize familiar shapes and patterns even in unfamiliar contexts. Our expectations are based on what we’ve learned from our past experience of those shapes and patterns
Figure-ground relationship: we interpret what we see in a three-dimensional way, allowing even flat 2D images to have foreground and background elements

Examples of the principles of proximity (left), similarity (center), and closure (right).

Gestalt principles explain why UI design goes beyond the pixels on the screen. For example:

Because of the principle of similarity, users will understand that text with the same size, font, and color serves the same purpose in the interface.
The principle of proximity explains why, when a chart is close to a headline, it’s apparent that the headline refers to the chart. For the same reasons, a tightly packed grid of elements will look related, and separate from a menu above it separated by ample space.
Thanks to our past experience with switches, combined with the figure-ground principle, a skeuomorphic design for a toggle switch will make it obvious to a user how to instantly turn on a feature.

So, instead of focusing on the pixels, we think of design decisions as how we intentionally use gestalt principles to communicate meaning. And like Tufte’s data-ink ratio compares the strictly necessary ink to the total ink used to print a chart, we can calculate a gestalt ratio which compares the strictly necessary design decisions to the total decisions used in a design. This is design density.
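Written the same way as Tufte’s ratio:

$$\text{gestalt ratio} = \frac{\text{strictly necessary design decisions}}{\text{total design decisions used in the design}}$$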
Four different treatments of the same information, using different types and amounts of gestalt principles. Which is the most dense?

This is still subjective: a design decision that seems necessary to some might be superfluous to others. Our biases will skew our assessment, whether they’re personal tastes or cultural norms. But when it comes to user interfaces, counting design decisions is much more useful than counting the amount of data or “ink” alone.

Design density isn’t perfect. User interfaces exist to do work, to have fun, to waste time, to create understanding, to facilitate personal connections, and more. Those things require the user to take one or more actions, and so density needs to look beyond components, layouts, and screens. Density should comprise all the actions a user takes in their journey — it should count in space and time.

Density in time

Just like the amount of stuff in a given space dictates visual density, the amount of things a user can do in a given amount of time dictates temporal — time-wise — density.

Loading times are the biggest factor in temporal density. The faster the interface responds to actions and loads new pages or screens, the more dense the UI is. And unlike 2-dimensional whitespace, there’s almost no lower limit to the space needed between moments in time.

Bloomberg’s Terminal loads screens full of data instantaneously

With today’s bloated software, making a UI more dense in time is more impactful than just squeezing more stuff onto each screen. That’s why Bloomberg’s Terminal is still such a dominant tool in the financial analysis space; it loads data almost instantaneously. A skilled Terminal user can navigate between dozens of charts and graphs in milliseconds. There are plenty of ways to cram tons of financial data into a table, but loading it with no latency is Terminal’s real superpower.

But say you’ve squeezed every second out of the loading times of your app. What next? There are some things that just can’t be sped up: you can’t change a user’s internet connection speed, or the computing speed of their CPU. Some operations, like uploading a file, waiting for a customer support response, or processing a payment, involve complex systems with unpredictable variables. In these cases, instead of changing the amount of time between tasks, you can change the perception of that time:

Actions less than 100 milliseconds apart will feel simultaneous. If you tap on an icon and, 100ms later, a menu appears, it feels like no time at all passed between the two actions. But if there’s an animation between the two actions — the menu slides in, for example — the illusion of simultaneity might be broken. For the smallest temporal spaces, animations and transitions can make the app feel slower.3

Between 100 milliseconds and 1 second, the connection between two actions is broken. If you tap on a link and there’s no change for a second, doubt creeps in: did you actually tap on anything? Is the app broken? Is your internet working? Animations and transitions can bridge this perceptual gap. Visual cues in these spaces make the UI feel more dense in time.

Gaps between 1 and 10 seconds can’t be bridged with animations alone; research4 shows that users are most likely to abandon a page within the first 10 seconds. This means that if two actions are far enough apart, a user will leave the page instead of waiting for the second action. If you can’t decrease the time between these actions, show an indeterminate loading indicator — a small animation that tells the user that the system is operating normally.

Gaps between 10 seconds and 1 minute are even harder to fill. After seeing an indeterminate loader for more than 10 seconds, a user is likely to see it as static, not dynamic, and start to assume that the page isn’t working as expected. Instead, you can use a determinate loading indicator — like a larger progress bar — that clearly indicates how much time is left until the next action happens. In fact, the right design can make the waiting time seem shorter than it actually is; the backwards-moving stripes that featured prominently in Apple’s “Aqua” design system made waiting times seem 11% shorter.5

For gaps longer than 1 minute, it’s best to let the user leave the page (or otherwise do something else), then notify them when the next action has occurred. Blocking someone from doing anything useful for longer than a minute creates frustration. Plus, long, complex processes are also susceptible to error, which can compound the frustration.
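These thresholds suggest a simple decision rule. Here is a minimal sketch in TypeScript; the cutoffs come from the guidelines above, but the function and strategy names are hypothetical, for illustration only:

```ts
// Pick a feedback strategy from an estimated wait, per the thresholds above.
// Strategy names are illustrative, not from any particular library.
type FeedbackStrategy =
  | "none"                   // < 100ms: feels simultaneous; skip the animation
  | "transition"             // 100ms to 1s: bridge the gap with a visual cue
  | "indeterminate-loader"   // 1s to 10s: show that the system is working
  | "determinate-progress"   // 10s to 60s: show how much time is left
  | "background-and-notify"; // > 60s: let the user leave, notify on completion

function feedbackFor(estimatedMs: number): FeedbackStrategy {
  if (estimatedMs < 100) return "none";
  if (estimatedMs < 1_000) return "transition";
  if (estimatedMs < 10_000) return "indeterminate-loader";
  if (estimatedMs < 60_000) return "determinate-progress";
  return "background-and-notify";
}

// e.g. a payment that typically takes ~15 seconds gets a progress bar:
console.log(feedbackFor(15_000)); // "determinate-progress"
```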
In the end, though, making a UI dense in time and space is just a means to an end. No UI is valuable because of the way it looks. Interfaces are valuable in the outcomes they enable — whether directly associated with some dollar value, in the case of business software, or tied to some intangible value like entertainment or education. So what is density really about, then? It’s about providing the highest-value outcomes in the smallest amount of time, space, pixels, and ink.

Density in value

Here’s an example of how value density is manifested: a common suggestion for any form-based interface is to break long forms into smaller chunks, then put those chunks together in a wizard-type interface that saves your progress as you go. That’s because there’s no value in a partly filled-in form; putting all the questions on a single page might look more visually dense, but if it takes longer to fill out, many users won’t submit it at all.

This form is broken up into multiple parts, with clear errors and instructions for resolution.

Making it possible for users to get to the end of a form with fewer errors might require the design to take up more space. It might require more steps, and take more time. But if the tradeoffs in visual and temporal density make the outcome more valuable — either by increasing the submission rate or making the effort more worth the user’s time — then we’ve increased the overall value density. Likewise, if we can increase the visual and temporal density by making the form more compact, load faster, and less error-prone, without subtracting value from the user or the business, then that’s an overall increase in density. Channeling Tufte, we should try to increase value density as much as possible.

Solving this optimization problem can have some counterintuitive results. When the internet was young, companies like Craigslist created value density by aggregating and curating information and displaying it in pages of links. Companies like Yahoo and Altavista made it possible to search for that information, but still put aggregation at the fore. Google took a radically different approach: use information gleaned from the internet’s long chains of linked lists to power a search box. Information was aggregating itself; a single text input was all users needed to access the entire web.

Google and Yahoo's approach to data, design, and value density hasn't changed from 2001 (when the first screenshots were archived) to 2024 (when the second set of screenshots were taken). The value of the two companies' stocks reflects the result of these differing approaches.

The UI was much less visually dense, but more value-dense by orders of magnitude. The results speak for themselves: Google went from a $23B valuation in 2004 to being worth over $2T today — closing in on a 100x increase. Yahoo went from being worth $125B in 2000 to being sold for $4.8B — less than 3% of its peak value.6

Conclusion

Designing for UI density goes beyond the visual aspects of an interface. It includes all the implicit and explicit design decisions we make, and all the information we choose to show on the screen. It includes all the time and actions a user takes to get something valuable out of the software. So, finally, a concrete definition of UI density:

UI density is the value a user gets from the interface divided by the time and space the interface occupies.
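In notation, that definition reads:

$$\text{UI density} = \frac{\text{value}}{\text{time} \times \text{space}}$$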
Speed, usability, consistency, predictability, information richness, and functionality all play an important role in this equation. By taking account of all these aspects, we can understand why some interfaces succeed and others fail. And by designing for density, we can help people get more value out of the software we build.

Footnotes & References

1. This is a very unscientific statement based on a poll of 20 of my coworkers. Repeatability is questionable. ↩︎
2. The provenance of the chart is interesting. Not much is known about the original designer, Charles Ibry; but what we do know points to even earlier iterations of the design. If you’re interested, read Sandra Rendgen’s fascinating history of the train schedule. ↩︎
3. I have no scientific backing for this claim, but I believe it’s because a typical blink occurs in 100ms. When we blink, our brains fill in the gap with the last thing we saw, so we don’t notice the blink. That’s why we don’t notice the gap between two actions that are less than 100ms apart. You can read more about this effect here: Visual Perception: Saccadic Omission — Suppression or Temporal Masking? ↩︎
4. Nielsen, Jakob. “How Long Do Users Stay on Web Pages?” Nielsen Norman Group, 11 Sept. 2011, https://www.nngroup.com/articles/how-long-do-users-stay-on-web-pages/ ↩︎
5. Harrison, Chris, Zhiquan Yeo, and Scott E. Hudson. “Faster Progress Bars: Manipulating Perceived Duration with Visual Augmentations.” Carnegie Mellon University, 2010, https://www.chrisharrison.net/projects/progressbars2/ProgressBarsHarrison.pdf ↩︎
6. HackerNews has pointed out that this is a ridiculous statement. And it is. Of course, value density isn’t the only reason why Google succeeded where Yahoo failed. But as a reflection of how each company thought about their products, it was a good leading indicator. ↩︎

The polish paradox

Polish is a word that gets thrown out in conversations about craft, quality, and beauty. We talk about it at the end of the design process, before the work goes out the door: let’s polish this up. Let’s do a polish sprint. Could this use more polish?

https://twitter.com/svlleyy/status/1780215102064452068

A tweet (xeet?) on my timeline asked: “what does polish in an app mean? fancy animations? clear consistent design patterns? hierarchy and colour? all the above?” I thought about it for a moment and got a familiar itch in the back of my brain. It’s a feeling that I associate with a zen kōan that goes (paraphrased): “A monk asked a zen master, ‘Does a dog have Buddha-nature?’ The master answered ‘無’.” 無 (pronounced ‘wú’ in Mandarin or ‘mu’ in Japanese) literally translates to ‘not,’ as in ‘I have not done my chores today.’ It’s a negation of something, and in the kōan’s case, it’s the master’s way of saying — paradoxically — that there’s no point in answering the question.

In the case of the tweet, my 無-sense was tingling as I wrote a response: polish is something only the person who creates it will notice. It’s a paradox; polishing something makes it invisible. Which also means that pointing out examples of polish almost defeats the purpose. But in the spirit of learning, here are a few things that come to mind when I think of polish:

Note the direction the screws are facing. Photo by Chris Campbell, CC BY-NC 2.0 DEED

Next time you flip a wall switch or plug something into an outlet, take a second and look at the two screws holding the face plate down. Which direction are the slots in the screw facing? Professional electricians will (almost) always line the screw slots up vertically. This has no functional purpose, and isn’t determined by the hardware itself; the person who put the plate on had to make a conscious decision to do it.

Julian Baumgartner’s art restoration videos always include a note about his process for repairing or rebuilding the frame that the canvas is stretched over. When he puts the keys back into the frame to create extra tension, he attaches some fishing wire, wound around a tack, and threaded through each key; this, he says, “ensures the keys will never be lost.” How many of these details lie hidden in the backs of the paintings hung on the walls of the world’s most famous museums and galleries?

A traditional go board, with a 15:14 aspect ratio.

A traditional go board isn’t square. It’s very slightly longer than it is wide, with a 15:14 aspect ratio. This accounts for the optical foreshortening that happens when looking across the board. For similar reasons, traditionally, black go stones are slightly larger than white ones, as equal-sized stones would look unequal when seen next to each other on the board. The same subtle adjustments go into the shape of letters in a typeface: round letters like ‘e’ and ‘a’ are slightly taller than square letters like ‘x’ or ‘v’. The crossbars of the x don’t usually line up perfectly, either.

The success of these demonstrations of polish is dictated by just how hard they are to see. So how should polish manifest in product design?

One example is in UI animation. It is tempting to put transitions and animations on every component in the interface; when done right, an animated UI feels responsive and pleasant to use. But the polish required to reach that point of being “intuitive” or “natural” is immense:

Animations should happen fast enough to be perceived as instantaneous. The threshold for this is commonly cited at 100ms; anything happening faster than this is indistinguishable from something happening right away.
The speed of the animation has to be tuned to accelerate or decelerate at precise rates depending on how far the element is moving and what kind of transition is taking place. Changing a popover from the default linear animation to an ease-out curve will make it seem more natural.
Often an animation should be faster or slower depending on whether it’s an “in” or “out” animation; a faster animation at the start of an interaction makes the interface feel snappy and responsive. A slower animation at the end of an interaction helps a user stay oriented to the result of their actions.
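Here is a minimal sketch of that in/out asymmetry using the Web Animations API; the durations and easing curves are plausible starting points rather than prescriptions:

```ts
// Fast ease-out on the way in (snappy), slower ease-in on the way out (orienting).
const ENTER_MS = 120;
const EXIT_MS = 250;

function showPopover(el: HTMLElement): Animation {
  el.style.display = "block";
  return el.animate(
    [
      { opacity: 0, transform: "translateY(4px)" },
      { opacity: 1, transform: "translateY(0)" },
    ],
    { duration: ENTER_MS, easing: "ease-out", fill: "forwards" }
  );
}

function hidePopover(el: HTMLElement): Animation {
  const anim = el.animate([{ opacity: 1 }, { opacity: 0 }], {
    duration: EXIT_MS,
    easing: "ease-in",
    fill: "forwards",
  });
  // Remove the element from the layout only once the exit animation finishes.
  anim.onfinish = () => {
    el.style.display = "none";
  };
  return anim;
}
```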
Another example is in anticipating the user’s intent. A reactive UI should be constantly responding to a user’s input, with no lag between clicks and hovers and visual, audible, or tactile feedback. But with some interaction patterns, responding too quickly can make the interface feel twitchy or delicate. For this reason, nested dropdown menus often have invisible bridges connecting your cursor and the menu associated with what you’ve selected. This allows you to smoothly move to the next item, without the sub-menu disappearing. These bridges are invisible, but drawing them accurately requires pixel precision nonetheless.

An example of Amazon’s mega dropdown menu, with invisible bridges connecting the top-level menu to the sub-menu. Image credit: Ben Kamens

You benefit from this kind of anticipatory design every day. While designing the original iPhone’s keyboard, Ken Kocienda explored new form factors that took advantage of the unique properties of the phone’s touch screen. But breaking away from the familiarity of a QWERTY keyboard proved challenging; users had a hard time learning new formats. Instead, Kocienda had the keyboard’s touch targets invisibly adjust based on what is being typed, preventing users from making errors in the first place. The exact coordinates of the tap on the screen are adjusted, too, based on the fact that we can’t see what’s underneath our fingers when we’re using it.

Early prototypes of the iPhone keyboard sacrificed familiarity in order to make the touchscreen interaction more finger-friendly. Images from Ken Kocienda's Creative Selection, via Commoncog Case Library

The iPhone’s keyboard was one of the most crucial components to the success of such a risky innovation. Polish wasn’t a nice-to-have; it was the linchpin. The final design of the keyboard used a familiar QWERTY layout and hid all the complexity of the touch targets and error correction behind the scenes.

Image from Apple’s getting started series on the original iPhone. Retrieved from the Internet Archive

The polish paradox is that the highest degrees of craft and quality are in the spaces we can’t see, the places we don’t necessarily look. Polish can’t be an afterthought. It must be an integral part of the process, a commitment to excellence from the beginning. The unseen effort to perfect every hidden aspect elevates products from good to great.

How to generate color palettes for design systems

It used to be easy to pick colors for design systems. Years ago, you could pick a handful of colors to match your brand’s ethos, or start with an off-the-shelf palette (remember flatuicolors.com?). Each hue and shade served a purpose, and usually had a quirky name like “idea yellow” or “innovation blue”. This hands-on approach allowed for control and creativity, resulting in color schemes that could convey any mood or style.

But as design systems have grown to keep up with ever-expanding software needs, the demands on color palettes have grown exponentially too. Modern software needs accessibility, adaptability, and consistency across dozens of devices, themes, and contexts. Picking colors by hand is practically impossible.

This is a familiar problem to the Stripe design team. In “Designing accessible color systems,” Daryl Koopersmith and Wilson Miner presented Stripe’s approach: using perceptually uniform color spaces to create aesthetically pleasing and accessible systems. Their method offered a new approach to color selection that enhanced both beauty and usability, grounded in a scientific understanding of human vision.

In the four years since that post, Stripe has stretched those colors to the limit. The design system’s resilience through massive growth is a testament to the team’s original approach, but last year we started to see the need for a more flexible, scalable, and inclusive color system. This meant both an expansion of our color palette and a rethinking of how we generate and apply these colors to accommodate our still-growing products.

This essay will take you through my attempts to solve these problems. Through this process, I’ve created a tool for generating expressive, functional, and accessible color systems for any design system. I’ll share the full code of my solution at the end of the essay; it represents not just a technical solution but a philosophical shift in how we think about color in design systems, emphasizing the balance between creativity and inclusivity.

Why don’t the existing tools work?
So what makes a good color palette?
Through the looking glass: perceptual uniformity
Picking the right color space
Using OKHsl
First steps with generated scales
Scaling up
Making scales expressive: Leveraging hue and saturation
Hue
Saturation and Chroma
In practice: Crafting colors with functions
Pick base hues
Add functions for hue, saturation, and lightness
Calculate the colors for each scale number
Making scales adaptive: Using background color as an input
Making scales accessible: Building in the WCAG contrast calculation
Step 1: Calculate a target contrast ratio based on scale step
Step 2: Calculate lightness based on a target contrast ratio
Step 3: Translate from XYZ Y to OKHsl L
Putting it all together: All the code you need
What does it look like in practice?
What we’ve learned and where we’re going

Why don’t the existing tools work?

In the past few years, I’ve come across dozens of tools that promise to generate color palettes for design systems. Some are simple, like Adobe Color, which generates color palettes based on a single input color, or even an image. Others are more complex, like Colorbox, which generates color scales based on a long list of parameters, easing curves, and input hues. But I’ve found that each of these tools has critical limitations. Complex tools like Colorbox or color x color allow for a high degree of customization, but they require a lot of manual input and don’t provide guidelines for accessibility.
Simple tools like Adobe’s Color and Leonardo provide more constraints and accessibility features, but they do so at the expense of flexibility and extensibility. None of the tools I’ve found can integrate tightly with an existing design system; all are simply apps that generate an initial set of colors. None can respond to the unique constraints of your design system, or adapt as you add more themes, modes, or components. That’s why I ended up going back to first principles, and decided to build up a framework that can be adapted to any codebase, design tool, or end user interface.

So what makes a good color palette?

To build palettes from first principles, we need a strong conceptual foundation. A great color palette is like a Swiss Army knife, built to address a wide array of needs. But that same flexibility can make the system unwieldy and clunky. Through years of working on design systems, two principles have emerged as a constant benchmark for quality color palettes: utility and consistency.

A color palette with high utility is vital for a robust design system, encompassing both adaptability and functionality. It should offer a wide array of shades and hues to cater to diverse use cases, such as status changes — reds for errors, greens for successes, and yellows for warnings — and interaction states like hovering, disabled, or active selections. It’s also essential for signifying actionable items like links and buttons. Beyond functionality, an adaptable palette enables smooth transitions between light, dark, and high contrast modes, supporting the evolution of your product and differing brand expressions. This ensures that your user interfaces remain consistent and recognizable across various platforms and usage contexts. Moreover, an adaptable palette underscores a commitment to accessibility — it should provide accessible contrast ratios across all components, accommodating users with visual impairments, and offer high-contrast modes that enhance visibility and functionality without sacrificing style.

Consistency is another crucial aspect of a well-designed color palette. Despite the diverse range of components and their variants, a consistent palette maintains a coherent visual language throughout the system. This coherence ensures that elements like badges retain a consistent visual weight, avoiding misleading emphasis, and that the relative contrast of components remains balanced between dark and light modes. This consistency helps preserve clarity and hierarchy, further enhancing the user experience and the overall aesthetics of the design system. As you’ll see, even simple questions about these goals reveal a deep rabbit hole of possible solutions.

Through the looking glass: perceptual uniformity

The principles of utility and consistency make selecting a color palette more complex. There’s a question at the heart of both constraints: what makes two colors look different? We have an intuitive sense that yellow and orange are more similar than green and blue, but can we prove it objectively? Scientists and artists have spent the last century puzzling this out, and their answer is the concept of perceptual uniformity.

Perceptual uniformity is rooted in how our eyes work. Humans see colors because of the interaction between wavelengths of light and cells in our eyes. In 1850, before we could look at cells under a microscope, scientist Hermann von Helmholtz theorized that there were three color vision cells (now known as cones) for blue, green, and red light.
Thomas Young and Hermann von Helmholtz assumed that the eye’s retina consists of three different kinds of light receptors for red, green and blue. Public Domain via Wikipedia

Most modern screens depend on this century-old theory, mixing red, green, and blue light to produce colors. Every combination of these colors produces a distinct one: 10% red, 50% green, and 25% blue light create the cartoon green of the Simpsons’ yard; 75% red, 85% green, and 95% blue is the blindingly pale blue of the snow in Game of Thrones.

Von Helmholtz was amazingly close to the truth, but until 1983, we didn’t have a full understanding of the exact way that each cell in our eyes responds to light. While it’s true that we have three kinds of color vision cells, and that each responds strongly to either red, green, or blue light, the full mechanism of color vision is much more nuanced. So, while it’s technologically simple to mix red, green, and blue lights to reproduce color, the red, green, and blue coordinate system — the RGB color space — isn’t perceptually uniform.

Picking the right color space

Despite not being perceptually uniform, many design systems still use the RGB color space (and its derivative, HSL space) for picking colors. But over the past century, scientists and artists have invented more useful ways to map the landscape of color. Whether it’s capturing skin tones accurately in photographs or creating smooth gradients for data visualization, these different color spaces give us perceptually uniform paths through a gamut.

Lab is an example of a perceptually uniform color space. Developed by the International Commission on Illumination, or CIE, the Lab color space is designed to be device-independent, encompassing all perceivable colors. Its three dimensions depict lightness (L) and color opponents (a and b) — the latter two varying along green-red and blue-yellow axes respectively. This makes it useful for measuring the differences between colors. However, it’s not very intuitive; for example, unless you’ve spent a lot of time working with the Lab color space, it’s probably hard to imagine what a pair of (a, b) values like (70, -15) represents.1

LCh (Luminosity, Chroma, hue) is a more ergonomic, but still perceptually uniform, color space. It’s a cylindrical color space, which means that along the hue axis, colors change from red to blue to green, and then back to red — like traveling on a roundabout. Along the way, each color appears equally bright and colorful. Moving along the luminosity axis, a color appears brighter or dimmer but equally colorful, like adjusting a flashlight’s distance from a painted wall. Along the chroma axis, a color stays equally bright but looks more or less colorful, like it’s being mixed with different amounts of gray paint.

The LCh color space. Note the uneven peaks of chroma at different hues. via Hueplot

LCh trades off some of Lab’s functionality for being more intuitive. But LCh can be clunky, too, because the C (chroma) axis starts at 0 and doesn’t have a strict upper limit. Chroma is meant to be a relative measure of a color’s “colorfulness”. Some colors are brighter and more colorful than others: is a light aqua blue as colorful as a neon lime green? How does a brick red compare to a grape soda purple? The chroma scale is meant to make these comparisons possible. But try for a moment to imagine a sea green as rich and deep as an ultraviolet blue. Lab and LCh both let you specify these “impossible” colors that don’t have a real-world representation. In technical parlance, they’re called “out of gamut,” since they can’t be produced by screens, or seen by human eyes.
Finding colors with consistent properties is a manual process; when Stripe was building its previous color system using Lab, the team made a specialized tool for visualizing the boundaries of possible colors, allowing designers to tweak each shade to maximize its saturation. This isn’t a tenable solution for most teams; what if there was a color space that combined the simplicity of RGB and HSL with the perceptual uniformity of Lab and LCh?

Björn Ottosson, creator of the OKLab color space, did just that in his blog post “OKHsv and OKHsl — two new color spaces for color picking.” OKHsl is similar to LCh in that it has three components: one for hue, one for colorfulness, and one for lightness. Like LCh, the hue axis is a circle with 360 degrees. The lightness axis is similar to LCh’s luminosity, going from 0 to 1 for every hue. In place of LCh’s chroma channel, though, OKHsl uses an absolute saturation axis that goes from 0 to 1 for every hue, at every lightness. 0 represents the least saturated color (grays ranging from white to black), and 1 represents the most saturated color available in the sRGB gamut.

The OKHsl color space. It’s a cylinder, which makes it much better for generating color palettes. via Hueplot

Practically, OKHsl allows for easier color selection and manipulation. It bypasses the issues found in LCh or Lab, creating an intuitive, straightforward, and user-friendly system that can produce the desired colors without worrying about out-of-gamut colors. That’s why it’s the best space for generating color palettes for design systems.

Using OKHsl

Practically speaking, to use OKHsl, you need to be able to convert colors to and from sRGB. This is a fairly straightforward calculation, but it’s not built into most design tools. Björn Ottosson linked the JavaScript code to do this conversion in his blog post, and the library colorjs.io will soon have support for OKHsl. Going forward, I’ll assume you have a way to convert colors to and from OKHsl. If you don’t, you can use the code I’ve written to generate a color palette in OKHsl, and then convert it to sRGB for use in your design system.

First steps with generated scales

To get started generating our color scales, we need a few values:

- The hue of the color we want to generate
- The saturation of the color we want to generate
- A list of lightness values we want to generate

For example, we can generate a cool neutral color scale by choosing these values:

- Hue: 250
- Saturation: 5
- Lightness values: Light: 85, Medium: 50, Dark: 15

Using those values to pick colors in the OKHsl color space, we get the following palette:

| Neutral OKHsl | sRGB Hex |
| --- | --- |
| 250, 5, 85 | #d2d5d8 |
| 250, 5, 50 | #73787c |
| 250, 5, 15 | #212325 |

We can do the same thing for all our colors, picking numbers to build out the entire system.

| OKHsl | sRGB Hex | OKHsl | sRGB Hex |
| --- | --- | --- | --- |
| 250, 5, 85 | #d2d5d8 | 250, 90, 85 | #b6d9fd |
| 250, 5, 50 | #73787c | 250, 90, 50 | #1a7acb |
| 250, 5, 15 | #252628 | 250, 90, 15 | #022342 |

| OKHsl | sRGB Hex | OKHsl | sRGB Hex | OKHsl | sRGB Hex |
| --- | --- | --- | --- | --- | --- |
| 145, 90, 85 | #6af778 | 20, 90, 85 | #fec3ca | 100, 90, 85 | #eed63d |
| 145, 90, 50 | #388b3f | 20, 90, 50 | #d32d43 | 100, 90, 50 | #877814 |
| 145, 90, 15 | #0c2a0e | 20, 90, 15 | #45060f | 100, 90, 15 | #282302 |
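As a sketch of how a table like this can be produced in code — again assuming culori for the conversion step, which expects saturation and lightness as fractions rather than percentages:

```js
import { formatHex } from 'culori';

// The neutral column: hue 250, saturation 5%, lightness 85/50/15
const neutralScale = [85, 50, 15].map((l) =>
  formatHex({ mode: 'okhsl', h: 250, s: 0.05, l: l / 100 })
);
// neutralScale should closely match the hex values in the neutral table above
```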
Scaling up

For bigger projects, you’ll often need more than just three shades per color. Choosing the right number can be tricky: too few shades limit your options, but too many can cause confusion. This can seem daunting, particularly in the early stages of your design system. But there’s a method to simplify this: use a consistent numbering system, ensuring your color choices remain versatile no matter how your system evolves. This system is often referred to as “magic numbers.”

If you’re familiar with Tailwind CSS or Material Design, you’ve seen this in action. Instead of naming shades like “light” or “dark,” each shade gets a number. For instance, in Tailwind, the scale goes from 0 to 1,000, and in Material Design, it’s 0 to 100. The extremes often correspond to near-white or near-black, with middle numbers denoting pure hues. The beauty of this system is its flexibility. If you initially use shades named “red 500” and “red 700”, and later need something in between, you can simply introduce “red 600”. This keeps your design adaptable and intuitive.

Another bonus of magic numbers is that we can often plug the number directly into a color picker to scale the lightness of the shade. That’s why, for the rest of this essay, I’ll call these scale numbers. For example, if we wanted to create a more extensive color scale for our blues, we could use the following values in the OKHsl color space:

| Blue Scale Number | OKHsl | sRGB Hex |
| --- | --- | --- |
| 0 | 250, 90, 100 | #ffffff |
| 10 | 250, 90, 90 | #cfe5fe |
| 20 | 250, 90, 80 | #9dccfd |
| 30 | 250, 90, 70 | #68b1f9 |
| 40 | 250, 90, 60 | #3395ed |
| 50 | 250, 90, 50 | #1b7acb |
| 60 | 250, 90, 40 | #0f60a3 |
| 70 | 250, 90, 30 | #08477c |
| 80 | 250, 90, 20 | #032f55 |
| 90 | 250, 90, 10 | #01172e |
| 100 | 250, 90, 0 | #000000 |

We’ve turned the scale number into the lightness value with the function $$L(n) = 1-n$$. In this formula, n is a normalized value — one that goes from 0 to 1 — that represents our scale number, and L(n) is the lightness value in OKHsl. It turns out that using functions and formulas in combination with scale numbers is a powerful way to create expressive color scales that can power beautiful design systems.
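Here’s a sketch of that idea in code: normalize the scale number, apply $$L(n) = 1-n$$, and convert out of OKHsl (assuming culori for the conversion, as before):

```js
import { formatHex } from 'culori';

const scaleNumbers = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100];

const blueScale = scaleNumbers.map((scaleNumber) => {
  const n = scaleNumber / 100; // normalize the scale number to 0–1
  return formatHex({ mode: 'okhsl', h: 250, s: 0.9, l: 1 - n }); // L(n) = 1 - n
});
// blueScale runs from #ffffff at scale number 0 down to #000000 at 100
```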
Making scales expressive: Leveraging hue and saturation

One advantage of scale numbers is the ability to plug them directly into a color picker to dictate the lightness of a shade. But scale numbers really show their usefulness and versatility when you leverage them across every component of your color system. That means venturing beyond lightness to explore hue and saturation, too.

Hue

When using scale numbers to control lightness, it’s easy to assume hue and saturation will behave consistently across the lightness range. However, our perception of color doesn’t work that simply. Hue can appear to shift dramatically between light and dark shades of the same color due to a phenomenon called the Bezold–Brücke effect: colors tend to look more purplish in shadows and more yellowish in highlights. So if we want to maintain consistent hue perception, we can use scale numbers to adapt the hues of our color scales. As lightness decreases, blues and reds should shift slightly towards more violet/purple tones to counteract the Bezold–Brücke effect. Likewise, as lightness increases, yellows, oranges, and reds should shift towards more yellowish hues.2,3

Purple without (top) and with (bottom) accounting for the Bezold–Brücke shift

Red without (top) and with (bottom) accounting for the Bezold–Brücke shift

In both examples above, we’ve used the scale number to shift the hue slightly as the lightness increases. This looks like the following formula:

$$H(n) = H_{base} + 5(1 - n)$$

H(n) is the hue at a given normalized scale value; Hbase is the “base hue” of the color. The 5(1 - n) term means the hue will change by 5 degrees as the scale number goes from one end of the scale to the other. If you’re using this formula, you should tweak the numbers to your liking. By making hue a function of lightness, with the scale number adjusting hue accordingly, hues look more consistent and harmonious across the entire scale. The shifts don’t need to be large: even subtle hue variations of a few degrees can perceptually compensate for natural hue shifts with changing brightness.

Saturation and Chroma

From our understanding of the CIE LCh color space and its sibling, the OKHsl color space, we know that colors generally attain their peak chroma around the middle of the lightness scale.4 In design, this presents a fantastic opportunity. By designing our color scales such that the midpoint is the most chromatically rich, we can make sure that our colors are the most vibrant and saturated where it matters most. Conversely, as we veer towards the lightness extremes, we can have chroma values that taper off, ensuring that our brightest and darkest shades remain subtle and balanced.

OKHsl gives us a saturation component that goes from 0% to 100% of the possible chroma at a given hue and lightness value. We can take advantage of this by using the normalized scale number as an input to a function that goes from a minimum saturation to a maximum and back again.

Green with constant saturation (top) and varying saturation (bottom)

In practice, the formula for achieving this looks like this: $$S(n) = -4n^2 + 4n$$, where S(n) is the saturation at a given (normalized, as before) scale value n. The formula is an upside-down parabola, which starts at 0% and peaks at 100% when the scale value is 0.5. You can add a few terms to adjust the minimum and maximum saturation if you’d like to tune the scale further: neutrals, for example, don’t need a high maximum saturation. But most colors do well moving between 0% and 100% saturation.

In practice: Crafting colors with functions

Let’s put this into practice and generate an extensive color scale with only a handful of functions. Functions allow us to build a flexible framework that is resilient to change and can be easily adapted to new requirements; if we need to add colors, tweak hues, or adjust saturation, we can do so without rewriting the entire system.

Pick base hues

First, let’s pick a handful of base hues. At the very least, you’ll need a blue for interactive elements like links and buttons, and green, red, and yellow for statuses. Your neutrals need a hue, too; though it won’t show up much, a cool neutral and a warm neutral have very different effects on the overall system.

| | Neutral | Blue | Green | Red | Yellow |
| --- | --- | --- | --- | --- | --- |
| Base hue (Hbase) | 250 | 250 | 145 | 20 | 70 |

Add functions for hue, saturation, and lightness

Next, let’s use the functions we came up with earlier to indicate how the colors should change depending on scale numbers (written out in code after this list):

- Hue function: $$H(n) = 250$$ for the neutral; $$H(n) = H_{base} + 5(1 - n)$$ for every other color
- Saturation function: $$S(n) = -0.8n^2 + 0.8n$$ for the neutral; $$S(n) = -4n^2 + 4n$$ for every other color
- Lightness function: $$L(n) = 1-n$$ for all colors

The hue function is a constant for neutrals, and for the colors we use the function that accounts for the Bezold–Brücke shift. As for saturation, the neutral colors have a maximum saturation of 20% instead of the full 100%; the rest of the colors use the function that goes from 0% to 100% and back. The lightness function is the same for all colors.
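In JavaScript, the functions above look like this (a sketch; the constants are the ones chosen in the table and are meant to be tweaked per system):

```js
// n is the normalized scale number (0–1); baseHue comes from the table above
const hue = (n, baseHue) => baseHue + 5 * (1 - n); // compensates for the Bezold–Brücke shift
const hueNeutral = () => 250;                      // neutrals keep a constant hue

const saturation = (n) => -4 * n ** 2 + 4 * n;            // 0 → 1 → 0, peaking at n = 0.5
const saturationNeutral = (n) => -0.8 * n ** 2 + 0.8 * n; // peaks at 0.2 (20%)

const lightness = (n) => 1 - n; // the same for every color
```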
Calculate the colors for each scale number

Now let’s let the math work its magic. For each scale number, for every color, we have all the information we need from the base hue, hue function, saturation function, and lightness function. All values below are sRGB hex:

| Scale Number | Neutral | Blue | Green | Red | Yellow |
| --- | --- | --- | --- | --- | --- |
| 0 | #ffffff | #ffffff | #ffffff | #ffffff | #ffffff |
| 10 | #e0e3e6 | #dae4f0 | #d8e8d4 | #f9d1d6 | #f7e9a3 |
| 20 | #bfc8d1 | #aacaf1 | #9adb90 | #f1b5b7 | #ebbe83 |
| 30 | #9fadbd | #73aff6 | #67c55b | #f7838c | #e09c34 |
| 40 | #8193a6 | #2e92f9 | #39ac30 | #fa405e | #c3810a |
| 50 | #67798c | #0077d8 | #009100 | #dd0042 | #a26900 |
| 60 | #506070 | #065faa | #227021 | #ae0f33 | #815304 |
| 70 | #3c4752 | #0e477c | #255125 | #7e1a28 | #5f3e0b |
| 80 | #292f35 | #12304d | #1c351c | #4e1b1e | #3e290f |
| 90 | #141619 | #0d1722 | #101910 | #221111 | #1d150b |
| 100 | #000000 | #000000 | #000000 | #000000 | #000000 |

Of course, this palette is fairly basic and might not be optimal for your needs. But using formulas and functions to calculate colors from scale numbers has a powerful advantage over manually picking each color: you can make tweaks to the formulas themselves and instantly see the entire palette adapt.

Making scales adaptive: Using background color as an input

Today, color modes like dark mode and high-contrast accessibility mode are table stakes in design systems. So, if you’re picking colors manually, you have to pick an additional 50 colors for each mode, carefully balancing the unique perception of color against each different background. However, with the functions-and-formulas approach to picking colors, we can abstract a color palette to respond to any background color we might want to apply.

Let’s go back to the lightness formula we used in the previous palettes: $$L(n) = 1-n$$. Using this formula, the lightness will decrease as the scale number increases. In dark mode, we want the opposite: lightness should increase as the scale number increases. We can use a more detailed formula to switch the direction of our scale if the lightness of a background color is less than a specific value:

$$L(n) = \begin{cases} 1 - n, & \text{if } Y_b > 0.18 \\ n, & \text{if } Y_b \le 0.18 \end{cases}$$

The “Yb” in this equation is the background color’s Y value in the XYZ color space. As I explained at the beginning of this essay, color spaces are different ways of mapping all the colors in a gamut; XYZ is an extremely precise and comprehensive color space. While the X and Z components don’t map neatly to phenomenological aspects of a color (like a and b in the Lab color space), the Y component represents the luminosity of a color.

You may be wondering why we’re using another color space (in addition to OKHsl) to dictate lightness. This is because the WCAG (Web Content Accessibility Guidelines) color contrast algorithm compares Y values in XYZ space, which will be more relevant in the next section. A color with a Y value of 0.18 will have the particular quality of passing WCAG contrast level AA5 on both pure white (#ffffff) and pure black (#000000). That makes it a good test to see if a color is a light background (Yb > 0.18) or a dark background (Yb < 0.18).
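If you need to compute Yb yourself, here’s a sketch (a hypothetical helper, not part of the essay’s library) that follows the WCAG relative-luminance definition:

```js
// Convert an '#rrggbb' sRGB hex string to its Y (relative luminance), per WCAG
const relativeLuminance = (hex) => {
  // undo the sRGB gamma curve for one channel
  const linear = (c) => (c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4));
  const [r, g, b] = [1, 3, 5].map((i) => linear(parseInt(hex.slice(i, i + 2), 16) / 255));
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
};

relativeLuminance('#ffffff') > 0.18; // true → light background, scale runs light to dark
relativeLuminance('#000000') > 0.18; // false → dark background, the scale flips
```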
Using this equation for our color system, we can now get both dark mode and light mode colors, calculated automatically based on the background color we choose. Here is the color palette calculated with a background color of #000000 (Yb = 0), again in sRGB hex:

| Scale Number | Neutral | Blue | Green | Red | Yellow |
| --- | --- | --- | --- | --- | --- |
| 0 | #000000 | #000000 | #000000 | #000000 | #000000 |
| 10 | #141619 | #0e1722 | #10190f | #221112 | #1c150b |
| 20 | #292f35 | #132f4f | #1e351a | #4e1a20 | #3d2a0f |
| 30 | #3c4752 | #10467f | #275122 | #7e192b | #5e3e0b |
| 40 | #506070 | #075eac | #25701e | #ae0e36 | #805304 |
| 50 | #67798c | #0077d8 | #009100 | #dd0042 | #a26900 |
| 60 | #8193a6 | #2993f8 | #35ac35 | #fa405c | #c4810a |
| 70 | #9fadbd | #6fb0f6 | #61c660 | #f78489 | #e29b35 |
| 80 | #bfc8d1 | #a7caf1 | #96db94 | #f1b5b5 | #ecbd86 |
| 90 | #e0e3e6 | #d9e4f0 | #d6e9d6 | #f0dedd | #eee0d1 |
| 100 | #ffffff | #ffffff | #ffffff | #ffffff | #ffffff |

Making scales accessible: Building in the WCAG contrast calculation

One of the most helpful aspects of scale numbers is that they can simplify accessibility substantially. The first time I saw this feature was with the US Web Design System’s (USWDS) design tokens. The USWDS color tokens have scale numbers from 0–100; using any tokens that have a scale number of 50 or more guarantees that those colors will meet the WCAG color contrast criteria at AA level. This makes designing accessible interfaces much easier. Instead of manually running each color pairing through a color contrast check, you can compare the scale numbers of the design tokens and instantly know if the combination meets accessibility criteria.

When I first set out to build out a system of functions for Stripe’s color palette, this was the most daunting part of the challenge. Going in, I wasn’t even sure if it was possible to systematically target contrast ratios across all hues. However, after seeing the technique used in Adobe’s Leonardo, I had some degree of hope that such a function existed. After many false starts and dead ends, I found the right set of operations.

Step 1: Calculate a target contrast ratio based on scale step

Stripe’s color scales follow the lead of the USWDS; when scale numbers differ by 500 or greater, those two colors conform to the AA-level contrast ratio of 4.5:1. This means that when neutral.500 is used on top of neutral.0 (or vice versa), the color combination should be accessible. To accomplish this with calculated colors, it’s important to understand how WCAG’s contrast ratio is measured. A contrast ratio like 4.5:1 is the output (R) of the following formula, which is defined in terms of what the WCAG calls “relative luminance”:6

$$R = \frac{L_1 + 0.05}{L_2 + 0.05}$$

In this equation, L1 is the luminance (i.e., the Y value of the color in XYZ color space) of the lighter color, and L2 is the luminance of the darker color.

So how do we use this knowledge to transform scale steps into contrast ratios? Well, we know step 0 and 500 need to have a ratio of 4.5. Step 100 and step 600 also need to have a ratio of 4.5, and so on, up the scale. This is a feature of exponential equations: equally-spaced points along the function have consistent ratios. Exponential equations also model the growth of a population, or the spread of a virus. It happens that luminosity is also an exponential function of scale step, which shouldn’t be surprising if you know a bit of calculus.

Exponential functions take the form $$f(x) = e^{kx}$$, where k is some constant. In our case, we’ll call the function r(x) (for contrast ratio), where x is a number between 0 and 1 that represents our scale step; we need to solve for k to find the exact constant that produces the correct contrast ratios. Since r(0.5) should be 4.5 — that is, scale step 500 has a contrast ratio of 4.5:1 with step 0 — we start with $$4.5 = e^{k \cdot 0.5}$$. Solving for k yields $$k = \ln(20.25)$$.
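A quick sanity check of this derivation in code: with k = ln(20.25), any two scale values 0.5 apart land in a 4.5:1 ratio.

```js
const k = Math.log(20.25); // ≈ 3.008
const r = (x) => Math.exp(k * x);

r(0.5) / r(0);   // 4.5 — scale step 500 against step 0
r(0.6) / r(0.1); // 4.5 — the ratio holds anywhere along the scale
```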
To make this a little easier to work with, we can use a close approximation of this value, 3.008. And if browsers were perfect pieces of software, that would be that. But color in web browsers is a tricky technical problem. Specifically, when you convert an RGB color like rgb(129.3, 129.3, 129.3) to a hex color, it’s rounded off; the result is #818181, which is exactly rgb(129, 129, 129). The formula we derived, $$r(x) = e^{3.008x}$$, is exact, so if you round a color’s values at all after calculating it, you may end up with inaccessible colors. In testing this function, I’ve found that adding a little extra contrast to the overall system helps guard against rounding errors. The final formula I used to calculate the contrast ratio from a scale step is as follows:

$$r(x) = e^{3.04x}$$

where r(x) is the target contrast ratio and x is a number from 0 to 1 that represents the scale number. If your scale numbers (like Stripe’s) go from 0 to 1,000, then a scale number of 500 correlates to x = 0.5.

Step 2: Calculate lightness based on a target contrast ratio

Now that we have a function to calculate a contrast ratio based on our scale number, let’s return to the contrast ratio equation:

$$R = \frac{L_1 + 0.05}{L_2 + 0.05}$$

If we solve this equation for L2, we get an equation for the luminosity of a color with the desired contrast ratio against a given color. This is true as long as L1 is greater than L2; put another way, this covers cases where we’re generating a darker color than our given (background) color. For the opposite case, we can use the same formula, solved for L1 instead of L2. This gets us the following piecewise equation:

$$Y_f = \begin{cases} \dfrac{Y_b + 0.05}{R} - 0.05, & \text{if } Y_b > 0.18 \\ R\,(Y_b + 0.05) - 0.05, & \text{if } Y_b \le 0.18 \end{cases}$$

As explained earlier, the 0.18 in this equation represents the luminosity of “middle gray,” a color equally contrasting with #000000 and #ffffff; each case depends on whether the background color is dark or light.7 So, for example, if I want a foreground to have a 4.5:1 contrast ratio with the background color, I can calculate the luminosity of that color by inputting the luminosity of the background as Yb and the contrast ratio as 4.5. If the background is #ffffff, which has a luminosity of 1, Yf comes out to 0.183.

We can substitute in our function for r(x) to get the following:

$$Y_f(x) = \begin{cases} \dfrac{Y_b + 0.05}{e^{3.04x}} - 0.05, & \text{if } Y_b > 0.18 \\ e^{3.04x}\,(Y_b + 0.05) - 0.05, & \text{if } Y_b \le 0.18 \end{cases}$$

This is a function that takes:

- a number from 0 to 1 that represents a scale number, and
- the Y value of a background color,

and provides the Y value (i.e., luminance) of a color at the given scale number.

Step 3: Translate from XYZ Y to OKHsl L

Despite its scientific accuracy, XYZ is not a great color space to work in for generating color scales for design systems — while we can step through the Y values in a fairly straightforward way, calculating X and Z values of a given color requires matrix multiplication. Instead, we can translate XYZ’s Y value into OKHsl’s l value with the following two-step process. First, we can use the following formula to convert the Y value to the lightness value in Lab:8

$$L = \begin{cases} 903.2962962\,Y, & \text{if } Y \le 0.0088564516 \\ 116\,Y^{1/3} - 16, & \text{if } Y > 0.0088564516 \end{cases}$$

Then, OKHsl uses a “toe” function to map the Lab lightness value to a perceptually accurate lightness value. Essentially it adds a little space to the dark end of the spectrum.
This function is a little complicated:

$$\mathrm{toe}(l) = \frac{1}{2}\left(k_3 l - k_1 + \sqrt{(k_3 l - k_1)^2 + 4 k_2 k_3 l}\right)$$

where $$k_1 = 0.206$$, $$k_2 = 0.03$$, and $$k_3 = \frac{1 + k_1}{1 + k_2}$$. The math gets a lot more manageable if we put it all into a JavaScript function:

```js
const YtoL = (Y) => {
  if (Y <= 0.0088564516) {
    return Y * 903.2962962;
  } else {
    return 116 * Math.pow(Y, 1 / 3) - 16;
  }
};

const toe = (l) => {
  const k_1 = 0.206;
  const k_2 = 0.03;
  const k_3 = (1 + k_1) / (1 + k_2);
  return 0.5 * (k_3 * l - k_1 + Math.sqrt((k_3 * l - k_1) * (k_3 * l - k_1) + 4 * k_2 * k_3 * l));
};

const computeScaleLightness = (scaleValue, backgroundY) => {
  let foregroundY;
  if (backgroundY > 0.18) {
    foregroundY = (backgroundY + 0.05) / Math.exp(3.04 * scaleValue) - 0.05;
  } else {
    foregroundY = Math.exp(3.04 * scaleValue) * (backgroundY + 0.05) - 0.05;
  }
  // YtoL returns lightness on a 0–100 scale, while toe (and OKHsl's l)
  // operate on 0–1, so normalize before applying the toe
  return toe(YtoL(foregroundY) / 100);
};
```

The function computeScaleLightness takes two values, the normalized scale value and the Y value of your background color, and returns an OKHsl L (lightness) value for the color at that scale step. With this, we have all the pieces we need to generate a complete accessible color palette for any design system.

Putting it all together: All the code you need

Now we have all the components to write a complete color generation library.

```js
// utility functions
const YtoL = (Y) => {
  if (Y <= 0.0088564516) {
    return Y * 903.2962962;
  } else {
    return 116 * Math.pow(Y, 1 / 3) - 16;
  }
};

const toe = (l) => {
  const k_1 = 0.206;
  const k_2 = 0.03;
  const k_3 = (1 + k_1) / (1 + k_2);
  return (
    0.5 *
    (k_3 * l - k_1 + Math.sqrt((k_3 * l - k_1) * (k_3 * l - k_1) + 4 * k_2 * k_3 * l))
  );
};

const normalizeScaleNumber = (scaleNumber, maxScaleNumber) =>
  scaleNumber / maxScaleNumber;

// hue, chroma, and lightness functions
const computeScaleHue = (scaleValue, baseHue) => baseHue + 5 * (1 - scaleValue);

const computeScaleChroma = (scaleValue, minChroma, maxChroma) => {
  const chromaDifference = maxChroma - minChroma;
  return (
    -4 * chromaDifference * Math.pow(scaleValue, 2) +
    4 * chromaDifference * scaleValue +
    minChroma
  );
};

const computeScaleLightness = (scaleValue, backgroundY) => {
  let foregroundY;
  if (backgroundY > 0.18) {
    foregroundY = (backgroundY + 0.05) / Math.exp(3.04 * scaleValue) - 0.05;
  } else {
    foregroundY = Math.exp(3.04 * scaleValue) * (backgroundY + 0.05) - 0.05;
  }
  // normalize YtoL's 0–100 output to the 0–1 range toe and OKHsl expect
  return toe(YtoL(foregroundY) / 100);
};

// color generator function
const computeColorAtScaleNumber = (
  scaleNumber,
  maxScaleNumber,
  baseHue,
  minChroma,
  maxChroma,
  backgroundY,
) => {
  // create an OKHsl color object; this might look different depending on what library you use
  const okhslColor = {};
  // normalize scale number
  const scaleValue = normalizeScaleNumber(scaleNumber, maxScaleNumber);
  // compute color values
  okhslColor.h = computeScaleHue(scaleValue, baseHue);
  okhslColor.s = computeScaleChroma(scaleValue, minChroma, maxChroma);
  okhslColor.l = computeScaleLightness(scaleValue, backgroundY);
  // convert OKHsl to sRGB hex; this will look different depending on what library you use
  return convertToHex(okhslColor);
};
```

For this code to work, you’ll need a library to convert from OKHsl to sRGB hex. The upcoming version of colorjs.io supports this, as does culori. I’ve marked where that matters, in case you’d like to use a different color conversion utility.

What does it look like in practice? Here are some examples of the same design in a number of themes, with different background colors:

Three generated color palettes

By adjusting the hue, saturation, and lightness when we generate our colors, we can get a broad and expressive range of hues, while ensuring each shade is accessible when used in the same context.
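As one possible wiring of the missing conversion step (a sketch assuming culori; the code above is deliberately library-agnostic), you could define convertToHex and generate a full scale like this:

```js
import { formatHex } from 'culori';

// supply the conversion the generator leaves open
const convertToHex = (okhslColor) => formatHex({ mode: 'okhsl', ...okhslColor });

// a blue scale on a white background (Y = 1), saturation peaking at 90%
const blues = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100].map((scaleNumber) =>
  computeColorAtScaleNumber(scaleNumber, 100, 250, 0, 0.9, 1)
);
```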
What we’ve learned and where we’re going

At Stripe, we’ve implemented this approach to generating color palettes. It’s now the foundation of the colors in our design system, Sail. The color generation function is also available to the users of our design system; this means that teams can offer theming features to end users, which is especially useful when Stripe’s merchants embed our UI in their own applications.

One important lesson I learned while on this journey is the importance of token APIs. This is a bit of an esoteric topic and might be worthy of its own essay. The short version is: using color aliases (like color.button.background referring to color.action.500 referring to color.base.blue.500) allows theming to happen “behind the scenes,” and ensures that components don’t need to update their code when switching themes.

So where do we go from here? There are two features that I’d like to explore in the future to make this approach to color even more robust.

First, I’d like to develop an alternative color lightness scale for APCA. The APCA color contrast function is an alternative to the current WCAG contrast ratio function. It purports to more accurately reflect contrast between colors, taking into account the “polarity” of the colors (e.g., dark-on-light or light-on-dark) and the font size of any text. The math behind the APCA contrast function is a bit more complicated than the WCAG function, and my early experiments weren’t very successful.

Second, I’d like to extend this approach to work in wide-gamut color spaces like Display P3. Currently, OKHsl only covers the sRGB gamut; more and more screens are capable of displaying colors beyond the sRGB gamut, offering even more possibilities for accessible color palettes. Calculating a P3 version of OKHsl should be possible, but it’s definitely outside the scope of my current ability/comprehension.

Ultimately, however, the approach outlined in this essay should be a solid basis for generating colors for any design system. No matter how many hues you need, how expressive you’d like to be, how many shades your system consists of, or what kinds of themes you design, the set of functions I’ve covered will provide accessible color combinations.

Special thanks to Dmitry Belyaev for providing feedback on a draft of this essay.

Footnotes & References

1. (70, -15) is the coordinate for pink in the Lab color space. ↩︎
2. R. W. Pridmore, “Bezold–Brücke Hue-Shift as Functions of Luminance Level, Luminance Ratio, Interstimulus Interval and Adapting White for Aperture and Object Colors,” Vision Research 39, no. 19 (1999): 3873-3891. ↩︎
3. Jesús Lillo et al., “Lightness and Hue Perception: The Bezold-Brücke Effect and Colour Basic Categories,” Psicológica 25, no. 1 (2004): 23-43. ↩︎
4. However, it’s important to note that this peak can vary slightly depending on the specific hue in question. ↩︎
5. AA is generally accepted as the standard for accessibility. A and AAA ratings exist, but are much more lax and much more strict, respectively. You can read more about conformance levels on the W3C website. ↩︎
6. https://www.w3.org/WAI/GL/wiki/Contrast_ratio ↩︎
7. This isn’t extremely rigorous; you might want a “light theme” that starts from a dark gray background and gets darker as the scale number increases. I’ll leave that as an exercise to the reader. This formula will cover the typical dark and light mode calculations. ↩︎
8. If you’re like me and get suspicious when you see oddly specific numbers like 903.2962962 in equations like these, a quick explanation: unlike the RGB color space, the XYZ color space has no “true white.” Because our eyes can perceive true white differently according to what light source is used, to transfer colors in and out of XYZ color space we often need to also define true white. The most common values are defined by something cryptically called the “CIE standard illuminant D65”, which corresponds roughly to what white looks like on a clear day in northern Europe. I am not making this up. ↩︎
