Album art didn’t always exist. In the early 1900s, recorded music was still a novelty, overshadowed by sales of sheet music. Early records were vastly different from what we think of today: discs were sold individually and could only hold up to four minutes of music per side. Sometimes, only one side of the record was used. One of the most popular records of 1910, for example, was “Come, Josephine, in My Flying Machine”: it clocked in at two minutes and 39 seconds.

“Come, Josephine, in My Flying Machine” (via Wikipedia)

The packaging of these records was strictly utilitarian: a brown paper sleeve to protect the record from dust, printed with the name of the record label or the retailer. Rarely did the packaging include any information about the disc inside; the label on the center of the disc was all there was to differentiate one record from another.

But as record sales started to show signs of life, music publishers took note. Columbia Records, one of the first companies to sell music on discs, was especially successful. It pioneered the sale of songs in bundles: individual discs bound together in packages resembling photo albums, partly to protect the delicate shellac the records were made of, and partly to increase sales. Because they resembled photo albums, Columbia called them “record albums.”

Many more technological breakthroughs made it possible to mass-manufacture and distribute music throughout the world at affordable prices. The five-minute-long 78 rpm discs were replaced by 20-minute discs that ran at 33⅓ rpm, which were in turn replaced by the hour-long 12″ LP we know today. Delicate shellac was replaced by the more resilient (and cheaper) vinyl. Both recording technology and consumer electronics kept evolving, allowing more dynamic music to fit into smaller packages and be played on smaller, higher-fidelity stereos.

The invention of album art can get lost in this story of technological mastery.
But among all the factors that contributed to the rise of recorded music, it stands as one of the few that was wholly driven by creators themselves. Album art — first as marketing material, then as pure creative expression — turned an audio-only medium into a multi-sensory experience. This is the story of the people who made music visible.

The prophet: Alex Steinweiss

Alex Steinweiss was born in 1917, the son of eastern European immigrants. Growing up in Brooklyn, New York, Steinweiss took an early interest in art and earned a scholarship to Parsons School of Design. On graduating, he worked for Austrian designer Joseph Binder, whose bold, graphic posters had influenced design through the first decades of the 1900s.

Joseph Binder, The Most Important Wheels in America, Association of American Railroads (1952) (via MoMA)
Joseph Binder, Österreichs Wiederaufbau Ausstellung Salzburg (1933) (via MoMA)
Joseph Binder, Air Corps U.S. Army (winning entry for the MoMA National Defense Poster Competition [Army Air Corps Recruiting]) (via MoMA)

After his work with Binder, Steinweiss was hired by Columbia Records to produce promotional displays and ads, but the job didn’t stick. At the outbreak of World War II, he went to work for the Navy’s Training and Development Center in New York City, designing teaching materials and cautionary posters. When the war ended, Steinweiss went back to freelancing for Columbia.

At a lunch meeting in 1948, company president Ted Wallerstein mentioned that Columbia would soon introduce a new kind of record that, spinning at a slower speed of 33⅓ rpm, could hold more music than the older 78 rpm discs. But there was a problem: the smaller, more intricate grooves on the discs were being damaged by the heavy paper sleeves used for the 78s. After the lunch, Steinweiss went to work to create a new, safer jacket for the records. But his vision for the new packaging went beyond its construction. “The way records were sold was ridiculous,” Steinweiss said.
“The covers were brown, tan or green paper. They were not attractive, and lacked sales appeal.” He suggested that Columbia should spend more money on packaging, convinced that eye-catching designs would help sell records.1

His first chance to prove his case was a 1940 compilation by the songwriters Rodgers and Hart — one of the first releases on the new microgroove 33⅓ records. For it, he asked the Imperial Theater (located one block west of Times Square) to change the lettering on its marquee to read “SMASH SONG HITS BY RODGERS & HART.” Steinweiss had a photographer take a photo, and back in his studio, superimposed “COLUMBIA RECORDS” on the image to match the perspective and style of the signage. The last touch, a nod to the graphic abstraction of his mentor Joseph Binder, was a set of orange lines arcing around the marquee in the exact size of the record underneath. Album art was born.

Smash Song Hits by Rodgers & Hart (via RateYourMusic)

Steinweiss went on to design hundreds of covers for Columbia from 1940 to 1945. His methodology was rigorous; the covers went beyond nice pictures to become visual representations of the music itself. Before most people owned a TV set, Steinweiss’s album covers were affordable multi-sensory entertainment: looking at the album cover while listening to the music created an experience that was more than the sum of its parts.

“I tried to get into the subject,” he explained, “either through the music, or the life and times of the composer. For example, with a Bartók piano concerto, I took the elements of the piano — the hammers, keys, and strings — and composed them in a contemporary setting using appropriate color and rendering. Since Bartók is Hungarian, I also put in the suggestion of a peasant figure.”

Steinweiss’s Bartók cover (via RateYourMusic)

Steinweiss was prophetic: his colorful compositions sold records.
Newsweek reported that sales of Bruno Walter’s recording of Beethoven’s “Eroica” symphony increased 895% with its new Steinweiss cover.2

Steinweiss’s cover for Eroica

The challenger: Reid Miles

From 1940 to 1950, Columbia Records was the dominant force in music sales. Buoyed by Steinweiss’s initial successes, Columbia hired more artists and designers to produce album art. Jim Flora led the charge from 1947–1950 with irreverent illustrations and more daring explorations of typography; like Steinweiss, his work mirrored the music on the records. During this era, Columbia began to focus much more on popular music, and Flora’s campy compositions screamed “this isn’t your parents’ music.”

Jim Flora's cover for Gene Krupa and His Orchestra (via JimFlora.com)
Jim Flora's cover for Bix and Tram (via JimFlora.com)
Jim Flora's cover for Kid Ory and His Creole Jazz Band (via JimFlora.com)

But while Columbia was focused on making it into the hit parade, an upstart label was homing in on a sound that would come to define the era. Blue Note Records, founded in 1939, was fixated on the jazz underground. From its founding and throughout the 1950s, Blue Note focused on “hot jazz,” a mutant strain of jazz descended from the big band swing era, often including twangy banjos, wailing clarinets, and rambunctious New Orleans second-line-style drumming. Founder Alfred Lion wrote the label’s manifesto:

Blue Note Records are designed simply to serve the uncompromising expressions of hot jazz or swing, in general. Any particular style of playing which represents an authentic way of musical feeling is genuine expression. … Blue Note records are concerned with identifying its impulse, not its sensational and commercial adornments.3

One way Blue Note stood out from labels like Columbia was its dedication to its artists. Many of the working musicians of the ’50s lived like vampires, waking up after dusk and playing gigs into the early hours of the morning, then rehearsing until dawn.
Blue Note would record its artists in the pre-dawn hours, giving musicians time to rest up before their next night’s gigs started. Art Blakey, Thelonious Monk, Charlie Parker, Dizzy Gillespie, and John Coltrane are household names now; but then, because of their drinking, drug use, and frenetic schedules, labels wouldn’t work with them. Blue Note embraced them, feeding their fires of creative innovation and creating an updraft for the insurgency of jazz to come.

Album art was one more revolutionary way for Blue Note to explore “genuine expression.” Just as they fostered talented musicians, they’d give young designers a chance to shine. Alfred Lion’s childhood friend Francis Wolff had joined the label as a producer and photographer; he’d shoot candid portraits of the musicians as they worked. Then, designers like Paul Bacon, Gil Mellé (himself a musician), and John Hermansader would pair Wolff’s black-and-white photos with a single, bright color and juxtapose them with stark, sans-serif type.

Genius of Modern Music Vol. 1 (via Deep Groove Mono)
Gil Mellé's cover, featuring Francis Wolff's photography, for his band's New Faces — New Sounds (via Deep Groove Mono)
John Hermansader's cover, featuring Francis Wolff's photography, for George Wallington's Showcase (via Deep Groove Mono)

As the 1960s approached, the musicians Blue Note worked so hard to cultivate were forging new styles, leaving behind the swing-era pretense of jazz as dance music. Charlie Parker and Bud Powell kept speeding up the tempo and stuffing more chords into progressions. Max Roach started playing the drums like a boxer, bobbing and weaving around the beat with skittering cymbals, waiting for the right moment to land a single monumental “thud” of a kick drum. Without the drums keeping a steady rhythm, bass players like Milt Hinton and Gene Ramey had to furiously mark out time with eighth notes, traversing chords by plucking up and down the scale. This was bebop, and it was musicians’ music.
Blue Note’s ethos of artistic integrity was the perfect Petri dish for virtuosic musicians to develop innovative sounds — they worked in small ensembles, often just five players, constantly scrambling and re-arranging instrumentation, playing harder and faster and louder.

Then, around 1955, just as Blue Note was hitting its stride, Wolff met a 28-year-old designer named Reid Miles. Miles had recently moved to New York and had been working for John Hermansader at Esquire magazine. He was a big fan of classical music but wasn’t so interested in jazz. Wolff convinced Miles to start designing covers for Blue Note all the same, kicking off one of the most influential partnerships in modern design.

The first cover Miles created was for vibraphone player Milt Jackson; it picked up from the established art style, with Wolff’s photos and a single bright hue. But the type was even more exaggerated, and the photo took up more than half the cover. White dots overlaid on Jackson’s mallets were the perfect abstraction of the staccato tones of the vibraphone. It’s a great cover, but it was just a hint of what was to come.

Reid Miles' cover for Milt Jackson (via Ariel Salminen)

A common theme of Miles’ covers was the emphasis on Wolff’s photography. We’re familiar with these iconic images today, but at the time they were revolutionary; before, black musicians like Louis Armstrong and Ella Fitzgerald were portrayed in tuxedos and evening gowns, posed smiling genially or laughing, rendered so as not to offend the largely white listening audience. Wolff’s portraits were candid and realistic, showing black musicians at work. For example, the cover for Art Blakey’s The Freedom Rider shows Blakey lost in a moment, almost entirely obscured by a cymbal. The drummer is smoking a cigarette, but it’s barely hanging onto the corner of his lip — his mouth is half-open, his brows clenched in a moment of agony or ecstasy.
Miles would let the photo fill up the entire cover, cramming the name of the record into whatever empty space was available.

The Freedom Rider (via London Jazz Collector)

Miles sometimes reversed this relationship, pioneering the use of typography to convey the spirit of the music. His cover for Jackie McLean’s It’s Time! is composed of an edge-to-edge grid of 243 exclamation marks; a postage-stamp picture of McLean graces the upper corner, almost a punchline. Lee Morgan’s The Rumproller is another type-only cover, this time with the title smeared out from corner to corner, like it was left on a hot dashboard for the day. Larry Young’s Unity has no photo at all; the four members of the quartet become orange dots resting in (or bubbling out of) the bowl of the U.

It's Time (via Ariel Salminen)
Reid Miles' cover for Lee Morgan's The Rumproller (via Fonts in Use)
Reid Miles' cover for Larry Young's Unity

Miles fulfilled the Blue Note manifesto. His album covers pushed the envelope of graphic design just as the artists on the records inside continued to break new ground in jazz. With the partnership of Miles and Wolff, alongside Alfred Lion’s commitment to artistic integrity, Blue Note became the standard-bearer for jazz.

Columbia Records couldn’t help but notice. Even though Blue Note wasn’t nearly as commercially successful as Columbia, its willingness to take risks had established it as a far more sophisticated, innovative, and creative label; to compete for the best talent, Columbia would need to find a way to win the attention of both artists and listeners.

The master: S. Neil Fujita

Sadamitsu Fujita was born in 1921 in Waimea, Hawaii. He was assigned the name Neil in boarding school — leading up to World War II, anti-Japanese sentiment was rampant, especially in Hawaii.
Fujita moved to Los Angeles to attend art school, but his studies were cut short in 1942 when Franklin Roosevelt signed Executive Order 9066, allowing the imprisonment of Japanese Americans living on the west coast. Fujita was sent to Wyoming, where he enlisted in the 442nd Regimental Combat Team. Before the war was over, he’d see combat in Italy, France, and the Pacific theater.

After the war, Fujita finished his studies in Los Angeles. He quickly made a name for himself in the advertising world; his résumé landed on the desk of Bill Golden, the art director for CBS, which owned Columbia Records. Alex Steinweiss, the first album artist and Columbia’s ace in the hole, had moved on to RCA. Columbia needed a new direction. Golden called Fujita and asked him to run the art department. Fujita would be building a whole new team, replacing the relationships Columbia had built with art studios for hire. This wasn’t going to be the hardest part of Fujita’s work; when offering him the job, Golden warned him that he’d face a lot of racist attitudes still simmering in the wake of World War II.4 Still, Fujita agreed to take the job.

Fujita’s first covers fit in with the work Reid Miles was doing at Blue Note: single-color accents set against black-and-white photography.

The Jazz Messengers (via Discogs)
Fujita's cover for Miles Davis' 'Round About Midnight (via Discogs)

In 1959, jazz was leaving the stratosphere. Ornette Coleman was performing what he called “free jazz”: frenetic, inscrutable compositions that drew backlash and praise in equal parts. John Coltrane recorded Giant Steps with a level of virtuosity that even his own bandmates struggled to keep up with. Miles Davis recorded Kind of Blue, which would go on to be regarded as one of the best recordings of all time. Fujita was also breaking ground at Columbia.
He was one of the first directors to hire both men and women in a racially integrated office.5 He delegated work, tapping painters, illustrators, and photographers to contribute to covers. Fujita himself had trained as a painter before starting his career in design, and he began looking for ways to incorporate his own original paintings into the covers: “We thought about what the picture was saying about the music,” Fujita recalled, “and how we could use that to sell the record. And abstract art was getting popular so we used a lot more abstraction in the designs—with jazz records especially.”

He got the perfect opportunity to make his mark with two albums released in 1959: Charles Mingus’s Mingus Ah Um and Dave Brubeck’s Time Out.

Fujita's cover for Mingus Ah Um
Fujita's cover for Dave Brubeck's Time Out

Fujita’s abstract paintings reflected the pure exuberance of Mingus’ and Brubeck’s music. In the case of Mingus Ah Um, the divisions and intersections spanning the cover read like a beam of light passing through exotic lenses, magnifiers, refractors, and prisms; through his music, Mingus was reflecting on the transition of jazz from popular entertainment to mind-expanding creative exercise. For Time Out, the wheels and rollers spooling out across the page echo the way Brubeck’s quartet was experimenting with how time signatures could be interlocked, multiplied, and divided to create completely new textures and musical patterns. Fujita’s covers made it plain: jazz was art.

’59 turned out to be a watershed for both jazz and album art. Brubeck’s Time Out went to #2 on the pop charts in 1961 and became the first jazz LP to sell more than a million copies; “Take Five,” the album’s standout hit, would also become the first jazz single to sell a million copies. For a unique moment in time, the music and art worlds were being propelled forward by a commercially successful record.
Fujita’s paintings were making their way into millions of homes, driving sales of records by the vanguard of jazz. Fujita left Columbia Records shortly after these major successes. “I wanted to be something other than just a record designer,” he said, “so I left to go on my own.” He went on to design the book covers for Truman Capote’s In Cold Blood and Mario Puzo’s The Godfather — when the latter was turned into Francis Coppola’s breakthrough film, Fujita’s design was used for its title and promotional art. But he continued to design album covers, creating paintings for each one.

Fujita's cover for Far Out, Near In
Fujita's cover for Donald Byrd and Gigi Gryce's Modern Jazz Perspective
Fujita's cover for Columbia's recording of Glenn Gould performing Berg, Křenek, and Schoenberg

The next generation

As jazz continued to evolve throughout the ’60s and ’70s, melding with rock ’n’ roll to produce punk, electronic, R&B, and rap, album art evolved alongside. Packaging became more sophisticated: multi-disc albums came in folding cases called gatefolds, accompanied by booklets of photography and art. New printing techniques allowed for brighter colors, shiny foil stamps, and textured finishes. Budgets for production grew larger and larger.

The Beatles’ Sgt. Pepper’s Lonely Hearts Club Band featured an elaborate photo of the band members alongside 57 life-sized photograph cutouts and nine wax sculptures. For the first time on a rock LP, the lyrics to the songs were printed on the back of the cover. In another first, the paper sleeve inside was not white but a colorful abstract pattern. Also inside was a sheet of cardboard cutouts, including a postcard portrait of Sgt. Pepper, a fake mustache, sergeant stripes, lapel badges, and a stand-up cutout of the Beatles themselves. The zany campiness of Sgt. Pepper’s could only be matched by an absurd gift box full of toys and games.
The stark loneliness of the Beatles’ next album would be paired with a plain white cover, without even ink to fill in the impression of the words “The Beatles” on the front.

Sgt. Pepper's Lonely Hearts Club Band, designed by Jann Haworth and Peter Blake and photographed by Michael Cooper
The cover of The Beatles, designed by Richard Hamilton (via Reddit)

The most famous artists and designers of each generation would try their hand at album art. Salvador Dalí, Andy Warhol, Saul Bass, Keith Haring, Annie Leibovitz, Jeff Koons, Shepard Fairey, and Banksy would all create work for albums. Some of those pieces would become the most recognizable in an artist’s catalog.

Greatest Hits by The Modern Jazz Quartet
Andy Warhol's cover for The Velvet Underground & Nico (via Leo Reynolds)
Saul Bass's cover for Frank Sinatra Conducts Tone Poems of Color (via MoMA)
Keith Haring's cover for David Bowie's Without You
Annie Leibovitz and Andrea Klein's cover for Bruce Springsteen's Born In The USA
Jeff Koons' cover for Lady Gaga's Artpop
Shepard Fairey's cover for The Smashing Pumpkins' Zeitgeist
Banksy's cover for Blur's Think Tank

None of this would have been possible without the contributions of Alex Steinweiss, Jim Flora, Paul Bacon, Gil Mellé, John Hermansader, Reid Miles, S. Neil Fujita, and others. If not for the arms race between Columbia Records and Blue Note for the best art and the best artists of the ’50s, many artists would never have found their careers. And in some cases, an album like The Rolling Stones’ Sticky Fingers would come to be remembered more for its art than for its music.

When music was first pressed into discs, design was less than an afterthought. Today, album art is an extension of music itself.
Footnotes & References

1. https://www.nytimes.com/2011/07/20/business/media/alex-steinweiss-originator-of-artistic-album-covers-dies-at-94.html
2. https://web.archive.org/web/20120412033422/http://www.adcglobal.org/archive/hof/1998/?id=318
3. https://web.archive.org/web/20080503055603/https://www.bluenote.com/History.aspx
4. https://www.hellerbooks.com/pdfs/voice_s_neil_fujita.pdf
5. https://www.nationalww2museum.org/war/articles/s-neil-fujita
Interfaces are becoming less dense. I’m usually one to be skeptical of nostalgia and “we liked it that way” bias, but comparing websites and applications of 2024 to their 2000s-era counterparts, the spreading out of software is hard to ignore.

To explain this trend, and to suggest how we might regain density, I started by asking what, exactly, UI density is. It’s not just the way an interface looks at one moment in time; it’s the amount of information an interface can provide over a series of moments. It’s about how those moments are connected through design decisions, and how those decisions are connected to the value the software provides.

I’d like to share what I found. Hopefully this exploration helps you define UI density in concrete and useable terms. If you’re a designer, I’d like you to question the density of the interfaces you’re creating; if you’re not a designer, use the lens of UI density to understand the software you use.

Visual density

We think about density first with our eyes. At first glance, density is just how many things we see in a given space. This is visual density. A visually dense software interface puts a lot of stuff on the screen; a visually sparse interface puts less stuff on the screen.

Bloomberg’s Terminal is perhaps the most common example of this kind of density. On just a single screen, you’ll see scrolling sparklines of the major market indices, detailed trading volume breakdowns, tables with dozens of rows and columns, and scrolling headlines containing the latest news from agencies around the world, along with UI signposts for all of the above, with keyboard shortcuts and quick actions to take.

A screenshot of Terminal’s interface (via Objective Trade on YouTube)

Craigslist is another visually dense example, with its hundreds of plain links to categories and its spartan search-and-filter interface. McMaster-Carr’s website shares similar design cues, listing details for many product variations in a very small space.
Screenshots of Craigslist's homepage and McMaster-Carr's product page circa 2024

You can form an opinion about the density of these websites simply by looking at an image for a fraction of a second. This opinion comes from our subconscious, so it’s fast and intuitive. But like other snap judgements, it’s biased and unreliable. For example, which of these images is more dense?

Both images have the same number of dots (500), and both take up the same amount of space. But at first glance, most people say image B looks more dense.1

What about these two images? Again, both images have the same number of dots and are the same size. But organizing the dots into groups changes our perception of density.

Visual density — our first, instinctual judgement of density — is unpredictable. It’s impossible to be fully objective in matters of design. But if we want to have conversations about density, we should aim for the most consistent, meaningful, and useful definition possible.

Information density

In The Visual Display of Quantitative Information, Edward Tufte approaches the design of charts and graphs from the ground up:

Every bit of ink on a graphic requires reason. And nearly always that reason should be that the ink presents new information.

Tufte introduces the idea of “data-ink,” defined as the useful parts of a given visualization. Tufte argues that visual elements that don’t strictly communicate data — anything that isn’t a scale value, a label, or the data itself — should be eliminated. Data-ink isn’t just about the space a chart takes up: some charts use very little extraneous ink but still take up a lot of physical space. Tufte is talking about information density, not visual density.

Information density is a measurable quantity: to calculate it, you simply divide the amount of “data-ink” in a chart by the total amount of ink it takes to print it. Of course, what is and is not data-ink is somewhat subjective, but that’s not the point.
The point is to get the ratio as close to 1 as possible. You can increase the ratio in two ways:

Add data-ink: provide additional (useful) data.
Remove non-data-ink: erase the parts of the graphic that don’t communicate data.

Tufte's examples of graphics with a low data-ink ratio (first) and a high one (second), reproduced from Edward Tufte's The Visual Display of Quantitative Information

There’s an upper limit to information density, which means you can subtract too much ink, or add too much information. The audience matters, too: a bond trader at their 4-monitor desk will have a pretty high threshold; a 2nd grader reading a textbook will have a low one.

Information density is related to visual density. Usually, the higher the information density, the more dense a visualization will look. For example, take the train schedule published by E.J. Marey in 1885.2 It shows the arrival and departure times of dozens of trains across 13 stops from Paris to Lyon. The horizontal axis is time, and the vertical axis is space; the distance between stops on the chart reflects how far apart they are in the real world. The data-ink ratio is close to 1, allowing a huge amount of information — more than 260 arrival and departure times — to be packed into a relatively small space.

The train schedule visualization published by E.J. Marey in 1885, reproduced from Edward Tufte's The Visual Display of Quantitative Information

Tufte makes this idea explicit:

Maximize data density and the [amount of data], within reason (but at the same time exploiting the maximum resolution of the available data-display technology).

He puts it more succinctly as the “Shrink Principle”:

Graphics can be shrunk way down

Information density is clearly useful for charts and graphs. But can we apply it to interfaces? The first half of the equation — information — applies to screens: we should maximize the amount of information that each part of our interface shows.
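As a toy illustration of the ratio (the function name and the idea of measuring ink in abstract “units” are my own conveniences, not anything from Tufte), the calculation and both improvement strategies look like this:

```python
# Toy illustration of Tufte's data-ink ratio. Ink is measured in
# hypothetical abstract "units" purely for the sake of the example.

def data_ink_ratio(data_ink_units: float, total_ink_units: float) -> float:
    """Share of a graphic's ink that directly encodes data; 1.0 is the ideal."""
    if total_ink_units <= 0:
        raise ValueError("a graphic must use some ink")
    if not 0 <= data_ink_units <= total_ink_units:
        raise ValueError("data-ink must be between 0 and the total ink")
    return data_ink_units / total_ink_units

# A chart with 40 units of data marks buried under 60 units of
# gridlines, borders, and decorative fills:
before = data_ink_ratio(40, 100)  # 0.4

# Erasing 50 of those non-data units (the "remove non-data-ink"
# strategy) moves the ratio toward 1:
after = data_ink_ratio(40, 50)    # 0.8
```

Both strategies act on the same fraction: adding useful data raises the numerator, while erasing non-data ink shrinks the denominator.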
But the second half of the equation — ink — is a bit harder to translate. It’s tempting to think that pixels and ink are equivalent. But any interface with more than a few elements needs separators, structural elements, and signposts to help a user understand the relationship each piece has to the others. It’s also tempting to follow Tufte’s Shrink Principle and try to eliminate all the whitespace in UI. But some whitespace has meaning almost as salient as the darker pixels of graphic elements. And we haven’t even touched on shadows, gradients, or color highlights; what role do they play in the data-ink equation?

So, while information density is a helpful stepping stone, it’s clear that it’s only part of the bigger picture. How can we incorporate all of the design decisions in an interface into a more objective, quantitative understanding of density?

Design density

You might have already seen the first challenge in defining density in terms of design decisions: what counts as a design decision? In UI, UX, and product design, we make many decisions, consciously and subconsciously, in order to communicate information and ideas. But why do those particular choices convey the meaning that they do? Which ones are superfluous or simply aesthetic, and which are actually doing the heavy lifting?

These questions sparked 20th-century German psychologists to explore how humans understand and interpret shapes and patterns. They called this field “gestalt,” which in German means “form.” In the course of their exploration, Gestalt psychologists described principles that explain why some things appear orderly, symmetrical, or simple, while others do not. While these psychologists weren’t designers, in some sense they discovered the fundamental laws of design:

Proximity: we perceive things that are close together as comprising a single group.
Similarity: objects that are similar in shape, size, color, or in other ways appear related to one another.
Closure: our minds fill in gaps in designs, so we tend to see whole shapes even when there are none.
Symmetry: if we see shapes that are symmetrical to each other, we perceive them as a group formed around a center point.
Common fate: when objects move, we mentally group the ones that move in the same way.
Continuity: we can perceive objects as separate even when they overlap.
Past experience: we recognize familiar shapes and patterns even in unfamiliar contexts. Our expectations are based on what we’ve learned from our past experience of those shapes and patterns.
Figure-ground relationship: we interpret what we see in a three-dimensional way, allowing even flat 2D images to have foreground and background elements.

Examples of the principles of proximity (left), similarity (center), and closure (right)

Gestalt principles explain why UI design goes beyond the pixels on the screen. For example:

Because of the principle of similarity, users will understand that text with the same size, font, and color serves the same purpose in the interface.
The principle of proximity explains why, when a chart is close to a headline, it’s apparent that the headline refers to the chart. For the same reasons, a tightly packed grid of elements will look related, and separate from a menu above it that’s set off by ample space.
Thanks to our past experience with switches, combined with the figure-ground principle, a skeuomorphic design for a toggle switch makes it obvious to a user how to instantly turn on a feature.

So, instead of focusing on the pixels, we can think of design decisions as the ways we intentionally use gestalt principles to communicate meaning. And just as Tufte’s data-ink ratio compares the strictly necessary ink to the total ink used to print a chart, we can calculate a gestalt ratio that compares the strictly necessary design decisions to the total decisions used in a design. This is design density.
Four different treatments of the same information, using different types and amounts of gestalt principles. Which is the most dense?

This is still subjective: a design decision that seems necessary to some might be superfluous to others. Our biases will skew our assessment, whether they’re personal tastes or cultural norms. But when it comes to user interfaces, counting design decisions is much more useful than counting the amount of data or “ink” alone.

Still, design density isn’t perfect. User interfaces exist to do work, to have fun, to waste time, to create understanding, to facilitate personal connections, and more. Those things require the user to take one or more actions, and so density needs to look beyond components, layouts, and screens. Density should comprise all the actions a user takes in their journey — it should count in space and time.

Density in time

Just as the amount of stuff in a given space dictates visual density, the number of things a user can do in a given amount of time dictates temporal — time-wise — density.

Loading times are the biggest factor in temporal density. The faster the interface responds to actions and loads new pages or screens, the more dense the UI is. And unlike 2-dimensional whitespace, there’s almost no lower limit to the space needed between moments in time.

Bloomberg’s Terminal loads screens full of data instantaneously

With today’s bloated software, making a UI more dense in time is often more impactful than squeezing more stuff onto each screen. That’s why Bloomberg’s Terminal is still such a dominant tool in the financial analysis space: it loads data almost instantaneously, and a skilled Terminal user can navigate between dozens of charts and graphs in milliseconds. There are plenty of ways to cram tons of financial data into a table, but loading it with no latency is Terminal’s real superpower.

But say you’ve squeezed every second out of the loading times of your app. What next?
There are some things that just can’t be sped up: you can’t change a user’s internet connection speed, or the computing speed of their CPU. Some operations, like uploading a file, waiting for a customer support response, or processing a payment, involve complex systems with unpredictable variables. In these cases, instead of changing the amount of time between tasks, you can change the perception of that time:

Actions less than 100 milliseconds apart will feel simultaneous. If you tap on an icon and, 100ms later, a menu appears, it feels like no time at all passed between the two actions. But if there’s an animation between the two actions — the menu slides in, for example — the illusion of simultaneity might be broken. For the smallest temporal spaces, animations and transitions can make the app feel slower.3

Between 100 milliseconds and 1 second, the connection between two actions is broken. If you tap on a link and there’s no change for a second, doubt creeps in: did you actually tap on anything? Is the app broken? Is your internet working? Animations and transitions can bridge this perceptual gap. Visual cues in these spaces make the UI feel more dense in time.

Gaps between 1 and 10 seconds can’t be bridged with animations alone; research4 shows that users are most likely to abandon a page within the first 10 seconds. This means that if two actions are far enough apart, a user will leave the page instead of waiting for the second action. If you can’t decrease the time between these actions, show an indeterminate loading indicator — a small animation that tells the user that the system is operating normally.

Gaps between 10 seconds and 1 minute are even harder to fill. After seeing an indeterminate loader for more than 10 seconds, a user is likely to see it as static, not dynamic, and start to assume that the page isn’t working as expected.
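The perceptual thresholds above map cleanly onto code. Here is a minimal sketch in TypeScript (my own illustration, not code from this article; the names are invented, and the cutoffs simply mirror the 100ms, 1s, 10s, and 1-minute boundaries discussed here):

```typescript
// Pick the kind of loading feedback to show, based on how long an
// operation has been running. Thresholds follow the perceptual
// boundaries discussed above; all names are illustrative.

type Feedback =
  | "none"          // < 100ms: feels instantaneous, show nothing
  | "subtle-cue"    // 100ms–1s: a transition bridges the perceptual gap
  | "indeterminate" // 1s–10s: a spinner signals the system is working
  | "determinate"   // 10s–1min: a progress bar shows time remaining
  | "notify-later"; // > 1min: let the user leave, notify when done

function feedbackFor(elapsedMs: number): Feedback {
  if (elapsedMs < 100) return "none";
  if (elapsedMs < 1_000) return "subtle-cue";
  if (elapsedMs < 10_000) return "indeterminate";
  if (elapsedMs < 60_000) return "determinate";
  return "notify-later";
}
```

In practice you’d re-evaluate this as time passes, escalating from nothing to a spinner to a progress bar, rather than choosing once up front.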
Instead, you can use a determinate loading indicator — like a larger progress bar — that clearly indicates how much time is left until the next action happens. In fact, the right design can make the waiting time seem shorter than it actually is; the backwards-moving stripes that featured prominently in Apple’s “Aqua” design system made waiting times seem 11% shorter.5

For gaps longer than 1 minute, it’s best to let the user leave the page (or otherwise do something else), then notify them when the next action has occurred. Blocking someone from doing anything useful for longer than a minute creates frustration. Plus, long, complex processes are also susceptible to error, which can compound the frustration.

In the end, though, making a UI dense in time and space is just a means to an end. No UI is valuable because of the way it looks. Interfaces are valuable in the outcomes they enable — whether directly associated with some dollar value, in the case of business software, or tied to some intangible value like entertainment or education. So what is density really about, then? It’s about providing the highest-value outcomes in the smallest amount of time, space, pixels, and ink.

Density in value

Here’s an example of how value density is manifested: a common suggestion for any form-based interface is to break long forms into smaller chunks, then put those chunks together in a wizard-type interface that saves your progress as you go. That’s because there’s no value in a partly filled-in form; putting all the questions on a single page might look more visually dense, but if it takes longer to fill out, many users won’t submit it at all.

This form is broken up into multiple parts, with clear errors and instructions for resolution.

Making it possible for users to get to the end of a form with fewer errors might require the design to take up more space. It might require more steps, and take more time.
But if the tradeoffs in visual and temporal density make the outcome more valuable — either by increasing submission rate or making the effort more worth the user’s time — then we’ve increased the overall value density. Likewise, if we can increase the visual and temporal density by making the form more compact, load faster, and less error-prone, without subtracting value to the user or the business, then that’s an overall increase in density. Channeling Tufte, we should try to increase value density as much as possible.

Solving this optimization problem can have some counterintuitive results. When the internet was young, companies like Craigslist created value density by aggregating and curating information and displaying it in pages of links. Companies like Yahoo and Altavista made it possible to search for that information, but still put aggregation at the fore. Google took a radically different approach: use the information gleaned from the internet’s long chains of links to power a search box. Information was aggregating itself; a single text input was all users needed to access the entire web.

Google and Yahoo's approach to data, design, and value density hasn't changed from 2001 (when the first screenshots were archived) to 2024 (when the second set of screenshots was taken). The value of the two companies' stocks reflects the result of these differing approaches.

The UI was much less visually dense, but more value-dense by orders of magnitude. The results speak for themselves: Google went from a $23B valuation in 2004 to being worth over $2T today — closing in on a 100x increase. Yahoo went from being worth $125B in 2000 to being sold for $4.8B — less than 3% of its peak value.6

Conclusion

Designing for UI density goes beyond the visual aspects of an interface. It includes all the implicit and explicit design decisions we make, and all the information we choose to show on the screen.
It includes all the time and actions a user takes to get something valuable out of the software. So, finally, a concrete definition of UI density: UI density is the value a user gets from the interface divided by the time and space the interface occupies. Speed, usability, consistency, predictability, information richness, and functionality all play an important role in this equation. By taking account of all these aspects, we can understand why some interfaces succeed and others fail. And by designing for density, we can help people get more value out of the software we build.

Footnotes & References

This is a very unscientific statement based on a poll of 20 of my coworkers. Repeatability is questionable. ↩︎

The provenance of the chart is interesting. Not much is known about the original designer, Charles Ibry; but what we do know points to even earlier iterations of the design. If you’re interested, read Sandra Rendgen’s fascinating history of the train schedule. ↩︎

I have no scientific backing for this claim, but I believe it’s because a typical blink occurs in 100ms. When we blink, our brains fill in the gap with the last thing we saw, so we don’t notice the blink. That’s why we don’t notice the gap between two actions that are less than 100ms apart. You can read more about this effect here: Visual Perception: Saccadic Omission — Suppression or Temporal Masking? ↩︎

Nielsen, Jakob. “How Long Do Users Stay on Web Pages?” Nielsen Norman Group, 11 Sept. 2011, https://www.nngroup.com/articles/how-long-do-users-stay-on-web-pages/ ↩︎

Harrison, Chris, Zhiquan Yeo, and Scott E. Hudson. “Faster Progress Bars: Manipulating Perceived Duration with Visual Augmentations.” Carnegie Mellon University, 2010, https://www.chrisharrison.net/projects/progressbars2/ProgressBarsHarrison.pdf ↩︎

HackerNews has pointed out that this is a ridiculous statement. And it is. Of course, value density isn’t the only reason why Google succeeded where Yahoo failed.
But as a reflection of how each company thought about their products, it was a good leading indicator. ↩︎
Polish is a word that gets thrown around in conversations about craft, quality, and beauty. We talk about it at the end of the design process, before the work goes out the door: let’s polish this up. Let’s do a polish sprint. Could this use more polish?

https://twitter.com/svlleyy/status/1780215102064452068

A tweet (xeet?) on my timeline asked: “what does polish in an app mean? fancy animations? clear consistent design patterns? hierarchy and colour? all the above?” I thought about it for a moment and got a familiar itch in the back of my brain. It’s a feeling that I associate with a zen kōan that goes (paraphrased): “A monk asked a zen master, ‘Does a dog have Buddha-nature?’ The master answered ‘無’.”

無 (pronounced ‘wú’ in Mandarin or ‘mu’ in Japanese) literally translates to ‘not,’ as in ‘I have not done my chores today.’ It’s a negation of something, and in the koan’s case, it’s the master’s way of saying — paradoxically — that there’s no point in answering the question. In the case of the tweet, my 無-sense was tingling as I wrote a response: polish is something only the person who creates it will notice. It’s a paradox; polishing something makes it invisible. Which also means that pointing out examples of polish almost defeats the purpose. But in the spirit of learning, here are a few things that come to mind when I think of polish:

Note the direction the screws are facing. Photo by Chris Campbell, CC BY-NC 2.0 DEED

Next time you flip a wall switch or plug something into an outlet, take a second and look at the two screws holding the face plate down. Which direction are the slots in the screw facing? Professional electricians will (almost) always line the screw slots up vertically. This has no functional purpose, and isn’t determined by the hardware itself; the person who put the plate on had to make a conscious decision to do it.
Julian Baumgartner’s art restoration videos always include a note about his process for repairing or rebuilding the frame that the canvas is stretched over. When he puts the keys back into the frame to create extra tension, he attaches some fishing wire, wound around a tack, and threaded through each key; this, he says, “ensures the keys will never be lost.” How many of these details lie hidden in the backs of the paintings hung on the walls of the world’s most famous museums and galleries? A traditional go board, with a 15:14 aspect ratio. A traditional go board isn’t square. It’s very slightly longer than it is wide, with a 15:14 aspect ratio. This accounts for the optical foreshortening that happens when looking across the board. For similar reasons, traditionally, black go stones are slightly larger than white ones, as equal-sized stones would look unequal when seen next to each other on the board. The same subtle adjustments go into the shape of letters in a typeface: round letters like ‘e’ and ‘a’ are slightly taller than square letters like ‘x’ or ‘v’. The crossbars of the x don’t usually line up perfectly, either. The success of these demonstrations of polish is dictated by just how hard they are to see. So how should polish manifest in product design? One example is in UI animation. It is tempting to put transitions and animations on every component in the interface; when done right, an animated UI feels responsive and pleasant to use. But the polish required to reach that point of being “intuitive” or “natural” is immense: Animations should happen fast enough to be perceived as instantaneous. The threshold for this is commonly cited at 100ms; anything happening faster than this is indistinguishable from something happening right away. The speed of the animation has to be tuned to accelerate or decelerate at precise rates depending on how far the element is moving and what kind of transition is taking place. 
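A rough sketch of what that tuning looks like in practice (my own illustration in TypeScript; the durations and curve are invented values, chosen only to show that entrances and exits are tuned differently):

```typescript
// A cubic ease-out curve: progress accumulates quickly at first,
// then decelerates into a gentle stop.
const easeOut = (t: number): number => 1 - Math.pow(1 - t, 3);

interface Transition {
  durationMs: number;
  ease: (t: number) => number;
}

// Entrances run fast so the UI feels snappy and responsive;
// exits run a little longer so the user stays oriented.
const enterAnim: Transition = { durationMs: 120, ease: easeOut };
const exitAnim: Transition = { durationMs: 250, ease: easeOut };

// Eased progress of a transition at time tMs, clamped to [0, 1].
function progress(tr: Transition, tMs: number): number {
  const t = Math.min(Math.max(tMs / tr.durationMs, 0), 1);
  return tr.ease(t);
}
```

In a browser you’d hand values like these to CSS easing functions or the Web Animations API rather than computing frames by hand; the point is that the numbers are deliberate, per-direction choices, not defaults.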
Changing a popover from the default linear animation to an ease-out curve will make it seem more natural. Often an animation should be faster or slower depending on whether it’s an “in” or “out” animation; a faster animation at the start of an interaction makes the interface feel snappy and responsive. A slower animation at the end of an interaction helps a user stay oriented to the result of their actions.

Another example is in anticipating the user’s intent. A reactive UI should be constantly responding to a user’s input, with no lag between a click or hover and the visual, audible, or tactile feedback that follows. But with some interaction patterns, responding too quickly can make the interface feel twitchy or delicate. For this reason, nested dropdown menus often have invisible bridges connecting your cursor and the menu associated with what you’ve selected. This allows you to move smoothly to the next item without the sub-menu disappearing. These bridges are invisible, but drawing them accurately requires pixel precision nonetheless.

An example of Amazon’s mega dropdown menu, with invisible bridges connecting the top-level menu to the sub-menu. Image credit: Ben Kamens

You benefit from this kind of anticipatory design every day. While designing the original iPhone’s keyboard, Ken Kocienda explored new form factors that took advantage of the unique properties of the phone’s touch screen. But breaking away from the familiarity of a QWERTY keyboard proved challenging; users had a hard time learning new formats. Instead, Kocienda had the keyboard’s touch targets invisibly adjust based on what was being typed, preventing users from making errors in the first place. The exact coordinates of each tap are adjusted, too, to account for the fact that we can’t see what’s underneath our fingers while typing.

Early prototypes of the iPhone keyboard sacrificed familiarity in order to make the touchscreen interaction more finger-friendly.
Images from Ken Kocienda's Creative Selection, via Commoncog Case Library

The iPhone’s keyboard was one of the most crucial components to the success of such a risky innovation. Polish wasn’t a nice-to-have; it was the linchpin. The final design of the keyboard used a familiar QWERTY layout and hid all the complexity of the touch targets and error correction behind the scenes.

Image from Apple’s getting-started series on the original iPhone. Retrieved from the Internet Archive

The polish paradox is that the highest degrees of craft and quality are in the spaces we can’t see, the places we don’t necessarily look. Polish can’t be an afterthought. It must be an integral part of the process, a commitment to excellence from the beginning. The unseen effort to perfect every hidden aspect elevates products from good to great.
Your workplace community — the way you interact with your coworkers every day — can have a major impact on your productivity, happiness, and self-worth. It’s natural to want to shape the community in ways that might make you feel more comfortable. But how can you shape it? Over my career I’ve developed a framework that strikes a balance between authority and autonomy; it’s neither omnipotent intelligent design nor chaotic natural selection. The framework consists of three components: culture, policy, and enforcement. Each shapes and influences the other in an endless feedback loop. By understanding them in turn, and seeing how they intersect, we can be intentional in how we design our community.

What is culture?

For most of my career, I’ve held that culture is all that mattered. Specifically, I believed the quote often misattributed to Peter Drucker: “Culture eats strategy for breakfast.” Which is to say, if your team’s culture isn’t aligned with your strategy, you’ll never succeed. But what is culture? “Culture” refers to the shared values, beliefs, attitudes, and rituals that shape the interactions among employees within an organization. If you were to draw a big Venn diagram of every single coworker’s mental model of the company, culture would be the part in the middle where they all intersect.

In 2009, Patty McCord and Reed Hastings (chief talent officer and CEO of Netflix, respectively) wrote the book on modern tech company culture. More accurately, they wrote a 129-slide PowerPoint deck on the company’s culture; Sheryl Sandberg called it “one of the most important documents ever to come out of Silicon Valley.” It defined seven aspects of the culture, including its values, expectations for employees, approach to policy, ways of making decisions, compensation, and career progression frameworks. But culture can’t be written down. In the very same deck, McCord and Hastings cited Enron’s company values (“Integrity, communication, respect, excellence”).
The values, they noted, were chiseled in marble in the lobby of Enron’s office. But history shows that Enron’s real company culture contained none of those things.

What is policy?

When I was running my own company, I genuinely enjoyed thinking about company policies. At the time, I felt that even though the company was small and relatively poor, our policies could attract the best talent in the world. “Policy” refers to the guidelines, rules, and procedures that govern employees. Some policies are bound to legal requirements: discrimination, harassment, and security policies are in place to ensure that employees don’t break the law. Other policies aren’t backed up by laws, but apply to the whole company equally. Vacation policies, for example, usually dictate the number of days an employee can take paid leave from work, and how employees should schedule and coordinate those days. Other policies still are put in place by smaller teams of coworkers to govern functional or cross-functional units as they do their work. These are policies like requiring regular critiques and approvals of creative work, getting peer code reviews, or doing postmortems after technical issues.

Generally, I’m an acolyte of the McCord school of policy, which is to say I don’t think we need much at all: according to Netflix’s culture deck, in 2004 she said “There is no clothing policy at Netflix, but no one has come to work naked lately.” In 2009, GM’s current CEO Mary Barra (then the VP of global human resources) demonstrated this approach in dramatic fashion, rewriting the company’s clothing policy from a 10-page manifesto to the two-word maxim “dress appropriately.” However, I’ve seen the minimal policy approach go awry; when not supported by cultural norms or consistent enforcement, the lack of policy can reinforce a status quo of privilege, bias, and hierarchy.

What is enforcement?

I’ve always struggled with enforcement.
I believed that if culture and policy were strong, then there was no need for enforcement; everyone would feel compelled to follow the high standard they held for each other. But recently, I’ve understood its importance. That’s why it’s the third piece of this puzzle, the last one to fall into place. Culture is an unwritten belief. Policy is a recorded norm. “Enforcement” is an action that demonstrates those beliefs and norms. It can take many forms, like counseling, coaching, or discipline. It can be as light and casual as an emoji in a group chat, or as grave and serious as termination without notice.

Effective enforcement is hard. It requires being both consistent and flexible. Every situation is unique; good enforcement is fair and equitable, with an emphasis on clear communication and collaboration. While, traditionally, HR is the group that enforces a company’s policies, the highest-performing teams police themselves. Enforcement can positively reflect cultural values and policy beliefs. For instance, Kayak requires its engineers and designers to occasionally handle customer support, a task usually reserved for trained associates. Instead of merely suggesting this practice, Kayak enforces it. Kayak co-founder Paul English says “once they take those calls and realize that they have the authority and permission to give the customer their opinion of what is going on and then to make a fix and release that fix, it’s a pretty motivating part of the job.”

Balancing the feedback loop

Culture, policy, and enforcement constitute a web of forces in tension, holding the workplace community in balance. If any of the three pull too hard, the others can break, and the community can fall apart. So how do you keep the tension working for you? Culture can influence policy by first acknowledging and valuing policy. This doesn’t mean that policy has to be exhaustively written down; Mary Barra’s rewrite of GM’s dress code wasn’t about removing policy altogether.
She was asking managers and employees to think carefully about the policy, to consider how it shaped (and was shaped by) the company’s culture, and to make decisions together. At Wharton’s 2018 People Analytics Conference, Barra said: “if you let people own policies themselves, it helps develop them.”

Culture can influence enforcement by changing the manner of enforcement altogether. In a positive culture, enforcement is likely to be carried out in a fair and consistent manner. In a negative workplace culture, enforcement may be carried out in a punitive or arbitrary manner, which can lead to resentment. If your team’s mechanisms of enforcement are unclear, ask: “How do our cultural values result in action?”

Policy influences culture by creating common knowledge. It’s a kind of mythos, an origin story, or a shared language. On most teams, one of the first things any new member does is learn the team’s policies; the first week of an employee’s tenure is usually the only time they read the company handbook. This sets the tone for the rest of their time with the company or team. Take advantage of those moments to build your culture up.

Policy can influence enforcement by setting expectations, creating consistency, and guaranteeing fairness. Without clear policy, consistent enforcement is impossible and may seem arbitrary. If there is no policy at all, enforcement is entirely subjective and personal. Sometimes, the key to enforcement lies in simply defining, discussing, and committing to a policy. In the event that enforcement is necessary, the shared understanding created by clear policy will make it easy for the team to act.

Enforcement shapes culture by buttressing the shared values of the team. Negative aspects of culture like privilege and bias are, in part, a result of inconsistent enforcement of policy: unfair enforcement creates a culture where some people expect to be exempt from some rules.
Leaders should be just as beholden to a team’s values as those they lead, or else the culture will splinter along the fault lines of management layers. Enforcement shapes policy by creating (or reducing) “shadow policy.” That is, if not all policies are enforced, and if there are expectations that are enforced but not written or communicated, team members will tend to ignore policies altogether. In many cases of white-collar crime or malfeasance, shadow policies overwhelmed the written rules, undermining them entirely.

Conclusion

Culture, policy, and enforcement are three aspects of every workplace community. The ways in which they interact define the health of that community. When they’re in balance, the community can grow and adapt to challenges without losing its identity, like an animal evolving, reacting to its environment by adapting over generations. If those aspects of community are out of balance, teams, functions, and entire companies are brittle and self-destructive. Bad culture undermines well-intentioned policy. Unclear, unwritten policy leads to unfair and inconsistent enforcement. Too much enforcement, or not enough, or the wrong kind at the wrong time, can fracture culture into in-groups and out-groups. In these ways, the balance of culture, policy, and enforcement is vital. Being vigilant about the balance, regardless of your role, will help you shape and guide your workplace community. The more your team works to understand these components, and the more they make intentional choices to keep them in healthy tension, the happier, more productive, and more fulfilled you’ll be.
What we lost when everything became a phone, and when the phone became everything.

In 2001, I took a train from Providence to Detroit. What should have been a 12-hour journey stretched into 34 when we got caught in a Buffalo blizzard. As the train sat buried in rapidly accumulating snow, bathrooms failed, food ran out, and passengers struggled to cope with their containment. I had taken along my minidisc player and just three discs, assuming I’d spend most of the trip sleeping. With nothing else to do but stay put in my seat, I got to know those three albums very, very well. I’ve maintained a relationship with them through format fluidity. Over the course of my life, I’ve had copies of them on cassette tape, originals on compact disc, more copies on MiniDisc, purchased (and pirated) .mp3, .wav, and .flac files, and access through a dozen different streaming services. Regardless of how I listen to them, I am still transported back to that snow-bound train. After nearly twenty-five years, I had come to assume that this effect would be permanent. But I never expected it to intensify — in a sudden feeling of full return to the body of my youth — like it did when I dug out my old MiniDisc player, recharged its battery, and pressed play on the very same discs I held back in 2001. The momentary flash of being back on that train, of the raw exhilaration of the cold and of being alone in it, of reinhabiting a young mind still reeling from what was formative, culture-wide shock on September 11th — it all came back. This was truly a blast from the past.

In some ways, I am simply describing true nostalgia. I had a sense of return, and a mix of pleasure and pain. But unlike other times, when simply replaying some music would trigger recall, this was as if the physical objects — the player and discs themselves — contained the original moment, and turning it on and pressing play released it back into my mind.
To the Everything Machine and back

When Steve Jobs unveiled the first iPhone, he presented it as three essential devices in one: “an iPod, a phone, and an internet communicator.” The audience cheered at each revelation. Of course they did — who wouldn’t want to carry one device instead of three? For a citizen of the early aughts, a single “everything machine” was the dream. The consolidation seemed like an obvious win for convenience, for progress, for the future itself. Nearly twenty years later, we can see that this convergence did more than just empty our pockets of multiple devices. It fundamentally transformed our relationship with technology and information. Today’s iPhone isn’t just a unified tool for known purposes; it has become Marshall McLuhan’s medium-as-message, reshaping not just how we do things but what we choose to do and think about, what we know and want to know, what we believe and are. I doubt even Steve Jobs, a man capable of grandiosity to the extreme, could have imagined the epistemological and ontological effects of the iPhone.

This realization has been gradual. Books, films, music, and a near-constant conversation have been the public reckoning with the everything machine. We grapple with our newly acquired digital addiction in as many ways as it manifests. We do everything we can to counter the everything machine. One thing I have done, mostly out of curiosity, is to go back to the single-function devices I have accumulated over the years. Some of them have been put away, turned off for longer than they were ever out and on. Simply turning them back on has been illuminating. Each one has reactivated a memory. Each one has reminded me of what it was like to use it for the first time, back when it was the latest and greatest — when it hinted at a world to come as much as it achieved something its present required.
What started as a backward-looking survey of sorts — sifting through a catalog of dusty devices and once-murky memories — revealed something unexpected: not only did these older, limited devices create a different kind of relationship with technology, catalyzing imagination rather than just consuming attention; there is still a place for them today. For context, here’s a list of the more interesting devices I have in what is a small, personal museum of technology:

A partial catalog of my personal device library

Device | Media | Year
Nintendo GameBoy | Video Game Console | 1989
Qualcomm QCP-860 | Mobile Phone | 1999
Sony CMT CP11 | Desktop Audio System | 2000
Sony MXD-D40 | CD/MiniDisc Deck | 2001
Apple iPod 1st Generation | mp3 Player | 2001
Handspring Visor | PDA | 2001
Cybiko Classic | PDA | 2001
Tascam MD-350 | MiniDisc Player/Recorder | 2001
Sony MZ-B10 | Portable MiniDisc Player/Recorder | 2002
Siemens C55 | Mobile Phone | 2002
Sony CLIÉ PEG-SJ22 | PDA | 2003
BlackBerry Quark | Smartphone | 2003
Canon PowerShot A70 | Digital Camera | 2003
Sony Net MD Walkman MZ-N920 | Portable MiniDisc Player/Recorder | 2004
Sony DCR-HC36 | MiniDV Camcorder | 2006
OLPC XO | Laptop Computer | 2007
Sony NWZ-S615F | Digital Media Player | 2007
Sony NWZ-A815 | Digital Media Player | 2007
Sony NWZ-A726 | Digital Media Player | 2008
Cambridge Audio CXC | Compact Disc Transport | 2015
Sony NW-E394 | Digital Media Player | 2016
Sony NW-A105 | Digital Media Player | 2019
Yoto Player 1st Generation | Audio Player | 2020
Yoto Mini | Audio Player | 2021
Cambridge Audio CXA81 | Integrated Amplifier | 2020

easier to use. But if it is a better experience for the writer, who can argue with that? After all, in a world of as many options as we have, ease is not the only measure of value; there are as many measures as there are choices. Subjective experience might as well take the lead.

There is also a common worry that returning to single-purpose devices is risky — that their media is somehow more fragile than cloud-hosted digital content. But I’ve found the opposite to be true.
I returned to Blu-Ray when favorite shows vanished from streaming services. I started recording voices and broadcasts to MiniDisc when I realized how many digital files I’d lost between phone upgrades. My old MiniDiscs still work perfectly, my MiniDV tapes still play, my GameBoy cartridges still save games. It’s not the media that’s fragile, it’s the platform. And sometimes, the platform wasn’t fragile, the market was. MiniDisc is, again, a great example of this. The discs were more portable, more robust, and more easily recordable than larger Compact Discs, and the players were smaller and more fully featured. But they ran right into mp3 players in the marketplace. The average consumer valued high capacity and convenience over audio quality and recording features. But guess which devices still work just as they did back then with less effort? The MiniDisc players. Most mp3 players that aren’t also phones require a much greater effort to use today because of their dependence upon another computer and software that hasn’t been maintained. And, unlike most devices made today, older devices are much more easily repaired and modified. Of my list above, not a single device failed to do what it was created to do.

Besides comprising a museum of personal choices, these devices are a fascinating timeline of interface design. Each one represents a unique experiment in human-computer interaction, often feeling alien compared to today’s homogeneous landscape of austere, glass-fronted rectangles. Re-exploring them reminds me that just because an idea was left behind doesn’t mean it wasn’t valuable. Their diversity of approaches to simple problems suggests paths not taken, possibilities still worth considering. That the interface is physical, and in some cases also graphical, makes for a unique combination of efficiency and sensory pleasure. Analog enthusiasts, particularly in the hi-fi space, will opine on things like “knob-feel,” and they have a point.
When a button, switch, or knob has been created to meet our hands and afford us fine-tuning control over something buried within a circuit board, it creates an experience totally unlike tapping a symbol projected onto glass. It’s not that one is objectively better than another — and context obviously matters here — but no haptic engine has replicated what a switch wired with intention can do for a fingertip. Today’s smartphone reviewers will mention button “clickiness,” but if that’s what gets you excited, I encourage you to flip a GameBoy’s switch again and feel the click that precedes the Nintendo chime; eject a MiniDisc and feel the spring-loaded mechanism’s vibration against the palm of your hand; drag the first iPod’s clickwheel with your thumb in a way that turned a lo-fi text list of titles into something with weight.

Physicality is what makes a device an extension of a body. Function is what makes a device an extension of a mind. And single-function devices, I believe, do this better. By doing less, of course, they can only be so distracting. Compared to the everything machine and the permanent state of cognitive fracture it has created, this is something we should look back upon with more than a bit of nostalgia. We still have something to learn from a device that is intentionally limited and can fully embody that limitation. But the single-function device doesn’t just do less; it creates a different kind of mental space. A GameBoy can only play games, but this limitation means you’re fully present with what you’re doing. A MiniDV camcorder might be less convenient than a smartphone for capturing video, but its dedicated purpose makes you think more intentionally about what you’re recording and why.
For many contemporary enthusiasts, the limitations of old media create artifacts and effects that are now aesthetically desirable: they want the lower resolution, the glitchiness, and the blurring of old camcorders in the same way that modern digital camera users apply software-driven film emulation recipes to synthesize the effects once produced by developing physical film. The limitation heightens the creation.

Each device a doorway

These devices remind us that technological progress isn’t always linear. Sometimes what we gain in convenience, we lose in engagement. The friction of switching between different devices might have been — and remains — inefficient, but it created natural boundaries between different modes of activity. Each device was a doorway to a specific kind of experience, rather than a portal to endless possibility.

Modern devices have their place. When it comes to remaining in communication, I wouldn’t trade my current smartphone for the phone I used twenty years ago. As critical as I am of the everything machine, I’m more inclined to build better personal-use habits than to replace it with a worse experience of the features I use. But there is also room for rediscovering old devices and maintaining relationships with technologies that do less. I actually prefer playing movies and music on physical media rather than through a streaming interface; I would jump at the chance to reimagine my smartphone with fewer features and a more analog interface.

Limitations expand our experience by engaging our imagination. Unlimited options arrest our imagination by capturing us in the experience of choice. One, I firmly believe, is necessary for creativity, while the other is its opiate. Generally speaking, we don’t need more features. We need more focus. Anyone working in interaction and product design can learn from rediscovering how older devices engaged the mind and body to create an experience far more expansive than their function.
The future of computing, I hope, is one that will integrate the concept of intentional limitation. I think our minds and memories will depend upon it.
Five fictional interface concepts that could reshape how humans and machines interact.

Every piece of technology is an interface. Though the word has come to be shorthand for what we see and use on a screen, an interface is anything that connects two or more things together. While that technically means a piece of tape could be considered an interface between a picture and a wall, or a pipe between water and a home, interfaces become truly exciting when they create both a physical connection and a conceptual one — when they create a unique space for thinking, communicating, creating, or experiencing.

This is why, despite the flexibility and utility of multifunction devices like the smartphone, single-function computing devices still have the power to fascinate us. The reason, I believe, is not just that single-function devices let their users focus fully on the experience they create, but that the device can be fully built for that experience. Every aspect of its physical interface can be customized to its functionality; it can have dedicated buttons, switches, knobs, and displays that directly connect our bodies to its features, rather than abstracting them through symbols under a pane of glass.

A perfect example comes from the very company responsible for steering our culture away from single-function devices. Before the iPhone, Apple’s most influential product was the iPod, which won users over with an innovative approach to a physical interface: the clickwheel. It took the hand’s capacity for fine motor control and coupled it with the need for speed in navigating a suddenly longer list of digital files. With a subtle but feel-good gesture, you could skip through thousands of files fluidly. It was seductive, and it encouraged us all to make full use of the newfound capacity the iPod provided. It was good for users and good for the .mp3 business.
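The clickwheel’s fluid skipping is, at heart, a simple interaction-design idea: slow, deliberate rotation moves the selection one item at a time, while a fast spin jumps proportionally further, so a list of thousands stays navigable. As a rough illustration — not Apple’s actual implementation; the function names and the 90-degrees-per-second threshold are invented for the sketch — an acceleration curve is all it takes:

```python
# Hypothetical sketch of clickwheel-style scroll acceleration.
# Slow rotation gives single-item precision; fast rotation skips
# proportionally more items. All names and numbers are illustrative.

def scroll_step(degrees_per_second: float) -> int:
    """Map rotation speed to how many list items to skip."""
    if degrees_per_second < 90:  # deliberate, fine-grained motion
        return 1
    # faster spins skip roughly linearly more items
    return max(1, int(degrees_per_second / 90))

def navigate(index: int, speed: float, direction: int, length: int) -> int:
    """Advance the selection, clamped to the list bounds."""
    index += direction * scroll_step(speed)
    return max(0, min(length - 1, index))
```

The real trick is in tuning that curve against the thumb’s sense of momentum — the mapping above is the skeleton, not the feel.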
I may be overly nostalgic about this, but no feature of the iPhone feels as good to use as the clickwheel did. Of course, that’s an example that sits right at the nexus between dedicated — old-fashioned — devices and the smartphonization of everything. Prior to the iPod, we had many single-focus devices and countless examples of physical interfaces that gave people unique ways of doing things. Whenever I use these kinds of devices — particularly physical media devices — I start to imagine alternate technological timelines, ones where the iPhone didn’t determine two decades of interface consolidation. I go full sci-fi.

Science fiction, by the way, hasn’t just predicted our technological future. We all know the classic examples, particularly those from Star Trek: the communicator and tricorder anticipated the smartphone; the PADD anticipated the tablet; the ship’s computer anticipated Siri, Alexa, Google, and AI voice interfaces; the entire interior anticipated the Jony Ive glass filter on reality. It’s enough to make a case that Trek didn’t so much anticipate these things as inspire those who watched it as young people and then matured into careers in design and engineering.

But science fiction has also been fertile ground for imagining very different ways for humans and machines to interact. For me, the most compelling interface concepts from fiction are the ones built upon radically different approaches to human-computer interaction. Today, there’s a hunger to “get past” screen-based computer interaction, which I think is largely born out of a preference for novelty and a desire for the riches that come from bringing an entirely new product category to market. With AI, the desire seems to be to redefine everything we’re used to using on a screen through a voice interface — something I think is a big mistake.
And though I’ve written about the reasons why screens still make a lot of sense, what I want to focus on here are different interface paradigms that still make use of a physical connection between people and machines. I think we’ve just scratched the surface of the potential of physical interfaces. Here are a few examples that come to mind — untried or untested ideas that captivate my imagination.

Multiple Dedicated Screens: 2001’s Discovery One

Our current computing convention is to focus on a single screen, which we then often divide among a variety of applications. The computer workstations aboard the Discovery One in 2001: A Space Odyssey featured something we rarely see today: multiple, dedicated smaller screens. Each screen served a specific, stable purpose throughout a work session. That simple shift — physically isolating environments and distributing them across separate displays — is worth considering as a deliberate choice now, not just as an arbitrary limitation defined by how large screens could be when the film was produced.

Placing physical boundaries between screen-based environments, rather than the soft, constantly shifting divisions we manage on our widescreen displays, might seem cumbersome and unnecessary at first. But I wonder what half a century of computing that way would have created, compared with what we ended up with thanks to the PC. Instead of spending time repositioning and reprioritizing windows — a task that has somehow become a significant part of modern computer use — dedicated displays would allow us to assign specific screens for ambient monitoring and others for focused work.

The psychological impact could be profound. Choosing which information deserves its own physical space creates a different relationship with that information. It becomes less about managing digital real estate and more about curating meaningful, persistent contexts for different types of thinking.
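To make the contrast with window management concrete, here is a minimal, purely hypothetical sketch (all names invented) of what “dedicated” means in software terms: each display is bound to one role for the whole session and refuses content that belongs elsewhere, so nothing can be shuffled around the way windows are:

```python
# Hypothetical sketch of Discovery One-style dedicated displays:
# a screen's role is fixed for the session, so content placement
# is a curation decision, not an ongoing management task.

class DedicatedDisplay:
    def __init__(self, role: str):
        self.role = role  # fixed for the whole session

    def show(self, content_role: str, content: str) -> str:
        # a dedicated display rejects content meant for another role
        if content_role != self.role:
            raise ValueError(f"display is dedicated to {self.role!r}")
        return f"[{self.role}] {content}"

# ambient monitoring on the sides, focused work in the center
workstation = {
    "left": DedicatedDisplay("telemetry"),
    "center": DedicatedDisplay("work"),
    "right": DedicatedDisplay("comms"),
}
```

The interesting constraint is the one the sketch enforces: you decide once, up front, what each physical space is for.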
The Sonic Screwdriver: Intent as Interface

The Doctor’s sonic screwdriver from Doctor Who represents perhaps the most elegant interface concept ever imagined: a universal tool that somehow interfaces with any technology through harmonic resonance. But the really interesting aspect isn’t the pseudo-scientific explanation — it’s how the device responds to intent rather than requiring learned commands or specific inputs.

The sonic screwdriver suggests technology that adapts to human purpose rather than forcing humans to adapt to machine constraints. Instead of memorizing syntax, keyboard shortcuts, or navigation hierarchies, the user simply needs to clearly understand what they want to accomplish. The interface becomes transparent, disappearing entirely in favor of direct intention-to-result interaction. This points toward computing that works more like natural tool use — the way a craftsperson uses a hammer or chisel — where the tool extends human capability without requiring conscious attention to the tool itself. The Doctor’s screwdriver may, at this point, be indistinguishable from magic, but in a future with increased miniaturization, nanotech, and quantum computing, a personal device shaped by intent could be possible.

Al’s Handlink: The Mind-Object

In Quantum Leap, Al’s handlink device looks like a smartphone-sized Mondrian painting: no screen, no discernible buttons, just blocky areas of color that illuminate as he uses it. As the show progressed, the device became increasingly abstract until it seemed impossible that any human could actually operate it. But perhaps that’s the point. The handlink might represent a complete paradigm shift toward iconic and symbolic visual computing, or it could be something even more radical: a mind-object, a projection within a projection coming entirely from Al’s consciousness. A totem that’s entirely imaginary yet functionally real.
In the context of the show, that was an explanation that made sense to me — Al, after all, wasn’t physically there with his time-leaping friend Sam; he was a holographic projection from a stable time in the future. He could have looked like anything; so, too, his computer. But the handlink as a mind-object also suggests computing that exists at the intersection of technology and parapsychology — interfaces that respond to mental states, emotions, or subconscious patterns rather than explicit physical inputs. What kind of computing would exist in a world where telepathy was as commonly experienced as the five senses?

Penny’s Multi-Page Computer: Hardware That Adapts

Inspector Gadget’s niece Penny carried a computer disguised as a book, anticipating today’s foldable devices. But unlike our current two-screen foldables arranged in codex format, Penny’s book had multiple pages, each providing a unique interface tailored to specific tasks. This represents customization at both the software and hardware layers simultaneously. Rather than software conforming to hardware constraints, the physical device itself adapts to the needs of different applications. Each page could offer different input methods, display characteristics, or interaction paradigms optimized for specific types of work. This could be achieved similarly to the Doctor’s screwdriver, but it could also be more within reach if we imagine this kind of layered interface as composed of individual modules. Google’s Project Ara was an inspiring foray into modular computing that, I believe, still has promise today — if not more so, thanks to 3D printing. What if you could print your own interface?

The Holodeck as Thinking Interface

Star Trek’s Holodeck is usually discussed as virtual reality entertainment, but some episodes showed it functioning as a thinking interface — a tool for conceptual exploration rather than just immersive experience.
When Data’s artificial offspring used the Holodeck to visualize possible physical appearances while exploring identity, it functioned much like we use Midjourney today: prompting a machine with descriptions to produce images representing something we’ve already begun to visualize mentally. In another episode, when crew members used it to reconstruct a shared suppressed memory, it became a collaborative medium for group introspection and collective problem-solving. In both cases, the interface disappeared entirely. There was no “using” or “inhabiting” the Holodeck in any traditional sense — it became a transparent extension of human thought processes, whether individual identity exploration or collective memory recovery.

Beyond the Screen, but Not the Body

Each of these examples suggests moving past our current obsession with maximizing screen real estate and window management. They point toward interfaces that work more like natural human activities: environmental awareness, tool use, conversation, and collaborative thinking. The best interfaces we never built aren’t just sleeker screens — they’re fundamentally different approaches to creating that unique space for thinking, communicating, creating, and experiencing that makes technology truly exciting. We’ve spent two decades consolidating everything into glass rectangles. Perhaps it’s time to build something different.