It used to be easy to pick colors for design systems. Years ago, you could pick a handful of colors to match your brand’s ethos, or start with an off-the-shelf palette (remember flatuicolors.com?). Each hue and shade served a purpose, and usually had a quirky name like “idea yellow” or “innovation blue”. This hands-on approach allowed for control and creativity, resulting in color schemes that could convey any mood or style. But as design systems have grown to keep up with ever-expanding software needs, the demands on color palettes have grown exponentially too. Modern software needs accessibility, adaptability, and consistency across dozens of devices, themes, and contexts. Picking colors by hand is practically impossible. This is a familiar problem to the Stripe design team. In “Designing accessible color systems,” Daryl Koopersmith and Wilson Miner presented Stripe’s approach: using perceptually uniform color spaces to create aesthetically pleasing and accessible systems. Their...
9 months ago

More from The personal website of Matthew Ström

UI Density

Interfaces are becoming less dense. I’m usually one to be skeptical of nostalgia and “we liked it that way” bias, but comparing websites and applications of 2024 to their 2000s-era counterparts, the spreading out of software is hard to ignore. To explain this trend, and to suggest how we might regain density, I started by asking what, exactly, UI density is. It’s not just the way an interface looks at one moment in time; it’s about the amount of information an interface can provide over a series of moments. It’s about how those moments are connected through design decisions, and how those decisions are connected to the value the software provides.

I’d like to share what I found. Hopefully this exploration helps you define UI density in concrete and usable terms. If you’re a designer, I’d like you to question the density of the interfaces you’re creating; if you’re not a designer, use the lens of UI density to understand the software you use.

Visual density

We think about density first with our eyes. At first glance, density is just how many things we see in a given space. This is visual density. A visually dense software interface puts a lot of stuff on the screen; a visually sparse interface puts less stuff on the screen.

Bloomberg’s Terminal is perhaps the most common example of this kind of density. On a single screen, you’ll see scrolling sparklines of the major market indices, detailed trading-volume breakdowns, tables with dozens of rows and columns, and scrolling headlines containing the latest news from agencies around the world, along with UI signposts for all of the above, keyboard shortcuts, and quick actions to take.

A screenshot of Terminal’s interface. Via Objective Trade on YouTube

Craigslist is another visually dense example, with its hundreds of plain links to categories and its spartan search-and-filter interface. McMaster-Carr’s website shares similar design cues, listing out details for many product variations in a very small space.
Screenshots of Craigslist's homepage and McMaster-Carr's product page circa 2024.

You can form an opinion about the density of these websites simply by looking at an image for a fraction of a second. This opinion comes from our subconscious, so it’s fast and intuitive. But like other snap judgments, it’s biased and unreliable. For example, which of these images is more dense? Both images have the same number of dots (500). Both take up the same amount of space. But at first glance, most people say image B looks more dense.1

What about these two images? Again, both images have the same number of dots, and are the same size. But organizing the dots into groups changes our perception of density.

Visual density — our first, instinctual judgment of density — is unpredictable. It’s impossible to be fully objective in matters of design. But if we want to have conversations about density, we should aim for the most consistent, meaningful, and useful definition possible.

Information density

In The Visual Display of Quantitative Information, Edward Tufte approaches the design of charts and graphs from the ground up: every bit of ink on a graphic requires a reason, and nearly always that reason should be that the ink presents new information.

Tufte introduces the idea of “data-ink,” defined as the useful parts of a given visualization. Tufte argues that visual elements that don’t strictly communicate data — whether a scale value, a label, or anything else — should be eliminated. Data-ink isn’t just the space a chart takes up. Some charts use very little extraneous ink, but still take up a lot of physical space. Tufte is talking about information density, not visual density.

Information density is a measurable quantity: to calculate it, you simply divide the amount of “data-ink” in a chart by the total amount of ink it takes to print it. Of course, what is and is not data-ink is somewhat subjective, but that’s not the point.
The point is to get the ratio as close to 1 as possible. You can increase the ratio in two ways:

Add data-ink: provide additional (useful) data.
Remove non-data-ink: erase the parts of the graphic that don’t communicate data.

Tufte's examples of graphics with a low data-ink ratio (first) and a high one (second). Reproduced from Edward Tufte's The Visual Display of Quantitative Information

There’s an upper limit to information density, which means you can subtract too much ink, or add too much information. The audience matters, too: a bond trader at their 4-monitor desk will have a pretty high threshold; a 2nd grader reading a textbook will have a low one.

Information density is related to visual density. Usually, the higher the information density, the more dense a visualization will look. For example, take the train schedule published by E.J. Marey in 1885.2 It shows the arrival and departure times of dozens of trains across 13 stops from Paris to Lyon. The horizontal axis is time, and the vertical axis is space. The distance between stops on the chart reflects how far apart they are in the real world. The data-ink ratio is close to 1, allowing a huge amount of information — more than 260 arrival and departure times — to be packed into a relatively small space.

The train schedule visualization published by E.J. Marey in 1885. Reproduced from Edward Tufte's The Visual Display of Quantitative Information

Tufte makes this idea explicit: “Maximize data density and the [amount of data], within reason (but at the same time exploiting the maximum resolution of the available data-display technology).” He puts it more succinctly as the “Shrink Principle”: graphics can be shrunk way down.

Information density is clearly useful for charts and graphs. But can we apply it to interfaces? The first half of the equation — information — applies to screens. We should maximize the amount of information that each part of our interface shows.
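The data-ink ratio is simple enough to express directly. Here is a minimal sketch of the calculation; the element model and names are my own illustration, not Tufte's notation: classify each element of a chart as data-carrying or not, then divide the data-ink by the total ink.

```typescript
// A hypothetical chart element: how much "ink" it uses, and whether
// that ink communicates data. (Illustrative model, not Tufte's.)
interface ChartElement {
  name: string;
  ink: number;      // arbitrary ink units, e.g. pixels covered
  isData: boolean;  // does this element present information?
}

// Tufte's data-ink ratio: data-ink divided by total ink.
function dataInkRatio(elements: ChartElement[]): number {
  const total = elements.reduce((sum, e) => sum + e.ink, 0);
  const data = elements
    .filter((e) => e.isData)
    .reduce((sum, e) => sum + e.ink, 0);
  return total === 0 ? 0 : data / total;
}

// A chart with heavy decoration scores 50 / 100 = 0.5.
const chart: ChartElement[] = [
  { name: "plotted points",    ink: 40, isData: true },
  { name: "axis labels",       ink: 10, isData: true },
  { name: "heavy gridlines",   ink: 30, isData: false },
  { name: "decorative border", ink: 20, isData: false },
];
```

Erasing the gridlines and border pushes this example's ratio from 0.5 to 1, which is exactly the second of the two strategies above.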
But the second half of the equation — ink — is a bit harder to translate. It’s tempting to think that pixels and ink are equivalent. But any interface with more than a few elements needs separators, structural elements, and signposts to help a user understand the relationship each piece has to the others. It’s also tempting to follow Tufte’s Shrink Principle and try to eliminate all the whitespace in UI. But some whitespace has meaning almost as salient as the darker pixels of graphic elements. And we haven’t even touched on shadows, gradients, or color highlights; what role do they play in the data-ink equation?

So, while information density is a helpful stepping stone, it’s clear that it’s only part of the bigger picture. How can we incorporate all of the design decisions in an interface into a more objective, quantitative understanding of density?

Design density

You might have already seen the first challenge in defining density in terms of design decisions: what counts as a design decision? In UI, UX, and product design, we make many decisions, consciously and subconsciously, in order to communicate information and ideas. But why do those particular choices convey the meaning that they do? Which ones are superfluous or simply aesthetic, and which are actually doing the heavy lifting?

These questions sparked 20th-century German psychologists to explore how humans understand and interpret shapes and patterns. They called this field “gestalt,” which in German means “form.” In the course of their exploration, Gestalt psychologists described principles that explain how some things appear orderly, symmetrical, or simple, while others do not. While these psychologists weren’t designers, in some sense they discovered the fundamental laws of design:

Proximity: we perceive things that are close together as comprising a single group.
Similarity: objects that are similar in shape, size, color, or in other ways appear related to one another.
Closure: our minds fill in gaps in designs, so we tend to see whole shapes even if there are none.
Symmetry: if we see shapes that are symmetrical to each other, we perceive them as a group formed around a center point.
Common fate: when objects move, we mentally group the ones that move in the same way.
Continuity: we can perceive objects as separate even when they overlap.
Past experience: we recognize familiar shapes and patterns even in unfamiliar contexts. Our expectations are based on what we’ve learned from our past experience of those shapes and patterns.
Figure-ground relationship: we interpret what we see in a three-dimensional way, allowing even flat 2D images to have foreground and background elements.

Examples of the principles of proximity (left), similarity (center), and closure (right).

Gestalt principles explain why UI design goes beyond the pixels on the screen. For example:

Because of the principle of similarity, users will understand that text with the same size, font, and color serves the same purpose in the interface.
The principle of proximity explains why, when a chart is close to a headline, it’s apparent that the headline refers to the chart. For the same reasons, a tightly packed grid of elements will look related, and separate from a menu above it that’s set off by ample space.
Thanks to our past experience with switches, combined with the figure-ground principle, a skeuomorphic design for a toggle switch will make it obvious to a user how to instantly turn on a feature.

So, instead of focusing on the pixels, we can think of design decisions as how we intentionally use gestalt principles to communicate meaning. And just as Tufte’s data-ink ratio compares the strictly necessary ink to the total ink used to print a chart, we can calculate a gestalt ratio that compares the strictly necessary design decisions to the total decisions used in a design. This is design density.
Four different treatments of the same information, using different types and amounts of gestalt principles. Which is the most dense?

This is still subjective: a design decision that seems necessary to some might be superfluous to others. Our biases will skew our assessment, whether they’re personal tastes or cultural norms. But when it comes to user interfaces, counting design decisions is much more useful than counting the amount of data or “ink” alone.

Design density isn’t perfect. User interfaces exist to do work, to have fun, to waste time, to create understanding, to facilitate personal connections, and more. Those things require the user to take one or more actions, and so density needs to look beyond components, layouts, and screens. Density should comprise all the actions a user takes in their journey — it should count in space and time.

Density in time

Just like the amount of stuff in a given space dictates visual density, the amount of things a user can do in a given amount of time dictates temporal — time-wise — density. Loading times are the biggest factor in temporal density. The faster the interface responds to actions and loads new pages or screens, the more dense the UI is. And unlike 2-dimensional whitespace, there’s almost no lower limit to the space needed between moments in time.

Bloomberg’s Terminal loads screens full of data instantaneously.

With today’s bloated software, making a UI more dense in time is more impactful than just squeezing more stuff onto each screen. That’s why Bloomberg’s Terminal is still such a dominant tool in the financial analysis space: it loads data almost instantaneously. A skilled Terminal user can navigate between dozens of charts and graphs in milliseconds. There are plenty of ways to cram tons of financial data into a table, but loading it with no latency is Terminal’s real superpower.

But say you’ve squeezed every second out of the loading times of your app. What next?
There are some things that just can’t be sped up: you can’t change a user’s internet connection speed, or the computing speed of their CPU. Some operations, like uploading a file, waiting for a customer support response, or processing a payment, involve complex systems with unpredictable variables. In these cases, instead of changing the amount of time between tasks, you can change the perception of that time:

Actions less than 100 milliseconds apart will feel simultaneous. If you tap on an icon and, 100ms later, a menu appears, it feels like no time at all passed between the two actions. So, if there’s an animation between the two actions — the menu slides in, for example — the illusion of simultaneity might be broken. For the smallest temporal spaces, animations and transitions can make the app feel slower.3

Between 100 milliseconds and 1 second, the connection between two actions is broken. If you tap on a link and there’s no change for a second, doubt creeps in: did you actually tap on anything? Is the app broken? Is your internet working? Animations and transitions can bridge this perceptual gap. Visual cues in these spaces make the UI feel more dense in time.

Gaps between 1 and 10 seconds can’t be bridged with animations alone; research4 shows that users are most likely to abandon a page within the first 10 seconds. This means that if two actions are far enough apart, a user will leave the page instead of waiting for the second action. If you can’t decrease the time between these actions, show an indeterminate loading indicator — a small animation that tells the user that the system is operating normally.

Gaps between 10 seconds and 1 minute are even harder to fill. After seeing an indeterminate loader for more than 10 seconds, a user is likely to see it as static, not dynamic, and start to assume that the page isn’t working as expected.
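Taken together with the remedies for longer gaps described below, these thresholds map to a simple rule of thumb for choosing feedback. A minimal sketch; the function name and feedback labels are illustrative, not from any real framework:

```typescript
// Choose UI feedback for an operation, given its expected duration.
// Thresholds follow the perception guidance discussed in the text.
type Feedback =
  | "none"                  // feels instantaneous; animation may even slow it
  | "transition"            // bridge the perceptual gap with an animation
  | "indeterminate-loader"  // show that the system is operating normally
  | "determinate-loader"    // show how much time is left
  | "notify-later";         // let the user leave; notify on completion

function feedbackFor(expectedMs: number): Feedback {
  if (expectedMs < 100) return "none";
  if (expectedMs < 1_000) return "transition";
  if (expectedMs < 10_000) return "indeterminate-loader";
  if (expectedMs < 60_000) return "determinate-loader";
  return "notify-later";
}
```

The point of codifying this at all is consistency: a user who sees a spinner for a 200ms save and nothing for a 30-second upload loses trust in both signals.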
Instead, you can use a determinate loading indicator — like a larger progress bar — that clearly indicates how much time is left until the next action happens. In fact, the right design can make the waiting time seem shorter than it actually is; the backwards-moving stripes that featured prominently in Apple’s “Aqua” design system made waiting times seem 11% shorter.5

For gaps longer than 1 minute, it’s best to let the user leave the page (or otherwise do something else), then notify them when the next action has occurred. Blocking someone from doing anything useful for longer than a minute creates frustration. Plus, long, complex processes are also susceptible to error, which can compound the frustration.

In the end, though, making a UI dense in time and space is just a means to an end. No UI is valuable because of the way it looks. Interfaces are valuable in the outcomes they enable — whether directly associated with some dollar value, in the case of business software, or tied to some intangible value like entertainment or education. So what is density really about, then? It’s about providing the highest-value outcomes in the smallest amount of time, space, pixels, and ink.

Density in value

Here’s an example of how value density manifests: a common suggestion for any form-based interface is to break long forms into smaller chunks, then put those chunks together in a wizard-type interface that saves your progress as you go. That’s because there’s no value in a partly-filled-in form; putting all the questions on a single page might look more visually dense, but if it takes longer to fill out, many users won’t submit it at all.

This form is broken up into multiple parts, with clear errors and instructions for resolution.

Making it possible for users to get to the end of a form with fewer errors might require the design to take up more space. It might require more steps, and take more time.
But if the tradeoffs in visual and temporal density make the outcome more valuable — either by increasing the submission rate or making the effort more worth the user’s time — then we’ve increased the overall value density. Likewise, if we can increase the visual and temporal density by making the form more compact, faster to load, and less error-prone, without subtracting value for the user or the business, then that’s an overall increase in density too. Channeling Tufte, we should try to increase value density as much as possible.

Solving this optimization problem can have some counterintuitive results. When the internet was young, companies like Craigslist created value density by aggregating and curating information and displaying it in pages of links. Companies like Yahoo and Altavista made it possible to search for that information, but still put aggregation at the fore. Google took a radically different approach: use the information gleaned from the internet’s long chains of linked lists to power a search box. Information was aggregating itself; a single text input was all users needed to access the entire web.

Google's and Yahoo's approaches to data, design, and value density haven't changed from 2001 (when the first screenshots were archived) to 2024 (when the second set of screenshots were taken). The value of the two companies' stocks reflects the result of these differing approaches.

Google’s UI was much less visually dense, but more value-dense by orders of magnitude. The results speak for themselves: Google went from a $23B valuation in 2004 to being worth over $2T today — closing in on a 100x increase. Yahoo went from being worth $125B in 2000 to being sold for $4.8B — less than 3% of its peak value.6

Conclusion

Designing for UI density goes beyond the visual aspects of an interface. It includes all the implicit and explicit design decisions we make, and all the information we choose to show on the screen.
It includes all the time and actions a user takes to get something valuable out of the software. So, finally, a concrete definition of UI density:

UI density is the value a user gets from the interface, divided by the time and space the interface occupies.

Speed, usability, consistency, predictability, information richness, and functionality all play an important role in this equation. By taking account of all these aspects, we can understand why some interfaces succeed and others fail. And by designing for density, we can help people get more value out of the software we build.

Footnotes & References

1. This is a very unscientific statement based on a poll of 20 of my coworkers. Repeatability is questionable. ↩︎
2. The provenance of the chart is interesting. Not much is known about the original designer, Charles Ibry; but what we do know points to even earlier iterations of the design. If you’re interested, read Sandra Rendgen’s fascinating history of the train schedule. ↩︎
3. I have no scientific backing for this claim, but I believe it’s because a typical blink occurs in 100ms. When we blink, our brains fill in the gap with the last thing we saw, so we don’t notice the blink. That’s why we don’t notice the gap between two actions that are less than 100ms apart. You can read more about this effect here: Visual Perception: Saccadic Omission — Suppression or Temporal Masking? ↩︎
4. Nielsen, Jakob. “How Long Do Users Stay on Web Pages?” Nielsen Norman Group, 11 Sept. 2011, https://www.nngroup.com/articles/how-long-do-users-stay-on-web-pages/ ↩︎
5. Harrison, Chris, Zhiquan Yeo, and Scott E. Hudson. “Faster Progress Bars: Manipulating Perceived Duration with Visual Augmentations.” Carnegie Mellon University, 2010, https://www.chrisharrison.net/projects/progressbars2/ProgressBarsHarrison.pdf ↩︎
6. HackerNews has pointed out that this is a ridiculous statement. And it is. Of course, value density isn’t the only reason why Google succeeded where Yahoo failed.
But as a reflection of how each company thought about their products, it was a good leading indicator. ↩︎

8 months ago 8 votes
The polish paradox

Polish is a word that gets thrown out in conversations about craft, quality, and beauty. We talk about it at the end of the design process, before the work goes out the door: let’s polish this up. Let’s do a polish sprint. Could this use more polish?

https://twitter.com/svlleyy/status/1780215102064452068

A tweet (xeet?) on my timeline asked: “what does polish in an app mean? fancy animations? clear consistent design patterns? hierarchy and colour? all the above?” I thought about it for a moment and got a familiar itch in the back of my brain. It’s a feeling that I associate with a zen kōan that goes (paraphrased): “A monk asked a zen master, ‘Does a dog have Buddha-nature?’ The master answered, ‘無.’”

無 (pronounced ‘wú’ in Mandarin or ‘mu’ in Japanese) literally translates to ‘not,’ as in ‘I have not done my chores today.’ It’s a negation of something, and in the kōan’s case, it’s the master’s way of saying — paradoxically — that there’s no point in answering the question.

In the case of the tweet, my 無-sense was tingling as I wrote a response: polish is something only the person who creates it will notice. It’s a paradox; polishing something makes it invisible. Which also means that pointing out examples of polish almost defeats the purpose. But in the spirit of learning, here are a few things that come to mind when I think of polish:

Note the direction the screws are facing. Photo by Chris Campbell, CC BY-NC 2.0 DEED

Next time you flip a wall switch or plug something into an outlet, take a second and look at the two screws holding the face plate down. Which direction are the slots in the screws facing? Professional electricians will (almost) always line the screw slots up vertically. This has no functional purpose, and isn’t determined by the hardware itself; the person who put the plate on had to make a conscious decision to do it.
Julian Baumgartner’s art restoration videos always include a note about his process for repairing or rebuilding the frame that the canvas is stretched over. When he puts the keys back into the frame to create extra tension, he attaches some fishing wire, wound around a tack and threaded through each key; this, he says, “ensures the keys will never be lost.” How many of these details lie hidden in the backs of the paintings hung on the walls of the world’s most famous museums and galleries?

A traditional go board, with a 15:14 aspect ratio.

A traditional go board isn’t square. It’s very slightly longer than it is wide, with a 15:14 aspect ratio. This accounts for the optical foreshortening that happens when looking across the board. For similar reasons, black go stones are traditionally slightly larger than white ones, as equal-sized stones would look unequal when seen next to each other on the board. The same subtle adjustments go into the shapes of letters in a typeface: round letters like ‘e’ and ‘a’ are slightly taller than square letters like ‘x’ or ‘v’. The crossbars of the ‘x’ don’t usually line up perfectly, either.

The success of these demonstrations of polish is dictated by just how hard they are to see. So how should polish manifest in product design?

One example is in UI animation. It’s tempting to put transitions and animations on every component in the interface; when done right, an animated UI feels responsive and pleasant to use. But the polish required to reach that point of feeling “intuitive” or “natural” is immense. Animations should happen fast enough to be perceived as instantaneous; the threshold for this is commonly cited as 100ms, and anything happening faster than this is indistinguishable from something happening right away. The speed of the animation has to be tuned to accelerate or decelerate at precise rates depending on how far the element is moving and what kind of transition is taking place.
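As a small illustration of that tuning, here is a generic cubic ease-out curve, which starts fast and decelerates. This is a common textbook formulation, not any specific framework's implementation:

```typescript
// Cubic ease-out: progress starts fast and decelerates toward the end.
// t is normalized time in [0, 1]; the return value is normalized progress.
function easeOutCubic(t: number): number {
  return 1 - Math.pow(1 - t, 3);
}

// Interpolate a property (say, a popover's y-offset) along the curve.
function animate(from: number, to: number, t: number): number {
  return from + (to - from) * easeOutCubic(t);
}
```

An “out” curve like this covers most of the distance early (87.5% of it by the halfway point), which is part of why it reads as snappy at the start of an interaction while still settling gently.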
Changing a popover from the default linear animation to an ease-out curve will make it seem more natural. Often an animation should be faster or slower depending on whether it’s an “in” or “out” animation: a faster animation at the start of an interaction makes the interface feel snappy and responsive, while a slower animation at the end of an interaction helps a user stay oriented to the result of their actions.

Another example is in anticipating the user’s intent. A reactive UI should be constantly responding to a user’s input, with no lag between clicks and hovers and visual, audible, or tactile feedback. But with some interaction patterns, responding too quickly can make the interface feel twitchy or delicate. For this reason, nested dropdown menus often have invisible bridges connecting your cursor and the menu associated with what you’ve selected. These bridges allow you to smoothly move to the next item without the sub-menu disappearing. They’re invisible, but drawing them accurately requires pixel precision nonetheless.

An example of Amazon’s mega dropdown menu, with invisible bridges connecting the top-level menu to the sub-menu. Image credit: Ben Kamens

You benefit from this kind of anticipatory design every day. While designing the original iPhone’s keyboard, Ken Kocienda explored new form factors that took advantage of the unique properties of the phone’s touch screen. But breaking away from the familiarity of a QWERTY keyboard proved challenging; users had a hard time learning new formats. Instead, Kocienda had the keyboard’s touch targets invisibly adjust based on what was being typed, preventing users from making errors in the first place. The exact coordinates of each tap on the screen are adjusted, too, based on the fact that we can’t see what’s underneath our fingers when we’re typing.

Early prototypes of the iPhone keyboard sacrificed familiarity in order to make the touchscreen interaction more finger-friendly.
Images from Ken Kocienda's Creative Selection, via Commoncog Case Library

The iPhone’s keyboard was one of the most crucial components to the success of such a risky innovation. Polish wasn’t a nice-to-have; it was the linchpin. The final design of the keyboard used a familiar QWERTY layout and hid all the complexity of the touch targets and error correction behind the scenes.

Image from Apple’s getting started series on the original iPhone. Retrieved from the Internet Archive

The polish paradox is that the highest degrees of craft and quality are in the spaces we can’t see, the places we don’t necessarily look. Polish can’t be an afterthought. It must be an integral part of the process, a commitment to excellence from the beginning. The unseen effort to perfect every hidden aspect elevates products from good to great.
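One of those hidden details, the invisible bridge behind nested dropdown menus, is concrete enough to sketch in code. The usual idea is a point-in-triangle test between the cursor's previous position and the submenu's near corners; while the cursor stays inside that triangle, the submenu stays open. All names here are illustrative; this is the general geometry, not Amazon's actual implementation:

```typescript
interface Point { x: number; y: number; }

// Sign of the cross product of (p1->p2) and (p1->p3): tells us which
// side of the line p1-p2 the point p3 falls on.
function cross(p1: Point, p2: Point, p3: Point): number {
  return (p2.x - p1.x) * (p3.y - p1.y) - (p2.y - p1.y) * (p3.x - p1.x);
}

// True if `cursor` lies inside the triangle formed by the cursor's
// previous position and the submenu's top and bottom near corners.
// While this returns true, keep the submenu open.
function insideBridge(
  cursor: Point,
  prev: Point,
  menuTop: Point,
  menuBottom: Point,
): boolean {
  const d1 = cross(prev, menuTop, cursor);
  const d2 = cross(menuTop, menuBottom, cursor);
  const d3 = cross(menuBottom, prev, cursor);
  const hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
  const hasPos = d1 > 0 || d2 > 0 || d3 > 0;
  // Inside if the cursor is on the same side of all three edges.
  return !(hasNeg && hasPos);
}
```

A cursor heading diagonally toward a submenu item will drift off the hovered row; as long as it stays inside this triangle, the menu doesn't flicker closed. That is exactly the kind of polish nobody is meant to notice.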

9 months ago 8 votes
Creating a positive workplace community

Your workplace community — the way you interact with your coworkers every day — can have a major impact on your productivity, happiness, and self-worth. It’s natural to want to shape the community in ways that might make you feel more comfortable. But how can you shape it?

Over my career I’ve developed a framework that strikes a balance between authority and autonomy; it’s neither omnipotent intelligent design nor chaotic natural selection. The framework consists of three components: culture, policy, and enforcement. Each shapes and influences the others in an endless feedback loop. By understanding them in turn, and seeing how they intersect, we can be intentional in how we design our community.

What is culture?

For most of my career, I’ve held that culture is all that mattered. Specifically, I believed the quote often misattributed to Peter Drucker: “Culture eats strategy for breakfast.” Which is to say, if your team’s culture isn’t aligned with your strategy, you’ll never succeed. But what is culture?

“Culture” refers to the shared values, beliefs, attitudes, and rituals that shape the interactions among employees within an organization. If you were to draw a big Venn diagram of every single coworker’s mental model of the company, culture would be the part in the middle where they all intersect.

In 2009, Patty McCord and Reed Hastings (chief talent officer and CEO of Netflix, respectively) wrote the book on modern tech company culture. More accurately, they wrote a 129-slide PowerPoint deck on the company’s culture; Sheryl Sandberg called it “one of the most important documents ever to come out of Silicon Valley.” It defined seven aspects of the culture, including its values, expectations for employees, approach to policy, ways of making decisions, compensation, and career progression frameworks.

But culture can’t simply be written down. In the very same deck, McCord and Hastings cited Enron’s company values (“Integrity, communication, respect, excellence”).
The values, they noted, were chiseled in marble in the lobby of Enron’s office. But history shows that Enron’s real company culture contained none of those things.

What is policy?

When I was running my own company, I genuinely enjoyed thinking about company policies. At the time, I felt that even though the company was small and relatively poor, our policies could attract the best talent in the world.

“Policy” refers to the guidelines, rules, and procedures that govern employees. Some policies are bound to legal requirements: discrimination, harassment, and security policies are in place to ensure that employees don’t break the law. Other policies aren’t backed up by laws, but apply to the whole company equally. Vacation policies, for example, usually dictate the number of days an employee can take paid leave from work, and how employees should schedule and coordinate those days. Still other policies are put in place by smaller teams of coworkers to govern functional or cross-functional units as they do their work. These are policies like requiring regular critiques and approvals of creative work, getting peer code reviews, or doing postmortems after technical issues.

Generally, I’m an acolyte of the McCord school of policy, which is to say I don’t think we need much at all: according to Netflix’s culture deck, in 2004 she said, “There is no clothing policy at Netflix, but no one has come to work naked lately.” In 2009, GM’s current CEO Mary Barra (then the VP of global human resources) demonstrated this approach in dramatic fashion, rewriting the company’s clothing policy from a 10-page manifesto to the two-word maxim “dress appropriately.” However, I’ve seen the minimal-policy approach go awry: when not supported by cultural norms or consistent enforcement, the lack of policy can reinforce a status quo of privilege, bias, and hierarchy.

What is enforcement?

I’ve always struggled with enforcement.
I believed that if culture and policy were strong, then there was no need for enforcement; everyone would feel compelled to follow the high standard they held for each other. But recently, I’ve come to understand its importance. That’s why it’s the third piece of this puzzle, the last one to fall into place.

Culture is an unwritten belief. Policy is a recorded norm. “Enforcement” is an action that demonstrates those beliefs and norms. It can take many forms, like counseling, coaching, or discipline. It can be as light and casual as an emoji in a group chat, or as grave and serious as termination without notice.

Effective enforcement is hard. It requires being both consistent and flexible. Every situation is unique; good enforcement is fair and equitable, with an emphasis on clear communication and collaboration. While, traditionally, HR is the group that enforces a company’s policies, the highest-performing teams police themselves.

Enforcement can positively reflect cultural values and policy beliefs. For instance, Kayak requires its engineers and designers to occasionally handle customer support, a task usually reserved for trained associates. Instead of merely suggesting this practice, Kayak enforces it. Kayak co-founder Paul English says, “once they take those calls and realize that they have the authority and permission to give the customer their opinion of what is going on and then to make a fix and release that fix, it’s a pretty motivating part of the job.”

Balancing the feedback loop

Culture, policy, and enforcement constitute a web of forces in tension, holding the workplace community in balance. If any one of the three pulls too hard, the others can break, and the community can fall apart. So how do you keep the tension working for you?

Culture can influence policy by first acknowledging and valuing policy. This doesn’t mean that policy has to be exhaustively written down; Mary Barra’s rewrite of GM’s dress code wasn’t about removing policy altogether.
She was asking managers and employees to think carefully about the policy, to consider how it shaped (and was shaped by) the company’s culture, and to make decisions together. At Wharton’s 2018 People Analytics Conference, Barra said: “if you let people own policies themselves, it helps develop them.”

Culture can influence enforcement by changing the manner of enforcement altogether. In a positive culture, enforcement is likely to be carried out in a fair and consistent manner. In a negative workplace culture, enforcement may be carried out in a punitive or arbitrary manner, which can lead to resentment. If your team’s mechanisms of enforcement are unclear, ask: “How do our cultural values result in action?”

Policy influences culture by creating common knowledge. It’s a kind of mythos, an origin story, or a shared language. On most teams, one of the first things any new member does is learn the team’s policies; the first week of an employee’s tenure is usually the only time they read the company handbook. This sets the tone for the rest of their time with the company or team. Take advantage of those moments to build your culture up.

Policy can influence enforcement by setting expectations, creating consistency, and guaranteeing fairness. Without clear policy, consistent enforcement is impossible and may seem arbitrary. If there is no policy at all, enforcement is entirely subjective and personal. Sometimes, the key to enforcement lies in simply defining, discussing, and committing to a policy. In the event that enforcement is necessary, the shared understanding created by clear policy will make it easy for the team to act.

Enforcement shapes culture by buttressing the shared values of the team. Negative aspects of culture like privilege and bias are, in part, a result of inconsistent enforcement of policy: unfair enforcement creates a culture where some people expect to be exempt from some rules.
Leaders should be just as beholden to a team’s values as those they lead, or else the culture will splinter along the fault lines of management layers.

Enforcement shapes policy by creating (or reducing) “shadow policy.” That is, if not all policies are enforced, and if there are expectations that are enforced but not written or communicated, team members will tend to ignore policies altogether. In many cases of white collar crime or malfeasance, shadow policies overwhelmed the written rules, undermining them entirely.

Conclusion

Culture, policy, and enforcement are three aspects of every workplace community. The ways in which they interact define the health of that community. When they’re in balance, the community can grow and adapt to challenges without losing its identity, like an animal evolving, reacting to its environment by adapting over generations. If those aspects of community are out of balance, teams, functions, and entire companies are brittle and self-destructive. Bad culture undermines well-intentioned policy. Unclear, unwritten policy leads to unfair and inconsistent enforcement. Too much enforcement, or not enough, or the wrong kind at the wrong time, can fracture culture into in-groups and out-groups.

In these ways, the balance of culture, policy, and enforcement is vital. Being vigilant about the balance, regardless of your role, will help you shape and guide your workplace community. The more your team works to understand these components, and the more intentional the choices they make to keep them in healthy tension, the happier, more productive, and more fulfilled you’ll be.

a year ago
Design-by-wire

There’s a lot of fear in the air. As AI gets better at design, it’s natural for designers to be worried about their jobs. But I think the question — will AI replace designers? — is a waste of time. Humans have always invented technology to do their work for them and will continue to do so as long as we exist. Let’s use our curiosity and creativity to imagine how technology will help us be better, more efficient, and more impactful. So, in that spirit, I’d like to share a metaphor that I think paints a picture of how the job of design will change in the next decade.

Flying by wire

Commercial airplanes are some of the most complicated machines humans have ever built. It took all of Wilbur Wright’s skill to fly the first powered airplane for a minute, just 10 feet off the ground. That plane, the Wright Flyer, could carry one person; it weighed 745 pounds with fuel and could reach a height of 30 feet at a maximum speed of 30 miles per hour.[1] The Airbus A380, currently the world’s largest commercial airliner, weighs over a million pounds when fully loaded. It can carry up to 853 people, flying up to 43,000 feet at a cruising speed of 561 mph — 85% the speed of sound. Just two people pilot the A380.[2]

The cockpit of an Airbus A380. Photo by Steve Jurvetson, CC BY 2.0

The A380, and all modern commercial airplanes, wouldn’t exist without something called “fly-by-wire.” Fly-by-wire is a system that translates a pilot’s inputs — changing the throttle to speed up or slow down, controlling pitch and roll with the yoke, turning knobs and dials in the cockpit — into coordinated movements of the airplane’s engines and control surfaces. The first fly-by-wire systems were a veritable nervous system of electric relays and motors; today, they’re sophisticated computers in the belly of the plane. Originally, fly-by-wire had nothing to do with automation.
As airplanes got larger, the cables, rods, and hydraulic links connecting the cockpit to the rest of the plane became a monumental design challenge. By replacing those complex, bulky components with electrical wires and switches, airplanes would be lighter and easier to maintain, with more room for passengers and cargo.

The first commercial airplane with a fly-by-wire system was the supersonic Concorde jet. At speeds of over Mach 1, it would be almost impossible for a pilot to move the control surfaces of the airplane through sheer mechanical force; fly-by-wire allowed pilots to smoothly operate the plane at any speed. And because sudden changes at top speed could be catastrophic, the fly-by-wire system could use analog circuitry to smooth out a pilot’s inputs.

An experimental fly-by-wire system in the Vought F-8 Crusader, using data-processing equipment adapted from the Apollo Guidance Computer

As fly-by-wire systems became more common, they went from faithfully transferring pilots’ inputs to interpreting and adjusting them. The Airbus A320, introduced in 1988, featured the first digital (computerized) fly-by-wire system; it included “flight envelope protection,” a system that prevents pilots from taking any action that would cause damage to the airplane. Depending on the speed, altitude, and phase of flight, the fly-by-wire system will ignore certain pilot inputs altogether.

Fly-by-wire has been the focus of both scrutiny and praise since its introduction. On one hand, it has saved lives: when US Airways Flight 1549 (an Airbus A320) flew through a flock of birds on takeoff, it lost all power. The pilots had to make an emergency landing in the Hudson River, flying the airplane unusually low and slow, risking putting the plane into an uncontrollable stall. The fly-by-wire system, with its flight envelope protection, ensured the plane could maneuver at the very edge of its capability, leading to a controlled landing with only a few serious injuries among those aboard.
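The two jobs described above — smoothing a pilot’s inputs and refusing inputs that leave the safe envelope — can be illustrated with a toy filter. To be clear, this is a deliberately simplified sketch, not any real Airbus control law; the limits and rate values are invented for the example.

```python
# Toy illustration of fly-by-wire input handling (NOT a real control law):
# a pitch command is first rate-limited (smoothing, like Concorde's analog
# circuitry), then clamped to a safe range (flight envelope protection).
from dataclasses import dataclass

@dataclass
class Envelope:
    # Illustrative limits only — invented for this sketch.
    max_pitch_deg: float = 30.0
    min_pitch_deg: float = -15.0
    max_rate_deg_s: float = 5.0  # how fast the command may change, per second

def protected_command(requested: float, previous: float,
                      env: Envelope, dt: float = 1.0) -> float:
    """Return the pitch command actually sent to the control surfaces."""
    # 1. Smooth: limit how far the command can move in one time step.
    max_step = env.max_rate_deg_s * dt
    step = max(-max_step, min(max_step, requested - previous))
    command = previous + step
    # 2. Protect: ignore any input that would leave the safe envelope.
    return max(env.min_pitch_deg, min(env.max_pitch_deg, command))
```

In this sketch, an abrupt 40° pull-up request from level flight is rate-limited to 5° in the first second, and even a sustained request is capped at the 30° envelope limit — the pilot’s input is interpreted, not faithfully transferred.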
On the other hand, fly-by-wire has been criticized for replacing parts of pilots’ expertise. In 2009, Air France Flight 447 (an Airbus A330) crashed in the Atlantic Ocean, killing all 228 passengers and crew. An investigation into the cause of the crash concluded that the autopilot and fly-by-wire protections started to malfunction when ice crystals interfered with the aircraft’s sensors; the pilots, used to flying with the safety of flight envelope protection, couldn’t correct for the errors, stalled the plane, and crashed into the ocean.

Whether you think fly-by-wire is a crucial innovation or a crutch, its effect on the airline industry is easy to demonstrate. Bigger planes that fly farther can carry more passengers to more destinations. From 1970 to 2019, the number of airline passengers worldwide grew over 1,400%, from 310 million to 4.4 billion.[3] In the same time period, the number of commercial pilots — pilots holding “commercial” or “airline transport” licenses — increased 27%, from 208,027 in 1969 to 265,810 in 2019. Pilots’ salaries have stayed consistently high: an average airline captain made about $51,750 a year in 1975,[4] the equivalent of $287,770 in 2023.[5] An airline captain with six years of experience can expect to make $285,460 today.[6]

Designing by wire

Just as fly-by-wire systems have made pilots more efficient (not redundant), AI and automation will make designers more effective. Imagine a design-by-wire system. The job of the designer is to indicate what they want the desired outcome to be. Like a pilot pushing the throttle to make the airplane accelerate, a designer could assemble a wireframe or configure a screen to enable a user to accomplish a task. The design-by-wire system could then interpret the designer’s instructions. The system could change aspects of the design to use the latest design system components in the correct way.
The system could optimize the design to make implementation cheaper, faster, or less prone to bugs. The system could automatically fix accessibility issues, or add information to address accessibility concerns like keyboard shortcuts, screen reader labels, or high-contrast and reduced-motion variations. A designer could list out hypotheses about the design (like “will this convert the most users to paid plans?”), and the system could provide designs for multivariate testing, along with test or research plans. The system could automate QA by testing designs against simulated user behavior, adjusting the designs to cover the wide and unpredictable cases of real user interaction.

You can already see these kinds of systems taking shape. Noya promises to take wireframes and turn them into production code using an existing design system. Galileo AI claims to be able to create fully-editable designs from a single text description. Diagram’s Genius aims to provide contextual suggestions in Figma, filling out designs with the click of a button. These are just early tech previews, but they paint a picture of AI becoming a core component of our design tools.

At the end of the day, design-by-wire systems are centered around the designer. Like the pilot of an Airbus A380, the designer becomes the operator of a fantastically complicated machine. There is a risk in designing by wire: if designers don’t understand how the system works, they risk losing control, becoming less effective than they were before. That’s why it’s important that we become experts in AI; we don’t have to be able to write the code that drives these tools, but we need to understand the way the systems work. To use the machine to its full potential, the designer has to understand the intricacies of its operation. Pilots, by analogy, train for years in simulators before they set foot in the cockpit of a real airliner.
Here are a few resources you can use to learn more about GPTs, the systems driving the current boom in AI:

Stephen Wolfram’s “What is ChatGPT Doing … and Why Does It Work?” is an incredibly in-depth exploration of the technology and concepts, with interactive code.

Ted Chiang’s “ChatGPT Is a Blurry JPEG of the Web” uses analogies to explain the strengths and weaknesses of GPT technology.

3Blue1Brown has a 5-part video series explaining the basics of neural networks, including how they are trained.

The Coding Train has 26 videos on neural networks, in which Daniel Shiffman builds and explains various components and variations of neural nets. There’s also an accompanying chapter in Shiffman’s book The Nature of Code.

All four of the above authors are amazing teachers who mix mathematical depth with intuitive analogies and mental models.

Conclusion

AI will change our jobs in ways we can’t imagine. This has been happening to airline pilots since the advent of fly-by-wire systems. In the transition to newer, faster, larger, and more efficient airplanes, pilots have needed more and more technical understanding and skill in interacting with the computers that fly their planes. But pilots are still needed.

Likewise, designers won’t be replaced; they’ll become operators of increasingly complicated AI-powered machines. New tools will enable designers to be more productive, designing applications and interfaces that can be implemented faster and with fewer bugs. These tools will expand our brains, helping us cover accessibility and usability concerns that previously took hours of effort from UX specialists and QA engineers.

It’ll take years of training to become an expert at designing with these new AI-powered systems. Start now, and you’ll stay ahead of the curve. Wait, and the challenge won’t come from the AI itself; it’ll be other designers — ones who are skilled at AI-powered design — who will come for your job.
Footnotes & References

[1] “Wright Flyer.” In Wikipedia, February 19, 2023. https://en.wikipedia.org/w/index.php?title=Wright_Flyer&oldid=1140234161.
[2] “Airbus A380.” In Wikipedia, February 10, 2023. https://en.wikipedia.org/w/index.php?title=Airbus_A380&oldid=1138648822.
[3] “Top 15 Countries with Departures by Air Transport - 1970/2020.” Accessed February 20, 2023. https://statisticsanddata.org/data/top-15-countries-with-departures-by-air-transport-1970-2020/.
[4] “Industry Wage Survey: Scheduled Airlines.” Bulletin / Bureau of Labor Statistics, 1972-1977, 2 v. https://catalog.hathitrust.org/Record/009881222.
[5] Calculated with https://www.in2013dollars.com/us/inflation/1975?amount=51750
[6] “Major Airline Pilot Salary: First Officer and Captain Pay in 2023 / ATP Flight School.” Accessed February 18, 2023. https://atpflightschool.com/become-a-pilot/airline-career/major-airline-pilot-salary.html.

a year ago

More in design

Ten Books About AI Written Before the Year 2000

This is by no means a definitive list, so don’t @ me! AI is an inescapable subject. There’s obviously an incredible tailwind behind the computing progress of the last handful of years — not to mention the usual avarice — but there has also been nearly a century of thought put toward artificial intelligence. If you want to have a more robust understanding of what is at work beneath, say, the OpenAI chat box, pick any one of these texts. Each one would be worth a read — even a skim (this is by no means light reading). At the very least, familiarizing yourself with the intellectual path leading to now will help you navigate the funhouse of overblown marketing bullshit filling the internet right now, especially as it pertains to AGI. Read what the heavyweights had to say about it and you’ll see how many semantic games are being played while also moving the goalposts.

Steps to an Ecology of Mind (1972) — Gregory Bateson. Through imagined dialogues with his daughter, Bateson explores how minds emerge from systems of information and communication, providing crucial insights for understanding artificial intelligence.

The Sciences of the Artificial (1969) — Herbert Simon examines how artificial systems, including AI, differ from natural ones and introduces key concepts about bounded rationality.

The Emperor’s New Mind (1989) — Roger Penrose. While arguing against strong AI, Penrose provides valuable insights into consciousness and computation that remain relevant to current AI discussions.

Gödel, Escher, Bach: An Eternal Golden Braid (1979) — Douglas Hofstadter weaves together mathematics, art, and music to explore consciousness, self-reference, and emergent intelligence. Though not explicitly about AI, it provides fundamental insights into how complex cognition might emerge from simple rules and patterns.

Perceptrons (1969) — Marvin Minsky & Seymour Papert. This controversial critique of neural networks temporarily halted research in the field but ultimately helped establish its theoretical foundations. Minsky and Papert’s mathematical analysis revealed both the limitations and potential of early neural networks.

The Society of Mind (1986) — Marvin Minsky proposes that intelligence emerges from the interaction of simple agents working together, rather than from a single unified system. This theoretical framework remains relevant to understanding both human cognition and artificial intelligence.

Computers and Thought (1963) — Edward Feigenbaum & Julian Feldman (editors). This is the first collection of articles about artificial intelligence, featuring contributions from pioneers like Herbert Simon and Allen Newell. It captures the foundational ideas and optimism of early AI research.

Artificial Intelligence: A Modern Approach (1995) — Stuart Russell & Peter Norvig. This comprehensive textbook defined how AI would be taught for decades. It presents AI as rational agent design rather than human intelligence simulation, a framework that still influences the field.

Computing Machinery and Intelligence (1950) — Alan Turing’s paper introduces the Turing Test and addresses fundamental questions about machine intelligence that we’re still grappling with today. It’s remarkable how many current AI debates were anticipated in this work.

Cybernetics: Or Control and Communication in the Animal and the Machine (1948) — Norbert Wiener established the theoretical groundwork for understanding control systems in both machines and living things. His insights about feedback loops and communication remain crucial to understanding AI systems.

21 hours ago
The Zettelkasten note taking methodology.

My thoughts about the Zettelkasten (slip box) note-taking methodology invented by the German sociologist Niklas Luhmann.

3 days ago
DJI flagship store by Various Associates

Chinese interior studio Various Associates has completed an irregular pyramid-shaped flagship store for drone brand DJI in Shenzhen, China. Located...

3 days ago
Notes on Google Search Now Requiring JavaScript

John Gruber has a post about how Google’s search results now require JavaScript[1]. Why? Here’s Google:

the change is intended to “better protect” Google Search against malicious activity, such as bots and spam

Lol, the irony. Let’s turn to JavaScript for protection, as if the entire ad-based tracking/analytics world born out of JavaScript’s capabilities isn’t precisely what led to a less secure, less private, more exploited web. But whatever, “the web” is Google’s product so they can do what they want with it — right? Here’s John:

Old original Google was a company of and for the open web. Post 2010-or-so Google is a company that sees the web as a de facto proprietary platform that it owns and controls. Those who experience the web through Google Chrome and Google Search are on that proprietary not-closed-per-se-but-not-really-open web.

Search that requires JavaScript won’t cause the web to die. But it’s a sign of what’s to come (emphasis mine):

Requiring JavaScript for Google Search is not about the fact that 99.9 percent of humans surfing the web have JavaScript enabled in their browsers. It’s about taking advantage of that fact to tightly control client access to Google Search results. But the nature of the true open web is that the server sticks to the specs for the HTTP protocol and the HTML content format, and clients are free to interpret that as they see fit. Original, novel, clever ways to do things with website output is what made the web so thrilling, fun, useful, and amazing. This JavaScript mandate is Google’s attempt at asserting that it will only serve search results to exactly the client software that it sees fit to serve.

Requiring JavaScript is all about control. The web was founded on the idea of open access for all. But since that’s been completely and utterly abused (see LLM training datasets) we’re gonna lose it.
The whole “freemium with ads” model that underpins the web was exploited for profit by AI at an industrial scale, and that’s causing the “free and open web” to become the “paid and private web”. Universal access is quickly becoming select access — Google search results included.

[1] If you want to go down a rabbit hole of reading more about this, there’s the TechCrunch article John cites, a Hacker News thread, and this post from a company founded on providing search APIs.
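John’s point that “clients are free to interpret” server output is easy to make concrete. Below is a sketch of the kind of minimal, JavaScript-free client the open web has always permitted — a toy link extractor built on Python’s standard-library HTML parser, run here against a made-up HTML snippet (the URL in it is invented). Anything a server renders only via script simply never exists for a client like this.

```python
# A "dumb" client in the spirit of the open web: it understands HTML,
# but has no JavaScript engine, so script-injected content is invisible.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every anchor tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical server response: one link in the markup, plus a script
# that would inject more results in a browser — but not here.
html = """
<html><body>
  <a href="https://example.com/result-1">Result 1</a>
  <script>/* results injected by JavaScript are never seen */</script>
</body></html>
"""

parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # only the server-rendered link is found
```

A JavaScript mandate is precisely a declaration that clients this simple — scrapers, screen readers with scripting off, alternative browsers, curl pipelines — are no longer welcome.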

4 days ago
Kedrovka cedar milk by Maria Korneva

Kedrovka is a brand of plant-based milk crafted for those who care about their health, value natural ingredients, and seek...

4 days ago