Great design comes from seeing — seeing something for what it truly is, what it needs, and what it can be — both up close and at a distance. A great designer can focus intently on the smallest of details while still keeping the big picture in view, perceiving both the thing itself and its surrounding context. Designers who move most fluidly between these perspectives create work that endures and inspires. But there’s a paradox at the heart of design that’s rarely discussed: the discipline that most profoundly determines how lasting and inspiring a work of design can be is a designer’s ability to look away — not just from their own work, but from other solutions, other possibilities, other designers’ takes on similar problems. This runs counter to conventional wisdom. We’re told to study the masters, to immerse ourselves in the history of our craft, to stay current with trends and innovations. There’s value in this, of course — foundational knowledge creates the soil from which...
2 weeks ago


More from Christopher Butler

The Art Secret Behind All Great Design

Composition Speaks Before Content

When I was a young child, I would often pull books off of my father’s shelf and stare at their pages. In a clip from a 1987 home video that has established itself in our family canon, my father opens our apartment door, welcoming my newborn little sister home for the first time. There I stood, waiting for his arrival, in front of his bookshelves, holding an open book. From behind the camera, Dad said, “There’s Chris looking at the books. He doesn’t read the books…” I’m not sure I caught the remark at the time, but with every replay — and there were many — it began to sting. The truth was I didn’t really know what I was doing, but I did know my Dad was right — I wasn’t reading the books.

Had I known then what I know now, I might have shared these words, from the artist Piet Mondrian, with my Dad:

“Every true artist has been inspired more by the beauty of lines and color and the relationships between them than by the concrete subject of the picture.” — Piet Mondrian

For most of my time as a working designer, this has absolutely been true, though I wasn’t fully aware of it. And it’s possible that one doesn’t need to think this way, or even agree with Mondrian, to be a good designer. But I have found that fully understanding Mondrian’s point has helped me greatly. I no longer worry about how long it takes me to do my work, and I doubt my design choices far less. I enjoy executing the fundamentals more, and I feel far less pressure to conform my work to current styles or overly decorate it to make it stand out. It has helped me extract more power from simplicity.

This shift in perspective led me to a deeper question: what exactly was I responding to in those childhood encounters with my father’s books, and why do certain visual arrangements feel inherently satisfying? A well-composed photograph communicates something essential even before we register its subject.
A thoughtfully designed page layout feels right before we read a single word. There’s something happening in that first moment of perception that transcends the individual elements being composed.

Mondrian understood this intuitively. His geometric abstractions stripped away all representational content, leaving only the pure relationships between lines, colors, and spaces. Yet his paintings remain deeply compelling, suggesting that there’s something fundamental about visual structure itself that speaks to us — a language of form that exists independent of subject matter.

Perhaps we “read” composition the way we read text — our brains processing visual structure as a kind of fundamental grammar that exists beneath conscious recognition. Just as we don’t typically think about parsing sentences into subjects and predicates while reading, we don’t consciously deconstruct the golden ratio or rule of thirds while looking at an image. Yet in both cases, our minds are translating structure into meaning.

This might explain why composition can be satisfying independent of content. When I look at my children’s art books, I can appreciate the composition of a Mondrian painting alongside them, even though they are primarily excited about the colors and shapes. We’re both “reading” the same visual language, just at different levels of sophistication. The fundamental grammar of visual composition speaks to us both.

The parallels with reading go even deeper. Just as written language uses spacing, punctuation, and paragraph breaks to create rhythm and guide comprehension, visual composition uses negative space, leading lines, and structural elements to guide our eye and create meaning. These aren’t just aesthetic choices — they’re part of a visual syntax that our brains are wired to process.

This might also explain why certain compositional principles appear across cultures and throughout history. The way we process visual hierarchy, balance, and proportion might be as fundamental to human perception as our ability to recognize faces or interpret gestures. It’s a kind of visual universal grammar, to borrow Chomsky’s linguistic term.

What’s particularly fascinating is how this “reading” of composition happens at an almost precognitive level. Before we can name what we’re seeing or why we like it, our brains have already processed and responded to the underlying compositional structure. It’s as if there’s a part of our mind that reads pure form, independent of content or context.

Mondrian’s work provides the perfect laboratory for understanding this phenomenon. His paintings contain no recognizable objects, no narrative content, no emotional subject matter in the traditional sense. Yet they continue to captivate viewers more than a century later. What we’re responding to is exactly what he identified: the beauty of relationships between visual elements — the conversation between lines, the tension between colors, the rhythm of spaces.

Understanding composition as a form of reading might help explain why design can feel both intuitive and learnable. Just as we naturally acquire language through exposure but can also study its rules formally, we develop an intuitive sense of composition through experience while also being able to learn its principles explicitly. Looking at well-composed images or designs can feel like reading poetry in a language we didn’t know we knew. The syntax is familiar even when we can’t name the rules, and the meaning emerges not from what we’re looking at, but from how the elements relate to each other in space.

In recognizing composition as this fundamental visual language, we begin to understand why good design works at such a deep level. It’s not just about making things look nice — it’s about speaking fluently in a language that predates words, tapping into patterns of perception that feel as natural as breathing.
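The rule of thirds and the golden ratio mentioned above are concrete enough to compute. A minimal sketch of where those guide lines fall on a canvas; the helper names here are mine, for illustration only:

```python
# Guide-line positions for two classic compositional grids:
# the rule of thirds and golden-ratio sections. Illustrative only.

PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, ~1.618

def thirds(length):
    """Rule-of-thirds guide positions along one axis."""
    return [length / 3, 2 * length / 3]

def golden_sections(length):
    """Golden-ratio guide positions along one axis, one from each edge."""
    major = length / PHI  # the larger of the two golden segments
    return [length - major, major]

width = 1200
print("thirds:", thirds(width))            # [400.0, 800.0]
print("golden:", golden_sections(width))   # roughly [458.4, 741.6]
```

Notice that the golden sections sit slightly inside the thirds lines, which is partly why the two grids feel interchangeable in practice.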
This understanding of composition as fundamental visual language has profound implications for how we approach design work. When we do this intentionally, we’re applying a kind of secret knowledge of graphic design: the best design works at a purely visual level, regardless of what specific words or images occupy the surface.

This is why the “squint test” works. When we squint at a designed surface, the details are blurred but the overall structure remains visible, allowing us to see hierarchy and tonal balance more clearly. This is a critical tool for designers; we inevitably reach a point when we need to see past the content in order to ensure that it is seen by others.

No matter what I am creating — whether it is a screen in an application, a page on a website, or any other asset — I always begin with a wireframe. Most designers do this. But my secret is that I stick with wireframes far longer than most people would imagine. Regardless of how much of the material my layout will eventually contain is ready to go, I almost always finalize my layout choices using stand-in material. For images, that means grey boxes; for text, grey lines.

The reason I do this is that I know what Mondrian said is true: if a layout is beautiful purely on the merits of its structure, it will work to support just about any text or any image. I can envision exceptions to this, and I’ve no doubt encountered them, but I have never felt the need to make a significant or labor-intensive structural change once final images, colors, text, and other elements have been added in.

More and more, I see designers starting with high-fidelity (i.e. fully styled) layouts, or even with established components in the browser. I don’t typically start there, and when I’m asked for critical feedback on such work, I almost always support the feedback I give by extolling the merits of wireframing.
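The squint test can even be simulated: blur a layout and check whether its tonal structure survives. A toy sketch in pure Python, where a tiny grid of ink-coverage values stands in for a rendered page (this models nothing from a real design tool; it only demonstrates the principle):

```python
# Approximate the "squint test": blur a tiny grayscale "layout"
# (0.0 = white, 1.0 = solid ink) and observe that the overall tonal
# structure survives while detail disappears. Toy example only.

def box_blur(grid, radius=1):
    """Mean filter over a (2*radius+1)^2 neighborhood, clamped at edges."""
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [
                grid[cy][cx]
                for cy in range(max(0, y - radius), min(h, y + radius + 1))
                for cx in range(max(0, x - radius), min(w, x + radius + 1))
            ]
            out[y][x] = sum(vals) / len(vals)
    return out

# A crude layout: a dark "image block" top-left, lighter "text" elsewhere.
layout = [
    [0.9, 0.9, 0.1, 0.1],
    [0.9, 0.9, 0.1, 0.1],
    [0.4, 0.4, 0.4, 0.4],
    [0.0, 0.0, 0.0, 0.0],
]
blurred = box_blur(layout)
# The top-left region is still the darkest after blurring: hierarchy survives.
```

Design tools get the same effect with a Gaussian blur over a screenshot; either way, the point is that the hierarchy you built in wireframe remains legible after aggressive detail loss.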
No matter what the environment, no matter what the form, establishing structure is the most important aspect of our discipline. The secret of graphic design has always been known by artists: structure does more work than content while convincing its audience of the opposite. Josef Albers said of the way he created images that they enabled a viewer to “see more than there is.” That is the mystery behind all looking — that there is always more to see than there is visible. Work with that mystery, and you’ll possess a secret that will transform your work.

yesterday 1 vote
If This, Then That, Except for When...

I’m not saying that “Agentic AI” will “never” replicate all tasks that workers are currently paid to complete, but I strongly suspect that current agents have an extremely limited task scope. As exhibit A, I submit this podcast episode, in which Agentic AI is sold on the basis of “solving” tasks that have long been solved and for which very few are paid. Seriously: one guy says, “Okay, what else do people do in their jobs? What are other tasks in the economy?” and the response is “Planning a weekend getaway.” FFS. We need to stop “solving” the solved and actually make some semblance of non-gaslit progress.

Here’s the thing. I work with people who are seriously hustling to automate as much as they can. And what I’ve observed is that plenty can be automated, but it doesn’t scale to expectations. Most things are nestled into complex, sometimes intuitively managed workflows that are so exception-ridden that automation attempts fail early. And when the people trying to automate said procedures go back to stakeholders and ask, “can we standardize this more?” so that the machine can do it, the answers are often a myriad of very reasonably qualified NOs. After all, if it could have been standardized by now, it probably would have been — not because that would make it fit more easily into a prompt, but because it would make it fit more easily into the work-day-sanity-drain-sequence that all employed humans endure.

We have to stop accepting the simplicity of the AI “if this, then that” sales pitch, when the vast majority of things that get done amend it with “…except when…”

That last point bears repeating: I have had it with AI being sold as if the entire population of working humans has just been too lazy to clean up their act and standardize, systematize, and optimize what they do. It’s as if the snAIk-oil salesmen assume that every tech executive is a hopeless, degenerate coke addict and they’ve chemtrailed the skies with blow.
Look, I’ve accrued 20+ years of cynicism I work every day to dismantle — as much as the next person — but one flavor of it I haven’t had much experience with is this idea that most people are lazy, phoning it in, incompetent, and ripe for the AI’s plucking. So, note to the plutocrats: I get what you’re up to.

But hey: you have a good lead here. Use it to actually solve unsolved things. Solving the already solved is the same thing as high-seas piracy, and things rarely ended well for pirates. And, P.S., no one takes a smiling Cassandra seriously, Dario, so if you’re going to flood the media landscape with your “warning,” maybe either mean it or get some media training.

yesterday 1 vote
PAC – Personal Ambient Computing

The next evolution of personal computing won’t replace your phone — it will free computing from any single device.

Like most technologists of a certain age, many of my expectations for the future of computing were set by Star Trek production designers. It’s quite easy to connect many of the devices we have today to props designed 30-50 years ago: the citizens of the Federation had communicators before we had cellphones, tricorders before we had smartphones, PADDs before we had tablets, wearables before the Humane AI pin, and voice interfaces before we had Siri. You could easily make a case that Silicon Valley owes everything to Star Trek.

But now, there seems to be a shared notion that the computing paradigm established over the last half-century has run its course. The devices all work well, of course, but they all come with costs to culture that are worth correcting. We want to be more mobile, less distracted, less encumbered by particular modes of use in certain contexts. And it’s also worth pointing out that the creation of generative AI has made corporate shareholders ravenous for new products and new revenue streams. So whether we want them or not, we’re going to get new devices. The question is, will they be as revolutionary as they’re already hyped up to be?

I’m old enough to remember the gasping “city-changing” hype of the Segway before everyone realized it was just a scooter with a gyroscope. But in fairness, there was equal hype for the iPhone, and it isn’t overreaching to say that it remade culture even more extensively than a vehicle ever could have. So time will tell.

I do think there is room for a new approach to computing, but I don’t expect it to be a new device that renders all others obsolete. The smartphone didn’t do that to desktop or laptop computers, nor did the tablet. We shouldn’t expect a screenless, sensor-ridden device to replace anyone’s phone entirely, either.
But done well, such a thing could be a welcome addition to a person’s kit. The question is whether that means just making a new thing or rethinking how the various computers in our life work together.

As I’ve been pondering that idea, I keep thinking back to Star Trek, and how the device that probably inspired the least wonder in me as a child is the one that seems most relevant now: the Federation’s wearables. Every officer wore a communicator pin — a kind of Humane Pin light — but they also all wore smaller pins at their collars signifying rank. In hindsight, it seems like those collar pins, which were discs the size of a watch battery, could have formed some kind of wearable, personal mesh network. And that idea got me going…

The future isn’t a zero-sum game between old and new interaction modes. Rather than being defined by a single new computing paradigm, the future will be characterized by an increase in computing: more devices doing more things. I’ve been thinking of this as a PAC — Personal Ambient Computing.

Personal Ambient Computing

At its core is a modular component I’ve been envisioning as a small, disc-shaped computing unit roughly the diameter of a silver dollar but considerably thicker. This disc would contain processing power, storage, connectivity, sensors, and microphones. The disc could be worn as jewelry, embedded in a wristwatch with its own display, housed in a handheld device like a phone or reader, integrated into a desktop or portable (laptop or tablet) display, or even embedded in household appliances. This approach would create a personal mesh network of PAC modules, each optimized for its context, rather than forcing every function in our lives through a smartphone.

The key innovation lies in the standardized form factor. I imagine a magnetic edge system that allows the disc to snap into various enclosures — wristwatches, handhelds, desktop displays, wearable bands, necklaces, clips, and chargers.
By getting the physical interface right from the start, the PAC hardware wouldn’t need significant redesign over time, but an entirely new ecosystem of enclosures could evolve more gradually and be created by anyone. A worthy paradigm shift in computing is one that makes the most use of modularity, open-source software and hardware, and context. Open-sourcing hardware enclosures, especially, would offer a massive leap forward for repairability and sustainability.

In my illustration above, I even went as far as sketching a smaller handheld — exactly the sort of device I’d prefer over the typical smartphone. Mine would be proudly boxy, with a larger top bezel to enable greater repair access to core components, like the camera, sensors, microphone, and speakers, and a smaller, low-power screen I’d depend upon heavily for info throughout the day. Hey, a man can dream. The point is, a PAC approach would make niche devices much more likely.

Power

The disc itself could operate at lower power than a smartphone, while device pairings would benefit from additional power housed in larger enclosures, especially those with screens. This creates an elegant hierarchy where the disc provides your personal computing core and network connectivity, while housings add context-specific capabilities like high-resolution displays, enhanced processing, or extended battery life. Simple housings like jewelry would provide form factor and maybe extend battery life. More complex housings would add significant power and specialized components. People wouldn’t pay for screen-driving power in every disc they own, just in the housings that need it.

This modularity solves the chicken-and-egg problem that kills many new computing platforms. Instead of convincing people to buy an entirely new device without an established software ecosystem, PAC could give us familiar form factors — watches, phones, desktop accessories — powered by a new paradigm.
Third-party manufacturers could create housings without rebuilding core computing components.

Privacy

This vision of personal ambient computing aligns with what major corporations already want to achieve, but with a crucial difference: privacy. The current trajectory toward ambient computing comes at the cost of unprecedented surveillance. Apple, Google, Meta, OpenAI, and all the others envision futures where computing is everywhere, but where they monitor, control, and monetize the flow of information. PAC demands a different future — one that leaves these corporate gatekeepers behind.

A personal mesh should be just that: personal. Each disc should be configurable to sense or not sense based on user preferences, allowing contextual control over privacy settings. Users could choose which sensors are active in which contexts, which data stays local versus shared across their mesh, and which capabilities are enabled in different environments. A PAC unit should be as personal as your crypto vault.

Obviously, this is an idea with a lot of technical and practical hand-waving at work; I’m making a lot of assumptions about continued miniaturization. But from this vantage point, it isn’t really about technical capability. It is about computing power returning to individuals rather than being concentrated in corporate silos. PAC represents ambient computing without ambient surveillance. And it is about computing graduating from its current form and becoming more humanely and elegantly integrated into our day-to-day lives.

Next

The smartphone isn’t going anywhere. And we’re going to get re-dos of the AI devices that have already spectacularly failed. But we won’t get anywhere especially exciting until we look at the personal computing ecosystem holistically. PAC offers a more distributed, contextual approach that enhances rather than replaces effective interaction modes. It’s additive rather than replacement-based, which historically tends to drive successful technology adoption.
I know I’m not alone in imagining something like this. I’d just like to feel more confident that people with the right kind of resources would be willing to invest in it.

By distributing computing across multiple form factors while maintaining continuity of experience, PAC could deliver on the promise of ubiquitous computing without sacrificing the privacy, control, and interaction diversity that make technology truly personal. The future of computing shouldn’t be about choosing between old and new paradigms. It should be about computing that adapts to us, not the other way around.
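The per-context sensor control described above (which sensors are active in which contexts, which data stays local to the mesh) is easy to sketch as a data structure. This is purely hypothetical; every name here is invented for illustration, and nothing resembling a real PAC API exists:

```python
# Hypothetical sketch of per-context privacy policy for a PAC disc:
# each context maps to a set of permitted sensors and a set of data
# types that must never leave the owner's personal mesh.
from dataclasses import dataclass, field

@dataclass
class SensorPolicy:
    allowed: set      # sensors that may be active in this context
    local_only: set   # data types that never leave the personal mesh

@dataclass
class PacDisc:
    owner: str
    policies: dict = field(default_factory=dict)  # context -> SensorPolicy

    def may_sense(self, context, sensor):
        """Default-deny: unknown contexts permit nothing."""
        policy = self.policies.get(context)
        return policy is not None and sensor in policy.allowed

disc = PacDisc(owner="alex")
disc.policies["home"] = SensorPolicy(
    allowed={"mic", "motion"}, local_only={"audio"})
disc.policies["office"] = SensorPolicy(
    allowed={"motion"}, local_only={"audio", "location"})

print(disc.may_sense("home", "mic"))    # True
print(disc.may_sense("office", "mic"))  # False
```

The design choice worth noting is the default-deny stance: a context the owner never configured grants nothing, which is the opposite of how today’s ambient platforms behave.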

5 days ago 6 votes
Why AI Makes Craft More Valuable, Not Less

For the past twenty to thirty years, the creative services industry has pursued a strategy of elevating the perceived value of knowledge work over production work. Strategic thinking became the premium offering, while actual making was reframed as “tactical” and “commoditized.” Creative professionals steered their careers toward decision-making roles rather than making roles. Firms adjusted their positioning to sell ideas, not assets — strategy became the product, while labor became nearly anonymous. After twenty years in my own career, I believe this has been a fundamental mistake, especially for those who have so distanced themselves from craft that they can no longer make things.

The Unintended Consequences

The strategic pivot created two critical vulnerabilities that are now being exposed by AI:

For individuals: AI is already perceived as delivering ideas faster and with greater accuracy than traditional strategic processes, repositioning much of what passed for strategy as little better than educated guesswork. The consultant who built their career on frameworks and insights suddenly finds themselves competing with a tool that can generate similar outputs in seconds.

For firms: Those who focused staff on strategy and account management while “offshoring” production cannot easily pivot to new means of production, AI-assisted or otherwise. They’ve created organizations optimized for talking about work rather than doing it.

The Canary in the Coal Mine

In hindsight, the homogeneity of interaction design systems should have been our warning. We became so eager to accept tools that reduced labor — style guides that eliminated design decisions, component libraries that standardized interfaces, templates that streamlined production — that we literally cleared the decks for AI replacement. Many creative services firms now accept AI in the same way an army-less nation might surrender to an invader: they have no other choice.
They’ve systematically dismantled their capacity to make things in favor of their capacity to think about things. Now they’re hoping they can just re-boot production with bots. I don’t think that will work. AI, impressive as it is, still cannot make anything and everything. More importantly, it cannot produce things for existing systems as efficiently and effectively as a properly equipped person who understands both the tools and the context. The real world still requires:

- Understanding client systems and constraints
- Navigating technical limitations and possibilities
- Iterating based on real feedback from real users
- Adapting to changing requirements mid-project
- Solving the thousand small problems that emerge during implementation

These aren’t strategic challenges — they’re craft challenges. They require the kind of deep, hands-on knowledge that comes only from actually making things, repeatedly, over time.

The New Premium

I see the evidence everywhere in my firm’s client accounts: there’s a desperate need to move as quickly as ever, motivated by the perception that AI has created about the overall pace of the market. But there’s also an acknowledgment that meaningful progress doesn’t come at the push of a button. The value of simply doing something — competently, efficiently, and with an understanding of how it fits into larger systems — has never been higher. This is why I still invest energy in my own craft and in communicating design fundamentals to anyone who will listen. Not because I’m nostalgic for pre-digital methods, but because I believe craft represents a sustainable competitive advantage in an AI-augmented world.

Action vs. Advice

The fundamental issue is that we confused talking about work with doing work. We elevated advice-giving over action-taking. We prioritized the ability to diagnose problems over the ability to solve them. But clients don’t ultimately pay for insights — they pay for outcomes. And outcomes require action.
They require the messy, iterative, problem-solving work of actually building something that works in the real world. The firms and individuals who will thrive in the coming years won’t be those with the best strategic frameworks or the most sophisticated AI prompts. They’ll be those who can take an idea — whether it comes from a human strategist or an AI system — and turn it into something real, functional, and valuable.

In my work, I regularly review design output from teams across the industry. I encounter both good ideas and bad ones, skillful craft and poor execution. Here’s what I’ve learned: it’s better to have a mediocre idea executed with strong craft than a brilliant idea executed poorly. When craft is solid, you know the idea can be refined — the execution capability exists, so iteration is possible. But when a promising idea is rendered poorly, it will miss its mark entirely, not because the thinking was wrong, but because no one possessed the skills to bring it to life effectively.

The pendulum that swung so far toward strategy needs to swing back toward craft. Not because technology is going away, but because technology makes the ability to actually build things more valuable, not less. In a world where everyone can generate ideas, the people who can execute those ideas become invaluable.

a week ago 6 votes

More in design

RERY Fashion Lifestyle Centre by Dayuan Design

As China’s consumer market, at least in its top-tier cities, has reached maturity, the retail landscape is keeping abreast...

5 hours ago 1 vote
The Art Secret Behind All Great Design

Composition Speaks Before Content When I was a young child, I would often pull books off of my father’s shelf and stare at their pages. In a clip from a 1987 home video that has established itself in our family canon — my father opens our apartment door, welcoming my newborn little sister home for the first time. There I stood, waiting for his arrival, in front of his bookshelves, holding an open book. From behind the camera, Dad said, “There’s Chris looking at the books. He doesn’t read the books…” I’m not sure I caught the remark at the time, but with every replay — and there were many — it began to sting. The truth was I didn’t really know what I was doing, but I did know my Dad was right — I wasn’t reading the books. Had I known then what I know now, I might have shared these words, from the artist Piet Mondrian, with my Dad: “Every true artist has been inspired more by the beauty of lines and color and the relationships between them than by the concrete subject of the picture.” — Piet Mondrian For most of my time as a working designer, this has absolutely been true, though I wasn’t fully aware of it. And it’s possible that one doesn’t need to think this way, or even agree with Mondrian, to be a good designer. But I have found that fully understanding Mondrian’s point has helped me greatly. I no longer worry about how long it takes me to do my work, and I doubt my design choices far less. I enjoy executing the fundamentals more, and I feel far less pressure to conform my work to current styles or overly decorate it to make it stand out more. It has helped me extract more power from simplicity. This shift in perspective led me to a deeper question: what exactly was I responding to in those childhood encounters with my father’s books, and why do certain visual arrangements feel inherently satisfying? A well-composed photograph communicates something essential even before we register its subject. 
A thoughtfully designed page layout feels right before we read a single word. There’s something happening in that first moment of perception that transcends the individual elements being composed. Mondrian understood this intuitively. His geometric abstractions stripped away all representational content, leaving only the pure relationships between lines, colors, and spaces. Yet his paintings remain deeply compelling, suggesting that there’s something fundamental about visual structure itself that speaks to us — a language of form that exists independent of subject matter.

Perhaps we “read” composition the way we read text — our brains processing visual structure as a kind of fundamental grammar that exists beneath conscious recognition. Just as we don’t typically think about parsing sentences into subjects and predicates while reading, we don’t consciously deconstruct the golden ratio or rule of thirds while looking at an image. Yet in both cases, our minds are translating structure into meaning.

This might explain why composition can be satisfying independent of content. When I look at my children’s art books, I can appreciate the composition of a Mondrian painting alongside them, even though they are primarily excited about the colors and shapes. We’re both “reading” the same visual language, just at different levels of sophistication. The fundamental grammar of visual composition speaks to us both.

The parallels with reading go even deeper. Just as written language uses spacing, punctuation, and paragraph breaks to create rhythm and guide comprehension, visual composition uses negative space, leading lines, and structural elements to guide our eye and create meaning. These aren’t just aesthetic choices — they’re part of a visual syntax that our brains are wired to process. This might also explain why certain compositional principles appear across cultures and throughout history. 
The way we process visual hierarchy, balance, and proportion might be as fundamental to human perception as our ability to recognize faces or interpret gestures. It’s a kind of visual universal grammar, to borrow Chomsky’s linguistic term.

What’s particularly fascinating is how this “reading” of composition happens at an almost precognitive level. Before we can name what we’re seeing or why we like it, our brains have already processed and responded to the underlying compositional structure. It’s as if there’s a part of our mind that reads pure form, independent of content or context.

Mondrian’s work provides the perfect laboratory for understanding this phenomenon. His paintings contain no recognizable objects, no narrative content, no emotional subject matter in the traditional sense. Yet they continue to captivate viewers more than a century later. What we’re responding to is exactly what he identified: the beauty of relationships between visual elements — the conversation between lines, the tension between colors, the rhythm of spaces.

Understanding composition as a form of reading might help explain why design can feel both intuitive and learnable. Just as we naturally acquire language through exposure but can also study its rules formally, we develop an intuitive sense of composition through experience while also being able to learn its principles explicitly. Looking at well-composed images or designs can feel like reading poetry in a language we didn’t know we knew. The syntax is familiar even when we can’t name the rules, and the meaning emerges not from what we’re looking at, but from how the elements relate to each other in space.

In recognizing composition as this fundamental visual language, we begin to understand why good design works at such a deep level. It’s not just about making things look nice — it’s about speaking fluently in a language that predates words, tapping into patterns of perception that feel as natural as breathing. 
This understanding of composition as fundamental visual language has profound implications for how we approach design work. When we do this intentionally, we’re applying a kind of secret knowledge of graphic design: the best design works at a purely visual level, regardless of what specific words or images occupy the surface.

This is why the “squint test” works. When we squint at a designed surface, the details are blurred but the overall structure remains visible, allowing us to see hierarchy and tonal balance more clearly. This is a critical tool for designers; we inevitably reach a point when we need to see past the content in order to ensure that it is seen by others.

No matter what I am creating — whether it is a screen in an application, a page on a website, or any other asset — I always begin with a wireframe. Most designers do this. But my secret is that I stick with wireframes far longer than most people would imagine. Regardless of how much of the material my layout will eventually contain is ready to go, I almost always finalize my layout choices using stand-in material. For images, that means grey boxes; for text, grey lines. The reason I do this is because I know that what Mondrian said is true: if a layout is beautiful purely on the merits of its structure, it will work to support just about any text, or any image.

I can envision exceptions to this, and I’ve no doubt encountered them, but I have never felt the need to make a significant or labor-intensive structural change once the final images, colors, text, and other elements have been added in. More and more, I see designers starting with high-fidelity (i.e., fully styled) layouts, or even with established components in the browser. While I don’t typically start there, when asked for critical feedback I almost always support the feedback I inevitably give by extolling the merits of wireframing. 
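For the programmatically inclined, the squint test and grey-box wireframing can be simulated in a few lines of plain Python. This is a toy sketch, not anything the author describes building: all dimensions and grey values are invented stand-ins. It renders a layout as placeholder greys, then block-averages it, which is the computational equivalent of squinting.

```python
# Toy model of the "squint test": render a wireframe as a grid of ink
# density (0 = empty, 255 = solid), with grey boxes standing in for
# images and grey lines standing in for text, then block-average it.
# The downsampled grid keeps the tonal structure while discarding
# every detail, just as squinting does.

def blank_canvas(width, height):
    return [[0] * width for _ in range(height)]

def draw_box(canvas, x, y, w, h, grey):
    """Fill a rectangle with a stand-in grey value."""
    for row in range(y, y + h):
        for col in range(x, x + w):
            canvas[row][col] = grey

def squint(canvas, factor):
    """Downsample by averaging factor x factor blocks."""
    height, width = len(canvas), len(canvas[0])
    out = []
    for by in range(0, height, factor):
        out_row = []
        for bx in range(0, width, factor):
            block = [canvas[y][x]
                     for y in range(by, min(by + factor, height))
                     for x in range(bx, min(bx + factor, width))]
            out_row.append(sum(block) // len(block))
        out.append(out_row)
    return out

layout = blank_canvas(64, 48)
draw_box(layout, 4, 4, 56, 20, grey=160)   # hero image placeholder
draw_box(layout, 4, 30, 40, 2, grey=90)    # headline placeholder
draw_box(layout, 4, 36, 56, 2, grey=60)    # body text line
draw_box(layout, 4, 40, 56, 2, grey=60)    # body text line

blurred = squint(layout, factor=8)
# blurred is a 6 x 8 grid in which the hero region reads far darker
# than the text region: the hierarchy survives even though every
# detail is gone.
```

Swapping in any other placeholder content leaves `blurred` essentially unchanged, which is the point: the structure, not the content, is what the squint reveals.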
No matter what the environment, no matter what the form, establishing structure is the most important aspect of our discipline. The secret of graphic design has always been known by artists: structure does more work than content while convincing its audience of the opposite. Josef Albers said of the way he created images that they enabled a viewer to “see more than there is.” That is the mystery behind all looking — that there is always more to see than there is visible. Work with that mystery, and you’ll possess a secret that will transform your work.

yesterday 1 votes
C’ME power bank by ONEBOOK DESIGN

“Instant power, C’ME is awesome” C’me is made to keep you visible and online at any time! Whether it is an...

2 days ago 3 votes
PAC – Personal Ambient Computing

The next evolution of personal computing won’t replace your phone — it will free computing from any single device.

Like those of most technologists of a certain age, many of my expectations for the future of computing were set by Star Trek production designers. It’s quite easy to connect many of the devices we have today to props designed 30-50 years ago: the citizens of the Federation had communicators before we had cellphones, tricorders before we had smartphones, PADDs before we had tablets, wearables before the Humane AI Pin, and voice interfaces before we had Siri. You could easily make a case that Silicon Valley owes everything to Star Trek.

But now, there seems to be a shared notion that the computing paradigm established over the last half-century has run its course. The devices all work well, of course, but they all come with costs to culture that are worth correcting. We want to be more mobile, less distracted, less encumbered by particular modes of use in certain contexts. And it’s also worth pointing out that the creation of generative AI has made corporate shareholders ravenous for new products and new revenue streams. So whether we want them or not, we’re going to get new devices.

The question is, will they be as revolutionary as they’re already hyped up to be? I’m old enough to remember the gasping “city-changing” hype of the Segway before everyone realized it was just a scooter with a gyroscope. But in fairness, there was equal hype around the iPhone, and it isn’t overreaching to say that it remade culture even more extensively than a vehicle ever could have. So time will tell.

I do think there is room for a new approach to computing, but I don’t expect it to be a new device that renders all others obsolete. The smartphone didn’t do that to desktop or laptop computers, nor did the tablet. We shouldn’t expect a screenless, sensor-ridden device to replace anyone’s phone entirely, either. 
But done well, such a thing could be a welcome addition to a person’s kit. The question is whether that means just making a new thing or rethinking how the various computers in our life work together.

As I’ve been pondering that idea, I keep thinking back to Star Trek, and how the device that probably inspired the least wonder in me as a child is the one that seems most relevant now: the Federation’s wearables. Every officer wore a communicator pin — a kind of Humane Pin lite — but they also all wore smaller pins at their collars signifying rank. In hindsight, it seems like those collar pins, which were discs the size of a watch battery, could have formed some kind of wearable, personal mesh network. And that idea got me going…

The future isn’t a zero-sum game between old and new interaction modes. Rather than being defined by a single new computing paradigm, the future will be characterized by an increase in computing: more devices doing more things. I’ve been thinking of this as PAC — Personal Ambient Computing.

Personal Ambient Computing

At its core is a modular component I’ve been envisioning as a small, disc-shaped computing unit roughly the diameter of a silver dollar but considerably thicker. This disc would contain processing power, storage, connectivity, sensors, and microphones. The disc could be worn as jewelry, embedded in a wristwatch with its own display, housed in a handheld device like a phone or reader, integrated into a desktop or portable (laptop or tablet) display, or even embedded in household appliances. This approach would create a personal mesh network of PAC modules, each optimized for its context, rather than forcing every function in our lives through a smartphone.

The key innovation lies in the standardized form factor. I imagine a magnetic edge system that allows the disc to snap into various enclosures — wristwatches, handhelds, desktop displays, wearable bands, necklaces, clips, and chargers. 
By getting the physical interface right from the start, the PAC hardware wouldn’t need significant redesign over time, but an entirely new ecosystem of enclosures could evolve more gradually and be created by anyone. A worthy paradigm shift in computing is one that makes the most use of modularity, open-source software and hardware, and context. Open-sourcing hardware enclosures, especially, would offer a massive leap forward for repairability and sustainability.

In my illustration above, I even went as far as sketching a smaller handheld — exactly the sort of device I’d prefer over the typical smartphone. Mine would be proudly boxy with a larger top bezel to enable greater repair access to core components, like the camera, sensors, microphone, speakers, and a smaller, low-power screen I’d depend upon heavily for info throughout the day. Hey, a man can dream. The point is, a PAC approach would make niche devices much more likely.

Power

The disc itself could operate at lower power than a smartphone, while device pairings would benefit from additional power housed in larger enclosures, especially those with screens. This creates an elegant hierarchy where the disc provides your personal computing core and network connectivity, while housings add context-specific capabilities like high-resolution displays, enhanced processing, or extended battery life. Simple housings like jewelry would provide form factor and maybe extend battery life. More complex housings would add significant power and specialized components. People wouldn’t pay for screen-driving power in every disc they own, just in the housings that need it.

This modularity solves the chicken-and-egg problem that kills many new computing platforms. Instead of convincing people to buy an entirely new device that comes with an established software ecosystem, PAC could give us familiar form factors — watches, phones, desktop accessories — powered by a new paradigm. 
Third-party manufacturers could create housings without rebuilding core computing components.

Privacy

This vision of personal ambient computing aligns with what major corporations already want to achieve, but with a crucial difference: privacy. The current trajectory toward ambient computing comes at the cost of unprecedented surveillance. Apple, Google, Meta, OpenAI, and all the others envision futures where computing is everywhere, but where they monitor, control, and monetize the flow of information. PAC demands a different future — one that leaves these corporate gatekeepers behind.

A personal mesh should be just that: personal. Each disc should be configurable to sense or not sense based on user preferences, allowing contextual control over privacy settings. Users could choose which sensors are active in which contexts, which data stays local versus shared across their mesh, and which capabilities are enabled in different environments. A PAC unit should be as personal as your crypto vault.

Obviously, this is an idea with a lot of technical and practical hand-waving at work. And at this vantage point, it isn’t really about technical capability — I’m making a lot of assumptions about continued miniaturization. It is about computing power returning to individuals rather than being concentrated in corporate silos. PAC represents ambient computing without ambient surveillance. And it is about computing graduating from its current form and becoming more humanely and elegantly integrated into our day-to-day lives.

Next

The smartphone isn’t going anywhere. And we’re going to get re-dos of the AI devices that have already spectacularly failed. But we won’t get anywhere especially exciting until we look at the personal computing ecosystem holistically. PAC offers a more distributed, contextual approach that enhances rather than replaces effective interaction modes. It’s additive rather than replacement-based, which historically tends to drive successful technology adoption. 
I know I’m not alone in imagining something like this. I’d just like to feel more confident that people with the right kind of resources would be willing to invest in it. By distributing computing across multiple form factors while maintaining continuity of experience, PAC could deliver on the promise of ubiquitous computing without sacrificing the privacy, control, and interaction diversity that make technology truly personal. The future of computing shouldn’t be about choosing between old and new paradigms. It should be about computing that adapts to us, not the other way around.
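The per-context sensor controls described above (which sensors run where, and whether their data leaves the personal mesh) could be modeled as simple policy data. PAC is a concept, not a real platform, so everything in this Python sketch is an invented illustration of the idea:

```python
# A sketch of per-disc, per-context privacy policy: each disc carries a
# table of contexts ("home", "office", ...) mapping to which sensors may
# run and whether their data stays on the local mesh. All names here are
# hypothetical; nothing in this sketch refers to a real device or API.
from dataclasses import dataclass, field

@dataclass
class ContextPolicy:
    active_sensors: frozenset     # sensors allowed to run in this context
    local_only: bool = True       # keep data on the personal mesh?

@dataclass
class Disc:
    name: str
    policies: dict = field(default_factory=dict)

    def may_sense(self, context, sensor):
        policy = self.policies.get(context)
        return policy is not None and sensor in policy.active_sensors

    def may_share(self, context):
        policy = self.policies.get(context)
        return policy is not None and not policy.local_only

pendant = Disc("pendant", policies={
    "home":   ContextPolicy(frozenset({"mic", "motion"}), local_only=True),
    "office": ContextPolicy(frozenset({"motion"}), local_only=True),
})

pendant.may_sense("home", "mic")     # True: mic allowed at home
pendant.may_sense("office", "mic")   # False: mic disabled at work
pendant.may_share("home")            # False: data never leaves the mesh
```

The default here deliberately fails closed: an unknown context allows no sensing and no sharing, which matches the essay's insistence that the mesh stays personal unless the user says otherwise.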

5 days ago 6 votes
Junshanye × Googol by 古戈品牌

Unlocking the Code of Eastern Beauty in National Tea. Where Mountains and Waters Sing in Harmony. Nature holds the secrets...

6 days ago 7 votes