More from Jim Nielsen’s Blog
Read more about RSS Club.

I’ve been reading Apple in China by Patrick McGee. There’s this part in there where he’s talking about a guy who worked for Apple and was known for being ruthless, stopping at nothing to negotiate the best deal for Apple. He was so aggressive yet convincing that suppliers often found themselves faced with regret, wondering how they got talked into a deal that in hindsight was not in their best interest.[1]

One particular Apple executive sourced in the book noted how there are companies who don’t employ questionable tactics to gain an edge, but most of them don’t exist anymore. To paraphrase: “I worked with two kinds of suppliers at Apple: 1) complete assholes, and 2) those who are no longer in business.”

Taking advantage of people is normalized in business on account of it being existential, i.e. “If we don’t act like assholes — or have someone on our team who will on our behalf[1] — we will not survive!” In other words: All’s fair in self-defense.

But what’s the point of survival if you become an asshole in the process? What else is there in life if not what you become in the process?

It’s almost comedically twisted how easy it is for us to become the very thing we abhor if it means our survival. (Note to self: before you start anything, ask “What will this help me become, and is that who I want to be?”)

It’s interesting how we can smile at stories like that and think, “Gosh they’re tenacious, glad they’re on my side!” Not stopping to think for a moment what it would feel like to be on the other side of that equation.
Dan Abramov in “Static as a Server”:

Static is a server that runs ahead of time.

“Static” and “dynamic” don’t have to be binaries that describe an entire application architecture. As Dan describes in his post, “static” or “dynamic”, it’s all just computers doing stuff. Computer A requests something (an HTML document, a PDF, some JSON, who knows) from computer B. That request happens via a URL and the response can be computed “ahead of time” or “at request time”. In this paradigm:

“Static” is a server responding ahead of time to anticipated requests with identical responses.
“Dynamic” is a server responding at request time to anticipated requests with varying responses.

But these definitions aren’t binaries; rather, they represent two ends of a spectrum. Ultimately, however you define “static” or “dynamic”, what you’re dealing with is a response generated by a server — i.e. a computer — so the question is really a matter of when you want to respond and with what.

Answering the question of when previously had a really big impact on what kind of architecture you inherited. But I think we’re realizing we need more nimble architectures that can flex and grow in response to changing when a request/response cycle happens and what you respond with.

Perhaps a poor analogy, but imagine you’re preparing holiday cards for your friends and family:

“Static” is the same card sent to everyone.
“Dynamic” is a hand-written card to each individual.

But between these two are infinite possibilities, such as:

A hand-written card that’s photocopied and sent to everyone
A printed template with the same hand-written note to everyone
A printed template with a different hand-written note for just some people
etc.

Are those examples “static” or “dynamic”? [Cue endless debate.]

The beauty is that in probing the space between binaries — between what “static” means and what “dynamic” means — I think we develop a firmer grasp of what we mean by those words as well as what we’re trying to accomplish with our code.

I love tools that help you think of the request/response cycle across your entire application as an endlessly-changing set of computations that happen either “ahead of time”, “just in time”, or somewhere in between.
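To make that spectrum concrete, here’s a minimal sketch of my own (not code from Dan’s post) using Node’s built-in http module: the same kind of HTML response is either computed once “ahead of time” and served identically to everyone, or computed “at request time” so it can vary per request. The greeting.html file and renderGreeting helper are invented for the example.

```ts
// Sketch: one response computed ahead of time, one computed at request time.
import { createServer } from "node:http";
import { readFileSync, writeFileSync } from "node:fs";

// A hypothetical "render" step: anything that turns inputs into a response body.
function renderGreeting(name: string): string {
  return `<h1>Hello, ${name}!</h1>`;
}

// "Static": run the computation ahead of time (think: at build) and save the result.
writeFileSync("greeting.html", renderGreeting("world"));

createServer((req, res) => {
  res.setHeader("Content-Type", "text/html");
  if (req.url?.startsWith("/static-greeting")) {
    // Ahead of time: every request gets the same precomputed bytes.
    res.end(readFileSync("greeting.html"));
  } else {
    // At request time: the response varies based on the incoming request.
    const name =
      new URL(req.url ?? "/", "http://localhost").searchParams.get("name") ?? "world";
    res.end(renderGreeting(name));
  }
}).listen(3000);
```

Either branch is “a server responding”; the only difference is when the computation runs and whether the response can vary.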
Dan Abramov on his blog (emphasis mine):

The division between the frontend and the backend is physical. We can’t escape from the fact that we’re writing client/server applications. Some logic is naturally more suited to either side. But one side should not dominate the other. And we shouldn’t have to change the approach whenever we need to move the boundary. What we need are the tools that let us compose across the stack.

What are these tools that allow us to easily change the computation of an application happening between two computers? I think Dan is arguing that RSC is one of these tools. I tend to think of Remix (v1) as one of these tools. Let me try and articulate why by looking at the difference between how we thought of websites in a “JAMstack” architecture vs. how tools (like Remix) are changing that perspective.

JAMstack: a website is a collection of static documents which are created by a static site generator and put on a CDN. If you want dynamism, you “opt out” of a static document for some host-specific solution whose architecture is starkly different from the rest of your site.

Remix: a website is a collection of URLs that follow a request/response cycle handled by a server. Dynamism is “built in” to the architecture and handled on a URL-by-URL basis. You choose how dynamic you want any particular response to be: from a static document on a CDN for everyone, to a custom response on a request-by-request basis for each user.

As your needs grow beyond the basic “static files on disk”, a JAMstack architecture often ends up looking like a microservices architecture where you have disparate pieces that work together to create the final whole: your static site generator here, your lambda functions there, your redirect engine over yonder, each with its own requirements and lifecycles once deployed.

Remix, in contrast, looks more like a monolith: your origin server handles the request/response lifecycle of all URLs at the time and in the manner of your choosing. Instead of a build tool that generates static documents along with a number of distinct “escape hatches” to handle varying dynamic needs, your entire stack is “just a server” (that can be hosted anywhere you host a server) and you decide how and when to respond to each request — beforehand (at build), or just in time (upon request). No architectural escape hatches necessary.

You no longer have to choose upfront whether your site as a whole is “static” or “dynamic”, but rather how much dynamism (if any) is present on a URL-by-URL basis. It’s a sliding scale — a continuum of dynamism — from “completely static, the same for everyone” to “not one line of markup is the same from one request to another”, all of it modeled under the same architecture. And, crucially, that URL-by-URL decision can change as needs change. As Dan Abramov noted in a tweet:

[your] build doesn’t have to be modeled as server. but modeling it as a server (which runs once early) lets you later move stuff around.

Instead of opting into a single architecture up front with escape hatches for every need that breaks the mold, you’re opting into the request/response cycle of the web’s natural grain, and deciding how to respond on a case-by-case basis.

The web is not a collection of static documents. It’s a collection of URLs — of requests and responses — and tools that align themselves to this grain make composing sites with granular levels of dynamism so much easier.
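Here’s a rough sketch (my own, not from Remix or Dan’s post) of what that URL-by-URL decision can look like in practice: every route goes through the same request/response cycle, and each one declares how cacheable, i.e. how “static”, its response is. The route paths and the RouteHandler shape are invented for illustration.

```ts
// A framework-agnostic sketch: one server, one request/response cycle,
// and a per-URL decision about how "static" each response is.
import { createServer } from "node:http";

type RouteHandler = () => { body: string; cacheControl: string };

const routes: Record<string, RouteHandler> = {
  // Effectively static: same body for everyone, cacheable by a CDN for a day.
  "/about": () => ({
    body: "<h1>About us</h1>",
    cacheControl: "public, max-age=86400",
  }),
  // Fully dynamic: computed per request, never cached.
  "/dashboard": () => ({
    body: `<h1>Dashboard rendered at ${new Date().toISOString()}</h1>`,
    cacheControl: "private, no-store",
  }),
};

createServer((req, res) => {
  const handler = routes[req.url ?? "/"];
  if (!handler) {
    res.statusCode = 404;
    res.end("Not found");
    return;
  }
  const { body, cacheControl } = handler();
  res.setHeader("Content-Type", "text/html");
  res.setHeader("Cache-Control", cacheControl);
  res.end(body);
}).listen(3000);
```

The point isn’t the specific headers; it’s that “static vs. dynamic” becomes a knob you turn per URL (and can re-turn later) rather than an architecture you commit to up front.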
Radek Sienkiewicz in a funny-because-it’s-true piece titled “Why do AI company logos look like buttholes?”:

We made a circular shape [logo] with some angles because it looked nice, then wrote flowery language to justify why our…design is actually profound.

As someone who has grown up through the tumult of the design profession in technology, that really resonates. I’ve worked on lots of projects where I got tired of continually justifying design decisions with language dressed in corporate rationality.

This is part of the allure of code. To most people, code either works or it doesn’t. However bad it might be, you can always justify it with “Yeah, but it’s working.” But visual design is subjective forever. And that’s a difficult space to work in, where you need to forever justify your choices. In that kind of environment, decisions are often made by whoever can come up with the best language to justify their choices, or whoever has the most senior job title. Personally, I found it very exhausting.

As Radek points out, this homogenization justified through seemingly-profound language reveals something deeper about tech as an industry: folks are afraid to stand out too much. Despite claims of innovation and disruption, there’s tremendous pressure to look legitimate by conforming to established visual language.

In contrast to this stands the work of individual creators whose work I have always loved — whether it’s individual blogs, videos, websites, you name it. The individual (and I’ll throw small teams in there too) has a sense of taste that doesn’t dilute through the structure and processes of a larger organization. No single person suggests making a logo that resembles an anus, but when everyone’s feedback gets incorporated, that’s what often emerges. In other words, no individual would ever recommend what you get through corporate hierarchies.

That’s why I love the work of small teams and individuals. There’s still soul. You can still sense the individuals — their personalities, their values — oozing through the work. Reminds me of Jony Ive’s description of when he first encountered a Mac:

I was shocked that I had a sense for the people who made it. They could’ve been in the room. You really had a sense of what was on their minds, and their values, and their joy and exuberance in making something that they knew was helpful.

This is precisely why I love the websites of individuals, because their visual language is as varied as the humans behind them — I mean, just look at the websites of these individuals and small teams. You immediately get a sense for the people behind them. I love it!
I quite enjoyed this talk. Some of the technical details went over my head (I don’t know what “split 16-bit mask into two 8-bit LUTs” means) but I could still follow the underlying point.

First off, Andreas has a great story at the beginning about how he has a friend with a browser bookmarklet that replaces every occurrence of the word “dependency” with the word “liability”. Can you imagine npm working that way? Inside package.json:

  {
    "liabilities": {
      "react": "^19.0.0",
      "typescript": "^5.0.0"
    },
    "devLiabilities": {...}
  }

But I digress, back to Andreas. He points out that the context of your problems and the context of someone else’s problems do not overlap as often as we might think.

It’s so unlikely that someone else tried to solve exactly our same problem with exactly our same constraints that [their solution or abstraction] will be the most economical or the best choice for us. It might be ok, but it won’t be the best thing.

So while we immediately jump to tools built by others, the reality is that their tools were built for their problems and therefore won’t overlap with our problems as much or as often as we’re led to believe.

In Andreas’ example, rather than using a third-party library to parse JSON and turn it into something, he writes his own bespoke parser for the problem at hand. His parser ignores a whole swath of abstractions a more generalized parser solves for, and guess what? His is an order of magnitude faster!

Solving problems in the wrong domain and then gluing things together is always much, much worse [in terms of performance] than solving for what you actually need to solve.

It’s fun watching him step through the performance gains as he goes from a generalized solution to one more tailored to his own specific context. What really resonates in his step-by-step process is how, as problems present themselves, you see how much easier it is to deal with performance issues for stuff you wrote vs. stuff others wrote. Not only that, but you can debug way faster! (Just think of the last time you tried to debug a file 1) you wrote, vs. 2) one you vendored, vs. 3) one you installed deep down in node_modules somewhere.)

Andreas goes from 41MB/s throughput to 1.25GB/s throughput without changing the behavior of the program. He merely removed a bunch of generalized abstractions he wasn’t using and didn’t need. Surprise, surprise: not doing unnecessary things is faster!

You should always consider the unique context of your situation and weigh trade-offs. A “generic” solution means a solution “not tuned for your use case”.
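As a toy illustration of that last point (my own example, not the parser from Andreas’ talk): if all you need is the number of records in a big JSON array of flat objects, a bespoke scan can skip the object graph a general parser has to build. The assumption here (no nested objects, no braces inside string values) is exactly the kind of context-specific constraint a generic library can’t rely on.

```ts
// Toy comparison: generic parse-everything vs. a bespoke scan tuned to one question.
// Assumes a flat array of objects with no braces inside string values.

const payload = JSON.stringify(
  Array.from({ length: 100_000 }, (_, i) => ({ id: i, name: `item-${i}` }))
);

// Generic: materialize the entire object graph, then count.
function countGeneric(json: string): number {
  return (JSON.parse(json) as unknown[]).length;
}

// Bespoke: count top-level objects by scanning for '{' without allocating anything.
function countBespoke(json: string): number {
  let count = 0;
  for (let i = 0; i < json.length; i++) {
    if (json.charCodeAt(i) === 0x7b /* '{' */) count++;
  }
  return count;
}

console.log(countGeneric(payload), countBespoke(payload)); // same answer, very different cost
```

The bespoke version is faster precisely because it refuses to handle inputs it will never see, which is also why it would make a poor general-purpose library.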
More in design
Essence 58 reimagines premium packaging, redefining the interplay between texture, light, and craftsmanship. Developed under the “Harvested Essence” brand, this...
Five fictional interface concepts that could reshape how humans and machines interact.

Every piece of technology is an interface. Though the word has come to be a shorthand for what we see and use on a screen, an interface is anything that connects two or more things together. While that technically means that a piece of tape could be considered an interface between a picture and a wall, or a pipe between water and a home, interfaces become truly exciting when they create both a physical connection and a conceptual one — when they create a unique space for thinking, communicating, creating, or experiencing.

This is why, despite the flexibility and utility of multifunction devices like the smartphone, single-function computing devices still have the power to fascinate us all. The reason for this, I believe, is not just that single-function devices enable their users to fully focus on the experience they create, but that the device can be fully built for that experience. Every aspect of its physical interface can be customized to its functionality; it can have dedicated buttons, switches, knobs, and displays that directly connect our bodies to its features, rather than abstracting them through symbols under a pane of glass.

A perfect example of this comes from the very company responsible for steering our culture away from single-function devices; before the iPhone, Apple’s most influential product was the iPod, which won users over with an innovative approach to a physical interface: the clickwheel. It took the hand’s ability for fine motor control and coupled it with the need for speed in navigating a suddenly longer list of digital files. With a subtle but feel-good gesture, you could skip through thousands of files fluidly. It was seductive and encouraged us all to make full use of the newfound capacity the iPod provided. It was good for users and good for the .mp3 business. I may be overly nostalgic about this, but no feature of the iPhone feels as good to use as the clickwheel did.

Of course, that’s an example that sits right at the nexus between dedicated — old-fashioned — devices and the smartphonization of everything. Prior to the iPod, we had many single-focus devices and countless examples of physical interfaces that gave people unique ways of doing things. Whenever I use these kinds of devices — particularly physical media devices — I start to imagine alternate technological timelines. Ones where the iPhone didn’t determine two decades of interface consolidation. I go full sci-fi.

Science fiction, by the way, hasn’t just predicted our technological future. We all know the classic examples, particularly those from Star Trek: the communicator and tricorder anticipated the smartphone; the PADD anticipated the tablet; the ship’s computer anticipated Siri, Alexa, Google, and AI voice interfaces; the entire interior anticipated the Jony Ive glass filter on reality. It’s enough to make the case that Trek didn’t so much anticipate these things as inspire them, since those who watched it as young people matured into careers in design and engineering. But science fiction has also been a fertile ground for imagining very different ways humans and machines might interact. For me, the most compelling interface concepts from fiction are the ones that are built upon radically different approaches to human-computer interaction.
Today, there’s a hunger to “get past” screen-based computer interaction, which I think is largely borne out of a preference for novelty and a desire for the riches that come from bringing an entirely new product category to market. With AI, the desire seems to be to redefine everything we’re used to using on a screen through a voice interface — something I think is a big mistake. And though I’ve written about the reasons why screens still make a lot of sense, what I want to focus on here are different interface paradigms that still make use of a physical connection between people and machine. I think we’ve just scratched the surface of the potential of physical interfaces. Here are a few examples that come to mind that represent untried or untested ideas that captivate my imagination.

Multiple Dedicated Screens: 2001’s Discovery One

Our current computing convention is to focus on a single screen, which we then often divide among a variety of applications. The computer workstations aboard the Discovery One in 2001: A Space Odyssey featured something we rarely see today: multiple, dedicated smaller screens. Each screen served a specific, stable purpose throughout a work session. Physically isolating environments and distributing them across dedicated displays is worth considering as a deliberate choice today, not just an arbitrary limitation of how large screens could be at the time the film was produced.

Placing physical boundaries between screen-based environments rather than the soft, constantly shifting divisions we manage on our widescreen displays might seem cumbersome and unnecessary at first. But I wonder what half a century of computing that way would have created differently from what we ended up with thanks to the PC. Instead of spending time repositioning and reprioritizing windows — a task that has somehow become a significant part of modern computer use — dedicated displays would allow us to assign specific screens for ambient monitoring and others for focused work. The psychological impact could be profound. Choosing which information deserves its own physical space creates a different relationship with that information. It becomes less about managing digital real estate and more about curating meaningful, persistent contexts for different types of thinking.

The Sonic Screwdriver: Intent as Interface

The Doctor’s sonic screwdriver from Doctor Who represents perhaps the most elegant interface concept ever imagined: a universal tool that somehow interfaces with any technology through harmonic resonance. But the really interesting aspect isn’t the pseudo-scientific explanation — it’s how the device responds to intent rather than requiring learned commands or specific inputs. The sonic screwdriver suggests technology that adapts to human purpose rather than forcing humans to adapt to machine constraints. Instead of memorizing syntax, keyboard shortcuts, or navigation hierarchies, the user simply needs to clearly understand what they want to accomplish. The interface becomes transparent, disappearing entirely in favor of direct intention-to-result interaction. This points toward computing that works more like natural tool use — the way a craftsperson uses a hammer or chisel — where the tool extends human capability without requiring conscious attention to the tool itself. The Doctor’s screwdriver may, at this point, be indistinguishable from magic, but in a future with increased miniaturization, nanotech, and quantum computing, a personal device shaped by intent could be possible.
Al’s Handlink: The Mind-Object

In Quantum Leap, Al’s handlink device looks like a smartphone-sized Mondrian painting: no screen, no discernible buttons, just blocky areas of color that illuminate as he uses it. As the show progressed, the device became increasingly abstract until it seemed impossible that any human could actually operate it. But perhaps that’s the point. The handlink might represent a complete paradigm shift toward iconic and symbolic visual computing, or it could be something even more radical: a mind-object, a projection within a projection coming entirely from Al’s consciousness. A totem that’s entirely imaginary yet functionally real. In the context of the show, that was an explanation that made sense to me — Al, after all, wasn’t physically there with his time-leaping friend Sam; he was a holographic projection from a stable time in the future. He could have looked like anything; so, too, his computer. But the handlink as a mind-object also suggests computing that exists at the intersection of technology and parapsychology — interfaces that respond to mental states, emotions, or subconscious patterns rather than explicit physical inputs. What kind of computing would exist in a world where telepathy was as commonly experienced as the five senses?

Penny’s Multi-Page Computer: Hardware That Adapts

Inspector Gadget’s niece Penny carried a computer disguised as a book, anticipating today’s foldable devices. But unlike our current two-screen foldables arranged in codex format, Penny’s book had multiple pages, each providing a unique interface tailored to specific tasks. This represents customization at both the software and hardware layers simultaneously. Rather than software conforming to hardware constraints, the physical device itself adapts to the needs of different applications. Each page could offer different input methods, display characteristics, or interaction paradigms optimized for specific types of work. This could be achieved similarly to the Doctor’s screwdriver, but it also could be more within reach if we imagine this kind of layered interface as composed of individual modules. Google’s Project Ara was an inspiring foray into modular computing that, I believe, still has promise today, if not more so thanks to 3D printing. What if you could print your own interface?

The Holodeck as Thinking Interface

Star Trek’s Holodeck is usually discussed as virtual reality entertainment, but some episodes showed it functioning as a thinking interface — a tool for conceptual exploration rather than just immersive experience. When Data’s artificial offspring used the Holodeck to visualize possible physical appearances while exploring identity, it functioned much like we use Midjourney today: prompting a machine with descriptions to produce images representing something we’ve already begun to visualize mentally. In another episode, when crew members used it to reconstruct a shared suppressed memory, it became a collaborative medium for group introspection and collective problem-solving. In both cases, the interface disappeared entirely. There was no “using” or “inhabiting” the Holodeck in any traditional sense — it became a transparent extension of human thought processes, whether individual identity exploration or collective memory recovery.

Beyond the Screen, but Not the Body

Each of these examples suggests moving past our current obsession with maximizing screen real estate and window management.
They point toward interfaces that work more like natural human activities: environmental awareness, tool use, conversation, and collaborative thinking. The best interfaces we never built aren’t just sleeker screens — they’re fundamentally different approaches to creating that unique space for thinking, communicating, creating, and experiencing that makes technology truly exciting. We’ve spent two decades consolidating everything into glass rectangles. Perhaps it’s time to build something different.
We developed the complete design for the Lights & Shadows project—a selection of 12 organic teas—from naming and original illustrations...
Why compensation, edification, and recognition aren’t equally important—and getting the order wrong can derail your career.

Success is subjective. It means many things to many different people. But I think there is a general model that anyone can use to build a design career. I believe that success in a design career should be evaluated against three criteria: compensation, edification, and recognition. But contrary to how the design industry operates — and the advice typically given to emerging designers — these aren’t equally important. They form a hierarchy, and getting the order wrong can derail a career before it even begins.

Compensation Comes First

Compensation is the most important first signal of a successful design career, because it is the thing that enables the continuation of work. If you’re not being paid adequately, your ability to keep working is directly limited. This is directly in opposition to the advice I got time and again at the start of my career, which essentially boiled down to: do what you love and the money and recognition will come. This is almost never true. There have been rare cases where it has been true for people who, ultimately, happened to be in the right place at the right time with the right relationships already in place. The post-hoc narrative of their lottery-like success leaves out all the luck and privilege and focuses entirely on the passion. These stories are intoxicating. They feel good, blur our vision, and result in a working hangover that can waylay someone for years, if not the entirety of their increasingly dispiriting career.

What does adequate compensation look like? It’s not about getting rich — it’s about reaching a threshold where money anxiety doesn’t dominate your decision-making. Can you pay rent without stress? Buy groceries without calculating every purchase? Take a sick day without losing income? Have a modest emergency fund? If you can answer yes to these basics, you’ve achieved the compensation foundation that makes everything else possible. This might mean taking a corporate design job instead of the “cool” startup that pays in equity and promises. It might mean freelancing for boring clients instead of passion projects. It might mean saying no to unpaid opportunities, even when they seem prestigious. The key insight is that financial stability creates the mental space and time horizon necessary for meaningful career development. This is not glamorous. It sounds boring. It may even be boring, but it doesn’t need to last that long. It’s easier to make money once you’ve made money.

Then Focus on Edification

Once compensation has been taken care of, the majority of a designer’s effort should be put toward edification. I choose this word very intentionally. There is nothing wrong with passion, but passion is the fossil fuel of the soul. It’s not an intrinsic expression of humanity; it is inspired by experience, nurtured by love, commitment, and work, and focused by discipline, labor, and feedback. Passion gets all the credit for inspiration and none of the blame for pain, but it’s worth pointing out that the ancient application of this word had more to do with suffering than success. Edification, on the other hand, covers the full, necessary cycle that keeps us working as designers: interest, information, instruction, improvement. You couldn’t ask for a more profound measure of success than maintaining the cycle of edification for an entire career. If you feel intimidated by a project, it is an opportunity to learn.
Focus your interest toward gathering new information. If you feel uncomfortable during a project, you are probably growing. Seek instruction from those you know who make the kind of work you admire in a way you can respect. If you feel like the work could have been better, you’re probably right. You’re ready to work toward improvement. This process doesn’t just happen once; a successful career is the repetition of this cycle again and again.

What does edification look like in practice? It’s choosing projects that teach you something new, even if they’re not the most glamorous. It’s working with people who challenge your thinking. It’s seeking feedback that makes you uncomfortable. It’s reading, experimenting, and building things outside of work requirements. It’s the difference between collecting paychecks and building expertise. Considering the cycle of edification should help you select the right opportunities. Does the problem space interest you intellectually? Will the project expand your skill set? Will you work with people from whom you can learn? Once you’re not worried about making rent, these not only become viable considerations but the essential path forward.

The transition point between focusing on compensation and edification isn’t about reaching a specific salary number — it’s about achieving enough financial stability that you can think beyond survival. For some, this might happen quickly; for others, it may take several years. It might happen more than once in a career. The key is recognizing when you’ve moved from financial desperation to financial adequacy.

Recognition Is Always Overrated

Finally, recognition. This is probably the least valuable measure of success a designer could pursue and receive. It is subjective. It is fickle. It is fleeting. And yet, it is the bait used to lure inexperienced designers — to unpaid internships, low-paid jobs, free services, and spec work of all kinds. The pitch is always the same: we can’t pay you, but we can offer you exposure. This is a lie. Attention is harder to come by than money these days, so when a person offers you one in lieu of the other, know it’s an IOU that will never pay out. Most designers are better off bootstrapping their own recognition rather than hoping for a sliver of someone else’s limelight. I might not have understood or believed this at the start of my career; I take it as fact today, twenty years in.

That said, I wouldn’t say that all recognition is worthless. Peer respect within your professional community has value — it can lead to better opportunities and collaborations. Having work you’re proud to show can open doors. But these forms of recognition should be byproducts of doing good work, not primary goals that drive decision-making. Design careers built upon recognition alone are indistinguishable from entertainment. The recognition trap is particularly dangerous early in a career because it exploits the natural desire for validation. Young designers are told that working for prestigious brands or winning awards will jumpstart their careers. Sometimes this works, but more often it leads to a cycle of undervalued work performed in hopes of a future payoff that never materializes.

Applying the Hierarchy

Here’s how this hierarchy works in practice:

Early career: Focus almost exclusively on compensation. Take the job that pays best, even if it’s not the most exciting. Learn what you can, but prioritize financial stability above all else.
Mid-career: Once you’ve achieved financial adequacy, shift focus to edification. Be more selective about projects and opportunities. Invest in skills and relationships that will compound over time.

Established career: Recognition may come naturally as a result of good work and years of experience. If it doesn’t, that’s fine too — you’ll have built something more valuable: expertise and financial security.

Looking back, I can say that I put far more emphasis on external recognition and validation too early in my career. I got a lot more of it — and let it distract me — ten years into my career than I do now, and it shows in my work. It’s better now than it was then, even if no one is talking about it. Every designer is better off putting whatever energy they’d expend on an attention fetch quest toward getting paid for their work, because it’s the money that will get you what you really need in the early days of your career: a roof over your head, food on the table, a good night’s sleep, and a way to get from here to there. If you have those things and are working in design, keep at it. Either external recognition will come or you’ll work long enough to realize that sometimes the most important recognition is self-bestowed. If you can be satisfied by work before anyone else sees it, you will need less of the very thing least capable of sustaining you. You will always get farther on your own steam than someone else’s.
Nuqa, bridging timeless heritage and a boldly redesigned identity. In the heart of the Middle East and North Africa, where...