Bean counters have a bad rep for a reason. And it’s not because paying attention to the numbers is inherently unreasonable. It’s because weighing everything exclusively by its quantifiable properties is an impoverished way to view business (and the world!). Nobody presents this caricature better than the MBA types who think you can manage a business entirely in the abstract realm of "products," "markets," "resources," and "deliverables." To hell with that. The death of all that makes for a breakout product or service happens when the generic lingo of management theory takes over. This is why founder-led operations often keep an edge. Because when there’s someone at the top who actually gives a damn about cars, watches, bags, software, or whatever the hell the company makes, it shows up in a million value judgments that can’t be quantified neatly on a spreadsheet. Now, I love a beautiful spreadsheet that shows expanding margins, healthy profits, and customer growth as much as any...
3 hours ago


More from David Heinemeier Hansson

Air purifiers are a simple answer to allergies

I developed seasonal allergies relatively late in life. From my late twenties onward, I spent many miserable days in the throes of sneezing, headache, and runny eyes. I tried everything the doctors recommended for relief. About a million different types of medicine, several bouts of allergy vaccinations, and endless testing. But never once did an allergy doctor ask the basic question: What kind of air are you breathing?

Turns out that's everything when you're allergic to pollen, grass, and dust mites! The air. That's what's carrying all this particulate matter, so if your idea of proper ventilation is merely to open a window, you're inviting in your nasal assailants. No wonder my symptoms kept escalating.

For me, the answer was simply to stop breathing air full of everything I'm allergic to while working, sleeping, and generally just being inside. And the way to do that was to clean the air of all those allergens with air purifiers running HEPA-grade filters. That's it. That was the answer!

After learning this, I outfitted everywhere we live with these machines of purifying wonder: One in the home office, one in the living area, one in the bedroom. All monitored for efficiency using Awair air sensors. Aiming to have the PM2.5 measure read a fat zero whenever possible. In America, I've used the Alen BreatheSmart series. They're great. And in Europe, I've used the Philips ones. Also good.

It's been over a decade like this now. It's exceptionally rare that I have one of those bad allergy days anymore. It can still happen, of course — if I spend an entire day outside, breathing in allergens in vast quantities. But as with almost everything, the dose makes the poison. Breathing in some allergens, some of the time, is entirely different from breathing all of them, all of the time.

I think about this often when I see a doctor for something. Here was this entire profession of allergy specialists, and I saw at least a handful of them while I was trying to find a medical solution. None of them even thought about dealing with the environment. The cause of the allergy. Their entire field of view was restricted to mitigation rather than prevention.

Not every problem, medical or otherwise, has a simple solution. But many problems do, and you have to be careful not to be so smart that you can't see it.

yesterday 2 votes
Human service is luxury

Maybe one day AI will answer every customer question flawlessly, but we're nowhere near that reality right now. I can't tell you how often I've been stuck in some god-forsaken AI loop or phone tree WHEN ALL I WANT IS A HUMAN. So I end up either just yelling "operator", "operator", "operator" (the modern-day mayday!) or smashing zero over and over. It's an unworthy interaction for any premium service.

Don't get me wrong. I'm pretty excited about AI. I've seen it do some incredible things. And of course it's just going to keep getting better. But in our excitement about the technical promise, I think we're forgetting that humans need more than correct answers. Customer service at its best also offers understanding and reassurance. It offers a human connection.

Especially as AI eats the low-end, commodity-style customer support. The sort that was always done poorly, by disinterested people, rapidly churning through a perceived dead-end job, inside companies that only ever saw support as a cost center. Yeah, nobody is going to cry a tear for losing that.

But you know that isn't all there is to customer service. Hopefully you've had a chance to experience what it feels like when a cheerful, engaged human is interested in helping you figure out what's wrong or how to do something right. Because they know exactly what they're talking about. Because they've helped thousands of others through exactly the same situation. That stuff is gold.

Partly because it feels bespoke. A customer service agent who's good at their job knows how to tailor the interaction not just to your problem, but to your temperament. Because they've seen all the shapes. They can spot an angry-but-actually-just-frustrated fit from a thousand miles away. They can tell a timid-but-curious type too. And then deliver exactly what either needs in that moment. That's luxury.

That's our thesis for Basecamp, anyway. That by treating customer service as a career, we'll end up with the kind of agents who embody this luxury, and our customers will feel the difference.

3 days ago 5 votes
AMD in everything

Back in the mid 90s, I had a friend who was really into raytracing, but needed to nurture his hobby on a budget. So instead of getting a top-of-the-line Intel Pentium machine, he bought two AMD K5 boxes, and got a faster rendering flow for less money. All I cared about in the 90s was gaming, though, and for that, Intel was king, so to me, AMD wasn't even a consideration.

And that's how it stayed for the better part of the next three decades. AMD would put out budget parts that might make economic sense in narrow niches, but Intel kept taking all the big trophies in gaming, in productivity, and on the server. As late as the end of the 2010s, we were still buying Intel for our servers at 37signals. Even though AMD was getting more competitive, and the price-watt-performance equation was beginning to tilt in their favor.

By the early 2020s, though, AMD had caught up on the server, and we haven't bought Intel since. The AMD EPYC line of chips is simply superior to anything Intel offers in our price/performance window. Today, the bulk of our new fleet runs on dual EPYC 9454s for a total of 96 cores(!) per machine. They're awesome.

It's been the same story on the desktop and laptop for me. After switching to Linux last year, I've been all in on AMD. My beloved Framework 13 is rocking an AMD 7640U, and my desktop machine runs on an AMD 7950X. Oh, and my oldest son just got a new gaming PC with an AMD 9900X, and my middle son has an AMD 8945HS in his gaming laptop. It's all AMD in everything!

So why is this? Well, clearly the clever crew at AMD is putting out some great CPU designs lately with Lisa Su in charge. I'm particularly jazzed about the upcoming Framework desktop, which runs the latest Max+ 395 chip, and can apportion up to 110GB of memory as VRAM (great for local AI!). This beast punches a multi-core score that's on par with that of an M4 Pro, and it's no longer that far behind in single-core either.

But all the glory doesn't just go to AMD; it's just as much a triumph of TSMC. TSMC stands for Taiwan Semiconductor Manufacturing Company. They're the world leader in advanced chip making, and key to the story of how Apple was able to leapfrog the industry with the M-series chips back in 2020. Apple has long been the top customer for TSMC, so they've been able to reserve capacity on the latest manufacturing processes (called "nodes"), and as a result had a solid lead over everyone else for a while.

But that lead is evaporating fast. That new Max+ 395 is showing that AMD has nearly caught up in terms of raw grunt, and the efficiency is no longer a million miles away either. This is again largely because AMD has been able to benefit from the same TSMC-powered progress that's also propelling Apple.

But you know who it's not propelling? Intel. They're still trying to get their own chip-making processes to perform competitively, but so far it looks like they're just falling further and further behind. The latest Intel boards are more expensive and run slower than the competition from Apple, AMD, and Qualcomm. And there appears to be no easy fix around the corner to sort it all out.

TSMC really is lifting all the boats behind its innovation locks. Qualcomm, just like AMD, has nearly caught up to Apple with its latest chips. The 8 Elite unit in my new Samsung S25 is faster than the A18 Pro in the iPhone 16 Pro in multi-core tests, and very close in single-core. It's also just as efficient now.
This is obviously great for Android users, who for a long time had to suffer the indignity of truly atrocious CPU performance compared to the iPhone. It was so bad for a while that we had to program our web apps differently for Android, because those phones simply didn't have the power to run JavaScript fast enough! But that's all history now.

But as much as I now cheer for Qualcomm's chips, I'm even more chuffed about the fact that AMD is on a roll. I spend far more time in front of my desktop than I do any other computer, and after dumping Apple, it's a delight to see the M-series advantage shrinking to irrelevance fast. There's of course still the software reason for why someone would pick Apple, and they continue to make solid hardware, but the CPU playing field is now being leveled.

This is obviously a good thing if you're a fan of Linux, like me. Framework in particular has emerged as a credible alternative to the sleek, unibody, but ultimately disposable nature of the reigning MacBook laptops. By focusing on repairability, upgradeability, and superior keyboards, we finally have an alternative for developer laptops that doesn't just feel like a cheap copy of a MacBook. And thanks to AMD pushing the envelope, these machines are rapidly closing the remaining gaps in performance and efficiency.

And oh how satisfying it must be to sit as CEO of AMD now. The company was founded just one year after Intel, back in 1969, but for its entire existence, it's lived in the shadow of its older brother. Now, thanks to TSMC, great leadership from Lisa Su, and a crack team of chip designers, they're finally reaping the rewards. That is one hell of a journey to victory!

So three cheers for AMD! A tip of the hat to TSMC. And what a gift to developers and computer enthusiasts everywhere that Apple once more has some stiff competition in the chip space.

a week ago 6 votes
The New York Times gives liberals The Danish Permission to pivot on mass immigration

One of the key roles The New York Times plays in American society is as guardian of the liberal Overton window. Its editorial line sets the terms for what's permissible to discuss in polite circles on the center left. Whether it's covid mask efficacy, trans kids, or, now, mass immigration. When The New York Times allows the counterargument to liberal orthodoxy to be published, it signals to its readers that it's time to pivot.

On mass immigration, the center-left liberal orthodoxy has for the last decade in particular been that this is an unreserved good. It's cultural enrichment! It's much-needed workers! It's a humanitarian imperative! Any opposition was treated as de facto racism, and the idea that a country would enforce its own borders as evidence of early fascism. But that era is coming to a close, and The New York Times is using The Danish Permission to prepare its readers for the end.

As I've often argued, Denmark is an incredibly effective case study in such arguments, because it's commonly thought of as the holy land of progressivism. Free college, free health care, amazing public transit, obsessive about bikes, and a solid social safety net. It's basically everything people on the center left ever thought they wanted from government. In theory, at least. In practice, all these government-funded benefits come with a host of trade-offs that many upper middle-class Americans (the primary demographic for The New York Times) would find difficult to swallow. But I've covered that in detail in The reality of the Danish fairytale, so I won't repeat that here.

Instead, let's focus on the fact that The New York Times is now begrudgingly admitting that the main reason Europe has turned to the right, in election after election recently, is the problems stemming from mass immigration across the continent and the channel. For example, here's a bit about immigrant crime being higher:

Crime and welfare were also flashpoints: Crime rates were substantially higher among immigrants than among native Danes, and employment rates were much lower, government data showed.

It wasn't long ago that recognizing higher crime rates among MENAPT immigrants to Europe was seen as a racist dog whistle. And every excuse imaginable was leveled at the undeniable statistics showing that immigrants from countries like Tunisia, Lebanon, and Somalia are committing violent crime at rates 7-9 times higher than ethnic Danes (and that these statistics are essentially the same in Norway and Finland too).

Or how about this one: recognizing that many immigrants from certain regions were loafing on the welfare state in ways that really irked the natives:

One source of frustration was the fact that unemployed immigrants sometimes received resettlement payments that made their welfare benefits larger than those of unemployed Danes.

Or the explicit acceptance that a strong social welfare state requires a homogeneous culture in order to sustain the trust needed for its support:

Academic research has documented that societies with more immigration tend to have lower levels of social trust and less generous government benefits. Many social scientists believe this relationship is one reason that the United States, which accepted large numbers of immigrants long before Europe did, has a weaker safety net. A 2006 headline in the British publication The Economist tartly summarized the conclusion from this research as, "Diversity or the welfare state: Choose one."

Diversity or welfare! That again would have been an absolutely explosive claim to make not all that long ago.

Finally, there's the acceptance that cultural incompatibility, such as on the role of women in society, is indeed a problem:

Gender dynamics became a flash point: Danes see themselves as pioneers for equality, while many new arrivals came from traditional Muslim societies where women often did not work outside the home and girls could not always decide when and whom to marry.

It took a while, but The New York Times is now recognizing that immigrants from some regions really do commit vastly more violent crime, are net-negative contributors to the state budgets (by drawing benefits at higher rates and being unemployed more often), and that together with the cultural incompatibilities, they end up undermining public trust in the shared social safety net.

The consequence of this admission is dawning not only on The New York Times, but also on other liberal entities around Europe:

Tellingly, the response in Sweden and Germany has also shifted... Today many Swedes look enviously at their neighbor. The foreign-born population in Sweden has soared, and the country is struggling to integrate recent arrivals into society. Sweden now has the highest rate of gun homicides in the European Union, with immigrants committing a disproportionate share of gun violence. After an outburst of gang violence in 2023, Ulf Kristersson, the center-right prime minister, gave a televised address in which he blamed "irresponsible immigration policy" and "political naïveté." Sweden's center-left party has likewise turned more restrictionist.

All these arguments are in service of the article's primary thesis: to win back power, the left, in Europe and America, must pivot on mass immigration, like the Danes did. Because only by doing so are they able to counter the threat of "the far right".

The piece does a reasonable job accounting for the history of this evolution in Danish politics, except for the fact that it leaves out the main protagonist. The entire account is written from the self-serving perspective of the Danish Social Democrats, and it shows. It tells a tale of how it was actually Social Democrat mayors who first spotted the problems, and, well, it just took a while for the top of the party to correct. Bullshit.

The real reason the Danes took this turn is that "the far right" won in Denmark, and The Danish People's Party deserves the lion's share of the credit. They started in 1995, quickly set the agenda on mass immigration, and by 2015, they were the second largest party in the Danish parliament.

Does that story ring familiar? It should. Because it's basically what's been happening in Sweden, France, Germany, and the UK lately. The mainstream parties have ignored the grave concerns about mass immigration from their electorates, and only when "the far right" surged as a result did the center-left and center-right parties grow interested in changing course.

Now on some level, this is just democracy at work. But it's also hilarious that this process, where voters choose parties that champion the causes they care about, has been labeled The Grave Threat to Democracy in recent years. Whether it's Trump, Le Pen, Weidel, or Kjærsgaard, they've all been met with contempt or worse for channeling legitimate voter concerns about immigration.

I think this is the point that's sinking in at The New York Times. Opposition to mass immigration and multiculturalism in Europe isn't likely to go away. The mayhem that's swallowing Sweden is a reality too obvious to ignore. And as long as the center left keeps refusing to engage with the topic honestly, and instead hides behind some anti-democratic firewall, they're going to continue to lose ground.

Again, this is how democracies are supposed to work! If your political class is out of step with the mood of the populace, they're supposed to lose. And this is what's broadly happening now. And I think that's why we're getting this New York Times pivot. Because losing sucks, and if you're on the center left, you'd like to see that end.

a week ago 8 votes

More in programming

Let's Talk About The American Dream

A few months ago I wrote about what it means to stay gold — to hold on to the best parts of ourselves, our communities, and the American Dream itself. But staying gold isn’t passive. It takes work. It takes action. It takes hard conversations that ask

16 hours ago 3 votes
Bitsnap, a screenshot capture tool for Medley Interlisp

I wrote Bitsnap, a tool in Interlisp for capturing screenshots on the Medley environment. It can capture and optionally save to a file the full screen, a window with or without title bar and borders, or an arbitrary area.

This project helped me learn the internals of Medley, such as extending the background menu, and produced a tool I wanted. For example, with Bitsnap I can capture some areas, like specific windows, without manually framing them; or the full screen of Medley excluding the title bar and borders of the operating system that hosts Medley, Linux in my case.

Medley can natively capture various portions of the screen. These facilities produce 1-bit images as instances of BITMAP, an image data structure Medley uses for everything from bit patterns, to icons, to actual images. Some Lisp functions manipulate bitmaps.

Bitsnap glues together these facilities and packages them in an interactive interface, accessible as a submenu of the background menu, as well as a programmatic interface, the Interlisp function SNAP. To provide feedback after a capture, Bitsnap displays the area just captured in a window, as shown here along with the Bitsnap menu.

[Image: A bitmap captured with the Bitsnap screenshot tool and its menu on Medley Interlisp.]

The tool works by copying to a new bitmap the system bitmap that holds the designated area of the screen. This is straightforward, as there are Interlisp functions for accessing the source bitmaps. These functions return a BITMAP and capture:

- SCREENBITMAP: the full screen
- WINDOW.BITMAP: a window including the title bar and border
- BITMAPCOPY: the interior of a window with no title bar and border
- SNAPW: an arbitrary area

The slightly more involved part is bringing captured bitmaps out of Medley in a format today's systems and tools understand. Some Interlisp functions can save a BITMAP to disk in text and binary encodings, none of which are modern standards. The only Medley tool that exports to a modern — or less ancient — format less bound to Lisp is the xerox-to-xbm module, which converts a BITMAP to the Unix XBM (X BitMap) format. However, xerox-to-xbm can't process large bitmaps.

To work around the issue I wrote the function BMTOPBM, which saves a BITMAP to a file in a slightly more modern and popular format, PBM (Portable BitMap). I can't think of anything simpler and, indeed, it took me just half a dozen minutes to write the function. Linux and other modern operating systems can natively display PBM files, and Netpbm converts PBM to PNG and other widely used standards. For example, this Netpbm pipeline converts to PNG:

$ pbmtopnm screenshot.pbm | pnmtopng > screenshot.png

BMTOPBM can handle bitmaps of any size, but its simple algorithm is inefficient. However, on my PC the function takes about 5 seconds to save a 1920x1080 bitmap, which is the worst case as this is the maximum screen size Medley allows. Good enough for the time being.

Bitsnap does pretty much all I want and doesn't need major new features. Still, I may optimize BMTOPBM or save directly to PNG.
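To give a sense of how little there is to the PBM format (and why a converter like BMTOPBM is only a few minutes' work), here is a minimal sketch in C rather than Interlisp. The unpacked one-byte-per-pixel input layout is an assumption made purely for illustration; Medley's BITMAP packs pixels into words, so the real function has to unpack them as it goes:

    #include <stdio.h>
    #include <stdint.h>

    /* Write a 1-bit image as plain PBM (P1): a small text header with the
       magic number and dimensions, then one '1' (black) or '0' (white)
       character per pixel. Input is assumed unpacked, one byte per pixel. */
    static int write_pbm(const char *path, const uint8_t *pixels,
                         int width, int height)
    {
        FILE *f = fopen(path, "w");
        if (f == NULL)
            return -1;
        fprintf(f, "P1\n%d %d\n", width, height);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                fputc(pixels[y * width + x] ? '1' : '0', f);
                /* plain PBM recommends short lines, so break them up */
                if (x % 64 == 63)
                    fputc('\n', f);
            }
            fputc('\n', f);
        }
        return fclose(f);
    }

The resulting file is exactly the kind of input the Netpbm pipeline above turns into a PNG.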

8 hours ago 2 votes
Who gets to do strategy?

If you talk to enough aspiring leaders, you'll become familiar with the prevalent idea that they need to be promoted before they can work on strategy. It's a truism, but I've also found this idea perfectly wrong: you can work on strategy from anywhere in an organization; it just requires different tactics to do so.

Both Staff Engineer and The Engineering Executive's Primer have chapters on strategy. While the chapters' contents are quite different, both present a practical path to advancing your organization's thinking about complex topics. This chapter explains my belief that anyone within an organization can make meaningful progress on strategy, particularly if you are honest about the tools accessible to you, and thoughtful about how to use them. The themes we'll dig into are:

- How to do strategy as an engineer, particularly an engineer who hasn't been given explicit authority to do strategy
- Doing strategy as an engineering executive who is responsible for your organization's decision-making
- How you can do engineering strategy even when you depend on an absent strategy, cannot acknowledge parts of the diagnosis because addressing certain problems is politically sensitive, or struggle with pockets of misaligned incentives

If this book's argument is that everyone should do strategy, is there anyone who, nonetheless, really should not do strategy? By the end, you'll hopefully agree that engineering strategy is accessible to everyone, even though you're always operating within constraints.

This is an exploratory, draft chapter for a book on engineering strategy that I'm brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Doing strategy as an engineer

It's easy to get so distracted by the executive's top-down approach to strategy that you convince yourself there aren't other approachable mechanisms for doing strategy. There are! Staff Engineer introduces an approach I call Take five, then synthesize, which does strategy by:

- Documenting how five current and historical related decisions have been made in your organization. This is an extended exploration phase.
- Synthesizing those five documents into a diagnosis and policy. You are naming the implicit strategy, so it's impossible for someone to reasonably argue you're not empowered to do strategy: you're just describing what's already happening.

At that point, either the organization feels comfortable with what you've written (which is their current strategy) or it doesn't, in which case you've forced a conversation about how to revise the approach. Creating awareness is often enough to drive strategic change, and doesn't require any explicit authorization from an executive.

When awareness is insufficient, the other pattern I've found highly effective in low-authority scenarios is an approach I wrote about in An Elegant Puzzle, and call model, document, and share:

- Model the approach you want others to adopt. Make it easy for them to observe how you've changed the way you're doing things.
- Document the approach, the thinking behind it, and how to adopt it.
- Share the document around. If people see you succeeding with the approach, then they're likely to copy it from you.

You might be skeptical because this is an influence-based approach. However, as we'll discuss in the next section, even executive-driven strategy is highly dependent on influence.
Strategy archaeology

Vernor Vinge's A Deepness in the Sky, published in 1999, introduced the term software archaeologists: folks who created functionality by cobbling together millennia of scraps of existing software. Although it's a somewhat different usage, I sometimes think of the "take five, then synthesize" approach as performing strategy archaeology. Simply by recording what has happened in the past, we make it easier to understand the present, and influence the future.

Doing strategy as an executive

The biggest misconception about executive roles, frequently held by non-executives and new executives who are about to make a series of regrettable mistakes, is that executives operate without constraints. That is false: executives operate under an extremely high number of constraints. Executives have budgets, CEO visions, peers to satisfy, and a team to motivate. They can disappoint any of these temporarily, but long-term they have to satisfy all of them. Nonetheless, it is true that executives have more latitude to mandate and cajole participation in the strategies that they sponsor.

The Engineering Executive's Primer's chapter on strategy is a brief summary of this entire book, but it doesn't say much about how executive strategy differs from non-executive strategy. How the executive's approach to strategy differs from the engineer's can be boiled down to:

- Executives can mandate that their strategy be followed, which empowers their policy options. An engineer can't prevent the promotion of someone who refused to follow their policy, but an executive can.
- Mandates only matter if there are consequences. If an executive is unwilling to enforce consequences for non-compliance with a mandate, the ability to issue a mandate isn't meaningful. This is also true if they can't enforce a mandate because of lack of support from their peer executives.
- Even if an executive is unwilling to use mandates, they have significant visibility and access to their organization to advocate for their preferred strategy.
- Neither access nor mandates improve an executive's ability to diagnose problems. However, both often create the appearance of progress. This is why executive strategies can fail so spectacularly and endure so long despite failure.

As a result, my experience is that executives have an easier time doing strategy, but a much harder time learning how to do strategy well, and fewer protections against serious mistakes. Further, the consequences of an executive's poor strategy tend to be much further reaching than an engineer's. Waiting to do strategy until you are an executive is a recipe for disaster, even if it looks easier from a distance.

Doing strategy in other roles

Even if you're neither an engineer nor an engineering executive, you can still do engineering strategy. It'll just require an even more influence-driven approach. The engineering organization is generally right to believe that it knows the most about engineering, but that's not always true. Sometimes a product manager used to be an engineer and has significant relevant experience. Other times, such as in the early adoption of large language models, engineers don't know much either, and benefit from outside perspectives.

Doing strategy in challenging environments

Good strategies succeed by accurately diagnosing circumstances and picking policies that address those circumstances.
You are likely to spend time in organizations where both of those are challenging due to internal limitations, so it's worth acknowledging that and discussing how to navigate those challenges.

Low-trust environment

Sometimes the struggle to diagnose problems is a skill issue. Being bad at strategy is in some ways the easy problem to solve: just do more strategy work to build expertise. In other cases, you may see what the problems are fairly clearly, but not know how to acknowledge them because your organization's culture would frown on it. The latter is a diagnosis problem rooted in low trust, and it does make things more difficult.

The chapter on Diagnosis recognizes this problem, and admits that sometimes you have to whisper the controversial parts of a strategy:

When you're writing a strategy, you'll often find yourself trying to choose between two awkward options: say something awkward or uncomfortable about your company or someone working within it, or omit a critical piece of your diagnosis that's necessary to understand the wider thinking. Whenever you encounter this sort of debate, my advice is to find a way to include the diagnosis, but to reframe it into a palatable statement that avoids casting blame too narrowly.

In short, the solution to low trust is to translate difficult messages into softer, less direct versions that are acceptable to state. If your goal is to hold people accountable, this can feel dishonest or like an ethical compromise, but the goal of strategy is to make better decisions, which is an entirely different concern than holding folks accountable for the past.

Karpman Drama Triangle

Sometimes when the diagnosis seems particularly obvious, and people don't agree with you, it's because you are wrong. When I've been obviously wrong about things I understand well, it's usually because I've fallen into viewing a situation through the Karpman Drama Triangle, where all parties are mapped as the persecutor, the rescuer, or the victim.

Poor-judgment environment

Even when you do an excellent job diagnosing challenges, it can be difficult to drive agreement within the organization about how to address them. Sometimes this is due to genuinely complex tradeoffs. For example, in Stripe's acquisition of Index, there was debate about how to deal with Index's Java-based technology stack, which culminated in a compromise that didn't make anyone particularly happy:

Defer making a decision regarding the introduction of Java to a later date: the introduction of Java is incompatible with our existing engineering strategy, but at this point we've also been unable to align stakeholders on how to address this decision. Further, we see attempting to address this issue as a distraction from our timely goal of launching a joint product within six months. We will take up this discussion after launching the initial release.

That compromise is a good example of a difficult tradeoff: although parties disagreed with the approach, everyone understood the conflicting priorities that had to be addressed. In other cases, though, there are policy choices that simply don't make much sense, generally driven by poor judgment in your organization. Sometimes it's not poor technical judgment, but poor judgment in choosing to prioritize one's personal interests at the expense of the company's needs.
Calm's strategy to focus on being a product-engineering organization dealt with some aspects of that, acknowledged in its diagnosis:

We're arguing a particularly large amount about adopting new technologies and rewrites. Most of our disagreements stem from adopting new technologies or rewriting existing components into new technology stacks. For example, can we extend this feature or do we have to migrate it to a service before extending it? Can we add this to our database or should we move it into a new Redis cache instead? Is JavaScript a sufficient programming language, or do we need to rewrite this functionality in Go?

In that situation, your strategy is an attempt to educate your colleagues about the tradeoffs they are making, but ultimately sometimes folks will disagree with your strategy. In that case, remember that most interesting problems require iterative solutions. Writing your strategy and sharing it will start to change the organization's mind. Don't get discouraged even if that change is initially slow.

Dealing with missing strategies

The strategy for dealing with new private equity ownership introduces a common problem: lack of clarity about what other parts of your own company want. In that case, it seems likely there will be a layoff, but it's unclear how large that layoff will be:

Based on general practice, it seems likely that our new Private Equity ownership will expect us to reduce R&D headcount costs through a reduction. However, we don't have any concrete details to make a structured decision on this, and our approach would vary significantly depending on the size of the reduction.

Many leaders encounter that sort of ambiguity and decide that they cannot move forward with a strategy of their own until that decision is made. While it's true that it's inconvenient not to know the details, getting blocked by ambiguity is always the wrong decision. Instead you should do what the private equity strategy does: accept that ambiguity as a fact to be worked around. Rather than giving up, it adopts a series of new policies to start reducing cost growth by changing the organization's seniority mix, and recognizes that once there is clarity on reduction targets, there will be additional actions to take.

Whenever you're doing something challenging, there are an infinite number of reasonable rationales for why you shouldn't or can't make progress. Leadership is finding a way to move forward despite those issues. A missing strategy is always part of your diagnosis, but never a reason that you can't do strategy.

Who shouldn't do strategy

In my experience, there's almost never a reason why you cannot do strategy, but there are two particular scenarios where doing strategy probably doesn't make sense.

The first is not a who but a when problem: sometimes there is so much strategy work already happening that doing more is a distraction. If another part of your organization is already working on the same problem, do your best to work with them directly rather than generating competing work.

The other time to avoid strategy is when you're trying to satisfy an emotional need to make a direct, immediate impact. Sharing a thoughtful strategy always makes progress, but it's often the slow, incremental progress of changing your organization's beliefs. Even definitive, top-down strategies from executives are often ignored in pockets of an organization, and bottom-up strategies spread slowly as they are modeled, documented, and shared.
Embarking on strategy work requires a tolerance for winning in the long run, even when there's little progress this week or this quarter.

Summary

As you finish reading this chapter, my hope is that you also believe that you can work on strategy in your organization, whether you're an engineer or an executive. I also hope that you appreciate that the tools you use vary greatly depending on who you are within your organization and the culture in which you work. Whether you need to model or can mandate, there's a mechanism that will work for you.

7 hours ago 2 votes
constantly divisionless random numbers

Last year I wrote about inlining just the fast path of Lemire's algorithm for nearly-divisionless unbiased bounded random numbers. The idea was to reduce code bloat by eliminating lots of copies of the random number generator in the rarely-executed slow paths. However, a simple split prevented the compiler from being able to optimize cases like pcg32_rand(1 << n), so a lot of the blog post was toying around with ways to mitigate this problem. On Monday while procrastinating a different blog post, I realised that it's possible to do better: there's a more general optimization which gives us the 1 << n special case for free.

nearly divisionless

Lemire's algorithm has about 4 neat tricks:

1. use multiplication instead of division to reduce the output of a random number generator modulo some limit
2. eliminate the bias in (1) by (counterintuitively) looking at the lower digits
3. fun modular arithmetic to calculate the reject threshold for (2)
4. arrange the reject tests to avoid the slow division in (3) in most cases

The nearly-divisionless logic in (4) leads to two copies of the random number generator, in the fast path and the slow path. Generally speaking, compilers don't try to deduplicate code that was written by the programmer, so they can't simplify the nearly-divisionless algorithm very much when the limit is constant.

constantly divisionless

Two points occurred to me:

- when the limit is constant, the reject threshold (3) can be calculated at compile time
- when the division is free, there's no need to avoid it using (4)

These observations suggested that when the limit is constant, the function for random numbers less than a limit should be written:

    static inline uint32_t
    pcg32_rand_const(pcg32_t *rng, uint32_t limit) {
        uint32_t reject = -limit % limit;
        uint64_t sample;
        do sample = (uint64_t)pcg32_random(rng) * (uint64_t)limit;
        while ((uint32_t)(sample) < reject);
        return ((uint32_t)(sample >> 32));
    }

This has only one call to pcg32_random(), saving space as I wanted, and the compiler is able to eliminate the loop automatically when the limit is a power of two. The loop is smaller than a call to an out-of-line slow path function, so it's better all round than the code I wrote last year.

algorithm selection

As before, it's possible to automatically choose the constantly-divisionless or nearly-divisionless algorithms depending on whether the limit is a compile-time constant or a run-time variable, using arcane C tricks or GNU C's __builtin_constant_p().

I have been idly wondering how to do something similar in other languages. Rust isn't very keen on automatic specialization, but it has a reasonable alternative. The thing to avoid is passing a runtime variable to the constantly-divisionless algorithm, because then it becomes never-divisionless. Rust has a much richer notion of compile-time constants than C, so it's possible to write a method like the following, which can't be misused:

    pub fn upto<const LIMIT: u32>(&mut self) -> u32 {
        let reject = LIMIT.wrapping_neg().wrapping_rem(LIMIT);
        loop {
            let (lo, hi) = self.get_u32().embiggening_mul(LIMIT);
            if lo < reject {
                continue;
            } else {
                return hi;
            }
        }
    }

    assert!(rng.upto::<42>() < 42);

(embiggening_mul is my stable replacement for the unstable widening_mul API.)

This is a nugatory optimization, but there are more interesting cases where it makes sense to choose a different implementation for constant or variable arguments – that is, the constant case isn't simply a constant-folded or partially-evaluated version of the variable case.
Regular expressions might be lex-style or pcre-style, for example. It’s a curious question of language design whether it should be possible to write a library that provides a uniform API that automatically chooses constant or variable implementations, or whether the user of the library must make the choice explicit. Maybe I should learn some Zig to see how its comptime works.
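For the C side, the post leans on GNU C's __builtin_constant_p() to pick between the two algorithms. As a minimal sketch of that dispatch (not the author's actual code), assuming the run-time nearly-divisionless function from last year's post is named pcg32_rand(), the selection could be wrapped in a macro like this:

    /* Hypothetical dispatch macro: use the constantly-divisionless version
       when the compiler can prove the limit is a compile-time constant,
       otherwise fall back to the run-time nearly-divisionless version.
       pcg32_rand() is an assumed name; pcg32_rand_const() is shown above. */
    #define pcg32_rand_upto(rng, limit)          \
        (__builtin_constant_p(limit)             \
            ? pcg32_rand_const((rng), (limit))   \
            : pcg32_rand((rng), (limit)))

    uint32_t roll = pcg32_rand_upto(&rng, 6);    /* constant limit: loop folds away */

When the limit is a literal constant, the condition folds at compile time and the unused arm disappears, so the wrapper itself adds no run-time cost.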

yesterday 4 votes