More from David Heinemeier Hansson
Back in the mid 90s, I had a friend who was really into raytracing, but needed to nurture his hobby on a budget. So instead of getting a top-of-the-line Intel Pentium machine, he bought two AMD K5 boxes, and got a faster rendering flow for less money. All I cared about in the 90s was gaming, though, and for that, Intel was king, so to me, AMD wasn't even a consideration.

And that's how it stayed for the better part of the next three decades. AMD would put out budget parts that might make economic sense in narrow niches, but Intel kept taking all the big trophies in gaming, in productivity, and on the server. As late as the end of the 2010s, we were still buying Intel for our servers at 37signals, even though AMD was getting more competitive, and the price-watt-performance equation was beginning to tilt in their favor.

By the early 2020s, though, AMD had caught up on the server, and we haven't bought Intel since. The AMD EPYC line of chips is simply superior to anything Intel offers in our price/performance window. Today, the bulk of our new fleet runs on dual EPYC 9454s for a total of 96 cores(!) per machine. They're awesome.

It's been the same story on the desktop and laptop for me. After switching to Linux last year, I've been all in on AMD. My beloved Framework 13 is rocking an AMD 7640U, and my desktop machine runs on an AMD 7950X. Oh, and my oldest son just got a new gaming PC with an AMD 9900X, and my middle son has an AMD 8945HS in his gaming laptop. It's all AMD in everything!

So why is this? Well, clearly the clever crew at AMD is putting out some great CPU designs lately with Lisa Su in charge. I'm particularly jazzed about the upcoming Framework desktop, which runs the latest Max+ 395 chip, and can apportion up to 110GB of memory as VRAM (great for local AI!). This beast punches a multi-core score that's on par with that of an M4 Pro, and it's no longer that far behind in single-core either.

But all the glory doesn't just go to AMD; it's just as much a triumph of TSMC, the Taiwan Semiconductor Manufacturing Company. They're the world leader in advanced chip making, and key to the story of how Apple was able to leapfrog the industry with the M-series chips back in 2020. Apple has long been the top customer for TSMC, so they've been able to reserve capacity on the latest manufacturing processes (called "nodes"), and as a result had a solid lead over everyone else for a while.

But that lead is evaporating fast. That new Max+ 395 is showing that AMD has nearly caught up in terms of raw grunt, and the efficiency is no longer a million miles away either. This is again largely because AMD has been able to benefit from the same TSMC-powered progress that's also propelling Apple.

But you know who it's not propelling? Intel. They're still trying to get their own chip-making processes to perform competitively, but so far it looks like they're just falling further and further behind. The latest Intel boards are more expensive and run slower than the competition from Apple, AMD, and Qualcomm. And there appears to be no easy fix around the corner to sort it all out.

TSMC really is lifting all the boats behind its innovation locks. Qualcomm, just like AMD, has nearly caught up to Apple with their latest chips. The 8 Elite unit in my new Samsung S25 is faster than the A18 Pro in the iPhone 16 Pro in multi-core tests, and very close in single-core. It's also just as efficient now.
This is obviously great for Android users, who for a long time had to suffer the indignity of truly atrocious CPU performance compared to the iPhone. It was so bad for a while that we had to program our web apps differently for Android, because they simply didn't have the power to run JavaScript fast enough! But that's all history now.

But as much as I now cheer for Qualcomm's chips, I'm even more chuffed about the fact that AMD is on a roll. I spend far more time in front of my desktop than I do any other computer, and after dumping Apple, it's a delight to see that the M-series advantage is shrinking to irrelevance fast. There's of course still the software reason why someone would pick Apple, and they continue to make solid hardware, but the CPU playing field is now being leveled.

This is obviously a good thing if you're a fan of Linux, like me. Framework in particular has established a credible alternative to the sleek, unibody, but ultimately disposable nature of the reigning MacBook laptops. By focusing on repairability, upgradeability, and superior keyboards, we finally have an alternative for developer laptops that doesn't just feel like a cheap copy of a MacBook. And thanks to AMD pushing the envelope, these machines are rapidly closing the remaining gaps in performance and efficiency.

And oh how satisfying it must be to sit as CEO of AMD now. The company was founded just one year after Intel, back in 1969, but for its entire existence, it's lived in the shadow of its older brother. Now, thanks to TSMC, great leadership from Lisa Su, and a crack team of chip designers, they're finally reaping the rewards. That is one hell of a journey to victory!

So three cheers for AMD! A tip of the hat to TSMC. And what a gift to developers and computer enthusiasts everywhere that Apple once more has some stiff competition in the chip space.
One of the key roles The New York Times plays in American society is as guardian of the liberal Overton window. Its editorial line sets the terms for what's permissible to discuss in polite circles on the center left, whether it's covid mask efficacy, trans kids, or, now, mass immigration. When The New York Times allows the counterargument to liberal orthodoxy to be published, it signals to its readers that it's time to pivot.

On mass immigration, the center-left liberal orthodoxy has, for the last decade in particular, been that this is an unreserved good. It's cultural enrichment! It's much-needed workers! It's a humanitarian imperative! Any opposition was treated as de facto racism, and the idea that a country would enforce its own borders as evidence of early fascism. But that era is coming to a close, and The New York Times is using The Danish Permission to prepare its readers for the end.

As I've often argued, Denmark is an incredibly effective case study in such arguments, because it's commonly thought of as the holy land of progressivism. Free college, free health care, amazing public transit, obsessive about bikes, and a solid social safety net. It's basically everything people on the center left ever thought they wanted from government. In theory, at least. In practice, all these government-funded benefits come with a host of trade-offs that many upper middle-class Americans (the primary demographic for The New York Times) would find difficult to swallow. But I've covered that in detail in The reality of the Danish fairytale, so I won't repeat that here.

Instead, let's focus on the fact that The New York Times is now begrudgingly admitting that the main reason Europe has turned to the right, in election after election recently, is due to the problems stemming from mass immigration across the continent and the channel. For example, here's a bit about immigrant crime being higher:

Crime and welfare were also flashpoints: Crime rates were substantially higher among immigrants than among native Danes, and employment rates were much lower, government data showed.

It wasn't long ago that recognizing higher crime rates among MENAPT immigrants to Europe was seen as a racist dog whistle. And every excuse imaginable was leveled at the undeniable statistics showing that immigrants from countries like Tunisia, Lebanon, and Somalia are committing violent crime at rates 7-9 times higher than ethnic Danes (and that these statistics are essentially the same in Norway and Finland too).

Or how about this one, recognizing that many immigrants from certain regions were loafing on the welfare state in ways that really irked the natives:

One source of frustration was the fact that unemployed immigrants sometimes received resettlement payments that made their welfare benefits larger than those of unemployed Danes.

Or the explicit acceptance that a strong social welfare state requires a homogeneous culture in order to sustain the trust needed for its support:

Academic research has documented that societies with more immigration tend to have lower levels of social trust and less generous government benefits. Many social scientists believe this relationship is one reason that the United States, which accepted large numbers of immigrants long before Europe did, has a weaker safety net. A 2006 headline in the British publication The Economist tartly summarized the conclusion from this research as, “Diversity or the welfare state: Choose one.”

Diversity or welfare!
That again would have been an absolutely explosive claim to make not all that long ago. Finally, there's the acceptance that cultural incompatibility, such as on the role of women in society, is indeed a problem:

Gender dynamics became a flash point: Danes see themselves as pioneers for equality, while many new arrivals came from traditional Muslim societies where women often did not work outside the home and girls could not always decide when and whom to marry.

It took a while, but The New York Times is now recognizing that immigrants from some regions really do commit vastly more violent crime, are net-negative contributors to the state budgets (by drawing benefits at higher rates and being unemployed more often), and that together with the cultural incompatibilities, they end up undermining public trust in the shared social safety net. The consequence of this admission is dawning not only on The New York Times, but also on other liberal entities around Europe:

Tellingly, the response in Sweden and Germany has also shifted... Today many Swedes look enviously at their neighbor. The foreign-born population in Sweden has soared, and the country is struggling to integrate recent arrivals into society. Sweden now has the highest rate of gun homicides in the European Union, with immigrants committing a disproportionate share of gun violence. After an outburst of gang violence in 2023, Ulf Kristersson, the center-right prime minister, gave a televised address in which he blamed “irresponsible immigration policy” and “political naïveté.” Sweden’s center-left party has likewise turned more restrictionist.

All these arguments are in service of the article's primary thesis: To win back power, the left, in Europe and America, must pivot on mass immigration, like the Danes did. Because only by doing so will they be able to counter the threat of "the far right".

The piece does a reasonable job accounting for the history of this evolution in Danish politics, except for the fact that it leaves out the main protagonist. The entire account is written from the self-serving perspective of the Danish Social Democrats, and it shows. It tells a tale of how it was actually Social Democrat mayors who first spotted the problems, and, well, it just took a while for the top of the party to correct. Bullshit.

The real reason the Danes took this turn is that "the far right" won in Denmark, and The Danish People's Party deserves the lion's share of the credit. They started in 1995, quickly set the agenda on mass immigration, and by 2015, they were the second largest party in the Danish parliament.

Does that story ring familiar? It should. Because it's basically what's been happening in Sweden, France, Germany, and the UK lately. The mainstream parties have ignored the grave concerns about mass immigration from their electorates, and only when "the far right" surged as a result did the center-left and center-right parties grow interested in changing course.

Now on some level, this is just democracy at work. But it's also hilarious that this process, where voters choose parties that champion the causes they care about, has been labeled The Grave Threat to Democracy in recent years. Whether it's Trump, Le Pen, Weidel, or Kjærsgaard, they've all been met with contempt or worse for channeling legitimate voter concerns about immigration.

I think this is the point that's sinking in at The New York Times. Opposition to mass immigration and multi-culturalism in Europe isn't likely to go away.
The mayhem that's swallowing Sweden is a reality too obvious to ignore. And as long as the center left keeps refusing to engage with the topic honestly, and instead hides behind some anti-democratic firewall, they're going to continue to lose terrain. Again, this is how democracies are supposed to work! If your political class is out of step with the mood of the populace, they're supposed to lose. And this is what's broadly happening now. And I think that's why we're getting this New York Times pivot. Because losing sucks, and if you're on the center left, you'd like to see that end.
One of the biggest mistakes new startup founders make is trying to get away from the customer-facing roles too early. Whether it's customer support or sales, it's an incredible advantage to have the founders doing that work directly, and for much longer than they find comfortable.

The absolute worst thing you can do is hire a sales person or a customer service agent too early. You'll miss all the golden nuggets that customers throw at you for free when they're rejecting your pitch or complaining about the product. Seeing these reasons paraphrased or summarized destroys all the nutrients in the insight. You want that whole-grain feedback straight from the customer's mouth!

When we launched Basecamp in 2004, Jason was doing all the customer service himself. And he kept doing it like that for three years!! By the time we hired our first customer service agent, Jason was doing 150 emails/day, and the business was doing millions of dollars in ARR. And Basecamp got infinitely better, both as a market proposition and as a product, because Jason could funnel all that feedback into decisions and positioning.

For a long time after that, we did "Everyone on Support", frequently rotating programmers, designers, and founders through a day of answering emails directly to customers. The dividends of doing this were almost as high as having Jason run it all in the early years. We fixed an incredible number of minor niggles and annoying bugs, because programmers found it easier to solve the problem than to apologize for why it was there.

It's not easy doing this! Customers often offer their valuable insights wrapped in rude language, unreasonable demands, and bad suggestions. That's why many founders quit the business of dealing with them at the first opportunity. That's why few companies ever do "Everyone on Support". That's why there's such eagerness to reduce support to an AI-only interaction.

But quitting dealing with customers early, not just in support but also in sales, is an incredible handicap for any startup. You don't have to do everything every customer demands of you, but you should certainly listen to them. And you can't listen well if the sound is being muffled by early layers of indirection.
Most of our cultural virtues, celebrated heroes, and catchy slogans align with the idea of "never give up". That's a good default! Most people are inclined to give up too easily, as soon as the going gets hard. But it's also worth remembering that sometimes you really should fold, admit defeat, and accept that your plan didn't work out.

But how to distinguish between a bad plan and insufficient effort? It's not easy. Plenty of plans look foolish at first glance, especially to people without skin in the game. That's the essence of a disruptive startup: the idea ought to look a bit daft at first glance, or it probably doesn't carry the counter-intuitive kernel needed to really pop.

Yet it's also obviously true that not every daft idea holds the potential to be a disruptive startup. That's why even the best venture capital investors in the world are wrong far more often than they're right. Not because they aren't smart, but because nobody is smart enough to predict (the disruption of) the future consistently. The best they can do is make long bets, and then hope enough of them pay off to fund the ones that don't.

So far, so logical, so conventional. A million words have been written by a million VCs about how their shrewd eyes let them see those hidden disruptive kernels before anyone else could. Good for them. What I'm more interested in knowing is how and when you pivot from a promising bet to folding your hand. When do you accept that no amount of additional effort is going to get that turkey to soar?

I'm asking because I don't have any great heuristics here, and I'd really like to know! Because the ability to fold your hand, and live to play your remaining chips another day, isn't just about startups. It's also about individual projects. It's about work methods. Hell, it's even about politics and societies at large.

I'll give you just one small example. In 2017, Rails 5.1 shipped with new tooling for doing end-to-end system tests, using a headless browser to validate functionality, as a user would in their own browser. Since then, we've spent an enormous amount of time and effort trying to make this approach work. Far too much time, if you ask me now. This year, we finalized our decision to fold, and gave up on using these types of system tests at the scale we had previously thought made sense. In fact, just last week, we deleted 5,000 lines of code from the Basecamp code base by dropping literally all the system tests that we had carried so diligently for all these years.

I really like this example, because it draws parallels to investing and entrepreneurship so well. The problem with our approach to system tests wasn't that it didn't work at all. If that had been the case, bailing on the approach would have been a no-brainer long ago. The trouble was that it sorta-kinda did work! Some of the time. With great effort. But ultimately, the juice wasn't worth the squeeze.

I've seen this trap snap shut on startups time and again. The idea finds some traction. Enough for the founders to muddle through for years and years. Stuck with an idea that sorta-kinda does work, but not well enough to be worth a decade of their life. That's a tragic trap.

The only antidote I've found to this on the development side is time boxing. Programmers are just as liable as anyone to believe a flawed design can work if given just a bit more time. And then a bit more. And then just double of what we've already spent. The time box provides a hard stop. In Shape Up, it's six weeks. Do or die. Ship or don't. That works.
But what's the right amount of time to give a startup or a methodology or a societal policy? There's obviously no universal answer, but I'd argue that whatever the answer, it's "less than you think, less than you want". Having the grit to stick with the effort when the going gets hard is a key trait of successful people. But having the humility to give up on good bets turned bad might be just as important.
More in programming
Writing high-quality developer documentation is a challenging task. This is my personal approach to crafting holistic, comprehensive documentation.
Salutations, populations. Today’s note is more of a work-in-progress than usual; I have been finally starting to look at getting Whippet into Guile, and there are some open questions.

I started by taking a look at how Guile uses the Boehm-Demers-Weiser collector’s API, to make sure I had all my bases covered for an eventual switch to something that was not BDW. I think I have a good overview now, and have divided the parts of BDW-GC used by Guile into seven categories.

Firstly there are the ways in which Guile’s run-time and compiler depend on BDW-GC’s behavior, without actually using BDW-GC’s API. By this I mean principally that we assume that any reference to a GC-managed object from any thread’s stack will keep that object alive. The same goes for references originating in global variables, or static data segments more generally. Additionally, we rely on GC objects not to move: references to GC-managed objects in registers or stacks are valid across a GC boundary, even if those references are outside the GC-traced graph: all objects are pinned.

Some of these “uses” are internal to Guile’s implementation itself, and thus amenable to being changed, albeit with some effort. However some escape into the wild via Guile’s API, or, as in this case, as implicit behaviors; these are hard to change or evolve, which is why I am putting my hopes on Whippet’s mostly-marking collector, which allows for conservative roots.

Then there are the uses of BDW-GC’s API, not to accomplish a task, but to protect the mutator from the collector: GC_call_with_alloc_lock, explicitly enabling or disabling GC, calls to sigmask that take BDW-GC’s use of POSIX signals into account, and so on. BDW-GC can stop any thread at any time, between any two instructions; for most users this is anodyne, but if ever you use weak references, things start to get really gnarly.

Of course a new collector would have its own constraints, but switching to cooperative instead of pre-emptive safepoints would be a welcome relief from this mess. On the other hand, we will require client code to explicitly mark their threads as inactive during calls in more cases, to ensure that all threads can promptly reach safepoints at all times. Swings and roundabouts?

Did you know that the Boehm collector allows for precise tracing? It does! It’s slow and truly gnarly, but when you need precision, precise tracing is nice to have. (This is the GC_new_kind interface.) Guile uses it to mark Scheme stacks, allowing it to avoid treating unboxed locals as roots. When it loads compiled files, Guile also adds some slices of the mapped files to the root set. These interfaces will need to change a bit in a switch to Whippet but are ultimately internal, so that’s fine.

What is not fine is that Guile allows C users to hook into precise tracing, notably via scm_smob_set_mark. This is not only the wrong interface, not allowing for copying collection, but these functions are just truly gnarly. I don’t know what to do with them yet; are our external users ready to forgo this interface entirely? We have been working on them over time, but I am not sure.

Weak references, weak maps of various kinds: the implementation of these in terms of BDW’s API is incredibly gnarly and ultimately unsatisfying. We will be able to replace all of these with ephemerons and tables of ephemerons, which are natively supported by Whippet. The same goes with finalizers.
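To make that weak-reference gnarliness concrete, here is a minimal sketch of how a single weak pointer is conventionally built on BDW-GC's disappearing-link machinery; the helper names are invented for illustration, while the GC_* calls are the real BDW-GC API:

```c
#include <gc.h>
#include <stdio.h>

/* The weak "link" stores a hidden (bit-inverted) pointer, so that the
   conservative scan does not see it as a strong reference. */
static GC_hidden_pointer weak_link;

/* Reads must happen with the allocation lock held, so the collector
   cannot clear the link between the null-check and the reveal. */
static void *deref_weak(void *link) {
    GC_hidden_pointer h = *(GC_hidden_pointer *)link;
    return h ? GC_REVEAL_POINTER(h) : NULL;
}

int main(void) {
    GC_INIT();

    void *obj = GC_malloc(64);            /* an ordinary traced object */
    weak_link = GC_HIDE_POINTER(obj);     /* disguise the reference    */
    GC_general_register_disappearing_link((void **)&weak_link, obj);

    obj = NULL;   /* drop the strong reference; conservative stack and
                     register scanning may still keep the object alive
                     for a while, which is part of the gnarliness */
    GC_gcollect();

    void *p = GC_call_with_alloc_lock(deref_weak, &weak_link);
    printf("weak reference is %s\n", p ? "still live" : "cleared");
    return 0;
}
```

Note the double dance: the pointer must be hidden from the conservative scan so the object can die at all, and every read must take the allocation lock to avoid racing the collector. Ephemerons make both steps unnecessary.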
The same goes for constructs built on top of finalizers, such as guardians; we’ll get to reimplement these on top of nice Whippet-supplied primitives. Whippet allows for resuscitation of finalized objects, so all is good here.

There is a long list of miscellanea: the interfaces to explicitly trigger GC, to get statistics, to control the number of marker threads, to initialize the GC; these will change, but all uses are internal, making it not a terribly big deal.

I should mention one API concern, which is that BDW’s state is all implicit. For example, when you go to allocate, you don’t pass the API a handle which you have obtained for your thread, and which might hold some thread-local freelists; BDW will instead load thread-local variables in its API. That’s not as efficient as it could be and Whippet goes the explicit route, so there is some additional plumbing to do.

Finally I should mention the truly miscellaneous BDW-GC function: GC_free. Guile exposes it via an API, scm_gc_free. It was already vestigial and we should just remove it, as it has no sensible semantics or implementation.

That brings me to what I wanted to write about today, but am going to have to finish tomorrow: the actual allocation routines. BDW-GC provides two, essentially: GC_malloc and GC_malloc_atomic. The difference is that “atomic” allocations don’t refer to other GC-managed objects, and as such are well-suited to raw data. Otherwise you can think of atomic allocations as a pure optimization, given that BDW-GC mostly traces conservatively anyway.

From the perspective of a user of BDW-GC looking to switch away, there are two broad categories of allocations, tagged and untagged. Tagged objects have attached metadata bits allowing their type to be inspected by the user later on. This is the happy path! We’ll be able to write a gc_trace_object function that takes any object, does a switch on, say, some bits in the first word, dispatching to type-specific tracing code. As long as the object is sufficiently initialized by the time the next safepoint comes around, we’re good, and given cooperative safepoints, the compiler should be able to ensure this invariant.

Then there are untagged allocations. Generally speaking, these are of two kinds: temporary and auxiliary. An example of a temporary allocation would be growable storage used by a C run-time routine, perhaps as an unbounded-sized alternative to alloca. Guile uses these a fair amount, as they compose well with non-local control flow as occurring for example in exception handling.

An auxiliary allocation on the other hand might be a data structure only referred to by the internals of a tagged object, but which itself never escapes to Scheme, so you never need to inquire about its type; it’s convenient to have the lifetimes of these values managed by the GC, and when desired to have the GC automatically trace their contents. Some of these should just be folded into the allocations of the tagged objects themselves, to avoid pointer-chasing. Others are harder to change, notably for mutable objects. And the trouble is that for external users of scm_gc_malloc, I fear that we won’t be able to migrate them over, as we don’t know whether they are making tagged mallocs or not.

One conventional way to handle untagged allocations is to manage to fit your data into other tagged data structures; V8 does this in many places with instances of FixedArray, for example, and Guile should do more of this. Otherwise, you make new tagged data types, as in the sketch below.
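As a sketch of what tag-dispatched tracing could look like, here is a simplified, hypothetical version; Whippet's actual gc_trace_object interface differs in its details (edge and heap types, size reporting), and all of the type names below are invented:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical object layouts: the type tag lives in the low bits of
   the first word, as is conventional for tagged heap objects. */
enum tag { TAG_PAIR = 0, TAG_STRING = 1, TAG_VECTOR = 2 };

struct object { uintptr_t first_word; };
struct pair   { struct object hdr; struct object *car, *cdr; };
struct vector { struct object hdr; size_t len; struct object *slots[]; };

static enum tag tag_of(struct object *o) {
    return (enum tag)(o->first_word & 0x7);  /* say, the low three bits */
}

/* The collector supplies visit_edge; it marks (or moves) the referent
   and, for a copying collector, updates the edge in place. */
void gc_trace_object(struct object *o,
                     void (*visit_edge)(struct object **edge, void *data),
                     void *data) {
    switch (tag_of(o)) {
    case TAG_PAIR: {
        struct pair *p = (struct pair *)o;
        visit_edge(&p->car, data);
        visit_edge(&p->cdr, data);
        break;
    }
    case TAG_STRING:
        /* raw bytes: no outgoing edges to visit */
        break;
    case TAG_VECTOR: {
        struct vector *v = (struct vector *)o;
        for (size_t i = 0; i < v->len; i++)
            visit_edge(&v->slots[i], data);
        break;
    }
    }
}
```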
In either case, all auxiliary data should be tagged. I think there may be an alternative, which would be just to support the equivalent of untagged GC_malloc and GC_malloc_atomic; but for that, I am out of time today, so type at y’all tomorrow. Happy hacking!
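For readers who have not used BDW-GC directly, a minimal sketch of the two allocation entry points mentioned above (the pair and string shapes are invented for illustration; GC_malloc and GC_malloc_atomic are the real API):

```c
#include <gc.h>
#include <string.h>

/* A pair refers to other GC-managed objects, so it needs GC_malloc,
   whose contents the collector scans for pointers. */
struct pair { void *car; void *cdr; };

int main(void) {
    GC_INIT();

    struct pair *p = GC_malloc(sizeof *p);  /* traced, zero-initialized */

    /* A string payload is raw data with no embedded pointers, so
       GC_malloc_atomic applies: the collector skips scanning it, which
       is faster and avoids retention from false pointer sightings.
       Note that atomic memory is NOT zeroed. */
    char *s = GC_malloc_atomic(16);
    strcpy(s, "hello, world");

    p->car = s;
    p->cdr = NULL;
    return 0;
}
```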
The earliest work with selling things online was all about reaching a shopping public ready to log on and start buying. But along the way, they found a whole new audience for shopping, which changed the way we think about commerce on the web. The post Expanding Access: The History of Ecommerce Part 1 appeared first on The History of the Web.
Why? Well: The IndexedDB API is callback-based. With JavaScript being single-threaded, a blocking API would mean fully blocking the page, render and basic user interaction included, while the request is being processed. Although this is apparently good enough for JSON.parse(), the W3C decided to make the IndexedDB API non-blocking. The first drafts for IndexedDB are from … The post IndexedDB is Weird appeared first on Quentin Santos.