I love writing and sharing code as open source, but it's not an abstract act of pure altruism. The first recipients of these programming gifts are almost always myself and my company. It's an intentionally selfish drive first, then a broader benefit second. But, ironically, this is what's made my participation in the gift exchange of open source sustainable for twenty years and counting. Putting my own mask on first, so I can keep helping others.

Contrast this perspective with the resentment I occasionally see from open source developers who feel shortchanged by the exchange. People who get sucked in by the ideal of pure altruism and pursue it, but are then disappointed by what they perceive as a lack of reciprocity after a while. This has always seemed like a contradiction in terms to me: to expect specific forms of reciprocity from acts of altruism. There's absolutely nothing wrong with developing software on commercial terms, and expecting to be paid for your work. I've built a...
a year ago


More from David Heinemeier Hansson

Don't make Google sell Chrome

The web will be far worse off if Google is forced to sell Chrome, even if it's to atone for legitimate ad-market monopoly abuses. Which means we'll all be worse off, as the web would lose ground to actual monopoly platforms, like the iOS App Store and Google's own Play Store.

First, Chrome won the browser war fair and square by building a better surfboard for the internet. This wasn't some opportune acquisition. This was the result of grand investments, great technical prowess, and markets doing what they're supposed to do: rewarding the best. Besides, we have a million alternatives. Firefox still exists, so does Safari, and so do the billion Chromium-based browsers like Brave and Edge. And we finally even have new engines on the way with the Ladybird browser.

Look, Google's trillion-dollar business depends on a thriving web that can be searched by Google.com, that can be plastered in AdSense, and that can now feed the wisdom of AI. Thus, Google's incredible work to further the web isn't an act of charity, it's one of economic self-interest, and that's why it works. Capitalism doesn't run on benevolence, but on incentives.

We want an 800-pound gorilla in the web's corner! Because Apple would love nothing better (despite the admirable work by Team Safari to keep up with Chrome) than to see the web's capacity as an application platform diminished. As would every other owner of a proprietary application platform. Microsoft fought the web tooth and nail back in the 90s because they knew that a free, open application platform would undermine lock-in — and it did! But the vitality of that free and open application platform depends on constant development. If the web stagnates, other platforms will gain. Team Chrome pushing the web forward in a million ways — be it import maps, nested CSS, web push, etc. — is therefore essential.

This is a classic wealth vs. riches mistake. Lawyers see Chrome as valuable in a moment's snapshot, but the value is all in the wealth that continued investment brings. A Chrome left to languish with half the investment will evaporate as quickly as a lottery winner's riches. Wealth requires maintenance to endure. Google should not get away with rigging the online ad market, but forcing it to sell Chrome will do great damage to the web.

a week ago 1 vote
We'll always need junior programmers

We received over 2,200 applications for our just-closed junior programmer opening, and now we're going through all of them by hand and by human. No AI screening here. It's a lot of work, but we have a great team who take the work seriously, so in a few weeks, we'll be able to invite a group of finalists to the next phase.

This highlights the folly of thinking that what it'll take to land a job like this is some specific list of criteria, though. Yes, you have to present a baseline of relevant markers to even get into consideration, like a great cover letter that doesn't smell like AI slop, promising projects or work experience or educational background, etc. But to actually get the job, you have to be the best of the ones who've applied! It sounds self-evident, maybe, but I see questions about it time and again, so it must not be. Almost every job opening is grading applicants on the curve of everyone who has applied. And the best candidate of the lot gets the job. You can't quantify what that looks like in advance.

I'm excited to see who makes it to the final stage. I already hear early whispers that we got some exceptional applicants in this round. It would be great to help counter the narrative that this industry no longer needs juniors. That's simply wrong. However good AI gets, we're always going to need people who know the ins and outs of what the machine comes up with. Maybe not as many, maybe not in the same roles, but it's truly utopian thinking that mankind won't need people capable of vetting the work done by AI in five minutes.

2 weeks ago 22 votes
The new Framework 13 HX370

The new AMD HX370 option in the Framework 13 is a good step forward in performance for developers. It runs our HEY test suite in 2m7s, compared to 2m43s for the 7840U (and 2m49s for an M4 Pro!). It's also about 20% faster than the 7840U in most single-core tasks.

But is that enough to warrant the jump in price? AMD's latest, best chips have suddenly gotten pretty expensive. The F13 w/ HX370 now costs $1,992 with 32GB RAM / 1TB. Almost the same as an M4 Pro MBP14 w/ 24GB / 1TB ($2,199). I'd pick the Framework any day for its better keyboard, 3:2 matte screen, repairability, and superb Linux compatibility, but it won't be because the top option is "cheaper" any more.

Of course, you could also just go with the budget 6-core Ryzen AI 5 340 in the same spec for $1,362. I'm sure that's a great machine too. But maybe the sweet spot is actually the Ryzen AI 7 350. It "only" has 8 cores (vs 12 on the 370), but four of those are performance cores -- the same as the 370. And it's $300 cheaper. So ~$1,600 gets you out the door. I haven't actually tried the 350, though, so that's just speculation. I've been running the 370 for the last few months.

Whichever chip you choose, the rest of the Framework 13 package is as good as it ever was. This remains my favorite laptop of at least the last decade. I've been running one for over a year now, and combined with Omakub + Neovim, it's the first machine in forever where I've actually enjoyed programming on a 13" screen. The 3:2 aspect ratio, combined with Linux's superb multiple desktops that switch with 0ms lag and no animations, means I barely miss the trusted 6K Apple XDR screen when working away from the desk.

The HX370 gives me about 6 hours of battery life in mixed use. About the same as the old 7840U. Though if all I'm doing is writing, I can squeeze that to 8-10 hours. That's good enough for me, but not as good as a Qualcomm machine or an Apple M-chip machine. For some people, those extra hours really make the difference.

What does make a difference, of course, is Linux. I've written repeatedly about how much of a joy it's been to rediscover Linux on the desktop, and it's a joy that keeps on giving. For web work, it's so good. And for any work that requires even a minimum of Docker, it's so fast (as the HEY suite run time attests). Apple still has a strong hardware game, but their software story is falling apart. I haven't heard many people sing the praises of new iOS or macOS releases in a long while. It seems like without an asshole in charge, both have moved towards more bloat, more ads, more gimmicks, more control. Linux is an incredible antidote to this nonsense these days. It's also just fun!

Seeing AMD catch up in outright performance, if not efficiency, has been a delight. Watching Framework perfect their 13" laptop while remaining 100% backwards compatible in terms of upgrades with the first versions is heartwarming. And getting to test the new Framework Desktop in advance of its Q3 release has only affirmed my commitment to both.

With the new HX370, it's in my opinion the best Linux laptop you can buy today, which by extension makes it the best web developer laptop too. The top spec might have gotten a bit pricey, but there are options all along the budget spectrum, which retain all the key ingredients anyway. Hard to go wrong. Forza Framework!

3 weeks ago 14 votes
Normal boyhood is ADHD

Nearly a quarter of seventeen-year-old boys in America have an ADHD diagnosis. That's crazy. But worse than the diagnosis is that the majority of them end up on amphetamines, like Adderall or Ritalin. These drugs allow teenage boys especially (diagnosed at 2-3x the rate of girls) to do what their mind would otherwise resist: study subjects they find boring for long stretches of time. Hurray?

Except it doesn't even work. Because taking Adderall or Ritalin doesn't actually help you learn more, it merely makes trying tolerable. The kids might feel like the drugs are helping, but the test scores say they're not. It's Dunning-Kruger — the phenomenon where low-competence individuals overestimate their abilities — in a pill. Furthermore, even this perceived improvement is short-term. The sudden "miraculous" ability to sit still and focus on boring school work wanes in less than a year on the drugs. In three years, pill poppers are doing no better than those who didn't take amphetamines at all.

These are all facts presented in a blockbuster story in The New York Times Magazine entitled Have We Been Thinking About A.D.H.D. All Wrong?, which unpacks all the latest research on ADHD. It's depressing reading. Not least because the definition of ADHD is so subjective and situational. The NYTM piece is full of anecdotes from kids with an ADHD diagnosis whose symptoms disappeared when they stopped pursuing a school path out of step with their temperament. And just look at these ADHD markers from the DSM-5:

Inattention
- Difficulty staying focused on tasks or play.
- Frequently losing things needed for tasks (e.g., toys, school supplies).
- Easily distracted by unrelated stimuli.
- Forgetting daily activities or instructions.
- Trouble organizing tasks or completing schoolwork.
- Avoiding or disliking tasks requiring sustained mental effort.

Hyperactivity
- Fidgeting, squirming, or inability to stay seated.
- Running or climbing in inappropriate situations.
- Excessive talking or inability to play quietly.
- Acting as if “driven by a motor,” always on the go.

Impulsivity
- Blurting out answers before questions are completed.
- Trouble waiting for their turn.
- Interrupting others’ conversations or games.

The majority of these so-called symptoms are what I'd classify as "normal boyhood". I certainly could have checked off a bunch of them, and you only need six of them over six months for an official ADHD diagnosis. No wonder a quarter of those seventeen-year-old boys in America qualify!

Borrowing from Erich Fromm’s The Sane Society, I think we're looking at a pathology of normalcy, where healthy boys are defined as those who can sit still, focus on studies, and suppress kinetic energy. Boys with low intensity and low energy. What a screwy ideal to chase for all. This is all downstream from an obsession with getting as many kids through as much safety-obsessed schooling as possible. While the world still needs electricians, carpenters, welders, soldiers, and a million other occupations that exist outside the narrow educational ideal of today.

Now I'm sure there is a small number of really difficult cases where even the short-term break from severe symptoms that amphetamines can provide is welcome. The NYTM piece quotes the doctor who did one of the most consequential studies on ADHD as thinking that's around 3% — a world apart from the quarter of seventeen-year-olds discussed above. But as ever, there is no free lunch in medicine. Long-term use of amphetamines acts as a growth inhibitor, resulting in kids up to an inch shorter than they otherwise would have been. On top of the awful downs that often follow amphetamine highs. And the loss of interest, humor, and spirit that frequently comes with the treatment too.

This is all eerily similar to what happened in America when a bad study from the 1990s convinced a generation of doctors that opioids actually weren't addictive. By the time they realized the damage, they'd set in motion an overdose and addiction cascade that's presently killing over 100,000 Americans a year. The book Empire of Pain chronicles that tragedy well. Or how about the surge in puberty-blocker prescriptions, which has now been arrested in the UK, following the Cass Review, as well as in Finland, Norway, Sweden, France, and elsewhere.

Doctors are supposed to first do no harm, but they're as liable to be swept up in bad paradigms, social contagions, and ideological echo chambers as the rest of us. And this insane over-diagnosis of ADHD fits that liability to a T.

3 weeks ago 15 votes
Believe it's going to work even though it probably won't

To be a successful founder, you have to believe that what you're working on is going to work — despite knowing it probably won't! That sounds like an oxymoron, but it's really not. Believing that what you're building is going to work is an essential component of coming to work with the energy, fortitude, and determination it's going to require to even have a shot. Knowing it probably won't is accepting the odds of that shot.

It's simply the reality that most things in business don't work out. At least not in the long run. Most businesses fail. If not right away, then eventually. Yet the world economy is full of entrepreneurs who try anyway. Not because they don't know the odds, but because they've chosen to believe they're special.

The best way to balance these opposing points — the conviction that you'll make it work, the knowledge that it probably won't — is to do all your work in a manner that'll make you proud either way. If it doesn't work, you still made something you wouldn't be ashamed to put your name on. And if it does work, you'll beam with pride from making it on the basis of something solid.

The deep regret from trying and failing only truly hits when you look in the mirror and see Dostoevsky staring back at you with this punch to the gut: "Your worst sin is that you have destroyed and betrayed yourself for nothing." Oof.

Believe it's going to work. Build it in a way that makes you proud to sign it. Base your worth as a human on something greater than a business outcome.

4 weeks ago 13 votes

More in programming

Stress And Programming

Having spent four decades as a programmer in various industries and situations, I know that modern software development processes are far more stressful than when I started. It's not simply that developing software today is more complex than it was back in 1981. In that early decade, none

2 hours ago 2 votes
Espressif’s Automatic Reset

In previous articles, we saw how to use “real” UART, and looked into the trick used by Arduino to automatically reset boards when uploading firmware. Today, we’ll look into how Espressif does something similar, using even more tricks.

“Real” UART on the Saola

As usual, let’s first simply connect the UART adapter. Again, we connect …
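For context, here is a minimal sketch of the kind of auto-reset dance these articles describe, assuming the common ESP32 dev-board wiring where the adapter's RTS line drives EN (reset) and DTR drives GPIO0 (boot select); the device path is hypothetical, and the snippet uses pyserial:

    import time
    import serial

    port = serial.Serial("/dev/ttyUSB0", 115200)  # hypothetical device path

    port.dtr = False  # GPIO0 high
    port.rts = True   # EN low: hold the chip in reset
    time.sleep(0.1)
    port.dtr = True   # GPIO0 low: select the serial bootloader
    port.rts = False  # EN high: release reset; the ROM samples GPIO0
    time.sleep(0.05)
    port.dtr = False  # release GPIO0 now that boot mode is latched

This mirrors the "classic reset" sequence flashing tools perform over the control lines, so no one has to press the physical BOOT and RESET buttons by hand.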

3 hours ago 1 vote
How I built a chatbot with my dog

Lessons for AI prompting and retrieval

6 hours ago 1 vote
Write the most clever code you possibly can

I started writing this early last week, but Real Life Stuff happened, and now you're getting the first draft late this week. Warning, unedited thoughts ahead!

New Logic for Programmers release! v0.9 is out! This is a big release, with a new cover design, several rewritten chapters, online code samples and much more. See the full release notes at the changelog page, and get the book here!

Write the cleverest code you possibly can

There are millions of articles online about how programmers should not write "clever" code, and instead write simple, maintainable code that everybody understands. Sometimes the example of "clever" code looks like this (src):

    # Python
    p=n=1
    exec("p*=n*n;n+=1;"*~-int(input()))
    print(p%n)

This is code-golfing, the sport of writing the most concise code possible. Obviously you shouldn't run this in production, for the same reason you shouldn't eat dinner off a Rembrandt. Other times the example looks like this:

    def is_prime(x):
        if x == 1:
            return True
        return all([x%n != 0 for n in range(2, x)])

This is "clever" because it uses a single list comprehension, as opposed to a "simple" for loop. Yes, "list comprehensions are too clever" is something I've read in one of these articles. I've also talked to people who think that datatypes besides lists and hashmaps are too clever to use, that most optimizations are too clever to bother with, and even that functions and classes are too clever and code should be a linear script. [1]

Clever code is anything using features or domain concepts we don't understand. Something that seems unbearably clever to me might be utterly mundane for you, and vice versa. How do we make something utterly mundane? By using it and working at the boundaries of our skills. Almost everything I'm "good at" comes from banging my head against it more than is healthy. That suggests a really good reason to write clever code: it's an excellent form of purposeful practice. Writing clever code forces us to code outside of our comfort zone, developing our skills as software engineers.

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you [will get excellent debugging practice at exactly the right level required to push your skills as a software engineer] — Brian Kernighan, probably

There are other benefits, too, but first let's kill the elephant in the room: [2]

Don't commit clever code

I am proposing writing clever code as a means of practice. Work is a job with coworkers who will not appreciate it if your code is too clever. Similarly, don't use too many innovative technologies. Don't put anything in production you are uncomfortable with. We can still responsibly write clever code at work, though:

- Solve a problem in both a simple and a clever way, and then only commit the simple way (see the sketch just after this list). This works well for small-scale problems where trying the "clever way" only takes a few minutes.
- Write our personal tools cleverly. I'm a big believer in the idea that most programmers would benefit from writing more scripts and support code customized to their particular work environment. This is a great place to practice new techniques, languages, etc.
- If clever code is absolutely the best way to solve a problem, then commit it with extensive documentation explaining how it works and why it's preferable to simpler solutions. Bonus: this potentially helps the whole team upskill.
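Here's a minimal sketch of that "both ways" exercise on an invented toy problem (the problem and the function names are mine, not the newsletter's):

    # Toy problem: collect the words that appear more than once in a list.

    def repeated_words_simple(words):
        # The simple way: count with a plain loop, then filter.
        # This is the version you would actually commit.
        counts = {}
        for word in words:
            counts[word] = counts.get(word, 0) + 1
        return {word for word, n in counts.items() if n > 1}

    def repeated_words_clever(words):
        # The "clever" way: Counter plus a set comprehension, all in
        # one expression. Good practice; stays on your machine.
        from collections import Counter
        return {word for word, n in Counter(words).items() if n > 1}

    assert repeated_words_simple(["a", "b", "a", "c", "b"]) == {"a", "b"}
    assert repeated_words_clever(["a", "b", "a", "c", "b"]) == {"a", "b"}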
Writing clever code...

...teaches simple solutions

Usually, code that's called too clever composes several powerful features together — the "not a single list comprehension or function" people are the exception. Josh Comeau's "don't write clever code" article gives this example of "too clever":

    const extractDataFromResponse = (response) => {
      const [Component, props] = response;
      const resultsEntries = Object.entries({ Component, props });
      const assignIfValueTruthy = (o, [k, v]) => (v ? { ...o, [k]: v } : o);
      return resultsEntries.reduce(assignIfValueTruthy, {});
    }

What makes this "clever"? I count eight language features composed together: entries, argument unpacking, implicit objects, splats, ternaries, higher-order functions, and reductions. Would code that used only one or two of these features still be "clever"? I don't think so. These features exist for a reason, and oftentimes they make code simpler than not using them.

We can, of course, learn these features one at a time. Writing the clever version (but not committing it) gives us practice with all eight at once, and also with how they compose together. That knowledge comes in handy when we want to apply a single one of the ideas. I've recently had to do a bit of pandas for a project. Whenever I have to do a new analysis, I try to write it as a single chain of transformations, and then as a more balanced set of updates (see the sketch further below).

...helps us master concepts

Even if the composite parts of a "clever" solution aren't by themselves useful, it still makes us better at the overall language, and that's inherently valuable. A few years ago I wrote Crimes with Python's Pattern Matching. It involves writing horrible code like this:

    from abc import ABC

    class NotIterable(ABC):
        @classmethod
        def __subclasshook__(cls, C):
            return not hasattr(C, "__iter__")

    def f(x):
        match x:
            case NotIterable():
                print(f"{x} is not iterable")
            case _:
                print(f"{x} is iterable")

    if __name__ == "__main__":
        f(10)
        f("string")
        f([1, 2, 3])

This composes Python match statements, which are broadly useful, and abstract base classes, which are incredibly niche. But even if I never use ABCs in real production code, it helped me understand Python's match semantics and Method Resolution Order better.

...prepares us for necessity

Sometimes the clever way is the only way. Maybe we need something faster than the simplest solution. Maybe we are working with constrained tools or frameworks that demand cleverness. Peter Norvig argued that design patterns compensate for missing language features. I'd argue that cleverness is another means of compensating: if our tools don't have an easy way to do something, we need to find a clever way. You see this a lot in formal methods like TLA+. Need to check a hyperproperty? Cast your state space to a directed graph. Need to compose ten specifications together? Combine refinements with state machines. Most difficult problems have a "clever" solution.

The real problem is that clever solutions have a skill floor. If normal use of the tool is at difficulty 3 out of 10, then basic clever solutions are at 5 out of 10, and it's hard to jump those two steps in the moment you need the cleverness. But if you've practiced writing overly clever code, you're used to working at a 7 out of 10 level in short bursts, and then you can "drop down" to 5/10. I don't know if that makes too much sense, but I see it happen a lot in practice.
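As promised above, a sketch of the pandas "single chain vs. balanced updates" pairing. The DataFrame and column names are invented for illustration, not taken from the newsletter:

    import pandas as pd

    df = pd.DataFrame({
        "city":  ["Oslo", "Oslo", "Bergen", "Bergen"],
        "price": [120, 80, 95, 110],
    })

    # One single chain of transformations: no temporaries, every step
    # is a method call.
    chained = (
        df[df["price"] > 90]
          .assign(price_nok=lambda d: d["price"] * 10)
          .groupby("city", as_index=False)["price_nok"]
          .mean()
          .sort_values("price_nok", ascending=False)
    )

    # The more balanced version: same result, named intermediate steps.
    expensive = df[df["price"] > 90].copy()
    expensive["price_nok"] = expensive["price"] * 10
    by_city = expensive.groupby("city", as_index=False)["price_nok"].mean()
    stepwise = by_city.sort_values("price_nok", ascending=False)

    assert chained.equals(stepwise)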
...builds comradery

On a few occasions, after getting a pull request merged, I pulled the reviewer over and said "check out this horrible way of doing the same thing". I find that as long as people know they're not going to be subjected to a clever solution in production, they enjoy seeing it!

Next week's newsletter will probably also be late; after that we should be back to a regular schedule for the rest of the summer.

[1] Mostly grad students outside of CS who have to write scripts to do research. And more than one data scientist. I think it's correlated with using Jupyter.

[2] If I don't put this at the beginning, I'll get a bajillion responses like "your team will hate you".

yesterday 2 votes
I switched from GMail and nobody died

Whether we like it or not, email is widely used to identify a person. A code sent to email is used as authentication, and sometimes as authorisation, for certain actions. I’m not comfortable with Google having such power over me, especially given that they have practically no support you can appeal to. If your Google account is blocked, that’s it. Maybe you know someone at Google who can help you, but for most of us mortals that’s not an option.

yesterday 2 votes