More from the singularity is nearer
Pulled up at a stop light

Imagine flying an X-wing down a corridor, having to turn the plane sideways to fit, a missile on your tail and closing, hitting the turbo, feeling the g-force, coming up on the end of the corridor, pulling back hard on the stick the second the corridor opens, turning 90 degrees and watching the missile continue straight. Tingles. Adrenaline. Release. Or if you don’t want sci-fi, imagine winter circa 1645 in America. Several of your group almost dead from lack of food, tracking a deer, spotting it, shooting it with your bow, hitting but the deer is trying to run, fast-twitch muscles charging and leaping, plunging a knife into its heart and knowing at that moment everyone is going to be okay. Heart rate calming, lying on the warm deer.

The modern world doesn’t have any real experiences like this anymore. Survival has become a technocratic plod, making the right boring and careful decisions. There are only fake experiences like the above: video games, sports, and drugs. And things like reckless driving, which are just kind of stupid. As we march toward ASI, this will only get worse. What the Unabomber describes as Type 2 experiences, ones where you can achieve results with serious effort, will vanish. All that will be left are things you can have for no effort (like food) and things you can never have (like world peace). Even when humanity goes to Mars, we will be going as cargo.

I was told recently I’m not engaged in my life, and it’s pretty true. Until I see a solution to this problem, even a sketch of a solution, what’s the point? Why sprint if you aren’t sure where you are going? I’m trying my best with comma and tiny corp: how do you make technology itself more accessible, not a fucking packaged product like when the default world talks about making technology more accessible? That’s just hiding complexity. But it’s so hard. Companies don’t work like how I thought they did, they just…exist.
Which I guess in retrospect is obvious: there are no adults in the room. Knowing the future doesn’t help you change it. I am continually shocked at how little people understand about anything; they don’t even understand that they don’t understand. Am I the same way? I try extremely hard to constantly test myself: if my predictions are wrong, it’s clear I don’t understand. If I can’t build it, I don’t understand. I’ll frequently read comments saying I don’t understand, but when I engage with these people they can’t explain what my world model gets wrong. A different meta world model? Or are they just idiots? To anyone who wants to supersede rationality: you’d better understand how to steelman every rationality argument. If I want to succeed, I believe I have to change who I am, and I’m not sure if that’s possible. I believe I’ve been making efforts in that direction, but I haven’t seen results yet.

Working on AI is both the only thing that matters and also so demoralizing because of the above. I believe you have to give individuals control over the technology. And not by setting permissions in AWS that can be revoked; I mean in a nature sense. The Ghost Gunner is the real Second Amendment. This ideology holds me back so much in business, to the point I struggle to be competitive. But if you abandon that ideology, what’s the point of doing it at all? I have to win with a hand tied behind my back. We only make products for spiritual tops, not the majority of the world, which is spiritual bottoms.

Now, perhaps I have an ace in the hole. With the rise of AI, the spiritual bottoms will soon have no cash, because AI is the ultimate spiritual bottom. It would take a highly skilled terrorist to build spiritual-top AI, and even I’m not that crazy. So we’ll only have bottom AI, and it will outcompete all the human bottoms. Advertising will vanish once the hypnodrones have been released. Those humans will likely wirehead themselves out of the picture.
This is the world I’m building for. Have you ever unconstrained your mind and thought about where the world is going? This won’t be like the steam engine replacing the horse, because all horses were bottoms. Ever seen a horse riding a human? Humanity bifurcates. Humans will retain control for the foreseeable future; the only question is, how many humans? If it’s 10, I’m out. If it’s 10k, 50/50 I’m in. If it’s 10M, I’m definitely in. My goal is to make this number as large as possible; it’s my best chance of survival. Give control of the technology to as many people as possible in a deep nature sense, not a permissions sense. In my opinion, this is what Elon gets wrong. Of course, he’s likely to be one of the 10, so maybe that’s why he doesn’t care. But what if he isn’t? Tesla and SpaceX are huge silos begging to be co-opted. I don’t think building silos like this is a good idea; compare the fate of the Telegram founder to that of the Signal founder. Build technology and structures that are inseparable from the narrative you want, as opposed to ones you think you can wield for good. On a long enough timeline, it will always end up in your enemy’s hands. Imagine if the only thing they could do with it furthers your goals.
I’m getting on a plane back to America tonight, been away for over 3 months. It sort of fills me with dread and anxiety. I remember going to the Apple store before leaving; the uhhhhhhh from the sales people was awful. 0 pride. Nobody cares. So different from the sales people at the Hong Kong Apple store. America has had its social fabric torn to shreds. I’ll be back for a month, and I will see if it’s how I remember it, but I’m really not bullish.

Wokism is really just Protestantism evolved; it’s not an aberration. I don’t think a still fundamentally religious US society will fare very well with AI when it becomes clear just how unspecial people are. Sidenote: I’m in 7th place on Advent of Code thanks to AI, and it is progressing so fast. A capability that was unknown to the world a few years ago. This is the only real issue that matters. It will change society more than you can possibly believe. I’m predicting Chinese religion, a “combination of Buddhism and Taoism with a Confucian worldview,” will fare much better with AI. You can already see this in surveys of AI acceptance.

In 2016 it was clear Trump became a kingmaker for the Republican party. While he couldn’t guarantee an election win, he could hand victory to the Democrats if he went against whoever the Republicans nominated. All three Republican nominees since have been Trump. Elon represents a similar force, but it’s easy to imagine him supporting the Democrats next election if things don’t go well this cycle. It’s possible Elon now actually has the complete power to choose the winner. I know I’m an Elon swing voter. While there are things I don’t agree with him on, it’s hard to imagine the clowns in the political establishment offering something remotely compelling against him. If the Democrats want a chance next election cycle, they pick someone like Mark Cuban and get Elon’s support. I predict they won’t. Anti-Elon will not be a tenable political position, and this is a good thing.
Godspeed to those who try. Even if I stay in the country, I’m leaving California. They need to turn around and get pro-Musk people in government, or the mass exodus will continue. Also looking into moving my companies out of Delaware. Dead end.

Now, within an everyone-is-pro-Elon (pro-growth) political framework, there are still choices. Isolationism, tariffs, abortion, infrastructure spending, social safety net, etc. Once we are in that framework, politics can return, and there’s hope for America from a political standpoint. But if being anti-growth remains in the Overton window, there’s little hope. Does anyone think there’s hope for Europe?

Culturally, there’s a far deeper problem. The soul isn’t real, and this will be a very hard pill for many Westerners to swallow. There’s already so little social fabric, and this will only make it worse. In rich Western society, people’s expectations exceed their abilities. AI will pummel this even harder. All the clowns who worked jobs that were detrimental to society for fake money. The money is the map, not the territory! If you pervert a map you don’t change the territory; you are just lost. We’ll see how it is being back, but I’m leaning toward leaving and applying for residency here. Btw, how does the US still tax nonresidents? It will be nice when the empire decays to the point it can no longer do that; the influence of the US on the payment rails of the world needs to go.
ugh the deep state didn’t come for me

I just realized that what gets engagement is so boring. You wish there was a deep state that came for me. Then at least there would be some adults in the room.

“I used to fantasize about being or kissing Skrillex” — the whole album is bangers btw. That’s my third quote from it. And now the blog post.

Western society is predicated on the existence of the individual, and what really is the individual without the soul, or consciousness if you want the secular term? I used to believe in the individual, but I hadn’t really thought about it that much. You are a machine learning algorithm. You have some priors in your DNA. You learn on data, RL style, because your actions affect the next data you see; the dataset depends on the model. There’s no room in this for an I. Control the DNA, control the data, control the outcome. Sad but true.

How much of this graph is people starting to figure this out? (US religious affiliation) Can Christianity survive the death of the soul? Can liberalism survive the death of consciousness? Or will it just cope harder as progress in ML makes this belief more and more ridiculous? There will be everything on a continuous spectrum from logic gates to human level and beyond. Which models are individuals? People used to believe the sun rose because some dude pulled it in a chariot. People still believe they aren’t computers. Like the chariot people, they are just as wrong.
I wrote a tweet about this but deleted it, since it’s a much more nuanced topic than can be discussed there. Nuclear weapons are the Chekhov’s gun on the world stage. When, if ever, are they going to be fired? When should they be? I suspect this is not a question a lot of people give much thought to, since it’s obvious nukes are terrible and we should never use them, right? Mutually assured destruction and all that. But what if you think about this in a long-term historical context? Surely in the past some terrible weapon was created and there was a great moral panic about it, probably something today that we consider quaint.

I suspect that in 100 years nuclear weapons will seem quaint. It’s so easy to imagine weapons that are way more horrible; think drone swarms and bioweapons. Oh that’s cute, it blows up a city. This new weapon seeks out and kills every <insert race here> person on earth and tortures them before they die. Even worse, these sorts of weapons can be deployed tactically, where for nukes that’s actually kind of hard. Nobody wants an irradiated pile of rubble.

With that understood, when do we want to fire the nukes? Firstly, they will not kill all humans. Probably around a third, on par with the Black Death or the Khmer Rouge. And they will not create a nuclear winter ending all life. They may change the climate, but not more than the asteroid that killed the dinosaurs. I understand the loss of short-term growth is a hard pill to swallow, but would we be better or worse off in 100 years if we fired all the nukes today? It is not clear to me that the answer is worse. The nukes will force systems to decentralize and become less complex, but in exchange those systems will become more robust. Will Google survive an all-out nuclear exchange? Will the Bitcoin blockchain?

“If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not one o’clock?” – John von Neumann

The more I think about this, the more I think it’s not the worst idea. I’m not a terrorist, and will do nothing to actually further this cause, but it’s an interesting thought experiment. Once the gun has been placed on the stage, it will be fired. Now or later? If you are an accelerationist, you want whatever is going to happen to happen sooner. Welcome to nuke/acc.
More in programming
There are 35% fewer software developer job listings on Indeed today than five years ago. Compared to other industries, job listings for software engineers grew much more in 2021–2022 but have declined much faster since. A look into possible reasons for this, and what could come next.
As well as changing the way I organise my writing, last year I made some cosmetic improvements to this site. I design everything on this site myself, and I write the CSS by hand – I don’t use any third-party styles or frameworks. I don’t have any design training, and I don’t do design professionally, so I use this site as a place to learn and practice my design skills. It’s a continual work-in-progress, but I’d like to think it’s getting better over time.

I design this site for readers. I write long, text-heavy posts with the occasional illustration or diagram, so I want something that will be comfortable to read and look good on a wide variety of browsers and devices. I get a lot of that “for free” by using semantic HTML and the default styles – most of my CSS is just cosmetic. Let’s go through some of the changes.

Cleaning up the link styles

This is what links used to look like: every page has a tint colour, and then I was deriving different shades to style different links – a darker shade for visited links, a lighter shade for visited links in dark mode, and a background that appears on hover. I’m generating these new colours programmatically, and I was so proud of getting that code working that I didn’t stop to think whether it was a good idea.

In hindsight, I see several issues. The tint colour is meant to give the page a consistent visual appearance, but the different shades diluted that effect. I don’t think their meaning was especially obvious. How many readers ever worked it out? And the hover styles are actively unhelpful – just as you hover over a link you’re interested in, I’m making it harder to read! (At least in light mode – in dark mode, the hover style is barely legible.)

One thing I noticed is that for certain tint colours, the “visited” colour I generated was barely distinguishable from the text colour. So I decided to lean into that in the new link styles: visited links are now the same colour as regular text.
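A minimal sketch of link styles along these lines – the `--tint-color` property name and the colour value are illustrative, not the site’s actual code:

```css
/* Illustrative sketch, not the site's actual stylesheet.
   Assumes each page defines a --tint-color custom property. */
:root {
  --tint-color: #d01c11;
}

/* Only unvisited links get the pop of tint colour. */
a {
  color: var(--tint-color);
  text-decoration: underline;
}

/* Visited links blend back into the body text,
   relying on the underline alone to mark them as links. */
a:visited {
  color: inherit;
}

/* On hover, thicken the underline instead of adding a
   background, so the link text stays readable. */
a:hover {
  text-decoration-thickness: 3px;
}
```

Using one rule for the tint and `inherit` everywhere else keeps all the shade-derivation logic out of the stylesheet entirely.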
This new set of styles feels more coherent. I’m only using one shade of the tint colour, and I think the meaning is a bit clearer – only new-to-you links will get the pop of colour to stand out from the rest of the text. I’m happy to rely on underlines for the links you’ve already visited. And when you hover, the thick underline means you can see where you are, but the link text remains readable.

Swapping out the font

I swapped out the font, replacing Georgia with Charter. The difference is subtle, so I’d be surprised if anyone noticed. I’ve always used web safe fonts for this site – the fonts that are built into web browsers, and don’t need to be downloaded first. I’ve played with custom fonts from time to time, but there’s no font I like enough to justify the hassle of loading a custom font. I still like Georgia, but I felt it was showing its age – it was designed in 1993 to look good on low-resolution screens, but looks a little chunky on modern displays. I think Charter looks nicer on high-resolution screens, but if you don’t have it installed then I fall back to Georgia.

Making all the roundrects consistent

I use a lot of rounded rectangles for components on this site, including article cards, blockquotes, and code blocks. For a long time they had similar but not identical styles, because I designed them all at different times. There were weird inconsistencies. For example, why does one roundrect have a 2px border, but another one is 3px? These are small details that nobody will ever notice directly, but they undermine the sense of visual togetherness.

I’ve done a complete overhaul of these styles to make everything look more consistent. I’m leaning heavily on CSS variables, a relatively new CSS feature that I’ve really come to like. Variables make it much easier to use consistent values in different rules. I also tweaked the appearance: I’ve removed another two shades of the tint colour. (Yes, those shades were different from the ones used in links.)
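Both ideas can be sketched together – the variable names and values below are made up for illustration, not taken from the site’s actual code. A `font-family` stack falls back from Charter to Georgia automatically, and CSS custom properties give every roundrect component one shared set of values:

```css
/* Illustrative sketch; variable names and values are invented. */
:root {
  /* Charter where installed, Georgia otherwise, then any serif. */
  --body-font: Charter, Georgia, serif;

  /* One shared set of values for every rounded rectangle. */
  --roundrect-radius: 8px;
  --roundrect-border: 2px solid #ccc;
  --roundrect-padding: 1em;
}

body {
  font-family: var(--body-font);
}

/* Article cards, blockquotes, and code blocks all pull from
   the same variables, so their styles can't drift apart. */
.card, blockquote, pre {
  border: var(--roundrect-border);
  border-radius: var(--roundrect-radius);
  padding: var(--roundrect-padding);
}
```

Changing a single variable in `:root` now updates every component at once, which is exactly the consistency the old hand-maintained styles lacked.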
Colour draws your attention, so I’m trying to use it more carefully. A link says “click here”. A heading says “start here”. What does a blockquote or code snippet say? It’s just part of the text, so it shouldn’t be grabbing your attention. I think the neutral background also makes the syntax highlighting easier to read, because the tint colour isn’t clashing with the code colours. I could probably consolidate the shades of grey I’m using, but that’s a task for another day.

I also removed the left indent on blockquotes and code blocks – I think it looks nicer to have a flush left edge for everything, and it means you can read more text on mobile screens. (That’s where I really felt the issues with the old design.)

What’s next?

By tidying up the design and reducing the number of unique elements, I’ve got a bit of room to add something new. For a while now I’ve wanted a place at the bottom of posts for common actions, or links to related and follow-up posts. As I do more and more long-form, reflective writing, I want to be able to say “if you liked this, you should read this too”. I want something that catches your eye, but doesn’t distract from the article you’re already reading. Louie Mantia has a version of this that I quite like.

I’ve held off designing this because the existing pages felt too busy, but now I feel like I have space to add it – there aren’t as many clashing colours and components competing for your attention. I’m still sketching out designs – my current idea is my rounded rectangle blocks, but with a coloured border instead of a subtle grey. When I built a prototype, though, it felt like it was missing something. I need to try a few more ideas. Watch this space!
One of the biggest mistakes that new startup founders make is trying to get away from the customer-facing roles too early. Whether it's customer support or sales, it's an incredible advantage to have the founders doing that work directly, and for much longer than they find comfortable.

The absolute worst thing you can do is hire a sales person or a customer service agent too early. You'll miss all the golden nuggets that customers throw at you for free when they're rejecting your pitch or complaining about the product. Seeing those reasons paraphrased or summarized destroys all the nutrients in their insights. You want that whole-grain feedback straight from the customer's mouth!

When we launched Basecamp in 2004, Jason was doing all the customer service himself. And he kept doing it like that for three years!! By the time we hired our first customer service agent, Jason was doing 150 emails/day. The business was doing millions of dollars in ARR. And Basecamp got infinitely better, both as a market proposition and as a product, because Jason could funnel all that feedback into decisions and positioning.

For a long time after that, we did "Everyone on Support", frequently rotating programmers, designers, and founders through a day of answering emails directly to customers. The dividends of doing this were almost as high as having Jason run it all in the early years. We fixed an incredible number of minor niggles and annoying bugs because programmers found it easier to solve the problem than to apologize for why it was there.

It's not easy doing this! Customers often offer their valuable insights wrapped in rude language, unreasonable demands, and bad suggestions. That's why many founders quit the business of dealing with them at the first opportunity. That's why few companies ever do "Everyone on Support". That's why there's such eagerness to reduce support to an AI-only interaction.
But quitting dealing with customers early, not just in support but also in sales, is an incredible handicap for any startup. You don't have to do everything that every customer demands of you, but you should certainly listen to them. And you can't listen well if the sound is being muffled by layers of indirection.
Humanity's Last Exam by Center for AI Safety (CAIS) and Scale AI
Most of our cultural virtues, celebrated heroes, and catchy slogans align with the idea of "never give up". That's a good default! Most people are inclined to give up too easily, as soon as the going gets hard. But it's also worth remembering that sometimes you really should fold, admit defeat, and accept that your plan didn't work out.

But how do you distinguish between a bad plan and insufficient effort? It's not easy. Plenty of plans look foolish at first glance, especially to people without skin in the game. That's the essence of a disruptive startup: the idea ought to look a bit daft at first glance, or it probably doesn't carry the counter-intuitive kernel needed to really pop. Yet it's also obviously true that not every daft idea holds the potential to be a disruptive startup. That's why even the best venture capital investors in the world are wrong far more than they're right. Not because they aren't smart, but because nobody is smart enough to predict (the disruption of) the future consistently. The best they can do is make long bets, and then hope enough of them pay off to fund the ones that don't.

So far, so logical, so conventional. A million words have been written by a million VCs about how their shrewd eyes let them see those hidden disruptive kernels before anyone else could. Good for them. What I'm more interested in knowing is how and when you pivot from a promising bet to folding your hand. When do you accept that no amount of additional effort is going to get that turkey to soar?

I'm asking because I don't have any great heuristics here, and I'd really like to know! Because the ability to fold your hand, and live to play your remaining chips another day, isn't just about startups. It's also about individual projects. It's about work methods. Hell, it's even about politics and societies at large. I'll give you just one small example.
In 2017, Rails 5.1 shipped with new tooling for doing end-to-end system tests, using a headless browser to validate the functionality, as a user would in their own browser. Since then, we've spent an enormous amount of time and effort trying to make this approach work. Far too much time, if you ask me now. This year, we finally made the decision to fold, and to give up on using these types of system tests at the scale we had previously thought made sense. In fact, just last week we deleted 5,000 lines of code from the Basecamp code base by dropping literally all the system tests that we had carried so diligently for all these years.

I really like this example, because it draws parallels to investing and entrepreneurship so well. The problem with our approach to system tests wasn't that it didn't work at all. If that had been the case, bailing on the approach would have been a no-brainer long ago. The trouble was that it sorta-kinda did work! Some of the time. With great effort. But ultimately it wasn't worth the squeeze.

I've seen this trap snap on startups time and again. The idea finds some traction. Enough for the founders to muddle through for years and years. Stuck with an idea that sorta-kinda does work, but not well enough to be worth a decade of their life. That's a tragic trap.

The only antidote I've found to this on the development side is time boxing. Programmers are just as liable as anyone to believe a flawed design can work if given just a bit more time. And then a bit more. And then double what we've already spent. The time box provides a hard stop. In Shape Up, it's six weeks. Do or die. Ship or don't. That works.

But what's the right amount of time to give a startup or a methodology or a societal policy? There's obviously no universal answer, but I'd argue that whatever the answer is, it's "less than you think, less than you want". Having the grit to stick with the effort when the going gets hard is a key trait of successful people.
But having the humility to give up on good bets turned bad might be just as important.