We do not know how, why, or when the X algorithm devalues posts with links, but it does—without telling you, and by a lot—and it makes the experience there worse. Without links, information on X is headlines without stories, commentary without context, magic without the prestige. We do not know how much the inability to share or see source links has contributed to the spread of misleading or incorrect information, but we do know that primary sources cannot be put into X posts, and that replies with links are shown to 70-90% fewer people. Speech on X is free, but only if you reference other speech on X.

In Laos, I once asked a rural villager how he determined the truth, given that the government restricted his media to its controlled outlets. He thought for a few minutes, looked around, became confused, and then said, "Isn't the truth what the government says?"

The truth on X is whatever random people comment on, polarize, interpret, and summarize from source material that is intentionally lost by a black-box algorithm. There is no depth to anything on X because context in the form of links is heavily penalized. This is bad for humanity, and it is the opposite of free speech. It is link winter on X.
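X has never published the mechanism behind this, but a reach penalty of this kind is easy to picture as a simple ranking multiplier. The sketch below is purely illustrative: the signal weights and the 0.2 penalty factor are assumptions chosen to reproduce the "70-90% fewer people" figure above, not X's actual (unpublished) ranking logic.

```python
# Illustrative only: a toy ranking function showing how a hidden
# multiplier on link-bearing posts could quietly cut their reach.
# The weights and the link penalty are assumptions, not X's code.

from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    reposts: int
    replies: int
    has_external_link: bool

def rank_score(post: Post) -> float:
    # Base engagement score: a weighted sum of visible signals.
    score = 1.0 * post.likes + 2.0 * post.reposts + 1.5 * post.replies
    # Hypothetical link penalty: a multiplier between 0.1 and 0.3
    # would match the "shown to 70-90% fewer people" effect.
    if post.has_external_link:
        score *= 0.2
    return score

with_link = Post(likes=100, reposts=20, replies=10, has_external_link=True)
without_link = Post(likes=100, reposts=20, replies=10, has_external_link=False)
print(rank_score(with_link), rank_score(without_link))  # roughly 31 vs 155
```

The point of the sketch is that the penalty is invisible from the outside: two otherwise identical posts diverge enormously in reach, and neither author is ever told why.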
A couple of years ago, I posted a slight criticism of Elon Musk that led prominent venture capitalist Marc Andreessen to block me. Since then, I have been unable to view his posts¹, which is a shame because I valued his thoughts and opinions. Today, X engineering announced changes to the "block" feature:

"Soon we'll be launching a change to how the block function works. If your posts are set to public, accounts you have blocked will be able to view them, but they will not be able to engage (like, reply, repost, etc.)."

I'm glad to see this change from X. The block feature has always been flawed, and this makes it slightly less so.

On buttons and algorithms

The block button is a problematic feature on X because it is used so often, so cavalierly, and by so many people that it's difficult to determine whether someone was blocked for genuine harassment or simply because the blocker disagreed with one (or all) of the blockee's posts. As a result, being blocked doesn't provide a high-quality signal for the algorithm to use when generating a user's feed (though, I'm reliably told, it is used for that purpose anyway), and it's not very helpful for community moderation either.

Feed algorithms use a huge number of automated and aggregated signals to shape a user's feed, but these signals are almost entirely hidden from users to preserve the illusion that the feed is generated by magic. I think we're now at a point in the evolution of these algorithms where users should be given some insight into how their behavior affects the content they are shown, because it's not always intuitive.

For example, I doubt many people know that scrolling behavior on Instagram heavily feeds the algorithm: if you're scrolling through your feed and pause on a post for a few moments, the algorithm ingests that pause and treats it as a strong signal, stronger than tapping the heart button, that you want to see more posts like that one. Similarly, YouTube's algorithm uses the passive metric of watch time to gauge your interest in a video, and practically ignores the like button. The software is watching you, making assumptions about your behavior that may or may not be accurate, and then altering what you are exposed to in your feed.

A few weeks ago, Elon Musk explained that "one of the strongest signals" to the X algorithm that you like a post is clicking the "share" button. This is scary, because a person or team at X somehow decided that clicking "share" is a positive signal and built the algorithm accordingly. As it turns out, a lot of people use the share button for other reasons, such as sharing posts they are outraged by, which can seriously distort their feeds. Users, of course, were never told that their sharing behavior was being monitored and fed into the algorithm, let alone that it was one of the strongest signals.

I don't think most people consider how the block button impacts the algorithm, either, but it does, so using the button changes the content the algorithm exposes to you. How big an impact the button has is a secret inside a black box. So when people use the block feature the way Andreessen did with me, presumably to avoid seeing more of my posts, they might also prevent themselves from being exposed to similar posts from other people. Very slowly and completely unwittingly, they may eventually find themselves in an echo chamber of their own design, filled entirely with posts discussing only one side of every story.
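None of these platforms publish their ranking code, but the feedback loop is simple to sketch. Below is a deliberately simplified model in which every signal name and weight is an assumption for illustration; it shows how implicit actions like dwelling, sharing, and blocking could shift a per-topic interest score that then filters what you see.

```python
# A deliberately simplified model of the feedback loop described above.
# Every signal name and weight here is an assumption for illustration;
# real feed-ranking systems are far more complex and are not public.

from collections import defaultdict

# Hypothetical weights: passive signals (dwell, share) outweigh the
# explicit "like", and a block is a strong negative signal.
SIGNAL_WEIGHTS = {
    "dwell": 0.5,    # paused on a post while scrolling
    "share": 1.0,    # clicked share, even to express outrage
    "like": 0.3,
    "block": -2.0,   # blocking an author suppresses related topics too
}

interest = defaultdict(float)  # accumulated per-topic interest score

def observe(topic: str, signal: str) -> None:
    """Ingest one implicit or explicit action into the model."""
    interest[topic] += SIGNAL_WEIGHTS[signal]

def feed(posts: list[tuple[str, str]]) -> list[str]:
    """Rank candidate (topic, text) posts by accumulated interest."""
    return [text for topic, text in
            sorted(posts, key=lambda p: interest[p[0]], reverse=True)]

# Outrage-sharing a topic still boosts it; blocking one author
# quietly buries a whole side of the conversation.
observe("politics_side_a", "share")
observe("politics_side_a", "dwell")
observe("politics_side_b", "block")
print(feed([("politics_side_a", "post A"), ("politics_side_b", "post B")]))
```

The asymmetry is the point of the sketch: with weights like these, a single block outweighs several positive signals combined, so one cavalier click can tilt the model for a long time.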
The X algorithm is particularly scary because it has the ability to radicalize people by reinforcing their beliefs subtly, over a long period of time. Most of the new generative AI companies are so obsessed with preventing this kind of interference with human minds that they literally build their products around safety. Are the algorithms at X treated with the same degree of concern? With every block, with every share, with every action you take, knowingly or not, the algorithm may be reducing the diversity of thought you're exposed to just a little bit more, until one day you look back and see that it interfered with your mind, and with your opinions. Block carefully.

¹ Technically, I can still see his posts if I view his profile in incognito mode, which is another reason this feature change makes sense.
In 2008, Microsoft released an enormous 200-pound coffee table with an embedded 30-inch touchscreen called Surface. Although the iPhone had been around for a little while, the larger screen made Surface feel absolutely futuristic: in the Photos app, you could toss around pictures as if they were physically in front of you. It cost $10,000. Very few people ever bought it. About two years later, Apple released the $499 iPad. Microsoft had made a $10,000 table for no one, and Apple made a $499 tablet for everyone.

This is a common theme among Apple's most important products: they are usually built around existing ideas and technologies that have been improved and then repackaged into beautiful, premium experiences that are expensive but not unaffordable. It happened with the iMac, iPod, iPhone, iPad, and Apple Watch. Whatever the product, Apple has always brought seemingly impossible levels of quality and craftsmanship to the masses. Apple is luxury for everyone.

Apple Vision Pro, however, is different. Yes, it is an undeniably beautiful product, and the software is very impressive. When I first used it, I was overcome with a sense of awe that I haven't felt since seeing kinetic scrolling on the first iPhone. But Vision Pro costs nearly $4,000 and has enough faults that it still feels a bit like a technology demo. It is not affordable at all, and it brings nothing to the masses.

Vision Pro feels bizarrely un-Apple in a way that only a few products have before, like the 18-karat gold Apple Watch, the $700 Mac Pro wheels, or the $1,000 Pro Display XDR stand. Those products are shameful Veblen goods that do not offer value commensurate with their price. And while the raw technology in Vision Pro is perhaps worth $4,000 today, I do not think it delivers anywhere near $4,000 in value. This is the exact opposite of most other transformative Apple products. So what happened?

Good product design is a careful dance between what's best and what's possible. For the iPhone, building the right combination of technology and software at a practical price point was an enormous challenge that Apple pulled off. But it took years and years of development for the required technology in the iPhone to reach a price that was suitable for the market. When things were cost-prohibitive, the designers of the iPhone found clever workarounds or made hard trade-offs. The first iPhone wasn't a perfect product, but it was designed against reasonable constraints.

I don't think Vision Pro was designed against reasonable constraints. If the goal was to make the equivalent of the iPod in a sea of mediocre MP3 players, Vision Pro hasn't succeeded. It isn't a disruptive VR headset because it isn't even in the same market as its competitors, most of which cost a tenth as much. The goal, then, must have been to create a totally new product segment that only incidentally resembles the current VR market. Apple hints at this strategy by calling Vision Pro a "spatial computer." The problem is that if a spatial computer can't be made today for under $4,000, then the technology simply isn't ready.

In its current state, I think Vision Pro is antithetical to Apple's DNA: it isn't accessible to most people, it is large and inelegant, and the platform itself has nebulous use cases.

Design Philosophy

In my experience, whether the product is hardware or software, there are two fundamental ways to approach product design.
The first (and most common) philosophy is to build from the bottom up: assemble low-cost, basic components first, then work upward from those components to an experience that reaches a desired price-quality equilibrium. The second philosophy starts the other way around: consider the maximum reasonable quality of an experience first, even if it is impractical, and then iterate to build the product down until it reaches an acceptable experience-cost equilibrium by making careful trade-offs and cleverly working around constraints.

An example of a bottom-up product is the Amazon Kindle, which is made of inexpensive, flimsy injection-molded plastic and shows no signs of craftsmanship; it simply does what it says it will do. On the other hand, consider the Apple Watch, which is, even without its electronics, a beautiful object. It takes only a few moments of touching the watch case to realize that an incredible amount of thought was put into the materials, angles, and curves, and that novel manufacturing techniques may even have been invented to construct it. The top-down approach is more expensive and takes longer, but, as long as you have reasonable constraints and goals, the quality of the output is dramatically better.

Apple Vision Pro seems to have subscribed to neither of these approaches, or its designers started with the top-down approach and then gave up before hitting a reasonable equilibrium. It is absurdly expensive, and its extreme trade-offs don't add up to a cohesive product design strategy that would make it a great standalone product. It also has strange extraneous features like EyeSight, which must be incredibly expensive for what it accomplishes (rather poorly).

What was the purpose of launching Apple Vision Pro now, when it is incapable of bringing anything new to the masses? It is not luxurious, even though it is well constructed. And at its current price, it is definitely not for everyone. Essentially, it is an expensive tech demo.

Apple's other groundbreaking products, like the iPod, iMac, iPhone, and Apple Watch, were all very focused products that launched with reasonable features at reasonable prices. They relied on Apple's soul to guide their development. Vision Pro, it seems, did not. Apple's DNA and culture used to drive the company to make $499 tablets for everyone, a feat that seemed impossible at the time. But today, like the $10,000 Surface table in 2008, Apple makes a $4,000 headset for no one.