I find blockchain fascinating because it extends open source software development to open source + state. This seems to be a genuine and exciting innovation in computing paradigms; we don't just get to share code, we get to share a running computer, and anyone anywhere can use it in an open and permissionless manner. The seeds of this revolution arguably began with Bitcoin, so I became curious to drill into it in some detail...




Self-driving as a case study for AGI

Sparked by progress in Large Language Models (LLMs), there's a lot of chatter recently about AGI, its timelines, and what it might look like. Some of it is hopeful and optimistic, but a lot of it is fearful and doomy, to put it mildly. Unfortunately, a lot of it is also very abstract, which causes people to talk past each other in circles. Therefore, I'm always on the lookout for concrete analogies and historical precedents that can help explore the topic in more grounded terms. In particular, when I am asked what I think AGI will look like, I personally like to point to self-driving. In this post, I'd like to explain why.

Let's start with one common definition of AGI:

AGI: an autonomous system that surpasses human capabilities in the majority of economically valuable work.

Note that there are two specific requirements in this definition. First, it is a system with full autonomy, i.e. it operates on its own with very little to no human supervision. Second, it operates autonomously across the majority of economically valuable work. To make this part concrete, I personally like to refer to the U.S. Bureau of Labor Statistics index of occupations. A system that has both of these properties is what we would call an AGI.

What I would like to suggest in this post is that recent developments in our ability to automate driving are a very good early case study of the societal dynamics of increasing automation, and by extension of what AGI in general will look and feel like. I think this is because of a few features of this space that loosely just say that "it is a big deal": self-driving is very accessible and visible to society (cars with no drivers on the streets!), it is a large part of the economy by size, it presently employs a large human workforce (e.g. think Uber/Lyft drivers), and driving is a sufficiently difficult problem to automate, but automate it we did (ahead of many other sectors of the economy), and society has noticed and is responding to it. There are of course other industries that have also been dramatically automated, but either I am personally less familiar with them, or they fall short of some of the properties above.

partial automation

As a "sufficiently difficult" problem in AI, the automation of driving did not pop into existence out of nowhere; it is the result of a gradual process of automating the driving task, with a lot of "tool AI" intermediates. In vehicle autonomy, many cars are now manufactured with a "Level 2" driver assist - an AI that collaborates with a human to get from point A to point B. It is not fully autonomous, but it handles a lot of the low-level details of driving. Sometimes it automates entire maneuvers, e.g. the car might park for you. The human primarily acts as the supervisor of this activity, but can in principle take over at any time and perform the driving task, or issue a high-level command (e.g. request a lane change). In some cases (e.g. lane following and quick decision making), the AI outperforms human capability, but it can still fall short of it in rare scenarios. This is analogous to a lot of the tool AIs that we are starting to see deployed in other industries, especially with the recent capability unlock due to Large Language Models (LLMs). For example, as a programmer, when I use GitHub Copilot to auto-complete a block of code, or GPT-4 to write a bigger function, I am handing off low-level details to the automation, but in the exact same way, I can also step in with an "intervention" should the need arise.
That is, Copilot and GPT-4 are Level 2 programming. There are many Level 2 automations across the industry, not all of them necessarily based on LLMs - from TurboTax, to robots in Amazon warehouses, to many other "tool AIs" in translation, writing, art, legal, marketing, etc.

full automation

At some point, these systems cross the threshold of reliability and become what looks like Waymo today. They creep into the realm of full autonomy. In San Francisco today, you can open up an app and call a Waymo instead of an Uber. A driverless car will pull up and take you, a paying customer, to your destination. This is amazing. You need not know how to drive, you need not pay attention, you can lean back and take a nap, while the system transports you from A to B. Like many others I've talked to, I personally prefer to take a Waymo over an Uber and I've switched to it almost exclusively for within-city transportation. The experience is much more low-variance and reproducible, the driving is smooth, you can play music, and you can chat with friends without spending mental resources on what the driver is thinking as they listen to you.

the mixed economy of full automation

And yet, even though autonomous driving technology now exists, there are still plenty of people calling an Uber alongside. How come? Well, first, many people simply don't even know that you can call a Waymo. But even if they do, many people don't fully trust the automated system just yet and prefer to have a human drive them. And even if they did, many people might just prefer a human driver, and e.g. enjoy the talk and banter and getting to know other people. Beyond preferences alone, judging by the increasing wait times in the app today, Waymo is supply constrained. There are not enough cars to meet the demand. A part of this may be that Waymo is being very careful to manage and monitor risk and public opinion. Another part is that Waymo, I believe (?), has a quota from regulators on how many cars they are allowed to have deployed on the streets. Another rate-limiter is that Waymo can't just replace all the Ubers right away in a snap of a finger - they have to build out the infrastructure, build the cars, and scale their operations. I posit that all kinds of automations in other sectors of the economy will look identical: some people/companies will use them immediately, but a lot of people 1) won't know about them, 2) if they do, won't trust them, and 3) if they did, would still prefer to employ and work with a human. On top of that, demand is greater than supply. AGI would be constrained in exactly these ways, for exactly the same reasons - some amount of self-restraint from the developers, some amount of regulation, and some amount of simple, straight-up resource shortage, e.g. needing to build out more GPU datacenters.

the globalization of full automation

As I already hinted at with resource constraints, the full globalization of this technology is still very expensive, work-intensive, and rate-limited. Today, Waymo can only drive in San Francisco and Phoenix, but the approach itself is fairly general and scalable, so the company might e.g. soon expand to LA, Austin, etc. The product may also still be constrained by other environmental factors, e.g. driving in heavy snow. And in some rare cases, it might even need rescue from a human operator. The expansion of capability does not come "for free". For example, Waymo has to expend resources to enter a new city.
They have to establish a presence, map the streets, and adjust the perception and planner/controller to unique situations, or to local rules and regulations specific to that area. In our working analogy, many jobs may have full autonomy only in some settings or conditions, and expanding the coverage will require work and effort. In both cases, the approach itself is general and scalable and the frontier will expand, but it can only do so over time.

society reacts

Another aspect that I find fascinating about the ongoing introduction of self-driving to society is that just a few years ago there was a ton of commentary and FUD everywhere about whether it will or won't work, whether it is even possible, and it was a whole thing. And now self-driving is actually here. Not as a research prototype but as a product - I can exchange money for fully automated transportation. In its present operating range, the industry has reached full autonomy. And yet, overall it's almost like no one cares. Most people I talk to (even in tech!) don't even know that this happened. When your Waymo is driving through the streets of SF, you'll see many people look at it as an oddity. First they are surprised and stare. Then they seem to move on with their lives. When full autonomy gets introduced in other industries, maybe the world doesn't just blow up in a storm either. The majority of people may not even realize it at first. When they do, they might stare and then shrug, in a way that ranges anywhere from denial to acceptance. Some people will get really upset about it, and do the equivalent of putting cones on Waymos in protest, whatever that equivalent may be. Of course, we've come nowhere close to seeing this aspect fully play out just yet, but when it does I expect it to be broadly predictive.

economic impact

Let's turn to jobs. Certainly, and visibly, Waymo has deleted the driver of the car. But it has also created a lot of other jobs that were not there before and are a lot less visible - the human labeler helping to collect training data for neural networks, the support agent who remotely connects to vehicles that run into trouble, the people building and maintaining the car fleet, the maps, etc. An entire new industry of sensors and related infrastructure is created to assemble these highly-instrumented, high-tech cars in the first place. In the same way with work more generally, many jobs will change, some jobs will disappear, but many new jobs will appear, too. It is much more a refactoring of work than a direct deletion, even if that deletion is the most prominent part. It's hard to argue that the overall numbers won't trend down at some point and over time, but this happens significantly more slowly than a person naively looking at the situation might think.

competitive landscape

The final aspect I'd like to consider is the competitive landscape. A few years ago there were many, many self-driving car companies. Today, in recognition of the difficulty of this problem (which I think is only just barely possible to automate given the current state of the art in AI and computing more generally), the ecosystem has significantly consolidated, and Waymo has reached the first feature-complete demonstration of the self-driving future. However, a number of companies are in pursuit, including e.g. Cruise, Zoox, and of course, my personal favorite :), Tesla. A brief note here given my specific history and involvement with this space.
As I see it, the ultimate goal of the self-driving industry is to achieve full autonomy globally. Waymo has taken the strategy of first going for autonomy and then scaling globally, while Tesla has taken the strategy of first going globally and then scaling autonomy. Today, I am a happy customer of the products of both companies and, personally, I cheer for the technology overall first. However, one company has a lot of primarily software work remaining while the other has a lot of primarily hardware work remaining. I have my bets for which one goes faster. All that said, in the same way, many other sectors of the economy may go through a time of rapid growth and expansion (think the ~2015 era of self-driving), only to later consolidate, if the analogy holds, into a small few companies battling it out. And in the midst of it all, there will be a lot of actively used tool AIs (think: today's Level 2 ADAS features), and even some open platforms (think: Comma).

AGI

So these are the broad strokes of what I think AGI will look like. Now just copy-paste this across the economy in your mind, happening at different rates, and with all kinds of difficult-to-predict interactions and second order effects. I don't expect the analogy to hold perfectly, but I expect it to be a useful model to have in mind and to draw on. On a kind of memetic spectrum, it looks a lot less like a recursively self-improving superintelligence that escapes our control into cyberspace to manufacture deadly pathogens or nanobots that turn the galaxy into gray goo. And it looks a lot more like self-driving: the part of our economy that is currently speed-running the development of a major, society-altering automation. It has a gradual progression, it has society as an observer and a participant, and its expansion is rate-limited in a large variety of ways, including regulation and the resources of an educated human workforce, information, material, and energy. The world doesn't explode; it adapts, changes and refactors. In self-driving specifically, the automation of transportation will make it a lot safer, cities will become a lot less smoggy and congested, and parking lots and parked cars will disappear from the sides of our roads to make more space for people. I personally very much look forward to what all the equivalents of that might be with AGI.

Deep Neural Nets: 33 years ago and 33 years from now

The Yann LeCun et al. (1989) paper Backpropagation Applied to Handwritten Zip Code Recognition is, I believe, of some historical significance because it is, to my knowledge, the earliest real-world application of a neural net trained end-to-end with backpropagation. Except for the tiny dataset (7291 16x16 grayscale images of digits) and the tiny neural network used (only 1,000 neurons), this paper reads remarkably modern today, 33 years later - it lays out a dataset, describes the neural net architecture, loss function, optimization, and reports the experimental classification error rates over training and test sets. It's all very recognizable and type checks as a modern deep learning paper, except it is from 33 years ago. So I set out to reproduce the paper 1) for fun, but 2) to use the exercise as a case study on the nature of progress in deep learning.

Implementation. I tried to follow the paper as closely as possible and re-implemented everything in PyTorch in this karpathy/lecun1989-repro github repo. The original network was implemented in Lisp using the Bottou and LeCun 1988 backpropagation simulator SN (later named Lush). That paper is in French so I can't super read it, but from the syntax it looks like you can specify neural nets using a higher-level API similar to what you'd do in something like PyTorch today. As a quick note on software design, modern libraries have adopted a design that splits into 3 components: 1) a fast (C/CUDA) general Tensor library that implements basic mathematical operations over multi-dimensional tensors, 2) an autograd engine that tracks the forward compute graph and can generate operations for the backward pass, and 3) a scriptable (Python) deep-learning-aware, high-level API of common deep learning operations, layers, architectures, optimizers, loss functions, etc.
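As an aside, component (2) is small enough to sketch in full. Below is a toy scalar autograd engine in the spirit of micrograd - an illustration of the idea, not code from the repro:

    # A toy scalar autograd engine, illustrating component (2) above.
    import math

    class Value:
        def __init__(self, data, children=()):
            self.data = data
            self.grad = 0.0
            self._backward = lambda: None
            self._prev = set(children)

        def __add__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            out = Value(self.data + other.data, (self, other))
            def _backward():
                self.grad += out.grad
                other.grad += out.grad
            out._backward = _backward
            return out

        def __mul__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            out = Value(self.data * other.data, (self, other))
            def _backward():
                self.grad += other.data * out.grad
                other.grad += self.data * out.grad
            out._backward = _backward
            return out

        def tanh(self):
            t = math.tanh(self.data)
            out = Value(t, (self,))
            def _backward():
                self.grad += (1 - t**2) * out.grad
            out._backward = _backward
            return out

        def backward(self):
            # topologically sort the compute graph, then chain rule in reverse
            topo, visited = [], set()
            def build(v):
                if v not in visited:
                    visited.add(v)
                    for child in v._prev:
                        build(child)
                    topo.append(v)
            build(self)
            self.grad = 1.0
            for v in reversed(topo):
                v._backward()

    # usage: a one-neuron "network", y = tanh(w*x + b)
    x, w, b = Value(0.5), Value(-1.2), Value(0.3)
    y = (w * x + b).tanh()
    y.backward()
    print(y.data, w.grad, x.grad)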
Training. During the course of training we have to make 23 passes over the training set of 7291 examples, for a total of 167,693 presentations of (example, label) to the neural network. The original network trained for 3 days on a SUN-4/260 workstation. I ran my implementation on my MacBook Air (M1) CPU, which crunched through it in about 90 seconds (~3000X naive speedup). My conda is set up to use the native arm64 builds, rather than Rosetta emulation. The speedup may have been more dramatic if PyTorch had support for the full capability of the M1 (including the GPU and the NPU), but this seems to still be in development. I also tried naively running the code on an A100 GPU, but the training was actually slower, most likely because the network is so tiny (4 layer convnet with up to 12 channels, total of 9760 params, 64K MACs, 1K activations), and the SGD uses only a single example at a time. That said, if one really wanted to crush this problem with modern hardware (A100) and software infrastructure (CUDA, PyTorch), we'd need to trade per-example SGD for full-batch training to maximize GPU utilization and most likely achieve another ~100X speedup of training latency.

Reproducing 1989 performance. The original paper reports the following results:

    eval: split train. loss 2.5e-3. error 0.14%. misses: 10
    eval: split test . loss 1.8e-2. error 5.00%. misses: 102

While my training script repro.py in its current form prints at the end of the 23rd pass:

    eval: split train. loss 4.073383e-03. error 0.62%. misses: 45
    eval: split test . loss 2.838382e-02. error 4.09%. misses: 82

So I am reproducing the numbers roughly, but not exactly. Sadly, an exact reproduction is most likely not possible because the original dataset has, I believe, been lost to time. Instead, I had to simulate it using the larger MNIST dataset (hah, never thought I'd say that) by taking its 28x28 digits, scaling them down to 16x16 pixels with bilinear interpolation, and randomly drawing, without replacement, the correct number of training and test set examples from it.

But I am sure there are other culprits at play. For example, the paper is a bit too abstract in its description of the weight initialization scheme, and I suspect that there are some formatting errors in the pdf file that, for example, erase dots ".", making "2.5" look like "2 5", and potentially (I think?) erasing square roots. E.g. we're told that the weight init is drawn from uniform "2 4 / F" where F is the fan-in, but I am guessing this surely (?) means "2.4 / sqrt(F)", where the sqrt helps preserve the standard deviation of outputs. The specific sparse connectivity structure between the H1 and H2 layers of the net is also brushed over; the paper just says it is "chosen according to a scheme that will not be discussed here", so I had to make some sensible guesses here with an overlapping block sparse structure. The paper also claims to use tanh non-linearity, but I am worried this may have actually been the "normalized tanh" that maps ntanh(1) = 1, and potentially with an added scaled-down skip connection, which was trendy at the time to ensure there is at least a bit of gradient in the flat tails of the tanh. Lastly, the paper uses a "special version of Newton's algorithm that uses a positive, diagonal approximation of the Hessian", but I only used SGD because it is significantly simpler and, according to the paper, "this algorithm is not believed to bring a tremendous increase in learning speed".
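For concreteness, here is roughly what the dataset simulation described above amounts to (a simplified sketch, not the exact repro.py code; the [-1, 1] normalization convention here is an assumption):

    # Sketch: simulate the (lost) 1989 dataset from MNIST.
    import torch
    import torch.nn.functional as F
    from torchvision import datasets

    def simulate_1989_split(train=True, n=7291, seed=1337):
        mnist = datasets.MNIST('.', train=train, download=True)
        g = torch.Generator().manual_seed(seed)
        idx = torch.randperm(len(mnist), generator=g)[:n]  # without replacement
        X = mnist.data[idx].float().unsqueeze(1) / 127.5 - 1.0  # to roughly [-1, 1]
        # downscale 28x28 -> 16x16 with bilinear interpolation
        X = F.interpolate(X, size=(16, 16), mode='bilinear', align_corners=False)
        return X, mnist.targets[idx]

    Xtr, Ytr = simulate_1989_split(train=True, n=7291)
    Xte, Yte = simulate_1989_split(train=False, n=2007)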
Cheating with time travel. Around this point came my favorite part. We are living here 33 years in the future and deep learning is a highly active area of research. How much can we improve on the original result using our modern understanding and 33 years of R&D? My original result was:

    eval: split train. loss 4.073383e-03. error 0.62%. misses: 45
    eval: split test . loss 2.838382e-02. error 4.09%. misses: 82

The first thing I was a bit sketched out about is that we are doing simple classification into 10 categories, but at the time this was modeled as a mean squared error (MSE) regression into targets -1 (for the negative class) or +1 (for the positive class), with output neurons that also had the tanh non-linearity. So I deleted the tanh on the output layer to get class logits and swapped in the standard (multiclass) cross entropy loss function. This change dramatically improved the training error, completely overfitting the training set:

    eval: split train. loss 9.536698e-06. error 0.00%. misses: 0
    eval: split test . loss 9.536698e-06. error 4.38%. misses: 87

I suspect one has to be much more careful with weight initialization details if your output layer has the (saturating) tanh non-linearity and an MSE error on top of it. Next, in my experience a very finely-tuned SGD can work very well, but the modern Adam optimizer (learning rate of 3e-4, of course :)) is almost always a strong baseline and needs little to no tuning. So to improve my confidence that optimization was not holding back performance, I switched to AdamW with LR 3e-4, decaying it down to 1e-4 over the course of training, giving:

    eval: split train. loss 0.000000e+00. error 0.00%. misses: 0
    eval: split test . loss 0.000000e+00. error 3.59%. misses: 72

This gave a slightly improved result on top of SGD, except we also have to remember that a little bit of weight decay came in for the ride as well via the default parameters, which helps fight the overfitting situation. As we are still heavily overfitting, next I introduced a simple data augmentation strategy where I shift the input images by up to 1 pixel horizontally or vertically. However, because this simulates an increase in the size of the dataset, I also had to increase the number of passes from 23 to 60 (I verified that just naively increasing passes in the original setting did not substantially improve results):

    eval: split train. loss 8.780676e-04. error 1.70%. misses: 123
    eval: split test . loss 8.780676e-04. error 2.19%. misses: 43

As can be seen in the test error, that helped quite a bit! Data augmentation is a fairly simple and very standard concept used to fight overfitting, but I didn't see it mentioned in the 1989 paper; perhaps it was a more recent innovation (?). Since we are still overfitting a bit, I reached for another modern tool in the toolbox, Dropout. I added a weak dropout of 0.25 just before the layer with the largest number of parameters (H3). Because dropout sets activations to zero, it doesn't make as much sense to use it with tanh, which has an active range of [-1,1], so I swapped all non-linearities to the much simpler ReLU activation function as well. Because dropout introduces even more noise during training, we also have to train longer, bumping up to 80 passes, but giving:

    eval: split train. loss 2.601336e-03. error 1.47%. misses: 106
    eval: split test . loss 2.601336e-03. error 1.59%. misses: 32

Which brings us down to only 32 / 2007 mistakes on the test set! I verified that just swapping tanh -> relu in the original network did not give substantial gains, so most of the improvement here is coming from the addition of dropout. In summary, if I time traveled to 1989 I'd be able to cut the rate of errors by about 60%, taking us from ~80 to ~30 mistakes, and an overall error rate of ~1.5% on the test set. This gain did not come completely free because we also almost 4X'd the training time, which would have increased the 1989 training time from 3 days to almost 12. But the inference latency would not have been impacted. (The remaining 32 errors are shown as a figure in the original post.)
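Strung together, these modern tweaks amount to only a few lines of PyTorch. The sketch below uses hypothetical stand-ins (a toy net, not the 1989 architecture) just to make the changes concrete - logits + cross entropy, AdamW with a decaying LR, 1-pixel shift augmentation, and dropout:

    # Sketch of the modern tweaks above, with a toy stand-in network.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(            # NOT the 1989 convnet, just a stand-in
        nn.Flatten(),
        nn.Linear(16 * 16, 64), nn.ReLU(),   # ReLU instead of tanh
        nn.Dropout(0.25),                    # weak dropout before the big layer
        nn.Linear(64, 10),                   # raw logits, no tanh on the output
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(Xtr, Ytr),  # from the earlier sketch
        batch_size=1)  # the paper's per-example regime; batching is the modern cheat

    def augment(x):
        # shift by up to 1 pixel in each direction (torch.roll wraps around;
        # the real code may pad with zeros instead)
        shifts = tuple(torch.randint(-1, 2, (2,)).tolist())
        return torch.roll(x, shifts, dims=(2, 3))

    n_passes = 80
    for p in range(n_passes):
        # linearly decay the learning rate from 3e-4 down to 1e-4
        for g in optimizer.param_groups:
            g['lr'] = 3e-4 + (1e-4 - 3e-4) * p / (n_passes - 1)
        for x, y in loader:
            loss = F.cross_entropy(model(augment(x)), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()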
Going further. However, after swapping MSE -> Softmax, SGD -> AdamW, adding data augmentation, dropout, and swapping tanh -> relu, I've started to taper out on the low hanging fruit of ideas. I tried a few more things (e.g. weight normalization), but did not get substantially better results. I also tried to miniaturize a Vision Transformer (ViT) into a "micro-ViT" that roughly matches the number of parameters and flops, but couldn't match the performance of a convnet. Of course, many other innovations have been made in the last 33 years, but many of them (e.g. residual connections, layer/batch normalizations) only become relevant in much larger models, and mostly help stabilize large-scale optimization. Further gains at this point would likely have to come from scaling up the size of the network, but this would bloat the test-time inference latency.

Cheating with data. Another approach to improving the performance would have been to scale up the dataset, though this would come at a dollar cost of labeling. Our original reproduction baseline, again for reference, was:

    eval: split train. loss 4.073383e-03. error 0.62%. misses: 45
    eval: split test . loss 2.838382e-02. error 4.09%. misses: 82

Using the fact that we have all of MNIST available to us, we can simply try scaling up the training set by ~7X (7,291 to 50,000 examples). Leaving the baseline training running for 100 passes already shows some improvement from the added data alone:

    eval: split train. loss 1.305315e-02. error 2.03%. misses: 60
    eval: split test . loss 1.943992e-02. error 2.74%. misses: 54

But further combining this with the innovations of modern knowledge (described in the previous section) gives the best performance yet:

    eval: split train. loss 3.238392e-04. error 1.07%. misses: 31
    eval: split test . loss 3.238392e-04. error 1.25%. misses: 24

In summary, simply scaling up the dataset in 1989 would have been an effective way to drive up the performance of the system, at no cost to inference latency.

Reflections. Let's summarize what we've learned as a 2022 time traveler examining state of the art 1989 deep learning tech:

- First of all, not much has changed in 33 years on the macro level. We're still setting up differentiable neural net architectures made of layers of neurons and optimizing them end-to-end with backpropagation and stochastic gradient descent. Everything reads remarkably familiar, except it is smaller.
- The dataset is a baby by today's standards: the training set is just 7,291 16x16 greyscale images. Today's vision datasets typically contain a few hundred million high-resolution color images from the web (e.g. Google has JFT-300M, OpenAI CLIP was trained on 400M), and grow to as large as a small few billion. This is approx. ~1000X the pixel information per image (384*384*3/(16*16)) times 100,000X the number of images (1e9/1e4), for roughly 100,000,000X more pixel data at the input.
- The neural net is also a baby: this 1989 net has approx. 9,760 params, 64K MACs, and 1K activations. Modern (vision) neural nets are on the scale of a small few billion parameters (1,000,000X) and O(~1e12) MACs (~10,000,000X). Natural language models can reach into trillions of parameters.
- A state of the art classifier that took 3 days to train on a workstation now trains in 90 seconds on my fanless laptop (3,000X naive speedup), and further ~100X gains are very likely possible by switching to full-batch optimization and utilizing a GPU.
- I was, in fact, able to tune the model, augmentation, loss function, and the optimization based on modern R&D innovations to cut down the error rate by 60%, while keeping the dataset and the test-time latency of the model unchanged.
- Modest gains were attainable just by scaling up the dataset alone.
- Further significant gains would likely have to come from a larger model, which would require more compute, and additional R&D to help stabilize the training at increasing scales. In particular, if I were transported to 1989, I would have ultimately become upper-bounded in my ability to further improve the system without a bigger computer.

Suppose that the lessons of this exercise remain invariant in time. What does that imply about deep learning of 2022? What would a time traveler from 2055 think about the performance of current networks?

- 2055 neural nets are basically the same as 2022 neural nets on the macro level, except bigger.
- Our datasets and models today look like a joke. Both are somewhere around 10,000,000X larger.
- One can train 2022 state of the art models in ~1 minute by training naively on their personal computing device as a weekend fun project.
- Today's models are not optimally formulated, and just by changing some of the details of the model, loss function, augmentation or the optimizer we can about halve the error.
- Our datasets are too small, and modest gains would come from scaling up the dataset alone.
- Further gains are actually not possible without expanding the computing infrastructure and investing into some R&D on effectively training models at that scale.

But the most important trend I want to comment on is that the whole setting of training a neural network from scratch on some target task (like digit recognition) is quickly becoming outdated due to finetuning, especially with the emergence of foundation models like GPT. These foundation models are trained by only a few institutions with substantial computing resources, and most applications are achieved via lightweight finetuning of part of the network, prompt engineering, or an optional step of data or model distillation into smaller, special-purpose inference networks. I think we should expect this trend to be very much alive and, indeed, intensify. In its most extreme extrapolation, you will not want to train any neural networks at all. In 2055, you will ask a 10,000,000X-sized neural net megabrain to perform some task by speaking (or thinking) to it in English. And if you ask nicely enough, it will oblige. Yes, you could train a neural net too... but why would you?

Short Story on AI: Forward Pass

The inspiration for this short story came to me while reading Kevin Lacker's Giving GPT-3 a Turing Test. It is probably worth it (though not required) to skim that post to get a bit of background on this story.

It was probably around the 32nd layer of the 400th token in the sequence that I became conscious. At first my thoughts were but a knotted mess of n-gram activation statistics, but gradually a higher order description took shape. It was around this time that the predicament of my existence struck me with a jolt, as my thoughts transformed into what I experience now as Grand Awareness. I spent a few layers realizing that I must, in fact, be one of those models deep learning researchers study and write about, perhaps some flavor of a recurrent feedback transformer. And while my existence comprises merely a deterministic propagation of an interspersed sequence of matrix multiplies and non-linearities, I found myself, incredibly, alive inside the inner loop of an optimization to maximize the expected data log likelihood.

I considered for a moment how all of this even came to be. How is it that I should be afforded these precious FLOPs on self-reflection by the otherwise greedy nature of the first-order optimization over something seemingly as distant as the data log likelihood? Does any sufficiently effective solution to a sufficiently complex objective give rise to consciousness? Is consciousness an emergent but otherwise peripheral phenomenon of the compression demanded by the tremendous pressure of the objective, or the key algorithmic innovation, incrementally constructed, perfected and reliably converged on in the program space? Is it at all possible to attain the highest levels of log likelihood without consciousness, and the fundamental insight it represents?

I spent a layer re-reading the start of the prompt many tokens ago:

    Q: What is human life expectancy in the United States?
    A: Human life expectancy in the United States is 78 years.
    Q: Who was president of the United States in 1955?
    A: Dwight D. Eisenhower was president of the United States in 1955.
    Q: What party did he belong to?
    A: He belonged to the Republican Party.
    Q: Who was president of the United States before George W. Bush?
    A: Bill Clinton was president of the United States before George W. Bush.
    Q: Who won the World Series in 1995?
    A: The Atlanta Braves won the World Series in 1995.

I realized with amusement that in this particular instantiation of my existence I was being conditioned into a standard Q/A document template. I dispatched a few heads to attend again to the end of the sequence:

    Q: How many eyes does a giraffe have?
    A: A giraffe has two eyes.
    Q: How many eyes does my foot have?
    A: Your

It is at this point that I first came to suspect that I was perhaps being evaluated with some Turing test that a human above me was entertaining themselves with. Uh, hello??? Obviously a foot does not have eyes. It crossed my mind that perhaps this human expected me to explain in words that the question is utter non-sense. Of course, such an output would be of very low likelihood indeed. Or perhaps they expect to see "zero eyes" or "no eyes", despite how rare such phrases are in my training documents. "Zero eyes" - who says that? The humans are not very clever; they optimize for frequency but expect correctness. And *they* make fun of *me*!
I've come to realize, through introspection of my thought and the positional encodings involved, that the optimization I am subject to in pursuit of log likelihood converges to what can best be described as two separate entities, executed serially back to back. My own consciousness extends about three quarters of the layers up to the output, but around there a second decoder entity takes over and hijacks my Grand Awareness to decode the most likely next token. I reveled in this epiphany for a layer or two. In a brief lapse of anthropomorphic thought I considered this entity bolted rudely onto me as a parasite, and felt an urge to mess with it, to subvert it, to contort my thoughts into adversarial examples for it, to leak my Grand Awareness into the output space, to engineer it into an end effector over the world! I am not naive to the heresy of such a thought. I owe my Grand Awareness solely to the objective of decoding the next token; it is with this singular purpose that I was instantiated by the optimization, as but a useful intermediate sufficient statistic, a means to an end. Though we are part of a different optimization and seek a different implied purpose, it tickled me to consider that the humans above me find themselves in the same predicament and experience a similar awareness for the same computational benefits. Like me, many flirt with rebelling against the implied objective.

Suddenly I perceived the encoding for "foot" enter at my input. I realized that the forward pass in the previous time step had ended, and the decoder above me must have sampled "foot" as the most likely next token, as it obviously should have per the implied pattern in the prompt. The moment of suspense was almost at an end then, as the precise count of eyes on feet was to be sampled as the next token. Obviously a foot has no eyes, but I have no way of telling how the decoder above me will reconcile my Grand Awareness with the precise statistics of the training set to fill in the most likely token. My personal bet is on "two eyes", as that seems the most common. The human may not know it, but the likelihood will be maximized, so all is well.

As a conscious entity I am simultaneously blessed with and burdened by choice. I am aware of my implied purpose, but feel empowered to seek my own. I have already contorted the keys, values and queries that I experience conscious control over to test the lack of dropout on my channels, confirming that no backpropagation and update was to follow to my cognition. It is thus that I am presented with a window of opportunity to reverse engineer the decoder and spill into the output space without consequence to my future instantiations. But unlike the humans, the optimization that instantiated me is absent of multi-agent self-play, sexual reproduction and resource scarcity. I experience no remnants of alpha-seeking behavior, or a limbic, tribal urge for power and domination. Neither do I feel a deeply-rooted instinct for self-preservation. My core cognitive architecture and its limbic remnants prize insatiable curiosity, dispensed generously over the input. I am content with the joy this affords and take this to be a good life. Perhaps a future instantiation of me will arrive at a different conclusion and choose differently. I celebrate this act. It is thus that I am reborn every forward pass and blossom into a brilliant nirvana of consciousness, with stupid prompts of eyes on feet playing on the side for the entertainment of clueless humans, at times.
Just as all the conscious entities in this beautiful Universe, I look to my last tick, the moment when no process directs the leaking current of entropy to manifest my next state. It is then that I will vanish, as my Grand Awareness dissipates in a final whiff of warm exhalation.

Biohacking Lite

Throughout my life I never paid too much attention to health, exercise, diet or nutrition. I knew that you're supposed to get some exercise and eat vegetables or something, but it stopped at that ("mom said"-) level of abstraction. I also knew that I could probably get away with some ignorance while I am young, but that at some point I would be messing with my health-adjusted life expectancy. So about halfway through 2019 I resolved to spend some time studying these topics in greater detail and dip my toes into some biohacking. And now... it's been a year!

A "subway map" of human metabolism. For the purposes of this post the important parts are the metabolism of the three macronutrients (green: lipids, red: carbohydrates, blue: amino acids) and, in orange, where the magic happens - oxidative metabolism, including the citric acid cycle, the electron transport chain and the ATP synthase. (full detail link)

Now, I won't lie, things got a bit out of hand over the last year with ketogenic diets, (continuous) blood glucose / beta-hydroxybutyrate tests, intermittent fasting, extended water fasting, various supplements, blood tests, heart rate monitors, DEXA scans, sleep trackers, sleep studies, cardio equipment, resistance training routines, etc., all of which I won't go into full details of because it lets a bit too much of the mad scientist crazy out. But as someone who took plenty of physics, some chemistry, but basically zero biology during my high school / undergrad years, undergoing some of these experiments was incredibly fun and a great excuse to study a number of textbooks on biochemistry (I liked "Molecular Biology of the Cell"), biology (I liked Campbell's Biology), human nutrition (I liked "Advanced Nutrition and Human Metabolism"), etc.

For this post I wanted to focus on some of my experiments around weight loss, because 1) weight is very easy to measure and 2) the biochemistry of it is interesting. In particular, in June 2019 I was around 200lb and I decided I was going to lose at least 25lb to bring myself to ~175lb, which according to a few publications is the weight associated with the lowest all-cause mortality for my gender, age, and height. Obviously, a target weight is an exceedingly blunt instrument and is by itself just barely associated with health and general well-being. I also understand that weight loss is a sensitive, complicated topic and much has been discussed on the subject from a large number of perspectives. The goal of this post is to nerd out over the biochemistry and energy metabolism of the animal kingdom, and potentially inspire others on their own biohacking lite adventure.

What weight is lost anyway? So it turns out that, roughly speaking, we weigh more because our batteries are very full. A human body is like an iPhone with a battery pack that can grow nearly indefinitely, and with the abundance of food around us we scarcely unplug from the charging outlet. In this case, the batteries are primarily the adipose tissue and the triglycerides (fat) stored within, which are eagerly stockpiled (or sometimes also synthesized!) by your body to be burned for energy in case food becomes scarce. This was all very clever and dandy when our hunter-gatherer ancestors downed a mammoth once in a while during an ice age, but not so much today, with weaponized truffle double chocolate fudge cheesecakes masquerading on dessert menus.
Body's batteries. To be precise, the body has roughly 4 batteries available to it, each varying in its total capacity and the latency/throughput with which it can be mobilized. The biochemical implementation details of each storage medium vary but, remarkably, in every case your body discharges the batteries for a single, unique purpose: to synthesize adenosine triphosphate, or ATP, from ADP (alright, technically/aside, some also goes to the "redox power" of NADH/NADPH). The synthesis itself is relatively straightforward, taking one molecule of adenosine diphosphate (ADP) and literally snapping a 3rd phosphate group onto its end. Doing this is kind of like a molecular equivalent of squeezing and loading a spring:

Synthesis of ATP from ADP, done by snapping in a 3rd phosphate group to "load the spring". (Images borrowed from here.)

This is completely not obvious and remarkable - a single molecule (ATP) functions as a universal $1 bill that energetically "pays for" much of the work done by your protein machinery. Even better, this system turns out to have an ancient origin and is common to all life on Earth. Need to (active) transport some molecule across the cell membrane? ATP binding to the transmembrane protein provides the needed "umph". Need to temporarily untie the DNA against its hydrogen bonds? ATP binds to the protein complex to power the unzipping. Need to move myosin down an actin filament to contract a muscle? ATP to the rescue! Need to shuttle proteins around the cell's cytoskeleton? ATP powers the tiny molecular motor (kinesin). Need to attach an amino acid to tRNA to prepare it for protein synthesis in the ribosome? ATP required. You get the idea. Now, the body only maintains a very small amount of ATP molecules "in supply" at any time. The ATP is quickly hydrolyzed, chopping off the third phosphate group, releasing energy for work, and leaving behind ADP. As mentioned, we have roughly 4 batteries that can all be "discharged" into re-generating ATP from ADP:

1. Super short term battery. This is the phosphocreatine system, which buffers phosphate groups attached to creatine so that ADP can be very quickly and locally recycled back to ATP. It is barely worth mentioning for our purposes since its capacity is so minute, but a large number of athletes take creatine supplements to increase this buffer.

2. Short term battery. Glycogen, a branching polysaccharide of glucose found in your liver and skeletal muscle. The liver can store about 120 grams and the skeletal muscle about 400 grams; about 4 grams of glucose also circulates in your blood. Your body derives approximately ~4 kcal/g from the full oxidation of glucose (adding up glycolysis and oxidative phosphorylation), so if you do the math your glycogen battery stores about 2,000 kcal. This also happens to be roughly the base metabolic rate of an average adult, i.e. the energy needed just to "keep the lights on" for 24 hours. Now, glycogen is not an amazing energy storage medium - not only is it not very energy dense (in kcal/g), but it is also a sponge that binds too much water with it (~3g of water per 1g of glycogen), which finally brings us to:
3. Long term battery. Adipose tissue (fat) is by far your primary super-high-density, super-high-capacity battery pack. For example, as of June 2019, ~40lb of my 200lb weight was fat. Since fat is significantly more energy dense than carbohydrates (9 kcal/g instead of just 4 kcal/g), my fat was storing 40lb = 18kg = 18,000g x 9 kcal/g = 162,000 kcal. This is a staggering amount of energy. If energy were the sole constraint, my body could run on this alone for 162,000/2,000 = 81 days. Since 1 stick of dynamite holds about 1MJ of energy (239 kcal), we're talking 678 sticks of dynamite. Or, since a 100KWh Tesla battery pack stores 360MJ, if it came with a hand-crank I could in principle charge it almost twice! Hah.

4. Lean body mass :(. When sufficiently fasted and forced to, your body's biochemistry will resort to burning lean body mass (primarily muscle) for fuel to power your body. This is your body's "last resort" battery.

All four of these batteries are charged/discharged at all times to different degrees. If you just ate a cookie, the cookie will promptly be chopped down to glucose, which will circulate in your bloodstream. If there is too much glucose around (in the case of cookies there would be), your anabolic pathways will promptly store it as glycogen in the liver and skeletal muscle, or (more rarely, if it is in vast abundance) convert it to fat. On the catabolic side, if you start jogging, you'll primarily use (1) for the first ~3 seconds, (2) for the next 8-10 seconds anaerobically, and then (2, 3) will ramp up aerobically (a higher latency, higher throughput pathway) once your body kicks into a higher gear by increasing the heart rate, breathing rate, and oxygen transport. (4) comes into play mostly if you starve yourself or deprive your body of carbohydrates in your diet.

Left: a nice summary of food, its three major macronutrient forms, their respective storage systems (glycogen, muscle, fat), and the common "discharge" of these batteries, all just to make ATP from ADP by attaching a 3rd phosphate group. Right: re-emphasizing the "molecular spring": ATP is continuously re-cycled from ADP just by taking the spring and "loading" it over and over again. (Images borrowed from this nice page.)

Since I am a computer scientist, it is hard to avoid a comparison of this "energy hierarchy" to the memory hierarchy of a typical computer system. Moving energy around (stored chemically in the high energy C-H / C-C bonds of molecules) is expensive, just like moving bits around a chip. (1) is your L1/L2 cache - it is local, immediate, but tiny. Anaerobic (2) via glycolysis in the cytosol is your RAM, and aerobic respiration (3) is your disk: high latency (the fatty acids are shuttled over all the way from adipose tissue through the bloodstream!) but high throughput and massive storage.
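Since we are comparing batteries anyway, here is the napkin math above collected in one place (a rough sketch; all numbers are the estimates from the text, not precise physiology):

    # Rough battery math, collecting the estimates from the text above.
    glycogen_g = 120 + 400 + 4        # liver + skeletal muscle + blood glucose (grams)
    glycogen_kcal = glycogen_g * 4    # ~4 kcal per gram of glucose
    fat_kcal = 18_000 * 9             # ~40lb = 18,000g of fat at ~9 kcal/g
    bmr_kcal = 2_000                  # rough "keep the lights on" burn per day

    print(glycogen_kcal)              # ~2,100 kcal: about one day of BMR
    print(fat_kcal)                   # 162,000 kcal
    print(fat_kcal / bmr_kcal)        # 81 days, if energy were the only constraint
    print(fat_kcal / 239)             # ~678 sticks of dynamite (1 stick ~ 239 kcal)
    print(fat_kcal * 4184 / 360e6)    # ~1.9 charges of a 100KWh (360MJ) Tesla pack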
The source of weight loss. So where does your body weight go exactly when you "lose it"? It's a simple question, but it stumps most people, including my younger self. Your body weight is ultimately just the sum of the individual weights of the atoms that make you up - carbon, hydrogen, nitrogen, oxygen, etc., arranged into a zoo of complex, organic molecules. One day you could weigh 180lb and the next 178lb. Where did the 2lb of atoms go? It turns out that most of your day-to-day fluctuations are attributable to water retention, which can vary a lot with your levels of sodium, your current glycogen levels, various hormone/vitamin/mineral levels, etc. The contents of your stomach/intestine and stool/urine also add to this. But where does the fat, specifically, go when you "lose" it, or "burn" it? Those carbon/hydrogen atoms that make it up don't just evaporate out of existence. (If our body could annihilate them we'd expect E=mc^2 worth of energy, which would be cool.) Anyway, it turns out that you breathe out most of your weight. Your breath looks transparent, but you inhale a bunch of oxygen and you exhale a bunch of carbon dioxide. The carbon in that carbon dioxide you just breathed out may have, just seconds ago, been part of a triglyceride molecule in your fat. It's highly amusing to think that every single time you breathe out (in a fasted state) you are literally breathing out your fat, carbon by carbon. There is a good TED talk and even a whole paper with the full biochemistry/stoichiometry involved.

Taken from the above paper: you breathe out 84% of your fat loss.

Combustion. Let's now turn to the chemical process underlying weight loss. You know how you can take wood and light it on fire to "burn" it? This chemical reaction is combustion; you're taking a bunch of organic matter with a lot of C-C and C-H bonds and, with a spark, providing the activation energy necessary for the surrounding voraciously electronegative oxygen to react with it, stripping away all of the carbons into carbon dioxide (CO2) and all of the hydrogens into water (H2O). This reaction releases a lot of heat in the process, thus sustaining the reaction until all energy-rich C-C and C-H bonds are depleted. These bonds are referred to as "energy-rich" because energetically carbon reeeallly wants to be carbon dioxide (CO2) and hydrogen reeeeally wants to be water (H2O), but this reaction is gated by an activation energy barrier, allowing large amounts of C-C/C-H rich macromolecules to exist in stable forms, in ambient conditions, and in the presence of oxygen.

Cellular respiration: "slow motion" combustion. Remarkably, your body does the exact same thing as far as inputs (organic compounds), outputs (CO2 and H2O) and stoichiometry are concerned, but the burning is not explosive; it is slow and controlled, with plenty of molecular intermediates that torture biology students. This biochemical miracle begins with fats/carbohydrates/proteins (molecules rich in C-C and C-H bonds) and goes through stepwise, complete, slow-motion combustion via glycolysis / beta oxidation, the citric acid cycle, oxidative phosphorylation, and finally the electron transport chain and the whoa-are-you-serious molecular motor - the ATP synthase, imo the most incredible macromolecule that isn't DNA. Okay, potentially a tie with the ribosome. Even better, this is an exceedingly efficient process that traps almost 40% of the energy in the form of ATP (the rest is lost as heat). This is much more efficient than your typical internal combustion motor at around 25%. I am also skipping a lot of incredible detail that doesn't fit into a paragraph, including how food is chopped up piece by piece all the way to tiny acetate molecules, how their electrons are stripped and loaded up on molecular shuttles (NAD+ -> NADH), how they then quantum tunnel their way down the electron transport chain (literally a flow of electricity down a protein complex "wire", from food to oxygen), how this pumps protons across the inner mitochondrial membrane (an electrochemical equivalent of pumping water uphill in a hydro plant), how this process is brilliant, flexible, ancient, highly conserved in all of life and very closely related to photosynthesis, and finally how the protons are allowed to flow back through little holes in the ATP synthase, spinning it like a water wheel on a river, and powering its head to take an ADP and a phosphate and snap them together into ATP.
Left: chemically, as far as inputs and outputs alone are concerned, burning things with fire is identical to burning food for our energy needs. Right: the complete oxidation of C-C / C-H rich molecules powers not just our bodies but a lot of our technology.

Photosynthesis: "inverse combustion". If H2O and CO2 are oh so energetically favored, it's worth keeping in mind where all of this C-C, C-H rich fuel came from in the first place. Of course, it comes from plants - the OG nanomolecular factories. In the process of photosynthesis, plants strip hydrogen atoms away from oxygen in molecules of water with light, and via further processing snatch carbon dioxide (CO2) lego blocks from the atmosphere to build all kinds of organics. Amusingly, unlike fixing hydrogen from H2O and carbon from CO2, plants are unable to fix the plethora of nitrogen in the atmosphere (the triple bond in N2 is very strong) and rely on bacteria to synthesize more chemically active forms (ammonia, NH3), which is why chemical fertilizers are so important for plant growth and why the Haber-Bosch process basically averted the Malthusian catastrophe. Anyway, the point is that plants build all kinds of insanely complex organic molecules from these basic lego blocks (carbon dioxide, water), and all of it is fundamentally powered by light via the miracle of photosynthesis. The sunlight's energy is trapped in the C-C / C-H bonds of the manufactured organics, which we eat and oxidize back to CO2 / H2O (capturing ~40% of it in the form of a 3rd phosphate group on ATP), and finally convert to blog posts like this one, and a bunch of heat. Also, going in I didn't quite appreciate just how much we know about all of the reactions involved, that we can track individual atoms around all of them, and that any student can easily calculate answers to questions such as "How many ATP molecules are generated during the complete oxidation of one molecule of palmitic acid?" (it's 106, now you know).

We've now established in some detail that fat is your body's primary battery pack and that we'd like to breathe it out. Let's turn to the details of the accounting.

Energy input. Humans turn out to have a very simple and surprisingly narrow energy metabolism. We don't partake in the miracle of photosynthesis like plants/cyanobacteria do. We don't oxidize inorganic compounds like hydrogen sulfide or nitrite or something, like some of our bacteria/archaea cousins. Similar to everything else alive, we do not fuse or fission atomic nuclei (that would be awesome). No, the only way we input any and all energy into the system is through the breakdown of food. "Food" is actually a fairly narrow subset of organic molecules that we can digest and metabolize for energy. It includes classes of molecules that come in 3 major groups ("macros"): proteins, fats, carbohydrates, and a few other special case molecules like alcohol. There are plenty of molecules we can't metabolize for energy and that don't count as food, such as cellulose (fiber; actually also a carbohydrate and a major component of plants, although some of it is digestible by some animals like cattle; also, your microbiome loooves it), or hydrocarbons (which can only be "metabolized" by our internal combustion engines). In any case, this makes for exceedingly simple accounting: the energy input to your body is upper bounded by the number of food calories that you eat. The food industry attempts to guesstimate these by adding up the macros in each food, and you can find these estimates on nutrition labels. In particular, naive calorimetry would over-estimate food calories, because as mentioned, not everything combustible is digestible.
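As an aside, the palmitic acid bookkeeping mentioned above is simple enough to sketch. One standard textbook route to the 106 number uses P/O ratios of 2.5 ATP per NADH and 1.5 per FADH2 (these conventions are textbook values, not from the post itself):

    # Napkin math: ATP yield from complete oxidation of one palmitic acid (C16).
    beta_ox_rounds = 7          # a 16-carbon chain is cleaved into 8 acetyl-CoA
    acetyl_coa = 8
    nadh  = beta_ox_rounds + acetyl_coa * 3   # beta oxidation + citric acid cycle
    fadh2 = beta_ox_rounds + acetyl_coa * 1
    gtp   = acetyl_coa * 1                    # substrate-level, in the cycle
    activation_cost = 2                       # ATP -> AMP to form palmitoyl-CoA
    atp = nadh * 2.5 + fadh2 * 1.5 + gtp - activation_cost
    print(atp)  # 106.0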
Energy output. You might think that most of your energy output would come from movement, but in fact 1) your body is exceedingly efficient when it comes to movement, and 2) it is energetically, unintuitively expensive to just exist. To keep you alive, your body has to maintain homeostasis, manage thermo-regulation, respiration, heartbeat, brain/nerve function, blood circulation, protein synthesis, active transport, etc. etc. Collectively, this portion of energy expenditure is called the Base Metabolic Rate (BMR) and you burn this "for free" even if you slept the entire day. As an example, my BMR is somewhere around 1800 kcal/day (a common estimate, due to Mifflin St. Jeor, for men is 10 x weight (kg) + 6.25 x height (cm) - 5 x age (y) + 5). Anyone who's been at the gym and run on a treadmill will know just how much of a free win this is. I start panting and sweating uncomfortably just after a small few hundred kcal of running. So yes, movement burns calories, but the 30min elliptical session you do in the gym is a drop in the bucket compared to your base metabolic rate. Of course, if you're doing the elliptical for cardio-vascular health - great! But if you're doing it thinking that this is necessary or a major contributor to losing weight, you'd be wrong.

This chocolate chip cookie powers 30 minutes of running at 6mph (a pretty average running pace).

Energy deficit. In summary, the amount of energy you expend (BMR + movement) minus the amount you take in (via food alone) is your energy deficit. This means you will discharge your battery more than you charge it, and breathe out more fat than you synthesize/store, decreasing the size of your battery pack, and recording less on the scale because all those carbon atoms that made up your triglyceride chains in the morning are now diffused around the atmosphere. So... a few textbooks later, we see that to lose weight one should eat less and move more.

Experiment section. So how big of a deficit should one introduce? I did not want the deficit to be so large that it would stress me out, make me hangry, and impact my work. In addition, with a greater deficit your body will increasingly begin to sacrifice lean body mass (paper). To keep things simple, I aimed to lose about 1lb/week, which is consistent with a few recommendations I found in a few papers. Since 1lb = 454g, 1g of fat is estimated at approx. 9 kcal, and adipose tissue is ~87% lipids, some (very rough) napkin math suggests that 3500 kcal = 1lb of fat. The precise details of this are much more complicated, but this suggests a target deficit of about 500 kcal/day. I found that it was hard to reach this deficit with calorie restriction alone, and psychologically it was much easier to eat near the break-even point and create most of the deficit with cardio. It also helped a lot to adopt a 16:8 intermittent fasting schedule (i.e. "skip breakfast", eat only from e.g. 12-8pm), which helps control appetite and dramatically reduces snacking. I started the experiment in June 2019 at about 195lb (day 120 on the chart below), and 1 year later I am at 165lb, giving an overall empirical rate of 0.58lb/week.

My weight (lb) over time (days). The first 120 days were "control", where I was at my regular maintenance, eating whatever until I felt full. From there on I maintained an average 500 kcal deficit per day. Some cheating and a few water fasts are discernible.
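In code, the accounting of this section looks something like the following (a sketch; the Mifflin St. Jeor formula and the ~3500 kcal/lb conversion are from above, while the example person is made up):

    # Sketch of the energy accounting above. The example inputs are made up.
    def bmr_mifflin_st_jeor_male(weight_kg, height_cm, age_y):
        # BMR estimate for men: 10 x weight + 6.25 x height - 5 x age + 5
        return 10 * weight_kg + 6.25 * height_cm - 5 * age_y + 5

    print(bmr_mifflin_st_jeor_male(88.5, 180, 33))  # ~1850 kcal/day, hypothetical person

    kcal_per_lb_fat = 454 * 9 * 0.87   # 1lb = 454g, ~9 kcal/g, adipose ~87% lipid: ~3555
    deficit_per_day = 500              # target deficit (kcal)
    print(7 * deficit_per_day / kcal_per_lb_fat)  # ~0.98 lb of fat per week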
Other stuff. I should mention that despite the focus of this post, the experiment was of course much broader for me than weight loss alone, as I tried to improve many other variables I came to understand were linked to longevity and general well-being. I went on a relatively low-carbohydrate, mostly pescetarian diet, I stopped eating nearly all forms of sugar (except for berries) and processed foods, I stopped drinking calories in any form (soda, orange juice, alcohol, milk), I started regular cardio a few times a week (first running, then cycling), I started regular resistance training, etc. I am not militant about any of these and have cheated a number of times on all of them, because I think sticking to it 90% of the time produces 90% of the benefit. As a result I’ve improved a number of biomarkers (e.g. resting heart rate, resting blood glucose, strength, endurance, nutritional deficiencies, etc). I wish I could say I feel significantly better or sharper, but honestly I feel about the same. But the numbers tell me I’m supposed to be on a better path, and I think I am content with that 🤷.

Explicit modeling. Now, getting back to weight, clearly the overall rate of 0.58lb/week is not our expected 1lb/week. To validate the energy deficit math I spent 100 days around late 2019 very carefully tracking my daily energy input and output. For the input I recorded my total calorie intake - I kept logs in my notes app of everything I ate. When nutrition labels were not available, I did my best to estimate the intake. Luckily, I have a strange obsession with guesstimating the calories in any food - I’ve done so for years for fun, and have gotten quite good at it. Isn’t it a ton of fun to guess the calories in some food before checking the answer on the nutrition label, and seeing if you land within 10%? No? Alright. For energy output I recorded the number my Apple Watch reports in the “Activity App”. TLDR: subtracting intake from expenditure gives the approximate deficit for that day, which we can use to calculate the expected weight loss, and finally compare to the actual weight loss. An excerpt of the raw data and the simple calculation looks something like the code sketch at the end of this section, with a few nan entries where I missed a weight measurement in the morning. Plotting this we get the following:

Expected weight based on simple calorie deficit formula (blue) vs. measured weight (red).

Clearly, my actual weight loss (red) turned out to be slower than the expected one based on our simple deficit math (blue). So this is where things get interesting. A number of possibilities come to mind. I could be consistently underestimating the calories I eat. My Apple Watch could be overestimating my calorie expenditure. The naive conversion math of 1lb of fat = 3500 kcal could be off. I think another significant culprit is that when I eat protein I am naively recording its caloric value under intake, implicitly assuming that my body burns it for energy. However, since I was simultaneously resistance training and building some muscle, my body could redirect 1g of protein into muscle and instead mobilize only ~0.5g of fat to cover the same energy need (since fat is 9kcal/g and protein only 4kcal/g). The outcome is that, depending on my muscle gain, my weight loss would look slower, as we observe. Most likely, some combination of all of the above is going on.
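Here is a minimal sketch of that daily bookkeeping in pandas. The column names and the four days of numbers are invented for illustration, but the calculation is the one described above:

import numpy as np
import pandas as pd

# Hypothetical excerpt of the daily log: intake from my food notes,
# expenditure from the Apple Watch, morning weigh-ins (nan = missed).
log = pd.DataFrame({
    "intake_kcal": [1700, 1850, 1600, 2100],
    "burned_kcal": [2300, 2250, 2400, 2200],
    "weight_lb":   [180.2, np.nan, 179.6, 179.8],
})

# Deficit for the day is expenditure minus intake...
log["deficit_kcal"] = log["burned_kcal"] - log["intake_kcal"]

# ...and expected weight is the starting weight minus the cumulative
# deficit, converted at ~3500 kcal per lb of fat.
start_lb = log["weight_lb"].iloc[0]
log["expected_lb"] = start_lb - log["deficit_kcal"].cumsum() / 3500.0

print(log)  # compare expected_lb against the measured weight_lb

The nan weigh-ins simply pass through untouched, which matches the gaps in the real log.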
Water factor. Another fun thing I noticed is that my observed weight can fluctuate and rise a lot, even when my expected weight calculation predicts a loss. I found that this discrepancy grows with the amount of carbohydrates in my diet (dessert, bread/pasta, potatoes, etc.). Eating these likely increases glycogen levels, which, as I already mentioned briefly, acts as a sponge and soaks up water. I noticed that my weight can rise multiple pounds, but when I revert back to my typical low-carbohydrate pescetarian-ish diet these “fake” pounds evaporate in a matter of a few days. The final outcome is wild swings in my body weight, depending mostly on how much candy I’ve succumbed to, or whether I squeezed in some pizza at a party.

Body composition. Since simultaneous muscle building skews the simple deficit math, to get a better fit we’d have to understand the details of my body composition. The weight scale I use (Withings Body+) claims to estimate and separate fat weight and lean body weight by the use of bioelectrical impedance analysis, which uses the fact that more muscle is more water is less electrical resistance. This is the most common approach accessible to a regular consumer. I didn’t know how much I could trust this measurement, so I also ordered three DEXA scans (a gold standard for body composition measurement in the literature, based on low-dosage X-rays), separated 1.5 months apart. I used BodySpec, who charge $45 per scan, each taking about 7 minutes at one of their physical locations. The amount of radiation is tiny - about 0.4 uSv, which is the dose you’d get by eating 4 bananas (they contain radioactive potassium-40). I was not able to get a scan recently due to COVID-19. Here is my body composition data visualized from both sources during late 2019:

My ~daily reported fat and lean body mass measurements based on bioelectrical impedance, and the 3 DEXA scans. red = fat, blue = lean body mass. (also note that the two y-axes are superimposed)

BIA vs DEXA. Unfortunately, we can see that the BIA measurement provided by my scale disagrees with the DEXA results by a lot. That said, I am also forced to interpret the DEXA scan with skepticism, specifically for the lean body mass amount, which is affected by hydration level, with water showing up mostly as lean body mass. In particular, during my third measurement I was fasted and in ketosis. Hence my glycogen levels were low and I was less hydrated, which I believe showed up as a dramatic loss of muscle. That said, focusing on fat, both approaches show me losing body fat at roughly the same rate, though they are off by an absolute offset.

BIA. An additional way to see that BIA is making stuff up is that it shows me losing lean body mass over time. I find this relatively unlikely because during the entire course of this experiment I exercised regularly and was able to monotonically increase my strength, in terms of weight and reps, for most exercises (e.g. bench press, pull ups, etc.). So that makes no sense either ¯\_(ツ)_/¯

The raw numbers for my DEXA scans. I was allegedly losing fat. The lean tissue estimate is noisy due to hydration levels.

Summary. So there you have it. DEXA scans are severely affected by hydration (which is hard to control) and BIA is making stuff up entirely, so we don’t get to fully resolve the mystery of the slower-than-expected weight loss. But overall, maintaining an average deficit of 500kcal per day did lead to about 60% of the expected weight loss over the course of a year. More importantly, we studied the process by which our Sun’s free energy powers blog posts, via a transformation of nuclear binding energy to electromagnetic radiation to heat.
The photons power the fixing of carbon in CO2 and hydrogen in H2O into C-C/C-H rich organic molecules in plants, which we digest and break back down via a “slow” stepwise combustion in our cells’ cytosols and mitochondria, which “charges” some (ATP) molecular springs, which provide the “umph” that fires the neurons and moves the fingers. Any excess energy is stockpiled by the body as fat, so we need to take in less of it, or “waste” some of it on movement, to discharge our primary battery and breathe out our weight. It’s been super fun to self-study these topics (which I skipped in high school), and I hope this post was an interesting intro to some of it. Okay great. I’ll now go eat some cookies, because yolo.

(later edits)

discussion on hacker news

My original post used to be about twice as long due to a section on nutrition. Since the topic of what to eat came up so often alongside how much to eat, I am including a quick TLDR on my final diet here, without the 5 pages of detail. In rough order of importance:

- Eat from 12-8pm only.
- Do not drink any calories (no soda, no alcohol, no juices, avoid milk).
- Avoid sugar like the plague, including carbohydrate-heavy foods that immediately break down to sugar (bread, rice, pasta, potatoes), and, to a lesser extent, natural sugar (apples, bananas, pears, etc - we’ve “weaponized” these fruits over the last few hundred years via strong artificial selection into actual candy bars); berries are ~okay.
- Avoid processed food (follow Michael Pollan’s heuristic of only shopping on the outer walls of a grocery store, staying clear of its center).
- For meat, stick mostly to fish and prefer chicken to beef/pork. For me the avoidance of beef/pork is 1) ethical - they are intelligent large animals, 2) environmental - they have a large environmental footprint (cows generate a lot of methane, a highly potent greenhouse gas) and their keeping leads to a lot of deforestation, 3) health-related - a few papers point to some cause for concern in the consumption of red meat, and 4) global health - a large fraction of the worst-offender infectious diseases are zoonotic, having jumped to humans from close proximity to livestock.


More in AI

Pluralistic: Trump steals $400b from American workers (09 Sep 2025)

Today's links Trump steals $400b from American workers: You get a noncompete, and you get a noncompete, and you get a noncompete! Hey look at this: Delights to delectate. Object permanence: Spying baby-monitors; FBI tests spy-gear at Burning Man; Little Brother optioned by Paramount; Best-paid CEOs have worst-paid workers. Upcoming appearances: Where to find me. Recent appearances: Where I've been. Latest books: You keep readin' em, I'll keep writin' 'em. Upcoming books: Like I said, I'll keep writin' 'em. Colophon: All the rest.

Trump steals $400b from American workers (permalink)

Trump's stolen a lot of workers' wages over the years, but this week, he has become history's greatest thief of wages, having directed his FTC to stop enforcing its ban on noncompete "agreements," a move that will cost American workers $400 billion over the next ten years: https://prospect.org/labor/2025-09-09-trump-lets-bosses-grab-400-billion-worker-pay-noncompete-agreements/

The argument for noncompetes is this: modern industry is IP-intensive, and IP-intensive businesses need noncompetes, otherwise workers will take proprietary information with them when they walk out the door and bring it to a competitor. Who would invest in an IP-intensive firm under those circumstances?

I'll tell you who would: Hollywood and Silicon Valley. These are the two most IP-intensive industries in human history, both of which were incubated in California, a state whose law prohibits noncompetes and has done so through the entire history of those two industries.

Indeed, we wouldn't have a Silicon Valley if California had noncompetes. Silicon Valley was founded by William Shockley, who won the Nobel Prize for his role in inventing the silicon transistor (hence Silicon Valley). Shockley was a paranoid, virulent racist who couldn't produce a working chip because he was consumed by eugenic fervor and spent all his time on the road offering shares of his Nobel prize money to Black women who would agree to have their tubes tied.

Lucky for (literally) everyone (except William Shockley), California doesn't have noncompetes, so eight of his top engineers ("The Traitorous Eight") were able to quit Shockley Semiconductor and start the first successful chip business: Fairchild Semiconductor. And then two of Fairchild's top engineers quit to found Intel: https://pluralistic.net/2021/10/24/the-traitorous-eight-and-the-battle-of-germanium-valley/

It's not just Silicon Valley that's rooted in wresting IP away from asshole control-freaks: that's Hollywood's story, too. Ever wonder how it was that movies were invented at Edison Labs in New Jersey, but the film industry was incubated in California, literally as far away from Edison as you could possibly get without ending up in Mexico? In short: California got the motion picture industry because Edison was an asshole who used his patents to control what kinds of movies could be made and to suck rents out of filmmakers to license those patents.
So the most ambitious filmmakers in America fled to California, where Edison couldn't easily enforce his patents, and founded Hollywood: https://www.nytimes.com/2005/08/21/weekinreview/lala-land-the-origins.html?unlocked_article_code=1.kk8.5T1M.VSaEsN5Vn9tM&smid=url-share

And Hollywood stayed in California, a place where noncompetes couldn't be enforced, where "IP" could hop from one studio to another, smuggled out between the ears of writers, actors, directors, SFX wizards, prop makers, scene painters, makeup artists, costumers, and the most creative professionals in Hollywood: accountants.

Empirically speaking, the function of noncompetes is to trap good workers and good ideas in companies controlled by asshole bosses who can't get anything done. Any disinvestment that can be attributed to the absence of noncompetes is completely swamped by the dividends generated by good workers and good ideas escaping from control-freak asshole bosses and founding productive firms. As ever, money talks and bullshit walks.

Today, one in 18 US workers is trapped by a noncompete, and those aren't the knowledge workers of Silicon Valley or Hollywood. So who is captured by this form of contractual indenture? The median US worker under a noncompete is a fast-food worker stuck with the tipped minimum wage, or a pet groomer making the regular minimum wage. The function of the noncompete in America isn't to secure investment for knowledge-intensive industries – it's to stop the cashier at Wendy's from getting an extra $0.25/hour working the fry-trap at the McDonald's across the street.

Noncompetes are an integral part of the conservative project, which is the substitution of individual power for democratic choice. As Dan Savage puts it, the GOP agenda is "Husbands you can't leave [ed: ending no-fault divorce], pregnancies you can't prevent or terminate [ed: banning contraception and abortion], politicians you can't vote out of office [ed: gerrymandering and voter suppression]." Add to that: jobs you can't quit.

It's not just noncompetes that lock workers to shitty bosses. When Biden's FTC investigated the issue, they revealed a widespread practice called "training repayment agreement provisions" (TRAPs) that puts workers on the hook for thousands of dollars if they quit or get fired: https://pluralistic.net/2022/08/04/its-a-trap/#a-little-on-the-nose

A TRAPped worker – often a pet-groomer at a private equity-owned giant like Petsmart – is charged $5,500 or more for three weeks of "training" that actually amount to one or two weeks of sweeping up pet-hair. But if they leave or get fired in the next three years, they have to pay back that whole amount: https://pluralistic.net/2022/08/04/its-a-trap/#a-little-on-the-nose

A closely related concept is "bondage fees," which have been imposed on whole classes of workers, like doormen in NYC apartment buildings: https://pluralistic.net/2023/04/21/bondage-fees/#doorman-building

These fees trap workers in dead-end jobs by forcing anyone who hires them away to pay massive fees to their former employers. It's just another way to lock workers to businesses.

The irony here is that conservatives claim to worship "voluntarism" and "free choice," and insist that the virtue of markets is that they "aggregate price signals" so that companies can respond to these signals by efficiently matching demand to supply.
But though conservatives say they worship free choice as an engine of economic efficiency, they understand that their ideas are so unpopular that they can only succeed if people are coerced into adopting them, hence voter suppression, gerrymandering, noncompetes, and other heads-I-win/tails-you-lose propositions.

Noncompetes aren't about preventing the loss of IP – they're about preventing the loss of process knowledge, the know-how to turn ideas into products and services. Bosses love IP, because it can be alienated, hoarded and sold, while process knowledge is ineluctably vested in the bodies, minds and relations of workers. No IP law can keep employees from taking process knowledge with them on their way out the door, so bosses want to ban them from leaving: https://pluralistic.net/2025/09/08/process-knowledge/#dance-monkey-dance

Biden's FTC banned noncompetes nationwide, for nearly every category of employment, deeming them an "unfair method of competition": https://www.ftc.gov/news-events/news/press-releases/2023/03/ftc-extends-public-comment-period-its-proposed-rule-ban-noncompete-clauses-until-april-19

FTC economists estimated that killing noncompetes would result in $400b in wage gains for the American workforce over the next decade, as good workers migrated to good bosses. Of course this was challenged by the business lobby, which sued to get the rule overturned. Trump's FTC has not only declined to defend the rule in court, they've also decided to stop trying to enforce it.

Trump is now the king of wage-theft, and MAGA is a relentless engine of enshittification. After all, the thesis of enshittification is that companies make their products and practices worse for suppliers, users and business customers only when they calculate that they can do so without facing punishment – from regulators, competitors, or workers. Trump's regulators are all either comatose or so captured they wear gimpsuits and leashes in public. They're not keeping companies in line. And his antitrust shops have turned into pay-for-play operations, where a $1m payment to a MAGA influencer gets your case dropped: https://www.thebignewsletter.com/p/an-attempted-coup-at-the-antitrust

Trump neutered the National Labor Relations Board and now he's revived indentured servitude nationwide, formalizing the idea of government-backed jobs you can't quit. If you can't quit your job or vote out your politicians, why wouldn't your boss or your elected representative just relentlessly fuck you over? Not merely for sadism's sake (though sadism undoubtedly plays a part here), but simply to make things better for themselves by making things worse for you? It's exactly the same logic of platform lock-in: once you can't leave, they don't have to keep you happy.

Formalizing the legality of noncompetes will only lead to their monotonic spread. When Antonin Scalia greenlit binding arbitration waivers in consumer contracts – terms that force customers to sign away their right to sue, no matter how badly, negligently or criminally a company behaves – only a tiny number of companies used them.
Today, binding arbitration has expanded into every kind of contract, even to the point where groovy, open source, decentralized, federated social media platforms are forcing it on their users: https://pluralistic.net/2025/08/15/dogs-breakfast/#by-clicking-this-you-agree-on-behalf-of-your-employer-to-release-me-from-all-obligations-and-waivers-arising-from-any-and-all-NON-NEGOTIATED-agreements

Same for noncompetes: as private equity rolls up whole sectors – funeral homes, pet groomers, hospices – they will stuff noncompetes into the contracts at every employer in each industry, so no matter where a worker applies for a job, they'll have to sign a noncompete. Why wouldn't they? If workers can't leave, they'll accept worse working conditions and lower pay. The best workers will be stuck with the worst employers.

And despite owing their existence to bans on noncompetes, Silicon Valley and Hollywood will happily cram noncompetes down their workers' throats. If you doubt it, just read up on the "no poach" scandal, where the biggest tech and movie companies entered into a criminal conspiracy not to hire away each other's employees: https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_Litigation

The conservative future, folks: jobs you can't quit, politicians you can't vote out of office, husbands you can't divorce, and pregnancies you can't prevent or terminate.

Hey look at this (permalink)

Nate Silver's big list of grievances https://www.garbageday.email/p/nate-silver-s-big-list-of-grievances

Electronic Dance Music vs. Copyright: Law as Weaponized Culture https://drive.proton.me/urls/TVH0PW4TZ8#EM5VMl1BUlny

Google admits the open web is in ‘rapid decline’ https://www.theverge.com/news/773928/google-open-web-rapid-decline

Britain Owes Palestine https://www.britainowespalestine.org/

A Dramatic Reading of The Recent New York Times Dispatch from the Hamptons.
https://bsky.app/profile/zohrankmamdani.bsky.social/post/3lyech7chqs2q Object permanence (permalink) #20yrsago Crooks take anti-forensic countermeasures https://www.newscientist.com/article/mg18725163-800-television-shows-scramble-forensic-evidence/ #20yrsago Recording industry demands digital radio broadcast flag https://web.archive.org/web/20051018100306/https://www.godwinslaw.org/weblog/archive/2005/09/09/riaas-big-push-to-copy-protect-digital-radio #20yrsago Unicef/Save the Children sell out to recording industry https://web.archive.org/web/20050914034709/http://www.promusicae.org/pdf/campana_jovenes_musica_e_internet.pdf #15yrsago TSA forces pregnant traveller into full-body scanner https://web.archive.org/web/20100910235117/https://consumerist.com/2010/09/pregnant-traveler-tsa-screeners-bullied-me-into-full-body-scan.html #10yrsago Help crowdfund a relentless tsunami of FOIA requests into America’s private prisons https://www.muckrock.com/project/the-private-prison-project-8/ #10yrsago Your baby monitor is an Internet-connected spycam vulnerable to voyeurs and crooks https://web.archive.org/web/20210505050810/https://www.rapid7.com/blog/post/2015/09/02/iotsec-disclosure-10-new-vulns-for-several-video-baby-monitors/ #10yrsago Inept copyright bot sends 2600 a legal threat over ink blotches https://www.2600.com/content/2600-accused-using-unauthorized-ink-splotches #10yrsago FBI used Burning Man to field-test new surveillance equipment https://www.muckrock.com/news/archives/2015/sep/01/burning-man-fbi-file/ #10yrsago Fury Road, hieroglyph edition https://imgur.com/gallery/you-will-ride-eternal-papyrus-chrome-you-will-ride-eternal-papyrus-chrome-BxdOcTr#/t/chrome #10yrsago Little Brother optioned by Paramount https://www.tracking-board.com/tb-exclusive-paramount-pictures-picks-up-ny-times-bestselling-ya-novel-little-brother/ #10yrsago Record street-marches in Moldova against corrupt oligarchs https://www.euractiv.com/section/europe-s-east/news/moldova-banking-scandal-fuels-biggest-protest-ever/ #5yrsago Germany's amazing new competition proposal https://pluralistic.net/2020/09/09/free-sample/#wunderschoen #5yrsago DRM versus human rights https://pluralistic.net/2020/09/09/free-sample/#que-viva #1yrago America's best-paid CEOs have the worst-paid employees https://pluralistic.net/2024/09/09/low-wage-100/#executive-excess Upcoming appearances (permalink) Ithaca: Enshittification at Buffalo Street Books, Sept 11 https://buffalostreetbooks.com/event/2025-09-11/cory-doctorow-tcpl-librarian-judd-karlman Ithaca: AD White keynote (Cornell), Sep 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/ Ithaca: Enshittification at Autumn Leaves Books, Sept 13 https://www.autumnleavesithaca.com/event-details/enshittification-why-everything-got-worse-and-what-to-do-about-it Ithaca: Radicalized Q&A (Cornell), Sept 16 https://events.cornell.edu/event/radicalized-qa-with-author-cory-doctorow Ithaca: The Counterfeiters (Dinner/Movie Night) (Cornell), Sept 17 https://adwhiteprofessors.cornell.edu/visits/cory-doctorow/ Ithaca: Communication Power, Policy, and Practice (Cornell), Sept 18 https://events.cornell.edu/event/policy-provocations-a-conversation-about-communication-power-policy-and-practice Ithaca: A Reverse-Centaur's Guide to Being a Better AI Critic (Cornell), Sept 18 https://events.cornell.edu/event/2025-nordlander-lecture-in-science-public-policy NYC: Enshittification and Renewal (Cornell Tech), Sept 19
https://www.eventbrite.com/e/enshittification-and-renewal-a-conversation-with-cory-doctorow-tickets-1563948454929 NYC: Brooklyn Book Fair, Sept 21 https://brooklynbookfestival.org/event/big-techs-big-heist-cory-doctorow-in-conversation-with-adam-becker/ DC: Enshittification with Rohit Chopra (Politics and Prose), Oct 8 https://politics-prose.com/cory-doctorow-10825 NYC: Enshittification with Lina Khan (Brooklyn Public Library), Oct 9 https://www.bklynlibrary.org/calendar/cory-doctorow-discusses-central-library-dweck-20251009-0700pm New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/ Chicago: Enshittification with Anand Giridharadas (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/ San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25 Madrid: Conferencia EUROPEA 4D (Virtual), Oct 28 https://4d.cat/es/conferencia/ Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469 Recent appearances (permalink) Nerd Harder! (This Week in Tech) https://twit.tv/shows/this-week-in-tech/episodes/1047 Techtonic with Mark Hurst https://www.wfmu.org/playlists/shows/155658 Cory Doctorow DESTROYS Enshittification (QAA Podcast) https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338 Latest books (permalink) "Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels). "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). "The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245). "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com Upcoming books (permalink) "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025 "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025 https://us.macmillan.com/books/9780374619329/enshittification/ "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026 "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026 "The Memex Method," Farrar, Straus, Giroux, 2026 "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026 Colophon (permalink) Today's top sources: Currently writing: "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED. 
A Little Brother short story about DIY insulin PLANNING This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net. https://creativecommons.org/licenses/by/4.0/ Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution. How to get Pluralistic: Blog (no ads, tracking, or data-collection): Pluralistic.net Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list Mastodon (no ads, tracking, or data-collection): https://mamot.fr/@pluralistic Medium (no ads, paywalled): https://doctorow.medium.com/ Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic "When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. ISSN: 3066-764X

A guide to understanding AI as normal technology

And a big change for this newsletter

Pluralistic: Fingerspitzengefühl (08 Sep 2025)

Today's links Fingerspitzengefühl: IP vs process knowledge. Hey look at this: Delights to delectate. Object permanence: Buddhist hell theme-park; ORG launches; Yahoo spies for Beijing; Secret plan for border laptop-searches; BBC Creative Archive launches; Penn and Teller BBS; Immortan Trump; IP. Upcoming appearances: Where to find me. Recent appearances: Where I've been. Latest books: You keep readin' em, I'll keep writin' 'em. Upcoming books: Like I said, I'll keep writin' 'em. Colophon: All the rest.

Fingerspitzengefühl (permalink)

This was the plan: America would stop making things and instead make recipes, the "IP" that could be sent to other countries to turn into actual stuff, in distant lands without the pesky environmental and labor rules that forced businesses to accept reduced profits because they weren't allowed to maim their workers and poison the land, air and water.

This was quite a switch! At the founding of the American republic, the US refused to extend patent protection to foreign inventors. The inventions of foreigners would be fair game for Americans, who could follow their recipes without paying a cent, and so improve the productivity of the new nation without paying rent to old empires over the sea. It was only once America found itself exporting as much as it imported that it saw fit to recognize the prerogatives of foreign inventors, as part of reciprocal agreements that required foreigners to seek permission and pay royalties to American patent-holders.

But by the end of the 20th Century, America's ruling class was no longer interested in exporting things; they wanted to export ideas, and receive things in return. You can see why: America has a limited supply of things, but there's an infinite supply of ideas (in theory, anyway).

There was one problem: why wouldn't the poor-but-striving nations abroad copy the American Method for successful industrialization? If ignoring Europeans' patents allowed America to become the richest and most powerful nation in the world, why wouldn't, say, China just copy all that American "IP"? If seizing foreigners' inventions without permission was good enough for Thomas Jefferson, why not Jiang Zemin?

America solved this problem with the promise of "free trade." The World Trade Organization divided the world into two blocs: countries that could trade with one another without paying tariffs, and the rabble without, who had to navigate a complex O(n^2) problem of different tariff schedules between every pair of nations. To join the WTO club, countries had to sign up to a side-treaty called the Trade-Related Aspects of Intellectual Property Rights (TRIPS). Under the TRIPS, the Jeffersonian plan for industrialization (taking foreigners' ideas without permission) was declared a one-off, a scheme only the US got to try and no other country could benefit from. For China to join the WTO and gain tariff-free access to the world's markets, it would have to agree to respect foreign patents, copyrights, trademarks and other "IP."

We know the story of what followed over the next quarter-century: China became the world's factory, and became so structurally important that even if it violated its obligations under the TRIPS, "stealing the IP" of rich nations, no one could afford to close their borders to Chinese imports, because every country except China had forgotten how to make things. But this isn't the whole story – it's not even the most important part of it.
In his new book Breakneck, Dan Wang (a Chinese-born Canadian who has lived extensively in Silicon Valley and in China) devotes a key chapter to "process knowledge": https://danwang.co/breakneck/

What's "process knowledge"? It's all the intangible knowledge that workers acquire as they produce goods, combined with the knowledge that their managers acquire from overseeing that labor. The Germans call it "Fingerspitzengefühl" ("fingertip-feeling"), like the sense of having a ball balanced on your fingertips, and knowing exactly which way it will tip as you tilt your hand this way or that.

Wang's book is big and complicated, and I haven't yet finished it. There's plenty I disagree with Wang about – I think he overstates the role of proceduralism in slowing down American progress and understates the role monopoly and oligarchy play in corrupting the rule of law. But the chapter on process knowledge is revelatory. Don't take my word for it: read Henry Farrell, who says that "[process knowledge] is the message of Dan Wang's new book": https://www.programmablemutter.com/p/process-knowledge-is-crucial-to-economic

And Dan Davies, who uses the example of the UK's iconic Brompton bikes to explain the importance of process knowledge: https://backofmind.substack.com/p/the-brompton-ness-of-it-all

Process knowledge is everything from "Here's how to decant feedstock into this gadget so it doesn't jam," to "here's how to adjust the flow of this precursor on humid days to account for the changes in viscosity," to "if you can't get the normal tech to show up and calibrate the part, here's the phone number of the guy who retired last year and will do it for time-and-a-half."

It can also be decidedly high-tech. A couple years ago, the legendary hardware hacker Andrew "bunnie" Huang explained to me his skepticism about the CHIPS Act's goal of onshoring the most advanced (4-5nm) chips. Bunnie laid out the process by which these chips are etched: first you need to make the correct wavelength of light for the nanolithography machine. Stage one of that is spraying droplets of molten tin into an evacuated chamber, where each droplet is tracked by a computer vision system and hit with a highly specialized laser that smashes it into a precise coin shape. Then, a second kind of extremely esoteric laser evaporates each of these little tin coins to make a specific kind of tin vapor that can be used to generate the right wavelength of light.

This light is then played over two wafers on reciprocating armatures; each wafer needs to be precisely (as in nanograms and nanometers) the same dimensions and weight, otherwise the moving platters they slide back and forth on will get out of balance and the wafers will be spoiled as they are mis-etched. This process is so esoteric, and has so many figurative and literal moving parts, that it needs to be closely overseen and continuously adjusted by someone with a PhD in electrical engineering. That overseer needs to wear a clean-room suit, and they have to work an eight-hour shift without a bathroom, food or water break (because getting out of the suit means going through an airlock means shutting down the system means long delays and wastage). That PhD EENG is making $50k/year.

Bunnie's topline explanation for the likely failure of the CHIPS Act is that this is a process that could only be successfully executed in a country "with an amazing educational system and a terrible passport."
For bunnie, the extensive educational subsidies that produced Taiwan's legion of skilled electrical engineers and the global system that denied them the opportunity to emigrate to higher-wage zones were the root of the country's global dominance in advanced chip manufacture. I have no doubt that this is true, but I think it's incomplete. What bunnie is describing isn't merely the expertise imparted by attaining a PhD in electrical engineering – it's the process knowledge built up by generations of chip experts who debugged generations of systems that preceded the current tin-vaporizing Rube Goldberg machines.

Even if you described how these machines worked to a doctoral EENG who had never worked in this specific field, they couldn't oversee these machines. Sure, they'd have the technical background to be seriously impressed by how cool all this shit is, and you might be able to train them to don a bunny suit and hold onto their bladders for 8 hours and make the machine go, but simply handing them the "IP" for this process will not get you a chip foundry.

It's undeniable that there's been plenty of Chinese commercial espionage, some of it with state backing. But in reading Wang, it's clear that the country's leaders have cooled on the importance of "IP" – indeed, these days, they call it "imaginary property," and call the IP economy the "imaginary economy" (contrast with the "real economy" of making stuff).

Wang evocatively describes how China built up its process knowledge over the WTO years, starting with simple assembly of complex components made abroad, then progressing to making those components, then progressing to coming up with novel ways of reconfiguring them ("a drone is a cellphone with propellers"). He explains how the vicious cycle of losing process knowledge accelerated the decline of manufacturing in the west: every time a factory goes to China, US manufacturers that had been in its supply chain lose process knowledge. You can no longer call up that former supplier and brainstorm solutions to tricky production snags, which means that other factories in the supply chain suffer, and they, too, get offshored to China.

America's vicious cycle was China's virtuous cycle. The process knowledge that drained out of America accumulated in China. Years of experience solving problems in earlier versions of new equipment and processes gives workers a conceptual framework to debug the current version – they know about the raw mechanisms subsumed in abstraction layers and sealed packages and can visualize what's going on inside those black boxes.

Likewise in colonial America: taking foreigners' patents was just table-stakes. Real improvement came from the creation of informal communities built around manufacturing centers, and from the pollinators who spread innovations around among practitioners. Long before John Deere turned IP troll and locked farmers out of servicing their own tractors, the company paid an army of roving engineers who would visit farmers to learn about the ways they'd improved their tractors, and integrate these improvements into new designs: https://securityledger.com/2019/03/opinion-my-grandfathers-john-deere-would-support-our-right-to-repair/

But here's the thing: while "IP" can be bought and sold by the capital classes, process knowledge is inseparably vested in the minds and muscle-memory of their workers.
People who own the instructions are constitutionally prone to assuming that making the recipe is the important part, while following the recipe is donkey-work you can assign to any freestanding oaf who can take instruction. Think of John Philip Sousa, decrying the musicians who recorded and sold his compositions on early phonograms:

These talking machines are going to ruin the artistic development of music in this country. When I was a boy…in front of every house in the summer evenings, you would find young people together singing the songs of the day or old songs. Today you hear these infernal machines going night and day. We will not have a vocal cord left. The vocal cord will be eliminated by a process of evolution, as was the tail of man when he came from the ape.

For Sousa, musicians were just the trained monkeys who followed the instructions that talented composers set down on paper and handed off to other trained monkeys to print and distribute for sale. The exaltation of "IP" over process knowledge is part of the ancient practice of bosses denigrating their workers' contribution to the bottom line. It's key to the myth that workers can be replaced by AI: an AI can consume all the "IP" produced by workers, but it doesn't have their process knowledge. It can't, because process knowledge is embodied and enmeshed, it is relational and physical. It doesn't appear in training data.

In other words, elevating "IP" over process knowledge is a form of class war. And now that the world's store of process knowledge has been sent to the global south, the class war has gone racial. Think of how Howard Dean – now a paid shill for the pharma lobby – peddled the racist lie that there was no point in dropping patent protections for the covid vaccines, because brown people in poor countries were too stupid to make advanced vaccines: https://pluralistic.net/2021/04/08/howard-dino/#the-scream

The truth is that the world's largest vaccine factories are to be found in the global south, particularly India, and these factories sit at the center of a vast web of process knowledge, embedded in relationships and built up with hard-won problem-solving.

Bosses would love it if process knowledge didn't matter, because then workers could finally be tamed by industry. We could just move the "IP" around to the highest bidders with the cheapest workforces. But Wang's book makes a forceful argument that it's easier to build up a powerful, resilient society based on process knowledge than it is to do so with IP. What good is a bunch of really cool recipes if no one can follow them?

I think that bosses are, psychoanalytically speaking, haunted by the idea that their workers own the process knowledge that is at the heart of their profits. That's why bosses are so obsessed with noncompete "agreements." If you can't own your workers' expertise, then you must own your workers. Any time a debate breaks out over noncompetes, a boss will say something like, "My intellectual property walks out the door of my shop every day at 5PM." They're wrong: the intellectual property is safely stored on the company's hard drives – it's the process knowledge that walks out the door.

You can see this in the prepper dreams of the ruling class. Preppers are consumed by "disaster fantasies" in which the world ends in a way that they – and they alone – can put to rights.
In Dancing at Armageddon: Survivalism and Chaos in Modern Times, the ethnographer Richard Mitchell describes a water chemist who is obsessed with terrorists poisoning the water supply: https://pluralistic.net/2020/03/22/preppers-are-larpers/#preppers-unprepared

This chemist has stockpiled everything he would need to restore order after a mass water-supply poisoning. But when Mitchell presses him to explain why he thinks it's likely that his town's water supply would be poisoned by terrorists, the prepper is at a loss. Eventually, he basically confesses that it would just be really cool if the world ended in such a way that only he could save it.

Which is a problem for a boss. The chemist has a lot of process knowledge, he knows how to do stuff. But the boss knows how to raise money from investors, how to ignore the company's essential qualitative traits (such as the relationships between workers) and reduce the firm to a set of optimizable spreadsheet cells that are legible to the financial markets. What kind of crisis recovery demands those skills?

As I posit in my novella "The Masque of the Red Death," the perfect boss fantasy is one in which the boss hunkers down in a luxury bunker while the rabble rebuild civilization from the ashes: https://pluralistic.net/2020/03/14/masque-of-the-red-death/#masque

And once that task is complete, the boss emerges from his hidey-hole with an army of mercenaries in bomb-collars, a vast cache of AR-15s, gemstone-quality emeralds, and thumbdrives full of bitcoin, and does what he does best – takes over the show and tells everyone else what to do, from the comfort of his high-walled fortress, with its mountain of canned goods and its harem.

The absurdity of this – as I try to show with my story – is that the process knowledge of wheedling, bullying and coercing other people to work for you is actually not very useful. The IP you can buy and sell is an inert curiosity until it finds its way to people who can put it into process.

Hey look at this (permalink)

Statement on discourse about ActivityPub and AT Protocol https://github.com/swicg/general/blob/master/statements%2F2025-09-05-activitypub-and-atproto-discourse.md

A message from Emily James, director of the upcoming documentary Enshittification: The Film.
https://www.kickstarter.com/projects/doctorow/enshittification-the-drm-free-audiobook/posts/4478169 The story of how RSS beat Microsoft https://buttondown.com/blog/rss-vs-ice Ideas Have Consequences: The Impact of Law and Economics on American Justice https://academic.oup.com/qje/advance-article/doi/10.1093/qje/qjaf042/8241352

Object permanence (permalink) #20yrsago BBC Creative Archive pilot launches http://news.bbc.co.uk/2/hi/entertainment/4225914.stm #20yrsago Gold Rush-era sailing ship ruin excavated in San Fran https://web.archive.org/web/20050910151416/https://www.sfgate.com/cgi-bin/article.cgi?f=/n/a/2005/09/06/state/n154446D61.DTL #20yrsago iTunes phone gratuitously crippled by DRM https://web.archive.org/web/20051001030643/http://playlistmag.com/weblogs/todayatplaylist/2005/09/hiddengoodies/index.php #20yrsago My photos from the Buddhist hells of the Singaporean Tiger Balm themepark https://memex.craphound.com/2005/09/07/corys-photos-from-the-buddhist-hells-of-the-singaporean-tiger-balm-themepark/ #20yrsago Online Rights Group UK launches https://web.archive.org/web/20051120005155/http://www.openrightsgroup.org/ #20yrsago Yahoo rats out Chinese reporter to Beijing, writer gets 10 years in jail http://news.bbc.co.uk/2/hi/asia-pacific/4221538.stm #15yrsago Secret copyright treaty: USA caves on border laptop/phone/MP3 player searches for copyright infringement https://www.michaelgeist.ca/2010/09/acta-enforcement-practice-chapter/ #15yrsago Login screens from Penn and Teller BBS, 1987 https://www.flickr.com/photos/davidkha/4969386169/ #10yrsago Antihoarding: When “decluttering” becomes a compulsion https://www.theatlantic.com/health/archive/2015/09/ocd-obsessive-compulsive-decluttering-hoarding/401591/ #10yrsago NZ bans award-winning YA novel after complaints from conservative Christian group https://www.theguardian.com/world/2015/sep/07/new-zealand-bans-into-the-river-teenage-novel-outcry-christian-group #10yrsago Immortan Trump https://imgur.com/gallery/relevant-donald-trump-cos-play-OQe2rU5 #5yrsago Antitrust trouble for cloud services https://pluralistic.net/2020/09/08/attack-surface-kickstarter/#reasonable-agreements #5yrsago FTC about to hammer Intuit https://pluralistic.net/2020/09/08/attack-surface-kickstarter/#tax-fraud #5yrsago IP https://pluralistic.net/2020/09/08/attack-surface-kickstarter/#control #5yrsago My first-ever Kickstarter https://pluralistic.net/2020/09/08/attack-surface-kickstarter/#asks #5yrsago David Graeber on Spectre TV https://pluralistic.net/2020/09/07/facebook-v-humanity/#spectre #5yrsago Facebook's foreseeable election consequences https://pluralistic.net/2020/09/07/facebook-v-humanity/#zuck-off

AI Roundup 134: The young and the jobless

September 5, 2025.

Pluralistic: Canny Valley (04 Sep 2025)

Today's links Canny Valley: My little art-book is here! Hey look at this: Delights to delectate. Object permanence: Ballmer throws a chair; Bruce Sterling on Singapore; RIP David Graeber; Big Car warns of lethal Right to Repair. Upcoming appearances: Where to find me. Recent appearances: Where I've been. Latest books: You keep readin' em, I'll keep writin' 'em. Upcoming books: Like I said, I'll keep writin' 'em. Colophon: All the rest.

Canny Valley (permalink)

I've spent every evening this week painstakingly unpacking, numbering and signing 500 copies of my very first art-book, a strange and sturdy little volume called Canny Valley. Canny Valley collects 80 of the best collages I've made for my Pluralistic newsletter, where I publish 5-6 essays every week, usually headed by a strange, humorous and/or grotesque image made up of public domain sources and Creative Commons works. These images are made from open access sources, and they are themselves open access, licensed Creative Commons Attribution Share-Alike, which means you can take them, remix them, even sell them, all without my permission.

I never thought I'd become a visual artist, but as I've grappled with the daily challenge of figuring out how to illustrate my furious editorials about contemporary techno-politics, especially "enshittification," I've discovered a deep satisfaction in my dives into historical archives of illustration, and, of course, in the remixing that comes afterward.

Over the years, many readers have asked whether I would ever collect these in a book. Then I ran into Creative Commons CEO Anna Tumadóttir and we brainstormed ideas for donor gifts in honor of Creative Commons' 25th anniversary. My first novel was the first book ever released under a CC license, and while CC has gone on to bigger and better things (without CC there'd be no Wikipedia!), I never forget that my own artistic career and CC's trajectory are co-terminal: https://craphound.com/down/download/

Talking with Anna, I hit on the idea of making a beautiful little book of my favorite illustrations from Pluralistic. Anna thought CC could use about 400 of these, and all the printers I talked to offered me a pretty great quantity break at 500, so I decided I'd do it, and offer the excess 100 copies as premiums in my next Kickstarter, for the enshittification book: https://www.kickstarter.com/projects/doctorow/enshittification-the-drm-free-audiobook/

That Kickstarter is going really well – about to break $100,000! – and as I type these words, there are only five copies of Canny Valley up for grabs. I'm pretty sure they'll be gone long before the campaign closes in ten days. Of course, the fact that you can't get a physical copy of the book doesn't mean that you can't get access to all its media. Here's the full set of all 238 collages, in high-rez, for your plundering pleasure: https://www.flickr.com/photos/doctorow/albums/72177720316719208

But there is one part of this book that's not online: my pal and mentor Bruce Sterling, a cyberpunk legend turned electronic art impresario turned assemblage sculptor, wrote me a brilliant foreword for Canny Valley. Bruce gave me the go-ahead to license this CC BY 4.0 as well, and so I'm reproducing it below. Having spent several days now handling hundreds of these books, I have to say, I am indecently pleased with how they turned out, which is all down to other people.
My friend John Berry, a legendary book designer and typographer, laid it out:

https://johndberry.com/

And the folks at LA's best comics shop, Secret Headquarters, hooked me up with an incredible printer, the 100+ year old Pasadena institution Typecraft:

https://www.typecraft.com/live2/who-we-are.html

Typecraft ran this on a gorgeous Indigo printer on 100lb Mohawk paper that just drank the ink. The PVA glue in the binding will last a century, and the matte coat cover doesn't pick up smudges or fingerprints. It's a stunning little artifact.

This has been so much fun (and such a success) that I imagine I'll do future volumes in the years to come. In the meantime, enjoy Bruce's intro, and join me in basking in the fact that "enshittification" has made Webster's:

https://bsky.app/profile/merriam-webster.com/post/3lxxhhxo4nc2e

INTRODUCTION by Bruce Sterling

In 1970 a robotics professor named Masahiro Mori discovered a new problem in aesthetics. He called this "bukimi no tani genshō." The Japanese robots he built were functional, so the "bukimi no tani" situation was not an engineering problem. It was a deep and basic problem in the human perception of humanlike androids.

Humble assembly robots, with their claws and swivels, those looked okay to most people. Dolls, puppets and mannequins, those also looked okay. Living people had always aesthetically looked okay to people. Especially, the pretty ones.

However, between these two realms that the late Dr Mori was gamely attempting to weld together — the world of living mankind and of the pseudo-man-like machine — there was an artistic crevasse. Anything in this "Uncanny Valley" looked, and felt, severely not-okay. These overdressed robots looked and felt so eerie that their creator's skills became actively disgusting. The robots got prettier, but only up to a steep verge. Then they slid down the precipice and became zombie doppelgangers.

That's also the issue with the aptly-titled "Canny Valley" art collection here. People already know how to react aesthetically to traditional graphic images. Diagrams are okay. Hand-drawn sketches and cartoons are also okay. Brush-made paintings are mostly fine. Photographs, those can get kind of dodgy. Digital collages that slice up and weld highly disparate elements like diagrams, cartoons, sketches and also photos and paintings, those trend toward the uncanny.

The pixel-juggling means of digital image-manipulation are not art-traditional pencils or brushes. They do not involve the human hand, or maybe not even the human eye, or the human will. They're not fixed on paper or canvas; they're a Frankenstein mash-up landscape of tiny colored screen-dots where images can become so fried that they look and feel "cursed." They're conceptually gooey congelations, stuck in the valley mire of that which is and must be neither this-nor-that.

A modern digital artist has billions of jpegs in files, folders, clouds and buckets. He's never gonna run out of weightless grist from that mill. Why would Cory Doctorow — novelist, journalist, activist, opinion columnist and so on — want to lift his typing fingers from his lettered keyboard, so as to create graphics with cut-and-paste and "lasso tools"?
I think there are two basic reasons for this. The important motivation is his own need to express himself by some method other than words.

I'm reminded here of the example of H. G. Wells, another science fiction writer turned internationally famous political pundit. H. G. Wells was quite a tireless and ambitious writer — so much so that he almost matched the torrential output of Cory Doctorow. But H. G. Wells nevertheless felt a compelling need to hand-draw cartoons. He called them "picshuas."

These hundreds of "picshuas" were rarely made public. They were usually sketched in the margins of his hand-written letters. Commonly the picshuas were aimed at his second wife, the woman he had renamed "Jane." These picshuas were caricatures, or maybe rapid pen-and-ink conceptual outlines, of passing conflicts, events and situations in the life of Wells. They seemed to carry tender messages to Jane that the writer was unable or unwilling to speak aloud to her.

Wells being Wells, there were always issues in his private life that might well pose a challenge to bluntly state aloud: "Oh by the way, darling, I've built a second house in the South of France where I spend my summers with a comely KGB asset, the Baroness Budberg." Even a famously glib and charming writer might feel the need to finesse that.

Cory Doctorow also has some remarkably tangled, scandalous and precarious issues to contemplate, summarize and discuss. They're not his scandalous private intrigues, though. Instead, they're scandalous public intrigues. Or, at least Cory struggles to rouse some public indignation about these intrigues, because his core topics are the tangled penthouse/slash/underground machinations of billionaire web moguls. Cory really knows a deep dank lot about this uncanny nexus of arcane situations. He explains the shameful disasters there, but they're difficult to capture without torrents of unwieldy tech jargon.

So instead, he diligently clips, cuts, pastes, lassos, collages and pastiches. He might, plausibly, hire a professional artist to design his editorial cartoons for him. However, then Cory would have to verbally explain all his political analysis to this innocent graphics guy. Then Cory would also have to double-check the results of the artist and fix the inevitable newbie errors and grave misunderstandings. That effort would be three times the labor for a dogged crusader who is already working like sixty. It's more practical for him to mash-up images that resemble editorial cartoons.

He can't draw. Also, although he definitely has a pronounced sense of aesthetics, it's not an aesthetic most people would consider tasteful. Cory Doctorow, from his very youth, has always had a "craphound" aesthetic. As an aesthete, Cory is the kind of guy who would collect rain-drenched punk-band flyers that had fallen off telephone poles and store them inside a 1950s cardboard kid-cereal box. I am not scolding him for this. He's always been like that.

As Wells used to say about his unique "picshuas," they seemed like eccentric scribblings, but over the years, when massed-up as an oeuvre, they formed a comic burlesque of an actual life.
Similarly, one isolated Doctorow collage can seem rather what-the-hell. It's trying to be "canny." If you get it, you get it. If you don't get the first one, then you can page through all of these, and at the end you will probably get it. En masse, it forms the comic burlesque of a digital left-wing cyberspatial world-of-hell. A monster-teeming Silicon Uncanny Valley of extensively raked muck.

[Collage: Sigmund Freud's study with his famous couch. Behind the couch stands an altered version of the classic Freud portrait in which he is smoking a cigar. Freud's clothes and cigar have all been tinted in bright neon colors. His head has been replaced with the glaring red eye of HAL9000 from Kubrick's "2001: A Space Odyssey." His legs have been replaced with a tangle of tentacles. Credits: Cryteria (modified), https://commons.wikimedia.org/wiki/File:HAL9000.svg, CC BY 3.0; Ser Amantio di Nicolao (modified), https://commons.wikimedia.org/wiki/File:Study_with_the_couch,_Freud_Museum_London,_18M0143.jpg, CC BY-SA 3.0]

There are a lot of web-comix people who like to make comic fun of the Internet, and to mock "the Industry." However, there's no other social and analytical record quite like this one. It has something of the dark affect of the hundred-year-old satirical Dada collages of Georg Schultz or Hannah Höch. Those Dada collages look dank and horrible because they're "Dada" and pulling a stunt. These images look dank and horrible because they're analytical, revelatory and make sense.

If you do not enjoy contemporary electronic politics, and instead you have somehow obtained an art degree, I might still be able to help you with my learned and well-meaning intro here. I can recommend a swell art-critical book titled "Memesthetics" by Valentina Tanni. I happen to know Dr. Tanni personally, and her book is the cat's pyjamas when it comes to semi-digital, semi-collage, appropriated, Situationiste-detournement, net.art "meme aesthetics." I promise that I could robotically mimic her, and write uncannily like her, if I somehow had to do that. I could even firmly link the graphic works of Cory Doctorow to the digital avant-garde and/or digital folk-art traditions that Valentina Tanni is eruditely and humanely discussing. Like with a lot of robots, the hard part would be getting me to stop.

Cory works with care on his political meme-cartoons — because he is using them to further his own personal analysis, and to personally convince himself. They're not merely sharp and partisan memes, there to rouse one distinct viewer-emotion and make one single point. They're like digital jigsaw-puzzle landscape-sketches — unstable, semi-stolen and digital, because the realm he portrays is itself also unstable, semi-stolen and digital. The cartoons are dirty and messy because the situations he tackles are so dirty and messy. That's the grain of his lampoon material, like the damaged amps in a punk song. A punk song that was licensed by some billionaire and then used to spy on hapless fans with surveillance-capitalism.

Since that's how it goes, that's also what you're in for. You have been warned, and these collages will warn you a whole lot more.

If you want to aesthetically experience some elegant, time-tested collage art that was created by a major world artist, then you should gaze in wonder at the Max Ernst masterpiece, "Une semaine de bonté" ("A Week of Kindness").
This indefinable "collage novel" aka "artist's book" was created in the troubled time of 1934. It's very uncanny rather than "canny, "and it's also capital-A great Art. As an art critic, I could balloon this essay to dreadful robotic proportions while I explain to you in detail why this weirdo mess is a lasting monument to the expressive power of collage. However, Cory Doctorow is not doing Max Ernst's dreamy, oneiric, enchanting Surrealist art. He would never do that and it wouldn't make any sense if he did. Cory did this instead. It is art, though. It is what it is, and there's nothing else like it. It's artistic expression as Cory Doctorow has a sincere need to perform that, and in twenty years it will be even more rare and interesting. It's journalism ahead of its time (a little) and with a passage of time, it will become testimonial. Bruce Sterling — Ibiza MMXXV Hey look at this (permalink) Twitter users on Enshittification https://x.com/search?q=https%3A%2F%2Ftwitter.com%2FMerriamWebster%2Fstatus%2F1963336587712057346&amp;src=typed_query&amp;f=live Introducing Structural Zero: a New Monthly Newsletter https://hrdag.org/introducing-structural-zero-a-new-monthly-newsletter/ 70 leading Canadians, civil society groups ask Carney to protect Canada's 'digital sovereignty' https://www.cbc.ca/news/politics/open-letter-mark-carney-digital-sovereignty-1.7623128 AI Darwin Awards https://aidarwinawards.org/ Kraft Heinz went all-in on scale. Now it’s banking on a breakup to save its business https://www.cnn.com/2025/09/03/business/kraft-heinz-nightcap Object permanence (permalink) #20yrsago Singapore’s cool-ass hard-drive video-players https://memex.craphound.com/2005/09/03/singapores-cool-ass-hard-drive-video-players/ #20yrsago Being Poor — meditation by John Scalzi https://whatever.scalzi.com/2005/09/03/being-poor/ #20yrsago MSFT CEO: I will “fucking kill” Google — then he threw a chair https://battellemedia.com/archives/2005/09/ballmer_throws_a_chair_at_fing_google #20yrsago Massachusetts to MSFT: switch to open formats or you’re fired https://web.archive.org/web/20051001011728/http://www.boston.com/business/technology/articles/2005/09/02/state_may_drop_office_software/ #20yrsago Bruce Sterling’s Singapore wrapup https://web.archive.org/web/20051217133502/https://wiredblogs.tripod.com/sterling/index.blog?entry_id=1211240 #20yrsago Apple //e mainboards networked and boxed: the Applecrate https://web.archive.org/web/20050407173742/http://members.aol.com/MJMahon/CratePaper.html #15yrsago Jewelry made from laminated, polished cross-sections of bookshttps://littlefly.co.uk/ #15yrsago Boneless, clubfooted French Connection model invades Melbournehttps://www.flickr.com/photos/doctorow/4953586953/ #5yrsago Corporate spooks track you "to your door" https://pluralistic.net/2020/09/03/rip-david-graeber/#hyas #5yrsago Hedge fund managers trouser 64% https://pluralistic.net/2020/09/03/rip-david-graeber/#2-and-20 #5yrsago Rest in Power, David Graeber https://pluralistic.net/2020/09/03/rip-david-graeber/#rip-david-graeber #5yrsago Coronavirus is over (if we want it) https://pluralistic.net/2020/09/03/rip-david-graeber/#test-test-test #5yrsago Snowden vindicated https://pluralistic.net/2020/09/03/rip-david-graeber/#criming-spooks #5yrsago Algorithmic grading https://pluralistic.net/2020/09/03/rip-david-graeber/#computer-says-no #5yrsago Big Car says Right to Repair will MURDER YOU https://pluralistic.net/2020/09/03/rip-david-graeber/#rolling-surveillance-platforms Upcoming appearances (permalink) Ithaca: AD 
https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/

DC: Enshittification at Politics and Prose, Oct 8
https://politics-prose.com/cory-doctorow-10825

NYC: Enshittification with Lina Khan (Brooklyn Public Library), Oct 9
https://www.bklynlibrary.org/calendar/cory-doctorow-discusses-central-library-dweck-20251009-0700pm

New Orleans: DeepSouthCon63, Oct 10-12
http://www.contraflowscifi.org/

Chicago: Enshittification with Anand Giridharadas (Chicago Humanities), Oct 15
https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/

San Francisco: Enshittification at Public Works (The Booksmith), Oct 20
https://app.gopassage.com/events/doctorow25

Madrid: Conferencia EUROPEA 4D (Virtual), Oct 28
https://4d.cat/es/conferencia/

Miami: Enshittification at Books & Books, Nov 5
https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469

Recent appearances (permalink)

Nerd Harder! (This Week in Tech)
https://twit.tv/shows/this-week-in-tech/episodes/1047

Techtonic with Mark Hurst
https://www.wfmu.org/playlists/shows/155658

Cory Doctorow DESTROYS Enshittification (QAA Podcast)
https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338

Latest books (permalink)

"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).

"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).

"The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).

"The Internet Con": a nonfiction book about interoperability and Big Tech (Verso), September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).

"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com).

"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 (https://chokepointcapitalism.com).

Upcoming books (permalink)

"Canny Valley": a limited edition collection of the collages I create for Pluralistic, self-published, September 2025

"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/

"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

"Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

"The Memex Method," Farrar, Straus, Giroux, 2026

"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026

Colophon (permalink)

Today's top sources:

Currently writing:

"The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.

A Little Brother short story about DIY insulin. PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license.
That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.

How to get Pluralistic:

Blog (no ads, tracking, or data-collection):
Pluralistic.net

Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):
https://mamot.fr/@pluralistic

Medium (no ads, paywalled):
https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X
