In this four-part article, I’ll go over some of the lessons I learned living and doing business in China’s tech industry. During my time in China, I’ve led a team of 10+ engineers to develop a location-based IoT and sensing platform, co-founded an open-source project called Towhee, and developed countless relationships with folks in a number of different cities (many of whom I now consider good friends). I’ll go over some of the common misconceptions about China, ranging from living and working in China to the government’s pandemic response. I originally intended for part II of this blog post to cover the tech industry in more detail (996, CSDN, open-source, etc…), but given the current spike in COVID cases these past two weeks plus the current lockdown in Shanghai, I felt it was more appropriate to first cover pandemic life in China. As always, if you have any questions, comments, or concerns, feel free to connect with me on Twitter or LinkedIn. Thanks for reading! Before reading this...
over a year ago


More from Frank’s Ramblings

Vision Transformers are Overrated

Vision transformers (ViTs) have seen an incredible rise in the past four years. They have an obvious upside: in a visual recognition setting, the receptive field of a pure ViT is effectively the entire image 1. The tradeoff is that vanilla ViTs maintain the quadratic time complexity (w.r.t. the number of input patches) of language models with dense attention. Kernels in convolutional networks, on the other hand, have the property of being invariant to the input pixel/voxel that they are applied to, a feature that is typically referred to as translation equivariance. This is desirable because it allows the model to effectively recognize patterns and objects regardless of where they are located spatially. The weight sharing present in convolutional layers also makes convnets highly parameter-efficient and less prone to overfitting - a property ViTs do not have.

As such, you might expect that ViTs and convnets are used equally in production environments that leverage visual models - ViTs for “global” tasks such as scene recognition and convnets for more “local” tasks such as object recognition. Even so, we’ve been inundated with work that utilizes ViTs, with bold high-level claims (mostly by media outlets) that convnets are a thing of the past. Curious to see if I could lend a hand in helping debunk this claim, I set out to figure out whether or not a mostly vanilla ResNet could match or even exceed the performance of both ViT and ConvNeXt. The comparison to ConvNeXt is of particular interest, since it is a fully convolutional network that attempts to bridge the gap between transformers and convnets. With a bit of experimentation on Imagenet-1k, we can reach 82.0% accuracy with a 176x176 training image size with no extra data, matching ConvNeXt-T (v1, without pre-training a-la MAE) and surpassing ViT-S (specifically, the ViT flavor from DeiT-III).

Training methodology

We start by adopting the training methodology set in PyTorch’s late 2021 blog, where they achieved an impressive 80.8% accuracy on Imagenet-1k with a stock ResNet50 model. Here are a couple of key points to note:
- We stick with SGD as the optimizer, rather than going for RMSProp or Adam (or any of their variants).
- The scheduler uses cosine decay with five warmup epochs and 600 total epochs. This may seem like an unnecessarily large number of epochs, but we’ll get around to reducing this later.
- We utilize a whole slew of augmentations and regularizers found in modern literature, including, but not limited to: label smoothing, mixup, cutmix, and model EMA.
- To prevent overfitting on the validation dataset, we’ll skip hyperparameter tuning and grid search and stick with the stock training methodology listed out in the blog post.

Nearly all of these training optimizations have already been used to boost the performance of modern visual recognition models, but adopting these changes doesn’t quite get us to the magical 82% accuracy we’re looking for.
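For concreteness, here is a rough sketch of what a recipe along those lines looks like in PyTorch. The hyperparameter values are illustrative placeholders rather than the exact settings from the PyTorch blog, and the mixup/cutmix transforms are assumed to live in the data pipeline:

import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50()

# SGD with momentum, rather than RMSProp or Adam (values here are illustrative)
optimizer = torch.optim.SGD(model.parameters(), lr=0.5,
                            momentum=0.9, weight_decay=2e-5)

# cosine decay with a linear warmup over the first 5 of 600 epochs;
# call scheduler.step() once per epoch during training
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.01, total_iters=5)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=595)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[5])

# label smoothing; mixup and cutmix would be applied inside the data loader
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

# model EMA via an exponentially weighted AveragedModel
ema_model = torch.optim.swa_utils.AveragedModel(
    model, avg_fn=lambda avg, new, num: 0.999 * avg + 0.001 * new)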
Architectural modifications

The baseline ResNet architecture is strong but not optimal, so we adopt a few architectural modifications to enable better performance.

ResNet-d

First order of business is to embrace some “modernizations” to ResNet. For completeness, here are the changes listed out:
- The initial 7x7 convolution is changed to a sequence of three 3x3 convolutions with 32, 64, and 128 output channels, respectively. The stride remains on the first convolutional layer. With this change, we now use exclusively 3x3 and 1x1 convolutions across the entire network, all while retaining the original size of the receptive field for the network head.
- Strides in downsampling residual blocks are moved from the first 1x1 convolutional layer to the subsequent 3x3 convolutional layer. This has the effect of capturing all input pixels in a downsampling block, since a strided 1x1 convolution effectively skips every other pixel.
- The max pooling in the stem is removed. The first 3x3 convolution of the first residual block now has a stride of two, matching the remaining residual blocks. While max pooling is theoretically useful for retaining edges, corners, and other low-level features, I haven’t found it to be particularly useful in practice.
- The strided 1x1 convolution in the shortcut connections of downsampling blocks is replaced with 2x2 average pooling followed by a standard 1x1 convolutional layer. Again, this has the effect of capturing all input activations rather than just one out of every four.

The resulting micro-optimizations result in an architecture that is extremely close to ResNet-d, with some very minor differences.
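To make the stem and shortcut changes concrete, here is a minimal PyTorch sketch of what they might look like - my own illustrative reconstruction of the modifications described above, not the actual GResNet code (the activation is left as ReLU here; the swap to SiLU is discussed next):

import torch.nn as nn

def stem(out_ch=128):
    # replace the single 7x7 stem conv with three 3x3 convs (32, 64, out_ch channels);
    # stride 2 stays on the first conv, and the stem max pool is dropped entirely
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(32), nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        nn.Conv2d(64, out_ch, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

def downsample_shortcut(in_ch, out_ch):
    # shortcut for downsampling blocks: 2x2 average pooling followed by a 1x1 conv,
    # so that every input activation contributes instead of one out of every four
    return nn.Sequential(
        nn.AvgPool2d(2, stride=2),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch),
    )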
ReLU -> SiLU

ReLU has two weaknesses compared to other activation functions: 1) it is not smooth (ReLU is, strictly speaking, non-differentiable at 0), and 2) the “dying ReLU” problem, where pre-activation values are near-universally negative during a forward pass, causing gradients to always be zero and the neuron to carry no information. As a direct result, a number of novel activations have been proposed throughout the years - Leaky ReLU, Parametric ReLU, ELU, and Softplus are four well-known albeit older examples. The idea behind all of these is to fix one or both of the above problems; Parametric ReLU, for example, attempts to fix the dying ReLU problem by introducing a learnable parameter $\alpha$ that defines the slope of the function for negative pre-activation values.

For this model, I went with SiLU (also commonly known as Swish), defined by $SiLU(x) = \frac{x}{1+e^{-x}}$, which has already seen success with a number of visual recognition models. Since this switch enabled faster training, I reduced the number of epochs from 600 to 450. Although I could’ve used GELU, I decided to use SiLU because it has an inplace parameter and could serve as a drop-in replacement for ReLU in the original reference implementation. GELU or GLU variants (SwiGLU, GeGLU) might have performed slightly better, as they are widely used in language models. Although GELU and SiLU are highly correlated 2, networks trained with GELU are not equivalent to networks trained with SiLU in terms of representational capacity due to differences in weight decay and initialization.

Lastly, I hypothesize that a SiLU network would likely perform better with stochastic depth, since ReLU may act like a weak implicit regularizer by adding sparsity to the network activations. This can be great for overparameterized models, but not for parameter-efficient models. SiLU, on the other hand, has nonzero gradients for all values $x$ except for $x \approx -1.278$. As such, with the switch from ReLU to SiLU, adding a bit of regularization might be warranted. I’ll have to experiment more with this in the upcoming weeks.

Update (03/23/2024): After some experimentation, I found that stochastic depth with a drop probability of 0.1 negatively impacts the performance of the network (by about 0.2% or so), but reducing it to 0.05 results in what is effectively the same accuracy. I’ll need to play around with it a bit more.

Split normalization

Vanilla ResNet uses a generous amount of batch normalization (BN); one BN layer per convolutional layer, to be exact. The original BN paper argues that BN mitigates internal covariate shift (ICS) - defined by the authors as the change any intermediate layer sees as upstream network weights shift - but this has since proven to be untrue (I’ll elaborate on this in a bit). I wanted to go back to the original ICS thesis, i.e. normalization in BN was meant to re-center the activations, while the learnable affine transformation immediately following normalization was meant to preserve each layer’s representational capacity. It simply made no sense to me that these two must be applied back-to-back. Furthermore, since backpropagation effectively treats each individual layer of neurons as an independent learner, the most sensible thing to do is to normalize layer inputs rather than outputs.

Long story short, I found that splitting BN into two separate layers - pre-convolution normalization and post-convolution affine transformation - improves the network’s performance by over 0.4%. While this does negatively affect speed and memory consumption during training, it has zero impact on inference performance, since the normalization and affine transformations can be represented as diagonal matrices and fused with the weights of the convolutional layer once the network is fully trained.

Split normalization, visualized.
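Concretely, a split-normalization block might look something like the following in PyTorch - again, my own illustration of the idea rather than the actual GResNet code. The normalization (with no learnable affine) is applied to the convolution's inputs, while a learnable per-channel affine is applied to its outputs; once training is complete, both can be folded into the convolution's weights:

import torch
import torch.nn as nn

class SplitNormConv2d(nn.Module):
    # pre-convolution normalization + post-convolution affine transformation
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_ch, affine=False)  # normalize the layer *inputs*
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride=stride,
                              padding=kernel_size // 2, bias=False)
        # learnable per-channel affine applied to the conv *outputs*
        self.gamma = nn.Parameter(torch.ones(out_ch, 1, 1))
        self.beta = nn.Parameter(torch.zeros(out_ch, 1, 1))

    def forward(self, x):
        return self.gamma * self.conv(self.norm(x)) + self.beta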
I wanted to better understand the theory behind “split” normalization but couldn’t find it anywhere in ML literature3. As a result, I looked towards BN theory first; the most compelling research in my eyes comes from Santurkar et al.’s 2018 paper. In it, they show that BN often increases ICS. Instead, they argue that batch normalization works well because it improves the first- and second-order properties of the loss landscape.

Through a quick exercise, we can show that split normalization (SN) has the same effect. Let’s consider two networks - one without SN defined by loss function $L$ and one with SN defined by loss function $\hat{L}$. For the network with SN, the gradients through each of these layers are as follows: Where $m$ is the size of each mini-batch and $y_i$, $\hat{y}_i$, $\hat{x}_i$, $x_i$ represent the activations for the $i$th sample in our batch. In practice, the dimensionality of the activation tensors can be arbitrarily large or small (e.g. 3d for most convnets). With this, we can represent the full loss gradient via dot products:

For a function $f(a)$, the L2 norm of its gradient $\left\Vert\frac{df}{da}\right\Vert$ is a good proxy for Lipschitzness. The same holds for our loss function, i.e. we would like to show that $\left\Vert\frac{\partial\hat{L}}{\partial\mathbf{x}}\right\Vert \leq \left\Vert\frac{\partial L}{\partial\mathbf{x}}\right\Vert$. Given a matrix $\mathbf{A}$ and vector $\mathbf{b}$, the norm of the two multiplied together is bounded above by the largest singular value of $\mathbf{A}$, i.e. $\Vert\mathbf{A}\cdot\mathbf{b}\Vert \leq s_{max}(\mathbf{A})\Vert\mathbf{b}\Vert = \sqrt{\lambda_{max}(\mathbf{A}^T\mathbf{A})}\Vert\mathbf{b}\Vert$. Given this, we have:

Applying the reduction from C.2 in Santurkar et al., we get:

In my eyes, we should separate the multiplicative term (i.e. $\frac{\gamma^2s_{max}^2}{\sigma^2}$) from the additive term (i.e. $- \frac{1}{m}\left\Vert\mathbf{1} \cdot \frac{\partial L}{\partial\mathbf{y}}\right\Vert^2 - \frac{1}{m}\left\Vert\frac{\partial L}{\partial\mathbf{y}} \cdot \mathbf{x}\right\Vert^2$), since a) the multiplicative effects can be counteracted by increasing or decreasing the learning rate and b) $\mathbf{W}$ tends to change much slower than other terms in the equation. In particular, the additive term is strictly negative, which means that the overall loss landscape is smoother, while the potentially large multiplicative upper bound implies that SN may, in certain situations, be increasing the Lipschitz constant of the loss. At the same time, ICS at the inputs of each layer is strictly decreased, as the learnable affine transformation now comes after the weights rather than before.

The results

The final 26M parameter model successfully reaches 82.0% accuracy on Imagenet-1k without any external sources of data! In the spirit of modern machine learning research, let’s give this network a fancy name: GResNet (Good/Great/Gangster/Godlike ResNet).

Model        Accuracy  Params  Throughput
GResNet      82.0%*    25.7M   2057 im/s
ConvNeXt     82.1%     28.6M   853 im/s
ViT (DeiT)   81.4%     22.0M   1723 im/s

Comparison of different models. Throughput calculated on a single Nvidia A100 with batch size 256 without network optimizations. *Accuracy improves to 82.2% and throughput drops to 1250 im/s when we use ConvNeXt's train image size of 224x224 instead of 176x176.

The GResNet model definition is available here, while weights are available here.

Accuracy curve during training.

Ending words

What exactly have we shown here? With some simple modifications to ResNet, we can attain excellent performance - on par with or better than both ViT and a ViT-inspired convnet (ConvNeXt) on smallish datasets.

ConvNets never die, they just Transform — Peyman Milanfar (@docmilanfar) October 27, 2023

ResNet strikes back... again? You might be asking: why Imagenet-1k? Aren’t there a number of much larger labelled visual datasets, e.g. YFCC, LAION, etc.? Secondly, since modern LLMs are exclusively transformer-based, isn’t it beneficial to also use transformers for vision in order to take advantage of cross-attention or by linearly projecting patches into the decoder? The answer is yes: for large multimodal models bound by text, self-attention reigns supreme. But small models (e.g. most embedding models) are arguably more important because of their portability and adaptability, and these models benefit greatly from the exact type of experiment outlined in this post: strong augmentation with limited data trained across many epochs. This is exactly the type of data that Imagenet-1k represents.

And on the topic of ViTs being superior to convnets on large datasets: the 2023 paper titled Convnets match vision transformers at scale from folks at Google DeepMind is worth a read. The concluding section contains a stark result: “Although the success of ViTs in computer vision is extremely impressive, in our view there is no strong evidence to suggest that pre-trained ViTs outperform pre-trained ConvNets when evaluated fairly.” This simply reinforces a lesson that ought to be repeated: optimizations to model architecture should always come after 1) a large, high-quality dataset, 2) a solid, highly parallelizable training strategy, and 3) having lots of H100s.
I’d argue that the bulk of transformers’ success has come from their ability to be efficiently and effectively scaled to hundreds of billions of parameters - scaling that could theoretically also be done with RNNs if research scientists had decades of time to train them (spoiler: they don’t).

Addendum - comparing embedding quality

I thought it might be interesting to compare embeddings from GResNet, ConvNeXt, and ViT by storing and indexing the embeddings from each model in Milvus:

>>> from milvus import default_server
>>> from pymilvus import MilvusClient
>>> default_server.start()
>>> client = MilvusClient(uri="http://127.0.0.1:19530")
>>> # initialize model, transform, and in1k val paths
...
>>> with torch.no_grad():
...     for n, path in enumerate(paths):
...         img = Image.open(path).convert("RGB")
...         feat = gresnet(transform(img).unsqueeze(0))
...         client.insert(collection_name="gresnet", data=[feat])
...
>>>

I removed the model initialization and data loading snippets for brevity and used Euclidean/L2 as the distance metric with no indexing (i.e. FLAT). With that step done, we can then query the collections to get results that look like this:

One could argue that GResNet tends to pick out images which are stylistically closer to the query image in addition to being the same class, but aside from that, the results between all three models are very comparable.

For a visual recognition model, the receptive field is the effective area of the input Nd-xels that a layer or neuron “sees” and can capture. Early layers in a pure convolutional model, for example, have a very small receptive field, while each layer in a vision transformer with dense attention sees the entire input image. ↩

There exists a fairly accurate approximation that relates GELU and SiLU: $GELU(x) = \frac{SiLU(1.702x)}{1.702}$. ↩

Please reach out to me if you know of prior work that implements this so I can give it a proper citation. ↩

a year ago 99 votes
a16z Blogs Are Just Glorified Marketing

… glorified marketing for portfolio companies, that is

I came across one of a16z’s blog posts on Hacker News today, titled Emerging Architectures for LLM Applications. For folks who didn’t catch it, here’s the tl;dr:
- The emerging LLM stack is composed of several elements centered around data orchestration tools such as Langchain and Llamaindex. Data pipelines, embedding models, vector databases, and queries form the primary input for these orchestration tools.
- The stack is based on in-context learning, where off-the-shelf LLMs are used and their behavior is controlled through prompting and conditioning on contextual data.
- Strategies for prompting LLMs are becoming increasingly complex and are a core differentiating factor for both closed-source and open-source LLMs. Of these LLMs, strategies for GPT-3.5 and GPT-4 are most common, seeing as OpenAI is the current leader.
- AI agents - programmatic runtimes that can reason and plan - excite both developers and researchers alike, but don’t work just yet. Most agent frameworks are currently in PoC phase.

Overall, I thought the article was informative, but I was surprised that the section on vector databases mentions neither Milvus nor Zilliz, especially since Milvus was mentioned in an older a16z blog on data and ML infrastructure. Also of note: another Zilliz project (GPTCache) is listed in the post. My initial instinct was that Milvus was left off because it is part of the LF AI & Data Foundation rather than being a project wholly owned by Zilliz, so I left a comment on the HN post that links back to the Milvus website. I came back a couple of hours later to find an interesting take:

Full disclosure: we (Zilliz) raised $103M back in 2022, and Pinecone raised $100M this April. Running it back in my head, I felt that SheepHerdr’s response actually made excellent sense - a16z’s ultimate goal is to generate returns for LPs, and the best way to do that is by supporting founders and propping up their portfolio companies. To me, this is also unequivocally unfair to Vespa, Weaviate, etc., as it delivers a subliminal message that they have no realistic long-term chance in the vector database space relative to Pinecone. This, of course, is absolute nonsense: vector databases are NOT a zero-sum game.

I dove a bit deeper and was surprised to find that this is fairly commonplace behavior for a16z as a firm:
- The aforementioned article also lists Databricks in the “Data Pipelines” section, but not Snowflake. There is a Snowflake loader for Langchain and a guide for using Llamaindex with Snowflake. Databricks is an a16z portfolio company.
- The Modern Transactional Stack doesn’t come close to listing all of the available data connectors. To be fair, Airbyte and Fivetran (an a16z portfolio company) are the two largest and most well-known, but to distill the entire segment to just two companies seems unfair.
- a16z’s crypto division has backed LayerZero, going as far as actively voting against Wormhole, a LayerZero competitor. Side note: LayerZero was also featured in a16z’s Crypto Startup School.

These are just three random examples I dug out - there are probably many other examples in verticals that I am unfamiliar with.

Other LLM/GenAI Infrastructure landscapes

Here are a couple of alternative landscapes that are, in my eyes, more wholly representative: ML/AI/Data Landscape (Interactive version). Matt Turck’s MAD Landscape is arguably the most complete out there.
Companies that do vector search are listed under “Infrastructure/Vector Database” and “Analytics/Enterprise Search” categories. It was released in February 2023, so it’s about 4 months old, but a good resource nonetheless. Future of AI-Native Infrastructure. This one’s from Wei Lien Dang and David Hershey of Unusual Ventures. I found this pretty unique as it has a vertical for AI agents. It’s unfortunately not as complete as the MAD Landscape (missing Vespa, Vectara, etc), but still a good overview. The New Language Model Stack. Sequoia Capital’s blog post on the LLM stack is also excellent. Milvus isn’t in the diagram, but it’s mentioned in the section on vector databases. Vector Database Landscape. Yingjun Wu’s infographic is centered specifically around vector search infrastructure.

Final thoughts

I have tremendous respect for a16z, a firm that helped pioneer the practice of working with and nurturing founders rather than forcing them out pre-IPO or minmaxing term sheets. Their content is also incredibly informative and valuable for understanding the nuances of building a company, from finding PMF to hiring executives. I also wholeheartedly understand a16z’s motivation for sharing knowledge and highlighting their portfolio companies, but to do so under the guise of being helpful and impartial is just plain silly. In particular, a16z’s blog post yesterday has as much to do with emerging strategies for portfolio company marketing as it does with emerging architectures for LLM applications. This practice would be somewhat analogous to Google putting paid URLs at the very top of search results without an “Ad” label. (To be clear, Google doesn’t do this.)

I’d like to end with some glorified marketing of my own:

% pip install milvus

over a year ago 59 votes
Hierarchical Navigable Small Worlds (HNSW)

(Note: A version of this post has been cross-published to the Zilliz blog)

In a previous blog, we took a look at scalar quantization and product quantization - two indexing strategies which are used to reduce the overall size of the database without reducing the scope of a search. To better illustrate how scalar quantization and product quantization work, we also implemented our own versions in Python. In this tutorial, we’ll build on top of that knowledge by looking at what is perhaps the most commonly used primary algorithm today: Hierarchical Navigable Small Worlds (HNSW). HNSW performs very well when it comes to both speed and accuracy, making it an incredibly robust vector search algorithm. Despite it being popular, understanding HNSW can be a bit tricky. In the next couple of sections, we’ll break down HNSW into its individual steps, developing our own simple implementation along the way.

HNSW basics

Recall from a previous post that there are four different types of vector search indexes: hash-based, tree-based, cluster-based, and graph-based. HNSW fits firmly into the lattermost, combining two core concepts together - the skip list and Navigable Small World (NSW). Let’s first dive into these two concepts individually before discussing HNSW.

Skip list overview

First up: skip lists. Recall the venerable linked list - a well-known data structure where each element in the list maintains a pointer to the next element. Although linked lists work great for implementing LIFO and FIFO data structures such as stacks and queues, a major downside is their time complexity when it comes to random access: O(n). Skip lists aim to solve this problem by introducing additional layers, allowing for O(log n) random access time complexity, at the cost of extra memory (O(n log n) space complexity as opposed to O(n) for a normal linked list) and a bit of runtime overhead for inserts and deletes.

A skip list is essentially a multi-level linked list, where the upper levels maintain long connections. As we move down the layers, the connections become shorter and shorter, with the bottommost layer being the “original” linked list containing all of the elements. The image below illustrates this:

The skip list, illustrated. Higher layers have fewer elements.

To reach element i in a skip list, we first start at the highest layer. Once we find a node that corresponds to an element in the list that is greater than i, we then backtrack to the previous node and move to the layer below. This continues all the way until we’ve found the element we’re looking for. Note that skip lists only work for sorted lists, as we need a way to directly compare the magnitude of two objects.

Inserts work probabilistically. For any new element, we first need to figure out the layer in which the element appears first. The uppermost layer has the lowest probability, with increasing probability as we move down in layers. The general rule is that any element in a layer will appear in the layer above it with some pre-defined probability p. Therefore, if an element first appears in some layer l, it will also get added to layers l-1, l-2, and so on. Note that, while it is possible to have a poorly balanced skip list that performs no better than a standard linked list, the probability of this happening is incredibly low.
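To make the top-down search procedure a bit more tangible, here is a tiny illustrative sketch (mine, not part of the original tutorial) that represents a skip list as a list of sorted Python lists, ordered from the sparsest layer down to the full bottom layer. A real skip list would use linked nodes with next/down pointers; this version just shows the traversal order:

import math

def skiplist_search(layers, target):
    # layers: sorted lists, from the sparsest (top) layer to the full (bottom) layer
    prev = -math.inf  # sentinel "head" value we descend from
    for layer in layers:
        # start from the element we descended from (the index lookup here is O(n);
        # a real skip list would follow a "down" pointer instead)
        i = layer.index(prev) if prev in layer else -1
        # walk right while the next element does not overshoot the target
        while i + 1 < len(layer) and layer[i + 1] <= target:
            i += 1
        if i >= 0 and layer[i] == target:
            return True
        if i >= 0:
            prev = layer[i]  # descend to the next layer from this element
    return False

# example: a 3-layer skip list over sorted integers
layers = [
    [30],                      # top layer (fewest elements)
    [10, 30, 70],              # middle layer
    [10, 20, 30, 40, 50, 70],  # bottom layer (all elements)
]
print(skiplist_search(layers, 40))  # True
print(skiplist_search(layers, 45))  # False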
What the heck is a Navigable Small World?

Now that we’ve gotten skip lists out of the way, let’s take some time to talk about Navigable Small Worlds. The general idea here is to first imagine a large number of nodes in a network. Each node will have short-, medium-, and long-range connections to other nodes. When performing a search, we’ll first begin at some pre-defined entry point. From there, we’ll evaluate connections to other nodes, and jump to the one closest to the one we hope to find. This process repeats until we’ve found our nearest neighbor. This type of search is called greedy search. For small NSWs in the hundreds or thousands of nodes, this algorithm works, but it tends to break down for much larger NSWs. We can fix this by increasing the average number of short-, medium-, and long-range connections for each node, but this increases the overall complexity of the network and results in longer search times. In the absolute “worst” case, where each node is connected to every other node in our dataset, NSW is no better than naïve (linear) search.

NSWs are cool and all, but how does this relate to vector search? The idea here is to imagine all vectors in our dataset as points in an NSW, with long-range connections being defined by vectors which are dissimilar from one another and the opposite for short-range connections. Recall that vector similarity scores are measured with a similarity metric - typically L2 distance or inner product for floating point vectors and Jaccard or Hamming distance for binary vectors. By constructing an NSW with dataset vectors as vertices, we can effectively perform nearest neighbor search by simply greedily traversing the NSW towards vertices closer and closer to our query vector.

HNSW, explained

When it comes to vector search, we often have dataset sizes in the hundreds of millions or even billions of vectors. Plain NSWs are less effective at this scale, so we’ll need a better graph. HNSW extends NSW by borrowing from the concept of skip lists. Like the skip list, HNSW maintains multiple layers (hence the term Hierarchical Navigable Small World), only of NSWs instead of linked lists. The uppermost layer of an HNSW graph has few nodes and the longest links, while the bottommost layer has all nodes and the shortest links. During the search process, we enter at a pre-defined point in the uppermost layer and greedily route ourselves towards the nearest neighbor to our query vector. Once we reach the nearest node, we then move to the second layer and repeat this process. This continues until we’ve reached our nearest neighbor.

A diagram from the HNSW paper which visualizes the layered graph concept.

Inserts work similarly to the skip list. For some vector v, we first traverse the first layer of the graph, finding its nearest neighbor before moving to the layer below it. We then traverse the graph again to find its nearest neighbor in the second layer. This process continues until we’ve reached the nearest neighbor in the bottommost graph. From here, we then need to determine which links (connections between vertices) to create. Again, we have a pre-defined parameter M which determines the maximum number of bidirectional links that we can add. These links are usually simply set as the nearest neighbors to v, but other heuristics can be used as well. The same process then repeats for the upper layers, assuming the vector appears there. As with the skip list, the inserted vector will appear in upper layers with exponentially decreasing probability. Specifically, the HNSW paper uses the equation floor(-ln(rand(0, 1)) * mL), where rand(0, 1) is a random number sampled from a uniform distribution over (0, 1] and mL is a constant that normalizes the distribution.
Note how this does not actually constrain the minimum distance between any two vertices/vectors in a particular layer - it’s entirely possible that we end up with a poorly constructed graph, but the probability that this happens is incredibly low, especially as we scale up the number of vectors in the HNSW index.

Implementing HNSW

HNSW is not trivial to implement, so we’ll implement only a very basic version here. As usual, let’s start with creating a dataset of (128 dimensional) vectors:

>>> import numpy as np
>>> dataset = np.random.normal(size=(1000, 128))

The first step is to build the HNSW index. To do so, we’ll need to add each vector in our dataset one-by-one. Let’s first create a data structure to hold our index. In this basic example, we’ll use a list of lists to represent the index, with the inner lists corresponding to each layer/graph:

>>> L = 5  # 5-layer HNSW
>>> index = [[] for _ in range(L)]

Every element in each graph is a 3-tuple containing the vector, a list of indexes that the vector links to within the graph, and the index for the corresponding node in the layer below it. For the bottommost layer, the third element of the 3-tuple will be set to None.

Since every insert first requires a search for the nearest neighbor in the graph, let’s implement that first. We can traverse any of the subgraphs in the index like so:

def _search_layer(graph, entry, query, ef=1):

    best = (np.linalg.norm(graph[entry][0] - query), entry)

    nns = [best]
    visit = set(best)  # set of visited nodes
    candid = [best]  # candidate nodes to insert into nearest neighbors
    heapify(candid)

    # find top-k nearest neighbors
    while candid:
        cv = heappop(candid)

        if nns[-1][0] > cv[0]:
            break

        # loop through all nearest neighbors to the candidate vector
        for e in graph[cv[1]][1]:
            d = np.linalg.norm(graph[e][0] - query)
            if (d, e) not in visit:
                visit.add((d, e))

                # push only "better" vectors into candidate heap
                if d < nns[-1][0] or len(nns) < ef:
                    heappush(candid, (d, e))
                    insort(nns, (d, e))
                    if len(nns) > ef:
                        nns.pop()

    return nns

This code snippet is a bit more involved, but it’s much easier to understand with a bit of explanation. Here, we use a heap to implement a priority queue, which we use to order nearest neighbor vectors in the graph. Like all of the previous examples, I’m using L2 distance here, but this code can be extended to other distance metrics as well. We first populate the heap with the entry point.

Here, all we’re doing is implementing greedy search. At every iteration, our goal is to update two variables: nns, our output list of nearest neighbors, and candid, a heap of candidate points. We evaluate all nearest neighbors to the “best” vector in candid, adding better (better means closer to the query vector) vectors to the output list of nearest neighbors as well as to the heap of candidate points for evaluation on the next iteration. This repeats until one of two stopping conditions is reached: we either run out of candidate points to evaluate, or we’ve determined that we can no longer do any better than what we already have.
With top-k graph search out of the way, we can now implement the top-level search function for searching the entire HNSW index:

def search(index, query, ef=1):

    # if the index is empty, return an empty list
    if not index[0]:
        return []

    best_v = 0  # set the initial best vertex to the entry point
    for graph in index:
        best_d, best_v = _search_layer(graph, best_v, query, ef=1)[0]
        if graph[best_v][2]:
            best_v = graph[best_v][2]
        else:
            return _search_layer(graph, best_v, query, ef=ef)

We first start at the entry point (zeroth element in the uppermost graph), and search for the nearest neighbor in each layer of the index until we reach the bottommost layer. Recall that the final element of the 3-tuple will resolve to None if we are at the bottommost layer - this is what the final if statement is for. Once we reach the bottommost layer, we search the graph using best_v as the entry point.

Let’s go back to the HNSW insert. We’ll first need to figure out which layer to insert our new vector into. This is fairly straightforward:

def _get_insert_layer(L, mL):
    # mL is a multiplicative factor used to normalize the distribution
    l = -int(np.log(np.random.random()) * mL)
    return min(l, L)

With everything in place, we can now implement the insertion function.

def insert(index, vec, efc=10):

    # if the index is empty, insert the vector into all layers and return
    if not index[0]:
        i = None
        for graph in index[::-1]:
            graph.append((vec, [], i))
            i = 0
        return

    l = _get_insert_layer(L, 1/np.log(L))

    start_v = 0
    for n, graph in enumerate(index):

        # perform insertion for layers [l, L) only
        if n < l:
            _, start_v = _search_layer(graph, start_v, vec, ef=1)[0]
        else:
            node = (vec, [], len(index[n+1]) if n < L-1 else None)
            nns = _search_layer(graph, start_v, vec, ef=efc)
            for nn in nns:
                node[1].append(nn[1])  # outbound connections to NNs
                graph[nn[1]][1].append(len(graph))  # inbound connections to node
            graph.append(node)

        # set the starting vertex to the nearest neighbor in the next layer
        start_v = graph[start_v][2]

If the index is empty, we’ll insert vec into all layers and return immediately. This serves to initialize the index and allow for successful insertions later. If the index has already been populated, we begin insertion by first computing the insertion layer via the _get_insert_layer function we implemented in the previous step. From there, we find the nearest neighbor to the vector in the uppermost graph. This process continues for the layers below it until we reach layer l, the insertion layer. For layer l and all those below it, we first find the nearest neighbors to vec up to a pre-determined number efc. We then create connections from the node to its nearest neighbors and vice versa. Note that a proper implementation should also have a pruning technique to prevent early vectors from being connected to too many others - I’ll leave that as an exercise for the reader :sunny:.

We now have both search (query) and insert functionality complete.
Let’s combine everything together in a class:

from bisect import insort
from heapq import heapify, heappop, heappush

import numpy as np

from ._base import _BaseIndex


class HNSW(_BaseIndex):

    def __init__(self, L=5, mL=0.62, efc=10):
        self._L = L
        self._mL = mL
        self._efc = efc
        self._index = [[] for _ in range(L)]

    @staticmethod
    def _search_layer(graph, entry, query, ef=1):

        best = (np.linalg.norm(graph[entry][0] - query), entry)

        nns = [best]
        visit = set(best)  # set of visited nodes
        candid = [best]  # candidate nodes to insert into nearest neighbors
        heapify(candid)

        # find top-k nearest neighbors
        while candid:
            cv = heappop(candid)

            if nns[-1][0] > cv[0]:
                break

            # loop through all nearest neighbors to the candidate vector
            for e in graph[cv[1]][1]:
                d = np.linalg.norm(graph[e][0] - query)
                if (d, e) not in visit:
                    visit.add((d, e))

                    # push only "better" vectors into candidate heap
                    if d < nns[-1][0] or len(nns) < ef:
                        heappush(candid, (d, e))
                        insort(nns, (d, e))
                        if len(nns) > ef:
                            nns.pop()

        return nns

    def create(self, dataset):
        for v in dataset:
            self.insert(v)

    def search(self, query, ef=1):

        # if the index is empty, return an empty list
        if not self._index[0]:
            return []

        best_v = 0  # set the initial best vertex to the entry point
        for graph in self._index:
            best_d, best_v = HNSW._search_layer(graph, best_v, query, ef=1)[0]
            if graph[best_v][2]:
                best_v = graph[best_v][2]
            else:
                return HNSW._search_layer(graph, best_v, query, ef=ef)

    def _get_insert_layer(self):
        # mL is a multiplicative factor used to normalize the distribution
        l = -int(np.log(np.random.random()) * self._mL)
        return min(l, self._L-1)

    def insert(self, vec, efc=10):

        # if the index is empty, insert the vector into all layers and return
        if not self._index[0]:
            i = None
            for graph in self._index[::-1]:
                graph.append((vec, [], i))
                i = 0
            return

        l = self._get_insert_layer()

        start_v = 0
        for n, graph in enumerate(self._index):

            # perform insertion for layers [l, L) only
            if n < l:
                _, start_v = self._search_layer(graph, start_v, vec, ef=1)[0]
            else:
                node = (vec, [], len(self._index[n+1]) if n < self._L-1 else None)
                nns = self._search_layer(graph, start_v, vec, ef=efc)
                for nn in nns:
                    node[1].append(nn[1])  # outbound connections to NNs
                    graph[nn[1]][1].append(len(graph))  # inbound connections to node
                graph.append(node)

            # set the starting vertex to the nearest neighbor in the next layer
            start_v = graph[start_v][2]

Boom, done! All code for this tutorial can be accessed on Github: https://github.com/fzliu/vector-search.
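As a quick sanity check, here is how the class above might be exercised end-to-end - a small sketch of my own, assuming the HNSW class and its _BaseIndex dependency are importable locally:

import numpy as np

hnsw = HNSW(L=5, mL=0.62, efc=10)
hnsw.create(np.random.normal(size=(1000, 128)))  # insert the dataset vector-by-vector

query = np.random.normal(size=(128,))
print(hnsw.search(query, ef=5))  # (distance, vertex index) pairs from the bottom layer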

over a year ago 57 votes
My Experience Living and Working in China, Part I

In this four-part article, I’ll go over some of the lessons I learned living and doing business in China’s tech industry. During my time in China, I’ve led a team of 10+ engineers to develop a location-based IoT and sensing platform, co-founded an open-source project called Towhee, and developed countless relationships with folks in a number of different cities (many of whom I now consider good friends). I’ll go over some of the common misconceptions about China, ranging from living and working in China to the government’s pandemic response. Part I of this blog post covers some of the basics without diving too deep into the tech world: some interesting things I learned while living, working, and interacting in China. If you have any questions, comments, or concerns, feel free to connect with me on Twitter or LinkedIn. Thanks for reading!

Update (03/29/2022): Part II is up. You can read it here.

Before I begin, a bit about me. I was born in Nanjing, China, but moved to the US when I was barely three years old. I spent about five years in New Jersey before moving to Corvallis, Oregon (a place that I am, to this day, proud to call home). I moved to NorCal for college, studying EE (with a minor in CS) at Stanford. I stayed there for my Master’s degree as well, which I completed in 2014. Afterwards, I worked at Yahoo’s San Francisco office as a Machine Learning Engineer for two years. In a hybrid software development & research role, I was able to research and productionize the industry’s first deep learning-based model for scoring images based on aesthetics. I also had the pleasure of attending Yahoo’s internal TechPulse conference (where my co-author and I won a best paper award), all while keeping up with interesting deep learning use cases. All-in-all, I was quite happy with the work I was doing, but also slowly started to develop the entrepreneurship itch.

In the lead up to 2017, I returned to my Electrical Engineering roots and co-founded a company developing solutions for indoor localization and navigation. The efforts I put towards finding investment had little to no return - feedback we got from a lot of investors was that they believed in the team, but that the product lacked a “viability test” with an initial customer, something difficult for an early-stage hardware startup due to the high development overhead. I had some simulations and early board designs which I believed were enough, but for an investor, diving deep into an unknown company’s technology can often be costly in terms of time and energy.

This is where my story takes a bit of a turn. In late 2017, the company received an early-stage seed investment offer from mainland China, and after a bit of consideration, we decided to go for it. It was at this point that a lot of friends and family asked me a question I’ve become very good at answering over the years: Why did you choose to leave Silicon Valley for an unknown country with less talent and an arguably inferior tech industry? The answer is threefold: 1) I felt that Chinese investors were more open to funding hardware startups due to the ultra-fast turnaround times for fabrication, 2) the Bay Area was just getting too damn expensive for my taste, and 3) from a personal perspective, I wanted to understand my birth country from cultural, social, and economic standpoints. I felt good about my decision and thought that the greatest challenge would be language; my Mandarin was workable but far from proficient.
San Francisco Chinatown is a poor caricature of Qing dynasty China. The same goes for the architecture you see in Chinese restaurants across America. Photo by Dennis Jarvis, CC BY-SA 2.0 license, original photo.

Alipay, WeChat, and QR codes

The very first thing you’ll learn about China is that everything revolves around either Alipay (支付宝) or WeChat (微信), two apps known primarily for their payment capabilities. What a lot of folks outside China don’t know is that these two apps can be used as gateways to a number of other mini-programs (小程序), i.e. subapps developed by other organizations such as KFC, Walmart, etc. These subapps can be used directly within either Alipay or WeChat, forgoing the need to individually download apps from an app store. Imagine ordering furniture from IKEA, dinner from Chipotle, and movie tickets to Century Theaters all from the same app - that’s Alipay/WeChat for you. The obvious downside to this is that personal information becomes extremely centralized. If something like this were to happen in the US, antitrust lawsuits would come faster than a speeding bullet, and for good reason too - big conglomerates monopolizing data is dangerous, and their wide adoption stifles innovation. While Alipay and WeChat were years ahead of the US’s card-based (credit/debit) payments system when first released, Android Pay and Apple Pay (NFC-based) have since then become a lot easier to use.

Alipay and WeChat work by opening a camera and scanning a QR code, which redirects you to the store's payments page. You can then pay an arbitrary amount of RMB, which will immediately show up in the payee's balance once complete. Photo by Harald Groven, CC BY-SA 2.0 license, original photo.

Here's a screenshot of my Alipay. Its primary use is for payments, as evident by the top row, but mini-programs (second row from the top) have now become an important part of the app.

Alipay and WeChat’s success within mainland China is in large part due to the smartphone + QR code revolution, which has truly permeated all aspects of Chinese life. Shared bikes can be unlocked by scanning a QR code on your phone. You can add friends on Alipay and WeChat using QR codes. Many Communist Party of China (CPC) functions rely on tight Alipay or WeChat integration. You can even log in to third-party websites and check in as a guest in office buildings via QR codes. I am by no means a security expert, but this system somehow feels a bit gameable despite its widespread use by over a billion people.

Red tape, CPC style

While Alipay and WeChat have made life considerably easier for the majority of people living in China, many civil and commercial processes are still incredibly difficult and filled with unnecessary paperwork. Registering a company and acquiring a work permit in China is quite possibly one of the most insanely frustrating things on Earth. I won’t go into all of the details, but just know that it involved a mountain of paperwork, letters of commitment, countless passport scans and other documentation, etc… We ended up hiring an administrative assistant to handle a lot of this work for us, but the amount of time and energy one has to dedicate towards this can be a bit demoralizing. Some provincial (the equivalent of a state in America) governments have issued new policies aimed towards combating the problem of excessive paperwork. But the CPC is massive, and massive entities have even larger amounts of inertia.
Rather than reducing the amount of mandatory paperwork, many of those policies revolved around reducing the number of trips needed to see the process to completion. This is definitely a step in the right direction, but compiling a thick folder of paperwork is still not a fun experience.

A common joke in China is that there are four castes. From top to bottom these are: 1) CPC officials, 2) foreigners, 3) white collar workers, and finally 4) blue collar workers. Even with this supposed semi-VIP treatment, getting a business license such as this one is something I do not want to go through again. The same goes for pretty much all processes which require some sort of government approval, including but not limited to acquiring a work permit, registering an address change, and replacing a lost ID card. Even flying to China requires a mountain of paperwork and approvals, even if you already have a Chinese visa. My main problem with all this is the CPC’s complete lack of transparency. Why can’t I transit through a third country on my way to China if I’m going to have to undergo 14 days of mandatory hotel quarantine plus another 7 days of home quarantine anyway? From a foreigner’s perspective, this is one of the most frustrating aspects of China in an otherwise amazing experience - CPC overreach in almost every aspect of everyday life. The CPC grossly mismanages via overregulation in some sectors and underregulation (hello, housing market) in others.

Social regression, economic growth

This ties into another common misconception about China - the idea that the government wants to track everything you do at all hours of the day (for the moment, let’s ignore the feasibility of doing so for a population of 1.4 billion people) through a combination of CCTV, mobile phones, and browsing habits. I’ve read countless articles written by American and European media outlets overstating the dystopia that China has fallen into, but the reality is that the Chinese government cares little for storing said data long-term and uses it primarily in criminal cases. I was involved in a project that uses face recognition to track residents going in and out of communities; not only were the residents eager to have such a system installed, but it eventually also helped track down a man guilty of sexual assault. Data from such a system was also entirely managed at the local level and not automatically shared with the provincial or central governments. Xinjiang and Tibet are two exceptions to this which I won’t dive deep into. I also haven’t been to either province, so it would be inappropriate for me to comment on what’s going on in Western China.

Other surveillance programs such as social credit (社会信用) and city brain (城市大脑) are also widely misunderstood. The social credit system primarily punishes and constrains businesses rather than people, while social credit for individuals is somewhat analogous to a background check in America. A lot of American and European commentators will point out some insane social credit rules, such as deducting points for cheating on the college entrance exam (essentially the SAT on steroids); while I do not disagree, there are undoubtedly similar occurrences in American law. When I was still a student at Stanford, I once lost an internship opportunity because a “traffic violation” - biking at night without a bike light - showed up on my background check.
In all fairness, I consider it to be extremely easy to stay off China’s social credit “blacklist” - just be reasonable and avoid breaking the law. China’s “city brains” are a totally different beast, designed to anticipate and reduce traffic, improve city planning, and provide advanced 3D models and visualization techniques. My understanding is that most city brain projects achieve… none of these, despite the fact that cities pay the equivalent of tens to hundreds of millions of dollars for just one of these solutions. An interesting side note - a recruiter once tried getting me to lead Yiwu’s city brain project, but it fell through after he discovered I wasn’t a Chinese citizen (these projects, for obvious reasons, strictly prohibit participation from non-Chinese citizens).

An image I found of Pudong District's (Pudong is a district in Shanghai, home to Shanghai Pudong International Airport, i.e. PVG) city brain platform via a Baidu search. Although it looks fancy, there is really little to no new underlying technology behind these systems.

You might wonder how China’s economy is able to grow at such a blistering pace despite the huge number of arguably inefficient government programs. The answer is rooted in East Asian culture: work ethic. Blue collar Chinese workers are willing to work 60+ hour weeks while sustaining themselves on ramen and $1.50 cigarette packs every day, just to ensure their kids can get the best education and an improved quality of life. The whole concept of 996 is rooted in the Confucian ideals of hard work and industriousness. The “laziest” men and women in China are arguably owners of small- to mid-size businesses; they are often the last to arrive and first to leave from work. The CPC loves to take credit for China’s recent growth, but the reality is that the growth was the result of Chinese work ethic plus a switch from central planning to a mixed economy.

By industriousness, I really do mean everybody. In 2019, I visited a prison in Jiangxi to discuss a potential prisoner safety solution. In a meeting with the vice-warden, he tacitly mentioned how Adidas shoes were being made in the prison that he was running. We quickly pulled out of that project. I haven’t bought Adidas- or Nike-branded shoes since1.

Personal identity

With the current political climate and state of affairs in mainland China, there are many Gen Z-ers and Millennials (mostly from Guangdong Province, as I consider Macau, Taiwan, and Hong Kong to be separate territories) who hail from mainland China but don’t refer to themselves as Chinese, instead calling themselves Cantonese. While some simply wish to preserve personal identity, there are also many who dissociate themselves simply because they believe the rest of China to be inferior. I’ve heard some of the most asinine reasons - people spit too often in the streets, everybody plays loud Douyin/TikTok videos while riding high-speed rail, too many cigarette smokers, etc. These are the same people who conveniently forget that some sidewalks along the Mission are lined with old discarded chewing gum, that loud music is played frequently on BART or in a BART station, or that open drug usage occurs nightly in the Tenderloin. I strongly dislike the CPC, but have immense love for Chinese people and Chinese culture. China is a super-massive collection of people that, in my eyes, have made incredible economic and social progress since my birth year, and will continue to do so in the decades ahead.
And as a result of all of this, I’m proud to call myself Chinese American.

Wrapping up

Entire dissertations could be dedicated to each of the above sections, but I wanted to highlight misconceptions and some other bits of information that might not be as readily accessible. In particular, the previous section is by no means a comprehensive list of social issues that China is facing, but rather a brief summary of things that might not be too well understood in the West. #MeToo2, a declining natural birth rate, and racial divisions are just a small number of similar/parallel issues that are happening in both America and China.

If you made it this far, thanks for reading. This post has been a bit rambly and all over the place, but the next couple should hopefully be a bit more focused. If you liked this article and are an open-source developer like myself, please give the Towhee project a star on Github as a show of support. In part II, I’ll cover the Chinese tech scene, from 996’ing to the open source community. Stay tuned!

Forced labor in Xinjiang has made headlines in recent months, but in reality, it happens everywhere in China. ↩

Justice for Zhou Xiaoxuan. ↩

over a year ago 49 votes

More in AI

Pluralistic: A weekend's worth of links (30 Aug 2025)

Today's links A weekend's worth of links: Short hits for a long weekend. Object permanence: Floppy disk CD sleeves; Rules for radicals; California's preventable fires; Muppet Haunted Mansion; Wells Fargo steals rescued Nazi loot; Texas abortion release. Upcoming appearances: Where to find me. Recent appearances: Where I've been. Latest books: You keep readin' em, I'll keep writin' 'em. Upcoming books: Like I said, I'll keep writin' 'em. Colophon: All the rest.

A weekend's worth of links (permalink)

Did you know that it's possible to cut a hole in any cube such that an identical cube can fit inside it? Really! It's called "Rupert's Property." Further, all Platonic solids are Rupert! Except one newly discovered shape, which cannot fit inside itself. What is this eldritch polyhedron called? A Nopeterhedron! https://arxiv.org/pdf/2508.18475

"Nopeterhedron" is the best coinage I've heard in months, which makes it a natural to open this week's linkdump, a collection of the links that piled up this week without making it into my newsletter. This is my 33rd Saturday linkdump – here are the previous 31 editions: https://pluralistic.net/tag/linkdump/

Speaking of eldritch geometry? Perhaps you've heard that Donald Trump plans to add a 90,000 sqft ballroom to the (55,000 sqft) White House. As Kate "McMansion Hell" Wagner writes for The Nation, this is a totally bullshit story floated by Trump and a notorious reactionary starchitect, and to call it a "plan" is to do unforgivable violence to the noble art of planning: https://www.thenation.com/article/culture/white-house-ballroom-mccrery-postmodernism/

Wagner is both my favorite architecture critic and the only architecture critic I read. That's because she's every bit as talented a writer as she is a perspicacious architecture critic. What's more, she's a versatile writer. She doesn't just write these sober-but-scathing, erudite pieces for The Nation; she has, for many years, invented the genre of snarky Zillow annotations, which are convulsively funny and trenchant: https://mcmansionhell.com/

At the Electronic Frontier Foundation, we often find ourselves at the center of big political and legal fights; for example, we were the first group to sue Musk and DOGE: https://www.eff.org/press/releases/eff-sues-opm-doge-and-musk-endangering-privacy-millions

Knowing that I'm part of this stuff helps me get through tough times – but I'm also so glad that we get to step in and defend brilliant writers like Wagner, as we did a few years ago, when Zillow tried to use legal bullying tactics to make her stop being mean to their shitty houses: https://www.eff.org/deeplinks/2017/06/mcmansion-hell-responds-zillows-unfounded-legal-claims

If this kind of stuff excites you as much as it excites me and you're in the Bay Area, get thee to the EFF Awards (or tune into the livestream) and watch us honor this year's winners: Just Futures Law, Erie Meyer, and the Software Freedom Law Center, India: https://www.eff.org/deeplinks/2025/08/join-your-fellow-digital-rights-supporters-eff-awards-september-10

So much of the activity that EFF defends involves writing. The web was written into existence, after all, both by the coders who hacked it together and the writers who filled it up. I've always wanted to be a writer, since I was six years old, and I'm so lucky to have grown up through an era in which the significance of the written word has continuously expanded. I was equally lucky to have writing teachers who permanently, profoundly shaped my relationship with the written word.
I've had many of those, but none were so foundational as Harriet Wolff, the longest-serving English teacher at Toronto's first alternative school, SEED School, whence I graduated after a mere seven years of instruction. Harriet was a big part of why I spent seven years getting a four year diploma. She was such a brilliant English teacher, and presided over such an excellent writing workshop, that I felt like I still had so much to learn from high school, even after I'd amassed enough credits to graduate, so I just stuck around.

Harriet died this summer: https://obituaries.thestar.com/obituary/harriet-wolff-1093038534

We hadn't spoken much over the past decade, though she did come to my wedding and was every bit as charming and wonderful as I'd remembered her. Despite not having spoken to her in many years, hardly a day went by without my thinking of her and the many lessons she imparted to me.

Harriet took a very broad view of what could be good writing. Though she wasn't much of a science fiction fan, she always took my sf stories seriously – as seriously as she took the more "literary" fiction and poetry submitted by my peers. She kept a filing cabinet full of mimeographs and photocopies, each excellent examples of various forms of writing. Over the years, she handed me everything from Joan Didion essays to especially sharp op-eds from Time Magazine, along with tons of fiction.

Harriet taught me how to criticize fiction, as a means of improving my understanding of what I was doing with my writing, and as a way of exposing other writers to new ways of squeezing their own big, numinous, irreducible feelings out of their fingertips and out onto the page. She was the first person I called when I sold my first story, at 17, and I still remember standing on the lawn of my parents' house, cordless phone in one hand and acceptance letter in the other, and basking in her approval.

Harriet was a tough critiquer. Like many of the writers in her workshop, I had what you might call "glibness privilege" – a facility with words that I could use to paper over poor characterization or plotting. Whenever I'd do this, she'd fix me with her stare and say, "Cory, this is merely clever." I have used that phrase countless times – both in relation to my own work and to the work of my students.

Though Harriet was unsparing in her critiques, they never stung, because she always treated the writers in her workshop as her peers in a lifelong journey to improve our craft. She'd come out for cigarettes with us, and she came to every house party I invited her to, bringing a good, inexpensive bottle of wine and finding a sofa to sit on and discuss writing and literature. She invited me to Christmas dinner one year when I was alone for the holidays and introduced me to Yorkshire pudding, still one of my favorite dishes (though none has ever matched the pleasure of eating that first one from her oven).

Harriet apparently told her family that she didn't want a memorial, though from emails with her former students, I know that there might end up being something planned in Toronto. After all, memorials are for the living as much as for the dead. It's unlikely I'll be home for that one, but of course, the best way to memorialize Harriet is in writing. For Harriet, writing was a big, big church, and every kind of writing was worth serious attention. I always thought of the web as a very Wolffian innovation, because it exposed so many kinds of audiences to so many kinds of writers.
There's Kate Wagner's acerbic Zillow annotations, of course, but also so much more. One of the web writers I've followed since the start is Kevin Kelly, who went from The Whole Earth Review to serving as Wired's first executive editor. Over the years, Kevin has blazed new trails for those of us who write in public, publishing many seminal pieces online. But Kevin was and is a print guy, who has blazed new trails in self-publishing, producing books that are both brilliant and beautifully wrought artifacts, like his giant, three-volume set of photos of "Vanishing Asia":

https://vanishing.asia/the-making-of-vanishing-asia/

This week, Kelly published one of his famous soup-to-nuts guides to a subject: "Everything I Know about Self-Publishing":

https://kk.org/thetechnium/everything-i-know-about-self-publishing/

It's a long, thoughtful, and extremely practical guide that is full of advice on everything from printing to promo. I've self-published several volumes, and I learned a lot.

One very important writer who's trying something new this summer – to wonderful effect – is Hilary J Allen, a business law professor at American University. During the first cryptocurrency bubble, Allen wrote some of the sharpest critiques of fintech, dubbing it "Shadow Banking 2.0":

https://pluralistic.net/2022/03/02/shadow-banking-2-point-oh/#leverage

Allen also coined the term "driverless finance," a devastatingly apt description of the crypto bros' desire for a financial system with no governance, which she expounded upon in a critical book:

https://driverlessfinancebook.com/

This summer, Allen has been serializing "FinTech Dystopia," which she calls "A summer beach read about Silicon Valley ruining things." Chapter 9 dropped this week, "Let’s Get Skeptical":

https://fintechdystopia.com/chapters/chapter9.html

It's a tremendous read, and while it mostly concerns itself with summarizing her arguments against the claims of fintech boosters, there's an absolutely jaw-dropping section on Neom, the doomed Saudi megaproject to build a massive "linear city" in the desert:

More than 21,000 workers (primarily from India, Bangladesh, and Nepal) are reported to have died working on NEOM and related projects in Saudi Arabia since 2017, with more than 20,000 indigenous people reported to have been forcibly displaced to make room for the development.

Allen offers these statistics as part of her critique of the "Abundance agenda," which focuses on overregulation as the main impediment to a better world. Like Allen, I'm not afraid to criticize bad regulation, but also like Allen, I'm keenly aware of the terrible harms that arise out of a totally unregulated system.

The same goes for technology, of course. There are plenty of ways to use technology that are harmful, wasteful and/or cruel, but that isn't a brief against technology itself. There are many ways that technology has been used (and can be used) to make things better.

One of the pioneers of technology for good is Jim Fruchterman, founder of the venerable tech nonprofit Benetech, for which he was awarded a MacArthur "Genius" award. Fruchterman has just published his first book, with MIT Press, in which he sums up a lifetime's experience in finding ways to improve the world with technology. Appropriately enough, it's called Technology For Good:

https://mitpress.mit.edu/9780262050975/technology-for-good/

After all, technology is so marvelously flexible that there's always a countertechnology for every abusive tech. Every 10-foot digital wall implies an 11-foot digital ladder.
Last month, I wrote about Echelon, a company that makes digitally connected exercise bikes, which had pushed a mandatory update to their customers' bikes that took away functionality they got for free and sold it back to them in inferior form:

https://pluralistic.net/2025/07/26/manifolds/#bark-chicken-bark

Repair hero Louis Rossmann – who is running a new, direct-action right to repair group named Fulu – offered a $20,000 bounty to anyone who could crack the firmware on an Echelon bike and create a disenshittified software stack that restored the original functionality:

https://www.youtube.com/watch?v=2zayHD4kfcA

In short order, app engineer Ricky Witherspoon had cracked it, and had a way to continue to use SyncSpin, his popular app for Echelon bikes, which had been shut out by Echelon's enshittification. However, as Witherspoon told 404 Media's Jason Koebler, he won't release his code, not even for a $20,000 bounty, because doing so would make him liable for a $500,000 fine and a five-year prison sentence under Section 1201 of the Digital Millennium Copyright Act:

https://www.404media.co/developer-unlocks-newly-enshittified-echelon-exercise-bikes-but-cant-legally-release-his-software/

Fulu paid Witherspoon anyway (they're good eggs). Witherspoon told Koebler:

For now it’s just about spreading awareness that this is possible, and that there’s another example of egregious behavior from a company like this […] if one day releasing this was made legal, I would absolutely open source this. I can legally talk about how I did this to a certain degree, and if someone else wants to do this, they can open source it if they want to.

Free/open source software is a powerful tonic against enshittification, and it has the alchemical property of transforming the products of bad companies into good utilities that everyone benefits from. One example of this is Whisper, an open source audio transcription model released by OpenAI. Since Whisper's release, free software hackers have made steady – even remarkable – improvements to it.

I discovered Whisper earlier this summer, when I couldn't locate a quote I'd heard on a recent podcast that I wanted to reference in a column. I installed Whisper on my laptop and fed it the last 30+ hours' worth of podcasts I'd listened to. An hour later, it had fully transcribed all of them, with timecode, and had put so little load on my laptop that the fan didn't even turn on. I was able to search all that text, locate the quote, and use the timecode to find the clip and check the transcription.

Whisper has turned extremely accurate transcription into a utility, something that can just be added to any program or operating system for free. I think this is going to be quietly revolutionary, bringing full-text search and captioning to audio and video as something we can just take for granted.

That's already happening! FFmpeg is the gold-standard free software tool for converting, encoding and re-encoding video, and now the latest version integrates Whisper, allowing FFmpeg to subtitle your videos on the fly:

https://www.theregister.com/2025/08/28/ffmpeg_8_huffman/

Whisper is an example of the "residue" that will be left behind when the AI bubble pops. All bubbles pop, after all, but not all bubbles leave behind a useful residue.
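As an aside, for anyone who wants to reproduce the transcribe-then-search trick described above, here's a minimal sketch using the open-source openai-whisper Python package (not necessarily the exact tooling used here). The folder path, filenames, model size, and search phrase are hypothetical placeholders; it assumes whisper is installed (pip install openai-whisper) and that ffmpeg is available for audio decoding.

```python
# Minimal sketch: transcribe a folder of podcast episodes with Whisper,
# then search the transcripts for a half-remembered phrase, with timecodes.
from pathlib import Path
import whisper

PODCAST_DIR = Path("~/podcasts").expanduser()   # hypothetical folder of MP3s
QUERY = "chokepoint capitalism"                 # hypothetical phrase you're hunting for

model = whisper.load_model("small")             # bigger models are slower but more accurate

for episode in sorted(PODCAST_DIR.glob("*.mp3")):
    result = model.transcribe(str(episode))
    # result["segments"] is a list of dicts with "start"/"end" offsets (seconds) and "text"
    for seg in result["segments"]:
        if QUERY.lower() in seg["text"].lower():
            minutes, seconds = divmod(int(seg["start"]), 60)
            print(f"{episode.name} @ {minutes:02d}:{seconds:02d}  {seg['text'].strip()}")
```

Everything runs locally, with nothing leaving your machine, which is part of what makes Whisper feel like a utility rather than a service.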
When crypto dies, its residue will be a few programmers who've developed secure coding habits in Rust, but besides that, all that will be left behind is terrible Austrian economics and worse monkey JPEGs:

https://pluralistic.net/2023/12/19/bubblenomics/#pop

But the free/open source code generated by stupid and/or evil projects often lives on long after those projects are forgotten. And lots (most) of free/open code is written for good purposes. Take Madeline, a platform for tracking loans made by co-operatives, produced by the Seed Commons, which is now used by financial co-ops around the world as they make "non-extractive investments in worker and community-owned businesses on the ground":

https://seedcommons.org/posts/digital-infrastructure-for-a-non-extractive-economy-the-story-of-madeline

Madeline (and Seed Commons) are among those bright lights that are easy to miss in these brutal and terrifying times.

And if that's not enough, there's always booze. If you're thinking of drowning your sorrows, you could do worse than to pour your brown liquor out of a decanter shaped like a giant Atari CX-10 joystick:

https://atari.com/products/atari-joystick-decanter-set

That's the kind of brand necrophilia that could really enhance a night's drinking.

Object permanence (permalink)

#20yrsago 5.25″ floppies make great CD sleeves https://web.archive.org/web/20050924144644/http://www.readymademag.com/feature_18_monkey.php

#20yrsago Hollywood can break down any door in Delhi https://web.archive.org/web/20050903065949/https://www.eff.org/deeplinks/archives/003943.php

#20yrsago Side-band attack tips virtual Blackjack dealer’s hand https://web.archive.org/web/20051119111417/https://haacked.com/archive/2005/08/29/9748.aspx

#20yrsago Judge to RIAA: Keep your “conference center” out of my court https://web.archive.org/web/20051001031307/http://www.godwinslaw.org/weblog/archive/2005/08/29/runaround-suits

#15yrsago Which ebook sellers will allow publishers and writers to opt out of DRM? https://www.publishersweekly.com/pw/by-topic/columns-and-blogs/cory-doctorow/article/44012-doctorow-s-first-law.html

#15yrsago 10 Rules for Radicals: Lessons from rogue archivist Carl Malamud https://public.resource.org/rules/

#15yrsago Homeowners’ associations: hives of petty authoritarianism https://web.archive.org/web/20100606170504/http://theweek.com/article/index/104150/top-7-insane-homeowners-association-rules

#15yrsago Lynd Ward’s wordless, Depression-era woodcut novels https://memex.craphound.com/2010/08/29/lynd-wards-wordless-depression-era-woodcut-novels/

#10yrsago Suit: Wells Fargo sent contractors to break into our house, loot family treasures rescued from Nazis https://theintercept.com/2015/08/28/wells-fargo-contractors-stole-family-heirlooms/

#10yrsago Texas doctor’s consent form for women seeking abortions https://memex.craphound.com/wp-content/uploads/2020/09/3kscWU5-2-scaled.jpg

#10yrsago Spear phishers with suspected ties to Russian government spoof fake EFF domain, attack White House https://www.eff.org/deeplinks/2015/08/new-spear-phishing-campaign-pretends-be-eff

#10yrsago Rowlf the dog gives a dramatic reading of “Grim Grinning Ghosts” https://www.youtube.com/watch?v=CPMTEJ_IAAU

#5yrsago California's preventable fires https://pluralistic.net/2020/08/29/chickenized-home-to-roost/#cal-burning

Upcoming appearances (permalink)

Ithaca: AD White keynote (Cornell), Sep 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/

DC: Enshittification at Politics and Prose, Oct 8 https://politics-prose.com/cory-doctorow-10825

NYC: Enshittification with Lina Khan (Brooklyn Public Library), Oct 9 https://www.bklynlibrary.org/calendar/cory-doctorow-discusses-central-library-dweck-20251009-0700pm

New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/

Chicago: Enshittification with Anand Giridharadas (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/

San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25

Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469

Recent appearances (permalink)

Cory Doctorow DESTROYS Enshittification (QAA Podcast) https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338

Divesting from Amazon’s Audible and the Fight for Digital Rights (Libro.fm) https://pocketcasts.com/podcasts/9349e8d0-a87f-013a-d8af-0acc26574db2/00e6cbcf-7f27-4589-a11e-93e4ab59c04b

The Utopias Podcast https://www.buzzsprout.com/2272465/episodes/17650124

Latest books (permalink)

"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).

"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).

"The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).

"The Internet Con": a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).

"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com).

"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe, 2022 (https://chokepointcapitalism.com).

Upcoming books (permalink)

"Canny Valley": a limited edition collection of the collages I create for Pluralistic, self-published, September 2025.

"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus and Giroux, October 7, 2025 (https://us.macmillan.com/books/9780374619329/enshittification/).

"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026.

"Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026.

"The Memex Method," Farrar, Straus and Giroux, 2026.

"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026.

Colophon (permalink)

Today's top sources:

Currently writing:

"The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. (747 words yesterday, 46239 words total). FIRST DRAFT COMPLETE

A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.

How to get Pluralistic:

Blog (no ads, tracking, or data-collection): Pluralistic.net

Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection): https://mamot.fr/@pluralistic

Medium (no ads, paywalled): https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

Tradeoffs Exist

And Denying That Has Corroded Public Discourse

AI Roundup 133: Nano banana

August 29, 2025.

Mass Intelligence

From GPT-5 to nano banana: everyone is getting access to powerful AI

Pluralistic: The capitalism of fools (28 Aug 2025)

Today's links

The capitalism of fools: Trump's mirror-world New Deal.
Hey look at this: Delights to delectate.
Object permanence: IBM's fabric design; Nixon Cthulhu; Surveillance capitalism is capitalism, with surveillance; Dismaland ad; Outdoor ed vs TB; Mathematicians' fave chalk.
Upcoming appearances: Where to find me.
Recent appearances: Where I've been.
Latest books: You keep readin' 'em, I'll keep writin' 'em.
Upcoming books: Like I said, I'll keep writin' 'em.
Colophon: All the rest.

The capitalism of fools (permalink)

As Trump rails against free trade, demands public ownership stakes in corporations that receive government funds, and (selectively) enforces antitrust law, some (stupid) people are wondering, "Is Trump a communist?"

In The American Prospect, David Dayen writes about the strange case of Trump's policies, which fly in the face of right-wing economic orthodoxy and have the superficial trappings of a leftist economic program:

https://prospect.org/economy/2025-08-28-judge-actually-existing-trump-economy/

The problem isn't that tariffs are always bad, nor is it that demanding state ownership stakes in structurally important companies that depend on public funds is bad policy. The problem is that Trump's version of these policies sucks, because everything Trump touches dies, and because he governs solely on vibes, half-remembered wisdom imparted by the last person who spoke to him, and the dying phantoms of old memories as they vanish beneath a thick bark of amyloid plaque.

Take Trump's demand for a 10% stake in Intel (a course of action endorsed by no less than Bernie Sanders). Intel is a company in trouble, whose financialization has left it dependent on other companies (notably TSMC) to make its most advanced chips. The company has hollowed itself out, jettisoning both manufacturing capacity and cash reserves, pissing away the funds thus freed up on stock buybacks and dividends.

Handing Trump a 10% "golden share" does nothing to improve Intel's serious structural problems. And if you take Trump at his word and accept that securing US access to advanced chips is a national security priority, Trump's Intel plan does nothing to advance that access. But it gets worse: Trump also says denying China access to these chips is a national security priority, but he greenlit Nvidia's plan to sell its top-of-the-range silicon to China in exchange for a gaudy statuette and a 15% export tax.

It's possible to pursue chip manufacturing as a matter of national industrial policy, and it's even possible to achieve this goal by taking ownership stakes in key firms – because it's often easier to demand corporate change via a board seat than it is to win the court battles needed to successfully invoke the Defense Production Act. The problem is that Trumpland is uninterested in making any of that happen. They just want a smash-and-grab and some red meat for the base: "Look, we made Intel squeal!"

Then there are the Trump tariffs.
Writing in VoxEU, Lausanne professor of international business Richard Baldwin traces the long and checkered history of using tariffs to incubate and nurture domestic production:

https://www.nakedcapitalism.com/2025/08/trumpian-tariffs-rerun-the-failed-strategy-of-import-substitution-industrialization.html

The theory of tariffs goes like this: if we make imports more expensive by imposing a tax on them (tariffs are taxes that are paid by consumers, after all), then domestic manufacturers will build factories and start manufacturing the foreign goods we've just raised prices on. This is called "import substitution," and it really has worked, but only in a few cases.

What do those cases have in common? They were part of a comprehensive program of "export discipline, state-directed credit, and careful government–business coordination":

https://academic.oup.com/book/10201

In other words, tariffs only work to reshore production where there is a lot of careful planning, diligent data-collection, and review. Governments have to provide credit to key firms to get them capitalized, provide incentives, and smack nonperformers around. Basically, this is the stuff that Biden did for renewables in the energy sector, and – to a lesser extent – for silicon with the CHIPS Act.

Trump's not doing any of that. He's just winging it. There's zero follow-through. It's all about appearances, soundbites, and the libidinal satisfaction of watching corporate titans bend the knee to your cult leader.

This is also how Trump approaches antitrust. When it comes to corporate power, both Trump's and Biden's antitrust enforcers are able to strike terror into the hearts of corporate behemoths. The difference is that the Biden administration prioritized monopolists based on how harmful they were to the American people and the American economy, whereas Trump's trustbusters target companies based on whether Trump is mad at them:

https://pluralistic.net/2024/11/12/the-enemy-of-your-enemy/#is-your-enemy

What's more, any company willing to hand a million or two to a top Trump enforcer can just walk away from the charges:

https://prospect.org/power/2025-08-19-doj-insider-blows-whistle-pay-to-play-antitrust-corruption/

In her 2023 book Doppelganger, Naomi Klein introduces the idea of a right-wing "mirror world" that offers a conspiratorial, unhinged version of actual problems that leftists wrestle with:

https://pluralistic.net/2023/09/05/not-that-naomi/#if-the-naomi-be-klein-youre-doing-just-fine

For example, the antivax movement claims that pharma companies operate on the basis of unchecked greed, without regard to the harm their defective products cause to everyday people. When they talk about this, they sound an awful lot like leftists who are angry that the Sacklers killed a million Americans with their opioids and then walked away with billions of dollars:

https://pluralistic.net/2023/12/05/third-party-nonconsensual-releases/#au-recherche-du-pedos-perdue

Then there are the conspiracy theories about voting machines.
Progressives have been sounding the alarm about the security defects in voting machines since the Bush v Gore years, but that doesn't mean that Venezuelan hackers stole the 2020 election for Biden:

https://pluralistic.net/2021/01/11/seeing-things/#ess

When anti-15-minute-city weirdos warn that automated license-plate cameras are a gift to tyrants both petty and gross, they are repeating a warning that leftists have sounded since the Patriot Act:

https://locusmag.com/2023/05/commentary-cory-doctorow-the-swivel-eyed-loons-have-a-point/

The mirror-world is a world where real problems (the rampant sexual abuse of children by powerful people and authority figures) are met with fake solutions (shooting up pizza parlors and transferring Ghislaine Maxwell to a country-club prison):

https://www.bbc.com/news/articles/czd049y2qymo

Most of the people stuck in the mirror world are poor and powerless, because desperation makes you an easy mark for grifters peddling conspiracy theories. But Trump's policies on corporate power are what happens in the mirror world inhabited by the rich and powerful.

Trump is risking the economic future of every person in America (except a few cronies), but that's not the only risk here. There's also the risk that reasonable people will come to view industrial policy, government stakes in publicly supported companies, and antitrust as reckless showboating, a tactic exclusively belonging to right-wing nutjobs and would-be dictators.

Sociologists have a name for this: they call it "schismogenesis," when a group defines itself in opposition to its rivals. Schismogenesis is progressives insisting that voting machines and pharma companies are trustworthy and that James Comey is a resistance hero:

https://pluralistic.net/2021/12/18/schizmogenesis/

After we get rid of Trump, America will be in tatters. We're going to need big, muscular state action to revive the nation and rebuild its economy. We can't afford to let Trump poison the well for the very idea of state intervention in corporate activity.

Hey look at this (permalink)

Thinking Ahead to the Full Military Takeover of Cities https://www.hamiltonnolan.com/p/thinking-ahead-to-the-full-military

Framework is working on a giant haptic touchpad, Trackpoint nub, and eGPU for its laptops https://www.theverge.com/news/766161/framework-egpu-haptic-touchpad-trackpoint-nub

National says "fuck you" on the right to repair https://norightturn.blogspot.com/2025/08/national-says-fuck-you-on-right-to.html?m=1

Tax the Rich. They’ll Stay https://www.rollingstone.com/politics/political-commentary/zohran-mamdani-tax-rich-new-york-city-1235414327/

Welcome to the Free Online Tax Preparation Feedback Survey https://irsresearch.gov1.qualtrics.com/jfe/form/SV_ewDJ6DeBj3ockGa

Object permanence (permalink)

#20yrsago Cops have to pay $41k for stopping man from videoing them https://web.archive.org/web/20050905015507/http://www.paed.uscourts.gov/documents/opinions/05D0847P.pdf

#20yrsago Commercial music in podcasts: the end of free expression? https://memex.craphound.com/2005/08/26/commercial-music-in-podcasts-the-end-of-free-expression/

#10yrsago North Dakota cops can now use lobbyist-approved taser/pepper-spray drones https://www.thedailybeast.com/first-state-legalizes-taser-drones-for-cops-thanks-to-a-lobbyist/

#10yrsago Illinois mayor appoints failed censor to town library board https://ncac.org/news/blog/mayor-appoints-would-be-censor-to-library-board

#10yrsago IBM’s lost, glorious fabric design https://collection.cooperhewitt.org/users/mepelman/visits/qtxg/87597377/

#10yrsago Former mayor of SLC suing NSA for warrantless Olympic surveillance https://www.techdirt.com/2015/08/26/prominent-salt-lake-city-residents-sue-nsa-over-mass-warrantless-surveillance-during-2002-olympics/

#10yrsago Health’s unkillable urban legend: “You must drink 8 glasses of water/day” https://www.nytimes.com/2015/08/25/upshot/no-you-do-not-have-to-drink-8-glasses-of-water-a-day.html?_r=0

#10yrsago Austin Grossman’s CROOKED: the awful, cthulhoid truth about Richard Nixon https://memex.craphound.com/2015/08/26/austin-grossmans-crooked-the-awful-cthulhoid-truth-about-richard-nixon/

#10yrsago After Katrina, FBI prioritized cellphone surveillance https://www.muckrock.com/news/archives/2015/aug/27/stingray-katrina/

#10yrsago Germany’s spy agency gave the NSA the private data of German citizens in exchange for Xkeyscore access https://www.zeit.de/digital/datenschutz/2015-08/xkeyscore-nsa-domestic-intelligence-agency

#10yrsago Elaborate spear-phishing attempt against global Iranian and free speech activists, including an EFF staffer https://citizenlab.ca/2015/08/iran_two_factor_phishing/

#10yrsago Commercial for Banksy’s Dismaland https://www.youtube.com/watch?v=V2NG-MgHqEk

#5yrsago Outdoor education beat TB in 1907 https://pluralistic.net/2020/08/27/cult-chalk/#tb

#5yrsago Hagoromo, mathematicians' cult chalk https://pluralistic.net/2020/08/27/cult-chalk/#hagoromo

#5yrsago Principles for platform regulation https://pluralistic.net/2020/08/27/cult-chalk/#eff-eu

#5yrsago It's blursday https://pluralistic.net/2020/08/26/destroy-surveillance-capitalism/#blursday

#5yrsago Surveillance Capitalism is just capitalism, plus surveillance https://pluralistic.net/2020/08/26/destroy-surveillance-capitalism/#surveillance-monopolism

Upcoming appearances (permalink)

Ithaca: AD White keynote (Cornell), Sep 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/

DC: Enshittification at Politics and Prose, Oct 8 https://politics-prose.com/cory-doctorow-10825

New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/

Chicago: Enshittification with Kara Swisher (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/

San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25

Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469

Recent appearances (permalink)

Divesting from Amazon’s Audible and the Fight for Digital Rights (Libro.fm) https://pocketcasts.com/podcasts/9349e8d0-a87f-013a-d8af-0acc26574db2/00e6cbcf-7f27-4589-a11e-93e4ab59c04b

The Utopias Podcast https://www.buzzsprout.com/2272465/episodes/17650124

Tariffs vs IP Law (Firewalls Don't Stop Dragons) https://www.youtube.com/watch?v=LFABFe-5-uQ

Latest books (permalink)

"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).

"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).

"The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).

"The Internet Con": a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).

"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com).

"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe, 2022 (https://chokepointcapitalism.com).

Upcoming books (permalink)

"Canny Valley": a limited edition collection of the collages I create for Pluralistic, self-published, September 2025.

"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus and Giroux, October 7, 2025 (https://us.macmillan.com/books/9780374619329/enshittification/).

"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026.

"Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026.

"The Memex Method," Farrar, Straus and Giroux, 2026.

"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026.

Colophon (permalink)

Today's top sources:

Currently writing:

"The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. (1090 words yesterday, 45491 words total).

A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
