Vision transformers (ViTs) have seen an incredible rise in the past four years. They have an obvious upside: in a visual recognition setting, the receptive field of a pure ViT is effectively the entire image 1. This comes at a cost, however: vanilla ViTs maintain the quadratic time complexity (w.r.t. the number of input patches) of language models with dense attention. Kernels in convolutional networks, on the other hand, have the property of being invariant to the input pixel/voxel they are applied to, a feature that is typically referred to as translation equivariance. This is desirable because it allows the model to effectively recognize patterns and objects regardless of where they are located spatially. The weight sharing present in convolutional layers also makes convnets highly parameter-efficient and less prone to overfitting - a property ViTs do not have. As such, you might expect that ViTs and convnets are used equally in production environments that leverage visual models - ViTs for “global”...
a year ago

More from Frank’s Ramblings

a16z Blogs Are Just Glorified Marketing

… glorified marketing for portfolio companies, that is I came across one of a16z’s blog posts on Hacker News today, titled Emerging Architectures for LLM Applications. For folks who didn’t catch it, here’s the tl;dr: The emerging LLM stack is composed of several elements centered around data orchestration tools such as Langchain and Llamaindex. Data pipelines, embedding models, vector databases, and queries form the primary input for these orchestration tools. The stack is based on in-context learning, where off-the-shelf LLMs are used and their behavior is controlled through prompting and conditioning on contextual data. Strategies for prompting LLMs are becoming increasingly complex and are a core differentiating factor for both closed-source and open-source LLMs. Of these LLMs, strategies for GPT-3.5 and GPT-4 are most common, seeing as OpenAI is the current leader. AI agents - programmatic runtimes that can reason and plan - excite both developers and researchers alike, but don’t work just yet. Most agent frameworks are currently in PoC phase. Overall, I thought the article was informative, but I was surprised that the section on vector databases mentions neither Milvus nor Zilliz, especially since Milvus was mentioned in an older a16z blog on data and ML infrastructure: Also of note: another Zilliz project (GPTCache) is listed in the post. My initial instinct was that Milvus was left off because it is part of the LF AI & Data Foundation rather than being a project wholly owned by Zilliz, so I left a comment on the HN post that links back to the Milvus website. I came back a couple of hours later to find an interesting take: Full disclosure: we (Zilliz) raised $103M back in 2022, and Pinecone raised $100M this April. Running it back in my head, I felt that SheepHerdr’s response actually made excellent sense - a16z’s ultimate goal is to generate returns for LPs, and the best way to do that is by supporting founders and propping up their portfolio companies. 
To me, this is also unequivocally unfair to Vespa, Weaviate, etc as it delivers a subliminal message that they have no realistic long-term chance in the vector database space relative to Pinecone. This, of course, is absolute nonsense: vector databases are NOT a zero-sum game. I dove a bit deeper and was surprised to find that this is fairly commonplace behavior for a16z as a firm: The aforementioned article also lists Databricks in the “Data Pipelines” section, but not Snowflake. There is a Snowflake loader for Langchain and a guide for using Llamaindex with Snowflake. Databricks is an a16z portfolio company. The Modern Transactional Stack doesn’t come close to listing all of the available data connectors. To be fair, Airbyte and Fivetran (an a16z portfolio company) are the two largest and most well-known, but to distill the entire segment to just two companies seems unfair. a16z’s crypto division has backed LayerZero, going as far as actively voting against Wormhole, a LayerZero competitor. Side note: LayerZero was also featured in a16z’s Crypto Startup School. These are just three random examples I dug out - there are probably many other examples in verticals that I am unfamiliar with. Other LLM/GenAI Infrastructure landscapes Here are a couple of alternative landscapes that are, in my eyes, more representative: ML/AI/Data Landscape (Interactive version). Matt Turck’s MAD Landscape is arguably the most complete out there. Companies that do vector search are listed under “Infrastructure/Vector Database” and “Analytics/Enterprise Search” categories. It was released in February 2023, so it’s about 4 months old, but a good resource nonetheless. Future of AI-Native Infrastructure. This one’s from Wei Lien Dang and David Hershey of Unusual Ventures. I found this pretty unique as it has a vertical for AI agents. It’s unfortunately not as complete as the MAD Landscape (missing Vespa, Vectara, etc), but still a good overview. The New Language Model Stack. 
Sequoia Capital’s blog post on the LLM stack is also excellent. Milvus isn’t in the diagram, but it’s mentioned in the section on vector databases. Vector Database Landscape. Yingjun Wu’s infographic is centered specifically around vector search infrastructure. Final thoughts I have tremendous respect for a16z, a firm that helped pioneer the practice of working with and nurturing founders rather than forcing them out pre-IPO or minmaxing term sheets. Their content is also incredibly informative and valuable for understanding the nuances of building a company, from finding PMF to hiring executives. I also wholeheartedly understand a16z’s motivation for sharing knowledge and highlighting their portfolio companies, but to do so under the guise of being helpful and impartial is just plain silly. In particular, a16z’s blog post yesterday has as much to do with emerging strategies for portfolio company marketing as it does with emerging architectures for LLM applications. This practice would be somewhat analogous to Google putting paid URLs at the very top of search results without an “Ad” label. (To be clear, Google doesn’t do this.) I’d like to end with some glorified marketing of my own: % pip install milvus

a year ago 27 votes
Hierarchical Navigable Small Worlds (HNSW)

(Note: A version of this post has been cross-published to the Zilliz blog) In a previous blog, we took a look at scalar quantization and product quantization - two indexing strategies which are used to reduce the overall size of the database without reducing the scope of a search. To better illustrate how scalar quantization and product quantization work, we also implemented our own versions in Python. In this tutorial, we’ll build on top of that knowledge by looking at what is perhaps the most commonly used vector indexing algorithm today: Hierarchical Navigable Small Worlds (HNSW). HNSW performs very well when it comes to both speed and accuracy, making it an incredibly robust vector search algorithm. Despite its popularity, understanding HNSW can be a bit tricky. In the next couple of sections, we’ll break down HNSW into its individual steps, developing our own simple implementation along the way. HNSW basics Recall from a previous post that there are four different types of vector search indexes: hash-based, tree-based, cluster-based, and graph-based. HNSW fits firmly into the lattermost category, combining two core concepts - the skip list and Navigable Small World (NSW). Let’s first dive into these two concepts individually before discussing HNSW. Skip list overview First up: skip lists. Recall the venerable linked list - a well-known data structure where each element in the list maintains a pointer to the next element. Although linked lists work great for implementing LIFO and FIFO data structures such as stacks and queues, a major downside is their time complexity when it comes to random access: O(n). Skip lists aim to solve this problem by introducing additional layers, allowing for O(log n) random access time complexity, at the cost of extra memory (O(n log n) space complexity as opposed to O(n) for a normal linked list) and a bit of runtime overhead for inserts and deletes. 
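The space/time tradeoff above is easy to see in a toy, array-backed sketch. This is my own illustration, not code from the post: each layer is a plain sorted list, promotion uses probability p, and a real skip list would use linked nodes with down pointers rather than bisect to find the resume point.

```python
import random
from bisect import bisect_left

def build_skip_list(values, p=0.5, n_layers=4, seed=0):
    """Toy skip list: layer 0 is the full sorted list; each element is
    promoted to the next layer up with probability p."""
    rng = random.Random(seed)
    layers = [sorted(values)]
    for _ in range(n_layers - 1):
        layers.append([v for v in layers[-1] if rng.random() < p])
    return layers

def skip_search(layers, target):
    """Search from the sparsest layer down. Each layer is scanned only from
    the predecessor found in the layer above - the hop-then-descend behavior
    that gives skip lists O(log n) expected search."""
    start = None  # best predecessor (<= target) found so far
    for layer in reversed(layers):
        i = 0 if start is None else bisect_left(layer, start)
        while i < len(layer) and layer[i] <= target:
            start = layer[i]
            i += 1
    return start == target
```

Printing the layer sizes after `build_skip_list(range(100))` shows each layer holding roughly half the elements of the one below it, which is exactly the picture described next.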
A skip list is essentially a multi-level linked list, where the upper levels maintain long connections. As we move down the layers, the connections become shorter and shorter, with the bottommost layer being the “original” linked list containing all of the elements. The image below illustrates this: The skip list, illustrated. Higher layers have fewer elements. To reach element i in a skip list, we first start at the highest layer. Once we find a node that corresponds to an element in the list that is greater than i, we then backtrack to the previous node and move to the layer below. This continues all the way until we’ve found the element we’re looking for. Note that skip lists only work for sorted lists, as we need a way to directly compare the magnitude of two objects. Inserts work probabilistically. For any new element, we first need to figure out the first (highest) layer in which the element appears. The uppermost layer has the lowest probability, with increasing probability as we move down in layers. The general rule is that any element in a layer will appear in the layer above it with some pre-defined probability p. Therefore, if an element first appears in some layer l, it will also get added to layers l-1, l-2, and so on. Note that, while it is possible to have a terribly balanced skip list that performs no better than a standard linked list, the probability of this happening is incredibly low. What the heck is a Navigable Small World? Now that we’ve gotten skip lists out of the way, let’s take some time to talk about Navigable Small Worlds. The general idea here is to first imagine a large number of nodes in a network. Each node will have short-, medium-, and long-range connections to other nodes. When performing a search, we’ll first begin at some pre-defined entry point. From there, we’ll evaluate connections to other nodes, and jump to the one closest to the one we hope to find. This process repeats until we’ve found our nearest neighbor. 
This type of search is called greedy search. For small NSWs in the hundreds or thousands of nodes, this algorithm works, but it tends to break down for much larger NSWs. We can fix this by increasing the average number of short-, medium-, and long-range connections for each node, but this increases the overall complexity of the network and results in longer search times. In the absolute “worst” case, where each node is connected to every other node in our dataset, NSW is no better than naïve (linear) search. NSWs are cool and all, but how does this relate to vector search? The idea here is to imagine all vectors in our dataset as points in an NSW, with long-range connections being defined by vectors which are dissimilar from one another and the opposite for short-range connections. Recall that vector similarity scores are measured with a similarity metric - typically L2 distance or inner product for floating point vectors and Jaccard or Hamming distance for binary vectors. By constructing an NSW with dataset vectors as vertices, we can effectively perform nearest neighbor search by simply greedily traversing the NSW towards vertices closer and closer to our query vector. HNSW, explained When it comes to vector search, we often have dataset sizes in the hundreds of millions or even billions of vectors. Plain NSWs are less effective at this scale, so we’ll need a better graph. HNSW extends NSW by borrowing from the concept of skip lists. Like the skip list, HNSW maintains multiple layers (hence the term Hierarchical Navigable Small World), only of NSWs instead of linked lists. The uppermost layer of an HNSW graph has few nodes and the longest links, while the bottommost layer has all nodes and the shortest links. During the search process, we enter a pre-defined point in the uppermost layer and greedily route ourselves towards the nearest neighbor to our query vector. Once we reach the nearest node, we then move to the second layer and repeat this process. 
This continues until we’ve reached our nearest neighbor. A diagram from the HNSW paper which visualizes the layered graph concept. Inserts work similarly to the skip list. For some vector v, we first traverse the first layer of the graph, finding its nearest neighbor before moving to the layer below it. We then traverse the graph again to find its nearest neighbor in the second layer. This process continues until we’ve reached the nearest neighbor in the bottommost graph. From here, we then need to determine which links (connections between vertices) to create. Again, we have a pre-defined parameter M which determines the maximum number of bidirectional links that we can add. These links are usually simply set as the nearest neighbors to v, but other heuristics can be used as well. The same process then repeats for the upper layers, assuming the vector appears there. As with the skip list, the inserted vector will appear in upper layers with exponentially decreasing probability. Specifically, the HNSW paper uses the equation floor(-ln(rand(0, 1)) * mL), where rand(0, 1) is a random number sampled from a uniform distribution over (0, 1] and mL is a normalization constant. Note how this does not actually constrain the minimum distance between any two vertices/vectors in a particular layer - it’s entirely possible that we end up with a poorly constructed graph, but the probability that this happens is incredibly low, especially as we scale up the number of vectors in the HNSW index. Implementing HNSW HNSW is not trivial to implement, so we’ll implement only a very basic version here. As usual, let’s start with creating a dataset of (128 dimensional) vectors:

>>> import numpy as np
>>> dataset = np.random.normal(size=(1000, 128))

The first step is to build the HNSW index. To do so, we’ll need to add each vector in our dataset one-by-one. Let’s first create a data structure to hold our index. 
In this basic example, we’ll use a list of lists to represent the index, with the inner lists corresponding to each layer/graph:

>>> L = 5  # 5-layer HNSW
>>> index = [[] for _ in range(L)]

Every element in each graph is a 3-tuple containing the vector, a list of indexes that the vector links to within the graph, and the index for the corresponding node in the layer below it. For the bottommost layer, the third element of the 3-tuple will be set to None. Since every insert first requires a search for the nearest neighbor in the graph, let’s implement that first. We can traverse any of the subgraphs in the index like so:

def _search_layer(graph, entry, query, ef=1):

    best = (np.linalg.norm(graph[entry][0] - query), entry)

    nns = [best]
    visit = {best}  # set of visited nodes
    candid = [best]  # candidate nodes to insert into nearest neighbors
    heapify(candid)

    # find top-k nearest neighbors
    while candid:
        cv = heappop(candid)

        # stop if the closest candidate is farther than the worst neighbor found
        if cv[0] > nns[-1][0]:
            break

        # loop through all nearest neighbors to the candidate vector
        for e in graph[cv[1]][1]:
            d = np.linalg.norm(graph[e][0] - query)
            if (d, e) not in visit:
                visit.add((d, e))

                # push only "better" vectors into candidate heap
                if d < nns[-1][0] or len(nns) < ef:
                    heappush(candid, (d, e))
                    insort(nns, (d, e))
                    if len(nns) > ef:
                        nns.pop()

    return nns

This code snippet is a bit more involved, but it’s much easier to understand with a bit of explanation. Here, we use a heap to implement a priority queue, which we use to order nearest neighbor vectors in the graph. Like all of the previous examples, I’m using L2 distance here, but this code can be extended to other distance metrics as well. We first populate the heap with the entry point. Here, all we’re doing is implementing greedy search. At every iteration, our goal is to update two variables: nns, our output list of nearest neighbors, and candid, a heap of candidate points. 
We evaluate all nearest neighbors to the “best” vector in candid, adding better (closer to the query vector) vectors to the output list of nearest neighbors as well as to the heap of candidate points for evaluation on the next iteration. This repeats until one of two stopping conditions is reached: we either run out of candidate points to evaluate, or we’ve determined that we can no longer do any better than what we already have. With top-k graph search out of the way, we can now implement the top-level search function for searching the entire HNSW index:

def search(index, query, ef=1):

    # if the index is empty, return an empty list
    if not index[0]:
        return []

    best_v = 0  # set the initial best vertex to the entry point
    for graph in index:
        best_d, best_v = _search_layer(graph, best_v, query, ef=1)[0]
        if graph[best_v][2] is not None:
            best_v = graph[best_v][2]
        else:
            return _search_layer(graph, best_v, query, ef=ef)

We first start at the entry point (zeroth element in the uppermost graph), and search for the nearest neighbor in each layer of the index until we reach the bottommost layer. Recall that the final element of the 3-tuple resolves to None only in the bottommost layer - this is what the final if statement checks for (note the explicit is not None comparison, since a node index of 0 is falsy). Once we reach the bottommost layer, we search the graph using best_v as the entry point. Let’s go back to the HNSW insert. We’ll first need to figure out which layer to insert our new vector into. This is fairly straightforward:

def _get_insert_layer(L, mL):
    # mL is a multiplicative factor used to normalize the distribution
    l = -int(np.log(np.random.random()) * mL)
    return min(l, L-1)

With everything in place, we can now implement the insertion function. 
def insert(index, vec, efc=10):

    # if the index is empty, insert the vector into all layers and return
    if not index[0]:
        i = None
        for graph in index[::-1]:
            graph.append((vec, [], i))
            i = 0
        return

    l = _get_insert_layer(L, 1/np.log(L))

    start_v = 0
    for n, graph in enumerate(index):

        # perform insertion for layers [l, L) only
        if n < l:
            _, start_v = _search_layer(graph, start_v, vec, ef=1)[0]
        else:
            node = (vec, [], len(index[n+1]) if n < L-1 else None)
            nns = _search_layer(graph, start_v, vec, ef=efc)
            for nn in nns:
                node[1].append(nn[1])  # outbound connections to NNs
                graph[nn[1]][1].append(len(graph))  # inbound connections to node
            graph.append(node)

        # set the starting vertex to the nearest neighbor in the next layer
        start_v = graph[start_v][2]

If the index is empty, we’ll insert vec into all layers and return immediately. This serves to initialize the index and allow for successful insertions later. If the index has already been populated, we begin insertion by first computing the insertion layer via the _get_insert_layer function we implemented in the previous step. From there, we find the nearest neighbor to the vector in the uppermost graph. This process continues for the layers below it until we reach layer l, the insertion layer. For layer l and all those below it, we first find the nearest neighbors to vec up to a pre-determined number efc. We then create connections from the node to its nearest neighbors and vice versa. Note that a proper implementation should also have a pruning technique to prevent early vectors from being connected to too many others - I’ll leave that as an exercise for the reader. We now have both search (query) and insert functionality complete. 
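Before wrapping everything in a class, the layer-assignment rule floor(-ln(rand(0, 1)) * mL) is easy to sanity-check on its own. The sketch below is my own illustration (the mL value is chosen for convenience, not taken from the post): with mL = 1/ln(2), layer 0 should be drawn about half the time, layer 1 a quarter of the time, and so on - the same geometric decay as skip-list promotion.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_layer(mL):
    # floor(-ln(u) * mL) for u ~ Uniform(0, 1]; layer l is drawn with
    # geometrically decreasing probability p^l * (1 - p), p = exp(-1/mL)
    u = rng.random()
    return int(-np.log(1 - u) * mL)  # 1 - u keeps the argument in (0, 1]

# with mL = 1/ln(2): ~1/2 of draws are layer 0, ~1/4 layer 1, ~1/8 layer 2...
mL = 1 / np.log(2)
counts = np.bincount([sample_layer(mL) for _ in range(100_000)])
```

Plotting or printing `counts` shows the expected halving from one layer to the next, which is what keeps the sparser layers sparse.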
Let’s combine everything together in a class:

from bisect import insort
from heapq import heapify, heappop, heappush

import numpy as np

from ._base import _BaseIndex


class HNSW(_BaseIndex):

    def __init__(self, L=5, mL=0.62, efc=10):
        self._L = L
        self._mL = mL
        self._efc = efc
        self._index = [[] for _ in range(L)]

    @staticmethod
    def _search_layer(graph, entry, query, ef=1):

        best = (np.linalg.norm(graph[entry][0] - query), entry)

        nns = [best]
        visit = {best}  # set of visited nodes
        candid = [best]  # candidate nodes to insert into nearest neighbors
        heapify(candid)

        # find top-k nearest neighbors
        while candid:
            cv = heappop(candid)

            # stop if the closest candidate is farther than the worst neighbor found
            if cv[0] > nns[-1][0]:
                break

            # loop through all nearest neighbors to the candidate vector
            for e in graph[cv[1]][1]:
                d = np.linalg.norm(graph[e][0] - query)
                if (d, e) not in visit:
                    visit.add((d, e))

                    # push only "better" vectors into candidate heap
                    if d < nns[-1][0] or len(nns) < ef:
                        heappush(candid, (d, e))
                        insort(nns, (d, e))
                        if len(nns) > ef:
                            nns.pop()

        return nns

    def create(self, dataset):
        for v in dataset:
            self.insert(v)

    def search(self, query, ef=1):

        # if the index is empty, return an empty list
        if not self._index[0]:
            return []

        best_v = 0  # set the initial best vertex to the entry point
        for graph in self._index:
            best_d, best_v = HNSW._search_layer(graph, best_v, query, ef=1)[0]
            if graph[best_v][2] is not None:
                best_v = graph[best_v][2]
            else:
                return HNSW._search_layer(graph, best_v, query, ef=ef)

    def _get_insert_layer(self):
        # mL is a multiplicative factor used to normalize the distribution
        l = -int(np.log(np.random.random()) * self._mL)
        return min(l, self._L-1)

    def insert(self, vec, efc=10):

        # if the index is empty, insert the vector into all layers and return
        if not self._index[0]:
            i = None
            for graph in self._index[::-1]:
                graph.append((vec, [], i))
                i = 0
            return

        l = self._get_insert_layer()

        start_v = 0
        for n, graph in enumerate(self._index):

            # perform insertion for layers [l, L) only
            if n < l:
                _, start_v = self._search_layer(graph, start_v, vec, ef=1)[0]
            else:
                node = (vec, [], len(self._index[n+1]) if n < self._L-1 else None)
                nns = self._search_layer(graph, start_v, vec, ef=efc)
                for nn in nns:
                    node[1].append(nn[1])  # outbound connections to NNs
                    graph[nn[1]][1].append(len(graph))  # inbound connections to node
                graph.append(node)

            # set the starting vertex to the nearest neighbor in the next layer
            start_v = graph[start_v][2]

Boom, done! All code for this tutorial can be accessed on Github: https://github.com/fzliu/vector-search.
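One practical footnote: the standard way to validate an approximate index like this one is to measure recall against an exhaustive scan. Here is a minimal harness to that end - the function names and the recall-at-k definition are my own, not part of the linked repo:

```python
import numpy as np

def brute_force_knn(dataset, query, k=1):
    """Exact nearest neighbors by exhaustive L2 scan - the ground truth any
    approximate index (like the HNSW above) should be measured against."""
    dists = np.linalg.norm(dataset - query, axis=1)
    return np.argsort(dists)[:k]

def recall_at_k(approx_ids, exact_ids):
    """Fraction of the exact top-k that the approximate result recovered."""
    return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)
```

Running a few hundred random queries through both paths and averaging recall_at_k is usually enough to catch bugs in graph construction long before they show up in benchmarks.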

a year ago 20 votes
My Experience Living and Working in China, Part II: COVID Stories

In this four-part article, I’ll go over some of the lessons I learned living and doing business in China’s tech industry. During my time in China, I’ve led a team of 10+ engineers to develop a location-based IoT and sensing platform, co-founded an open-source project called Towhee, and developed countless relationships with folks in a number of different cities (many of whom I now consider good friends). I’ll go over some of the common misconceptions about China ranging from living and working in China to the government’s pandemic response. I originally intended for part II of this blog post to cover the tech industry in more detail (996, CSDN, open-source, etc…), but given the current spike in COVID cases these past two weeks plus the current lockdown in Shanghai, I felt it was more appropriate to first cover pandemic life in China. As always, if you have any questions, comments, or concerns, feel free to connect with me on Twitter or LinkedIn. Thanks for reading! Before reading this blog post, I strongly recommend you read part I if you haven’t yet. Part I received much more exposure than I had anticipated; I received a lot of positive feedback and I enjoyed reading many of the responses, especially those which provided a different outlook on China and its citizens. While I had originally intended for part II to cover China’s tech industry, I decided to instead cover China’s handling of the pandemic first, given Shanghai’s current lockdown. Stocking up on food right before the Shanghai lockdown in March 2022. I imagine the conversation with supermarket staff went something like this: Q - What kind of ramen would you like? A - All of it. A couple of words before steaming ahead into part II: 1) This article will be focused around three pandemic stories from China which will depict how China’s zero-COVID policy has affected Chinese citizens. 
These purely anecdotal stories are not meant to directly prove a point or argue a cause; rather, my hope is that they can provide a “boots-on-the-ground” perspective for readers unfamiliar with life in China (and, to a lesser extent, other east Asian countries) during COVID. 2) Recent articles with visualizations which conveniently ignore certain population segments (or other statistical anomalies) have unfortunately reduced my faith in “data-driven” articles1. As such, a small portion of this blog post will be dedicated to picking apart pure data-driven arguments against China’s COVID statistics. 3) Lastly, I’d like to remind everyone to keep comments civil. I was subjected to a private but fairly negative personal attack from one of the readers over the Personal identity section in part I. As the post’s author, I’m fine with it, but do not subject other readers and community members to the same or similar treatment - it’s irresponsible and does nothing to improve the quality of the discussion. With that said, let’s dive in. Three pandemic stories It would be easy for me to simply “tell you” how the pandemic has changed China; instead, I’d like to start this blog post with three “pandemic stories” - short excerpts which highlight the scope with which China’s zero-COVID policy has affected the population. Alex’s story Alex (I’ve used an alias to ensure privacy) is a Taiwanese expat working in Shanghai. Her story is a bit unique given her background - she’s been in Shanghai since early 2020 and, due to quarantine policies on both sides of the Taiwan strait, hasn’t been back home in over two years. More on this in a bit. When Alex flew from Taiwan to Shanghai in February of 2020, she immediately found herself in unfamiliar territory. 
Streets were nearly completely empty, and the few folks who did wander outside were tightly masked. The only businesses open were supermarkets, which were required by central government policy to have workers and/or guards standing by entrances, recording everybody’s name, ID number, phone number, and body temperature. News on Wuhan and the COVID-related restrictions popping up around the country was being constantly broadcast by state-run media. Red propaganda posters filled the streets, warning the general populace to remain masked and to stay away from wild game (野味). As time progressed, it became clear to Alex that, while people living in Western countries had lost jobs and loved ones, people living in China lost significant freedom and social capital in a country already short on both. In a culture that prides itself on family and connectivity - especially during Lunar New Year 2 - not returning home for over two years is borderline criminal. However, for Alex, this was not by choice. The policy for foreign travelers entering Taiwan is 14 days of quarantine, while the policy for travelers entering Shanghai is 14 days of hotel quarantine plus another 7 days at home. Because no human contact is allowed during the entire quarantine period, these quarantine periods are generally referred to as isolation (隔离) in Mandarin. For Alex, spending over a month in quarantine/isolation would simply be unacceptable, especially as the rest of her co-workers are all in-office. Two years away from Taiwan also resulted in the loss of something known as household registration (户籍). Although it may not seem like a big deal, household registration is significantly more meaningful in Taiwan than residency in the USA or Canada - everything from local health insurance to voting rights is impacted by the loss or acquisition of household residency. While she’s still in Shanghai today, she remains hopeful for the opportunity to return home to Taiwan later this year. 
Although the strict COVID policies have soured her attitudes toward working and living in mainland China, her views on the citizens of Shanghai and Cross-Strait relations remain positive. Ding Liren’s story Ding Liren (Ding is his family name; this is the standard naming convention in China) is China’s top-rated chess player. He’s currently world number 2 behind Magnus Carlsen of Norway3. A bit about competitive chess before I continue. The World Chess Championship is almost universally recognized as the world’s premier chess tournament. It is a head-to-head match between the reigning world champion and a challenger. In modern times, the challenger has been determined via a biennial 8-player tournament called the Candidates Tournament. Ding first participated in 2018’s tournament, placing 4th of 8. It was a decent showing, but after an unbeaten streak of 100 games ending in late 2018 and a win in the 2019 Sinquefield Cup (where he beat top-rated Magnus Carlsen in playoffs), he was widely considered to be one of the favorites in the 2020 Candidates Tournament (along with Fabiano Caruana, the winner of the 2018 Candidates Tournament). Early 2020 is where Ding’s story takes a turn for the worse. The 2020 Candidates Tournament was scheduled to take place mid-March in Yekaterinburg, Russia. Upon entry, the Russian government decided to put Ding into quarantine in a rural cottage near Moscow due to the COVID pandemic. This quarantine took an incredible mental toll on Ding, putting him in a tie for last place after 7 of 14 rounds. After the 7th round, FIDE, chess’s governing body, decided to suspend play to mitigate the spread of COVID. When play resumed mid-April 2021, Ding (who did not have to quarantine this time around) looked to be back in top form, winning his final three games of the tournament, one of which was over the eventual challenger, Ian Nepomniachtchi. 
In a game where draws are incredibly common at the highest level of play, three wins in a row can be considered a major accomplishment in and of itself. The story doesn’t quite end there. With Ian bombing out of the 2022 World Chess Championship match with Magnus, Ding is once again widely considered to be one of the favorites to win the 2022 Candidates Tournament… if he could actually qualify for it. The top two finishers of the 2021 Chess World Cup, 2021 Grand Swiss Tournament, and 2022 Grand Prix are given berths into the FIDE Candidates Tournament4. Although Ding was invited to and had planned on participating in all three of the aforementioned tournaments, he ended up being unable to attend any of them due to a combination of China’s zero-COVID stance and the Schengen area visa policy; he’s repeatedly been unable to purchase a return flight from Europe to China due to China’s constant updating of return flight rules and the complete lack of available flight options. For reference, a one-way flight from San Francisco to Shanghai on 05/13 of this year costs $9628 (transiting through a third country is disallowed if direct flights exist). I was able to secure a one-way flight from San Francisco to Shanghai for $267.20 pre-pandemic. In a major twist of events, it seems that Ding may yet qualify due to Sergey Karjakin’s chess ban5. If Ding does end up playing in the 2022 Candidates Tournament, I’ll certainly be rooting for him - I hope you will too. Ding Liren so obviously belongs in the candidates tournament. That he does not even get a chance to qualify, is saddening. — Peter Heine Nielsen (@PHChess) February 1, 2022 My own story The word “lockdown” is generally understood to be a break in transportation and other non-essential public services; this is not the case in China. The last story that I’d like to share is a personal one detailing the time I had the great displeasure of participating in a 48-hour COVID-induced building-wide lockdown in Shanghai. 
On the evening of December 13th of 2021, the Anlian building in Shanghai’s Yangpu district went under a full-fledged 48-hour lockdown. Although I had left before police and health officials came to lock the building down, Anlian’s building management was still able to contact and inform me of the mandatory 48-hour quarantine (I was obviously not enthralled by this). Right before I re-entered the building, I took the picture below. One of the police officers noticed me snapping photos and was about to confiscate my phone before I told him that I had already deleted them (I lied). I didn’t end up taking any more pictures of the lockdown due to this strict “no photographs” policy. Shanghai's Anlian building on the first night of lockdown - notice the barricade at the entrance to the left of the blue tent. Police were everywhere, and local health workers arrived in full personal protective equipment (PPE) to administer nucleic acid amplification tests (NAATs). The first night was the most eventful. Occupants ordered takeout (外卖) for dinner, resulting in mass confusion as bags of food were left outside the building entrance with nobody to bring them in. There was also mandatory COVID testing for the entire building and strict mask requirements while lining up for the test; those who weren’t wearing them tightly over both the nose and mouth were forcibly pulled aside and given stern warnings. Later at night, internet speeds slowed considerably as everybody began streaming television shows, downloading Steam games (CS:GO, anyone?), watching Netflix (through a VPN), etc. Long lines formed at bathrooms as well. In particular, the women’s bathroom became congested as many vied for mirror space to apply dry shampoo and/or remove makeup. Local health workers brought and distributed blankets, but only enough for about 1/5th of the people in the building - tough luck for everybody else. 
Day 2 was much of the same, with most folks fairly tired and sleep-deprived from the night before. Another round of NAATs took place on the first floor during a very specific time window. I was unfortunately late, which resulted in a heated argument between building management (who was supposed to make sure everyone in the building was present for the second round of COVID tests) and local health workers (who had to once again put on PPE and re-open test kits). This happened even though it was fairly clear at that point that nobody in the building had contracted COVID. I later found out that the 48-hour lockdown was due to secondary contact (次密接) rather than primary contact: an international traveller from Japan who was in contact with a confirmed COVID case had passed through the 25th floor of the building earlier in the day. I was skeptical that health officials would go to such lengths, but I later confirmed it with both a local health official as well as one of the folks most heavily affected who worked on the 25th floor of the office building. In any case, if there’s one thing I learned from this whole ordeal, it’s that sleeping in office chairs is extremely uncomfortable. On China’s COVID statistics These stories should help shed some light on the three distinct phases that China’s zero-COVID policy has gone through. The first phase takes place from December 2019 to April 2020. During these critical months, China set a precedent for the rest of the world by engaging in mass lockdowns, city-wide testing, and virtual meetings. Official statistics (deaths, cases, recoveries) during this time are highly inaccurate due mostly to intentional but also some inadvertent miscounts. From May 2020 onward, China entered a delicate equilibrium, maintaining its zero-COVID policy through a strict 21-day quarantine for international travelers - 14 in a hotel plus 7 at home. 
Chinese policy became fairly standard throughout the country, and most citizens simply forgot about COVID altogether, save for the occasional article or two bashing America for an unnecessarily high death count. Since January 2022, driven by Omicron’s high transmissibility, China has been grappling with outbreak after outbreak and re-engaging in citywide lockdowns. Through a fundamental misunderstanding of the first two phases, China writers such as George Calhoun criticize Beijing for underreporting the infection rate. He views China’s COVID statistics as a “statistical, medical, biological, political and economic impossibility” because he’s never lived in a dense, authoritarian country. Writers like George deserve substantial criticism for cherry-picking statistics while simultaneously avoiding a holistic approach to analyzing China’s COVID response. China’s COVID eradication program in phases one and two was successful because the central government’s containment policies were unimaginably draconian. The 48-hour lockdown story should serve as a great example of this - a city or state leader in America forcing an entire building into a military lockdown would be political suicide. As mentioned above, I have no doubt that the COVID cases and deaths for phase one are significantly higher than initially reported. Phase two, however, is entirely different. With COVID’s strong transmissibility and incredibly dense urban centers, entire swaths of the population would be simultaneously unable to work if even a few COVID cases slipped through without quarantine. Simply put, hospitals would be overrun, and the Chinese populace would notice. Halloween (2020) in a tier 2 Chinese city. Quite the super-spreader event, no? My personal opinion The purpose of the three above stories was to portray how lockdowns, quarantine, and general COVID policy in China differs from that of other countries. 
This should hopefully also show why China’s zero-COVID strategy was considerably more successful than that of other countries in addition to why zero-COVID is socially and economically unsustainable in the era of Omicron. Unless China cuts its citizens off completely from the rest of the world, I don’t see zero-COVID as a long-term possibility for any country, let alone one with an economy and population as large as China’s. China’s zero-COVID policy was warranted when the disease was much deadlier, but with Omicron accounting for nearly 100% of all recent worldwide COVID cases, it is highly impractical for China to continue these unsustainable zero-COVID rules, as they will have increasingly negative social and economic side effects. In particular, China’s zero-COVID policy has put the population in a COVID-ignorant state of mind - more and more people are showing an unwillingness to comply with local COVID mandates, all while the percentage of fully vaccinated elderly Chinese citizens remains low. Thankfully, there are rumors that China wants to ease its zero-COVID policy. However, given the speed with which the central government was able to lock down cities and restrict the flow of people in early 2020, I see no excuse for the current unease and slowness with which opening up is being discussed. Western media coverage One final note on Western media and its coverage of China’s pandemic response. The majority of media outlets have repeatedly failed to read between the lines when it comes to CPC pandemic policy6. While part of the reason is to prevent the spread of COVID domestically, another major reason is talent retention. China is undergoing a fairly seismic demographic shift, with a rapidly shrinking young population (ages 25-34). 
I personally know several young Chinese professionals who studied at an international university before deciding to return to China instead of staying abroad - nearly all of these instances were due to rising costs associated with traveling in and out of mainland China, both in terms of time and money. Alex’s and Ding’s stories are perfect reflections of this. It’s time for Western media to treat China’s policies as socioeconomic manipulation at the expense of other countries (including America) rather than natural byproducts of an authoritarian government. Western governments should band together and respond in kind with their own talent retention policies, and, if necessary, embargoes/sanctions against China. Wrapping up Thanks for making it this far - I hope this post was informative. As mentioned before, this is part II of a four-part series. In part III, I’ll cover the Chinese tech scene, from 996’ing to the open source community. Stay tuned! Example: where’s the line for white, non-Latina women in this article? ↩ 有钱没钱回家过年, i.e. returning home for LNY is a must, regardless of one’s fiscal condition. ↩ Ding and Levon Aronian are my two favorite players. In particular, I enjoy watching Ding’s solid playstyle in conjunction with his cold, hard calculation capabilities. He’s also an incredibly humble person. ↩ Traditionally, there has also been a slot for the highest-rated player, but this was removed in the 2022 cycle due to rating protection/manipulation by previous Candidates Tournament participants (Ding would’ve otherwise qualified this year). ↩ Sergey had qualified via the Chess World Cup held in 2021, but due to his support of the Russian invasion of Ukraine, he received a 6-month ban from all FIDE tournaments. This reinforces my belief that the only true winners of Russia’s invasion of Ukraine are China and India. ↩ China’s great firewall is another example of Western media missing the complete picture. 
While minimizing external influence and internal dissent is undoubtedly a major reason for building the firewall, an equally important reason was to promote the growth of China’s own tech giants - Alibaba, Tencent, Baidu, etc. I’ve actually read articles and papers which argue that the latter reason is the primary one for the great firewall; given the prevalence of VPNs and proxies (翻墙软件) within mainland China, I must say that I agree. ↩

My Experience Living and Working in China, Part I

In this four-part article, I’ll go over some of the lessons I learned living and doing business in China’s tech industry. During my time in China, I’ve led a team of 10+ engineers to develop a location-based IoT and sensing platform, co-founded an open-source project called Towhee, and developed countless relationships with folks in a number of different cities (many of whom I now consider good friends). I’ll go over some of the common misconceptions about China ranging from living and working in China to the government’s pandemic response. Part I of this blog post covers some of the basics without diving too deep into the tech world: some interesting things I learned while living, working, and interacting in China. If you have any questions, comments, or concerns, feel free to connect with me on Twitter or Linkedin. Thanks for reading! Update (03/29/2022): Part II is up. You can read it here. Before I begin, a bit about me. I was born in Nanjing, China, but moved to the US when I was barely three years old. I spent about five years in New Jersey before moving to Corvallis, Oregon (a place that I am, to this day, proud to call home). I moved to Norcal for college, studying EE (with a minor in CS) at Stanford. I stayed there for my Master’s degree as well, which I completed in 2014. Afterwards, I worked at Yahoo’s San Francisco office as a Machine Learning Engineer for two years. In a hybrid software development & research role, I was able to research and productionize the industry’s first deep learning-based model for scoring images based on aesthetics. I also had the pleasure of attending Yahoo’s internal TechPulse conference (where my co-author and I won a best paper award) all while keeping up with interesting deep learning use cases. All in all, I was quite happy with the work I was doing, but also slowly started to develop the entrepreneurship itch. 
In the lead up to 2017, I returned to my Electrical Engineering roots and co-founded a company developing solutions for indoor localization and navigation. My continued efforts to find investment had little to no return - feedback we got from a lot of investors was that they believed in the team, but that the product lacked a “viability test” with an initial customer, something difficult for an early-stage hardware startup due to the high development overhead. I had some simulations and early board designs which I believed were enough, but for an investor, diving deep into an unknown company’s technology can often be costly in terms of time and energy. This is where my story takes a bit of a turn. In late 2017, the company received an early-stage seed investment offer from mainland China, and after a bit of consideration, we decided to go for it. It was at this point that a lot of friends and family asked me a question I’ve become very good at answering over the years: Why did you choose to leave Silicon Valley for an unknown country with less talent and an arguably inferior tech industry? The answer is threefold: 1) I felt that Chinese investors were more open to funding hardware startups due to the ultra-fast turnaround times for fabrication, 2) the Bay Area was just getting too damn expensive for my taste, and 3) from a personal perspective, I wanted to understand my birth country from cultural, social, and economic standpoints. I felt good about my decision and thought that the greatest challenge would be language; my Mandarin was workable but far from proficient. San Francisco Chinatown is a poor caricature of Qing dynasty China. Same goes for the architecture you see in Chinese restaurants across America. Photo by Dennis Jarvis, CC BY-SA 2.0 license, original photo. 
Alipay, WeChat, and QR codes The very first thing you’ll learn about China is that everything revolves around either Alipay (支付宝) or WeChat (微信), two apps known primarily for their payment capabilities. What a lot of folks outside China don’t know is that these two apps can be used as gateways to a number of other mini-programs (小程序), i.e. subapps developed by other organizations such as KFC, Walmart, etc. These subapps can be used directly within either Alipay or WeChat, forgoing the need to individually download apps from an app store. Imagine ordering furniture from IKEA, dinner from Chipotle, and movie tickets to Century Theaters all from the same app - that’s Alipay/WeChat for you. The obvious downside to this is that personal information becomes extremely centralized. If something like this were to happen in the US, antitrust lawsuits would come faster than a speeding bullet, and for good reason too - big conglomerates monopolizing data is dangerous, and their wide adoption stifles innovation. While Alipay and WeChat were years ahead of the US’s card-based (credit/debit) payments system when first released, Android Pay and Apple Pay (NFC-based) have since then become a lot easier to use. Alipay and WeChat work by opening a camera and scanning a QR code, which redirects you to the store's payments page. You can then pay an arbitrary amount of RMB, which will immediately show up in the payee's balance once complete. Photo by Harald Groven, CC BY-SA 2.0 license, original photo. Here's a screenshot of my Alipay. Its primary use is for payments, as evidenced by the top row, but mini-programs (second row from the top) have now become an important part of the app. Alipay and WeChat’s success within mainland China is in large part due to the smartphone + QR code revolution, which has truly permeated all aspects of Chinese life. Shared bikes can be unlocked by scanning a QR code on your phone. You can add friends on Alipay and WeChat using QR codes. 
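The scan-to-pay flow described above boils down to a QR code that encodes a merchant checkout payload, which the payer's app decodes before prompting for an amount. Here's a minimal Python sketch of that round trip; the `pay.example.com` URL format, the field names, and both helper functions are purely hypothetical (the real Alipay/WeChat payloads are proprietary):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def make_payment_payload(merchant_id: str, merchant_name: str) -> str:
    """Build the string a merchant's static QR code would encode.

    This illustrative payload is just a checkout URL with the merchant's
    details in the query string.
    """
    query = urlencode({"mid": merchant_id, "name": merchant_name})
    return f"https://pay.example.com/checkout?{query}"

def parse_payment_payload(payload: str) -> dict:
    """Decode a scanned QR payload back into merchant details.

    The payer's app would open its payment page with these fields and
    let the user enter an arbitrary RMB amount to transfer.
    """
    parsed = urlparse(payload)
    return {key: values[0] for key, values in parse_qs(parsed.query).items()}

payload = make_payment_payload("m-1024", "Corner Noodle Shop")
details = parse_payment_payload(payload)
print(details["name"])  # Corner Noodle Shop
```

A merchant-presented code like this is static, which is part of why the system feels gameable: anyone can print a sticker encoding their own payload over a shop's legitimate QR code.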
Many Communist Party of China (CPC) functions rely on tight Alipay or WeChat integration. You can even log in to third-party websites and check in as a guest in office buildings via QR codes. I am by no means a security expert, but this system somehow feels a bit gameable despite its widespread use by over a billion people. Red tape, CPC style While Alipay and WeChat have made life considerably easier for the majority of people living in China, many civil and commercial processes are still incredibly difficult and filled with unnecessary paperwork. Registering a company and acquiring a work permit in China is quite possibly one of the most insanely frustrating things on Earth. I won’t go into all of the details, but just know that it involved a mountain of paperwork, letters of commitment, countless passport scans and other documentation, etc… We ended up hiring an administrative assistant to handle a lot of this work for us, but the amount of time and energy one has to dedicate towards this can be a bit demoralizing. Some provincial (the equivalent of a state in America) governments have issued new policies aimed towards combating the problem of excessive paperwork. But the CPC is massive, and massive entities have even larger amounts of inertia. Rather than reducing the amount of mandatory paperwork, many of those policies revolved around reducing the number of trips needed to see the process to completion. This is definitely a step in the right direction, but compiling a thick folder of paperwork is still not a fun experience. A common joke in China is that there are four castes. From top to bottom these are: 1) CPC officials, 2) foreigners, 3) white collar workers, and finally 4) blue collar workers. Even with this supposed semi-VIP treatment, getting a business license such as this one is something I do not want to go through again. 
The same goes for pretty much all processes which require some sort of government approval, including but not limited to acquiring a work permit, registering an address change, and replacing a lost ID card. Even flying to China requires a mountain of paperwork and approvals, even if you already have a Chinese visa. My main problem with all this is the CPC’s complete lack of transparency. Why can’t I transit through a third country on my way to China if I’m going to have to undergo 14 days of mandatory hotel quarantine plus another 7 days of home quarantine anyway? From a foreigner’s perspective, this is one of the most frustrating aspects of China in an otherwise amazing experience - CPC overreach in almost every aspect of everyday life. The CPC grossly mismanages via overregulation in some sectors and underregulation (hello, housing market) in others. Social regression, economic growth This ties into another common misconception about China - the idea that the government wants to track everything you do at all hours of the day (for the moment, let’s ignore the feasibility of doing so for a population of 1.4 billion people) through a combination of CCTV, mobile phones, and browsing habits. I’ve read countless articles written by American and European media outlets overstating the dystopia that China has fallen into, but the reality is that the Chinese government cares little for storing said data long-term and uses it primarily in criminal cases. I was involved in a project that used face recognition to track residents going in and out of communities; not only were the residents eager to have such a system installed, but it eventually also helped track down a man guilty of sexual assault. Data from such a system was also entirely managed at the local level and not automatically shared with the provincial or central governments. Xinjiang and Tibet are two exceptions to this which I won’t dive deep into. 
I also haven’t been to either province, so it would be inappropriate for me to comment on what’s going on in Western China. Other surveillance programs such as social credit (社会信用) and city brain (城市大脑) are also widely misunderstood. The social credit system primarily punishes and constrains businesses rather than people, while social credit for individuals is somewhat analogous to a background check in America. A lot of American and European commentators will point out some insane social credit rules, such as deducting points for cheating on the college entrance exam (essentially the SAT on steroids); while I do not disagree, there are undoubtedly similar occurrences under American law. When I was still a student at Stanford, I once lost an internship opportunity because a “traffic violation” - biking at night without a bike light - showed up on my background check. In all fairness, I consider it to be extremely easy to stay off China’s social credit “blacklist” - just be reasonable and avoid breaking the law. China’s “city brains” are a totally different beast, designed to anticipate and reduce traffic, improve city planning, and provide advanced 3D models and visualization techniques. My understanding is that most city brain projects achieve… none of these, despite the fact that cities pay the equivalent of tens to hundreds of millions of dollars for just one of these solutions. An interesting side note - a recruiter once tried getting me to lead Yiwu’s city brain project, but it fell through after he discovered I wasn’t a Chinese citizen (these projects, for obvious reasons, strictly prohibit participation from non-Chinese citizens). An image I found of Pudong District's (Pudong is a district in Shanghai, home to Shanghai Pudong International Airport i.e. PVG) city brain platform via a Baidu search. Although it looks fancy, there is really little to no new underlying technology behind these systems. 
You might wonder how China’s economy is able to grow at such a blistering pace despite the huge number of arguably inefficient government programs. The answer is rooted in East Asian culture: work ethic. Blue collar Chinese workers are willing to work 60+ hour weeks while sustaining themselves on ramen and $1.50 cigarette packs every day just to ensure their kids can get the best education and an improved quality of life. The whole concept of 996 is rooted in the Confucian ideals of hard work and industriousness. The “laziest” men and women in China are arguably owners of small- to mid-size businesses; they are often the last to arrive and first to leave from work. The CPC loves to take credit for China’s recent growth, but the reality is that the growth was the result of Chinese work ethic plus a switch from central planning to a mixed economy. By industriousness, I really do mean everybody. In 2019, I visited a prison in Jiangxi to discuss a potential prisoner safety solution. In a meeting with the vice-warden, he tacitly mentioned how Adidas shoes were being made in the prison that he was running. We quickly pulled out of that project. I haven’t bought Adidas- or Nike-branded shoes since1. Personal identity Given the current political climate and state of affairs in mainland China, many Gen Z-ers and Millennials who hail from mainland China (mostly from Guangdong Province; I consider Macau, Taiwan, and Hong Kong to be separate territories) don’t refer to themselves as Chinese, instead calling themselves Cantonese. While some simply wish to preserve personal identity, there are also many who dissociate themselves simply because they believe the rest of China to be inferior. I’ve heard some of the most asinine reasons - people spit too often in the streets, everybody plays loud Douyin/TikTok videos while riding high-speed rail, too many cigarette smokers, etc. 
These are the same people who conveniently forget that some sidewalks along the Mission are lined with old discarded chewing gum, that loud music is played frequently on BART or in a BART station, or that open drug usage occurs nightly in the Tenderloin. I strongly dislike the CPC, but have immense love for Chinese people and Chinese culture. China is a super-massive collection of people that, in my eyes, has made incredible economic and social progress since my birth year, and will continue to do so in the decades ahead. And as a result of all of this, I’m proud to call myself Chinese American. Wrapping up Entire dissertations could be dedicated to each of the above sections, but I wanted to highlight misconceptions and some other bits of information that might not be as readily accessible. In particular, the previous section is by no means a comprehensive list of social issues that China is facing, but rather a brief summary of things that might not be too well understood in the West. #MeToo2, a declining natural birth rate, and racial divisions are just a small number of similar/parallel issues that are happening in both America and China. If you made it this far, thanks for reading. This post has been a bit rambly and all over the place, but the next couple should hopefully be a bit more focused. If you liked this article and are an open-source developer like myself, please give the Towhee project a star on GitHub as a show of support. In part II, I’ll cover the Chinese tech scene, from 996’ing to the open source community. Stay tuned! Forced labor in Xinjiang has made headlines in recent months, but in reality, it happens everywhere in China. ↩ Justice for Zhou Xiaoxuan. ↩


More in AI

The Starting Line for Self-Driving Cars

As IEEE Spectrum reported at the time, it was “the motleyest assortment of vehicles assembled in one place since the filming of Mad Max 2: The Road Warrior.” Not a single entrant made it across the finish line. Some didn’t make it out of the parking lot. So it’s all the more remarkable that in the second DARPA Grand Challenge, just a year and a half later, five vehicles crossed the finish line. Stanley, developed by the Stanford Racing Team, eked out a first-place win to claim the $2 million purse. This modified Volkswagen Touareg [shown at top] completed the 212-kilometer course in 6 hours, 54 minutes. Carnegie Mellon’s Sandstorm and H1ghlander took second and third place, respectively, with times of 7:05 and 7:14. So how did the Grand Challenge go from a total bust to having five robust finishers in such a short period of time? It’s definitely a testament to what can be accomplished when engineers rise to a challenge. But the outcome of this one race was preceded by a much longer path of research, and that plus a little bit of luck are what ultimately led to victory. Before Stanley, there was Minerva Let’s back up to 1998, when computer scientist Sebastian Thrun was working at Carnegie Mellon and experimenting with a very different robot: a museum tour guide. For two weeks in the summer, Minerva, which looked a bit like a Dalek from “Doctor Who,” navigated an exhibit at the Smithsonian National Museum of American History. Its main task was to roll around and dispense nuggets of information about the displays. Minerva was a museum tour-guide robot developed by Sebastian Thrun. In an interview at the time, Thrun acknowledged that Minerva was there to entertain. But Minerva wasn’t just a people pleaser; it was also a machine learning experiment. It had to learn where it could safely maneuver without taking out a visitor or a priceless artifact. Visitor, nonvisitor; display case, not-display case; open floor, not-open floor. 
It had to react to humans crossing in front of it in unpredictable ways. It had to learn to “see.” Fast-forward five years: Thrun transferred to Stanford in July 2003. Inspired by the first Grand Challenge, he organized the Stanford Racing Team with the aim of fielding a robotic car in the second competition. (The vehicle’s design is detailed in the team’s paper.) A remote-control kill switch, which DARPA required on all vehicles, would deactivate the car before it could become a danger. About 100,000 lines of code did that and much more. Many of the other 2004 competitors regrouped to try again, and new ones entered the fray. In all, 195 teams applied to compete in the 2005 event. Teams included students, academics, industry experts, and hobbyists. In the early hours of 8 October, the finalists gathered for the big race. Each team had a staggered start time to help avoid congestion along the route. About two hours before a team’s start, DARPA gave them a CD containing approximately 3,000 GPS coordinates representing the course. Once the team hit go, it was hands off: The car had to drive itself without any human intervention. PBS’s NOVA produced an excellent episode on the 2004 and 2005 Grand Challenges that I highly recommend if you want to get a feel for the excitement, anticipation, disappointment, and triumph. In the 2005 Grand Challenge, Carnegie Mellon University’s H1ghlander was one of five autonomous cars to finish the race. Damian Dovarganes/AP H1ghlander held the pole position, having placed first in the qualifying rounds, followed by Stanley and Sandstorm. H1ghlander pulled ahead early and soon had a substantial lead. That’s where luck, or rather the lack of it, came in. What went wrong with H1ghlander remained a mystery, even after extensive postrace analysis. 
It wasn’t until 12 years after the race—and once again with a bit of luck—that CMU discovered the problem: Pressing on a small electronic filter between the engine control module and the fuel injector caused the engine to lose power and even turn off. Team members speculated that an accident a few weeks before the competition had damaged the filter. (To learn more about how CMU finally figured this out, see Spectrum Senior Editor Evan Ackerman’s 2017 story.) The Legacy of the DARPA Grand Challenge Regardless of who won the Grand Challenge, many success stories came out of the contest. A year and a half after the race, Thrun had already made great progress on adaptive cruise control and lane-keeping assistance, which is now readily available on many commercial vehicles. He then worked on Google’s Street View and its initial self-driving cars. CMU’s Red Team worked with NASA to develop rovers for potentially exploring the moon or distant planets. Closer to home, they helped develop self-propelled harvesters for the agricultural sector. Stanford team leader Sebastian Thrun holds a $2 million check, the prize for winning the 2005 Grand Challenge.Damian Dovarganes/AP Of course, there was also a lot of hype, which tended to overshadow the race’s militaristic origins—remember, the “D” in DARPA stands for “defense.” Back in 2000, a defense authorization bill had stipulated that one-third of the U.S. ground combat vehicles be “unmanned” by 2015, and DARPA conceived of the Grand Challenge to spur development of these autonomous vehicles. The U.S. military was still fighting in the Middle East, and DARPA promoters believed self-driving vehicles would help minimize casualties, particularly those caused by improvised explosive devices. 
DARPA followed the Grand Challenge with the 2007 Urban Challenge, in which vehicles navigated a simulated city and suburban environment; the 2012 Robotics Challenge for disaster-response robots; and the 2022 Subterranean Challenge for—you guessed it—robots that could get around underground. Despite the competitions, continued military conflicts, and hefty government contracts, actual advances in autonomous military vehicles and robots did not take off to the extent desired. As of 2023, robotic ground vehicles made up only 3 percent of the global armored-vehicle market. Much of the contemporary reporting on the Grand Challenge predicted that self-driving cars would take us closer to a “Jetsons” future, with a self-driving vehicle to ferry you around. But two decades after Stanley, the rollout of civilian autonomous cars has been confined to specific applications, such as Waymo robotaxis transporting people around San Francisco or the GrubHub Starships struggling to deliver food across my campus at the University of South Carolina. A Tale of Two Stanleys Not long after the 2005 race, Stanley was ready to retire. Recalling his experience testing Minerva at the National Museum of American History, Thrun thought the museum would make a nice home. He loaned it to the museum in 2006, and since 2008 it has resided permanently in the museum’s collections, alongside other remarkable specimens in robotics and automobiles. In fact, it isn’t even the first Stanley in the collection. Stanley now resides in the collections of the Smithsonian Institution’s National Museum of American History, which also houses another Stanley—this 1910 Stanley Runabout. Behring Center/National Museum of American History/Smithsonian Institution That distinction belongs to a 1910 Stanley Runabout, an early steam-powered car introduced at a time when it wasn’t yet clear that the internal-combustion engine was the way to go. 
Despite clear drawbacks—steam engines had a nasty tendency to explode—“Stanley steamers” were known for their fine craftsmanship. Fred Marriott set the land speed record while driving a Stanley in 1906. It clocked in at 205.5 kilometers per hour, which was significantly faster than the 21st-century Stanley’s average speed of 30.7 km/hr. To be fair, Marriott’s Stanley was racing over a flat, straight course rather than the off-road terrain navigated by Thrun’s Stanley. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the February 2025 print issue as “Slow and Steady Wins the Race.” References Sebastian Thrun and his colleagues at the Stanford Artificial Intelligence Laboratory, along with members of the other groups that sponsored Stanley, published “Stanley: The Robot That Won the DARPA Grand Challenge.” This paper, from the Journal of Field Robotics, explains the vehicle’s development. The NOVA PBS episode “The Great Robot Race” provides interviews and video footage from both the failed first Grand Challenge and the successful second one. I personally liked the side story of GhostRider, an autonomous motorcycle that competed in both competitions but didn’t quite cut it. (GhostRider also now resides in the Smithsonian’s collection.) Smithsonian curator Carlene Stephens kindly talked with me about how she collected Stanley for the National Museum of American History and where she sees artifacts like this fitting into the stream of history.

Why does AI slop feel so bad to read?

I don’t like reading obviously AI-generated content on Twitter. There’s a derogatory term for it: AI “slop”, which means something like “AI…

Video Friday: Aibo Foster Parents

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, Nuremberg, Germany
German Robotics Conference: 13–15 March 2025, Nuremberg, Germany
European Robotics Forum: 25–27 March 2025, Stuttgart, Germany
RoboSoft 2025: 23–26 April 2025, Lausanne, Switzerland
ICUAS 2025: 14–17 May 2025, Charlotte, NC
ICRA 2025: 19–23 May 2025, Atlanta, GA
London Humanoids Summit: 29–30 May 2025, London
IEEE RCAR 2025: 1–6 June 2025, Toyama, Japan
2025 Energy Drone & Robotics Summit: 16–18 June 2025, Houston, TX
RSS 2025: 21–25 June 2025, Los Angeles

Enjoy today’s videos!

This video about ‘foster’ Aibos helping kids at a children’s hospital is well worth turning on auto-translated subtitles for.
[ Aibo Foster Program ]

Hello everyone, let me introduce myself again. I am Unitree H1 “Fuxi”. I am now a comedian at the Spring Festival Gala, hoping to bring joy to everyone. Let’s push boundaries every day and shape the future together.
[ Unitree ]

Happy Chinese New Year from PNDbotics!
[ PNDbotics ]

In celebration of the upcoming Year of the Snake, TRON 1 swishes into three little lions, eager to spread hope, courage, and strength to everyone in 2025. Wishing you a Happy Chinese New Year and all the best, TRON TRON TRON!
[ LimX Dynamics ]

Designing planners and controllers for contact-rich manipulation is extremely challenging, as contact violates the smoothness conditions that many gradient-based controller synthesis tools assume. We introduce natural baselines for leveraging contact smoothing to compute (a) open-loop plans robust to uncertain conditions and/or dynamics, and (b) feedback gains to stabilize around open-loop plans. Mr. Bucket is my favorite.
[ Mitsubishi Electric Research Laboratories ]
Thanks, Yuki!
What do you get when you put three aliens in a robotaxi? The first-ever Zoox commercial! We hope you have as much fun watching it as we had creating it and can’t wait for you to experience your first ride in the not-too-distant future.
[ Zoox ]

The Humanoids Summit at the Computer History Museum in December was successful enough (either because of or in spite of my active participation) that it’s not only happening again in 2025, there’s also going to be a spring version of the conference in London in May!
[ Humanoids Summit ]

I’m not sure it’ll ever be practical at scale, but I do really like JSK’s musculoskeletal humanoid work.
[ Paper ]

In November 2024, as part of the CRS-31 mission, flight controllers remotely maneuvered Canadarm2 and Dextre to extract a payload from the SpaceX Dragon cargo ship’s trunk and install it on the International Space Station. This animation was developed in preparation for the operation and shows just how complex robotic tasks can be.
[ Canadian Space Agency ]

Staci Americas, a third-party logistics provider, addressed its inventory challenges by implementing the Corvus One™ Autonomous Inventory Management System in its Georgia and New Jersey facilities. The system uses autonomous drones for nightly, lights-out inventory scans, identifying discrepancies and improving workflow efficiency.
[ Corvus Robotics ]
Thanks, Joan!

I would have said that this controller was too small to be manipulated with a pinch grasp. I would be wrong.
[ Pollen ]

How does NASA plan to use resources on the surface of the Moon? One method is the ISRU Pilot Excavator, or IPEx! Designed by Kennedy Space Center’s Swamp Works team, IPEx has the primary goal of digging up lunar soil, known as regolith, and transporting it across the Moon’s surface.
[ NASA ]

The TBS Mojito is an advanced forward-swept FPV flying wing platform that delivers unmatched efficiency and flight endurance.
By focusing relentlessly on minimizing drag, the wing reaches speeds upwards of 200 km/h (125 mph), while cruising at 90–120 km/h (60–75 mph) with minimal power consumption.
[ Team BlackSheep ]

At Zoox, safety is more than a priority—it’s foundational to our mission and one of the core reasons we exist. Our System Design & Mission Assurance (SDMA) team is responsible for building the framework for safe autonomous driving. Our co-founder and CTO, Jesse Levinson, and our senior director of SDMA, Qi Hommes, hosted a LinkedIn Live to provide an insider’s overview of the teams responsible for developing the metrics that ensure our technology is safe for deployment on public roads.
[ Zoox ]

AI Roundup 103: The DeepSeek edition

January 31, 2025.
