The jhanas are a series of eight (or nine) altered mental states, which progress from euphoria, to calm, to dissolution of reality – culminating in cessation, or loss of consciousness. They are induced via sustained concentration, without any external stimuli or substances. This is a practical guide on how to do them yourself.

Table of Contents
- Jhanas are learned by doing, not reading
- What the jhanas feel like
- Why learn the jhanas?
- Hours practiced
- Retreat I (March 2024)
- Retreat II (June 2024)
- Practice between retreats
- General tips for practice
- Experiment with different techniques
- Flow state » relaxation
- A jhana is like a sneeze
- Pace yourself and listen to your body
- Instructions for accessing the jhanas
- How I entered J1<>J4
- How I entered J5<>J7
- How I entered J7<>J9
- What’s going on under the hood?
- Impact of the jhanas
- In conclusion: try it!
- Notes

Jhanas are learned by doing, not reading

The word jhana comes from Buddhist scriptures, where they were first described....

More from Nadia Asparouhova

Does meditation experience improve success with the jhanas?

Jhanas – a series of altered mental states that are accessed via concentration – are often described as an “advanced meditation practice,” a phrase that suggests that one must be a skilled meditator to access them: just as only a skilled outdoorsman would embark upon an expedition to the Arctic Circle. It implies that meditation exists on a spectrum of difficulty, with perhaps mindfulness apps like Calm and Headspace on one end, and jhanas on the other. Anecdotally, however, many modern teachers notice that even experienced meditators can struggle with the jhanas, while inexperienced meditators find success. A light, playful approach seems important: it’s often said that the most effective way to access the jhanas is to not try too hard at all. While a novice backpacker should not attempt to trek to the Arctic Circle, some novice meditators appear to be quite capable of accessing the jhanas. I recently experienced this myself as a novice meditator, where the jhanas came more quickly than I had been led to believe was possible. Is there any relationship between meditation experience and jhana success? I decided to team up with Jhourney – the company that taught me the jhanas – to answer this question. (Please note that views here are my own, and any mistakes in this piece are mine alone.) Methodology We looked at an anonymized sample of 81 unique participants who attended a Jhourney retreat between September 2023 and April 2024, all of whom were new to the jhanas. Figuring out how to measure “meditation experience” was its own challenge. If we were examining differences between experienced and beginner swimmers, for example, we could approximate experience based on their lap times and effort expended. But meditation, for the most part, is a hard-to-verify skill. Because no one metric seems to give us a complete picture of meditation experience, we decided to look at these three variables, all of which were self-reported: Estimated lifetime hours meditated: This is a commonly used, but not especially reliable estimate, as many people can’t precisely recall this number. How often (hours/week) they meditated in the 6 months leading up to the retreat: This is more helpful, but still doesn’t tell the whole story, as meditation hours can vary widely in quality, especially on- versus off-retreat. (Meditating for one hour per week for a year, for example, is different from meditating 50 hours in one week at a retreat.) Whether they had attended a meditation retreat before: Though not granular enough on its own, this could be a valuable data point to capture, as people who are familiar with practicing in a dedicated, structured format might progress through the jhanas more quickly. Then we chose a few key milestones to capture how participants progressed through the jhanas during the retreat: Whether they experienced a jhana. Because jhanas are a highly subjective experience, we only looked at people where their descriptions matched key markers that are commonly seen across jhana self-reports. How far they progressed through the jhanic states, bucketed into two categories: jhanas 1-3 (which are more embodied and blissful), or jhana 4 and above (which are more mental and peaceful). Whether they have deterministic access to the jhanas; that is: by the end of the retreat, were they able to access the jhanas at will? There are a few caveats to our sample. 
We looked at attendees across several different retreats, which employed a variety of formats, including online and in-person, as well as different teachers and programming. They also represented a mix of demographics. We were not able to control for these variables, and the margin of error is large (±8.09%, 95% CI), so we will be cautious about the conclusions we can draw from our analysis. The difference in jhana success rates between beginner and experienced meditators is small We started by looking at absolute differences in jhana success rates between experienced and beginner meditators, using our three markers of meditation experience: [1] For lifetime meditation hours, experienced = at or above the median in our sample (300 hours); beginner = below the median For “has been on retreat before,” experienced = yes; beginner = no For hours meditated per week, experienced = at or above the median in our sample (2 hours/week); beginner = below the median Across all three dimensions of meditation experience, we see virtually no difference in success rates between experienced vs. beginner meditators. While the group that meditated 2+ hours/week was slightly more likely to experience a jhana, it’s worth noting that all observed differences between groups were within our margin of error. Jhana success rates among experienced meditators, compared to beginners † = within margin of error Experienced meditators are more likely to progress further with the jhanas when they succeed… Separately, we looked at skill differences between experienced and beginner meditators who accessed the jhanas: how far they progressed, and whether they could access the jhanas deterministically by the end of the retreat. Here, the differences between groups are more pronounced. Likelihood of jhana skill progression in experienced meditators, compared to beginners † = within margin of error Experienced meditators in our sample were more likely to have progressed to higher jhanic states, with the biggest difference seen among those who had attended a retreat before. They were also somewhat more likely to have gained deterministic access to the jhanas – although in this case, we saw the smallest differences between those who had attended a retreat before, versus those who had not. It’s tempting to conclude, based on the above, that beginner meditators are just as likely to access the jhanas, but experienced meditators are more likely to progress further in their jhana skills. But the story isn’t quite that simple! …but these differences are not attributable to meditation experience Having identified some differences between groups, we took a closer look at the data to determine whether we could establish a correlation between meditation experience and jhana skill. Here is where we found a surprise: in our sample, we saw no significant correlations between meditation experience – no matter how it’s measured – and any jhana skill. How is this possible, given the group differences we just observed? While beginners and experienced meditators do show differences on average, there’s a wide range of outcomes within each group. This matches the meditation teachers’ anecdotal reports, where some people – beginner or experienced – find it easy to access the jhanas, while others struggle. The variability within each group cancels out any clear pattern in the data. Correlation coefficients: prior meditation experiences vs. 
jhana milestones Our findings suggest that observed differences between groups aren’t explained by meditation experience, but by other factors that haven’t yet been identified. This presents an exciting opportunity for further research! If not meditation experience, what predicts jhana success? To summarize – in our sample: We found no significant difference in jhana success rates between experienced and beginner meditators. If this is the case, we ought to exercise caution in describing jhanas as an “advanced meditation practice,” because it could deter those who might otherwise succeed – and personally benefit – from the jhanas. “Advanced” might refer more to the consciousness-altering effects of the jhanas than the experience required to access them. We found no correlation between meditation experience and any jhana skill. This suggests that we either need to find more accurate ways of measuring meditation experience, or consider whether meditation experience is a (poor) proxy for some other skill that’s critical for getting into jhanas, such as an ability to sustain attention, or what’s sometimes called mental absorption. [2] Here are a few theories we can think of as to why meditation experience doesn’t seem to impact one’s experience with the jhanas: We don’t have great ways of measuring (or defining!) meditation experience. What does it mean to be an “experienced” meditator, in the way that someone is an “experienced” swimmer? Number of hours meditated tells us how long someone has been practicing, but not how adept they are at cultivating and sustaining a quiet mind. Experienced meditators may have more confidence that they know what meditation is, so while they have assets (ex. experience sitting for long hours on retreat, or familiarity with deep levels of mental absorption), they are also at times mistaken about which skills to use and when. This misplaced confidence could lead to mixed results. Other types of meditation do not develop the skills needed to get into jhana. It’s possible that many popular forms of meditation (such as mindfulness, “dry insight” Vipassana, or nondual exercises) teach skills that are mechanistically different from what’s needed to get into the jhanas. If not meditation experience, what does predict jhana success? As I’ve talked to more people about their experiences, my current hypothesis is that there are two main skills involved: Ability to invoke a positive feeling in the body (the initial “spark” of joy) Ability to sustain attention (letting the spark grow into a flame) It seems that many of the common challenges I’ve heard about can be diagnosed as one of these two issues. For example, some people seem to fear (or “brace”) against pleasure; think they don’t deserve it; can’t come up with a source of pure, uncomplicated joy; or struggle to tap into any strong emotion at all: these are issues with invoking a positive feeling. It also explains why people sometimes report that therapy or prior psychedelic use seems to help with jhana practice. Other practitioners struggle with anxiety; lack confidence in their abilities; get distracted or bored with practice; or strive and grasp too much. Though it may not seem obvious at first, these are issues with attention. It’s an inability to focus on the task at hand without one’s inner narrative getting in the way. The relationship between attention and emotion has been well-noted in clinical psychology, and there’s evidence to suggest that training one’s attention can improve emotional regulation. 
Meditation is one method to develop better control over one’s attention, but certainly not the only one, which might be why we see mixed results among meditators – because we’re only measuring how long someone has meditated, rather than the underlying skill. Someone who practices violin for an hour a day is not necessarily proficient at the violin. We assess a violinist’s skill by how they play music, not by how often they practice. Similarly, if we can identify and measure the skills that meditation is supposed to cultivate, it could help us more clearly diagnose and address common challenges with accessing the jhanas. A bitter lesson for the jhanas? In a 2019 essay, computer scientist Rich Sutton identifies a “bitter lesson” for artificial intelligence research: in the last 70 years, major progress in AI was made not by leveraging human knowledge (better models for how our brains work), but by leveraging computation (i.e. Moore’s Law, which observes that computation power doubles roughly every two years). Sutton believes that some AI researchers are misguided in their focus on developing new, complex ways of modeling the human mind, because they don’t want to admit that crude, simplistic “brute force” is actually what worked. I wonder if there is a bitter lesson to be found for the jhanas, as well. Maybe it’s less important to unpack why you are anxious, or why you can’t seem to let yourself feel joy, or why you think you’re not good enough. Instead of introspecting heavily – trying to model where these feelings came from, and how they impact one’s behavior – it could be more effective to simply “brute force” one’s way into, for example, sparking joy and cultivating attention. With success, all the other limiting beliefs might fall away – or, perhaps, resolve themselves – in the process. [3] As my Asterisk editor Jake Eaton wrote, while reflecting on his experience with the jhanas: I’ve spent so much time — through therapy and self-enquiry and whatever form of analytical thought — looking for answers to questions that plague me, but when I look back at my own growth, it was never inspired by finding an answer or identifying some Freudian root. I just learned, through whatever grace, to drop the question. Additional research, such as practitioner interviews and self-assessments, could help us better understand and validate these hypotheses, as well as surface clues on how to reliably measure and improve the skills required to successfully access, and progress through, the jhanas. Thanks to Stephen Zerfas, Alex Gruver, and Matt Lanter. Notes We use the term “beginner” meditator in this post only as a shorthand to differentiate this group from “experienced” meditators. Experience is all relative! ↩ We also noticed that measuring meditation experience in different ways – lifetime hours, having attended a retreat, hours meditated per week – seemed to yield different conclusions. Strangely, however, we don’t see that one method of measurement clearly maps to consistent differences across every outcome. This could be due to the large margin of error with our sample; because different types of meditation activities influence certain outcomes more than others; or – as our findings suggest – because the causal mechanism isn’t meditation experience at all, but factors that are only partly represented by these variables. 
Given these inconsistencies, however, we suggest that researchers carefully consider how they measure prior meditation experience when studying the jhanas, as well as the conclusions they can draw. ↩ I’m reminded of the observation that if you force yourself to smile, even if you don’t feel like it, eventually, the act of smiling will boost your mood anyway. ↩
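For readers who want to tinker with this kind of analysis themselves, here is a minimal Python sketch of the general approach described in this piece – a median split on lifetime hours, per-group success rates with a normal-approximation 95% margin of error, and a point-biserial correlation between a continuous experience measure and a binary jhana outcome. The column names and toy data are hypothetical illustrations, not Jhourney’s actual dataset or analysis code.

```python
import numpy as np
import pandas as pd
from scipy.stats import pointbiserialr

# Hypothetical toy data; the real survey fields and values will differ.
df = pd.DataFrame({
    "lifetime_hours": [50, 300, 1200, 10, 800, 150, 2000, 40],
    "reached_jhana": [1, 1, 0, 1, 1, 0, 1, 0],  # 1 = self-report matched common jhana markers
})

# Median split into "experienced" vs. "beginner" meditators (the article's sample median was 300 hours).
df["experienced"] = df["lifetime_hours"] >= df["lifetime_hours"].median()

# Success rate per group, with a normal-approximation 95% margin of error.
for label, group in df.groupby("experienced"):
    p, n = group["reached_jhana"].mean(), len(group)
    moe = 1.96 * np.sqrt(p * (1 - p) / n)
    print(f"experienced={label}: success rate {p:.2f} ± {moe:.2f} (n={n})")

# Correlation between a binary outcome and a continuous experience measure.
r, p_value = pointbiserialr(df["reached_jhana"], df["lifetime_hours"])
print(f"point-biserial r = {r:.2f}, p = {p_value:.3f}")
```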

Working notes for Summer of Protocols

I’m participating in the Summer of Protocols research program this summer as a Core Researcher. It’s an 18-week program, funded by the Ethereum Foundation, that aims to catalyze a wider exploration of protocols and their social implications. I plan to focus on protocols as systems of social control. My brain has struggled to reconcile how protocols have a very technical meaning for the internet (HTTP, TCP/IP, IP, etc), but are also used in a variety of other sectors in nontechnical ways (diplomacy, healthcare, emergency response, etc). I want to develop a history of protocols, through the lens of control, that shows how all these different types are interrelated – then use that to understand what the next generation of protocols might look like. I thought it might be useful to share my working notes as I dive into this process, especially since Summer of Protocols is an interesting meta-experiment in funding a cohort of independent researchers. I’ll update this page every few weeks with major themes and challenges I’m working through. I’ll try to keep these summaries fairly condensed, so as not to overwhelm. Enjoy! Weeks 1-2 Struggling to define what protocols are I’m surprised how much of a blocker this has been for me. It feels difficult to proceed with my current project scope until I understand where the boundaries are. I don’t normally like to get this meta, but I think it’s important, given that this is a nascent field of study without existing precedents. Not addressing this question up front will make everything feel loose and disconnected later on Challenges to field building when a research topic is too broadly defined We don’t want to broaden the definition of protocols so much that it becomes meaningless, which is a real danger when evaluating protocols in a non-purely-technical sense Bernadette shared this paper with me about how the lack of definition around “culture” has caused challenges in academia for those studying organizational culture. I like this excerpt about how to build a field that doesn’t just attract grifters: “In 1996, Ed Schein, perhaps the seminal figure in the field, called for researchers to meet four conditions to make progress in understanding organizational culture. First, the culture research needed to be anchored in concrete observations of real behavior in organizations. Second, these observations needed to be consistent or “hang together.” Third, there needed to be a consistent definition of culture that permitted researchers to study the phenomenon. And, fourth, this approach needed to make sense to the concerns of practitioners confronted with real problems, an edict that likely contributed to the consulting emphasis that we discussed above. Without consistency in definition and measurement, he argued, studies of culture will simply fail to aggregate, with different researchers studying different constructs even as they label them “culture.” Unfortunately, we believe that this lack of unity describes the current state of the field. While there have been voluminous studies on the subject, it is difficult to see with any clarity what we really understand about culture.” Dorian also drew parallels to the UX field, which apparently has become similarly populated with grifters due to lack of clear definitions + industry’s interests overshadowing academia Venkat shared a paper about low-paradigm vs. high-paradigm fields, which helped me think about where the study of protocols should fall. 
He also clarified that we don’t need a proper research field (i.e. “protocol studies”) to emerge from SoP, and maybe that’s part of the experiment in itself. I still think it’s important to feel like this body of work is cohesive and practically useful to “protocol practitioners,” even if it doesn’t turn into a field, and I want that to guide my work Core researchers come up with their own definitions of protocols The aforementioned paper on organizational culture defines culture as “the norms and values that guide behavior within organizations and act as a social control system.” I like this term “social control system,” and think this is more precisely relevant to protocols vs. culture as a whole Toby’s definition Rafa’s definition Dorian’s definition Venkat reminds us that we are unlikely to all settle on a single, shared definition of protocols, and that’s perfectly fine - but that if we end up with several competing schools of thought, that would be a good thing! I settle on a working definition of protocols for myself: “Systems of social control that dictate the procedural steps to resolve a coordination problem” I don’t want to spend more brain cycles on definitions than I have to. I want to keep things intentionally simple; I just need a heuristic that helps me guide my work, and I trust that I’ll improve on it as I get deeper into research. But I know I’m not going to get to the right answer just by thinking about it in a vacuum I’m also realizing that core researchers are approaching protocols from many different angles, beyond what I had even considered on my own. I think I can understand “protocols as systems of social control” by looking at how they exist across many different layers: psychological / self, physical / built environment, social, technological, cultural. I’m gonna try to workshop this into a more coherent framework, but will use this initial hypothesis to guide my plan of attack (i.e. try to dive deep into each layer and see if this is true) Guiding questions I’ve collected to scope my project + focus What is not considered a protocol? (via Rafa) What problem/s do we see among current practitioners / users of protocols in the wild that we would like to address? (via Kei) What do we currently all believe about protocols? What may or may not be true about that? 30 years from now, if someone were to write a history of protocols, what would they say about this era, and how it evolved into the next era? How would someone describe our collective history of prior thinking about protocols, even up til this day? In light of all this definitional work, I’ve decided to adjust my project scope. Originally, I wanted to look at how protocols spread and are transmitted (especially since I’ve had a tangled body of thought around antimimetics that I think would complement this work nicely). But writing about antimimetic protocols feels like Protocols 201. As fun as it would be, given the nascency of the field, I think I need to stick to a Protocols 101 project first. Otherwise it will just be confusing and not stick in the heads of anyone reading it. Updated my project description here. Weeks 3-4 Looking for protocol literature Having a hard time finding any literature about protocols that isn’t purely technical, which I guess is unsurprising. 
I think I need to use my own definition of protocols to figure out what isn’t necessarily coded as protocols right now, then weave that story together myself. I did re-read an old book I remembered I had about protocols and control, Protocol: How Control Exists After Decentralization. I remember thinking it was way too postmodern for my taste when I first read it, but it’s actually been quite useful to return to (though I still skimmed through a lot of the Foucault talk). It’s funny to consider where our common understanding of protocol governance was when I first read it in 2018, right after the first big crypto boom but well before the web3 era, and how much more relevant this book feels now. I think I wasn’t really able to place this book into modern context when I first read it, but I got a lot more out of it now.

Starting to flesh out my “material layers” of protocols

Aka psychological, physical, social, technological, cultural layers. After reading Galloway’s book (see above), I’m sort of thinking about these as the various “corporeal” forms that protocols take. Starting to work through each of these sub-themes as practical applications of my thesis, and hopefully come out with a more refined intuition for what “protocols” are vs. everything else (culture, norms, rituals, etc). I’m realizing that each of these layers had a “golden era” of development in post-industrial history (I think?), which I started outlining for myself. I’m gonna try to do a deeper dive into each of those periods on their own, and also see if they string together into any sort of interesting chronology. Had a useful convo with Angela about our shared interest in protocols that exist on the psychological layer (psychoanalysis, internal narratives, etc). We all have unconscious protocols (i.e. patterns of behavior) that dictate our reactions in any given situation, and these protocols are often hidden even to ourselves. We also often don’t know how we acquired these protocols, but can still be “trapped” (aka controlled) by them nonetheless.

Explaining tech’s notion of talent scarcity

TLDR: Most conversations about “top talent” assume Pareto distribution; however, a closer examination suggests that different corporate cultures benefit from different types of talent distribution (normal, Pareto, and a third option – bimodal) according to the problem they’re trying to solve. Bimodal talent distribution is rare but more frequently observed in creative industries, including some types of software companies. While Pareto companies compete for A-players (“high-IQ generalists”), bimodal companies compete for linchpins (those who are uniquely gifted at a task that few others can do). These differences account for variations in management style and corporate cultures. It was a group of consultants at McKinsey & Company who coined the “war for talent” in their 1998 report and subsequent book of the same name, propelling the term “top talent” into the corporate executive hive-mind for the next two decades. While McKinsey refrained from offering a precise definition of talent, they thought that a shortage of “smart, energetic, ambitious individuals” was coming, and that it would lead companies to fight to attract and retain the very best. In software, there is a related but distinct notion of the “10x developer,” which dates at least as far back as a 1968 study that accidentally uncovered individual differences in programmer performance, and was further popularized by Fred Brooks’ 1975 book, The Mythical Man-Month. The definition of a 10x developer is similarly vague, and its existence is frequently contested. Depending on who you ask, a 10x developer might be someone who can write code 10x faster; is 10x better at understanding product needs; makes their team 10x more effective; or is 10x as good at finding and resolving issues in their code. Despite the similarity between these two concepts, McKinsey’s notion of top talent and software’s 10x developer reveal subtle cultural differences. Both are concerned with identifying the best people to work with, but the McKinsey version defines the best as the top percentile in their field, whereas the 10x developer is often a singular, talented individual whose magic is difficult to explain or replicate. For example, in conversations about hiring AI researchers, many people have said something to the effect of “There are only [10-200] people in the world who can do what [highly-paid AI researcher] does.” This is a very different statement from, say, “We are trying to hire top AI researchers.” In the latter case, “top” means the highest-performing slice of all AI researchers, but in the former, the assumption is that there are only a handful of people who can perform the job at all. While this idea is intuitive among software engineers, it is rarely seen in other industries. Why can’t more people be trained to do certain tasks in software? Why aren’t there more Linus Torvaldses or John Carmacks? Will there only be 100 people, ever, who can do what some AI researchers do? After exploring these questions, I identified three distinct models of talent distribution, which correlate strongly to industry, but vary even within industries, depending on what the company does and how mature it is: Normal distribution: Talent follows a normal distribution. Companies succeed not by attracting and retaining “top talent,” but by the strength of their processes, to which all employees are expected to conform. Frequently seen among manufacturing, construction, and logistics companies. 
Pareto distribution: Talent follows a Pareto distribution, skewed towards the top nth percentile. Companies benefit from attracting, retaining, and cultivating “A-players,” who are expected to demonstrate exceptional individual performance. Frequently seen among knowledge work and sales-centric companies. Bimodal distribution: Talent follows a bimodal distribution, where companies benefit from identifying, hiring, and retaining “linchpins,” who make up a fraction of headcount, but drive most of the company’s success. Frequently seen in creative industries (ex. entertainment, fashion, design), as well as software companies solving difficult technical problems (ex. infrastructure). A company’s distribution type also shapes their organizational culture, which lives downstream of the types of talent they are most incentivized to seek out and hire. Most notably, we can understand the difference between what I’ll call McKinsey and Silicon Valley mindsets by understanding differences in their respective definitions of “top talent.” [1] Normal distribution Examples: manufacturing; freight and shipping logistics; construction Companies with a normal talent distribution are influenced by scientific management theory, or the “assembly line” approach, which emerged in the early 1900s in response to industrialization. This approach maximizes worker productivity while ensuring that the production process is highly predictable. Production is standardized and hierarchical, work is broken down into smaller tasks, and employees operate in lockstep as a machine. The competitive advantage of these companies lies not in a select number of top performers, but in the strength of its processes, built on specialized knowledge that is refined through years of practice. Workers’ roles are clearly defined and rarely change. They need to be competent and reliable, but individualism is gently discouraged, as it threatens the resilience of the process. Employees take pride in maintaining performance and being part of something bigger than themselves, rather than in standing apart from their peers. Toyota is a classic example of a company that differentiates itself through operational excellence, pioneering a more efficient approach to manufacturing throughout the second half of the 20th century. Their organizational culture is defined by “The Toyota Way” (its corporate philosophy, which emphasizes respect and a team-centric approach to continuous improvement) and the “Toyota Production System” (its manufacturing process, which emphasizes reducing waste). These philosophies tend to be emergent artifacts of tacit knowledge and practice, rather than articulated up front; The Toyota Way took decades to develop. Companies that benefit from normal talent distribution can still experience talent scarcity: for example, a shortage of construction or warehouse workers. But scarcity usually comes from a lack of demand among workers for these jobs (due to, ex. low pay or high barriers to certification), rather than because the talent pipeline doesn’t exist at all. Pareto distribution Examples: management consulting, investment banking, strong sales cultures Companies with a Pareto talent distribution are influenced by modern management theories that emerged in the 1940s and 1950s. Peter Drucker, for example, envisioned decentralized, participatory management among employees, as well as the rise of “knowledge workers” who would play a more active role in guiding organizational strategy. 
In this model, the central importance of knowledge workers and diffusion of managerial power among employees means that companies must preoccupy themselves with attracting, retaining, and cultivating “A-players,” or talent that performs at the top nth percentile of their field. A-players are “high-IQ generalists” who can excel at many different types of tasks and are capable of handling complexity at work. As described in McKinsey’s 1998 report, executives at “high-performing” companies (defined by the top percentile of shareholder returns) were more likely to have attended a Tier 1 undergraduate school, graduated in the top 10% of their class, had a higher undergraduate GPA, and had a master’s degree. An A-player might also have at least one or two hobbies they’re exceptional at, whether competitive rowing, language learning, or classical piano. They tend to perform at the top of their field, regardless of what they do.

“Companies [should not] hesitate to go outside their own industry. Sears hired Gulf War general Gus Pagonis to run its logistics; Banc One hired Taco Bell head Ken Stevens to lead retail banking.” – McKinsey’s The War For Talent report

A-players can be quantitatively defined and ranked against B- and C-players, which is why management consulting firms use IQ tests, math tests, and personality tests to hire A-players. Because this type of talent is easier to identify, Pareto-distribution companies often have robust recruiting programs to hire graduates straight out of college: specialized skills matter less than general competence. Management consulting (ex. McKinsey, BCG, Bain), investment banking (ex. Goldman Sachs, JP Morgan), and Big Tech associate product manager programs (ex. Facebook, Google) all recruit heavily out of college, with plenty of interview tips and preparation materials proactively offered. Applying to one of these jobs is not so different from studying for college entrance exams or applying to a top-tier university. If a normal-distribution company is more like socialism – where employees see themselves as part of a bigger machine, and benefits are evenly distributed – the Pareto-distribution company is more like capitalism, where rewards are unevenly distributed, accruing to the best performers. Employees are expected to be exceptional; consistently average performers who merely “meet expectations” will languish or be fired. Because not every employee can be exceptional (or else it would just be the new average), this leads to zero-sum internal competitions for power. In terms of talent scarcity, A-players represent a comparatively small percentage of the population. But, with the benefit of twenty years’ hindsight, McKinsey probably overstated the idea that there would be a “war” for A-player talent. Senior A-players are widely known and competed for, so employers pay well to keep them from being poached. But among junior A-players, even though the pool is somewhat fixed, it’s still a big pool, relative to the number of jobs that require A-players. And they are relatively easy to identify and cultivate. While junior A-players may be more interchangeable with one another, companies do fight for a monopoly over the talent pipeline. McKinsey, Goldman, and Google, for example, all want to be the de facto place for top college graduates looking for their first job. Although these companies are not actually competitors in terms of products, they are competitive when it comes to owning the A-player talent pool.
There is also a sort of “brain drain” paradox that occurs within an organization, rather than between organizations, where A-players tend to gravitate towards (or are recruited into) high-visibility management roles. This makes it harder to hire and retain A-players for lower-paying or less visible types of roles. Jan Fields, former CEO of McDonald’s, started out as a fry chef; Stuart Rose, former CEO of Marks & Spencer, started out on the sales floor. In other words, although there are other roles that could benefit from A-player skills, A-players are groomed to only do certain types of roles, so that there is no shortage of A-players competing for a small number of high-profile roles. [2] Bimodal distribution Examples: software infrastructure (ex. data management, cloud computing, security); hedge funds; creative industries (ex. entertainment, fashion, design) Companies that benefit from bimodal talent distribution succeed by attracting and retaining “linchpins,” whose unique skills provide a competitive advantage to the company. In contrast to A-players – generalists that like to solve any problem thrown at them – linchpins are specialists who are very, very good at one particular type of task, which most other people in the world cannot do. This type of company is rare, but more frequently observed among software companies tackling difficult technical problems, as well as creative industries. Disney’s Bob Iger blocked Marvel’s CEO Ike Perlmutter from firing its president, Kevin Feige, after they acquired the company, whom Iger saw as its visionary. Apple’s history is frequently told through the dynasties of its most iconic designers: Steve Jobs, Jony Ive. Linchpins are qualitatively defined, and are thus especially difficult to hire for, or even identify. While A-players willingly cram for their exams at Tier 1 management consulting firms, software engineers frequently criticize code interviews and hiring practices because they think they don’t accurately test for ability. The lack of consensus around what even constitutes a 10x engineer, again, points to the difficulty in quantifying linchpin talent, even if everyone ‘knows it when they see it.’ “I came to see that the types of people who are good at pleasing admissions committees are not the types of people who are good at founding companies.” – Michael Gibson, Paper Belt on Fire, reflecting on the Thiel Fellowship program, which he helped launch and run Linchpins are more frequently observed in software because, unlike physical engineering, software has sprawling complexity. A civil engineer who builds a bridge must conform to industry standards that limit their creativity, and they are beholden to the laws of physics regardless. But writing software has no such constraints. Solving problems with code has an infinite possibility space; what you can build, and how you build it, is only bound by imagination. New programming languages, tools, and frameworks can be invented entirely and used as needed. Unless the task is well-understood and frequently repeated, one cannot simply teach an engineer what to do in every circumstance, because there are endless “unknown unknowns” that could lead to better solutions. Similarly, fixing bugs has an equally infinite (and frustrating) possibility space; unlike a civil engineer, who can see and inspect a malfunctioning bridge, code that doesn’t act as expected can be blamed on any number of invisible dependencies. 
The gap between an average and exceptional software engineer, then, is much bigger than that between an average and exceptional physical engineer, as an exceptional developer can “see” possibilities that an average one cannot, due to some amorphous combination of intelligence, creativity, and intuition. [3] Most software companies these days don’t require linchpins, however, because they sell commoditized products without a competitive technical advantage. While these products may not have to conform to industry standards, as with physical engineering disciplines, the right tools for the task are strongly influenced by social norms. This seems to be where much of the confusion lies regarding “10x developers:” while historically, virtually all software companies were tackling difficult technical challenges, today, most probably look more like Pareto-distribution companies, and should follow a different management approach. But companies that are building foundational technology still benefit from linchpins. For example, Snowflake was co-founded by Benoît Dageville and Thierry Cruanes, two highly respected software architects at Oracle; Databricks was co-founded by the creators of Apache Spark. Bimodal-distribution companies benefit from what Sebastian Bensusan calls “high variance management,” which he compares to producing a Hollywood movie instead of a Broadway play. With a live performance, an actress must be able to deliver her lines correctly at every single performance, so it’s important to select for consistency. But when filming a movie, the actress can fail six times if that means she produces one really amazing performance. Thus, a movie director can be more adventurous in deciding who to cast, as well as encourage her to take risks with her performance. “Talent with a creative spark…is where the bureaucratic approach is most deadly.” – Tyler Cowen, Talent: How to Identify Energizers, Creatives, and Winners Around the World At bimodal-distribution companies, only a handful of linchpins are needed; everyone else at the company plays a supporting role. At first glance, these companies might look like they are normal distribution, because they have tightly-organized production processes, but the difference is that everyone is working in support of a single individual’s vision. Fred Brooks compared this to surgical teams in The Mythical Man-Month, where “few minds are involved in design and construction, yet many hands are brought to bear.” Linchpins are usually siloed from one another: a team comprised of linchpins doesn’t necessarily produce great outcomes, versus having teams that are each centered around a linchpin, who’s paired with a supporting team. Linchpins are more likely than A-players to produce work of public benefit, such as inventing a new technology or design, which can muddle their market value to employers. By contrast, an A-player’s value is usually confined to their employer – for example, managing teams or projects, or generating sales – which makes the return on investment clear. Only when linchpins offer private value (ex. infrastructure providers competing to hire a software architect) will they command large sums in the market. The heated competition for AI researchers between Google and Microsoft/OpenAI, for example, is partly driven by these dynamics. Because these companies are building foundational technology, whoever aggregates the most AI researchers from a select pool of hires will have a major competitive advantage. 
On the other hand, companies don’t fight to hire prolific open source developers – unless having their expertise in-house gives the company a competitive advantage – because an open source developer’s output is primarily a public good: they will produce that same value regardless of whom they work for. Right hands There is also a variant of linchpins that I’ll call “right hands” (a term borrowed from ex-Stripe Will Larson). Right hands are people who enjoy a uniquely high-trust, close relationship to at least one executive, and operate as a proxy for that executive’s interests. Although right hands are often generalists, they are more similar to linchpins than A-players, because they are qualitatively defined, have a unique role that no one else can perform, and work best in silos. Like linchpins, right hands are a rare form of talent, and companies need them to achieve their most ambitious, innovative goals. They are – ideally – exceptionally competent, loyal to the company, and capable of seeing the “big picture.” A right hand’s value is primarily derived from being an executive’s “chosen one,” which gives them unusual levels of creative working freedom. Because this freedom is so contextual, their success is typically confined to one company, and/or working with that particular executive. Right hands don’t necessarily have senior titles, nor is their true impact and favored position always visible to outsiders. Unlike A-players, they don’t tend to follow a typical corporate leadership path – they’re more likely to start their own companies afterwards, versus becoming an executive at another company. (This phenomenon can be observed among highly successful startups, which create “mafias” of early employees.) “I care a LOT about Stripe: when I see something out of place I feel antsy and want to fix it.” – Michelle Bu, Payments Products Tech Lead at Stripe “[W]hen Jack Welch met with The Home Depot to share what is distinctive about GE’s approach to managing growth, he took two human resources executives with him…building their bench is a crucial part of their job.” – McKinsey’s “The War for Talent” report Putting these models into practice A company’s talent needs might change over time. For example, while Google likely started as a bimodal-distribution company to build its advantage in search, it appears to have become more of a Pareto-distribution company as the organization matured (though I’m not sure this was advantageous for its reputation as an innovator!). Corporate culture norms can be explained by the types of talent that a company is trying to acquire or cultivate. For example: OpenAI’s heavy mission focus and its unusual governance and funding structure can be partly explained by its need to attract “linchpin” AI researchers, who are often very principled OKRs, KPIs, and similar performance frameworks are designed for Pareto-distribution companies, where performance is quantifiable and measurable against other employees Zero incident culture (“X days without an accident”) emphasizes consistency and collective responsibility, and discourages risky behavior among normal-distribution companies The prototypical software company culture (flexible hours, games and snacks in the office, “quiet areas”) was designed to attract and retain linchpin developers Finally, different models of talent partly explain the differences between McKinsey and Silicon Valley cultures. 
The former is highly quantified, favors tightly-coordinated teams of overachievers, and encourages a competitive zero-sum talent environment (A-players are a fixed percentile of the overall pool, and if you’re not in, you’re out). The latter skews more qualitative, favors lone “creative genius” archetypes, and has a positive-sum approach to talent (linchpins could be lurking anywhere, and we surely haven’t uncovered them all). It could also help explain why historically, many software companies struggle to mature or produce shareholder returns in public markets, as succeeding at this scale requires transitioning organizational culture from bimodal- to Pareto-distribution. [4] Thanks to Sebastian Bensusan, Bernadette Doerr, and Daniel Lee for conversations that influenced the direction of this piece. Notes As discussed later, many if not most software companies are probably Pareto-distribution now, because they don’t compete on technical advantage. But – unlike management consulting – the software industry was historically built upon the notion of bimodal talent distribution, and is still culturally influenced by this way of seeing the world. ↩ See also: Peter Turchin’s notion of elite overproduction, or the idea that society produces too many elites for a limited number of powerful positions. ↩ I was struck by how every engineer I spoke to struggled to articulate this concept more precisely. The words “voodoo,” “magic,” and “alchemy” were all used. The competitive advantage of linchpins reminds me, surprisingly, of normal-distribution companies at the organizational level, where process power can’t be described or replicated because it is acquired through tacit knowledge. But in the case of software engineers, this indescribable advantage occurs at the individual level. ↩ I don’t know if this is true or not. It’s a hypothesis! ↩
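To make the three talent models in this piece a little more concrete, here is a purely illustrative Python sketch – all parameters are arbitrary assumptions, not real performance data – that samples a stylized normal, Pareto, and bimodal “talent” distribution and compares how much of the total output the top 10% of people account for under each:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Stylized talent distributions; all parameters are arbitrary illustrations.
normal = rng.normal(loc=100, scale=15, size=n).clip(min=1)      # most people cluster near the mean
pareto = (rng.pareto(a=2.0, size=n) + 1) * 50                    # heavy right tail of "A-players"
bimodal = np.concatenate([                                       # a small cluster of "linchpins"
    rng.normal(100, 15, int(n * 0.95)),
    rng.normal(500, 50, int(n * 0.05)),
]).clip(min=1)

def top_decile_share(talent):
    """Fraction of total 'output' contributed by the top 10% of people."""
    cutoff = np.quantile(talent, 0.90)
    return talent[talent >= cutoff].sum() / talent.sum()

for name, talent in [("normal", normal), ("pareto", pareto), ("bimodal", bimodal)]:
    print(f"{name:>8}: top 10% account for {top_decile_share(talent):.0%} of total output")
```

The exact numbers don’t matter; the point is the shapes. Under a normal distribution the top decile only modestly outproduces its headcount share, under a Pareto distribution the right tail accounts for a disproportionate share, and under a bimodal distribution a small, distinct cluster sits far above everyone else – closer to the “linchpin” picture described above.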

Mapping digital worlds

I gave a talk at The Stoa in February about “Mapping digital worlds to understand our present and future.” You can watch the video here; the following is a transcript of that talk. Peter Limberg asked me to talk about an analysis I did late last year, where I mapped out all the different tribes that are influencing the climate conversation today. But since Peter also did a great analysis of memetic tribes back in 2018 that I enjoyed, I thought it’d be fun to zoom out and talk about this meta-practice of mapping digital worlds more broadly: both why it matters, and how I and others go about doing it.

Table of Contents
- Maps legitimized our physical world
- The digital world is still largely unmapped
- Digital maps are more subjective than physical ones
- My process for mapping digital worlds
- Figure out what to show
- “See” the space
- Record your landmarks
- Add detail
- Draw the map
- Let’s look at some maps!
- Skeuomorphic maps
- “Type of Guy” maps
- Schematic maps
- “Chinese menu” maps
- Matrix maps
- Word cloud maps
- Maps create power (good and bad)

Maps legitimized our physical world

In the physical world, universal access to maps is still a fairly recent thing. I still remember my dad teaching me how to read a map when I was little; we’d get these big foldout travel maps from AAA and stash them in the car in case we got lost. Then things got a little more technologically advanced, and we’d type our destination into MapQuest and print out the directions. Today, Google Maps (and Apple Maps) have made maps so ubiquitous that we take it for granted that everyone can have an instant magical bird’s eye view of any physical location in the world. For the layperson, this represents a completion of the cartographic quest of mapping our physical world, which really took centuries to achieve. No one thinks about it now because it seems so mundane, but to me, this is a huge feat that didn’t happen until our own lifetimes. For most of modern history, having access to maps was a form of power, like books or any other form of information. If you had a map, you could see the world in a way that others couldn’t. Maps were a way to extend political power and influence. If you knew where certain natural resources were located, or where to build roads and trade routes, you knew how to extract value from the world around you. If you knew where all the towns and farms were in your kingdom, you knew whose doors to knock on to make them pay taxes. If you didn’t have maps, you didn’t know anything about the world, beyond what you could see with your own naked eye, which was…not very much. During the sixteenth-century age of exploration, European governments would sometimes keep their maps hidden or unpublished so that competing countries couldn’t benefit from them. So there were a lot of scary implications associated with mapmaking, because it made all these previously unknowable things visible to outsiders, and to potential enemies and oppressors. You had townspeople and farmers and people in neighboring lands, some of whom deliberately tried to stay off of the maps, and fought being drawn and measured and cataloged, because they didn’t want anyone to know they existed. On the other hand, I would argue that mapmaking can also empower us to define ourselves in relation to others. The concept of a nation state is meaningless without a map that proves where its boundaries are, and that we have shared consensus on. And so I kinda take a more neutral stance: maps created a playing field for power.
While it’s nice to imagine a peaceful world in which everyone lived in their own little towns and didn’t interact with each other, I also think maps were inevitable to create a world where people could fight for bigger rewards and play higher-stakes games with each other.

The digital world is still largely unmapped

That’s what happened in the physical world. But in the digital world, there’s still no equivalent of Google Maps to understand all the different online communities and spaces that many of us interact with. You can try to Google them, but Google isn’t very helpful. We are still in this sort of uncharted, “natives only,” IYKYK stage of cartography in the digital world. You have to know what you’re looking for. Part of the reason for this, I think, is that people who are deep in a particular world often underestimate the value of their own tacit knowledge. They take for granted that these spaces are easily findable or understandable to others, and they forget how much context you actually need. The other reason, just as with physical territories, is that some of this is deliberate hiding by the “natives” themselves. A lot of online communities don’t want to be found. You’ll sometimes come across popular blog posts with a disclaimer that says, “Please stop posting this to Hacker News” or whatever, because they’re tired of being flooded by randos. On the other hand, some of these online communities are having a growing influence on “real world politics,” and this is where we see conflicts arise. Because when outsiders do stumble across your digital land, and they don’t have a map, they start trying to fit it into the frameworks they do have, and things get very confusing quickly. And if they perceive your territory is powerful enough, they will try to conquer it. For example, I’ve found it somewhat painful to watch effective altruism and rationalists in the media spotlight this past year, because you have people on the left claiming it’s filled with a bunch of “tech bro libertarians,” and people on the right equally claiming it’s a bunch of “woke commie Marxists.” In reality, neither is correct. But you can’t necessarily blame people for it, either; they’re just using the maps they do have to try to interpret all this unknown territory. And in fact, this is something that Peter and Conor predicted in their memetic tribes piece: that as EA became more popular, influential, and widely discovered, it would start being subjected to these outsider values.

“Incubated on Overcoming Bias and LessWrong, [the rationalist diaspora] is an observer tribe in the culture war….Watch for a popularity boost to Effective Altruism, a struggle with the downsides of increased attention, and possible pressure from the SJAs for the Rationalists to commit to progressive values.” —Peter Limberg and Conor Barnes, “The Memetic Tribes Of Culture War 2.0”

I get why digital territories don’t always want to be mapped, and I’d even say that most probably don’t need to be. A lot of online communities function better as quiet incubators of ideas. But when ideas start to escape from those communities and find themselves landing in mainstream conversations, I think maps can be helpful to reduce misunderstandings and help newcomers find their way around the topic.
In particular, mapping digital worlds can help us take these big, unwieldy, fast-changing topics - like what Peter and Conor did in mapping the shift in political conversations in 2018, or what I recently tried to do in mapping out the shift in climate conversations - and distill things down to their most important elements. Digital maps are more subjective than physical ones One thing that I think people consistently misunderstand is that the world is much smaller than it seems. The number of people who influence a given topic is never really that big. But for some reason, people often try to map out hot topics by looking at the outputs. So in the case of climate, it’s like, “Let’s try to make a big list of all the different technologies that are being developed today.” Nuclear, solar, wind, geothermal, carbon removal, whatever. And then let’s list all the companies that are working on those things. I find this approach to be confusing and overwhelming, because outputs change very quickly. It’s not always obvious, by looking at them without context, how they’re all connected or what’s driving their development. By the time you’ve locked down the current position, it’s already moved again. Whereas looking at these topics on the tribal level – meaning, the people who are driving these changes – is more like “playing the man, not the cards” in poker. It tells you why certain outputs are moving in the direction they are, and helps you predict where they might go next. If there are, say, a million points of output you could be looking at, the number of people creating them is more like a hundred data points. And that’s a much easier universe to wrap our heads around. One thing that’s kinda tricky about mapping digital worlds, though, is that they are way more subjective than mapping physical territories. It can be hard to know when you’ve hit on an objective truth, because community lore can go really deep, and there’s also plenty of misdirection; one of the more recent examples is a fake interview given by two Twitter alts. If you talk to the wrong people, or read the wrong forums or blog posts, you can get a completely different picture of what a space looks like. This is why there are a lot of misunderstandings from journalists who try to report on digital spaces, because they’re just training themselves on all the wrong inputs. The other thing, of course, is that even if you do manage to map the space out accurately, digital territories are just more ephemeral. Communities are always changing, and central figures or gathering spots can grow or die very quickly. And that means the maps of these territories also go out of date quickly. I do think mapping digital spaces is different from mapping a physical space, where you can see with your own eyes – okay, there’s a river here, and a mountain there. Mapping digital territories is more like echolocation, where you’re standing in a dark space, totally blind, and you need to “ping” the space around you and see what you get back. Eventually, with enough data from those pings, you can start to “see” the world around you. But you’re not really using your eyes to map it: it’s more like another proprioceptive sense that emerges, or a vague sense of your position in space. And learning how to map digital territories requires training that sense. My process for mapping digital worlds I’m gonna talk about how I approach mapping digital worlds. 
This was kind of a fun exercise for me, because I hadn’t really thought about my methodology before preparing for this talk. So we’re gonna talk about that, and then we’re gonna look at a bunch of other digital maps to see how other people do it.

This is my general process for mapmaking. It starts with figuring out what you’re trying to show, “seeing” the space with that proprioceptive sense I talked about, recording your landmarks, going back and adding detail, and then finally drawing the map.

Figure out what you’re trying to show
“See” the space
Record your landmarks
Add detail
Draw the map

Figure out what to show

“Figuring out what to show” is about figuring out the general theme of the map. You can have a map that’s more unopinionated – where you’re just trying to draw the entire landscape – or you can have a more thematic approach that’s trying to highlight a certain aspect of that world. Just getting to the right question itself can take a while.

With my climate tribes work, for example, I started out thinking I was trying to model what these so-called “doomer industries” look like: industries that are oriented around some apocalyptic vision of the world. My first attempt at that did lead to the creation of a map, where I was trying to show the general anatomy of a doomer industry, using a Tootsie Pop as an analogy.

But the more interesting thing I stumbled upon was realizing that the climate discourse has changed a lot since the early 2000s. Even though people still talk about climate deniers, I think we’ve implicitly moved from being divided on “Is climate change real, or not?” to these more actionable, tribal divisions around the right solutions to pursue. I decided to make that the focus of my map, and to develop language to identify what all the different climate tribes were – some of which don’t even use the term “climate,” or try to distance themselves from it, but are still part of that conversation anyway.

“See” the space

Once I have a goal, I start trying to “see” or traverse the space, using that echolocation I talked about. I think this happens interchangeably with the first step: you start with a question, then you look at the space a bit, and that helps you refine your question, and you iterate your way towards a framework.

It’s kinda hard for me to describe how “seeing” a space works, but a lot of it is paying attention to what’s happening between the lines, and where the boundaries are. You might look for conflicts between two groups, and then you say, “Okay, what type of language are they using? What’s different about their goals? Where do they disagree?”

One example I looked at with climate is what I called the energy maximalists. They stand out because they don’t really talk about climate change, but they do like to talk about “energy.” For example:

Via Twitter.

Then you ask yourself, well, why do they use that term? It’s because they don’t want to talk about the scarcity of environmental resources. They want to focus on abundance: how do we create more energy? So that already gives me some motivations and key vocabulary to start recording. You can use that to go deeper into the group and ask, “Who do they keep referencing? Which blog posts do they cite? Where do they gather online?”

Record your landmarks

As I go through that process, I start recording key landmarks that I notice along the way. Landmarks, just like rivers or mountains, can be people, organizations, keywords, canonical reading, events, where they gather online, things like that.
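As a purely hypothetical illustration, here’s a minimal sketch of what those recorded landmarks could look like as structured notes. The field names and example values are my own inventions for this sketch, not an actual tool or dataset I use:

```python
# A hypothetical sketch of recording tribe landmarks as structured notes.
# Field names and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TribeRecord:
    name: str
    people: list[str] = field(default_factory=list)             # recurring voices
    organizations: list[str] = field(default_factory=list)      # institutions they orbit
    keywords: list[str] = field(default_factory=list)           # insider vocabulary
    canonical_reading: list[str] = field(default_factory=list)  # books, posts, manifestos
    gathering_places: list[str] = field(default_factory=list)   # forums, blogs, group chats

# Example entry, loosely based on the "energy maximalists" described above.
energy_maximalists = TribeRecord(
    name="energy maximalists",
    keywords=["energy", "abundance"],
    gathering_places=["Twitter"],
)
print(energy_maximalists)
```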
Just sticking with the climate examples for a bit, I noticed there was at least one cluster of people who would reference Michael Shellenberger’s book Apocalypse Never, or the Breakthrough Institute, or the Ecomodernist Manifesto. Those are some examples of landmarks I used to “draw” a picture of that tribe and their territory. The idea being that if someone had a collection of all these items, they too might be able to “see” the same space that I see.

Add detail

At this point, I have a general schematic of the space, and that’s when I go in and start trying to add detail: stress-testing my theories, and making sure that my assumptions hold. This is the equivalent of adding fine lines or shading in a drawing.

Some of this work I can do on my own. I’ll do things like go through and read people’s Twitter feeds, or blogs, with my “picture” of the space in my mind, and see if anything breaks the model. If so, I go back and refine my model to reflect those changes. But I think it’s also kinda necessary to just talk to as many people as you can, or find ways to be around the activities they’re doing in their natural environment, so you can observe what’s going on.

Draw the map

Finally, there’s the actual drawing or visualizing of the map in a way that makes it understandable to your audience. Writing out my process for this talk has made me realize that I’m pretty bad at visualizations. I’m a text-heavy person. I’m very into the “back-end” part of map work – actually figuring out what the territory even looks like – but I’m less into the “front-end” part of it, meaning how to communicate it effectively to my desired audience. So I don’t know that I have great advice here – or rather, whatever my advice is, you probably shouldn’t listen to me, because I just like text-based everything.

I was proud of myself with the climate tribes piece, though, because I used DALL-E to generate images for each tribe, which was actually really helpful. So I guess if you’re very text-centric like I am, this is one way to get out of your comfort zone. I also remembered at the last minute before publishing to put all the images, tribes, and descriptions into a little table, which I think was helpful for people to see it all in one place, instead of in one long blog post, and it made it more shareable as a summary. So that was one way to visualize it.

Let’s look at some maps!

That’s my process for mapmaking. As our last activity, I thought it’d be fun to look at some different examples of digital maps, and try to unpack the methodologies they used. There’s no order to these examples; I just collected a few that I thought were interesting for different reasons. Shoutout to Rival Voices for publishing a collection of maps on his Substack, which was very helpful for me.

Skeuomorphic maps

To start with the obvious, I’ll show a few examples of what I call “skeuomorphic” maps, which use physical territories as a literal metaphor for digital ones.

Via Ribbonfarm.

This Ribbonfarm one is a classic that’s stuck in my mind over the years, and you can see how different geographic landmarks, like mist or islands, or giant crystals, are used to convey something about the character of each community.

Via Slate Star Codex.
Scott Alexander has also made a map for effective altruism, as well as for the rationalist and rationalist-adjacent blogosphere, using this skeuomorphic approach.

Via xkcd.

This one is from xkcd. It’s a “map of online communities” in 2010, back when the world of online communities was small enough to fit on a map.

These skeuomorphic maps can be useful, because they borrow from an existing framework that readers already understand. For example, in the xkcd map, Facebook is enormous in terms of its influence, but also kinda isolated as this behemoth, whereas YouTube and Twitter, just below it, are comparatively smaller, but are more like biodiverse islands that give way to other little communities.

But skeuomorphic maps can also be limited, in that we can see a picture of the entire, broad landscape, but it’s a very shallow set of information. I would call this a relational map – it tells us how these different territories compare to each other, but doesn’t tell us a whole lot about any one specific community.

The other issue is that accessing the digital world isn’t the same as accessing the physical world. Going back to that need for echolocation or proprioception, I think we need to rely more on those senses to make our way around digital worlds. A physical map doesn’t really tell me how to find the interesting stuff on YouTube or Facebook or whatever, beyond going to the literal websites, of course. I have very few landmarks here to help me generate a picture of any one community. These maps are always fun to look at, but I think digital-native maps actually look quite different.

“Type of Guy” maps

Doomers
Virgins vs. chads

These are on the other end of the information scale, and they feel much more digital-native. I call these “type of guy” maps. You might say they’re more like memes than maps, because they don’t necessarily look like what we think of as a map, but they actually contain a lot of condensed information that helps us generate a picture of a digital world more quickly. These maps tell you how to conjure an image of a certain type of person, or community, in your mind, based on landmarks that might have otherwise been invisible to you. They are less useful at conveying relational information than the skeuomorphic maps – the virgin-chad meme, for example, would break if we added too many more personas – but they are good for helping you find your way to at least one or two places.

Schematic maps

Memetic tribes
Idea machines

Staying with the more detailed or thematic map examples, schematic maps are also more about depth over breadth. They’re also more opinionated and explicit about the underlying framework being used, versus the other two types we looked at. We’ve got Peter and Conor’s memetic tribes 2.0 above, which gives each memetic tribe a set of characteristics – sacred values, existential threats, campfires (a term I love for “gathering places”). Below that is a diagram I made for a post I wrote last year about what I called “idea machines,” or these amorphous organisms, like effective altruism or progress studies, that can turn ideas into outcomes. In both cases here, the concept of a “memetic tribe,” or the concept of an “idea machine,” is just as important to these maps as what’s actually being mapped inside them.

“Chinese menu” maps

Via Food+Tech Connect.
Via Sequoia (source).

I personally love schematic maps, because as I’ve said, I’m very text-heavy, and it’s kinda all the visual I need.
On the more visual side, this “Chinese menu”-style map is a very popular landscape-mapping technique, but I think it’s a deceptive one. It doesn’t really help you find your way around a space, because it generally only uses one type of landmark, like companies or organizations, and groups them into themes. To me, this is kinda like handing someone a list of every river in the United States – just rivers, nothing else – and asking them to draw a map of America based on that. I think you just need more than one type of landmark to paint that picture. So I don’t find this type of map to be particularly actionable. Perhaps it’s more like an infographic than a map, but it is very popular.

Matrix maps

Political compass
Alignment map

A more information-rich way of showing a digital landscape is the matrix map, which is also very popular, and gives the reader more relational context. Above, I have the political compass, as well as an example of “alignment” charts, which map certain landscapes based on a good-evil and lawful-chaotic framework. It still only gives you one type of landmark, such as political parties, or game companies in the examples here, but it at least “draws” the space by putting those landmarks into a more contextual landscape.

Word cloud maps

Via Twitter.

This is the last type of map I’ll show, just to bring our analogy back to the physical world again. The mapping methodology I described earlier, which I use, is kinda like me drawing maps by hand: it’s like I went and surveyed the land with my own eyes and then figured out how to draw all that out. This type of map, on the other hand, is more algorithmically generated. It’s like using satellite imagery to generate a map – very Google Maps-esque. I call it the “word cloud” technique.

This example is from a set of maps that someone published a few years ago, where they clustered different groups of people on Twitter together based on their public interactions, and used that to visualize all these digital spaces. They labeled the blue circles as “accelerationism and esoteric philosophy,” orange as “weird rationalists?,” and purple as “4channish, ironic humor.” And then they didn’t know what pink was.

If what I do is more like “echolocation,” flooding my brain with a ton of input and seeing what emerges in my mind, then this approach is more like a typically “scientific” one. It can probably uncover connections that we might not be aware of otherwise, and it also helps remove the doubt of “Am I training my brain on the wrong inputs?” or “Am I stuck in the wrong corner of the web?”, because you can see it all.

On the other hand, I think this technique highlights what is so difficult about mapping digital versus physical territories, because digital spaces are so subjective and hard to discern with cold, hard data alone. For example, there are plenty of people who talk to each other a lot in DMs or group chats, but don’t interact much in public, and you wouldn’t know that if you were just looking at public data. There are prominent people in certain communities who aren’t on Twitter, but have very active blogs or newsletters. So I think there’s still a lot of tacit knowledge that’s hard to capture with this approach.

It’d be cool to see some of these mapping tools developed as the “satellite imagery” equivalent for digital maps, but I also think their usefulness is more limited in the digital world. But that might just be my bias.
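To give a rough sense of the mechanics behind this kind of algorithmically generated map, here’s a small sketch of the general technique: build a graph of who publicly interacts with whom, then let a community-detection algorithm group the accounts into clusters. The account names and edge weights are invented, and this is a generic illustration of the approach, not the methodology behind the maps above.

```python
# A generic sketch of clustering accounts by their public interactions.
# The accounts and interaction counts below are made up for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical edge list: (account_a, account_b, number of public interactions)
interactions = [
    ("alice", "bob", 14),
    ("bob", "carol", 9),
    ("carol", "alice", 11),
    ("dave", "erin", 21),
    ("erin", "frank", 7),
    ("frank", "dave", 16),
    ("carol", "dave", 1),  # a weak bridge between two otherwise separate clusters
]

G = nx.Graph()
G.add_weighted_edges_from(interactions)

# Community detection groups accounts that interact with each other more than
# with the rest of the graph; this is the algorithmic version of "drawing" a tribe.
communities = greedy_modularity_communities(G, weight="weight")

for i, members in enumerate(communities):
    print(f"cluster {i}: {sorted(members)}")
```

Even with the clusters computed, naming them (“weird rationalists?”, “4channish, ironic humor”) is still a manual, interpretive step, which is where the tacit knowledge comes back in.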
Maps create power (good and bad)

I hope this gave you some ideas about why digital maps are useful, why we should take this practice more seriously, and how you might go about creating your own. I’d love to see more maps of digital worlds, so if you make any, please send them to me!

To wrap up: I think a lot of people want to think of digital territories as these idyllic tribes, like a safe space or escape from the real world. But even the short history of online communities thus far suggests that they are evolving and changing. We can’t be stuck in the ’90s forever. The 2010 xkcd map of online communities, for example, paints a very different world from the map we’d draw today. Things are changing. So if we view the history of digital worlds as linear, progressive, or on a forward trajectory in any way, rather than as a static state, I think mapmaking could help us take our digital worlds more seriously, and even legitimize them.

Balaji Srinivasan published a book last year called The Network State, which basically introduces his idea of what comes after the nation state. While nation states are defined by geographic borders and physical territories, network states are a digital-first version that start out as online communities, but can eventually acquire land and diplomatic power, which would make them as powerful as nation states.

Maybe digital spaces are still in this ephemeral, nomadic-tribe state because we artificially keep them there by refusing to map them. Opening these spaces up to the outside world can make them into targets, but it also creates new pathways for transactions to flow between worlds, and that can make the digital world more powerful. If we want to take our digital worlds more seriously, part of that starts with finding ways to legitimize them, instead of leaving them undefined. And in the physical world, at least, the way that started was with mapmaking.
