More from ntietz.com blog - technically a blog
If you manage a team, who are your teammates? If you're a staff software engineer embedded in a product team, who are your teammates? The answer to the question comes down to who your main responsibility lies with. That's not the folks you're managing and leading. Your responsibility lies with your fellow leaders, and they're your teammates.

The first team mentality

There's a concept in leadership called the first team mentality. If you're a leader, then you're a member of a couple of different teams at the same time. Using myself as an example, I'm a member of the company's leadership team (along with the heads of marketing, sales, product, etc.), and I'm also a member of the engineering department's leadership team (along with the engineering directors and managers and the CTO). I'm also sometimes embedded into a team for a project, and at one point I was running a 3-person platform team day-to-day. So I'm on at least two teams, but often three or more. Which of these is my "first" team, the one which I will prioritize over all the others? For my role, that's ultimately the company leadership. Each department is supposed to work toward the company goals, and so if there's an inter-department conflict you need to do what's best for the company—helping your fellow department heads—rather than what's best for your department. (Ultimately, your job is to get both of these into alignment; more on that later.) This applies across roles. If you're an engineering manager, your teammates are not the people who report to you. Your teammates are the other engineering managers and staff engineers at your level. You all are working together toward department goals, and sometimes the team has to sacrifice to make that happen.

Focus on the bigger goals

One of the best things about a first team mentality is that it comes with a shift in where your focus is. You have to focus on the broader goals your group is working in service of, instead of focusing on your group's individual work. I don't think you can achieve either without the other. When you zoom out from the team you lead or manage and collaborate with your fellow leaders, you gain context from them. You see what their teams are working on, and you can contextualize your work with theirs. And you also see how your work impacts theirs, both positively and negatively. That broader context gives you a reminder of the bigger, broader goals. It can also show you that those goals are unclear. And if that's the case, then the work you're doing in your individual teams doesn't matter, because no one is going in the same direction! What's more important there is to focus on figuring out what the bigger goals should be. And once those are done, then you can realign each of your groups around them.

Conflicts are a lens

Sometimes the first team mentality will result in a conflict. There's something your group wants or needs, which will result in a problem for another group. Ultimately, this is your work to resolve, and the conflict is a lens you can use to see misalignment and to improve the greater organization. You have to find a way to make sure that your group is healthy and able to thrive. And you also have to make sure that your group works toward collective success, which means helping all the groups achieve success. Any time you run into a conflict like this, it means that something went wrong in alignment. Either your group was doing something which worked against its own goal, or it was doing something which worked against another group's goal.
If the latter, then that means that the goals themselves fundamentally conflicted! So you go and you take that conflict, and you work through it. You work with your first team—and you figure out what the mismatch is, where it came from, and most importantly, what we do to resolve it. Then you take those new goals back to your group. And you do it with humility, since you're going to have to tell them that you made a mistake. Because that alignment is ultimately your job, and you have to own your failures if you expect your team to be able to trust you and trust each other.
Code ownership is a popular concept, but it emphasizes the wrong thing. It can bring out the worst in a person or a team: defensiveness, control-seeking, power struggles. Instead, we should be focusing on stewardship.

How code ownership manifests

Code ownership as a concept means that a particular person or team "owns" a section of the codebase. This gives them certain rights and responsibilities:

- They control what goes into the code, and can approve or deny changes
- They are responsible for fixing bugs in that part of the code
- They are responsible for maintaining and improving that part of the code

There are tools that help with these, like the CODEOWNERS file on GitHub (see the example below). This file lets you define a group or list of individuals who own a section of the repository. Then you can require reviews/approvals from them before anything gets merged. These are all coming from a good place. We want our code to be well-maintained, and we want to make sure that someone is responsible for its direction. It really helps to know who to go to with questions or requests. Without these, changes can grind to a halt, mired in confusion and tech debt. But the concept in practice brings challenges. If you've worked on a team using code ownership before, you've probably run into:

- that engineer who guards the code against anyone else's changes, wanting all the credit for themselves
- that engineer who refuses to add anything else to their codebase, because they don't want to maintain it
- that engineer who tries to gain code ownership over more areas, to control more of the code and more of the company

I've certainly acted badly due to code ownership myself, without realizing what I was doing or why I was doing it at the time. There are almost endless ways that code ownership can bring out the worst in people. And it all makes sense. We can do better by shifting to stewardship instead of ownership.

Stewardship is about service

We are all stewards of things we own or are responsible for. I have stewardship over the house I live in with my family, for example. I also have stewardship over the espresso machine I use every day: It's a big piece of machinery, and it's my responsibility to take good care of it and to ensure that as long as it's mine, it operates well and lasts a long time. That reduces expense, reduces waste, and reduces impact on the world—but it also means that the object (an espresso machine) is serving its purpose to bring joy and connection. Code is no different. By focusing on stewardship rather than ownership, we are focusing on the responsible, sustainable maintenance of the code. We focus on taking good care of that which we're entrusted with. A steward doesn't jealously guard, or struggle to gain more power. A steward watches what her responsibilities are, ensuring enough to contribute but not so many as to burn out. And she nurtures and cares for the code, to make sure that it continues to serve its purpose. Instead of an adversarial relationship, stewardship promotes partnership: It promotes working with others to figure out how to make the best use of resources, instead of hoarding them for yourself. Stewardship can solve many of the same problems that code ownership does:

- It gives you someone who's a main point of contact for some code
- It grants someone responsibility for bug fixes and maintenance of that code

And in some ways, they look alike. You're going to do a lot of the same things, controlling what goes in or out. But they are very different in the focus.
Owners are concerned with the value of what they own. Stewards are concerned with how well it can serve the group. And this makes all the difference in producing better outcomes.
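For reference, since the post names GitHub's CODEOWNERS file as the usual tooling, here is a minimal sketch of one; the paths and team handles are made up for illustration, and the file reads the same whether you treat the people in it as owners or as stewards.

```
# .github/CODEOWNERS — hypothetical example
# Each line maps a path pattern to the people or teams whose review is
# requested for changes under that path. The last matching pattern wins,
# so broad rules go first and specific ones later.

# Catch-all steward for anything not matched below.
*            @example-org/platform-team

# More specific areas with their own points of contact.
/docs/       @example-org/docs-team
/billing/    @example-org/payments-team @alice
*.tf         @example-org/infra-team
```

Paired with branch protection's "require review from Code Owners" setting, GitHub will block a merge until someone matching the relevant line approves.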
Last week, I finally got verified on LinkedIn. Now there's a little badge next to my name that says "yes, she's a human who is legally named Nicole." Their marketing for verification says that I should now expect 60% more profile views and 50% more comments and reactions. For a writer like me, that seems great. More people viewing my content means more people can learn from me, or be entertained by me. And all that for free? There's a problem, of course. Nicole is my legal name, so I was able to get verified as a result. But many people don't go by their legal name.

Other names are common

So who doesn't go by their legal name? I didn't, for years after my wife and I got married. I went by an alias—our hyphenated last name—without legally changing it. I wasn't eligible to get verified then, since that name was not on my ID. I didn't, when I came out as transgender. It takes time to change your name and update your documents. Until that was complete, I would have had to go by my deadname or lose verification. And what about women who change their name when they get married, but go by their previous name professionally? This is an alias, and it is their name even though it's not what the government knows them as. But they would lose verification for doing this. Anyone who goes by a nickname or alias is ineligible. I have many friends who go by a different name than what their ID shows. This isn't fraudulent—it just reflects who they are.

Penalized for being yourself

And yet, if you fall into any case where you cannot get verified, then you can't get the benefits. You can't get your extra profile views and extra comments. Or put another way: you're penalized for not being verified. You'll get about 40% fewer profile views and 33% fewer comments/reactions than people who are verified (the inverse of the advertised boosts: 1/1.6 ≈ 0.63 of the views, 1/1.5 ≈ 0.67 of the comments). You get marginalized, unable to reap the full benefits of the platform, if you don't conform to a very particular outlook on what a name is (the official sequence of letters on your ID). To be clear, the problem isn't the verification process itself[1]. That process (and its associated benefits) may be in place to deal with bot traffic. I can sympathize with this, and I do want lower bot traffic—it makes platforms much more pleasant to use.

Let us use our names

The problem is that you have to show your legal name to everyone. There should be a process for being verified without it being your legal name on display. This process doesn't have to be scalable if the group that would utilize it is small—since surely they'd only forget about small populations[2]. It can be as simple as filing a help ticket and allowing a human to approve it based on some evidence. Is your name consistent across your public profiles? And you're a human being? Cool, verified. I believe that LinkedIn can, and should, do better. This feature as implemented is harming marginalized folks who are not able to get the same visibility when they cannot get verified. It reduces the exposure that marginalized creators can get. You should keep this in mind in the products you make, too. Don't require people to display their legal names. And before you even collect that data, think about what problem you're trying to solve. Do you need to collect legal names to solve that? (Probably not.) If so, do you need to store them after processing once? (Probably not.) And if so, do you need to display them publicly? (Probably not.) Names are so much more than what the government knows us by. Let us be our true selves and verify us with our true names.
[1] It's not free of problems, though: I'd like to have a way to achieve the same result ("she's a human! she's generally the internet person she claims she is!") without showing my government identity documents to a third party.
[2] Though, companies have been known to marginalize large groups of people. The rhetorical point is that either they are harming a lot of people, or they could solve the problem at low cost.
The title is not a rhetorical question, and I'm not going to bury an answer. I don't have an answer. This post is my exploration of the question, and why I think it is a question[1].

Important things up front: what's my relationship with LLMs today?

- I don't use any LLMs regularly. I do have access to GitHub Copilot through my employer. I have it available on a hotkey, I think, and I cannot remember how to trigger it since I do not use it.
- I've explored using LLMs in the past. I used to be a regular Copilot user, and I explored ChatGPT, Claude, etc. to see what their capabilities were. I have done trainings for my coworkers on how to use them effectively, though I would not feel comfortable doing so now.
- My employer's product uses LLMs. I don't want to link to my employer, but yeah, I guess my paycheck depends on being okay with integrating them in? It's complicated (a refrain). (This post is, obviously, my opinions and does not reflect my employer.)
- I don't think using or not using them is a moral failing. There is a lot of moralizing around LLM usage. I'm not doing any of that here. I have my own beliefs (or my own questions), but I don't think people using LLMs are immoral (or vice versa).

So, you can see I have used them and I'm not absolutist, but I don't use them today. Why not? Why did I stop using them in spite of the advances, where they're more capable than ever? It's because of these questions and issues. Where I have undecided ethical questions, I lean toward the more conservative[2] choice of not using them until I have clarity on the ethics. (Note: I am not inviting folks to email me with answers to this question.)

Energy usage

Another technology that uses a lot of energy is blockchains. I think using public blockchains is almost universally unethical since there are other, better, less harmful options. Part of the harm from blockchains is an absurd amount of energy usage. LLMs also use a lot of energy. This can be split into training and inference energy usage. These vary based on the model. Some models can run locally on Apple silicon, and those are lower energy usage—their upper bound is running your computer full tilt, and an M4 Mac mini's max power consumption is 65 watts. This is roughly equivalent to one incandescent light bulb, or 8 LED light bulbs. It's good to turn off unnecessary lights, but doing so isn't going to solve the climate crisis; we need bigger, more sweeping reforms. I don't think that local models are going to significantly alter the climate crisis in either direction. Other models are massive and run in data centers on lots of power-hungry GPUs[3]. These data centers also require construction, and that comes with its own environmental impact. An article from Tom's Guide last year showed that "a single query on ChatGPT-4 can use up to 3 bottles of water, that a year of queries uses enough electricity to power over nine houses". A lot of the cost comes from new data centers being built. The demand for LLMs has led to more demand for power generation and more demand for data centers. And this new power generation is coming from gas-fired plants instead of sustainable, clean energy sources, because that's all we can build fast enough. A lot of attention is given to the training side. The numbers for training are large and shocking: Llama 3 used 500 MWh and GPT-3 training used 1,287 MWh—even more if you include the cost of training failed models which preceded these, the experiments that made the models possible.
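To make the comparisons above concrete, here is a small back-of-the-envelope sketch using the figures cited in this post; the bulb wattages and the per-household consumption are my own rough assumptions, marked in the comments, not numbers from the post.

```python
# Back-of-the-envelope energy comparisons using the figures cited in the post.
# Bulb wattages and the household figure are rough assumptions of mine.

M4_MINI_MAX_W = 65     # max power draw of an M4 Mac mini, per the post
INCANDESCENT_W = 60    # assumed: a typical incandescent bulb
LED_W = 8              # assumed: a typical LED bulb

print(f"Local inference, worst case: ~{M4_MINI_MAX_W / INCANDESCENT_W:.1f} incandescent bulbs "
      f"or ~{M4_MINI_MAX_W / LED_W:.0f} LED bulbs")

# One-time training energy costs cited in the post, in megawatt-hours.
TRAINING_MWH = {"Llama 3": 500, "GPT-3": 1_287}

# Assumed: an average US household uses very roughly 10,500 kWh of electricity per year.
HOUSEHOLD_KWH_PER_YEAR = 10_500

for name, mwh in TRAINING_MWH.items():
    kwh = mwh * 1_000
    households = kwh / HOUSEHOLD_KWH_PER_YEAR
    print(f"{name} training: {kwh:,.0f} kWh, roughly {households:.0f} households' electricity for a year")
```

The point the post goes on to make holds either way: the training numbers are large in absolute terms, but they are paid once per foundational model and then amortized across all subsequent use.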
The listed figures are high, and 500 MWh is about the cost of a large jet flying for 7 hours. But we do it once per foundational model, and then the cost is spread across all the remaining usage. I don't think that the training side is significantly shifting the equation on climate change. We'd have a much larger impact on improving the climate crisis by advocating for remote work—reducing vehicles on the road, making many flights unnecessary—than by not training models. Overall it feels to me like local models have a clearly acceptable impact, and data center models have higher energy usage but still probably do not change the situation very much.

Training data

The training data for LLMs has largely been lifted without the consent of the people who generated that data. This is a lot of writing, music, videos, visual art, all of it. There are some attempts out there at using licensed data only, but the majority of models, and the most popular ones, are trained on unlicensed data. Now the question is: is this an ethical problem? I know there are opinions on both sides of this. Some say that this data is publicly visible on the internet (though some of the data was not on the internet), and so it's fair game. Others say that this use isn't one that people consented to, and it should require that consent. My thought experiment is this: If we made search engines today, would people have this same objection to a search engine using their data without their express consent? I think most people would ultimately support search engines. They are different than LLMs, because they (mostly) serve results that point you to the original source, rather than create new content for you to replace the original sources with. But maybe people would reject search engines. And maybe consistency between these two isn't necessary, or maybe it's a false consistency—there could be other differentiating details that lead to different answers. Where I come down is that I think we need a robust mechanism to opt out of use of data for training, but that it's probably fine to train with publicly available data on the internet. What you do with the trained model is another question entirely. When you try to replace people making original works instead of creating an entirely new function, that's where you get questions.

"Replacing" people

There's a lot of LLM usage that is just trying to replace people in entire jobs. I mean, it feels like all of it. We see LLMs that are meant to replace writers and editors, artists and illustrators, musicians and songwriters. It doesn't say that directly—it says you're empowered to create things yourself. But what it means is that people should be able to press a few buttons and, with no artist involved, get out a beautiful artwork. Sounds like replacing to me. This is something we've long done with technology. We make technology that puts people out of jobs, and that was the whole industrial revolution. That's what happened with shipping and the containerization of ports, putting many dockworkers out of jobs. Replacing people in the abstract is not unethical. The problem is if we fail to deal with the harm created from replacing people. And people will be harmed, because losing your job or the value of it going down has a serious impact on quality of life. When we put people out of work, we—both society and technologists—have an ethical responsibility to ensure there's a plan to mitigate the harm from that.
Maybe that means grants for living expenses while people switch fields, if put out of work in an LLM-heavy industry. Maybe that means a universal basic income. But it certainly doesn't mean doing nothing.

Incorrect information and bias

One of the major well-known problems of LLMs is a tendency to "hallucinate," or to confidently state facts that are made up from whole cloth. They also have an unknown amount of bias, with unknown mitigations in place, due to being closed systems. This is a big problem! We're not good at seeing what information is incorrect in something that's generated to look like it's the most likely string of tokens. If there is incorrect information in there, we'll just miss it. This means that people can make poor decisions on the basis of what an LLM tells them. They can lose income due to its mistakes. And the bias? That's a huge problem, because it means that we don't know if this system will reinforce existing problematic norms[4]. We don't know what it will reinforce, because the training data is closed and there's not a lot of public evaluation on bias. So ultimately, we're left with an unknown harm of unknown magnitude to an unknown population[5].

Concentrating power (with the wealthy elite)

A big ethical concern for me is also what this will do to our entire society. Many technologies are heralded as "democratizing" things. Spotify "democratized" music by making it so that anyone can get listens—but, y'all, it ended up flattening the tail and making the popular artists more money while making small artists less money. Will LLMs do the same? We know that the big models need large data centers to run their training and inference. And even small models need beefy hardware to run inference, let alone training! We have some access to models which can run locally, which is a good step. But the problem is that we can only run other people's models. They'll have those people's decisions baked in, decisions on which data to include or to exclude, decisions on how to approach questions of bias and abuse. And when the hardware to run the biggest models is only accessible to a few companies, that means that those companies really get a lot more powerful. OpenAI and Anthropic and Google and Meta all have the ability to run really large models. I certainly don't, though. This means that a technology that many are heralding as making things more accessible to everyone is controlled by a small handful of people. A small handful of people can decide how the models are trained, and set policies on how they're used. In a time when the US government is trying to get any paper retracted that mentioned queer people, and erasing trans people from the Stonewall monument, it feels self-evident that letting a small group of people control this technology imperils the future of many people.

* * *

Ultimately, I want robots to do the things I don't want to do. I want them to do my dishes and my laundry. I don't want them to play music instead of me, write code instead of me, write words instead of me. I am not sure whether or not using LLMs is unethical. There are certainly ways of using them which are unquestionably unethical—as is true with every technology. And there are ways of developing LLMs which are unethical—as is true with every technology. But the problems with them are large. I think it is unethical to use them without addressing the ethical questions above. If you're not working on mitigating the harms from LLMs (which do exist), then you might be doing something unethical.
[1] I've been in the interesting situation of having anti-LLM people think I'm pro-LLM and vice versa. It's a very weird feeling, and makes me a little nervous about posting this! But this is an important question and an important conversation.
[2] Footnoting out of an abundance of caution: I don't mean conservative-like-Republicans, because, no, I'd like to keep my rights, thank you very much. I just mean in terms of minimizing risk. Let's please stop attacking trans rights, immigrants, Palestinians, and, you know, everyone else that I've forgotten because the whole world seems to be on fire.
[3] If we ever achieve artificial sentience, this sentence may acquire a second, more sinister, meaning.
[4] It will.
[5] My guess is a large harm to all underrepresented groups, but who asked the trans woman?
More in programming
We didn’t used to need an explanation for having kids. That was just life. That’s just what you did. But now we do, because now we don’t. So allow me: Having kids means making the most interesting people in the world. Not because toddlers or even teenagers are intellectual oracles — although life through their eyes is often surprising and occasionally even profound — but because your children will become the most interesting people to you. That’s the important part. To you. There are no humans on earth I’m as interested in as my children. Their maturation and growth are the greatest show on the planet. And having a front-seat ticket to this performance is literally the privilege of a lifetime. But giving a review of this incredible show just doesn’t work. I could never convince a stranger that my children are the most interesting people in the world, because they wouldn’t be, to them. So words don’t work. It’s a leap of faith. All I can really say is this: Trust me, bro.
If you give some monkeys a slice of cucumber each, they are all pretty happy. Then you give one monkey a grape, and nobody is happy with their cucumber any more. They might even throw the slices back at the experimenter. He got a god damned grape this is bullshit I don’t want a cucumber anymore! Nobody was in absolute terms worse off, but that doesn’t prevent the monkeys from being upset. And this isn’t unique to monkeys, I see this same behavior on display when I hear about billionaires. It’s not about what I have, they got a grape. The tweet is here. What do you do about this? Of course, you can fire this woman, but what percent of people in American society feel the same way? How much of this can you tolerate and still have a functioning society? What’s particularly absurd about the critique in the video is that it hasn’t been thought through very far. If that house and its friends stopped “ordering shit”, the company would stop making money and she wouldn’t have that job. There’s nothing preventing her from quitting today and getting the same outcome for herself. But of course, that isn’t what it’s about, because then somebody else would be delivering the packages. You see, that house got a grape. So how do we get through this? I’ll propose something, but it’s sort of horrible. Bring people to power based on this feeling. Let everyone indulge fully in their resentment. Kill the bourgeois. They got grapes, kill them all! Watch the situation not improve. Realize that this must be because there’s still counterrevolutionaries in the mix, still a few grapefuckers. Some billionaire is trying to hide his billions! Let the purge continue! And still, things are not improving. People are starving. The economy isn’t even tracked anymore. Things are bad. Millions are dead. The demoralization is complete. Starvation and real poverty are more powerful emotions than resentment. It was bad when people were getting grapes, but now there aren’t even cucumbers anymore. In the face of true poverty for all, the resentment fades. Society begins to heal. People are grateful to have food, they are grateful for what they have. Expectations are back in line with market value. You have another way to fix this? Cause this is what seems to happen in history, and it takes a generation. The demoralization is just beginning.
Yesterday, the tj-actions repository, a popular tool used with GitHub Actions, was compromised (for more background read one of these two articles). Watching the infrastructure and security engineering teams at Carta respond highlighted to me just how much LLMs can’t meaningfully replace many essential roles of software professionals. However, I’m also reading Jennifer Pahlka’s Recoding America, which makes an important point: decision-makers can remain irrational longer than you can remain solvent. (Or, in this context, remain employed.) I’ve been thinking about this a lot lately, as I’ve ended up having more “2025 is not much fun”-themed career discussions with prior colleagues navigating the current job market. I’ve tried to pull together my points from those conversations here:

Many people who first entered senior roles in 2010-2020 are finding current roles a lot less fun. There are a number of reasons for this. First, managers were generally evaluated in that period based on their ability to hire, retain and motivate teams. The current market doesn’t value those skills particularly highly, but instead prioritizes a different set of skills: working in the details, pushing pace, and navigating the technology transition to foundational models / LLMs. This means many members of the current crop of senior leaders are either worse at the skills they currently need to succeed, or are less motivated by those activities. Either way, they’re having less fun. Similarly, the would-be senior leaders from the 2010-2020 era who excelled at working in the details, pushing pace and so on, are viewed as stagnant in their careers, so they are still finding it difficult to move into senior roles. This means that many folks feel like the current market has left them behind. This is, of course, not universal. It is a general experience that many people are having. Many people are not having this experience.

The technology transition to foundational models / LLMs as a core product and development tool is causing many senior leaders’ hard-earned playbooks to be invalidated. Many companies that were stable, durable market leaders are now in tenuous positions because foundational models threaten to erode their advantage. Whether or not their advantage is truly eroded is uncertain, but it is clear that usefully adopting foundational models into a product requires more than simply shoving an OpenAI/Anthropic API call in somewhere. Instead, you have to figure out how to design with progressive validation, with critical data validated via human-in-the-loop techniques before it is used in a critical workflow. It also requires designing for a rapidly improving toolkit: many workflows that were laughably bad in 2023 work surprisingly well with the latest reasoning models. Effective product design requires architecting for both massive improvement, and no improvement at all, of models in 2026-2027. This is equally true of writing software itself. There’s so much noise about how to write software, and much of it’s clearly propaganda–this blog’s opening anecdote regarding the tj-actions repository proves that expertise remains essential–but parts of it aren’t. I spent a few weeks in the evenings working on a new side project via Cursor in January, and I was surprised at how much my workflow changed even though Cursor itself was far from perfect. Even since then, Claude has advanced from 3.5 to 3.7 with extended thinking.
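As a rough illustration of the progressive-validation idea mentioned above, here is a minimal sketch of a human-in-the-loop gate; the function names, the confidence source, and the threshold are hypothetical placeholders rather than any particular vendor's API.

```python
from dataclasses import dataclass

# Hypothetical sketch: gate low-confidence model output behind human review
# before it reaches a critical workflow. Names and thresholds are placeholders.

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; tuned per workflow and model version


@dataclass
class ModelResult:
    value: str
    confidence: float  # assumed to come from the model or a separate validator


def call_model(document: str) -> ModelResult:
    """Placeholder for whatever LLM call extracts the critical field."""
    raise NotImplementedError


def enqueue_for_review(document: str, result: ModelResult) -> str:
    """Placeholder: push to a human review queue and return the reviewed value."""
    raise NotImplementedError


def extract_critical_field(document: str) -> str:
    result = call_model(document)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: flows straight into the workflow, ideally with
        # sampled audits so the threshold stays honest over time.
        return result.value
    # Low confidence: a human validates or corrects the value before anything
    # downstream depends on it.
    return enqueue_for_review(document, result)
```

The "progressive" part is that the same gate works whether models improve dramatically or not at all: as quality rises, more traffic clears the threshold and the human queue shrinks, without redesigning the workflow.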
Again, initial application development might easily be radically different in 2027, or it might be largely unchanged after the scaffolding step in complex codebases. (I’m also curious to see if context window limitations drive another flight from monolithic architectures.) Sitting out this transition, when we are relearning how to develop software, feels like a high risk proposition. Your well-honed skills in team development are already devalued today relative to three years ago, and now your other skills are at risk of being devalued as well.

Valuations and funding are relatively less accessible to non-AI companies than they were three years ago. Certainly elite companies are doing alright, whether or not they have a clear AI angle, but the cutoff for remaining elite has risen. Simultaneously, the public markets are challenged, which means less willingness for both individuals and companies to purchase products, which slows revenue growth, further challenging valuations and funding. The consequence of this, if you’re at a private, non-AI company, is that you’re likely to hire less, promote less, see less movement in pay bands, and experience a less predictable path to liquidity. It also means fewer open roles at other companies, so there’s more competition when attempting to trade up into a larger, higher compensated role at another company. The major exception to this is joining an AI company, but generally those companies are in extremely competitive markets and are priced more appropriately for investors managing a basket of investments than for employees trying to deliver a predictable return. If you join one of these companies today, you’re probably joining a bit late to experience a big pop, your equity might go to zero, and you’ll be working extremely hard for the next five to seven years. This is the classic startup contract, but not necessarily the contract that folks have expected over the past decade as maximum compensation has generally come from joining a later-stage company or member of the Magnificent Seven.

As companies respond to the reduced valuations and funding, they are pushing their teams harder to find growth with their existing team. In the right environment, this can be motivating, but people may have opted into a more relaxed experience that has become markedly less relaxed without their consent.

If you pull all those things together, you’re essentially in a market where profit and pace are fixed, and you have to figure out how you personally want to optimize between people, prestige and learning. Whereas a few years ago, I think these variables were much more decoupled, that is not what I hear from folks today, even if their jobs were quite cozy a few years ago.

Going a bit further, I know folks who are good at their jobs, and have been struggling to find something meaningful for six-plus months. I know folks who are exceptionally strong candidates, who can find reasonably good jobs, but even they are finding that the sorts of jobs they want simply don’t exist right now. I know folks who are strong candidates but with some oddities in their profile, maybe too many short stints, who are now being filtered out because hiring managers need some way to filter through the higher volume of candidates. I can’t give advice on what you should do, but if you’re finding this job market difficult, it’s certainly not personal. My sense is that’s basically the experience that everyone is having when searching for new roles right now.
If you are in a role today that’s frustrating you, my advice is to try harder than usual to find a way to make it a rewarding experience, even if it’s not perfect. I also wouldn’t personally try to sit this cycle out unless you’re comfortable with a small risk that reentry is quite difficult: I think it’s more likely that the ecosystem is meaningfully different in five years than that it’s largely unchanged. Altogether, this hasn’t really been the advice that anyone wanted when they chatted with me, but it seems to generally have resonated with them as a realistic appraisal of the current markets. Hopefully there’s something useful for you in here as well.