I’m turning forty in a few weeks, and there’s a listicle archetype along the lines of “Things I’ve learned in the first half of my career as I turn forty and have now worked roughly twenty years in the technology industry.” How do you write that and make it good? Don’t ask me. I don’t know! As I considered what I would write to summarize my career learnings so far, I kept thinking about updating my post Advancing the industry from a few years ago, where I described using that concept as a north star for my major career decisions. So I wrote about that instead. Recapping the concept Adopting advancing the industry as my framework for career decisions came down to three things: The opportunity to be more intentional: After ~15 years in the industry, I entered a “third stage” of my career where neither financial considerations (1st stage) nor controlling pace to support an infant/toddler (2nd stage) were my highest priorities. Although I might not be working wholly by choice, I had enough...
2 days ago


More from Irrational Exuberance

Moving from an orchestration-heavy to leadership-heavy management role.

For managers who have spent a long time reporting to a specific leader or working in an organization with well‑understood goals, it’s easy to develop skill gaps without realizing it. Usually this happens because those skills were not particularly important in the environment you grew up in. You may become extremely confident in your existing skills, enter a new organization that requires a different mix of competencies, and promptly fall on your face. There are a few common varieties of this, but the one I want to discuss here is when managers grow up in an organization that operates from top‑down plans (“orchestration‑heavy roles”) and then find themselves in a sufficiently senior role, or in a bottom‑up organization, that expects them to lead rather than orchestrate (“leadership‑heavy roles”). Orchestration versus leadership You can break the components of solving a problem down in a number of ways, and I’m not saying this is the perfect way to do it, but here are six important components of directing a team’s work: Problem discovery: Identifying which problems to work on Problem selection: Aligning with your stakeholders on the problems you’ve identified Solution discovery: Identifying potential solutions to the selected problem Solution selection: Aligning with your stakeholders on the approach you’ve chosen Execution: Implementing the selected solution Ongoing revision: Keeping your team and stakeholders aligned as you evolve the plan In an orchestration‑heavy management role, you might focus only on the second half of these steps. In a leadership‑heavy management role, you work on all six steps. Folks who’ve only worked in orchestration-heavy roles often have no idea that they are expected to perform all of these. So, yes, there’s a skill gap in performing the work, but more importantly there’s an awareness gap that the work actually exists to be done. Here are a few ways you can identify an orchestration‑heavy manager that doesn’t quite understand their current, leadership‑heavy circumstances: Focuses on prioritization as “solution of first resort.” When you’re not allowed to change the problem or the approach, prioritization becomes one of the best tools you have. Accepts problems and solutions as presented. If a stakeholder asks for something, questions are around priority rather than whether the project makes sense to do at all, or suggestions of alternative approaches. There’s no habit of questioning whether the request makes sense—that’s left to the stakeholder or to more senior functional leadership. Focuses on sprint planning and process. With the problem and approach fixed, protecting your team from interruption and disruption is one of your most valuable tools. Operating strictly to a sprint cadence (changing plans only at the start of each sprint) is a powerful technique. All of these things are still valuable in a leadership‑heavy role, but they just aren’t necessarily the most valuable things you could be doing. Operating in a leadership-heavy role There is a steep learning curve for managers who find themselves in a leadership‑heavy role, because it’s a much more expansive role. However, it’s important to realize that there are no senior engineering leadership roles focused solely on orchestration. You either learn this leadership style or you get stuck in mid‑level roles (even in organizations that lean orchestration-heavy). Further, the technology industry generally believes it overinvested in orchestration‑heavy roles in the 2010s. 
Consequently, companies are eliminating many of those roles and preventing similar roles from being created in the next generation of firms. There’s a pervasive narrative attributing this shift to the increased productivity brought by LLMs, but I’m skeptical of that relationship—this change was already underway before LLMs became prominent. My advice for folks working through the leadership‑heavy role learning curve is: Think of your job’s core loop as four steps: Identify the problems your team should be working on Decide on a destination that solves those problems Explain to your team, stakeholders, and executives the path the team will follow to reach that destination Communicate both data and narratives that provide evidence you’re walking that path successfully If you are not doing these four things, you are not performing your full role, even if people say you do some parts well. Similarly, if you want to get promoted or secure more headcount, those four steps are the path to doing so (I previously discussed this in How to get more headcount). Ask your team for priorities and problems to solve. Mining for bottom‑up projects is a critical part of your role. If you wait only for top‑down and lateral priorities, you aren’t performing the first step of the core loop. It’s easy to miss this expectation—it’s invisible to you but obvious to everyone else, so they don’t realize it needs to be said. If you’re not sure, ask. If your leadership chain is running the core loop for your team, it’s because they lack evidence that you can run it yourself. That’s a bad sign. What’s “business as usual” in an orchestration‑heavy role actually signals a struggling manager in a leadership‑heavy role. Get your projects prioritized by following the core loop. If you have a major problem on your team and wonder why it isn’t getting solved, that’s on you. Leadership‑heavy roles won’t have someone else telling you how to frame your team’s work—unless they think you’re doing a bad job. Picking the right problems and solutions is your highest‑leverage work. No, this is not only your product manager’s job or your tech lead’s—it is your job. It’s also theirs, but leadership overlaps because getting it right is so valuable. Generalizing a bit, your focus now is effectiveness of your team’s work, not efficiency in implementing it. Moving quickly on the wrong problem has no value. Understand your domain and technology in detail. You don’t have to write all the software—but you should have written some simple pull requests to verify you can reason about the codebase. You don’t have to author every product requirement or architecture proposal, but you should write one occasionally to prove you understand the work. If you don’t feel capable of that, that’s okay. But you need to urgently write down steps you’ll take to close that gap and share that plan with your team and manager. They currently see you as not meeting expectations and want to know how you’ll start meeting them. If you think that gap cannot be closed or that it’s unreasonable to expect you to close it, you misunderstand your role. Some organizations will allow you to misunderstand your role for a long time, provided you perform parts of it well, but they rarely promote you under those circumstances—and most won’t tolerate it for senior leaders. Align with your team and cross‑functional stakeholders as much as you align with your executive. 
If your executive is wrong and you follow them, it is your fault that your team and stakeholders are upset: part of your job is changing your executive’s mind. Yes, it can feel unfair if you’re the type to blame everything on your executive. But it’s still true: expecting your executive to get everything right is a sure way to feel superior without accomplishing much. Now that I’ve shared my perspective, I admit I’m being a bit extreme on purpose—people who don’t pick up on this tend to dispute its validity strongly unless there is no room to debate. There is room for nuance, but if you think my entire point is invalid, I encourage you to have a direct conversation with your manager and team about their expectations and how they feel you’re meeting them.

21 hours ago 5 votes
What can agents actually do?

There’s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they’re hard to act on. Sort of like saying that your company could be much better if you merely adopted software. That’s certainly true, but it’s not a particularly helpful claim. This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and make the case that the potential of AI agents is equivalent to the potential of this generation of AI. By the end of this writeup, my hope is that you’ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company.

How do agents work?

At its core, using an LLM is an API call that includes a prompt. For example, you might call Anthropic’s /v1/messages endpoint with a prompt: How should I adopt LLMs in my company? That prompt is used to fill the LLM’s context window, which conditions the model to generate certain kinds of responses. This is the first important thing that agents can do: use an LLM to evaluate a context window and get a result.

Prompt engineering, or context engineering as it’s being called now, is deciding what to put into the context window to best generate the responses you’re looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine whether a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely. However, composing the perfect context window is very time-intensive, benefiting from techniques like metaprompting to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to do a good job of providing relevant context. For example, if you prompt, Who is going to become the next mayor of New York City?, then you are in no position to include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you’re asking the question to begin with!

This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don’t have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window. This is the second important thing that agents can do: use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response. However, it’s important to clarify how “tool usage” actually works. An LLM does not actually call a tool. (You can skim OpenAI’s function calling documentation if you want to see a specific real-world example of this.) Instead, there is a five-step process to calling tools that can be a bit counter-intuitive:

1. The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using.
2. Every API call to the LLM includes that defined set of tools as options that the LLM is allowed to recommend.
3. The response from the API call with defined functions is either generated text, as any other call to an LLM might provide, or a recommendation to call a specific tool with a specific set of parameters. For example, an LLM that knows about a get_weather tool, when prompted about the weather in Paris, might return this response:

    [{ "type": "function_call", "name": "get_weather", "arguments": "{\"location\":\"Paris, France\"}" }]

4. The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it’s been used too frequently recently (e.g. rate limiting), it might check if the associated user has permission to use the tool (e.g. maybe it’s a premium-only tool), or it might check if the parameters match the user’s role-based permissions as well (e.g. the user can check weather, but only admin users are allowed to check weather in France).
5. If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call’s context window.

The important thing about this loop is that the LLM itself can still only do one interesting thing: taking a context window and returning generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools’ output to the LLM to generate more context. What’s magical is that LLMs plus tools start to really improve how you can generate context windows. Instead of having to have a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context.

This brings us to the third important thing that agents can do: they manage flow control for tool usage. Let’s think about three different scenarios:

Flow control via rules has concrete rules about how tools can be used. Some examples:
- it might only allow a given tool to be used once in a given workflow (or enforce a usage limit of a tool for each user, etc.)
- it might require that a human-in-the-loop approves parameters over a certain value (e.g. refunds of more than $100 require human approval)
- it might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails)
- it might apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data)
- a tool to escalate to a human representative can only be called after five back-and-forths with the LLM agent

Flow control via statistics can use statistics to identify and act on abnormal behavior:
- if the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human
- if a user has used a tool more than 99% of other users, then you might want to reject usage for the rest of the day
- it might escalate to a human representative if tool parameters are more similar to prior parameters that required escalation to a human agent

Flow control via the LLM itself is the approach to avoid: LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is the mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can always find a way to shift the complexity to a tool to avoid that issue.
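To make the loop concrete, here is a minimal JavaScript sketch of the five steps plus rule-based flow control. Everything in it is a stand-in rather than any specific vendor’s API: callLLM is a placeholder for whichever LLM client you use, and the response shape ({ type: "text" } or { type: "tool_call" }) is assumed purely for illustration.

    // Hypothetical tool registry: the program, not the LLM, owns these implementations.
    const tools = {
      get_weather: {
        description: "Look up current weather for a location",
        run: async ({ location }) => ({ location, tempC: 21 }), // stubbed result
      },
    };

    // Example rule-based flow control: each tool may be used at most once per workflow.
    function allowToolCall(name, usageCount) {
      if ((usageCount[name] || 0) >= 1) {
        return { allowed: false, reason: `${name} already used in this workflow` };
      }
      return { allowed: true };
    }

    // callLLM is assumed to return either { type: "text", text }
    // or { type: "tool_call", name, arguments }.
    async function runAgent(callLLM, userPrompt) {
      let context = [{ role: "user", content: userPrompt }];
      const usageCount = {};
      for (let step = 0; step < 10; step++) {           // cap iterations as another safety rule
        const response = await callLLM(context, tools); // the LLM only ever sees tool descriptions
        if (response.type === "text") {
          return response.text;                         // plain generated text: we're done
        }
        // The LLM merely suggested a tool call; the program decides whether to honor it.
        const { name, arguments: args } = response;
        const gate = allowToolCall(name, usageCount);
        let result;
        if (!tools[name] || !gate.allowed) {
          result = { error: gate.reason || `unknown tool: ${name}` };
        } else {
          usageCount[name] = (usageCount[name] || 0) + 1;
          result = await tools[name].run(args);         // the program invokes the tool itself
        }
        // Append the tool output to the context window and loop back to the LLM.
        context = [...context, { role: "tool", name, content: JSON.stringify(result) }];
      }
      return "escalated to a human after too many back-and-forths"; // another flow-control rule
    }

The point of the sketch is that the program, not the model, owns the tool implementations, the permission checks, and the decision to stop looping.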
As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters. At this point, there is one final important thing that agents do: they are software programs. This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but generally these include: Building general context to add to context window, sometimes thought of as maintaining memory Initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc Periodically initiating workflows at a certain time, such as hourly review of incoming tickets Alright, we’ve now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are: Use an LLM to evaluate a context window and get a result Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response Manage flow control for tool usage via rules or statistical analysis Agents are software programs, and can do anything other software programs do Armed with these four capabilities, we’ll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities. Use Case 1: Customer Support Agent One of the first scenarios that people often talk about deploying AI agents is customer support, so let’s start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let’s set a goal of taking over the easiest tier first, with the goal of moving up tiers over time as we show impact. Our approach might be: Allow tickets (or support chats) to flow into an AI agent Provide a variety of tools to the agent to support: Retrieving information about the user: recent customer support tickets, account history, account state, and so on Escalating to next tier of customer support Refund a purchase (almost certainly implemented as “refund purchase” referencing a specific purchase by the user, rather than “refund amount” to prevent scenarios where the agent can be fooled into refunding too much) Closing the user account on request Include customer support guidelines in the context window, describe customer problems, map those problems to specific tools that should be used to solve the problems Flow control rules that ensure all calls escalate to a human if not resolved within a certain time period, number of back-and-forth exchanges, if they run into an error in the agent, and so on. These rules should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to human) and some degree of random sampling Review hourly, then daily, and then weekly metrics of agent performance Based on your learnings from the metric reviews, you should set baselines for alerts which require more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodical review. 
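For the customer support agent sketched above, the tool definitions handed to the LLM might look roughly like the following. The names, parameter schemas, and thresholds are illustrative assumptions rather than a particular provider’s function-calling format; the notable design choice, per the list above, is that refunds reference a specific purchase id rather than a free-form amount.

    // Illustrative tool definitions for the customer support agent described above.
    const supportTools = [
      {
        name: "get_customer_context",
        description: "Retrieve recent tickets, account history, and account state for this user",
        parameters: { userId: "string" },
      },
      {
        name: "refund_purchase",
        // Referencing a purchase id (not a free-form amount) keeps the agent from being
        // talked into refunding more than the purchase was worth.
        description: "Refund a specific prior purchase made by this user",
        parameters: { userId: "string", purchaseId: "string" },
      },
      {
        name: "close_account",
        description: "Close the user's account at their request",
        parameters: { userId: "string" },
      },
      {
        name: "escalate_to_human",
        description: "Hand the conversation to the next tier of human support",
        parameters: { ticketId: "string", summary: "string" },
      },
    ];

    // Flow-control rule from the list above: force escalation when the conversation runs
    // too long or hits an error, regardless of what the LLM wants to do next.
    function shouldForceEscalation(ticket) {
      const tooManyExchanges = ticket.exchanges >= 5;
      const tooOld = Date.now() - ticket.openedAt > 30 * 60 * 1000; // 30 minutes, an assumed threshold
      return tooManyExchanges || tooOld || ticket.hadAgentError;
    }

Because shouldForceEscalation runs in the surrounding program on every turn, the escalation guarantee does not depend on the LLM choosing to call escalate_to_human.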
Note that even when you’ve moved “Customer Support to AI agents”, you still have: a tier of human agents dealing with the most complex calls humans reviewing the periodic performance statistics humans performing quality control on AI agent-customer interactions You absolutely can replace each of those downstream steps (reviewing performance statistics, etc) with its own AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where over time you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn’t how they work, it’s how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones. Applied with care, the above series of actions will work successfully. However, it’s important to recognize that this is building an entire software pipeline, and then learning to operate that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent. Use Case 2: Triaging incoming bug reports When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it’s potentially quite severe, then you want on-call engineers immediately investigating; if it’s certainly not severe, then you want to triage it in a less urgent process of some sort. It’s interesting to think about how an AI agent might support this triaging workflow. The process might work as follows: Pipe all created incidents and all created tickets to this agent for review. Expose these tools to the agent: Open an incident Retrieve current incidents Retrieve recently created tickets Retrieve production metrics Retrieve deployment logs Retrieve feature flag change logs Toggle known-safe feature flags Propose merging an incident with another for human approval Propose merging a ticket with another ticket for human approval Redundant LLM providers for critical workflows. If the LLM provider’s API is unavailable, retry three times over ten seconds, then resort to using a second model provider (e.g. Anthropic first, if unavailable try OpenAI), and then finally create an incident that the triaging mechanism is unavailable. For critical workflows, we can’t simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues. Merge duplicates. When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow. Assess impact. If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it’s high priority, open an incident. If it’s low priority, create a ticket. Propose cause. Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error. In some cases this will be obvious (e.g. 
spiking errors with a traceback of a line of code that changed recently), and in other cases it will only be proximity in time. Apply known-safe feature flags. Establish an allow list of known safe feature flags that the system is allowed to activate itself. For example, if there are expensive features that are safe to disable, it could be allowed to disable them, e.g. restricting paginating through deeper search results when under load might be a reasonable tradeoff between stability and user experience. Defer to humans. At this point, rely on humans to drive incident, or ticket, remediation to completion. Draft initial incident report. If an incident was opened, the agent should draft an initial incident report including the timeline, related changes, and the human activities taken over the course of the incident. This report should then be finalized by the human involved in the incident. Run incident review. Your existing incident review process should take the incident review and determine how to modify your systems, including the triaging agent, to increase reliability over time. Safeguard to reenable feature flags. Since we now have an agent disabling feature flags, we also need to add a periodic check (agent-driven or otherwise) to reenable the “known safe” feature flags if there isn’t an ongoing incident to avoid accidentally disabling them for long periods of time. This is another AI agent that will absolutely work as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation to make this flow work includes: The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent cannot solve the review process because it’s about actively learning what to change based on the incident. You can make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low risk changes (e.g. a copy change) being automatically added to a ticket for human approval. Doing this for more complex, or riskier changes, is possible but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea of “just add agents and things get easy.” Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and are able to represent many customer-specific features solely through that DSL. Expanding the list of known-safe feature flags to make incidents remediable. To do this widely will require enforcing very specific requirements for how software is developed. Even doing this narrowly will require changes to ensure the known-safe feature flags remain safe as software is developed. Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, this should decrease. If the agent isn’t driving a reduction in MTTR, then something is rotten in the details of the implementation. Even a very effective agent doesn’t relieve the responsibility of careful system design. 
Rather, agents are a multiplier on the quality of your system design: done well, agents can make you significantly more effective. Done poorly, they’ll only amplify your problems even more widely.

Do AI Agents Represent the Entirety of This Generation of AI?

If you accept my definition that AI agents are any combination of LLMs and software, then I think it’s true that there’s not much this generation of AI can express that doesn’t fit this definition. I’d readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better term. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.

Closing thoughts

LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work. Software isn’t magic, it’s very logical, but what it can accomplish is magical. The same goes for agents and LLMs. The more we can accelerate that learning curve, the better for our industry.

a week ago 20 votes
What is the competitive advantage of authors in the age of LLMs?

Over the past 19 months, I’ve written Crafting Engineering Strategy, a book on creating engineering strategy. I’ve also been working increasingly with large language models at work. Unsurprisingly, the intersection of those two ideas is a topic that I’ve been thinking about a lot. What, I’ve wondered, is the role of the author, particularly the long-form author, in a world where an increasingly large percentage of writing is intermediated by large language models? One framing I’ve heard somewhat frequently is the view that LLMs are first and foremost a great pillaging of authors’ work. It’s true. They are that. At some point there was a script to let you check which books had been loaded into Meta’s LLaMa, and every book I’d written at that point was included, none of them with my consent. However, I long ago made my peace with plagiarism online, and this strikes me as not particularly different, albeit conducted by larger players. The folks using this writing are going to keep using it beyond the constraints I’d prefer it to be used in, and I’m disinterested in investing my scarce mental energy chasing through digital or legal mazes. Instead, I’ve been thinking about how this transition might go right for authors. My favorite idea that I’ve come up with is the idea of written content as “datapacks” for thinking. Buy someone’s book / “datapack”, then upload it into your LLM, and you can immediately operate almost as if you knew the book’s content. Let’s start with an example. Imagine you want help onboarding as an executive, and you’ve bought a copy of The Engineering Executive’s Primer, you could create a project in Anthropic’s Claude, and upload the LLM-optimized book into your project. Here is what your Claude project might look like. Once you have it set up, you can ask it to help you create your onboarding plan. This guidance makes sense, largely pulled from Your first 90 days as CTO. As always, you can iterate on your initial prompt–including more details you want to include into the plan–along with follow ups to improve the formatting and so on. One interesting thing here, is that I don’t currently have a datapack for The Engineering Executive’s Primer! To solve that, I built one from all my blog posts marked with the “executive” tag. I did that using this script that packages Hugo blog posts, that I generated using this prompt with Claude 3.7 Sonnet. The output of that script gets passed into repomix via: repomix --include "`./scripts/tags.py content executive | paste -d, -s -`" The mess with paste is to turn the multiline output from tags.py into a comma-separated list that repomix knows how to use. This is a really neat pattern, and starts to get at where I see the long-term advantage of writers in the current environment: if you’re a writer and have access to your raw content, you can create a problem-specific datapack to discuss the problem. You can also give that datapack to someone else, or use it to answer their questions. For example, someone asked me a very detailed followup question about a recent blog post. It was a very long question, and I was on a weekend trip. I already had a Claude project setup with the contents of Crafting Engineering Strategy, so I just passed the question verbatim into that project, and sent the answer back to the person who asked it. (I did have to ask Claude to revise the answer once to focus more on what I thought the most important part of the answer was.) This, for what it’s worth, wasn’t a perfect answer, but it’s pretty good. 
If the question asker had the right datapack, they could have gotten it themselves, without needing me to decide to answer it. However, this post is less worried about the reader than it is about the author. What is our competitive advantage as authors in a future where people are not reading our work? Well, maybe they’re still buying our work in the form of datapacks and such, but it certainly seems likely that book sales, like blog traffic, will be impacted negatively. In trade, it’s now possible for machines to understand the thinking that we’ve recorded in words over time. There’s a running joke in my executive learning circle that I’ve written a blog post on every topic that comes up, and that’s kind of true. That means that I am on the cusp of the opportunity to uniquely scale myself by connecting “intelligence on demand for a few cents” with the written details of my thinking built over the past two decades of being a writer who operates. The tools that exist today are not quite there yet, although a combination of selling datapacks like the one for Crafting Engineering Strategy and tools like Claude’s projects is a good start. There are many ways the exact details might come together, but I’m optimistic that writing will become more powerful rather than less in this new world, even if the particular formats change. (For what it’s worth, I don’t think human readers are going away either.) If you’re interested in the fully fleshed-out version of this idea, starting here you can read the full AI Companion to Crafting Engineering Strategy. The datapack will be available via O’Reilly in the next few months. If you’re an existing O’Reilly author who’s skeptical of this idea, don’t worry: I worked with them to sign a custom contract, and this usage–as best I understood it, although I am not a lawyer and am not providing legal advice–is outside of the scope of the default contract I signed with my prior book, and presumably most others’ contracts as well.

a month ago 20 votes
My desk setup in 2025.

Since 2020, I’ve been working on my desk setup, and I think I finally have it mostly pulled together at this point. I don’t really think my desk setup is very novel, and I’m sure there are better ways to pull it together, but I will say that it finally works the way I want since I added the CalDigit TS5 Plus, which has been a long time coming. My requirements for my desk are:
- Has support for 2-3 Mac laptops
- Has support for a Windows gaming desktop with a dedicated GPU
- Has a dedicated microphone
- Has good enough lighting
- Is not too messy
- I can switch between any laptop and desktop with a single Thunderbolt cable

Historically the issue here has been the final requirement, where switching required moving two cables–a Thunderbolt and a cable for the dedicated graphics card–but with my new dock this finally works with just one cable. The equipment shown here, and my brief review of each piece, is:

UPLIFT v2 Standing Desk – is the standing desk I use. I both have a lot of stuff on my desk, and also want my desk to feel minimal, so I opted for the 72" x 30" version. At the time I ordered it in 2020, the only option shipping quickly was the bamboo finish, so that’s what I got.

CalDigit TS5 Plus Dock – this was the missing component that has three Thunderbolt ports and a DisplayPort. I have the external graphics card directly connected to the DisplayPort, and then move the Thunderbolt cable from computer to computer to change which one is active. It also has enough USB-A ports to connect the adapters for my wireless keyboard and mouse, to avoid needing to pair them across computers, which would create friction in switching computers.

Apple Studio Display – I experimented with dedicated speakers and a video camera, but for me having them built into the monitor was helpful to reduce the number of things on my desk. The Studio Display’s monitor, speakers, and video camera are all solidly good enough for my purposes: I’m sure I could get better on each dimension, but in practice I never think about this and don’t find any issues with them. On the other hand, while I was initially hopeful that I could also get rid of my microphone, the microphone quality just wasn’t that good for me, as I spend a lot of time on video conferences and recording podcasts, etc.

Beelink GTi Ultra & EX Pro Docking Station – are my Windows mini desktop and dock, which allows mounting an external GPU to the mini desktop. Beelink itself is slightly aggravating because, as best I can tell, they’ve done something quite odd in terms of custom patching Windows 11, but ultimately it’s worked well for me as a dedicated gaming machine, and the build quality and size profile are both just fantastic.

MSI Gaming RTX 4070 Ti Super 16G Graphics Card – I bought this earlier this year, looking for something that was in stock and was good enough that it would last me a generation or two of graphics card upgrades without shelling out a truly massive amount for a 50XX edition (some of which don’t seem to be upgrades on the 40XX predecessors anyway).

Hexcal Studio – this is the workstation / monitor stand / cable management system, with lighting and so on. I ultimately do like this, but it’s not perfect, e.g. my Qi charger technically works but provides such bad charging speeds that it effectively doesn’t work. It’s definitely too expensive for something that doesn’t entirely work, so I can’t really recommend it, although now that I’ve paid for it, I wouldn’t bother replacing it either.

Audio-Technica AT2020USB Cardioid Condenser USB Microphone – this is the microphone I’ve been using for six years, and it’s really quite good and cost something like $120 at the time. It’s discontinued now, but presumably there’s a more modern version somewhere. I have it mounted on this boom arm.

LUME CUBE Edge 2.0 LED Desk Lamp – I have two of these for lighting during recordings. I don’t actually like using them very much, since I hate looking into lights, but I do use them periodically when I want to make sure lighting is actually correct.

Logitech MX Keys Advanced Wireless Illuminated Keyboard for Mac – this keyboard works well for me, and has a USB-C port, so I can use a single powered USB-C cable from the Hexcal to charge my keyboard, my mouse, my phone, and my headphones.

Logitech MX Master 3S Wireless Mouse – I’ve been using variations of this mouse for a long time; I specifically bought this version a year or two ago to standardize all charging ports on USB-C.

Laptop stand – I’m not actually sure where I got this laptop stand from; it might have been Etsy. I found it relatively hard to find stands that support three laptops rather than just two. Before finding this one, I used this two-laptop stand, which is fine.

Laptops – these are my personal and work MacBooks.

Here’s a slightly closer look at the left side of the desk. At this point, I really have nothing left that I’m upset about with my setup, and I can’t imagine changing this again in the next few years. As a bonus, my office has a handful of pieces of “professional art” that represent things I am proud of. From left to right, it’s the cover of An Elegant Puzzle, a map of San Francisco drawn exclusively from Uber trip data on the night of Halloween 2014, and then the cover of The Engineering Executive’s Primer. It’s probably a bit vain, but I like to remember some of the accomplishments.

a month ago 25 votes

More in programming

lazy import of JavaScript modules

When working on big JavaScript web apps, you can split the bundle into multiple chunks and import selected chunks lazily, only when needed. That makes the main bundle smaller, faster to load and parse.

How to lazy import a module?

    let hljs = (await import("highlight.js")).default;

is equivalent to:

    import hljs from "highlight.js";

And:

    let libZip = await import("@zip.js/zip.js");
    let blobReader = new libZip.BlobReader(blob);

is equivalent to:

    import { BlobReader } from "@zip.js/zip.js";

It’s simple if we call it from an async function, but sometimes we want to lazy load from a non-async function, so things get more complicated:

    let isLazyImporting = false;
    let hljs;
    let markdownIt;
    let markdownItAnchor;

    async function lazyImports() {
      if (isLazyImporting) return;
      isLazyImporting = true;
      // Promise.all resolves to the loaded modules, in the same order as the imports
      let modules = await Promise.all([
        import("highlight.js"),
        import("markdown-it"),
        import("markdown-it-anchor"),
      ]);
      hljs = modules[0].default;
      markdownIt = modules[1].default;
      markdownItAnchor = modules[2].default;
    }

We can run it from a non-async function:

    function doit() {
      lazyImports().then(() => {
        if (hljs) {
          // use hljs to do something
        }
      });
    }

I’ve included protection against kicking off the lazy import more than once. That means on the second and n-th calls the modules might not be loaded yet, so hljs may still be undefined.
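One way to avoid that caveat is to cache the in-flight promise instead of a boolean flag, so every caller awaits the same load and never sees half-initialized modules. This is an alternative sketch, not the code above, and it assumes the same three modules:

    let lazyModulesPromise;

    function loadLazyModules() {
      if (!lazyModulesPromise) {
        // start the load once; subsequent calls reuse the same promise
        lazyModulesPromise = Promise.all([
          import("highlight.js"),
          import("markdown-it"),
          import("markdown-it-anchor"),
        ]).then(([hljsMod, mdMod, mdAnchorMod]) => ({
          hljs: hljsMod.default,
          markdownIt: mdMod.default,
          markdownItAnchor: mdAnchorMod.default,
        }));
      }
      return lazyModulesPromise;
    }

    // still callable from a non-async function; the callback only runs once everything is loaded
    function doit() {
      loadLazyModules().then(({ hljs }) => {
        // use hljs here; it is guaranteed to be defined
      });
    }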

8 hours ago 2 votes
x86 Assembly Exercise #1: Toy kill Program (Solution)

A step-by-step walkthrough of the toy kill program using raw Linux syscalls.

13 hours ago 2 votes
vite /rollup manualChunks

When building a large web app it’s possible to split the .js bundle into chunks and lazy load certain parts only when needed. For example, in Edna I use the markdown-it and highlight.js libraries only in a certain scenario. By putting them in their own chunk, I save almost 1 MB of uncompressed JavaScript in the main bundle. Faster to download, faster to run.

    ../dist/assets/markdownit-hljs-DbctGXX9.js 1,087.33 kB │ gzip: 358.42 kB

To split into chunks you configure rollup in vite.config.js:

    function manualChunks(id) {
      // partition files into chunks
    }

    export default defineConfig({
      build: {
        rollupOptions: {
          output: {
            manualChunks: manualChunks,
          },
        },
      },
    });

The manualChunks() function takes the path of the file (I don’t know why everyone calls it an id). If you return a string for a given path, you tell rollup to bundle that file in a chunk of that name. If you return nothing (i.e. undefined), rollup will decide how to chunk automatically, most likely putting everything into a single chunk. It gets called for .css files, .js files, and probably others. Here’s my hard-won wisdom:
- console.log(id) when working on manualChunks() to see what files are processed
- chunk specific modules from node_modules that you want to lazy load
- everything else from node_modules goes into a vendor chunk
- the rest is my own code and goes into the main chunk as decided by rollup

Seems simple enough:

    function manualChunks(id) {
      console.log(id);
      const chunksDef = [
        ["/@zip.js/zip.js/", "zipjs"],
        ["/prettier/", "prettier"],
        // markdown-it and highlight.js are used together in askai.svelte
        [
          "/markdown-it/",
          "/markdown-it-anchor/",
          "/highlight.js/",
          "/entities/",
          "/linkify-it/",
          "/mdurl/",
          "/punycode.js/",
          "/uc.micro/",
          "markdownit-hljs",
        ],
      ];
      for (let def of chunksDef) {
        let n = def.length;
        for (let i = 0; i < n - 1; i++) {
          if (id.includes(def[i])) {
            return def[n - 1];
          }
        }
      }
      // bundle all other 3rd-party modules into a single vendor.js module
      if (id.includes("/node_modules/")) {
        return "vendor";
      }
      // when we return undefined, rollup will decide
    }

This is a real example from Edna. I’ve put zip.js, prettier, and markdown-it + markdown-it-anchor + highlight.js into their own chunks, which I lazily import. Things to note:
- Order is important. If I matched /node_modules/ first, everything would end up in the vendor bundle.
- id is the full path of the bundled file in Unix format, e.g. C:/Users/kjk/src/elaris/node_modules/prettier/standalone.mjs. People seem to match the path against just the package name, like prettier. I match against /prettier/ so that if some file happens to contain the string prettier, it won’t be accidentally put in the prettier chunk.
- /entities/, /mdurl/, etc. are used by markdown-it, so they should be included in its chunk. That’s where console.log(id) is helpful: I saw modules that I didn’t explicitly put in package.json, which means they are implicit dependencies. I used bun.lock to see which package depends on those mysterious packages, and that’s how I found what is used by markdown-it.

There were two remaining problems. I also had this in my code:

    import "highlight.js/styles/github.css";

manualChunks was called for "highlight.js/styles/github.css", to which I returned markdownit-hljs, so the CSS was put in that chunk too. That was too much, because I didn’t want to lazy import a small CSS file, so I told rollup to put all .css files in the main CSS chunk:

    function manualChunks(id) {
      // pack all .css files in the same chunk
      if (id.endsWith(".css")) {
        return;
      }
      // ... rest of the code
    }

There was one more thing that was a big pain in the ass to debug. To verify things are properly chunked, I opened Dev Tools in Chrome and looked at the Network tab. To my surprise, markdownit-hljs was loaded immediately. After lots of debugging and research it turns out that vite bundles some helper functions. Because I didn’t specify explicitly which chunk they should go into, rollup decided to put them in the markdownit-hljs chunk. Because the main chunk was using those helpers, it had to import that chunk, defeating my cunning plan to load it lazily later. The fix was to direct those known helper functions into the vendor bundle:

    function manualChunks(id) {
      // ... other code

      // bundle all other 3rd-party modules into a single vendor.js module
      if (
        id.includes("/node_modules/") ||
        ROLLUP_COMMON_MODULES.some((commonModule) => id.includes(commonModule))
      ) {
        return "vendor";
      }
      // ... rest of the code
    }

You can read more at:
https://github.com/vitejs/vite/issues/17823
https://github.com/vitejs/vite/issues/19758
https://github.com/vitejs/vite/issues/5189

Things might still go wrong. I got one invalid build that created a chunk that wouldn’t parse in the browser. For that reason I suggest starting simple: an empty manualChunks() function that packs everything into one chunk. Then add the desired chunks one by one, verifying the build after each change.

And how to lazy import things?

    let markdownIt = (await import("markdown-it")).default;

is equivalent to the static import:

    import markdownIt from "markdown-it";

Read more about lazy imports.
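Pulling the snippets above together, the final manualChunks ends up looking roughly like the sketch below. The ordering matters (CSS first, then the specific lazy chunks, then the vendor catch-all), and the contents of ROLLUP_COMMON_MODULES are an assumption on my part; substitute whatever helper-module ids console.log(id) and the linked vite issues show for your build.

    // Consolidated sketch assembled from the snippets in this post.
    const ROLLUP_COMMON_MODULES = [
      // assumed ids of vite/rollup helper modules; check console.log(id) output
      "vite/preload-helper",
      "vite/modulepreload-polyfill",
      "commonjsHelpers",
    ];

    function manualChunks(id) {
      // keep all .css files in the main CSS chunk
      if (id.endsWith(".css")) {
        return;
      }
      // modules that should be lazy loaded get their own chunks
      const chunksDef = [
        ["/@zip.js/zip.js/", "zipjs"],
        ["/prettier/", "prettier"],
        ["/markdown-it/", "/highlight.js/", "markdownit-hljs"],
      ];
      for (let def of chunksDef) {
        for (let i = 0; i < def.length - 1; i++) {
          if (id.includes(def[i])) {
            return def[def.length - 1];
          }
        }
      }
      // everything else from node_modules, plus bundler helpers, goes into vendor
      if (
        id.includes("/node_modules/") ||
        ROLLUP_COMMON_MODULES.some((m) => id.includes(m))
      ) {
        return "vendor";
      }
      // returning undefined lets rollup decide; my own code ends up in the main chunk
    }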

yesterday 2 votes