I've been reading Steven Sinofsky's Hardcore Software, and particularly enjoyed this quote from a memo discussed in the Zero Defects chapter: "You can improve the quality of your code, and if you do, the rewards for yourself and for Microsoft will be immense. The hardest part is to decide that you want to write perfect code."

If I wrote that in an internal memo, I imagine the engineering team would mutiny, but software quality is certainly an interesting topic where I continue to refine my thinking. There are so many software quality playbooks out there, and I increasingly believe that all of these playbooks work in their intended context, but are often misapplied. For example, pretty much every startup has someone on an infrastructure team who believes that all quality problems can be solved with a sufficiently nuanced automated rollout strategy. That's generally been true in my experience at companies with a high volume of engaged usage, and has not at all been true in environments with...
a year ago


More from Irrational Exuberance

Commenting on Notion docs via OpenAI and Zapier.

One of my side quests at work is to get a simple feedback loop going where we can create knowledge bases that comment on Notion documents. I was curious if I could hook this together following these requirements:

- No custom code hosting
- Prompt is editable within Notion rather than requiring understanding of Zapier
- Should be fairly quick to build

Ultimately, I was able to get it working. So, a quick summary of how it works, some comments on why I don't particularly like this approach, then some more detailed comments on getting it working.

General approach

1. Create a Notion database of prompts.
2. Create a specific prompt for providing feedback on RFCs.
3. Create a Notion database for all RFCs.
4. Add an automation into this database that calls a Zapier webhook.
5. The Zapier webhook does a variety of things that culminate in using the RFC prompt to provide feedback on the specific RFC as a top-level comment in the RFC.

Altogether this works fairly well.

The challenges with this approach

The best thing about this approach is that it actually works, and it works fairly well. However, as we dig into the implementation details, you'll also see that a number of things are unnaturally difficult with Zapier:

- Managing rich text in Notion, because it requires navigating the blocks data structure
- Looping API constructs, such as making it straightforward to leave multiple comments on specific blocks rather than a single top-level comment
- Notion only allows up to 2,000 characters per block, and chunking into multiple blocks is moderately unnatural. In a true Python environment, it would be trivial to translate to and from Markdown using something like md2notion.

Ultimately, I could only recommend this approach as an initial validation. It's definitely not the right long-term resting place for this kind of workflow.

Zapier implementation

I already covered the Notion side of the integration, so let's dig into the Zapier pieces a bit. Overall it had eight steps. I've skipped the first step, which was just a default webhook receiver.

The second step was retrieving a statically defined Notion page containing the prompt, with the prompt's page specified explicitly in the step's configuration. (In later steps I just use the Notion API directly, which I would do here if I were redoing this, but this worked too. The advantage of the API is that it returns a real JSON object; this doesn't, probably because I didn't specify the content-type header or some such.)

Probably because I didn't set content-type, I think I was getting POST-formatted data back here, so I just regular-expressed the prompt out. It's a bit sloppy, but hey, it worked, so there's that.

Next is using the Notion API request tool to retrieve the updated RFC (as opposed to the prompt, which we already retrieved). The API request returns a JSON object that you can navigate without writing regular expressions, so that's nice.

Then we send both the prompt as system instructions and the RFC as the user message to OpenAI. Then we pass the response from OpenAI through json.dumps to encode it for inclusion in an API call; this is mostly solving for newlines being \n rather than literal newlines. Finally, we format the response into an API request that adds a comment to the document.

Anyway, this wasn't beautiful, and I think you could do a much better job by just doing all of this in Python, but it's a workable proof of concept.
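To give a sense of what the same pipeline looks like outside Zapier, here is a rough Python sketch. It is not the Zapier configuration itself, and the endpoint shapes, headers, and field names are from memory, so treat them as assumptions to verify against the Notion and OpenAI docs; it also chunks the comment to respect the 2,000-character block limit mentioned above.

```python
import os
import requests

NOTION_HEADERS = {
    "Authorization": f"Bearer {os.environ['NOTION_TOKEN']}",
    "Notion-Version": "2022-06-28",  # assumed API version; check Notion's docs
    "Content-Type": "application/json",
}

def fetch_page_text(page_id: str) -> str:
    """Flatten a Notion page's blocks into plain text (rich text and pagination omitted)."""
    resp = requests.get(
        f"https://api.notion.com/v1/blocks/{page_id}/children",
        headers=NOTION_HEADERS,
    )
    resp.raise_for_status()
    lines = []
    for block in resp.json()["results"]:
        block_type = block["type"]
        for rich_text in block.get(block_type, {}).get("rich_text", []):
            lines.append(rich_text.get("plain_text", ""))
    return "\n".join(lines)

def get_feedback(prompt: str, rfc_text: str) -> str:
    """Send the prompt as system instructions and the RFC as the user message."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o",  # placeholder model name
            "messages": [
                {"role": "system", "content": prompt},
                {"role": "user", "content": rfc_text},
            ],
        },
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def comment_on_page(page_id: str, feedback: str) -> None:
    """Add the feedback as a top-level comment, chunked for the 2,000-character limit."""
    chunks = [feedback[i:i + 2000] for i in range(0, len(feedback), 2000)]
    requests.post(
        "https://api.notion.com/v1/comments",
        headers=NOTION_HEADERS,
        json={
            "parent": {"page_id": page_id},
            "rich_text": [{"type": "text", "text": {"content": c}} for c in chunks],
        },
    ).raise_for_status()

if __name__ == "__main__":
    prompt = fetch_page_text(os.environ["PROMPT_PAGE_ID"])
    rfc_id = os.environ["RFC_PAGE_ID"]
    comment_on_page(rfc_id, get_feedback(prompt, fetch_page_text(rfc_id)))
```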

a month ago 24 votes
An agent to use Notion docs as prompts to comment on Notion docs.

Last weekend, I wrote a bit about using Zapier to load Notion pages as prompts to comment on other Notion pages. That worked well enough, but not that well. This weekend I spent some time getting the next level of this working, creating an agent that runs as an AWS Lambda. This, among other things, allowed me to rely on agent tool usage to support both page- and block-level comments, and altogether I think the idea works extremely well.

This was mostly implemented by Claude Code, and I think the code is actually fairly bad as a result, but you can see the full working implementation at lethain:basic_notion-agent on GitHub. Installation and configuration options are there as well. Watch a quick walkthrough of the project I recorded on YouTube.

Screenshots

To give a sense of what the end experience is, here are some screenshots. You start by creating a prompt in a Notion document. Then it will provide inline comments on blocks within your document. It will also provide a summary comment on the document overall (although this is configurable if you only want inline comments). A feature I particularly like is that the agent is aware of existing comments on the document, and who made them, and will reply to those comments.

Altogether, it's a fun little project and works surprisingly well, as almost all agents do with enough prompt tuning.
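For a sense of what "tool usage for page- and block-level comments" means in practice, here is an illustrative sketch of how the two commenting behaviors might be described to the model. This is not the actual schema from the basic_notion-agent repo, just a hypothetical shape for the two tools.

```python
# Hypothetical tool definitions for the two commenting behaviors described above.
TOOLS = [
    {
        "name": "comment_on_block",
        "description": "Leave an inline comment on a specific block in the document.",
        "parameters": {
            "type": "object",
            "properties": {
                "block_id": {"type": "string", "description": "Notion block to attach the comment to"},
                "comment": {"type": "string", "description": "Feedback about this specific block"},
            },
            "required": ["block_id", "comment"],
        },
    },
    {
        "name": "comment_on_page",
        "description": "Leave a single summary comment on the document as a whole.",
        "parameters": {
            "type": "object",
            "properties": {
                "comment": {"type": "string", "description": "Overall feedback on the document"},
            },
            "required": ["comment"],
        },
    },
]
```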

a month ago 16 votes
Moving from an orchestration-heavy to leadership-heavy management role.

For managers who have spent a long time reporting to a specific leader or working in an organization with well-understood goals, it's easy to develop skill gaps without realizing it. Usually this happens because those skills were not particularly important in the environment you grew up in. You may become extremely confident in your existing skills, enter a new organization that requires a different mix of competencies, and promptly fall on your face. There are a few common varieties of this, but the one I want to discuss here is when managers grow up in an organization that operates from top-down plans ("orchestration-heavy roles") and then find themselves in a sufficiently senior role, or in a bottom-up organization, that expects them to lead rather than orchestrate ("leadership-heavy roles").

Orchestration versus leadership

You can break the components of solving a problem down in a number of ways, and I'm not saying this is the perfect way to do it, but here are six important components of directing a team's work:

- Problem discovery: Identifying which problems to work on
- Problem selection: Aligning with your stakeholders on the problems you've identified
- Solution discovery: Identifying potential solutions to the selected problem
- Solution selection: Aligning with your stakeholders on the approach you've chosen
- Execution: Implementing the selected solution
- Ongoing revision: Keeping your team and stakeholders aligned as you evolve the plan

In an orchestration-heavy management role, you might focus only on the second half of these steps. In a leadership-heavy management role, you work on all six steps. Folks who've only worked in orchestration-heavy roles often have no idea that they are expected to perform all of these. So, yes, there's a skill gap in performing the work, but more importantly there's an awareness gap that the work actually exists to be done.

Here are a few ways you can identify an orchestration-heavy manager who doesn't quite understand their current, leadership-heavy circumstances:

- Focuses on prioritization as the "solution of first resort." When you're not allowed to change the problem or the approach, prioritization becomes one of the best tools you have.
- Accepts problems and solutions as presented. If a stakeholder asks for something, questions are about priority rather than whether the project makes sense to do at all, or suggestions of alternative approaches. There's no habit of questioning whether the request makes sense; that's left to the stakeholder or to more senior functional leadership.
- Focuses on sprint planning and process. With the problem and approach fixed, protecting your team from interruption and disruption is one of your most valuable tools. Operating strictly to a sprint cadence (changing plans only at the start of each sprint) is a powerful technique.

All of these things are still valuable in a leadership-heavy role, but they just aren't necessarily the most valuable things you could be doing.

Operating in a leadership-heavy role

There is a steep learning curve for managers who find themselves in a leadership-heavy role, because it's a much more expansive role. However, it's important to realize that there are no senior engineering leadership roles focused solely on orchestration. You either learn this leadership style or you get stuck in mid-level roles (even in organizations that lean orchestration-heavy). Further, the technology industry generally believes it overinvested in orchestration-heavy roles in the 2010s.
Consequently, companies are eliminating many of those roles and preventing similar roles from being created in the next generation of firms. There's a pervasive narrative attributing this shift to the increased productivity brought by LLMs, but I'm skeptical of that relationship: this change was already underway before LLMs became prominent.

My advice for folks working through the leadership-heavy role learning curve is:

Think of your job's core loop as four steps:

1. Identify the problems your team should be working on
2. Decide on a destination that solves those problems
3. Explain to your team, stakeholders, and executives the path the team will follow to reach that destination
4. Communicate both data and narratives that provide evidence you're walking that path successfully

If you are not doing these four things, you are not performing your full role, even if people say you do some parts well. Similarly, if you want to get promoted or secure more headcount, those four steps are the path to doing so (I previously discussed this in How to get more headcount).

Ask your team for priorities and problems to solve. Mining for bottom-up projects is a critical part of your role. If you wait only for top-down and lateral priorities, you aren't performing the first step of the core loop. It's easy to miss this expectation: it's invisible to you but obvious to everyone else, so they don't realize it needs to be said. If you're not sure, ask.

If your leadership chain is running the core loop for your team, it's because they lack evidence that you can run it yourself. That's a bad sign. What's "business as usual" in an orchestration-heavy role actually signals a struggling manager in a leadership-heavy role.

Get your projects prioritized by following the core loop. If you have a major problem on your team and wonder why it isn't getting solved, that's on you. Leadership-heavy roles won't have someone else telling you how to frame your team's work, unless they think you're doing a bad job.

Picking the right problems and solutions is your highest-leverage work. No, this is not only your product manager's job or your tech lead's; it is your job. It's also theirs, but leadership overlaps because getting it right is so valuable. Generalizing a bit, your focus now is the effectiveness of your team's work, not efficiency in implementing it. Moving quickly on the wrong problem has no value.

Understand your domain and technology in detail. You don't have to write all the software, but you should have written some simple pull requests to verify you can reason about the codebase. You don't have to author every product requirement or architecture proposal, but you should write one occasionally to prove you understand the work. If you don't feel capable of that, that's okay. But you need to urgently write down the steps you'll take to close that gap and share that plan with your team and manager. They currently see you as not meeting expectations and want to know how you'll start meeting them. If you think that gap cannot be closed, or that it's unreasonable to expect you to close it, you misunderstand your role. Some organizations will allow you to misunderstand your role for a long time, provided you perform parts of it well, but they rarely promote you under those circumstances, and most won't tolerate it for senior leaders.

Align with your team and cross-functional stakeholders as much as you align with your executive.
If your executive is wrong and you follow them, it is your fault that your team and stakeholders are upset: part of your job is changing your executive's mind. Yes, it can feel unfair if you're the type to blame everything on your executive. But it's still true: expecting your executive to get everything right is a sure way to feel superior without accomplishing much.

Now that I've shared my perspective, I admit I'm being a bit extreme on purpose: people who don't pick up on this tend to dispute its validity strongly unless there is no room to debate. There is room for nuance, but if you think my entire point is invalid, I encourage you to have a direct conversation with your manager and team about their expectations and how they feel you're meeting them.

a month ago 26 votes
Advancing the industry, part two.

I'm turning forty in a few weeks, and there's a listicle archetype along the lines of "Things I've learned in the first half of my career as I turn forty and have now worked roughly twenty years in the technology industry." How do you write that and make it good? Don't ask me. I don't know! As I considered what I would write to summarize my career learnings so far, I kept thinking about updating my post Advancing the industry from a few years ago, where I described using that concept as a north star for my major career decisions. So I wrote about that instead.

Recapping the concept

Adopting advancing the industry as my framework for career decisions came down to three things:

- The opportunity to be more intentional: After ~15 years in the industry, I entered a "third stage" of my career where neither financial considerations (1st stage) nor controlling pace to support an infant/toddler (2nd stage) were my highest priorities. Although I might not be working wholly by choice, I had enough flexibility that I could no longer hide behind "maximizing financial return" to guide, or excuse, my decision making.
- My decade goals kept going stale. Since 2020, I've tracked against my decade goals for the 2020s, and annual tracking has been extremely valuable. Part of that value was realizing that I'd made enough progress on several initial goals that they weren't meaningful to continue measuring. For example, I had written and published three professional books. Publishing another book was not a goal for me. That's not to say I wouldn't write another (in fact, I have), but it would serve another goal, not be a goal in itself. As a second example, I set a goal to get twenty people I've managed or mentored into VPE/CTO roles running engineering organizations of 50+ people or $100M+ valuation. By the end of last year, ten people had met that criterion after four years. Based on that, it seems quite likely I'll reach twenty within the next six years, and I'd already increased that goal from ten to twenty a few years ago, so I'm not interested in raising it again.
- "Advancing the industry" offered a solution to both, giving me a broader goal to work toward and a way to reframe my decade and annual goals.

That mission still resonates with me: it's large, broad, and ambiguous enough to support many avenues of progress while feeling achievable within two decades. Though the goal resonates, my thinking about the best mechanism to make progress toward it has shifted over the past few years.

Writing from primary to secondary mechanism

Roughly a decade ago, I discovered the most effective mechanism I've found to advance the industry: learn at work, write blog posts about those learnings, and then aggregate the posts into a book. An Elegant Puzzle was the literal output of that loop. Staff Engineer was a more intentional effort but still the figurative output. My last two books have been more designed than aggregated, but still generally followed this pattern. That said, as I finish up Crafting Engineering Strategy, I think the loop remains valid, but it's run its course for me personally. There are several reasons.

First, what was energizing four books ago feels like a slog today. Making a book is a lot of work, and much of it isn't fun, so you need to be really excited about the fun parts to balance it out. I used to check my Amazon sales standing every day, thrilled to see it move up and down the charts. Each royalty payment felt magical: something I created that people paid real money for.
It's still cool, but the excitement has tempered over six years.

Second, most of my original thinking is already captured in my books or fits shorter-form content like blog posts. I won't get much incremental leverage from another book. I do continue to get leverage from shorter-form writing and will keep doing it.

Finally, as I wrote in Writers who operate, professional writing quality often suffers when writing becomes the "first thing" rather than the "second thing." Chasing distribution subtly damages quality. I've tried hard to keep writing as a second thing, but over the past few years my topic choices have been overly pulled toward filling book chapters instead of what's most relevant to my day-to-day work.

If writing is second, what is first?

My current thinking on how to best advance the industry rests on four pillars:

1. Industry leadership and management practices are generally poor.
2. We can improve these practices by making better practices more accessible (my primary focus in years past, but where I've seen diminishing returns).
3. We can improve practices by growing the next generation of industry leaders (the rationale behind my decade goal to mentor/manage people into senior roles, but I can't scale it much through executive roles alone).
4. We can improve practices by modeling them authentically in a very successful company and engineering organization.

The fourth pillar is my current focus and likely will remain so for the upcoming decade, though who knows: your focus can change a lot over ten years.

Why now? Six years ago, I wouldn't have believed I could influence my company enough to make this impact, but the head of engineering roles I've pursued are exactly those that can. With access to such roles at companies with significant upward trajectories, I have the best laboratory to validate and evolve ways to advance the industry: leading engineering in great companies. Cargo-culting often spreads the most influential ideas: 20% time at Google, AI adoption patterns at Spotify, memo culture at Amazon, writing culture at Stripe, etc. Hopefully, developing and documenting ideas with integrity will be even more effective than publicity-driven cargo-culting. That said, I'd be glad to accept the "mere" success of ideas like 20% time.

Returning to the details

Most importantly for me personally, focusing on modeling ideas in my own organization aligns "advancing the industry" with something I've been craving for a long time now: spending more time in the details of the work. Writing for broad audiences is a process of generalizing, but day-to-day execution succeeds or fails on particulars. I've spent much of the past decade translating between the general and the particular, and I'm relieved to return fully to the particulars.

Joining Imprint six weeks ago gave me a chance to practice this: I've written/merged/deployed six pull requests at work, tweaked our incident tooling to eliminate gaps in handoff with Zapier integrations, written an RFC, debugged a production incident, and generally been two or three layers deeper than at Carta. Part of that is that Imprint's engineering team is currently much smaller (40 rather than 350), and another part is that industry expectations in the post-ZIRP retrenchment and LLM boom pull leaders toward the details. But mostly, it's just where my energy is pulling me lately.

a month ago 19 votes
What can agents actually do?

There's a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that it's hard to do anything concrete with them. Sort of like saying that your company could be much better if you merely adopted software. That's certainly true, but it's not a particularly helpful claim.

This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and make the case that the potential of AI agents is equivalent to the potential of this generation of AI. By the end of this writeup, my hope is that you'll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company.

How do agents work?

At its core, using an LLM is an API call that includes a prompt. For example, you might call Anthropic's /v1/messages with the prompt: "How should I adopt LLMs in my company?" That prompt is used to fill the LLM's context window, which conditions the model to generate certain kinds of responses. This is the first important thing that agents can do: use an LLM to evaluate a context window and get a result.

Prompt engineering, or context engineering as it's being called now, is deciding what to put into the context window to best generate the responses you're looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine if a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely.
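To make the ICL idea concrete, here is a minimal sketch of assembling that kind of context window for the fraud example; the labeled transactions are hypothetical placeholders, and the resulting string is what you would send to the model as the prompt.

```python
# Hypothetical labeled transactions used as in-context examples; a real system
# would pull these from prior, human-reviewed decisions.
LABELED_EXAMPLES = [
    ("$12.50 at a coffee shop, card present, home city", "not fraudulent"),
    ("$980.00 in gift cards, card not present, new device, foreign IP", "fraudulent"),
    ("$54.20 at a grocery store, card present, home city", "not fraudulent"),
]

def build_fraud_prompt(transaction: str) -> str:
    """Fill the context window with labeled examples, then the new transaction."""
    lines = ["Classify each transaction as fraudulent or not fraudulent.", ""]
    for description, label in LABELED_EXAMPLES:
        lines += [f"Transaction: {description}", f"Label: {label}", ""]
    lines += [f"Transaction: {transaction}", "Label:"]
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_fraud_prompt("$310.00 wire transfer, card not present, new payee"))
```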
However, composing the perfect context window is very time intensive, benefiting from techniques like metaprompting to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to do a good job of providing relevant context. For example, if you prompt, "Who is going to become the next mayor of New York City?", then you can't include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you're asking the question to begin with! This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don't have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window. This is the second important thing that agents can do: use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool's response.

However, it's important to clarify how "tool usage" actually works. An LLM does not actually call a tool. (You can skim OpenAI's function calling documentation if you want to see a specific real-world example of this.) Instead, there is a five-step process to calling tools that can be a bit counter-intuitive:

1. The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using.
2. Every API call to the LLM includes that defined set of tools as options that the LLM is allowed to recommend.
3. The response from the API call with defined functions is either generated text, as any other call to an LLM might provide, or a recommendation to call a specific tool with a specific set of parameters. For example, an LLM that knows about a get_weather tool, when prompted about the weather in Paris, might return this response: [{ "type": "function_call", "name": "get_weather", "arguments": "{\"location\":\"Paris, France\"}" }]
4. The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it's been used too frequently recently (e.g. rate limiting), it might check if the associated user has permission to use the tool (e.g. maybe it's a premium-only tool), or it might check if the parameters match the user's role-based permissions (e.g. the user can check weather, but only admin users are allowed to check weather in France).
5. If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call's context window.

The important thing about this loop is that the LLM itself can still only do one interesting thing: take a context window and return generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools' output to the LLM to generate more context.
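Here is a minimal sketch of that loop in Python. The call_llm stub and the get_weather tool are hypothetical stand-ins rather than any particular provider's SDK, with a canned response wired in so the loop runs end to end; the point is that the program, not the model, decides whether to honor a suggested tool call.

```python
import json

# Step 1: the program defines the tools the LLM is allowed to suggest using.
def get_weather(location: str) -> str:
    return f"18C and cloudy in {location}"  # stand-in for a real weather lookup

TOOLS = {"get_weather": get_weather}

def call_llm(context: list, tools: dict) -> dict:
    """Hypothetical stand-in for steps 2-3: a real call would send the context
    window plus the tool definitions to a provider, which returns either
    generated text or a recommendation to call a specific tool."""
    if not any(message["role"] == "tool" for message in context):
        return {"type": "function_call", "name": "get_weather",
                "arguments": json.dumps({"location": "Paris, France"})}
    return {"type": "text", "content": f"Here's the weather: {context[-1]['content']}"}

def run_agent(user_prompt: str, max_turns: int = 5) -> str:
    context = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        response = call_llm(context, TOOLS)
        if response["type"] != "function_call":
            return response["content"]  # plain generated text, so we're done
        # Step 4: the program, not the LLM, decides whether to honor the request
        # (rate limits, user permissions, and parameter checks would go here).
        tool = TOOLS.get(response["name"])
        if tool is None:
            context.append({"role": "tool", "content": "tool not permitted"})
            continue
        # Step 5: invoke the tool, append its output to the context window, and
        # loop around to call the LLM again with the enriched context.
        arguments = json.loads(response["arguments"])
        context.append({"role": "tool", "content": tool(**arguments)})
    return "escalating to a human: too many tool calls"

print(run_agent("What is the weather in Paris?"))
```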
What's magical is that LLMs plus tools start to really improve how you can generate context windows. Instead of having to have a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context. This brings us to the third important thing that agents can do: they manage flow control for tool usage. Let's think about a few different scenarios.

Flow control via rules sets concrete rules about how tools can be used. Some examples:

- It might only allow a given tool to be used once in a given workflow (or set a usage limit on a tool for each user, etc.)
- It might require that a human-in-the-loop approves parameters over a certain value (e.g. refunds of more than $100 require human approval)
- It might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails)
- It might apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data)
- A tool to escalate to a human representative can only be called after five back-and-forths with the LLM agent

Flow control via statistics uses statistics to identify and act on abnormal behavior:

- If the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human
- If a user has used a tool more than 99% of other users have, then you might want to reject usage for the rest of the day
- It might escalate to a human representative if tool parameters are more similar to prior parameters that required escalation to a human agent

LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is the mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can always find a way to shift the complexity to a tool to avoid that issue. As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters.

At this point, there is one final important thing that agents do: they are software programs. This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but generally these include:

- Building general context to add to the context window, sometimes thought of as maintaining memory
- Initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc.
- Periodically initiating workflows at a certain time, such as hourly review of incoming tickets

Alright, we've now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are:

1. Use an LLM to evaluate a context window and get a result
2. Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool's response
3. Manage flow control for tool usage via rules or statistical analysis
4. Agents are software programs, and can do anything other software programs do

Armed with these four capabilities, we'll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities.

Use Case 1: Customer Support Agent

One of the first scenarios people talk about for deploying AI agents is customer support, so let's start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let's set a goal of taking over the easiest tier first, with the goal of moving up tiers over time as we show impact. Our approach might be:

- Allow tickets (or support chats) to flow into an AI agent.
- Provide a variety of tools to the agent to support:
  - Retrieving information about the user: recent customer support tickets, account history, account state, and so on
  - Escalating to the next tier of customer support
  - Refunding a purchase (almost certainly implemented as "refund purchase," referencing a specific purchase by the user, rather than "refund amount," to prevent scenarios where the agent can be fooled into refunding too much)
  - Closing the user account on request
- Include customer support guidelines in the context window: describe customer problems and map those problems to the specific tools that should be used to solve them.
- Add flow control rules that ensure all calls escalate to a human if not resolved within a certain time period, after a certain number of back-and-forth exchanges, if they run into an error in the agent, and so on. These rules should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience.
- Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to a human) and some degree of random sampling.
- Review hourly, then daily, and then weekly metrics of agent performance.
- Based on your learnings from the metric reviews, set baselines for alerts which require a more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodic review.
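To make the flow-control rules concrete, here is a minimal sketch of how the surrounding program might gate tool calls for a support agent like this. The tool names, thresholds, and purchase lookup are illustrative assumptions, not a real system.

```python
from dataclasses import dataclass, field

MAX_EXCHANGES_BEFORE_ESCALATION = 5   # assumed limit on back-and-forth turns
REFUND_APPROVAL_THRESHOLD = 100.00    # refunds above this queue for a human

# Hypothetical purchase lookup; a real system would query the billing service.
PURCHASES = {"purchase_123": {"amount": 42.00}}

@dataclass
class ConversationState:
    exchanges: int = 0
    tool_calls: dict = field(default_factory=dict)

def allow_tool_call(state: ConversationState, name: str, args: dict) -> tuple[bool, str]:
    """Rules-based flow control: the program, not the LLM, enforces these checks."""
    state.tool_calls[name] = state.tool_calls.get(name, 0) + 1

    # Unresolved after too many exchanges: always hand off to a human.
    if state.exchanges >= MAX_EXCHANGES_BEFORE_ESCALATION:
        return False, "escalate_to_human"

    if name == "refund_purchase":
        # The tool refunds a specific purchase rather than a free-form amount,
        # which limits how badly the agent can be fooled.
        purchase = PURCHASES.get(args.get("purchase_id"))
        if purchase is None:
            return False, "reject: refunds must reference a known purchase"
        if purchase["amount"] > REFUND_APPROVAL_THRESHOLD:
            return False, "queue_for_human_approval"

    if name == "close_account" and state.tool_calls[name] > 1:
        return False, "reject: close_account may only be attempted once"

    return True, "ok"

state = ConversationState(exchanges=2)
print(allow_tool_call(state, "refund_purchase", {"purchase_id": "purchase_123"}))
```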
Note that even when you've moved "Customer Support" to AI agents, you still have:

- a tier of human agents dealing with the most complex calls
- humans reviewing the periodic performance statistics
- humans performing quality control on AI agent-customer interactions

You absolutely can replace each of those downstream steps (reviewing performance statistics, etc.) with its own AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where over time you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn't how they work, it's how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones.

Applied with care, the above series of actions will work successfully. However, it's important to recognize that this is building an entire software pipeline, and then learning to operate that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent.

Use Case 2: Triaging incoming bug reports

When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it's potentially quite severe, then you want on-call engineers immediately investigating; if it's certainly not severe, then you want to triage it in a less urgent process of some sort. It's interesting to think about how an AI agent might support this triaging workflow. The process might work as follows:

- Pipe all created incidents and all created tickets to this agent for review.
- Expose these tools to the agent:
  - Open an incident
  - Retrieve current incidents
  - Retrieve recently created tickets
  - Retrieve production metrics
  - Retrieve deployment logs
  - Retrieve feature flag change logs
  - Toggle known-safe feature flags
  - Propose merging an incident with another for human approval
  - Propose merging a ticket with another ticket for human approval
- Use redundant LLM providers for critical workflows. If the LLM provider's API is unavailable, retry three times over ten seconds, then resort to using a second model provider (e.g. Anthropic first; if unavailable, try OpenAI), and then finally create an incident that the triaging mechanism is unavailable. For critical workflows, we can't simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues. (A sketch of this fallback follows after this list.)
- Merge duplicates. When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow.
- Assess impact. If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it's high priority, open an incident. If it's low priority, create a ticket.
- Propose cause. Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error. In some cases this will be obvious (e.g. spiking errors with a traceback of a line of code that changed recently), and in other cases it will only be proximity in time.
- Apply known-safe feature flags. Establish an allow list of known-safe feature flags that the system is allowed to activate itself. For example, if there are expensive features that are safe to disable, it could be allowed to disable them; restricting paginating through deeper search results when under load might be a reasonable tradeoff between stability and user experience.
- Defer to humans. At this point, rely on humans to drive incident, or ticket, remediation to completion.
- Draft an initial incident report. If an incident was opened, the agent should draft an initial incident report including the timeline, related changes, and the human activities taken over the course of the incident. This report should then be finalized by the humans involved in the incident.
- Run incident review. Your existing incident review process should take the incident review and determine how to modify your systems, including the triaging agent, to increase reliability over time.
- Safeguard to re-enable feature flags. Since we now have an agent disabling feature flags, we also need to add a periodic check (agent-driven or otherwise) to re-enable the "known safe" feature flags if there isn't an ongoing incident, to avoid accidentally disabling them for long periods of time.
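As promised above, here is a minimal sketch of the redundant-providers step; the retry count, delay, and provider wrappers are illustrative assumptions rather than specific SDK calls.

```python
import time

class TriageUnavailable(Exception):
    """Raised so the caller can open an incident saying triage itself is down."""

def call_with_fallback(prompt: str, providers: list, retries: int = 3, delay_seconds: float = 3.0) -> str:
    """Try each provider in order, retrying a few times before moving on.

    `providers` is a list of (name, call_fn) pairs, e.g. thin wrappers around an
    Anthropic client and an OpenAI client; each call_fn takes a prompt string
    and returns generated text.
    """
    for name, call_fn in providers:
        for _ in range(retries):
            try:
                return call_fn(prompt)
            except Exception:
                time.sleep(delay_seconds)  # roughly three tries over ten seconds
    # Every provider failed: surface an incident rather than failing silently.
    raise TriageUnavailable("all LLM providers unavailable; open an incident")
```

The agent's outer loop would catch TriageUnavailable and open the "triaging mechanism is unavailable" incident described above.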
This is another AI agent that will absolutely work as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation to make this flow work includes:

- The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent cannot solve the review process because it's about actively learning what to change based on the incident. You can make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low-risk changes (e.g. a copy change) being automatically added to a ticket for human approval. Doing this for more complex or riskier changes is possible but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea of "just add agents and things get easy." Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and is able to represent many customer-specific features solely through that DSL.
- Expanding the list of known-safe feature flags to make incidents remediable. To do this widely will require enforcing very specific requirements for how software is developed. Even doing it narrowly will require changes to ensure the known-safe feature flags remain safe as software is developed.
- Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, this should decrease. If the agent isn't driving a reduction in MTTR, then something is rotten in the details of the implementation.

Even a very effective agent doesn't relieve you of the responsibility for careful system design. Rather, agents are a multiplier on the quality of your system design: done well, agents can make you significantly more effective. Done poorly, they'll only amplify your problems even more widely.

Do AI Agents Represent the Entirety of this Generation of AI?

If you accept my definition that AI agents are any combination of LLMs and software, then I think it's true that there's not much this generation of AI can express that doesn't fit this definition. I'd readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better term. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.

Closing thoughts

LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work. Software isn't magic, it's very logical, but what it can accomplish is magical. The same goes for agents and LLMs. The more we can accelerate that learning curve, the better for our industry.

a month ago 33 votes

More in programming

The ROI of exercise

The math on why exercise is a good deal.

18 hours ago 4 votes
Do blogs need to be so lonely?

If the web is participatory, and I really think it is, then how come blogging can feel so lonely? The post Do blogs need to be so lonely? appeared first on The History of the Web.

2 days ago 5 votes
Sapir-Whorf does not apply to Programming Languages

This one is a hot mess but it's too late in the week to start over. Oh well! Someone recognized me at last week's Chipy and asked for my opinion on the Sapir-Whorf hypothesis in programming languages. I thought this was interesting enough to make a newsletter. First what it is, then why it looks like it applies, and then why it doesn't apply after all.

The Sapir-Whorf Hypothesis

We dissect nature along lines laid down by our native language. — Whorf

To quote from a Linguistics book I've read, the hypothesis is that "an individual's fundamental perception of reality is moulded by the language they speak." As a massive oversimplification, if English did not have a word for "rebellion", we would not be able to conceive of rebellion. This view, now called Linguistic Determinism, is mostly rejected by modern linguists.

The "weak" form of SWH is that the language we speak influences, but does not decide, our cognition. For example, Russian has distinct words for "light blue" and "dark blue", so Russian speakers can discriminate between "light blue" and "dark blue" shades faster than they can discriminate two "light blue" shades. English does not have distinct words, so we discriminate those at the same speed. This linguistic relativism seems to have lots of empirical support in studies, but mostly with "small indicators". I don't think there's anything that convincingly shows linguistic relativism having effects on a societal level.[1]

The weak form of SWH for software would then be that "the programming languages you know affect how you think about programs."

SWH in software

This seems like a natural fit, as different paradigms solve problems in different ways. Consider the hardest interview question ever, "given a list of integers, sum the even numbers". Here it is in four paradigms:

- Procedural: total = 0; foreach x in list {if IsEven(x) total += x}. You iterate over data with an algorithm.
- Functional: reduce(+, filter(IsEven, list), 0). You apply transformations to data to get a result.
- Array: + fold L * iseven L.[2] In English: replace every element in L with 0 if odd and 1 if even, multiply the new array elementwise against L, and then sum the resulting array. It's like functional except everything is in terms of whole-array transformations.
- Logical: Somethingish like sumeven(0, []). sumeven(X, [Y|L]) :- iseven(Y) -> sumeven(Z, L), X is Y + Z ; sumeven(X, L). You write a set of equations that express what it means for X to be the sum of the evens of L.

There are some similarities between how these paradigms approach the problem, but each is also unique. It's plausible that where a procedural programmer "sees" a for loop, a functional programmer "sees" a map and an array programmer "sees" a singular operator.

I also have a personal experience with how a language changed the way I think. I use TLA+ to detect concurrency bugs in software designs. After doing this for several years, I've gotten much better at intuitively seeing race conditions in things even without writing a TLA+ spec. It's even leaked out into my day-to-day life. I see concurrency bugs everywhere. Phone tag is a race condition.

But I still don't think SWH is the right mental model to use, for one big reason: language is special. We think in language, we dream in language, there are huge parts of our brain dedicated to processing language. We don't use those parts of our brain to read code. SWH is so intriguing because it seems so unnatural that the way we express thoughts changes the way we think thoughts.
That I would be a different person if I were bilingual in Spanish, not because of the life experiences it would open up but because grammatical gender would change my brain. Compared to that, the idea that programming languages affect our brain is more natural and has a simpler explanation: it's the goddamned Tetris Effect.

The Goddamned Tetris Effect

The Tetris effect occurs when someone dedicates vast amounts of time, effort and concentration on an activity which thereby alters their thoughts, dreams, and other experiences not directly linked to said activity. — Wikipedia

Every skill does this. I'm a juggler, so every item I can see right now has a tiny metadata field of "how would this tumble if I threw it up". I teach professionally, so I'm always noticing good teaching examples everywhere. I spent years writing specs in TLA+ and watching the model checker throw concurrency errors in my face, so now race conditions have visceral presence. Every skill does this. And to really develop a skill, you gotta practice.

This is where I think programming paradigms do something especially interesting that makes them feel more Sapir-Whorfy than, like, juggling. Some languages mix lots of different paradigms, like Javascript or Rust. Others, like Haskell, really focus on excluding paradigms. If something is easy for you in procedural and hard in FP, in JS you could just lean on the procedural bits. In Haskell, too bad, you're learning how to do it the functional way.[3] And that forces you to practice, which makes you see functional patterns everywhere. Tetris effect!

Anyway, this may all seem like quibbling: why does it matter whether we call it "Tetris effect" or "Sapir-Whorf", if our brains get rewired either way? For me, personally, it's because SWH sounds really special and unique, while Tetris effect sounds mundane and commonplace. Which it is. But also because TE suggests it's not just programming languages that affect how we think about software, it's everything. Spending lots of time debugging, profiling, writing exploits, whatever, will change what you notice, what you think a program "is". And that's a way useful idea that shouldn't be restricted to just PLs. (Then again, the Tetris Effect might also be a bad analogy to what's going on here, because I think part of it is that it wears off after a while. Maybe it's just "building a mental model is good".)

I just realized all of this might have missed the point

Wait, are people actually using SWH to mean the weak form or the strong form? Like that if a language doesn't make something possible, its users can't conceive of it being possible. I've been arguing against the weaker form in software, but I think I've seen the strong form often too. Dammit. Well, it's already Thursday and far too late to rewrite the whole newsletter, so I'll just outline the problem with the strong form: we describe the capabilities of our programming languages with human language. In college I wrote a lot of crappy physics lab C++ and one of my projects was filled with comments like "man I hate copying this triply-nested loop in 10 places with one-line changes, I wish I could put it in one function and just take the changing line as a parameter". Even if I hadn't encountered higher-order functions, I was still perfectly capable of expressing the idea. So if the strong SWH isn't true for human language, it's not true for programming languages either.

Systems Distributed talk now up! Link here!
Original abstract: Building correct distributed systems takes thinking outside the box, and the fastest way to do that is to think inside a different box. One different box is "formal methods", the discipline of mathematically verifying software and systems. Formal methods encourages unusual perspectives on systems, models that are also broadly useful to all software developers. In this talk we will learn two of the most important FM perspectives: the abstract specifications behind software systems, and the properties they are and aren't supposed to have. The talk ended up evolving away from that abstract but I like how it turned out!

1. There is one paper arguing that people who speak a language that doesn't have a "future tense" are more likely to save and eat healthy, but it is... extremely questionable. ↩
2. The original J is +/ (* (0 = 2&|)). Obligatory Notation as a Tool of Thought reference. ↩
3. Though if it's too hard for you, that's why languages have escape hatches. ↩

2 days ago 5 votes
Consistent Navigation Across My Inconsistent Websites, Part II

I refreshed the little thing that lets you navigate consistently between my inconsistent subdomains (video recording). Here's the tl;dr on the update:

- I had to remove some features on each site to make this feel right. Takeaway: adding stuff is easy, removing stuff is hard.
- The element is a web component and not even under source control (🤫). I serve it directly from my CDN. If I want to make an update, I tweak the file on disk and re-deploy. Takeaway: cowboy codin', yee-haw! Live free and die hard.
- So. Many. Iterations. All of which led to what? A small, iterative evolution. Takeaway: it's ok for design explorations to culminate in updates that look more like an evolution than a mutation.

Want more info on the behind-the-scenes work? Read on!

Design Explorations

It might look like a simple iteration on what I previously had, but that doesn't mean I didn't explore the universe of possibilities first before coming back to the current iteration.

v0: Tabs!

A tab-like experience seemed the most natural, but how to represent it? I tried a few different ideas. On top. On bottom. Different visual styles, etc. And of course, gotta explore how that plays out on desktop too. Some I liked, some I didn't. As much as I wanted to play with going to the edges of the viewport, I realized that every browser is different and you won't be able to get a consistent "bleed-like" visual experience across browsers. For example, if you try to make tabs that bleed to the edges, it looks nice in a frame in Figma, and even in some browsers. But it won't look right in all browsers, like iOS Safari. So I couldn't reliably leverage the idea of a bounded canvas as a design element, which, I should've known, has always been the case with the web.

v1: Bottom Tabs With a Site Theme

I really like this pattern on mobile devices, so I thought maybe I'd consider it for navigating between my sites. But how to theme across differently-styled sites? The favicon styles seemed like a good bet! And, of course, what to do on larger devices? Just stacking it felt like overkill, so I explored moving it to the edge. I actually prototyped this in code, but I didn't like how it felt, so I scratched the idea and went other directions.

v2: The Unification

The more I explored what to do with this element, the more it started taking on additional responsibility. "What if I unified its position with site-specific navigation?" I thought. This led to design explorations where the disparate subdomains began to take on not just a unified navigational element, but a unified header. And I made small, stylistic explorations with the tabs themselves too. You can see how I toyed with the idea of a consistent header across all my sites (not an intended goal, but ya know, scope creep gets us all). As I began to explore more possibilities than I planned for, things started to get out of hand.

v3: Do More. MORE. MORE!!

Questions I began asking: Why aren't these all under the same domain?! What if I had a single domain for feeds across all of them, e.g. feeds.jim-nielsen.com? What about icons instead of words? Wait, wait, wait Jim. Consistent navigation across inconsistent sites. That's the goal. Pare it back a little.

v4: Reining It Back In

To counter my exploratory ambitions, I told myself I needed to ship something without the need to modify the entire design style of all my sites. So how do I do that? That got me back to a simpler premise: consistent navigation across my inconsistent sites. Better, and implementable.
Technical Details

The implementation was pretty simple. I basically just forked my previous web component and changed some styles. That's it. The only thing I did differently was moving the web component JS file from being part of my www.jim-nielsen.com git repository to a standalone file (not under git control) on my CDN. This felt like one of the exceptions to the rule of always keeping stuff under version control. It's more of the classic FTP-style approach to web development. Granted, it's riskier, but it's also way more flexible. And I'm good with that trade-off for now. (Ask me again in a few months if I've done anything terrible and now have regrets.)

Each site implements the component like this (with a different subdomain attribute for each site):

<script type="module" src="https://cdn.jim-nielsen.com/shared/jim-site-switcher.js"></script>
<jim-site-switcher subdomain="blog"></jim-site-switcher>

That's really all there is to say. Thanks to Zach for prodding me to make this post. Email · Mastodon · Bluesky

4 days ago 5 votes
The many benefits of paying for search

“Wait, you PAY for search?” We get this reaction a lot about Kagi.

5 days ago 12 votes