There's a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they're hard to act on. Sort of like saying that your company could be much better if you merely adopted software. That's certainly true, but it's not a particularly helpful claim.

This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and make the case that the potential of AI agents is equivalent to the potential of this generation of AI. By the end of this writeup, my hope is that you'll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company.

How do agents work?

At its core, using an LLM is an API call that includes a prompt. For example, you might call Anthropic's /v1/messages with a prompt: How should I adopt LLMs in my company? That prompt is used to fill the LLM's context window, which conditions the model to generate certain kinds of responses.

This is the first important thing that agents can do: use an LLM to evaluate a context window and get a result.

Prompt engineering, or context engineering as it's being called now, is deciding what to put into the context window to best generate the responses you're looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine if a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely.

However, composing the perfect context window is very time intensive, and benefits from techniques like metaprompting to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to do a good job of providing relevant context. For example, if you prompt, "Who is going to become the next mayor of New York City?", then you are poorly positioned to include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you're asking the question to begin with!

This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don't have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window.

This is the second important thing that agents can do: use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool's response.

However, it's important to clarify how "tool usage" actually works. An LLM does not actually call a tool. (You can skim OpenAI's function calling documentation if you want to see a specific real-world example of this.) Instead there is a five-step process to calling tools that can be a bit counter-intuitive:

1. The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using.
2. Every API call to the LLM includes that defined set of tools as options that the LLM is allowed to recommend.
3. The response from the API call with defined functions is either generated text, as any other call to an LLM might provide, or a recommendation to call a specific tool with a specific set of parameters. For example, an LLM that knows about a get_weather tool, when prompted about the weather in Paris, might return this response:

       [{ "type": "function_call", "name": "get_weather", "arguments": "{\"location\":\"Paris, France\"}" }]

4. The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it's been used too frequently recently (e.g. rate limiting), it might check if the associated user has permission to use the tool (e.g. maybe it's a premium-only tool), or it might check if the parameters match the user's role-based permissions as well (e.g. the user can check weather, but only admin users are allowed to check weather in France).
5. If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call's context window.

The important thing about this loop is that the LLM itself can still only do one interesting thing: taking a context window and returning generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools' output to the LLM to generate more context.

What's magical is that LLMs plus tools start to really improve how you can generate context windows. Instead of having to have a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context.

This brings us to the third important thing that agents can do: they manage flow control for tool usage. Let's think through two different scenarios.

Flow control via rules has concrete rules about how tools can be used. Some examples:
- it might only allow a given tool to be used once in a given workflow (or a usage limit of a tool for each user, etc)
- it might require that a human-in-the-loop approves parameters over a certain value (e.g. refunds of more than $100 require human approval)
- it might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails)
- it might apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data)
- it might only allow a tool that escalates to a human representative to be called after five back-and-forths with the LLM agent

Flow control via statistics can use statistics to identify and act on abnormal behavior:
- if the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human
- if a user has used a tool more than 99% of other users have, then you might want to reject usage for the rest of the day
- it might escalate to a human representative if tool parameters are more similar to prior parameters that required escalation to a human agent

LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is the mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can always find a way to shift the complexity to a tool to avoid that issue. As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters.
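To make that loop concrete, here is a minimal sketch in Python of the shape it tends to take. The call_llm, run_tool, and allowed_by_rules helpers are hypothetical stand-ins rather than any particular provider's SDK, and the response fields mirror the example above; the point is only that the LLM proposes while the surrounding program decides.

    import json

    def run_agent(user_prompt, tools, max_steps=10):
        # The context window we keep rebuilding and resending.
        messages = [{"role": "user", "content": user_prompt}]
        for _ in range(max_steps):
            response = call_llm(messages, tools)      # hypothetical: one API call, context in, response out
            if response["type"] != "function_call":
                return response["content"]             # plain generated text; nothing more to do
            # The LLM only *suggested* a tool call; flow control decides whether to honor it.
            name = response["name"]
            args = json.loads(response["arguments"])
            if not allowed_by_rules(name, args):        # hypothetical: rate limits, permissions, parameter checks
                messages.append({"role": "tool", "name": name,
                                 "content": "Tool call rejected by policy."})
                continue
            output = run_tool(name, args)               # the program, not the LLM, invokes the tool
            messages.append({"role": "tool", "name": name, "content": str(output)})
        return "Escalating to a human: too many tool calls without a final answer."

The step limit and the rejection message are arbitrary choices here; what matters is that every enforcement decision happens in ordinary code rather than inside the model.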
At this point, there is one final important thing that agents do: they are software programs. This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but generally these include:
- Building general context to add to the context window, sometimes thought of as maintaining memory
- Initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc
- Periodically initiating workflows at a certain time, such as an hourly review of incoming tickets

Alright, we've now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are:
1. Use an LLM to evaluate a context window and get a result
2. Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool's response
3. Manage flow control for tool usage via rules or statistical analysis
4. Agents are software programs, and can do anything other software programs do

Armed with these four capabilities, we'll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities.

Use Case 1: Customer Support Agent

One of the first scenarios that people often talk about for deploying AI agents is customer support, so let's start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let's set a goal of taking over the easiest tier first, with the goal of moving up tiers over time as we show impact. Our approach might be:

- Allow tickets (or support chats) to flow into an AI agent
- Provide a variety of tools for the agent to use:
  - Retrieving information about the user: recent customer support tickets, account history, account state, and so on
  - Escalating to the next tier of customer support
  - Refunding a purchase (almost certainly implemented as "refund purchase," referencing a specific purchase by the user, rather than "refund amount," to prevent scenarios where the agent can be fooled into refunding too much)
  - Closing the user account on request
- Include customer support guidelines in the context window, describe customer problems, and map those problems to the specific tools that should be used to solve them
- Add flow control rules that ensure all calls escalate to a human if not resolved within a certain time period, after a certain number of back-and-forth exchanges, if they run into an error in the agent, and so on. These rules should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience (the refund rule is sketched after this list)
- Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to a human) and some degree of random sampling
- Review hourly, then daily, and then weekly metrics of agent performance
- Based on your learnings from the metric reviews, set baselines for alerts which require a more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodic review.
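As an illustration of the flow control rules above, here is a hedged sketch of what the refund rule might look like. The helper functions, the $100 threshold, and the five-exchange limit are illustrative assumptions rather than a prescribed implementation.

    HUMAN_APPROVAL_THRESHOLD = 100.00   # assumed cutoff: refunds above this always go to a human
    MAX_EXCHANGES_BEFORE_ESCALATION = 5

    def handle_refund_request(user_id, purchase_id, exchange_count):
        # Escalate long conversations rather than letting the agent churn indefinitely.
        if exchange_count >= MAX_EXCHANGES_BEFORE_ESCALATION:
            return escalate_to_human(user_id, reason="too many back-and-forth exchanges")
        # Refund a specific purchase, never a free-form amount suggested by the LLM.
        amount = purchase_amount(purchase_id)           # hypothetical lookup against your billing system
        if amount is None:
            return escalate_to_human(user_id, reason="unknown purchase")
        if amount > HUMAN_APPROVAL_THRESHOLD:
            return queue_for_human_approval(user_id, purchase_id, amount)
        return issue_refund(user_id, purchase_id)

Note that the LLM never sees or sets the refund amount; it can only nominate a purchase, and the rule decides what happens next.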
Note that even when you've moved "customer support" to AI agents, you still have:
- a tier of human agents dealing with the most complex calls
- humans reviewing the periodic performance statistics
- humans performing quality control on AI agent-customer interactions

You absolutely can replace each of those downstream steps (reviewing performance statistics, etc) with its own AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where over time you can eliminate many human components of your business, in exchange for increased fragility as you add more tiers of complexity. The most interesting part of complex systems isn't how they work, it's how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones.

Applied with care, the above series of actions will work successfully. However, it's important to recognize that this is building an entire software pipeline, and then learning to operate that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team to build and operate the customer support agent.

Use Case 2: Triaging incoming bug reports

When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it's potentially quite severe, then you want on-call engineers immediately investigating; if it's certainly not severe, then you want to triage it in a less urgent process of some sort. It's interesting to think about how an AI agent might support this triaging workflow. The process might work as follows:

- Pipe all created incidents and all created tickets to this agent for review.
- Expose these tools to the agent:
  - Open an incident
  - Retrieve current incidents
  - Retrieve recently created tickets
  - Retrieve production metrics
  - Retrieve deployment logs
  - Retrieve feature flag change logs
  - Toggle known-safe feature flags
  - Propose merging an incident with another incident, for human approval
  - Propose merging a ticket with another ticket, for human approval
- Use redundant LLM providers for critical workflows. If the LLM provider's API is unavailable, retry three times over ten seconds, then resort to using a second model provider (e.g. Anthropic first, and if unavailable try OpenAI), and then finally create an incident reporting that the triaging mechanism is unavailable. For critical workflows, we can't simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues.
- Merge duplicates. When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow.
- Assess impact. If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it's high priority, open an incident. If it's low priority, create a ticket.
- Propose cause. Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error.
  In some cases this will be obvious (e.g. spiking errors with a traceback of a line of code that changed recently), and in other cases it will only be proximity in time.
- Apply known-safe feature flags. Establish an allow list of known-safe feature flags that the system is allowed to activate itself. For example, if there are expensive features that are safe to disable, it could be allowed to disable them; restricting pagination through deeper search results when under load might be a reasonable tradeoff between stability and user experience.
- Defer to humans. At this point, rely on humans to drive incident, or ticket, remediation to completion.
- Draft an initial incident report. If an incident was opened, the agent should draft an initial incident report including the timeline, related changes, and the human activities taken over the course of the incident. This report should then be finalized by the humans involved in the incident.
- Run incident review. Your existing incident review process should take the incident review and determine how to modify your systems, including the triaging agent, to increase reliability over time.
- Safeguard to re-enable feature flags. Since we now have an agent disabling feature flags, we also need to add a periodic check (agent-driven or otherwise) to re-enable the "known safe" feature flags if there isn't an ongoing incident, to avoid accidentally disabling them for long periods of time (a sketch follows below).

This is another AI agent that will absolutely work as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation needed to make this flow work includes:

- The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent cannot solve the review process because it's about actively learning what to change based on the incident. You can make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low-risk changes (e.g. a copy change) being automatically added to a ticket for human approval. Doing this for more complex, or riskier, changes is possible but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea that you can "just add agents and things get easy." Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and is able to represent many customer-specific features solely through that DSL.
- Expanding the list of known-safe feature flags to make more incidents remediable. To do this widely will require enforcing very specific requirements for how software is developed. Even doing this narrowly will require changes to ensure the known-safe feature flags remain safe as software is developed.
- Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, MTTR should decrease. If the agent isn't driving a reduction in MTTR, then something is rotten in the details of the implementation.
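Here is a minimal sketch of the known-safe flag allow list and the re-enable safeguard described above. The flag names and the helper functions (ongoing_incident, flag_state, set_flag) are hypothetical; a real system would integrate with whatever feature flag service you already run.

    KNOWN_SAFE_FLAGS = {"deep_search_pagination", "expensive_recommendations"}  # illustrative names

    def agent_disable_flag(flag):
        # The agent can only ever touch flags on the allow list.
        if flag not in KNOWN_SAFE_FLAGS:
            raise PermissionError(f"agent may not toggle {flag}")
        set_flag(flag, enabled=False)

    def reenable_safe_flags():
        # Run periodically (cron or agent-driven) so mitigations don't stay on forever.
        if ongoing_incident():
            return  # leave mitigations in place while an incident is open
        for flag in KNOWN_SAFE_FLAGS:
            if not flag_state(flag):
                set_flag(flag, enabled=True)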
Even a very effective agent doesn't relieve you of the responsibility for careful system design. Rather, agents are a multiplier on the quality of your system design: done well, agents can make you significantly more effective. Done poorly, they'll only amplify your problems even more widely.

Do AI Agents Represent the Entirety of this Generation of AI?

If you accept my definition that AI agents are any combination of LLMs and software, then I think it's true that there's not much this generation of AI can express that doesn't fit this definition. I'd readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better term. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.

Closing thoughts

LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work. Software isn't magic, it's very logical, but what it can accomplish is magical. The same goes for agents and LLMs. The more we can accelerate that learning curve, the better for our industry.
Over the past 19 months, I've written Crafting Engineering Strategy, a book on creating engineering strategy. I've also been working increasingly with large language models at work. Unsurprisingly, the intersection of those two ideas is a topic that I've been thinking about a lot. What, I've wondered, is the role of the author, particularly the long-form author, in a world where an increasingly large percentage of writing is intermediated by large language models?

One framing I've heard somewhat frequently is the view that LLMs are first and foremost a great pillaging of authors' work. It's true. They are that. At some point there was a script to let you check which books had been loaded into Meta's LLaMa, and every book I'd written at that point was included, none of them with my consent. However, I long ago made my peace with plagiarism online, and this strikes me as not particularly different, albeit conducted by larger players. The folks using this writing are going to keep using it beyond the constraints I'd prefer it to be used in, and I'm uninterested in investing my scarce mental energy chasing through digital or legal mazes.

Instead, I've been thinking about how this transition might go right for authors. My favorite idea that I've come up with is the idea of written content as "datapacks" for thinking. Buy someone's book / "datapack", then upload it into your LLM, and you can immediately operate almost as if you knew the book's content.

Let's start with an example. Imagine you want help onboarding as an executive and you've bought a copy of The Engineering Executive's Primer: you could create a project in Anthropic's Claude and upload the LLM-optimized book into your project. Here is what your Claude project might look like. Once you have it set up, you can ask it to help you create your onboarding plan. This guidance makes sense, largely pulled from Your first 90 days as CTO. As always, you can iterate on your initial prompt–including more details you want to include in the plan–along with follow-ups to improve the formatting and so on.

One interesting thing here is that I don't currently have a datapack for The Engineering Executive's Primer! To solve that, I built one from all my blog posts marked with the "executive" tag. I did that using this script that packages Hugo blog posts, which I generated using this prompt with Claude 3.7 Sonnet. The output of that script gets passed into repomix via:

    repomix --include "`./scripts/tags.py content executive | paste -d, -s -`"

The mess with paste is to turn the multiline output from tags.py into a comma-separated list that repomix knows how to use.
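For readers curious what that packaging step might look like, here is a rough sketch of a tag-filtering script along these lines. The actual scripts/tags.py linked above may well differ; this sketch assumes Hugo posts with a YAML front matter block containing a tags list.

    # Hypothetical sketch of a tags.py: print the paths of Hugo posts carrying a given tag.
    import sys, pathlib, re

    def posts_with_tag(content_dir, tag):
        for path in pathlib.Path(content_dir).rglob("*.md"):
            # Grab the YAML front matter between the first pair of --- markers, if present.
            front_matter = path.read_text(errors="ignore").split("---")[1:2]
            if front_matter and re.search(rf"tags:.*\b{re.escape(tag)}\b",
                                          front_matter[0], re.DOTALL):
                yield path

    if __name__ == "__main__":
        content_dir, tag = sys.argv[1], sys.argv[2]
        for path in posts_with_tag(content_dir, tag):
            print(path)

The newline-separated output is exactly what the paste invocation above collapses into a single comma-separated value for repomix's include argument.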
This is a really neat pattern, and it starts to get at where I see the long-term advantage of writers in the current environment: if you're a writer and have access to your raw content, you can create a problem-specific datapack to discuss the problem. You can also give that datapack to someone else, or use it to answer their questions.

For example, someone asked me a very detailed followup question about a recent blog post. It was a very long question, and I was on a weekend trip. I already had a Claude project set up with the contents of Crafting Engineering Strategy, so I just passed the question verbatim into that project, and sent the answer back to the person who asked it. (I did have to ask Claude to revise the answer once to focus more on what I thought the most important part of the answer was.) This, for what it's worth, wasn't a perfect answer, but it's pretty good. If the question asker had the right datapack, they could have gotten it themselves, without needing me to decide to answer it.

However, this post is less worried about the reader than it is about the author. What is our competitive advantage as authors in a future where people are not reading our work? Well, maybe they're still buying our work in the form of datapacks and such, but it certainly seems likely that book sales, like blog traffic, will be impacted negatively. In trade, it's now possible for machines to understand our thinking, as we've recorded it in words over time. There's a running joke in my executive learning circle that I've written a blog post on every topic that comes up, and that's kind of true. That means that I am on the cusp of the opportunity to uniquely scale myself by connecting "intelligence on demand for a few cents" with the written details of my thinking built over the past two decades of being a writer who operates.

The tools that exist today are not quite there yet, although a combination of selling datapacks like the one for Crafting Engineering Strategy and tools like Claude's projects are a good start. There are many ways the exact details might come together, but I'm optimistic that writing will become more powerful rather than less in this new world, even if the particular formats change. (For what it's worth, I don't think human readers are going away either.)

If you're interested in the fully fleshed out version of this idea, starting here you can read the full AI Companion to Crafting Engineering Strategy. The datapack will be available via O'Reilly in the next few months. If you're an existing O'Reilly author who's skeptical of this idea, don't worry: I worked with them to sign a custom contract, and this usage–as best I understood it, although I am not a lawyer and am not providing legal advice–is outside the scope of the default contract I signed for my prior book, and presumably most others' contracts as well.
Since 2020, I've been working on my desk setup, and I think I finally have it mostly pulled together at this point. I don't really think my desk setup is very novel, and I'm sure there are better ways to pull it together, but I will say that it finally works the way I want since I added the CalDigit TS5 Plus, which has been a long time coming.

My requirements for my desk are:
- Has support for 2-3 Mac laptops
- Has support for a Windows gaming desktop with a dedicated GPU
- Has a dedicated microphone
- Has good enough lighting
- Is not too messy
- I can switch between any laptop and the desktop with a single Thunderbolt cable

Historically the issue here has been the final requirement, where switching required moving two cables–a Thunderbolt and a cable for the dedicated graphics card–but with my new dock this finally works with just one cable.

The equipment shown here, and my brief review of each piece, is:

- UPLIFT v2 Standing Desk – the standing desk I use. I both have a lot of stuff on my desk and also want my desk to feel minimal, so I opted for the 72" x 30" version. At the time I ordered it in 2020, the only option shipping quickly was the bamboo finish, so that's what I got.
- CalDigit TS5 Plus Dock – this was the missing component; it has three Thunderbolt ports and a DisplayPort. I have the external graphics card directly connected to the DisplayPort, and then move the Thunderbolt port from computer to computer to change which one is active. It also has enough USB-A ports to connect the adapters for my wireless keyboard and mouse, to avoid needing to pair them across computers, which would create friction in switching computers.
- Apple Studio Display – I experimented with dedicated speakers and a video camera, but for me having them built into the monitor was helpful to reduce the number of things on my desk. The Studio Display's monitor, speakers and video camera are all solidly good enough for my purposes: I'm sure I could get better on each dimension, but in practice I never think about this and don't find any issues with them. On the other hand, while I was initially hopeful that I could also get rid of my microphone, the microphone quality just wasn't that good for me, as I spend a lot of time on video conferences and recording podcasts, etc.
- Beelink GTi Ultra & EX Pro Docking Station – my Windows mini desktop and the dock which allows mounting an external GPU to the mini desktop. Beelink itself is slightly aggravating because, as best I can tell, they've done something quite odd in terms of custom patching Windows 11, but ultimately it's worked well for me as a dedicated gaming machine, and the build quality and size profile are both just fantastic.
- MSI Gaming RTX 4070 Ti Super 16G Graphics Card – I bought this earlier this year, looking for something that was in stock and was good enough that it would last me a generation or two of graphics card upgrades without shelling out a truly massive amount for a 50XX edition (some of which don't seem to be upgrades on their 40XX predecessors anyway).
- Hexcal Studio – the workstation / monitor stand / cable management system, with lighting and so on. I ultimately do like this, but it's not perfect; e.g. my Qi charger technically works but provides such bad charging speeds that it effectively doesn't work. It's definitely too expensive for something that doesn't entirely work, so I can't really recommend it, although now that I've paid for it, I wouldn't bother replacing it either.
- Audio-Technica AT2020USB Cardioid Condenser USB Microphone – the microphone I've been using for six years; it's really quite good and cost something like $120 at the time. It's discontinued now, but presumably there's a more modern version somewhere. I have it mounted on this boom arm.
- LUME CUBE Edge 2.0 LED Desk Lamp – I have two of these for lighting during recordings. I don't actually like using them very much, since I hate looking into lights, but I do use them periodically when I want to make sure lighting is actually correct.
- Logitech MX Keys Advanced Wireless Illuminated Keyboard for Mac – this keyboard works well for me, and has a USB-C port so I can use a single powered USB-C cable from the Hexcal to charge my keyboard, my mouse, my phone, and my headphones.
- Logitech MX Master 3S Wireless Mouse – I've been using variations of this mouse for a long time; I specifically bought this version a year or two ago to standardize all charging ports on USB-C.
- Laptop stand – I'm not actually sure where I got this laptop stand from, it might have been Etsy. I found it relatively hard to find stands that support three laptops rather than just two. Before finding this one, I used this two-laptop stand, which is fine.
- Laptops – these are my personal and work MacBooks.

Here's a slightly closer look at the left side of the desk. At this point, I really have nothing left that I'm upset about with my setup, and I can't imagine changing this again in the next few years.

As a bonus, my office has a handful of pieces of "professional art" that represent things I am proud of. From left to right, it's the cover of An Elegant Puzzle, a map of San Francisco drawn exclusively from Uber trip data on the night of Halloween 2014, and then the cover of The Engineering Executive's Primer. It's probably a bit vain, but I like to remember some of the accomplishments.
Today's my last day at Carta, where I got the chance to serve as their CTO for the past two years. I've learned so much working there, and I wanted to end my chapter there by collecting my thoughts on what I learned. (I am heading somewhere, and will share news in a week or two after firming up the communication plan with my new team there.)

The most important things I learned at Carta were:

- Working in the details – if you took a critical lens towards my historical leadership style, I think the biggest issue you'd point at is my being too comfortable operating at a high level of abstraction. Utilizing the expertise of others to fill in your gaps is a valuable skill, but–like any single approach–it's limiting when utilized too frequently. One of the strengths of Carta's "house leadership style" is expecting leaders to go deep into the details to get informed and push pace. What I practiced there turned into the pieces on strategy testing and developing domain expertise.
- Refining my approach to engineering strategy – over the past 18 months, I've written a book on engineering strategy (posts are all in #eng-strategy-book), with initial chapters becoming available for early release with O'Reilly next month. Fingers crossed, the book will be released in approximately October. Coming into Carta, I already had much of my core thesis about how to do engineering strategy, but Carta gave me a number of complex projects to practice on, and excellent people to practice with: thank you to Dan, Shawna and Vogl in particular! More on this project in the next few weeks.
- Extract the kernel – everywhere I've ever worked, teams have struggled to understand executives. In every case, the executives could be clearer, but it's not particularly interesting to frame these problems as something the executives need to fix. Sure, it's true that they could communicate better, but that framing makes you powerless, when you actually have a great deal of power to understand confusing communication. After all, even good communicators communicate poorly sometimes.
- Meaningfully adopting LLMs – a year ago I wrote up notes on adopting LLMs in your products, based on what we'd learned so far. Since then, we've learned a lot more, and LLMs themselves have significantly improved. Carta has been using LLMs in real, business-impacting workflows for over a year. That's continuing to expand into solving more complex internal workflows, and even more interestingly into creating net-new product capabilities that ought to roll out more widely in the next few months (currently released to small beta groups). This is the first major technology transition that I've experienced in a senior leadership role (I was earlier in my career when mobile internet transitioned from novelty to commodity). The immense pressure to adopt faster, combined with the immense uncertainty about whether it's a meaningful change or a brief blip, was a lot of fun, and was the inspiration for this strategy document around LLM adoption.
- Multi-dimensional tradeoffs – a phrase that Henry Ward uses frequently is that "everyone's right, just at a different altitude." That idea resonates with me, and meshes well with the ideas of multi-dimensional tradeoffs and layers of context that I find improve decision making for folks in roles that require making numerous, complex decisions. Working at Carta, these ideas formalized from something I intuited into something I could explain clearly.
- Navigators – I think our most successful engineering strategy at Carta was rolling out the Navigator program, which ensured our senior-most engineers had context and direct representation, rather than relying exclusively on indirect representation via engineering management. Carta's engineering managers are excellent, but there's always something lost as discussions extend across layers. The Navigator program probably isn't a perfect fit for particularly small companies, but I think any company with more than 100-150 engineers would benefit from something along these lines.
- How to create software quality – I've evolved my thinking about software quality quite a bit over time, but Carta was particularly helpful in distinguishing why some pieces of software are so hard to build despite having little-to-no scale from a data or concurrency perspective. These systems, which I label as "high essential complexity," deserve more credit for their complexity, even if they have little in the way of complexity from infrastructure scaling.
- Shaping eng org costs – a few years ago, I wrote about my mental model for managing infrastructure costs. At Carta, I got to refine my thinking about engineering salary costs, with most of those ideas getting incorporated into the Navigating Private Equity ownership strategy and the eng org seniority mix model. The three biggest levers are (1) "N-1 backfills", (2) requiring a business rationale for promotions into the senior-most levels, and (3) shifting hiring into cost-efficient hiring regions. None of these are the sort of inspiring topics that excite folks, but they are all essential to the long-term stability of your organization.
- Explaining engineering costs to boards/execs – similarly, I finally have a clear perspective on how to represent R&D investment to boards in the same language that they speak, which I wrote up here, and I know how to do it quickly without relying on any manually curated internal datasets.
- Lots of smaller stuff, like the no wrong doors policy for routing colleagues to appropriate channels, how to request headcount in a way that is convincing to executives, Act Two rationales for how people's motivations evolve over the course of long careers (and my own personal career mission to advance the industry), and why friction isn't velocity even though many folks act like it is.

I've also learned quite a bit about venture capital, fund administration, cap tables, non-social network products, operating a multi-business line company, and various operating models. Figuring out how to sanitize those learnings to share the interesting tidbits without leaking internal details is a bit too painful, so I'm omitting them for now. Maybe some will be shareable in four or five years after my context goes sufficiently stale.

As a closing thought, I just want to say how much I've appreciated the folks I've gotten to work with at Carta. From the executive team (Ali, April, Charly, Davis, Henry, Jeff, Nicole, Vrushali) to my directs (Adi, Ciera, Dan, Dave, Jasmine, Javier, Jayesh, Karen, Madhuri, Sam, Shawna) to the navigators (there's a bunch of y'all). The people truly are always the best part, and that was certainly true at Carta.
Back in 2018, I wrote lethain/systems as a domain-specific language for writing runnable systems models, and introduced it with this blog post modeling a hiring funnel. While it's far from a perfect system, I've gotten a lot of value out of it over the last seven years, because it allows me to maintain systems models in version control.

As I've been playing with writing Model Context Protocol (MCP) servers, one I've been thinking about frequently is a server to help write systems syntax, and I finally put that together in the lethain/systems-mcp repository. More detailed installation and usage instructions are in the GitHub repository, so I'll just share a couple of screenshots and comments here.

The first tool is load_systems_documentation, which loads a copy of lethain/systems/README.md and a file with example systems into the context window. The biggest challenge of properly writing DSLs with an LLM is providing enough in-context learning (ICL) examples, and providing tools specifically designed to supply that context strikes me as a very interesting idea. Eventually I imagine there will be generalized tools for this, e.g. a search index of the best ICL examples for a wide variety of DSLs. Until then, my guess is that this sort of tool is particularly valuable.

The second tool is run_systems_model, which takes the DSL (and an optional parameter for the number of rounds) and returns the result. I experimented with interface design here, initially trying to return a rendered chart of the results, but ultimately even multi-modal models are just much better at working with text than with images. This meant that I had the best results returning JSON of the results and then having the LLM build a tool for interacting with the results.

Altogether, a fun little experiment, and another confirmation in my mind that the most interesting part of designing MCPs today is deciding where to introduce and eliminate complexity from the LLM. Introduce too little and the tool lacks power; eliminate too little and the combination rarely works.
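For a sense of what such a server might look like, here is a minimal sketch using the FastMCP helper from the MCP Python SDK. The import path, tool signatures, file locations, and the run_model call are assumptions for illustration only; the real implementation lives in lethain/systems-mcp and may look quite different.

    # Hypothetical sketch of a systems-flavored MCP server.
    from pathlib import Path
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("systems")

    @mcp.tool()
    def load_systems_documentation() -> str:
        """Load the systems README and example models into the context window as ICL material."""
        docs = Path("vendor/systems/README.md").read_text()        # illustrative path
        examples = Path("vendor/systems/examples.txt").read_text()  # illustrative path
        return docs + "\n\n" + examples

    @mcp.tool()
    def run_systems_model(spec: str, rounds: int = 100) -> str:
        """Run a model written in the systems DSL and return the results as JSON."""
        # run_model is a stand-in for however the systems package is actually invoked.
        return run_model(spec, rounds).to_json()

    if __name__ == "__main__":
        mcp.run()

The design choice worth noticing is the one the post describes: the first tool exists purely to enrich the context window, while the second shifts the actual computation out of the LLM and into ordinary code.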
More in programming
Hold my hand, grow my skin Erica Western Geiger Counter Do you have any addictions? You may not register them as such, perhaps because they don’t lead to anything you consider harmful consequences. But you have them. In some ways, all your behavior is compulsive. What would the alternative be? A point is, if we have something that we can predict this video Free will comes from the “veil of computability”, things look random until you find the pattern. I was at a bar last night and this girl told me you can’t predict humans, and the exact example she used was that it’s not like y = mx + b Oh, if only she knew. The dreams of my childhood have come true, studying machine learning has shown me how I work. I tried to explain that instead of 2 parameters it’s 100 trillion parameters, and it’s the slightly different y = relu(w@x) + b a bunch of times, you have to put some nonlinearities in there cause linear systems can only approximate a small class of functions. But this explanation was not heard at a bar. She was so confident she was right, and like I don’t even know where to start. Reader of this blog, do you know? AI is coming and we are so unbelievably unprepared. What is this garbage and this garbage. It’s nerd shit and political propaganda. The amount of power over nature that the Silicon Valley death cult is stumbling into is horrifying, and these high priests don’t have a basic grasp of people. No humanities education (perhaps the programs were gutted on purpose). Are we ready for the hypnodrones? How the fuck is targeted advertising legal and culturally okay? This will not stop until they take our free will from us. There’s a fire that burns today Better Nukes don’t end humanity. Current path AI doesn’t end humanity. It just ends all the machines and hands the world over to the street people. Now I see how the dark ages happened. If all the humans died today, all the machines would shortly follow. If all the machines died today, humanity would keep on going. Pay attention to this milestone. To date, machines are not robust, and evolution may be efficient at robust search. If it is, we get dark ages. If it’s not and we find a shortcut, God only knows.
This is a re-post of the article How to Permanently Increase Your Sales by 50% or More in Only One Day by Steve Pavlina.

Of all the things you can do to increase your sales, one of the highest leverage activities is attempting to increase your products' registration rate. Increasing your registration rate from 1.0% to 1.5% means that you simply convince one more downloader out of every 200 to make the decision to buy. Yet that same tiny increase will literally increase your sales by a full 50%. If you're one of those developers who simply slapped the ubiquitous 30-day trial incentive on your shareware products without going any further than that, then I think a 50% increase in your registration rate is a very attainable goal you can achieve if you spend just one full day of concentrated effort on improving your product's ability to sell. My hope is that this article will get you off to a good start and get you thinking more creatively. And even if you fail, your result might be that you achieve only a 25% or a 10% increase. How much additional money would that represent to you over the next five years of sales?

What influence, if any, did the title of this article have on your decision to read it? If I had titled this article, "Registration Incentives," would you have been more or less likely to read it now? Note that the title expresses a specific and clear benefit to you. It tells you exactly what you can expect to gain by reading it. Effective registration incentives work the same way. They offer clear, specific benefits to the user if a purchase is made.

In order to improve your registration incentives, the first thing you need to do is to adopt some new beliefs that will change your perspective. I'm going to introduce you to what I call the "lies of success" in the shareware industry. These are statements that are not true at all, but if you accept them as true anyway, you'll achieve far better results than if you don't.

Rule 1: What you are selling is merely the difference between the shareware and the registered versions, not the registered version itself. Note that this is not a true statement, but if you accept it as true, you'll immediately begin to see the weaknesses in your registration incentives. If there are few additional benefits for buying the full version vs. using the shareware version, then you aren't offering the user strong enough incentives to make the full purchase.

Rule 2: The sole purpose of the shareware version is to close the sale. This is our second lie of success. Note the emphasis on the word "close." Your shareware version needs to act as a direct sales vehicle. It must be able to take the user all the way to the point of purchase, i.e. your online order form, ideally with nothing more than a few mouse clicks. Anything that detracts from achieving a quick sale is likely to hurt sales.

Rule 3: The customer's perspective is the only one that matters. Defy this rule at your peril. Customers don't care that you spent 2000 hours creating your product. Customers don't care that you deserve the money for your hard work. Customers don't care that you need to do certain things to prevent piracy. All that matters to them are their own personal wants and needs. Yes, these are lies of success. Some customers will care, but if you design your registration incentives assuming they only care about their own self-interests, their motivation to buy will be much stronger than if you merely appeal to their sense of honesty, loyalty, or honor.
Assume your customers are all asking, “What’s in it for me if I choose to buy? What will I get? How will this help me?” I don’t care if you’re selling to Fortune 500 companies. At some point there will be an individual responsible for causing the purchase to happen, and that individual is going to consider how the purchase will affect him/her personally: “Will this purchase get me fired? Will it make me look good in front of my peers? Will this make my job easier or harder?” Many shareware developers get caught in the trap of discriminating between honest and dishonest users, believing that honest users will register and dishonest ones won’t. This line of thinking will ultimately get you nowhere, and it violates the third lie of success. When you make a purchase decision, how often do you use honesty as the deciding factor? Do you ever say, “I will buy this because I’m honest?” Or do you consider other more selfish factors first, such as how it will make you feel to purchase the software? The truth is that every user believes s/he is honest, so no user applies the honesty criterion when making a purchase decision. Thinking of your users in terms of honest ones vs. dishonest ones is a complete waste of time because that’s not how users primarily view themselves. Rule 4: Customers buy on emotion and justify with fact. If you’re honest with yourself, you’ll see that this is how you make most purchase decisions. Remember the last time you bought a computer. Is it fair to say that you first became emotionally attached to the idea of owning a new machine? For me, it’s the feeling of working faster, owning the latest technology, and being more productive that motivates me to go computer shopping. Once I’ve become emotionally committed, the justifications follow: “It’s been two years since I’ve upgraded, it will pay for itself with the productivity boost I gain, I can easily afford it, I’ve worked hard and I deserve a new machine, etc.” You use facts to justify the purchase. Once you understand how purchase decisions are made, you can see that your shareware products need to first get the user emotionally invested in the purchase, and then you give them all the facts they need to justify it. Now that we’ve gotten these four lies of success out of the way, let’s see how we might apply them to create some compelling registration incentives. Let’s start with Rule 1. What incentives can be spawned from this rule? The common 30-day trial is one obvious derivative. If you are only selling the difference between the shareware and registered versions, then a 30-day trial implies that you are selling unlimited future days of usage of the program after the trial period expires. This is a powerful incentive, and it’s been proven effective for products that users will continue to use month after month. 30-day trials are easy for users to understand, and they’re also easy to implement. You could also experiment with other time periods such as 10 days, 14 days, or 90 days. The only way of truly knowing which will work best for your products is to experiment. But let’s see if we can move a bit beyond the basic 30-day trial here by mixing in a little of Rule 3. How would the customer perceive a 30-day trial? In most cases 30 days is plenty of time to evaluate a product. But in what situations would a 30-day trial have a negative effect? A good example is when the user downloads, installs, and briefly checks out a product s/he may not have time to evaluate right away. 
By the time the user gets around to fully evaluating it, the shareware version has already expired, and a sale may be lost as a result. To get around this limitation, many shareware developers have started offering 30 days of actual program usage instead of 30 consecutive days. This allows the user plenty of time to try out the program at his/her convenience. Another possibility would be to limit the number of times the program can be run. The basic idea is that you are giving away limited usage and selling unlimited usage of the program. This incentive definitely works if your product is one that will be used frequently over a long period of time (much longer than the trial period). The flip side of usage limitation is to offer an additional bonus for buying within a certain period of time. For instance, in my game Dweep, I offer an extra 5 free bonus levels to everyone who buys within the first 10 days. In truth I give the bonus levels to everyone who buys, but the incentive is real from the customer’s point of view. Remember Rule 3 - it doesn’t matter what happens on my end; it only matters what the customer perceives. Any customer that buys after the first 10 days will be delighted anyway to receive a bonus they thought they missed. So if your product has no time-based incentives at all, this is the first place to start. When would you pay your bills if they were never due, and no interest was charged on late payments? Use time pressure to your advantage, either by disabling features in the shareware version after a certain time or by offering additional bonuses for buying sooner rather than later. If nothing else and if it’s legal in your area, offer a free entry in a random monthly drawing for a small prize, such as one of your other products, for anyone who buys within the first X days. Another logical derivative of Rule 1 is the concept of feature limitation. On the crippling side, you can start with the registered version and begin disabling functionality to create the shareware version. Disabling printing in a shareware text editor is a common strategy. So is corrupting your program’s output with a simple watermark. For instance, your shareware editor could print every page with your logo in the background. Years ago the Association of Shareware Professionals had a strict policy against crippling, but that policy was abandoned, and crippling has been recognized as an effective registration incentive. It is certainly possible to apply feature limitation without having it perceived as crippling. This is especially easy for games, which commonly offer a limited number of playable levels in the shareware version with many more levels available only in the registered version. In this situation you offer the user a seemingly complete experience of your product in the shareware version, and you provide additional features on top of that for the registered version. Time-based incentives and feature-based incentives are perhaps the two most common strategies used by shareware developers for enticing users to buy. Which will work best for you? You will probably see the best results if you use both at the same time. Imagine you’re the end user for a moment. Would you be more likely to buy if you were promised additional features and given a deadline to make the decision? I’ve seen several developers who were using only one of these two strategies increase their registration rates dramatically by applying the second strategy on top of the first. 
If you only use time-based limitations, how could you apply feature limitation as well? Giving the user more reasons to buy will translate to more sales per download.

Once you have both time-based and feature-based incentives to buy, the next step is to address the user's perceived risk by applying a risk-reversal strategy. Fortunately, the shareware model already reduces the perceived risk of purchasing significantly, since the user is able to try before buying. But let's go a little further, keeping Rule 3 in mind. What else might be a perceived risk to the user? What if the user reaches the end of the trial period and still isn't certain the product will do what s/he needs? What if the additional features in the registered version don't work as the user expects? What can we do to make the decision to purchase safer for the user?

One approach is to offer a money-back guarantee. I've been offering a 60-day unconditional money-back guarantee on all my products since January 2000. If someone asks for their money back for any reason, I give them a full refund right away. So what is my return rate? Well, it's about 8%. Just kidding! Would it surprise you to learn that my return rate at the time of this writing is less than 0.2%? Could you handle two returns out of every 1000 sales? My best estimate is that this one technique increased my sales by 5-10%, and it only took a few minutes to implement. When I suggest this strategy to other shareware developers, the usual reaction is fear. "But everyone would rip me off," is a common response. I suggest trying it for yourself on an experimental basis; a few brave souls have already tried it and are now offering money-back guarantees prominently. Try putting it up on your web site for a while just to convince yourself it works. You can take it down at any time. After a few months, if you're happy with the results, add the guarantee to your shareware products as well. I haven't heard of one bad outcome yet from those who've tried it.

If you use feature limitation in your shareware products, another important component of risk reversal is to show the user exactly what s/he will get in the full version. In Dweep I give away the first five levels in the demo version, and purchasing the full version gets you 147 more levels. When I thought about this from the customer's perspective (Rule 3), I realized that a perceived risk is that s/he doesn't know if the registered version levels will be as fun as the demo levels. So I released a new demo where you can see every level but only play the first five. This lets the customer see all the fun that awaits them. So if you have a feature-limited product, show the customer how the feature will work. For instance, if your shareware version has printing disabled, the customer could be worried that the full version's print capability won't work with his/her printer or that the output quality will be poor. A better strategy is to allow printing, but to watermark the output. This way the customer can still test and verify the feature, and it doesn't take much imagination to realize what the output will look like without the watermark.

Our next step is to consider Rule 2 and include the ability to close the sale. It is imperative that you include an "instant gratification" button in your shareware products, so the customer can click to launch their default web browser and go directly to your online order form. If you already have a "buy now" button in your products, go a step further.
A small group of us have been finding that the more liberally these buttons are used, the better. If you only have one or two of these buttons in your shareware program, you should increase the count by at least an order of magnitude. The current Dweep demo now has over 100 of these buttons scattered throughout the menus and dialogs. This makes it extremely easy for the customer to buy, since s/he never has to hunt around for the ordering link. What should you label these buttons? “Buy now” or “Register now” are popular, so feel free to use one of those. I took a slightly different approach by trying to think like a customer (Rule 3 again). As a customer the word “buy” has a slightly negative association for me. It makes me think of parting with my cash, and it brings up feelings of sacrifice and pressure. The words “buy now” imply that I have to give away something. So instead, I use the words, “Get now.” As a customer I feel much better about getting something than buying something, since “getting” brings up only positive associations. This is the psychology I use, but at present, I don’t know of any hard data showing which is better. Unless you have a strong preference, trust your intuition. Make it as easy as possible for the willing customer to buy. The more methods of payment you accept, the better your sales will be. Allow the customer to click a button to print an order form directly from your program and mail it with a check or money order. On your web order form, include a link to a printable text order form for those who are afraid to use their credit cards online. If you only accept two or three major credit cards, sign up with a registration service to handle orders for those you don’t accept. So far we’ve given the customer some good incentives to buy, minimized perceived risk, and made it easy to make the purchase. But we haven’t yet gotten the customer emotionally invested in making the purchase decision. That’s where Rule 4 comes in. First, we must recognize the difference between benefits and features. We need to sell the sizzle, not the steak. Features describe your product, while benefits describe what the user will get by using your product. For instance, a personal information manager (PIM) program may have features such as daily, weekly, and monthly views; task and event timers; and a contact database. However, the benefits of the program might be that it helps the user be more organized, earn more money, and enjoy more free time. For a game, the main benefit might be fun. For a nature screensaver, it could be relaxation, beauty appreciation, or peace. Features are logical; benefits are emotional. Logical features are an important part of the sale, but only after we’ve engaged the customer’s emotions. Many products do a fair job of getting the customer emotionally invested during the trial period. If you have an addictive program or one that’s fun to use, such as a game, you may have an easy time getting the customer emotionally attached to using it because the experience is already emotional in nature. But whatever your product is, you can increase your sales by clearly illustrating the benefits of making the purchase. A good place to do this is in your nag screens. I use nag screens both before and after the program runs to remind the user of the benefits of buying the full version. At the very least, include a nag screen when the customer exits the program, so the last thing s/he sees will be a reminder of the product’s benefits. 
Take this opportunity to sell the user on the product. Don’t expect features like “customizable colors” to motivate anyone to buy. Paint a picture of what benefits the user will obtain with the full version. Will I save time? Will I have more fun? Will I live longer, save money, or feel better? The simple change from feature-oriented selling to benefit-oriented selling can easily double or triple your sales. Be sure to use this approach on your web site as well if you don’t already. Developers who’ve recently made the switch have been reporting some amazing results. If you’re drawing a blank when trying to come up with benefits for your products, the best thing you can do is to email some of your old customers and ask them why they bought your program. What did it do for them? I’ve done this and was amazed at the answers I got back. People were buying my games for reasons I’d never anticipated, and that told me which benefits I needed to emphasize in my sales pitch. The next key is to make your offer irresistible to potential customers. Find ways to offer the customer so much value that it would be harder to say no than to say yes. Take a look at your shareware product as if you were a potential customer who’d never seen it before. Being totally honest with yourself, would you buy this program if someone else had written it? If not, don’t stop here. As a potential customer, what additional benefits or features would put you over the top and convince you to buy? More is always better than less. In the original version of Dweep, I offered ten levels in the demo and thirty in the registered version. Now I offer only five demo levels and 152 in the full version, plus a built-in level editor. Originally, I offered the player twice the value of the demo; now I’m offering over thirty times the value. I also offer free hints and solutions to every level; the benefit here is that it minimizes player frustration. As I keep adding bonuses for purchasing, the offer becomes harder and harder to resist. What clever bonuses can you throw in for registering? Take the time to watch an infomercial. Notice that there is always at least one “FREE” bonus thrown in. Consider offering a few extra filters for an image editor, ten extra images for a screensaver, or extra levels for a game. What else might appeal to your customers? Be creative. Your bonus doesn’t even have to be software-based. Offer a free report about building site traffic with your HTML editor, include an essay on effective time management with your scheduling program, or throw in a small business success guide with your billing program. If you make such programs, you shouldn’t have too much trouble coming up with a few pages of text that would benefit your customers. Keep working at it until your offer looks irresistible even to you. If all the bonuses you offer can be delivered electronically, how many can you afford to include? If each one only gains one more customer in a thousand (0.1%), would it be worth the effort over the lifetime of your sales? So how do you know if your registration incentives are strong enough? And how do you know if your product is over-crippled? Where do you draw the line? These are tough issues, but there is a good way to handle them if your product is likely to be used over a long period of time, particularly if it’s used on a daily basis. Simply make your program gradually increase its registration incentives over time.
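As a rough illustration of the delay-timer variant described next, here is a minimal, hypothetical Python sketch: an unregistered copy counts its runs in a small file and makes the nag screen linger a little longer each time. The file name and the delay formula are invented for this example; real products would persist the counter less obviously.

```python
# Sketch of gradually increasing registration incentives: a nag-screen
# delay that grows with every unregistered run.
import json
import time
from pathlib import Path

COUNTER_FILE = Path.home() / ".myshareware_runs.json"  # hypothetical location


def bump_run_count() -> int:
    """Read, increment, and persist the number of unregistered runs."""
    count = 0
    if COUNTER_FILE.exists():
        count = json.loads(COUNTER_FILE.read_text()).get("runs", 0)
    count += 1
    COUNTER_FILE.write_text(json.dumps({"runs": count}))
    return count


def show_nag_screen(runs: int) -> None:
    """Remind the user to register; the wait grows by a second every five runs."""
    delay = min(runs // 5, 30)  # capped at 30 seconds
    print(f"Unregistered copy, run #{runs}. Please register to remove this wait.")
    time.sleep(delay)


if __name__ == "__main__":
    show_nag_screen(bump_run_count())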
One easy way to do this is with a delay timer on your nag screens that increases each time the program is run. Another approach is to disable certain features at set intervals. You begin by disabling non-critical features and gradually move up to disabling key functionality. The program becomes harder and harder to continue using for free, so the benefits of registering become more and more compelling. Instead of having your program completely disable itself after your trial period, you gradually degrade its usability with additional usage. This approach can be superior to a strict 30-day trial, since it allows your program to still be used for a while, but after prolonged usage it becomes effectively unusable. However, you don’t simply shock the user by taking away all the benefits s/he has become accustomed to on a particular day. Instead, you begin with a gentle reminder that becomes harder and harder to ignore. There may be times when your 30-day trial shuts off at an inconvenient time for the user, and you may lose a sale as a result. For instance, the user may not have the money at the time, or s/he may be busy at the trial’s end and forget to register. In that case s/he may quickly replace what was lost with a competitor’s trial version. The gradual degradation approach allows the user to continue using your product, but with increasing difficulty over time. Eventually, there is a breaking point where the user either decides to buy or to stop using the program completely, but this can be done within a window of time at the user’s convenience. Hopefully this article has gotten you thinking creatively about all the overlooked ways you can entice people to buy your shareware products. The most important thing you can do is to begin seeing your products through your customers’ eyes. What additional motivation would convince you to buy? What would represent an irresistible offer to you? There is no limit to how many incentives you can add. Don’t stop at just one or two; instead, give the customer a half dozen or more reasons to buy, and you’ll see your registration rate soar. Is it worth spending a day to do this? I think so.
I'm a big (neo)vim buff. My config is over 1500 lines and I regularly write new scripts. I recently ported my neovim config to a new laptop. Before then, I was using VSCode to write, and when I switched back I immediately saw a big gain in productivity. People often pooh-pooh vim (and other assistive writing technologies) by saying that writing code isn't the bottleneck in software development. Reading, understanding, and thinking through code is! Now I don't know how true this actually is in practice, because empirical studies of time spent coding are all over the place. Most of them, like this study, track time spent in the editor but don't distinguish between time spent reading code and time spent writing code. The only one I found that separates them was this study. It finds that developers spend only 5% of their time editing. It also finds they spend 14% of their time moving or resizing editor windows, so I don't know how clean their data is. But I have a bigger problem with "writing is not the bottleneck": when I think of a bottleneck, I imagine that no amount of improvement will lead to productivity gains. Like if a program is bottlenecked on the network, it isn't going to get noticeably faster with 100x more RAM or compute. But being able to type code 100x faster, even without corresponding improvements to reading and imagining code, would be huge. We'll assume the average developer writes at 80 words per minute, at five characters a word, for 400 characters a minute. What could we do if we instead wrote at 8,000 words/40k characters a minute?

Writing fast

Boilerplate is trivial

Why do people like type inference? Because writing all of the types manually is annoying. Why don't people like boilerplate? Because it's annoying to write every damn time. Programmers like features that help them write less! That's not a problem if you can write all of the boilerplate in 0.1 seconds. You still have the problem of reading boilerplate-heavy code, but you can use the remaining 0.9 seconds to churn out an extension that parses the file and presents the boilerplate in a more legible fashion.

We can write more tooling

This is something I've noticed with LLMs: when I can churn out crappy code as a free action, I use that to write lots of tools that assist me in writing good code. Even if I'm bottlenecked on a large program, I can still quickly write a script that helps me with something. Most of these aren't things I would have written because they'd take too long to write! Again, not the best comparison, because LLMs also shortcut learning the relevant APIs, so they also optimize the "understanding code" part. Then again, if I could type real fast I could more quickly whip up experiments on new APIs to learn them faster.

We can do practices that slow us down in the short-term

Something like test-driven development significantly slows down how fast you write production code, because you have to spend a lot more time writing test code. Pair programming trades speed of writing code for speed of understanding code. A two-order-of-magnitude writing speedup makes both of them effectively free. Or, if you're not an eXtreme Programming fan, you can more easily follow The Power of Ten Rules and blanket your code with contracts and assertions.

We could do more speculative editing

This is probably the biggest difference in how we'd work if we could write 100x faster: it'd be much easier to try changes to the code to see if they're good ideas in the first place.
How often have I tried optimizing something, only to find out it didn't make a difference? How often have I done a refactoring only to end up with lower-quality code overall? Too often. Over time it makes me prefer to try things that I know will work, and only "speculatively edit" when I think it will be a fast change. If I could code 100x faster it would absolutely lead to me trying more speculative edits. This is especially big because I believe that lots of speculative edits are high-risk, high-reward: given 50 things we could do to the code, 49 won't make a difference and one will be a major improvement. If I only have time to try five things, I have a 10% chance of hitting the jackpot. If I can try 500 things I will get that reward every single time.

Processes are built off constraints

These are just a few ideas I came up with; there are probably others. Most of them, I suspect, will share the same property: they change the process of writing code to leverage the speedup. I can totally believe that a large speedup would not remove a bottleneck in the processes we currently use to write code. But that's because those processes were developed to work within our existing constraints. Remove a constraint and new processes become possible. The way I see it, if our current process produces 1 Utils of Software / day, a 100x writing speedup might lead to only 1.5 UoS/day. But there are other processes that produce only 0.5 UoS/day because they are bottlenecked on writing speed. A 100x speedup would lead to 10 UoS/day. The problem with all of this is that a 100x speedup isn't realistic, and it's not obvious whether a 2x improvement would lead to better processes. Then again, one of the first custom vim function scripts I wrote was an aid to writing unit tests in a particular codebase, and it led to me writing a lot more tests. So maybe even a 2x speedup would speed things up, too.

Patreon Stuff

I wrote a couple of TLA+ specs to show how to model fork-join algorithms. I'm planning on eventually writing them up for my blog/learntla but it'll be a while, so if you want to see them in the meantime I put them up on Patreon.
Here’s Jony Ive in his Stripe interview: What we make stands testament to who we are. What we make describes our values. It describes our preoccupations. It describes, beautifully succinctly, our preoccupations. I’d never really noticed the connection between these two words: occupation and preoccupation. What comes before occupation? Pre-occupation. What comes before what you do for a living? What you think about. What you’re preoccupied with. What you think about will drive you towards what you work on. So when you’re asking yourself, “What comes next? What should I work on?”, another way of asking that question is, “What occupies my thinking right now?” And if what you’re occupied with doesn’t align with what you’re preoccupied with, perhaps it’s time for a change.
There's no country on earth that does hype better than America. It's one of the most appealing aspects of being here. People are genuinely excited about the future and never stop searching for better ways to work, live, entertain, and profit. There's a unique critical mass in the US accelerating and celebrating tomorrow. The contrast to Europe couldn't be greater. Most Europeans are allergic to anything that even smells like a commercial promise of a better tomorrow. "Hype" is universally used as a term to ridicule anyone who dares to be excited about something new, something different. Only a fool would believe that real progress is possible! This is cultural bedrock. The fault lines have been settling for generations. It'll take an earthquake to move them. You see this in AI, you saw it with the Internet. Europeans are just as smart, just as inventive as their American brethren, but they don't do hype, so they're rarely the ones able to sell the sizzle that public opinion requires to shift its vision for tomorrow. To say I have a complicated relationship with venture capital is putting it mildly. I've spent a career proving the counter-narrative: that you can build and bootstrap an incredible business without investor money, still leave a dent in the universe, and enjoy the spoils of capitalism. And yet... I must admit that the excesses of venture capital are integral to this uniquely American advantage on hype. The lavish overspending during the dot-com boom led directly to a spectacular bust, but it also built the foundation of the internet we all enjoy today. Pets.com and Webvan flamed out such that Amazon and Shopify could transform ecommerce out of the ashes. We're in the thick of peak hype on AI right now. Fantastical sums are chasing AGI along with every dumb derivative mirage along the way. The most outrageous claims are being put forth on the daily. It's easy to look at that spectacle with European eyes and roll them. Some of it is pretty cringe! But I think that would be a mistake. You don't have to throw away your critical reasoning to accept that in the face of unknown potential, optimism beats pessimism. We all have to believe in something, and you're much better off believing that things can get better than not. Americans fundamentally believe this. They believe the hype, so they make it come to fruition. Not every time, not all of them, but more of them, more of the time than any other country in the world. That really is exceptional.