More from Paolo Amoroso's Journal
It has been a year since I set up my System76 Merkaat with Linux Mint. In July of 2024 I migrated from ChromeOS, and the Merkaat has been my daily driver on the desktop ever since. A year later I have nothing major to report, which is the point. Despite the occasional unplanned reinstallation, I have been enjoying the stability of Linux and just using the PC. This stability finally enabled me to burn bridges with mainstream operating systems and fully embrace Linux and open systems. I'm ready to handle the worst and get back to work. Just a few years ago the frustration of troubleshooting a broken system would have made me seriously consider switching to a proprietary solution. But a year of regular use, with an ordinary mix of quiet moments and glitches, gave me the confidence to stop worrying and learn to love Linux.

#linux | Discuss: https://remark.as/p/journal.paoloamoroso.com/my-first-year-since-coming-back-to-linux
DandeGUI now does graphics, and this is what it looks like:

[Screenshot: some text and graphics output windows created with DandeGUI on Medley Interlisp.]

In addition to the square root table text output demo, I created the other graphics windows with the newly implemented functionality. For example, this code draws the random circles of the top window:

```lisp
(DEFUN RANDOM-CIRCLES (&KEY (N 200) (MAX-R 50) (WIDTH 640) (HEIGHT 480))
  (LET ((RANGE-X (- WIDTH (* 2 MAX-R)))
        (RANGE-Y (- HEIGHT (* 2 MAX-R)))
        (SHADES (LIST IL:BLACKSHADE IL:GRAYSHADE (RANDOM 65536))))
    (DANDEGUI:WITH-GRAPHICS-WINDOW (STREAM :TITLE "Random Circles")
      (DOTIMES (I N)
        (DECLARE (IGNORE I))
        (IL:FILLCIRCLE (+ MAX-R (RANDOM RANGE-X))
                       (+ MAX-R (RANDOM RANGE-Y))
                       (RANDOM MAX-R)
                       (ELT SHADES (RANDOM 3))
                       STREAM)))))
```

GUI:WITH-GRAPHICS-WINDOW, GUI:OPEN-GRAPHICS-STREAM, and GUI:WITH-GRAPHICS-STREAM are the main additions. These functions and macros are the graphics equivalents of what GUI:WITH-OUTPUT-TO-WINDOW, GUI:OPEN-WINDOW-STREAM, and GUI:WITH-WINDOW-STREAM, respectively, do for text. The difference is that the text facilities send output to TEXTSTREAM streams, whereas the graphics facilities send output to IMAGESTREAM, a type of device-independent graphics stream.

Under the hood DandeGUI text windows are customized TEdit windows with an associated TEXTSTREAM. TEdit is the rich text editor of Medley Interlisp. Similarly, the graphics windows of DandeGUI run the Sketch line drawing editor under the hood. Sketch windows have an IMAGESTREAM which Interlisp graphics primitives like IL:DRAWLINE and IL:DRAWPOINT accept as an output destination. DandeGUI creates and manages Sketch windows with the type of stream the graphics primitives require. In other words, IMAGESTREAM is to Sketch what TEXTSTREAM is to TEdit.

The benefits of programmatically using Sketch for graphics are the same as TEdit windows for text: automatic window repainting, scrolling, and resizing. The downside is overhead: scrolling more than a few thousand graphics elements is slow, and adding even more may crash the system. However, this is an acceptable tradeoff.

The new graphics functions and macros work similarly to the text ones, with a few differences. First, DandeGUI now depends on the SKETCH and SKETCH-STREAM library modules, which it automatically loads. Second, since Sketch has no notion of a read-only drawing area, GUI:OPEN-GRAPHICS-STREAM achieves the same effect by other means:

```lisp
(DEFUN OPEN-GRAPHICS-STREAM (&KEY (TITLE "Untitled"))
  "Open a new window and return the associated IMAGESTREAM to send graphics
output to. Sets the window title to TITLE if supplied."
  (LET* ((STREAM (IL:OPENIMAGESTREAM '|Untitled| 'IL:SKETCH
                                     `(IL:FONTS ,*DEFAULT-FONT*)))
         (WINDOW (IL:\\SKSTRM.WINDOW.FROM.STREAM STREAM)))
    (IL:WINDOWPROP WINDOW 'IL:TITLE TITLE)
    ;; Disable left and middle-click title bar menu
    (IL:WINDOWPROP WINDOW 'IL:BUTTONEVENTFN NIL)
    ;; Disable sketch editing via right-click actions
    (IL:WINDOWPROP WINDOW 'IL:RIGHTBUTTONFN NIL)
    ;; Disable querying the user whether to save changes
    (IL:WINDOWPROP WINDOW 'IL:DONTQUERYCHANGES T)
    STREAM))
```

Only mouse gestures and the commands of the middle-click title bar menu and the right-click menu change the drawing area interactively. To disable these actions GUI:OPEN-GRAPHICS-STREAM removes their menu handlers by setting the window properties IL:BUTTONEVENTFN and IL:RIGHTBUTTONFN to NIL. This way only programmatic output can change the drawing area. The function also sets IL:DONTQUERYCHANGES to T to prevent querying the user whether to save changes at window close.
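Since only programmatic output changes the drawing area, a stream returned by GUI:OPEN-GRAPHICS-STREAM can be kept around and drawn to incrementally from different parts of a program. Here is a minimal sketch of that pattern; it uses only calls shown in this post, but the window title, coordinates, and shades are made up for illustration:

```lisp
;; Hedged sketch: open a graphics window once, keep the stream, and draw
;; to it in separate steps. Coordinates and shades are illustrative.
(DEFVAR *CANVAS* (GUI:OPEN-GRAPHICS-STREAM :TITLE "Scratch Canvas"))

;; Later, possibly from other parts of the program:
(IL:FILLCIRCLE 100 100 40 IL:GRAYSHADE *CANVAS*)
(IL:FILLCIRCLE 180 120 25 IL:BLACKSHADE *CANVAS*)
```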
As for saving: by design, output to DandeGUI windows is not permanent, so saving isn't necessary. GUI:WITH-GRAPHICS-STREAM and GUI:WITH-GRAPHICS-WINDOW are straightforward:

```lisp
(DEFMACRO WITH-GRAPHICS-STREAM ((VAR STREAM) &BODY BODY)
  "Perform the operations in BODY with VAR bound to the graphics window STREAM.
Evaluates the forms in BODY in a context in which VAR is bound to STREAM,
which must already exist, then returns the value of the last form of BODY."
  `(LET ((,VAR ,STREAM))
     ,@BODY))

(DEFMACRO WITH-GRAPHICS-WINDOW ((VAR &KEY TITLE) &BODY BODY)
  "Perform the operations in BODY with VAR bound to a new graphics window stream.
Creates a new window titled TITLE if supplied, binds VAR to the IMAGESTREAM
associated with the window, and executes BODY in this context. Returns the
value of the last form of BODY."
  `(WITH-GRAPHICS-STREAM (,VAR (OPEN-GRAPHICS-STREAM :TITLE (OR ,TITLE "Untitled")))
     ,@BODY))
```

Unlike GUI:WITH-TEXT-STREAM and GUI:WITH-TEXT-WINDOW, which need to call GUI::WITH-WRITE-ENABLED to re-establish a read-only environment after every output operation, GUI:OPEN-GRAPHICS-STREAM needs to do this only once, at window creation.

GUI:CLEAR-WINDOW, GUI:WINDOW-TITLE, and GUI:PRINT-MESSAGE now work with graphics streams in addition to text streams. For IMAGESTREAM arguments GUI:PRINT-MESSAGE prints to the system prompt window, as Sketch stream windows have no prompt area. The random circles and fractal triangles graphics demos round out the latest additions.

#DandeGUI #CommonLisp #Interlisp #Lisp | Discuss: https://remark.as/p/journal.paoloamoroso.com/adding-graphics-support-to-dandegui
Printing rich text to windows is one of the planned features of DandeGUI, the GUI library for Medley Interlisp I'm developing in Common Lisp. I finally got around to this and implemented the GUI:WITH-TEXT-STYLE macro, which controls the attributes of text printed to a window, such as the font family and face.

GUI:WITH-TEXT-STYLE establishes a context in which text printed to the stream associated with a TEdit window is rendered in the style specified by the arguments. The call to GUI:WITH-TEXT-STYLE here extends the square root table example by printing the heading in a 12-point bold sans serif font:

```lisp
(gui:with-output-to-window (stream :title "Table of square roots")
  (gui:with-text-style (stream :family :sans :size 12 :face :bold)
    (format stream "~&Number~40TSquare Root~2%"))
  (loop for n from 1 to 30
        do (format stream "~&~4D~40T~8,4F~%" n (sqrt n))))
```

The code produces this window, in which the styled column headings stand out:

[Screenshot: Medley Interlisp window of a square root table generated by the DandeGUI GUI library.]

The :FAMILY, :SIZE, and :FACE arguments determine the corresponding text attributes. :FAMILY may be a generic family such as :SERIF for an unspecified serif font, :SANS for a sans serif font, or :FIX for a fixed width font, or a keyword denoting a specific family like :TIMESROMAN.

At the heart of GUI:WITH-TEXT-STYLE is a pair of calls to the Interlisp function PRINTOUT that wrap the macro body, the first for setting the font of the stream to the specified style and the other for restoring the default:

```lisp
(DEFMACRO WITH-TEXT-STYLE ((STREAM &KEY FAMILY SIZE FACE) &BODY BODY)
  (ONCE-ONLY (STREAM)
    `(UNWIND-PROTECT
         (PROGN
           (IL:PRINTOUT ,STREAM IL:.FONT (TEXT-STYLE-TO-FD ,FAMILY ,SIZE ,FACE))
           ,@BODY)
       (IL:PRINTOUT ,STREAM IL:.FONT *DEFAULT-FONT*))))
```

PRINTOUT is an Interlisp function for formatted output similar to Common Lisp's FORMAT but with additional font control via the .FONT directive. The symbols of PRINTOUT, i.e. its directives and arguments, are in the Interlisp package. In turn GUI:WITH-TEXT-STYLE calls GUI::TEXT-STYLE-TO-FD, an internal DandeGUI function which passes to .FONT a font descriptor matching the required text attributes. GUI::TEXT-STYLE-TO-FD calls IL:FONTCOPY to build a descriptor that merges the specified attributes with any unspecified ones copied from the default font. The font descriptor is an Interlisp data structure that represents a font in the Medley environment. A short usage sketch follows at the end of this post.

#DandeGUI #CommonLisp #Interlisp #Lisp | Discuss: https://remark.as/p/journal.paoloamoroso.com/changing-text-style-for-dandegui-window-output
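Here is the usage sketch mentioned above. Because GUI:WITH-TEXT-STYLE restores the default font on exit, calls can simply be placed in sequence to mix styles in one window; the sketch uses only the documented arguments, with a made-up title and output text:

```lisp
;; Sketch: sequential styled output. Each WITH-TEXT-STYLE restores the
;; default font when it exits, so the styles don't leak into each other.
(gui:with-output-to-window (stream :title "Styled output")
  (gui:with-text-style (stream :family :serif :size 14 :face :bold)
    (format stream "~&A bold serif heading~%"))
  (gui:with-text-style (stream :family :fix :size 10)
    (format stream "~&Fixed-width body text~%")))
```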
I continued working on DandeGUI, a GUI library for Medley Interlisp I'm writing in Common Lisp. I added two new short public functions, GUI:CLEAR-WINDOW and GUI:PRINT-MESSAGE, and fixed a bug in some internal code.

GUI:CLEAR-WINDOW deletes the text of the window associated with the Interlisp TEXTSTREAM passed as the argument:

```lisp
(DEFUN CLEAR-WINDOW (STREAM)
  "Delete all the text of the window associated with STREAM. Returns STREAM."
  (WITH-WRITE-ENABLED (STR STREAM)
    (IL:TEDIT.DELETE STR 1 (IL:TEDIT.NCHARS STR)))
  STREAM)
```

It's little more than a call to the TEdit API function IL:TEDIT.DELETE for deleting text in the editor buffer, wrapped in the internal macro GUI::WITH-WRITE-ENABLED that establishes a context for write access to a window.

I also wrote GUI:PRINT-MESSAGE. This function prints a message to the prompt area of the window associated with the TEXTSTREAM passed as an argument, optionally clearing the area prior to printing. The prompt area is a one-line Interlisp prompt window attached above the title bar of the TEdit window where the editor displays errors and status messages. A usage sketch combining both functions follows at the end of this post.

```lisp
(DEFUN PRINT-MESSAGE (STREAM MESSAGE &OPTIONAL DONT-CLEAR-P)
  "Print MESSAGE to the prompt area of the window associated with STREAM.
Unless DONT-CLEAR-P is non-NIL the area is cleared first. Returns STREAM."
  (IL:TEDIT.PROMPTPRINT STREAM MESSAGE (NOT DONT-CLEAR-P))
  STREAM)
```

GUI:PRINT-MESSAGE just passes the appropriate arguments to the TEdit API function IL:TEDIT.PROMPTPRINT, which does the actual printing. The documentation of both functions is in the API reference on the project repo.

Testing DandeGUI revealed that text was sometimes inserted at the beginning of windows instead of appended to the end. To address the issue I changed GUI::WITH-WRITE-ENABLED to ensure the file pointer of the stream is set to the end of the file (i.e. -1) prior to passing control to output functions. The fix was to add a call to the Interlisp function IL:SETFILEPTR:

```lisp
(IL:SETFILEPTR ,STREAM -1)
```

#DandeGUI #CommonLisp #Interlisp #Lisp | Discuss: https://remark.as/p/journal.paoloamoroso.com/adding-window-clearing-and-message-printing-to-dandegui
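Here is the usage sketch mentioned above: the two functions combine into a simple redraw-and-report pattern. GUI:OPEN-WINDOW-STREAM comes from DandeGUI's public API; its :TITLE keyword, the window title, and the messages are my assumptions for illustration:

```lisp
;; Sketch: rewrite a window's contents from scratch, then report status
;; in its prompt area.
(DEFVAR *REPORT* (GUI:OPEN-WINDOW-STREAM :TITLE "Report"))

(DEFUN SHOW-REPORT (LINES)
  (GUI:CLEAR-WINDOW *REPORT*)                     ; wipe previous contents
  (DOLIST (LINE LINES)
    (FORMAT *REPORT* "~&~A~%" LINE))
  (GUI:PRINT-MESSAGE *REPORT* "Report updated.")) ; clears the prompt area first
```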
I'm working on DandeGUI, a Common Lisp GUI library for simple text and graphics output on Medley Interlisp. The name, pronounced "dandy guy", is a nod to the Dandelion workstation, one of the Xerox D-machines Interlisp-D ran on in the 1980s.

DandeGUI allows the creation and management of windows for stream-based text and graphics output. It captures typical GUI patterns of the Medley environment such as printing text to a window instead of the standard output. The main window in this screenshot was created by the code shown above it:

[Screenshot: a text output window created with DandeGUI on Medley Interlisp and the Lisp code that generated it.]

The library is written in Common Lisp and exposes its functionality as an API callable from Common Lisp and Interlisp code.

Motivations

In most of my prior Lisp projects I wrote programs that print text to windows. In general these windows are actually not bare Medley windows but running instances of the TEdit rich-text editor. Driving a full editor instead of directly creating windows may be overkill, but I get content scrolling for free, as well as window resizing and repainting, which TEdit handles automatically.

Moreover, TEdit windows have an associated TEXTSTREAM, an Interlisp data structure for text stream I/O. A TEXTSTREAM can be passed to any Common Lisp or Interlisp output function that takes a stream as an argument, such as PRINC, FORMAT, and PRIN1. For example, if S is the TEXTSTREAM associated with a TEdit window, (FORMAT S "~&Hello, Medley!~%") inserts the text "Hello, Medley!" in the window at the position of the cursor. Simple and versatile.

As I wrote more GUI code, recurring patterns and boilerplate emerged. These programs usually create a new TEdit window; set up the title and other options; fetch the associated text stream; and return it for further use. The rest of the program prints application-specific text to the stream and hence to the window. These patterns were ripe for abstracting and packaging in a library that other programs can call. This work is also good experience with API design.

Usage

An example best illustrates what DandeGUI can do and how to use it. Suppose you want to display some text in a window, such as a table of square roots. This code creates the table in the screenshot above:

```lisp
(gui:with-output-to-window (stream :title "Table of square roots")
  (format stream "~&Number~40TSquare Root~2%")
  (loop for n from 1 to 30
        do (format stream "~&~4D~40T~8,4F~%" n (sqrt n))))
```

DandeGUI exports all the public symbols from the DANDEGUI package with nickname GUI. The macro GUI:WITH-OUTPUT-TO-WINDOW creates a new TEdit window with the title specified by :TITLE, and establishes a context in which the variable STREAM is bound to the stream associated with the window. The rest of the code prints the table by repeatedly calling the Common Lisp function FORMAT with the stream.

GUI:WITH-OUTPUT-TO-WINDOW is best suited for one-off output, as the stream is no longer accessible outside of its scope. To retain the stream and send output in a series of steps, or from different parts of the program, you need a combination of GUI:OPEN-WINDOW-STREAM and GUI:WITH-WINDOW-STREAM. The former opens and returns a new window stream which may later be used by FORMAT and other stream output functions. These functions must be wrapped in calls to the macro GUI:WITH-WINDOW-STREAM to establish a context in which a variable is bound to the appropriate stream.
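A minimal sketch of that retained-stream pattern; the :TITLE keyword follows the usage shown above, while the log window and events are made up for illustration:

```lisp
;; Sketch: open a window stream once, keep it, and print to it from
;; different parts of the program via WITH-WINDOW-STREAM.
(defvar *log-stream* (gui:open-window-stream :title "Activity Log"))

(defun log-event (text)
  (gui:with-window-stream (s *log-stream*)
    (format s "~&~A~%" text)))

;; Called from anywhere:
(log-event "Started")
(log-event "Step 1 complete")
```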
The DandeGUI documentation on the project repository provides more details, sample code, and the API reference.

Design

DandeGUI is a thin wrapper around the Interlisp system facilities that provide the underlying functionality. The main reason for a thin wrapper is to have a simple API that covers the most common user interface patterns. Despite the simplicity, the library takes care of much of the complexity of managing Medley GUIs, such as content scrolling and window repainting and resizing.

A thin wrapper doesn't hide much of the data structures ubiquitous in Medley GUIs, such as menus and font descriptors. This is a plus, as the programmer can leverage prior knowledge of these facilities. So far I have no clear idea how DandeGUI may evolve, which is one more reason not to deepen the wrapper without a clear direction.

The user need not know whether DandeGUI packs TEdit or ordinary windows under the hood. Therefore, another design goal is to hide this implementation detail. For example, DandeGUI disables the main command menu of TEdit and sets the editor buffer to read-only so that typing in the window doesn't accidentally change the text.

Using Medley Common Lisp

DandeGUI relies on basic Common Lisp features. Although the Medley Common Lisp implementation is not ANSI compliant, it provides all I need, with one exception. The function DANDEGUI:WINDOW-TITLE returns the title of a window and allows setting it with a SETF function. However, the SEdit structure editor and the File Manager of Medley don't support or track function names that are lists, such as (SETF WINDOW-TITLE). A good workaround is to define SETF functions with DEFSETF, which Medley does support along with the CLtL macro DEFINE-SETF-METHOD. A sketch of this workaround follows at the end of this post.

Next steps

At present DandeGUI doesn't do much more than what is described here. To enhance this foundation I'll likely allow clearing existing text and give control over where to insert text in windows, such as at the beginning or end. DandeGUI will also have rich text facilities like printing in bold or changing fonts.

The windows of some of my programs have an attached menu of commands and a status area for displaying errors and other messages. I will eventually implement such menu-ed windows.

To support programs that do graphics output I plan to leverage the functionality of Sketch for graphics in a way similar to how I build upon TEdit for text. Sketch is the line drawing editor of Medley. The Interlisp graphics primitives require as an argument a DISPLAYSTREAM, a data structure that represents an output sink for graphics. It is possible to use the Sketch drawing area as an output destination by associating a DISPLAYSTREAM with the editor's window. Like TEdit, Sketch takes care of repainting content as well as window scrolling and resizing. In other words, DISPLAYSTREAM is to Sketch what TEXTSTREAM is to TEdit. DandeGUI will create and manage Sketch windows with associated streams suitable for use as the DISPLAYSTREAM the graphics primitives require.

#DandeGUI #CommonLisp #Interlisp #Lisp | Discuss: https://remark.as/p/journal.paoloamoroso.com/dandegui-a-gui-library-for-medley-interlisp
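Here is the DEFSETF sketch mentioned above. DANDEGUI:WINDOW-TITLE is real, but the update function's name and the IL:WINDOWPROP-based bodies are my assumptions for illustration:

```lisp
;; Sketch: give the update function an ordinary symbol name that SEdit and
;; the File Manager can track, then link it to the accessor with DEFSETF.
(DEFUN WINDOW-TITLE (WINDOW)
  "Return the title of WINDOW."
  (IL:WINDOWPROP WINDOW 'IL:TITLE))

(DEFUN SET-WINDOW-TITLE (WINDOW TITLE)
  "Set the title of WINDOW. Returns TITLE."
  (IL:WINDOWPROP WINDOW 'IL:TITLE TITLE)
  TITLE)

;; After this, (SETF (WINDOW-TITLE W) "New Title") calls SET-WINDOW-TITLE
;; without ever defining a list-named (SETF WINDOW-TITLE) function.
(DEFSETF WINDOW-TITLE SET-WINDOW-TITLE)
```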
More in programming
The mystery

In the previous article, I briefly mentioned a slight difference between the ESP-Prog and the reproduced circuit, when it comes to EN: Focusing on EN, it looks like the voltage level goes back to 3.3V much faster on the ESP-Prog than on the breadboard circuit. The grid is horizontally spaced at 2ms, so … Continue reading Overanalyzing a minor quirk of Espressif's reset circuit →
There's a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such a high altitude that they're a bit too abstract. Sort of like saying that your company could be much better if you merely adopted software. That's certainly true, but it's not a particularly helpful claim.

This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and make the case that the potential of AI agents is equivalent to the potential of this generation of AI. By the end of this writeup, my hope is that you'll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company.

How do agents work?

At its core, using an LLM is an API call that includes a prompt. For example, you might call Anthropic's /v1/message with a prompt: How should I adopt LLMs in my company? That prompt is used to fill the LLM's context window, which conditions the model to generate certain kinds of responses. This is the first important thing that agents can do: use an LLM to evaluate a context window and get a result.

Prompt engineering, or context engineering as it's being called now, is deciding what to put into the context window to best generate the responses you're looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine whether a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely.

However, composing the perfect context window is very time intensive, benefiting from techniques like metaprompting to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to do a good job of providing relevant context. For example, if you prompt, Who is going to become the next mayor of New York City?, then you can't include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you're asking the question to begin with! This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don't have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window. This is the second important thing that agents can do: use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool's response.

However, it's important to clarify how "tool usage" actually works. An LLM does not actually call a tool. (You can skim OpenAI's function calling documentation if you want to see a specific real-world example of this.) Instead there is a five-step process to calling tools that can be a bit counter-intuitive:

1. The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using.
2. Every API call to the LLM includes that defined set of tools as options that the LLM is allowed to recommend.
3. The response from the API call with defined functions is either generated text, as any other call to an LLM might provide, or a recommendation to call a specific tool with a specific set of parameters. For example, an LLM that knows about a get_weather tool, when prompted about the weather in Paris, might return this response: [{ "type": "function_call", "name": "get_weather", "arguments": "{\"location\":\"Paris, France\"}" }]
4. The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it's been used too frequently recently (e.g. rate limiting), it might check if the associated user has permission to use the tool (e.g. maybe it's a premium-only tool), or it might check if the parameters match the user's role-based permissions as well (e.g. the user can check weather, but only admin users are allowed to check weather in France).
5. If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call's context window.

The important thing about this loop is that the LLM itself can still only do one interesting thing: take a context window and return generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools' output to the LLM to generate more context.

What's magical is that LLMs plus tools start to really improve how you can generate context windows. Instead of having to have a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context.

This brings us to the third important thing that agents can do: they manage flow control for tool usage. Let's think about three different scenarios:

- Flow control via rules has concrete rules about how tools can be used. Some examples: it might only allow a given tool to be used once in a given workflow (or set a usage limit of a tool for each user, etc); it might require that a human-in-the-loop approves parameters over a certain value (e.g. refunds of more than $100 require human approval); it might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails); it might apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data); a tool to escalate to a human representative can only be called after five back-and-forths with the LLM agent.
- Flow control via statistics can use statistics to identify and act on abnormal behavior: if the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human; if a user has used a tool more than 99% of other users, then you might want to reject usage for the rest of the day; it might escalate to a human representative if tool parameters are more similar to prior parameters that required escalation to a human agent.
- Flow control via the LLM itself is the scenario to avoid: LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail.

Using agents to manage flow control is the mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can always find a way to shift the complexity to a tool to avoid that issue.
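To make the five-step loop concrete, here is a minimal sketch of an agent's flow-control core. It's written in Common Lisp to match the other code on this page rather than taken from any real agent framework; CALL-LLM is a stub standing in for a real LLM API client, and every name in it is hypothetical:

```lisp
;; Hypothetical sketch of the five-step tool loop. The program, not the
;; LLM, owns the tool table and decides whether to honor a tool call.
(defparameter *tools*
  (list (cons "get_weather"
              (lambda (args) (format nil "Sunny in ~A" args)))))

(defun call-llm (context)
  ;; Stub: a real client would send CONTEXT to an LLM API and parse the
  ;; response. Here the model asks for weather once, then answers.
  (if (rest context)
      '(:text "It is sunny in Paris today.")
      '(:tool-call "get_weather" "Paris, France")))

(defun run-agent (prompt &key (max-steps 5))
  "Ask the LLM, honor permitted tool calls, and feed results back."
  (let ((context (list prompt)))
    (dotimes (step max-steps (values :gave-up context))
      (declare (ignore step))
      (let ((response (call-llm context)))
        (ecase (first response)
          (:text (return (second response)))
          (:tool-call
           (destructuring-bind (name args) (rest response)
             (let ((handler (cdr (assoc name *tools* :test #'string=))))
               ;; Flow control: reject tools the program never defined.
               (push (if handler
                         (funcall handler args)
                         (format nil "error: unknown tool ~A" name))
                     context)))))))))

;; (run-agent "What's the weather in Paris?") => "It is sunny in Paris today."
```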
As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters.

At this point, there is one final important thing that agents do: they are software programs. This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but generally these include: building general context to add to the context window, sometimes thought of as maintaining memory; initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc; and periodically initiating workflows at a certain time, such as an hourly review of incoming tickets.

Alright, we've now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are:

1. Use an LLM to evaluate a context window and get a result.
2. Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool's response.
3. Manage flow control for tool usage via rules or statistical analysis.
4. Agents are software programs, and can do anything other software programs do.

Armed with these four capabilities, we'll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities.

Use Case 1: Customer Support Agent

One of the first scenarios people bring up for deploying AI agents is customer support, so let's start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let's set a goal of taking over the easiest tier first, with the goal of moving up tiers over time as we show impact. Our approach might be:

- Allow tickets (or support chats) to flow into an AI agent.
- Provide a variety of tools to the agent to support: retrieving information about the user (recent customer support tickets, account history, account state, and so on); escalating to the next tier of customer support; refunding a purchase (almost certainly implemented as "refund purchase" referencing a specific purchase by the user, rather than "refund amount", to prevent scenarios where the agent can be fooled into refunding too much); and closing the user account on request.
- Include customer support guidelines in the context window, describe customer problems, and map those problems to specific tools that should be used to solve them.
- Add flow control rules that ensure all calls escalate to a human if not resolved within a certain time period, after a certain number of back-and-forth exchanges, if they run into an error in the agent, and so on. These rules should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience.
- Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to a human) and some degree of random sampling.
- Review hourly, then daily, and then weekly metrics of agent performance.
- Based on your learnings from the metric reviews, set baselines for alerts which require more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodic review.
Note that even when you've moved customer support to AI agents, you still have: a tier of human agents dealing with the most complex calls; humans reviewing the periodic performance statistics; and humans performing quality control on AI agent-customer interactions. You absolutely can replace each of those downstream steps (reviewing performance statistics, etc) with its own AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where over time you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn't how they work, it's how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones.

Applied with care, the above series of actions will work successfully. However, it's important to recognize that this is building an entire software pipeline, and then learning to operate that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent.

Use Case 2: Triaging incoming bug reports

When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it's potentially quite severe, then you want on-call engineers immediately investigating; if it's certainly not severe, then you want to triage it in a less urgent process of some sort. It's interesting to think about how an AI agent might support this triaging workflow. The process might work as follows:

- Pipe all created incidents and all created tickets to this agent for review.
- Expose these tools to the agent: open an incident; retrieve current incidents; retrieve recently created tickets; retrieve production metrics; retrieve deployment logs; retrieve feature flag change logs; toggle known-safe feature flags; propose merging an incident with another for human approval; propose merging a ticket with another ticket for human approval.
- Use redundant LLM providers for critical workflows. If the LLM provider's API is unavailable, retry three times over ten seconds, then resort to using a second model provider (e.g. Anthropic first, and if it's unavailable try OpenAI), and then finally create an incident that the triaging mechanism is unavailable. For critical workflows, we can't simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues.
- Merge duplicates. When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow.
- Assess impact. If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it's high priority, open an incident. If it's low priority, create a ticket.
- Propose cause. Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error. In some cases this will be obvious (e.g.
spiking errors with a traceback of a line of code that changed recently), and in other cases it will only be proximity in time.
- Apply known-safe feature flags. Establish an allow list of known-safe feature flags that the system is allowed to activate itself. For example, if there are expensive features that are safe to disable, it could be allowed to disable them; restricting paginating through deeper search results when under load might be a reasonable tradeoff between stability and user experience.
- Defer to humans. At this point, rely on humans to drive incident, or ticket, remediation to completion.
- Draft the initial incident report. If an incident was opened, the agent should draft an initial incident report including the timeline, related changes, and the human activities taken over the course of the incident. This report should then be finalized by the human involved in the incident.
- Run incident review. Your existing incident review process should take the incident review and determine how to modify your systems, including the triaging agent, to increase reliability over time.
- Safeguard to re-enable feature flags. Since we now have an agent disabling feature flags, we also need to add a periodic check (agent-driven or otherwise) to re-enable the "known safe" feature flags if there isn't an ongoing incident, to avoid accidentally disabling them for long periods of time.

This is another AI agent that will absolutely work as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation needed to make this flow work includes:

- The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent cannot solve the review process because it's about actively learning what to change based on the incident. You can make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low-risk changes (e.g. a copy change) being automatically added to a ticket for human approval. Doing this for more complex, or riskier, changes is possible but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea of "just add agents and things get easy." Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and they are able to represent many customer-specific features solely through that DSL.
- Expanding the list of known-safe feature flags to make incidents remediable. To do this widely will require enforcing very specific requirements for how software is developed. Even doing this narrowly will require changes to ensure the known-safe feature flags remain safe as software is developed.
- Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, this should decrease. If the agent isn't driving a reduction in MTTR, then something is rotten in the details of the implementation.

Even a very effective agent doesn't relieve you of the responsibility of careful system design.
Rather, agents are a multiplier on the quality of your system design: done well, agents can make you significantly more effective. Done poorly, they'll only amplify your problems even more widely.

Do AI Agents Represent the Entirety of this Generation of AI?

If you accept my definition that AI agents are any combination of LLMs and software, then I think it's true that there's not much this generation of AI can express that doesn't fit this definition. I'd readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better one. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.

Closing thoughts

LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work. Software isn't magic, it's very logical, but what it can accomplish is magical. The same goes for agents and LLMs. The more we can accelerate that learning curve, the better for our industry.
This is not going to be a cakewalk like self-driving cars. Most of comma's competition is now out of business, taking billions and billions of dollars with it. Re: Tesla and FSD, we always expected Tesla to have the lead, but it's not a winner-take-all market; it will look more like iOS vs Android. comma has been around for 10 years, is profitable, and is now growing rapidly. In self-driving, most of the competition wasn't even playing the right game. This isn't how it is for ML frameworks. tinygrad's competition is playing the right game, is open source, and is run by some quite smart people. But this is my second startup, so hopefully taking a bit more risk is appropriate.

For comma to win, all it would take is people in 2016 being wrong about LIDAR, mapping, end-to-end, and hand coding, which hopefully we all agree now that they were. For tinygrad to win, it requires something much deeper to be wrong about software development in general.

As it stands now, tinygrad is 14556 lines. Line count is not a perfect proxy for complexity, but when you have differences of multiple orders of magnitude, it might mean something. I asked ChatGPT to estimate the lines of code in PyTorch, JAX, and MLIR: JAX = 400k, MLIR = 950k, PyTorch = 3300k. They are one to two orders of magnitude larger. And this isn't even including all the libraries and drivers the other frameworks rely on: CUDA, cuBLAS, Triton, nccl, LLVM, etc. tinygrad includes every single piece of code needed to drive an AMD RDNA3 GPU except for LLVM, and we plan to remove LLVM in a year or two as well.

But so what? What does line count matter? One hypothesis is that tinygrad is only smaller because it's not speed or feature competitive, and that if and when it becomes competitive, it will also be that many lines. But I just don't think that's true. tinygrad is already feature competitive, and for speed, I think the bitter lesson also applies to software.

When you look at the machine learning ecosystem, you realize it's just the same problems over and over again. The problem of multi-machine, multi-GPU, multi-SM, multi-ALU, cross-machine memory scheduling, DRAM scheduling, SRAM scheduling, register scheduling: it's all the same underlying problem at different scales. And yet, in all the current ecosystems, there are completely different codebases and libraries at each scale. I don't think this stands. I suspect there is a simple formulation of the problem underlying all of the scheduling. Of course, this problem will be in NP and hard to optimize, but I'm betting the bitter lesson wins here.

The goal of the tinygrad project is to abstract away everything except the absolute core problem in the cleanest way possible. This is why we need to replace everything. A model for the hardware is simple compared to a model for CUDA. If we succeed, tinygrad will not only be the fastest NN framework, but it will be under 25k lines all in, from a GPT-5 scale training job down to MMIO on the PCIe bus!

Here are the steps to get there:

- Expose the underlying search problem spanning several orders of magnitude. Due to the execution of neural networks not being data dependent, this problem is very amenable to search.
- Make sure your formulation is simple and complete. Fully capture all dimensions of the search space. The optimization goal is simple: run faster.
- Apply the state of the art in search. Burn compute. Use LLMs to guide. Use SAT solvers. Reinforcement learning. It doesn't matter, there's no way to cheat this goal. Just see if it runs faster.
If this works, not only do we win with tinygrad, but hopefully people begin to rethink software in general. Of course, it's a big if; this isn't like comma, where it was hard to lose. But if it wins…

The main thing to watch is development speed. Our bet has to be that tinygrad's development speed is outpacing the others. We have the AMD contract to train LLaMA 405B as fast as NVIDIA, due in a year; let's see if we succeed.
Explaining nil interface{} gotcha in Go

A footgun

In Go an empty interface is an interface without any methods, typed as interface{}. A zero value of interface{} is nil:

```go
var v interface{} // compiler sets this to nil, you could explicitly write = nil

if v == nil {
	fmt.Printf("v is nil\n")
} else {
	fmt.Printf("v is NOT nil\n")
}
```

Try online. This prints: v is nil.

However, this sometimes trips people up:

```go
type Foo struct{}

var v interface{}
var nilFoo *Foo // implicitly initialized by compiler to nil

if nilFoo == nil {
	fmt.Printf("nilFoo is nil.")
} else {
	fmt.Printf("nilFoo is NOT nil.")
}

v = nilFoo

if v == nil {
	fmt.Printf("v is nil\n")
} else {
	fmt.Printf("v is NOT nil\n")
}
```

Try online. This prints: nilFoo is nil. v is NOT nil.

On the surface this looks wrong: nilFoo is nil, and we assigned it to v, but v doesn't equal nil? How do you check whether an interface{} holds a nil pointer of any type?

```go
func isNilPointer(i interface{}) bool {
	if i == nil {
		return false // interface itself is nil, so it holds no pointer
	}
	v := reflect.ValueOf(i)
	return v.Kind() == reflect.Ptr && v.IsNil()
}

type Foo struct{}

var pf *Foo
var v interface{} = pf

if isNilPointer(v) {
	fmt.Printf("v is nil pointer\n")
} else {
	fmt.Printf("v is NOT nil pointer\n")
}
```

Try online.

Why

There's a reason for this perplexing behavior. nil is an abstract value. If you come from C/C++ or Java/C#, you might think it's the equivalent of a NULL pointer or null reference. It isn't. nil is a symbol that represents a zero value of pointers, channels, maps, and slices.

Logically, interface{} combines type and value. You can think of it as a tuple (type, value). An uninitialized value of interface{} is a tuple without a type and without a value: (no type, no value). In Go an uninitialized value is the zero value, and since nil is an abstract value representing the zero value for several types, it makes sense to use it for the zero value of interface{} too.

So: the zero value of interface{} is nil, which is (no type, no value). When we assigned nilFoo to v, the value became (*Foo, nil). Are you surprised that (no type, no value) is not the same as (*Foo, nil)?

To understand this gotcha, you have to understand two things.

One: nil is an abstract value that only has a meaning in context. Consider this:

```go
var ch chan bool
var m map[string]bool

if ch == m {
	fmt.Printf("ch is equal to m\n")
}
```

Try online. This snippet doesn't even compile: Error: ./prog.go:8:11: invalid operation: ch == m (mismatched types chan bool and map[string]bool). Both ch and m are nil, but you can't compare them because they are of different types. nil != nil because nil is an abstract concept, not an actual value.

Two: the nil value of interface{} is (no type, no value). Once you understand the above, you'll understand why nil doesn't compare equal to (type, nil), e.g. (*Foo, nil) or (map[string]bool, nil) or (int, 0) or (string, "").

Bad design or inevitable consequence of previous decisions?

Many claim it's a bad design. No-one describes what a better design would look like. Let's play-act a Go language designer. You've already designed concrete types, you came up with the notion of zero value, and you created nil to denote the zero value for pointers, channels, maps, and slices. You're now designing interface{} as a logical tuple of (type, value). The zero value is obviously (no type, no value). You have to figure out how to represent that zero value.

A different symbol for the interface{} zero value

Instead of using nil you could create a different symbol, e.g. zeroInterface.
You could then write:

```go
var v interface{}
var v2 interface{} = (*Foo)(nil)
var v3 interface{} = int(0)

if v == zeroInterface {
	// this is true
}
if v2 == nil {
	// this is true
}
if v3 == nil {
	// is it true or not?
}
```

Is this a better design? I don't think so. We don't have zeroPointer, zeroMap, zeroChannel etc., so this breaks consistency. It sticks out like a sore zeroInterface. And v == nil is subtle. Not all values wrapped in an interface{} have a zero value of nil. What should happen if you compare to (int, 0), given that 0 is the zero value of int?

Damn the consistency, let's do what the user expects

You could ditch the strict logic of nil values and special-case if v == nil for interface{} to do what people superficially expect to happen. You then have to answer the question: what happens when you do if (int, 0) == nil? The biggest issue is that you've lost the ability to distinguish between (no type, no value) and (type, nil). They both compare equal to nil, so how would you test for (no type, no value) but not (type, nil)? It doesn't seem like a better design either.

Your proposal

Now that you understand the problem and have seen two ideas for how to fix it, it's your turn to design a better solution. I tried, and the above two are the only ideas I had. We are boxed in by the existing notions of zero values and using nil to represent them. We could explore designs that re-think those assumptions, but would that be Go anymore? It's easy to complain that something is a bad design. It's much harder, often impossible, to design something better.