My first year since coming back to Linux

It has been a year since I set up my System76 Meerkat with Linux Mint. In July of 2024 I migrated from ChromeOS, and the Meerkat has been my daily driver on the desktop. A year later I have nothing major to report, which is the point. Despite the occasional unplanned reinstallation I have been enjoying the stability of Linux and just using the PC.

This stability finally enabled me to burn bridges with mainstream operating systems and fully embrace Linux and open systems. I'm ready to handle the worst and get back to work. Just a few years ago the frustration of troubleshooting a broken system would have made me seriously consider the switch to a proprietary solution. But a year of regular use, with an ordinary mix of quiet moments and glitches, gave me the confidence to stop worrying and learn to love Linux.

#linux
3 weeks ago


More from Paolo Amoroso's Journal

Adding graphics support to DandeGUI

DandeGUI now does graphics and this is what it looks like.

[Image: Some text and graphics output windows created with DandeGUI on Medley Interlisp.]

In addition to the square root table text output demo, I created the other graphics windows with the newly implemented functionality. For example, this code draws the random circles of the top window:

(DEFUN RANDOM-CIRCLES (&KEY (N 200) (MAX-R 50) (WIDTH 640) (HEIGHT 480))
  (LET ((RANGE-X (- WIDTH (* 2 MAX-R)))
        (RANGE-Y (- HEIGHT (* 2 MAX-R)))
        (SHADES (LIST IL:BLACKSHADE IL:GRAYSHADE (RANDOM 65536))))
    (DANDEGUI:WITH-GRAPHICS-WINDOW (STREAM :TITLE "Random Circles")
      (DOTIMES (I N)
        (DECLARE (IGNORE I))
        (IL:FILLCIRCLE (+ MAX-R (RANDOM RANGE-X))
                       (+ MAX-R (RANDOM RANGE-Y))
                       (RANDOM MAX-R)
                       (ELT SHADES (RANDOM 3))
                       STREAM)))))

GUI:WITH-GRAPHICS-WINDOW, GUI:OPEN-GRAPHICS-STREAM, and GUI:WITH-GRAPHICS-STREAM are the main additions. These functions and macros are the equivalent for graphics of what GUI:WITH-OUTPUT-TO-WINDOW, GUI:OPEN-WINDOW-STREAM, and GUI:WITH-WINDOW-STREAM, respectively, do for text. The difference is that the text facilities send output to TEXTSTREAM streams, whereas the graphics facilities send it to IMAGESTREAM, a type of device-independent graphics stream.

Under the hood DandeGUI text windows are customized TEdit windows with an associated TEXTSTREAM. TEdit is the rich text editor of Medley Interlisp. Similarly, the graphics windows of DandeGUI run the Sketch line drawing editor under the hood. Sketch windows have an IMAGESTREAM which Interlisp graphics primitives like IL:DRAWLINE and IL:DRAWPOINT accept as an output destination. DandeGUI creates and manages Sketch windows with the type of stream the graphics primitives require. In other words, IMAGESTREAM is to Sketch what TEXTSTREAM is to TEdit.

The benefits of programmatically using Sketch for graphics are the same as TEdit windows for text: automatic window repainting, scrolling, and resizing. The downside is overhead. Scrolling more than a few thousand graphics elements is slow and adding even more may crash the system. However, this is an acceptable tradeoff.

The new graphics functions and macros work similarly to the text ones, with a few differences. First, DandeGUI now depends on the SKETCH and SKETCH-STREAM library modules, which it automatically loads. Since Sketch has no notion of a read-only drawing area, GUI:OPEN-GRAPHICS-STREAM achieves the same effect by other means:

(DEFUN OPEN-GRAPHICS-STREAM (&KEY (TITLE "Untitled"))
  "Open a new window and return the associated IMAGESTREAM to send graphics output to.
Sets the window title to TITLE if supplied."
  (LET ((STREAM (IL:OPENIMAGESTREAM '|Untitled| 'IL:SKETCH `(IL:FONTS ,DEFAULT-FONT*)))
        (WINDOW (IL:\\SKSTRM.WINDOW.FROM.STREAM STREAM)))
    (IL:WINDOWPROP WINDOW 'IL:TITLE TITLE)
    ;; Disable left and middle-click title bar menu
    (IL:WINDOWPROP WINDOW 'IL:BUTTONEVENTFN NIL)
    ;; Disable sketch editing via right-click actions
    (IL:WINDOWPROP WINDOW 'IL:RIGHTBUTTONFN NIL)
    ;; Disable querying the user whether to save changes
    (IL:WINDOWPROP WINDOW 'IL:DONTQUERYCHANGES T)
    STREAM))

Only the mouse gestures and commands of the middle-click title bar menu and the right-click menu change the drawing area interactively. To disable these actions GUI:OPEN-GRAPHICS-STREAM removes their menu handlers by setting the window properties IL:BUTTONEVENTFN and IL:RIGHTBUTTONFN to NIL. This way only programmatic output can change the drawing area. The function also sets IL:DONTQUERYCHANGES to T to prevent querying whether to save the changes at window close.
By design output to DandeGUI windows is not permanent, so saving isn't necessary.

GUI:WITH-GRAPHICS-STREAM and GUI:WITH-GRAPHICS-WINDOW are straightforward:

(DEFMACRO WITH-GRAPHICS-STREAM ((VAR STREAM) &BODY BODY)
  "Perform the operations in BODY with VAR bound to the graphics window STREAM.
Evaluates the forms in BODY in a context in which VAR is bound to STREAM which must
already exist, then returns the value of the last form of BODY."
  `(LET ((,VAR ,STREAM))
     ,@BODY))

(DEFMACRO WITH-GRAPHICS-WINDOW ((VAR &KEY TITLE) &BODY BODY)
  "Perform the operations in BODY with VAR bound to a new graphics window stream.
Creates a new window titled TITLE if supplied, binds VAR to the IMAGESTREAM associated
with the window, and executes BODY in this context. Returns the value of the last form
of BODY."
  `(WITH-GRAPHICS-STREAM (,VAR (OPEN-GRAPHICS-STREAM :TITLE (OR ,TITLE "Untitled")))
     ,@BODY))

Unlike GUI:WITH-TEXT-STREAM and GUI:WITH-TEXT-WINDOW, which need to call GUI::WITH-WRITE-ENABLED to establish a read-only environment after every output operation, GUI:OPEN-GRAPHICS-STREAM needs to do this only once at window creation.

GUI:CLEAR-WINDOW, GUI:WINDOW-TITLE, and GUI:PRINT-MESSAGE now work with graphics streams in addition to text streams. For IMAGESTREAM arguments GUI:PRINT-MESSAGE prints to the system prompt window, as Sketch stream windows have no prompt area. The random circles and fractal triangles graphics demos round out the latest additions.

#DandeGUI #CommonLisp #Interlisp #Lisp
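As a quick usage sketch (mine, not from the post), here is about the smallest program that exercises the new macro. It assumes only what the RANDOM-CIRCLES demo above already shows: the GUI:WITH-GRAPHICS-WINDOW binding form, the IL:FILLCIRCLE argument order, and the IL:BLACKSHADE shade constant.

(DEFUN CIRCLE-ROW (&KEY (N 10) (R 20))
  "Hypothetical demo: draw N filled circles in a row in a new graphics window."
  (DANDEGUI:WITH-GRAPHICS-WINDOW (STREAM :TITLE "Circle Row")
    (DOTIMES (I N)
      ;; Advance X by one diameter per circle; keep Y constant.
      (IL:FILLCIRCLE (+ R (* I 2 R)) 100 R IL:BLACKSHADE STREAM))))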

a month ago 24 votes
Changing text style for DandeGUI window output

Printing rich text to windows is one of the planned features of DandeGUI, the GUI library for Medley Interlisp I'm developing in Common Lisp. I finally got around to this and implemented the GUI:WITH-TEXT-STYLE macro which controls the attributes of text printed to a window, such as the font family and face.

GUI:WITH-TEXT-STYLE establishes a context in which text printed to the stream associated with a TEdit window is rendered in the style specified by the arguments. The call to GUI:WITH-TEXT-STYLE here extends the square root table example by printing the heading in a 12-point bold sans serif font:

(gui:with-output-to-window (stream :title "Table of square roots")
  (gui:with-text-style (stream :family :sans :size 12 :face :bold)
    (format stream "~&Number~40TSquare Root~2%"))
  (loop for n from 1 to 30
        do (format stream "~&~4D~40T~8,4F~%" n (sqrt n))))

The code produces this window in which the styled column headings stand out:

[Image: Medley Interlisp window of a square root table generated by the DandeGUI GUI library.]

The :FAMILY, :SIZE, and :FACE arguments determine the corresponding text attributes. :FAMILY may be a generic family such as :SERIF for an unspecified serif font; :SANS for a sans serif font; :FIX for a fixed width font; or a keyword denoting a specific family like :TIMESROMAN.

At the heart of GUI:WITH-TEXT-STYLE is a pair of calls to the Interlisp function PRINTOUT that wrap the macro body, the first for setting the font of the stream to the specified style and the other for restoring the default:

(DEFMACRO WITH-TEXT-STYLE ((STREAM &KEY FAMILY SIZE FACE) &BODY BODY)
  (ONCE-ONLY (STREAM)
    `(UNWIND-PROTECT
         (PROGN
           (IL:PRINTOUT ,STREAM IL:.FONT (TEXT-STYLE-TO-FD ,FAMILY ,SIZE ,FACE))
           ,@BODY)
       (IL:PRINTOUT ,STREAM IL:.FONT DEFAULT-FONT))))

PRINTOUT is an Interlisp function for formatted output similar to Common Lisp's FORMAT but with additional font control via the .FONT directive. The symbols of PRINTOUT, i.e. its directives and arguments, are in the Interlisp package.

In turn GUI:WITH-TEXT-STYLE calls GUI::TEXT-STYLE-TO-FD, an internal DandeGUI function which passes to .FONT a font descriptor matching the required text attributes. GUI::TEXT-STYLE-TO-FD calls IL:FONTCOPY to build a descriptor that merges the specified attributes with any unspecified ones copied from the default font. The font descriptor is an Interlisp data structure that represents a font in the Medley environment.

#DandeGUI #CommonLisp #Interlisp #Lisp
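To show the restore-on-exit behavior that the UNWIND-PROTECT provides, here is one more usage sketch of my own (not from the post), combining GUI:WITH-TEXT-STYLE with the documented :FIX generic family:

(gui:with-output-to-window (stream :title "Styled output")
  (gui:with-text-style (stream :family :fix :size 10 :face :bold)
    (format stream "~&A fixed-width bold heading~%"))
  ;; Outside the macro body the stream is back to the default font.
  (format stream "~&Back to the default style.~%"))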

2 months ago 26 votes
Adding window clearing and message printing to DandeGUI

I continued working on DandeGUI, a GUI library for Medley Interlisp I'm writing in Common Lisp. I added two new short public functions, GUI:CLEAR-WINDOW and GUI:PRINT-MESSAGE, and fixed a bug in some internal code.

GUI:CLEAR-WINDOW deletes the text of the window associated with the Interlisp TEXTSTREAM passed as the argument:

(DEFUN CLEAR-WINDOW (STREAM)
  "Delete all the text of the window associated with STREAM. Returns STREAM."
  (WITH-WRITE-ENABLED (STR STREAM)
    (IL:TEDIT.DELETE STR 1 (IL:TEDIT.NCHARS STR)))
  STREAM)

It's little more than a call to the TEdit API function IL:TEDIT.DELETE for deleting text in the editor buffer, wrapped in the internal macro GUI::WITH-WRITE-ENABLED that establishes a context for write access to a window.

I also wrote GUI:PRINT-MESSAGE. This function prints a message to the prompt area of the window associated with the TEXTSTREAM passed as an argument, optionally clearing the area prior to printing. The prompt area is a one-line Interlisp prompt window attached above the title bar of the TEdit window where the editor displays errors and status messages.

(DEFUN PRINT-MESSAGE (STREAM MESSAGE &OPTIONAL DONT-CLEAR-P)
  "Print MESSAGE to the prompt area of the window associated with STREAM.
If DONT-CLEAR-P is non NIL the area will not be cleared first. Returns STREAM."
  (IL:TEDIT.PROMPTPRINT STREAM MESSAGE (NOT DONT-CLEAR-P))
  STREAM)

GUI:PRINT-MESSAGE just passes the appropriate arguments to the TEdit API function IL:TEDIT.PROMPTPRINT which does the actual printing. The documentation of both functions is in the API reference on the project repo.

Testing DandeGUI revealed that sometimes text wasn't appended to the end but inserted at the beginning of windows. To address the issue I changed GUI::WITH-WRITE-ENABLED to ensure the file pointer of the stream is set to the end of the file (i.e. -1) prior to passing control to output functions. The fix was to add a call to the Interlisp function IL:SETFILEPTR:

(IL:SETFILEPTR ,STREAM -1)

#DandeGUI #CommonLisp #Interlisp #Lisp
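Here is a small usage sketch of my own (not from the post) that ties the two new functions together; it assumes the GUI:WITH-OUTPUT-TO-WINDOW macro described elsewhere in this journal:

(gui:with-output-to-window (stream :title "Clear and message demo")
  (format stream "~&Some throwaway text~%")
  ;; Wipe the window contents, then report it in the prompt area above the title bar.
  (gui:clear-window stream)
  (gui:print-message stream "Window cleared"))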

3 months ago 19 votes
DandeGUI, a GUI library for Medley Interlisp

I'm working on DandeGUI, a Common Lisp GUI library for simple text and graphics output on Medley Interlisp. The name, pronounced "dandy guy", is a nod to the Dandelion workstation, one of the Xerox D-machines Interlisp-D ran on in the 1980s.

DandeGUI allows the creation and management of windows for stream-based text and graphics output. It captures typical GUI patterns of the Medley environment such as printing text to a window instead of the standard output. The main window of this screenshot was created by the code shown above it.

[Image: A text output window created with DandeGUI on Medley Interlisp and the Lisp code that generated it.]

The library is written in Common Lisp and exposes its functionality as an API callable from Common Lisp and Interlisp code.

Motivations

In most of my prior Lisp projects I wrote programs that print text to windows. In general these windows are actually not bare Medley windows but running instances of the TEdit rich-text editor. Driving a full editor instead of directly creating windows may be overkill, but I get for free content scrolling as well as window resizing and repainting which TEdit handles automatically.

Moreover, TEdit windows have an associated TEXTSTREAM, an Interlisp data structure for text stream I/O. A TEXTSTREAM can be passed to any Common Lisp or Interlisp output function that takes a stream as an argument such as PRINC, FORMAT, and PRIN1. For example, if S is the TEXTSTREAM associated with a TEdit window, (FORMAT S "~&Hello, Medley!~%") inserts the text "Hello, Medley!" in the window at the position of the cursor. Simple and versatile.

As I wrote more GUI code, recurring patterns and boilerplate emerged. These programs usually create a new TEdit window; set up the title and other options; fetch the associated text stream; and return it for further use. The rest of the program prints application specific text to the stream and hence to the window. These patterns were ripe for abstracting and packaging in a library that other programs can call. This work is also good experience with API design.

Usage

An example best illustrates what DandeGUI can do and how to use it. Suppose you want to display in a window some text such as a table of square roots. This code creates the table in the screenshot above:

(gui:with-output-to-window (stream :title "Table of square roots")
  (format stream "~&Number~40TSquare Root~2%")
  (loop for n from 1 to 30
        do (format stream "~&~4D~40T~8,4F~%" n (sqrt n))))

DandeGUI exports all the public symbols from the DANDEGUI package with nickname GUI. The macro GUI:WITH-OUTPUT-TO-WINDOW creates a new TEdit window with title specified by :TITLE, and establishes a context in which the variable STREAM is bound to the stream associated with the window. The rest of the code prints the table by repeatedly calling the Common Lisp function FORMAT with the stream.

GUI:WITH-OUTPUT-TO-WINDOW is best suited for one-off output as the stream is no longer accessible outside of its scope. To retain the stream and send output in a series of steps, or from different parts of the program, you need a combination of GUI:OPEN-WINDOW-STREAM and GUI:WITH-WINDOW-STREAM. The former opens and returns a new window stream which may later be used by FORMAT and other stream output functions. These functions must be wrapped in calls to the macro GUI:WITH-WINDOW-STREAM to establish a context in which a variable is bound to the appropriate stream, as in the sketch below.
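Here is an illustrative sketch of that retained-stream pattern. It is my own, not code from the post, and it assumes GUI:OPEN-WINDOW-STREAM takes a :TITLE keyword and GUI:WITH-WINDOW-STREAM takes a (VAR STREAM) binding list, mirroring the graphics counterparts documented earlier on this page:

;; Open the window once and keep the stream around (hypothetical usage).
(defvar *log-stream* (gui:open-window-stream :title "Build log"))

(defun log-line (text)
  ;; Re-enter the stream context each time output is needed.
  (gui:with-window-stream (stream *log-stream*)
    (format stream "~&~A~%" text)))

(log-line "Step 1 done")
(log-line "Step 2 done")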
The DandeGUI documentation on the project repository provides more details, sample code, and the API reference.

Design

DandeGUI is a thin wrapper around the Interlisp system facilities that provide the underlying functionality. The main reason for a thin wrapper is to have a simple API that covers the most common user interface patterns. Despite the simplicity, the library takes care of a lot of the complexity of managing Medley GUIs, such as content scrolling and window repainting and resizing.

A thin wrapper doesn't do much to hide the data structures ubiquitous in Medley GUIs, such as menus and font descriptors. This is a plus, as the programmer leverages prior knowledge of these facilities. So far I have no clear idea how DandeGUI may evolve. That is one more reason not to deepen the wrapper too much without a clear direction.

The user need not know whether DandeGUI packs TEdit or ordinary windows under the hood. Therefore, another design goal is to hide this implementation detail. DandeGUI, for example, disables the main command menu of TEdit and sets the editor buffer to read-only so that typing in the window doesn't change the text accidentally.

Using Medley Common Lisp

DandeGUI relies on basic Common Lisp features. Although the Medley Common Lisp implementation is not ANSI compliant, it provides all I need, with one exception. The function DANDEGUI:WINDOW-TITLE returns the title of a window and allows setting it with a SETF function. However, the SEdit structure editor and the File Manager of Medley don't support or track function names that are lists such as (SETF WINDOW-TITLE). A good workaround is to define SETF functions with DEFSETF, which Medley does support along with the CLtL macro DEFINE-SETF-METHOD.

Next steps

At present DandeGUI doesn't do much more than what is described here. To enhance this foundation I'll likely allow clearing existing text and give control over where to insert text in windows, such as at the beginning or end. DandeGUI will also have rich text facilities like printing in bold or changing fonts. The windows of some of my programs have an attached menu of commands and a status area for displaying errors and other messages. I will eventually implement such menu-equipped windows.

To support programs that do graphics output I plan to leverage the functionality of Sketch for graphics in a way similar to how I build upon TEdit for text. Sketch is the line drawing editor of Medley. The Interlisp graphics primitives require as an argument a DISPLAYSTREAM, a data structure that represents an output sink for graphics. It is possible to use the Sketch drawing area as an output destination by associating a DISPLAYSTREAM with the editor's window. Like TEdit, Sketch takes care of repainting content as well as window scrolling and resizing. In other words, DISPLAYSTREAM is to Sketch what TEXTSTREAM is to TEdit. DandeGUI will create and manage Sketch windows with associated streams suitable for use as the DISPLAYSTREAM the graphics primitives require.

#DandeGUI #CommonLisp #Interlisp #Lisp

3 months ago 32 votes

More in programming

Our $100M Series B

We don't want to bury the lede: we have raised a $100M Series B, led by a new strategic partner in USIT with participation from all existing Oxide investors. To put that number in perspective: over the nearly six year lifetime of the company, we have raised $89M; our $100M Series B more than doubles our total capital raised to date — and positions us to make Oxide the generational company that we have always aspired it to be.

If this aspiration seems heady now, it seemed absolutely outlandish when we were first raising venture capital in 2019. Our thesis was that cloud computing was the future of all computing; that running on-premises would remain (or become!) strategically important for many; that the entire stack — hardware and software — needed to be rethought from first principles to serve this market; and that a large, durable, public company could be built by whoever pulled it off.

This scope wasn't immediately clear to all potential investors, some of whom seemed to latch on to one aspect or another without understanding the whole. Their objections were revealing: "We know you can build this," began more than one venture capitalist (at which we bit our tongue; were we not properly explaining what we intended to build?!), "but we don't think that there is a market." Entrepreneurs must become accustomed to rejection, but this flavor was particularly frustrating because it was exactly backwards: we felt that there was in fact substantial technical risk in the enormity of the task we put before ourselves — but we also knew that if we could build it (a huge if!) there was a huge market, desperate for cloud computing on-premises.

Fortunately, in Eclipse Ventures we found investors who saw what we saw: that the most important products come when we co-design hardware and software together, and that the on-premises market was sick of being told that they either don't exist or that they don't deserve modernity. These bold investors — like the customers we sought to serve — had been waiting for this company to come along; we raised seed capital, and started building.

And build it we did, making good on our initial technical vision:

We did our own board designs, allowing for essential system foundation like a true hardware root-of-trust and end-to-end power observability.
We did our own microcontroller operating system, and used it to replace the traditional BMC.
We did our own platform enablement software, eliminating the traditional UEFI BIOS and its accompanying flotilla of vulnerabilities.
We did our own host hypervisor, assuring an integrated and seamless user experience — and eliminating the need for a third-party hypervisor and its concomitant rapacious software licensing.
We did our own switch — and our own switch runtime — eliminating entire universes of integration complexity and operational nightmares.
We did our own integrated storage service, allowing the rack-scale system to have reliable, available, durable, elastic instance storage without necessitating a dependency on a third party.
We did our own control plane, a sophisticated distributed system building on the foundation of our hardware and software components to deliver the API-driven services that modernity demands: elastic compute, virtual networking, and virtual storage.

While these technological components are each very important (and each is in service to specific customer problems when deploying infrastructure on-premises), the objective is the product, not its parts.
The journey to a product was long, but we ticked off the milestones. We got the boards brought up. We got the switch transiting packets. We got the control plane working. We got the rack manufactured. We passed FCC compliance. And finally, two years ago, we shipped our first system! Shortly thereafter, more milestones of the variety you can only get after shipping: our first update of the software in the field; our first update-delivered performance improvements; our first customer-requested features added as part of an update.

Later that year, we hit general commercial availability, and things started accelerating. We had more customers — and our first multi-rack customer. We had customers go on the record about why they had selected Oxide — and customers describing the wins that they had seen deploying Oxide. Customers started landing faster now: enterprise sales cycles are infamously long, but we were finding that we were going from first conversations to a delivered product surprisingly quickly. The quickening pace always seemed to be due in some way to our transparency: new customers were listeners to our podcast, or they had read our RFDs, or they had perused our documentation, or they had looked at the source code itself.

With growing customer enthusiasm, we were increasingly getting questions about what it would look like to buy a large number of Oxide racks. Could we manufacture them? Could we support them? Could we make them easy to operate together? Into this excitement, a new potential investor, USIT, got to know us. They asked terrific questions, and we found a shared disposition towards building lasting value and doing it the right way. We learned more about them, too, and especially USIT's founder, Thomas Tull. The more we each learned about the other, the more there was to like. And importantly, USIT had the vision for us that we had for ourselves: that there was a big, important market here — and that it was uniquely served by Oxide.

We are elated to announce this new, exciting phase of the company. It's not necessarily in our nature to celebrate fundraising, but this is a big milestone, because it will allow us to address our customers' most pressing questions around scale (manufacturing scale, system scale, operations scale) and roadmap scope. We have always believed in our mission, but this raise gives us a new sense of confidence when we say it: we're going to kick butt, have fun, not cheat (of course!), love our customers — and change computing forever.

16 hours ago 5 votes
React Server Components with Vite and React-Router (tip)

Create a small example app and send payloads from the server to the client using RSCs.

10 hours ago 3 votes
2000 words about arrays and tables

I'm way too discombobulated from getting next month's release of Logic for Programmers ready, so I'm pulling an idea from the slush pile. Basically I wanted to come up with a mental model of arrays as a concept that explained APL-style multidimensional arrays and tables but also why there weren't multitables.

So, arrays. In all languages they are basically the same: they map a sequence of numbers (I'll use 1..N)[1] to homogeneous values (values of a single type). This is in contrast to the other two foundational types, associative arrays (which map an arbitrary type to homogeneous values) and structs (which map a fixed set of keys to heterogeneous values). Arrays appear in PLs earlier than the other two, possibly because they have the simplest implementation and the most obvious application to scientific computing. The OG FORTRAN had arrays.

I'm interested in two structural extensions to arrays. The first, found in languages like nushell and frameworks like Pandas, is the table. Tables have string keys like a struct and indexes like an array. Each row is a struct, so you can get "all values in this column" or "all values for this row". They're heavily used in databases and data science.

The other extension is the N-dimensional array, mostly seen in APLs like Dyalog and J. Think of this like arrays-of-arrays(-of-arrays), except all arrays at the same depth have the same length. So [[1,2,3],[4]] is not a 2D array, but [[1,2,3],[4,5,6]] is. This means that N-arrays can be queried on any axis.

   ]x =: i. 3 3
0 1 2
3 4 5
6 7 8
   0 { x     NB. first row
0 1 2
   0 {"1 x   NB. first column
0 3 6

So, I've had some ideas on a conceptual model of arrays that explains all of these variations and possibly predicts new variations. I wrote up my notes and did the bare minimum of editing and polishing. Somehow it ended up being 2000 words.

1-dimensional arrays

A one-dimensional array is a function over 1..N for some N. To be clear this is math functions, not programming functions. Programming functions take values of a type and perform computations on them. Math functions take values of a fixed set and return values of another set. So the array [a, b, c, d] can be represented by the function (1 -> a ++ 2 -> b ++ 3 -> c ++ 4 -> d). Let's write the set of all four element character arrays as 1..4 -> char. 1..4 is the function's domain. The set of all character arrays is the empty array + the functions with domain 1..1 + the functions with domain 1..2 + ... Let's call this set Array[Char].

Our compilers can enforce that a type belongs to Array[Char], but some operations care about the more specific type, like matrix multiplication. This is either checked with the runtime type or, in exotic enough languages, with static dependent types. (This is actually how TLA+ does things: the basic collection types are functions and sets, and a function with domain 1..N is a sequence.)

2-dimensional arrays

Now take the 3x4 matrix

   i. 3 4
0 1  2  3
4 5  6  7
8 9 10 11

There are two equally valid ways to represent the array function:

1. A function that takes a row and a column and returns the value at that index, so it would look like f(r: 1..3, c: 1..4) -> Int.
2. A function that takes a row and returns that row as an array, aka another function: f(r: 1..3) -> g(c: 1..4) -> Int.[2]

Man, (2) looks a lot like currying! In Haskell, functions can only have one parameter. If you write (+) 6 10, (+) 6 first returns a new function f y = y + 6, and then applies f 10 to get 16.
So (+) has the type signature Int -> Int -> Int: it's a function that takes an Int and returns a function of type Int -> Int.[3] Similarly, our 2D array can be represented as an array function that returns array functions: it has type 1..3 -> 1..4 -> Int, meaning it takes a row index and returns 1..4 -> Int, aka a single array. (This differs from conventional array-of-arrays because it forces all of the subarrays to have the same domain, aka the same length. If we wanted to permit ragged arrays, we would instead have the type 1..3 -> Array[Int].)

Why is this useful? A couple of reasons. First of all, we can apply function transformations to arrays, like "combinators". For example, we can flip any function of type a -> b -> c into a function of type b -> a -> c. So given a function that takes rows and returns columns, we can produce one that takes columns and returns rows. That's just a matrix transposition! (There's a small Lisp sketch of this at the end of the post.) Second, we can extend this to any number of dimensions: a three-dimensional array is one with type 1..M -> 1..N -> 1..O -> V. We can still use function transformations to rearrange the array along any ordering of axes.

Speaking of dimensions:

What are dimensions, anyway

Okay, so now imagine we have a Row × Col grid of pixels, where each pixel is a struct of type Pixel(R: int, G: int, B: int). So the array is

Row -> Col -> Pixel

But we can also represent the Pixel struct with a function: Pixel(R: 0, G: 0, B: 255) is the function where f(R) = 0, f(G) = 0, f(B) = 255, making it a function of type {R, G, B} -> Int. So the array is actually the function

Row -> Col -> {R, G, B} -> Int

And then we can rearrange the parameters of the function like this:

{R, G, B} -> Row -> Col -> Int

Even though the set {R, G, B} is not of form 1..N, this clearly has a real meaning: f[R] is the function mapping each coordinate to that coordinate's red value. What about Row -> {R, G, B} -> Col -> Int? That's, for each row, the 3 × Col array mapping each color to that row's intensities.

Really any finite set can be a "dimension". Recording the monitor over a span of time? Frame -> Row -> Col -> Color -> Int. Recording a bunch of computers over some time? Computer -> Frame -> Row …. This is pretty common in constraint satisfaction! Like if you're a conference organizer trying to assign talks to talk slots, your array might be type (Day, Time, Room) -> Talk, where Day/Time/Room are enumerations.

An implementation constraint is that most programming languages only allow integer indexes, so we have to replace Rooms and Colors with numerical enumerations over the set. As long as the set is finite, this is always possible, and for struct-functions, we can always choose the indexing on the lexicographic ordering of the keys. But we lose type safety.

Why tables are different

One more example: Day -> Hour -> Airport(name: str, flights: int, revenue: USD). Can we turn the struct into a dimension like before? In this case, no. We were able to make Color an axis because we could turn Pixel into a Color -> Int function, and we could only do that because all of the fields of the struct had the same type. This time, the fields are different types. So we can't convert {name, flights, revenue} into an axis.[4]

One thing we can do is convert it to three separate functions:

airport: Day -> Hour -> Str
flights: Day -> Hour -> Int
revenue: Day -> Hour -> USD

But we want to keep all of the data in one place.
That's where tables come in: an array-of-structs is isomorphic to a struct-of-arrays:

AirportColumns(
  airport: Day -> Hour -> Str,
  flights: Day -> Hour -> Int,
  revenue: Day -> Hour -> USD,
)

The table is, in a sense, both representations simultaneously. If this was a pandas dataframe, df["airport"] would get the airport column, while df.loc[day1] would get the first day's data. I don't think many table implementations support more than one axis dimension but there's no reason they couldn't. These are also possible transforms:

Hour -> NamesAreHard(
  airport: Day -> Str,
  flights: Day -> Int,
  revenue: Day -> USD,
)

Day -> Whatever(
  airport: Hour -> Str,
  flights: Hour -> Int,
  revenue: Hour -> USD,
)

In my mental model, the heterogeneous struct acts as a "block" in the array. We can't remove it, we can only push an index into the fields or pull a shared column out. But there's no way to convert a heterogeneous table into an array.

Actually there is a terrible way

Most languages have unions or product types that let us say "this is a string OR integer". So we can make our airport data Day -> Hour -> AirportKey -> Int | Str | USD. Heck, might as well just say it's Day -> Hour -> AirportKey -> Any. But would anybody really be mad enough to use that in practice? Oh wait, J does exactly that. J has an opaque datatype called a "box". A "table" is a function Dim1 -> Dim2 -> Box. You can see some examples of what that looks like here.

Misc Thoughts and Questions

The heterogeneity barrier seems like it explains why we don't see multiple axes of table columns, while we do see multiple axes of array dimensions. But is that actually why? Is there a system out there that does have multiple columnar axes?

The array x = [[a, b, a], [b, b, b]] has type 1..2 -> 1..3 -> {a, b}. Can we rearrange it to 1..2 -> {a, b} -> 1..3? No. But we can rearrange it to 1..2 -> {a, b} -> PowerSet(1..3), which maps rows and characters to columns with that character: [(a -> {1, 3} ++ b -> {2}), (a -> {} ++ b -> {1, 2, 3})]. We can also transform Row -> PowerSet(Col) into Row -> Col -> Bool, aka a boolean matrix. This makes sense to me as both forms are means of representing directed graphs.

Are other function combinators useful for thinking about arrays?

Does this model cover pivot tables? Can we extend it to relational data with multiple tables?

Systems Distributed Talk (will be) Online

The premiere will be August 6 at 12 CST, here! I'll be there to answer questions / mock my own performance / generally make a fool of myself.

[1] Sacrilege! But it turns out in this context, it's easier to use 1-indexing than 0-indexing. In the years since I wrote that article I've settled on "each indexing choice matches different kinds of mathematical work", so mathematicians and computer scientists are best served by being able to choose their index. But software engineers need consistency, and 0-indexing is overall a net better consistency pick.

[2] This is right-associative: a -> b -> c means a -> (b -> c), not (a -> b) -> c. (1..3 -> 1..4) -> Int would be the associative array that maps length-3 arrays to integers.

[3] Technically it has type Num a => a -> a -> a, since (+) works on floats too.

[4] Notice that if each Airport had a unique name, we could pull it out into AirportName -> Airport(flights, revenue), but we still are stuck with two different values.
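To make the "transposition is just flipping arguments" idea concrete, here is a small Common Lisp sketch of my own (the post itself sticks to J and type notation); it wraps a 2-D array as a curried row -> col -> value function and flips the argument order:

(defun curry-array (a)
  "View the 2-D array A as a function of type row -> col -> value."
  (lambda (r) (lambda (c) (aref a r c))))

(defun flip (f)
  "Turn a curried function a -> b -> c into b -> a -> c."
  (lambda (b) (lambda (a) (funcall (funcall f a) b))))

;; The 3x4 example matrix from the post (0-indexed here, as in J).
(defvar *m* #2A((0 1 2 3) (4 5 6 7) (8 9 10 11)))

(let* ((rows-first (curry-array *m*))
       (cols-first (flip rows-first)))
  ;; Row 1, column 2 of the original...
  (print (funcall (funcall rows-first 1) 2))  ; => 6
  ;; ...is column 2, row 1 of the flipped, i.e. transposed, function.
  (print (funcall (funcall cols-first 2) 1))) ; => 6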

16 hours ago 2 votes
Playing with open source LLMs

Every 6 months or so, I decide to leave my cave and check out what the cool kids are doing with AI. Apparently the latest trend is to use fancy command line tools to write code using LLMs. This is a very nice change, since it suddenly makes AI compatible with my allergy to getting out of the terminal.

The most popular of these tools seems to be Claude Code. It promises to be able to build in total autonomy, being able to search code, write code, run tests, lint, and commit the changes. While this sounds great on paper, I'm not keen on getting locked into vendor tools from an unprofitable company. At some point, they will either need to raise their prices, enshittify their product, or most likely do both. So I went looking for what the free and open source alternatives are.

Picking a model

There's a large number of open source large language models on the market, with new ones getting released all the time. However, they are not all ready to be used locally in coding tasks, so I had to try a bunch of them before settling on one.

deepseek-r1:8b

Deepseek is the most popular open source model right now. It was created by the eponymous Chinese company. It made the news by beating numerous benchmarks while being trained on a budget that is probably lower than the compensation of some OpenAI workers. The 8b variant only weighs 5.2 GB and runs decently on limited hardware, like my three-year-old Mac. This model is famous for forgetting about world events from 1989, but also seems to have a few issues when faced with concrete coding tasks. It is a reasoning model, meaning it "thinks" before acting, which should lead to improved accuracy. In practice, it regularly gets stuck indefinitely searching where it should start and jumping from one problem to the other in a loop. This can happen even on simple problems, and made it unusable for me.

mistral:7b

Mistral is the French alternative to American and Chinese models. I have already talked about their 7b model on this blog. It is worth noting that they have kept updating their models, and it should now be much more accurate than two years ago. Mistral is not a reasoning model, so it will jump straight to answering. This is very good if you're working with tasks where speed and low compute use are a priority. Sadly, the accuracy doesn't seem good enough for coding. Even on simple tasks, it will hallucinate functions or randomly delete parts of the code I didn't want to touch.

qwen3:8b

Another model from China, qwen3 was created by the folks at Alibaba. It also claims impressive benchmark results, and can work as both a reasoning or non-thinking model. It was made with modern AI tooling in mind, by supporting MCPs and a framework for agentic development. This model actually seems to work as expected, providing somewhat accurate code output while not hanging in the reasoning part. Since it runs decently on my local setup, I decided to stick to that model for now.

Setting up a local API with Ollama

Ollama is now the default way to download and run local LLMs. It can be simply installed by downloading it from their website. Once installed, it works like Docker for models, by giving us access to commands like pull, run, or rm. Ollama will expose an API on localhost which can be used by other programs. For example, you can use it from your Python programs through ollama-python.

Pair programming with aider

The next piece of software I installed is aider. I assume it's pronounced like the French word, but I could not confirm that.
Aider describes itself as a "pair programming" application. Its main job is to pass context to the model, let it write the output to files, run linters, and commit the changes.

Getting started

It can be installed using the official Python package or via Homebrew if you use Mac. Once it is installed, just navigate to your code repository and launch it:

export OLLAMA_API_BASE=http://127.0.0.1:11434
aider --model ollama_chat/qwen3:8b

The CLI should automatically create some configuration files and add them to the repo's .gitignore.

Usage

Aider isn't meant to be left alone in complete autonomy. You'll have to guide the AI through the process of making changes to your repository. To start, use the /add command to add files you want to focus on. Those files will be passed entirely to the model's context and the model will be able to write in them. You can then ask questions using the /ask command. If you want to generate code, a good strategy can be to start by requesting a plan of action. When you want it to actually write to the files, you can prompt it using the /code command. This is also the default mode. There's no absolute guarantee that it will follow a plan if you agreed on one previously, but it is still a good idea to have one. The /architect command seems to automatically ask for a plan, accept it, and write the code. The specificity of this command is that it lets you use different models to plan and write the changes.

Refactoring

I tried coding with aider in a few situations to see how it performs in practice. First, I tried making it do a simple refactoring on Itako, which is a project of average complexity. When pointed to the exact part of the code where the issues happened, and told explicitly what to do, the model managed to change the target struct according to the instructions. It did unexpectedly change a function that was outside the scope of what I asked, but this was easy to spot. On paper, this looks like a success. In practice, the time spent crafting a prompt, waiting for the AI to run, and fixing the small issue that came up immensely exceeds the 10 minutes it would have taken me to edit the file myself. I don't think coding that way would lead me to a massive performance improvement for now.

Greenfield project

For a second scenario, I wanted to see how it would perform on a brand-new project. I quickly set up a Python virtual environment, and asked aider to work with me at building a simple project. We would be opening a file containing Japanese text, parsing it with fugashi, and counting the words. To my surprise, this was a disaster. All I got was a bunch of hallucination-riddled Python that wouldn't run under any circumstances. It may be that the lack of context actually made it harder for the model to generate code.

Troubleshooting

Finally, I went back to Itako, and decided to check how it would perform on common troubleshooting tasks. I introduced a few bugs to my code and gathered some error messages. I then proceeded to simply give aider the files mentioned by the error message and just use /ask to have it explain the errors to me, without requiring it to implement the code. This part did work very well. If I compare it with Googling unknown error messages, I think this can cut the time spent on the issue by half. This is not just because Google is getting worse every day, but the model having access to the actual code does give it a massive advantage.
I do think this setup is something I can use instead of the occasional frustration of scrolling through StackOverflow threads when something unexpected breaks.

What about the Qwen CLI?

With everyone jumping on the trend of CLI tools for LLMs, the Qwen team released its own Qwen Code. It can be installed using npm, and connects to a local model if configured like this:

export OPENAI_API_KEY="ollama"
export OPENAI_BASE_URL="http://localhost:11434/v1/"
export OPENAI_MODEL="qwen3:8b"

Compared to aider, it aims to be fully autonomous. For example, it will search your repository using grep. However, I didn't manage to get it to successfully write any code. The tool seems optimized for larger, online models, with context sizes up to 1M tokens. Our local qwen3 only has a 40k-token context size, which can get overwhelmed very quickly when browsing entire code repositories. Even when I didn't run out of context, the tool mysteriously failed when trying to write files. It insists it can only write to absolute paths, which the model doesn't seem to agree with providing. I did not investigate the issue further.

2 days ago 5 votes
The beauty of ideals

Ideals are supposed to be unattainable for the great many. If everyone could be the smartest, strongest, prettiest, or best, there would be no need for ideals — we'd all just be perfect. But we're not, so ideals exist to show us the peak of humanity and to point our ambition and appreciation toward it.

This is what I always hated about the 90s. It was a decade that made it cool to be a loser. It was the decade of MTV's Beavis and Butt-Head. It was the age of grunge. I'm generationally obliged to like Nirvana, but what a perfectly depressive, suicidal soundtrack to loser culture.

Naomi Wolf's The Beauty Myth was published in 1990. It took a critical theory-like lens to beauty ideals, and found it all so awfully oppressive. Because, actually, seeing beautiful, slim people in advertising or media is bad. Because we don't all look like that! And who's even to say what "beauty" is, anyway? It's all just socially constructed!

The final stage of that dead-end argument appeared as an ad here in Copenhagen thirty years later during the 2020 insanity: I passed it every day biking the boys to school for weeks. Next to other slim, fit Danes also riding their bikes. None of whom resembled the grotesque display of obesity towering over them on their commute from Calvin Klein.

While this campaign was laughably out of place in Copenhagen, it's possible that it brought recognition and representation in some parts of America. But a celebration of ideals it was not. That's the problem with the whole "representation" narrative. It proposes we're all better off if all we see is a mirror of ourselves, however obese, lazy, ignorant, or incompetent, because at least it won't be "unrealistic".

Screw that. The last thing we need is a patronizing message that however little you try, you're perfect just the way you are. No, the beauty of ideals is that they ask more of us. Ask us to pursue knowledge, fitness, and competence by taking inspiration from the best human specimens. Thankfully, no amount of post-modern deconstruction or academic theory babble seems capable of suppressing the intrinsic human yearning for excellence forever. The ideals are finally starting to emerge again.

3 days ago 4 votes