

More from David Crawshaw

How I program with LLMs

2025-01-06

This document is a summary of my personal experiences using generative models while programming over the past year. It has not been a passive process. I have intentionally sought ways to use LLMs while programming to learn about them. The result has been that I now regularly use LLMs while working and I consider their benefits net-positive on my productivity. (My attempts to go back to programming without them are unpleasant.) Along the way I have found oft-repeated steps that can be automated, and a few of us are working on building those into a tool specifically for Go programming: sketch.dev. It's very early but so far the experience has been positive.

Background

I am typically curious about new technology. It took very little experimentation with LLMs for me to want to see if I could extract practical value. There is an allure to a technology that can (at least some of the time) craft sophisticated responses to challenging questions. It is even more exciting to watch a computer attempt to write a piece of a program as requested, and make solid progress.

The only technological shift I have experienced that feels similar to me happened in 1995, when we first configured our LAN with a usable default route. We replaced the shared computer in the other room running Trumpet Winsock with a machine that could route a dialup connection, and all at once I had The Internet on tap. Having the internet all the time was astonishing, and felt like the future. Probably far more to me in that moment than to many who had been on the internet longer at universities, because I was immediately dropped into high internet technology: web browsers, JPEGs, and millions of people. Access to a powerful LLM feels like that.

So I followed this curiosity, to see if a tool that can generate something mostly not wrong most of the time could be a net benefit in my daily work. The answer appears to be yes: generative models are useful for me when I program. It has not been easy to get to this point. My underlying fascination with the new technology is the only way I have managed to figure it out, so I am sympathetic when other engineers claim LLMs are "useless." But as I have been asked more than once how I can possibly use them effectively, this post is my attempt to describe what I have found so far.

Overview

There are three ways I use LLMs in my day-to-day programming:

- Autocomplete. This makes me more productive by doing a lot of the more-obvious typing for me. It turns out the current state of the art can be improved on here, but that's a conversation for another day. Even the standard products you can get off the shelf are better for me than nothing. I convinced myself of that by trying to give them up. I could not go a week without getting frustrated by how much mundane typing I had to do before having a FIM model. This is the place to experiment first.
- Search. If I have a question about a complex environment, say "how do I make a button transparent in CSS", I will get a far better answer asking any consumer-based LLM (o1, sonnet 3.5, etc.) than I do using an old fashioned web search engine and trying to parse the details out of whatever page I land on. (Sometimes the LLM is wrong. So are people. The other day I put my shoe on my head and asked my two year old what she thought of my hat. She dealt with it and gave me a proper scolding. I can deal with LLMs being wrong sometimes too.)
- Chat-driven programming. This is the hardest of the three. This is where I get the most value out of LLMs, but also the one that bothers me the most. It involves learning a lot and adjusting how you program, and on principle I don't like that. It requires at least as much messing about to get value out of LLM chat as it does to learn to use a slide rule, with the added annoyance that it is a non-deterministic service that is regularly changing its behavior and user interface. Indeed, the long-term goal in my work is to replace the need for chat-driven programming, to bring the power of these models to a developer in a way that is not so off-putting. But as of now I am dedicated to approaching the problem incrementally, which means figuring out how to do best with what we have and improve it.

As this is about the practice of programming, this has been a fundamentally qualitative process that is hard to write about with quantitative rigor. The closest I will get to data is to say: it appears from my records that for every two hours of programming I do now, I accept more than 10 autocomplete suggestions, use an LLM for a search-like task once, and program in a chat session once.

The rest of this is about extracting value from chat-driven programming.

Why use chat at all?

Let me try to motivate this for the skeptical. A lot of the value I personally get out of chat-driven programming is that I reach a point in the day when I know what needs to be written, I can describe it, but I don't have the energy to create a new file, start typing, then start looking up the libraries I need. (I'm an early-morning person, so this is usually any time after 11am for me, though it can also be any time I context-switch into a different language/framework/etc.) LLMs perform that service for me in programming. They give me a first draft, with some good ideas, with several of the dependencies I need, and often some mistakes. Often, I find fixing those mistakes is a lot easier than starting from scratch.

This means chat-based programming may not be for you. I am doing a particular kind of programming, product development, which could be roughly described as trying to bring programs to a user through a robust interface. That means I am building a lot, throwing away a lot, and bouncing around between environments. Some days I mostly write typescript, some days mostly Go. I spent a week in a C++ codebase last month exploring an idea, and just had an opportunity to learn the HTTP server-sent events format. I am all over the place, constantly forgetting and relearning. If you spend more time proving your optimization of a cryptographic algorithm is not vulnerable to timing attacks than you do writing the code, I don't think any of my observations here are going to be useful to you.

Chat-based LLMs do best with exam-style questions

Give an LLM a specific objective and all the background material it needs so it can craft a well-contained code review packet, and expect it to adjust as you question it. There are two major elements to this:

- Avoid creating a situation with so much complexity and ambiguity that the LLM gets confused and produces bad results. This is why I have had little success with chat inside my IDE. My workspace is often messy, the repository I am working on is by default too large, and it is filled with distractions. One thing humans appear to be much better than LLMs at (as of January 2025) is not getting distracted. That is why I still use an LLM via a web browser: I want a blank slate on which to craft a well-contained request.
- Ask for work that is easy to verify. Your job as a programmer using an LLM is to read the code it produces, think about it, and decide if the work is good. You can ask an LLM to do things you would never ask a human to do. "Rewrite all of your new tests introducing an <intermediate concept designed to make the tests easier to read>" is an appalling thing to ask a human; you're going to have days of tense back-and-forth about whether the cost of the work is worth the benefit. An LLM will do it in 60 seconds and not make you fight to get it done. Take advantage of the fact that redoing work is extremely cheap.

The ideal task for an LLM is one where it needs to use a lot of common libraries (more than a human can remember, so it is doing a lot of small-scale research for you), where it is working to an interface you designed or producing a small interface you can quickly verify as sensible, and where it can write readable tests. Sometimes this means choosing the library for it, if you want something obscure (though with open source code LLMs are quite good at this).

You always need to pass an LLM's code through a compiler and run the tests before spending time reading it. They all produce code that doesn't compile sometimes. (They make errors I find surprisingly human; every time I see one I think, there but for the grace of God go I.) The better LLMs are very good at recovering from their mistakes; often all they need is for you to paste the compiler error or test failure into the chat and they fix the code.

Extra code structure is much cheaper

There are vague tradeoffs we make every day around the cost of writing, the cost of reading, and the cost of refactoring code. Let's take Go package boundaries as an example. The standard library has a package "net/http" that contains some fundamental types for dealing with wire format encoding, MIME types, etc. It contains an HTTP client, and an HTTP server. Should it be one package, or several? Reasonable people can disagree! So much so, I do not know if there is a correct answer today. What we have works; after 15 years of use it is still not clear to me that some other package arrangement would work better.

Advantages of a larger package include: centralized documentation for callers, easier initial writing, easier refactoring, and easier sharing of helper code without devising robust interfaces for it (which often involves pulling the fundamental types of a package out into yet another leaf package filled with types). The disadvantages include the package being harder to read because many different things are going on (try reading the net/http client implementation without tripping up and finding yourself in the server code for a few minutes), or it being harder to use because there is too much going on in it. For example, I have a codebase that uses a C library in some fundamental types, but parts of the codebase need to be in a binary widely distributed to many platforms that does not technically need the C library, so the codebase has more packages than you might expect, isolating the use of the C library to avoid cgo in the multi-platform binary.

There are no right answers here; instead we are trading off different types of work that an engineer will have to do (upfront and ongoing). LLMs influence those tradeoffs:

- As LLMs do better with exam-style questions, more and smaller packages make it easier to give a complete and yet isolated context for a piece of work. This is true for humans too, which is why we use packages at all, but we trade off package size against the extra typing/plumbing/filing to make more readable code. With an LLM both doing and benefiting from a big chunk of that extra work, the tradeoff shifts. (As a bonus, we humans get more readable code!)
- Smaller and more numerous packages can be compiled and tested independently of unrelated code. This is helpful for the LLM dev cycle, as it means changes can be compiled and tested before other dependent packages are refactored, and the package has a simpler test environment.

An example

Let me work an example to combine a few of the discussed ideas:

    Write a reservoir sampler for the quartiles of floats.

First off, package structure. Were I doing this before LLMs, I would have chosen to have some sort of streamstat package that contained several algorithms, maybe one per file. This does not seem to be a unique opinion; here is an open source package following that model. Now, I want just this one algorithm in its own package, quantile. Other variants or related algorithms can have their own package.

Next up, what do we get from an LLM? The first pass is not bad. That prompt, with some details about wanting it in Go, got me quartile_sampler.go:

    // QuartileSampler maintains a reservoir sample to estimate quartiles of a stream
    // of float64 values. It uses reservoir sampling to maintain a representative
    // sample when processing large streams of data.
    type QuartileSampler struct {
        reservoir []float64
        seen      int
        maxSize   int
        rng       *rand.Rand
    }

    // NewQuartileSampler creates a new sampler that maintains up to maxSize samples.
    // A larger maxSize provides more accurate estimates but uses more memory.
    func NewQuartileSampler(maxSize int) *QuartileSampler { … }

    // Add adds a new value to the sampler.
    func (qs *QuartileSampler) Add(value float64) { … }

    // Quartiles returns the estimated 25th, 50th (median), and 75th percentiles
    // of the data seen so far. Returns (0,0,0) if no data has been added.
    func (qs *QuartileSampler) Quartiles() (q1, median, q3 float64) {
        if len(qs.reservoir) == 0 {
            return 0, 0, 0
        }

        // Make a copy to avoid modifying the reservoir
        sorted := make([]float64, len(qs.reservoir))
        copy(sorted, qs.reservoir)
        slices.Sort(sorted)

        n := len(sorted)
        q1 = percentile(sorted, 0.25)
        median = percentile(sorted, 0.50)
        q3 = percentile(sorted, 0.75)
        return q1, median, q3
    }

The core interface is good too:

    func NewQuartileSampler(maxSize int) *QuartileSampler
    func (qs *QuartileSampler) Add(value float64)
    func (qs *QuartileSampler) Quartiles() (q1, median, q3 float64)

Great! There are also tests.
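One note on the listing: the NewQuartileSampler and Add bodies are elided, as is the percentile helper that Quartiles calls. For readers who want a runnable file, here is a minimal sketch of what those bodies might look like. This is my reconstruction under textbook assumptions (Algorithm R reservoir sampling, linear interpolation between ranks), not the model's actual output; it assumes math/rand and time are imported alongside slices.

    // NewQuartileSampler creates a sampler bounded at maxSize values.
    // (A sketch: this body is a hypothetical reconstruction.)
    func NewQuartileSampler(maxSize int) *QuartileSampler {
        return &QuartileSampler{
            reservoir: make([]float64, 0, maxSize),
            maxSize:   maxSize,
            rng:       rand.New(rand.NewSource(time.Now().UnixNano())),
        }
    }

    // Add implements textbook reservoir sampling ("Algorithm R"): keep the
    // first maxSize values, then replace a random slot with probability
    // maxSize/seen, so every value seen so far is equally likely to be kept.
    func (qs *QuartileSampler) Add(value float64) {
        qs.seen++
        if len(qs.reservoir) < qs.maxSize {
            qs.reservoir = append(qs.reservoir, value)
            return
        }
        if j := qs.rng.Intn(qs.seen); j < qs.maxSize {
            qs.reservoir[j] = value
        }
    }

    // percentile returns the p-th quantile (0 <= p <= 1) of an already-sorted,
    // non-empty slice, interpolating linearly between adjacent ranks.
    func percentile(sorted []float64, p float64) float64 {
        rank := p * float64(len(sorted)-1)
        lo := int(rank)
        if lo+1 >= len(sorted) {
            return sorted[len(sorted)-1]
        }
        frac := rank - float64(lo)
        return sorted[lo] + frac*(sorted[lo+1]-sorted[lo])
    }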
An aside: this may be the place to stop. Sometimes I use LLM codegen as a form of specialized search. E.g. I'm curious about reservoir sampling, but want to see how the algorithm would be applied under some surprising constraint, for example time-windowed sampling. Instead of doing a literature search I might amend my prompt for an implementation that tracks freshness. (I could also ask it to include references to the literature in the comments, which I could manually check to see if it's making things up or if there's some solid research to work from.) Often I spend 60 seconds reading some generated code, see an obvious trick I hadn't thought of, then throw it away and start over. Now I know the trick is possible. This is why it is so hard to attribute value generated by LLMs. Yes, sometimes it makes bad code, gets stuck in a rut, makes up something impossible (it hallucinated a part of the monaco API I wish existed the other day) and wastes my time. It can also save me hours by pointing out something relevant I don't know.

Back to the code. Fascinatingly, the initial code produced didn't compile. In the middle of the Quartiles implementation there was the line:

    n := len(sorted)

Which is a fine line; sorted is a slice defined a few lines earlier. But the value is never used, so gopls (and the Go compiler if you run go build) immediately says:

    declared and not used: n

This is a very easy fix. If I paste the error back into the LLM it will correct it. Though in this case, as I'm reading the code, it's quite clear to me that I can just delete the line myself, so I do.

Now the tests. I got what I expected. In quartile_sampler_test.go:

    func TestQuartileSampler(t *testing.T) {
        tests := []struct {
            name    string
            data    []float64
            wantQ1  float64
            wantMed float64
            wantQ3  float64
            epsilon float64 // allowed error for floating point comparison
        }{
            {
                name:    "basic sequence",
                data:    []float64{1, 2, 3, 4, 5, 6, 7},
                wantQ1:  2,
                wantMed: 4,
                wantQ3:  6,
                epsilon: 0.1,
            },
            …
        }

        for _, tt := range tests {
            t.Run(tt.name, func(t *testing.T) {
                qs := NewQuartileSampler(100) // Use large reservoir for accurate testing
                for _, v := range tt.data {
                    qs.Add(v)
                }

                q1, med, q3 := qs.Quartiles()
                if math.Abs(q1-tt.wantQ1) > tt.epsilon {
                    t.Errorf("Q1 = %v, want %v (±%v)", q1, tt.wantQ1, tt.epsilon)
                }
                if math.Abs(med-tt.wantMed) > tt.epsilon {
                    t.Errorf("Median = %v, want %v (±%v)", med, tt.wantMed, tt.epsilon)
                }
                if math.Abs(q3-tt.wantQ3) > tt.epsilon {
                    t.Errorf("Q3 = %v, want %v (±%v)", q3, tt.wantQ3, tt.epsilon)
                }
            })
        }
    }

Exactly the sort of thing I would write! I would run some cases through another implementation to generate expected outputs and copy them into a test like this. But there are two issues with this.

The first is the LLM did not run these numbers through another implementation. (To the best of my knowledge. When using a sophisticated LLM service, it is hard to say for sure what is happening behind the scenes.) It made them up, and LLMs have a reputation for being weak at arithmetic. So this sort of test, while reasonable for a human to write because we base it on the output of another tool, or if we are particularly old-school do some arithmetic ourselves, is not great coming from an LLM.

The second issue is that we can do better. I am happy we now live in a time when programmers write their own tests, but we do not hold ourselves to the same standards with tests as we do with production code. That is a reasonable tradeoff; there are only so many hours in the day. But what LLMs lack in arithmetical prowess, they make up for in enthusiasm. Let's ask for an even better test:

    In the tests, implement the simplest, most readable version of the standard
    code for quartiles over a fixed set of known values in a slice. Then pass the
    test cases through the standard code and the reservoir sampler and confirm
    they are within an epsilon of each other. Structure the comparison code such
    that it can be used in a fuzz test too.

This got us some new test code:

    // referenceQuartiles calculates the exact quartiles for a slice of float64 values
    // using linear interpolation, matching the behavior expected from the sampler.
    func referenceQuartiles(data []float64) (q1, median, q3 float64) { … }

    // compareQuartiles checks if two sets of quartiles are within epsilon of each other.
    // Returns true if they match within the tolerance, false otherwise.
    func compareQuartiles(q1a, meda, q3a, q1b, medb, q3b, epsilon float64) bool { … }

    // checkQuartiles is a test helper that compares sampler output against the reference
    // implementation and reports any differences.
    func checkQuartiles(t *testing.T, data []float64, epsilon float64) {
        t.Helper()

        // Get reference values
        wantQ1, wantMed, wantQ3 := referenceQuartiles(data)

        // Get sampler values using a large reservoir for accuracy
        qs := NewQuartileSampler(1000)
        for _, v := range data {
            qs.Add(v)
        }
        gotQ1, gotMed, gotQ3 := qs.Quartiles()

        if !compareQuartiles(gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon) {
            t.Errorf("Quartiles mismatch:\ngot (q1=%v, med=%v, q3=%v)\nwant (q1=%v, med=%v, q3=%v)\nepsilon=%v",
                gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon)
        }
    }

The original test from above has been reworked to use checkQuartiles, and we have something new:

    func FuzzQuartileSampler(f *testing.F) {
        // Add some seed corpus
        f.Add([]float64{1, 2, 3, 4, 5})

        f.Fuzz(func(t *testing.T, data []float64) {
            // Use a larger epsilon for fuzzing since we might get more extreme values
            checkQuartiles(t, data, 0.2)
        })
    }

This is fun, because it's wrong. My running gopls tool immediately says:

    fuzzing arguments can only have the following types: string, bool, float32,
    float64, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64,
    []byte

Pasting that error back into the LLM gets it to regenerate the fuzz test such that it is built around a func(t *testing.T, data []byte) function that uses math.Float64frombits to extract floats from the data slice. Interactions like this point us towards automating the feedback from tools: all it needed was the obvious error message to make solid progress towards something useful. I was not needed.

Doing a quick survey of the last few weeks of my LLM chat history shows (which, as I mentioned earlier, is not a proper quantitative analysis by any measure) that more than 80% of the time there is a tooling error, the LLM can make useful progress without me adding any insight. About half the time it can completely resolve the issue without me saying anything of note; I am just acting as the messenger.
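The obvious next step is to make a computer the messenger. A minimal sketch of what that loop could look like, in the spirit of the tooling discussed at the end of this post: askLLM here is a hypothetical stand-in for a call to a model API, and the rest is the standard library plus the go tool (imports: fmt, os, os/exec, path/filepath).

    // askLLM is assumed to exist elsewhere: a thin client for some chat API.
    var askLLM func(prompt string) string

    // fixUntilBuilds writes the model's source to disk, runs `go build`, and
    // while the build fails, hands the compiler output straight back to the
    // model. A sketch of the idea, not sketch.dev's actual implementation.
    func fixUntilBuilds(dir, file, source string, maxRounds int) (string, error) {
        for i := 0; i < maxRounds; i++ {
            if err := os.WriteFile(filepath.Join(dir, file), []byte(source), 0o644); err != nil {
                return "", err
            }
            cmd := exec.Command("go", "build", "./...")
            cmd.Dir = dir
            out, err := cmd.CombinedOutput()
            if err == nil {
                return source, nil // it compiles; now a human can spend time reading it
            }
            // Act as the messenger: paste the error into the chat.
            source = askLLM("This Go code:\n\n" + source +
                "\n\nfails to build with:\n\n" + string(out) + "\n\nPlease fix it.")
        }
        return "", fmt.Errorf("still not compiling after %d rounds", maxRounds)
    }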
Where are we going?

Better tests, maybe even less DRY

There was a programming movement some 25 years ago focused around the principle "don't repeat yourself." As is so often the case with short snappy principles taught to undergrads, it got taken too far. There is a lot of cost associated with abstracting out a piece of code so it can be reused: it requires creating intermediate abstractions that must be learned, and it requires adding features to the factored-out code to make it maximally useful to the maximum number of people, which means we depend on libraries filled with useless distracting features. The past 10-15 years has seen a far more tempered approach to writing code, with many programmers understanding it is better to reimplement a concept if the cost of sharing the implementation is higher than the cost of implementing and maintaining separate code. It is far less common for me to write on a code review "this isn't worth it, separate the implementations." (Which is fortunate, because people really don't want to hear things like that after they have done all the work.) Programmers are getting better at tradeoffs.

What we have now is a world where the tradeoffs have shifted. It is now easier to write more comprehensive tests. You can have the LLM write the fuzz test implementation you want but didn't have the hours to build properly. You can spend a lot more time writing tests to be readable, because the LLM is not sitting there constantly thinking "it would be better for the company if I went and picked another bug off the issue tracker than doing this." So the tradeoff shifts in favor of having more specialized implementations.

The place where I expect this to be most visible is language-specific REST API wrappers. Every major company API comes with dozens of these: usually low quality wrappers, written by people who aren't actually using their implementations for a specific goal, but instead are trying to capture every nook and cranny of an API in a large and complex interface. Even when it is done well, I have found it easier to go to the REST documentation (usually a set of curl commands) and implement a language wrapper for the 1% of the API I actually care about. It cuts down the amount of the API I need to learn upfront, and it cuts down how much future programmers (myself) reading the code need to understand.

For example, as part of my recent work on sketch.dev I implemented a Gemini API wrapper in Go. Even though the official wrapper in Go has been carefully handcrafted by people who know the language well and clearly care, there is a lot to read to understand it:

    $ go doc -all genai | wc -l
    1155

My simplistic initial wrapper was 200 lines of code total: one method, three types. Reading the entire implementation is 20% of the work of reading the documentation of the official package, and if you decide to try digging into its implementation you will discover that it is a wrapper around another largely code-generated implementation with protos and grpc and the works. All I want is to cURL and parse a JSON object.
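For flavor, here is a sketch of what such a wafer-thin client can look like. The endpoint, field names, and types below are illustrative placeholders (not the actual Gemini API, whose schema differs); the point is the shape: one method, three types, POST some JSON, parse some JSON.

    // A deliberately tiny API client. Everything here is hypothetical;
    // real APIs differ in endpoint, auth, and schema.
    type Client struct {
        APIKey  string
        BaseURL string // e.g. "https://api.example.invalid"
        HTTP    *http.Client
    }

    type Request struct {
        Prompt string `json:"prompt"`
    }

    type Response struct {
        Text string `json:"text"`
    }

    func (c *Client) Generate(ctx context.Context, req Request) (*Response, error) {
        body, err := json.Marshal(req)
        if err != nil {
            return nil, err
        }
        hreq, err := http.NewRequestWithContext(ctx, http.MethodPost,
            c.BaseURL+"/v1/generate", bytes.NewReader(body))
        if err != nil {
            return nil, err
        }
        hreq.Header.Set("Content-Type", "application/json")
        hreq.Header.Set("Authorization", "Bearer "+c.APIKey)

        hres, err := c.HTTP.Do(hreq)
        if err != nil {
            return nil, err
        }
        defer hres.Body.Close()
        if hres.StatusCode != http.StatusOK {
            msg, _ := io.ReadAll(hres.Body)
            return nil, fmt.Errorf("generate: %s: %s", hres.Status, msg)
        }

        res := new(Response)
        if err := json.NewDecoder(hres.Body).Decode(res); err != nil {
            return nil, err
        }
        return res, nil
    }

(Imports: bytes, context, encoding/json, fmt, io, net/http.)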
There obviously comes a point in a project, where Gemini is the foundation of the entire app, where nearly every feature is used, where building on gRPC aligns well with the telemetry system elsewhere in your organization, where you should use the large official wrapper. But most of the time it is so much more time consuming, both upfront and ongoing, to do so, given we almost always want only some wafer-thin sliver of whatever API we need to use today, that custom clients, largely written by a GPU, are far more effective for getting work done.

So I foresee a world with far more specialized code, with fewer generalized packages, and more readable tests. Reusable code will continue to thrive around small robust interfaces and otherwise will be pulled apart into specialized code. Depending how well this is done, it will lead to either better software or worse software. I would expect both, with a long-term trend towards better software by the metrics that matter.

Automating these observations: sketch.dev

As a programmer my instinct is to make computers do work for me. It is a lot of work getting value out of LLMs; how can a computer do it? I believe the key to solving a problem is not to overgeneralize. Solve a particular problem and then expand slowly. So instead of building a general-purpose UI for chat programming that is just as good at COBOL as it is for Haskell, we want to focus on one particular environment. The bulk of my programming is in Go, and so what I want is easy to imagine for a Go programmer:

- something like the Go playground, built around editing a package and tests
- a chat interface onto editable code
- a little UNIX env where we can run go get and go test
- goimports integration
- gopls integration
- automatic model feedback: on model edit, run go get, go build, and go test, then feed missing packages, compiler errors, and test failures back to the model to try and get them fixed automatically

A few of us have built an early prototype of this: sketch.dev. The goal is not a "Web IDE" but rather to challenge the notion that chat-based programming even belongs in what is traditionally called an IDE. IDEs are collections of tools arranged for people. It is a delicate environment where I know what is going on. I do not want an LLM spewing its first draft all over my current branch. While an LLM is ultimately a developer tool, it is one that needs its own IDE to get the feedback it needs to operate effectively. Put another way: we didn't embed goimports into sketch for it to be used by humans, but to get Go code closer to compiling using automatic signals, so that the compiler can provide better error feedback to the LLM driving it. It might be better to think of sketch.dev as a "Go IDE for LLMs".

This is all very recent work with a lot left to do, e.g. git integration so we can load existing packages for editing and drop the results on a branch. Better test feedback. More console control. (If the answer is to run sed, run sed. Be you the human or the LLM.) We are still exploring, but are convinced that focusing an environment for a particular kind of programming will give us better results than the generalized tool.

jsonfile: a quick hack for tinkering

2024-02-06

The year is 2024. I am on vacation and dream up a couple of toy programs I would like to build. It has been a few years since I built a standalone toy; I have been busy. So instead of actually building any of the toys I think of, I spend my time researching if anything has changed since the last time I did it. Should I pick up new tools or techniques?

It turns out lots of things have changed! There's some great stuff out there, including decent quorum-write regional cloud databases now. Oh, and the ability to have a fascinating hour-long novel conversation with transistors. But things are still awkward for small fast tinkering.

Going back in time, I struggled constantly rewriting the database for the prototype for Tailscale, so I ended up writing my in-memory objects out as a JSON file. It went far further than I planned. Somewhere in the intervening years I convinced myself it must have been a bad idea even for toys, given all the pain migrating away from it caused. But now that I find myself in an empty text editor wanting to write a little web server, I am not so sure. The migration was painful, and a lot of that pain was borne by others (which is unfortunate, I find handing a mess to someone else deeply unpleasant). Much of that pain came from the brittle design of the caching layers on top (also my doing), which came from not moving to an SQL system soon enough.

I suspect, considering the process in retrospect, that a great deal of that pain can be avoided by committing to migrating directly to an SQL system the moment you need an index. You can pay down a lot of exploratory design work in a prototype before you need an index; while n is small, full scans are fine. But you don't make it very far into production before one of your values of n crosses something around a thousand and you long for an index.

With a clear exit strategy for avoiding big messes, the JSON file as database is still a valid technique for prototyping. And having spent a couple of days remembering what a misery it is to write a unit test for software that uses postgresql (mocks? docker?? for a database program I first ran on a computer with less power than my 2024 wrist watch?) and struggling to figure out how to make my cgo sqlite cross-compile to Windows, I'm firmly back to thinking a JSON file can be a perfectly adequate database for a 200-line toy.

Consider your requirements!

Before you jump into this and discover it won't work, or, just as bad, dismiss the small and unscaling as always a bad idea, consider the requirements of your software. Using a JSON file as a database means your software:

- Doesn't have a lot of data. Keep it to a few megabytes.
- Has a data structure boring enough not to require indexes. You don't need something interesting like full-text search.
- Does plenty of reads, but writes are infrequent. Ideally no more than one every few seconds.

Programming is the art of tradeoffs. You have to decide what matters and what does not. Some of those decisions need to be made early, usually with imperfect information. You may very well need a powerful SQL DBMS from the moment you start programming, depending on the kind of program you're writing!

A reference implementation

An implementation of jsonfile (which Brad called JSONMutexDB, which is cooler because it has an x in it, but requires more typing) can fit in about 70 lines of Go. But there are a couple of lessons we ran into in the early days of Tailscale that can be paid down relatively easily, growing the implementation to 85 lines. (More with comments!) I think it's worth describing the interesting things we ran into, both in code and here. You can find the implementation of jsonfile here: https://github.com/crawshaw/jsonfile/blob/main/jsonfile.go. The interface is:

    type JSONFile[Data any] struct { … }

    func New[Data any](path string) (*JSONFile[Data], error)
    func Load[Data any](path string) (*JSONFile[Data], error)

    func (p *JSONFile[Data]) Read(fn func(data *Data))
    func (p *JSONFile[Data]) Write(fn func(*Data) error) error
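To make the shape concrete, here is a hypothetical usage sketch against that interface. The AppData type and field names are mine for illustration, not from the library:

    // A toy app's entire persistence layer, using the interface above.
    type AppData struct {
        Users map[string]string // username -> email
    }

    func addUser(db *jsonfile.JSONFile[AppData], name, email string) error {
        return db.Write(func(d *AppData) error {
            if d.Users == nil {
                d.Users = make(map[string]string)
            }
            d.Users[name] = email
            return nil // returning an error instead discards the whole edit
        })
    }

    func countUsers(db *jsonfile.JSONFile[AppData]) int {
        var n int
        db.Read(func(d *AppData) { n = len(d.Users) })
        return n
    }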
There is some experience behind this design. In no particular order:

Transactions. One of the early pain points in the transition was figuring out the equivalent of when to BEGIN, COMMIT, and ROLLBACK. The first version exposed the mutex directly (which was later converted into a RWMutex). There is no advantage to paying this transition cost later. It is easy to box up read/write transactions with a callback. This API does that, and provides a great point to include other safety mechanisms.

Database corruption through partial writes. There are two forms of this. The first is if the write fn fails half-way through, having edited the db object in some way. To avoid this, the implementation first creates an entirely new copy of the DB before applying the edit, so the entire change set can be thrown away on error. Yes, this is inefficient. No, it doesn't matter. Inefficiency in this design is dominated by the I/O required to write the entire database on every edit. If you are concerned about the duplicate-on-write cost, you are not modeling I/O cost appropriately (which is important, because if I/O matters, switch to SQL).

The second is from a full disk. The easy way to write a file in Go is to call os.WriteFile, which the first implementation did. But that means:

- Truncating the database file
- Making multiple system calls to write(2)
- Calling close(2)

A failure can occur in any of those system calls, resulting in a corrupt DB. So this implementation creates a new file, loads the DB into it, and when that has all succeeded, uses rename(2). It is not a panacea; our operating systems do not make all the promises we wish they would about rename. But it is much better than the default.
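In isolation, that write path is a small, reusable pattern. A sketch of it, simplified from what the real library does (imports: os, path/filepath):

    // writeAtomic never leaves a torn file behind: write a sibling temp
    // file, flush it, then rename(2) it over the original path.
    func writeAtomic(path string, data []byte) error {
        f, err := os.CreateTemp(filepath.Dir(path), filepath.Base(path)+".tmp-")
        if err != nil {
            return err
        }
        defer os.Remove(f.Name()) // harmless no-op once the rename succeeds
        if _, err := f.Write(data); err != nil {
            f.Close()
            return err
        }
        if err := f.Sync(); err != nil { // get the bytes to disk before renaming
            f.Close()
            return err
        }
        if err := f.Close(); err != nil {
            return err
        }
        return os.Rename(f.Name(), path)
    }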
Memory aliasing. A nasty issue I have run into twice is aliasing memory. This involves doing something like:

    list := []int{1, 2, 3}
    db.Write(func() {
        db.List = list
    })
    list[0] = 10 // editing the database!

Some changes you may want to consider:

Backups. An intermediate version of this code kept the previous database file on write. But there's an easier and even more robust strategy: never rename the file back to the original. Always create a new file, mydb.json.<timestamp>. On starting, load the most recent file. Then when your data is worth backing up (if ever), have a separate program prune down the number of files and send them somewhere robust.

Constant memory. Not in this implementation, but something you may want to consider, is removing the risk of a Read function editing memory. You can do that with View* types generated by the viewer tool. It's neat, but more than quadruples the complexity of JSONFileDB, complicates the build system, and initially isn't very important in the sorts of programs I write. I have found several memory aliasing bugs in all the code I've written on top of a JSON file, but have yet to accidentally write when reading. Still, for large code bases Views are quite pleasant and well worth considering about the point when a project should move to a real SQL.

There is some room for performance improvements too (using a cloner instead of unmarshalling a fresh copy of the data for writing), though I must point out again that needing more performance is a good sign it is time to move on to SQLite, or something bigger.

It's a tiny library. Copy and edit as needed. It is an all-new implementation so I will be fixing bugs as I find them. (As a bonus: this was my first time using a Go generic! 👴 It went fine. Parametric polymorphism is ok.)

A final thought

Why go out of my way to devise an inadequate replacement for a database? Most projects fail before they start. They fail because the activation energy is too high. Our dreams are big and usually too much, as dreams should be. But software is not building a house or traveling the world. You can realize a dream with the tools you have on you now, in a few spare hours. This is the great joy of it, you are free from physical and economic constraint. If you start. Be willing to compromise almost everything to start.

new year, same plan

2022-12-31

Some months ago, the bill from GCE for hosting this blog jumped from nearly nothing to far too much for what it is, so I moved provider and needed to write a blog post to test it all. I could have figured out why my current provider hiked the price. Presumably I was Holding It Wrong and with just a few grip adjustments I could get the price dropped. But if someone mysteriously starts charging you more money, and there are other people who offer the same service, why would you stay?

This has not been a particularly easy year, for a variety of reasons. But here I am at the end of it, and beyond a few painful mistakes that in retrospect I did not have enough information to get right, I made mostly the same decisions I would again. There were a handful of truly wonderful moments. So the plan for 2023 is the same: keep the kids intact, and make programming more fun.

There is also the question of Twitter. It took me a few years to develop the skin to handle the generally unpleasant environment. (I can certainly see why almost no old Twitter employees used their product.) The experience recently has degraded; there are still plenty of funny tweets, but far fewer moments of interesting content. Here is a recent exception, notable because it's the first time in weeks I learned anything from Twitter: https://twitter.com/lrocket/status/1608883621980704768. I now find more new ideas hiding in HN comments than on Twitter.

Many people I know have sort-of moved to Mastodon, but it has a pretty horrible UX that is just enough work that I, on the whole, don't enjoy it much. And the fascinating insights don't seem to be there yet, but I'm still reading and waiting.

On the writing side, it might be a good idea to lower the standards (and length) of my blog posts to replace writing tweets. But maybe there isn't much value in me writing short notes anyway; are my contributions as fascinating as the ones I used to sift through Twitter to read? Not really. So maybe the answer is to give up the format entirely. That might be something new for 2023.

Here is something to think about for the new year: http://www.shoppbs.pbs.org/now/transcript/transcriptNOW140_full.html

DAVID BRANCACCIO: There's a little sweet moment, I've got to say, in a very intense book– your latest– in which you're heading out the door and your wife says what are you doing? I think you say– I'm getting– I'm going to buy an envelope.

KURT VONNEGUT: Yeah.

DAVID BRANCACCIO: What happens then?

KURT VONNEGUT: Oh, she says well, you're not a poor man. You know, why don't you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I'm going to have a hell of a good time in the process of buying one envelope. I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don't know. The moral of the story is, is we're here on Earth to fart around. And, of course, the computers will do us out of that. And, what the computer people don't realize, or they don't care, is we're dancing animals. You know, we love to move around. And, we're not supposed to dance at all anymore.

log4j: between a rock and a hard place

2021-12-11

There is more than enough written on the mechanics of and mitigations for the recent severe RCE in log4j. On prevention, this is the most interesting widely-reshared insight I have seen:

    Log4j maintainers have been working sleeplessly on mitigation measures;
    fixes, docs, CVE, replies to inquiries, etc. Yet nothing is stopping people
    to bash us, for work we aren't paid for, for a feature we all dislike yet
    needed to keep due to backward compatibility concerns.

This is making the rounds because highly-profitable companies are using infrastructure they do not pay for. That is a worthy topic, but not the most interesting thing in this particular case, because it would not clearly have contributed to preventing this bug. It is the second statement in this tweet that is worthy of attention: the maintainers of log4j would have loved to remove this bad feature long ago, but could not because of the backwards compatibility promises they are held to.

What does backwards compatibility mean to me?

I am often heard to say that I love backwards compatibility, and that it is underrated. But what exactly do I mean? I don't mean that whenever I upgrade a dependency, I expect zero side effects. If a library function gets two times faster in an upgrade, that is a change in behavior that might break my software! But obviously the exact timings of functions can change between versions. In some extreme cases I need libraries to promise the algorithmic complexity of run time or memory usage, where I am providing extremely large inputs, or need constant-time algorithms to avoid timing attacks. But I don't need that from a logging library. So let me back up and describe what is important:

- I want to not spend much time upgrading a dependency
- I want any problems caused by the upgrade to be caught early, not in production
- I want to be able to build knowledge of the library over a long time, to hone my craft

The ideal version of the first is I run my package manager's upgrade command, execute the tests, commit the output, and not think about it any more. This means the API/ABI stays similar enough that the compiler won't break, the behavior of the library routines is similar enough the tests will pass, and no other constraints, such as total binary size limits, are exceeded. This is impossible in the general case. The only way to achieve it is to not make any changes at all. When we write down a promise, we leave lots of definitional holes in the promise. E.g. take the (generally excellent) Go compatibility promise:

    It is intended that programs written to the Go 1 specification will continue
    to compile and run correctly, unchanged, over the lifetime of that
    specification.

Here "correctly" means according to the Go language specification and the API documentation. The spec and the docs do not cover run time, memory use, or binary size. The next version of Go can be 10x slower and be compatible! But I can assure you if that were the case I would fail my goal of not spending much time upgrading a dependency. But the Go team know this, and work to the spirit of their promise. Very occasionally they break things, for security reasons, and when they do I have to spend time upgrading a dependency for a really good reason: my program needs it.

If I want my program to work correctly I should write tests for all the behaviors I care about. But like all programmers, I am short on hours in the day to do all that needs doing, and never have enough tests. So whenever a change in behavior happens in an upstream library that my tests don't catch but makes it into production, my instinct is to blame upstream. This is of course unfair; the burden for releasing good programs is borne by the person pressing the release button. But it is an expression of a programming social contract that has taken hold: a good software project tries to break downstream as little as possible, and when we do break downstream, we should do our best to make the breakage obvious and easy to fix.
No compatibility promise I have seen covers the spirit of minimizing breakage and moving it to the right part of the process. As far as I can tell, engineers aren't taught this in school, and many have never heard the concept articulated. So much of best practice in releasing libraries is learned on the job and not well communicated (yet). Good upstream dependencies are maintained by people who have figured this out the hard way and do their best by their users. As a user, it is extremely hard to know what kind of library you are getting when you first consider a dependency, unless it is a very old and well established project.

This is where software goes wrong the most for me. I want, year after year, to come back to a tool and be able to apply the knowledge I acquired the last time I used it to new things I learn, and build on it. I want to hone my craft by growing a deep understanding of the tools I use. Some new features are additive. If I buy a new speed square for framing, and it has a notch on it my old one didn't that I can use as a shortcut in marking up a beam, its presence does not invalidate my old knowledge. If the new interior notch replaces a marking that was on the outside of the square, then when I go to find my trusty marking I remember from years ago, and it's missing, I need to stop and figure out a new way to solve this old problem. Maybe I will notice the new feature, or, more likely, I'll pull out the tape measure I know how to use and find my mark that (slower) way. If someone who knew what they were doing saw me they could correct me! But like programming, I'm usually making a mess with wood alone in a few spare hours on a Saturday.

When software "upgrades" invalidate my old knowledge, it makes me a worse programmer. I can spend time getting back to where I was, but that's time I am not spending improving on where I was. To give a concrete example: I will never be an expert at developing for macOS or iOS. I bounce into and out of projects for Apple devices, spending no more than 10% of my hours on their platform. Their APIs change constantly. The buttons in Xcode move so quickly I sometimes wonder if it's happening before my eyes. Try looking up some Swift syntax on stack overflow and you'll find the answers are constantly edited for the latest version of Swift. At this point, I assume every time I come back to macOS/iOS that I know nothing and am learning the platform for the first time.

Compare the shifting sands of Swift with the stability of awk. I have spent not a tenth of the time learning awk that I have spent relearning Swift, and yet I am about as capable in each language. An awk one-liner I learned 20 years ago still works today! When I see someone use awk to solve a problem, I'm enthusiastic to learn how they did it, because I know that 20 years from now the trick will work.

Backwards compatibility should not have forced log4j to keep LDAP/JNDI URLs

By what backwards compatibility means to me, a project like log4j will break fewer people by removing a feature like the JNDI URLs than by marking an old API method with some mechanical deprecation notice that causes a build process's equivalent of -Wall to fail and moving it to a new name. They will, in practice, break fewer people removing this feature than they would by slowing down a critical path by 10%, which is the sort of thing that can trivially slip into a release unnoticed.

But the spirit of compatibility promises appears to be poorly understood across our industry (as software updates demonstrate to me every week), and so we lean on the pseudo-legalistic wording of project documentation to write strongly worded emails or snarky tweets any time a project makes work for us (because most projects don't get it, so surely every example of a breakage must be a project that doesn't get it, not a good reason), and upstream maintainers become defensive and overly conservative. The result is now everyone's Java software is broken!

We as a profession misunderstand and misuse the concept of backwards compatibility, both upstream and downstream, by focusing on narrow legalistic definitions instead of outcomes. This is a harder, longer topic that maybe I'll find enough clarity to write properly about one day.

The other side of compatibility: being cautious adding features

It should be easy to hack up code and share it! We should also be cautious about adding burdensome features. This particular bug feels impossibly strange to me, because my idea of a logging API is file descriptor number 2 with the write system call. None of the bells and whistles are necessary and we should be conservative about our core libraries. Indeed, libraries like these are why I have been growing ever-more skeptical of using any dependencies, and now force myself to read a big chunk of any library before adding it to a project.
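That whole philosophy fits in a few lines of Go; a sketch of the entire logging API I have in mind (my illustration, not a library):

    // logf is the whole API: format the message and hand it to file
    // descriptor 2 with a write. Nothing parses URLs, nothing dials LDAP.
    func logf(format string, args ...any) {
        fmt.Fprintf(os.Stderr, format+"\n", args...)
    }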
But I have also written my share of misfeatures, as much as I would like to forget them. I am thankful my code I don't like has never achieved the success or wide use of log4j, and I cannot fault diligent (and unpaid!) maintainers doing their best under those circumstances.

Software I’m thankful for

2021-11-25

A few of the things that come to mind, this thanksgiving.

open/read/write/close

Most Unix-ish APIs, from files to sockets, are a bit of a mess today. Endless poorly documented sockopts, unexpected changes in write semantics across FSs and OSes, good luck trying to figure out mtimes. But despite the mess, I can generally wrap my head around open/read/write/close. I can strace a binary and figure out the sequence and decipher what's going on. Sprinkle in some printfs and state is quickly debuggable. Stack traces are useful!

Enormous effort has been spent on many projects to replace this style of I/O programming, for efficiency or aesthetics, often with an asynchronous bent. I am thankful for this old reliable standby of synchronous open/read/write/close, and hope to see it revived and reinvented throughout my career to be cleaner and simpler.

goroutines

Goroutines are coroutines with compiler/runtime optimized yielding, to make them behave like threads. This breathes new life into the previous technology I'm thankful for: simple blocking I/O. With goroutines it becomes cheap to write large-scale blocking servers without running out of OS resources (like heavy threads, on OSes where they're heavy, or FDs). It also makes it possible to use blocking interfaces between "threads" within a process without paying the ever-growing price of a context switch in the post-spectre world.
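As an illustration of the style (my example, not from the original post), here is a whole TCP echo server written with nothing but blocking calls, one goroutine per connection:

    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", "localhost:7777")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept() // blocks, cheaply: it only parks this goroutine
            if err != nil {
                log.Fatal(err)
            }
            go func() {
                defer conn.Close()
                io.Copy(conn, conn) // synchronous echo until the peer hangs up
            }()
        }
    }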
Tailscale

This is the first year where the team working on Tailscale has outgrown and eclipsed me to the point where I can be thankful for Tailscale without feeling like I'm thanking myself. Many of the wonderful new features that let me easily wire machines together wherever they are, like userspace networking or MagicDNS, are not my doing. I'm thankful for the product, and the opportunity to work with the best engineering team I've ever had the privilege of being part of.

SQLite

Much like open/read/write/close, SQLite is an island of stability in a constantly changing technical landscape. Techniques I learned 10 or 15 years ago using SQLite work today. As a bonus, it does so much more than then: WAL mode for highly-concurrent servers, advanced SQL like window functions, excellent ATTACH semantics. It has done all of this while keeping the number of, in the project's own language, "goofy design" decisions to a minimum and holding true to its mission of being "lite". I aspire to write such wonderful software.

JSON

JSON is the worst form of encoding — except for all the others that have been tried. It's complicated, but not too complicated. It's not easily read by humans, but it can be read by humans. It is possible to extend it in intuitive ways. When it gets printed onto your terminal, you can figure out what's going on without going and finding the magic decoder ring of the week. It makes some things that are extremely hard with XML or INI easy, without introducing accidental Turing completeness or turning country codes into booleans. Writing software is better for it, and shows the immense effect carefully describing something can do for programming. JSON was everywhere in our JavaScript before the term was defined; the definition let us see it and use it elsewhere.

WireGuard

WireGuard is a great demonstration of why the total complexity of the implementation ends up affecting the UX of the product. In theory I could have been making tunnels between my devices for years with IPSec or TLS; in practice I'd completely given it up until something came along that made it easier. It didn't make it easier by putting a slick UI over complex technology, it made the underlying technology simpler, so even I could (eventually) figure out the configuration. Most importantly, by not eating my entire complexity budget with its own internals, I could suddenly see it as a building block in larger projects. Complexity makes more things possible, and fewer things possible, simultaneously. WireGuard is a beautiful example of simplicity and I'm thankful for it.

The speed of the Go compiler

Before Go became popular, the fast programming language compilers of the 90s had mostly fallen by the wayside, to be replaced with a bimodal world of interpreters/JITs on one side and creaky slow compilers attempting to produce extremely optimal code on the other. The main Go toolchain found, or rediscovered, a new optimal point in the plane of tradeoffs for programming languages to sit: ahead of time compiled, but with a fast less-than-optimal compiler. It has managed to continue to hold that interesting, unstable equilibrium for a decade now, which is incredibly impressive. (E.g. I personally would love to improve its inliner, but know that it's nearly impossible to get too far into that project without sacrificing a lot of the compiler's speed.)

GCC

I've always been cranky about GCC: I find its codebase nearly impossible to modify, it's slow, the associated ducks I need to line up to make it useful (binutils, libc, etc) blow out the complexity budget on any project I try to start before I get far, and it is associated with GNU, which I used to view as an oddity and now view as a millstone around the neck of an otherwise excellent software project. But these are all the sorts of complaints you only make when using something truly invaluable. GCC is invaluable. I would never have learned to program if a free C compiler hadn't been available in the 90s, so I owe it my career. To this day, it vies neck-and-neck with LLVM for best performing object code. Without the competition between them, compiler technology would stagnate. And while LLVM now benefits from $10s or $100s of millions a year in Silicon Valley salaries working on it, GCC does it all with far less investment. I'm thankful it keeps on going.

vim

I keep trying to quit vim. I keep ending up inside a terminal, inside vim, writing code. Like SQLite, vim is an island of stability over my career. While I wish IDEs were better, I am extremely thankful for tools that work and respect the effort I have taken to learn them, decade after decade.

ssh

SSH gets me from here to there, and has done since ~1999. There is a lot about ssh that needs reinventing, but I am thankful for stable, reliable tools. It takes a lot of work to keep something like ssh working and secure, and if the maintainers are ever looking for someone to buy them a round they know where to find me.

The public web and search engines

How would I get anything done without all the wonderful information on the public web and search engines to find it? What an amazing achievement. Thanks everyone, for making computers so great.


More in programming

Dreams of Late Summer

Here on a summer night in the grass and lilac smell
Drunk on the crickets and the starry sky,
Oh what fine stories we could tell
With this moonlight to tell them by.

A summer night, and you, and paradise,
So lovely and so filled with grace,
Above your head, the universe has hung its …

chapter two

I had watched enough true crime to know that you should never talk to the police. And I wasn’t arrogant enough to believe that I was different. While I felt like I knew the interrogation tactics in and out, they were repeat customers of that interaction. I wasn’t going to call. I was going to ignore it. I’m not getting Reid techniqued. Why did they ask for me? This house was owned by my mother, how do they even know I live here? Wait who am I kidding, of course they know. I went to high school here, governments have records of that kind of thing. But still, why ask for me? Another thing was odd. We lived in Brooklyn, aka Kings County. Not Nassau County. These guys must have driven all the way here on a Saturday night. I felt like I was being watched. They wouldn’t drive all the way here to just leave a business card. I felt trapped in the house. Like they were a mountain lion on a rock perch and I was the prey in the valley below. They had the high ground and I didn’t know what they could see. But this was crazy, I didn’t do anything! Should I call them? Figure out what they want? No! That’s exactly what they want. They know I feel like this. This is exactly what they are going for. Another system carefully crafted based on years and years of “user feedback” designed to manipulate you into doing what it wants. But what if I’m doing what they want right now? Maybe they don’t want me to call. Maybe the real goal is to figure out what I do next. Watching and hoping I’ll go check on the body or something. But there wasn’t a body! If I did commit a crime this would all be a lot easier, I’d know why they were here and what they wanted and could plan my next move accordingly. I opened another Bud Light, took my clothes off, and got into bed. Even though there was nobody else home, I kept the sound off on the porn. Just in case they were listening. After I finished, I felt a bit more calm. Dude get a grip, all they did was leave a business card. Coming out of the paranoid spiral a bit, I realized what it must be about. It must have had to do with my Dad’s meeting. That was in Long Island, aka Nassau County. Probably some dumb financial crap. My mother was out with her friends in Manhattan, but she’d be home tonight and maybe she knew what the meeting was. It was now twenty to nine and I texted Brian. He’s like yea bro Dave just got here come through. And you still have that case of Bud Light? I put the beers in a backpack. Is this what the detective planned? Maybe I was playing right into the plot; arrest me for underage possession of alcohol and then get me to talk about what I knew. But I didn’t even know anything! This whole thing was stupid. I thought about how I got the beers, wondering if the whole thing was somehow a set-up. Totally nonsense thought. Kids buy beer with fake IDs all the time. When I got to Brian’s everything was normal. I walked around the back of his house and opened the screen door to his basement. There were three leather couches in a U-shape, two of which were sparsely occupied by Brian and Dave. I took my place on the third empty one and put my backpack on the center ottoman. “Pretty cool, right? Yea I found it in my Dad’s old stuff.” said Brian, referring to the inflated bag atop a device labeled Volcano sharing the ottoman with my backpack. “What is it?” “Bro it’s like an old vape. You put the weed in and plug it in to the wall.” He detached the cloudy bag from the device and demonstrated. 
If you pushed on the mouthpiece, it let air through and you could breathe in the vaporized drug. “It’s like a bong but chill.”

I inhaled. This probably wasn’t smart with how paranoid I was from the interaction earlier, but I felt safe in the basement. It was a summer night, I was with friends, I had drunk beer. Life was good.

Dave showed us this reel. It was a mouse in a maze, and it started from the mouse’s perspective. Kind of like a skater cam; wow, these things could scurry. Then it zoomed out so you could see the maze from the perspective of the experimenter. Then the back of his head looking down at the maze, cutting to sped-up dashcam video of him driving home from work. Zooming out again with a sparkling line showing his route through the grid of city streets. AI has done wonders for these video transitions. Maybe this whole video was AI.

“What if we’re the mouse,” said Dave in the most stereotypical stoner voice. He’d always find shit like this, in that way where, when you’re high, the thought seems really deep. But if you think about it more, it’s nonsense: that mouse is in a maze constructed by humans, and even if it doesn’t always feel like it, the society we live in is jointly constructed by all of us.

Brian showed a video of two girls at some Mardi Gras bead type of event licking one ice cream cone. He told us he wasn’t a virgin, but I didn’t really believe him.

It was a bit after midnight and it was time to go home. I hadn’t really thought about the interaction from earlier, but I started to again when I got outside. It was a half-mile walk back home; I was grateful to hear all the noises of the city. Even though I couldn’t see it, it reminded me that there was a society out there.

My mom’s car wasn’t in the driveway. Maybe she met a guy. Nothing too out of the ordinary. I unlocked the door, closed it behind me, locked both the knob and the deadbolt, went upstairs into my room, locked that door, and with the blanket of those three locks, a bunch of beers, and a couple hits of the Volcano, drifted off to sleep.

14 hours ago 2 votes
Thrice charmed at Rails World

The first Rails World in Amsterdam was a roaring success back in 2023. Tickets sold out in 45 minutes, the atmosphere was electric, and The Rails Foundation set a new standard for conference execution in the Ruby community. So when we decided to return to the Dutch capital for the third edition of the conference this year, the expectations were towering. And yet, Amanda Perino, our executive director and event organizer extraordinaire, managed to outdo herself and produced an even better show this year.

The venue we returned to was already at capacity the first time around, but Amanda managed to fit a third more attendees by literally using slimmer chairs! And I didn't hear any complaints from the folks who had to sit a little closer together in order for more people to enjoy the gathering. The increased capacity didn't come close to satisfying the increased demand, though. This year, tickets sold out in less than two minutes. Crazy. But for the 800+ people who managed to secure a pass, I'm sure it felt worth the refresh-the-website scramble to buy a ticket.

And, as in years past, Amanda's recording crew managed to turn around post-production on my keynote in less than 24 hours, so anyone disappointed with missing out on a ticket could at least be in the loop on all the awesome new Rails stuff we were releasing up to and during the conference. Every other session was recorded too, and will soon be on the Rails YouTube channel.

You can't stream the atmosphere, the enthusiasm, and the genuine love of Ruby on Rails, though. I was once again blown away by just how many incredible people and stories we have in this ecosystem. From entrepreneurs who've built million (or billion!) dollar businesses on Rails, to programmers who've been around the framework for decades, to people who just picked it up this year. It was a thrill to meet all of them, to take hundreds of selfies, and to talk about Ruby, Rails, and the Omarchy expansion pack for hours on the hallway track!

I've basically stopped doing prepared presentations at conferences, but Rails World is the one exception. I really try my best to put on a good show, present the highlights of what we've been working on in the past year at 37signals, and transfer the never-ending enthusiasm I continue to feel for this framework, this programming language, and this ecosystem. True, I may occasionally curse that commitment in the weeks leading up to the conference, but the responsibility is always rewarded during and after the execution with a deep sense of satisfaction.

Not everyone is so lucky as I've been to find their life's work early in their career, and to see it continue to blossom over the decades. I'm eternally grateful that I have. Of course, there have been ups and downs over the years — nothing is ever just a straight line of excitement up and to the right! — but we're oh-so-clearly on the up-up-up part of that curve at the moment.

I don't know whether it's just the wind or the whims, but Rails is enjoying an influx of a new generation of programmers at the moment. No doubt it helps when I get to wax poetic about Ruby for an hour with Lex Fridman in front of an audience of millions. No doubt Shopify's continued success eating the world of ecommerce helps. No doubt the stability, professionalism, and execution of The Rails Foundation are an aid. There are many auxiliary reasons why we're riding a wave at the moment, but key to it all is also that Ruby on Rails is simply really, really good!
Next year, with RailsConf finished, it's time to return to the US. Amanda has picked a great spot in Austin, and we're planning to dramatically expand the capacity, but I also fully expect that demand will continue to rise, especially in the most prosperous and successful market for Rails.

Thanks again to all The Rails Foundation members who believed in the vision for a new institution back in 2022. It looks like a no-brainer to join such a venture now, given the success of Rails World and everything else, but it actually took guts to sign on back then. I approached quite a few companies at the time who could see the value, but couldn't find the courage to support our work, as our industry was still held hostage by a band of bad ideas and terrible ideologies. All that nonsense is thankfully now long gone in the Rails world. We're enjoying a period of peak unity, excitement, progress, and determination to continue to push for end-to-end problem solving, open source, and freedom.

I can't tell you how happy it makes me feel when I hear from yet another programmer who credits Ruby on Rails with finding joy and beauty in writing web applications because of what I started over 22 years ago. It may sound trite, but it's true: it's an honor and a privilege. I hope to carry this meaningful burden for as long as my intellectual legs still let me stand.

See you next year in Austin? I hope so!

yesterday 5 votes
chapter one

I hadn’t lost my virginity yet. And it wasn’t for lack of trying; it seemed like the rest of my generation was no longer interested in sex. On some level, I understood where they were coming from; the whole act did seem kind of pointless. But after a few beers, that wasn’t how my mind was working.

I turned 19 last week. Dad flew in from Idaho, and it was the first time he was in the house I shared with my mother. He left when I was 12, and it was always apparent that parenting wasn’t the top thing on his mind. There was some meeting on Long Island. That’s probably why he was there, in addition to the fact that he knew mom wouldn’t make him sleep on the couch. He had many reasons to be in New York that weren’t me. My birthday was just a flimsy pretense.

He’d worked on Wall Street the whole time he was around, a quant. He wrote programs that made other people rich. But something happened to him right before he left. A crisis of conscience, perhaps; he was spiraling for weeks, cursing the capitalist system, calling my mother a gold-digging whore (which was mostly true), and saying things needed to change. Then he packed a single backpack and left for Idaho.

I visited him out there once, my sophomore year. He had a camouflaged one-room cabin in the middle of a spruce forest, but instead of the hunting or fishing stuff you might expect, the walls were adorned with electrical test equipment and various things that looked like they were out of a biology or chemistry lab. I didn’t know much about this stuff, and that wasn’t what he wanted to talk about anyway. He wanted to talk about “man shit” like nature and women and not being life’s bitch. I tried to act like I did, but I didn’t really listen. All I remember is how eerily quiet the night was; I could hear every animal movement outside. My dad said you get used to it.

Brian was having a party tonight. Well, okay, party is a lofty way to describe it. He’d replaced the fluorescent lights in his mom’s basement with blacklights, and we’d go over there to drink beer and smoke weed and sit around on our phones and scroll. And sometimes someone would laugh at something and share it with the group. I had a case of Bud Light left over from the last party and drank two of them today. Hence the thinking about sex, and not thinking that thinking about sex was stupid. People wouldn’t be going over there for a few more hours, so I lay in my bed, drank, and loosely beat off to YouTube. Celebrity gossip, internet gossip, speedrun videos, nothing even arousing. I liked the true crime videos about the hot female teachers who slept with their students. Yea, yea, yea, terrible crime, and they all act holier than thou about what if the genders were reversed, but the genders weren’t reversed. Maybe they just don’t want to get demonetized.

There were never women at these parties. Okay, maybe one or two. But nobody ever slept with them or much thought about them that way. They were the agendered mass like the rest of us. Fellow consumers, not providers.

Fuck, I should just go visit a hooker. I didn’t know much about that; were hookers even real? I’d never met one, and there wasn’t a good way to find out about stuff like this anymore. The Internet was pretty much all “advertiser friendly” now, declawed, sanitized. Once the algorithms got good enough and it was technically easy to censor, there was nothing holding them back. It wasn’t actually censored, it would just redirect you elsewhere. And if you didn’t pay careful attention, you wouldn’t even notice it happening.
I tried asking ChatGPT about hookers and it told me to call them sex workers. And this was kind of triggering. Who the fuck does this machine think it is? But then I was lost on this tangent; the algorithms got a rise out of me and I went back to comfort-food YouTube. Look, this guy beat Minecraft starting with only one block.

The doorbell rang. This always gives me anxiety. And it was particularly anxiety-inducing since I was the only one home. Normally I could just know that the door of my room was locked and someone else would get it, and this would be a downstairs issue. But it was just me at home. My heart rate jumped. I waited for it to ring again, but prayed that it wouldn’t. Please just go away. But sure enough, it rang again.

I went to my window; my room was on the second floor. There was a black Escalade in the driveway that I hadn’t seen before, and I could see two men at the door. They were wearing suits. I ducked so as to make sure they wouldn’t look up at me, making as little noise as possible. Peering over the window sill, I could see one of them opening the screen door, and it looked like he stuck something to the main door. My heart was beating even faster now. It was Saturday night; why were there two men in suits? And why were they here?

It felt longer, but 3 minutes later they drove off. I waited another 3 for good measure, just watching the clock on my computer until it hit 6:57. I double-checked out the window to make sure they were actually gone, and crept down the stairs to retrieve whatever they had left on the door.

It was a business card, belonging to a “Detective James Reese” of the Nassau County Police. And on the back of the card, there was handwriting. “John – call me”

John was my name.

yesterday 3 votes
Apologies and forgiveness

The first in a series of posts about doing things the right way

2 days ago 10 votes