More from Computer Things
I have a lot in the works for this month's Logic for Programmers release. Among other things, I'm completely rewriting the chapter on Logic Programming Languages. I originally showcased the paradigm with puzzle solvers, like eight queens or four-coloring. Lots of other demos do this too! It takes creativity and insight for humans to solve them, so a program doing it feels magical. But I'm trying to write a book about practical techniques and I want everything I talk about to be useful. So in v0.9 I'll be replacing these examples with a couple of new programs that might get people thinking that Prolog could help them in their day-to-day work.

On the other hand, for a newsletter, showcasing a puzzle solver is pretty cool. And recently I stumbled into this post by my friend Pablo Meier, where he solves a videogame puzzle with Prolog:[1]

Summary for the text-only readers: We have a test with 10 true/false questions (denoted a/b) and four student attempts. Given the scores of the first three students, we have to figure out the fourth student's score.

```
bbababbabb = 7
baaababaaa = 5
baaabbbaba = 3
bbaaabbaaa = ???
```

You can see Pablo's solution here, and try it in SWI-Prolog here. Pretty cool! But after way too long studying Prolog just to write this dang book chapter, I wanted to see if I could do it more elegantly than him. Code and puzzle spoilers to follow.

(Normally here's where I'd link to a gentler introduction I wrote, but I think this is my first time writing about Prolog online? Uh, here's a Picat intro instead.)

The Program

You can try this all online at SWISH or just jump to my final version here.

```prolog
:- use_module(library(dif)).    % Sound inequality
:- use_module(library(clpfd)).  % Finite domain constraints
```

First some imports. dif lets us write dif(A, B), which is true if A and B are not equal. clpfd lets us write A #= B + 1 to say "A is 1 more than B".[2]

We'll say both the student submission and the key will be lists, where each value is a or b. In Prolog, lowercase identifiers are atoms (like symbols in other languages) and identifiers that start with a capital are variables. Prolog finds values for variables that match equations (unification). The pattern matching is real real good.

```prolog
% ?- means query
?- L = [a,B,c], [Y|X] = [1,2|L], B + 1 #= 7.
B = 6,
L = [a, 6, c],
X = [2, a, 6, c],
Y = 1
```

Next, we define score/3[3] recursively.

```prolog
% The student's test score
% score(student answers, answer key, score)
score([], [], 0).
score([A|As], [A|Ks], N) :- N #= M + 1, score(As, Ks, M).
score([A|As], [K|Ks], N) :- dif(A, K), score(As, Ks, N).
```

The first parameter is the student's answers, the second is the answer key, and the third is the final score. The base case is the empty test, which has score 0. Otherwise, we take the head values of each list and compare them. If they're the same, we add one to the score; otherwise we keep the same score. Notice we couldn't write if x then y else z; we instead used pattern matching to effectively express (x && y) || (!x && z). Prolog does have a conditional operator, but it prevents backtracking so what's the point???

A quick break about bidirectionality

One of the coolest things about Prolog: all purely logical predicates are bidirectional. We can use score to check if our expected score is correct:

```prolog
?- score([a, b, b], [b, b, b], 2).
true
```

But we can also give it answers and a key and ask it for the score:

```prolog
?- score([a, b, b], [b, b, b], X).
X = 2
```

Or we could give it a key and a score and ask "what test answers would have this score?"

```prolog
?- score(X, [b, b, b], 2).
```
```prolog
X = [b, b, _A], dif(_A,b)
X = [b, _A, b], dif(_A,b)
X = [_A, b, b], dif(_A,b)
```

The different value is written _A because we never told Prolog that the array can only contain a and b. We'll fix this later.

Okay back to the program

Now that we have a way of computing scores, we want to find a possible answer key that matches all of our observations, i.e. gives everybody the correct scores.

```prolog
key(Key) :-
  % Figure it out
  score([b, b, a, b, a, b, b, a, b, b], Key, 7),
  score([b, a, a, a, b, a, b, a, a, a], Key, 5),
  score([b, a, a, a, b, b, b, a, b, a], Key, 3).
```

So far we haven't explicitly said that the Key length matches the student answer lengths. This is implicitly verified by score (both lists need to be empty at the same time), but it's a good idea to explicitly add length(Key, 10) as a clause of key/1. We should also explicitly say that every element of Key is either a or b.[4] Now we could write a second predicate saying Key had the right "type":

```prolog
keytype([]).
keytype([K|Ks]) :- member(K, [a, b]), keytype(Ks).
```

But "generating lists that match a constraint" is a thing that comes up often enough that we don't want to write a separate predicate for each constraint! So after some digging, I found a more elegant solution: maplist. Let L=[l1, l2]. Then maplist(p, L) is equivalent to the clause p(l1), p(l2). It also accepts partial predicates: maplist(p(x), L) is equivalent to p(x, l1), p(x, l2). So we could write[5]

```prolog
contains(L, X) :- member(X, L).

key(Key) :-
  length(Key, 10),
  maplist(contains([a,b]), Key),
  % the score stuff
```

Now, let's query for the Key:

```prolog
?- key(Key).
Key = [a, b, a, b, a, a, b, a, a, b]
Key = [b, b, a, b, a, a, a, a, a, b]
Key = [b, b, a, b, a, a, b, b, a, b]
Key = [b, b, b, b, a, a, b, a, a, b]
```

So there are actually four different keys that all explain our data. Does this mean the puzzle is broken and has multiple different answers? Nope. The puzzle wasn't to find out what the answer key was, the point was to find the fourth student's score. And if we query for it, we see all four solutions give him the same score:

```prolog
?- key(Key), score([b, b, a, a, a, b, b, a, a, a], Key, X).
X = 6
X = 6
X = 6
X = 6
```

Huh! I really like it when puzzles look like they're broken, but every "alternate" solution still gives the same puzzle answer.

Total program length: 15 lines of code, compared to the original's 80 lines. Suck it, Pablo.

(Incidentally, you can get all of the answers at once by writing findall(X, (key(Key), score($answer-array, Key, X)), L).)

I still don't like puzzles for teaching

The actual examples I'm using in the book are "analyzing a version control commit graph" and "planning a sequence of infrastructure changes", which are somewhat more likely to occur at work than needing to solve a puzzle. You'll see them in the next release!

[1] I found it because he wrote Gamer Games for Lite Gamers as a response to my Gamer Games for Non-Gamers. ↩

[2] These are better versions of the core Prolog expressions \+ (A = B) and A is B + 1, because they can defer unification. ↩

[3] Prolog descendants have a convention of writing the arity of the function after its name, so score/3 means "score has three parameters". I think they do this because you can overload predicates with multiple different arities. Also, Joe Armstrong used Prolog for prototyping, so Erlang and Elixir follow the same convention. ↩

[4] It still gets the right answers without this type restriction, but I had no idea it did until I checked for myself. Probably better not to rely on this! ↩

[5] We could make this even more compact by using a lambda function.
First import module yall, then write maplist([X]>>member(X, [a,b]), Key). But (1) it's not a shorter program, because you replace the extra definition with an extra module import, and (2) yall is SWI-Prolog specific and not an ISO-standard Prolog module. Using contains is more portable. ↩
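(For reference, here's the complete solver assembled from the snippets above, with the length and type constraints folded in. This is my reconstruction rather than a copy of the linked final version, so the two may differ in minor details.)

```prolog
:- use_module(library(dif)).    % Sound inequality
:- use_module(library(clpfd)).  % Finite domain constraints

% score(student answers, answer key, score)
score([], [], 0).
score([A|As], [A|Ks], N) :- N #= M + 1, score(As, Ks, M).
score([A|As], [K|Ks], N) :- dif(A, K), score(As, Ks, N).

contains(L, X) :- member(X, L).

key(Key) :-
  length(Key, 10),
  maplist(contains([a,b]), Key),
  score([b, b, a, b, a, b, b, a, b, b], Key, 7),
  score([b, a, a, a, b, a, b, a, a, a], Key, 5),
  score([b, a, a, a, b, b, b, a, b, a], Key, 3).

% ?- key(Key), score([b, b, a, a, a, b, b, a, a, a], Key, X).
```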
My April Cools is out! Gamer Games for Non-Gamers is a 3,000-word essay on video games worth playing if you've never enjoyed a video game before. Patreon notes here.

(April Cools is a project where we write genuine content on non-normal topics. You can see all the other April Cools posted so far at the April Cools' Club. There's still time to submit your own!)
Logic for Programmers v0.8 now out!

The new release has minor changes: new formatting for notes and a better introduction to predicates. I would have rolled it all into v0.9 next month but I like the monthly cadence. Get it here!

Betteridge's Law of Software Engineering Specialness

In There is No Automatic Reset in Engineering, Tim Ottinger asks:

> Do the other people have to live with January 2013 for the rest of their lives? Or is it only engineering that has to deal with every dirty hack since the beginning of the organization?

Betteridge's Law of Headlines says that if a journalism headline ends with a question mark, the answer is probably "no". I propose a similar law relating to software engineering specialness:[1]

> If someone asks if some aspect of software development is truly unique to just software development, the answer is probably "no".

Take the idea that "in software, hacks are forever." My favorite example of this comes from a different profession. The Dewey Decimal System hierarchically categorizes books by discipline. For example, Covered Bridges of Pennsylvania has Dewey number 624.37. 6-- is the technology discipline, 62- is engineering, 624 is civil engineering, and 624.3 is "special types of bridges". I have no idea what the last 0.07 means, but you get the picture.

Now if you look at the 6-- "technology" breakdown, you'll see that there's no "software" subdiscipline. This is because Dewey preallocated the whole technology block in 1876; new topics were instead added to the 00- "general-knowledge" catch-all. Eventually 005 was assigned to "software development", meaning The C Programming Language lives at 005.133. Incidentally, another late addition to the general-knowledge block is 001.9: "controversial knowledge". And that's why my hometown library shelved the C++ books right next to The Mothman Prophecies. How's that for technical debt?

If anything, fixing hacks in software is significantly easier than in other fields. This came up when I was interviewing classic engineers. Kludges happened all the time, but "refactoring" them out is expensive. Need to house a machine that's just two inches taller than the room? Guess what, you're cutting a hole in the ceiling.

(Even if we restrict the question to other departments in a software company, we can find kludges that are horrible to undo. I once worked for a company which landed an early contract by adding a bespoke support agreement for that one customer. That plagued them for years afterward.)

That's not to say that there aren't things that are different about software vs other fields![2] But I think that most of the time, when we say "software development is the only profession that deals with XYZ", it's only because we're ignorant of how those other professions work.

Short newsletter because I'm way behind on writing my April Cools. If you're interested in April Cools, you should try it out! I make it way harder on myself than it actually needs to be; everybody else who participates finds it pretty chill.

[1] Ottinger caveats it with "engineering, software or otherwise", so I think he knows that other branches of engineering, at least, have kludges. ↩

[2] The "software is different" idea that I'm most sympathetic to is that in software, the tools we use and the products we create are made from the same material. That's unusual at least in classic engineering. Then again, plenty of machinists have made their own lathes and mills! ↩
No newsletter next week, I'm teaching a TLA+ workshop. Speaking of which:

I spend a lot of time thinking about formal methods (and TLA+ specifically) because it's the source of almost all my revenue. But I don't share most of the details because 90% of my readers don't use FM and never will. I think it's more interesting to talk about ideas from FM that would be useful to people outside that field. For example, the idea of "property strength" translates to the idea that some tests are stronger than others.

Another possible export is how FM approaches nondeterminism. A nondeterministic algorithm is one that, from the same starting conditions, has multiple possible outputs. This is nondeterministic:

```
# Pseudocode
def f() {
  return rand()+1;
}
```

When specifying systems, I may not encounter nondeterminism more often than in real systems, but I am definitely more aware of its presence. Modeling nondeterminism is a core part of formal specification. I mentally categorize nondeterminism into five buckets. Caveat: this is specifically about nondeterminism from the perspective of system modeling, not computer science as a whole. If I tried to include stuff on NFAs and amb operations this would be twice as long.[1]

1. True Randomness

Programs that literally make calls to a random function and then use the results. This is the simplest type of nondeterminism and one of the most ubiquitous.

Most of the time, random isn't truly nondeterministic. Most of the time computer randomness is actually pseudorandom, meaning we seed a deterministic algorithm that behaves "randomly-enough" for some use. You could "lift" a nondeterministic random function into a deterministic one by adding a fixed seed to the starting state.

```python
# Python
from random import random, seed

def f(x):
    seed(x)
    return random()
```

```
>>> f(3)
0.23796462709189137
>>> f(3)
0.23796462709189137
```

Often we don't do this because the point of randomness is to provide nondeterminism! We deliberately abstract out the starting state of the seed from our program, because it's easier to think about it as locally nondeterministic.

(There's also "true" randomness, like using thermal noise as an entropy source, which I think is mainly used for cryptography and seeding PRNGs.)

Most formal specification languages don't deal with randomness (though some deal with probability more broadly). Instead, we treat it as a nondeterministic choice:

```
# software
if rand > 0.001 then return a else crash

# specification
either return a or crash
```

This is because we're looking at worst-case scenarios, so it doesn't matter if crash happens 50% of the time or 0.0001% of the time, it's still possible.

2. Concurrency

```
# Pseudocode
global x = 1, y = 0;

def thread1() { x++; x++; x++; }
def thread2() { y := x; }
```

If thread1() and thread2() run sequentially, then (assuming the sequence is fixed) the final value of y is deterministic. If the two functions are started and run simultaneously, then depending on when thread2 executes, y can be 1, 2, 3, or 4. Both functions are locally sequential, but running them concurrently leads to global nondeterminism.

Concurrency is arguably the most dramatic source of nondeterminism. Small amounts of concurrency lead to huge explosions in the state space. We have words for the specific kinds of nondeterminism caused by concurrency, like "race condition" and "dirty write". Often we think about it as a separate topic from nondeterminism.
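To make the explosion concrete, here's a toy Python sketch (my own illustration, not from the original post) that enumerates every interleaving of thread1 and thread2 above and collects the possible final values of y:

```python
from itertools import permutations

# thread1 is three increments of x; thread2 is one read of x into y.
def run(schedule):
    x, y = 1, 0
    for step in schedule:
        if step == "inc":
            x += 1
        else:  # "read"
            y = x
    return y

# Enumerate every interleaving: the read can land in any of four slots.
schedules = set(permutations(["inc", "inc", "inc", "read"]))
outcomes = {run(s) for s in schedules}
print(sorted(outcomes))  # [1, 2, 3, 4]
```

Four operations already give four distinct outcomes; add more threads or shared variables and the number of schedules grows factorially, which is exactly the state explosion a model checker has to cope with.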
To some extent concurrency "overshadows" the other kinds: I have a much easier time teaching students about concurrency in models than nondeterminism in models. Many formal specification languages have special syntax/machinery for the concurrent aspects of a system, and generic syntax for other kinds of nondeterminism. In P that's choose. Others don't special-case concurrency, instead representing it as nondeterministic choices by a global coordinator. This is more flexible but also more inconvenient, as you have to implement process-local sequencing code yourself.

3. User Input

One of the most famous and influential programming books is The C Programming Language by Kernighan and Ritchie. The first example of a nondeterministic program appears on page 14. For the newsletter readers who get text-only emails,[2] here's the program:

```c
#include <stdio.h>

/* copy input to output; 1st version */
main()
{
    int c;

    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}
```

Yup, that's nondeterministic. Because the user can enter any string, any call of main() could have any output, meaning the number of possible outcomes is infinite.

Okay, that seems a little cheap, and I think it's because we tend to think of determinism in terms of how the user experiences the program. Yes, main() has an infinite number of user inputs, but for each input the user will experience only one possible output. It starts to feel more nondeterministic when modeling a long-standing system that's reacting to user input, for example a server that runs a script whenever the user uploads a file. This can be modeled with nondeterminism and concurrency: we have one execution that's the system, and one nondeterministic execution that represents the effects of our user.

(One intrusive thought I sometimes have: any "yes/no" dialogue actually has three outcomes: yes, no, or the user getting up and walking away without picking a choice, permanently stalling the execution.)

4. External Forces

The more general version of "user input": anything where either 1) some part of the execution outcome depends on retrieving external information, or 2) the external world can change some state outside of your system. I call the distinction between internal and external components of the system the world and the machine. Simple examples:

- Code that at some point reads an external temperature sensor.
- Unrelated code running on a system which quits programs if it gets too hot.
- API requests to a third-party vendor.
- Code processing files, where users can delete files before the script gets to them.

Like with PRNGs, some of these cases don't have to be nondeterministic; we can argue that "the temperature" should be a virtual input into the function. Like with PRNGs, we treat it as nondeterministic because it's useful to think in that way. Also, what if the temperature changes between starting a function and reading it?

External forces are also a source of nondeterminism as uncertainty. Measurements in the real world often come with errors, so repeating a measurement twice can give two different answers. Sometimes operations fail for no discernible reason, or for a non-programmatic reason (like something physically blocks the sensor). All of these situations can be modeled in the same way as user input: a concurrent execution making nondeterministic choices.

5. Abstraction

This is where nondeterminism in system models and in "real software" differ the most. I said earlier that pseudorandomness is arguably deterministic, but we abstract it into nondeterminism.
More generally, nondeterminism hides implementation details of deterministic processes. In one consulting project, we had a machine that received a message, parsed a lot of data from the message, went into a complicated workflow, and then entered one of three states. The final state was totally deterministic on the content of the message, but the actual process of determining that final state took tons and tons of code. None of that mattered at the scope we were modeling, so we abstracted it all away: "on receiving message, nondeterministically enter state A, B, or C."

Doing this makes the system easier to model. It also makes the model more sensitive to possible errors. What if the workflow is bugged and sends us to the wrong state? That's already covered by the nondeterministic choice! Nondeterministic abstraction gives us the potential to pick the worst-case scenario for our system, so we can prove it's robust even under those conditions.

I know I beat the "nondeterminism as abstraction" drum a whole lot, but that's because it's the insight from formal methods I personally value the most: that nondeterminism is a powerful tool to simplify reasoning about things. You can see the same approach in how I approach modeling users and external forces: complex realities black-boxed and simplified into nondeterministic forces on the system.

Anyway, I hope this collection of ideas I got from formal methods is useful to my broader readership. Lemme know if it somehow helps you out!

[1] I realized after writing this that I already wrote an essay about nondeterminism in formal specification just under a year ago. I hope this one covers enough new ground to be interesting! ↩

[2] There is a surprising number of you. ↩
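Here's a minimal sketch of that abstraction trick in Python (my own illustration; the state names, responses, and property are hypothetical stand-ins): instead of modeling the workflow, quantify over every state it could land in and check your property against each one.

```python
# Nondeterminism as abstraction: the complicated workflow is replaced by
# "enter A, B, or C, we don't say which". Checking a property against
# every choice covers the worst case, including a bugged workflow.
WORKFLOW_OUTCOMES = ["A", "B", "C"]

def respond(state):
    # Hypothetical stand-in for how the rest of the system reacts.
    return {"A": "retry", "B": "commit", "C": "rollback"}[state]

def is_safe(action):
    # Hypothetical safety property.
    return action in {"retry", "commit", "rollback"}

# The "model check": the property must hold under every choice.
assert all(is_safe(respond(s)) for s in WORKFLOW_OUTCOMES)
```

A real model checker does this over whole execution trees rather than a single step, but the principle is the same: if the property holds for every nondeterministic choice, it holds no matter what the real workflow does.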
More in programming
I’ve been writing some internal dashboards recently, and one hard part is displaying timestamps. Our server does everything in UTC, but the team is split across four different timezones, so the server timestamps aren’t always easy to read.

For most people, it’s harder to understand a UTC timestamp than a timestamp in your local timezone. Did that event happen just now, an hour ago, or much further back? Was that at the beginning of your working day? Or at the end?

Then I remembered that I tried to solve this five years ago at a previous job. I wrote a JavaScript snippet that converts UTC timestamps into human-friendly text. It displays times in your local time zone, and adds a short suffix if the time happened recently. For example:

today @ 12:00 BST (1 hour ago)

In my old project, I was writing timestamps in a <div> and I had to opt into the human-readable text for every date on the page. It worked, but it was a bit fiddly.

Doing it again, I thought of a more elegant solution. HTML has a <time> element for expressing datetimes, which is a more meaningful wrapper than a <div>. When I render the dashboard on the server, I don’t know the user’s timezone, so I include the UTC timestamp in the page like so:

```html
<time datetime="2025-04-15 19:45:00Z">
  Tue, 15 Apr 2025 at 19:45 UTC
</time>
```

I put a machine-readable date and time string with a timezone offset string in the datetime attribute, and then a more human-readable string in the text of the element.

Then I add this JavaScript snippet to the page:

```javascript
window.addEventListener("DOMContentLoaded", function() {
  document.querySelectorAll("time").forEach(function(timeElem) {

    // Set the `title` attribute to the original text, so a user
    // can hover over a timestamp to see the UTC time.
    timeElem.setAttribute("title", timeElem.innerText);

    // Replace the display text with a human-friendly date string
    // which is localised to the user's timezone.
    timeElem.innerText = getHumanFriendlyDateString(
      timeElem.getAttribute("datetime")
    );
  })
});
```

This updates any <time> element on the page to use a human-friendly date string, which is localised to the user’s timezone. For example, I’m in the UK so that becomes:

```html
<time
  datetime="2025-04-15 19:45:00Z"
  title="Tue, 15 Apr 2025 at 19:45 UTC"
>
  Tue, 15 Apr 2025 at 20:45 BST
</time>
```

In my experience, these timestamps are easier and more intuitive for people to read. I always include a timezone string (e.g. BST, EST, PDT) so it’s obvious that I’m showing a localised timestamp. If you really need the UTC timestamp, it’s in the title attribute, so you can see it by hovering over it. (Sorry, mouseless users, but I don’t think any of my team are browsing our dashboards from their phone or tablet.)

If the JavaScript doesn’t load, you see the plain old UTC timestamp. It’s not ideal, but the page still loads and you can see all the information; this behaviour is an enhancement, not an essential.

To me, this is the unfulfilled promise of the <time> element. In my fantasy world, web page authors would write the time in a machine-readable format, and browsers would show it in a way that makes sense for the reader. They’d take into account their language, locale, and time zone. I understand why that hasn’t happened; it’s much easier said than done. You need so much context to know what’s the “right” thing to do when dealing with datetimes, and guessing without that context is at the heart of many datetime bugs. These sorts of human-friendly, localised timestamps are very handy sometimes, and a complete mess at other times.
In my staff-only dashboards, I have that context. I know what these timestamps mean, who’s going to be reading them, and I think they’re a helpful addition that makes the data easier to read.
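The snippet above calls getHumanFriendlyDateString, which the post doesn’t show. Here’s a minimal sketch of what such a function could look like, assuming you only want “localised time plus a relative suffix when recent”; the function name comes from the snippet above, but the body is my guess, not the author’s code:

```javascript
// A minimal sketch of a getHumanFriendlyDateString implementation.
// Assumes input like "2025-04-15 19:45:00Z" (the space is normalised
// to "T" so every browser parses it as an ISO 8601 UTC timestamp).
function getHumanFriendlyDateString(utcString) {
  const then = new Date(utcString.replace(" ", "T"));
  const now = new Date();

  // Localised date/time in the reader's own timezone, with a short
  // timezone name (e.g. "BST") so it's obviously localised.
  const formatted = then.toLocaleString(undefined, {
    weekday: "short", day: "numeric", month: "short", year: "numeric",
    hour: "2-digit", minute: "2-digit", timeZoneName: "short",
  });

  // Add a relative suffix if the timestamp is recent.
  const minutesAgo = Math.floor((now - then) / 60000);
  if (minutesAgo >= 0 && minutesAgo < 60) {
    return `${formatted} (${minutesAgo} min ago)`;
  }
  const hoursAgo = Math.floor(minutesAgo / 60);
  if (hoursAgo >= 0 && hoursAgo < 24) {
    return `${formatted} (${hoursAgo} hour${hoursAgo === 1 ? "" : "s"} ago)`;
  }
  return formatted;
}
```

If you also want the suffix itself localised, Intl.RelativeTimeFormat can produce “1 hour ago” strings in the reader’s language.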
Nearly a quarter of seventeen-year-old boys in America have an ADHD diagnosis. That's crazy. But worse than the diagnosis is that the majority of them end up on amphetamines, like Adderall or Ritalin. These drugs allow teenage boys in particular (diagnosed at 2-3x the rate of girls) to do what their minds would otherwise resist: study subjects they find boring for long stretches of time. Hurray?

Except it doesn't even work. Because taking Adderall or Ritalin doesn't actually help you learn more, it merely makes trying tolerable. The kids might feel like the drugs are helping, but the test scores say they're not. It's Dunning-Kruger (the phenomenon where low-competence individuals overestimate their abilities) in a pill.

Furthermore, even this perceived improvement is short-term. The sudden "miraculous" ability to sit still and focus on boring school work wanes in less than a year on the drugs. In three years, pill poppers are doing no better than those who didn't take amphetamines at all.

These are all facts presented in a blockbuster story in The New York Times Magazine entitled Have We Been Thinking About A.D.H.D. All Wrong?, which unpacks all the latest research on ADHD. It's depressing reading.

Not least because the definition of ADHD is so subjective and situational. The NYTM piece is full of anecdotes from kids with an ADHD diagnosis whose symptoms disappeared when they stopped pursuing a school path out of step with their temperament. And just look at these ADHD markers from the DSM-5:

Inattention
- Difficulty staying focused on tasks or play.
- Frequently losing things needed for tasks (e.g., toys, school supplies).
- Easily distracted by unrelated stimuli.
- Forgetting daily activities or instructions.
- Trouble organizing tasks or completing schoolwork.
- Avoiding or disliking tasks requiring sustained mental effort.

Hyperactivity
- Fidgeting, squirming, or inability to stay seated.
- Running or climbing in inappropriate situations.
- Excessive talking or inability to play quietly.
- Acting as if "driven by a motor," always on the go.

Impulsivity
- Blurting out answers before questions are completed.
- Trouble waiting for their turn.
- Interrupting others' conversations or games.

The majority of these so-called symptoms are what I'd classify as "normal boyhood". I certainly could have checked off a bunch of them, and you only need six over six months for an official ADHD diagnosis. No wonder a quarter of those seventeen-year-old boys in America qualify!

Borrowing from Erich Fromm's The Sane Society, I think we're looking at a pathology of normalcy, where healthy boys are defined as those who can sit still, focus on studies, and suppress kinetic energy. Boys with low intensity and low energy. What a screwy ideal to chase for all.

This is all downstream from an obsession with getting as many kids through as much safety-obsessed schooling as possible. While the world still needs electricians, carpenters, welders, soldiers, and a million other occupations that exist outside the narrow educational ideal of today.

Now I'm sure there is a small number of really difficult cases where even the short-term break from severe symptoms that amphetamines can provide is welcome. The NYTM piece quotes the doctor who did one of the most consequential studies on ADHD as thinking that's around 3%: a world apart from the quarter of seventeen-year-olds discussed above. But as ever, there is no free lunch in medicine.
Long-term use of amphetamines acts as a growth inhibitor, resulting in kids up to an inch shorter than they otherwise would have been. On top of the awful downs that often follow amphetamine highs. And the loss of interest, humor, and spirit that frequently comes with the treatment too.

This is all eerily similar to what happened in America when a bad study from the 1990s convinced a generation of doctors that opioids actually weren't addictive. By the time they realized the damage, they'd set in motion an overdose and addiction cascade that's presently killing over 100,000 Americans a year. The book Empire of Pain chronicles that tragedy well.

Or how about the surge in puberty-blocker prescriptions, which has now been arrested in the UK, following the Cass Review, as well as in Finland, Norway, Sweden, France, and elsewhere. Doctors are supposed to first do no harm, but they're as liable to be swept up in bad paradigms, social contagions, and ideological echo chambers as the rest of us. And this insane over-diagnosis of ADHD fits that liability to a T.
My journey to Lisp began in the early 1990s. Over three decades later, a few days ago I rediscovered the first Lisp environment I ever used back then, which contributed to my love for the language. Here it is, PC Scheme running under DOSBox-X on my Linux PC:

[Screenshot of the PC Scheme Lisp development environment for MS-DOS by Texas Instruments running under DOSBox-X on Linux Mint Cinnamon.]

Using PC Scheme again brought back lots of great memories and made me reflect on what the environment taught me about Lisp and Lisp tooling.

As a Computer Science student at the University of Milan, Italy, around 1990 I took an introductory computers and programming class taught by Prof. Stefano Cerri. The textbook was the first edition of Structure and Interpretation of Computer Programs (SICP), and Texas Instruments PC Scheme for MS-DOS the recommended PC implementation. I installed PC Scheme under DR-DOS on a 20 MHz 386 Olidata laptop with 2 MB RAM and a 40 MB hard disk drive.

Prior to the class I had read about Lisp here and there but never played with the language. SICP and its use of Scheme as an elegant executable formalism instantly fascinated me. It was Lisp love at first sight. The class covered the first three chapters of the book but I later read the rest on my own. I did lots of exercises using PC Scheme to write and run them. Soon I became one with PC Scheme.

The environment enabled a tight development loop thanks to its Emacs-like EDWIN editor that was well integrated with the system. The Lisp awareness of EDWIN blew my mind, as it was the first such tool I encountered. The editor auto-indented and reformatted code, matched parentheses, and supported evaluating expressions and code blocks. Typing a closing parenthesis made EDWIN blink the corresponding opening one and briefly show a snippet of the beginning of the matched expression. Paying attention to the matching and the snippets made me familiar with the shape and structure of Lisp code, giving a visual feel of whether code looks syntactically right or off. Within hours of starting to use EDWIN the parentheses ceased to be a concern and disappeared from my conscious attention. Handling parentheses came naturally. I actually ended up loving parentheses and the aesthetics of classic Lisp.

Parenthesis matching suggested a technique to me for writing syntactically correct Lisp code with pen and paper. When writing a closing parenthesis with the right hand, I rested the left hand on the paper with the index finger pointed at the corresponding opening parenthesis, moving the hands in sync to match the current code. This way it was fast and easy to write moderately complex code.

PC Scheme spoiled me and set the baseline of what to expect in a Lisp environment. After the class I moved to PCS/Geneva, a more advanced PC Scheme fork developed at the University of Geneva. Over the following decades I encountered and learned Common Lisp, Emacs Lisp, and Interlisp. These experiences cemented my passion for Lisp.

In the mid-1990s Texas Instruments released the executable and sources of PC Scheme. I didn't know it at the time, or if I noticed I long forgot. Until a few days ago, when nostalgia came knocking and I rediscovered the PC Scheme release. I installed PC Scheme under the DOSBox-X MS-DOS emulator on my Linux Mint Cinnamon PC. It runs well and I enjoy going through the system to rediscover what it can do. Playing with PC Scheme after decades of Lisp experience and hindsight on computing evolution shines new light on the environment.
I didn't fully realize at the time but the product packed amazing value for the price. It cost $99 in the US and I paid about 150,000 Lira in Italy. Costing as much as two or three textbooks, the software was affordable even to students and hobbyists. PC Scheme is a rich, fast, and surprisingly capable environment with features such as a Lisp-aware editor, a good compiler, a structure editor and other tools, many Scheme extensions such as engines and OOP, text windows, graphics, and a lot more. The product came with an extensive manual, a thick book in a massive 3-ring binder I read cover to cover more than once. A paper on the implementation of PC Scheme sheds light on how good the system is given the platform constraints.

Using PC Scheme now lets me put into focus what it taught me about Lisp and Lisp systems: the convenience and productivity of Lisp-aware editors; interactive development and exploratory programming; and a rich Lisp environment with a vast toolbox of libraries and facilities. This is your grandfather's batteries-included language. Three decades after PC Scheme, a similar combination of features, facilities, and classic aesthetics drew me to Medley Interlisp, my current daily driver for Lisp development.

#Lisp #MSDOS #retrocomputing
Two weeks ago we released Dice’n Goblins, our first game on Steam. This project allowed me to discover and learn a lot of new things about game development and the industry. I will use this blog post to write down what I consider to be the most important lessons from the months spent working on this.

The development started around 2 years ago when Daphnée started prototyping a dungeon crawler featuring a goblin protagonist. After a few iterations, the game combat started featuring dice, and then those dice could be used to make combos. In May 2024, the game was baptized Dice’n Goblins, and a Steam page was created featuring some early gameplay screenshots and footage. I joined the project full-time around this period. Almost one year later, after amassing more than 8000 wishlists, the game finally released on Steam on April 4th, 2025. It was received positively by the gaming press, with great reviews from PCGamer and LadiesGamers. It now sits at 92% positive reviews from players on Steam.

Building RPGs isn’t easy

As you can see from the above timeline, building this game took almost two years and two programmers. This is actually not that long if you consider that other indie RPGs have taken more than 6 years to come out. The main issue with the genre is that you need to create a believable world. In practice, this requires programming many different systems that will interact together to give the impression of a cohesive universe. Every time you add a new system, you need to think about how it will fit all the existing game features.

For example, players typically expect an RPG to have a shop system. Of course, this means designing a shop non-player character (or building) and creating a UI that is displayed when you interact with it. But this also means thinking through a lot of other systems: combat needs to be changed to reward the player with gold, every item needs a price tag, chests should sometimes reward the player with gold, etc.

Adding too many systems can quickly get into scope-creep territory and make the development exponentially longer. But you can only get away with removing so much until your game stops being an RPG. Making a game without a shop might be acceptable, but the experience still needs to have more features than “walking around and fighting monsters” to feel complete.

RPGs are also, by definition, narrative experiences. While some games have managed to get away with procedurally generating 90% of the content, in general you’ll need to get your hands dirty, write a story, and design a bunch of maps. Creating enough content for a game to fill 12+ hours without having the player go through repetitive grind will by itself take a lot of time.

Having said all that, I definitely wouldn’t make any other kind of game than RPGs, because this is what I enjoy playing. I don’t think I would be able to nail what makes other genres fun if I don’t play them enough to understand what separates the good from the mediocre.

Marketing isn’t that complicated

Everyone in the game dev community knows that there are way too many games releasing on Steam. To stand out amongst the 50+ games coming out every day, it’s important not only to have a finished product but also to plan a marketing campaign well in advance. For most people coming from a software engineering background, like me, this can feel extremely daunting. Our education and jobs do not prepare us well for this kind of task. In practice, it’s not that complicated.
If your brain is able to provision a Kubernetes cluster, then you are most definitely capable of running a marketing campaign. Like anything else, it’s a skill that you can learn over time by practicing it and iteratively improving your methods. During the 8 months following the Steam page release, we tried basically everything you can think of as a way to promote the game. Every time something was having a positive impact, we would do it more, and we quickly stopped things with low impact.

The most important thing to keep in mind is your target audience. If you know who wants the kind of games you’re making, it is very easy to find where they hang out and talk to them. This is however not an easy question to answer for every game. For a long while, we were not sure who would like Dice’n Goblins. Is it people who like Etrian Odyssey? Fans of Dicey Dungeons? Nostalgic players of Paper Mario? For us, the answer was mostly #1, with a bit of #3. Once we figured out what our target audience was, how to communicate with them, and, most importantly, had a game that was visually appealing enough, marketing became very straightforward. This is why we really struggled to get our first 1000 wishlists, but getting the last 5000 was actually not that complicated.

Publishers aren’t magic

At some point, balancing the workload of actually building the game and figuring out how to market it felt like too much for a two-person team. We therefore did what many indie studios do and decided to work with a publisher. We worked with Rogue Duck Interactive, who previously published Dice & Fold, a fairly successful dice roguelike. Without getting too much into details, it didn’t work out as planned and we decided, by mutual agreement, to go back to self-publishing Dice’n Goblins.

The issue simply came from the audience question mentioned earlier. Even though Dice & Fold and Dice’n Goblins share some similarities, they target a different audience, which requires a completely different approach to marketing. The lesson learned is that when picking a publisher, the most important thing you can do is to check that their current game catalog really matches the idea you have of your own game. If you’re building a fast-paced FPS, a publisher that only has experience with cozy simulation games will not be able to help you efficiently. In our situation, a publisher with experience in roguelikes and casual strategy games wasn’t a good fit for an RPG.

In addition to that, I don’t think the idea of using a publisher to remove marketing toil and focus on making the game is that much of a good idea in the long term. While it definitely helps to remove the pressure of handling social media accounts and ad campaigns, new effort will be required in communicating and negotiating with the publishing team. In the end, the difference between the work saved and the work gained might not have been worth selling a chunk of your game.

Conclusion

After all this was said and done, one big question I haven’t answered is: would I do it again? The answer is definitely yes. Not only was building this game an extremely satisfying endeavor, but so much has been learned and built while doing it that it would be a shame not to go ahead and do a second one.