Two weeks ago we released Dice’n Goblins, our first game on Steam. This project allowed me to discover and learn a lot of new things about game development and the industry, and I will use this blog post to write down what I consider the most important lessons from the months spent working on it. Development started around two years ago, when Daphnée began prototyping a dungeon crawler featuring a goblin protagonist. After a few iterations, the game's combat started featuring dice, and then those dice could be used to make combos. In May 2024, the game was baptized Dice’n Goblins, and a Steam page was created featuring some early gameplay screenshots and footage. I joined the project full-time around this period. Almost one year later, after amassing more than 8000 wishlists, the game finally released on Steam on April 4th, 2025. It was received positively by the gaming press, with great reviews from PCGamer and LadiesGamers. It now sits at 92% positive reviews from players on...
3 weeks ago


More from Alice GG

Writing GDScript with Neovim

Neovim is by far my favorite text editor. The clutter-free interface and keyboard-only navigation are what keep me productive in my daily programming. In an earlier post, I explained how I configure it into a minimalist development environment. Today, I will show you how to use it with Godot and GDScript.

Configure Godot

First, we need to tell Godot to use nvim as a text editor instead of the built-in one. Open Godot, and head to Editor Settings > General > Text Editor > External. There, you will need to tick the box Use external editor, indicate your Neovim installation path, and use the following execution flags:

    --server /tmp/godothost --remote-send "<C-\><C-N>:n {file}<CR>{line}G{col}|"

While in the settings, head to Network > Language Server and note down the remote port Godot is using. By default, it should be 6005. We will need that value later.

Connecting to Godot with vim-godot

Neovim will be able to access Godot features by using a plugin called vim-godot. We will need to edit the nvim configuration file to install plugins and configure Neovim. On Mac and Linux, it is located at ~/.config/nvim/init.vim. I use vim-plug to manage my plugins, so I can just add it to my configuration like this:

    call plug#begin('~/.vim/plugged')
    " ...
    Plug 'habamax/vim-godot'
    " ...
    call plug#end()

Once the configuration file is modified and saved, use the :PlugInstall command to install it. You'll also need to indicate Godot's executable path. Add this line to your init.vim:

    let g:godot_executable = '/Applications/Godot.app/Contents/MacOS/Godot'

For vim-godot to communicate with the Godot editor, it will need to listen to the /tmp/godothost file we configured in the editor previously. To do that, simply launch nvim with the flag --listen /tmp/godothost. To save you some precious keypresses, I suggest creating a new alias in your bashrc/zshrc like this:

    alias gvim="nvim --listen /tmp/godothost"

Getting autocompletion with coc.nvim

Godot ships with a language server. This means the Godot editor can provide autocompletion, syntax highlighting, and advanced navigation to external editors like nvim. While Neovim now has built-in support for the Language Server Protocol, I've used the plugin coc.nvim to obtain these functionalities for years and see no reason to change. You can also install it with vim-plug by adding the following line to your plugin list:

    Plug 'neoclide/coc.nvim', {'branch': 'release'}

Run :PlugInstall again to install it. You'll need to indicate the Godot language server address and port using the command :CocConfig. It should open Coc's configuration file, which is a JSON file normally located at ~/.config/nvim/coc-settings.json. In this file, enter the following data, and make sure the port number matches the one you noted in your editor:

    {
        "languageserver": {
            "godot": {
                "host": "127.0.0.1",
                "filetypes": ["gdscript"],
                "port": 6005
            }
        }
    }

I recommend adding Coc's example configuration to your init.vim file. You can find it on GitHub. It will provide you with a lot of useful shortcuts, such as using gd to go to a function definition and gr to list its references.

Debugging using nvim-dap

If you want to use the debugger from inside Neovim, you'll need to install another plugin called nvim-dap.
Add the following to your plugins list:

    Plug 'mfussenegger/nvim-dap'

The plugin authors suggest configuring it using Lua, so let's do that by adding the following in your init.vim:

    lua <<EOF
    local dap = require("dap")
    dap.adapters.godot = {
        type = "server",
        host = "127.0.0.1",
        port = 6006,
    }
    dap.configurations.gdscript = {
        {
            type = "godot",
            request = "launch",
            name = "Launch scene",
            project = "${workspaceFolder}",
            launch_scene = true,
        },
    }
    vim.api.nvim_create_user_command("Breakpoint", "lua require'dap'.toggle_breakpoint()", {})
    vim.api.nvim_create_user_command("Continue", "lua require'dap'.continue()", {})
    vim.api.nvim_create_user_command("StepOver", "lua require'dap'.step_over()", {})
    vim.api.nvim_create_user_command("StepInto", "lua require'dap'.step_into()", {})
    vim.api.nvim_create_user_command("REPL", "lua require'dap'.repl.open()", {})
    EOF

This will connect to Godot's debug adapter (here on port 6006) and allow you to pilot the debugger using the following commands:

:Breakpoint to create (or remove) a breakpoint
:Continue to launch the game or run until the next breakpoint
:StepOver to step over a line
:StepInto to step inside a function definition
:REPL to launch a REPL (useful if you want to examine values)

Conclusion

I hope you'll have a great time developing Godot games with Neovim. If it helps you, you can check out my entire init.vim file on a GitHub gist.
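As a convenience, here is a small shell sketch that wraps the launch step described above. The --listen flag and the gvim alias come from this post; the script name and the port checks with nc are my own assumptions, so treat them as an optional sanity check rather than part of the official setup.

    #!/usr/bin/env sh
    # godot-nvim.sh (hypothetical helper name): start Neovim wired up for the Godot editor.

    # Optional checks (assume netcat is installed): warn if the Godot editor is not
    # running or not exposing its language server (6005) and debug adapter (6006).
    nc -z 127.0.0.1 6005 2>/dev/null || echo "warning: Godot language server not reachable on port 6005"
    nc -z 127.0.0.1 6006 2>/dev/null || echo "warning: Godot debug adapter not reachable on port 6006"

    # Same effect as the alias gvim="nvim --listen /tmp/godothost" suggested above.
    exec nvim --listen /tmp/godothost "$@"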

6 months ago 68 votes
Stuff I've been working on

It's been around two years since I had to quit my long-term addiction to stable jobs. Quite a few people who read this blog are wondering what the hell exactly I've been doing since then, so I'm going to update all of you on the various projects I've been working on.

Meme credit: Fabian Stadler

Mikochi

Last year, I created Mikochi, a minimalist remote file browser written in Go and Preact. It has slowly been getting more and more users, and it's now sitting at more than 200 GitHub stars and more than 6000 Docker pulls. I personally use it almost every day and it fits my use case perfectly. It is basically feature-complete, so I don't do much development on it. I've actually been hoping users would help me solve the few remaining GitHub issues. So far it has happened twice, which is a good start, I guess.

Itako

You may have seen a couple of posts on this blog regarding finance. It's a subject I've been trying to learn more about for a while now. This led me to read some excellent books, including Nassim Taleb's Fooled by Randomness, Robert Shiller's Irrational Exuberance, and Robert Carver's Smart Portfolios. Those books have pushed me toward a more systematic approach to investing, and I've built Itako to help me with that. I've not talked about it on this blog so far, but it's SaaS software that gives clear data visualizations of a stock portfolio's performance, volatility, and diversification. It's currently in beta and usable for free. I'm quite happy that there are actually people using it and that it seems to work without any major issues. However, I think making it easier to use and adding a couple more features will be necessary to turn it into a commercially viable product. I try to work on it when I find the time, but for the next couple of months, I have to prioritize the next project.

Dice'n Goblins

I play RPGs too much, and now I'm even working on making them. This project was actually not started by me but by Daphnée Portheault. In the past, we worked on a couple of game jams and produced Cosmic Delusion and Duat. Now we're trying to make a real commercial game called Dice'n Goblins. The game is about a goblin who tries to escape from a dungeon that seems to grow endlessly. It's inspired by classic dungeon crawlers like Etrian Odyssey and Lands of Lore. The twist is that you have to use dice to fight monsters. Equipping items you find in the dungeon gives you new dice, and using skills allows you to change the dice values during combat (and make combos). We managed to obtain a decent amount of traction on this project, and now it's being published by Rogue Duck Interactive. The full game should come out in Q1 2025, for PC, Mac, and Linux. You can already play the demo (and wishlist the game) on Steam. If you're really enthusiastic about it, don't hesitate to join the Discord community. Technically, it's quite a big change for me to work on game dev, since I can't use many of the reflexes I've built while working on infra subjects. But I'm getting more and more comfortable with Godot and with all the new game-development lingo. It's also been an occasion to do a bit of work on non-code topics, like press relations.

Japanese

Something totally not relevant to tech: since I've managed to reach a 'goed genoeg' ("good enough") level of Dutch, I've also started to learn more Japanese. I've almost reached the N4 level. (By almost I mean I've failed, but it was close.)

A screenshot from the Kanji Study Android app

I've managed to learn all the hiragana, katakana, basic vocabulary, and grammar. So now all I have left to do is a huge amount of immersion and grinding more kanji. This is tougher than I thought it would be, but I guess it's fun that I can pretend to be studying while playing Dragon Quest XI in Japanese.

7 months ago 86 votes
How to publish your Godot game on Mac

Since 2019, Apple has required all macOS software to be signed and notarized. This is meant to prevent naive users from installing malware while running software from unknown sources. Since this process is convoluted, it stops many indie game developers from releasing their Godot games on Mac. To solve this, this article will attempt to document each and every step of the signing and notarization process.

Photo by Natasya Chen

Step 0: Get a Mac

While tools exist to codesign/notarize Mac executables from other platforms, I think having access to a macOS machine will remove quite a few headaches. A Mac VM, or even a cloud machine, might do the job. I have not personally tested those alternatives, so if you do, please tell me if they work well.

Step 1: Get an Apple ID and the Developer App

You can create an Apple ID through Apple's website. While the process should be straightforward, it seems like Apple has trust issues when it comes to email addresses from protonmail.com or custom domains. Do not hesitate to contact support in case you encounter issues creating or logging into your Apple ID. They are quite responsive. Once you have a working Apple ID, use it to log into the App Store on your Mac and install the Apple Developer application.

Step 2: Enroll in the Apple Developer Program

Next, open the Apple Developer app, log in, and ask to "Enroll" in the developer program. This will require you to scan your ID, fill in data about you and your company, and most likely confirm that data with a support agent by phone. The process costs ~$99 and should take between 24 and 48 hours.

Step 3: Set up Xcode

Xcode will be used to codesign and notarize your app through Godot. You should install the app through the App Store like you did for the Apple Developer application. Once the app is installed, we need to accept the license. First, launch Xcode and close it. Then open a terminal and run the following commands:

    sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
    sudo xcodebuild -license accept

Step 4: Generate a certificate signing request

To obtain a code signing certificate, we need to generate a certificate request. To do this, open Keychain Access and, from the top menu, select Keychain Access > Certificate Assistant > Request a Certificate From a Certificate Authority. Fill in your email address and choose Request is: Saved to disk. Click on Continue and save the .certSigningRequest file somewhere.

Step 5: Obtain a code signing certificate

Now, head to the Apple Developer website. Log in and go to the certificate list. Click on the + button to create a new certificate. Check Developer ID Application when asked for the type of certificate and click on Continue. On the next screen, upload the certificate signing request we generated in step 4. You'll be prompted to download your certificate. Do it and add it to Keychain by double-clicking on the file. You can check that your certificate was properly added by running the following command:

    security find-identity -v -p codesigning

It should return (at least) one identity.

Step 6: Get an App Store Connect API key

Back on the Apple Developer website, go to Users and Access, and open the Integrations tab. From this page, you should request access to the App Store Connect API. This access should normally be granted immediately. From this page, create a new key by clicking on the + icon. Give your key a name you will remember and give it the Developer access. Click on Generate and the key will be created.
You will then be prompted to download your key. Do it and store the file safely, as you will only be able to download it once.

Step 7: Configure Godot

Open your Godot project and head to the project settings using the top menu (Project > Project Settings). From there, search for VRAM Compression and check Import ETC2 ASTC. Then make sure you have installed up-to-date export templates by going through the Editor > Manage Export Templates menu and clicking on Download and Install.

To export your project, head to Project > Export. Click on Add and select macOS to create a new preset. In the preset form on the left, you'll have to fill in a unique Bundle Identifier in the Application section; this can be com.yourcompany.yourgame. In the Codesign section, select Xcode codesign and fill in your Apple Team ID and Identity. Those can be found using the security find-identity -v -p codesigning command: the first (~40 characters) part of the output is your identity, and the last (~10 characters, between parentheses) is your Team ID. In the Notarization section, select Xcode notarytool and fill in your API UUID (found on the App Store Connect page), API Key (the file you saved in step 6), and API Key ID (also found on the App Store Connect page). Click on Export Project… to start the export.

Step 8: Checking the notarization status

Godot will automatically send your exported file for notarization. You can check the notarization progress by running:

    xcrun notarytool history --key YOUR_AUTH_KEY_FILE.p8 --key-id YOUR_KEY_ID --issuer YOUR_ISSUER_ID

According to Apple, the process should rarely take more than 15 minutes. Empirically, this is sometimes very false, and the process can give you enough time to grab a coffee, bake a cake, and water your plants. Once the notarization appears completed (with the status Valid or Invalid), you can run this command to check the result (using the job ID found in the previous command's output):

    xcrun notarytool log --key YOUR_AUTH_KEY_FILE.p8 --key-id YOUR_KEY_ID --issuer YOUR_ISSUER_ID YOUR_JOB_ID

Step 9: Stapling your executable

To make sure that your executable can work offline, you are supposed to 'staple' the notarization to it. This is done by running the following command:

    xcrun stapler staple MY_SOFTWARE.dmg

Extra: Exporting the game as .app

Godot can export your game as .dmg, .zip, or .app. For most users, it is more convenient to receive the game as .app, as those can be directly executed. However, the notarization process doesn't support uploading .app files to Apple's servers. I think the proper way to obtain a notarized .app file is to:

Export the project as .dmg from Godot, with code signing and notarization
Mount the .dmg and extract the .app located inside of it
Staple the .app bundle

Extra: Code signing GodotSteam

GodotSteam is a Godot add-on that wraps the Steam API inside GDScript. If you use it, you might encounter issues during notarization, because it adds a bunch of .dylib and .framework files. What I did to work around that was to codesign the framework folders:

    codesign --deep --force --verify --verbose --sign "Developer ID Application: My Company" libgodotsteam.macos.template_release.framework
    codesign --deep --force --verify --verbose --sign "Developer ID Application: My Company" libgodotsteam.macos.template_debug.framework

I also checked the options Allow Dyld Environment Variables and Disable Library Validation in the export settings (section Codesign > Entitlements).

FAQ: Is this really necessary if I'm just going to publish my game on Steam?
Actually, I’m not 100% sure, but I think it is only “recommended” and Steam can bypass the notarization. Steamworks does contain a checkbox asking if App Bundles Are Notarized, so I assume it might do something.
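For reference, here is a minimal shell sketch chaining the post-export steps described above (steps 8 and 9, plus the .app extra). The xcrun commands are the ones from this article; the key file name, the .dmg/.app names, and the hdiutil mount step are hypothetical placeholders I've added, so adjust them to your own build.

    #!/usr/bin/env sh
    # Sketch: check notarization, staple the .dmg, then optionally extract and staple the .app.
    KEY="AuthKey_XXXXXXXXXX.p8"   # hypothetical App Store Connect key file (from step 6)
    KEY_ID="YOUR_KEY_ID"
    ISSUER="YOUR_ISSUER_ID"
    DMG="MyGame.dmg"              # hypothetical export name

    # Step 8: list notarization jobs, then fetch the log for a given job ID.
    xcrun notarytool history --key "$KEY" --key-id "$KEY_ID" --issuer "$ISSUER"
    xcrun notarytool log --key "$KEY" --key-id "$KEY_ID" --issuer "$ISSUER" YOUR_JOB_ID

    # Step 9: staple the ticket so the game can be verified offline.
    xcrun stapler staple "$DMG"

    # Extra: mount the .dmg, copy the .app out, and staple the bundle (assumes default hdiutil output).
    MOUNT_POINT=$(hdiutil attach "$DMG" | awk 'END {print $NF}')
    cp -R "$MOUNT_POINT"/*.app ./MyGame.app
    xcrun stapler staple ./MyGame.app
    hdiutil detach "$MOUNT_POINT"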

7 months ago 85 votes
Create a presskit in 10 minutes with Milou

Talking to the press is an inevitable part of marketing a game or software. To make the journalist's job easier, it's a good idea to put together a press kit. The press kit should contain all the information someone could want in order to write an article about your product, as well as downloadable, high-resolution assets.

Dice'n Goblins

Introducing Milou

Milou is a NodeJS program that generates press kits in the form of static websites. It aims to create beautiful, fast, and responsive press kits, using only YAML configuration files. I built it on top of presskit.html, which solved the same problem but isn't actively maintained at the moment. Milou improves on its foundation by using more modern CSS, YAML instead of XML, and up-to-date JavaScript code.

Installation

First, you will need to have NodeJS installed:

    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
    nvm install 22

Once Node is ready, you can use NPM to install Milou:

    npm install -g milou

Running milou -V should display its version (currently 1.1.1).

Let's build a press kit

Let's create a new project:

    mkdir mypresskit
    cd mypresskit
    milou new

The root directory of your project will be used for your company. In this directory, the file data.yml should contain data about your company, such as your company name, location, website, etc. You can find an example of a fully completed company data.yml file on GitHub. To validate that your file is valid YAML, you can use an online validator.

Your company directory should contain a sub-folder called images; put the illustrations you want to appear in your press kit inside it. Any file named header.*** will be used as the page header, favicon.ico will be used as the page favicon, and files prefixed with the word logo will appear in a dedicated logo section of the page (e.g. logo01.png or logo.jpg). Other images you put in this folder will be included in your page, in the Images section.

After completing your company page, we can create a product page. This will be done in a subfolder:

    mkdir myproduct
    cd myproduct
    milou new -t product

Just like for the company, you should fill in the data.yml file with info about your product, like its title, features, and prices. You can find an example of a product file on GitHub. The product folder should also contain an images subfolder, which works the same way as for the company.

When your product is ready, go back to the company folder and build the press kit:

    cd ../
    milou build .

This will generate the HTML and CSS files for your online press kit in the build directory. You can then use any web server to access them. For example, this will make them accessible from http://localhost:3000/:

    cd build
    npx serve

To put your press kit online, you can upload this folder to any static site host, like Cloudflare Pages, Netlify, or your own server running Nginx.

Conclusion

Milou is still quite new, and if you encounter issues while using it, don't hesitate to open an issue. And if it works perfectly for you, leave a star on GitHub.
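To recap the workflow end to end, here is a condensed shell sketch, assuming Node 22 installed through nvm and the example directory names used above (mypresskit and myproduct); everything else mirrors the commands from this post.

    # Install Node via nvm, then Milou (commands from this post).
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
    nvm install 22
    npm install -g milou
    milou -V    # should print the installed version

    # Scaffold the company press kit and one product page.
    mkdir mypresskit && cd mypresskit
    milou new                       # company-level data.yml + images/
    mkdir myproduct && cd myproduct
    milou new -t product            # product-level data.yml + images/
    cd ..

    # Build the static site and preview it locally at http://localhost:3000/.
    milou build .
    cd build && npx serve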

11 months ago 104 votes

More in programming

Stress And Programming

Having spent four decades as a programmer in various industries and situations, I know that modern software development processes are far more stressful than when I started. It's not simply that developing software today is more complex than it was back in 1981. In that early decade, none

10 hours ago 2 votes
Espressif’s Automatic Reset

In previous articles, we saw how to use “real” UART, and looked into the trick used by Arduino to automatically reset boards when uploading firmware. Today, we’ll look into how Espressif does something similar, using even more tricks. “Real” UART on the Saola As usual, let’s first simply connect the UART adapter. Again, we connect …

10 hours ago 2 votes
How I built a chatbot with my dog

Lessons for AI prompting and retrieval

13 hours ago 2 votes
a whippet waypoint

Hey peoples! Tonight, some meta-words. As you know I am fascinated by compilers and language implementations, and I just want to know all the things and implement all the fun stuff: intermediate representations, flow-sensitive source-to-source optimization passes, register allocation, instruction selection, garbage collection, all of that.

It started long ago with a combination of curiosity and a hubris to satisfy that curiosity. The usual way to slake such a thirst is structured higher education followed by industry apprenticeship, but for whatever reason my path sent me through a nuclear engineering bachelor's program instead of computer science, and continuing that path was so distasteful that I noped out all the way to rural Namibia for a couple years.

Fast-forward, after 20 years in the programming industry, and having picked up some language implementation experience, a few years ago I returned to garbage collection. I have a good level of language implementation chops but never wrote a memory manager, and Guile's performance was limited by its use of the Boehm collector. I had been on the lookout for something that could help, and when I learned of Immix it seemed to me that the only thing missing was an appropriate implementation for Guile, and hey, I could do that!

I started with the idea of an MMTk-style interface to a memory manager that was abstract enough to be implemented by a variety of different collection algorithms. This kind of abstraction is important, because in this domain it's easy to convince oneself that a given algorithm is amazing, just based on vibes; to stay grounded, I find I always need to compare what I am doing to some fixed point of reference. This GC implementation effort grew into Whippet, but as it did so a funny thing happened: the mark-sweep collector that I prototyped as a direct replacement for the Boehm collector maintained mark bits in a side table, which I realized was a suitable substrate for Immix-inspired bump-pointer allocation into holes. I ended up building on that to develop an Immix collector, but without lines: instead each granule of allocation (16 bytes for a 64-bit system) is its own line.

The Immix paper is funny, because it defines itself as a new class of collector, mark-region, fundamentally different from the three other fundamental algorithms (mark-sweep, mark-compact, and evacuation). Immix's regions are blocks (64kB coarse-grained heap divisions) and lines (128B "fine-grained" divisions); the innovation (for me) is the discipline of optimistic evacuation, by which one can potentially defragment a block without a second pass over the heap, while also allowing for bump-pointer allocation. See the papers for the deets!

However what, really, are the regions referred to by mark-region? If they are blocks, then the concept is trivial: everyone has a block-structured heap these days. If they are spans of lines, well, how does one choose a line size? As I understand it, Immix's choice of 128 bytes was to be fine-grained enough to not lose too much space to fragmentation, while also being coarse enough to be eagerly swept during the GC pause.

This constraint was odd, to me; all of the mark-sweep systems I have ever dealt with have had lazy or concurrent sweeping, so the lower bound on the line size to me had little meaning. Indeed, as one reads papers in this domain, it is hard to know the real from the rhetorical; the review process prizes novelty over nuance. Anyway.

What if we cranked the precision dial to 16 instead, and had a line per granule? That was the process that led me to Nofl. It is a space in a collector that came from mark-sweep with a side table, but instead uses the side table for bump-pointer allocation. Or you could see it as an Immix whose line size is 16 bytes; it's certainly easier to explain it that way, and that's the tack I took in a recent paper submission to ISMM'25.

Wait, what! I have a fine job in industry and a blog, why write a paper? Gosh, I have meditated on this for a long time and the answers are very silly. Firstly, one of my language communities is Scheme, which was a research hotbed some 20-25 years ago, which means many practitioners—people I would be pleased to call peers—came up through the PhD factories and published many interesting results in academic venues. These are the folks I like to hang out with! This is also what academic conferences are, chances to shoot the shit with far-flung fellows. In Scheme this is fine, my work on Guile is enough to pay the intellectual cover charge, but I need more, and in the field of GC I am not a proven player. So I did an atypical thing, which is to cosplay at being an independent researcher without having first been a dependent researcher, and just solo-submit a paper. Kids: if you see yourself here, just go get a doctorate. It is not easy but I can only think it is a much more direct path to goal.

And the result? Well, friends, it is this blog post :) I got the usual assortment of review feedback, from the very sympathetic to the less so, but ultimately people were confused by leading with a comparison to Immix but ending without an evaluation against Immix. This is fair, and the paper does not mention that, you know, I don't have an Immix lying around. To my eyes it was a good paper, an 80% paper, but, you know, just a try. I'll try again sometime.

In the meantime, I am driving towards getting Whippet into Guile. I am hoping that sometime next week I will have excised all the uses of the BDW (Boehm GC) API in Guile, which will finally allow for testing Nofl in more than a laboratory environment. Onwards and upwards!

7 hours ago 1 votes
Write the most clever code you possibly can

I started writing this early last week but Real Life Stuff happened and now you're getting the first draft late this week. Warning, unedited thoughts ahead!

New Logic for Programmers release! v0.9 is out! This is a big release, with a new cover design, several rewritten chapters, online code samples and much more. See the full release notes at the changelog page, and get the book here!

Write the cleverest code you possibly can

There are millions of articles online about how programmers should not write "clever" code, and instead write simple, maintainable code that everybody understands. Sometimes the example of "clever" code looks like this (src):

    # Python
    p=n=1
    exec("p*=n*n;n+=1;"*~-int(input()))
    print(p%n)

This is code-golfing, the sport of writing the most concise code possible. Obviously you shouldn't run this in production for the same reason you shouldn't eat dinner off a Rembrandt. Other times the example looks like this:

    def is_prime(x):
        if x == 1:
            return True
        return all([x % n != 0 for n in range(2, x)])

This is "clever" because it uses a single list comprehension, as opposed to a "simple" for loop. Yes, "list comprehensions are too clever" is something I've read in one of these articles. I've also talked to people who think that datatypes besides lists and hashmaps are too clever to use, that most optimizations are too clever to bother with, and even that functions and classes are too clever and code should be a linear script.[1]

Clever code is anything using features or domain concepts we don't understand. Something that seems unbearably clever to me might be utterly mundane for you, and vice versa. How do we make something utterly mundane? By using it and working at the boundaries of our skills. Almost everything I'm "good at" comes from banging my head against it more than is healthy.

That suggests a really good reason to write clever code: it's an excellent form of purposeful practice. Writing clever code forces us to code outside of our comfort zone, developing our skills as software engineers.

    Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you [will get excellent debugging practice at exactly the right level required to push your skills as a software engineer]
    — Brian Kernighan, probably

There are other benefits, too, but first let's kill the elephant in the room:[2]

Don't commit clever code

I am proposing writing clever code as a means of practice. Being at work is a job with coworkers who will not appreciate it if your code is too clever. Similarly, don't use too many innovative technologies. Don't put anything in production you are uncomfortable with. We can still responsibly write clever code at work, though:

Solve a problem in both a simple and a clever way, and then only commit the simple way. This works well for small-scale problems where trying the "clever way" only takes a few minutes.

Write our personal tools cleverly. I'm a big believer in the idea that most programmers would benefit from writing more scripts and support code customized to their particular work environment. This is a great place to practice new techniques, languages, etc.

If clever code is absolutely the best way to solve a problem, then commit it with extensive documentation explaining how it works and why it's preferable to simpler solutions. Bonus: this potentially helps the whole team upskill.

Writing clever code...
...teaches simple solutions

Usually, code that's called too clever composes several powerful features together — the "not a single list comprehension or function" people are the exception. Josh Comeau's "don't write clever code" article gives this example of "too clever":

    const extractDataFromResponse = (response) => {
      const [Component, props] = response;
      const resultsEntries = Object.entries({ Component, props });
      const assignIfValueTruthy = (o, [k, v]) => (v ? { ...o, [k]: v } : o);
      return resultsEntries.reduce(assignIfValueTruthy, {});
    }

What makes this "clever"? I count eight language features composed together: entries, argument unpacking, implicit objects, splats, ternaries, higher-order functions, and reductions. Would code that used only one or two of these features still be "clever"? I don't think so. These features exist for a reason, and oftentimes they make code simpler than not using them.

We can, of course, learn these features one at a time. Writing the clever version (but not committing it) gives us practice with all eight at once and also with how they compose together. That knowledge comes in handy when we want to apply a single one of the ideas. I've recently had to do a bit of pandas for a project. Whenever I have to do a new analysis, I try to write it as a single chain of transformations, and then as a more balanced set of updates.

...helps us master concepts

Even if the composite parts of a "clever" solution aren't by themselves useful, it still makes us better at the overall language, and that's inherently valuable. A few years ago I wrote Crimes with Python's Pattern Matching. It involves writing horrible code like this:

    from abc import ABC

    class NotIterable(ABC):
        @classmethod
        def __subclasshook__(cls, C):
            return not hasattr(C, "__iter__")

    def f(x):
        match x:
            case NotIterable():
                print(f"{x} is not iterable")
            case _:
                print(f"{x} is iterable")

    if __name__ == "__main__":
        f(10)
        f("string")
        f([1, 2, 3])

This composes Python match statements, which are broadly useful, and abstract base classes, which are incredibly niche. But even if I never use ABCs in real production code, it helped me understand Python's match semantics and Method Resolution Order better.

...prepares us for necessity

Sometimes the clever way is the only way. Maybe we need something faster than the simplest solution. Maybe we are working with constrained tools or frameworks that demand cleverness. Peter Norvig argued that design patterns compensate for missing language features. I'd argue that cleverness is another means of compensating: if our tools don't have an easy way to do something, we need to find a clever way. You see this a lot in formal methods like TLA+. Need to check a hyperproperty? Cast your state space to a directed graph. Need to compose ten specifications together? Combine refinements with state machines. Most difficult problems have a "clever" solution.

The real problem is that clever solutions have a skill floor. If normal use of the tool is at difficulty 3 out of 10, then basic clever solutions are at 5 out of 10, and it's hard to jump those two steps in the moment you need the cleverness. But if you've practiced writing overly clever code, you're used to working at a 7 out of 10 level in short bursts, and then you can "drop down" to 5/10. I don't know if that makes too much sense, but I see it happen a lot in practice.
...builds comradery

On a few occasions, after getting a pull request merged, I pulled the reviewer over and said "check out this horrible way of doing the same thing". I find that as long as people know they're not going to be subjected to a clever solution in production, they enjoy seeing it!

Next week's newsletter will probably also be late; after that we should be back to a regular schedule for the rest of the summer.

[1] Mostly grad students outside of CS who have to write scripts to do research. And in more than one data scientist. I think it's correlated with using Jupyter. ↩

[2] If I don't put this at the beginning, I'll get a bajillion responses like "your team will hate you" ↩

2 days ago 3 votes