Neovim is by far my favorite text editor. The clutter-free interface and keyboard-only navigation are what keep me productive in my daily programming. In an earlier post, I explained how I configure it into a minimalist development environment. Today, I will show you how to use it with Godot and GDScript.

## Configure Godot

First, we need to tell Godot to use nvim as a text editor instead of the built-in one. Open Godot, and head to Editor Settings > General > Text Editor > External. There, you will need to tick the box Use external editor, indicate your Neovim installation path, and use the following execution flags:

```
--server /tmp/godothost --remote-send "<C-\><C-N>:n {file}<CR>{line}G{col}|"
```

While in the settings, head to Network > Language Server and note down the remote port Godot is using. By default, it should be 6005. We will need that value later.

## Connecting to Godot with vim-godot

Neovim can access Godot features through a plugin called vim-godot. We will need to edit the nvim configuration file to install plugins and configure Neovim. On Mac and Linux, it is located at ~/.config/nvim/init.vim. I use vim-plug to manage my plugins, so I can just add it to my configuration like this:

```vim
call plug#begin('~/.vim/plugged')
" ...
Plug 'habamax/vim-godot'
" ...
call plug#end()
```

Once the configuration file is modified and saved, use the :PlugInstall command to install it.

You'll also need to indicate Godot's executable path. Add this line to your init.vim:

```vim
let g:godot_executable = '/Applications/Godot.app/Contents/MacOS/Godot'
```

For vim-godot to communicate with the Godot editor, it needs to listen to the /tmp/godothost file we configured in the editor previously. To do that, simply launch nvim with the flag --listen /tmp/godothost. To save you some precious keypresses, I suggest creating a new alias in your bashrc/zshrc like this:

```sh
alias gvim="nvim --listen /tmp/godothost"
```

## Getting autocompletion with coc.nvim

Godot ships with a language server, which means the Godot editor can provide autocompletion, syntax highlighting, and advanced navigation to external editors like nvim. While Neovim now has built-in support for the Language Server Protocol, I've used the plugin coc.nvim to obtain these functionalities for years and see no reason to change. You can also install it with vim-plug by adding the following line to your plugin list:

```vim
Plug 'neoclide/coc.nvim', {'branch': 'release'}
```

Run :PlugInstall again to install it.

You'll need to indicate the Godot language server address and port using the command :CocConfig. It should open Coc's configuration file, which is a JSON file normally located at ~/.config/nvim/coc-settings.json. In this file, enter the following data, and make sure the port number matches the one you noted in your editor settings:

```json
{
  "languageserver": {
    "godot": {
      "host": "127.0.0.1",
      "filetypes": ["gdscript"],
      "port": 6005
    }
  }
}
```

I recommend adding Coc's example configuration to your init.vim file. You can find it on GitHub. It will provide you with a lot of useful shortcuts, such as using gd to go to a function definition and gr to list its references.
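For a taste of what that example configuration sets up, here is a short excerpt in its spirit (see the coc.nvim README for the full, current version):

```vim
" Excerpt in the spirit of coc.nvim's example config:
" LSP-style navigation mappings.
nmap <silent> gd <Plug>(coc-definition)
nmap <silent> gy <Plug>(coc-type-definition)
nmap <silent> gi <Plug>(coc-implementation)
nmap <silent> gr <Plug>(coc-references)
" Show documentation for the symbol under the cursor.
nnoremap <silent> K :call CocActionAsync('doHover')<CR>
```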
## Debugging using nvim-dap

If you want to use the debugger from inside Neovim, you'll need to install another plugin called nvim-dap. Add the following to your plugin list:

```vim
Plug 'mfussenegger/nvim-dap'
```

The plugin authors suggest configuring it using Lua, so let's do that by adding the following to your init.vim:

```vim
lua <<EOF
local dap = require("dap")
dap.adapters.godot = {
  type = "server",
  host = "127.0.0.1",
  port = 6006,
}
dap.configurations.gdscript = {
  {
    type = "godot",
    request = "launch",
    name = "Launch scene",
    project = "${workspaceFolder}",
    launch_scene = true,
  },
}
vim.api.nvim_create_user_command("Breakpoint", "lua require'dap'.toggle_breakpoint()", {})
vim.api.nvim_create_user_command("Continue", "lua require'dap'.continue()", {})
vim.api.nvim_create_user_command("StepOver", "lua require'dap'.step_over()", {})
vim.api.nvim_create_user_command("StepInto", "lua require'dap'.step_into()", {})
vim.api.nvim_create_user_command("REPL", "lua require'dap'.repl.open()", {})
EOF
```

This will connect to Godot's debug adapter (here on port 6006, the default) and allow you to pilot the debugger using the following commands:

- :Breakpoint to create (or remove) a breakpoint
- :Continue to launch the game or run until the next breakpoint
- :StepOver to step over a line
- :StepInto to step inside a function definition
- :REPL to launch a REPL (useful if you want to examine values)

## Conclusion

I hope you'll have a great time developing Godot games with Neovim. If it helps you, you can check out my entire init.vim file on GitHub Gist.
It's been around two years since I quit my long-term addiction to stable jobs. Quite a few people who read this blog are wondering what the hell exactly I've been doing since then, so I'm going to update all of you on the various projects I've been working on.

Meme credit: Fabian Stadler

## Mikochi

Last year, I created Mikochi, a minimalist remote file browser written in Go and Preact. It has slowly been gaining more and more users, and it's now sitting at more than 200 GitHub stars and more than 6000 Docker pulls. I personally use it almost every day, and it fits my use case perfectly. It is basically feature-complete, so I don't do much development on it. I've actually been hoping users would help me solve the few remaining GitHub issues. So far that has happened twice, which is a good start, I guess.

## Itako

You may have seen a couple of posts on this blog regarding finance. It's a subject I've been trying to learn more about for a while now. This led me to read some excellent books, including Nassim Taleb's Fooled by Randomness, Robert Shiller's Irrational Exuberance, and Robert Carver's Smart Portfolios. Those books pushed me toward a more systematic approach to investing, and I built Itako to help me with that.

I've not talked about it on this blog so far, but it's a SaaS product that gives clear data visualizations of a stock portfolio's performance, volatility, and diversification. It's currently in beta and usable for free. I'm quite happy that there are actually people using it and that it seems to work without any major issues. However, I think making it easier to use and adding a couple more features will be necessary to turn it into a commercially viable product. I try to work on it when I find the time, but for the next couple of months, I have to prioritize the next project.

## Dice'n Goblins

I play RPGs too much, and now I'm even working on making them. This project was actually not started by me but by Daphnée Portheault. In the past, we worked on a couple of game jams and produced Cosmic Delusion and Duat. Now we're trying to make a real commercial game called Dice'n Goblins.

The game is about a goblin who tries to escape from a dungeon that seems to grow endlessly. It's inspired by classic dungeon crawlers like Etrian Odyssey and Lands of Lore. The twist is that you have to use dice to fight monsters. Equipping items you find in the dungeon gives you new dice, and using skills allows you to change the dice values during combat (and make combos).

We managed to obtain a decent amount of traction on this project, and now it's being published by Rogue Duck Interactive. The full game should come out in Q1 2025, for PC, Mac, and Linux. You can already play the demo (and wishlist the game) on Steam. If you're really enthusiastic about it, don't hesitate to join the Discord community.

Technically, it's quite a big change for me to work on game dev, since I can't use many of the reflexes I built while working on infra subjects. But I'm getting more and more comfortable with Godot and with all the new game-development lingo. It's also been an occasion to do a bit of work on non-code topics, like press relations.

## Japanese

Something totally not relevant to tech: since I've managed to reach a 'goed genoeg' (good enough) level of Dutch, I've also started to learn more Japanese. I've almost reached the N4 level. (By almost I mean I've failed, but it was close.)
A screenshot from the Kanji Study Android app

I've managed to learn all the hiragana and katakana, plus basic vocabulary and grammar. So now all I have left to do is a huge amount of immersion and grinding more kanji. This is tougher than I thought it would be, but I guess it's fun that I can pretend to be studying while playing Dragon Quest XI in Japanese.
Since 2019, Apple has required all macOS software to be signed and notarized. This is meant to prevent naive users from installing malware while running software from unknown sources. Since this process is convoluted, it stops many indie game developers from releasing their Godot games on Mac. To solve this, this article will attempt to document each and every step of the signing and notarization process.

Photo by Natasya Chen

## Step 0: Get a Mac

While tools exist to codesign/notarize Mac executables from other platforms, I think having access to a macOS machine will remove quite a few headaches. A Mac VM, or even a cloud machine, might do the job. I have not personally tested those alternatives, so if you do, please tell me if they work well.

## Step 1: Get an Apple ID and the Developer app

You can create an Apple ID through Apple's website. While the process should be straightforward, it seems like Apple has trust issues when it comes to email addresses from protonmail.com or custom domains. Do not hesitate to contact support in case you encounter issues creating or logging into your Apple ID; they are quite responsive. Once you have a working Apple ID, use it to log into the App Store on your Mac and install the Apple Developer application.

## Step 2: Enroll in the Apple Developer Program

Next, open the Apple Developer app, log in, and ask to "Enroll" in the developer program. This will require you to scan your ID, fill in data about you and your company, and most likely confirm those data with a support agent by phone. The process costs ~$99 per year and should take between 24 and 48 hours.

## Step 3: Set up Xcode

Xcode will be used to codesign and notarize your app through Godot. You should install it through the App Store like you did for the Apple Developer application. Once the app is installed, we need to accept its license: launch Xcode once and close it, then open a terminal and run the following commands:

```sh
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
sudo xcodebuild -license accept
```

## Step 4: Generate a certificate signing request

To obtain a code signing certificate, we need to generate a certificate request. To do this, open Keychain, and from the top menu, select Keychain Access > Certificate Assistant > Request a Certificate From a Certificate Authority. Fill in your email address and choose Request is: Saved to disk. Click on Continue and save the .certSigningRequest file somewhere.

## Step 5: Obtain a code signing certificate

Now, head to the Apple Developer website. Log in and go to the certificate list. Click on the + button to create a new certificate. Check Developer ID Application when asked for the type of certificate and click on Continue. On the next screen, upload the certificate signing request we generated in step 4. You'll then be prompted to download your certificate. Do it, and add it to Keychain by double-clicking on the file.

You can check that your certificate was properly added by running the following command:

```sh
security find-identity -v -p codesigning
```

It should return (at least) one identity.
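For illustration, the output looks roughly like this (the hash, company name, and Team ID below are placeholders):

```
$ security find-identity -v -p codesigning
  1) A1B2C3D4E5F6A7B8C9D0E1F2A3B4C5D6E7F8A9B0 "Developer ID Application: My Company (ABCDE12345)"
     1 valid identities found
```

Keep this output at hand: we will need both the identity and the Team ID when configuring Godot in step 7.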
## Step 6: Get an App Store Connect API key

Back on the Apple Developer website, go to Users and Access, and open the Integrations tab. From this page, you should request access to the App Store Connect API. This access should normally be granted immediately. From the same page, create a new key by clicking on the + icon. Give your key a name you will remember and give it the Developer access level. Click on Generate and the key will be created. You will then be prompted to download your key. Do it and store the file safely, as you will only be able to download it once.

## Step 7: Configure Godot

Open your Godot project and head to the project settings using the top menu (Project > Project Settings). From there, search for VRAM Compression and check Import ETC2 ASTC. Then make sure you have installed up-to-date export templates by going through the Editor > Manage Export Templates menu and clicking on Download and Install.

To export your project, head to Project > Export. Click on Add and select macOS to create a new preset. In the preset form on the left, you'll have to fill in a unique Bundle Identifier in the Application section; this can be com.yourcompany.yourgame.

In the Codesign section, select Xcode codesign and fill in your Apple Team ID and Identity. Those can be found using the security find-identity -v -p codesigning command: the first (~40 characters) part of the output is your identity, and the last (~10 characters, between parentheses) is your Team ID.

In the Notarization section, select Xcode notarytool and fill in your API UUID (found on the App Store Connect page), API Key (the file you saved in step 6), and API Key ID (also found on the App Store Connect page).

Click on Export Project… to start the export.

## Step 8: Checking the notarization status

Godot will automatically send your exported file for notarization. You can check the notarization progress by running:

```sh
xcrun notarytool history --key YOUR_AUTH_KEY_FILE.p8 --key-id YOUR_KEY_ID --issuer YOUR_ISSUER_ID
```

According to Apple, the process should rarely take more than 15 minutes. Empirically, this is sometimes very false, and the process can give you enough time to grab a coffee, bake a cake, and water your plants. Once the notarization appears completed (with the status Valid or Invalid), you can run this command to check the result (using the job ID found in the previous command's output):

```sh
xcrun notarytool log --key YOUR_AUTH_KEY_FILE.p8 --key-id YOUR_KEY_ID --issuer YOUR_ISSUER_ID YOUR_JOB_ID
```

## Step 9: Stapling your executable

To make sure that your executable can work offline, you are supposed to 'staple' the notarization ticket to it. This is done by running the following command:

```sh
xcrun stapler staple MY_SOFTWARE.dmg
```

## Extra: Exporting the game as .app

Godot can export your game as .dmg, .zip, or .app. For most users, it is more convenient to receive the game as .app, since those can be executed directly. However, the notarization process doesn't support uploading .app files to Apple's servers. I think the proper way to obtain a notarized .app file is to:

1. Export the project as .dmg from Godot, with code signing and notarization
2. Mount the .dmg and extract the .app located inside it
3. Staple the .app bundle

## Extra: Code signing GodotSteam

GodotSteam is a Godot add-on that wraps the Steam API inside GDScript. If you use it, you might encounter issues during notarization, because it adds a bunch of .dylib and .framework files. What I did to work around that was to codesign the framework folders:

```sh
codesign --deep --force --verify --verbose --sign "Developer ID Application: My Company" libgodotsteam.macos.template_release.framework
codesign --deep --force --verify --verbose --sign "Developer ID Application: My Company" libgodotsteam.macos.template_debug.framework
```

I also checked the options Allow Dyld Environment Variables and Disable Library Validation in the export settings (section Codesign > Entitlements).
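Before distributing the result, it can be worth sanity-checking it locally. These two standard macOS commands (MyGame.app is a placeholder for your stapled bundle) verify the staple and ask Gatekeeper for its verdict:

```sh
# Check that the notarization ticket is correctly stapled to the bundle.
xcrun stapler validate MyGame.app

# Ask Gatekeeper whether it would accept the app; a notarized build
# should report "accepted" with "source=Notarized Developer ID".
spctl --assess --type execute --verbose MyGame.app
```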
## FAQ: Is this really necessary if I'm just going to publish my game on Steam?

Actually, I'm not 100% sure, but I think it is only "recommended" and Steam can bypass the notarization. Steamworks does contain a checkbox asking if App Bundles Are Notarized, so I assume it might do something.
Talking to the press is an inevitable part of marketing a game or software. To make the journalist's job easier, it's a good idea to put together a press kit. The press kit should contain all the information someone could want when writing an article about your product, as well as downloadable, high-resolution assets.

Dice'n Goblins

## Introducing Milou

Milou is a NodeJS tool that generates press kits in the form of static websites. It aims to create beautiful, fast, and responsive press kits, using only YAML configuration files. I built it on top of presskit.html, which solved the same problem but isn't actively maintained at the moment. Milou improves on its foundation by using more modern CSS, YAML instead of XML, and up-to-date JavaScript code.

## Installation

First, you will need to have NodeJS installed:

```sh
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
nvm install 22
```

Once Node is ready, you can use NPM to install Milou:

```sh
npm install -g milou
```

Running milou -V should display its version (currently 1.1.1).

## Let's build a press kit

Let's create a new project:

```sh
mkdir mypresskit
cd mypresskit
milou new
```

The root directory of your project will be used for your company. In this directory, the file data.yml should contain data about your company, such as your company name, location, website, etc. You can find an example of a fully completed company data.yml file on GitHub. To check that your file is valid YAML, you can use an online validator.
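To give an idea of the expected shape, here is a small illustrative sketch of a company data.yml (the field names here are assumptions; refer to the example file on GitHub for the exact schema Milou expects):

```yaml
# Illustrative sketch only: the exact field names may differ,
# check the example data.yml linked above for the real schema.
name: My Game Studio
based-in: Utrecht, The Netherlands
website: https://example.com
description: >
  A small independent studio making dungeon crawlers.
press-contact: press@example.com
socials:
  - name: Bluesky
    link: https://bsky.app/profile/mygamestudio
```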
Your company directory should contain a sub-folder called images; put the illustrations you want to appear in your press kit inside it. Any file named header.*** will be used as the page header, favicon.ico will be used as the page favicon, and files prefixed with the word logo will appear in a dedicated logo section of the page (e.g. logo01.png or logo.jpg). Other images you put in this folder will be included in the page's Images section.

After completing your company page, we can create a product page. This is done in a subfolder:

```sh
mkdir myproduct
cd myproduct
milou new -t product
```

Just like for the company, you should fill in the data.yml file with info about your product, like its title, features, and prices. You can find an example of a product file on GitHub. The product folder should also contain an images subfolder, which works the same way as the company one.

When your product is ready, go back to the company folder and build the press kit:

```sh
cd ../
milou build .
```

This will generate the HTML and CSS files for your online press kit in the build directory. You can then use any web server to serve them. For example, this will make them accessible from http://localhost:3000/:

```sh
cd build
npx serve
```

To put your press kit online, you can upload this folder to any static site host, like Cloudflare Pages, Netlify, or your own server running Nginx.

## Conclusion

Milou is still quite new, and if you encounter issues while using it, don't hesitate to open an issue. And if it works perfectly for you, leave a star on GitHub.
My journey to Lisp began in the early 1990s. Over three decades later, a few days ago, I rediscovered the first Lisp environment I ever used back then, which contributed to my love for the language. Here it is, PC Scheme running under DOSBox-X on my Linux PC:

Screenshot of the PC Scheme Lisp development environment for MS-DOS by Texas Instruments running under DOSBox-X on Linux Mint Cinnamon.

Using PC Scheme again brought back lots of great memories and made me reflect on what the environment taught me about Lisp and Lisp tooling.

As a Computer Science student at the University of Milan, Italy, around 1990 I took an introductory computers and programming class taught by Prof. Stefano Cerri. The textbook was the first edition of Structure and Interpretation of Computer Programs (SICP), and Texas Instruments PC Scheme for MS-DOS was the recommended PC implementation. I installed PC Scheme under DR-DOS on a 20 MHz 386 Olidata laptop with 2 MB RAM and a 40 MB hard disk drive.

Prior to the class I had read about Lisp here and there but never played with the language. SICP and its use of Scheme as an elegant executable formalism instantly fascinated me. It was Lisp love at first sight. The class covered the first three chapters of the book, but I later read the rest on my own. I did lots of exercises, using PC Scheme to write and run them. Soon I became one with PC Scheme.

The environment enabled a tight development loop thanks to its Emacs-like EDWIN editor, which was well integrated with the system. The Lisp awareness of EDWIN blew my mind, as it was the first such tool I encountered. The editor auto-indented and reformatted code, matched parentheses, and supported evaluating expressions and code blocks. Typing a closing parenthesis made EDWIN blink the corresponding opening one and briefly show a snippet of the beginning of the matched expression.

Paying attention to the matching and the snippets made me familiar with the shape and structure of Lisp code, giving a visual feel for whether code looks syntactically right or off. Within hours of starting to use EDWIN, the parentheses ceased to be a concern and disappeared from my conscious attention. Handling parentheses came naturally. I actually ended up loving parentheses and the aesthetics of classic Lisp.

Parenthesis matching suggested a technique for writing syntactically correct Lisp code with pen and paper. When writing a closing parenthesis with my right hand, I rested my left hand on the paper with the index finger pointed at the corresponding opening parenthesis, moving the hands in sync to match the current code. This way it was fast and easy to write moderately complex code.

PC Scheme spoiled me and set the baseline of what to expect in a Lisp environment. After the class I moved to PCS/Geneva, a more advanced PC Scheme fork developed at the University of Geneva. Over the following decades I encountered and learned Common Lisp, Emacs Lisp, and Interlisp. These experiences cemented my passion for Lisp.

In the mid-1990s Texas Instruments released the executable and sources of PC Scheme. I didn't know it at the time, or if I noticed I have long forgotten. Until a few days ago, when nostalgia came knocking and I rediscovered the PC Scheme release.

I installed PC Scheme under the DOSBox-X MS-DOS emulator on my Linux Mint Cinnamon PC. It runs well, and I enjoy going through the system to rediscover what it can do. Playing with PC Scheme after decades of Lisp experience, and with hindsight on computing evolution, shines new light on the environment.
I didn't fully realize it at the time, but the product packed amazing value for the price. It cost $99 in the US, and I paid about 150,000 lire for it in Italy. Costing as much as two or three textbooks, the software was affordable even for students and hobbyists.

PC Scheme is a rich, fast, and surprisingly capable environment, with features such as a Lisp-aware editor, a good compiler, a structure editor and other tools, many Scheme extensions such as engines and OOP, text windows, graphics, and a lot more. The product came with an extensive manual, a thick book in a massive 3-ring binder that I read cover to cover more than once. A paper on the implementation of PC Scheme sheds light on how good the system is given the platform constraints.

Using PC Scheme now lets me put into focus what it taught me about Lisp and Lisp systems: the convenience and productivity of Lisp-aware editors; interactive development and exploratory programming; and a rich Lisp environment with a vast toolbox of libraries and facilities — this is your grandfather's batteries-included language. Three decades after PC Scheme, a similar combination of features, facilities, and classic aesthetics drew me to Medley Interlisp, my current daily driver for Lisp development.

#Lisp #MSDOS #retrocomputing
I've been writing some internal dashboards recently, and one hard part is displaying timestamps. Our server does everything in UTC, but the team is split across four different timezones, so the server timestamps aren't always easy to read.

For most people, it's harder to understand a UTC timestamp than a timestamp in your local timezone. Did that event happen just now, an hour ago, or much further back? Was that at the beginning of your working day? Or at the end?

Then I remembered that I tried to solve this five years ago at a previous job. I wrote a JavaScript snippet that converts UTC timestamps into human-friendly text. It displays times in your local time zone, and adds a short suffix if the time happened recently. For example:

today @ 12:00 BST (1 hour ago)

In my old project, I was writing timestamps in a <div> and I had to opt into the human-readable text for every date on the page. It worked, but it was a bit fiddly.

Doing it again, I thought of a more elegant solution. HTML has a <time> element for expressing datetimes, which is a more meaningful wrapper than a <div>. When I render the dashboard on the server, I don't know the user's timezone, so I include the UTC timestamp in the page like so:

```html
<time datetime="2025-04-15 19:45:00Z">
  Tue, 15 Apr 2025 at 19:45 UTC
</time>
```

I put a machine-readable date and time string with a timezone offset string in the datetime attribute, and then a more human-readable string in the text of the element. Then I add this JavaScript snippet to the page:

```javascript
window.addEventListener("DOMContentLoaded", function() {
  document.querySelectorAll("time").forEach(function(timeElem) {
    // Set the `title` attribute to the original text, so a user
    // can hover over a timestamp to see the UTC time.
    timeElem.setAttribute("title", timeElem.innerText);

    // Replace the display text with a human-friendly date string
    // which is localised to the user's timezone.
    timeElem.innerText = getHumanFriendlyDateString(
      timeElem.getAttribute("datetime")
    );
  })
});
```

This updates any <time> element on the page to use a human-friendly date string, which is localised to the user's timezone. For example, I'm in the UK, so that becomes:

```html
<time datetime="2025-04-15 19:45:00Z" title="Tue, 15 Apr 2025 at 19:45 UTC">
  Tue, 15 Apr 2025 at 20:45 BST
</time>
```

In my experience, these timestamps are easier and more intuitive for people to read. I always include a timezone string (e.g. BST, EST, PDT) so it's obvious that I'm showing a localised timestamp. If you really need the UTC timestamp, it's in the title attribute, so you can see it by hovering over it. (Sorry, mouseless users, but I don't think any of my team are browsing our dashboards from their phone or tablet.)

If the JavaScript doesn't load, you see the plain old UTC timestamp. It's not ideal, but the page still loads and you can see all the information – this behaviour is an enhancement, not an essential.
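The helper itself could be as simple as something like this (a minimal sketch; the thresholds and wording are illustrative, not my exact code):

```javascript
// Illustrative sketch of a helper like getHumanFriendlyDateString.
function getHumanFriendlyDateString(datetime) {
  // The datetime attribute uses a space separator; normalise it so
  // Date() parses it reliably across browsers.
  const date = new Date(datetime.replace(" ", "T"));

  // Absolute time, localised to the user's locale and timezone,
  // e.g. "Tue, 15 Apr 2025, 20:45 BST".
  const formatted = new Intl.DateTimeFormat(undefined, {
    weekday: "short", day: "numeric", month: "short", year: "numeric",
    hour: "2-digit", minute: "2-digit", timeZoneName: "short",
  }).format(date);

  // Short suffix for recent events, e.g. "(1 hour ago)".
  const minutes = Math.floor((Date.now() - date.getTime()) / 60000);
  if (minutes >= 0 && minutes < 60) {
    return `${formatted} (${minutes} min ago)`;
  }
  if (minutes >= 60 && minutes < 24 * 60) {
    const hours = Math.floor(minutes / 60);
    return `${formatted} (${hours} hour${hours === 1 ? "" : "s"} ago)`;
  }
  return formatted;
}
```

Intl.DateTimeFormat does the heavy lifting of localising the absolute part; only the relative suffix needs hand-rolling.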
To me, this is the unfulfilled promise of the <time> element. In my fantasy world, web page authors would write the time in a machine-readable format, and browsers would show it in a way that makes sense for the reader. They'd take into account the reader's language, locale, and time zone. I understand why that hasn't happened – it's much easier said than done. You need so much context to know what's the "right" thing to do when dealing with datetimes, and guessing without that context is at the heart of many datetime bugs. These sorts of human-friendly, localised timestamps are very handy sometimes, and a complete mess at other times.

In my staff-only dashboards, I have that context. I know what these timestamps mean and who's going to be reading them, and I think they're a helpful addition that makes the data easier to read.
Nearly a quarter of seventeen-year-old boys in America have an ADHD diagnosis. That's crazy. But worse than the diagnosis is that the majority of them end up on amphetamines, like Adderall or Ritalin. These drugs allow especially teenage boys (diagnosed at 2-3x the rate of girls) to do what their mind would otherwise resist: study subjects they find boring for long stretches of time. Hurray?

Except it doesn't even work. Taking Adderall or Ritalin doesn't actually help you learn more; it merely makes trying tolerable. The kids might feel like the drugs are helping, but the test scores say they're not. It's Dunning-Kruger — the phenomenon where low-competence individuals overestimate their abilities — in a pill.

Furthermore, even this perceived improvement is short-term. The sudden "miraculous" ability to sit still and focus on boring school work wanes in less than a year on the drugs. In three years, pill poppers are doing no better than those who didn't take amphetamines at all.

These are all facts presented in a blockbuster story in The New York Times Magazine entitled Have We Been Thinking About A.D.H.D. All Wrong?, which unpacks all the latest research on ADHD. It's depressing reading. Not least because the definition of ADHD is so subjective and situational. The NYTM piece is full of anecdotes from kids with an ADHD diagnosis whose symptoms disappeared when they stopped pursuing a school path out of step with their temperament.

And just look at these ADHD markers from the DSM-5:

Inattention
- Difficulty staying focused on tasks or play.
- Frequently losing things needed for tasks (e.g., toys, school supplies).
- Easily distracted by unrelated stimuli.
- Forgetting daily activities or instructions.
- Trouble organizing tasks or completing schoolwork.
- Avoiding or disliking tasks requiring sustained mental effort.

Hyperactivity
- Fidgeting, squirming, or inability to stay seated.
- Running or climbing in inappropriate situations.
- Excessive talking or inability to play quietly.
- Acting as if "driven by a motor," always on the go.

Impulsivity
- Blurting out answers before questions are completed.
- Trouble waiting for their turn.
- Interrupting others' conversations or games.

The majority of these so-called symptoms are what I'd classify as "normal boyhood". I certainly could have checked off a bunch of them, and you only need six over six months for an official ADHD diagnosis. No wonder a quarter of those seventeen-year-old boys in America qualify!

Borrowing from Erich Fromm's The Sane Society, I think we're looking at a pathology of normalcy, where healthy boys are defined as those who can sit still, focus on studies, and suppress kinetic energy. Boys with low intensity and low energy. What a screwy ideal to chase for all.

This is all downstream from an obsession with getting as many kids through as much safety-obsessed schooling as possible. While the world still needs electricians, carpenters, welders, soldiers, and a million other occupations that exist outside the narrow educational ideal of today.

Now I'm sure there is a small number of really difficult cases where even the short-term break from severe symptoms that amphetamines can provide is welcome. The NYTM piece quotes the doctor who did one of the most consequential studies on ADHD as thinking that's around 3% — a world apart from the quarter of seventeen-year-olds discussed above. But as ever, there is no free lunch in medicine.
Long-term use of amphetamines acts as a growth inhibitor, resulting in kids up to an inch shorter than they otherwise would have been. On top of the awful downs that often follow amphetamine highs, and the loss of interest, humor, and spirit that frequently comes with the treatment too.

This is all eerily similar to what happened in America when a bad study from the 1990s convinced a generation of doctors that opioids actually weren't addictive. By the time they realized the damage, they'd set in motion an overdose and addiction cascade that's presently killing over 100,000 Americans a year. The book Empire of Pain chronicles that tragedy well.

Or how about the surge in puberty-blocker prescriptions, which has now been arrested in the UK, following the Cass Review, as well as in Finland, Norway, Sweden, France, and elsewhere. Doctors are supposed to first do no harm, but they're as liable to be swept up in bad paradigms, social contagions, and ideological echo chambers as the rest of us. And this insane over-diagnosis of ADHD fits that liability to a T.