teleportation does exist from OR to recovery room I left something behind not quite a part of myself —unwelcome guests poisoning me from the inside no longer welcome
6 months ago


More from ntietz.com blog

Licensing can be joyful (and legally dubious)

Software licenses are a reflection of our values. How you choose to license a piece of software says a lot about what you want to achieve with it. Do you want to reach the maximum number of users? Do you want to ensure future versions remain free and open source? Do you want to preserve your opportunity to make a profit?

They can also be used to reflect other values. For example, there is the infamous JSON license written by Doug Crockford. It's essentially the MIT license with this additional clause: "The Software shall be used for Good, not Evil." This has caused quite some consternation. It is a legally dubious addition, because "Good" and "Evil" are not defined here, and many people disagree on what they are. This is really not enforceable, and it's going to make many corporate lawyers wary of using software under this license1. I don't think that enforcing this clause was the point, though. The point is more about signaling values and just having fun with it. I don't think anyone seriously believes that this license will be enforceable, or that it will truly curb the amount of evil in the world. But will it start conversations?

* * *

There are a lot of other small, playful licenses. None of these are going to change the world, but they inject a little joy and play into an area of software that is usually serious and somber. When I had to pick a license for my exceptional language (Hurl), I went down that serious spiral at first. What license will give the project the best adoption, or help it achieve its goals? What are its goals? Well, one of its goals was definitely to be funny. Another was to make sure that people can use the software for educational purposes. If I make a language as a joke, I do want people to be able to learn from it and do their own related projects!

This is where we enter one of the sheerly joyous parts of licensing: the ability to apply multiple licenses to software so that the user can decide which one to use the software under. You see a lot of Rust projects dual-licensed under the Apache and MIT licenses, because the core language is dual-licensed for very good reasons. We can apply similar rationale to Hurl's license, and we end up with triple-licensing. It's currently available under three licenses, each for a separate purpose. Licensing it under the AGPL enables users to create derivative works for their own purposes (probably to learn) as long as it remains licensed the same way. Then we have a commercial license option, which is there so that if you want to commercialize it, I can get a cut of that2. The final option is to license it under the Gay Agenda License, which was created originally for this project. This option basically requires you to not be a bigot, and then you can use the software under the MIT license terms. It seems fair to me. When I got through that license slide at SIGBOVIK 2024, I knew that the mission was accomplished: bigotry was defeated, or at least the audience laughed.

* * *

The Gay Agenda License is a modified MIT license which requires you to do a few things:
You must provide attribution (in the typical MIT manner)
You have to stand up for LGBTQ rights
You have to say "be gay, do crime" during use of the software
Oh, and if you support restricting LGBTQ rights, then you lose that license right away. No bigots allowed here. This is all, of course, written in more complete sentences in the license itself. The best thing is that you can use this license today! There is a website for the Gay Agenda License, the very fitting gal.gay3.
The website has all the features you'd expect, like showing the license text, using appropriate flags, and copying the text to the clipboard for ease of putting this in your project.

Frequently Anticipated Questions

Inspired by Hannah's brilliant post's FAQ, here are answers to the questions you must have by now.

Is this enforceable? We don't really know until it's tested in court, but if that happens, everyone has already lost. So, who knows, I hope we don't find out!

Isn't it somewhat ambiguous? What defines what is standing up for LGBTQ rights? Ah, yes, good catch. This is a big problem for this totally serious license. Definitely a problem.

Can I use it in my project? Yeah! Let me know if you do so I can add it to a showcase on the website. But keep in mind, this is a joke, er, totally serious license, so only use it on silly things, er, highly serious commercial projects!

How do I get a commercial license of Hurl? This is supposed to be about the Gay Agenda License, not Hurl. But since you asked, contact me for pricing.

When exactly do I have to say "be gay, do crime"? To be safe, it's probably best that you mutter it continuously while using all software. You never know when it's going to be licensed under the Gay Agenda License, so repeat the mantra to ensure compliance.

Thank you to Anya for the feedback on a draft of this post. Thank you to Chris for building the first version of gal.gay for me.

1 Not for nothing, because most of those corporations would probably be using the software for evil. So, mission accomplished, I guess?

2 For some reason, no one has contacted me for this option yet. I suspect widespread theft of my software, since surely people want to use Hurl. They're not using the third option, since we still see rampant transphobia.

3 This is my most expensive domain yet, at $130 for the first year. I'm hoping that the price doesn't rise dramatically over time, but I'm not optimistic, since it's a three-letter domain. That said, anything short of extortion will likely be worth keeping for the wonderful email addresses I get out of this, being a gay gal myself. It's easier to spell on the phone than this domain is, anyway.

6 months ago 66 votes
Asheville

Asheville is in crisis right now. They're without drinking water, faucets run dry, and it's difficult to flush toilets. As of yesterday, the hospital has water (via tanker trucks), but 80% of the public water system is still without running water. Things are really bad. Lots of infrastructure has been washed away. Even when water is back, there has been tremendous damage done that will take a long time to recover from and rebuild.

* * *

Here's the only national news story my friend from Asheville had seen which covered the water situation specifically. It's hard for me to understand why this is not covered more broadly. And my heart aches for those in and around the Asheville area. As I'm far away, I can't do a lot to help. But I can donate money, which my friend said is the only donation that would help right now if you aren't in the area. She specifically pointed me to these two ways to donate:

Beloved Asheville: a respected community organization in Asheville, this is a great place to send money to help. (If you're closer to that area, it does look like they have specific things they're asking for as well, but this feels like an "if you can help this way, you'd already know" situation.)

Mutual Aid Disaster Relief: there's a local Asheville chapter which is doing work to help. Also an organization to support for broad disaster recovery in general.

I've donated money. I hope you will, too, for this and for the many other crises that affect us. Let's help each other.

6 months ago 60 votes
Rust needs a web framework for lazy developers

I like to make silly things, and I also like to put in minimal effort for those silly things. I also like to make things in Rust, mostly for the web, and this is where we run into a problem.

See, if I want to make something for the web, I could use Django, but I don't want that. I mean, Django is for building serious businesses, not for building silly non-commercial things! But using Rust, we have to do a lot more work than if we build it with Django or friends. So far, there's no equivalent, and the Rust community leans heavily into the "wire it up yourself" approach. As Are We Web Yet? says, "[...] you generally have to wire everything up yourself. Expect to put in a little bit of extra set up work to get started." This undersells it, though. It's more than a little bit of extra work to get started! I know because I made a list of things to do to get started.

Rust needs something that bundles this up for you, so that we can serve all web developers. Having it would make it a lot easier to make the case to use Rust. The benefits are there: you get a wonderful type system, wonderful performance, and build times that give you back those coffee breaks you used to get while your code compiled.

What do we need?

There is a big pile of stuff that nearly every web app needs, no matter if it's big or small. Here's a rough list of what seems pretty necessary to me:

Routing/handlers: This is pretty obvious, but we have to be able to get an incoming request to some handler for it. Additionally, this routing needs to handle path parameters, ideally with type information, and we'll give bonus points for query parameters, forms, etc.

Templates: We'll need to generate, you know, HTML (and sometimes other content, like JSON or, if you're in the bad times still, XML). Usually I want these to have basic logic, like conditionals, match/switch, and loops.

Static file serving: We'll need to serve some assets, like CSS files. This can be done separately, but having it as part of the same web server is extremely handy for both local development and for small-time deployments that won't handle much traffic.

Logins: You almost always need some way to log in, since apps are usually multi-user or deployed on a public network. This is just annoying to wire up every time! It should be customizable and something you can opt out of, but it should be trivial to have logins from the start.

Permissions: You also need this for systems that have multiple users, since people will have different data they're allowed to access or different roles in the system. Permissions can be complicated, but you can make something relatively simple that follows the check(user, object, action) pattern and get really far with it.

Database interface: You're probably going to have to store data for your app, so you want a way to do that. Something that's ORM-like is often nice, but something light is fine. Whatever you do here isn't the only way to interact with the database, but it'll be used for things like logins, permissions, and admin tools, so it's going to be a fundamental piece.

Admin tooling: This is arguably a quality-of-life issue, not a necessity, except that every time you set up your application in a local environment or in production you're going to have to bootstrap it with at least one user or some data. And you'll have to do admin actions sometimes! So I think having this built in for at least some of the common actions is a necessity for a seamless experience.

WebSockets: I use WebSockets in a lot of my projects. They just let you do really fun things with pushing data out to connected users in a more real-time fashion!

Hot reloading: This is a huge one for developer experience, because you want to have the ability to see changes really quickly. When code or a template changes, you need to see that reflected as soon as humanly possible (or as soon as the Rust compiler allows).

Then we have a pile of things that are quality-of-life improvements. I think they're necessary for long-term projects, but they might not be as necessary upfront, so users are less annoyed at implementing them themselves because the cost is spread out.

Background tasks: There needs to be a story for these! You're going to have features that have to happen on a schedule, and having a consistent way to do that is a big benefit and makes development easier.

Monitoring/observability: Only the smallest, least-critical systems should skip this. It's really important to have, and it will make your life so much easier in that moment when you desperately need it.

Caching: There are a lot of ways to do this, and all of them make things more complicated and maybe faster? So this is nice to have a story for, but users can also handle it themselves.

Emails and other notifications: It's neat to have password resets and things built in, and this is probably a necessity if you're going to have logins, so you can have password resets. But other than that feature, it feels like it won't get used that much and isn't a big deal to add in when you need it.

Deployment tooling: Some consistent way to deploy somewhere is really nice, even if it's just an autogenerated Dockerfile that you can use with a host of your choice.

CSS/JS bundling: These days, we use JS and CSS everywhere, so you probably want a web tool to be aware of them so they can be included seamlessly. But does it really have to be integrated in? Probably not...

So those are the things I'd target in a framework if I were building one! I might be doing that...

The existing ecosystem

There's quite a bit out there already for building web things in Rust. None of them quite hit what I want, which is intentional on their part: none of them aspire to be what I'm looking for here. I love what exists, and I think we're sorely missing what I want here (I don't think I'm alone).

Web frameworks

There are really two main groups of web frameworks/libraries right now: the minimalist ones, and the single-page app ones.

The minimalist ones are reminiscent of Flask, Sinatra, and other small web frameworks. These include the excellent actix-web and axum, as well as myriad others. There are so many of these, and they all bring a nice flavor to web development by leveraging Rust's type system! But they don't give you much besides handlers; none of the extra functionality we want in a full for-lazy-developers framework.

Then there are the single-page app frameworks. These fill a niche where you can build things with Rust on the backend and frontend, using WebAssembly for the frontend rendering. These tend to be less mature, but good examples include Dioxus, Leptos, and Yew. I used Yew to build a digital vigil last year, and it was enjoyable, but I'm not sure I'd want to do it in a "real" production setting. Each of these is excellent for what it is—but what it is requires a lot of wiring up still.

Most of my projects would work well with the minimalist frameworks, but those require so much wiring up! So it ends up being a chore to set that up each time I want to do something.
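To make the "wire it up yourself" point concrete, here's a minimal sketch of the kind of starting point a minimalist framework gives you, using axum as an example. This is purely illustrative and not anything from newt: the route, the User type, and the check function are made up for the example, and the exact route syntax and serve call vary between axum versions.

```rust
use axum::{extract::Path, http::StatusCode, routing::get, Router};

// A deliberately tiny stand-in for the check(user, object, action) pattern.
// In a real app, the user would come from a login/session extractor.
struct User {
    is_admin: bool,
}

fn check(user: &User, _object: &str, action: &str) -> bool {
    // Simplest possible rule: admins can do anything, everyone else can only read.
    user.is_admin || action == "read"
}

// A handler with a typed path parameter; axum deserializes it for us.
async fn show_item(Path(name): Path<String>) -> Result<String, StatusCode> {
    // Hard-coded user, because login wiring is exactly the part that's missing.
    let user = User { is_admin: false };
    if !check(&user, &name, "read") {
        return Err(StatusCode::FORBIDDEN);
    }
    Ok(format!("Here is item {name}"))
}

#[tokio::main]
async fn main() {
    // Routing is what the minimalist frameworks give you out of the box;
    // templates, static files, logins, permissions, admin, etc. are all on you.
    let app = Router::new().route("/items/:name", get(show_item));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

The typed routing is the part you get for free; the hard-coded user and toy permission check stand in for all the pieces (sessions, real authorization, storage) that you'd still have to wire in yourself on every new project.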
Piles of libraries!

The rest of the ecosystem is piles of libraries. There are lots of template libraries! There are some libraries for logins, and for permissions. There are WebSocket libraries! Often you'll find some projects and examples which integrate a couple of the things you're using, but you won't find something that integrates all the pieces you're using. I've run into some of the examples being out of date, which is to be expected in a fast-moving ecosystem. The pile of libraries leaves a lot of friction, though. It makes getting started, the "just wiring it up" part, very difficult and often an exercise in researching how things work, to understand them in enough depth to do the integration.

What I've done before

The way I've handled this before is basically to pick a base framework (typically actix-web or axum) and then search out all the pieces I want on top of it. Then I'd wire them up, either all at the beginning or as I need them. There are starter templates that could help me avoid some of this pain. They can definitely help you skip some of the initial pain, but you still get all the maintenance burden. You have to make sure your libraries stay up to date, even when there are breaking changes. And you will drift from the template, so it's not really feasible to merge changes to it into your project. For the projects I'm working on, this means that instead of keeping one framework up to date, I have to keep n bespoke frameworks up to date across all my projects! Eep! I'd much rather have a single web framework that handles it all, with clean upgrade instructions between versions. There will be breaking changes sometimes, but this way they can be documented instead of coming about due to changes in the interactions between two components which don't even know they're going to be integrated together.

Imagining the future I want

In an ideal world, there would be a framework for Rust that gives me all the features I listed above. It would also come with excellent documentation, changelogs, thoughtful versioning and handling of breaking changes, and maybe even a great community. All the things I love about Django: could we have those for a Rust web framework, so that we can reap the benefits of Rust without having to go needlessly slowly? This doesn't exist right now, and I'm not sure if anyone else is working on it. All paths seem to lead me toward "whoops, I guess I'm building a web framework." I hope someone else builds one, too, so we can have multiple options.

To be honest, "web framework" sounds way too grandiose for what I'm doing, which is simply wiring things together in an opinionated way, using (mostly) existing building blocks1. Instead of calling it a framework, I'm thinking of it as a web toolkit: a bundle of tools tastefully chosen and arranged to make the artisan highly effective. My toolkit is called nicole's web toolkit, or newt. It's available in a public repository, but it's really not usable (the latest changes aren't even pushed yet). It's not even usable for me yet; this isn't a launch post, more shipping my design doc (and hoping someone will do my work for me so I don't have to finish newt :D). The goal for newt is to be able to create a new small web app and start on the actual project in minutes instead of days, bypassing the entire process of wiring things up. I think the list of must-haves and quality-of-life features above will be a start, but by no means everything we need.
I'm not ready to accept contributions, but I hope to be there at some point. I think that Rust really needs this, and the whole ecosystem will benefit from it. A healthy ecosystem will have multiple such toolkits, and I hope to see others develop as well.

* * *

If you want to follow along with mine, though, feel free to subscribe to my RSS feed or newsletter, or follow me on Mastodon. I'll try to let people know in all those places when the toolkit is ready for people to try out. Or I'll do a post-mortem on it, if it ends up that I don't get far with it! Either way, this will be fun.

1 I do plan to build a few pieces from scratch for this, as the need arises. Some things will be easier that way, or fit more cohesively. Can't I have a little greenfield, as a treat?

6 months ago 60 votes
What I tell people new to on-call

The first time I went on call as a software engineer, it was exciting—and ultimately traumatic. Since then, I've had on-call experiences at multiple other jobs and have grown to really appreciate it as part of the role. As I've progressed through my career, I've gotten to help establish on-call processes and run some related trainings. Here is some of what I wish I'd known when I started my first on-call shift, and what I try to tell each engineer before theirs.

Heroism isn't your job, triage is

It's natural to feel a lot of pressure with on-call responsibilities. You have a production application that real people need to use! When that pager goes off, you want to go in and fix the problem yourself. That's the job, right? But it's not. It's not your job to fix every issue by yourself. It is your job to see that issues get addressed. The difference can be subtle, but important. When you get that page, your job is to assess what's going on. A few questions I like to ask are: What systems are affected? How badly are they impacted? Does this affect users? With answers to those questions, you can figure out what a good course of action is. For simple things, you might just fix it yourself! If it's a big outage, you're putting on your incident commander hat and paging other engineers to help out. And if it's a false alarm, then you're putting in a fix for the noisy alert! (You're going to fix it, not just ignore it, right?) Just remember not to be a hero. You don't need to fix it alone, you just need to figure out what's going on and get a plan.

Call for backup

Related to the previous one: you aren't going it alone. Your main job in holding the pager is to assess and make sure things get addressed. Sometimes you can do that alone, but often you can't! Don't be afraid to call for backup. People want to be helpful to their teammates, and they want that support available to them, too. And it's better to wake me up a little too much than to let me sleep through times when I was truly needed. If people are getting woken up a lot, the issue isn't calling for backup, it's that you're having too many true emergencies. It's best to figure out that you need backup early, like 10 minutes in, to limit the damage of the incident. The faster you figure out that other people are needed, the faster you can get the situation under control.

Communicate a lot

In any incident, adrenaline runs high and people are stressed out. The key to good incident response is communication in spite of the adrenaline. Communicating under pressure is a skill, and it's one you can learn. Here are a few of the times and ways of communicating that I think are critical: When you get on and respond to an alert, say that you're there and that you're assessing the situation. Once you've assessed it, post an update; if the assessment is taking a while, post updates every 15 minutes while you do so (and call for backup). After the situation is being handled, update key stakeholders at least every 30 minutes for the first few hours, and then after that slow down to hourly. You are also going to have to communicate within the response team! There might be a dedicated incident channel or one for each incident. Either way, try to over-communicate about what you're working on and what you've learned.

Keep detailed notes, with timestamps

When you're debugging weird production stuff at 3am, that's the time you really need to externalize your memory and thought processes into a notes document.
This helps you keep track of what you're doing, so you know which experiments you've run and which things you've ruled out as possibilities or determined to be contributing factors. It also helps when someone else comes up to speed! That person will be able to use your notes to figure out what has happened, instead of you having to repeat it every time someone gets on. Plus, the notes doc won't forget things, but you will. You will also need these notes later to do a post-mortem. What was tried, what was found, and how it was fixed are all crucial for that discussion. Timestamps are also critical for understanding the timeline of the incident and the response! This document should be in a shared place, since people will use it when they join the response. It doesn't need to be shared outside of the engineering organization, though, and likely should not be. It may contain details that lead to more questions than they answer; sometimes, normal engineering things can seem really scary to external stakeholders!

You will learn a lot!

When you're on call, you get to see things break in weird and unexpected ways. And you get to see how other people handle those things! Both of these are great ways to learn a lot. You'll also just get exposure to things you're not used to seeing. Some of this will be areas that you don't usually work in, like ops if you're a developer, or application code if you're on the ops side. Some more of it will be business-side things, like the impact of incidents. And some will be about the psychology of humans, as you see the logs of a user clicking a button fifteen hundred times (get that person an esports sponsorship, geez). My time on call has led to a lot of my professional growth as a software engineer. It has dramatically changed how I work on systems. I don't want to wake up at 3am to fix my bad code, and I don't want it to wake you up, either. Having to respond to pages and fix things will teach you all the ways they can break, so you'll write more resilient software that doesn't break. And it will teach you a lot about your engineering team, good or bad: how it's structured and who's responding to which things.

Learn by shadowing

No one is born skilled at handling production alerts. You gain these skills by doing, so get out there and do it—but first, watch someone else do it. No matter how much experience you have writing code (or responding to incidents), you'll learn a lot by watching a skilled coworker handle incoming issues. Before you're the primary for an on-call shift, you should shadow someone for theirs. This will let you see how they handle things and what the general vibe is. This isn't easy to do! It means that they'll have to make sure to loop you in even when the blood is pumping, so you may have to remind them periodically. You'll probably miss out on some things, but you'll see a lot, too.

Some things can (and should) wait for Monday morning

When we get paged, it usually feels like a crisis. If not to us, it sure does to the person who's clicking that button in frustration, generating a ton of errors, and somehow causing my pager to go off. But not all alerts are created equal. If you assess something and figure out that it's only affecting one or two customers in a way that's not time sensitive, and it's currently 4am on a Saturday? Let people know your assessment (and how to reach you if you're wrong, which you could be) and go back to bed.
Real critical incidents have to be fixed right away, but some things really need to wait. You want to let them go until later for two reasons. First is just the quality of the fix: you're going to fix things more completely if you're rested when you're doing so! Second, and more important, is your health. It's wrong to sacrifice your health (by being up at 4am fixing things) for something non-critical.

Don't sacrifice your health

Many of us have had bad on-call experiences. I sure have. One regret is that I didn't quit that on-call experience sooner. I don't even necessarily mean quitting the job, but pushing back on it. If I'd stood up for myself and said "hey, we have five engineers, it should be more than just me on call," and held firm, maybe I'd have gotten that! Or maybe I'd have gotten a new job. What I wouldn't have gotten is the knowledge that you can develop a rash from being too stressed. If you're in a bad on-call situation, please try to get out of it! And if you can't get out of it, try to be kind to yourself and protect yourself however you can (you deserve better).

Be methodical and reproduce before you fix

Along with taking great notes, you should make sure that you test hypotheses. What could be causing this issue? And before that, what even is the problem? And how do we make it happen? Write down your answers to these! Then go ahead and try to reproduce the issue. After reproducing it, you can go through your hypotheses and test them out to see what's actually contributing to the issue. This way, you can bisect problem spaces instead of just eliminating one thing at a time. And since you know how to reproduce the issue now, you can be confident that you do have a fix at the end of it all!

Have fun

Above all, the thing I want people new to on-call to do? Just have fun. I know this might sound odd, because being on call is a big job responsibility! But I really do think it can be fun. There's a certain kind of joy in going through the on-call response together. And there's a fun exhilaration to it all. And the joy of fixing things and really being the competent engineer who handled it with grace under pressure. Try to make some jokes (at an appropriate moment!) and remember that whatever happens, it's going to be okay. Probably.

6 months ago 70 votes

More in programming

The Post-Developer Era

When OpenAI released GPT-4 back in March 2023, they kickstarted the AI revolution. The consensus online was that front-end development jobs would be totally eliminated within a year or two. Well, it’s been more than two years since then, and I thought it was worth revisiting some of those early predictions, and seeing if we can glean any insights about where things are headed.

21 hours ago 3 votes
Thoughts on releasing our first indie game

Two weeks ago we released Dice’n Goblins, our first game on Steam. This project allowed me to discover and learn a lot of new things about game development and the industry. I will use this blog post to write down what I consider to be the most important lessons from the months spent working on this.

The development started around 2 years ago when Daphnée started prototyping a dungeon crawler featuring a goblin protagonist. After a few iterations, the game combat started featuring dice, and then those dice could be used to make combos. In May 2024, the game was baptized Dice’n Goblins, and a Steam page was created featuring some early gameplay screenshots and footage. I joined the project full-time around this period. Almost one year later, after amassing more than 8000 wishlists, the game finally released on Steam on April 4th, 2025. It was received positively by the gaming press, with great reviews from PCGamer and LadiesGamers. It now sits at 92% positive reviews from players on Steam.

Building RPGs isn't easy

As you can see from the above timeline, building this game took almost two years and two programmers. This is actually not that long if you consider that other indie RPGs have taken more than 6 years to come out. The main issue with the genre is that you need to create a believable world. In practice, this requires programming many different systems that will interact together to give the impression of a cohesive universe. Every time you add a new system, you need to think about how it will fit all the existing game features. For example, players typically expect an RPG to have a shop system. Of course, this means designing a shop non-player character (or building) and creating a UI that is displayed when you interact with it. But this also means thinking through a lot of other systems: combat needs to be changed to reward the player with gold, every item needs a price tag, chests should sometimes reward the player with gold, etc…

Adding too many systems can quickly get into scope-creep territory and make the development exponentially longer. But you can only get away with removing so much before your game stops being an RPG. Making a game without a shop might be acceptable, but the experience still needs to have more features than "walking around and fighting monsters" to feel complete. RPGs are also, by definition, narrative experiences. While some games have managed to get away with procedurally generating 90% of the content, in general, you'll need to get your hands dirty, write a story, and design a bunch of maps. Creating enough content for a game to fill 12+ hours without having the player go through repetitive grind will by itself take a lot of time. Having said all that, I definitely wouldn't do any other kind of game than RPGs, because this is what I enjoy playing. I don't think I would be able to nail what makes other genres fun if I don't play them enough to understand what separates the good from the mediocre.

Marketing isn't that complicated

Everyone in the game dev community knows that there are way too many games releasing on Steam. To stand out amongst the 50+ games coming out every day, it's important not only to have a finished product but also to plan a marketing campaign well in advance. For most people coming from a software engineering background, like me, this can feel extremely daunting. Our education and jobs do not prepare us well for this kind of task. In practice, it's not that complicated.
If your brain is able to provision a Kubernetes cluster, then you are most definitely capable of running a marketing campaign. Like anything else, it's a skill that you can learn over time by practicing it and iteratively improving your methods. During the 8 months following the Steam page release, we tried basically everything you can think of as a way to promote the game. Every time something had a positive impact, we would do more of it, and we quickly stopped doing things with low impact.

The most important thing to keep in mind is your target audience. If you know who wants the kind of games you're making, it is very easy to find where they hang out and talk to them. This is, however, not an easy question to answer for every game. For a long while, we were not sure who would like Dice’n Goblins. Is it people who like Etrian Odyssey? Fans of Dicey Dungeons? Nostalgic players of Paper Mario? For us, the answer was mostly #1, with a bit of #3. Once we figured out what our target audience was, how to communicate with them, and, most importantly, had a game that was visually appealing enough, marketing became very straightforward. This is why we really struggled to get our first 1000 wishlists, but getting the last 5000 was actually not that complicated.

Publishers aren't magic

At some point, balancing the workload of actually building the game and figuring out how to market it felt like too much for a two-person team. We therefore did what many indie studios do, and decided to work with a publisher. We worked with Rogue Duck Interactive, who previously published Dice & Fold, a fairly successful dice roguelike. Without getting too much into details, it didn't work out as planned and we decided, by mutual agreement, to go back to self-publishing Dice’n Goblins. The issue simply came from the audience question mentioned earlier. Even though Dice & Fold and Dice’n Goblins share some similarities, they target a different audience, which requires a completely different approach to marketing. The lesson learned is that when picking a publisher, the most important thing you can do is check that their current game catalog really matches the idea you have of your own game. If you're building a fast-paced FPS, a publisher that only has experience with cozy simulation games will not be able to help you efficiently. In our situation, a publisher with experience in roguelikes and casual strategy games wasn't a good fit for an RPG.

In addition to that, I don't think the idea of using a publisher to remove marketing toil and focus on making the game is that good an idea in the long term. While it definitely helps to remove the pressure of handling social media accounts and ad campaigns, new effort will be required in communicating and negotiating with the publishing team. In the end, the difference between the work saved and the work gained might not have been worth selling a chunk of your game.

Conclusion

After all this was said and done, one big question I haven't answered is: would I do it again? The answer is definitely yes. Not only was building this game an extremely satisfying endeavor, but so much has been learned and built while doing it that it would be a shame not to go ahead and do a second one.

3 hours ago 2 votes
Rediscovering the origins of my Lisp journey

My journey to Lisp began in the early 1990s. Over three decades later, a few days ago, I rediscovered the first Lisp environment I ever used back then, which contributed to my love for the language. Here it is, PC Scheme running under DOSBox-X on my Linux PC:

[Screenshot of the PC Scheme Lisp development environment for MS-DOS by Texas Instruments running under DOSBox-X on Linux Mint Cinnamon.]

Using PC Scheme again brought back lots of great memories and made me reflect on what the environment taught me about Lisp and Lisp tooling. As a Computer Science student at the University of Milan, Italy, around 1990 I took an introductory computers and programming class taught by Prof. Stefano Cerri. The textbook was the first edition of Structure and Interpretation of Computer Programs (SICP), and Texas Instruments PC Scheme for MS-DOS was the recommended PC implementation. I installed PC Scheme under DR-DOS on a 20 MHz 386 Olidata laptop with 2 MB RAM and a 40 MB hard disk drive. Prior to the class I had read about Lisp here and there but never played with the language. SICP and its use of Scheme as an elegant executable formalism instantly fascinated me. It was Lisp love at first sight.

The class covered the first three chapters of the book, but I later read the rest on my own. I did lots of exercises, using PC Scheme to write and run them. Soon I became one with PC Scheme. The environment enabled a tight development loop thanks to its Emacs-like EDWIN editor, which was well integrated with the system. The Lisp awareness of EDWIN blew my mind, as it was the first such tool I encountered. The editor auto-indented and reformatted code, matched parentheses, and supported evaluating expressions and code blocks. Typing a closing parenthesis made EDWIN blink the corresponding opening one and briefly show a snippet of the beginning of the matched expression. Paying attention to the matching and the snippets made me familiar with the shape and structure of Lisp code, giving a visual feel for whether code looks syntactically right or off. Within hours of starting to use EDWIN the parentheses ceased to be a concern and disappeared from my conscious attention. Handling parentheses came naturally. I actually ended up loving parentheses and the aesthetics of classic Lisp.

Parenthesis matching suggested a technique to me for writing syntactically correct Lisp code with pen and paper. When writing a closing parenthesis with the right hand, I rested the left hand on the paper with the index finger pointed at the corresponding opening parenthesis, moving the hands in sync to match the current code. This way it was fast and easy to write moderately complex code.

PC Scheme spoiled me and set the baseline of what to expect in a Lisp environment. After the class I moved to PCS/Geneva, a more advanced PC Scheme fork developed at the University of Geneva. Over the following decades I encountered and learned Common Lisp, Emacs Lisp, and Interlisp. These experiences cemented my passion for Lisp.

In the mid-1990s Texas Instruments released the executable and sources of PC Scheme. I didn't know it at the time, or if I noticed I had long forgotten. Until a few days ago, when nostalgia came knocking and I rediscovered the PC Scheme release. I installed PC Scheme under the DOSBox-X MS-DOS emulator on my Linux Mint Cinnamon PC. It runs well, and I enjoy going through the system to rediscover what it can do. Playing with PC Scheme after decades of Lisp experience and hindsight on computing evolution shines new light on the environment.
I didn't fully realize it at the time, but the product packed amazing value for the price. It cost $99 in the US, and I paid about 150,000 lire for it in Italy. Costing as much as two or three textbooks, the software was affordable even to students and hobbyists. PC Scheme is a rich, fast, and surprisingly capable environment with features such as a Lisp-aware editor, a good compiler, a structure editor and other tools, many Scheme extensions such as engines and OOP, text windows, graphics, and a lot more. The product came with an extensive manual, a thick book in a massive 3-ring binder I read cover to cover more than once. A paper on the implementation of PC Scheme sheds light on how good the system is given the platform constraints.

Using PC Scheme now lets me put into focus what it taught me about Lisp and Lisp systems: the convenience and productivity of Lisp-aware editors; interactive development and exploratory programming; and a rich Lisp environment with a vast toolbox of libraries and facilities — this is your grandfather's batteries-included language. Three decades after PC Scheme, a similar combination of features, facilities, and classic aesthetics drew me to Medley Interlisp, my current daily driver for Lisp development.

#Lisp #MSDOS #retrocomputing

Discuss... (https://remark.as/p/journal.paoloamoroso.com/rediscovering-the-origins-of-my-lisp-journey) | Email | Reply @amoroso@fosstodon.org

an hour ago 1 votes
Normal boyhood is ADHD

Nearly a quarter of seventeen-year-old boys in America have an ADHD diagnosis. That's crazy. But worse than the diagnosis is that the majority of them end up on amphetamines, like Adderall or Ritalin. These drugs allow especially teenage boys (diagnosed at 2-3x the rate of girls) to do what their mind would otherwise resist: study subjects they find boring for long stretches of time. Hurray?

Except, it doesn't even work. Because taking Adderall or Ritalin doesn't actually help you learn more, it merely makes trying tolerable. The kids might feel like the drugs are helping, but the test scores say they're not. It's Dunning-Kruger — the phenomenon where low-competence individuals overestimate their abilities — in a pill. Furthermore, even this perceived improvement is short-term. The sudden "miraculous" ability to sit still and focus on boring school work wanes in less than a year on the drugs. In three years, pill poppers are doing no better than those who didn't take amphetamines at all.

These are all facts presented in a blockbuster story in New York Times Magazine entitled Have We Been Thinking About A.D.H.D. All Wrong?, which unpacks all the latest research on ADHD. It's depressing reading. Not least because the definition of ADHD is so subjective and situational. The NYTM piece is full of anecdotes from kids with an ADHD diagnosis whose symptoms disappeared when they stopped pursuing a school path out of step with their temperament. And just look at these ADHD markers from the DSM-5:

Inattention: difficulty staying focused on tasks or play; frequently losing things needed for tasks (e.g., toys, school supplies); easily distracted by unrelated stimuli; forgetting daily activities or instructions; trouble organizing tasks or completing schoolwork; avoiding or disliking tasks requiring sustained mental effort.

Hyperactivity: fidgeting, squirming, or inability to stay seated; running or climbing in inappropriate situations; excessive talking or inability to play quietly; acting as if “driven by a motor,” always on the go.

Impulsivity: blurting out answers before questions are completed; trouble waiting for their turn; interrupting others’ conversations or games.

The majority of these so-called symptoms are what I'd classify as "normal boyhood". I certainly could have checked off a bunch of them, and you only need six over six months for an official ADHD diagnosis. No wonder a quarter of those seventeen-year-old boys in America qualify!

Borrowing from Erich Fromm’s The Sane Society, I think we're looking at a pathology of normalcy, where healthy boys are defined as those who can sit still, focus on studies, and suppress kinetic energy. Boys with low intensity and low energy. What a screwy ideal to chase for all. This is all downstream from an obsession with getting as many kids through as much safety-obsessed schooling as possible. While the world still needs electricians, carpenters, welders, soldiers, and a million other occupations that exist outside the narrow educational ideal of today.

Now I'm sure there is a small number of really difficult cases where even the short-term break from severe symptoms that amphetamines can provide is welcome. The NYTM piece quotes the doctor who did one of the most consequential studies on ADHD as thinking that's around 3% — a world apart from the quarter of seventeen-year-olds discussed above. But as ever, there is no free lunch in medicine.
Long-term use of amphetamines acts as a growth inhibitor, resulting in kids up to an inch shorter than they otherwise would have been. On top of the awful downs that often follow amphetamine highs. And the loss of interest, humor, and spirit that frequently comes with the treatment too.

This is all eerily similar to what happened in America when a bad study from the 1990s convinced a generation of doctors that opioids actually weren't addictive. By the time they realized the damage, they'd set in motion an overdose and addiction cascade that's presently killing over 100,000 Americans a year. The book Empire of Pain chronicles that tragedy well. Or how about the surge in puberty-blocker prescriptions, which has now been arrested in the UK, following the Cass Review, as well as in Finland, Norway, Sweden, France, and elsewhere.

Doctors are supposed to first do no harm, but they're as liable to be swept up in bad paradigms, social contagions, and ideological echo chambers as the rest of us. And this insane over-diagnosis of ADHD fits that liability to a T.

an hour ago 1 votes