In my recent side project, I’ve been deploying to fly.io and really enjoying it. It’s fairly easy to get set up. And it supports my preferred workflow of deploying my changes early and often. I have run into a few snags though. Fly.io builds your project into a docker image and deploys containers for you. That process is mostly seamless when it works. But sometimes it fails, and you need to debug. By default, fly builds your docker images in the cloud. This is convenient and preferred most of the time. But when I wanted to test some changes to my build, I wanted to try building locally using Docker Desktop. This should be easy. The fly cli is quite nice. And there is a flag to build locally.

```
fly deploy --build-only --local-only
```

This failed saying it couldn’t find Docker.

```
> fly deploy --build-only --local-only
==> Verifying app config
Validating /Users/polotek/src/harembase/fly.toml
Platform: machines
✓ Configuration is valid
--> Verified app config
==> Building image
Error: failed to...
```
a year ago


More from These Yaks Ain't Gonna Shave Themselves

The Frontend Treadmill

A lot of frontend teams are very convinced that rewriting their frontend will lead to the promised land. And I am the bearer of bad tidings. If you are building a product that you hope has longevity, your frontend framework is the least interesting technical decision for you to make. And all of the time you spend arguing about it is wasted energy. I will die on this hill.

If your product is still around in 5 years, you’re doing great and you should feel successful. But guess what? Whatever framework you choose will be obsolete in 5 years. That’s just how the frontend community has been operating, and I don’t expect it to change soon. Even the popular frameworks that are still around are completely different. Because change is the name of the game. So they’re gonna rewrite their shit too and just give it a new version number.

Product teams that are smart are getting off the treadmill. Whatever framework you currently have, start investing in getting to know it deeply. Learn the tools until they are not an impediment to your progress. That’s the only option. Replacing it with a shiny new tool is a trap.

I also wanna give a piece of candid advice to engineers who are searching for jobs. If you feel strongly about what framework you want to use, please make that a criterion for your job search. Please stop walking into teams and derailing everything by trying to convince them to switch from framework X to your framework of choice. It’s really annoying and tremendously costly.

I always have to start with the cynical take. It’s just how I am. But I do want to talk about what I think should be happening instead. Companies that want to reduce the cost of their frontend tech going obsolete so often should be looking to get back to fundamentals. Your teams should be working closer to the web platform with a lot fewer complex abstractions. We need to relearn what the web is capable of and go back to that.

Let’s be clear, I’m not suggesting this is strictly better and the answer to all of your problems. I’m suggesting this as an intentional business tradeoff that I think provides more value and is less costly in the long run. I believe if you stick closer to core web technologies, you’ll be better able to hire capable engineers in the future without them convincing you they can’t do work without rewriting millions of lines of code.

And if you’re an engineer, you will be able to retain much higher market value over time if you dig into and understand core web technologies. I was here before react, and I’ll be here after it dies. You may trade some job marketability today. But it does a lot more for career longevity than trying to learn every new thing that gets popular. And you see how quickly they discarded us when the market turned anyway. Knowing certain tech won’t save you from those realities.

I couldn’t speak this candidly about this stuff when I held a management role. People can’t help but question my motivations and whatever agenda I may be pushing. Either that or I get into a lot of trouble with my internal team because they think I’m talking about them. But this is just what I’ve seen play out after doing this for 20+ years. And I feel like we need to be able to speak plainly.

This has been brewing in my head for a long time. The frontend ecosystem is kind of broken right now. And it’s frustrating to me for a few different reasons. New developers are having an extremely hard time learning enough skills to be gainfully employed. They are drowning in this complex garbage and feeling really disheartened. As a result, companies are finding it more difficult to do basic hiring. The bar is so high just to get a regular dev job. And everybody loses.

What’s even worse is that I believe a lot of this energy is wasted. People that are learning the current tech ecosystem are absolutely not learning web fundamentals. They are too abstracted away. And when the stack changes again, these folks are going to be at a serious disadvantage when they have to adapt away from what they learned. It’s a deep disservice to people’s professional careers, and it’s going to cause a lot of heartache later.

On a more personal note, this is frustrating to me because I think it’s a big part of why we’re seeing the web stagnate so much. I still run into lots of devs who are creative and enthusiastic about building cool things. They just can’t. They are trying and failing because the tools being recommended to them are just not approachable enough. And at the same time, they’re being convinced that learning fundamentals is a waste of time because it’s so different from what everybody is talking about.

I guess I want to close by stating my biases. I’m a web guy. I’ve been bullish on the web for 20+ years, and I will continue to be. I think it is an extremely capable and unique platform for delivering software. And it has only gotten better over time while retaining an incredible level of backwards compatibility. The underlying tools we have are dope now. But our current framework layer is working against the grain instead of embracing the platform.

This is from a recent thread I wrote on mastodon. Reproduced with only light editing.

a year ago 5 votes
A Different Vision for a Healthy Fediverse

This is from a recent thread I wrote on mastodon. Reproduced with only light editing.

Hm. I feel like I wanted to like this more than I actually do. I definitely think the fediverse needs to continue to grow more capabilities. But this doesn’t feel like the energy I was looking for. Half of it feels like a laundry list of ways to commodify things. Dragging a lot of things people hate about corporate social media into the fediverse.

I’ve only just started to engage with the fediverse as a concept and a movement. And mastodon is only one part of a wide ecosystem. I think what has been surprising to me is that, at least within mastodon, it doesn’t feel like the culture is centered around enabling people to build and experiment. Maybe this is only how I think about it. But there are only a few good reasons to do all of this work. We want to reclaim our online experiences. So they aren’t fully captured by corporate interests. But after that? I think the goal should be to enable greater diversity of experience. People can have what they want by running it on their own servers. It doesn’t have to be something that we wait for someone else to build and ship.

It feels like we’re still trying to over-design corporate solutions that work for “everybody”. And that feels like constrained thinking. I feel like the fediverse should be on the other end of the spectrum. There should be an explosion of solutions. Most of them will probably be crap. But the ones who keep refining and improving will rise to the top and gain more adoption.

Honestly I don’t think “gaining adoption” is that important in a truly diverse ecosystem. The reason concepts like adoption become useful is when it drives compatibility. We do want different servers to be able to participate in the larger society. But I think compatibility emerges because people want to participate. You have to add the value first. Then people will do the work to be compatible so they can get to the value.

If I was stating what I think is important in the fediverse right now, it would be describing what it takes to be “compatible”. I think the “core” groups around fediverse technologies should be hyper focused on describing and documenting how their foundational protocols behave. And their measure of success should be seeing other groups building compatible servers entirely independent of them. That is a healthy fediverse imo.

I don’t want to start too much trouble here. But I’m already on record with my criticisms of the open source community. I hope we can acknowledge that the community of devs, who are doing much of this work for free, has some serious cultural issues to contend with if they’re going to serve the wider set of users who want and need this stuff. We know that corporate interests want to own and capture our experiences for the purposes of profit and control. But open source devs often want to own and capture the work. So that it can only happen the way they say. And as a result, anything we want to see happen is bottlenecked on a small set of humans who have set themselves up as gatekeepers.

I’m not suggesting this is always a malicious dynamic. A lot of times people have legitimate concerns for gatekeeping. Like protecting the security and privacy of users. Or preventing data corruption. Some elements of software do need to be scrutinized by experts so that people don’t get hurt. But I believe that’s a smaller area than people seem to think.

I’m not that interested in debating the reasons for some of the more frustrating elements of open source culture. All I’m saying today is that I believe that open source culture will need to evolve pretty quickly if it’s going to rise to this moment of enabling a healthy and vibrant fediverse.

a year ago 5 votes
Getting A Local Mastodon Setup In Docker

This is the first in probably a series of posts as I dig into the technical aspects of mastodon. My goal is to get a better understanding of the design of ActivityPub and how mastodon itself is designed to use ActivityPub. Eventually I want to learn enough to maybe do some hacking and create some of the experiences I want that mastodon doesn’t support today. The first milestone is just getting a mastodon instance set up on my laptop. I’m gonna give some background and context. If you want to skip straight to the meat of things, here’s an anchor link.

Some background

Mastodon is a complex application with lots of moving parts. For now, all I want is to get something running so I can poke at it. Docker should be a great tool for this. Because a lot of that complexity can be packaged up in pre-built images. I tried several times using the official docs and various other alternative projects to get a working mastodon instance in docker. But I kept running into problems that were hard to understand and harder to resolve. I have a lot to learn about all the various pieces of mastodon and how they fit together. But I understand docker pretty well. So after some experimenting, I was able to get an instance running on my own. The rest of this post will be dedicated to explaining what I did and what I learned along the way.

One final note. I know many folks work hard to write docs and provide an out of the box dev experience that works. This isn’t meant to dismiss that hard work. It just didn’t work for me. I’m certainly going to share this experience with the mastodon team. Hopefully these lessons can make the experience better for others in the future.

The approach

Here’s the outline of what we’re doing. We’re going to use a modified version of the docker-compose.yml that comes in the official mastodon repo. It doesn’t work out of the box. So I had to make some heavy tweaks. As of this writing, the mastodon docs seem to want people to use an alternate setup based on Dev Containers. I found that very confusing, and it didn’t work for me at all.

Once we have all of the docker images we need, all of the headaches are in configuring them to work together. Most of mastodon is a ruby on rails app with a database. But there is also a node app to handle streaming updates, redis for caching and background jobs, and we need to handle file storage. We will do the minimum configuration to get all of that set up and able to talk to each other. There is also support for sending emails and optional search capabilities. These are not required just to get something working, so we’ll ignore them for now.

It’s also worth noting that if we want to develop code in mastodon, we need to put our rails app in development mode. That introduces another layer of headaches and errors that I haven’t figured out yet. So that will be a later milestone. For now, all of this will be in “production” mode by default. That’s how the docker image comes packaged. Keep it simple.

There are still many assumptions here. I am running on Mac OS with Apple Silicon (M3). If you’re trying this out, you may run into different issues depending on your environment.

Pre-requisites

We need docker. And a relatively new version. The first thing I did was ditch the version: 3 specifier in the docker-compose.yml. Using versions in these files is deprecated, and we can use some newer features of docker compose. I have v4.30.0 of Docker Desktop for Mac.

We also need caddy. Mastodon instances require a domain in most cases. This is mostly about identity and security. It would be bad if an actor on mastodon could change their identity very easily just by pretending to be a different domain or account. There are ways around this, but I couldn’t get any of them to work for me. That complicates our setup. Because we can’t just use localhost in the browser. We need a domain, which means we also need HTTPS support. Modern browsers require it by default unless you jump through a bunch of hoops. Caddy gives us all of that out of the box really easily. It will be the only thing running outside of docker.

There’s only one caveat with caddy. The way that it is able to do ssl termination so easily is that it creates its own certificates on the fly. The way it does this is by installing its own root cert on your machine. You’ll have to give it permission by putting in your laptop password the first time you run caddy. If that makes you nervous, feel free to skip this and use whatever solution you’re comfortable with for SSL termination. But as far as I know, you need this part.

Choose a domain for your local instance. For me it was polotek-social.local. Something that makes it obvious that this is not a real online instance. Add an entry to your /etc/hosts and point this to localhost. Or whatever people have to do on Windows these days.
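With the domain above, the hosts entry is a single line (swap in whatever name you chose):

```
# /etc/hosts
127.0.0.1   polotek-social.local
```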
Let’s run a mastodon

I put all of my changes in my fork of the official mastodon repo. You can clone this branch and follow along. All of the commands assume you are in the root directory of the cloned repo. https://github.com/polotek/mastodon/tree/polotek-docker-build

```
> git clone git@github.com:polotek/mastodon.git
> cd mastodon
> git checkout polotek-docker-build
```

I rewrote the docker section of the README.md to outline the new instructions. I’m going to walk through my explanation of the changes.

Pull docker images

This is the easiest part. All of the docker images are prepackaged. Even the rails app. You can use the docker compose command to pull them all. It’ll take a minute or two.

```
> docker compose pull
```

Setup config files

We’re using a couple of config files. The repo comes with .env.production.sample. This is a nice way to outline the minimum configuration that is required. You can copy that to .env.production and everything is already set up to look for that file. The only thing you have to do here is update the LOCAL_DOMAIN field. This should be the same as the domain you chose and put in your /etc/hosts.

You can put all of your configuration in this file. But I found it more convenient to separate out the various secrets. These often need to be changed or regenerated. I wrote a script to make that repeatable. Any secrets go in .env.secrets. We’ll come back to how you get those values in a bit.

I had to make some other fixes here. Because we’re using docker, we need to update how the rails app finds the other dependencies. The default values seem to assume that redis and postgres are reachable locally on the same machine. I had to change those values to match the docker setup. The REDIS_HOST is redis, and the DB_HOST is db. Because that’s what they are named in the docker-compose file. (Diff of config file on github.) The rest of the changes are just disabling non-essential services like elastic search and s3 storage.

Generate secrets

We need just a handful of config fields that are randomly generated and considered sensitive. Rails makes it easy to generate secrets. But running the required commands through docker and getting the output in the right place is left as an exercise for the reader. I added a small script that runs these commands and outputs the right fields. Rather than try to edit the .env.production file in the right places every time secrets get regenerated, I think it’s much easier to have them in a separate file. Fortunately, docker-compose allows us to specify multiple files to fill out the environment variables. (Diff of config file on github.) This was a nice quality of life change. And now regenerating secrets and making them available is just one command.

```
> bin/gen_secrets > .env.secrets
```

Any additional secrets can be added by just updating this script. For example, I use 1password to store lots of things, even for development. And I can pull things out using their cli named op. Here’s how I configured the email secrets with the credentials from my mailgun account.

```
# Email
echo SMTP_LOGIN=$(op read "op://Dev/Mailgun SMTP/username")
echo SMTP_PASSWORD=$(op read "op://Dev/Mailgun SMTP/password")
```
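What such a script can look like, as an illustrative sketch rather than the exact contents of the fork: each line prints a KEY=value pair on stdout, and the exact rake task names may vary between mastodon versions.

```bash
#!/usr/bin/env bash
# bin/gen_secrets (sketch) -- print KEY=value pairs; redirect the output into .env.secrets
set -euo pipefail

# One-off containers run the rails generators so nothing needs to be installed locally.
# -T disables the pseudo-TTY so control characters don't end up in the captured output.
echo "SECRET_KEY_BASE=$(docker compose run --rm -T web bundle exec rails secret)"
echo "OTP_SECRET=$(docker compose run --rm -T web bundle exec rails secret)"

# Mastodon ships a rake task that prints the VAPID_PRIVATE_KEY/VAPID_PUBLIC_KEY lines itself
# (task name may differ by version).
docker compose run --rm -T web bundle exec rake mastodon:webpush:generate_vapid_key
```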
Run the database

Running the database is easy.

```
> docker compose up db -d
```

You’ll need to have your database running while you run these next steps. The -d flag will run it in the background so you can get your terminal back. I often prefer to skip the -d and run multiple terminal windows. That way I can know at a glance if something is running or not. But do whatever feels good.

The only note here is to explain another small change to docker-compose to get this running. We’re using a docker image that comes ready to run postgres. This is great because it removes a lot of the fuss of running a database. The image also provides some convenient ways to configure the name of the database and the primary user account. This becomes important because mastodon preconfigures these values for rails. We can see this in the default .env.production values.

```
DB_USER=mastodon
DB_NAME=mastodon_production
```

The database name is not a big issue. Rails will create a database with that name if it doesn’t exist. But it will not create the user (maybe there’s a non-standard flag you can set?). We have to make sure postgres already recognizes a user with the name mastodon. That’s easy enough to do by passing these as environment variables to the database container only. (Diff of config file on github.)
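As a rough sketch of what that can look like in docker-compose.yml (the image tag and auth settings here are illustrative; keep whatever the compose file already uses):

```yaml
services:
  db:
    image: postgres:14-alpine            # or whatever image the compose file already pins
    environment:
      POSTGRES_USER: mastodon            # must match DB_USER
      POSTGRES_DB: mastodon_production   # must match DB_NAME
      POSTGRES_HOST_AUTH_METHOD: trust   # local-only convenience; never do this anywhere real
```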
Load the database schema

One thing that’s always a pain when running rails in docker. Rails won’t start successfully until you load the schema into the database and seed it with the minimal data. This is easy to do if you can run the rake tasks locally. You can’t run the rake tasks until you have a properly configured rails. And it’s hard to figure out if your rails is configured properly because it won’t run without the database. I don’t know what this is supposed to look like to a seasoned rails expert. But for me it’s always a matter of getting the db:setup rake task to run successfully at least once. After that, everything else starts making sense.

However, how do you get this to work in our docker setup? We can’t just do docker compose up, because the rails container will fail. We can’t use docker compose exec because that expects to attach to an existing instance. So the best thing to do is run a one-off container that only runs the rake task. The way to achieve that with docker compose is docker compose run --rm. The --rm flag just makes sure the container gets trashed afterwards. Because we’re running our own command instead of the default one, we don’t want it hanging around and potentially muddying the waters. Once we know the magic incantation, we can set up the database.

```
> docker compose run --rm web bundle exec rails db:setup
```

Note: Usually you don’t put quotes around the whole command. For some reason, this can cause problems in certain cases. You can put quotes around any individual arguments if you need to.

Run rails and sidekiq

If you’ve gotten through all of the steps above, you’re ready to run the whole shebang.

```
> docker compose up
```

This will start all of the other necessary containers, including rails and sidekiq. Everything should be able to recognize and connect to postgres and redis. We’re in the home stretch. But if you try to reach rails directly in your browser by going to https://localhost:3000, you’ll get this cryptic error.

```
ERROR -- : [ActionDispatch::HostAuthorization::DefaultResponseApp] Blocked hosts: localhost:3000
```

It took me a while to track this down. It’s a nice security feature built into rails. When running in production, you need to configure a whitelist of domains that rails will run under. If it receives request headers that don’t match those domains, it produces this error. This prevents certain attacks like dns rebinding. (Which I also learned about at the same time.)

If you set RAILS_ENV=development, then localhost is added to the whitelist by default. That’s convenient, and what we would expect from dev mode. But remember we’re not running in development mode quite yet. So this is a problem for us. The nice thing is that mastodon has added a domain to the whitelist already. Whatever value you put in the LOCAL_DOMAIN field is recognized by rails. (In fact, if you just set this to localhost you might be good to go. Shoutout to Ben.) However, when you use an actual domain, then most modern web browsers force you to use HTTPS. This is another generally nice security feature that is getting in our way right now. So we need a way to use our LOCAL_DOMAIN, terminate SSL, and then proxy the request to the rails server running inside docker. That brings us to the last piece of the puzzle. Running caddy outside of docker.

Run a reverse proxy

The configuration for caddy is very basic. We put in our domain, we put in two reverse proxy entries. One for rails and one for the streaming server provided by node.js. Assuming you don’t need anything fancy, caddy provides SSL termination out of the box with no additional configuration.

```
# Caddyfile
polotek-social.local

reverse_proxy :3000
reverse_proxy /api/v1/streaming/* :4000
```

We put this in a file named Caddyfile in the root of our mastodon project, then in a new terminal window, start caddy.

```
> caddy run
```

Success?

If everything has gone as planned, you should be able to put your local mastodon domain in your browser and see the frontpage of mastodon! (Mastodon frontpage running under local domain!)

In the future, I’ll be looking at how to get actual accounts set up and how to see what we can see under the hood of mastodon. I’m sure I’ll work to make all of this more development-friendly to work with. But I learned a lot about mastodon just by getting this to run. I hope some of these changes can be contributed back to the main project in the future. Or at least serve as lessons that can be incorporated. I’d like to see it be easier for more people to get mastodon set up and start poking around.

a year ago 5 votes
How to Actually Integrate Angular and Nestjs

I don’t know who needs to hear this. But your frontend and backend systems don’t need to be completely separate.

I started a new side project recently. You know, one of those things that allows me to tinker with new technology but will probably never be finished. I’m using Angular for the frontend and Nestjs for the backend. All good. But then I go to do something that I thought was very normal and common and run into a wall. I want to integrate the two frameworks. I want to serve my initial html with nestjs and add script tags so that Angular takes over the frontend. This will allow me to do dynamic things on the backend and frontend however I want. But also deploy the system all as one cohesive product.

Apparently this is no longer How Things Are Done. I literally could not find documentation on how to do this. When you read the docs and blog posts, everybody expects you to just have two systems that run entirely independently. Here’s the server for your backend and here’s the entirely different server for your frontend. Hashtag winning! When I google for “integrate angular and nestjs”, nobody knows what I’m talking about. On the surface, this seems like a great technical blog post from LogRocket. It says “I will teach you how. First, set up two separate servers…” I think I know why the community has ended up in this place. But that’s a rant for another blog post.

Let me try to explain what I’m talking about. Angular is designed as a frontend framework (let’s set aside SSR for now). The primary output of an Angular build is javascript and css files that are meant to run in the browser. When you run ng build, you’ll get a set of files put into your output folder. Usually the folder is dist/<your_project_name>. Let’s look at what’s in there.

```
polotek $> ls -la dist/my-angular-project
-rw-r--r--  1 polotek  staff    12K Sep 13 14:15 3rdpartylicenses.txt
-rw-r--r--  1 polotek  staff   948B Sep 13 14:15 favicon.ico
-rw-r--r--  1 polotek  staff   573B Sep 13 14:15 index.html
-rw-r--r--  1 polotek  staff   181K Sep 13 14:15 main.c01cba7b28b56cb8.js
-rw-r--r--  1 polotek  staff    33K Sep 13 14:15 polyfills.2f491a303e062d57.js
-rw-r--r--  1 polotek  staff   902B Sep 13 14:15 runtime.0b9744f158e85515.js
-rw-r--r--  1 polotek  staff     0B Sep 13 14:15 styles.ef46db3751d8e999.css
```

Some javascript and css files. Just as expected. A favicon. Sure, why not. Something about 3rd party licenses. I have no idea what that is, so let’s ignore it. But there’s also an index.html file. This is where the magic is. This file sets up your html so it can serve Angular files. It’s very simple and looks like this.

```html
<!doctype html>
<html lang="en" data-critters-container>
<head>
  <meta charset="utf-8">
  <title>MyAngularProject</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <link rel="stylesheet" href="styles.ef46db3751d8e999.css">
</head>
<body>
  <app-root></app-root>
  <script src="runtime.b3cecf81bdcc5839.js" type="module"></script>
  <script src="polyfills.41808b7aa9da5ebc.js" type="module"></script>
  <script src="main.cf1267740c62d53b.js" type="module"></script>
</body>
</html>
```

It turns out the web browser still works the way it always did. You use <script> tags and <link> tags to load your javascript and css into the page. But we want to let the backend do this rather than using this static html file.

I’m using NestJS for the backend. It’s modeled after Angular, so a lot of the structures are very similar. Just without all of the browser-specific stuff. Nest is not so important here though. This problem is the same with whatever backend you’re using. The important thing is how static files are served. If you copy the above html into a backend template, it probably won’t work. This is what you get in the browser when you try this with NestJS. (Angular fails to load.)

This is part of my gripe. By default, these are two separate systems right now. So NestJS doesn’t know that these files exist. And they’re in two separate folders. So it’s unclear what the best way is to integrate them. In the future, I might talk about more sustainable ways to do this for a real project. But for now, I’m going to do the simple thing just to illustrate how this is supposed to work.

In NestJS, or whatever backend you’re using, you should be able to configure where your static files go. In Nest, it looks something like this.

```typescript
import * as path from "path";
import { NestFactory } from "@nestjs/core";
import { NestExpressApplication } from "@nestjs/platform-express";
import { AppModule } from "./app.module";

async function bootstrap() {
  const app = await NestFactory.create<NestExpressApplication>(AppModule);
  app.useStaticAssets(path.resolve("./public"));
  await app.listen(3000);
}
bootstrap();
```

So there should be a folder called public in your backend project, and that’s where it expects to find javascript and css files. So here’s the magic. Copy the Angular files into that folder. Let’s say you have the two projects side by side. It might look like this.

```
polotek $> cp my-angular-project/dist/my-angular-project-ui/* my-nest-project/public/
```

This will also copy the original index.html file and the other junk. We don’t care about that for now. This is just for illustration. So now we’ve made NestJS aware of our Angular files. Reload your NestJS page and you should see this. (Assets loading properly. Angular Welcome screen loading.)

We did it! This is how to integrate a cohesive system with frontend and backend. The frontend ecosystem has wandered away from this path. But this is how the web is supposed to work in my opinion. And more importantly, it is actually how a lot of real product companies want to manage their system.

I want to acknowledge that there are still a lot of unanswered questions here. You can’t deploy this to production. The purpose of this blog post is to help the next person like me who was trying to google how to actually integrate Angular and a backend like NestJS because I assumed there was a common and documented path to doing so. If this was useful for you, and you’re interested in having me write about the rest of what we’re missing in modern frontend, let me know.
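One low-tech way to avoid repeating that copy by hand after every rebuild is to hang it off the Angular build itself. This is only a sketch reusing the sibling-folder layout from the cp example above (adjust the paths to match your real output folder); npm automatically runs a postbuild script after build, so an entry like this in my-angular-project/package.json keeps the Nest public folder in sync:

```json
{
  "scripts": {
    "build": "ng build",
    "postbuild": "cp -R dist/my-angular-project-ui/. ../my-nest-project/public/"
  }
}
```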

a year ago 5 votes

More in programming

Apologies and forgiveness

The first in a series of posts about doing things the right way

5 hours ago 3 votes
Understanding Bazel remote caching

A deep dive into the Action Cache, the CAS, and the security issues that arise from using Bazel with a remote cache but without remote execution

6 hours ago 3 votes
Trying to Make Sense of Casing Conventions on the Web

(I present to you my stream of consciousness on the topic of casing as it applies to the web platform.)

I’m reading about the new command and commandfor attributes — which I’m super excited about, declarative behavior invocation in HTML? YES PLEASE!! — and one thing that strikes me is the casing in these APIs. For example, the command attribute has a variety of values in HTML which correspond to APIs in JavaScript. The show-popover attribute value maps to .showPopover() in JavaScript. hide-popover maps to .hidePopover(), etc. So what we have is:

lowercase in attribute names, e.g. commandfor="..."
kebab-case in attribute values, e.g. show-popover
camelCase for JS counterparts, e.g. showPopover()

After thinking about this a little more, I remember that HTML attribute names are case insensitive, so the browser will normalize them to lowercase during parsing. Given that, I suppose you could write commandFor="..." but it’s effectively the same. Ok, lowercase attribute names in HTML makes sense. The related popover attributes follow the same convention:

popovertarget
popovertargetaction

And there are many other attribute names in HTML that are lowercase, e.g.:

maxlength
novalidate
contenteditable
autocomplete
formenctype

So that all makes sense. But wait, there are some attribute names with hyphens in them, like aria-label="..." and data-value="...". So why isn’t it command-for="..."? Well, upon further reflection, I suppose those attributes were named that way for extensibility’s sake: they are essentially wildcard attributes that represent a family of attributes that are all under the same namespace: aria-* and data-*. But wait, isn’t that an argument for doing popover-target and popover-target-action? Or command and command-for?

But wait (I keep saying that) there are kebab-case attribute names in HTML — like http-equiv on the <meta> tag, or accept-charset on the form tag — but those seem more like legacy exceptions.

It seems like the only answer here is: there is no rule. Naming is driven by convention and decisions are made on a case-by-case basis. But if I had to summarize, it would probably be that the default casing for new APIs tends to follow the rules I outlined at the start (and what’s reflected in the new command APIs):

lowercase for HTML attribute names
kebab-case for HTML attribute values
camelCase for JS counterparts

Let’s not even get into SVG attribute names. We need one of those “bless this mess” signs that we can hang over the World Wide Web.
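To see all three conventions side by side, here is a small illustrative sketch using the command and popover APIs discussed above (browser support for command/commandfor is still rolling out, so treat it as a sketch rather than a compatibility recommendation):

```html
<!-- attribute names are lowercase, attribute values are kebab-case -->
<button commandfor="menu" command="show-popover">Open menu</button>
<div id="menu" popover>Hello!</div>

<script>
  // the JS counterpart of the same behavior is camelCase
  document.getElementById("menu").showPopover();
</script>
```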

yesterday 5 votes
The Angels and Demons of Nondeterminism

Greetings everyone! You might have noticed that it's September and I don't have the next version of Logic for Programmers ready. As penance, here's ten free copies of the book.

So a few months ago I wrote a newsletter about how we use nondeterminism in formal methods. The overarching idea: Nondeterminism is when multiple paths are possible from a starting state. A system preserves a property if it holds on all possible paths. If even one path violates the property, then we have a bug. An intuitive model of this is that when faced with a nondeterministic choice, the system always makes the worst possible choice. This is sometimes called demonic nondeterminism and is favored in formal methods because we are paranoid to a fault.

The opposite would be angelic nondeterminism, where the system always makes the best possible choice. A property then holds if any possible path satisfies that property.[1] This is not as common in FM, but it still has its uses! "Players can access the secret level" or "We can always shut down the computer" are reachability properties, that something is possible even if not actually done. In broader computer science research, I'd say that angelic nondeterminism is more popular, due to its widespread use in complexity analysis and programming languages.

Complexity Analysis

P is the set of all "decision problems" (basically, boolean functions) that can be solved in polynomial time: there's an algorithm that's worst-case in O(n), O(n²), O(n³), etc.[2] NP is the set of all problems that can be solved in polynomial time by an algorithm with angelic nondeterminism.[3] For example, the question "does list l contain x" can be solved in O(1) time by a nondeterministic algorithm:

```
fun is_member(l: List[T], x: T): bool {
  if l == [] { return false };
  guess i in 0..<(len(l)-1);
  return l[i] == x;
}
```

Say we call is_member([a, b, c, d], c). The best possible choice would be to guess i = 2, which would correctly return true. Now call is_member([a, b], d). No matter what we guess, the algorithm correctly returns false. Ergo, O(1). NP stands for "Nondeterministic Polynomial". (And I just now realized something pretty cool: you can say that P is the set of all problems solvable in polynomial time under demonic nondeterminism, which is a nice parallel between the two classes.)

Computer scientists have proven that angelic nondeterminism doesn't give us any more "power": there are no problems solvable with AN that aren't also solvable deterministically. The big question is whether AN is more efficient: it is widely believed, but not proven, that there are problems in NP but not in P. Most famously, "Is there any variable assignment that makes this boolean formula true?" A polynomial AN algorithm is again easy:

```
fun SAT(f(x1, x2, …: bool): bool): bool {
  N = num_params(f)
  for i in 1..=N {
    guess x_i in {true, false}
  }
  return f(x_1, x_2, …)
}
```

The best deterministic algorithms we have to solve the same problem are worst-case exponential with the number of boolean parameters. This is a real frustrating problem because real computers don't have angelic nondeterminism, so problems like SAT remain hard. We can solve most "well-behaved" instances of the problem in reasonable time, but the worst-case instances get intractable real fast.

Means of Abstraction

We can directly turn an AN algorithm into a (possibly much slower) deterministic algorithm, such as by backtracking. This makes AN a pretty good abstraction over what an algorithm is doing.
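As an illustrative sketch (not from the original newsletter), here is the is_member example from above translated deterministically: the guess becomes a loop over every possible choice, and the worst case now visits all of them, which is exactly where the slowdown comes from.

```typescript
// Deterministic stand-in for the angelic guess: try every choice,
// succeed as soon as any branch does.
function isMember<T>(l: T[], x: T): boolean {
  for (let i = 0; i < l.length; i++) { // enumerate every possible "guess i"
    if (l[i] === x) return true;       // angelic reading: one good path is enough
  }
  return false;                        // no branch succeeded
}

console.log(isMember(["a", "b", "c", "d"], "c")); // true
console.log(isMember(["a", "b"], "d"));           // false
```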
Does the regex (a+b)\1+ match "abaabaabaab"? Yes, if the regex engine nondeterministically guesses that it needs to start at the third letter and make the group aab. How does my PL's regex implementation find that match? I dunno, backtracking or NFA construction or something, I don't need to know the deterministic specifics in order to use the nondeterministic abstraction.

Neel Krishnaswami has a great definition of 'declarative language': "any language with a semantics has some nontrivial existential quantifiers in it". I'm not sure if this is identical to saying "a language with an angelic nondeterministic abstraction", but they must be pretty close, and all of his examples match:

SQL's selects and joins
Parsing DSLs
Logic programming's unification
Constraint solving

On top of that I'd add CSS selectors and planner actions; all nondeterministic abstractions over a deterministic implementation. He also says that the things programmers hate most in declarative languages are features "that expose the operational model": constraint solver search strategies, Prolog cuts, regex backreferences, etc. Which again matches my experiences with angelic nondeterminism: I dread features that force me to understand the deterministic implementation. But they're necessary, since P probably != NP and so we need to worry about operational optimizations.

Eldritch Nondeterminism

If you need to know the ratio of good/bad paths, the number of good paths, or probability, or anything more than "there is a good path" or "there is a bad path", you are beyond the reach of heaven or hell.

[1] Angelic and demonic nondeterminism are duals: angelic returns "yes" if some choice: correct and demonic returns "no" if !all choice: correct, which is the same as some choice: !correct.

[2] Pet peeve about Big-O notation: O(n²) is the set of all algorithms that, for sufficiently large problem sizes, grow no faster than quadratically. "Bubblesort has O(n²) complexity" should be written Bubblesort in O(n²), not Bubblesort = O(n²).

[3] To be precise, solvable in polynomial time by a Nondeterministic Turing Machine, a very particular model of computation. We can broadly talk about P and NP without framing everything in terms of Turing machines, but some details of complexity classes (like the existence of "weak NP-hardness") kinda need Turing machines to make sense.

yesterday 5 votes
Announcing the 2025 TokyoDev Developers Survey

The 2025 edition of the TokyoDev Developer Survey is now live! If you’re a software developer living in Japan, please take a few minutes to participate. All questions are optional, and it should take less than 10 minutes to complete. The survey will remain open until September 30th.

Last year, we received over 800 responses. Highlights included:

Median compensation remained stable.
The pay gap between international and Japanese companies narrowed to 47%.
Fewer respondents had the option to work fully remotely.

For 2025, we’ve added several new questions, including a dedicated section on one of the most talked-about topics in development today: AI. The survey is completely anonymous, and only aggregated results will be shared—never personally identifiable information. The more responses we get, the deeper and more meaningful our insights will be. Please help by taking the survey and sharing it with your peers!

yesterday 8 votes