More from These Yaks Ain't Gonna Shave Themselves
This is from a recent thread I wrote on mastodon. Reproduced with only light editing.

Hm. I feel like I wanted to like this more than I actually do. I definitely think the fediverse needs to continue to grow more capabilities. But this doesn’t feel like the energy I was looking for. Half of it feels like a laundry list of ways to commodify things. Dragging a lot of things people hate about corporate social media into the fediverse.

I’ve only just started to engage with the fediverse as a concept and a movement. And mastodon is only one part of a wide ecosystem. I think what has been surprising to me is that, at least within mastodon, it doesn’t feel like the culture is centered around enabling people to build and experiment. Maybe this is only how I think about it. But there are only a few good reasons to do all of this work. We want to reclaim our online experiences, so they aren’t fully captured by corporate interests. But after that? I think the goal should be to enable greater diversity of experience. People can have what they want by running it on their own servers. It doesn’t have to be something that we wait for someone else to build and ship.

It feels like we’re still trying to over-design corporate solutions that work for “everybody”. And that feels like constrained thinking. I feel like the fediverse should be on the other end of the spectrum. There should be an explosion of solutions. Most of them will probably be crap. But the ones that keep getting refined and improved will rise to the top and gain more adoption.

Honestly, I don’t think “gaining adoption” is that important in a truly diverse ecosystem. The reason concepts like adoption become useful is when adoption drives compatibility. We do want different servers to be able to participate in the larger society. But I think compatibility emerges because people want to participate. You have to add the value first. Then people will do the work to be compatible so they can get to the value.

If I were stating what I think is important in the fediverse right now, it would be describing what it takes to be “compatible”. I think the “core” groups around fediverse technologies should be hyper focused on describing and documenting how their foundational protocols behave. And their measure of success should be seeing other groups building compatible servers entirely independent of them. That is a healthy fediverse imo.

I don’t want to start too much trouble here. But I’m already on record with my criticisms of the open source community. I hope we can acknowledge that the community of devs, many of whom are doing this work for free, has some serious cultural issues to contend with if they’re going to serve the wider set of users who want and need this stuff. We know that corporate interests want to own and capture our experiences for the purposes of profit and control. But open source devs often want to own and capture the work, so that it can only happen the way they say. And as a result, anything we want to see happen is bottlenecked on a small set of humans who have set themselves up as gatekeepers.

I’m not suggesting this is always a malicious dynamic. A lot of times people have legitimate concerns for gatekeeping. Like protecting the security and privacy of users. Or preventing data corruption. Some elements of software do need to be scrutinized by experts so that people don’t get hurt. But I believe that’s a smaller area than people seem to think.
I’m not that interested in debating the reasons for some of the more frustrating elements of open source culture. All I’m saying today is that I believe that open source culture will need to evolve pretty quickly if it’s going to rise to this moment of enabling a healthy and vibrant fediverse.
This is the first in probably a series of posts as I dig into the technical aspects of mastodon. My goal is to get a better understanding of the design of ActivityPub and how mastodon itself is designed to use ActivityPub. Eventually I want to learn enough to maybe do some hacking and create some of the experiences I want that mastodon doesn’t support today. The first milestone is just getting a mastodon instance set up on my laptop. I’m gonna give some background and context. If you want to skip straight to the meat of things, here’s an anchor link.

Some background

Mastodon is a complex application with lots of moving parts. For now, all I want is to get something running so I can poke at it. Docker should be a great tool for this, because a lot of that complexity can be packaged up in pre-built images. I tried several times using the official docs and various other alternative projects to get a working mastodon instance in docker. But I kept running into problems that were hard to understand and harder to resolve. I have a lot to learn about all the various pieces of mastodon and how they fit together. But I understand docker pretty well. So after some experimenting, I was able to get an instance running on my own. The rest of this post will be dedicated to explaining what I did and what I learned along the way.

One final note. I know many folks work hard to write docs and provide an out of the box dev experience that works. This isn’t meant to dismiss that hard work. It just didn’t work for me. I’m certainly going to share this experience with the mastodon team. Hopefully these lessons can make the experience better for others in the future.

The approach

Here’s the outline of what we’re doing. We’re going to use a modified version of the docker-compose.yml that comes in the official mastodon repo. It doesn’t work out of the box, so I had to make some heavy tweaks. As of this writing, the mastodon docs seem to want people to use an alternate setup based on Dev Containers. I found that very confusing, and it didn’t work for me at all.

Once we have all of the docker images we need, all of the headaches are in configuring them to work together. Most of mastodon is a ruby on rails app with a database. But there is also a node app to handle streaming updates, redis for caching and background jobs, and we need to handle file storage. We will do the minimum configuration to get all of that set up and able to talk to each other. There is also support for sending emails and optional search capabilities. These are not required just to get something working, so we’ll ignore them for now.

It’s also worth noting that if we want to develop code in mastodon, we need to put our rails app in development mode. That introduces another layer of headaches and errors that I haven’t figured out yet. So that will be a later milestone. For now, all of this will be in “production” mode by default. That’s how the docker image comes packaged. Keep it simple.

There are still many assumptions here. I am running on Mac OS with Apple Silicon (M3). If you’re trying this out, you may run into different issues depending on your environment.

Pre-requisites

We need docker. And a relatively new version. The first thing I did was ditch the version: 3 specifier in the docker-compose.yml. Using versions in these files is deprecated, and we can use some newer features of docker compose. I have v4.30.0 of Docker Desktop for Mac.

We also need caddy. Mastodon instances require a domain in most cases.
This is mostly about identity and security. It would be bad if an actor on mastodon could change their identity very easily just by pretending to be a different domain or account. There are ways around this, but I couldn’t get any of them to work for me. That complicates our setup, because we can’t just use localhost in the browser. We need a domain, which means we also need HTTPS support. Modern browsers require it by default unless you jump through a bunch of hoops. Caddy gives us all of that out of the box really easily. It will be the only thing running outside of docker.

There’s only one caveat with caddy. The way that it is able to do ssl termination so easily is that it creates its own certificates on the fly. It does this by installing its own root cert on your machine. You’ll have to give it permission by putting in your laptop password the first time you run caddy. If that makes you nervous, feel free to skip this and use whatever solution you’re comfortable with for SSL termination. But as far as I know, you need this part.

Choose a domain for your local instance. For me it was polotek-social.local. Something that makes it obvious that this is not a real online instance. Add an entry to your /etc/hosts and point this to localhost. Or whatever people have to do on Windows these days.

Let’s run a mastodon

I put all of my changes in my fork of the official mastodon repo. You can clone this branch and follow along. All of the commands assume you are in the root directory of the cloned repo.

https://github.com/polotek/mastodon/tree/polotek-docker-build

> git clone git@github.com:polotek/mastodon.git
> cd mastodon
> git checkout polotek-docker-build

I rewrote the docker section of the README.md to outline the new instructions. I’m going to walk through my explanation of the changes.

Pull docker images

This is the easiest part. All of the docker images are prepackaged. Even the rails app. You can use the docker compose command to pull them all. It’ll take a minute or 2.

> docker compose pull

Setup config files

We’re using a couple of config files. The repo comes with .env.production.sample. This is a nice way to outline the minimum configuration that is required. You can copy that to .env.production and everything is already set up to look for that file. The only thing you have to do here is update the LOCAL_DOMAIN field. This should be the same as the domain you chose and put in your /etc/hosts.

You can put all of your configuration in this file. But I found it more convenient to separate out the various secrets. These often need to be changed or regenerated. I wrote a script to make that repeatable. Any secrets go in .env.secrets. We’ll come back to how you get those values in a bit.

I had to make some other fixes here. Because we’re using docker, we need to update how the rails app finds the other dependencies. The default values seem to assume that redis and postgres are reachable locally on the same machine. I had to change those values to match the docker setup. The REDIS_HOST is redis, and the DB_HOST is db, because that’s what they are named in the docker-compose file.

Diff of config file on github

The rest of the changes are just disabling non-essential services like elastic search and s3 storage.

Generate secrets

We need just a handful of config fields that are randomly generated and considered sensitive. Rails makes it easy to generate secrets. But running the required commands through docker and getting the output in the right place is left as an exercise for the reader.
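For context, the kinds of commands involved look roughly like this. Treat it as a sketch rather than the exact list; the real set of secrets is outlined in .env.production.sample, and the rake task name here is the one I believe mastodon ships with.

# prints a random value, used for fields like SECRET_KEY_BASE and OTP_SECRET
> docker compose run --rm web bundle exec rails secret
# prints the VAPID_PRIVATE_KEY and VAPID_PUBLIC_KEY fields for web push
> docker compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key

Run each one through docker, capture the output, and get it into your env file.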
I added a small script that runs these commands and outputs the right fields. Rather than try to edit the .env.production file in the right places every time secrets get regenerated, I think it’s much easier to have them in a separate file. Fortunately, docker-compose allows us to specify multiple files to fill out the environment variables.

Diff of config file on github

This was a nice quality of life change. And now regenerating secrets and making them available is just one command.

> bin/gen_secrets > .env.secrets

Any additional secrets can be added by just updating this script. For example, I use 1password to store lots of things, even for development. And I can pull things out using their cli named op. Here’s how I configured the email secrets with the credentials from my mailgun account.

# Email
echo SMTP_LOGIN=$(op read "op://Dev/Mailgun SMTP/username")
echo SMTP_PASSWORD=$(op read "op://Dev/Mailgun SMTP/password")

Run the database

Running the database is easy.

> docker compose up db -d

You’ll need to have your database running while you run these next steps. The -d flag will run it in the background so you can get your terminal back. I often prefer to skip the -d and run multiple terminal windows. That way I can know at a glance if something is running or not. But do whatever feels good.

The only note here is to explain another small change to docker-compose to get this running. We’re using a docker image that comes ready to run postgres. This is great because it removes a lot of the fuss of running a database. The image also provides some convenient ways to configure the name of the database and the primary user account. This becomes important because mastodon preconfigures these values for rails. We can see this in the default .env.production values.

DB_USER=mastodon
DB_NAME=mastodon_production

The database name is not a big issue. Rails will create a database with that name if it doesn’t exist. But it will not create the user (maybe there’s a non-standard flag you can set?). We have to make sure postgres already recognizes a user with the name mastodon. That’s easy enough to do by passing these as environment variables to the database container only.

Diff of config file on github

Load the database schema

One thing that’s always a pain when running rails in docker: rails won’t start successfully until you load the schema into the database and seed it with the minimal data. This is easy to do if you can run the rake tasks locally. But you can’t run the rake tasks until you have a properly configured rails. And it’s hard to figure out if your rails is configured properly because it won’t run without the database. I don’t know what this is supposed to look like to a seasoned rails expert. But for me it’s always a matter of getting the db:setup rake task to run successfully at least once. After that, everything else starts making sense.

So how do you get this to work in our docker setup? We can’t just do docker compose up, because the rails container will fail. We can’t use docker compose exec because that expects to attach to an existing instance. So the best thing to do is run a one-off container that only runs the rake task. The way to achieve that with docker compose is docker compose run --rm. The --rm flag just makes sure the container gets trashed afterwards. Because we’re running our own command instead of the default one, we don’t want it hanging around and potentially muddying the waters. Once we know the magic incantation, we can set up the database.
> docker compose run --rm web bundle exec rails db:setup

Note: usually you don’t put quotes around the whole command. For some reason, that can cause problems in certain cases. You can put quotes around any individual arguments if you need to.

Run rails and sidekiq

If you’ve gotten through all of the steps above, you’re ready to run the whole shebang.

> docker compose up

This will start all of the other necessary containers, including rails and sidekiq. Everything should be able to recognize and connect to postgres and redis. We’re in the home stretch. But if you try to reach rails directly in your browser by going to http://localhost:3000, you’ll get this cryptic error.

ERROR -- : [ActionDispatch::HostAuthorization::DefaultResponseApp] Blocked hosts: localhost:3000

It took me a while to track this down. It’s a nice security feature built into rails. When running in production, you need to configure a whitelist of domains that rails will run under. If it receives request headers that don’t match those domains, it produces this error. This prevents certain attacks like dns rebinding. (Which I also learned about at the same time.)

If you set RAILS_ENV=development, then localhost is added to the whitelist by default. That’s convenient, and what we would expect from dev mode. But remember we’re not running in development mode quite yet. So this is a problem for us. The nice thing is that mastodon has added a domain to the whitelist already. Whatever value you put in the LOCAL_DOMAIN field is recognized by rails. (In fact, if you just set this to localhost you might be good to go. Shoutout to Ben.) However, when you use an actual domain, then most modern web browsers force you to use HTTPS. This is another generally nice security feature that is getting in our way right now. So we need a way to use our LOCAL_DOMAIN, terminate SSL, and then proxy the request to the rails server running inside docker. That brings us to the last piece of the puzzle. Running caddy outside of docker.

Run a reverse proxy

The configuration for caddy is very basic. We put in our domain and two reverse proxy entries. One for rails and one for the streaming server provided by node.js. Assuming you don’t need anything fancy, caddy provides SSL termination out of the box with no additional configuration.

# Caddyfile
polotek-social.local

reverse_proxy :3000
reverse_proxy /api/v1/streaming/* :4000

We put this in a file named Caddyfile in the root of our mastodon project, then in a new terminal window, start caddy.

> caddy run

Success?

If everything has gone as planned, you should be able to put your local mastodon domain in your browser and see the frontpage of mastodon!

Mastodon frontpage running under local domain!

In the future, I’ll be looking at how to get actual accounts set up and how to see what we can see under the hood of mastodon. I’m sure I’ll work to make all of this more development friendly to work with. But I learned a lot about mastodon just by getting this to run. I hope some of these changes can be contributed back to the main project in the future. Or at least serve as lessons that can be incorporated. I’d like to see it be easier for more people to get mastodon set up and start poking around.
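For reference, here is the whole happy path in one place, assuming you’ve already cloned the fork, filled out .env.production, and created the Caddyfile as described above:

> docker compose pull
> bin/gen_secrets > .env.secrets
> docker compose up db -d
> docker compose run --rm web bundle exec rails db:setup
> docker compose up
# in a separate terminal, outside of docker
> caddy run

Then load your LOCAL_DOMAIN in the browser.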
In my recent side project, I’ve been deploying to fly.io and really enjoying it. It’s fairly easy to get set up. And it supports my preferred workflow of deploying my changes early and often. I have run into a few snags though.

Fly.io builds your project into a docker image and deploys containers for you. That process is mostly seamless when it works. But sometimes it fails, and you need to debug. By default, fly builds your docker images in the cloud. This is convenient and preferred most of the time. But when I wanted to test some changes to my build, I wanted to try building locally using Docker Desktop. This should be easy. The fly cli is quite nice. And there is a flag to build locally.

fly deploy --build-only --local-only

This failed saying it couldn’t find Docker.

> fly deploy --build-only --local-only
==> Verifying app config
Validating /Users/polotek/src/harembase/fly.toml
Platform: machines
✓ Configuration is valid
--> Verified app config
==> Building image
Error: failed to fetch an image or build from source: docker is unavailable to build the deployment image

I spent quite a bit of time googling for the problem here. You can also run fly doctor --verbose to get some info. (If you run this in your fly app folder, it will show more info not relevant to this topic.)

> fly doctor --verbose
Testing authentication token... PASSED
Testing flyctl agent... PASSED
Testing local Docker instance... Nope
(We got: failed pinging docker instance: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?)
This is fine, we'll use a remote builder.
Pinging WireGuard gateway (give us a sec)... PASSED
No app provided; skipping app specific checks

I found various forum posts discussing this problem. The folks at fly have spent a lot of time investigating some deep technical issues. I appreciate that work, but ultimately none of it seems to reflect my problem. And the issue felt simpler to me. Fly couldn’t find docker. Why not? Where is it looking?

Eventually I found the answer on stackoverflow. It turns out that things have settled pretty recently to a basic config setting. By default, Docker Desktop installs the socket for the daemon in a non-global space. Usually in your personal user folder, e.g. ~/.docker/run/docker.sock. But other tools expect the docker daemon socket to be available in a standard location, e.g. /var/run/docker.sock.

As of this writing, Docker Desktop has added a recommended way to enable the standard location. In the Docker Desktop dashboard, go to Settings > Advanced and enable “Allow the default Docker socket to be used”.

Docker for Mac settings screen

This will require your system password and a restart. Then you should be able to see the docker socket in the standard place. And fly will be able to see it! Hopefully the next person who’s banging their head against this will have an easier time.
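If you want to double check before kicking off another deploy, these two commands should confirm it (assuming the default paths):

# the socket should now exist at the standard location
> ls -l /var/run/docker.sock
# and the local Docker instance check in fly doctor should no longer fail
> fly doctor --verbose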
I don’t know who needs to hear this. But your frontend and backend systems don’t need to be completely separate.

I started a new side project recently. You know, one of those things that allows me to tinker with new technology but will probably never be finished. I’m using Angular for the frontend and Nestjs for the backend. All good. But then I go to do something that I thought was very normal and common and run into a wall. I want to integrate the two frameworks. I want to serve my initial html with nestjs and add script tags so that Angular takes over the frontend. This will allow me to do dynamic things on the backend and frontend however I want. But also deploy the system all as one cohesive product.

Apparently this is no longer How Things Are Done. I literally could not find documentation on how to do this. When you read the docs and blog posts, everybody expects you to just have two systems that run entirely independently. Here’s the server for your backend and here’s the entirely different server for your frontend. Hashtag winning! When I google for “integrate angular and nestjs”, nobody knows what I’m talking about. On the surface, this seems like a great technical blog post from LogRocket. It says “I will teach you how. First, set up two separate servers…”

I think I know why the community has ended up in this place. But that’s a rant for another blog post. Let me try to explain what I’m talking about.

Angular is designed as a frontend framework (let’s set aside SSR for now). The primary output of an Angular build is javascript and css files that are meant to run in the browser. When you run ng build, you’ll get a set of files put into your output folder. Usually the folder is dist/<your_project_name>. Let’s look at what’s in there.

polotek $> ls -la dist/my-angular-project
-rw-r--r-- 1 polotek staff 12K Sep 13 14:15 3rdpartylicenses.txt
-rw-r--r-- 1 polotek staff 948B Sep 13 14:15 favicon.ico
-rw-r--r-- 1 polotek staff 573B Sep 13 14:15 index.html
-rw-r--r-- 1 polotek staff 181K Sep 13 14:15 main.c01cba7b28b56cb8.js
-rw-r--r-- 1 polotek staff 33K Sep 13 14:15 polyfills.2f491a303e062d57.js
-rw-r--r-- 1 polotek staff 902B Sep 13 14:15 runtime.0b9744f158e85515.js
-rw-r--r-- 1 polotek staff 0B Sep 13 14:15 styles.ef46db3751d8e999.css

Some javascript and css files. Just as expected. A favicon. Sure, why not. Something about 3rd party licenses. I have no idea what that is, so let’s ignore it. But there’s also an index.html file. This is where the magic is. This file sets up your html so it can serve Angular files. It’s very simple and looks like this.

<!doctype html>
<html lang="en" data-critters-container>
<head>
  <meta charset="utf-8">
  <title>MyAngularProject</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <link rel="stylesheet" href="styles.ef46db3751d8e999.css">
</head>
<body>
  <app-root></app-root>
  <script src="runtime.b3cecf81bdcc5839.js" type="module"></script>
  <script src="polyfills.41808b7aa9da5ebc.js" type="module"></script>
  <script src="main.cf1267740c62d53b.js" type="module"></script>
</body>
</html>

It turns out the web browser still works the way it always did. You use <script> tags and <link> tags to load your javascript and css into the page. But we want to let the backend do this rather than using this static html file.

I’m using NestJS for the backend. It’s modeled after Angular, so a lot of the structures are very similar. Just without all of the browser-specific stuff.
Nest is not so important here though. This problem is the same with whatever backend you’re using. The important thing is how static files are served. If you copy the above html into a backend template, it probably won’t work. This is what you get in the browser when you try this with NestJS.

Angular fails to load.

This is part of my gripe. By default, these are two separate systems right now. So NestJS doesn’t know that these files exist. And they’re in two separate folders. So it’s unclear what the best way is to integrate them. In the future, I might talk about more sustainable ways to do this for a real project. But for now, I’m going to do the simple thing just to illustrate how this is supposed to work.

In NestJS, or whatever backend you’re using, you should be able to configure where your static files go. In Nest, it looks something like this.

import { NestFactory } from "@nestjs/core";
import { NestExpressApplication } from "@nestjs/platform-express";
import * as path from "path";
import { AppModule } from "./app.module";

async function bootstrap() {
  const app = await NestFactory.create<NestExpressApplication>(AppModule);
  // serve static files (our Angular build output) from ./public
  app.useStaticAssets(path.resolve("./public"));
  await app.listen(3000);
}
bootstrap();

So there should be a folder called public in your backend project, and that’s where it expects to find javascript and css files. So here’s the magic. Copy the Angular files into that folder. Let’s say you have the two projects side by side. It might look like this.

polotek $> cp my-angular-project/dist/my-angular-project-ui/* my-nest-project/public/

This will also copy the original index.html file and the other junk. We don’t care about that for now. This is just for illustration. So now we’ve made NestJS aware of our Angular files. Reload your NestJS page and you should see this.

Assets loading properly. Angular Welcome screen loading.

We did it! This is how to integrate a cohesive system with frontend and backend. The frontend ecosystem has wandered away from this path. But this is how the web is supposed to work in my opinion. And more importantly, it is actually how a lot of real product companies want to manage their system.

I want to acknowledge that there are still a lot of unanswered questions here. You can’t deploy this to production. The purpose of this blog post is to help the next person like me who was trying to google how to actually integrate Angular and a backend like NestJS, because I assumed there was a common and documented path to doing so. If this was useful for you, and you’re interested in having me write about the rest of what we’re missing in modern frontend, let me know.
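If you end up repeating that copy step a lot, it’s easy to script. Here’s a minimal sketch, assuming the side-by-side folder layout and the hypothetical project names used above; adjust the dist folder name to whatever your Angular build actually produces.

#!/bin/sh
# Build the Angular app and copy its output into the NestJS public folder.
# Run this from the directory that contains both projects.
set -e
(cd my-angular-project && npx ng build)
mkdir -p my-nest-project/public
cp -R my-angular-project/dist/my-angular-project-ui/* my-nest-project/public/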
More in programming
I'm exploring another corner of the Interlisp ecosystem and history: the Interlisp-10 implementation for DEC PDP-10 mainframes, a 1970s character-based environment that predated the graphical Interlisp-D system.

I approached this corner when I set out to learn and experiment with a tool I initially checked out only superficially, the TTY editor. This command line structure editor for Lisp code and expressions was the only editor Interlisp-10 had. The oldest of the Interlisp editors, it came before graphical interfaces and SEdit. On Medley Interlisp the TTY editor is still useful for specialized tasks. For example, its extensive set of commands with macro support is effectively a little language for batch editing and list structure manipulation. Think Unix sed for s-exps. The language even provides the variable EDITMACROS (wink wink). Evaluating (PRINTDEF EDITMACROS) gives a flavor for the language.

For an experience closer to 1970s Interlisp I'm using the editor in its original environment, Interlisp-10 on TWENEX. SDF provides a publicly accessible TWENEX system running on a PDP-10 setup. Sold under the product name TOPS-20, TWENEX was a DEC operating system for DECSYSTEM-20/PDP-10 mainframes, derived from TENEX, which was originally developed by BBN. SDF's TWENEX system comes with Interlisp-10 and other languages. This is Interlisp-10 in a TWENEX session accessed from my Linux box:

A screenshot of a Linux terminal showing Interlisp-10 running under TWENEX in an SSH session.

Creating a TWENEX account is straightforward but I didn't receive the initial password via email as expected. After reporting this to the twenex-l mailing list I was soon emailed the password, which I changed with the TWENEX command CHANGE DIRECTORY PASSWORD.

Interacting with TWENEX is less alien or arcane than I thought. I recognize the influence of TENEX and TWENEX on Interlisp terminology and notation. For example, the Interlisp REPL is called Exec after the Exec command processor of the TENEX operating system. And, like TENEX, Interlisp uses angle brackets as part of directory names. The influence of these operating systems on the design of CP/M, and hence MS-DOS, is also clear, for example in the commands DIR and TYPE.

SDF's TWENEX system provides a complete Interlisp-10 implementation with only one notable omission: HELPSYS, the interactive facility for consulting the online documentation of Interlisp. The SDF wiki describes the basics of using Interlisp-10 and editing Lisp code with the TTY editor.

After a couple of years of experience with Medley Interlisp the Interlisp-10 environment feels familiar. Most of the same functions and commands control the development tools and facilities. My first impression is that the TTY editor is reasonably efficient and intuitive for editing Lisp code, at least using the basic commands. One thing that's not immediately apparent is that EDITF, the entry point for editing a function, works only with existing functions and can't create new ones. The workaround is to define a stub from the Exec like this:

(DEFINEQ (NEW.FUNCTION () T))

and then call (EDITF NEW.FUNCTION) to flesh it out.

Transferring files between TWENEX and the external world, such as my Linux box, involves two steps because the TWENEX system is not accessible outside of SDF. First, I log into Unix on sdf.org with my SDF account and from there ftp to kankan.twenex.org (172.16.36.36) with my TWENEX account. Once the TWENEX files are on Unix I access them from Linux with scp or sftp to sdf.org.
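Spelled out as commands, the two hops look roughly like this (the user name and file name are placeholders, and the exact ftp interaction depends on your accounts):

# hop 1: from Linux, log into SDF's Unix host, then ftp into TWENEX
ssh yourname@sdf.org
ftp kankan.twenex.org
# log in with the TWENEX account, get the files you want, then quit and log out

# hop 2: back on Linux, pull the files down from the Unix host
scp yourname@sdf.org:SOMEFILE.LSP .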
Transferring files this way may require the ARPA tier of SDF membership.

Everything is ready for a small Interlisp-10 programming project.

#Interlisp #Lisp
Total disassociation, fully out your mind
(“That Funny Feeling”)

I was thinking today about a disc jockey. Like one in the 80s, where you actually had to put the records on the turntables to get the music. You moved the information. You were the file system.

I like the Retro Game Mechanics channel on YouTube. What was possible was limited by the hardware, and in a weird way it forced games to be good. Skill was apparent from a quick viewing, and different skills are usually highly correlated. Good graphics meant good story. Not true today.

I was thinking about all the noobs showing up to comma. If only you could put a technical barrier up to stop them, like it used to be. But you can’t. These barriers can’t be fake, because a fake barrier isn’t like a real barrier. A fake barrier is one small patch away from being gone.

What if the Internet was a mistake? I feel like it’s breaking my brain. It was this mind expanding world in my childhood, but now it’s a set of narrow loops that are harder and harder to get out of. And you can’t escape it. Once you have Starlink to your phone, not having the Internet with you will be a choice, not a real barrier. There’s nowhere to hide.

Chris McCandless wanted to be an explorer, but being born in 1968 meant that the world was already all explored. His clever solution: throw away the map. But that didn’t make him an explorer, it made him an idiot who died 5 miles from a bridge that would have saved his life.

And I’ll tell you something else that you ain’t dying enough to know
(“Big Casino”)

Sure, you can still spin real records, code for the NES, and SSH into your comma device. But you don’t have to. And that makes the people who do it come from a different distribution from the people who used to. They are not explorers in the same way Chris McCandless wasn’t.

When I found out about the singularity at 15, I was sure it was going to happen. It was depressing for a while, realizing that machines would be able to do everything a lot better than I could. But then I realized that it wasn’t like that yet and I could still work on this problem. And here I am, working in AI 20 years later. I thought I came to grips with obsolescence. But it’s not obsolescence; the reality is looking to be so much sadder than I imagined. It won’t be humans accepting the rise of the machines, it won’t be humans fighting the rise of the machines, it will be human-shaped zoo animals oddly pacing back and forth in a corner of the cage while the world keeps turning around them.

It’s easy to see the appeal of conspiracy theories. Even if they hate you, it’s more comforting to believe that they exist. That at least somebody is driving. But that’s not true. It’s just going. There are no longer Western institutions capable of making sense of the world. (Maybe the Chinese ones can? It’s hard to tell.) We are shoved up brutally against evolution, just of the memetic variety. The TikTok brainrot kids will be nothing compared to the ChatGPT brainrot kids. And I’m not talking like an old curmudgeon about the new forms of media being bad and the youth being bad like Socrates said. Because you can never go back. It will be whatever it is.

To every fool preaching the end of history, evolution spits in your face. To every fool preaching the world government AI singleton, evolution spits in your face. I knew these things intellectually, but viscerally it’s just hard to live through. The world feels so small and I feel like I’m being stared at by the Eye of Sauron.
I always had only a diffuse idea of why people spend so much time and money on amateur radio. Once I got my license and started to amass radios myself, it became clearer.
What does it mean when someone writes that a programming language is “strongly typed”? I’ve known for many years that “strongly typed” is a poorly-defined term. Recently I was prompted on Lobsters to explain why it’s hard to understand what someone means when they use the phrase. I came up with more than five meanings!

how strong?

The various meanings of “strongly typed” are not clearly yes-or-no. Some developers like to argue that these kinds of integrity checks must be completely perfect or else they are entirely worthless. Charitably (it took me a while to think of a polite way to phrase this), that betrays a lack of engineering maturity. Software engineers, like any engineers, have to create working systems from imperfect materials. To do so, we must understand what guarantees we can rely on, where our mistakes can be caught early, where we need to establish processes to catch mistakes, how we can control the consequences of our mistakes, and how to remediate when something breaks because of a mistake that wasn’t caught.

strong how?

So, what are the ways that a programming language can be strongly or weakly typed? In what ways are real programming languages “mid”?

Statically typed as opposed to dynamically typed? Many languages have a mixture of the two, such as run time polymorphism in OO languages (e.g. Java), or gradual type systems for dynamic languages (e.g. TypeScript).

Sound static type system? It’s common for static type systems to be deliberately unsound, such as covariant subtyping in arrays or functions (Java, again). Gradual type systems might have gaping holes for usability reasons (TypeScript, again). And some type systems might be unsound due to bugs. (There are a few of these in Rust.) Unsoundness isn’t a disaster, if a programmer won’t cause it without being aware of the risk. For example: in Lean you can write “sorry” as a kind of “to do” annotation that deliberately breaks soundness; and Idris 2 has type-in-type so it accepts Girard’s paradox.

Type safe at run time? Most languages have facilities for deliberately bypassing type safety, with an “unsafe” library module or “unsafe” language features, or things that are harder to spot. It can be more or less difficult to break type safety in ways that the programmer or language designer did not intend. JavaScript and Lua are very safe, treating type safety failures as security vulnerabilities. Java and Rust have controlled unsafety. In C everything is unsafe.

Fewer weird implicit coercions? There isn’t a total order here: for instance, C has implicit bool/int coercions, Rust does not; Rust has implicit deref, C does not. There’s a huge range in how much coercions are a convenience or a source of bugs. For example, the PHP and JavaScript == operators are made entirely of WAT, but at least you can use === instead.

How fancy is the type system? To what degree can you model properties of your program as types? Is it convenient to parse, not validate? Is the Curry-Howard correspondence something you can put into practice? Or is it only capable of describing the physical layout of data?

There are probably other meanings, e.g. I have seen “strongly typed” used to mean that runtime representations are abstract (you can’t see the underlying bytes); or in the past it sometimes meant a language with a heavy type annotation burden (as a mischaracterization of static type checking).
how to type

So, when you write (with your keyboard) the phrase “strongly typed”, delete it, and come up with a more precise description of what you really mean. The desiderata above are partly overlapping, sometimes partly orthogonal. Some of them you might care about, some of them not. But please try to communicate where you draw the line and how fuzzy your line is.