More from These Yaks Ain't Gonna Shave Themselves
A lot of frontend teams are very convinced that rewriting their frontend will lead to the promised land. And I am the bearer of bad tidings. If you are building a product that you hope has longevity, your frontend framework is the least interesting technical decision for you to make. And all of the time you spend arguing about it is wasted energy. I will die on this hill.

If your product is still around in 5 years, you’re doing great and you should feel successful. But guess what? Whatever framework you choose will be obsolete in 5 years. That’s just how the frontend community has been operating, and I don’t expect it to change soon. Even the popular frameworks that are still around are completely different. Because change is the name of the game. So they’re gonna rewrite their shit too and just give it a new version number.

Product teams that are smart are getting off the treadmill. Whatever framework you currently have, start investing in getting to know it deeply. Learn the tools until they are not an impediment to your progress. That’s the only option. Replacing it with a shiny new tool is a trap.

I also wanna give a piece of candid advice to engineers who are searching for jobs. If you feel strongly about what framework you want to use, please make that a criterion for your job search. Please stop walking into teams and derailing everything by trying to convince them to switch from framework X to your framework of choice. It’s really annoying and tremendously costly.

I always have to start with the cynical take. It’s just how I am. But I do want to talk about what I think should be happening instead. Companies that want to reduce the cost of their frontend tech becoming obsolete so often should be looking to get back to fundamentals. Your teams should be working closer to the web platform with far fewer complex abstractions. We need to relearn what the web is capable of and go back to that.

Let’s be clear, I’m not suggesting this is strictly better and the answer to all of your problems. I’m suggesting this as an intentional business tradeoff that I think provides more value and is less costly in the long run. I believe if you stick closer to core web technologies, you’ll be better able to hire capable engineers in the future without them convincing you they can’t do work without rewriting millions of lines of code.

And if you’re an engineer, you will be able to retain much higher market value over time if you dig into and understand core web technologies. I was here before react, and I’ll be here after it dies. You may trade some job marketability today. But it does a lot more for career longevity than trying to learn every new thing that gets popular. And you see how quickly they discarded us when the market turned anyway. Knowing certain tech won’t save you from those realities.

I couldn’t speak this candidly about this stuff when I held a management role. People can’t help but question my motivations and whatever agenda I may be pushing. Either that or I get into a lot of trouble with my internal team because they think I’m talking about them. But this is just what I’ve seen play out after doing this for 20+ years. And I feel like we need to be able to speak plainly.

This has been brewing in my head for a long time. The frontend ecosystem is kind of broken right now. And it’s frustrating to me for a few different reasons. New developers are having an extremely hard time learning enough skills to be gainfully employed.
They are drowning in this complex garbage and feeling really disheartened. As a result, companies are finding it more difficult to do basic hiring. The bar is so high just to get a regular dev job. And everybody loses.

What’s even worse is that I believe a lot of this energy is wasted. People that are learning the current tech ecosystem are absolutely not learning web fundamentals. They are too abstracted away. And when the stack changes again, these folks are going to be at a serious disadvantage when they have to adapt away from what they learned. It’s a deep disservice to people’s professional careers, and it’s going to cause a lot of heartache later.

On a more personal note, this is frustrating to me because I think it’s a big part of why we’re seeing the web stagnate so much. I still run into lots of devs who are creative and enthusiastic about building cool things. They just can’t. They are trying and failing because the tools being recommended to them are just not approachable enough. And at the same time, they’re being convinced that learning fundamentals is a waste of time because it’s so different from what everybody is talking about.

I guess I want to close by stating my biases. I’m a web guy. I’ve been bullish on the web for 20+ years, and I will continue to be. I think it is an extremely capable and unique platform for delivering software. And it has only gotten better over time while retaining an incredible level of backwards compatibility. The underlying tools we have are dope now. But our current framework layer is working against the grain instead of embracing the platform.

This is from a recent thread I wrote on mastodon. Reproduced with only light editing.
This is the first in probably a series of posts as I dig into the technical aspects of mastodon. My goal is to get a better understanding of the design of ActivityPub and how mastodon itself is designed to use ActivityPub. Eventually I want to learn enough to maybe do some hacking and create some of the experiences I want that mastodon doesn’t support today. The first milestone is just getting a mastodon instance set up on my laptop. I’m gonna give some background and context. If you want to skip straight to the meat of things, here’s an anchor link.

Some background

Mastodon is a complex application with lots of moving parts. For now, all I want is to get something running so I can poke at it. Docker should be a great tool for this, because a lot of that complexity can be packaged up in pre-built images. I tried several times using the official docs and various other alternative projects to get a working mastodon instance in docker. But I kept running into problems that were hard to understand and harder to resolve. I have a lot to learn about all the various pieces of mastodon and how they fit together. But I understand docker pretty well. So after some experimenting, I was able to get an instance running on my own. The rest of this post will be dedicated to explaining what I did and what I learned along the way.

One final note. I know many folks work hard to write docs and provide an out of the box dev experience that works. This isn’t meant to dismiss that hard work. It just didn’t work for me. I’m certainly going to share this experience with the mastodon team. Hopefully these lessons can make the experience better for others in the future.

The approach

Here’s the outline of what we’re doing. We’re going to use a modified version of the docker-compose.yml that comes in the official mastodon repo. It doesn’t work out of the box, so I had to make some heavy tweaks. As of this writing, the mastodon docs seem to want people to use an alternate setup based on Dev Containers. I found that very confusing, and it didn’t work for me at all.

Once we have all of the docker images we need, all of the headaches are in configuring them to work together. Most of mastodon is a ruby on rails app with a database. But there is also a node app to handle streaming updates, redis for caching and background jobs, and we need to handle file storage. We will do the minimum configuration to get all of that set up and able to talk to each other. There is also support for sending emails and optional search capabilities. These are not required just to get something working, so we’ll ignore them for now.

It’s also worth noting that if we want to develop code in mastodon, we need to put our rails app in development mode. That introduces another layer of headaches and errors that I haven’t figured out yet. So that will be a later milestone. For now, all of this will be in “production” mode by default. That’s how the docker image comes packaged. Keep it simple.

There are still many assumptions here. I am running on Mac OS with Apple Silicon (M3). If you’re trying this out, you may run into different issues depending on your environment.

Pre-requisites

We need docker, and a relatively new version. The first thing I did was ditch the version: 3 specifier in the docker-compose.yml. Using versions in these files is deprecated, and we can use some newer features of docker compose. I have v4.30.0 of Docker Desktop for Mac.

We also need caddy. Mastodon instances require a domain in most cases.
This is mostly about identity and security. It would be bad if an actor on mastodon could change their identity very easily just by pretending to be a different domain or account. There are ways around this, but I couldn’t get any of them to work for me. That complicates our setup, because we can’t just use localhost in the browser. We need a domain, which means we also need HTTPS support. Modern browsers require it by default unless you jump through a bunch of hoops. Caddy gives us all of that out of the box really easily. It will be the only thing running outside of docker.

There’s only one caveat with caddy. The way that it is able to do ssl termination so easily is that it creates its own certificates on the fly. The way it does this is by installing its own root cert on your machine. You’ll have to give it permission by putting in your laptop password the first time you run caddy. If that makes you nervous, feel free to skip this and use whatever solution you’re comfortable with for SSL termination. But as far as I know, you need this part.

Choose a domain for your local instance. For me it was polotek-social.local. Something that makes it obvious that this is not a real online instance. Add an entry to your /etc/hosts and point this to localhost. Or whatever people have to do on Windows these days.

Let’s run a mastodon

I put all of my changes in my fork of the official mastodon repo. You can clone this branch and follow along. All of the commands assume you are in the root directory of the cloned repo. https://github.com/polotek/mastodon/tree/polotek-docker-build

> git clone git@github.com:polotek/mastodon.git
> cd mastodon
> git checkout polotek-docker-build

I rewrote the docker section of the README.md to outline the new instructions. I’m going to walk through my explanation of the changes.

Pull docker images

This is the easiest part. All of the docker images are prepackaged. Even the rails app. You can use the docker compose command to pull them all. It’ll take a minute or 2.

> docker compose pull

Setup config files

We’re using a couple of config files. The repo comes with .env.production.sample. This is a nice way to outline the minimum configuration that is required. You can copy that to .env.production and everything is already set up to look for that file. The only thing you have to do here is update the LOCAL_DOMAIN field. This should be the same as the domain you chose and put in your /etc/hosts.

You can put all of your configuration in this file. But I found it more convenient to separate out the various secrets. These often need to be changed or regenerated. I wrote a script to make that repeatable. Any secrets go in .env.secrets. We’ll come back to how you get those values in a bit.

I had to make some other fixes here. Because we’re using docker, we need to update how the rails app finds the other dependencies. The default values seem to assume that redis and postgres are reachable locally on the same machine. I had to change those values to match the docker setup. The REDIS_HOST is redis, and the DB_HOST is db, because that’s what they are named in the docker-compose file.

Diff of config file on github

The rest of the changes are just disabling non-essential services like elastic search and s3 storage.

Generate secrets

We need just a handful of config fields that are randomly generated and considered sensitive. Rails makes it easy to generate secrets. But running the required commands through docker and getting the output into the right place is left as an exercise for the reader.
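For reference, the commands in question look roughly like this. This is a sketch; the exact set of secrets varies a bit between mastodon versions.

> docker compose run --rm web bundle exec rails secret
# run it twice, once for SECRET_KEY_BASE and once for OTP_SECRET
> docker compose run --rm web bundle exec rails mastodon:webpush:generate_vapid_key
# prints the VAPID_PRIVATE_KEY and VAPID_PUBLIC_KEY lines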
I added a small script that runs these commands and outputs the right fields. Rather than try to edit the .env.production file in the right places every time secrets get regenerated, I think it’s much easier to have them in a separate file. Fortunately, docker-compose allows us to specify multiple files to fill out the environment variables.

Diff of config file on github

This was a nice quality of life change. And now regenerating secrets and making them available is just one command.

> bin/gen_secrets > .env.secrets

Any additional secrets can be added by just updating this script. For example, I use 1password to store lots of things, even for development. And I can pull things out using their cli named op. Here’s how I configured the email secrets with the credentials from my mailgun account.

# Email
echo SMTP_LOGIN=$(op read "op://Dev/Mailgun SMTP/username")
echo SMTP_PASSWORD=$(op read "op://Dev/Mailgun SMTP/password")

Run the database

Running the database is easy.

> docker compose up db -d

You’ll need to have your database running while you run these next steps. The -d flag will run it in the background so you can get your terminal back. I often prefer to skip the -d and run multiple terminal windows. That way I can know at a glance if something is running or not. But do whatever feels good.

The only note here is to explain another small change to docker-compose to get this running. We’re using a docker image that comes ready to run postgres. This is great because it removes a lot of the fuss of running a database. The image also provides some convenient ways to configure the name of the database and the primary user account. This becomes important because mastodon preconfigures these values for rails. We can see this in the default .env.production values.

DB_USER=mastodon
DB_NAME=mastodon_production

The database name is not a big issue. Rails will create a database with that name if it doesn’t exist. But it will not create the user (maybe there’s a non-standard flag you can set?). We have to make sure postgres already recognizes a user with the name mastodon. That’s easy enough to do by passing these as environment variables to the database container only.

Diff of config file on github

Load the database schema

One thing that’s always a pain when running rails in docker. Rails won’t start successfully until you load the schema into the database and seed it with the minimal data. This is easy to do if you can run the rake tasks locally. But you can’t run the rake tasks until you have a properly configured rails. And it’s hard to figure out if your rails is configured properly because it won’t run without the database. I don’t know what this is supposed to look like to a seasoned rails expert. But for me it’s always a matter of getting the db:setup rake task to run successfully at least once. After that, everything else starts making sense.

However, how do you get this to work in our docker setup? We can’t just do docker compose up, because the rails container will fail. We can’t use docker compose exec because that expects to attach to an existing instance. So the best thing to do is run a one-off container that only runs the rake task. The way to achieve that with docker compose is docker compose run --rm. The --rm flag just makes sure the container gets trashed afterwards. Because we’re running our own command instead of the default one, we don’t want it hanging around and potentially muddying the waters. Once we know the magic incantation, we can set up the database.
> docker compose run --rm web bundle exec rails db:setup

Note: usually you don’t put quotes around the whole command. For some reason, this can cause problems in certain cases. You can put quotes around any individual arguments if you need to.

Run rails and sidekiq

If you’ve gotten through all of the steps above, you’re ready to run the whole shebang.

> docker compose up

This will start all of the other necessary containers, including rails and sidekiq. Everything should be able to recognize and connect to postgres and redis. We’re in the home stretch. But if you try to reach rails directly in your browser by going to https://localhost:3000, you’ll get this cryptic error.

ERROR -- : [ActionDispatch::HostAuthorization::DefaultResponseApp] Blocked hosts: localhost:3000

It took me a while to track this down. It’s a nice security feature built into rails. When running in production, you need to configure a whitelist of domains that rails will run under. If it receives request headers that don’t match those domains, it produces this error. This prevents certain attacks like dns rebinding. (Which I also learned about at the same time.)

If you set RAILS_ENV=development, then localhost is added to the whitelist by default. That’s convenient, and what we would expect from dev mode. But remember we’re not running in development mode quite yet. So this is a problem for us. The nice thing is that mastodon has added a domain to the whitelist already. Whatever value you put in the LOCAL_DOMAIN field is recognized by rails. (In fact, if you just set this to localhost you might be good to go. Shoutout to Ben.)

However, when you use an actual domain, then most modern web browsers force you to use HTTPS. This is another generally nice security feature that is getting in our way right now. So we need a way to use our LOCAL_DOMAIN, terminate SSL, and then proxy the request to the rails server running inside docker. That brings us to the last piece of the puzzle. Running caddy outside of docker.

Run a reverse proxy

The configuration for caddy is very basic. We put in our domain, and we put in two reverse proxy entries. One for rails and one for the streaming server provided by node.js. Assuming you don’t need anything fancy, caddy provides SSL termination out of the box with no additional configuration.

# Caddyfile
polotek-social.local
reverse_proxy :3000
reverse_proxy /api/v1/streaming/* :4000

We put this in a file named Caddyfile in the root of our mastodon project, then in a new terminal window, start caddy.

> caddy run

Success?

If everything has gone as planned, you should be able to put your local mastodon domain in your browser and see the frontpage of mastodon!

Mastodon frontpage running under local domain!

In the future, I’ll be looking at how to get actual accounts set up and how to see what we can see under the hood of mastodon. I’m sure I’ll work to make all of this more development friendly to work with. But I learned a lot about mastodon just by getting this to run. I hope some of these changes can be contributed back to the main project in the future. Or at least serve as lessons that can be incorporated. I’d like to see it be easier for more people to get mastodon set up and start poking around.
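As a teaser for that accounts step, the usual tool is mastodon’s tootctl CLI. I haven’t verified this under the docker setup yet, so treat it as a sketch, and note that the exact flags and role names vary between mastodon versions. The username and email here are placeholders.

> docker compose run --rm web bin/tootctl accounts create alice --email alice@example.com --confirmed --role Owner
# should print a generated password for the new account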
In my recent side project, I’ve been deploying to fly.io and really enjoying it. It’s fairly easy to get set up. And it supports my preferred workflow of deploying my changes early and often. I have run into a few snags though.

Fly.io builds your project into a docker image and deploys containers for you. That process is mostly seamless when it works. But sometimes it fails, and you need to debug. By default, fly builds your docker images in the cloud. This is convenient and preferred most of the time. But when I wanted to test some changes to my build, I wanted to try building locally using Docker Desktop. This should be easy. The fly cli is quite nice. And there is a flag to build locally.

fly deploy --build-only --local-only

This failed saying it couldn’t find Docker.

> fly deploy --build-only --local-only
==> Verifying app config
Validating /Users/polotek/src/harembase/fly.toml
Platform: machines
✓ Configuration is valid
--> Verified app config
==> Building image
Error: failed to fetch an image or build from source: docker is unavailable to build the deployment image

I spent quite a bit of time googling for the problem here. You can also run fly doctor --verbose to get some info. (If you run this in your fly app folder, it will also show some app-specific info that’s not relevant to this topic.)

> fly doctor --verbose
Testing authentication token... PASSED
Testing flyctl agent... PASSED
Testing local Docker instance... Nope
(We got: failed pinging docker instance: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?)
This is fine, we'll use a remote builder.
Pinging WireGuard gateway (give us a sec)... PASSED
No app provided; skipping app specific checks

I found various forum posts discussing this problem. The folks at fly have spent a lot of time investigating some deep technical issues. I appreciate that work, but ultimately none of it seems to reflect my problem. And the issue felt simpler to me. Fly couldn’t find docker. Why not? Where is it looking?

Eventually I found the answer on stackoverflow. It turns out that things have settled pretty recently to a basic config setting. By default, Docker Desktop installs the socket for the daemon in a non-global space. Usually in your personal user folder, e.g. ~/.docker/run/docker.sock. But other tools expect the docker daemon socket to be available in a standard location, e.g. /var/run/docker.sock

As of this writing, Docker Desktop has added a recommended way to enable the standard location. In the Docker Desktop dashboard, go to Settings > Advanced and enable “Allow the default Docker socket to be used”.

Docker for Mac settings screen

This will require your system password and a restart. Then you should be able to see the docker socket in the standard place. And fly will be able to see it! Hopefully the next person who’s banging their head against this will have an easier time.
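A quick sanity check after the restart, assuming a standard Docker Desktop for Mac install: the socket should now exist at the default path, usually as a symlink to the per-user one, and the failing fly doctor check should flip to PASSED.

> ls -l /var/run/docker.sock
> fly doctor --verbose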
I don’t know who needs to hear this. But your frontend and backend systems don’t need to be completely separate.

I started a new side project recently. You know, one of those things that allows me to tinker with new technology but will probably never be finished. I’m using Angular for the frontend and Nestjs for the backend. All good. But then I go to do something that I thought was very normal and common and run into a wall.

I want to integrate the two frameworks. I want to serve my initial html with nestjs and add script tags so that Angular takes over the frontend. This will allow me to do dynamic things on the backend and frontend however I want. But also deploy the system all as one cohesive product. Apparently this is no longer How Things Are Done.

I literally could not find documentation on how to do this. When you read the docs and blog posts, everybody expects you to just have two systems that run entirely independently. Here’s the server for your backend and here’s the entirely different server for your frontend. Hashtag winning! When I google for “integrate angular and nestjs”, nobody knows what I’m talking about. On the surface, this seems like a great technical blog post from LogRocket. It says “I will teach you how. First, set up two separate servers…”

I think I know why the community has ended up in this place. But that’s a rant for another blog post. Let me try to explain what I’m talking about.

Angular is designed as a frontend framework (let’s set aside SSR for now). The primary output of an Angular build is javascript and css files that are meant to run in the browser. When you run ng build, you’ll get a set of files put into your output folder. Usually the folder is dist/<your_project_name>. Let’s look at what’s in there.

polotek $> ls -la dist/my-angular-project
-rw-r--r-- 1 polotek staff 12K Sep 13 14:15 3rdpartylicenses.txt
-rw-r--r-- 1 polotek staff 948B Sep 13 14:15 favicon.ico
-rw-r--r-- 1 polotek staff 573B Sep 13 14:15 index.html
-rw-r--r-- 1 polotek staff 181K Sep 13 14:15 main.c01cba7b28b56cb8.js
-rw-r--r-- 1 polotek staff 33K Sep 13 14:15 polyfills.2f491a303e062d57.js
-rw-r--r-- 1 polotek staff 902B Sep 13 14:15 runtime.0b9744f158e85515.js
-rw-r--r-- 1 polotek staff 0B Sep 13 14:15 styles.ef46db3751d8e999.css

Some javascript and css files. Just as expected. A favicon. Sure, why not. Something about 3rd party licenses. I have no idea what that is, so let’s ignore it. But there’s also an index.html file. This is where the magic is. This file sets up your html so it can serve Angular files. It’s very simple and looks like this.

<!doctype html>
<html lang="en" data-critters-container>
<head>
  <meta charset="utf-8">
  <title>MyAngularProject</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <link rel="stylesheet" href="styles.ef46db3751d8e999.css">
</head>
<body>
  <app-root></app-root>
  <script src="runtime.b3cecf81bdcc5839.js" type="module"></script>
  <script src="polyfills.41808b7aa9da5ebc.js" type="module"></script>
  <script src="main.cf1267740c62d53b.js" type="module"></script>
</body>
</html>

It turns out the web browser still works the way it always did. You use <script> tags and <link> tags to load your javascript and css into the page. But we want to let the backend do this rather than using this static html file. I’m using NestJS for the backend. It’s modeled after Angular, so a lot of the structures are very similar. Just without all of the browser-specific stuff.
Nest is not so important here though. This problem is the same with whatever backend you’re using. The important thing is how static files are served. If you copy the above html into a backend template, it probably won’t work. This is what you get in the browser when you try this with NestJS.

Angular fails to load.

This is part of my gripe. By default, these are two separate systems right now. So NestJS doesn’t know that these files exist. And they’re in two separate folders. So it’s unclear what the best way is to integrate them. In the future, I might talk about more sustainable ways to do this for a real project. But for now, I’m going to do the simple thing just to illustrate how this is supposed to work.

In NestJS, or whatever backend you’re using, you should be able to configure where your static files go. In Nest, it looks something like this.

import * as path from 'path';
import { NestFactory } from '@nestjs/core';
import { NestExpressApplication } from '@nestjs/platform-express';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create<NestExpressApplication>(AppModule);
  app.useStaticAssets(path.resolve("./public"));
  await app.listen(3000);
}
bootstrap();

So there should be a folder called public in your backend project, and that’s where it expects to find javascript and css files. So here’s the magic. Copy the Angular files into that folder. Let’s say you have the two projects side by side. It might look like this.

polotek $> cp my-angular-project/dist/my-angular-project-ui/* my-nest-project/public/

This will also copy the original index.html file and the other junk. We don’t care about that for now. This is just for illustration. So now we’ve made NestJS aware of our Angular files. Reload your NestJS page and you should see this.

Assets loading properly. Angular Welcome screen loading.

We did it! This is how to integrate a cohesive system with frontend and backend. The frontend ecosystem has wandered away from this path. But this is how the web is supposed to work in my opinion. And more importantly, it is actually how a lot of real product companies want to manage their system.

I want to acknowledge that there are still a lot of unanswered questions here. You can’t deploy this to production. The purpose of this blog post is to help the next person like me who was trying to google how to actually integrate Angular and a backend like NestJS, because I assumed there was a common and documented path to doing so. If this was useful for you, and you’re interested in having me write about the rest of what we’re missing in modern frontend, let me know.
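To make the original goal a little more concrete, here’s a rough sketch of what “let the backend serve the initial html” could look like in Nest. This is my own illustration, not an official pattern. It hard-codes the hashed file names from the build output above, and a real setup would read them from the build manifest or render a template instead.

import { Controller, Get, Header } from '@nestjs/common';

// Hypothetical controller: register it in AppModule's `controllers` array.
// Assumes you did NOT copy Angular's index.html into public/, otherwise the
// static middleware would serve that file before this route is reached.
@Controller()
export class AppController {
  @Get()
  @Header('Content-Type', 'text/html')
  index(): string {
    // Same tags as Angular's generated index.html, but now the backend owns
    // the page and can mix in whatever dynamic markup it wants.
    return `<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>MyAngularProject</title>
  <base href="/">
  <link rel="stylesheet" href="styles.ef46db3751d8e999.css">
</head>
<body>
  <app-root></app-root>
  <script src="runtime.b3cecf81bdcc5839.js" type="module"></script>
  <script src="polyfills.41808b7aa9da5ebc.js" type="module"></script>
  <script src="main.cf1267740c62d53b.js" type="module"></script>
</body>
</html>`;
  }
}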
More in programming
I'm exploring another corner of the Interlisp ecosystem and history: the Interlisp-10 implementation for DEC PDP-10 mainframes, a 1970s character-based environment that predated the graphical Interlisp-D system. I approached this corner when I set out to learn and experiment with a tool I initially checked out only superficially, the TTY editor. This command line structure editor for Lisp code and expressions was the only editor Interlisp-10 had. The oldest of the Interlisp editors, it came before graphical interfaces and SEdit.

On Medley Interlisp the TTY editor is still useful for specialized tasks. For example, its extensive set of commands with macro support is effectively a little language for batch editing and list structure manipulation. Think Unix sed for s-exps. The language even provides the variable EDITMACROS (wink wink). Evaluating (PRINTDEF EDITMACROS) gives a flavor for the language.

For an experience closer to 1970s Interlisp I'm using the editor in its original environment, Interlisp-10 on TWENEX. SDF provides a publicly accessible TWENEX system running on a PDP-10 setup. With the product name TOPS-20, TWENEX was a DEC operating system for DECSYSTEM-20/PDP-10 mainframes derived from TENEX, originally developed by BBN. SDF's TWENEX system comes with Interlisp-10 and other languages. This is Interlisp-10 in a TWENEX session accessed from my Linux box:

A screenshot of a Linux terminal showing Interlisp-10 running under TWENEX in an SSH session.

Creating a TWENEX account is straightforward but I didn't receive the initial password via email as expected. After reporting this to the twenex-l mailing list I was soon emailed the password, which I changed with the TWENEX command CHANGE DIRECTORY PASSWORD.

Interacting with TWENEX is less alien or arcane than I thought. I recognize the influence of TENEX and TWENEX on Interlisp terminology and notation. For example, the Interlisp REPL is called Exec after the Exec command processor of the TENEX operating system. And, like TENEX, Interlisp uses angle brackets as part of directory names. The influence of these operating systems on the design of CP/M, and hence MS-DOS, is also clear, for example in the commands DIR and TYPE.

SDF's TWENEX system provides a complete Interlisp-10 implementation with only one notable omission: HELPSYS, the interactive facility for consulting the online documentation of Interlisp. The SDF wiki describes the basics of using Interlisp-10 and editing Lisp code with the TTY editor.

After a couple of years of experience with Medley Interlisp the Interlisp-10 environment feels familiar. Most of the same functions and commands control the development tools and facilities. My first impression of the TTY editor is that it's reasonably efficient and intuitive to edit Lisp code, at least using the basic commands. One thing that's not immediately apparent is that EDITF, the entry point for editing a function, works only with existing functions and can't create new ones. The workaround is to define a stub from the Exec like this: (DEFINEQ (NEW.FUNCTION () T)) and then call (EDITF NEW.FUNCTION) to flesh it out.

Transferring files between TWENEX and the external world, such as my Linux box, involves two steps because the TWENEX system is not accessible outside of SDF. First, I log into Unix on sdf.org with my SDF account and from there ftp to kankan.twenex.org (172.16.36.36) with my TWENEX account. Once the TWENEX files are on Unix I access them from Linux with scp or sftp to sdf.org.
This may require the ARPA tier of SDF membership. Everything is ready for a small Interlisp-10 programming project. #Interlisp #Lisp
Total disassociation, fully out your mind
- That Funny Feeling

I was thinking today about a disc jockey. Like one in the 80s, where you actually had to put the records on the turntables to get the music. You move the information. You were the file system.

I like the Retro Game Mechanics channel on YouTube. What was possible was limited by the hardware, and in a weird way it forced games to be good. Skill was apparent by a quick viewing, and different skills are usually highly correlated. Good graphics meant good story – not true today.

I was thinking about all the noobs showing up to comma. If you can put a technical barrier up to stop them, like it used to be. But you can’t. These barriers can’t be fake, because a fake barrier isn’t like a real barrier. A fake barrier is one small patch away from being gone.

What if the Internet was a mistake? I feel like it’s breaking my brain. It was this mind expanding world in my childhood, but now it’s a set of narrow loops that are harder and harder to get out of. And you can’t escape it. Once you have Starlink to your phone, not having the Internet with you will be a choice, not a real barrier. There’s nowhere to hide.

Chris McCandless wanted to be an explorer, but being born in 1968 meant that the world was already all explored. His clever solution, throw away the map. But that didn’t make him an explorer, it made him an idiot who died 5 miles from a bridge that would have saved his life.

And I’ll tell you something else that you ain’t dying enough to know
- Big Casino

Sure, you can still spin real records, code for the NES, and SSH into your comma device. But you don’t have to. And that makes the people who do it come from a different distribution from the people who used to. They are not explorers in the same way Chris McCandless wasn’t.

When I found out about the singularity at 15, I was sure it was going to happen. It was depressing for a while, realizing that machines would be able to do everything a lot better than I could. But then I realized that it wasn’t like that yet and I could still work on this problem. And here I am, working in AI 20 years later. I thought I came to grips with obsolescence. But it’s not obsolescence, the reality is looking to be so much sadder than I imagined. It won’t be humans accepting the rise of the machines, it won’t be humans fighting the rise of the machines, it will be human shaped zoo animals oddly pacing back and forth in a corner of the cage while the world keeps turning around them.

It’s easy to see the appeal of conspiracy theories. Even if they hate you, it’s more comforting to believe that they exist. That at least somebody is driving. But that’s not true. It’s just going. There are no longer Western institutions capable of making sense of the world. (maybe the Chinese ones can? it’s hard to tell) We are shoved up brutally against evolution, just of the memetic variety.

The TikTok brainrot kids will be nothing compared to the ChatGPT brainrot kids. And I’m not talking like an old curmudgeon about the new forms of media being bad and the youth being bad like Socrates said. Because you can never go back. It will be whatever it is. To every fool preaching the end of history, evolution spits in your face. To every fool preaching the world government AI singleton, evolution spits in your face.

I knew these things intellectually, but viscerally it’s just hard to live through. The world feels so small and I feel like I’m being stared at by the Eye of Sauron.
I always had only a diffuse idea of why people spend so much time and money on amateur radio. Once I got my license and started to amass radios myself, it became much clearer.
What does it mean when someone writes that a programming language is “strongly typed”? I’ve known for many years that “strongly typed” is a poorly-defined term. Recently I was prompted on Lobsters to explain why it’s hard to understand what someone means when they use the phrase. I came up with more than five meanings!

how strong?

The various meanings of “strongly typed” are not clearly yes-or-no. Some developers like to argue that these kinds of integrity checks must be completely perfect or else they are entirely worthless. Charitably (it took me a while to think of a polite way to phrase this), that betrays a lack of engineering maturity. Software engineers, like any engineers, have to create working systems from imperfect materials. To do so, we must understand what guarantees we can rely on, where our mistakes can be caught early, where we need to establish processes to catch mistakes, how we can control the consequences of our mistakes, and how to remediate when something breaks because of a mistake that wasn’t caught.

strong how?

So, what are the ways that a programming language can be strongly or weakly typed? In what ways are real programming languages “mid”?

Statically typed as opposed to dynamically typed? Many languages have a mixture of the two, such as run time polymorphism in OO languages (e.g. Java), or gradual type systems for dynamic languages (e.g. TypeScript).

Sound static type system? It’s common for static type systems to be deliberately unsound, such as covariant subtyping in arrays or functions (Java, again). Gradual type systems might have gaping holes for usability reasons (TypeScript, again). And some type systems might be unsound due to bugs. (There are a few of these in Rust.) Unsoundness isn’t a disaster, if a programmer won’t cause it without being aware of the risk. For example: in Lean you can write “sorry” as a kind of “to do” annotation that deliberately breaks soundness; and Idris 2 has type-in-type so it accepts Girard’s paradox.

Type safe at run time? Most languages have facilities for deliberately bypassing type safety, with an “unsafe” library module or “unsafe” language features, or things that are harder to spot. It can be more or less difficult to break type safety in ways that the programmer or language designer did not intend. JavaScript and Lua are very safe, treating type safety failures as security vulnerabilities. Java and Rust have controlled unsafety. In C everything is unsafe.

Fewer weird implicit coercions? There isn’t a total order here: for instance, C has implicit bool/int coercions, Rust does not; Rust has implicit deref, C does not. There’s a huge range in how much coercions are a convenience or a source of bugs. For example, the PHP and JavaScript == operators are made entirely of WAT, but at least you can use === instead.

How fancy is the type system? To what degree can you model properties of your program as types? Is it convenient to parse, not validate? Is the Curry-Howard correspondence something you can put into practice? Or is it only capable of describing the physical layout of data?

There are probably other meanings, e.g. I have seen “strongly typed” used to mean that runtime representations are abstract (you can’t see the underlying bytes); or in the past it sometimes meant a language with a heavy type annotation burden (as a mischaracterization of static type checking).
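As a concrete illustration of the soundness point, here is a small sketch of my own in TypeScript, whose arrays are covariant in the same way Java’s are:

// TypeScript treats arrays covariantly, which is convenient but unsound.
class Animal { name = "generic"; }
class Dog extends Animal { bark() { return "woof"; } }

const dogs: Dog[] = [new Dog()];
const animals: Animal[] = dogs;   // allowed: a Dog[] is accepted where Animal[] is expected
animals.push(new Animal());       // type checks, but sneaks a plain Animal into `dogs`
dogs[1].bark();                   // still type checks; at runtime, bark is not a function

The checker accepts all of this, and the mistake only shows up as a runtime error, which is exactly the kind of deliberate trade-off the paragraph above describes.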
how to type

So, when you write (with your keyboard) the phrase “strongly typed”, delete it, and come up with a more precise description of what you really mean. The desiderata above are partly overlapping, sometimes partly orthogonal. Some of them you might care about, some of them not. But please try to communicate where you draw the line and how fuzzy your line is.