More from These Yaks Ain't Gonna Shave Themselves
A lot of frontend teams are very convinced that rewriting their frontend will lead to the promised land. And I am the bearer of bad tidings. If you are building a product that you hope has longevity, your frontend framework is the least interesting technical decision for you to make. And all of the time you spend arguing about it is wasted energy. I will die on this hill.

If your product is still around in 5 years, you’re doing great and you should feel successful. But guess what? Whatever framework you choose will be obsolete in 5 years. That’s just how the frontend community has been operating, and I don’t expect it to change soon. Even the popular frameworks that are still around are completely different. Because change is the name of the game. So they’re gonna rewrite their shit too and just give it a new version number.

Product teams that are smart are getting off the treadmill. Whatever framework you currently have, start investing in getting to know it deeply. Learn the tools until they are not an impediment to your progress. That’s the only option. Replacing it with a shiny new tool is a trap.

I also wanna give a piece of candid advice to engineers who are searching for jobs. If you feel strongly about what framework you want to use, please make that a criterion for your job search. Please stop walking into teams and derailing everything by trying to convince them to switch from framework X to your framework of choice. It’s really annoying and tremendously costly.

I always have to start with the cynical take. It’s just how I am. But I do want to talk about what I think should be happening instead. Companies that want to reduce the recurring cost of their frontend tech becoming obsolete should be looking to get back to fundamentals. Your teams should be working closer to the web platform with far fewer complex abstractions. We need to relearn what the web is capable of and go back to that.

Let’s be clear, I’m not suggesting this is strictly better and the answer to all of your problems. I’m suggesting this as an intentional business tradeoff that I think provides more value and is less costly in the long run. I believe if you stick closer to core web technologies, you’ll be better able to hire capable engineers in the future without them convincing you they can’t do work without rewriting millions of lines of code.

And if you’re an engineer, you will be able to retain much higher market value over time if you dig into and understand core web technologies. I was here before react, and I’ll be here after it dies. You may trade some job marketability today. But it does a lot more for career longevity than trying to learn every new thing that gets popular. And you see how quickly they discarded us when the market turned anyway. Knowing certain tech won’t save you from those realities.

I couldn’t speak this candidly about this stuff when I held a management role. People can’t help but question my motivations and whatever agenda I may be pushing. Either that or I get into a lot of trouble with my internal team because they think I’m talking about them. But this is just what I’ve seen play out after doing this for 20+ years. And I feel like we need to be able to speak plainly.

This has been brewing in my head for a long time. The frontend ecosystem is kind of broken right now. And it’s frustrating to me for a few different reasons. New developers are having an extremely hard time learning enough skills to be gainfully employed.
They are drowning in this complex garbage and feeling really disheartened. As a result, companies are finding it more difficult to do basic hiring. The bar is so high just to get a regular dev job. And everybody loses.

What’s even worse is that I believe a lot of this energy is wasted. People that are learning the current tech ecosystem are absolutely not learning web fundamentals. They are too abstracted away. And when the stack changes again, these folks are going to be at a serious disadvantage when they have to adapt away from what they learned. It’s a deep disservice to people’s professional careers, and it’s going to cause a lot of heartache later.

On a more personal note, this is frustrating to me because I think it’s a big part of why we’re seeing the web stagnate so much. I still run into lots of devs who are creative and enthusiastic about building cool things. They just can’t. They are trying and failing because the tools being recommended to them are just not approachable enough. And at the same time, they’re being convinced that learning fundamentals is a waste of time because it’s so different from what everybody is talking about.

I guess I want to close by stating my biases. I’m a web guy. I’ve been bullish on the web for 20+ years, and I will continue to be. I think it is an extremely capable and unique platform for delivering software. And it has only gotten better over time while retaining an incredible level of backwards compatibility. The underlying tools we have are dope now. But our current framework layer is working against the grain instead of embracing the platform.

This is from a recent thread I wrote on mastodon. Reproduced with only light editing.
Hm. I feel like I wanted to like this more than I actually do. I definitely think the fediverse needs to continue to grow more capabilities. But this doesn’t feel like the energy I was looking for. Half of it feels like a laundry list of ways to commodify things. Dragging a lot of things people hate about corporate social media into the fediverse.

I’ve only just started to engage with the fediverse as a concept and a movement. And mastodon is only one part of a wide ecosystem. I think what has been surprising to me is that, at least within mastodon, it doesn’t feel like the culture is centered around enabling people to build and experiment. Maybe this is only how I think about it. But there are only a few good reasons to do all of this work. We want to reclaim our online experiences. So they aren’t fully captured by corporate interests. But after that? I think the goal should be to enable greater diversity of experience. People can have what they want by running it on their own servers. It doesn’t have to be something that we wait for someone else to build and ship.

It feels like we’re still trying to over-design corporate solutions that work for “everybody”. And that feels like constrained thinking. I feel like the fediverse should be on the other end of the spectrum. There should be an explosion of solutions. Most of them will probably be crap. But the ones that keep refining and improving will rise to the top and gain more adoption.

Honestly I don’t think “gaining adoption” is that important in a truly diverse ecosystem. The reason concepts like adoption become useful is when it drives compatibility. We do want different servers to be able to participate in the larger society. But I think compatibility emerges because people want to participate. You have to add the value first. Then people will do the work to be compatible so they can get to the value.

If I were stating what I think is important in the fediverse right now, it would be describing what it takes to be “compatible”. I think the “core” groups around fediverse technologies should be hyper-focused on describing and documenting how their foundational protocols behave. And their measure of success should be seeing other groups building compatible servers entirely independent of them. That is a healthy fediverse imo.

I don’t want to start too much trouble here. But I’m already on record with my criticisms of the open source community. I hope we can acknowledge that the community of devs doing much of this work for free has some serious cultural issues to contend with if they’re going to serve the wider set of users who want and need this stuff. We know that corporate interests want to own and capture our experiences for the purposes of profit and control. But open source devs often want to own and capture the work. So that it can only happen the way they say. And as a result, anything we want to see happen is bottlenecked on a small set of humans who have set themselves up as gatekeepers.

I’m not suggesting this is always a malicious dynamic. A lot of times people have legitimate concerns for gatekeeping. Like protecting the security and privacy of users. Or preventing data corruption. Some elements of software do need to be scrutinized by experts so that people don’t get hurt. But I believe that’s a smaller area than people seem to think.
I’m not that interested in debating the reasons for some of the more frustrating elements of open source culture. All I’m saying today is that I believe that open source culture will need to evolve pretty quickly if it’s going to rise to this moment of enabling a healthy and vibrant fediverse.
In my recent side project, I’ve been deploying to fly.io and really enjoying it. It’s fairly easy to get set up. And it supports my preferred workflow of deploying my changes early and often. I have run into a few snags though.

Fly.io builds your project into a docker image and deploys containers for you. That process is mostly seamless when it works. But sometimes it fails, and you need to debug. By default, fly builds your docker images in the cloud. This is convenient and preferred most of the time. But when I wanted to test some changes to my build, I wanted to try building locally using Docker Desktop. This should be easy. The fly cli is quite nice. And there is a flag to build locally.

    fly deploy --build-only --local-only

This failed saying it couldn’t find Docker.

    > fly deploy --build-only --local-only
    ==> Verifying app config
    Validating /Users/polotek/src/harembase/fly.toml
    Platform: machines
    ✓ Configuration is valid
    --> Verified app config
    ==> Building image
    Error: failed to fetch an image or build from source: docker is unavailable to build the deployment image

I spent quite a bit of time googling for the problem here. You can also run fly doctor --verbose to get some info. (If you run this in your fly app folder, it will show more info not relevant to this topic.)

    > fly doctor --verbose
    Testing authentication token... PASSED
    Testing flyctl agent... PASSED
    Testing local Docker instance... Nope
        (We got: failed pinging docker instance: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?)
        This is fine, we'll use a remote builder.
    Pinging WireGuard gateway (give us a sec)... PASSED
    No app provided; skipping app specific checks

I found various forum posts discussing this problem. The folks at fly have spent a lot of time investigating some deep technical issues. I appreciate that work, but ultimately none of it seems to reflect my problem. And the issue felt simpler to me. Fly couldn’t find docker. Why not? Where is it looking?

Eventually I found the answer on stackoverflow. It turns out that things have settled pretty recently to a basic config setting. By default, Docker Desktop installs the socket for the daemon in a non-global space. Usually in your personal user folder, e.g. ~/.docker/run/docker.sock. But other tools expect the docker daemon socket to be available in a standard location, e.g. /var/run/docker.sock.

As of this writing, Docker Desktop has added a recommended way to enable the standard location. In the Docker Desktop dashboard, go to Settings > Advanced and enable “Allow the default Docker socket to be used”.

[Screenshot: Docker Desktop settings on macOS]

This will require your system password and a restart. Then you should be able to see the docker socket in the standard place. And fly will be able to see it! Hopefully the next person who’s banging their head against this will have an easier time.
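A postscript for anyone who wants to poke at this before (or instead of) flipping a global setting. This is a sketch based on my understanding: I believe flyctl, like most docker tooling, honors the DOCKER_HOST environment variable, so pointing it at Docker Desktop’s user-level socket may also work. The first two checks are harmless either way.

    # With the setting enabled, the standard path should be a symlink
    # into your user folder
    ls -l /var/run/docker.sock

    # Ask the docker cli which endpoint it's actually talking to
    docker context ls

    # Possible workaround (untested by me): point tools at the
    # user-level socket directly
    export DOCKER_HOST=unix://$HOME/.docker/run/docker.sock
    fly deploy --build-only --local-only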
I don’t know who needs to hear this. But your frontend and backend systems don’t need to be completely separate. I started a new side project recently. You know, one of those things that allows me to tinker with new technology but will probably never be finished. I’m using Angular for the frontend and Nestjs for the backend. All good. But then I go to do something that I thought was very normal and common and run into a wall.

I want to integrate the two frameworks. I want to serve my initial html with nestjs and add script tags so that Angular takes over the frontend. This will allow me to do dynamic things on the backend and frontend however I want. But also deploy the system all as one cohesive product. Apparently this is no longer How Things Are Done. I literally could not find documentation on how to do this. When you read the docs and blog posts, everybody expects you to just have two systems that run entirely independently. Here’s the server for your backend and here’s the entirely different server for your frontend. Hashtag winning! When I google for “integrate angular and nestjs”, nobody knows what I’m talking about. On the surface, this seems like a great technical blog post from LogRocket. It says “I will teach you how. First, set up two separate servers…”

I think I know why the community has ended up in this place. But that’s a rant for another blog post. Let me try to explain what I’m talking about.

Angular is designed as a frontend framework (let’s set aside SSR for now). The primary output of an Angular build is javascript and css files that are meant to run in the browser. When you run ng build, you’ll get a set of files put into your output folder. Usually the folder is dist/<your_project_name>. Let’s look at what’s in there.

    polotek $> ls -la dist/my-angular-project
    -rw-r--r--  1 polotek  staff    12K Sep 13 14:15 3rdpartylicenses.txt
    -rw-r--r--  1 polotek  staff   948B Sep 13 14:15 favicon.ico
    -rw-r--r--  1 polotek  staff   573B Sep 13 14:15 index.html
    -rw-r--r--  1 polotek  staff   181K Sep 13 14:15 main.c01cba7b28b56cb8.js
    -rw-r--r--  1 polotek  staff    33K Sep 13 14:15 polyfills.2f491a303e062d57.js
    -rw-r--r--  1 polotek  staff   902B Sep 13 14:15 runtime.0b9744f158e85515.js
    -rw-r--r--  1 polotek  staff     0B Sep 13 14:15 styles.ef46db3751d8e999.css

Some javascript and css files. Just as expected. A favicon. Sure, why not. Something about 3rd party licenses. I have no idea what that is, so let’s ignore it. But there’s also an index.html file. This is where the magic is. This file sets up your html so it can serve Angular files. It’s very simple and looks like this.

    <!doctype html>
    <html lang="en" data-critters-container>
    <head>
      <meta charset="utf-8">
      <title>MyAngularProject</title>
      <base href="/">
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <link rel="icon" type="image/x-icon" href="favicon.ico">
      <link rel="stylesheet" href="styles.ef46db3751d8e999.css">
    </head>
    <body>
      <app-root></app-root>
      <script src="runtime.b3cecf81bdcc5839.js" type="module"></script>
      <script src="polyfills.41808b7aa9da5ebc.js" type="module"></script>
      <script src="main.cf1267740c62d53b.js" type="module"></script>
    </body>
    </html>

It turns out the web browser still works the way it always did. You use <script> tags and <link> tags to load your javascript and css into the page. But we want to let the backend do this rather than using this static html file. I’m using NestJS for the backend. It’s modeled after Angular, so a lot of the structures are very similar. Just without all of the browser-specific stuff.
Nest is not so important here though. This problem is the same with whatever backend you’re using. The important thing is how static files are served. If you copy the above html into a backend template, it probably won’t work. This is what you get in the browser when you try this with NestJS.

[Screenshot: Angular fails to load]

This is part of my gripe. By default, these are two separate systems right now. So NestJS doesn’t know that these files exist. And they’re in two separate folders. So it’s unclear what the best way is to integrate them. In the future, I might talk about more sustainable ways to do this for a real project. But for now, I’m going to do the simple thing just to illustrate how this is supposed to work.

In NestJS, or whatever backend you’re using, you should be able to configure where your static files go. In Nest, it looks something like this.

    import { NestFactory } from "@nestjs/core";
    import { NestExpressApplication } from "@nestjs/platform-express";
    import * as path from "path";
    import { AppModule } from "./app.module";

    async function bootstrap() {
      const app = await NestFactory.create<NestExpressApplication>(AppModule);
      // Serve anything in ./public as a static asset
      app.useStaticAssets(path.resolve("./public"));
      await app.listen(3000);
    }
    bootstrap();

So there should be a folder called public in your backend project, and that’s where it expects to find javascript and css files. So here’s the magic. Copy the Angular files into that folder. Let’s say you have the two projects side by side. It might look like this.

    polotek $> cp my-angular-project/dist/my-angular-project-ui/* my-nest-project/public/

This will also copy the original index.html file and the other junk. We don’t care about that for now. This is just for illustration. So now we’ve made NestJS aware of our Angular files. Reload your NestJS page and you should see this.

[Screenshot: assets loading properly and the Angular welcome screen]

We did it! This is how to integrate a cohesive system with frontend and backend. The frontend ecosystem has wandered away from this path. But this is how the web is supposed to work in my opinion. And more importantly, it is actually how a lot of real product companies want to manage their system.

I want to acknowledge that there are still a lot of unanswered questions here. You can’t deploy this to production. The purpose of this blog post is to help the next person like me who was trying to google how to actually integrate Angular and a backend like NestJS, because I assumed there was a common and documented path to doing so. If this was useful for you, and you’re interested in having me write about the rest of what we’re missing in modern frontend, let me know.
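P.S. You can automate that copy step so the frontend build lands in public/ every time. Here’s a rough sketch using npm scripts in the backend project, under the same side-by-side layout as the example above; your folder and script names will differ. (package.json doesn’t allow comments, so that’s all the annotation you get.)

    {
      "scripts": {
        "build:ui": "cd ../my-angular-project && npx ng build",
        "copy:ui": "cp -R ../my-angular-project/dist/my-angular-project-ui/. ./public/",
        "build:full": "npm run build:ui && npm run copy:ui"
      }
    }

With that in place, npm run build:full rebuilds the Angular app and refreshes the files NestJS serves.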
More in programming
I'm exploring another corner of the Interlisp ecosystem and history: the Interlisp-10 implementation for DEC PDP-10 mainframes, a 1970s character-based environment that predated the graphical Interlisp-D system.

I approached this corner when I set out to learn and experiment with a tool I had initially checked out only superficially, the TTY editor. This command line structure editor for Lisp code and expressions was the only editor of Interlisp-10. The oldest of the Interlisp editors, it came before graphical interfaces and SEdit.

On Medley Interlisp the TTY editor is still useful for specialized tasks. For example, its extensive set of commands with macro support is effectively a little language for batch editing and list structure manipulation. Think Unix sed for s-exps. The language even provides the variable EDITMACROS (wink wink). Evaluating (PRINTDEF EDITMACROS) gives a flavor for the language.

For an experience closer to 1970s Interlisp I'm using the editor in its original environment, Interlisp-10 on TWENEX. SDF provides a publicly accessible TWENEX system running on a PDP-10 setup. Sold under the product name TOPS-20, TWENEX was a DEC operating system for DECSYSTEM-20/PDP-10 mainframes derived from TENEX, which BBN originally developed. SDF's TWENEX system comes with Interlisp-10 and other languages. This is Interlisp-10 in a TWENEX session accessed from my Linux box:

[Screenshot: Interlisp-10 running under TWENEX in an SSH session on a Linux terminal]

Creating a TWENEX account is straightforward but I didn't receive the initial password via email as expected. After reporting this to the twenex-l mailing list I was soon emailed the password, which I changed with the TWENEX command CHANGE DIRECTORY PASSWORD.

Interacting with TWENEX is less alien or arcane than I thought. I recognize the influence of TENEX and TWENEX on Interlisp terminology and notation. For example, the Interlisp REPL is called Exec after the Exec command processor of the TENEX operating system. And, like TENEX, Interlisp uses angle brackets as part of directory names. The influence of these operating systems on the design of CP/M, and hence MS-DOS, is also clear, for example in the commands DIR and TYPE.

SDF's TWENEX system provides a complete Interlisp-10 implementation with only one notable omission: HELPSYS, the interactive facility for consulting the online documentation of Interlisp. The SDF wiki describes the basics of using Interlisp-10 and editing Lisp code with the TTY editor. After a couple of years of experience with Medley Interlisp the Interlisp-10 environment feels familiar. Most of the same functions and commands control the development tools and facilities.

My first impression of the TTY editor is that it's reasonably efficient and intuitive for editing Lisp code, at least using the basic commands. One thing that's not immediately apparent is that EDITF, the entry point for editing a function, works only with existing functions and can't create new ones. The workaround is to define a stub from the Exec like this:

    (DEFINEQ (NEW.FUNCTION () T))

and then call (EDITF NEW.FUNCTION) to flesh it out.

Transferring files between TWENEX and the external world, such as my Linux box, involves two steps because the TWENEX system is not accessible outside of SDF. First, I log into Unix on sdf.org with my SDF account and from there ftp to kankan.twenex.org (172.16.36.36) with my TWENEX account. Once the TWENEX files are on Unix I access them from Linux with scp or sftp to sdf.org.
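In shell terms the two-hop dance looks roughly like this (the user and file names here are placeholders, and the exact ftp commands depend on your client):

    # Step 1: from Linux, log into SDF's Unix host
    ssh myuser@sdf.org

    # Step 2: on sdf.org, fetch the files from the TWENEX machine
    ftp kankan.twenex.org
    # ...log in with the TWENEX account, then e.g.: get FILE.LSP

    # Step 3: back on Linux, pull the files down from sdf.org
    scp myuser@sdf.org:FILE.LSP .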
This may require the ARPA tier of SDF membership. Everything is ready for a small Interlisp-10 programming project. #Interlisp #Lisp
Total disassociation, fully out your mind
(That Funny Feeling)

I was thinking today about a disc jockey. Like one in the 80s, where you actually had to put the records on the turntables to get the music. You move the information. You were the file system.

I like the Retro Game Mechanics channel on YouTube. What was possible was limited by the hardware, and in a weird way it forced games to be good. Skill was apparent from a quick viewing, and different skills are usually highly correlated. Good graphics meant good story. Not true today.

I was thinking about all the noobs showing up to comma. If only you could put a technical barrier up to stop them, like there used to be. But you can’t. These barriers can’t be fake, because a fake barrier isn’t like a real barrier. A fake barrier is one small patch away from being gone.

What if the Internet was a mistake? I feel like it’s breaking my brain. It was this mind-expanding world in my childhood, but now it’s a set of narrow loops that are harder and harder to get out of. And you can’t escape it. Once you have Starlink to your phone, not having the Internet with you will be a choice, not a real barrier. There’s nowhere to hide.

Chris McCandless wanted to be an explorer, but being born in 1968 meant that the world was already all explored. His clever solution: throw away the map. But that didn’t make him an explorer, it made him an idiot who died 5 miles from a bridge that would have saved his life.

And I’ll tell you something else that you ain’t dying enough to know
(Big Casino)

Sure, you can still spin real records, code for the NES, and SSH into your comma device. But you don’t have to. And that makes the people who do it come from a different distribution from the people who used to. They are not explorers in the same way Chris McCandless wasn’t.

When I found out about the singularity at 15, I was sure it was going to happen. It was depressing for a while, realizing that machines would be able to do everything a lot better than I could. But then I realized that it wasn’t like that yet and I could still work on this problem. And here I am, working in AI 20 years later. I thought I came to grips with obsolescence. But it’s not obsolescence, the reality is looking to be so much sadder than I imagined. It won’t be humans accepting the rise of the machines, it won’t be humans fighting the rise of the machines, it will be human shaped zoo animals oddly pacing back and forth in a corner of the cage while the world keeps turning around them.

It’s easy to see the appeal of conspiracy theories. Even if they hate you, it’s more comforting to believe that they exist. That at least somebody is driving. But that’s not true. It’s just going. There are no longer Western institutions capable of making sense of the world. (maybe the Chinese ones can? it’s hard to tell) We are shoved up brutally against evolution, just of the memetic variety. The TikTok brainrot kids will be nothing compared to the ChatGPT brainrot kids. And I’m not talking like an old curmudgeon about the new forms of media being bad and the youth being bad like Socrates said. Because you can never go back. It will be whatever it is.

To every fool preaching the end of history, evolution spits in your face. To every fool preaching the world government AI singleton, evolution spits in your face. I knew these things intellectually, but viscerally it’s just hard to live through. The world feels so small and I feel like I’m being stared at by the Eye of Sauron.
I always had a diffuse idea of why people spend so much time and money on amateur radio. Once I got my license and started to amass radios myself, it became clearer.
What does it mean when someone writes that a programming language is “strongly typed”? I’ve known for many years that “strongly typed” is a poorly-defined term. Recently I was prompted on Lobsters to explain why it’s hard to understand what someone means when they use the phrase. I came up with more than five meanings!

how strong?

The various meanings of “strongly typed” are not clearly yes-or-no. Some developers like to argue that these kinds of integrity checks must be completely perfect or else they are entirely worthless. Charitably (it took me a while to think of a polite way to phrase this), that betrays a lack of engineering maturity. Software engineers, like any engineers, have to create working systems from imperfect materials. To do so, we must understand what guarantees we can rely on, where our mistakes can be caught early, where we need to establish processes to catch mistakes, how we can control the consequences of our mistakes, and how to remediate when something breaks because of a mistake that wasn’t caught.

strong how?

So, what are the ways that a programming language can be strongly or weakly typed? In what ways are real programming languages “mid”?

Statically typed as opposed to dynamically typed? Many languages have a mixture of the two, such as run time polymorphism in OO languages (e.g. Java), or gradual type systems for dynamic languages (e.g. TypeScript).

Sound static type system? It’s common for static type systems to be deliberately unsound, such as covariant subtyping in arrays or functions (Java, again). Gradual type systems might have gaping holes for usability reasons (TypeScript, again). And some type systems might be unsound due to bugs. (There are a few of these in Rust.) Unsoundness isn’t a disaster, if a programmer won’t cause it without being aware of the risk. For example: in Lean you can write “sorry” as a kind of “to do” annotation that deliberately breaks soundness; and Idris 2 has type-in-type so it accepts Girard’s paradox.

Type safe at run time? Most languages have facilities for deliberately bypassing type safety, with an “unsafe” library module or “unsafe” language features, or things that are harder to spot. It can be more or less difficult to break type safety in ways that the programmer or language designer did not intend. JavaScript and Lua are very safe, treating type safety failures as security vulnerabilities. Java and Rust have controlled unsafety. In C everything is unsafe.

Fewer weird implicit coercions? There isn’t a total order here: for instance, C has implicit bool/int coercions, Rust does not; Rust has implicit deref, C does not. There’s a huge range in how much coercions are a convenience or a source of bugs. For example, the PHP and JavaScript == operators are made entirely of WAT, but at least you can use === instead.

How fancy is the type system? To what degree can you model properties of your program as types? Is it convenient to parse, not validate? Is the Curry-Howard correspondence something you can put into practice? Or is it only capable of describing the physical layout of data?

There are probably other meanings, e.g. I have seen “strongly typed” used to mean that runtime representations are abstract (you can’t see the underlying bytes); or in the past it sometimes meant a language with a heavy type annotation burden (as a mischaracterization of static type checking).
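To make one of these concrete, here’s a small TypeScript sketch of the kind of gradual-typing hole I mean. JSON.parse returns any, which satisfies any declared type, so the checker’s guarantee quietly evaporates at the boundary:

    // JSON.parse returns `any`, so this type-checks even though nothing
    // guarantees the parsed value has the declared shape.
    function parseConfig(json: string): { retries: number } {
      return JSON.parse(json);
    }

    const cfg = parseConfig('{"retries": "three"}');
    // The checker believes retries is a number; at run time it is a string.
    const next = cfg.retries + 1;
    console.log(next); // prints "three1", with no error anywhere

The remedy, in “parse, don’t validate” style, is to put a real parser at the boundary so the value is checked before it crosses into typed code.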
how to type

So, when you write (with your keyboard) the phrase “strongly typed”, delete it, and come up with a more precise description of what you really mean. The desiderata above are partly overlapping, sometimes partly orthogonal. Some of them you might care about, some of them not. But please try to communicate where you draw the line and how fuzzy your line is.