Why do many web developers hate jQuery?

The following is an answer I originally posted to the above question posed over on Hashnode, the developer community. Thought it was worth sharing :)

When jQuery was the new thing, people loved it. It made cross-browser JS so much easier, it taught us a few new tricks, and it made AJAX and animations dead simple (which was pretty tricky back when most of the world was on IE6!). It evolved a not-to-be-sniffed-at plugin community and immense mind-share, and it became the de facto client-side JavaScript library for years. In its heyday, jQuery gained a lot of weight as it spent its popularity on better support and more features. It also attracted a lot of mediocre developers because it was seemingly 'easy to use'. This resulted in a lot of terrible JavaScript being written and much debate between so-called pros (although there's nothing professional about berating newcomers) about the...
over a year ago


More from Sometimes It Works :: simonhamp.me

On AI doomerism

This is a reflection on this post (of the same title) that I thoroughly enjoyed, by Flavio Copes.

"I am the genuine article, therefore I don't have to try. I just have to be. You, on the other hand, have to try any passing bandwagon, because what else have you got?"
— Life Isn't All Ha Ha Hee Hee

It seems that a great many people are feeling the pinch of a tech job market squeeze that has lasted for over two years now. A lot of folks seem quick to dump the blame for this squarely on "AI". And what can you expect when the tech CEOs are all doing their stupid dance, touting the latest fadtech as the breakthrough that justifies them laying off swathes of employees in short-sighted attempts to prop up shareholder value? A more self-destructive spiral I have not seen.

But the truth is masked by this conclusion. I don't believe for one second—and I don't think a lot of other popcorn-crunching devs believe it either—that "AI" is remotely close to the claims being made by these flaccid excuses for billionaires. Something else is afoot.

"First, there's a lot of people, and I mean a LOT, that went into programming and didn't have a genuine passion for the job. For them, it's just a way to make money."

Hard agree! I've seen some of these people. I don't believe it's a bad reason to get into any line of work, but in my experience they have made for some of the worst developers. Not because of a lack of technical capability (I truly believe almost anyone can learn to program well), but because the money motivator only takes you so far. Like any job worth doing, it is hard. The rewards come to those who continue to put in the effort.

If you've based your decision to become a programmer on seeing someone else claiming success on social media, I can only tell you that you've been duped. The vast majority do not have this story. And even some of the ones that appear to, don't.

"Very few jobs out there give those kind of perks: high pay and comfortable life."

It's all relative. My personal experience with programming hasn't been "high pay... comfortable life". I've worked damn hard for a very long time. I still don't feel sufficiently compensated, and I've just had my best year ever. And that as a freelancer! I've never seen programming (or really any possible career path) as "easy". Currently "AI" makes my life a bit easier, but nothing about using it day in, day out to build things convinces me that it's going to do away with my job.

"To them, those people that care about the craft, AI is not a problem at all."

I totally agree.

So what's going on?

"post-COVID companies hired like crazy"
"the end of the zero-interest rate period"
"increased competition" from the potential of a globally shifted workforce
"investor pressure"
"AI"

Look, there's not just one answer. That's clear. And these are just part of the picture. I have another theory: a huge factor at play here is that, for most of these businesses, their revenue is based on ads. Therefore, consumers (for the most part) are the driving force of a lot of the top tech companies' bottom lines. But the consumers are getting sick and tired of:

Privacy violations.
Crypto scams.
Income disparity between the workers and the C-suite.
Ads! Ads! Ads! everywhere, all of the time.
The lies and manipulation of it all.

Many of these tech companies haven't innovated anything meaningful in a long time. They're jumping on AI right now because they're all hoping they'll discover the next big breakthrough and unlock megabucks.
And in the meantime they prop themselves up by touting eye-watering sales figures (and net losses) of their next "always-on, internet-connected, touchscreen, AI-powered, super agent whosawidget", or their "giant, face-hugging, reality-distortion spectacles that no person in their right mind would be caught dead wearing in public".

But are any of these "advances" really moving the needle on solving the bigger problems we're facing as a species? No. So the ads and PR BS isn't working as well as it used to.

If your business model is to be a big promotional platform that relies on real human beings to view the content that your advertisers want to promote, then you have to bring real value. And if you're the advertiser whose whole business model depends on crazy ad spend to penetrate into and reach your target market of what you hope are real people, then you need a lot of cheap money to get there.

It's a poisoned lake. These big fish have done it to themselves. Sure, they're big, so they'll survive for a while, but it's still terminal.

The good news is that, in their wake, will come true innovators. I expect the next few years to be the beginnings of a rebirth period in this constant boom-bust circus. And I genuinely believe that "AI" will enable a lot of that. It is already enabling small teams to move faster whilst staying lean. In time, these little fish will grow and once again find opportunity in hiring fresh talent. Get in with those companies now, while they're still small, by seeking them out and starting a conversation. It might not be a job right away, but it's building the relationship.

"You have to have a network... Local meetups, and local conferences... Go a few days before... hang out before the conference... Go to the after parties. That's where you actually meet and get to know people."

This is the way.

You know, I've never once applied to FAANG. I've never felt worthy. In most cases, I don't meet the requirements as I don't have a degree. And honestly, the companies and their leaders do not align with my values at all, so I'm extremely disinclined to apply. But imagine if someone like me, a "self-taught" nobody dev, with no recognisable education and only a connection or two, skips the queue ahead of you and lands a job at the next great innovative tech company. All for the sake of saying hello and getting to know people.

This is why, especially in the age of "AI", cultivating genuine relationships with real people is so important. Work on silly side projects (if you have the capacity). Read! Share what you're working on. Share what you enjoy. Retweet. Reply. Write about it—in the abstract if you have to. Make videos. Do a podcast. Go meet new people, in and out of your field. Don't do it for the algorithm, or the likes and subscribes. Do it for the genuine relationships. Because in a world of fakes and forgeries, the genuine article sticks out like a sore thumb!

5 months ago 17 votes
2024: Just the Start

2024 has overall been a great year for me personally. I believe it represents a turning point. 2023 was tough, but 2024 went some way to redressing the balance, and I am even more positive about 2025 because of it. Here are a few highlights:

We went to Japan! My first time in Asia. The best trip I've been on so far.
I managed to get NativePHP support for Windows released, amongst many other improvements. It's almost ready for production usage. With over 1,000 devs in the Discord, $1,000/month in sponsorships and tons of contributors helping, it really feels like it's flying.
I started building ReelFlow with an old acquaintance and his business partner, and it's got some very promising signs of growth.
I went to Laracon US in Dallas and it was epic!
Laradir became Laradevs, blew past 4,000 registered developer profiles and has some awesome features on the way.
I was interviewed by Eric from Laravel News.
I started four (yes, four(!)) podcasts.
I finally fixed my website so I could start publishing to it again.
And to top it all off, I got invited to speak at Laracon EU 2025, which is coming up in February.

But it wasn't without its challenges! Up to about March/April, I had almost zero income. I was in a rut with NativePHP because I was so stressed about finding paid work. It was really frustrating to have all these goals and intentions for NativePHP (and of course a backlog of PRs and issues to get through!) but not really have the time to focus on it. I managed to pick some work up when we returned from Japan, but it wasn't quite enough, and then all of a sudden I had too much.

Committing to open source in your spare time is hard at the best of times, but especially so when you've got more than a full plate of full-time client work and you're trying to grow a SaaS (Laradevs) on the side... I can tell you now, I have made (and continue to make) poor decisions about the best use of my time, for which I can only say that I'm very grateful to and for my long-suffering wife. But I'm sorry that I've not spent enough time with you. Honestly, I feel like I've been teetering on the edge of burnout. So I decided that I needed to drop one client, and that freed up my time.

Thankfully, we managed to get away for a week later in the year. It was just to Tenerife, our neighbouring island, and we went up into the hills, so despite it being the tail end of summer (when you'd expect great weather and warm temperatures), we were cold and damp in amongst the clouds. But it was just what I needed. We chilled out, cooked food, sampled the local shops, bars and restaurants, spent good time with great friends and just really relaxed. What's more, I felt like I truly disconnected for once (even though my wife might disagree!) So I'm determined to do more of that in 2025.

Saying that, January is going to be tough! I've got a talk to finish so that I can go on stage in front of the biggest audience I've ever spoken to, to talk about something that I feel like I have almost zero knowledge about. A lot of the past couple of years - of basically being forced to go back into freelancing during the height of a hiring crisis - has felt like when I tried to start a business back in 2008/9, at the peak of the recession: hard work, with me way out of my depth. Although many things have changed over those years, the one factor that's really stood out as being different is me. I can see clearly how I've grown in so many ways. I feel like I'm doing some of the best work of my career and I'm enjoying it a lot.
The other thing I've learned is that I want to go all-in on NativePHP. In my opinion, the potential for this tech is huge. I'm not under any illusion that this is going to happen quickly. So one of my goals for 2025 is to build up one of those side projects enough to give me more time to spend on NativePHP, a stepping stone on my way to making it my full-time focus. It might be Laradevs, it might be something else. I don't know for sure yet, but I'm putting my chips on a few numbers.

Besides that, I'm looking forward to visiting Amsterdam for the first time. I'm cautiously optimistic about my talk (though I'm starting to get very nervous) and I'm hoping to be able to attend Laracon US again. I've also got a couple more personal challenge goals. I'm 40 in 2025, so:

I want to get my body into shape - I have a little excess weight to lose, and while I'm probably the fittest I've been since the pandemic, I am still not fit enough.
I want to build 40 apps with NativePHP!
I want to meet (virtually or IRL) 1,000 Laravel developers I've not met before.

2024 has been a year of finding clarity and focus. 2025 will be the year of doubling down and building bigger. I hope you've had an opportunity to reflect and find some positives about 2024. I know it has been especially hard for many of you. I also hope that you can find a way to look into 2025 with enthusiasm and energy. I'd love to hear more about your ups and downs and what you're excited about for 2025, so please feel free to reach out to me.

6 months ago 18 votes
Slow Tech is Good Tech

This morning, I watched a video advertising a course aimed at developers. One of the first sentiments proclaimed is that of feeling left behind, that tech is moving too fast to keep up. I think we're probably one of only a few industries globally that have this problem. I don't see baristas learning how to use every kind of coffee machine, or carpenters buying and using all the different types of wood saw. I don't see how they could. I'm sure those industries don't move anywhere near as fast as tech, so it may be a bit of an unfair comparison, but it has led me to a really interesting axiom: after 20 years in tech, I've learned that slow tech is good tech. I'm happier when I'm not trying to be on the bleeding edge.

Avoid the shiny

I realised over time that I've been quite a lot more dismissive of "the new shiny" than a lot of other devs. In the back of my mind I was always kinda worried that I was being left behind because I wasn't learning X or getting experience with Y. But I've learned over the years that it truly doesn't matter. Why? Most importantly: users won't notice. Second: many companies don't have the capacity or desire to be on the bleeding edge. Finally: my brain still works; I can learn new things any time, when I really need to. And I learn faster now anyway.

Avoid comparison

It's led me to believe that "falling behind" is a made-up concept designed to sell you stuff you probably don't need. 🌶️ Or maybe it's just human nature, similar to "keeping up with the Joneses". We're competitive, and we compare ourselves to others very quickly. It's a bad habit though. I'm here to tell you that it's not just ok to fall (a little bit) behind, it can even be A Good Thing™. I've been falling behind for my entire career. 😂 I'm probably more behind now than I've ever been. But I'm also doing better now - in many ways - than I ever have.

Some examples

Laravel

I didn't use Laravel until a couple of years after its first release. When I picked up v4.2, it was using Composer and had started the transition to Symfony components. It had DI and all sorts of other goodies that earlier versions lacked.

Build tools

I never used Bower/Gulp/Grunt/Yarn; I settled on NPM after the war was over. I haven't switched to Bun (and probably won't). I barely touched Webpack thanks to Laravel Mix. I use Vite, but hardly use it directly either, thanks to Laravel's first-party Vite plugin. I waited so long on all of this stuff that when I actually needed it, the choices were easy. Sure, I wasn't there at the front lines; I didn't invent React. I didn't build Vite. I didn't write the Laravel plugin. But then, I didn't need to; I wasn't building the front-end to the biggest online social network ever, or a toolchain for other developers.

Typescript and JS frameworks

I looked at Typescript once. You can just write plain JavaScript and get the same results. I slept on Coffeescript, Backbone, Angular, React, Vue, Svelte etc. Though I've used some of them briefly during my time, I never went deep on any of them. I stuck with jQuery for a long time because it worked and I knew how to build things rapidly with it. Importantly, I know the value of these other tools and when to use them. That hasn't stopped me being able to work on teams that use them heavily. But I've learned that it probably won't need to be me who's working day-in, day-out with them. And I'm ok with that.
I use Alpine and Livewire now, as they were built to work with the tools I know and love using, each with a small footprint and easy to learn. They're more than enough for my needs—and many of my clients'! And you know what? Some of my clients still use jQuery. 🤷‍♂️

The Web Platform

I love seeing the latest features in HTML, CSS & JS. I'll play with them, but very rarely deploy them. It usually takes some time before new features are available on ALL browsers and devices. And even then, billions of potential users are still running older versions. Yes, it still tickles my curiosity to learn new things. It's intellectually satisfying. And, yes, proficiency in multiple tools can make you a more valuable asset. 💰 I'm not saying "you should not...", I'm just saying "you don't need to".

What I keep up with

The only things that have been really important for me to stay up to date on are the core technologies I use: PHP & Laravel. I make it a point to keep my apps up to date with close to the latest releases - mainly for security and performance reasons, but also so I can use the latest and greatest features. 😛 But I rarely upgrade apps in production to the very latest versions as soon as they're available... I always leave it a few weeks. That way, my work is not dictated by someone else's release schedule!

That's not to say I don't keep abreast of what's coming in future releases of those tools; they're core to the service I provide and the tools I build, so I would be remiss not to. Being aware of what's coming is the priority there. But I don't have to have hands-on experience with everything. Testing early ("beta") releases against existing code is a useful exercise from time to time, but not always required.

So basically I only need to regularly monitor two technologies. Easily done with a reasonably well-curated Twitter or a couple of RSS feeds. Sure, I keep my finger on the pulse of all the other tech I use, glancing occasionally out of the corner of my eye and paying attention when the sources I am more focused on mention them. But by keeping it simple, I can focus on delivering the most value rather than spinning my wheels on all of the superfluous things adjacent to them. Everything else can wait. Don't try to learn and keep up to date with everything; pick your battles!

a year ago 18 votes
Simon Shares

I've started a little podcast! If you give it a listen, I'd love to know what you think.

a year ago 20 votes
Why You (Probably) Shouldn't Start With an SPA

I came across this interesting article by @gregnavis the other day. I guess it's from a few years ago now, judging by some of the other posts on his blog. But it still holds up. It's maybe even more relevant now. The article is entitled The Architecture No One Needs and it makes a simple and clear case, arguing that SPAs are more expensive than a standard multi-page app (which may or may not be a monolith).

I'm going to use SPA throughout this post to mean the whole umbrella of ways you might be building a front-end application that is not server-side rendered. I think this is in line with Greg's intent too. I don't want to split hairs over whether a particular framework can be used to build front-end apps that aren't strictly SPAs.

Since I read it, I haven't stopped thinking about this article. I found myself agreeing with all of Greg's stated points, and it made me realise I actually have a strong opinion about this topic. I believe Greg is right, and as time rolls on I'm becoming more bullish in my stance on SPAs too.

Some history, some context

My foray into the dirty, hubbub streets of front-end frameworks came about because of Laravel Nova and Statamic. They both use Vue, so I learned Vue. Of course, I looked around. But React made me retch. Angular almost made me want to buy a katana just to perform seppuku. (Of course I'm being hyperbolic.) I hear things about all of these and more thanks to Twitter. I can say some of it is good, but most of it continues to push me away. I stuck with Vue. I actually like Vue a lot, even if v3 has caused a bit of a headache—it's actually better for it in my opinion, and yes, I do like the composition API.

Overall though, I'm definitely heading more towards preferring not to have to build my front-end using node/deno/bun or whatever tool becomes popular today. That said, you just try and pry Tailwind from my frigid, rigid fingers! I'm quite firmly in the Livewire camp now. In another life, I may even like HTMX?

I've built a few SPAs, but I can probably count them on one hand. Not only will I probably not build another one, I think you shouldn't either. I'm strongly encouraging my clients not to. I do think SPAs have a place, but it is almost certainly not in your project. Yes, I know this argument has been made before, and probably far more eloquently than I'm going to make it, but I just felt like I needed to get this out of my head in my own way.

When does an SPA make sense?

Let's get this out of the way first before I dunk on SPAs some more. They do have a place. I believe that the main advantage of an SPA is/was/has always been the decoupling of the front-end from the back-end. As time goes on and engineers specialise in areas that interest them the most, roles become more well-defined. This is why we have 'front-end' and 'back-end' and 'full stack' engineers. Although there is principally a lot of overlap (this is all programming at the end of the day and many concepts are similar), the domain—the environment that the engineer is most familiar with—is what determines their preference. Some engineers will prefer back-end because they don't want to think about or deal with a certain class of bugs or issues that arise from the rapidly-evolving world of front-end development.
They may be uncomfortable using 3 or 4 different languages at the same time to get their work done, or they groan as soon as there's another major version of some framework which is going to require a load of refactoring that doesn't add immediate value to your product. Consider: every user of your app could be using a different version of a particular browser, which uses one kind of rendering engine, and a specific level of conformity to various web standards, e.g. ECMAScript (the official standard that underpins what most of us think of as JavaScript) or CSS. Keeping on top of these variations and differences across desktop and mobile is enough to make anyone's head spin.

And other engineers will prefer the front-end for its stateful nature, or because they're more comfortable with JavaScript/TypeScript, they grok CSS, and they love the intersection of code and design. Or maybe they just dislike dealing with databases, concurrency, queues, messaging and APIs a lot more than they dislike wild browser evolution.

In any case, whatever size your team is, there will be these preferences. For example, I consider myself a full-stack developer, but honestly I like to avoid front-end work as much as possible, so I will generally take the easy route there.

Developing an SPA could allow you/your business to split the work of front-end and back-end into separate teams, which may help each team focus so they can do what they do best and be the happiest they can be. That is the most important metric that will make an appreciable difference to your bottom line long term. Being intentional about this will see you hit Conway's Law head-on and potentially tackle that beast in the best way, as long as you build up the necessary lines of communication. It also allows you to scale the two parts of the system separately—both in terms of team scaling and resource scaling—which, if you ever need that flexibility, could end up saving you from a certain group of headaches. And I'm gonna be honest, building distributed systems like this is a cool problem to solve and will be a point of growth for many of your engineers.

So, why are SPAs bad again?

Yeh, so far this all sounds great, right? Well, just letting ol' Conway right into your living room isn't exactly the best idea. Aside from all the points that Greg makes in his article (if you haven't read it yet, go read it), I want to present three extra reasons why I think an SPA is a bad idea.

Your front-end and back-end get decoupled!

Your back-end and front-end are always coupled. So trying to split them in anything but the most extreme circumstances is an exercise in futility. I think this is probably the worst part of this whole story. If your back-end team want to move in one direction, they've got to align with the front-end team. If timings and priorities don't work out, it's going to force someone to either put a hold on some work that really needs sorting out, or do some grunt work just to patch over a hole that's about to appear.

This is communication overhead. It adds risk. It adds complexity. It adds meetings into engineers' calendars. It adds friction, and stress, and distraction. It flies in the face of that number one principle: let your teams focus and be happy. This literally costs you money one way or another, cost that you could avoid. Deployments get unavoidably riskier in ways that are super difficult to test, because testing distributed systems is really hard.

Again, this might all be fine in the most extreme cases, where you need the decoupling.
Then this extra expense, and complexity, and churn-causing evil, is just a necessary evil that you have to learn to swallow and live with.

But I've got x engineers, y1 are front-end, y2 are back-end. What do I do? I would strongly recommend that one of your engineering groups rolls up their sleeves and gets on with learning the other group's code, tooling and responsibilities. This will have multiple benefits: career progression and learning opportunities, increased bus factor, fewer meetings and more collaboration. Sure, there will be challenges too, but they won't be as big or as painful as the other challenges you'd face with an SPA.

It may hurt customisability

As I mentioned earlier, the reason I got into Vue at all was because other tech I was using required me to. In both Statamic and Laravel Nova, the choice of Vue—well, not specifically Vue, but rather a reactive front-end framework—made sense because at the time it really was the best way to build flexible, reactive front-ends. And both of those tools needed that power and have become fantastic tools because of it. But there is one pain point it's created that's quite hard to escape: the customisation story for each of these is harder because of it.

How so? Basically, because each tool needs to build the assets to ship their product. And once they're built, it's hard for third parties to build on what's already there.

How is it harder? Let's say I'm creating a Statamic add-on that allows CMS owners to post to social media from their control panel. As Statamic uses Vue and already has a bunch of components I can leverage, I am going to use some of them. But I'm also going to add some of my own functionality that doesn't already exist within Statamic. Now what happens? Well, I build the JavaScript... but wait. I can't change the bundled JS that's part of Statamic core. I have to build my own JS and load it at the right time, something I'm not in control of. Thankfully the Statamic team (building on tooling from the Vue & JS community) have worked hard to make this relatively easy, but my tool choice is now limited by what they support - if they're using Vue 2, I have to use Vue 2; if I don't like Vite or Webpack, tough luck. And on top of that, the builds have to happen at the client's end, which means we're now offloading some of the responsibility for making this whole thing work to people who don't need or want to know anything about this stuff. They just want to install your thing and get on with their jobs and lives.

Why is this such a pain though? It used to be (in other platforms) that I could just load some extra JS file into the admin interface and do what I wanted. Honestly, we probably should never have been doing that either. Hands up, how many of you have seen a WordPress installation that tries to load 2 or more different versions of jQuery? 🙋🏼‍♂️ So these build tools go some way to alleviating some problems, but in the process have introduced so many layers of protection and abstraction that they present a brain-melting Japanese puzzle box to unlock.

And all this JavaScript flying around is really unsafe, because JavaScript is completely malleable on the client side. That's meant library creators have had to go to some unusual lengths to protect the state of the application and encapsulate the code, in attempts to make it safer and more portable.
I won't pretend to understand all of the requirements, prerequisites and implications for why built JavaScript assets are packaged up in a complex soup of function calls and obfuscated code, but suffice to say this makes building on top of pre-existing code that much harder. The web standards track is working to make this easier: we have Web Components and modules starting to come through, which should alleviate some of this. But if you're not building with those standards in play—I understand, it might not be possible because of browser support etc—and you want to expose your users to third-party plugins/add-ons, then you've got to figure out how to make it easy for other developers. Some of that's going to come down to the specific implementation; the other part is going to be documentation. No matter how you cut it, it's going to be harder to get right than if you had server-side rendered views that you allow your third-party developers to load at runtime.

Performance will suffer

This isn't really an extra reason, as Greg did touch on this a little already, but I wanted to go harder on performance. You should never choose to build an SPA because of some supposed performance benefit. That is the wrong hill to try and defend, for many reasons, but primarily because you've got the whole of the web stack—on horses, with bazookas—nipping at your heels. Sure, it may take a little while for web standards to get ratified and then rolled out, but the reality is that it's only a matter of time. We now have wide browser support for QUIC / HTTP/3 (which brings faster downloads and reduced server load) and things like 103 Early Hints response headers (which let us tell browsers what they should prioritise pre-loading), making the standard, non-SPA web even more performant. (And yes, some of this benefits SPAs too!)

Sure, you can argue some of this advancement may have been driven by SPAs and their apparent benefits. But there's some inevitability to all of this (both the appearance of SPAs and the advancement of HTTP), which makes the whole argument moot in my opinion. As adoption and overall performance of the web platform increases, SPAs will even start to feel slow in comparison. Some feel slow already! That's because so much of the heavy lifting is left to the userland threads of the in-browser JavaScript engine instead of the lower-level compiled languages used to build the core browser engine itself (C++, Rust, Swift etc). That translates to a poorer experience in your app, and a penalty for your users. While it's not impossible for JavaScript to be as fast or faster than the actual browser it's running in, it's such a long way from getting there that it's a no-brainer at this stage to let the browser do what the browser does best instead of trying to replicate all of that in JS.

So don't! Use JavaScript the way it was intended: as a sugar-coating to enhance pages, not to build entire pages. I mean, you wouldn't eat a donut made entirely out of sugar, would you? Again, in the extreme cases, maybe you would. Maybe for this donut-sized/-shaped sugar torus, you have an appropriately-sized dough-only counterpart, both of which you consume in close proximity... I donut-know where this analogy is going.

So what's the alternative?

You, dear reader, are not in the most extreme case. Probably not even close! And you may never be. So, if you haven't started building an SPA, don't!
Keep your code together in a single application where the front-end is rendered by the back-end, and then you can test and deploy it as a single unit. And yes, you can even do that without Docker 😱 Create it as a fully server-rendered, multi-page application with page reloads and everything. Go on! I double dare you. If you really want/need the reactivity, try something like Livewire, Hotwire or HTMX. You can do this all the way up to many millions of users per day, which you are a long way from, and it will be fine. Trust me, your front-end will never meaningfully need to move faster than your back-end, and vice versa.

If you're already running an SPA and are contemplating bringing the two parts of the donut back together, do it! Bite your lip, close your eyes, and just do it.
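To give a feel for how little the "sprinkle of reactivity" route asks of you, here's a minimal sketch of the HTMX flavour of this (the /like endpoint and the count are invented for illustration; the server just returns a replacement HTML fragment):

<script src="https://unpkg.com/htmx.org@1.9.12"></script>

<!-- Clicking POSTs to /like; the server-rendered fragment that comes
     back replaces the whole button. No client-side state, no build step. -->
<button hx-post="/like" hx-swap="outerHTML">
  Like (12)
</button>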

a year ago 13 votes

More in programming

That boolean should probably be something else

One of the first types we learn about is the boolean. It's pretty natural to use, because boolean logic underpins much of modern computing. And yet, it's one of the types we should probably be using a lot less of. In almost every single instance when you use a boolean, it should be something else. The trick is figuring out what "something else" is. Doing this is worth the effort. It tells you a lot about your system, and it will improve your design (even if you end up using a boolean).

There are a few possible types that come up often, hiding as booleans. Let's take a look at each of these, as well as the case where using a boolean does make sense. This isn't exhaustive[1]—there are surely other types that can make sense, too.

Datetimes

A lot of boolean data is representing a temporal event having happened. For example, websites often have you confirm your email. This may be stored as a boolean column, is_confirmed, in the database. It makes a lot of sense. But, you're throwing away data: when the confirmation happened. You can instead store when the user confirmed their email in a nullable column. You can still get the same information by checking whether the column is null. But you also get richer data for other purposes. Maybe you find out down the road that there was a bug in your confirmation process. You can use these timestamps to check which users would be affected by that, based on when their confirmation was stored.

This is the one I've seen discussed the most of all these. We run into it with almost every database we design, after all. You can detect it by asking if an action has to occur for the boolean to change values, and if values can only change one time. If you have both of these, then it really looks like it is a datetime being transformed into a boolean. Store the datetime!

Enums

Much of the remaining boolean data indicates either what type something is, or its status. Is a user an admin or not? Check the is_admin column! Did that job fail? Check the failed column! Is the user allowed to take this action? Return a boolean for that, yes or no! These usually make more sense as an enum.

Consider the admin case: this is really a user role, and you should have an enum for it. If it's a boolean, you're going to eventually need more columns, and you'll keep adding on other statuses. Oh, we had users and admins, but now we also need guest users and we need super-admins. With an enum, you can add those easily.

enum UserRole {
    User,
    Admin,
    Guest,
    SuperAdmin,
}

And then you can usually use your tooling to make sure that all the new cases are covered in your code. With a boolean, you have to add more booleans, and then you have to make sure you find all the places where the old booleans were used and make sure they handle these new cases, too. Enums help you avoid these bugs.

Job status is one that's pretty clearly an enum as well. If you use booleans, you'll have is_failed, is_started, is_queued, and on and on. Or you could just have one single field, status, which is an enum with the various statuses. (Note, though, that you probably do want timestamp fields for each of these events—but you're still best having the status stored explicitly as well.) This begins to resemble a state machine once you store the status, and it means that you can make much cleaner code and analyze things along state transition lines.

And it's not just for storing in a database, either. If you're checking a user's permissions, you often return a boolean for that.
fn check_permissions(user: User) -> bool {
    false // no one is allowed to do anything i guess
}

In this case, true means the user can do it and false means they can't. Usually. I think. But you can really start to have doubts here, and with any boolean, because the application logic meaning of the value cannot be inferred from the type. Instead, this can be represented as an enum, even when there are just two choices.

enum PermissionCheck {
    Allowed,
    NotPermitted { reason: String },
}

As a bonus, though, if you use an enum? You can end up with richer information, like returning a reason for a permission check failing. And you are safe for future expansions of the enum, just like with roles.

You can detect when something should be an enum by a proliferation of booleans which are mutually exclusive or depend on one another. You'll see multiple columns which are all changed at the same time. Or you'll see a boolean which is returned and used for a long time. It's important to use enums here to keep your program maintainable and understandable.

Conditionals

But when should we use a boolean? I've mainly run into one case where it makes sense: when you're (temporarily) storing the result of a conditional expression for evaluation. This is in some ways an optimization, either for the computer (reuse a variable[2]) or for the programmer (make it more comprehensible by giving a name to a big conditional) by storing an intermediate value. Here's a contrived example using a boolean as an intermediate value.

fn calculate_user_data(user: User, records: RecordStore) {
    // this would be some nice long conditional,
    // but I don't have one. So variables it is!
    let user_can_do_this: bool = (a && b) && (c || !d);

    if user_can_do_this && records.ready() {
        // do the thing
    } else if user_can_do_this && records.in_progress() {
        // do another thing
    } else {
        // and something else!
    }
}

But even here in this contrived example, some enums would make more sense. I'd keep the boolean, probably, simply to give a name to what we're calculating. But the rest of it should be a match on an enum!

* * *

Sure, not every boolean should go away. There's probably no single rule in software design that is always true. But we should be paying a lot more attention to booleans. They're sneaky. They feel like they make sense for our data, but they really make sense for our logic. The data is usually something different underneath. By storing a boolean as our data, we're coupling that data tightly to our application logic. Instead, we should remain critical and ask what data the boolean depends on, and whether we should store that instead. It comes easier with practice. Really, all good design does. A little thinking up front saves you a lot of time in the long run.

[1] I know that using an em-dash is treated as a sign of using LLMs. LLMs are never used for my writing. I just really like em-dashes and have a dedicated key for them on one of my keyboard layers. ↩

[2] This one is probably best left to the compiler. ↩
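A minimal sketch of the nullable-timestamp version of the email-confirmation example from the Datetimes section, assuming the chrono crate (the field and method names here are illustrative, not from the post):

use chrono::{DateTime, Utc};

struct User {
    email: String,
    // Instead of is_confirmed: bool, store *when* confirmation happened.
    confirmed_at: Option<DateTime<Utc>>,
}

impl User {
    // The old boolean is still one expression away...
    fn is_confirmed(&self) -> bool {
        self.confirmed_at.is_some()
    }

    // ...but the richer datum is preserved for later analysis.
    fn confirm(&mut self) {
        self.confirmed_at = Some(Utc::now());
    }
}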

22 hours ago 3 votes
AmigaGuide Reference Library

As I slowly but surely work towards the next release of my setcmd project for the Amiga (see the 68k branch for the gory details and my total noob-like C flailing around), I've made heavy use of documentation in the AmigaGuide format. Despite its age, it's a great Amiga-native format, and there's a wealth of great information out there for things like the C API, as well as language guides and tutorials for tools like the Installer utility - and the AmigaGuide markup syntax itself.

The only snag is that I'd need access to an Amiga (real or emulated), or to install one of the various viewer programs on my laptops. Because, like many, I spend a lot of time in a web browser and occasionally want to check something on my mobile phone, this is less than convenient. Fortunately, there's a great AmigaGuideJS online viewer which renders AmigaGuide format documents using Javascript. I've started building up a collection of useful developer guides and other files in my own reference library so that I can access this documentation whenever I'm not at my Amiga or am coding in my "modern" dev environment. It's really just for my own personal use, but I'll be adding to it whenever I come across a useful piece of documentation, so I hope it's of some use to others as well!

And on a related note, I now have a "unified" code-base so that SetCmd now builds and runs on 68k-based OS 3.x systems as well as OS 4.x PPC systems like my X5000. I still need to:

Tidy up my code and fix all the "TODO" stuff
Update the Installer to run on OS 3.x systems
Update the documentation
Build a new package and upload to Aminet/OS4Depot

Hopefully I'll get that done in the next month or so. With the pressures of work and family life (and my other hobbies), progress has been a lot slower these last few years, but I'm still really enjoying working on Amiga code and it's great to have a fun personal project that's there for me whenever I want to hack away at something for the sheer hell of it. I've learned a lot along the way and the AmigaOS is still an absolute joy to develop for. I even brought my X5000 to the most recent Kickstart Amiga User Group BBQ/meetup and had a fun day working on the code with fellow Amigans and enjoying some classic gaming & demos - there was also a MorphOS machine there, which I think will be my next target as the codebase is slowly becoming more portable. Just got to find some room in the "retro cave" now… This stuff is addictive :)
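If you've never seen the markup, a minimal AmigaGuide document looks something like this (a from-memory sketch; the node names and text are invented, not taken from the reference library):

@database example.guide
@node Main "SetCmd Documentation"

Welcome! Jump to @{"Installation" link Install} to get started.

@endnode
@node Install "Installation"

Copy SetCmd to C: and add it to your startup-sequence.

@endnode

Each @node/@endnode pair is a page, and @{"..." link NodeName} creates a hyperlink between pages, which is what makes it such a pleasant hypertext format for an OS of its era.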

14 hours ago 2 votes
An Analysis of Links From The White House’s “Wire” Website

A little while back I heard about the White House launching their version of a Drudge Report style website called White House Wire. According to Axios, a White House official said the site's purpose was to serve as "a place for supporters of the president's agenda to get the real news all in one place". So a link blog, if you will. As a self-professed connoisseur of websites and link blogs, this got me thinking: "I wonder what kind of links they're considering as 'real news' and what they're linking to?"

So I decided to do a quick analysis using Quadratic, a programmable spreadsheet where you can write code and return values to a 2d interface of rows and columns. I wrote some JavaScript to:

Fetch the HTML page at whitehouse.gov/wire
Parse it with cheerio
Select all the external links on the page
Return a list of links and their headline text

(A rough sketch of that scraping step appears at the end of this post.) In a few minutes I had a quick analysis of what kind of links were on the page. This immediately sparked my curiosity to know more about the meta information around the links, like:

If you grouped all the links together, which sites get linked to the most?
What kind of interesting data could you pull from the headlines they're writing, like the most frequently used words?
What if you did this analysis with snapshots of the website over time (rather than just the current moment)?

So I got to building. Quadratic today doesn't yet have the ability for your spreadsheet to run in the background on a schedule and append data, so I had to look elsewhere for a little extra functionality. My mind went to val.town, which lets you write little scripts that can 1) run on a schedule (cron), 2) store information (blobs), and 3) retrieve stored information via their API. After a quick read of their docs, I figured out how to write a little script that'll run once a day, scrape the site, and save the resulting HTML page in their key/value storage.

From there, I was back in Quadratic writing code to talk to val.town's API, retrieve my HTML, parse it, and turn it into good, structured data. There were some things I had to do, like:

Fine-tune how I select all the editorial links on the page from the source HTML (I didn't want, for example, to include external links to the White House's social pages which appear on every page). This required a little finessing, but I eventually got a collection of links that corresponded to what I was seeing on the page.
Parse the links and pull out the top-level domains so I could group links by domain occurrence.
Create charts and graphs to visualize the structured data I had created.

Selfish plug: Quadratic made this all super easy, as I could program in JavaScript and use third-party tools like tldts to do the analysis, all while visualizing my output on a 2d grid in real-time, which made for a super fast feedback loop! Once I got all that done, I just had to sit back and wait for the HTML snapshots to begin accumulating. It's been about a month and a half since I started this, and I have about fifty days' worth of data. The results?
Here’s the top 10 domains that the White House Wire links to (by occurrence), from May 8 to June 24, 2025: youtube.com (133) foxnews.com (72) thepostmillennial.com (67) foxbusiness.com (66) breitbart.com (64) x.com (63) reuters.com (51) truthsocial.com (48) nypost.com (47) dailywire.com (36) From the links, here’s a word cloud of the most commonly recurring words in the link headlines: “trump” (343) “president” (145) “us” (134) “big” (131) “bill” (127) “beautiful” (113) “trumps” (92) “one” (72) “million” (57) “house” (56) The data and these graphs are all in my spreadsheet, so I can open it up whenever I want to see the latest data and re-run my script to pull the latest from val.town. In response to the new data that comes in, the spreadsheet automatically parses it, turn it into links, and updates the graphs. Cool! If you want to check out the spreadsheet — sorry! My API key for val.town is in it (“secrets management” is on the roadmap). But I created a duplicate where I inlined the data from the API (rather than the code which dynamically pulls it) which you can check out here at your convenience. Email · Mastodon · Bluesky

3 hours ago 2 votes
Implementation of optimized vector of strings in C++ in SumatraPDF

SumatraPDF is a fast, small, open-source PDF reader for Windows, written in C++. This article describes how I implemented the StrVec class for efficiently storing multiple strings.

Much ado about the strings

Strings are among the most used types in most programs. Arrays of strings are also used often. I count ~80 uses of StrVec in SumatraPDF code. This article describes how I implemented an optimized array of strings in SumatraPDF's C++ code.

No STL for you

Why not use std::vector<std::string>? In SumatraPDF I don't use STL. I don't use std::string, I don't use std::vector. For me it's a symbol of my individuality, and my belief in personal freedom. As described here, the minimum size of std::string on 64-bit machines is 32 bytes for msvc / gcc and 24 bytes for clang (with inline storage for short strings: 15 chars for msvc / gcc, 22 chars for clang). For longer strings we have more overhead:

32/24 bytes for the header
memory allocator overhead
allocator metadata
padding due to rounding allocations to at least 16 bytes

There's also std::vector overhead: for fast appends (push_back()), std::vector implementations over-allocate space. Longer strings are allocated at random addresses, so they can be spread out in memory. That is bad for cache locality, and that often causes more slowness than executing lots of instructions.

Design and implementation of StrVec

StrVec (vector of strings) solves all of the above:

per-string overhead of only 8 bytes
strings are laid out next to each other in memory

High-level design of StrVec:

backing memory is allocated in singly-linked pages
similar to std::vector, we start with a small page and increase the size of subsequent pages. This strikes a balance between the speed of accessing a string at a random index and wasted space
unlike std::vector, we don't reallocate memory (most of the time). That saves a memory copy when re-allocating backing space

Here's all there is to StrVec:

struct StrVec {
    StrVecPage* first = nullptr;
    int nextPageSize = 256;
    int size = 0;
};

size is a cached number of strings. It could be calculated by summing the sizes in all StrVecPages. nextPageSize is the size of the next StrVecPage. Most array implementations increase the size of the next allocation by 1.4x - 2x. I went with the following progression: 256 bytes, 1k, 4k, 16k, 32k, and I cap it at 64k. I don't have data behind those numbers; they feel right. A bigger page wastes more space. A smaller page makes random access slower, because to find the N-th string we need to traverse the linked list of StrVecPages. nextPageSize is exposed to allow the caller to optimize use. E.g. if it expects lots of strings, it could set nextPageSize to a large number.

StrVecPage

Most of the implementation is in StrVecPage. The big idea here is:

we allocate a block of memory
strings are allocated from the end of the memory block
at the beginning of the memory block we build an index of strings. For each string we have: a u32 size and a u32 offset of the string within the memory block, counting from the beginning of the block

The layout of the memory block is:

StrVecPage struct
{ size u32; offset u32 } []
… not yet used space
strings

This is StrVecPage:

struct StrVecPage {
    struct StrVecPage* next;
    int pageSize;
    int nStrings;
    char* currEnd;
};

next is for the linked list of pages. Since pages can have various sizes, we need to record pageSize. nStrings is the number of strings in the page and currEnd points to the end of free space within the page.

Implementing operations

Appending a string

Appending a string at the end is the most common operation.
To append a string:

we calculate how much memory inside a page it'll need: str::Len(string) + 1 + sizeof(u32) + sizeof(u32). The +1 is for 0-termination, for compatibility with C APIs that take char*, and the 2 x u32 are for size and offset
if we have enough space in the last page, we add size and offset at the end of the index and append the string's bytes at the end, i.e. at currEnd - (str::Len(string) + 1)
if there is not enough space in the last page, we allocate a new page

We can calculate how much space we have left with:

int indexEntrySize = sizeof(u32) + sizeof(u32); // size + offset
char* indexEnd = (char*)pageStart + sizeof(StrVecPage) + nStrings * indexEntrySize;
int nBytesFree = (int)(currEnd - indexEnd);

Removing a string

Removing a string is easy, because it doesn't require moving memory inside StrVecPage. We do nStrings-- and move the index values of strings after the removed string. I don't bother freeing the string memory within a page. It's possible, but complicated enough that I decided to skip it. You can compact StrVec to remove all overhead. If you do not care about preserving the order of strings after removal, I have RemoveAtFast(), which uses a trick: instead of copying the memory of all index values after the removed string, I copy a single index from the end into the slot of the string being removed.

Replacing a string or inserting in the middle

Replacing a string or inserting a string in the middle is more complicated, because there might not be enough space in the page for the string. When there is enough space, it's as simple as an append. When there is not enough space, I re-use the compacting capability: I compact all existing pages into a single page with extra space for the string, plus some extra space as an optimization for multiple inserts.

Iteration

A random access requires traversing a linked list. I think it's still fast, because typically there aren't many pages and we only need to look at a single nStrings value per page. After compaction to a single page, random access is as fast as it could ever be. The C++ iterator is optimized for sequential access:

struct iterator {
    const StrVec* v;
    int idx;
    // perf: cache page, idxInPage from prev iteration
    int idxInPage;
    StrVecPage* page;
};

We cache the current state of iteration as page and idxInPage. To advance to the next string we advance idxInPage. If it exceeds nStrings, we advance to page->next.

Optimized search

Finding a string is as optimized as it could be without a hash table. Typically, to compare char* strings you need to call str::Eq(s, s2) for every string you compare it to. That is a function call, and it has to touch s2's memory. That is bad for performance because it blows the cache. In StrVec I calculate the length of the string to find once, and then traverse the size / offset index. Only when the size matches do I have to compare the strings. Most of the time we just look at offset / size in L1 cache, which is very fast.

Compacting

If you know that you'll not be adding more strings to StrVec, you can compact all pages into a single page with no overhead of empty space. It also speeds up random access, because we don't have multiple pages to traverse to find the item at a given index.

Representing a nullptr char*

Even though I have a string class, I mostly use char* in SumatraPDF code. In that world, an empty string and nullptr are 2 different things. To allow storing nullptr strings in StrVec (and not turning them into empty strings on the way out) I use a trick: a special u32 value, kNullOffset, represents nullptr.
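Roughly, the read path for that trick could look like this (the function name and the sentinel value are my guesses at the shape, following the structs above; this is a sketch, not the actual SumatraPDF code):

#include <cstdint>

using u32 = uint32_t;

struct StrVecPage {
    StrVecPage* next;
    int pageSize;
    int nStrings;
    char* currEnd;
};

// Sentinel offset marking a stored nullptr (the exact value is an assumption).
constexpr u32 kNullOffset = 0xFFFFFFFF;

// Hypothetical lookup of string i in a page: a stored nullptr comes
// back as nullptr instead of being flattened into an empty string.
char* StringAt(StrVecPage* page, int i) {
    u32* index = (u32*)(page + 1);  // {size, offset} pairs follow the header
    u32 offset = index[i * 2 + 1];
    if (offset == kNullOffset) {
        return nullptr;
    }
    return (char*)page + offset;    // offsets count from the start of the block
}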
StrVec is a string pool allocator

In C++ you have to track the lifetime of each object: you allocate with malloc() or new, and when you no longer need the object, you call free() or delete. However, the lifetimes of allocations are often tied together. For example, in SumatraPDF an opened document is represented by a class. Many allocations done to construct that object last exactly as long as the object. The idea of a pool allocator is that instead of tracking the lifetime of each allocation, you have a single allocator. You allocate objects with the same lifetime from that allocator and you free them with a single call. StrVec is a string pool allocator: all strings stored in StrVec have the same lifetime.

Testing

In general I don't advocate writing a lot of tests. However, low-level, tricky functionality like StrVec deserves decent test coverage to ensure basic functionality works and to exercise code for corner cases. I have 360 lines of tests for ~700 lines of implementation.

Potential tweaks and optimizations

When designing and implementing data structures, tradeoffs are aplenty.

Interleaving index and strings

I'm not sure if it would be faster, but instead of storing size and offset at the beginning of the page and strings at the end, we could store size / string sequentially from the beginning. It would remove the need for the u32 of offset, but would make random access slower.

Varint encoding of size and offset

Most strings are short, under 127 chars. Most offsets are under 16k. If we stored size and offset as variable-length integers, we would probably bring down the average per-string overhead from 8 bytes to ~4 bytes.

Implicit size

When strings are stored sequentially, size is implicit as the difference between the offset of the string and the offset of the next string. Not storing size would make insert and set operations more complicated and costly: we would have to compact and arrange strings in order every time.

Storing the index separately

We could store the index of size / offset in a separate vector and use pages only to allocate string data. This would simplify insert and set operations. With the current design, if we run out of space inside a page, we have to re-arrange memory. When the offset is stored outside of the page, it can refer to any page, so insert and set could be as simple as append.

The evolution of StrVec

The design described here is the second implementation of StrVec. The one before was simply a combination of str::Str (my std::string) for allocating all strings and Vec<u32> (my std::vector) for storing the offset index. It had some flaws: appending a string could re-allocate memory within str::Str. The caller couldn't store the returned char* pointer, because it could be invalidated. As a result the API was awkward and potentially confusing: I was returning the offset of the string, so the string was str::Str.Data() + offset. The new StrVec doesn't re-allocate on Append, only (potentially) on InsertAt and SetAt. The most common case is append-only, which allows the caller to store the returned char* pointers. Before implementing StrVec I used Vec<char*>. Vec is my version of std::vector, and Vec<char*> would just store pointers to individually allocated strings.

Cost vs. benefit

I'm a pragmatist: I want to achieve the most with the least amount of code, the least amount of time and effort. While it might seem that I'm re-implementing things willy-nilly, I'm actually very mindful of the cost of writing code. Writing software is a balance between effort and resulting quality.
One of the biggest reasons SumatraPDF is so popular is that it's fast and small. That's an important aspect of software quality. When you double-click on a PDF file in Explorer, SumatraPDF starts instantly. You can't say that about many similar programs, or about other software in general. Keeping SumatraPDF small and fast is an ongoing focus and it does take effort. StrVec.cpp is only 705 lines of code. It took me several days to complete: maybe 2 days to write the code, and then some time here and there to fix the bugs. That being said, I didn't start with this StrVec. For many years I used the obvious Vec<char*>. Then I implemented a somewhat optimized StrVec. And a few years after that I implemented this ultra-optimized version.

References

SumatraPDF is a small, fast, multi-format (PDF/eBook/Comic Book and more), open-source reader for Windows. The implementation described here: StrVec.cpp, StrVec.h, StrVec_ut.cpp. By the time you read this, the implementation could have been improved.

22 hours ago 1 votes
The parental dead end of consent morality

Consent morality is the idea that there are no higher values or virtues than allowing consenting adults to do whatever they please. As long as they're not hurting anyone, it's all good, and whoever might have a problem with that is by definition a bigot. This was the overriding morality I picked up as a child of the 90s. From TV, movies, music, and popular culture. Fly your freak flag! Whatever feels right is right! It doesn't seem like much has changed since then. What a moral dead end.

I first heard the term consent morality as part of Louise Perry's critique of the sexual revolution: that in the context of hook-up culture, situationships, and falling birthrates, we have to wrestle with the fact that the sexual revolution — and its insistence that, say, a sky-high body count mustn't be taboo — has led society to a screwy dating market in the internet age that few people are actually happy with.

But the application of consent morality that I find even more troubling is towards parenthood. As is widely acknowledged now, we're in a bit of a birthrate crisis all over the world. And I think consent morality can help explain part of it. I was reminded of this when I posted a cute video of a young girl so over-the-moon excited for her dad getting off work, to argue that you'd be crazy to trade that for some nebulous concept of "personal freedom". Predictably, consent morality immediately appeared in the comments: Some people just don't want children and that's TOTALLY OKAY and you're actually bad for suggesting they should!

No. It's the role of a well-functioning culture to guide people towards The Good Life. Not force, but guide. Nobody wants to be convinced by the morality police at the pointy end of a bayonet, but giving up on the whole idea of objective higher values and virtues is a nihilistic and cowardly alternative. Humans are deeply mimetic creatures. It's imperative that we celebrate what's good, true, and beautiful, such that these ideals become collective markers for morality. Such that they guide behavior.

I don't think we've done a good job of that with parenthood in the last thirty-plus years. In fact, I'd argue we've done just about everything to undermine the cultural appeal of the simple yet divine satisfaction of child rearing (and by extension maligned the square family unit with mom, dad, and a few kids). Partly out of a coordinated campaign against the family unit as some sort of trad (possibly fascist!) identity marker in a long-waged culture war, but perhaps just as much out of the banal denigration of how boring and limiting it must be to carry such simple burdens as being a father or a mother in modern society.

It's no wonder that if you incessantly focus on how expensive it is, how little sleep you get, how terrifying the responsibility is, and how much stress is involved with parenthood, it doesn't seem all that appealing!

This is where Jordan Peterson does his best work: in advocating for the deeper meaning of embracing burden and responsibility; in diagnosing that much of our modern malaise does not come from carrying too much, but from carrying too little. That a myopic focus on personal freedom — the nights out, the "me time", the money saved — is a spiritual mirage: you think you want the paradise of nothing ever being asked of you, but it turns out to be the hell of nobody ever needing you.

Whatever the cause, I think part of the cure is for our culture to reembrace the virtue and the value of parenthood without reservation.
To stop centering the margins and their pathologies. To start centering the overwhelming middle, where most people make for good parents and will come to see that role as the most meaningful part they've played in their time on this planet. But this requires giving up on consent morality as the only way to find our path to The Good Life. It involves taking a moral stance that some ways of living are better than others for the broad many. That parenthood is good, that we need more children, both for the literal survival of civilization and for the collective motivation to guard against the bad, the false, and the ugly.

There's more to life than what you feel like doing in the moment. The worst thing in the world is not to have others ask more of you. Giving up on the total freedom of the unmoored life is a small price to pay for finding the deeper meaning in a tethered relationship with continuing a bloodline that's been drawn for hundreds of thousands of years before it came to you.

You're never going to be "ready" before you take the leap. If you keep waiting, you'll wait until the window has closed, and all you see is regret. Summon a bit of bravery, don't overthink it, and do your part for the future of the world. It's 2.1 or bust, baby!

yesterday 2 votes