I started streaming on Twitch a while ago. My plan was to do it regularly, but you know how life gets in the way. I'm going to make a concerted effort to do more going forward, though. Streams are going to be focused on my side projects and open source work. I stream on Twitch currently (though I might move fully to YouTube, as Twitch is a bit too gaming-focused) and I keep recordings organised on YouTube. Follow me on whichever you prefer.
a year ago


More from Sometimes It Works :: simonhamp.me

On AI doomerism

This is a reflection on this post (of the same title) that I thoroughly enjoyed, by Flavio Copes.

"I am the genuine article, therefore I don't have to try. I just have to be. You, on the other hand, have to try any passing bandwagon, because what else have you got?" (Life Isn't All Ha Ha Hee Hee)

It seems that a great many people are feeling the pinch of a tech job market squeeze that has now lasted for over 2 years. A lot of folks seem quick to dump the blame for this squarely on "AI". And what can you expect when the tech CEOs are all doing their stupid dance, touting the latest fadtech as the breakthrough that justifies them laying off swathes of employees in short-sighted attempts to prop up shareholder value? A more self-destructive spiral I have not seen.

But the truth is masked by this conclusion. I don't believe for one second—and I don't think a lot of other popcorn-crunching devs believe it either—that "AI" is remotely close to the claims being made by these flaccid excuses for billionaires. Something else is afoot.

"First, there's a lot of people, and I mean a LOT, that went into programming and didn't have a genuine passion for the job. For them, it's just a way to make money."

Hard agree! I've seen some of these people. I don't believe it's a bad reason to get into any line of work, but in my experience they have made for some of the worst developers. Not because of a lack of technical capability (I truly believe almost anyone can learn to program well), but the money motivator only takes you so far. Like any job worth doing, it is hard. The rewards come to those who continue to put in the effort. If you've based your decision to become a programmer on seeing someone else claiming success on social media, I can only tell you that you've been duped. The vast majority do not have this story. And even some of the ones that appear to, don't.

"Very few jobs out there give those kind of perks: high pay and comfortable life."

It's all relative. My personal experience with programming hasn't been "high pay... comfortable life". I've worked damn hard for a very long time. I still don't feel sufficiently compensated, and I've just had my best year ever. And that's as a freelancer! I've never seen programming (or really any possible career path) as "easy". Currently "AI" makes my life a bit easier, but nothing about using it day in and day out to build things convinces me that it's going to do away with my job.

"To them, those people that care about the craft, AI is not a problem at all."

I totally agree.

So what's going on?

Flavio offers several answers:

- "post-COVID companies hired like crazy"
- "the end of the zero-interest rate period"
- "increased competition" from the potential of a globally shifted workforce
- "investor pressure"
- "AI"

Look, there's not just one answer. That's clear. And these are just part of the picture. I have another theory: a huge factor at play here is that, for most of these businesses, their revenue is based on ads. Therefore consumers (for the most part) are the driving force of a lot of the top tech companies' bottom lines. But the consumers are getting sick and tired of:

- Privacy violations.
- Crypto scams.
- Income disparity between the workers and the C-suite.
- Ads! Ads! Ads! everywhere, all of the time.
- The lies and manipulation of it all.

Many of these tech companies haven't innovated anything meaningful in a long time. They're jumping on AI right now because they're all hoping they'll discover the next big breakthrough and unlock megabucks.

And in the meantime they prop themselves up by touting eye-watering sales figures (and net losses) of their next "always-on, internet connected, touchscreen, AI-powered, super agent whosawidget". Or their "giant, face-hugging, reality-distortion spectacles that no person in their right mind would be caught dead wearing in public". But are any of these "advances" really moving the needle on solving the bigger problems we're facing as a species? No. So the ads and PR BS isn't working as well as it used to.

If your business model is to be a big promotional platform that relies on real human beings to view the content that your advertisers want to promote, then you have to bring real value. And if you're the advertiser whose whole business model depends on crazy ad spend to penetrate into and reach your target market of what you hope are real people, then you need a lot of cheap money to get there.

It's a poisoned lake. These big fish have done it to themselves. Sure, they're big, so they'll survive for a while, but it's still terminal.

The good news is that in their wake will come true innovators. I expect the next few years to be the beginning of a rebirth period in this constant boom-bust circus. And I genuinely believe that "AI" will enable a lot of that. It is already enabling small teams to move faster whilst staying lean. In time, these little fish will grow and once again find opportunity in hiring fresh talent. Get in with those companies now, while they're still small, by seeking them out and starting a conversation. It might not be a job right away, but it's building the relationship.

"You have to have a network... Local meetups, and local conferences... Go a few days before... hang out before the conference... Go to the after parties. That's where you actually meet and get to know people."

This is the way. You know, I've never once applied to FAANG. I've never felt worthy. In most cases, I don't meet the requirements as I don't have a degree. And honestly, the companies and their leaders do not align with my values at all, so I'm extremely disinclined to apply. But imagine if someone like me, a "self-taught" nobody dev, with no recognisable education, only having a connection or two, skips the queue ahead of you and lands a job at the next great innovative tech company. All for the sake of saying hello and getting to know people.

This is why, especially in the age of "AI", cultivating genuine relationships with real people is so important. Work on silly side projects (if you have the capacity). Read! Share what you're working on. Share what you enjoy. Retweet. Reply. Write about it—in the abstract if you have to. Make videos. Do a podcast. Go meet new people, in and out of your field. Don't do it for the algorithm, or the likes and subscribes. Do it for the genuine relationships. Because in a world of fakes and forgeries, the genuine article sticks out like a sore thumb!

5 months ago 17 votes
2024: Just the Start

2024 has overall been a great year for me personally. I believe it represents a turning point. 2023 was tough, but 2024 went some way to redressing the balance, and I am even more positive about 2025 because of it. Here are a few highlights:

- We went to Japan! My first time in Asia. The best trip I've been on so far.
- I managed to get NativePHP support for Windows released, amongst many other improvements. It's almost ready for production usage. With over 1,000 devs in the Discord, $1,000/month in sponsorships and tons of contributors helping, it really feels like it's flying.
- I started building ReelFlow with an old acquaintance and his business partner, and it's got some very promising signs of growth.
- I went to Laracon US in Dallas and it was epic!
- Laradir became Laradevs, blew past 4,000 registered developer profiles and has some awesome features on the way.
- I was interviewed by Eric from Laravel News.
- I started four (yes, four!) podcasts.
- I finally fixed my website so I could start publishing to it again.
- And to top it all off, I got invited to speak at Laracon EU 2025, which is coming up in February.

But it wasn't without its challenges! Up to about March/April, I had almost zero income. I was in a rut with NativePHP because I was so stressed about finding paid work. It was really frustrating to have all these goals and intentions for NativePHP (and of course a backlog of PRs and issues to get through!) but not really having the time to focus on it. I managed to pick some work up when we returned from Japan, but it wasn't quite enough, and then all of a sudden I had too much.

Committing to open source in your spare time is hard at the best of times, but especially so when you've got more than a full plate of full-time client work and you're trying to grow a SaaS (Laradevs) on the side... I can tell you now, I have made (and continue to make) poor decisions about the best use of my time, for which I can only say that I'm very grateful to and for my long-suffering wife. But I'm sorry that I've not spent enough time with you. Honestly, I feel like I've been teetering on the edge of burnout. So I decided that I needed to drop one client, and that freed up my time.

Thankfully, we managed to get away for a week later in the year. It was just to Tenerife, our neighbouring island, and we went up into the hills, so despite it being the tail end of summer (when you'd expect great weather and warm temperatures), we were cold and damp in amongst the clouds. But it was just what I needed. We chilled out, cooked food, sampled the local shops, bars and restaurants, spent good time with great friends and just really relaxed. What's more, I felt like I truly disconnected for once (even though my wife might disagree!). So I'm determined to do more of that in 2025.

Saying that, January is going to be tough! I've got a talk to finish so that I can go on stage in front of the biggest audience I've ever spoken to, to talk about something that I feel like I have almost zero knowledge about. A lot of the past couple of years - of basically being forced to go back into freelancing during the height of a hiring crisis - has felt like when I tried to start a business back in 2008/9, at the peak of the recession: hard work, and I'm way out of my depth. Although many things have changed over those years, the one factor that's really stood out as being different is me. I can see clearly how I've grown in so many ways. I feel like I'm doing some of the best work of my career and I'm enjoying it a lot.

The other thing I've learned is that I want to go all-in on NativePHP. In my opinion, the potential for this tech is huge. I'm not under any illusion that this is going to happen quickly. So one of my goals for 2025 is to build up one of those side projects enough to give me more time to spend on NativePHP, a stepping stone on my way to making it my full-time focus. It might be Laradevs, it might be something else. I don't know for sure yet, but I'm putting my chips on a few numbers.

Besides that, I'm looking forward to visiting Amsterdam for the first time. I'm cautiously optimistic about my talk (though I'm starting to get very nervous) and I'm hoping to be able to attend Laracon US again. I've also got a couple more personal challenge goals. I'm 40 in 2025, so:

- I want to get my body into shape. I have a little excess weight to lose, and while I'm probably the fittest I've been since the pandemic, I am still not fit enough.
- I want to build 40 apps with NativePHP!
- I want to meet (virtually or IRL) 1,000 Laravel developers I've not met before.

2024 has been a year of finding clarity and focus. 2025 will be the year of doubling down and building bigger. I hope you've had an opportunity to reflect and find some positives about 2024. I know it has been especially hard for many of you. I also hope that you can find a way to look into 2025 with enthusiasm and energy. I'd love to hear more about your ups and downs and what you're excited about for 2025, so please feel free to reach out to me.

6 months ago 18 votes
Slow Tech is Good Tech

This morning, I watched a video advertising a course aimed at developers. One of the first sentiments proclaimed is that of feeling left behind, that tech is moving too fast to keep up. I think we're probably one of only a few industries globally that have this problem. I don't see baristas learning how to use every kind of coffee machine, or carpenters buying and using all the different types of wood saw. I don't see how they could. I'm sure those industries don't move anywhere near as fast as tech, so it may be a bit of an unfair comparison, but it has led me to a really interesting axiom: after 20 years in tech, I've learned that slow tech is good tech. I'm happier when I'm not trying to be on the bleeding edge.

Avoid the shiny

I realised over time that I have been quite a lot more dismissive of "the new shiny" than a lot of other devs. In the back of my mind I was always kinda worried that I was being left behind because I wasn't learning X or getting experience with Y. But I've learned over the years that it truly doesn't matter. Why? Most importantly: users won't notice. Second: many companies don't have the capacity or desire to be on the bleeding edge. Finally: my brain still works; I can learn new things any time, when I really need to. And I learn faster now anyway.

Avoid comparison

It's led me to believe that "falling behind" is a made-up concept designed to sell you stuff you probably don't need. 🌶️ Or maybe it's just human nature, similar to "keeping up with the Joneses". We're competitive; we compare ourselves to others very quickly. It's a bad habit though. I'm here to tell you that it's not just ok to fall (a little bit) behind, it can even be A Good Thing™. I've been falling behind for my entire career. 😂 I'm probably more behind now than I've ever been. But I'm also doing better now - in many ways - than I ever have.

Some examples

Laravel

I didn't use Laravel until a couple of years after its first release. When I picked up v4.2 it was using Composer and had started the transition to Symfony components. It had DI and all sorts of other goodies that earlier versions lacked.

Build tools

I never used Bower/Gulp/Grunt/Yarn; I settled on NPM after the war was over. I haven't switched to Bun (and probably won't). I barely touched Webpack thanks to Laravel Mix. I use Vite, but also hardly use it directly, thanks to Laravel's first-party Vite plugin. I waited so long on all of this stuff that when I actually needed it, the choices were easy. Sure, I wasn't there at the front lines; I didn't invent React. I didn't build Vite. I didn't write the Laravel plugin. But then, I didn't need to. I wasn't building the frontend to the biggest online social network ever, or a toolchain for other developers.

Typescript and JS frameworks

I looked at Typescript once. You can just write plain JavaScript and get the same results. I slept on Coffeescript, Backbone, Angular, React, Vue, Svelte etc. Though I've used some of them briefly during my time, I never went deep on any of them. I stuck with jQuery for a long time because it worked and I knew how to build things rapidly with it. Importantly, I know the value of these other tools and when to use them. That hasn't stopped me being able to work on teams that use them heavily. But I've learned that it probably won't need to be me who's working day-in, day-out with them. And I'm ok with that.

I use Alpine and Livewire now, as they were built to work with the tools I know and love using, each with a small footprint and easy to learn. They're more than enough for my needs—and many of my clients'! And you know what? Some of my clients still use jQuery. 🤷‍♂️

The Web Platform

I love seeing the latest features in HTML, CSS & JS. I'll play with them, but very rarely deploy them. It usually takes some time before new features are available on ALL browsers and devices. And even then, billions of potential users are still running older versions. Yes, it still tickles my curiosity learning new things. It's intellectually satisfying. And, yes, proficiency in multiple tools can make you a more valuable asset. 💰 I'm not saying "you should not..."; I'm just saying "you don't need to".

What I keep up with

The only things that have been really important for me to stay up to date on are the core technologies I use: PHP & Laravel. I make it a point to keep my apps up to date with close to the latest releases - mainly for security and performance reasons, but also so I can use the latest and greatest features. 😛 But I rarely upgrade apps in production to the very latest versions as soon as they're available... I always leave it a few weeks. That way, my work is not dictated by someone else's release schedule!

That's not to say I don't keep abreast of what's coming in future releases of those tools; they're core to the service I provide and the tools I build, so I would be remiss not to. Being aware of what's coming is the priority there. But I don't have to have hands-on experience with everything. Testing early ("beta") releases against existing code is a useful exercise from time to time, but not always required.

So basically I only need to regularly monitor two technologies. Easily done with a reasonably well-curated Twitter or a couple of RSS feeds. Sure, I keep my finger on the pulse of all the other tech I use, glancing occasionally out of the corner of my eye and paying attention when the sources I am more focused on mention them. But by keeping it simple, I can focus on delivering the most value rather than spinning my wheels on all of the superfluous things adjacent to them. Everything else can wait. Don't try to learn and keep up to date with everything; pick your battles!

a year ago 18 votes
Simon Shares

I've started a little podcast! If you give it a listen, I'd love to know what you think.

a year ago 21 votes
Why You (Probably) Shouldn't Start With an SPA

I came across this interesting article by @gregnavis the other day. I guess it's from a few years ago now, judging by some of the other posts on his blog. But it still holds up. It's maybe even more relevant now. The article is entitled The Architecture No One Needs and it makes a simple and clear case, arguing that SPAs are more expensive than a standard multi-page app (which may or may not be a monolith).

I'm going to use SPA throughout this post to mean the whole umbrella of ways you might be building a front-end application that is not server-side rendered. I think this is in line with Greg's intent too. I don't want to split hairs over whether a particular framework can be used to build front-end apps that aren't strictly SPAs.

Since I read it, I haven't stopped thinking about this article. I found myself agreeing with all of Greg's stated points, and it made me realise I actually have a strong opinion about this topic. I believe Greg is right, and as time rolls on I think I'm becoming more bullish in my stance on SPAs too.

Some history, some context

My foray into the dirty, hubbub streets of front-end frameworks came about because of Laravel Nova and Statamic. They both use Vue, so I learned Vue. Of course, I looked around. But React made me retch. Angular almost made me want to buy a katana just to perform seppuku. (Of course I'm being hyperbolic.) I hear things about all of these and more thanks to Twitter. I can say some of it is good, but most of it continues to push me away. I stuck with Vue. I actually like Vue a lot, even if v3 has caused a bit of a headache—it's actually better for it in my opinion, and yes, I do like the composition API.

Overall though, I'm definitely heading more towards preferring not to have to build my front-end using node/deno/bun or whatever tool becomes popular today. That said, you just try and pry Tailwind from my frigid, rigid fingers! I'm quite firmly in the Livewire camp now. In another life, I may even like HTMX?

I've built a few SPAs, but I can probably count them on one hand. But not only will I probably not build another one, I think you shouldn't either. I'm strongly encouraging my clients not to. I do think SPAs have a place, but it is almost certainly not in your project. Yes, I know this argument has been made before, and probably far more eloquently than I'm going to, but I just felt like I needed to get this out of my head in my own way.

When does an SPA make sense?

Let's get this out of the way first before I dunk on SPAs some more. They do have a place. I believe that the main advantage of an SPA is/was/has always been the decoupling of the front-end from the back-end. As time goes on and engineers specialise in areas that interest them the most, roles become more well-defined. This is why we have 'front-end' and 'back-end' and 'full stack' engineers. Although there is principally a lot of overlap (this is all programming at the end of the day and many concepts are similar), the domain—the environment that the engineer is most familiar with—is what determines their preference.

Some engineers will prefer back-end because they don't want to think about or deal with a certain class of bugs or issues that arise from the rapidly-evolving world of front-end development.
They may be uncomfortable using 3 or 4 different languages at the same time to get their work done, or they groan as soon as there's another major version of some framework which is going to require a load of refactoring that doesn't add immediate value to your product. Consider: every user of your app could be using a different version of a particular browser, which uses one kind of rendering engine, and a specific level of conformity to various web standards, e.g. ECMAScript (the official standard that underpins what most of us think of as JavaScript) or CSS. Keeping on top of these variations and differences across desktop and mobile is enough to make anyone's head spin.

And other engineers will prefer the front-end for its stateful nature, or because they're more comfortable with JavaScript/TypeScript, they grok CSS, and they love the intersection of code and design. Or maybe they just dislike dealing with databases, concurrency, queues, messaging and APIs a lot more than they dislike wild browser evolution.

In any case, whatever size your team is, there will be these preferences. For example, I consider myself a full-stack developer, but honestly I like to avoid front-end work as much as possible, so I will generally take the easy route there.

Developing an SPA could allow you/your business to split the work of front-end and back-end into separate teams, which may help each team focus so they can do what they do best and be the happiest they can be. That is the most important metric that will make an appreciable difference to your bottom line long term. Being intentional about this will see you hit Conway's Law head-on and potentially tackle that beast in the best way, as long as you build up the necessary lines of communication. It also allows you to scale the two parts of the system separately—both in terms of team scaling and resource scaling—which, if you ever need that flexibility, could end up saving you from a certain group of headaches. And I'm gonna be honest, building distributed systems like this is a cool problem to solve and will be a point of growth for many of your engineers.

So, why are SPAs bad again?

Yeh, so far this all sounds great, right? Well, just letting ol' Conway right into your living room isn't exactly the best idea. Aside from all the points that Greg makes in his article (if you haven't read it yet, go read it), I want to present three extra reasons why I think an SPA is a bad idea.

Your front-end and back-end get decoupled!

Your back-end and front-end are always coupled. So trying to split them in anything but the most extreme circumstances is an exercise in futility. I think this is probably the worst part of this whole story. If your back-end team want to move in one direction, they've got to align with the front-end team. If timings and priorities don't work out, it's going to force someone to either put a hold on some work that really needs sorting out or do some grunt work just to patch over a hole that's about to appear.

This is communication overhead. It adds risk. It adds complexity. It adds meetings into engineers' calendars. It adds friction, and stress, and distraction. It flies in the face of that number one principle: let your teams focus and be happy. This literally costs you money one way or another, cost that you could avoid. Deployments get unavoidably riskier in ways that are super difficult to test, because testing distributed systems is really hard. Again, this might all be fine in the most extreme cases, where you need the decoupling.
Then this extra expense, complexity and churn is just a necessary evil that you have to learn to swallow and live with.

But I've got x engineers, y1 are front-end, y2 are back-end. What do I do? I would strongly recommend that one of your engineering groups needs to roll up their sleeves and get on with learning the other group's code, tooling and responsibilities. This will have multiple benefits: career progression and learning opportunities, increased bus factor, fewer meetings and more collaboration. Sure, there will be challenges too, but they won't be as big or as painful as the other challenges you'd face with an SPA.

It may hurt customisability

As I mentioned earlier, the reason I got into Vue at all was because other tech I was using required me to. In both Statamic and Laravel Nova, the choice of Vue—well, not specifically Vue, but rather a reactive front-end framework—made sense because at the time it really was the best way to build flexible, reactive front-ends. And both of those tools needed that power and have become fantastic tools because of it. But there is one pain point it's created that's quite hard to escape: the customisation story for each of these is harder because of it.

How so? Basically, because each tool needs to build the assets to ship their product. And once they're built, it's hard for third parties to build on what's already there.

How is it harder? Let's say I'm creating a Statamic add-on that allows CMS owners to post to social media from their control panel. As Statamic uses Vue and already has a bunch of components I can leverage, I am going to use some of them. But I'm also going to add some of my own functionality that doesn't already exist within Statamic. Now what happens? Well, I build the JavaScript... but wait. I can't change the bundled JS that's part of Statamic core. I have to build my own JS and load it at the right time, something I'm not in control of. Thankfully the Statamic team (building on tooling from the Vue & JS community) have worked hard to make this relatively easy, but my tool choice is now limited by what they support - if they're using Vue 2, I have to use Vue 2; if I don't like Vite or Webpack, tough luck.

And on top of that, the builds have to happen at the client's end, which means we're now offloading some of the responsibility of making this whole thing work to people who don't need or want to know anything about this stuff. They just want to install your thing and get on with their jobs and lives.

Why is this such a pain though? It used to be (in other platforms) that I could just load some extra JS file into the admin interface and do what I want. Honestly, we probably should never have been doing that either. Hands up, how many of you have seen a WordPress installation that tries to load 2 or more different versions of jQuery? 🙋🏼‍♂️ So these build tools go some way to alleviating some problems, but in the process they have introduced so many layers of protection and abstraction that they present a brain-melting Japanese puzzle box to unlock.

And all this JavaScript flying around is really unsafe, because JavaScript is completely malleable on the client side. That's meant library creators have had to go to some unusual lengths to protect the state of the application and encapsulate the code in attempts to make it safer and more portable.
I won't pretend to understand all of the requirements, pre-requisites and implications for why built JavaScript assets are packaged up in a complex soup of function calls and obfuscated code, but suffice to say this makes building on top of pre-existing code that much harder. The web standards track is working to make this easier: we have Web Components and modules starting to come through, which should alleviate some of this. But if you're not building with those standards in play—I understand, it might not be possible because of browser support etc.—and you want to expose your users to third-party plugins/add-ons, then you've got to figure out how to make it easy for other developers. Some of that's going to come down to the specific implementation; the rest is going to be documentation. No matter how you cut it, it's going to be harder to get right than if you had server-side rendered views that you allow your third-party developers to load at runtime.

Performance will suffer

This isn't really an extra reason, as Greg did touch on this a little already, but I wanted to go harder on performance. You should never choose to build an SPA because of some supposed performance benefit. That is the wrong hill to try and defend, for many reasons, but primarily because you've got the whole of the web stack—on horses, with bazookas—nipping at your heels. Sure, it may take a little while for web standards to get ratified and then rolled out, but the reality is that it's only a matter of time. We now have wide browser support for QUIC / HTTP/3 (which brings faster downloads and reduced server load) and things like 103 Early Hints response headers (which let us tell browsers what they should prioritise pre-loading), making the standard, non-SPA web even more performant. (And yes, some of this benefits SPAs too!)

Sure, you can argue some of this advancement may have been driven by SPAs and their apparent benefits. But there's some inevitability to all of this (both the appearance of SPAs and the advancement of HTTP), which makes the whole argument moot in my opinion. As adoption and overall performance of the web platform increases, SPAs will even start to feel slow in comparison. Some feel slow already! That's because so much of the heavy lifting is left to the userland threads of the in-browser JavaScript engine instead of the lower-level compiled languages used to build the core browser engine itself (C++, Rust, Swift etc). That translates to a poorer experience in your app, and a penalty for your users. While it's not impossible for JavaScript to be as fast or faster than the actual browser it's running in, it's such a long way from getting there that it's a no-brainer at this stage to let the browser do what the browser does best instead of trying to replicate all of that in JS.

So don't! Use JavaScript the way it was intended: as a sugar-coating to enhance the pages, not to build entire pages. I mean, you wouldn't eat a donut made entirely out of sugar, would you? Again, in the extreme cases, maybe you would. Maybe for this donut-sized/-shaped sugar torus, you have an appropriately-sized dough-only counterpart, both of which you consume in close proximity... I donut-know where this analogy is going.

So what's the alternative?

You, dear reader, are not in the most extreme case. Probably not even close! And you may never be. So, if you haven't started building an SPA, don't!
Keep your code together in a single application where the front-end is rendered by the back-end, and then you can test and deploy it as a single unit. And yes, you can even do that without Docker. 😱 Create it as a fully server-rendered, multi-page application with page reloads and everything. Go on! I double dare you. If you really want/need the reactivity, try something like Livewire, Hotwire or HTMX. You can do this all the way up to many millions of users per day (which you are a long way from) and it will be fine. Trust me, your front-end will never meaningfully need to move faster than your back-end, and vice-versa. If you're already running an SPA and are contemplating bringing the two parts of the donut back together, do it! Bite your lip, close your eyes, and just do it.

a year ago 13 votes

More in programming

Logical Quantifiers in Software

I realize that for all I've talked about Logic for Programmers in this newsletter, I never once explained basic logical quantifiers. They're both simple and incredibly useful, so let's do that this week!

Sets and quantifiers

A set is a collection of unordered, unique elements. {1, 2, 3, …} is a set, as are "every programming language", "every programming language's Wikipedia page", and "every function ever defined in any programming language's standard library". You can put whatever you want in a set, with some very specific limitations to avoid certain paradoxes.[2]

Once we have a set, we can ask "is something true for all elements of the set?" and "is something true for at least one element of the set?" IE, is it true that every programming language has a set collection type in the core language? We would write it like this:

# all of them
all l in ProgrammingLanguages: HasSetType(l)

# at least one
some l in ProgrammingLanguages: HasSetType(l)

This is the notation I use in the book because it's easy to read, type, and search for. Mathematicians historically had a few different formats; the one I grew up with was ∀x ∈ set: P(x) to mean all x in set, and ∃ to mean some. I use these when writing for just myself, but find them confusing to programmers when communicating. "All" and "some" are respectively referred to as "universal" and "existential" quantifiers.

Some cool properties

We can simplify expressions with quantifiers, in the same way that we can simplify !(x && y) to !x || !y.

First of all, quantifiers are commutative with themselves. some x: some y: P(x,y) is the same as some y: some x: P(x, y). For this reason we can write some x, y: P(x,y) as shorthand. We can even do this when quantifying over different sets, writing some x, x' in X, y in Y instead of some x, x' in X: some y in Y. We can not do this with "alternating quantifiers": all p in Person: some m in Person: Mother(m, p) says that every person has a mother. some m in Person: all p in Person: Mother(m, p) says that someone is every person's mother.

Second, existentials distribute over || while universals distribute over &&. "There is some url which returns a 403 or 404" is the same as "there is some url which returns a 403 or some url that returns a 404", and "all PRs pass the linter and the test suites" is the same as "all PRs pass the linter and all PRs pass the test suites".

Finally, some and all are duals: some x: P(x) == !(all x: !P(x)), and vice-versa. Intuitively: if some file is malicious, it's not true that all files are benign.

All these rules together mean we can manipulate quantifiers almost as easily as we can manipulate regular booleans, putting them in whatever form is easiest to use in programming. Speaking of which, how do we use this in programming?

How we use this in programming

First of all, people clearly have a need for directly using quantifiers in code. If we have something of the form:

for x in list:
    if P(x):
        return true
return false

That's just some x in list: P(x). And this is a prevalent pattern, as you can see by using GitHub code search. It finds over 500k examples of this pattern in Python alone! That can be simplified using the language's built-in quantifiers: the Python would be any(P(x) for x in list). (Note this is not quantifying over sets but iterables. But the idea translates cleanly enough.)

More generally, quantifiers are a key way we express higher-level properties of software. What does it mean for a list to be sorted in ascending order? That all i, j in 0..<len(l): if i < j then l[i] <= l[j]. When should a ratchet test fail? When some f in functions - exceptions: Uses(f, bad_function). Should the image classifier work upside down? all i in images: classify(i) == classify(rotate(i, 180)). These are the properties we verify with tests and types and MISU and whatnot;[1] it helps to be able to make them explicit!

One cool use case that'll be in the book's next version: database invariants are universal statements over the set of all records, like all a in accounts: a.balance > 0. That's enforceable with a CHECK constraint. But what about something like all i, i' in intervals: NoOverlap(i, i')? That isn't covered by CHECK, since it spans two rows. Quantifier duality to the rescue! The invariant is equivalent to !(some i, i' in intervals: Overlap(i, i')), so it is preserved if the query SELECT COUNT(*) FROM intervals CROSS JOIN intervals … returns 0 rows. This means we can test it via a database trigger.[3]

There are a lot more use cases for quantifiers, but this is enough to introduce the ideas! Next week's the one-year anniversary of the book entering early access, so I'll be writing a bit about that experience and how the book changed. It's crazy how crude v0.1 was compared to the current version.

[1] MISU ("make illegal states unrepresentable") means using data representations that rule out invalid values. For example, if you have a location -> Optional(item) lookup and want to make sure that each item is in exactly one location, consider instead changing the map to item -> location. This is a means of implementing the property all i in item, l, l' in location: if ItemIn(i, l) && l != l' then !ItemIn(i, l').

[2] Specifically, a set can't be an element of itself, which rules out constructing things like "the set of all sets" or "the set of sets that don't contain themselves".

[3] Though note that when you're inserting or updating an interval, you already have that row's fields in the trigger's NEW keyword. So you can just query !(some i in intervals: Overlap(new, i)), which is more efficient.
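As a quick runnable sketch of these ideas (my own illustration, not from the newsletter), Python's built-in any() and all() play the role of the some/all quantifiers:

# Illustrative sketch: any()/all() as existential/universal quantifiers.

def is_sorted(l):
    # all i, j in 0..<len(l): if i < j then l[i] <= l[j]
    n = len(l)
    return all(l[i] <= l[j] for i in range(n) for j in range(n) if i < j)

assert is_sorted([1, 2, 2, 5])
assert not is_sorted([3, 1, 2])

# Duality: some x: P(x) == !(all x: !P(x))
xs = [1, 4, 9]
p = lambda x: x > 3
assert any(p(x) for x in xs) == (not all(not p(x) for x in xs))

# The interval invariant, tested the same way: no two distinct
# (start, end) intervals overlap.
intervals = [(0, 5), (5, 10), (20, 30)]
def overlap(a, b):
    return a[0] < b[1] and b[0] < a[1]
assert not any(overlap(a, b)
               for i, a in enumerate(intervals)
               for j, b in enumerate(intervals) if i != j)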

15 hours ago 2 votes
Setting Element Ordering With HTML Rewriter Using CSS

After shipping my work transforming HTML with Netlify's edge functions I realized I have a little bug: the order of the icons specified in the URL doesn't match the order in which they are displayed on screen. Why's this happening?

I have a bunch of links in my HTML document, like this:

<icon-list>
  <a href="/1/">…</a>
  <a href="/2/">…</a>
  <a href="/3/">…</a>
  <!-- 2000+ more -->
</icon-list>

I use html-rewriter in my edge function to strip out the HTML for icons not specified in the URL. So for a request to:

/lookup?id=1&id=2

My HTML will be transformed like so:

<icon-list>
  <!-- Parser keeps these two -->
  <a href="/1/">…</a>
  <a href="/2/">…</a>
  <!-- But removes this one -->
  <a href="/3/">…</a>
</icon-list>

Resulting in less HTML over the wire to the client. But what about the order of the IDs in the URL? What if the request is to:

/lookup?id=2&id=1

Instead of:

/lookup?id=1&id=2

In the source HTML document containing all the icons, they're marked up in reverse chronological order. But the request for this page may specify a different order for icons in the URL. So how do I rewrite the HTML to match the URL's ordering?

The problem is that html-rewriter doesn't give me a fully-parsed DOM to work with. I can't do things like "move this node to the top" or "move this node to position x". With html-rewriter, you only "see" each element as it streams past. Once it passes by, your chance at modifying it is gone. (It seems that's just the way these edge function tools are designed to work; it keeps them lean and performant, and I can't shoot myself in the foot.)

So how do I change the icons' display order to match what's in the URL if I can't modify the order of the elements in the HTML? CSS to the rescue! Because my markup is just a bunch of <a> tags inside a custom element and I'm using CSS grid for layout, I can use the order property in CSS! All the IDs are in the URL, and their position as parameters has meaning, so I assign their ordering to each element as it passes by html-rewriter. Here's some pseudo code:

// Get all the IDs in the URL
const ids = url.searchParams.getAll("id");

// Select all the icons in the HTML
rewriter.on("icon-list a", {
  element: (element) => {
    // Get the ID
    const id = element.getAttribute("id");

    // If it's in our list, set its order
    // position from the URL
    if (ids.includes(id)) {
      const order = ids.indexOf(id);
      element.setAttribute("style", `order: ${order}`);
    // Otherwise, remove it
    } else {
      element.remove();
    }
  },
});

Boom! I didn't have to change the order in the source HTML document, but I can still get the display order to match what's in the URL. I love shifty little workarounds like this!
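The heart of the workaround, independent of the edge-function runtime, is just "an element's order is its ID's position in the URL". A tiny sketch of that mapping on its own (my illustration, in Python purely for demonstration; the URL is hypothetical, the post's actual code is the JavaScript above):

# Sketch: map each id to its position in the query string.
# parse_qs keeps repeated params in their URL order.
from urllib.parse import urlparse, parse_qs

url = "https://example.com/lookup?id=2&id=1"
ids = parse_qs(urlparse(url).query).get("id", [])   # ["2", "1"]
order = {id_: pos for pos, id_ in enumerate(ids)}   # {"2": 0, "1": 1}
# Each kept element would get style="order: {pos}";
# anything missing from `order` would be removed, as in the handler above.
print(order)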

15 hours ago 2 votes
The missing part of Espressif’s reset circuit

In the previous article, we peeked at the reset circuit of ESP-Prog with an oscilloscope, and reproduced it with basic components. We observed that it did not behave quite as expected. In this article, we'll look into the missing pieces.

An incomplete circuit

For a hint, we'll first look a bit more closely at the …

15 hours ago 2 votes
clamp / median / range

Here are a few tangentially-related ideas vaguely near the theme of comparison operators.

comparison style

Some languages such as BCPL, Icon, Python have chained comparison operators, like

if min <= x <= max: ...

In languages without chained comparison, I like to write comparisons as if they were chained, like,

if min <= x && x <= max {
    // ...
}

A rule of thumb is to prefer less than (or equal) operators and avoid greater than. In a sequence of comparisons, order values from (expected) least to greatest.

clamp style

The clamp() function ensures a value is between some min and max,

def clamp(min, x, max):
    if x < min: return min
    if max < x: return max
    return x

I like to order its arguments matching the expected order of the values, following my rule of thumb for comparisons. (I used that flavour of clamp() in my article about GCRA.) But I seem to be unusual in this preference, based on a few examples I have seen recently.

clamp is median

Last month, Fabian Giesen pointed out a way to resolve this difference of opinion: a function that returns the median of three values is equivalent to a clamp() function that doesn't care about the order of its arguments. This version is written so that it returns NaN if any of its arguments is NaN. (When an argument is NaN, both of its comparisons will be false.)

fn med3(a: f64, b: f64, c: f64) -> f64 {
    match (a <= b, b <= c, c <= a) {
        (false, false, false) => f64::NAN,
        (false, false, true) => b,  // a > b > c
        (false, true, false) => a,  // c > a > b
        (false, true, true) => c,   // b <= c <= a
        (true, false, false) => c,  // b > c > a
        (true, false, true) => a,   // c <= a <= b
        (true, true, false) => b,   // a <= b <= c
        (true, true, true) => b,    // a == b == c
    }
}

When two of its arguments are constant, med3() should compile to the same code as a simple clamp(); but med3()'s misuse-resistance comes at a small cost when the arguments are not known at compile time.

clamp in range

If your language has proper range types, there is a nicer way to make clamp() resistant to misuse:

fn clamp(x: f64, r: RangeInclusive<f64>) -> f64 {
    let (&min, &max) = (r.start(), r.end());
    if x < min { return min }
    if max < x { return max }
    return x;
}

let x = clamp(x, MIN..=MAX);

range style

For a long time I have been fond of the idea of a simple counting for loop that matches the syntax of chained comparisons, like

for min <= x <= max: ...

By itself this is silly: too cute and too ad-hoc. I'm also dissatisfied with the range or slice syntax in basically every programming language I've seen. I thought it might be nice if the cute comparison and iteration syntaxes were aspects of a more generally useful range syntax, but I couldn't make it work. Until recently, when I realised I could make use of prefix or mixfix syntax, instead of confining myself to infix. So now my fantasy pet range syntax looks like

>= min < max   // half-open
>= min <= max  // inclusive

And you might use it in a pattern match

if x is >= min < max {
    // ...
}

Or as an iterator

for x in >= min < max {
    // ...
}

Or to take a slice

xs[>= min < max]

style clash?

It's kind of ironic that these range examples don't follow the left-to-right, lesser-to-greater rule of thumb that this post started off with. (x is not lexically between min and max!) But that rule of thumb is really intended for languages such as C that don't have ranges. Careful stylistic conventions can help to avoid mistakes in nontrivial conditional expressions. It's much better if language and library features reduce the need for nontrivial conditions and catch mistakes automatically.
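As a quick sanity check of the clamp/median equivalence, here is a small sketch of my own in Python (not from the post; unlike the Rust med3 above, it makes no attempt to handle NaN):

# Sketch: median-of-three behaves like a clamp that ignores argument order.
def clamp(lo, x, hi):
    if x < lo: return lo
    if hi < x: return hi
    return x

def med3(a, b, c):
    return sorted([a, b, c])[1]  # the middle of the three values

assert clamp(0, 5, 10) == med3(0, 5, 10) == 5
assert clamp(0, -3, 10) == med3(10, -3, 0) == 0    # med3 ignores argument order
assert clamp(0, 42, 10) == med3(42, 0, 10) == 10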

yesterday 2 votes
C++ engineering decision in SumatraPDF code

SumatraPDF is a medium size (120k+ loc, not counting dependencies) Windows GUI (win32) C++ code base started by me and written by mostly 2 people. The goals of SumatraPDF are to be:

- fast
- small
- packed with features and yet with thoughtfully minimal UI

It's not just a matter of pride in craftsmanship of writing code. I believe being fast and small are a big reason for SumatraPDF's success. People notice when an app starts in an instant because that's sadly not the norm in modern software. The engineering goals of SumatraPDF are:

- reliable (no crashes)
- fast compilation to enable fast iteration

SumatraPDF has been successful achieving those objectives, so I'm writing up my C++ implementation decisions. I know those decisions are controversial. Maybe not Terry Davis level of controversial, but still. You probably won't adopt them. Even if you wanted to, you probably couldn't. There's no way code like this would pass Google review. Not because it's bad but because it's different. Diverging from mainstream this much is only feasible if you have total control: it's your company or your own open-source project. If my ideas were just like everyone else's ideas, there would be little point in writing about them, would there?

Use UTF8 strings internally

My app only runs on Windows, and the string native to Windows is WCHAR*, where each character consumes 2 bytes. Despite that, I mostly use char* assumed to be utf8-encoded. I only decided on that after lots of code was written, so it was a refactoring odyssey that is still ongoing. My initial impetus was to be able to compile non-GUI parts under Linux and Mac. I abandoned that goal but I think it's a good idea anyway. WCHAR* strings are 2x larger than char*. That's more memory used, which also makes the app slower. Binaries are bigger if string constants are WCHAR*. The implementation rule is simple: I only convert to WCHAR* when calling the Windows API. When the Windows API returns WCHAR*, I convert it to utf-8.

No exceptions

Do you want to hear a joke? "Zero-cost exceptions". Throwing and catching exceptions generate bloated code. Exceptions are a non-local control flow that makes it hard to reason about a program. Every memory allocation becomes a potential leak. But RAII, you protest. RAII is a "solution" to a problem created by exceptions. How about I don't create the problem in the first place?

Hard core #include discipline

I wrote about it in depth.

My objects are not shy

I don't bother with private and protected. struct is just class with guts exposed by default, so I use that. While intellectually I understand the reasoning behind hiding implementation details, in practice it becomes busywork of typing noise and then even more typing when you change your mind about visibility. I'm the only person working on the code so I don't need to force those of lesser intellect to write the code properly.

My objects are shy

At the same time I minimize what goes into a class, especially methods. The smaller the class, the faster the build. A common problem is adding too many methods to a class. You have a StrVec class for an array of strings. A lesser programmer is tempted to add a Join(const char* sep) method to StrVec. A wise programmer makes it a stand-alone function: Join(const StrVec& v, const char* sep). This is enabled by making everything in a class public. If you limit visibility, you then have to use friend to allow the Join() function access to what it needs. Another example of a "solution" to self-inflicted problems.
Minimize #ifdef

#ifdef is problematic because it creates code paths that I don't always build. I provide arm64, intel 32-bit and 64-bit builds but typically only develop with the 64-bit intel build. Every #ifdef that branches on architecture introduces the potential for a compilation error which I'll only know about when my daily CI build fails. Consider 2 possible implementations of IsProcess64Bit():

Bad:

bool IsProcess64Bit() {
#ifdef _WIN64
    return true;
#else
    return false;
#endif
}

Good:

bool IsProcess64Bit() {
    return sizeof(uintptr_t) == 8;
}

The bad version has a bug: it was correct when I was only doing intel builds but became buggy when I added arm64 builds. This conflicts with the goal of smallest possible size but it's worth it.

Stress testing

SumatraPDF supports a lot of very complex document and image formats. Complex formats require complex code that is likely to have bugs. I also have lots of files in those formats. I've added stress-testing functionality where I point SumatraPDF to a folder with files and tell it to render all of them. For greater coverage, I also simulate some of the possible UI actions users can take, like searching, switching view modes etc.

Crash reporting

I wrote about it in depth.

Heavy use of CrashIf()

C/C++ programmers are familiar with the assert() macro. CrashIf() is my version of that, tailored to my needs. The purpose of assert / CrashIf is to add checks to detect incorrect use of APIs or invalid states in the program. For example, if the code tries to access an element of an array at an invalid index (negative or larger than the size of the array), it indicates a bug in the program. I want to be notified about such bugs both when I test SumatraPDF and when it runs on users' computers. As the name implies, it'll crash (by de-referencing a null pointer) and therefore generate a crash report. It's enabled in debug and pre-release builds but not in release builds. Release builds have many, many users so I worry about too many crash reports.

premake to generate Visual Studio solution

Visual Studio uses XML files as a list of files in the project and build format. The format is impossible to work with in a text editor, so you have no choice but to use Visual Studio to edit the project / solution. To add a new file: find the right UI element, click here, click there, pick a file using the file picker, click again. To change a compilation setting of a project or a file? Find the right UI element, click here, click there, type this, confirm that. You accidentally changed compilation settings of 1 file out of a hundred? Good luck figuring out which one: go over all files in the UI one by one. In other words: managing project files using the Visual Studio UI is a nightmare.

Premake is a solution. It's a meta-build system. You define your build using Lua scripts, which look like text configuration files. Premake can then generate Visual Studio projects, XCode projects, makefiles etc. That's the meta part. It was truly a life saver on a project with lots of files (SumatraPDF's own are over 300; many times more for third-party libraries).

Using /analyze and cppcheck

cppcheck and the /analyze flag in cl.exe are tools to find bugs in C++ code via static analysis. They are like a C++ compiler, but instead of generating code they analyze control flow in a program to find potential problems. It's a cheap way to find some bugs, so there's no excuse not to run them from time to time on your code.
Using asan builds

Address Sanitizer (asan) is a compiler flag /fsanitize=address that instruments the code with checks for common memory-related bugs like using an object after freeing it, over-writing values on the stack, freeing an object twice, or writing past allocated memory. The downside of this instrumentation is that the code is much slower due to the overhead of the checks. I've created a project for a release build with asan and run it occasionally, especially in stress tests.

Write for the debugger

Programmers love to code golf, i.e. put as much code on one line as possible. As if lines of code were expensive. Many would write:

Bad:

// ...
return (char*)(start + offset);

I write:

Good:

// ...
char* s = (char*)(start + offset);
return s;

Why? Imagine you're in a debugger stepping through a debug build of your code. The second version makes it trivial to set a breakpoint at the return s line and look at the value of s. The first doesn't. I don't optimize for the smallest number of lines of code but for how easy it is to inspect the state of the program in the debugger. In practice it means that I intentionally create intermediary variables like s in the example above.

Do it yourself standard library

I'm not using STL. Yes, I wrote my own string and vector classes. There are several reasons for that.

Historical reason

When I started SumatraPDF over 15 years ago, STL was crappy.

Bad APIs

Today STL is still crappy. STL implementations improved greatly but the APIs still suck. There's no API to insert something in the middle of a string or a vector. I understand the intent of separating data structures and algorithms, but I'm a pragmatist, and to my pragmatist eyes v.insert(v.begin(), myarray, myarray+3); is just stupid compared to v.insert(3, el).

Code bloat

STL is bloated. Heavy use of templates leads to lots of generated code, i.e. surprisingly large binaries for a supposedly low-level language. That bloat is invisible, i.e. you won't know unless you inspect the generated binaries, which no one does. The bloat is out of my control. Even if I notice, I can't fix STL classes. All I can do is write my non-bloaty alternative, which is what I did.

Slow compilation times

Compilation of C code is not fast, but it feels zippy compared to compilation of C++ code. Heavy use of templates is a big part of it. STL implementations are over-templatized and need to provide all the C++ support code (operators, iterators etc.). As a pragmatist, I only implement the absolute minimum functionality I use in my code. I minimize use of templates. For example, Str and WStr could be a single template but are 2 implementations.

I don't understand C++

I understand the subset of C++ I use, but the whole of C++ is impossibly complicated. For example, I've read a bunch about std::move() and I'm not confident I know how to use it correctly, and that's just one of many complicated things in C++. C++ is too subtle and I don't want my code to be a puzzle.

Possibility of optimized implementations

I wrote a StrVec class that is optimized for storing a vector of strings. It's more efficient than std::vector<std::string> by a large margin and I use it extensively.

Temporary allocator and pool allocators

I use temporary allocators heavily. They make the code faster and smaller. Technically STL has support for non-standard allocators, but the API is so bad that I would rather not. My temporary allocator and pool allocators are very small and simple, and I can add support for them only when beneficial.
Minimize unsigned int

STL and the standard C library like to use size_t and other unsigned integers. I think it was a mistake. Go shows that you can just use int. Having two types leads to a cast-apalooza. I don't like visual noise in my code. Unsigned types are also more dangerous: when you subtract you can end up with a bigger value. Indexing from the end is subtle: for (size_t i = n; i >= 0; i--) is buggy because i >= 0 is always true for unsigned. Sadly, I only realized this recently, so there's a lot of code still to refactor to change use of size_t to int.

Mostly raw pointers

No std::unique_ptr for me.

Warnings are errors

C++ makes a distinction between compilation errors and compilation warnings. I don't like sloppy code and polluting build output with warning messages, so for my own code I use a compiler flag that turns warnings into errors, which forces me to fix the warnings.

yesterday 2 votes