More from Ognjen Regoje • ognjen.io
Many (most?) engineers go from university to a sizable company, significantly distancing them from the actual value their code creates. They labour under the delusion that they're paid to write code. In fact, they're paid to make money, and writing code is probably the most expensive way that they can do that.

They will often say things like "We should scrap this entirely and re-write it, it will only take 8 months" – often about code that generates 8 figures in revenue and employs several dozen people. Code that pays for their smartwatches. But, of course:

Engineers are hired to create business value, not to program things – Don't Call Yourself A Programmer, And Other Career Advice

In my estimate it takes about a decade of experience before engineers really internalize this. That can be significantly sped up by shortening the feedback loop between the code written and the value the engineer realizes from it. There are two ways to do this:

- Freelancing
- Founding

Freelancing

When freelancing, and doing it well, the reward is very directly tied to the code written. The best way to freelance, for the sake of learning, is to work on fixed-cost contracts – which isn't great freelancing advice, but is excellent for the long-term career. Delivering to someone else's specs makes engineers focus on delivering only the code that is necessary and sufficient to make that happen. Every correct decision improves the engineer's earnings per hour, and every mistake reduces them. That feedback loop very quickly teaches:

- The importance of quality and automated testing
- Architecture and keeping options open
- Communication and requirements gathering, and asking the right questions

All of these are factors that come into play once an engineer is breaking the barrier from Senior to management or Staff.

Founding a company

Founding a company, where the code you produce secures your salary, teaches those lessons, plus a few others:

- Understanding the tradeoffs that companies make between velocity and tech debt. It is also an opportunity to learn how to make those tradeoffs well, something engineers aren't always great at.
- Experience creating the most value possible with the least code. Very few engineers pre-emptively suggest ways to test product hypotheses using cheaper approaches.
- Pragmatism and a bias towards shipping, avoiding gold-plating functionality that is immature.

Plus you very quickly start to understand why "We should re-write it" is almost never the right business decision.

All software engineers should freelance or found a business was originally published by Ognjen Regoje at Ognjen Regoje • ognjen.io on April 18, 2025.
In The Innovator's Dilemma, Christensen talks about how, when acquiring a company, you might be acquiring either its product or its processes. Depending on which it is, you need to handle the integration differently.

I've realized that hiring a new manager follows a similar pattern: either they're expected to integrate into the organization, or to be independent and create some change. That expectation depends on whether the team, and possibly the wider organization, functions well. If the team is high-performing, why would adding or overhauling processes make sense over fine-tuning existing ones?

But new managers often join and immediately start suggesting ways to fix things. In many of these cases, they aren't suggesting some best practice but are simply trying to make the new company function like their previous one. And they never have enough context to justify these changes.

What they should do is take a step back and understand why they were hired and what already works. Are they there to run the team as it is and perhaps look for marginal gains in efficiency and effectiveness? Or are they there because things are fundamentally broken and they need to overhaul the organization?

In 9 out of 10 cases, it's the first one. They're there to ensure the continuity of the team. Therefore, in 9 out of 10 cases, the objective should be to integrate into the existing processes as quickly as possible and help iterate on them.

Why are you here, manager? was originally published by Ognjen Regoje at Ognjen Regoje • ognjen.io on April 18, 2025.
I love Ben Brode's Design Lessons from Improv talk. It presents techniques that we could all use more frequently. I particularly took "Yes, and…" to heart. It is an excellent technique, or attitude really, that keeps the conversation going. Conversations often start slow but get progressively more interesting the deeper you go. And "Yes, and…" makes it possible to get there.

One of my favorite uses of "Yes, and…" is when someone sends you an article that you've already read or a video you've already watched. The typical response might be:

👍 seen it

(A whole site is named after the fact that you've already read it.)

If the other person is interested in having a conversation, you've just stopped it in its tracks, expecting them to put in all the effort to keep it going. A "Yes, and…" response such as "Yes, I've read it, and [something you found interesting]" opens up the conversation. Even if the other person just wanted to share something they thought you might find interesting, you've:

a) created an opportunity to exchange opinions, and
b) put in slightly more than the bare minimum of effort to acknowledge that what they shared with you was indeed interesting

At work

At work, specifically, it is useful in all manner of discussions. Conversations about product, or code, or architecture, or team activities, or customer service all get better when you don't dismiss but build on top of each other's ideas.

The value of "Yes, and..." was originally published by Ognjen Regoje at Ognjen Regoje • ognjen.io on April 17, 2025.
One of the best pieces of advice I've ever gotten is to take a minute, or a week, after you've had a difficult conversation.

By and large, people are not unreasonable. They're not out to get you. They're not trying to make your life miserable. They're probably trying to do what they think is right. But tough conversations happen, and when they do, it's important to take time to process the information and formulate a more nuanced opinion.

To take a work example: picture a conversation where you're being given some particularly heavy feedback. You're confused, you're sad, you're angry. You disagree. You want to protest, defend yourself, argue, explain. Doing so, however, would accomplish nothing in the immediate term, and would probably set you back in the long term. The other person is probably also upset and stressed about having to have the conversation. Getting defensive would make them defensive too, and the conversation would quickly devolve into one run by emotions.

Instead, listen and gather as much information as possible. If possible, write down as much as you can. Don't say much except to ask questions, and then politely ask for a follow-up meeting in a few days.

That will give you the time to process all the information and figure out whether they were right, whether it might not have been a big deal at all, whether there is nuance in the situation, or whether you were indeed right. Or, as is most likely, some combination of all of the above. You'll be able to formulate a cohesive model of the situation in your head, which will help you make a better decision or counter-argument if needed. It'll also give you, and the others, time to cool down and prevent anyone from reacting too emotionally.

Come to the follow-up meeting with humility and a willingness to compromise. Recap the previous meeting and make sure that everyone is on the same page. Then explain your understanding of the situation and present your opinion. The end result should be a much more amicable outcome, without the need for a third meeting.

And while my example is in the context of work, the same is true for personal conversations.

So, take a minute. Or a week. It'll help you make better decisions.

During a difficult conversation, remember to take a minute was originally published by Ognjen Regoje at Ognjen Regoje • ognjen.io on April 17, 2025.
There is nothing as inevitable as a re-org when a new VP joins.

When a new executive joins, they're often overwhelmed by the amount of context they need to absorb to start being effective. The more seasoned ones aren't perturbed by this: they understand that gathering this context is their full-time job for the next several weeks or months. There's even a book about this period.

The less savvy ones, on the other hand, often reach for one of the following coping strategies, depending on the type of role they occupy.

"This organization makes no sense, we must re-organize it immediately"

Spoken by a newly joined VP who needs to assess the organization and understand why it is set up the way it is. It results in several workshops about boundaries, Conway's law and team topologies, which produce a slightly different, but not materially changed, organization. And a VP with a much better understanding of their people, the culture, the product and the challenges.

"We must document/map it"

Spoken by a product manager getting to grips with the features they'll be working on, before having read the abundant sales, technical and product reference materials. This usually results in several workshops with a lot of "discovery" and "mapping". In reality, the product manager is getting an in-person crash course. It rarely results in any new discoveries, documentation or maps being produced, but it always results in a much more confident product manager.

"We must have a process for that"

Spoken by a new engineering manager who's not yet familiar with the existing processes and ways of working. This usually results in the engineering manager starting to write a Confluence page on how the process should work, until one of the team members sends them an existing, finished Confluence page on exactly that, with slight differences. The new page gets a link to the existing one and is promptly forgotten.

"Does this process really work for anyone?"

A sub-category of the above, used when the process in place is different from the one at their previous employer.

"This code is so bad, we must re-write it entirely"

Spoken by a senior, but not quite yet staff, engineer who's just getting to grips with a new codebase – often about code that generates 7 or 8 digits in revenue. It results in the engineer spending several hours on an alternative architecture and running it by their team several times. Eventually, they understand that what they're suggesting is quite similar to what is actually in place, and that there is some refactoring and improving to be done, but that it's nowhere near as tragic as they imagined.

Why does this happen?

A week or two after joining, depending on how generous the company is, the engineer gets a ticket to work on, the PM is asked about the backlog priority, and the EM is asked why their bug injection rate is so high and what they're doing about it. And they naturally feel lost. The problem is that most companies don't set an expected timeline for a person to become effective in their position.

How to do better?

The amount of context required to be effective increases with seniority. But everyone needs a couple of weeks outside of the default onboarding programme: to read through their team's wiki space, to look through the backlog, to pair with their colleagues, to get an understanding of the work the team is doing, to be present at retrospectives to listen and not have to lead and facilitate. Only after they get the lay of the land can they start contributing in a meaningful way.
The managerial fear of the unknown was originally published by Ognjen Regoje at Ognjen Regoje • ognjen.io on April 17, 2025.
More in programming
If Git had a nemesis, it'd be large files. Large files bloat Git's storage, slow down git clone, and wreak havoc on Git forges. In 2015, GitHub released Git LFS – a Git extension that hacked around problems with large files. But Git LFS added new complications and storage costs. Meanwhile, the Git project has been quietly working on large files. And while LFS ain't dead yet, the latest Git release shows the path towards a future where LFS is, finally, obsolete.

What you can do today: replace Git LFS with Git partial clone

Git LFS works by storing large files outside your repo. When you clone a project via LFS, you get the repo's history and small files, but skip large files. Instead, Git LFS downloads only the large files you need for your working copy.

In 2017, the Git project introduced partial clones that provide the same benefits as Git LFS:

Partial clone allows us to avoid downloading [large binary assets] in advance during clone and fetch operations and thereby reduce download times and disk usage. – Partial Clone Design Notes, git-scm.com

Git's partial clone and LFS both make for:

- Small checkouts – On clone, you get the latest copy of big files instead of every copy.
- Fast clones – Because you avoid downloading large files, each clone is fast.
- Quick setup – Unlike shallow clones, you get the entire history of the project—you can get to work right away.

What is a partial clone?

A Git partial clone is a clone with a --filter. For example, to avoid downloading files bigger than 100KB, you'd use:

git clone --filter='blob:limit=100k' <repo>

Later, Git will lazily download any files over 100KB that you need for your checkout.

By default, if I git clone a repo with many revisions of a noisome 25 MB PNG file, cloning is slow and the checkout is obnoxiously large:

$ time git clone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (153/153), 1.19 GiB

real    3m49.052s

Almost four minutes to check out a single 25MB file!

$ du --max-depth=0 --human-readable noise-over-git/.
1.3G    noise-over-git/.
$         ^ 🤬

And 50 revisions of that single 25MB file eat 1.3GB of space. But a partial clone side-steps these problems:

$ git config --global alias.pclone 'clone --filter=blob:limit=100k'
$ time git pclone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (1/1), 24.03 MiB

real    0m6.132s

$ du --max-depth=0 --human-readable noise-over-git/.
49M     noise-over-git/
$         ^ 😻 (the same size as a git lfs checkout)

My filter made cloning 97% faster (3m49s → 6s), and it reduced my checkout size by 96% (1.3GB → 49M)!

But there are still some caveats. If you run a command that needs data you filtered out, Git will make a trip to the server to get it. So commands like git diff, git blame, and git checkout may require a trip to your Git host to run. But, for large files, this is the same behavior as Git LFS. Plus, I can't remember the last time I ran git blame on a PNG 🙃.

Why go to the trouble? What's wrong with Git LFS?

Git LFS foists Git's problems with large files onto users. And the problems are significant:

🖕 High vendor lock-in – When GitHub wrote Git LFS, the other large file systems – Git Fat, Git Annex, and Git Media – were agnostic about the server side. But GitHub locked users into their proprietary server implementation and charged folks to use it.1

💸 Costly – GitHub won because it let users host repositories for free.
But Git LFS started as a paid product. Nowadays there's a free tier, but you're dependent on the whims of GitHub to set pricing. Today, a 50GB repo on GitHub will cost $40/year for storage. In contrast, storing 50GB on Amazon's S3 standard storage is $13/year.

😰 Hard to undo – Once you've moved to Git LFS, it's impossible to undo the move without rewriting history.

🌀 Ongoing set-up costs – All your collaborators need to install Git LFS. Without Git LFS installed, your collaborators will get confusing, metadata-filled text files instead of the large files they expect.

The future: Git large object promisors

Large files create problems for Git forges, too. GitHub and GitLab put limits on file size2 because big files cost more money to host. Git LFS keeps server-side costs low by offloading large files to CDNs. But the Git project has a new solution.

Earlier this year, Git merged a new feature: large object promisors. Large object promisors aim to provide the same server-side benefits as LFS, minus the hassle to users.

This effort aims to especially improve things on the server side, and especially for large blobs that are already compressed in a binary format. This effort aims to provide an alternative to Git LFS – Large Object Promisors, git-scm.com

What is a large object promisor?

Large object promisors are special Git remotes that only house large files. In the bright, shiny future, large object promisors will work like this:

1. You push a large file to your Git host.
2. In the background, your Git host offloads that large file to a large object promisor.
3. When you clone, the Git host tells your Git client about the promisor.
4. Your client clones from the Git host, and automagically nabs large files from the promisor remote.

But we're still a ways off from that bright, shiny future. Git large object promisors are still a work in progress. Pieces of large object promisors merged into Git in March of 2025. But there's more to do and open questions yet to answer. And so, for today, you're stuck with Git LFS for giant files. But once large object promisors see broad adoption, maybe GitHub will let you push files bigger than 100MB.

The future of large files in Git is Git. The Git project is thinking hard about large files, so you don't have to. Today, we're stuck with Git LFS. But soon, the only obstacle for large files in Git will be your half-remembered, ominous hunch that it's a bad idea to stow your MP3 library in Git.

Edited by Refactoring English

1. Later, other Git forges made their own LFS servers. Today, you can push to multiple Git forges or use an LFS transfer agent, but all this makes setup harder for contributors. You're pretty much locked in unless you put in extra effort to get unlocked.↩︎
2. File size limits: 100MB for GitHub, 100MB for GitLab.com↩︎
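An aside on inspecting partial clones: if you want to see what a --filter actually skipped, Git can list the objects that are reachable from your branch but absent locally. A minimal sketch, reusing the noise-over-git repo from the example above; it assumes a reasonably recent Git (the --missing option is documented for git rev-list, and output formatting may vary between versions):

$ git clone --filter='blob:limit=100k' https://github.com/thcipriani/noise-over-git
$ cd noise-over-git
$ # Objects reachable from HEAD but not downloaded are printed with
$ # a leading "?" - these are the filtered-out large blobs:
$ git rev-list --objects --missing=print HEAD | grep '^?'
$ # Accessing one of them (paste an ID from the output above, without
$ # the "?") triggers the same lazy fetch that git checkout would:
$ git cat-file -p <object-id> > /dev/null

This is also a handy sanity check before pushing a partial-clone workflow on your team: it shows exactly how much data everyone gets to not download.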
Conrad Irwin has an article on the Zed blog, "Why LLMs Can't Really Build Software". He says it boils down to this:

the distinguishing factor of effective engineers is their ability to build and maintain clear mental models

We do this by:

1. Building a mental model of what you want to do
2. Building a mental model of what the code does
3. Reducing the difference between the two

It's an interesting observation about how we (as humans) problem-solve vs. how we use LLMs to problem-solve:

- With LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution.
- With your brain, you tweak, revise, or simplify your mental model until the solution presents itself.

One adds information – complexity, you might even say – to solve a problem. The other eliminates it.

You know what that sort of makes me think of? NPM-driven development. Solving problems with LLMs is like solving front-end problems with NPM: the "solution" comes through installing more and more things – adding more and more context, i.e. more and more packages.

LLM: Problem? Add more context.
NPM: Problem? There's a package for that.

Contrast that with a solution that comes through simplification. You don't add more context. You simplify your mental model so you need less to solve the problem – if you solve it at all. Rather than install another package to fix what ails you, you simplify your mental model, which often eliminates the problem entirely, and with it the need for any additional context or complexity (or dependency).

As I'm typing this, I'm thinking of that image of the evolution of the Raptor engine, where each iteration got simpler. That stands in contrast to my experience working with LLMs, which often demand more and more context from me to get to a generated solution.

I know, I know. There's probably a false equivalence here. This entire post started as a note and I just kept going. It needs further thought and simplification. But that'll have to come in a subsequent post, otherwise this never gets published lol.
Measuring, analyzing, and optimizing loops using Linux perf, Top-Down Microarchitectural Analysis, and the CPU’s micro-op cache
You can just change things! That's the power of open source. But for a lot of people, it might seem like a theoretical power. Can you really change, say, Chrome?

Well, yes! We've made a micro fork of Chromium for Omarchy (our new 37signals Linux distribution), just to add one feature needed for live theming. And now it's released as a package anyone can install on any flavor of Arch using the AUR (Arch User Repository).

We got it all done in just four days. From idea, to solicitation, to successful patch, to release, to incorporation. And now it'll be part of the next release of Omarchy.

There are no speed limits in open source. Nobody to ask for permission. You have the code, so you can make the change. All you need is skill and will (and maybe, if you need someone else to do it for you, a $5,000 incentive 😄).
Jan Miksovsky lays out his idea for website creation as content transformation. He starts by talking about tools that hide what's happening "under the hood":

A framework's marketing usually pretends it is unnecessary for you to understand how its core transformation works — but without that knowledge, you can't achieve the beautiful range of results you see in the framework's sample site gallery.

This is a great callout. Tools will say, "You don't have to worry about the details." But in reality, you end up worrying about the details – at least to some degree. Why? Because what you want to build is full of personalization. That's how you differentiate yourself, which means you're going to need a tool that's expressive enough to help you.

So the question becomes: how hard is it to understand the details that are being intentionally hidden away? A lot of the time those details are not exposed directly. Instead, they're exposed through configuration. But configuration doesn't really help you learn how something works. I mean, how many of you have learned how TypeScript works under the hood by using tsconfig.json? As Jan says:

Configuration can lead to as many problems as it solves

Nailed it. He continues:

Configuring software is itself a form of programming, in fact a rather difficult and often baroque form. It can take more data files or code to configure a framework's transformation than to write a program that directly implements that transformation itself.

I'm not a DevOps person, but that sounds like DevOps in a nutshell right there. (It also perfectly encapsulates my feelings on trying to set up configuration in GitHub Actions.)

Jan moves beyond site creation to also discuss site hosting. He gives good reasons for keeping your website's architecture simple and decoupled from your hosting provider (something I've long been a proponent of):

These site hosting platforms typically charge an ongoing subscription fee. (Some offer a free tier that may meet your needs.) The monthly fee may not be large, but it's forever. Ten years from now you'll probably still want your content to be publicly available, but will you still be happy paying that monthly fee? If you stop paying, your site disappears.

In subscription pricing, any price (however small) is recurring. Stated differently: pricing is forever.

Anyhow, it's a good read from Jan, and it lays out his vision for why he's building Web Origami: a tool that encourages you to understand (and customize) how you transform content into a website. He just launched version 0.4.0, which has some new stuff I'm excited to try out (I'll have to write about all that later).
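To make Jan's point concrete, here's roughly what "a program that directly implements that transformation" can look like for a simple site. A minimal sketch in shell, assuming pandoc is installed and your posts live in a content/ directory (Web Origami itself works differently; this is just the general idea of stating the transformation directly instead of configuring a framework):

# Turn every Markdown file in content/ into an HTML page in site/.
# The entire "build system" is the transformation, written out plainly.
mkdir -p site
for f in content/*.md; do
  pandoc "$f" --standalone --output "site/$(basename "${f%.md}").html"
done

No config file needed: if you want to change how the site is built, you change the program that builds it, and what it does is exactly what it says.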