More from Founder's blog
TL;DR The "build vs. buy" equation has flipped. Businesses used to buy SaaS because it was cheaper than building their own. AI has changed that—building your own is now more affordable than ever. The discovery problem. AI recommendations default to well-established solutions. Think SEO is a long game? Try LLM SEO. Everyone worries about AI taking developer jobs, but what if AI wipes out the entire off-the-shelf software industry? The "Why Buy?" Problem Six months ago, we needed an AI-powered code review tool. We explored several options and ultimately "vibe-coded" our own GitHub Action—a simple Bash script that takes a git log, sends it to Claude via curl, and posts the results to Slack. Done. The best part? AI wrote the entire thing faster than it would take to sign up for a SaaS. How long until every company realizes they can do this? Need a simple "CRUD" CRM with JIRA-style tasks? Done. Need a mobile time-tracking app for remote employees? AI will spit out a React Native iOS build in minutes. Why pay for yet another SaaS when you can "vibe-code" something in a week? And mark my words, LLM providers are one step away from actually hosting the code they generate. Who needs to spawn an AWS server if you can just ask OpenAI to host the code it just wrote? - "Hey Siri! build me a Basecamp, but with green buttons, also register a domain, spawn a server and host it all there, charge this credit card when you're done" - "Absolutely, that'd be $1.17 per hour" The Discovery Problem AI doesn’t just make it easier to build software—it makes it harder for new SaaS products to get discovered. When you ask AI for recommendations, it defaults to the biggest names. And not just in SaaS, by the way, in open source too. Imagine launching a killer new JS framework today. AI coding assistants and tools like Cursor will just default to React anyway. And not even the latest version of it! In a recent tweet Adam Wathan, the creator of Tailwind, asked: "Has anyone migrated to Tailwind 4.0 yet?" The most popular response was "Nah! we're still waiting for LLMs to learn it." AI isn’t just "the next internet moment." It’s more like "the social network moment." Echo chambers get louder, big names get bigger, and smaller ones disappear into the noise. What Can SaaS Companies Do? 1. Become an Industry Standard Or at least a "go-to" product in a niche. If your app becomes something people mention on their CVs or job descriptions, you win. Examples: Slack. HubSpot. Salesforce etc. A salesperson moving to a new company simply expects Salesforce to be there. That kind of lock-in ensures survival. 2. Build Moats: Infrastructure & Vendor Lock-In SaaS products that are just CRUD apps will die. The ones that survive will own infrastructure or at least some part of it. Instead of building another AI voice assistant, create one with built-in VoIP and provide landline numbers to customers. Examples: Transistor.fm – Not just a SaaS, but also a podcast hosting and publishing pipeline. Postmark (or any transactional email service really) – yes, AI can code an email-sending app, but it can't get you a 10-year old high-reputation sender IP address trusted by Gmail and Outlook. SignWell, SavvyCal and similar "inter-business" file-sharing, communication & escrow apps that own the communication part (and frankly, are literally easier to use than vibe-code your own). But prepare for tthousands of clones. Which SaaS Will Die First? 
Side-project-scale, "one simple tool" SaaS products that used to be easy wins—form builders, schedulers, basic dashboards, simple workflow apps—those days are over. If AI can generate it in an afternoon, no one is paying a subscription for it. Oh, and "no code" is toast too. The SaaS graveyard is about to get a lot more crowded. I give it 4 years.

Software consulting is making a comeback, though. Someone has to clean up the vibe-coded chaos.
TL;DR The "build vs. buy" equation has flipped. Businesses used to buy SaaS because it was cheaper than building their own. AI has changed that—building your own is now more affordable than ever. The discovery problem. AI recommendations default to well-established solutions. Think SEO is a long game? Try LLM SEO. Everyone worries about AI taking developer jobs, but what if AI wipes out the entire off-the-shelf software industry? The "Why Buy?" Problem Six months ago, we needed an AI-powered code review tool. We explored several options, tested them all, and ultimately "vibe-coded" our own GitHub Action—a simple Bash script that takes a git log, sends it to Claude via curl, and posts the results to Slack. Done. The best part? AI wrote the entire thing—faster than it took to sign up for another SaaS. How long until every company realizes they can do this? Need a simple CRM with JIRA-style tasks? Done. Need a mobile time-tracking app for remote employees? AI will spit out a React Native iOS build in minutes. Why pay for yet another SaaS when you can "vibe-code" something in a week? The Discovery Problem AI doesn’t just make it easier to build software—it makes it harder for new SaaS products to get discovered. When you ask AI for recommendations, it defaults to the biggest names. Here’s an open-source analogy: imagine launching a game-changing JS framework today. AI coding assistants and tools like Cursor will still default to React. And not even the latest version! Adam Wathan recently asked on Twitter, "Has anyone migrated to Tailwind 4.0 yet?" The most popular response was "Nah! we're still waiting for LLMs to learn it." AI isn’t just "the next internet moment." It’s more like "the social network moment." Echo chambers get louder, big names get bigger, and smaller ones disappear into the noise. What Can SaaS Companies Do? 1. Become an Industry Standard Or at least a "go-to" product in a niche. If your app becomes something people mention on their CVs or job descriptions, you win. Examples: Slack. HubSpot. Salesforce etc. A salesperson moving to a new company simply expects Salesforce to be there. That kind of lock-in ensures survival. 2. Build Moats: Infrastructure & Vendor Lock-In SaaS products that are just CRUD apps will die. The ones that survive will own infrastructure. Examples: Transistor.fm – Not just a SaaS, but also a podcast hosting and distribution pipeline. Postmark (or any transactional email service really) – AI can code an email-sending app, but it can't get you a 10-year old high-reputation sender IP address trusted by Gmail and Outlook. SignWell and similar B2B file-sharing apps (literally easier to use then code your own). Don't just build another CRUD sales CRM, build a CRM with an inbound VoIP number – because AI can’t replace telco infrastructure (yet). Which SaaS Will Die First? Side-project-scale, "one simple tool" SaaS products that used to be easy wins—Calendly replacements, form builders, schedulers, basic dashboards, simple workflow apps—those days are over. If AI can generate it in an afternoon, no one is paying a subscription for it. Oh, and "no code" is toasted too. The SaaS graveyard is about to get a lot more crowded. I give it 4 years. Software consulting is making a comeback though. Someone has to clean up the vibe-coded chaos.
I mean, it is! But the whole story about the stock market reacting to the news about DeepSeek V3 and R1 is a fine example of the knee-jerk nature of mass consciousness in the era of clickbait economics. Briefly, point by point:

No, DeepSeek isn't "head and shoulders above" every other model. The results vary across benchmarks, but on average, GPT-4o and Gemini-2 are better. You can see this on ChatBot Arena, for example (Reddit thread). Even in the results published by DeepSeek's authors themselves (benchmark graph), you can see that in several tests the model lags behind GPT-4o from May 2024—which, mind you, is currently ranked 16th on ChatBot Arena.

No, training DeepSeek didn't cost $6 million, "100 times less than GPT-4." The $6 million figure refers only to the final training run of the published model. It doesn't include any prior experiments, earlier versions, or R&D costs. It is just the raw computational cost of that final training run. And guess what? That figure is pretty much in line with models of the same class.

No, Nvidia did not deserve this hit. Not that we're shedding tears for them — they could use a push to lower hardware prices. And let's not forget that DeepSeek was still trained on Nvidia's own hardware. And no, their GPUs aren't suddenly obsolete. DeepSeek's computational budget is fairly standard for training, and inference for such a massive model (reminder: it's an MoE with 671 billion parameters, 37 billion of which are active per token generated) requires a ton of hardware (rough numbers at the end of this post). Inference costs are roughly on par with a 70B dense model. Naturally, they'll scale this success by throwing even more hardware at it and making the model bigger. Not to mention that DeepSeek makes LLMs more accessible for on-prem customers, which means smaller businesses will buy more GPUs, which is still good for NVDA, am I right?

Does this mean the model is bad? No, the model is very, VERY good. It outperforms the vast majority of open-source models, which is fantastic. DeepSeek used 8-bit floating-point numbers (FP8) throughout the entire training process, sacrificing some precision to save memory and boost performance. Additionally, they employed a multi-token prediction system and innovative GPU clustering/connectivity techniques. These are clever and practical engineering choices that undoubtedly contributed to their success.

In the end, though, stocks will recover, ideas will spread, models will get better, and progress will march on (hopefully).
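A back-of-the-envelope note on the hardware point (my numbers, not from the original post): at FP8, 671 billion parameters take roughly 671 GB just for the weights, far beyond any single GPU, so you need a multi-GPU node before you can serve a single token. Per-token compute, on the other hand, scales with the 37 billion active parameters: roughly 2 × 37B ≈ 74 GFLOPs per token, in the same ballpark as a 70B dense model's ~140 GFLOPs. Hence cheap-ish per token, but a ton of hardware to host.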
After years of working with the "big" Visual Studio, I've had enough. It's buggy, slow, and frustrating, and I've decided to make the switch to Visual Studio Code. As a C# developer, I'm still unsure whether I can replicate every aspect of my workflow in VS Code, but I'm willing to give it a shot—and so far, I'm really impressed.

1. Performance

Visual Studio 2022 performance has been a constant issue. It's sluggish and feels increasingly bloated with every new update. It's like watching paint dry every time I open a project. In contrast, Visual Studio Code feels lightweight and incredibly fast. The first time I opened my large project in VS Code, I was shocked — it loaded in less than a second, literally, even with extensions like "C#" and "C# Dev Kit" installed.

2. Better Developer Experience

Running dotnet watch run in VS Code's terminal has been a revelation. It's fast, responsive, and actually works consistently. Visual Studio's "hot reload" feature, on the other hand, has been a constant source of frustration for me. Half the time it doesn't work, and I'm left restarting debugging sessions over and over again. I can't tell you how many hours I've lost to that unreliable feature.

3. Fewer Bugs, Less Frustration

The minor editor bugs in Visual Studio have been endless and exhausting. I remember one particularly infuriating bug where syntax highlighting would break in Razor and .cshtml files whenever I used certain HTML tags or even just adjusted the indentation. It drove me up the wall! Not to mention the bizarre issues with JavaScript formatting that never seemed to get fixed. Since switching to VS Code, I've encountered far fewer bugs. It just feels like an environment that respects my time and sanity.

4. A Thriving Ecosystem

The VS Code extension ecosystem is alive and thriving. Need Tailwind CSS IntelliSense? There's an extension for that, and it works beautifully. Want to visualize the Git history of a particular line (a better version of git-blame)? The Git History extension has you covered. In "big" Visual Studio, I'd report issues through the "feedback hub" and wait months — or even years — for a response. With VS Code, the community is constantly contributing new tools and improvements. It's energizing (and sometimes exhausting) to be part of such an active ecosystem.

5. Cross-Platform Flexibility

One of the biggest advantages I've found with Visual Studio Code is its true cross-platform support. Whether I'm on my Windows gaming rig at home or my MacBook while traveling, VS Code runs smoothly and keeps my workflow consistent. Visual Studio's limited macOS version just doesn't cut it for me. Being able to switch between machines without missing a beat has been a game-changer.

I have to admit, I was skeptical at first. I've always had a bit of a grudge against Electron-based apps — they've often felt sluggish and bloated. But VS Code has completely changed my perspective. It's fast, responsive, and flexible enough to let me build the development environment that works best for me. Switching to VS Code has rekindled my passion for coding; it reminds me why I fell in love with development in the first place. While Visual Studio will always have its strengths, I need a tool that evolves with me—not one that holds me back.
More in programming
One of my side quests at work is to get a simple feedback loop going where we can create knowledge bases that comment on Notion documents. I was curious if I could hook this together following these requirements:

No custom code hosting
The prompt is editable within Notion rather than requiring an understanding of Zapier
Should be fairly quick to set up

Ultimately, I was able to get it working. So, a quick summary of how it works, some comments on why I don't particularly like this approach, then some more detailed comments on getting it working.

General approach

Create a Notion database of prompts. Create a specific prompt for providing feedback on RFCs. Create a Notion database for all RFCs. Add an automation to that database that calls a Zapier webhook. The Zapier webhook does a variety of things that culminate in using the RFC prompt to provide feedback on the specific RFC as a top-level comment on the RFC. Altogether this works fairly well.

The challenges with this approach

The best thing about this approach is that it actually works, and it works fairly well. However, as we dig into the implementation details, you'll also see that a series of things are unnaturally difficult with Zapier:

Managing rich text in Notion, because it requires navigating the blocks data structure
Looping API constructs, such as leaving multiple comments on specific blocks rather than a single top-level comment
Notion only allows up to 2,000 characters per block, and chunking into multiple blocks is moderately unnatural. In a true Python environment, it would be trivial to translate to and from Markdown using something like md2notion

Ultimately, I could only recommend this approach as an initial validation. It's definitely not the right long-term resting place for this kind of tooling.

Zapier implementation

I already covered the Notion side of the integration, so let's dig into the Zapier pieces a bit. Overall it had eight steps. I've skipped the first step, which was just a default webhook receiver.

The second step was retrieving a statically defined Notion page containing the prompt, configured by specifying the prompt's page explicitly. (In later steps I just use the Notion API directly, which I would do here too if I were redoing this, but this worked as well. The advantage of the API is that it returns a real JSON object; this doesn't, probably because I didn't specify the Content-Type header or some such.)

Probably because I didn't set Content-Type, I think I was getting POST-formatted data here, so I just regular-expressed the data out. It's a bit sloppy, but hey, it worked, so there's that.

Next is using the Notion API request tool to retrieve the updated RFC (as opposed to the prompt, which we already retrieved). The API request returns a JSON object that you can navigate without writing regular expressions, so that's nice.

Then we send the prompt as system instructions and the RFC as the user message to OpenAI. Then we pass the response from OpenAI through json.dumps to encode it for inclusion in an API call; this is mostly solving for newlines being \n rather than literal newlines. Finally, we format the response into an API request that adds a comment to the document.

Anyway, this wasn't beautiful, and I think you could do a much better job by just doing all of this in Python, but it's a workable proof of concept.
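To make the final step concrete, here is a minimal sketch of what that "add a comment" call looks like when expressed as plain code instead of a Zapier step. This is my illustration, not the author's actual Zap: NOTION_TOKEN, pageId, and feedback are hypothetical names, and the Notion-Version header value may need updating for newer API releases.

async function postFeedbackComment(pageId, feedback) {
  // Notion caps a rich text element at 2,000 characters, so chunk the feedback
  const chunks = feedback.match(/[\s\S]{1,2000}/g) ?? [];
  const res = await fetch("https://api.notion.com/v1/comments", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
      "Notion-Version": "2022-06-28",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      // a page_id parent makes this a top-level comment on the RFC page
      parent: { page_id: pageId },
      rich_text: chunks.map((content) => ({ text: { content } })),
    }),
  });
  if (!res.ok) throw new Error(`Notion API error: ${res.status}`);
  return res.json();
}

Written this way, the 2,000-character chunking the post complains about is a one-liner rather than an awkward Zapier step.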
When working on big JavaScript web apps, you can split the bundle into multiple chunks and import selected chunks lazily, only when needed. That makes the main bundle smaller, and faster to load and parse.

How to lazy import a module?

let hljs = (await import("highlight.js")).default;

is the equivalent of:

import hljs from "highlight.js";

Now:

let libZip = await import("@zip.js/zip.js");
let blobReader = new libZip.BlobReader(blob);

is equivalent to:

import { BlobReader } from "@zip.js/zip.js";
let blobReader = new BlobReader(blob);

It's simple if we call it from an async function, but sometimes we want to lazy load from a non-async function, so things might get more complicated:

let isLazyImporting = false;
let hljs;
let markdownIt;
let markdownItAnchor;

async function lazyImports() {
  if (isLazyImporting) return;
  isLazyImporting = true;
  // Promise.all() resolves to an array of module objects
  let mods = await Promise.all([
    import("highlight.js"),
    import("markdown-it"),
    import("markdown-it-anchor"),
  ]);
  hljs = mods[0].default;
  markdownIt = mods[1].default;
  markdownItAnchor = mods[2].default;
}

We can run it from a non-async function:

function doit() {
  lazyImports().then(() => {
    if (hljs) {
      // use hljs to do something
    }
  });
}

I've included protection against kicking off the lazy import more than once. That means on the second and n-th call we might not yet have the module loaded, so hljs might still be undefined.
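One way to avoid that window where hljs is still undefined (a variation I'd suggest, not something from the original post) is to cache the promise itself instead of a boolean flag, so every caller awaits the same load:

// Sketch: cache the in-flight promise rather than a boolean flag.
// Every caller gets the same promise, so the modules are always
// loaded by the time the .then() callback runs.
let lazyImportsPromise;

function lazyImports() {
  lazyImportsPromise ??= Promise.all([
    import("highlight.js"),
    import("markdown-it"),
    import("markdown-it-anchor"),
  ]).then(([hljsMod, mdMod, mdAnchorMod]) => ({
    hljs: hljsMod.default,
    markdownIt: mdMod.default,
    markdownItAnchor: mdAnchorMod.default,
  }));
  return lazyImportsPromise;
}

// usage from a non-async function:
function doit() {
  lazyImports().then(({ hljs }) => {
    // hljs is guaranteed to be loaded here
  });
}

Note that dynamic import() already caches the loaded module itself; the cached promise exists only so callers can reliably await completion.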
A step-by-step walkthrough of the toy kill program using raw Linux syscalls.
For managers who have spent a long time reporting to a specific leader or working in an organization with well-understood goals, it's easy to develop skill gaps without realizing it. Usually this happens because those skills were not particularly important in the environment you grew up in. You may become extremely confident in your existing skills, enter a new organization that requires a different mix of competencies, and promptly fall on your face.

There are a few common varieties of this, but the one I want to discuss here is when managers grow up in an organization that operates from top-down plans ("orchestration-heavy roles") and then find themselves in a sufficiently senior role, or in a bottom-up organization, that expects them to lead rather than orchestrate ("leadership-heavy roles").

Orchestration versus leadership

You can break the components of solving a problem down in a number of ways, and I'm not saying this is the perfect way to do it, but here are six important components of directing a team's work:

Problem discovery: identifying which problems to work on
Problem selection: aligning with your stakeholders on the problems you've identified
Solution discovery: identifying potential solutions to the selected problem
Solution selection: aligning with your stakeholders on the approach you've chosen
Execution: implementing the selected solution
Ongoing revision: keeping your team and stakeholders aligned as you evolve the plan

In an orchestration-heavy management role, you might focus only on the second half of these steps. In a leadership-heavy management role, you work on all six. Folks who've only worked in orchestration-heavy roles often have no idea that they are expected to perform all of them. So, yes, there's a skill gap in performing the work, but more importantly there's an awareness gap that the work even exists to be done.

Here are a few ways you can identify an orchestration-heavy manager who doesn't quite understand their current, leadership-heavy circumstances:

Focuses on prioritization as the "solution of first resort." When you're not allowed to change the problem or the approach, prioritization becomes one of the best tools you have.

Accepts problems and solutions as presented. If a stakeholder asks for something, questions are about priority rather than whether the project makes sense to do at all, or whether an alternative approach would be better. There's no habit of questioning whether the request makes sense—that's left to the stakeholder or to more senior functional leadership.

Focuses on sprint planning and process. With the problem and approach fixed, protecting your team from interruption and disruption is one of your most valuable tools. Operating strictly to a sprint cadence (changing plans only at the start of each sprint) is a powerful technique.

All of these things are still valuable in a leadership-heavy role; they just aren't necessarily the most valuable things you could be doing.

Operating in a leadership-heavy role

There is a steep learning curve for managers who find themselves in a leadership-heavy role, because it's a much more expansive role. However, it's important to realize that there are no senior engineering leadership roles focused solely on orchestration. You either learn this leadership style or you get stuck in mid-level roles (even in organizations that lean orchestration-heavy). Further, the technology industry generally believes it overinvested in orchestration-heavy roles in the 2010s.
Consequently, companies are eliminating many of those roles and preventing similar roles from being created in the next generation of firms. There's a pervasive narrative attributing this shift to the increased productivity brought by LLMs, but I'm skeptical of that relationship—this change was already underway before LLMs became prominent.

My advice for folks working through the leadership-heavy role learning curve:

Think of your job's core loop as four steps:

1. Identify the problems your team should be working on
2. Decide on a destination that solves those problems
3. Explain to your team, stakeholders, and executives the path the team will follow to reach that destination
4. Communicate both data and narratives that provide evidence you're walking that path successfully

If you are not doing these four things, you are not performing your full role, even if people say you do some parts well. Similarly, if you want to get promoted or secure more headcount, those four steps are the path to doing so (I previously discussed this in How to get more headcount).

Ask your team for priorities and problems to solve. Mining for bottom-up projects is a critical part of your role. If you wait only for top-down and lateral priorities, you aren't performing the first step of the core loop. It's easy to miss this expectation—it's invisible to you but obvious to everyone else, so they don't realize it needs to be said. If you're not sure, ask. If your leadership chain is running the core loop for your team, it's because they lack evidence that you can run it yourself. That's a bad sign: what's "business as usual" in an orchestration-heavy role actually signals a struggling manager in a leadership-heavy role.

Get your projects prioritized by following the core loop. If you have a major problem on your team and wonder why it isn't getting solved, that's on you. Leadership-heavy roles won't have someone else telling you how to frame your team's work—unless they think you're doing a bad job.

Picking the right problems and solutions is your highest-leverage work. No, this is not only your product manager's job or your tech lead's—it is your job. It's also theirs, but leadership overlaps because getting it right is so valuable. Generalizing a bit, your focus now is the effectiveness of your team's work, not efficiency in implementing it. Moving quickly on the wrong problem has no value.

Understand your domain and technology in detail. You don't have to write all the software, but you should have written some simple pull requests to verify you can reason about the codebase. You don't have to author every product requirement or architecture proposal, but you should write one occasionally to prove you understand the work. If you don't feel capable of that, that's okay. But you need to urgently write down the steps you'll take to close that gap and share that plan with your team and manager. They currently see you as not meeting expectations and want to know how you'll start meeting them. If you think that gap cannot be closed, or that it's unreasonable to expect you to close it, you misunderstand your role. Some organizations will allow you to misunderstand your role for a long time, provided you perform parts of it well, but they rarely promote you under those circumstances—and most won't tolerate it in senior leaders.

Align with your team and cross-functional stakeholders as much as you align with your executive.
If your executive is wrong and you follow them, it is your fault that your team and stakeholders are upset: part of your job is changing your executive's mind. Yes, it can feel unfair if you're the type to blame everything on your executive. But it's still true: expecting your executive to get everything right is a sure way to feel superior without accomplishing much.

Now that I've shared my perspective, I'll admit I'm being a bit extreme on purpose—people who don't pick up on this tend to dispute its validity strongly unless there is no room to debate. There is room for nuance, but if you think my entire point is invalid, I encourage you to have a direct conversation with your manager and team about their expectations and how they feel you're meeting them.
When building a large web app it's possible to split the .js bundle into chunks and lazy load certain parts only when needed. For example, in Edna I use the markdown-it and highlight.js libraries only in a certain scenario. By putting them in their own separate chunk, I save almost 1 MB of uncompressed JavaScript in the main bundle. Faster to download, faster to run.

../dist/assets/markdownit-hljs-DbctGXX9.js  1,087.33 kB │ gzip: 358.42 kB

To split into chunks you configure rollup in vite.config.js:

function manualChunks(id) {
  // partition files into chunks
}

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: manualChunks,
      },
    },
  },
});

The manualChunks() function takes the path of a file (I don't know why everyone calls it an id). If you return a string for a given path, you tell rollup to bundle that file into a chunk of that name. If you return nothing (i.e. undefined), rollup will decide how to chunk automatically, most likely putting everything into a single chunk. It gets called for .css files, .js files and probably others.

Here's my hard-won wisdom:

console.log(id) when working on manualChunks() to see which files are processed
put specific modules from node_modules that should be lazy loaded into their own chunks
everything else from node_modules goes into a vendor chunk
the rest is my own code and goes into the main chunk, as decided by rollup

Seems simple enough:

function manualChunks(id) {
  console.log(id);
  // each entry: the last element is the chunk name,
  // the preceding elements are path fragments that map to it
  const chunksDef = [
    ["/@zip.js/zip.js/", "zipjs"],
    ["/prettier/", "prettier"],
    // markdown-it and highlight.js are used together in askai.svelte
    [
      "/markdown-it/",
      "/markdown-it-anchor/",
      "/highlight.js/",
      "/entities/",
      "/linkify-it/",
      "/mdurl/",
      "/punycode.js/",
      "/uc.micro/",
      "markdownit-hljs",
    ],
  ];
  for (let def of chunksDef) {
    let n = def.length;
    for (let i = 0; i < n - 1; i++) {
      if (id.includes(def[i])) {
        return def[n - 1];
      }
    }
  }
  // bundle all other 3rd-party modules into a single vendor.js chunk
  if (id.includes("/node_modules/")) {
    return "vendor";
  }
  // when we return undefined, rollup will decide
}

This is a real example from Edna. I've put zip.js, prettier and markdown-it + markdown-it-anchor + highlight.js into their own chunks, which I lazily import. Things to note:

Order is important. If I matched /node_modules/ first, everything would end up in the vendor chunk.

id is the full path of the bundled file in Unix format, e.g. C:/Users/kjk/src/elaris/node_modules/prettier/standalone.mjs. People seem to match the path against just the package name, like prettier. I match against /prettier/ so that a file that merely contains the string prettier in its name won't be accidentally put in the prettier chunk.

/entities/, /mdurl/ etc. are used by markdown-it, so they should be included in its chunk. That's where console.log(id) is helpful. I saw modules that I didn't explicitly put in package.json, which means they are implicit dependencies. I used bun.lock to see which package depends on those mysterious packages, and that's how I found what is used by markdown-it.

There were two remaining problems. I also had this in my code:

import "highlight.js/styles/github.css";

manualChunks was called for "highlight.js/styles/github.css", for which I returned markdownit-hljs, so it was put in its own chunk. That was too much: I didn't want to lazy import a small CSS file, so I told rollup to put all .css files in the main CSS chunk:

function manualChunks(id) {
  // let rollup decide, which packs all .css files in the main CSS chunk
  if (id.endsWith(".css")) {
    return;
  }
  // ... rest of the code
}

There was one more thing that was a big pain in the ass to debug.
To verify things were properly chunked, I opened Dev Tools in Chrome and looked at the network tab. To my surprise, markdownit-hljs was loaded immediately. After lots of debugging and research it turned out that vite bundles some helper functions. Because I didn't specify explicitly which chunk they should go into, rollup decided to put them in the markdownit-hljs chunk. Because the main chunk was using those helpers, it had to import that chunk, defeating my cunning plan to load it lazily later. The fix was to direct those known helper functions into the vendor chunk (see the note at the end of this post for what ROLLUP_COMMON_MODULES might contain):

function manualChunks(id) {
  // ... other code

  // bundle all other 3rd-party modules into a single vendor.js chunk
  if (
    id.includes("/node_modules/") ||
    ROLLUP_COMMON_MODULES.some((commonModule) => id.includes(commonModule))
  ) {
    return "vendor";
  }

  // ... rest of the code
}

You can read more at:

https://github.com/vitejs/vite/issues/17823
https://github.com/vitejs/vite/issues/19758
https://github.com/vitejs/vite/issues/5189

Things might still go wrong. I got one invalid build that created a chunk that wouldn't parse in the browser. For that reason I suggest starting simple: an empty manualChunks() function that packs everything into one chunk. Then add the desired chunks one by one, verifying the build after each change.

And how do you lazy import things?

let markdownIt = (await import("markdown-it")).default;

is equivalent to the static import:

import markdownIt from "markdown-it";

Read more about lazy imports.
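A note on ROLLUP_COMMON_MODULES, which the code above uses without defining: based on the vite issues linked above, it is a list of vite/rollup virtual helper module ids. A plausible definition (an assumption on my part; the exact names may differ between vite versions, so check console.log(id) output in your own build) looks something like:

// hypothetical list of vite/rollup helper module ids to pin to the vendor chunk;
// matched with id.includes(), so partial names are enough
const ROLLUP_COMMON_MODULES = [
  "vite/modulepreload-polyfill",
  "vite/dynamic-import-helper",
  "vite/preload-helper",
  "commonjs-dynamic-modules",
  "commonjsHelpers.js",
  "__vite-browser-external",
];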