
More from Founder's blog

Will AI destroy B2B SaaS?

TL;DR The "build vs. buy" equation has flipped. Businesses used to buy SaaS because it was cheaper than building their own. AI has changed that—building your own is now more affordable than ever. The discovery problem. AI recommendations default to well-established solutions. Think SEO is a long game? Try LLM SEO. Everyone worries about AI taking developer jobs, but what if AI wipes out the entire off-the-shelf software industry? The "Why Buy?" Problem Six months ago, we needed an AI-powered code review tool. We explored several options and ultimately "vibe-coded" our own GitHub Action—a simple Bash script that takes a git log, sends it to Claude via curl, and posts the results to Slack. Done. The best part? AI wrote the entire thing faster than it would take to sign up for a SaaS. How long until every company realizes they can do this? Need a simple "CRUD" CRM with JIRA-style tasks? Done. Need a mobile time-tracking app for remote employees? AI will spit out a React Native iOS build in minutes. Why pay for yet another SaaS when you can "vibe-code" something in a week? And mark my words, LLM providers are one step away from actually hosting the code they generate. Who needs to spawn an AWS server if you can just ask OpenAI to host the code it just wrote? - "Hey Siri! build me a Basecamp, but with green buttons, also register a domain, spawn a server and host it all there, charge this credit card when you're done" - "Absolutely, that'd be $1.17 per hour" The Discovery Problem AI doesn’t just make it easier to build software—it makes it harder for new SaaS products to get discovered. When you ask AI for recommendations, it defaults to the biggest names. And not just in SaaS, by the way, in open source too. Imagine launching a killer new JS framework today. AI coding assistants and tools like Cursor will just default to React anyway. And not even the latest version of it! In a recent tweet Adam Wathan, the creator of Tailwind, asked: "Has anyone migrated to Tailwind 4.0 yet?" The most popular response was "Nah! we're still waiting for LLMs to learn it." AI isn’t just "the next internet moment." It’s more like "the social network moment." Echo chambers get louder, big names get bigger, and smaller ones disappear into the noise. What Can SaaS Companies Do? 1. Become an Industry Standard Or at least a "go-to" product in a niche. If your app becomes something people mention on their CVs or job descriptions, you win. Examples: Slack. HubSpot. Salesforce etc. A salesperson moving to a new company simply expects Salesforce to be there. That kind of lock-in ensures survival. 2. Build Moats: Infrastructure & Vendor Lock-In SaaS products that are just CRUD apps will die. The ones that survive will own infrastructure or at least some part of it. Instead of building another AI voice assistant, create one with built-in VoIP and provide landline numbers to customers. Examples: Transistor.fm – Not just a SaaS, but also a podcast hosting and publishing pipeline. Postmark (or any transactional email service really) – yes, AI can code an email-sending app, but it can't get you a 10-year old high-reputation sender IP address trusted by Gmail and Outlook. SignWell, SavvyCal and similar "inter-business" file-sharing, communication & escrow apps that own the communication part (and frankly, are literally easier to use than vibe-code your own). But prepare for tthousands of clones. Which SaaS Will Die First? 
Side-project-scale, "one simple tool" SaaS products that used to be easy wins—form builders, schedulers, basic dashboards, simple workflow apps—those days are over. If AI can generate it in an afternoon, no one is paying a subscription for it. Oh, and "no code" is toasted too. The SaaS graveyard is about to get a lot more crowded. I give it 4 years. Software consulting is making a comeback though. Someone has to clean up the vibe-coded chaos.
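To make that concrete, here is roughly what such a throwaway review action could look like: a minimal sketch, not the author's actual script. It assumes ANTHROPIC_API_KEY and SLACK_WEBHOOK_URL environment variables, jq installed on the runner, and an illustrative model name.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the "vibe-coded" review action described above.
# Assumptions: ANTHROPIC_API_KEY and SLACK_WEBHOOK_URL are set, jq is
# available, and the model id is illustrative.
set -euo pipefail

# Grab the latest commit, including its diff, as the text to review.
DIFF=$(git log -1 -p)

# Ask Claude for a review via the Anthropic Messages API.
REVIEW=$(curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d "$(jq -n --arg diff "$DIFF" '{
        model: "claude-sonnet-4-5",
        max_tokens: 1024,
        messages: [{role: "user",
                    content: ("Review this commit for bugs and style issues:\n\n" + $diff)}]
      }')" | jq -r '.content[0].text')

# Post the review to a Slack channel via an incoming webhook.
curl -s -X POST "$SLACK_WEBHOOK_URL" \
  -H 'content-type: application/json' \
  -d "$(jq -n --arg text "$REVIEW" '{text: $text}')"
```

Drop something like this into a GitHub Actions workflow step and that's the whole "product."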

9 months ago 25 votes
Zen Browser review and benchmark vs Chrome, Brave, Firefox and Safari

I'm looking for a new daily driver browser on my Mac. Chrome is a non-starter for me due to privacy concerns (Google's tracking empire is alive and well), and Edge is just... too much. Every update shoves another set of "features" down my throat — Copilot, discount coupons, Bing nonsense — things I have to disable again and again. No thanks.

I currently use Brave and I really want to like it, but something about it doesn't sit right with me. The constant crypto integration and some of the decisions around their search engine — it just feels like it's got an agenda. Arc? Well, Arc is dying now, so that's out.

Someone suggested Zen, a Firefox-based browser aiming to be an Arc-like alternative. That got me curious. And since I already had all these browsers installed, I figured: why not run some benchmarks and see how they stack up?

Benchmark Setup

All tests were run using Speedometer 3.0 on a MacBook M3 Pro. I tested in incognito/private mode with no extensions, except where the browser had built-in blockers enabled:

- Chrome: running uBlock Origin
- Brave: default built-in ad/privacy blocker enabled
- Safari: clean
- Firefox: clean
- Zen: clean

Results

- Chrome 132.0.6834.160 - 37.7
- Brave 1.74.51 - 37.6
- Safari 18.2 - 37.6
- Firefox 134.0.2 - 34.8
- Zen Browser 1.7.3b - 31.6

[Chart: Speedometer score by browser, higher is better]

A few takeaways: Chrome is (unsurprisingly) the fastest. Brave is essentially Chrome with a privacy skin, Leo AI, some crypto stuff, etc., and the Speedometer score reflects that. Firefox holds up well but is still behind the Chromium-based browsers. Not awful, but not amazing either. Zen, being Firefox-based, lags a bit further behind. If you want a Firefox alternative that looks different but runs about the same, it's an option. Otherwise, it's just Firefox with extra UI features (see below).

Side Note: 1Password Is a Performance Killer

One of the most surprising findings was how much 1Password's extension destroys Speedometer scores. Across all browsers, enabling it dropped my score by 10 points. No clue what it's doing under the hood, but it's heavy. Probably scans all inputs to shove a password into.

A (tiny) Zen review no one asked for

Zen is a very, very nice browser, but it has some rough edges:

- (nitpicking) It lacks standard macOS keyboard shortcuts — for example, Cmd+W should close the window when no tabs are left. There's a hidden setting to fix this, but seriously, just follow macOS conventions by default.
- No built-in adblocker; you have to install uBlock Origin like it's 2023 again (kidding).
- The dev tools are Firefox-based, and that says it all. JavaScript debugging is flaky (unreliable variable watch list, breakpoints sometimes get skipped), and reverse-engineering complex CSS can be a nightmare.

That said, Zen is a very solid contender, and some of its UI design choices are genuinely great! If you'd like to learn more, watch Theo's review.

10 months ago 74 votes
No, Wall Street, DeepSeek is not "far superior"

I mean, it is! But the whole story about the stock market reacting to the news about DeepSeek V3 and R1 is a fine example of the knee-jerk nature of mass consciousness in the era of clickbait economics. Briefly, point by point:

No, DeepSeek isn't "head and shoulders above" every other model. The results vary across benchmarks, but on average, GPT-4o and Gemini-2 are better. You can see this on ChatBot Arena, for example (Reddit thread). Even in the results published by DeepSeek's authors themselves (benchmark graph), you can see that in several tests the model lags behind GPT-4o from May 2024—which, mind you, is currently ranked 16th on ChatBot Arena.

No, training DeepSeek didn't cost $6 million, "100 times less than GPT-4." The $6 million figure refers only to the final training run of the published model. It doesn't include any prior experiments, earlier versions, or R&D costs—it's just the raw computational cost of that final run. And guess what? That figure is pretty much in line with models of the same class.

No, Nvidia did not deserve this hit. Not that we're shedding tears for them—they could use a push to lower hardware prices. And let's not forget that DeepSeek was still trained on Nvidia's own hardware. And no, their GPUs aren't suddenly obsolete. DeepSeek's computational budget is fairly standard for training, and inference for such a massive model (reminder: it's an MoE with 671 billion parameters, 37 billion of which are active per generated token) requires a ton of hardware. Inference costs are roughly on par with a 70B dense model (see the back-of-envelope math at the end of this post). Naturally, they'll scale this success by throwing even more hardware at it and making the model bigger. Not to mention that DeepSeek makes LLMs more accessible for on-prem customers, which means smaller businesses will buy more GPUs—still good for NVDA, am I right?

Does this mean the model is bad? No, the model is very, VERY good. It outperforms the vast majority of open-source models, which is fantastic. DeepSeek used 8-bit floating-point numbers (FP8) throughout the entire training process, sacrificing some precision to save memory and boost performance. Additionally, they employed a multi-token prediction system and innovative GPU clustering/connectivity techniques. These are clever and practical engineering choices that undoubtedly contributed to their success.

In the end, though, stocks will recover, ideas will spread, models will get better, and progress will march on (hopefully).
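To unpack the "roughly on par with a 70B dense model" claim, here is the usual back-of-envelope arithmetic (my own rough numbers, not figures from DeepSeek's paper): per-token forward compute scales with active parameters at about 2 FLOPs each, while the memory needed to hold the weights scales with total parameters.

```latex
% Back-of-envelope, assuming ~2 FLOPs per active parameter per generated token
\begin{aligned}
\text{DeepSeek compute/token}   &\approx 2 \times 37\,\mathrm{B} \approx 74\ \mathrm{GFLOPs} \\
\text{70B dense compute/token}  &\approx 2 \times 70\,\mathrm{B} \approx 140\ \mathrm{GFLOPs} \\
\text{DeepSeek weights (FP8)}   &\approx 671\,\mathrm{B} \times 1\ \mathrm{byte} \approx 671\ \mathrm{GB} \\
\text{70B dense weights (FP16)} &\approx 70\,\mathrm{B} \times 2\ \mathrm{bytes} \approx 140\ \mathrm{GB}
\end{aligned}
```

So compute per token is actually about half that of a 70B dense model, but holding the weights takes several times the GPU memory, which is why serving it still demands a rack of hardware.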

10 months ago 32 votes
I'm finally dumping Visual Studio

After years of working with the "big" Visual Studio, I've had enough. It's buggy, slow, and frustrating, and I've decided to make the switch to Visual Studio Code. While as a C# developer I'm still unsure if I can replicate every aspect of my workflow in VS Code, I'm willing to give it a shot—and so far, I'm really impressed.

1. Performance

Visual Studio 2022 performance has been a constant issue. It's sluggish and feels increasingly bloated with every new update. It's like watching paint dry every time I open a project. In contrast, Visual Studio Code feels lightweight and incredibly fast. The first time I opened my large project in VS Code, I was shocked — it loaded in less than a second, literally, even with extensions like "C#" and "C# Dev Kit" installed.

2. Better Developer Experience

Running dotnet watch run in VS Code's terminal (sketched at the end of this post) has been a revelation. It's fast, responsive, and actually works consistently. Visual Studio's "hot reload" feature, on the other hand, has been a constant source of frustration for me. Half the time it doesn't work, and I'm left restarting debugging sessions over and over again. I can't tell you how many hours I've lost to that unreliable feature.

3. Fewer Bugs, Less Frustration

The minor editor bugs in Visual Studio have been endless and exhausting. I remember one particularly infuriating bug where syntax highlighting would break in Razor and .cshtml files whenever I used certain HTML tags or even just adjusted the indentation. It drove me up the wall! Not to mention the bizarre issues with JavaScript formatting that never seemed to get fixed. Since switching to VS Code, I've encountered far fewer bugs. It just feels like an environment that respects my time and sanity.

4. A Thriving Ecosystem

The VS Code extension ecosystem is alive and thriving. Need Tailwind CSS IntelliSense? There's an extension for that, and it works beautifully. Want to visualize your Git history for a particular line (a better version of git-blame)? The Git History extension has got you covered. In "big" Visual Studio, I'd report issues through the "feedback hub" and wait months — or even years — for a response. With VS Code, the community is constantly contributing new tools and improvements. It's energizing (and sometimes exhausting) to be part of such an active ecosystem.

5. Cross-Platform Flexibility

One of the biggest advantages I've found with Visual Studio Code is its true cross-platform support. Whether I'm on my Windows gaming rig at home or my MacBook while traveling, VS Code runs smoothly and keeps my workflow consistent. Visual Studio's limited macOS version just doesn't cut it for me. Being able to switch between machines without missing a beat has been a game-changer.

I have to admit, I was skeptical at first. I've always had a bit of a grudge against Electron-based apps — they've often felt sluggish and bloated. But VS Code has completely changed my perspective. It's fast, responsive, and flexible enough to let me build the development environment that works best for me. Switching to VS Code has rekindled my passion for coding; it reminds me why I fell in love with development in the first place. While Visual Studio will always have its strengths, I need a tool that evolves with me—not one that holds me back.
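For reference, the inner loop being praised above is just this (the project directory is hypothetical; the command is the standard .NET CLI):

```bash
# The hot-reload loop described above; "MyWebApp" is a hypothetical
# project directory, not one named in the post.
cd MyWebApp
dotnet watch run   # rebuilds and hot-reloads on every file save
```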

a year ago 48 votes

More in programming

Engineering excellence starts on edge

The best engineering teams take control of their tools. They help develop the frameworks and libraries they depend on, and they do this by running production code on edge — the unreleased next version. That's where progress is made; that's where participation matters most.

This sounds scary at first. Edge? Isn't that just another word for danger? What if there's a bug?! Yes, what if? Do you think bugs either just magically appear or disappear? No, they're put there by programmers and removed by the very same. If you want bug-free frameworks and libraries, you have to work for it, but if you do, the reward for your responsibility is increased engineering excellence.

Take Rails 8.1 as an example. We just released the first beta version at Rails World, but Shopify, GitHub, 37signals, and a handful of other frontier teams have already been running this code in production for almost a year. Of course, there were bugs along the way, but good automated testing and diligent programmers caught virtually all of them before they went to production.

It didn't always use to be this way. Once upon a time, I felt like I had one of the only teams running Rails on edge in production. But now two of the most important web apps in the world are doing the same! At an incredible scale and criticality. This has allowed both of them, and the few others with the same frontier ambition, to foster a truly elite engineering culture. One that isn't just a consumer of open source software, but a real-time co-creator.

This is a step function in competence and prowess for any team. It's also an incredible motivation boost. When your programmers are able to directly influence the tools they're working with, they're far more likely to do so, and thus they go deeper, learn more, and create connections to experts in the same situation elsewhere. But this requires being able to immediately use the improvements or bug fixes they help devise. It doesn't work if you sit around waiting patiently for the next release before you dare dive in. (Mechanically, getting on edge is simple; see the sketch below.)

Far more companies could do this. Far more companies should do this. Whether it's with Ruby, Rails, Omarchy, or whatever you're using, your team could level up by getting more involved, taking responsibility for finding issues on edge, and reaping the reward of excellence in the process. So what are you waiting on?
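For the Rails case, "running on edge" can be as simple as pointing Bundler at the framework's main branch. A minimal sketch using standard Bundler commands, not necessarily how the teams named above manage it:

```bash
# Track unreleased Rails ("edge") instead of the latest gem release.
# Sketch only: pin the app to the main branch of rails/rails.
bundle add rails --git "https://github.com/rails/rails" --branch "main"

# Pull in the newest edge commits regularly so issues surface early:
bundle update rails
```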

14 hours ago 4 votes
Dreams of Late Summer

Here on a summer night in the grass and lilac smell
Drunk on the crickets and the starry sky,
Oh what fine stories we could tell
With this moonlight to tell them by.

A summer night, and you, and paradise,
So lovely and so filled with grace,
Above your head, the universe has hung its …

Continue reading Dreams of Late Summer →

an hour ago 2 votes
Apologies and forgiveness

The first in a series of posts about doing things the right way

yesterday 7 votes
Understanding Bazel remote caching

A deep dive into the Action Cache, the CAS, and the security issues that arise from using Bazel with a remote cache but without remote execution

yesterday 8 votes
Trying to Make Sense of Casing Conventions on the Web

(I present to you my stream of consciousness on the topic of casing as it applies to the web platform.)

I'm reading about the new command and commandfor attributes — which I'm super excited about, declarative behavior invocation in HTML? YES PLEASE!! — and one thing that strikes me is the casing in these APIs. For example, the command attribute has a variety of values in HTML which correspond to APIs in JavaScript. The show-popover attribute value maps to .showPopover() in JavaScript, hide-popover maps to .hidePopover(), etc. So what we have is:

- lowercase attribute names, e.g. commandfor="..."
- kebab-case attribute values, e.g. show-popover
- camelCase JS counterparts, e.g. showPopover()

After thinking about this a little more, I remember that HTML attribute names are case-insensitive, so the browser will normalize them to lowercase during parsing. Given that, I suppose you could write commandFor="..." but it's effectively the same. Ok, lowercase attribute names in HTML makes sense. The related popover attributes follow the same convention:

- popovertarget
- popovertargetaction

And there are many other attribute names in HTML that are lowercase, e.g.:

- maxlength
- novalidate
- contenteditable
- autocomplete
- formenctype

So that all makes sense. But wait, there are some attribute names with hyphens in them, like aria-label="..." and data-value="...". So why isn't it command-for="..."? Well, upon further reflection, I suppose those attributes were named that way for extensibility's sake: they are essentially wildcard attributes representing a family of attributes under the same namespace, aria-* and data-*. But wait, isn't that an argument for doing popover-target and popover-target-action? Or command and command-for? But wait (I keep saying that), there are kebab-case attribute names in HTML — like http-equiv on the <meta> tag, or accept-charset on the form tag — but those seem more like legacy exceptions.

It seems like the only answer here is: there is no rule. Naming is driven by convention, and decisions are made on a case-by-case basis. But if I had to summarize, the default casing for new APIs tends to follow the rules I outlined at the start (and what's reflected in the new command APIs):

- lowercase for HTML attribute names
- kebab-case for HTML attribute values
- camelCase for JS counterparts

Let's not even get into SVG attribute names. We need one of those "bless this mess" signs that we can hang over the World Wide Web.

2 days ago 10 votes