More from Founder's blog
TL;DR The "build vs. buy" equation has flipped. Businesses used to buy SaaS because it was cheaper than building their own. AI has changed that—building your own is now more affordable than ever. The discovery problem. AI recommendations default to well-established solutions. Think SEO is a long game? Try LLM SEO. Everyone worries about AI taking developer jobs, but what if AI wipes out the entire off-the-shelf software industry? The "Why Buy?" Problem Six months ago, we needed an AI-powered code review tool. We explored several options and ultimately "vibe-coded" our own GitHub Action—a simple Bash script that takes a git log, sends it to Claude via curl, and posts the results to Slack. Done. The best part? AI wrote the entire thing faster than it would take to sign up for a SaaS. How long until every company realizes they can do this? Need a simple "CRUD" CRM with JIRA-style tasks? Done. Need a mobile time-tracking app for remote employees? AI will spit out a React Native iOS build in minutes. Why pay for yet another SaaS when you can "vibe-code" something in a week? And mark my words, LLM providers are one step away from actually hosting the code they generate. Who needs to spawn an AWS server if you can just ask OpenAI to host the code it just wrote? - "Hey Siri! build me a Basecamp, but with green buttons, also register a domain, spawn a server and host it all there, charge this credit card when you're done" - "Absolutely, that'd be $1.17 per hour" The Discovery Problem AI doesn’t just make it easier to build software—it makes it harder for new SaaS products to get discovered. When you ask AI for recommendations, it defaults to the biggest names. And not just in SaaS, by the way, in open source too. Imagine launching a killer new JS framework today. AI coding assistants and tools like Cursor will just default to React anyway. And not even the latest version of it! In a recent tweet Adam Wathan, the creator of Tailwind, asked: "Has anyone migrated to Tailwind 4.0 yet?" The most popular response was "Nah! we're still waiting for LLMs to learn it." AI isn’t just "the next internet moment." It’s more like "the social network moment." Echo chambers get louder, big names get bigger, and smaller ones disappear into the noise. What Can SaaS Companies Do? 1. Become an Industry Standard Or at least a "go-to" product in a niche. If your app becomes something people mention on their CVs or job descriptions, you win. Examples: Slack. HubSpot. Salesforce etc. A salesperson moving to a new company simply expects Salesforce to be there. That kind of lock-in ensures survival. 2. Build Moats: Infrastructure & Vendor Lock-In SaaS products that are just CRUD apps will die. The ones that survive will own infrastructure or at least some part of it. Instead of building another AI voice assistant, create one with built-in VoIP and provide landline numbers to customers. Examples: Transistor.fm – Not just a SaaS, but also a podcast hosting and publishing pipeline. Postmark (or any transactional email service really) – yes, AI can code an email-sending app, but it can't get you a 10-year old high-reputation sender IP address trusted by Gmail and Outlook. SignWell, SavvyCal and similar "inter-business" file-sharing, communication & escrow apps that own the communication part (and frankly, are literally easier to use than vibe-code your own). But prepare for tthousands of clones. Which SaaS Will Die First? 
Side-project-scale, "one simple tool" SaaS products used to be easy wins—Calendly replacements, form builders, schedulers, basic dashboards, simple workflow apps. Those days are over. If AI can generate it in an afternoon, no one is paying a subscription for it. Oh, and "no code" is toast too. The SaaS graveyard is about to get a lot more crowded. I give it 4 years.

Software consulting is making a comeback, though. Someone has to clean up the vibe-coded chaos.
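P.S. For the curious, here's roughly what the code-review Action mentioned above boils down to. This is a minimal sketch under assumptions, not our exact script: it presumes ANTHROPIC_API_KEY and SLACK_WEBHOOK_URL are set in the workflow environment, jq is installed on the runner, and the model name and prompt are placeholders you'd swap for whatever is current.

```bash
#!/usr/bin/env bash
# Minimal sketch of an AI code-review step: git log -> Claude -> Slack.
# Assumes ANTHROPIC_API_KEY and SLACK_WEBHOOK_URL are set, and jq is installed.
set -euo pipefail

# Take the latest commit's message and diff, capped to stay within the context window.
DIFF=$(git log -1 --patch | head -c 100000)

# Send it to Claude via the Anthropic Messages API and extract the text reply.
REVIEW=$(jq -n --arg diff "$DIFF" '{
    model: "claude-3-5-sonnet-latest",
    max_tokens: 1024,
    messages: [{role: "user", content: ("Review this commit briefly:\n\n" + $diff)}]
  }' \
  | curl -s https://api.anthropic.com/v1/messages \
      -H "x-api-key: $ANTHROPIC_API_KEY" \
      -H "anthropic-version: 2023-06-01" \
      -H "content-type: application/json" \
      --data-binary @- \
  | jq -r '.content[0].text')

# Post the review to a Slack channel via an incoming webhook.
jq -n --arg text "$REVIEW" '{text: $text}' \
  | curl -s -X POST -H "content-type: application/json" --data-binary @- \
      "$SLACK_WEBHOOK_URL"
```

Wire that up to run on push and you have a code reviewer in your Slack. That's the whole point: the moat for this kind of tool is gone.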
TL;DR The "build vs. buy" equation has flipped. Businesses used to buy SaaS because it was cheaper than building their own. AI has changed that—building your own is now more affordable than ever. The discovery problem. AI recommendations default to well-established solutions. Think SEO is a long game? Try LLM SEO. Everyone worries about AI taking developer jobs, but what if AI wipes out the entire off-the-shelf software industry? The "Why Buy?" Problem Six months ago, we needed an AI-powered code review tool. We explored several options, tested them all, and ultimately "vibe-coded" our own GitHub Action—a simple Bash script that takes a git log, sends it to Claude via curl, and posts the results to Slack. Done. The best part? AI wrote the entire thing—faster than it took to sign up for another SaaS. How long until every company realizes they can do this? Need a simple CRM with JIRA-style tasks? Done. Need a mobile time-tracking app for remote employees? AI will spit out a React Native iOS build in minutes. Why pay for yet another SaaS when you can "vibe-code" something in a week? The Discovery Problem AI doesn’t just make it easier to build software—it makes it harder for new SaaS products to get discovered. When you ask AI for recommendations, it defaults to the biggest names. Here’s an open-source analogy: imagine launching a game-changing JS framework today. AI coding assistants and tools like Cursor will still default to React. And not even the latest version! Adam Wathan recently asked on Twitter, "Has anyone migrated to Tailwind 4.0 yet?" The most popular response was "Nah! we're still waiting for LLMs to learn it." AI isn’t just "the next internet moment." It’s more like "the social network moment." Echo chambers get louder, big names get bigger, and smaller ones disappear into the noise. What Can SaaS Companies Do? 1. Become an Industry Standard Or at least a "go-to" product in a niche. If your app becomes something people mention on their CVs or job descriptions, you win. Examples: Slack. HubSpot. Salesforce etc. A salesperson moving to a new company simply expects Salesforce to be there. That kind of lock-in ensures survival. 2. Build Moats: Infrastructure & Vendor Lock-In SaaS products that are just CRUD apps will die. The ones that survive will own infrastructure. Examples: Transistor.fm – Not just a SaaS, but also a podcast hosting and distribution pipeline. Postmark (or any transactional email service really) – AI can code an email-sending app, but it can't get you a 10-year old high-reputation sender IP address trusted by Gmail and Outlook. SignWell and similar B2B file-sharing apps (literally easier to use then code your own). Don't just build another CRUD sales CRM, build a CRM with an inbound VoIP number – because AI can’t replace telco infrastructure (yet). Which SaaS Will Die First? Side-project-scale, "one simple tool" SaaS products that used to be easy wins—Calendly replacements, form builders, schedulers, basic dashboards, simple workflow apps—those days are over. If AI can generate it in an afternoon, no one is paying a subscription for it. Oh, and "no code" is toasted too. The SaaS graveyard is about to get a lot more crowded. I give it 4 years. Software consulting is making a comeback though. Someone has to clean up the vibe-coded chaos.
I'm looking for a new daily driver browser on my Mac. Chrome is a non-starter for me due to privacy concerns (Google's tracking empire is alive and well), and Edge is just... too much. Every update shoves another set of “features” down my throat — Copilot, discount coupons, Bing nonsense — things I have to disable again and again. No thanks.

I currently use Brave and I really want to like it, but something about it doesn't sit right with me. The constant crypto integration, some of the decisions around their search engine — it just feels like it's got an agenda. Arc? Well, Arc is dying now, so that's out. Someone suggested Zen, a Firefox-based browser aiming to be an Arc-like alternative. That got me curious. And since I already had all these browsers installed, I figured: why not run some benchmarks and see how they stack up?

Benchmark Setup

All tests were run using Speedometer 3.0 on a MacBook M3 Pro, in incognito/private mode with no extensions. The only exceptions were ad/privacy blockers, built-in or otherwise:

- Chrome: running uBlock Origin
- Brave: default built-in ad/privacy blocker enabled
- Safari: clean
- Firefox: clean
- Zen: clean

Results

- Chrome 132.0.6834.160 - 37.7
- Brave 1.74.51 - 37.6
- Safari 18.2 - 37.6
- Firefox 134.0.2 - 34.8
- Zen Browser 1.7.3b - 31.6

[Chart: Browser bench, Speedometer score per browser (higher is better).]

A few takeaways:

- Chrome is (unsurprisingly) the fastest.
- Brave is essentially Chrome with a privacy skin, Leo AI, some crypto stuff, etc., and the Speedometer score reflects that.
- Firefox holds up well but is still behind Chromium-based browsers. Not awful, but not amazing either.
- Zen, being Firefox-based, lags a bit further behind. If you want a Firefox alternative that looks different but runs about the same, it's an option. Otherwise, it's just Firefox with extra UI features (see below).

Side Note: 1Password Is a Performance Killer

One of the most surprising findings was how much 1Password's extension destroys Speedometer scores. Across all browsers, enabling it dropped my score by 10 points. No clue what it's doing under the hood, but it's heavy. Probably scans every input field for somewhere to shove a password into.

A (tiny) Zen review no one asked for

Zen is a very, very nice browser, but it has some rough edges:

- (Nitpicking) It lacks standard macOS keyboard shortcuts — for example, Cmd+W should close the window when no tabs are left. There's a hidden setting to fix this, but seriously, just follow macOS conventions by default.
- No built-in ad blocker, so you have to install uBlock Origin like it's 2023 again (kidding).
- The dev tools are Firefox-based, and that says it all. JavaScript debugging is flaky (unreliable variable watch list, breakpoints sometimes get skipped), and reverse-engineering complex CSS can be a nightmare.

That said, Zen is a very solid contender, and some of its UI design choices are genuinely great! If you'd like to learn more, watch Theo's review.
I mean, it is! But the whole story about the stock market reacting to the news about DeepSeek V3 and R1 is a fine example of the knee-jerk nature of mass consciousness in the era of clickbait economics. Briefly, point by point:

No, DeepSeek isn’t “head and shoulders above” every other model. The results vary across benchmarks, but on average, GPT-4o and Gemini-2 are better. You can see this on ChatBot Arena, for example (Reddit thread). Even in the results published by DeepSeek’s authors themselves (benchmark graph), you can see that in several tests the model lags behind GPT-4o from May 2024—which, mind you, is currently ranked 16th on ChatBot Arena.

No, training DeepSeek didn’t cost $6 million, “100 times less than GPT-4.” The $6 million figure refers only to the final training run of the published model. It doesn’t include any prior experiments, earlier versions, or R&D costs; it's just the raw computational cost of that final run. And guess what? That figure is pretty much in line with models of the same class.

No, Nvidia did not deserve this hit. Not that we’re shedding tears for them (they could use a push to lower hardware prices). And let's not forget that DeepSeek was still trained on Nvidia’s own hardware. And no, their GPUs aren’t suddenly obsolete. DeepSeek’s computational budget is fairly standard for training, and inference for such a massive model (reminder: it’s an MoE with 671 billion parameters, 37 billion of which are active per token generated) requires a ton of hardware. Inference costs are roughly on par with a 70B dense model. Naturally, they’ll scale this success by throwing even more hardware at it and making the model bigger. Not to mention that DeepSeek makes LLMs more accessible for on-prem customers, which means smaller businesses will buy more GPUs. That's still good for NVDA, am I right?

Does this mean the model is bad? No, the model is very, VERY good. It outperforms the vast majority of open-source models, which is fantastic. DeepSeek used 8-bit floating-point numbers (FP8) throughout the entire training process, sacrificing some precision to save memory and boost performance. Additionally, they employed a multi-token prediction system and innovative GPU clustering/connectivity techniques. These are clever and practical engineering choices that undoubtedly contributed to their success.

In the end, though, stocks will recover, ideas will spread, models will get better, and progress will march on (hopefully).
After years of working with the "big" Visual Studio, I've had enough. It's buggy, slow, and frustrating, and I've decided to make the switch to Visual Studio Code. As a C# developer, I'm still unsure if I can replicate every aspect of my workflow in VS Code, but I'm willing to give it a shot—and so far, I'm really impressed.

1. Performance

Visual Studio 2022's performance has been a constant issue. It's sluggish and feels increasingly bloated with every new update. It's like watching paint dry every time I open a project. In contrast, Visual Studio Code feels lightweight and incredibly fast. The first time I opened my large project in VS Code, I was shocked: it loaded in less than a second, literally, even with extensions like "C#" and "C# Dev Kit" installed.

2. Better Developer Experience

Running dotnet watch run in VS Code's terminal has been a revelation. It's fast, responsive, and actually works consistently. Visual Studio's "hot reload" feature, on the other hand, has been a constant source of frustration for me. Half the time it doesn't work, and I'm left restarting debugging sessions over and over again. I can't tell you how many hours I've lost to that unreliable feature.

3. Fewer Bugs, Less Frustration

The minor editor bugs in Visual Studio have been endless and exhausting. I remember one particularly infuriating bug where syntax highlighting would break in Razor and .cshtml files whenever I used certain HTML tags or even just adjusted the indentation. It drove me up the wall! Not to mention the bizarre issues with JavaScript formatting that never seemed to get fixed. Since switching to VS Code, I've encountered far fewer bugs. It just feels like an environment that respects my time and sanity.

4. A Thriving Ecosystem

The VS Code extension ecosystem is alive and thriving. Need Tailwind CSS IntelliSense? There's an extension for that, and it works beautifully. Want to visualize the Git history of a particular line (a better version of git blame)? The Git History extension has got you covered. In "big" Visual Studio, I'd report issues through the "feedback hub" and wait months — or even years — for a response. With VS Code, the community is constantly contributing new tools and improvements. It's energizing (and sometimes exhausting) to be part of such an active ecosystem.

5. Cross-Platform Flexibility

One of the biggest advantages I've found with Visual Studio Code is its true cross-platform support. Whether I'm on my Windows gaming PC at home or my MacBook while traveling, VS Code runs smoothly and keeps my workflow consistent. Visual Studio's limited macOS version just doesn't cut it for me. Being able to switch between machines without missing a beat has been a game-changer.

I have to admit, I was skeptical at first. I've always had a bit of a grudge against Electron-based apps—they've often felt sluggish and bloated. But VS Code has completely changed my perspective. It's fast, responsive, and flexible enough to let me build the development environment that works best for me. Switching to VS Code has rekindled my passion for coding; it reminds me why I fell in love with development in the first place. While Visual Studio will always have its strengths, I need a tool that evolves with me—not one that holds me back.
More in programming
Discover how The Epic Programming Principles can transform your web development decision-making, boost your career, and help you build better software.
Denmark has been reaping lots of delayed accolades for its relatively strict immigration policy lately. The Swedes and the Germans in particular are now eager to take inspiration from The Danish Model, given their predicaments. These are the very same countries that until recently condemned Denmark for lacking the open-arms, open-borders policies they themselves championed as Moral Superpowers.

But even in Denmark, thirty years after public opposition to mass immigration started getting real political representation, the consequences of culturally incompatible descendants of MENAPT immigrants continue to stress the high-trust societal model. Here are just three major cases that have been covered in the Danish media in 2025 alone:

- Danish public schools are increasingly struggling with violence and threats against students and teachers, primarily from descendants of MENAPT immigrants. In schools with 30% or more immigrants, violence is twice as prevalent. This is causing a flight to private schools among parents who can afford it (including some Syrians!). Some teachers are quitting the profession as a result, saying "the Quran runs the classroom".
- Danish women are increasingly feeling unsafe in the nightlife. The mayor of the country's third-largest city, Odense, says he knows why: "It's groups of young men with an immigrant background that's causing it. We might as well be honest about that." But unfortunately, the only suggestion he had for dealing with the problem was that "when [the women] meet these groups... they should take a big detour around them".
- A soccer club from the infamous ghetto area of Vollsmose got national attention because every other team in their league refused to play them, due to the team's long history of violent assaults and death threats against opposing teams and referees. Bizarrely, this led to the team topping its division, because they'd "win" every forfeited match.

Problems of this sort have existed in Denmark for well over thirty years, so in a way, none of this should be surprising. But it actually is, because it shows that long-term assimilation just isn't happening at a scale that can tackle these problems. In fact, the data shows the opposite: descendants of MENAPT immigrants are more likely to be violent and troublesome than their parents. That's an explosive point, because it blows up the thesis that time will solve these problems; it shows instead that time actually makes them worse. And then what?

This is particularly pertinent in the analysis of Sweden. After the "far right" party of the Sweden Democrats got into government, new immigrant arrivals have plummeted. But unfortunately, the net share of immigrants is still increasing, in part because of family reunifications, and thus the problems continue. Meaning even if European countries "close the borders", they're still condemned to deal with the damning effects of maladjusted MENAPT immigrant descendants for decades to come. If the intervention stops there.

There are no easy answers here. Obviously, if you're in a hole, you should stop digging. And Sweden has done just that. But just because you aren't compounding the problem doesn't mean you've found a way out. Denmark proves to be both a positive example of minimizing the digging and a cautionary tale that the hole is still there.
One rabbit hole I can never resist going down is finding the original creator of a piece of art. This sounds simple, but it’s often quite difficult. The Internet is a maze of social media accounts that only exist to repost other people’s art, usually with minimal or non-existent attribution. A popular image spawns a thousand copies, each a little further from the original. Signatures get cropped, creators’ names vanish, and we’re left with meaningless phrases like “no copyright intended”, as if that magically absolves someone of artistic theft.

Why do I do this? I’ve always been a bit obsessive, a bit completionist. I’ve worked in cultural heritage for eight years, which has made me more aware of copyright and more curious about provenance. And it’s satisfying to know I’ve found the original source, that I can’t dig any further.

This takes time. It’s digital detective work, using tools like Google Lens and TinEye, and it’s not always easy or possible. Sometimes the original pops straight to the top, but other times it takes a lot of digging to find the source of an image.

So many of us have become accustomed to art as an endless, anonymous stream of “content”. A beautiful image appears in our feed, we give it a quick heart, and scroll on, with no thought for the human who sweated blood and tears to create it. That original artist feels distant, disconnected. Whatever benefit they might get from the “exposure” of their work going viral, they don’t get any of it if their name has been removed first.

I came across two examples recently that remind me it’s not just artists who miss out – it’s everyone who enjoys art.

I saw a photo of some traffic lights on Tumblr. I love their misty, nighttime aesthetic, the way the bright colours of the lights cut through the fog, the totality of the surrounding darkness. But there was no name – somebody had just uploaded the image to their Tumblr page, it was reblogged a bunch of times, and then it appeared on my dashboard. Who took it? I used Google Lens to find the original photographer: Lucas Zimmerman. Then I discovered it was part of a series. And there was a sequel. I found interviews. Context. Related work. I found all this cool stuff, but only because I knew Lucas’s name.

Traffic Lights, by Lucas Zimmerman. Published on Behance.net under a CC BY‑NC 4.0 license, and reposted here in accordance with that license.

The second example was a silent video of somebody making tiny chess pieces, just captioned “wow”. It was clearly an edit of another video, with fast-paced cuts to accommodate a short attention span – and again, no attribution. This one was a little harder to find – I had to search several frames in Google Lens before I found a summary on a Russian website, which linked to a YouTube video by metalworker and woodworker Левша (Levsha). That video is four times longer than the cut-up version I saw, in higher resolution, and with commentary from the original creator. I don’t speak Russian, but YouTube has auto-translated subtitles. Now I know how this amazing set was made, and I have a much better understanding of the materials and techniques involved. (This includes the delightful name Wenge wood, which I’d never heard before.)

https://youtube.com/watch?v=QoKdDK3y-mQ

A piece of art is more than just a single image or video. It’s a process, a human story. When art is detached from its context and creator, we lose something fundamental.
Creators lose the chance to benefit from their work, and we lose the opportunity to engage with it in a deeper way. We can’t learn how it was made, find their other work, or discover how to make similar art for ourselves.

The Internet has done many wonderful things for art, but it’s also a machine for endless copyright infringement. It’s not just about generative AI and content scraping – those are serious issues, but this problem existed long before any of us had heard of ChatGPT. It’s a thousand tiny paper cuts. How many of us have used an image from the Internet because it showed up in a search, without a second thought for its creator? When Google Images says “images may be subject to copyright”, how many of us have really thought about what that means?

Next time you want to use an image from the web, check whether it’s shared under a license that allows reuse, and make sure you include the appropriate attribution – and if not, look for a different image.

Finding the original creator is hard, sometimes impossible. The Internet is full of shadows: copies of things that went offline years ago. But when I succeed, it feels worth the effort – both for the original artist and for myself. When I read a book or watch a TV show, the credits guide me to the artists, and I can appreciate both them and the rest of their work. I wish the Internet was more like that. I wish the platforms we rely on put more emphasis on credit and attribution, and on the people behind art.

The next time an image catches your eye, take a moment. Who made this? What does it mean? What’s their story?
When the iPhone first appeared in 2007, Microsoft was sitting pretty with their mobile strategy. They'd been early to the market with Windows CE, and they were fast-following the iPod with their Zune. They also had the dominant operating system, the dominant office package, and control of the enterprise. The future on mobile must have looked so bright!

But of course, now we know it wasn't. Steve Ballmer infamously dismissed the iPhone with a chuckle, believing all of Microsoft's past glory would guarantee them mobile victory. He wasn't worried at all. He clearly should have been!

After reliving that Ballmer moment, it's uncanny to watch this CNBC interview from one year ago with Johny Srouji and John Ternus from Apple on their AI strategy. Ternus even repeats the chuckle!! Exuding the same delusional confidence that lost Ballmer's Microsoft any serious part in the mobile game. But somehow, Apple's problems with AI seem even more dire, because there's apparently no one steering the ship.

Apple has been promising customers a bag of vaporware since last fall, and they're nowhere close to being able to deliver on the shiny concept demos. The ones that were going to make Apple Intelligence worthy of its name, and not just terrible image generation that is years behind the state of the art. Nobody at Apple seems able or courageous enough to face the music: Apple Intelligence sucks. Siri sucks. None of the vaporware is anywhere close to happening. Yet as late as last week, you had Cook promoting the new MacBook Air with "Apple Intelligence". Yikes.

This is partly down to the org chart. John Giannandrea is Apple's VP of ML/AI, and he reports directly to Tim Cook. He's been in the seat since 2018. But Cook evidently does not have the product savvy to tell bullshit from benefit, so he keeps giving Giannandrea more rope. Now the fella has hung Apple's reputation on vaporware, promised all iPhone 16 customers something magical that just won't happen, and even spec-bumped all their devices with more RAM for nothing but diminished margins. Ouch.

This is what regression to the mean looks like. This is what fiefdom management looks like. This is what having a company run by a logistics guy looks like. Apple needs a leadership reboot, stat. That asterisk is a stain.