I'm looking for a new daily driver browser on my Mac. Chrome is a non-starter for me due to privacy concerns (Google's tracking empire is alive and well), and Edge is just... too much. Every update shoves another set of “features” down my throat — Copilot, discount coupons, Bing nonsense — things I have to disable again and again. No thanks. I currently use Brave and I really want to like it, but something about it doesn't sit right with me. The constant crypto integration, some of the decisions around their search engine — it just feels like it's got an agenda. Arc? Well, Arc is dying now, so that's out. Someone suggested Zen, which is a Firefox-based browser aiming to be an Arc-like alternative. That got me curious. And since I already had all these browsers installed, I figured: why not run some benchmarks and see how they stack up?

Benchmark Setup

All tests were run using Speedometer 3.0 on a MacBook M3 Pro. I tested in incognito/private mode with no...
9 months ago


More from Founder's blog

Will AI destroy B2B SaaS?

TL;DR The "build vs. buy" equation has flipped. Businesses used to buy SaaS because it was cheaper than building their own. AI has changed that—building your own is now more affordable than ever. The discovery problem. AI recommendations default to well-established solutions. Think SEO is a long game? Try LLM SEO. Everyone worries about AI taking developer jobs, but what if AI wipes out the entire off-the-shelf software industry? The "Why Buy?" Problem Six months ago, we needed an AI-powered code review tool. We explored several options and ultimately "vibe-coded" our own GitHub Action—a simple Bash script that takes a git log, sends it to Claude via curl, and posts the results to Slack. Done. The best part? AI wrote the entire thing faster than it would take to sign up for a SaaS. How long until every company realizes they can do this? Need a simple "CRUD" CRM with JIRA-style tasks? Done. Need a mobile time-tracking app for remote employees? AI will spit out a React Native iOS build in minutes. Why pay for yet another SaaS when you can "vibe-code" something in a week? And mark my words, LLM providers are one step away from actually hosting the code they generate. Who needs to spawn an AWS server if you can just ask OpenAI to host the code it just wrote? - "Hey Siri! build me a Basecamp, but with green buttons, also register a domain, spawn a server and host it all there, charge this credit card when you're done" - "Absolutely, that'd be $1.17 per hour" The Discovery Problem AI doesn’t just make it easier to build software—it makes it harder for new SaaS products to get discovered. When you ask AI for recommendations, it defaults to the biggest names. And not just in SaaS, by the way, in open source too. Imagine launching a killer new JS framework today. AI coding assistants and tools like Cursor will just default to React anyway. And not even the latest version of it! In a recent tweet Adam Wathan, the creator of Tailwind, asked: "Has anyone migrated to Tailwind 4.0 yet?" The most popular response was "Nah! we're still waiting for LLMs to learn it." AI isn’t just "the next internet moment." It’s more like "the social network moment." Echo chambers get louder, big names get bigger, and smaller ones disappear into the noise. What Can SaaS Companies Do? 1. Become an Industry Standard Or at least a "go-to" product in a niche. If your app becomes something people mention on their CVs or job descriptions, you win. Examples: Slack. HubSpot. Salesforce etc. A salesperson moving to a new company simply expects Salesforce to be there. That kind of lock-in ensures survival. 2. Build Moats: Infrastructure & Vendor Lock-In SaaS products that are just CRUD apps will die. The ones that survive will own infrastructure or at least some part of it. Instead of building another AI voice assistant, create one with built-in VoIP and provide landline numbers to customers. Examples: Transistor.fm – Not just a SaaS, but also a podcast hosting and publishing pipeline. Postmark (or any transactional email service really) – yes, AI can code an email-sending app, but it can't get you a 10-year old high-reputation sender IP address trusted by Gmail and Outlook. SignWell, SavvyCal and similar "inter-business" file-sharing, communication & escrow apps that own the communication part (and frankly, are literally easier to use than vibe-code your own). But prepare for tthousands of clones. Which SaaS Will Die First? 
Side-project-scale, "one simple tool" SaaS products that used to be easy wins—form builders, schedulers, basic dashboards, simple workflow apps—those days are over. If AI can generate it in an afternoon, no one is paying a subscription for it. Oh, and "no code" is toasted too. The SaaS graveyard is about to get a lot more crowded. I give it 4 years. Software consulting is making a comeback though. Someone has to clean up the vibe-coded chaos.
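A minimal sketch of that code-review pipeline, in Python rather than the original Bash-plus-curl, assuming placeholder env var names and a hypothetical model alias:

# Rough sketch of the "vibe-coded" review pipeline described above.
# Assumptions (not from the post): env var names, the model alias, and
# the use of `requests`; the real thing was a Bash script using curl.
import os
import subprocess
import requests

def recent_changes(n: int = 20) -> str:
    # Grab the last n commits with their diffs, like `git log -p`.
    return subprocess.check_output(["git", "log", "-p", f"-{n}"], text=True)

def ask_claude(log: str) -> str:
    # Send the log to Anthropic's Messages API; return the review text.
    resp = requests.post(
        "https://api.anthropic.com/v1/messages",
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json={
            "model": "claude-3-5-sonnet-latest",  # hypothetical choice
            "max_tokens": 1024,
            "messages": [{
                "role": "user",
                "content": f"Review these commits and flag problems:\n\n{log}",
            }],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["content"][0]["text"]

def post_to_slack(review: str) -> None:
    # Incoming-webhook URL would come from the repo's secrets.
    requests.post(os.environ["SLACK_WEBHOOK_URL"], json={"text": review}, timeout=30)

if __name__ == "__main__":
    post_to_slack(ask_claude(recent_changes()))

Wire that into a scheduled GitHub Action and you have the whole "product"; that is the point of the post.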

7 months ago 19 votes
No, Wall Street, DeepSeek is not "far superior"

I mean, it is! But the whole story about the stock market reacting to the news about DeepSeek V3 and R1 is a fine example of the knee-jerk nature of mass consciousness in the era of clickbait economics. Briefly, point by point:

No, DeepSeek isn't "head and shoulders above" every other model. The results vary across benchmarks, but on average, GPT-4o and Gemini-2 are better. You can see this on ChatBot Arena, for example (Reddit thread). Even in the results published by DeepSeek's authors themselves (benchmark graph), you can see that in several tests the model lags behind GPT-4o from May 2024—which, mind you, is currently ranked 16th on ChatBot Arena.

No, training DeepSeek didn't cost $6 million, "100 times less than GPT-4." The $6 million figure refers only to the final training run of the published model. It doesn't include any prior experiments, earlier versions, or R&D costs. This is just the raw computational cost of that final training run. And guess what? That figure is pretty much in line with models of the same class.

No, Nvidia did not deserve this hit. Not that we're shedding tears for them — they could use a push to lower hardware prices. And let's not forget that DeepSeek was still trained on Nvidia's own hardware. And no, their GPUs aren't suddenly obsolete. DeepSeek's computational budget is fairly standard for training, and inference for such a massive model (reminder: it's an MoE with 671 billion parameters, 37 billion of which are active per generated token) requires a ton of hardware. Inference costs are roughly on par with a 70B dense model (see the back-of-the-envelope sketch at the end of this post). Naturally, they'll scale this success by throwing even more hardware at it and making the model bigger. Not to mention that DeepSeek makes LLMs more accessible for on-prem customers. Which means smaller businesses will buy more GPUs, which is still good for NVDA, am I right?

Does this mean the model is bad? No, the model is very, VERY good. It outperforms the vast majority of open-source models, which is fantastic. DeepSeek used 8-bit floating point numbers (FP8) throughout the entire training process. This sacrifices some precision to save memory and boost performance. Additionally, they employed a multi-token prediction system and innovative GPU clustering/connectivity techniques. These are clever and practical engineering choices that undoubtedly contributed to their success.

In the end, though, stocks will recover, ideas will spread, models will get better, and progress will march on (hopefully).
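A back-of-the-envelope sketch of those inference numbers. The 80 GB GPU size and the 2-FLOPs-per-parameter rule of thumb are assumptions of mine, not DeepSeek's published figures:

# Rough numbers behind the "inference still needs a ton of hardware" claim.
TOTAL_PARAMS = 671e9   # DeepSeek V3/R1: total MoE parameters
ACTIVE_PARAMS = 37e9   # parameters active per generated token
DENSE_70B = 70e9       # a typical dense model, for comparison
BYTES_PER_PARAM = 1    # FP8: one byte per weight
GPU_MEM_GB = 80        # e.g. one 80 GB accelerator

weights_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9
gpus_for_weights = weights_gb / GPU_MEM_GB  # ignores KV cache, activations

# Common rule of thumb: ~2 FLOPs per active parameter per generated token.
moe_flops_per_token = 2 * ACTIVE_PARAMS
dense_flops_per_token = 2 * DENSE_70B

print(f"weights alone: ~{weights_gb:.0f} GB "
      f"(~{gpus_for_weights:.0f}+ x 80 GB GPUs just to hold them)")
print(f"compute per token: MoE ~{moe_flops_per_token / 1e9:.0f} GFLOPs "
      f"vs 70B dense ~{dense_flops_per_token / 1e9:.0f} GFLOPs")

Even at one byte per weight, the model alone fills a multi-GPU node before you serve a single request, while per-token compute stays in the same ballpark as a 70B dense model.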

9 months ago 24 votes
I'm finally dumping Visual Studio

After years of working with the "big" Visual Studio, I've had enough. It's buggy, slow, and frustrating, and I've decided to make the switch to Visual Studio Code. While as a C# developer I'm still unsure if I can replicate every aspect of my workflow in VS Code, I'm willing to give it a shot—and so far, I'm really impressed.

1. Performance

Visual Studio 2022 performance has been a constant issue. It's sluggish and feels increasingly bloated with every new update. It's like watching paint dry every time I open a project. In contrast, Visual Studio Code feels lightweight and incredibly fast. The first time I opened my large project in VS Code, I was shocked — it loaded in less than a second, literally, even with extensions like "C#" and "C# Dev Kit" installed.

2. Better Developer Experience

Running dotnet watch run in VS Code's terminal has been a revelation. It's fast, responsive, and actually works consistently. Visual Studio's "hot reload" feature, on the other hand, has been a constant source of frustration for me. Half the time it doesn't work, and I'm left restarting debugging sessions over and over again. I can't tell you how many hours I've lost to that unreliable feature.

3. Fewer Bugs, Less Frustration

The minor editor bugs in Visual Studio have been endless and exhausting. I remember one particularly infuriating bug where syntax highlighting would break in Razor and .cshtml files whenever I used certain HTML tags or even just adjusted the indentation. It drove me up the wall! Not to mention the bizarre issues with JavaScript formatting that never seemed to get fixed. Since switching to VS Code, I've encountered far fewer bugs. It just feels like an environment that respects my time and sanity.

4. A Thriving Ecosystem

The VS Code extension ecosystem is alive and thriving. Need Tailwind CSS IntelliSense? There's an extension for that, and it works beautifully. Want to visualize the Git history of a particular line (a better version of git-blame)? The Git History extension has got you covered. In "big" Visual Studio, I'd report issues through the "feedback hub" and wait months — or even years — for a response. With VS Code, the community is constantly contributing new tools and improvements. It's energizing (and sometimes exhausting) to be part of such an active ecosystem.

5. Cross-Platform Flexibility

One of the biggest advantages I've found with Visual Studio Code is its true cross-platform support. Whether I'm on my Windows gaming rig at home or my MacBook while traveling, VS Code runs smoothly and keeps my workflow consistent. Visual Studio's limited macOS version just doesn't cut it for me. Being able to switch between machines without missing a beat has been a game-changer.

I have to admit, I was skeptical at first. I've always had a bit of a grudge against Electron-based apps — they've often felt sluggish and bloated. But VS Code has completely changed my perspective. It's fast, responsive, and flexible enough to let me build the development environment that works best for me. Switching to VS Code has rekindled my passion for coding; it reminds me why I fell in love with development in the first place. While Visual Studio will always have its strengths, I need a tool that evolves with me—not one that holds me back.

a year ago 40 votes

More in programming

Executives should be the least busy people

If your executive calendar is packed back to back, you have no room for fires, customers, or serendipities. You've traded all your availability for efficiency. That's a bad deal. Executives of old used to know this! That's what the long lunches, early escapes to the golf course, and reading the paper at work were all about. A great fictional example of this is Bert Cooper from Mad Men. He knew his value was largely in his network. He didn't have to grind every minute of every day to prove otherwise. His function was to leap into action when a critical occasion arose or a decision needed to be made. But modern executives are so insecure about seeming busy 24/7 that they'll wreck their business trying to prove it. Trying to outwork everyone. Spreading themselves thin so they can run a squirrel-brain operation that's constantly chasing every nutty idea. Now someone is inevitably going to say "but what about Elon!!". He's actually a perfect illustration of doing this right. Even if he works 100-hour weeks, he's the CEO of 3 companies, has a Diablo IV addiction, and keeps a busy tweeting schedule too. In all of that, I'd be surprised if there was more than 20-30h per company per week on average. And your boss is not Elon. Wide open calendars should not be seen as lazy, but as intentional availability. It's time we brought them back into vogue.

3 days ago 5 votes
Dispatch 012: Local-first talks, Automerge 3, and Scribbling on a Google Calendar

A secret master plan, the official launch of Automerge 3, and an update on Sketchy Calendars

3 days ago 4 votes
React Server Components with Vite and React-Router (tip)

Create a small example app and send payloads from the server to the client using RSCs

4 days ago 10 votes
2000 words about arrays and tables

I'm way too discombobulated from getting next month's release of Logic for Programmers ready, so I'm pulling an idea from the slush pile. Basically I wanted to come up with a mental model of arrays as a concept that explained APL-style multidimensional arrays and tables, but also why there aren't multitables.

So, arrays. In all languages they are basically the same: they map a sequence of numbers (I'll use 1..N)[1] to homogeneous values (values of a single type). This is in contrast to the other two foundational types, associative arrays (which map an arbitrary type to homogeneous values) and structs (which map a fixed set of keys to heterogeneous values). Arrays appear in PLs earlier than the other two, possibly because they have the simplest implementation and the most obvious application to scientific computing. The OG FORTRAN had arrays.

I'm interested in two structural extensions to arrays. The first, found in languages like nushell and frameworks like Pandas, is the table. Tables have string keys like a struct and indexes like an array. Each row is a struct, so you can get "all values in this column" or "all values for this row". They're heavily used in databases and data science. The other extension is the N-dimensional array, mostly seen in APLs like Dyalog and J. Think of this like arrays-of-arrays(-of-arrays), except all arrays at the same depth have the same length. So [[1,2,3],[4]] is not a 2D array, but [[1,2,3],[4,5,6]] is. This means that N-arrays can be queried on any axis.

   ]x =: i. 3 3
0 1 2
3 4 5
6 7 8
   0 { x    NB. first row
0 1 2
   0 {"1 x  NB. first column
0 3 6

So, I've had some ideas on a conceptual model of arrays that explains all of these variations and possibly predicts new variations. I wrote up my notes and did the bare minimum of editing and polishing. Somehow it ended up being 2000 words.

1-dimensional arrays

A one-dimensional array is a function over 1..N for some N. To be clear, these are math functions, not programming functions. Programming functions take values of a type and perform computations on them. Math functions take values of a fixed set and return values of another set. So the array [a, b, c, d] can be represented by the function (1 -> a ++ 2 -> b ++ 3 -> c ++ 4 -> d). Let's write the set of all four-element character arrays as 1..4 -> char. 1..4 is the function's domain. The set of all character arrays is the empty array + the functions with domain 1..1 + the functions with domain 1..2 + ... Let's call this set Array[Char]. Our compilers can enforce that a type belongs to Array[Char], but some operations care about the more specific type, like matrix multiplication. This is either checked with the runtime type or, in exotic enough languages, with static dependent types. (This is actually how TLA+ does things: the basic collection types are functions and sets, and a function with domain 1..N is a sequence.)

2-dimensional arrays

Now take the 3x4 matrix

   i. 3 4
0 1 2 3
4 5 6 7
8 9 10 11

There are two equally valid ways to represent the array function:

1. A function that takes a row and a column and returns the value at that index, so it would look like f(r: 1..3, c: 1..4) -> Int.
2. A function that takes a row and returns that row as an array, aka another function: f(r: 1..3) -> g(c: 1..4) -> Int.[2]

Man, (2) looks a lot like currying! In Haskell, functions can only have one parameter. If you write (+) 6 10, (+) 6 first returns a new function f y = y + 6, and then applies f 10 to get 16.
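A throwaway Python sketch of representation (2), with 0-based indexes standing in for 1..3 and 1..4 (the sketch is mine, not from the post):

# A 2D array as a curried function.
data = [[0, 1, 2, 3],
        [4, 5, 6, 7],
        [8, 9, 10, 11]]

def f(r):
    # Takes a row index and returns "the row" -- itself a function
    # from column index to value.
    def g(c):
        return data[r][c]
    return g

add = lambda x: lambda y: x + y   # currying, as in Haskell's (+)
print(add(6)(10))  # 16
print(f(1)(2))     # 6: row 1, then column 2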
So (+) has the type signature Int -> Int -> Int: it's a function that takes an Int and returns a function of type Int -> Int.[3] Similarly, our 2D array can be represented as an array function that returns array functions: it has type 1..3 -> 1..4 -> Int, meaning it takes a row index and returns 1..4 -> Int, aka a single array. (This differs from conventional arrays-of-arrays because it forces all of the subarrays to have the same domain, aka the same length. If we wanted to permit ragged arrays, we would instead have the type 1..3 -> Array[Int].)

Why is this useful? A couple of reasons. First of all, we can apply function transformations to arrays, like "combinators". For example, we can flip any function of type a -> b -> c into a function of type b -> a -> c. So given a function that takes rows and returns columns, we can produce one that takes columns and returns rows. That's just a matrix transposition! Second, we can extend this to any number of dimensions: a three-dimensional array is one with type 1..M -> 1..N -> 1..O -> V. We can still use function transformations to rearrange the array along any ordering of axes.

Speaking of dimensions:

What are dimensions, anyway?

Okay, so now imagine we have a Row × Col grid of pixels, where each pixel is a struct of type Pixel(R: int, G: int, B: int). So the array is

Row -> Col -> Pixel

But we can also represent the Pixel struct with a function: Pixel(R: 0, G: 0, B: 255) is the function where f(R) = 0, f(G) = 0, f(B) = 255, making it a function of type {R, G, B} -> Int. So the array is actually the function

Row -> Col -> {R, G, B} -> Int

And then we can rearrange the parameters of the function like this:

{R, G, B} -> Row -> Col -> Int

Even though the set {R, G, B} is not of form 1..N, this clearly has a real meaning: f[R] is the function mapping each coordinate to that coordinate's red value. What about Row -> {R, G, B} -> Col -> Int? That's, for each row, the 3 × Col array mapping each color to that row's intensities.

Really any finite set can be a "dimension". Recording the monitor over a span of time? Frame -> Row -> Col -> Color -> Int. Recording a bunch of computers over some time? Computer -> Frame -> Row …. This is pretty common in constraint satisfaction! Like if you're a conference trying to assign talks to talk slots, your array might be type (Day, Time, Room) -> Talk, where Day/Time/Room are enumerations.

An implementation constraint is that most programming languages only allow integer indexes, so we have to replace Rooms and Colors with numerical enumerations over the set. As long as the set is finite, this is always possible, and for struct-functions, we can always choose the indexing on the lexicographic ordering of the keys. But we lose type safety.

Why tables are different

One more example: Day -> Hour -> Airport(name: str, flights: int, revenue: USD). Can we turn the struct into a dimension like before? In this case, no. We were able to make Color an axis because we could turn Pixel into a Color -> Int function, and we could only do that because all of the fields of the struct had the same type. This time, the fields are different types. So we can't convert {name, flights, revenue} into an axis.[4] One thing we can do is convert it to three separate functions:

airport: Day -> Hour -> Str
flights: Day -> Hour -> Int
revenue: Day -> Hour -> USD

But we want to keep all of the data in one place.
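(A tiny sketch of that three-functions split, with made-up airport numbers, to show the awkwardness. The data and names are hypothetical:)

# Three parallel Day -> Hour -> value functions, here as nested dicts.
airport = {1: {1: "SFO", 2: "SFO"}}          # Day -> Hour -> str
flights = {1: {1: 14, 2: 9}}                 # Day -> Hour -> int
revenue = {1: {1: 120_000.0, 2: 80_000.0}}   # Day -> Hour -> USD

# The data is "in one place" only by convention: every lookup touches
# three structures, and nothing enforces that they share a domain.
day, hour = 1, 2
row = (airport[day][hour], flights[day][hour], revenue[day][hour])
print(row)  # ('SFO', 9, 80000.0)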
That's where tables come in: an array-of-structs is isomorphic to a struct-of-arrays:

AirportColumns(
  airport: Day -> Hour -> Str,
  flights: Day -> Hour -> Int,
  revenue: Day -> Hour -> USD,
)

The table is, in a sense, both representations simultaneously. If this were a pandas dataframe, df["airport"] would get the airport column, while df.loc[day1] would get the first day's data (a pandas sketch appears at the end of this post). I don't think many table implementations support more than one axis dimension, but there's no reason they couldn't. These are also possible transforms:

Hour -> NamesAreHard(
  airport: Day -> Str,
  flights: Day -> Int,
  revenue: Day -> USD,
)

Day -> Whatever(
  airport: Hour -> Str,
  flights: Hour -> Int,
  revenue: Hour -> USD,
)

In my mental model, the heterogeneous struct acts as a "block" in the array. We can't remove it; we can only push an index into the fields or pull a shared column out. But there's no way to convert a heterogeneous table into an array.

Actually, there is a terrible way

Most languages have unions or sum types that let us say "this is a string OR an integer". So we can make our airport data Day -> Hour -> AirportKey -> Int | Str | USD. Heck, might as well just say it's Day -> Hour -> AirportKey -> Any. But would anybody really be mad enough to use that in practice? Oh wait, J does exactly that. J has an opaque datatype called a "box". A "table" is a function Dim1 -> Dim2 -> Box. You can see some examples of what that looks like here.

Misc Thoughts and Questions

The heterogeneity barrier seems like it explains why we don't see multiple axes of table columns, while we do see multiple axes of array dimensions. But is that actually why? Is there a system out there that does have multiple columnar axes?

The array x = [[a, b, a], [b, b, b]] has type 1..2 -> 1..3 -> {a, b}. Can we rearrange it to 1..2 -> {a, b} -> 1..3? No. But we can rearrange it to 1..2 -> {a, b} -> PowerSet(1..3), which maps rows and characters to the columns holding that character: [(a -> {1, 3} ++ b -> {2}), (a -> {} ++ b -> {1, 2, 3})]. We can also transform Row -> PowerSet(Col) into Row -> Col -> Bool, aka a boolean matrix. This makes sense to me, as both forms are means of representing directed graphs.

Are other function combinators useful for thinking about arrays? Does this model cover pivot tables? Can we extend it to relational data with multiple tables?

Systems Distributed Talk (will be) Online

The premiere will be August 6 at 12 CST, here! I'll be there to answer questions / mock my own performance / generally make a fool of myself.

1. Sacrilege! But it turns out in this context, it's easier to use 1-indexing than 0-indexing. In the years since I wrote that article I've settled on "each indexing choice matches different kinds of mathematical work", so mathematicians and computer scientists are best served by being able to choose their index. But software engineers need consistency, and 0-indexing is overall a net better consistency pick.
2. This is right-associative: a -> b -> c means a -> (b -> c), not (a -> b) -> c. (1..3 -> 1..4) -> Int would be the associative array that maps length-3 arrays to integers.
3. Technically it has type Num a => a -> a -> a, since (+) works on floats too.
4. Notice that if each Airport had a unique name, we could pull it out into AirportName -> Airport(flights, revenue), but we are still stuck with two different values.
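To make the pandas comparison above concrete, a minimal sketch with hypothetical data (one Day axis instead of Day -> Hour, for brevity):

import pandas as pd

# A table indexed by day, one column per heterogeneous field.
df = pd.DataFrame(
    {"airport": ["SFO", "SFO"], "flights": [14, 9],
     "revenue": [120_000.0, 80_000.0]},
    index=pd.Index([1, 2], name="day"),
)

print(df["airport"])  # column view: day -> str (struct-of-arrays)
print(df.loc[1])      # row view: field -> value (array-of-structs)

The same object answers both the "all values in this column" and "all values for this row" questions, which is exactly the two-representations-at-once claim.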

4 days ago 9 votes
Our $100M Series B

We don't want to bury the lede: we have raised a $100M Series B, led by a new strategic partner in USIT with participation from all existing Oxide investors. To put that number in perspective: over the nearly six year lifetime of the company, we have raised $89M; our $100M Series B more than doubles our total capital raised to date — and positions us to make Oxide the generational company that we have always aspired it to be.

If this aspiration seems heady now, it seemed absolutely outlandish when we were first raising venture capital in 2019. Our thesis was that cloud computing was the future of all computing; that running on-premises would remain (or become!) strategically important for many; that the entire stack — hardware and software — needed to be rethought from first principles to serve this market; and that a large, durable, public company could be built by whomever pulled it off. This scope wasn't immediately clear to all potential investors, some of whom seemed to latch on to one aspect or another without understanding the whole. Their objections were revealing: "We know you can build this," began more than one venture capitalist (at which we bit our tongue; were we not properly explaining what we intended to build?!), "but we don't think that there is a market." Entrepreneurs must become accustomed to rejection, but this flavor was particularly frustrating because it was exactly backwards: we felt that there was in fact substantial technical risk in the enormity of the task we put before ourselves — but we also knew that if we could build it (a huge if!) there was a huge market, desperate for cloud computing on-premises.

Fortunately, in Eclipse Ventures we found investors who saw what we saw: that the most important products come when we co-design hardware and software together, and that the on-premises market was sick of being told that they either don't exist or that they don't deserve modernity. These bold investors — like the customers we sought to serve — had been waiting for this company to come along; we raised seed capital, and started building.

And build it we did, making good on our initial technical vision:

We did our own board designs, allowing for essential system foundations like a true hardware root-of-trust and end-to-end power observability.
We did our own microcontroller operating system, and used it to replace the traditional BMC.
We did our own platform enablement software, eliminating the traditional UEFI BIOS and its accompanying flotilla of vulnerabilities.
We did our own host hypervisor, assuring an integrated and seamless user experience — and eliminating the need for a third-party hypervisor and its concomitant rapacious software licensing.
We did our own switch — and our own switch runtime — eliminating entire universes of integration complexity and operational nightmares.
We did our own integrated storage service, allowing the rack-scale system to have reliable, available, durable, elastic instance storage without necessitating a dependency on a third party.
We did our own control plane, a sophisticated distributed system building on the foundation of our hardware and software components to deliver the API-driven services that modernity demands: elastic compute, virtual networking, and virtual storage.

While these technological components are each very important (and each is in service to specific customer problems when deploying infrastructure on-premises), the objective is the product, not its parts.
The journey to a product was long, but we ticked off the milestones. We got the boards brought up. We got the switch transiting packets. We got the control plane working. We got the rack manufactured. We passed FCC compliance. And finally, two years ago, we shipped our first system! Shortly thereafter, more milestones of the variety you can only get after shipping: our first update of the software in the field; our first update-delivered performance improvements; our first customer-requested features added as part of an update.

Later that year, we hit general commercial availability, and things started accelerating. We had more customers — and our first multi-rack customer. We had customers go on the record about why they had selected Oxide — and customers describing the wins that they had seen deploying Oxide. Customers started landing faster now: enterprise sales cycles are infamously long, but we were finding that we were going from first conversations to a delivered product surprisingly quickly. The quickening pace always seemed to be due in some way to our transparency: new customers were listeners to our podcast, or they had read our RFDs, or they had perused our documentation, or they had looked at the source code itself.

With growing customer enthusiasm, we were increasingly getting questions about what it would look like to buy a large number of Oxide racks. Could we manufacture them? Could we support them? Could we make them easy to operate together? Into this excitement, a new potential investor, USIT, got to know us. They asked terrific questions, and we found a shared disposition towards building lasting value and doing it the right way. We learned more about them, too, and especially USIT's founder, Thomas Tull. The more we each learned about the other, the more there was to like. And importantly, USIT had the vision for us that we had for ourselves: that there was a big, important market here — and that it was uniquely served by Oxide.

We are elated to announce this new, exciting phase of the company. It's not necessarily in our nature to celebrate fundraising, but this is a big milestone, because it will allow us to address our customers' most pressing questions around scale (manufacturing scale, system scale, operations scale) and roadmap scope. We have always believed in our mission, but this raise gives us a new sense of confidence when we say it: we're going to kick butt, have fun, not cheat (of course!), love our customers — and change computing forever.

4 days ago 12 votes