We’re very excited to have announced the general availability of our cloud computer. As part of this work, we continue to build on the LPC55S69 from NXP as our Root of Trust. We’ve discovered some gaps when using the TrustZone preset settings on the LPC55S69 that can allow for unexpected behavior, including enabling debug settings and exposure of the UDS (Unique Device Secret). Exploiting these issues requires a signed image or access at manufacturing time.

How to (safely, securely) configure a chip

The LPC55S69 uses the Armv8-M architecture, which includes TrustZone-M. We’ve previously discussed some aspects of the Armv8-M architecture and presented on it in more detail. Fundamentally, setting up TrustZone-M is simply a matter of putting the right values in the right registers. The word "simply" is, of course, doing a lot of heavy lifting here. TrustZone-M must also be set up in conjunction with the Memory Protection Unit (MPU) and any other vendor-specific security settings. Once the ideal...
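As an illustrative aside (not from the original post): "putting the right values in the right registers" looks roughly like the following minimal sketch, which programs a single Security Attribution Unit (SAU) region on an Armv8-M core. The SAU register addresses are architectural; the region layout and function name are hypothetical, not the LPC55S69's actual secure memory map or Oxide's code.

```rust
// Architectural SAU registers (Armv8-M); addresses per the Arm ARM.
const SAU_CTRL: *mut u32 = 0xE000_EDD0 as *mut u32;
const SAU_RNR: *mut u32 = 0xE000_EDD8 as *mut u32; // region number
const SAU_RBAR: *mut u32 = 0xE000_EDDC as *mut u32; // region base address
const SAU_RLAR: *mut u32 = 0xE000_EDE0 as *mut u32; // region limit + enable

/// Hypothetical example: mark one flash window as Non-secure.
/// Safety: must run from Secure code, before the Non-secure side boots.
unsafe fn configure_sau_region0() {
    SAU_RNR.write_volatile(0); // select region 0
    SAU_RBAR.write_volatile(0x0010_0000); // base (bits [31:5], 32-byte aligned)
    // Limit address in bits [31:5]; bit 0 enables this region (NSC bit 1 clear).
    SAU_RLAR.write_volatile(0x001F_FFE0 | 1);
    SAU_CTRL.write_volatile(1); // ENABLE the SAU; uncovered memory stays Secure
}
```

On the LPC55S69 specifically, the SAU operates alongside NXP's IDAU and vendor security settings, which is part of why "simply" does so much heavy lifting.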
a year ago


More from Oxide Computer Company Blog

Oxide’s Compensation Model: How is it Going?

How it started

Four years ago, we were struggling to hire. Our team was small (~23 employees), and we knew that we needed many more people to execute on our audacious vision. While we had had success hiring in our personal networks, those networks now felt tapped; we needed to get further afield. As is our wont, we got together as a team and brainstormed: how could we get a bigger and broader applicant pool?

One of our engineers, Sean, shared some personal experience: Oxide’s principles and values were very personally important to him — but when he explained them to people unfamiliar with the company, they were (understandably?) dismissed as corporate claptrap. Sean had found, however, that there was one surefire way to cut through the skepticism: to explain our approach to compensation. Maybe, Sean wondered, we should talk about it publicly? "I could certainly write a blog entry explaining it," I offered. At this suggestion, the team practically lunged with enthusiasm: the reaction was so uniformly positive that I have to assume that everyone was sick of explaining this most idiosyncratic aspect of Oxide to friends and family.

So what was the big deal about our compensation? Well, as I wrote in the resulting piece, Compensation as a Reflection of Values, our compensation is not merely transparent, but uniform. The piece — unsurprisingly, given the evergreen hot topic that is compensation — got a ton of attention. While some of that attention was negative (despite the piece trying to frontrun every HN hater!), much of it was positive — and everyone seemed to be at least intrigued. And in terms of its initial purpose, the piece succeeded beyond our wildest imagination: it brought a surge of new folks interested in the company. Best of all, the people new to Oxide were interested for all of the right reasons: not the compensation per se, but the values that the compensation represents. The deeper they dug, the more they found to like — and many who learned about Oxide for the first time through that blog entry we now count as long-time, cherished colleagues.

That blog entry was a long time ago now, and today we have ~75 employees (and a shipping product!); how is our compensation model working out for us?

How it’s going

Before we get into our deeper findings, two updates that are so important that we have updated the blog entry itself. First, the dollar figure continues to increase over time (as of this writing in 2025, it is $207,264); things definitely haven’t gotten (and aren’t getting!) any cheaper. And second, we did introduce variable compensation for some sales roles. Yes, those roles can make more than the rest of us — but they can also make less. And, importantly: if/when those folks are making more than the rest of us, it’s because they’re selling a lot — a result that can be celebrated by everyone!

Those critical updates out of the way, how is it working? There have been a lot of surprises along the way, mostly (all?) of the positive variety. A couple of things that we have learned:

People take their own performance really seriously. When some outsiders hear about our compensation model, they insist that it can’t possibly work because "everyone will slack off."
I have come to find this concern to be more revealing of the person making the objection than of our model, as our experience has been in fact the opposite: in my one-on-one conversations with team members, a frequent subject of conversation is people who are concerned that they aren’t doing enough (or that they aren’t doing the right thing, or that their work is progressing slower than they would like). I find my job is often to help quiet this inner critic while at the same time stoking what I feel is a healthy urge: when one holds one’s colleagues in high regard, there is an especially strong desire to help contribute — to prove oneself worthy of a superlative team. Our model allows people to focus on their own contribution (whatever it might be).

People take hiring really seriously. When evaluating a peer (rather than a subordinate), one naturally has high expectations — and because (in the sense of our wages, anyway) everyone at Oxide is a peer, it shouldn’t be surprising that folks have very high expectations for potential future colleagues. And because the Oxide hiring process is writing intensive, it allows for candidates to be thoroughly reviewed by Oxide employees — who are tough graders! It is, bluntly, really hard to get a job at Oxide.

It allows us to internalize the importance of different roles. One of the more incredible (and disturbingly frequent) objections I have heard is: "But is that what you’ll pay support folks?" I continue to find this question offensive, but I no longer find it surprising: the specific dismissal of support roles reveals a widespread and corrosive devaluation of those closest to customers. My rejoinder is simple: think of the best support engineers you’ve worked with; what were they worth? Anyone who has shipped complex systems knows these extraordinary people — calm under fire, deeply technical, brilliantly resourceful, profoundly empathetic — are invaluable to the business. So what if you built a team entirely of folks like that? The response has usually been: well, sure, if you’re going to only hire those folks. Yeah, we are — and we have!

It allows for fearless versatility. A bit of a corollary to the above, but subtly different: even though we (certainly!) hire and select for certain roles, our uniform compensation means we can in fact think primarily in terms of people unconfined by those roles. That is, we can be very fluid about what we’re working on, without fear of how it will affect a perceived career trajectory. As a concrete example: we had a large customer that wanted to put in place a program for some of the additional work they wanted to see in the product. The complexity of their needs required dedicated program management resources that we couldn’t spare, and in another, more static company we would perhaps have looked to hire. But in our case, two folks came together — CJ from operations, and Izzy from support — and did something together that was in some regards new to both of them (and was neither of their putative full-time jobs!). The result was indisputably successful: the customer loved the results, and two terrific people got a chance to work closely together without worrying about who was dotted-lined to whom.

It has allowed us to organizationally scale. Many organizations describe themselves as flat, and a reasonable rebuttal to this is the "shadow hierarchies" created by the tyranny of structurelessness.
And indeed, if one were to read (say) Valve’s (in)famous handbook, the autonomy seems great — but the stack ranking decidedly less so, especially because the handbook is conspicuously silent on the subject of compensation. (Unsurprisingly, compensation was weaponized at Valve, which descended into toxic cliquishness.) While we believe that autonomy is important to do one’s best work, we also have a clear structure at Oxide in that Steve Tuck (Oxide co-founder and CEO) is in charge. He has to be: he is held accountable to our investors — and he must have the latitude to make decisions. Under Steve, it is true that we don’t have layers of middle management. Might we need some in the future? Perhaps, but what fraction of middle management in a company is dedicated to — at some level — determining who gets what in terms of compensation? What happens when you eliminate that burden completely?

It frees us to both lead and follow. We expect that every Oxide employee has the capacity to lead others — and we tap this capacity frequently. Of course, a company in which everyone is trying to direct all traffic all the time would be a madhouse, so we also very much rely on following one another too! Just as our compensation model allows us to internalize the values of different roles, it allows us to appreciate the value of both leading and following, and empowers us each with the judgement to know when to do which. This isn’t always easy or free of ambiguity, but this particular dimension of our versatility has been essential — and our compensation model serves to encourage it.

It causes us to hire carefully and deliberately. Of course, one should always hire carefully and deliberately, but this often isn’t the case — and many a startup has been ruined by reckless expansion of headcount. One of the roots of this can be found in a dirty open secret of Silicon Valley middle management: its ranks are taught to grade their career by the number of reports in their organization. Just as if you were to compensate software engineers based on the number of lines of code they wrote, this results in perverse incentives and predictable disasters — and any Silicon Valley vet will have plenty of horror stories of middle management jockeying for reqs or reorgs when they should have been focusing on product and customers. When you can eliminate middle management, you eliminate this incentive. We grow the team not because of someone’s animal urges to have the largest possible organization, but rather because we are at a point where adding people will allow us to better serve our market and customers.

It liberates feedback from compensation. Feedback is, of course, very important: we all want to know when and where we’re doing the right thing! And of course, we want to know too where there is opportunity for improvement. However, Silicon Valley has historically tied feedback so tightly to compensation that it has ceased to even pretend to be constructive: if it needs to be said, performance review processes aren’t, in fact, about improving the performance of the team, but rather about quantifying and stack-ranking that performance for purposes of compensation. When compensation is moved aside, there is a kind of liberation for feedback itself: because feedback is now entirely earnest, it can be expressed and received thoughtfully.

It allows people to focus on doing the right thing.
In a world of traditional, compensation-tied performance review, the organizational priority is around those things that affect compensation — even at the expense of activity that clearly benefits the company. This leads to all sorts of wild phenomena, and most technology workers will be able to tell stories of doing things that were clearly right for the company, but having to hide it from management that thought only narrowly in terms of its own stated KPIs and MBOs. By contrast, over and over (and over!) again, we have found that people do the right thing at Oxide — even if (especially if?) no one is looking. The beneficiary of that right thing? More often than not, it’s our customers, who have uniformly praised the team for going above and beyond.

It allows us to focus on the work that matters. Relatedly, when compensation is non-uniform, the process to figure out (and maintain) that non-uniformity is laborious. All of that work — of line workers assembling packets explaining themselves, of managers arming themselves with those packets to fight in the arena of organizational combat, and then of those same packets ultimately being regurgitated back onto something called a review — is work. Assuming such a process is executed perfectly (something which I suppose is possible in the abstract, even though I personally have never seen it), this is work that does not in fact advance the mission of the company. Not having variable compensation gives us all of that time and energy back to do the actual work — the stuff that matters.

It has stoked an extraordinary sense of teamwork. For me personally — and as I relayed on an episode of Software Misadventures — the highlights of my career have been being a part of an extraordinary team. The currency of a team is mutual trust, and while uniform compensation certainly isn’t the only way to achieve that trust, boy does it ever help! As Steve and I have told one another more times than we can count: we are so lucky to work on this team, with its extraordinary depth and breadth.

While our findings have been very positive, I would still reiterate what we said four years ago: we don’t know what the future holds, and it’s easier to make an unwavering commitment to the transparency than to the uniformity. That said, the uniformity has had so many positive ramifications that the model feels more important than ever. We are beyond the point of this being a curiosity; it’s been essential for building a mission-focused team taking on a problem larger than ourselves. It’s not a fit for everyone — but if you are seeking an extraordinary team solving hard problems in service to customers, consider Oxide!

2 months ago 12 votes
dtrace.conf(24)

Sometime in late 2007, we had the idea of a DTrace conference. Or really, more of a meetup; from the primordial e-mail I sent:

The goal here, by the way, is not a DTrace user group, but more of a face-to-face meeting with people actively involved in DTrace — either by porting it to another system, by integrating probes into higher level environments, by building higher-level tools on top of DTrace or by using it heavily and/or in a critical role. That said, we also don’t want to be exclusionary, so our thinking is that the only true requirement for attending is that everyone must be prepared to speak informally for 15 mins or so on what they are doing with DTrace, any limitations that they have encountered, and some ideas for the future. We’re thinking that this is going to be on the order of 15-30 people (though more would be a good problem to have — we’ll track it if necessary), that it will be one full day (breakfast in the morning through drinks into the evening), and that we’re going to host it here at our offices in San Francisco sometime in March 2008.

This same note also included some suggested names for the gathering, including what in hindsight seems a clear winner: DTrace Bi-Mon-Sci-Fi-Con. As if knowing that I should leave an explanatory note to my future self as to why this name was not selected, my past self fortunately clarified: "before everyone clamors for the obvious Bi-Mon-Sci-Fi-Con, you should know that most Millennials don’t (sadly) get the reference." (While I disagree with the judgement of my past self, it at least indicates that at some point I cared if anyone got the reference.)

We settled on a much more obscure reference, and had the first dtrace.conf in March 2008. Befitting the style of the time, it was an unconference (a term that may well have hit its apogee in 2008) that you signed up to attend by editing a wiki. More surprising given the year (and thanks entirely to attendee Ben Rockwood), it was recorded — though this is so long ago that I referred to it as video taping (and with none of the participants mic’d, I’m afraid the quality isn’t very good). The conference, however, was terrific, viz. the reports of Adam, Keith and Stephen (all somehow still online nearly two decades later). If anything, it was a little too good: we realized that we couldn’t recreate the magic, and we demurred on making it an annual event.

Years passed, and memories faded. By 2012, it felt like we wanted to get folks together again, now under a post-lawnmower corporate aegis in Joyent. The resulting dtrace.conf(12) was a success, and the Olympiad cadence felt like the right one; we did it again four years later at dtrace.conf(16). In 2020, we came back together for a new adventure — and the DTrace Olympiad was not lost on Adam. Alas, dtrace.conf(20) — like the Olympics themselves — was cancelled, if implicitly. Unlike the Olympics, however, it was not to be rescheduled. More years passed and DTrace continued to prove its utility at Oxide; last year when Adam and I did our "DTrace at 20" episode of Oxide and Friends, we vowed to hold dtrace.conf(24) — and a few months ago, we set our date to be December 11th.

At first we assumed we would do something similar to our earlier conferences: a one-day participant-run conference, at the Oxide office in Emeryville. But times have changed: thanks to the rise of remote work, technologists are much more dispersed — and many more people would need to travel for dtrace.conf(24) than in previous DTrace Olympiads. Travel hasn’t become any cheaper since 2008, and the cost (and inconvenience) was clearly going to limit attendance.

The dilemma for our small meetup highlights the changing dynamics of tech conferences in general: with talks all recorded and made publicly available after the conference, how does one justify attending a conference in person? There can be reasonable answers to that question, of course: it may be the hallway track, or the expo hall, or the after-hours socializing, or perhaps some other special conference experience. But it’s also not surprising that some conferences — especially ones really focused on technical content — have decided that they are better off doing as conference giant O’Reilly Media did, and going exclusively online. And without the need to feed and shelter participants, the logistics for running a conference become much more tenable — and the price point can be lowered to the point that even highly produced conferences like P99 CONF can be made freely available. This, in turn, leads to much greater attendance — and a network effect that can get back some of what one might lose going online. In particular, using chat as the hallway track can be much more effective (and is certainly more scalable!) than the actual physical hallways at a conference.

For conferences in general, there is a conversation to be had here (and as a teaser, Adam and I are going to talk about it with Stephen O’Grady and Theo Schlossnagle on Oxide and Friends next week), but for our quirky, one-day, Olympiad-cadence dtrace.conf, the decision was pretty easy: there was much more to be gained than lost by going exclusively on-line.

So dtrace.conf(24) is coming up next week, and it’s available to everyone. In terms of platform, we’re going to try to keep that pretty simple: we’re going to use Google Meet for the actual presenters, which we will stream in real-time to YouTube — and we’ll use the Oxide Discord for all chat. We’re hoping you’ll join us on December 11th — and if you want to talk about DTrace or a DTrace-adjacent topic, we’d love for you to present! Keeping to the unconference style, if you would like to present, please indicate your topic in the #session-topics Discord channel so we can get the agenda fleshed out.

While we’re excited to be online, there are some historical accoutrements of conferences that we didn’t want to give up. First, we have a tradition of t-shirts with dtrace.conf. Thanks to our designer Ben Leonard, we have a banger of a t-shirt, capturing the spirit of our original dtrace.conf(08) shirt but with an Oxide twist. It’s (obviously) harder to make those free, but we have tried to price them reasonably. You can get your t-shirt by adding it to your (free) dtrace.conf ticket. (And for those who present at dtrace.conf, your shirt is on us — we’ll send you a coupon code!) Second, for those who can make their way to the East Bay and want some hangout time, we are going to have an après conference social event at the Oxide office starting at 5p. We’re charging something nominal for that too (and like the t-shirt, you pay for that via your dtrace.conf ticket); we’ll have some food and drinks and an Oxide hardware tour for the curious — and (of course?) there will be Fishpong.

Much has changed since I sent that e-mail 17 years ago — but the shared values and disposition that brought together our small community continue to endure; we look forward to seeing everyone (virtually) at dtrace.conf(24)!

7 months ago 80 votes
Advancing Cloud and HPC Convergence with Lawrence Livermore National Laboratory

Oxide Computer Company and Lawrence Livermore National Laboratory Work Together to Advance Cloud and HPC Convergence

Oxide Computer Company and Lawrence Livermore National Laboratory (LLNL) today announced a plan to bring on-premises cloud computing capabilities to the Livermore Computing (LC) high-performance computing (HPC) center. The rack-scale Oxide Cloud Computer allows LLNL to improve the efficiency of operational workloads and will provide users in the National Nuclear Security Administration (NNSA) with new capabilities for provisioning secure, virtualized services alongside HPC workloads.

HPC centers have traditionally run batch workloads for large-scale scientific simulations and other compute-heavy applications. HPC workloads do not exist in isolation—there are a multitude of persistent, operational services that keep the HPC center running. Meanwhile, HPC users also want to deploy cloud-like persistent services—databases, Jupyter notebooks, orchestration tools, Kubernetes clusters. Clouds have developed extensive APIs, security layers, and automation to enable these capabilities, but few options exist to deploy fully virtualized, automated cloud environments on-premises. The Oxide Cloud Computer allows organizations to deliver secure cloud computing capabilities within an on-premises environment.

"On-premises environments are the next frontier for cloud computing. LLNL is tackling some of the hardest and most important problems in science and technology, requiring advanced hardware, software, and cloud capabilities. We are thrilled to be working with their exceptional team to help advance those efforts, delivering an integrated system that meets their rigorous requirements for performance, efficiency, and security."
— Steve Tuck, CEO at Oxide Computer Company

Leveraging the new Oxide Cloud Computer, LLNL will enable staff to provision virtual machines (VMs) and services via self-service APIs, improving operations and modernizing aspects of system management. In addition, LLNL will use the Oxide rack as a proving ground for secure multi-tenancy and for smooth integration with the LLNL-developed Flux resource manager. LLNL plans to bring its users cloud-like Infrastructure-as-a-Service (IaaS) capabilities that work seamlessly with their HPC jobs, while maintaining security and isolation from other users. Beyond LLNL personnel, researchers at Los Alamos National Laboratory and Sandia National Laboratories will also partner in many of the activities on the Oxide Cloud Computer.

"We look forward to working with Oxide to integrate this machine within our HPC center. Oxide’s Cloud Computer will allow us to securely support new types of workloads for users, and it will be a proving ground for introducing cloud-like features to operational processes and user workflows. We expect Oxide’s open-source software stack and their transparent and open approach to development to help us work closely together."
— Todd Gamblin, Distinguished Member of Technical Staff at LLNL

"Sandia is excited to explore the Oxide platform as we work to integrate on-premise cloud technologies into our HPC environment. This advancement has the potential to enable new classes of interactive and on-demand modeling and simulation capabilities."
— Kevin Pedretti, Distinguished Member of Technical Staff at Sandia National Laboratories

LLNL plans to work with Oxide on additional capabilities, including the deployment of additional Cloud Computers in its environment. Of particular interest are scale-out capabilities and disaster recovery.

The latest installation underscores Oxide Computer’s momentum in the federal technology ecosystem, providing reliable, state-of-the-art Cloud Computers to support critical IT infrastructure. To learn more about Oxide Computer, visit https://oxide.computer.

About Oxide Computer

Oxide Computer Company is the creator of the world’s first commercial Cloud Computer, a true rack-scale system with fully unified hardware and software, purpose-built to deliver hyperscale cloud computing to on-premises data centers. With Oxide, organizations can fully realize the economic and operational benefits of cloud ownership, with access to the same self-service development experience of public cloud, without the public cloud cost. Oxide empowers developers to build, run, and operate any application with enhanced security, latency, and control, and frees organizations to elevate IT operations to accelerate strategic initiatives. To learn more about Oxide’s Cloud Computer, visit oxide.computer.

About LLNL

Founded in 1952, Lawrence Livermore National Laboratory provides solutions to our nation’s most important national security challenges through innovative science, engineering, and technology. Lawrence Livermore National Laboratory is managed by Lawrence Livermore National Security, LLC for the U.S. Department of Energy’s National Nuclear Security Administration.

Media Contact

LaunchSquad for Oxide Computer
oxide@launchsquad.com

7 months ago 77 votes
Remembering Charles Beeler

We are heartbroken to relay that Charles Beeler, a friend and early investor in Oxide, passed away in September after a battle with cancer. We lost Charles far too soon; he had a tremendous influence on the careers of us both.

Our relationship with Charles dates back nearly two decades, to his involvement with the ACM Queue board, where he met Bryan. It was unprecedented to have a venture capitalist serve in this capacity with ACM, and Charles brought an entirely different perspective on the practitioner content. A computer science pioneer who also served on the board took Bryan aside at one point: "Charles is one of the good ones, you know."

When Bryan joined Joyent a few years later, Charles also got to know Steve well. Seeing the promise in both node.js and cloud computing, Charles became an investor in the company. When companies hit challenging times, some investors will hide — but Charles was the kind of investor to figure out how to fix what was broken. When Joyent needed a change in executive leadership, it was Charles who not only had the tough conversations, but led the search for the leader the company needed, ultimately positioning the company for success.

Aside from his investment in Joyent, Charles was an outspoken proponent of node.js, becoming an organizer of the Node Summit conference. In 2017, he asked Bryan to deliver the conference’s keynote, but by then, the relationship between Joyent and node.js had become… complicated, and Bryan felt that it probably wouldn’t be a good idea. Any rational person would have dropped it, but Charles persisted, with characteristic zeal: if the Joyent relationship with node.js had become strained, so much more the reason to speak candidly about it! Charles prevailed, and the resulting talk, Platform as Reflection of Values, became one of Bryan’s most personally meaningful talks.

Charles’s persistence was emblematic: he worked behind the scenes to encourage people to do their best work, always with an enthusiasm for the innovators and the creators. As we were contemplating Oxide, we told Charles what we wanted to do long before we had a company. Charles laughed with delight: "I hoped that you two would do something big, and I am just so happy for you that you’re doing something so ambitious!"

As we raised seed capital, we knew that we were likely a poor fit for Charles and his fund. But we also knew that we deeply appreciated his wisdom and enthusiasm; we couldn’t resist pitching him on Oxide. Charles approached the investment in Oxide as he did with so many other aspects: with curiosity, diligence, empathy, and candor. He was direct with us that despite his enthusiasm for us personally, Oxide would be a challenging investment for his firm. But he also worked with us to address specific objections, and ultimately he won over his partnership. We were thrilled when he not only invested, but pulled together a syndicate of like-minded technologists and entrepreneurs to join him. Ever since, he has been a huge Oxide fan. Befitting his enthusiasm, one of his final posts expressed his pride in what the Oxide team has built.

Charles, thank you. You told us you were proud of us — and it meant the world. We are gutted to no longer have you with us; your influence lives on not just in Oxide, but also in the many people that you have inspired. You were the best of venture capital. Closer to the heart, you were a terrific friend to us both; thank you.

7 months ago 64 votes
How Oxide Cuts Data Center Power Consumption in Half

Here’s a sobering thought: today, data centers already consume 1-2% of the world’s power, and that percentage will likely rise to 3-4% by the end of the decade. According to Goldman Sachs research, that rise will include a doubling in data center carbon dioxide emissions. As the data and AI boom progresses, this thirst for power shows no signs of slowing down anytime soon.

Two key challenges quickly become evident for the 85% of IT that currently lives on-premises:

How can organizations reduce power consumption and corresponding carbon emissions?

How can organizations keep pace with AI innovation as existing data centers run out of available power?

Figure 1. Masanet et al. (2020), Cisco, IEA, Goldman Sachs Research

Rack-scale design is critical to improved data center efficiency

Traditional data center IT consumes so much power because the fundamental unit of compute is an individual server: like a house where rooms were built one at a time, with each room having its own central AC unit, gas furnace, and electrical panel. Individual rackmount servers are stacked together, each with its own AC power supplies, cooling fans, and power management. They are then paired with storage appliances and network switches that communicate at arm’s length, not designed as a cohesive whole. This approach fundamentally limits organizations' ability to maintain sustainable, high-efficiency computing systems.

Of course, hyperscale public cloud providers did not design their data center systems this way. Instead, they operate like a carefully planned smart home where everything is designed to work together cohesively and is operated by software that understands the home’s systems end-to-end. High-efficiency, rack-scale computers are deployed at scale and operate as a single unit with integrated storage and networking to support elastic cloud computing services. This modern architecture is made available to the market as public cloud, but that rental-only model is ill-fit for many business needs.

Compared to a popular rackmount server vendor, Oxide is able to fill our specialized racks with 32 AMD Milan sleds and highly-available network switches using less than 15kW per rack, doubling the compute density in a typical data center. With just 16 of the alternative 1U servers and equivalent network switches, over 16kW of power is required per rack, yielding only 1,024 CPU cores vs Oxide’s 2,048. Extracting more useful compute from each kW of power and square foot of data center space is key to the future effectiveness of on-premises computing.

At Oxide, we’ve taken this lesson in advancing rack-scale design, improved upon it in several ways, and made it available for every organization to purchase and operate anywhere in the world without a tether back to the public cloud. Our Cloud Computer treats the entire rack as a single, unified computer rather than a collection of independent parts, achieving unprecedented power efficiency. By designing the hardware and software together, we’ve eliminated unnecessary components and optimized every aspect of system operation through a control plane with visibility to end-to-end operations.

When we started Oxide, the DC bus bar stood as one of the most glaring differences between the rack-scale machines at the hyperscalers and the rack-and-stack servers that the rest of the market was stuck with. That a relatively simple piece of copper was unavailable to commercial buyers — despite being unequivocally the right way to build it! — represented everything wrong with the legacy approach. The bus bar in the Oxide Cloud Computer is not merely more efficient, it is a concrete embodiment of the tremendous gains from designing at rack-scale, and by integrating hardware with software.
— Bryan Cantrill

The improvements we’re seeing are rooted in technical innovation

Replacing low-efficiency AC power supplies with a high-efficiency DC bus bar. This eliminates the 70 total AC power supplies found in an equivalent legacy server rack: 32 servers, two top-of-rack switches, and one out-of-band switch, each with two AC power supplies. The power shelf also ensures the load is balanced across phases, something that’s impossible with the traditional power distribution units found in legacy server racks.

Bigger fans = bigger efficiency gains. The Oxide rack's larger fans move air using 12x less energy than legacy servers, which each contain as many as 7 fans that must work much harder to move air over system components.

Purpose-built for power efficiency. Airflow is less restricted than in legacy servers thanks to the elimination of extraneous components like PCIe risers, storage backplanes, and more. Legacy servers need many optional components like these because they could be used for any number of tasks, such as point-of-sale systems, data center servers, or network-attached-storage (NAS) systems. Still, they were never designed optimally for any one of those tasks. The Oxide Cloud Computer was designed from the ground up to be a rack-scale cloud computing powerhouse, and so it’s optimized for exactly that task.

Hardware + Software designed together. By designing the hardware and software together, we can make hardware choices like more intelligent DC-DC power converters that can provide rich telemetry to our control plane, enabling future feature enhancements such as dynamic power capping and efficiency-based workload placement that are impossible with legacy servers and software systems. Learn more about Oxide’s intelligent Power Shelf Controller.

The Bottom Line: Customers and the Environment Both Benefit

Reducing data center power demands and achieving more useful computing per kilowatt requires fundamentally rethinking traditional data center utilization and compute design. At Oxide, we’ve proven that dramatic efficiency gains are possible when you rethink the computer at rack-scale with hardware and software designed thoughtfully and rigorously together. Ready to learn how your organization can achieve these results? Schedule time with our team here. Together, we can reclaim on-premises computing efficiency to achieve both business and sustainability goals.
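To put rough numbers on the density claim above (a back-of-the-envelope calculation using only the figures from this post, treating "less than 15kW" and "over 16kW" as 15kW and 16kW):

2,048 cores / 15 kW ≈ 137 cores per kW (Oxide rack)
1,024 cores / 16 kW = 64 cores per kW (legacy 1U rack)

That works out to slightly more than 2x the useful compute per kilowatt, consistent with the claim of doubling compute density.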

7 months ago 73 votes

More in programming

Another tip (tip)
21 hours ago 4 votes
Get in losers, we're moving to Linux!

I've never seen so many developers curious about leaving the Mac and giving Linux a go. Something has really changed in the last few years. Maybe Linux just got better? Maybe powerful mini PCs made it easier? Maybe Apple just fumbled their relationship with developers one too many times? Maybe it's all of it. But whatever the reason, the vibe shift is noticeable.

This is why the future is so hard to predict! People have been joking about "The Year of Linux on the Desktop" since the late 90s. Just like self-driving cars were supposed to be a thing back in 2017. And now, in the year of our Lord 2025, it seems like we're getting both!

I also wouldn't underestimate the cultural influence of a few key people. PewDiePie sharing his journey into Arch and Hyprland with his 110 million followers is important. ThePrimeagen moving to Arch and Hyprland is important. Typecraft (whom I just spoke to about Omarchy) teaching beginners how to build an Arch and Hyprland setup from scratch is important. Gabe Newell's Steam Deck being built on Arch and pushing Proton to over 20,000 compatible Linux games is important.

You'll notice a trend here, which is that Arch Linux, a notoriously "difficult" distribution, is at the center of much of this new engagement. Despite the fact that it's been around since 2003! There's nothing new about Arch, but there's something new about the circles of people it's engaging. I've put Arch at the center of Omarchy too. Originally just because that was what Hyprland recommended. Then, after living with the wonders of 90,000+ packages on the community-driven AUR package repository, for its own sake. It's really good!

But while Arch (and Hyprland) are having a moment amongst a new crowd, it's also "just" Linux at its core. And Linux really is the star of the show. The perfect, free, and open alternative that was just sitting around waiting for developers to finally have had enough of the commercial offerings from Apple and Microsoft.

Now obviously there's a taste of "new vegan sees vegans everywhere" here. You start talking about Linux, and you'll hear from folks already in the community or those considering the move too. It's easy to confuse what you'd like to be true with what is actually true. And it's definitely true that Linux is still a niche operating system on the desktop. Even among developers. Apple and Microsoft sit on the lion's share of the market. But the mind share? They've been losing that fast.

The window is open for a major shift to happen. First gradually, then suddenly. It feels like morning in Linux land!

15 hours ago 2 votes
All about Svelte 5 snippets

Snippets are a useful addition to Svelte 5. I use them in my Svelte 5 projects like Edna.

Snippet basics

A snippet is a function that renders html based on its arguments. Here’s how to define and use a snippet:

```svelte
{#snippet hello(name)}
<div>Hello {name}!</div>
{/snippet}

{@render hello("Andrew")}
{@render hello("Amy")}
```

You can re-use snippets by exporting them:

```svelte
<script module>
  export { hello };
</script>

{#snippet hello(name)}<div>Hello {name}!</div>{/snippet}
```

Snippets use cases

Snippets for less nesting

Deeply nested html is hard to read. You can use snippets to extract some parts to make the structure clearer. For example, you can transform:

```svelte
<div>
  <div class="flex justify-end mt-2">
    <button
      onclick={onclose}
      class="mr-4 px-4 py-1 border border-black hover:bg-gray-100"
      >Cancel</button
    >
    <button
      onclick={() => emitRename()}
      disabled={!canRename}
      class="px-4 py-1 border border-black hover:bg-gray-50 disabled:text-gray-400 disabled:border-gray-400 disabled:bg-white default:bg-slate-700"
      >Rename</button
    >
  </div>
</div>
```

into:

```svelte
{#snippet buttonCancel()}
<button
  onclick={onclose}
  class="mr-4 px-4 py-1 border border-black hover:bg-gray-100"
  >Cancel</button
>
{/snippet}

{#snippet buttonRename()}...{/snippet}
```

To make this easier to read:

```svelte
<div>
  <div class="flex justify-end mt-2">
    {@render buttonCancel()}
    {@render buttonRename()}
  </div>
</div>
```

snippets replace default <slot/>

In Svelte 4, if you wanted to place some HTML inside the component, you used <slot />. Let’s say you have an Overlay.svelte component used like this:

```svelte
<Overlay>
  <MyDialog></MyDialog>
</Overlay>
```

In Svelte 4, you would use <slot /> to render children:

```svelte
<div class="overlay-wrapper">
  <slot />
</div>
```

<slot /> would be replaced with <MyDialog></MyDialog>. In Svelte 5, <MyDialog></MyDialog> is passed to Overlay.svelte as the children property, so you would change Overlay.svelte to:

```svelte
<script>
  let { children } = $props();
</script>

<div class="overlay-wrapper">
  {@render children()}
</div>
```

The children property is created by the Svelte compiler, so you should avoid naming your own props children.

snippets replace named slots

A component can have a default slot for rendering children and additional named slots. In Svelte 5, instead of named slots you pass snippets as props. An example of Dialog.svelte:

```svelte
<script>
  let { title, children } = $props();
</script>

<div class="dialog">
  <div class="title">
    {@render title()}
  </div>
  {@render children()}
</div>
```

And use:

```svelte
{#snippet title()}
<div class="fancy-title">My fancy title</div>
{/snippet}

<Dialog title={title}>
  <div>Body of the dialog</div>
</Dialog>
```

passing snippets as implicit props

You can pass the title snippet prop implicitly:

```svelte
<Dialog>
  {#snippet title()}
  <div class="fancy-title">My fancy title</div>
  {/snippet}

  <div>Body of the dialog</div>
</Dialog>
```

Because {#snippet title()} is a child of <Dialog>, we don’t have to pass it as an explicit title={title} prop. The compiler does it for us.

snippets to reduce repetition

Here’s part of how I render https://tools.arslexis.io/

```svelte
{#snippet row(name, url, desc)}
<tr>
  <td class="text-left align-top"
    ><a class="font-semibold whitespace-nowrap" href={url}>{name}</a>
  </td>
  <td class="pl-4 align-top">{@html desc}</td>
</tr>
{/snippet}

{@render row("unzip", "/unzip/", "unzip a file in the browser")}
{@render row("wc", "/wc/", "like <tt>wc</tt>, but in the browser")}
```

It saves me copy & paste of the same HTML and makes the structure more readable.

snippets for recursive rendering

Sometimes you need to render a recursive structure, like nested menus or a file tree. In Svelte 4 you could use <svelte:self>, but the downside of that is that you create multiple instances of the component. That means that the state is also split among multiple instances, which makes it harder to implement functionality that requires a global view of the structure, like keyboard navigation. With snippets you can render things recursively in a single instance of the component. I used it to implement nested context menus.

snippets to customize rendering

Let’s say you’re building a Menu component. Each menu item is a <div> with some non-trivial children. To allow the client of Menu to customize how items are rendered, you could provide props for things like colors, padding etc., or you could allow ultimate flexibility by accepting an optional menuitem prop that is a snippet that renders the item. You can think of it as a headless UI: you provide the necessary structure and difficult logic like keyboard navigation etc., and allow the client lots of control over how things are rendered.

snippets for library of icons

Before snippets, every SVG icon I used was a Svelte component. Many icons means many files. Now I have a single Icons.svelte file, like:

```svelte
<script module>
  export { IconMenu, IconSettings };
</script>

{#snippet IconMenu(arg1, arg2, ...)}
<svg>... icon svg</svg>
{/snippet}

{#snippet IconSettings()}
<svg>... icon svg</svg>
{/snippet}
```

yesterday 2 votes
Logical Quantifiers in Software

I realize that for all I've talked about Logic for Programmers in this newsletter, I never once explained basic logical quantifiers. They're both simple and incredibly useful, so let's do that this week!

Sets and quantifiers

A set is a collection of unordered, unique elements. {1, 2, 3, …} is a set, as are "every programming language", "every programming language's Wikipedia page", and "every function ever defined in any programming language's standard library". You can put whatever you want in a set, with some very specific limitations to avoid certain paradoxes.[2]

Once we have a set, we can ask "is something true for all elements of the set?" and "is something true for at least one element of the set?" IE, is it true that every programming language has a set collection type in the core language? We would write it like this:

```
# all of them
all l in ProgrammingLanguages: HasSetType(l)

# at least one
some l in ProgrammingLanguages: HasSetType(l)
```

This is the notation I use in the book because it's easy to read, type, and search for. Mathematicians historically had a few different formats; the one I grew up with was ∀x ∈ set: P(x) to mean all x in set, and ∃ to mean some. I use these when writing for just myself, but find them confusing to programmers when communicating. "All" and "some" are respectively referred to as "universal" and "existential" quantifiers.

Some cool properties

We can simplify expressions with quantifiers, in the same way that we can simplify !(x && y) to !x || !y.

First of all, quantifiers are commutative with themselves: some x: some y: P(x,y) is the same as some y: some x: P(x,y). For this reason we can write some x, y: P(x,y) as shorthand. We can even do this when quantifying over different sets, writing some x, x' in X, y in Y instead of some x, x' in X: some y in Y. We can not do this with "alternating quantifiers": all p in Person: some m in Person: Mother(m, p) says that every person has a mother, while some m in Person: all p in Person: Mother(m, p) says that someone is every person's mother.

Second, existentials distribute over || while universals distribute over &&. "There is some url which returns a 403 or 404" is the same as "there is some url which returns a 403 or some url that returns a 404", and "all PRs pass the linter and the test suites" is the same as "all PRs pass the linter and all PRs pass the test suites".

Finally, some and all are duals: some x: P(x) == !(all x: !P(x)), and vice-versa. Intuitively: if some file is malicious, it's not true that all files are benign.

All these rules together mean we can manipulate quantifiers almost as easily as we can manipulate regular booleans, putting them in whatever form is easiest to use in programming. Speaking of which, how do we use this in programming?

How we use this in programming

First of all, people clearly have a need for directly using quantifiers in code. If we have something of the form:

```
for x in list:
    if P(x):
        return true
return false
```

That's just some x in list: P(x). And this is a prevalent pattern, as you can see by using GitHub code search. It finds over 500k examples of this pattern in Python alone! That can be simplified via the language's built-in quantifiers: the Python would be any(P(x) for x in list). (Note this is not quantifying over sets but iterables. But the idea translates cleanly enough.)

More generally, quantifiers are a key way we express higher-level properties of software. What does it mean for a list to be sorted in ascending order? That all i, j in 0..<len(l): if i < j then l[i] <= l[j]. When should a ratchet test fail? When some f in functions - exceptions: Uses(f, bad_function). Should the image classifier work upside down? all i in images: classify(i) == classify(rotate(i, 180)). These are the properties we verify with tests and types and MISU and whatnot;[1] it helps to be able to make them explicit!

One cool use case that'll be in the book's next version: database invariants are universal statements over the set of all records, like all a in accounts: a.balance > 0. That's enforceable with a CHECK constraint. But what about something like all i, i' in intervals: NoOverlap(i, i')? That isn't covered by CHECK, since it spans two rows. Quantifier duality to the rescue! The invariant is equivalent to !(some i, i' in intervals: Overlap(i, i')), so it is preserved if the query SELECT COUNT(*) FROM intervals CROSS JOIN intervals … returns 0 rows. This means we can test it via a database trigger.[3]

There are a lot more use cases for quantifiers, but this is enough to introduce the ideas! Next week's the one year anniversary of the book entering early access, so I'll be writing a bit about that experience and how the book changed. It's crazy how crude v0.1 was compared to the current version.

[1] MISU ("make illegal states unrepresentable") means using data representations that rule out invalid values. For example, if you have a location -> Optional(item) lookup and want to make sure that each item is in exactly one location, consider instead changing the map to item -> location. This is a means of implementing the property all i in item, l, l' in location: if ItemIn(i, l) && l != l' then !ItemIn(i, l').

[2] Specifically, a set can't be an element of itself, which rules out constructing things like "the set of all sets" or "the set of sets that don't contain themselves".

[3] Though note that when you're inserting or updating an interval, you already have that row's fields in the trigger's NEW keyword. So you can just query !(some i in intervals: Overlap(new, i)), which is more efficient.
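An addendum, not part of the original newsletter: the built-in-quantifier point isn't Python-specific. Here is a small sketch of the same some/all pairing, plus the duality law, using Rust's iterator adapters any and all (the status-code data is invented for illustration):

```rust
fn main() {
    let statuses = [200, 403, 404, 500];

    // some s in statuses: s == 403
    let some_forbidden = statuses.iter().any(|&s| s == 403);

    // all s in statuses: s < 600
    let all_under_600 = statuses.iter().all(|&s| s < 600);

    // duality: some x: P(x) == !(all x: !P(x))
    assert_eq!(some_forbidden, !statuses.iter().all(|&s| s != 403));

    println!("some_forbidden={some_forbidden} all_under_600={all_under_600}");
}
```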

2 days ago 5 votes
The missing part of Espressif’s reset circuit

In the previous article, we peeked at the reset circuit of ESP-Prog with an oscilloscope, and reproduced it with basic components. We observed that it did not behave quite as expected. In this article, we’ll look into the missing pieces.

An incomplete circuit

For a hint, we’ll first look a bit more closely at the …

2 days ago 3 votes