The product we’re building, a rack-scale computer, is specifically designed to be a centralized, integrated product because that’s what our customers need. This requirement and the design choices we’ve made to meet it create some daily efficiency challenges for our team. As a remote-first company, we’re designing this product with team members (including the hardware team) across most North American time zones and even multiple continents, so a large portion of our team is not going into the office/lab every day for hands-on access to "production" hardware. At first blush, the design of our product and the design of our team appear to conflict at some level: we value remote work, but we can’t ship entire racks to the homes of our teammates for both practical and economic reasons. Our racks are rather inconvenient for a home installation: they stand over 2.3 m (7.7') tall, are very heavy, and have 3-phase power inputs that aren’t usable in a typical residential setting. Aside from the...

More from Oxide Computer Company Blog

Oxide’s Compensation Model: How is it Going?

How it started

Four years ago, we were struggling to hire. Our team was small (~23 employees), and we knew that we needed many more people to execute on our audacious vision. While we had had success hiring in our personal networks, those networks now felt tapped; we needed to get further afield. As is our wont, we got together as a team and brainstormed: how could we get a bigger and broader applicant pool? One of our engineers, Sean, shared some personal experience: that Oxide’s principles and values were very personally important to him — but that when he explained them to people unfamiliar with the company, they were (understandably?) dismissed as corporate claptrap. Sean had found, however, that there was one surefire way to cut through the skepticism: to explain our approach to compensation. Maybe, Sean wondered, we should talk about it publicly? "I could certainly write a blog entry explaining it," I offered. At this suggestion, the team practically lunged with enthusiasm: the reaction was so uniformly positive that I have to assume that everyone was sick of explaining this most idiosyncratic aspect of Oxide to friends and family. So what was the big deal about our compensation? Well, as I wrote in the resulting piece, Compensation as a Reflection of Values, our compensation is not merely transparent, but uniform. The piece — unsurprisingly, given the evergreen hot topic that is compensation — got a ton of attention. While some of that attention was negative (despite the piece trying to frontrun every HN hater!), much of it was positive — and everyone seemed to be at least intrigued. And in terms of its initial purpose, the piece succeeded beyond our wildest imagination: it brought a surge of new folks interested in the company. Best of all, the people new to Oxide were interested for all of the right reasons: not for the compensation per se, but for the values that the compensation represents. The deeper they dug, the more they found to like — and many who learned about Oxide for the first time through that blog entry we now count as long-time, cherished colleagues. That blog entry was a long time ago now, and today we have ~75 employees (and a shipping product!); how is our compensation model working out for us?

How it’s going

Before we get into our deeper findings, two updates that are so important that we have updated the blog entry itself. First, the dollar figure itself continues to increase over time (as of this writing in 2025, $207,264); things definitely haven’t gotten (and aren’t getting!) any cheaper. And second, we did introduce variable compensation for some sales roles. Yes, those roles can make more than the rest of us — but they can also make less, too. And, importantly: if/when those folks are making more than the rest of us, it’s because they’re selling a lot — a result that can be celebrated by everyone! Those critical updates out of the way, how is it working? There have been a lot of surprises along the way, mostly (all?) of the positive variety. A couple of things that we have learned: People take their own performance really seriously. When some outsiders hear about our compensation model, they insist that it can’t possibly work because "everyone will slack off."
I have come to find this concern to be more revealing of the person making the objection than of our model, as our experience has been in fact the opposite: in my one-on-one conversations with team members, a frequent subject of conversation is people who are concerned that they aren’t doing enough (or that they aren’t doing the right thing, or that their work is progressing slower than they would like). I find my job is often to help quiet this inner critic while at the same time stoking what I feel is a healthy urge: when one holds one’s colleagues in high regard, there is an especially strong desire to help contribute — to prove oneself worthy of a superlative team. Our model allows people to focus on their own contribution (whatever it might be). People take hiring really seriously. When evaluating a peer (rather than a subordinate), one naturally has high expectations — and because (in the sense of our wages, anyway) everyone at Oxide is a peer, it shouldn’t be surprising that folks have very high expectations for potential future colleagues. And because the Oxide hiring process is writing-intensive, it allows for candidates to be thoroughly reviewed by Oxide employees — who are tough graders! It is, bluntly, really hard to get a job at Oxide. It allows us to internalize the importance of different roles. One of the more incredible (and disturbingly frequent) objections I have heard is: "But is that what you’ll pay support folks?" I continue to find this question offensive, but I no longer find it surprising: the specific dismissal of support roles reveals a widespread and corrosive devaluation of those closest to customers. My rejoinder is simple: think of the best support engineers you’ve worked with; what were they worth? Anyone who has shipped complex systems knows these extraordinary people — calm under fire, deeply technical, brilliantly resourceful, profoundly empathetic — are invaluable to the business. So what if you built a team entirely of folks like that? The response has usually been: well, sure, if you’re going to only hire those folks. Yeah, we are — and we have! It allows for fearless versatility. A bit of a corollary to the above, but subtly different: even though we (certainly!) hire and select for certain roles, our uniform compensation means we can in fact think primarily in terms of people unconfined by those roles. That is, we can be very fluid about what we’re working on, without fear of how it will affect a perceived career trajectory. As a concrete example: we had a large customer that wanted to put in place a program for some of the additional work they wanted to see in the product. The complexity of their needs required dedicated program management resources that we couldn’t spare, and in another more static company we would have perhaps looked to hire. But in our case, two folks came together — CJ from operations, and Izzy from support — and did something together that was in some regards new to both of them (and was neither of their putative full-time jobs!). The result was indisputably successful: the customer loved the results, and two terrific people got a chance to work closely together without worrying about who was dotted-lined to whom. It has allowed us to organizationally scale. Many organizations describe themselves as flat, and a reasonable rebuttal to this is the "shadow hierarchies" created by the tyranny of structurelessness.
And indeed, if one were to read (say) Valve’s (in)famous handbook, the autonomy seems great — but the stack ranking decidedly less so, especially because the handbook is conspicuously silent on the subject of compensation. (Unsurprisingly, compensation was weaponized at Valve, which descended into toxic cliquishness.) While we believe that autonomy is important to do one’s best work, we also have a clear structure at Oxide in that Steve Tuck (Oxide co-founder and CEO) is in charge. He has to be: he is held accountable to our investors — and he must have the latitude to make decisions. Under Steve, it is true that we don’t have layers of middle management. Might we need some in the future? Perhaps, but what fraction of middle management in a company is dedicated to — at some level — determining who gets what in terms of compensation? What happens when you eliminate that burden completely? It frees us to both lead and follow. We expect that every Oxide employee has the capacity to lead others — and we tap this capacity frequently. Of course, a company in which everyone is trying to direct all traffic all the time would be a madhouse, so we also very much rely on following one another too! Just as our compensation model allows us to internalize the values of different roles, it allows us to appreciate the value of both leading and following, and empowers us each with the judgement to know when to do which. This isn’t always easy or free of ambiguity, but this particular dimension of our versatility has been essential — and our compensation model serves to encourage it. It causes us to hire carefully and deliberately. Of course, one should always hire carefully and deliberately, but this often isn’t the case — and many a startup has been ruined by reckless expansion of headcount. One of the roots of this can be found in a dirty open secret of Silicon Valley middle management: its ranks are taught to grade their career by the number of reports in their organization. Just as if you were to compensate software engineers based on the number of lines of code they wrote, this results in perverse incentives and predictable disasters — and any Silicon Valley vet will have plenty of horror stories of middle management jockeying for reqs or reorgs when they should have been focusing on product and customers. When you can eliminate middle management, you eliminate this incentive. We grow the team not because of someone’s animal urges to have the largest possible organization, but rather because we are at a point where adding people will allow us to better serve our market and customers. It liberates feedback from compensation. Feedback is, of course, very important: we all want to know when and where we’re doing the right thing! And of course, we want to know too where there is opportunity for improvement. However, Silicon Valley has historically tied feedback so tightly to compensation that it has ceased to even pretend to be constructive: if it needs to be said, performance review processes aren’t, in fact, about improving the performance of the team, but rather quantifying and stack-ranking that performance for purposes of compensation. When compensation is moved aside, there is a kind of liberation for feedback itself: because feedback is now entirely earnest, it can be expressed and received thoughtfully. It allows people to focus on doing the right thing. 
In a world of traditional, compensation-tied performance review, the organizational priority is around those things that affect compensation — even at the expense of activity that clearly benefits the company. This leads to all sorts of wild phenomena, and most technology workers will be able to tell stories of doing things that were clearly right for the company, but having to hide it from management that thought only narrowly in terms of their own stated KPIs and MBOs. By contrast, over and over (and over!) again, we have found that people do the right thing at Oxide — even if (especially if?) no one is looking. The beneficiary of that right thing? More often than not, it’s our customers, who have uniformly praised the team for going above and beyond. It allows us to focus on the work that matters. Relatedly, when compensation is non-uniform, the process to figure out (and maintain) that non-uniformity is laborious. All of that work — of line workers assembling packets explaining themselves, of managers arming themselves with those packets to fight in the arena of organizational combat, and then of those same packets ultimately being regurgitated back onto something called a review — is work. Assuming such a process is executed perfectly (something which I suppose is possible in the abstract, even though I personally have never seen it), this is work that does not in fact advance the mission of the company. Not having variable compensation gives us all of that time and energy back to do the actual work — the stuff that matters. It has stoked an extraordinary sense of teamwork. For me personally — and as I relayed on an episode of Software Misadventures — the highlights of my career have been being a part of an extraordinary team. The currency of a team is mutual trust, and while uniform compensation certainly isn’t the only way to achieve that trust, boy does it ever help! As Steve and I have told one another more times than we can count: we are so lucky to work on this team, with its extraordinary depth and breadth. While our findings have been very positive, I would still reiterate what we said four years ago: we don’t know what the future holds, and it’s easier to make an unwavering commitment to the transparency than to the uniformity. That said, the uniformity has had so many positive ramifications that the model feels more important than ever. We are beyond the point of this being a curiosity; it’s been essential for building a mission-focused team taking on a problem larger than ourselves. So it’s not a fit for everyone — but if you are seeking an extraordinary team solving hard problems in service to customers, consider Oxide!

dtrace.conf(24)

Sometime in late 2007, we had the idea of a DTrace conference. Or really, more of a meetup; from the primordial e-mail I sent: The goal here, by the way, is not a DTrace user group, but more of a face-to-face meeting with people actively involved in DTrace — either by porting it to another system, by integrating probes into higher level environments, by building higher-level tools on top of DTrace or by using it heavily and/or in a critical role. That said, we also don’t want to be exclusionary, so our thinking is that the only true requirement for attending is that everyone must be prepared to speak informally for 15 mins or so on what they are doing with DTrace, any limitations that they have encountered, and some ideas for the future. We’re thinking that this is going to be on the order of 15-30 people (though more would be a good problem to have — we’ll track it if necessary), that it will be one full day (breakfast in the morning through drinks into the evening), and that we’re going to host it here at our offices in San Francisco sometime in March 2008. This same note also included some suggested names for the gathering, including what in hindsight seems a clear winner: DTrace Bi-Mon-Sci-Fi-Con. As if knowing that I should leave an explanatory note to my future self as to why this name was not selected, my past self fortunately clarified: "before everyone clamors for the obvious Bi-Mon-Sci-Fi-Con, you should know that most Millennials don’t (sadly) get the reference." (While I disagree with the judgement of my past self, it at least indicates that at some point I cared if anyone got the reference.) We settled on a much more obscure reference, and had the first dtrace.conf in March 2008. Befitting the style of the time, it was an unconference (a term that may well have hit its apogee in 2008) that you signed up to attend by editing a wiki. More surprising given the year (and thanks entirely to attendee Ben Rockwood), it was recorded — though this is so long ago that I referred to it as video taping (and with none of the participants mic’d, I’m afraid the quality isn’t very good). The conference, however, was terrific, viz. the reports of Adam, Keith and Stephen (all somehow still online nearly two decades later). If anything, it was a little too good: we realized that we couldn’t recreate the magic, and we demurred on making it an annual event. Years passed, and memories faded. By 2012, it felt like we wanted to get folks together again, now under a post-lawnmower corporate aegis in Joyent. The resulting dtrace.conf(12) was a success, and the Olympiad cadence felt like the right one; we did it again four years later at dtrace.conf(16). In 2020, we came back together for a new adventure — and the DTrace Olympiad was not lost on Adam. Alas, dtrace.conf(20) — like the Olympics themselves — was cancelled, if implicitly. Unlike the Olympics, however, it was not to be rescheduled. More years passed and DTrace continued to prove its utility at Oxide; last year when Adam and I did our "DTrace at 20" episode of Oxide and Friends, we vowed to hold dtrace.conf(24) — and a few months ago, we set our date to be December 11th. At first we assumed we would do something similar to our earlier conferences: a one-day participant-run conference, at the Oxide office in Emeryville. But times have changed: thanks to the rise of remote work, technologists are much more dispersed — and many more people would need to travel for dtrace.conf(24) than in previous DTrace Olympiads. 
Travel hasn’t become any cheaper since 2008, and the cost (and inconvenience) was clearly going to limit attendance. The dilemma for our small meetup highlights the changing dynamics in tech conferences in general: with talks all recorded and made publicly available after the conference, how does one justify attending a conference in person? There can be reasonable answers to that question, of course: it may be the hallway track, or the expo hall, or the after-hours socializing, or perhaps some other special conference experience. But it’s also not surprising that some conferences — especially ones really focused on technical content — have decided that they are better off doing as conference giant O’Reilly Media did, and going exclusively online. And without the need to feed and shelter participants, the logistics for running a conference become much more tenable — and the price point can be lowered to the point that even highly produced conferences like P99 CONF can be made freely available. This, in turn, leads to much greater attendance — and a network effect that can get back some of what one might lose going online. In particular, using chat as the hallway track can be much more effective (and is certainly more scalable!) than the actual physical hallways at a conference. For conferences in general, there is a conversation to be had here (and as a teaser, Adam and I are going to talk about it with Stephen O’Grady and Theo Schlossnagle on Oxide and Friends next week), but for our quirky, one-day, Olympiad-cadence dtrace.conf, the decision was pretty easy: there was much more to be gained than lost by going exclusively online. So dtrace.conf(24) is coming up next week, and it’s available to everyone. In terms of platform, we’re going to try to keep that pretty simple: we’re going to use Google Meet for the actual presenters, which we will stream in real-time to YouTube — and we’ll use the Oxide Discord for all chat. We’re hoping you’ll join us on December 11th — and if you want to talk about DTrace or a DTrace-adjacent topic, we’d love for you to present! Keeping to the unconference style, if you would like to present, please indicate your topic in the #session-topics Discord channel so we can get the agenda fleshed out. While we’re excited to be online, there are some historical accoutrements of conferences that we didn’t want to give up. First, we have a tradition of t-shirts with dtrace.conf. Thanks to our designer Ben Leonard, we have a banger of a t-shirt, capturing the spirit of our original dtrace.conf(08) shirt but with an Oxide twist. It’s (obviously) harder to make those free, but we have tried to price them reasonably. You can get your t-shirt by adding it to your (free) dtrace.conf ticket. (And for those who present at dtrace.conf, your shirt is on us — we’ll send you a coupon code!) Second, for those who can make their way to the East Bay and want some hangout time, we are going to have an après-conference social event at the Oxide office starting at 5p. We’re charging something nominal for that too (and like the t-shirt, you pay for that via your dtrace.conf ticket); we’ll have some food and drinks and an Oxide hardware tour for the curious — and (of course?) there will be Fishpong. Much has changed since I sent that e-mail 17 years ago — but the shared values and disposition that brought together our small community continue to endure; we look forward to seeing everyone (virtually) at dtrace.conf(24)!

Advancing Cloud and HPC Convergence with Lawrence Livermore National Laboratory

Oxide Computer Company and Lawrence Livermore National Laboratory Work Together to Advance Cloud and HPC Convergence

Oxide Computer Company and Lawrence Livermore National Laboratory (LLNL) today announced a plan to bring on-premises cloud computing capabilities to the Livermore Computing (LC) high-performance computing (HPC) center. The rack-scale Oxide Cloud Computer allows LLNL to improve the efficiency of operational workloads and will provide users in the National Nuclear Security Administration (NNSA) with new capabilities for provisioning secure, virtualized services alongside HPC workloads. HPC centers have traditionally run batch workloads for large-scale scientific simulations and other compute-heavy applications. HPC workloads do not exist in isolation—there are a multitude of persistent, operational services that keep the HPC center running. Meanwhile, HPC users also want to deploy cloud-like persistent services—databases, Jupyter notebooks, orchestration tools, Kubernetes clusters. Clouds have developed extensive APIs, security layers, and automation to enable these capabilities, but few options exist to deploy fully virtualized, automated cloud environments on-premises. The Oxide Cloud Computer allows organizations to deliver secure cloud computing capabilities within an on-premises environment.

"On-premises environments are the next frontier for cloud computing. LLNL is tackling some of the hardest and most important problems in science and technology, requiring advanced hardware, software, and cloud capabilities. We are thrilled to be working with their exceptional team to help advance those efforts, delivering an integrated system that meets their rigorous requirements for performance, efficiency, and security." — Steve Tuck, CEO at Oxide Computer Company

Leveraging the new Oxide Cloud Computer, LLNL will enable staff to provision virtual machines (VMs) and services via self-service APIs, improving operations and modernizing aspects of system management. In addition, LLNL will use the Oxide rack as a proving ground for secure multi-tenancy and for smooth integration with the LLNL-developed Flux resource manager. LLNL plans to bring its users cloud-like Infrastructure-as-a-Service (IaaS) capabilities that work seamlessly with their HPC jobs, while maintaining security and isolation from other users. Beyond LLNL personnel, researchers at the Los Alamos National Laboratory and Sandia National Laboratories will also partner in many of the activities on the Oxide Cloud Computer.

"We look forward to working with Oxide to integrate this machine within our HPC center. Oxide’s Cloud Computer will allow us to securely support new types of workloads for users, and it will be a proving ground for introducing cloud-like features to operational processes and user workflows. We expect Oxide’s open-source software stack and their transparent and open approach to development to help us work closely together." — Todd Gamblin, Distinguished Member of Technical Staff at LLNL

"Sandia is excited to explore the Oxide platform as we work to integrate on-premise cloud technologies into our HPC environment. This advancement has the potential to enable new classes of interactive and on-demand modeling and simulation capabilities." — Kevin Pedretti, Distinguished Member of Technical Staff at Sandia National Laboratories

LLNL plans to work with Oxide on additional capabilities, including the deployment of additional Cloud Computers in its environment.
Of particular interest are scale-out capabilities and disaster recovery. The latest installation underscores Oxide Computer’s momentum in the federal technology ecosystem, providing reliable, state-of-the-art Cloud Computers to support critical IT infrastructure. To learn more about Oxide Computer, visit https://oxide.computer.

About Oxide Computer

Oxide Computer Company is the creator of the world’s first commercial Cloud Computer, a true rack-scale system with fully unified hardware and software, purpose-built to deliver hyperscale cloud computing to on-premises data centers. With Oxide, organizations can fully realize the economic and operational benefits of cloud ownership, with access to the same self-service development experience of public cloud, without the public cloud cost. Oxide empowers developers to build, run, and operate any application with enhanced security, latency, and control, and frees organizations to elevate IT operations to accelerate strategic initiatives. To learn more about Oxide’s Cloud Computer, visit oxide.computer.

About LLNL

Founded in 1952, Lawrence Livermore National Laboratory provides solutions to our nation’s most important national security challenges through innovative science, engineering, and technology. Lawrence Livermore National Laboratory is managed by Lawrence Livermore National Security, LLC for the U.S. Department of Energy’s National Nuclear Security Administration.

Media Contact

LaunchSquad for Oxide Computer
oxide@launchsquad.com

Remembering Charles Beeler

We are heartbroken to relay that Charles Beeler, a friend and early investor in Oxide, passed away in September after a battle with cancer. We lost Charles far too soon; he had a tremendous influence on the careers of us both. Our relationship with Charles dates back nearly two decades, to his involvement with the ACM Queue board where he met Bryan. It was unprecedented to have a venture capitalist serve in this capacity with ACM, and Charles brought an entirely different perspective on the practitioner content. A computer science pioneer who also served on the board took Bryan aside at one point: "Charles is one of the good ones, you know." When Bryan joined Joyent a few years later, Charles also got to know Steve well. Seeing the promise in both node.js and cloud computing, Charles became an investor in the company. When companies hit challenging times, some investors will hide — but Charles was the kind of investor to figure out how to fix what was broken. When Joyent needed a change in executive leadership, it was Charles who not only had the tough conversations, but led the search for the leader the company needed, ultimately positioning the company for success. Aside from his investment in Joyent, Charles was an outspoken proponent of node.js, becoming an organizer of the Node Summit conference. In 2017, he asked Bryan to deliver the conference’s keynote, but by then, the relationship between Joyent and node.js had become…​ complicated, and Bryan felt that it probably wouldn’t be a good idea. Any rational person would have dropped it, but Charles persisted, with characteristic zeal: if the Joyent relationship with node.js had become strained, so much more the reason to speak candidly about it! Charles prevailed, and the resulting talk, Platform as Reflection of Values, became one of Bryan’s most personally meaningful talks. Charles’s persistence was emblematic: he worked behind the scenes to encourage people to do their best work, always with an enthusiasm for the innovators and the creators. As we were contemplating Oxide, we told Charles what we wanted to do long before we had a company. Charles laughed with delight: "I hoped that you two would do something big, and I am just so happy for you that you’re doing something so ambitious!" As we raised seed capital, we knew that we were likely a poor fit for Charles and his fund. But we also knew that we deeply appreciated his wisdom and enthusiasm; we couldn’t resist pitching him on Oxide. Charles approached the investment in Oxide as he did with so many other aspects: with curiosity, diligence, empathy, and candor. He was direct with us that despite his enthusiasm for us personally, Oxide would be a challenging investment for his firm. But he also worked with us to address specific objections, and ultimately he won over his partnership. We were thrilled when he not only invested, but pulled together a syndicate of like-minded technologists and entrepreneurs to join him. Ever since, he has been a huge Oxide fan. Befitting his enthusiasm, one of his final posts expressed his enthusiasm and pride in what the Oxide team has built. Charles, thank you. You told us you were proud of us — and it meant the world. We are gutted to no longer have you with us; your influence lives on not just in Oxide, but also in the many people that you have inspired. You were the best of venture capital. Closer to the heart, you were a terrific friend to us both; thank you.

How Oxide Cuts Data Center Power Consumption in Half

Here’s a sobering thought: today, data centers already consume 1-2% of the world’s power, and that percentage will likely rise to 3-4% by the end of the decade. According to Goldman Sachs research, that rise will include a doubling in data center carbon dioxide emissions. As the data and AI boom progresses, this thirst for power shows no signs of slowing down anytime soon. Two key challenges quickly become evident for the 85% of IT that currently lives on-premises. How can organizations reduce power consumption and corresponding carbon emissions? How can organizations keep pace with AI innovation as existing data centers run out of available power?

Figure 1. Masanet et al. (2020), Cisco, IEA, Goldman Sachs Research

Rack-scale design is critical to improved data center efficiency

Traditional data center IT consumes so much power because the fundamental unit of compute is an individual server: like a house where rooms were built one at a time, with each room having its own central AC unit, gas furnace, and electrical panel. Individual rackmount servers are stacked together, each with their own AC power supplies, cooling fans, and power management. They are then paired with storage appliances and network switches that communicate at arm’s length, not designed as a cohesive whole. This approach fundamentally limits organizations' ability to maintain sustainable, high-efficiency computing systems. Of course, hyperscale public cloud providers did not design their data center systems this way. Instead, they operate like a carefully planned smart home where everything is designed to work together cohesively and is operated by software that understands the home’s systems end-to-end. High-efficiency, rack-scale computers are deployed at scale and operate as a single unit with integrated storage and networking to support elastic cloud computing services. This modern architecture is made available to the market as public cloud, but that rental-only model is ill-fit for many business needs. Compared to a popular rackmount server vendor, Oxide is able to fill our specialized racks with 32 AMD Milan sleds and highly-available network switches using less than 15kW per rack, doubling the compute density in a typical data center. With just 16 of the alternative 1U servers and equivalent network switches, over 16kW of power is required per rack, leading to only 1,024 CPU cores vs Oxide’s 2,048. (A quick back-of-the-envelope check of these figures appears at the end of this post.) Extracting more useful compute from each kW of power and square foot of data center space is key to the future effectiveness of on-premises computing. At Oxide, we’ve taken this lesson in advancing rack-scale design, improved upon it in several ways, and made it available for every organization to purchase and operate anywhere in the world without a tether back to the public cloud. Our Cloud Computer treats the entire rack as a single, unified computer rather than a collection of independent parts, achieving unprecedented power efficiency. By designing the hardware and software together, we’ve eliminated unnecessary components and optimized every aspect of system operation through a control plane with visibility to end-to-end operations.

When we started Oxide, the DC bus bar stood as one of the most glaring differences between the rack-scale machines at the hyperscalers and the rack-and-stack servers that the rest of the market was stuck with. That a relatively simple piece of copper was unavailable to commercial buyers — despite being unequivocally the right way to build it!
— represented everything wrong with the legacy approach. The bus bar in the Oxide Cloud Computer is not merely more efficient, it is a concrete embodiment of the tremendous gains from designing at rack-scale, and by integrating hardware with software. — Bryan Cantrill

The improvements we’re seeing are rooted in technical innovation

Replacing low-efficiency AC power supplies with a high-efficiency DC Bus Bar: This eliminates the 70 total AC power supplies found in an equivalent legacy server rack (32 servers, two top-of-rack switches, and one out-of-band switch, each with two AC power supplies). This power shelf also ensures the load is balanced across phases, something that’s impossible with traditional power distribution units found in legacy server racks.

Bigger fans = bigger efficiency gains: the rack’s bigger fans use 12x less energy than those in legacy servers, which each contain as many as 7 fans that must work much harder to move air over system components.

Purpose-built for power efficiency: airflow is less restricted than in legacy servers because extraneous components like PCIe risers, storage backplanes, and more have been eliminated. Legacy servers need many optional components like these because they could be used for any number of tasks, such as point-of-sale systems, data center servers, or network-attached-storage (NAS) systems. Still, they were never designed optimally for any one of those tasks. The Oxide Cloud Computer was designed from the ground up to be a rack-scale cloud computing powerhouse, and so it’s optimized for exactly that task.

Hardware + Software designed together: By designing the hardware and software together, we can make hardware choices like more intelligent DC-DC power converters that can provide rich telemetry to our control plane, enabling future feature enhancements such as dynamic power capping and efficiency-based workload placement that are impossible with legacy servers and software systems. Learn more about Oxide’s intelligent Power Shelf Controller.

The Bottom Line: Customers and the Environment Both Benefit

Reducing data center power demands and achieving more useful computing per kilowatt requires fundamentally rethinking traditional data center utilization and compute design. At Oxide, we’ve proven that dramatic efficiency gains are possible when you rethink the computer at rack-scale with hardware and software designed thoughtfully and rigorously together. Ready to learn how your organization can achieve these results? Schedule time with our team here. Together, we can reclaim on-premises computing efficiency to achieve both business and sustainability goals.
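To make the density comparison concrete, here is a quick back-of-the-envelope sketch. The figures are the ones quoted above (2,048 cores under 15 kW per Oxide rack versus 1,024 cores at over 16 kW for the 16-server legacy rack); only the arithmetic is added, and the snippet is illustrative rather than anything from Oxide's own tooling:

    # Density figures quoted in the post above.
    oxide_cores, oxide_kw = 2048, 15.0      # 32 AMD Milan sleds per Oxide rack
    legacy_cores, legacy_kw = 1024, 16.0    # 16 alternative 1U servers per rack

    print(f"Oxide:  {oxide_cores / oxide_kw:.0f} cores per kW")    # ~137 cores/kW
    print(f"Legacy: {legacy_cores / legacy_kw:.0f} cores per kW")  # 64 cores/kW

    # AC power supplies eliminated by the DC bus bar in a legacy rack:
    # 32 servers + 2 top-of-rack switches + 1 out-of-band switch, 2 PSUs each.
    print("AC power supplies per legacy rack:", (32 + 2 + 1) * 2)  # 70

By these numbers, the Oxide rack delivers roughly twice the cores per kW of the legacy configuration, consistent with the doubling of compute density described above.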


More in programming

Write the most clever code you possibly can

I started writing this early last week but Real Life Stuff happened and now you're getting the first-draft late this week. Warning, unedited thoughts ahead! New Logic for Programmers release! v0.9 is out! This is a big release, with a new cover design, several rewritten chapters, online code samples and much more. See the full release notes at the changelog page, and get the book here!

Write the cleverest code you possibly can

There are millions of articles online about how programmers should not write "clever" code, and instead write simple, maintainable code that everybody understands. Sometimes the example of "clever" code looks like this (src):

    # Python
    p=n=1
    exec("p*=n*n;n+=1;"*~-int(input()))
    print(p%n)

This is code-golfing, the sport of writing the most concise code possible. Obviously you shouldn't run this in production for the same reason you shouldn't eat dinner off a Rembrandt. Other times the example looks like this:

    def is_prime(x):
        if x == 1:
            return True
        return all([x%n != 0 for n in range(2, x)])

This is "clever" because it uses a single list comprehension, as opposed to a "simple" for loop. Yes, "list comprehensions are too clever" is something I've read in one of these articles. I've also talked to people who think that datatypes besides lists and hashmaps are too clever to use, that most optimizations are too clever to bother with, and even that functions and classes are too clever and code should be a linear script.1 Clever code is anything using features or domain concepts we don't understand. Something that seems unbearably clever to me might be utterly mundane for you, and vice versa. How do we make something utterly mundane? By using it and working at the boundaries of our skills. Almost everything I'm "good at" comes from banging my head against it more than is healthy. That suggests a really good reason to write clever code: it's an excellent form of purposeful practice. Writing clever code forces us to code outside of our comfort zone, developing our skills as software engineers.

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you [will get excellent debugging practice at exactly the right level required to push your skills as a software engineer] — Brian Kernighan, probably

There are other benefits, too, but first let's kill the elephant in the room:2

Don't commit clever code

I am proposing writing clever code as a means of practice. Being at work is a job with coworkers who will not appreciate it if your code is too clever. Similarly, don't use too many innovative technologies. Don't put anything in production you are uncomfortable with. We can still responsibly write clever code at work, though:
Solve a problem in both a simple and a clever way, and then only commit the simple way. This works well for small-scale problems where trying the "clever way" only takes a few minutes.
Write our personal tools cleverly. I'm a big believer in the idea that most programmers would benefit from writing more scripts and support code customized to their particular work environment. This is a great place to practice new techniques, languages, etc.
If clever code is absolutely the best way to solve a problem, then commit it with extensive documentation explaining how it works and why it's preferable to simpler solutions. Bonus: this potentially helps the whole team upskill.

Writing clever code...
...teaches simple solutions

Usually, code that's called too clever composes several powerful features together — the "not a single list comprehension or function" people are the exception. Josh Comeau's "don't write clever code" article gives this example of "too clever":

    const extractDataFromResponse = (response) => {
      const [Component, props] = response;
      const resultsEntries = Object.entries({ Component, props });
      const assignIfValueTruthy = (o, [k, v]) => (v ? { ...o, [k]: v } : o);
      return resultsEntries.reduce(assignIfValueTruthy, {});
    }

What makes this "clever"? I count eight language features composed together: entries, argument unpacking, implicit objects, splats, ternaries, higher-order functions, and reductions. Would code that used only one or two of these features still be "clever"? I don't think so. These features exist for a reason, and oftentimes they make code simpler than not using them. We can, of course, learn these features one at a time. Writing the clever version (but not committing it) gives us practice with all eight at once and also with how they compose together. That knowledge comes in handy when we want to apply a single one of the ideas. I've recently had to do a bit of pandas for a project. Whenever I have to do a new analysis, I try to write it as a single chain of transformations, and then as a more balanced set of updates. (A rough sketch of that contrast appears a bit further below.)

...helps us master concepts

Even if the composite parts of a "clever" solution aren't by themselves useful, it still makes us better at the overall language, and that's inherently valuable. A few years ago I wrote Crimes with Python's Pattern Matching. It involves writing horrible code like this:

    from abc import ABC

    class NotIterable(ABC):
        @classmethod
        def __subclasshook__(cls, C):
            return not hasattr(C, "__iter__")

    def f(x):
        match x:
            case NotIterable():
                print(f"{x} is not iterable")
            case _:
                print(f"{x} is iterable")

    if __name__ == "__main__":
        f(10)
        f("string")
        f([1, 2, 3])

This composes Python match statements, which are broadly useful, and abstract base classes, which are incredibly niche. But even if I never use ABCs in real production code, it helped me understand Python's match semantics and Method Resolution Order better.

...prepares us for necessity

Sometimes the clever way is the only way. Maybe we need something faster than the simplest solution. Maybe we are working with constrained tools or frameworks that demand cleverness. Peter Norvig argued that design patterns compensate for missing language features. I'd argue that cleverness is another means of compensating: if our tools don't have an easy way to do something, we need to find a clever way. You see this a lot in formal methods like TLA+. Need to check a hyperproperty? Cast your state space to a directed graph. Need to compose ten specifications together? Combine refinements with state machines. Most difficult problems have a "clever" solution. The real problem is that clever solutions have a skill floor. If normal use of the tool is at difficulty 3 out of 10, then basic clever solutions are at 5 out of 10, and it's hard to jump those two steps in the moment you need the cleverness. But if you've practiced with writing overly clever code, you're used to working at a 7 out of 10 level in short bursts, and then you can "drop down" to 5/10. I don't know if that makes too much sense, but I see it happen a lot in practice.
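As a concrete illustration of the "single chain of transformations, then a more balanced set of updates" workflow mentioned above, here is a minimal, hypothetical pandas sketch. The dataset and column names are invented for illustration; they are not from the original newsletter:

    import pandas as pd

    df = pd.DataFrame({
        "team": ["a", "a", "b", "b"],
        "hours": [3.0, 5.0, 2.0, 8.0],
        "done": [True, False, True, True],
    })

    # One chain of transformations, read top to bottom.
    chained = (
        df[df["done"]]
        .assign(days=lambda d: d["hours"] / 8)
        .groupby("team", as_index=False)["days"]
        .sum()
        .sort_values("days", ascending=False)
    )

    # The same analysis as a more balanced set of explicit updates.
    finished = df[df["done"]].copy()
    finished["days"] = finished["hours"] / 8
    stepwise = finished.groupby("team", as_index=False)["days"].sum()
    stepwise = stepwise.sort_values("days", ascending=False)

    print(chained.equals(stepwise))  # True: both produce the same result

Whether the chained version reads as "clever" or simply as idiomatic pandas is exactly the kind of judgment the article is about.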
...builds comradery

On a few occasions, after getting a pull request merged, I pulled the reviewer over and said "check out this horrible way of doing the same thing". I find that as long as people know they're not going to be subjected to a clever solution in production, they enjoy seeing it! Next week's newsletter will probably also be late; after that we should be back to a regular schedule for the rest of the summer.

1. Mostly grad students outside of CS who have to write scripts to do research. And in more than one data scientist. I think it's correlated with using Jupyter. ↩
2. If I don't put this at the beginning, I'll get a bajillion responses like "your team will hate you" ↩

I switched from GMail and nobody died

Whether we like it or not, email is widely used to identify a person. Code sent to email is used as authentication and sometimes as authorisation for certain actions. I’m not comfortable with Google having such power over me, especially given the fact that they practically don’t have any support you can appeal to. If your Google account is blocked, that’s it. Maybe you know someone from Google and they can help you, but for most of us mortals that’s not an option.

Language Needs Innovation

In his book “The Order of Time,” Carlo Rovelli notes how we often ask ourselves questions about the fundamental nature of reality such as “What is real?” and “What exists?” But those are bad questions, he says. Why? The adjective “real” is ambiguous; it has a thousand meanings. The verb “to exist” has even more. To the question “Does a puppet whose nose grows when he lies exist?” it is possible to reply: “Of course he exists! It’s Pinocchio!”; or: “No, it doesn’t, he’s only part of a fantasy dreamed up by Collodi.” Both answers are correct, because they are using different meanings of the verb “to exist.” He notes how Pinocchio “exists” and is “real” in terms of a literary character, but not so far as any official Italian registry office is concerned. To ask oneself in general “what exists” or “what is real” means only to ask how you would like to use a verb and an adjective. It’s a grammatical question, not a question about nature. The point he goes on to make is that our language has to evolve and adapt with our knowledge. Our grammar developed from our limited experience, before we knew what we know now and before we became aware of how imprecise it was in describing the richness of the natural world. Rovelli gives an example of this from a text of antiquity which uses confusing grammar to get at the idea of the Earth having a spherical shape: For those standing below, things above are below, while things below are above, and this is the case around the entire earth. On its face, that is a very confusing sentence full of contradictions. But the idea in there is profound: the Earth is round and direction is relative to the observer. Here’s Rovelli: How is it possible that “things above are below, while things below are above”? It makes no sense… But if we reread it bearing in mind the shape and the physics of the Earth, the phrase becomes clear: its author is saying that for those who live at the Antipodes (in Australia), the direction “upward” is the same as “downward” for those who are in Europe. He is saying, that is, that the direction “above” changes from one place to another on the Earth. He means that what is above with respect to Sydney is below with respect to us. The author of this text, written two thousand years ago, is struggling to adapt his language and his intuition to a new discovery: the fact that the Earth is a sphere, and that “up” and “down” have a meaning that changes between here and there. The terms do not have, as previously thought, a single and universal meaning. So language needs innovation as much as any technological or scientific achievement. Otherwise we find ourselves arguing over questions of deep import in a way that ultimately amounts to merely a question of grammar.

A Little Bit Now, A Lotta Bit Later

In mid-March we released a big bug fix update—elementary OS 8.0.1—and since then we’ve been hard at work on even more bug fixes and some new exciting features that I’m excited to share with you today! Read ahead to find out what we’ve released recently and what you can help us test in Early Access.

Quick Settings

[Screenshot: Quick Settings has a new “Prevent Sleep” toggle]

Leo added a new “Prevent Sleep” toggle. This is useful when you’re giving a presentation or have a long-running background task where you want to temporarily avoid letting the computer go to sleep on its normal schedule. We also fixed a bug where the “Dark Mode” toggle would cancel the dark mode schedule when used. We now have proper schedule snoozing, so when you manually toggle Dark Mode on or off while using a timed or sunset-to-sunrise schedule, your schedule will resume on the next schedule change instead of being canceled completely. Vishal also fixed an issue that caused some apps to report being improperly closed on system shutdown or restart, and on the lock screen we now show the “Suspend” button rather than the “Lock” button.

System Settings

Locale settings has a fresh layout thanks to Alain, with its options aligned more cleanly and improved links to additional settings.

[Screenshot: Locale Settings has a more responsive design]

We’ve also added the phrase “about this device” as a search term for the System page and improved interface copy when a restart is required to finish installing updates based on your feedback. Plus, Stanisław improved stylus detection in Wacom settings, preventing a crash when no stylus is found.

AppCenter

We now show a small label next to the download button for apps which contain in-app purchases. This is especially useful for easily identifying free-to-play games or alt stores like Steam or Heroic Games Launcher.

[Screenshot: AppCenter now shows when apps have in-app purchases]

Plus, we now reload app icons on-the-fly as their data is processed, thanks to Italo. That means you’ll no longer get occasionally stuck with an AppCenter which shows missing images for apps that have taken a bit longer than usual to load.

Get These Updates

As always, pop open System Settings → System on elementary OS 8 and hit “Update All” to get these updates plus your regular security, bug fix, and translation updates. Or set up automatic updates and get a notification when updates are ready to install!

Early Access

Our development focus recently has been on some of the bigger features that will likely land for either elementary OS 8.1 or 9. We’ve got a new app, big changes to the design of our desktop itself, a whole lot of under-the-hood cleanup, and the return of some key system services thanks to a new open source project.

Monitor

[Screenshot: We’re now shipping a System Monitor app by default]

By popular demand—and thanks to the hard work of Stanisław—we have a new system monitor app called “Monitor” shipping in Early Access. Monitor provides usage information for your processor, GPU, memory, storage, network, and currently running processes.

[Screenshot: You can optionally see system information in the panel with Monitor]

You can also optionally get a ton of glanceable information shown in the panel. There’s currently a lot of work happening to port Monitor to GTK4 and improve its functionality under the Secure Session, so make sure to report any issues you find!

Multitasking

[Screenshot: The Dock is getting a workspace switcher]

Probably the biggest change to the Pantheon shell since its early inception, the Dock is getting a new workspace switcher!
The workspace switcher works much like the one you may have seen in the Multitasking View:
Your currently open workspaces are represented as tiles with the icons of apps running on them;
You can select a workspace to switch to it;
You can drag-and-drop workspaces to rearrange them;
And you can use the “+” button to create a new blank workspace.

One new trick, however, is that selecting the workspace you’re already on will launch Multitasking View. The new workspace switcher makes it so much more accessible to multitask with just the mouse and get an overview of your workflows without having to first enter the Multitasking View. We’re really excited to hear what people think about it!

[Screenshot: You can close apps from Multitasking View by swiping up]

Another very satisfying feature for folks using touch input: you can now swipe up windows in the Multitasking View to close them. This is a really familiar gesture for those of us with Android and iOS devices and feels really natural for managing a big stack of windows without having to aim for a small “x” button.

GTK4 Porting

We’ve recently landed the port of Tasks to GTK4. So far that comes with a few fixes to tighten up its design, with much more possible in the future. Please make sure to help us test it thoroughly for any regressions!

[Screenshot: Tasks has a slightly tightened up design]

We’re also making great progress on porting the panel to GTK4. So far we have branches in review for Nightlight, Bluetooth, Datetime, and Network indicators. Power, Keyboard, and Quick Settings indicators all have in-progress branches. That leaves just Applications, Sound, and Notifications. So far these ports don’t come with major feature changes, but they do involve lots of cleaning up and modernizing of these code bases and in some cases fixing bugs! When the port is finished, we should see immediate performance gains and we’ll have a much better foundation for future releases. You can follow along with our progress porting everything to GTK4 in this GitHub Project.

And More

When you take a screenshot using keyboard shortcuts or by secondary-clicking an app’s window handle, we now send a notification letting you know that it was successful and where to find the resulting image. Plus there’s a handy button that opens Files with your screenshot pre-selected. We’re also testing beaconDB as a replacement for Mozilla Location Services (MLS). If you’re not aware, we relied on MLS in previous versions of elementary OS to provide location information for devices that don’t have a GPS radio. Unfortunately Mozilla discontinued the service last June and we’ve been left without a replacement until now. Without these services, not only did maps and weather apps cease to function, but system features like automatic timezone detection and features that rely on sunset and sunrise times no longer work properly. beaconDB offers a drop-in replacement for MLS that uses wireless networks, Bluetooth devices, and cell towers to provide location data when requested. (A sketch of the kind of request such a service answers appears at the end of this post.) All of its data is crowd-sourced and opt-in, and several distributions are now defaulting to using it as their location services data provider. I’ve set up a small sponsorship from elementary on Liberapay to support the project. If you can help support beaconDB either by sponsoring or providing stumbler data, I’d highly encourage you to do so!

Sponsors

At the moment we’re at 23% of our monthly funding goal and 336 Sponsors on GitHub! Shoutouts to everyone helping us reach our goals here.
Your monthly sponsorship funds development and makes sure we have the resources we need to give you the best version of elementary OS we can! Monthly release candidate builds and daily Early Access builds are available to GitHub Sponsors from any tier! Beware that Early Access builds are not considered stable and you will encounter fresh issues when you run them. We’d really appreciate you reporting any problems you encounter with the Feedback app or directly on GitHub.
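For the curious, here is a rough sketch of the kind of request a drop-in MLS replacement like beaconDB answers. The request and response shapes follow the MLS-style geolocate API; the endpoint URL and access-point data below are placeholders for illustration, not beaconDB's actual service details:

    import json
    import urllib.request

    # Placeholder endpoint; a real deployment would use the provider's documented URL.
    url = "https://geolocation.example.org/v1/geolocate"

    # Nearby Wi-Fi access points observed by the device (MAC addresses are made up).
    payload = {
        "wifiAccessPoints": [
            {"macAddress": "01:23:45:67:89:ab", "signalStrength": -51},
            {"macAddress": "01:23:45:67:89:cd", "signalStrength": -70},
        ]
    }

    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # Expected shape: {"location": {"lat": ..., "lng": ...}, "accuracy": ...}
    print(result["location"], result["accuracy"])

System features like automatic timezone detection can then consume the returned position instead of requiring a GPS fix.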

The Tumultuous Evolution of the Design Profession

Via Jeremy Keith’s link blog I found this article: Elizabeth Goodspeed on why graphic designers can’t stop joking about hating their jobs. It’s about the disillusionment of designers since the ~2010s. Having ridden that wave myself, there’s a lot of very relatable stuff in there about how design has evolved as a profession. But before we get into the meat of the article, there’s some bangers worth acknowledging, like this: Amazon – the most used website in the world – looks like a bunch of pop-up ads stitched together. lol, burn. Haven’t heard Amazon described this way, but it’s spot on. The hard truth, as pointed out in the article, is this: bad design doesn’t hurt profit margins. Or at least there’s no immediately-obvious, concrete data or correlation that proves this. So most decision makers don’t care. You know what does help profit margins? Spending less money. Cost-savings initiatives. Those always provide a direct, immediate, seemingly-obvious correlation. So those initiatives get prioritized. Fuzzy human-centered initiatives (humanities-adjacent stuff) are difficult to quantitatively (and monetarily) measure. “Let’s stop printing paper and sending people stuff in the mail. It’s expensive. Send them emails instead.” Boom! Money saved for everyone. That’s easier to prioritize than asking, “How do people want us to communicate with them — if at all?” Nobody ever asks that last part. Designers quickly realized that in most settings they serve the business first, customers second — or third, or fourth, or... Shar Biggers [says] designers are “realising that much of their work is being used to push for profit rather than change.” Meet the new boss. Same as the old boss. As students, designers are encouraged to make expressive, nuanced work, and rewarded for experimentation and personal voice. The implication, of course, is that this is what a design career will look like: meaningful, impactful, self-directed. But then graduation hits, and many land their first jobs building out endless Google Slides templates or resizing banner ads... no one prepared them for how constrained and compromised most design jobs actually are. Reality hits hard. And here’s the part Jeremy quotes: We trained people to care deeply and then funnelled them into environments that reward detachment. And the longer you stick around, the more disorienting the gap becomes – especially as you rise in seniority. You start doing less actual design and more yapping: pitching to stakeholders, writing brand strategy decks, performing taste. Less craft, more optics; less idealism, more cynicism. Less work advocating for your customers, more work advocating for yourself and your team within the organization itself. Then the cynicism sets in. We’re not making software for others. We’re making company numbers go up, so our numbers ($$$) will go up. Which reminds me: Stephanie Stimac wrote about reaching 1 year at Igalia and what stood out to me in her post was that she didn’t feel a pressing requirement to create visibility into her work and measure (i.e. prove) its impact. I’ve never been good at that. I’ve seen its necessity, but am just not good at doing it. Being good at building is great. But being good at the optics of building is often better — for you, your career, and your standing in many orgs. Anyway, back to Elizabeth’s article. She notes you’ll burn out trying to monetize something you love — especially when it’s in pursuit of maintaining a cost of living.
Once your identity is tied up in the performance, it’s hard to admit when it stops feeling good. It’s a great article, and if you’ve been in the design profession of building software, it’s worth your time.
