Sometime in late 2007, we had the idea of a DTrace conference. Or really, more of a meetup; from the primordial e-mail I sent: The goal here, by the way, is not a DTrace user group, but more of a face-to-face meeting with people actively involved in DTrace — either by porting it to another system, by integrating probes into higher level environments, by building higher-level tools on top of DTrace or by using it heavily and/or in a critical role. That said, we also don’t want to be exclusionary, so our thinking is that the only true requirement for attending is that everyone must be prepared to speak informally for 15 mins or so on what they are doing with DTrace, any limitations that they have encountered, and some ideas for the future. We’re thinking that this is going to be on the order of 15-30 people (though more would be a good problem to have — we’ll track it if necessary), that it will be one full day (breakfast in the morning through drinks into the evening), and that...
7 months ago

More from Oxide Computer Company Blog

Oxide’s Compensation Model: How is it Going?

How it started

Four years ago, we were struggling to hire. Our team was small (~23 employees), and we knew that we needed many more people to execute on our audacious vision. While we had had success hiring in our personal networks, those networks now felt tapped; we needed to get further afield. As is our wont, we got together as a team and brainstormed: how could we get a bigger and broader applicant pool? One of our engineers, Sean, shared some personal experience: that Oxide’s principles and values were very personally important to him — but that when he explained them to people unfamiliar with the company, they were (understandably?) dismissed as corporate claptrap. Sean had found, however, that there was one surefire way to cut through the skepticism: to explain our approach to compensation. Maybe, Sean wondered, we should talk about it publicly? "I could certainly write a blog entry explaining it," I offered. At this suggestion, the team practically lunged with enthusiasm: the reaction was so uniformly positive that I have to assume that everyone was sick of explaining this most idiosyncratic aspect of Oxide to friends and family. So what was the big deal about our compensation? Well, as I wrote in the resulting piece, Compensation as a Reflection of Values, our compensation is not merely transparent, but uniform. The piece — unsurprisingly, given the evergreen hot topic that is compensation — got a ton of attention. While some of that attention was negative (despite the piece trying to frontrun every HN hater!), much of it was positive — and everyone seemed to be at least intrigued. And in terms of its initial purpose, the piece succeeded beyond our wildest imagination: it brought a surge of new folks interested in the company. Best of all, the people new to Oxide were interested for all of the right reasons: not the compensation per se, but for the values that the compensation represents. The deeper they dug, the more they found to like — and many who learned about Oxide for the first time through that blog entry we now count as long-time, cherished colleagues. That blog entry was a long time ago now, and today we have ~75 employees (and a shipping product!); how is our compensation model working out for us?

How it’s going

Before we get into our deeper findings, two updates that are so important that we have updated the blog entry itself. First, the dollar figure itself continues to increase over time (as of this writing in 2025, $207,264); things definitely haven’t gotten (and aren’t getting!) any cheaper. And second, we did introduce variable compensation for some sales roles. Yes, those roles can make more than the rest of us — but they can also make less, too. And, importantly: if/when those folks are making more than the rest of us, it’s because they’re selling a lot — a result that can be celebrated by everyone! Those critical updates out of the way, how is it working? There have been a lot of surprises along the way, mostly (all?) of the positive variety. A couple of things that we have learned: People take their own performance really seriously. When some outsiders hear about our compensation model, they insist that it can’t possibly work because "everyone will slack off."
I have come to find this concern to be more revealing of the person making the objection than of our model, as our experience has been in fact the opposite: in my one-on-one conversations with team members, a frequent subject of conversation is people who are concerned that they aren’t doing enough (or that they aren’t doing the right thing, or that their work is progressing slower than they would like). I find my job is often to help quiet this inner critic while at the same time stoking what I feel is a healthy urge: when one holds one’s colleagues in high regard, there is an especially strong desire to help contribute — to prove oneself worthy of a superlative team. Our model allows people to focus on their own contribution (whatever it might be). People take hiring really seriously. When evaluating a peer (rather than a subordinate), one naturally has high expectations — and because (in the sense of our wages, anyway) everyone at Oxide is a peer, it shouldn’t be surprising that folks have very high expectations for potential future colleagues. And because the Oxide hiring process is writing intensive, it allows for candidates to be thoroughly reviewed by Oxide employees — who are tough graders! It is, bluntly, really hard to get a job at Oxide. It allows us to internalize the importance of different roles. One of the more incredible (and disturbingly frequent) objections I have heard is: "But is that what you’ll pay support folks?" I continue to find this question offensive, but I no longer find it surprising: the specific dismissal of support roles reveals a widespread and corrosive devaluation of those closest to customers. My rejoinder is simple: think of the best support engineers you’ve worked with; what were they worth? Anyone who has shipped complex systems knows these extraordinary people — calm under fire, deeply technical, brilliantly resourceful, profoundly empathetic — are invaluable to the business. So what if you built a team entirely of folks like that? The response has usually been: well, sure, if you’re going to only hire those folks. Yeah, we are — and we have! It allows for fearless versatility. A bit of a corollary to the above, but subtly different: even though we (certainly!) hire and select for certain roles, our uniform compensation means we can in fact think primarily in terms of people unconfined by those roles. That is, we can be very fluid about what we’re working on, without fear of how it will affect a perceived career trajectory. As a concrete example: we had a large customer that wanted to put in place a program for some of the additional work they wanted to see in the product. The complexity of their needs required dedicated program management resources that we couldn’t spare, and in another more static company we would have perhaps looked to hire. But in our case, two folks came together — CJ from operations, and Izzy from support — and did something together that was in some regards new to both of them (and was neither of their putative full-time jobs!). The result was indisputably successful: the customer loved the results, and two terrific people got a chance to work closely together without worrying about who was dotted-lined to whom. It has allowed us to organizationally scale. Many organizations describe themselves as flat, and a reasonable rebuttal to this is the "shadow hierarchies" created by the tyranny of structurelessness.
And indeed, if one were to read (say) Valve’s (in)famous handbook, the autonomy seems great — but the stack ranking decidedly less so, especially because the handbook is conspicuously silent on the subject of compensation. (Unsurprisingly, compensation was weaponized at Valve, which descended into toxic cliquishness.) While we believe that autonomy is important to do one’s best work, we also have a clear structure at Oxide in that Steve Tuck (Oxide co-founder and CEO) is in charge. He has to be: he is held accountable to our investors — and he must have the latitude to make decisions. Under Steve, it is true that we don’t have layers of middle management. Might we need some in the future? Perhaps, but what fraction of middle management in a company is dedicated to — at some level — determining who gets what in terms of compensation? What happens when you eliminate that burden completely? It frees us to both lead and follow. We expect that every Oxide employee has the capacity to lead others — and we tap this capacity frequently. Of course, a company in which everyone is trying to direct all traffic all the time would be a madhouse, so we also very much rely on following one another too! Just as our compensation model allows us to internalize the values of different roles, it allows us to appreciate the value of both leading and following, and empowers us each with the judgement to know when to do which. This isn’t always easy or free of ambiguity, but this particular dimension of our versatility has been essential — and our compensation model serves to encourage it. It causes us to hire carefully and deliberately. Of course, one should always hire carefully and deliberately, but this often isn’t the case — and many a startup has been ruined by reckless expansion of headcount. One of the roots of this can be found in a dirty open secret of Silicon Valley middle management: its ranks are taught to grade their career by the number of reports in their organization. Just as if you were to compensate software engineers based on the number of lines of code they wrote, this results in perverse incentives and predictable disasters — and any Silicon Valley vet will have plenty of horror stories of middle management jockeying for reqs or reorgs when they should have been focusing on product and customers. When you can eliminate middle management, you eliminate this incentive. We grow the team not because of someone’s animal urges to have the largest possible organization, but rather because we are at a point where adding people will allow us to better serve our market and customers. It liberates feedback from compensation. Feedback is, of course, very important: we all want to know when and where we’re doing the right thing! And of course, we want to know too where there is opportunity for improvement. However, Silicon Valley has historically tied feedback so tightly to compensation that it has ceased to even pretend to be constructive: if it needs to be said, performance review processes aren’t, in fact, about improving the performance of the team, but rather quantifying and stack-ranking that performance for purposes of compensation. When compensation is moved aside, there is a kind of liberation for feedback itself: because feedback is now entirely earnest, it can be expressed and received thoughtfully. It allows people to focus on doing the right thing. 
In a world of traditional, compensation-tied performance review, the organizational priority is around those things that affect compensation — even at the expense of activity that clearly benefits the company. This leads to all sorts of wild phenomena, and most technology workers will be able to tell stories of doing things that were clearly right for the company, but having to hide it from management that thought only narrowly in terms of their own stated KPIs and MBOs. By contrast, over and over (and over!) again, we have found that people do the right thing at Oxide — even if (especially if?) no one is looking. The beneficiary of that right thing? More often than not, it’s our customers, who have uniformly praised the team for going above and beyond. It allows us to focus on the work that matters. Relatedly, when compensation is non-uniform, the process to figure out (and maintain) that non-uniformity is laborious. All of that work — of line workers assembling packets explaining themselves, of managers arming themselves with those packets to fight in the arena of organizational combat, and then of those same packets ultimately being regurgitated back onto something called a review — is work. Assuming such a process is executed perfectly (something which I suppose is possible in the abstract, even though I personally have never seen it), this is work that does not in fact advance the mission of the company. Not having variable compensation gives us all of that time and energy back to do the actual work — the stuff that matters. It has stoked an extraordinary sense of teamwork. For me personally — and as I relayed on an episode of Software Misadventures — the highlights of my career have been being a part of an extraordinary team. The currency of a team is mutual trust, and while uniform compensation certainly isn’t the only way to achieve that trust, boy does it ever help! As Steve and I have told one another more times than we can count: we are so lucky to work on this team, with its extraordinary depth and breadth. While our findings have been very positive, I would still reiterate what we said four years ago: we don’t know what the future holds, and it’s easier to make an unwavering commitment to the transparency rather than the uniformity. That said, the uniformity has had so many positive ramifications that the model feels more important than ever. We are beyond the point of this being a curiosity; it’s been essential for building a mission-focused team taking on a problem larger than ourselves. So it’s not a fit for everyone — but if you are seeking an extraordinary team solving hard problems in service to customers, consider Oxide!

2 months ago 17 votes
Advancing Cloud and HPC Convergence with Lawrence Livermore National Laboratory

Oxide Computer Company and Lawrence Livermore National Laboratory Work Together to Advance Cloud and HPC Convergence

Oxide Computer Company and Lawrence Livermore National Laboratory (LLNL) today announced a plan to bring on-premises cloud computing capabilities to the Livermore Computing (LC) high-performance computing (HPC) center. The rack-scale Oxide Cloud Computer allows LLNL to improve the efficiency of operational workloads and will provide users in the National Nuclear Security Administration (NNSA) with new capabilities for provisioning secure, virtualized services alongside HPC workloads. HPC centers have traditionally run batch workloads for large-scale scientific simulations and other compute-heavy applications. HPC workloads do not exist in isolation—there are a multitude of persistent, operational services that keep the HPC center running. Meanwhile, HPC users also want to deploy cloud-like persistent services—databases, Jupyter notebooks, orchestration tools, Kubernetes clusters. Clouds have developed extensive APIs, security layers, and automation to enable these capabilities, but few options exist to deploy fully virtualized, automated cloud environments on-premises. The Oxide Cloud Computer allows organizations to deliver secure cloud computing capabilities within an on-premises environment.

On-premises environments are the next frontier for cloud computing. LLNL is tackling some of the hardest and most important problems in science and technology, requiring advanced hardware, software, and cloud capabilities. We are thrilled to be working with their exceptional team to help advance those efforts, delivering an integrated system that meets their rigorous requirements for performance, efficiency, and security.
— Steve Tuck, CEO at Oxide Computer Company

Leveraging the new Oxide Cloud Computer, LLNL will enable staff to provision virtual machines (VMs) and services via self-service APIs, improving operations and modernizing aspects of system management. In addition, LLNL will use the Oxide rack as a proving ground for secure multi-tenancy and for smooth integration with the LLNL-developed Flux resource manager. LLNL plans to bring its users cloud-like Infrastructure-as-a-Service (IaaS) capabilities that work seamlessly with their HPC jobs, while maintaining security and isolation from other users. Beyond LLNL personnel, researchers at the Los Alamos National Laboratory and Sandia National Laboratories will also partner in many of the activities on the Oxide Cloud Computer.

We look forward to working with Oxide to integrate this machine within our HPC center. Oxide’s Cloud Computer will allow us to securely support new types of workloads for users, and it will be a proving ground for introducing cloud-like features to operational processes and user workflows. We expect Oxide’s open-source software stack and their transparent and open approach to development to help us work closely together.
— Todd Gamblin, Distinguished Member of Technical Staff at LLNL

Sandia is excited to explore the Oxide platform as we work to integrate on-premise cloud technologies into our HPC environment. This advancement has the potential to enable new classes of interactive and on-demand modeling and simulation capabilities.
— Kevin Pedretti, Distinguished Member of Technical Staff at Sandia National Laboratories

LLNL plans to work with Oxide on additional capabilities, including the deployment of additional Cloud Computers in its environment.
Of particular interest are scale-out capabilities and disaster recovery. The latest installation underscores Oxide Computer’s momentum in the federal technology ecosystem, providing reliable, state-of-the-art Cloud Computers to support critical IT infrastructure. To learn more about Oxide Computer, visit https://oxide.computer.

About Oxide Computer

Oxide Computer Company is the creator of the world’s first commercial Cloud Computer, a true rack-scale system with fully unified hardware and software, purpose-built to deliver hyperscale cloud computing to on-premises data centers. With Oxide, organizations can fully realize the economic and operational benefits of cloud ownership, with access to the same self-service development experience of public cloud, without the public cloud cost. Oxide empowers developers to build, run, and operate any application with enhanced security, latency, and control, and frees organizations to elevate IT operations to accelerate strategic initiatives. To learn more about Oxide’s Cloud Computer, visit oxide.computer.

About LLNL

Founded in 1952, Lawrence Livermore National Laboratory provides solutions to our nation’s most important national security challenges through innovative science, engineering, and technology. Lawrence Livermore National Laboratory is managed by Lawrence Livermore National Security, LLC for the U.S. Department of Energy’s National Nuclear Security Administration.

Media Contact

LaunchSquad for Oxide Computer
oxide@launchsquad.com

8 months ago 82 votes
Remembering Charles Beeler

We are heartbroken to relay that Charles Beeler, a friend and early investor in Oxide, passed away in September after a battle with cancer. We lost Charles far too soon; he had a tremendous influence on the careers of us both. Our relationship with Charles dates back nearly two decades, to his involvement with the ACM Queue board where he met Bryan. It was unprecedented to have a venture capitalist serve in this capacity with ACM, and Charles brought an entirely different perspective on the practitioner content. A computer science pioneer who also served on the board took Bryan aside at one point: "Charles is one of the good ones, you know." When Bryan joined Joyent a few years later, Charles also got to know Steve well. Seeing the promise in both node.js and cloud computing, Charles became an investor in the company. When companies hit challenging times, some investors will hide — but Charles was the kind of investor to figure out how to fix what was broken. When Joyent needed a change in executive leadership, it was Charles who not only had the tough conversations, but led the search for the leader the company needed, ultimately positioning the company for success. Aside from his investment in Joyent, Charles was an outspoken proponent of node.js, becoming an organizer of the Node Summit conference. In 2017, he asked Bryan to deliver the conference’s keynote, but by then, the relationship between Joyent and node.js had become…​ complicated, and Bryan felt that it probably wouldn’t be a good idea. Any rational person would have dropped it, but Charles persisted, with characteristic zeal: if the Joyent relationship with node.js had become strained, so much more the reason to speak candidly about it! Charles prevailed, and the resulting talk, Platform as Reflection of Values, became one of Bryan’s most personally meaningful talks. Charles’s persistence was emblematic: he worked behind the scenes to encourage people to do their best work, always with an enthusiasm for the innovators and the creators. As we were contemplating Oxide, we told Charles what we wanted to do long before we had a company. Charles laughed with delight: "I hoped that you two would do something big, and I am just so happy for you that you’re doing something so ambitious!" As we raised seed capital, we knew that we were likely a poor fit for Charles and his fund. But we also knew that we deeply appreciated his wisdom and enthusiasm; we couldn’t resist pitching him on Oxide. Charles approached the investment in Oxide as he did with so many other aspects: with curiosity, diligence, empathy, and candor. He was direct with us that despite his enthusiasm for us personally, Oxide would be a challenging investment for his firm. But he also worked with us to address specific objections, and ultimately he won over his partnership. We were thrilled when he not only invested, but pulled together a syndicate of like-minded technologists and entrepreneurs to join him. Ever since, he has been a huge Oxide fan. Befitting his enthusiasm, one of his final posts expressed his enthusiasm and pride in what the Oxide team has built. Charles, thank you. You told us you were proud of us — and it meant the world. We are gutted to no longer have you with us; your influence lives on not just in Oxide, but also in the many people that you have inspired. You were the best of venture capital. Closer to the heart, you were a terrific friend to us both; thank you.

8 months ago 68 votes
How Oxide Cuts Data Center Power Consumption in Half

Here’s a sobering thought: today, data centers already consume 1-2% of the world’s power, and that percentage will likely rise to 3-4% by the end of the decade. According to Goldman Sachs research, that rise will include a doubling in data center carbon dioxide emissions. As the data and AI boom progresses, this thirst for power shows no signs of slowing down anytime soon. Two key challenges quickly become evident for the 85% of IT that currently lives on-premises: How can organizations reduce power consumption and corresponding carbon emissions? How can organizations keep pace with AI innovation as existing data centers run out of available power?

Figure 1. Masanet et al. (2020), Cisco, IEA, Goldman Sachs Research

Rack-scale design is critical to improved data center efficiency

Traditional data center IT consumes so much power because the fundamental unit of compute is an individual server: like a house where rooms were built one at a time, with each room having its own central AC unit, gas furnace, and electrical panel. Individual rackmount servers are stacked together, each with their own AC power supplies, cooling fans, and power management. They are then paired with storage appliances and network switches that communicate at arm’s length, not designed as a cohesive whole. This approach fundamentally limits organizations' ability to maintain sustainable, high-efficiency computing systems. Of course, hyperscale public cloud providers did not design their data center systems this way. Instead, they operate like a carefully planned smart home where everything is designed to work together cohesively and is operated by software that understands the home’s systems end-to-end. High-efficiency, rack-scale computers are deployed at scale and operate as a single unit with integrated storage and networking to support elastic cloud computing services. This modern architecture is made available to the market as public cloud, but that rental-only model is ill-fit for many business needs. Compared to a popular rackmount server vendor, Oxide is able to fill our specialized racks with 32 AMD Milan sleds and highly-available network switches using less than 15kW per rack, doubling the compute density in a typical data center. With just 16 of the alternative 1U servers and equivalent network switches, over 16kW of power is required per rack, leading to only 1,024 CPU cores vs Oxide’s 2,048. Extracting more useful compute from each kW of power and square foot of data center space is key to the future effectiveness of on-premises computing. At Oxide, we’ve taken this lesson in advancing rack-scale design, improved upon it in several ways, and made it available for every organization to purchase and operate anywhere in the world without a tether back to the public cloud. Our Cloud Computer treats the entire rack as a single, unified computer rather than a collection of independent parts, achieving unprecedented power efficiency. By designing the hardware and software together, we’ve eliminated unnecessary components and optimized every aspect of system operation through a control plane with visibility to end-to-end operations. When we started Oxide, the DC bus bar stood as one of the most glaring differences between the rack-scale machines at the hyperscalers and the rack-and-stack servers that the rest of the market was stuck with. That a relatively simple piece of copper was unavailable to commercial buyers — despite being unequivocally the right way to build it!
— represented everything wrong with the legacy approach. The bus bar in the Oxide Cloud Computer is not merely more efficient, it is a concrete embodiment of the tremendous gains from designing at rack-scale, and by integrating hardware with software.
— Bryan Cantrill

The improvements we’re seeing are rooted in technical innovation

Replacing low-efficiency AC power supplies with a high-efficiency DC Bus Bar
This eliminates the 70 total AC power supplies found in an equivalent legacy server rack: 32 servers, two top-of-rack switches, and one out-of-band switch, each with two AC power supplies. This power shelf also ensures the load is balanced across phases, something that’s impossible with traditional power distribution units found in legacy server racks.

Bigger fans = bigger efficiency gains
Using 12x less energy than legacy servers, which each contain as many as 7 fans that must work much harder to move air over system components.

Purpose-built for power efficiency
Less restrictive airflow than legacy servers by eliminating extraneous components like PCIe risers, storage backplanes, and more. Legacy servers need many optional components like these because they could be used for any number of tasks, such as point-of-sale systems, data center servers, or network-attached-storage (NAS) systems. Still, they were never designed optimally for any one of those tasks. The Oxide Cloud Computer was designed from the ground up to be a rack-scale cloud computing powerhouse, and so it’s optimized for exactly that task.

Hardware + Software designed together
By designing the hardware and software together, we can make hardware choices like more intelligent DC-DC power converters that can provide rich telemetry to our control plane, enabling future feature enhancements such as dynamic power capping and efficiency-based workload placement that are impossible with legacy servers and software systems. Learn more about Oxide’s intelligent Power Shelf Controller.

The Bottom Line: Customers and the Environment Both Benefit

Reducing data center power demands and achieving more useful computing per kilowatt requires fundamentally rethinking traditional data center utilization and compute design. At Oxide, we’ve proven that dramatic efficiency gains are possible when you rethink the computer at rack-scale with hardware and software designed thoughtfully and rigorously together. Ready to learn how your organization can achieve these results? Schedule time with our team here. Together, we can reclaim on-premises computing efficiency to achieve both business and sustainability goals.
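As a rough back-of-the-envelope check (mine, not a calculation from the original post), the figures quoted above imply roughly double the compute per kilowatt:

$$
\underbrace{\frac{2048\ \text{cores}}{15\ \text{kW}} \approx 137\ \text{cores/kW}}_{\text{Oxide rack}}
\qquad \text{vs.} \qquad
\underbrace{\frac{1024\ \text{cores}}{16\ \text{kW}} = 64\ \text{cores/kW}}_{\text{legacy 1U rack}}
$$

That ratio of about 2.1x is consistent with the claim of doubling compute density in a typical data center.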

8 months ago 78 votes

More in programming

Building competency is better than therapy

The world is waking to the fact that talk therapy is neither the only nor the best way to cure a garden-variety petite depression. Something many people will encounter at some point in their lives. Studies have shown that exercise, for example, is a more effective treatment than talk therapy (and pharmaceuticals!) when dealing with such episodes. But I'm just as interested in the role building competence can have in warding off the demons. And partly because of this meme: I've talked about it before, but I keep coming back to the fact that it's exactly backwards. That signing up for an educational quest into Linux, history, or motorcycle repair actually is an incredibly effective alternative to therapy! At least for men who'd prefer to feel useful over being listened to, which, in my experience, is most of them. This is why I find it so misguided when people who undertake those quests sell their journey short with self-effacing jibes about how much of an unattractive nerd it makes them to care about their hobby. Mihaly Csikszentmihalyi detailed back in 1990 how peak human happiness arrives exactly in these moments of flow when your competence is stretched by a difficult-but-doable challenge. Don't tell me those endorphins don't also help counter the darkness. But it's just as much the fact that these pursuits of competence usually offer a great opportunity for community that seals the deal. I've found time and again that people are starved for the kind of topic-based connections that, say, learning about Linux offers in spades. You're not just learning, you're learning with others. That is a time-tested antidote to depression: Forming and cultivating meaningful human connections. Yes, doing so over the internet isn't as powerful as doing it in person, but it's still powerful. It still offers community, involvement, and plenty of invitation to carry a meaningful burden. Open source nails this trifecta of motivations to a T. There are endless paths of discovery and mastery available. There are tons of fellow travelers with whom to connect and collaborate. And you'll find an unlimited number of meaningful burdens in maintainerships open for the taking. So next time you see that meme, you should cheer that the talk therapy table is empty. Leave it available for the severe, pathological cases that exercise and the pursuit of competence can't cure. Most people just don't need therapy, they need purpose, they need competence, they need exercise, and they need community.

2 days ago 5 votes
Programming Language Escape Hatches

The excellent-but-defunct blog Programming in the 21st Century defines "puzzle languages" as languages where part of the appeal is in figuring out how to express a program idiomatically, like a puzzle. As examples, he lists Haskell, Erlang, and J. All puzzle languages, the author says, have an "escape" out of the puzzle model that is pragmatic but stigmatized. But many mainstream languages have escape hatches, too. Languages have a lot of properties. One of these properties is the language's capabilities, roughly the set of things you can do in the language. Capability is desirable but comes into conflict with a lot of other desirable properties, like simplicity or efficiency. In particular, reducing the capability of a language means that all remaining programs share more in common, meaning there are more assumptions the compiler and programmer can make ("tractability"). Assumptions are generally used to reason about correctness, but can also be about things like optimization: J's assumption that everything is an array leads to high-performance "special combinations". Rust is the most famous example of a mainstream language that trades capability for tractability.[1] Rust has a lot of rules designed to prevent common memory errors, like keeping a reference to deallocated memory or modifying memory while something else is reading it. As a consequence, there are a lot of things that cannot be done in (safe) Rust, like interfacing with an external C function (as it doesn't have these guarantees). To do this, you need to use unsafe Rust, which lets you do additional things forbidden by safe Rust, such as dereferencing a raw pointer. Everybody tells you not to use unsafe unless you absolutely 100% know what you're doing, and possibly not even then. Sounds like an escape hatch to me! To extrapolate, an escape hatch is a feature (either in the language itself or a particular implementation) that deliberately breaks core assumptions about the language in order to add capabilities. This explains both Rust and most of the so-called "puzzle languages": they need escape hatches because they have very strong conceptual models of the language, which leads to lots of assumptions about programs. But plenty of "kitchen sink" mainstream languages have escape hatches, too:

Some compilers let C++ code embed inline assembly.
Languages built on .NET or the JVM have some sort of interop with C# or Java, and many of those languages make assumptions about programs that C#/Java do not.
The SQL language has stored procedures as an escape hatch, and vendors create a second escape hatch of user-defined functions.
Ruby lets you bypass any form of encapsulation with send.
Frameworks have escape hatches, too! React has an entire page on them.

(Does eval in interpreted languages count as an escape hatch? It feels different, but it does add a lot of capability. Maybe they don't "break assumptions" in the same way?)

The problem with escape hatches

In all languages with escape hatches, the rule is "use this as carefully and sparingly as possible", to the point where a messy solution without an escape hatch is preferable to a clean solution with one. Breaking a core assumption is a big deal! If the language is operating as if it's still true, it's going to do incorrect things. I recently had this problem in a TLA+ contract. TLA+ is a language for modeling complicated systems, and assumes that the model is a self-contained universe. The client wanted to use TLA+ to test a real system.
The model checker should send commands to a test device and check that the next states were the same. This is straightforward to set up with the IOExec escape hatch.[2] But the model checker assumed that state exploration was pure and it could skip around the state randomly, meaning it would do things like set x = 10, then skip to set x = 1, then skip back to inc x; assert x == 11. Oops! We eventually found workarounds but it took a lot of clever tricks to pull off. I'll probably write up the technique when I'm less busy with The Book. The other problem with escape hatches is that the rest of the language is designed around not having said capabilities, meaning it can't support the feature as well as a language designed for them from the start. Even if your escape hatch code is clean, it might not cleanly integrate with the rest of your code. This is why people complain about unsafe Rust so often.

[1] It should be noted though that all languages with automatic memory management are trading capability for tractability, too. If you can't dereference pointers, you can't dereference null pointers. ↩
[2] From the Community Modules (which come default with the VSCode extension). ↩
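As an illustration of the unsafe Rust escape hatch described above, here is a minimal sketch (my own, not from the original post): safe Rust refuses to dereference a raw pointer, so the dereference must be wrapped in an unsafe block, where the compiler trusts the programmer to uphold the invariants it can no longer check.

```rust
fn main() {
    let x: i32 = 11;

    // Creating a raw pointer is allowed in safe Rust...
    let p: *const i32 = &x;

    // ...but dereferencing it is not: `let y = *p;` outside an `unsafe`
    // block fails to compile, because the compiler can no longer prove
    // the pointer is valid. The escape hatch is to assert that we have
    // checked the invariants ourselves.
    let y = unsafe { *p };

    assert_eq!(y, 11);
    println!("read {y} through a raw pointer");
}
```

Everything outside the unsafe block still gets the full guarantees of safe Rust, which is why the standing advice is to keep such blocks as small as possible.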

3 days ago 10 votes
How We Migrated the Parse API From Ruby to Golang (Resurrected)

I wrote a lot of blog posts over my time at Parse, but they all evaporated after Facebook killed the product. Most of them I didn’t care about (there were, ahem, a lot of status updates and “service reliability announcements”, but I was mad about losing this one in particular, a deceptively casual retrospective of […]

3 days ago 10 votes
It's a Beelink, baby

It's only been two months since I discovered the power and joy of this new generation of mini PCs. My journey started out with a Minisforum UM870, which is a lovely machine, but since then, I've come to really appreciate the work of Beelink.  In a crowded market for mini PCs, Beelink stands out with their superior build quality, their class-leading cooling and silent operation, and their use of fully Linux-compatible components (the UM870 shipped with a MediaTek bluetooth/wifi card that doesn't work with Linux!). It's the complete package at three super compelling price points. For $289, you can get the EQR5, which runs an 8-core AMD Zen3 5825U that puts out 1723/6419 in Geekbench, and comes with 16GB RAM and 500GB NVMe. I've run Omarchy on it, and it flies. For me, the main drawback was the lack of a DisplayPort, which kept me from using it with an Apple display, and the fact that the SER8 exists. But if you're on a budget, and you're fine with HDMI only, it's a wild bargain. For $499, you can get the SER8. That's the price-to-performance sweet spot in the range. It uses the excellent 8-core AMD Zen4 8745HS that puts out 2595/12985 in Geekbench (~M4 multi-core numbers!), and runs our HEY test suite with 30,000 assertions almost as fast as an M4 Max! At that price, you get 32GB RAM + 1TB NVMe, as well as a DisplayPort, so it works with both the Apple 5K Studio Display and the Apple 6K XDR Display (you just need the right cable). Main drawback is limited wifi/bluetooth range, but Beelink tells me there's a fix on the way for that. For $929, you can get the SER9 HX370. This is the top dog in this form factor. It uses the incredible 12-core AMD Zen5 HX370 that hits 2990/15611 in Geekbench, and runs our HEY test suite faster than any Apple M chip I've ever tested. The built-in graphics are also very capable. Enough to play a ton of games at 1080p. It also sorted the SER8's current wifi/bluetooth range issue. I ran the SER8 as my main computer for a while, but now I'm using the SER9, and I just about never feel like I need anything more. Yes, the Framework Desktop, with its insane AMD Max 395+ chip, is even more bonkers. It almost cuts the HEY test suite time in half(!), but it's also $1,795, and not yet generally available. (But preorders are open for the ballers!). Whichever machine fits your budget, it's frankly incredible that we have this kind of performance and efficiency available at these prices with all of these Beelinks drawing less than 10 watt at idle and no more than 100 watt at peak! So it's no wonder that Beelink has been selling these units like hotcakes since I started talking about them on X as the ideal, cheap Omarchy desktop computers. It's such a symbiotic relationship. There are a ton of programmers who have become Linux curious, and Beelink offers no-brainer options to give that a try at a bargain. I just love when that happens. The perfect intersection of hardware, software, and timing. That's what we got here. It's a Beelink, baby! (And no, before you ask, I don't get any royalties, there's no affiliate link, and I don't own any shares in Beelink. I just love discovering great technology and seeing people start their Linux journey with an awesome, affordable computer!)

4 days ago 12 votes
How to Network as a Developer (Without Feeling Sleazy)

“One of the comments that sparked this article,” our founder Paul McMahon told me, “was someone saying, ‘I don’t really want to do networking because it seems kind of sleazy. I’m not that kind of person.’” I guess that’s the key misconception people have when they hear ‘networking.’ They think it’s like some used car salesman kind of approach where you have to go and get something out of the person. That’s a serious error, according to Paul, and it worries him that so many developers share that mindset. Instead, Paul considers networking a mix of making new friends, growing a community, and enjoying serendipitous connections that might not bear fruit until years later, but which could prove to be make-or-break career moments. It’s something that you don’t get quick results on and that doesn’t make a difference at all until it does. And it’s just because of the one connection you happen to make at an event you went to once, this rainy Tuesday night when you didn’t really feel like going, but told yourself you have to go—and that can make all the difference. As Paul has previously shared, he can attribute much of his own career success—and, interestingly enough, his peace of mind—to the huge amount of networking he’s done over the years. This is despite the fact that Paul is, in his own words, “not such a talkative person when it comes to small talk or whatever.” Recently I sat down with Paul to discuss exactly how developers are networking “wrong,” and how they can get it right instead. In our conversation, we covered:

What networking really is, and why you need to start ASAP
Paul’s top tip for anyone who wants to network
Advice for networking as an introvert
Online vs offline networking—which is more effective?
And how to network in Japan, even when you don’t speak Japanese

What is networking, really, and why should you start now?

“Sometimes,” Paul explained, “people think of hiring fairs and various exhibitions as the way to network, but that’s not networking to me. It’s purely transactional. Job seekers are focused on getting interviews, recruiters on making hires. There’s no chance to make friends or help people outside of your defined role.” Networking is getting to know other people, understanding how maybe you can help them and how they can help you. And sometime down the road, maybe something comes out of it, maybe it doesn’t, but it’s just expanding your connections to people. One reason developers often avoid or delay networking is that, at its core, networking is a long game. Dramatic impacts on your business or career are possible—even probable—but they don’t come to fruition immediately. “A very specific example would be TokyoDev,” said Paul. “One of our initial clients that posted to the list came through networking.” Sounds like a straightforward result? It’s a bit more complicated than that. “There was a Belgian guy, Peter, whom I had known through the Ruby and tech community in Japan for a while,” Paul explained. “We knew each other, and Peter had met another Canadian guy, Jack, who [was] looking to hire a Ruby developer. “So Peter knew about me and TokyoDev and introduced me to Jack, and that was the founder of Degica, who became one of our first clients. . . . And that just happened because I had known Peter through attending events over the years.” Another example is how Paul’s connection to the Ruby community helped him launch Doorkeeper.
His participation in Ruby events played a critical role in helping the product succeed, but only because he’d already volunteered at them for years. “Because I knew those people,” he said, “they wanted to support me, and I guess they also saw that I was genuine about this stuff, and I wasn’t participating in these events with some big plan about, ‘If I do this, then they’re going to use my system,’ or whatever. Again, it was people helping each other out.” These delayed and indirect impacts are why Paul thinks you should start networking right now. “You need to do it in advance of when you actually need it,” he said. “People say they’re looking for a job, and they’re told ‘You could network!’ Yeah, that could potentially help, but it’s almost too late.” You should have been networking a couple years ago when you didn’t need to be doing it, because then you’ve already built up the relationships. You can have this karma you’re building over time. . . . Networking has given me a lot of wealth. I don’t mean so much in money per se, but more it’s given me a safety net. “Now I’m confident,” he said, “that if tomorrow TokyoDev disappeared, I could easily find something just through my connections. I don’t think I’ll, at least in Japan, ever have to apply for a job again.” “I think my success with networking is something that anyone can replicate,” Paul went on, “provided they put in the time. I don’t consider myself to be especially skilled in networking, it’s just that I’ve spent over a decade making connections with people.” How to network (the non-sleazy way) Paul has a fair amount of advice for those who want to network in an effective, yet genuine fashion. His first and most important tip:  Be interested in other people. Asking questions rather than delivering your own talking points is Paul’s number one method for forging connections. It also helps avoid those “used car salesman” vibes. “ That’s why, at TokyoDev,” Paul explained, “we typically bar recruiters from attending our developer events. Because there are these kinds of people who are just going around wanting to get business cards from everyone, wanting to get their contact information, wanting to then sell them on something later. It’s quite obvious that they’re like that, and that leads to a bad environment, [if] someone’s trying to sell you on something.” Networking for introverts The other reason Paul likes asking questions is that it helps him to network as an introvert. “That’s actually one of the things that makes networking easier for someone who isn’t naturally so talkative. . . . When you meet new people, there are some standard questions you can ask them, and it’s like a blank slate where you’re filling in the details about this person.” He explained further that going to events and being social can be fun for him, but he doesn’t exactly find it relaxing. “When it comes to talking about something I’m really interested in, I can do it, but I stumble in these social situations. Despite that, I think I have been pretty successful when it comes to networking.” “What has worked well for me,” he went on, “has been putting myself in those situations that require me to do some networking, like going to an event.” Even if you aren’t that proactive, you’re going to meet a couple of people there. You’re making more connections than you would if you stayed home and played video games. The more often you do it, the easier it gets, and not just because of practice: there’s a cumulative effect to making connections. 
“Say you’re going to an event, and maybe last time you met a couple of people, you could just say ‘Hi’ to those people again. And maybe they are talking with someone else they can introduce you to.” Or, you can be the one making the introductions. “What has also worked well for me, is that I like to introduce other people,” Paul said. It’s always a great feeling when I’m talking to someone at an event, and I hear about what they’re doing or what they’re wanting to do, and then I can introduce someone else who maybe matches that. “And it’s also good for me, then I can just be kind of passive there,” Paul joked. “I don’t have to be out there myself so much, if they’re talking to each other.” His last piece of advice for introverts is somewhat counterintuitive. “Paradoxically,” he told me, “it helps if you’re in some sort of leadership position.” If you’re an introvert, my advice would be one, just do it, but then also look for opportunities for helping in some more formal capacity, whether it’s organizing an event yourself, volunteering at an event . . . [or] making presentations. “Like for me, when I’ve organized a Tokyo Rubyist Meetup,” Paul said, “[then] naturally as the organizer there people come to talk to me and ask me questions. . . . And it’s been similar when I’ve presented at an event, because then people have something that they know that you know something about, and maybe they want to know more about it, and so then they can ask you more questions and lead the conversation that way.” Offline vs online networking When it comes to offline vs online networking, Paul prefers offline. In-person events are great for networking because they create serendipity. You meet people through events you wouldn’t meet otherwise just because you’re in the same physical space as them. Those time and space constraints add pressure to make conversation—in a good way. “It’s natural when you are meeting someone, you ask about what they’re doing, and you make that small connection there. Then, after seeing them at multiple different events, you get a bit of a stronger connection to them.” “Physical events are [also] much more constrained in the number of people, so it’s easier to help people,” he added. “Like with TokyoDev, I can’t help every single person online there, but if someone meets me at the event [and is] asking for advice or something like that, of course I’ve got to answer them. And I have more time for them there, because we’re in the same place at the same time.” As humans, we’re more likely to help other people we have met in person, I think just because that’s how our brains work. That being said, Paul’s also found success with online networking. For example, several TokyoDev contributors—myself included—started working with Paul after interacting with him online. I commented on TokyoDev’s Dungeons and Dragons article, which led to Paul checking my profile and asking to chat about my experience. Scott, our community moderator and editor, joined TokyoDev in a paid position after being active on the TokyoDev Discord. Michelle was also active on the Discord, and Paul initially asked her to write an article for TokyoDev on being a woman software engineer in Japan, before later bringing her onto the team. Key to these results was that they involved no stereotypical “networking” strategies on either side: we all connected simply by playing a role in a shared, online community. 
Consistent interactions with others, particularly over a longer period of time, builds mutual trust and understanding. Your online presence can help with offline networking. As TokyoDev became bigger and more people knew about me through my blog, it became a lot easier to network with people at events because they’re like, ‘Hey, you’re Paul from TokyoDev. I like that site.’ “It just leads to more opportunities,” he continued. “If you’ve interacted with someone before online, and then you meet them offline, you already do have a bit of a relationship with them, so you’re more likely to have a place to start the conversation. [And] if you’re someone who is struggling with doing in-person networking, the more you can produce or put out there [online], the more opportunities that can lead to.” Networking in Japanese While there are a number of events throughout Japan that are primarily in English, for best networking results, developers should take advantage of Japanese events as well—even if your Japanese isn’t that good. In 2010, Paul created the Tokyo Rubyist Meetup, with the intention of bringing together Japanese and international developers. To ensure it succeeded, he knew he needed more connections to the Japanese development community. “So I started attending a lot of Japanese developer events where I was the only non-Japanese person there,” said Paul. “I didn’t have such great Japanese skills. I couldn’t understand all the presentations. But it still gave me a chance to make lots of connections, both with people who would later present at [Tokyo Rubyist Meetup], but also with other Japanese developers whom I would work with either on my own products or also on other client projects.” I think it helped being kind of a visible minority. People were curious about me, about why I was attending these events. Their curiosity not only helped him network, but also gave him a helping hand when it came to Japanese conversation. “It’s a lot easier for me in Japanese to be asked questions and answer them,” he admitted. But Paul wasn’t just attending those seminars and events in a passive manner. He soon started delivering presentations himself, usually as part of Lightning Talks—again, despite his relatively low level of Japanese. “It doesn’t matter if you do a bad job of it,” he said. Japanese people I think are really receptive to people trying to speak in Japanese and making an effort. I think they’re happy to have someone who isn’t Japanese present, even if they don’t do a great job. He also quickly learned that the most important networking doesn’t take place at the meetup itself. “At least in the past,” he explained, “it was really split . . . [there’s the] seminar time where everyone goes and watches someone present. Everyone’s pretty passive there and there isn’t much conversation going on between attendees. “Then afterwards—and maybe less than half of the people attend—but they go to a restaurant and have drinks after the event. And that’s where all the real socialization happens, and so that’s where I was able to really make the most connections.” That said, Paul noted that the actual “drinking” part of the process has noticeably diminished. “Drinking culture in Japan is changing a lot,” he told me. “I noticed that even when hosting the Tokyo Rubyist Meetup. When we were first hosting it, we [had] an average of 2.5 beers per participant. And more recently, the average is one or less per participant there. “I think there is not so much of an expectation for people to drink a lot. 
Young Japanese people don’t drink at the same rate, so don’t feel like you actually have to get drunk at these events. You probably shouldn’t,” he added with a laugh. What you should do is be persistent, and patient. It took Paul about a year of very regularly attending events before he felt he was treated as a member of the community. “Literally I was attending more than the typical Japanese person,” he said. “At the peak, there were a couple events per week.” His hard work paid off, though. “I think one thing about Japanese culture,” he said, “is that it’s really group based.” Initially, as foreigners, we see ourselves in the foreign group versus the Japanese group, and there’s kind of a barrier there. But if you can find some other connection, like in my case Ruby, then with these developers I became part of the “Ruby developer group,” and then I felt much more accepted. Eventually he experienced another benefit. “I think it was after a year of volunteering, maybe two years. . . . RubyKaigi, the biggest Ruby conference in Japan and one of the biggest developer conferences in Japan [in general], used Doorkeeper, the event registration system [I created], to manage their event. “That was a big win for us because it showed that we were a serious system to lots of people there. It exposed us to lots of potential users and was one of the things that I think led to us, for a time, being the most popular event registration system among the tech community in Japan.” Based on his experiences, Paul would urge more developers to try attending Japanese dev events. “Because I think a lot of non-Japanese people are still too intimidated to go to these events, even if they have better Japanese ability than I did. “If you look at most of the Japanese developer events happening now, I think the participants are almost exclusively Japanese, but still, that doesn’t need to be the case.”

Takeaways

What Paul hopes other developers will take away from this article is that networking shouldn’t feel sleazy. Instead, good networking looks like:

Being interested in other people. Asking them questions is the easiest way to start a conversation and make a genuine connection.
Occasionally just making yourself go to that in-person event. Serendipity can’t happen if you don’t create opportunities for it.
Introducing people to each other—it’s a fast and stress-free way to make more connections.
Volunteering for events or organizing your own.
Supporting offline events with a solid online presence as well.
Not being afraid to attend Japanese events, even if your Japanese isn’t good.

Above all, Paul stressed, don’t overcomplicate what networking is at its core. Really what networking comes down to is learning about what other people are doing, and how you can help them or how they can help you. Whether you’re online, offline, or doing it in Japanese, that mindset can turn networking from an awkward, sleazy-feeling experience into something you actually enjoy—even on a rainy Tuesday night.

4 days ago 11 votes