Ahead of announcing the title and publisher of my thus-far-untitled book on engineering strategy in the next week or two, I put together a website for its content. That site is pretty much the same format as this blog, but with some improvements like better mobile rendering on / than this blog has historically had. After finishing that work, I ported the improvements back to lethain.com, but also decided to bring them to staffeng.com. That was slightly trickier because, unlike this blog, StaffEng was historically a Gatsby app. (Why a Gatsby app? Because Calm was using Gatsby for our web frontend and I wanted to get some experience with it.) Over the weekend, I took some time to migrate it to Hugo and apply the same enhancements, which you can now see in the lethain:staff-eng repository or on staffeng.com. Here’s a screenshot of the old version. Then here’s a screenshot of the updated version. Overall, I think it’s slightly easier to read, and I took it as a chance to update the various...
4 weeks ago

More from Irrational Exuberance

systems-mcp: generate systems models via LLM

Back in 2018, I wrote lethain/systems as a domain-specific language for writing runnable systems models, and introduced it with this blog post modeling a hiring funnel. While it’s far from a perfect system, I’ve gotten a lot of value out of it over the last seven years, because it allows me to maintain systems models in version control. As I’ve been playing with writing Model Context Protocol (MCP) servers, one I’ve been thinking about frequently is a server to help with writing systems syntax, and I finally put that together in the lethain/systems-mcp repository. More detailed installation and usage instructions are in the GitHub repository, so I’ll just share a couple of screenshots and comments here.

The first tool is load_systems_documentation, which loads a copy of lethain/systems/README.md and a file with example systems into the context window. The biggest challenge of properly writing DSLs with an LLM is providing enough in-context learning (ICL) examples, and providing tools specifically designed to supply that context strikes me as a very interesting idea. Eventually I imagine there will be generalized tools for this, e.g. a search index of the best ICL examples for a wide variety of DSLs. Until then, my guess is that this sort of tool is particularly valuable.

The second tool is run_systems_model, which takes the DSL specification (and an optional number of rounds), runs the model, and returns the result. I experimented with the interface design here, initially trying to return a rendered chart of the results, but ultimately even multi-modal models are just much better at working with text than with images. That meant I got the best results by returning the results as JSON and then having the LLM build a tool for interacting with them.

Altogether, a fun little experiment, and another confirmation in my mind that the most interesting part of designing MCPs today is deciding where to introduce and eliminate complexity for the LLM. Introduce too little and the tool lacks power; eliminate too little and the combination rarely works.
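For a concrete sense of the shape of this, here is a minimal sketch of how two such tools might be wired up with the MCP Python SDK’s FastMCP helper. It is not the actual code from lethain/systems-mcp: the docs directory and the systems-run invocation are assumptions for illustration.

```python
# Minimal sketch of an MCP server exposing the two tools described above.
# Assumes the official MCP Python SDK (`pip install mcp`); the docs directory
# and the exact systems-run CLI flags are illustrative assumptions.
import subprocess
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("systems")

@mcp.tool()
def load_systems_documentation() -> str:
    """Return the systems DSL README plus example models for in-context learning."""
    docs = Path(__file__).parent / "docs"  # hypothetical bundled copies of README.md, examples, etc.
    return "\n\n".join(p.read_text() for p in sorted(docs.iterdir()) if p.is_file())

@mcp.tool()
def run_systems_model(spec: str, rounds: int = 100) -> str:
    """Run a systems model specification and return the raw results as text."""
    result = subprocess.run(
        ["systems-run", "-r", str(rounds)],  # assumed invocation of the systems CLI
        input=spec,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    mcp.run()
```

An MCP client such as Claude Desktop can then call load_systems_documentation before drafting a model, which is exactly the ICL-providing pattern described above, and pass its draft to run_systems_model to check the output.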

a week ago 6 votes
How to provide feedback on documents.

At Carta, we recently ran a reading group for Facilitating Software Architecture by Andrew Harmel-Law. We already loosely followed the ideas of an architectural advice process (from this 2021 article by the same Andrew Harmel-Law), but in practice we found that internal tech spec and architecture decision record (ADR) authors tended to exclusively share their documents locally within their team rather than more widely. As we asked authors why they preferred sharing locally, the most common answer was that they got enough feedback from their team that they didn’t want to pay the time overhead of sharing widely. The wider feedback wasn’t necessarily bad or combative. It just wasn’t good enough to compensate for the additional time it cost to process. This made sense from the authors’ perspectives, but didn’t work well for me from the executive perspective, as I was seeing teams make misaligned decisions due to lack of cross-team communication.

As one step in reducing the overhead of sharing documents widely, I wrote up and shared this recommended process for providing feedback on documents:

- Before starting, remember that the goal of providing feedback on a document is to help the author. Optimizing for anything else, even if it’s a worthy cause, discourages authors from sharing their future writing. If you prioritize something other than helping the author, you are discouraging them from sharing future work.

- Start by skimming the document to understand its structure and where various kinds of topics are addressed. Why? This helps avoid giving feedback on ways the document’s actual structure diverges from how you imagined it would be structured. It also reduces questions about topics that are answered later in the document. Both of these sorts of feedback are a distraction during a discussion on a tech spec. In general, it’s better to avoid them. If you notice an author making the same significant structural mistake over several ADRs, it’s worth delivering that feedback separately.

- After skimming, reread the document, leaving comments with concerns. Each comment should include these details: what your suggested change or concern is; why you believe this is meaningful to address; and how important this seems (from ignorable nitpick to critical).

- If you find yourself leaving more than three or four issues, then you should either raise your threshold for commenting or schedule time with the individual to talk over the feedback. If the document is unreasonably weak, then it’s appropriate to nudge their leadership to dig into what’s happening on that team.

The most important idea behind these steps is that your goal as a feedback giver is to help the document’s author. It is not to protect your team’s strategy or platform. It is not to optimize for your goals. It’s to help the author. This might feel wrong, but ultimately optimizing for anything else will lead to an environment where sharing widely is an irrational behavior.

As a final aside, I think the user experience around commenting on documents is fundamentally wrong in most document editors. For example, Google Docs treats individual comments as first-order objects, similarly to how old version control systems like CVS tracked changes to individual files without tracking an overall state of the project. Ultimately, you want to collect all your comments into a bundle, then review that bundle for consistency and duplicates, and then submit that bundle as commentary, but editors don’t support that flow particularly well.

a week ago 10 votes
Public company comparables.

A few years ago I wrote about reading a Profit & Loss statement, which is a foundational executive skill. I also subsequently wrote about ways to measure your engineering organization. Despite having written those, I still spend a lot of time wondering about effective ways to represent an engineering organization to your board of directors.

Over the past few years, one of the most useful charts I’ve found for explaining an R&D organization is a scatterplot of R&D spend as a % of margin versus YoY growth of last twelve months (LTM) revenue. Unlike so many other measures, this is an explicit measure of your R&D organization’s value as an investment relative to peer organizations. Until recently, I assumed building this dataset required reading financial filings, but my strategic finance partner at Carta, Tyler Braslow, pointed out that you can get all of this data for the tech sector from Meritech Analytics, for free.

When you log in to Meritech, you’re dropped into a table of public company comparables for tech companies. This is the exact dataset I’d been looking for to build this chart. After logging in, you can copy the contents of that table into Google Sheets, Excel, or whatever you’re most comfortable with. Within that sheet, the columns you care about are:

- % YoY Growth LTM Rev (column Q for me) – how much “last twelve months revenue” has grown year over year, as a percentage
- % LTM Margins for R&D (column U for me) – how much R&D spend is as a percentage of last twelve months margin
- LTM Revenue (column O for me) – although I don’t show this in the scatterplot, I find it useful for debugging outlier values

Hiding the other columns gives you a much simpler table. From that table, you’re then able to build the scatterplot. Note that being “higher” means your R&D spend as a percentage of LTM margin is higher, which is a bad thing. The best companies are to the bottom and right; the worst companies are to the top and left. With this chart as a starting point, you can then plot your own company and show where you stand. You could also show how your company’s position in the chart has evolved over time, hopefully improving. Finally, you might want to cull some of these data points to better determine your public company comparables. The Meritech dataset has 106 entries, but you might prefer a more representative thirty entries.
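If you would rather script the chart than build it by hand in a spreadsheet, a small sketch along these lines produces the same scatterplot. The CSV filename and the “Company” column are assumptions; match the column headers to whatever your export actually contains.

```python
# Sketch: scatterplot of R&D spend (% of LTM margin) vs. YoY LTM revenue growth,
# from a CSV exported out of the Meritech comparables table. Column names and
# the file name are assumptions; adjust them to match your export.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("meritech_comparables.csv")

fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(df["% YoY Growth LTM Rev"], df["% LTM Margins for R&D"])
for _, row in df.iterrows():
    # Label each point so outliers are easy to identify and cull later.
    ax.annotate(row["Company"],
                (row["% YoY Growth LTM Rev"], row["% LTM Margins for R&D"]),
                fontsize=7)

ax.set_xlabel("YoY growth of LTM revenue (%)")
ax.set_ylabel("R&D spend as % of LTM margin")
ax.set_title("R&D spend vs. revenue growth, public company comparables")
plt.tight_layout()
plt.savefig("rd_comparables.png", dpi=200)
```

From there, adding your own company as one more labeled point (or a short series of points across quarters) shows where you stand and how your position has evolved.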

2 weeks ago 6 votes
How to filter out old email from inbox

Every few years I take a pass at reducing the chaos in my personal inboxes. There are simply too many emails to deal with, and that generally leads to me increasingly failing to follow up on important email. Up to this point, my strategy has largely been filtering out emails that I never want to read. But there’s another category of email: stuff I often want to read when it’s fresh, but never want to read after it’s fresh. For example, calendar reminders, some mailing lists, some newsletters, etc. I decided to figure out how I could set up a system where I could mark a number of things as “filter three days after receipt”. This is a nice compromise, because I do want to see those things, but I don’t want to have to remember to archive them after the fact.

You can write a search query for this in Gmail: from:(calendar-notification@google.com) older_than:3d. However, if you try to create a Gmail filter using that, it turns the older_than:3d into a fixed point in time rather than doing what you want. It seems that this is unsolvable within Gmail itself. However, some quick searching suggested it was possible to solve this with a Google Apps Script, so I asked Claude to write the script for me.

Following those instructions, I went to script.google.com, which I had not visited in many years. I edited the generated script from Claude to use the tag “TempMsg”, to archive messages (originally it had that commented out), and to limit itself to the first fifty items matching that tag. You can find the full code in this gist.

I attempted to run this as is, and got an error message that I needed to grant permissions. That requires three clicks within the Google Scripts UI, including approving the somewhat scary message that I trust myself. From there I tried to run the script, and it failed because the TempMsg tag didn’t exist in my inbox. So I went ahead and created that tag, and set up some filters to assign it to certain email senders. After that, I was able to run the script and it worked properly. Note that I briefly convinced myself it was failing, because it doesn’t remove messages from the past three days. That is exactly how it’s supposed to work, but I would run it, see messages with the tag still there, and think it was failing. Whoops.

After convincing myself it was working, I added a periodic trigger to run the script. I now have it running on a daily basis, and it’s given me a nice new tool for managing my email a bit better. After verifying it, I also used the tag manager to “hide” this tag in the inbox, so I don’t have to see the TempMsg tag everywhere. If I ever need to debug things, I can always make it visible again.
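The actual Apps Script lives in the linked gist; purely as an illustration of the same logic in another setting, here is a hedged Python sketch against the Gmail API. The label name and three-day window mirror the post, while the credential setup and the function name are assumptions.

```python
# Rough illustration (not the gist's Apps Script) of the same idea: archive any
# thread labeled TempMsg that is older than three days, capped at fifty threads
# per run. Assumes `creds` is an authorized OAuth credential with the
# gmail.modify scope; obtaining it via the usual OAuth flow is omitted here.
from googleapiclient.discovery import build

def archive_old_tempmsg(creds, max_threads=50):
    service = build("gmail", "v1", credentials=creds)
    query = "label:TempMsg in:inbox older_than:3d"
    resp = service.users().threads().list(
        userId="me", q=query, maxResults=max_threads
    ).execute()
    for thread in resp.get("threads", []):
        # Archiving is just removing the INBOX label; the TempMsg label stays
        # attached, so the message remains easy to find later.
        service.users().threads().modify(
            userId="me", id=thread["id"], body={"removeLabelIds": ["INBOX"]}
        ).execute()
```

Run on a daily schedule (cron, Cloud Scheduler, or similar), this behaves the same way as the Apps Script trigger described above: anything still labeled TempMsg quietly leaves the inbox once it is more than three days old.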

2 weeks ago 4 votes
How should Stripe deprecate APIs? (~2016)

While Stripe is a widely admired company for things like its creation of the Sorbet typer project, I personally think that Stripe’s most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability. This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn’t just a technical design quirk, it’s a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe’s business success.

This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for managing API changes are:

- Design for long API lifetime. APIs are not inherently durable. Instead we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn’t use this API, then migrate it to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes that we might want to implement on our end. How would those changes impact the API, and how would they impact the application you’ve developed? At this point, take your API to API Review for initial approval as described below. Following that approval, identify a handful of early adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.

- All new and modified APIs must be approved by API Review. API changes may not be enabled for customers prior to API Review approval. Change requests should be sent to the api-review email group. For examples of prior art, review the api-review archive for prior requests and the feedback they received. All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review. Complex or controversial proposals will require live discussions to ensure API Review members have sufficient context before making a decision.

- We never deprecate APIs without an unavoidable requirement to do so. Even if it’s technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration. If such a change were to be approved as an exception to this policy, it must first be approved by the API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.

- When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support those workflows rather than extending /v1/charges to add subscriptions support. With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe’s Strong Customer Authentication requirements. Even in that case the charge API continued to work as it did previously, albeit only for non-European Union payments.

- We manage this policy’s implied technical debt via an API translation layer. We release changed APIs into versions, tracked in our API version changelog. However, we only maintain one implementation internally, which is the implementation of the latest version of the API. On top of that implementation, a series of version transformations are maintained, which allow us to support prior versions without maintaining them directly. While this approach doesn’t eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by enabling us to maintain just a single, modern implementation internally. All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions. (An illustrative sketch of this approach appears after this excerpt.)

- In the future, SDKs may allow us to soften this policy. While a significant number of our customers have direct integrations with our APIs, that number has dropped significantly over time. Instead, most new integrations are performed via one of our official API SDKs. We believe that in the future, it may be possible for us to make more backwards incompatible changes because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.

Diagnosis

Our diagnosis of the impact of API changes and deprecation on our business is:

- If you are a small startup composed of mostly engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers—or a larger enterprise involving numerous stakeholders—handling external API changes can be particularly challenging. Even if this is only marginally true, we’ve modeled the impact of minimizing API changes on long-term revenue growth, and it has a significant impact, unlocking our ability to benefit from other churn reduction work.

- While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the migration overhead even if they wanted to change providers. Without an API change forcing them to revisit their integration, we believe that hypergrowth customers are particularly unlikely to change payments API providers absent another concrete motivation like a payment plan change.

- We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can’t assume that companies that make API changes are ill-informed. Rather, it appears that they experience a meaningful technical debt tradeoff between the API provider and API consumers, and aren’t willing to consistently absorb that technical debt internally.

- Future compliance or security requirements—along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI—may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.
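To make the translation-layer policy concrete, here is a toy sketch of a single modern implementation fronted by per-version downgrade transforms. It is illustrative only, not Stripe’s actual code; the version dates, field names, and transforms are invented.

```python
# Toy sketch of "one modern implementation plus version transforms".
# All versions, fields, and transforms here are invented for illustration.
from typing import Callable

Transform = Callable[[dict], dict]

def handle_charge(params: dict) -> dict:
    """The only real implementation: always the latest API version's behavior."""
    return {
        "object": "charge",
        "amount": params["amount"],
        "status": "succeeded",
        "payment_method": params.get("payment_method"),
    }

# Ordered newest-first. Each entry rewrites a response from the version above it
# into this version's shape, so older pinned versions keep working without a
# second implementation of the endpoint itself.
DOWNGRADES: list[tuple[str, Transform]] = [
    ("2020-08-27", lambda r: r),  # latest version: no transformation needed
    ("2019-02-11", lambda r: {
        **{k: v for k, v in r.items() if k != "payment_method"},
        "source": r.get("payment_method"),  # older versions called this field "source"
    }),
]

def respond(params: dict, pinned_version: str) -> dict:
    resp = handle_charge(params)
    for version, downgrade in DOWNGRADES:
        resp = downgrade(resp)
        if version == pinned_version:  # stop once we reach the caller's pinned version
            return resp
    raise ValueError(f"unknown API version: {pinned_version}")
```

Under this sketch, every API change ships together with the downgrade entry that keeps older pinned versions behaving exactly as they always did, which is what the policy above requires of the version transformation layers.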

3 weeks ago 22 votes

More in programming

Server Components (RSC) in react-router are... actually good? (tip)

Explore Remix's new React Server Components (RSC) preview in react-router! Learn usage, different approaches, and trade-offs.

15 hours ago 4 votes
Have you tried the exact opposite?

Have you thought about doing the opposite of whatever you're doing or considering? It's a really helpful way to test your assumptions and your values. What does the opposite look like, how would it work? It's so easy to get stuck in a groove of what works, what you believe to be right. But helpful assumptions have a half-life, just like facts. And it's ever so easy to miss the shift when circumstances change, if you're not regularly stress-testing your core beliefs. That doesn't mean you're just a flag in the wind, blowing whichever way. But it does mean having enough intellectual humility and creative flexibility to consider that what you believe to be true about your business, about your team, about your technology might not be so. We did this a while back with full-time managers. We'd been working for nearly two decades without any, but exactly because it'd been so long, we were drawn to try the opposite, just to see what we might have missed. So we did. Hired a few full-time managers to help us test that assumption for a few years. In the end, we decided that our managers-of-one culture worked better, but it wasn't a given at the outset. To try the opposite, you really have to believe that you might have been wrong. Because you're wrong about something. I guarantee it. We all are.

21 hours ago 2 votes
Is There a Japanese Equivalent of Glassdoor?

When interviewing with a Japanese company, you’ll naturally want to know: “Is this a good place to work?” And while Glassdoor is the standard in English-speaking countries for employees leaving online reviews, the site is only rarely used in Japan, and then primarily by non-Japanese workers. Many countries have a culture that endorses directly reviewing employers in an open, public environment—Japan does not. However, there are still sites where you can find important information on your potential employer.

What to watch out for

In particular, you want to avoid signing on with a company that engages in exploitative practices—or as they’re known in Japan, a “black company” (ブラック企業, burakku kigyou). The Ministry of Health, Labor, and Welfare has a FAQ describing what defines these companies:

- Imposing extremely long working hours with high quotas.
- Recognition of workers’ rights is low throughout the company; unpaid overtime and/or workplace bullying (パワハラ, pawahara) are common.
- The company assigns shifts to workers without consent.
- The company discriminates among workers in the above circumstances.

In a 2023 survey, those who had worked for such toxic companies listed high turnover rates as the most common sign that something was wrong, followed by long working hours and unpaid overtime. As you examine online review sites and other sources, look for clues such as:

- Turnover rate: how long do employees typically stay?
- Internal promotion: can you see employees rising in the ranks?
- Upper management: are there any non-Japanese employees in management positions?
- Recent company announcements: do they often make sudden pivots in their business policies?

If you discover, for example, that the company can’t retain employees, shows no history of internal promotions, and has just issued a return-to-office order out of the blue, it’s safe to assume you don’t want to work there.

OpenWork

OpenWork, also known as Vorkers, hosts over 19 million company reviews. The reviews are represented in a radar chart for easy visual reference, and are also broken down into different categories, such as work-life balance, the ease of working for women, and reasons for considering quitting. In addition, applicants can post questions for employees to answer. If you don’t speak Japanese, the site is still readable with Google Translate. You’ll need to make a free account to see all of the information, but much of it is accessible even without an account.

Other Japanese sites

JobTalk and Engage Hyouban are other Japanese-language review sites. JobTalk contains 4.4 million reviews of around 230,000 different companies, and Engage Hyouban boasts 30 million reviews for 220,000 companies. Neither of these sites offers as much information on tech companies in Japan as OpenWork does. If you’re applying to a large company such as Rakuten, you may find some additional reviews there, but many of TokyoDev’s clients are smaller companies that aren’t listed at all.

Google Maps Reviews

An unusual but occasionally helpful place to find company reviews is on Google Maps. If you search for a business’s main corporate office location—usually in Tokyo—you will sometimes find reviews written by current or former employees. Whether these reviews are high-quality or trustworthy is another matter. Rakuten, for example, has reviews with a range of opinions. Cybozu, by contrast, mostly has reviews from those who would like to work for the company but currently don’t. Still, the reviews of its corporate office are consistently positive, so you can at least get an impression of the physical environment.

LinkedIn

“If you’re worried that a company might be a poor place to work, try contacting current or past employees via LinkedIn,” suggested Paul McMahon, founder of TokyoDev. “This probably works best if you’re late in the hiring process.” You can send a connect request saying, ‘I’ve received an offer from company X, and want to confirm what it’s really like to work there as an engineer. Mind if I ask you a couple of questions?’ Whether or not they respond, you can still glean good information from the profiles of past and current employees. Check to see if developers tend to leave the company quickly, for example, or how long the average employee goes before being promoted. You should keep in mind though that LinkedIn is not popular in Japan, for several good reasons. If you are applying to a primarily Japanese company, many of your future coworkers won’t be active there, which means you still may not be getting a complete picture.

TokyoDev

In 2020, TokyoDev began interviewing developers in order to provide a more complete, boots-on-the-ground picture of daily life at specific companies. Our Developer Stories feature interviews with developers at top Japanese tech companies, who share details about both their specific jobs and the general work environment. The goal is to give applicants a good sense of how a company operates on a day-to-day basis, from the perspective of those on the inside. So far, TokyoDev has interviewed developers from Mercari, PayPay, Givery, HENNGE, KOMOJU, and more. In addition, TokyoDev’s job board is a selective one, listing only companies that we feel good about sending applicants to. In the rare event that employees later reach out with poor reviews of a business, if those reports can be confirmed, then TokyoDev will end its relationship with that company.

Conclusion

In short, the answer to the question “Is there a Japanese equivalent to Glassdoor?” is, “Not really.” However, by combining some of the alternatives—OpenWork, LinkedIn, TokyoDev, and perhaps even Google Maps—you can gather enough information to decide whether you want to work with a particular Japanese company. You could also ask fellow developers in our Discord. Curious about working in Japan in general? See our articles on the subject, as well as moving to Japan, living in Japan, starting a business in Japan, and more.

an hour ago 1 vote
Multiple Computers

I’ve spent so much time, had so many headaches, and encountered so much complexity from what, in my estimation, boils down to this: trying to get something to work on multiple computers.

It might be time to just go back to having one computer — a personal laptop — do everything. No more commit, push, and let the cloud build and deploy. No more making it possible to do a task on my phone and tablet too. No more striving to make it possible to do anything from anywhere. Instead, I should accept the constraint of doing specific kinds of tasks when I’m at my laptop. No laptop? Don’t do it. Save it for later. Is it really that important? I think I’d save myself a lot of time and headache with that constraint. No more continuous over-investment of my time in making it possible to do some particular task across multiple computers.

It’s a subtle, but fundamental, shift in thinking about my approach to computing tasks. Today, my default posture is to defer control of tasks to cloud computing platforms. Let them do the work, and I can access and monitor that work from any device. Like, for example, publishing a version of my website: git commit, push, and let the cloud build and deploy it. But beware, there be possible dragons! The build fails. It’s not clear why, but it “works on my machine”. Something is different between my computer and the computer in the cloud. Now I’m troubleshooting an issue unrelated to my website itself. I’m troubleshooting an issue with the build and deployment of my website across multiple computers. It’s easy to say: build works on my machine, deploy it! It’s deceptively time-consuming to take that one more step and say: let another computer build it and deploy it.

So rather than taking the default posture of “cloud-first”, i.e. push to the cloud and let it handle everything, I’d rather take a “local-first” approach where I choose one primary device to do tasks on, and ensure I can do them from there. Everything else beyond that, i.e. getting it to work on multiple computers, is a “progressive enhancement” in my workflow. I can invest the time, if I want to, but I don’t have to. This stands in contrast to where I am today: if a build fails in the cloud, I have to invest the time, because that’s how I’ve set up my workflow. I can only deploy via the cloud. So I have to figure out how to get the cloud’s computer to build my site, even when my laptop is doing it just fine.

It’s hard to make things work identically across multiple computers. I get it, that’s a program, not software. And that’s the work. But sometimes a program is just fine. Wisdom is knowing the difference.

2 days ago 3 votes
Cheap mini PCs have gotten really good

For the past week, I've been working off the Minisforum UM870. A tiny mini PC with an 8-core/16-thread AMD 8745H CPU, which retails for $343 (or €379) as a barebones unit, and stays below $550 even after adding 48GB of RAM and 1TB of storage. I'm shocked to report that I really don't need more than this! I mean, I knew that Apple's Mac Mini, which is equally petite to the Minisforum, had plenty of power for macOS. But somehow I thought Apple had some special sauce that made this possible, and that PCs were forever condemned to be bigger, louder, and slower. How 2020 of me.

The UM870 is a little beast. It runs our full HEY test suite in just 2m28s. In comparison, it takes a 14-core M4 Pro 2m49s, and such an Apple costs $2,199 once you've given it 48GB of RAM and 1TB of storage. Now, that M4 Mac Mini can probably do things with, say, 8K video editing that the UM870 can't touch. But on the other hand, the UM870 can play the latest video games. Everything from Fortnite to Cyberpunk 2077 to Forza Horizon. It won't trouble a modern, dedicated Nvidia card for max FPS, but it's perfectly playable at 1080p at medium settings in a ton of games.

In raw CPU power, the AMD 8745H will match a regular M4 in multi-core. They both clock in right around 13,000 points on Geekbench 6, though the M4 is a fair bit quicker in single-core. The AMD is also far behind an M4 Pro in raw multi-core power (13K vs 22K), but at less than a quarter the cost, it's hard to complain.

But as with the example of video games, it's often deceiving just to compare the Geekbench numbers, because it all depends on what you're doing. If you're really into video games, it's no use having extra grunt if your favorite games won't run. The same is true if you're a developer working with Docker containers and a Linux toolchain. As quoted above, the UM870 handily defeats the M4 Pro in our all-cores-buzzing HEY test suite. That's partly because we run databases and accessories, like MySQL, Redis, and ElasticSearch, in Docker containers. Even though we run the Ruby code natively on both platforms, the Docker dependencies put the Mac further behind than it otherwise would have been, because Linux runs Docker natively, and the Mac has to deal with the file-system tax and other drawbacks.

The irony is that it was partly Apple's volume with and investment in TSMC that got us these incredible AMD chips, as they're riding the same improvements in TSMC manufacturing prowess as Apple's M chips. The Zen 4 cores in the 8745H are all forged on the same 5nm process as the M2, so it's no surprise that the AMD cores are dead on the money for Apple's in Geekbench single-core performance.

And Zen 4 is even the last generation! The insane new (and insanely named) AMD Ryzen AI Max 395+ chip that's used in the upcoming Framework Desktop runs on Zen 5 cores. And with 16 of those, the 395+ is faster in Geekbench multi-core than an M4 Pro, and only ever so slightly behind the M4 Max. On my HEY test suite, it completes the run in an insane 1m21s — more than twice as fast as the 14-core M4 Pro! But I digress. The 395+ chip isn't cheap, even if it's still a great deal. The Framework Desktop with 64GB/1TB, which is twice as fast as the M4 Pro with our HEY tests, is $1,744. That's still less than the $2,199 Mac Mini, which only has 48GB of RAM. But obviously way more than a $550 Minisforum! And while it's quite small, it's not tiny, like the UM870.

Regardless, this is what I love about technology. I love when our assumptions are tested: just how small and cheap can an awesome developer machine become? I love that open-source Linux is able to run laps around Apple in the workloads that many developers need (like working with Docker containers). I love that this tiny little silent $550 mini PC on my desk is capable of putting out computing power that only a decade ago would have been reserved for loud, honking metal in a data center.

Mini PCs have gotten really good. AMD is on a roll. Linux is a blast. These are my conclusions. Check out the Minisforum UM870 or the Beelink SER8. Anything with an AMD 7745H and up to an 8945HS should be a great deal. If you want to splurge (yet still get a bargain compared to the Macs), you could have a look at the new HX370 in the Beelink SER9 or Minisforum X1, but I'd save my money, buy a Lofree Flow84 keyboard to go with the new mini rig, and put the rest of the money towards a KEF LSX II savings fund!

2 days ago 3 votes