More from ntietz.com blog - technically a blog
There's a pizza shop near me that serves a normal pizza. I mean, they distribute the toppings in a normal way. They're not uniform at all. The toppings are random, but not the way I want.

The colloquial understanding of "random" is kind of the Platonic ideal of a pizza: slightly chaotic, but things are more or less spread out over the whole piece in a regular way. If you take a slice you'll get more or less the same amount of pepperoni as any other slice. And every bite will have roughly the same amount of pepperoni as every other bite. I think it would look something like this.

Regenerate this pie!

This pizza to me is pretty much the canonical mental pizza. It looks pretty random, but you know what you're gonna get. And it is random! Here's how we made it, with the visualization part glossed over.

First, we make a helper function, since Math.random() gives us values from 0 to 1, but we want values from -1 to 1.

    // return a uniform random value in [-1, 1]
    function randUniform() {
      return 2*Math.random() - 1;
    }

Then, we make a simple function that gives us the coordinates of where to put a pepperoni piece, from the uniform distribution.

    function uniformPepperoniPosition() {
      var [centerX, centerY, radius] = pepperoniBounds();
      let x = radius*2;
      let y = radius*2;
      while (x**2 + y**2 >= radius**2) {
        x = randUniform() * radius;
        y = randUniform() * radius;
      }
      return [x+centerX, y+centerY];
    }

And we cap it off with placing 300 fresh pieces of pepperoni on this pie, before we send it into the oven. (It's an outrageous amount of very small pepperoni, chosen in both axes for ease of visualizing the distribution rather than realism.)

    function drawUniformPizza() {
      drawBackground();
      drawPizzaCrust();
      drawCheese();
      var [_, _, radius] = pepperoniBounds();
      for (let p = 0; p < 300; p++) {
        let [x,y] = uniformPepperoniPosition();
        drawPepperoni(x, y);
      }
    }

But it's not what my local pizza shop's pizzas look like. That's because they're not using the same probability distribution. This pizza is using a uniform distribution. That means that for any given pepperoni, every single position on the pizza is equally likely for it to land on.

These are normal pizzas

We are using a uniform distribution here, but there are plenty of other distributions we could use as well. One of the other most familiar distributions is the normal distribution. This is the distribution that has the normal "bell curve" that we are used to seeing. And this is probably what people are talking about most of the time when they talk about how many standard deviations something is away from something else.

So what would it look like if we did a normal distribution on a pizza? The very first thing we need, to answer that, is a way of getting values from the normal distribution. This isn't included with JavaScript by default, but we can implement it pretty simply using the Box-Muller transform. This might be a scary name, but it's really easy to use. It's a way of generating numbers from the normal distribution using numbers sampled from the uniform distribution. We can implement it like this:

    function randNormal() {
      let theta = 2*Math.PI*Math.random();
      let r = Math.sqrt(-2*Math.log(Math.random()));
      let x = r * Math.cos(theta);
      let y = r * Math.sin(theta);
      return [x,y];
    }

Then we can make a pretty simple function again which gives us coordinates for where to place pepperoni in this distribution. The only little weird thing here is that I scale the radius down by a factor of 3.
Without this, the pizza ends up a little bit indistinguishable from the uniform distribution, but the scaling is arbitrary and you can do whatever you want.

    function normalPepperoniPosition() {
      var [centerX, centerY, radius] = pepperoniBounds();
      let x = radius*2;
      let y = radius*2;
      while (x**2 + y**2 >= radius**2) {
        [x,y] = randNormal();
        x = x * radius/3;
        y = y * radius/3;
      }
      return [x + centerX, y + centerY];
    }

And then once again we cap it off with a 300 piece pepperoni pizza.

    function drawNormalPizza() {
      drawBackground();
      drawPizzaCrust();
      drawCheese();
      for (let p = 0; p < 300; p++) {
        let [x,y] = normalPepperoniPosition();
        drawPepperoni(x, y);
      }
    }

Regenerate this pie!

Ouch. It's not my Platonic ideal of a pizza, that's for sure. It also looks closer to the pizzas my local shop serves, but it's missing something... See, this one is centered around, you know, the center. Theirs are not that. They're more chaotic, with a few handfuls of toppings. What if we did the normal distribution, but multiple times, with different centers?

First we have to update our position picking function to accept a center for the cluster. We'll do this by passing in the center and generating coordinates around it, while still checking that we're within the bounds of the circle formed by the crust of the pizza.

    function normal(cx, cy) {
      var [centerX, centerY, radius] = pepperoniBounds();
      let x = radius*2;
      let y = radius*2;
      while ((x-centerX)**2 + (y-centerY)**2 >= radius**2) {
        [x,y] = randNormal();
        x = cx + x * radius/3;
        y = cy + y * radius/3;
      }
      return [x, y];
    }

And then instead of one single loop for all 300 pieces, we can do 3 loops of 100 pieces each, with different (randomly chosen) centers for each.

    function drawClusterPizza() {
      drawBackground();
      drawPizzaCrust();
      drawCheese();
      for (let c = 0; c < 3; c++) {
        let [cx, cy] = uniformPepperoniPosition();
        for (let p = 0; p < 100; p++) {
          let [x, y] = normal(cx, cy);
          drawPepperoni(x, y);
        }
      }
    }

Regenerate this pie!

That looks more like it. Well, probably. This one is more chaotic, and sometimes things work out okay, but other times they're weird. Just like the real pizzas. Click that "regenerate" button a few times to see a few examples!

Okay, but when do you want one?

So, this is all great. But, when would we want this? I mean, first of all, boring. We don't need a reason except that it's fun! But, there's one valid use case that a medical professional and I came up with[1]: hot honey[2]. The ideal pepperoni pizza just might be one that has uniformly distributed pepperoni with normally distributed hot honey or hot sauce. You'd start with more intense heat, then it would taper off as you go toward the crust, so you maintain the heat without getting overwhelmed by it.

The room to play here is endless! We can come up with a lot of other fun distributions and map them in similar ways. Unfortunately, we probably can't make a Poisson pizza, since that's a distribution for discrete variables.

I really do talk about weird things with all my medical providers. And everyone else I meet. I don't know, life's too short to go "hey, this is a professional interaction, let's not chatter on and on about whatever irrelevant topic is on our mind." ↩

The pizza topping, not my pet name. ↩
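As a quick sanity check on the two distributions above, here is a small standalone sketch (my addition, not from the post; TypeScript rather than the post's browser JavaScript, with no drawing code) that samples both placement strategies and counts how many pieces land in the inner half of the pie. For the uniform pizza you'd expect roughly a quarter of the pepperoni inside half the radius, since that inner disk has a quarter of the area; the normal pizza, with the same 1/3 radius scaling, piles far more into the middle.

    // Standalone sketch (not the post's code): compare uniform vs. normal pepperoni placement.
    const RADIUS = 1;

    function randUniform(): number {
      return 2 * Math.random() - 1; // uniform in [-1, 1]
    }

    // Box-Muller transform, as in the post: two independent standard normal values.
    function randNormal(): [number, number] {
      const theta = 2 * Math.PI * Math.random();
      const r = Math.sqrt(-2 * Math.log(Math.random()));
      return [r * Math.cos(theta), r * Math.sin(theta)];
    }

    // Rejection-sample a point inside the pie, using the given candidate generator.
    function sampleInPie(candidate: () => [number, number]): [number, number] {
      while (true) {
        const [x, y] = candidate();
        if (x * x + y * y < RADIUS * RADIUS) return [x, y];
      }
    }

    function fractionInInnerHalf(candidate: () => [number, number], n = 100_000): number {
      let inner = 0;
      for (let i = 0; i < n; i++) {
        const [x, y] = sampleInPie(candidate);
        if (x * x + y * y < (RADIUS / 2) ** 2) inner++;
      }
      return inner / n;
    }

    const uniformCandidate = (): [number, number] =>
      [randUniform() * RADIUS, randUniform() * RADIUS];

    const normalCandidate = (): [number, number] => {
      const [x, y] = randNormal();
      return [x * RADIUS / 3, y * RADIUS / 3]; // same 1/3 scaling as the post
    };

    console.log("uniform:", fractionInInnerHalf(uniformCandidate)); // ~0.25
    console.log("normal: ", fractionInInnerHalf(normalCandidate));  // much higher, roughly 0.68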
When you're just getting started with music, you have so many skills to learn. You have to be able to play your instrument and express yourself through it. You need to know the style you're playing, and its idioms and conventions. You may want to record your music, and need all the skills that come along with it.

Music is, mostly, subjective: there's not an objective right or wrong way to do things. And that can make it really hard! Each of these skills is then couched in this subjectivity of trying to see if it's good enough.

Playing someone else's music, making a cover, is great because it can make it objective. It gives you something to check against. When you're playing your own music, you're in charge of the entire thing. You didn't play a wrong note, because, well, you've just changed the piece! But when you play someone else's music, now there's an original and you can try to get as close to it as possible.

Recreating it gives you a lot of practice in figuring out what someone did and how they did it. It also lets you peek into why they did it. Maybe a particular chord voicing is hard for you to play. Okay, let's simplify it and play an easier voicing. How does it sound now? How does it sound with the harder one? Play around with those differences and you start to see the why behind it all.

* * *

The same thing holds true for programming. One of my friends is a C++ programmer[1] and he was telling me about how he learned C++ and data structures really well early on: He reimplemented parts of the Boost library. This code makes heavy use of templates, a hard thing in C++. And it provides fundamental data structures with robust implementations and good performance[2].

What he would do is look at the library and pick a slice of it to implement. He'd look at what the API for it is, how it was implemented, what it was doing under the hood. Then he'd go ahead and try to do it himself, without any copy-pasting and without real-time copying from the other screen.

Sometimes, he'd run into things which didn't make sense. Why is this a doubly-linked list here, when it seems a singly-linked list would do just fine? And in those moments, if you can't find a reason? You get to go down that path, make it the singly-linked version, and then find out later: oh, ohhh. Ohhhh, they did that for a reason.

It lets you run into some of the hard problems, grapple with them, and understand why the original was written how it was. You get to study with some really strong programmers, by proxy via their codebase. Their code is your tutor and your guide for understanding how to write similar things in the future.

* * *

There's a lot of judgment out there about not doing original works: judgment of covers, and of reimplementing things that already exist just to learn. So many people have internalized this, and I've heard countless times "I want to make a new project, but everything I think of, someone else has already done!"

And to that, I say: do it anyway[3]. If someone else has done it, that's great. That means that you had an idea so good that someone else thought it was a good idea, too. And that means that, because someone else has done it, you have a reference now. You can compare notes, and you can see how they did it, and you can learn.

I'm a recovering C++ programmer myself, and had some unpleasant experiences associated with the language. This friend is a game developer, and his industry is one where C++ makes a lot of sense to use because of the built-up code around it.
↩ He said they're not perfect, but that they're really good and solid and you know a lot of people thought for a long time about how to do them. You get to follow in their footsteps and benefit from all that hard thinking time. ↩ But: you must always give credit when you are using someone else's work. If you're reimplementing someone else's library, or covering someone's song, don't claim it's your own original invention. ↩
One of the first types we learn about is the boolean. It's pretty natural to use, because boolean logic underpins much of modern computing. And yet, it's one of the types we should probably be using a lot less of. In almost every single instance when you use a boolean, it should be something else. The trick is figuring out what "something else" is.

Doing this is worth the effort. It tells you a lot about your system, and it will improve your design (even if you end up using a boolean). There are a few possible types that come up often, hiding as booleans. Let's take a look at each of these, as well as the case where using a boolean does make sense. This isn't exhaustive—[1] there are surely other types that can make sense, too.

Datetimes

A lot of boolean data is representing a temporal event having happened. For example, websites often have you confirm your email. This may be stored as a boolean column, is_confirmed, in the database. It makes a lot of sense. But, you're throwing away data: when the confirmation happened.

You can instead store when the user confirmed their email in a nullable column. You can still get the same information by checking whether the column is null. But you also get richer data for other purposes. Maybe you find out down the road that there was a bug in your confirmation process. You can use these timestamps to check which users would be affected by that, based on when their confirmation was stored.

This is the one I've seen discussed the most of all these. We run into it with almost every database we design, after all. You can detect it by asking if an action has to occur for the boolean to change values, and if values can only change one time. If you have both of these, then it really looks like it is a datetime being transformed into a boolean. Store the datetime!

Enums

Much of the remaining boolean data indicates either what type something is, or its status. Is a user an admin or not? Check the is_admin column! Did that job fail? Check the failed column! Is the user allowed to take this action? Return a boolean for that, yes or no!

These usually make more sense as an enum. Consider the admin case: this is really a user role, and you should have an enum for it. If it's a boolean, you're going to eventually need more columns, and you'll keep adding on other statuses. Oh, we had users and admins, but now we also need guest users and we need super-admins. With an enum, you can add those easily.

    enum UserRole {
      User,
      Admin,
      Guest,
      SuperAdmin,
    }

And then you can usually use your tooling to make sure that all the new cases are covered in your code. With a boolean, you have to add more booleans, and then you have to make sure you find all the places where the old booleans were used and make sure they handle these new cases, too. Enums help you avoid these bugs.

Job status is one that's pretty clearly an enum as well. If you use booleans, you'll have is_failed, is_started, is_queued, and on and on. Or you could just have one single field, status, which is an enum with the various statuses. (Note, though, that you probably do want timestamp fields for each of these events—but you're still best having the status stored explicitly as well.) This begins to resemble a state machine once you store the status, and it means that you can make much cleaner code and analyze things along state transition lines.

And it's not just for storing in a database, either. If you're checking a user's permissions, you often return a boolean for that.
    fn check_permissions(user: User) -> bool {
        false // no one is allowed to do anything i guess
    }

In this case, true means the user can do it and false means they can't. Usually. I think. But you can really start to have doubts here, and with any boolean, because the application logic meaning of the value cannot be inferred from the type. Instead, this can be represented as an enum, even when there are just two choices.

    enum PermissionCheck {
        Allowed,
        NotPermitted { reason: String },
    }

As a bonus, though, if you use an enum? You can end up with richer information, like returning a reason for a permission check failing. And you are safe for future expansions of the enum, just like with roles.

You can detect when something should be an enum by a proliferation of booleans which are mutually exclusive or depend on one another. You'll see multiple columns which are all changed at the same time. Or you'll see a boolean which is returned and used for a long time. It's important to use enums here to keep your program maintainable and understandable.

Conditionals

But when should we use a boolean? I've mainly run into one case where it makes sense: when you're (temporarily) storing the result of a conditional expression for evaluation. This is in some ways an optimization, either for the computer (reuse a variable[2]) or for the programmer (make it more comprehensible by giving a name to a big conditional) by storing an intermediate value.

Here's a contrived example where we use a boolean as an intermediate value.

    fn calculate_user_data(user: User, records: RecordStore) {
        // this would be some nice long conditional,
        // but I don't have one. So variables it is!
        let user_can_do_this: bool = (a && b) && (c || !d);

        if user_can_do_this && records.ready() {
            // do the thing
        } else if user_can_do_this && records.in_progress() {
            // do another thing
        } else {
            // and something else!
        }
    }

But even here in this contrived example, some enums would make more sense. I'd keep the boolean, probably, simply to give a name to what we're calculating. But the rest of it should be a match on an enum!

* * *

Sure, not every boolean should go away. There's probably no single rule in software design that is always true. But, we should be paying a lot more attention to booleans. They're sneaky. They feel like they make sense for our data, but really they make sense for our logic. The data is usually something different underneath. By storing a boolean as our data, we're coupling that data tightly to our application logic. Instead, we should remain critical and ask what data the boolean depends on, and whether we should maybe store that instead.

It comes easier with practice. Really, all good design does. A little thinking up front saves you a lot of time in the long run.

I know that using an em-dash is treated as a sign of using LLMs. LLMs are never used for my writing. I just really like em-dashes and have a dedicated key for them on one of my keyboard layers. ↩

This one is probably best left to the compiler. ↩
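To make the "match on an enum" suggestion above a bit more concrete, here is a minimal sketch of the same idea in TypeScript (my own illustration, not the post's Rust): the record state becomes a discriminated union, the named boolean stays for readability, and the compiler checks that every case is handled.

    // A minimal sketch (mine, not the post's) of "match on an enum" in TypeScript.
    type RecordState =
      | { kind: "ready" }
      | { kind: "inProgress" }
      | { kind: "notStarted" };

    function calculateUserData(userCanDoThis: boolean, state: RecordState) {
      if (!userCanDoThis) {
        // handle the "not permitted" path once, up front
        return;
      }
      switch (state.kind) {
        case "ready":
          // do the thing
          break;
        case "inProgress":
          // do another thing
          break;
        case "notStarted":
          // and something else!
          break;
        default: {
          // exhaustiveness check: this fails to compile if a new kind is added
          const _exhaustive: never = state;
          return _exhaustive;
        }
      }
    }

The `never`-typed default arm is what gives you the "tooling makes sure all the new cases are covered" property the post describes for enums.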
One of the best known hard problems in computer science is the halting problem. In fact, it's widely thought[1] that you cannot write a program that will, for any arbitrary program as input, tell you correctly whether or not it will terminate. This is written from the framing of computers, though: can we do better with a human in the loop?

It turns out, we can. And we can use a method that's generalizable, which many people can follow for many problems. Not everyone can use the method, and you'll see why in a bit. But lots of people can apply this proof technique. Let's get started.

* * *

We'll start by formalizing what we're talking about, just a little bit. I'm not going to give the full formal proof—that will be reserved for when this is submitted to a prestigious conference next year.

We will call the set of all programs P. We want to answer, for any p in P, whether or not p will eventually halt. We will call this h(p), and h(p) = true if p eventually finishes and false otherwise. Actually, scratch that. Let's simplify it and just say that yes, every program does halt eventually, so h(p) = true for all p. That makes our lives easier.

Now we need to get from our starting assumptions, the world of logic we live in, to the truth of our statement. We'll call our goal, that h(p) = true for all p, the statement H. Now let's start with some facts.

Fact one: I think it's always an appropriate time to play the saxophone. *honk*!

Fact two: My wife thinks that it's sometimes inappropriate to play the saxophone, such as when it's "time for bed" or "I was in the middle of a sentence!"[2]

We'll give the statement "It's always an appropriate time to play the saxophone" the name A. We know that I believe A is true. And my wife believes that A is false. So now we run into the snag:

Fact three: The wife is always right.

This is a truism in American culture, useful for settling debates. It's also useful here for solving major problems in computer science because, babe, we're both the wife. We're both right!

So now that we're both right, we know that A and !A are both true. And we're in luck, we can apply a whole lot of fancy classical logic here. Since A and !A, we know that A is true and we also know that !A is true. From A being true, we can conclude that A or H is true. And then we can apply disjunctive syllogism[3], which says that if A or H is true and !A is true, then H must be true. This makes sense, because if you've excluded one possibility then the other must be true. And we do have !A, so that means: H is true!

There we have it. We've proved our proposition, H, which says that for any program p, p will eventually halt. The previous logic is, mostly, sound. It uses the principle of explosion, though I prefer to call it "proof by married lesbian."

* * *

Of course, we know that this is wrong. It falls apart with our assumptions. We built the system on contradictory assumptions to begin with, and this is something we avoid in logic[4]. If we allow contradictions, then we can prove truly anything. I could have also proved (by married lesbian) that no program will terminate.

This has been a silly traipse through logic. If you want a good journey through logic, I'd recommend Hillel Wayne's Logic for Programmers. I'm sure that, after reading it, you'll find absolutely no flaws in my logic here. After all, I'm the wife, so I'm always right.

It's widely thought because it's true, but we don't have to let that keep us from a good time. ↩

I fact checked this with her, and she does indeed hold this belief.
↩ I had to look this up, my uni logic class was a long time ago. ↩ The real conclusion to draw is that, because of proof by contradiction, it's certainly not true that the wife is always right. Proved that one via married lesbians having arguments. Or maybe gay relationships are always magical and happy and everyone lives happily ever after, who knows. ↩
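For anyone who wants to see the inference from the post above machine-checked: here is a tiny Lean 4 sketch (my addition, not part of the post) of the same derivation, going from A and !A to an arbitrary H via disjunction introduction and disjunctive syllogism.

    -- A minimal Lean 4 sketch (not from the original post) of the derivation above:
    -- from A we get A ∨ H, and disjunctive syllogism with ¬A then yields H.
    example (A H : Prop) (ha : A) (hna : ¬A) : H :=
      have aOrH : A ∨ H := Or.inl ha
      aOrH.elim (fun a => absurd a hna) (fun h => h)

Which is just the principle of explosion, spelled out in the two steps the post uses.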
I've been publishing at least one blog post every week on this blog for about 2.5 years. I kept it up even when I was very sick last year with Lyme disease. It's time for me to take a break and reset. This is the right time, because the world is very difficult for me to move through right now and I'm just burnt out. I need to focus my energy on things that give me energy and right now, that's not writing and that's not tech. I'll come back to this, and it might look a little different. This is my last post for at least a month. It might be longer, if I still need more time, but I won't return before the end of May. I know I need at least that long to heal, and I also need that time to focus on music. I plan to play a set at West Philly Porchfest, so this whole month I'll be prepping that set. If you want to follow along with my music, you can find it on my bandcamp (only one track, but I'll post demos of the others that I prepare for Porchfest as they come together). And if you want to reach out, my inbox is open. Be kind to yourself. Stay well, drink some water. See you in a while.
More in programming
What does it mean when someone writes that a programming language is “strongly typed”? I’ve known for many years that “strongly typed” is a poorly-defined term. Recently I was prompted on Lobsters to explain why it’s hard to understand what someone means when they use the phrase. I came up with more than five meanings!

how strong?

The various meanings of “strongly typed” are not clearly yes-or-no. Some developers like to argue that these kinds of integrity checks must be completely perfect or else they are entirely worthless. Charitably (it took me a while to think of a polite way to phrase this), that betrays a lack of engineering maturity.

Software engineers, like any engineers, have to create working systems from imperfect materials. To do so, we must understand what guarantees we can rely on, where our mistakes can be caught early, where we need to establish processes to catch mistakes, how we can control the consequences of our mistakes, and how to remediate when something breaks because of a mistake that wasn’t caught.

strong how?

So, what are the ways that a programming language can be strongly or weakly typed? In what ways are real programming languages “mid”?

Statically typed as opposed to dynamically typed? Many languages have a mixture of the two, such as run time polymorphism in OO languages (e.g. Java), or gradual type systems for dynamic languages (e.g. TypeScript).

Sound static type system? It’s common for static type systems to be deliberately unsound, such as covariant subtyping in arrays or functions (Java, again). Gradual type systems might have gaping holes for usability reasons (TypeScript, again). And some type systems might be unsound due to bugs. (There are a few of these in Rust.) Unsoundness isn’t a disaster, if a programmer won’t cause it without being aware of the risk. For example: in Lean you can write “sorry” as a kind of “to do” annotation that deliberately breaks soundness; and Idris 2 has type-in-type so it accepts Girard’s paradox.

Type safe at run time? Most languages have facilities for deliberately bypassing type safety, with an “unsafe” library module or “unsafe” language features, or things that are harder to spot. It can be more or less difficult to break type safety in ways that the programmer or language designer did not intend. JavaScript and Lua are very safe, treating type safety failures as security vulnerabilities. Java and Rust have controlled unsafety. In C everything is unsafe.

Fewer weird implicit coercions? There isn’t a total order here: for instance, C has implicit bool/int coercions, Rust does not; Rust has implicit deref, C does not. There’s a huge range in how much coercions are a convenience or a source of bugs. For example, the PHP and JavaScript == operators are made entirely of WAT, but at least you can use === instead.

How fancy is the type system? To what degree can you model properties of your program as types? Is it convenient to parse, not validate? Is the Curry-Howard correspondence something you can put into practice? Or is it only capable of describing the physical layout of data?

There are probably other meanings, e.g. I have seen “strongly typed” used to mean that runtime representations are abstract (you can’t see the underlying bytes); or in the past it sometimes meant a language with a heavy type annotation burden (as a mischaracterization of static type checking).
how to type

So, when you write (with your keyboard) the phrase “strongly typed”, delete it, and come up with a more precise description of what you really mean. The desiderata above are partly overlapping, sometimes partly orthogonal. Some of them you might care about, some of them not. But please try to communicate where you draw the line and how fuzzy your line is.
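As one concrete illustration of the gradual-typing holes mentioned above (my example, not the author's): in TypeScript, `any` switches checking off, so the checker happily accepts a program that then fails at run time.

    // Sketch (not from the post): `any` is a deliberate hole in TypeScript's checking.
    const parsed: any = JSON.parse('{"count": "three"}');
    const count: number = parsed.count;   // the checker accepts this: `any` is assignable to number
    console.log(count.toFixed(2));        // run time: TypeError, count.toFixed is not a function

Swapping `any` for `unknown` makes the checker reject the property access until you narrow the type, which is roughly where TypeScript chooses to draw its soundness-versus-usability line.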
(Last week's newsletter took too long and I'm way behind on Logic for Programmers revisions so short one this time.1)

In classical logic, two operators F/G are duals if F(x) = !G(!x). Three examples:

1. x || y is the same as !(!x && !y).
2. <>P ("P is possibly true") is the same as ![]!P ("not P isn't definitely true").
3. some x in set: P(x) is the same as !(all x in set: !P(x)).

(1) is just a version of De Morgan's Law, which we regularly use to simplify boolean expressions. (2) is important in modal logic but has niche applications in software engineering, mostly in how it powers various formal methods.2

The real interesting one is (3), the "quantifier duals". We use lots of software tools to either find a value satisfying P or check that all values satisfy P. And by duality, any tool that does one can do the other, by seeing if it fails to find/check !P. Some examples in the wild:

Z3 is used to solve mathematical constraints, like "find x, where f(x) >= 0". If I want to prove a property like "f is always positive", I ask z3 to solve "find x, where !(f(x) >= 0)", and see if that is unsatisfiable. This use case powers a LOT of theorem provers and formal verification tooling.

Property testing checks that all inputs to a code block satisfy a property. I've used it to generate complex inputs with certain properties by checking that all inputs don't satisfy the property and reading out the test failure.

Model checkers check that all behaviors of a specification satisfy a property, so we can find a behavior that reaches a goal state G by checking that all states are !G. Here's TLA+ solving a puzzle this way.3

Planners find behaviors that reach a goal state, so we can check if all behaviors satisfy a property P by asking it to reach goal state !P.

The problem "find the shortest traveling salesman route" can be broken into some route: distance(route) = n and all route: !(distance(route) < n). Then a route finder can find the first, and then convert the second into a some and fail to find it, proving n is optimal.

Even cooler to me is when a tool does both finding and checking, but gives them different "meanings". In SQL, some x: P(x) is true if we can query for P(x) and get a nonempty response, while all x: P(x) is true if all records satisfy the P(x) constraint. Most SQL databases allow for complex queries but not complex constraints! You got UNIQUE, NOT NULL, REFERENCES, which are fixed predicates, and CHECK, which is one-record only.4

Oh, and you got database triggers, which can run arbitrary queries and throw exceptions. So if you really need to enforce a complex constraint P(x, y, z), you put in a database trigger that queries some x, y, z: !P(x, y, z) and throws an exception if it finds any results. That all works because of quantifier duality! See here for an example of this in practice.

Duals more broadly

"Dual" doesn't have a strict meaning in math, it's more of a vibe thing where all of the "duals" are kinda similar in meaning but don't strictly follow all of the same rules. Usually things X and Y are duals if there is some transform F where X = F(Y) and Y = F(X), but not always. Maybe the category theorists have a formal definition that covers all of the different uses.

Usually duals switch properties of things, too: an example showing some x: P(x) becomes a counterexample of all x: !P(x). Under this definition, I think the dual of a list l could be reverse(l). The first element of l becomes the last element of reverse(l), the last becomes the first, etc.
A more interesting case: the dual of a K -> set(V) map is the V -> set(K) map. I.e., the dual of lived_in_city = {alice: {paris}, bob: {detroit}, charlie: {detroit, paris}} is city_lived_in_by = {paris: {alice, charlie}, detroit: {bob, charlie}}. This preserves the property that x in map[y] <=> y in dual[x].

And after writing this I just realized this is a partial retread of a newsletter I wrote a couple months ago. But only a partial retread! ↩

Specifically "linear temporal logics" are modal logics, so "eventually P" ("P is true in at least one state of each behavior") is the same as saying "!always !P" ("not P isn't true in all states of all behaviors"). This is the basis of liveness checking. ↩

I don't know for sure, but my best guess is that Antithesis does something similar when their fuzzer beats videogames. They're doing fuzzing, not model checking, but they have the same purpose: check that complex state spaces don't have bugs. Making the bug "we can't reach the end screen" can make a fuzzer output a complete end-to-end run of the game. Obvs a lot more complicated than that but that's the general idea at least. ↩

For CHECK to constrain multiple records you would need to use a subquery. Core SQL does not support subqueries in check. It is an optional database "feature outside of core SQL" (F671), which Postgres does not support. ↩
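Here is a small sketch of that map dual in TypeScript (my code, not the newsletter's): it builds the V -> set(K) map from a K -> set(V) map, which by construction preserves the x in map[y] <=> y in dual[x] property described above.

    // A small sketch (mine, not the newsletter's) of the K -> set(V) map dual.
    function dualOf(map: Map<string, Set<string>>): Map<string, Set<string>> {
      const dual = new Map<string, Set<string>>();
      for (const [k, values] of map) {
        for (const v of values) {
          if (!dual.has(v)) dual.set(v, new Set());
          dual.get(v)!.add(k); // v in map[k]  <=>  k in dual[v]
        }
      }
      return dual;
    }

    const livedInCity = new Map([
      ["alice", new Set(["paris"])],
      ["bob", new Set(["detroit"])],
      ["charlie", new Set(["detroit", "paris"])],
    ]);

    const cityLivedInBy = dualOf(livedInCity);
    console.log(cityLivedInBy.get("paris"));   // Set { "alice", "charlie" }
    console.log(cityLivedInBy.get("detroit")); // Set { "bob", "charlie" }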
Omarchy 2.0 was released on Linux's 34th birthday as a gift to perhaps the greatest open-source project the world has ever known. Not only does Linux run 95% of all servers on the web and billions of devices as an embedded OS, but it also turns out to be an incredible desktop environment! It's crazy that it took me more than thirty years to realize this, but while I spent time in Apple's walled garden, the free software alternative simply grew better, stronger, and faster. The Linux of 2025 is not the Linux of the 90s or the 00s or even the 10s. It's shockingly more polished, capable, and beautiful.

It's been an absolute honor to celebrate Linux with the making of Omarchy, the new Linux distribution that I've spent the last few months building on top of Arch and Hyprland. What began as a post-install script has turned into a full-blown ISO, dedicated package repository, and flourishing community of thousands of enthusiasts all collaborating on making it better. It's been improving rapidly with over twenty releases since the premiere in late June, but this Version 2.0 update is the biggest one yet.

If you've been curious about giving Linux a try, you're not afraid of an operating system that asks you to level up and learn a little, and you want to see what a totally different computing experience can look and feel like, I invite you to give it a go. Here's a full tour of Omarchy 2.0.
In 2020, Apple released the M1 with a custom GPU. We got to work reverse-engineering the hardware and porting Linux. Today, you can run Linux on a range of M1 and M2 Macs, with almost all hardware working: wireless, audio, and full graphics acceleration.

Our story begins in December 2020, when Hector Martin kicked off Asahi Linux. I was working at Collabora on Panfrost, the open source Mesa3D driver for Arm Mali GPUs. Hector put out a public call for guidance from upstream open source maintainers, and I bit. I just intended to give some quick pointers. Instead, I bought myself a Christmas present and got to work.

In between my university coursework and Collabora work, I poked at the shader instruction set. One thing led to another. Within a few weeks, I drew a triangle. In 3D graphics, once you can draw a triangle, you can do anything. Pretty soon, I started work on a shader compiler. After my final exams that semester, I took a few days off from Collabora to bring up an OpenGL driver capable of spinning gears with my new compiler.

Over the next year, I kept reverse-engineering and improving the driver until it could run 3D games on macOS. Meanwhile, Asahi Lina wrote a kernel driver for the Apple GPU. My userspace OpenGL driver ran on macOS, leaving her kernel driver as the missing piece for an open source graphics stack. In December 2022, we shipped graphics acceleration in Asahi Linux.

In January 2023, I started my final semester in my Computer Science program at the University of Toronto. For years I juggled my courses with my part-time job and my hobby driver. I faced the same question as my peers: what will I do after graduation?

Maybe Panfrost? I started reverse-engineering the Mali Midgard GPU back in 2017, when I was still in high school. That led to an internship at Collabora in 2019 once I graduated, turning into my job throughout four years of university. During that time, Panfrost grew from a kid’s pet project based on blackbox reverse-engineering, to a professional driver engineered by a team with Arm’s backing and hardware documentation. I did what I set out to do, and the project succeeded beyond my dreams. It was time to move on.

What did I want to do next? Finish what I started with the M1. Ship a great driver.

Bring full, conformant OpenGL drivers to the M1. Apple’s drivers are not conformant, but we should strive for the industry standard.

Bring full, conformant Vulkan to Apple platforms, disproving the myth that Vulkan isn’t suitable for Apple hardware.

Bring Proton gaming to Asahi Linux. Thanks to Valve’s work for the Steam Deck, Windows games can run better on Linux than even on Windows. Why not reap those benefits on the M1?

Panfrost was my challenge until we “won”. My next challenge? Gaming on Linux on M1.

Once I finished my coursework, I started full-time on gaming on Linux. Within a month, we shipped OpenGL 3.1 on Asahi Linux. A few weeks later, we passed official conformance for OpenGL ES 3.1. That put us at feature parity with Panfrost. I wanted to go further.

OpenGL (ES) 3.2 requires geometry shaders, a legacy feature not supported by either Arm or Apple hardware. The proprietary OpenGL drivers emulate geometry shaders with compute, but there was no open source prior art to borrow. Even though multiple Mesa drivers need geometry/tessellation emulation, nobody did the work to get there. My early progress on OpenGL was fast thanks to the mature common code in Mesa. It was time to pay it forward.
Over the rest of the year, I implemented geometry/tessellation shader emulation. And also the rest of the owl. In January 2024, I passed conformance for the full OpenGL 4.6 specification, finishing up OpenGL. Vulkan wasn’t too bad, either. I polished the OpenGL driver for a few months, but once I started typing a Vulkan driver, I passed 1.3 conformance in a few weeks. What remained was wiring up the geometry/tessellation emulation to my shiny new Vulkan driver, since those are required for Direct3D. Et voilà, Proton games. Along the way, Karol Herbst passed OpenCL 3.0 conformance on the M1, running my compiler atop his “rusticl” frontend. Meanwhile, when the Vulkan 1.4 specification was published, we were ready and shipped a conformant implementation on the same day. After that, I implemented sparse texture support, unlocking Direct3D 12 via Proton. …Now what? Ship a great driver? Check. Conformant OpenGL 4.6, OpenGL ES 3.2, and OpenCL 3.0? Check. Conformant Vulkan 1.4? Check. Proton gaming? Check. That’s a wrap. We’ve succeeded beyond my dreams. The challenges I chased, I have tackled. The drivers are fully upstream in Mesa. Performance isn’t too bad. With the Vulkan on Apple myth busted, conformant Vulkan is now coming to macOS via LunarG’s KosmicKrisp project building on my work. Satisfied, I am now stepping away from the Apple ecosystem. My friends in the Asahi Linux orbit will carry the torch from here. As for me? Onto the next challenge!
TokyoDev has published a number of different guides on coming to Japan to work as a software developer. But what if you’re already employed in another industry in Japan, and are considering changing your career to software development? I interviewed four people who became developers after they moved to Japan, for their advice and personal experiences on: Why they chose development How they switched careers How they successfully found their first jobs What mistakes they made in the job hunt The most important advice they give to others Why switch to software development? A lifelong goal For Yuta Asakura, a career in software was the dream all along. “I’ve always wanted to work with computers,” he said, “but due to financial difficulties, I couldn’t pursue a degree in computer science. I had to start working early to support my single mother. As the eldest child, I focused on helping my younger brother complete his education.” To support his family, Asakura worked in construction for eight years, eventually becoming a foreman in Yokohama. Meanwhile, his brother graduated, and became a software engineer after joining the Le Wagon Tokyo bootcamp. About a year before his brother graduated, Asakura began to delve back into development. “I had already begun self-studying in my free time by taking online courses and building small projects,” he explained. “ I quickly became hooked by how fun and empowering it was to learn, apply, and build. It wasn’t always easy. There were moments I wanted to give up, but the more I learned, the more interesting things I could create. That feeling kept me going.” What truly inspired me was the idea of creating something from nothing. Coming from a construction background, I was used to building things physically. But I wanted to create things that were digital, scalable, borderless, and meaningful to others. An unexpected passion As Andrew Wilson put it, “Wee little Andrew had a very digital childhood,” full of games and computer time. Rather than pursuing tech, however, he majored in Japanese and moved to Japan in 2012, where he initially worked as a language teacher and recruiter before settling into sales. Wilson soon discovered that sales wasn’t really his strong suit. “At the time I was selling three different enterprise software solutions.” So I had to have a fairly deep understanding of that software from a user perspective, and in the course of learning about these products and giving technical demonstrations, I realized that I liked doing that bit of my job way more than I liked actually trying to sell these things. Around that time, he also realized he didn’t want to manually digitize the many business cards he always collected during sales meetings: “That’s boring, and I’m lazy.” So instead, he found a business card-scanning app, made a spreadsheet to contain the data, automated the whole process, and shared it internally within his company. His manager approached him soon afterwards, saying, “You built this? We were looking to hire someone to do this!” Encouraged, Wilson continued to develop it. “As soon as I was done with work,” he explained with a laugh, “I was like, ‘Oh boy, I can work on my spreadsheet!’” As a result, Wilson came to the conclusion that he really should switch careers and pursue his passion for programming. Similarly to Wilson, Malcolm Hendricks initially focused on Japanese. He came to Japan as an exchange student in 2002, and traveled to Japan several more times before finally relocating in 2011. 
Though his original role was as a language teacher, he soon found a job at a Japanese publishing company, where he worked as an editor and writer for seven years. However, he felt burned out on the work, and also that he was in danger of stagnating; since he isn’t Japanese, the road to promotion was a difficult one. He started following some YouTube tutorials on web development, and eventually began creating websites for his friends. Along the way, he fell in love with development, on both a practical and a philosophical level. “There’s another saying I’ve heard here and there—I don’t know exactly who to attribute it to—but the essence of it goes that ‘Computer science is just teaching rocks how to think,’” Hendricks said. “My mentor Bob has been guiding me through the very fundamentals of computer science, down to binary calculations, Boolean logic, gate theory, and von Neumann architecture. He explains the fine minutia and often concludes with, ‘That’s how it works. There’s no magic to it.’ “Meanwhile, in the back of my mind, I can’t help but be mystified at the things we are all now able to do, such as having video calls from completely different parts of the world, or even me here typing on squares of plastic to make letters appear on a screen that has its own source of light inside it. . . . [It] sounds like the highest of high-fantasy wizardry to me.” I’ve always had a love for technomancy, but I never figured I might one day get the chance to be a technomancer myself. And I love it! We have the ability to create nigh unto anything in the digital world. A practical solution When Paulo D’Alberti moved to Japan in 2019, he only spoke a little Japanese, which limited his employment prospects. With his prior business experience, he landed an online marketing role for a blockchain startup, but eventually exited the company to pursue a more stable work environment. “But when I decided to leave the company,” D’Alberti said, “my Japanese was still not good enough to do business. So I was at a crossroads.” Do I decide to join a full-time Japanese language course, aiming to get JLPT N2 or the equivalent, and find a job on the business side? . . . Or do I say screw it and go for a complete career change and get skills in something more technical, that would allow me to carry those skills [with me] even if I were to move again to another country?” The portability of a career in development was a major plus for D’Alberti. “That was one of the big reasons. Another consideration was that, looking at the boot camps that were available, the promise was ‘Yeah, we’ll teach you to be a software developer in nine weeks or two months.’ That was a much shorter lead time than getting from JLPT N4 to N2. I definitely wouldn’t be able to do that in two months.” Since D’Alberti had family obligations, the timeline for his career switch was crucial. “We still had family costs and rent and groceries and all of that. I needed to find a job as soon as possible. I actually already at that point had been unsuccessfully job hunting for two months. So that was like, ‘Okay, the savings are winding up, and we are running out of options. I need to make a decision and make it fast.’” How to switch careers Method 1: Software Development Bootcamp Under pressure to find new employment quickly, D’Alberti decided to enter the Le Wagon Coding Bootcamp in Tokyo. Originally, he wavered between Le Wagon and Code Chrysalis, which has since ended its bootcamp programs. “I went with Le Wagon for two reasons,” he explained. 
“There were some scheduling reasons. . . . But the main reason was that Code Chrysalis required you to pass a coding exam before being admitted to their bootcamp.” Since D’Alberti was struggling to learn development by himself, he knew his chances of passing any coding exam were slim. “I tried Code Academy, I tried Solo Learn, I tried a whole bunch of apps online, I would follow the examples, the exercises . . . nothing clicked. I wouldn’t understand what I was doing or why I was doing it.” At the time, Le Wagon only offered full-time web development courses, although they now also have part-time courses and a data science curriculum. Since D’Alberti was unemployed, a full-time program wasn’t a problem for him, “But it did mean that the people who were present were very particular [kinds] of people: students who could take some time off to add this to their [coursework], or foreigners who took three months off and were traveling and decide to come here and do studying plus sightseeing, and I think there were one or two who actually asked for time off from the job in order to participate.” It was a very intense course, and the experience itself gave me exactly what I needed. I had been trying to learn by myself. It did not work. I did not understand. [After joining], the first day or second day, suddenly everything clicked. D’Alberti appreciated how Le Wagon organized the curriculum to build continuously off previous lessons. By the time he graduated in June of 2019, he’d built three applications from scratch, and felt far more confident in his coding abilities. “It was great. [The curriculum] was amazing, and I really felt super confident in my abilities after the three months. Which, looking back,” he joked, “I still had a lot to learn.” D’Alberti did have some specific advice for those considering a bootcamp: “Especially in the last couple of weeks, it can get very dramatic. You are divided into teams and as a team, you’re supposed to develop an application that you will be demonstrating in front of other people.” Some of the students, D’Alberti explained, felt that pressure intensely; one of his classmates broke down in tears. “Of course,” he added, “one of the big difficulties of joining a bootcamp is economical. The bootcamp itself is quite expensive.” While between 700,000 and 800,000 yen when D’Alberti went through the bootcamp, Le Wagon’s tuition has now risen to 890,000 yen for Web Development and 950,000 for Data Science. At the time D’Alberti joined there was no financial assistance. Now, Le Wagon has an agreement with Hello Work, so that students who are enrolled in the Hello Work system can be reimbursed for up to 70 percent of the bootcamp’s tuition. Though already studying development by himself, Asakura also enrolled in Le Wagon Tokyo in 2024, “to gain structure and accountability,” he said. One lesson that really stayed with me came from Sylvain Pierre, our bootcamp director. He said, ‘You stop being a developer the moment you stop learning or coding.’ That mindset helped me stay on track. Method 2: Online computer science degree Wilson considered going the bootcamp route, but decided against it. He knew, from his experience in recruiting, that a degree would give him an edge—especially in Japan, where having the right degree can make a difference in visa eligibility “The quality of bootcamps is perfectly fine,” he explained. “If you go through a bootcamp and study hard, you can get a job and become a developer no problem. 
I wanted to differentiate myself on paper as much as I could . . . [because] there are a lot of smart, motivated people who go through a bootcamp.” Whether it’s true or not, whether it’s valid or not, if you take two candidates who are very similar on paper, and one has a coding bootcamp and one has a degree, from a typical Japanese HR perspective, they’re going to lean toward the person with the degree. “Whether that’s good or not, that’s sort of a separate situation,” Wilson added. “But the reality [is] I’m older and I’m trying to make a career change, so I want to make sure that I’m giving myself every advantage that I can.” For these reasons, Wilson opted to get his computer science degree online. “There’s a program out of the University of Oregon, for people who already had a Bachelor’s degree in a different subject to get a Bachelor’s degree in Computer Science. “Because it’s limited to people who already have a Bachelor’s degree, that means you don’t need to take any non-computer science classes. You don’t need any electives or prerequisites or anything like that.” As it happened, Wilson was on paternity leave when he started studying for his degree. “That was one of my motivations to finish quickly!” he said. In the end, with his employer’s cooperation, he extended his paternity leave to two years, and finished the degree in five quarters. Method 3: Self-taught Hendricks took a different route, combining online learning materials with direct experience. He primarily used YouTube tutorials, like this project from one of his favorite channels, to teach himself. Once he had the basics down, he started creating websites for friends, as well as for the publishing company he worked for at the time. With every site, he’d put his name at the bottom of the page, as a form of marketing. This worked well enough that Hendricks was able to quit his work at the translation company and transition to full-time freelancing. However, eventually the freelancing work dried up, and he decided he wanted to experience working at a tech company—and not just for job security reasons. Hendricks saw finding a full-time development role as the perfect opportunity to push himself and see just how far he could get in his new career. There’s a common trope, probably belonging more to the sports world at large, about the importance of shedding ‘blood, sweat, and tears’ in the pursuit of one’s passion . . . and that’s also how I wanted to cut my teeth in the software engineering world. The job hunt While all four are now successfully employed as developers, Asakura, D’Alberti, Wilson, and Hendricks approached and experienced the job hunt differently. Following is their hard-earned advice on best practices and common mistakes. DO network When Hendricks started his job hunt, he faced the disadvantages of not having any formal experience, and also being both physically and socially isolated from other developers. Since he and his family were living in Nagano, he wasn’t able to participate in most of the tech events and meet-ups available in Tokyo or other big cities. His initial job hunt took around a year, and at one point he was sending so many applications that he received a hundred rejections in a week. It wasn’t until he started connecting with the community that he was able to turn it around, eventually getting three good job offers in a single week. Networking, for me, is what made all the difference. 
It was through networking that I found my mentors, found community, and joined and even started a few great Discord servers. These all undeniably contributed to me ultimately landing my current job, but they also made me feel welcome in the industry. Hendricks particularly credits his mentors, Ean More and Bob Cousins, for giving him great advice. “My initial mentor [Ean More] I actually met through a mutual IT networking Facebook group. I noticed that he was one of the more active members, and that he was always ready to lend a hand to help others with their questions and spread a deeper understanding of programming and computer science. He also often posted snippets of his own code to share with the community and receive feedback, and I was interested in a lot of what he was posting. “I reached out to him and told him I thought it was amazing how selfless he was in the group, and that, while I’m still a junior, if there was ever any grunt work I could do under his guidance, I would be happy to do so. Since he had a history of mentoring others, he offered to do so for me, and we’ve been mentor/mentee and friends ever since.” “My other mentor [Bob Cousins],” Hendricks continued, “was a friend of my late uncle’s. My uncle had originally begun mentoring me shortly before his passing. We were connected through a mutual friend whom I lamented to about not having any clue how to continue following the path my uncle had originally laid before me. He mentioned that he knew just the right person and gave me an email address to contact. I sent an email to the address and was greeted warmly by the man who would become another mentor, and like an uncle to me.” Although Hendricks found him via a personal connection, Cousins runs a mentorship program that caters to a wide variety of industries. Wilson also believes in the power of networking—and not just for the job hunt. “One of the things I like about programming,” he said, “is that it’s a very collaborative community. Everybody wants to help everybody.” We remember that everyone had to start somewhere, and we’ll take time to help those starting out. It’s a very welcoming community. Just do it! We’re all here for you, and if you need help I’ll refer you. Asakura, by contrast, thinks that networking can help, but that it works a little differently in Japan than in other countries. “Don’t rely on it too much,” he said. “Unlike in Western countries, personal referrals don’t always lead directly to job opportunities in Japan. Your skills, effort, and consistency will matter more in the long run.” DO treat the job hunt like a job Once he’d graduated from Le Wagon, D’Alberti said, “I considered job-hunting my full-time job.” I checked all the possible networking events and meetup events that were going on in the city, and tried to attend all of them, every single day. I had a list of 10 different job boards that I would go and just refresh on a daily basis to see, ‘Okay, Is there anything new now?’ And, of course, I talked with recruiters. D’Alberti suggests beginning the search earlier than you think you need to. “I had started actively job hunting even before graduating [from Le Wagon],” he said. “That’s advice I give to everyone who joins the bootcamp. “Two weeks before graduation, you have one simple web application that you can show. You have a second one you’re working on in a team, and you have a third one that you know what it’s going to be about. So, already, there are three applications that you can showcase or you can use to explain your skills. 
I started going to meetups and to different events, talking with people, showing my CV.”

The process wasn’t easy, as most companies and recruiters weren’t interested in hiring for junior roles. But his intensive strategy paid off within a month, as D’Alberti landed three invitations to interview: one from a Japanese job board, one from a recruiter, and one from LinkedIn.

For Asakura, treating job hunting like a job was as much for his mental health as for his career. “The biggest challenge was dealing with impostor syndrome and feeling like I didn’t belong because I didn’t have a computer science degree,” he explained. “I also experienced burnout from pushing myself too hard.”

“To cope, I stuck to a structured routine. I went to the gym daily to decompress, kept a consistent study schedule as if I were working full-time, and continued applying for jobs even when it felt hopeless.”

At first, Asakura tried to apply to jobs strategically by tracking each application, tailoring his resume, and researching every role. “But after dozens of rejections,” he said, “I eventually switched to applying more broadly and sent out over one hundred applications. I also reached out to friends who were already software engineers and asked for direct referrals, but unfortunately, nothing worked out.”

Still, Asakura didn’t give up. He practiced interviews in both English and Japanese with his friends, and stayed in touch with recruiters. Most importantly, he kept developing and adding to his portfolio.

DO make use of online resources

“What ultimately helped me was staying active and visible,” Asakura said. “I consistently updated my GitHub, LinkedIn, and Wantedly profiles. Eventually, I received a message on Wantedly from the CTO of a company who was impressed with my portfolio, and that led to my first developer job.”

“If you have the time, certifications can also help validate your knowledge,” Asakura added, “especially in fields like cloud and AI. Some people may not realize this, but the rise of artificial intelligence is closely tied to the growth of cloud computing. Earning certifications such as AWS, Kubernetes, and others can give you a strong foundation and open new opportunities, especially as these technologies continue to evolve.”

Hendricks also heavily utilized LinkedIn and similar sites, though in a slightly different way. “I would also emphasize the importance of knowing how to use job-hunting sites like Indeed and LinkedIn,” he said. “I had the best luck when I used them primarily to do initial research into companies, then applied directly through the companies’ own websites, rather than through job postings that filter applicants before their resumés ever make it to the actual people looking to hire.”

In addition, Hendricks recommends studying coding interview prep tutorials from freeCodeCamp. Along with advice from his mentors and the online communities he joined, he credits those tutorials with helping him successfully receive offers after a long job hunt.

DO highlight experience with Japanese culture and language

Asakura felt that his experience in Japan, and knowledge of Japanese, gave him an edge. “I understand Japanese work culture [and] can speak the language,” Asakura said, “and as a Japanese national I didn’t require visa sponsorship. That made me a lower-risk hire for companies here.”

Hendricks also felt that his excellent Japanese made him a more attractive hire. While applying, he emphasized to companies that he could be a bridge to the global market and business overseas.
However, he also admitted that this strategy steered him towards applying to more domestic Japanese companies, which were also less likely to hire someone without a computer science degree. “So,” he said, “it sort of washed out.”

Wilson also put a lot of emphasis on his Japanese language skills, though from a slightly different angle.

A lot of interviewees typically don’t speak Japanese well . . . and a lot of companies here say that they’re very international, but if they want very good programmers, [those people] spend their lives programming, not studying English. So having somebody who can bridge the language gap on the IT side can be helpful.

DO lean into your other experience

Several career switchers discovered that their past experiences and skills, while not immediately relevant to their new career, still proved quite helpful in landing that first role—sometimes in very unexpected ways.

When Wilson was pitching his language skills to companies, he wasn’t talking about just Japanese–English translation. He also highlighted his prior experience in sales to suggest that he could help communicate with and educate non-technical audiences.

“Actually, to be a software engineer, there’s a lot of technical communication you have to do. I have worked with some incredible coders who are so good at the technical side and just don’t want to do the personal side. But for those of us who are not super-geniuses and can’t rely purely on our tech skills . . . there’s a lot of non-technical discussion that goes around building a product.”

This strategy, while eventually fruitful, didn’t earn Wilson a job right away. Initially, he applied to more than sixty companies over the course of three to four months. “I didn’t have any professional [coding] experience, so it was actually quite a rough time,” he said. “I interviewed all over the place. I was getting rejected all over town.”

The good news was, Wilson said, “I’m from Chicago. I don’t know what it is, but there are a lot of Chicagoans who work in Tokyo for whatever reason.” When he finally landed an interview, one of the three founders of the company was also from Chicago, giving them something in common. “We hit it off really well in the interview. I think that kind of gave me the edge to get the role, to be honest.”

Like Wilson, D’Alberti found that his previous work as a marketer helped him secure his first developer role—which was ironic, he felt, given that he’d partially chosen to switch careers because he hadn’t been able to find an English-language marketing job in Japan.

“I had my first interview with the CEO,” he told me, “and this was for a Japanese startup that was building chatbots, and they wanted to expand into the English market. So I talked with the CEO, and he was very excited to get to know me and sent me to talk with the CTO.”

The CTO, unfortunately, wasn’t interested in hiring a junior developer with no professional experience. “And I thought that was the end of it. But then I got called again by the CEO. I wanted to join for the engineering position, and he wanted to have me for my marketing experience.”

In the end we agreed that I would join in a 50-50 arrangement. I would do 50 percent of my job in marketing and going to conferences and talking to people, and 50 percent on the engineering side. I was like, ‘Okay, I’ll take that.’

This ended up working better than D’Alberti had expected, partially due to external circumstances.
“When COVID came, we couldn’t travel abroad, so most of the job I was doing in my marketing role I couldn’t perform anymore.

“So they sat me down and [said], ‘What are we going to do with you, since we cannot use you for marketing anymore?’ And I was like, ‘Well, I’m still a software developer. I could continue working in that role.’ And that actually allowed me to fully transition.”

DON’T make these mistakes

It was D’Alberti’s willingness to compromise on that first development role that led to his later success, so he would explicitly encourage other career-changers to avoid, in his own words, “being too picky.” This advice is based not just on his own experience, but also on his time working as a teaching assistant at Le Wagon.

“There were a couple of people who would be like, ‘Yeah, I’d really like to find a job and I’m not getting any interviews,’” he explained. “And then we’d go and ask, ‘Okay, how many companies are you applying to? What are you doing?’ But [they’d say] ‘No, see, [this company] doesn’t offer enough’ or ‘I don’t really like this company’ or ‘I’d like to do something else.’”

Those who would be really picky or wouldn’t put in the effort, they wouldn’t land a job. Those who were deadly serious about ‘I need to get a job as a software developer,’ they’d find one. It might not be a great job, it might not be at a good company, but it would be a good first start from which to move on afterwards.

Asakura also knew some other bootcamp graduates who struggled to find work. “A major reason was a lack of Japanese language skills,” he said.

Even for junior roles, many companies in Japan require at least conversational Japanese, especially domestic ones.

On the other hand, if you prioritize learning Japanese, that can give you an edge in entering the industry: “Many local companies are open to training junior developers, as long as they see your motivation and you can communicate effectively. International companies, on the other hand, often have stricter technical requirements and may pass on candidates without degrees or prior experience.”

Finally, Hendricks said that during his own job hunt, “Not living in Tokyo was a problem.” It was something that he was able to overcome via diligent digital networking, but he’d encourage career-changers to think seriously about their future job prospects before settling outside a major metropolis in Japan.

Their top advice

I asked each developer to share their number one piece of advice for career-changers.

D’Alberti wasn’t quite sure what to suggest, given recent changes in the tech market overall. “I don’t have clear advice to someone who’s trying to break into tech right now,” he said. “It might be good to wait and see what happens with the AI path. Might be good to actually learn how to code using AI, if that’s going to be the way to distinguish yourself from other junior developers. It might be to just abandon the idea of [being] a linear software developer in the traditional sense, and maybe look more into data science, if there are more opportunities.”

But assuming they still decide ‘Yes, I want to join, I love the idea of being a software developer and I want to go forward’ . . . my main suggestion is patience.

“It’s going to be tough,” he added.

By contrast, Hendricks and Wilson had the same suggestion: if you want to change careers, then go for it, full speed ahead. “Do it now, or as soon as you possibly can,” Hendricks stated adamantly.
His life has been so positively altered by discovering and pursuing his passion that his only regret is that he didn’t do it sooner.

Wilson said something strikingly similar. “Do it. Just do it. I went back and forth a lot,” he explained. “‘Oh, should I do this, it’s so much money, I already have a job’ . . . just rip the bandaid off. Just do it. You probably have a good reason.”

He pointed out that while starting over and looking for work is scary, it’s also possible that you’ll lose your current job anyway, at which point you’ll still be job hunting but in an industry you no longer even enjoy. “If you keep at it,” he said, “you can probably do it.”

“Not to talk down to developers,” he added, “but it’s not the hardest job in the world. You have to study and learn and be the kind of person who wants to sit at the computer and write code, but if you’re thinking about it, you’re probably the kind of person who can do it, and that also means you can probably weather the awful six months of job hunting.”

You only need to pass one job interview. You only need to get your foot in the door.

Asakura agreed with “just do it,” but with a twist. “Build in public,” he suggested. “Share your progress. Post on GitHub. Keep your LinkedIn active.”

Let people see your journey, because even small wins build momentum and credibility.

“To anyone learning to code right now,” Asakura added, “don’t get discouraged by setbacks or rejections. Focus on building, learning, and showing up every day. Your portfolio speaks louder than your past, and consistency will eventually open the door.”

If you want to read more how-tos and success stories around networking, working with recruitment agencies, writing your resume, etc., check out TokyoDev’s other articles. If you’d like to hear more about being a developer in Japan, we invite you to join the TokyoDev Discord, which has over 6,000 members as well as dedicated channels for resume review, job posts, life in Japan, and more.