(Last week's newsletter took too long and I'm way behind on Logic for Programmers revisions, so a short one this time.1)

In classical logic, two operators F/G are duals if F(x) = !G(!x). Three examples:

1. x || y is the same as !(!x && !y).
2. <>P ("P is possibly true") is the same as ![]!P ("not-P isn't definitely true").
3. some x in set: P(x) is the same as !(all x in set: !P(x)).

(1) is just a version of De Morgan's Law, which we regularly use to simplify boolean expressions. (2) is important in modal logic but has niche applications in software engineering, mostly in how it powers various formal methods.2 The really interesting one is (3), the "quantifier duals". We use lots of software tools to either find a value satisfying P or check that all values satisfy P. And by duality, any tool that does one can do the other, by seeing if it fails to find/check !P.

Some examples in the wild: Z3 is used to solve mathematical constraints, like "find x where f(x) >= 0". If I want to prove a property like...
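Here's a minimal sketch of that find/check duality with Z3's Python bindings (the z3-solver package); the concrete properties are my own illustration, not from the newsletter:

```python
from z3 import Int, Solver, Not, sat, unsat

x = Int("x")

# Direction 1: *find* a value satisfying P, here P(x) := x*x - 10 >= 0.
s = Solver()
s.add(x * x - 10 >= 0)
assert s.check() == sat      # a witness exists
print(s.model())             # e.g. [x = 4]

# Direction 2: *check* that all values satisfy P, here P(x) := x*x >= 0,
# via the dual: ask the solver to find a witness for !P.
# No witness (unsat) means every x satisfies P.
s = Solver()
s.add(Not(x * x >= 0))
assert s.check() == unsat    # no counterexample, so the property holds
```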


More from Computer Things

Sapir-Whorf does not apply to Programming Languages

This one is a hot mess but it's too late in the week to start over. Oh well! Someone recognized me at last week's Chipy and asked for my opinion on the Sapir-Whorf hypothesis in programming languages. I thought this was interesting enough to make a newsletter. First what it is, then why it looks like it applies, and then why it doesn't apply after all.

The Sapir-Whorf Hypothesis

"We dissect nature along lines laid down by our native language." — Whorf

To quote from a linguistics book I've read, the hypothesis is that "an individual's fundamental perception of reality is moulded by the language they speak." As a massive oversimplification: if English did not have a word for "rebellion", we would not be able to conceive of rebellion. This view, now called Linguistic Determinism, is mostly rejected by modern linguists.

The "weak" form of SWH is that the language we speak influences, but does not decide, our cognition. For example, Russian has distinct words for "light blue" and "dark blue", so Russian speakers can discriminate between "light blue" and "dark blue" shades faster than they can discriminate two "light blue" shades. English does not have distinct words, so English speakers discriminate both at the same speed. This linguistic relativism seems to have lots of empirical support in studies, but mostly with "small indicators". I don't think there's anything that convincingly shows linguistic relativism having effects on a societal level.1

The weak form of SWH for software would then be "the programming languages you know affect how you think about programs."

SWH in software

This seems like a natural fit, as different paradigms solve problems in different ways. Consider the hardest interview question ever, "given a list of integers, sum the even numbers". Here it is in four paradigms:

- Procedural: total = 0; foreach x in list {if IsEven(x) total += x}. You iterate over data with an algorithm.
- Functional: reduce(+, filter(IsEven, list), 0). You apply transformations to data to get a result.
- Array: + fold L * iseven L.2 In English: replace every element in L with 0 if odd and 1 if even, multiply the new array elementwise against L, and then sum the resulting array. It's like functional except everything is in terms of whole-array transformations.
- Logical: Something like sumeven(0, []). sumeven(X, [Y|L]) :- iseven(Y) -> sumeven(Z, L), X is Y + Z ; sumeven(X, L). You write a set of equations that express what it means for X to be the sum of the evens of L.
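To make the contrast concrete, here's a rough Python sketch of the same problem in procedural, functional, and emulated array styles (the function names are mine, purely for illustration):

```python
from functools import reduce

def is_even(x: int) -> bool:
    return x % 2 == 0

def sum_evens_procedural(nums: list[int]) -> int:
    # Iterate over the data with an algorithm.
    total = 0
    for x in nums:
        if is_even(x):
            total += x
    return total

def sum_evens_functional(nums: list[int]) -> int:
    # Apply transformations to the data: filter, then fold.
    return reduce(lambda acc, x: acc + x, filter(is_even, nums), 0)

def sum_evens_array(nums: list[int]) -> int:
    # Array-style, emulated with lists: build a 0/1 mask,
    # multiply it elementwise against the data, then sum.
    mask = [1 if is_even(x) else 0 for x in nums]
    return sum(m * x for m, x in zip(mask, nums))

nums = [1, 2, 3, 4]
assert (sum_evens_procedural(nums)
        == sum_evens_functional(nums)
        == sum_evens_array(nums)
        == 6)
```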
There are some similarities between how these paradigms approach the problem, but each is also unique. It's plausible that where a procedural programmer "sees" a for loop, a functional programmer "sees" a map and an array programmer "sees" a singular operator.

I also have personal experience with how a language changed the way I think. I use TLA+ to detect concurrency bugs in software designs. After doing this for several years, I've gotten much better at intuitively seeing race conditions in things even without writing a TLA+ spec. It's even leaked out into my day-to-day life. I see concurrency bugs everywhere. Phone tag is a race condition.

But I still don't think SWH is the right mental model to use, for one big reason: language is special. We think in language, we dream in language, there are huge parts of our brain dedicated to processing language. We don't use those parts of our brain to read code. SWH is so intriguing because it seems so unnatural: that the way we express thoughts changes the way we think thoughts. That I would be a different person if I were bilingual in Spanish, not because of the life experiences it would open up but because grammatical gender would change my brain. Compared to that, the idea that programming languages affect our brains is more natural and has a simpler explanation: it's the goddamned Tetris effect.

The Goddamned Tetris Effect

"The Tetris effect occurs when someone dedicates vast amounts of time, effort and concentration on an activity which thereby alters their thoughts, dreams, and other experiences not directly linked to said activity." — Wikipedia

Every skill does this. I'm a juggler, so every item I can see right now has a tiny metadata field of "how would this tumble if I threw it up". I teach professionally, so I'm always noticing good teaching examples everywhere. I spent years writing specs in TLA+ and watching the model checker throw concurrency errors in my face, so now race conditions have a visceral presence. Every skill does this. And to really develop a skill, you gotta practice.

This is where I think programming paradigms do something especially interesting that makes them feel more Sapir-Whorfy than, like, juggling. Some languages mix lots of different paradigms, like JavaScript or Rust. Others, like Haskell, really focus on excluding paradigms. If something is easy for you in procedural and hard in FP, in JS you could just lean on the procedural bits. In Haskell, too bad, you're learning how to do it the functional way.3 And that forces you to practice, which makes you see functional patterns everywhere. Tetris effect!

Anyway, this may all seem like quibbling: why does it matter whether we call it "Tetris effect" or "Sapir-Whorf", if our brains get rewired either way? For me, personally, it's because SWH sounds really special and unique, while the Tetris effect sounds mundane and commonplace. Which it is. But also because TE suggests it's not just programming languages that affect how we think about software, it's everything. Spending lots of time debugging, profiling, writing exploits, whatever, will change what you notice and what you think a program "is". And that's a way useful idea that shouldn't be restricted to just PLs.

(Then again, the Tetris effect might also be a bad analogy for what's going on here, because I think part of it is that it wears off after a while. Maybe it's just "building a mental model is good".)

I just realized all of this might have missed the point

Wait, are people actually using SWH to mean the weak form or the strong form? Like, that if a language doesn't make something possible, its users can't conceive of it being possible. I've been arguing against the weak form in software, but I think I've seen the strong form often too. Dammit. Well, it's already Thursday and far too late to rewrite the whole newsletter, so I'll just outline the problem with the strong form: we describe the capabilities of our programming languages with human language. In college I wrote a lot of crappy physics-lab C++, and one of my projects was filled with comments like "man I hate copying this triply-nested loop in 10 places with one-line changes, I wish I could put it in one function and just take the changing line as a parameter". Even though I hadn't yet encountered higher-order functions, I was still perfectly capable of expressing the idea. So if the strong SWH isn't true for human language, it's not true for programming languages either.

Systems Distributed talk now up!

Link here! Original abstract:

"Building correct distributed systems takes thinking outside the box, and the fastest way to do that is to think inside a different box. One different box is "formal methods", the discipline of mathematically verifying software and systems. Formal methods encourages unusual perspectives on systems, models that are also broadly useful to all software developers. In this talk we will learn two of the most important FM perspectives: the abstract specifications behind software systems, and the properties they are and aren't supposed to have."

The talk ended up evolving away from that abstract, but I like how it turned out!

1. There is one paper arguing that people who speak a language that doesn't have a "future tense" are more likely to save and eat healthy, but it is... extremely questionable. ↩
2. The original J is +/ (* (0 = 2&|)). Obligatory Notation as a Tool of Thought reference. ↩
3. Though if it's too hard for you, that's why languages have escape hatches. ↩

Software books I wish I could read

New Logic for Programmers Release! v0.11 is now available! This is over 20% longer than v0.10, with a new chapter on code proofs, three chapter overhauls, and more! Full release notes here.

Software books I wish I could read

I'm writing Logic for Programmers because it's a book I wanted to have ten years ago. I had to learn everything in it the hard way, which is why I'm ensuring that everybody else can learn it the easy way.

Books occupy a sort of weird niche in software. We're great at sharing information via blogs, git repos, and entire websites. These have many benefits over books: they're free, they're easily accessible, they can be updated quickly, they can even be interactive. But no blog post has influenced me as profoundly as Data and Reality or Making Software. There is no blog or talk about debugging as good as the Debugging book. It might not be anything deeper than "people spend more time per word on writing books than blog posts". I dunno.

So here are some other books I wish I could read. I don't think any of them exist yet, but it's a big world out there. Also, while they're probably best as books, a website or a series of blog posts would be OK too.

Everything about Configurations

The whole topic of how we configure software, whether by CLI flags, environment variables, or JSON/YAML/XML/Dhall files. What causes the configuration complexity clock? How do we distinguish between basic, advanced, and developer-only configuration options? When should we disallow configuration? How do we test all possible configurations for correctness? Why do so many widespread outages trace back to misconfiguration, and how do we prevent them?

I also want the same for plugin systems: manifests, permissions, common APIs and architectures, etc. Configuration management is more universal, though, since everybody either uses software with configuration or has made software with configuration.

The Big Book of Complicated Data Schemas

I guess this would kind of be like Schema.org, except with a lot more on the "why" and not just the "what". Why is it important for the Volcano model to have a "smokingAllowed" field?1 I'd see this less as "here's your guide to putting volcanos in your database" and more "here are recurring motifs in modeling interesting domains", to help a person see sources of complexity in their own domain. Does something crop up if the references can form a cycle? If a relationship needs to be strictly temporary, or a reference can change type?

Bonus: path dependence in data models, where an additional requirement leads to a vastly different ideal data model that a company couldn't adopt because they'd already built the old model. (This has got to exist, right? Business modeling is a big enough domain that this must exist. Maybe The Essence of Software touches on this? Man, I feel bad I haven't read that yet.)

Computer Science for Software Engineers

Yes, I checked: this book does not exist (though maybe this is the same thing). I don't have any formal software education; everything I know was either self-taught or learned on the job. But it's way easier to learn software engineering that way than computer science. And I bet there are a lot of other engineers in the same boat. This book wouldn't have to be comprehensive or instructive: just enough about each topic to understand why it's an area of study and to appreciate how research in it eventually finds its way into practice.

MISU Patterns

MISU, or "Make Illegal States Unrepresentable", is the idea of designing system invariants into the structure of your data. For example, if a Contact needs at least one of email or phone to be non-null, make it a sum type over EmailContact, PhoneContact, and EmailPhoneContact (from this post).
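Here's a minimal Python sketch of that Contact example, assuming a tagged-union encoding with dataclasses and Python 3.10+ pattern matching (the variant names follow the post; the rest is my own illustration):

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class EmailContact:
    email: str

@dataclass(frozen=True)
class PhoneContact:
    phone: str

@dataclass(frozen=True)
class EmailPhoneContact:
    email: str
    phone: str

# "No email AND no phone" is now unrepresentable: there is no variant for it.
Contact = Union[EmailContact, PhoneContact, EmailPhoneContact]

def reach(c: Contact) -> str:
    match c:
        case EmailPhoneContact(email=e, phone=p):
            return f"{e} or {p}"
        case EmailContact(email=e):
            return e
        case PhoneContact(phone=p):
            return p

print(reach(EmailContact("hi@example.com")))  # hi@example.com
```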
MISU is great. Most MISU in the wild looks very different from that, though, because the concept of MISU is so broad that there are lots of different ways to achieve it. And that means there are "patterns": smart constructors, product types, properly using sets, newtypes to some degree, etc. Some of them are specific to typed FP, while others can be used even in untyped languages. Someone oughta make a pattern book. My one request would be to not give the patterns cutesy names. Do something like the Aarne–Thompson–Uther Index, where items are given names like "Recognition by manner of throwing cakes of different weights into faces of old uncles". Names can come later.

The Tools of '25

Not something I'd read, but something to recommend to junior engineers. Starting out, it's easy to think the only bit that matters is the language or framework, and not realize the enormous amount of surrounding tooling you'll have to learn. This book would cover the basics of tools that enough developers will probably use at some point: git, VSCode, very basic Unix and bash, curl. Maybe the general concepts of tools that appear in every ecosystem, like package managers, build tools, task runners. That might be easier if we specialized this to one particular domain, like webdev or data science. Ideally the book would only have to be updated every five years or so. No LLM stuff, because I don't expect that tooling to be stable through 2026, to say nothing of 2030.

A History of Obsolete Optimizations

Probably better as a really long blog series. Each chapter would be broken up into two parts:

1. A deep dive into a brilliant, elegant, insightful historical optimization designed to work within the constraints of that era's computing technology.
2. What we started doing instead, once we had more compute/network/storage available.

c.f. A Spellchecker Used to Be a Major Feat of Software Engineering. Bonus topics would be brilliance obsoleted by standardization (like what people did before git and JSON were universal), optimizations we do today that may not stand the test of time, and optimizations from the past that did.

Sphinx Internals

I need this. I've spent so much goddamn time digging around in Sphinx and docutils source code I'm gonna throw up.

Systems Distributed Talk Today!

The online premiere's at noon central / 5 PM UTC, here! I'll be hanging out to answer questions and be awkward. You ever watch a recording of your own talk? It's real uncomfortable!

1. In this case because it's a field on one of Volcano's supertypes. I guess schemas gotta follow LSP too. ↩

2000 words about arrays and tables

I'm way too discombobulated from getting next month's release of Logic for Programmers ready, so I'm pulling an idea from the slush pile. Basically, I wanted to come up with a mental model of arrays as a concept that explained APL-style multidimensional arrays and tables, but also why there aren't multitables.

So, arrays. In all languages they are basically the same: they map a sequence of numbers (I'll use 1..N)1 to homogeneous values (values of a single type). This is in contrast to the other two foundational types: associative arrays (which map an arbitrary type to homogeneous values) and structs (which map a fixed set of keys to heterogeneous values). Arrays appear in PLs earlier than the other two, possibly because they have the simplest implementation and the most obvious application to scientific computing. The OG FORTRAN had arrays.

I'm interested in two structural extensions to arrays. The first, found in languages like nushell and frameworks like Pandas, is the table. Tables have string keys like a struct and indexes like an array. Each row is a struct, so you can get "all values in this column" or "all values for this row". They're heavily used in databases and data science.

The other extension is the N-dimensional array, mostly seen in APLs like Dyalog and J. Think of this like arrays-of-arrays(-of-arrays), except all arrays at the same depth have the same length. So [[1,2,3],[4]] is not a 2D array, but [[1,2,3],[4,5,6]] is. This means that N-arrays can be queried on any axis:

   ]x =: i. 3 3
0 1 2
3 4 5
6 7 8
   0 { x       NB. first row
0 1 2
   0 {"1 x     NB. first column
0 3 6

So, I've had some ideas on a conceptual model of arrays that explains all of these variations and possibly predicts new ones. I wrote up my notes and did the bare minimum of editing and polishing. Somehow it ended up being 2000 words.

1-dimensional arrays

A one-dimensional array is a function over 1..N for some N. To be clear, these are math functions, not programming functions. Programming functions take values of a type and perform computations on them. Math functions take values of a fixed set and return values of another set. So the array [a, b, c, d] can be represented by the function (1 -> a ++ 2 -> b ++ 3 -> c ++ 4 -> d). Let's write the set of all four-element character arrays as 1..4 -> char; 1..4 is the function's domain. The set of all character arrays is the empty array + the functions with domain 1..1 + the functions with domain 1..2 + ... Let's call this set Array[Char]. Our compilers can enforce that a type belongs to Array[Char], but some operations care about the more specific type, like matrix multiplication. This is either checked with the runtime type or, in exotic enough languages, with static dependent types. (This is actually how TLA+ does things: the basic collection types are functions and sets, and a function with domain 1..N is a sequence.)

2-dimensional arrays

Now take the 3x4 matrix

   i. 3 4
0 1  2  3
4 5  6  7
8 9 10 11

There are two equally valid ways to represent the array function:

1. A function that takes a row and a column and returns the value at that index, so it would look like f(r: 1..3, c: 1..4) -> Int.
2. A function that takes a row and returns that row as an array, aka another function: f(r: 1..3) -> g(c: 1..4) -> Int.2

Man, (2) looks a lot like currying! In Haskell, functions can only have one parameter. If you write (+) 6 10, (+) 6 first returns a new function f y = y + 6, and then applies f 10 to get 16. So (+) has the type signature Int -> Int -> Int: it's a function that takes an Int and returns a function of type Int -> Int.3 Similarly, our 2D array can be represented as an array function that returns array functions: it has type 1..3 -> 1..4 -> Int, meaning it takes a row index and returns 1..4 -> Int, aka a single array. (This differs from a conventional array-of-arrays because it forces all of the subarrays to have the same domain, aka the same length. If we wanted to permit ragged arrays, we would instead have the type 1..3 -> Array[Int].)

Why is this useful? A couple of reasons. First of all, we can apply function transformations to arrays, like "combinators". For example, we can flip any function of type a -> b -> c into a function of type b -> a -> c. So given a function that takes rows and returns columns, we can produce one that takes columns and returns rows. That's just matrix transposition!
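Here's a rough sketch of this "arrays as curried functions" view in plain Python, with transposition as the flip combinator (the names are mine, purely for illustration; Python is 0-indexed, unlike the post):

```python
from typing import Callable

# A 2D array as a curried function: row index -> (column index -> value).
Matrix = Callable[[int], Callable[[int], int]]

def from_rows(rows: list[list[int]]) -> Matrix:
    return lambda r: lambda c: rows[r][c]

def flip(f: Matrix) -> Matrix:
    # The combinator a -> b -> c  =>  b -> a -> c.
    return lambda c: lambda r: f(r)(c)

m = from_rows([[0, 1, 2, 3],
               [4, 5, 6, 7],
               [8, 9, 10, 11]])

assert m(1)(2) == 6     # row 1, column 2
t = flip(m)             # flipping the arguments is transposition
assert t(2)(1) == 6     # column 2, row 1
```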
Second, we can extend this to any number of dimensions: a three-dimensional array is one with type 1..M -> 1..N -> 1..O -> V. We can still use function transformations to rearrange the array along any ordering of axes. Speaking of dimensions:

What are dimensions, anyway

Okay, so now imagine we have a Row × Col grid of pixels, where each pixel is a struct of type Pixel(R: int, G: int, B: int). So the array is

Row -> Col -> Pixel

But we can also represent the Pixel struct with a function: Pixel(R: 0, G: 0, B: 255) is the function where f(R) = 0, f(G) = 0, f(B) = 255, making it a function of type {R, G, B} -> Int. So the array is actually the function

Row -> Col -> {R, G, B} -> Int

And then we can rearrange the parameters of the function like this:

{R, G, B} -> Row -> Col -> Int

Even though the set {R, G, B} is not of the form 1..N, this clearly has a real meaning: f[R] is the function mapping each coordinate to that coordinate's red value. What about Row -> {R, G, B} -> Col -> Int? That's, for each row, the 3 × Col array mapping each color to that row's intensities.

Really, any finite set can be a "dimension". Recording the monitor over a span of time? Frame -> Row -> Col -> Color -> Int. Recording a bunch of computers over some time? Computer -> Frame -> Row …. This is pretty common in constraint satisfaction! Like if your conference is trying to assign talks to talk slots, your array might be of type (Day, Time, Room) -> Talk, where Day/Time/Room are enumerations.

An implementation constraint is that most programming languages only allow integer indexes, so we have to replace Rooms and Colors with numerical enumerations over the set. As long as the set is finite this is always possible, and for struct-functions we can always choose the indexing from the lexicographic ordering of the keys. But we lose type safety.

Why tables are different

One more example: Day -> Hour -> Airport(name: str, flights: int, revenue: USD). Can we turn the struct into a dimension like before? In this case, no. We were able to make Color an axis because we could turn Pixel into a Color -> Int function, and we could only do that because all of the fields of the struct had the same type. This time the fields have different types, so we can't convert {name, flights, revenue} into an axis.4 One thing we can do is convert it to three separate functions:

airport: Day -> Hour -> Str
flights: Day -> Hour -> Int
revenue: Day -> Hour -> USD

But we want to keep all of the data in one place. That's where tables come in: an array-of-structs is isomorphic to a struct-of-arrays:

AirportColumns(
  airport: Day -> Hour -> Str,
  flights: Day -> Hour -> Int,
  revenue: Day -> Hour -> USD,
)

The table is a sort of both representations simultaneously. If this were a pandas dataframe, df["airport"] would get the airport column, while df.loc[day1] would get the first day's data.
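For instance, a quick pandas sketch of that dual access (the data here is made up):

```python
import pandas as pd

# One row per day: an array of structs from one angle,
# a struct of column arrays from the other.
df = pd.DataFrame(
    {
        "airport": ["ORD", "MDW"],
        "flights": [120, 80],
        "revenue": [1_500_000.0, 900_000.0],
    },
    index=["day1", "day2"],
)

print(df["airport"])     # a homogeneous column: Day -> Str
print(df.loc["day1"])    # one day's row: a struct of mixed types
```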
I don't think many table implementations support more than one axis dimension, but there's no reason they couldn't. These are also possible transforms:

Hour -> NamesAreHard(
  airport: Day -> Str,
  flights: Day -> Int,
  revenue: Day -> USD,
)

Day -> Whatever(
  airport: Hour -> Str,
  flights: Hour -> Int,
  revenue: Hour -> USD,
)

In my mental model, the heterogeneous struct acts as a "block" in the array. We can't remove it; we can only push an index into the fields or pull a shared column out. But there's no way to convert a heterogeneous table into an array.

Actually, there is a terrible way

Most languages have unions or sum types that let us say "this is a string OR an integer". So we can make our airport data Day -> Hour -> AirportKey -> Int | Str | USD. Heck, might as well just say it's Day -> Hour -> AirportKey -> Any. But would anybody really be mad enough to use that in practice? Oh wait, J does exactly that. J has an opaque datatype called a "box". A "table" is a function Dim1 -> Dim2 -> Box. You can see some examples of what that looks like here.

Misc Thoughts and Questions

The heterogeneity barrier seems like it explains why we don't see multiple axes of table columns, while we do see multiple axes of array dimensions. But is that actually why? Is there a system out there that does have multiple columnar axes?

The array x = [[a, b, a], [b, b, b]] has type 1..2 -> 1..3 -> {a, b}. Can we rearrange it to 1..2 -> {a, b} -> 1..3? No. But we can rearrange it to 1..2 -> {a, b} -> PowerSet(1..3), which maps rows and characters to the columns holding that character: [(a -> {1, 3} ++ b -> {2}), (a -> {} ++ b -> {1, 2, 3})]. We can also transform Row -> PowerSet(Col) into Row -> Col -> Bool, aka a boolean matrix. This makes sense to me, as both forms are means of representing directed graphs.

Are other function combinators useful for thinking about arrays? Does this model cover pivot tables? Can we extend it to relational data with multiple tables?

Systems Distributed Talk (will be) Online

The premiere will be August 6 at 12 CST, here! I'll be there to answer questions / mock my own performance / generally make a fool of myself.

1. Sacrilege! But it turns out in this context it's easier to use 1-indexing than 0-indexing. In the years since I wrote that article, I've settled on "each indexing choice matches different kinds of mathematical work", so mathematicians and computer scientists are best served by being able to choose their index. But software engineers need consistency, and 0-indexing is overall a net better consistency pick. ↩
2. This is right-associative: a -> b -> c means a -> (b -> c), not (a -> b) -> c. (1..3 -> 1..4) -> Int would be the associative array that maps length-3 arrays to integers. ↩
3. Technically it has type Num a => a -> a -> a, since (+) works on floats too. ↩
4. Notice that if each Airport had a unique name, we could pull it out into AirportName -> Airport(flights, revenue), but we're still stuck with two different value types. ↩

Programming Language Escape Hatches

The excellent-but-defunct blog Programming in the 21st Century defines "puzzle languages" as languages where part of the appeal is in figuring out how to express a program idiomatically, like a puzzle. As examples, he lists Haskell, Erlang, and J. All puzzle languages, the author says, have an "escape" out of the puzzle model that is pragmatic but stigmatized. But many mainstream languages have escape hatches, too.

Languages have a lot of properties. One of these properties is the language's capabilities: roughly, the set of things you can do in the language. Capability is desirable but comes into conflict with a lot of other desirable properties, like simplicity or efficiency. In particular, reducing the capability of a language means that all remaining programs share more in common, meaning there are more assumptions the compiler and programmer can make ("tractability"). Assumptions are generally used to reason about correctness, but they can also be about things like optimization: J's assumption that everything is an array leads to high-performance "special combinations".

Rust is the most famous example of a mainstream language that trades capability for tractability.1 Rust has a lot of rules designed to prevent common memory errors, like keeping a reference to deallocated memory or modifying memory while something else is reading it. As a consequence, there are a lot of things that cannot be done in (safe) Rust, like interfacing with an external C function (as it doesn't have these guarantees). To do this, you need to use unsafe Rust, which lets you do additional things forbidden by safe Rust, such as dereference a raw pointer. Everybody tells you not to use unsafe unless you absolutely 100% know what you're doing, and possibly not even then. Sounds like an escape hatch to me!

To extrapolate, an escape hatch is a feature (either in the language itself or a particular implementation) that deliberately breaks core assumptions about the language in order to add capabilities. This explains both Rust and most of the so-called "puzzle languages": they need escape hatches because they have very strong conceptual models of the language, which leads to lots of assumptions about programs. But plenty of "kitchen sink" mainstream languages have escape hatches, too:

- Some compilers let C++ code embed inline assembly.
- Languages built on .NET or the JVM have some sort of interop with C# or Java, and many of those languages make assumptions about programs that C#/Java do not.
- The SQL language has stored procedures as an escape hatch, and vendors create a second escape hatch of user-defined functions.
- Ruby lets you bypass any form of encapsulation with send.
- Frameworks have escape hatches, too! React has an entire page on them.

(Does eval in interpreted languages count as an escape hatch? It feels different, but it does add a lot of capability. Maybe it doesn't "break assumptions" in the same way?)
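Python has a similar hatch to Ruby's send: its "private" attributes are only protected by name mangling, and reflection walks right past them. A small sketch (my own example, not from the post):

```python
class Account:
    def __init__(self) -> None:
        self.__balance = 0   # "private": mangled to _Account__balance

    def balance(self) -> int:
        return self.__balance

acct = Account()
# The normal assumption: outside code can't touch __balance directly.
#   acct.__balance  ->  AttributeError
# The escape hatch: the mangled name (or setattr) bypasses encapsulation.
setattr(acct, "_Account__balance", 1_000_000)
assert acct.balance() == 1_000_000
```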
The problem with escape hatches

In all languages with escape hatches, the rule is "use this as carefully and sparingly as possible", to the point where a messy solution without an escape hatch is preferable to a clean solution with one. Breaking a core assumption is a big deal! If the language is operating as if it's still true, it's going to do incorrect things.

I recently had this problem in a TLA+ contract. TLA+ is a language for modeling complicated systems, and it assumes that the model is a self-contained universe. The client wanted to use TLA+ to test a real system: the model checker would send commands to a test device and check that the next states were the same. This is straightforward to set up with the IOExec escape hatch.2 But the model checker assumed that state exploration was pure and that it could skip around the state space randomly, meaning it would do things like set x = 10, then skip to set x = 1, then skip back to inc x; assert x == 11. Oops! We eventually found workarounds, but it took a lot of clever tricks to pull off. I'll probably write up the technique when I'm less busy with The Book.

The other problem with escape hatches is that the rest of the language is designed around not having those capabilities, meaning it can't support the feature as well as a language designed for them from the start. Even if your escape hatch code is clean, it might not cleanly integrate with the rest of your code. This is why people complain about unsafe Rust so often.

1. It should be noted, though, that all languages with automatic memory management are trading capability for tractability, too. If you can't dereference pointers, you can't dereference null pointers. ↩
2. From the Community Modules (which come by default with the VSCode extension). ↩


More in programming

strongly typed?

What does it mean when someone writes that a programming language is “strongly typed”? I’ve known for many years that “strongly typed” is a poorly-defined term. Recently I was prompted on Lobsters to explain why it’s hard to understand what someone means when they use the phrase. I came up with more than five meanings!

how strong?

The various meanings of “strongly typed” are not clearly yes-or-no. Some developers like to argue that these kinds of integrity checks must be completely perfect or else they are entirely worthless. Charitably (it took me a while to think of a polite way to phrase this), that betrays a lack of engineering maturity. Software engineers, like any engineers, have to create working systems from imperfect materials. To do so, we must understand what guarantees we can rely on, where our mistakes can be caught early, where we need to establish processes to catch mistakes, how we can control the consequences of our mistakes, and how to remediate when something breaks because of a mistake that wasn’t caught.

strong how?

So, what are the ways that a programming language can be strongly or weakly typed? In what ways are real programming languages “mid”?

- Statically typed as opposed to dynamically typed? Many languages have a mixture of the two, such as run-time polymorphism in OO languages (e.g. Java), or gradual type systems for dynamic languages (e.g. TypeScript).

- Sound static type system? It’s common for static type systems to be deliberately unsound, such as covariant subtyping in arrays or functions (Java, again). Gradual type systems might have gaping holes for usability reasons (TypeScript, again). And some type systems might be unsound due to bugs. (There are a few of these in Rust.) Unsoundness isn’t a disaster, if a programmer won’t cause it without being aware of the risk. For example: in Lean you can write “sorry” as a kind of “to do” annotation that deliberately breaks soundness; and Idris 2 has type-in-type, so it accepts Girard’s paradox.

- Type safe at run time? Most languages have facilities for deliberately bypassing type safety, with an “unsafe” library module or “unsafe” language features, or things that are harder to spot. It can be more or less difficult to break type safety in ways that the programmer or language designer did not intend. JavaScript and Lua are very safe, treating type safety failures as security vulnerabilities. Java and Rust have controlled unsafety. In C everything is unsafe.

- Fewer weird implicit coercions? There isn’t a total order here: for instance, C has implicit bool/int coercions, Rust does not; Rust has implicit deref, C does not. There’s a huge range in how much coercions are a convenience or a source of bugs. For example, the PHP and JavaScript == operators are made entirely of WAT, but at least you can use === instead.

- How fancy is the type system? To what degree can you model properties of your program as types? Is it convenient to parse, not validate? Is the Curry-Howard correspondence something you can put into practice? Or is it only capable of describing the physical layout of data?

There are probably other meanings. For example, I have seen “strongly typed” used to mean that runtime representations are abstract (you can’t see the underlying bytes); or, in the past, it sometimes meant a language with a heavy type annotation burden (as a mischaracterization of static type checking).
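One of these distinctions is easy to see from a REPL. Python, for instance, is dynamically typed but fairly strict about coercions, where JavaScript's + happily coerces ("1" + 1 evaluates to "11" there). A small illustration (my example, not the author's):

```python
# Python rejects the str/int mix outright...
try:
    "1" + 1
except TypeError as e:
    print(e)              # can only concatenate str (not "int") to str

# ...yet it still has implicit coercions of its own:
assert 1 + 1.5 == 2.5     # int widens to float
assert True + True == 2   # bool is an int subtype
```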
how to type

So, when you write (with your keyboard) the phrase “strongly typed”, delete it, and come up with a more precise description of what you really mean. The desiderata above are partly overlapping, sometimes partly orthogonal. Some of them you might care about, some of them not. But please try to communicate where you draw the line and how fuzzy your line is.

Omarchy 2.0

Omarchy 2.0 was released on Linux's 34th birthday as a gift to perhaps the greatest open-source project the world has ever known. Not only does Linux run 95% of all servers on the web and billions of devices as an embedded OS, it also turns out to be an incredible desktop environment! It's crazy that it took me more than thirty years to realize this, but while I spent time in Apple's walled garden, the free software alternative simply grew better, stronger, and faster. The Linux of 2025 is not the Linux of the 90s or the 00s or even the 10s. It's shockingly more polished, capable, and beautiful.

It's been an absolute honor to celebrate Linux with the making of Omarchy, the new Linux distribution that I've spent the last few months building on top of Arch and Hyprland. What began as a post-install script has turned into a full-blown ISO, a dedicated package repository, and a flourishing community of thousands of enthusiasts all collaborating on making it better. It's been improving rapidly, with over twenty releases since the premiere in late June, but this version 2.0 update is the biggest one yet.

If you've been curious about giving Linux a try, you're not afraid of an operating system that asks you to level up and learn a little, and you want to see what a totally different computing experience can look and feel like, I invite you to give it a go. Here's a full tour of Omarchy 2.0.

Dissecting the Apple M1 GPU, the end

In 2020, Apple released the M1 with a custom GPU. We got to work reverse-engineering the hardware and porting Linux. Today, you can run Linux on a range of M1 and M2 Macs, with almost all hardware working: wireless, audio, and full graphics acceleration.

Our story begins in December 2020, when Hector Martin kicked off Asahi Linux. I was working at Collabora on Panfrost, the open source Mesa3D driver for Arm Mali GPUs. Hector put out a public call for guidance from upstream open source maintainers, and I bit. I just intended to give some quick pointers. Instead, I bought myself a Christmas present and got to work. In between my university coursework and Collabora work, I poked at the shader instruction set. One thing led to another. Within a few weeks, I drew a triangle. In 3D graphics, once you can draw a triangle, you can do anything.

Pretty soon, I started work on a shader compiler. After my final exams that semester, I took a few days off from Collabora to bring up an OpenGL driver capable of spinning gears with my new compiler. Over the next year, I kept reverse-engineering and improving the driver until it could run 3D games on macOS. Meanwhile, Asahi Lina wrote a kernel driver for the Apple GPU. My userspace OpenGL driver ran on macOS, leaving her kernel driver as the missing piece for an open source graphics stack. In December 2022, we shipped graphics acceleration in Asahi Linux.

In January 2023, I started my final semester in my Computer Science program at the University of Toronto. For years I had juggled my courses with my part-time job and my hobby driver. I faced the same question as my peers: what will I do after graduation?

Maybe Panfrost? I started reverse-engineering the Mali Midgard GPU back in 2017, when I was still in high school. That led to an internship at Collabora in 2019 once I graduated, turning into my job throughout four years of university. During that time, Panfrost grew from a kid’s pet project based on black-box reverse-engineering to a professional driver engineered by a team with Arm’s backing and hardware documentation. I did what I set out to do, and the project succeeded beyond my dreams. It was time to move on.

What did I want to do next? Finish what I started with the M1. Ship a great driver.

- Bring full, conformant OpenGL drivers to the M1. Apple’s drivers are not conformant, but we should strive for the industry standard.
- Bring full, conformant Vulkan to Apple platforms, disproving the myth that Vulkan isn’t suitable for Apple hardware.
- Bring Proton gaming to Asahi Linux. Thanks to Valve’s work for the Steam Deck, Windows games can run better on Linux than even on Windows. Why not reap those benefits on the M1?

Panfrost was my challenge until we “won”. My next challenge? Gaming on Linux on M1.

Once I finished my coursework, I started full-time on gaming on Linux. Within a month, we shipped OpenGL 3.1 on Asahi Linux. A few weeks later, we passed official conformance for OpenGL ES 3.1. That put us at feature parity with Panfrost. I wanted to go further.

OpenGL (ES) 3.2 requires geometry shaders, a legacy feature not supported by either Arm or Apple hardware. The proprietary OpenGL drivers emulate geometry shaders with compute, but there was no open source prior art to borrow. Even though multiple Mesa drivers need geometry/tessellation emulation, nobody had done the work to get there. My early progress on OpenGL was fast thanks to the mature common code in Mesa. It was time to pay it forward.
Over the rest of the year, I implemented geometry/tessellation shader emulation. And also the rest of the owl. In January 2024, I passed conformance for the full OpenGL 4.6 specification, finishing up OpenGL.

Vulkan wasn’t too bad, either. I polished the OpenGL driver for a few months, but once I started typing a Vulkan driver, I passed 1.3 conformance in a few weeks. What remained was wiring up the geometry/tessellation emulation to my shiny new Vulkan driver, since those are required for Direct3D. Et voilà, Proton games.

Along the way, Karol Herbst passed OpenCL 3.0 conformance on the M1, running my compiler atop his “rusticl” frontend. Meanwhile, when the Vulkan 1.4 specification was published, we were ready and shipped a conformant implementation on the same day. After that, I implemented sparse texture support, unlocking Direct3D 12 via Proton.

…Now what? Ship a great driver? Check. Conformant OpenGL 4.6, OpenGL ES 3.2, and OpenCL 3.0? Check. Conformant Vulkan 1.4? Check. Proton gaming? Check. That’s a wrap. We’ve succeeded beyond my dreams. The challenges I chased, I have tackled. The drivers are fully upstream in Mesa. Performance isn’t too bad. With the Vulkan-on-Apple myth busted, conformant Vulkan is now coming to macOS via LunarG’s KosmicKrisp project, building on my work.

Satisfied, I am now stepping away from the Apple ecosystem. My friends in the Asahi Linux orbit will carry the torch from here. As for me? Onto the next challenge!

Changing Careers to Software Development in Japan

TokyoDev has published a number of different guides on coming to Japan to work as a software developer. But what if you’re already employed in another industry in Japan, and are considering changing your career to software development? I interviewed four people who became developers after they moved to Japan, for their advice and personal experiences on: Why they chose development How they switched careers How they successfully found their first jobs What mistakes they made in the job hunt The most important advice they give to others Why switch to software development? A lifelong goal For Yuta Asakura, a career in software was the dream all along. “I’ve always wanted to work with computers,” he said, “but due to financial difficulties, I couldn’t pursue a degree in computer science. I had to start working early to support my single mother. As the eldest child, I focused on helping my younger brother complete his education.” To support his family, Asakura worked in construction for eight years, eventually becoming a foreman in Yokohama. Meanwhile, his brother graduated, and became a software engineer after joining the Le Wagon Tokyo bootcamp. About a year before his brother graduated, Asakura began to delve back into development. “I had already begun self-studying in my free time by taking online courses and building small projects,” he explained. “ I quickly became hooked by how fun and empowering it was to learn, apply, and build. It wasn’t always easy. There were moments I wanted to give up, but the more I learned, the more interesting things I could create. That feeling kept me going.” What truly inspired me was the idea of creating something from nothing. Coming from a construction background, I was used to building things physically. But I wanted to create things that were digital, scalable, borderless, and meaningful to others. An unexpected passion As Andrew Wilson put it, “Wee little Andrew had a very digital childhood,” full of games and computer time. Rather than pursuing tech, however, he majored in Japanese and moved to Japan in 2012, where he initially worked as a language teacher and recruiter before settling into sales. Wilson soon discovered that sales wasn’t really his strong suit. “At the time I was selling three different enterprise software solutions.” So I had to have a fairly deep understanding of that software from a user perspective, and in the course of learning about these products and giving technical demonstrations, I realized that I liked doing that bit of my job way more than I liked actually trying to sell these things. Around that time, he also realized he didn’t want to manually digitize the many business cards he always collected during sales meetings: “That’s boring, and I’m lazy.” So instead, he found a business card-scanning app, made a spreadsheet to contain the data, automated the whole process, and shared it internally within his company. His manager approached him soon afterwards, saying, “You built this? We were looking to hire someone to do this!” Encouraged, Wilson continued to develop it. “As soon as I was done with work,” he explained with a laugh, “I was like, ‘Oh boy, I can work on my spreadsheet!’” As a result, Wilson came to the conclusion that he really should switch careers and pursue his passion for programming. Similarly to Wilson, Malcolm Hendricks initially focused on Japanese. He came to Japan as an exchange student in 2002, and traveled to Japan several more times before finally relocating in 2011. 
Though his original role was as a language teacher, he soon found a job at a Japanese publishing company, where he worked as an editor and writer for seven years. However, he felt burned out on the work, and also that he was in danger of stagnating; since he isn’t Japanese, the road to promotion was a difficult one. He started following some YouTube tutorials on web development, and eventually began creating websites for his friends. Along the way, he fell in love with development, on both a practical and a philosophical level. “There’s another saying I’ve heard here and there—I don’t know exactly who to attribute it to—but the essence of it goes that ‘Computer science is just teaching rocks how to think,’” Hendricks said. “My mentor Bob has been guiding me through the very fundamentals of computer science, down to binary calculations, Boolean logic, gate theory, and von Neumann architecture. He explains the fine minutia and often concludes with, ‘That’s how it works. There’s no magic to it.’ “Meanwhile, in the back of my mind, I can’t help but be mystified at the things we are all now able to do, such as having video calls from completely different parts of the world, or even me here typing on squares of plastic to make letters appear on a screen that has its own source of light inside it. . . . [It] sounds like the highest of high-fantasy wizardry to me.” I’ve always had a love for technomancy, but I never figured I might one day get the chance to be a technomancer myself. And I love it! We have the ability to create nigh unto anything in the digital world. A practical solution When Paulo D’Alberti moved to Japan in 2019, he only spoke a little Japanese, which limited his employment prospects. With his prior business experience, he landed an online marketing role for a blockchain startup, but eventually exited the company to pursue a more stable work environment. “But when I decided to leave the company,” D’Alberti said, “my Japanese was still not good enough to do business. So I was at a crossroads.” Do I decide to join a full-time Japanese language course, aiming to get JLPT N2 or the equivalent, and find a job on the business side? . . . Or do I say screw it and go for a complete career change and get skills in something more technical, that would allow me to carry those skills [with me] even if I were to move again to another country?” The portability of a career in development was a major plus for D’Alberti. “That was one of the big reasons. Another consideration was that, looking at the boot camps that were available, the promise was ‘Yeah, we’ll teach you to be a software developer in nine weeks or two months.’ That was a much shorter lead time than getting from JLPT N4 to N2. I definitely wouldn’t be able to do that in two months.” Since D’Alberti had family obligations, the timeline for his career switch was crucial. “We still had family costs and rent and groceries and all of that. I needed to find a job as soon as possible. I actually already at that point had been unsuccessfully job hunting for two months. So that was like, ‘Okay, the savings are winding up, and we are running out of options. I need to make a decision and make it fast.’” How to switch careers Method 1: Software Development Bootcamp Under pressure to find new employment quickly, D’Alberti decided to enter the Le Wagon Coding Bootcamp in Tokyo. Originally, he wavered between Le Wagon and Code Chrysalis, which has since ended its bootcamp programs. “I went with Le Wagon for two reasons,” he explained. 
“There were some scheduling reasons. . . . But the main reason was that Code Chrysalis required you to pass a coding exam before being admitted to their bootcamp.” Since D’Alberti was struggling to learn development by himself, he knew his chances of passing any coding exam were slim. “I tried Code Academy, I tried Solo Learn, I tried a whole bunch of apps online, I would follow the examples, the exercises . . . nothing clicked. I wouldn’t understand what I was doing or why I was doing it.” At the time, Le Wagon only offered full-time web development courses, although they now also have part-time courses and a data science curriculum. Since D’Alberti was unemployed, a full-time program wasn’t a problem for him, “But it did mean that the people who were present were very particular [kinds] of people: students who could take some time off to add this to their [coursework], or foreigners who took three months off and were traveling and decide to come here and do studying plus sightseeing, and I think there were one or two who actually asked for time off from the job in order to participate.” It was a very intense course, and the experience itself gave me exactly what I needed. I had been trying to learn by myself. It did not work. I did not understand. [After joining], the first day or second day, suddenly everything clicked. D’Alberti appreciated how Le Wagon organized the curriculum to build continuously off previous lessons. By the time he graduated in June of 2019, he’d built three applications from scratch, and felt far more confident in his coding abilities. “It was great. [The curriculum] was amazing, and I really felt super confident in my abilities after the three months. Which, looking back,” he joked, “I still had a lot to learn.” D’Alberti did have some specific advice for those considering a bootcamp: “Especially in the last couple of weeks, it can get very dramatic. You are divided into teams and as a team, you’re supposed to develop an application that you will be demonstrating in front of other people.” Some of the students, D’Alberti explained, felt that pressure intensely; one of his classmates broke down in tears. “Of course,” he added, “one of the big difficulties of joining a bootcamp is economical. The bootcamp itself is quite expensive.” While between 700,000 and 800,000 yen when D’Alberti went through the bootcamp, Le Wagon’s tuition has now risen to 890,000 yen for Web Development and 950,000 for Data Science. At the time D’Alberti joined there was no financial assistance. Now, Le Wagon has an agreement with Hello Work, so that students who are enrolled in the Hello Work system can be reimbursed for up to 70 percent of the bootcamp’s tuition. Though already studying development by himself, Asakura also enrolled in Le Wagon Tokyo in 2024, “to gain structure and accountability,” he said. One lesson that really stayed with me came from Sylvain Pierre, our bootcamp director. He said, ‘You stop being a developer the moment you stop learning or coding.’ That mindset helped me stay on track. Method 2: Online computer science degree Wilson considered going the bootcamp route, but decided against it. He knew, from his experience in recruiting, that a degree would give him an edge—especially in Japan, where having the right degree can make a difference in visa eligibility “The quality of bootcamps is perfectly fine,” he explained. “If you go through a bootcamp and study hard, you can get a job and become a developer no problem. 
I wanted to differentiate myself on paper as much as I could . . . [because] there are a lot of smart, motivated people who go through a bootcamp.” Whether it’s true or not, whether it’s valid or not, if you take two candidates who are very similar on paper, and one has a coding bootcamp and one has a degree, from a typical Japanese HR perspective, they’re going to lean toward the person with the degree. “Whether that’s good or not, that’s sort of a separate situation,” Wilson added. “But the reality [is] I’m older and I’m trying to make a career change, so I want to make sure that I’m giving myself every advantage that I can.” For these reasons, Wilson opted to get his computer science degree online. “There’s a program out of the University of Oregon, for people who already had a Bachelor’s degree in a different subject to get a Bachelor’s degree in Computer Science. “Because it’s limited to people who already have a Bachelor’s degree, that means you don’t need to take any non-computer science classes. You don’t need any electives or prerequisites or anything like that.” As it happened, Wilson was on paternity leave when he started studying for his degree. “That was one of my motivations to finish quickly!” he said. In the end, with his employer’s cooperation, he extended his paternity leave to two years, and finished the degree in five quarters. Method 3: Self-taught Hendricks took a different route, combining online learning materials with direct experience. He primarily used YouTube tutorials, like this project from one of his favorite channels, to teach himself. Once he had the basics down, he started creating websites for friends, as well as for the publishing company he worked for at the time. With every site, he’d put his name at the bottom of the page, as a form of marketing. This worked well enough that Hendricks was able to quit his work at the translation company and transition to full-time freelancing. However, eventually the freelancing work dried up, and he decided he wanted to experience working at a tech company—and not just for job security reasons. Hendricks saw finding a full-time development role as the perfect opportunity to push himself and see just how far he could get in his new career. There’s a common trope, probably belonging more to the sports world at large, about the importance of shedding ‘blood, sweat, and tears’ in the pursuit of one’s passion . . . and that’s also how I wanted to cut my teeth in the software engineering world. The job hunt While all four are now successfully employed as developers, Asakura, D’Alberti, Wilson, and Hendricks approached and experienced the job hunt differently. Following is their hard-earned advice on best practices and common mistakes. DO network When Hendricks started his job hunt, he faced the disadvantages of not having any formal experience, and also being both physically and socially isolated from other developers. Since he and his family were living in Nagano, he wasn’t able to participate in most of the tech events and meet-ups available in Tokyo or other big cities. His initial job hunt took around a year, and at one point he was sending so many applications that he received a hundred rejections in a week. It wasn’t until he started connecting with the community that he was able to turn it around, eventually getting three good job offers in a single week. Networking, for me, is what made all the difference. 
It was through networking that I found my mentors, found community, and joined and even started a few great Discord servers. These all undeniably contributed to me ultimately landing my current job, but they also made me feel welcome in the industry. Hendricks particularly credits his mentors, Ean More and Bob Cousins, for giving him great advice. “My initial mentor [Ean More] I actually met through a mutual IT networking Facebook group. I noticed that he was one of the more active members, and that he was always ready to lend a hand to help others with their questions and spread a deeper understanding of programming and computer science. He also often posted snippets of his own code to share with the community and receive feedback, and I was interested in a lot of what he was posting. “I reached out to him and told him I thought it was amazing how selfless he was in the group, and that, while I’m still a junior, if there was ever any grunt work I could do under his guidance, I would be happy to do so. Since he had a history of mentoring others, he offered to do so for me, and we’ve been mentor/mentee and friends ever since.” “My other mentor [Bob Cousins],” Hendricks continued, “was a friend of my late uncle’s. My uncle had originally begun mentoring me shortly before his passing. We were connected through a mutual friend whom I lamented to about not having any clue how to continue following the path my uncle had originally laid before me. He mentioned that he knew just the right person and gave me an email address to contact. I sent an email to the address and was greeted warmly by the man who would become another mentor, and like an uncle to me.” Although Hendricks found him via a personal connection, Cousins runs a mentorship program that caters to a wide variety of industries. Wilson also believes in the power of networking—and not just for the job hunt. “One of the things I like about programming,” he said, “is that it’s a very collaborative community. Everybody wants to help everybody.” We remember that everyone had to start somewhere, and we’ll take time to help those starting out. It’s a very welcoming community. Just do it! We’re all here for you, and if you need help I’ll refer you. Asakura, by contrast, thinks that networking can help, but that it works a little differently in Japan than in other countries. “Don’t rely on it too much,” he said. “Unlike in Western countries, personal referrals don’t always lead directly to job opportunities in Japan. Your skills, effort, and consistency will matter more in the long run.” DO treat the job hunt like a job Once he’d graduated from Le Wagon, D’Alberti said, “I considered job-hunting my full-time job.”  I checked all the possible networking events and meetup events that were going on in the city, and tried to attend all of them, every single day. I had a list of 10 different job boards that I would go and just refresh on a daily basis to see, ‘Okay, Is there anything new now?’ And, of course, I talked with recruiters. D’Alberti suggests beginning the search earlier than you think you need to. “I had started actively job hunting even before graduating [from Le Wagon],” he said. “That’s advice I give to everyone who joins the bootcamp. “Two weeks before graduation, you have one simple web application that you can show. You have a second one you’re working on in a team, and you have a third one that you know what it’s going to be about. 
So, already, there are three applications that you can showcase or use to explain your skills. I started going to meetups and to different events, talking with people, showing my CV.”

The process wasn’t easy, as most companies and recruiters weren’t interested in hiring for junior roles. But his intensive strategy paid off within a month, as D’Alberti landed three invitations to interview: one from a Japanese job board, one from a recruiter, and one from LinkedIn.

For Asakura, treating job hunting like a job was as much for his mental health as for his career. “The biggest challenge was dealing with impostor syndrome and feeling like I didn’t belong because I didn’t have a computer science degree,” he explained. “I also experienced burnout from pushing myself too hard.”

To cope, I stuck to a structured routine. I went to the gym daily to decompress, kept a consistent study schedule as if I were working full-time, and continued applying for jobs even when it felt hopeless.

At first, Asakura tried to apply to jobs strategically by tracking each application, tailoring his resume, and researching every role. “But after dozens of rejections,” he said, “I eventually switched to applying more broadly and sent out over one hundred applications. I also reached out to friends who were already software engineers and asked for direct referrals, but unfortunately, nothing worked out.”

Still, Asakura didn’t give up. He practiced interviews in both English and Japanese with his friends, and stayed in touch with recruiters. Most importantly, he kept developing and adding to his portfolio.

DO make use of online resources

“What ultimately helped me was staying active and visible,” Asakura said.

I consistently updated my GitHub, LinkedIn, and Wantedly profiles. Eventually, I received a message on Wantedly from the CTO of a company who was impressed with my portfolio, and that led to my first developer job.

“If you have the time, certifications can also help validate your knowledge,” Asakura added, “especially in fields like cloud and AI. Some people may not realize this, but the rise of artificial intelligence is closely tied to the growth of cloud computing. Earning certifications such as AWS, Kubernetes, and others can give you a strong foundation and open new opportunities, especially as these technologies continue to evolve.”

Hendricks also heavily utilized LinkedIn and similar sites, though in a slightly different way. “I would also emphasize the importance of knowing how to use job-hunting sites like Indeed and LinkedIn,” he said. “I had the best luck when I used them primarily to do initial research into companies, then applied directly through the companies’ own websites, rather than through job postings that filter applicants before their resumés ever make it to the actual people looking to hire.”

In addition, Hendricks recommends studying coding interview prep tutorials from freeCodeCamp. Along with advice from his mentors and the online communities he joined, he credits those tutorials with helping him successfully receive offers after a long job hunt.

DO highlight experience with Japanese culture and language

Asakura felt that his experience in Japan, and his knowledge of Japanese, gave him an edge. “I understand Japanese work culture [and] can speak the language,” Asakura said, “and as a Japanese national I didn’t require visa sponsorship. That made me a lower-risk hire for companies here.”

Hendricks also felt that his excellent Japanese made him a more attractive hire.
While applying, he emphasized to companies that he could be a bridge to the global market and business overseas. However, he also admitted this strategy steered him towards applying to more domestic Japanese companies, which were also less likely to hire someone without a computer science degree. “So,” he said, “it sort of washed out.”

Wilson is another who put a lot of emphasis on his Japanese language skills, from a slightly different angle.

A lot of interviewees typically don’t speak Japanese well . . . and a lot of companies here say that they’re very international, but if they want very good programmers, [those people] spend their lives programming, not studying English. So having somebody who can bridge the language gap on the IT side can be helpful.

DO lean into your other experience

Several career switchers discovered that their past experiences and skills, while not immediately relevant to their new career, still proved quite helpful in landing that first role—sometimes in very unexpected ways.

When Wilson was pitching his language skills to companies, he wasn’t talking about just Japanese–English translation. He also highlighted his prior experience in sales to suggest that he could help communicate with and educate non-technical audiences. “Actually, to be a software engineer, there’s a lot of technical communication you have to do,” he said.

I have worked with some incredible coders who are so good at the technical side and just don’t want to do the personal side. But for those of us who are not super-geniuses and can’t rely purely on our tech skills . . . there’s a lot of non-technical discussion that goes around building a product.

This strategy, while eventually fruitful, didn’t earn Wilson a job right away. Initially, he applied to more than sixty companies over the course of three to four months. “I didn’t have any professional [coding] experience, so it was actually quite a rough time,” he said. “I interviewed all over the place. I was getting rejected all over town.”

The good news was, Wilson said, “I’m from Chicago. I don’t know what it is, but there are a lot of Chicagoans who work in Tokyo for whatever reason.” When he finally landed an interview, one of the three founders of the company was also from Chicago, giving them something in common. “We hit it off really well in the interview. I think that kind of gave me the edge to get the role, to be honest.”

Like Wilson, D’Alberti found that his previous work as a marketer helped him secure his first developer role—which was ironic, he felt, given that he’d partially chosen to switch careers because he hadn’t been able to find an English-language marketing job in Japan.

“I had my first interview with the CEO,” he told me, “and this was for a Japanese startup that was building chatbots, and they wanted to expand into the English market. So I talked with the CEO, and he was very excited to get to know me and sent me to talk with the CTO.”

The CTO, unfortunately, wasn’t interested in hiring a junior developer with no professional experience. “And I thought that was the end of it. But then I got called again by the CEO. I wanted to join for the engineering position, and he wanted to have me for my marketing experience.”

In the end we agreed that I would join in a 50-50 arrangement. I would do 50 percent of my job in marketing, going to conferences and talking to people, and 50 percent on the engineering side.
I was like, ‘Okay, I’ll take that.’

This ended up working better than D’Alberti had expected, partially due to external circumstances. “When COVID came, we couldn’t travel abroad, so most of the job I was doing in my marketing role I couldn’t perform anymore.

“So they sat me down and [said], ‘What are we going to do with you, since we cannot use you for marketing anymore?’ And I was like, ‘Well, I’m still a software developer. I could continue working in that role.’ And that actually allowed me to fully transition.”

DON’T make these mistakes

It was D’Alberti’s willingness to compromise on that first development role that led to his later success, so he would explicitly encourage other career-changers to avoid, in his own words, “being too picky.”

This advice is based not just on his own experience, but also on his time working as a teaching assistant at Le Wagon. “There were a couple of people who would be like, ‘Yeah, I’d really like to find a job and I’m not getting any interviews,’” he explained. “And then we’d go and ask, ‘Okay, how many companies are you applying to? What are you doing?’ But [they’d say] ‘No, see, [this company] doesn’t offer enough’ or ‘I don’t really like this company’ or ‘I’d like to do something else.’”

Those who would be really picky or wouldn’t put in the effort, they wouldn’t land a job. Those who were deadly serious about ‘I need to get a job as a software developer,’ they’d find one. It might not be a great job, it might not be at a good company, but it would be a good first start from which to move on afterwards.

Asakura also knew some other bootcamp graduates who struggled to find work. “A major reason was a lack of Japanese language skills,” he said.

Even for junior roles, many companies in Japan require at least conversational Japanese, especially domestic ones.

On the other hand, if you prioritize learning Japanese, that can give you an edge in entering the industry: “Many local companies are open to training junior developers, as long as they see your motivation and you can communicate effectively. International companies, on the other hand, often have stricter technical requirements and may pass on candidates without degrees or prior experience.”

Finally, Hendricks said that during his own job hunt, “Not living in Tokyo was a problem.” It was something he was able to overcome via diligent digital networking, but he’d encourage career-changers to think seriously about their future job prospects before settling outside a major metropolis in Japan.

Their top advice

I asked each developer to share their number one piece of advice for career-changers.

D’Alberti wasn’t quite sure what to suggest, given recent changes in the tech market overall. “I don’t have clear advice for someone who’s trying to break into tech right now,” he said. “It might be good to wait and see what happens with the AI path. Might be good to actually learn how to code using AI, if that’s going to be the way to distinguish yourself from other junior developers. It might be to just abandon the idea of [being] a linear software developer in the traditional sense, and maybe look more into data science, if there are more opportunities.”

But assuming they still decide ‘Yes, I want to join, I love the idea of being a software developer and I want to go forward’ . . . my main suggestion is patience.

“It’s going to be tough,” he added.

By contrast, Hendricks and Wilson had the same suggestion: if you want to change careers, then go for it, full speed ahead.
“Do it now, or as soon as you possibly can,” Hendricks stated adamantly. His life has been so positively altered by discovering and pursuing his passion that his only regret is that he didn’t do it sooner.

Wilson said something strikingly similar. “Do it. Just do it. I went back and forth a lot,” he explained. “‘Oh, should I do this, it’s so much money, I already have a job’ . . . just rip the bandaid off. Just do it. You probably have a good reason.”

He pointed out that while starting over and looking for work is scary, it’s also possible that you’ll lose your current job anyway, at which point you’ll still be job hunting, but in an industry you no longer even enjoy. “If you keep at it,” he said, “you can probably do it.”

“Not to talk down to developers,” he added, “but it’s not the hardest job in the world. You have to study and learn and be the kind of person who wants to sit at the computer and write code, but if you’re thinking about it, you’re probably the kind of person who can do it, and that also means you can probably weather the awful six months of job hunting.”

You only need to pass one job interview. You only need to get your foot in the door.

Asakura agreed with “just do it,” but with a twist. “Build in public,” he suggested. “Share your progress. Post on GitHub. Keep your LinkedIn active.”

Let people see your journey, because even small wins build momentum and credibility.

“To anyone learning to code right now,” Asakura added, “don’t get discouraged by setbacks or rejections. Focus on building, learning, and showing up every day. Your portfolio speaks louder than your past, and consistency will eventually open the door.”

If you want to read more how-tos and success stories around networking, working with recruitment agencies, writing your resume, and more, check out TokyoDev’s other articles. If you’d like to hear more about being a developer in Japan, we invite you to join the TokyoDev Discord, which has over 6,000 members as well as dedicated channels for resume review, job posts, life in Japan, and more.
