More from samwho.dev
Reservoir Sampling

Reservoir sampling is a technique for selecting a fair random sample when you don't know the size of the set you're sampling from. By the end of this essay you will know:

- When you would need reservoir sampling.
- The mathematics behind how it works, using only basic operations: subtraction, multiplication, and division. No math notation, I promise.
- A simple way to implement reservoir sampling if you want to use it.

This post is sponsored by ittybit, and their API for working with videos, images, and audio. If you need to store, encode, or get intelligence from the media files in your app, check them out!

# Sampling when you know the size

In front of you are 10 playing cards and I ask you to pick 3 at random. How do you do it?

The first technique that might come to mind from your childhood is to mix them all up in the middle. Then you can straighten them out and pick the first 3. You can see this happen below by clicking "Shuffle."
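In code, this mix-them-up technique is usually implemented as a Fisher-Yates shuffle followed by taking the first few cards. Here's a minimal sketch in JavaScript; the function name and card values are just for illustration:

// Shuffle a copy of the deck, then take the first k cards.
function sampleByShuffling(cards, k) {
  const deck = [...cards];
  // Fisher-Yates: walk backwards through the deck, swapping each
  // card with a randomly chosen card at or before it.
  for (let i = deck.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return deck.slice(0, k);
}

console.log(sampleByShuffling([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 3));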
Every time you click "Shuffle," the chart below tracks what the first 3 cards were. At first you'll notice some cards are selected more than others, but if you keep going it will even out. All cards have an equal chance of being selected. This makes it "fair." Click "Shuffle 100 times" until the chart evens out. You can reset the chart if you'd like to start over.

This method works fine with 10 cards, but what if you had 1 million cards? Mixing those up won't be easy. Instead, we could use a random number generator to pick 3 indices. These would be our 3 chosen cards. We no longer have to move all of the cards, and if we click the "Select" button enough times we'll see that this method is just as fair as the mix-up method.

I'm stretching the analogy a little here. It would take a long time to count through the deck to get to, say, index 436,234. But when it's an array in memory, computers have no trouble finding an element by its index.

Now let me throw you a curveball: what if I were to show you 1 card at a time, and you had to pick 1 at random? How many cards will I show you in total? That's the curveball: you don't know. And no, you can't hold on to more than 1 card at a time. You're free to swap your card with the newest one each time I show you a card, but you can only hold one and you can't go back to a card you've already seen. Believe it or not, this is a real problem and it has a real and elegant solution.

For example, let's say you're building a log collection service. Text logs, not wooden ones. This service receives log messages from other services and stores them so that it's easy to search them in one place. One of the things you need to think about when building a service like this is what to do when another service starts sending you way too many logs. Maybe it's a bad release, maybe one of your videos goes viral. Whatever the reason, it threatens to overwhelm your log collection service.

Let's simulate this. Below you can see a stream of logs that experiences periodic spikes. A horizontal line indicates the threshold of logs per second that the log collection service can handle, which in this example is 5 logs per second. You can see that every so often, logs per second spikes above the threshold.

One way to deal with this is "sampling": deciding to send only a fraction of the logs to the log collection service. Let's send 10% of the logs. Below we will see the same simulation again, but this time logs that don't get sent to our log collection service will be greyed out. The graph has 2 lines: a black line tracks sent logs, the logs that are sent to our log collection service, and a grey line tracks total logs. The rate of sent logs never exceeds the threshold, so we never overwhelm our log collection service. However, in the quieter periods we're throwing away 90% of the logs when we don't need to!

What we really want is to send at most 5 logs per second. This would mean that during quiet periods you get all the logs, but during spikes you discard logs to protect the log collection service. The simple way to achieve this would be to send the first 5 logs you see each second, but this isn't fair. You aren't giving all logs an equal chance of being selected.

# Sampling when you don't know the size

We instead want to pick a fair sample of all the logs we see each second. The problem is that we don't know how many we will see. You could hold on to every log you see during the second and sample from that at the end, but why live with that uncertainty? You'd be holding on to an unknown number of logs in memory, and a sufficiently big spike could cause you problems. Reservoir sampling is an algorithm that solves this exact problem, and does so without ever using more memory than you ask it to.

Let's go back to our curveball of me showing you 1 card at a time. Here's a recap of the rules:

- I'll draw cards one at a time from a deck.
- Each time I show you a card, you have to choose to hold it or discard it.
- If you were already holding a card, you discard your held card before replacing it with the new card.
- At any point I can stop drawing cards, and whatever card you're holding is the one you've chosen.

How would you play this game in a way that ensures all cards have been given an equal chance to be selected when I decide to stop? If your instinct is to flip a coin for each new card, you're on the right track. Let's have a look at how the coin flip idea plays out in practice.

Below you see a deck of cards. Clicking "Deal" will draw a card, and 50% of the time it will go to the discard pile on the right, and 50% of the time it will become your held card in the center, with any previously held card moving to the discard pile.

The problem is that while the hold vs discard counts are roughly equal, which feels fair, later cards are much more likely to be held when I stop than earlier cards. The first card drawn has to win 10 coin flips to still be in your hand after the 10th card is drawn. The last card only has to win 1.

Scrub the slider below to see how the chances change as we draw more cards. Each bar represents a card in the deck, and the height of the bar is the chance we're holding that card when I stop. Below the slider are the chances we're holding the first card drawn vs. the last card drawn. Anything drawn more than 15 cards ago has a less than 0.01% chance of being held when I stop.

Believe it or not, we only have to make one small change to this idea to make it fair. Instead of flipping a coin to decide if we'll hold the card or not, we give each new card a 1/n chance of being held, where n is the number of cards we've seen so far.

In order to be fair, every card must have an equal chance of being selected. So for the 2nd card, we want both cards to have a 1/2 chance. For the 3rd card, we want all 3 cards to have a 1/3 chance. For the 4th card, we want all 4 cards to have a 1/4 chance, and so on. So if we use 1/n for the new card, we can at least say that the new card has had a fair shot.

Let's have a look at the chances as you draw more cards with this new method. The new card has the right chance of being selected, but how does that make the older cards fair? So far we've focused on the chance of the new card being selected, but we also need to consider the chance of the card you're holding staying in your hand. Let's walk through the numbers.

# Card 1

The first card is easy: we're not holding anything, so we always choose to hold the first card. The chance we're holding this card is 1/1, or 100%.

# Card 2

This time we have a real choice. We can keep hold of the card we have, or replace it with the new one. We've said that we're going to do this with a 1/n chance, where n is the number of cards we've seen so far. So our chance of replacing the first card is 1/2, or 50%, and our chance of keeping hold of the first card is its chance of being chosen last time multiplied by its chance of not being replaced, so 100% * (1 - 1/2), which is again 50%.

# Card 3

The card we're holding has a 50% chance of being there. This is true regardless of what happened up to this point. No matter whether we're holding card 1 or card 2, it's 50%. The new card has a 1/3 chance of being selected, so the card we're holding has a 1/3 chance of being replaced. This means that our held card has a 2/3 chance of remaining held. So its chances of "surviving" this round are 50% * 2/3.

# Card N

This pattern continues for as many cards as you want to draw.
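Before we get to the formulas, here's what this 1/n idea looks like in code: a minimal sketch, with made-up names (SingleReservoir, offer) for illustration:

// Hold one item; each new item replaces it with probability 1/n,
// where n is the number of items seen so far.
class SingleReservoir {
  constructor() {
    this.held = null;
    this.seen = 0;
  }

  offer(item) {
    this.seen += 1;
    // Math.random() < 1/n is true with probability 1/n, so the
    // first item is always held (1/1) and later items get 1/n.
    if (Math.random() < 1 / this.seen) {
      this.held = item;
    }
  }
}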
We can express both options as formulas. Drag the slider to substitute n with real numbers and see that the two formulas are always equal. If 1/n is the chance of choosing the new card, then 1/(n-1) is the chance we were holding any given card after the previous draw. The chance of a held card not being replaced this draw is the complement of 1/n, which is 1-(1/n). Multiply those together and you get 1/(n-1) * (1-(1/n)) = 1/n, exactly the same chance the new card gets.

Below are the cards again, except this time set up to use 1/n instead of a coin flip. Click through to the end of the deck. Does it feel fair to you? There's a good chance that through the 2nd half of the deck, you never swap your chosen card. This feels wrong, at least to me, but as we saw above the numbers say it is completely fair.

# Choosing multiple cards

Now that we know how to select a single card, we can extend this to selecting multiple cards. There are 2 changes we need to make:

- Rather than new cards having a 1/n chance of being selected, they now have a k/n chance, where k is the number of cards we want to choose.
- When we decide to replace a held card, we choose one of the k cards we're holding at random.

So the chance we're holding any given card after the previous draw becomes k/(n-1), because we're now holding k cards. The chance the new card is selected is k/n, and if it is, each held card has a 1/k chance of being the one replaced, so the chance a given held card is replaced is k/n * 1/k = 1/n, and the chance it survives is 1-(1/n). Multiply k/(n-1) by 1-(1/n) and you get k/n, the same chance the new card has.

Let's see how this plays out with real numbers. The fairness still holds, and will hold for any k and n pair. This is because all held cards have an equal chance of being replaced, which keeps them at an equal likelihood of still being in your hand every draw.

A nice way to implement this is to use an array of size k. For each new card, generate a random number between 0 and n. If the random number is less than k, replace the card at that index with the new card. Otherwise, discard the new card. And that's how reservoir sampling works!
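Here's a minimal sketch of that array-of-size-k implementation, again with made-up names for illustration:

class Reservoir {
  constructor(k) {
    this.k = k;
    this.held = [];  // we hold at most k items
    this.seen = 0;
  }

  offer(item) {
    this.seen += 1;
    if (this.held.length < this.k) {
      // The first k items are always held.
      this.held.push(item);
      return;
    }
    // Random integer in [0, n). It's less than k with probability
    // k/n, and doubles as the index of the held item to replace.
    const i = Math.floor(Math.random() * this.seen);
    if (i < this.k) {
      this.held[i] = item;
    }
  }
}

To use this for the log collection service described next, you'd offer every log you see, and once per second send the contents of held and start over with a fresh Reservoir.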
# Applying this to log collection

Let's take what we now know about reservoir sampling and apply it to our log collection service. We'll set k=5, so we're "holding" at most 5 log messages at a time, and every second we will send the selected logs to the log collection service. After we've done that, we empty our array of size 5 and start again.

This creates a "lumpy" pattern in the graph below, and highlights a trade-off when using reservoir sampling. It's no longer a real-time stream of logs, but chunks of logs sent at an interval. However, sent logs never exceed the threshold, and during quiet periods the two lines track each other almost perfectly. No logs lost during quiet periods, and never more than threshold logs per second sent during spikes. The best of both worlds. It also doesn't store more than k=5 logs, so it will have predictable memory usage.

# Further reading

Something you may have thought while reading this post is that some logs are more valuable than others. You almost certainly want to keep all error logs, for example. For that use-case there is a weighted variant of reservoir sampling. I wasn't able to find a simpler explanation of it, so that link is to Wikipedia, which I personally find a bit hard to follow. But the key point is that it exists and if you need it you can use it.

# Conclusion

Reservoir sampling is one of my favourite algorithms, and I've been wanting to write about it for years now. It allows you to solve a problem that at first seems impossible, in a way that is both elegant and efficient.

Thank you again to ittybit for sponsoring this post. I really couldn't have hoped for a more supportive first sponsor. Thank you for believing in and understanding what I'm doing here.

Thank you to everyone who read this post and gave their feedback. You made this post much better than I could have done on my own, and steered me away from several paths that just weren't working.

If you want to tell me what you thought of this post by sending me an anonymous message that goes directly to my phone, go to https://samwho.dev/ping.
Back in Memory Allocation, I introduced Haskie. The idea behind Haskie was to create a character that could ask questions the reader might have, and to "soften" the posts to make them feel less intimidating. I got some feedback from people that Haskie was a bit too childish, and didn't feel like he belonged in posts about serious topics. This feedback was in the minority, though, and most people liked him. So I kept him and used him again in Hashing.

Having a proxy to the reader was useful. I could anticipate areas of confusion and clear them up without creating enormous walls of text. I don't like it when the entire screen is filled with text; I like to break it up with images and interactive elements. And now dogs.

Then in Bloom Filters, I found myself needing a character to represent the "adult in the room." If Haskie was my proxy to the reader, this new character would serve as a proxy to all of the material I learned from in the writing of the post. This is Sage.

I liked the idea of having a cast of characters, each with their own personality and purpose. But I had a few problems.

# Problems

Both Haskie and Sage, because I have no artistic ability, were generated by AI. Back when I made them I was making no money from this blog, and I had no idea if I was going to keep them around. I didn't want to invest money in an idea that could flop, so I didn't feel bad about using AI to try it out. Since then, however, I have been paid twice to write posts for companies, and I know that I'm keeping the dogs. It wasn't ethical to continue piggybacking on AI.

While ethics were the primary motivation, there were some other smaller problems with the dogs:

- The visual style of them, while I did like it, never felt like it fit with the rest of my personal brand.
- It was difficult to get AI to generate consistent dogs. You'll notice differences in coat colouration and features between variants of the same dog.
- The AI generated images look bad at small sizes.

So I worked with the wonderful Andy Carolan to create a new design for my dogs. A design that would be consistent, fit with my brand, and look good at any size.

# Haskie, Sage, and Doe

The redesigned dogs are consistent, use simple colours and shapes, and use the SVG file format to look good at any size. Each variant clocks in at around 20kb, which is slightly larger than the small AI-generated images, but I'll be able to use them at any size.

Together the dogs represent a family unit: Sage as the dad, Haskie as the youngest child, and Doe as his older sister. They also come in a variety of poses, so I can use them to represent different emotions or actions.

We were careful to make the dogs recognisable apart. They differ in colour, ear shape, tail shape, and collar tag. Sage and Doe have further distinguishing features: Sage with his glasses, and Doe with her bandana. Doe's bandana uses the same colours as the transgender flag, to show my support for the trans community and as a nod to her identity.

# Going forward

I'm so happy with the new dogs, and plan to use them in my posts going forward. I suspect I will, at some point, replace the dogs in my old posts as well.
I don't plan to add any more characters, and I want to be careful to avoid overusing them. I don't want them to become a crutch, or to distract from the content of the posts. I also haven't forgotten the many people that pointed out to me that you can't pet the dogs. I'm working on it.
Everyone has a set of tools they use to solve problems. Growing this set helps you to solve ever more difficult problems. In this post, I'm going to teach you about a tool you may not have heard of before. It's a niche tool that won't apply to many problems, but when it does you'll find it invaluable. It's called a "bloom filter."

Before you continue! This post assumes you know what a hash function is, and if you don't it's going to be tricky to understand. Sam has written a post about hash functions, and recommends that you read this first.

# What bloom filters can do

Bloom filters are similar to the Set data structure. You can add items to them, and check if an item is present. Here's what it might look like to use a bloom filter in JavaScript, using a made-up BloomFilter class:

let bf = new BloomFilter();
bf.add("Ant");
bf.add("Rhino");
bf.contains("Ant");   // true
bf.contains("Rhino"); // true

While this looks almost identical to a Set, there are some key differences. Bloom filters are what's called a probabilistic data structure. Where a Set can give you a concrete "yes" or "no" answer when you call contains, a bloom filter can't. Bloom filters can give definite "no"s, but they can't be certain about "yes."

In the example above, when we ask bf if it contains "Ant" and "Rhino", the true that it returns isn't a guarantee that they're present. We know that they're present because we added them just a couple of lines before, but it would be possible for this to happen:

let bf = new BloomFilter();
bf.add("Ant");
bf.add("Rhino");
bf.contains("Fox"); // true

We'll demonstrate why over the course of this post. For now, we'll say that when bloom filters return true it doesn't mean "yes", it means "maybe". When this happens and the item has never been added before, it's called a false-positive. The opposite, claiming "no" when the answer is "yes," is called a false-negative. A bloom filter will never give a false-negative, and this is what makes them useful. It's not strictly lying, it's just not giving you a definite answer. Let's look at an example where we can use this property to our advantage.

# When bloom filters are useful

Imagine you're building a web browser, and you want to protect users from malicious links. You could build and maintain a list of all known malicious links and check the list every time a user navigates the browser. If the link they're trying to visit is in the list, you warn the user that they might be about to visit a malicious website.

If we assume there are, say, 1,000,000 malicious links on the Internet, and each link is 20 characters long, then the list of malicious links would be 20MB in size.
This isn't a huge amount of data, but it's not small either. If you have lots of users and want to keep this list up to date, the bandwidth could add up. However, if you're happy to accept being wrong 0.0001% of the time (1 in a million), you could use a bloom filter which can store the same data in 3.59MB. That's an 82% reduction in size, and all it costs you is showing the user an incorrect warning 1 in every million links visited. If you wanted to take it even further, and you were happy to accept being wrong 0.1% of the time (1 in 1000), the bloom filter would only be 1.8MB. This use-case isn't hypothetical, either. Google Chrome used a bloom filter for this exact purpose until 2012.

If you were worried about showing a warning when it wasn't needed, you could always make an API that has the full list of malicious links in a database. When the bloom filter says "maybe," you would then make an API call to check the full list to be sure. No more spurious warnings, and the bloom filter would save you from having to call the API for every link visited.

# How bloom filters work

At its core, a bloom filter is an array of bits. When it is created, all of the bits are set to 0. We're going to represent this as a grid of circles, with each circle representing 1 bit. Our bloom filters in this post are all going to have 32 bits in total.

To add an item to the bloom filter, we're going to hash it with 3 different hash functions, then use the 3 resulting values to set 3 bits. If you're not familiar with hashing, I recommend reading my post about it before continuing. For this post I'm choosing to use 3 of the SHA family of hash functions: sha1, sha256, and sha512.

Here's what our bloom filter looks like if we add the value "foo" to it: the bits in positions 15, 16 and 27 have been set. Other bits, e.g. 1, have not been set. You can hover or tap the bits in this paragraph to highlight them in the visualisation. We get to this state by taking the hash value of "foo" for each of our 3 hash functions and modulo it by the number of bits in our bloom filter. Modulo gets us the remainder when dividing by 32, so we get 27 with sha1, 15 with sha256 and 16 with sha512. The table below shows what's happening, and you can try inputting your own values to see what bits they would set if added.

Go ahead and add a few of your own values to our bloom filter below and see what happens. There's also a check button that will tell you if a value is present within the bloom filter. A value is only considered present if all of the bits checked are set. You can start again by hitting the clear button. You might occasionally notice that only 2, or even 1, bits get set. This happens when 2 or more of our hash functions produce the same value, or we attempt to set a bit that has already been set.

Taking that a bit further, have a think about the implications of a bloom filter that has every bit set. If every bit is set, won't the bloom filter claim it contains every item you check? That's a false-positive every time! Exactly right. A bloom filter with every bit set is equivalent to a Set that always returns true for contains. It will claim to contain everything you ask it about, even if that thing was never added.
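To make this concrete, here's a sketch of a 32-bit bloom filter in JavaScript. The post's visualisations use sha1, sha256, and sha512; to keep this example self-contained I'm standing in a simple seeded FNV-1a-style hash, which is my own substitution, not what the visualisations use:

// A simple seeded string hash (a stand-in for the SHA family).
function hashWithSeed(input, seed) {
  let h = 2166136261 ^ seed;
  for (const c of input) {
    h ^= c.charCodeAt(0);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0; // force to an unsigned 32-bit integer
}

class BloomFilter {
  constructor(size = 32, hashes = 3) {
    this.size = size;
    this.hashes = hashes;
    this.bits = new Array(size).fill(0);
  }

  add(item) {
    // Set one bit per hash function, modulo the number of bits.
    for (let seed = 0; seed < this.hashes; seed++) {
      this.bits[hashWithSeed(item, seed) % this.size] = 1;
    }
  }

  contains(item) {
    // "Maybe" only if every checked bit is set; any unset bit
    // is a definite "no".
    for (let seed = 0; seed < this.hashes; seed++) {
      if (this.bits[hashWithSeed(item, seed) % this.size] === 0) {
        return false;
      }
    }
    return true;
  }
}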
# False-positive rates

The rate of false-positives in our bloom filter will grow as the percentage of set bits increases. Drag the slider below the graph to see how the false-positive rate changes as the number of set bits increases.

It grows slowly at first, but as we get closer to having all bits set the rate increases. This is because we calculate the false-positive rate as x^3, where x is the percentage of set bits and 3 is the number of hash functions used. To give an example of why we calculate it with this formula, imagine we have a bloom filter with half of its bits set, x = 0.5. If we assume that our hash function has an equal chance of setting any of the bits, then the chance that all 3 hash functions set a bit that is already set is 0.5 * 0.5 * 0.5, or x^3.

Let's have a look at the false-positive rate of bloom filters that use different numbers of hash functions. The problem that using lots of hash functions introduces is that it makes the bloom filter fill up faster. The more hash functions you use, the more bits get set for each item you add. There's also the cost of hashing itself. Hash functions aren't free, and while the hash functions you'd use in a bloom filter try to be as fast as possible, it's still more expensive to run 100 of them than it is to run 3.

It's possible to calculate how full a bloom filter will be after inserting a number of items, based on the number of hash functions used. The graph below assumes a bloom filter with 1000 bits. The more hash functions we use, the faster we set all of the bits. You'll notice that the curve tails off as more items are added. This is because the more bits that are set, the more likely it is that we'll attempt to set a bit that has already been set.

In practice, 1000 bits is a very small bloom filter, occupying only 125 bytes of memory. Modern computers have a lot of memory, so let's crank this up to 100,000 bits (12.5kB) and see what happens. The lines barely leave the bottom of the graph, meaning the bloom filter will be very empty and the false-positive rate will be low. All this cost us was 12.5kB of memory, which is still a very small amount by 2024 standards.

# Tuning a bloom filter

Picking the correct number of hash functions and bits for a bloom filter is a fine balance. Fortunately for us, if we know up-front how many unique items we want to store, and what our desired false-positive rate is, we can calculate the optimal number of hash functions, and the required number of bits. The bloom filter page on Wikipedia covers the mathematics involved, which I'm going to translate into JavaScript functions for us to use. I want to stress that you don't need to understand the maths to use a bloom filter or read this post. I'm including the link to it only for completeness.

# Optimal number of bits

The following JavaScript function, which might look a bit scary but bear with me, takes the number of items you want to store (items) and the desired false-positive rate (fpr, where 1% == 0.01), and returns how many bits you will need to achieve that false-positive rate.

function bits(items, fpr) {
  const n = -items * Math.log(fpr);
  const d = Math.log(2) ** 2;
  return Math.ceil(n / d);
}

We can see how this grows for a variety of fpr values in the graph below.

# Optimal number of hash functions

After we've used the JavaScript above to calculate how many bits we need, we can use the following function to calculate the optimal number of hash functions to use:

function hashFunctions(bits, items) {
  return Math.ceil((bits / items) * Math.log(2));
}
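Plugging the malicious links scenario from earlier into these two functions gives a worked example (the numbers here follow from the formulas above):

// 1,000,000 links with a 0.1% (1 in 1000) false-positive rate.
const m = bits(1000000, 0.001);      // 14,377,588 bits
const k = hashFunctions(m, 1000000); // 10 hash functions

Note that 14,377,588 bits is roughly 1.8MB, which matches the figure quoted in the malicious links example.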
Pause for a second here and have a think about how the number of hash functions might grow based on the size of the bloom filter and the number of items you expect to add. Do you think you'll use more hash functions, or fewer, as the bloom filter gets larger? What about as the number of items increases?

The more items you plan to add, the fewer hash functions you should use, yet the larger the bloom filter, the more hash functions you can use. More hash functions keep the false-positive rate lower for longer, but more items fill up the bloom filter faster. It's a complex balancing act, and I am thankful that mathematicians have done the hard work of figuring it out for us.

# Caution

While we can stand on the shoulders of giants and pick the optimal number of bits and hash functions for our bloom filter, it's important to remember that these rely on you giving good estimates of the number of items you expect to add, and choosing a false-positive rate that's acceptable for your use-case. These numbers might be difficult to come up with, and I recommend erring on the side of caution. If you're not sure, it's likely better to use a larger bloom filter than you think you need.

# Removing items from a bloom filter

We've spent the whole post talking about adding things to a bloom filter, and the optimal parameters to use. We haven't spoken at all about removing items. And that's because you can't!

In a bloom filter, we're using bits, individual 1s and 0s, to track the presence of items. If we were to remove an item by setting its bits to 0, we might also be removing other items by accident. There's no way of knowing.

Click the buttons of the bloom filter below to see this in action. First we will add "foo", then "baz", and then we will remove "baz". Hit "clear" if you want to start again. The end result of this sequence is a bloom filter that doesn't contain "baz", but doesn't contain "foo" either. Because both "foo" and "baz" set bit 27, we accidentally clobber the presence of "foo" while removing "baz".

Something else you might have noticed playing with the above example is that if you add "foo" and then attempt to remove "baz" before having added it, nothing happens. Even though bit 27 is set, bits 18 and 23 are not, so the bloom filter cannot contain "baz". Because of this, it won't unset bit 27.

# Counting bloom filters

While you can't remove items from a standard bloom filter, there are variants that allow you to do so. One of these variants is called a "counting bloom filter," which uses an array of counters instead of bits to keep track of items. Now when you go through the sequence, the end result is that the bloom filter still contains "foo." It solves the problem.

The trade-off, though, is that counters are bigger than bits. With 4 bits per counter you can increment up to 15. With 8 bits per counter you can increment up to 255. You'll need to pick a counter size sufficient to never reach the maximum value, otherwise you risk corrupting the bloom filter. Using 8x more memory than a standard bloom filter could be a big deal, especially if you're using a bloom filter to save memory in the first place. Think hard about whether you really need to be able to remove items from your bloom filter.
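Here's a sketch of how the counting variant might look in code, reusing the made-up seeded hash from the earlier bloom filter sketch:

class CountingBloomFilter {
  constructor(size = 32, hashes = 3) {
    this.size = size;
    this.hashes = hashes;
    // 8-bit counters: each can count up to 255 before overflowing.
    this.counters = new Uint8Array(size);
  }

  indices(item) {
    const out = [];
    for (let seed = 0; seed < this.hashes; seed++) {
      out.push(hashWithSeed(item, seed) % this.size);
    }
    return out;
  }

  add(item) {
    for (const i of this.indices(item)) {
      this.counters[i] += 1;
    }
  }

  contains(item) {
    return this.indices(item).every((i) => this.counters[i] > 0);
  }

  remove(item) {
    // Refuse to decrement when the filter can't contain the item,
    // mirroring the "nothing happens" behaviour described above.
    if (!this.contains(item)) return;
    for (const i of this.indices(item)) {
      this.counters[i] -= 1;
    }
  }
}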
Counting bloom filters also introduce the possibility of false-negatives, which are impossible in standard bloom filters. Consider the following example: because "loved" and "response" both hash to the bits 5, 22, and 26, when we remove "response" we also remove "loved". If we write this as JavaScript the problem becomes clearer:

let bf = new CountingBloomFilter();
bf.add("loved");
bf.add("your");
bf.remove("response");
bf.contains("loved"); // false

Even though we know for sure we've added "loved" in this snippet, the call to contains will return false. This sort of false-negative can't happen in a standard bloom filter, and it removes one of the key benefits of using a bloom filter in the first place: the guarantee of no false-negatives.

# Bloom filters in the real-world

Real-world users of bloom filters include Akamai, who use them to avoid caching web pages that are accessed once and never again. They do this by storing all page accesses in a bloom filter, and only writing them into cache if the bloom filter says they've been seen before. This does result in some pages being cached on the first access, but that's fine because it's still an improvement. It would be impractical for them to store all page accesses in a Set, so they accept the small false-positive rate in favour of the significantly smaller bloom filter. Akamai released a paper about this that goes into the full details if you're interested.

Google's BigTable is a distributed key-value store, and uses bloom filters internally to know what keys are stored within. When a read request for a key comes in, a bloom filter in memory is first checked to see if the key is in the database. If not, BigTable can respond with "not found" without ever needing to read from disk. Sometimes the bloom filter will claim a key is in the database when it isn't, but this is fine because when that happens a disk access will confirm the key in fact isn't in the database.

# Conclusion

Bloom filters, while niche, can be a huge optimisation in the right situation. They're a wonderful application of hash functions, and a great example of making a deliberate trade-off to achieve a specific goal. Trade-offs, and combining simpler building blocks to create more complex, purpose-built data structures, are present everywhere in software engineering. Being able to spot where a data structure could net a big win can separate you from the pack, and take your career to the next level. I hope you've enjoyed this post, and that you find a way to apply bloom filters to a problem you're working on.

# Acknowledgements

Enormous thank you to my reviewers, without whom this post would be a shadow of what you read today. In no particular order: rylon, Indy, Aaron, Sophie, Davis, ed, Michael Drury, Anton Zhiyanov, Christoph Berger.
As a programmer, you use hash functions every day. They're used in databases to optimise queries, they're used in data structures to make things faster, they're used in security to keep data safe. Almost every interaction you have with technology will involve hash functions in one way or another. Hash functions are foundational, and they are everywhere.

But what is a hash function, and how do they work?

In this post, we're going to demystify hash functions. We're going to start by looking at a simple hash function, then we're going to learn how to test if a hash function is good or not, and then we're going to look at a real-world use of hash functions: the hash map.

# What is a hash function?

Hash functions are functions that take an input, usually a string, and produce a number.
If you call a hash function multiple times with the same input, it will always return the same number, and the number returned will always be within a promised range. What that range is depends on the hash function: some use 32-bit integers (so 0 to 4 billion), others go much larger.

If we were to write a dummy hash function in JavaScript, it might look like this:

function hash(input) {
  return 0;
}

Even without knowing how hash functions are used, it's probably no surprise that this hash function is useless. Let's see how we can measure how good a hash function is, and after that we'll do a deep dive on how they're used within hash maps.

# What makes a hash function good?

Because input can be any string, but the number returned is within some promised range, it's possible that two different inputs can return the same number. This is called a "collision," and good hash functions try to minimise how many collisions they produce.

It's not possible to completely eliminate collisions, though. If we wrote a hash function that returned a number in the range 0 to 7, and we gave it 9 unique inputs, we're guaranteed at least 1 collision.

hash("to") == 3
hash("the") == 2
hash("café") == 0
hash("de") == 6
hash("versailles") == 4
hash("for") == 5
hash("coffee") == 0
hash("we") == 7
hash("had") == 1

To visualise collisions, I'm going to use a grid. Each square of the grid is going to represent a number output by a hash function. Here's an example 8x2 grid. Click on the grid to increment the example hash output value and see how we map it to a grid square (e.g. 13 % 16 == 13). See what happens when you get a number larger than the number of grid squares.

Every time we hash a value, we're going to make its corresponding square on the grid a bit darker. The idea is to create an easy way to see how well a hash function avoids collisions. What we're looking for is a nice, even distribution. We'll know that the hash function isn't good if we have clumps or patterns of dark squares.

Because the grid has far fewer squares than the range of our hash functions, different hash values will sometimes land on the same square. We're going to be creating "pseudo-collisions" on our grid. It's okay, though, because if the hash function is good we will still see an even distribution. Incrementing every square by 100 is just as good a distribution as incrementing every square by 1. If we have a bad hash function that collides a lot, that will still stand out. We'll see this shortly.

Let's take a larger grid and hash 1,000 randomly-generated strings. You can click on the grid to hash a new set of random inputs, and the grid will animate to show you each input being hashed and placed on the grid. The values are nice and evenly distributed because we're using a good, well-known hash function called murmur3. This hash is widely used in the real-world because it has great distribution while also being really, really fast.

What would our grid look like if we used a bad hash function?
function hash(input) {
  let hash = 0;
  for (let c of input) {
    hash += c.charCodeAt(0);
  }
  return hash % 1000000;
}

This hash function loops through the string that we're given and sums the numeric values of each character. It then makes sure that the value is between 0 and 1000000 by using the modulus operator (%). Let's call this hash function stringSum.

Here it is on the grid. Reminder, this is 1,000 randomly generated strings that we're hashing. This doesn't look all that different from murmur3. What gives? The problem is that the strings we're giving to be hashed are random. Let's see how each function performs when given input that is not random: the numbers from 1 to 1000 converted to strings.

Now the problem is more clear. When the input isn't random, the output of stringSum forms a pattern. Our murmur3 grid, however, looks the same as it did with random values.

How about if we hash the top 1,000 most common English words? It's more subtle, but we do see a pattern on the stringSum grid. As usual, murmur3 looks the same as it always does. This is the power of a good hash function: no matter the input, the output is evenly distributed. Let's talk about one more way to visualise this and then talk about why it matters.

# The avalanche effect

Another way hash functions get evaluated is on something called the "avalanche effect." This refers to how many bits in the output value change when just a single bit of the input changes. To say that a hash function has a good avalanche effect, a single bit flip in the input should result in an average of 50% of the output bits flipping.

It's this property that helps hash functions avoid forming patterns in the grid. If small changes in the input result in small changes in the output, you get patterns. Patterns indicate poor distribution, and a higher rate of collisions.

Below, we are visualising the avalanche effect by showing two 8-bit binary numbers. The top number is the input value, and the bottom number is the murmur3 output value. Click on it to flip a single bit in the input. Bits that change in the output will be green, bits that stay the same will be red. murmur3 does well, though you will notice that sometimes fewer than 50% of the bits flip and sometimes more. This is okay, provided that it is 50% on average.

Let's see how stringSum performs. Well, this is embarrassing. The output is equal to the input, and so only a single bit flips each time. This does make sense, because stringSum just sums the numeric value of each character in the string. This example only hashes the equivalent of a single character, which means the output will always be the same as the input.
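We can measure this directly. Here's a sketch that flips one bit of a single-character input and counts how many output bits change, using the stringSum function (named hash) from above:

// Count differing bits between two numbers via XOR.
function bitsFlipped(a, b) {
  let diff = a ^ b;
  let count = 0;
  while (diff !== 0) {
    count += diff & 1;
    diff >>>= 1;
  }
  return count;
}

const input = "a".charCodeAt(0);   // 97, binary 01100001
const flipped = input ^ (1 << 2);  // flip one bit: 101, which is "e"
const h1 = hash(String.fromCharCode(input));   // stringSum
const h2 = hash(String.fromCharCode(flipped));
console.log(bitsFlipped(h1, h2));  // 1: no avalanche at all

A hash function with a good avalanche effect would flip about half of the output bits on average here; stringSum flips exactly the one bit we flipped in the input, because its output mirrors its input.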
# Why all of this matters

We've taken the time to understand some of the ways to determine if a hash function is good, but we've not spent any time talking about why it matters. Let's fix that by talking about hash maps.

To understand hash maps, we first must understand what a map is. A map is a data structure that allows you to store key-value pairs. Here's an example in JavaScript:

let map = new Map();
map.set("hello", "world");
console.log(map.get("hello"));

Here we take a key-value pair ("hello" → "world") and store it in the map. Then we print out the value associated with the key "hello", which will be "world".

A more fun real-world use-case would be to find anagrams. An anagram is when two different words contain the same letters, for example "antlers" and "rentals" or "article" and "recital." If you have a list of words and you want to find all of the anagrams, you can sort the letters in each word alphabetically and use that as a key in a map.

let words = [
  "antlers", "rentals", "sternal",
  "article", "recital", "flamboyant",
];

let map = new Map();

for (let word of words) {
  let key = word.split('').sort().join('');
  if (!map.has(key)) {
    map.set(key, []);
  }
  map.get(key).push(word);
}

This code results in a map with the following structure:

{
  "aelnrst": ["antlers", "rentals", "sternal"],
  "aceilrt": ["article", "recital"],
  "aabflmnoty": ["flamboyant"]
}

# Implementing our own simple hash map

Hash maps are one of many map implementations, and there are many ways to implement hash maps. The simplest way, and the way we're going to demonstrate, is to use a list of lists. The inner lists are often referred to as "buckets" in the real-world, so that's what we'll call them here. A hash function is used on the key to determine which bucket to store the key-value pair in, then the key-value pair is added to that bucket.

Let's walk through a simple hash map implementation in JavaScript. We're going to go through it bottom-up, so we'll see some utility methods before getting to the set and get implementations.

class HashMap {
  constructor() {
    this.bs = [[], [], []];
  }
}

We start off by creating a HashMap class with a constructor that sets up 3 buckets. We use 3 buckets and the short variable name bs so that this code displays nicely on devices with smaller screens. In reality, you could have however many buckets you want (and better variable names).

class HashMap {
  // ...
  bucket(key) {
    let h = murmur3(key);
    return this.bs[h % this.bs.length];
  }
}

The bucket method uses murmur3 on the key passed in to find a bucket to use. This is the only place in our hash map code that a hash function is used.

class HashMap {
  // ...
  entry(bucket, key) {
    for (let e of bucket) {
      if (e.key === key) {
        return e;
      }
    }
    return null;
  }
}

The entry method takes a bucket and a key and scans the bucket until it finds an entry with the given key. If no entry is found, null is returned.

class HashMap {
  // ...
  set(key, value) {
    let b = this.bucket(key);
    let e = this.entry(b, key);
    if (e) {
      e.value = value;
      return;
    }
    b.push({ key, value });
  }
}

The set method is the first one we should recognise from our earlier JavaScript Map examples. It takes a key-value pair and stores it in our hash map. It does this by using the bucket and entry methods we created earlier. If an entry is found, its value is overwritten. If no entry is found, the key-value pair is added to the map. In JavaScript, { key, value } is shorthand for { key: key, value: value }.

class HashMap {
  // ...
  get(key) {
    let b = this.bucket(key);
    let e = this.entry(b, key);
    if (e) {
      return e.value;
    }
    return null;
  }
}

The get method is very similar to set. It uses bucket and entry to find the entry related to the key passed in, just like set does. If an entry is found, its value is returned. If one isn't found, null is returned.

That was quite a lot of code. What you should take away from it is that our hash map is a list of lists, and a hash function is used to know which of the lists to store and retrieve a given key from.
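Putting it all together, our HashMap behaves like the built-in Map from earlier. A usage sketch (this assumes a murmur3 function is available, as the bucket method above does):

let m = new HashMap();
m.set("hello", "world");
m.set("hello", "there");     // the second set overwrites the entry
console.log(m.get("hello")); // "there"
console.log(m.get("nope"));  // null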
Here's a visual representation of this hash map in action. Click anywhere on the buckets to add a new key-value pair using our set method. To keep the visualisation simple, if a bucket were to "overflow", the buckets are all reset. Because we're using murmur3 as our hash function, you should see good distribution between the buckets. It's expected you'll see some imbalance, but it should generally be quite even.

To get a value out of the hash map, we first hash the key to figure out which bucket the value will be in. Then we have to compare the key we're searching for against all of the keys in the bucket. It's this search step that we minimise through hashing, and why murmur3 is optimised for speed. The faster the hash function, the faster we find the right bucket to search, the faster our hash map is overall.

This is also why reducing collisions is so crucial. If we did decide to use that dummy hash function from all the way at the start of this article, the one that returns 0 all the time, we'd put all of our key-value pairs into the first bucket. Finding anything could mean we have to check all of the values in the hash map. With a good hash function, with good distribution, we reduce the amount of searching we have to do to 1/N, where N is the number of buckets.

Let's see how stringSum does. Interestingly, stringSum seems to distribute values quite well. You notice a pattern, but the overall distribution looks good. I knew stringSum would be good for something! Not so fast, Haskie. We need to talk about a serious problem. The distribution looks okay on these sequential numbers, but we've seen that stringSum doesn't have a good avalanche effect. This doesn't end well.

# Real-world collisions

Let's look at 2 real-world data sets: IP addresses and English words. What I'm going to do is take 100,000,000 random IP addresses and 466,550 English words, hash all of them with both murmur3 and stringSum, and see how many collisions we get. Here are the results:

- IP addresses: murmur3 produced 1,156,959 collisions (1.157%), stringSum produced 99,999,566 (99.999%).
- English words: murmur3 produced 25 collisions (0.005%), stringSum produced 464,220 (99.5%).

When we use hash maps for real, we aren't usually storing random values in them. We can imagine counting the number of times we've seen an IP address in rate limiting code for a server. Or code that counts the occurrences of words in books throughout history to track their origin and popularity. stringSum sucks for these applications because of its extremely high collision rate.

# Manufactured collisions

Now it's murmur3's turn for some bad news. It's not just collisions caused by similarity in the input we have to worry about. Check this out. What's happening here? Why do all of these gibberish strings hash to the same number? I hashed 141 trillion random strings to find values that hash to the number 1228476406 when using murmur3. Hash functions have to always return the same output for a specific input, so it's possible to find collisions by brute force. 141 trillion? Like... 141 and then 12 zeroes? Yes, and it only took me 25 minutes. Computers are fast.

Bad actors having easy access to collisions can be devastating if your software builds hash maps out of user input. Take HTTP headers, for example. An HTTP request looks like this:

GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: google.com

You don't have to understand all of the words, just that the first line is the path being requested and all of the other lines are headers. Headers are Key: Value pairs, so HTTP servers tend to use maps to store them. Nothing stops us from passing any headers we want, so we can be really mean and pass headers we know will cause collisions. This can significantly slow down the server. This isn't theoretical, either. If you search "HashDoS" you'll find a lot more examples of this.
It was a really big deal in the mid-2000s. There are a few ways to mitigate this specific to HTTP servers: ignoring gibberish header keys and limiting the number of headers you store, for example. But modern hash functions like murmur3 offer a more generalised solution: randomisation.

Earlier in this post we showed some examples of hash function implementations. Those implementations took a single argument: input. Lots of modern hash functions take a 2nd parameter: seed (sometimes called salt). In the case of murmur3, this seed is a number. So far, we've been using 0 as the seed. Let's see what happens with the collisions I've collected when we use a seed of 1. Just like that, going from 0 to 1, the collisions are gone. This is the purpose of the seed: it randomises the output of the hash function in an unpredictable way. How it achieves this is beyond the scope of this article; all hash functions do this in their own way.

The hash function still returns the same output for the same input, it's just that the output is derived from a combination of the input and the seed. Things that collide with one seed shouldn't collide when using another. Programming languages often generate a random number to use as the seed when the process starts, so that every time you run your program the seed is different. As a bad guy who doesn't know the seed, it's now impossible for me to reliably cause harm.

If you look closely at the above visualisation and the one before it, they're the same values being hashed but they produce different hash values. The implication of this is that if you hash a value with one seed, and want to be able to compare against it in the future, you need to make sure you use the same seed. Having different values for different seeds doesn't affect the hash map use-case, because hash maps only live for the duration the program is running. Provided you use the same seed for the lifetime of the program, your hash maps will continue to work just fine. If you ever store hash values outside of your program, in a file for example, you need to be careful you know what seed has been used.

# Playground

As is tradition, I've made a playground for you to write your own hash functions and see them visualised with the grids seen in this article. Click here to try it!

# Conclusion

We've covered what a hash function is, some ways to measure how good it is, what happens when it's not good, and some of the ways they can be broken by bad actors. The universe of hash functions is a large one, and we've really only scratched the surface in this post. We haven't spoken about cryptographic vs non-cryptographic hashing, we've touched on only 1 of the thousands of use-cases for hash functions, and we haven't talked about how exactly modern hash functions actually work.

Some further reading I recommend if you're really enthusiastic about this topic and want to learn more:

- https://github.com/rurban/smhasher: this repository is the gold standard for testing how good hash functions are. They run a tonne of tests against a wide number of hash functions and present the results in a big table. It will be difficult to understand what all of the tests are for, but this is where the state of the art of hash testing lives.
- https://djhworld.github.io/hyperloglog/: this is an interactive piece on a data structure called HyperLogLog. It's used to efficiently count the number of unique elements in very, very large sets. It uses hashing to do it in a really clever way.
- https://www.gnu.org/software/gperf/: a piece of software that, when given the expected set of things you want to hash, can generate a "perfect" hash function automatically.

Feel free to join the discussion on Hacker News!

# Acknowledgements

Thanks to everyone who read early drafts and provided invaluable feedback: delroth, Manon, Aaron, Charlie. And everyone who helped me find murmur3 hash collisions: Indy, Aaron, Max.

# Patreon

After the success of Load Balancing and Memory Allocation, I have decided to set up a Patreon page: https://patreon.com/samwho. For all of these articles going forward, I am going to post a Patreon-exclusive behind-the-scenes post talking about decisions, difficulties, and lessons learned from each post. It will give you a deep look into how these articles evolve, and I'm really stoked about the one I've written for this one. If you enjoy my writing, and want to support it going forward, I'd really appreciate you becoming a Patreon. ❤️
More in programming
Some major updates to our open-source Automerge library, an introduction to Sketchy Calendars, and a peek at our work on collaborative game development. Also some meta content—a refreshed website, and a talk about how we work.
Today's my last day at Carta, where I got the chance to serve as their CTO for the past two years. I've learned so much working there, and I wanted to end my chapter there by collecting my thoughts on what I learned. (I am heading somewhere, and will share news in a week or two after firming up the communication plan with my new team there.) The most important things I learned at Carta were:

- Working in the details – if you took a critical lens towards my historical leadership style, I think the biggest issue you'd point at is my being too comfortable operating at a high level of abstraction. Utilizing the expertise of others to fill in your gaps is a valuable skill, but, like any single approach, it's limiting when utilized too frequently. One of the strengths of Carta's "house leadership style" is expecting leaders to go deep into the details to get informed and push pace. What I practiced there turned into the pieces on strategy testing and developing domain expertise.

- Refining my approach to engineering strategy – over the past 18 months, I've written a book on engineering strategy (posts are all in #eng-strategy-book), with initial chapters becoming available for early release with O'Reilly next month. Fingers crossed, the book will be released in approximately October. Coming into Carta, I already had much of my core thesis about how to do engineering strategy, but Carta gave me a number of complex projects to practice on, and excellent people to practice with: thank you to Dan, Shawna and Vogl in particular! More on this project in the next few weeks.

- Extract the kernel – everywhere I've ever worked, teams have struggled to understand executives. In every case, the executives could be clearer, but it's not particularly interesting to frame these problems as something the executives need to fix. Sure, it's true they could communicate better, but that framing makes you powerless, when you have a great deal of power to understand confusing communication. After all, even good communicators communicate poorly sometimes.

- Meaningfully adopting LLMs – a year ago I wrote up notes on adopting LLMs in your products, based on what we'd learned so far. Since then, we've learned a lot more, and LLMs themselves have significantly improved. Carta has been using LLMs in real, business-impacting workflows for over a year. That's continuing to expand into solving more complex internal workflows, and even more interestingly into creating net-new product capabilities that ought to roll out more widely in the next few months (currently released to small beta groups). This is the first major technology transition that I've experienced in a senior leadership role (I was earlier in my career when mobile internet transitioned from novelty to commodity). The immense pressure to adopt faster, combined with the equally immense uncertainty about whether it's a meaningful change or a brief blip, was a lot of fun, and was the inspiration for this strategy document around LLM adoption.

- Multi-dimensional tradeoffs – a phrase that Henry Ward uses frequently is that "everyone's right, just at a different altitude." That idea resonates with me, and meshes well with the ideas of multi-dimensional tradeoffs and layers of context that I find improve decision making for folks in roles that require making numerous, complex decisions. Working at Carta, these ideas formalized from something I intuited into something I could explain clearly.

- Navigators – I think our most successful engineering strategy at Carta was rolling out the Navigator program, which ensured our senior-most engineers had context and direct representation, rather than relying exclusively on indirect representation via engineering management. Carta's engineering managers are excellent, but there's always something lost as discussions extend across layers. The Navigator program probably isn't a perfect fit for particularly small companies, but I think any company with more than 100–150 engineers would benefit from something along these lines.

- How to create software quality – I've evolved my thinking about software quality quite a bit over time, but Carta was particularly helpful in distinguishing why some pieces of software are so hard to build despite having little-to-no scale from a data or concurrency perspective. These systems, which I label "high essential complexity", deserve more credit for their complexity, even if they have little in the way of complexity from infrastructure scaling.

- Shaping eng org costs – a few years ago, I wrote about my mental model for managing infrastructure costs. At Carta, I got to refine my thinking about engineering salary costs, with most of those ideas getting incorporated into the Navigating Private Equity ownership strategy and the eng org seniority-mix model. The three biggest levers are (1) "N-1 backfills", (2) requiring a business rationale for promotions into the senior-most levels, and (3) shifting hiring into cost-efficient hiring regions. None of these are the sort of inspiring topics that excite folks, but they are all essential to the long-term stability of your organization.

- Explaining engineering costs to boards/execs – similarly, I finally have a clear perspective on how to represent R&D investment to boards in the same language that they speak, which I wrote up here, and I know how to do it quickly without relying on any manually curated internal datasets.

- Lots of smaller stuff, like the no wrong doors policy for routing colleagues to appropriate channels, how to request headcount in a way that is convincing to executives, Act Two rationales for how people's motivations evolve over the course of long careers (and my own personal career mission to advance the industry), and why friction isn't velocity even though many folks act like it is.

I've also learned quite a bit about venture capital, fund administration, cap tables, non-social network products, operating a multi-business line company, and various operating models. Figuring out how to sanitize those learnings to share the interesting tidbits without leaking internal details is a bit too painful, so I'm omitting them for now. Maybe some will be shareable in four or five years, after my context goes sufficiently stale.

As a closing thought, I just want to say how much I've appreciated the folks I've gotten to work with at Carta. From the executive team (Ali, April, Charly, Davis, Henry, Jeff, Nicole, Vrushali) to my directs (Adi, Ciera, Dan, Dave, Jasmine, Javier, Jayesh, Karen, Madhuri, Sam, Shawna) to the navigators (there's a bunch of y'all). The people truly are always the best part, and that was certainly true at Carta.
Test UI outcomes, not API requests. Mock network calls in setup, but assert on what users actually see and experience, not implementation details.
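As a sketch of what that looks like in practice, here's a test using React Testing Library with Vitest. The UserGreeting component, its props, and the greeting text are all hypothetical stand-ins, not from the original post:

```typescript
// A sketch, not a drop-in test: <UserGreeting /> is a hypothetical component
// that fetches a user by id and renders "Hello, <name>!".
import { render, screen } from "@testing-library/react";
import { expect, test, vi } from "vitest";
import { UserGreeting } from "./UserGreeting";

test("shows the user's name once it has loaded", async () => {
  // Mock the network call in setup: fetch resolves with a canned payload.
  vi.stubGlobal("fetch", vi.fn().mockResolvedValue({
    ok: true,
    json: async () => ({ name: "Ada" }),
  }));

  render(<UserGreeting userId={1} />);

  // Assert on what the user sees, not on how fetch was called.
  // findByText waits for the greeting to appear and throws if it never does.
  expect(await screen.findByText("Hello, Ada!")).toBeTruthy();
});
```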
Do you feel that the number of applications needed to land a role has skyrocketed? If so, your instincts are correct. According to a Workday Global Workforce Report from September 2024, job applications are growing at a rate four times faster than job openings. This growth is fuelled by a tight job market as well as the new availability of remote work and online job boards. It's also one of the results of improved generative AI. Around half of all job seekers use AI tools to create their resumes or fill out applications. More than that, a 2024 survey found that 29 percent of applicants were using AI tools to complete skills tests, while 26 percent employed AI tools to mass apply to positions, regardless of fit or qualifications.

This never-before-seen flood of applications poses new hardships for both job candidates and recruiters. Candidates must ensure that their applications stand out enough from the pile to receive a recruiter's attention. Recruiters, meanwhile, are struggling to manage the sheer number of resumes they receive, and to winnow through heaps of irrelevant or unqualified applicants to find the ones they need.

These problems worsen if you're an overseas candidate hoping to find a role in Japan. Japan is a popular country for migrants, which increases the competition for each open position. In addition, recruiters here have set expectations and criteria, some of which can be triggered unknowingly by candidates unfamiliar with the Japanese market.

With all this in mind, how can you ensure your resume stands out from the crowd—and is there anything else you can do to pass the screening stage? I interviewed nine recruiters, both external and in-house, to learn how applicants can increase their chances of success. Below are their detailed suggestions on improving your resume, avoiding Japan-specific red flags, and persisting even in the face of rejection.

# The competition

The first questions I asked each recruiter were: How many resumes do you review in a month? How long does it take you to review a resume?

Some interviewees work for agencies or independently, while others are employed by the companies they screen applicants for. Surprisingly, where they work doesn't consistently affect how many resumes they receive. What does affect their numbers is whether they accept candidates from overseas. One anonymous contributor stated the case plainly: "The volume of applications depends on whether the job posting targets candidates in Japan or internationally."

- In Japan: around 20–100+ applications within the first three days.
- Outside of Japan: a single job posting can attract 200–1,000 applications within three days.

"[Because] we are generally only open to current residents of Japan, our total applicant count is around 100 or so in a month," said Caleb McClain, who is both a Senior Software Engineer and a hiring manager at Lunaris. "In the past, when we accepted applications from abroad, it was much higher, though I unfortunately don't have stats for that period. It was unmanageable for a single person (me) reviewing the applications, though!

"Given that I deal with 100 or so per month, I probably spend a bit more time than others screening applications, but it depends. I'll give every candidate a quick read through within a minute or so and, if I didn't find a reason to immediately reject them, I'll spend a few more minutes reading about their experience more deeply.
I'll check out the companies they have listed for their experience if I'm not familiar with them and, if they have a Github or personal projects listed, I'll also spend a few minutes checking those out."

For companies that accept overseas candidates, the workload is greater. Laine Takahashi, a Talent Acquisition employee at HENNGE, estimated that every month they receive around 200 completed applications for engineering mid-career roles and 270 applications for their Global Internship program. Since their application process starts with a coding test as well as a resume and cover letter, it can take up to two weeks to review, score, and respond to each application.

Clement Chidiac, Senior Technical Recruiter at Mercari, explained that the number of resumes he reviews monthly varies widely. "As an example, one of the current roles I am working on received 250+ applications in three weeks. Typically a recruiter at Mercari can work from 5–20 positions at a time, so this gives you an idea." He also said that his initial quick scan of each resume might take between 5–30 seconds.

External recruiters process resumes at a similar rate. Edmund Ho, Principal Consultant for Talisman Corporation, works with around 15 clients a month. To serve them, he looks at 20–30 resumes a day, or 600–700 a month, and can spend only 30 seconds to 2 minutes on each one before coming to a decision.

Axel Algoet, founder and CEO of InnoHyve, only reviews 200 resumes a month—but "if you count LinkedIn profiles, it's probably around 1,000." Why LinkedIn? "I usually start by looking at LinkedIn—the companies they've worked at and the roles they've had," Algoet explained. "From there, I can quickly tell whether I'm open to talking with them or not. Since I focus on a very specific segment of roles, I can rapidly identify if a candidate might be a fit for my clients."

# Applicant Tracking Systems (ATS)

Given the sheer volume of resumes to review and respond to, it's not surprising that companies are using Applicant Tracking Systems. What's more unexpected is how few recruiters personally use an ATS or AI when evaluating candidates. Both Ho and Algoet reported that though a high percentage of their clients use an ATS—as many as 90 percent, according to Ho—they themselves don't use one. Ho in particular emphasized that he manually reads every resume he receives. Lunaris doesn't use an ATS, "unless you count Notion," joked McClain. "Open to recommendations!"

Koji Hamane, Vice President of Human Resources at KOMOJU, said, "Up to 2023, we were managing the pipeline on a spreadsheet basis, and you cannot do it anymore with 3,000 applications [a year]. So it's more effective and efficient in terms of tracking where each applicant sits in the recruiting process, but it also facilitates communication among [the members of] the interview panel."

The ATS KOMOJU uses is Workable. "Workable, I mean, you know, it works," Hamane joked. "It's much better than nothing. . . . Workable actually shows the valid points of the candidates, highlights characteristics, and evaluates the fit for the required positions, like from a 0 to 100 point basis. It helps, but actually you need to go through the details anyway, to properly assess the candidates."

Chidiac explained that Mercari also uses Workable, which has a feature that matches keywords from the job description to the resume, giving the resume a score. "I've never made a decision based on that," said Chidiac.
"It's an indicator, but it's not accurate enough yet to use it as a decision-making tool." For example, it doesn't screen out non-Japanese speakers when Japanese is a requirement for the role.

"I think these [ATS] tools are going to be better, and they're going to work. I think it's a good idea to help junior recruiters. But I think it has to be used as a 'decision helper,' not a decision-making tool. There's also an element of ethics—do you want to be screened out by a robot?"

HENNGE uses a different ATS, Greenhouse, mostly to communicate with candidates and send them the results of their application. "Everything they submit," said Sonam Choden, HENNGE's Software Engineer Recruiter, "is actually manually checked by somebody in our team. It's not that everything is automated for the coding test—the bot only checks if they meet the minimum score. Then there is another [human] screener that will actually look over the test itself. If they pass the coding test, then we have another [human] screener looking through each and every document, both the resume and the cover letter."

# How to format your resume

The good news is that, according to our interviewees, passing the resume screening doesn't involve trying to master ATS algorithms. However, since many recruiters manually evaluate a high number of resumes every day, they can spend at most only a few minutes on each one. That's why it's critical to make your resume stand out positively from the rest. You can see tips on formatting and good practices in our article on the subject, but below recruiters offer detailed explanations of exactly what they're looking for—and, importantly, what red flags lead to rejection.

## Red flags

The biggest red flags called out by recruiters are frequent job changes, not having skills required by the position, applications from abroad when no visa support is available, mismatches in salary expectations, and lack of required Japanese language ability.

## Frequent job changes

Jumpiness. Job-hopping. Career-switching. Although they had different names for it, nearly everyone listed frequent job changes as the number one red flag on a candidate's resume—at least, when applying to jobs in Japan.

"There's a term HR in Japan uses: 'Oh, this guy is jumpy,'" Clement Chidiac told me. When he asked what they meant by that, they told him it referred to a candidate who had only been in their last job for two years or less. "And my first reaction was like, 'Is that a bad thing?' I think in the US, and in most tech companies, people change over every two to three years. I remember at my university in France, I was told you need to change your job externally or internally every three years to grow. But in Japan, there's still the element of loyalty, right?"

"It's changing a little bit, but when I have a candidate, a good candidate, that has had four jobs in the past ten years, I know I'm going to get questioned. . . . If I get a candidate that's changed jobs three times in the past three years, they're not likely to pass the screening, especially if they're overseas."

"Which is fair, right?" he added. "Because it's a bit expensive, it's a bit of a risk, and [it takes] a bit of time."

Why do Japanese companies feel so strongly about this issue? Some of it is simply history—lifetime employment at a single company was the Japanese ideal until quite recently.
But as Chidiac pointed out, hiring overseas candidates represents additional investments in both money and time spent navigating the visa system, so it makes sense for Japanese companies to move more cautiously when doing so.

Sayaka Sasaki, who was previously employed as a Sourcing Specialist by Tech Japan Inc., told me that recruiters attempt to use past job history to foresee the future. "A lack of consistency in career history can also lead to rejection," she said. "Recruiters can often predict a candidate's future career plans and job-switching tendencies based on their past job-change patterns."

Koji Hamane has another reason for considering job tenure. "When you try to leave some achievement or visible impact, [you have to] take some time in the same job, in the same company. So from that perspective, the tenure of each position on a resume really matters. Even though you say, 'I have this capability and I have this strength,' [if] your tenure at each company is very short, [you] don't leave an impact on those workplaces." In this sense, Hamane is not evaluating loyalty for its own sake, but considering tenure as a variable to assess the reproducibility of meaningful achievement. For him, achievement and impact—rather than tenure length itself—are the true signals of qualities such as leadership and resilience.

Long-time or regular freelancers may face similar scrutiny. Though Chidiac is reluctant to call freelancing a red flag, he acknowledged that it can cause problems. "[With] an engineer that's been doing freelance for the past three or four years, I know I'm going to get pushback from the hiring team, because they might have worked on three-, four-, five-month projects. They might not have the depth of knowledge that companies on a large scale might want to hire."

"Also my question is, if that person has been working on their own for three or four years, how are they going to work in the team? How long are they going to stay with us? Are they going to be happy being part of a company and then maybe having to come to the office, that kind of thing?"

He gave an example: "If you get 100 applicants for backend engineer roles, it's sad, but you're going to go with the ones that fit the most traditional background. If I'm hiring and I'm getting five candidates from PayPay . . . I might prioritize these people as opposed to a freelancer that's based out of Spain and wants to relocate to Japan, because there are a lot of question marks. That's the reality of the candidate pool.

"Now, if the freelancer in Spain has the exact experience that I want, and I don't have other applicants, then yeah, of course I'll talk to that person. I'll take time to understand [their reasons]."

## How to "fix" job-hopping on your resume

If you have changed jobs frequently, is rejection guaranteed? Not necessarily. These recruiters also offered a host of tips to compensate for job-hopping, freelancing stints, or gaps in your work history. The biggest tip: include an explanation on your resume.

Edmund Ho advises offering a "reason for leaving" for short-term jobs, defining short-term as "less than three years." For example, if the job was a limited contract role, then labelling it as such will prevent Japanese companies from drawing the conclusion that you left prematurely. Lay-offs and failed start-ups will also be looked upon more benevolently than simply quitting. In addition, Ho suggested that those with difficult resumes avail themselves of an agent or recruiter.
Since the recruiter will contact the company directly, they have the chance to advocate and explain your job history better than the resume alone can.

Sasaki also feels that explanations can help, but added a caveat: "Being honest about what you did during a gap period is not a bad thing. However, it is important to present it in a positive light. For example, if you traveled abroad or spent time at your family home during the gap period, you could write something like this: 'Once I start a new job, it will be difficult to take a long vacation. So, I took advantage of this break to visit [destination], which I had always dreamed of seeing. Experiencing [specific highlight] was a lifelong goal, and it helped me refresh myself while boosting my motivation for work.'

"If the gap period lasted for more than a year, it is necessary to provide a convincing explanation for the hiring manager. For instance, you could write, 'I used this time to enhance my skills by studying [specific subject] and preparing for [certification].' If you have actually obtained a qualification, that would be a perfect way to present your time productively."

Hamane answered the question quite differently. "Do you gamble?" he asked me. He went on: "When I say 'gamble,' ultimately recruiting is decision-making under uncertainty, right? It comes with risks. But the most important question is, what are the downside risks and upside risks?"

"In the game of hiring," Hamane explained, "employers are looking for indicators of future performance. Tenure, to me, is not inherently valuable, but serves as a variable to assess whether a candidate had the opportunity to leave a meaningful impact. It's not about loyalty or raw length of time, but about whether qualities like resilience or leadership had the chance to emerge. Those qualities often require time. However, I don't judge the number of years on its own—what matters is whether there is evidence of real contributions. A shorter tenure with clear impact can be just as strong a signal as longer service. That's why I view tenure not categorically, but contextually—as one indicator among others."

If possible, then, a candidate should focus on highlighting their work contributions and unique strengths in their resume, which can counterbalance the perceived "downside risk" of job-hopping.

## Incompatibility with the job description

Most other red flags can be categorized as "incompatible with the job description." This includes:

- Not possessing the required skills
- Applying from abroad when the position doesn't offer visa support
- A mismatch in salary expectations
- Not speaking Japanese

Many of the resumes recruiters receive are wholly unsuited for the position. Hamane estimated that 70 percent of the resumes his department reviews are essentially "random applications."

"Almost all the applications are basically not qualified. One of the major reasons why is the Internet. The Internet enables us to apply for any job from anywhere, right? So there are so many applications with no required skills. . . . From my perspective, they are applying on a batch basis, like mass applications."

Even if the candidate has the required job skills, if they're overseas and the position doesn't offer visa support, their resume almost certainly won't pass. Caleb McClain, whose company is currently hiring only domestically, said, "The most common reason [for rejection] is the person is applying from abroad. . . .
After that, if there's just a clear skills mismatch, we won't move forward with them."

Axel Algoet pointed out that nationality can be a problem even if the company is open to hiring from overseas. "I support many companies in the space, aerospace, and defense industries," he said, "and they are not allowed to hire candidates from certain countries." It's important to understand any legal issues surrounding sensitive industries before applying, to save both your own and the company's time.

He also mentioned that, while companies do look for candidates with experience at top enterprises, a prestigious background can actually be a red flag, mostly in terms of compensation. Japanese tech companies on average pay lower wages than American businesses, and a mismatch in expectations can become a major stumbling block in the application process overall. "Especially [for] candidates coming from companies like Indeed or some foreign firms," Algoet said, "if I know I won't be able to match or beat their current salary, I tell them upfront."

Not speaking Japanese is another common stumbling block. Companies have different expectations of candidates when it comes to Japanese language ability. Algoet said that, although in his own niche Japanese often isn't required at all, a Japanese level below JLPT N2 can be a problem for other roles. Sasaki agreed that speaking Japanese to at least the JLPT N3 level would open more doors.

## Anticipating potential rejection points

If you can anticipate why recruiters might reject you, you can structure your resume accordingly, highlighting your strengths while deemphasizing any weak points.

For example, if you don't live in Japan but do speak Japanese, it's important to bring attention to that fact. "Something that's annoying," said Chidiac, "that I'm seeing a lot from a hiring manager point of view, is that they sort of anticipate or presume things. . . . 'That person has only been in Japan for a year, they can't speak Japanese.' But there are some people that have been [going to] Japanese school back home." That's why he urges candidates to clearly state both their language ability and their connections to Japan in their resume whenever possible.

Chidiac also mentioned seniority issues. "It's important that you highlight any elements of seniority." However, he added, "Seniority means different things depending on the environment." That's why context is critical in your resume. If you've worked for a company in another country or another industry, the recruiter may not intuitively know much about the scale or complexity of the projects you've worked on. Without offering some context—the size of the project, the size of the team, the technologies involved, etc.—it's difficult for recruiters to judge. If you contextualize your projects properly, though, Chidiac believes that even someone with relatively few years of experience may still be viewed favorably for higher roles: "If you've led a very strong project, you might have the seniority we want."

Finally, Edmund Ho suggested an easy trick for those without a STEM degree: just put down the university you graduated from, and not your major. "It's cheating!" he said with a chuckle.

## Green flags

Creating a great resume isn't just about avoiding pitfalls. Your resume may also be missing some of the green flags recruiters get excited to see, which can open doors or lead to unexpected offers.
## Niche skills

Niche skills were cited by several recruiters as not only being valuable in and of themselves, but also being a great way to open otherwise closed doors. Even when the job description doesn't call for your unusual ability or experience, it's probably worth including them in your resume.

"I'll of course take into consideration the requirements as written in our current open listings," said McClain, "as that represents the core of what we are looking for at any given time. However, I also try to keep an eye out for interesting individuals with skills or experience that may benefit us in ways we haven't considered yet, or match well with projects that aren't formally planned but we are excited about starting when we have the time or the right people."

Chidiac agreed that he takes special note of rare skills or very senior candidates on a resume. "We might be able to create an unseeable headcount to secure a rare talent. . . . I think it's important to have that mindset, especially for niche areas. Machine learning is one that comes to mind, but it could also be very senior [candidates], like staff level or principal level engineers, or people coming from very strong companies, or people that solve problems that we want to solve at the moment, that kind of thing."

"I call it the opportunistic approach, like the unusual path, but it's important to have that in mind when you apply for a company, because you might not be a fit for a role now, but you might not be aware that a role is going to open soon."

Sasaki pointed out that niche skills can compensate for an otherwise relatively weak resume, or one that would be bypassed by more traditional Japanese companies. "If the company you are applying to is looking for a niche skill set that only you possess, they will want to speak with you in an interview. So don't lose hope!"

## Tailoring to the job description

"I don't think there's a secret recipe to automatically pass the resume screening, because at the end of the day, you need to match the job, right?" said Chidiac. "But I've seen people that use the same resume for different roles, and sometimes it's missing [relevant] experience or specific keywords. So I think it's important to really read the job description and think about, 'Okay, these are all the main skills they want. Let me highlight these in some way.'"

"If you're a cloud infrastructure engineer, but you've done a lot of coding in the past, or you use a specific technology but it doesn't show on your CV, you may be automatically rejected either by the recruiter or by the [ATS]. But if you make sure that, 'Oh yeah, I've seen the need for coding skill. I'm going to add that I was a software engineer when I started and I'm doing coding on my side project,' that will help you with the screening."

It's not necessary to entirely remake your resume each time, Chidiac believes, but you should at least ensure that the top of the resume highlights the skills that match the job description.

## Connections to Japan

While most of this advice would be relevant anywhere in the world, recruiters did offer one additional tip for applying in Japan—emphasizing your connection to the country.

"Whenever a candidate overseas writes a little thing about any ties to Japan, it usually helps," said Chidiac. For example, he believes that it helps to highlight your Japanese language ability at the top of your resume. "[If] someone writes like, 'I want to come to Japan,' 'I've been going to Japanese school for the last five years,' 'I've got family in Japan,' . . .
that kind of stuff usually helps."

Laine Takahashi confirmed that HENNGE shows extra interest in those kinds of candidates. "Either in the cover letter or the CV," she said, "if they're not living in Japan, we want them to write about their passion for coming to Japan." Ho went so far as to state that every overseas candidate he'd helped land a job in Japan had either already learned some Japanese, or had an interest in Japanese culture. Tourists who'd just enjoyed traveling in Japan were less successful, he'd found.

# How important is a cover letter?

Most recruiters had similar advice for candidates, but one serious point of contention arose: cover letters. Depending on their company and hiring style, interviewees' opinions ranged widely on whether cover letters were necessary or helpful.

## Cover letters aren't important

"I was trying to remember the last time I read a cover letter," said Clement Chidiac, "and I honestly don't think I've ever screened an application based on the cover letter." Instead, Mercari typically requests a resume and poses some screening questions. Chidiac thought this might be a controversial opinion, but it was echoed strongly by around half of the other interviewees.

There's no need to write a cover letter when applying to jobs in Japan, Edmund Ho told me. "Companies in Japan don't care!" He then added, "One company, HENNGE, uses cover letters. But you don't need," he advised, "to write a fancy cover letter."

"I never ask for cover letters," said Axel Algoet. "Instead, I usually set up a casual twenty-minute call between the hiring manager and the candidate, as a quick intro to decide if it's worth moving forward with the interview process." Getting to skip the cover letter and go straight to an early-stage interview is a major advantage Algoet is able to offer his candidates. "That said," he added, "if a candidate is rejected at the screening stage and I feel the client is making a mistake, we sometimes work on a cover letter together to give it another shot."

## Cover letters are extremely important

According to Sayaka Sasaki, though, Japanese companies don't just expect cover letters—they read them quite closely. "Some people may find this hard to believe," said Sasaki, "but many Japanese companies carefully analyze aspects of a candidate's personality that cannot be directly read from the text of a cover letter. They expect to see respect, humility, enthusiasm, and sincerity reflected in the writing."

Such companies also expect, or at least hope for, brevity and clarity. "Long cover letters are not a good sign," said Koji Hamane. "You need to be clear and concise." He does appreciate cover letters, though, especially for junior candidates, who have less information on their resumes. "It supplements [our knowledge of] the candidate's objectives, and helps us to verify the fit between the candidate's motivation and the job and the company."

Caleb McClain feels strongly that a good cover letter is the best way for a candidate to stand out from the crowd. "After looking at enough resumes," he said, "you start to notice similarities and patterns, and as the resume screener I feel a bit of exhaustion over trying to pick out what makes a person unique or better-suited for the position than another. A well-written and personal cover letter that expresses genuine interest in joining 'our' team and company and working on 'our' projects will make you stand out and, assuming you meet the requirements otherwise, I will take that interest into serious consideration."
"For example," McClain continued, "we had an applicant in the past who wrote about his experience using our e-commerce site, SolarisJapan, many years ago, and his positive impressions of shopping there. Others wrote about their interests which clearly align with our businesses, or about details from our TokyoDev company profile that appealed to them."

McClain urged candidates to "really tie your experience and interests into what the company does, show us why you're the best fit! Use the cover letter to stand out in the crowd and show us who you are in ways that a standard resume cannot. If you have interesting projects on Github or blogs on technical topics, share them! But of course," he added, "make sure they are in a state where you'd want others to read them."

## What to avoid in your cover letter

"However," McClain also cautioned, "[cover letters are] a double-edged sword, and for as many times as they've caused an application to rise to the top, they've also sunk that many." For this reason, it's best not to attach a cover letter unless one is specifically requested.

Since cover letters are extremely important to some recruiters, however, you should have a good one prepared in advance—and not one authored by an AI tool. "I sometimes receive cover letters," McClain told me, "that are very clearly written by AI, even going so far as to leave the prompt in the cover letter. Others simply rehash points from their resume, which is a shame and feels like a waste. This is your chance to really sell yourself!"

He wasn't the only recruiter who frowned on using AI. "Avoid simply copying and pasting AI-generated content into your cover letter," Sasaki advised. "At the very least, you should write the base structure yourself. Using AI to refine your writing is acceptable, but hiring managers tend to dislike cover letters that clearly appear to be AI-written."

Laine Takahashi and Sonam Choden at HENNGE have also received their share of AI-generated letters. Sometimes, Choden explained, the use of AI is blatantly obvious, because the places where the company or applicant's name should be written aren't filled out. That doesn't mean they're opposed to all use of AI, though. "[The screeners] do not have a problem with the usage of AI technology. It's just that [you should] show a bit more of your personality," Takahashi said. She thinks it's acceptable to use AI "just for making the sentences a bit more pretty, for example, but the story itself is still yours."

A bigger mistake would be not writing a cover letter at all. "There are cases," Takahashi explained, "where perhaps the candidate thought that we actually don't look at or read the cover letter. They sent the CV, and then the cover letter was like, 'Whatever, you're not going to read this anyway.' That's an automatic fail from our side."

"We do understand," said Choden, "that most developers now think cover letters are an outdated type of process. But for us, there is a lot of benefit in actually going through with the cover letter, because it's really hard to judge someone by one piece like a resume, right? So the cover letter is perfect to supplement with things that you might not be able to express in a one-page CV."

# Other tips for success

The interviewees offered a host of other tips to help candidates advance in the application process.

## Recruiters vs job boards

There are pros and cons to working with a recruiter as opposed to applying directly.
Partnering with a recruiter can be a complex process in its own right, and candidates should not expect recruiters to guarantee a specific placement or job.

Edmund Ho pointed out some of the advantages of working with a recruiter from the start of your job search. Not only can they help fix your resume, or call a company's HR directly if you're rejected, but these services are free: external recruiters are paid only if they successfully place you with a company.

Axel Algoet also recommended candidates find a recruiter, but he offered a few caveats to this general advice. "Many candidates are unaware of the candidate ownership rule—which means that when a recruiter submits your application, they 'own' it for the next 12–18 months. There's nothing you can do about it after that point." By this, he means that the agency you work with will be eligible for a fee if you are hired within that timeframe, and other agencies typically won't submit your application if it is currently "owned" by another. This affects TokyoDev as well: if you apply to a company with a recruiter, and then later apply to another role at that company via TokyoDev within 12 months of the original application, the recruiter receives the hiring fee rather than TokyoDev.

That's why, Algoet said, you should make sure your recruiter is a good fit and can represent you properly. "If you feel they can't," he suggested, "walk away." And if you have less than three years of experience, he suggests skipping a recruiter entirely. "Many companies don't want to pay recruitment fees for junior candidates," he added, "but that doesn't mean they won't hire you. Reach out to hiring managers directly."

From the internal recruiter's perspective, Sonam Choden is in favor of candidates who come through job boards. "I think we definitely have more success with job boards where people are actively directly applying, rather than candidates from agents. In terms of the requirements, the candidates introduced by agents have the experience and what we're looking for, but those candidates introduced by agents might not necessarily be looking for work, or even if they are . . . [HENNGE] might not be their first choice."

Laine Takahashi agreed and cited TokyoDev as one of HENNGE's best sources for candidates. "We've been using TokyoDev for the longest time . . . before the [other] job boards that we're using now. I think TokyoDev was the one that gave us a good head start for hiring inside Japan," she said. "And now we're expanding to other job boards as well, but still, TokyoDev is [at] the top, definitely."

## Follow up

Ho casually nailed the dilemma around sending a message or email to follow up on your application. "It's always best to follow up if you don't hear back," he said, "but if you follow up too much, it's irritating." The question is, how much is too much? When is it too soon to message a recruiter or hiring manager? Ho gave a concrete suggestion: "Send a message after three days to one week."

For Chidiac, following up is a strategy he's used himself to great effect. "Something that I've always done when I look for a job is ping people on LinkedIn, trying to anticipate who is the hiring manager for that role, or who's the recruiter for that role, and say 'Hey, I want to apply,' or 'I've applied.' [I've said] 'I know I might not be able to do this and this and that, but I've done this and this and this. Can we have a quick chat? Do you need me to tailor my CV differently?
Do you have any other roles that you think would be a good fit?' And then, follow up frequently."

"This is something that's important," he added, "showing that you've researched the company, showing that you've attended meetups from time to time, checking the [company] blogs as well. I've had people that just said, 'Hey, I've seen on the blogs that you're working on this. This is what I've done in my company. If you're hiring [for] this team, let me know, right?' So that could be a good tip to stand out from other applicants. [But] I think there's no rule. It's just going to be down to individuals."

"You might," he continued, "end up talking to someone who's like, 'Hey, don't ever contact me again.' As an agency recruiter that happened to me, someone said, 'How did you get my phone [number]? Don't ever call me again.' . . . [But] then a lot of the time it's like, 'Oh, we're both French, let's help each other out,' or, 'Oh, yeah, we were at the same university,' or 'Hey, I know you know that person.'"

Chidiac gave a recent example of a highly effective follow-up message. "He used to work in top US tech companies for the past 25 years. [After he applied to Mercari], the person messaged me out of the blue: 'I'm in Japan, I'm semi-retired, I don't care about money. I really like what Mercari is doing. I've done X and Y at these companies.' . . . So yeah, I was like, I don't have a role, but this is an exceptional CV. I'll show it to the hiring team."

There are a few caveats to this advice, however. First, a well-researched, well-crafted follow-up message is necessary to stand out from the crowd—and these days, there is quite a crowd. "Oh my goodness," Choden exclaimed when I brought up the subject. "I actually wanted to write a post on LinkedIn, apologizing to people for not being able to get back to them, because of the amount of requests to connect, all related to the positions that we have at HENNGE."

Takahashi and Choden explained that many of these messages are attempts to get around the actual hiring process. "Sometimes," Choden said, "when I do have the time, I try to redirect them. 'Oh, please, apply here, or go directly to the site,' because we can't really do anything; they have to start with the coding test itself. . . . I do look at them," Choden went on, "and if they're actually asking a question that I can help with, then I'm more than happy to reply."

Nonetheless, a few candidates have attempted to go over the recruiters' heads. "Sometimes we have some candidates who are asking for updates on their application directly from our CEO. It's quite shocking, because they send it to his work email as well. And then he's like, 'Is anybody handling this? Why am I getting this email?'" Choden related. Other applicants have emailed random HENNGE employees, or even members of the overseas branch in Taiwan. Needless to say, such candidates don't endear themselves to anyone on the hiring team.

## Be persistent

"I know a bunch of people," Chidiac told me, "that managed to land a job because they've tried harder going to meetups, reaching out to people, networking, that kind of thing." One of those people was Chidiac himself, who in 2021 was searching for an in-house recruiter position in Japan, while not speaking Japanese.

In his job hunt, Chidiac was well aware that he faced some major disadvantages. "So I went the extra mile by contacting the company directly and being like, 'This is what I've done, I've solved these problems, I've done this, I've done that, I know the Japanese market . . .
[but] I don't speak Japanese.'"

"There's a bit of a reality check that everyone has to have on what they can bring to the table and how much effort they need to [put forth]. You're going to have to sell yourself and reach out and find your people."

"Does it always work? No. Does it often work? No. But it works, right?" said Chidiac with a laugh. "Like five percent of the time it works every time. But you need to understand that there are some markets that are tougher than others."

Ho agreed that job-hunters, particularly candidates who are overseas hoping to work in Japan for the first time, face a tough road. He recommended applying to as many jobs as possible, but in a strictly organized way. "Make an Excel sheet for your applications," he urged. Such a spreadsheet should track your applications, when you followed up on them, and the probation period for reapplying to a company after receiving a rejection.

Most importantly, Ho believes candidates should maintain a realistic but optimistic view of the process. "Keep a longer mindset," he suggested. "Maybe you don't get an offer the first year, but you do the second year."

# Conclusion

Given the staggering number of applications recruiters must process, and the increasing competition for good roles—especially those open to candidates overseas—it's easy to become discouraged. Nonetheless, Japan needs international developers. Given Japan's demographics, as well as the government's interest in implementing AI and digital transformation (DX) solutions for social problems, that fact won't change anytime soon.

We at TokyoDev suggest that candidates interested in working in Japan adopt two basic approaches. First, follow the advice in this article and in our resume-writing guide to prevent your resume from being rejected for common flaws. You can highlight niche skills, write an original cover letter, and send appropriate follow-up messages to the recruiters and hiring managers you hope to impress. Second, persistence is key. The work culture in Japan is evolving and there are more openings for new candidates. Japan's startup scene is also burgeoning, and modern tech companies—such as Mercari—continue to grow and hire. If your long-term goal is to work in Japan, then it's worth investing the time to keep applying. Hopefully the suggestions offered above will help turn what might have been a lengthy job hunt into a quicker and more successful search.

To apply to open positions right now, see our job board. If you want to hear more tips from other international developers in Japan, check out the TokyoDev Discord. We also have articles with more advice on job hunting, relocating to Japan, and life in Japan.
With search getting worse by the day, maybe it's time we rebounded in the other direction: the long-forgotten directory. The post Can Directories Rise Again? appeared first on The History of the Web.