tl;dr: don’t call last() on a DoubleEndedIterator

How do you efficiently get the last part of a space-separated string in Rust? It will be obvious to some, but the obvious answer of s.split(' ').last() is wrong. The mistake is easy to make; I encountered it in a recent MR I reviewed, and I realized I … Continue reading Rust Gotcha: last() on DoubleEndedIterator → The post Rust Gotcha: last() on DoubleEndedIterator appeared first on Quentin Santos.
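The gotcha the post describes can be sketched in a few lines (a minimal example; the sample string is my own). Iterator::last() consumes the iterator from the front, so it visits every element even when the iterator could walk backwards:

```rust
fn main() {
    let s = "one two three";

    // Anti-pattern: last() walks the whole iterator from the front,
    // even though Split is double-ended.
    let slow = s.split(' ').last();

    // Better: rsplit() iterates from the end, so next() only has to
    // scan back to the final delimiter.
    let fast = s.rsplit(' ').next();

    // next_back() on the forward iterator also works, since
    // Split<'_, char> implements DoubleEndedIterator.
    let back = s.split(' ').next_back();

    assert_eq!(slow, Some("three"));
    assert_eq!(fast, Some("three"));
    assert_eq!(back, Some("three"));
    println!("last word: {:?}", fast);
}
```

All three return the same value; the difference is purely in how much of the string gets scanned.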
7 months ago


More from Quentin Santos

Overanalyzing a minor quirk of Espressif’s reset circuit

The mystery In the previous article, I briefly mentioned a slight difference between the ESP-Prog and the reproduced circuit, when it comes to EN: Focusing on EN, it looks like the voltage level goes back to 3.3V much faster on the ESP-Prog than on the breadboard circuit. The grid is horizontally spaced at 2ms, so … Continue reading Overanalyzing a minor quirk of Espressif’s reset circuit → The post Overanalyzing a minor quirk of Espressif’s reset circuit appeared first on Quentin Santos.

a month ago 18 votes
Transistors in reverse and redundant circuits

Continue reading Transistors in reverse and redundant circuits → The post Transistors in reverse and redundant circuits appeared first on Quentin Santos.

a month ago 15 votes
The missing part of Espressif’s reset circuit

In the previous article, we peeked at the reset circuit of ESP-Prog with an oscilloscope, and reproduced it with basic components. We observed that it did not behave quite as expected. In this article, we’ll look into the missing pieces. An incomplete circuit For a hint, we’ll first look a bit more closely at the … Continue reading The missing part of Espressif’s reset circuit → The post The missing part of Espressif’s reset circuit appeared first on Quentin Santos.

a month ago 123 votes
Reproducing Espressif’s reset circuit

I recently discussed how Espressif implements automatic reset, a feature that lets users easily update the code on an Espressif microcontroller. There are actually more subtleties than a quick look would suggest, and I spent a fair bit of time investigating them. This article and the next two present what I have learned. The current … Continue reading Reproducing Espressif’s reset circuit → The post Reproducing Espressif’s reset circuit appeared first on Quentin Santos.

a month ago 24 votes
The ESP32-S2 reset pin

RST defaults to high This is an addendum to the article about Espressif’s automatic reset. In that article, we observed the effect of the RST pin on the ESP32-S2-Saola-1RI board: I skipped over this topic quickly, so I am now taking the time to explain how the RST pin manages to have a defined behavior … Continue reading The ESP32-S2 reset pin → The post The ESP32-S2 reset pin appeared first on Quentin Santos.

2 months ago 30 votes

More in programming

HTML is Dead, Long Live HTML

Rethinking DOM from first principles

Browsers are in a very weird place. While WebAssembly has succeeded, even on the server, the client still feels largely the same as it did 10 years ago. Enthusiasts will tell you that accessing native web APIs via WASM is a solved problem, with some minimal JS glue. But the question nobody asks is why you would want to access the DOM at all. It's just the only option. So I'd like to explain why it really is time to send the DOM and its assorted APIs off to a farm somewhere, with some ideas on how. I won't pretend to know everything about browsers. Nobody knows everything anymore, and that's the problem.

The 'Document' Model

Few know how bad the DOM really is. In Chrome, document.body now has 350+ keys. This doesn't include the CSS properties in document.body.style, of which there are... 660. The boundary between properties and methods is very vague. Many are just facades with an invisible setter behind them. Some getters may trigger a just-in-time re-layout. There's ancient legacy stuff, like all the on-event handler properties nobody uses anymore. The DOM is not lean, and it continues to get fatter.

Whether you notice this largely depends on whether you are making web pages or web applications. Most devs now avoid working with the DOM directly, though occasionally some purist will praise pure DOM as superior to the various JS component/templating frameworks. What little declarative facility the DOM has, like innerHTML, does not resemble modern UI patterns at all. The DOM has too many ways to do the same thing, none of them nice.

```js
connectedCallback() {
  const shadow = this.attachShadow({ mode: 'closed' }),
        template = document.getElementById('hello-world')
          .content.cloneNode(true),
        hwMsg = `Hello ${this.name}`;

  Array.from(template.querySelectorAll('.hw-text'))
    .forEach(n => n.textContent = hwMsg);

  shadow.append(template);
}
```

Web Components deserve a mention, being the web-native equivalent of JS component libraries.
But they came too late and are unpopular. The API seems clunky, with its Shadow DOM introducing new nesting and scoping layers. Proponents kinda read like apologetics. The Achilles' heel is the DOM's SGML/XML heritage, which makes everything stringly typed. React-likes do not have this problem; their syntax only looks like XML. Devs have learned not to keep state in the document, because it's inadequate for it.

For HTML itself, there isn't much to critique, because nothing has changed in 10-15 years. Only ARIA (accessibility) is notable, and only because it does what Semantic HTML was supposed to do and didn't. Semantic HTML never quite reached its goal. Despite dating from around 2011, there is e.g. no <thread> or <comment> tag, even though those were well-established idioms. Instead, an article inside an article is probably a comment. The guidelines are... weird. There's this feeling that HTML always had paper-envy, couldn't quite embrace or fully define its hypertext nature, and did not trust its users to follow clear rules.

Stewardship of HTML has since firmly passed to WHATWG, really the browser vendors, who have not been able to articulate anything more concrete as a vision, and have instead just added epicycles at the margins. Along the way even CSS has grown expressions, because every templating language wants to become a programming language.

Editability of HTML remains a sad footnote. While technically supported via contentEditable, actually wrangling this feature into something usable for applications is a dark art. I'm sure the Google Docs and Notion people have horror stories.

Nobody really believes in the old gods of progressive enhancement and separating markup from style anymore, not if they make apps. Most of the applications you see nowadays kitbash HTML/CSS/SVG into a pretty enough shape. But this comes with immense overhead, and it looks more and more like the opposite of a decent UI toolkit.
The Slack input box. Off-screen clipboard hacks. Lists and tables must be virtualized by hand, taking over for layout, resizing, dragging, and so on. Making a chat window's scrollbar stick to the bottom is somebody's TODO, every single time. And the more you virtualize, the more you have to reinvent find-in-page, right-click menus, etc.

The web blurred the distinction between UI and fluid content, which was novel at the time. But it makes less and less sense, because the UI part is a decade obsolete, and the content has largely homogenized.

CSS is inside-out

CSS doesn't have a stellar reputation either, but few can put their finger on exactly why. Where most people go wrong is to start with the wrong mental model, approaching it like a constraint solver. This is easy to show:

```html
<div>
  <div style="height: 50%">...</div>
  <div style="height: 50%">...</div>
</div>

<div>
  <div style="height: 100%">...</div>
  <div style="height: 100%">...</div>
</div>
```

The first might seem reasonable: divide the parent into two halves vertically. But what about the second? Viewed as a set of constraints, it's contradictory, because the parent div is twice as tall as... itself. What happens instead, in both cases, is that the height is ignored. The parent height is unknown, and CSS doesn't backtrack or iterate here. It just shrink-wraps the contents. If you set e.g. height: 300px on the parent, then it works, but the latter case will still just spill out.

Outside-in and inside-out layout modes

Instead, your mental model of CSS should be two passes of constraints: first outside-in, then inside-out. When you make an application frame, this is outside-in: the available space is divided, and the content inside does not affect the sizing of panels. When paragraphs stack on a page, this is inside-out: the text stretches out its containing parent. This is what HTML wants to do naturally. By being structured this way, CSS layouts are computationally pretty simple.
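Returning to the 50% example: with a definite height on the parent, the percentages have something to resolve against. A minimal sketch (the 300px figure is from the text; the labels are illustrative):

```html
<!-- The outer div now has a known height, so the children's
     percentages resolve against 300px instead of being ignored. -->
<div style="height: 300px">
  <div style="height: 50%">top half: 150px</div>
  <div style="height: 50%">bottom half: 150px</div>
</div>
```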
You can propagate the parent constraints down to the children, and then gather up the children's sizes in the other direction. This is attractive, and it allows webpages to scale well in terms of elements and text content. CSS is always inside-out by default, reflecting its document-oriented nature. The outside-in is not obvious, because it's up to you to pass all the constraints down, starting with body { height: 100%; }. This is why they always say vertical alignment in CSS is hard.

Use flex grow and shrink for spill-free auto-layouts with completely reasonable gaps

The scenario above is better handled with a CSS3 flex box (display: flex), which provides explicit control over how space is divided. Unfortunately, flexing muddles the simple CSS model. To auto-flex, the layout algorithm must measure the "natural size" of every child. This means laying it out twice: first speculatively, as if floating in the aether, and then again after growing or shrinking to fit.

This sounds reasonable, but it can come with hidden surprises, because it's recursive. Doing speculative layout of a parent often requires full layout of unsized children, e.g. to know how text will wrap. If you nest it right, it could in theory cause an exponential blow-up, though I've never heard of it being an issue. Instead, you will only discover this when someone drops some large content in somewhere, and suddenly everything gets stretched out of whack. It's the opposite of the problem on the mug.

To avoid the recursive dependency, you need to isolate the children's contents from the outside, thus making speculative layout trivial. This can be done with contain: size, or by manually setting the flex-basis size. CSS has gained a few constructs like contain or will-change, which work directly with the layout system, and drop the pretense of one big happy layout. It reveals some of the layer-oriented nature underneath, and is a substitute for e.g. using position: absolute wrappers to do the same.
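The two isolation hatches just mentioned can be sketched as plain CSS (contain and flex-basis are real properties; the class names are invented for illustration):

```css
/* Outside-in: a fixed flex-basis means the panel's share of space
   is known up front, so no speculative measurement of its content
   is needed. */
.sidebar {
  flex: 0 0 240px; /* grow 0, shrink 0, basis 240px */
}

/* Isolation: size containment lays the box out as if it were empty,
   so large content inside can no longer stretch the outer layout. */
.panel {
  contain: size;
  overflow: auto; /* content that doesn't fit scrolls instead */
}
```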
What these do is strip off some of the semantics, and break the flow of DOM-wide constraints. These are overly broad by default and too document-oriented for the simpler cases. This is really a metaphor for all DOM APIs.

The Good Parts?

That said, flex box is pretty decent if you understand these caveats. Building layouts out of nested rows and columns with gaps is intuitive, and adapts well to varying sizes. There is a "CSS: The Good Parts" here, which you can make ergonomic with sufficient love. CSS grids also work similarly, they're just very painfully... CSSy in their syntax.

But if you designed CSS layout from scratch, you wouldn't do it this way. You wouldn't have a subtractive API, with additional extra containment-barrier hints. You would instead break the behavior down into its component facets, and use them à la carte. Outside-in and inside-out would both be legible as different kinds of containers and placement models. The inline-block and inline-flex display models illustrate this: such an element is a block or flex on the inside, but an inline element on the outside. These are two (mostly) orthogonal aspects of a box in a box model.

Text and font styles are in fact the odd ones out, in hypertext. Properties like font size inherit from parent to child, so that formatting tags like <b> can work. But most of those 660 CSS properties do not do that. Setting a border on an element does not apply the same border to all its children recursively; that would be silly. It shows that CSS is at least two different things mashed together: a system for styling rich text based on inheritance... and a layout system for block and inline elements, nested recursively but without inheritance, only containment. They use the same syntax and APIs, but don't really cascade the same way. Combining this under one style-umbrella was a mistake.

Worth pointing out: early ideas of relative em scaling have largely become irrelevant.
We now think of logical vs device pixels instead, which is a far saner solution, and closer to what users actually expect.

SVG is natively integrated as well. Having SVGs in the DOM, instead of just as <img> tags, is useful to dynamically generate shapes and adjust icon styles. But while SVG is powerful, it's neither a subset nor a superset of CSS. Even where it overlaps, there are subtle differences, like the affine transform. It has its own warts, like serializing all coordinates to strings. CSS has also gained the ability to round corners, draw gradients, and apply arbitrary clipping masks: it clearly has SVG-envy, but falls very short. SVG can e.g. do polygonal hit-testing for mouse events, which CSS cannot, and SVG has its own set of graphical layer effects. Whether you use HTML/CSS or SVG to render any particular element comes down to specific annoying trade-offs, even if they're all scalable vectors on the back end.

In either case, there are also some roadblocks. I'll just mention three:

- text-overflow: ellipsis can only be used to truncate unwrapped text, not entire paragraphs. Detecting truncated text is even harder, as is just measuring text: the APIs are inadequate. Everyone just counts letters instead.

- position: sticky lets elements stay in place while scrolling with zero jank. While tailor-made for this purpose, it's subtly broken. Having elements remain unconditionally sticky requires an absurd nesting hack, when it should be trivial.

- The z-index property determines layering by absolute index. This inevitably leads to a z-index-war.css where everyone puts in a new number +1 or -1 to make things layer correctly. There is no concept of relative Z positioning.

For each of these features, we got stuck with v1 of whatever they could get working, instead of the right primitives. Getting this right isn't easy; it's the hard part of API design. You can only iterate on it, by building real stuff with it before finalizing it, and looking for the holes.
Oil on Canvas

So: the DOM is bad, CSS is single-digit-X% good, and SVG is ugly but necessary... and nobody is in a position to fix it? Well, no. The diagnosis is that the middle layers no longer suit anyone particularly well. Just an HTML6 that finally removes things could be a good start. But most of what needs to happen is to liberate the functionality that is already there. This can be done in good or bad ways. Ideally you design your system so that the "escape hatch" for custom use is the same API you built the user-space stuff with. That's what dogfooding is, and also how you get good kernels.

A recent proposal here is HTML in Canvas, for drawing HTML content into a <canvas> with full control over the visual output. It's not very good. While it might seem useful, the only reason the API has the shape that it does is that it's shoehorned into the DOM: elements must be descendants of the <canvas> to fully participate in layout and styling, and to make accessibility work. There are also "technical concerns" with using it off-screen. One example demo is a spinny cube: to make it interactive, you attach hit-testing rectangles and respond to paint events. This is a new kind of hit-testing API. But it only works in 2D... so it seems 3D use is only cosmetic? I have many questions.

Again, if you designed it from scratch, you wouldn't do it this way! In particular, it's absurd that you'd have to take over all interaction responsibilities for an element and its descendants just to be able to customize how it looks, i.e. renders. Especially in a browser that has projective CSS 3D transforms. The use cases not covered by that, e.g. curved re-projection, will also need more complicated hit-testing than rectangles. Did they think this through? What happens when you put a dropdown in there?

To me it seems like they couldn't really figure out how to unify CSS and SVG filters, or how to add shaders to CSS. Passing it through canvas is the only viable option left. "At least it's programmable."
Is it really? Screenshotting DOM content is one good use case, but that's not what this is sold as at all. The whole reason to do "complex UIs on canvas" is to do all the things the DOM doesn't, like virtualizing content, just-in-time layout and styling, visual effects, custom gestures and hit-testing, and so on. It's all nuts-and-bolts stuff. Having to pre-stage all the DOM content you want to draw sounds... very counterproductive. From a reactivity point of view, it's also a bad idea to route this stuff back through the same document tree, because it sets up potential cycles with observers. A canvas that's rendering DOM content isn't really a document element anymore; it's doing something else entirely.

Canvas-based spreadsheet that skips the DOM entirely

The actual Achilles' heel of canvas is that you don't have any real access to system fonts, text layout APIs, or UI utilities. It's quite absurd how basic it is. You have to implement everything from scratch, including Unicode word splitting, just to get wrapped text. The proposal is "just use the DOM as a black box for content." But we already know that you can't do anything except more CSS/SVG kitbashing this way. text-overflow: ellipsis and friends will still be broken, and you will still need to implement circa-1990 UIs from scratch to fix it. It's all-or-nothing, when what you actually want is something right in the middle. That's why the lower level needs to be opened up.

Where To Go From Here

The goals of "HTML in Canvas" do strike a chord, with chunks of HTML used as free-floating fragments, a notion that has always existed under the hood. It's a composite value type you can handle. But it should not drag 20 years of useless baggage along while not enabling anything truly novel. The kitbashing of the web has also resulted in enormous stagnation, and a loss of general UI finesse. When UI behaviors have to be mined out of divs, it limits the kinds of solutions you can even consider.
Fixing this within DOM/HTML seems unwise, because there's just too much mess inside. Instead, new surfaces should be opened up outside of it.

WebGPU-based box model

My schtick here has become to point awkwardly at Use.GPU's HTML-like renderer, which does a full X/Y flex model in a fraction of the complexity or code. I don't mean my stuff is super great; no, it's pretty bare-bones and kinda niche... and yet definitely nicer. Vertical centering is easy. Positioning makes sense. There is no semantic HTML or CSS cascade, just first-class layout. You don't need 61 different accessors for border* either. You can just attach shaders to divs. Like, that's what people wanted, right? Here's a blueprint: it's mostly just SDFs. Font and markup concerns only appear at the leaves of the tree, where the text sits. It's striking how you can do like 90% of what the DOM does here, with a fraction of the complexity of HTML/CSS/SVG, if you just reinvent that wheel. Done by one guy. And yes, I know about the second 90% too.

The classic data model here is a view tree and a render tree. What should the view tree actually look like? And what can it be lowered into? What is it being lowered into right now, by a giant pile of legacy crud?

Alt-browser projects like Servo or Ladybird are in a position to make good proposals here. They have the freshest implementations, and are targeting the most essential features first. The big browser vendors could also do it, but, well, taste matters. Good big systems grow from good small ones, not from bad big ones. Maybe if Mozilla hadn't imploded... but alas. Platform-native UI toolkits are still playing catch-up with declarative and reactive UI, so that's that. Native Electron alternatives like Tauri could be helpful, but they don't treat origin isolation as a design constraint, which makes security teams antsy. There's a feasible carrot to dangle for them though, namely better process isolation.
Because of CPU exploits like Spectre, multi-threading via SharedArrayBuffer and Web Workers is kinda dead on arrival anyway, and that affects all WASM. The details are boring, but right now it's an impossible sell when websites have to integrate things like OAuth and Zendesk. Reinventing the DOM to ditch all legacy baggage could coincide with redesigning it for a more multi-threaded, multi-origin, and async web. The browser engines are already multi-process... what did they learn? A lot has happened since Netscape, with advances in structured concurrency, ownership semantics, FP effects... all of which could come in handy here.

* * *

Step 1 should just be a data model that doesn't have 350+ properties per node, though. Don't be under the mistaken impression that this isn't entirely fixable.

15 hours ago 4 votes
Omarchy is on the move

Omarchy has been improving at a furious pace. Since it was first released on June 26, I've pushed out 18(!) new releases together with a rapidly growing community of collaborators, users, and new-to-Linux enthusiasts. We have about 3,500 early adopters on the Omarchy Discord, 250 pull requests processed, and one heck of an awesome Arch + Hyprland Linux environment to show for it!

The latest release is 1.11.0, and it brings an entirely overhauled control menu to the experience. Now everything is controlled through a single, unified system that makes it super fast to operate Omarchy's settings and options through the keyboard. It's exactly the kind of hands-off-the-mouse operation that I've always wanted, and with Linux, I've been able to build it just to my tastes. It's a delight.

There's really something special going on with Linux at the moment. Arch has been around for twenty years, but with Hyprland on top, it's been catapulted in front of an entirely new audience: folks who'd never thought open source could deliver a desktop experience worth giving up Windows or macOS for.

Of course, Linux isn't for everyone. It's still an adventure! An awesome, teach-you-about-computers adventure, but not everyone is into computer adventures. Plenty of people are content with a computer appliance where they never have to look under the hood. All good. Microsoft and Apple have those people covered. But the world is a big place! And in that big place, there are a growing number of computer enthusiasts who've grown very disillusioned with both Microsoft and Apple. Folks who could be enticed to give Linux a look, if the barrier were a little lower and the benefits a little clearer. Those are the folks I'm building Omarchy for.

20 hours ago 3 votes
Digital hygiene: Notifications

Take back your attention.

4 hours ago 2 votes
We Are Still the Web

Twenty years ago, Kevin Kelly wrote an absolutely seminal piece for Wired. This week is a great opportunity to look back at it. The post We Are Still the Web appeared first on The History of the Web.

yesterday 4 votes
Extending My Japanese Visa as a Freelancer

With TokyoDev as my sponsor, I extended my Engineer/Specialist in Humanities/International Services visa for another three years. I'm thrilled by this result, because my family and I recently moved to a small town in Kansai and have been enjoying our lives in Japan more than ever. Since I have some experience with bureaucracy in Japan, I was prepared for things to get . . . complicated. Instead, I was pleasantly surprised. Despite the fact that I'd changed jobs and had three dependents, the process was much simpler than I expected. Below I'll share my particular experience, which should be especially helpful to those in the Kansai area, and cover the following:

- What a visa extension is
- What happens when you change jobs mid-visa
- The documents your new sponsoring company needs to provide
- The documents you need to assemble yourself
- Some paperwork issues you might encounter
- What you can expect when visiting an immigration office (particularly in Osaka)
- Follow-up actions you'll be required to take
- Information I wish I'd had

What do I mean by "visa extension"?

In 2022, I was a permanent employee at a company in Tokyo, which agreed to sponsor my Engineer/Specialist in Humanities/International Services visa and bring me to Japan. Initially I received a three-year work visa, and at the same time my husband and two children each received a three-year Dependent visa. Our original visas were set to expire in August 2025, but we've decided to remain in Japan long-term, so we wanted to prolong our stay. Since Japan's immigration offices accept visa extension applications beginning three months before the visa end date, I began preparing my application in May 2025 and submitted it in June. It's a good idea to begin the visa extension process as soon as possible. There are no downsides to doing so, and beginning early can help prevent serious complications.
If you have a bank account in Japan, it can be frozen when your original visa expires; you will either need to show the bank your new residence card before that date, or demonstrate that you are currently in the process of extending your visa. Your My Number Card also expires on the original visa expiration date. This process is also often called a "visa renewal," but it's the same procedure: there is no difference between an extension and a renewal.

New employment status and employer

In the three years since my visa was originally issued, I became a freelancer, or sole proprietor (個人事業主, kojin jigyou nushi), and left my original sponsoring company. Paul McMahon was not only one of my first clients in Japan, but also the first to offer me an ongoing contract, which was enormously helpful. When I made my formal exit from my initial company, I was able to list TokyoDev as my new employer when notifying Immigration.

The documents required

TokyoDev also agreed to sponsor my visa, which meant Paul would provide documentation about the company to Immigration. I'd assumed this paperwork might be difficult or time-intensive, but Paul reassured me that the entire process was quite simple and only took a few hours. This work does not increase linearly per international employee, either; once a company knows which documents are required, it is relatively simple to repeat the process for each employee. I'm not the first worker TokyoDev has sponsored. In fact, TokyoDev successfully sponsored a contractor within a month of incorporation, with the only fees being those required for gathering the paperwork.

Company documents

Exactly which documents are required varies according to the status of the company.
In this specific case, the documents Paul provided for TokyoDev, a category 4 company, were:

- The company portion of my visa extension application
- TokyoDev's legal report summary (法定調書合計表, hotei chosho goukei-hyou) for the previous fiscal year
- TokyoDev's Certificate of Registration (登記事項証明書, touki jikou shoumei-sho)
- A copy of TokyoDev's financial statements (決算書, kessan-sho) for the latest fiscal year
- A business description of TokyoDev, which in this case was a sales presentation in Japanese that explained the premise of the company

Personal documents

The documents I supplied myself were:

- My passport and residence card
- My portions of my visa extension application
- A visa-sized photo (taken at a photo booth)
- The signed contract between myself and TokyoDev
- A contract with a secondary client
- My tax payment certificate for the previous year (納税証明書, nouzei shoumei-sho), which I got from our town hall
- My resident tax certificate (住民税の課税, juuminzei no kazei), which I got from our town hall

I had to prepare some additional documents for my dependents. These were:

- The residence cards and passports of my children
- Copies of my own residence card and passport, for my husband's application
- Visa extension applications for my dependent children and husband
- A visa-sized photo of my husband (children under 16 don't need photos)
- Copies and Japanese translations of the children's birth certificates
- A copy and Japanese translation of our American wedding certificate

Paperwork tips

A few questions and complications did arise while I was assembling the paperwork.

Japanese translations

I had Japanese translations of my children's birth certificates and my marriage certificate already, left over from registering my initial address with City Hall. These translations were done by a coworker, and weren't certified. I've used them repeatedly for procedures in Japan and never had them rejected.

Dependent applications

First, I had a hard time locating the correct application for my dependents.
I could only find the one I've linked above, which initially didn't seem to apply, since it's for dependents of those who have a Designated Activities visa (such as researchers). I ended up filling out another, totally erroneous version of the application and had to re-do it all at the immigration office. To my chagrin, I found the paper version they had on hand was identical to the linked form!

Resident tax certificate in a new town

Next, my resident tax certificate was complicated by the fact that I'd lived in my new town in Nara for about seven months, and hadn't yet paid any resident tax locally. Fortunately my first resident tax installment came due around that time, so I paid it promptly, then got the form from City Hall demonstrating that it had indeed been paid. I wasn't sure a single payment would be enough to satisfy Immigration, but it seemed to work. If I'd needed to prove payment for previous years, I would have had to request that certificate from the town I'd previously lived in, Hachioji. Since this would have been a tedious process involving mailing things back and forth and a money order, I was glad to avoid it.

Giving a "reason for extension"

When filling out my application, Paul advised me to ask for a five-year extension: he said Immigration might not grant it, but it probably wouldn't hurt my chances. I did that, and in the brief space where you write "Reason for extension," I crammed in several sentences about how my career is based in Japan, my husband is studying shakuhachi, and my children attend public Japanese school and speak Japanese. All our applications included at least some of these details. This probably wasn't necessary, and it's hard to say whether it influenced the final result, but that was how I approached it.

That pesky middle name

I worried that, since I'd signed my TokyoDev contract without my middle name, which is present on my passport and residence card, the application would be rejected.
This sort of name-based nitpicking is common enough at Japanese banks; would Immigration react the same way? Paul assured me that other employees had submitted their contracts without middle names and had no trouble. He was right, and it wasn't an issue, but I've decided in future to sign everything with all three of my names, just to be sure.

Never make this mistake

Finally, my husband wrote his own application, then had to rewrite it at the immigration office because they realized he'd used a Frixion (erasable) pen. This is strictly not allowed, so save yourself some trouble and use a regular ballpoint with blue or black ink!

The application process

Before making the trip to an immigration office, I polled my friends and checked Google Maps reviews. The office nearest to me had some one-star reviews, and a friend of mine described a negative experience there, so I was leery of simply going with the closest option. Instead, I decided to apply at an office farther from home, the Osaka Regional Immigration Bureau by Cosmosquare Station, which my friend had used for years. I wasn't entirely sure this was permitted, but nobody at the Osaka office raised an eyebrow at my Nara address.

Getting there

I took the train to Cosmosquare Station and arrived around lunchtime on Friday, June 20th. The station itself has an odd quirk: every time I try to use Google Maps inside or near it, I receive bizarrely inaccurate directions. Whatever the building is made of, it really messes with Maps! Luckily the signage around Cosmosquare is quite clear, and I had no difficulty locating the immigration office once I stopped trying to use my phone.

Unfortunately, I must have picked one of the worst times to visit. The office is on the second floor, but the line extended out the door and down the staircase. At least it was moving quickly, and I soon discovered that there is a convenience store on the second floor, which proved important later on.
Asking for information

The line I was standing in led to two counters, Application and Information. Since I wasn’t sure I had filled out the correct forms for my dependents, I stopped by the Information desk first. The man there spoke English well, and informed me that I had, in fact, filled out the wrong paperwork. This mistake was easily fixed because there were printed copies of the correct form—and of every other form used by Immigration—right by the doorway.

The clerk also confirmed what I’d already suspected, that I couldn’t submit an application on behalf of my husband. Since I’d come alone while he watched the kids, he’d have to come by himself later.

I took fresh copies of the applications for my children. Since the office itself was quite full, I went to the convenience store and enjoyed a soda while filling out the paperwork again. That convenience store also has an ID photo booth, a copier, and revenue stamps, so it’s well-equipped to assist applicants.

Submitting the application

Armed with the correct paperwork, I got back into line and waited around 10 minutes for my turn to submit. The woman behind the desk glanced quickly through my documents. Mostly she wanted to know if I needed to make any copies, because I wouldn’t be receiving these documents back. Once I’d confirmed I didn’t need any papers returned, she gave me a number and asked me to wait to be called.

In addition to my number, she handed me a postcard on which to write my own address. This would be mailed to me if and when Immigration approved the visa extension, to indicate by what date I needed to pick up my new residence card.

Based on the messages I periodically sent my husband, my number wasn’t called for three and a half hours. The office was crowded and hot, but there were also screens showing the numbers called in the hallway and downstairs in the lobby, so it’s possible to visit the convenience store or stretch your legs without missing your turn.
Being able to purchase snacks and drinks at will certainly helped. Mostly, I wished I had brought a good book with me.

When my number was finally called, I was surprised they had no questions for me. The clerks had spotted one place in the documents where I’d forgotten to sign; once that minor omission was corrected, I was free to go. A paper was stapled into my passport, and my residence card was stamped on the back to show that I was going through the visa extension process.

My husband’s experience

My husband visited the Osaka Regional Immigration Bureau at 9:30 a.m. on Monday, June 26th. Although he described it as “quite busy” already, there was no line down the staircase, and he was finished by noon. If you want to avoid long wait times, arriving early in the morning might help.

Approval and picking up

Given the crowd that had packed the Osaka immigration office, and knowing how backed up the immigration offices in Tokyo can be, I fully expected not to see our postcards for several months. (Immigration regularly publishes statistics on processing times for the various visa types, based on national averages.) In fact, my husband and I received our postcards the same day, July 11th, just three weeks after I’d submitted my and my children’s applications.

As usual, there was no indication on the postcard of how long our visa extension would be: we would only find out whether we’d qualified for a one-, three-, or five-year extension once we picked up our new residence cards. I had until July 18th to collect the cards for myself and the kids, and my husband had until the 25th to get his. We opted to go together on July 14th.

The postcards also indicated that we’d need four 6,000-yen revenue stamps, one for each applicant. Revenue stamps (収入印紙, shuunyuu inshi) are a cash substitute, similar to a money order, that you affix to specific documents.
Though we knew that the convenience store at the Osaka Regional Immigration Bureau sold revenue stamps, we decided to secure them in advance, just in case. The morning we left, we stopped by our local post office and showed the staff our postcards. They had no trouble identifying and providing the stamps we needed.

We arrived at the immigration office around 10:45 a.m. Foolishly, we’d assumed that picking up the cards would be a faster process. Instead, we waited for nearly four hours. Fortunately, we’d discussed this possibility with several family friends, who were prepared to pick up our children from school if we were running late. We finally got our cards, and the news was good: we’d all received three-year extensions!

Aftermath

Extending our visas and receiving new residence cards entails some further paperwork. Specifically:

- My husband will need to reapply for permission to work.
- We’ll need new My Number cards for all family members, as those expire on the original visa expiration date.
- Our Japanese bank account will also be frozen on the original visa expiration date, so it’s important that we inform our bank of the extension and provide copies of our new cards as soon as possible.

If you are still going through the extension process when your original visa expires, you can show the bank your residence card, which should be stamped to indicate you are currently extending your visa, to prevent them from freezing your account in the interim.

Top Takeaways

Here’s a brief list of the most important questions I had during the process, and the answers I found.

Can I apply for a visa extension on behalf of my spouse and children?

Yes for underage children, no for a spouse, unless there are serious extenuating circumstances (such as the spouse being hospitalized). If you and your spouse don’t apply at the same time, make sure your dependent spouse has a copy of your passport and residence card to take with them.
Do you have to apply at the nearest immigration office?

Not necessarily. For personal reasons, I applied at one slightly farther from my house, and actually in another prefecture. However, this only worked because the Osaka office is a regional bureau, with broader jurisdiction that includes Nara. It probably wouldn’t have worked in reverse—for example, if I lived in Osaka and applied at the satellite office in Nara, which only has jurisdiction over Nara and Wakayama. Be sure to check the jurisdiction of the immigration office you choose.

Is there any downside to applying early?

There is no downside to getting your application in as soon as possible. Immigration begins accepting applications three months before the visa expiration date. I originally wondered whether an early extension would mean “losing” a few months of your visa. For example, if I received my new card in June, but my visa was originally due to expire in August, would the new expiration date be in June? This isn’t the case: the new expiration date is based on the previous expiration date, not on when you submit your application. My visa’s prior expiration date was August 2025, and it’s now August 2028.

If you’re extending a visa that was issued for longer than one year, how many years of tax certificates and records do you need to provide?

I only provided my previous fiscal year’s tax certificate and proof of one resident tax payment in my current town, and that seemed to be enough. I wasn’t asked for documentation of previous years or paperwork from my prior town hall.

Conclusion

I’ve lived in several countries over the last fifteen years, so I’m experienced in general at acquiring and retaining visas. Japan’s visa system is paperwork-intensive, but it’s also fair, stable, and reasonably transparent. The fact that my Japanese visa isn’t attached to a single company, but rather to the type of work I wish to perform, gives me peace of mind as I continue to establish our lives here.
I also feel more comfortable as a freelancer in Japan now that I know how easy it is for a company to sponsor my visa. Paul was able to assemble the needed documents in a single afternoon, and it didn’t cost TokyoDev anything beyond the price of the papers and postage. As freelancing and gig work are on the rise, I’d encourage more Japanese companies to consider sponsoring visas for their international contractors. Likewise, I hope that the experience I’ve shared here will help other immigrants explore their freelancing options in Japan, and approach the visa extension process with both good information and a solid plan.

If you’d like to continue the conversation on visa extensions and company sponsorship, you can join the TokyoDev Discord. Or see more articles on visas for developers, starting your own business in Japan, and remaining here long-term.
