Last year, my coworker Suzanne and I got a case study accepted into an ethnography conference! She's an anthropologist by training, and I'm a software engineer. How'd we wind up there? The short version is I asked her what I thought was an interesting question, and she thought it was an interesting question, and we wound up doing research. We have a lot more to go on the original question—we're studying the relationship between the values we hold as engineers and our documentation practices—and this case study is the first step in that direction. The title of the case study is a mouthful: "Foundations of Practice: An Ethnographic Exploration of Software Documentation and its Cultural Implications for an Agile Startup". I mean, it's intended for a more academic audience, and in a field that's not software engineering. But what we studied is relevant for software engineers, so let's talk about it here in more familiar language! If you want to read the full case study, it's open...
2 weeks ago


More from ntietz.com blog - technically a blog

What's in a ring buffer? And using them in Rust

Working on my cursed MIDI project, I needed a way to store the most recent messages without risking an unbounded amount of memory usage. I turned to the trusty ring buffer for this! I started by writing a very simple one myself in Rust, then I looked and what do you know, of course the standard library has an implementation. While I was working on this project, I was pairing with a friend. She asked me about ring buffers, and that's where this post came from! What's a ring buffer? Ring buffers are known by a few names. Circular queues and circular buffers are the other two I hear the most. But this name just has a nice ring to it. It's an array, basically: a buffer, or a queue, or a list of items stacked up against each other. You can put things into it and take things out of it the same as any buffer. But the front of it connects to the back, like any good ring1. Instead of adding to the end and popping from the end, like a stack, you can add to one end and remove from the start, like a queue. And as you add or remove things, the start and end of the list move around. Your data eats its own tail. This lets us keep a fixed number of elements in the buffer without running into reallocation. In a regular old buffer, if you use it as a queue—add to the end, remove from the front—then you'll eventually need to either reallocate the entire thing or shift all the elements over. Instead, a ring buffer lets you just keep adding to either end and removing from either end and you never have to reallocate! Uses for a ring buffer My MIDI program is one example of where you'd want a ring buffer, to keep the most recent items. There are some general situations where you'll run into this: You want a fixed number of the most recent things, like the last 50 items seen You want a queue, especially with a fixed maximum number of queued elements2 You want a buffer for data coming in with an upper bound, like with streaming audio, and you want to overwrite old data if the consumer can't keep up for a bit A lot of it comes down to performance, and streaming data. Something is producing data, something else is consuming it, and you want to make sure insertions and removals are fast. That was exactly my use: a MIDI device produces messages, my UI consumes them, but I don't want to fill up all my memory, forever, with more of them. How ring buffers work So how do they work? This is a normal, non-circular buffer: When you add something, you put it on the end. If you're using it as a stack, then you can pop things off the end. But if you're using it as a queue, you pop things off the front... And then you have to shuffle everything to the left. That's why we have ring buffers! I'll draw it as a straight line here, to show how it's laid out in memory. But the end loops back to the front logically, and we'll see how it wraps around. We start with an empty buffer, and start and end both point at the first element. When start == end, we know the buffer is empty. If we insert an element, we move end forward. And it gets repeated as we insert multiple times. If we remove an element, we move start forward. We can also start the buffer at any point, and it crosses over the end gracefully. Ring buffers are pretty simple when you get into their details, with just a few things to wrap your head around. It's an incredibly useful data structure! Rust options If you want to use one in Rust, what are your options? There's the standard library, which includes VecDeque. 
This implements a ring buffer, and the name comes from "Vec" (Rust's growable array type) combined with "Deque" (a double-ended queue). As this is in the standard library, this is quite accessible from most code, and it's a growable ring buffer. This means that the pop/push operations will always be efficient, but if your buffer is full it will result in a resize operation. The amortized running time will still be O(1) for insertion, but you incur the cost all at once when a resize happens, rather than at each insertion. You can enforce size limits in your code, if you want to avoid resize operations. Here's an example of how you could do that. You check if it's full first, and if so, you remove something to make space for the new element. let mut buffer: VecDeque<u32> = VecDeque::with_capacity(10); for thing in 0..15 { // if the buffer is already full, remove the first element to make space if buffer.len() == 10 { buffer.pop_front(); } buffer.push_back(thing); } There are also libraries that can enforce this for you! For example, circular-buffer implements a ring buffer. It has a fixed max capacity and won't resize on you, instead overwriting elements when you run out of space. The size is set at compile time, though, which can be great or can be a constraint that's hard to meet. There is also ringbuffer, which gives you a fixed size buffer that's heap allocated at runtime. That buys you some flexibility, with the drawback of being heap-based instead of stack-based. I'm definitely reaching for a library when I need a non-growable ring buffer in Rust now. There are some good choices, and it saves me the trouble of having to manually enforce size limits. 1 One of my favorite rings represents the sun and the moon, with the sun nestling inside this crescent moon shape. Unfortunately, the ring is open there. It makes for a very nice visual effect, but it kept getting snagged on things and bending. So it's not a very good ring for living an active hands-on life. 2 These can also be resizable! The Rust standard library one is. This comes with reallocation cost, which amortizes to a low cost but can be expensive in individual moments.
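To make the wrap-around idea concrete, here is a minimal fixed-capacity ring buffer in Rust. It's a sketch of the general technique, not the simple implementation mentioned in the post: start points at the oldest element, len counts the occupied slots, and both the read and write positions wrap with modular arithmetic (it assumes a capacity greater than zero).

// A minimal fixed-capacity ring buffer, as a sketch of the wrap-around idea
// described above. push overwrites the oldest element once the buffer is
// full, and pop removes from the front. Assumes capacity > 0.
struct RingBuffer<T> {
    slots: Vec<Option<T>>,
    start: usize,
    len: usize,
}

impl<T> RingBuffer<T> {
    fn new(capacity: usize) -> Self {
        let mut slots = Vec::with_capacity(capacity);
        slots.resize_with(capacity, || None);
        RingBuffer { slots, start: 0, len: 0 }
    }

    // Append to the back; if the buffer is full, overwrite the oldest element.
    fn push(&mut self, value: T) {
        let end = (self.start + self.len) % self.slots.len();
        self.slots[end] = Some(value);
        if self.len == self.slots.len() {
            // Full: we just overwrote the oldest element, so the logical
            // start moves forward by one.
            self.start = (self.start + 1) % self.slots.len();
        } else {
            self.len += 1;
        }
    }

    // Remove and return the oldest element, if any.
    fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            return None;
        }
        let value = self.slots[self.start].take();
        self.start = (self.start + 1) % self.slots.len();
        self.len -= 1;
        value
    }
}

fn main() {
    let mut buf = RingBuffer::new(3);
    for n in 1..=4 {
        buf.push(n); // the fourth push overwrites the 1
    }
    assert_eq!(buf.pop(), Some(2));
    assert_eq!(buf.pop(), Some(3));
    assert_eq!(buf.pop(), Some(4));
    assert_eq!(buf.pop(), None);
}

With a capacity of 3, pushing 1 through 4 overwrites the 1, and popping then yields 2, 3, 4 in order.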

a week ago 9 votes
Bright lights in dark times

It's kind of dark times right now. And I'm definitely only talking about the days being short. It's pretty dark out right now, since it's the winter in the northern hemisphere. Every year, I start to realize somewhere around January that I'm tired, really tired, and don't want to do anything. I'm not sure that this is seasonal affective disorder, but I did start to explore it and found that bright light makes me feel a lot better in the winter. The problem is, I go through this discovery every year, it seems. My earlier attempts After I first learned that bright light can actually help you feel better in the winter, I got a Happy Light. That's the actual branding on it, and it might have helped, maybe? But it was really inconvenient to use. I got one of the more affordable ones, which meant a small panel I had to sit uncomfortably close to. And I was supposed to use it almost first thing in the morning for 30 minutes. That's... really not going to fit in my routine, especially now that I have two young kids who are raring to go right at 7am, and they do not want to deal with mom sitting in front of a lamp that early. So I just kind of stopped using it, and would poke at it in the drawer occasionally but not really do anything with it. Instead, I got a few more lamps in my office. These seemed to help, a little, but that might just be that I like my spaces well lit1. Going as bright as possible Somewhere along the line I saw a post talking about winter energy and light. And the author suggested that indoor lights just have to be bright. They recommended some high-watt LED bulbs. I went ahead and ordered some and... these are very bright. They're 100 watts of LEDs each, and I have two of them. I plugged them into these plug-in fixtures that have inline switches. These seemed to really help me out a lot. On days I used them, I was more energetic and felt better. But I had a lot of days when I didn't use them, because using them was inconvenient. The main problems were that I had to turn on two switches manually, and they were too bright to look at. Turning them on doesn't sound like a lot, but doing it every day when you're already struggling can be just enough friction to avoid doing it! The brightness was the biggest issue, because they were blinding to look at and cast this reverse shadow out of my office. The brightness was the point, so how do I deal with that? Shading it Turning on the switches was easy. I put them on some smart outlets, set a schedule, and now they turn on and off at my predetermined times! Then I had to make a shade for the lamps. My main goal here is to diffuse the light a bit, so I was thinking of a small wooden frame wrapped in a light white fabric. That would spread the light out and avoid having any spots that are too bright to look at. I took measurements for where it was going to fit, then headed out to my workshop. Then I ripped a scrap board into 0.75"x0.75" stock. I ended up with just enough for the design. This is what I ended up with, after it was glued up. Here's a bonus detail shot of the extremely unnecessary joinery here. But hey, this joinery was incredibly satisfying, and the whole thing went together without any nails or screws. Joy makes it worthwhile. And then with the fabric attached2, it ended up looking pretty nice. In its intended home, you can see that it does its job! It's now not painful to look straight at the lamp. You wouldn't want to do it, but it won't hurt you at least! 
Waiting for summer Of course, we need these bright lights during the winter, during the dark days. But summer will come again. Not all of us will see the long days. We'll lose some to the winter. But summer will come again, and we'll be able to go outside freely. Soak in the sun. Be ourselves. Dance in sunshowers. It's hard to see that in the darkest of times. I'm writing this for myself as much as I'm writing it for you. But believe me: you are strong, and I am strong, and together, we will survive. It seems like this winter is going to last a long time. And it probably will. But I'm not just talking about the winter, and the dark times are going to end, eventually. 1 Most places are very poorly lit and don't have enough lamps. You should have a lot of lamps in every room to make it as bright and cheery as possible! I refuse to believe otherwise, but my wife, a gremlin of the darkness3, does like rooms quite dim. 2 My wife helped with this step. She works with fabric a lot, and this was our first project combining our respective crafts. Certainly not our last! 3 She approved this phrasing.

3 weeks ago 12 votes
My writing process, and how I keep it sustainable

Recently, a reader wrote to me and asked about my writing process and burnout. They had an image in their head that I could sit down at a computer and type up a full post on a given topic, but were unsure if that's the right approach when they start blogging. And they were concerned about how to keep things sustainable for themselves, too. I started to write back to them, but decided to ship my answer for everyone else, as well. Now, to be clear: this is my writing process. There are many other very valid approaches, such as what Gabriella wrote. But I think outlining my process here will at least help break some of the mystique around writing, because I certainly do not just sit down and plunk out a post. Well, it often looks like that if you're an outside observer1. But that's only what's visible. How I write The very first piece of any blog post for me is an idea, some inspiration, something I want to share. These ideas come from a variety of places. Some of them are from conversations with friends or coworkers. Others come from problems I'm solving in my own code. Or they come from things I'm reading. Very rarely, they come from just thinking about things, but those are other sources with the connection hidden by the passage of time. After an idea comes to me, I try to capture it quickly. If I don't, it's probably gone! I store these ideas in a note in Obsidian, which currently has 162 ideas in it2. I format this file as a bulleted list, with minimal categorization so that the friction for adding something new is as low as possible. When I revisit this list to plan things to write (I do this every few weeks), I'll remove things which are either completed or which I'm confident I am not interested in anymore. Ideas which are promoted from "want to write" to "definitely going to write" then get moved into my primary task tracking system, LunaTask. When I start typing an article, I move it from "later" or "next" to "in progress". But in reality, I start writing an article far before I start typing it. See, the thing is, I'm almost always thinking about topics I'm going to write about. Maybe this is an element of having ADHD, or maybe it's from my obsessive interests and my deep diving into things I'm curious about. But it started happening more when I started writing on a schedule. By writing on a schedule, having to publish weekly, I've gotten in the habit of thinking about things so that I can hit the ground running when I sit down to type. Once I've picked out an idea, then it sits in the back of my head and some background processing happens. Usually I'm not crystal clear on what I want to say on a topic, so that background process churns through it. After figuring out what I want to say, I move onto a little more active thought on how I want to say it and structure it. This happens while I'm on a run or a walk, while I'm on coffee breaks, between exercises I'm working on for my music lessons. There's a lot of background processing between snatches of active effort. Piece by piece, it crystallizes in my mind and I figure out what I want to say and how to say it. Then, finally, I sit at the keyboard. And this is where it really looks like I just sit down and write posts in one quick sitting. That's because by then, the ideas are fully formed and it's a matter of writing the words out—and after a few years of that, it's something I got pretty quick at. Take this blog post for example. 
I received the initial email on January 1st, and it sat in my inbox for a few days before I could read it and digest it. I started to write a reply, but realized it would work as a blog post. So it ended up in background processing for a week, then I started actively thinking about what I wanted to write. At that point, I created a blog post file with section headings as an outline. This post ended up being delayed a bit, since I had to practice content-driven development for a project that has a self-imposed deadline. But the core writing (though not the typing) was done in the week after opening the original email. And then when I finally sat down to write it, it was two 20-minute sessions of typing out the words3. Editing and publishing After the first draft of a post is written, there's a lot of detail work remaining. This draft is usually in the approximate shape of the final post, but it may be missing sections or have too much in some. I'll take this time to do a quick read over it by myself. Here I'm not checking for spelling errors, but checking if the structure is good and if it needs major work, or just copy edits. If I'm unsure of some aspects of a post, I ask for feedback. I'll reach out to one or two friends who I think would have useful feedback and would enjoy reading it early, and ask them if they can read it. This request comes with specific feedback requests, since those are easier to fulfill than a broad "what do you think?" These are something along the lines of "was it clear? what parts stuck out to you positively or negatively? is there anything you found unclear or inaccurate?" Once people get me feedback, I add an acknowledgement for them (with explicit consent for if/how to share their name and where to link). After I'm more sure of the structure and content of a post, I'll go through and do copy edits. I read through it, with a spellchecker enabled4, and fix any misspellings and improve the wording of sentences. Then it's just the final details. I write up the blurb that goes in my newsletter and on social media (Mastodon, Bluesky, and LinkedIn, currently). I make sure the date is correct, the title is a decent one, and the slug mostly matches the title. And I make sure I did do things like acknowledgements and spellchecking. Here's the full checklist I use for every post before publishing.
- [ ] Edited?
- [ ] Newsletter blurb written?
- [ ] Spellchecked?
- [ ] Is the date correct?
- [ ] Is the title updated?
- [ ] Does the slug match the title?
- [ ] Are tags set?
- [ ] Draft setting removed?
- [ ] Are acknowledgements added?
Once that checklist is complete, it's ready to go! That means that on Monday morning, I just have to run make deploy and it'll build and push out the article. Making it sustainable I've been writing at least one post a week since September 2022—over two years at this point, and about 200,000 words. I've avoided burning out on it, so far. That's an interesting thing, since I've burned out on personal projects in the past. What makes this different? One aspect is that it's something that's very enjoyable for me. I like writing, and the process of it. I've always liked writing, and having an outlet for it is highly motivating for me. I thrive in structure, so the self-imposed structure of publishing every week is helpful. This structure makes me abandon perfection and find smaller pieces that I can publish in a shorter amount of time, instead of clinging to a big idea and shipping all of it at once, or never. 
This is where content-driven development comes in for me, and the consistent schedule also leads me to create inspiration. I also don't put pressure on myself besides the deadline. That's the only immovable thing. Everything else—length, content, even quality5—is flexible. Some of my earliest posts from 2022 are of the form "here's what I did this week!" which got me into a consistent rhythm. The most important thing for me is to keep it fun. If I'm not finding something fun, I just move on to a different post or a different topic. I have some posts that would be good but have sat in my backlog for over a year, waiting to be written. Whenever I sit down to write those, my brain decides to find a bunch of other, better, posts to write instead! So far, it's been fun and motivating to write weekly. If that ever stops, and it becomes a job and something I dread, I'll change the format to be something that works better for me instead! 1 If this is what it looks like then hey, stop peeking through my window. 2 I keep them in a bulleted list, each starting with -, so I can find the count via: grep "\s*-" 'Blog post ideas.md' | wc -l 3 Drafting documents is where I feel like my typing speed is actually a real benefit. On a typing test this evening, I got 99% accuracy at 120wpm. This lets me get through a lot of words quickly. This is a big plus for me both as a writer and as a principal engineer. 4 I leave my spellchecker disabled the vast majority of the time, since the squiggles are distracting! It's not a big deal to misspell things most of the time, and I'd rather be able to focus on getting words out than noticing each mistake as I make it—talk about discouraging! 5 That said, I think it's hard to judge your own quality in the moment. As writers, we can be very critical of ourselves. We think we wrote something boring but others find it interesting. Or something we think is quite clever, others find uninspiring. Sometimes, when I push out something I think is lazy and quick and low-quality, I'm surprised to find a very eager reception.

4 weeks ago 37 votes
Beginning of a MIDI GUI in Rust

A project I'm working on (which is definitely not my SIGBOVIK submission for this year, and definitely not about computer ergonomics) requires me to use MIDI. And to do custom handling of it. So I need something that receives those MIDI events and handles them. But... I'm going to make mistakes along the way, and a terminal program isn't very interesting for a presentation. So of course, this program also needs a UI. This should be simple, right? Just a little UI to show things as they come in, should be easy, yeah? Hahahaha. Haha. Ha. Ha. Whoops. The initial plan Who am I kidding? There was no plan. I sat down with egui's docs open in a tab and started just writing a UI. After a few false starts with this—it turns out, sitting down without a plan is a recipe for not doing anything at all—I finally asked some friends to talk through it. I had two short meetings with two different friends. Talking it through with them forced me to figure out ahead of time what I wanted, which made the problems much clearer. Laying out requirements Our goal here is twofold: to provide a debug UI to show MIDI messages and connected devices, and to serve as scaffolding for future MIDI shenanigans. A few requirements fall out of this1: Display each incoming MIDI message in a list in the UI Display each connected MIDI device in a list in the UI Allow filtering of MIDI messages by type in the UI Provide a convenient hook to add subscribers, unrelated to the UI, which receive all MIDI messages Allow the subscribers to choose which device categories they receive messages from (these categories are things like "piano," "drums", or "wind synth") Dynamically detect MIDI devices, handling the attachment or removal of devices Minimize duplication of responsibility (cloning data is fine, but I want one source of truth for incoming data, not multiple) Now that we have some requirements, we can think about how this would be implemented. We'll need some sort of routing system. And the UI will need state, which I think should be separate from the state of the core application which handles the routing. That is, I want to make it so that the UI is a subscriber just like all the other subscribers are. Jumping ahead a bit, here's what it looks like when two of my MIDI devices are connected, after playing a few notes. Current architecture The architecture has three main components: the MIDI daemon, the message subscribers, and the GUI. The MIDI daemon is the core that handles detecting MIDI devices and sets up message routing. This daemon owns the connections for each port2, and periodically checks if devices are still connected or not. When it detects a new device, it maps it onto a type of device (is this a piano? a drum pad? a wind synth?) Each new device gets a listener, which will take every message and send it into the global routing. The message routing and device connection handling are done in the same loop, which is fine—originally I was separating them, but then I measured the timing and each refresh takes under 250 microseconds. That's more than fast enough for my purposes, and probably well within the latency requirements of most MIDI systems. The next piece is message subscribers. Each subscriber can specify which type of messages it wants to get, and then their logic is applied to all incoming messages. Right now there's just the GUI subscriber and some debug subscribers, but there'll eventually be a better debug subscriber and there'll be the core handlers that this whole project is written around. 
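As a rough sketch of that routing idea — hypothetical names and plain std::sync::mpsc channels, not the project's actual code — the daemon can own a single incoming channel and fan each message out to whichever subscribers registered for its device category:

use std::sync::mpsc::{Receiver, Sender};
use std::thread;

// Hypothetical types for illustration: the daemon is the one source of truth
// for incoming data, and each subscriber gets clones of the messages that
// match the device categories it asked for.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Category {
    Piano,
    Drums,
    WindSynth,
}

#[derive(Clone, Debug)]
struct MidiMessage {
    category: Category,
    bytes: Vec<u8>,
}

struct Subscriber {
    wants: Vec<Category>,
    send: Sender<MidiMessage>,
}

fn run_daemon(incoming: Receiver<MidiMessage>, subscribers: Vec<Subscriber>) {
    thread::spawn(move || {
        for msg in incoming {
            for sub in &subscribers {
                if sub.wants.contains(&msg.category) {
                    // Cloning is fine: one writer, many readers.
                    let _ = sub.send.send(msg.clone());
                }
            }
        }
    });
}

fn main() {
    let (daemon_send, daemon_recv) = std::sync::mpsc::channel();
    let (piano_send, piano_recv) = std::sync::mpsc::channel();

    run_daemon(
        daemon_recv,
        vec![Subscriber { wants: vec![Category::Piano], send: piano_send }],
    );

    // Pretend a device listener produced a note-on message.
    daemon_send
        .send(MidiMessage { category: Category::Piano, bytes: vec![0x90, 60, 100] })
        .unwrap();

    println!("piano subscriber got: {:?}", piano_recv.recv().unwrap());
}

Each subscriber then drains its own Receiver on its own thread, much like the debug echo subscriber shown below.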
(Is this GUI part of a giant yak shave? Maaaaybeeeee.) The subscribers look pretty simple. Here's one that echoes every message it receives (where dbg_recv is its queue). You just receive from the queue, then do whatever you want with that information! std::thread::spawn(move || loop { match dbg_recv.recv() { Ok(m) => println!("debug: {m:?}"), Err(err) => println!("err: {err:?}"), } }); Finally we reach the GUI, which has its own receiver (marginally more complicated than the debug print one, but not much). The GUI has a background thread which handles message receiving, and stores these messages into state which the GUI uses. This is all separated out so we won't block a frame render if a message takes some time to handle. The GUI also contains state, in two pieces: the ports and MIDI messages are shared with the background thread, so they're in an Arc<Mutex>. And there is also the pure GUI state, like which fields are selected, which is not shared and is just used inside the GUI logic. I think there's a rule that you can't bandy around the word "architecture" without drawing at least one diagram, so here's one diagram. This is the flow of messages through the system. Messages come in from each device, go into the daemon, then get routed to where they belong. Ultimately, they end up stored in the GUI state for updates (by the daemon) and display (by the GUI). State of the code This one's not quite ready to be used, but the code is available. In particular, the full GUI code can be found in src/ui.rs. Here are the highlights! First let's look at some of the state handling. Here's the state for the daemon. CTM is a tuple of the connection id, timestamp, and message; I abbreviate it since it's just littered all over the place. pub struct State { midi: MidiInput, ports: Vec<MidiInputPort>, connections: HashMap<String, MidiInputConnection<(ConnectionId, Sender<CTM>)>>, conn_mapping: HashMap<ConnectionId, Category>, message_receive: Receiver<CTM>, message_send: Sender<CTM>, ports_receive: Receiver<Vec<MidiInputPort>>, ports_send: Sender<Vec<MidiInputPort>>, } And here's the state for the GUI. Data that's written once or only by the GUI is in the struct directly, and anything which is shared is inside an Arc<Mutex>. /// State used to display the UI. It's intended to be shared between the /// renderer and the daemon which updates the state. #[derive(Clone)] pub struct DisplayState { pub midi_input_ports: Arc<Mutex<Vec<MidiInputPort>>>, pub midi_messages: Arc<Mutex<VecDeque<CTM>>>, pub selected_ports: HashMap<String, bool>, pub max_messages: usize, pub only_note_on_off: Arc<AtomicBool>, } I'm using egui, which is an immediate-mode GUI library. That means we do things a little differently, and we build what is to be rendered on each frame instead of retaining it between frames. It's a model which is different from things like Qt and GTK, but feels pretty intuitive to me, since it's just imperative code! This comes through really clearly in the menu handling. Here's the code for the top menu bar. We show the top panel, and inside it we create a menu bar. Inside that menu bar, we create menu buttons, which have an if statement for the click handler. egui::TopBottomPanel::top("menu_bar_panel").show(ctx, |ui| { egui::menu::bar(ui, |ui| { ui.menu_button("File", |ui| { if ui.button("Quit").clicked() { ctx.send_viewport_cmd(egui::ViewportCommand::Close); } }); ui.menu_button("Help", |ui| { if ui.button("About").clicked() { // TODO: implement something } }); }); }); Notice how we handle clicks on buttons. 
We don't give it a callback—we just check if it's currently clicked and then take action from there. This is run each frame, and it just... works. After the top menu panel, we can add our left panel3. egui::SidePanel::left("instrument_panel").show(ctx, |ui| { ui.heading("Connections"); for port in ports.iter() { let port_name = // ...snip! let conn_id = port.id(); let selected = self.state.selected_ports.get(&conn_id).unwrap_or(&false); if ui .add(SelectableLabel::new(*selected, &port_name)) .clicked() { self.state.selected_ports.insert(conn_id, !selected); } } }); Once again, we see pretty readable code—though very unfamiliar, if you're not used to immediate mode. We add a heading inside the panel, then we iterate over the ports and for each one we render its label. If it's selected it'll show up in a different color, and if we click on it, the state should toggle4. The code for displaying the messages is largely the same, and you can check it out in the repo. It's longer, but only in tedious ways, so I'm omitting it here. What's next? Coming up, I'm going to focus on what I set out to in the first place and write the handlers. I have some fun logic to do for different MIDI messages! I'm also going to build another tab in this UI to show the state of those handlers. Their logic will be... interesting... and I want to have a visual representation of it. Both for presentation reasons, and also for debugging, so I can see what state they're in while I try to use those handlers. I think the next step is... encoding bytes in base 3 or base 45, then making my handlers interpret those. And also base 7 or 11. And then I'll be able to learn how to use this weird program I'm building. Sooo this has been fun, but back to work! 1 I really try to avoid bulleted lists when possible, but "list of requirements" is just about the perfect use case for them. 2 A MIDI port is essentially a physically connected, available device. It's attached to the system, but it's not one we're listening to yet. 3 The order you add panels matters for the final rendering result! This can feel a little different from what we're used to in a declarative model, but I really like it. 4 This is debounced by egui, I think! Otherwise it would result in lots of clicks, since we don't usually hold a button down for precisely the length of one frame render. 5 or in bass drum
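One more sketch, for the background-receiver pattern the post describes: a thread drains the subscriber's queue into Arc<Mutex> shared state, so the per-frame GUI code only has to take a brief lock. The names here are illustrative assumptions, not the project's actual DisplayState.

use std::collections::VecDeque;
use std::sync::mpsc::Receiver;
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for the real (connection id, timestamp, message) tuple.
type Message = String;

#[derive(Clone, Default)]
struct Shared {
    messages: Arc<Mutex<VecDeque<Message>>>,
}

fn spawn_receiver(shared: Shared, recv: Receiver<Message>, max_messages: usize) {
    thread::spawn(move || {
        for msg in recv {
            let mut messages = shared.messages.lock().unwrap();
            if messages.len() == max_messages {
                messages.pop_front(); // keep only the most recent messages
            }
            messages.push_back(msg);
        }
    });
}

fn main() {
    let shared = Shared::default();
    let (send, recv) = std::sync::mpsc::channel();
    spawn_receiver(shared.clone(), recv, 50);
    send.send("note on".to_string()).unwrap();

    // In egui this read would happen inside the per-frame update callback;
    // the sleep is only so the background thread gets a chance to run here.
    thread::sleep(std::time::Duration::from_millis(10));
    println!("{:?}", shared.messages.lock().unwrap());
}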

a month ago 37 votes

More in programming

The Challenges Faced by Multinational Teams and Japanese Companies

It’s a fact that Japan needs more international developers. That doesn’t mean integrating those developers into Japanese companies, as well as Japanese society, is a simple process. But what are the most common challenges encountered by these companies with multinational teams? To find out, TokyoDev interviewed a number of Japanese companies with international employees. In addition to discussing the benefits of hiring overseas, we also wanted to learn more about what challenges they had faced, and how they had overcome them. The companies interviewed included:
Autify, which provides an AI-based software test automation platform
Beatrust, which has created a search platform that automatically structures profile information
Cybozu, Japan’s leading groupware provider
DeepX, which automates heavy equipment machinery
Givery, which scouts, hires, and trains world-class engineers
Shippio, which digitalizes international trading and is Japan’s first digital forwarding company
Yaraku, which offers web-based translation management
According to those companies, the issues they experienced fell into two categories: addressing the language barrier, and helping new hires come to Japan. The language barrier Language issues are by far the most universal problem faced by Japanese companies with multinational teams. As a result, all of the companies we spoke to have evolved their own unique solutions. AI translation To help improve English-Japanese communication, Yaraku has turned to AI and its own translation tool, YarakuZen. With these they’ve reduced comprehension issues down to verbal communication alone. Since their engineering teams primarily communicate via text anyway, this has solved the majority of their language barrier issue, and engineers feel that they can now work smoothly together. Calling on bilinguals While DeepX employs engineers from over 20 countries, English is the common language between them. Documentation is written in English, and even Japanese departments still write minutes in English so colleagues can check them later. Likewise, explanations of company-wide meetings are delivered in both Japanese and English. Still, a communication gap exists. To overcome it, DeepX assigns Japanese project managers who can also speak English well. English skills weren’t previously a requirement, but once English became the official language of the engineering team, bilingualism was an essential part of the role. These project managers are responsible for taking requests from clients and communicating them accurately to the English-only engineers. In addition, DeepX is producing more bilingual employees by offering online training in both Japanese and English. The online lessons have proven particularly popular with international employees who have just arrived in Japan. Beatrust has pursued a similar policy of encouraging employees to learn and speak both languages. Dr. Andreas Dippon, the Vice President of Engineering at Beatrust, feels that bilingual colleagues are absolutely necessary to business. I think the biggest mistake you can make is just hiring foreigners who speak only English and assuming all the communication inside engineering is just English and that’s fine. You need to understand that business communication with [those] engineers will be immensely difficult . . . You need some almost bilingual people in between the business side and the engineering side to make it work. Similar to DeepX, Beatrust offers its employees a stipend for language learning. 
“So nowadays, it’s almost like 80 percent of both sides can speak English and Japanese to some extent, and then there are like two or three people on each side who cannot speak the other language,” Dippon said. “So we have like two or three engineers who cannot speak Japanese at all, and we have two or three business members who cannot speak English at all.” But in the engineering team itself “is 100 percent English. And the business team is almost 100 percent Japanese.” “ Of course the leaders try to bridge the gap,” Dippon explained. “So I’m now joining the business meetings that are in Japanese and trying to follow up on that and then share the information with the engineering team, and [it’s] also the same for the business lead, who is joining some engineering meetings and trying to update the business team on what’s happening inside engineering.” “Mixed language” Shippio, on the other hand, encountered negative results when they leaned too hard on their bilingual employees. Initially they asked bilinguals to provide simultaneous interpretation at meetings, but quickly decided that the burden on them was too great and not sustainable in the long term. Instead, Shippio has adopted a policy of “mixed language,” or combining Japanese and English together. The goal of mixed language is simple: to “understand each other.” Many employees who speak one language also know a bit of the other, and Shippio has found that by fostering a culture of flexible communication, employees can overcome the language barrier themselves. For example, a Japanese engineer might forget an English word, in which case he’ll do his best to explain the meaning in Japanese. If the international engineer can understand a bit of Japanese, he’ll be able to figure out what his coworker intended to say, at which point they will switch back to English. This method, while idiosyncratic to every conversation, strikes a balance between the stress of speaking another language and consideration for the other person. The most important thing, according to Shippio, is that the message is conveyed in any language. Meeting more often Another method these companies use is creating structured meeting schedules designed to improve cross-team communication. Givery teams hold what they call “win sessions” and “sync-up meetings” once or twice a month, to ensure thorough information-sharing within and between departments. These two types of meetings have different goals: A “win session” reviews business or project successes, with the aim of analyzing and then repeating that success in future. A “sync-up meeting” helps teams coordinate project deadlines. They report on their progress, discuss any obstacles that have arisen, and plan future tasks. In these meetings employees often speak Japanese, but the meetings are translated into English, and sometimes supplemented with additional English messages and explanations. By building these sort of regular, focused meetings into the company’s schedule, Givery aims to overcome language difficulties with extra personal contact. Beatrust takes a similarly structured, if somewhat more casual, approach. They tend to schedule most meetings on Friday, when engineers are likely to come to the office. However, in addition to the regular meetings, they also hold the “no meeting hour” for everyone, including the business team. “One of the reasons is to just let people talk to each other,” Dippon explained. 
“Let the engineers talk to business people and to each other.” This kind of interaction, we don’t really care if it’s personal stuff or work stuff that they talk about. Just to be there, talking to each other, and getting this feeling of a team [is what’s important]. . . . This is hugely beneficial, I think. Building Bonds Beatrust also believes in building team relationships through regular off-site events. “Last time we went to Takaosan, the mountain area,” said Dippon. “It was nice, we did udon-making. . . . This was kind of a workshop for QRs, and this was really fun, because even the Japanese people had never done it before by themselves. So it was a really great experience. After we did that, we had a half-day workshop about team culture, company culture, our next goals, and so on.” Dippon in particular appreciates any chance to learn more about his fellow employees. Like, ‘Why did you leave your country? Why did you come to Japan? What are the problems in your country? What’s good in your country?’ You hear a lot of very different stories. DeepX also hopes to deepen the bonds between employees with different cultures and backgrounds via family parties, barbecues, and other fun, relaxing events. This policy intensified after the COVID-19 pandemic, during which Japan’s borders were closed and international engineers weren’t able to immigrate. When the borders opened and those engineers finally did arrive, DeepX organized in-house get-togethers every two weeks, to fortify the newcomers’ relationships with other members of the company. Sponsoring visas Not every company that hires international developers actually brings them to Japan—-quite a few prefer to hire foreign employees who are already in-country. However, for those willing to sponsor new work visas, there is universal consensus on how best to do it: hire a professional. Cybozu has gone to the extent of bringing those professionals in-house. The first international member they hired was an engineer living in the United States. Though he wanted to work in Japan, at that time they didn’t have any experience in acquiring a work visa or relocating an employee, so they asked him to work for their US subsidiary instead. But as they continued hiring internationally, Cybozu realized that quite a few engineers were interested in physically relocating to Japan. To facilitate this, the company set up a new support system for their multinational team, for the purpose of providing their employees with work visas. Other companies prefer to outsource the visa process. DeepX, for example, has hired a certified administrative scrivener corporation to handle visa applications on behalf of the company. Autify also goes to a “dedicated, specialized” lawyer for immigration procedures. Thomas Santonja, VP of Engineering at Autify, feels that sponsoring visas is a necessary cost of business and that the advantages far outweigh the price. We used to have fully remote, long-term employees outside of Japan, but we stopped after we noticed that there is a lot of value in being able to meet in person and join in increased collaboration, especially with Japanese-speaking employees that are less inclined to make an effort when they don’t know the people individually. 
“It’s kind of become a requirement, in the last two years,” he concluded, “to at least be capable of being physically here.” However, Autify does prevent unnecessary expenses by having a new employee work remotely from their home country for a one month trial period before starting the visa process in earnest. So far, the only serious issues they encountered were with an employee based in Egypt; the visa process became so complicated, Autify eventually had to give up. But Autify also employs engineers from France, the Philippines, and Canada, among other countries, and has successfully brought their workers over many times. Helping employees adjust Sponsoring a visa is only the beginning of bringing an employee to Japan. The next step is providing special support for international employees, although this can look quite different from company to company. DeepX points out that just working at a new company is difficult enough; also beginning a new life in a new country, particularly when one doesn’t speak the language, can be incredibly challenging. That’s why DeepX not only covers the cost of international flights, but also implemented other support systems for new arrivals. To help them get started in Japan, DeepX provides a hired car to transport them from the airport, and a furnished monthly apartment for one month. Then they offer four days of special paid leave to complete necessary procedures: opening a bank account, signing a mobile phone contract, finding housing, etc. The company also introduces real estate companies that specialize in helping foreigners find housing, since that can sometimes be a difficult process on its own. Dippon at Beatrust believes that international employees need ongoing support, not just at the point of entry, and that it’s best to have at least one person in-house who is prepared to assist them. I think that one trap many companies run into is that they know all about Japanese laws and taxes and so on, and everybody grew up with that, so they are all familiar. But suddenly you have foreigners who have basically no idea about the systems, and they need a lot of support, because it can be quite different. Santonja at Autify, by contrast, has had a different experience helping employees get settled. “I am extremely tempted to say that I don’t have any challenges. I would be extremely hard pressed to tell you anything that could be remotely considered difficult or, you know, require some organization or even extra work or thinking.” Most people we hire look for us, right? So they are looking for an opportunity to move to Japan and be supported with a visa, which is again a very rare occurrence. They tend to be extremely motivated to live and make it work here. So I don’t think that integration in Japan is such a challenge. Conclusion To companies unfamiliar with the process, the barriers to hiring internationally may seem high. However, there are typically only two major challenges when integrating developers from other countries. The first, language issues, has a variety of solutions ranging from the technical to the cultural. The second, attaining the correct work visa, is best handled by trained professionals, whether in-house or through contractors. Neither of these difficulties is insurmountable, particularly with expert assistance. 
In addition, Givery in particular has stressed that it’s not necessary to figure out all the details in advance of hiring: in fact, it can benefit a company to introduce international workers early on, before its own internal policies are overly fixed. This information should also benefit international developers hoping to work in Japan. Since this article reflects the top concerns of Japanese companies, developers can work to proactively relieve those worries. Learning even basic Japanese helps reduce the language barrier, while becoming preemptively familiar with Japanese society reassures employers that you’re capable of taking care of yourself here. If you’d like to learn more about the benefits these companies enjoy from hiring international developers, see part one of this article series here. Want to find a job in Japan? Check out the TokyoDev job board. If you want to know more about multinational teams, moving to Japan, or Japanese work life in general, see our extensive library of articles. If you’d like to continue the conversation, please join the TokyoDev Discord.

4 hours ago 1 votes
How to select your first marketing channel

When you're brand new, how do you select your first marketing channel?

yesterday 5 votes
Europeans don't have or understand free speech

The new American vice president JD Vance just gave a remarkable talk at the Munich Security Conference on free speech and mass immigration. It did not go over well with many European politicians, some of whom immediately proved Vance's point, and labeled the speech "not acceptable". All because Vance dared poke at two of the holiest taboos in European politics. Let's start with his points on free speech, because they're the foundation for understanding how Europe got into such a mess on mass immigration. See, Europeans by and large simply do not understand "free speech" as a concept the way Americans do. There is no first amendment-style guarantee in Europe, yet the European mind desperately wants to believe it has the same kind of free speech as the US, despite endless evidence to the contrary. It's quite like how every dictator around the world pretends to believe in democracy. Sure, they may repress the opposition and rig their elections, but they still crave the imprimatur of the concept. So too "free speech" and the Europeans. Vance illustrated his point with several examples from the UK. A country that pursues thousands of yearly wrong-speech cases, threatens foreigners with repercussions should they dare say too much online, and has no qualms about handing down draconian sentences for online utterances. It's completely totalitarian and completely nuts. Germany is not much better. It's illegal to insult elected officials, and if you say the wrong thing, or post the wrong meme, you may well find yourself the subject of a raid at dawn. Just crazy stuff. I'd love to say that Denmark is different, but sadly it is not. You can be put in prison for up to two years for mocking or degrading someone on the basis of their race. It recently became illegal to burn the Quran (which sadly only serves to legitimize crazy Muslims killing or stabbing those who do). And you may face up to three years in prison for posting online in a way that can be construed as morally supporting terrorism. But despite all of these examples and laws, I'm constantly arguing with Europeans who cling to the idea that they do have free speech like Americans. Many of them mistakenly think that "hate speech" is illegal in the US, for example. It is not. America really takes the first amendment quite seriously. Even when it comes to hate speech. Famously, the Jewish lawyers of the (now unrecognizable) ACLU defended the right of literal, actual Nazis to march for their hateful ideology in the streets of Skokie, Illinois in 1979 and won. Another common misconception is that "misinformation" is illegal over there too. It also is not. That's why the Twitter Files proved to be so scandalous. Because it showed the US government under Biden laundering an illegal censorship regime -- in grave violation of the first amendment -- through private parties, like the social media networks. In America, your speech is free to be wrong, free to be hateful, free to insult religions and celebrities alike. All because the founding fathers correctly saw that asserting the power to determine otherwise leads to a totalitarian darkness. We've seen vivid illustrations of both in recent years. At the height of the trans mania, questioning whether men who said they were women should be allowed in women's sports or bathrooms or prisons was frequently labeled "hate speech". During the pandemic, questioning whether the virus might have escaped from a lab instead of a wet market got labeled "misinformation". 
So too did any questions about the vaccine's inability to stop spread or infection. Or whether surgical masks or lockdowns were effective interventions. Now we know that having a public debate about all of these topics was of course completely legitimate. Covid escaping from a lab is currently the most likely explanation, according to American intelligence services, and many European countries, including the UK, have stopped allowing puberty blockers for children. Which brings us to that last bugaboo: Mass immigration. Vance identified it as one of the key threats to Europe at the moment, and I have to agree. So should anyone who's been paying attention to the statistics showing the abject failure of this thirty-year policy utopia of a multi-cultural Europe. The fast-changing winds in European politics suggest that's exactly what's happening. These are not separate issues. It's the lack of free speech, and a catastrophically narrow Overton window, which has led Europe into such a mess with mass immigration in the first place. In Denmark, the first popular political party that dared to question the wisdom of importing massive numbers of culturally-incompatible foreigners was frequently met with claims of racism back in the 90s. The same "that's racist!" playbook is now being run on political parties across Europe who dare challenge the mass immigration taboo. But making plain observations that some groups of immigrants really do commit vastly more crime and contribute vastly less economically to society is not racist. It wasn't racist when the Danish Folkparty did it in Denmark in the 1990s, and it isn't racist now when the mainstream center-left parties have followed suit. I've drawn the contrast to Sweden many times, and I'll do it again here. Unlike Denmark, Sweden kept its Overton window shut on the consequences of mass immigration all the way up through the 90s, 00s, and 10s. As a prize, it now has bombs going off daily, the European record in gun homicides, and a government that admits that the immigrant violence is out of control. The state of Sweden today is a direct consequence of suppressing any talk of the downsides to mass immigration for decades. And while that taboo has recently been broken, it may well be decades more before the problems are tackled at their root. It's tragic beyond belief. The rest of Europe should look to Sweden as a cautionary tale, and the Danish alternative as a precautionary one. It's never too late to fix tomorrow. You can't fix today, but you can always fix tomorrow. So Vance was right to wag his finger at all this nonsense. The lack of free speech and the problems with mass immigration. He was right to assert that America and Europe have a shared civilization to advance and protect. Whether the current politicians of Europe want to hear it or not, I'm convinced that average Europeans actually are listening.

2 days ago 10 votes
Using Xcode's AI Is Like Pair Programming With A Monkey

I've never used any other AI "assistant," although I've talked with those who have, most of whom are not very positive. My experience using Xcode's AI is that it occasionally offers a line of code that works, but you mostly get junk

3 days ago 7 votes
tracepoints: gnarly but worth it

Hey all, quick post today to mention that I added tracing support to the Whippet GC library. If the support library for LTTng is available when Whippet is compiled, Whippet embedders can visualize the GC process. Like this! Click above for a full-scale screenshot of the trace explorer processing the nboyer microbenchmark with the parallel copying collector on a 2.5x heap. Of course no image will have all the information; the nice thing about trace visualizers like Perfetto is that you can zoom in to sub-microsecond spans to see exactly what is happening, have nice mouseovers and clicky-clickies. Fun times! Adding tracepoints to a library is not too hard in the end. You need to pull in the library, lttng-ust, which has a pkg-config file. You need to declare your tracepoints in one of your header files. Then you have a minimal C file that includes the header, to generate the code needed to emit tracepoints. Annoyingly, this header file you write needs to be in one of the -I directories; it can't be just in the source directory, because lttng includes it seven times (!!) using computed includes (!!!) and because the LTTng file header that does all the computed including isn't in your directory, GCC won't find it. It's pretty ugly. Ugliest part, I would say. But, grit your teeth, because it's worth it. Finally you pepper your source with tracepoints, which probably you wrap in some macro so that you don't have to require LTTng, and so you can switch to other tracepoint libraries, and so on. I wrote up a little guide for Whippet users about how to actually get traces. It's not as easy as perf record, which I think is an error. Another ugly point. Buck up, though, you are so close to graphs! By which I mean, so close to having to write a Python script to make graphs! Because LTTng writes its logs in so-called Common Trace Format, which as you might guess is not very common. I have a colleague who swears by it, that for him it is the lowest-overhead system, and indeed in my case it has no measurable overhead when trace data is not being collected, but his group uses custom scripts to convert the CTF data that he collects to... GTKWave (?!?!?!!). In my case I wanted to use Perfetto's UI, so I found a script to convert from CTF to the JSON-based tracing format that Chrome profiling used to use. But, it uses an old version of Babeltrace that wasn't available on my system, so I had to write a new script (!!?!?!?!!), probably the most Python I have written in the last 20 years. Yes. God I love blinkenlights. As long as it's low-maintenance going forward, I am satisfied with the tradeoffs. Even the fact that I had to write a script to process the logs isn't so bad, because it let me get nice nested events, which most stock tracing tools don't allow you to do. I fixed a small performance bug because of it – a worker thread was spinning waiting for a pool to terminate instead of helping out. A win, and one that never would have shown up on a sampling profiler too. I suspect that as I add more tracepoints, more bugs will be found and fixed. I think the only thing that would be better is if tracepoints were a part of Linux system ABIs – that there would be header files to emit tracepoint metadata in all binaries, that you wouldn't have to link to any library, and the actual tracing tools would be intermediated by that ABI in such a way that you wouldn't depend on those tools at build-time or distribution-time. But until then, I will take what I can get. Happy tracing! 
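Whippet itself is C, but the "wrap your tracepoints in some macro" advice carries over to other languages. As a purely illustrative sketch in Rust (the language used elsewhere on this page), with a hypothetical tracing feature flag and a print placeholder instead of real LTTng bindings, the pattern looks like this:

// Sketch of the "wrap tracepoints in some macro" idea, not Whippet's code.
// With the (hypothetical) `tracing` feature enabled, the macro forwards to a
// backend; without it, the macro expands to nothing, so the library has no
// hard dependency on any tracing library.
#[cfg(feature = "tracing")]
macro_rules! trace_event {
    ($($arg:tt)*) => {
        // Placeholder backend: real code would call into tracepoint bindings here.
        eprintln!($($arg)*);
    };
}

#[cfg(not(feature = "tracing"))]
macro_rules! trace_event {
    ($($arg:tt)*) => {}; // compiles away entirely
}

fn collect_garbage() {
    trace_event!("gc start: heap multiplier = {}", 2.5);
    // ... the actual collection work would go here ...
    trace_event!("gc end");
}

fn main() {
    collect_garbage();
}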

3 days ago 3 votes