Code ownership is a popular concept, but it emphasizes the wrong thing. It can bring out the worst in a person or a team: defensiveness, control-seeking, power struggles. Instead, we should be focusing on stewardship.

How code ownership manifests

Code ownership as a concept means that a particular person or team "owns" a section of the codebase. This gives them certain rights and responsibilities:

They control what goes into the code, and can approve or deny changes
They are responsible for fixing bugs in that part of the code
They are responsible for maintaining and improving that part of the code

There are tools that help with these, like the CODEOWNERS file on GitHub. This file lets you define a group or list of individuals who own a section of the repository. Then you can require reviews/approvals from them before anything gets merged. These are all coming from a good place. We want our code to be well-maintained, and we want to make sure that someone is responsible for its...
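As a rough illustration of the CODEOWNERS format described above (the paths and owner names here are hypothetical, not taken from the post):

# Each line is a file pattern followed by one or more owners.
# Order matters: the last matching pattern takes precedence.
*          @example-org/platform-team
/docs/     @example-org/docs-team
*.rs       @alice @bob

With a file like this in place, GitHub can require an approving review from the matching owners before changes to those paths get merged.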
18 hours ago


More from ntietz.com blog - technically a blog

Some things that make Rust lifetimes hard to learn

After I wrote YARR (Yet Another Rust Resource, with requisite pirate mentions), one of my friends tried it out. He gave me some really useful insights as he went through it, letting me see what was hard about learning Rust from a newcomer's perspective. Unsurprisingly, lifetimes are a challenge—and seeing him go through it helped me understand why they're hard to learn. Here are a few of the challenges he ran into. I don't think that these are necessarily problems, but they're perhaps opportunities to improve educational materials.

They don't map 100% to how long a variable is in memory

My friend gave me an example he's seen a few times when people explain lifetimes.

fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}

And for many newcomers, you see this and you expect it is saying that x and y both have the lifetime 'a, so they live the same amount of time. But the following is valid:

fn print_longest(x: &'static str) {
    let y = "local";
    let a = longest(x, y);
    println!("{a}");
    drop(a);
    drop(y);
    println!("y is gone");
}

In this example, x and y live for different amounts of time. y doesn't even survive to the end of the function, whereas x should be valid for the entire duration of the program. That's because lifetimes are talking about a bound on the time something can live. There's some lifetime 'a during which we can say that x and y are both certainly valid. But x and y can both live longer than 'a.

Lifetimes don't change the runtime behavior

Most code we write changes what the program does at runtime. Types can be different, because sometimes you're giving the compiler information about what something is. But most type information can change the runtime behavior! The simplest example is when you have an integer. You can declare one without a type.

let x = 10;

This has an inferred type, and if you set a different type, like u8, you'll get different behavior at run time.

let x: u8 = 10;

In contrast, lifetimes are only used by the compiler to ensure that borrows are all valid. The compiler can reject your program if invalid borrows are performed, but the binary output should not be affected by the lifetimes of the variables.

It's a different kind of type system

We're used to seeing types in our programming languages, and these type systems are usually pretty similar. Rust's lifetimes are different, though. The borrow checker uses a linear type system to do its work. These are super cool, and something that I don't understand particularly well. I'm familiar with how to use the borrow checker, but I don't know any of the theory behind them. The premise, as I understand it, is that objects can be used exactly once, allowing you to safely deallocate them after use (since they won't be used again). This prevents multiple concurrent uses (yay, data race protection!) or use-after-free (yay, segfault protection!). The coolness is why we have it, but it's still pretty tough to understand. You have to learn this whole new type system that's pretty different from everything else you've touched. And most of the resources1 out there don't even mention that it's a different kind of type system!

They share syntax with generics

Another challenge is that the syntax is shared with generics. Even though lifetimes are very different in behavior and type system from generics, they sit inside very similar looking syntax. This is probably unavoidable—lifetimes are related to all the other types in your code—but it certainly makes things harder to learn.
When you see something like this, you expect that it's generic over a type.

fn something_generic<T>(arg: T) { ... }

And you're right that it is! But then you have something that looks very similar, like this. And you might expect it to also be generic over a type.

fn something_generic<'a>(arg: &'a str) { ... }

But it's not, in the normal sense. Instead it's generic over a lifetime. And it's a little confusing that those sit in the same spot, especially when it's not called out as a potential gotcha in learning materials.

* * *

Lifetimes have some inherent complexity. The borrow checker is a very valuable tool, and it's great we have it! But with that power and complexity can come challenges in learning, and teaching, the underlying concepts. I think the current difficulty in learning Rust is due to a lot of things. One aspect is certainly some inherent complexity. But another aspect is that many resources aren't really geared toward the kind of programmer coming to Rust without this background knowledge, and there is room for improvement. We can make explanations of lifetimes and the borrow checker better and less confusing. Or we can at least make them more empathetic, making it clear that confusion is expected because there are some good reasons this is hard to understand. And that you'll get there, eventually.

Thank you, Ryan, for generously sharing your thoughts as you went through learning Rust. Our conversations were instrumental in writing this post.

1 I suppose, as the author of YARR, I can fix this in at least one instance.

a week ago 7 votes
Your product shouldn't require showing my legal name

Last week, I finally got verified on LinkedIn. Now there's a little badge next to my name that says "yes, she's a human who is legally named Nicole." Their marketing for verification says that I should now expect 60% more profile views and 50% more comments and reactions. For a writer like me, that seems great. More people viewing my content means more people can learn from me, or be entertained by me. And all that for free? There's a problem, of course. Nicole is my legal name, so I was able to get verified as a result. But many people don't go by their legal name. Other names are common So who doesn't go by their legal name? I didn't, for years after my wife and I got married. I went by an alias—our hyphenated last name—without legally changing it. I wasn't eligible to get verified then, since that name was not on my ID. I didn't, when I came out as transgender. It takes time to change your name and update your documents. Until that was complete, I would have had to go by my deadname or lose verification. And what about women who change their name when they get married, but go by their previous name professionally? This is an alias, and it is their name even though it's not what the government knows them as. But they would lose verification for doing this. Anyone who goes by a nickname or alias is ineligible. I have many friends who go by a different name than what their ID shows. This isn't fraudulent—it just reflects who they are. Penalized for being yourself And yet, if you fall into any case where you cannot get verified, then you can't get the benefits. You can't get your extra profile views and extra comments. Or put another way: you're penalized for not being verified. You'll get about 40% fewer profile views and 33% fewer comments/reactions than people who are verified. You get marginalized, unable to reap the full benefits of the platform, if you don't conform to a very particular outlook on what a name is (the official sequence of letters on your ID). To be clear, the problem isn't the verification process itself1. That process (and its associated benefits) may be in place to deal with bot traffic. I can sympathize with this, and I do want lower bot traffic—it makes platforms much more pleasant to use. Let us use our names The problem is that you have to show your legal name to everyone. There should be a process for being verified without it being your legal name on display. This process doesn't have to be scalable if the group that would utilize it is small—since surely they'd only forget about small populations2. It can be as simple as filing a help ticket and allowing a human to approve it based on some evidence. Is your name consistent across your public profiles? And you're a human being? Cool, verified. I believe that LinkedIn can, and should, do better. This feature as implemented is harming marginalized folks who are not able to get the same visibility when they cannot get verified. It reduces the exposure that marginalized creators can get. You should keep this in mind in the products you make, too. Don't require people to display their legal names. And before you even collect that data, think about what problem you're trying to solve. Do you need to collect legal names to solve that? (Probably not.) If so, do you need to store them after processing once? (Probably not.) And if so, do you need to display them publicly? (Probably not.) Names are so much more than what the government knows us by. Let us be our true selves and verify us with our true names. 
1 It's not free of problems, though: I'd like to have a way to achieve the same result ("she's a human! she's generally the internet person she claims she is!") without showing my government identity documents to a third party.

2 Though, companies have been known to marginalize large groups of people. The rhetorical point is that either they are harming a lot of people, or they could solve the problem for a low cost.

2 weeks ago 15 votes
Can I ethically use LLMs?

The title is not a rhetorical question, and I'm not going to bury an answer. I don't have an answer. This post is my exploration of the question, and why I think it is a question1.

Important things up front: what's my relationship with LLMs today?

I don't use any LLMs regularly. I do have access to GitHub Copilot through my employer. I have it available on a hotkey, I think, and I cannot remember how to trigger it since I do not use it.

I've explored using LLMs in the past. I used to be a regular Copilot user, and I explored ChatGPT, Claude, etc. to see what their capabilities were. I have done trainings for my coworkers on how to use them effectively, though I would not feel comfortable doing so now.

My employer's product uses LLMs. I don't want to link to my employer, but yeah, I guess my paycheck depends on being okay with integrating them in? It's complicated (a refrain). (This post is, obviously, my opinions and does not reflect my employer.)

I don't think using or not using them is a moral failing. There is a lot of moralizing around LLM usage. I'm not doing any of that here. I have my own beliefs (or my own questions), but I don't think people using LLMs are immoral (or vice versa).

So, you can see I have used them and I'm not absolutist, but I don't use them today. Why not? Why did I stop using them in spite of the advances, where they're more capable than ever? It's because of these questions and issues. Where I have undecided ethical questions, I lean toward the more conservative2 choice of not using them until I have clarity on the ethics. (Note: I am not inviting folks to email me with answers to this question.)

Energy usage

Another technology that uses a lot of energy is blockchains. I think using public blockchains is almost universally unethical since there are other, better, less harmful options. Part of the harm from blockchains is an absurd amount of energy usage. LLMs also use a lot of energy. This can be split into training and inference energy usage. These vary based on the model.

Some models can run locally on Apple silicon, and those are lower energy usage—their upper bound is running your computer full tilt, and an M4 Mac mini's max power consumption is 65 watts. This is roughly equivalent to one incandescent light bulb, or 8 LED light bulbs. It's good to turn off unnecessary lights, but doing so isn't going to solve the climate crisis; we need bigger, more sweeping reforms. I don't think that local models are going to significantly alter the climate crisis in either direction.

Other models are massive and run in data centers on lots of power-hungry GPUs3. These data centers also require construction, and that comes with its own environmental impact. An article from Tom's Guide last year showed that "a single query on ChatGPT-4 can use up to 3 bottles of water, that a year of queries uses enough electricity to power over nine houses". A lot of the cost comes from new data centers being built. The demand for LLMs has led to more demand for power generation and more demand for data centers. And this new power generation is coming from gas-fired plants instead of sustainable, clean energy sources, because that's all we can build fast enough.

A lot of attention is given to the training side. The numbers for training are large and shocking: Llama 3 used 500 MWh and GPT-3 training used 1,287 MWh—even more if you include the cost of training failed models which preceded these, the experiments that made the models possible.
The listed figures are high, and 500 MWh is about the cost of a large jet flying for 7 hours. But we do it once per foundational model, and then the cost is spread across all the remaining usage. I don't think that the training side is significantly shifting the equation on climate change. We'd have a much larger impact on improving the climate crisis by advocating for remote work—reducing vehicles on the road, making many flights unnecessary—than by not training models.

Overall it feels to me like local models have a clearly acceptable impact, and data center models have higher energy usage but still probably do not change the situation very much.

Training data

The training data for LLMs has largely been lifted without the consent of the people who generated that data. This is a lot of writing, music, videos, visual art, all of it. There are some attempts out there at using licensed data only, but the majority of models, and the most popular ones, are trained on unlicensed data.

Now the question is: is this an ethical problem? I know there are opinions on both sides of this. Some say that this data is publicly visible on the internet (though some of the data was not on the internet), and so it's fair game. Others say that this use isn't one that people consented to, and it should require that consent.

My thought experiment is this: If we made search engines today, would people have this same objection to a search engine using their data without their express consent? I think most people would ultimately support search engines. They are different than LLMs, because they (mostly) serve results that point you to the original source, rather than create new content for you to replace the original sources with. But maybe people would reject search engines. And maybe consistency between these two isn't necessary, or maybe it's a false consistency—there could be other differentiating details that lead to different answers.

Where I come down is that I think we need a robust mechanism to opt out of use of data for training, but that it's probably fine to train with publicly available data on the internet. What you do with the trained model is another question entirely. When you try to replace people making original works instead of creating an entirely new function, that's where you get questions.

"Replacing" people

There's a lot of LLM usage that is just trying to replace people in entire jobs. I mean, it feels like all of it. We see LLMs that are meant to replace writers and editors, artists and illustrators, musicians and songwriters. It doesn't say that directly—it says you're empowered to create things yourself. But what it means is that people should be able to press a few buttons and, with no artist involved, get out a beautiful artwork. Sounds like replacing to me.

This is something we've long done with technology. We make technology that puts people out of jobs, and that was the whole industrial revolution. That's what happened with shipping: the containerization of ports put many dockworkers out of jobs. Replacing people in the abstract is not unethical. The problem is if we fail to deal with the harm created from replacing people. And people will be harmed, because losing your job or the value of it going down has a serious impact on quality of life. When we put people out of work, we—both society and technologists—have an ethical responsibility to ensure there's a plan to mitigate the harm from that.
Maybe that means grants for living expenses while people switch fields, if put out of work in an LLM-heavy industry. Maybe that means a universal basic income. But it certainly doesn't mean doing nothing.

Incorrect information and bias

One of the major well-known problems of LLMs is a tendency to "hallucinate," or to confidently state facts that are made up from whole cloth. They also have an unknown amount of bias, with unknown mitigations in place, due to being closed systems. This is a big problem! We're not good at seeing what information is incorrect in something that's generated to look like it's the most likely string of tokens. If there is incorrect information in there, we'll just miss it. This means that people can make poor decisions on the basis of what an LLM tells them. They can lose income because of its mistakes. And the bias? That's a huge problem, because it means that we don't know if this system will reinforce existing problematic norms4. We don't know what it will reinforce, because the training data is closed and there's not a lot of public evaluation on bias. So ultimately, we're left with an unknown harm of unknown magnitude to an unknown population5.

Concentrating power (with the wealthy elite)

A big ethical concern for me is also what this will do to our entire society. Many technologies are heralded as "democratizing" things. Spotify "democratized" music by making it so that anyone can get listens—but, y'all, it ended up flattening the tail and making the popular artists more money while making small artists less money. Will LLMs do the same? We know that the big models need large data centers to run their training and inference. And even small models need beefy hardware to run inference, let alone training! We have some access to models which can run locally, which is a good step. But the problem is that we can only run other people's models. They'll have those people's decisions baked in, decisions on which data to include or to exclude, decisions on how to approach questions of bias and abuse. And when the hardware to run the biggest models is only accessible to a few companies, that means that those companies really get a lot more powerful. OpenAI and Anthropic and Google and Meta all have the ability to run really large models. I certainly don't, though. This means that a technology that many are heralding as making things more accessible to everyone is controlled by a small handful of people. A small handful of people can decide how the models are trained, and set policies on how they're used. In a time when the US government is trying to get any paper retracted that mentioned queer people, and erasing trans people from the Stonewall monument, it feels self-evident that letting a small group of people control this technology imperils the future of many people.

* * *

Ultimately, I want robots to do the things I don't want to do. I want them to do my dishes and my laundry. I don't want them to play music instead of me, write code instead of me, write words instead of me. I am not sure whether or not using LLMs is unethical. There are certainly ways of using them which are unquestionably unethical—as is true with every technology. And there are ways of developing LLMs which are unethical—as is true with every technology. But the problems with them are large. I think it is unethical to use them without addressing the ethical questions above. If you're not working on mitigating the harms from LLMs (which do exist), then you might be doing something unethical.
1 I've been in the interesting situation of having anti-LLM people think I'm pro-LLM and vice versa. It's a very weird feeling, and makes me a little nervous of posting this! But this is an important question and an important conversation.

2 Footnoting out of an abundance of caution: I don't mean conservative-like-Republicans, because, no, I'd like to keep my rights thank you very much. I just mean in terms of minimizing risk. Let's please stop attacking trans rights, immigrants, Palestinians, and you know, everyone else that I've forgotten because the whole world seems to be on fire.

3 If we ever achieve artificial sentience, this sentence may acquire a second, more sinister, meaning.

4 It will.

5 My guess is a large harm to all underrepresented groups, but who asked the trans woman?

3 weeks ago 18 votes
What's in a ring buffer? And using them in Rust

Working on my cursed MIDI project, I needed a way to store the most recent messages without risking an unbounded amount of memory usage. I turned to the trusty ring buffer for this! I started by writing a very simple one myself in Rust, then I looked and what do you know, of course the standard library has an implementation. While I was working on this project, I was pairing with a friend. She asked me about ring buffers, and that's where this post came from!

What's a ring buffer?

Ring buffers are known by a few names. Circular queues and circular buffers are the other two I hear the most. But this name just has a nice ring to it. It's an array, basically: a buffer, or a queue, or a list of items stacked up against each other. You can put things into it and take things out of it the same as any buffer. But the front of it connects to the back, like any good ring1. Instead of adding on the end and popping from the end, like a stack, you can add to one end and remove from the start, like a queue. And as you add or remove things, the start and end of the list move around. Your data eats its own tail.

This lets us keep a fixed number of elements in the buffer without running into reallocation. In a regular old buffer, if you use it as a queue—add to the end, remove from the front—then you'll eventually need to either reallocate the entire thing or shift all the elements over. Instead, a ring buffer lets you just keep adding from either end and removing from either end and you never have to reallocate!

Uses for a ring buffer

My MIDI program is one example of where you'd want a ring buffer, to keep the most recent items. There are some general situations where you'll run into this:

You want a fixed number of the most recent things, like the last 50 items seen
You want a queue, especially with a fixed maximum number of queued elements2
You want a buffer for data coming in with an upper bound, like with streaming audio, and you want to overwrite old data if the consumer can't keep up for a bit

A lot of it comes down to performance, and streaming data. Something is producing data, something else is consuming it, and you want to make sure insertions and removals are fast. That was exactly my use: a MIDI device produces messages, my UI consumes them, but I don't want to fill up all my memory, forever, with more of them.

How ring buffers work

So how do they work? This is a normal, non-circular buffer: when you add something, you put it on the end. If you're using it as a stack, then you can pop things off the end. But if you're using it as a queue, you pop things off the front... and then you have to shuffle everything to the left. That's why we have ring buffers!

I'll draw it as a straight line here, to show how it's laid out in memory. But the end loops back to the front logically, and we'll see how it wraps around. We start with an empty buffer, and start and end both point at the first element. When start == end, we know the buffer is empty. If we insert an element, we move end forward. And it gets repeated as we insert multiple times. If we remove an element, we move start forward. We can also start the buffer at any point, and it crosses over the end gracefully. Ring buffers are pretty simple when you get into their details, with just a few things to wrap your head around. It's an incredibly useful data structure!

Rust options

If you want to use one in Rust, what are your options? There's the standard library, which includes VecDeque.
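Before getting to VecDeque, here's a minimal sketch of the kind of simple, fixed-capacity ring buffer described above, one that overwrites the oldest element when it's full. This is an illustrative sketch with made-up names, not the actual code from my MIDI project:

// A fixed-capacity ring buffer that overwrites the oldest element when full.
// Illustrative only; assumes capacity > 0.
struct RingBuffer<T> {
    items: Vec<Option<T>>,
    start: usize, // index of the oldest element
    len: usize,   // number of elements currently stored
}

impl<T> RingBuffer<T> {
    fn new(capacity: usize) -> Self {
        let mut items = Vec::with_capacity(capacity);
        items.resize_with(capacity, || None);
        RingBuffer { items, start: 0, len: 0 }
    }

    // Add to the back; if the buffer is full, the oldest element is overwritten.
    fn push(&mut self, value: T) {
        let end = (self.start + self.len) % self.items.len();
        self.items[end] = Some(value);
        if self.len == self.items.len() {
            // Full: we just overwrote the oldest element, so advance start.
            self.start = (self.start + 1) % self.items.len();
        } else {
            self.len += 1;
        }
    }

    // Remove and return the oldest element, if there is one.
    fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            return None;
        }
        let value = self.items[self.start].take();
        self.start = (self.start + 1) % self.items.len();
        self.len -= 1;
        value
    }
}

fn main() {
    let mut buf = RingBuffer::new(3);
    for msg in 1..=5 {
        buf.push(msg); // 4 and 5 overwrite 1 and 2
    }
    while let Some(msg) = buf.pop() {
        println!("{msg}"); // prints 3, then 4, then 5
    }
}

The start index and the length are all the bookkeeping you need; the modulo arithmetic is what makes the end wrap back around to the front.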
VecDeque implements a ring buffer, and the name comes from "Vec" (Rust's growable array type) combined with "Deque" (a double-ended queue). As this is in the standard library, it's quite accessible from most code, and it's a growable ring buffer. This means that the pop/push operations will always be efficient, but if your buffer is full it will result in a resize operation. The amortized running time will still be O(1) for insertion, but you incur the cost all at once when a resize happens, rather than at each insertion.

You can enforce size limits in your code, if you want to avoid resize operations. Here's an example of how you could do that. You check if it's full first, and if so, you remove something to make space for the new element.

use std::collections::VecDeque;

let mut buffer: VecDeque<u32> = VecDeque::with_capacity(10);
for thing in 0..15 {
    // if the buffer is already full, remove the first element to make space
    if buffer.len() == 10 {
        buffer.pop_front();
    }
    buffer.push_back(thing);
}

There are also libraries that can enforce this for you! For example, circular-buffer implements a ring buffer. It has a fixed max capacity and won't resize on you, instead overwriting elements when you run out of space. The size is set at compile time, though, which can be great or can be a constraint that's hard to meet. There is also ringbuffer, which gives you a fixed size buffer that's heap allocated at runtime. That buys you some flexibility, with the drawback of being heap-based instead of stack-based. I'm definitely reaching for a library when I need a non-growable ring buffer in Rust now. There are some good choices, and it saves me the trouble of having to manually enforce size limits.

1 One of my favorite rings represents the sun and the moon, with the sun nestling inside this crescent moon shape. Unfortunately, the ring is open there. It makes for a very nice visual effect, but it kept getting snagged on things and bending. So it's not a very good ring for living an active hands-on life.

2 These can also be resizable! The Rust standard library one is. This comes with reallocation cost, which amortizes to a low cost but can be expensive in individual moments.

4 weeks ago 17 votes

More in programming

Top Coworking Spaces in Karuizawa

Since November 2023, I’ve been living in Karuizawa, a small resort town that’s 70 minutes away from Tokyo by Shinkansen. The elevation is approximately 1000 meters above sea level, making the summers relatively mild. Unlike other colder places in Japan, it doesn’t get much snow, and has the same sunny winters I came to love in Tokyo. With COVID and the remote work boom, it’s also become popular among professionals such as myself who want to live somewhere with an abundance of nature, but who still need to commute into Tokyo on a semi-regular basis. While I have a home office, I sometimes like to work outside. So I thought I’d share my impressions of the coworking spaces in town that I’ve personally visited, and a few other places where you can get some work done when you’re in town. Sawamura Roastery 11am on a Friday morning and there was only one other customer. Sawamura Roastery is technically a cafe, but it’s my personal favourite coworking space. It has free wifi, outlets, and comfortable chairs. While their coffees are on the expensive side, at about 750 yen for a cafe latte, they are also some of Karuizawa’s best. It’s empty enough on weekday mornings that I feel fine about staying there for hours, making it a deal compared to official drop-in coworking spaces. Another bonus is that it opens early: 7 a.m. (or 8 a.m. during the winter months). This allows me to start working right after I drop off my kids at daycare, rather than having 20 odd minutes to kill before heading to the other places that open at 9 a.m. If you’re having an online meeting, you can make use of the outdoor seating. It’s perfect when the weather is nice, but they also have heating for when it isn’t. The downsides are that their playlist is rather short, so I’m constantly hearing the same songs, and their roasting machine sometimes gets quite noisy. Gokalab Gokalab is my favourite dedicated coworking space in Karuizawa. Technically it is in Miyota, the next town over, which is sometimes called “Nishikaruizawa”. But it’s the only coworking space in the area I’ve been to that feels like it has a real community. When you want to work here, you have three options: buy a drink (600 yen for a cafe au lait—no cafe lattes, unfortunately, but if you prefer black coffee they have a good selection) and work out of the cafe area on the first floor; pay their daily drop-in fee of 1,000 yen; or become a “researcher” (研究員, kenkyuin) for 3,000 yen per month and enjoy unlimited usage. Now you may be thinking that the last option is a steal. That’s because it is. However, to become a researcher you need to go through a workshop that involves making something out of LEGO, and submit an essay about why you want to use the space. The thinking behind this is that they want to support people who actually share their vision, and aren’t just after a cheap space to work or study. Kind of zany, but that sort of out-of-the-box thinking is exactly what I want in a coworking space. When I first moved to Karuizawa, my youngest child couldn’t get into the local daycare. However, we found out that in Miyota, Suginoko Kindergarten had part-time spots available for two year olds. My wife and I ended up taking turns driving my kid there, and then spending the morning working out of Gokalab. Since my youngest is now in a local daycare, I haven’t made it out to Gokalab much. It’s just a bit too far for me (about a 15-minute drive from my house, while other options on this list are at most a 15-minute bicycle ride). 
But if I was living closer, I’d be a regular there. 232 Coworking Space & Hotel Noon on a Monday morning at 232 Coworking Space. If you’re looking for a coworking space near Karuizawa station, 232 Coworking Space & Hotel is the best option I’ve come across. The “hotel” part of the name made me think they were focused on “workcations,” but the space seems like it caters to locals as well. The space offers free coffee via an automatic espresso machine, along with other drinks, and a decent number of desks. When I used it on a Monday morning in the off-season, it was moderately occupied at perhaps a quarter capacity. Everyone spoke in whispers, so it felt a bit like a library. There were two booths for calls, but unfortunately they were both occupied when I wanted to have mine, so I had to sit in the hall instead. If the weather was a bit warmer I would have taken it outside, as there was some nice covered seating available. The decor was nice, though the chairs weren’t that comfortable. After a couple of hours I was getting sore. It was also too dimly lit for me, without much natural light. The price for drop-ins is reasonable, starting at 1,500 yen for four hours. They also have monthly plans starting from 10,000 yen for five days per month. What I found missing was a feeling of community. I didn’t see any small talk between the people working there, though I was only there for a couple hours, and maybe this occurs at other times. Their webpage also mentioned that they host events, but apparently they don’t have any upcoming ones planned and haven’t had any in a while. Shozo Coffee Karuizawa The latte is just okay here, but the atmosphere is nice. Shozo Coffee Karuizawa is a cafe on the first floor of the bookstore in Karuizawa Commongrounds. The second floor has a dedicated coworking space, but for me personally, the cafe is a better deal. Their cafe latte is mid-tier and 700 yen. In the afternoons I’ll go for their chai to avoid over-caffeination. They offer free wifi and have signs posted asking you not to hold online meetings, implicitly making it clear that otherwise they don’t mind you working there. Location-wise, this place is very convenient for me, but it suffers from a fatal flaw that prevents me from working there for an extended amount of time: the tables are way too low for me to type comfortably. I’m tall though (190 cm), so they aren’t designed with me in mind. Sheridan Coffee and a popover - my entrance fee to this “coworking space”. Sheridan is a western breakfast and brunch restaurant. They aren’t that busy on weekdays and have free wifi, plus the owner was happy to let me work there. The coffee comes in a pot with enough for at least one refill. There’s also some covered outdoor seating. I used this spot to get some work done when my child was sick and being looked after at the wonderful Hochi Lodge (ほっちのロージ). It’s a clinic and sick childcare facility that does its best to not let on that it’s a medical facility. The doctors and nurses don’t wear uniforms, and appointments there feel more like you’re visiting someone’s home. Sheridan is within walking distance of it. Natural Cafeina An excellent cappuccino but only an okay place to work. If you’d like to get a bit of work done over an excellent cappuccino, Natural Cafeina is a good option. This cafe feels a bit cramped, and as there isn’t much seating, I wouldn’t want to use it for an extended period of time. Also, the music was a bit loud.
But they do have free wifi, and when I visited, there were a couple of other customers besides myself working there. Nakakaruizawa Library The Nakakaruizawa Library is a beautiful space with plenty of desks facing the windows and free wifi. Anyone can use it for free, making it the most economical coworking space in town. I’ve tried working out of it, but found that, for me personally, it wasn’t conducive to work. It is still a library, and there’s something about the vibes that just doesn’t inspire me. Karuizawa Commongrounds Bookstore Coworking Space The renowned bookstore Tsutaya operates Karuizawa Books in the Karuizawa Commongrounds development. The second floor has a coworking space that features the “cheap chic” look common among hip coworking spaces. Unfinished plywood is everywhere, as are books. I’d never actually worked at this space until writing this article. The price is just too high for me to justify it, as it starts at 1,100 yen for a mere hour, to a max of 4,000 yen per day. At 22,000 yen per month, it’s a more reasonable price for someone using it as an office full time. But I already have a home office and just want somewhere I can drop in at occasionally. There are a couple options, seating-wise. Most of the seats are in booths, which I found rather dark but with comfortable chairs. Then there’s a row of stools next to the window, which offer a good view, but are too uncomfortable for me. Depending on your height, the bar there may work as a standing desk. Lastly, there are two coveted seats with office chairs by a window, but they were both occupied when I visited. The emphasis here seems to be on individual deep work, and though there were a number of other people working, I’d have felt uncomfortable striking up a conversation with one of them. That’s enough to make me give it a pass. Coworking Space Ikoi Villa Coworking Space Ikoi Villa is located in Naka-Karuizawa, relatively close to my home. I’ve only used it once though. It’s part of a hotel, and they converted the lobby to a coworking space by putting a bunch of desks and chairs in it. If all you need is wifi and space to work, it gets the job done. But it’s a shame they didn’t invest a bit more in making it feel like a nice place to work. I went during the summer on one of the hottest days. My house only had one AC unit and couldn’t keep up, so I was hoping to find somewhere cooler to work. But they just had the windows open with some fans going, which left me disappointed. This was ostensibly the peak season for Karuizawa, but only a couple of others were working there that day. Maybe the regulars knew it’d be too hot, but it felt kind of lonely for a coworking space. The drop-in fee starts at 1,000 yen for four hours. It comes with free drinks from a machine: green tea, coffee, and water, if I recall correctly. Karuizawa Prince The Workation Core Do you like corporate vibes? Then this is the place for you. Karuizawa Prince The Workation Core is a coworking space located in my least favourite part of the town—the outlet mall. The throngs of shoppers and rampant commercialism are in stark contrast to the serenity found farther away from the station. This is another coworking space I visited expressly for this article. The fee is 660 yen per 30 minutes, to a maximum of 6,336 yen per day. Even now, just reading that maximum, my heart skipped a beat. This is certainly the most expensive coworking space I’ve ever worked from—I better get this article done fast. 
The facilities include a large open space with reasonably comfortable seating. There are a number of booths with monitors. As they are 23.8 inch monitors with 1,920 x 1,080 resolution, they’re a step down from the resolution of modern laptops, and so not of much use. Though there was room for 40-plus people, I was the only person working. Granted this was on a Sunday morning, so not when most people would typically attend. I don’t think I’ll be back here again. The price and sterile corporate vibe just aren’t for me. If you’re staying at The Prince Hotel, I think you get a discount. In that case, maybe it’s worth it, but otherwise I think there are better options. Sawamura Bakery & Restaurant Kyukaruizawa Sawamura Bakery & Restaurant is across the street from the Roastery. It offers slightly cheaper prices, with about 100 yen off the cafe latte, though the quality is worse, as is the vibe of the place as a whole. They do have a bigger selection of baked goods, though. As a cafe for doing some work, there’s nothing wrong with it per se. The upstairs cafe area has ample seating outside of peak hours. But I just don’t have a good reason to work here over the Roastery. The Pie Hole Los Angeles Karuizawa The best (and only) pecan pie that I’ve had in Japan. The name of this place is a mouthful. Technically, it shouldn’t be on this list because I’ve never worked out of it. But they have wonderful pie, free wifi, and not many customers, so I could see working here. The chairs are a bit uncomfortable though, so I wouldn’t want to stop by for more than an hour or two. While this place had been on my radar for a while, I’d avoided it because there’s no good bicycle parking nearby—or so I thought. I just found that the relatively close Church Street shopping street has a bit of bicycle parking off to the side. If you come to Karuizawa… When I was living in Tokyo, there were just too many opportunities to meet people, and so I found myself having to frequently turn down offers to go out for coffee. Since moving here, I’ve made some local connections, but the pace has been a lot slower. If you’re ever passing through Karuizawa, do get in touch, and I’d be happy to meet up for a cafe latte and possibly some pie.

12 hours ago 3 votes
Rohit Chess

fun little board game

12 hours ago 3 votes
Metric-Driven Development and The Claude Effect

Sometimes, feels have to be trusted over figures

19 hours ago 3 votes
Apple does AI as Microsoft did mobile

When the iPhone first appeared in 2007, Microsoft was sitting pretty with their mobile strategy. They'd been early to the market with Windows CE, and they were fast-following the iPod with their Zune. They also had the dominant operating system, the dominant office package, and control of the enterprise. The future on mobile must have looked so bright! But of course now, we know it wasn't. Steve Ballmer infamously dismissed the iPhone with a chuckle, as he believed all of Microsoft's past glory would guarantee them mobile victory. He wasn't worried at all. He clearly should have been!

After reliving that Ballmer moment, it's uncanny to watch this CNBC interview from one year ago with Johny Srouji and John Ternus from Apple on their AI strategy. Ternus even repeats the chuckle!! Exuding the same delusional confidence that lost Ballmer's Microsoft any serious part in the mobile game.

But somehow, Apple's problems with AI seem even more dire. Because there's apparently no one steering the ship. Apple has been promising customers a bag of vaporware since last fall, and they're nowhere close to being able to deliver on the shiny concept demos. The ones that were going to make Apple Intelligence worthy of its name, and not just terrible image generation that is years behind the state of the art. Nobody at Apple seems able or courageous enough to face the music: Apple Intelligence sucks. Siri sucks. None of the vaporware is anywhere close to happening. Yet as late as last week, you have Cook promoting the new MacBook Air with "Apple Intelligence". Yikes.

This is partly down to the org chart. John Giannandrea is Apple's VP of ML/AI, and he reports directly to Tim Cook. He's been in the seat since 2018. But Cook evidently does not have the product savvy to be able to tell bullshit from benefit, so he keeps giving Giannandrea more rope. Now the fella has hung Apple's reputation on vaporware, promised all iPhone 16 customers something magical that just won't happen, and even spec-bumped all their devices with more RAM for nothing but diminished margins. Ouch.

This is what regression to the mean looks like. This is what fiefdom management looks like. This is what having a company run by a logistics guy looks like. Apple needs a leadership reboot, stat. That asterisk is a stain.

10 hours ago 2 votes