Code ownership is a popular concept, but it emphasizes the wrong thing. It can bring out the worst in a person or a team: defensiveness, control-seeking, power struggles. Instead, we should be focusing on stewardship.

How code ownership manifests

Code ownership as a concept means that a particular person or team "owns" a section of the codebase. This gives them certain rights and responsibilities:

- They control what goes into the code, and can approve or deny changes
- They are responsible for fixing bugs in that part of the code
- They are responsible for maintaining and improving that part of the code

There are tools that help with these, like the CODEOWNERS file on GitHub. This file lets you define a group or list of individuals who own a section of the repository. Then you can require reviews/approvals from them before anything gets merged.

These are all coming from a good place. We want our code to be well-maintained, and we want to make sure that someone is responsible for its...
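As a concrete illustration of the mechanism described above, here is a hypothetical CODEOWNERS sketch — the paths, team, and usernames are placeholders of my own, not from the post:

    # Hypothetical CODEOWNERS file at the repository root (or in .github/).
    # Each line maps a path pattern to the owners whose review can be required.
    /billing/        @acme/payments-team
    /frontend/       @acme/web-team
    *.sql            @dba-alice @dba-bob
    # For a given file, the last matching pattern takes precedence.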
3 days ago


More from ntietz.com blog - technically a blog

Some things that make Rust lifetimes hard to learn

After I wrote YARR (Yet Another Rust Resource, with requisite pirate mentions), one of my friends tried it out. He gave me some really useful insights as he went through it, letting me see what was hard about learning Rust from a newcomer's perspective. Unsurprisingly, lifetimes are a challenge—and seeing him go through it helped me understand why they're hard to learn. Here are a few of the challenges he ran into. I don't think that these are necessarily problems, but they're perhaps opportunities to improve educational materials.

They don't map 100% to how long a variable is in memory

My friend gave me an example he's seen a few times when people explain lifetimes.

    fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
        if x.len() > y.len() { x } else { y }
    }

Many newcomers see this and expect it to be saying that x and y both have the lifetime 'a, so they live the same amount of time. But the following is valid:

    fn print_longest(x: &'static str) {
        let y = "local";
        let a = longest(x, y);
        println!("{a}");
        drop(a);
        drop(y);
        println!("y is gone");
    }

In this example, x and y live for different amounts of time. y doesn't even survive to the end of the function, whereas x should be valid for the entire duration of the program. That's because lifetimes describe a bound on the time something can live. There's some lifetime 'a during which we can say that x and y are both certainly valid. But x and y can both live longer than 'a.

Lifetimes don't change the runtime behavior

Most code we write changes what the program does at runtime. Types can be different, because sometimes you're giving the compiler information about what something is. But most type information can change the runtime behavior! The simplest example is when you have an integer. You can declare one without a type.

    let x = 10;

This has an inferred type, and if you set a different type, like u8, you'll get different behavior at runtime.

    let x: u8 = 10;

In contrast, lifetimes are only used by the compiler to ensure that borrows are all valid. The compiler can reject your program if invalid borrows are performed, but the binary output should not be affected by the lifetimes of the variables.

It's a different kind of type system

We're used to seeing types in our programming languages, and these type systems are usually pretty similar. Rust's lifetimes are different, though. The borrow checker uses a linear type system to do its work. These are super cool, and something that I don't understand particularly well. I'm familiar with how to use the borrow checker, but I don't know any of the theory behind them. The premise, as I understand it, is that objects can be used exactly once, allowing you to safely deallocate them after use (since they won't be used again). This prevents multiple concurrent uses (yay, data race protection!) and use-after-free (yay, segfault protection!). The coolness is why we have it, but it's still pretty tough to understand. You have to learn this whole new type system that's pretty different from everything else you've touched. And most of the resources1 out there don't even mention that it's a different kind of type system!
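To make the "used exactly once" idea a little more concrete, here is a tiny sketch of my own (the type and names are illustrations, not from the post or from YARR): the compiler rejects a second use of a moved value, and it does so entirely at compile time.

    // A type that doesn't implement Copy, so values of it move rather than copy.
    struct Token(String);

    fn consume(t: Token) {
        // `consume` takes ownership; the Token is dropped when this function returns.
        println!("consumed {}", t.0);
    }

    fn main() {
        let t = Token(String::from("only once"));
        consume(t);     // ownership moves here
        // consume(t);  // error[E0382]: use of moved value: `t`
        // Uncommenting the line above gets the program rejected at compile time;
        // like lifetime annotations, this checking doesn't change the compiled binary.
    }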
They share syntax with generics

Another challenge is that the syntax is shared with generics. Even though lifetimes are very different in behavior and type system from generics, they sit inside very similar looking syntax. This is probably unavoidable—lifetimes are related to all the other types in your code—but it certainly makes things harder to learn. When you see something like this, you expect that it's generic over a type.

    fn something_generic<T>(arg: T) { ... }

And you're right that it is! But then you have something that looks very similar, like this. And you might expect it to also be generic over a type.

    fn something_generic<'a>(arg: &'a str) { ... }

But it's not, in the normal sense. Instead it's generic over a lifetime. And it's a little confusing that those sit in the same spot, especially when it's not called out as a potential gotcha in learning materials.

* * *

Lifetimes have some inherent complexity. The borrow checker is a very valuable tool, and it's great we have it! But with that power and complexity can come challenges in learning, and teaching, the underlying concepts. I think the current difficulty in learning Rust is due to a lot of things. One aspect is certainly some inherent complexity. But another aspect is that many resources aren't really geared toward the kind of programmer coming to Rust without this background knowledge, and there is room for improvement. We can make explanations of lifetimes and the borrow checker better and less confusing. Or we can at least make them more empathetic, acknowledging that it's expected to be confused because there are some good reasons it's hard to understand. And that you'll get there, eventually.

Thank you, Ryan, for generously sharing your thoughts as you went through learning Rust. Our conversations were instrumental in writing this post.

1 I suppose, as the author of YARR, I can fix this in at least one instance.

a week ago 8 votes
Your product shouldn't require showing my legal name

Last week, I finally got verified on LinkedIn. Now there's a little badge next to my name that says "yes, she's a human who is legally named Nicole." Their marketing for verification says that I should now expect 60% more profile views and 50% more comments and reactions. For a writer like me, that seems great. More people viewing my content means more people can learn from me, or be entertained by me. And all that for free?

There's a problem, of course. Nicole is my legal name, so I was able to get verified as a result. But many people don't go by their legal name.

Other names are common

So who doesn't go by their legal name? I didn't, for years after my wife and I got married. I went by an alias—our hyphenated last name—without legally changing it. I wasn't eligible to get verified then, since that name was not on my ID. I didn't, when I came out as transgender. It takes time to change your name and update your documents. Until that was complete, I would have had to go by my deadname or lose verification. And what about women who change their name when they get married, but go by their previous name professionally? This is an alias, and it is their name even though it's not what the government knows them as. But they would lose verification for doing this. Anyone who goes by a nickname or alias is ineligible. I have many friends who go by a different name than what their ID shows. This isn't fraudulent—it just reflects who they are.

Penalized for being yourself

And yet, if you fall into any case where you cannot get verified, then you can't get the benefits. You can't get your extra profile views and extra comments. Or put another way: you're penalized for not being verified. You'll get about 40% fewer profile views and 33% fewer comments/reactions than people who are verified. You get marginalized, unable to reap the full benefits of the platform, if you don't conform to a very particular outlook on what a name is (the official sequence of letters on your ID). To be clear, the problem isn't the verification process itself1. That process (and its associated benefits) may be in place to deal with bot traffic. I can sympathize with this, and I do want lower bot traffic—it makes platforms much more pleasant to use.

Let us use our names

The problem is that you have to show your legal name to everyone. There should be a process for being verified without your legal name being the name on display. This process doesn't have to be scalable if the group that would utilize it is small—since surely they'd only forget about small populations2. It can be as simple as filing a help ticket and allowing a human to approve it based on some evidence. Is your name consistent across your public profiles? And you're a human being? Cool, verified. I believe that LinkedIn can, and should, do better. This feature as implemented is harming marginalized folks who are not able to get the same visibility when they cannot get verified. It reduces the exposure that marginalized creators can get.

You should keep this in mind in the products you make, too. Don't require people to display their legal names. And before you even collect that data, think about what problem you're trying to solve. Do you need to collect legal names to solve that? (Probably not.) If so, do you need to store them after processing once? (Probably not.) And if so, do you need to display them publicly? (Probably not.) Names are so much more than what the government knows us by. Let us be our true selves and verify us with our true names.
1 It's not free of problems, though: I'd like to have a way to achieve the same result ("she's a human! she's generally the internet person she claims she is!") without showing my government identity documents to a third party.

2 Though, companies have been known to marginalize large groups of people. The rhetorical point is that they are either harming a lot of people or they could solve the problem for a low cost.

2 weeks ago 16 votes
Can I ethically use LLMs?

The title is not a rhetorical question, and I'm not going to bury an answer. I don't have an answer. This post is my exploration of the question, and why I think it is a question1.

Important things up front: what's my relationship with LLMs today? I don't use any LLMs regularly. I do have access to GitHub Copilot through my employer. I have it available on a hotkey, I think, and I cannot remember how to trigger it since I do not use it. I've explored using LLMs in the past. I used to be a regular Copilot user, and I explored ChatGPT, Claude, etc. to see what their capabilities were. I have done trainings for my coworkers on how to use them effectively, though I would not feel comfortable doing so now. My employer's product uses LLMs. I don't want to link to my employer, but yeah, I guess my paycheck depends on being okay with integrating them in? It's complicated (a refrain). (This post is, obviously, my opinions and does not reflect my employer.) I don't think using or not using them is a moral failing. There is a lot of moralizing around LLM usage. I'm not doing any of that here. I have my own beliefs (or my own questions), but I don't think people using LLMs are immoral (or vice versa).

So, you can see I have used them and I'm not absolutist, but I don't use them today. Why not? Why did I stop using them in spite of the advances, where they're more capable than ever? It's because of these questions and issues. Where I have undecided ethical questions, I lean toward the more conservative2 choice of not using them until I have clarity on the ethics. (Note: I am not inviting folks to email me with answers to this question.)

Energy usage

Another technology that uses a lot of energy is blockchains. I think using public blockchains is almost universally unethical since there are other, better, less harmful options. Part of the harm from blockchains is an absurd amount of energy usage. LLMs also use a lot of energy. This can be split into training and inference energy usage. These vary based on the model.

Some models can run locally on Apple silicon, and those are lower energy usage—their upper bound is running your computer full tilt, and an M4 Mac mini's max power consumption is 65 watts. This is roughly equivalent to one incandescent light bulb, or 8 LED light bulbs. It's good to turn off unnecessary lights, but doing so isn't going to solve the climate crisis; we need bigger, more sweeping reforms. I don't think that local models are going to significantly alter the climate crisis in either direction.

Other models are massive and run in data centers on lots of power-hungry GPUs3. These data centers also require construction, and that comes with its own environmental impact. An article from Tom's Guide last year showed that "a single query on ChatGPT-4 can use up to 3 bottles of water, that a year of queries uses enough electricity to power over nine houses". A lot of the cost comes from new data centers being built. The demand for LLMs has led to more demand for power generation and more demand for data centers. And this new power generation is coming from gas-fired plants instead of sustainable, clean energy sources, because that's all we can build fast enough.

A lot of attention is given to the training side. The numbers for training are large and shocking: Llama 3 used 500 MWh and GPT-3 training used 1,287 MWh—even more if you include the cost of training failed models which preceded these, the experiments that made the models possible.
The listed figures are high, and 500 MWh is about the energy cost of a large jet flying for 7 hours. But we do it once per foundational model, and then the cost is spread across all the remaining usage. I don't think that the training side is significantly shifting the equation on climate change. We'd have a much larger impact on improving the climate crisis by advocating for remote work—reducing vehicles on the road, making many flights unnecessary—than by not training models. Overall it feels to me like local models have a clearly acceptable impact, and data center models have higher energy usage but still probably do not change the situation very much.

Training data

The training data for LLMs has largely been lifted without the consent of the people who generated that data. This is a lot of writing, music, videos, visual art, all of it. There are some attempts out there at using licensed data only, but the majority of models, and the most popular ones, use unlicensed data. Now the question is: is this an ethical problem? I know there are opinions on both sides of this. Some say that this data is publicly visible on the internet (though some of the data was not on the internet), and so it's fair game. Others say that this use isn't one that people consented to, and it should require that consent.

My thought experiment is this: If we made search engines today, would people have this same objection to a search engine using their data without their express consent? I think most people would ultimately support search engines. They are different than LLMs, because they (mostly) serve results that point you to the original source, rather than create new content for you to replace the original sources with. But maybe people would reject search engines. And maybe consistency between these two isn't necessary, or maybe it's a false consistency—there could be other differentiating details that lead to different answers. Where I come down is that I think we need a robust mechanism to opt out of use of data for training, but that it's probably fine to train with publicly available data on the internet. What you do with the trained model is another question entirely. When you try to replace people making original works instead of creating an entirely new function, that's where you get questions.

"Replacing" people

There's a lot of LLM usage that is just trying to replace people in entire jobs. I mean, it feels like all of it. We see LLMs that are meant to replace writers and editors, artists and illustrators, musicians and songwriters. It doesn't say that directly—it says you're empowered to create things yourself. But what it means is that people should be able to press a few buttons and, with no artist involved, get out a beautiful artwork. Sounds like replacing to me. This is something we've long done with technology. We make technology that puts people out of jobs, and that was the whole industrial revolution. That's what happened with shipping from the containerization of ports, putting many dockworkers out of jobs. Replacing people in the abstract is not unethical. The problem is if we fail to deal with the harm created from replacing people. And people will be harmed, because losing your job, or the value of it going down, has a serious impact on quality of life. When we put people out of work, we—both society and technologists—have an ethical responsibility to ensure there's a plan to mitigate the harm from that.
Maybe that means grants for living expenses while people switch fields, if put out of work in an LLM-heavy industry. Maybe that means a universal basic income. But it certainly doesn't mean doing nothing.

Incorrect information and bias

One of the major well-known problems of LLMs is a tendency to "hallucinate," or to confidently state facts that are made up from whole cloth. They also have an unknown amount of bias, with unknown mitigations in place, due to being closed systems. This is a big problem! We're not good at spotting incorrect information in something that's generated to look like the most likely string of tokens. If there is incorrect information in there, we'll just miss it. This means that people can make poor decisions on the basis of what an LLM tells them. They can lose income due to its mistakes. And the bias? That's a huge problem, because it means that we don't know if these systems will reinforce existing problematic norms4. We don't know what they will reinforce, because the training data is closed and there's not a lot of public evaluation on bias. So ultimately, we're left with an unknown harm of unknown magnitude to an unknown population5.

Concentrating power (with the wealthy elite)

A big ethical concern for me is also what this will do to our entire society. Many technologies are heralded as "democratizing" things. Spotify "democratized" music by making it so that anyone can get listens—but, y'all, it ended up flattening the tail and making the popular artists more money while making small artists less money. Will LLMs do the same? We know that the big models need large data centers to run their training and inference. And even small models need beefy hardware to run inference, let alone training! We have some access to models which can run locally, which is a good step. But the problem is that we can only run other people's models. They'll have those people's decisions baked in: decisions on which data to include or exclude, decisions on how to approach questions of bias and abuse. And when the hardware to run the biggest models is only accessible to a few companies, those companies get a lot more powerful. OpenAI and Anthropic and Google and Meta all have the ability to run really large models. I certainly don't, though. This means that a technology that many are heralding as making things more accessible to everyone is controlled by a small handful of people. A small handful of people can decide how the models are trained, and set policies on how they're used. In a time when the US government is trying to get any paper retracted that mentions queer people, and is erasing trans people from the Stonewall monument, it feels self-evident that letting a small group of people control this technology imperils the future of many people.

* * *

Ultimately, I want robots to do the things I don't want to do. I want them to do my dishes and my laundry. I don't want them to play music instead of me, write code instead of me, write words instead of me. I am not sure whether or not using LLMs is unethical. There are certainly ways of using them which are unquestionably unethical—as is true with every technology. And there are ways of developing LLMs which are unethical—as is true with every technology. But the problems with them are large. I think it is unethical to use them without addressing the ethical questions above. If you're not working on mitigating the harms from LLMs (which do exist), then you might be doing something unethical.
1 I've been in the interesting situation of having anti-LLM people think I'm pro-LLM and vice versa. It's a very weird feeling, and makes me a little nervous of posting this! But this is an important question and an important conversation.

2 Footnoting out of an abundance of caution: I don't mean conservative-like-Republicans, because, no, I'd like to keep my rights, thank you very much. I just mean in terms of minimizing risk. Let's please stop attacking trans rights, immigrants, Palestinians, and, you know, everyone else that I've forgotten because the whole world seems to be on fire.

3 If we ever achieve artificial sentience, this sentence may acquire a second, more sinister, meaning.

4 It will.

5 My guess is a large harm to all underrepresented groups, but who asked the trans woman?

3 weeks ago 18 votes
What's in a ring buffer? And using them in Rust

Working on my cursed MIDI project, I needed a way to store the most recent messages without risking an unbounded amount of memory usage. I turned to the trusty ring buffer for this! I started by writing a very simple one myself in Rust, then I looked and, what do you know, of course the standard library has an implementation. While I was working on this project, I was pairing with a friend. She asked me about ring buffers, and that's where this post came from!

What's a ring buffer?

Ring buffers are known by a few names. Circular queues and circular buffers are the other two I hear the most. But this name just has a nice ring to it. It's an array, basically: a buffer, or a queue, or a list of items stacked up against each other. You can put things into it and take things out of it the same as any buffer. But the front of it connects to the back, like any good ring1. Instead of adding to the end and popping from the end of it, like a stack, you can add to one end and remove from the start, like a queue. And as you add or remove things, the start and end of the list move around. Your data eats its own tail. This lets us keep a fixed number of elements in the buffer without running into reallocation. In a regular old buffer, if you use it as a queue—add to the end, remove from the front—then you'll eventually need to either reallocate the entire thing or shift all the elements over. Instead, a ring buffer lets you just keep adding from either end and removing from either end and you never have to reallocate!

Uses for a ring buffer

My MIDI program is one example of where you'd want a ring buffer, to keep the most recent items. There are some general situations where you'll run into this:

- You want a fixed number of the most recent things, like the last 50 items seen
- You want a queue, especially with a fixed maximum number of queued elements2
- You want a buffer for data coming in with an upper bound, like with streaming audio, and you want to overwrite old data if the consumer can't keep up for a bit

A lot of it comes down to performance, and streaming data. Something is producing data, something else is consuming it, and you want to make sure insertions and removals are fast. That was exactly my use: a MIDI device produces messages, my UI consumes them, but I don't want to fill up all my memory, forever, with more of them.

How ring buffers work

So how do they work? This is a normal, non-circular buffer: when you add something, you put it on the end. If you're using it as a stack, then you can pop things off the end. But if you're using it as a queue, you pop things off the front... and then you have to shuffle everything to the left. That's why we have ring buffers! I'll draw it as a straight line here, to show how it's laid out in memory. But the end loops back to the front logically, and we'll see how it wraps around. We start with an empty buffer, and start and end both point at the first element. When start == end, we know the buffer is empty. If we insert an element, we move end forward. And it gets repeated as we insert multiple times. If we remove an element, we move start forward. We can also start the buffer at any point, and it crosses over the end gracefully. Ring buffers are pretty simple when you get into their details, with just a few things to wrap your head around. It's an incredibly useful data structure!
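Here is a minimal sketch of the index arithmetic described above: a fixed-capacity buffer with a start index and a length, wrapping with modular arithmetic, and overwriting the oldest element when full. The names and structure are my own illustration, not code from the post:

    // A fixed-capacity ring buffer (assumes capacity > 0). `start` is the index
    // of the oldest element, `len` is how many elements are stored, and indices
    // wrap modulo the capacity.
    struct RingBuffer<T> {
        items: Vec<Option<T>>,
        start: usize,
        len: usize,
    }

    impl<T> RingBuffer<T> {
        fn new(capacity: usize) -> Self {
            let mut items = Vec::with_capacity(capacity);
            items.resize_with(capacity, || None);
            RingBuffer { items, start: 0, len: 0 }
        }

        fn push(&mut self, value: T) {
            let end = (self.start + self.len) % self.items.len();
            self.items[end] = Some(value);
            if self.len == self.items.len() {
                // Full: the new element overwrote the oldest one, so advance start.
                self.start = (self.start + 1) % self.items.len();
            } else {
                self.len += 1;
            }
        }

        fn pop(&mut self) -> Option<T> {
            if self.len == 0 {
                return None;
            }
            let value = self.items[self.start].take();
            self.start = (self.start + 1) % self.items.len();
            self.len -= 1;
            value
        }
    }

    fn main() {
        let mut recent = RingBuffer::new(3);
        for msg in ["a", "b", "c", "d"] {
            recent.push(msg);
        }
        // "a" was overwritten; only the three most recent messages remain.
        assert_eq!(recent.pop(), Some("b"));
    }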
Rust options

If you want to use one in Rust, what are your options? There's the standard library, which includes VecDeque. This implements a ring buffer, and the name comes from "Vec" (Rust's growable array type) combined with "Deque" (a double-ended queue). As this is in the standard library, it's quite accessible from most code, and it's a growable ring buffer. This means that the pop/push operations will always be efficient, but if your buffer is full it will result in a resize operation. The amortized running time will still be O(1) for insertion, but you incur the cost all at once when a resize happens, rather than at each insertion.

You can enforce size limits in your code, if you want to avoid resize operations. Here's an example of how you could do that. You check if it's full first, and if so, you remove something to make space for the new element.

    use std::collections::VecDeque;

    fn main() {
        let mut buffer: VecDeque<u32> = VecDeque::with_capacity(10);
        for thing in 0..15 {
            // If the buffer is already full, remove the first element to make space.
            if buffer.len() == 10 {
                buffer.pop_front();
            }
            buffer.push_back(thing);
        }
        // buffer now holds the ten most recent values, 5 through 14.
    }

There are also libraries that can enforce this for you! For example, circular-buffer implements a ring buffer. It has a fixed max capacity and won't resize on you, instead overwriting elements when you run out of space. The size is set at compile time, though, which can be great or can be a constraint that's hard to meet. There is also ringbuffer, which gives you a fixed-size buffer that's heap allocated at runtime. That buys you some flexibility, with the drawback of being heap-based instead of stack-based. I'm definitely reaching for a library when I need a non-growable ring buffer in Rust now. There are some good choices, and it saves me the trouble of having to manually enforce size limits.

1 One of my favorite rings represents the sun and the moon, with the sun nestling inside this crescent moon shape. Unfortunately, the ring is open there. It makes for a very nice visual effect, but it kept getting snagged on things and bending. So it's not a very good ring for living an active hands-on life.

2 These can also be resizable! The Rust standard library one is. This comes with reallocation cost, which amortizes to a low cost but can be expensive in individual moments.

a month ago 18 votes

More in programming

New Blog Post: "A Perplexing Javascript Parsing Puzzle"

I know I said we'd be back to normal newsletters this week and in fact had 80% of one already written. Then I unearthed something that was better left buried. Blog post here, Patreon notes here (Mostly an explanation of how I found this horror in the first place). Next week I'll send what was supposed to be this week's piece. (PS: April Cools in three weeks!)

17 hours ago 3 votes
Notes on Improving Churn

Ask any B2C SaaS founder what metric they’d like to improve and most will say reducing churn. However, proactively reducing churn is a difficult task. I’ll outline the approach we’ve taken at Jenni AI to go from ~17% to 9% churn over the past year. We are still a work in progress but hopefully you’ll […]

20 hours ago 3 votes
Catching grace

Meditation is easy when you know what to do: absolutely nothing! It's hard at first, like trying to look at the back of your own head, but there's a knack to it.

17 hours ago 3 votes
Python Performance: Why 'if not list' is 2x Faster Than Using len()

Discover why 'if not mylist' is twice as fast as 'len(mylist) == 0' by examining CPython's VM instructions and object memory access patterns.

12 hours ago 2 votes
Our switch to Kamal is complete

In a fit of frustration, I wrote the first version of Kamal in six weeks at the start of 2023. Our plan to get out of the cloud was getting bogged down in enterprisey pricing and Kubernetes complexity. And I refused to accept that running our own hardware had to be that expensive or that convoluted. So I got busy building a cheap and simple alternative.

Now, just two years later, Kamal is deploying every single application in our entire heritage fleet, and everything in active development. Finalizing a perfectly uniform mode of deployment for every web app we've built over the past two decades and still maintain.

See, we have this obsession at 37signals: That the modern build-boost-discard cycle of internet applications is a scourge. That users ought to be able to trust that when they adopt a system like Basecamp or HEY, they don't have to fear eviction from the next executive re-org. We call this obsession Until The End Of The Internet.

That obsession isn't free, but it's worth it. It means we're still operating the very first version of Basecamp for thousands of paying customers. That's the OG code base from 2003! Which hasn't seen any updates since 2010, beyond security patches, bug fixes, and performance improvements. But we're still operating it, and, along with every other app in our heritage collection, deploying it with Kamal. That just makes me smile, knowing that we have customers who adopted Basecamp in 2004, and are still able to use the same system some twenty years later. In the meantime, we've relaunched and dramatically improved Basecamp many times since. But for customers happy with what they have, there's no forced migration to the latest version.

I very much had all of this in mind when designing Kamal. That's one of the reasons I really love Docker. It allows you to encapsulate an entire system, with all of its dependencies, and run it until the end of time. Kind of how modern gaming emulators can run the original ROM of Pac-Man or Pong to perfection and eternity. Kamal seeks to be but a simple wrapper and workflow around this wondrous simplicity. Complexity is but a bridge — and a fragile one at that. To build something durable, you have to make it simple.

23 hours ago 2 votes