There's this thing in Python that always trips me up. It's not that tricky, once you know what you're looking for, but it's not intuitive for me, so I do forget. It's that shadowing a variable can sometimes give you an UnboundLocalError!

It happened to me last week while working on a workflow engine with a coworker. We were refactoring some of the code. I can't share that code (yet?) so let's use a small example that illustrates the same problem. Let's start with some working code, which we had before our refactoring caused a problem. Here's some code that defines a decorator for a function, which will trigger some other functions after it runs.

    def trigger(*fns):
        """After the decorated function runs, it will trigger the provided
        functions to run sequentially. You can provide multiple functions
        and they run in the provided order.

        This function *returns* a decorator, which is then applied to the
        function we want to use to trigger other functions.
        """
        def...
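
The excerpt cuts off there, but the gotcha the post describes is easy to show in isolation. Here is a minimal sketch (my own illustration, not the post's workflow-engine code) of how assigning to a name anywhere inside a function shadows the outer name for the whole function body, so a read before the assignment raises UnboundLocalError:

    # Minimal sketch of the shadowing gotcha described above.
    counter = 0  # a module-level (global) name

    def bump():
        print(counter)   # UnboundLocalError: the assignment below makes
        counter = 1      # `counter` local to the entire function, so this
                         # read sees a local that has no value yet.

    def bump_fixed():
        global counter   # opt back in to the module-level name
        print(counter)
        counter = 1

    bump_fixed()  # prints 0
    bump()        # raises UnboundLocalError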
3 days ago


More from ntietz.com blog - technically a blog

Big endian and little endian

Every time I run into endianness, I have to look it up. Which way do the bytes go, and what does that mean? Something about it breaks my brain, and makes me feel like I can't tell which way is up and down, left and right. This is the blog post I've needed every time I run into this. I hope it'll be the post you need, too.

What is endianness?

The term comes from Gulliver's Travels, referring to a conflict over cracking boiled eggs on the big end or the little end[1]. In computers, the term refers to the order of bytes within a segment of data, or a word. Specifically, it only refers to the order of bytes, as those are the smallest unit of addressable data: bits are not individually addressable.

The two main orderings are big-endian and little-endian. Big-endian means you store the "big" end first: the most-significant byte (highest value) goes into the smallest memory address. Little-endian means you store the "little" end first: the least-significant byte (smallest value) goes into the smallest memory address.

Let's look at the number 168496141 as an example. This is 0x0A0B0C0D in hex. If we store 0x0A at address a, 0x0B at a+1, 0x0C at a+2, and 0x0D at a+3, then this is big-endian. And then if we store it in the other order, with 0x0D at a and 0x0A at a+3, it's little-endian.

And... there's also mixed-endianness, where you use one kind within a word (say, little-endian) and a different ordering for words themselves (say, big-endian). If our example is on a system that has 2-byte words (for the sake of illustration), then we could order these bytes in a mixed-endian fashion. One possibility would be to put 0x0B in a, 0x0A in a+1, 0x0D in a+2, and 0x0C in a+3. There are certainly reasons to do this, and it comes up on some ARM processors, but... it feels so utterly cursed. Let's ignore it for the rest of this!

For me, the intuitive ordering is big-endian, because it feels like it matches how we read and write numbers in English[2]. If lower memory addresses are on the left, and higher on the right, then this is the left-to-right ordering, just like digits in a written number.

So... which do I have?

Given some number, how do I know which endianness it uses? You don't, at least not from the number entirely by itself. Each integer that's valid in one endianness is still a valid integer in another endianness, it just is a different value. You have to see how things are used to figure it out. Or you can figure it out from the system you're using (or which wrote the data).

If you're using an x86 or x64 system, it's mostly little-endian. (There are some instructions which enable fetching/writing in a big-endian format.) ARM systems are bi-endian, allowing either. But perhaps the most popular ARM chips today, Apple silicon, are little-endian. And the major microcontrollers I checked (AVR, ESP32, ATmega) are little-endian. It's thoroughly dominant commercially!

Big-endian systems used to be more common. They're not really in most of the systems I'm likely to run into as a software engineer now, though. You are likely to run into it for some things, though. Even though we don't use big-endianness for processor math most of the time, we use it constantly to represent data. It comes back in networking! Most of the Internet protocols we know and love, like TCP and IP, use "network order" which means big-endian. This is mentioned in RFC 1700, among others. Other protocols do also use little-endianness again, though, so you can't always assume that it's big-endian just because it's coming over the wire.
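
To see the two orderings concretely, here is a small Python sketch (my own illustration, not from the post) that writes the example number both ways and checks what the host machine uses natively:

    import sys

    n = 0x0A0B0C0D  # 168496141, the example above

    # Index 0 of the bytes object corresponds to the lowest memory address.
    print(n.to_bytes(4, "big").hex())     # 0a0b0c0d -- most-significant byte first
    print(n.to_bytes(4, "little").hex())  # 0d0c0b0a -- least-significant byte first

    # What your processor uses natively:
    print(sys.byteorder)  # "little" on x86/x64 and Apple silicon
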
So... which do you have? For your processor, probably little-endian. For data written to the disk or to the wire: who knows, check the protocol!

Why do we do this???

I mean, ultimately, it's somewhat arbitrary. We have an endianness in the way we write, and we could pick either right-to-left or left-to-right. Both exist, but we need to pick one. Given that, it makes sense that both would arise over time, since there's no single entity controlling all computer usage[3].

There are advantages to each, though. One of the more interesting advantages is that little-endianness lets us pretend integers are whatever size we like, within bounds. If you write the number 26[4] into memory on a big-endian system, then read bytes from that memory address, it will represent different values depending on how many bytes you read. The length matters for reading in and interpreting the data. If you write it into memory on a little-endian system, though, and read bytes from the address (with the remaining ones zero, very important!), then it is the same value no matter how many bytes you read. As long as you don't truncate the value, at least; 0x0A0B read as an 8-bit int would not be equal to being read as a 16-bit int, since an 8-bit int can't hold the entire thing. This lets you read a value in the size of integer you need for your calculation without conversion. (The sketch after the footnotes shows this in action.)

On the other hand, big-endian values are easier to read and reason about as a human. If you dump out the raw bytes that you're working with, a big-endian number can be easier to spot since it matches the numbers we use in English. This makes it pretty convenient to store values as big-endian, even if that's not the native format, so you can spot things in a hex dump more easily.

Ultimately, it's all kind of arbitrary. And it's a pile of standards where everything is made up, nothing matters, and the big-end is obviously the right end of the egg to crack. You monster.

[1] The correct answer is obviously the big end. That's where the little air pocket goes. But some people are monsters... ↩

[2] Please, please, someone make a conlang that uses mixed-endian inspired numbers. ↩

[3] If ever there were, maybe different endianness would be a contentious issue. Maybe some of our systems would be using big-endian but eventually realize their design was better suited to little-endian, and then spend a long time making that change. And then the government would become authoritarian on the promise of eradicating endianness-affirming care and—Oops, this became a metaphor. ↩

[4] 26 in hex is 0x1A, which is purely a coincidence and not a reference to the First Amendment. This is a tech blog, not political, and I definitely stay in my lane. If it were a reference, though, I'd remind you to exercise your 1A rights[5] now and call your elected officials to ensure that we keep these rights. I'm scared, and I'm staring down the barrel of potential life-threatening circumstances if things get worse. I expect you're scared, too. And you know what? Bravery is doing things in spite of your fear. ↩

[5] If you live somewhere other than the US, please interpret this as it applies to your own country's political process! There's a lot of authoritarian movement going on in the world, and we all need to work together for humanity's best, most free[6] future. ↩

[6] I originally wrote "freest" which, while spelled correctly, looks so weird that I decided to replace it with "most free" instead. ↩
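
And here is that width-flexibility point in runnable form, again my own Python sketch rather than the post's code: store 26 in an 8-byte buffer in each byte order, then read back prefixes of different lengths.

    # Write 26 into an 8-byte buffer in each byte order, then read back the
    # first 1, 2, 4, and 8 bytes. Assumes the remaining bytes are zero, as
    # the post notes is essential.
    little = (26).to_bytes(8, "little")   # 1a 00 00 00 00 00 00 00
    big    = (26).to_bytes(8, "big")      # 00 00 00 00 00 00 00 1a

    for width in (1, 2, 4, 8):
        print(width,
              int.from_bytes(little[:width], "little"),  # 26 at every width
              int.from_bytes(big[:width], "big"))        # 0, 0, 0, then 26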

a week ago 9 votes
Who are your teammates?

If you manage a team, who are your teammates? If you're a staff software engineer embedded in a product team, who are your teammates? The answer to the question comes down to who your main responsibility lies with. That's not the folks you're managing and leading. Your responsibility lies with your fellow leaders, and they're your teammates.

The first team mentality

There's a concept in leadership called the first team mentality. If you're a leader, then you're a member of a couple of different teams at the same time. Using myself as an example, I'm a member of the company's leadership team (along with the heads of marketing, sales, product, etc.), and I'm also a member of the engineering department's leadership team (along with the engineering directors and managers and the CTO). I'm also sometimes embedded into a team for a project, and at one point I was running a 3-person platform team day-to-day. So I'm on at least two teams, but often three or more.

Which of these is my "first" team, the one which I will prioritize over all the others? For my role, that's ultimately the company leadership. Each department is supposed to work toward the company goals, and so if there's an inter-department conflict you need to do what's best for the company—helping your fellow department heads—rather than what's best for your department. (Ultimately, your job is to get both of these into alignment; more on that later.)

This applies across roles. If you're an engineering manager, your teammates are not the people who report to you. Your teammates are the other engineering managers and staff engineers at your level. You all are working together toward department goals, and sometimes the team has to sacrifice to make that happen.

Focus on the bigger goals

One of the best things about a first team mentality is that it comes with a shift in where your focus is. You have to focus on the broader goals your group is working in service of, instead of focusing on your group's individual work. I don't think you can achieve either without the other. When you zoom out from the team you lead or manage and collaborate with your fellow leaders, you gain context from them. You see what their teams are working on, and you can contextualize your work with theirs. And you also see how your work impacts theirs, both positively and negatively.

That broader context gives you a reminder of the bigger, broader goals. It can also show you that those goals are unclear. And if that's the case, then the work you're doing in your individual teams doesn't matter, because no one is going in the same direction! What's more important there is to focus on figuring out what the bigger goals should be. And once those are done, then you can realign each of your groups around them.

Conflicts are a lens

Sometimes the first team mentality will result in a conflict. There's something your group wants or needs, which will result in a problem for another group. Ultimately, this is your work to resolve, and the conflict is a lens you can use to see misalignment and to improve the greater organization. You have to find a way to make sure that your group is healthy and able to thrive. And you also have to make sure that your group works toward collective success, which means helping all the groups achieve success. Any time you run into a conflict like this, it means that something went wrong in alignment. Either your group was doing something which worked against its own goal, or it was doing something which worked against another group's goal.
If the latter, then that means that the goals themselves fundamentally conflicted! So you go and you take that conflict, and you work through it. You work with your first team—and you figure out what the mismatch is, where it came from, and most importantly, what we do to resolve it. Then you take those new goals back to your group. And you do it with humility, since you're going to have to tell them that you made a mistake. Because that alignment is ultimately your job, and you have to own your failures if you expect your team to be able to trust you and trust each other.

2 weeks ago 12 votes
Stewardship over ownership

Code ownership is a popular concept, but it emphasizes the wrong thing. It can bring out the worst in a person or a team: defensiveness, control-seeking, power struggles. Instead, we should be focusing on stewardship.

How code ownership manifests

Code ownership as a concept means that a particular person or team "owns" a section of the codebase. This gives them certain rights and responsibilities:

- They control what goes into the code, and can approve or deny changes
- They are responsible for fixing bugs in that part of the code
- They are responsible for maintaining and improving that part of the code

There are tools that help with these, like the CODEOWNERS file on GitHub. This file lets you define a group or list of individuals who own a section of the repository. Then you can require reviews/approvals from them before anything gets merged.

These are all coming from a good place. We want our code to be well-maintained, and we want to make sure that someone is responsible for its direction. It really helps to know who to go to with questions or requests. Without these, changes can grind to a halt, mired in confusion and tech debt.

But the concept in practice brings challenges. If you've worked on a team using code ownership before, you've probably run into:

- that engineer who guards the code against anyone else's changes, wanting all the credit for themselves
- that engineer who refuses to add anything else to their codebase, because they don't want to maintain it
- that engineer who tries to gain code ownership over more areas, to control more of the code and more of the company

I've certainly acted badly due to code ownership, without realizing what I was doing or why I was doing it at the time. There are almost endless ways that code ownership can bring out the worst in people. And it all makes sense. We can do better by shifting to stewardship instead of ownership.

Stewardship is about service

We are all stewards of things we own or are responsible for. I have stewardship over the house I live in with my family, for example. I also have stewardship over the espresso machine I use every day: It's a big piece of machinery, and it's my responsibility to take good care of it and to ensure that as long as it's mine, it operates well and lasts a long time. That reduces expense, reduces waste, and reduces impact on the world—but it also means that the object (an espresso machine) is serving its purpose to bring joy and connection.

Code is no different. By focusing on stewardship rather than ownership, we are focusing on the responsible, sustainable maintenance of the code. We focus on taking good care of that which we're entrusted with. A steward doesn't jealously guard, or struggle to gain more power. A steward watches what her responsibilities are, ensuring enough to contribute but not so many as to burn out. And she nurtures and cares for the code, to make sure that it continues to serve its purpose. Instead of an adversarial relationship, stewardship promotes partnership: It promotes working with others to figure out how to make the best use of resources, instead of hoarding them for yourself.

Stewardship can solve many of the same problems that code ownership does:

- It gives you someone who's a main point of contact for some code
- It grants someone responsibility for bug fixes and maintenance of that code

And in some ways, they look alike. You're going to do a lot of the same things, controlling what goes in or out. But they are very different in the focus.
Owners are concerned with the value of what they own. Stewards are concerned with how well it can serve the group. And this makes all the difference in producing better outcomes.

3 weeks ago 16 votes
Some things that make Rust lifetimes hard to learn

After I wrote YARR (Yet Another Rust Resource, with requisite pirate mentions), one of my friends tried it out. He gave me some really useful insights as he went through it, letting me see what was hard about learning Rust from a newcomer's perspective. Unsurprisingly, lifetimes are a challenge—and seeing him go through it helped me understand why they're hard to learn. Here are a few of the challenges he ran into. I don't think that these are necessarily problems, but they're perhaps opportunities to improve educational materials.

They don't map 100% to how long a variable is in memory

My friend gave me an example he's seen a few times when people explain lifetimes.

    fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
        if x.len() > y.len() {
            x
        } else {
            y
        }
    }

And for many newcomers, you see this and you expect it is saying that x and y both have the lifetime 'a, so they live the same amount of time. But the following is valid:

    fn print_longest(x: &'static str) {
        let y = "local";
        let a = longest(x, y);
        println!("{a}");
        drop(a);
        drop(y);
        println!("y is gone");
    }

In this example, x and y live for different amounts of time. y doesn't even survive to the end of the function, whereas x should be valid for the entire duration of the program. That's because lifetimes are talking about a bound on the time something can live. There's some lifetime 'a during which we can say that x and y are both certainly valid. But x and y can both live longer than 'a.

Lifetimes don't change the runtime behavior

Most code we write changes what the program does at runtime. Types can be different, because sometimes you're giving the compiler information about what something is. But most type information can change the runtime behavior! The simplest example is when you have an integer. You can declare one without a type.

    let x = 10;

This has an inferred type, and if you set a different type, like u8, you'll get different behavior at run time.

    let x: u8 = 10;

In contrast, lifetimes are only used by the compiler to ensure that borrows are all valid. The compiler can reject your program if invalid borrows are performed, but the binary output should not be affected by the lifetimes of the variables.

It's a different kind of type system

We're used to seeing types in our programming languages, and these type systems are usually pretty similar. Rust's lifetimes are different, though. The borrow checker uses a linear type system to do its work. These are super cool, and something that I don't understand particularly well. I'm familiar with how to use the borrow checker, but I don't know any of the theory behind them. The premise, as I understand it, is that objects can be used exactly once, allowing you to safely deallocate them after use (since they won't be used again). This prevents multiple concurrent uses (yay, data race protection!) or use-after-free (yay, segfault protection!).

The coolness is why we have it, but it's still pretty tough to understand. You have to learn this whole new type system that's pretty different from everything else you've touched. And most of the resources[1] out there don't even mention that it's a different kind of type system!

They share syntax with generics

Another challenge is that the syntax is shared with generics. Even though lifetimes are very different in behavior and type system from generics, they sit inside very similar looking syntax. This is probably unavoidable—lifetimes are related to all the other types in your code—but it certainly makes things harder to learn.
When you see something like this, you expect that it's generic over a type.

    fn something_generic<T>(arg: T) { ... }

And you're right that it is! But then you have something that looks very similar, like this. And you might expect it to also be generic over a type.

    fn something_generic<'a>(arg: &'a str) { ... }

But it's not, in the normal sense. Instead it's generic over a lifetime. And it's a little confusing that those sit in the same spot, especially when it's not called out as a potential gotcha in learning materials.

* * *

Lifetimes have some inherent complexity. The borrow checker is a very valuable tool, and it's great we have it! But with that power and complexity can come challenges in learning, and teaching, the underlying concepts.

I think the current difficulty in learning Rust is due to a lot of things. One aspect is certainly some inherent complexity. But another aspect is that many resources aren't really geared toward the kind of programmer coming to Rust without this background knowledge, and there is room for improvement. We can make explanations of lifetimes and the borrow checker better and less confusing. Or we can at least make them more empathetic, projecting that it's expected to be confused because there are some good reasons it's hard to understand. And that you'll get there, eventually.

Thank you, Ryan, for generously sharing your thoughts as you went through learning Rust. Our conversations were instrumental in writing this post.

[1] I suppose, as the author of YARR, I can fix this in at least one instance.

a month ago 19 votes

More in programming

How to resource Engineering-driven projects at Calm? (2020)

One of the recurring challenges in any organization is how to split your attention across long-term and short-term problems. Your software might be struggling to scale with ramping user load while also knowing that you have a series of meaningful security vulnerabilities that need to be closed sooner than later. How do you balance across them? These sorts of balance questions occur at every level of an organization. A particularly frequent format is the debate between Product and Engineering about how much time goes towards developing new functionality versus improving what’s already been implemented.

In 2020, Calm was growing rapidly as we navigated the COVID-19 pandemic, and the team was struggling to make improvements, as they felt saturated by incoming new requests. This strategy for resourcing Engineering-driven projects was our attempt to solve that problem. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for resourcing Engineering-driven projects are:

- We will protect one Eng-driven project per product engineering team, per quarter. These projects should represent a maximum of 20% of the team’s bandwidth.
- Each project must advance a measurable metric, and execution must be designed to show progress on that metric within 4 weeks.
- These projects must adhere to Calm’s existing Engineering strategies.
- We resource these projects first in the team’s planning, rather than last. However, only concrete projects are resourced. If there’s no concrete proposal, then the team won’t have time budgeted for Engineering-driven work.
- The team’s engineering manager is responsible for deciding on the project, ensuring the project is valuable, and pushing back on attempts to defund the project.
- Project selection does not require CTO approval, but you should escalate to the CTO if there’s friction or disagreement.
- The CTO will review Engineering-driven projects each quarter to summarize their impact and provide feedback to teams’ engineering managers on project selection and execution. They will also review teams that did not perform a project to understand why not.

As we’ve communicated this strategy, we’ve frequently gotten conceptual alignment that this sounds reasonable, coupled with uncertainty about what sort of projects should actually be selected. At some level, this ambiguity is an acknowledgment that we believe teams will identify the best opportunities bottoms-up, but we also wanted to give two concrete examples of projects we’re greenlighting in the first batch:

- Code-free media release: historically, we’ve needed to make a number of pull requests to add, organize, and release new pieces of media. This is high urgency work, but Engineering doesn’t exercise much judgment while doing it, and manual steps often create errors. We aim to track and eliminate these pull requests, while also increasing the number of releases that can be facilitated without scaling the content release team.
- Machine-learning content placement: developing new pieces of media is often a multi-week or month process. After content is ready to release, there’s generally a debate on where to place the content. This matters for the company, as this drives engagement with our users, but it matters even more to the content creator, who is generally evaluated in terms of their content’s performance. This often leads to Product and Engineering getting caught up in debates about how to surface particular pieces of content. This project aims to improve user engagement by surfacing the best content for their interests, while also giving the Content team several explicit positions to highlight content without Product and Engineering involvement.

Although these projects are similar, it’s not intended that all Engineering-driven projects are of this variety. Instead it’s happenstance based on what the teams view as their biggest opportunities today.

Diagnosis

Our assessment of the current situation at Calm is:

- We are spending a high percentage of our time on urgent but low engineering value tasks. Most significantly, about one-third of our time is going into launching, debugging, and changing content that we release into our product. Engineering is involved due to limitations in our implementation, not because there is any inherent value in Engineering’s involvement. (We mostly just make releases slowly and inadvertently introduce bugs of our own.)
- We have a bunch of fairly clear ideas around improving the platform to empower the Content team to speed up releases, and to eliminate the Engineering involvement. However, we’ve struggled to find time to implement them, or to validate that these ideas will work.
- If we don’t find a way to prioritize, and succeed at implementing, a project to reduce Engineering involvement in Content releases, we will struggle to support our goals to release more content and to develop more product functionality this year.
- Our Infrastructure team has been able to plan and make these kinds of investments stick. However, when we attempt these projects within our Product Engineering teams, things don’t go that well. We are good at getting them onto the initial roadmap, but then they get deprioritized due to pressure to complete other projects.
- The Engineering team is not very fungible due to its small size (20 engineers), and because we have many specializations within the team: iOS, Android, Backend, Frontend, Infrastructure, and QA. We would like to staff these kinds of projects onto the Infrastructure team, but in practice that team does not have the product development experience to implement this kind of project.
- We’ve discussed spinning up a Platform team, or moving product engineers onto Infrastructure, but that would either (1) break our goal to maintain joint pairs between Product Managers and Engineering Managers, or (2) be indistinguishable from prioritizing within the existing team because it would still have the same Product Manager and Engineering Manager pair.
- Company planning is organic, occurring in many discussions and limited structured process. If we make a decision to invest in one project, it’s easy for that project to get deprioritized in a side discussion missing context on why the project is important. These reprioritization discussions happen both in executive forums and in team-specific forums. There’s imperfect awareness across these two sorts of forums.

Explore

Prioritization is a deep topic with a wide variety of popular solutions.
For example, many software companies rely on “RICE” scoring, calculating priority as (Reach times Impact times Confidence) divided by Effort. At the other extreme are complex methodologies like the Scaled Agile Framework (https://en.wikipedia.org/wiki/Scaled_agile_framework).

In addition to generalized planning solutions, many companies carve out special mechanisms to solve for particular prioritization gaps. Google historically offered 20% time to allow individuals to work on experimental projects that didn’t align directly with top-down priorities. Stripe’s Foundation Engineering organization developed the concept of Foundational Initiatives to prioritize cross-pillar projects with long-term implications, which otherwise struggled to get prioritized within the team-led planning process.

All these methods have clear examples of succeeding, and equally clear examples of struggling. Where these initiatives have succeeded, they had an engaged executive sponsoring the practice’s rollout, including triaging escalations when the rollout inconvenienced supporters of the prior method. Where they lacked a sponsor, or were misaligned with the company’s culture, these methods have consistently failed despite the fact that they’ve previously succeeded elsewhere.
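
For readers who haven't seen RICE scoring before, the arithmetic mentioned above is just the four factors multiplied and divided as described. A tiny, hypothetical Python sketch with made-up numbers (my illustration, not from the chapter):

    # priority = (Reach * Impact * Confidence) / Effort
    def rice(reach, impact, confidence, effort):
        return (reach * impact * confidence) / effort

    # e.g. reaches 2000 users per quarter, impact 2 (high), 80% confidence,
    # 3 person-months of effort:
    print(rice(2000, 2, 0.8, 3))  # about 1066.7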

4 hours ago 2 votes
Personal tools

I used to make little applications just for myself. Sixteen years ago (oof) I wrote a habit tracking application, and a keylogger that let me keep track of when I was using a computer, and generate some pretty charts. I’ve taken a long break from those kinds of things. I love my hobbies, but they’ve drifted toward the non-technical, and the idea of keeping a server online for a fun project is unappealing (which is something that I hope Val Town, where I work, fixes). Some folks maintain whole ‘homelab’ setups and run Kubernetes in their basement. Not me, at least for now. But I have been tiptoeing back into some little custom tools that only I use, with a focus on just my own computing experience. Here’s a quick tour.

Hammerspoon

Hammerspoon is an extremely powerful scripting tool for macOS that lets you write custom keyboard shortcuts, UIs, and more with the very friendly little language Lua. Right now my Hammerspoon configuration is very simple, but I think I’ll use it for a lot more as time progresses. Here it is:

    hs.hotkey.bind({"cmd", "shift"}, "return", function()
        local frontmost = hs.application.frontmostApplication()
        if frontmost:name() == "Ghostty" then
            frontmost:hide()
        else
            hs.application.launchOrFocus("Ghostty")
        end
    end)

Not much! But I recently switched to Ghostty as my terminal, and I heavily relied on iTerm2’s global show/hide shortcut. Ghostty doesn’t have an equivalent, and Mikael Henriksson suggested a script like this in GitHub discussions, so I ran with it. Hammerspoon can do practically anything, so it’ll probably be useful for other stuff too.

SwiftBar

I review a lot of PRs these days. I wanted an easy way to see how many were in my review queue and go to them quickly. So, this script runs with SwiftBar, which is a flexible way to put any script’s output into your menu bar. It uses the GitHub CLI to list the issues, and jq to massage that output into a friendly list of issues, which I can click on to go directly to the issue on GitHub.

    #!/bin/bash
    # <xbar.title>GitHub PR Reviews</xbar.title>
    # <xbar.version>v0.0</xbar.version>
    # <xbar.author>Tom MacWright</xbar.author>
    # <xbar.author.github>tmcw</xbar.author.github>
    # <xbar.desc>Displays PRs that you need to review</xbar.desc>
    # <xbar.image></xbar.image>
    # <xbar.dependencies>Bash GNU AWK</xbar.dependencies>
    # <xbar.abouturl></xbar.abouturl>

    DATA=$(gh search prs --state=open -R val-town/val.town --review-requested=@me --json url,title,number,author)

    echo "$(echo "$DATA" | jq 'length') PR"
    echo '---'
    echo "$DATA" | jq -c '.[]' | while IFS= read -r pr; do
        TITLE=$(echo "$pr" | jq -r '.title')
        AUTHOR=$(echo "$pr" | jq -r '.author.login')
        URL=$(echo "$pr" | jq -r '.url')
        echo "$TITLE ($AUTHOR) | href=$URL"
    done

Tampermonkey

Tampermonkey is essentially a twist on Greasemonkey: both let you run your own JavaScript on anybody’s webpage. Sidenote: Greasemonkey was created by Aaron Boodman, who went on to write Replicache, which I used in Placemark, and is now working on Zero, the successor to Replicache.

Anyway, I have a few fancy credit cards which have ‘offers’ which only work if you ‘activate’ them. This is an annoying dark pattern! And there’s a solution to it - CardPointers - but I neither spend enough nor care enough about points hacking to justify the cost. Plus, I’d like to know what code is running on my bank website. So, Tampermonkey to the rescue! I wrote userscripts for Chase, American Express, and Citi.
You can check them out on this Gist, but I strongly recommend reading through all the code because of the aforementioned risks around running untrusted code on your bank account’s website!

Obsidian Freeform

This is a plugin for Obsidian, the notetaking tool that I use every day. Freeform is pretty cool, if I can say so myself (I wrote it), but could be much better. The development experience is lackluster because you can’t preview output at the same time as writing code: you have to toggle between the two states. I’ll fix that eventually, or perhaps Obsidian will add a new API that makes it all work.

I use Freeform for a lot of private health & financial data, almost always with an Observable Plot visualization as an eventual output. For example, when I was switching banks and one of the considerations was mortgage discounts in case I ever buy a house (ha 😢), it was fun to chart out the % discounts versus the required AUM. It’s been really nice to have this kind of visualization as ‘just another document’ in my notetaking app. Doesn’t need another server, and Obsidian is pretty secure and private.

16 hours ago 2 votes
All conference talks should start with a small technical glitch that the speaker can easily solve

At a conference a while back, I noticed a couple of speakers get such a confidence boost after solving a small technical glitch. We should probably make that a part of every talk. Have the mic not connect automatically, or an almost-complete puzzle on the stage that the speaker can finish, or have someone forget their badge and the speaker return it to them. Maybe the next time I, or a consenting teammate, have to give a presentation I’ll try to engineer such a situation. All conference talks should start with a small technical glitch that the speaker can easily solve was originally published by Ognjen Regoje at Ognjen Regoje • ognjen.io on April 03, 2025.

yesterday 2 votes
Thomas Aquinas — The world is divine!

A large part of our civilisation rests on the shoulders of one medieval monk: Thomas Aquinas. Amid the turmoil of life, riddled with wickedness and pain, he would insist that our world is good.  And all our success is built on this belief. Note: Before we start, let’s get one thing out of the way: Thomas Aquinas is clearly a Christian thinker, a Saint even. Yet he was also a brilliant philosopher. So even if you consider yourself agnostic or an atheist, stay with me, you will still enjoy his ideas. What is good? Thomas’ argument is rooted in Aristotle’s concept of goodness: Something is good if it fulfills its function. Aristotle had illustrated this idea with a knife. A knife is good to the extent that it cuts well. He made a distinction between an actual knife and its ideal function. That actual thing in your drawer is the existence of a knife. And its ideal function is its essence—what it means to be a knife: to cut well.  So everything is separated into its existence and its ideal essence. And this is also true for humans: We have an ideal conception of what the essence of a human […] The post Thomas Aquinas — The world is divine! appeared first on Ralph Ammer.

yesterday 4 votes
[April Cools] Gaming Games for Non-Gamers

My April Cools is out! Gaming Games for Non-Gamers is a 3,000 word essay on video games worth playing if you've never enjoyed a video game before. Patreon notes here. (April Cools is a project where we write genuine content on non-normal topics. You can see all the other April Cools posted so far here. There's still time to submit your own!) April Cools' Club

2 days ago 4 votes