One of the best known hard problems in computer science is the halting problem. In fact, it's widely thought[1] that you cannot write a program that will, for any arbitrary program as input, tell you correctly whether or not it will terminate. This is written from the framing of computers, though: can we do better with a human in the loop? It turns out, we can. And we can use a method that's generalizable, which many people can follow for many problems. Not everyone can use the method, and you'll see why in a bit. But lots of people can apply this proof technique. Let's get started.

* * *

We'll start by formalizing what we're talking about, just a little bit. I'm not going to give the full formal proof—that will be reserved for when this is submitted to a prestigious conference next year. We will call the set of all programs P. We want to answer, for any p in P, whether or not p will eventually halt. We will call this h(p), and h(p) = true if p eventually finishes and false...
yesterday


More from ntietz.com blog - technically a blog

Taking a break

I've been publishing at least one blog post every week on this blog for about 2.5 years. I kept it up even when I was very sick last year with Lyme disease. It's time for me to take a break and reset. This is the right time, because the world is very difficult for me to move through right now and I'm just burnt out. I need to focus my energy on things that give me energy and right now, that's not writing and that's not tech. I'll come back to this, and it might look a little different. This is my last post for at least a month. It might be longer, if I still need more time, but I won't return before the end of May. I know I need at least that long to heal, and I also need that time to focus on music. I plan to play a set at West Philly Porchfest, so this whole month I'll be prepping that set. If you want to follow along with my music, you can find it on my bandcamp (only one track, but I'll post demos of the others that I prepare for Porchfest as they come together). And if you want to reach out, my inbox is open. Be kind to yourself. Stay well, drink some water. See you in a while.

a month ago 13 votes
Measuring my Framework laptop's performance in 3 positions

A few months ago, I was talking with a friend about my ergonomic setup and they asked if being vertical helps with cooling. I wasn't sure: it seemed like it could help, but the difference was probably so small that it wouldn't matter. So, I did what any self-respecting nerd would do: I procrastinated. The question didn't leave me, though, so after those months passed, I did the second thing any self-respecting nerd would do: benchmarks.

The question and the setup

What we want to find out is whether or not the position of the laptop affects its CPU performance. I wanted to measure it in three positions:

- normal: using it the way any normal person uses their laptop, with the screen and keyboard at something like a 90-degree angle
- closed: using it like a tech nerd, closed but plugged into a monitor and peripherals
- vertical: using it like a weird blogger who has sunk a lot of time into her ergonomic setup and wants to justify it even further

My hypothesis was that using it closed would slightly reduce CPU performance, and that using it normal or vertical would be roughly the same.

For this experiment, I'm using my personal laptop. It's one of the early Framework laptops (2nd batch of shipments), which is about four years old. It has an 11th gen Intel CPU in it, the i7-1165G7. My laptop will be sitting on a laptop riser for the closed and normal positions, and it will be sitting in my ergonomic tray for the vertical one. For all three, it will be connected to the same set of peripherals through a single USB-C cable, and the internal display is disabled for all three.

Running the tests

I'm not too interested in the initial boost clock. I'm more interested in what clock speeds we can sustain. What happens under a sustained, heavy load, when we hit a saturation point and can't shed any more heat? To test that, I'm running a test with a heavy CPU load. The load is generated by stress-ng, which also reports some statistics—most notably, CPU temperatures and clock speeds during the tests.

Here's the script I wrote to make these runs consistent. To skip the boost clock period, I warm the CPU up first with a 3-minute load. Then I do a 5-minute load and measure the CPU clock frequency and CPU temps every second along the way.

#!/bin/bash

# load the CPU for 3 minutes to warm it up
sudo stress-ng --matrix $2 -t 3m --tz --raplstat 1 --thermalstat 1 -Y warmup-$1.yaml --log-file warmup-$1.log --timestamp --ignite-cpu

# run for 5 minutes to gather our averages
sudo stress-ng --matrix $2 -t 5m --tz --raplstat 1 --thermalstat 1 -Y cputhermal-$1.yaml --log-file cputhermal-$1.log --timestamp --ignite-cpu

We need sudo since we're using an option (--ignite-cpu) which needs root privileges[1] and attempts to make the CPU run harder/hotter. Then we specify the stressor we're using with --matrix $2, which does some matrix calculations over a number of cores we specify. The remaining options are about reporting and logging. I let the computer cool for a minute or two between each test, but not for a scientific reason—just because I was doing other things. Since my goal was to saturate the temperatures, and they got stable within each warmup period, cooldown time wasn't necessary: we'd warm it back up anyway.

So, I ran this with the three positions, and with two core count options: 8, one per thread on my CPU; and 4, one per physical core on my CPU.

The results

Once it was done, I analyzed the results. I took the average clock speed across the 5-minute test for each of the configurations.
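That averaging step is simple enough to sketch. Here's roughly what it looks like in Python—a minimal sketch of my own, not the author's analysis code, with made-up sample values and without showing how the per-second samples get pulled out of stress-ng's YAML output:

from statistics import mean

# Hypothetical per-second samples for one configuration: (elapsed seconds,
# CPU clock in GHz, CPU temperature). In reality these come from the YAML
# files that stress-ng writes; the values below are invented for illustration.
samples = [
    (1, 3.46, 92.0),
    (2, 3.44, 91.5),
    (3, 3.43, 91.8),
    # ... one entry per second for the full 5-minute run
]

avg_clock = mean(s[1] for s in samples)
avg_temp = mean(s[2] for s in samples)
print(f"average clock: {avg_clock:.2f} GHz, average temp: {avg_temp:.2f}")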
My hypothesis was partially right and partially wrong. When doing 8 threads, each position had different results:

- Our baseline normal open position had an average clock speed of 3.44 GHz and an average CPU temp of 91.75 F.
- With the laptop closed, the average clock speed was 3.37 GHz and the average CPU temp was 91.75 F.
- With the laptop open and vertical, the average clock speed was 3.48 GHz and the average CPU temp was 88.75 F.

With 4 threads, the results were:

- For the baseline normal open position, the average clock speed was 3.80 GHz with average CPU temps of 91.11 F.
- With the laptop closed, the average clock speed was 3.64 GHz with average CPU temps of 90.70 F.
- With the laptop open and vertical, the average clock speed was 3.80 GHz with average CPU temps of 86.07 F.

So, I was wrong in one big aspect: it does make a clearly measurable difference. Having it open and vertical reduces temps by 3 degrees in one test and 5 in the other, and it had a higher clock speed (by 0.05 GHz, which isn't a lot but isn't nothing). We can infer that, since clock speeds improved in the heavier load test but not in the lighter load test, the lighter load isn't hitting our thermal limits—and when we do hit them, the extra cooling from the vertical position really helps. One thing is clear: in all cases, the CPU ran slower when the laptop was closed.

It's sorta weird that the CPU temps went down when closed in the second test. I wonder if that's from being able to cool down more when it throttled down a lot, or if there was a hotspot that throttled the CPU but which wasn't reflected in the temp data, maybe from a different sensor.

I'm not sure if having my laptop vertical like I do will ever make a perceptible performance difference. At any rate, that's not why I do it. But it does have lower temps, and that should let my fans run less often and be quieter when they do. That's a win in my book. It also means that when I run CPU-intensive things (say hi to every single Rust compile!) I should not close the laptop. And hey, if I decide to work from my armchair using my ergonomic tray, I can argue it's for efficiency: boss, I just gotta eke out those extra clock cycles.

[1] I'm not sure that this made any difference on my system. I didn't want to rerun the whole set without it, though, and it doesn't invalidate the tests if it simply wasn't doing anything. ↩

a month ago 9 votes
The five stages of incident response

The scene: you're on call for a web app, and your pager goes off.

Denial. No no no, the app can't be down. There's no way it's down. Why would it be down? It isn't down. Sure, my pager went off. And sure, the metrics all say it's down and the customer is complaining that it's down. But it isn't, I'm sure this is all a misunderstanding.

Anger. Okay so it's fucking down. Why did this have to happen on my on-call shift? This is so unfair. I had my dinner ready to eat, and *boom* I'm paged. It's the PM's fault for not prioritizing my tech debt, ugh.

Bargaining. Okay okay okay. Maybe... I can trade my on-call shift with Sam. They really know this service, so they could take it on. Or maybe I can eat my dinner while we respond to this...

Depression. This is bad, this is so bad. Our app is down, and the customer knows. We're totally screwed here, why even bother putting it back up? They're all going to be mad, leave, the company is dead... There's not even any point.

Acceptance. You know, it's going to be okay. This happens to everyone, apps go down. We'll get it back up, and everything will be fine.

2 months ago 22 votes
Python is an interpreted language with a compiler

After I put up a post about a Python gotcha, someone remarked that "there are very few interpreted languages in common usage," and that they "wish Python was more widely recognized as a compiled language." This got me thinking: what is the distinction between a compiled and an interpreted language? I was pretty sure that I do think Python is interpreted[1], but how would I draw that distinction cleanly?

On the surface level, it seems like the distinction between compiled and interpreted languages is obvious: compiled languages have a compiler, and interpreted languages have an interpreter. We typically call Java a compiled language and Python an interpreted language. But on the inside, Java has an interpreter and Python has a compiler. What's going on?

What's an interpreter? What's a compiler?

A compiler takes code written in one programming language and turns it into a runnable thing. It's common for this to be machine code in an executable program, but it can also be bytecode for a VM, or assembly language. On the other hand, an interpreter directly takes a program and runs it. It doesn't require any pre-compilation to do so, and can apply a variety of techniques to achieve this (even a compiler). That's where the distinction really lies: what you end up running. An interpreter runs your program, while a compiler produces something that can run later[2] (or right now, if it's in an interpreter).

Compiled or interpreted languages

A compiled language is one that uses a compiler, and an interpreted language uses an interpreter. Except... many languages[3] use both. Let's look at Java. It has a compiler, which you feed Java source code into, and you get out an artifact that you can't run directly. No, you have to feed that into the Java virtual machine, which then interprets the bytecode and runs it. So the entire Java stack seems to have both a compiler and an interpreter. But it's the usage, that you have to pre-compile it, that makes it a compiled language.

It's similar with Python[4]. It has an interpreter, which you feed Python source code into, and it runs the program. But on the inside, it has a compiler. That compiler takes the source code, turns it into Python bytecode, and then feeds that into the Python virtual machine. So, just like Java, it goes from code to bytecode (which is even written to disk, usually) and bytecode to VM, which then runs it. And here again we see the usage, where you don't pre-compile anything, you just run it. That's the difference. And that's why Python is an interpreted language with a compiler!

And... so what?

Ultimately, why does it matter? If I can do cargo run and get my Rust program running the same as if I did python main.py, don't they feel the same? On the surface level, they do, and that's because it's a really nice interface, so we've adopted it for many interactions! But underneath it, you see the differences peeping out from the compiled or interpreted nature. When you run a Python program, it will run until it encounters an error, even if there's malformed syntax! As long as it doesn't need to load that malformed syntax, you're able to start running. But if you cargo run a Rust program, it won't run at all if there's an error in the compilation step! It has to run the entire compilation process before the program will start at all. The difference in approaches runs pretty deep into the feel of an entire toolchain. That's where it matters, because it is one of the fundamental choices that everything else is built around.
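To make the "compiler inside the interpreter" part concrete, here's a minimal sketch using only the standard library (the function and the deliberately broken source string are my own examples, not from the post):

import dis

def greet(name):
    return f"hello, {name}"

# CPython already compiled greet to bytecode before anything ran;
# dis shows the bytecode it produced.
dis.dis(greet)

# The compile step also explains the "runs until it hits malformed syntax"
# behavior: broken code is only rejected when it actually gets compiled,
# e.g. when a module containing it is finally imported.
broken_source = "def oops(:\n    pass\n"
try:
    compile(broken_source, "<example>", "exec")
except SyntaxError as err:
    print("only now does the compiler complain:", err)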
The words here are ultimately arbitrary. But they tell us a lot about the language and tools we're using.

* * *

Thank you to Adam for feedback on a draft of this post.

[1] It is worth occasionally challenging your own beliefs and assumptions! It's how you grow, and how you figure out when you are actually wrong. ↩

[2] This feels like it rhymes with async functions in Python. Invoking a regular function runs it immediately, while invoking an async function creates something which can run later. ↩

[3] And it doesn't even apply at the language level, because you could write an interpreter for C++ or a compiler for Hurl, not that you'd want to, but we're going to gloss over that distinction here and just keep calling them "compiled/interpreted languages." It's how we talk about it already, and it's not that confusing. ↩

[4] Here, I'm talking about the standard CPython implementation. Others will differ in their details. ↩

2 months ago 26 votes

More in programming

2025-06-23 Mon:

activity: More Edna porting.

yesterday 2 votes
My Copy of The Internet Phone Book

I recently got my copy of the Internet Phone Book. Look who's hiding on the bottom inside spread of page 32: The book is divided into a number of categories — such as "Small", "Text", and "Ecology" — and I am beyond flattered to be listed under the category "HTML"! You can dial my site at number 223.

As the authors note, the sites of the internet represented in this book are not described by words like "attention", "competition", and "promotion". Instead they're better suited by words like "home", "love", and "glow". These sites don't look to impose their will on you, soliciting that you share, like, and subscribe. They look to spark curiosity, mystery, and wonder, letting you decide for yourself how to respond to the feelings of this experience.

But why make a printed book listing sites on the internet? That's crazy, right? Here's the book's co-author Kristoffer Tjalve in the introduction:

With the Internet Phone Book, we bring the web, the medium we love dearly, and call it into a thousand-year old tradition [of print]

I love that! I think the juxtaposition of websites in a printed phone book is exactly the kind of thing that makes you pause and reconsider the medium of the web in a new light. Isn't that exactly what art is for? Kristoffer continues:

Elliot and I began working on diagram.website, a map with hundreds of links to the internet beyond platform walls. We envisioned this map like a night sky in a nature reserve—removed from the light pollution of cities—inviting a sense of awe for the vastness of the universe, or in our case, the internet. We wanted people to know that the poetic internet already existed, waiting for them…The result of that conversation is what you now hold in your hands.

The web is a web because of its seemingly infinite number of interconnected sites, not because of its half-dozen social platforms. It's called the web, not the mall. There's an entire night sky out there to discover!

Email · Mastodon · Bluesky

2 days ago 3 votes
2025-06-22 Sun: Ban std::string

The use of std::string should be banned in C++ code bases. I'm sure this statement sounds like heresy and you want to burn me at the stake. But is it really controversial? Java, C#, Go, JavaScript, Python, Ruby, PHP: they all have immutable strings that are basically 2 machine words: a pointer to the string data and the size of the string. If they have an equivalent of std::string, it's something like StringBuilder. C++ should also use immutable strings in 97% of situations. The problem is gravity: the existing code, the culture. They all pull you strongly towards std::string, and going against the current is the hardest thing there is. There isn't a standard type for that. You can use the newish std::span<const char>, but there really should be a std::str (or some such). I did that in SumatraPDF, where I mostly pass char*, but I don't expect many other C++ code bases to switch away from std::string.

2 days ago 2 votes
I'm writing a book!

Over the course of my career, I've introduced a couple of engineers to the topic of query engines. Every time, I bumped into the same problem: query engines are extremely academic. Despite the fact that the industry has over 40 years of expertise, reading foundational papers and then sort of just googling around is the only way to dive deep. Database courses and seminars from Andy Pavlo definitely help, but they are still targeted at an academic audience and require a lot of extra reading to be useful.

2 days ago 4 votes