My wife's alarm clock has been acting up lately. Sporadic at first but then every day, it wouldn't blare in the morning at the set time. Instead, when it was supposed to go off it would... reset itself. The time would start flashing in that "I'm confused because the power went out" sort of way. Not very useful. You want the alarm to wake you up, not give you a new morning chore. I volunteered to try to fix it: I'm pretty handy, and I've done some basic electronics repair before. (Desperate times called for desperate measures, and I got that coffee grinder working again with just a little soldering.) Before opening it up, I pointed out that there's a battery. We reproduced the issue first, then changed the battery, and confirmed that that was the culprit! But that left the question of why it's doing that. It probably isn't using the battery for power all the time, since it's plugged in. And it does get its timekeeping from the wall—well, more on that later. You can even hear a 60 Hz...
6 months ago


More from ntietz.com blog - technically a blog

Taking a break

I've been publishing at least one blog post every week on this blog for about 2.5 years. I kept it up even when I was very sick last year with Lyme disease. It's time for me to take a break and reset. This is the right time, because the world is very difficult for me to move through right now and I'm just burnt out. I need to focus my energy on things that give me energy and right now, that's not writing and that's not tech.

I'll come back to this, and it might look a little different. This is my last post for at least a month. It might be longer, if I still need more time, but I won't return before the end of May. I know I need at least that long to heal, and I also need that time to focus on music. I plan to play a set at West Philly Porchfest, so this whole month I'll be prepping that set.

If you want to follow along with my music, you can find it on my bandcamp (only one track, but I'll post demos of the others that I prepare for Porchfest as they come together). And if you want to reach out, my inbox is open.

Be kind to yourself. Stay well, drink some water. See you in a while.

a week ago 4 votes
Measuring my Framework laptop's performance in 3 positions

A few months ago, I was talking with a friend about my ergonomic setup and they asked if being vertical helps with cooling. I wasn't sure: it seemed like it could help, but the difference was probably so small that it wouldn't matter. So, I did what any self-respecting nerd would do: I procrastinated. The question didn't leave me, though, so after those months passed, I did the second thing any self-respecting nerd would do: benchmarks.

The question and the setup

What we want to find out is whether or not the position of the laptop affects its CPU performance. I wanted to measure it in three positions:

normal: using it the way any normal person uses their laptop, with the screen and keyboard at something like a 90-degree angle
closed: using it like a tech nerd, closed but plugged into a monitor and peripherals
vertical: using it like a weird blogger who has sunk a lot of time into her ergonomic setup and wants to justify it even further

My hypothesis was that using it closed would slightly reduce CPU performance, and that using it normal or vertical would be roughly the same.

For this experiment, I'm using my personal laptop. It's one of the early Framework laptops (2nd batch of shipments), which is about four years old. It has an 11th-gen Intel CPU, the i7-1165G7. My laptop will be sitting on a laptop riser for the closed and normal positions, and in my ergonomic tray for the vertical one. For all three, it will be connected to the same set of peripherals through a single USB-C cable, and the internal display is disabled for all three.

Running the tests

I'm not too interested in the initial boost clock. I'm more interested in what clock speeds we can sustain. What happens under a sustained, heavy load, when we hit a saturation point and can't shed any more heat? To test that, I generate a heavy CPU load with stress-ng, which also reports some statistics. Most notably, it reports CPU temperatures and clock speeds during the tests.

Here's the script I wrote to make these runs consistent. To skip the boost clock period, I warm the CPU up first with a 3-minute load, then I run a 5-minute load and measure the CPU clock frequency and CPU temps every second along the way.

#!/bin/bash

# load the CPU for 3 minutes to warm it up
sudo stress-ng --matrix $2 -t 3m --tz --raplstat 1 --thermalstat 1 -Y warmup-$1.yaml --log-file warmup-$1.log --timestamp --ignite-cpu

# run for 5 minutes to gather our averages
sudo stress-ng --matrix $2 -t 5m --tz --raplstat 1 --thermalstat 1 -Y cputhermal-$1.yaml --log-file cputhermal-$1.log --timestamp --ignite-cpu

We need sudo since we're using an option (--ignite-cpu) which needs root privileges[1] and attempts to make the CPU run harder/hotter. Then we specify the stressor with --matrix $2, which does matrix calculations over the number of cores we specify. The remaining options are about reporting and logging.

I let the computer cool for a minute or two between each test, but not for a scientific reason, just because I was doing other things. Since my goal was to saturate the temperatures, and they stabilized within each warmup period, cooldown time wasn't necessary: we'd warm it back up anyway.

So, I ran this with the three positions, and with two core-count options: 8, one per thread on my CPU; and 4, one per physical core.

The results

Once it was done, I analyzed the results. I took the average clock speed across the 5-minute test for each of the configurations.
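For the averaging step, here's a minimal sketch of the kind of script that could do it. This is not the author's analysis code, and the regexes are assumptions about the log format, so adjust them to whatever your stress-ng version actually writes.

import re
import statistics

def average_metric(log_path, pattern):
    # Collect every value matched by `pattern` across the log and average it.
    values = []
    with open(log_path) as f:
        for line in f:
            match = re.search(pattern, line)
            if match:
                values.append(float(match.group(1)))
    return statistics.mean(values) if values else float("nan")

# Hypothetical patterns: check them against your actual log lines.
freq = average_metric("cputhermal-vertical.log", r"([0-9.]+)\s*GHz")
temp = average_metric("cputhermal-vertical.log", r"([0-9.]+)\s*C\b")
print(f"avg clock: {freq:.2f} GHz, avg temp: {temp:.2f} C")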
My hypothesis was partially right and partially wrong. When doing 8 threads, each position had different results:

Our baseline normal open position had an average clock speed of 3.44 GHz and an average CPU temp of 91.75°C.
With the laptop closed, the average clock speed was 3.37 GHz and the average CPU temp was 91.75°C.
With the laptop open vertical, the average clock speed was 3.48 GHz and the average CPU temp was 88.75°C.

With 4 threads, the results were:

For the baseline normal open position, the average clock speed was 3.80 GHz with average CPU temps of 91.11°C.
With the laptop closed, the average clock speed was 3.64 GHz with average CPU temps of 90.70°C.
With the laptop open vertical, the average clock speed was 3.80 GHz with average CPU temps of 86.07°C.

So, I was wrong in one big aspect: it does make a clearly measurable difference. Having it open and vertical reduces temps by 3 degrees in one test and 5 in the other, and it had a higher clock speed (by 0.05 GHz, which isn't a lot but isn't nothing). Since clock speeds improved in the heavier load test but not in the lighter one, we can infer that the lighter load isn't hitting our thermal limits, and when we do hit them, the extra cooling from the vertical position really helps. One thing is clear: in all cases, the CPU ran slower when the laptop was closed.

It's sorta weird that the CPU temps went down when closed in the second test. I wonder if that's from being able to cool down more when it throttled down a lot, or if there was a hotspot that throttled the CPU but which wasn't reflected in the temp data, maybe a different sensor.

I'm not sure if having my laptop vertical like I do will ever make a perceptible performance difference. At any rate, that's not why I do it. But it does have lower temps, and that should let my fans run less often and be quieter when they do. That's a win in my book. It also means that when I run CPU-intensive things (say hi to every single Rust compile!) I should not close the laptop. And hey, if I decide to work from my armchair using my ergonomic tray, I can argue it's for efficiency: boss, I just gotta eke out those extra clock cycles.

[1] I'm not sure that this made any difference on my system. I didn't want to rerun the whole set without it, though, and it doesn't invalidate the tests if it simply wasn't doing anything. ↩

2 weeks ago 3 votes
The five stages of incident response

The scene: you're on call for a web app, and your pager goes off.

Denial. No no no, the app can't be down. There's no way it's down. Why would it be down? It isn't down. Sure, my pager went off. And sure, the metrics all say it's down and the customer is complaining that it's down. But it isn't, I'm sure this is all a misunderstanding.

Anger. Okay so it's fucking down. Why did this have to happen on my on-call shift? This is so unfair. I had my dinner ready to eat, and *boom* I'm paged. It's the PM's fault for not prioritizing my tech debt, ugh.

Bargaining. Okay okay okay. Maybe... I can trade my on-call shift with Sam. They really know this service, so they could take it on. Or maybe I can eat my dinner while we respond to this...

Depression. This is bad, this is so bad. Our app is down, and the customer knows. We're totally screwed here, why even bother putting it back up? They're all going to be mad, leave, the company is dead... There's not even any point.

Acceptance. You know, it's going to be okay. This happens to everyone, apps go down. We'll get it back up, and everything will be fine.

3 weeks ago 14 votes
Python is an interpreted language with a compiler

After I put up a post about a Python gotcha, someone remarked that "there are very few interpreted languages in common usage," and that they "wish Python was more widely recognized as a compiled language." This got me thinking: what is the distinction between a compiled and an interpreted language? I was pretty sure that I do think Python is interpreted[1], but how would I draw that distinction cleanly?

On the surface level, it seems like the distinction between compiled and interpreted languages is obvious: compiled languages have a compiler, and interpreted languages have an interpreter. We typically call Java a compiled language and Python an interpreted language. But on the inside, Java has an interpreter and Python has a compiler. What's going on?

What's an interpreter? What's a compiler?

A compiler takes code written in one programming language and turns it into a runnable thing. It's common for this to be machine code in an executable program, but it can also be bytecode for a VM or assembly language. On the other hand, an interpreter directly takes a program and runs it. It doesn't require any pre-compilation to do so, and can apply a variety of techniques to achieve this (even a compiler).

That's where the distinction really lies: what you end up running. An interpreter runs your program, while a compiler produces something that can run later[2] (or right now, if it's in an interpreter).

Compiled or interpreted languages

A compiled language is one that uses a compiler, and an interpreted language uses an interpreter. Except... many languages[3] use both.

Let's look at Java. It has a compiler, which you feed Java source code into and you get out an artifact that you can't run directly. No, you have to feed that into the Java virtual machine, which then interprets the bytecode and runs it. So the entire Java stack seems to have both a compiler and an interpreter. But it's the usage, that you have to pre-compile it, that makes it a compiled language.

And Python is similar[4]. It has an interpreter, which you feed Python source code into and it runs the program. But on the inside, it has a compiler. That compiler takes the source code, turns it into Python bytecode, and then feeds that into the Python virtual machine. So, just like Java, it goes from code to bytecode (which is even written to disk, usually) and bytecode to VM, which then runs it. And here again we see the usage: you don't pre-compile anything, you just run it.

That's the difference. And that's why Python is an interpreted language with a compiler!

And... so what?

Ultimately, why does it matter? If I can do cargo run and get my Rust program running the same as if I did python main.py, don't they feel the same? On the surface level, they do, and that's because it's a really nice interface, so we've adopted it for many interactions! But underneath it, you see the differences peeping out from the compiled or interpreted nature.

When you run a Python program, it will run until it encounters an error, even if there's malformed syntax! As long as it doesn't need to load that malformed syntax, you're able to start running. But if you cargo run a Rust program, it won't run at all if it encounters an error in the compilation step! It has to run the entire compilation process before the program will start at all. The difference in approaches runs pretty deep into the feel of an entire toolchain. That's where it matters, because it is one of the fundamental choices that everything else is built around.
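You can even watch CPython's compiler at work from the standard library. A small sketch (this applies to the standard CPython implementation):

import dis

def square(x):
    return x * x

# CPython already compiled this function's source to bytecode at
# definition time; dis prints the instructions the VM will interpret.
dis.dis(square)

# compile() exposes the same step directly: source in, code object out.
code = compile("print(1 + 2)", "<example>", "exec")
exec(code)  # the VM then interprets the compiled code object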
The words here are ultimately arbitrary. But they tell us a lot about the language and tools we're using.

* * *

Thank you to Adam for feedback on a draft of this post.

[1] It is worth occasionally challenging your own beliefs and assumptions! It's how you grow, and how you figure out when you are actually wrong. ↩

[2] This feels like it rhymes with async functions in Python. Invoking a regular function runs it immediately, while invoking an async function creates something which can run later. ↩

[3] And it doesn't even apply at the language level, because you could write an interpreter for C++ or a compiler for Hurl, not that you'd want to. But we're going to gloss over that distinction here and just keep calling them "compiled/interpreted languages." It's how we talk about it already, and it's not that confusing. ↩

[4] Here, I'm talking about the standard CPython implementation. Others will differ in their details. ↩

4 weeks ago 15 votes
Typing using my keyboard (the other kind)

I got a new-to-me keyboard recently. It was my brother's in school, but he doesn't use it anymore, so I set it up in my office. It's got 61 keys and you can hook up a pedal to it, too! But when you hook it up to the computer, you can't type with it. I mean, that's expected—it makes piano and synth noises mostly. But what if you could type with it? Wouldn't that be grand? (Ha, grand, like a pian—you know, nevermind.)

How do you type on a keyboard?

Or more generally, how do you type with any MIDI device? I also have a couple of wind synths and a MIDI drum pad, can I type with those?

The first and most obvious idea is to map each key to a letter. The lowest key on the keyboard could be 'a'[1], etc. This kind of works for a piano-style keyboard. If you have a full-size keyboard, you get 88 keys. You can use 52 of those for the letters you need for English[2] and 10 for digits. Then you have 26 left. That's more than enough for a few punctuation marks and other niceties.

It only kind of works, though, because it sounds pretty terrible. You end up making melodies that don't make a lot of sense, and do not stay confined to a given key signature. Plus, this assumes you have an 88-key keyboard. I have a 61-key keyboard, so I can't even type every letter and digit! And if I want to write some messages using my other instruments, I'll need something that works on those as well. Although, only being able to type 5 letters using my drums would be pretty funny...

Melodic typing

The typing scheme I settled on was melodic typing. When you write your message, it should correspond to a similarly beautiful[3] melody. Or, conversely, when you play a beautiful melody it turns into some text on your computer.

The way we do this is we keep track of sequences of notes. We start with our key, which will be the key of C, the Times New Roman of key signatures. Then, each note in the scale has its scale degree: C is 1, D is 2, etc., until B is 7. We want to use scale degree, so that if we jam out with others, we can switch to the appropriate key and type in harmony with them. Obviously. We assign different computer keys to different sequences of these scale degrees.

The first question is, how long should our sequences be? If we have 1-note sequences, then we can type 7 keys. Great for some very specific messages, but not for general-purpose typing. 2-note sequences would give us 49 keys, and 3-note sequences give us 343. So 3 notes is probably enough, since it's way more than a standard keyboard. But could we get away with the 49? (Yes.)

This is where it becomes clear why full Unicode support would be a challenge. Unicode has 155,063 characters (according to Wikipedia). To represent the full space, we'd need at least 7 notes, since 7^7 is 823,543. You could also use a highly variable encoding, which would make some letters easy to type and others very long-winded. It could be done, but then the key mapping would be even harder to learn...

My first implementation used 3-note sequences, but the resulting tunes were... uninspiring, to say the least. There was a lot of repetition of particular notes, which wasn't my vibe. So I went back to 2-note sequences, with a pared-down set of keys. Instead of trying to represent both lowercase and uppercase letters, we can just do what keyboards do, and represent them using a shift key[4].
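Here's a minimal sketch of that scheme in code, assuming MIDI note numbers as input. The key ordering below is invented for illustration; the real mapping is the one in the repo's text file.

# Scale degrees in C major, keyed by semitone offset within the octave.
SCALE_DEGREES = {0: 1, 2: 2, 4: 3, 5: 4, 7: 5, 9: 6, 11: 7}

def degree(midi_note):
    # Notes outside the C major scale return None and are ignored.
    return SCALE_DEGREES.get(midi_note % 12)

# 7 x 7 = 49 possible two-note sequences; assign keys in a fixed order.
KEYS = "abcdefghijklmnopqrstuvwxyz0123456789 .,!\n"
MAPPING = {
    pair: key
    for pair, key in zip(
        [(f, s) for f in range(1, 8) for s in range(1, 8)], KEYS
    )
}

def decode(midi_notes):
    # Turn a stream of MIDI notes into text, two notes per character.
    degrees = [d for d in map(degree, midi_notes) if d is not None]
    return "".join(
        MAPPING.get(pair, "?") for pair in zip(degrees[::2], degrees[1::2])
    )

print(decode([60, 60, 62, 65]))  # C C D F -> degrees (1,1)(2,4) -> "ak" here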
My final mapping includes the English alphabet, numerals 0 to 9, comma, period, exclamation mark, space, newline, shift, backspace, and caps lock—I mean, obviously we're going to allow constant shouting. This lets us type just about any message we'd want with just our instrument. And we only used 44 of the available sequences, so we could add even more keys. Maybe one of those would shift us into a 3-note sequence.

The key mapping

The note mapping I ended up with is available in a text file in the repo. This mapping lets you type anything you'd like, as long as it's English and doesn't use too complicated of punctuation. No contractions for you, and—to my chagrin—no em dashes either.

The key is pretty helpful, but even better is a dynamic key. When I was trying this for the first time, I had two major problems:

I didn't know which notes would give me the letter I wanted
I didn't know what I had entered so far (sometimes you miss a note!)

But we can solve this with code! The UI will show you which notes are entered so far (which is only ever 1 note, for the current typing scheme), as well as which notes to play to reach certain keys. It's basically a peek into the state machine behind what you're typing!

An example: "hello world"

Let's see this in action. As all programmers, we're obligated by law to start with "hello, world." We can use our handy-dandy cheat sheet above to figure out how to do this. "Hello, world!" uses a pesky capital letter, so we start with a shift:

C C

Then an 'h':

D F

Then we continue on for the rest of it and get:

D C E C E C E F A A B C F G E F E B E C C B A B

Okay, of course this will catch on! Here's my honest first take of dooting out those notes from the translation above.

Hello, world!

I... am a bit disappointed, because it would have been much better comedy if it came out like "HelLoo wrolb," but them's the breaks. Moving on, though, let's make this something musical. We can take the notes and put a basic rhythm on them. Something like this, with a little swing to it. By the magic of MIDI and computers, we can hear what this sounds like.

maddie marie · Hello, world! (melody)

Okay, not bad. But it's missing something... Maybe a drum groove...

maddie marie · Hello, world! (w/ drums)

Oh yeah, there we go. Just in time to be the song of the summer, too. And if you play the melody, it enters "Hello, world!" Now we can compose music by typing! We have found a way to annoy our office mates even more than with mechanical keyboards[5]!

Other rejected neglected typing schemes

As with all great scientific advancements, other great ideas were passed by in the process. Here are a few of those great ideas we tried but had to abandon, since we were not enough to handle their greatness.

A chorded keyboard. This would function by having the left hand control layers of the keyboard by playing a chord, and then the right hand would press keys within that layer. I think this one is a good idea! I didn't implement it because I don't play piano very well. I'm primarily a woodwind player, and I wanted to be able to use my wind synth for this.

Shift via volume! There's something very cathartic about playing loudly to type capital letters and playing quietly to print lowercase letters. But... it was pretty difficult to get working for all instruments.
Wind synths don't have uniform velocity (the MIDI term for how hard the key was pressed, or how strong the breath was on a wind instrument), and if you average it then you don't press the key until after the note is over, which is an odd typing experience. Imagine your keyboard only entering a character when you release it! So, this one is tenable, but more for keyboards than for wind synths. It complicated the code quite a bit, so I tossed it, but it should come back someday.

Each key is a key. You have 88 keys on a keyboard, which definitely would cover the same space as our chosen scheme. It doesn't end up sounding very good, though...

Rhythmic typing. This is the one I'm perhaps most likely to implement in the future, because as we saw above, drums really add something. I have a drum multipad, which has four zones on it and two pedals attached (kick drum and hi-hat pedal). That could definitely be used to type, too! I am not sure the exact way it would work, but it might be good to quantize the notes (eighths or quarters) and then interpret the combination of feet/pads as different letters. I might take a swing at this one sometime.

Please do try this at home

I've written previously about how I was writing the GUI for this. The GUI is now available for you to use for all your typing needs! Except the ones that need, you know, punctuation or anything outside of the English alphabet. You can try it out by getting it from the sourcehut repo (https://git.sr.ht/~ntietz/midi-keys). It's a Rust program, so you run it with cargo run. The program is free-as-in-mattress: it's probably full of bugs, but it's yours if you want it. Well, you have to comply with the license: either AGPL or the Gay Agenda License (be gay, do crime[6]).

If you try it out, let me know how it goes! Let me know what your favorite pieces of music spell when you play them on your instrument.

[1] Coincidentally, this is the letter 'a' and the note is A! We don't remain so fortunate; the letter 'b' is the note A#. ↩

[2] I'm sorry this is English only! But you could do the equivalent thing for most other languages. Full Unicode support would be tricky; I'll show you why later in the post. ↩

[3] My messages do not come out as beautiful melodies. Oops. Perhaps they're not beautiful messages. ↩

[4] This is where it would be fun to use an organ and have the lower keyboard be lowercase and the upper keyboard be uppercase. ↩

[5] I promise you, I will do this if you ever make me go back to working in an open office. ↩

[6] For any feds reading this: it's a joke, I'm not advocating people actually commit crimes. What kind of lady do you think I am? Obviously I'd never think that civil disobedience is something we should do, disobeying unjust laws, nooooo... I'm also never sarcastic. ↩

a month ago 16 votes

More in programming

Debug React Router Applications with Custom Logs using react-router-devtools (tip)

react-router-devtools enhances debugging by adding automatic logging for loaders & actions, plus direct links to code origins in console logs.

20 hours ago 3 votes
Notes from the Chrome Team’s “Blink principles of web compatibility”

Following up on a previous article I wrote about backwards compatibility, I came across this document from Rick Byers of the Chrome team titled "Blink principles of web compatibility" which outlines how they navigate introducing breaking changes.

"Hold up," you might say. "Breaking changes? But there's no breaking changes on the web!?"

Well, as outlined in their Google Doc, "don't break anyone ever" is a bit unrealistic. Here's their rationale:

The Chromium project aims to reduce the pain of breaking changes on web developers. But Chromium's mission is to advance the web, and in some cases it's realistically unavoidable to make a breaking change in order to do that. Since the web is expected to continue to evolve incrementally indefinitely, it's essential to its survival that we have some mechanism for shedding some of the mistakes of the past.

Fair enough. We all need ways of shedding mistakes from the past. But let's not get too personal. That's a different post.

So when it comes to the web, how do you know when to break something and when not to? The Chrome team looks at the data collected via Chrome's anonymous usage statistics (you can take a peek at that data yourself) to understand how often "mistake" APIs are still being used. This helps them categorize breaking changes as low-risk or high-risk. What's wild is that, given Chrome's ubiquity as a browser, a number like 0.1% is classified as "high-risk"!

As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial. There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!

But the usage stats are merely a guide — a partially blind one at that. The Chrome team openly acknowledges their dataset doesn't tell the whole story (e.g. enterprise clients have metrics recording disabled, Google's metrics servers are disabled in China, and Chromium derivatives don't record metrics at all).

And Chrome itself is only part of the story. They acknowledge that a change that would break Chrome but align it with other browsers is a good thing because it's advancing the whole web while perhaps costing Chrome specifically in the short term — community > corporation??

Breaking changes which align Chromium's behavior with other engines are much less risky than those which cause it to deviate…In general if a change will break only sites coding specifically for Chromium (eg. via UA sniffing), then it's likely to be net-positive towards Chromium's mission of advancing the whole web.

Yay for advancing the web! And the web is open, which is why they also state they'll opt for open formats where possible over closed, proprietary, "patent-encumbered" ones.

The chromium project is committed to a free and open web, enabling innovation and competition by anyone in any size organization or of any financial means or legal risk tolerance. In general the chromium project will accept an increased level of compatibility risk in order to reduce dependence in the web ecosystem on technologies which cannot be implemented on a royalty-free basis.

One example we saw of a breaking change that dropped the proprietary in favor of the open was Flash. One way of dealing with a breaking change like that is to provide an opt-out.
In the case of Flash, users were given the ability to "opt out" of Flash being deprecated via site settings (in other words, opt in to using Flash on a page-by-page basis). That was an important step in phasing out that behavior completely over time. But not all changes get that kind of heads-up.

There is a substantial portion of the web which is unmaintained and will effectively never be updated…It may be useful to look at how long chromium has had the behavior in question to get some idea of the risk that a lot of unmaintained code will depend on it…In general we believe in the principle that the vast majority of websites should continue to function forever.

There's a lot going on with Chrome right now, but you gotta love seeing the people who work on it making public statements like that: "we believe…that the vast majority of websites should continue to function forever."

There's some good stuff in this document that gives you hope that people really do care and work incredibly hard to not break the web! (It's an ecosystem, after all.)

It's important for [us] browser engineers to resist the temptation to treat breaking changes in a paternalistic fashion. It's common to think we know better than web developers, only to find out that we were wrong and didn't know as much about the real world as we thought we did. Providing at least a temporary developer opt-out is an act of humility and respect for developers which acknowledges that we'll only succeed in really improving the web for users long-term via healthy collaborations between browser engineers and web developers.

More 👏 acts 👏 of 👏 humility 👏 in tech 👏 please!
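As an aside, the "frustrated every 3 seconds" figure quoted earlier checks out. A back-of-envelope version, assuming a 30-day month:

page_views_per_month = 771e9        # pages viewed in Chrome monthly
breakage_rate = 0.0001 / 100        # 0.0001% as a fraction
seconds_per_month = 30 * 24 * 3600  # ~2.6 million seconds

broken_views = page_views_per_month * breakage_rate  # ~771,000 per month
print(seconds_per_month / broken_views)              # ~3.4 seconds apart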

19 hours ago 2 votes
Coding should be a vibe!

The appeal of "vibe coding" — where programmers lean back and prompt their way through an entire project with AI — appears partly to be based on the fact that so many development environments are deeply unpleasant to work with. So it's no wonder that all these programmers stuck working with cumbersome languages and frameworks can't wait to give up on the coding part of software development. If I found writing code a chore, I'd be looking for retirement too.

But I don't. I mean, I used to! When I started programming, it was purely because I wanted programs. Learning to code was a necessary but inconvenient step toward bringing systems to life. That all changed when I learned Ruby and built Rails. Ruby's entire premise is "programmer happiness": that writing code should be a joy. And historically, the language was willing to trade run-time performance, memory usage, and other machine sympathies against the pursuit of said programmer happiness. These days, it seems like you can eat your cake and have it too, though. Ruby, after thirty years of constant improvement, is now incredibly fast and efficient, yet remains a delight to work with.

That ethos couldn't shine brighter now. Disgruntled programmers have finally realized that an escape from nasty syntax, boilerplate galore, and ecosystem hyper-churn is possible. That's the appeal of AI: having it hide away all that unpleasantness. Only it's like cleaning your room by stuffing the mess under the bed — it doesn't make it go away!

But the instinct is correct: Programming should be a vibe! It should be fun! It should resemble English closely enough that line noise doesn't obscure the underlying ideas and decisions. It should allow a richness of expression that serves the human reader instead of favoring the strictness preferred by the computer. Ruby does.

And given that, I have no interest in giving up writing code. That's not the unpleasant part that I want AI to take off my hands. Just so I can — what? — become a project manager for a murder of AI crows? I've had the option to retreat up the manager ladder for most of my career, but I've steadily refused, because I really like writing Ruby! It's the most enjoyable part of the job!

That doesn't mean AI doesn't have a role to play when writing Ruby. I'm conversing and collaborating with LLMs all day long — looking up APIs, clarifying concepts, and asking stupid questions. AI is a superb pair programmer, but I'd retire before permanently handing it the keyboard to drive the code.

Maybe one day, wanting to write code will be a quaint concept. Like tending to horses for transportation in the modern world — done as a hobby but devoid of any economic value. I don't think anyone knows just how far we can push the intelligence and creativity of these insatiable token munchers. And I wouldn't bet against their advance, but it's clear to me that a big part of their appeal to programmers is the wisdom that Ruby was founded on: Programming should favor and flatter the human.

6 hours ago 2 votes
Tempest Rising is a great game

I really like RTS games. I pretty much grew up on them, starting with Command&Conquer 3: Kane's Wrath, moving on to the StarCraft 2 trilogy, and witnessing the downfall of Command&Conquer 4. I never had the disks for any other RTS games during my teenage years. Yes, the disks, the ones you go to the store to buy! I didn't know Steam existed back then, so this was my only source of games. There is something magical in owning a physical copy of the game. I always liked the art on the front (a mandatory huge face for all RTS!), the game description and screenshots on the back, even the smell of the plastic disk case.

14 hours ago 1 vote
Changing text style for DandeGUI window output

Printing rich text to windows is one of the planned features of DandeGUI, the GUI library for Medley Interlisp I'm developing in Common Lisp. I finally got around to this and implemented the GUI:WITH-TEXT-STYLE macro, which controls the attributes of text printed to a window, such as the font family and face.

GUI:WITH-TEXT-STYLE establishes a context in which text printed to the stream associated with a TEdit window is rendered in the style specified by the arguments. The call to GUI:WITH-TEXT-STYLE here extends the square root table example by printing the heading in a 12-point bold sans serif font:

(gui:with-output-to-window (stream :title "Table of square roots")
  (gui:with-text-style (stream :family :sans :size 12 :face :bold)
    (format stream "~&Number~40TSquare Root~2%"))
  (loop for n from 1 to 30
        do (format stream "~&~4D~40T~8,4F~%" n (sqrt n))))

The code produces this window in which the styled column headings stand out:

[Image: Medley Interlisp window of a square root table generated by the DandeGUI GUI library.]

The :FAMILY, :SIZE, and :FACE arguments determine the corresponding text attributes. :FAMILY may be a generic family such as :SERIF for an unspecified serif font; :SANS for a sans serif font; :FIX for a fixed width font; or a keyword denoting a specific family like :TIMESROMAN.

At the heart of GUI:WITH-TEXT-STYLE is a pair of calls to the Interlisp function PRINTOUT that wrap the macro body, the first for setting the font of the stream to the specified style and the other for restoring the default:

(DEFMACRO WITH-TEXT-STYLE ((STREAM &KEY FAMILY SIZE FACE) &BODY BODY)
  (ONCE-ONLY (STREAM)
    `(UNWIND-PROTECT
         (PROGN
           (IL:PRINTOUT ,STREAM IL:.FONT (TEXT-STYLE-TO-FD ,FAMILY ,SIZE ,FACE))
           ,@BODY)
       (IL:PRINTOUT ,STREAM IL:.FONT DEFAULT-FONT))))

PRINTOUT is an Interlisp function for formatted output similar to Common Lisp's FORMAT but with additional font control via the .FONT directive. The symbols of PRINTOUT, i.e. its directives and arguments, are in the Interlisp package.

In turn GUI:WITH-TEXT-STYLE calls GUI::TEXT-STYLE-TO-FD, an internal DandeGUI function which passes to .FONT a font descriptor matching the required text attributes. GUI::TEXT-STYLE-TO-FD calls IL:FONTCOPY to build a descriptor that merges the specified attributes with any unspecified ones copied from the default font. The font descriptor is an Interlisp data structure that represents a font on the Medley environment.

#DandeGUI #CommonLisp #Interlisp #Lisp

yesterday 4 votes