The scene: you're on call for a web app, and your pager goes off.

Denial. No no no, the app can't be down. There's no way it's down. Why would it be down? It isn't down. Sure, my pager went off. And sure, the metrics all say it's down and the customer is complaining that it's down. But it isn't, I'm sure this is all a misunderstanding.

Anger. Okay so it's fucking down. Why did this have to happen on my on-call shift? This is so unfair. I had my dinner ready to eat, and *boom* I'm paged. It's the PM's fault for not prioritizing my tech debt, ugh.

Bargaining. Okay okay okay. Maybe... I can trade my on-call shift with Sam. They really know this service, so they could take it on. Or maybe I can eat my dinner while we respond to this...

Depression. This is bad, this is so bad. Our app is down, and the customer knows. We're totally screwed here, why even bother putting it back up? They're all going to be mad, leave, the company is dead... There's not even any point.

Acceptance. You know,...
yesterday


More from ntietz.com blog - technically a blog

Python is an interpreted language with a compiler

After I put up a post about a Python gotcha, someone remarked that "there are very few interpreted languages in common usage," and that they "wish Python was more widely recognized as a compiled language." This got me thinking: what is the distinction between a compiled and an interpreted language? I was pretty sure that I do think Python is interpreted[1], but how would I draw that distinction cleanly?

On the surface level, it seems like the distinction between compiled and interpreted languages is obvious: compiled languages have a compiler, and interpreted languages have an interpreter. We typically call Java a compiled language and Python an interpreted language. But on the inside, Java has an interpreter and Python has a compiler. What's going on?

What's an interpreter? What's a compiler?

A compiler takes code written in one programming language and turns it into a runnable thing. It's common for this to be machine code in an executable program, but it can also be bytecode for a VM, or assembly language. On the other hand, an interpreter directly takes a program and runs it. It doesn't require any pre-compilation to do so, and can apply a variety of techniques to achieve this (even a compiler).

That's where the distinction really lies: what you end up running. An interpreter runs your program, while a compiler produces something that can run later[2] (or right now, if it's in an interpreter).

Compiled or interpreted languages

A compiled language is one that uses a compiler, and an interpreted language uses an interpreter. Except... many languages[3] use both.

Let's look at Java. It has a compiler, which you feed Java source code into and you get out an artifact that you can't run directly. No, you have to feed that into the Java virtual machine, which then interprets the bytecode and runs it. So the entire Java stack seems to have both a compiler and an interpreter. But it's the usage, that you have to pre-compile it, that makes it a compiled language.

Python is similar[4]. It has an interpreter, which you feed Python source code into and it runs the program. But on the inside, it has a compiler. That compiler takes the source code, turns it into Python bytecode, and then feeds that into the Python virtual machine. So, just like Java, it goes from code to bytecode (which is even written to the disk, usually) and bytecode to VM, which then runs it. And here again we see the usage, where you don't pre-compile anything, you just run it.

That's the difference. And that's why Python is an interpreted language with a compiler!

And... so what?

Ultimately, why does it matter? If I can do cargo run and get my Rust program running the same as if I did python main.py, don't they feel the same? On the surface level, they do, and that's because it's a really nice interface, so we've adopted it for many interactions. But underneath it, you see the differences peeping out from the compiled or interpreted nature.

When you run a Python program, it will run until it encounters an error, even if there's malformed syntax! As long as it doesn't need to load that malformed syntax, you're able to start running. But if you cargo run a Rust program, it won't run at all if it encounters an error in the compilation step! It has to run the entire compilation process before the program will start at all.

The difference in approaches runs pretty deep into the feel of an entire toolchain. That's where it matters, because it is one of the fundamental choices that everything else is built around.
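If you want to see CPython's compiler for yourself, the standard library's dis module will print the bytecode it produced for a function. A minimal sketch (mine, not from the post):

    import dis

    def greet(name):
        return f"hello, {name}"

    # By the time we get here, CPython has already compiled greet() to bytecode;
    # dis.dis prints that bytecode, one instruction per line.
    dis.dis(greet)

The cached .pyc files you find under __pycache__ are that same bytecode, written to disk.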
The words here are ultimately arbitrary. But they tell us a lot about the language and tools we're using.

* * *

Thank you to Adam for feedback on a draft of this post.

[1] It is worth occasionally challenging your own beliefs and assumptions! It's how you grow, and how you figure out when you are actually wrong. ↩

[2] This feels like it rhymes with async functions in Python. Invoking a regular function runs it immediately, while invoking an async function creates something which can run later. ↩

[3] And it doesn't even apply at the language level, because you could write an interpreter for C++ or a compiler for Hurl, not that you'd want to, but we're going to gloss over that distinction here and just keep calling them "compiled/interpreted languages." It's how we talk about it already, and it's not that confusing. ↩

[4] Here, I'm talking about the standard CPython implementation. Others will differ in their details. ↩

a week ago 9 votes
Typing using my keyboard (the other kind)

I got a new-to-me keyboard recently. It was my brother's in school, but he doesn't use it anymore, so I set it up in my office. It's got 61 keys and you can hook up a pedal to it, too! But when you hook it up to the computer, you can't type with it. I mean, that's expected—it makes piano and synth noises mostly. But what if you could type with it? Wouldn't that be grand? (Ha, grand, like a pian—you know, nevermind.)

How do you type on a keyboard? Or more generally, how do you type with any MIDI device? I also have a couple of wind synths and a MIDI drum pad, can I type with those?

The first and most obvious idea is to map each key to a letter. The lowest key on the keyboard could be 'a'[1], etc. This kind of works for a piano-style keyboard. If you have a full size keyboard, you get 88 keys. You can use 52 of those for the letters you need for English[2] and 10 for digits. Then you have 26 left. That's more than enough for a few punctuation marks and other niceties.

It only kind of works, though, because it sounds pretty terrible. You end up making melodies that don't make a lot of sense, and do not stay confined to a given key signature. Plus, this assumes you have an 88 key keyboard. I have a 61 key keyboard, so I can't even type every letter and digit! And if I want to write some messages using my other instruments, I'll need something that works on those as well. Although, only being able to type 5 letters using my drums would be pretty funny...

Melodic typing

The typing scheme I settled on was melodic typing. When you write your message, it should correspond to a similarly beautiful[3] melody. Or, conversely, when you play a beautiful melody it turns into some text on your computer.

The way we do this is we keep track of sequences of notes. We start with our key, which will be the key of C, the Times New Roman of key signatures. Then, each note in the scale has its scale degree: C is 1, D is 2, etc. until B is 7. We want to use scale degree so that if we jam out with others, we can switch to the appropriate key and type in harmony with them. Obviously.

We assign different computer keys to different sequences of these scale degrees. The first question is, how long should our sequences be? If we have 1-note sequences, then we can type 7 keys. Great for some very specific messages, but not for general purpose typing. 2-note sequences would give us 49 keys, and 3-note sequences give us 343. So 3 notes is probably enough, since it's way more than a standard keyboard. But could we get away with the 49? (Yes.)

This is where it becomes clear why full Unicode support would be a challenge. Unicode has 155,063 characters (according to Wikipedia). To represent the full space, we'd need at least 7 notes, since 7^7 is 823,543. You could also use a highly variable encoding, which would make some letters easy to type and others very long-winded. It could be done, but then the key mapping would be even harder to learn...

My first implementation used 3-note sequences, but the resulting tunes were... uninspiring, to say the least. There was a lot of repetition of particular notes, which wasn't my vibe. So I went back to 2-note sequences, with a pared down set of keys. Instead of trying to represent both lowercase and uppercase letters, we can just do what keyboards do, and represent them using a shift key[4].
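To make the scheme concrete, here's a rough sketch (my own, not the project's code) of what a 2-note-sequence decoder could look like in Python. The note-to-character assignments below are made up for illustration; the real mapping lives in a text file in the midi-keys repo.

    from itertools import product

    # Scale degrees in C major: MIDI notes C4..B4 map to degrees 1..7.
    SCALE_DEGREES = {60: 1, 62: 2, 64: 3, 65: 4, 67: 5, 69: 6, 71: 7}

    # 7 x 7 = 49 possible two-degree sequences. Assign the first 26 to letters,
    # purely as an example; the real keymap also covers digits, punctuation,
    # shift, backspace, and caps lock.
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"
    KEYMAP = {pair: ch for pair, ch in zip(product(range(1, 8), repeat=2), ALPHABET)}

    def decode(notes):
        """Turn a list of MIDI note numbers into text, two notes per character."""
        degrees = [SCALE_DEGREES[n] for n in notes]
        pairs = zip(degrees[0::2], degrees[1::2])
        return "".join(KEYMAP.get(pair, "?") for pair in pairs)

    print(decode([60, 62, 64, 65]))  # degrees (1, 2) then (3, 4) -> two letters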
My final mapping includes the English alphabet, numerals 0 to 9, comma, period, exclamation marks, spaces, newlines, shift, backspace, and caps lock—I mean, obviously we're going to allow constant shouting. This lets us type just about any message we'd want with just our instrument. And we only used 44 of the available sequences, so we could add even more keys. Maybe one of those would shift us into a 3-note sequence.

The key mapping

The note mapping I ended up with is available in a text file in the repo. This mapping lets you type anything you'd like, as long as it's English and doesn't use too complicated of punctuation. No contractions for you, and—to my chagrin—no em dashes either.

The key is pretty helpful, but even better is a dynamic key. When I was trying this for the first time, I had two major problems:

- I didn't know which notes would give me the letter I wanted
- I didn't know what I had entered so far (sometimes you miss a note!)

But we can solve this with code! The UI will show you which notes are entered so far (which is only ever 1 note, for the current typing scheme), as well as which notes to play to reach certain keys. It's basically a peek into the state machine behind what you're typing!

An example: "hello world"

Let's see this in action. Like all programmers, we're obligated by law to start with "hello, world." We can use our handy-dandy cheat sheet above to figure out how to do this. "Hello, world!" uses a pesky capital letter, so we start with a shift: C C. Then an 'h': D F. Then we continue on for the rest of it and get:

D C E C E C E F A A B C F G E F E B E C C B A B

Okay, of course this will catch on! Here's my honest first take of dooting out those notes from the translation above. Hello, world! I... am a bit disappointed, because it would have been much better comedy if it came out like "HelLoo wrolb," but them's the breaks.

Moving on, though, let's make this something musical. We can take the notes and put a basic rhythm on them. Something like this, with a little swing to it. By the magic of MIDI and computers, we can hear what this sounds like.

maddie marie · Hello, world! (melody)

Okay, not bad. But it's missing something... Maybe a drum groove...

maddie marie · Hello, world! (w/ drums)

Oh yeah, there we go. Just in time to be the song of the summer, too. And if you play the melody, it enters "Hello, world!" Now we can compose music by typing! We have found a way to annoy our office mates even more than with mechanical keyboards[5]!

Other rejected neglected typing schemes

As with all great scientific advancements, other great ideas were passed by in the process. Here are a few of those great ideas we tried but had to abandon, since we were not enough to handle their greatness.

A chorded keyboard. This would function by having the left hand control layers of the keyboard by playing a chord, and then the right hand would press keys within that layer. I think this one is a good idea! I didn't implement it because I don't play piano very well. I'm primarily a woodwind player, and I wanted to be able to use my wind synth for this.

Shift via volume! There's something very cathartic about playing loudly to type capital letters and playing quietly to print lowercase letters. But... it was pretty difficult to get working for all instruments.
Wind synths don't have uniform velocity (the MIDI term for how hard the key was pressed, or how strong breath was on a wind instrument), and if you average it then you don't press the key until after it's over, which is an odd typing experience. Imagine your keyboard only entering a character when you release it! So, this one is tenable, but more for keyboards than for wind synths. It complicated the code quite a bit so I tossed it, but it should come back someday.

Each key is a key. You have 88 keys on a keyboard, which definitely would cover the same space as our chosen scheme. It doesn't end up sounding very good, though...

Rhythmic typing. This is the one I'm perhaps most likely to implement in the future, because as we saw above, drums really add something. I have a drum multipad, which has four zones on it and two pedals attached (kick drum and hi-hat pedal). That could definitely be used to type, too! I am not sure the exact way it would work, but it might be good to quantize the notes (eighths or quarters) and then interpret the combination of feet/pads as different letters. I might take a swing at this one sometime.

Please do try this at home

I've written previously about how I was writing the GUI for this. The GUI is now available for you to use for all your typing needs! Except the ones that need, you know, punctuation or anything outside of the English alphabet. You can try it out by getting it from the sourcehut repo (https://git.sr.ht/~ntietz/midi-keys). It's a Rust program, so you run it with cargo run. The program is free-as-in-mattress: it's probably full of bugs, but it's yours if you want it. Well, you have to comply with the license: either AGPL or the Gay Agenda License (be gay, do crime[6]).

If you try it out, let me know how it goes! Let me know what your favorite pieces of music spell when you play them on your instrument.

[1] Coincidentally, this is the letter 'a' and the note is A! We don't remain so fortunate; the letter 'b' is the note A#. ↩

[2] I'm sorry this is English only! But, you could do the equivalent thing for most other languages. Full Unicode support would be tricky, I'll show you why later in the post. ↩

[3] My messages do not come out as beautiful melodies. Oops. Perhaps they're not beautiful messages. ↩

[4] This is where it would be fun to use an organ and have the lower keyboard be lowercase and the upper keyboard be uppercase. ↩

[5] I promise you, I will do this if you ever make me go back to working in an open office. ↩

[6] For any feds reading this: it's a joke, I'm not advocating people actually commit crimes. What kind of lady do you think I am? Obviously I'd never think that civil disobedience is something we should do, disobeying unjust laws, nooooo... I'm also never sarcastic. ↩

2 weeks ago 11 votes
Shadowing in Python gave me an UnboundLocalError

There's this thing in Python that always trips me up. It's not that tricky, once you know what you're looking for, but it's not intuitive for me, so I do forget. It's that shadowing a variable can sometimes give you an UnboundLocalError!

It happened to me last week while working on a workflow engine with a coworker. We were refactoring some of the code. I can't share that code (yet?) so let's use a small example that illustrates the same problem.

Let's start with some working code, which we had before our refactoring caused a problem. Here's some code that defines a decorator for a function, which will trigger some other functions after it runs.

    def trigger(*fns):
        """After the decorated function runs, it will trigger the provided
        functions to run sequentially. You can provide multiple functions
        and they run in the provided order.

        This function *returns* a decorator, which is then applied to the
        function we want to use to trigger other functions.
        """
        def decorator(fn):
            """This is the decorator, which takes in a function and returns
            a new, wrapped, function
            """
            fn._next = fns

            def _wrapper():
                """This is the function we will now invoke when we call the
                wrapped function.
                """
                fn()
                for f in fn._next:
                    f()

            return _wrapper
        return decorator

The outermost function has one job: it creates a closure for the decorator, capturing the passed in functions. Then the decorator itself will create another closure, which captures the original wrapped function.

Here's an example of how it would be used[1].

    def step_2():
        print("step 2")

    def step_3():
        print("step 3")

    @trigger(step_2, step_3)
    def step_1():
        print("step 1")

    step_1()

This prints out

    step 1
    step 2
    step 3

Here's the code of the wrapper after I made a small change (omitting docstrings here for brevity, too). I changed the for loop to name the loop variable fn instead of f, to shadow it and reuse that name.

    def decorator(fn):
        fn._next = fns

        def _wrapper():
            fn()
            for fn in fn._next:
                fn()

And then when we ran it, we got an error!

    UnboundLocalError: cannot access local variable 'fn' where it is not associated with a value

But why? You look at the code and it's defined. Right out there, it is bound. If you print out the locals, trying to chase that down, you'll see that there does not, in fact, exist fn yet.

The key lies in Python's scoping rules. Variables are defined for their entire scope, which is a module, class body, or function body. If you define a variable within a scope, anywhere inside a function, then that variable has that name as its own for the entire scope.

The docs make this quite clear:

    If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. This can lead to errors when a name is used within a block before it is bound. This rule is subtle. Python lacks declarations and allows name binding operations to occur anywhere within a code block. The local variables of a code block can be determined by scanning the entire text of the block for name binding operations. See the FAQ entry on UnboundLocalError for examples.

This comes up in a few other places, too. You can use a loop variable anywhere inside the enclosing scope, for example.

    def my_func():
        for x in [1, 2, 3]:
            print(x)

        # this will still work!
        # x is still defined!
        print(x)

So once I saw an UnboundLocalError after I'd shadowed it, I knew what was going on. The name was local for the entire function, not just after it was initialized!
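For reference, the same rule bites in a much smaller setting, with no decorators or closures involved. A minimal sketch of my own, not from the post:

    x = 10

    def f():
        # Because x is assigned later in this function, Python treats x as a
        # local name for the *entire* function body, so this read raises
        # UnboundLocalError instead of finding the global x.
        print(x)
        x = 20

    f()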
I'm used to shadowing being the idiomatic thing in Rust, then had to recalibrate for writing Python again. It made sense once I remembered what was going on, but I think it's one of Python's little rough edges.

[1] This is not how you'd want to do it in production usage, probably. It's a somewhat contrived example for this blog post. ↩

3 weeks ago 15 votes
Big endian and little endian

Every time I run into endianness, I have to look it up. Which way do the bytes go, and what does that mean? Something about it breaks my brain, and makes me feel like I can't tell which way is up and down, left and right. This is the blog post I've needed every time I run into this. I hope it'll be the post you need, too.

What is endianness?

The term comes from Gulliver's Travels, referring to a conflict over cracking boiled eggs on the big end or the little end[1]. In computers, the term refers to the order of bytes within a segment of data, or a word. Specifically, it only refers to the order of bytes, as those are the smallest unit of addressable data: bits are not individually addressable.

The two main orderings are big-endian and little-endian. Big-endian means you store the "big" end first: the most-significant byte (highest value) goes into the smallest memory address. Little-endian means you store the "little" end first: the least-significant byte (smallest value) goes into the smallest memory address.

Let's look at the number 168496141 as an example. This is 0x0A0B0C0D in hex. If we store 0x0A at address a, 0x0B at a+1, 0x0C at a+2, and 0x0D at a+3, then this is big-endian. And then if we store it in the other order, with 0x0D at a and 0x0A at a+3, it's little-endian.

And... there's also mixed-endianness, where you use one kind within a word (say, little-endian) and a different ordering for words themselves (say, big-endian). If our example is on a system that has 2-byte words (for the sake of illustration), then we could order these bytes in a mixed-endian fashion. One possibility would be to put 0x0B in a, 0x0A in a+1, 0x0D in a+2, and 0x0C in a+3. There are certainly reasons to do this, and it comes up on some ARM processors, but... it feels so utterly cursed. Let's ignore it for the rest of this!

For me, the intuitive ordering is big-endian, because it feels like it matches how we read and write numbers in English[2]. If lower memory addresses are on the left, and higher on the right, then this is the left-to-right ordering, just like digits in a written number.

So... which do I have?

Given some number, how do I know which endianness it uses? You don't, at least not from the number entirely by itself. Each integer that's valid in one endianness is still a valid integer in another endianness, it just is a different value. You have to see how things are used to figure it out. Or you can figure it out from the system you're using (or which wrote the data).

If you're using an x86 or x64 system, it's mostly little-endian. (There are some instructions which enable fetching/writing in a big-endian format.) ARM systems are bi-endian, allowing either. But perhaps the most popular ARM chips today, Apple silicon, are little-endian. And the major microcontrollers I checked (AVR, ESP32, ATmega) are little-endian. It's thoroughly dominant commercially!

Big-endian systems used to be more common. They're not really in most of the systems I'm likely to run into as a software engineer now, though. You are likely to run into it for some things, though. Even though we don't use big-endianness for processor math most of the time, we use it constantly to represent data. It comes back in networking! Most of the Internet protocols we know and love, like TCP and IP, use "network order" which means big-endian. This is mentioned in RFC 1700, among others. Other protocols do also use little-endianness again, though, so you can't always assume that it's big-endian just because it's coming over the wire.
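If you want to poke at byte order from Python, the standard struct module lets you choose the ordering explicitly, and sys.byteorder reports what your processor uses natively. A small sketch of my own, using the example number from above:

    import struct
    import sys

    n = 0x0A0B0C0D  # 168496141

    big = struct.pack(">I", n)     # ">" means big-endian: most-significant byte first
    little = struct.pack("<I", n)  # "<" means little-endian: least-significant byte first

    print(big.hex())      # 0a0b0c0d
    print(little.hex())   # 0d0c0b0a
    print(sys.byteorder)  # 'little' on x86/x64 and Apple silicon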
So... which do you have? For your processor, probably little-endian. For data written to the disk or to the wire: who knows, check the protocol!

Why do we do this???

I mean, ultimately, it's somewhat arbitrary. We have an endianness in the way we write, and we could pick either right-to-left or left-to-right. Both exist, but we need to pick one. Given that, it makes sense that both would arise over time, since there's no single entity controlling all computer usage[3].

There are advantages of each, though. One of the more interesting advantages is that little-endianness lets us pretend integers are whatever size we like, within bounds. If you write the number 26[4] into memory on a big-endian system, then read bytes from that memory address, it will represent different values depending on how many bytes you read. The length matters for reading in and interpreting the data. If you write it into memory on a little-endian system, though, and read bytes from the address (with the remaining ones zero, very important!), then it is the same value no matter how many bytes you read. As long as you don't truncate the value, at least; 0x0A0B read as an 8-bit int would not be equal to being read as a 16-bit int, since an 8-bit int can't hold the entire thing. This lets you read a value in the size of integer you need for your calculation without conversion.

On the other hand, big-endian values are easier to read and reason about as a human. If you dump out the raw bytes that you're working with, a big-endian number can be easier to spot since it matches the numbers we use in English. This makes it pretty convenient to store values as big-endian, even if that's not the native format, so you can spot things in a hex dump more easily.

Ultimately, it's all kind of arbitrary. And it's a pile of standards where everything is made up, nothing matters, and the big-end is obviously the right end of the egg to crack. You monster.

[1] The correct answer is obviously the big end. That's where the little air pocket goes. But some people are monsters... ↩

[2] Please, please, someone make a conlang that uses mixed-endian inspired numbers. ↩

[3] If ever there were, maybe different endianness would be a contentious issue. Maybe some of our systems would be using big-endian but eventually realize their design was better suited to little-endian, and then spend a long time making that change. And then the government would become authoritarian on the promise of eradicating endianness-affirming care and—Oops, this became a metaphor. ↩

[4] 26 in hex is 0x1A, which is purely a coincidence and not a reference to the First Amendment. This is a tech blog, not political, and I definitely stay in my lane. If it were a reference, though, I'd remind you to exercise your 1A rights[5] now and call your elected officials to ensure that we keep these rights. I'm scared, and I'm staring down the barrel of potential life-threatening circumstances if things get worse. I expect you're scared, too. And you know what? Bravery is doing things in spite of your fear. ↩

[5] If you live somewhere other than the US, please interpret this as it applies to your own country's political process! There's a lot of authoritarian movement going on in the world, and we all need to work together for humanity's best, most free[6] future. ↩

[6] I originally wrote "freest" which, while spelled correctly, looks so weird that I decided to replace it with "most free" instead. ↩

4 weeks ago 14 votes

More in programming

DandeGUI, a GUI library for Medley Interlisp

I'm working on DandeGUI, a Common Lisp GUI library for simple text and graphics output on Medley Interlisp. The name, pronounced "dandy guy", is a nod to the Dandelion workstation, one of the Xerox D-machines Interlisp-D ran on in the 1980s.

DandeGUI allows the creation and management of windows for stream-based text and graphics output. It captures typical GUI patterns of the Medley environment such as printing text to a window instead of the standard output.

[Screenshot: a text output window created with DandeGUI on Medley Interlisp, alongside the Lisp code that generated it. The main window was created by the code shown above it.]

The library is written in Common Lisp and exposes its functionality as an API callable from Common Lisp and Interlisp code.

Motivations

In most of my prior Lisp projects I wrote programs that print text to windows. In general these windows are actually not bare Medley windows but running instances of the TEdit rich-text editor. Driving a full editor instead of directly creating windows may be overkill, but I get for free content scrolling as well as window resizing and repainting, which TEdit handles automatically.

Moreover, TEdit windows have an associated TEXTSTREAM, an Interlisp data structure for text stream I/O. A TEXTSTREAM can be passed to any Common Lisp or Interlisp output function that takes a stream as an argument, such as PRINC, FORMAT, and PRIN1. For example, if S is the TEXTSTREAM associated with a TEdit window, (FORMAT S "~&Hello, Medley!~%") inserts the text "Hello, Medley!" in the window at the position of the cursor. Simple and versatile.

As I wrote more GUI code, recurring patterns and boilerplate emerged. These programs usually create a new TEdit window; set up the title and other options; fetch the associated text stream; and return it for further use. The rest of the program prints application specific text to the stream and hence to the window. These patterns were ripe for abstracting and packaging in a library that other programs can call. This work is also good experience with API design.

Usage

An example best illustrates what DandeGUI can do and how to use it. Suppose you want to display in a window some text such as a table of square roots. This code creates the table in the screenshot above:

    (gui:with-output-to-window (stream :title "Table of square roots")
      (format stream "~&Number~40TSquare Root~2%")
      (loop for n from 1 to 30
            do (format stream "~&~4D~40T~8,4F~%" n (sqrt n))))

DandeGUI exports all the public symbols from the DANDEGUI package with nickname GUI. The macro GUI:WITH-OUTPUT-TO-WINDOW creates a new TEdit window with title specified by :TITLE, and establishes a context in which the variable STREAM is bound to the stream associated with the window. The rest of the code prints the table by repeatedly calling the Common Lisp function FORMAT with the stream.

GUI:WITH-OUTPUT-TO-WINDOW is best suited for one-off output, as the stream is no longer accessible outside of its scope. To retain the stream and send output in a series of steps, or from different parts of the program, you need a combination of GUI:OPEN-WINDOW-STREAM and GUI:WITH-WINDOW-STREAM. The former opens and returns a new window stream which may later be used by FORMAT and other stream output functions. These functions must be wrapped in calls to the macro GUI:WITH-WINDOW-STREAM to establish a context in which a variable is bound to the appropriate stream.
The DandeGUI documentation on the project repository provides more details, sample code, and the API reference.

Design

DandeGUI is a thin wrapper around the Interlisp system facilities that provide the underlying functionality. The main reason for a thin wrapper is to have a simple API that covers the most common user interface patterns. Despite the simplicity, the library takes care of a lot of the complexity of managing Medley GUIs, such as content scrolling and window repainting and resizing.

A thin wrapper doesn't hide much of the data structures ubiquitous in Medley GUIs, such as menus and font descriptors. This is a plus, as the programmer leverages prior knowledge of these facilities. So far I have no clear idea how DandeGUI may evolve. One more reason not to deepen the wrapper too much without a clear direction.

The user need not know whether DandeGUI packs TEdit or ordinary windows under the hood. Therefore, another design goal is to hide this implementation detail. DandeGUI, for example, disables the main command menu of TEdit and sets the editor buffer to read-only so that typing in the window doesn't change the text accidentally.

Using Medley Common Lisp

DandeGUI relies on basic Common Lisp features. Although the Medley Common Lisp implementation is not ANSI compliant it provides all I need, with one exception. The function DANDEGUI:WINDOW-TITLE returns the title of a window and allows setting it with a SETF function. However, the SEdit structure editor and the File Manager of Medley don't support or track function names that are lists such as (SETF WINDOW-TITLE). A good workaround is to define SETF functions with DEFSETF, which Medley does support along with the CLtL macro DEFINE-SETF-METHOD.

Next steps

At present DandeGUI doesn't do much more than what is described here. To enhance this foundation I'll likely allow clearing existing text and give control over where to insert text in windows, such as at the beginning or end. DandeGUI will also have rich text facilities like printing in bold or changing fonts. The windows of some of my programs have an attached menu of commands and a status area for displaying errors and other messages. I will eventually implement such menu-ed windows.

To support programs that do graphics output I plan to leverage the functionality of Sketch for graphics in a way similar to how I build upon TEdit for text. Sketch is the line drawing editor of Medley. The Interlisp graphics primitives require as an argument a DISPLAYSTREAM, a data structure that represents an output sink for graphics. It is possible to use the Sketch drawing area as an output destination by associating a DISPLAYSTREAM with the editor's window. Like TEdit, Sketch takes care of repainting content as well as window scrolling and resizing. In other words, DISPLAYSTREAM is to Sketch what TEXTSTREAM is to TEdit. DandeGUI will create and manage Sketch windows with associated streams suitable for use as the DISPLAYSTREAM the graphics primitives require.

#DandeGUI #CommonLisp #Interlisp #Lisp

Discuss: https://remark.as/p/journal.paoloamoroso.com/dandegui-a-gui-library-for-medley-interlisp

17 hours ago 2 votes
A flash of light in the darkness

I support dark mode on this site, and as part of the dark theme, I have a colour-inverted copy of the default background texture. I like giving my website a subtle bit of texture, which I think makes it stand out from a web which is mostly solid-colour backgrounds. Both my textures are based on the “White Waves” pattern made by Stas Pimenov.

I was setting these images as my background with two CSS rules, using the prefers-color-scheme: dark media feature to use the alternate image in dark mode:

    body {
      background: url('https://alexwlchan.net/theme/white-waves-transparent.png');
    }

    @media (prefers-color-scheme: dark) {
      body {
        background: url('https://alexwlchan.net/theme/black-waves-transparent.png');
      }
    }

This works, mostly. But I prefer light mode, so while I wrote this CSS and I do some brief testing whenever I make changes, I’m not using the site in dark mode. I know how dark mode works in my local development environment, not how it feels as a day-to-day user.

Late last night I was using my phone in dark mode to avoid waking the other people in the house, and I opened my site. I saw a brief flash of white, and then the dark background texture appeared. That flash of bright white is precisely what you don’t want when you’re using dark mode, but it happened anyway. I made a note to work it out in the morning, then I went to bed.

Now I’m fully awake, it’s obvious what happened. Because my only background is the image URL, there’s a brief gap between the CSS being parsed and the background image being loaded. In that time, the browser doesn’t have anything to put in the background, so you just get pure white.

This was briefly annoying in the moment, but it would be even worse if the background texture never loaded. I have light text on black in dark mode, but without the background image it’s just light text on white, which is barely readable.

I never noticed this in local development, because I’m usually working in a well-lit room where that white flash would be far less obvious. I’m also using a local version of the site, which loads near-instantly and where the background image is almost certainly saved in my browser cache.

I’ve made two changes to prevent this happening again.

I’ve added a colour to use as a fallback until the image loads. The CSS background property supports adding a colour, which is used until the image loads, or as a fallback if it doesn’t. I already use this in a few places, and now I’ve added it to my body background.

    body {
      background: url('https://…/white-waves-transparent.png') #fafafa;
    }

    @media (prefers-color-scheme: dark) {
      body {
        background: url('https://…/black-waves-transparent.png') #0d0d0d;
      }
    }

This avoids the flash of unstyled background before the image loads – the browser will use a solid dark background until it gets the texture.

I’ve added rel="preload" elements to the head of the page, so the browser will start loading the background textures faster. These elements are a clue to the browser that these resources are going to be useful when it renders the page, so it should start loading them as soon as possible:

    <link rel="preload" href="https://alexwlchan.net/theme/white-waves-transparent.png" as="image" type="image/png" media="(prefers-color-scheme: light)" />
    <link rel="preload" href="https://alexwlchan.net/theme/black-waves-transparent.png" as="image" type="image/png" media="(prefers-color-scheme: dark)" />

This means the browser is downloading the appropriate texture at the same time as it’s downloading the CSS file.
Previously it had to download the CSS file, parse it, and only then would it know to start downloading the texture. With the preload, it’s a bit faster! The difference is probably imperceptible if you’re on a fast connection, but it’s a small win and I can’t see any downside (as long as I scope the preload correctly, and don’t preload resources I don’t end up using).

I’ve seen a lot of sites using <link rel="preload"> and I’ve only half-understood what it is and why it’s useful – I’m glad to have a chance to use it myself, so I can understand it better.

This bug reminds me of a phenomenon called flash of unstyled text. Back when custom fonts were fairly new, you’d often see web pages appear briefly with the default font before custom fonts finished loading. There are well-understood techniques for preventing this, so it’s unusual to see that brief unstyled text on modern web pages – but the same issue is affecting me in dark mode. I avoided using custom fonts on the web to avoid tackling this issue, but it got me anyway! In these dark times for the web, old bugs are new again.

[If the formatting of this post looks odd in your feed reader, visit the original article]

6 hours ago 1 votes
The innovative designs of 1995

In 1995, a new industry was born, and design became a true practice. The post The innovative designs of 1995 appeared first on The History of the Web.

2 hours ago 1 votes
“Can They Change My Contract?”: Protecting Your Workplace Rights in Japan

Right now, the General Union is handling cases at Japanese tech companies where well-established workplace practices have come under threat. These include businesses pushing for return-to-office mandates after years of remote work, eliminating flexible scheduling, and cutting bonuses and other forms of compensation.

Sometimes these companies are altering work conditions that were never officially documented but had become standard practice. Other times, they’re trying to eliminate benefits explicitly written into contracts or work rules. In both cases, many workers believe they have no choice but to accept these changes. They’re wrong!

Japanese labour law protects workers in two significant ways here. First, there’s the principle of established workplace practices (労使慣行, roushi kankou), protected through decades of court precedents, which can give unwritten customs the same legal weight as written rules. Then there’s Article 8 of the Labour Contract Law, which prevents employers from unilaterally changing documented working conditions.

But having these legal protections is only half the battle. While the courts have established strong precedents, as an individual, pursuing these rights can be prohibitively difficult. Taking an employer to court is expensive, time-consuming, and potentially career-damaging. This is where collective action through unions becomes essential. Unions provide both the legal expertise and collective leverage needed to uphold these rights.

For tech workers in Japan, these issues have never been more relevant. The tech industry’s desperate need for skilled workers, worth nearly 22 trillion yen in domestic investments, could create unprecedented leverage. By understanding their legal rights and organizing collectively, developers can effectively protect and even improve their workplace conditions in this critical moment.

How the courts protect you

A series of landmark cases in Japan created robust protections for workplace customs, also known as established employment practices. But what qualifies as an established employment practice? Yukiko Sadaoka, a regular collaborator with the General Union, explained how courts determine whether a workplace practice qualifies to be protected. “There are three main factors. One, a habit or fact must have been repeated and continued for a long period of time. Two, neither labour nor management has explicitly denied following the practice. And three, the practice must be supported by a normative awareness on both sides. The employers, especially those who control working conditions, must be aware of the practice.”

Case 1: Post-Retirement Employment Practice (RECOGNISED)

東豊観光事件 (Toho Kanko), Osaka District Court, 28 June 1990

In this court order, the company’s work regulations stated that the forced retirement age (定年, teinen) was 55. In practice, however, the company repeatedly kept employees on after they had reached this age. This custom continued for six years (1984–1990), during which 7 out of 8 employees who reached the age of 55 were retained. When the company terminated the 55 year-old plaintiff, citing the official retirement rule, the plaintiff sued for confirmation of employment status.

The court stated that the practice of continued employment after 55 had become an established workplace custom. That custom overrode the written regulations, despite its relatively short period of practice, because it had been consistently applied to almost all employees who had reached the retirement age during that time.
Case 2: Extra Pay for Substitute Holidays (DENIED)

商大八戸の里ドライビングスクール事件 (Syodai Yae-no-sato Driving School), Osaka High Court, 25 June 1993, upheld by the Supreme Court, 9 March 1995

Employees claimed entitlement to extra pay when working on a substitute holiday. The company’s policy was that every other Monday was a day off. If that particular Monday fell on a national holiday, Tuesday would become the substitute day off (振替休日, furikae kyuujitsu). The question was whether working on these Tuesday substitute holidays entitled workers to extra holiday pay.

Although this practice had existed for 10 years, both the Osaka High Court and the Supreme Court denied that it was an established workplace practice because the specific situation—working on a Tuesday substitute holiday—had only occurred 8–10 times during that period. This case demonstrates that frequency matters. Even when established over a lengthy period of time, if the actual instances of the practice are rare, it may not be considered established.

Case 3: Bonus Amount Practice (PARTIALLY RECOGNISED)

立命館事件 (Ritsumeikan), Kyoto District Court, 29 March 2012 (appealed, outcome uncertain)

This case involved a bonus dispute. For 14 years, according to labour agreements (労働協約, roudou kyouyaku), the company had paid a bonus of 6.1 months’ salary plus 100,000 yen. However, there was no provision about bonuses in the company work rules (就業規則, shuugyou kisoku). When the employer announced the forthcoming bonus would be only 5.1 months’ salary, employees claimed the higher amount was an established practice.

The court made an interesting distinction: it ruled that the specific amount (6.1 months + 100,000 yen) was NOT an established practice because there had been negotiations each year before agreeing on the amount, even though the final amount had always been the same. However, it recognised that “at least 6 months’ salary” had become an established practice, because the employer’s initial offer had never been lower than 6 months throughout the 14-year period. This case demonstrates the nuanced way courts examine whether a practice has created a legitimate expectation that can’t be unilaterally changed.

These examples show that determining what constitutes an established workplace practice isn’t a simple matter of time. Courts look at consistency, frequency, specificity, and whether both sides understood the practice to be binding—even if they didn’t write it down.

Other cases of interest

The courts’ protection of established practices has occasionally been implemented in surprising ways. Take a case from 1968 that went all the way to the Tokyo High Court. The issue? Whether railway workers could continue their long-standing practice of using the company bath at 4 p.m. and clocking out at 4:30. At first glance, it might seem trivial—why fight over a bathing schedule? But the court’s decision was significant: after 13 years of this practice, with management’s knowledge and acceptance, it had become a legally-protected workplace custom that couldn’t be unilaterally changed.

In the 1973 Shishido Shokai case (宍戸商会事件), the Tokyo District Court ruled that a company must pay severance to a voluntarily-resigned employee because consistent payment to other departing employees had established a binding workplace custom. The court classified these payments as deferred wages, confirming that consistent practices can create legal rights without written policies.
Even when ruling in employers’ favour, courts have reinforced the importance of established practices. The 1982 Daiwa Bank case (大和銀行事件) upheld the bank’s long-standing practice of paying bonuses only to those still employed on payment dates, ruling that workers who left before payment weren’t entitled to bonuses despite working during the calculation period.

These court precedents have established principles that apply universally across all industries. Whether a railway worker’s bathing schedule from the 1960s or solidifying employer rights, established employment practices are a real concept that can be parlayed into tech workers’ fight to maintain their working conditions.

What does the law say?

Article 8 of the Labour Contract Law also protects you against unilateral changes to working conditions. The law states, “A Worker and an Employer may, by agreement, change any working conditions that constitute the contents of a labour contract.” While the law is written in the positive, the inverse is what is important: “a worker and an employer cannot change working conditions without mutual agreement.”

So, what constitutes the “contents of a labour contract”? Does this only deal with working conditions that are specifically written into your employment contract? According to General Union Chair Toshiaki Asari, “The idea that long-standing practices can become part of the employment contract and receive legal protection—even if they aren’t explicitly written—is well-established as a legal doctrine. Therefore, long-standing practices should also fall under the protections of Article 8 of the Labour Contract Law, just like written working conditions.”

This perspective is supported by a significant 1991 Supreme Court ruling in the Hitachi Musashi Factory case (日立製作所武蔵工場事件). When an employee refused to work overtime during a production issue, claiming the request was unreasonable, the court found that the company’s consistent practice of requiring overtime in specific situations had become an implied term of employment—even without documentation. Because this practice was long-standing and implicitly accepted by employees over time, the court upheld the company’s disciplinary action. Though the ruling was in the employer’s favor, it also established that workplace customs, through consistent application and mutual understanding, can become legally binding components of employment contracts.

Does this mean an employer can never change any working condition? The short answer is: they certainly can. This right of the employer to change working conditions is established in both Articles 9 and 10 of the Labour Contract Law. However, Article 9 also sets up a fundamental principle: employers cannot unilaterally change working conditions to workers’ disadvantage by modifying work rules without employee agreement. While Article 10 provides limited exceptions to this principle, it sets strict criteria that employers must meet. Any changes must be:

- Properly communicated to workers
- Reasonable given the degree of disadvantage to workers
- Necessary for the business
- Appropriate in their revised content
- Preceded by proper negotiations with unions or worker representatives

This framework ensures that while employers can adapt to changing business needs, they cannot do so by simply imposing disadvantageous changes on workers without justification or prior consultation.

The reality of defending your rights

But what do these legal protections really mean in practice?
Terms like disadvantages, reasonable, necessary, and appropriate sound reassuring on paper. Yet their practical meaning becomes far less clear when your employer suddenly demands you return to the office after five years of remote work, or changes how raises are calculated, or alters stock option arrangements.

This is where the gap between legal rights and practical enforcement becomes stark. While the law provides a strong framework for protecting workplace practices, enforcing these rights as an individual is another matter entirely. Taking your employer to court is a long, expensive process that could take years to resolve, and that’s not only in the case of established employment practices. Even the seemingly clear prescriptions of the Labour Contract Law can only be enforced through court action. In the meantime, you’re stuck working under the contested conditions, while potentially damaging both your career and wellbeing.

And remember, if you want to negotiate with your employer about changes to workplace practices, they have no legal obligation to even meet with you. They can simply ignore your requests or refuse outright.

The power of collective action

Unlike individual workers, who can be ignored or denied the chance to meet with management, employers cannot legally refuse to negotiate with a labour union. This right to collective bargaining is protected by the Trade Union Law. Even if you’re the only union member in your workplace, the company must negotiate with your union in good faith. The law also protects union members from retaliation or disadvantageous treatment.

Unions offer multiple pathways for protecting workers’ rights and established employment practices. One key strategy is establishing prior consultation agreements (事前協議協定, jizen kyougi kyoutei), which require employers to inform and consult with the union before implementing changes to working conditions. By securing a seat at the table early, unions can influence decisions and propose alternatives before changes are implemented, rather than fighting them after the fact.

Through collective bargaining, unions can also convert established workplace practices into written agreements, giving them stronger protection than relying on court precedents alone. By codifying these customs in collective agreements, they become immune to unilateral changes by employers. This matters because collective agreements sit at the top of the workplace rules hierarchy, superseding both individual contracts and company work rules—they’re the gold standard in protecting workers’ rights.

True, an employer might still break a collective agreement, but unlike individual workers who can only turn to courts or government agencies, unions have multiple ways to respond. They can file complaints with the Labour Commission, as violating a collective agreement constitutes an unfair labour practice under the Trade Union Law. Most importantly, unions retain the right to take direct action through strikes and other collective measures—options simply not available to individual workers.

As an individual, though, declaring union membership often shifts the power dynamic. When you’re backed by a union, you’re no longer facing the company alone. Many disputes are resolved at this early stage, as employers recognise that their actions will face greater scrutiny. While some employers might still attempt to violate the law, most understand that directly confronting unions by targeting their members creates more problems than it’s worth.
Remember, your right to labour union representation is enshrined both in the Trade Union Law and Article 28 of the Constitution, which states that “The right of workers to organise and to bargain and act collectively is guaranteed.”

Unions can also push beyond merely protecting existing rights, into negotiating improvements that particularly matter to tech workers—like expanded remote work options, clearer overtime compensation for project crunch times, training and skill development budgets, improved parental leave policies, and protections against excessive on-call rotations. Unions can also establish guidelines on emerging issues like AI implementation policies and the right to disconnect after working hours.

Conclusion

For developers, this situation presents a unique opportunity for collective action. Japan’s tech industry, worth nearly 22 trillion yen in domestic IT investments alone, mirrors the automotive industry of mid-20th century America—a wealthy, rapidly growing sector desperately in need of workers at all skill levels. Like the auto plants of Detroit, modern tech companies rely on a diverse workforce ranging from highly-paid specialists to entry-level developers.

This combination—a wealthy industry, a significant labour shortage, and the critical role tech workers play in company operations—creates unprecedented leverage for collective action. Just as auto workers used their position to secure better wages and working conditions in the 1950s and 1960s, tech workers today could wield similar influence. The cost of lost productivity in tech companies can run into millions of yen per day, giving organised workers significant bargaining power. Moreover, unlike in traditional manufacturing where production delays might only affect future sales, many tech companies lose revenue immediately when work stops. By maintaining critical systems, supporting client operations, and keeping online services running, tech workers’ daily tasks directly impact their companies’ bottom lines. Even a small group of organised workers can effectively advocate for better conditions.

That’s why understanding your legal rights under Japanese labour law is crucial—it provides the foundation for protecting established employment practices and knowing what you can rightfully demand. But knowledge alone isn’t enough. Whether these rights come from written contracts, established customs, or collective agreements, making them real requires more than just knowing they exist: it requires standing together to defend them. Through union membership, you combine legal protection with collective power—that is, you have both the law on your side and an active organisation working to protect your rights, with real mechanisms to enforce them.

yesterday 5 votes