More from alexwlchan
Consider the following JSON object:

```json
{ "sides": 4, "colour": "red", "sides": 5, "colour": "blue" }
```

Notice that sides and colour both appear twice. This looks invalid, but I learnt recently that this is actually legal JSON syntax! It’s unusual and discouraged, but it’s not completely forbidden.

This was a big surprise to me. I think of JSON objects as key/value pairs, and I associate them with data structures like a dict in Python or a Hash in Ruby – both of which only allow unique keys. JSON has no such restriction, and I started thinking about how to handle it.

What does the JSON spec say about duplicate names?

JSON is described by several standards, which Wikipedia helpfully explains for us:

> After RFC 4627 had been available as its “informational” specification since 2006, JSON was first standardized in 2013, as ECMA‑404. RFC 8259, published in 2017, is the current version of the Internet Standard STD 90, and it remains consistent with ECMA‑404. That same year, JSON was also standardized as ISO/IEC 21778:2017.

The ECMA and ISO/IEC standards describe only the allowed syntax, whereas the RFC covers some security and interoperability considerations.

All three of these standards explicitly allow the use of duplicate names in objects. ECMA‑404 and ISO/IEC 21778:2017 have identical text to describe the syntax of JSON objects, and they say (emphasis mine):

> An object structure is represented as a pair of curly bracket tokens surrounding zero or more name/value pairs. […] The JSON syntax does not impose any restrictions on the strings used as names, *does not require that name strings be unique*, and does not assign any significance to the ordering of name/value pairs. These are all semantic considerations that may be defined by JSON processors or in specifications defining specific uses of JSON for data interchange.

RFC 8259 goes further and strongly recommends against duplicate names, but the use of SHOULD means it isn’t completely forbidden:

> The names within an object SHOULD be unique.

The same document warns about the consequences of ignoring this recommendation:

> An object whose names are all unique is interoperable in the sense that all software implementations receiving that object will agree on the name-value mappings. When the names within an object are not unique, the behavior of software that receives such an object is unpredictable. Many implementations report the last name/value pair only. Other implementations report an error or fail to parse the object, and some implementations report all of the name/value pairs, including duplicates.

So it’s technically valid, but it’s unusual and discouraged. I’ve never heard of a use case for JSON objects with duplicate names. I’m sure there was a good reason for it being allowed by the spec, but I can’t think of it.

Most JSON parsers – including jq, JavaScript, and Python – will silently discard all but the last instance of a duplicate name. Here’s an example in Python:

```
>>> import json
>>> json.loads('{"sides": 4, "colour": "red", "sides": 5, "colour": "blue"}')
{'colour': 'blue', 'sides': 5}
```

What if I wanted to decode the whole object, or throw an exception if I see duplicate names?

This happened to me recently. I was editing a JSON file by hand, and I’d copy/paste objects to update the data. I also had scripts which could update the file. I forgot to update the name on one of the JSON objects, so there were two name/value pairs with the same name. When I ran the script, it silently erased the first value.
I was able to recover the deleted value from the Git history, but I wondered how I could prevent this happening again. How could I make the script fail, rather than silently delete data?

Decoding duplicate names in Python

When Python decodes a JSON object, it first parses the object as a list of name/value pairs, then it turns that list of name/value pairs into a dictionary.

We can see this by looking at the JSONObject function in the CPython source code: it builds a list pairs, and at the end of the function, it calls dict(pairs) to turn the list into a dictionary. This relies on the fact that dict() can take an iterable of key/value tuples and create a dictionary:

```
>>> dict([('sides', 4), ('colour', 'red')])
{'colour': 'red', 'sides': 4}
```

The docs for dict() tell us that it will discard duplicate keys: “if a key occurs more than once, the last value for that key becomes the corresponding value in the new dictionary”.

```
>>> dict([('sides', 4), ('colour', 'red'), ('sides', 5), ('colour', 'blue')])
{'colour': 'blue', 'sides': 5}
```

We can customise what Python does with the list of name/value pairs. Rather than calling dict(), we can pass our own function to the object_pairs_hook parameter of json.loads(), and Python will call that function on the list of pairs. This allows us to parse objects in a different way.

For example, we can just return the literal list of name/value pairs:

```
>>> import json
>>> json.loads(
...     '{"sides": 4, "colour": "red", "sides": 5, "colour": "blue"}',
...     object_pairs_hook=lambda pairs: pairs
... )
[('sides', 4), ('colour', 'red'), ('sides', 5), ('colour', 'blue')]
```

We could also use the multidict library to get a dict-like data structure which supports multiple values per key. This is based on HTTP headers and URL query strings, two environments where it’s common to have multiple values for a single key:

```
>>> from multidict import MultiDict
>>> md = json.loads(
...     '{"sides": 4, "colour": "red", "sides": 5, "colour": "blue"}',
...     object_pairs_hook=lambda pairs: MultiDict(pairs)
... )
>>> md
<MultiDict('sides': 4, 'colour': 'red', 'sides': 5, 'colour': 'blue')>
>>> md['sides']
4
>>> md.getall('sides')
[4, 5]
```

Preventing silent data loss

If we want to throw an exception when we see duplicate names, we need a longer function. Here’s the code I wrote:

```python
import collections
import typing


def dict_with_unique_names(
    pairs: list[tuple[str, typing.Any]]
) -> dict[str, typing.Any]:
    """
    Convert a list of name/value pairs to a dict, but only if the
    names are unique.

    If there are non-unique names, this function throws a ValueError.
    """
    # First try to parse the object as a dictionary; if it's the same
    # length as the pairs, then we know all the names were unique and
    # we can return immediately.
    pairs_as_dict = dict(pairs)

    if len(pairs_as_dict) == len(pairs):
        return pairs_as_dict

    # Otherwise, let's work out what the repeated name(s) were, so we
    # can throw an appropriate error message for the user.
    name_tally = collections.Counter(n for n, _ in pairs)

    repeated_names = [n for n, count in name_tally.items() if count > 1]
    assert len(repeated_names) > 0

    if len(repeated_names) == 1:
        raise ValueError(f"Found repeated name in JSON object: {repeated_names[0]}")
    else:
        raise ValueError(
            f"Found repeated names in JSON object: {', '.join(repeated_names)}"
        )
```

If I use this as my object_pairs_hook when parsing an object which has all unique names, it returns the normal dict I’d expect:

```
>>> json.loads(
...     '{"sides": 4, "colour": "red"}',
...     object_pairs_hook=dict_with_unique_names
... )
{'colour': 'red', 'sides': 4}
```
But if I’m parsing an object with one or more repeated names, the parsing fails and throws a ValueError:

```
>>> json.loads(
...     '{"sides": 4, "colour": "red", "sides": 5}',
...     object_pairs_hook=dict_with_unique_names
... )
Traceback (most recent call last):
[…]
ValueError: Found repeated name in JSON object: sides

>>> json.loads(
...     '{"sides": 4, "colour": "red", "sides": 5, "colour": "blue"}',
...     object_pairs_hook=dict_with_unique_names
... )
Traceback (most recent call last):
[…]
ValueError: Found repeated names in JSON object: sides, colour
```

This is precisely the behaviour I want – throwing an exception, not silently dropping data.

Encoding non-unique names in Python

It’s hard to think of a use case, but this post feels incomplete without at least a brief mention. If you want to encode custom data structures with Python’s JSON library, you can subclass JSONEncoder and define how those structures should be serialised. Here’s a rudimentary attempt at doing that for a MultiDict:

```python
class MultiDictEncoder(json.JSONEncoder):

    def encode(self, o: typing.Any) -> str:
        # If this is a MultiDict, we need to construct the JSON string
        # manually -- first encode each name/value pair, then construct
        # the JSON object literal.
        if isinstance(o, MultiDict):
            name_value_pairs = [
                f'{super().encode(str(name))}: {self.encode(value)}'
                for name, value in o.items()
            ]
            return '{' + ', '.join(name_value_pairs) + '}'

        return super().encode(o)
```

And here’s how you use it:

```
>>> md = MultiDict([('sides', 4), ('colour', 'red'), ('sides', 5), ('colour', 'blue')])
>>> json.dumps(md, cls=MultiDictEncoder)
'{"sides": 4, "colour": "red", "sides": 5, "colour": "blue"}'
```

This is rough code, and you shouldn’t use it – it’s only an example. I’m constructing the JSON string manually, so it doesn’t handle edge cases like indentation or special characters. There are almost certainly bugs, and you’d need to be more careful if you wanted to use this for real.

In practice, if I had to encode a multi-dict as JSON, I’d encode it as a list of objects which each have a key and a value field. For example:

```json
[
  {"key": "sides",  "value": 4},
  {"key": "colour", "value": "red"},
  {"key": "sides",  "value": 5},
  {"key": "colour", "value": "blue"}
]
```

This is a pretty standard pattern, and it won’t trip up JSON parsers which aren’t expecting duplicate names.

Do you need to worry about this?

This isn’t a big deal. JSON objects with duplicate names are pretty unusual – this is the first time I’ve ever encountered one, and it was a mistake. Trying to account for this edge case in every project that uses JSON would be overkill. It would add complexity to my code and probably never catch a single error.

This started when I made a copy/paste error that introduced the initial duplication, and then a script modified the JSON file and caused some data loss. That’s a somewhat unusual workflow, because most JSON files are exclusively modified by computers, where this wouldn’t be an issue.

I’ve added this error handling to my javascript-data-files library, but I don’t anticipate adding it to other projects. I use that library for my static website archives, which is where I had this issue.

Although I won’t use this code exactly, it’s been good practice at writing custom encoders/decoders in Python. That is something I do all the time – I’m often encoding native Python types as JSON, and I want to get the same type back when I decode later. I’ve been writing my own subclasses of JSONEncoder and JSONDecoder for a while.
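On the decoding side, the object_pairs_hook can be baked into a JSONDecoder subclass, so callers don’t have to remember to pass the hook every time. This is a minimal sketch I’m adding for illustration, not code from the original post; it reuses the dict_with_unique_names function defined above:

```python
import json


class UniqueNamesDecoder(json.JSONDecoder):
    """A JSONDecoder that rejects objects containing duplicate names."""

    def __init__(self, **kwargs):
        # Force the duplicate-checking hook; everything else is passed
        # through to the standard decoder unchanged.
        kwargs["object_pairs_hook"] = dict_with_unique_names
        super().__init__(**kwargs)


json.loads('{"sides": 4, "colour": "red"}', cls=UniqueNamesDecoder)
# {'colour': 'red', 'sides': 4}

json.loads('{"sides": 4, "sides": 5}', cls=UniqueNamesDecoder)
# ValueError: Found repeated name in JSON object: sides
```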
Now I know a bit more about how Python decodes JSON, and object_pairs_hook is another tool I can consider using. This was a fun deep dive for me, and I hope you found it helpful too.
I store a lot of data in SQLite databases on remote servers, and I often want to copy them to my local machine for analysis or backup.

When I’m starting a new project and the database is near-empty, this is a simple rsync operation:

```shell
$ rsync --progress username@server:my_remote_database.db my_local_database.db
```

As the project matures and the database grows, this gets slower and less reliable. Downloading a 250MB database from my web server takes about a minute over my home Internet connection, and that’s pretty small – most of my databases are multiple gigabytes in size.

I’ve been trying to make these copies go faster, and I recently discovered a neat trick.

What really slows me down is my indexes. I have a lot of indexes in my SQLite databases, which dramatically speed up my queries, but also make the database file larger and slower to copy. (In one database, there’s an index which single-handedly accounts for half the size on disk!)

The indexes don’t store anything unique – they just duplicate data from other tables to make queries faster. Copying the indexes makes the transfer less efficient, because I’m copying the same data multiple times. I was thinking about ways to skip copying the indexes, and I realised that SQLite has built-in tools to make this easy.

Dumping a database as a text file

SQLite allows you to dump a database as a text file. If you use the .dump command, it prints the entire database as a series of SQL statements. This text file can often be significantly smaller than the original database.

Here’s the command:

```shell
$ sqlite3 my_database.db .dump > my_database.db.txt
```

And here’s what the beginning of that file looks like:

```sql
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE IF NOT EXISTS "tags" (
    [name] TEXT PRIMARY KEY,
    [count_uses] INTEGER NOT NULL
);
INSERT INTO tags VALUES('carving',260);
INSERT INTO tags VALUES('grass',743);
…
```

Crucially, this reduces the large and disk-heavy indexes into a single line of text – it’s an instruction to create an index, not the index itself:

```sql
CREATE INDEX [idx_photo_locations] ON [photos] ([longitude], [latitude]);
```

This means that I’m only storing each value once, rather than the many times it may be stored across the original table and my indexes. This is how the text file can be smaller than the original database.

If you want to reconstruct the database, you pipe this text file back to SQLite:

```shell
$ cat my_database.db.txt | sqlite3 my_reconstructed_database.db
```

Because the SQL statements are very repetitive, this text responds well to compression:

```shell
$ sqlite3 explorer.db .dump | gzip -c > explorer.db.txt.gz
```

To give you an idea of the potential savings, here’s the relative disk size for one of my databases.

| File | Size on disk |
|------|--------------|
| original SQLite database | 3.4 GB |
| text file (`sqlite3 my_database.db .dump`) | 1.3 GB |
| gzip-compressed text (`sqlite3 my_database.db .dump \| gzip -c`) | 240 MB |

The gzip-compressed text file is 14× smaller than the original SQLite database – that makes downloading the database much faster.

My new ssh+rsync command

Rather than copying the database directly, now I create a gzip-compressed text file on the server, copy that to my local machine, and reconstruct the database.
Like so:

```shell
# Create a gzip-compressed text file on the server
ssh username@server "sqlite3 my_remote_database.db .dump | gzip -c > my_remote_database.db.txt.gz"

# Copy the gzip-compressed text file to my local machine
rsync --progress username@server:my_remote_database.db.txt.gz my_local_database.db.txt.gz

# Remove the gzip-compressed text file from my server
ssh username@server "rm my_remote_database.db.txt.gz"

# Uncompress the text file
gunzip my_local_database.db.txt.gz

# Reconstruct the database from the text file
cat my_local_database.db.txt | sqlite3 my_local_database.db

# Remove the local text file
rm my_local_database.db.txt
```

A database dump is a stable copy source

This approach fixes another issue I’ve had when copying SQLite databases.

If it takes a long time to copy a database and it gets updated midway through, rsync may give me an invalid database file. The first half of the file is pre-update, the second half is post-update, and they don’t match. When I try to open the database locally, I get an error:

```
database disk image is malformed
```

By creating a text dump before I start the copy operation, I’m giving rsync a stable copy source. That text dump isn’t going to change midway through the copy, so I’ll always get a complete and consistent text file.

This approach has saved me hours when working with large databases, and made my downloads both faster and more reliable. If you have to copy around large SQLite databases, give it a try.
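As an aside, you don’t have to shell out to the sqlite3 CLI to produce the dump: Python’s built-in sqlite3 module can generate roughly the same SQL with Connection.iterdump(). This is a minimal sketch of my own, not from the original post, and the filenames are placeholders:

```python
import gzip
import sqlite3


def dump_compressed(db_path: str, out_path: str) -> None:
    """Write a gzip-compressed SQL text dump of a SQLite database."""
    conn = sqlite3.connect(db_path)
    try:
        with gzip.open(out_path, "wt", encoding="utf-8") as f:
            # iterdump() yields SQL statements much like the CLI's .dump command
            for statement in conn.iterdump():
                f.write(statement + "\n")
    finally:
        conn.close()


dump_compressed("my_remote_database.db", "my_remote_database.db.txt.gz")
```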
I support dark mode on this site, and as part of the dark theme, I have a colour-inverted copy of the default background texture. I like giving my website a subtle bit of texture, which I think makes it stand out from a web which is mostly solid-colour backgrounds. Both my textures are based on the “White Waves” pattern made by Stas Pimenov.

I was setting these images as my background with two CSS rules, using the prefers-color-scheme: dark media feature to use the alternate image in dark mode:

```css
body {
  background: url('https://alexwlchan.net/theme/white-waves-transparent.png');
}

@media (prefers-color-scheme: dark) {
  body {
    background: url('https://alexwlchan.net/theme/black-waves-transparent.png');
  }
}
```

This works, mostly. But I prefer light mode, so while I wrote this CSS and I do some brief testing whenever I make changes, I’m not using the site in dark mode. I know how dark mode works in my local development environment, not how it feels as a day-to-day user.

Late last night I was using my phone in dark mode to avoid waking the other people in the house, and I opened my site. I saw a brief flash of white, and then the dark background texture appeared. That flash of bright white is precisely what you don’t want when you’re using dark mode, but it happened anyway. I made a note to work it out in the morning, then I went to bed.

Now I’m fully awake, it’s obvious what happened. Because my only background is the image URL, there’s a brief gap between the CSS being parsed and the background image being loaded. In that time, the browser doesn’t have anything to put in the background, so you just get pure white.

This was briefly annoying in the moment, but it would be even worse if the background texture never loaded. I have light text on black in dark mode, but without the background image it’s just light text on white, which is barely readable.

I never noticed this in local development, because I’m usually working in a well-lit room where that white flash would be far less obvious. I’m also using a local version of the site, which loads near-instantly and where the background image is almost certainly saved in my browser cache.

I’ve made two changes to prevent this happening again.

First, I’ve added a colour to use as a fallback until the image loads. The CSS background property supports adding a colour, which is used until the image loads, or as a fallback if it doesn’t. I already use this in a few places, and now I’ve added it to my body background.

```css
body {
  background: url('https://…/white-waves-transparent.png') #fafafa;
}

@media (prefers-color-scheme: dark) {
  body {
    background: url('https://…/black-waves-transparent.png') #0d0d0d;
  }
}
```

This avoids the flash of unstyled background before the image loads – the browser will use a solid dark background until it gets the texture.

Second, I’ve added rel="preload" elements to the head of the page, so the browser will start loading the background textures faster. These elements are a clue to the browser that these resources are going to be useful when it renders the page, so it should start loading them as soon as possible:

```html
<link
  rel="preload"
  href="https://alexwlchan.net/theme/white-waves-transparent.png"
  as="image"
  type="image/png"
  media="(prefers-color-scheme: light)"
/>

<link
  rel="preload"
  href="https://alexwlchan.net/theme/black-waves-transparent.png"
  as="image"
  type="image/png"
  media="(prefers-color-scheme: dark)"
/>
```

This means the browser is downloading the appropriate texture at the same time as it’s downloading the CSS file.
Previously it had to download the CSS file, parse it, and only then would it know to start downloading the texture. With the preload, it’s a bit faster!

The difference is probably imperceptible if you’re on a fast connection, but it’s a small win and I can’t see any downside (as long as I scope the preload correctly, and don’t preload resources I don’t end up using).

I’ve seen a lot of sites using <link rel="preload"> and I’ve only half-understood what it is and why it’s useful – I’m glad to have a chance to use it myself, so I can understand it better.

This bug reminds me of a phenomenon called flash of unstyled text. Back when custom fonts were fairly new, you’d often see web pages appear briefly with the default font before custom fonts finished loading. There are well-understood techniques for preventing this, so it’s unusual to see that brief unstyled text on modern web pages – but the same issue is affecting me in dark mode. I avoided using custom fonts on the web to avoid tackling this issue, but it got me anyway!

In these dark times for the web, old bugs are new again.
I’m a big fan of keyring, a Python module made by Jason R. Coombs for storing secrets in the system keyring. It works on multiple operating systems, and it knows what password store to use for each of them. For example, if you’re using macOS it puts secrets in the Keychain, but if you’re on Windows it uses Credential Locker.

The keyring module is a safe and portable way to store passwords, more secure than using a plaintext config file or an environment variable. The same code will work on different platforms, because keyring handles the hard work of choosing which password store to use.

It has a straightforward API: the keyring.set_password and keyring.get_password functions will handle a lot of use cases.

```
>>> import keyring
>>> keyring.set_password("xkcd", "alexwlchan", "correct-horse-battery-staple")
>>> keyring.get_password("xkcd", "alexwlchan")
"correct-horse-battery-staple"
```

Although this API is simple, it’s not perfect – I have some frustrations with the get_password function. In a lot of my projects, I’m now using a small function that wraps get_password.

What do I find frustrating about keyring.get_password?

If you look up a password that isn’t in the system keyring, get_password returns None rather than throwing an exception:

```
>>> print(keyring.get_password("xkcd", "the_invisible_man"))
None
```

I can see why this makes sense for the library overall – a non-existent password is very normal, and not exceptional behaviour – but in my projects, None is rarely a usable value.

I normally use keyring to retrieve secrets that I need to access protected resources – for example, an API key to call an API that requires authentication. If I can’t get the right secrets, I know I can’t continue. Indeed, continuing often leads to more confusing errors when some other function unexpectedly gets None, rather than a string.

For a while, I wrapped get_password in a function that would throw an exception if it couldn’t find the password:

```python
def get_required_password(service_name: str, username: str) -> str:
    """
    Get password from the specified service.

    If a matching password is not found in the system keyring,
    this function will throw an exception.
    """
    password = keyring.get_password(service_name, username)

    if password is None:
        raise RuntimeError(f"Could not retrieve password {(service_name, username)}")

    return password
```

When I use this function, my code will fail as soon as it fails to retrieve a password, rather than when it tries to use None as the password. This worked well enough for my personal projects, but it wasn’t a great fit for shared projects. I could make sense of the error, but not everyone could do the same.

What’s that password meant to be?

A good error message explains what’s gone wrong, and gives the reader clear steps for fixing the issue. The error message above is only doing half the job. It tells you what’s gone wrong (it couldn’t get the password) but it doesn’t tell you how to fix it.

As I started using this snippet in codebases that I work on with other developers, I got questions when other people hit this error. They could guess that they needed to set a password, but the error message doesn’t explain how, or what password they should be setting.

For example, is this a secret they should pick themselves? Is it a password in our shared password vault? Or do they need an API key for a third-party service? If so, where do they find it?

I still think my initial error was an improvement over letting None be used in the rest of the codebase, but I realised I could go further.
This is my extended wrapper:

```python
def get_required_password(service_name: str, username: str, explanation: str) -> str:
    """
    Get password from the specified service.

    If a matching password is not found in the system keyring,
    this function will throw an exception and explain to the user
    how to set the required password.
    """
    password = keyring.get_password(service_name, username)

    if password is None:
        raise RuntimeError(
            "Unable to retrieve required password from the system keyring!\n"
            "\n"
            "You need to:\n"
            "\n"
            f"1/ Get the password. Here's how: {explanation}\n"
            "\n"
            "2/ Save the new password in the system keyring:\n"
            "\n"
            f"       keyring set {service_name} {username}\n"
        )

    return password
```

The explanation argument allows me to explain what the password is for to a future reader, and what value it should have. That information can often be found in a code comment or in documentation, but putting it in an error message makes it more visible.

Here’s one example:

```python
get_required_password(
    "flask_app",
    "secret_key",
    explanation=(
        "Pick a random value, e.g. with\n"
        "\n"
        "    python3 -c 'import secrets; print(secrets.token_hex())'\n"
        "\n"
        "This password is used to securely sign the Flask session cookie. "
        "See https://flask.palletsprojects.com/en/stable/config/#SECRET_KEY"
    ),
)
```

If you call this function and there’s no keyring entry for flask_app/secret_key, you get the following error:

```
Unable to retrieve required password from the system keyring!

You need to:

1/ Get the password. Here's how: Pick a random value, e.g. with

    python3 -c 'import secrets; print(secrets.token_hex())'

This password is used to securely sign the Flask session cookie. See https://flask.palletsprojects.com/en/stable/config/#SECRET_KEY

2/ Save the new password in the system keyring:

       keyring set flask_app secret_key
```

It’s longer, but this error message is far more informative. It tells you what’s wrong, how to save a password, and what the password should be.

This is based on a real example where the previous error message led to a misunderstanding. A co-worker saw a missing password called “secret key” and thought it referred to a secret key for calling an API, and didn’t realise it was actually for signing Flask session cookies. Now that I can write a more informative error message, I can prevent that misunderstanding from happening again. (We also renamed the secret, for additional clarity.)

It takes time to write this explanation, which will only ever be seen by a handful of people, but I think it’s important. If somebody sees it at all, it’ll be when they’re setting up the project for the first time. I want that setup process to be smooth and straightforward.

I don’t use this wrapper in all my code, particularly small or throwaway toys that won’t last long enough for this to be an issue. But in larger codebases that will be used by other developers, and which I expect to last a long time, I use it extensively. Writing a good explanation now can avoid frustration later.
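As an aside, step 2 of that error message doesn’t have to be done by hand: the same library can store the secret as well as read it, so a project’s setup script could generate and save the value in one go. This is a small sketch I’m adding for illustration, not code from the post; it assumes the same flask_app/secret_key entry used above:

```python
import secrets

import keyring

# Generate a random value and store it in the system keyring, so that
# get_required_password("flask_app", "secret_key", ...) will find it.
if keyring.get_password("flask_app", "secret_key") is None:
    keyring.set_password("flask_app", "secret_key", secrets.token_hex())
```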
I’ve been writing some internal dashboards recently, and one hard part is displaying timestamps. Our server does everything in UTC, but the team is split across four different timezones, so the server timestamps aren’t always easy to read.

For most people, it’s harder to understand a UTC timestamp than a timestamp in your local timezone. Did that event happen just now, an hour ago, or much further back? Was that at the beginning of your working day? Or at the end?

Then I remembered that I tried to solve this five years ago at a previous job. I wrote a JavaScript snippet that converts UTC timestamps into human-friendly text. It displays times in your local time zone, and adds a short suffix if the time happened recently. For example:

```
today @ 12:00 BST (1 hour ago)
```

In my old project, I was writing timestamps in a <div> and I had to opt into the human-readable text for every date on the page. It worked, but it was a bit fiddly.

Doing it again, I thought of a more elegant solution. HTML has a <time> element for expressing datetimes, which is a more meaningful wrapper than a <div>. When I render the dashboard on the server, I don’t know the user’s timezone, so I include the UTC timestamp in the page like so:

```html
<time datetime="2025-04-15 19:45:00Z">
  Tue, 15 Apr 2025 at 19:45 UTC
</time>
```

I put a machine-readable date and time string with a timezone offset string in the datetime attribute, and then a more human-readable string in the text of the element.

Then I add this JavaScript snippet to the page:

```javascript
window.addEventListener("DOMContentLoaded", function() {
  document.querySelectorAll("time").forEach(function(timeElem) {

    // Set the `title` attribute to the original text, so a user
    // can hover over a timestamp to see the UTC time.
    timeElem.setAttribute("title", timeElem.innerText);

    // Replace the display text with a human-friendly date string
    // which is localised to the user's timezone.
    timeElem.innerText = getHumanFriendlyDateString(
      timeElem.getAttribute("datetime")
    );
  })
});
```

This updates any <time> element on the page to use a human-friendly date string, which is localised to the user’s timezone. For example, I’m in the UK so that becomes:

```html
<time datetime="2025-04-15 19:45:00Z" title="Tue, 15 Apr 2025 at 19:45 UTC">
  Tue, 15 Apr 2025 at 20:45 BST
</time>
```

In my experience, these timestamps are easier and more intuitive for people to read. I always include a timezone string (e.g. BST, EST, PDT) so it’s obvious that I’m showing a localised timestamp. If you really need the UTC timestamp, it’s in the title attribute, so you can see it by hovering over it. (Sorry, mouseless users, but I don’t think any of my team are browsing our dashboards from their phone or tablet.)

If the JavaScript doesn’t load, you see the plain old UTC timestamp. It’s not ideal, but the page still loads and you can see all the information – this behaviour is an enhancement, not essential.

To me, this is the unfulfilled promise of the <time> element. In my fantasy world, web page authors would write the time in a machine-readable format, and browsers would show it in a way that makes sense for the reader. They’d take into account their language, locale, and time zone. I understand why that hasn’t happened – it’s much easier said than done. You need so much context to know what’s the “right” thing to do when dealing with datetimes, and guessing without that context is at the heart of many datetime bugs.

These sorts of human-friendly, localised timestamps are very handy sometimes, and a complete mess at other times.
In my staff-only dashboards, I have that context. I know what these timestamps mean, who’s going to be reading them, and I think they’re a helpful addition that makes the data easier to read.
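For completeness, here’s roughly how the server-side half of this can look. This is a hedged Python sketch of my own, not the code from the post: it formats a UTC datetime into both the machine-readable datetime attribute and the human-readable fallback text shown above.

```python
from datetime import datetime, timezone


def render_time_element(dt: datetime) -> str:
    """Render a UTC <time> element that the JavaScript snippet can localise."""
    dt = dt.astimezone(timezone.utc)
    machine_readable = dt.strftime("%Y-%m-%d %H:%M:%SZ")
    human_readable = dt.strftime("%a, %d %b %Y at %H:%M UTC")
    return f'<time datetime="{machine_readable}">{human_readable}</time>'


print(render_time_element(datetime(2025, 4, 15, 19, 45, tzinfo=timezone.utc)))
# <time datetime="2025-04-15 19:45:00Z">Tue, 15 Apr 2025 at 19:45 UTC</time>
```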
More in programming
One of the first types we learn about is the boolean. It's pretty natural to use, because boolean logic underpins much of modern computing. And yet, it's one of the types we should probably be using a lot less of. In almost every single instance when you use a boolean, it should be something else.

The trick is figuring out what "something else" is. Doing this is worth the effort. It tells you a lot about your system, and it will improve your design (even if you end up using a boolean).

There are a few possible types that come up often, hiding as booleans. Let's take a look at each of these, as well as the case where using a boolean does make sense. This isn't exhaustive[1]—there are surely other types that can make sense, too.

Datetimes

A lot of boolean data is representing a temporal event having happened. For example, websites often have you confirm your email. This may be stored as a boolean column, is_confirmed, in the database. It makes a lot of sense. But, you're throwing away data: when the confirmation happened.

You can instead store when the user confirmed their email in a nullable column. You can still get the same information by checking whether the column is null. But you also get richer data for other purposes. Maybe you find out down the road that there was a bug in your confirmation process. You can use these timestamps to check which users would be affected by that, based on when their confirmation was stored.

This is the one I've seen discussed the most of all these. We run into it with almost every database we design, after all. You can detect it by asking if an action has to occur for the boolean to change values, and if values can only change one time. If you have both of these, then it really looks like it is a datetime being transformed into a boolean. Store the datetime!

Enums

Much of the remaining boolean data indicates either what type something is, or its status. Is a user an admin or not? Check the is_admin column! Did that job fail? Check the failed column! Is the user allowed to take this action? Return a boolean for that, yes or no!

These usually make more sense as an enum. Consider the admin case: this is really a user role, and you should have an enum for it. If it's a boolean, you're going to eventually need more columns, and you'll keep adding on other statuses. Oh, we had users and admins, but now we also need guest users and we need super-admins. With an enum, you can add those easily.

```rust
enum UserRole {
    User,
    Admin,
    Guest,
    SuperAdmin,
}
```

And then you can usually use your tooling to make sure that all the new cases are covered in your code. With a boolean, you have to add more booleans, and then you have to make sure you find all the places where the old booleans were used and make sure they handle these new cases, too. Enums help you avoid these bugs.

Job status is one that's pretty clearly an enum as well. If you use booleans, you'll have is_failed, is_started, is_queued, and on and on. Or you could just have one single field, status, which is an enum with the various statuses. (Note, though, that you probably do want timestamp fields for each of these events—but you're still best having the status stored explicitly as well.) This begins to resemble a state machine once you store the status, and it means that you can make much cleaner code and analyze things along state transition lines.

And it's not just for storing in a database, either. If you're checking a user's permissions, you often return a boolean for that.
```rust
fn check_permissions(user: User) -> bool {
    false // no one is allowed to do anything i guess
}
```

In this case, true means the user can do it and false means they can't. Usually. I think. But you can really start to have doubts here, and with any boolean, because the application logic meaning of the value cannot be inferred from the type.

Instead, this can be represented as an enum, even when there are just two choices.

```rust
enum PermissionCheck {
    Allowed,
    NotPermitted { reason: String },
}
```

As a bonus, though, if you use an enum? You can end up with richer information, like returning a reason for a permission check failing. And you are safe for future expansions of the enum, just like with roles.

You can detect when something should be an enum by a proliferation of booleans which are mutually exclusive or depend on one another. You'll see multiple columns which are all changed at the same time. Or you'll see a boolean which is returned and used for a long time. It's important to use enums here to keep your program maintainable and understandable.

Conditionals

But when should we use a boolean? I've mainly run into one case where it makes sense: when you're (temporarily) storing the result of a conditional expression for evaluation. This is in some ways an optimization, either for the computer (reuse a variable[2]) or for the programmer (make it more comprehensible by giving a name to a big conditional) by storing an intermediate value.

Here's a contrived example of using a boolean as an intermediate value.

```rust
fn calculate_user_data(user: User, records: RecordStore) {
    // this would be some nice long conditional,
    // but I don't have one. So variables it is!
    let user_can_do_this: bool = (a && b) && (c || !d);

    if user_can_do_this && records.ready() {
        // do the thing
    } else if user_can_do_this && records.in_progress() {
        // do another thing
    } else {
        // and something else!
    }
}
```

But even here in this contrived example, some enums would make more sense. I'd keep the boolean, probably, simply to give a name to what we're calculating. But the rest of it should be a match on an enum!

* * *

Sure, not every boolean should go away. There's probably no single rule in software design that is always true. But, we should be paying a lot more attention to booleans.

They're sneaky. They feel like they make sense for our data, but they make sense for our logic. The data is usually something different underneath. By storing a boolean as our data, we're coupling that data tightly to our application logic. Instead, we should remain critical and ask what data the boolean depends on, and should we maybe store that instead? It comes easier with practice. Really, all good design does. A little thinking up front saves you a lot of time in the long run.

[1] I know that using an em-dash is treated as a sign of using LLMs. LLMs are never used for my writing. I just really like em-dashes and have a dedicated key for them on one of my keyboard layers. ↩

[2] This one is probably best left to the compiler. ↩
As I slowly but surely work towards the next release of my setcmd project for the Amiga (see the 68k branch for the gory details and my total noob-like C flailing around), I’ve made heavy use of documentation in the AmigaGuide format. Despite its age, it’s a great Amiga-native format and there’s a wealth of great information out there for things like the C API, as well as language guides and tutorials for tools like the Installer utility - and the AmigaGuide markup syntax itself.

The only snag is, I had to have access to an Amiga (real or emulated), or install one of the various viewer programs on my laptops. Because, like many, I spend a lot of time in a web browser and occasionally want to check something on my mobile phone, this is less than convenient. Fortunately, there’s a great AmigaGuideJS online viewer which renders AmigaGuide format documents using Javascript.

I’ve started building up a collection of useful developer guides and other files in my own reference library so that I can access this documentation whenever I’m not at my Amiga or am coding in my “modern” dev environment. It’s really just for my own personal use, but I’ll be adding to it whenever I come across a useful piece of documentation so I hope it’s of some use to others as well!

And on a related note, I now have a “unified” code-base so that SetCmd now builds and runs on 68k-based OS 3.x systems as well as OS 4.x PPC systems like my X5000. I need to:

- Tidy up my code and fix all the “TODO” stuff
- Update the Installer to run on OS 3.x systems
- Update the documentation
- Build a new package and upload to Aminet/OS4Depot

Hopefully I’ll get that done in the next month or so. With the pressures of work and family life (and my other hobbies), progress has been a lot slower these last few years but I’m still really enjoying working on Amiga code and it’s great to have a fun personal project that’s there for me whenever I want to hack away at something for the sheer hell of it. I’ve learned a lot along the way and the AmigaOS is still an absolute joy to develop for.

I even brought my X5000 to the most recent Kickstart Amiga User Group BBQ/meetup and had a fun day working on the code with fellow Amigans and enjoying some classic gaming & demos - there was also a MorphOS machine there, which I think will be my next target as the codebase is slowly becoming more portable. Just got to find some room in the “retro cave” now… This stuff is addictive :)
A little while back I heard about the White House launching their version of a Drudge Report style website called White House Wire. According to Axios, a White House official said the site’s purpose was to serve as “a place for supporters of the president’s agenda to get the real news all in one place”.

So a link blog, if you will.

As a self-professed connoisseur of websites and link blogs, this got me thinking: “I wonder what kind of links they’re considering as ‘real news’ and what they’re linking to?”

So I decided to do a quick analysis using Quadratic, a programmable spreadsheet where you can write code and return values to a 2d interface of rows and columns. I wrote some JavaScript to:

- Fetch the HTML page at whitehouse.gov/wire
- Parse it with cheerio
- Select all the external links on the page
- Return a list of links and their headline text

In a few minutes I had a quick analysis of what kind of links were on the page.

This immediately sparked my curiosity to know more about the meta information around the links, like:

- If you grouped all the links together, which sites get linked to the most?
- What kind of interesting data could you pull from the headlines they’re writing, like the most frequently used words?
- What if you did this analysis, but with snapshots of the website over time (rather than just the current moment)?

So I got to building.

Quadratic today doesn’t yet have the ability for your spreadsheet to run in the background on a schedule and append data. So I had to look elsewhere for a little extra functionality. My mind went to val.town which lets you write little scripts that can 1) run on a schedule (cron), 2) store information (blobs), and 3) retrieve stored information via their API.

After a quick read of their docs, I figured out how to write a little script that’ll run once a day, scrape the site, and save the resulting HTML page in their key/value storage. From there, I was back to Quadratic writing code to talk to val.town’s API and retrieve my HTML, parse it, and turn it into good, structured data. There were some things I had to do, like:

- Fine-tune how I select all the editorial links on the page from the source HTML (I didn’t want, for example, to include external links to the White House’s social pages which appear on every page). This required a little finessing, but I eventually got a collection of links that corresponded to what I was seeing on the page.
- Parse the links and pull out the top-level domains so I could group links by domain occurrence.
- Create charts and graphs to visualize the structured data I had created.

Selfish plug: Quadratic made this all super easy, as I could program in JavaScript and use third-party tools like tldts to do the analysis, all while visualizing my output on a 2d grid in real-time which made for a super fast feedback loop!

Once I got all that done, I just had to sit back and wait for the HTML snapshots to begin accumulating! It’s been about a month and a half since I started this and I have about fifty days worth of data.

The results?
Here’s the top 10 domains that the White House Wire links to (by occurrence), from May 8 to June 24, 2025:

1. youtube.com (133)
2. foxnews.com (72)
3. thepostmillennial.com (67)
4. foxbusiness.com (66)
5. breitbart.com (64)
6. x.com (63)
7. reuters.com (51)
8. truthsocial.com (48)
9. nypost.com (47)
10. dailywire.com (36)

From the links, here’s a word cloud of the most commonly recurring words in the link headlines:

- “trump” (343)
- “president” (145)
- “us” (134)
- “big” (131)
- “bill” (127)
- “beautiful” (113)
- “trumps” (92)
- “one” (72)
- “million” (57)
- “house” (56)

The data and these graphs are all in my spreadsheet, so I can open it up whenever I want to see the latest data and re-run my script to pull the latest from val.town. In response to the new data that comes in, the spreadsheet automatically parses it, turns it into links, and updates the graphs. Cool!

If you want to check out the spreadsheet — sorry! My API key for val.town is in it (“secrets management” is on the roadmap). But I created a duplicate where I inlined the data from the API (rather than the code which dynamically pulls it) which you can check out here at your convenience.

Email · Mastodon · Bluesky
SumatraPDF is a fast, small, open-source PDF reader for Windows, written in C++. This article describes how I implemented the StrVec class for efficiently storing multiple strings.

Much ado about the strings

Strings are among the most used types in most programs. Arrays of strings are also used often. I count ~80 uses of StrVec in SumatraPDF code. This article describes how I implemented an optimized array of strings in SumatraPDF C++ code.

No STL for you

Why not use std::vector<std::string>?

In SumatraPDF I don’t use STL. I don’t use std::string, I don’t use std::vector. For me it’s a symbol of my individuality, and my belief in personal freedom.

As described here, the minimum size of std::string on 64-bit machines is 32 bytes for msvc / gcc and 24 bytes for clang, which is only enough to store short strings inline (15 chars for msvc / gcc, 22 chars for clang). For longer strings we have more overhead:

- 32 / 24 bytes for the header
- memory allocator overhead
- allocator metadata
- padding due to rounding allocations to at least 16 bytes

There’s also std::vector overhead: for fast appends (push()) std::vector implementations over-allocate space.

Longer strings are allocated at random addresses so they can be spread out in memory. That is bad for cache locality and that often causes more slowness than executing lots of instructions.

Design and implementation of StrVec

StrVec (vector of strings) solves all of the above:

- per-string overhead of only 8 bytes
- strings are laid out next to each other in memory

StrVec

High level design of StrVec:

- backing memory is allocated in singly-linked pages
- similar to std::vector, we start with a small page and increase the size of the page. This strikes a balance between speed of accessing a string at a random index and wasted space
- unlike std::vector we don’t reallocate memory (most of the time). That saves a memory copy when re-allocating backing space

Here’s all there is to StrVec:

```cpp
struct StrVec {
    StrVecPage* first = nullptr;
    int nextPageSize = 256;
    int size = 0;
};
```

size is a cached number of strings. It could be calculated by summing the size in all StrVecPages.

nextPageSize is the size of the next StrVecPage. Most array implementations increase the size of the next allocation by 1.4x - 2x. I went with the following progression: 256 bytes, 1k, 4k, 16k, 32k and I cap it at 64k. I don’t have data behind those numbers, they feel right. A bigger page wastes more space. A smaller page makes random access slower because to find the N-th string we need to traverse the linked list of StrVecPage. nextPageSize is exposed to allow the caller to optimize use. E.g. if it expects lots of strings, it could set nextPageSize to a large number.

StrVecPage

Most of the implementation is in StrVecPage. The big idea here is:

- we allocate a block of memory
- strings are allocated from the end of the memory block
- at the beginning of the memory block we build an index of strings. For each string we have: a u32 size, and a u32 offset of the string within the memory block, counting from the beginning of the block

The layout of the memory block is:

```
StrVecPage struct
{ size u32; offset u32 } []
… not yet used space
strings
```

This is StrVecPage:

```cpp
struct StrVecPage {
    struct StrVecPage* next;
    int pageSize;
    int nStrings;
    char* currEnd;
};
```

next is for the linked list of pages. Since pages can have various sizes we need to record pageSize. nStrings is the number of strings in the page and currEnd points to the end of free space within the page.

Implementing operations

Appending a string

Appending a string at the end is the most common operation.
To append a string:

- we calculate how much memory inside a page it’ll need: str::Len(string) + 1 + sizeof(u32) + sizeof(u32). +1 is for 0-termination for compatibility with C APIs that take char*, and 2 x u32 for size and offset
- if we have enough space in the last page, we add size and offset at the end of the index and append the string from the end, i.e. at currEnd - (str::Len(string) + 1)
- if there is not enough space in the last page, we allocate a new page

We can calculate how much space we have left with:

```cpp
int indexEntrySize = sizeof(u32) + sizeof(u32); // size + offset
char* indexEnd = (char*)pageStart + sizeof(StrVecPage) + nStrings * indexEntrySize;
int nBytesFree = (int)(currEnd - indexEnd);
```

Removing a string

Removing a string is easy because it doesn’t require moving memory inside StrVecPage. We do nStrings-- and move the index values of strings after the removed string. I don’t bother freeing the string memory within a page. It’s possible but complicated enough that I decided to skip it. You can compact StrVec to remove all overhead.

If you do not care about preserving the order of strings after removal, I have RemoveAtFast() which uses a trick: instead of copying the memory of all index values after the removed string, I copy a single index from the end into the slot of the string being removed.

Replacing a string or inserting in the middle

Replacing a string or inserting a string in the middle is more complicated because there might not be enough space in the page for the string. When there is enough space, it’s as simple as append. When there is not enough space, I re-use the compacting capability: I compact all existing pages into a single page with extra space for the string and some extra space as an optimization for multiple inserts.

Iteration

A random access requires traversing a linked list. I think it’s still fast because typically there aren’t many pages and we only need to look at a single nStrings value. After compaction to a single page, random access is as fast as it could ever be.

The C++ iterator is optimized for sequential access:

```cpp
struct iterator {
    const StrVec* v;
    int idx;

    // perf: cache page, idxInPage from prev iteration
    int idxInPage;
    StrVecPage* page;
};
```

We cache the current state of iteration as page and idxInPage. To advance to the next string we advance idxInPage. If it exceeds nStrings, we advance to page->next.

Optimized search

Finding a string is as optimized as it could be without a hash table. Typically to compare char* strings you need to call str::Eq(s, s2) for every string you compare it to. That is a function call and it has to touch s2 memory. That is bad for performance because it blows the cache. In StrVec I calculate the length of the string to find once and then traverse the size / offset index. Only when the sizes match do I have to compare the strings. Most of the time we just look at offset / size in L1 cache, which is very fast.

Compacting

If you know that you’ll not be adding more strings to StrVec you can compact all pages into a single page with no overhead of empty space. It also speeds up random access because we don’t have multiple pages to traverse to find the item at a given index.

Representing a nullptr char*

Even though I have a string class, I mostly use char* in SumatraPDF code. In that world an empty string and nullptr are 2 different things. To allow storing nullptr strings in StrVec (and not turning them into empty strings on the way out) I use a trick: a special u32 value kNullOffset represents nullptr.
StrVec is a string pool allocator

In C++ you have to track the lifetime of each object:

- you allocate with malloc() or new
- when you no longer need the object, you call free() or delete

However, the lifetime of allocations is often tied together. For example in SumatraPDF an opened document is represented by a class. Many allocations done to construct that object last exactly as long as the object.

The idea of a pool allocator is that instead of tracking the lifetime of each allocation, you have a single allocator. You allocate objects with the same lifetime from that allocator and you free them with a single call. StrVec is a string pool allocator: all strings stored in StrVec have the same lifetime.

Testing

In general I don’t advocate writing a lot of tests. However, low-level, tricky functionality like StrVec deserves decent test coverage to ensure basic functionality works and to exercise code for corner cases. I have 360 lines of tests for ~700 lines of implementation.

Potential tweaks and optimization

When designing and implementing data structures, tradeoffs are aplenty.

Interleaving index and strings

I’m not sure if it would be faster but instead of storing size and offset at the beginning of the page and strings at the end, we could store size / string sequentially from the beginning. It would remove the need for the u32 of offset but would make random access slower.

Varint encoding of size and offset

Most strings are short, under 127 chars. Most offsets are under 16k. If we stored size and offset as variable length integers, we would probably bring down the average per-string overhead from 8 bytes to ~4 bytes.

Implicit size

When strings are stored sequentially, size is implicit as the difference between the offset of the string and the offset of the next string. Not storing size would make insert and set operations more complicated and costly: we would have to compact and arrange strings in order every time.

Storing index separately

We could store the index of size / offset in a separate vector and use pages to only allocate string data. This would simplify insert and set operations. With the current design, if we run out of space inside a page, we have to re-arrange memory. When the offset is stored outside of the page, it can refer to any page, so insert and set could be as simple as append.

The evolution of StrVec

The design described here is the second implementation of StrVec. The one before was simply a combination of str::Str (my std::string) for allocating all strings and Vec<u32> (my std::vector) for storing the offset index. It had some flaws: appending a string could re-allocate memory within str::Str. The caller couldn’t store the returned char* pointer because it could be invalidated. As a result the API was awkward and potentially confusing: I was returning the offset of the string, so the string was str::Str.Data() + offset.

The new StrVec doesn’t re-allocate on Append, only (potentially) on InsertAt and SetAt. The most common case is append-only, which allows the caller to store the returned char* pointers. Before implementing StrVec I used Vec<char*>. Vec is my version of std::vector and Vec<char*> would just store pointers to individually allocated strings.

Cost vs. benefit

I’m a pragmatist: I want to achieve the most with the least amount of code, the least amount of time and effort. While it might seem that I’m re-implementing things willy-nilly, I’m actually very mindful of the cost of writing code. Writing software is a balance between effort and resulting quality.
One of the biggest reasons SumatraPDF is so popular is that it’s fast and small. That’s an important aspect of software quality. When you double click on a PDF file in an explorer, SumatraPDF starts instantly. You can’t say that about many similar programs and about other software in general.

Keeping SumatraPDF small and fast is an ongoing focus and it does take effort. StrVec.cpp is only 705 lines of code. It took me several days to complete. Maybe 2 days to write the code and then some time here and there to fix the bugs.

That being said, I didn’t start with this StrVec. For many years I used the obvious Vec<char*>. Then I implemented a somewhat optimized StrVec. And a few years after that I implemented this ultra-optimized version.

References

SumatraPDF is a small, fast, multi-format (PDF/eBook/Comic Book and more), open-source reader for Windows.

The implementation described here: StrVec.cpp, StrVec.h, StrVec_ut.cpp

By the time you read this, the implementation could have been improved.
Consent morality is the idea that there are no higher values or virtues than allowing consenting adults to do whatever they please. As long as they're not hurting anyone, it's all good, and whoever might have a problem with that is by definition a bigot.

This was the overriding morality I picked up as a child of the 90s. From TV, movies, music, and popular culture. Fly your freak! Whatever feels right is right! It doesn't seem like much has changed since then. What a moral dead end.

I first heard the term consent morality as part of Louise Perry's critique of the sexual revolution. That in the context of hook-up culture, situationships, and falling birthrates, we have to wrestle with the fact that the sexual revolution — and its insistence that, say, a sky-high body count mustn't be taboo — has led society to a screwy dating market in the internet age that few people are actually happy with.

But the application of consent morality that I actually find even more troubling is towards parenthood. As is widely acknowledged now, we're in a bit of a birthrate crisis all over the world. And I think consent morality can help explain part of it.

I was reminded of this when I posted a cute video of a young girl so over-the-moon excited for her dad getting off work to argue that you'd be crazy to trade that for some nebulous concept of "personal freedom". Predictably, consent morality immediately appeared in the comments: Some people just don't want children and that's TOTALLY OKAY and you're actually bad for suggesting they should!

No. It's the role of a well-functioning culture to guide people towards The Good Life. Not force, but guide. Nobody wants to be convinced by the morality police at the pointy end of a bayonet, but giving up on the whole idea of objective higher values and virtues is a nihilistic and cowardly alternative.

Humans are deeply mimetic creatures. It's imperative that we celebrate what's good, true, and beautiful, such that these ideals become collective markers for morality. Such that they guide behavior. I don't think we've done a good job at doing that with parenthood in the last thirty-plus years. In fact, I'd argue we've done just about everything to undermine the cultural appeal of the simple yet divine satisfaction of child rearing (and by extension maligned the square family unit with mom, dad, and a few kids).

Partly out of a coordinated campaign against the family unit as some sort of trad (possibly fascist!) identity marker in a long-waged culture war, but perhaps just as much out of the banal denigration of how boring and limiting it must be to carry such simple burdens as being a father or a mother in modern society.

It's no wonder that if you incessantly focus on how expensive it is, how little sleep you get, how terrifying the responsibility is, and how much stress is involved with parenthood that it doesn't seem all that appealing!

This is where Jordan Peterson does his best work. In advocating for the deeper meaning of embracing burden and responsibility. In diagnosing that much of our modern malaise does not come from carrying too much, but from carrying too little. That a myopic focus on personal freedom — the nights out, the "me time", the money saved — is a spiritual mirage: You think you want the paradise of nothing ever being asked of you, but it turns out to be the hell of nobody ever needing you.

Whatever the cause, I think part of the cure is for our culture to reembrace the virtue and the value of parenthood without reservation.
To stop centering the margins and their pathologies. To start centering the overwhelming middle where most people make for good parents, and will come to see that role as the most meaningful part they've played in their time on this planet.

But this requires giving up on consent morality as the only way to find our path to The Good Life. It involves taking a moral stance that some ways of living are better than other ways of living for the broad many. That parenthood is good, that we need more children both for the literal survival of civilization, but also for the collective motivation to guard against the bad, the false, and the ugly.

There's more to life than what you feel like doing in the moment. The worst thing in the world is not to have others ask more of you. Giving up on the total freedom of the unmoored life is a small price to pay for finding the deeper meaning in a tethered relationship with continuing a bloodline that's been drawn for hundreds of thousands of years before it came to you.

You're never going to be "ready" before you take the leap. If you keep waiting, you'll wait until the window has closed, and all you see is regret. Summon a bit of bravery, don't overthink it, and do your part for the future of the world. It's 2.1 or bust, baby!