by Maureen Holland

Don’t get me wrong. You can keep it if you like it. But you don’t need it.

A class selector can allow us to visually show or hide content for disclosure widgets, like a custom select component or dropdown navigation menu. But a disclosure widget is made of two parts:

- A button that controls show/hide behaviour
- Related content that is shown or hidden depending on the button state

<button>Click me!</button>
<p class="isOpen">Content that displays depending on a conditionally applied class.</p>

Whether the isOpen class is applied or not, a screen reader would announce this button as "Click me!, button".

One of the principles of web accessibility is to be perceivable. That means not relying exclusively on one sense, like sight, to provide information. In the above case, if I can’t see the content being shown or hidden, how do I know my button click did anything? We need a corresponding semantic update to indicate what happened. This is what the aria-expanded attribute...
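The excerpt cuts off here, but a minimal sketch of the pattern it is building toward (toggling aria-expanded in sync with the class; the script wiring is an assumption, not from the excerpt) could look like this:

<button aria-expanded="false">Click me!</button>
<p class="content">Content that displays depending on a conditionally applied class.</p>
<script>
  const button = document.querySelector("button");
  const content = document.querySelector("p");
  button.addEventListener("click", () => {
    // Toggle the visual state and mirror it in the accessibility tree
    const isOpen = content.classList.toggle("isOpen");
    button.setAttribute("aria-expanded", String(isOpen));
    // A screen reader now announces "Click me!, button, expanded" (or "collapsed")
  });
</script>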
a month ago 34 votes


More from HTMHell

datalists are more powerful than you think

by Alexis Degryse

I think we all know the <datalist> element (and if you don’t, it’s ok). It holds a list of <option> elements, offering suggested choices for its associated input field. It’s not an alternative for the <select> element: a field associated with a <datalist> can still allow any value that is not listed in the <option> elements.

Here is a basic example (the original article embeds a live demo; a minimal equivalent is a plain text input with a list attribute pointing at a <datalist>):

<label for="fruit">Favourite fruit</label>
<input type="text" list="fruits" id="fruit">
<datalist id="fruits">
<option>Apple</option>
<option>Pear</option>
</datalist>

Pretty cool, isn't it? But what happens if we combine <datalist> with less common field types, like color and date?

<label for="favorite-color">What is your favorite color?</label>
<input type="color" list="colors-list" id="favorite-color">
<datalist id="colors-list">
<option>#FF0000</option>
<option>#FFA500</option>
<option>#FFFF00</option>
<option>#008000</option>
<option>#0000FF</option>
<option>#800080</option>
<option>#FFC0CB</option>
<option>#FFFFFF</option>
<option>#000000</option>
</datalist>

Colors listed in the <datalist> are pre-selectable, but the color picker is still usable by users if they need to choose a more specific one.

<label for="event-choice" class="form-label col-form-label-lg">Choose a historical date</label>
<input type="date" list="events" id="event-choice">
<datalist id="events">
<option label="Fall of the Berlin wall">1989-11-09</option>
<option label="Maastricht Treaty">1992-02-07</option>
<option label="Brexit Referendum">2016-06-23</option>
</datalist>

Same here: some dates are pre-selectable and the datepicker is still available.

Depending on the context, having pre-defined values can speed up form filling by users. Please note that <datalist> should be seen as a progressive enhancement, for a few reasons:

- In Firefox (tested on 133), the <datalist> element is compatible only with textual field types (think text, url, tel, email, number). There is no support for color, date, and time.
- Safari (tested on 15.6) has support for color, but not for date and time.
- Some screen reader/browser combinations have issues. For example, suggestions are not announced in Safari, and it's not possible to navigate to the datalist with the down arrow key (until you type something that matches a suggestion). Refer to a11ysupport.io for more.

Find out more:
- datalist experiment by Eiji Kitamura
- Documentation on MDN
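Given the progressive-enhancement caveats above, a quick capability check from JavaScript can help decide whether to offer a fallback. A minimal sketch using the classic detection trick (an assumption, not from the article):

<script>
  // Rough capability check for <datalist> support
  const testInput = document.createElement("input");
  const supportsDatalist =
    Boolean(window.HTMLDataListElement) && "list" in testInput;
  if (!supportsDatalist) {
    // Fall back, e.g. to a <select> paired with a free-text input.
  }
  // Note: this only detects basic support; it says nothing about
  // per-type support (color/date), which varies as described above.
</script>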

a month ago 43 votes
Boost website speed with prefetching and the Speculation Rules API

by Schepp

Everybody loves fast websites, and everyone despises slow ones even more. Site speed significantly contributes to the overall user experience (UX), determining whether it feels positive or negative. To ensure the fastest possible page load times, it’s crucial to design with performance in mind. However, performance optimization is an art form in itself: while implementing straightforward techniques like file compression or proper cache headers is relatively easy, achieving deeper optimizations can quickly become complex.

But what if, instead of solely trying to accelerate the loading process, we triggered it earlier—without the user noticing?

One way to achieve this is by prefetching pages the user might navigate to next using <link rel="prefetch"> tags. These tags are typically embedded in your HTML, but they can also be generated dynamically via JavaScript, based on a heuristic of your choice. Alternatively, you can send them as an HTTP Link header if you lack access to the HTML code but can modify the server configuration. Browsers will take note of the prefetch directives and fetch the referenced pages as needed.

⚠︎ Caveat: To benefit from this prefetching technique, you must allow the browser to cache pages temporarily using the Cache-Control HTTP header. For example, Cache-Control: max-age=300 would tell the browser to cache a page for five minutes. Without such a header, the browser will discard the prefetched resource and fetch it again upon navigation, rendering the prefetch ineffective.

In addition to <link rel="prefetch">, Chromium-based browsers support <link rel="prerender">. This tag is essentially a supercharged version of <link rel="prefetch">. Known as "NoState Prefetch," it not only prefetches an HTML page but also scans it for subresources—stylesheets, JavaScript files, images, and fonts referenced via a <link rel="preload" as="font" crossorigin>—and loads them as well.

The Speculation Rules API

A relatively new addition to Chromium browsers is the Speculation Rules API, which offers enhanced prefetching and enables actual prerendering of webpages. It introduces a JSON-based syntax for precisely defining the conditions under which preprocessing should occur. Here’s a simple example of how to use it:

<script type="speculationrules">
{
  "prerender": [{
    "urls": ["next.html", "next2.html"]
  }]
}
</script>

Alternatively, you can place the JSON file on your server and reference it using an HTTP header: Speculation-Rules: "/speculationrules.json".

The above list-rule specifies that the browser should prerender the URLs next.html and next2.html so they are ready for instant navigation. The keyword prerender means more than fetching the HTML and subresources—it instructs the browser to fully render the pages in hidden tabs, ready to replace the current page instantly when needed. This makes navigation to these pages feel seamless. Prerendered pages also typically score excellent Core Web Vitals metrics: layout shifts and image loading occur during the hidden prerendering phase, and JavaScript execution happens upfront, ensuring a smooth experience when the user first sees the page.
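The same list syntax also covers plain prefetching, for pages where full prerendering would be too heavy. A minimal sketch, reusing the hypothetical URLs from above:

<script type="speculationrules">
{
  "prefetch": [{
    "urls": ["next.html", "next2.html"]
  }]
}
</script>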
Instead of listing specific URLs, the API also allows for pattern matching using where and href_matches keys:

<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/*" }
  }]
}
</script>

For more precise targeting, CSS selectors can be used with the selector_matches key:

<script type="speculationrules">
{
  "prerender": [{
    "where": { "selector_matches": ".navigation__link" }
  }]
}
</script>

These rules, called document-rules, act on link elements as soon as the user triggers a pointerdown or touchstart event, giving the referenced pages a few milliseconds’ head start before the actual navigation. If you want the preprocessing to begin even earlier, you can adjust the eagerness setting:

<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/*" },
    "eagerness": "moderate"
  }]
}
</script>

Eagerness values:

- immediate: Executes immediately.
- eager: Currently behaves like immediate but may be refined to sit between immediate and moderate.
- moderate: Executes after a 200ms hover, or on pointerdown for mobile devices.
- conservative (default): Speculates based on pointer or touch interaction.

For even greater flexibility, you can combine prerender and prefetch rules with different eagerness settings:

<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/*" },
    "eagerness": "conservative"
  }],
  "prefetch": [{
    "where": { "href_matches": "/*" },
    "eagerness": "moderate"
  }]
}
</script>

Limitations and Challenges

While the Speculation Rules API is powerful, it comes with some limitations:

- Browser support: Only Chromium-based browsers support it. Other browsers lack this capability, so treat it as a progressive enhancement.
- Bandwidth concerns: Over-aggressive settings could waste user bandwidth. Chromium imposes limits to mitigate this: a maximum of 10 prerendered and 50 prefetched pages with immediate or eager eagerness.
- Server strain: Poorly optimized servers (e.g., no caching, heavy database dependencies) may experience significant load increases due to excessive speculative requests.
- Compatibility: Prefetching won’t work if a Service Worker is active, though prerendering remains unaffected. Cross-origin prerendering requires explicit opt-in by the target page.

Despite these caveats, the Speculation Rules API offers a powerful toolset to significantly enhance perceived performance and improve UX. So go ahead and try it out!

I would like to express a big thank you to the Webperf community for always being ready to help with great tips and expertise. For this article, I would like to thank Barry Pollard, Andy Davies, and Noam Rosenthal in particular for providing very valuable background information. ❤️
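Because this is a progressive enhancement, rules can also be injected only where the API exists. A minimal sketch using the documented HTMLScriptElement.supports() check (the rule set itself is a hypothetical example):

<script>
  if (HTMLScriptElement.supports?.("speculationrules")) {
    const script = document.createElement("script");
    script.type = "speculationrules";
    // Hypothetical document-rule: prefetch same-origin links on ~200ms hover
    script.textContent = JSON.stringify({
      prefetch: [{ where: { href_matches: "/*" }, eagerness: "moderate" }]
    });
    document.head.append(script);
  }
</script>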

a month ago 44 votes
Misleading Icons: Icon-Only-Buttons and Their Impact on Screen Readers

by Alexander Muzenhardt

Introduction

Imagine you’re tasked with building a cool new feature for a product. You dive into the work with full energy, and just before the deadline, you manage to finish it. Everyone loves your work, and the feature is set to go live the next day.

<button>
<i class="icon">📆</i>
</button>

The Problem

You find some good resources explaining that there are people with disabilities who need to be considered in these cases. This is known as accessibility. For example, some individuals have motor impairments and cannot use a mouse. In this particular case, the user is visually impaired and relies on assistive technology like a screen reader, which reads aloud the content of the website or software.

The button you implemented doesn’t have any descriptive text, so only the icon is read aloud. In your case, the screen reader says, “Tear-Off Calendar button”. While it describes the appearance of the icon, it doesn’t convey the purpose of the button. This information is meaningless to the user. A button should always describe what action it will trigger when activated. That’s why we need additional descriptive text.

The Challenge

Okay, you understand the problem now and agree that it should be fixed. However, you don’t want to add visible text to the button; for design and aesthetic reasons, sighted users should only see the icon. Is there a way to keep the button “icon-only” while still providing a meaningful, descriptive text for users who rely on assistive technologies like screen readers?

The Solution

First, you need to give the button a descriptive name so that a screen reader can announce it.

<button>
<span>Open Calendar</span>
<i class="icon">📆</i>
</button>

The problem now is that the button’s name becomes visible, which goes against your design guidelines. To prevent this, additional CSS is required.

.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border-width: 0;
}

<button>
<span class="sr-only">Open Calendar</span>
<i class="icon">📆</i>
</button>

The CSS ensures that the text inside the span element is hidden from sighted users but remains readable for screen readers. This approach is so common that well-known CSS libraries like Tailwind CSS, Bootstrap, and Material-UI include such a class by default.

Although the text of the button is no longer visible, the entire content of the button will still be read aloud, including the icon — something you want to avoid. In HTML you are allowed to use specific attributes for accessibility, and in this case, the attribute aria-hidden is what you need. ARIA stands for “Accessible Rich Internet Applications” and is an initiative to make websites and software more accessible to people with disabilities. The attribute aria-hidden hides elements from screen readers so that their content isn’t read. All you need to do is add aria-hidden="true" to the icon element, which in this case is the i element.

<button>
<span class="sr-only">Open Calendar</span>
<i class="icon" aria-hidden="true">📆</i>
</button>

Alternative

An alternative is the aria-label attribute, which lets you assign a descriptive, accessible text to a button without it being visible to sighted users. The purpose of aria-label is to provide a description for interactive elements that lack a visible label or descriptive text. All you need to do is add the attribute aria-label to the button.
The attribute aria-hidden and the span element can then be deleted.

<button aria-label="Open Calendar">
<i class="icon">📆</i>
</button>

With this adjustment, the screen reader will now announce “Open Calendar,” completely ignoring the icon. This clearly communicates to the user what the button will do when clicked.

Which Option Should You Use?

At first glance, the aria-label approach might seem like the smarter choice: it requires less code, reducing the likelihood of errors, and looks cleaner overall, potentially improving code readability. However, the first option is actually the better choice. There are several reasons for this that may not be immediately obvious:

- Some browsers do not translate aria-label.
- It is difficult to copy aria-label content or otherwise manipulate it as text.
- aria-label content will not show up if styles fail to load.

These are just a few of the many reasons why you should be cautious when using the aria-label attribute. These points, along with others, are discussed in detail in the excellent article “aria-label is a Code Smell” by Eric Bailey.

The First Rule of ARIA Use

The “First Rule of ARIA Use” states: If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.

Even though the first approach also uses an ARIA attribute, it is more acceptable because aria-hidden only hides an element from screen readers. In contrast, aria-label overrides the standard HTML behavior for handling descriptive names. For this reason, following this principle, aria-hidden is preferable to aria-label in this case.

Browser compatibility

Both aria-label and aria-hidden are supported by all modern browsers and can be used without concern.

Conclusion

Ensuring accessibility in web design is more than just a nice-to-have—it’s a necessity. By implementing simple solutions like combining CSS with aria-hidden, you can create a user experience that is both aesthetically pleasing and accessible for everyone, including those who rely on screen readers. While there may be different approaches to solving accessibility challenges, the key is to be mindful of all users’ needs. A few small adjustments can make a world of difference, ensuring that your features are truly usable by everyone.

Cheers

Resources / Links

- Unicode Character “Tear-Off Calendar” (Compart)
- Unicode Website
- mdn web docs: aria-label
- mdn web docs: aria-hidden
- WAI-ARIA Standard Guidelines
- Tailwind CSS Screen Readers (sr-only)
- aria-label is a Code Smell
- First Rule of ARIA Use

a month ago 36 votes
The underrated <dl> element

by David Luhr

The Description List (<dl>) element is useful for many common visual design patterns, but is unfortunately underutilized. It was originally intended to group terms with their definitions, but it's also a great fit for other content that has a key/value structure, such as product attributes or cards that have several supporting details. Developers often mark up these patterns with overused heading or table semantics, or neglect semantics entirely. With the Description List (<dl>) element and its dedicated Description Term (<dt>) and Description Definition (<dd>) elements, we can improve the semantics and accessibility of these design patterns.

The <dl> has a unique content model:

- A parent <dl> containing one or more groups of <dt> and <dd> elements
- Each term/definition group can have multiple <dt> elements per <dd> element, or multiple definitions per term
- The <dl> can optionally accept a single layer of <div> to wrap the <dt> and <dd> elements, which can be useful for styling

Examples

An initial example would be a simple list of terms and definitions:

<dl>
<dt>Compression damping</dt>
<dd>Controls the rate a spring compresses when it experiences a force</dd>
<dt>Rebound damping</dt>
<dd>Controls the rate a spring returns to its extended length after compressing</dd>
</dl>

A common design pattern is "stat callouts", which feature mini cards of small label text above large numeric values. The <dl> is a great fit for this content:

<dl>
<div>
<dt>Founded</dt>
<dd>1988</dd>
</div>
<div>
<dt>Frames built</dt>
<dd>8,678</dd>
</div>
<div>
<dt>Race podiums</dt>
<dd>212</dd>
</div>
</dl>

And, a final example of a product listing, which has a list of technical specs:

<h2>Downhill MTB</h2>
<dl>
<div>
<dt>Front travel:</dt>
<dd>160mm</dd>
</div>
<div>
<dt>Wheel size:</dt>
<dd>27.5"</dd>
</div>
<div>
<dt>Weight:</dt>
<dd>15.2 kg</dd>
</div>
</dl>

Accessibility

With this markup in place, common screen readers will convey important semantic and navigational information. In my testing, NVDA on Windows and VoiceOver on macOS conveyed a list role, the count of list items, your position in the list, and the boundaries of the list. TalkBack on Android only conveyed the term and definition roles of the <dt> and <dd> elements, respectively.

If the design doesn't include visible labels, you can at least include them as visually hidden text for assistive technology users. But I always advocate to visually display them if possible.

Wrapping up

The <dl> is a versatile element that unfortunately doesn't get much use. In over a decade of coding, I've almost never encountered it in existing codebases. It also doesn't appear anywhere in the top HTML element lists in the Web Almanac 2024 or an Advanced Web Ranking study of over 11.3 million pages.

The next time you're building out a design, look for opportunities where the underrated Description List is a good fit. To go deeper, be sure to check out this article by Ben Myers on the <dl> element.
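Those <div> wrappers make the stat-callout cards straightforward to style. A minimal CSS sketch (the stats class and all visual values are assumptions, not from the article):

dl.stats {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  gap: 1rem;
}

dl.stats div {
  /* Each term/definition group becomes a mini card */
  padding: 1rem;
  border: 1px solid #ccc;
  border-radius: 0.5rem;
  text-align: center;
}

dl.stats dt {
  font-size: 0.875rem;
}

dl.stats dd {
  margin: 0; /* Remove the default <dd> indent */
  font-size: 2rem;
  font-weight: 700;
}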

a month ago 40 votes
Preloading fonts for web performance with link rel="preload"

by Alistair Shepherd

Web performance is incredibly important. If you were here for the advent calendar last year you may have already read many of my thoughts on the subject. If not, read Getting started with Web Performance when you’re done here!

This year I’m back for more web performance, this time focusing on my favourite HTML snippet for improving the loading performance of web fonts using preloads. This short HTML snippet, added to the head of your page, can make a substantial improvement to both perceived and measured performance.

<link rel="preload" href="/nova-sans.woff2" as="font" type="font/woff2" crossorigin="anonymous">

Above we have a link element that instructs the browser to preload the /nova-sans.woff2 font. By preloading your critical above-the-fold font we can make a huge impact by reducing potential flashes of unstyled or invisible text and layout shifts caused by font loading, like in the following video:

[Video: Recording of a page load illustrating how a font loading late can result in a jarring layout shift]

How web fonts are loaded

To explain how preloading fonts can make such an impact, let’s go through the process of how web fonts are loaded. Font files are downloaded later than you may think, due to a combination of network requests and conservative browser behaviour.

In a standard web page, there will be the main HTML document, which will include a CSS file using a link element in the head. If you’re using self-hosted custom fonts you’ll have a @font-face rule within your CSS that specifies the font name, the src, and possibly some other font-related properties. In other CSS rules you specify a font-family so elements use your custom font.

Once the browser encounters our page, it:

1. Starts streaming the HTML document, parsing it as it goes
2. Encounters the link element pointing to our CSS file
3. Starts downloading that CSS file, blocking the render of the page until it’s complete
4. Parses and applies the contents of that file
5. Finds the @font-face rule with our font URL

Okay, let’s pause here for a moment. It may make sense for step 6 to be “Starts downloading our font file”, however that’s not the case. You see, if a browser downloaded every font within a CSS file when it first encountered them, we could end up loading much more than is needed. We could be specifying fonts for multiple different weights, italics, other character sets/languages, or even multiple different fonts. If we don’t need all these fonts immediately it would be wasteful to download them all, and doing so may slow down higher-priority CSS or JS.

Instead, the browser is more conservative and simply takes note of the font declaration until it’s explicitly needed. The browser next:

6. Takes note of our @font-face declarations and their URLs for later
7. Finishes processing CSS and starts rendering the page
8. Discovers a piece of text on the page that needs our font
9. Finally starts downloading our font now it knows it’s needed!

So as we can see, there’s actually a lot that happens between our HTML file arriving in the browser and our font file being downloaded. This is ideal for lower-priority fonts, but for the main or headline font this process can make our custom font appear surprisingly late in the page load. This is what causes the behaviour we saw in the video above, where the page starts rendering but it takes some time before our custom font appears.
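For reference, the @font-face setup this walkthrough assumes might look something like the following. The family name and path are hypothetical, reusing nova-sans.woff2 from the snippet, and font-display: swap is one common choice rather than something the article prescribes:

@font-face {
  font-family: "Nova Sans";
  src: url("/nova-sans.woff2") format("woff2");
  /* swap: show fallback text immediately, swap in the custom font when ready */
  font-display: swap;
}

h1, h2, h3 {
  font-family: "Nova Sans", sans-serif;
}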
[Image: A waterfall graph showing how our custom ‘lobster.woff2’ font doesn’t start being downloaded until 2 seconds into the page load and isn’t available until after 3 seconds]

This is an intentional decision by browser makers and spec writers to ensure that pages with lots of fonts aren’t badly impacted by having to load many font files ahead of time. But that doesn’t mean it can’t be optimised!

Preloading our font with a link

<link rel="preload" href="/nova-sans.woff2" as="font" type="font/woff2" crossorigin="anonymous">

The purpose of my favourite HTML snippet is to inform the browser that this font file will be needed with high priority, before it even has knowledge of it. We’re building our page and know more about how our fonts are used — so we can provide hints to be less conservative! If we start downloading the font as soon as possible then it can be ready ahead of when the browser ‘realises’ it’s needed.

Looking back at our list above, by adding a preload we move the start of the font download from step 9 to step 2!

1. Starts streaming the HTML document, parsing it as it goes
2. Encounters our preload and starts downloading our font file in the background
3. Encounters the link element pointing to our CSS file
4. Continues as above

Taking a closer look at the snippet: we’re using a link element and rel="preload" to ask the browser to preload a file with the intention of using it early in the page load. Like a CSS file, we provide the URL with the href attribute. We use as="font" and type="font/woff2" to indicate this is a font file in woff2. For modern browsers woff2 is the only format you need, as it’s universally supported.

Finally there’s crossorigin="anonymous". This comes from the wonderfully transparent and clear world of Cross-Origin Resource Sharing. I jest of course, CORS is anything but transparent and clear! For fonts you almost always want crossorigin="anonymous" on your link element, even when the request isn’t cross-origin. Omitting this attribute would mean our preload wouldn’t be used and the file would be requested again. But why?

Browser requests can be sent either with or without credentials (cookies, etc.), and requests to the same URL with and without credentials are fundamentally different. For a preload to be used by the browser, it needs to match the type of request that the browser would have made normally. By default fonts are always requested without credentials, so we need to add crossorigin="anonymous" to ensure our preload matches a normal font request. By omitting this attribute our preload would not be used and the browser would request the font again.

If you’re ever unsure of how your preloads are working, check your browser’s devtools. In Chrome, the Network pane will show a duplicate request, and the Console will log a warning telling you a preload wasn’t used.

[Image: Screenshot showing the Chrome devtools Console pane, with warnings for an incorrect font preload]

Result and conclusion

By preloading our critical fonts we ensure the browser has the most important fonts available earlier in the page loading process. We can see this by comparing our recording and waterfall charts from earlier:

[Video: Side-by-side recording of the same page loading in different ways. ‘no-preload’ shows a large layout shift caused by the font switching and finishes loading at 4.4s. ‘preload’ doesn’t have a shift and finishes loading at 3.1s.]
[Image: Side-by-side comparison of two waterfall charts of the same site with font file lobster.woff2. For the ‘no-preload’ document the font loads after all other assets and finishes at 3s. The ‘preload’ document shows the font loading much earlier, in parallel with other files and finishing at 2s.]

As I mentioned in Getting started with Web Performance, it’s best to use preloads sparingly, so limit this to your most important font or two. Remember that it’s a balance: by preloading too many resources you run the risk of other high-priority resources such as CSS being slowed down and arriving late. I would recommend preloading just the heading font to start with. With some testing you can see if preloading your main body font is worth it also!

With care, font preloads can be a simple and impactful optimisation opportunity, and this is why it’s my favourite HTML snippet! This is a great step to improving font loading, and there are plenty of other web font optimisations to try also!

a month ago 30 votes

More in programming

Making inventory spreadsheets for my LEGO sets

One of my recent home organisation projects has been sorting out my LEGO collection. I have a bunch of sets which are mixed together in one messy box, and I’m trying to separate bricks back into distinct sets. My collection is nowhere near large enough to be worth sorting by individual parts, and I hope that breaking down by set will make it all easier to manage and store.

I’ve been creating spreadsheets to track the parts in each set, and count them out as I find them. I briefly hinted at this in my post about looking at images in spreadsheets, where I included a screenshot of one of my inventory spreadsheets.

These spreadsheets have been invaluable – I can see exactly what pieces I need, and what pieces I’m missing. Without them, I wouldn’t even attempt this. I’m about to pause this cleanup and work on some other things, but first I wanted to write some notes on how I’m creating these spreadsheets – I’ll probably want them again in the future.

Getting a list of parts in a set

There are various ways to get a list of parts in a LEGO set:

- Newer LEGO sets include a list of parts at the back of the printed instructions
- You can get a list from LEGO-owned websites like LEGO.com or BrickLink
- There are community-maintained databases on sites like Rebrickable

I decided to use the community-maintained lists from Rebrickable – they seem very accurate in my experience, and you can download daily snapshots of their entire catalog database. The latter is very powerful, because now I can load the database into my tools of choice, and slice and dice the data in fun and interesting ways. Downloading their entire database is less than 15MB – which is to say, two-thirds the size of just opening the LEGO.com homepage. Bargain!

Putting Rebrickable data in a SQLite database

My tool of choice is SQLite. I slept on this for years, but I’ve come to realise just how powerful and useful it can be. A big part of what made me realise the power of SQLite is seeing Simon Willison’s work with datasette, and some of the cool things he’s built on top of SQLite. Simon also publishes a command-line tool, sqlite-utils, for manipulating SQLite databases, and that’s what I’ve been using to create my spreadsheets.

Here’s my process:

1. Create a Python virtual environment, and install sqlite-utils:

python3 -m venv .venv
source .venv/bin/activate
pip install sqlite-utils

At time of writing, the latest version of sqlite-utils is 3.38.

2. Download the Rebrickable database tables I care about, uncompress them, and load them into a SQLite database:

curl -O 'https://cdn.rebrickable.com/media/downloads/colors.csv.gz'
curl -O 'https://cdn.rebrickable.com/media/downloads/parts.csv.gz'
curl -O 'https://cdn.rebrickable.com/media/downloads/inventories.csv.gz'
curl -O 'https://cdn.rebrickable.com/media/downloads/inventory_parts.csv.gz'

gunzip colors.csv.gz
gunzip parts.csv.gz
gunzip inventories.csv.gz
gunzip inventory_parts.csv.gz

sqlite-utils insert lego_parts.db colors colors.csv --csv
sqlite-utils insert lego_parts.db parts parts.csv --csv
sqlite-utils insert lego_parts.db inventories inventories.csv --csv
sqlite-utils insert lego_parts.db inventory_parts inventory_parts.csv --csv

The inventory_parts table describes how many of each part there are in a set: “Set S contains 10 of part P in colour C.” The parts and colors tables contain detailed information about each part and color. The inventories table matches the official LEGO set numbers to the inventory IDs in Rebrickable’s database.
“The set sold by LEGO as 6616-1 has ID 4159 in the inventory table.”

3. Run a SQLite query that gets information from the different tables to tell me about all the parts in a particular set:

SELECT ip.img_url, ip.quantity, ip.is_spare, c.name AS color, p.name, ip.part_num
FROM inventory_parts ip
JOIN inventories i ON ip.inventory_id = i.id
JOIN parts p ON ip.part_num = p.part_num
JOIN colors c ON ip.color_id = c.id
WHERE i.set_num = '6616-1';

4. Or use sqlite-utils to export the query results as a spreadsheet:

sqlite-utils lego_parts.db "
  SELECT ip.img_url, ip.quantity, ip.is_spare, c.name AS color, p.name, ip.part_num
  FROM inventory_parts ip
  JOIN inventories i ON ip.inventory_id = i.id
  JOIN parts p ON ip.part_num = p.part_num
  JOIN colors c ON ip.color_id = c.id
  WHERE i.set_num = '6616-1';" --csv > 6616-1.csv

Here are the first few lines of that CSV:

img_url,quantity,is_spare,color,name,part_num
https://cdn.rebrickable.com/media/parts/photos/9999/23064-9999-e6da02af-9e23-44cd-a475-16f30db9c527.jpg,1,False,[No Color/Any Color],Sticker Sheet for Set 6616-1,23064
https://cdn.rebrickable.com/media/parts/elements/4523412.jpg,2,False,White,Flag 2 x 2 Square [Thin Clips] with Chequered Print,2335pr0019
https://cdn.rebrickable.com/media/parts/photos/15/2335px13-15-33ae3ea3-9921-45fc-b7f0-0cd40203f749.jpg,2,False,White,Flag 2 x 2 Square [Thin Clips] with Octan Logo Print,2335pr0024
https://cdn.rebrickable.com/media/parts/elements/4141999.jpg,4,False,Green,Tile Special 1 x 2 Grille with Bottom Groove,2412b
https://cdn.rebrickable.com/media/parts/elements/4125254.jpg,4,False,Orange,Tile Special 1 x 2 Grille with Bottom Groove,2412b

5. Import that spreadsheet into Google Sheets, then add a couple of columns:

- I add a column “image” where every cell has the formula =IMAGE(…) that references the image URL. This gives me an inline image, so I know what that brick looks like.
- I add a new column “quantity I have” where every cell starts at 0, which is where I’ll count bricks as I find them.
- I add a new column “remaining to find” which counts the difference between “quantity” and “quantity I have”. Then I can highlight or filter for rows where this is non-zero, so I can see the bricks I still need to find.

If you’re interested, here’s an example spreadsheet that has a clean inventory.

It took me a while to refine the SQL query, but now I have it, I can create a new spreadsheet in less than a minute. One of the things I’ve realised over the last year or so is how powerful “get the data into SQLite” can be – it opens the door to all sorts of interesting queries and questions, with a relatively small amount of code required. I’m sure I could write a custom script just for this task, but it wouldn’t be as concise or flexible.

[If the formatting of this post looks odd in your feed reader, visit the original article]
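As a rough illustration of those added columns: assuming the CSV lands in columns A to F (img_url in A, quantity in B) and the new columns go in G to I, the formulas might be (my layout assumption, not the author's exact sheet):

G2: =IMAGE(A2)    (inline preview of the part photo in img_url)
H2: 0             ("quantity I have", incremented by hand while sorting)
I2: =B2-H2        ("remaining to find"; highlight or filter rows where this is non-zero)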

19 hours ago 3 votes
Giving Junior Engineers Control Of A Six Trillion Dollar System Is Nuts

For some purpose, the DOGE people are burrowing their way into all US Federal Systems. Their complete control over the Treasury Department is entirely insane. Unless you intend to destroy everything, making arbitrary changes to complex computer systems will result in destruction, even if that was not your intention. No

4 hours ago 3 votes
Stanislav Petrov

A lieutenant colonel in the Soviet Air Defense Forces prevented the end of human civilization on September 26th, 1983. His name was Stanislav Petrov.

Protocol dictated that the Soviet Union would retaliate against any nuclear strikes sent by the United States. This was a policy of mutually assured destruction, a doctrine that compels a horrifying logical conclusion. The second- and third-stage effects of this type of exchange would be even more catastrophic: allies for each side would likely be pulled into the conflict, and the resulting nuclear winter was projected to lead to 2 billion deaths due to starvation. This is to say nothing of those who would have been unfortunate enough to have survived.

Petrov’s job was to monitor Oko, the computerized warning system built to centralize Soviet satellite communications. Around midnight, he received a report that one of the satellites had detected the infrared signature of a single launch of a United States ICBM. While Petrov was deciding what to do about this report, the system detected four more incoming missile launches. He had minutes to make a choice about what to do. It is impossible to imagine the amount of pressure placed on him at this moment.

Source: Stanislav Petrov, Soviet officer credited with averting nuclear war, dies at 77 by Schwartzreport.

Petrov lived in a world of deterministic systems. The technologies that powered these warning systems have guaranteed outputs, provided the proper inputs are supplied. However, deterministic does not mean infallible. The only reason you are alive and reading this is because Petrov understood that the systems he observed were capable of error. He was suspicious of what he was seeing reported, and chose not to escalate a retaliatory strike. There were two factors guiding his decision:

- A surprise attack would most likely have used hundreds of missiles, not just five.
- The allegedly foolproof Oko system was new and prone to errors.

An error in a deterministic system can still lead to expected outputs being generated. For the Oko system, infrared reflections of the sun shining off the tops of clouds created a false positive that was interpreted as the detection of a nuclear launch event.

Source: US-K History by Kosmonavtika.

The concept of erroneous truth is a deep thing to internalize, as computerized systems are presented as omniscient, indefective, and absolute. Petrov’s rewards for this action were reprimands, reassignment, and denial of promotion. This was likely for embarrassing his superiors by the politically inconvenient shedding of light on issues with the Oko system. A coerced early retirement caused a nervous breakdown, likely from having to grapple with the weight of his decision. It was only in the 1990s—after the fall of the Soviet Union—that his actions were discovered internationally and celebrated.

Stanislav Petrov was given the recognition that he deserved, including being honored by the United Nations, awarded the Dresden Peace Prize, featured in a documentary, and being able to visit a Minuteman missile silo in the United States.

On January 31st, 2025, OpenAI struck a deal with the United States government to use its AI product for nuclear weapon security. It is unclear how this technology will be used, where, and to what extent. It is also unclear how OpenAI’s systems function, as they are black-box technologies. What is known is that LLM-generated responses—the product OpenAI sells—are non-deterministic.
Non-deterministic systems don’t have guaranteed outputs from their inputs. In addition, LLM-based technology hallucinates—it invents content with no self-knowledge that it is a falsehood. Non-deterministic systems that are computerized also have the perception of being authoritative, the same as their deterministic peers. It is not a question of how the output is generated; it is one of the output being perceived to come from a machine.

These are terrifying things to know. Consider not only the systems this technology is being applied to, but also the thoughtless speed of their integration. Then consider how we’ve historically been conditioned and rewarded to interpret the output of these systems, and then how we perceive and treat skeptics. We don’t live in a purely deterministic world of technology anymore.

Stanislav Petrov died on September 18th, 2017, before this change occurred. I would be incredibly curious to know his thoughts about our current reality, as well as the increasing abdication of human monitoring of automated systems in favor of notably biased, supposed “AI solutions.”

In acknowledging Petrov’s skepticism in a time of mania and political instability, we acknowledge a quote from former U.S. Secretary of Defense William J. Perry’s memoir about the incident:

[Oko’s false positives] illustrates the immense danger of placing our fate in the hands of automated systems that are susceptible to failure and human beings who are fallible.

yesterday 7 votes
01 · A spreadsheet for exploring scenarios

In our Ambsheets project, we are exploring a small extension to the familiar spreadsheet: what if a single spreadsheet cell could hold multiple values at once?

yesterday 2 votes
Recently

I am not going to repeat the news. But man, things are really, really bad and getting worse in America. It’s all so unendingly stupid and evil. The tech industry is being horrible, too. Wishing strength to the people who are much more exposed to the chaos than I am.

Reading

A Confederacy of Dunces was such a perfect novel. It was pure escapism, over-the-top comedy, and such an unusual artifact, one that was sadly only appreciated posthumously.

Very earnestly I believe that, despite greater access to power and resources, the box labeled “socially acceptable ways to be a man” is much smaller than the box labeled “socially acceptable ways to be a woman.” This article on the distinction between patriarchy and men was an interesting read. With the whole… politics out there, it’s easy to go off the rails with any discussion about men and women and whether either have it easy or hard. The same author wrote this good article about declining male enrollment in college. I think both are worth a read. Whenever I read this kind of article, I’m reminded of how limited and mostly fortunate my own experience is. There’s a big difference, I think, in how vigorously you have to perform your gender in some red state where everyone owns a pickup truck, versus a major city where the roles are a little more fluid. Plus, I’ve been extremely fortunate to have a lot of friends and genuine open conversations about feelings with other men. I wish that was the norm!

On Having a Maximum Wealth was right up my alley. I’m reading another one of the new-French-economist books right now, and am still fascinated by the prospect of wealth taxes.

My friend David has started a local newsletter for Richmond, Virginia, and written a good piece about public surveillance.

Construction Physics is consistently great, and their investigation of why skyscrapers are all glass boxes is no exception.

Watching

David Lynch was so great. We watched his film Lost Highway a few days after he passed, and it was even better than I had remembered it.

Norm Macdonald’s extremely long jokes on late-night talk shows have been getting me through the days.

Listening

This song by The Hard Quartet – a supergroup of Emmett Kelly, Stephen Malkmus (Pavement), Matt Sweeney and Jim White. It’s such a loving, tender bit of nonsense, very golden-age Pavement. They also have this nice chill song.

I came across this SML album via Hearing Things, which has been highlighting a lot of good music: Small Medium Large by SML.

It’s a pretty good time for these independent high-quality art websites. Colossal has done the same for the art world and highlights good new art. I really want to make it out to see the Nick Cave (not the musician) art show while it’s in New York.

yesterday 1 votes