Almost, but not quite, entirely unlike...

by Léonie Watson

Can you give me the HTML for an accessible button please?

It was a simple enough question. Or it would have been, had I been asking another person. As it was, I was asking ChatGPT, and so of course there was nothing simple about it.

For the briefest of moments, I caught myself thinking even ChatGPT couldn't get this wrong. The <button> element is probably the most talked-about HTML element in accessibility. Literally all you have to do is use it, give it an accessible name, attach some JavaScript to get it to do something if it isn't part of a form, and you don't even need to think about keyboard accessibility because the browser takes care of all that for you.

Then reality kicked me in the shins and told me to snap out of it, and, as if to prove reality's point, ChatGPT burbled: "Absolutely! Here's an accessible button in its simplest form, following best practices to ensure compatibility with screen readers and keyboard..."
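The simple recipe described above (native element, visible accessible name, a bit of JavaScript) can be sketched in a few lines; the id and handler are illustrative, not from the article:

```html
<button type="button" id="menu-toggle">Open menu</button>
<script>
  // The visible text "Open menu" doubles as the accessible name; the
  // role, focusability, and Enter/Space activation come from the
  // browser for free. The handler is an illustrative stand-in.
  document.getElementById('menu-toggle')
    .addEventListener('click', () => console.log('menu opened'));
</script>
```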


More from HTMHell

datalists are more powerful than you think

by Alexis Degryse

I think we all know the <datalist> element (and if you don't, it's ok). It holds a list of <option> elements, offering suggested choices for its associated input field. It's not an alternative to the <select> element: a field associated with a <datalist> can still accept any value that is not listed in the <option> elements. A basic example is already pretty cool (a sketch of one appears at the end of this article). But what happens if we combine <datalist> with less common field types, like color and date?

```html
<label for="favorite-color">What is your favorite color?</label>
<input type="color" list="colors-list" id="favorite-color">
<datalist id="colors-list">
  <option>#FF0000</option>
  <option>#FFA500</option>
  <option>#FFFF00</option>
  <option>#008000</option>
  <option>#0000FF</option>
  <option>#800080</option>
  <option>#FFC0CB</option>
  <option>#FFFFFF</option>
  <option>#000000</option>
</datalist>
```

Colors listed in the <datalist> are pre-selectable, but the color picker is still usable by users if they need to choose a more specific one.

```html
<label for="event-choice" class="form-label col-form-label-lg">Choose a historical date</label>
<input type="date" list="events" id="event-choice">
<datalist id="events">
  <option label="Fall of the Berlin wall">1989-11-09</option>
  <option label="Maastricht Treaty">1992-02-07</option>
  <option label="Brexit Referendum">2016-06-23</option>
</datalist>
```

Same here: some dates are pre-selectable and the datepicker is still available. Depending on the context, having pre-defined values can speed up form filling by users.

Please note that <datalist> should be seen as a progressive enhancement, for a few reasons:

- In Firefox (tested on 133), the <datalist> element is compatible only with textual field types (think text, url, tel, email, number). There is no support for color, date, or time.
- Safari (tested on 15.6) has support for color, but not for date and time.
- Some screen reader/browser combinations have issues. For example, suggestions are not announced in Safari, and it's not possible to navigate to the datalist with the down arrow key (until you type something that matches a suggestion). Refer to a11ysupport.io for more.

Find out more:
- datalist experiment by Eiji Kitamura
- Documentation on MDN
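Here is the sketch of the basic example mentioned at the start: a plain text input paired with suggestions. The ids and option values are illustrative:

```html
<label for="browser">Choose your browser:</label>
<input type="text" list="browsers" id="browser">
<datalist id="browsers">
  <option>Firefox</option>
  <option>Chrome</option>
  <option>Safari</option>
  <option>Edge</option>
</datalist>
```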

Boost website speed with prefetching and the Speculation Rules API

by Schepp

Everybody loves fast websites, and everyone despises slow ones even more. Site speed significantly contributes to the overall user experience (UX), determining whether it feels positive or negative. To ensure the fastest possible page load times, it's crucial to design with performance in mind. However, performance optimization is an art form in itself. While implementing straightforward techniques like file compression or proper cache headers is relatively easy, achieving deeper optimizations can quickly become complex.

But what if, instead of solely trying to accelerate the loading process, we triggered it earlier, without the user noticing?

One way to achieve this is by prefetching pages the user might navigate to next using <link rel="prefetch"> tags. These tags are typically embedded in your HTML, but they can also be generated dynamically via JavaScript, based on a heuristic of your choice. Alternatively, you can send them as an HTTP Link header if you lack access to the HTML code but can modify the server configuration. Browsers will take note of the prefetch directives and fetch the referenced pages as needed.

⚠︎ Caveat: To benefit from this prefetching technique, you must allow the browser to cache pages temporarily using the Cache-Control HTTP header. For example, Cache-Control: max-age=300 would tell the browser to cache a page for five minutes. Without such a header, the browser will discard the prefetched resource and fetch it again upon navigation, rendering the prefetch ineffective.

In addition to <link rel="prefetch">, Chromium-based browsers support <link rel="prerender">. This tag is essentially a supercharged version of <link rel="prefetch">. Known as "NoState Prefetch", it not only prefetches an HTML page but also scans it for subresources (stylesheets, JavaScript files, images, and fonts referenced via a <link rel="preload" as="font" crossorigin>), loading them as well.

The Speculation Rules API

A relatively new addition to Chromium browsers is the Speculation Rules API, which offers enhanced prefetching and enables actual prerendering of webpages. It introduces a JSON-based syntax for precisely defining the conditions under which preprocessing should occur. Here's a simple example of how to use it:

```html
<script type="speculationrules">
{
  "prerender": [{
    "urls": ["next.html", "next2.html"]
  }]
}
</script>
```

Alternatively, you can place the JSON file on your server and reference it using an HTTP header: Speculation-Rules: "/speculationrules.json".

The above list rule specifies that the browser should prerender the URLs next.html and next2.html so they are ready for instant navigation. The keyword prerender means more than fetching the HTML and subresources: it instructs the browser to fully render the pages in hidden tabs, ready to replace the current page instantly when needed. This makes navigation to these pages feel seamless. Prerendered pages also typically score excellent Core Web Vitals metrics. Layout shifts and image loading occur during the hidden prerendering phase, and JavaScript execution happens upfront, ensuring a smooth experience when the user first sees the page.
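The rules don't have to be inlined in the markup, either. As a minimal sketch (reusing the illustrative URLs from the example above), a rule set can be injected from JavaScript behind a feature check, which also keeps it a progressive enhancement in browsers without the API:

```html
<script>
  // Sketch: inject a speculation rule set only where the API exists.
  if (HTMLScriptElement.supports?.('speculationrules')) {
    const rules = document.createElement('script');
    rules.type = 'speculationrules';
    rules.textContent = JSON.stringify({
      prerender: [{ urls: ['next.html', 'next2.html'] }]
    });
    document.head.append(rules);
  }
</script>
```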
Instead of listing specific URLs, the API also allows for pattern matching using where and href_matches keys:

```html
<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/*" }
  }]
}
</script>
```

For more precise targeting, CSS selectors can be used with the selector_matches key:

```html
<script type="speculationrules">
{
  "prerender": [{
    "where": { "selector_matches": ".navigation__link" }
  }]
}
</script>
```

These rules, called document rules, act on link elements as soon as the user triggers a pointerdown or touchstart event, giving the referenced pages a few milliseconds' head start before the actual navigation.

If you want the preprocessing to begin even earlier, you can adjust the eagerness setting:

```html
<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/*" },
    "eagerness": "moderate"
  }]
}
</script>
```

Eagerness values:

- immediate: Executes immediately.
- eager: Currently behaves like immediate but may be refined to sit between immediate and moderate.
- moderate: Executes after a 200ms hover, or on pointerdown for mobile devices.
- conservative (default): Speculates based on pointer or touch interaction.

For even greater flexibility, you can combine prerender and prefetch rules with different eagerness settings:

```html
<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/*" },
    "eagerness": "conservative"
  }],
  "prefetch": [{
    "where": { "href_matches": "/*" },
    "eagerness": "moderate"
  }]
}
</script>
```

Limitations and Challenges

While the Speculation Rules API is powerful, it comes with some limitations:

- Browser support: Only Chromium-based browsers support it. Other browsers lack this capability, so treat it as a progressive enhancement.
- Bandwidth concerns: Over-aggressive settings could waste user bandwidth. Chromium imposes limits to mitigate this: a maximum of 10 prerendered and 50 prefetched pages with immediate or eager eagerness.
- Server strain: Poorly optimized servers (e.g., no caching, heavy database dependencies) may experience significant load increases due to excessive speculative requests.
- Compatibility: Prefetching won't work if a Service Worker is active, though prerendering remains unaffected. Cross-origin prerendering requires explicit opt-in by the target page.

Despite these caveats, the Speculation Rules API offers a powerful toolset to significantly enhance perceived performance and improve UX. So go ahead and try them out!

I would like to express a big thank you to the Webperf community for always being ready to help with great tips and expertise. For this article, I would like to thank Barry Pollard, Andy Davies, and Noam Rosenthal in particular for providing very valuable background information. ❤️

Misleading Icons: Icon-Only Buttons and Their Impact on Screen Readers

by Alexander Muzenhardt

Introduction

Imagine you're tasked with building a cool new feature for a product. You dive into the work with full energy, and just before the deadline, you manage to finish it. Everyone loves your work, and the feature is set to go live the next day.

```html
<button>
  <i class="icon">📆</i>
</button>
```

The Problem

You find some good resources explaining that there are people with disabilities who need to be considered in these cases. This is known as accessibility. For example, some individuals have motor impairments and cannot use a mouse. In this particular case, the user is visually impaired and relies on assistive technology like a screen reader, which reads aloud the content of the website or software. The button you implemented doesn't have any descriptive text, so only the icon is read aloud. In your case, the screen reader says, "Tear-Off Calendar button". While it describes the appearance of the icon, it doesn't convey the purpose of the button. This information is meaningless to the user. A button should always describe what action it will trigger when activated. That's why we need additional descriptive text.

The Challenge

Okay, you understand the problem now and agree that it should be fixed. However, you don't want to add visible text to the button. For design and aesthetic reasons, sighted users should only see the icon. Is there a way to keep the button "icon-only" while still providing meaningful, descriptive text for users who rely on assistive technologies like screen readers?

The Solution

First, you need to give the button a descriptive name so that a screen reader can announce it.

```html
<button>
  <span>Open Calendar</span>
  <i class="icon">📆</i>
</button>
```

The problem now is that the button's name becomes visible, which goes against your design guidelines. To prevent this, additional CSS is required.

```css
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border-width: 0;
}
```

```html
<button>
  <span class="sr-only">Open Calendar</span>
  <i class="icon">📆</i>
</button>
```

The CSS ensures that the text inside the span element is hidden from sighted users but remains readable for screen readers. This approach is so common that well-known CSS libraries like Tailwind CSS, Bootstrap, and Material-UI include such a class by default.

Although the text of the button is no longer visible, the entire content of the button will be read aloud, including the icon, which is something you want to avoid. In HTML you are allowed to use specific attributes for accessibility, and in this case the attribute aria-hidden is what you need. ARIA stands for "Accessible Rich Internet Applications" and is an initiative to make websites and software more accessible to people with disabilities. The attribute aria-hidden hides elements from screen readers so that their content isn't read. All you need to do is add the attribute aria-hidden with the value "true" to the icon element, which in this case is the i element.

```html
<button>
  <span class="sr-only">Open Calendar</span>
  <i class="icon" aria-hidden="true">📆</i>
</button>
```

Alternative

An alternative is the attribute aria-label, with which you can assign a descriptive, accessible text to a button without it being visible to sighted users. The purpose of aria-label is to provide a description for interactive elements that lack a visible label or descriptive text. All you need to do is add the attribute aria-label to the button.
The attribute aria-hidden and the span element can be deleted.

```html
<button aria-label="Open Calendar">
  <i class="icon">📆</i>
</button>
```

With this adjustment, the screen reader will now announce "Open Calendar", completely ignoring the icon. This clearly communicates to the user what the button will do when clicked.

Which Option Should You Use?

At first glance, the aria-label approach might seem like the smarter choice. It requires less code, reducing the likelihood of errors, and looks cleaner overall, potentially improving code readability. However, the first option is actually the better choice. There are several reasons for this that may not be immediately obvious:

- Some browsers do not translate aria-label
- It is difficult to copy aria-label content or otherwise manipulate it as text
- aria-label content will not show up if styles fail to load

These are just a few of the many reasons why you should be cautious when using the aria-label attribute. These points, along with others, are discussed in detail in the excellent article "aria-label is a Code Smell" by Eric Bailey.

The First Rule of ARIA Use

The "First Rule of ARIA Use" states:

If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.

Even though the first approach also uses an ARIA attribute, it is more acceptable because aria-hidden only hides an element from screen readers. In contrast, aria-label overrides the standard HTML behavior for handling descriptive names. For this reason, following this principle, aria-hidden is preferable to aria-label in this case.

Browser compatibility

Both aria-label and aria-hidden are supported by all modern browsers and can be used without concern.

Conclusion

Ensuring accessibility in web design is more than just a nice-to-have; it's a necessity. By implementing simple solutions like combining CSS with aria-hidden, you can create a user experience that is both aesthetically pleasing and accessible for everyone, including those who rely on screen readers. While there may be different approaches to solving accessibility challenges, the key is to be mindful of all users' needs. A few small adjustments can make a world of difference, ensuring that your features are truly usable by everyone.

Cheers

Resources / Links

- Unicode Character "Tear-Off Calendar" on the Unicode website
- mdn web docs: aria-label
- mdn web docs: aria-hidden
- WAI-ARIA Standard Guidelines
- Tailwind CSS Screen Readers (sr-only)
- aria-label is a Code Smell
- First Rule of ARIA Use

The underrated <dl> element

by David Luhr

The Description List (<dl>) element is useful for many common visual design patterns, but is unfortunately underutilized. It was originally intended to group terms with their definitions, but it's also a great fit for other content that has a key/value structure, such as product attributes or cards that have several supporting details. Developers often mark up these patterns with overused heading or table semantics, or neglect semantics entirely. With the Description List (<dl>) element and its dedicated Description Term (<dt>) and Description Definition (<dd>) elements, we can improve the semantics and accessibility of these design patterns.

The <dl> has a unique content model:

- A parent <dl> containing one or more groups of <dt> and <dd> elements
- Each term/definition group can have multiple <dt> (Description Term) elements per <dd> (Description Definition) element, or multiple definitions per term
- The <dl> can optionally accept a single layer of <div> to wrap the <dt> and <dd> elements, which can be useful for styling

Examples

An initial example would be a simple list of terms and definitions:

```html
<dl>
  <dt>Compression damping</dt>
  <dd>Controls the rate a spring compresses when it experiences a force</dd>
  <dt>Rebound damping</dt>
  <dd>Controls the rate a spring returns to its extended length after compressing</dd>
</dl>
```

A common design pattern is "stat callouts", which feature mini cards of small label text above large numeric values. The <dl> is a great fit for this content (a styling sketch follows at the end of this article):

```html
<dl>
  <div>
    <dt>Founded</dt>
    <dd>1988</dd>
  </div>
  <div>
    <dt>Frames built</dt>
    <dd>8,678</dd>
  </div>
  <div>
    <dt>Race podiums</dt>
    <dd>212</dd>
  </div>
</dl>
```

And, a final example of a product listing, which has a list of technical specs:

```html
<h2>Downhill MTB</h2>
<dl>
  <div>
    <dt>Front travel:</dt>
    <dd>160mm</dd>
  </div>
  <div>
    <dt>Wheel size:</dt>
    <dd>27.5"</dd>
  </div>
  <div>
    <dt>Weight:</dt>
    <dd>15.2 kg</dd>
  </div>
</dl>
```

Accessibility

With this markup in place, common screen readers will convey important semantic and navigational information. In my testing, NVDA on Windows and VoiceOver on macOS conveyed a list role, the count of list items, your position in the list, and the boundaries of the list. TalkBack on Android only conveyed the term and definition roles of the <dt> and <dd> elements, respectively.

If the design doesn't include visible labels, you can at least include them as visually hidden text for assistive technology users. But I always advocate to visually display them if possible.

Wrapping up

The <dl> is a versatile element that unfortunately doesn't get much use. In over a decade of coding, I've almost never encountered it in existing codebases. It also doesn't appear anywhere in the top HTML elements lists in the Web Almanac 2024 or an Advanced Web Ranking study of over 11.3 million pages. The next time you're building out a design, look for opportunities where the underrated Description List is a good fit.

To go deeper, be sure to check out this article by Ben Myers on the <dl> element.
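Here is the styling sketch promised above for the stat callouts, using the optional <div> wrappers as styling hooks. The stats class and all sizes are illustrative, not from the article:

```html
<style>
  /* Sketch: render each term/definition group as a small card in a row. */
  .stats { display: flex; gap: 1rem; margin: 0; }
  .stats > div {
    border: 1px solid #ccc;
    border-radius: 0.5rem;
    padding: 1rem;
    text-align: center;
  }
  .stats dt { font-size: 0.875rem; }
  .stats dd { font-size: 2rem; font-weight: bold; margin: 0; }
</style>
<dl class="stats">
  <div>
    <dt>Founded</dt>
    <dd>1988</dd>
  </div>
  <div>
    <dt>Frames built</dt>
    <dd>8,678</dd>
  </div>
  <div>
    <dt>Race podiums</dt>
    <dd>212</dd>
  </div>
</dl>
```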

Preloading fonts for web performance with link rel="preload"

by Alistair Shepherd

Web performance is incredibly important. If you were here for the advent calendar last year you may have already read many of my thoughts on the subject. If not, read Getting started with Web Performance when you're done here!

This year I'm back for more web performance, this time focusing on my favourite HTML snippet for improving the loading performance of web fonts using preloads. This short HTML snippet, added to the head of your page, can make a substantial improvement to both perceived and measured performance.

```html
<link
  rel="preload"
  href="/nova-sans.woff2"
  as="font"
  type="font/woff2"
  crossorigin="anonymous"
>
```

Above we have a link element that instructs the browser to preload the /nova-sans.woff2 font. By preloading your critical above-the-fold font we can make a huge impact by reducing potential flashes of unstyled or invisible text and layout shifts caused by font loading, like here in the following video:

(Video: recording of a page load illustrating how a font loading late can result in a jarring layout shift.)

How web fonts are loaded

To explain how preloading fonts can make such an impact, let's go through the process of how web fonts are loaded. Font files are downloaded later than you may think, due to a combination of network requests and conservative browser behaviour.

In a standard web page, there will be the main HTML document which will include a CSS file using a link element in the head. If you're using self-hosted custom fonts you'll have a @font-face rule within your CSS that specifies the font name, the src, and possibly some other font-related properties (a sketch of such a rule appears at the end of this article). In other CSS rules you specify a font-family so elements use your custom font.

Once our browser encounters our page it:

1. Starts streaming the HTML document, parsing it as it goes
2. Encounters the link element pointing to our CSS file
3. Starts downloading that CSS file, blocking the render of the page until it's complete
4. Parses and applies the contents of that file
5. Finds the @font-face rule with our font URL

Okay, let's pause here for a moment. It may make sense for step 6 to be "Starts downloading our font file", however that's not the case. You see, if a browser downloaded every font within a CSS file when it first encountered them, we could end up loading much more than is needed. We could be specifying fonts for multiple different weights, italics, other character sets/languages, or even multiple different fonts. If we don't need all these fonts immediately it would be wasteful to download them all, and doing so may slow down higher-priority CSS or JS.

Instead, the browser is more conservative and simply takes note of the font declaration until it's explicitly needed. The browser next:

6. Takes a note of our @font-face declarations and their URLs for later
7. Finishes processing CSS and starts rendering the page
8. Discovers a piece of text on the page that needs our font
9. Finally starts downloading our font now it knows it's needed!

So as we can see, there's actually a lot that happens between our HTML file arriving in the browser and our font file being downloaded. This is ideal for lower-priority fonts, but for the main or headline font this process can make our custom font appear surprisingly late in the page load. This is what causes the behaviour we saw in the video above, where the page starts rendering but it takes some time before our custom font appears.
(Waterfall graph: our custom lobster.woff2 font doesn't start being downloaded until 2 seconds into the page load and isn't available until after 3 seconds.)

This is an intentional decision by browser makers and spec writers to ensure that pages with lots of fonts aren't badly impacted by having to load many font files ahead of time. But that doesn't mean it can't be optimised!

Preloading our font with a link

```html
<link
  rel="preload"
  href="/nova-sans.woff2"
  as="font"
  type="font/woff2"
  crossorigin="anonymous"
>
```

The purpose of my favourite HTML snippet is to inform the browser that this font file will be needed with high priority, before it even has knowledge of it. We're building our page and know more about how our fonts are used, so we can provide hints to be less conservative! If we start downloading the font as soon as possible then it can be ready ahead of when the browser 'realises' it's needed.

Looking back at our list above, by adding a preload we move the start of the font download from step 9 to step 2!

1. Starts streaming the HTML document, parsing it as it goes
2. Encounters our preload and starts downloading our font file in the background
3. Encounters the link element pointing to our CSS file
4. Continues as above

Taking a closer look at the snippet, we're using a link element and rel="preload" to ask the browser to preload a file with the intention of using it early in the page load. Like a CSS file, we provide the URL with the href attribute. We use as="font" and type="font/woff2" to indicate this is a font file in woff2 format. For modern browsers woff2 is the only format you need, as it's universally supported.

Finally there's crossorigin="anonymous". This comes from the wonderfully transparent and clear world of Cross-Origin Resource Sharing. I jest of course, CORS is anything but transparent and clear! For fonts you almost always want crossorigin="anonymous" on your link element, even when the request isn't cross-origin. Omitting this attribute would mean our preload wouldn't be used and the file would be requested again. But why?

Browser requests can be sent either with or without credentials (cookies, etc.), and requests to the same URL with and without credentials are fundamentally different. For a preload to be used by the browser, it needs to match the type of request that the browser would have made normally. By default fonts are always requested without credentials, so we need to add crossorigin="anonymous" to ensure our preload matches a normal font request. By omitting this attribute our preload would not be used and the browser would request the font again.

If you're ever unsure of how your preloads are working, check your browser's devtools. In Chrome the Network pane will show a duplicate request, and the Console will log a warning telling you a preload wasn't used.

(Screenshot: the Chrome devtools Console pane, with warnings for an incorrect font preload.)

Result and conclusion

By preloading our critical fonts we ensure our browser has the most important fonts available earlier in the page loading process. We can see this by comparing our recording and waterfall charts from earlier:

(Side-by-side recording of the same page loading in different ways: 'no-preload' shows a large layout shift caused by the font switching and finishes loading at 4.4s; 'preload' doesn't have a shift and finishes loading at 3.1s.)

(Side-by-side comparison of two waterfall charts of the same site with font file lobster.woff2: for the 'no-preload' document the font loads after all other assets and finishes at 3s; the 'preload' document shows the font loading much earlier, in parallel with other files, finishing at 2s.)

As I mentioned in Getting started with Web Performance, it's best to use preloads sparingly, so limit this to your most important font or two. Remember that it's a balance: by preloading too many resources you run the risk of other high-priority resources such as CSS being slowed down and arriving late. I would recommend preloading just the heading font to start with. With some testing you can see if preloading your main body font is worth it also!

With care, font preloads can be a simple and impactful optimisation opportunity, and this is why it's my favourite HTML snippet! This is a great step to improving font loading, and there are plenty of other web font optimisations to try also!
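For reference, here is the sketch of the self-hosted @font-face setup assumed throughout this article. The family name and file path mirror the snippet above; the font-display value is an illustrative choice, not something the article prescribes:

```html
<style>
  /* Sketch: a self-hosted custom font. font-display: swap shows fallback
     text immediately and swaps the custom font in once it loads. */
  @font-face {
    font-family: "Nova Sans";
    src: url("/nova-sans.woff2") format("woff2");
    font-weight: 400;
    font-display: swap;
  }

  body {
    font-family: "Nova Sans", sans-serif;
  }
</style>
```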


More in programming

Thoughts on Motivation and My 40-Year Career

I’ve never published an essay quite like this. I’ve written about my life before, reams of stuff actually, because that’s how I process what I think, but never for public consumption. I’ve been pushing myself to write more lately because my co-authors and I have a whole fucking book to write between now and October. […]

Single-Use Disposable Applications

As search gets worse and “working code” gets cheaper, apps get easier to make from scratch than to find.

Desktop UI frameworks written by a single person

Less-known desktop UI frameworks

Writing desktop software is hard. The UI technologies of Windows or macOS are awful compared to web technology. What can trivially be done with HTML/CSS/JavaScript in a few minutes can take hours using Windows's win32 APIs or Mac's Cocoa.

That's why the default technology for desktop apps, especially cross-platform ones, is Electron: a Chrome browser combined with a Node runtime. The problem is that it's bloaty: each app is a unique build of Chrome with a little bit of application code. Chrome is over 100MB, so many apps ship less than 1MB of code in a 100MB wrapper.

People tried to address the problem of poor OS APIs by writing UI frameworks, often meant to be cross-platform. You've heard about Qt, GTK, wxWidgets. The problem with those is that they are also old, their APIs are not the greatest either, and they are bloaty as well. There just doesn't seem to be a good option.

Writing your own framework seems impossible due to the size of the task. But is it? I'll show a couple of less-known UI frameworks written mostly by a single person, often done simply to enable writing an application.

SWELL in WDL

WDL is interesting. Justin Frankel, the guy who created Winamp, has a repository of C++ code he uses in different projects. After selling Winamp to AOL, a side quest of writing a file sharing application, and getting fired from AOL for writing a file sharing application, he started a company building Reaper, digital audio workstation software for Windows.

Winamp is a win32 API program, and so is Reaper. At some point Justin decided to make a Mac version, but by then he had a lot of code heavily using win32 APIs. So he did what anyone in his position would: he implemented win32 APIs for Mac OS and Linux and called it SWELL, the Simple Windows Emulation Layer. Ok, actually no one else would do it. It was an insane idea, but it worked.

It's important to not overstate SWELL's capabilities. It's not Wine. You can't take any win32 program and recompile it for Mac with SWELL. Frankel is insanely pragmatic and so is his code. SWELL only implements the subset of APIs he uses in Reaper. At the same time, Reaper is a big app, so if SWELL works for Reaper, it could work for your app. WDL is open source, using the permissive MIT license.

Sublime Text

For a few years Sublime Text was THE programmer's editor. It was written by a single developer in C++, and he wrote a custom UI toolkit for it. Not open source, but its existence shows it can be done.

RAD Debugger

RAD Debugger is an open-source Windows debugger for C/C++ apps, written in C by mostly a single person. It implements a custom UI framework based on a 3D renderer. The UI is an integral part of the app, but the code is well structured, so you probably can take just their UI/render code and use it in your own C/C++ app. Currently the app and UI are Windows-only, but it's designed to be cross-platform and they are working on porting the renderer to Mac OS/Linux. They use the permissive MIT license and everything is written in C.

Dear ImGui

Dear ImGui is a newer cross-platform UI framework in C++. Open source, permissive MIT license. Written by mostly a single person.

Ghostty

Ghostty is a cross-platform terminal emulator and UI. It's written in Zig by mostly a single person and uses its own low-level GPU renderer for the UI.

You too can write your own UI framework

At first the idea of writing your own UI framework seems impossibly daunting. What I'm hoping to show is that if you're ambitious enough, it's possible to build cross-platform desktop apps that are not just bloated 100MB Chrome wrappers around a few kilobytes of custom code. I'm not saying it's a simple thing, just that enough people did it that it's possible.

It shouldn't be necessary, but both Microsoft and Apple have tragically dropped the ball on providing decent, high-performance UI libraries for their OSes. Microsoft even writes their own apps, like Teams, in web technologies.

Thanks to open source you're not at the starting line. You can just use Dear ImGui or WDL's SWELL. Or you can extract the UI code from RAD Debugger or Ghostty (if you write in Zig). Or you can study their implementations to speed up your own design and implementation.

Logic for Programmers Turns One

I released Logic for Programmers exactly one year ago today. It feels weird to celebrate the anniversary of something that isn't 1.0 yet, but software projects have a proud tradition of celebrating a dozen anniversaries before 1.0. I wanted to share what's changed in the past year and the work for the next six-plus months.

The Road to 0.1

I had been noodling on the idea of a logic book since the pandemic. The first time I wrote about it on the newsletter was in 2021! Then I said that it would be done by June and would be "under 50 pages". The idea was to cover logic as a "soft skill" that helped you think about things like requirements and stuff. That version sucked. If you want to see how much it sucked, I put it up on Patreon. Then I slept on the next draft for three years.

Then in 2024 a lot of business fell through and I had a lot of free time, so with the help of Saul Pwanson I rewrote the book. This time I emphasized breadth over depth, trying to cover a lot more techniques. I also decided to self-publish it instead of pitching it to a publisher. Not going the traditional route would mean I would be responsible for paying for editing, advertising, graphic design, etc., but I hoped that would be compensated by much higher royalties. It also meant I could release the book in early access and use early sales to fund further improvements. So I wrote up a draft in Sphinx, compiled it to LaTeX, and uploaded the PDF to Leanpub. That was in June 2024. Since then I kept to a monthly cadence of updates, missing once in November (short-notice contract) and once last month (Systems Distributed). The book's now on v0.10.

What's changed? A LOT

v0.1 was very obviously an alpha, and I have made a lot of improvements since then. For one, the book no longer looks like a Sphinx manual. Compare!

Also, the content is very, very different. v0.1 was 19,000 words, v0.10 is 31,000.¹ This comes from new chapters on TLA+, constraint/SMT solving, logic programming, and major expansions to the existing chapters. Originally, "Simplifying Conditionals" was 600 words. Six hundred words! It almost fit in two pages! The chapter is now 2,600 words, covering condition lifting, quantifier manipulation, helper predicates, and set optimizations. All the other chapters have either gotten similar facelifts or are scheduled to get facelifts.

The last big change is the addition of book assets. Originally you had to manually copy over all of the code to try it out, which is a problem when there are samples in eight distinct languages! Now there are ready-to-go examples for each chapter, with instructions on how to set up each programming environment. This is also nice because it gives me breaks from writing to code instead.

How did the book do?

Leanpub's all-time visualizations are terrible, so I'll just give the summary: 1,180 copies sold, $18,241 in royalties. That's a lot of money for something that isn't fully out yet! By comparison, Practical TLA+ has made me less than half of that, despite selling over 5x as many books. Self-publishing was the right choice!

In that time I've paid about $400 for the book cover (worth it) and maybe $800 in Leanpub's advertising service (probably not worth it). Right now that doesn't come close to making back the time investment, but I think it can get there post-release. I believe there's a lot more potential customers via marketing. I think post-release 10k copies sold is within reach.

Where is the book going?

The main content work is rewrites: many of the chapters have not meaningfully changed since 0.1, so I am going through and rewriting them from scratch. So far four of the ten chapters have been rewritten. My (admittedly ambitious) goal is to rewrite three of them by the end of this month and another three by the end of next. I also want to do final passes on the rewritten chapters, as most of them have a few TODOs left lying around. (Also, somehow in starting this newsletter and publishing it I realized that one of the chapters might be better split into two chapters, so there could well be a tenth technique in v0.11 or v0.12!)

After that, I will pass it to a copy editor while I work on improving the layout, making images, and indexing. I want to have something worthy of printing on a dead tree by 1.0. In terms of timelines, I am very roughly estimating something like this:

- Summer: final big changes and rewrites
- Early Autumn: graphic design and copy editing
- Late Autumn: proofing, figuring out printing stuff
- Winter: final ebook and initial print releases of 1.0

(If you know a service that helps get self-published books "past the finish line", I'd love to hear about it! Preferably something that works for a fee, not part of royalties.) This timeline may be disrupted by official client work, like a new TLA+ contract or a conference invitation.

Needless to say, I am incredibly excited to complete this book and share the final version with you all. This is a book I wished for years ago, a book I wrote because nobody else would. It fills a critical gap in software educational material, and someday soon I'll be able to put a copy on my bookshelf. It's exhilarating and terrifying and above all, satisfying.

¹ It's also 150 pages vs 50 pages, but admittedly this is partially because I made the book smaller with a larger font.

Implementing UI translation in SumatraPDF, a C++ Windows application

Translating the user interface of SumatraPDF

SumatraPDF is the best PDF/eBook/comic book viewer for Windows. It's small, fast, full of features, free and open-source. It became popular enough that it made sense to translate the UI for non-English users. Currently we support 72 languages.

This article describes how I designed and implemented a translation system in SumatraPDF, a native win32 C++ Windows application.

Hard things about translating the UI

There are 2 hard things about translating an application:

- adapting the code for a translation system (extracting strings to translate, returning translated strings in the user's language)
- translating the strings into many languages

Extracting strings to translate from source code

Currently there are 381 strings in SumatraPDF subject to translation. It's important that the system requires the least amount of effort when adding new strings to translate. Every string that needs to be translated is marked in a .cpp or .h file with one of two macros:

```cpp
_TRA("Rename")
_TRN("Open")
```

I have a script that extracts those strings from source files. Mine is written in Go, but it could just as well be Python or JavaScript. It's a simple regex job.

_TR stands for "translation". _TRA(s) expands into a call to const char* trans::GetTranslation(const char* str), a function which returns str translated to the current UI language. We auto-detect the language at startup based on Windows settings and allow the user to explicitly set the UI language. For English we just return the original string.

If a string to be translated is e.g. part of a const char* array[], we can't use trans::GetTranslation(). For cases like that we have _TRN(), which expands to the English string; we have to write code to translate it at some point. Adding new strings is therefore as simple as wrapping them in _TRA() or _TRN() macros.

Translating strings into many languages

Now that we've extracted the strings to be translated, we need to translate them into 72 languages. SumatraPDF is a free, open-source program. I don't have a budget to hire translators. I don't have a budget, period. The only option was to get help from SumatraPDF users. It was vital to make it very easy for users to send me translations. I didn't want to ask them, for example, to download some translation software.

Design and implementation of the AppTranslator web app

I couldn't find really simple software for crowd-sourcing translations, so I wrote my own: https://github.com/kjk/apptranslator

You can see it in action: https://www.apptranslator.org/app/SumatraPDF

I designed it to be generic, but I don't think anyone else is using it. AppTranslator is simple. Per https://tools.arslexis.io/wc/:

- 4k lines of Go server code
- 451 lines of HTML code
- a single dependency: the Bootstrap CSS framework (the project is old)

It's simple because I don't want to spend a lot of time writing translation software. It's just a side project in service of the goal of translating SumatraPDF.

Login is exclusively via GitHub. It doesn't even use a database. Like in Redis, changes are stored as a series of operations in an append-only log. We keep the whole state in memory and re-create it from the log at startup. The main operation is "translate a string from English to language X", represented as [kOpTranslation, english string, language, translation, user who provided translation]. When a user provides a translation in the web UI, we send an API call to the server, which appends the translation operation to the log. Simple and reliable.

Because the code is written in Go, it's very fast and memory efficient. When running it uses mere megabytes of RAM. It can comfortably run on the smallest 256 MB VPS server. I back up the log to S3, so if the server ever fails I can re-install the program on a new server and re-download the translations from S3. I provide an RSS feed for each language so that people who provide translations can monitor for new strings to be translated.

Sending strings for translation and receiving translations

So I have a web app for collecting translations and a script that extracts strings to be translated from source code. How do they connect? AppTranslator has an API for submitting the current set of strings to be translated in the simplest possible format: a line for each string (I ensure there are no newlines in the string itself by escaping them with \n). The API is password protected, because only I can submit the strings. The server compares the strings sent with the current set and records the difference in the log. It also sends a response with translations, again in the simplest possible format:

```
AppTranslator: SumatraPDF
651b739d7fa110911f25563c933f42b1d37590f8
:%s annotation. Ctrl+click to edit.
am:%s մեկնաբանություն: Ctrl+քլիք՝ խմբագրելու համար:
ar:ملاحظة %s. اضغط Ctrl للتحرير.
az:Qeyd %s. Düzəliş etmək üçün Ctrl+düyməyə basın.
```

As you can see:

- a string to translate is on a line starting with :
- it is followed by translations of that string in the format ${lang}:${translation}

An optimization: 651b739d7fa110911f25563c933f42b1d37590f8 is a hash of this response. If I submit this hash with my request and the translations didn't change on the server, the response is empty.

Implementing the C++ part of the translation system

So now I have a text file with translations downloaded from the server. How do I get a translation in my C++ code? As with everything in SumatraPDF, I try to do things in a simple and efficient way. The whole Translation.cpp is only 239 lines of code.

The core of the translation system is the function const char* trans::GetTranslation(const char* s). I embed the translations, in exactly the same format as received from AppTranslator, in the executable as a data file in resources. If the UI language is English, we do nothing: trans::GetTranslation() returns its argument. When we switch the language, we load the translations from resources and build an index:

- an array of English strings
- an array of corresponding translations

Both arrays use my own StrVec class, optimized for storing an array of strings. To find a translation we scan the first array to find the index of the string and return the translation from the second array, at the same index. A linear scan seems like it would be slow, but it isn't.

Resizing dialogs

I have a few dialogs defined in the SumatraPDF.rc file. The problem with dialogs is that the position of UI elements is fixed. A translated string will almost certainly have a different size than the English string, which will mess up the fixed layout. Thankfully someone wrote DialogSizer, which smartly resizes dialogs and solves this problem.

The evolution of a solution

No AppTranslator

My initial implementation was simpler. I didn't yet have AppTranslator, so I stored the strings in a text file in the repository, in the same format as what I described above. People would download it, make changes using a text editor and send me the file via email, which I would then check in. It worked for a while, but it became worse over time: more strings and more languages created more work for me to manually manage e-mail submissions. I decided to automate the process.

Code generation

My first implementation of the C++ side used code generation instead of embedding the text file in resources. My Go script would generate C++ source code files with static const char* [] arrays. This worked well, but I decided to improve it further by making the code use the text file with translations embedded in the app. The main motivation for the change was to open the possibility of downloading the latest translations from the server, to fix the problem of translations not all being ready when I build the release executable. I haven't done that yet, but it's now easier to implement, given that the format of strings embedded in the exe is the same as the one I can download from AppTranslator.

Only utf-8

SumatraPDF started by using both WCHAR* Unicode strings and char* utf8 strings. For that reason the translation system had to support returning translations in both WCHAR* and char* versions. Over time I refactored the code to use mostly utf8, and at some point I no longer needed to support the WCHAR* version. That made the code even smaller and reduced memory usage.

The experience

I'm happy with how things turned out. AppTranslator proved to be reliable and hassle-free. It has run for many years now and collected 35,440 string translations from users. I automated everything, so that all I need to do is to periodically re-run the script that extracts strings from source code, uploads them to AppTranslator and downloads the latest translations.

One problem is that translations are not always ready in time for a release, so I make a release and then people start translating strings added since the last release. I've considered downloading the latest translations from the server, in addition to embedding them in the executable at the time of building the app.

Would I do the same today?

While AppTranslator is reliable and doesn't require ongoing work, it would be better to not have to run a server at all. The world has changed since I started SumatraPDF. Namely: people are comfortable using GitHub, and you can edit files directly in the GitHub UI. It's not a great experience, but it works. One option would be to generate a translation text file for each language, in this format:

```
:first untranslated string
:second untranslated string
:first translated string
translation of first string
:second translated string
translation of second string
```

Untranslated strings are listed at the top, to make them easier to find. A link would send a translator directly to edit this file in the GitHub UI. When the translator saves translations, it creates a PR for me to review and merge.

The roads not taken

But why did you re-invent everything? You should do X instead. All other Xs that I know about suck.

Using per-language .rc resource files

The traditional way of localizing/translating Windows GUI apps is to store all strings and dialog definitions in an .rc file. Each language gets its own .rc file (or files) and the program picks the right resource based on the language. This doesn't solve the 2 hard problems:

- having an easy way to add strings for translation
- having an easy way for users to provide translations

XML horror show

There was a dark time when the world was under the iron grip of XML fanaticism. Everything had to be an XML file, even when it was the worst possible solution for the problem. XML doesn't solve the 2 hard problems, and as a string storage format it is an absolute nightmare for human editing.

GNU gettext

There's a C library, gettext, that uses .po files. This is a much saner solution than the XML horror show: .po files are a relatively simple text format, and the code is already written. Warning: tooting my own horn, but my format is better. It's easier for people to edit, and it's easier to write code to parse it. gettext looks like many times more than 239 lines of code. Ok, gettext probably does a bit more than my code, but clearly nothing that I need. It also doesn't solve the 2 hard problems: I would still have to write code to extract strings from source code and build a way to allow users to translate them easily.
