I like the default dark color theme in Firefox, but I prefer to view websites in light mode. This distinction worked until recently: starting with Firefox 95, the browser dark theme enforces dark mode on websites as well. That sounds like a reasonable default, but I want to separate the browser UI from website contents. Fortunately, there is an undocumented setting to change this behavior.

Update: Firefox 100 (desktop) added a UI option to adjust the dark theme behavior. 🎉 Go to "Settings" → "General" → "Website appearance" and choose your preferred setting.

Before Firefox 100, and in the Firefox mobile apps, follow these steps:

1. Open the about:config configuration page.
2. Edit layout.css.prefers-color-scheme.content-override. This setting defines how webpages should be rendered:
   - 0: dark mode
   - 1: light mode
   - 2: system's color setting
   - 3: browser theme color

By setting the value to 1, I can keep using the "Dark" Firefox UI theme and continue viewing websites in light mode.
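Under the hood, this setting overrides what the prefers-color-scheme media feature reports to web content. As a quick sanity check (a minimal sketch, not from the original post), you can run the following in the DevTools console:

```js
// With content-override set to 1 (light), this logs false
// even while the browser UI itself uses a dark theme.
console.log(window.matchMedia("(prefers-color-scheme: dark)").matches);
```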
over a year ago


More from Darek Kay

Grab browser links and titles in one click

When I copy a browser tab URL, I often want to also keep the title. Sometimes I want to use the link as rich text (e.g., when pasting the link into OneNote or Jira). Sometimes I prefer a Markdown link. There are browser extensions to achieve this task, but I don't want to introduce potential security issues. Instead, I've written a bookmarklet based on this example extension. To use it, drag the following link onto your browser bookmarks bar: Copy Tab

When you click the bookmark(let), the current page including its title will be copied into your clipboard. You don't even have to choose the output format: the link is copied both as rich text and plain text (Markdown). This works because it's possible to write multiple values into the clipboard with different content types. Here's the source code:

```js
function escapeHTML(str) {
  return String(str)
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function copyToClipboard({ url, title }) {
  function onCopy(event) {
    document.removeEventListener("copy", onCopy, true);
    // hide the event from the page to prevent tampering
    event.stopImmediatePropagation();
    event.preventDefault();
    const linkAsMarkdown = `[${title}](${url})`;
    event.clipboardData.setData("text/plain", linkAsMarkdown);
    // escape the title as well, so markup characters in it don't break the HTML
    const linkAsHtml = `<a href="${escapeHTML(url)}">${escapeHTML(title)}</a>`;
    event.clipboardData.setData("text/html", linkAsHtml);
  }
  document.addEventListener("copy", onCopy, true);
  document.execCommand("copy");
}

copyToClipboard({ url: window.location.toString(), title: document.title });
```
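As a side note, the same multi-format write can also be done with the asynchronous Clipboard API, avoiding the deprecated document.execCommand("copy"). Here's a minimal alternative sketch, assuming a secure context, a user gesture, and ClipboardItem support (which still varies across browsers):

```js
// Hedged sketch: write the link as both rich text and Markdown
// using the async Clipboard API (HTML escaping omitted for brevity).
async function copyLink({ url, title }) {
  const item = new ClipboardItem({
    "text/html": new Blob([`<a href="${url}">${title}</a>`], { type: "text/html" }),
    "text/plain": new Blob([`[${title}](${url})`], { type: "text/plain" }),
  });
  await navigator.clipboard.write([item]);
}

copyLink({ url: window.location.toString(), title: document.title });
```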

8 months ago 101 votes
Open Graph images: Format compatibility across platforms

While redesigning my photography website, I've looked into Open Graph (OG) images, which are displayed when sharing a link on social media or messaging apps. Here's an example from WhatsApp:

For each photo that I publish, I create a WebP thumbnail for the gallery. I wanted to use those as OG images, but WebP support was lacking, so I've been creating an additional JPG variant just for Open Graph. I was interested in seeing how things have changed in the last 2.5 years. I've tested the following platforms: WhatsApp, Telegram, Signal, Discord, Slack, Teams, Facebook, LinkedIn, Xing, Bluesky, Threads and Phanpy (Mastodon). Here are the results:

- All providers support JPEG and PNG.
- All providers except Teams and Xing support WebP.
- No provider except Facebook supports AVIF. WhatsApp displays the AVIF image, but the colors are broken.
- "X, formerly Twitter" didn't display OG images for my test pages at all. I don't care about that platform, so I didn't bother to investigate further.

Those results confirmed that I could now use WebP Open Graph images without creating an additional JPG file.
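For context, an Open Graph image is declared with a meta tag in the page head. A minimal sketch (the URL is a placeholder):

```html
<!-- placeholder URL; point it at the WebP thumbnail (or a JPEG/PNG fallback) -->
<meta property="og:image" content="https://example.com/photos/thumbnail.webp" />
```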

10 months ago 92 votes
A guide to bookmarklets

I'm a frequent user of bookmarklets. As I'm sharing some of them on my blog, I wrote this post to explain what bookmarklets are and how to use them. In short, a bookmarklet is a browser bookmark containing JavaScript code. Clicking the bookmark executes the script in the context of the current web page, allowing users to perform tasks such as modifying the appearance of a webpage or extracting information. Bookmarklets are a simpler, more lightweight alternative to browser extensions, Chrome snippets, and userscripts.

How to add a bookmarklet?

Here's an example to display a browser dialog with the title of the current web page: Display page title

You can click the link to see what it does. To run this script on other websites, we have to save it as a bookmarklet. My preferred way is to drag the link onto the bookmarks toolbar: a link on the web page is dragged and dropped onto the browser bookmark bar, a bookmark creation dialog appears, the prompt is confirmed and closed, and clicking the created bookmarklet displays the current web page title in a browser dialog.

Another way is to right-click the link to open its context menu. In Firefox, you can then select "Bookmark Link…". Other browsers make it a little more difficult: select "Copy Link (Address)", manually create a new bookmark, and then paste the copied URL as the link target. Once created, you can click the bookmark(let) on any web page to display its title. Scroll further down to see more useful use cases.

How to write a bookmarklet?

Let's start with the code for the previous bookmarklet example:

```js
window.alert(document.title)
```

To turn that script into a bookmarklet, we have to put javascript: in front of it:

```
javascript:window.alert(document.title)
```

To keep our code self-contained, we should wrap it in an IIFE (immediately invoked function expression):

```
javascript:(() => {
  window.alert(document.title)
})()
```

Finally, you might have to URL-encode your bookmarklet if you run into issues with special characters:

```
javascript:%28%28%29%20%3D%3E%20%7B%0A%20%20window.alert%28document.title%29%0A%7D%29%28%29
```

Useful bookmarklets

Here are some bookmarklets I've created:

- Debugger — starts the browser DevTools debugger after 3 seconds, useful for debugging dynamic content changes.
- Log Focus Changes — logs DOM elements when the focus changes.
- Design Mode — makes the web page content-editable (toggle).
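If you need the encoded form, one way to generate it (a small sketch, not from the original guide) is encodeURIComponent. Note that it leaves a few characters such as parentheses unescaped, which browsers accept in javascript: URLs just as well:

```js
// Generate a URL-encoded bookmarklet from plain source code.
const source = `(() => {
  window.alert(document.title)
})()`;
console.log("javascript:" + encodeURIComponent(source));
```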

10 months ago 94 votes
Prevent data loss on page refresh

It can be frustrating to fill out a web form, only to accidentally refresh the page (or click "back") and lose all the hard work. In this blog post, I present a method to retain form data when the page is reloaded, which improves the user experience.

Browser behavior

Most browsers provide an autofill feature. In the example form below, enter anything into the input field. Then, try out the following: click the "Example link" and use the "back" functionality of your browser, or reload the page. (The original post embeds an interactive demo with a "Query" input and an "Example link" here.) Depending on your browser, the input value might be restored:

| Browser     | Reload | Back |
| ----------- | ------ | ---- |
| Firefox 130 | Yes    | Yes  |
| Chrome 129  | No     | Yes  |
| Safari 18   | No     | Yes  |

How does it work? I was surprised to learn that this autofill behavior is controlled via the autocomplete attribute, which is mostly used for value autocompletion from past web forms. However, if we disable the autocompletion, the autofill feature is disabled as well:

```html
<input autocomplete="false" />
```

To learn more about the behavior, read the full spec on persisted history entry state.

Preserving application state

Even with autofill, no browser will restore dynamic changes previously triggered by the user. In the following example, the user always has to press the "Search" button to view the results. (The original post embeds an interactive search demo here.)

If the web page changes its content after user interaction, it might be a good idea to restore the UI state after the page has been refreshed. For example, it's useful to restore previous search results for an on-site search. Note that Chrome will fire a change event on inputs, but this is considered a bug, as the respective spec has been updated.

Storing form values

As the form value might be lost on reload, we need to store it temporarily. Some common places to store data include local storage, session storage, cookies, query parameters, or the URL hash. They all come with drawbacks for our use case, though. Instead, I suggest using the browser history state, which has several advantages:

- We get data separation between multiple browser tabs with no additional effort.
- The data is automatically cleaned up when the browser tab is closed.
- We don't pollute the URL or trigger page reloads.

Let's store the search input value as query:

```js
document.querySelector("form").addEventListener("submit", (event) => {
  event.preventDefault();
  const inputElement = document.querySelector("input");
  history.replaceState({ query: inputElement.value }, "");
  performSearch();
});
```

This example uses the submit event to store the data, which fits our "search" use case. In a regular form, the input change event might be a better trigger to store form values (see the sketch at the end of this post). Using replaceState over pushState ensures that no unnecessary history entry is created. Note that the second argument (the unused title) is required; omitting it throws:

Uncaught TypeError: Failed to execute 'replaceState' on 'History': 2 arguments required, but only 1 present.

Restoring form values

My first approach to restore form values was to listen to the pageshow event. Once it's fired, we can access the page load type from window.performance:

```js
window.addEventListener("pageshow", () => {
  const type = window.performance.getEntriesByType("navigation")[0].type;
  const query = history.state?.query;
  if (query && (type === "back_forward" || type === "reload")) {
    document.querySelector("#my-input").value = query;
    performSearch();
  }
});
```

I will keep this solution here in case someone needs it, but it is usually unnecessary to check the page load type. Because the history state is only set after the search form has been submitted, we can check the state directly:

```js
const query = history.state?.query;
if (query) {
  document.querySelector("#my-input").value = query;
  performSearch();
}
```

Demo

The original post embeds an interactive demo combining both techniques: the submit handler stores the query via history.replaceState, and on page load the state is checked to restore the input value and rerun the search.

Conclusion

Preserving form data on page refresh is a small but impactful way to improve user satisfaction. The default browser autofill feature handles only basic use cases, so ideally we should maintain the form state ourselves. In this blog post, I've explained how to use the browser history state to temporarily store and retrieve form values.
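As an addendum (a minimal sketch, not from the original post), here is the change-event variant mentioned above for persisting a regular form field:

```js
// Hedged sketch: persist the input value on every change instead of on submit;
// "#my-input" matches the selector used in the examples above.
document.querySelector("#my-input").addEventListener("change", (event) => {
  history.replaceState({ query: event.target.value }, "");
});
```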

11 months ago 100 votes
Web push notifications: issues and limitations

In this post, I will summarize some problems and constraints that I've encountered with the Notifications and Push web APIs.

Notification settings on macOS

Someone who's definitely not me wasted half an hour wondering why triggered notifications would not appear. On macOS, make sure to enable system notifications for your browsers: open "System Settings" → "Notifications", then, for each browser, select "Allow notifications" and set the appearance to "Alerts".

Onchange listener not called

Web APIs offer a way to subscribe to change events. This is especially useful in React:

```js
navigator.permissions
  .query({ name: "push", userVisibleOnly: true })
  .then((status) => {
    status.onchange = function () {
      // synchronize permission status with local state
      setNotificationPermission(this.state);
    };
  });
```

Whenever the notification permission changes (either through our application logic or via browser controls), we can synchronize the UI in real time according to the current permission value (prompt, denied or granted). However, due to a Firefox bug, the event listener callback is never called. This means that we can't react to permission changes via browser controls in Firefox. That's especially unfortunate when combined with push messages, where we want to subscribe the user once they grant the notification permission. One workaround is to check at page load whether the notification permission is granted without a valid subscription, and to resubscribe the user in that case (see the sketch at the end of this post).

Notification image not supported

Browser notifications support an optional image property. This property is marked as "experimental", so it's not surprising that some browsers (Firefox, Safari) don't support it. There is a feature request to add support in Firefox, but it has been open since 2019.

VAPID contact information required

When sending a push message, we have to provide VAPID parameters (e.g. the public and private key). According to the specification, the sub property (contact email or link) is optional: "If the application server wishes to provide contact details, it MAY include a 'sub' (Subject) claim in the JWT." Despite this, the Mozilla push message server returns an error if the subject is missing:

401 Unauthorized for (...) and subscription https://updates.push.services.mozilla.com/wpush/v2/…

You might not encounter this issue when using the popular web-push npm package, as its API encourages you to provide the subject as the first parameter:

```js
webpush.setVapidDetails("hello@example.com", publicKey, privateKey);
```

However, in the webpush-java library, you need to set the subject explicitly:

```java
builder.subject("hello@example.com");
```

There is an open issue with more information about this problem.

Microsoft Edge pitfalls

Microsoft introduced adaptive notification requests in the Edge browser. It is a crowdsourced scoring system which may auto-accept or auto-reject notification requests. The behavior can be changed in the Edge notification settings. Additionally, on a business or school device, those settings might be fixed, displaying the following tooltip: "This setting is managed by your organization."
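Here is the resubscribe workaround mentioned above as a minimal sketch (the function and parameter names are illustrative, not from the original post):

```js
// Hedged sketch: on page load, resubscribe if the notification permission
// is granted but no push subscription exists anymore.
async function ensurePushSubscription(applicationServerKey) {
  if (Notification.permission !== "granted") return null;
  const registration = await navigator.serviceWorker.ready;
  const existing = await registration.pushManager.getSubscription();
  if (existing) return existing;
  return registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey, // VAPID public key (base64url string or Uint8Array)
  });
}
```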

a year ago 94 votes

More in programming

How to Not Write "Garbage Code" (by Linus Torvalds)

Linus Torvalds, Creator of Git and Linux, on reducing cognitive load

yesterday 12 votes
Get Out of Technology

You heard there was money in tech. You never cared about technology. You are an entryist piece of shit. But you won’t leave willingly. Give it all away to everyone for free. Then you’ll have no reason to be here.

yesterday 3 votes
Trusting builds with Bazel remote execution

Understanding how the architecture of a remote build system for Bazel helps implement verifiable action execution and end-to-end builds

yesterday 6 votes
Words are not violence

Debates, at their finest, are about exploring topics together in search of truth. That probably sounds hopelessly idealistic to anyone who's ever perused a comment section on the internet, but ideals are there to remind us of what's possible, to inspire us to reach higher — even if reality falls short.

I've been reaching for those debating ideals for thirty years on the internet. I've argued with tens of thousands of people, first on Usenet, then in blog comments, then Twitter, now X, and also LinkedIn — as well as a million other places that have come and gone. It's mostly been about technology, but occasionally about society and morality too.

There have been plenty of heated moments during those three decades. It doesn't take much for a debate between strangers on the internet to escalate into something far lower than a "search for truth", and I've often felt willing to settle for just a cordial tone! But for the majority of that time, I never felt like things might escalate beyond the keyboards and into the real world.

That was until we had our big blow-up at 37signals back in 2021. I suddenly got to see a different darkness from the most vile corners of the internet. Heard from those who seem to prowl for a mob-sanctioned opportunity to threaten and intimidate those they disagree with. It fundamentally changed me. But I used the experience as a mirror to reflect on the ways my own engagement with the arguments occasionally felt too sharp, too personal. And I've since tried to refocus way more of my efforts on the positive and the productive. I'm by no means perfect, and the internet often tempts the worst in us, but I resist better now than I did then.

What I cannot come to terms with, though, is the modern equation of words with violence: the growing sense of permission that if the disagreement runs deep enough, then violence is a justified answer to settle it. It should be so obvious in a civil society that it is not, yet clearly we need to state it. Not even in technology. Not even in programming. There are plenty of factions here who've taken to justifying their violent fantasies by referring to their ideological opponents as "nazis", "fascists", or "racists", and then following that up with a call to "punch a nazi" or worse. When you hear something like that often enough, it's easy to grow glib about it. That it's just a saying. They don't mean it. But I'm afraid many of them really do.

Which brings us to Charlie Kirk, and the technologists who name drinks at their bar after his mortal wound just hours after his death, to name but one of the many morbid celebrations of the famous conservative debater's death. It's sickening. Deeply, profoundly sickening. And my first instinct was exactly what such people would delight in: to watch the rest of us recoil, then retract, and perhaps even eject. To leave the internet for a while or forever.

But I can't do that. We shouldn't do that. Instead, we should double down on the opposite. Continue to show up with our ideals held high while we debate strangers in that noble search for the truth. Where we share our excitement, our enthusiasm, and our love of technology, country, and humanity.

I think that's what Charlie Kirk did so well. He continued to show up for the debate, even on hostile territory. Not because he thought he was ever going to convince everyone, but because he knew he'd always reach some with a good argument, a good insight, or at least a different perspective. You could agree or not. Counter or be quiet. But the earnest exploration of topics in a live exchange with another human is as fundamental to our civilization as Socrates himself.

Don't give up, don't give in. Keep debating.

2 days ago 6 votes
AI Coding

In my old age I’ve mostly given up trying to convince anyone of anything. Most people do not care to find the truth, they care about what pumps their bags. Some people go as far as to believe that perception is reality and that truth is a construction. I hope there’s a special place in hell for those people. It’s why the world wasted $10B+ on self driving car companies that obviously made no sense. There’s a much bigger market for truths that pump bags vs truths that don’t. So here’s your new truth that there’s no market for. Do you believe a compiler can code? If so, then go right on believing that AI can code. But if you don’t, then AI is no better than a compiler, and arguably in its current form, worse. The best model of a programming AI is a compiler. You give it a prompt, which is “the code”, and it outputs a compiled version of that code. Sometimes you’ll use it interactively, giving updates to the prompt after it has returned code, but you find that, like most IDEs, this doesn’t work all that well and you are often better off adjusting the original prompt and “recompiling”. While noobs and managers are excited that the input language to this compiler is English, English is a poor language choice for many reasons. It’s not precise in specifying things. The only reason it works for many common programming workflows is because they are common. The minute you try to do new things, you need to be as verbose as the underlying language. AI workflows are, in practice, highly non-deterministic. While different versions of a compiler might give different outputs, they all promise to obey the spec of the language, and if they don’t, there’s a bug in the compiler. English has no similar spec. Prompts are highly non local, changes made in one part of the prompt can affect the entire output. tl;dr, you think AI coding is good because compilers, languages, and libraries are bad. This isn’t to say “AI” technology won’t lead to some extremely good tools. But I argue this comes from increased amounts of search and optimization and patterns to crib from, not from any magic “the AI is doing the coding”. You are still doing the coding, you are just using a different programming language. That anyone uses LLMs to code is a testament to just how bad tooling and languages are. And that LLMs can replace developers at companies is a testament to how bad that company’s codebase and hiring bar is. AI will eventually replace programming jobs in the same way compilers replaced programming jobs. In the same way spreadsheets replaced accounting jobs. But the sooner we start thinking about it as a tool in a workflow and a compiler—through a lens where tons of careful thought has been put in—the better. I can’t believe anyone bought those vibe coding crap things for billions. Many people in self driving accused me of just being upset that I didn’t get the billions, and I’m sure it’s the same thoughts this time. Is your way of thinking so fucking broken that you can’t believe anyone cares more about the actual truth than make believe dollars? From this study, AI makes you feel 20% more productive but in reality makes you 19% slower. How many more billions are we going to waste on this? Or we could, you know, do the hard work and build better programming languages, compilers, and libraries. But that can’t be hyped up for billions.

2 days ago 4 votes