More from Darek Kay
When I copy a browser tab URL, I often want to also keep the title. Sometimes I want to use the link as rich text (e.g., when pasting the link into OneNote or Jira). Sometimes I prefer a Markdown link. There are browser extensions to achieve this task, but I don't want to introduce potential security issues. Instead, I've written a bookmarklet based on this example extension. To use it, drag the following link onto your browser bookmarks bar: Copy Tab

When you click the bookmark(let), the current page, including its title, is copied into your clipboard. You don't even have to choose the output format: the link is copied both as rich text and plain text (Markdown). This works because it's possible to write multiple values into the clipboard with different content types. Here's the source code:

```js
function escapeHTML(str) {
  return String(str)
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function copyToClipboard({ url, title }) {
  function onCopy(event) {
    document.removeEventListener("copy", onCopy, true);
    // hide the event from the page to prevent tampering
    event.stopImmediatePropagation();
    event.preventDefault();
    const linkAsMarkdown = `[${title}](${url})`;
    event.clipboardData.setData("text/plain", linkAsMarkdown);
    const linkAsHtml = `<a href="${escapeHTML(url)}">${escapeHTML(title)}</a>`;
    event.clipboardData.setData("text/html", linkAsHtml);
  }

  document.addEventListener("copy", onCopy, true);
  document.execCommand("copy");
}

copyToClipboard({ url: window.location.toString(), title: document.title });
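```

The bookmarklet above uses the synchronous copy event together with the deprecated document.execCommand("copy"). As a hedged aside (my sketch, not part of the original bookmarklet), the same multi-format write is possible with the asynchronous Clipboard API; ClipboardItem support and permission behavior vary by browser, and it must run in a secure context, typically from a user gesture:

```js
(async () => {
  const title = document.title;
  const url = window.location.toString();
  // Write the Markdown and HTML representations in a single call.
  await navigator.clipboard.write([
    new ClipboardItem({
      "text/plain": new Blob([`[${title}](${url})`], { type: "text/plain" }),
      "text/html": new Blob([`<a href="${url}">${title}</a>`], { type: "text/html" }),
    }),
  ]);
})();
```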
While redesigning my photography website, I've looked into the Open Graph (OG) images, which are displayed when sharing a link on social media or messaging apps. Here's an example from WhatsApp:

For each photo that I publish, I create a WebP thumbnail for the gallery. I wanted to use those as OG images, but the WebP support was lacking, so I've been creating an additional JPG variant just for Open Graph. I was interested in seeing how things have changed in the last 2.5 years. I've tested the following platforms: WhatsApp, Telegram, Signal, Discord, Slack, Teams, Facebook, LinkedIn, Xing, Bluesky, Threads and Phanpy (Mastodon). Here are the results:

- All providers support JPEG and PNG.
- All providers except Teams and Xing support WebP.
- No provider except Facebook supports AVIF. WhatsApp displays the AVIF image, but the colors are broken.
- "X, formerly Twitter" didn't display OG images for my test pages at all. I don't care about that platform, so I didn't bother to further investigate.

Those results confirmed that I could now use WebP Open Graph images without creating an additional JPG file.
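For context, an Open Graph image is declared with a meta tag in the page head. A minimal example (the URL here is hypothetical):

```html
<meta property="og:image" content="https://example.com/photos/thumbnail.webp" />
```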
I'm a frequent user of bookmarklets. As I'm sharing some of them on my blog, I wrote this post to explain what bookmarklets are and how to use them. In short, a bookmarklet is a browser bookmark containing JavaScript code. Clicking the bookmark executes the script in the context of the current web page, allowing users to perform tasks such as modifying the appearance of a webpage or extracting information. Bookmarklets are a simpler, more lightweight alternative to browser extensions, Chrome snippets, and userscripts.

How to add a bookmarklet?

Here's an example to display a browser dialog with the title of the current web page: Display page title

You can click the link to see what it does. To run this script on other websites, we have to save it as a bookmarklet. My preferred way is to drag the link onto the bookmarks toolbar:

(Animated demo: a link on a web page is dragged and dropped onto the browser bookmarks bar, the bookmark creation dialog is confirmed, and clicking the created bookmarklet displays the current page title in a browser dialog.)

Another way is to right-click the link to open its context menu: in Firefox, you can then select "Bookmark Link…". Other browsers make it a little more difficult: select "Copy Link (Address)", manually create a new bookmark, and then paste the copied URL as the link target. Once created, you can click the bookmark(let) on any web page to display its title. Scroll further down to see more useful use cases.

How to write a bookmarklet?

Let's start with the code for the previous bookmarklet example:

```js
window.alert(document.title)
```

To turn that script into a bookmarklet, we have to put javascript: in front of it:

```
javascript:window.alert(document.title)
```

To keep our code self-contained, we should wrap it with an IIFE (immediately invoked function expression):

```
javascript:(() => {
  window.alert(document.title)
})()
```

Finally, you might have to URL-encode your bookmarklet if you run into issues with special characters:

```
javascript:%28%28%29%20%3D%3E%20%7B%0A%20%20window.alert%28document.title%29%0A%7D%29%28%29
```

Useful bookmarklets

Here are some bookmarklets I've created:

- Debugger — Starts the browser DevTools debugger after 3 seconds, useful for debugging dynamic content changes (see the sketch below).
- Log Focus Changes — Logs DOM elements when the focus changes.
- Design Mode — Makes the web page content-editable (toggle).
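The post links to the bookmarklets themselves; as a rough illustration of how small these can be, the Debugger bookmarklet could look something like this (my sketch, not necessarily the author's exact code):

```
javascript:(() => {
  // Give yourself 3 seconds to trigger the dynamic UI state,
  // then break into DevTools (which must already be open).
  setTimeout(() => { debugger; }, 3000);
})()
```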
It can be frustrating to fill out a web form, only to accidentally refresh the page (or click "back") and lose all the hard work. In this blog post, I present a method to retain form data when the page is reloaded, which improves the user experience.

Browser behavior

Most browsers provide an autofill feature. In the example form below, enter anything into the input field. Then, try out the following:

1. Click the "Example link" and use the "back" functionality of your browser.
2. Reload the page.

(Interactive example: a "Query" input field and an "Example link".)

Depending on your browser, the input value might be restored:

| Browser | Reload | Back |
| --- | --- | --- |
| Firefox 130 | Yes | Yes |
| Chrome 129 | No | Yes |
| Safari 18 | No | Yes |

How does it work? I was surprised to learn that this autofill behavior is controlled via the autocomplete attribute, which is mostly used for value autocompletion from past web forms. However, if we disable the autocompletion, the autofill feature will be disabled as well:

```html
<input autocomplete="false" />
```

To learn more about the behavior, read the full spec on persisted history entry state.

Preserving application state

Even with autofill, no browser will restore dynamic changes previously triggered by the user. In the following example, the user always has to press the "Search" button to view the results:

(Interactive example: a search form whose submit handler renders a result link; nothing is persisted, so the results disappear on reload.)

If the web page changes its content after user interaction, it might be a good idea to restore the UI state after the page has been refreshed. For example, it's useful to restore previous search results for an on-site search. Note that Chrome will fire a change event on inputs, but this is considered a bug, as the respective spec has been updated.

Storing form values

As the form value might be lost on reload, we need to store it temporarily. Some common places to store data include local storage, session storage, cookies, query parameters or the URL hash. They all come with drawbacks for our use case, though. Instead, I suggest using the browser history state, which has several advantages:

- We get data separation between multiple browser tabs with no additional effort.
- The data is automatically cleaned up when the browser tab is closed.
- We don't pollute the URL or trigger page reloads.

Let's store the search input value as query:

```js
document.querySelector("form").addEventListener("submit", (event) => {
  event.preventDefault();
  const inputElement = document.querySelector("input");
  history.replaceState({ query: inputElement.value }, "");
  performSearch();
});
```

This example uses the submit event to store the data, which fits our "search" use case. In a regular form, the input change event might be a better trigger to store form values (see the sketch below). Using replaceState over pushState ensures that no unnecessary history entry is created.
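Here's a minimal sketch of that change-event variant (assuming the same #my-input id used later in this post):

```js
// Persist the field on every change instead of on submit,
// merging with whatever is already in the history state.
document.querySelector("#my-input").addEventListener("change", (event) => {
  history.replaceState({ ...history.state, query: event.target.value }, "");
});
```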
Note that the second argument of replaceState is required, even though it's unused. If you omit it, you'll get an error:

```
Uncaught TypeError: Failed to execute 'replaceState' on 'History': 2 arguments required, but only 1 present.
```

Restoring form values

My first approach to restore form values was to listen to the pageshow event. Once it's fired, we can access the page load type from window.performance:

```js
window.addEventListener("pageshow", () => {
  const type = window.performance.getEntriesByType("navigation")[0].type;
  const query = history.state?.query;
  if (query && (type === "back_forward" || type === "reload")) {
    document.querySelector("#my-input").value = query;
    performSearch();
  }
});
```

I will keep this solution here in case someone needs it, but usually it's unnecessary to check the page load type. Because the history state is only set after the search form has been submitted, we can check the state directly:

```js
const query = history.state?.query;
if (query) {
  document.querySelector("#my-input").value = query;
  performSearch();
}
```

Demo

Here's an example combining both techniques to store and restore the input value:

(Interactive example; its source:)

```js
const inputElementCustom = document.querySelector("#example-preserve-input");
const outputElementCustom = document.querySelector("#example-preserve-output");

// Render a result link for the current query.
const performSearch_examplePreserve = (outputElement) => {
  outputElement.innerText = "";
  const introText = document.createTextNode("Open the result for ");
  outputElement.appendChild(introText);
  const link = document.createElement("a");
  link.href = "https://example.com";
  link.innerText = inputElementCustom.value || "no text";
  outputElement.appendChild(link);
};

// Store the query in the history state on every search.
document.querySelector("#example-preserve-form").addEventListener("submit", (event) => {
  event.preventDefault();
  performSearch_examplePreserve(outputElementCustom);
  history.replaceState({ query: inputElementCustom.value }, "");
});

// Restore the query and the results after a reload.
const historyQuery = history.state?.query;
if (historyQuery) {
  document.querySelector("#example-preserve-input").value = historyQuery;
  performSearch_examplePreserve(outputElementCustom);
}
```

Conclusion

Preserving form data on page refresh is a small but impactful way to improve user satisfaction. The default browser autofill feature handles only basic use cases, so ideally we should maintain the form state ourselves. In this blog post, I've explained how to use the browser history state to temporarily store and retrieve form values.
In this post, I will summarize some problems and constraints that I've encountered with the Notifications and Push web APIs.

Notification settings on macOS

Someone who's definitely not me wasted half an hour wondering why triggered notifications would not appear. On macOS, make sure to enable system notifications for your browsers. Open "System Settings" → "Notifications". For each browser, select "Allow notifications" and set the appearance to "Alerts".

Onchange listener not called

Web APIs offer a way to subscribe to change events. This is especially useful in React:

```js
navigator.permissions
  .query({ name: "push", userVisibleOnly: true })
  .then((status) => {
    status.onchange = function () {
      // synchronize permission status with local state
      setNotificationPermission(this.state);
    };
  });
```

Whenever the notification permission changes (either through our application logic or via browser controls), we can synchronize the UI in real time according to the current permission value (prompt, denied or granted). However, due to a Firefox bug, the event listener callback is never called. This means that we can't react to permission changes via browser controls in Firefox. That's especially unfortunate when combined with push messages, where we want to subscribe the user once they grant the notification permission. One workaround is to check at page load whether the notification permission is granted without a valid subscription, and resubscribe the user (see the sketch at the end of this post).

Notification image not supported

Browser notifications support an optional image property. This property is marked as "experimental", so it's not surprising that some browsers (Firefox, Safari) don't support it. There is an open feature request to add support in Firefox, but it has been open since 2019.

VAPID contact information required

When sending a push message, we have to provide VAPID parameters (e.g., the public and private key). According to the specification, the sub property (contact email or link) is optional:

> If the application server wishes to provide contact details, it MAY include a "sub" (Subject) claim in the JWT.

Despite this specification, the Mozilla push message server will return an error if the subject is missing:

```
401 Unauthorized for (...) and subscription https://updates.push.services.mozilla.com/wpush/v2/…
```

You might not encounter this issue when using the popular web-push npm package, as its API encourages you to provide the subject as the first parameter:

```js
webpush.setVapidDetails("mailto:hello@example.com", publicKey, privateKey);
```

However, in the webpush-java library, you need to set the subject explicitly:

```java
builder.subject("mailto:hello@example.com");
```

There is an open issue with more information about this problem.

Microsoft Edge pitfalls

Microsoft introduced adaptive notification requests in the Edge browser. It is a crowdsourced scoring system, which may auto-accept or auto-reject notification requests. The behavior can be changed in the Edge notification settings. Additionally, on a business or school device, those settings might be fixed, displaying the following tooltip: "This setting is managed by your organization."
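Here's a minimal sketch of the resubscribe workaround mentioned above, assuming a registered service worker and an applicationServerKey variable holding the VAPID public key (both are this example's assumptions, not code from the original post):

```js
// On page load: if notifications are allowed but no push subscription
// exists (e.g., we missed a permission change), subscribe the user again.
window.addEventListener("load", async () => {
  if (Notification.permission !== "granted") return;
  const registration = await navigator.serviceWorker.ready;
  const subscription = await registration.pushManager.getSubscription();
  if (!subscription) {
    await registration.pushManager.subscribe({
      userVisibleOnly: true,
      applicationServerKey, // VAPID public key (assumed to be defined)
    });
    // ...send the new subscription to the backend here
  }
});
```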
More in programming
Although it looks really good, I have not yet tried the Jujutsu (jj) version control system, mainly because it's not yet clearly superior to Magit. But I have been following jj discussions with great interest. One of the things that jj has not yet tackled is how to do better than git refs / branches / tags. As I understand it, jj currently has something like Mercurial bookmarks, which are more like raw git ref plumbing than a high-level porcelain feature. In particular, jj lacks signed or annotated tags, and it doesn't have branch names that always automatically refer to the tip. This is clearly a temporary state of affairs, because jj is still incomplete and under development, and these gaps are going to be filled. But the discussions have led me to think about how git's branches are unsatisfactory, and what could be done to improve them.

- branch
- merge
- rebase
- squash
- fork
- cover letters
- previous branch
- workflow
- questions

branch

One of the huge improvements in git compared to Subversion was git's support for merges. Subversion proudly advertised its support for lightweight branches, but a branch is not very useful if you can't merge it: an un-mergeable branch is not a tool you can use to help with work-in-progress development. The point of this anecdote is to illustrate that rather than trying to make branches better, we should try to make merges better, and branches will get better as a consequence. Let's consider a few common workflows and how git makes them all unsatisfactory in various ways. Skip to cover letters and previous branch below, where I eventually get to the point.

merge

A basic merge workflow is:

- create a feature branch
- hack, hack, review, hack, approve
- merge back to the trunk

The main problem is that when it comes to the merge, there may be conflicts due to concurrent work on the trunk. Git encourages you to resolve conflicts while creating the merge commit, which tends to bypass the normal review process. Git also gives you an ugly, useless canned commit message for merges that hides what you did to resolve the conflicts.

If the feature branch is a linear record of the work, then it can be cluttered with commits to address comments from reviewers and to fix mistakes. Some people like an accurate record of the history, but others prefer the repository to contain clean logical changes that will make sense in years to come, keeping the clutter in the code review system.

rebase

A rebase-oriented workflow deals with the problems of the merge workflow but introduces new problems. Primarily, rebasing is intended to produce a tidy, logical commit history. And when a feature branch is rebased onto the trunk before it is merged, a simple fast-forward check makes it trivial to verify that the merge will be clean (whether it uses a separate merge commit or directly fast-forwards the trunk).

However, it's hard to compare the state of the feature branch before and after the rebase. The current and previous tips of the branch (amongst other clutter) are recorded in the reflog of the person who did the rebase, but they can't share their reflog. A force-push erases the previous branch from the server. Git forges sometimes make it possible to compare a branch before and after a rebase, but it's usually very inconvenient, which makes it hard to see if review comments have been addressed. And a reviewer can't fetch past versions of the branch from the server to review them locally.

You can mitigate these problems by adding commits in --autosquash format and delaying the rebase until just before merge, as in the sketch below.
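For readers unfamiliar with that git feature, a typical --autosquash round-trip looks roughly like this (illustrative commands, not from the original post; abc1234 is a placeholder hash):

```sh
# record a review fix as a fixup of the commit it amends
git commit --fixup=abc1234

# later, just before merging: replay the branch onto the trunk,
# folding each fixup! commit into its target
git rebase --interactive --autosquash main
```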
However, that reintroduces the problem of merge conflicts: if the autosquash doesn't apply cleanly, the branch should have another round of review to make sure the conflicts were resolved OK.

squash

When the trunk consists of a sequence of merge commits, the --first-parent log is very uninformative. A common way to make the history of the trunk more informative, and deal with the problems of cluttered feature branches and poor rebase support, is to squash the feature branch into a single commit on the trunk instead of merging. This encourages merge requests to be roughly the size of one commit, which is arguably a good thing. However, it can be uncomfortably confining for larger features, or cause extra busy-work co-ordinating changes across multiple merge requests. And squashed feature branches have the same merge conflict problem as rebase --autosquash.

fork

Feature branches can't always be short-lived. In the past I have maintained local hacks that were used in production but were not (not yet?) suitable to submit upstream. I have tried keeping a stack of these local patches on a git branch that gets rebased onto each upstream release. With this setup, the problem of reviewing successive versions of a merge request becomes the bigger problem of keeping track of how the stack of patches evolved over longer periods of time.

cover letters

Cover letters are common in the email patch workflow that predates git, and they are supported by git format-patch. GitHub and other forges have a webby version of the cover letter: the message that starts off a pull request or merge request. In git, cover letters are second-class citizens: they aren't stored in the repository. But many of the problems I outlined above have neat solutions if cover letters become first-class citizens, with a Jujutsu twist.

A first-class cover letter starts off as a prototype for a merge request, and becomes the eventual merge commit. Instead of unhelpful auto-generated merge commits, you get helpful and informative messages. No extra work is needed since we're already writing cover letters. Good merge commit messages make good --first-parent logs.

The cover letter subject line works as a branch name. No more need to invent filename-compatible branch names! Jujutsu doesn't make you name branches, giving them random names instead. It shows the subject line of the topmost commit as a reminder of what the branch is for. If there's an explicit cover letter, its subject line will be a better summary of the branch as a whole. I often find the last commit on a branch is some post-feature cleanup, and that kind of commit has a subject line that is never a good summary of its feature branch.

As a prototype for the merge commit, the cover letter can contain the resolution of all the merge conflicts in a way that can be shared and reviewed. In Jujutsu, where conflicts are first class, the cover letter commit can contain unresolved conflicts: you don't have to clean them up when creating the merge, you can leave that job until later. If you can share a prototype of your merge commit, then it becomes possible for your collaborators to review any merge conflicts and how you resolved them.

To distinguish a cover letter from a merge commit object, a cover letter object has a "target" header, which is a special kind of parent header. A cover letter also has a normal parent commit header that refers to earlier commits in the feature branch. The target is what will become the first parent of the eventual merge commit.
previous branch

The other ingredient is to add a "previous branch" header, another special kind of parent commit header. The previous branch header refers to an older version of the cover letter and, transitively, an older version of the whole feature branch. Typically, the previous branch header will match the last shared version of the branch, i.e. the commit hash of the server's copy of the feature branch.

The previous branch header isn't changed during normal work on the feature branch. As the branch is revised and rebased, the commit hash of the cover letter will change fairly frequently. These changes are recorded in git's reflog or jj's oplog, but not in the "previous branch" chain.

You can use the previous branch chain to examine diffs between versions of the feature branch as a whole. If commits have Gerrit-style or jj-style change-IDs, then it's fairly easy to find and compare previous versions of an individual commit. The previous branch header supports interdiff code review, or allows you to retain past iterations of a patch series.

workflow

Here are some sketchy notes on how these features might work in practice.

One way to use cover letters is jj-style, where it's convenient to edit commits that aren't at the tip of a branch, and easy to reshuffle commits so that a branch has a deliberate narrative.

When you create a new feature branch, it starts off as an empty cover letter with both target and parent pointing at the same commit. Alternatively, you might start a branch ad hoc, and later cap it with a cover letter. If this is a small change and rebase + fast-forward is allowed, you can edit the "cover letter" to contain the whole change. Otherwise, you can hack on the branch any which way. Shuffle the commits that should be part of the merge request so that they occur before the cover letter, and edit the cover letter to summarize the preceding commits.

When you first push the branch, there's (still) no need to give it a name: the server can see that this is (probably) going to be a new merge request, because the top commit has a target branch and its change-ID doesn't match an existing merge request. Also when you push, your client automatically creates a new instance of your cover letter, adding a "previous branch" header to indicate that the old version was shared. The commits on the branch that were pushed are now immutable; rebases and edits affect the new version of the branch.

During review there will typically be multiple iterations of the branch to address feedback. The chain of previous branch headers allows reviewers to see how commits were changed to address feedback, interdiff style. The branch can be merged when the target header matches the current trunk and there are no conflicts left to resolve.

When the time comes to merge the branch, there are several options:

- For a merge workflow, the cover letter is used to make a new commit on the trunk, changing the target header into the first parent commit and dropping the previous branch header. Or, if you like to preserve more history, the previous branch chain can be retained.
- Or you can drop the cover letter and fast-forward the branch on to the trunk.
- Or you can squash the branch on to the trunk, using the cover letter as the commit message.

questions

This is a fairly rough idea: I'm sure that some of the details won't work in practice without a lot of careful work on compatibility and deployability.

- Do the new commit headers ("target" and "previous branch") need to be headers?
- What are the compatibility issues with adding new headers that refer to other commits?
- How would a server handle a push of an unnamed branch? How could someone else pull a copy of it?
- How feasible is it to use cover letter subject lines instead of branch names?
- The previous branch header is doing a similar job to a remote tracking branch. Is there an opportunity to simplify how we keep a local cache of the server state?

Despite all that, I think something along these lines could make branches / reviews / reworks / merges less awkward. How you merge should be a matter of your project's preferred style, without interference from technical limitations that force you to trade off one annoyance against another.

There remains a non-technical limitation: I have assumed that contributors are comfortable enough with version control to use a history-editing workflow effectively. I've lost all perspective on how hard this is for a newbie to learn; I expect (or hope?) jj makes it much easier than git rebase.
In my post yesterday ("ARM is great, ARM is terrible (and so is RISC-V)"), I described my desire to find ARM hardware with AES instructions to support full-disk encryption, and the poor state of the OS ecosystem around the newer ARM boards. I was anticipating buying either a newer ARM SBC or an x86 mini … Continue reading Performant Full-Disk Encryption on a Raspberry Pi, but Foiled by Twisty UARTs →
Debates, at their finest, are about exploring topics together in search for truth. That probably sounds hopelessly idealistic to anyone who's ever perused a comment section on the internet, but ideals are there to remind us of what's possible, to inspire us to reach higher — even if reality falls short.

I've been reaching for those debating ideals for thirty years on the internet. I've argued with tens of thousands of people, first on Usenet, then in blog comments, then Twitter, now X, and also LinkedIn — as well as a million other places that have come and gone. It's mostly been about technology, but occasionally about society and morality too.

There have been plenty of heated moments during those three decades. It doesn't take much for a debate between strangers on this internet to escalate into something far lower than a "search for truth", and I've often felt willing to settle for just a cordial tone! But for the majority of that time, I never felt like things might escalate beyond the keyboards and into the real world.

That was until we had our big blow-up at 37signals back in 2021. I suddenly got to see a different darkness from the most vile corners of the internet. Heard from those who seem to prowl for a mob-sanctioned opportunity to threaten and intimidate those they disagree with. It fundamentally changed me. But I used the experience as a mirror to reflect on the ways my own engagement with the arguments occasionally felt too sharp, too personal. And I've since tried to refocus way more of my efforts on the positive and the productive. I'm by no means perfect, and the internet often tempts the worst in us, but I resist better now than I did then.

What I cannot come to terms with, though, is the modern equation of words with violence. The growing sense of permission that if the disagreement runs deep enough, then violence is a justified answer to settle it. That sounds so obvious that we shouldn't need to state it in a civil society, but clearly it is not. Not even in technology. Not even in programming. There are plenty of factions here who've taken to justifying their violent fantasies by referring to their ideological opponents as "nazis", "fascists", or "racists". And then follow that up with a call to "punch a nazi" or worse. When you hear something like that often enough, it's easy to grow glib about it. That it's just a saying. They don't mean it. But I'm afraid many of them really do.

Which brings us to Charlie Kirk. And the technologists who named drinks at their bar after his mortal wound just hours after his death, to name but one of the many morbid celebrations of the famous conservative debater's death. It's sickening. Deeply, profoundly sickening. And my first instinct was exactly what such people would delight in happening: to watch the rest of us recoil, then retract, and perhaps even eject. To leave the internet for a while or forever.

But I can't do that. We shouldn't do that. Instead, we should double down on the opposite. Continue to show up with our ideals held high while we debate strangers in that noble search for the truth. Where we share our excitement, our enthusiasm, and our love of technology, country, and humanity.

I think that's what Charlie Kirk did so well. Continued to show up for the debate. Even on hostile territory. Not because he thought he was ever going to convince everyone, but because he knew he'd always reach some with a good argument, a good insight, or at least a different perspective. You could agree or not. Counter or be quiet.
But the earnest exploration of the topics in a live exchange with another human is as fundamental to our civilization as Socrates himself. Don't give up, don't give in. Keep debating.
I’ve long been interested in new and different platforms. I ran Debian on an Alpha back in the late 1990s and was part of the Alpha port team; then I helped bootstrap Debian on amd64. I’ve got somewhere around 8 Raspberry Pi devices in active use right now, and the free NNCPNET Internet email service … Continue reading ARM is great, ARM is terrible (and so is RISC-V) →
In my first interview out of college I was asked the change counter problem:

> Given a set of coin denominations, find the minimum number of coins required to make change for a given number. I.e., for US coinage and 37 cents, the minimum number is four (quarter, dime, 2 pennies).

I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9). The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.

But you only need dynamic programming if you're writing your own algorithm. It's really easy if you throw it into a constraint solver like MiniZinc and call it a day.

```minizinc
int: total;
array[int] of int: values = [10, 9, 1];
array[index_set(values)] of var 0..: coins;

constraint sum (c in index_set(coins)) (coins[c] * values[c]) == total;

solve minimize sum(coins);
```

You can try this online here. It'll give you a prompt to put in total and then give you successively-better solutions:

```
coins = [0, 0, 37];
----------
coins = [0, 1, 28];
----------
coins = [0, 2, 19];
----------
coins = [0, 3, 10];
----------
coins = [0, 4, 1];
----------
coins = [1, 3, 0];
----------
```

Lots of similar interview questions are this kind of mathematical optimization problem, where we have to find the maximum or minimum of a function subject to constraints. They're hard in programming languages because programming languages are too low-level. They are also exactly the problems that constraint solvers were designed to solve. Hard leetcode problems are easy constraint problems.1 Here I'm using MiniZinc, but you could just as easily use Z3 or OR-Tools or whatever your favorite generalized solver is.

More examples

This was a question in a different interview (which I thankfully passed):

> Given a list of stock prices through the day, find the maximum profit you can get by buying one stock and selling one stock later.

It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem:

```minizinc
array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];

var int: buy;
var int: sell;
var int: profit = prices[sell] - prices[buy];

constraint sell > buy;
constraint profit > 0;

solve maximize profit;
```

Reminder, link to trying it online here.

While working at that job, one interview question we tested out was:

> Given a list, determine if three numbers in that list can be added or subtracted to give 0.

This is a satisfaction problem, not an optimization problem: we don't need the "best" answer, any answer will do. We eventually decided against it for being too tricky for the engineers we were targeting. But it's not tricky in a solver:

```minizinc
include "globals.mzn";

array[int] of int: numbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];
array[index_set(numbers)] of var {0, -1, 1}: choices;

constraint sum(n in index_set(numbers)) (numbers[n] * choices[n]) = 0;
constraint count(choices, -1) + count(choices, 1) = 3;

solve satisfy;
```

Okay, one last one, a problem I saw last year at Chipy AlgoSIG. Basically they pick some leetcode problems and we all do them. I failed to solve this one:

> Given an array of integers heights representing the histogram's bar heights, where the width of each bar is 1, return the area of the largest rectangle in the histogram.
The "proper" solution is a tricky thing involving tracking lots of bookkeeping states, which you can completely bypass by expressing it as constraints: array[int] of int: numbers = [2,1,5,6,2,3]; var 1..length(numbers): x; var 1..length(numbers): dx; var 1..: y; constraint x + dx <= length(numbers); constraint forall (i in x..(x+dx)) (y <= numbers[i]); var int: area = (dx+1)*y; solve maximize area; output ["(\(x)->\(x+dx))*\(y) = \(area)"] There's even a way to automatically visualize the solution (using vis_geost_2d), but I didn't feel like figuring it out in time for the newsletter. Is this better? Now if I actually brought these questions to an interview the interviewee could ruin my day by asking "what's the runtime complexity?" Constraint solvers runtimes are unpredictable and almost always than an ideal bespoke algorithm because they are more expressive, in what I refer to as the capability/tractability tradeoff. But even so, they'll do way better than a bad bespoke algorithm, and I'm not experienced enough in handwriting algorithms to consistently beat a solver. The real advantage of solvers, though, is how well they handle new constraints. Take the stock picking problem above. I can write an O(n²) algorithm in a few minutes and the O(n) algorithm if you give me some time to think. Now change the problem to Maximize the profit by buying and selling up to max_sales stocks, but you can only buy or sell one stock at a given time and you can only hold up to max_hold stocks at a time? That's a way harder problem to write even an inefficient algorithm for! While the constraint problem is only a tiny bit more complicated: include "globals.mzn"; int: max_sales = 3; int: max_hold = 2; array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]; array [1..max_sales] of var int: buy; array [1..max_sales] of var int: sell; array [index_set(prices)] of var 0..max_hold: stocks_held; var int: profit = sum(s in 1..max_sales) (prices[sell[s]] - prices[buy[s]]); constraint forall (s in 1..max_sales) (sell[s] > buy[s]); constraint profit > 0; constraint forall(i in index_set(prices)) (stocks_held[i] = (count(s in 1..max_sales) (buy[s] <= i) - count(s in 1..max_sales) (sell[s] <= i))); constraint alldifferent(buy ++ sell); solve maximize profit; output ["buy at \(buy)\n", "sell at \(sell)\n", "for \(profit)"]; Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration. And you get more interesting opportunities to teach optimizations, like symmetry breaking. Because my dad will email me if I don't explain this: "leetcode" is slang for "tricky algorithmic interview questions that have little-to-no relevance in the actual job you're interviewing for." It's from leetcode.com. ↩