Git's log and diff commands are useful for inspecting your repository changes. Both commands accept ranges of commits in different formats, which can be confusing. In this post, I will shed some light on the differences between the a b, a..b and a...b commit ranges. Check out the repository that I will be using as an example.

This is part 2 of my Git explained series. Part 1: Rewriting history. Part 2: Commit ranges.

git log

The git log command lists all commits that are reachable from a certain commit:

git log feature

You can also specify multiple commits separated by a space, which will list all commits that are reachable from any of them:

git log main feature

You might want to exclude certain commits from git log. The following commands are equivalent and will list all commits that are reachable from feature but not from main:

git log main..feature
git log ^main feature
git log feature --not main

Another special notation is the triple dot, which excludes the common ancestor of two...
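As a hedged sketch of the standard semantics, using the same main and feature branches as the examples above:

```sh
# Double dot: commits reachable from feature but not from main
git log main..feature

# Triple dot: commits reachable from either branch, excluding
# those reachable from both (i.e., their common ancestors)
git log main...feature
```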
over a year ago


More from Darek Kay

Grab browser links and titles in one click

When I copy a browser tab URL, I often want to also keep the title. Sometimes I want to use the link as rich text (e.g., when pasting the link into OneNote or Jira). Sometimes I prefer a Markdown link. There are browser extensions to achieve this task, but I don't want to introduce potential security issues. Instead, I've written a bookmarklet based on this example extension. To use it, drag the following link onto your browser bookmarks bar: Copy Tab

When you click the bookmark(let), the current page including its title will be copied into your clipboard. You don't even have to choose the output format: the link is copied both as rich text and plain text (Markdown). This works because it's possible to write multiple values into the clipboard with different content types. Here's the source code:

```js
function escapeHTML(str) {
  return String(str)
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function copyToClipboard({ url, title }) {
  function onCopy(event) {
    document.removeEventListener("copy", onCopy, true);
    // hide the event from the page to prevent tampering
    event.stopImmediatePropagation();
    event.preventDefault();
    const linkAsMarkdown = `[${title}](${url})`;
    event.clipboardData.setData("text/plain", linkAsMarkdown);
    // escape the title as well, so markup in page titles can't break the link
    const linkAsHtml = `<a href="${escapeHTML(url)}">${escapeHTML(title)}</a>`;
    event.clipboardData.setData("text/html", linkAsHtml);
  }
  document.addEventListener("copy", onCopy, true);
  document.execCommand("copy");
}

copyToClipboard({ url: window.location.toString(), title: document.title });
```
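The same multi-format write can also be sketched with the async Clipboard API (my own addition, not part of the original post; it requires a secure context and a user gesture, which makes it less convenient for bookmarklets):

```js
// Hypothetical alternative using the async Clipboard API.
// Reuses escapeHTML() from the bookmarklet source above.
async function copyTabModern({ url, title }) {
  const markdown = `[${title}](${url})`;
  const html = `<a href="${escapeHTML(url)}">${escapeHTML(title)}</a>`;
  // One ClipboardItem can carry several representations at once.
  await navigator.clipboard.write([
    new ClipboardItem({
      "text/plain": new Blob([markdown], { type: "text/plain" }),
      "text/html": new Blob([html], { type: "text/html" }),
    }),
  ]);
}
```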

6 months ago 80 votes
Open Graph images: Format compatibility across platforms

While redesigning my photography website, I looked into Open Graph (OG) images, which are displayed when sharing a link on social media or messaging apps. Here's an example from WhatsApp: [screenshot omitted]. For each photo that I publish, I create a WebP thumbnail for the gallery. I wanted to use those as OG images, but WebP support was lacking, so I've been creating an additional JPG variant just for Open Graph. I was interested in seeing how things have changed in the last 2.5 years. I've tested the following platforms: WhatsApp, Telegram, Signal, Discord, Slack, Teams, Facebook, LinkedIn, Xing, Bluesky, Threads and Phanpy (Mastodon). Here are the results:

- All providers support JPEG and PNG.
- All providers except Teams and Xing support WebP.
- No provider except Facebook supports AVIF. WhatsApp displays the AVIF image, but the colors are broken.
- "X, formerly Twitter" didn't display OG images for my test pages at all. I don't care about that platform, so I didn't bother to investigate further.

Those results confirmed that I could now use WebP Open Graph images without creating an additional JPG file.
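For reference, Open Graph images are declared with meta tags like these (a generic sketch with placeholder URLs and dimensions, not markup from the post):

```html
<!-- Open Graph image tags; URL and dimensions are placeholders -->
<meta property="og:image" content="https://example.com/photo-thumb.webp" />
<meta property="og:image:type" content="image/webp" />
<meta property="og:image:width" content="1200" />
<meta property="og:image:height" content="630" />
```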

7 months ago 79 votes
A guide to bookmarklets

I'm a frequent user of bookmarklets. As I'm sharing some of them on my blog, I wrote this post to explain what bookmarklets are and how to use them. In short, a bookmarklet is a browser bookmark containing JavaScript code. Clicking the bookmark executes the script in the context of the current web page, allowing users to perform tasks such as modifying the appearance of a webpage or extracting information. Bookmarklets are a simpler, more lightweight alternative to browser extensions, Chrome snippets, and userscripts.

How to add a bookmarklet?

Here's an example to display a browser dialog with the title of the current web page: Display page title. You can click the link to see what it does. To run this script on other websites, we have to save it as a bookmarklet. My preferred way is to drag the link onto the bookmarks toolbar. (Animated demo: a link on a web page is dragged and dropped onto a browser bookmark bar; a bookmark creation dialog appears; the prompt is confirmed and closed; the created bookmarklet is clicked; the current web page title is displayed in a browser dialog.) Another way is to right-click the link to open its context menu: in Firefox, you can then select "Bookmark Link…". Other browsers make it a little more difficult: select "Copy Link (Address)", manually create a new bookmark, and then paste the copied URL as the link target. Once created, you can click the bookmark(let) on any web page to display its title. Scroll further down to see more useful use cases.

How to write a bookmarklet?

Let's start with the code for the previous bookmarklet example:

window.alert(document.title)

To turn that script into a bookmarklet, we have to put javascript: in front of it:

javascript:window.alert(document.title)

To keep our code self-contained, we should wrap it in an IIFE (immediately invoked function expression):

javascript:(() => {
  window.alert(document.title)
})()

Finally, you might have to URL-encode your bookmarklet if you run into issues with special characters:

javascript:%28%28%29%20%3D%3E%20%7B%0A%20%20window.alert%28document.title%29%0A%7D%29%28%29

Useful bookmarklets

Here are some bookmarklets I've created:

- Debugger — Starts the browser DevTools debugger after 3 seconds, useful for debugging dynamic content changes.
- Log Focus Changes — Logs DOM elements when the focus changes.
- Design Mode — Makes the web page content-editable (toggle).
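As an illustration, the Debugger bookmarklet in the list above could be written like this (my reconstruction from its description, not the author's published source):

```js
javascript:(() => {
  // give yourself 3 seconds to reproduce the dynamic state,
  // then the debugger statement breaks if DevTools is open
  setTimeout(() => { debugger; }, 3000);
})()
```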

8 months ago 81 votes
Prevent data loss on page refresh

It can be frustrating to fill out a web form, only to accidentally refresh the page (or click "back") and lose all the hard work. In this blog post, I present a method to retain form data when the page is reloaded, which improves the user experience.

Browser behavior

Most browsers provide an autofill feature. In the example form below, enter anything into the input field. Then, try out the following: click the "Example link" and use the "back" functionality of your browser, or reload the page. [Interactive example: a "Query" text input and an "Example link".] Depending on your browser, the input value might be restored:

Browser     | Reload | Back
Firefox 130 | Yes    | Yes
Chrome 129  | No     | Yes
Safari 18   | No     | Yes

How does it work? I was surprised to learn that this autofill behavior is controlled via the autocomplete attribute, which is mostly used for value autocompletion from past web forms. However, if we disable the autocompletion, the autofill feature will be disabled as well:

<input autocomplete="off" />

To learn more about the behavior, read the full spec on persisted history entry state.

Preserving application state

Even with autofill, no browser will restore dynamic changes previously triggered by the user. In the following example, the user always has to press the "Search" button to view the results. [Interactive example: a search form whose results are rendered by a script analogous to the demo code shown further below.]

If the web page changes its content after user interaction, it might be a good idea to restore the UI state after the page has been refreshed. For example, it's useful to restore previous search results for an on-site search. Note that Chrome will fire a change event on inputs, but this is considered a bug, as the respective spec has been updated.

Storing form values

As the form value might be lost on reload, we need to store it temporarily. Some common places to store data include local storage, session storage, cookies, query parameters or the URL hash. They all come with drawbacks for our use case, though. Instead, I suggest using the browser history state, which has several advantages:

- We get data separation between multiple browser tabs with no additional effort.
- The data is automatically cleaned up when the browser tab is closed.
- We don't pollute the URL or trigger page reloads.

Let's store the search input value as query:

```js
document.querySelector("form").addEventListener("submit", (event) => {
  event.preventDefault();
  const inputElement = document.querySelector("input");
  history.replaceState({ query: inputElement.value }, "");
  performSearch();
});
```

This example uses the submit event to store the data, which fits our "search" use case. Using replaceState over pushState ensures that no unnecessary history entry is created. Note that the (unused) second argument of replaceState is mandatory; calling it with a single argument throws:

Uncaught TypeError: Failed to execute 'replaceState' on 'History': 2 arguments required, but only 1 present.

In a regular form, the input change event might be a better trigger to store form values, as in the sketch below.
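Here is a minimal sketch of that change-based variant (the input selector is a placeholder matching the snippets in this post):

```js
// Persist the field on every change instead of on submit.
document.querySelector("#my-input").addEventListener("change", (event) => {
  // keep any existing state and update only the query
  history.replaceState({ ...history.state, query: event.target.value }, "");
});
```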
Restoring form values

My first approach to restore form values was to listen to the pageshow event. Once it's fired, we can access the page load type from window.performance:

```js
window.addEventListener("pageshow", () => {
  const type = window.performance.getEntriesByType("navigation")[0].type;
  const query = history.state?.query;
  if (query && (type === "back_forward" || type === "reload")) {
    document.querySelector("#my-input").value = query;
    performSearch();
  }
});
```

I will keep the solution here in case someone needs it, but usually it is unnecessary to check the page load type. Because the history state is only set after the search form has been submitted, we can check the state directly:

```js
const query = history.state?.query;
if (query) {
  document.querySelector("#my-input").value = query;
  performSearch();
}
```

Demo

Here's an example combining both techniques to store and restore the input value:

```js
const inputElementCustom = document.querySelector("#example-preserve-input");
const outputElementCustom = document.querySelector("#example-preserve-output");

const performSearch_examplePreserve = (outputElement) => {
  outputElement.innerText = "";
  const introText = document.createTextNode("Open the result for ");
  outputElement.appendChild(introText);
  const link = document.createElement("a");
  link.href = "https://example.com";
  link.innerText = inputElementCustom.value || "no text";
  outputElement.appendChild(link);
};

document.querySelector("#example-preserve-form").addEventListener("submit", (event) => {
  event.preventDefault();
  performSearch_examplePreserve(outputElementCustom);
  history.replaceState({ query: inputElementCustom.value }, "");
});

const historyQuery = history.state?.query;
if (historyQuery) {
  document.querySelector("#example-preserve-input").value = historyQuery;
  performSearch_examplePreserve(outputElementCustom);
}
```

Conclusion

Preserving form data on page refresh is a small but impactful way to improve user satisfaction. The default browser autofill feature handles only basic use cases, so ideally we should maintain the form state ourselves. In this blog post, I've explained how to use the browser history state to temporarily store and retrieve form values.

9 months ago 82 votes
Web push notifications: issues and limitations

In this post, I will summarize some problems and constraints that I've encountered with the Notifications and Push web APIs.

Notification settings on macOS

Someone who's definitely not me wasted half an hour wondering why triggered notifications would not appear. On macOS, make sure to enable system notifications for your browsers. Open "System Settings" → "Notifications". For each browser, select "Allow notifications" and set the appearance to "Alerts".

Onchange listener not called

Web APIs offer a way to subscribe to change events. This is especially useful in React:

```js
navigator.permissions
  .query({ name: "push", userVisibleOnly: true })
  .then((status) => {
    status.onchange = function () {
      // synchronize permission status with local state
      setNotificationPermission(this.state);
    };
  });
```

Whenever the notification permission changes (either through our application logic or via browser controls), we can synchronize the UI in real time according to the current permission value (prompt, denied or granted). However, due to a Firefox bug, the event listener callback is never called. This means that we can't react to permission changes via browser controls in Firefox. That's especially unfortunate when combined with push messages, where we want to subscribe the user once they grant the notification permission. One workaround is to check at page load whether the notification permission is granted without a valid subscription, and resubscribe the user (see the sketch at the end of this post).

Notification image not supported

Browser notifications support an optional image property. This property is marked as "experimental", so it's not surprising that some browsers (Firefox, Safari) don't support it. There is a feature request to add support in Firefox, but it has been open since 2019.

VAPID contact information required

When sending a push message, we have to provide VAPID parameters (e.g., the public and private key). According to the specification, the sub property (contact email or link) is optional:

If the application server wishes to provide contact details, it MAY include a "sub" (Subject) claim in the JWT.

Despite this specification, the Mozilla push message server will return an error if the subject is missing:

401 Unauthorized for (...) and subscription https://updates.push.services.mozilla.com/wpush/v2/…

You might not encounter this issue when using the popular web-push npm package, as its API encourages you to provide the subject as the first parameter:

```js
webpush.setVapidDetails("hello@example.com", publicKey, privateKey);
```

However, in the webpush-java library, you need to set the subject explicitly:

```java
builder.subject("hello@example.com");
```

There is an open issue with more information about this problem.

Microsoft Edge pitfalls

Microsoft introduced adaptive notification requests in the Edge browser. It is a crowdsourced scoring system, which may auto-accept or auto-reject notification requests. The behavior can be changed in the Edge notification settings. Additionally, on a business or school device, those settings might be fixed, displaying the following tooltip: "This setting is managed by your organization."
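A hedged sketch of the resubscribe workaround mentioned under "Onchange listener not called" (the service worker registration and the VAPID application server key are assumed to already exist; this is not code from the post):

```js
// On page load: if permission is granted but no push subscription exists
// (e.g., because Firefox never fired the onchange callback), resubscribe.
async function ensurePushSubscription(registration, applicationServerKey) {
  if (Notification.permission !== "granted") {
    return null; // nothing to do until the user grants permission
  }
  const existing = await registration.pushManager.getSubscription();
  if (existing) {
    return existing;
  }
  return registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey,
  });
}
```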

10 months ago 81 votes

More in programming

Logical Quantifiers in Software

I realize that for all I've talked about Logic for Programmers in this newsletter, I never once explained basic logical quantifiers. They're both simple and incredibly useful, so let's do that this week!

Sets and quantifiers

A set is a collection of unordered, unique elements. {1, 2, 3, …} is a set, as are "every programming language", "every programming language's Wikipedia page", and "every function ever defined in any programming language's standard library". You can put whatever you want in a set, with some very specific limitations to avoid certain paradoxes.[2] Once we have a set, we can ask "is something true for all elements of the set?" and "is something true for at least one element of the set?" IE, is it true that every programming language has a set collection type in the core language? We would write it like this:

# all of them
all l in ProgrammingLanguages: HasSetType(l)

# at least one
some l in ProgrammingLanguages: HasSetType(l)

This is the notation I use in the book because it's easy to read, type, and search for. Mathematicians historically had a few different formats; the one I grew up with was ∀x ∈ set: P(x) to mean all x in set, and ∃ to mean some. I use these when writing for just myself, but find them confusing to programmers when communicating. "All" and "some" are respectively referred to as "universal" and "existential" quantifiers.

Some cool properties

We can simplify expressions with quantifiers, in the same way that we can simplify !(x && y) to !x || !y. First of all, quantifiers are commutative with themselves: some x: some y: P(x,y) is the same as some y: some x: P(x,y). For this reason we can write some x, y: P(x,y) as shorthand. We can even do this when quantifying over different sets, writing some x, x' in X, y in Y instead of some x, x' in X: some y in Y. We can not do this with "alternating quantifiers": all p in Person: some m in Person: Mother(m, p) says that every person has a mother; some m in Person: all p in Person: Mother(m, p) says that someone is every person's mother.

Second, existentials distribute over || while universals distribute over &&. "There is some url which returns a 403 or 404" is the same as "there is some url which returns a 403 or some url that returns a 404", and "all PRs pass the linter and the test suites" is the same as "all PRs pass the linter and all PRs pass the test suites".

Finally, some and all are duals: some x: P(x) == !(all x: !P(x)), and vice versa. Intuitively: if some file is malicious, it's not true that all files are benign.

All these rules together mean we can manipulate quantifiers almost as easily as we can manipulate regular booleans, putting them in whatever form is easiest to use in programming. Speaking of which, how do we use this in programming?

How we use this in programming

First of all, people clearly have a need for directly using quantifiers in code. If we have something of the form:

for x in list:
    if P(x):
        return true
return false

That's just some x in list: P(x). And this is a prevalent pattern, as you can see by using GitHub code search. It finds over 500k examples of this pattern in Python alone! That can be simplified by using the language's built-in quantifiers: the Python would be any(P(x) for x in list). (Note this is not quantifying over sets but iterables. But the idea translates cleanly enough.)

More generally, quantifiers are a key way we express higher-level properties of software. What does it mean for a list to be sorted in ascending order? That all i, j in 0..<len(l): if i < j then l[i] <= l[j]. When should a ratchet test fail? When some f in functions - exceptions: Uses(f, bad_function). Should the image classifier work upside down? all i in images: classify(i) == classify(rotate(i, 180)). These are the properties we verify with tests and types and MISU and whatnot;[1] it helps to be able to make them explicit!
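Here's a quick Python sketch of these properties (my own illustration; the helper names are made up):

```python
# "Is l sorted ascending?" — all i, j with i < j must satisfy l[i] <= l[j].
def is_sorted(l):
    n = len(l)
    return all(l[i] <= l[j] for i in range(n) for j in range(n) if i < j)

# Duality: some x: P(x) == !(all x: !P(x)).
def some_matches(xs, p):
    return not all(not p(x) for x in xs)

assert is_sorted([1, 2, 2, 3])
assert not is_sorted([2, 1])
assert some_matches([1, 2, 3], lambda x: x == 2) == any(x == 2 for x in [1, 2, 3])
```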
One cool use case that'll be in the book's next version: database invariants are universal statements over the set of all records, like all a in accounts: a.balance > 0. That's enforceable with a CHECK constraint. But what about something like all i, i' in intervals: NoOverlap(i, i')? That isn't covered by CHECK, since it spans two rows. Quantifier duality to the rescue! The invariant is equivalent to !(some i, i' in intervals: Overlap(i, i')), so it is preserved if the query SELECT COUNT(*) FROM intervals CROSS JOIN intervals … returns 0 rows. This means we can test it via a database trigger.[3]

There are a lot more use cases for quantifiers, but this is enough to introduce the ideas! Next week's the one-year anniversary of the book entering early access, so I'll be writing a bit about that experience and how the book changed. It's crazy how crude v0.1 was compared to the current version.

[1] MISU ("make illegal states unrepresentable") means using data representations that rule out invalid values. For example, if you have a location -> Optional(item) lookup and want to make sure that each item is in exactly one location, consider instead changing the map to item -> location. This is a means of implementing the property all i in item, l, l' in location: if ItemIn(i, l) && l != l' then !ItemIn(i, l'). ↩

[2] Specifically, a set can't be an element of itself, which rules out constructing things like "the set of all sets" or "the set of sets that don't contain themselves". ↩

[3] Though note that when you're inserting or updating an interval, you already have that row's fields in the trigger's NEW keyword. So you can just query !(some i in intervals: Overlap(new, i)), which is more efficient. ↩

15 hours ago 2 votes
Setting Element Ordering With HTML Rewriter Using CSS

After shipping my work transforming HTML with Netlify's edge functions, I realized I have a little bug: the order of the icons specified in the URL doesn't match the order in which they are displayed on screen. Why's this happening? I have a bunch of links in my HTML document, like this:

```html
<icon-list>
  <a href="/1/">…</a>
  <a href="/2/">…</a>
  <a href="/3/">…</a>
  <!-- 2000+ more -->
</icon-list>
```

I use html-rewriter in my edge function to strip out the HTML for icons not specified in the URL. So for a request to /lookup?id=1&id=2, my HTML will be transformed like so:

```html
<icon-list>
  <!-- Parser keeps these two -->
  <a href="/1/">…</a>
  <a href="/2/">…</a>
  <!-- But removes this one -->
  <a href="/3/">…</a>
</icon-list>
```

Resulting in less HTML over the wire to the client. But what about the order of the IDs in the URL? What if the request is to /lookup?id=2&id=1 instead of /lookup?id=1&id=2? In the source HTML document containing all the icons, they're marked up in reverse chronological order. But the request for this page may specify a different order for icons in the URL. So how do I rewrite the HTML to match the URL's ordering?

The problem is that html-rewriter doesn't give me a fully-parsed DOM to work with. I can't do things like "move this node to the top" or "move this node to position x". With html-rewriter, you only "see" each element as it streams past. Once it passes by, your chance at modifying it is gone. (It seems that's just the way these edge function tools are designed to work; it keeps them lean and performant, and I can't shoot myself in the foot.)

So how do I change the icon's display order to match what's in the URL if I can't modify the order of the elements in the HTML? CSS to the rescue! Because my markup is just a bunch of <a> tags inside a custom element and I'm using CSS grid for layout, I can use the order property in CSS! All the IDs are in the URL, and their position as parameters has meaning, so I assign their ordering to each element as it passes by html-rewriter. Here's some pseudo code:

```js
// Get all the IDs in the URL
const ids = url.searchParams.getAll("id");

// Select all the icons in the HTML
rewriter.on("icon-list a", {
  element: (element) => {
    // Get the ID
    const id = element.getAttribute("id");
    // If it's in our list, set its order
    // position from the URL
    if (ids.includes(id)) {
      const order = ids.indexOf(id);
      element.setAttribute("style", `order: ${order}`);
    // Otherwise, remove it
    } else {
      element.remove();
    }
  },
});
```

Boom! I didn't have to change the order in the source HTML document, but I can still get the display ordering to match what's in the URL. I love shifty little workarounds like this!
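For completeness, the CSS side of this trick (a minimal sketch, not the site's actual stylesheet): order only affects grid and flex items, so the custom element needs to establish a grid (or flex) container:

```css
/* order (set inline by the edge function) only applies to grid/flex
   items, so the container must establish a grid formatting context */
icon-list {
  display: grid;
}
```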

16 hours ago 2 votes
The missing part of Espressif’s reset circuit

In the previous article, we peeked at the reset circuit of ESP-Prog with an oscilloscope, and reproduced it with basic components. We observed that it did not behave quite as expected. In this article, we'll look into the missing pieces. An incomplete circuit: For a hint, we'll first look a bit more closely at the …

15 hours ago 2 votes
clamp / median / range

Here are a few tangentially-related ideas vaguely near the theme of comparison operators: comparison style; clamp style; clamp is median; clamp in range; range style; style clash?

comparison style

Some languages such as BCPL, Icon, Python have chained comparison operators, like:

if min <= x <= max: ...

In languages without chained comparison, I like to write comparisons as if they were chained, like:

if min <= x && x <= max { // ... }

A rule of thumb is to prefer less-than (or equal) operators and avoid greater-than. In a sequence of comparisons, order values from (expected) least to greatest.

clamp style

The clamp() function ensures a value is between some min and max:

```python
def clamp(min, x, max):
    if x < min: return min
    if max < x: return max
    return x
```

I like to order its arguments matching the expected order of the values, following my rule of thumb for comparisons. (I used that flavour of clamp() in my article about GCRA.) But I seem to be unusual in this preference, based on a few examples I have seen recently.

clamp is median

Last month, Fabian Giesen pointed out a way to resolve this difference of opinion: a function that returns the median of three values is equivalent to a clamp() function that doesn't care about the order of its arguments. This version is written so that it returns NaN if any of its arguments is NaN. (When an argument is NaN, both of its comparisons will be false.)

```rust
fn med3(a: f64, b: f64, c: f64) -> f64 {
    match (a <= b, b <= c, c <= a) {
        (false, false, false) => f64::NAN,
        (false, false, true) => b, // a > b > c
        (false, true, false) => a, // c > a > b
        (false, true, true) => c,  // b <= c <= a
        (true, false, false) => c, // b > c > a
        (true, false, true) => a,  // c <= a <= b
        (true, true, false) => b,  // a <= b <= c
        (true, true, true) => b,   // a == b == c
    }
}
```

When two of its arguments are constant, med3() should compile to the same code as a simple clamp(); but med3()'s misuse-resistance comes at a small cost when the arguments are not known at compile time.

clamp in range

If your language has proper range types, there is a nicer way to make clamp() resistant to misuse:

```rust
fn clamp(x: f64, r: RangeInclusive<f64>) -> f64 {
    let (&min, &max) = (r.start(), r.end());
    if x < min { return min }
    if max < x { return max }
    return x;
}

let x = clamp(x, MIN..=MAX);
```

range style

For a long time I have been fond of the idea of a simple counting for loop that matches the syntax of chained comparisons, like:

for min <= x <= max: ...

By itself this is silly: too cute and too ad-hoc. I'm also dissatisfied with the range or slice syntax in basically every programming language I've seen. I thought it might be nice if the cute comparison and iteration syntaxes were aspects of a more generally useful range syntax, but I couldn't make it work. Until recently, when I realised I could make use of prefix or mixfix syntax, instead of confining myself to infix. So now my fantasy pet range syntax looks like:

>= min < max   // half-open
>= min <= max  // inclusive

And you might use it in a pattern match:

if x is >= min < max { // ... }

Or as an iterator:

for x in >= min < max { // ... }

Or to take a slice:

xs[>= min < max]

style clash?

It's kind of ironic that these range examples don't follow the left-to-right, lesser-to-greater rule of thumb that this post started off with. (x is not lexically between min and max!) But that rule of thumb is really intended for languages such as C that don't have ranges. Careful stylistic conventions can help to avoid mistakes in nontrivial conditional expressions.
It’s much better if language and library features reduce the need for nontrivial conditions and catch mistakes automatically.
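A quick usage sketch for the med3() function above, checking the clamp equivalence and the NaN behaviour (my own examples, not the author's):

```rust
fn main() {
    // med3 behaves like clamp over [0.0, 10.0], whatever the argument order:
    assert_eq!(med3(0.0, 5.0, 10.0), 5.0);   // x already in range
    assert_eq!(med3(0.0, -3.0, 10.0), 0.0);  // clamped to min
    assert_eq!(med3(10.0, 0.0, 42.0), 10.0); // clamped to max
    // NaN propagates, as described above:
    assert!(med3(f64::NAN, 1.0, 2.0).is_nan());
}
```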

yesterday 2 votes
C++ engineering decisions in SumatraPDF code

SumatraPDF is a medium size (120k+ loc, not counting dependencies) Windows GUI (win32) C++ code base started by me and written by mostly 2 people. The goals of SumatraPDF are to be: fast, small, packed with features and yet with thoughtfully minimal UI. It's not just a matter of pride in craftsmanship of writing code. I believe being fast and small are a big reason for SumatraPDF's success. People notice when an app starts in an instant, because that's sadly not the norm in modern software. The engineering goals of SumatraPDF are: reliable (no crashes), and fast compilation to enable fast iteration. SumatraPDF has been successful achieving those objectives, so I'm writing up my C++ implementation decisions.

I know those decisions are controversial. Maybe not Terry Davis level of controversial, but still. You probably won't adopt them. Even if you wanted to, you probably couldn't. There's no way code like this would pass Google review. Not because it's bad but because it's different. Diverging from mainstream this much is only feasible if you have total control: it's your company or your own open-source project. If my ideas were just like everyone else's ideas, there would be little point in writing about them, would there?

Use UTF8 strings internally

My app only runs on Windows, and a string native to Windows is WCHAR*, where each character consumes 2 bytes. Despite that, I mostly use char* assumed to be utf8-encoded. I only decided on that after lots of code was written, so it was a refactoring odyssey that is still ongoing. My initial impetus was to be able to compile non-GUI parts under Linux and Mac. I abandoned that goal but I think that's a good idea anyway. WCHAR* strings are 2x larger than char*. That's more memory used, which also makes the app slower. Binaries are bigger if string constants are WCHAR*. The implementation rule is simple: I only convert to WCHAR* when calling Windows API. When Windows API returns WCHAR*, I convert it to utf-8.

No exceptions

Do you want to hear a joke? "Zero-cost exceptions". Throwing and catching exceptions generate bloated code. Exceptions are a non-local control flow that makes it hard to reason about a program. Every memory allocation becomes a potential leak. But RAII, you protest. RAII is a "solution" to a problem created by exceptions. How about I don't create the problem in the first place.

Hard core #include discipline

I wrote about it in depth.

My objects are not shy

I don't bother with private and protected. struct is just class with guts exposed by default, so I use that. While intellectually I understand the reasoning behind hiding implementation details, in practice it becomes busy work of typing noise and then even more typing when you change your mind about visibility. I'm the only person working on the code so I don't need to force those of lesser intellect to write the code properly.

My objects are shy

At the same time I minimize what goes into a class, especially methods. The smaller the class, the faster the build. A common problem is adding too many methods to a class. You have a StrVec class for an array of strings. A lesser programmer is tempted to add a Join(const char* sep) method to StrVec. A wise programmer makes it a stand-alone function: Join(const StrVec& v, const char* sep). This is enabled by making everything in a class public. If you limit visibility, you then have to use friend to allow the Join() function access to what it needs. Another example of a "solution" to self-inflicted problems.
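A hedged sketch of that stand-alone Join() (StrVec is SumatraPDF's custom class; a std::vector stand-in is used here, so this is not the actual implementation):

```cpp
#include <string>
#include <vector>

// Stand-in for SumatraPDF's custom StrVec class.
using StrVec = std::vector<std::string>;

// A free function instead of a StrVec method: it only needs public state.
std::string Join(const StrVec& v, const char* sep) {
    std::string res;
    for (int i = 0; i < (int)v.size(); i++) {
        if (i > 0) {
            res += sep;
        }
        res += v[i];
    }
    return res;
}
```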
Minimize #ifdef

#ifdef is problematic because it creates code paths that I don't always build. I provide arm64, intel 32-bit and 64-bit builds but typically only develop with the 64-bit intel build. Every #ifdef that branches on architecture introduces potential for a compilation error which I'll only know about when my daily ci build fails. Consider 2 possible implementations of IsProcess64Bit():

Bad:

```cpp
bool IsProcess64Bit() {
#ifdef _WIN64
    return true;
#else
    return false;
#endif
}
```

Good:

```cpp
bool IsProcess64Bit() {
    return sizeof(uintptr_t) == 8;
}
```

The bad version has a bug: it was correct when I was only doing intel builds but became buggy when I added arm64 builds. This conflicts with the goal of smallest possible size but it's worth it.

Stress testing

SumatraPDF supports a lot of very complex document and image formats. Complex formats require complex code that is likely to have bugs. I also have lots of files in those formats. I've added stress testing functionality where I point SumatraPDF to a folder with files and tell it to render all of them. For greater coverage, I also simulate some of the possible UI actions users can take, like searching, switching view modes etc.

Crash reporting

I wrote about it in depth.

Heavy use of CrashIf()

C/C++ programmers are familiar with the assert() macro. CrashIf() is my version of that, tailored to my needs. The purpose of assert / CrashIf is to add checks to detect incorrect use of APIs or invalid states in the program. For example, if the code tries to access an element of an array at an invalid index (negative or larger than the size of the array), it indicates a bug in the program. I want to be notified about such bugs both when I test SumatraPDF and when it runs on users' computers. As the name implies, it'll crash (by de-referencing a null pointer) and therefore generate a crash report (a sketch of such a macro appears below). It's enabled in debug and pre-release builds but not in release builds. Release builds have many, many users so I worry about too many crash reports.

premake to generate Visual Studio solution

Visual Studio uses XML files as a list of files in the project and build format. The format is impossible to work with in a text editor, so you have no choice but to use Visual Studio to edit the project / solution. To add a new file: find the right UI element, click here, click there, pick a file using the file picker, click again. To change a compilation setting of a project or a file? Find the right UI element, click here, click there, type this, confirm that. You accidentally changed compilation settings of 1 file out of a hundred? Good luck figuring out which one: go over all files in the UI one by one. In other words: managing project files using the Visual Studio UI is a nightmare. Premake is a solution. It's a meta-build system. You define your build using lua scripts, which look like text configuration files. Premake can then generate Visual Studio projects, XCode projects, makefiles etc. That's the meta part. It was truly a life saver on a project with lots of files (SumatraPDF's own are over 300, many times more for third party libraries).

Using /analyze and cppcheck

cppcheck and the /analyze flag in cl.exe are tools to find bugs in C++ code via static analysis. They are like a C++ compiler, but instead of generating code, they analyze control flow in a program to find potential problems. It's a cheap way to find some bugs, so there's no excuse not to run them from time to time on your code.
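A hedged sketch of a CrashIf()-style macro as described above (not the actual SumatraPDF implementation; the DEBUG/PRE_RELEASE guard macros are placeholders):

```cpp
// Enabled in debug and pre-release builds; compiled out in release builds.
// Crashes deliberately via a null-pointer write so the crash reporter fires.
#if defined(DEBUG) || defined(PRE_RELEASE)
#define CrashIf(cond)              \
    do {                           \
        if (cond) {                \
            *(volatile int*)0 = 0; \
        }                          \
    } while (0)
#else
#define CrashIf(cond) ((void)0)
#endif
```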
Using asan builds

Address Sanitizer (asan) is a compiler flag (/fsanitize=address) that instruments the code with checks for common memory-related bugs, like using an object after freeing it, over-writing values on the stack, freeing an object twice, or writing past allocated memory. The downside of this instrumentation is that the code is much slower, due to the overhead of the instrumentation. I've created a project for a release build with asan and run it occasionally, especially in stress tests.

Write for the debugger

Programmers love to code golf, i.e. put as much code on one line as possible. As if lines of code were expensive. Many would write:

Bad:

```cpp
// ...
return (char*)(start + offset);
```

I write:

Good:

```cpp
// ...
char* s = (char*)(start + offset);
return s;
```

Why? Imagine you're in a debugger stepping through a debug build of your code. The second version makes it trivial to set a breakpoint at the return s line and look at the value of s. The first doesn't. I don't optimize for the smallest number of lines of code but for how easy it is to inspect the state of the program in the debugger. In practice it means that I intentionally create intermediary variables like s in the example above.

Do it yourself standard library

I'm not using STL. Yes, I wrote my own string and vector classes. There are several reasons for that.

Historical reason: When I started SumatraPDF over 15 years ago, STL was crappy.

Bad APIs: Today STL is still crappy. STL implementations improved greatly but the APIs still suck. There's no API to insert something in the middle of a string or a vector. I understand the intent of separating data structures and algorithms, but I'm a pragmatist, and to my pragmatist eyes v.insert(v.begin(), myarray, myarray+3); is just stupid compared to v.insert(3, el).

Code bloat: STL is bloated. Heavy use of templates leads to lots of generated code, i.e. surprisingly large binaries for a supposedly low-level language. That bloat is invisible, i.e. you won't know unless you inspect the generated binaries, which no one does. The bloat is out of my control. Even if I notice, I can't fix STL classes. All I can do is write my non-bloaty alternative, which is what I did.

Slow compilation times: Compilation of C code is not fast, but it feels zippy compared to compilation of C++ code. Heavy use of templates is a big part of it. STL implementations are over-templatized and need to provide all the C++ support code (operators, iterators etc.). As a pragmatist, I only implement the absolute minimum functionality I use in my code. I minimize the use of templates. For example, Str and WStr could be a single template but are 2 implementations.

I don't understand C++: I understand the subset of C++ I use, but the whole of C++ is impossibly complicated. For example, I've read a bunch about std::move() and I'm not confident I know how to use it correctly, and that's just one of many complicated things in C++. C++ is too subtle and I don't want my code to be a puzzle.

Possibility of optimized implementations: I wrote a StrVec class that is optimized for storing a vector of strings. It's more efficient than std::vector<std::string> by a large margin and I use it extensively.

Temporary allocator and pool allocators: I use temporary allocators heavily. They make the code faster and smaller. Technically STL has support for non-standard allocators, but the API is so bad that I would rather not. My temporary allocator and pool allocators are very small and simple, and I can add support for them only when beneficial.
Minimize unsigned int

STL and the standard C library like to use size_t and other unsigned integers. I think it was a mistake. Go shows that you can just use int. Having two types leads to a cast-apalooza. I don't like visual noise in my code. Unsigned types are also more dangerous: when you subtract, you can end up with a bigger value. Indexing from the end is subtle: for (size_t i = n; i >= 0; i--) is buggy, because i >= 0 is always true for an unsigned type (see the sketch at the end of this post). Sadly, I only realized this recently, so there's a lot of code still to refactor to change the use of size_t to int.

Mostly raw pointers

No std::unique_ptr for me.

Warnings are errors

C++ makes a distinction between compilation errors and compilation warnings. I don't like sloppy code and polluting build output with warning messages, so for my own code I use a compiler flag that turns warnings into errors, which forces me to fix the warnings.
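A sketch of the unsigned-index pitfall described above (my example, not from the post):

```cpp
#include <cstdio>
#include <cstddef>

// Buggy: i is unsigned, so "i >= 0" is always true. When i reaches 0,
// i-- wraps around to SIZE_MAX and the next items[i] access is out of bounds.
void printBackwardsBad(const char** items, size_t n) {
    for (size_t i = n - 1; i >= 0; i--) {
        puts(items[i]);
    }
}

// Fine: with a signed index the condition fails once i becomes -1.
void printBackwardsGood(const char** items, int n) {
    for (int i = n - 1; i >= 0; i--) {
        puts(items[i]);
    }
}
```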

yesterday 2 votes