More from Marco.org
Ten years ago, Apple’s Phil Schiller surprised Apple enthusiasts and developers by walking out on stage at John Gruber’s The Talk Show Live WWDC event and giving an open, human, honest interview to a somewhat jaded community. I wrote this in response:

Both Apple and Phil Schiller himself took a huge risk in doing this. That they agreed at all is a noteworthy gift to this community of long-time enthusiasts, many of whom have felt under-appreciated as the company has grown. […] Phil’s appearance on the show was warm, genuine, informative, and entertaining. It was human. And humanizing the company and its decisions, especially to developers — remember, developer relations is all under Phil — might be worth the PR risk.

This started a ten-year run of interviews by Apple executives on The Talk Show every year at WWDC that proved to be great, surprisingly safe PR for Apple. No executive ever said something they shouldn’t have (they’re pros), no sensational or negative news stories ever resulted from them, and Apple’s enthusiastic fans and developers felt seen, heard, and appreciated.

* * *

For unspecified reasons, Apple has declined to participate this year, ending what had become a beloved tradition in our community — and I can’t help but suspect that it won’t come back. (A lot has changed in the meantime.)

Maybe Apple has good reasons. Maybe not. We’ll see what their WWDC PR strategy looks like in a couple of weeks. In the absence of any other information, it’s easy to assume that Apple no longer wants its executives to be interviewed in a human, unscripted, unedited context that may contain hard questions, and that Apple no longer feels it necessary to show their appreciation to our community and developers in this way. I hope that’s either not the case, or it doesn’t stay the case for long.

This will be the first WWDC I’m not attending since 2009 (excluding the remote 2020 one, of course). Given my realizations about my relationship with Apple and how they view developers, I’ve decided that it’s best for me to take a break this year, gain some perspective, and decide what my future relationship should look like.

Maybe Apple’s leaders are doing that, too.
Today, on the tenth anniversary of Overcast 1.0, I’m happy to launch a complete rewrite and redesign of most of the iOS app, built to carry Overcast into the next decade — and hopefully beyond.

Like podcasts better than blog posts? Listen to ATP #596 for more!

What’s new

- Much faster, more responsive, more reliable, and more accessible.
- Modern design, optimized for easily reached controls on today’s phone sizes.
- Improvements throughout, such as undoing large seeks, new playlist-priority options, easier navigation, and more.

What’s not

- Most features. Overcast is still Overcast!
- The audio engine. It’s the best part of Overcast, and still leads the industry in sound quality, silence skipping, and volume normalization. (More soon!)
- The business. I’m still a one-person operation, with no funding or external ownership, serving only my customers.
- My principles. I always want to make the best podcast app, and I’ll never disrespect your time, attention, or privacy.

What’s gone

- Streaming. Most big podcasts now use dynamic ad insertion, which causes bugs and problems for streaming playback.[1] Downloading episodes completely before they begin playback is much more reliable.

Tapping a non-downloaded episode will now open the playback screen, download it, then start playback. It works similarly to the way streaming did before, but playback begins after the download completes, not after a portion of it is buffered. On today’s fast networks, this usually only takes a few extra seconds.

And in the near future, I’ll be adding smarter options and more control over selective downloading of episodes to further improve the experience for people who don’t automatically download every episode.

What’s next

- The last few missing features from the old app, such as Shortcuts support, storage management, and OPML. These are absent now, but will return soon.
- More options for downloading and deleting episodes.
- Upgrading the Apple Watch app to the new, faster sync engine. (The Watch app is currently unchanged from the previous one.)
- And, of course, more features, including some of your most-requested features over the last decade.

Getting this rewrite out the door was a monumental task. Thank you for your patience as I work through this list!

Why?

Most of Overcast’s core code was 10 years old, which made it cumbersome or impossible to easily move with the times, adopt new iOS functionality, or add new features, especially as one person. That’s why there haven’t been many new features or changes in years. You saw it, and I saw it. I wasn’t able to serve my customers as well as I wanted.

For Overcast to have a future, it needed a modern foundation for its second decade. I’ve spent the past 18 months rebuilding most of the app with Swift, SwiftUI, Blackbird, and modern Swift concurrency. Now, development is rapidly accelerating. I’m more responsive, iterating more quickly, and ultimately making the app much better.

Thank you all so much for the first decade of Overcast. Here’s to the next one.

[1] Dynamic ad insertion (DAI) splices ads into each download, and no two downloads are guaranteed to have the same number or duration of ads. So, for example, if the first half of an episode downloads, then the download fails, and it downloads the second half with another request, the combined audio may jump forward or back at the halfway mark, losing or repeating content.
Overcast’s latest update (2022.2) brings the largest redesign in its nearly-eight-year history, plus many of the most frequently requested features and lots of under-the-hood improvements. I’m pretty proud of this one.

For this first and largest phase of the redesign, I focused on the home screen, playlist screen, typography, and spacing. (I plan to revamp the now-playing and individual-podcast screens in a later update.)

The home screen is radically different:

Home screen, before (left) and after (right).

Playlists now have strong visual identities for nicer and easier navigation. Each playlist has a customizable color, and a custom icon can be selected from over 3,000 SF Symbols to match modern iOS design and the other icons within Overcast. And playlists can be manually reordered with drag-and-drop.

Recently played and newly published episodes can now be displayed on the home screen for quick access, much like the widget and CarPlay experience.

Podcasts can now be pinned to the top of the home-screen list. Pinned podcasts can also be manually reordered with drag-and-drop.

I’ve also rethought the old stacked “Podcasts” and “Played Podcasts” sections to better match people’s needs and expectations. Now, the toggle atop the podcast list switches between three modes: podcasts with current episodes, all followed podcasts, and inactive podcasts (those that you don’t follow and therefore won’t get any more episodes from, or that haven’t posted a new episode in a long time).

The playlist screen’s structure remains mostly the same, while I’ve refined the design for the modern era:

Playlist screen, before (left) and after (right).

Here, it’s more apparent that I’ve replaced the system San Francisco font with an alternate variant, San Francisco Rounded, to increase legibility and better match the personality of the app.

I’ve also added highly demanded features:

- By far, Overcast’s most-requested feature is Mark as Played. That’s now available as a checkmark button on episode rows, as well as a left-side swipe action.
- The second-most-requested feature is a way to view all starred episodes. Special playlists for Starred, Downloaded, and In Progress can now be created.
- The light and dark themes now each have a customizable tint color from the modern iOS UI-color palette, including some favorites from beta testers.

And throughout the app, I’ve made tons of tweaks and bug fixes, including:

- Notifications and background downloads are now much more reliable.
- Episode downloads can now be individually deleted or re-downloaded.
- Links can now be opened in Safari. (under Nitpicky Details)
- Performance is now significantly better with very large playlists and collections.
- Fixed bugs with episode-duration detection, CarPlay lists, Mac-app sharing, and much more.

So much is better in this update that I can’t even remember it all. Thank you so much to everyone who helped me beta-test this massive update.

As always, Overcast is free in the App Store. Go get it!
Losing Steve affected me more than it probably should have, given that I never met him or had any correspondence with him. But losing him was devastating — not just to my world, but the world. He was a sort of virtual father figure: I was always hoping that maybe Steve would notice something I did. We all wanted his attention and approval, and that drove us to do better work — even those of us who never worked at Apple. Nobody replaced him in this role. Nobody can. But as an outsider who had no personal relationship with him to mourn, it has been most depressing to consider how much of his work the world missed out on. He wasn’t taken from us after a long, complete life — he was taken in his prime. He had so much more to offer the world.
After the dust settles from the developer class-action settlement, the South Korean law, the JFTC announcement, and the Apple v. Epic decision, I think the most likely long-term outcome isn’t very different from the status quo — and that’s a good thing.

Allowing external purchases

Here’s what I think we’ll end up with:

- Apple will still require apps to use their IAP system for any qualifying purchases that occur in the apps themselves.
- All app types will be allowed to link out to a browser for other purchase methods.
- Most apps will be required to also offer IAP side-by-side with any external methods.[1] Only “Reader apps” will be exempt from this requirement.[2]
- Apple will have many rules regarding the display, descriptions, and behavior of external purchases, many of which will be unpublished and ever-changing.
- App Review will be extremely harsh, inconsistent, capricious, petty, and punitive with their enforcement.[3]
- Apple won’t require price-matching between IAP and external purchases.

These few but important corrections reduce Apple’s worst behavior and should relieve most regulatory pressure. The result won’t look much different than the status quo:

- Most big media apps (qualifying as “reader” apps) won’t offer IAP, but will finally be allowed to link to their websites from their apps and offer purchases there.
- Many games will offer both IAP and external purchases, with the external choice offering a discount, bonus gems, extra loot boxes, or other manipulative tricks to optimize the profitability of casino games for children (commissions from which have been the largest portion of Apple’s “services revenue” to date).
- Most importantly, many products, services, and business models will become possible that previously weren’t, leading to more apps, more competition, and more money going to more places.

External purchase methods will evolve to be almost as convenient as IAP (especially if Apple Pay is permitted in this context), and payment processors will reduce the burden of manual credit-card entry with shared credentials available across multiple apps.

The payment-fraud doomsday scenarios argued by Apple and many fans mostly won’t happen, in part because App Review will prevent most obvious cases, but also because parents don’t typically offer their credit cards to untrustworthy children; and for buyers of all ages, most credit cards themselves provide stronger fraud prevention and easier recourse from unwanted charges than the App Store ever has.

No side-loading

I don’t expect side-loading or alternative app stores to become possible, and I’m relieved, because that is not a future I want for iOS.

When evaluating such ideas, I merely ask myself: “What would Facebook do?”

Facebook owns four of the top ten apps in the world. If side-loading became possible, Facebook could remove Instagram, WhatsApp, the Facebook app, and Messenger from Apple’s App Store, requiring customers to install these extremely popular apps directly from Facebook via side-loading.

And everyone would. Most people use a Facebook-owned app not because it’s a good app, but because it’s a means to an important end in their life. Social pressure, family pressure, and network lock-in prevent most users from seeking meaningful alternatives. People would jump through a few hoops if they had to.

Facebook would soon have apps that bypassed App Review installed on the majority of iPhones in the world. Technical limitations of the OS would prevent the most egregious abuses, but there’s a lot they could still do.
We don’t need to do much imagining — they’ve already attempted multiple hacks, workarounds, privacy invasions, and other unscrupulous and technically invasive behavior with their apps over time to surveil user behavior outside of their apps and to stay running longer in the background than users intend or expect. The OS could evolve over time to reduce some of these vulnerabilities, but technical measures alone cannot address all of them.

Without the threat of App Review to keep them in check, Facebook’s apps would become even more monstrous than they already are. As a user and a fan of iOS, I don’t want any part of that.

No alternative app stores

Alternative app stores would be even worse.

Rather than offering individual apps via side-loading, Facebook could offer just one: The Facebook App Store. Instagram, WhatsApp, the Facebook app, and Messenger could all be available exclusively there. The majority of iOS users in the world would soon install it, and Facebook would start using leverage in other areas — apps’ social accounts, stats packages, app-install ads, ad-attribution requirements — to heavily incentivize (and likely strong-arm) a huge number of developers to offer their apps in the Facebook App Store, likely in addition to Apple’s.

Maybe I’d be required to add the Facebook SDK to my app in order to be in their store, which they would then use to surveil my users. Maybe I’d need to buy app-install ads to show up in search there at all. Maybe I’d need to pay Facebook to “promote” each app update to reach more than a tiny percentage of my existing customers.

And Facebook wouldn’t even be the only app store likely to become a large player on iOS. Amazon would almost certainly bring their garbage “Appstore” to iOS, but at least that one probably wouldn’t go anywhere. Maybe Google would bring the Play Store to iOS and offer a unified SDK to develop a single codebase for iOS and Android, effectively making every app feel like an Android app and further marginalizing native apps when they’re already hurting. Media conglomerates that own many big-name properties, like Disney, might each have their own app stores for their high-profile apps. Running your own store means you can promote all of your own apps as much as you want. What giant corporation would resist?

Don’t forget games! Epic and Steam would come to iOS with their own game stores. Maybe Microsoft and Nintendo, too.

Maybe you’d need to install seven different app stores on your iPhone just to get the apps and games you already use — and all without App Review to keep them in check.

Most developers would probably need to start submitting our apps to multiple app stores, each with its own rules, metadata, technical requirements, capabilities, approval delays, payment processing, stats, crash reports, ads, promotion methods, and user reviews.

As a user, a multiple-app-store world sounds like an annoying mess; as a developer, it terrifies me. Apple’s App Store is the devil we know. The most viable alternatives that would crop up would be far worse.

Course correction

The way Apple runs its business isn’t perfect, but it’s also not a democracy. I loved this part of Judge Yvonne Gonzalez Rogers’ decision in Apple v. Epic, as quoted by Ben Thompson’s excellent article that you should read:

Apple has not offered any justification for the actions other than to argue entitlement. Where its actions harm competition and result in supracompetitive pricing and profits, Apple is wrong.
I interpret “entitlement” without a negative connotation here — Apple is entitled to run their platform mostly as they wish, with governmental interference only warranted to fix market-scale issues that harm large segments of commerce or society.

As a developer, I’d love to see more changes to Apple’s control over iOS. But it’s hard to make larger changes without potentially harming much of what makes iOS great for both users and developers. Judge Gonzalez Rogers got it right: we needed a minor course correction to address the most egregiously anticompetitive behavior, but most of the way Apple runs iOS is best left to Apple.

[1] If the South Korean law holds, IAP may not be required — but only in South Korea. With this exception, I expect the rest of these rules to be enforced the same way globally.

[2] Apple defines “reader” apps as “[allowing] a user to access previously purchased content or content subscriptions (specifically: magazines, newspapers, books, audio, music, and video).” This includes many apps that Apple’s services compete with, such as Netflix, Spotify, and Kindle, which raise anticompetitive concerns among regulators and legislators when forced to give Apple 30%.

[3] App Review has higher-level queues for managerial review of controversial rules or edge cases, typically identifiable from the outside by an app stuck with “In Review” status for days or weeks, and often ending in a phone call from “Bill”. I’d expect any app offering external purchases to have a very high chance of being escalated to a slower, more pain-in-the-ass review process, possibly making it not worthwhile for most small developers. I have no plans to add external purchases to Overcast for multiple reasons, including this — but mostly because, for my purposes, I’m satisfied with Apple’s IAP system.
More in programming
I’ve never published an essay quite like this. I’ve written about my life before, reams of stuff actually, because that’s how I process what I think, but never for public consumption. I’ve been pushing myself to write more lately because my co-authors and I have a whole fucking book to write between now and October. […]
As search gets worse and “working code” gets cheaper, apps get easier to make from scratch than to find.
Less known desktop UI frameworks

Writing desktop software is hard. The UI technologies of Windows or macOS are awful compared to web technology. What can trivially be done with HTML/CSS/JavaScript in a few minutes can take hours using Windows’s win32 APIs or Mac’s Cocoa.

That’s why the default technology for desktop apps, especially cross-platform ones, is Electron: a Chrome browser combined with the Node runtime. The problem is that it’s bloated: each app is a unique build of Chrome with a little bit of application code. Chrome is over 100MB, so many apps ship less than 1MB of code in a 100MB wrapper.

People have tried to address the problem of poor OS APIs by writing UI frameworks, often meant to be cross-platform. You’ve heard about Qt, GTK, wxWidgets. The problem with those is that they are also old, their APIs are not the greatest either, and they are bloated as well.

There just doesn’t seem to be a good option. Writing your own framework seems impossible due to the size of the task. But is it? I’ll show a couple of less-known UI frameworks written mostly by a single person, often done simply to enable writing an application.

SWELL in WDL

WDL is interesting. Justin Frankel, the guy who created Winamp, has a repository of C++ code he uses in different projects. After selling Winamp to AOL, a side quest of writing a file-sharing application, and getting fired from AOL for writing a file-sharing application, he started a company building Reaper, a digital audio workstation for Windows.

Winamp is a win32 API program, and so is Reaper. At some point Justin decided to make a Mac version, but by then he had a lot of code heavily using win32 APIs. So he did what anyone in his position would: he implemented win32 APIs for Mac OS and Linux and called it SWELL - Simple Windows Emulation Layer. OK, actually, no one else would do it. It was an insane idea, but it worked.

It’s important not to overstate SWELL’s capabilities. It’s not Wine. You can’t take any win32 program and recompile it for Mac with SWELL. Frankel is insanely pragmatic and so is his code. SWELL only implements the subset of APIs he uses in Reaper. At the same time, Reaper is a big app, so if SWELL works for Reaper, it could work for your app. WDL is open source under the permissive MIT license.

Sublime Text

For a few years Sublime Text was THE programmer’s editor. It was written by a single developer in C++, and he wrote a custom UI toolkit for it. It’s not open source, but its existence shows it can be done.

RAD Debugger

RAD Debugger is an open-source Windows debugger for C/C++ apps, written in C by mostly a single person. It implements a custom UI framework based on a 3D renderer. The UI is an integral part of the app, but the code is well structured, so you could probably take just their UI / render code and use it in your own C / C++ app. Currently the app / UI is Windows-only, but it’s designed to be cross-platform, and they are working on porting the renderer to Mac OS / Linux. They use the permissive MIT license and everything is written in C.

Dear ImGui

Dear ImGui is a newer cross-platform UI framework in C++. Open source, permissive MIT license. Written by mostly a single person.

Ghostty

Ghostty is a cross-platform terminal emulator and UI. It’s written in Zig by mostly a single person and uses its own low-level GPU renderer for the UI.

You too can write your own UI framework

At first the idea of writing your own UI framework seems impossibly daunting.
What I’m hoping to show is that if you’re ambitious enough, it’s possible to build cross-platform desktop apps that are not just bloated 100MB Chrome wrappers around a few kilobytes of custom code. I’m not saying it’s a simple thing, just that enough people have done it that it’s possible.

It shouldn’t be necessary, but both Microsoft and Apple have tragically dropped the ball on providing decent, high-performance UI libraries for their operating systems. Microsoft even writes its own apps, like Teams, in web technologies.

Thanks to open source, you’re not at the starting line. You can just use Dear ImGui or WDL’s SWELL. Or you can extract the UI code from RAD Debugger or Ghostty (if you write in Zig). Or you can study their implementations to speed up your own design and implementation.
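To make the immediate-mode idea concrete, here is a minimal Dear ImGui sketch using the GLFW + OpenGL3 backends that ship alongside the library. This is a generic illustration, not code from any of the apps above; the window size, widget labels, and GLSL version string are arbitrary choices, and the version string may need adjusting per platform.

// Minimal Dear ImGui app: one window, one checkbox, one button.
// Compile this file together with Dear ImGui and its GLFW + OpenGL3
// backends, and link against GLFW and OpenGL.
#include "imgui.h"
#include "imgui_impl_glfw.h"
#include "imgui_impl_opengl3.h"
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* window = glfwCreateWindow(800, 600, "ImGui sketch", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1); // vsync

    IMGUI_CHECKVERSION();
    ImGui::CreateContext();
    ImGui_ImplGlfw_InitForOpenGL(window, true);
    ImGui_ImplOpenGL3_Init("#version 130"); // e.g. "#version 150" on macOS

    bool darkMode = true;
    while (!glfwWindowShouldClose(window)) {
        glfwPollEvents();
        ImGui_ImplOpenGL3_NewFrame();
        ImGui_ImplGlfw_NewFrame();
        ImGui::NewFrame();

        // The entire UI is re-declared every frame: no retained widget tree.
        if (darkMode) ImGui::StyleColorsDark(); else ImGui::StyleColorsLight();
        ImGui::Begin("Settings");
        ImGui::Text("A tiny immediate-mode UI");
        ImGui::Checkbox("Dark mode", &darkMode);
        if (ImGui::Button("Quit")) glfwSetWindowShouldClose(window, GLFW_TRUE);
        ImGui::End();

        ImGui::Render();
        int w, h;
        glfwGetFramebufferSize(window, &w, &h);
        glViewport(0, 0, w, h);
        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        ImGui_ImplOpenGL3_RenderDrawData(ImGui::GetDrawData());
        glfwSwapBuffers(window);
    }

    ImGui_ImplOpenGL3_Shutdown();
    ImGui_ImplGlfw_Shutdown();
    ImGui::DestroyContext();
    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}

The per-platform glue is the one-time setup; the actual UI is just the handful of lines between NewFrame() and Render(), re-declared every frame.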
I released Logic for Programmers exactly one year ago today. It feels weird to celebrate the anniversary of something that isn't 1.0 yet, but software projects have a proud tradition of celebrating a dozen anniversaries before 1.0. I wanted to share what's changed in the past year and the work for the next six+ months.

The Road to 0.1

I had been noodling on the idea of a logic book since the pandemic. The first time I wrote about it on the newsletter was in 2021! Back then I said that it would be done by June and would be "under 50 pages". The idea was to cover logic as a "soft skill" that helped you think about things like requirements and stuff. That version sucked. If you want to see how much it sucked, I put it up on Patreon. Then I slept on the next draft for three years.

Then in 2024 a lot of business fell through and I had a lot of free time, so with the help of Saul Pwanson I rewrote the book. This time I emphasized breadth over depth, trying to cover a lot more techniques. I also decided to self-publish it instead of pitching it to a publisher. Not going the traditional route would mean I would be responsible for paying for editing, advertising, graphic design, etc., but I hoped that would be compensated by much higher royalties. It also meant I could release the book in early access and use early sales to fund further improvements.

So I wrote up a draft in Sphinx, compiled it to LaTeX, and uploaded the PDF to Leanpub. That was in June 2024. Since then I've kept to a monthly cadence of updates, missing once in November (short-notice contract) and once last month (Systems Distributed). The book's now on v0.10.

What's changed? A LOT

v0.1 was very obviously an alpha, and I have made a lot of improvements since then. For one, the book no longer looks like a Sphinx manual. Compare!

Also, the content is very, very different. v0.1 was 19,000 words; v0.10 is 31,000.[1] This comes from new chapters on TLA+, constraint/SMT solving, and logic programming, and major expansions to the existing chapters. Originally, "Simplifying Conditionals" was 600 words. Six hundred words! It almost fit in two pages! The chapter is now 2600 words, covering condition lifting, quantifier manipulation, helper predicates, and set optimizations. All the other chapters have either gotten similar facelifts or are scheduled to get them.

The last big change is the addition of book assets. Originally you had to manually copy over all of the code to try it out, which is a problem when there are samples in eight distinct languages! Now there are ready-to-go examples for each chapter, with instructions on how to set up each programming environment. This is also nice because it gives me breaks from writing to code instead.

How did the book do?

Leanpub's all-time visualizations are terrible, so I'll just give the summary: 1180 copies sold, $18,241 in royalties. That's a lot of money for something that isn't fully out yet! By comparison, Practical TLA+ has made me less than half of that, despite selling over 5x as many copies. Self-publishing was the right choice!

In that time I've paid about $400 for the book cover (worth it) and maybe $800 on Leanpub's advertising service (probably not worth it). Right now that doesn't come close to making back the time investment, but I think it can get there post-release. I believe there are a lot more potential customers to reach via marketing, and I think 10k copies sold post-release is within reach.

Where is the book going?
The main content work is rewrites: many of the chapters have not meaningfully changed since 0.1, so I am going through and rewriting them from scratch. So far four of the ten chapters have been rewritten. My (admittedly ambitious) goal is to rewrite three of them by the end of this month and another three by the end of next. I also want to do final passes on the rewritten chapters, as most of them have a few TODOs left lying around. (Also, somehow in starting this newsletter and publishing it I realized that one of the chapters might be better split into two chapters, so there could well be a tenth technique in v0.11 or v0.12!)

After that, I will pass it to a copy editor while I work on improving the layout, making images, and indexing. I want to have something worthy of printing on a dead tree by 1.0. In terms of timelines, I am very roughly estimating something like this:

- Summer: final big changes and rewrites
- Early Autumn: graphic design and copy editing
- Late Autumn: proofing, figuring out printing stuff
- Winter: final ebook and initial print releases of 1.0

(If you know a service that helps get self-published books "past the finish line", I'd love to hear about it! Preferably something that works for a fee, not a share of royalties.)

This timeline may be disrupted by official client work, like a new TLA+ contract or a conference invitation.

Needless to say, I am incredibly excited to complete this book and share the final version with you all. This is a book I wished for years ago, a book I wrote because nobody else would. It fills a critical gap in software educational material, and someday soon I'll be able to put a copy on my bookshelf. It's exhilarating and terrifying and, above all, satisfying.

[1] It's also 150 pages vs 50 pages, but admittedly this is partially because I made the book smaller with a larger font.
Translating the user interface of SumatraPDF

SumatraPDF is the best PDF/eBook/comic book viewer for Windows. It’s small, fast, full of features, free, and open source. It became popular enough that it made sense to translate the UI for non-English users. Currently we support 72 languages.

This article describes how I designed and implemented a translation system in SumatraPDF, a native win32 C++ Windows application.

Hard things about translating the UI

There are two hard things about translating an application:

- adding code for the translation system (extracting strings to translate, translating strings from English to the user’s language)
- translating the strings into many languages

Extracting strings to translate from source code

Currently there are 381 strings in SumatraPDF subject to translation. It’s important that the system requires the least amount of effort when adding new strings to translate.

Every string that needs to be translated is marked in a .cpp or .h file with one of two macros:

_TRA("Rename")
_TRN("Open")

I have a script that extracts those strings from source files. Mine is written in Go, but it could just as well be Python or JavaScript. It’s a simple regex job.

_TR stands for “translation”. _TRA(s) expands into a call to const char* trans::GetTranslation(const char* str), which returns str translated into the current UI language. We auto-detect the language at startup based on Windows settings and allow the user to explicitly set the UI language. For English we just return the original string.

If a string to be translated is, e.g., part of a const char* array[], we can’t use trans::GetTranslation() there. For cases like that we have _TRN(), which expands to the English string; we have to write code to translate it at some point.

Adding new strings is therefore as simple as wrapping them in the _TRA() or _TRN() macros.

Translating strings into many languages

Now that we’ve extracted the strings to be translated, we need to translate them into 72 languages. SumatraPDF is a free, open-source program. I don’t have a budget to hire translators. I don’t have a budget, period.

The only option was to get help from SumatraPDF users. It was vital to make it very easy for users to send me translations. I didn’t want to ask them, for example, to download some translation software.

Design and implementation of AppTranslator web app

I couldn’t find really simple software for crowd-sourcing translations, so I wrote my own: https://github.com/kjk/apptranslator

You can see it in action: https://www.apptranslator.org/app/SumatraPDF

I designed it to be generic, but I don’t think anyone else is using it.

AppTranslator is simple. Per https://tools.arslexis.io/wc/:

- 4k lines of Go server code
- 451 lines of HTML code
- a single dependency: the Bootstrap CSS framework (the project is old)

It’s simple because I don’t want to spend a lot of time writing translation software. It’s just a side project in service of the goal of translating SumatraPDF.

Login is exclusively via GitHub. It doesn’t even use a database. Like in Redis, changes are stored as a series of operations in an append-only log. We keep the whole state in memory and re-create it from the log at startup.

The main operation is “translate a string from English to language X”, represented as [kOpTranslation, english string, language, translation, user who provided translation]. When a user provides a translation in the web UI, we send an API call to the server, which appends the translation operation to the log. Simple and reliable.

Because the code is written in Go, it’s very fast and memory efficient.
When running it uses mere megabytes of RAM. It can comfortably run on the smallest 256 MB VPS server.

I back up the log to S3, so if the server ever fails, I can re-install the program on a new server and re-download the translations from S3.

I provide an RSS feed for each language so that people who provide translations can monitor for new strings to be translated.

Sending strings for translation and receiving translations

So I have a web app for collecting translations and a script that extracts strings to be translated from source code. How do they connect?

AppTranslator has an API for submitting the current set of strings to be translated in the simplest possible format: a line for each string. (I ensure there are no newlines in the string itself by escaping them with \n.) The API is password protected, so only I can submit the strings.

The server compares the strings sent with the current set and records the difference in the log. It also sends a response with translations. Again, the simplest possible format:

AppTranslator: SumatraPDF
651b739d7fa110911f25563c933f42b1d37590f8
:%s annotation. Ctrl+click to edit.
am:%s մեկնաբանություն: Ctrl+քլիք՝ խմբագրելու համար:
ar:ملاحظة %s. اضغط Ctrl للتحرير.
az:Qeyd %s. Düzəliş etmək üçün Ctrl+düyməyə basın.

As you can see:

- a string to translate is on a line starting with :
- it is followed by translations of that string in the format ${lang}: ${translation}

An optimization: 651b739d7fa110911f25563c933f42b1d37590f8 is a hash of this response. If I submit this hash with my request and the translations didn’t change on the server, the response is empty.

Implementing the C++ part of the translation system

So now I have a text file with translations downloaded from the server. How do I get a translation in my C++ code? As with everything in SumatraPDF, I try to do things in a simple and efficient way. The whole Translation.cpp is only 239 lines of code.

The core of the translation system is the const char* trans::GetTranslation(const char* s); function.

I embed the translations, in exactly the same format as received from AppTranslator, in the executable as a data file in resources. If the UI language is English, we do nothing: trans::GetTranslation() returns its argument. When we switch the language, we load the translations from resources and build an index:

- an array of English strings
- an array of corresponding translations

Both arrays use my own StrVec class, optimized for storing an array of strings. To find a translation, we scan the first array to find the index of the string and return the translation from the second array, at the same index. A linear scan seems like it would be slow, but it isn’t.

Resizing dialogs

I have a few dialogs defined in the SumatraPDF.rc file. The problem with dialogs is that the positions of UI elements are fixed. A translated string will almost certainly have a different size than the English string, which will mess up the fixed layout. Thankfully someone wrote DialogSizer, which smartly resizes dialogs and solves this problem.

The evolution of a solution

No AppTranslator

My initial implementation was simpler. I didn’t yet have AppTranslator, so I stored the strings in a text file in the repository, in the same format as what I described above. People would download it, make changes using a text editor, and send me the file via email, which I would then check in.

It worked for a while, but it became worse over time. More strings and more languages created more work for me to manually manage email submissions. I decided to automate the process.
Code generation

My first implementation of the C++ side used code generation instead of embedding the text file in resources. My Go script would generate C++ source files with static const char* [] arrays.

This worked well, but I decided to improve it further by making the code use the text file with translations embedded in the app. The main motivation for the change was to open up the possibility of downloading the latest translations from the server, to fix the problem of translations not all being ready when I build the release executable. I haven’t done that yet, but it’s now easier to implement given that the format of the strings embedded in the exe is the same as the one I can download from AppTranslator.

Only UTF-8

SumatraPDF started out using both WCHAR* Unicode strings and char* UTF-8 strings. For that reason the translation system had to support returning translations in both WCHAR* and char* versions. Over time I refactored the code to use mostly UTF-8, and at some point I no longer needed to support the WCHAR* version. That made the code even smaller and reduced memory usage.

The experience

I’m happy with how things turned out. AppTranslator proved to be reliable and hassle-free. It has been running for many years now and has collected 35,440 string translations from users.

I automated everything, so all I need to do is periodically re-run the script that extracts strings from source code, uploads them to AppTranslator, and downloads the latest translations.

One problem is that translations are not always ready in time for a release, so I make a release and then people start translating the strings added since the last release. I’ve considered downloading the latest translations from the server, in addition to embedding them in the executable at the time of building the app.

Would I do the same today?

While AppTranslator is reliable and doesn’t require ongoing work, it would be better to not have to run a server at all. The world has changed since I started SumatraPDF. Namely: people are comfortable using GitHub, and you can edit files directly in the GitHub UI. It’s not a great experience, but it works.

One option would be to generate a translation text file for each language, in this format:

:first untranslated string
:second untranslated string
:first translated string
translation of first string
:second translated string
translation of second string

Untranslated strings are listed at the top, to make them easier to find. A link would send a translator directly to edit this file in the GitHub UI. When the translator saves their changes, it creates a PR for me to review and merge.

The roads not taken

But why did you re-invent everything? You should do X instead. All the other Xs that I know about suck.

Using per-language .rc resource files

The traditional way of localizing / translating Windows GUI apps is to store all strings and dialog definitions in an .rc file. Each language gets its own .rc file (or files), and the program picks the right resource based on the language. This doesn’t solve the two hard problems:

- having an easy way to add strings for translation
- having an easy way for users to provide translations

XML horror show

There was a dark time when the world was under the iron grip of XML fanaticism. Everything had to be an XML file, even when it was the worst possible solution for the problem. XML doesn’t solve the two hard problems, and as a string storage format it’s an absolute nightmare for human editing.

GNU gettext

There’s a C library, gettext, that uses .po files. This is a much saner solution than the XML horror show.
.po files are a relatively simple text format, and the code is already written. Warning: tooting my own horn. My format is better: it’s easier for people to edit, and it’s easier to write code to parse. gettext looks like many times more than my 239 lines of code. OK, gettext probably does a bit more than my code, but clearly nothing that I need.

It also doesn’t solve the two hard problems: I would still have to write code to extract strings from source code and build a way to allow users to translate them easily.
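To show the shape of the C++ side described above, here is a minimal sketch of the parse-and-look-up approach: two parallel arrays built from the AppTranslator-style text format, and a linear scan in GetTranslation(). This is a simplified, hypothetical reconstruction, not the actual Translation.cpp; the real code uses SumatraPDF’s own StrVec class, loads the data from Windows resources rather than a hard-coded string, and handles details (such as \n escaping) that are skipped here. The macro definitions and the name BuildIndex are illustrative only.

// Sketch of the two-array lookup described in the article (simplified).
#include <cstdio>
#include <string>
#include <vector>

// Parallel arrays: gEnglish[i] corresponds to gTranslated[i].
static std::vector<std::string> gEnglish;
static std::vector<std::string> gTranslated;

// Build the index from the AppTranslator-style format:
// a line ":english string" followed by "lang:translation" lines.
static void BuildIndex(const std::string& data, const std::string& lang) {
    gEnglish.clear();
    gTranslated.clear();
    std::string currentEnglish;
    size_t pos = 0;
    while (pos < data.size()) {
        size_t end = data.find('\n', pos);
        if (end == std::string::npos) end = data.size();
        std::string line = data.substr(pos, end - pos);
        pos = end + 1;
        if (line.empty()) continue;
        if (line[0] == ':') {
            currentEnglish = line.substr(1);
        } else if (line.compare(0, lang.size(), lang) == 0 &&
                   line.size() > lang.size() && line[lang.size()] == ':') {
            gEnglish.push_back(currentEnglish);
            gTranslated.push_back(line.substr(lang.size() + 1));
        }
    }
}

namespace trans {
// Linear scan over the English strings; with a few hundred strings
// this is plenty fast, as the article notes.
const char* GetTranslation(const char* s) {
    for (size_t i = 0; i < gEnglish.size(); i++) {
        if (gEnglish[i] == s) return gTranslated[i].c_str();
    }
    return s; // not found, or UI language is English: fall back to English
}
}

// Illustrative macros in the spirit of _TRA() / _TRN():
// _TRA() translates at the call site; _TRN() only marks a string for extraction.
#define _TRA(s) trans::GetTranslation(s)
#define _TRN(s) (s)

int main() {
    // Tiny stand-in for the data SumatraPDF embeds in resources.
    BuildIndex(":Open\nfr:Ouvrir\n:Cancel\nfr:Annuler\n", "fr");
    std::printf("%s\n", _TRA("Open"));  // prints "Ouvrir"
    std::printf("%s\n", _TRA("Quit"));  // no translation: prints "Quit"
}

Under these assumptions, wrapping a string in _TRA() at the call site is all a developer has to do when adding UI text; extraction, upload, download, and embedding are handled by the scripts and build steps the article describes.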