Despite my departure from Google, I am not leaving Flutter — the great thing about open source and open standards is that the product and the employer are orthogonal. I've had three employers in my career, and in all three cases when I left my employer I continued my job. With Netscape I was a member of the team before my internship, during my internship, and after my internship. With Opera Software, I joined while working on standards, kept working on standards, and left while working on the same standard that I then continued to work on at Google. So this is not a new thing for me. Flutter is amazingly successful. It's already the leading mobile app development framework, and I think we're close to having the table stakes required to make it the obvious default choice for desktop development as well (it's already there for some use cases). It's increasingly used in embedded scenarios. And Flutter is extremely well positioned to be the first truly usable Wasm framework as the web...
a year ago

More from Hixie's Natural Log

When complaints are a good sign

When you build something, you have to pick some design goals and priorities. Ideally you do so explicitly, but even if you don't, you're still implicitly doing so based on your design choices. These choices are trade-offs. If you want to write a quiet song, it won't be loud. If you are writing a software tool and you want to prioritize speed over simplicity, then it won't be as simple as if you'd prioritized simplicity over speed.

There are two main signs that you've succeeded at your goals. The first, and more pleasant, is that you get compliments about how your thing is like you wanted it to be. "I love that song, it's so quiet!" "Your tool is so fast!" Why thank you, that's exactly what I was going for. The second sign, though, is that you will get complaints. Specifically, people will complain that your thing does not achieve the things you didn't set out to achieve. "I wish this song was louder", "this tool is so hard to use". That you are receiving complaints at all means that people are aware of your creation; that they are complaining about what you specifically set out to make a non-goal is a side-effect of the fact that you made that trade-off.

The worst thing to do, when you receive such complaints, is take them to heart and try to fix them. This is because by definition you wanted these complaints. They are a sign that the thing you built is built as you wanted to build it. The people complaining want something different, they don't want your thing. It's just that your thing is so good that it's the thing they're compelled towards even though it doesn't prioritize the things they care about most. If you try to fix these complaints, you will, again by definition, be compromising on your goals. If you make the song have a loud part, then it's no longer a quiet song. You wanted a quiet song. Now it's a song that's quiet in parts and loud in parts. It probably still doesn't satisfy the needs of the people who want a loud song, and now it also doesn't satisfy the needs of the people who wanted your original quiet song. If you make your tool easier to use by compromising on the speed, then now you have a tool that's both not as fast as it could be and not as usable as it could have been if you'd started with that as a goal.

It's important, therefore, to separate out complaints into those that are complaints you expect based on your design goals (which you should acknowledge but not fix), vs complaints that are either orthogonal to your goals (which you can fix without compromising your goals), or that are in line with your goals (which you should prioritize, since that's what you said to yourself was most important in the first place).

4 months ago 8 votes
Power dynamics in web specifications

My involvement in web standards started with the CSS working group. One of the things that we struggled with as a working group was that we would specify how the technology should work, but the browser vendors' implementations weren't exactly what we intended, and web authors would then write web pages that worked with those browsers, even though that meant the web pages themselves were also not doing things like the specifications said they should. The folks I worked with at the W3C (especially the academics and people working for organizations that did not themselves implement browsers) would frequently bemoan this state of affairs, expressing surprise at how they, the people in charge of the standards, were not being respected by the people implementing the standards.

One of the key insights I had very early on in my work, before working on HTML5, which really influenced the WHATWG and its work, is the realization that the power dynamics at work were not at all the power dynamics that the folks at the W3C described. The reality of the situation was that the power lay entirely in the hands of the users. The users chose browsers. A browser vendor that ignored what the users wanted would lose market share. Market share is everything in this space. Browser vendors want users because they can convert users into dollars (in various ways, but they typically boil down to someone showing them ads and paying the browser vendors for the privilege).

In turn, the browser vendors had more power than the specifications. What they implement is, by definition, what the technology is. The specification can say in absolute clarity that the keyword "marigold" should look yellow, but if a browser vendor makes it look red, then no web author is going to use it to mean yellow, and many will use it to mean red. There is a feedback loop here: if one browser implements "marigold" to mean red, and some important web site (or many unimportant web sites) rely on it, and say something like "best viewed in ThisOrThat browser!" because that's the one they use and in that browser it looks red and red is what looks best, then the other browser vendors are incentivised to make sure that the web page looks good in their browser too. Regardless of what the specification says, therefore, they are going to make "marigold" look red and not yellow.

When I realized this, I also realized a corollary: if you have two competing specifications that both claim to define the same technology, but one matches what the browsers already do while the other one does not, the browser vendors are going to find it more useful to follow the one that matches what they do. This is because they can trust that implementing that specification will get them more market share. It means they won't have to stop and think at every step, "will following this specification cause me to lose users?". It is easier for them to use a specification that takes into account their needs in this way.

We actually tried to explain this to the W3C membership. There was a big meeting in 2004 at Adobe in San Jose, the "W3C Workshop on Web Applications and Compound Documents". We tried to convey the above (I didn't quite understand it in the stark "power dynamic" terms yet, or at least, I didn't really express it in those terms, but if you read our position paper you can see this insight starting to crystallize). At this meeting, we made a pitch for the W3C to continue to maintain HTML and to care about what the browser vendors wanted. Representatives from Microsoft and Sun (in many ways arch enemies at the time) supported us. I seem to recall Apple being more quiet about it at the meeting but also essentially supporting the principles. The W3C membership resoundingly rejected this whole concept. One of the W3C staff even explicitly said something along the lines of "if you want to do this you should do it elsewhere". That's what led to the WHATWG being founded a few weeks later.

The WHATWG was founded on this core principle — the specifications need to actually specify reality. When the browsers disagree with the spec, the spec is by definition incorrect and needs to change, regardless of how technically superior the design in the spec is. Naturally, when you provide browser vendors with something that valuable, they will follow. You end up with a weird inverted power dynamic. The spec writer (when they follow this principle) has all the power, but only within the space that the browser vendors are themselves willing to play in; and the browser vendors have all the power, but only within the space that the users are willing to put up with.

It's very easy to appear to be in control when you tell people to do the thing they were going to do anyway (or at least, one of the things they were willing to do if they were to think about it). There is a (probably apocryphal) quote supposedly by Alexandre Auguste Ledru-Rollin that is often cited in mockery of bad leadership, but that perfectly matches the power dynamic here: "There go my people; I must find out where they are going so I can lead them".

(Thanks to Leonard Damhorst for prompting me to write this post.)

9 months ago 19 votes
How big is the Flutter team?

I often get asked how many people contribute to Flutter. It's a hard question to answer because "contribute" is a very vague concept. There are tens of thousands of packages on pub.dev, all of which are written by contributors to the community. There are over 100,000 issues filed in our issue database, filed by more than 35,000 people over the years (the exact number is hard to pin down because people sometimes delete their GitHub accounts; about 700 issues have been filed by people who have since deleted their account). Many more people still have used the "thumbs-up" reaction to indicate that an issue matters to them, with almost 165,000 thumbs up from about 45,000 people. All of these people are valuable contributors to Flutter.

Usually, when pressed, people try to clarify by asking about "the core team". Again, though, it's hard to say exactly what that means, but let's assume they mean "people with commit access". That is, people we trust enough to have added to the GitHub repo as collaborators. This includes people who work on Flutter for companies like Google, Canonical, or Nevercode, and it includes people like me who are self-employed and/or contribute to Flutter on a volunteer basis. Currently that's about 280 people.

So is that the answer? Well, no, not really. Some people have commit access but aren't active (maybe they got access because of their employer, but were then reassigned to work on another project, and the bureaucracy hasn't caught up with them yet — we only audit the membership occasionally because it's rather tedious to do). Some people have been very active recently but don't have commit access (e.g. because they were just laid off and a bot automatically removed their access; they might even resume working on Flutter in the future, as a volunteer or funded by another company).

So what's the answer? I recently drilled down through our data to see if I could answer this. I will caveat the following numbers by saying that this changes all the time. We added a new team member just today (hi Nate!) who is not counted as a team member in the following numbers because we collected the data a few weeks ago (it takes literally days to scrape all the data from GitHub, and then hours to explore the resulting very large and very slow spreadsheet). Also, some of my definitions are a bit arbitrary, and slightly tweaking the limits would probably change the numbers noticeably.

First, I collected a list of everyone who has ever created an issue, commented on an issue, put an emoji reaction on the first comment of an issue, or submitted a PR, excluding bots and people who deleted their GitHub account. (Actually Piinks did the actual data collection. Thanks!) I limited this to a subset of the GitHub repos of the flutter org that is relatively inclusive but does not count everything (we have a lot of historical repositories and so forth). This finds about 94,357 people. (So there you go. The Flutter team is about a hundred thousand people!)
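For the curious, a scrape like that boils down to paging through the GitHub REST API, repo by repo. The sketch below is illustrative only — it is not the actual tooling behind these numbers, and comment authors would need an extra pass over the comments endpoints:

    // Illustrative sketch: collect issue/PR authors and opening-comment
    // reactors for one repo via the GitHub REST API. Not the actual
    // scripts used for the figures above.
    async function collectParticipants(repo, token) {
      const people = new Set();
      const headers = { Authorization: `Bearer ${token}` };
      for (let page = 1; ; page++) {
        const url = `https://api.github.com/repos/${repo}/issues?state=all&per_page=100&page=${page}`;
        const issues = await (await fetch(url, { headers })).json();
        if (issues.length === 0) break;
        for (const issue of issues) {
          // The issues endpoint returns both issues and pull requests.
          if (issue.user) people.add(issue.user.login);
          // Reactions on the issue's opening comment need one extra request each.
          const reactions = await (await fetch(issue.reactions.url, { headers })).json();
          for (const r of reactions) people.add(r.user.login);
        }
      }
      return people;
    }

At Flutter's scale this is tens of thousands of requests, which is why the real collection takes days rather than minutes.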
To avoid padding the numbers with people who left the project long ago, and to avoid counting "drive-by" contributors who came, did a bunch of work, and then left, I then limited the data set to people who contributed over a period of more than 180 days, and who last contributed sometime in 2024. Because of the definition of "contributed" described above, that means that someone who added a thumbs-up to an issue in December 2020 and then filed an issue in January 2024, and did nothing else, is included, but someone who submitted two PRs in March 2024 is not. Like I said, this is a bit arbitrary.

Anyway, that leaves 3,839 people, of which 182 currently have commit access, 27 once had commit access but don't currently (these are mainly people who either got laid off recently and had their commit access revoked by an automated process, or people who were once team members, left, lost access from inactivity long ago, and then later came back to comment on issues or file new issues — it's surprisingly common for people who once worked on Flutter full time to stick around even when their employment changes), and about 3,627 people who have never had commit access.

Of those who have never had commit access, 2,407 have filed at least one issue or submitted at least one PR (accounting for a total of 12,383 issues and 2,613 PRs). Of those, 341 have filed 5 to 9 issues (2,242 issues total), and 296 have filed 10 or more issues in their lifetime (7,021 total issues). Similarly, of the "never had commit access" cohort, 73 people have sent 5 to 9 pull requests in their lifetime (458 total PRs) and 47 have sent 10 or more (1,321 PRs total). (For context, 4,663 people have ever submitted a pull request, and 429 have ever submitted more than 10 PRs.)

Of the people who currently have commit access, 98 people have submitted more than one PR every 3 weeks on average since they first got involved (accounting for 49,173 PRs), 75 people have closed at least one issue every 3 weeks (accounting for 48,490 total issue closures), of which 10 are not in the first group (mostly that's our triage team), and 150 people have commented at least once every 3 weeks.

A follow-up question a lot of people ask is "do they all work for Google?". This is a surprisingly hard question to answer. There are a lot of weird edge cases. For example, one person worked on Flutter for a company that Google hired to work on Flutter, but then quit that company, asked for their commit privileges to be removed, and yet continued to be active in the community. Several people who have quit Google (such as myself), or been laid off by Google, have continued to be active in one sense or another (I think I submit more code to Flutter now than I did in my last year at Google). It's also hard to answer because a lot more people at Google contribute to Flutter than just those on Google's Flutter team, and a lot of people on Google's Flutter team contribute in ways that don't show up on GitHub (e.g. product management, marketing, developer relations, internal tooling).

Of the 98 people who have commit access, have been active for more than 180 days, have contributed at least once this year, and have submitted more than one PR every 3 weeks on average for the entire time they've been contributing, I estimate (based on what I know of people's employment and so forth) that about 85% are Googlers or somehow get their funding from Google, and about 15% are currently independent of Google. (This is by no means the entirety of the Google team contributing to Flutter; as I mentioned earlier, many folks at Google working on Flutter don't appear in these statistics.) I'm not sure what conclusion to draw from this; it's both more people than I expected to see funded by Google, which is great, and fewer people than I expected that aren't funded by Google, which is less great.
On the other hand, it's still a significant number of non-Google-funded people. Is it enough? I think that really depends on what your goals are. I think if your goal is for Flutter to be an order of magnitude better than other UI frameworks, then frankly no, it's not enough. There is a ton of work to be done to get there. We know what it would take, but we don't have the people to do it today. On the other hand, if your goal is to be a great framework, on par with others, then it's probably adequate. It would certainly be difficult to continue to be great with fewer people than we have today. Of course, that may change as we complete big efforts, or as we take on new ones, or as the landscape changes; it's all hard to predict.

That said, I would love to see more direct contributions from non-Google sources, if for no other reason than to end this silly "will Google cancel Flutter" line of questioning that has followed the project since its inception. It's a dumb question. Flutter's an open source UI framework. It will never die. It will become old, and something else will shine brighter one day, just as happens with literally every other UI framework ever. That's just how our industry works. There's no reason to believe that'll happen any time soon, though, and certainly no reason for it to happen earlier for Flutter than for any other modern UI framework.

12 months ago 13 votes
Reflecting on 18 years at Google

I joined Google in October 2005, and handed in my resignation 18 years later. Last week was my last week at Google.

I feel very lucky to have experienced the early post-IPO Google; unlike most companies, and contrary to the popular narrative, Googlers, from the junior engineer all the way to the C-suite, were genuinely good people who cared very much about doing the right thing. The oft-mocked "don't be evil" truly was the guiding principle of the company at the time (largely a reaction to contemporaries like Microsoft whose operating procedures put profits far above the best interests of customers and humanity as a whole). Many times I saw Google criticised for actions that were sincerely intended to be good for society. Google Books, for example. Much of the criticism Google received around Chrome and Search, especially around supposed conflicts of interest with Ads, was way off base (it's surprising how often coincidences and mistakes can appear malicious). I often saw privacy advocates argue against Google proposals in ways that were net harmful to users. Some of these fights have had lasting effects on the world at large; one of the most annoying is the prevalence of pointless cookie warnings we have to wade through today. I found it quite frustrating how teams would be legitimately actively pursuing ideas that would be good for the world, without prioritising short-term Google interests, only to be met with cynicism in the court of public opinion.

Charlie's patio at Google, 2011. Image has been manipulated to remove individuals.

Early Google was also an excellent place to work. Executives gave frank answers on a weekly basis, or were candid about their inability to do so (e.g. for legal reasons or because some topic was too sensitive to discuss broadly). Eric Schmidt regularly walked the whole company through the discussions of the board. The successes and failures of various products were presented more or less objectively, with successes celebrated and failures examined critically with an eye to learning lessons rather than assigning blame. The company had a vision, and deviations from that vision were explained. Having experienced Dilbert-level management during my internship at Netscape five years earlier, the uniform competence of people at Google was very refreshing.

For my first nine years at Google I worked on HTML and related standards. My mandate was to do the best thing for the web, as whatever was good for the web would be good for Google (I was explicitly told to ignore Google's interests). This was a continuation of the work I started while at Opera Software. Google was an excellent host for this effort. My team was nominally the open source team at Google, but I was entirely autonomous (for which I owe thanks to Chris DiBona). Most of my work was done on a laptop from random buildings on Google's campus; entire years went by where I didn't use my assigned desk.

In time, exceptions to Google's cultural strengths developed. For example, as much as I enjoyed Vic Gundotra's enthusiasm (and his initial vision for Google+, which again was quite well defined and, if not necessarily uniformly appreciated, at least unambiguous), I felt less confident in his ability to give clear answers when things were not going as well as hoped. He also started introducing silos to Google (e.g. locking down certain buildings to just the Google+ team), a distinct departure from the complete internal transparency of early Google.
Another example is the Android team (originally an acquisition), who never really fully acclimated to Google's culture. Android's work/life balance was unhealthy, the team was not as transparent as older parts of Google, and the team focused on chasing the competition more than solving real problems for users.

My last nine years were spent on Flutter. Some of my fondest memories of my time at Google are of the early days of this effort. Flutter was one of the last projects to come out of the old Google, part of a stable of ambitious experiments started by Larry Page shortly before the creation of Alphabet. We essentially operated like a startup, discovering what we were building more than designing it. The Flutter team was very much built out of the culture of young Google; for example we prioritised internal transparency, work/life balance, and data-driven decision making (greatly helped by Tao Dong and his UXR team). We were radically open from the beginning, which made it easy for us to build a healthy open source project around the effort as well. Flutter was also very lucky to have excellent leadership throughout the years, such as Adam Barth as founding tech lead, Tim Sneath as PM, and Todd Volkert as engineering manager.

We also didn't follow engineering best practices for the first few years. For example, we wrote no tests and had precious little documentation. This whiteboard is what passed for a design doc for the core Widget, RenderObject, and dart:ui layers. This allowed us to move fast at first, but we paid for it later.

Flutter grew in a bubble, largely insulated from the changes Google was experiencing at the same time. Google's culture eroded. Decisions went from being made for the benefit of users, to the benefit of Google, to the benefit of whoever was making the decision. Transparency evaporated. Where previously I would eagerly attend every company-wide meeting to learn what was happening, I found myself now able to predict the answers executives would give word for word. Today, I don't know anyone at Google who could explain what Google's vision is. Morale is at an all-time low. If you talk to therapists in the Bay Area, they will tell you all their Google clients are unhappy with Google.

Then Google had layoffs. The layoffs were an unforced error driven by a short-sighted drive to ensure the stock price would keep growing quarter-to-quarter, instead of following Google's erstwhile strategy of prioritising long-term success even if that led to short-term losses (the very essence of "don't be evil"). The effects of layoffs are insidious. Whereas before people might focus on the user, or at least their company, trusting that doing the right thing will eventually be rewarded even if it's not strictly part of their assigned duties, after a layoff people can no longer trust that their company has their back, and they dramatically dial back any risk-taking. Responsibilities are guarded jealously. Knowledge is hoarded, because making oneself irreplaceable is the only lever one has to protect oneself from future layoffs. I see all of this at Google now. The lack of trust in management is reflected by management no longer showing trust in the employees either, in the form of inane corporate policies. In 2004, Google's founders famously told Wall Street that "Google is not a conventional company. We do not intend to become one." That Google is no more.
Many of these problems with Google today stem from a lack of visionary leadership from Sundar Pichai, and his clear lack of interest in maintaining the cultural norms of early Google. A symptom of this is the spreading contingent of inept middle management. Take Jeanine Banks, for example, who manages the department that somewhat arbitrarily contains (among other things) Flutter, Dart, Go, and Firebase. Her department nominally has a strategy, but I couldn't leak it if I wanted to; I literally could never figure out what any part of it meant, even after years of hearing her describe it. Her understanding of what her teams are doing is minimal at best; she frequently makes requests that are completely incoherent and inapplicable. She treats engineers as commodities in a way that is dehumanising, reassigning people against their will in ways that have no relationship to their skill set. She is completely unable to receive constructive feedback (as in, she literally doesn't even acknowledge it). I hear other teams (who have leaders more politically savvy than I) have learned how to "handle" her to keep her off their backs, feeding her just the right information at the right time.

Having seen Google at its best, I find this new reality depressing. There are still great people at Google. I've had the privilege to work with amazing people on the Flutter team such as JaYoung Lee, Kate Lovett, Kevin Chisholm, Zoey Fan, Dan Field, and dozens more (sorry folks, I know I should just name all of you but there's too many!). In recent years I started offering career advice to anyone at Google and through that met many great folks from around the company.

It's definitely not too late to heal Google. It would require some shake-up at the top of the company, moving the centre of power from the CFO's office back to someone with a clear long-term vision for how to use Google's extensive resources to deliver value to users. I still believe there's lots of mileage to be had from Google's mission statement (to organize the world's information and make it universally accessible and useful). Someone who wanted to lead Google into the next twenty years, maximising the good to humanity and disregarding the short-term fluctuations in stock price, could channel the skills and passion of Google into truly great achievements. I do think the clock is ticking, though. The deterioration of Google's culture will eventually become irreversible, because the kinds of people whom you need to act as moral compass are the same kinds of people who don't join an organisation without a moral compass.

a year ago 13 votes

More in programming

Adding auto-generated cover images to EPUBs downloaded from AO3

I was chatting with a friend recently, and she mentioned an annoyance when reading fanfiction on her iPad. She downloads fic from AO3 as EPUB files, and reads it in the Kindle app – but the files don’t have a cover image, and so the preview thumbnails aren’t very readable:

She’s downloaded several hundred stories, and these thumbnails make it difficult to find things in the app’s “collections” view. This felt like a solvable problem. There are tools to add cover images to EPUB files, if you already have the image. The EPUB file embeds some key metadata, like the title and author. What if you had a tool that could extract that metadata, auto-generate an image, and use it as the cover?

So I built that. It’s a small site where you upload EPUB files you’ve downloaded from AO3, the site generates a cover image based on the metadata, and it gives you an updated EPUB to download. The new covers show the title and author in large text on a coloured background, so they’re much easier to browse in the Kindle app:

If you’d find this helpful, you can use it at alexwlchan.net/my-tools/add-cover-to-ao3-epubs/. Otherwise, I’m going to explain how it works, and what I learnt from building it.

There are three steps to this process:

1. Open the existing EPUB to get the title and author
2. Generate an image based on that metadata
3. Modify the EPUB to insert the new cover image

Let’s go through them in turn.

Open the existing EPUB

I’ve not worked with EPUB before, and I don’t know much about it. My first instinct was to look for Python EPUB libraries on PyPI, but there was nothing appealing. The results were either very specific tools (convert EPUB to/from format X) or very unmaintained (the top result was last updated in April 2014). I decided to try writing my own code to manipulate EPUBs, rather than using somebody else’s library.

I had a vague memory that EPUB files are zips, so I changed the extension from .epub to .zip and tried unzipping one – and it turns out that yes, it is a zip file, and the internal structure is fairly simple. I found a file called content.opf which contains metadata as XML, including the title and author I’m looking for:

    <?xml version='1.0' encoding='utf-8'?>
    <package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="uuid_id">
      <metadata xmlns:opf="http://www.idpf.org/2007/opf" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:calibre="http://calibre.kovidgoyal.net/2009/metadata">
        <dc:title>Operation Cameo</dc:title>
        <meta name="calibre:timestamp" content="2025-01-25T18:01:43.253715+00:00"/>
        <dc:language>en</dc:language>
        <dc:creator opf:file-as="alexwlchan" opf:role="aut">alexwlchan</dc:creator>
        <dc:identifier id="uuid_id" opf:scheme="uuid">13385d97-35a1-4e72-830b-9757916d38a7</dc:identifier>
        <meta name="calibre:title_sort" content="operation cameo"/>
        <dc:description><p>Some unusual orders arrive at Operation Mincemeat HQ.</p></dc:description>
        <dc:publisher>Archive of Our Own</dc:publisher>
        <dc:subject>Fanworks</dc:subject>
        <dc:subject>General Audiences</dc:subject>
        <dc:subject>Operation Mincemeat: A New Musical - SpitLip</dc:subject>
        <dc:subject>No Archive Warnings Apply</dc:subject>
        <dc:date>2023-12-14T00:00:00+00:00</dc:date>
      </metadata>
      …

That dc: prefix was instantly familiar from my time working at Wellcome Collection – this is Dublin Core, a standard set of metadata fields used to describe books and other objects. I’m unsurprised to see it in an EPUB; this is exactly how I’d expect it to be used.

I found an article that explains the structure of an EPUB file, which told me that I can find the content.opf file by looking at the root-path element inside the mandatory META-INF/container.xml file which is in every EPUB. I wrote some code to find the content.opf file, then a few XPath expressions to extract the key fields, and I had the metadata I needed.
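In the browser version of the tool (described later in this post), that lookup can be done with JSZip and the built-in DOMParser. This is a rough sketch rather than the site’s exact code; the function and variable names are illustrative:

    // Rough sketch: find content.opf inside an EPUB and read the title/author.
    // Assumes the JSZip library is loaded; names here are illustrative.
    async function readEpubMetadata(file) {
      const zip = await JSZip.loadAsync(file);

      // META-INF/container.xml points at the OPF package file.
      const containerXml = await zip.file("META-INF/container.xml").async("string");
      const container = new DOMParser().parseFromString(containerXml, "application/xml");
      const opfPath = container.querySelector("rootfile").getAttribute("full-path");

      // The OPF file holds the Dublin Core metadata.
      const opfXml = await zip.file(opfPath).async("string");
      const opf = new DOMParser().parseFromString(opfXml, "application/xml");
      const dc = "http://purl.org/dc/elements/1.1/";
      return {
        title: opf.getElementsByTagNameNS(dc, "title")[0]?.textContent,
        author: opf.getElementsByTagNameNS(dc, "creator")[0]?.textContent,
      };
    }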
Generate a cover image

I sketched a simple cover design which shows the title and author. I wrote the first version of the drawing code in Pillow, because that’s what I’m familiar with. It was fine, but the code was quite flimsy – it didn’t wrap properly for long titles, and I couldn’t get custom fonts to work.

Later I rewrote the app in JavaScript, so I had access to the HTML canvas element. This is another tool that I haven’t worked with before, so a fun chance to learn something new. The API felt fairly familiar, similar to other APIs I’ve used to build HTML elements.

This time I did implement some line wrapping – there’s a measureText() API for canvas, so you can see how much space text will take up before you draw it. I break the text into words, and keep adding words to a line until measureText tells me the line is going to overflow the page. I have lots of ideas for how I could improve the line wrapping, but it’s good enough for now.
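The wrapping loop is roughly this shape – a sketch assuming a 2D canvas context with the font already set; the function name and maxWidth parameter are illustrative, not the tool’s actual code:

    // Rough sketch of measureText-based word wrapping; assumes `ctx` is a
    // 2D canvas context with the font already set. Names are illustrative.
    function wrapTitle(ctx, text, maxWidth) {
      const lines = [];
      let line = "";

      for (const word of text.split(" ")) {
        const candidate = line ? line + " " + word : word;
        if (line && ctx.measureText(candidate).width > maxWidth) {
          lines.push(line);   // the next word would overflow, so start a new line
          line = word;
        } else {
          line = candidate;
        }
      }
      if (line) lines.push(line);
      return lines;   // each line can then be drawn with fillText at a new y offset
    }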
I was also able to get fonts working, so I picked Georgia to match the font used for titles on AO3. Here are some examples:

I had several ideas for choosing the background colour. I’m trying to help my friend browse her collection of fic, and colour would be a useful way to distinguish things – so how do I use it? I realised I could get the fandom from the EPUB file, so I decided to use that. I use the fandom name as a seed to a random number generator, then I pick a random colour. This means that all the fics in the same fandom will get the same colour – for example, all the Star Wars stories are a shade of red, while Star Trek are a bluey-green. This was a bit harder than I expected, because it turns out that JavaScript doesn’t have a built-in seeded random number generator – I ended up using some snippets from a Stack Overflow answer, where bryc has written several pseudorandom number generators in plain JavaScript.

I didn’t realise until later, but I designed something similar to the placeholder book covers in the Apple Books app. I don’t use Apple Books that often so it wasn’t a deliberate choice to mimic this style, but clearly it was somewhere in my subconscious. One difference is that Apple’s app seems to be picking from a small selection of background colours, whereas my code can pick a much nicer variety of colours. Apple’s choices will have been pre-approved by a designer to look good, but I think mine is more fun.

Add the cover image to the EPUB

My first attempt to add a cover image used pandoc:

    pandoc input.epub --output output.epub --epub-cover-image cover.jpeg

This approach was no good: although it added the cover image, it destroyed the formatting in the rest of the EPUB. This made it easier to find the fic, but harder to read once you’d found it.

An EPUB file I downloaded from AO3, before/after it was processed by pandoc.

So I tried to do it myself, and it turned out to be quite easy! I unzipped another EPUB which already had a cover image. I found the cover image in OPS/images/cover.jpg, and then I looked for references to it in content.opf. I found two elements that referred to cover images:

    <?xml version="1.0" encoding="UTF-8"?>
    <package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="PrimaryID">
      <metadata xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:opf="http://www.idpf.org/2007/opf">
        <meta name="cover" content="cover-image"/>
        …
      </metadata>
      <manifest>
        <item id="cover-image" href="images/cover.jpg" media-type="image/jpeg" properties="cover-image"/>
        …
      </manifest>
    </package>

This gave me the steps for adding a cover image to an EPUB file: add the image file to the zipped bundle, then add these two elements to the content.opf.
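In JSZip terms that boils down to something like the following – a rough sketch rather than the site’s exact code, assuming the EPUB is already loaded and the cover has been rendered to a blob; the ids and file paths are illustrative:

    // Rough sketch: add a generated cover to an EPUB. Assumes `zip` is the
    // JSZip archive, `opfPath` is the path to content.opf, and `coverBlob`
    // holds the rendered JPEG. Ids and file paths are illustrative.
    async function addCover(zip, opfPath, coverBlob) {
      // Step 1: put the image file into the zipped bundle.
      zip.file("images/cover.jpg", coverBlob);

      // Step 2: add the <meta> and <item> elements to content.opf.
      const opfXml = await zip.file(opfPath).async("string");
      const opf = new DOMParser().parseFromString(opfXml, "application/xml");
      const ns = "http://www.idpf.org/2007/opf";

      const meta = opf.createElementNS(ns, "meta");
      meta.setAttribute("name", "cover");
      meta.setAttribute("content", "cover-image");
      opf.querySelector("metadata").appendChild(meta);

      const item = opf.createElementNS(ns, "item");
      item.setAttribute("id", "cover-image");
      item.setAttribute("href", "images/cover.jpg");
      item.setAttribute("media-type", "image/jpeg");
      item.setAttribute("properties", "cover-image");
      opf.querySelector("manifest").appendChild(item);

      zip.file(opfPath, new XMLSerializer().serializeToString(opf));
      return zip.generateAsync({ type: "blob" });
    }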
Where am I going to deploy this?

I wrote the initial prototype of this in Python, because that’s the language I’m most familiar with. Python has all the libraries I need:

- The zipfile module can unpack and modify the EPUB/ZIP
- The xml.etree or lxml modules can manipulate XML
- The Pillow library can generate images

I built a small Flask web app: you upload the EPUB to my server, my server does some processing, and sends the EPUB back to you.

But for such a simple app, do I need a server? I tried rebuilding it as a static web page, doing all the processing in client-side JavaScript. That’s simpler for me to host, and it doesn’t involve a round-trip to my server. That has lots of other benefits – it’s faster, less of a privacy risk, and doesn’t require a persistent connection.

I love static websites, so can they do this? Yes! I just had to find a different set of libraries:

- The JSZip library can unpack and modify the EPUB/ZIP, and is the only third-party code I’m using in the tool
- Browsers include DOMParser for manipulating XML
- I’ve already mentioned the HTML <canvas> element for rendering the image

This took a bit longer because I’m not as familiar with JavaScript, but I got it working. As a bonus, this makes the tool very portable. Everything is bundled into a single HTML file, so if you download that file, you have the whole tool. If my friend finds this tool useful, she can save the file and keep a local copy of it – she doesn’t have to rely on my website to keep using it.

What should it look like?

My first design was very “engineer brain” – I just put the basic controls on the page. It was fine, but it wasn’t good. That might be okay, because the only person I need to be able to use this app is my friend – but wouldn’t it be nice if other people were able to use it? If they’re going to do that, they need to know what it is – most people aren’t going to read a 2,500 word blog post to understand a tool they’ve never heard of. (Although if you have read this far, I appreciate you!)

I started designing a proper page, including some explanations and descriptions of what the tool is doing. I got something that felt pretty good, including FAQs and acknowledgements, and I added a grey area for the part where you actually upload and download your EPUBs, to draw the user’s eye and make it clear this is the important stuff.

But even with that design, something was missing. I realised I was telling you I’d create covers, but not showing you what they’d look like. Aha! I sat down and made up a bunch of amusing titles for fanfic and fanfic authors, so now you see a sample of the covers before you upload your first EPUB:

This makes it clearer what the app will do, and was a fun way to wrap up the project.

What did I learn from this project?

Don’t be scared of new file formats

My first instinct was to look for a third-party library that could handle the “complexity” of EPUB files. In hindsight, I’m glad I didn’t find one – it forced me to learn more about how EPUBs work, and I realised I could write my own code using built-in libraries. EPUB files are essentially ZIP files, and I only had basic needs. I was able to write my own code.

Because I didn’t rely on a library, I now know more about EPUBs, I have code that’s simpler and easier for me to understand, and I don’t have a dependency that may cause problems later. There are definitely some file formats where I need existing libraries (I’m not going to write my own JPEG parser, for example) – but I should be more open to writing my own code, and not jumping to add a dependency.

Static websites can handle complex file manipulations

I love static websites and I’ve used them for a lot of tasks, but mostly read-only display of information – not anything more complex or interactive. But modern JavaScript is very capable, and you can do a lot of things with it. Static pages aren’t just for static data. One of the first things I made that got popular was find untagged Tumblr posts, which was built as a static website because that’s all I knew how to build at the time. Somewhere in the intervening years, I forgot just how powerful static sites can be. I want to build more tools this way.

Async JavaScript calls require careful handling

The JSZip library I’m using has a lot of async functions, and this is my first time using async JavaScript. I got caught out several times, because I forgot to wait for async calls to finish properly. For example, I’m using canvas.toBlob to render the image, which is an async function. I wasn’t waiting for it to finish, and so the zip would be repackaged before the cover image was ready to add, and I got an EPUB with a missing image. Oops. I think I’ll always prefer the simplicity of synchronous code, but I’m sure I’ll get better at async JavaScript with practice.

Final thoughts

I know my friend will find this helpful, and that feels great. Writing software that’s designed for one person is my favourite software to write. It’s not hyper-scale, it won’t launch the next big startup, and it’s usually not breaking new technical ground – but it is useful. I can see how I’m making somebody’s life better, and isn’t that what computers are for? If other people like it, that’s a nice bonus, but I’m really thinking about that one person. Normally the one person I’m writing software for is me, so it’s extra nice when I can do it for somebody else.

If you want to try this tool yourself, go to alexwlchan.net/my-tools/add-cover-to-ao3-epubs/. If you want to read the code, it’s all on GitHub.

[If the formatting of this post looks odd in your feed reader, visit the original article]

5 hours ago 3 votes
Non-alcoholic apéritifs

I’ve been doing Dry January this year. One thing I missed was something for apéro hour, a beverage to mark the start of the evening. Something complex and maybe bitter, not like a drink you’d have with lunch. I found some good options.

Ghia sodas are my favorite. Ghia is an NA apéritif based on grape juice but with enough bitterness (gentian) and sourness (yuzu) to be interesting. You can buy a bottle and mix it with soda yourself but I like the little cans with extra flavoring. The Ginger and the Sumac & Chili are both great.

Another thing I like is low-sugar fancy soda pops. Not diet drinks, they still have a little sugar, but typically 50 calories a can. De La Calle Tepache is my favorite. Fermented pineapple is delicious and they have some fun flavors. Culture Pop is also good.

A friend gave me the Zero book, a drinks cookbook from the fancy restaurant Alinea. This book is a little aspirational but the recipes are doable, it’s just a lot of labor. Very fancy high-end drink mixing, really beautiful flavor ideas. The only thing I made was their gin substitute (mostly junipers extracted in glycerin) and it was too sweet for me. Need to find the right use for it, a martini definitely ain’t it.

An easier homemade drink is this Nonalcoholic Dirty Lemon Tonic. It’s basically a lemonade heavily flavored with salted preserved lemons, then mixed with tonic. I love the complexity and freshness of this drink and enjoy it on its own merits.

Finally, non-alcoholic beer has gotten a lot better in the last few years thanks to manufacturing innovations. I’ve been enjoying NA Black Butte Porter, Stella Artois 0.0, Heineken 0.0. They basically all taste just like their alcoholic uncles, no compromise.

One thing to note about non-alcoholic substitutes is they are not cheap. They’ve become a big high-end business. Expect to pay the same for an NA drink as one with alcohol even though they aren’t taxed nearly as much.

2 days ago 5 votes
It burns

The first time we had to evacuate Malibu this season was during the Franklin fire in early December. We went to bed with our bags packed, thinking they'd probably get it under control. But by 2am, the roaring blades of fire choppers shaking the house got us up. As we sped down the canyon towards Pacific Coast Highway (PCH), the fire had reached the ridge across from ours, and flames were blazing large out the car windows. It felt like we had left the evacuation a little too late, but they eventually did get Franklin under control before it reached us.

Humans have a strange relationship with risk and disasters. We're so prone to wishful thinking and bad pattern matching. I remember people being shocked when the flames jumped the PCH during the Woolsey fire in 2018. IT HAD NEVER DONE THAT! So several friends of ours had to suddenly escape a nightmare scenario, driving through burning streets, in heavy smoke, with literally their lives on the line. Because the past had failed to predict the future.

I fell into that same trap for a moment with the dramatic proclamations of wind and fire weather in the days leading up to January 7. Warning after warning of "extremely dangerous, life-threatening wind" coming from the City of Malibu, and that overly-bureaucratic-but-still-ominous "Particularly Dangerous Situation" designation. Because, really, how much worse could it be? Turns out, a lot.

It was a little before noon on the 7th when we first saw the big plumes of smoke rise from the Palisades fire. And immediately the pattern matching ran astray. Oh, it's probably just like Franklin. It's not big yet, they'll get it out. They usually do. Well, they didn't. By the late afternoon, we had once more packed our bags, and by then it was also clear that things actually were different this time. Different worse. Different enough that even Santa Monica didn't feel like it was assured to be safe. So we headed far North, to be sure that we wouldn't have to evacuate again. Turned out to be a good move.

Because by now, into the evening, few people in the connected world hadn't started to see the catastrophic images emerging from the Palisades and Eaton fires. Well over 10,000 houses would ultimately burn. Entire neighborhoods leveled. Pictures that could be mistaken for World War II. Utter and complete destruction.

By the night of the 7th, the fire reached our canyon, and it tore through the chaparral and brush that'd been building since the last big fire that area saw in 1993. Out of some 150 houses in our immediate vicinity, nearly a hundred burned to the ground. Including the first house we moved to in Malibu back in 2009. But thankfully not ours. That's of course a huge relief. This was and is our Malibu Dream House. The site of that gorgeous home office I'm so fond of sharing views from. Our home.

But a house left standing in a disaster zone is still a disaster. The flames reached all the way up to the base of our construction, incinerated much of our landscaping, and devoured the power poles around it to dysfunction. We have burnt-out buildings every which way the eye looks. The national guard is still stationed at road blocks on the access roads. Utility workers are tearing down the entire power grid to rebuild it from scratch. It's going to be a long time before this is comfortably habitable again. So we left.

That in itself feels like defeat. There's an urge to stay put, and to help, in whatever helpless ways you can.
But with three school-age children who've already missed over a month's worth of learning from power outages, fire threats, actual fires, and now mudslide dangers, it was time to go. None of this came as a surprise, mind you. After Woolsey in 2018, Malibu life always felt like living on borrowed time to us. We knew it, even accepted it. Beautiful enough to be worth the risk, we said.

But even if it wasn't a surprise, it's still a shock. The sheer devastation, especially in the Palisades, went far beyond our normal range of comprehension. Bounded, as it always is, by past experiences.

Thus, we find ourselves back in Copenhagen. A safe haven for calamities of all sorts. We lived here for three years during the pandemic, so it just made sense to use it for refuge once more. The kids' old international school accepted them right back in, and past friendships were quickly rebooted.

I don't know how long it's going to be this time. And that's an odd feeling to have, just as America has been turning a corner, and just as the optimism is back in so many areas. Of the twenty years I've spent in America, this feels like the most exciting time to be part of the exceptionalism that the US of A offers. And of course we still are. I'll still be in the US all the time on business, racing, and family trips. But it won't be exclusively so for a while, and it won't be from our Malibu Dream House.

And that burns.

2 days ago 6 votes
Slow, flaky, and failing

Thou shalt not suffer a flaky test to live, because it’s annoying, counterproductive, and dangerous: one day it might fail for real, and you won’t notice. Here’s what to do.

3 days ago 7 votes
Name that Ware, January 2025

The ware for January 2025 is shown below. Thanks to brimdavis for contributing this ware! …back in the day when you would get wares that had “blue wires” in them… One thing I wonder about this ware is…where are the ROMs? Perhaps I’ll find out soon! Happy year of the snake!

3 days ago 5 votes