More from Hixie's Natural Log
When you build something, you have to pick some design goals and priorities. Ideally you do so explicitly, but even if you don't, you're still implicitly doing so based on your design choices. These choices are trade-offs. If you want to write a quiet song, it won't be loud. If you are writing a software tool and you want to prioritize speed over simplicity, then it won't be as simple as if you'd prioritized simplicity over speed.

There are two main signs that you've succeeded at your goals. The first, and more pleasant, is that you get compliments about how your thing is like you wanted it to be. "I love that song, it's so quiet!" "Your tool is so fast!" Why thank you, that's exactly what I was going for.

The second sign, though, is that you will get complaints. Specifically, people will complain that your thing does not achieve the things you didn't set out to achieve. "I wish this song was louder", "this tool is so hard to use". That you are receiving complaints at all means that people are aware of your creation; that they are complaining about what you specifically set out to make a non-goal is a side-effect of the fact that you made that trade-off.

The worst thing to do, when you receive such complaints, is take them to heart and try to fix them. This is because by definition you wanted these complaints. They are a sign that the thing you built is built as you wanted to build it. The people complaining want something different, they don't want your thing. It's just that your thing is so good that it's the thing they're compelled towards even though it doesn't prioritize the things they care about most.

If you try to fix these complaints, you will, again by definition, be compromising on your goals. If you make the song have a loud part, then it's no longer a quiet song. You wanted a quiet song. Now it's a song that's quiet in parts and loud in parts. It probably still doesn't satisfy the needs of the people who want a loud song, and now it also doesn't satisfy the needs of the people who wanted your original quiet song. If you make your tool easier to use by compromising on the speed, then now you have a tool that's both not as fast as it could be and not as usable as it could have been if you'd started with that as a goal.

It's important, therefore, to separate out complaints into those that are complaints you expect based on your design goals (which you should acknowledge but not fix), vs complaints that are either orthogonal to your goals (which you can fix without compromising your goals), or that are in line with your goals (which you should prioritize, since that's what you said to yourself was most important in the first place).
My involvement in web standards started with the CSS working group. One of the things that we struggled with as a working group was that we would specify how the technology should work, but the browser vendors' implementations weren't exactly what we intended, and web authors would then write web pages that worked with those browsers, even though that meant the web pages themselves were also not doing things like the specifications said they should.

The folks I worked with at the W3C (especially the academics and people working for organizations that did not themselves implement browsers) would frequently bemoan this state of affairs, expressing surprise at how they, the people in charge of the standards, were not being respected by the people implementing the standards.

One of the key insights I had very early on in my work, before working on HTML5, which really influenced the WHATWG and its work, is the realization that the power dynamics at work were not at all the power dynamics that the folks at the W3C described. The reality of the situation was that the power lay entirely in the hands of the users. The users chose browsers. A browser vendor that ignored what the users wanted would lose market share. Market share is everything in this space. Browser vendors want users because they can convert users into dollars (in various ways, but they typically boil down to someone showing them ads and paying the browser vendors for the privilege).

In turn, the browser vendors had more power than the specifications. What they implement is, by definition, what the technology is. The specification can say with absolute clarity that the keyword "marigold" should look yellow, but if a browser vendor makes it look red, then no web author is going to use it to mean yellow, and many will use it to mean red. There is a feedback loop here: if one browser implements "marigold" to mean red, and some important web site (or many unimportant web sites) rely on it, and say something like "best viewed in ThisOrThat browser!" because that's the one they use and in that browser it looks red and red is what looks best, then the other browser vendors are incentivised to make sure that the web page looks good in their browser too. Regardless of what the specification says, therefore, they are going to make "marigold" look red and not yellow.

When I realized this, I also realized a corollary: if you have two competing specifications that both claim to define the same technology, but one matches what the browsers already do while the other one does not, the browser vendors are going to find it more useful to follow the one that matches what they do. This is because they can trust that implementing that specification will get them more market share. It means they won't have to stop and think at every step, "will following this specification cause me to lose users?". It is easier for them to use a specification that takes into account their needs in this way.

We actually tried to explain this to the W3C membership. There was a big meeting in 2004 at Adobe in San Jose, the "W3C Workshop on Web Applications and Compound Documents". We tried to convey the above (I didn't quite understand it in the stark "power dynamic" terms yet, or at least, I didn't really express it in those terms, but if you read our position paper you can see this insight starting to crystallize). At this meeting, we made a pitch for the W3C to continue to maintain HTML and to care about what the browser vendors wanted.
Representatives from Microsoft and Sun (in many ways arch enemies at the time) supported us. I seem to recall Apple being more quiet about it at the meeting but also essentially supporting the principles. The W3C membership roundly rejected this whole concept. One of the W3C staff even explicitly said something along the lines of "if you want to do this you should do it elsewhere". That's what led to the WHATWG being founded a few weeks later.

The WHATWG was founded on this core principle — the specifications need to actually specify reality. When the browsers disagree with the spec, the spec is by definition incorrect and needs to change, regardless of how technically superior the design in the spec is. Naturally, when you provide browser vendors with something that valuable, they will follow.

You end up with a weird inverted power dynamic. The spec writer (when they follow this principle) has all the power, but only within the space that the browser vendors are themselves willing to play; and the browser vendors have all the power, but only within the space that the users are willing to put up with. It's very easy to appear to be in control when you tell people to do the thing they were going to do anyway (or at least, one of the things they were willing to do if they were to think about it). There is a (probably apocryphal) quote supposedly by Alexandre Auguste Ledru-Rollin that is often cited in mockery of bad leadership, but that perfectly matches the power dynamic here: "There go my people; I must find out where they are going so I can lead them".

(Thanks to Leonard Damhorst for prompting me to write this post.)
I often get asked how many people contribute to Flutter. It's a hard question to answer because "contribute" is a very vague concept. There are tens of thousands of packages on pub.dev, all of which were written by members of the community. There are over 100,000 issues filed in our issue database, filed by more than 35,000 people over the years (the exact number is hard to pin down because people sometimes delete their GitHub accounts; about 700 issues have been filed by people who have since deleted their account). Many more people still have used the "thumbs-up" reaction to indicate that an issue matters to them, with almost 165,000 thumbs-up from about 45,000 people. All of these people are valuable contributors to Flutter.

Usually, when pressed, people try to clarify by asking about "the core team". Again, it's hard to say exactly what that means, but let's assume they mean "people with commit access". That is, people we trust enough to have added to the GitHub repo as collaborators. This includes people who work on Flutter for companies like Google, Canonical, or Nevercode, and it includes people like me who are self-employed and/or contribute to Flutter on a volunteer basis. Currently that's about 280 people.

So is that the answer? Well, no, not really. Some people have commit access but aren't active (maybe they got access because of their employer, but were then reassigned to work on another project, and the bureaucracy hasn't caught up with them yet — we only audit the membership occasionally because it's rather tedious to do). Some people have been very active recently but don't have commit access (e.g. because they were just laid off and a bot automatically removed their access; they might even resume working on Flutter in the future, as a volunteer or funded by another company).

So what's the answer? I recently drilled down through our data to see if I could answer this. I will caveat the following numbers by saying that this changes all the time. We added a new team member just today (hi Nate!) who is not counted as a team member in the following numbers because we collected the data a few weeks ago (it takes literally days to scrape all the data from GitHub, and then hours to explore the resulting very large and very slow spreadsheet). Also, some of my definitions are a bit arbitrary, and slightly tweaking the limits would probably change the numbers noticeably.

First, I collected a list of everyone who has ever created an issue, commented on an issue, put an emoji reaction on the first comment of an issue, or submitted a PR, excluding bots and people who deleted their GitHub account. (Actually, Piinks did the data collection. Thanks!) I limited this to a subset of the GitHub repos of the flutter org that is relatively inclusive but does not count everything (we have a lot of historical repositories and so forth). This finds about 94,357 people. (So there you go. The Flutter team is about a hundred thousand people!)

To avoid padding the numbers with people who left the project long ago, and to avoid counting "drive-by" contributors who came, did a bunch of work, and then left, I then limited the data set to people who contributed over a period of more than 180 days, and who last contributed sometime in 2024.
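To make that windowing rule concrete, here is a minimal sketch of it in Python. The data representation and names are invented for illustration; the real analysis was done by exploring a spreadsheet of the scraped GitHub data.

```python
from datetime import date, timedelta

# Hypothetical contributors, each mapped to the dates of their qualifying
# events (issues filed, comments, reactions on an issue's first comment, PRs).
contributions = {
    "thumbs-up-2020-then-issue-2024": [date(2020, 12, 1), date(2024, 1, 15)],
    "two-prs-in-march-2024": [date(2024, 3, 1), date(2024, 3, 20)],
}

def is_counted(events: list[date]) -> bool:
    first, last = min(events), max(events)
    # Counted if the contribution window spans more than 180 days...
    contributed_long_enough = (last - first) > timedelta(days=180)
    # ...and the most recent contribution happened in 2024.
    contributed_recently = last.year == 2024
    return contributed_long_enough and contributed_recently

for person, events in contributions.items():
    print(person, is_counted(events))
# thumbs-up-2020-then-issue-2024 True   (window of ~3 years, active in 2024)
# two-prs-in-march-2024 False           (window of only 19 days)
```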
Because of the definition of "contributed" described above, that means that someone who added a thumbs-up to an issue in December 2020 and then filed an issue in January 2024, and did nothing else, is included, but someone who submitted two PRs in March 2024 is not. Like I said, this is a bit arbitrary. Anyway, that leaves 3,839 people, of whom 182 currently have commit access, 27 once had commit access but don't currently (these are mainly people who either got laid off recently and had their commit access revoked by an automated process, or people who were once team members, left, lost access from inactivity long ago, and then later came back to comment on issues or file new issues — it's surprisingly common for people who once worked on Flutter full time to stick around even when their employment changes), and about 3,627 people who have never had commit access.

Of those who have never had commit access, 2,407 have filed at least one issue or submitted at least one PR (accounting for a total of 12,383 issues and 2,613 PRs). Of those, 341 have filed 5 to 9 issues (2,242 issues total), and 296 have filed 10 or more issues in their lifetime (7,021 total issues). Similarly, of the "never had commit access" cohort, 73 people have sent 5 to 9 pull requests in their lifetime (458 total PRs) and 47 have sent 10 or more (1,321 PRs total). (For context, 4,663 people have ever submitted a pull request, and 429 have ever submitted more than 10 PRs.)

Of the people who currently have commit access, 98 have submitted more than one PR every 3 weeks on average since they first got involved (accounting for 49,173 PRs), 75 have closed at least one issue every 3 weeks (accounting for 48,490 total issue closures), of whom 10 are not in the first group (mostly that's our triage team), and 150 have commented at least once every 3 weeks.

A follow-up question a lot of people ask is "do they all work for Google?". This is a surprisingly hard question to answer. There are a lot of weird edge cases. For example, one person worked on Flutter for a company that Google hired to work on Flutter, but then quit that company, asked for their commit privileges to be removed, and yet continued to be active in the community. Several people who have quit Google (such as myself), or been laid off by Google, have continued to be active in one sense or another (I think I submit more code to Flutter now than I did in my last year at Google). It's also hard to answer because a lot more people at Google contribute to Flutter than just those on Google's Flutter team, and a lot of people on Google's Flutter team contribute in ways that don't show up on GitHub (e.g. product management, marketing, developer relations, internal tooling).

Of the 98 people who have commit access, have been active for more than 180 days, have contributed at least once this year, and have submitted more than one PR every 3 weeks on average for the entire time they've been contributing, I estimate (based on what I know of people's employment and so forth) that about 85% are Googlers or somehow get their funding from Google, and about 15% are currently independent of Google. (This is by no means the entirety of the Google team contributing to Flutter; as I mentioned earlier, many folks at Google working on Flutter don't appear in these statistics.) I'm not sure what conclusion to draw from this; it's both more people than I expected to see funded by Google, which is great, and fewer people who aren't funded by Google than I would like, which is less great.
On the other hand, it's still a significant number of non-Google-funded people. Is it enough? I think that really depends on what your goals are. I think if your goal is for Flutter to be an order of magnitude better than other UI frameworks, then frankly no, it's not enough. There is a ton of work to be done to get there. We know what it would take, but we don't have the people to do it today. On the other hand, if your goal is to be a great framework, on par with others, then it's probably adequate. It would certainly be difficult to continue to be great with fewer people than we have today. Of course, that may change as we complete big efforts, or as we take on new ones, or as the landscape changes; it's all hard to predict.

That said, I would love to see more direct contributions from non-Google sources, if for no other reason than to end this silly "will Google cancel Flutter" line of questioning that has followed the project since its inception. It's a dumb question. Flutter's an open source UI framework. It will never die. It will become old and something else will shine brighter one day, just as happens with literally every other UI framework ever. That's just how our industry works. There's no reason to believe that'll happen any time soon though, and certainly no reason for it to happen earlier for Flutter than for any other modern UI framework.
Despite my departure from Google, I am not leaving Flutter — the great thing about open source and open standards is that the product and the employer are orthogonal. I've had three employers in my career, and in all three cases when I left my employer I continued my job. With Netscape, I was a member of the team before my internship, during my internship, and after my internship. With Opera Software, I joined while working on standards, kept working on standards, and left while working on the same standard that I then continued to work on at Google. So this is not a new thing for me.

Flutter is amazingly successful. It's already the leading mobile app development framework, and I think we're close to having the table stakes required to make it the obvious default choice for desktop development as well (it's already there for some use cases). It's increasingly used in embedded scenarios. And Flutter is extremely well positioned to be the first truly usable Wasm framework as the web transitions to the more powerful, lower-level Wasm-based model over the next few years.

In the coming month I will prepare our roadmap for 2024 (in consultation with the rest of the team). For me personally, however, my focus will probably be on fixing fun bugs, and on making progress on blankcanvas, my library for making it easy to build custom widget sets. I also expect I will be continuing to work on package:rfw, the UI-push library, as there has been increasing interest from teams using Flutter who want ways to present custom interfaces determined by the server at runtime, without requiring the user to download an updated app.
More in programming
One of the recurring challenges in any organization is how to split your attention across long-term and short-term problems. Your software might be struggling to scale with ramping user load while you also know that you have a series of meaningful security vulnerabilities that need to be closed sooner rather than later. How do you balance across them?

These sorts of balance questions occur at every level of an organization. A particularly frequent format is the debate between Product and Engineering about how much time goes towards developing new functionality versus improving what’s already been implemented. In 2020, Calm was growing rapidly as we navigated the COVID-19 pandemic, and the team was struggling to make improvements, as they felt saturated by incoming new requests. This strategy for resourcing Engineering-driven projects was our attempt to solve that problem.

This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for resourcing Engineering-driven projects are:

- We will protect one Eng-driven project per product engineering team, per quarter. These projects should represent a maximum of 20% of the team’s bandwidth.
- Each project must advance a measurable metric, and execution must be designed to show progress on that metric within 4 weeks.
- These projects must adhere to Calm’s existing Engineering strategies.
- We resource these projects first in the team’s planning, rather than last. However, only concrete projects are resourced. If there’s no concrete proposal, then the team won’t have time budgeted for Engineering-driven work.
- The team’s engineering manager is responsible for deciding on the project, ensuring the project is valuable, and pushing back on attempts to defund the project.
- Project selection does not require CTO approval, but you should escalate to the CTO if there’s friction or disagreement.
- The CTO will review Engineering-driven projects each quarter to summarize their impact and provide feedback to teams’ engineering managers on project selection and execution. They will also review teams that did not perform a project to understand why not.

As we’ve communicated this strategy, we’ve frequently gotten conceptual alignment that this sounds reasonable, coupled with uncertainty about what sort of projects should actually be selected. At some level, this ambiguity is an acknowledgment that we believe teams will identify the best opportunities bottoms-up. Even so, we wanted to give two concrete examples of projects we’re greenlighting in the first batch:

- Code-free media release: historically, we’ve needed to make a number of pull requests to add, organize, and release new pieces of media. This is high-urgency work, but Engineering doesn’t exercise much judgment while doing it, and manual steps often create errors. We aim to track and eliminate these pull requests, while also increasing the number of releases that can be facilitated without scaling the content release team.
- Machine-learning content placement: developing new pieces of media is often a multi-week or multi-month process. After content is ready to release, there’s generally a debate on where to place the content. This matters for the company, as placement drives engagement with our users, but it matters even more to the content creator, who is generally evaluated in terms of their content’s performance. This often leads to Product and Engineering getting caught up in debates about how to surface particular pieces of content. This project aims to improve user engagement by surfacing the best content for each user’s interests, while also giving the Content team several explicit positions to highlight content without Product and Engineering involvement.

Although these projects are similar, it’s not intended that all Engineering-driven projects are of this variety. Instead it’s happenstance based on what the teams view as their biggest opportunities today.

Diagnosis

Our assessment of the current situation at Calm is:

- We are spending a high percentage of our time on urgent but low-engineering-value tasks. Most significantly, about one-third of our time is going into launching, debugging, and changing content that we release into our product. Engineering is involved due to limitations in our implementation, not because there is any inherent value in Engineering’s involvement. (We mostly just make releases slowly and inadvertently introduce bugs of our own.)
- We have a bunch of fairly clear ideas around improving the platform to empower the Content team to speed up releases, and to eliminate the Engineering involvement. However, we’ve struggled to find time to implement them, or to validate that these ideas will work.
- If we don’t find a way to prioritize, and succeed at implementing, a project to reduce Engineering involvement in content releases, we will struggle to support our goals to release more content and to develop more product functionality this year.
- Our Infrastructure team has been able to plan and make these kinds of investments stick. However, when we attempt these projects within our Product Engineering teams, things don’t go that well. We are good at getting them onto the initial roadmap, but then they get deprioritized due to pressure to complete other projects.
- The Engineering team is not very fungible due to its small size (20 engineers), and because we have many specializations within the team: iOS, Android, Backend, Frontend, Infrastructure, and QA. We would like to staff these kinds of projects onto the Infrastructure team, but in practice that team does not have the product development experience to implement this kind of project.
- We’ve discussed spinning up a Platform team, or moving product engineers onto Infrastructure, but that would either (1) break our goal to maintain joint pairs between Product Managers and Engineering Managers, or (2) be indistinguishable from prioritizing within the existing team, because it would still have the same Product Manager and Engineering Manager pair.
- Company planning is organic, occurring in many discussions and limited structured process. If we make a decision to invest in one project, it’s easy for that project to get deprioritized in a side discussion missing context on why the project is important. These reprioritization discussions happen both in executive forums and in team-specific forums, and there’s imperfect awareness across the two sorts of forums.

Explore

Prioritization is a deep topic with a wide variety of popular solutions.
For example, many software companies rely on “RICE” scoring, calculating priority as (Reach × Impact × Confidence) / Effort; a small worked example of this arithmetic follows at the end of this section. At the other extreme are complex methodologies like the Scaled Agile Framework (https://en.wikipedia.org/wiki/Scaled_agile_framework).

In addition to generalized planning solutions, many companies carve out special mechanisms to solve particular prioritization gaps. Google historically offered 20% time to allow individuals to work on experimental projects that didn’t align directly with top-down priorities. Stripe’s Foundation Engineering organization developed the concept of Foundational Initiatives to prioritize cross-pillar projects with long-term implications, which otherwise struggled to get prioritized within the team-led planning process.

All these methods have clear examples of succeeding, and equally clear examples of struggling. Where these initiatives have succeeded, they had an engaged executive sponsoring the practice’s rollout, including triaging escalations when the rollout inconvenienced supporters of the prior method. Where they lacked a sponsor, or were misaligned with the company’s culture, these methods have consistently failed despite having previously succeeded elsewhere.
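As promised above, here is a minimal sketch of the RICE arithmetic in Python. The numbers are invented for illustration, not drawn from any real plan.

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE priority: (Reach * Impact * Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# E.g. a hypothetical project reaching 2,000 users per quarter, with high
# impact (2), 80% confidence, and an estimated cost of 4 person-weeks:
print(rice_score(reach=2000, impact=2, confidence=0.8, effort=4))  # 800.0
```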
I used to make little applications just for myself. Sixteen years ago (oof) I wrote a habit tracking application, and a keylogger that let me keep track of when I was using a computer, and generate some pretty charts.

I’ve taken a long break from those kinds of things. I love my hobbies, but they’ve drifted toward the non-technical, and the idea of keeping a server online for a fun project is unappealing (which is something that I hope Val Town, where I work, fixes). Some folks maintain whole ‘homelab’ setups and run Kubernetes in their basement. Not me, at least for now. But I have been tiptoeing back into some little custom tools that only I use, with a focus on just my own computing experience. Here’s a quick tour.

Hammerspoon

Hammerspoon is an extremely powerful scripting tool for macOS that lets you write custom keyboard shortcuts, UIs, and more with the very friendly little language Lua. Right now my Hammerspoon configuration is very simple, but I think I’ll use it for a lot more as time progresses. Here it is:

```lua
-- Cmd+Shift+Return: toggle the Ghostty terminal's visibility.
hs.hotkey.bind({"cmd", "shift"}, "return", function()
  local frontmost = hs.application.frontmostApplication()
  if frontmost:name() == "Ghostty" then
    frontmost:hide()
  else
    hs.application.launchOrFocus("Ghostty")
  end
end)
```

Not much! But I recently switched to Ghostty as my terminal, and I heavily relied on iTerm2’s global show/hide shortcut. Ghostty doesn’t have an equivalent, and Mikael Henriksson suggested a script like this in GitHub discussions, so I ran with it. Hammerspoon can do practically anything, so it’ll probably be useful for other stuff too.

SwiftBar

I review a lot of PRs these days. I wanted an easy way to see how many were in my review queue and go to them quickly. So, this script runs with SwiftBar, which is a flexible way to put any script’s output into your menu bar. It uses the GitHub CLI to list the PRs awaiting my review, and jq to massage that output into a friendly list, which I can click on to go directly to the PR on GitHub.

```bash
#!/bin/bash
# <xbar.title>GitHub PR Reviews</xbar.title>
# <xbar.version>v0.0</xbar.version>
# <xbar.author>Tom MacWright</xbar.author>
# <xbar.author.github>tmcw</xbar.author.github>
# <xbar.desc>Displays PRs that you need to review</xbar.desc>
# <xbar.image></xbar.image>
# <xbar.dependencies>Bash GNU AWK</xbar.dependencies>
# <xbar.abouturl></xbar.abouturl>

# Fetch open PRs where my review is requested.
DATA=$(gh search prs --state=open -R val-town/val.town --review-requested=@me --json url,title,number,author)

# Menu bar title: the number of PRs in the queue.
echo "$(echo "$DATA" | jq 'length') PR"
echo '---'

# One clickable menu entry per PR.
echo "$DATA" | jq -c '.[]' | while IFS= read -r pr; do
  TITLE=$(echo "$pr" | jq -r '.title')
  AUTHOR=$(echo "$pr" | jq -r '.author.login')
  URL=$(echo "$pr" | jq -r '.url')
  echo "$TITLE ($AUTHOR) | href=$URL"
done
```

Tampermonkey

Tampermonkey is essentially a twist on Greasemonkey: both let you run your own JavaScript on anybody’s webpage. Sidenote: Greasemonkey was created by Aaron Boodman, who went on to write Replicache, which I used in Placemark, and is now working on Zero, the successor to Replicache.

Anyway, I have a few fancy credit cards which have ‘offers’ that only work if you ‘activate’ them. This is an annoying dark pattern! And there’s a solution to it, CardPointers, but I neither spend enough nor care enough about points hacking to justify the cost. Plus, I’d like to know what code is running on my bank website. So, Tampermonkey to the rescue! I wrote userscripts for Chase, American Express, and Citi.
You can check them out on this Gist, but I strongly recommend reading through all the code, because of the aforementioned risks of running untrusted code on your bank account’s website!

Obsidian Freeform

This is a plugin for Obsidian, the notetaking tool that I use every day. Freeform is pretty cool, if I can say so myself (I wrote it), but could be much better. The development experience is lackluster because you can’t preview output at the same time as writing code: you have to toggle between the two states. I’ll fix that eventually, or perhaps Obsidian will add a new API that makes it all work.

I use Freeform for a lot of private health & financial data, almost always with an Observable Plot visualization as an eventual output. For example, when I was switching banks and one of the considerations was mortgage discounts in case I ever buy a house (ha 😢), it was fun to chart out the % discounts versus the required AUM. It’s been really nice to have this kind of visualization as ‘just another document’ in my notetaking app. Doesn’t need another server, and Obsidian is pretty secure and private.
At a conference a while back, I noticed a couple of speakers get such a confidence boost after solving a small technical glitch. We should probably make that a part of every talk. Have the mic not connect automatically, or an almost-complete puzzle on the stage that the speaker can finish, or have someone forget their badge and the speaker return it to them. Maybe the next time I, or a consenting teammate, have to give a presentation I’ll try to engineer such a situation.

“All conference talks should start with a small technical glitch that the speaker can easily solve” was originally published by Ognjen Regoje at ognjen.io on April 03, 2025.
A large part of our civilisation rests on the shoulders of one medieval monk: Thomas Aquinas. Amid the turmoil of life, riddled with wickedness and pain, he would insist that our world is good. And all our success is built on this belief.

Note: Before we start, let’s get one thing out of the way: Thomas Aquinas is clearly a Christian thinker, a Saint even. Yet he was also a brilliant philosopher. So even if you consider yourself agnostic or an atheist, stay with me, you will still enjoy his ideas.

What is good?

Thomas’ argument is rooted in Aristotle’s concept of goodness: Something is good if it fulfills its function. Aristotle had illustrated this idea with a knife. A knife is good to the extent that it cuts well. He made a distinction between an actual knife and its ideal function. That actual thing in your drawer is the existence of a knife. And its ideal function is its essence—what it means to be a knife: to cut well.

So everything is separated into its existence and its ideal essence. And this is also true for humans: We have an ideal conception of what the essence of a human […]

The post Thomas Aquinas — The world is divine! appeared first on Ralph Ammer.
My April Cools is out! Gaming Games for Non-Gamers is a 3,000-word essay on video games worth playing if you've never enjoyed a video game before. Patreon notes here. (April Cools is a project where we write genuine content on non-normal topics. You can see all the other April Cools posted so far here. There's still time to submit your own!)