Today, Alec Watson posted a video titled “Algorithms are breaking how we think” on his YouTube channel, Technology Connections. The whole thing is excellent and very well argued. The main thrust is: people seem increasingly less mindful about the stuff they engage with. Watson argues that this is bad, and I agree. A little while ago I watched a video by Hank Green called “$4.5M to Spray Alcoholic Rats with Bobcat Urine”. Green has been banging this drum for a while. He hits some of the same notes as Watson, but from a different angle.

This last month has been a lot, and I’ve withdrawn from news and social media quite a bit because of it. Part of this is because I’ve been very busy with work, but it’s also because I’ve felt overwhelmed. There are now a lot of bad-faith actors in positions of power. Part of their game plan is to spray a mass of obviously false, intellectually shallow, enraging nonsense into the world as quickly as possible. At a certain point the bullshit seeps in if you’re soaking in it.

The ability to control what you see next is powerful. I think it would be great if more people started being a bit more choosy about who they give that control to.
Yesterday, the Mastodon team announced it would be handing over control of the project to a new non-profit organization. The timing of this announcement is perfect given everything that’s happening with WordPress, Meta, and… well, everything else. To date, I think Eugen Rochko has done an excellent job stewarding Mastodon, but I also might have said the same thing about Matt Mullenweg a few years back. Why gamble when you can set up safeguards?

Not to dwell on the WordPress situation, but I came across a shockingly prescient post from 2010. It lays out potential conflicts of interest between Automattic and the open source WordPress community.[1] Just about every warning from that post has come to pass in the last few months. It’s exactly these sorts of things that Mastodon looks to be trying to prevent with this new organizational structure. The re-org should also give Rochko more time to focus on product design, which sounds like a win in my book.

At this point, I don’t think Mastodon will ever take over the world, but it’s a cozy place with stellar third-party clients. It’s also where a large contingent of the Apple/tech cohort continue to hang out. Bluesky has really taken off, but Mastodon is still a big part of my social media diet.

Yesterday also saw the launch of the Free Our Feeds campaign. I’m honestly not sure what to make of this, but I think John Gruber had a great take. The organization is requesting “$30M over three years” to launch “a new public interest foundation that puts Bluesky’s underlying technology on a pathway to become an open and healthy social media ecosystem that cannot be controlled by any single company or billionaire”. Only, that’s also Bluesky’s stated goal. I’ve written before about my hesitations around the protocol powering Bluesky, and I think that a competing “AppView” would be welcome — but it’s unclear if that’s what Free Our Feeds is going for. They mention wanting to build a second “relay”, though I don’t know if they’re talking about a Relay in the AT Protocol sense. Another canonical Relay would be a good start, but it wouldn’t counter any issues if Bluesky itself started going off the rails. I wish the Free Our Feeds people all the best, but I hope they provide a more detailed plan soon. Until then, I think I’ll just continue donating to Mastodon’s Patreon.

[1] Just watch out for the comment section. It really hasn’t aged well.
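An aside on that Relay point, for anyone who hasn’t dug into the protocol: a Relay, in the AT Protocol sense, is the service that crawls the network’s personal data servers and re-broadcasts everything as a single event firehose. As a rough illustration of what that looks like from the consuming side, here’s a minimal sketch, assuming Node.js with the ws package and Bluesky’s public relay at bsky.network:

```typescript
// Minimal sketch: watch a Relay's firehose (com.atproto.sync.subscribeRepos).
// Assumes Node.js and the "ws" package. Frames are binary DAG-CBOR, so a real
// consumer would decode them with an AT Protocol library; here we just count
// frames to get a feel for the volume a Relay re-broadcasts.
import WebSocket from "ws";

const relay = new WebSocket(
  "wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos"
);

let frames = 0;
relay.on("message", () => {
  frames += 1;
  if (frames % 1_000 === 0) {
    console.log(`${frames} repo events and counting…`);
  }
});

relay.on("error", (err) => console.error("relay error:", err));
```

Useful redundancy, to be sure. But as argued above, a second copy of this stream doesn’t help much if the trouble starts at the AppView layer.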
This morning I read a 404 Media article about Instagram showing people ads with AI-generated images of themselves. I thought this take from Sam Biddle was very good:

“Never in my career have I seen such a giant gulf between What Companies Think Is the Most Important Thing in the World and What Normal People Have Absolutely Any Interest in Whatsoever”

Meta has always been a taste-free zone, so this sort of promotion isn’t surprising to me. Apple, who have always tried to embody taste, are also tripping over themselves to squeeze AI into everything. I think some of their ideas are reasonable, but not all of them. Most recently, their AI summarization feature has tripped up a second time while summarizing notifications from the BBC.

I think LLMs and generative AI are interesting and useful pieces of technology. I also think they’re massively overhyped and their current capabilities are poorly understood. Some people compare the LLM craze to the crypto/blockchain boom, and I think that’s unfair. Blockchains are slow, expensive databases masquerading as a social revolution while functionally being a get-rich-quick scheme. LLMs have been useful for years, and are only getting more useful with time. Still, that doesn’t mean they should be shoved into every corner of every product.

There’s a segment of the population that is disgusted by anything related to LLMs or generative AI. Part of this is because LLMs come with a massive ethical issue built in. Training them requires feeding a statistical model as much content as possible. The makeup of the big LLM training datasets is proprietary, but a conservative bet is that at least 50% of any model’s training set is unlicensed copyrighted content. The companies doing the training say that using this copyrighted content is OK because it’s transformative. Some of them go so far as to say that any content on the web is free for the taking. There’s a deep sense of unfairness at play. OpenAI, for example, slurps up content which doesn’t belong to them, uses it to get billions in funding, and announces a $200/month subscription plan. Meanwhile, regular folks get smacked around by lawyers for doing similarly transformative stuff. Heck, the Internet Archive isn’t even allowed to lend books, which I think is a very reasonable use of copyrighted material![1]

Because of this, even useful features that have a hint of “AI” get crapped on. Earlier this week I saw several posts on Bluesky linking to an article on The Register about an Apple Photos feature which detects landmarks, and which arrived in October. Many were upset about being automatically opted into something AI-related. To me, this feature seems legitimately useful, and it also seems like Apple has been more than responsible in terms of preserving privacy. It only seemed to be a big deal because there’s “AI” at play. Only, according to the research paper, it’s actually using more traditional machine learning, not LLMs. I can maybe see an argument that the feature should have been opt-in, but I also think it’s unreasonable for Apple to have a checkbox for every new feature in every point release. Meanwhile, what The Register is accusing Apple of is actually the sort of thing that Google does at every opportunity.

My point here is that, like a lot of things these days, there seems to be a weird divide. Companies like Apple are currently trying to “AI” everything. Sometimes this makes sense to me; often it doesn’t. Meanwhile, there’s a chunk of the public that’s angrily opposed to anything vaguely AI-flavoured.
There was a great discussion about this on Upgrade this week, which I recommend. Jason and Myke made a great case that Apple needs to be held more accountable when it screws up, and that it’s currently being graded on a curve because “LLMs make mistakes”. Apple seems pressured to play catch-up on AI technology, and I feel like this is being driven by activist shareholders instead of people who are focused on products. Apple has previously been a company that takes their time and does things right. Their current AI strategy seems to be to announce everything way too early and release some things before they’re ready. From the outside it feels like there was a dictate from on high that everyone needs to drop what they’re doing and sprinkle AI everywhere.

Like I said above, I think that generative AI is useful and interesting. I’m also someone who’s very interested in product design. Building a product that starts from a technology instead of a user need is ass-backwards. I really hope Apple is a bit more mindful as it continues to roll out future AI features.

[1] For whatever it’s worth, I square this circle personally because I believe that fair use of copyrighted material should be more permissive in general. The fact that large corporations can get away with copyright uses which most citizens can’t is an issue with our current implementation of capitalism, and needs to be addressed from the ground up. Companies like OpenAI are playing the game by the current set of rules, and boycotting their services won’t end that.
At the end of last year, I wrote about wanting to focus on the web in 2024. How did that shake out?

Top level stats

I considered 2023 the first year that I honestly tried getting back into blogging. My goal then was to post something every month, and I managed that with 14 posts in total. This year I wanted to post something at least once a week, and I also met that goal with 79 posts (including this post). Overall, I wrote 36,515 words on this site in 2024. My top five posts by traffic were:

- Maybe Bluesky has “won”
- Why isn’t the <html> element 100% supported on CanIUse.com?
- ACF has been hijacked
- Get yourself a /dev/lunch
- The hidden WordPress license

Looking back on blog posts over the year

January was a slower month for me, work-wise, and I was able to spend some time thinking about this site. I wrote a few things, and I’m still quite happy with Just write, you dolt and The library is a superpower, but February was when things really kicked off. I wrote a piece about a weird quirk of the Can I Use… website where no element (including html) was ever 100% supported. This got a large amount of traffic, and was the top post on Hacker News for a few hours.

The rest of February through April was quieter. I experimented with posting some smaller items, but I couldn’t get that to stick. I’m still happy to have learned about the goofy Katamari Damacy patent images because of a copyright date oddity, as well as some TikTok ban thoughts that I still think hold up.

May was another highlight for this site. I decided to participate in Weblog Posting Month at the last minute, having already posted something the day before. This was a fun experiment and I took away some lessons. I’ll likely give this another go this year, but I’ll give myself more leeway to post smaller things.

June through the beginning of September were light, having somewhat burned myself out on posting in May. I didn’t post anything in August, which I still feel bad about. It was an extremely busy month for me, but I also feel like I fell off the wagon.

Later in September and through October things kicked off again as I covered the still-ongoing WordPress vs. WP Engine drama. Several of the posts in this series got a lot of traffic, with ACF has been hijacked going viral.

I’d been writing about Bluesky since mid-2023, and went from deep skepticism to begrudging support. In November, I wrote about how my feelings changed about the service in Maybe Bluesky has “won”. This turned out to be my biggest post by traffic ever. I also wrote about creating a small toy site to experiment with the Bluesky firehose.

December was fairly low-key with a few small posts as I got some down time during the holidays. I used my iPad to post the last three pieces, which was a surprisingly pleasant experience.

What’s next for 2025?

As I’ve written before, the default publishing flow using Jekyll and GitHub Pages has become a headache. I plan to replatform, and I’m almost certainly going to use Eleventy for that. There are many things I’d like to improve on the site, and I don’t really want to build them on a creaky foundation. I plan to write more about this process as it moves forward.

I also plan to do some incremental design tweaks. I’ve been using the same design since 2012 and have only slightly changed it over time. I don’t think I need a major redesign, but there are some additional things I’d like to add. Primarily, better archive navigation and post categories.
I also want to add smaller posts more often, rather than having those thoughts live on someone else’s service first. I’m happy to use Mastodon and Bluesky for socializing and discussion, but I’d rather have links and small posts living on my own site.

Finally, I want to build more small sites and projects. I used to do this a lot more, and I have no idea why I stopped. Building the Bluesky Filter was a fun afternoon project, and I was recently inspired by Robb Knight’s Mean Girls curio. I think I may have over-focused on the blogging part in 2024, and I want to correct that.

Overall, I think this was a successful year in terms of upping my indie-web game. I hope to keep the momentum going next year.
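If you’re curious what an afternoon-sized firehose project looks like, here’s a sketch of the general shape, not how Bluesky Filter is actually built. It assumes Node.js with the ws package and one of Bluesky’s public Jetstream instances, which re-serve the firehose as plain JSON; the keyword is an arbitrary placeholder.

```typescript
// A toy Bluesky firehose consumer via Jetstream (Bluesky's JSON-over-WebSocket
// mirror of the firehose). Sketch only: the endpoint and event shape follow
// the public Jetstream instances; "katamari" is an arbitrary example keyword.
import WebSocket from "ws";

const socket = new WebSocket(
  "wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post"
);

socket.on("message", (data) => {
  const event = JSON.parse(data.toString());

  // Jetstream also emits identity and account events; we only want new posts.
  if (event.kind !== "commit" || event.commit?.operation !== "create") return;

  const text: string = event.commit.record?.text ?? "";
  if (text.toLowerCase().includes("katamari")) {
    console.log(`${event.did}: ${text}`);
  }
});

socket.on("error", (err) => console.error("jetstream error:", err));
```

That’s more or less the whole trick: a WebSocket, a filter, and whatever you want to do with the matches.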