More from pcloadletter
Generative AI will probably make blogs better. Have you ever searched for something on Google and found the first one, two, or three blog posts to be utter nonsense? That's because those posts have been optimized not for human consumption but to entertain the search engine ranking algorithms. People have figured out the right buzzwords to include in headings, how to game backlinks, and how to research keywords so they can write up blog posts about things they know nothing about. Pleasing these bots means raking in the views—and ad revenue (or product referrals, sales leads, etc.).

Search Engine Optimization (SEO) may have been the single worst thing that happened to the web. Every year, search results seem worse than the year before. The streets of the internet are littered with SEO junk. But now we may have an escape from this SEO hellscape: generative AI! Think about it: if AI-generated search results (or even direct use of AI chat interfaces) subsume web search as the primary way to look up information, there will be no more motivation to crank out SEO-driven content. These kinds of articles will fade into obscurity once the only purpose for their existence (monetization) is gone. Perhaps we will be left with the blogosphere of old, with webrings and RSS (not that these things went away, but they're certainly not mainstream anymore).

This, anyway, is my hope. No more blogging to entertain the robots. Just writing stuff you want to write and sharing it with other like-minded folks online.
The older I get, the more I dislike clever code. This is not a controversial take; it is pretty well agreed upon that clever code is bad. But I particularly like the on-call responsibility framing: write code that you can understand when you get paged at 2am. If you have never been lucky enough to get paged at 2am, I'll paint the picture for you:

A critical part of the app is down. Your phone starts dinging on your nightstand next to you. You wake up with a start, not quite sure who you are or where you are. You put on your glasses and squint at the way-too-bright screen of your phone. It's PagerDuty. "Oh shit," you think. You pop open your laptop, open the PagerDuty web app, and read the alert. You go to your telemetry and logging systems and figure out approximately whereabouts in the codebase the issue is. You open your IDE and start sweating: "I have no idea what the hell any of this code means." The git blame shows you wrote the code 2 years ago. You thought that abstraction was pretty clever at the time, but now you're paying the price: your code is inscrutable to an exhausted, stressed version of yourself who just wants to get the app back online.

Reasons for clever code #

There are a few reasons for clever code that I have seen over my career.

Thinking clever code is inherently good #

I think at some point a lot of engineers end up in a place where they become very skilled in a language before they understand the importance of writing clean, readable code. Consider the following two javascript snippets:

snippet 1

const sum = items.reduce(
  (acc, el) => (typeof el === "number" ? acc + el : acc),
  0
);

snippet 2

let sum = 0;
for (const item of items) {
  if (typeof item === "number") {
    sum = sum + item;
  }
}

At one point in my career, I would have assumed the first snippet was superior: fewer lines, and it uses the reduce method! But I promise far more engineers can very quickly and easily understand what's going on in the second snippet.
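For what it's worth, the two snippets are behaviorally identical; here's a quick sanity check with made-up sample data (the items array is hypothetical, not from the post):

```javascript
// Hypothetical sample data: non-numbers should be skipped by both versions.
const items = [1, "a", 2, null, 3.5];

// Snippet 1: reduce-based sum of numeric entries.
const sumReduce = items.reduce(
  (acc, el) => (typeof el === "number" ? acc + el : acc),
  0
);

// Snippet 2: plain loop computing the same sum.
let sumLoop = 0;
for (const item of items) {
  if (typeof item === "number") {
    sumLoop = sumLoop + item;
  }
}

console.log(sumReduce, sumLoop); // prints: 6.5 6.5
```

The debate is purely about readability, not correctness or behavior.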
I would much rather have the second snippet in my codebase any day.

Premature abstraction #

Premature abstractions tend to be pretty common in object-oriented languages. This stackexchange answer made me laugh quite a bit, so I'll use it as an example. Let's say you have a system with employee information. Well, perhaps you decide employees are a type of human, so we'd better have a human class, and humans are a type of mammal, so we'd better have a mammal class, and so on. All of a sudden, you might have to navigate several layers up to the animal class to see an employee's properties and methods. As the stackexchange answer succinctly put it:

As a result, we ended up with code that really only needed to deal with, say, records of employees, but were carefully written to be ready if you ever hired an arachnid or maybe a crustacean.

DRY dogma #

Don't Repeat Yourself (DRY) is a coding philosophy where you try to minimize the amount of code repeated in your software. In theory, even repeating code once increases the chance that you'll miss updating it in one of the two places, or that the behavior will drift and become inconsistent. In practice, DRYing up code can sometimes be complex. Perhaps there is a little repeated code shared between client and server. Do we need to create a way to share this logic? If it's only one small instance, it simply may not be worth the complexity of sharing logic. If this is going to be a common issue in the codebase, then perhaps centralizing the logic is worth it. But importantly, we can't just assume that one instance of repeated code means we must eliminate the redundancy.

What should we aim for instead? #

There's definitely a balance to be struck. We can't have purely dumb code with no abstractions: that ends up being pretty error-prone. Imagine you're working with an API that has some set of required headers. Forcing all engineers to remember to include those headers with every API call is error-prone.
file1

fetch("/api/users", {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

fetch(`/api/users/${userId}`, {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

file2

fetch("/api/transactions", {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

file3

fetch("/api/settings", {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

Furthermore, having to track down every instance of that API call to update the headers (or any other required info) could be challenging. In this instance, it makes a lot of sense to create some kind of API service that encapsulates the header logic:

service

function apiRequest(url, options = {}) {
  // Merge the required headers into whatever the caller provides.
  return fetch(url, {
    ...options,
    headers: {
      Authorization: `Bearer ${token}`,
      AppVersion: version,
      XsrfToken: xsrfToken,
      ...options.headers,
    },
  });
}

file1

apiRequest("/api/users");
apiRequest(`/api/users/${userId}`);

file2

apiRequest("/api/transactions");

file3

apiRequest("/api/settings");

The apiRequest function is a pretty helpful abstraction. It helps that it is a very minimal abstraction: just enough to prevent future engineers from making mistakes but not so much that it's confusing. These kinds of abstractions, however, can get out of hand. I have seen code where making a request looks something like this:

const API_PATH = "api";
const USER_PATH = "user";
const TRANSACTIONS_PATH = "transactions";
const SETTINGS_PATH = "settings";

createRequest(
  endpointGenerationFn,
  [API_PATH, USER_PATH],
  getHeaderOverrides("authenticated")
);
createRequest(
  endpointGenerationFn,
  [API_PATH, USER_PATH, userId],
  getHeaderOverrides("authenticated")
);

There's really no need for this. You're not saving all that much by using variables instead of string literals for paths. In fact, this ends up making the code really hard for someone debugging it to search!
Typically, I'd look for the string "api/user" in my IDE to try to find the location of the request. Would I be able to find it with this abstraction? Would I be able to find it at 2am? Furthermore, passing an endpoint-generation function that consumes the path parts seems like overkill and may be inscrutable to more junior engineers (or, again, 2am you).

Keep it as simple as possible #

So I think in the end my message is to keep your code as simple as possible. Don't create some abstraction that may or may not be needed eventually. Weigh the maintenance value of DRYing up parts of your codebase against readability.
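To make the greppability point concrete, here's a minimal sketch of the simpler style (the names, token value, and paths are all hypothetical, and the wrapper returns a plain request description rather than calling fetch so it runs anywhere):

```javascript
// Hypothetical setup: in a real app these would come from auth/config.
const token = "demo-token";

// One tiny wrapper centralizes the required headers, while every call site
// keeps a literal, searchable path string.
function apiRequest(url, options = {}) {
  return {
    url,
    headers: {
      Authorization: `Bearer ${token}`,
      ...(options.headers || {}),
    },
  };
}

// Searching the codebase for "api/users" finds this line directly,
// even at 2am. No path constants or endpoint-generation functions needed.
const req = apiRequest("/api/users/42");

console.log(req.url); // prints: /api/users/42
```

The trade-off is deliberate: a little repetition in the path strings buys instant findability, while the one thing that genuinely must stay consistent (the headers) lives in exactly one place.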
Here we go again: I'm so tired of crypto web3 LLMs. I'm positive there are wonderful applications for LLMs. The ChatGPT web UI seems great for summarizing information from various online sources (as long as you're willing to verify the things that you learn). But a lot of the "AI businesses" coming out right now are just lightweight wrappers around ChatGPT. It's lazy and unhelpful.

Probably the worst offenders are in the content marketing space. We didn't know how lucky we were back in the "This one weird trick for saving money" days. Now, rather than a human writing that junk, we have every article sounding like the writing-voice equivalent of the dad from Cocomelon. Here's an approximate technical diagram of how these businesses work:

Part 1 is what I like to call the "bilking process." Basically, you put up a flashy landing page promising content generation in exchange for a monthly subscription fee (or discounted annual fee, of course!). No more paying pesky writers! Once the husk of a company has secured the bag, part 2, the "bullshit process," kicks in. Customers provide their niches and the service happily passes queries over to the ChatGPT (or similar) API. In return, customers are rewarded with stinky garbage articles that sound like they're being narrated by HAL on Prozac. Success!

I suppose we should have expected as much. With every new tech trend comes a deluge of tech investors trying to find the next great thing. And when this happens, it's a gold rush every time. I will say I'm more optimistic about "AI" (aka machine learning, aka statistics). There are going to be some pretty cool applications of this tech eventually—but your ChatGPT wrapper ain't it.
I have noticed a trend in a handful of products I've worked on at big tech companies, and I have friends at other big tech companies who have noticed a similar trend: the products are kind of crummy. Here are some experiences that I have often encountered:

- the UI is flaky and/or unintuitive
- there is a lot of cruft in the codebase that has never been cleaned up
- bugs with "acceptable" workarounds never get fixed
- packages/dependencies are badly out of date
- the developer experience is crummy (bad build times, easily breakable processes)

One of the reasons I have found for these issues is that we simply aren't investing enough time in product quality: we have poor or nonexistent quality metrics, invest minimally in testing infrastructure (and in actually writing tests), and don't invest in improving the inner development loop. But why is this? My experience has been that quality is simply a hard sell in big tech.

Let's first talk about something that's an easy sell right now: AI everything. Why is this an easy sell? Well, Microsoft could announce they put ChatGPT in a toaster and their stock price would jump $5/share. The sad truth is that big tech is hyper-focused on doing the things that make stock prices go up in the short term. It's hard to make this connection with quality initiatives. If your software is slightly less shitty, the stock price won't jump next week. So instead of being able to sell the obvious benefit of shiny new features, you need an Engineering Manager willing to risk having lower impact for the sake of a better product. Even if there is broad consensus in your team, group, or org that these quality improvements are necessary, there's a point up the corporate hierarchy where it simply doesn't matter to them. Certainly not as much as shipping some feature to great fanfare.

Part of a bigger strategy?
Cory Doctorow has said some interesting things about enshittification in big tech:

"Enshittification is a three-stage process: first, surpluses are allocated to users until they are locked in. Then they are withdrawn and given to business-customers until they are locked in. Then all the value is harvested for the company's shareholders, leaving just enough residual value in the service to keep both end-users and business-customers glued to the platform."

At a macro level, it's possible this is the strategy: hook users initially, make them dependent on your product, then cram in superficial features that make the stock go up but don't offer real value, and keep the customers simply because they have no real choice but to use your product (an enterprise Office 365 customer probably isn't switching anytime soon). This does seem to have been a good strategy in the short term: look at Microsoft's stock ever since they started cranking out AI everything. But how can the quality corner-cutting work long-term?

I hope the hubris will backfire #

Something will have to give. Big tech products can't just keep getting shittier—can they? I'd like to think some smaller competitors will come eat their lunch, but I'm not sure. Hopefully we're not all too entrenched in the big tech ecosystem for this to happen.
More in science
When Britain actually made something
Window collisions and cats kill more birds than wind farms do, but ornithologists say turbine impacts must be taken seriously. Scientists are testing a range of technologies to reduce bird strikes — from painting stripes to using artificial intelligence — to keep birds safe. Read more on E360.
This article is excerpted from Every American an Innovator: How Innovation Became a Way of Life, by Matthew Wisnioski (The MIT Press, 2025).

Imagine a point-to-point transportation service in which two parties communicate at a distance. A passenger in need of a ride contacts the service via phone. A complex algorithm based on time, distance, and volume informs both passenger and driver of the journey’s cost before it begins. This novel business plan promises efficient service and lower costs. It has the potential to disrupt an overregulated taxi monopoly in cities across the country. Its enhanced transparency may even reduce racial discrimination by preestablishing pickups regardless of race.

[Image: Every American an Innovator: How Innovation Became a Way of Life, by Matthew Wisnioski (The MIT Press, 2025). Credit: The MIT Press]

This service was developed in the 1970s at Carnegie Mellon University. The dial-a-ride service was designed to resurrect a defunct cab company that had once served Pittsburgh’s African American neighborhoods. Backed by the National Science Foundation, the CED (Carnegie Mellon’s Center for Entrepreneurial Development) was envisioned as an innovation “hatchery,” intended to challenge the norms of research science and higher education, foster risk-taking, birth campus startups focused on market-based technological solutions to social problems, and remake American science to serve national needs.

Are innovators born or made? During the Cold War, the model for training scientists and engineers in the United States was one of manpower in service to a linear model of innovation: Scientists pursued “basic” discovery in universities and federal laboratories; engineer–scientists conducted “applied” research elsewhere on campus; engineers developed those ideas in giant teams for companies such as Lockheed and Boeing; and research managers oversaw the whole process. This model dictated national science policy, elevated the scientist as a national hero in pursuit of truth beyond politics, and pumped hundreds of millions of dollars into higher education.
In practice, the lines between basic and applied research were blurred, but the perceived hierarchy was integral to the NSF and the university research culture that it helped to foster.

The question was, how? And would the universities be willing to remake themselves to support innovation?

The NSF experiments with innovation

[Photo: At the Utah Innovation Center, engineering students John DeJong and Douglas Kihm worked on a programmable electronics breadboard. Credit: Special Collections, J. Willard Marriott Library, The University of Utah]

In 1972, NSF director H. Guyford Stever established the Office of Experimental R&D Incentives to “incentivize” innovation for national needs by supporting research on “how the government [could] most effectively accelerate the transfer of new technology into productive enterprise.” Stever stressed the experimental nature of the program because many in the NSF and the scientific community resisted the idea of goal-directed research. Innovation, with its connotations of profit and social change, was even more suspect. To lead the initiative, Stever appointed C.B. Smith, a research manager at United Aircraft Corp., who in turn brought in engineers with industrial experience, including Robert Colton, an automotive engineer. Colton led the university Innovation Center experiment that gave rise to Carnegie Mellon’s CED.

The NSF chose four universities that captured a range of approaches to innovation incubation. MIT targeted undergrads through formal coursework and an innovation “co-op” that assisted in turning ideas into products. The University of Oregon evaluated the ideas of garage inventors from across the country. The University of Utah emphasized an ecosystem of biotech and computer graphics startups coming out of its research labs. And Carnegie Mellon established a nonprofit corporation to support graduate student ventures, including the dial-a-ride service.
[Photo: Grad student Fritz Faulhaber holds one of the radio-coupled taxi meters that Carnegie Mellon students installed in Pittsburgh cabs in the 1970s. Credit: Ralph Guggenheim; Jerome McCavitt/Carnegie-Mellon Alumni News]

Carnegie Mellon got one of the first university incubators

Carnegie Mellon had all the components that experts believed were necessary for innovation: strong engineering, a world-class business school, novel approaches to urban planning with a focus on community needs, and a tradition of industrial design and the practical arts. CMU leaders claimed that the school was smaller, younger, more interdisciplinary, and more agile than MIT.

Leading the effort was Dwight Baumann, the CED’s director. Baumann exemplified a new kind of educator-entrepreneur. The son of North Dakota farmers, he had graduated from North Dakota State University, then headed to MIT for a Ph.D. in mechanical engineering, where he discovered a love of teaching. He also garnered a reputation as an unusually creative engineer with an interest in solving problems that addressed human needs. In the 1950s and 1960s, first as a student and then as an MIT professor, Baumann helped develop one of the first computer-aided-design programs, as well as computer interfaces for the blind and the nation’s first dial-a-ride paratransit system.

[Photo: Dwight Baumann, director of Carnegie Mellon’s Center for Entrepreneurial Development, believed that a modern university should provide entrepreneurial education. Credit: Carnegie Mellon University Archives]

The CED’s mission was to support entrepreneurs in the earliest stages of the innovation process, when they needed space and seed funding. It created an environment for students to make a “sequence of nonfatal mistakes,” so they could fail and develop self-confidence for navigating the risks and uncertainties of entrepreneurial life. It targeted graduate students who already had advanced scientific and engineering training and a viable idea for a business.
[Photo: Carnegie Mellon’s dial-a-ride service replicated the Peoples Cab Co., which had provided taxi service to Black communities in Pittsburgh. Credit: Charles “Teenie” Harris/Carnegie Museum of Art/Getty Images]

A few CED students did create successful startups. The breakout hit was Compuguard, founded by electrical engineering Ph.D. students Romesh Wadhwani and Krishnahadi Pribad, who hailed from India and Indonesia, respectively. The pair spent 18 months developing a security bracelet that used wireless signals to protect vulnerable people in dangerous work environments. But after failing to convert their prototype into a working design, they pivoted to a security- and energy-monitoring system for schools, prisons, and warehouses. Today, Wadhwani’s Wadhwani Foundation supports innovation and entrepreneurship education worldwide, particularly in emerging economies.

Wharton School and elsewhere. In 1983, Baumann’s onetime partner Jack Thorne took the lead of the new Enterprise Corp., which aimed to help Pittsburgh’s entrepreneurs raise venture capital. Baumann was kicked out of his garage to make room for the initiative.

Was the NSF’s experiment in innovation a success?

As the university Innovation Center experiment wrapped up in the late 1970s, the NSF patted itself on the back in a series of reports, conferences, and articles. “The ultimate effect of the Innovation Centers,” it stated, would be “the regrowth of invention, innovation, and entrepreneurship in the American economic system.” The NSF claimed that the experiment produced dozens of new ventures with US $20 million in gross revenue, employed nearly 800 people, and yielded $4 million in tax revenue. Yet, by 1979, license returns from intellectual property had generated only $100,000.
“Today, the legacies of the NSF experiment are visible on nearly every college campus.”

Critics included Senator William Proxmire of Wisconsin, who pointed to the banana peelers, video games, and sports equipment pursued in the centers to lambast them as “wasteful federal spending” of “questionable benefit to the American taxpayer.” And so the impacts of the NSF’s Innovation Center experiment weren’t immediately obvious. Many faculty and administrators of that era were still apt to view such programs as frivolous, nonacademic, or not worth the investment.
The timing of benefits matters to families and doesn't change costs for governments
Studies of neural metabolism reveal our brain’s effort to keep us alive and the evolutionary constraints that sculpted our most complex organ. (From “How Much Energy Does It Take To Think?”, Quanta Magazine.)