I have noticed a trend in a handful of products I've worked on at big tech companies, and friends at other big tech companies have noticed a similar trend: the products are kind of crummy. Here are some experiences that I have often encountered:

- the UI is flaky and/or unintuitive
- there is a lot of cruft in the codebase that has never been cleaned up
- bugs that have "acceptable" workarounds never get fixed
- packages/dependencies are badly out of date
- the developer experience is crummy (bad build times, easily breakable processes)

One of the reasons I have found for these issues is that we simply aren't investing enough time to increase product quality: we have poor or nonexistent quality metrics, invest minimally in testing infrastructure (and in actually writing tests), and don't invest in improving the inner loop. But why is this? My experience has been that quality is simply a hard sell in big tech. Let's first talk about something that's an easy sell right now: AI...
a year ago 42 votes


More from pcloadletter

Generative AI will probably make blogs better

Generative AI will probably make blogs better. Have you ever searched for something on Google and found the first one, two, or three blog posts to be utter nonsense? That's because these blog posts have been optimized not for human consumption, but rather to entertain the search engine ranking algorithms. People have figured out the right buzzwords to include in headings, how to game backlinks, and how to research keywords so they can write up blog posts about things they know nothing about. Pleasing these bots means raking in the views—and ad revenue (or product referrals, sales leads, etc.). Search Engine Optimization (SEO) may have been the single worst thing that happened to the web. Every year, it seems, search results get worse than the year before. The streets of the internet are littered with SEO junk. But now, we may have an escape from this SEO hellscape: generative AI! Think about it: if AI-generated search results (or even direct use of AI chat interfaces) subsume web search as a primary way to look up information, there will be no more motivation to crank out SEO-driven content. These kinds of articles will fade into obscurity as the only purpose for their existence (monetization) is gone. Perhaps we will be left with the blogosphere of old, with webrings and RSS (not that these things went away, but they're certainly not mainstream anymore). This, anyway, is my hope. No more blogging to entertain the robots. Just writing stuff you want to write and share with other like-minded folks online.

a month ago 8 votes
My articles don't belong on certain social networks

I write this blog because I enjoy writing. Some people enjoy reading what I write, which makes me feel really great! Recently, I took down a post and stopped writing for a few months because I didn't love the reaction I was getting on social media sites like Reddit and Hacker News. On these social networks, there seems to be an epidemic of "gotcha" commenters, contrarians, and know-it-alls. No matter what you post, you can be sure that folks will come with their sharpest pitchforks to try to skewer you. I'm not sure exactly what it is about those two websites in particular. I suspect it's the gamification of the comment system (more upvotes = more points = dopamine hit). Unfortunately, it seems the easiest way to win points on these sites is to tear down the original content. At any rate, I really don't enjoy bad faith Internet comments and I have a decent-enough following outside of these social networks that I don't really have to endure them. Some might argue I need thicker skin. I don't think that's really true: your experience on the Internet is what you make of it. You don't have to participate in parts of it if you don't want to. Also, I know many of you reading this post (likely RSS subscribers at this point) came from Reddit or Hacker News in the first place. I don't mean to insult you or suggest by any means that everyone, or even the majority of users, on these sites are acting in bad faith. Still, I have taken a page from Tom MacWright's playbook and decided to add a bit of javascript to my website that helpfully redirects users from these two sites elsewhere:

    // If the visitor arrived from a banned referrer (Hacker News or Reddit),
    // send them to Google instead of showing the article.
    try {
      const bannedReferrers = [/news\.ycombinator\.com/i, /reddit\.com/i];
      if (document.referrer) {
        const ref = new URL(document.referrer);
        if (bannedReferrers.some((r) => r.test(ref.host))) {
          window.location.href = "https://google.com/";
        }
      }
    } catch (e) {}

After implementing this redirect, I feel a lot more energized to write! I'm no longer worried about having to endlessly caveat my work for fear of getting bludgeoned on social media. I'm writing what I want to write and, for those of you here to join me, I say thank you!

a year ago 105 votes
Write code that you can understand when you get paged at 2am

The older I get, the more I dislike clever code. This is not a controversial take; it is pretty well agreed upon that clever code is bad. But I particularly like the on-call responsibility framing: write code that you can understand when you get paged at 2am. If you have never been lucky enough to get paged at 2am, I'll paint the picture for you: A critical part of the app is down. Your phone starts dinging on your nightstand next to you. You wake up with a start, not quite sure who you are or where you are. You put on your glasses and squint at the way-too-bright screen of your phone. It's PagerDuty. "Oh shit," you think. You pop open your laptop, open the PagerDuty web app, and read the alert. You go to your telemetry and logging systems and figure out approximately where in the codebase the issue is. You open your IDE and start sweating: "I have no idea what the hell any of this code means." The git blame shows you wrote the code 2 years ago. You thought that abstraction was pretty clever at the time, but now you're paying a price: your code is inscrutable to an exhausted, stressed version of yourself who just wants to get the app back online.

Reasons for clever code

There are a few reasons for clever code that I have seen over my career.

Thinking clever code is inherently good

I think at some point a lot of engineers end up in a place where they become very skilled in a language before they understand the importance of writing clean, readable code. Consider the following two javascript snippets:

snippet 1

    const sum = items.reduce(
      (acc, el) => (typeof el === "number" ? acc + el : acc),
      0
    );

snippet 2

    let sum = 0;
    for (const item of items) {
      if (typeof item === "number") {
        sum = sum + item;
      }
    }

At one point in my career, I would have assumed the first snippet was superior: fewer lines and uses the reduce method! But I promise far more engineers can very quickly and easily understand what's going on in the second snippet. I would much rather have the second snippet in my codebase any day.

Premature abstraction

Premature abstractions tend to be pretty common in object-oriented languages. This stackexchange answer made me laugh quite a bit, so I'll use it as an example. Let's say you have a system with employee information. Well, perhaps you decide employees are types of humans, so we'd better have a human class, and humans are a type of mammal, so we'd better have a mammal class, and so on. All of a sudden, you might have to navigate several layers up to the animal class to see an employee's properties and methods. As the stackexchange answer succinctly put it:

As a result, we ended up with code that really only needed to deal with, say, records of employees, but were carefully written to be ready if you ever hired an arachnid or maybe a crustacean.

DRY dogma

Don't Repeat Yourself (DRY) is a coding philosophy where you try to minimize the amount of code repeated in your software. In theory, repeating code even once increases the chance that you'll miss updating it in one of the places or end up with inconsistent behavior when you have to implement the logic somewhere else. In practice, DRYing up code can sometimes be complex. Perhaps there is a little repeated code shared between client and server. Do we need to create a way to share this logic? If it's only one small instance, it simply may not be worth the complexity of sharing logic. If this is going to be a common issue in the codebase, then perhaps centralizing the logic is worth it. But, importantly, we can't just assume that one instance of repeated code means we must eliminate the redundancy.

What should we aim for instead?

There's definitely a balance to be struck. We can't have purely dumb code with no abstractions: that ends up being pretty error-prone. Imagine you're working with an API that has some set of required headers. Forcing all engineers to remember to include those headers with every API call is error-prone.

file1

    fetch("/api/users", {
      headers: {
        Authorization: `Bearer ${token}`,
        AppVersion: version,
        XsrfToken: xsrfToken,
      },
    });

    fetch(`/api/users/${userId}`, {
      headers: {
        Authorization: `Bearer ${token}`,
        AppVersion: version,
        XsrfToken: xsrfToken,
      },
    });

file2

    fetch("/api/transactions", {
      headers: {
        Authorization: `Bearer ${token}`,
        AppVersion: version,
        XsrfToken: xsrfToken,
      },
    });

file3

    fetch("/api/settings", {
      headers: {
        Authorization: `Bearer ${token}`,
        AppVersion: version,
        XsrfToken: xsrfToken,
      },
    });

Furthermore, having to track down every instance of that API call to update the headers (or any other required info) could be challenging. In this instance, it makes a lot of sense to create some kind of API service that encapsulates the header logic:

service

    function apiRequest(url, options = {}) {
      // Merge the required headers into whatever the caller provides.
      return fetch(url, {
        ...options,
        headers: {
          ...options.headers,
          Authorization: `Bearer ${token}`,
          AppVersion: version,
          XsrfToken: xsrfToken,
        },
      });
    }

file1

    apiRequest("/api/users");
    apiRequest(`/api/users/${userId}`);

file2

    apiRequest("/api/transactions");

file3

    apiRequest("/api/settings");

The apiRequest function is a pretty helpful abstraction. It helps that it is a very minimal abstraction: just enough to prevent future engineers from making mistakes but not so much that it's confusing. These kinds of abstractions, however, can get out of hand. I have seen code where making a request looks something like this:

    const API_PATH = "api";
    const USER_PATH = "user";
    const TRANSACTIONS_PATH = "transactions";
    const SETTINGS_PATH = "settings";

    createRequest(
      endpointGenerationFn,
      [API_PATH, USER_PATH],
      getHeaderOverrides("authenticated")
    );
    createRequest(
      endpointGenerationFn,
      [API_PATH, USER_PATH, userId],
      getHeaderOverrides("authenticated")
    );

There's really no need for this. You're not saving all that much by making variables instead of using strings for paths. In fact, this ends up making it really hard for someone debugging the code to search! Typically, I'd look for the string "api/user" in my IDE to try to find the location of the request. Would I be able to find it with this abstraction? Would I be able to find it at 2am? Furthermore, passing an endpoint-generation function that consumes the path parts seems like overkill and may be inscrutable to more junior engineers (or, again, 2am you).

Keep it as simple as possible

So I think in the end my message is to keep your code as simple as possible. Don't create some abstraction that may or may not be needed eventually. Weigh the maintenance value of DRYing up parts of your codebase versus readability.

a year ago 104 votes
The ChatGPT wrapper product boom is an uncanny valley hellscape

Here we go again: I'm so tired of crypto web3 LLMs. I'm positive there are wonderful applications for LLMs. The ChatGPT web UI seems great for summarizing information from various online sources (as long as you're willing to verify the things that you learn). But a lot of the "AI businesses" coming out right now are just lightweight wrappers around ChatGPT. It's lazy and unhelpful. Probably the worst offenders are in the content marketing space. We didn't know how lucky we were back in the "This one weird trick for saving money" days. Now, rather than a human writing that junk, we have every article sounding like the writing voice equivalent of the dad from Cocomelon. Here's an approximate technical diagram of how these businesses work: Part 1 is what I like to call the "bilking process." Basically, you put up a flashy landing page promising content generation in exchange for a monthly subscription fee (or discounted annual fee, of course!). No more paying pesky writers! Once the husk of a company has secured the bag, part 2, the "bullshit process," kicks in. Customers provide their niches, and the service happily passes queries over to the ChatGPT (or similar) API. In return, customers are rewarded with stinky garbage articles that sound like they're being narrated by HAL on Prozac. Success! I suppose we should have expected as much. With every new tech trend comes a deluge of tech investors trying to find the next great thing. And when this happens, it's a gold rush every time. I will say I'm more optimistic about "AI" (aka machine learning, aka statistics). There are going to be some pretty cool applications of this tech eventually—but your ChatGPT wrapper ain't it.
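To see just how thin that "bullshit process" really is, here is a minimal sketch of essentially the entire product, assuming the standard OpenAI chat-completions endpoint; the generateArticle function and the niche parameter are hypothetical names used for illustration:

    // A minimal sketch of a "ChatGPT wrapper" content generator.
    // Assumptions: the documented OpenAI chat-completions endpoint and an
    // OPENAI_API_KEY environment variable; `generateArticle` and `niche`
    // are hypothetical names, not any particular product's code.
    async function generateArticle(niche) {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o",
          messages: [
            { role: "user", content: `Write a blog post about: ${niche}` },
          ],
        }),
      });
      const data = await res.json();
      // The generated article text comes back in the first choice.
      return data.choices[0].message.content;
    }

Everything else, the flashy landing page and the subscription billing, is window dressing around that one API call.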

a year ago 124 votes

More in science

This 1945 TV Console Showed Two Programs at Once

As I try to write this article, my friend and I have six different screens attached to three types of devices. We’re working in the same room but on our own projects—separate yet together, a comfortable companionship. I had never really thought of the proliferation of screens as a peacekeeping tool until I stumbled across one of Allen B. DuMont’s 1950s dual-screen television sets. DuMont’s idea was to let two people in the same room watch different programs. It reminded me of my early childhood and my family’s one TV set, and the endless arguments with my sisters and parents over what to watch. Dad always won, and his choice was rarely mine.

The DuMont Duoscopic Was 2 TVs in 1

Allen B. DuMont was a pioneer of commercial television in the United States. His eponymous company manufactured cathode-ray tubes and in 1938 introduced one of the earliest electronic TV sets. He understood how human nature and a shortage of TV screens could divide couples, siblings, and friends. Accordingly, he built at least two prototype TVs that could play two shows at once.

In the 1945 prototype shown at top, DuMont retrofitted a maple-finished cabinet that originally held a single 15-inch Plymouth TV receiver to house two black-and-white 12-inch receivers. Separate audio could be played with or without earpieces. Viewers used a 10-turn dial to tune into TV channel 1 (which went off the air in 1948) and VHF channels 2 through 13. As radio was still much more popular than television, the dial also included FM from 88 to 108 megahertz, plus a few channels used for weather and aviation. The lower left drawer held a phonograph. It was an all-in-one entertainment center.

[Image: To view their desired programs on the DuMont Duoscopic TV set, this family wore polarized glasses and listened through earpieces. Credit: Allen DuMont/National Museum of American History/Smithsonian]

In 1954, DuMont introduced a different approach. With the DuMont Duoscopic, two different channels were broadcast on a single screen. To the naked eye, the images appeared superimposed on one another. But a viewer who wore polarized glasses or looked at the screen through a polarized panel saw just one of the images. Duoscopic viewers could use an earpiece to listen to the audio of their choice. You could also use the TV set to watch a single program by selecting only one channel and playing the audio through one speaker.

DuMont seemed committed to the idea that family members should spend time together, even if they were engaged in different activities. An image of the Duoscopic sent out by the Associated Press Wirephoto Service heralded “No more lonely nights for the missus.” According to the caption, she could join “Hubby,” who was already relaxing in his comfy armchair enjoying his favorite show, but now watch something of her own choosing. “Would you believe it?” a Duoscopic brochure asks. “While HE sees and hears the fights, SHE sees and hears her play…. Separate viewing and solo sound allows your family a choice.”

The technology to separate and isolate the images and audio was key. The Duoscopic had two CRTs, each with its own feed, set at right angles to each other. A half-silvered mirror superimposed the two images onto a single screen, which could then be filtered with polarized glasses or screens.

[Image: TV pioneer Allen B. DuMont designed and manufactured cathode ray tubes and TV sets and launched an early TV network. Credit: Science History Images/Alamy]

A separate box could be conveniently placed nearby to control the volume of each program.
Users could toggle between the two programs with the flick of a switch. Each set came with eight earpieces with long cords. A short note in the March 1954 issue of Electrical Engineering praises the engineers who crafted the sound system to eliminate sound bleed from the speakers. It notes that a viewer “very easily could watch one television program and listen to the audio content of a second.” Or, as a United Press piece published in the Panama City News Herald suggested, part of the family could use the earpieces to watch and listen to the TV while others in the room could “read, play bridge, or just sit and brood.” I suspect the brooders were the children who still didn’t get to watch their favorite show.

Of course, choice was a relative matter. In the 1950s, many U.S. television markets were lucky to have even two channels. Only in major metropolitan areas were there more programming options.

The only known example of DuMont’s side-by-side version resides at the South Carolina State Museum, in Columbia. But sources indicate that DuMont planned to manufacture about 30 Duoscopics for demonstration purposes, although it’s unclear how many were actually made. (The Smithsonian’s National Museum of American History has a Duoscopic in its collections.) Alas, neither version ever went into mainstream production. Perhaps that’s because the economics didn’t make sense: Even in the early 1950s, it would have been easier and cheaper for families to simply purchase two television sets and watch them in different rooms.

Who Was Early TV Pioneer Allen DuMont?

DuMont is an interesting figure in the history of television because he was actively engaged in the full spectrum of the industry. Not only did he develop and manufacture receivers, he also conducted broadcasting experiments, published papers on transmission and reception, ran a television network, and produced programming.

After graduating from Rensselaer Polytechnic Institute in 1924 with a degree in electrical engineering, DuMont worked in a plant that manufactured vacuum tubes. Four years later, he joined the De Forest Radio Co. as chief engineer. With Lee de Forest, DuMont helped design an experimental mechanical television station, but he was unconvinced by the technology and advocated for all-electronic TV for its crisper image.

When the Radio Corporation of America acquired De Forest Radio in 1931, DuMont started his own laboratory in his basement, where he worked on improving cathode ray tubes. In 1932 he invented the “magic eye,” a vacuum tube that was a visual tuning aid in radio receivers. He sold the rights to RCA. In 1935, DuMont moved the operation to a former pickle factory in Passaic, N.J., and incorporated it as the Allen B. DuMont Laboratories. The company produced cathode ray oscilloscopes, which helped finance his experiments with television. He debuted the all-electronic DuMont 180 TV set in June 1938. It cost US $395, or almost $9,000 today—so not exactly an everyday purchase for most people. Although DuMont was quick to market, RCA and the Television Corp. of America were right on his tail.

Of course, if companies were going to sell televisions, consumers had to have programs to watch. So in 1939, DuMont launched his own television network, starting with station W2XWV, broadcasting from Passaic. The Federal Communications Commission licensed W2XWV as an experimental station for television research.
DuMont received a commercial license and changed its call sign to WABD on 2 May 1944, three years after NBC’s and CBS’s commercial stations went into operation in New York City. Due to wartime restrictions and debates over industry standards, television remained mostly experimental during World War II. As of September 1944, there were only six stations operating—three in New York City and one each in Chicago, Los Angeles, and Philadelphia. There were approximately 7,000 TV sets in personal use.

[Image: The DuMont Television Network’s variety show hosted by Jackie Gleason (left, hands raised) featured a recurring skit that later gave rise to “The Honeymooners.” Credits: Left: CBS/Getty Images; Right: Garry Winogrand/Picture Post/Hulton Archive/Getty Images]

While other networks focused on sports, movies, or remote broadcasts, the DuMont Television Network made its mark with live studio broadcasts. In April 1946, WABD moved its studios to the Wanamaker Department Store in Manhattan. DuMont converted the 14,200-cubic-meter (500,000-cubic-foot) auditorium into the world’s largest television studio. The network’s notable programming included “The Original Amateur Hour,” which started as a radio program; “The Johns Hopkins Science Review,” which had a surprisingly progressive take on women’s health; “Life Is Worth Living,” a devotional show hosted by Catholic Bishop Fulton Sheen (which garnered DuMont’s only Emmy Award); “Cavalcade of Stars,” a variety show hosted by Jackie Gleason that birthed “The Honeymooners”; and “Captain Video and His Video Rangers,” a children’s science fiction series, the first of its genre. My grandmother, who loved ballroom dancing, was a big fan of “The Arthur Murray Party,” a dance show hosted by Arthur’s wife, Kathryn; my mom fondly recalls Kathryn’s twirling skirts.

While NBC, CBS, and the other major television players built their TV networks on their existing radio networks, DuMont was starting fresh. To raise capital for his broadcast station, he sold a half-interest in his company to Paramount Pictures in 1938. The partnership was contentious from the start. There were disputes over money, the direction of the venture, and stock. But perhaps the biggest conflict came when Paramount and some of its subsidiaries began applying for FCC licenses in the same markets as DuMont’s. This ate into the DuMont network’s advertising revenue and its plans to expand. In August 1955, Paramount gained full control over the DuMont network and proceeded to shut it down.

DuMont continued to manufacture television receivers until 1958, when he sold the business to the Emerson Radio & Phonograph Corp. Two years later, the remainder of DuMont Labs merged with the Fairchild Camera and Instrument Corp. (whose founder, Sherman Fairchild, had in 1957 helped a group of ambitious young scientists and engineers known as the “Traitorous Eight” set up Fairchild Semiconductor). Allen DuMont served as general manager of the DuMont division for a year and then became a technical consultant to Fairchild. He died in 1965.

One Thing Allen DuMont Missed

My family eventually got a second and then a third television, but my dad always had priority. He watched the biggest set from his recliner in the family room, while my mom made do with the smaller sets in the kitchen and bedroom. He was relaxing, while she was usually doing chores. As a family, we would watch different shows in separate places.
[Image: An ad for the DuMont Duoscopic touted it as a device for household harmony: “While HE sees and hears the fights, SHE sees and hears her play.” Credit: National Museum of American History/Smithsonian]

These days, with so many screens on so many devices and so many programming options, we may have finally achieved DuMont’s vision of separate but together. While I was writing this piece, my friend was watching the French Open on the main TV, muted so she didn’t disturb me. She streamed the same channel on her tablet and routed the audio to her headset. We both worked on our respective laptops and procrastinated by checking messages on our phones.

But there’s one aspect of human nature that DuMont’s prototypes and promotional materials failed to address—that moment when someone sees something so exciting that they just have to share it. Sarah and I were barely getting any work done in this separate-but-together setting because we kept interrupting each other with questions, comments, and the occasional tennis update. We’ve been friends too long; we can’t help but chitchat. The only way for me to actually finish this article will be to go to a room by myself with no other screens or people to distract me.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the July 2025 print issue as “The 2-in-1 TV.”

References

I first learned about the Duoscopic in a short article in the March 1954 issue of Electrical Engineering, a precursor publication to Spectrum. My online research turned up several brochures and newspaper articles from the Early Television Museum, which surprisingly led me to the dual-screen DuMont at the South Carolina State Museum in my hometown of Columbia, S.C. Museum objects are primary sources, and I was fortunate to be able to visit this amazing artifact and examine it with Director of Collections Robyn Thiesbrummel. I also consulted the museum’s accession file, which gave additional information about the receiver from the time of acquisition. I took a look at Gary Newton Hess’s 1960 dissertation, An Historical Study of the Du Mont Television Network, as well as several of Allen B. DuMont’s papers published in the Proceedings of the IRE and Electrical Engineering.

17 hours ago 4 votes
The end of lead

How a single taxi ride saved millions of lives

18 hours ago 3 votes
Meta Said A.I. Could Help Tackle Warming. An Early Experiment Underwhelmed

Last year Meta identified 135 materials that could potentially be used to draw down carbon dioxide, work it described as "groundbreaking." But when scientists tried to reproduce the results, they found that none of the materials could perform as promised and that some did not even exist.

20 hours ago 2 votes
How Smell Guides Our Inner World

A better understanding of human smell is emerging as scientists interrogate its fundamental elements: the odor molecules that enter your nose and the individual neurons that translate them into perception in your brain.

17 hours ago 2 votes
A Decade After a Lead Crisis, Flint Has At Last Replaced Its Pipes

A decade after Flint, Michigan, was beset by widespread lead contamination, officials confirmed the city has replaced its lead pipes, as ordered by a federal court.

2 days ago 3 votes