I write this blog because I enjoy writing. Some people enjoy reading what I write, which makes me feel really great! Recently, I took down a post and stopped writing for a few months because I didn't love the reaction I was getting on social media sites like Reddit and Hacker News. On these social networks, there seems to be an epidemic of "gotcha" commenters, contrarians, and know-it-alls. No matter what you post, you can be sure that folks will come with their sharpest pitchforks to try to skewer you. I'm not sure exactly what it is about those two websites in particular. I suspect it's the gamification of the comment system (more upvotes = more points = dopamine hit). Unfortunately, it seems the easiest way to win points on these sites is to tear down the original content. At any rate, I really don't enjoy bad-faith internet comments, and I have a decent-enough following outside of these social networks that I don't really have to endure them. Some might argue I need thicker skin. I...
a year ago

More from pcloadletter

Generative AI will probably make blogs better

Generative AI will probably make blogs better. Have you ever searched for something on Google and found the first one, two, or three blog posts to be utter nonsense? That's because those blog posts have been optimized not for human consumption but to entertain search engine ranking algorithms. People have figured out the right buzzwords to include in headings, how to game backlinks, and how to research keywords so they can write up blog posts about things they know nothing about. Pleasing these bots means raking in the views—and ad revenue (or product referrals, sales leads, etc.). Search Engine Optimization (SEO) may have been the single worst thing that happened to the web. Every year, it seems, search results get worse than the year before. The streets of the internet are littered with SEO junk. But now we may have an escape from this SEO hellscape: generative AI! Think about it: if AI-generated search results (or even direct use of AI chat interfaces) subsume web search as a primary way to look up information, there will be no more motivation to crank out SEO-driven content. These kinds of articles will fade into obscurity once the only purpose for their existence (monetization) is gone. Perhaps we will be left with the blogosphere of old, with webrings and RSS (not that these things went away, but they're certainly not mainstream anymore). This, anyway, is my hope. No more blogging to entertain the robots. Just writing stuff you want to write and share with other like-minded folks online.

3 days ago 1 vote
Write code that you can understand when you get paged at 2am

The older I get, the more I dislike clever code. This is not a controversial take; it is pretty well agreed upon that clever code is bad. But I particularly like the on-call responsibility framing: write code that you can understand when you get paged at 2am. If you have never been lucky enough to get paged at 2am, I'll paint the picture for you:

A critical part of the app is down. Your phone starts dinging on your nightstand next to you. You wake up with a start, not quite sure who you are or where you are. You put on your glasses and squint at the way-too-bright screen of your phone. It's PagerDuty. "Oh shit," you think. You pop open your laptop, open the PagerDuty web app, and read the alert. You go to your telemetry and logging systems and figure out the approximate whereabouts of the issue in the codebase. You open your IDE and start sweating: "I have no idea what the hell any of this code means." The git blame shows you wrote the code 2 years ago. You thought that abstraction was pretty clever at the time, but now you're paying the price: your code is inscrutable to an exhausted, stressed version of yourself who just wants to get the app back online.

Reasons for clever code #

There are a few reasons for clever code that I have seen over my career.

Thinking clever code is inherently good #

I think at some point a lot of engineers end up in a place where they become very skilled in a language before they understand the importance of writing clean, readable code. Consider the following two JavaScript snippets:

snippet 1

const sum = items.reduce(
  (acc, el) => (typeof el === "number" ? acc + el : acc),
  0
);

snippet 2

let sum = 0;
for (const item of items) {
  if (typeof item === "number") {
    sum = sum + item;
  }
}

At one point in my career, I would have assumed the first snippet was superior: fewer lines and uses the reduce method! But I promise far more engineers can very quickly and easily understand what's going on in the second snippet. I would much rather have the second snippet in my codebase any day.

Premature abstraction #

Premature abstractions tend to be pretty common in object-oriented languages. This stackexchange answer made me laugh quite a bit, so I'll use it as an example. Let's say you have a system with employee information. Well, perhaps you decide employees are types of humans, so we'd better have a human class, and humans are a type of mammal, so we'd better have a mammal class, and so on. All of a sudden, you might have to navigate several layers up to the animal class to see an employee's properties and methods. As the stackexchange answer succinctly put it:

As a result, we ended up with code that really only needed to deal with, say, records of employees, but were carefully written to be ready if you ever hired an arachnid or maybe a crustacean.

DRY dogma #

Don't Repeat Yourself (DRY) is a coding philosophy where you try to minimize the amount of code repeated in your software. In theory, even repeating code once increases the chance that you'll miss updating it in one of the two places, or that behavior will become inconsistent when you implement the same logic somewhere else. In practice, DRYing up code can sometimes be complex. Perhaps there is a little repeated code shared between client and server. Do we need to create a way to share this logic? If it's only one small instance, it simply may not be worth the complexity of sharing logic. If this is going to be a common issue in the codebase, then perhaps centralizing the logic is worth it, as the sketch below illustrates.
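To make that trade-off concrete, here is a minimal sketch (the module path, function name, and validation rule are all hypothetical, purely for illustration). Suppose both the browser and the server need to validate usernames the same way; centralizing the check means the two sides can't drift apart:

// shared/validation.js (hypothetical shared module)
// One source of truth for a rule that both client and server enforce.
export function isValidUsername(name) {
  // 3-20 characters: letters, digits, or underscores
  return /^[A-Za-z0-9_]{3,20}$/.test(name);
}

// Client-side usage (e.g., form validation in the browser):
//   import { isValidUsername } from "./shared/validation.js";
//   if (!isValidUsername(input.value)) { /* show an error */ }

// Server-side usage (e.g., an Express handler on Node):
//   import { isValidUsername } from "./shared/validation.js";
//   if (!isValidUsername(req.body.username)) return res.sendStatus(400);

For one three-line function, the plumbing needed to share a module between two builds may cost more than the duplication ever would; if the same rule starts showing up in several places, centralizing it starts to pay for itself.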
But importantly, we can't just assume that one instance of repeated code means we must eliminate the redundancy.

What should we aim for instead? #

There's definitely a balance to be struck. We can't have purely dumb code with no abstractions: that ends up being pretty error-prone. Imagine you're working with an API that has some set of required headers. Forcing all engineers to remember to include those headers with every API call is error-prone.

file1

fetch("/api/users", {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

fetch(`/api/users/${userId}`, {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

file2

fetch("/api/transactions", {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

file3

fetch("/api/settings", {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

Furthermore, having to track down every instance of that API call to update the headers (or any other required info) could be challenging. In this instance, it makes a lot of sense to create some kind of API service that encapsulates the header logic:

service

// Merge the required headers into whatever the caller passes,
// so nobody can forget them.
function apiRequest(url, options = {}) {
  return fetch(url, {
    ...options,
    headers: {
      ...options.headers,
      Authorization: `Bearer ${token}`,
      AppVersion: version,
      XsrfToken: xsrfToken,
    },
  });
}

file1

apiRequest("/api/users");
apiRequest(`/api/users/${userId}`);

file2

apiRequest("/api/transactions");

file3

apiRequest("/api/settings");

The apiRequest function is a pretty helpful abstraction. It helps that it is a very minimal abstraction: just enough to prevent future engineers from making mistakes but not so much that it's confusing. These kinds of abstractions, however, can get out of hand. I have seen code where making a request looks something like this:

const API_PATH = "api";
const USER_PATH = "user";
const TRANSACTIONS_PATH = "transactions";
const SETTINGS_PATH = "settings";

createRequest(
  endpointGenerationFn,
  [API_PATH, USER_PATH],
  getHeaderOverrides("authenticated")
);
createRequest(
  endpointGenerationFn,
  [API_PATH, USER_PATH, userId],
  getHeaderOverrides("authenticated")
);

There's really no need for this. You're not saving all that much by using variables instead of strings for paths. In fact, this ends up making the code really hard for someone debugging it to search. Typically, I'd look for the string "api/user" in my IDE to try to find the location of the request. Would I be able to find it with this abstraction? Would I be able to find it at 2am? Furthermore, passing an endpoint-generation function that consumes the path parts seems like overkill and may be inscrutable to more junior engineers (or, again, 2am you).

Keep it as simple as possible #

So I think in the end my message is to keep your code as simple as possible. Don't create some abstraction that may or may not be needed eventually. Weigh the maintenance value of DRYing up parts of your codebase against the readability cost.

a year ago 99 votes
The ChatGPT wrapper product boom is an uncanny valley hellscape

Here we go again: I'm so tired of crypto, web3, LLMs. I'm positive there are wonderful applications for LLMs. The ChatGPT web UI seems great for summarizing information from various online sources (as long as you're willing to verify the things that you learn). But a lot of the "AI businesses" coming out right now are just lightweight wrappers around ChatGPT. It's lazy and unhelpful. Probably the worst offenders are in the content marketing space. We didn't know how lucky we were back in the "This one weird trick for saving money" days. Now, rather than a human writing that junk, we have every article sounding like the writing-voice equivalent of the dad from Cocomelon. Here's an approximate technical diagram of how these businesses work: Part 1 is what I like to call the "bilking process." Basically, you put up a flashy landing page promising content generation in exchange for a monthly subscription fee (or discounted annual fee, of course!). No more paying pesky writers! Once the husk of a company has secured the bag, part 2, the "bullshit process," kicks in. Customers provide their niches, and the service happily passes queries over to the ChatGPT (or similar) API. In return, customers are rewarded with stinky garbage articles that sound like they're being narrated by HAL on Prozac. Success! I suppose we should have expected as much. With every new tech trend comes a deluge of tech investors trying to find the next great thing. And when this happens, it's a gold rush every time. I will say I'm more optimistic about "AI" (aka machine learning, aka statistics). There are going to be some pretty cool applications of this tech eventually—but your ChatGPT wrapper ain't it.
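To be concrete about just how lightweight these wrappers are, here is a minimal sketch of the core of such a "product." The endpoint and response shape below match OpenAI's chat completions API at the time of writing, but the function name, prompt, and model choice are hypothetical, for illustration only:

// The entire "bullshit process," more or less: forward the customer's
// niche to a hosted LLM and hand back whatever comes out.
async function generateArticle(niche) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // hypothetical model choice
      messages: [{ role: "user", content: `Write a blog post about ${niche}.` }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

Everything else (the flashy landing page, the subscription billing) is the "bilking process."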

a year ago 119 votes
Quality is a hard sell in big tech

I have noticed a trend in a handful of products I've worked on at big tech companies. I have friends at other big tech companies who have noticed a similar trend: the products are kind of crummy. Here are some experiences that I have often encountered:

- the UI is flaky and/or unintuitive
- there is a lot of cruft in the codebase that has never been cleaned up
- bugs that have "acceptable" workarounds never get fixed
- packages/dependencies are badly out of date
- the developer experience is crummy (bad build times, easily breakable processes)

One of the reasons I have found for these issues is that we simply aren't investing enough time in product quality: we have poor or nonexistent quality metrics, invest minimally in testing infrastructure (and in actually writing tests), and don't invest in improving the inner loop. But why is this? My experience has been that quality is simply a hard sell in big tech.

Let's first talk about something that's an easy sell right now: AI everything. Why is this an easy sell? Well, Microsoft could announce they put ChatGPT in a toaster and their stock price would jump $5/share. The sad truth is that big tech is hyper-focused on doing the things that make their stock prices go up in the short term. It's hard to make this connection with quality initiatives. If your software is slightly less shitty, the stock price won't jump next week. So instead of being able to sell the obvious benefit of shiny new features, you need an Engineering Manager willing to risk having lower impact for the sake of having a better product. Even if there is broad consensus in your team, group, or org that these quality improvements are necessary, there's a point up the corporate hierarchy where they simply don't matter. Certainly not as much as shipping some feature to great fanfare.

Part of a bigger strategy? #

Cory Doctorow has said some interesting things about enshittification in big tech: "enshittification is a three-stage process: first, surpluses are allocated to users until they are locked in. Then they are withdrawn and given to business-customers until they are locked in. Then all the value is harvested for the company's shareholders, leaving just enough residual value in the service to keep both end-users and business-customers glued to the platform." At a macro level, it's possible this is the strategy: hook users initially, make them dependent on your product, then cram in superficial features that make the stock go up but don't offer real value, and keep the customers simply because they have no real choice but to use your product (an enterprise Office 365 customer probably isn't switching anytime soon). This does seem to have been a good strategy in the short term: look at Microsoft's stock ever since they started cranking out AI everything. But how can the quality corner-cutting work long-term?

I hope the hubris will backfire #

Something will have to give. Big tech products can't just keep getting shittier—can they? I'd like to think some smaller competitors will come eat their lunch, but I'm not sure. Hopefully we're not all too entrenched in the big tech ecosystem for this to happen.

a year ago 40 votes

More in science

Two of My Science-Fiction Stories Published in May

A Change of Pace from Astronomy News: As you may know, I have been writing science-fiction stories based on good astronomy as my retirement project. After a good number of rejections from the finest sci-fi magazines the world over, I am now finding some success. My ninth and tenth stories […] The post Two of My Science-Fiction Stories Published in May appeared first on Andrew Fraknoi - Astronomy Lectures - Astronomy Education Resources.

17 hours ago 2 votes
Telepathy Tapes Promotes Pseudoscience

I was away on vacation the last week, hence no posts, but am now back to my usual schedule. In fact, I hope to be a little more consistent starting this summer because (if you follow me on the SGU you already know this) I am retiring from my day job at Yale at the […] The post Telepathy Tapes Promotes Pseudoscience first appeared on NeuroLogica Blog.

6 hours ago 1 vote
The world of tomorrow

When the future arrived, it felt… ordinary. What happened to the glamour of tomorrow?

5 hours ago 1 vote
The Core of Fermat’s Last Theorem Just Got Superpowered

By extending the scope of the key insight behind Fermat’s Last Theorem, four mathematicians have made great strides toward building a “grand unified theory” of math. The post The Core of Fermat’s Last Theorem Just Got Superpowered first appeared on Quanta Magazine.

4 hours ago 1 vote
Pushing back on US science cuts: Now is a critical time

Every week has brought more news about actions that, either as a collateral effect or a deliberate goal, will deeply damage science and engineering research in the US. Put aside for a moment the tremendously important issue of student visas (where there seems to be a policy of strategic vagueness, to maximize the implicit threat that there may be selective actions). Put aside the statement from a Justice Department official that there is a general plan to "bring these universities to their knees", on the pretext that this is somehow about civil rights.

The detailed version of the presidential budget request for FY26 is now out (pdf here for the NSF portion). If enacted, it would be deeply damaging to science and engineering research in the US and to the pipeline of trained students who support the technology sector. Taking NSF first:

- The topline NSF budget would be cut from $8.34B to $3.28B.
- Engineering would be cut by 75%, Math and Physical Science by 66.8%.
- The anticipated agency-wide success rate for grants would nominally drop below 7%, though that figure is misleading (it basically takes the present average success rate and cuts it by 2/3, while some programs are already more competitive than others). In practice, many programs already have future-year obligations, and any remaining funds will have to go there, meaning that many programs would likely have no awards at all in the coming fiscal year.
- The NSF's CAREER program (that agency's flagship young investigator program) would go away.
- The plan would also close one of the LIGO observatories (see previous link). This would be an extra bonus level of stupid, since LIGO's ability to do science relies on having two facilities, to avoid false positives and to identify event locations in the sky. You might as well say that you'll keep an accelerator running but not the detector.

The table that I think hits hardest, dollars aside: the number of people involved in NSF activities would drop by 240,000, the graduate research fellowship program would be cut by more than half, and the NSF research training grant program (another vector for grad fellowships) would be eliminated.

The situation at NIH and NASA is at least as bleak. See here for a discussion from Joshua Weitz at Maryland.

This proposed dismantling of US research, and especially of the pipeline of students who support the technology sector (including medical research, computer science, AI, the semiconductor industry, chemistry and chemical engineering, and the energy industry), is astonishing in absolute terms. It also does not square with the claim of some of our elected officials and high tech CEOs to worry about US competitiveness in science and engineering. (These proposed cuts are not about fiscal responsibility; the amount added to the proposed DOD budget alone dwarfs these cuts by more than a factor of 3.)

If you are a US citizen and think this is the wrong direction, now is the time to talk to your representatives in Congress. In the past, Congress has ignored presidential budget requests for big cuts. The American Physical Society, for example, has tools to help with this. Contacting legislators by phone is also made easy these days. From the standpoint of public outreach, Cornell has an effort backing large-scale writing of editorials and letters to the editor.

yesterday 1 vote