More from pcloadletter
I write this blog because I enjoy writing. Some people enjoy reading what I write, which makes me feel really great! Recently, I took down a post and stopped writing for a few months because I didn't love the reaction I was getting on social media sites like Reddit and Hacker News.

On these social networks, there seems to be an epidemic of "gotcha" commenters, contrarians, and know-it-alls. No matter what you post, you can be sure that folks will come with their sharpest pitchforks to try to skewer you. I'm not sure exactly what it is about those two websites in particular. I suspect it's the gamification of the comment system (more upvotes = more points = dopamine hit). Unfortunately, it seems the easiest way to win points on these sites is to tear down the original content.

At any rate, I really don't enjoy bad-faith Internet comments, and I have a decent-enough following outside of these social networks that I don't really have to endure them. Some might argue I need thicker skin. I don't think that's really true: your experience on the Internet is what you make of it. You don't have to participate in the parts of it you don't want to. Also, I know many of you reading this post (likely RSS subscribers at this point) came from Reddit or Hacker News in the first place. I don't mean to insult you or suggest by any means that everyone, or even the majority of users, on these sites is acting in bad faith.

Still, I have taken a page from Tom MacWright's playbook and decided to add a bit of JavaScript to my website that helpfully redirects users from these two sites elsewhere:

```javascript
try {
  // Referrers we want to bounce away from this site
  const bannedReferrers = [/news\.ycombinator\.com/i, /reddit\.com/i];

  if (document.referrer) {
    const ref = new URL(document.referrer);
    if (bannedReferrers.some((r) => r.test(ref.host))) {
      window.location.href = "https://google.com/";
    }
  }
} catch (e) {
  // Ignore malformed referrer URLs
}
```

After implementing this redirect, I feel a lot more energized to write!
I'm no longer worried about having to endlessly caveat my work for fear of getting bludgeoned on social media. I'm writing what I want to write and, for those of you here to join me, I say thank you!
Here we go again: I'm so tired of crypto web3 LLMs. I'm positive there are wonderful applications for LLMs. The ChatGPT web UI seems great for summarizing information from various online sources (as long as you're willing to verify the things that you learn). But a lot of the "AI businesses" coming out right now are just lightweight wrappers around ChatGPT. It's lazy and unhelpful.

Probably the worst offenders are in the content marketing space. We didn't know how lucky we were back in the "This one weird trick for saving money" days. Now, rather than a human writing that junk, we have every article sounding like the writing-voice equivalent of the dad from Cocomelon.

Here's an approximate technical diagram of how these businesses work: part 1 is what I like to call the "bilking process." Basically, you put up a flashy landing page promising content generation in exchange for a monthly subscription fee (or a discounted annual fee, of course!). No more paying pesky writers! Once the husk of a company has secured the bag, part 2, the "bullshit process," kicks in. Customers provide their niches, and the service happily passes queries over to the ChatGPT (or similar) API. In return, customers are rewarded with stinky garbage articles that sound like they're being narrated by HAL on Prozac. Success!

I suppose we should have expected as much. With every new tech trend comes a deluge of tech investors trying to find the next great thing. And when this happens, it's a gold rush every time. I will say I'm more optimistic about "AI" (aka machine learning, aka statistics). There are going to be some pretty cool applications of this tech eventually—but your ChatGPT wrapper ain't it.
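To underline how thin these wrappers are, here's a minimal sketch of the "bullshit process" described above. This is illustrative, not any real company's code: the prompt template and helper names are invented, and I'm assuming a Node environment with a global `fetch` and an OpenAI-style chat-completions endpoint.

```javascript
// A sketch of the entire "product": take a customer's niche, wrap it
// in a canned prompt, and forward it to an OpenAI-style chat API.
// The prompt wording and model choice are illustrative assumptions.
function buildPrompt(niche) {
  return `Write a cheerful 500-word blog post about ${niche}.`;
}

async function generateArticle(niche, apiKey) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: buildPrompt(niche) }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

That's it: one prompt template and one API call between the subscription fee and the "content."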
I have noticed a trend in a handful of products I've worked on at big tech companies. I have friends at other big tech companies who have noticed a similar trend: the products are kind of crummy. Here are some experiences I have often encountered:

- the UI is flaky and/or unintuitive
- there is a lot of cruft in the codebase that has never been cleaned up
- bugs have "acceptable" workarounds that never get fixed
- packages/dependencies are badly out of date
- the developer experience is crummy (bad build times, easily breakable processes)

One of the reasons I have found for these issues is that we simply aren't investing enough time to increase product quality: we have poor or nonexistent quality metrics, invest minimally in testing infrastructure (and in actually writing tests), and don't invest in improving the inner loop. But why is this? My experience has been that quality is simply a hard sell in big tech.

Let's first talk about something that's an easy sell right now: AI everything. Why is this an easy sell? Well, Microsoft could announce they put ChatGPT in a toaster and their stock price would jump $5/share. The sad truth is that big tech is hyper-focused on doing the things that make stock prices go up in the short term. It's hard to make this connection with quality initiatives. If your software is slightly less shitty, the stock price won't jump next week. So instead of being able to sell the obvious benefit of shiny new features, you need an Engineering Manager willing to risk having lower impact for the sake of a better product. Even if there is broad consensus in your team, group, or org that these quality improvements are necessary, there's a point up the corporate hierarchy where it simply doesn't matter to them. Certainly not as much as shipping some feature to great fanfare.

Part of a bigger strategy?
Cory Doctorow has said some interesting things about enshittification in big tech:

"enshittification is a three-stage process: first, surpluses are allocated to users until they are locked in. Then they are withdrawn and given to business-customers until they are locked in. Then all the value is harvested for the company's shareholders, leaving just enough residual value in the service to keep both end-users and business-customers glued to the platform."

At a macro level, it's possible this is the strategy: hook users initially, make them dependent on your product, then cram in superficial features that make the stock go up but don't offer real value, and keep the customers simply because they have no real choice but to use your product (an enterprise Office 365 customer probably isn't switching anytime soon). This does seem to have been a good strategy in the short term: look at Microsoft's stock ever since they started cranking out AI everything. But how can the quality corner-cutting work long-term?

I hope the hubris will backfire

Something will have to give. Big tech products can't just keep getting shittier—can they? I'd like to think some smaller competitors will come eat their lunch, but I'm not sure. Hopefully we're not all too entrenched in the big tech ecosystem for this to happen.
Coding interviews are controversial. It can be unpleasant to code in front of someone else, knowing you're being judged. And who likes failing? Especially when it feels like you failed intellectually. But coding interviews are effective.

One big criticism of coding interviews is that they end up filtering out a lot of great candidates. It's true: plenty of great developers don't do well in coding interviews. Maybe they don't perform well under pressure. Or perhaps they don't have the time (or desire) to cram leetcode. So if this is happening, then how can coding interviews be effective?

Minimizing risk

Coding interviews are optimized toward minimizing risk, and hiring a bad candidate is far worse than not hiring a good candidate. In other words, the hiring process is geared toward minimizing false positives, not false negatives. The truth is, there are typically a bunch of good candidates who apply for a job. There are also not-so-great candidates. As long as a company hires one of the good ones, they don't really care if they lose all the rest of the good ones. They just need to make sure they don't hire one of the not-so-great ones. Coding interviews are a decent way to screen out the false positives. Watching someone solve coding challenges gives you some assurance that they can, well, code.

Why I myself like coding interviews

Beyond why coding interviews are beneficial for the company, I actually enjoy them as an interviewer. It's not that I like making people uncomfortable or judging them (I don't), but rather that I like seeing how potential future colleagues think. How do they think about problems? Do they plan their solution or just jump in? This is a person with whom I'll be working closely. How do they respond to their code being scrutinized? Do I feel comfortable having to "own" their code?
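The false-positive logic above can be made concrete with a quick back-of-the-envelope Bayes calculation. All the numbers here are invented purely for illustration: the point is that even a crude screen shifts the odds heavily toward the candidates who pass it.

```javascript
// If a coding screen passes most good candidates and fails most weak
// ones, what fraction of *passing* candidates are actually good?
// Straightforward Bayes' rule; every input number is made up.
function probGoodGivenPass(baseRate, passIfGood, passIfWeak) {
  const passers = baseRate * passIfGood + (1 - baseRate) * passIfWeak;
  return (baseRate * passIfGood) / passers;
}

// Suppose 30% of applicants are strong, 80% of strong candidates pass
// the screen, and only 10% of weak candidates do:
const p = probGoodGivenPass(0.3, 0.8, 0.1);
console.log(p.toFixed(2)); // ~0.77: most passers are strong
```

The company loses 20% of the strong candidates (the false negatives), but since it only needs one hire, it happily takes that trade for a pool where roughly three in four passers are good.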
On automated online assessments (OAs)

The junior developer market right now is extremely competitive, so it is common to use automated coding challenges (OAs) as an initial screen. OAs kind of accomplish the false-positive filtering mentioned above, but that assumes candidates aren't cheating. Some are. So you're filtering your candidate pool down to good candidates and dishonest candidates. Maybe that's worth it? Additionally, OAs don't give you any chance to interact with candidates, so you get no sense of what they'd really be like to work with. All in all, I'm not a fan of OAs.

Far from perfect

Coding interviews are far from perfect. They're a terrible simulation of actual working conditions. They favor individuals who have time to do the prep work (e.g., grind leetcode). They're subject to the myriad biases of the interviewer. But there's a reason companies still use them: they're effective at minimizing hiring risk for the company. And to them, that's the ball game.
More in science
Within 1-5 years, our daily transportation will be upended, and cities will be reshaped.
One nice bit of condensed matter/nanoscale physics news: this year's Wolf Prize in Physics has gone to three outstanding scientists, Jim Eisenstein, Moty Heiblum, and Jainendra Jain, each of whom has done very impactful work involving 2D electron gases - systems of electrons confined to move only in two dimensions by the electronic structure and alignment of energy bands at interfaces between semiconductors. Of particular relevance to these folks are the particularly clean 2D electron gases at the interfaces between GaAs and AlGaAs, or in GaAs quantum wells embedded in AlGaAs. A thread that connects all three of these scientists is the fractional quantum Hall effect in these 2D systems.

Electrons confined to move in 2D, in the presence of a magnetic field perpendicular to the plane of motion, form a remarkable system. The quantum wavefunction of an electron in this situation changes as the magnetic induction \(B\) is increased. The energy levels of such an electron are given by \((n+1/2)\hbar \omega_{c}\), where \(\omega_{c} \equiv eB/m^{*}\) is the cyclotron frequency. These energy levels are called Landau levels. The ratio between the 2D density of electrons and the density of magnetic flux in fundamental units (\(B/(h/e)\)) is called the "filling factor", \(\nu\), and when this is an integer, the Hall conductance is quantized in fundamental units - see here.

Figure 4 from this article by Jain, with \(R_{xx}(B)\) data from here. Notice how the data around \(B=0\) look a lot like the data around \(\nu = 1/2\), which look a lot like the data around \(\nu = 1/4\).

A remarkable thing happens when \(\nu = 1/2\) - see the figure above. There is no quantum Hall effect there; in fact, if you look at the longitudinal resistance \(R_{xx}\) as a function of \(B\) near \(\nu = 1/2\), it looks remarkably like \(R_{xx}(B)\) near \(B = 0\).
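To make the filling-factor definition concrete: \(\nu = n_{2D}/(B/(h/e)) = n_{2D}h/(eB)\). Here is a quick numerical sketch; the constants are CODATA values, but the carrier density and field are illustrative numbers I've chosen, not values from any particular experiment.

```javascript
// Filling factor nu = n_2D / (B / (h/e)) = n_2D * h / (e * B).
const h = 6.62607015e-34;  // Planck constant, J*s (exact, CODATA)
const e = 1.602176634e-19; // elementary charge, C (exact, CODATA)

function fillingFactor(n2d, B) {
  // n2d: 2D electron density in m^-2; B: magnetic field in tesla
  return (n2d * h) / (e * B);
}

// An illustrative 2D electron gas with n = 1e15 m^-2 reaches nu = 1
// near B ~ 4.14 T (one flux quantum h/e per electron):
console.log(fillingFactor(1e15, 4.14).toFixed(2)); // ~1.00
```

Sweeping \(B\) upward from there walks \(\nu\) down through the fractional values (1/2, 1/4, ...) discussed in the rest of the post.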
At this half-integer filling factor, the 2D electrons plus the magnetic flux "bundle together", leading to a state with new low-energy excitations, called composite fermions, that act like they are in zero magnetic field. In many ways the FQHE looks like the integer quantum Hall effect for these composite fermions, though the situation is more complicated than that. Jainendra Jain did foundational work on the theory of composite fermions, among many other things.

Jim Eisenstein has done a lot of great experimental work involving composite fermions and even-denominator FQH states. My postdoctoral mentor, Bob Willett, and he are the first two authors on the paper where an unusual quantum Hall state was discovered at \(\nu = 5/2\), a state still under active investigation for potential topological quantum computing applications. One particularly surprising result from Eisenstein's group was the discovery that some "high" Landau level even-denominator fillings (\(\nu = 9/2, 11/2\)) show enormously anisotropic resistances, with big differences between \(R_{xx}\) and \(R_{yy}\), an example of the onset of a "stripe" phase of alternating fillings. Another very exciting line of work from Eisenstein's group used 2D electron gases in closely spaced parallel layers in high magnetic fields, as well as 2D electron gases near 2D hole gases. Both can allow the formation of excitons, bound states of electrons and holes, but with the electrons and holes in neighboring layers so that they cannot annihilate each other. Moreover, Bose-Einstein condensation of those excitons is possible, leading to remarkable superflow of excitons and resonant tunneling between the layers. This review article is a great discussion of all of this.

Moty Heiblum's group at the Weizmann Institute has been one of the world-leading groups investigating the "mesoscopic" physics of confined electrons over the past 30+ years. They have performed some truly elegant experiments using 2D electron gases as their platform.
A favorite of mine (mentioned in my textbook) is this one, in which they make a loop-shaped interferometer for electrons that shows oscillations in the conductance as they thread magnetic flux through the loop; they then use a nearby quantum point contact as a charge sensor near one arm of the interferometer, a which-path detector that tunably suppresses the quantum interference. His group also did foundational work on the use of shot noise as a tool to examine the nature and transport of charge carriers in condensed matter systems (an idea that I found inspiring). Their results showing that the quasiparticles in the fractional quantum Hall regime can have fractional charges are remarkable. More recently, they have shown how subtle these measurements can really be, in 2D electron systems that can support neutral excitations as well as charged ones.

All in all, this is a great recognition of outstanding scientists for a large volume of important, influential work.

(On a separate note: I will be attending 3+ days of the APS meeting next week. I'll try to do my usual brief highlight posts, time permitting. If people have suggestions of cool content, please let me know.)
We’ve known about far-UVC’s promise for a decade. Why isn't it everywhere?
Larger models can pull off greater feats, but the accessibility and efficiency of smaller models make them attractive tools. The post Why Do Researchers Care About Small Language Models? first appeared on Quanta Magazine.
For my entire career as a neurologist, spanning three decades, I have been hearing about various kinds of stem cell therapy for Parkinson’s Disease (PD). Now a Phase I clinical trial is under way studying the latest stem cell technology, autologous induced pluripotent stem cells, for this purpose. This history of cell therapy for PD […] The post Stem Cells for Parkinson’s Disease first appeared on NeuroLogica Blog.