Some of my favorite internet people sometimes organize little community experiments. Like, let’s eat potatoes and see if we lose weight. Or, let’s try taking some supplements and see if anxiety goes down. I’ve toyed with doing one myself, to see if theanine (a chemical in tea) really helps with stress. But sometimes, when everyone is having fun, some very mean very bad people show up and say, “HEY! YOU CAN’T DO THAT! THAT’S HUMAN SUBJECTS RESEARCH! YOU NEED TO GET APPROVAL FROM AN INSTITUTIONAL REVIEW BOARD!” So I wondered—is that right? Who exactly needs to get approval from an institutional review board (IRB)? More than a year later, I’m now convinced that: No single source on the internet actually answers that question. The answer is absurdly complex. The reason it’s so complex is that IRB rules are an illegible mishmash of things, some of which themselves have near-fractal complexity. If you stare at this long enough, it’s impossible not to question the degree to which we...
a week ago


More from DYNOMIGHT

The first RCT for GLP-1 drugs and alcoholism isn’t what we hoped

GLP-1 drugs are a miracle for diabetes and obesity. There are rumors that they might also be a miracle for addiction to alcohol, drugs, nicotine, and gambling. That would be good. We like miracles. We just got the first good trial and—despite what you might have heard—it’s not very encouraging.

Semaglutide—aka Wegovy / Ozempic—is a GLP-1 agonist. This means it binds to the same receptors the glucagon-like peptide-1 hormone normally binds to. Similar drugs include dulaglutide, exenatide, liraglutide, lixisenatide, and tirzepatide. These were originally investigated for diabetes, on the theory that GLP-1 increases insulin and thus decreases blood sugar. But GLP-1 seems to have lots of other effects, like preventing glucose from entering the bloodstream, slowing digestion, and making you feel full longer. It was found to cause sharp decreases in body mass, which is why supposedly 12% of Americans had tried one of these drugs by mid-2024. (I’m skeptical of that 12% number, but a different survey in late 2024 found that 10% of Americans were currently taking one of these drugs. I know Americans take more drugs than anyone on the planet, but still…)

Anyway, there are vast reports from people taking these drugs that they help with various addictions. Many people report stopping drinking or smoking without even trying. This is plausible enough. We don’t know which of the many effects of these drugs is really helping with obesity. Maybe it’s not the effects on blood sugar that matter, but these drugs have some kind of generalized “anti-addiction” effect on the brain? Or maybe screwing around with blood sugar changes willpower? Or maybe when people get thinner, that changes how the brain works? Who knows. Beyond anecdotes, there are some observational studies and animal experiments suggesting they might help with addiction (OKeefe et al. 2024). We are so desperate for data that some researchers have even resorted to computing statistics based on what people say on reddit.
So while it seems plausible these drugs might help with other addictions, there’s limited data and no clear story for why this should happen biologically. This makes the first RCT, which came out last week, very interesting. This paper contains this figure, about which everyone is going crazy: I admit this looks good. This is indeed a figure in which the orange bar is higher than the blue bar. However: This figure does not mean what you think it means. Despite the label, this isn’t actually the amount of alcohol people consumed. What’s shown is a regression coefficient, which was calculated on a non-random subset of subjects. There are other figures. Why isn’t anyone talking about the other figures?

What they did

This trial gathered 48 participants. They selected them according to the DSM-5 definition of “alcohol use disorder”, which happens to be more than 14 drinks per week for men and 7 drinks per week for women, plus at least 2 heavy drinking episodes. Perhaps because of this lower threshold, 34 of the subjects were women. The trial lasted 9 weeks. During it, half of the subjects were given weekly placebo injections. The other half were given weekly injections of increasing amounts of semaglutide: 0.25 mg for 4 weeks, then 0.5 mg for 4 weeks, and then 0.5 or 1 mg in the last week, depending on a doctor’s judgement.

Outcome 1: Drinking

The first outcome was to simply ask people to record how much they drank in daily life. Here are the results: If I understand correctly, at some point 6 out of the 24 subjects in the placebo group stopped providing these records, and 3 out of 24 in the semaglutide group. I believe the above shows the data for whatever subset of people were still cooperating in each week. It’s not clear to me what bias this might produce. When I first saw that figure, I thought it looked good. The lines are going down, and the semaglutide line is lower. But then I checked the appendix. (Protip: Always check the appendix.)
This contains the same data, but stratified by whether people were obese or not: Now it looks like semaglutide isn’t doing anything. It’s just that among the non-obese, the semaglutide group happened to start at a lower baseline. How to reconcile this with the earlier figure? Well, if you look carefully, it doesn’t really show any benefit to semaglutide either. There’s a difference in the two curves, but it was there from the beginning. Over time, there’s no difference in the difference, which is what we’d expect to see if semaglutide was helping. The paper provides other measurements like “changes in drinking days” and “changes in heavy drinking days” and “changes in drinks per drinking day”, but it’s the same story: Either no benefit or no difference. So… This is a small sample. It only lasted nine weeks, and subjects spent many of them on pretty small doses. But this is far from the miracle we hoped for. Some effect might be hiding in the noise, but what these results most look like is zero effect.

Outcome 2: Delayed drinking

There are also lab experiments. They did these at both the start and end of the study. In the first experiment, they basically set each subject’s favorite alcoholic drink in front of them and told them, “For each minute you wait before drinking this, we will pay you, up to a maximum of 50 minutes.” How much were they paid, you ask? Oddly, that’s not specified in the paper. It’s also not specified in the supplemental information. It’s also not specified in the 289-page application they made to the FDA to be able to do this study. (Good times!) But there is a citation for a different paper in which people were paid $0.24/minute, decreasing by $0.01/minute every five minutes. If they used the same amounts here, then the maximum subjects could earn was $9.75. Anyway, here are the results: So… basically nothing? Because almost everyone waited the full 50 minutes? And they did this for only $9.75? Seems weird.
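For what it’s worth, the $9.75 figure checks out under that cited payment schedule. Here’s a quick sketch, assuming (since the paper doesn’t specify) that the rate drops at the start of each five-minute block:

```python
# Payment schedule from the *cited* paper (not confirmed for this study):
# $0.24/minute, dropping by $0.01/minute at each five-minute mark,
# over the full 50-minute maximum wait.
rates = [0.24 - 0.01 * (minute // 5) for minute in range(50)]
total = sum(rates)
print(round(total, 2))  # 9.75
```

So a subject with perfect patience earns at most $9.75, which does seem like weak motivation for an addiction experiment.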
I don’t really see this as evidence against semaglutide. Rather, I think this didn’t end up proving much in either direction.

Outcome 3: Laboratory drinking

So what’s with that initial figure? Well, after the delayed drinking experiment was over, the subjects were given 2 hours to drink as much as they wanted, up to some kind of safe limit. This is what led to the figure everyone is so excited about: When I first saw this, I too thought it looked good. I thought it looked so good that I started writing this post, eager to share the good news. But at some point I read the caption more carefully and my Spidey sense started tingling. There are two issues here. First of all, subjects were free to skip this part of the experiment, and a lot did. Only 12 of the 24 subjects in the placebo group and 13 of 24 in the semaglutide group actually did it. This means the results are non-randomized. I mean, the people who declined to do this experiment would probably have drunk different amounts than those who agreed, right? So if semaglutide had any influence on people’s decision to participate (e.g. because it changed their relationship with alcohol, which is the hypothesis of this research) then the results would be biased. That bias could potentially go in either direction. But basically this means we’re sort of working with observational data. The second issue is that what’s being shown in this plot is not data. I know it looks like data, but what’s shown are numbers derived from regression coefficients. In the appendix, you can find this table: Basically, they fit a regression to predict how much people drank in this experiment at the end of the study (“g-EtOH”) based on (a) how much they drank during the same experiment at the start of the study (“Baseline”), (b) their sex, and (c) whether they got semaglutide or not (“Condition”). Those coefficients are in the B column. How exactly they got from these coefficients to the numbers in the figure isn’t entirely clear to me.
But using a plot digitizer I found that the figure shows ~59.9 g for the placebo group and ~33.3 g for the semaglutide group, for a difference of 26.6 g. I believe that difference comes from the regression coefficient for “Condition” (-25.32) plus some adjustments for the fact that sex and baseline consumption vary a bit between the two groups. So… that’s not nothing! This is some evidence in favor of semaglutide being helpful. But it’s still basically just a regression coefficient computed on a non-randomized sample. Which is sad, since the point of RCTs is to avoid resorting to regression coefficients on non-randomized samples. Thus, I put much more faith in outcome #1.

Discussion

To summarize, the most reliable outcome of this paper was how much people reported drinking in daily life. No effect was observed there. The laboratory experiment suggests some effect, but the evidence is much weaker. When you combine the two, the results of this paper are quite bad, at least relative to my (high) hopes. Obviously, just because the results are disappointing does not mean the research was bad. The measure of science is the importance of the questions, not what the answers happen to be. It’s unfortunate that a non-randomized sample participated in the final drinking experiment, but what were they supposed to do, force them? This experiment involved giving a synthetic hormone and an addictive substance to people with a use disorder. If you have any doubts about the amount of work necessary to bring that to reality, I strongly encourage you to look at the FDA application. OK, fine, I admit that I do feel this paper “hides the bodies” slightly too effectively, in a way that could mislead people who aren’t experts or don’t read the paper carefully. I think I’m on firm ground with that complaint, since in the discussions I’ve seen, 100% of people were in fact misled.
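As an aside, here’s a sketch of the mechanics by which “adjusted means” like those in the figure are typically derived from such a regression. This uses invented toy data, not the paper’s, and the coefficients are made up for illustration:

```python
import numpy as np

# Toy data (invented, NOT the paper's): predict end-of-study grams of
# ethanol from baseline consumption, sex, and treatment condition.
rng = np.random.default_rng(0)
n = 25
baseline  = rng.uniform(20.0, 80.0, n)   # baseline consumption
sex       = rng.integers(0, 2, n)        # 0/1
condition = rng.integers(0, 2, n)        # 0 = placebo, 1 = semaglutide
y = 10 + 0.6 * baseline + 5 * sex - 25 * condition + rng.normal(0, 5, n)

# Ordinary least squares fit: y ~ intercept + baseline + sex + condition
X = np.column_stack([np.ones(n), baseline, sex, condition])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Adjusted means": predict each group's outcome with baseline and sex
# held at their overall averages, so only the condition term differs.
covs = [1.0, baseline.mean(), sex.mean()]
placebo_mean     = float(np.dot(coef, covs + [0.0]))
semaglutide_mean = float(np.dot(coef, covs + [1.0]))

# The gap between adjusted means equals the "Condition" coefficient.
print(placebo_mean - semaglutide_mean, -coef[3])
```

In this simple setup the gap between the two adjusted means is exactly the condition coefficient; extra adjustments (like the sex and baseline imbalances mentioned above) are what would move the figure’s numbers away from the raw coefficient.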
But I’m sympathetic to the reality that most reviewers don’t share my enlightened views about judging science, and that a hypothetical paper written with my level of skepticism would never be published. (People think the problem with science is that it’s too woke. While I don’t really disagree, I still think the bigger problem is screwed-up incentives that force everyone to oversell everything, because that’s what you have to do to survive. But that’s a story for another time.) Anyway, despite these results, I’m still hopeful that GLP-1 drugs might help with addiction. This is a relatively small study, and it only lasted 9 weeks. I don’t think we can dismiss the huge number of anecdotes yet. And the laboratory experiment was at least a little promising. Given how destructive addictions can be, I vote for more research in this direction. Fortunately, given the billions of dollars to be made, that’s sure to happen. But given just how miraculous semaglutide is for obesity, and given the miraculous anecdotes, I don’t see how to spin this paper as anything but a letdown. It provides only weak evidence for any effect and comes close to excluding the possibility of another miracle. If you’ve forgotten what miracles look like, here is the figure for body weight:

2 days ago 4 votes
Car trouble

Some time ago—I’m not sure when exactly—my car started rattling. It would only rattle when the engine was on and sitting idle, or when accelerating with just the right amount of throttle. This rattle, I did not like it. It sounded like a tiny spoon in a garbage disposal. Which can’t be good, can it? But I exist only in the world of ideas and couldn’t summon the executive function to do anything about it. Eventually, the future Dynomight biologist rode in the car, and we had this conversation:

Dynomight biologist: What’s that sound?
Dynomight: Rattling!
Dynomight biologist: (Pause.) Huh.

(In the “Huh”, I could sense overtones of, “How interesting that you would choose to live like this.”) Time went by. I kept reminding myself that selfhood doesn’t exist and therefore we all have a moral responsibility to be kind to our future selves and that future me wouldn’t be any more enthusiastic having this rattle situation dumped on them than I was. So I spent many irreplaceable hours reading about the many, many possible causes of rattling. Eventually, I came to the conclusion that it wasn’t rattling, but rather incomplete fuel combustion. I put in high-octane petrol, convinced that would make the sound go away. But it didn’t. So I spent more hours reading. Maybe it was a problem with the catalytic converter? Rod bearings? Heat shield? Maybe it was incomplete combustion, but I’d let it go on so long that the car was damaged? Nothing seemed to exactly fit the symptoms. Finally, after several years, I decided to try something crazy: I started the car, lay on the ground, and tried to look for where the rattling noise was coming from. When I did that, I immediately saw an extremely rusted piece of metal dancing around on a pipe. I pulled on it, and this thing fell off the car: And the rattling stopped. What is this disease? (Cf. my stupid noise journey.)
Maybe it’s that if you spend all your time trying to understand complex systems, then when you face a complex system that isn’t behaving like you want, you naturally… try to understand it. But that’s not necessarily smart. Often, “understanding” is weak. Thinking is weak. The world is chaotic and not easy to simulate inside a brain. Often, you want to resist the urge to understand and simply gather more information. Instead of thinking, look. Maybe that’s the bitter lesson for real life.

2 weeks ago 14 votes
Algorithmic ranking is unfairly maligned

What does “algorithmic ranking” bring to mind for you? Personally, I get visions of political ragebait and supplement hucksters and unnecessary cleavage. I see cratering attention spans and groups of friends on the subway all blankly swiping at glowing rectangles. I see overconfident charlatans and the hollow eyes of someone reviewing the 83 photos she just made her boyfriend take of her in front of a sunset. Most of all, I see dreams of creative expression perverted into a desperate scramble to do whatever it takes to please the Algorithm. Of course, lots of people like algorithmic ranking, too. I theorize that the skeptics are right and algorithmic ranking is in fact bad. But it’s not algorithmic ranking per se that’s bad—it’s just that the algorithms you’re used to don’t care about your goals. That might be an inevitable consequence of “enshittification”, but the solution isn’t to avoid all algorithms, but just to avoid algorithms you can’t control. This will become increasingly important in the future as algorithmic ranking becomes algorithmic everything.

Why algorithmic ranking is bad for some people sometimes

You’ve heard it all before. I think algorithmic ranking leads many people to spend time and emotional energy on things they’d rather not spend them on. I also think it leads lots of people to believe preposterous bullshit promoted by charismatic charlatans. I’m disturbed when I see how kids interact with addictive algorithms, but then I notice that adults are much the same. You know the story. A common defense of algorithmic ranking is that “your feed is your problem”. If you’re getting political ragebait and unnecessary cleavage, then that’s on you for engaging with that stuff. If you actually cared about the things you pretend to care about, everything would be fine. The problem is looking at you in the mirror. I find this defense bewildering and almost hostile. I mean, where is it spelled out how you’re supposed to behave to get the content you want?
Maybe there are little like buttons, but what do they do? How do they interact with all the other signals, like what you watch? It’s unclear. Now, I’m sure that many of the people who complain about ragebait and cleavage are in fact drawn to them in some way. Maybe they don’t swipe away fast enough when that stuff is inserted into their endless content trough. If they only had eyes for philosophy lectures and meditation videos, maybe that’s all they’d see. OK, but so what? Where’s the empathy? Everyone has some divergence between the urges they feel and the urges they wish they felt. We don’t judge alcoholics who throw away their booze. We don’t make fun of gambling addicts who avoid travel to Vegas or Monaco. So why is it wrong to not want lurid but unhealthy content dangled in front of you? One of the fundamental arts of being a human is using your “better self” to try to control your “lesser self”. You can put a giant calendar on your wall to track when you exercise. You can keep junk food out of your home. You can write your algorithmic ranking manifestos using an app that permanently deletes everything if you stop typing for 5 seconds. These tricks are good. We need more of them! Algorithmic ranking—as we know it—is the opposite. So what would I propose instead? How about… sliders? Why can’t I click a slider to say “more educational content” or “less political rage” or “no David Sinclair-esque supplement huckster bullshit”? Or how about an algorithm that just does what it says on the tin? Remember, YouTicBooX might let you “like” stuff, but the algorithm’s goal isn’t to show you stuff you’d like. The goal is to make money. Your likes are just another feature to be integrated with the rest of your behavioral profile for increasing engagement and targeting ads, thank you very much. The algorithm is running on you, not for you.

But what about the market?

Another common defense of current algorithms goes like this: If they’re so bad, then why did they win?
If people wanted sliders, they would have picked services with sliders. But they didn’t. Go make SliderTube if you want. No one cares. There isn’t some pent-up demand for algorithms that give more control to our better selves. Sometimes people also gesture at the tyranny of the marginal user. The idea is that, sure, power users like control. But where companies really compete is in the fight for the “marginal user”, the person who is almost ready to quit. This person only vaguely understands what a phone is, has never heard of “algorithms”, likes flashing lights, and has the attention span of a pissed-off chimpanzee. They will not tolerate any complexity, so all sliders must go. Sorry, power users. There’s clearly something to these arguments. But still: If your portal to the world is designed for the marginal user, surely you’ve gone wrong somewhere, no?

What happened to Netflix?

In the long-long ago, Netflix had star ratings. You’d rate stuff between 1 and 5 stars and get a sorted list of predictions:

Chungking Express 4.49
Days of Being Wild 4.51
Solaris 4.57
In the Mood for Love 4.62
Master and Commander: The Far Side of the World 4.98

Nowadays, you get a disorienting set of categories like DARK COMEDIES ABOUT ITALIAN FEUDALISM and LIFE IS SHORT—WATCH IT AGAIN and THINGS YOU’RE IN THE MIDDLE OF, HELPFULLY PLACED IN AN INCONSISTENT LOCATION. Instead of star ratings, there are “match percentages”, but you have to interact to see them and they always seem to be 98%. What happened? Well, the story is in the public record. (See, for example, this post and this post by Gibson Biddle.) In short, Netflix realized a bunch of things: That they needed to concentrate everything on increasing subscriber revenue. And that the main goal of recommendations should be subscriber retention, or making sure people don’t cancel. That the things people rate highly aren’t always the same as what they actually watch. It’s cool that you gave The Seventh Seal five stars.
But after a long day at work and finally getting the kids to bed, are you really going to choose Andrei Rublev over The Great British Bachelorette and the Furious 7? That to retain people, you need to get them started watching new stuff. Lots of people want to watch Friends, so Netflix will pay $100 million/year for Friends. But if you just join, binge every episode of Friends, and then cancel, that’s bad. However, if the Friends button were to—say—randomly shift around in the interface, maybe while hunting for it you’ll get hooked on some other (hopefully cheaper) shows and stick around longer. That beyond your explicit ratings, there are lots of implicit signals like what you watch, what you click on, what devices you use, and how long you stop scrolling when shown different kinds of thumbnails. These implicit signals are more useful than explicit ratings when predicting what to show you to keep you subscribed. That many people don’t want to rate stuff. And (I speculate) that this provides a convenient excuse to drop the whole star rating system and replace it with the “whatever the hell order we want” system that prevails today, where the match % means nothing and promises nothing. Netflix could have kept the star ratings as some kind of optional feature for the nerds, hidden away in some dusty submenu. But they didn’t. They 100% killed the star ratings for everyone. Why? I guess maintaining features takes work. But mostly I think they figured that most star-rating diehards wouldn’t actually quit the platform. They’d grumble but accept the new system and thereby be Retained. And probably they were right. Now, I don’t mean to vilify Netflix. I mean, sure, it’s an amoral automaton doing whatever maximizes profits. But this is hardly the worst example of algorithmic ranking, and hey, this is capitalism! If Netflix tried to be “principled” and stuck with star ratings, maybe some other company would have displaced them? Don’t hate the player, hate the game?
(Maybe don’t hate the game either?)

What algorithmic ranking should be

Let me summarize my argument so far: Algorithmic ranking as we know it is designed to maximize money for the companies doing the ranking, duh. That’s great if what you want is to be addicted or—more precisely—if your behavior happens to create incentives for rankings that are well-aligned with your goals in life. Otherwise, it’s not. Companies settled on these algorithms for a reason. It’s not because they’re evil, it’s because this is what’s profitable. If you accept all that, then what follows? One view is that this is further proof we must smash capitalism and end the malign power of the invisible hand. That’s intellectually coherent, but not my style. Another view is that, well, this is the outcome of the market. It’s pointless to fight the equilibrium, so we should just live with the algorithms. This is also not my style. A third view is that we must reject algorithmic ranking. Refuse to use any social media with algorithmic ranking. Subscribe to blogs using RSS. Install browser extensions that block YouTube’s recommendations. Chronological timelines only. Human curation only, forever. This third view is very much my style. One of the reasons I love blogging is that I can reach people without worrying about the damn algorithms. (Hi.) But I’ve come to believe it’s a dead end. After all, people like algorithmic ranking. Maybe with better interfaces or a bigger social movement, more people would shift towards human curation. But I suspect not that many, and the arc of history bends towards algorithms. And good algorithmic ranking—which you control—would be awesome. I mean, I appreciate that people subscribe to this blog. But I find it a little disturbing that if someone less well-known than me wrote the same thing, then many fewer people would read it. (I find it extremely disturbing that if someone more famous wrote it, then many more people would read it.)
Yet that’s an inevitable consequence of relying on subscriptions instead of algorithms. This isn’t (only) an issue of vanity. I think it leads to an “invisible graveyard” of contributions that never happen. Say you’re a sane person who doesn’t want to spend hours every week writing for strangers on the internet. But maybe you’re—I don’t know—maybe you’re on the board of your local volunteer fire department. And maybe you have one sizzling banger to write about how local volunteer fire department boards should be organized. That information is important! Reality is fractally complex! But probably you won’t write it, because almost none of the people who’d benefit from it will ever see it. I think the solution is to embrace algorithmic ranking, but insist on “control”—to insist that the algorithm serves your goals and not someone else’s. How could that happen? In principle, we could all just refuse to use services without control. But I’m skeptical this would work, because of rug-pulls. The same forces that made TikTok into TikTok will still exist, and history is filled with companies providing control early on, getting a dominant position, and then taking the control away. Theoretically everyone could leave at that point, but that rarely seems to happen in practice. Instead, I think the control needs to be somehow “baked in” from the beginning. There need to be technological/legal/social structures in place that make rug pulls impossible. What exactly should those structures be? And what exactly is “control”, after all? I don’t know! Those seem like difficult technical problems. But they don’t seem that hard, do they? I suspect the main reason they haven’t been solved is that we haven’t tried very hard. We should do that. And—I dare say—we should do it quickly. My guess is that algorithmic ranking will soon become a sort of all-encompassing algorithmic interface.
More about that soon, but if all the information that enters your brain is being filtered by an algorithm, it seems important that you know the algorithm is on your side. Thanks: Steve Newman, Séb Krier

a month ago 21 votes
I am offering mentoring

What is this?

I am offering to act as a “mentor”, to you, in case that seems like something you’d find useful.

How will it work?

We will meet three times for 30 minutes. During those sessions, I’ll try to help you do whatever it is you’re trying to do. Then I’ll sit back and congratulate myself on everything you do for the rest of your life.

Why are you doing this?

It’s an experiment. I have a theory that there isn’t enough mentoring in the world, and that creating more mentoring might be an efficient step towards filling the universe with bliss-maximizing Dyson spheres. So I’ve decided to try this and see what happens.

What topics are allowed?

In principle, any topic. But I’ll probably be able to help you more if I have some kind of expertise or interest in whatever you’re trying to do. It could be anything related to statistics, science, AI, self-improvement, blogging, academia, writing, rationalism, (effective) altruism, air quality, the welfare of animals, or anything I’ve ever written about. Or it could be something else entirely. To encourage you to think broadly, I will try to pick at least one person with a topic not on the previous list.

Who is eligible?

Anyone. In particular, anyone at any age or career stage. But also the other kinds of anyone.

What will it cost?

Nothing.

Is there an application?

I have no idea how many people will be interested, but when scarce resources are given away, demand often exceeds supply. So there is an application, which is here: I basically just ask (1) what you want to do, and (2) how you hope I can help you.

How many people will you pick?

Three. If lots of people apply, I might try to recruit some other mentors. Or I might be lazy and disorganized and not do that.

How will you pick?

I will pick primarily based on how much I think I can help you and secondarily based on how much what you’re trying to do will advance the general welfare of the universe. I stress that the primary axis is primary!
If you make a strong argument that I can help you become very rich or succeed in dating (unlikely), then I’ll pick you over some do-gooder that I can help slightly less.

What makes you so great that you can mentor other people?

Nothing. Honestly, I could give you several reasons that I’m less than ideal as a mentor! But I suspect that mentoring is valuable enough (and undersupplied enough) that even less-than-ideal people like me can still be helpful.

How will we meet?

We will meet by video call using Signal.

Will you share any details about me without my permission?

I might write about the results of this experiment in general, reflecting on how it went, giving tips for other people who might try something similar, etc. The whole point of doing this is to experiment and understand if it works, after all. If you want me to share some details about you, I might. But I am very sensitive to privacy, and I will not share anything (even broad details) without running it past you first, and I’m extremely likely to agree to remove any information you ask me to remove, even if you’re being ridiculous.

Is this a good opportunity to interview you or try to get you to go on a podcast, etc.?

No. I will attack you.

a month ago 32 votes

More in life

Fast Cash vs. Slow Equity

Knowing what you're building

6 hours ago 3 votes
Hiring judgement

In the end, judgment comes first. And that means hiring is a gut decision. As much science as people want to try to pour into the hiring process, art always floats to the top. This is especially true when hiring at the executive level. The people who make the final calls — the ones who are judged on outcome, not effort — are ultimately hired based on experience and judgment. Two traits that are qualities, not quantities. They are tasked with setting direction, evaluating situations, and making decisions with limited information. All day long they are making judgment calls. That's what you hire them to do, and that's how you decide who to hire. Presented with a few finalists, you decide who you *think* will do a better job when they have to *think* about what to do in uncertain situations. This is where their experience and judgment come in. It's the only thing they have that separates them from someone else. Embrace the situation. You don't know, they don't know, everyone's guessing, some guess better than others. You can't measure how well someone's going to guess next time, you can only make assumptions based on other assumptions. Certainty is a mirage. In the art of people, everything is subjective. In the end, it's not about qualifications — it's about who you trust to make the right call when it matters most. Ultimately, the only thing that was objective was your decision. The reasons were not. -Jason

6 hours ago 3 votes
Orson Welles as Falstaff on Late Night TV

This post is in the Notebook - my digital workshop for anecdotes, links, excerpts, sketches, lists, and anything else I want to explore in brief, revisit later, or post for reference.

16 hours ago 2 votes
How to Become a Millionaire in Your 30s

Build distribution then build whatever the f*ck you want

18 hours ago 2 votes
Classical Music Got Invented with a Hard Kick from a Peasant's Foot

Or why we need less math in music theory

6 hours ago 2 votes