Limits of smart

Take me. Now take someone with the combined talents of Von Neumann, Archimedes, Ramanujan, and Mozart. Now take someone smarter again by the same margin, and repeat that a few times. Say this Being is created and has an IQ of 300. Let's also say it can think at 10,000× normal speed. But it only has access to the same resources you do. Now what?

Let's assume it would quickly solve all our problems in math and programming and philosophy, to the extent they're solvable. That's plausible, since progress in these fields only requires thinking. What about other fields?

Other fields

How good would it be at predicting the weather?

We're constantly getting better at predicting weather, because:

- We have faster supercomputers to run simulations.
- We have better data from new satellites, weather stations, and radar.
- We use machine learning and statistics to exploit patterns in all that data.

The Being could surely design better algorithms for simulations or machine learning. But still: there's only so much you can do with a given supercomputer or a given amount of data. Weather is a chaotic system. If you want to predict further into the future, you'll eventually need more FLOPs and better knowledge of starting conditions. Those require bigger supercomputers and better satellites. Just being smart doesn't (immediately) cause those things to exist.

Best guess: A bit better.

Would it have known that Donald Trump would win the 2024 election?

I don't think this was knowable. Take all the available polling, economic data, and lessons from history. If you had looked at these on Nov 2, 2024, I doubt they provided enough signal to predict the winner with confidence. The truth was out there in people's voting intentions, but it was buried in the brains of millions of people.

I'm sure the Being would give better predictions. If you let it bet in prediction markets, it would probably make tons of money. But it wouldn't be able to give geopolitical events 0% or 100% probabilities.
It wouldn't be psychic.

Best guess: No.

Would it beat current chess engines?

The top current chess engine has an Elo of 3625. This is insane: it's 750 Elo higher than any human has ever achieved. Anyway, the old, hated Levitt Equation says that after years of study, a person can achieve an Elo of around (10 × IQ) + 1000. This suggests the 300-IQ Being would manage an Elo of 4000. If you trust that calculation, and the Being played our current best engine, it would win 81.09% of games, draw 18.88%, and lose 0.03%.

But we shouldn't trust that calculation. Obviously, the Levitt Equation isn't accurate even for normal IQs. And I suspect the Being would lose to modern chess engines in complex endgames, because it turns out that complex endgames in chess aren't really solved with "intelligence". Chess engines do incredibly deep searches of trees of possible moves and countermoves; the best move is whatever comes out of that tree search. There is no other explanation. We assumed the Being could think 10,000× faster than a normal human, which would allow it to do some searching of its own, but it still wouldn't approach the 100,000,000 positions chess engines might evaluate per second. But maybe that's wrong? Or maybe the Being could find some way to avoid complex endgames? (Of course, if the Being had its own computer, it would reprogram it and crush us.)

Best guess: Unsure.

Would it solve "creativity"?

Would the Being be able to create better novels or music or jokes? It would surely be amazing; since we included Mozart, this is basically true by definition. But there are reasons to think normal-person art would remain valuable.

One is that if you accept an extreme version of Bourdieu, then taste is fake and the only reason we "like" anything is so that we can play class games and oppress each other. If so, then it doesn't matter how "good" the Being's books are.
The upper class will just continue finding ways to demonstrate their cultural capital and keep their less privileged competitors in their place.

Alternatively, maybe you find that life is strange and cruel and beautiful, and sometimes you feel things that seem important but that you can't understand, but sometimes someone else feels the same things and creates something that transcends the gap between your minds, and just for a moment you feel that you're part of some universal story and you don't feel so alone. Just because the Being is smart doesn't mean it knows what it's like to walk in your shoes.

Best guess: It would be great, but if art is born from experience, normal-person art will still have a place.

Would it solve physics?

If you were sufficiently smart, could you look at all our current experiments and see some underlying pattern? Is there some mathematical trick or idea that will make all the pieces fall into place? Maybe! Or maybe that's impossible. Maybe there are just too many rulesets consistent with the observations we have. After all, no one predicted quantum physics: starting around 1900 we observed strange things, and then we invented quantum physics to make peace with those observations. If it's impossible, then all the Being could do in the short term would be to help design new experiments: "Go build this kind of supercollider, or this kind of space telescope, please."

Best guess: Probably not.

Would it cure cancer?

I've asked many biologists this question. The universal answer is "no". The idea seems to be that biology isn't really limited at the moment by our intelligence, but by our experimental knowledge. Many people don't realize just how cumulative modern biology research is. Take these two mental models:

1. Biology is a giant pool of phenomena, with people picking random things to investigate.
2. Biology is a "hard onion" that needs to be peeled away layer by layer.
We use our current knowledge to invent new tools, then use those tools to do experiments, gain new knowledge from those experiments, and then invent new tools.

The truth is a mixture of both, perhaps a bit more like the first. But there's a lot of the second, too! Modern biology concerns many very small things that we can't just pick up and manipulate. So instead we build tools to build tools (like TALENs or molecular beacons or phage display or ChIP-seq or bioluminescence imaging or prime editing) to see and manipulate them.

So why might the Being be unable to cure cancer? Because perhaps it's not possible to cure cancer right now. New knowledge and new tools are needed, and each depends on the other. Probably the best the Being could do is accelerate that invention loop.

Best guess: Probably not.

Would it solve persuasion?

Would the Being be able to convince anyone of anything? Would it be the best diplomat in history? Let's just assume that the Being has the best logic, the best rhetoric, the most convincing emotional appeals, and so on, all calibrated to whoever it's speaking to. Fine. But at the same time: for what fraction of people do you think there exist words that would change their mind about Trump or abortion, or the wars in Israel or Ukraine? I suspect that if you decided to be open-minded, the Being would probably be extremely persuasive. But I don't think it's very common to do that. On the contrary, most of us live most of our lives with strong "defenses" activated. Would the Being be so good that defenses don't matter? Would it convince enough people to start a social movement? Would everyone respond by refusing to listen to anything? I have no idea.

Best guess: No idea.

Themes

There are a few repeated themes above.

To do many things requires new fundamental knowledge (e.g. the results of physical experiments, how molecular biology works).
The Being might eventually be able to acquire this knowledge, but it wouldn't happen automatically, because it requires physical experiments.

To do other things requires situational knowledge (e.g. the voting intentions of millions of people, the temperature and humidity at every position in Earth's atmosphere, or which particular cells in your body have become cancerous as a result of which mutations). Getting this knowledge requires creating and maintaining complex infrastructure.

To do most things requires moving molecules around.

There are lots of feedback loops. Maybe the Being could run its own experiments. But doing that would require building new machines, which would require moving lots of molecules around, which would require new machines and new knowledge and new experiments...

Finally, there is chaos/complexity. Many things that are predictable in principle (e.g. chess, the weather, possibly psychology or social movements) aren't predictable in practice, because the underlying dynamics are too complicated to be understood or simulated.

Looking back

I often think to myself, "Hey self, if super-intelligent AI is invented in a few years, you'll almost certainly look back on 2025 and feel really stupid for not predicting many things that will seem obvious in retrospect. What are those things? xox, Self."

(Usually the first thought this prompts is, "Computer security is going to be really important; can we please, for the love of god, keep our critical systems simple and isolated from the internet?" But let's put that aside.)

The second thought this prompts is, "Maybe the first-order consequences wouldn't be that big?" Perhaps it would solve math and programming and overturn all creative industries, and maybe... that's "all", at first? A super-intelligence wouldn't be a god. I would expect a super-intelligence to be better than humans at creating better super-intelligences. But physics still exists! To do most things, you need to move molecules around.
And humans would still be needed to do that, at least at first. So here's one plausible future:

1. Super-intelligent AI is invented.
2. At first, existing robots cannot replace humans for most tasks. It doesn't matter how brilliantly the AI is programmed: there simply aren't enough robots, and the hardware isn't good enough.
3. In order to make better robots, lots of research is needed.
4. Humans are needed to move molecules around, to build factories, and to do that research.
5. So there's a feedback loop between more/better research, robotics, energy, factories, and hardware to run the AI on. Gradually that loop goes faster and faster.
6. Until one day the loop can continue without the need for humans.

That's still rather terrifying. But it seems likely that there's a substantial delay between step 1 and step 6. Factories and power plants take years to build (for humans). So maybe the best initial mental model is a "multiplier on economic growth", as the economists have been insisting all along.

Odds and ends

How quickly could simulations (of, e.g., biological systems) replace physical experiments? I suspect simulations will be limited by the same feedback loops, because (1) simulations are limited by available hardware and (2) new fundamental and/or situational knowledge is needed to set the simulations up.

Would the Being actually solve all the problems in math? It's not clear, because, as you get smarter and smarter, is there more interesting math to be done, forever? And does that math keep getting more and more unreasonably effective, forever? Or is an IQ of 300 still actually quite stupid in the grand scheme of things?

If you want to comment but don't like Substack, I've created a forum on lemmy. (I tried this a year ago with kbin, and two weeks later kbin died forever. Hopefully that won't happen again?)
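As a footnote to the chess section: the standard Elo expected-score formula converts a rating gap into the stronger player's expected score (wins count 1, draws count 1/2). A minimal sketch, using that standard formula; note that the article's separate 81.09% / 18.88% / 0.03% win/draw/loss split additionally assumes some draw model that the formula alone doesn't provide, and the 4000-vs-3625 matchup is of course hypothetical.

```python
def expected_score(elo_diff):
    """Expected score for the higher-rated player, given the rating gap.

    Standard Elo formula: E = 1 / (1 + 10 ** (-diff / 400)).
    A 400-point gap corresponds to 10:1 odds.
    """
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

# Hypothetical 4000-Elo Being vs. the 3625-Elo engine: a 375-point gap.
print(round(expected_score(4000 - 3625), 3))  # ≈ 0.896
```

The formula's expected score (~0.90) is roughly consistent with the article's split, since 0.8109 + 0.1888/2 ≈ 0.905.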
