More from the singularity is nearer
Intel is sitting on a huge amount of card inventory they can’t move, largely because of bad software. Most of this is a summary of the public #intel-hardware channel in the tinygrad discord. Intel is currently sitting on:

- 15,000 Gaudi 2 cards (with baseboards)
- 5,100 Intel Data Center GPU Max 1450s (without baseboards)

If you were Intel, what would you do with them?

First, the Gaudi cards. The open source repo needed to control them was archived on Feb 4, 2025. There’s a closed source version of this that’s maybe still maintained, but eww, closed source, and do you think it’s really maintained? The architecture is kind of tragic, and that’s likely why they didn’t open source it. Unlike every other accelerator I have seen, the MMEs, which are where all the FLOPS are, are not controllable by the TPCs. While the TPCs have an LLVM port, the MME is not documented. After some poking around, I found the spec: it’s highly fixed function and looks very similar to the Apple ANE. But that’s not even the real problem with it. The problem is that it is controlled by queues, not by the TPCs. Unpacking habanalabs-dkms-1.19.2-32.all.deb, you can find the queues. There is some way to push a command stream to the device so you don’t actually have to deal with the host itself for the queues, but that doesn’t save you from having to decompose the network you are trying to run into something you can put on this fixed function block.

Programmability is on a spectrum, ranging from CPUs being the easiest, to GPUs, to things like the Qualcomm DSP / Google TPU (where at least you drive the MME from the program), to this and the Apple ANE being the hardest. While it’s impressive that they actually got on MLPerf Training v4.0 training GPT3, I suspect it’s all hand coded, and if you deviate at all from the trodden path you’ll get almost no perf. Accelerators like this are okay for low power inference where you can adjust the model architecture for the target; Apple does a great job of this. But this will never be acceptable for a training chip.

Then there’s the Data Center GPU Max 1450. Intel actually sent us a few of these. You quickly run into a problem…how do you plug them in? They need OAM sockets, 48V power, and a cooling solution that can sink 600W. As far as I can tell, they were only ever deployed in two systems, the Aurora Supercomputer and the Dell XE9640. It’s hard to know, but I really doubt many of these Dell systems were sold.

Intel then sent us this carrier board. In some ways it’s helpful, but in other ways it’s not at all. It still doesn’t solve cooling or power, and you need to buy 16x MCIO cables (cheap in quantity, but expensive and hard to find off the shelf). Also, I never got a straight answer, but I really doubt Intel has many of these boards, and that board doesn’t look cheap to manufacture more of. The connectors alone, which you need two of per GPU, cost $26 each. That’s $104 for just the OAM connectors.

tiny corp was in discussions to buy these GPUs. How much would you pay for one of these on a PCIe card? The specs look great: 839 TFLOPS, 128 GB of RAM, 3.3 TB/s of bandwidth. However…read this article. Even in simple synthetic benchmarks, the chip doesn’t get anywhere near its max performance, and it looks to be for fundamental reasons like memory latency. We estimate we could sell PCIe versions of these GPUs for $1,000; I don’t think most people know how hard it is to move non-NVIDIA hardware. Before you say you’d pay more, ask yourself: do you really want to deal with the software?
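Those headline numbers also hint at why the chip struggles to hit peak. Here’s a back-of-envelope roofline sketch (my own arithmetic, not from the article; the 2-byte datatype assumption and the idealized traffic model are mine, and it ignores the latency effects the article blames):

```python
# Back-of-envelope roofline for the quoted Max 1450 numbers.
peak_flops = 839e12   # FLOP/s (assuming the peak figure is for a 2-byte datatype)
peak_bw    = 3.3e12   # bytes/s of HBM bandwidth
ridge = peak_flops / peak_bw   # arithmetic intensity needed to be compute-bound
print(f"ridge point: {ridge:.0f} FLOP/byte")   # ~254

# Square matmul C = A @ B with N x N matrices at 2 bytes/element:
# ~2*N^3 FLOPs against ~3*N^2*2 bytes of ideal memory traffic.
def intensity(n: int, bytes_per_el: int = 2) -> float:
    return (2 * n**3) / (3 * n**2 * bytes_per_el)

for n in (256, 1024, 4096):
    kind = "compute-bound" if intensity(n) >= ridge else "bandwidth-bound"
    print(f"N={n}: {intensity(n):.0f} FLOP/byte, {kind}")
```

On paper, square matmuls below roughly 760x760 are already bandwidth-bound, before memory latency makes things worse.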
An adapter card has four pieces: a PCB for the card, a 12->48V voltage converter, a heatsink, and a fan. My quote from the guy who makes an OAM adapter board was $310 for 10+ PCBs and $75 for the voltage converter. A heatsink that can handle 600W (heat pipes + vapor chamber) is going to cost $100, then maybe $20 more for the fan. That’s $505, and you still need to assemble and test them, oh and now there’s tariffs. Maybe you can get this down to $400 in ~1000 quantity. So $200 for the GPU, $400 for the adapter, $100 for shipping/fulfillment/returns (more if you use Amazon), and 30% profit if you sell at $1k. tiny would net $1M on this (the arithmetic is sketched at the end of this post), which has to cover NRE, and you carry the risk of unsold inventory. We offered Intel $200 per GPU (a $680k wire) and they said no. They wanted $600. I suspect that unless a supercomputer person who already uses these GPUs wants to buy more, they will ride it to zero.

tl;dr: there are 5,100 of these GPUs with no simple way to plug them in. It’s unclear if they’re worth the cost of the slot they go in. I bet they end up shredded, or maybe dumped on eBay for $50 each in a year like the Xeon Phi cards. If you buy one, good luck plugging it in!

The reason Meta and friends buy some AMD is as a hedge against NVIDIA. Even if it’s not usable, AMD has progressed on a solid, steady roadmap, with a clear continuation from the 2018 MI50 (which you can now buy for 99% off) to the MI325X, which is a super exciting chip (AMD is king of chiplets). They are even showing signs of finally investing in software, which makes me bullish. If NVIDIA stumbles for a generation, this is AMD’s game. The ROCm “copy each NVIDIA repo” strategy actually works if your competition stumbles. They can win GPUs with slow and steady improvement plus competition stumbling; that’s how AMD won server CPUs.

With these Intel chips, I’m not sure who they would appeal to. Ponte Vecchio is cancelled. There’s no point in investing in the platform if there’s not going to be a next generation, and therefore nobody can justify the cost of developing software, therefore there won’t be software, therefore they aren’t worth plugging in.

Where does this leave Intel’s AI roadmap? The successor to Ponte Vecchio was Rialto Bridge, but that was cancelled. The successor to that was Falcon Shores, but that was also cancelled. Intel claims the next GPU will be “Jaguar Shores”, but fool me once… To quote JazzLord1234 from reddit: “No point even bothering to listen to their roadmaps anymore. They have squandered all their credibility.” Gaudi 3 is a flop due to “unbaked software”, but as much as I usually do blame software, nothing has changed from Gaudi 2 and it’s just a really hard chip to program for. So there’s no future there either. I can’t say the “Jaguar Shores” square instills confidence. It didn’t inspire confidence for “Joseph B.” on LinkedIn either.

From my interactions with Intel people, it seems there are no individuals with power there; it’s all committee-like leadership. The problem with this is there’s nobody who can say yes, just many people who can say no. Hence all the cancellations and the nonsense strategy. AMD’s dysfunction is different. From the beginning they had leadership that could do things (Lisa Su replied to my first e-mail); they just didn’t see the value in investing in software until recently. They sort of had a point if they were only targeting hyperscalers, but it seems like SemiAnalysis got through to them that hyperscalers aren’t going to deal with bad software either.
It remains to be seen if they can shift culture to actually deliver good software, but there’s movement in that direction, and if they succeed, AMD is massively undervalued. Their hardware is good. With Intel, until that committee-style leadership is gone, there’s 0 chance for success. Committee leadership is fine if you are trying to maintain, but Intel’s AI situation is even more hopeless than AMD’s, and you’d need something major to turn it around. At least with AMD, you can try installing ROCm and be frustrated when there are bugs. Every time I have tried Intel’s software, I can’t even recall getting the import to work, and the card wasn’t powerful enough that I cared. Intel needs actual leadership to turn this around, or there’s 0 future in Intel AI.
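For what it’s worth, the adapter economics above do check out as back-of-envelope arithmetic. A quick sketch using only the post’s own figures (the 3,400-unit count is inferred from the $680k wire at $200/GPU, not stated directly):

```python
# Unit economics for the hypothetical $1k card, using the post's estimates.
gpu_cost     = 200    # per-GPU offer to Intel
adapter_cost = 400    # PCB + 12->48V converter + heatsink + fan, at ~1000 quantity
fulfillment  = 100    # shipping / fulfillment / returns
price        = 1000

unit_profit = price - (gpu_cost + adapter_cost + fulfillment)  # $300, i.e. 30% of $1k
n_units     = 680_000 // gpu_cost                              # 3,400 GPUs in the wire
print(unit_profit, n_units, unit_profit * n_units)             # 300 3400 1020000 (~$1M)
```

That ~$1M is gross, before NRE and the risk of unsold inventory, which is why the $600 ask doesn’t work.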
If you give some monkeys a slice of cucumber each, they are all pretty happy. Then you give one monkey a grape, and nobody is happy with their cucumber any more. They might even throw the slices back at the experimenter. He got a god damned grape, this is bullshit, I don’t want a cucumber anymore! Nobody was in absolute terms worse off, but that doesn’t prevent the monkeys from being upset. And this isn’t unique to monkeys; I see this same behavior on display when I hear about billionaires. It’s not about what I have, it’s that they got a grape.

The tweet is here. What do you do about this? Of course, you can fire this woman, but what percent of people in American society feel the same way? How much of this can you tolerate and still have a functioning society? What’s particularly absurd about the critique in the video is that it hasn’t been thought through very far. If that house and its friends stopped “ordering shit”, the company would stop making money and she wouldn’t have that job. There’s nothing preventing her from quitting today and getting the same outcome for herself. But of course, that isn’t what it’s about, because then somebody else would be delivering the packages. You see, that house got a grape.

So how do we get through this? I’ll propose something, but it’s sort of horrible. Bring people to power based on this feeling. Let everyone indulge fully in their resentment. Kill the bourgeois. They got grapes, kill them all! Watch the situation not improve. Realize that this must be because there’s still counterrevolutionaries in the mix, still a few grapefuckers. Some billionaire is trying to hide his billions! Let the purge continue! And still, things are not improving. People are starving. The economy isn’t even tracked anymore. Things are bad. Millions are dead. The demoralization is complete.

Starvation and real poverty are more powerful emotions than resentment. It was bad when people were getting grapes, but now there aren’t even cucumbers anymore. In the face of true poverty for all, the resentment fades. Society begins to heal. People are grateful to have food; they are grateful for what they have. Expectations are back in line with market value. You have another way to fix this? Because this is what seems to happen in history, and it takes a generation. The demoralization is just beginning.
AMD is sending us the two MI300X boxes we asked for. They are in the mail. It took a bit, but AMD passed my cultural test. I now believe they aren’t going to shoot themselves in the foot on software, and if that’s true, there’s absolutely no reason they should be worth 1/16th of NVIDIA. CUDA isn’t really the moat people think it is; it was just an early ecosystem. tiny corp has a fully sovereign AMD stack, and soon we’ll port it to the MI300X. You won’t even have to use tinygrad proper; tinygrad has a torch frontend now (rough usage sketched below). Either NVIDIA is super overvalued or AMD is undervalued. If the petaflop gets commoditized (tiny corp’s mission), the current situation doesn’t make any sense. The hardware is similar; AMD even got the double-throughput Tensor Cores on RDNA4 (NVIDIA artificially halves this on their cards, soon they won’t be able to). I’m betting on AMD being undervalued, and that the demand for AI has barely started. With good software, the MI300X should outperform the H100. In for a quarter million. Long term. It can always dip short term, but check back in 5 years.
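For the curious, here’s a minimal sketch of what that looks like from the user side. The plain tinygrad lines use the public Tensor API; the torch-frontend lines are my assumption about the usage pattern (the import path and the "tiny" device string may differ, check the tinygrad repo), not a confirmed interface.

```python
# Plain tinygrad: the backend (AMD, NV, METAL, ...) is picked via Device.DEFAULT / env vars.
from tinygrad import Tensor

x = Tensor.rand(1024, 1024)
w = Tensor.rand(1024, 1024)
print((x @ w).relu().mean().item())

# Torch frontend (sketch, assumed API): point existing torch code at a tinygrad device.
# import torch
# import tinygrad.frontend.torch          # assumed import; registers a "tiny" device
# a = torch.randn(1024, 1024, device="tiny")
# b = torch.randn(1024, 1024, device="tiny")
# print((a @ b).relu().mean().item())
```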
This is a map of primary trading partners, US vs China, and how it has evolved over the last 20 years. Think about it, and realize this probably reflects your experience. I know there was a similar panic about Japan in the 80s, but Japan by population has always been 3x smaller than the US, whereas China is 3x larger. In addition, we had and have military bases in Japan. This is not the same situation.

The US, since I have been born, has been coasting. The main product made by the US is the dollar, and it used those manufactured dollars to outsource everything. Most jobs in the US are now basically fake. It’s basically an economy in which five people stick a pipe in the ground, except the pipe is the Fed and the oil is the goodwill built up over 1870-1970. In 2008, with the bailouts, it was made clear that the US has no interest in reform. The next decade, in perhaps a spit-in-your-face move, the Fed made the interest rate 0. Known as ZIRP, this had never been done before. This led to insane perversions.

When I got into business, I didn’t understand that business in America was mostly a total scam. Sure, you might look at a single business and think, oh, that sounds reasonable, but then you zoom out and look at the entire system, and it doesn’t really make sense. It’s scams feeding other scams. Wanna each start a business, pass dollars back and forth over and over again, and drive both our revenues super high? Sure, we don’t produce anything, but we have companies with high revenues and we can raise money based on those revenues. We’ll both be rich! Let’s do it with a bunch of extra steps so people don’t catch on though. They’ll only see it reflected in the lack of movement of real macro metrics.

You see, the US is a “developed” country, which means real growth is over? You do understand that guns and boats are made of steel, right? Oh, airplanes aren’t, they are made of aluminum. Oh…right, yeah, it’s not just steel, it is absolutely everything. The future is chips, you say? All the good chips are made in the Republic of China, you say? This 2021 article lays it out clearly, and it also explains why nothing I saw in Silicon Valley made any sense. I’m not going to go into the personal stories, but I just had an underlying assumption that the goal was growth and value production. It isn’t. It’s self-licking ice cream cone scams, and any growth or value is incidental to that. It isn’t until you understand this that people’s behavior starts to make sense.

America really is at a fork in the road. In one world, it abandons all hope of being an empire, becoming a regional power with highly protectionist economics. This has happened before, and it’s called Europe. I know it’s hard to believe now, but Europe used to be the seat of power for the whole world. The sun never set on the British empire. Now they put you in jail for memes. Protectionist America is a boring place and not somewhere I want to be. It kicks the can of poverty further down the road, basically embraces socialism, is stagnant, is stale, is a museum…etc; again, there’s a contemporary example of this. When I said on Lex that they were going to nationalize NVIDIA: look at the AI Diffusion Framework, and notice how Trump hasn’t repealed it. It allows export of GPUs to only 18 countries. Nationalization with American characteristics. It tells the other 177 countries that they should plan on purchasing their AI infrastructure from China.

The other path, which is the exciting path, is the attempt to maintain an empire.
An empire has to compete on its merits. There are two simple steps to restore American greatness:

1) Brain drain the world. Work visas for every person who can produce more than they consume. I’m talking doubling the US population, bringing in all the factory workers, farmers, miners, engineers, literally anyone who produces value. Can we raise the average IQ of America to be higher than China’s?

2) Back the dollar with gold (not socially constructed crypto), and bring major crackdowns to finance to tie it to real world value. Trading is not a job. Passive income is not a thing. Instead, go produce something real and exchange it for gold.

The first will bring the value of “American” labor in line with its global market value. It is a particularly unique advantage of the US over China: the US has a potentially much larger pool of talent. Unironically, diversity is our strength. Unfortunately, there’s a lot of resistance to American labor finding its market value.

The second will prevent a lot of the scams. The reason the banking industry is so big is that it is close to the source of the made-up dollars. If currency were gold backed, you could imagine something similar happening to the mining industry instead. However, the mining industry is real! It uses steel and aluminum to build physical things. And imagine when we start to mine space. That’s a way better reward function than scamming politicians out of fake dollars.

Unfortunately, I doubt either will happen. They very much both could, but people haven’t been demoralized enough yet.
More in programming
When OpenAI released GPT-4 back in March 2023, they kickstarted the AI revolution. The consensus online was that front-end development jobs would be totally eliminated within a year or two. Well, it’s been more than two years since then, and I thought it was worth revisiting some of those early predictions, and seeing if we can glean any insights about where things are headed.
After I put up a post about a Python gotcha, someone remarked that "there are very few interpreted languages in common usage," and that they "wish Python was more widely recognized as a compiled language." This got me thinking: what is the distinction between a compiled and an interpreted language? I was pretty sure that I do think Python is interpreted[1], but how would I draw that distinction cleanly?

On the surface level, it seems like the distinction between compiled and interpreted languages is obvious: compiled languages have a compiler, and interpreted languages have an interpreter. We typically call Java a compiled language and Python an interpreted language. But on the inside, Java has an interpreter and Python has a compiler. What's going on?

What's an interpreter? What's a compiler? A compiler takes code written in one programming language and turns it into a runnable thing. It's common for this to be machine code in an executable program, but it can also be bytecode for a VM or assembly language. On the other hand, an interpreter directly takes a program and runs it. It doesn't require any pre-compilation to do so, and can apply a variety of techniques to achieve this (even a compiler). That's where the distinction really lies: what you end up running. An interpreter runs your program, while a compiler produces something that can run later[2] (or right now, if it's in an interpreter).

Compiled or interpreted languages? A compiled language is one that uses a compiler, and an interpreted language uses an interpreter. Except... many languages[3] use both. Let's look at Java. It has a compiler, which you feed Java source code into, and you get out an artifact that you can't run directly. No, you have to feed that into the Java virtual machine, which then interprets the bytecode and runs it. So the entire Java stack seems to have both a compiler and an interpreter. But it's the usage, that you have to pre-compile it, that makes it a compiled language.

Python is similar[4]. It has an interpreter, which you feed Python source code into, and it runs the program. But on the inside, it has a compiler. That compiler takes the source code, turns it into Python bytecode, and then feeds that into the Python virtual machine. So, just like Java, it goes from code to bytecode (which is even written to the disk, usually) and from bytecode to a VM, which then runs it. And here again we see the usage: you don't pre-compile anything, you just run it. That's the difference. And that's why Python is an interpreted language with a compiler!

And... so what? Ultimately, why does it matter? If I can do cargo run and get my Rust program running the same as if I did python main.py, don't they feel the same? On the surface level, they do, and that's because it's a really nice interface, so we've adopted it for many interactions! But underneath it, you see the differences peeping out from the compiled or interpreted nature. When you run a Python program, it will run until it encounters an error, even if there's malformed syntax! As long as it doesn't need to load that malformed syntax, you're able to start running (a quick demonstration of both points follows below). But if you cargo run a Rust program, it won't run at all if it encounters an error in the compilation step! It has to run the entire compilation process before the program will start at all. The difference in approaches runs pretty deep into the feel of an entire toolchain. That's where it matters, because it is one of the fundamental choices that everything else is built around.
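Here's a quick, self-contained sketch of both of those claims using only the standard library: dis shows the bytecode CPython's internal compiler produced, and compile() shows that malformed syntax only blows up when something actually has to compile/load it.

```python
import dis

def add(a, b):
    return a + b

# CPython already compiled this function to bytecode at definition time;
# dis prints the instructions the Python VM will interpret.
dis.dis(add)

# Malformed syntax doesn't stop the program until something has to load it.
bad_source = "def broken(:\n    pass"
print("still running fine")
try:
    compile(bad_source, "<bad>", "exec")  # this is the step that fails
except SyntaxError as err:
    print("only now do we see it:", err)
```

The .pyc files under __pycache__ are this same bytecode written to disk, which is the "even written to the disk, usually" from above.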
The words here are ultimately arbitrary. But they tell us a lot about the language and tools we're using.

* * *

Thank you to Adam for feedback on a draft of this post.

[1] It is worth occasionally challenging your own beliefs and assumptions! It's how you grow, and how you figure out when you are actually wrong.

[2] This feels like it rhymes with async functions in Python. Invoking a regular function runs it immediately, while invoking an async function creates something which can run later.

[3] And it doesn't even apply at the language level, because you could write an interpreter for C++ or a compiler for Hurl, not that you'd want to, but we're going to gloss over that distinction here and just keep calling them "compiled/interpreted languages." It's how we talk about it already, and it's not that confusing.

[4] Here, I'm talking about the standard CPython implementation. Others will differ in their details.
What can't be solved with money are the most valuable things.