More from the singularity is nearer
You know about Critical Race Theory, right? It says that if there's an imbalance in, say, income between races, it must be due to discrimination. This is what wokism seems to be, and it's moronic and false. The right wing has invented something equally stupid. Introducing Critical Trade Theory, stolen from this tweet. If there's an imbalance in trade between countries, it must be due to unfair practices (not due to the obvious, like one country being 10x richer than the other).

There's really only one way the trade deficits will go away, and that's if trade goes to zero (or maybe if all these countries become richer than America). Same thing with the race deficits: no amount of "leg up" bullshit will change them.

Why are all the politicians in America anti-growth, anti-reality idiots who want to drive us into the poorhouse? The way this tariff shit is being done is another stupid form of anti-merit benefits to chosen groups of people, with a whole lot of grift to go along with it. Makes me just not want to play.
Intel is sitting on a huge amount of card inventory they can't move, largely because of bad software. Most of this is a summary of the public #intel-hardware channel in the tinygrad discord.

Intel is currently sitting on:

- 15,000 Gaudi 2 cards (with baseboards)
- 5,100 Intel Data Center GPU Max 1450s (without baseboards)

If you were Intel, what would you do with them?

First, the Gaudi cards. The open source repo needed to control them was archived on Feb 4, 2025. There's a closed source version of this that's maybe still maintained, but eww, closed source, and do you think it's really maintained? The architecture is kind of tragic, and that's likely why they didn't open source it. Unlike every other accelerator I have seen, the MMEs, which is where all the FLOPS are, are not controllable by the TPCs. While the TPCs have an LLVM port, the MME is not documented. After some poking around, I found the spec: it's highly fixed function and looks very similar to the Apple ANE.

But that's not even the real problem with it. The problem is that it is controlled by queues, not by the TPCs. Unpacking habanalabs-dkms-1.19.2-32.all.deb you can find the queues. There is some way to push a command stream to the device so you don't actually have to deal with the host itself for the queues. But that doesn't prevent you from having to decompose the network you are trying to run into something you can put on this fixed function block. Programmability is on a spectrum, ranging from CPUs being the easiest, to GPUs, to things like the Qualcomm DSP / Google TPU (where at least you drive the MME from the program), to this and the Apple ANE being the hardest. While it's impressive that they actually got on MLPerf Training v4.0 training GPT-3, I suspect it's all hand coded, and if you deviate off the trodden path at all you'll get almost no perf. Accelerators like this are okay for low power inference where you can adjust the model architecture for the target; Apple does a great job of this. But this will never be acceptable for a training chip.

Then there's the Data Center GPU Max 1450. Intel actually sent us a few of these. You quickly run into a problem…how do you plug them in? They need OAM sockets, 48V power, and a cooling solution that can sink 600W. As far as I can tell, they were only ever deployed in two systems, the Aurora Supercomputer and the Dell XE9640. It's hard to know, but I really doubt many of these Dell systems were sold.

Intel then sent us this carrier board. In some ways it's helpful, but in other ways it's not at all. It still doesn't solve cooling or power, and you need to buy 16x MCIO cables (cheap in quantity, but expensive and hard to find off the shelf). Also, I never got a straight answer, but I really doubt Intel has many of these boards, and that board doesn't look cheap to manufacture more of. The connectors alone, which you need two of per GPU, cost $26 each. That's $104 for just the OAM connectors.

tiny corp was in discussions to buy these GPUs. How much would you pay for one of these on a PCIe card? The specs look great: 839 TFLOPS, 128 GB of RAM, 3.3 TB/s of bandwidth. However…read this article. Even in simple synthetic benchmarks, the chip doesn't get anywhere near its max performance, and it looks to be for fundamental reasons like memory latency. We estimate we could sell PCIe versions of these GPUs for $1,000; I don't think most people know how hard it is to move non-NVIDIA hardware. Before you say you'd pay more, ask yourself, do you really want to deal with the software?
An adapter card has four pieces: a PCB for the card, a 12V-to-48V voltage converter, a heatsink, and a fan. My quote from the guy who makes an OAM adapter board was $310 per PCB in 10+ quantity and $75 for the voltage converter. A heatsink that can handle 600W (heat pipes + vapor chamber) is going to cost $100, then maybe $20 more for the fan. That's $505, and you still need to assemble and test them, oh and now there's tariffs. Maybe you can get this down to $400 in ~1000 quantity.

So $200 for the GPU, $400 for the adapter, $100 for shipping/fulfillment/returns (more if you use Amazon), and 30% profit if you sell at $1k. tiny would net $1M on this, which has to cover NRE, and you carry the risk of unsold inventory. We offered Intel $200 per GPU (a $680k wire) and they said no. They wanted $600. I suspect that unless a supercomputer person who already uses these GPUs wants to buy more, they will ride it to zero.

tl;dr: there are 5,100 of these GPUs with no simple way to plug them in. It's unclear if they're worth the cost of the slot they go in. I bet they end up shredded, or maybe dumped on eBay for $50 each in a year like the Xeon Phi cards. If you buy one, good luck plugging it in!

The reason Meta and friends buy some AMD is as a hedge against NVIDIA. Even if it's not usable, AMD has progressed on a solid, steady roadmap, with a clear continuation from the 2018 MI50 (which you can now buy for 99% off) to the MI325X, which is a super exciting chip (AMD is king of chiplets). They are even showing signs of finally investing in software, which makes me bullish. If NVIDIA stumbles for a generation, this is AMD's game. The ROCm "copy each NVIDIA repo" strategy actually works if your competition stumbles. They can win GPUs with slow and steady improvement plus competition stumbling; that's how AMD won server CPUs.

With these Intel chips, I'm not sure who they would appeal to. Ponte Vecchio is cancelled. There's no point in investing in the platform if there's not going to be a next generation, and therefore nobody can justify the cost of developing software, therefore there won't be software, therefore they aren't worth plugging in.

Where does this leave Intel's AI roadmap? The successor to Ponte Vecchio was Rialto Bridge, but that was cancelled. The successor to that was Falcon Shores, but that was also cancelled. Intel claims the next GPU will be "Jaguar Shores", but fool me once… To quote JazzLord1234 from Reddit: "No point even bothering to listen to their roadmaps anymore. They have squandered all their credibility."

Gaudi 3 is a flop due to "unbaked software", but as much as I usually do blame software, nothing has changed from Gaudi 2 and it's just a really hard chip to program for. So there's no future there either. I can't say that "Jaguar Shores" square instills confidence. It didn't inspire confidence for "Joseph B." on LinkedIn either.

From my interactions with Intel people, it seems there are no individuals with power there; it's all committee-like leadership. The problem with this is there's nobody who can say yes, just many people who can say no. Hence all the cancellations and the nonsense strategy.

AMD's dysfunction is different. From the beginning they had leadership that could do things (Lisa Su replied to my first e-mail); they just didn't see the value in investing in software until recently. They sort of had a point if they were only targeting hyperscalers, but it seems like SemiAnalysis got through to them that hyperscalers aren't going to deal with bad software either.
It remains to be seen if they can shift culture to actually deliver good software, but there's movement in that direction, and if they succeed, AMD is so undervalued. Their hardware is good.

With Intel, until that committee-style leadership is gone, there's 0 chance for success. Committee leadership is fine if you are trying to maintain, but Intel's AI situation is even more hopeless than AMD's, and you'd need something major to turn it around. At least with AMD, you can try installing ROCm and be frustrated when there are bugs. Every time I have tried Intel's software, I can't even recall getting the import to work, and the card wasn't powerful enough that I cared. Intel needs actual leadership to turn this around, or there's 0 future in Intel AI.
If you give some monkeys a slice of cucumber each, they are all pretty happy. Then you give one monkey a grape, and nobody is happy with their cucumber any more. They might even throw the slices back at the experimenter. He got a goddamned grape, this is bullshit, I don't want a cucumber anymore! Nobody was worse off in absolute terms, but that doesn't prevent the monkeys from being upset.

And this isn't unique to monkeys. I see the same behavior on display when I hear about billionaires. It's not about what I have, they got a grape. The tweet is here.

What do you do about this? Of course, you can fire this woman, but what percent of people in American society feel the same way? How much of this can you tolerate and still have a functioning society? What's particularly absurd about the critique in the video is that it hasn't been thought through very far. If that house and its friends stopped "ordering shit", the company would stop making money and she wouldn't have that job. There's nothing preventing her from quitting today and getting the same outcome for herself. But of course, that isn't what it's about, because then somebody else would be delivering the packages. You see, that house got a grape.

So how do we get through this? I'll propose something, but it's sort of horrible. Bring people to power based on this feeling. Let everyone indulge fully in their resentment. Kill the bourgeois. They got grapes, kill them all! Watch the situation not improve. Realize that this must be because there's still counterrevolutionaries in the mix, still a few grapefuckers. Some billionaire is trying to hide his billions! Let the purge continue! And still, things are not improving. People are starving. The economy isn't even tracked anymore. Things are bad. Millions are dead. The demoralization is complete.

Starvation and real poverty are more powerful emotions than resentment. It was bad when people were getting grapes, but now there aren't even cucumbers anymore. In the face of true poverty for all, the resentment fades. Society begins to heal. People are grateful to have food, they are grateful for what they have. Expectations are back in line with market value.

You have another way to fix this? 'Cause this is what seems to happen in history, and it takes a generation. The demoralization is just beginning.
AMD is sending us the two MI300X boxes we asked for. They are in the mail. It took a bit, but AMD passed my cultural test. I now believe they aren’t going to shoot themselves in the foot on software, and if that’s true, there’s absolutely no reason they should be worth 1/16th of NVIDIA. CUDA isn’t really the moat people think it is, it was just an early ecosystem. tiny corp has a fully sovereign AMD stack, and soon we’ll port it to the MI300X. You won’t even have to use tinygrad proper, tinygrad has a torch frontend now. Either NVIDIA is super overvalued or AMD is undervalued. If the petaflop gets commoditized (tiny corp’s mission), the current situation doesn’t make any sense. The hardware is similar, AMD even got the double throughput Tensor Cores on RDNA4 (NVIDIA artificially halves this on their cards, soon they won’t be able to). I’m betting on AMD being undervalued, and that the demand for AI has barely started. With good software, the MI300X should outperform the H100. In for a quarter million. Long term. It can always dip short term, but check back in 5 years.
More in programming
This article was originally commissioned by Luca Rossi (paywalled) for refactoring.fm, on February 11th, 2025. Luca edited a version of it that emphasized the importance of building "10x engineering teams". It was later picked up by IEEE Spectrum (!!!), who scrapped most of the teams content and published a different, shorter piece on March […]
The Go team wrote the golang.org/x/sys/windows package to call functions in a Windows DLL. Their way is inefficient, and this article describes a better way.

The sys/windows way

To call a function in a DLL, let's say kernel32.dll, we must:

- load the DLL into memory with LoadLibrary
- get the address of a function in the DLL
- call the function at that address

Here's how it looks when you use the sys/windows library:

```go
var (
	libole32         *windows.LazyDLL
	coCreateInstance *windows.LazyProc
)

func init() {
	libole32 = windows.NewLazySystemDLL("ole32.dll")
	coCreateInstance = libole32.NewProc("CoCreateInstance")
}

func CoCreateInstance(rclsid *GUID, pUnkOuter *IUnknown, dwClsContext uint32, riid *GUID, ppv *unsafe.Pointer) HRESULT {
	ret, _, _ := syscall.SyscallN(coCreateInstance.Addr(),
		uintptr(unsafe.Pointer(rclsid)),
		uintptr(unsafe.Pointer(pUnkOuter)),
		uintptr(dwClsContext),
		uintptr(unsafe.Pointer(riid)),
		uintptr(unsafe.Pointer(ppv)),
	)
	return HRESULT(ret)
}
```

The problem

The problem is that this is memory inefficient. For every function, all we need is:

- the name of the function, to get its address in the DLL. That's a string, so it's 8 bytes (pointer to the string) + 8 bytes (size of the string) + the content of the string
- the address of the function, which is 8 bytes on a 64-bit CPU

Unfortunately, in sys/windows each function requires this:

```go
type LazyProc struct {
	Name string
	mu   sync.Mutex
	l    *LazyDLL
	proc *Proc
}

type Proc struct {
	Dll  *DLL
	Name string
	addr uintptr
}

// sync.Mutex
type Mutex struct {
	_  noCopy
	mu isync.Mutex
}

// isync.Mutex
type Mutex struct {
	state int32
	sema  uint32
}
```

Let's eyeball the size of all those structures:

- LazyProc: 16 + sizeof(Mutex) + 8 + 8 = 32 + sizeof(Mutex)
- Proc: 8 + 16 + 8 = 32
- Mutex: 8
- Total: 32 + 32 + 8 = 72

and that's not counting possible memory padding for allocations.

Windows has a lot of functions, so this adds up. Additionally, at startup we call NewProc for every function, even if it is never used by the program. This increases startup time.

The better way

What we ultimately need is a uintptr for the address of the function. It'll be lazily looked up. Let's say we use 8 functions from ole32.dll. We can use a single array of uintptr values for storing the function pointers:

```go
var oleFuncPtrs [8]uintptr
var oleFuncNames = []string{"CoCreateInstance", "CoGetClassObject", ... }

const kCoCreateInstance = 0
const kCoGetClassObject = 1
// etc.
const kFuncMissing = 1 // sentinel value stored in oleFuncPtrs when a lookup already failed

func funcAddrInDLL(dll *windows.LazyDLL, funcPtrs []uintptr, funcIdx int, funcNames []string) uintptr {
	addr := funcPtrs[funcIdx]
	if addr == kFuncMissing {
		// we already tried to look it up and didn't find it
		// this can happen because an older version of Windows might not implement this function
		return 0
	}
	if addr != 0 {
		return addr
	}
	// look up the function by name in the dll
	name := funcNames[funcIdx]
	// ...
	return addr
}
```

In real life this would need multi-threading protection with e.g. a mutex (a sketch of that follows below).

Saving on strings

The following is not efficient:

```go
var oleFuncNames = []string{"CoCreateInstance", "CoGetClassObject", ... }
```

In addition to the text of each string, Go needs 16 bytes: 8 for a pointer to the string and 8 for its size. We can be more efficient by storing all the names as a single string:

```go
var oleFuncNames = `
CoCreateInstance
CoGetClassObject
`
```

Only when we look up a function by name do we need to construct a temporary string that is a slice of oleFuncNames.
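The article leaves the locking as an exercise ("in real life this would need multi-threading protection"), so here is a minimal sketch of one way to do it: a plain sync.Mutex guarding the pointer array. It keeps the simple []string name table from the first version for clarity; the names lookupOleFunc and oleFuncsMu are invented for this illustration, and the GetProcAddress call assumes golang.org/x/sys/windows.

```go
package ole

import (
	"sync"

	"golang.org/x/sys/windows"
)

var (
	libole32 = windows.NewLazySystemDLL("ole32.dll")

	oleFuncsMu   sync.Mutex // guards oleFuncPtrs
	oleFuncPtrs  [8]uintptr
	oleFuncNames = []string{"CoCreateInstance", "CoGetClassObject"} // etc.
)

const kFuncMissing = 1 // sentinel address: we already looked and the function isn't there

// lookupOleFunc returns the address of the funcIdx-th ole32 function,
// resolving and caching it on first use. Safe for concurrent callers.
func lookupOleFunc(funcIdx int) uintptr {
	oleFuncsMu.Lock()
	defer oleFuncsMu.Unlock()

	addr := oleFuncPtrs[funcIdx]
	if addr == kFuncMissing {
		return 0 // a previous lookup failed (e.g. older Windows)
	}
	if addr != 0 {
		return addr // already resolved
	}

	// First use: resolve by name and cache the result.
	// LazyDLL.Handle() loads the DLL on first call.
	name := oleFuncNames[funcIdx]
	addr, err := windows.GetProcAddress(windows.Handle(libole32.Handle()), name)
	if err != nil || addr == 0 {
		oleFuncPtrs[funcIdx] = kFuncMissing
		return 0
	}
	oleFuncPtrs[funcIdx] = addr
	return addr
}
```

Atomic loads or a sync.Once per function could avoid taking the lock on the already-resolved fast path, but a single mutex is the simplest thing that matches what the article describes.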
To slice oleFuncNames we need to know each name's offset and size inside it, which we can cleverly encode as a single number:

```go
// Auto-generated shell procedure identifier: cache index | str start | str past-end.
const (
	_PROC_SHCreateItemFromIDList      _PROC_SHELL = 0 | (9 << 16) | (31 << 32)
	_PROC_SHCreateItemFromParsingName _PROC_SHELL = 1 | (32 << 16) | (59 << 32)
	// ...
)
```

We pack the info into a single number:

- bits 0-15: index of the function in the array of function pointers
- bits 16-31: start of the function name in the multi-name string
- bits 32-47: end of the function name in the multi-name string

This technique requires code generation; it would be too difficult to write those numbers manually.

References

This technique is used in the https://github.com/rodrigocfd/windigo Win32 bindings Go library. See e.g. https://github.com/rodrigocfd/windigo/blob/master/internal/dll/dll_gdi.go
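The article shows the encoding but not the decoding side, so here is a self-contained sketch of how such a packed identifier could be unpacked and used to slice a combined names string. The procID type, the constant names, and the offsets below are made up for this illustration (windigo generates its own); only the bit layout matches the description above.

```go
package main

import "fmt"

// procID packs three fields, matching the layout described above:
// bits 0-15 cache index, bits 16-31 name start, bits 32-47 name end (past-end).
// This is a hypothetical type for illustration, not windigo's actual code.
type procID uint64

// All function names stored as one string; the offsets below point into it.
const shellFuncNames = "\nSHCreateItemFromIDList\nSHCreateItemFromParsingName\n"

const (
	procSHCreateItemFromIDList      procID = 0 | (1 << 16) | (23 << 32)
	procSHCreateItemFromParsingName procID = 1 | (24 << 16) | (51 << 32)
)

// decode splits a packed identifier back into its three fields.
func (id procID) decode() (idx, start, end int) {
	return int(id & 0xFFFF), int((id >> 16) & 0xFFFF), int((id >> 32) & 0xFFFF)
}

// name returns the function name as a slice of shellFuncNames; the returned
// string shares the backing bytes, no copy of the character data is made.
func (id procID) name() string {
	_, start, end := id.decode()
	return shellFuncNames[start:end]
}

func main() {
	idx, _, _ := procSHCreateItemFromIDList.decode()
	fmt.Println(idx, procSHCreateItemFromIDList.name()) // 0 SHCreateItemFromIDList
	fmt.Println(procSHCreateItemFromParsingName.name()) // SHCreateItemFromParsingName
}
```

In the real lookup path, the decoded index would select a slot in the function-pointer array and name() would supply the string passed to GetProcAddress on the first call.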
How a wild side-quest became the source of many of the articles you’ve read—and have come to expect—in this publication
Watch now | Privilege levels, syscall conventions, and how assembly code talks to the Linux kernel
Learn how disposable objects solve test cleanup problems in flat testing. Use TypeScript's using keyword to ensure reliable resource disposal in tests.