More from Jorge Arango
Do you ever catch yourself avoiding things you need to do? Sure you do: we all do it. In episode 9 of Traction Heroes, Harry and I discuss what to do about it.

The conversation took off when Harry read a fragment from Oliver Burkeman’s book, Meditations for Mortals. I won’t cite the entire passage here, but this gives you a taste:

> It can be alarming to realize just how much of life gets shaped by what we’re actively trying to avoid. We talk about not getting around to things as if it were merely a failure of organization or will. But often the truth is that we invest plenty of energy in making sure that we never get around to them. …
>
> The more you organize your life around not addressing things that make you anxious, the more likely they are to develop into serious problems. And even if they don’t, the longer you fail to confront them, the more unhappy time you spend being scared of what might be lurking in the places you don’t want to go.

The irony, of course, is that we put off uncomfortable tasks because they make us anxious, but putting them off ultimately makes us more anxious. As Harry reminded us, “bad news doesn’t get better over time.” He also proposed a helpful framing: facts are friendly. That is, even though knowing the truth might make us uncomfortable, knowing is better than not knowing.

We discussed practical steps to gain traction:

- Ask yourself, what am I pretending not to know? Deep down, you know there’s more to the situation than you’ve let on; acknowledge the elephant in the room so you can move forward.
- Plan around the last responsible moment. Some events have fixed time windows; understand by when you must decide.
- Rewrite the narrative using the nonviolent communication lens: separate your observations from interpretations, feelings, and needs.

As always, I got lots of value from this conversation with Harry. But this one isn’t something you can merely think about; it’s about doing. And doing is hard when the mind doesn’t want to face facts.

Traction Heroes episode 9: Procrastination
In week 18 of the humanities crash course, I read five stories from One Thousand and One Nights, a collection of Middle Eastern folktales that has influenced lots of other stories. Keeping with the theme, I also saw one of the most influential movies based on these stories.

Readings

One Thousand and One Nights is an influential collection of Middle Eastern folk tales compiled during the Islamic Golden Age. The framing device is brutally misogynistic: a sultan learns that his wife is unfaithful, so he executes her. He decides all women are the same, so he marries a new bride every day and has her executed the following day. Sheherazade asks her father, the vizier, to offer her in marriage to the sultan. The vizier is reluctant: they both know the wives’ fate. But Sheherazade has a clever plan: she starts a new story for the sultan every night but leaves it on a cliffhanger. Curious about the outcome, the sultan stays her execution until the next day. In this way, Sheherazade spares the lives of the other maidens of the land.

Of the many stories in the book, I read five recommended by Gioia:

- The Fisherman and the Jinni: a poor fisherman unwittingly unleashes a murderous jinni from a bottle, but tricks him back into the bottle by outwitting him.
- The Three Apples: an ancient murder mystery (again, centered on the murder of an innocent woman); the “solution” involves more unjust death (at least by our standards).
- Sinbad the Sailor: a series of seven fantastical voyages involving monsters, magic, and stolen treasures; one of the voyages closely parallels the Cyclops episode from the Odyssey.
- Ali Baba and the Forty Thieves: another story of murder and ill-gotten treasure; a poor man discovers where a band of thieves stashes their loot and steals from them.
- Aladdin: a poor boy discovers a magic lamp that makes him wealthy and powerful, allowing him to marry a princess.

These have been re-told in numerous guises. As often happens in these cases, the originals are much darker and bloodier than their spawn. These aren’t the Disney versions, for sure.

Audiovisual

Music: highlights from Tchaikovsky’s famous ballets plus Rimsky-Korsakov’s Sheherazade. I’d heard the ballets, but not the Rimsky-Korsakov. The piece reminded me of Paul Smith’s music for Disney’s 20,000 LEAGUES UNDER THE SEA (1954).

Arts: Gioia recommended Aboriginal Australian art. I’d seen works in this style but hadn’t paid attention. This tradition has a particular (and gorgeous) style that expresses strong connections to the land. I was surprised to learn about recent developments in this tradition.

Cinema: Alexander Korda’s THE THIEF OF BAGDAD (1940), one of the many films inspired by the One Thousand and One Nights. While it now looks dated, this film was a special effects breakthrough. As an early example of Technicolor, it also features an over-the-top palette, much like its near-contemporary, THE WIZARD OF OZ.

Reflections

One can’t do justice to One Thousand and One Nights by reading only five stories. But the ones I read dealt with poor people being unfairly granted wealth and power. Escapist fantasies tend to stand the test of time.

The “heroes” in the stories deserved as much comeuppance as the “villains.” For example, in Ali Baba and the Forty Thieves, one of the heroes commits a mass killing of the “bad guys” while they’re unable to react. Not only does this go unpunished; it’s celebrated. The people who told these stories had moral standards different from our own.
I also learned that several stories — including some of the most famous, such as Ali Baba and the Forty Thieves and Aladdin — were not part of the original collection. Instead, they were added by a French translator in the 18th century. This was frustrating, as they weren’t present in the collection I bought; I had to seek them out separately.

So, this week, I’ve been pondering questions of authorship and derivation. We don’t know who originated these stories. Like Aboriginal Australian art, the stories in the One Thousand and One Nights emerged from — and belong to — a people more than an individual author or artist. And yet, they’ve inspired other works, such as THE THIEF OF BAGDAD — which in turn inspired Disney’s ALADDIN. (The latter “borrows” liberally from the former.)

Is it any wonder I heard Rimsky-Korsakov in the 20,000 LEAGUES score? At this point, I assume at least some cross-pollination — after all, Rimsky-Korsakov himself was inspired by the One Thousand and One Nights. This is how art has always evolved: artists build on what’s come before. In some cases, the inspiration is obvious. In others, it’s more nebulous. Did Odysseus inspire Sinbad? Or did they both retell older stories?

The process changed in the 20th century. With strong copyright laws, stories became intellectual property. Disney may build on the One Thousand and One Nights stories, but we can’t build on Disney’s stories. And it’s changing again with large language models. It will be interesting to see how these new tools allow us to retell old stories in new ways. At a minimum, they’re causing us to reevaluate our approach to IP.

Notes on Note-taking

A realization: my Obsidian knowledge repository is better suited to reflecting on text than on other media. I can try to write down my impressions of the beautiful Aboriginal art and Rimsky-Korsakov’s music. But words articulate concepts, not feelings — even when trying to articulate feelings. So I end up reflecting on abstract ideas such as authorship and derivation rather than on the nature of the works themselves. It’s a limitation of my current note-taking system, and one I can’t do much about. Perhaps ChatGPT can help by letting me riff on pictures and sounds? But there, too, communication happens through language.

Up Next

Gioia recommends the Bhagavad Gita, the Rule of St. Benedict, and the first two books of Saint Augustine’s Confessions. This will be my first time with any of them.

Again, there’s a YouTube playlist for the videos I’m sharing here. I’m also sharing these posts via Substack if you’d like to subscribe and comment. See you next week!
The dream is running GraphRAG with locally-hosted LLMs. And at least for now, the dream is on hold for me.

In case you missed it, GraphRAG is a way of getting more useful results from LLMs by working with data you provide (in addition to whatever they’ve trained on). The system uses LLMs to build a knowledge graph from documents you provide and then uses those graphs to power RAG queries.

This opens lots of possibilities. For information architecture work, it lets you ask useful questions of your own content. I’ve written about my experiments in that scenario. In that case, I used OpenAI’s models to power Microsoft’s GraphRAG application. But I’m especially excited about the possibilities for personal knowledge management. Imagine an LLM tuned to and focused on your personal notes, journals, calendars, etc. That’s primarily why I’m dreaming of GraphRAG powered by local models.

There are several reasons why local models would be preferable. For one, there’s the cost: GraphRAG indexing runs are expensive. There’s also a privacy angle: yes, I’ve told OpenAI I don’t want them to train their models using my data, but some of this stuff is extremely personal, and I’m not comfortable with it leaving my computer at all. But an even larger concern is dependency. I’m building a lifelong thinking assistant. (An amanuensis, as I outlined in Duly Noted.) It’s risky to delegate such a central part of this system to a party that could turn off the spigot at any time.

So I’ve been experimenting with GraphRAG using local models. There’s good news and bad news. Before I tell you about them, let me explain my setup.

I’m using a 16” 2023 M2 Max MacBook Pro with 32GB of RAM. It’s not an entry-level machine, but not a monster either. I’m using ollama to run local models. I’ve tried around half a dozen at this point and have successfully set up one automated (non-GraphRAG) workflow using mistral-small3.1.

GraphRAG is extremely flexible. There are dozens of parameters to configure, including different LLMs for each step in the process. Off the shelf, its prompts are optimized specifically for GPT-4-turbo; other models require tweaking. Indexing runs (where the model converts texts to knowledge graphs) can take a long time, so tweaks are time-consuming. I’ve had a go at it several times, but given up after a bit. I don’t have much free time these days, and most experiments have ended with failed (and long!) indexing runs.

But a few things have changed in recent weeks:

- GraphRAG itself keeps evolving
- There are now more powerful small local models that run better within my machine’s limitations
- ChatGPT o3 came out

That last one may sound like a non sequitur. Aren’t I trying to get away from cloud-hosted models for this use case? Well, yes — but in this case, I’m not using o3 to power GraphRAG. Instead, I’m using it to help me debug failed runs. While certainly nothing like AGI, as some have claimed, o3 has proven to be excellent for dealing with the sort of tech-related issues that would’ve sent me off to Stack Overflow in the past. Debugging GraphRAG runs is one such task. I’ve been feeding o3 logfiles after each run, and it’s recommended helpful tweaks. It’s been the single most important factor in my recent progress.

Yes, there’s been some progress: yesterday, after many tries, I finally got two local models to successfully complete an indexing run. Mind you, that doesn’t mean I can successfully query GraphRAG yet. But finishing the indexing run without issues is progress. That’s the good news.
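In case it helps anyone attempting the same thing, here’s the rough shape of this setup as a minimal sketch. It’s not my exact configuration: the GraphRAG CLI and settings format vary by version, the embedding model is an assumption (GraphRAG needs one alongside the chat model), and the paths and model tags are placeholders.

```sh
# Sketch: GraphRAG indexing against local models served by ollama.
# Assumes ollama is running and exposing its OpenAI-compatible API
# at http://localhost:11434/v1 (the default port).
ollama pull mistral-small3.1   # chat model mentioned above
ollama pull nomic-embed-text   # assumption: an embedding model is also needed

pip install graphrag
graphrag init --root ./notes   # scaffolds settings.yaml and .env

# Edit settings.yaml so both the chat and embedding model entries point
# at ollama (api_base: http://localhost:11434/v1, model names as above),
# then kick off the long-running indexing step:
graphrag index --root ./notes
```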
Alas, the indexing run took around thirty-six hours to process nineteen relatively short Markdown files. To put that in perspective, the same indexing run using cloud-hosted models would likely have taken under ten minutes. My machine also ran at full throttle the whole time. (It’s the first time I’ve felt an M-series Mac get hot.)

The reduced processing speed isn’t just because the models themselves are slower: it’s also due to my machine’s limitations. After analyzing the log files, ChatGPT suggested reducing the number of concurrent API calls. The successful run specified just one call at a time for both models.

The upshot is that even though the indexing run finished successfully, this process is impractical for real-world use. My PKM has thousands of Markdown files. ChatGPT keeps suggesting further tweaks, but progress is frustratingly slow when cycles are measured in days. I’ve considered upgrading to a MacBook Pro with more RAM or increasing the number of concurrent processes to find the upper threshold for my machine. But based on these results, I suspect improvements will be marginal given the amount of data I’m looking to process.

So that’s the bad news. For now, I’ll keep working with local models for other uses (such as OCRing handwritten notes — the workflow I alluded to above; more on that soon!) And of course, I’ll continue experimenting with cloud-based models for other use cases. In any case, I’ll share what I learn here.
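For anyone hitting the same wall: the throttling lives in GraphRAG’s settings.yaml. Here’s a sketch of the relevant fragment; exact key names and nesting vary across GraphRAG versions, so treat this as illustrative rather than definitive.

```yaml
# settings.yaml (fragment): limit GraphRAG to one in-flight request per
# model so a memory-constrained machine isn't overwhelmed.
llm:
  concurrent_requests: 1    # chat model: one call at a time
embeddings:
  llm:
    concurrent_requests: 1  # embedding model: likewise
```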
In week 17 of the humanities crash course, I read a book that was completely new to me: Apuleius’s Metamorphoses, better known as The Golden Ass. I also watched a movie with a similar story (but with different aims).

Readings

The Golden Ass was written by Apuleius around the second century CE. The only complete Latin novel to survive, it tells the story of Lucius, a man whose reckless curiositas leads to his being accidentally transformed into an ass. (What is curiositas, you ask? Read on…) As a donkey, Lucius passes from owner to owner, which exposes him to dangers, adventure, and gossip. Characters tell several sub-stories, mostly about crime, infidelity, and magic. The most famous is the story of Cupid and Psyche, a cautionary allegory that echoes the themes and structures of the novel as a whole.

Throughout his wanderings, Lucius is treated brutally. At one point a woman falls in love with him and treats him as a sex object. Eventually, the goddess Isis brings him back to human form after an initiation into her cult. He becomes an acolyte, making the story a metaphor for religious conversion. The final section of the book, where Lucius undergoes his spiritual transformation, is one of several surprising tone shifts: the book is by turns drama, horror, fairy tale, and bawdy farce. Overall, it gives an entertaining picture of moral codes in second-century Europe.

Audiovisual

Music: Scott Joplin — again, a composer whose work was familiar to me. Rather than the usual piano solo versions, I listened to a recording of his works featuring André Previn on piano and Itzhak Perlman on violin.

Arts: van Gogh, who, like Joplin, is overly familiar. This lecture from The National Gallery helped put his work in context: I hadn’t realized the degree to which van Gogh’s paintings are the result of a tech innovation — synthetic pigments in the newly invented roll-up tubes. As always, understanding context is essential.

Cinema: Jerzy Skolimowski’s EO, a road picture that follows a donkey as he drifts through the Polish and Italian countrysides. Like Lucius, he’s exposed to humanity’s moral failings (and a tiny bit of tenderness). While visually and aurally stunning, I found the movie overbearingly preachy.

Reflections

As usual, I entered my reflections on the book into ChatGPT to ask what I might have missed or gotten wrong. My notes said Lucius’s curiosity about witchcraft led him to be transformed into an ass. ChatGPT corrected me: it wasn’t curiosity but curiositas. I asked for clarification, since the two terms are so similar.

As I now understand it, curiositas refers to “an immoderate appetite for forbidden or frivolous knowledge that distracts from real duties” — i.e., wasting time on B.S. of the sort one finds in tabloids, or chasing after forbidden knowledge. ChatGPT suggested as contemporary equivalents clickbait and doomscrolling, gossip culture (think the Kardashians), and “risk-blind experimentation” — i.e., the “move fast and break things” ethos — as the LLM put it, a “reckless desire to test the limits without counting the costs.”

In other words, Lucius wasn’t punished (and ultimately disciplined) because he was curious. Instead, he “messed around and found out” — literally making an ass out of himself. For the ancients, the healthy opposite was studiositas, a “disciplined study in service of truth.” We’ll spend time with Thomas Aquinas later in the course; ChatGPT suggests he makes much of this distinction.

Notes on Note-taking

Last week, I said I’d return to ChatGPT 4o for its responsiveness.
I haven’t; o3’s results are enough better that the slightly longer wait is worth it. That said, I remain disappointed with o3’s preference for tables. One good sign: at one point, ChatGPT presented me with a brief A/B test where it asked me to pick between a table-based result and one with more traditional prose. Of course, I picked the latter. I hope they do away with the tables, or at least make them much less frequent.

Up Next

Gioia recommends selected readings from The Arabian Nights. While I’ve never read the original, several of these stories (Aladdin, Sinbad) are familiar through reinterpretations. I’m looking forward to reading the originals.

Again, there’s a YouTube playlist for the videos I’m sharing here. I’m also sharing these posts via Substack if you’d like to subscribe and comment. See you next week!
There’s a lot of turbulence in the world. What is the source of the turbulence? And how can we navigate skillfully? These questions were on my mind as I met with Harry to record episode 8 of the Traction Heroes podcast.

My (at least partial) answer to the first question is that there’s a general lack of systems literacy in the world. Most people aren’t aware of the high degree of complexity that characterizes highly intertwingled systems such as modern economies. As a result, they opt for simplistic interventions that often do more harm than good. At least that was my hypothesis. I was keen to hear Harry’s thoughts — and he didn’t disappoint.

My prompt was the following passage from Donella Meadows’s classic Thinking in Systems: A Primer (emphasis in the original):

> Ever since the Industrial Revolution, Western society has benefited from science, logic, and reductionism over intuition and holism. Psychologically and politically we would much rather assume that the cause of a problem is “out there,” rather than “in here.” It’s almost irresistible to blame something or someone else, to shift responsibility away from ourselves, and to look for the control knob, the product, the pill, the technical fix that will make a problem go away.
>
> Serious problems have been solved by focusing on external agents—preventing smallpox, increasing food production, moving large weights and many people rapidly over long distances. Because they are embedded in larger systems, however, some of our “solutions” have created further problems. And some problems, those most rooted in the internal structure of complex systems, the real messes, have refused to go away.
>
> Hunger, poverty, environmental degradation, economic instability, unemployment, chronic disease, drug addiction, and war, for example, persist in spite of the analytical ability and technical brilliance that have been directed toward eradicating them. No one deliberately creates those problems, no one wants them to persist, but they persist nonetheless. That is because they are intrinsically systems problems—undesirable behaviors characteristic of the system structures that produce them. They will yield only as we reclaim our intuition, stop casting blame, see the system as the source of its own problems, and find the courage and wisdom to restructure it.

Of course, the broader context was (and is) on my mind. But we’re all enmeshed in complex systems in our day-to-day lives. It behooves us to ponder whether the causes of problems are really “out there” — or whether, as Harry suggested, we need to be more introspective.

Traction Heroes ep. 8: Quagmires
More in technology
I upload YouTube videos from time to time, and a fun comment I often get is “Whoa, this is in 8K!”. Even better, I’ve had comments from the like, seven people with 8K TVs that the video looks awesome on their TV. And you guessed it, I don’t record my videos in 8K! I record them in 4K and upscale them to 8K after the fact.

There’s no shortage of AI video upscaling tools today, but they’re of varying quality, and some are great but quite expensive. The legendary Finn Voorhees created a really cool tool, though, called fx-upscale, that smartly leverages Apple’s built-in MetalFX framework. For the unfamiliar, this library is an extension of Apple’s Metal graphics library, and adds functionality similar to NVIDIA’s DLSS, where it intelligently upscales video using machine learning (AI): rather than just stretching an image, it uses a model to try to infer what the frame would look like at a higher resolution. It’s primarily geared toward video game use, but Finn’s library shows it does an excellent job for video too.

I think this is a really killer utility, and use it for all my videos. I even have a license for Topaz Video AI, which arguably works better, but takes an order of magnitude longer. For instance, my recent 38-minute 4K video took about an hour to render to 8K via fx-upscale on my M1 Pro MacBook Pro, but would take over 24 hours with Topaz Video AI.

```sh
# Install with homebrew
brew install finnvoor/tools/fx-upscale

# Outputs a file named my-video Upscaled.mov
fx-upscale my-video.mov --width 7680 --codec h265
```

Anyway, just wanted to give a tip toward a really cool tool! Finn’s even got a [version in the Mac App Store called Unsqueeze](https://apps.apple.com/ca/app/unsqueeze/id6475134617) with an actual GUI that’s even easier to use, but I really like the command line version because you get a bit more control over the output.

8K is kinda overkill for most use cases, so to be clear you can go from like, 1080p to 4K as well if you’re so inclined. I just really like 8K for the future-proofing of it all: in however many years, when 8K TVs are more common, I’ll be able to have some of my videos already able to take advantage of that. And it takes long enough to upscale that I’d be surprised to see TVs or YouTube offering that upscaling natively in a way that looks as good, given the amount of compute required currently.

(Image caption: Obviously very zoomed in to show the difference easier.)

If you ask me, for indie creators, even when 8K displays are more common, the future of recording still probably won’t be in native 8K. 4K recording still gives more than enough detail for AI to do a compelling upscale to 8K. I think for my next camera I’m going to aim for recording in 6K (so I can still reframe in post), and then continue to output the final result in 4K to be AI upscaled. I’m coming for you, Lumix S1ii.
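As a concrete example of that more modest route, here’s what a 1080p-to-4K pass would look like using the same flags shown above (the filename is hypothetical):

```sh
# 1080p -> 4K: same tool, smaller target width (3840 instead of 7680)
fx-upscale my-1080p-video.mov --width 3840 --codec h265
```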
totally unreasonable price for a completely untested item, as-was, no returns, with no power supply, no wiring harness and no auxiliary daughterboards. At the end of this article, we'll have it fully playable and wired up to a standard ATX power supply, a composite monitor and off-the-shelf Atari joysticks, and because this board was used for other related games from that era, the process should work with only minor changes on other contemporary Gremlin arcade classics like Blockade, Hustle and Comotion [sic]. It's time for a Refurb Weekend.

a July 1982 San Diego Reader article, the locally famous alternative paper I always snitched a copy of when I was downtown, and of which I found a marginally better copy to make these scans. There's also an exceptional multipart history of Gremlin you can read, but for now we'll just hit the highlights as they pertain to today's project.

ported to V1 Unix and has a simpler three-digit variant Bagels which was even ported to the KIM-1. Unfortunately his friends didn't have minicomputers of their own, so Hauck painstakingly put together a complete re-creation from discrete logic so they could play too, later licensed to Milton Bradley as their COMP IV handheld. Hauck had also been experimenting with processor-controlled video games, developing a simple homebrew unit based around the then-new Intel 8080 CPU that could connect to his television set and play blackjack. Fogleman met Hauck by chance at a component vendor's office and hired him on to enhance the wall game line, but Hauck persisted in his experiments, and additionally presented Fogleman with a new and different machine: a two-player game played with buttons on a video TV display, where each player left a boxy solid trail in an attempt to crowd out the other. To run the fast action on its relatively slow ~2MHz CPU and small amount of RAM, a character generator circuit made from logic chips painted a 256x224 display from 32 8x8 tiles in ROM specified by a 32x28 screen matrix, allowing for more sophisticated shapes and relieving the processor of having to draw the screen itself. (Does this sound like an early 8-bit computer? Hold that thought.)

patent application was too late and too slow to stop the ripoffs. (For the record, Atari programmer Dennis Koble was adamant he didn't steal the idea from Gremlin, saying he had seen similar "snake" games on CompuServe and ARPANET, but Nolan Bushnell nevertheless later offered Gremlin $100,000 in "consolation" which the company refused.) Meanwhile, Blockade orders evaporated and Gremlin's attempts to ramp up production couldn't save it, leaving the company with thousands of unused circuit boards, game cabinets and video monitors. While lawsuits against the copycats slowly lumbered forward, Hauck decided to reprogram the existing Blockade hardware to play new games, starting with converting the Comotion board into Hustle in 1977, where players could also nab targets for additional points. The company ensured they had a thousand units ready to ship before even announcing it, and sales were enough to recoup at least some of the lost investment. Hauck subsequently created a reworked version of the board with the same CPU for the more advanced game Depthcharge, which initially tested poorly with players until the controls were simplified. This game was licensed to Taito as Sub Hunter, and the board was reworked again for the target shooter Safari, also in 1977, and also licensed by Taito. For 1978, Gremlin made one last release using the Hustle-Comotion board.
This game was Blasto.

present world record is 8,730), but in two-player mode the players can also shoot each other for an even bigger point award. This means two-player games rapidly turn into active hunts, with a smaller bonus awarded to a player as well if the other gets nailed by a mine.

shown above with a screenshot of the interactive on-board assembler. Noval also produced an education-targeted system called the Telemath, based on the 760 hardware, which was briefly deployed in a few San Diego Unified elementary schools. Alas, they were long gone before we arrived. Industry observers were impressed by the specs and baffled by the desk. Although the base price of $2995 [about $16,300] was quite reasonable considering its capabilities, you couldn't buy it without its hulking enclosure, which made it a home computer only to the sort of people who would buy a home PDP-8. (Raises hand.) Later upgrades with a Z80 and a full 32K didn't make it any more attractive to buyers, and Noval barely sold about a dozen. Some of the rest remained at Gremlin as development systems (since they practically were already), and an intact upgraded unit with aftermarket floppy drives lives at the Computer History Museum.

The failure of Noval didn't kill Gremlin outright, but Fogleman was concerned the company lacked sufficient capital to compete more strongly in the rapidly expanding video game market, and Noval didn't provide it. With wall game sales fading fast and cash flow crunched, the company was slowly approaching bankruptcy by the time Blasto hit arcades. At the same time, Sega Enterprises, Inc., then owned by conglomerate Gulf + Western (who also then owned Paramount Pictures), was looking for a quick way to revive its failing North American division, which was only surviving on the strength of its aggressively promoted mall arcades. Sega needed development resources to bring out new games States-side, and Gremlin needed money. In September 1978 Fogleman agreed to make Gremlin a Sega subsidiary in return for an undisclosed number of shares, and became a vice chairman.

Sega was willing to do just about anything to achieve supremacy on this side of the Pacific. In addition to infusing cash into Gremlin to make new games (as Gremlin/Sega) and distribute others from their Japanese peers and partners (as Sega/Gremlin), Sega also perceived a market opportunity in licensing arcade ports to the growing home computer segment. Texas Instruments' 99/4 had just hit the market in 1979 to howls that there was hardly any software, and their close partner Milton Bradley was looking for marketable concepts for cartridge games. Blasto had simple fast action and a good name in the arcades, required only character graphics (well within the 9918 video chip's capabilities), and worked for both one or two players, and Sega had no problem blessing a home port of an older property for cheap. Milton Bradley picked up the license to Hustle as well.

Bob Harris for completion, and TI house programmer Kevin Kenney wrote some additional features.

1 to 40 (obviously some thought was given to using the same PCB as much as possible). The power header is also a 10-pin block and the audio and video headers are 4-pin. Oddly, the manual doesn't say anywhere what the measurements are, so I checked them with calipers and got a pitch of around 0.15", which sounds very much like a common 0.156" header. I ordered a small pack of those as an experiment.

0002 because of the control changes: if you have an 814-0001, then you have a prototype.
(The MAME driver makes reference to an Amutech Mine Sweeper, which is a direct and compatible ripoff of this board — despite the game type, it's not based on Depthcharge.)

listed with the part numbers for the cocktail, but the ROM contents expected in the hashes actually correspond to the upright.

Bipolar ROMs and PROMs are, as the name suggests, built with NPN bipolar junction transistors instead of today's far more common MOSFETs ("MOS transistors"). This makes them lower density but also faster: these particular bipolar PROMs have access times of 55-60ns, as opposed to EPROMs or flash ROMs of similar capacity, which may be multiple times slower depending on the chip and process. For many applications this doesn't matter much, but in some tightly-timed systems the speed difference can make it difficult to replace bipolar PROMs with more convenient EPROMs, and most modern-day chip programmers can't generate the higher voltage needed to program them (you're basically blowing a whole bunch of microscopic Nichrome metal fuses). Although modern CMOS PROMs are available at comparable speeds, bipolars were once very common, including in military environments where they could be manufactured to tolerate unusually harsh operating conditions. The incomparable Ken Shirriff has a die photo and article on the MMI 5300, an open-collector chip which is one of the military-spec parts from this line.

Model 745 KSR and bubble memory Model 763 ASR, use AMD 8080s!

The Intel 8080A is a refined version of the original Intel 8080 that works properly with more standard TTL devices (the original could only handle low-power TTL); the "NL" tag is TI's designation for a plastic regular-duty DIP. Its clock source is a 20.79MHz crystal at Y1, which is divided down by ten to yield the nominal clock rate of 2.079MHz, slightly above its maximum rating of 2MHz but stable enough at that speed. The later Intel 8080A-1 could be clocked up to 3.125MHz, and of course the successor Intel 8085 and Zilog Z80 processors could run faster still. An interesting absence on this board is an Intel 8224 or equivalent to generate the 8080A's two-phase clock: that's done directly off the crystal oscillator with discrete logic, an elegant (and likely cheaper) design by Hauck. The video output also uses the same crystal.

Next to the CPU are pads for the RAM chips. You saw six of them in the last picture under the second character ROM (316-0100M), all 2102 (1Kbit) static RAM. These were the chips I was most expecting to fail, having seen bad SRAM in other systems like my KIM-1. The ones here are 450ns Fairchild 2102-1 SRAMs in the 2102-1PC plastic case and "commercial" temperature range, and six of them add up to 768 bytes of memory. NOS examples and equivalents are fortunately not difficult to find.

Closer to the CPU in this picture, however, are two more RAM chip pads that are empty except for tiny factory-installed jumpers. On both the Hustle and Blasto boards, they remain otherwise unpopulated, and there is an additional jumper between E4 and E5, also visible in the last picture. The Comotion board, however, has an additional 256 bytes of RAM here (as two more 1024x1 SRAMs). On that board these pads have RAM, there are no jumpers on the pads, and the jumper is instead between E3 (ground) and E5. This jumper is also on Blockade, even though it has only five 2102s and three dummy jumpers on the other pads. That said, the games don't seem to care how much RAM is present as long as the minimum is there: the current MAME driver gives all of them the full 1K.
this 8080 system which uses a regulator). Tracing the schematic out further, the -12V line is also used with the +5V and +12V lines to run the video circuit. These are all part of the 10-pin power header.

almost this exact sequence of voltages? An AT power supply connector! If we're clever about how we put the two halves on, we can get nearly the right lines in the right places. The six-pin AT P9 connector reversed is +5V, +5V, +5V, -5V, ground, ground, so we can cut the -5V to be the key. The six-pin AT P8 connector not reversed is power-good, +5V (or NC), +12V, -12V, ground, ground, so we cut the +5V to be the key, and cut the power-good line and one of the dangling grounds and wire ground to the power-good pin. Fortunately I had a couple spare AT-to-ATX converter cables from when we redid the AT power supply on the Alpha Micro Eagle 300.

connectors since we're going to modify them anyway. A quick couple drops of light-cured cyanoacrylate into the key hole ...

Something's alive! An LED glows! Time now for the video connector to see if we can get a picture!

a nice 6502 reset circuit). The board does have its own reset circuit, of a sort. You'll notice here that the coin start is wired to the same line, and the manual even makes reference to this ("The circuitry in this game has been arranged so that the insertion of a quarter through the coin mechanism will reset the restart [sic] in the system. This clears up temporary problems caused by power line disturbances, static, etc."). We'll of course be dealing with the coin mechanism a little later, but that doesn't solve the problem of bringing the machine into attract mode when powered on. I also have doubts that people would have blithely put coins into a machine that was obviously on the fritz.

pair is up and down, or left and right, but not which one is exactly which, because that depends on the joystick construction. We'll come back to this.

Enterprises) to emphasize the brand name more strongly. The company entered a rapid decline with the video game crash of 1983, and the manufacturing assets were sold to Bally Midway with certain publishing rights, but the original Gremlin IP and game development teams stayed with Sega Electronics and remained part of Gulf+Western until they were disbanded. The brand is still retained as part of CBS Media Ventures today, though modern Paramount Global doesn't currently use the label for its original purpose. In 1987 the old wall game line was briefly reincarnated under license, also called Gremlin Industries and with some former Gremlin employees, but it only released a small number of new machines before folding. Meanwhile, Sega Enterprises separated from Gulf+Western in a 1984 management buyout by original founder David Rosen, Japanese executive Hayao Nakayama and their backers. This Sega is what people consider Sega today, now part of Sega Sammy Holdings, and the rights to the original Gremlin games — including Blasto — are under it.

Lane Hauck's last recorded game at Gremlin/Sega was the classic Carnival in 1980 (I played this first on the Intellivision). After leaving the company, he held positions at various companies including San Diego-based projector manufacturer Proxima (notoriously later merging with InFocus), Cypress Semiconductor and its AgigA Tech subsidiary (both now part of Infineon), and Maxim Integrated Products (now part of Analog Devices), and works as a consultant today.

I'm not done with Blasto. While I still enjoy playing the TI-99/4A port, there are ...
improvements to be made, particularly the fact that it's single-fire, and it was never ported to anything else. I have ideas: I've been working on it off and on for a year or so, and all the main gameplay code is written, so I just have to finish the graphics and music. You'll get to play it.

And the arcade board? Well, we have a working game and a working harness that I can build off. I need a better sound amplifier, the "boom" circuit deserves a proper subwoofer, and I should fake up a little circuit using the power-good line from the ATX power supply to substitute for the power interrupt board. Most of all, though, we really need to get it a proper display and cabinet. That's naturally going to need a budget rather larger than my typical projects, and I'm already saving up for it. Suggestions for a nice upright cab with display, buttons and joysticks that I can rewire — and afford! — are solicited. On both those counts, to be continued.
Hard data is hard to find, but roughly 100 million books were published prior to the 21st century. Of those, a significant portion were never available in a digital format and haven’t yet been digitized, which means their content is effectively inaccessible to most people today. To bring that content into the digital world, Redditor […]

The post “This machine automatically scans books from cover to cover” appeared first on the Arduino Blog.