More from Old Vintage Computing Research
Just a brief programming note. Before this blog there was Floodgap Retrobits, and I still maintain those pages. One of the earliest was my Tomy Tutor-specific page devoted to my very first computer, which we got in 1983. The Tutors are relatives of the Texas Instruments home computers, closely patterned after the unreleased TI 99/8, and the history of the Japanese models is relatively well known; there are a number of Japanese enthusiasts who specialize in the Pyuuta, the Tutor's ancestor system. On the other hand, hardly anybody knows anything about the British version. That system is the Grandstand Tutor, shown here in a photograph from Your Computer, October 1983, at that time one of the major home computer magazines in the UK.

The original form of this machine was very similar to the Pyuuta, which emphasized its TMS9918A-based built-in paint program and, in keeping with that specialization in graphics, implemented only a very simplified animation-oriented dialect of BASIC called GBASIC. Adam Imports rebadged many Asian and some American toys and games for the UK (and, for some period of time, New Zealand) and had a particularly close relationship with Tomy. In fact, the relationship was close enough that Adam rejected this initial version as uncompetitive with other home computers and sent it back to Tomy for a more upgraded BASIC. Tomy provided this by modifying TI Extended BASIC, calling it Tomy BASIC and implementing it as a second mode accessible from the system's menu-based interface. The absence of Tomy BASIC places the earliest Grandstand Tutor prior to the American Tomy Tutor, which shipped with the upgraded Tomy BASIC. Tomy subsequently sold this upgrade in at least two forms as an option for the Japanese machines as well.

The interesting part is that while PAL Tutors have been documented to exist (the American Tutor is obviously NTSC), no one yet has reported finding a Grandstand. It wouldn't be hard to distinguish one — the photograph shows obvious Grandstand branding on its silver badge. It's possible they were never released at all: even accounting for publishing delays, the second Grandstand would have emerged late in 1983, hitting in the wake of the video game crash and against heavier hitters like the Commodore VIC-20 and Commodore 64 as well as (in the UK) the ZX Spectrum. Adam may have simply concluded it wasn't a strong enough competitor, even with the upgraded BASIC, to sell.

The other update is on the Solbourne side. Solbourne was first to market with multiprocessor SPARC systems, and their SunOS-derived OS/MP could run most contemporary Sun software, including SunView. Later their IDT workstations, though uniprocessor, competed directly with and even could squeak by contemporary SPARCstations, at least in the beginning. Solbourne eventually ran out of money when they hit engineering limits with their own CPU and could never reclaim the throughput crown, abandoning the computer hardware market in 1994. We might be adding more remembrances as other Solbourne engineers are contacted.

You can see these updates at The Little Orphan Tomy Tutor as well as past Old VCR Tomy articles, and The Solbourne Solace as well as past Old VCR Solbourne-specific articles. Naturally, if you have anything to add, feel free to post in the comments or drop me E-mail at ckaiser at floodgap dawt com.
COMPUTE!'s Gazette was for many years the leading Commodore-specific magazine. I liked Ahoy! and RUN, and I subscribed to Loadstar too, but Gazette had the most interesting type-ins and the most extensive coverage. They were also the last of COMPUTE!'s machine-specific magazines and one of the longest-lived Commodore publications, period: yours truly had some articles published in COMPUTE (no exclamation point by then) Gazette as a youthful freelancer in the 1990s, until General Media eventually made Gazette disk-only and then halted it entirely in 1995. I remember pitching Tom Netzel on a column idea and getting a cryptic E-mail back from him saying that "things were afoot." What was afoot was General Media divesting the entire publication to Ziff-Davis, who was only interested in the mailing list, and I got a wholly inadequate subscription to PC Magazine in exchange, which I mostly didn't read and eventually didn't renew.

This week I saw an announcement about a rebooted Gazette — even with a print edition, and restoring the classic ABC/Cap Cities trade dress — slated for release in July. I'm guessing that "president and founder [sic]" Edwin Nagle either bought or licensed the name from Ziff-Davis when forming the new COMPUTE! Media. The announcement also doesn't say if he only has rights to the name, or if he actually has access to the back catalogue, which I think could be more lucrative: since there appears to be print capacity, it seems like there could be some money in low-run back issue reprints or even reissuing some of their disk products, assuming any residual or royalty arrangements could be dealt with. I should say for the record that I don't have anything to do with the company myself and I don't know Nagle personally.

By and large I naturally think this is a good thing, and I'll probably try to get a copy, though the stated aim of the magazine is more COMPUTE! and less Gazette, since it intends to cover the entire retro community. Doing so may be the only way to ensure an adequate amount of content at a monthly cadence, so I get the reasoning, but it necessarily won't be the Gazette you remember. Also, since most retro enthusiasts have some means to push downloaded data to their machines, the type-in features that filled most of its pages in the 1980s will almost certainly be diminished or absent. I suspect you'll see something more like the General Media incarnation, which was a few type-ins slotted between various regular columns, reviews and feature articles. The print rate strikes me as very reasonable at $9.95/mo for a low-volume rag and I hope they can keep that up, though they would need to be finishing the content for layout fairly soon and the only proffered sample articles seem to be on their blog. I'm at most cautiously optimistic right now, but the fact they're starting up at all is nice to see, and I hope it goes somewhere.
See my prior articles for more of the history, but in brief, MacLynx is a throwback port of the venerable Lynx 2.7.1 to the classic Mac OS, last updated in 1997, which I picked up again in 2020. Rather than try to replicate its patches against a more current Lynx which may not even build, I've been improving its interface and Mac integration along with the browser core, incorporating later code and patching the old stuff.

However, beta 6 is not a fat binary — the two builds are intentionally separate. One reason is so I can use a later CodeWarrior for better code generation that didn't have to support 68K, but the main one is to allow different code on Power Macs which may be expensive or infeasible on 68K Macs. The primary use case for this — which may occur as soon as the next beta — is adding a built-in vendored copy of Crypto Ancienne for onboard TLS without a proxy. On all but upper-tier 68040s, setting up the TLS connection takes longer than many servers will wait, but even the lowliest Performa 6100 with a barrel-bottom 60MHz 601 can do so reasonably quickly.

The port did not go altogether smoothly. While Olivier Gutknecht's original fat binary worked fine on Power Macs, it took quite a while to get all the pieces reassembled on a later CodeWarrior with a later version of GUSI, the Mac POSIX glue layer which is a critical component (the Power Mac version uses 2.2.3, the 68K version uses 1.8.0). Some functions had changed and others were missing and had to be rewritten with later alternatives. One particularly obnoxious glitch was due to a conflict between the later GUSI's time.h and Apple Universal Interfaces' Time.h (remember, HFS+ is case-insensitive) which could not be solved by changing the search order in the project due to other conflicting headers. The simplest solution was to copy Time.h into the project and name it something else!

Even after that, though, basic Mac GUI operations like popping open the URL dialogue would cause it to crash. Can you figure out why? Here's a hint: on a Power Mac, your application itself was almost certainly fully native. However, a certain amount of the Toolbox and the Mac OS retained 68K code, even in the days of Classic under Mac OS X, and your PowerPC application would invariably hit one of these routines eventually. The component responsible for switching between ISAs is the Mixed Mode Manager, which is tightly integrated with the 68K emulator and bridges the two architectures' different calling conventions, marshalling their parameters (PowerPC in registers, 68K on the stack) and managing return addresses.

I'm serious when I say the normal state is to run 68K code: 68K code is necessarily the first-class citizen in Mac OS, even in PowerPC-only versions, because to run 68K apps seamlessly they must be able to call any 68K routine directly. All the traps that 68K apps use must also look like 68K code to them — and PowerPC apps often use those traps, too, because they're fundamental to the operating system. 68K apps can and do call code fragments in either ISA using the Code Fragment Manager (and PowerPC apps are obliged to), but the system must still be able to run non-CFM apps that are unaware of its existence. To jump to native execution thus requires an additional step.

Say a 68K app running in emulation calls a function in the Toolbox which used to be 68K, but is now PowerPC. On a 68K MacOS, this is just 68K code. In later versions, this is replaced by a routine descriptor with a special trap meaningful only to the 68K emulator.
This descriptor contains the destination calling convention and a pointer to the PowerPC function's transition vector, which has both the starting address of the code fragment and the initial value for the TOC environment register. The Mixed Mode Manager converts the parameters to a PowerOpen ABI call according to the specified convention and moves the return address into the PowerPC link register, and upon conclusion converts the result back and unwinds the stack. The same basic idea works in reverse for PowerPC code calling a 68K routine. Unfortunately, we forgot to make a descriptor for this and other routines the Toolbox modal dialogue routine expected to call, so the emulator remains in 68K mode trying to execute them and makes a big mess. (It's really hard to debug it when this happens, too; the backtrace is usually totally thrashed.) A sketch of the fix appears below.

I said last time that my idea with MacLynx is to surround the text core with the Mac interface. Lynx keys should still work and it should still act like Lynx, but once you move to a GUI task you should stay in the GUI until that task is completed. In beta 5, I added support for the Standard File package so you get a requester instead of entering a filename, but once you do this you still need to manually select "Save to disk" inside Lynx. That changes in beta 6. (One filename wrinkle to keep in mind: the colon is the Mac path separator, and :: in MacOS is treated as the parent folder.)

Resizing, scrolling and repainting are also improved. The position of the thumb in MacLynx's scrollbar is now implemented using a more complex yet more dynamic algorithm which should also more properly respond to resize events. A similar change fixes scroll wheels with USB Overdrive. When MacLynx's default window opens, a scrollbar control is artificially added to it. USB Overdrive implements its scrollwheel support by finding the current window's scrollbar, if any, and emulating clicks on its up and down (or left and right) buttons as the wheel is moved. This works fine in MacLynx, at least initially. When the window is resized, however, USB Overdrive seems to lose track of the scrollbar, which causes its scrollwheel functionality to stop working. The solution was to destroy and rebuild the scrollbar after the window takes its new dimensions (also sketched below), like what happens on startup when the window first opens. This little song and dance may also fix other scrollwheel extensions.

Always keep in mind that the scrollbar is actually used as a means to send commands to Lynx to change its window on the document; it isn't scrolling, say, a pre-rendered GWorld. This causes the screen to be redrawn quite frequently, and big window sizes tend to chug. You can also outright crash the browser with large window widths: this is difficult to do on a 68K Mac with on-board video where the maximum screen size isn't that large, but on my 1920x1080 G4 I can do so reliably.

The display character set is now fixed, which makes the corresponding setting in lynx.cfg a no-op. However, if you are intentionally using another character set and this will break you, please feel free to plead your use case to me and I will consider it. Another bug fixed was an infinite loop that could trigger during UTF-8 conversion of certain text strings. These sorts of bugs are also a big pain to puzzle out because all you can do from CodeWarrior is force a trap with an NMI, leaving the debugger's view of the program counter likely near but probably not at the scene of the foul. Eventually I single-stepped from a point near the actual bug and was able to see what was happening, and it turned out to be a very stupid bug on my part, and that's all I'm going to say about that.
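To make the Mixed Mode fix concrete, here is a minimal sketch under Apple's Universal Interfaces; MyFilter and RunMyDialog are hypothetical names for illustration, not MacLynx's actual code. The rule it demonstrates: anything the Toolbox will call back into must be passed as a routine descriptor (a UPP), never as a raw function pointer, on a CFM PowerPC build.

    /* Minimal sketch, not MacLynx's actual code: a modal dialogue
       filter must be wrapped in a ModalFilterUPP (a routine
       descriptor) so the Mixed Mode Manager can switch ISAs when
       the 68K side of the Toolbox calls back into native code. */
    #include <Dialogs.h>
    #include <MixedMode.h>

    static pascal Boolean MyFilter(DialogPtr dlg, EventRecord *event,
                                   short *itemHit)
    {
        /* handle keyboard shortcuts, idle tasks, etc. here */
        return false;   /* false = let ModalDialog process the event */
    }

    void RunMyDialog(DialogPtr dlg)
    {
        short item;
        /* NewModalFilterUPP() builds a routine descriptor on CFM
           builds and reduces to a plain cast on classic 68K, so the
           same source works for both. Passing MyFilter directly on
           PowerPC is exactly the crash described above. */
        ModalFilterUPP filterUPP = NewModalFilterUPP(MyFilter);
        do {
            ModalDialog(filterUPP, &item);
        } while (item != ok);
        DisposeModalFilterUPP(filterUPP);
    }

The scrollbar rebuild is likewise just a dispose-and-recreate through the Control Manager. Again a sketch with invented names, assuming the rebuild happens in the resize handler:

    /* Illustrative sketch: after a resize, throw the old scrollbar
       away and create a fresh one, the same path taken when the
       window first opens. Extensions that cached the old
       ControlHandle will then find the new one. */
    #include <Controls.h>

    static ControlHandle gScrollBar;   /* hypothetical global */

    void RebuildScrollBar(WindowPtr w, const Rect *bounds)
    {
        if (gScrollBar != NULL)
            DisposeControl(gScrollBar);
        gScrollBar = NewControl(w, bounds, "\p", true,
                                0,       /* initial value */
                                0, 100,  /* min, max */
                                scrollBarProc, 0L);
    }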
Now that cookies' SameSite and HttpOnly attributes (irrelevant on Lynx but supported for completeness) are parsed, the next problem was that any cookie with an expiration value — which nowadays is nearly any login cookie — wouldn't stick. The problem turned out to be the difference in how the classic MacOS handles time values. In 32-bit Un*xy things, including Mac OS X, time_t is a signed 32-bit integer with an epoch starting on Thursday, January 1, 1970. In the classic MacOS, time_t is an unsigned 32-bit integer with an epoch starting on Friday, January 1, 1904. (This is also true for timestamps in HFS+ filesystems, even in Mac OS X and modern macOS, but not APFS.) Lynx has a utility function that can convert an ASCII date string into a seconds-past-the-epoch count, but in case you haven't guessed, this function defaults to the Unix epoch. In fact, the version previously in MacLynx only supported the Unix epoch. That means when converted into seconds after the epoch, the cookie expiration value would always appear to be in the past compared to the MacOS time value which, being based on a much earlier epoch, will always be much larger — and thus MacLynx would conclude the cookie was actually expired and politely clear it. I reimplemented this function based on the MacOS epoch (the conversion is sketched at the end of this article), and now login cookies actually let you log in! Unfortunately other cookies like trackers can be set too, and this is why we can't have nice things. Sorry. At least they don't persist between runs of the browser.

Even then, though, there's still some additional time fudging, because time(NULL) on my Quadra 800 running 8.1 and time(NULL) on my G4 MDD running 9.2.2, despite their clocks being synchronized to the same NTP source down to the second, yielded substantially different values. Both of these calls should go to the operating system and use the standard Mac epoch, and not through GUSI, so GUSI can't be why. For the time being I use a second fudge factor if we get an outlandish result before giving up. I'm still trying to figure out why this is necessary.

Images can again be handed off to a helper application for viewing. This didn't work for PNG images before because it was using the wrong internal MIME type, which is now fixed. (Ignore the MIME types in the debug window because that's actually a problem I noticed with my Internet Config settings, not MacLynx. Fortunately Picture Viewer will content-sniff, so it figures it out.) Finally, there is also miscellaneous better status code and redirect handling (again not a problem with mainline Lynx, just our older fork here), which makes login and browsing sites more streamlined, and you can finally press Shift-Tab to cycle backwards through forms and links.

If you want to build MacLynx from source, building beta 6 is largely the same on 68K with the same compiler and prerequisites, except that builds are now segregated to their own folders and you will need to put a copy of lynx.cfg in with them (the StuffIt source archive does not have aliases predone for you). For the PowerPC version, you'll need the same setup but substituting CodeWarrior Pro 7.1, and, like CWGUSI, GUSI 2.2.3 should be in the same folder or volume that contains the MacLynx source tree. There are debug and optimized builds for each architecture. Pre-built binaries and source are available from the main MacLynx page. MacLynx, like Lynx, is released under the GNU General Public License v2.
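As a footnote to the cookie fix above, the epoch conversion itself is tiny. This is a minimal sketch, not MacLynx's actual code: the gap between the two epochs is a fixed 2,082,844,800 seconds (66 years plus 17 leap days, at 86,400 seconds a day).

    /* Minimal sketch (not MacLynx's actual code) of converting
       between the Unix epoch (signed, from 1970-01-01) and the
       classic MacOS epoch (unsigned, from 1904-01-01). */
    #define MAC_UNIX_EPOCH_DELTA 2082844800UL

    /* Unix seconds -> classic MacOS seconds */
    unsigned long UnixToMacTime(long unixSecs)
    {
        return (unsigned long)unixSecs + MAC_UNIX_EPOCH_DELTA;
    }

    /* classic MacOS seconds -> Unix seconds (negative before 1970) */
    long MacToUnixTime(unsigned long macSecs)
    {
        return (long)(macSecs - MAC_UNIX_EPOCH_DELTA);
    }

With the cookie's Expires date parsed into Mac-epoch seconds, comparing it against time(NULL) (which on the classic MacOS also counts from 1904) now does the right thing.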
Everyone should pull one great practical joke in their lifetimes. This one was mine, and I think it's past the statute of limitations. The story is true. Only the names are redacted to protect the guilty.

My first job out of college was as a database programmer, even though my undergraduate degree had nothing to do with computers and my current profession still mostly doesn't. The reason was that the University I worked for couldn't afford competitive wages, but they did offer various fringe benefits, and they were willing to train someone who at least had decent working knowledge. I, as a newly minted graduate of the august University of California system, had decent working knowledge at least of BSD/386 and SunOS, but more importantly also had the glowing recommendation of my predecessor, who was being promoted into a new position. I was hired, which was their first mistake.

The system I was hired to work on was an HP 9000 K250, one of Hewlett-Packard's big PA-RISC servers. I wish I had a photograph of it, but all I have are a couple of bad scans of some bad Polaroids of my office and none of the server room. The server room was downstairs from my office back in the days when server rooms were on-premises, complete with a swipe card lock and a halon system that would give you a few seconds of grace before it flooded everything. The K250 hulked in there where it had recently replaced what I think was an Encore mini of some sort (probably a Multimax, since it was a few years old and the 88K Encores would have been too new for the University), along with the AIX RS/6000s that provided student and faculty shell accounts and E-mail, the bonded T1 lines, some of the terminal servers, the massive Cabletron routers and a lot of the telco stuff. One of the tape reels from the Encore hangs on my wall today as a memento.

The K250 and the Encore it replaced (as well as the L-Class that later replaced the K250 when I was a consultant) ran an all-singing, all-dancing student information system called CARS. CARS is still around, renamed Jenzabar, though I suspect that many of its underpinnings remain if you look under the table. In those days CARS was a massive overlay that was loaded atop the operating system and database, which when I started were, respectively, HP/UX 10.20 and Informix. (I'm old.) It used Informix tables, screens and stored procedures plus its own text UI libraries to run code written variously as Perform screens, SQL, C-shell scripts and plain old C or ESQL/C. Everything was tracked in RCS using overgrown Makefiles. I had the admin side (resource management, financials, attendance trackers, etc.) and my office partner had the academic side (mostly grades and faculty tracking). My job was to write and maintain this code and, shortly after, to help the University create custom applications in CARS' brand-spanking-new web module, which chose the new hotness in scripting languages, i.e., Perl. Fortuitously I had learned Perl in, appropriately enough, a computational linguistics course.

CARS also managed most of the printers on campus except for the few that the RS/6000s controlled directly. Most of the campus admin printers were HP LaserJet 4 units of some derivation equipped with JetDirect cards for networking. These are great warhorse printers, some of the best laser printers HP ever made. I suspect there were line printers other places, but those printers were largely what existed in the University's offices.
It turns out that the READY message these printers show on their VFD panels is changeable. I don't remember where I read this, probably idly paging through the manual over a lunch break, but initially the only fun thing I could think of to do was to have the printer say hi to my boss when she sent jobs to it, stuff like that (whereupon she would tell me to get back to work). Then it dawned on me: because I had access to the printer spools on the K250, and the spool directories were conveniently named the same as their hostnames, I knew where each and every networked LaserJet on campus was. I was young, rash and motivated. This was a hack I just couldn't resist. It would be even better than what had been my favourite joke at my alma mater, where campus services, notable for posting various service suspension notices, posted one April Fools' Day that gravity service to various buildings would itself be suspended. I felt sure this hack would eclipse that too.

The plan on April Fools' Day was to get into work at OMG early o'clock and iterate over every entry in the spool, sending it a sequence that would change the READY message to INSERT 5 CENTS. This would cause every networked LaserJet on campus to appear to ask for a nickel before you printed anything. The script was very simple (I saved it); the sequence it sent is reconstructed below. The ^[ was a literal ASCII 27 ESCape character, and netto was a simple netcat-like script I had written in these days before netcat was widely used. That's it.

Now, let me be clear: the printer was still ready! The effect was merely cosmetic! It would still print if you sent jobs to it! Nevertheless, to complete the effect, a message was sent out on the campus-wide administration mailing list (which I also saved). At the end of the day I would reset everything back to READY, smile smugly, and continue with my menial existence. That was the plan.

Having sent this out, I fielded a few anxious calls from people who laughed uproariously when they realized, and I reset their printers manually afterwards. The people who knew me, knew I was a practical joker, took note of the date, and sent approving replies. One of the best was sent to me later in the day by intercampus mail, printed on their laser printer, with a nickel taped to it. Unfortunately, not everybody on campus knew me, and those who did not know me not only did not call me, but instead called university administration directly. By 8:30am it was chaos in the main office, and this filtered up to the head of HR, who most definitely did know me, and told me I'd better send a retraction before the CFO got in or I was in big trouble. That went wrong also, because my retraction said that campus administration was not considering charging per-page fees when in fact they actually were, so I had to retract it and send a new retraction that didn't call attention to that fact. I also ran the script to reset everything early. Eventually the hubbub finally settled down around noon.

Everybody in the office thought it was very funny. Even my boss, who officially disapproved, thought it was somewhat funny. The other thing that went wrong, as if all that weren't enough, was that the director of IT — which is to say, my boss's boss — was away on vacation when all this took place. (Read E-mail remotely? Who does that?)
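For the curious, the sequence in question is HP's Printer Job Language, and a reconstruction (not the saved script) looks something like this: RDYMSG DISPLAY is the documented PJL command for the front-panel message, port 9100 is the JetDirect raw-print default, and set_rdymsg is an invented name standing in for the original csh-plus-netto one-liner.

    /* Reconstruction sketch, not the original script: connect to a
       JetDirect card's raw port (9100) and send a PJL job that sets
       the front-panel ready message. The \033%-12345X sequence is
       PJL's Universal Exit Language, bracketing the job. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int set_rdymsg(const char *host, const char *msg)
    {
        char buf[256];
        struct hostent *he;
        struct sockaddr_in sa;
        int s;

        if ((he = gethostbyname(host)) == NULL)
            return -1;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(9100);  /* JetDirect raw print port */
        memcpy(&sa.sin_addr, he->h_addr, he->h_length);
        if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            return -1;
        if (connect(s, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            close(s);
            return -1;
        }
        sprintf(buf, "\033%%-12345X@PJL RDYMSG DISPLAY = \"%s\"\r\n"
                     "\033%%-12345X", msg);
        write(s, buf, strlen(buf));
        close(s);
        return 0;
    }

Iterating that over the spool directory names with INSERT 5 CENTS was the whole hack; the end-of-day reset is the same call with READY.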
I compounded this situation with the tactical error of going skiing over the coming weekend and part of the next week, most of which I spent snowplowing down the bunny slopes face first, so that he discovered all the angry E-mail in his box without me around to explain myself. (My office partner remembers him coming in wide-eyed, asking, "what did he do??") When I returned, it was icier in the office than it had been on the mountain. The assistant director, who thought it was funny, was in trouble for not putting a lid on it, and I was in really big trouble for doing it in the first place. I was appropriately contrite and made various apologies and was an uncharacteristically model employee for an unnaturally long period of time. The Ice Age eventually thawed and the incident was officially dropped, except for a "poor judgment" on my next performance review and the satisfaction of what was then considered the best practical joke ever pulled on campus. Indeed, everyone agreed it was much more technically accomplished than the previous award winner, where someone had supposedly spread it around the grounds that the security guards at the entrance would be charging a nominal admission fee per head. Years later they still said it was legendary. I like to think they still do.
More in technology
As we pack our bags and prepare for the adult-er version of BlackHat (that apparently doesn't require us to print out stolen mailspoolz to hand to people at their talks), we want to tell you about a recent adventure: a heist, if you will.
For your small business to survive, you need customers. Not just to buy once. You need them to come back, tell their friends, and trust you over time. And yet, too many small businesses make it weirdly hard to talk to them. Well, duh, right? I agree, yet I see small businesses fumbling this over and over.

All the attention when discussing business goes to giant corporations. Whether they're selling servers or vehicles or every product under the sun, millions of dollars pass through their doors every day. Yet it is folly to apply the methodologies of giant companies to our small businesses. It sounds obvious, but I constantly see small businesses making it hard for customers to get in touch. And if a customer does get through the "contact us" gauntlet, that small business often uses needlessly complicated enterprise software to talk with customers.

Small businesses don't get the spotlight, but they are the engine of the economy. To wit, in the United States:

- 99.9% of businesses are small
- Nearly half the private workforce is employed by small businesses
- They generate over 43% of the country's GDP

And beyond the stats, small businesses are who we turn to every day: your corner coffee shop, your local cleaner, your neighborhood software team. And don't forget that every big business started small. Small businesses are the genesis of innovation. We all need small businesses to succeed.

Most small teams aren't trying to become giant corporations. They want to make a living doing work for a fair return. Many of them work hard in hopes of moving the needle from a fair return to a comfortable life, and maybe even some riches down the road. Yet it's amazing how often it's forgotten: you need customers to succeed.

Success in small business starts with human conversation. While talking effectively with your customers does not guarantee success, it is certainly a requirement. Here's what that looks like: a customer has a question and your team responds kindly, clearly, and quickly. Or sometimes your team wants to reach out with a question for a customer. It's a simple, human interaction that cannot be done effectively by automation or AI. It's the air your small business is breathing. Starve that air, and everything else suffers. Your product or service is almost secondary to building a healthy relationship with each of your customers. Big business doesn't operate this way. We shouldn't expect it to show us how to build real relationships.

We're doing our best here at Good Enough to build healthy, happy customer relationships. Whenever you write to us, about any of our products or about anything at all, someone on the team is going to reply to offer help or an explanation or an alternative. As an online business, we're talking with customers primarily over email. For us, Jelly makes those conversations easy to have: human, not hectic.

Actual customer support is remarkable. Actual, healthy human relationships are important. Actual customer conversations are a key to small business success. Choose your actions and tools accordingly.

If you liked this post, maybe you'll like Jelly, our new email collaboration app for small teams!