Bona fides: Commodore 128DCR on my desk with a second 1571, Ultimate II+-L and a ZoomFloppy, three SX-64s I use for various projects, heaps of spare 128DCRs, breadbox 64s, 16s, Plus/4s and VIC-20s on standby, multiple Commodore collectables (blue-label PET 2001, C64GS, 116, TV Games, 1551, 1570), a couple A500s, an A3000 and an AmigaOS 3.9 QuikPak A4000T with '060 CPU, Picasso IV RTG card and Ethernet. I wrote for COMPUTE!'s Gazette (during the General Media years) and Loadstar. Here's me with Jack Tramiel and his son Leonard at a Computer History Museum event in 2007. It's on my wall. There's a Retro Recipes video (not affiliated) stating that, in answer to a request for a very broad license to distribute under the Commodore name, Commodore Corporation BV instead simply proposed he buy them out, which would obviously transfer the trademark to him outright. Amiga News has a very nice summary. There was a time when Commodore intellectual property and the Commodore brand had substantial value,...


More from Old Vintage Computing Research

RIP Bill Atkinson

As posted by his family (Facebook link), Bill Atkinson passed away on June 5 from pancreatic cancer at the age of 74. The Macintosh would not have been the same without him (QuickDraw, MacPaint, HyperCard, and so much more). Rest in peace.

Harpoom: of course the Apple Network Server can be hacked into running Doom

Can you hack a $10,000+ Apple server running IBM AIX into playing Doom? Of course you can. Well, you can now. Now, let's go ahead and get the grumbling out of the way. No, the ANS is not running Linux or NetBSD. No, this is not a backport of NCommander's AIX Doom, because that runs on AIX 4.3. The Apple Network Server could run no version of AIX later than 4.1.5 and there are substantial technical differences. (As it happens, the very fact it won't run on an ANS was what prompted me to embark on this port in the first place.) And no, this is not merely an exercise in flogging a geriatric compiler into building Doom Generic, though we'll necessarily do that as part of the conversion. There's no AIX sound driver for ANS audio, so this port is mute, but at the end we'll have a Doom executable that runs well on the ANS console under CDE and has no other system prerequisites. We'll even test it on one of IBM's PowerPC AIX laptops as well. Because we should. The ANS was, almost by default, Apple's first true Unix server since the A/UX-based Workgroup Server 95, but IBM AIX has a long history of its own dating back to the 1986 IBM RT PC. That machine was based on the IBM ROMP CPU as derived from the IBM 801, generally considered the first modern RISC design. AIX started as an early port of UNIX System V Release 3 and is thus a true Unix descendant, but also merged in some code from BSD 4.2 and 4.3. The RT PC ran AIX v1 and v2 as its primary operating systems, though IBM also supported 4.3BSD (ported by IBM as the Academic Operating System) and a spin of Pick OS. Although a truly full-fledged workstation with aggressive support from IBM, it ended up a failure in the market due to comparatively poor performance and notorious problems with its floating point support. Nevertheless, AIX's workstation roots persisted even through the substantial rewrite that became version 3 in 1989, and it was likewise the primary operating system for IBM's next-generation technical workstations, now based on POWER.
AIX 3 introduced AIXwindows, a licensed port of X.desktop from IXI Limited (later acquired by another licensee, SCO) initially based on X11R3 with Motif as the toolkit. In 1993 the "SUUSHI" partnership — named for its principals, Sun, Unix System Laboratories, the Univel joint initiative of Novell and AT&T, SCO, HP and IBM — negotiated an armistice in the Unix Wars, their previous hostilities now being seen as counterproductive against common enemy Microsoft. This partnership led to the Common Open Software Environment (COSE) initiative and the Common Desktop Environment (CDE), derived from HP VUE, which was also Motif-based. AIX might have been the next Mac OS. For that matter, OS/2 was still a thing on the desktop (as Warp 4) despite Workplace OS's failure, Ultimedia was a major IBM initiative in desktop multimedia, and the Common User Access model was part of CDE too. AIX 4 had multimedia capabilities as well through its own native port of Ultimedia, supporting applications like video capture and desktop video conferencing, and even featured several game ports IBM themselves developed — two for AIX 4.1 (Quake and Abuse) and one later for 4.3 (Quake II). The 4.1 game ports run well on ANS AIX with the Ultimedia libraries installed, though oddly Doom was never one of them. IBM cancelled all this with AIX 5L and never looked back. ANS "Harpoon" AIX only made two standard releases, 4.1.4.1 and 4.1.5, prior to Gil Amelio cancelling the line in April 1997. However, ANS AIX is almost entirely binary-compatible with regular 4.1 and there is pretty much no reason not to run 4.1.5, so we'll make that our baseline. Although AIX 4.3 was a big jump forward, for our purposes the major difference is support for X11R6 as 4.1 only supports X11R5. Upgrading the X11 libraries is certainly possible but leads to a non-standard configuration, and anyway, the official id Linux port by Dave Taylor hails from 1994 when many X11R5 systems would have still been out there. 
We would also rather avoid forcing people to install Ultimedia. There shouldn't be anything about basic Doom that would require anything more than the basic operating system we have. NCommander's AIX Doom port is based on Chocolate Doom, taking advantage of SDL 1.2's partial support for AIX. Oddly, the headers for the MIT Shared Memory Extension were reportedly missing on 4.3 despite the X server being fully capable of it, and he ended up cribbing them from a modern copy of Xorg to get it to build. Otherwise, much of his time was spent bringing in other necessary SDL libraries and getting the sound working, neither of which we're going to even attempt. Owing to the ANS' history as a heavily modified Power Macintosh 9500, it uses AWACS audio, for which no AIX driver was ever written, and AIX 4.1 only supports built-in audio on IBM hardware. Until that changes or someone™ figures out an alternative, the most audio playback you'll get from Harpoon AIX is the server quacking on beeps (yes, I said quacking, the same as the Mac alert sound). However, Doom Generic is a better foundation for exotic Doom ports because it assumes very little about the hardware and has straight-up Xlib support, meaning we won't need SDL or even MIT-SHM. It also removes architecture-specific code and is endian-neutral, important because AIX to this day remains big-endian, though this is less of an issue with Doom specifically since it was originally written on big-endian NeXTSTEP 68K and PA-RISC. We now need to install a toolchain, since Harpoon AIX does not include an xlC license, and I'd be interested to hear from anyone trying to build this with it. Although the venerable AIXPDSLIB archive formerly at UCLA has gone to the great bitbucket in the sky, there are some archives of it around and I've reposted the packages I personally kept for 4.1 and 3.2.5 on the Floodgap gopher server.
The most recent compiler AIXPDSLIB had for 4.1 was gcc 2.95.2, though for fun I installed the slightly older egcs 2.91.66, and you will also need GNU make, for which 3.81 is the latest available. These compilers use the on-board assembler and linker. I did not attempt to build a later compiler with this compiler; it may work, and you get to try that yourself. Optionally you can also install gdb 5.3, which I did to stomp out some glitches. These packages are all uncompressed and un-tarred from the root directory in place; they don't need to be installed through smit. I recommend symlinking /usr/local/bin/make as /usr/local/bin/gmake so we know which one we're using. Finally, we'll need a catchy name for our port. Originally it was going to be ANS Doom, but that sounded too much like Anus Doom, which I proffer as a free metal band name and I look forward to going to one of their concerts soon. Eventually I settled on Harpoom, which I felt was an appropriate nod to its history while weird enough to be notable. All of my code is on Github along with pre-built binaries, and all development was done on stockholm, my original Apple Network Server 500 that I've owned continuously since 1998, with a 200MHz PowerPC 604e, 1MB of cache, 512MB of parity RAM and a single disk, here running a clean install of 4.1.5. Starting with Doom Generic out of the box, we'll begin with a Makefile to create a basic Doom that I can run over remote X for convenience. (Since the ANS runs big-endian, if you run a recent little-endian desktop as I do with my POWER9, you'll need to start your local X server with +byteswappedclients or make a configuration file change, or the connection will fail.) I copied Makefile.freebsd and stripped it down. I also removed -Wl,-Map,$(OUTPUT).map from the link step in advance because AIX ld will barf on that. gmake understood the Makefile fine but the compile immediately bombed. It's time to get out that clue-by-four and start bashing the compiler with it.
There is, in fact, no inttypes.h or stdint.h on AIX 4.1. So let's create an stdint.h! We could copy it from somewhere else, but I wanted this to only specify what it needed to. After several false starts I arrived at a final draft, and we include that instead of inttypes.h. Please note it is only valid for 32-bit systems like this one. Obviously we'll change that from <stdint.h> to "stdint.h". doomtype.h has its own definition for a boolean. Despite this definition, undef isn't actually used in the codebase anywhere, and if C++ bool is available then it just typedefs it to boolean. But egcs and gcc come with their own definition, and it is almost identical. Since we know we don't really need undef, we comment out the old definition in doomtype.h, #include <stdbool.h> and just typedef bool boolean like C++. The col_t type is an AIX-specific problem: it conflicts with AIX locales. Since col_t is only found in i_video.c, we'll just change it in four places to doomcol_t. The last problem was a bit of code at the end of I_InitGraphics(). Here we can cheat, being pre-C99, by merely removing the declaration; this is aided by the fact I_InitInput neither passes nor returns anything. The compiler accepted that. X11R5 does not support the X Keyboard Extension (Xkb), so to make the compile go a bit farther I switched out X11/XKBlib.h for X11/keysym.h. We're going to have some missing symbols at link time but we'll deal with that momentarily. DG_Init() is naughty and didn't declare all its variables at the beginning; this version of the compiler can't cope with that and I had to rework the function. Although my revisions compiled, the link failed, as expected, on the missing Xkb symbols. XkbSetDetectableAutoRepeat tells the keyboard driver to not generate synthetic KeyRelease events for this X client when a key is auto-repeating.
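A minimal stdint.h along the lines described might look like this. This is a hypothetical reconstruction, not the author's actual file; it defines only the exact-width types a 32-bit build needs.

```c
/* Hypothetical minimal stdint.h for a 32-bit (ILP32) system such as
   AIX 4.1 on PowerPC. Only what Doom needs is specified; as the text
   warns, this is wrong on anything that isn't a 32-bit system. */
#ifndef DOOM_STDINT_H
#define DOOM_STDINT_H

typedef signed char    int8_t;
typedef unsigned char  uint8_t;
typedef short          int16_t;
typedef unsigned short uint16_t;
typedef int            int32_t;
typedef unsigned int   uint32_t;

#define INT16_MAX  32767
#define INT16_MIN  (-INT16_MAX - 1)
#define INT32_MAX  2147483647
#define INT32_MIN  (-INT32_MAX - 1)
#define UINT32_MAX 4294967295U

#endif /* DOOM_STDINT_H */
```

Dropped into the source tree, `#include "stdint.h"` then satisfies Doom Generic's fixed-width type usage without touching the system headers.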
X11R5 doesn't have this capability, so the best we can do is XAutoRepeatOff, which still gives us single KeyPress and KeyRelease events, but that's because it disables key repeat globally. (A footnote on this later on.) There's not a lot we can do about that, though we can at least add an atexit to ensure the previous keyboard repeat status is restored on quit. Similarly, there is no exact equivalent for XkbKeycodeToKeysym, though we can sufficiently approximate it for our purposes with XLookupKeysym in both places. That was enough to link Doom Generic. Nevertheless, with our $DISPLAY properly set and using the shareware WAD, it immediately fails. The error comes from a block of code in w_wad.c, and with some debugging printfs, we discover the value of additional lumps we're being asked to allocate is totally bogus. This nonsense number is almost certainly an unconverted little-endian value. Values in WADs are stored little-endian, even in the native Macintosh port. Doom Generic does have primitives for handling byteswaps, however, so it seems to have incorrectly detected us as little-endian. After some grepping, this detection quite logically comes from i_swap.h. As we have intentionally not enabled sound, for some reason (probably an oversight) this file ends up defaulting to little endian. Ordinarily this would be a great place to use gcc's byteswap intrinsics, buuuuuuuuut (and I was pretty sure this would happen) a compiler this old doesn't have them, so we're going to have to write some. Since they've been defined as quasi-functions, I decided to do this as actual inlineable functions with a sprinkling of inline PowerPC assembly. The semantics of static inline here are intended to take advantage of the way gcc of this era handled it. These snippets are very nearly the optimal code sequences, at least if the value is already in a register.
If the value was being fetched from memory, you can do the conversion in one step with single instructions (lwbrx or lhbrx), but the way the code is structured we won't know where the value is coming from, so this is the best we can do for now. Atypically, these conversions must be signed. If you miss this detail and only handle the unsigned case, as yours truly did in a first draft, you get weird things like this: on values where the sign was never re-extended, 16-bit quantities started picking up wacky negative values because the most significant byte eventually became all ones, and 32-bit PowerPC GPRs are 32 bits, all the time. Properly extending the sign after conversion was enough to fix it. For the console we need the 8-bit colour build, CMAP256. Since this is a compile-time choice, and we want to support both remote X and console X, we'll just make two builds. I rebuilt the executable this time adding -DCMAP256 to the CFLAGS in the Makefile. The console's PseudoColor 8-bit visuals came up with wrong colours, so we must not be creating a colourmap for the window nor updating it, and indeed there is none in the Doom Generic source code. Fortunately, there is in the original O.G. Linux Doom, so I cribbed some code from it. I added a new function DG_Upload_Palette to accept a 256-colour internal palette from the device-independent portion, turn it into an X Colormap, and push it to the X server with XStoreColors. Because the engine changes the entire palette every time (on damage, artifacts, etc.), we must set every colour change flag in the Colormap, which we do and cache on first run just like O.G. Linux Doom did. The last step is to tag the Colormap we create to the X window using XSetWindowColormap. This is the same way it works in the other AIX games, by the way. Here are some direct grabs from the framebuffer using xwd -icmap -screen -root -out. The Command keys map to nothing, not Meta, Super or even Hyper, in Harpoon's X server. Instead, when pressed or released each Command key generates an XEvent with an unexpected literal keycode of zero.
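The signed-swap detail described above is easy to get wrong, so here is a portable C sketch of the idea. The actual port uses static inline PowerPC assembly, and the names doom_swap16/doom_swap32 are hypothetical (Doom's own macros live in i_swap.h); the point is that the result must be cast back to a signed type so the sign is re-extended after the byte reversal.

```c
#include <stdint.h>

/* Portable sketches of signed byteswap helpers. Doing the shuffle on
   an unsigned copy avoids implementation-defined shifts of negative
   values; the final cast back to a signed type re-extends the sign,
   which is the detail the first draft missed. */
static inline int16_t doom_swap16(int16_t x)
{
    uint16_t u = (uint16_t)x;
    u = (uint16_t)((u >> 8) | (u << 8));
    return (int16_t)u;  /* sign re-extended into the full register */
}

static inline int32_t doom_swap32(int32_t x)
{
    uint32_t u = (uint32_t)x;
    u = (u >> 24) | ((u >> 8) & 0x0000ff00u)
      | ((u << 8) & 0x00ff0000u) | (u << 24);
    return (int32_t)u;
}
```

On PowerPC, gcc reduces each of these to a handful of rlwinm/rlwimi instructions when the value is already in a register; the one-step lwbrx/lhbrx forms only apply when loading directly from memory, as noted above.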
After some experimentation, it turns out that no other key (tested with a full Apple Extended Keyboard II) on a connected ADB keyboard will generate this keycode. I believe this was most likely an inadvertent bug on Apple's part, but I decided to take advantage of it. I don't think it's a good idea to do this if you're running a remote X session, and the check is disabled there, but if you run the 256-colour version on the console, you can use the Command keys to strafe instead (Alt works in either version). Lastly, I added some code to check the default or available visuals so that you can't (easily) run the wrong version in the wrong place, and bumped the optimization level to -O3. And that's the game. Here's a video of it on the console, though I swapped in an LCD display so that the CRT flicker won't set you off. This is literally me pointing my Pixel 7 Pro camera at the screen. It also runs on an IBM RISC ThinkPad-like laptop that isn't, technically, a ThinkPad. You might see this machine in a future entry. Precompiled builds both for 24-bit and 8-bit colour are available on Github. Like Doom Generic and the original Doom, Harpoom is released under the GNU General Public License v2.

prior-art-dept.: The hierarchical hypermedia world of Hyper-G

Welcome back to the Prior Art Department, where today we'll consider a forgotten yet still extant sidebar of the early 1990s Internet. If you had Internet access at home back then, it was almost certainly via dialup modem (like mine was); only the filthy rich had T1 lines or ISDN. Moreover, from a user perspective, the hosts you connected to were their own universe. You got your shell account or certain interactive services over Telnet (and, for many people including yours truly, E-mail), you got your news postings from the spool either locally or NNTP, and you got your files over FTP. It may have originated elsewhere, but everything on the host you connected to was a local copy: the mail you received, the files you could access, the posts you could read. Exceptional circumstances like NFS notwithstanding, what you could see and access was local — it didn't point somewhere else. Around this time, however, was when sites started referencing other sites, much like the expulsion from Eden. In 1990 both HYTELNET and Archie appeared, which were early search engines for Telnet and FTP resources. Since they relied on accurate information about sites they didn't control, both of them had to regularly update their databases. Gopher, when it emerged in 1991, consciously tried to be a friendlier FTP by presenting files and resources hung from a hierarchy of menus, which could even point to menus on other hosts. That meant you didn't have to locally mirror a service to point people at it, but if the referenced menu was relocated or removed, the link to it was broken and the reference's one-way nature meant there was no automated way to trace back and fix it. And then there was that new World Wide Web thing introduced to the public in 1993: a powerful soup of media and hypertext with links that could point to nearly anything, but they were unidirectional as well, and the sheer number even in modest documents could quickly overwhelm users in a rapidly expanding environment.
Not for nothing was the term "linkrot" first attested around 1996, nor was it hard to see how disoriented a user might get following even perfectly valid links down a seemingly infinite rabbithole. The idea of densely interlinked information goes back at least to Vannevar Bush's 1945 "memex" idea, imagining not only literature but photographs, sketches and notes all interconnected with various "trails." The concept was exceedingly speculative and never implemented (nor was Ted Nelson's Xanadu "docuverse" in 1965), but Douglas Engelbart's oN-Line System "NLS" at the Stanford Research Institute was heavily inspired by it, leading to the development of the mouse and the 1968 Mother of All Demos. The notion wasn't new on computers, either, such as 1967's Hypertext Editing System on an IBM System/360 Model 50, and early microcomputer implementations like OWL Guide appeared in the mid-1980s on workstations and the Macintosh. Hermann Maurer, then a professor at the Graz University of Technology in Austria, had been interested in early computer-based information systems for some time, pioneering work on early graphic terminals instead of the pure text ones commonly in use. One of these was the MUPID series, a range of Z80-based systems first introduced in 1981 ostensibly for the West German videotex service Bildschirmtext but also standalone home computers in their own right. This and other work happened at what was then the Institutes for Information Processing Graz, or IIG, later the Institute for Information Processing and Computer-Supported New Media (IICM). Subsequently the IIG started researching new methods of computer-aided instruction by developing an early frame-based hypermedia system called COSTOC (originally "COmputer Supported Teaching Of Computer-Science" and later "COmputer Supported Teaching? Of Course!") in 1985, which by 1989 had been commercialized, was in use at about twenty institutions on both sides of the Atlantic, and contained hundreds of one-hour lessons.
COSTOC's successful growth also started to make it unwieldy, and a planned upgrade in 1989 called HyperCOSTOC proposed various extensions to improve authoring, delivery, navigation and user annotation. Meanwhile, it was only natural that Maurer's interest would shift to the growing early Internet, at that time under the U.S. National Science Foundation and by late that year numbering over 150,000 hosts. Maurer's group decided to consolidate their experiences with COSTOC and HyperCOSTOC into what they termed "the optimal large-scale hypermedia system," code-named Hyper-G (the G, natürlich, for Graz). It would be networked and searchable, preserve user orientation, and maintain correct and up-to-date linkages between the resources it managed. In January 1990, the Austrian Ministry of Science agreed to fund a prototype for which Maurer's grad student Frank Kappe formally wrote the architectural design as his PhD dissertation. Other new information technologies like Gopher and the Web were emerging at the same time, at the University of Minnesota and CERN respectively, and the Hyper-G team worked with the Gopher and W3 teams so that the Hyper-G server could also speak to those servers and clients. The prototype emerged in January 1992 as the University's new information system TUGinfo. Because Hyper-G servers could also speak Gopher and HTTP, TUGinfo was fully accessible by the clients of the day, but it could also be used with various Hyper-G line-mode clients. One of these was a bespoke tool named UniInfo which doesn't appear to have been distributed outside the University and is likely lost. The other is called the Hyper-G Terminal Viewer, or hgtv (not to be confused with the vapid cable channel), which became a standard part of the server for administration tasks. 
The success of TUGinfo convinced the European Space Agency to adopt Hyper-G for its Guide and Directory in the fall, after which came a beta native Windows client called Amadeus in 1993 and a beta Unix client called Harmony in 1994. Yours truly remembers accessing some of these servers through a web browser around this time, which is how this whole entry got started: trying to figure out where Hyper-G ended up. While the Internet Archive holds a partial copy of these files, it lacks, for example, any of the executables for the Harmony client. Fortunately there were also at least two books on Hyper-G, one by Hermann Maurer himself, and a second by Wolfgang Dalitz and Gernot Heyer, two partnering researchers then at the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB). Happily these two books have CDs with full software kits, and the later CD from Dalitz and Heyer's book is what we'll use here. I've already uploaded its contents to the Floodgap Gopher server to serve as a supreme case of historical irony. Hyper-G organizes its resources into collections. A resource must belong to at least one collection, but it may belong to multiple collections, and a collection can span more than one server. A special type of collection is the cluster, where semantically related materials are grouped together such as multiple translations, alternate document formats, or multimedia aggregates (e.g., text and various related images or video clips). We'll look at how this appears practically when we fire the system up. Any resource may link to another resource. Like HTML, these links are called anchors, but unlike HTML, anchors are bidirectional and can occur in any media type like PostScript documents, images, or even audio/video. Because they can be followed backwards, clients can walk the chains to construct a link map, like this one for the man page for grep(1), showing what it connects to, and what those pages connect to. Hyper-G clients could construct such maps on demand and all of the resources it shows can of course be jumped to directly.
This was an obvious aid to navigation because you could always find out where you were in relation to anything else. Under the hood, anchors aren't part of the document, or even hidden within it; they're part of the metadata. Consider a real section of a serialized Hyper-G database. This textual export format (HIF, the Hyper-G Interchange Format) is how a database could be serialized and backed up or transmitted to another server, including internal resources. Everything is an object and has an ID, with resources existing at a specified path (either a global ID based on its IPv4 address or a filesystem path), and the parent indicating the name of the collection the resource belongs to. These fields are all searchable, as are text resources via full-text search, all of which is indexed immediately. You don't need to do anything to set up a site search facility; it comes built-in. Anchors are connected at either byte ranges or spatial/time coordinates within their resource. The excerpt here defines three source anchors, i.e., links that go out to another resource. uudecoding the text fragment and dumping it, the byte offsets in the anchor sections mean the text ranges for hg_comm.h, hg_comm.c and hg_who.c will be linked to those respective entries as destination anchors in the database. Looking at the HIF header for hg_comm.h, these fields are indexed, so the server can walk them backwards or forwards, and the operation is very fast. The title and its contents and even its location can change; the link will always be valid as long as the object exists, and if it's later deleted, the server can automatically find and remove all anchors to it. Analogous to an HTML text fragment, destination anchors can provide a target covering a specific position and/or portion within a text resource.
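The metadata-based, bidirectional anchor scheme described above can be sketched as a toy data model. Everything here (the struct layout, the field names, the function) is a hypothetical illustration, not Hyper-G's actual on-disk format; it just shows why finding every inbound link is a simple metadata scan rather than a web-style crawl.

```c
/* Toy model of Hyper-G-style anchors: links live in metadata as byte
   ranges against object IDs, not as markup inside the document, so
   they can be walked in either direction. Purely illustrative. */
typedef struct {
    int  src_object;  /* object the source anchor lives in      */
    int  dst_object;  /* object the anchor points to            */
    long start, end;  /* byte range of the anchor in src's text */
} Anchor;

/* All inbound links to an object: scan the anchor table for entries
   whose destination matches. A deleted object's anchors can be found
   and removed the same way, which is how dangling links are avoided. */
static int links_into(const Anchor *a, int n, int dst, int *out_src)
{
    int found = 0, i;
    for (i = 0; i < n; i++)
        if (a[i].dst_object == dst)
            out_src[found++] = a[i].src_object;
    return found;
}
```

A real server indexes these fields rather than scanning linearly, but the principle is the same: because the anchor is stored independently of the text, the text can move or change without invalidating the link.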
As the process requires creating and maintaining various unique IDs, Hyper-G clients have authoring capability as well, allowing a user to authenticate and then insert or update resources and anchors as permitted. We're going to do exactly that. Since resources don't have to be modified to create an anchor, even read-only resources such as those residing on a mounted CD-ROM could be linked and have anchors of their own. Instead of having their content embedded in the database, resources can also appear as external entities pointed to by conventional filesystem paths. This would have been extremely useful for multimedia in particular considering the typical hard disk size of the early 1990s. Similarly, Internet resources on external servers could also be part of the collection: while resources that are not Hyper-G will break the link chain, the connection can still be expressed, and at least the object itself can be tracked by the database. The protocol could be Hyper-G, Gopher, HTTP, WAIS, Telnet or FTP. It was also possible to create SQL queries this way, which would be performed live. Later versions of the server even had a CGI-compatible scripting ability. I mentioned that a user can authenticate to the server as well as remain anonymous. When logged in, authenticated access allows not only authoring and editing but also commenting through annotations (and annotating the annotations). This feature is obviously useful for things like document review, but could also have served as a means for a blog with comments, well before the concept existed formally, or a message board or BBS. Authenticated access is also required for resources with limited permissions, or those that can only be viewed for a limited time or require payment (yes, all this was built-in). In the text file you can also see markup tags that resemble, and in some cases are the same as, but in fact are not, HTML.
These markup tags are part of HTF, the Hyper-G Text Format, Hyper-G's native text document format. HTF is dynamically converted for Gopher or Web clients; there is a corresponding HTML tag for most HTF tags, eventually supporting much of HTML 3.0 except for tables and forms, and most HTML entities are the same in HTF. Anchor tags in an HTF document are handled specially: upon upload the server strips them off and turns them into database entries, which the server then maintains. In turn, anchor tags are automatically re-inserted according to their specified positions with current values when the HTF resource is fetched or translated. The server itself is split into several processes, among them a database server (dbserver) that handles the database and the full-text index server (ftserver) used for document search. The document cache server (dcserver), however, has several functions: it serves and stores local documents on request, it runs CGI scripts (using the same Common Gateway Interface standard as a webserver of the era would have), and it requests and caches resources from remote servers referenced on this one, indicated by the upper 32 bits of the global ID. In earlier versions of the server, clients were responsible for other protocols: a Hyper-G client, if presented with a Gopher or HTTP URL, would have to go fetch it. Tying all this together is hgserver (no relation to Mercurial). This talks directly to other Hyper-G servers (using TCP port 418), and also directly to clients with port 418 as a control connection and a dynamically assigned port number for document transfer (not unlike FTP). Since links are bidirectional, Hyper-G servers contact other Hyper-G servers to let them know a link has been made (or, possibly, removed), and then those servers will send them updates. There are hazards with this approach.
One is that it introduces an inevitable race condition between the change occurring on the upstream and any downstream(s) knowing about it, so earlier implementations would wait until all the downstream(s) acknowledged the change before actually making it effective. Unfortunately this ran into a second problem: particularly for major Hyper-G sites like IIG/IICM itself, an upstream server could end up sending thousands of update notifications after making any change at all, and some downstreams might not respond in a timely fashion for any number of reasons. Later servers use a probabilistic version of the "flood" algorithm from the Harvest resource discovery system (perhaps a future Prior Art entry) where downstreams pass the update along to a smaller subset of hosts, who in turn do the same to another subset, until the message has propagated throughout the network (p-flood). Any temporary inconsistency is simply tolerated until the message makes the rounds. This process was facilitated because all downstreams knew about all other Hyper-G servers, and updates to this master list were sent in the same manner. A new server could get this list from IICM after installation to bootstrap itself, becoming part of a worldwide collection called the Hyper Root. Recall what happened when the University of Minnesota started requiring license fees for commercial use of their Gopher server implementation. Subsequent posts were made to clarify this only applied to UMN gopherd, and then only to commercial users, nor is it clear exactly how much that license fee was or whether anybody actually paid, but the damage was done and the Web, freely available from the beginning, continued unimpeded on its meteoric rise. (UMN eventually relicensed 2.3.1 under the GNU General Public License in 2000.) Hyper-G's principals would no doubt have known of this cautionary tale. On the other hand, they also clearly believed that they possessed a fundamentally superior product to existing servers that people would be willing to pay good money for.
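The flood-style propagation described earlier can be sketched as a toy simulation. This is not p-flood itself (which picks its subsets probabilistically from the master server list); for the sake of a deterministic illustration, each informed server here simply forwards the update to the next FANOUT servers in a ring, and we count how many rounds of forwarding it takes for the whole network to hear. All names and parameters are hypothetical.

```c
#include <string.h>

#define NSERVERS 64  /* size of the toy network   */
#define FANOUT    3  /* servers notified per hop  */

/* Simulate flood-style propagation of one update: each round, every
   server that has heard the update passes it to a small fixed subset
   instead of the origin notifying all NSERVERS itself. Returns the
   number of rounds until everyone is informed, i.e. how long the
   tolerated temporary inconsistency lasts in this toy model. */
static int flood_rounds(void)
{
    unsigned char informed[NSERVERS] = { 0 };
    int rounds = 0, done = 0;

    informed[0] = 1;  /* the server where the link change happened */
    while (!done) {
        unsigned char next[NSERVERS];
        int i, j;
        memcpy(next, informed, sizeof next);
        for (i = 0; i < NSERVERS; i++)
            if (informed[i])
                for (j = 1; j <= FANOUT; j++)
                    next[(i + j) % NSERVERS] = 1;
        memcpy(informed, next, sizeof informed);
        rounds++;
        done = 1;
        for (i = 0; i < NSERVERS; i++)
            if (!informed[i]) { done = 0; break; }
    }
    return rounds;
}
```

The upstream only ever contacts FANOUT peers per round rather than thousands of downstreams at once, which is exactly the load problem the scheme was adopted to solve; the price is that consistency arrives gradually rather than atomically.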
Indeed, just as with COSTOC, spinning Hyper-G/HyperWave off as a commercial enterprise had been planned from the very beginning. Hyper-G, now renamed HyperWave, officially became a commercial product in June 1996. This shift was facilitated by the fact that no publicly available version had ever been open-source. Early server versions of Hyper-G had no limit on users, but once HyperWave was productized, its free unregistered tier imposed document restrictions and a single-digit login cap (anonymous users could of course still view HyperWave sites without logging in, but they couldn't post anything either). Non-commercial entities could apply for a free license key, something that is obviously no longer possible, but commercial use required a full paid license starting at US$3600 for a 30-user license (in 2025 dollars about $6900) or $30,000 for an unlimited one ($57,600). An early 1997 version of this commercial release appears to be what's available from the partial mirror at the Internet Archive, which comes with a license key limiting you to four users and 500 documents, and that key expired on July 31, 1997. This license is signed with a 128-bit checksum that might be brute-forceable on a modern machine, but you get to do that yourself. Fortunately, the CD from our HyperWave book, although also published in 1996, predates the commercial shift; it is an offline and complete copy of the Hyper-G FTP server as it existed on April 13, 1996, with all the clients and server software then available. We'll start with the Hyper-G server portion, which on disc offers official builds for SunOS 4.1.3, Solaris 2.2 (SPARC only), HP-UX 9.01 (PA-RISC only), Ultrix 4.2 (MIPS DECstation "PMAX" only), IRIX 3.0 (SGI MIPS), Linux/x86 1.2, OSF/1 3.2 (on Alpha) and a beta build for IBM AIX 4.1.
Apple Network Server 500 would have been perfect: it has oodles of disk space (like, whole gigabytes, man), a full 200MHz PowerPC 604e upgrade, zippy 20MB/s SCSI-2 and a luxurious 512MB of parity RAM. I'll just stop here and say that it ended in failure because both the available AIX versions on disc completely lack the Gopher and Web gateways, without which the server will fail to start. I even tried the Internet Archive pay-per-user beta version and it still lacked the Gopher gateway, without which it also failed to start, and the new Web gateway in that release seemed to have glitches of its own (though the expired license key may have been a factor). Although there are ways to hack around the startup problems, doing so only made it into a pure Hyper-G system with no other protocols, which doesn't make a very good demo for our purposes, and I ended up spending the rest of the afternoon manually uninstalling it. In fairness it doesn't appear AIX was ever officially supported. Otherwise, I don't have a PA-RISC HP-UX server up and running right now (just a 68K one running HP-UX 8.0), and while the SunOS 4 version should be binary compatible with my Solbourne S3000 running OS/MP 4.1C, I wasn't sure if the 56MB of RAM it has was enough if I really wanted to stress-test it. That left the SGI Indy, and it has 256MB of RAM. It runs IRIX 6.5.22, but that should still start these binaries. That settled the server part. For the client hardware, however, I wanted something particularly special. My original Power Macintosh 7300 (now with a G4/800) sitting on top will play a supporting role running Windows 98 in emulation for Amadeus, and also testing our Hyper-G's Gopher gateway with UMN TurboGopher, which is appropriate because when it ran NetBSD it was gopher.floodgap.com. Today, though, it runs Mac OS 9.1, and the planned native Mac OS client for Hyper-G was never finished nor released.
Our other choices for Harmony are the same as for the server, sans AIX 4.1, which doesn't seem to have been supported as a client at all. Unfortunately the S3000 is only 36MHz, so it wouldn't be particularly fast at the hypermedia features, and I was concerned about the Indy running as client and server at the same time. But while we don't have any PA-RISC servers running, we do have a couple choices in PA-RISC workstations, and one of them is an especially rare bird. Let's meet ... ruby, named for HP chief architect Ruby B. Lee, who was a key designer of the PA-RISC architecture and its first single-chip implementation. This is an RDI PrecisionBook 160 laptop with a 160MHz PA-7300LC CPU, one of the relatively few PA-RISC chips with support for an L2 cache (1MB, in this case), and a member of the last and most powerful family of 32-bit PA-RISC 1.1 chips. Essentially a portable HP Visualize B160L, even appearing under the same exact model number to HP-UX, it came in the same case as its better-known SPARC UltraBook siblings (I have an UltraBook IIi here as well) and was released in 1998, just prior to RDI's buyout by Tadpole. This unit has plenty of free disk space, 512MB of RAM and runs HP-UX 11.00, all of which should run Harmony splendidly, and its battery incredibly still holds some charge. Although the on-board HP Visualize-EG graphics don't have 3D acceleration, neither does the XL24 in our Indy, and the PA-7300LC will be better at software rendering than the Indy's R4400. Fortunately, the Visualize-EG has very good 2D performance for the time. With our hardware selected, it's time to set up the server side. We'll do this by the literal book, and the book in this case recommends creating a regular user hgsystem belonging to a new group hyperg under which the server processes should run. IRIX makes this very easy. hyperg is the user's sole group membership, ... tcsh, which is fine by me because other shells are for people who don't know any better.
Logging in and checking our prerequisites: This is the Perl that came with 6.5.22. Hyper-G uses Perl scripts for installation, but they will work under 4.036 and later (Perl 5 isn't required), and pre-built Perl distributions are also included on the CD. Ordinarily, and this is heavily encouraged in the book and existing documentation, you would run one of these scripts to download, unpack and install the server. At the time you had to first manually request permission from an E-mail address at IICM to download it, including the IPv4 address you were going to connect from, the operating system and of course the local contact. Fortunately some forethought was applied and an alternative offline method was also made available if you already had the tar archive in your possession, or else this entire article might not have been possible. Since the CD is a precise copy of the FTP site, even including the READMEs, we'll just pretend to be the FTP site for dramatic purposes. The description files you see here are exactly what you would have seen accessing TU Graz's FTP site in 1996.

ftp> quote PASV
227 Entering Passive Mode (XXX).
ftp> cd /run/media/spectre/Hyper-G/unix/Hyper-G
250-You have entered the Hyper-G archive (ftp://ftp.iicm.tu-graz.ac.at/pub/Hyper-G).
250-================================================================================
250-
250-What's where:
250-
250- Server        Hyper-G Server Installation Script
250- UnixClient    the vt100 client for UNIX - Installation Script
250- Harmony       Harmony (the UNIX/X11 client)
250- Amadeus       Amadeus (the PC/Windows client)
250- VRweb         VRweb (free VRML browser for Hyper-G, Mosaic & Gopher)
250- papers        documentation on Hyper-G (mainly PostScript)
250- talk          slides & illustrations we use for Hyper-G talks
250-
250-Note: this directory is mirrored daily (nightly) to:
250-
250- Australia     ftp://ftp.cinemedia.com.au/pub/Hyper-G
250-               ftp://gatekeeper.digital.com.au/pub/Hyper-G
250- Austria       ftp://ftp.tu-graz.ac.at/pub/Hyper-G
250- Czech Rep.    ftp://sunsite.mff.cuni.cz/Net/Infosystems/Hyper-G
250- Germany       ftp://elib.zib-berlin.de/pub/InfoSystems/Hyper-G
250-               ftp://ftp.ask.uni-karlsruhe.de/pub/infosystems/Hyper-G
250- Italy         ftp://ftp.esrin.esa.it/pub/Hyper-G
250- Poland        ftp://sunsite.icm.edu.pl/pub/Hyper-G
250- Portugal      ftp://ftp.ua.pt/pub/infosystems/www/Hyper-G
250- Spain         ftp://ftp.etsimo.uniovi.es/pub/Hyper-G
250- Sweden        ftp://ftp.sunet.se/pub/Networked.Information.Retrieval/Hyper-G
250- UK            ftp://unix.hensa.ac.uk/mirrors/Hyper-G
250- USA           ftp://ftp.ncsa.uiuc.edu/Hyper-G
250-               ftp://mirror1.utdallas.edu/pub/Hyper-G
250 Directory successfully changed.
ftp> cd Server
250 Directory successfully changed.
ftp> get Hyper-G_Server_21.03.96.SGI.tar.gz
local: Hyper-G_Server_21.03.96.SGI.tar.gz remote: Hyper-G_Server_21.03.96.SGI.tar.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for Hyper-G_Server_21.03.96.SGI.tar.gz (2582212 bytes).
226 Transfer complete.
2582212 bytes received in 3.12 seconds (808.12 Kbytes/s)
ftp> get Hyper-G_Tools_21.03.96.SGI.tar.gz
local: Hyper-G_Tools_21.03.96.SGI.tar.gz remote: Hyper-G_Tools_21.03.96.SGI.tar.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for Hyper-G_Tools_21.03.96.SGI.tar.gz (3337367 bytes).
226 Transfer complete.
3337367 bytes received in 3.95 seconds (825.82 Kbytes/s)
ftp> ^D
221 Goodbye.

Although everything installs into ~hgsystem, a central directory (by default /usr/local/Hyper-G) holds links to it as a repository. We'll create that and sign it over to hgsystem as well. Next, we unpack the server (first) package and start the offline installation script. This package includes the server binaries, server documentation and HTML templates. Text in italics was my response to prompts, which the script stores in configuration files and in your environment variables, patching the startup scripts for hgsystem to instantiate them on login.
Floodgap Hyper-G
Full internet host name of this machine: indy.floodgap.com
installed bin/scripts/hginstserver
installed bin/SGI/dbcontrol
[...]
installed HTML/ge/options.html
installed HTML/ge/result_head.html
installed HTML/ge/search.html
installed HTML/ge/search_simple.html
installed HTML/ge/status.html

The developers did make this piece open-source so a paranoid sysadmin could see what they were running as root (in this case setuid). Now for the tools. This includes administration utilities but also the hgtv client and additional documentation. The install script is basically the same for the tools as for the server. Last but not least, we will log out and log back in to ensure that our environment is properly set up, and then set the password on the internal hgsystem user (which is not the same as hgsystem, the Unix login). This account is set up by default in the database, and to modify it we'll use the hgadmin tool. This tool is always accessible from the hgsystem login in case the database gets horribly munged. That should be all that was necessary (NARRATOR: It wasn't.), but starting up the server still failed. It's possible the tar offline install utility wasn't updated as often as the usual one; in any event, it seemed out-of-sync with what the startup script was actually looking for. Riffling through the Perl and shell-script code to figure out the missing piece, it turned out I had to manually create ~hgsystem/HTF and ~hgsystem/server, then add two more environment variables to ~hgsystem/.hgrc (nothing to do with Mercurial): Logging out and logging back in to refresh the environment, ... we're up! Immediately I decided to see if the webserver would answer. It did, buuuuut ... (uname identifies this PrecisionBook 160 as a 9000/778, which is the same model number as the Visualize B160L workstation.) Netscape Navigator Gold 3.01 is installed on this machine, and we're going to use it later, but I figured you'd enjoy a crazier choice. Yes, you read that right ...
on those platforms as well as Tru64, but no version for them ever emerged. After releasing 5.0 SP1 in 2001, Microsoft cited low uptake of the browser and ended all support for IE Unix the following year. As for Mainsoft, they became notorious for the 2004 Microsoft source code leak when a Linux core in the file dump fingered them as the source; Microsoft withdrew WISE completely, eliminating MainWin's viability as a commercial product, though Mainsoft remains in business today as Harmon.ie since 2010. IE Unix was a completely different codebase from what became Internet Explorer 5 on Mac OS X (with a completely different layout engine, Tasman) and of course is not at all related to modern Microsoft Edge either. Because there are Y2K issues, the server fails to calculate its own uptime, but everything else basically works.

ruby:/pro/harmony/% uname -a
HP-UX ruby B.11.00 A 9000/778 2000295180 two-user license
ruby:/pro/harmony/% model
9000/778/B160L
ruby:/pro/harmony/% grep -i b160l /usr/sam/lib/mo/sched.models
B160L 1.1e PA7300
ruby:/pro/harmony/% su
Password:
# echo itick_per_usec/D | adb -k /stand/vmunix /dev/mem
itick_per_usec:
itick_per_usec: 160
# ^D
ruby:/pro/harmony/% cat /var/opt/ignite/local/hw.info
disk: 8/16/5.0.0 0 sdisk 188 31 0 ADTX_AXSITS2532R_014C 4003776 /dev/rdsk/c0t0d0 /dev/dsk/c0t0d0 -1 -1 5 1 9
disk: 8/16/5.1.0 1 sdisk 188 31 1000 ADTX_AXSITS2532R_014C 6342840 /dev/rdsk/c0t1d0 /dev/dsk/c0t1d0 -1 -1 4 1 9
cdrom: 8/16/5.4.0 2 sdisk 188 31 4000 TOSHIBA_CD-ROM_XM-5701TA 0 /dev/rdsk/c0t4d0 /dev/dsk/c0t4d0 -1 -1 0 1 0
lan: 8/16/6 0 lan0 lan2 0060B0C00809 Built-in_LAN 0
graphics: 8/24 0 graph3 /dev/crt0 INTERNAL_EG_DX1024 1024 768 16 755548327
ext_bus: 8/16/0 1 CentIf n/a Built-in_Parallel_Interface
ext_bus: 8/16/5 0 c720 n/a Built-in_SCSI
ps2: 8/16/7 0 ps2 /dev/ps2_0 Built-in_Keyboard/Mouse
processor: 62 0 processor n/a Processor

an old version ready to run.
Otherwise images and most other media types are handled by Harmony itself, so let's grab and set up the client now. We'll want both Harmony proper and, later when we play a bit with the VRML tools, VRweb. Notionally these both come in Mesa and IRIX GL or OpenGL versions, but this laptop has no 3D acceleration, so we'll use the Mesa builds, which are software-rendered and require no additional 3D support.

ftp> cd /run/media/spectre/Hyper-G/unix/Hyper-G/Harmony
250-
250-You have entered the Harmony archive.
250-
250-The current version is Release 1.1
250-and there are a few patched binaries in the
250-patched-bins directory.
250-
250-Please read INSTALLATION for full installation instructions.
250-
250-Mirrors can be found at:
250-
250- Australia     ftp://ftp.cinemedia.com.au/pub/Hyper-G
250- Austria       ftp://ftp.tu-graz.ac.at/pub/Hyper-G/
250- Germany       ftp://elib.zib-berlin.de/pub/InfoSystems/Hyper-G/
250-               ftp://ftp.ask.uni-karlsruhe.de/pub/infosystems/Hyper-G
250- Italy         ftp://ftp.esrin.esa.it/pub/Hyper-G
250- Spain         ftp://ftp.etsimo.uniovi.es/pub/Hyper-G
250- Sweden        ftp://ftp.sunet.se/pub/Networked.Information.Retrieval/Hyper-G
250- New Zealand   ftp://ftp.cs.auckland.ac.nz/pub/HMU/Hyper-G
250- UK            ftp://unix.hensa.ac.uk/mirrors/Hyper-G
250- USA           ftp://ftp.utdallas.edu/pub/Hyper-G
250-               ftp://ftp.ncsa.uiuc.edu/Hyper-G
250-               ftp://ftp.ua.pt/pub/infosystems/www/Hyper-G
250-
250-and a distributing WWW server:
250-
250- http://ftp.ua.pt/infosystems/www/Hyper-G
250-
250 Directory successfully changed.
ftp> get harmony-1.1-HP-UX-A.09.01-mesa.tar.gz
get harmony-1.1-HP-UX-A.09.01-mesa.tar.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for harmony-1.1-HP-UX-A.09.01-mesa.tar.gz (11700275 bytes).
226 Transfer complete.
11700275 bytes received in 11.95 seconds (956.02 Kbytes/s)
ftp> cd ../VRweb
250-
250-ftp://ftp.iicm.tu-graz.ac.at/pub/Hyper-G/VRweb/
250-... here you find the VRweb (VRML 3D Viewer) distribution.
250-
250-The current release is 1.1.2 of Mar 13 1996.
250-
250-Note: this directory is mirrored daily (nightly) to:
250-
250- Australia     ftp://ftp.cinemedia.com.au/pub/Hyper-G/VRweb
250-               ftp://gatekeeper.digital.com.au/pub/Hyper-G/VRweb
250- Austria       ftp://ftp.tu-graz.ac.at/pub/Hyper-G/VRweb
250- Czech Rep.    ftp://sunsite.mff.cuni.cz/Net/Infosystems/Hyper-G/VRweb
250- Germany       ftp://elib.zib-berlin.de/pub/InfoSystems/Hyper-G/VRweb
250-               ftp://ftp.ask.uni-karlsruhe.de/pub/infosystems/Hyper-G/VRweb
250- Italy         ftp://ftp.esrin.esa.it/pub/Hyper-G/VRweb
250- Poland        ftp://sunsite.icm.edu.pl/pub/Hyper-G/VRweb
250- Portugal      ftp://ftp.ua.pt/pub/infosystems/www/Hyper-G/VRweb
250- Spain         ftp://ftp.etsimo.uniovi.es/pub/Hyper-G/VRweb
250- Sweden        ftp://ftp.sunet.se/pub/Networked.Information.Retrieval/Hyper-G/VRweb
250- UK            ftp://unix.hensa.ac.uk/mirrors/Hyper-G/VRweb
250- USA           ftp://ftp.ncsa.uiuc.edu/Hyper-G/VRweb
250-               ftp://mirror1.utdallas.edu/pub/Hyper-G/VRweb
250 Directory successfully changed.
ftp> cd UNIX
250-This directory contains the VRweb 1.1.2e distribution for UNIX/X11
250-
250-
250-vrweb-1.1.2e-[GraphicLibrary]-[Architecture]:
250-    VRweb scene viewer for viewing VRML files
250-    as external viewer for your WWW client.
250-
250-harscened-[GraphicLibrary]-[Architecture]:
250-    VRweb for Harmony. Only usable with Harmony, the Hyper-G
250-    client for UNIX/X11.
250-
250-[GraphicLibry]: ogl ... OpenGL (available for SGI, DEC Alpha)
250-                mesa ... Mesa (via X protocol; for all platforms)
250-
250-help.tar.gz
250-    on-line Help, includes installation guide
250-
250-vrweb.src-1.1.2e.tar.gz
250-    VRweb source code
250-
250 Directory successfully changed.
ftp> get vrweb-1.1.2e-mesa-HPUX9.05.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for vrweb-1.1.2e-mesa-HPUX9.05.gz (1000818 bytes).
226 Transfer complete.
1000818 bytes received in 1.23 seconds (794.05 Kbytes/s)
ftp> ^D
221 Goodbye.

Everything was unpacked to the /pro logical volume, which has ample space.
Because this is for an earlier version of HP-UX, although it should run, we'd want to make sure it isn't using outdated libraries or paths. Unfortunately, checking for this in advance is made difficult by the fact that ldd in HP-UX 11.00 will only show dependencies for 64-bit binaries, and this is a 32-bit binary on a 32-bit CPU. So we have to do it the hard way. For some reason the unversioned symlinks for the shared libraries below didn't exist on this machine, and I had to discover that one by one.

lrwxr-xr-x 1 root sys 23 Jan 17 2001 /usr/lib/libX11.1 -> /usr/lib/X11R5/libX11.1
lrwxr-xr-x 1 root sys 23 Jan 17 2001 /usr/lib/libX11.2 -> /usr/lib/X11R6/libX11.2
lrwxr-xr-x 1 root sys 23 Jan 17 2001 /usr/lib/libX11.3 -> /usr/lib/X11R6/libX11.3
ruby:/pro/% su
Password:
# cd /usr/lib
# ln -s libX11.1 libX11.sl
# ^D
ruby:/pro/% harmony/bin/harmony
/usr/lib/dld.sl: Can't open shared library: /usr/lib/libXext.sl
/usr/lib/dld.sl: No such file or directory
Abort
ruby:/pro/% ls -l /usr/lib/libXext*
lrwxr-xr-x 1 root sys 24 Jan 17 2001 /usr/lib/libXext.1 -> /usr/lib/X11R5/libXext.1
lrwxr-xr-x 1 root sys 24 Jan 17 2001 /usr/lib/libXext.2 -> /usr/lib/X11R6/libXext.2
lrwxr-xr-x 1 root sys 24 Jan 17 2001 /usr/lib/libXext.3 -> /usr/lib/X11R6/libXext.3
ruby:/pro/% su
Password:
# cd /usr/lib
# ln -s libXext.1 libXext.sl
# ^D
ruby:/pro/% harmony/bin/harmony
--- Harmony Version 1.1 (MESA) of Fri 15 Dec 1995 ---
Enviroment variable HARMONY_HOME not set
ruby:/pro/harmony/% gunzip vrweb-1.1.2e-mesa-HPUX9.05.gz
ruby:/pro/harmony/% file vrweb-1.1.2e-mesa-HPUX9.05
vrweb-1.1.2e-mesa-HPUX9.05: PA-RISC1.1 shared executable dynamically linked
ruby:/pro/harmony/% chmod +x vrweb-1.1.2e-mesa-HPUX9.05
ruby:/pro/harmony/% ./vrweb-1.1.2e-mesa-HPUX9.05
can't open DISPLAY
ruby:/pro/harmony/% mv vrweb-1.1.2e-mesa-HPUX9.05 bin/vrweb

Remember the -hghost option must be passed to Harmony or it will connect to the IICM by default.
starmony:

#!/bin/csh
setenv HARMONY_HOME /pro/harmony
set path=($HARMONY_HOME/bin $path)
setenv XAPPLRESDIR $HARMONY_HOME/misc/
$HARMONY_HOME/bin/harmony -hghost indy &

^D
ruby:/pro/harmony/% ps -fu spectre
UID PID PPID C STIME TTY TIME COMMAND
spectre 2172 2170 0 14:55:25 pts/0 0:00 /usr/bin/tcsh
spectre 1514 1 0 12:34:19 ? 0:00 /usr/dt/bin/ttsession -s
spectre 1535 1534 0 13:18:41 pts/ta 0:01 -tcsh
spectre 1523 1515 0 12:34:20 ? 0:04 dtwm
spectre 1515 1510 0 12:34:19 ? 0:00 /usr/dt/bin/dtsession
spectre 2210 1535 3 15:28:44 pts/ta 0:00 ps -fu spectre
spectre 1483 1459 0 12:34:13 ? 0:00 /usr/dt/bin/Xsession /usr/dt/bin/Xsession
spectre 1510 1483 0 12:34:15 ? 0:00 /usr/bin/tcsh -c unsetenv _ PWD;
spectre 2169 1523 0 14:55:24 ? 0:00 /usr/dt/bin/dtexec -open 0 -ttprocid 1.1eAEIw 01 1514 134217
spectre 2170 2169 0 14:55:24 ? 0:00 /usr/dt/bin/dtterm
spectre 2194 2193 0 15:01:47 pts/0 0:01 hartextd -c 49285
spectre 2193 1 0 15:01:42 pts/0 0:06 /pro/harmony/bin/harmony -hghost indy

So far there's only hgsystem anyway, but in a larger deployment you'd of course have multiple users with appropriate permissions. Hyper-G users are specific to the server; they do not have a Unix uid. Users may be in groups and may have multiple simultaneously valid passwords (this is to facilitate automatic login from known hosts, where the password can be unique to each host). Each user gets their own "home collection" that they may maintain, like a home directory. Each user also has a credit account which is automatically billed when pay resources are accessed, though the Hyper-G server is agnostic about how account value is added. We can certainly whip out hgadmin again and do it from the command line, but we can also create users from a graphical administration tool that comes as part of Harmony. This tool is haradmin, the Harmony Administrator. DocumentType is what we'd consider the "actual" object type.
By default, all users, including anonymous ones, can view objects, but cannot write or delete ("unlink") anything; only the owner of the object and the system administrators can do those. In practical terms the unprivileged user with no group memberships we created has the same permissions as an anonymous drive-by right now. However, because this user is authenticated, we can add permissions to it later. I've censored the most significant word in this and other screenshots with global IDs for this machine because it contains the Indy's IPv4 address, and you naughty little people out there don't need to know the details of my test network.

% hifimport rootcollection cleaned-tech.hif
Username: hgsystem
Password:
hifimport: HIF 1.0
hifimport: #
hifimport: #
hifimport: Collection en:Technical Documentation on Hyper-G
hifimport: Text en:Hyper-G Anchor Specification Version 1.0
[...]
hifimport: # END COLLECTION obj.rights
hifimport: # END COLLECTION hg_server
hifimport: Collection en:Software User Manuals (man-pages)
hifimport: Text en:dbserver.control (1)
hifimport: # already visited: 0x000003b7 (en:dcserver (1))
hifimport: # already visited: 0x000003bf (en:ftmkmirror (1))
hifimport: # already visited: 0x000003c2 (en:ftquery (1))
hifimport: # already visited: 0x000003c1 (en:ftserver (1))
hifimport: # already visited: 0x000003be (en:ftunzipmirror (1))
hifimport: # already visited: 0x000003bd (en:ftzipmirror (1))
hifimport: Text en:gophgate (1)
hifimport: Text en:hgadmin (1)
[...]
hifimport: Text en:Clark J.: SGMLS
hifimport: * Object already exists. Not replaced.
hifimport: Text en:Goldfarb C. F.: The SGML Handbook
hifimport: Text en:ISO: Information Processing - 8-bit single-byte coded graphic character sets - Part 1: Latin alphabet No. 1, ISO IS 8859-1
[...]
hifimport: # END COLLECTION HTFdoc
hifimport: Text en:Hyper-G Interchange Format (HIF)
hifimport: # END COLLECTION hyperg/tech
PASS 2: Additional Collection Memberships
C 0x00000005 hyperglinks(0xa1403d02)
hifimport. Error: No Collection hyperglinks(0xa1403d02)
C 0x00000005 technik-speziell(0x83f65901)
hifimport. Error: No Collection technik-speziell(0x83f65901)
C 0x0000015c ~bolle(0x83ea6001)
hifimport. Error: No Collection ~bolle(0x83ea6001)
C 0x00000193 ~smitter
hifimport. Error: No Collection ~smitter
[...]
PASS 3: Source Anchors
SRC Doc=0x00000007 GDest=0x811b9908 0x00187b01
SRC Doc=0x00000008 GDest=0x811b9908 0x00064f74
[...]
hifimport: Warning: Link Destination outside HIF file: 0x00000323
hifimport: ... linking to remote object.
hifimport. Error: Could not make src anchor: remote server not responding
[...]
SRC Doc=0x000002e1 GDest=0x811b9908 0x000b5c5b
SRC Doc=0x000002e1 GDest=0x811b9908 0x000b5c5a
hifimport: Inserted 75 collections.
hifimport: Inserted 528 documents.
hifimport: Inserted 596 anchors.

rootcollection). The import then proceeds in three passes. The first pass just loads the objects and the second sets up additional collection memberships. This dump included everything, including user collections that did not exist, so those additional collections were (in this case desirably) not created. In the final third pass, all new anchors added to the database are processed for validity. Notice that one of them referred to an outside Hyper-G server, which the import process duly attempted to contact, as it was intended to. In the end, the import process successfully added 75 new collections and 528 documents with 596 anchors. Instant content! Next up is hgtv, the line-mode Hyper-G client, on the Indy. This client, or at least this version of the client, does not start at the Hyper Root; we are immediately within our own root collection. The number is the total number of documents and subdocuments within it. hgtv understands HTF, so we can view the documents directly. Looks like it all worked. Let's jump back into Harmony and see how it looks there too. hgsystem. emacs, but this house is Team vi, and we will brook no deviations. For CDE, though, dtpad would be better.
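The three-pass import shown earlier can be sketched in miniature. The record format below is invented for illustration and is not the real HIF syntax; the point is the separation of concerns: objects first, memberships second (with missing collections tolerated and logged, as in the transcript), and anchors last (with destinations outside the file flagged).

```python
def import_hif(objects, memberships, anchors):
    """Toy three-pass importer. objects: (oid, metadata) pairs;
    memberships: (oid, collection) pairs; anchors: (src, dest) pairs."""
    db = {oid: meta for oid, meta in objects}            # pass 1: load objects
    errors = []
    for oid, coll in memberships:                        # pass 2: memberships
        if coll in db:
            db[coll].setdefault("members", []).append(oid)
        else:
            errors.append("No Collection %s" % coll)     # tolerated, like hifimport
    links = []
    for src, dest in anchors:                            # pass 3: source anchors
        if dest in db:
            links.append((src, dest))
        else:
            errors.append("Link destination outside HIF file: %s" % dest)
    return db, links, errors
```

A dump that mentions nonexistent user collections (like ~smitter above) simply yields pass-2 errors while everything else imports cleanly.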
You can change the X resource Harmony.Text.editcommand accordingly. Here I've written a very basic HTF file that will suffice for the purpose; the subject is the Floodgap machine room page (which hasn't been updated since 2019, but I'll get around to it soon enough). As a point of contrast I've elected to do this as a cluster rather than a collection so that you can see the difference. Recall from the introduction that a cluster is a special kind of collection intended for use where the contents are semantically equivalent or related, like alternative translations of text or alternative formats of an image. This is a rather specific case, so for most instances, admittedly including this one, you'd want a collection. In practice, however, Hyper-G doesn't really impose this distinction rigidly, and as you're about to see, a cluster mostly acts like a collection by a different name — except where it doesn't. <map> for imagemaps. alex, our beige-box Am5x86 DOS games machine, as the destination. can be open at the same time — I guess for people who might do text descriptions for the visually impaired, or something. Clusters appear differently in two other respects we'll get to a little later on. VRML (Virtual Reality Markup Language) was nearly a first-class citizen. Most modern browsers don't support VRML, and its technological niche is now mostly occupied by X3D, which is largely backwards-compatible with it. Like older browsers, Hyper-G needs an external viewer (in this case VRweb, which we loaded onto the system as part of the Harmony client install), but once installed, VRML becomes just as smoothly integrated into the client as PostScript documents. Let's create a new collection with the sample VRML documents that came with VRweb. fsn), as most famously seen in 1993's Jurassic Park. Jurassic Park is a candy store for vintage technology sightings, notably the SGI Crimson, Macintosh Quadra 700 and what is likely an early version of the Motorola Envoy, all probably due to Michael Crichton's influence.
online resources, made possible by the fact that edges could be walked in both directions and retrieved rapidly from the database. GopherVR came out in 1995 and post-dates both FSN and earlier versions of Harmony, but it now renders a lot better with some content. (I do need to get around to updating GopherVR for 64-bit.) hgtv did. This problem likely never got fixed because the beta versions on the Internet Archive unfortunately appear to have removed Gopher support entirely. Rodney Anonymous. C:\ ... \Program Files as \PROGRA~1. This version of Amadeus is distributed in multiple floppy-sized archives. That was a nice thing at the time, but today it's just really obnoxious. Here are all the points at which you'll need to "switch disks" (e.g., by unzipping them to the installation folder): Simpsons music). Amadeus bears more resemblance to hgtv than it does to Harmony, which to be sure was getting the majority of development resources. In particular, there isn't a tree view, just going from collection to collection like individual menus. Next, I put both test users (blofeld and spectre) into the lusers group. The rights string is W:g lusers, which keeps the default read and unlink permissions but specifically allows users in group lusers to create and modify documents here. As blofeld, because you can never say never again, I will now annotate that "thread." (Oddly, this version of Harmony appears to lack an option to directly annotate from the text view. I suspect this oversight was corrected later.) spectre can post too. Because blofeld and spectre have the same permissions and the default is to allow anyone in the group to write, without taking some explicit steps they can then edit each other's posts with impunity. To wit, we'll deface blofeld's comment. That concludes our demonstration, so on the Indy we'll type dbstop to bring down the database and finish our story. offices in Germany and Austria, later expanding to the US and UK.
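The access rules demonstrated above (defaults plus a rights string like W:g lusers) can be modeled in a few lines. The semantics here are my paraphrase of what the article demonstrates, not Hyper-G's actual rights grammar, which is richer than this:

```python
def can_write(user, obj):
    """Assumed Hyper-G-like semantics: the owner and admins can always
    write; a rights string "W:g <group>" extends write to that group;
    everyone else (including anonymous users) is read-only."""
    if user.get("admin") or user.get("name") == obj.get("owner"):
        return True
    rights = obj.get("rights", "")
    if rights.startswith("W:g "):
        group = rights.split(" ", 1)[1]
        return group in user.get("groups", [])
    return False
```

It also captures the pitfall noted above: any member of lusers can modify any other member's posts, since group write is granted on the object, not per author.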
Gopher no longer had any large-scale relevance and the Web had clearly become dominant, causing Hyperwave to gradually de-emphasize its own native client in favor of uploading and managing content with more typical tools like WebDAV, Windows Explorer and Microsoft Word, and administering and accessing it with a regular web browser (offline operation was still supported), as depicted in this screenshot from the same year. Along the way the capital W got dropped, becoming merely Hyperwave. In all of these later incarnations, however, the bidirectional linkages and strict hierarchy remained intact as foundational features in some form, even though the massive Hyper Root concept contemplated by earlier versions ultimately fell by the wayside. Hyperwave continues to be sold as a commercial product today, with the company revived after a 2005 reorganization, and the underlying technology of Hyper-G still seems to be a part of the most current release. As proof, at the IICM — now, after several name changes, called the Institute of Human-Centred Computing, with professor Frank Kappe its first deputy — there's still a HyperWave [sic] IS/7 server. It has a home collection just like ours with exactly one item: the home page of Herman Maurer, who as of this writing still remains on Hyperwave's advisory board. Although later products have attempted to do similar sorts of large-scale document and resource management, Hyper-G pioneered the field by years, and even smaller such tools owe it a debt either directly or by independent convergent evolution. That makes it more than appropriate to appear in the Prior Art Department, especially since some of its more innovative solutions to hypermedia's intrinsic navigational issues have largely been forgotten — or badly reinvented. That said, unlike many such examples of prior art, it has managed to quietly evolve and survive to the present, even if by doing so it lost much of its unique nature and some of its wildest ideas.
Of course, without those wild ideas, this article would have been a great deal less interesting. You can access the partial mirror on the Internet Archive, or our copy of the CD with everything I've demonstrated here and more on the Floodgap gopher server.

What went wrong with wireless USB

(Hat tip to the late Bill Strauss and The Capitol Steps' Lirty Dies.) I've been looking for a way to take my Palm OS Fossil Wrist PDA smartwatch mobile. It has no on-board networking libraries but can be coerced into doing PPP over its serial port (via USB) by using the libraries from my Palm m505. Of course, that then requires it be constantly connected to a USB port, which is rather inconvenient for a wristwatch. But what if the USB connection could be made wirelessly? For a few years, real honest-to-goodness wireless USB devices were actually a thing. Competing standards led to market fracture and the technologies fizzled out relatively quickly in the marketplace, but like the parallel universe of FireWire hubs there was another parallel world of wireless USB devices, at least for a few years. As it happens, we now have a couple of them here, so it's worth exploring what wireless USB was and what happened to it, how the competing standards worked (and how well), and if it would have helped. When Apple introduced AirPort wireless networking for the iBook G3 in 1999, people really started to believe a completely wireless future was possible — for any device. This was nevertheless another type of network, just one involving only one computer and one user over a short range, which was grandiosely dubbed the "personal area network," or PAN, or WPAN, depending on executive and blood alcohol level. Although initial forms of Bluetooth were the first to arrive in this space, Bluetooth was never intended to handle the very high data rates that some wireless peripherals might require, and even modern high-speed Bluetooth isn't specced beyond 50 megabits/sec (though hold that thought for a later digression). The key basis technology instead was the concept of ultra wide-band, or UWB, which in modern parlance collectively refers to technologies allowing very weak, very wide-spectrum (in excess of 500MHz) signals to become a short-range yet high-bandwidth communications channel. Wideband, in this case, is contrasted against the more typical narrowband.
In general, radio transmission works by modulating a carrier wave of a specified frequency, changing its amplitude (AM), phase, and/or the frequency itself (FM), to encode a signal to be communicated. For terrestrial analogue broadcasting, a good example of narrowband radio, this might be an audio signal carrying some specified frequency range; for FM radio in the United States this audio signal ranges from 30Hz to 15kHz, enough to capture much of the human-audible range, plus various higher frequencies not intended for listening. This collective signal effectively becomes encoded into sidebands on one or both sides of the carrier frequency (even with AM), and per Carson's rule, the higher the maximum modulating frequency of the encoded signal, the larger the sidebands (ergo, the bandwidth) must be. As a result, commercial radio stations in particular are often heavily filtered for coexistence to allow many stations to share the band: in the United States, within ITU Region 2, the Federal Communications Commission (FCC) divides the FM band from 88.0MHz to 108.0MHz into 100 "channels" of 200kHz each, putting the nominal carrier frequency in the middle of the channel to provide sufficient sideband width for modulation, and strictly regulates any spillover outside those channel boundaries. In practice, most adjacent U.S. FM stations are no closer than 400kHz, a balance between spectrum capacity and signal strength. This typically permits a maximum FM stereo modulated frequency of about 53kHz; frequencies in the aggregate range being transmitted that are unused or unnecessary can be repurposed as subcarriers to emit additional information, such as FM stereo's 19kHz pilot tone subcarrier used to signal receivers, or Microsoft's brief flirtation with one-way transmissions to SPOT smartwatches. Doing so is "free" because the subcarrier frequency is already part of the frequency range.
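Carson's rule makes the numbers above concrete. Occupied bandwidth is approximately twice the sum of the peak frequency deviation and the highest modulating frequency; U.S. broadcast FM uses 75kHz peak deviation:

```python
def carson_bandwidth(peak_deviation_hz, max_mod_freq_hz):
    """Carson's rule: occupied bandwidth ~= 2 * (peak deviation +
    highest modulating frequency)."""
    return 2 * (peak_deviation_hz + max_mod_freq_hz)

mono = carson_bandwidth(75_000, 15_000)   # 15 kHz audio only -> 180 kHz
full = carson_bandwidth(75_000, 53_000)   # full 53 kHz composite -> 256 kHz
```

Note that the full composite signal occupies more than a single 200kHz channel, which is exactly why, as mentioned, adjacent U.S. stations are kept at least 400kHz apart.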
By contrast, signals like 802.11 Wi-Fi are wideband radio, or at least comparatively wid-er band, because they pass much higher bandwidths. Although 802.11 frequencies (except for the very highest 45/60GHz band) are generally divided into 5MHz channels, people typically only use channels 1, 6 and 11 with 2.4GHz Wi-Fi, for example (or, in later standards, 1, 5, 9 and maybe 13), which spaces them by 20MHz or more. Compare this with medium-wave AM radio, where channel spacing in the United States is just 10kHz and even 9kHz in some countries like Australia, or shortwave radio with only 5kHz spacing.

UWB's immediate technological ancestor is impulse radar, which is a form of the more familiar pulsed radar (such as the traffic cop at the corner) using much briefer radio pulses. Radar also works on a carrier wave model, but instead of FM or AM, the radar carrier wave is merely pulsed on and off. This is necessary so that the detector during the "off" phase can pick up echoes of the radio pulse transmitted during the "on" phase, and for most applications of radar, the pulse-repetition frequency (PRF) is much less than the frequency of the carrier wave being pulsed. Shorter, more frequent pulses would have theoretically yielded greater precision at close range, but such capability was beyond the electronics available to early radar researchers, who were more concerned with long-range detection anyway, where the off phase had to be of sufficient length to detect a distant reflection. By the 1970s, however, the technology had sufficiently advanced that the radar's PRF could approach its carrier frequency, making things like ground-penetrating radar possible. While higher frequencies couldn't travel through ground for great distances, they did yield much better resolution and therefore meaningful data. To a basic approximation UWB uses the same principle as impulse radar: a series of pulses, potentially as short as picoseconds long, of a particular carrier wave.
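That "off phase of sufficient length" has a tidy formula: the echo has to come back before the next pulse leaves, so the maximum unambiguous range is c/(2·PRF). A quick sketch to put numbers on it:

```python
C = 299_792_458.0  # speed of light, m/s

def max_unambiguous_range_m(prf_hz: float) -> float:
    """An echo must return before the next pulse goes out, so the
    round trip limits detection range to c / (2 * PRF)."""
    return C / (2 * prf_hz)

search_radar = max_unambiguous_range_m(1e3)   # ~150 km at a 1 kHz PRF
impulse_like = max_unambiguous_range_m(1e9)   # ~15 cm at a 1 GHz PRF
```

At a gigahertz-class PRF the unambiguous range collapses to centimetres, which is exactly the close-in, high-resolution regime impulse radar and UWB live in.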
As the carrier wave itself isn't changing, all of the information is necessarily being encoded in the pulses' timing. Being discontinuous waves, Carson's rule doesn't directly apply to most forms of UWB, but the analogous Shannon capacity limit indicates that rapid modulation from a high PRF would also require significant bandwidth — hence, ultra wide-band. To keep UWB transmissions from interfering with narrower-band transmissions on the same frequencies, the pulsed transmissions can be made at very low power, often below the typical noise floor for other transmissions. Naturally this also limits its range to perhaps a hundred or so metres at most, but it also makes battery-powered operation highly practical. UWB is also useful for location-finding because time-of-flight can be measured very quickly and exactly due to the short pulse lengths; when fully active, an Apple AirTag typically transmits a pulse about every other nanosecond.

The FCC first authorized unlicensed UWB operation in 2002, refining the rules in subsequent amendments. A standards group quickly emerged at the IEEE called the IEEE 802.15 Working Group for WPANs, addressing not only UWB but other WPAN-enabling technologies generally. The 802.15 WG had two arms, 802.15.4 for low bandwidth applications which we will not discuss further in this article (Zigbee is probably the most well-known in this category), and 802.15.3 for high bandwidth applications. Subsequently, the WiMedia Alliance was established that summer to capitalize on the new high-bandwidth technology, counting among other early members Eastman Kodak, Motorola, Hewlett-Packard, Intel, Philips, Samsung, Sharp and STMicroelectronics. 802.15.3 had obvious utility in determining precise location, but an extension called 802.15.3a in December 2002 sought to further enhance the standard for high-speed transmission of image and multimedia data. This team started with 23 proposals and whittled them down to two, DS-UWB (alternatively DS-CDMA) and MB-OFDM.
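To see why transmitting below the noise floor can still carry useful data, here's an illustrative (not spec-derived) application of the Shannon capacity limit mentioned above:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 200 kHz FM-sized channel with a healthy 30 dB SNR:
narrow = shannon_capacity_bps(200e3, 1000.0)  # roughly 2 Mb/s
# One 528 MHz UWB subband received at -13 dB, i.e. below the noise floor:
whisper = shannon_capacity_bps(528e6, 0.05)   # still tens of Mb/s
```

Even at a signal-to-noise ratio of 0.05, the enormous bandwidth buys tens of megabits per second; the narrow channel, despite a thousand-fold SNR advantage, still comes out far behind.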
DS-UWB stands for direct sequence ultra wide-band, where the data is simply sent as pulses (as in binary pulse AM) over the entire frequency range in use. However, although the low power of UWB prevents it from interfering with higher-power narrowband signals, an additional layer is needed to prevent UWB transmissions from interfering with each other (i.e., multiple access). DS-UWB uses a system similar to cellular CDMA (code-division multiple access) where each transmitter modulates the data signal with an even higher frequency pseudorandom code known to the receiver, hence its alternative name of DS-CDMA. An interfering transmitter without the same code will have its signals attenuated during the decoding (despreading) process and be ignored. Additionally, by making the transmitted signal require more bandwidth than the original one, the composite signal becomes spread over an even larger frequency range and thus more resistant to narrow-band interference (i.e., direct sequence spread spectrum).

MB-OFDM (multiband orthogonal frequency-division multiplexing) instead employs a massive number of subcarriers to send its data. The basic principle of OFDM, which dates back to Bell Labs in 1966, is to divide up the desired digital signal into multiple bits transmitted in parallel on multiple simultaneous subcarriers. OFDM has many current applications, among them digital TV standards such as DVB-T and Wi-Fi standards such as 802.11a/g/n/ac/ah. To avoid interference between the subcarriers yet maximize the channel's capacity, each subcarrier is separated by a minimum frequency distance usually computed as the reciprocal of the useful symbol duration, making each subcarrier orthogonal to the others surrounding it and easily discriminated. MB-OFDM as used here divides the approved range into fourteen 528MHz subbands of 128 subcarriers each, 100 of which are used for data transmission and the remainder for zeroes, guard tones and pilot signals.
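To make the direct-sequence despreading described above concrete, here's a toy example (my own sketch, much simpler than the real DS-UWB coding):

```python
import random

def spread(bits, code):
    """Each data bit (+1/-1) is multiplied by every chip (+1/-1) of the
    spreading code, multiplying the occupied bandwidth by the code length."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate each code-length window against the code: the correlation
    sum equals +/-len(code) for a matching code, so its sign recovers the
    bit; a mismatched code's correlation averages toward zero instead."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        corr = sum(x * c for x, c in zip(chips[i:i + n], code))
        bits.append(1 if corr > 0 else -1)
    return bits

rng = random.Random(1)
code = [rng.choice((-1, 1)) for _ in range(31)]  # pseudorandom chip sequence
data = [1, -1, -1, 1]
assert despread(spread(data, code), code) == data
```

With a matched code the correlation is ±31 and the bit falls right out; an interferer using a different code contributes only a small, noise-like correlation, which is the multiple-access property described above.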
It solves the multiple access problem by hopping between transmission subfrequencies in a defined pattern (time-frequency coding), meaning each user ideally is on a different one at any given time while also avoiding narrowband intrusion on any one particular frequency. In practice the spec doesn't use all of the subbands simultaneously, bundling them into four bandgroups of three (with a fifth group of two, and a sixth group overlapping two others) and selecting a group as required by local regulation or to compensate for existing sources of interference.

The MB-OFDM camp organized as the MultiBand OFDM Alliance (MBOA) in June 2003, founded by Texas Instruments and crucially joined by Intel, while the DS-UWB camp was largely led by Motorola and subsequently Freescale, its inheritor, who had significant investment in CDMA. Although MB-OFDM demanded obviously greater technical complexity, it also presented the prospect of much faster data rates, and as a result the MBOA continued to accrete members despite Motorola's protests. Motorola attempted to develop a compromise lower-speed Common Signaling Mode ("UWB CSM") so DS-UWB and MB-OFDM devices could coexist, but the process descended into squabbling, and Motorola pulled out of the WiMedia Alliance to establish the competing UWB Forum in 2004, exclusively focused on DS-UWB with CSM. As the standards argument raged in the background, OEMs meanwhile started evaluating potential market applications. After all, just about any potential short-range interconnect could be built on it; proposals to replace or reimplement Bluetooth with UWB were considered, as well as transports for IPv4 networking and FireWire (IEEE 1394).
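The band plan behind those bandgroups is compact enough to write down; if I have the ECMA-368 constants right, band centers sit at 2904 + 528·n MHz:

```python
BAND_MHZ = 528  # each MB-OFDM band is 528 MHz wide

def band_center_mhz(n: int) -> int:
    """ECMA-368 band plan: centers at 2904 + 528*n MHz for n = 1..14."""
    assert 1 <= n <= 14
    return 2904 + BAND_MHZ * n

group1 = [band_center_mhz(n) for n in (1, 2, 3)]  # [3432, 3960, 4488]
low_edge = group1[0] - BAND_MHZ // 2     # 3168 MHz
high_edge = group1[-1] + BAND_MHZ // 2   # 4752 MHz: the "3.1 to 4.8 GHz" group
subcarrier_spacing_mhz = BAND_MHZ / 128  # 4.125 MHz between orthogonal subcarriers
```

The last line is also the subcarrier spacing from the OFDM discussion above: 528MHz divided among 128 subcarriers gives 4.125MHz between orthogonal carriers.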
The original wireless USB concept in particular came from Motorola's new spinoff Freescale, who was determined to win the war anyway by getting their chipset to retail first, but also from Intel, who through its heavy influence on the USB Implementers Forum (USB-IF) persuaded the organization to adopt WiMedia's version of MB-OFDM as their officially blessed USB solution for high-speed wireless devices. In February 2004 Intel announced the formation of the Wireless USB (W-USB) Promoter Group, composed of themselves, Agere (now part of Broadcom via the former LSI Logic), Hewlett-Packard, Microsoft, NEC, Philips and Samsung, with an aim for products within the next year. Because the W-USB name clashed with Freescale's initial branding, Intel and the USB-IF eventually settled on CW-USB ("Certified Wireless USB") and the MBOA was merged into the WiMedia Alliance in 2005. Now that the attempt to make an IEEE standard had clearly stalled for good, WiMedia submitted its own specification to Ecma instead, published as ECMA-368, and the 802.15.3a Task Group subsequently disbanded in January 2006.

Both Freescale W-USB (later changed to Cord-Free USB and then Cable-Free USB, both of which we'll call CF-USB for short) and Intel CW-USB conceptually replicate the host-centric nature of USB 2.0, hewing more or less to the same basic topology but obviously without wires. Both systems supported up to 127 devices, and the over-the-air connection was necessarily encrypted, in both cases with AES-128. There were of course no compliant devices yet, nor compliant computers, so both competing standards required a dongle on the PC side and offered wireless USB hubs to connect existing peripherals. The main user-facing difference between Cable-Free and Certified Wireless USB was that CF-USB was intentionally, and in this case ironically, much closer to the wired USB spec.
In particular, although CF-USB connections could only go point-to-point — just like a single cord — all USB features and transfer types were supported, even isochronous transfers for real-time data. CF-USB also had the compatibility edge in that the other end would look just like a regular USB hub to the computer, so no software update was necessary. CW-USB, on the other hand, although its virtual bus was much more flexible and devices could be hosts to other devices, wasn't fully backwards-compatible with all USB devices and needed new drivers and operating system support. (There was also at least one non-UWB wireless USB system that I'll come back to later on.) Freescale's team eventually suffered management departures and failed to release any future CF-USB hardware, after which the UWB Forum itself imploded in 2007. Intel, meanwhile, demonstrated CW-USB using a USB PC dongle made by Taiwanese OEM Gemtek: attendees were shown a PC and a digital camera associating with each other and the PC downloading images from the camera to its desktop, which Intel claimed could run at up to USB 2.0's full 480Mb/s at three metres (110Mb/sec up to 10). One heavily anticipated application was as a docking station you could just walk up to: if you had been previously associated, then boom, you were connected. The bandwidth, Intel promised, would be real and it would be spectacular. A few months later, Belkin's reworked dongle-hub kit — initially still called "Cable-Free" until Freescale objected — finally emerged for retail sale in 2007. Unfortunately, the chipset switch eliminated Belkin's Mac compatibility and it only came with Windows drivers. Worse, Belkin's hub took it on the chin in various reviews, which cited an 80% reduction in throughput with the devices just a foot apart, and another 30% on top of that at four feet, with a maximum range of somewhere around six feet (or one big wall). This probably made it more secure, but definitely not more convenient, and far short of the claimed 10 metre maximum range.
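Taking the review's figures at face value, the arithmetic is grim:

```python
NOMINAL_MBPS = 480.0  # the USB 2.0 signaling rate CW-USB nominally matched

one_foot = NOMINAL_MBPS * (1 - 0.80)  # 80% reduction: 96 Mb/s
four_feet = one_foot * (1 - 0.30)     # another 30% off: about 67 Mb/s
```

Even 67Mb/s would beat wired full-speed USB handily, but it's nowhere near the 480Mb/s the marketing implied.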
It doesn't look like Belkin sold very many. Another vendor was D-Link, who produced both dongles and hubs along with a starter kit containing one of each. This NOS example, utterly unused in a sealed box, had an original MSRP of about $170 ($225 in 2025 dollars) but showed up on eBay for $12. I couldn't resist picking it up and a couple other cheap CW-USB products to play with, all of which carried the proud and official Certified Wireless USB logo. I made sure one of them was a docking station since that was intended to be the killer app. All of them come with an HWA (host wire adapter, the host-side dongle), even though only one of them actually (or at least officially) has a DWA (device wire adapter) hub: the D-Link starter kit (model DUB-9240), consisting of a DUB-2240 4-port DWA USB 2.0 hub and a DUB-1210 HWA. The TRULink #29596 Wireless USB to VGA and Audio Kit has two downstream devices with on-board DWAs, one for a VGA monitor (up to 1600x1200 or 1680x1050) and one for analogue audio, plus its own HWA; the Atlona AT-PCLink Wireless USB DisplayDock offers DVI video, 3.5mm (1/8") audio and two USB ports, advertised for your mouse and keyboard (but really a lurking hub also). The dock base, interestingly enough, is not a CW-USB device itself: you have to plug a DWA into it (included) which can go in one of two ports depending on physical configuration. In the package Atlona also includes another HWA. However, since they're all allegedly CW-USB 1.0 compliant, you should be able to use any HWA you want. (Theoretically. That's called "foreshadowing.") The D-Link and TRULink HWAs only support Windows XP SP3 and Vista — there was a short-lived Linux implementation that Intel themselves wrote, but it was very incomplete and eventually removed — and the Atlona HWA does too, but it also claims support for Windows 7 and even Mac OS X (Leopard and Snow Leopard). So our test system will be a Windows Vista virtual machine on an Intel MacBook. (Cue Virus Alert by Weird Al. Get out your Intel if you want to try this.)
That covers the HWA and the HWA-DWA link, but being "USB" (after a fashion) you also need a driver for the device it's connecting to. Fortunately the TRULink and Atlona video systems are both DisplayLink-based, supporting screen mirroring and spanning, for which (Intel) Mac drivers also exist.

Before anything can talk, though, there's the association process, which is necessary because obviously you don't want malicious USB devices trying to talk to you, and you don't want your next-door neighbour possibly being able to use your printer or read tax returns on your thumb drive. (I didn't know that was deductible!) The process of association generates a new AES-128 session key and records both 128-bit host and device IDs for future recognition. This shared 384-bit association context remains in effect until explicitly disabled: the associated device now won't interact with HWAs it doesn't know, other than to potentially associate with them also, and the HWA will only talk to devices with which it has been associated. It is possible, and absolutely supported, for a device to be associated with multiple HWAs. Association in CW-USB can be done one of three ways: by factory pre-association (the TRULink and Atlona devices come pre-associated with their HWAs, for example); by numeric association, where the device provides an on-screen code (like Bluetooth pairing) or the PIN on the underside of the device can be manually entered (an alpha-numeric code like D0NTH4CKM3); or, uniquely, by cable association, where you physically connect the CW-USB device to your PC or Mac via USB cable, let it be recognized by the HWA's driver, and then disconnect it. It then continues to act as if it were connected, just via the HWA. The D-Link DWA-hub is cable-associated as part of the installation process, or can be associated by PIN; it is the only one of these three that is not pre-associated. All devices support pre-association and some sort of numeric association, but a physical USB port is naturally required for cable association.
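As a back-of-the-envelope model of what gets stored, the 384-bit association context is just three 128-bit values (the field names here are mine, not from the CW-USB spec):

```python
import secrets

def cable_associate() -> dict:
    """Toy model of the 384-bit association context: two 128-bit IDs plus
    a 128-bit AES key. Field names are illustrative, not from the spec."""
    return {
        "host_id": secrets.token_bytes(16),    # 128-bit host ID
        "device_id": secrets.token_bytes(16),  # 128-bit device ID
        "aes_key": secrets.token_bytes(16),    # 128-bit key, never sent over the air
    }

ctx = cable_associate()
assert sum(len(v) for v in ctx.values()) * 8 == 384
```

Cable association's security advantage is visible here: the key is minted fresh and only ever travels down the wire.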
It is nevertheless the most secure of the three methods, first because you have to have physical custody of both the device and the computer, second because it's a new and unique key, and third because key creation and distribution occurs entirely via the cable and never over the air. Unfortunately it's not possible to blacklist the other association methods, so you'd better not let your neighbour get your PIN. (You pay how much in mortgage interest??) Some devices like the TRULinks mercifully do support changing it, but that ability didn't seem universal in the devices I looked at. In this case, all three devices support cable-association. The D-Link hub and the TRULink devices do so via their USB ports, but the Atlona dock does it by plugging the DWA into the computer instead of the docking base itself. The reverse process is also obviously possible to de-associate a device, and you can outright block devices as well, though this may require some fiddling if they were pre-associated. Similarly, most devices, including this one, have a reset button which will clear the association context(s) stored in them, removing any undesired linkages.

Let's get the D-Link kit installed in the Vista VM. Its hardware hints at how not-big the wireless USB market ultimately got: a few pieces of WiQuest and their IP are now part of modern Staccato Communications. The installer warns you not to plug in either the dongle or the hub until installation is complete and the hub has been cable-associated. Once that's done, the VM can see it. Although I found it initially surprising that VMware didn't ask me about the device when I connected it, upon reflection it's perfectly logical that wireless USB devices wouldn't be seen at all by the Mac because we've effectively constructed a new and separate USB bus completely outside of it. For all the devices I connected to the hub-DWA, VMware was absolutely unaware of all of them, including the hub itself; the only device the MacBook and therefore VMware saw was the HWA.
This is probably good from a performance view though possibly bad from a device control view. In Device Manager, the new serial port is the COM3 we just installed (the others are provided by VMware). Palm OS devices emulate a serial port but only appear on the USB bus when there is activity, such as kicking off a HotSync. Notice the only thing connected to the MacBook is the HWA (all ports are on the left side of this model), but with the watch connected to the hub-DWA the Vista VM sees the new USB device appear. That it does appear to work suggests this should work on, say, Windows XP. Let's see if the Atlona AT-PCLink natively in Snow Leopard can do any better. It's time to bring on the docking station.

The Mac software comes on a .dmg double the size it needed to be. A help alias points to a small HTML-based manual, but the Windows version has a full PDF available on the disc. The installer had a Universal payload, even though Atlona said explicitly it wasn't compatible with PowerPC. I grabbed my great big A1139 17" DLSD PowerBook G4 1.67GHz, the last and mightiest PowerBook which I use as a portable DVD player and Leopard 10.5.8 test system, to see if it would work. (Why sync the watch this way? You can use pilot-xfer, but that doesn't give you the rest of the PIM, and Mark/Space's Missing Sync does not run on Mavericks or later.) Fortunately, this is Snow Leopard, so we have O.G. Rosetta. The Atlona software, however, provides no way to associate or pair a new device. (Remember what I said about foreshadowing?) This restriction appears to be entirely due to the software and isn't unique to the Mac version; the Atlona manual indicates that the Windows version can't associate new devices either, other than possibly another Atlona dock. Officially it can be re-paired with its original DWA and that's it. The association database lives in /System/Library/WUSB/CBA.app/Contents/Resources/DB.plist, which lists associated devices. (The very location of this .plist again suggests it wasn't intended for user modification.)
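If you want to poke at that database yourself, Python's plistlib will read it; this is only a sketch, with guessed key names, since the real entries have their keys suppressed:

```python
import plistlib

# Where the Atlona Mac driver keeps its pairing database:
DB_PATH = "/System/Library/WUSB/CBA.app/Contents/Resources/DB.plist"

def list_associations(path: str = DB_PATH):
    """Return (device ID, bandgroup) pairs from the association database.
    The key names here are illustrative guesses, not the real ones."""
    with open(path, "rb") as f:
        db = plistlib.load(f)
    return [(d.get("DeviceID"), d.get("BandGroup")) for d in db.get("Devices", [])]
```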
Here is the relevant portion, with keys suppressed: You can identify the WiMedia bandgroup (1, 3.1GHz to 4.8GHz), the 128-bit host and device IDs, and the 384-bit association context (which includes both IDs) in the key-value pairs. Yes, I could insert another device entry easily enough, but I wouldn't know the AES key the other end is using, so I couldn't compute a valid context. Since this driver is running natively and we're not paying a VM tax, let's see how well video streaming worked, since oodles of cableless bandwidth was just about the entire use case for wireless USB. We'll play the Snow Leopard welcome video and see. (The Snow Leopard welcome video has audio in a separate file, so this comparison movie has no sound.)

The last device here, Gefen's Wireless USB 2.0 Extender, used a much more familiar wireless transport: 802.11g. Yup — it's USB over Wi-Fi. (Gefen's and Icron's documentation differ on exactly which transfer types are supported — Icron says bulk transfers, and I tend to believe Icron because it's their hardware — but both are clear that high-bandwidth devices like UVC webcams are going to have a bad time: "Icron Technologies Corporation does not guarantee that all USB devices are compatible with the WiRanger and only recommends the product be used with keyboard, mouse, and some flash drives.") That bodes poorly for the transfer types some of my devices use. To the host, though, the pair simply appears as a hub rather than a whole new bus like CW-USB. I wanted to do some performance tests with it, but strangely macOS Sequoia will not recognize the sender when connected to my M1 MacBook Air, even though it worked fine connected to the Raptor POWER9 in Fedora 41 and was seen as a hub there too. So we'll do the tests on the MacBook as well, which also had no problem seeing and using the pair. Again, we'll just copy that same 1.09GB combo installer and see how long it takes. For the watch, I used pilot-xfer from the command line instead of Palm Desktop, with Slirp or usb2ppp for networking, as before. It actually doesn't feel much different in terms of speed from a direct connect and I didn't find it particularly unstable.
Of what we've explored here, the Gefen box seems the least complicated solution, though the receiver pulled a bit more battery power and of course you'd need a host around to connect through. As such I'm still using the "Raspberry Pi in a camera bag" notion for the time being, but it's nice to see it can work other ways. My experience with the Gefen/Icron extender was generally consistent with other reviews that found it adequate for undemanding tasks like printing. However, it doesn't look like either Icron or Gefen sold many of them, likely due to their unattractive price tag. Icron did announce plans for a much faster wireless USB solution on the 60GHz band using 802.11ad, which with its 7 Gbit/s capacity would easily handle USB 2.0 and even 5 Gbit/s 3.0, but it doesn't seem like the device was ever offered for sale. (A couple people have mentioned to me that there were other ≥802.11ad wireless USB products out there, including WiGig. Their bandwidth was reportedly more than sufficient for video but I don't have any of those devices here, and they are better understood as Wi-Fi routers that also do USB device sharing.) Although Icron still sells extenders, now as a division of Analog Devices, all of their current products are wired-only. As for CW-USB, by 2008 few laptops on the market offered the feature, and even for those that did like the Lenovo ThinkPad X200, it was always an extra option. That meant most computers that wanted to connect to a CW-USB device still needed a HWA-dongle, so they still took up a port, and HWAs never got produced in enough numbers to be cheap. On top of that, it further didn't help matters that anything close to the promised maximum bandwidth was hardly ever observed in real-world situations. 
Device makers, meanwhile, mostly chose to wait out greater availability of CW-USB capable computers, and since that never happened, neither computers with built-in CW-USB nor CW-USB devices were ever made in any great number before the standard was abandoned. The last stand of UWB as a device interconnect was ironically as a proposal for Bluetooth 3.0+HS, the 2009 optional high-data-rate specification. 3.0+HS introduced AMP (Alternative MAC/PHY) as a bolt-on method, in which low-speed Bluetooth would be used to set up a link and then the high-speed data exchange would occur over the second transport, originally MB-OFDM. With CW-USB fading from the market, however, the WiMedia Alliance closed its doors in 2009 and handed its existing work to the USB-IF, the W-USB Promoter Group and the Bluetooth SIG. This move was controversial with some WiMedia members, who consequently refused to grant access to their intellectual property to the new successor groups, and instead AMP ended up being based on 802.11 as well. The AMP extension was little used and eventually removed in Bluetooth 5.3.

Is there a moral to this story? I'm not quite certain. As so often happens, the best technology didn't win; in my eyes CF-USB had the potential for being more widely adopted because of its simplicity and compatibility, but was ruined when Freescale got greedy and it never recovered. That said, the real question is whether wireless USB itself, with all of its broken promises, was the right approach for the WPAN concept. It's certainly not an indictment of ultra wideband, which is used today more than ever before: many chips are still produced that implement it, the best known undoubtedly being Apple's U1 and U2 chips in iOS and iOS-adjacent devices like the AirTag, and such chips continue to be widely used for things such as precise location fixing and local interactions.
UWB has also been used for diverse tasks like tracking NFL players during games or parts during factory assembly, and for autonomous vehicles in particular it's extremely useful. The wireless future arrived after all, just without wireless USB — and we just never noticed. Maybe that's the moral.


More in technology

2025-06-08 Omnimax

In a previous life, I worked for a location-based entertainment company, part of a huge team of people developing a location for Las Vegas, Nevada. It was COVID, a rough time for location-based anything, and things were delayed more than usual. Coworkers paid a lot of attention to another upcoming Las Vegas attraction, one with a vastly larger budget but still struggling to make schedule: the MSG (Madison Square Garden) Sphere. I will set aside jokes about it being a square sphere, but they were perhaps one of the reasons that it underwent a pre-launch rebranding to merely the Sphere. If you are not familiar, the Sphere is a theater and venue in Las Vegas. While it's known mostly for the video display on the outside, that's just marketing for the inside: a digital dome theater, with seating at a roughly 45 degree stadium layout facing a near hemisphere of video displays. It is a "near" hemisphere because the lower section is truncated to allow a flat floor, which serves as a stage for events but is also a practical architectural decision to avoid completely unsalable front rows. It might seem a little bit deceptive that an attraction called the Sphere does not quite pull off even a hemisphere of "payload," but the same compromise has been reached by most dome theaters. While the use of digital display technology is flashy, especially on the exterior, the Sphere is not quite the innovation that it presents itself as. It is just a continuation of a long tradition of dome theaters. Only time will tell, but the financial difficulties of the Sphere suggest that it follows the tradition faithfully: towards commercial failure. You could make an argument that the dome theater is hundreds of years old, but I will omit it. Things really started developing, at least in our modern tradition of domes, with the 1923 introduction of the Zeiss planetarium projector.
Zeiss projectors and their siblings used a complex optical and mechanical design to project accurate representations of the night sky. Many auxiliary projectors, incorporated into the chassis and giving these projectors famously eccentric shapes, rendered planets and other celestial bodies. Rather than digital light modulators, the images from these projectors were formed by purely optical means: perforated metal plates, glass plates with etched metalized layers, and fiber optics. The large, precisely manufactured image elements and specialized optics created breathtaking images. While these projectors had considerable entertainment value, especially in the mid-century when they represented some of the most sophisticated projection technology yet developed, their greatest potential was obviously in education. Although planetarium projectors were fantastically expensive (being hand-built in Germany with incredible component counts) [1], they were widely installed in science museums around the world. Most of us probably remember a dogbone-shaped Zeiss, or one of their later competitors like Spitz or Minolta, from our youths. Unfortunately, these marvels of artistic engineering were mostly retired as digital projection of near comparable quality became similarly priced in the 2000s. But we aren't talking about projectors, we're talking about theaters. Planetarium projectors were highly specialized for rendering the night sky, and everything about them was intrinsically spherical. For both a reasonable viewing experience, and for the projector to produce a geometrically correct image, the screen had to be a spherical section. Thus the planetarium itself: in its most traditional form, rings of heavily reclined seats below a hemispherical dome. The dome was rarely a full hemisphere, but was usually truncated at the horizon. This was mostly a practical decision but integrated well into the planetarium experience, given that sky viewing is usually poor near the horizon anyway.
Many planetaria painted a city skyline or forest silhouette around the lower edge to make the transition from screen to wall more natural. Later, theatrical lighting often replaced the silhouette, reproducing twilight or the haze of city lights. Unsurprisingly, the application-specific design of these theaters also limits their potential. Despite many attempts, the collective science museum industry has struggled to find entertainment programming for planetaria much beyond Pink Floyd laser shows [1]. There just aren't that many things that you look up at. Over time, planetarium shows moved in more narrative directions. Film projection promised new flexibility---many planetaria with optical star projectors were also equipped with film projectors, which gave show producers exciting new options. Documentary video of space launches and animations of physical principles became natural parts of most science museum programs, but were a bit awkward on the traditional dome. You might project four copies of the image just above the horizon in the four cardinal directions, for example. It was very much a compromise. With time, the theater adapted to the projection once again: the domes began to tilt. By shifting the dome in one direction, and orienting the seating towards that direction, you could create a sort of compromise point between the traditional dome and traditional movie theater. The lower central area of the screen was a reasonable place to show conventional film, while the full size of the dome allowed the starfield to almost fill the audience's vision. The experience of the tilted dome is compared to "floating in space," as opposed to looking up at the sky. In true Cold War fashion, it was a pair of weapons engineers (one nuclear weapons, the other missiles) who designed the first tilted planetarium. In 1973, the planetarium of what is now called the Fleet Science Center in San Diego, California opened to the public. 
Its dome was tilted 25 degrees to the horizon, with the seating installed on a similar plane and facing in one direction. It featured a novel type of planetarium projector developed by Spitz and called the Space Transit Simulator. The STS was not the first, but still an early mechanical projector to be controlled by a computer---a computer that also had simultaneous control of other projectors and lighting in the theater, what we now call a show control system. Even better, the STS's innovative optical design allowed it to warp or bend the starfield to simulate its appearance from locations other than earth. This was the "transit" feature: with a joystick connected to the control computer, the planetarium presenter could "fly" the theater through space in real time. The STS was installed in a well in the center of the seating area, and its compact chassis kept it low in the seating area, preserving the spherical geometry (with the projector at the center of the sphere) without blocking the view of audience members sitting behind it and facing forward. And yet my main reason for discussing the Fleet planetarium is not the planetarium projector at all. It is a second projector, an "auxiliary" one, installed in a second well behind the STS. The designers of the planetarium intended to show film as part of their presentations, but they were not content with a small image at the center viewpoint. The planetarium commissioned a few of the industry's leading film projection experts to design a film projection system that could fill the entire dome, just as the planetarium projector did. They knew that such a large dome would require an exceptionally sharp image. Planetarium projectors, with their large lithographed slides, offered excellent spatial resolution. They made stars appear as point sources, the same as in the night sky. 35mm film, spread across such a large screen, would be obviously blurred in comparison. They would need a very large film format.
Fortuitously, the Multiscreen Corporation was almost simultaneously developing a "sideways" 70mm format. This 15-perf format used 70mm film but fed it through the projector sideways, making each frame much larger than typical 70mm film. In its debut, at a temporary installation at the 1970 Osaka Expo, it was dubbed IMAX. IMAX made an obvious basis for a high-resolution projection system, and so the then-named IMAX Corporation was added to the planetarium project. The Fleet's film projector ultimately consisted of an IMAX film transport with a custom-built compact, liquid-cooled lamphouse and spherical fisheye lens system. The large size of the projector, along with the complex IMAX framing system and cooling equipment, made it difficult to conceal in the theater's projector well. Threading film into IMAX projectors is quite complex, with several checks the projectionist must make during a pre-show inspection. The projectionist needed room to handle the large film, and to route it to and from the enormous reels. The projector's position in the middle of the seating area left no room for any of this. We can speculate that it was, perhaps, one designer's missile experience that led to the solution: the projector was serviced in a large projection room beneath the theater's seating. Once it was prepared for each show, it rose on near-vertical rails until just the top emerged in the theater. Rollers guided the film as it ran from a platter, up the shaft to the projector, and back down to another platter. Cables and hoses hung below the projector, following it up and down like the traveling cable of an elevator. To advertise this system, probably the greatest advance in film projection since the IMAX format itself, the planetarium coined the term Omnimax. Omnimax was not an easy or economical format. Ideally, footage had to be taken in the same format, using a 70mm camera with a spherical lens system. 
These cameras were exceptionally large and heavy, and the huge film format limited cinematographers to short takes. The practical problems with Omnimax filming were big enough that the first Omnimax films faked it, projecting to the larger spherical format from much smaller conventional negatives. This was the case for "Voyage to the Outer Planets" and "Garden Isle," the premiere films at the Fleet planetarium. The history of both is somewhat obscure, the latter especially. "Voyage to the Outer Planets" was executive-produced by Preston Fleet, a founder of the Fleet center (which was ultimately named for his father, a WWII aviator). We have Fleet's sense of showmanship to thank for the invention of Omnimax: he was an accomplished business executive, particularly in the photography industry, and an aviation enthusiast who had his hands in more than one museum. Most tellingly, though, he had an eccentric hobby: he was a theater organist. I can't help but think that his passion for the theater organ, an instrument almost defined by the combination of many gizmos under electromechanical control, inspired "Voyage." The film, often called a "multimedia experience," used multiple projectors throughout the planetarium to depict a far-future journey of exploration. The Omnimax film depicted travel through space, with slide projectors filling in artist's renderings of the many wonders of space. The ten-minute Omnimax film was produced by Graphic Films Corporation, a brand that would become closely associated with Omnimax in the following decades. Graphic was founded in the midst of the Second World War by Lester Novros, a former Disney animator who found a niche creating training films for the military. Novros's fascination with motion and expertise in presenting complicated 3D scenes drew him to aerospace, and after the war he found much of his business in the newly formed Air Force and NASA. 
He was also an enthusiast of niche film formats, and Omnimax was not his first dome. For the 1964 New York World's Fair, Novros and Graphic Films had produced "To the Moon and Beyond," a speculative science film with thematic similarities to "Voyage" and more than just a little mechanical similarity. It was presented in Cinerama 360, a semi-spherical, dome-theater 70mm format presented in a special theater called the Moon Dome. "To the Moon and Beyond" was influential in many ways, leading to Graphic Films' involvement in "2001: A Space Odyssey" and its enduring expertise in domes. The Fleet planetarium would not remain the only Omnimax for long. In 1975, the city of Spokane, Washington struggled to find a new application for the pavilion built for Expo '74 [3]. A top contender: an Omnimax theater, in some ways a replacement for the temporary IMAX theater that had been constructed for the actual Expo. Alas, this project was not to be, but others came along: in 1978, the Detroit Science Center opened the second Omnimax theater ("the machine itself looks like and is the size of a front loader," the Detroit Free Press wrote). The Science Museum of Minnesota, in St. Paul, followed shortly after. The Carnegie Science Center, in Pittsburgh, rounded out the year's new launches. Omnimax hit prime time the next year, with the 1979 announcement of an Omnimax theater at Caesars Palace in Las Vegas, Nevada. Unlike the previous installations, this 380-seat theater was purely commercial. It opened with the 1976 IMAX film "To Fly!," which had been optically modified to fit the Omnimax format. This choice of first film is illuminating. "To Fly!" is a 27 minute documentary on the history of aviation in the United States, originally produced for the IMAX theater at the National Air and Space Museum [4]. It doesn't exactly seem like casino fare. The IMAX format, the flat-screen one, was born of world's fairs. 
It premiered at an Expo, reappeared a couple of years later at another one, and for the first years of the format most of the IMAX theaters built were associated with either a major festival or an educational institution. This noncommercial history is a bit hard to square with the modern IMAX brand, closely associated with major theater chains and the Marvel Cinematic Universe. Well, IMAX took off, and in many ways it sold out. Over the decades since the 1970 Expo, IMAX has met widespread success with commercial films and theater owners. Simultaneously, the criteria for IMAX theaters have relaxed, with smaller screens made permissible until, ultimately, the transition to digital projection eliminated the 70mm film and more or less reduced IMAX to just another ticket surcharge brand. It competes directly with Cinemark xD, for example. To the theater enthusiast, this is a pretty sad turn of events, a Westinghouse-esque zombification of a brand that once heralded the field's most impressive technical achievements. The same never happened to Omnimax. The Caesars Palace Omnimax theater was an odd exception; the vast majority of Omnimax theaters were built by science museums and the vast majority of Omnimax films were science documentaries. Quite a few of those films had been specifically commissioned by science museums, often on the occasion of their Omnimax theater opening. The Omnimax community was fairly tight, and so the same names recur. The Graphic Films Corporation, which had been around since the beginning, remained so closely tied to the IMAX brand that they practically shared identities. Most Omnimax theaters, and some IMAX theaters, used to open with a vanity card often known as "the wormhole." It might be hard to describe beyond "if you know you know," but it certainly made an impression on everyone I know who grew up near a theater that used it. There are some videos, although unfortunately none of them are very good. 
I have spent more hours of my life than I am proud to admit trying to untangle the history of this clip. Over time, it has appeared in many theaters with many different logos at the end, and several variations of the audio track. This is in part informed speculation, but here is what I believe to be true: the "wormhole" was originally created by Graphic Films for the Fleet planetarium specifically, and ran before "Voyage to the Outer Planets" and its double-feature companion "Garden Isle," both of which Graphic Films had worked on. This original version ended with the name Graphic Films, accompanied by an odd sketchy drawing that was also used as an early logo of the IMAX Corporation. Later, the same animation was re-edited to end with an IMAX logo. This version ran in both Omnimax and conventional IMAX theaters, probably as a result of the extensive "cross-pollination" of films between the two formats. Many Omnimax films through the life of the format had actually been filmed for IMAX, with conventional lenses, and then optically modified to fit the Omnimax dome after the fact. You could usually tell: the reprojection process created an unusual warp in the image, and more tellingly, these pseudo-Omnimax films almost always centered the action at the middle of the IMAX frame, which was too high to be quite comfortable in an Omnimax theater (where the "frame center" was well above the "front center" point of the theater). Graphic Films had been involved in a lot of these as well, perhaps explaining the animation reuse, but it's just as likely that they had sold it outright to the IMAX corporation which used it as they pleased. For some reason, this version also received new audio that is mostly the same but slightly different. I don't have a definitive explanation, but I think there may have been an audio format change between the very early Omnimax theaters and later IMAX/Omnimax systems, which might have required remastering. 
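The geometry behind that "too high" framing problem is easy to sketch. Dome features are typically mastered as square fisheye frames (today called "dome masters") in which distance from the image center is proportional to the angle down from the dome's zenith: the center of the frame is the top of the dome, and the theater's "front center" point sits out toward the rim. Here is a minimal sketch of that equidistant mapping; the function name and the assumption of a full 180-degree dome are mine for illustration, not anything documented about the Omnimax optics specifically:

```python
import math

def dome_master_xy(zenith_deg, azimuth_deg, size):
    """Map a viewing direction onto a square 'dome master' frame using the
    equidistant fisheye projection (radius proportional to zenith angle).
    zenith_deg = 0 is the top of the dome (image center); 90 is the rim.
    azimuth_deg = 0 is taken as the theater's 'front' direction."""
    r = (zenith_deg / 90.0) * (size / 2)   # radius grows linearly with angle
    a = math.radians(azimuth_deg)
    x = size / 2 + r * math.sin(a)
    y = size / 2 - r * math.cos(a)         # front direction points "up" in the frame
    return x, y
```

Mapping a conventional flat frame into these coordinates pulls its center toward the zenith of the dome, which is one way to see why reprojected IMAX films tended to sit uncomfortably high above the audience's natural sightline.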
Later, as Omnimax domes proliferated at science museums, the IMAX Corporation (which very actively promoted Omnimax to education) gave many of these theaters custom versions of the vanity card that ended with the science museum's own logo. I have personally seen two of these, so I feel pretty confident that they exist and weren't all that rare (basically 2 out of 2 Omnimax theaters I've visited used one), but I cannot find any preserved copies. Another recurring name in the world of IMAX and Omnimax is MacGillivray Freeman Films. MacGillivray and Freeman were a pair of teenage friends from Laguna Beach who dropped out of school in the '60s to make skateboard and surf films. This is, of course, a rather cliché start for documentary filmmakers but we must allow that it was the '60s and they were pretty much the ones creating the cliché. Their early films are hard to find in anything better than VHS rip quality, but worth watching: Wikipedia notes their significance in pioneering "action cameras," mounting 16mm cinema cameras to skateboards and surfboards, but I would say that their cinematography was innovative in more ways than just one. The 1970 "Catch the Joy," about sandrails, has some incredible shots that I struggle to explain. There's at least one where they definitely cut the shot just a couple of frames before a drifting sandrail flung their camera all the way down the dune. For some reason, I would speculate due to their reputation for exciting cinematography, the National Air and Space Museum chose MacGillivray and Freeman for "To Fly!". While not the first science museum IMAX documentary by any means (that was, presumably, "Voyage to the Outer Planets" given the different subject matter of the various Expo films), "To Fly!" might be called the first modern one. It set the pattern that decades of science museum films followed: a film initially written by science educators, punched up by producers, and filmed with the very best technology of the time. 
Fearing that the film's history content would be dry, they pivoted more towards entertainment, adding jokes and action sequences. "To Fly!" was a hit, running in just about every science museum with an IMAX theater, including Omnimax. Sadly, Jim Freeman died in a helicopter crash shortly after production. Nonetheless, MacGillivray Freeman Films went on. Over the following decades, few IMAX science documentaries were made that didn't involve them somehow. Besides the films they produced, the company consulted on action sequences in most of the format's popular features. I had hoped to present here a thorough history of the films that were actually produced in the Omnimax format. Unfortunately, this has proven very difficult: the fact that most of them were distributed only to science museums means that they are very spottily remembered, and besides, so many of the films that ran in Omnimax theaters were converted from IMAX presentations that it's hard to tell the two apart. I'm disappointed that this part of cinema history isn't better recorded, and I'll continue to put time into the effort. Science museum documentaries don't get a lot of attention, but many of them have involved formidable technical efforts. Consider, for example, the cameras: befitting the large film, IMAX cameras themselves are very large. When filming "To Fly!", MacGillivray and Freeman complained that the technically very basic 80-pound cameras required a lot of maintenance, were complex to operate, and wouldn't fit into the "action cam" mounting positions they were used to. The cameras were so expensive, and so rare, that they had to be far more conservative than their usual approach out of fear of damaging a camera they would not be able to replace. It turns out that they had it easy. Later IMAX science documentaries would be filmed in space ("The Dream is Alive" among others) and deep underwater ("Deep Sea 3D" among others). 
These IMAX cameras, modified for simpler operation and housed for such difficult environments, weighed over 1,000 pounds. Astronauts had to be trained to operate the cameras; mission specialists on Hubble service missions had, among their duties, wrangling a 70-pound handheld IMAX camera around the cabin and developing its film in a darkroom bag. There was a lot of film to handle: as a rule of thumb, one mile of IMAX film is good for eight and a half minutes. I grew up in Portland, Oregon, and so we will make things a bit more approachable by focusing on one example: the Omnimax theater of the Oregon Museum of Science and Industry, which opened as part of the museum's new waterfront location in 1992. This 330-seat theater boasted a 10,000 sq ft dome and 15 kW of sound. The premiere feature was "Ring of Fire," a volcano documentary originally commissioned by the Fleet, the Fort Worth Museum of Science and History, and the Science Museum of Minnesota. By the 1990s, the later era of Omnimax, the dome format was all but abandoned as a commercial concept. There were, an announcement article notes, around 90 total IMAX theaters (including Omnimax) and 80 Omnimax films (including those converted from IMAX) in '92. Considering the heavy bias towards science museums among these theaters, it was very common for the films to be funded by consortia of those museums. Considering the high cost of filming in IMAX, a lot of the documentaries had a sort of "mashup" feel. They would combine footage taken in different times and places, often originally for other projects, into a new narrative. "Ring of Fire" was no exception, consisting of a series of sections that were sometimes only loosely connected to the theme. The 1989 Loma Prieta earthquake was a focus, along with the eruption of Mt. St. Helens and lava flows in Hawaii. 
Perhaps one of the reasons it's hard to catalog IMAX films is this mashup quality: many of the titles carried at science museums were something along the lines of "another ocean one." I don't mean this as a criticism; many of the IMAX documentaries were excellent, but they were necessarily composed from painstakingly gathered fragments and had to cover wide topics. Given that I have an announcement feature piece in front of me, let's also use the example of OMSI to discuss the technical aspects. OMSI's projector cost about $2 million and weighed about two tons. To avoid dust damaging the expensive prints, the "projection room" under the seating was a positive-pressure cleanroom. This was especially important since the paucity of Omnimax content meant that many films ran regularly for years. The 15 kW water-cooled lamp required replacement at 800 to 1,000 hours, but unfortunately, the price is not noted. By the 1990s, Omnimax had become a rare enough system that the projection technology was a major part of the appeal. OMSI's installation, like most later Omnimax theaters, had the audience queue below the seating, separated from the projection room by a glass wall. The high cost of these theaters meant that they operated on high turnover, so patrons would wait in line to enter immediately after the previous showing had exited. While they waited, they could watch the projectionist prepare the next show while a museum docent explained the equipment. I have written before about multi-channel audio formats, and Omnimax gives us some more to consider. The conventional audio format for much of Omnimax's life was six-channel: left rear, left screen, center screen, right screen, right rear, and top. Each channel had an independent bass cabinet (in one theater, a "caravan-sized" enclosure with eight JBL 2245H 46cm woofers), and a crossover network fed the lowest end of all six channels to a "sub-bass" array at screen bottom. 
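That crossover arrangement, full-range channels whose summed low end feeds a common sub-bass array, can be sketched in a few lines. This is an illustrative digital one-pole filter, not the analog crossover network the theaters actually used; the function name and the cutoff constant are my own assumptions:

```python
def sub_bass_feed(channels, alpha=0.05):
    """Sum six channel signals and keep only the slowly varying low end,
    approximating a crossover feed to a shared sub-bass array.
    alpha sets the (illustrative) cutoff of a one-pole low-pass filter."""
    mixed = [sum(samples) for samples in zip(*channels)]  # sum across channels per sample
    out, y = [], 0.0
    for x in mixed:
        y += alpha * (x - y)   # one-pole low-pass: only slow components survive
        out.append(y)
    return out
```

A real theater crossover would do this in analog electronics ahead of the amplifiers, but the principle is the same: the sub-bass array reproduces the common low-frequency content that the individual channel cabinets hand off.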
The original Fleet installation also had sub-bass speakers located beneath the audience seating, although that doesn't seem to have become common. IMAX titles of the '70s and '80s delivered audio on eight-track magnetic tape, with the additional tracks used for synchronization to the film. By the '90s, IMAX had switched to distributing digital audio on three CDs (one for each pair of channels). OMSI's theater was equipped for both, and the announcement amusingly notes the availability of cassette decks. A semi-custom audio processor made for IMAX, the Sonics TAC-86, managed synchronization with film playback and applied equalization curves individually calibrated to the theater. IMAX domes used perforated aluminum screens (also the norm in later planetaria), so the speakers were placed behind the screen in the scaffold-like superstructure that supported it. When I was young, OMSI used to start presentations with a demo program that explained the large size of IMAX film before illuminating work lights behind the screen to make the speakers visible. Much of this was the work of the surprisingly sophisticated show control system employed by Omnimax theaters, a descendant of the PDP-15 originally installed in the Fleet. Despite Omnimax's almost complete consignment to science museums, there were some efforts at bringing it commercial films. Titles like Disney's "Fantasia" and "Star Wars: Episode III" were distributed to Omnimax theaters via optical reprojection, sometimes even from 35mm originals. Unfortunately, the quality of these adaptations was rarely satisfactory, and the short runtimes (and marketing and exclusivity deals) typical of major commercial releases did not always work well with science museum schedules. Still, the cost of converting an existing film to dome format is pretty low, so the practice continues today. "Star Wars: The Force Awakens," for example, ran on at least one science museum dome. 
This trickle of blockbusters was not enough to make commercial Omnimax theaters viable. Caesars Palace closed, and then demolished, their Omnimax theater in 2000. The turn of the 21st century was very much the beginning of the end for the dome theater. IMAX was moving away from their film system and towards digital projection, but digital projection systems suitable for large domes were still a nascent technology and extremely expensive. The end of aggressive support from IMAX meant that filming costs became impractical for documentaries, so while some significant IMAX science museum films were made in the 2000s, the volume of new productions definitely began to dwindle and the overall industry moved away from IMAX in general and Omnimax especially. It's surprising how unforeseen this was, at least to some. A ten-screen commercial theater in Duluth opened an Omnimax theater in 1996! Perhaps due to the sunk cost, it ran until 2010, not a bad closing date for an Omnimax theater. Science museums, with their relatively tight budgets and less competitive nature, did tend to hold onto existing Omnimax installations well past their prime. Unfortunately, many didn't: OMSI, for example, closed its Omnimax theater in 2013 for replacement with a conventional digital theater that has a large screen but is not IMAX branded. Fortunately, some operators hung onto their increasingly costly Omnimax domes long enough for modernization to become practical. The IMAX Corporation abandoned the Omnimax name as more of the theaters closed, but continued to support "IMAX Dome" with the introduction of a digital laser projector with spherical optics. There are only ten examples of this system. Others, including Omnimax's flagship at the Fleet Science Center, have been replaced by custom dome projection systems built by competitors like Sony. Few Omnimax projectors remain. 
The Fleet, to their credit, installed the modern laser projectors in front of the projector well so that the original film projector could remain in place. It's still functional and used for reprises of Omnimax-era documentaries. IMAX projectors in general are a dying breed; a number of them have been preserved, but their complex, specialized design and the end of vendor support mean that it may become infeasible to keep them operating. We are, of course, well into the digital era. While far from inexpensive, digital projection systems are now able to match the quality of Omnimax projection. The newest dome theaters, like the Sphere, dispense with projection entirely. Instead, they use LED display panels capable of far brighter and more vivid images than projection, and with none of the complexity of water-cooled arc lamps. Still, something has been lost. There was once a parallel theater industry, a world with none of the glamor of Hollywood but for whom James Cameron hauled a camera to the depths of the ocean and Leonardo DiCaprio narrated repairs to the Hubble. In a good few dozen science museums, two-ton behemoths rose from beneath the seats, the zenith of film projection technology. After decades of documentaries, I think people forgot how remarkable these theaters were. Science museums stopped promoting them as aggressively, and much of the showmanship faded away. Sometime in the 2000s, OMSI stopped running the pre-show demonstration, instead starting the film directly. They stopped explaining the projectionist's work in preparing the show, and as they shifted their schedule towards direct repetition of one feature, there was less for the projectionist to do anyway. It became just another museum theater, so it's no wonder that they replaced it with just another museum theater: a generic big-screen setup with the exceptionally dull name of "Empirical Theater." From time to time, there have been whispers of a resurgence of 70mm film. 
Oppenheimer, for example, was distributed to a small number of theaters in this giant of film formats: 53 reels, 11 miles, 600 pounds of film. Even conventional IMAX is too costly for the modern theater industry, though. Omnimax has fallen completely by the wayside, with the few remaining dome operators doomed to recycling the same films with a sprinkling of newer reformatted features. It is hard to imagine a collective of science museums sending another film camera to space. Omnimax poses a preservation challenge in more ways than one. Besides the lack of documentation on Omnimax theaters and films, there are precious few photographs of Omnimax theaters and even fewer videos of their presentations. Of course, the historian suffers where Madison Square Garden hopes to succeed: the dome theater is perhaps the ultimate in location-based entertainment. Photos and videos, represented on a flat screen, cannot reproduce the experience of the Omnimax theater. The 180 horizontal degrees of screen, the sound that was always a little too loud, in no small part to mask the sound of the projector that made its own racket in the middle of the seating. You had to be there. IMAGES: Omnimax projection room at OMSI, Flickr user truk. Omnimax dome with work lights on at MSI Chicago, Wikimedia Commons user GualdimG. Omnimax projector at St. Louis Science Center, Flickr user pasa47. [1] I don't have extensive information on pricing, but I know that in the 1960s an "economy" Spitz came in over $30,000 (~10x that much today). [2] Pink Floyd's landmark album Dark Side of The Moon debuted in a release event held at the London Planetarium. This connection between Pink Floyd and planetaria, apparently much disliked by the band itself, has persisted to the present day. Several generations of Pink Floyd laser shows have been licensed by science museums around the world, and must represent by far the largest success of fixed-installation laser projection. 
[3] Are you starting to detect a theme with these Expos? The World's Fairs, including in their various forms as Expos, were long one of the main markets for niche film formats. Any given weird projection format you run into, there's a decent chance that it was originally developed for some short film for an Expo. Keep in mind that it's the nature of niche projection formats that they cannot easily be shown in conventional theaters, so they end up coupled to these crowd events where a custom venue can be built. [4] The Smithsonian Institution started looking for an exciting new theater in 1970. As an example of the various niche film formats at the time, the Smithsonian considered a dome (presumably Omnimax), Cinerama (a three-projector ultrawide system), and Circle-Vision 360 (known mostly for the few surviving Expo films at Disney World's EPCOT) before settling on IMAX. The Smithsonian theater, first planned for the Smithsonian Museum of Natural History before being integrated into the new National Air and Space Museum, was tremendously influential on the broader world of science museum films. That is perhaps an understatement: it is sometimes credited with popularizing IMAX in general, and the newspaper coverage the new theater received throughout North America lends credence to the idea. It is interesting, then, to imagine how different our world would be if they had chosen Circle-Vision. "Captain America: Brave New World" in Circle-Vision 360.
