New arrivals: Nano Connector Carrier + 7 Modulino® nodes to supercharge your projects

We’re excited to introduce two tiny additions to the Arduino ecosystem that will make a big difference: the Nano Connector Carrier and seven new Modulino® nodes, now available individually in the Arduino Store! These products are designed to make your prototyping experience faster, easier, and more fun – whether you’re building interactive installations, automating tasks, […]


More from Arduino Blog

Recreating a bizarre century-old electronic instrument

There are a handful of instruments that are staples of modern music, like guitars and pianos. And then there are hundreds of other instruments that were invented throughout history and then fell into obscurity without much notice. The Luminaphone, invented by Harry Grindell Matthews and unveiled in 1925, is a particularly bizarre example. Few people […]

Get 3 months of Arduino Cloud Free with your UNO R4 WiFi!

If you own an Arduino UNO R4 WiFi or plan to get one, there’s a special reward waiting for you. Register your board on the Arduino website and receive three months of free access to the Arduino Cloud Maker plan. This offer gives you everything you need to start creating smarter, more connected projects, with […]

YouTuber builds robot to make boyfriend take out the trash

Is there anything more irritating than living with a partner who procrastinates on their share of the chores? Even if it isn’t malicious, it sure is annoying. Taking out the trash is YouTuber CircuitCindy’s boyfriend’s responsibility, but he often fails to do the task in a timely manner. That forced Cindy to implement a sinister […]

This DIY bruxism detector prevents jaw clenching during sleep

At some point in your life, you’ve probably had a doctor or dentist ask you if you clench your jaw or grind your teeth — particularly at night. That is called bruxism and it can have serious effects, such as tooth damage, jaw pain, headaches, and more. But because it often happens when you’re asleep […]


More in technology

2025-05-27 the first smart homes

Sometimes I think I should pivot my career to home automation critic, because I have many opinions on the state of the home automation industry---and they're pretty much all critical. Virtually every time I bring up home automation, someone says something about the superiority of the light switch. Controlling lights is one of the most obvious applications of home automation, and there is a roughly century-long history of developments in light control---yet, paradoxically, it is an area where consumer home automation continues to struggle. An analysis of how and why billion-dollar tech companies fail to master the simple toggling of lights in response to human input will have to wait for a future article, because I will have a hard time writing one without descending into incoherent sobbing about the principles of scene control and the interests of capital. Instead, I want to just dip a toe into the troubled waters of "smart lighting" by looking at one of its earliest precedents: low-voltage lighting control.

A source I generally trust, the venerable "old internet" website Inspectapedia, says that low-voltage lighting control systems date back to about 1946. The earliest conclusive evidence I can find of these systems is a newspaper ad from 1948, but let's be honest, it's a holiday and I'm only making a half effort on the research. In any case, the post-war timing is not a coincidence. The late 1940s were a period of both rapid (sub)urban expansion and high copper prices, and the original impetus for relay systems seems to have been the confluence of these two.

But let's step back and explain what a relay or low-voltage lighting control system is. First, I am not referring to "low voltage lighting," meaning lights that run on 12 or 24 volts DC or AC, as was common in landscape lighting and is increasingly common today for integrated LED lighting. Low-voltage lighting control systems are used for conventional 120VAC lights. In the most traditional construction, e.g. in the 1940s, lights would be served by a "hot" wire that passed through a wall box containing a switch. In many cases the neutral (likely shared with other fixtures) went directly from the light back to the panel, bypassing the switch... running both the hot and neutral through the switch box did not become conventional until fairly recently, to the chagrin of anyone installing switches that require a neutral for their own power, like timers or "smart" switches.

The problem with this is that it lengthens the wiring runs. If you have a ceiling fixture with two different switches in a three-way arrangement, say in a hallway in a larger house, you could be adding nearly 100' of additional wire to get the hot to the switches and the runner between them. The cost of that wiring, in the mid-century, was quite substantial. Considering how difficult it is to find an employee to unlock the Romex cage at Lowe's these days, I'm not sure that's changed all that much.

There are different ways of dealing with this. In the UK, the "ring main" served in part to reduce the gauge (and thus cost) of outlet wiring, but we never picked up that particular eccentricity in the US (for good reason). In commercial buildings, it's not unusual for lighting to run on 240V for similar reasons, but 240V is discouraged in US residential wiring. Besides, the mid-century was an age of optimism and ambition in electrical technology, the days of Total Electric Living. Perhaps the technology of the relay, refined by so many innovations of WWII, could offer a solution.

Switch wiring also had to run through wall cavities, an irritating requirement in single-floor houses where much of the lighting wiring could be contained to the attic. The wiring of four-way and other multi-switch arrangements could become complex and require a lot more wall runs, discouraging builders from providing switches in the most convenient places. What if relays also made multiple switches significantly easier to install and relocate?
You probably get the idea. In a typical low-voltage lighting control system, a transformer provides a low voltage like 24VAC, much the same as used by doorbells. The light switches simply toggle the 24VAC control power to the coils of relays. Some (generally older) systems powered the relay continuously, but most used latching relays. In this case, all light switches are momentary, with an "on" side and an "off" side. This could be a paddle that you push up or down (much like a conventional light switch), a bar that you push the left or right sides of, or a pair of two push buttons.

In most installations, all of the relays were installed together in a single enclosure, usually in the attic where the high-voltage wiring to the actual lights would be fairly short. The 24VAC cabling to the switches was much smaller gauge, and depending on the jurisdiction might not require any sort of license to install. Many systems had enclosures with separate high voltage and low voltage components, or mounted the relays on the outside of an enclosure such that the high voltage wiring was inside and the low voltage outside. Both arrangements helped to meet code requirements for isolating high and low voltage systems and provided a margin of safety in the low voltage wiring. That provided additional cost savings as well; low voltage wiring was usually installed without any kind of conduit or sheathed cable.

By 1950, relay lighting controls were making common appearances in real estate listings. A feature piece on the "Melody House," a builder's model home, in the Tacoma News Tribune reads thus:

Newest features in the house are the low voltage touch plate and relay system lighting controls, with wide plates instead of snap buttons---operated like the stops of a pipe organ, with the merest flick of a finger.

The comparison to a pipe organ is interesting, first in its assumption that many readers were familiar with typical organ stops.
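Setting the organ comparison aside for a moment, the latching-relay scheme described above is easy to sketch in code. This is only an illustrative model (the class and method names are mine, not from any real product): momentary switches pulse a relay's "on" or "off" coil over the low-voltage wiring, and the relay's mechanically latched contacts switch the actual light.

```python
class LatchingRelay:
    """A bistable relay: a pulse on either coil flips the contacts,
    which then hold mechanically with no continuous coil current."""
    def __init__(self):
        self.contacts_closed = False

    def pulse_on(self):
        self.contacts_closed = True

    def pulse_off(self):
        self.contacts_closed = False


class MomentarySwitch:
    """A wall switch with an 'on' side and an 'off' side, wired to one
    relay's coils. Any number of switches can share a relay, which is
    why n-way switching becomes cheap: each extra switch is just
    another small-gauge 24VAC drop."""
    def __init__(self, relay):
        self.relay = relay

    def press(self, side):
        if side == "on":
            self.relay.pulse_on()
        else:
            self.relay.pulse_off()


hall_light = LatchingRelay()
# Three switches for one fixture, no mains-voltage runners between them.
switches = [MomentarySwitch(hall_light) for _ in range(3)]

switches[0].press("on")
assert hall_light.contacts_closed
switches[2].press("off")      # any switch can toggle it from anywhere
assert not hall_light.contacts_closed
```

Note what falls out of the model for free: because the switches are momentary and stateless, none of them can ever be "in the wrong position," which is exactly the state-synchronization headache conventional three-way wiring has.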
Pipe organs were, increasingly, one of the technological marvels of the era: while the concept of the pipe organ is very old, this same era saw electrical control systems (replete with relays!) significantly reduce the cost and complexity of organ consoles. What's more, the tonewheel electric organ had become well-developed and started to find its way into homes.

The comparison is also interesting because of its deficiencies. The Touch-Plate system described used wide bars, which you pressed the left or right side of---you could call them momentary SPDT rocker switches if you wanted. There were organs with similar rocker stops, but I do not think they were common in 1950. My experience is that such rocker switch stops usually indicate a fully digital control system, where they make momentary action unobtrusive and avoid state synchronization problems. I am far from an expert on organs, though, which is why I haven't yet written about them. If you have a guess at which type of pipe organ console our journalist was familiar with, do let me know.

Touch-Plate seems to have been one of the first manufacturers of these systems, although I can't say for sure that they invented them. Interestingly, Touch-Plate is still around today, but their badly broken WordPress site ("Welcome to the new touch-plate.com" despite it actually being touchplate.com) suggests they may not do much business. After a few pageloads their WordPress plugin WAF blocked me for "exceed[ing] the maximum number of page not found errors per minute for humans." This might be related to my frustration that none of the product images load. It seems that the Touch-Plate company has mostly pivoted to reselling imported LED lighting (touchplateled.com), so I suppose the controls business is withering on the vine.

The 1950s saw a proliferation of relay lighting control brands, with GE introducing a particularly popular system with several generations of fixtures.
Kyle Switch Plates, who sell replacement switch plates (what else?), list options for Remcon, Sierra, Bryant, Pyramid, Douglas, and Enercon systems in addition to the two brands we have met so far. As someone who pays a little too much attention to light switches, I have personally seen four of these brands, three of them still in use and one apparently abandoned in place.

Now, you might be thinking that simply economizing wiring by relocating the switches does not constitute "home automation," but there are other features to consider. For one, low-voltage light control systems made it feasible to install a lot more switches. Houses originally built with them often go a little wild with the n-way switching, every room providing light switches at every door. But there is also the possibility of relay logic. From the same article:

The necessary switches are found in every room, but in the master bedroom there is a master control panel above the bed, from where the house and yard may be flooded with instant light in case of night emergency.

Such "master control panels" were a big attraction for relay lighting, and the finest homes of the 1950s and 1960s often displayed either a grid of buttons near the head of the master bed, or even better, a GE "Master Selector" with a curious system of rotary switches. On later systems, timers often served as auxiliary switches, so you could schedule exterior lights. With a creative installer, "scenes" were even possible by wiring switches to arbitrary sets of relays (this required DC or half-wave rectified control power, and diodes to isolate the switches from each other).

Many of these relay control systems are still in use today. While they are quite outdated in a certain sense, the design is robust and the simple components mean that it's usually not difficult to find replacement parts when something does fail. The most popular system is the one offered by GE, using their RR series relays (RR3, RR4, etc., up to the modern RR9).
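The diode-isolated "scene" wiring described above amounts to a fan-out: one master-panel button pulses the coils of several relays at once, with the diodes keeping each button's circuit from back-feeding the others. A small sketch of that idea (the scene names and relay names here are hypothetical, purely for illustration):

```python
class LatchingRelay:
    """One relay per lighting circuit, as in a typical attic relay bank."""
    def __init__(self):
        self.on = False

    def pulse(self, side):
        self.on = (side == "on")


relays = {name: LatchingRelay() for name in ("porch", "hall", "bedroom", "yard")}

# Each scene button's diode fan-out determines which coils it pulses.
# In the real wiring this mapping is fixed copper, not data.
scenes = {
    "all_on":    [("porch", "on"), ("hall", "on"),
                  ("bedroom", "on"), ("yard", "on")],
    "goodnight": [("porch", "off"), ("hall", "off"), ("yard", "on")],
}

def press(scene_name):
    """Pressing a master-panel button pulses every relay in its set."""
    for relay_name, side in scenes[scene_name]:
        relays[relay_name].pulse(side)


press("all_on")               # the "night emergency" button over the bed
assert all(r.on for r in relays.values())

press("goodnight")            # interior off, yard light on
assert relays["yard"].on and not relays["hall"].on
assert relays["bedroom"].on   # untouched circuits keep their state
```

The last assertion captures why latching relays matter here: a scene only pulses the relays it is wired to, so everything else stays exactly as it was.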
That said, GE suggests a modernization path to their LightSweep system, which is really a 0-10v analog dimming controller that has the add-on ability to operate relays.

The failure modes are mostly what you would expect: low voltage wiring can chafe and short, or the switches can become stuck. This tends to cause the lights to stick on or off, and the continuous current through the relay coil often burns it out. The fix requires finding the stuck switch or short and correcting it, and then replacing the relay.

One upside of these systems that persists today is density: the low voltage switches are small, so with most systems you can fit three per gang. Another is that they still make n-way switching easier. There is arguably a safety benefit, considering the reduction in mains-voltage wire runs. Yet we rarely see such a thing installed in homes newer than around the '80s. I don't know that I can give a definitive explanation of the decline of relay lighting control, but reduced prices for copper wiring were probably a main factor. The relays added a failure point, which might lead to a perception of unreliability, and electricians' declining familiarity with these systems means that installing one could be expensive and frustrating today.

What really interests me about relay systems is that they weren't really replaced... the idea just went away. It's not like modern homes are providing a master control panel in the bedroom using some alternative technology. I mean, some do, those with prices in the eight digits, but you'll hardly ever see it.

That gets us to the tension between residential lighting and architectural lighting control systems. In higher-end commercial buildings, and in environments like conference rooms and lecture halls, there's a well-established industry building digital lighting control systems.
Today, DALI is a common standard for the actual lighting control, but if you look at a range of existing buildings you will find everything from completely proprietary digital distributed dimming to 0-10v analog dimming to central dimmer racks (similar to traditional theatrical lighting). Relay lighting systems were, in a way, a nascent version of residential architectural lighting control. And the architectural lighting control industry continues to evolve.

If there is a modern equivalent to relay lighting, it's something like Lutron QSX. That's a proprietary digital lighting (and shade) control system, marketed for both residential and commercial use. QSX offers a wide range of attractive wall controls, tight integration with Lutron's HomeWorks home automation platform, and a price tag that'll make your eyes water. Lutron has produced many generations of these systems, and you could make an argument that they trace their heritage back to the relay systems of the 1940s. But they're just priced way beyond the middle-class home.

And, well, I suppose that requires an argument based on economics. Prices have gone up. Despite tract construction being a much older idea than people often realize, it seems clear that today's new-construction homes have been "value engineered" to significantly lower feature and quality levels than those of the mid-century---but they're a lot bigger. There is a sort of maxim that today's home buyers don't care about anything but square footage, and if you've seen what Pulte or D. R. Horton are putting up... well, I never knew that 3,000 sq ft could come so cheap, and look it too.

Modern new-construction homes just don't come with the gizmos that older ones did, especially in the '60s and '70s. Looking at the sales brochure for a new development in my own Albuquerque ("Estates at La Cuentista"), besides 21st-century suburbanization (Gated Community! "East Access to Paseo del Norte," as if that's a good thing!), most of the advertised features are "big." I'm serious! If you look at the "More Innovation Built In" section, the "innovations" are a home office (more square footage), storage (more square footage), indoor and outdoor gathering spaces (to be fair, only the indoor ones are square footage), "dedicated learning areas" for kids (more square footage), and a "basement or bigger garage" for a home gym (more square footage). The only thing in the entire innovation section that I would call a "technical" feature is water filtration. You can scroll down for more details, and you get to things like "space for a movie room" and a finished basement described eight different ways.

Things were different during the peak of relay lighting in the '60s. A house might only be 1,600 sq ft, but the builder would deck it out with an intercom (including multi-room audio of a primitive sort), a burglar alarm, and yes, relay lighting. All of these technologies were a lot newer and people were more excited about them; I bring up Total Electric Living a lot because of an aesthetic obsession, but it was a large-scale advertising and partnership campaign by the electrical industry (particularly Westinghouse) that gave builders additional cross-promotion if they included all of these bells and whistles. Remember, that was when people were watching those old videos about the "kitchen of the future." What would a 2025 "Kitchen of the Future" promotional film emphasize? An island bigger than my living room and a nook for every meal, I assume.

Features like intercoms and even burglar alarms have become far less common in new construction, and even if they were present I don't think most buyers would use them. But that might seem a little odd, right, given the push towards home automation? Well, built-in home automation options have existed for longer than any of today's consumer solutions, but "built in" is a liability for a technology product.
There are practical reasons, in that built-in equipment is harder to replace, but there's also a lamer commercial reason: consumer technology companies want to sell their products like consumer technology, so they've recontextualized lighting control as "IoT" and "smart" and "AI" rather than something an electrician would hook up.

While I was looking into relay lighting control systems, I ran into an interesting example: the Lutron Lu Master Lumi 5. What a name! Lutron loves naming things like this. The Lumi 5 is a 1980s-era product with essentially the same features as a relay system, but architected in a much stranger way. It is, essentially, five three-way switches in a box with remote controls. That means that each of the actual light switches in the house (which could also be dimmers) needs mains-voltage wiring, including a runner, back to the Lumi 5 "interface." Pressing a button on one of the Lutron wall panels toggles the state of the relay in the "interface" cabinet, toggling the light. But, since it's all wired as a three-way switch, toggling the physical switch at the light does the same thing.

As is typical when combining n-way switches and dimming, the Lumi 5 has no control over dimmers. You can only dim a light up or down at the actual local control; the Lumi 5 can just toggle the dimmer on and off using the 3-way runner. The architecture also means that you have two fundamentally different types of wall panels in your house: local switches or dimmers wired to each light, and the Lu Master panels with their five buttons for the five circuits, along with "all on" and "all off."

The Lumi 5 "interface" uses simple relay logic to implement a few more features. Five mains-voltage-level inputs can be wired to time clocks, so that you can schedule any combination(s) of the circuits to turn on and off. The manual recommends models including one with an astronomical clock for sunrise/sunset.
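The three-way arrangement at the heart of the Lumi 5 is worth pinning down, because it is the whole trick: a light on a three-way circuit is on exactly when the two SPDT switches disagree, i.e., an XOR. The "interface" relay simply plays the role of one of the two switches, so toggling either end always flips the light. A minimal model (function and variable names are mine, for illustration only):

```python
def light_on(local_switch: bool, interface_relay: bool) -> bool:
    """Three-way wiring: the lamp gets power when the two SPDT
    switches select different travelers --- a boolean XOR."""
    return local_switch != interface_relay


local, relay = False, False
assert not light_on(local, relay)

relay = not relay             # button press on a Lu Master wall panel
assert light_on(local, relay)

local = not local             # someone flips the switch at the light
assert not light_on(local, relay)

relay = not relay             # the panel can always toggle it back on
assert light_on(local, relay)
```

This also shows why the Lumi 5 can only toggle a dimmer, never set its level: the runner carries one bit of "agree/disagree," and the dim level lives entirely in the local control.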
An additional input causes all five circuits to turn on; it's suggested for connection to an auxiliary relay on a burglar alarm, to turn all of the lights on should the alarm be triggered.

The whole thing is strange and fascinating. It is basically a relay lighting control system, like so many before it, but using a distinctly different wiring convention. I think the main reason for the odd wiring was to accommodate dimmers, an increasingly popular option in the 1980s that relay systems could never really contend with. It doesn't have the cost advantages of relay systems at all---it will definitely be more expensive! But it adds some features over the fancy Lutron switches and dimmers you were going to install anyway. The Lu Master is the transitional stage between relay lighting systems and later architectural lighting controls, and it also straddled the end of relay light control in homes. It gives an idea of where relay light control in homes would have evolved, had the whole technology not been doomed to the niche zone of conference centers and universities.

If you think about it, the Lu Master fills the most fundamental roles of home automation in lighting: control over multiple lights in a convenient place, scheduling and triggers, and an emergency function. It only lacks scenes, which I think we can excuse considering that the simple technology it uses does not allow it to adjust dimmers. And all of that with no Node-RED in sight!

Maybe that conveys what most frustrates me about the "home automation" industry: it is constantly reinventing the wheel, an oligopoly of tech companies trying to drag people's homes into their "ecosystem." They do so by leveraging the buzzword of the moment---IoT, then voice assistants, now I guess AI?---to solve a basic set of problems that were pretty well solved at least as early as 1948. That's not to deny that modern home automation platforms have features that old ones don't. They are capable of incredibly sophisticated things!
But realistically, most of their users want only very basic functionality: control in convenient places, basic automation, scenes. It wouldn't sting so much if all these whiz-bang general-purpose computers were good at those tasks, but they aren't. For the very most basic tasks, things like turning on and off a group of lights, major tech ecosystems like HomeKit provide a user experience that is significantly worse than the model home of 1950.

You could install a Lutron system, and it would solve those fundamental tasks much better... for a much higher price. But it's not like Lutron uses all that money to be an absolute technical powerhouse, a center of innovation at the cutting edge. No, even the latest Lutron products are really very simple, technically. The technical leaders here, Google, Apple, are the companies that can't figure out how to make a damn light switch.

The problem with modern home automation platforms is that they are too ambitious. They are trying to apply enormously complex systems to very simple tasks, and thus contaminating the simplest of electrical systems with all the convenience and ease of a Smart TV. Sometimes it feels like that's what this whole industry is doing: adding complexity while the core decays. From automatic programming to AI coding agents, video terminals to Electron, the scope of the possible expands while the fundamentals become more and more irritating.

But back to the real point: I hope you learned about some cool light switches. Check out the Kyle Switch Plates reference and you'll start seeing these in buildings and homes, at least if you live in an area that built up during the era when they were common (the 1950s to the 1970s).

Datamost Nightraiders

Day Must Turn to Night Before Mankind Dares to Fight

prior-art-dept.: The hierarchical hypermedia world of Hyper-G

Welcome back to the Prior Art Department: today we'll consider a forgotten yet still extant sidebar of the early 1990s Internet.

If you had Internet access at home back then, it was almost certainly by dialup modem (as mine was); only the filthy rich had T1 lines or ISDN. Moreover, from a user perspective, the hosts you connected to were their own universe. You got your shell account or certain interactive services over Telnet (and, for many people including yours truly, E-mail), you got your news postings from the spool either locally or over NNTP, and you got your files over FTP. It may have originated elsewhere, but everything on the host you connected to was a local copy: the mail you received, the files you could access, the posts you could read. Exceptional circumstances like NFS notwithstanding, what you could see and access was local — it didn't point somewhere else.

Around this time, however, was when sites started referencing other sites, much like the expulsion from Eden. In 1990 both HYTELNET and Archie appeared, which were early search engines for Telnet and FTP resources. Since they relied on accurate information about sites they didn't control, both of them had to regularly update their databases. Gopher, when it emerged in 1991, consciously tried to be a friendlier FTP by presenting files and resources hung from a hierarchy of menus, which could even point to menus on other hosts. That meant you didn't have to locally mirror a service to point people at it, but if the referenced menu was relocated or removed, the link to it was broken, and the reference's one-way nature meant there was no automated way to trace back and fix it. And then there was that new World Wide Web thing introduced to the public in 1993: a powerful soup of media and hypertext with links that could point to nearly anything, but they were unidirectional as well, and the sheer number even in modest documents could quickly overwhelm users in a rapidly expanding environment.
Not for nothing was the term "linkrot" first attested around 1996, nor the observation of how disoriented a user might get following even perfectly valid links down a seemingly infinite rabbithole.

None of this was unanticipated. Vannevar Bush's 1945 "memex" idea imagined not only literature but photographs, sketches and notes all interconnected with various "trails." The concept was exceedingly speculative and never implemented (nor was Ted Nelson's Xanadu "docuverse" of 1965), but Douglas Engelbart's oN-Line System "NLS" at the Stanford Research Institute was heavily inspired by it, leading to the development of the mouse and the 1968 Mother of All Demos. Nor was the notion new on computers: witness 1967's Hypertext Editing System on an IBM System/360 Model 50, and early microcomputer implementations like OWL Guide, which appeared in the mid-1980s on workstations and the Macintosh.

Hermann Maurer, then a professor at the Graz University of Technology in Austria, had been interested in early computer-based information systems for some time, pioneering work on early graphic terminals instead of the pure text ones commonly in use. One of these was the MUPID series, a range of Z80-based systems first introduced in 1981, ostensibly for the West German videotex service Bildschirmtext but also standalone home computers in their own right. This and other work happened at what was then the Institutes for Information Processing Graz, or IIG, later the Institute for Information Processing and Computer-Supported New Media (IICM).

Subsequently the IIG started researching new methods of computer-aided instruction by developing an early frame-based hypermedia system called COSTOC (originally "COmputer Supported Teaching Of Computer-Science" and later "COmputer Supported Teaching? Of Course!") in 1985, which by 1989 had been commercialized, was in use at about twenty institutions on both sides of the Atlantic, and contained hundreds of one-hour lessons.
COSTOC's successful growth also started to make it unwieldy, and a planned upgrade in 1989 called HyperCOSTOC proposed various extensions to improve authoring, delivery, navigation and user annotation. Meanwhile, it was only natural that Maurer's interest would shift to the growing early Internet, at that time under the U.S. National Science Foundation and by late that year numbering over 150,000 hosts. Maurer's group decided to consolidate their experiences with COSTOC and HyperCOSTOC into what they termed "the optimal large-scale hypermedia system," code-named Hyper-G (the G, natürlich, for Graz). It would be networked and searchable, preserve user orientation, and maintain correct and up-to-date linkages between the resources it managed. In January 1990, the Austrian Ministry of Science agreed to fund a prototype, for which Maurer's grad student Frank Kappe formally wrote the architectural design as his PhD dissertation.

Other new information technologies like Gopher and the Web were emerging at the same time, at the University of Minnesota and CERN respectively, and the Hyper-G team worked with the Gopher and W3 teams so that the Hyper-G server could also speak to those servers and clients. The prototype emerged in January 1992 as the University's new information system TUGinfo. Because Hyper-G servers could also speak Gopher and HTTP, TUGinfo was fully accessible by the clients of the day, but it could also be used with various Hyper-G line-mode clients. One of these was a bespoke tool named UniInfo which doesn't appear to have been distributed outside the University and is likely lost. The other is called the Hyper-G Terminal Viewer, or hgtv (not to be confused with the vapid cable channel), which became a standard part of the server for administration tasks.
The success of TUGinfo convinced the European Space Agency to adopt Hyper-G for its Guide and Directory in the fall, after which came a beta native Windows client called Amadeus in 1993 and a beta Unix client called Harmony in 1994. Yours truly remembers accessing some of these servers through a web browser around this time, which is how this whole entry got started: trying to figure out where Hyper-G ended up. What survives online is only a partial copy of these files; it lacks, for example, any of the executables for the Harmony client. Fortunately there were also at least two books on Hyper-G, one by Hermann Maurer himself, and a second by Wolfgang Dalitz and Gernot Heyer, two partnering researchers then at the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB). Happily these two books have CDs with full software kits, and the later CD from Dalitz and Heyer's book is what we'll use here. I've already uploaded its contents to the Floodgap Gopher server to serve as a supreme case of historical irony.

The fundamental organizing unit in Hyper-G is the collection. A resource must belong to at least one collection, but it may belong to multiple collections, and a collection can span more than one server. A special type of collection is the cluster, where semantically related materials are grouped together, such as multiple translations, alternate document formats, or multimedia aggregates (e.g., text and various related images or video clips). We'll look at how this appears practically when we fire the system up.

Any resource may link to another resource. Like HTML, these links are called anchors, but unlike HTML, anchors are bidirectional and can occur in any media type like PostScript documents, images, or even audio/video. Because they can be followed backwards, clients can walk the chains to construct a link map: imagine one centered on the man page for grep(1), showing what it connects to, and what those pages connect to. Hyper-G clients could construct such maps on demand, and all of the resources in such a map can of course be jumped to directly.
This was an obvious aid to navigation because you could always find out where you were in relation to anything else. Under the hood, anchors aren't part of the document, or even hidden within it; they're part of the metadata. Here's a real section of a serialized Hyper-G database. This textual export format (HIF, the Hyper-G Interchange Format) is how a database could be serialized and backed up or transmitted to another server, including internal resources. Everything is an object and has an ID, with resources existing at a specified path (either a global ID based on its IPv4 address or a filesystem path), and the parent indicating the name of the collection the resource belongs to. These fields are all searchable, as are text resources via full-text search, all of which is indexed immediately. You don't need to do anything to set up a site search facility — it comes built-in. Anchors are attached at either byte ranges or spatial/time coordinates within their resource. This excerpt defines three source anchors, i.e., links that go out to another resource. After uudecoding the text fragment and dumping it, we can see that the byte offsets in the anchor sections mean the text ranges for hg_comm.h, hg_comm.c and hg_who.c will be linked to those respective entries as destination anchors in the database. For example, here is the HIF header for hg_comm.h: These fields are indexed, so the server can walk them backwards or forwards, and the operation is very fast. The title and its contents and even its location can change; the link will always be valid as long as the object exists, and if it's later deleted, the server can automatically find and remove all anchors to it. Analogous to an HTML text fragment, destination anchors can provide a target covering a specific position and/or portion within a text resource.
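To make the byte-range anchor model concrete, here is a toy sketch in Python of links stored as metadata outside the documents themselves, so they can be walked in both directions and cleaned up when a target disappears. All names and structures here are invented for illustration; none of this is Hyper-G's actual API.

```python
# Toy model of Hyper-G's separation of anchors from documents.
# Everything here is illustrative, not Hyper-G's real data structures.
from dataclasses import dataclass, field

@dataclass
class Anchor:
    src: int    # object ID the anchor lives in
    dest: int   # object ID it points to
    start: int  # byte offset range within the source text
    end: int

@dataclass
class Database:
    texts: dict = field(default_factory=dict)    # object ID -> text content
    anchors: list = field(default_factory=list)  # anchors live here, not in the texts

    def links_from(self, obj_id):
        return [a for a in self.anchors if a.src == obj_id]

    def links_to(self, obj_id):
        # bidirectional: links can be walked backwards too
        return [a for a in self.anchors if a.dest == obj_id]

    def delete(self, obj_id):
        # deleting an object removes every anchor referencing it,
        # so no dangling links can survive
        del self.texts[obj_id]
        self.anchors = [a for a in self.anchors
                        if a.src != obj_id and a.dest != obj_id]

db = Database()
db.texts[1] = "See hg_comm.h for the protocol structures."
db.texts[2] = "/* hg_comm.h */"
db.anchors.append(Anchor(src=1, dest=2, start=4, end=13))

a = db.links_from(1)[0]
print(db.texts[1][a.start:a.end])       # the anchored byte range: hg_comm.h
print([x.src for x in db.links_to(2)])  # backlinks to object 2: [1]
db.delete(2)
print(db.links_from(1))                 # []
```

The point of the design is visible in `delete()`: because the database, not the document, owns the links, removing an object cannot leave dangling anchors behind.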
As the process requires creating and maintaining various unique IDs, Hyper-G clients have authoring capability as well, allowing a user to authenticate and then insert or update resources and anchors as permitted. We're going to do exactly that. Since resources don't have to be modified to create an anchor, even read-only resources such as those residing on a mounted CD-ROM could be linked and have anchors of their own. Instead of having their content embedded in the database, however, resources can also appear as external entities pointed to by conventional filesystem paths. This would have been extremely useful for multimedia in particular, considering the typical hard disk size of the early 1990s. Similarly, Internet resources on external servers could also be part of the collection. While resources that are not Hyper-G will break the link chain, the connection can still be expressed, and at least the object itself can be tracked by the database. The protocol could be Hyper-G, Gopher, HTTP, WAIS, Telnet or FTP. It was also possible to create SQL queries this way, which would be performed live. Later versions of the server even had a CGI-compatible scripting ability. I mentioned that the user can authenticate to the server, as well as being anonymous. When logged in, authenticated access allows not only authoring and editing but also commenting through annotations (and annotating the annotations). This feature is obviously useful for things like document review, but could also have served as a means for a blog with comments, well before the concept existed formally, or a message board or BBS. Authenticated access is also required for resources with limited permissions, or those that can only be viewed for a limited time or require payment (yes, all this was built-in). In the text file you can also see markup tags that resemble, and in some cases are identical to, HTML, but in fact are not HTML.
These markup tags are part of HTF, or the Hyper-G Text Format, Hyper-G's native text document format. HTF is dynamically converted for Gopher or Web clients; there is a corresponding HTML tag for most HTF tags, eventually supporting much of HTML 3.0 except for tables and forms, and most HTML entities are the same in HTF. Anchor tags in an HTF document are handled specially: upon upload the server strips them off and turns them into database entries, which the server then maintains. In turn, anchor tags are automatically re-inserted according to their specified positions with current values when the HTF resource is fetched or translated.

The Hyper-G server is actually a set of cooperating processes, among them the database server (dbserver) that handles the database and the full-text index server (ftserver) used for document search. The document cache server (dcserver), however, has several functions: it serves and stores local documents on request, it runs CGI scripts (using the same Common Gateway Interface standard as a webserver of the era would have), and it requests and caches resources from remote servers referenced on this one, indicated by the upper 32 bits of the global ID. In earlier versions of the server, clients were responsible for other protocols: a Hyper-G client, if presented with a Gopher or HTTP URL, would have to go fetch it itself. The central process is hgserver (no relation to Mercurial). This talks directly to other Hyper-G servers (using TCP port 418), and also directly to clients with port 418 as a control connection and a dynamically assigned port number for document transfer (not unlike FTP). Since links are bidirectional, Hyper-G servers contact other Hyper-G servers to let them know a link has been made (or, possibly, removed), and then those servers will send them updates. There are hazards with this approach.
One is that it introduces an inevitable race condition between the change occurring on the upstream and any downstream(s) knowing about it, so earlier implementations would wait until all the downstream(s) acknowledged the change before actually making it effective. Unfortunately this ran into a second problem: particularly for major Hyper-G sites like IIG/IICM itself, an upstream server could end up sending thousands of update notifications after making any change at all, and some downstreams might not respond in a timely fashion for any number of reasons. Later servers use a probabilistic version of the "flood" algorithm from the Harvest resource discovery system (perhaps a future Prior Art entry) where downstreams pass the update along to a smaller subset of hosts, who in turn do the same to another subset, until the message has propagated throughout the network (p-flood). Any temporary inconsistency is simply tolerated until the message makes the rounds. This process was facilitated because all downstreams knew about all other Hyper-G servers, and updates to this master list were sent in the same manner. A new server could get this list from IICM after installation to bootstrap itself, becoming part of a worldwide collection called the Hyper Root.

The University of Minnesota had notoriously alarmed the Internet community in 1993 by requiring license fees for commercial use of their Gopher server implementation. Subsequent posts were made to clarify this only applied to UMN gopherd, and then only to commercial users, nor is it clear exactly how much that license fee was or whether anybody actually paid, but the damage was done and the Web — freely available from the beginning — continued unimpeded on its meteoric rise. (UMN eventually relicensed 2.3.1 under the GNU Public License in 2000.) Hyper-G's principals would have no doubt known of this cautionary tale. On the other hand, they also clearly believed that they possessed a fundamentally superior product to existing servers that people would be willing to pay good money for.
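Stepping back to p-flood for a moment, the propagation pattern described above can be sketched in a few lines of Python. This is only a toy simulation; the fanout, server count, and termination rule are invented for illustration and have nothing to do with Hyper-G's real parameters.

```python
# Toy simulation of the p-flood idea: instead of the origin notifying
# every server and waiting for acknowledgments, each server that hears
# an update forwards it to a small random subset of the others,
# tolerating temporary inconsistency until the news makes the rounds.
import random

def p_flood(n_servers=50, fanout=3, seed=42):
    random.seed(seed)
    informed = {0}   # server 0 makes the change
    frontier = [0]   # servers that heard the news this round
    rounds = 0
    while frontier:
        rounds += 1
        next_frontier = []
        for server in frontier:
            # forward to a small random subset instead of everyone
            for peer in random.sample(range(n_servers), fanout):
                if peer not in informed:
                    informed.add(peer)
                    next_frontier.append(peer)
        frontier = next_frontier
    return informed, rounds

informed, rounds = p_flood()
print(len(informed), "of 50 servers informed after", rounds, "rounds")
```

News spreads roughly exponentially even though no server ever notifies everyone and nobody waits for acknowledgments; this naive version can stall before reaching every last host, which is exactly the kind of temporary inconsistency the scheme tolerates.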
Indeed, just like they did with COSTOC, the intention of spinning Hyper-G/HyperWave off as a commercial enterprise had been planned from the very beginning. Hyper-G, now renamed HyperWave, officially became a commercial product in June 1996. This shift was facilitated by the fact that no publicly available version had ever been open-source. Early server versions of Hyper-G had no limit on users, but once HyperWave was productized, its free unregistered tier imposed document restrictions and a single-digit login cap (anonymous users could of course still view HyperWave sites without logging in, but they couldn't post anything either). Non-commercial entities could apply for a free license key, something that is obviously no longer possible, but commercial use required a full paid license starting at US$3600 for a 30-user license (in 2025 dollars about $6900) or $30,000 for an unlimited one ($57,600). An early 1997 version of this commercial release appears to be what's available from the partial mirror at the Internet Archive, which comes with a license key limiting you to four users and 500 documents — that expired on July 31, 1997. This license is signed with a 128-bit checksum that might be brute-forceable on a modern machine, but you get to do that yourself. Fortunately, the CD from our HyperWave book, although also published in 1996, predates the commercial shift; it is an offline and complete copy of the Hyper-G FTP server as it existed on April 13, 1996 with all the clients and server software then available. We'll start with the Hyper-G server portion, which on disc offers official builds for SunOS 4.1.3, Solaris 2.2 (SPARC only), HP-UX 9.01 (PA-RISC only), Ultrix 4.2 (MIPS DECstation "PMAX" only), IRIX 3.0 (SGI MIPS), Linux/x86 1.2, OSF/1 3.2 (on Alpha) and a beta build for IBM AIX 4.1.
My first thought, the Apple Network Server 500, would have been perfect: it has oodles of disk space (like, whole gigabytes, man), a full 200MHz PowerPC 604e upgrade, zippy 20MB/s SCSI-2 and a luxurious 512MB of parity RAM. I'll just stop here and say that it ended in failure because both the available AIX versions on disc completely lack the Gopher and Web gateways, without which the server will fail to start. I even tried the Internet Archive pay-per-user beta version and it still lacked the Gopher gateway, without which it also failed to start, and the new Web gateway in that release seemed to have glitches of its own (though the expired license key may have been a factor). Although there are ways to hack around the startup problems, doing so only made it into a pure Hyper-G system with no other protocols, which doesn't make a very good demo for our purposes, and I ended up spending the rest of the afternoon manually uninstalling it. In fairness it doesn't appear AIX was ever officially supported. Otherwise, I don't have a PA-RISC HP-UX server up and running right now (just a 68K one running HP-UX 8.0), and while the SunOS 4 version should be binary compatible with my Solbourne S3000 running OS/MP 4.1C, I wasn't sure if the 56MB of RAM it has was enough if I really wanted to stress-test it. That left my SGI Indy, and it has 256MB of RAM. It runs IRIX 6.5.22, but that should still start these binaries. That settled the server part. For the client hardware, however, I wanted something particularly special. My original Power Macintosh 7300 (now with a G4/800) sitting on top will play a supporting role running Windows 98 in emulation for Amadeus, and also testing our Hyper-G's Gopher gateway with UMN TurboGopher, which is appropriate because when it ran NetBSD it was gopher.floodgap.com. Today, though, it runs Mac OS 9.1, and the planned native Mac OS client for Hyper-G was never finished nor released.
Our other choices for Harmony are the same as for the server, sans AIX 4.1, which doesn't seem to have been supported as a client at all. Unfortunately the S3000 is only 36MHz, so it wouldn't be particularly fast at the hypermedia features, and I was concerned about the Indy running as client and server at the same time. But while we don't have any PA-RISC servers running, we do have a couple choices in PA-RISC workstations, and one of them is an especially rare bird. Let's meet ... ruby, named for HP chief architect Ruby B. Lee, who was a key designer of the PA-RISC architecture and its first single-chip implementation. This is an RDI PrecisionBook 160 laptop with a 160MHz PA-7300LC CPU, one of the relatively few PA-RISC chips with support for an L2 cache (1MB, in this case), and the last and most powerful family of 32-bit PA-RISC 1.1 chips. Essentially a portable HP Visualize B160L, even appearing as the same exact model number to HP-UX, it came in the same case as its better-known SPARC UltraBook siblings (I have an UltraBook IIi here as well) and was released in 1998, just prior to RDI's buyout by Tadpole. This unit has plenty of free disk space, 512MB of RAM and runs HP-UX 11.00, all of which should run Harmony splendidly, and its battery incredibly still holds some charge. Although the on-board HP Visualize-EG graphics don't have 3D acceleration, neither does the XL24 in our Indy, and its PA-7300LC will be better at software rendering than the Indy's R4400. Fortunately, the Visualize-EG has very good 2D performance for the time. With our hardware selected, it's time to set up the server side. We'll do this by the literal book, and the book in this case recommends creating a regular user hgsystem belonging to a new group hyperg under which the server processes should run. IRIX makes this very easy, with hyperg as the user's sole group membership. The default shell is tcsh, which is fine by me because other shells are for people who don't know any better.
Logging in and checking our prerequisites: This is the Perl that came with 6.5.22. Hyper-G uses Perl scripts for installation, but they will work under 4.036 and later (Perl 5 isn't required), and pre-built Perl distributions are also included on the CD. Ordinarily, and this is heavily encouraged in the book and existing documentation, you would run one of these scripts to download, unpack and install the server. At the time you had to first manually request permission from an E-mail address at IICM to download it, including the IPv4 address you were going to connect from, the operating system and of course the local contact. Fortunately some forethought was applied and an alternative offline method was also made available if you already had the tar archive in your possession, or else this entire article might not have been possible. Since the CD is a precise copy of the FTP site, even including the READMEs, we'll just pretend to be the FTP site for dramatic purposes. The description files you see here are exactly what you would have seen accessing TU Graz's FTP site in 1996.

ftp> quote PASV
227 Entering Passive Mode (XXX).
ftp> cd /run/media/spectre/Hyper-G/unix/Hyper-G
250-You have entered the Hyper-G archive (ftp://ftp.iicm.tu-graz.ac.at/pub/Hyper-G).
250-================================================================================
250-
250-What's where:
250-
250-  Server      Hyper-G Server Installation Script
250-  UnixClient  the vt100 client for UNIX - Installation Script
250-  Harmony     Harmony (the UNIX/X11 client)
250-  Amadeus     Amadeus (the PC/Windows client)
250-  VRweb       VRweb (free VRML browser for Hyper-G, Mosaic & Gopher)
250-  papers      documentation on Hyper-G (mainly PostScript)
250-  talk        slides & illustrations we use for Hyper-G talks
250-
250-Note: this directory is mirrored daily (nightly) to:
250-
250-  Australia   ftp://ftp.cinemedia.com.au/pub/Hyper-G
250-              ftp://gatekeeper.digital.com.au/pub/Hyper-G
250-  Austria     ftp://ftp.tu-graz.ac.at/pub/Hyper-G
250-  Czech Rep.  ftp://sunsite.mff.cuni.cz/Net/Infosystems/Hyper-G
250-  Germany     ftp://elib.zib-berlin.de/pub/InfoSystems/Hyper-G
250-              ftp://ftp.ask.uni-karlsruhe.de/pub/infosystems/Hyper-G
250-  Italy       ftp://ftp.esrin.esa.it/pub/Hyper-G
250-  Poland      ftp://sunsite.icm.edu.pl/pub/Hyper-G
250-  Portugal    ftp://ftp.ua.pt/pub/infosystems/www/Hyper-G
250-  Spain       ftp://ftp.etsimo.uniovi.es/pub/Hyper-G
250-  Sweden      ftp://ftp.sunet.se/pub/Networked.Information.Retrieval/Hyper-G
250-  UK          ftp://unix.hensa.ac.uk/mirrors/Hyper-G
250-  USA         ftp://ftp.ncsa.uiuc.edu/Hyper-G
250-              ftp://mirror1.utdallas.edu/pub/Hyper-G
250 Directory successfully changed.
ftp> cd Server
250 Directory successfully changed.
ftp> get Hyper-G_Server_21.03.96.SGI.tar.gz
local: Hyper-G_Server_21.03.96.SGI.tar.gz remote: Hyper-G_Server_21.03.96.SGI.tar.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for Hyper-G_Server_21.03.96.SGI.tar.gz (2582212 bytes).
226 Transfer complete.
2582212 bytes received in 3.12 seconds (808.12 Kbytes/s)
ftp> get Hyper-G_Tools_21.03.96.SGI.tar.gz
local: Hyper-G_Tools_21.03.96.SGI.tar.gz remote: Hyper-G_Tools_21.03.96.SGI.tar.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for Hyper-G_Tools_21.03.96.SGI.tar.gz (3337367 bytes).
226 Transfer complete.
3337367 bytes received in 3.95 seconds (825.82 Kbytes/s)
ftp> ^D
221 Goodbye.

Although the software lives under ~hgsystem, a central directory (by default /usr/local/Hyper-G) holds links to it as a repository. We'll create that and sign it over to hgsystem as well. Next, we unpack the server (first) package and start the offline installation script. This package includes the server binaries, server documentation and HTML templates. Text in italics was my response to prompts, which the script stores in configuration files and also in your environment variables, and patches the startup scripts for hgsystem to instantiate them on login.
Floodgap Hyper-G
Full internet host name of this machine:indy.floodgap.com
installed bin/scripts/hginstserver
installed bin/SGI/dbcontrol
[...]
installed HTML/ge/options.html
installed HTML/ge/result_head.html
installed HTML/ge/search.html
installed HTML/ge/search_simple.html
installed HTML/ge/status.html

They did make this piece open-source so a paranoid sysadmin could see what they were running as root (in this case setuid). Now for the tools. This includes administration utilities but also the hgtv client and additional documentation. The install script is basically the same for the tools as for the server. Last but not least, we will log out and log back in to ensure that our environment is properly set up, and then set the password on the internal hgsystem user (which is not the same as hgsystem, the Unix login). This account is set up by default in the database and to modify it we'll use the hgadmin tool. This tool is always accessible from the hgsystem login in case the database gets horribly munged. That should be all that was necessary (NARRATOR: It wasn't.), but starting up the server still failed. It's possible the tar offline install utility wasn't updated as often as the usual one. Nevertheless, it seemed out-of-sync with what the startup script was actually looking for. Riffling the Perl and shell-script code to figure out the missing piece, it turns out I had to manually create ~hgsystem/HTF and ~hgsystem/server, then add two more environment variables to ~hgsystem/.hgrc (nothing to do with Mercurial). Logging out and logging back in to refresh the environment, ... we're up! Immediately I decided to see if the webserver would answer. It did, buuuuut ... (uname identifies this PrecisionBook 160 as a 9000/778, which is the same model number as the Visualize B160L workstation.) Netscape Navigator Gold 3.01 is installed on this machine, and we're going to use it later, but I figured you'd enjoy a crazier choice. Yes, you read that right ...
on those platforms as well as Tru64, but no version for them ever emerged. After releasing 5.0 SP1 in 2001, Microsoft cited low uptake of the browser and ended all support for IE Unix the following year. As for Mainsoft, they became notorious for the 2004 Microsoft source code leak when a Linux core in the file dump fingered them as the source; Microsoft withdrew WISE completely, eliminating MainWin's viability as a commercial product, though Mainsoft remains in business today as Harmon.ie since 2010. IE Unix was a completely different codebase from what became Internet Explorer 5 on Mac OS X (and a completely different layout engine, Tasman) and of course is not at all related to modern Microsoft Edge either. Because there are Y2K issues, the server fails to calculate its own uptime, but everything else basically works.

ruby:/pro/harmony/% uname -a
HP-UX ruby B.11.00 A 9000/778 2000295180 two-user license
ruby:/pro/harmony/% model
9000/778/B160L
ruby:/pro/harmony/% grep -i b160l /usr/sam/lib/mo/sched.models
B160L 1.1e PA7300
ruby:/pro/harmony/% su
Password:
# echo itick_per_usec/D | adb -k /stand/vmunix /dev/mem
itick_per_usec:
itick_per_usec:  160
# ^D
ruby:/pro/harmony/% cat /var/opt/ignite/local/hw.info
disk: 8/16/5.0.0 0 sdisk 188 31 0 ADTX_AXSITS2532R_014C 4003776 /dev/rdsk/c0t0d0 /dev/dsk/c0t0d0 -1 -1 5 1 9
disk: 8/16/5.1.0 1 sdisk 188 31 1000 ADTX_AXSITS2532R_014C 6342840 /dev/rdsk/c0t1d0 /dev/dsk/c0t1d0 -1 -1 4 1 9
cdrom: 8/16/5.4.0 2 sdisk 188 31 4000 TOSHIBA_CD-ROM_XM-5701TA 0 /dev/rdsk/c0t4d0 /dev/dsk/c0t4d0 -1 -1 0 1 0
lan: 8/16/6 0 lan0 lan2 0060B0C00809 Built-in_LAN 0
graphics: 8/24 0 graph3 /dev/crt0 INTERNAL_EG_DX1024 1024 768 16 755548327
ext_bus: 8/16/0 1 CentIf n/a Built-in_Parallel_Interface
ext_bus: 8/16/5 0 c720 n/a Built-in_SCSI
ps2: 8/16/7 0 ps2 /dev/ps2_0 Built-in_Keyboard/Mouse
processor: 62 0 processor n/a Processor

an old version ready to run.
Otherwise images and most other media types are handled by Harmony itself, so let's grab and set up the client now. We'll want both Harmony proper and, later when we play a bit with the VRML tools, VRweb. Notionally these both come in Mesa and IRIX GL or OpenGL versions, but this laptop has no 3D acceleration, so we'll use the Mesa builds which are software-rendered and require no additional 3D support.

ftp> cd /run/media/spectre/Hyper-G/unix/Hyper-G/Harmony
250-
250-You have entered the Harmony archive.
250-
250-The current version is Release 1.1
250-and there are a few patched binaries in the
250-patched-bins directory.
250-
250-Please read INSTALLATION for full installation instructions.
250-
250-Mirrors can be found at:
250-
250-  Australia    ftp://ftp.cinemedia.com.au/pub/Hyper-G
250-  Austria      ftp://ftp.tu-graz.ac.at/pub/Hyper-G/
250-  Germany      ftp://elib.zib-berlin.de/pub/InfoSystems/Hyper-G/
250-               ftp://ftp.ask.uni-karlsruhe.de/pub/infosystems/Hyper-G
250-  Italy        ftp://ftp.esrin.esa.it/pub/Hyper-G
250-  Spain        ftp://ftp.etsimo.uniovi.es/pub/Hyper-G
250-  Sweden       ftp://ftp.sunet.se/pub/Networked.Information.Retrieval/Hyper-G
250-  New Zealand  ftp://ftp.cs.auckland.ac.nz/pub/HMU/Hyper-G
250-  UK           ftp://unix.hensa.ac.uk/mirrors/Hyper-G
250-  USA          ftp://ftp.utdallas.edu/pub/Hyper-G
250-               ftp://ftp.ncsa.uiuc.edu/Hyper-G
250-               ftp://ftp.ua.pt/pub/infosystems/www/Hyper-G
250-
250-and a distributing WWW server:
250-
250-  http://ftp.ua.pt/infosystems/www/Hyper-G
250-
250 Directory successfully changed.
ftp> get harmony-1.1-HP-UX-A.09.01-mesa.tar.gz
get harmony-1.1-HP-UX-A.09.01-mesa.tar.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for harmony-1.1-HP-UX-A.09.01-mesa.tar.gz (11700275 bytes).
226 Transfer complete.
11700275 bytes received in 11.95 seconds (956.02 Kbytes/s)
ftp> cd ../VRweb
250-
250-ftp://ftp.iicm.tu-graz.ac.at/pub/Hyper-G/VRweb/
250-... here you find the VRweb (VRML 3D Viewer) distribution.
250-
250-The current release is 1.1.2 of Mar 13 1996.
250-
250-Note: this directory is mirrored daily (nightly) to:
250-
250-  Australia   ftp://ftp.cinemedia.com.au/pub/Hyper-G/VRweb
250-              ftp://gatekeeper.digital.com.au/pub/Hyper-G/VRweb
250-  Austria     ftp://ftp.tu-graz.ac.at/pub/Hyper-G/VRweb
250-  Czech Rep.  ftp://sunsite.mff.cuni.cz/Net/Infosystems/Hyper-G/VRweb
250-  Germany     ftp://elib.zib-berlin.de/pub/InfoSystems/Hyper-G/VRweb
250-              ftp://ftp.ask.uni-karlsruhe.de/pub/infosystems/Hyper-G/VRweb
250-  Italy       ftp://ftp.esrin.esa.it/pub/Hyper-G/VRweb
250-  Poland      ftp://sunsite.icm.edu.pl/pub/Hyper-G/VRweb
250-  Portugal    ftp://ftp.ua.pt/pub/infosystems/www/Hyper-G/VRweb
250-  Spain       ftp://ftp.etsimo.uniovi.es/pub/Hyper-G/VRweb
250-  Sweden      ftp://ftp.sunet.se/pub/Networked.Information.Retrieval/Hyper-G/VRweb
250-  UK          ftp://unix.hensa.ac.uk/mirrors/Hyper-G/VRweb
250-  USA         ftp://ftp.ncsa.uiuc.edu/Hyper-G/VRweb
250-              ftp://mirror1.utdallas.edu/pub/Hyper-G/VRweb
250 Directory successfully changed.
ftp> cd UNIX
250-This directory contains the VRweb 1.1.2e distribution for UNIX/X11
250-
250-
250-vrweb-1.1.2e-[GraphicLibrary]-[Architecture]:
250-  VRweb scene viewer for viewing VRML files
250-  as external viewer for your WWW client.
250-
250-harscened-[GraphicLibrary]-[Architecture]:
250-  VRweb for Harmony. Only usable with Harmony, the Hyper-G
250-  client for UNIX/X11.
250-
250-[GraphicLibry]: ogl ... OpenGL (available for SGI, DEC Alpha)
250-                mesa ... Mesa (via X protocol; for all platforms)
250-
250-help.tar.gz
250-  on-line Help, includes installation guide
250-
250-vrweb.src-1.1.2e.tar.gz
250-  VRweb source code
250-
250 Directory successfully changed.
ftp> get vrweb-1.1.2e-mesa-HPUX9.05.gz
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for vrweb-1.1.2e-mesa-HPUX9.05.gz (1000818 bytes).
226 Transfer complete.
1000818 bytes received in 1.23 seconds (794.05 Kbytes/s)
ftp> ^D
221 Goodbye.

Everything got unpacked into the /pro logical volume, which has ample space.
Because this is for an earlier version of HP-UX, although it should run, we'd want to make sure it wasn't using outdated libraries or paths. Unfortunately, checking for this in advance is made difficult by the fact that ldd in HP-UX 11.00 will only show dependencies for 64-bit binaries, and this is a 32-bit binary on a 32-bit CPU, so we have to do it the hard way. For some reason symlinks for the shared libraries below didn't exist on this machine, and I had to discover that one by one.

/usr/lib/X11R5/libX11.1
lrwxr-xr-x 1 root sys 23 Jan 17 2001 /usr/lib/libX11.2 -> /usr/lib/X11R6/libX11.2
lrwxr-xr-x 1 root sys 23 Jan 17 2001 /usr/lib/libX11.3 -> /usr/lib/X11R6/libX11.3
ruby:/pro/% su
Password:
# cd /usr/lib
# ln -s libX11.1 libX11.sl
# ^D
ruby:/pro/% harmony/bin/harmony
/usr/lib/dld.sl: Can't open shared library: /usr/lib/libXext.sl
/usr/lib/dld.sl: No such file or directory
Abort
ruby:/pro/% ls -l /usr/lib/libXext*
lrwxr-xr-x 1 root sys 24 Jan 17 2001 /usr/lib/libXext.1 -> /usr/lib/X11R5/libXext.1
lrwxr-xr-x 1 root sys 24 Jan 17 2001 /usr/lib/libXext.2 -> /usr/lib/X11R6/libXext.2
lrwxr-xr-x 1 root sys 24 Jan 17 2001 /usr/lib/libXext.3 -> /usr/lib/X11R6/libXext.3
ruby:/pro/% su
Password:
# cd /usr/lib
# ln -s libXext.1 libXext.sl
# ^D
ruby:/pro/% harmony/bin/harmony
--- Harmony Version 1.1 (MESA) of Fri 15 Dec 1995 ---
Enviroment variable HARMONY_HOME not set
ruby:/pro/harmony/% gunzip vrweb-1.1.2e-mesa-HPUX9.05.gz
ruby:/pro/harmony/% file vrweb-1.1.2e-mesa-HPUX9.05
vrweb-1.1.2e-mesa-HPUX9.05: PA-RISC1.1 shared executable dynamically linked
ruby:/pro/harmony/% chmod +x vrweb-1.1.2e-mesa-HPUX9.05
ruby:/pro/harmony/% ./vrweb-1.1.2e-mesa-HPUX9.05
can't open DISPLAY
ruby:/pro/harmony/% mv vrweb-1.1.2e-mesa-HPUX9.05 bin/vrweb

We need to pass the -hghost option to Harmony, or it will connect to the IICM by default.
starmony:

#!/bin/csh
setenv HARMONY_HOME /pro/harmony
set path=($HARMONY_HOME/bin $path)
setenv XAPPLRESDIR $HARMONY_HOME/misc/
$HARMONY_HOME/bin/harmony -hghost indy &
^D
ruby:/pro/harmony/% ps -fu spectre
     UID   PID  PPID  C    STIME TTY       TIME COMMAND
 spectre  2172  2170  0 14:55:25 pts/0     0:00 /usr/bin/tcsh
 spectre  1514     1  0 12:34:19 ?         0:00 /usr/dt/bin/ttsession -s
 spectre  1535  1534  0 13:18:41 pts/ta    0:01 -tcsh
 spectre  1523  1515  0 12:34:20 ?         0:04 dtwm
 spectre  1515  1510  0 12:34:19 ?         0:00 /usr/dt/bin/dtsession
 spectre  2210  1535  3 15:28:44 pts/ta    0:00 ps -fu spectre
 spectre  1483  1459  0 12:34:13 ?         0:00 /usr/dt/bin/Xsession /usr/dt/bin/Xsession
 spectre  1510  1483  0 12:34:15 ?         0:00 /usr/bin/tcsh -c unsetenv _ PWD;
 spectre  2169  1523  0 14:55:24 ?         0:00 /usr/dt/bin/dtexec -open 0 -ttprocid 1.1eAEIw 01 1514 134217
 spectre  2170  2169  0 14:55:24 ?         0:00 /usr/dt/bin/dtterm
 spectre  2194  2193  0 15:01:47 pts/0     0:01 hartextd -c 49285
 spectre  2193     1  0 15:01:42 pts/0     0:06 /pro/harmony/bin/harmony -hghost indy

We'll just do everything as hgsystem anyway, but in a larger deployment you'd of course have multiple users with appropriate permissions. Hyper-G users are specific to the server; they do not have a Unix uid. Users may be in groups and may have multiple simultaneously valid passwords (this is to facilitate automatic login from known hosts, where the password can be unique to each host). Each user gets their own "home collection" that they may maintain, like a home directory. Each user also has a credit account which is automatically billed when pay resources are accessed, though the Hyper-G server is agnostic about how account value is added. We can certainly whip out hgadmin again and do it from the command line, but we can also create users from a graphical administration tool that comes as part of Harmony. This tool is haradmin, or the Harmony Administrator. DocumentType is what we'd consider the "actual" object type.
By default, all users, including anonymous ones, can view objects, but cannot write or delete ("unlink") anything; only the owner of the object and the system administrators can do those. In practical terms, the unprivileged user with no group memberships we created has the same permissions as an anonymous drive-by right now. However, because this user is authenticated, we can add permissions to it later. I've censored the most significant word in this and other screenshots with global IDs for this machine because it contains the Indy's IPv4 address and you naughty little people out there don't need to know the details of my test network.

% hifimport rootcollection cleaned-tech.hif
Username:hgsystem
Password:
hifimport: HIF 1.0
hifimport: #
hifimport: #
hifimport: Collection en:Technical Documentation on Hyper-G
hifimport: Text en:Hyper-G Anchor Specification Version 1.0
[...]
hifimport: # END COLLECTION obj.rights
hifimport: # END COLLECTION hg_server
hifimport: Collection en:Software User Manuals (man-pages)
hifimport: Text en:dbserver.control (1)
hifimport: # already visited: 0x000003b7 (en:dcserver (1))
hifimport: # already visited: 0x000003bf (en:ftmkmirror (1))
hifimport: # already visited: 0x000003c2 (en:ftquery (1))
hifimport: # already visited: 0x000003c1 (en:ftserver (1))
hifimport: # already visited: 0x000003be (en:ftunzipmirror (1))
hifimport: # already visited: 0x000003bd (en:ftzipmirror (1))
hifimport: Text en:gophgate (1)
hifimport: Text en:hgadmin (1)
[...]
hifimport: Text en:Clark J.: SGMLS
hifimport: * Object already exists. Not replaced.
hifimport: Text en:Goldfarb C. F.: The SGML Handbook
hifimport: Text en:ISO: Information Processing - 8-bit single-byte coded graphic character sets - Part 1: Latin alphabet No. 1, ISO IS 8859-1
[...]
hifimport: # END COLLECTION HTFdoc
hifimport: Text en:Hyper-G Interchange Format (HIF)
hifimport: # END COLLECTION hyperg/tech
PASS 2: Additional Collection Memberships
C 0x00000005 hyperglinks(0xa1403d02)
hifimport: Error: No Collection hyperglinks(0xa1403d02)
C 0x00000005 technik-speziell(0x83f65901)
hifimport: Error: No Collection technik-speziell(0x83f65901)
C 0x0000015c ~bolle(0x83ea6001)
hifimport: Error: No Collection ~bolle(0x83ea6001)
C 0x00000193 ~smitter
hifimport: Error: No Collection ~smitter
[...]
PASS 3: Source Anchors
SRC Doc=0x00000007 GDest=0x811b9908 0x00187b01
SRC Doc=0x00000008 GDest=0x811b9908 0x00064f74
[...]
hifimport: Warning: Link Destination outside HIF file: 0x00000323
hifimport: ... linking to remote object.
hifimport: Error: Could not make src anchor: remote server not responding
[...]
SRC Doc=0x000002e1 GDest=0x811b9908 0x000b5c5b
SRC Doc=0x000002e1 GDest=0x811b9908 0x000b5c5a
hifimport: Inserted 75 collections.
hifimport: Inserted 528 documents.
hifimport: Inserted 596 anchors.

The target collection is given on the command line (here, rootcollection). The import then proceeds in three passes. The first pass just loads the objects, and the second sets up additional collection memberships. This dump included everything, including user collections that did not exist, so those additional memberships were (in this case desirably) not created. In the final third pass, all new anchors added to the database are processed for validity. Notice that one of them referred to an outside Hyper-G server that the import process duly attempted to contact, as it was intended to. In the end, the import process successfully added 75 new collections and 528 documents with 596 anchors. Instant content!

Now let's try hgtv, the line-mode Hyper-G client, on the Indy. This client, or at least this version of the client, does not start at the Hyper Root, and we are immediately within our own root collection. The number is the total number of documents and subdocuments within it. hgtv understands HTF, so we can view the documents directly. Looks like it all worked. Let's jump back into Harmony and see how it looks there too. The default editor is emacs, but this house is Team vi, and we will brook no deviations. For CDE, though, dtpad would be better.
You can change the X resource Harmony.Text.editcommand accordingly.

Here I've written a very basic HTF file that will suffice for the purpose, based on the Floodgap machine room page (which hasn't been updated since 2019, but I'll get around to it soon enough). As a point of contrast I've elected to do this as a cluster rather than a collection so that you can see the difference. Recall from the introduction that a cluster is a special kind of collection intended for use where the contents are semantically equivalent or related, like alternative translations of a text or alternative formats of an image. This is a rather specific case, so for most instances, admittedly including this one, you'd want a collection. In practice, however, Hyper-G doesn't really impose this distinction rigidly, and as you're about to see, a cluster mostly acts like a collection by a different name — except where it doesn't.

HTF also supports clickable images, analogous to HTML's <map> for imagemaps. Here the clickable region uses the page for alex, our beige-box Am5x86 DOS games machine, as the destination.

Multiple documents in a cluster can be open at the same time — I guess for people who might do text descriptions for the visually impaired, or something. Clusters appear differently in two other respects we'll get to a little later on.

VRML (Virtual Reality Modeling Language) was nearly a first-class citizen. Most modern browsers don't support VRML, and its technological niche is now mostly occupied by X3D, which is largely backwards-compatible with it. Like older browsers, Hyper-G needs an external viewer (in this case VRweb, which we loaded onto the system as part of the Harmony client install), but once installed, VRML becomes just as smoothly integrated into the client as PostScript documents. Let's create a new collection with the sample VRML documents that came with VRweb.

The effect recalls SGI's File System Navigator (fsn), as most famously seen in 1993's Jurassic Park. Jurassic Park is a candy store for vintage technology sightings, notably the SGI Crimson, Macintosh Quadra 700 and what is likely an early version of the Motorola Envoy, all probably due to Michael Crichton's influence.
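If you've never looked at VRML 1.0 source, it's a readable plain-text format. Here's a minimal hand-written world (my own toy example, not one of the VRweb samples) of the kind such viewers accept:

```
#VRML V1.0 ascii
Separator {
    Material { diffuseColor 0.8 0.2 0.2 }  # reddish surface colour
    Cube { width 2 height 2 depth 2 }      # a box centred at the origin
}
```

VRML 2.0 (a.k.a. VRML97) later changed the syntax considerably, which is one reason viewers of this era are fussy about which dialect they're fed.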
Here Harmony generates a similar landscape view of online resources, made possible by the fact that edges could be walked in both directions and retrieved rapidly from the database. It may also remind you of GopherVR, which came out in 1995 and post-dates both fsn and earlier versions of Harmony, but it now renders a lot better with some content. (I do need to get around to updating GopherVR for 64-bit.)

Harmony wouldn't display this particular Gopher menu, though hgtv did. This problem likely never got fixed, because the beta versions on the Internet Archive unfortunately appear to have removed Gopher support entirely.

Rodney Anonymous.

On to Amadeus, the Windows client. Note how it displays C:\ ... \Program Files as \PROGRA~1. This version of Amadeus is distributed in multiple floppy-sized archives. That was a nice thing at the time, but today it's also really obnoxious. Here are all the points at which you'll need to "switch disks" (e.g., by unzipping them to the installation folder):

(Simpsons music)

Amadeus bears more resemblance to hgtv than it does to Harmony, which to be sure was getting the majority of development resources. In particular, there isn't a tree view; you just go from collection to collection like individual menus.

For our last trick, we'll put our unprivileged users (blofeld and spectre) into the lusers group. We then give the collection the access rights W:g lusers, which keeps the default read and unlink permissions but specifically allows users in group lusers to create and modify documents here.

Logged in as blofeld (because you can never say never again), I will now annotate that "thread." (Oddly, this version of Harmony appears to lack an option to directly annotate from the text view. I suspect this oversight was corrected later.) spectre can post too.

However, because blofeld and spectre have the same permissions, and the default is to allow anyone in the group to write, without taking some explicit steps they can then edit each other's posts with impunity. To wit, we'll deface blofeld's comment.

That concludes our demonstration, so on the Indy we'll type dbstop to bring down the database and finish our story.

Hyper-G was subsequently commercialized as HyperWave, with offices in Germany and Austria, later expanding to the US and UK.
Gopher no longer had any large-scale relevance and the Web had clearly become dominant, causing Hyperwave to gradually de-emphasize its own native client in favor of uploading and managing content with more typical tools like WebDAV, Windows Explorer and Microsoft Word, and administering and accessing it with a regular web browser (offline operation was still supported), as depicted in this screenshot from the same year. Along the way the capital W got dropped, and HyperWave became merely Hyperwave.

In all of these later incarnations, however, the bidirectional linkages and strict hierarchy remained intact as foundational features in some form, even though the massive Hyper Root concept contemplated by earlier versions ultimately fell by the wayside. Hyperwave continues to be sold as a commercial product today, with the company revived after a 2005 reorganization, and the underlying technology of Hyper-G still seems to be part of the most current release. As proof, at the IICM — now, after several name changes, called the Institute of Human-Centred Computing, with professor Frank Kappe its first deputy — there's still a HyperWave [sic] IS/7 server running. It has a home collection just like ours with exactly one item: the home page of Hermann Maurer, who as of this writing still remains on Hyperwave's advisory board.

Although later products have attempted to do similar sorts of large-scale document and resource management, Hyper-G pioneered the field by years, and even smaller such tools owe it a debt, either directly or by independent convergent evolution. That makes it more than appropriate to appear in the Prior Art Department, especially since some of its more innovative solutions to hypermedia's intrinsic navigational issues have largely been forgotten — or badly reinvented. That said, unlike many such examples of prior art, it has managed to quietly evolve and survive to the present, even if by doing so it lost much of its unique nature and some of its wildest ideas.
Of course, without those wild ideas, this article would have been a great deal less interesting. You can access the partial mirror on the Internet Archive, or our copy of the CD with everything I've demonstrated here and more on the Floodgap gopher server.
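As a coda, the bidirectional link database that underpinned so much of what we saw (the landscape views, the anchor validation during import, safe deletion) rewards a moment's thought. The real server is nothing like this, but a toy sketch of the core idea, indexing every link under both of its endpoints so that back-references are as cheap as forward ones, might look like this:

```python
from collections import defaultdict

class LinkDB:
    """Toy bidirectional link database (not Hyper-G's actual design):
    each link is indexed under both its source and its destination."""

    def __init__(self):
        self.out_links = defaultdict(set)  # source -> set of destinations
        self.in_links = defaultdict(set)   # destination -> set of sources

    def add_link(self, src, dest):
        self.out_links[src].add(dest)
        self.in_links[dest].add(src)

    def remove_document(self, doc):
        # Because back-links are indexed, every reference to a deleted
        # document can be stripped immediately -- no dangling links.
        for src in self.in_links.pop(doc, set()):
            self.out_links[src].discard(doc)
        for dest in self.out_links.pop(doc, set()):
            self.in_links[dest].discard(doc)

db = LinkDB()
db.add_link("home", "manpages")
db.add_link("tech-docs", "manpages")
print(sorted(db.in_links["manpages"]))  # who links here? -> ['home', 'tech-docs']
db.remove_document("manpages")
print(db.out_links["home"])             # -> set()
```

With only a forward index, answering "what points at this document?" means scanning the entire database; indexing both directions is what lets a server flag or repair links the moment a destination disappears, which is exactly the lorn-anchor problem the Web never solved.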
