I like having a safety net whenever I’m doing something potentially destructive, which is why I use the btrfs file system for my operating system and my data. Snapshots are one half of my “whoops, there goes all my work” strategy (backups are the other half). I’ve written about how I use snapshots on btrfs using snapper, but lately I’ve become annoyed with it.

Shortcomings of snapper

snapper is great while you’re on the happy path, but when you wander off it, things get a bit frustrating. This is 100% my own personal experience and I cannot rule out any PEBCAK scenarios, but it’s how I felt using the tool. The snapshots live on the same subvolume, such as /home/.snapshots, which means that I have to specifically exclude the .snapshots folder in every tool and script that I use for making backups. Without that exclusion, the tools ended up scanning the folder and eating up a lot of resources, mainly CPU and storage. Backing up snapshots to something like an external backup SSD wasn’t...
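As a minimal illustration of the exclusion problem, here is a sketch of what a backup script has to do to skip a snapper-style .snapshots directory (the paths here are throwaway examples, not the author's actual setup; real backup tools have their own exclude flags):

```python
import pathlib
import shutil
import tempfile

# Build a throwaway home directory with a snapper-style .snapshots tree.
root = pathlib.Path(tempfile.mkdtemp())
home = root / "home"
(home / ".snapshots" / "1").mkdir(parents=True)
(home / "docs").mkdir()
(home / "docs" / "notes.txt").write_text("work")
(home / ".snapshots" / "1" / "notes.txt").write_text("old snapshot copy")

# Copy everything except the snapshot tree, the way a backup script might.
backup = root / "backup"
shutil.copytree(home, backup, ignore=shutil.ignore_patterns(".snapshots"))

# Only the real data made it into the backup; the snapshot tree was skipped.
print(sorted(p.name for p in backup.rglob("*")))
```

Without the `ignore` argument, the copy would descend into every snapshot and duplicate the same files over and over, which is exactly the resource drain described above.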
over a year ago


More from ./techtipsy

PSA: part of your Kagi subscription fee goes to a Russian company (Yandex)

Today I learned that Kagi uses Yandex as part of its search infrastructure, making up about 2% of their costs, and their CEO has confirmed that they do not plan to change that. To quote: Yandex represents about 2% of our total costs and is only one of dozens of sources we use. To put this in perspective: removing any single source would degrade search quality for all users while having minimal economic impact on any particular region. The world doesn’t need another politicized search engine. It needs one that works exceptionally well, regardless of the political climate. That’s what we’re building. That is unfortunate, as I found Kagi to be a good product with an interesting take on combining LLMs with search that is kind of useful, but I cannot in good conscience continue to support it while they unapologetically finance a major company that has ties to the Russian government, the same country that has been actively waging a war against Ukraine, a European country, for over 11 years, during which they’ve committed countless war crimes against civilians and military personnel. Kagi has the freedom to decide how they build the best search engine, and I have the freedom to use something else. Please send all your whataboutisms to /dev/null.

2 weeks ago 24 votes
How a Hibernate deprecation log message made our Java backend service super slow

It was time to upgrade Hibernate on that one Java monolithic1 backend service that my team was responsible for. We took great precautions with these types of changes due to the scale of the system, splitting changes into as many small parts as possible and releasing them as often as possible. With bigger changes we opted for running a few instances of the new version in parallel to the existing one. Then came Hibernate 5.2. Hibernate 5.2 introduced a new warning log to indicate that the existing API for writing queries is deprecated. Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead Every time you used the Criteria API it would print the line. Just one little issue there. Can you see it? Every time you used the Criteria API it would print the line. In a poorly written Java backend service, one HTTP request can make multiple queries to the database. With hundreds of millions of HTTP requests, this can easily balloon to billions of additional logs a day. Well, that’s exactly what happened to our service, resulting in the CPU usage jumping up considerably and the latency of the service being negatively impacted. We didn’t have the foresight to compare every metric against every instance of the service, and when the metrics were summarized across all instances, this increase was not that noticeable while both new and existing instances of the service were running. Aside from the service itself, this had negative effects downstream as well. If you have a solution for collecting your service logs for analysis and retention, and it’s priced on the amount of logs that you print out, then this can end up being a very costly issue for you. We resolved the issue by making a configuration change to our logger that disabled these specific logs. 
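The logger-side fix can be sketched as silencing that one category. For example, in a Logback configuration (the exact category name is an assumption here: Hibernate 5.x emits its deprecation warnings under a dedicated logger, commonly org.hibernate.orm.deprecation, so check the category your version actually prints):

```xml
<!-- Raise the threshold for Hibernate's deprecation logger so the
     per-query Criteria warning never reaches the appenders. -->
<configuration>
  <logger name="org.hibernate.orm.deprecation" level="ERROR"/>
</configuration>
```

This keeps the warning available at a lower level if you ever want it back, rather than filtering it out of the log pipeline entirely.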
This does make me wonder who else may have been impacted by this change over the years and what that impact might’ve looked like in terms of resource usage on a worldwide scale. I’m not blaming the Hibernate developers, they had good intentions, but the impact of an innocent change like that was likely not taken into account for large-scale services. Last I heard, the people behind Hibernate are a very small team, and yet their software powers much of the world, including critical infrastructure like the banking system. I’m well aware that we’re talking about Hibernate releases that came out around the time I was still a junior developer (2016-2018). Some call it technical debt, others call it over half a decade of neglect. unmaintained monoliths suck, but so do unmaintained microservices. ↩︎

2 weeks ago 25 votes
From building ships to shipping builds: how to succeed in making a career switch to software development

I have worked with a few software developers who made the switch to this industry in the middle of their careers. A major change like that can be scary and raise a lot of fears and doubts, but I can attest that it can work out well with the right personality traits and a supportive environment. Here’s what I’ve observed. To keep the writing concise, I’ll be using the phrase “senior junior”1 to describe those who have made such a career switch.

Overcoming the fear

Fear is a natural reaction to any major change in life, especially when there’s a risk of taking a financial hit while you have a family to support and a home loan to pay. The best mitigation that I’ve heard is believing that you can make the change successfully. It sounds like an oversimplification, sure, as all it does is remove a mental blocker and throw out the self-doubt. And yet it works unreasonably well. It also helps if you have at least some savings to mitigate the financial risk. A year’s worth of expenses saved up can go a long way in providing a solid safety net.

What makes them succeed

A great software developer is not someone who simply slings some code over the wall and spends all of their day working only on the technical stuff; there are quite a few critical skills that one needs to succeed. This is not an exhaustive list, but I’ve personally observed that the following are the most critical:

- ability to work in a team
- great communication skills
- conflict resolution
- ability to make decisions in the context of product development and business goals
- maintaining an environment of psychological safety

Those with more than a decade of experience in another role or industry will most likely have a lot of these skills covered already, and they can bring that skill set into a software development team while working with the team to build up their technical skills.
Software development is not special; at the end of the day, you’re still interacting with humans and everything that comes with that, good or bad. After working with juniors who are fresh out of school and “senior juniors” who have more career experience than I do, I have concluded that the ones who end up being great software developers have one thing in common: the passion and drive to learn everything about the role and the work we do. One highlight that I often like to share in discussions is a software developer who used to work in manufacturing. At some point they got interested in learning how they could use software to make work more efficient. They started with an MVP solution involving a big TV and Google Sheets, then started learning about web development for a solution in a different area of the business, and ended up building a basic inventory system for the warehouse. After 2-3 years of self-learning outside of work hours and deploying to production in the most literal sense, they ended up joining my team. They got up to speed very quickly and became a very valuable contributor on the team. In another example, I have worked with someone who previously held a position as a technical draftsman and 3D designer in a ship building factory (professionals call it a shipyard), but after some twists and turns ended up at a course for those interested in making a career switch, which eventually led to them working in the same company I do. Now they ship builds with confidence while making sure that the critical system we are working on stays stable. That developer also kicks my ass in foosball about 99% of the time.

The domain knowledge advantage

The combination of industry experience and software development skills is an incredibly powerful one.
When a software developer starts work on a project, they learn the business domain piece by piece, eventually reaching a state where they have a rough idea of how the business operates, but never the full picture. Speaking with their end users goes a long way, but there are always some details that get lost in that process. Someone coming from the industry will have in-depth knowledge about the business: how it operates, where the money comes from, what the main pain points are and where the opportunities for automation lie. They will know what problems need solving, and have the basic technical know-how to try solving them. Like a product owner, but on steroids. Software developers often fall into the trap of creating a startup to scratch that itch they have for building new things, or trying out technologies that have been on their to-do list for a very long time. The technical problems are fun to solve, sure, but the focus should be on the actual problem that needs fixing. If I wanted to start a new startup with someone, I’d look for someone working in an industry that I’m interested in and who understands the software development basics. Or maybe I’m just looking for an excellent product owner.

How to help them succeed

If you have a “senior junior” software developer on your team, then there really isn’t anything special you’d need to do compared to any other new joiner. Do your best to foster a culture of psychological safety, have regular 1-1s with them, and make sure to pair them up with more experienced team members as often as possible. A little bit of encouragement in challenging environments or periods of self-doubt can also go a long way. Temporary setbacks are temporary, after all.

What about “AI”?

Don’t worry about all that “AI”2 hype; if it was as successful in replacing all software development jobs as a lot of people like to shout from the rooftops, then it would have already done so.
At best, it’s a slight productivity boost3 at the cost of a huge negative impact on the environment.

Closing thoughts

If you’re someone who has thought about working as a software developer, or who is simply excited about all the ways that software can be used to solve actual business problems and build something from nothing, then I definitely recommend giving it a go, assuming that you have the safety net and risk appetite to do so. For reference, my journey towards software development looked like this, plus a few stints of working as a newspaper seller or a grocery store worker. what do you call a “senior senior” developer, a senile developer? ↩︎ spicy autocomplete engines (also known as LLM-s) do not count as actual artificial intelligence. ↩︎ what fascinates me about all the arguments around “AI” (LLM-s) is the feeling of being more productive. But how do you actually measure developer productivity, and do you account for possible reduced velocity later on, when you’ve mistaken code generation speed for velocity and introduced hard-to-catch bugs into the code base that need to be resolved when they inevitably become an issue? ↩︎

a month ago 25 votes
My horrible Fairphone customer care experience

Fairphone has bad customer support. It’s not an issue with the individual customer support agents; I know how difficult their job is1, and I’m sure that they’re trying their best. It’s a more systemic issue in the organization itself. It’s become so bad that Fairphone issued an open letter to the Fairphone community forum acknowledging the issue and the steps they’re taking to fix it. Until then, I only have my experience to go by. I’ve contacted Fairphone customer support twice: once with a question about Fairphone 5 security updates not arriving in a timely manner, and another time with a request to refund the Fairphone Fairbuds XL under the 14-day return policy. In both cases, I received an initial reply over 1 month later. That’s not catastrophic for a non-critical query, but in situations where you have a technical issue with a product, it can become a huge inconvenience for the customer. I recently gave the Fairbuds XL a try because the reviews for them online were decent and I want to support the Fairphone project, but I found the sound profile very underwhelming and the noise cancelling did not work adequately.2 I decided to use the 14-day return policy that Fairphone advertises, which led to the worst customer care experience I’ve had so far.3 Here’s a complete timeline of the process of returning a set of headphones to the manufacturer for a refund.
- 2025-02-10: initial purchase of the headphones
- 2025-02-14: I receive the headphones and test them out, with disappointing results
- 2025-02-16: I file a support ticket with Fairphone indicating that I wish to return the headphones according to their 14-day return policy
- 2025-02-25: I ask again about the refund after not hearing back from Fairphone
- 2025-03-07: I receive an automated message that apologizes for the delay and asks me to not make any additional tickets on the matter, which I had not been doing
- 2025-04-01: I start the chargeback process for the payment through my bank due to Fairphone support not replying for over a month
- 2025-04-29: Fairphone support finally responds with instructions on how to send back the device to receive a refund
- 2025-05-07: after acquiring packaging material and printing out three separate documents (UPS package card, invoice, Cordon Electronics sales voucher), I hand the headphones over to UPS
- 2025-05-15: I ask Fairphone about when the refund will be issued
- 2025-05-19 16:20 EEST: I receive a notice from Cordon Electronics confirming they have received the headphones
- 2025-05-19 17:50 EEST: I receive a notice from Cordon Electronics letting me know that they have started the process, whatever that means
- 2025-05-19 20:05 EEST: I receive a notice from Cordon Electronics saying that the repairs are done and they are now shipping the device back to me (!)
- 2025-05-19 20:14 EEST: I contact Fairphone support about this notice that I received, asking for a clarification
- 2025-05-19 20:24 EEST: I also send an e-mail to Cordon Electronics clarifying the situation and asking them to not send the device back to me, but instead return it to Fairphone for a refund
- 2025-05-20 14:42 EEST: Cordon Electronics informs me that they have already shipped the device and cannot reverse the decision
- 2025-05-21: Fairphone support responds, saying that it is being sent back due to a processing error, and that I should try to “refuse the order”
- 2025-05-22: I inform Fairphone support about the communication with Cordon Electronics
- 2025-05-27: Fairphone is aware of the chargeback that I initiated and believes the refund has been issued; however, I have not yet received it
- 2025-05-27: I receive the headphones for the second time
- 2025-05-28: I inform Fairphone support about the current status of the headphones and refund (still not received)
- 2025-05-28: Fairphone support recommends that I ask the bank about the status of the refund; I do so but don’t receive any useful information from them
- 2025-06-03: Fairphone support asks if I’ve received the refund yet
- 2025-06-04: I receive the refund through the dispute I raised through the bank. This is almost 4 months after the initial purchase took place
- 2025-06-06: Fairphone sends me instructions on how to send back the headphones for the second time
- 2025-06-12: I inform Fairphone that I have prepared the package and will post it next week due to limited access to a printer and the shipping company office
- 2025-06-16: I ship the device back to Fairphone again

There’s an element of human error in the whole experience, but the initial lack of communication amplified my frustrations and also contributed to my annoyances with my Fairphone 5 boiling over. And just like that, I’ve given up on Fairphone as a brand, and will be skeptical about buying any new products from them.
I was what one would call a “brand evangelist” for them, sharing my good initial experiences with the phone with my friends, family, colleagues and the world at large, but bad experiences with customer care and the devices themselves have completely turned me off. If you have interacted with Fairphone support after this post is live, then please share your experiences in the Fairphone community forum, or reach out to me directly (with proof). I would love to update this post after getting confirmation that Fairphone has fixed the issues with their customer care and addressed the major shortcomings in their products. I don’t want to crap on Fairphone, I want them to do better. Repairability, sustainability and longevity still matter. I haven’t worked as a customer care agent, but I have worked in retail, so I roughly know what level of communication the agents are treated with, often unfairly. ↩︎ that experience reminded me of how big of a role music plays in my life. I’ve grown accustomed to using good-sounding headphones and I immediately noticed all the little details missing from my favourite music. ↩︎ until this point, the worst experience I had was with Elisa Eesti AS, a major ISP in Estonia. I wanted to use my own router-modem box that was identical to the rented one from the ISP, and the issue only got resolved 1.5 months later, after I expressed intent to switch providers. Competition matters! ↩︎

a month ago 29 votes
Lenovo ThinkCentre M900 Tiny: how does it fare as a home server?

My evenings of absent-minded local auction site scrolling1 paid off: I now own a Lenovo ThinkCentre M900 Tiny. It’s relatively old, being manufactured in 20162, but it’s tiny and has a lot of useful life left in it. It’s also featured in the TinyMiniMicro series by ServeTheHome. I managed to get it for 60 EUR plus about 4 EUR shipping, and it comes with solid specifications:

- CPU: Intel i5-6500T
- RAM: 16GB DDR4
- Storage: 256GB SSD
- Power adapter included

The price is good compared to similar auctions, but was it worth it? Yes, yes it was. I have been running a ThinkPad T430 as a server for a while now, since October 2024. It served me well in that role and would’ve served me for even longer if I wanted it to, but I had an itch for a project that didn’t involve renovating an apartment.3

Power usage

One of my main curiosities was around the power usage. Will this machine beat the laptop in terms of efficiency while idling and running normal home server workloads? Yes, yes it does. After booting into Windows 11 and letting it calm down a bit, the lowest idle power numbers I saw were around 8 W. This concludes the testing on Windows. On Linux (Fedora Server 42), the idle power usage was around 6.5 W to 7 W. After running powertop --auto-tune, I got that down to 6.1-6.5 W. This is much lower than the numbers that ServeTheHome got, which were around 11-13 W (120V circuit). My measurements were made in Estonia, where we have 240V circuits. You may be able to find machines where the power usage is even lower. Louwrentius made an idle power comparison on an HP EliteDesk Mini G3 800 where they measured it at 4 W. That might also be due to other factors in play, or differences in measurement tooling. During normal home server operation with 5 SATA SSD-s connected (4 of them with USB-SATA adapters), I have observed power consumption of around 11-15 W, with peaks around 40 W. On a pure CPU load with stress -c 8, I saw power consumption of around 32 W.
Formatting the internal SATA SSD added 5 W to that figure.

USB storage, are you crazy?

Yes. But hear me out. Back in 2021, I wrote about USB storage being a very bad idea, especially on BTRFS. I’ve learned a lot over the years, and BTRFS has received continuous improvements as well. In my ThinkPad T430 home server setup, I had two USB-connected SSD-s running in RAID0 for over half a year, and it was completely fine unless you accidentally bumped into the SSD-s. USB-connected storage is fine under the right circumstances:

- the cables are not damaged
- the cables are not at a weird angle or twisted (I actually had issues with this point: my very cool and nice cable management resulted in one disk having connectivity issues, which I fixed by relieving stress on the cables and routing them differently)
- the connected PC does not have chronic overheating issues
- the whole setup is out of the reach of cats, dogs, children and clumsy sysadmin cosplayers
- the USB-SATA adapters pass through the device ID and S.M.A.R.T. information to the host (the device ID part especially is key to avoiding issues with various filesystems (especially ZFS) and storage pool setups; the ICY BOX IB-223U3a-B is a good option that I have personally been very happy with, and it’s what I’m using in this server build)
- a lot of adapters (mine included) don’t support passing SSD TRIM commands through to the drives, which might be a concern (it has not been an issue for over half a year with those ICY BOX adapters, but it’s something to keep in mind)
- you are not using an SBC as the home server (even a Raspberry Pi 4 can barely handle one USB-powered SSD; not an issue if you use an externally powered drive or a USB DAS)

After a full BTRFS scrub and a few days of running, it seems fine. Plus it looks sick as hell with the identical drives stacked on top. All that’s missing are labels specifying which drive is which, but I’m sure that I’ll get to that someday, hopefully before a drive failure happens.
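To put those wattages in perspective, here is a quick back-of-the-envelope sketch of the yearly energy use and cost of an always-on machine (the electricity price is an illustrative assumption, not a figure from this post):

```python
# Yearly energy use and cost of a machine running 24/7 at a constant draw.
def yearly_energy(watts: float, price_eur_per_kwh: float) -> tuple[float, float]:
    """Return (kWh per year, EUR per year) for a constant average power draw."""
    kwh = watts * 24 * 365 / 1000
    return kwh, kwh * price_eur_per_kwh

# 6.1 W idle (the tuned Linux figure above) at an assumed 0.25 EUR/kWh:
kwh, cost = yearly_energy(6.1, 0.25)
print(f"idle: {kwh:.1f} kWh/year, ~{cost:.2f} EUR/year")

# 13 W, roughly the middle of the observed home-server load range:
kwh, cost = yearly_energy(13, 0.25)
print(f"load: {kwh:.1f} kWh/year, ~{cost:.2f} EUR/year")
```

Even at load, the whole setup lands around a hundred kWh per year, which is the appeal of these tiny machines over repurposed desktops.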
In a way, this type of setup best represents what a novice home server enthusiast may end up with: a tiny, power-efficient PC with a bunch of affordable drives connected.

Less insane storage ideas for a tiny PC

There are alternative options for handling storage on a tiny 1 liter PC, but they have some downsides that I don’t want to be dealing with right now. A USB DAS allows you to handle many drives with ease, but they are also damn expensive. If you pick wrong, you might also end up with one where the USB-SATA chip craps out under high load, which will momentarily drop all the drives, leaving you with a massive headache to deal with. Cheaper USB-SATA docks are more prone to this, but I cannot confirm or deny whether more expensive options have the same issue. Running individual drives sidesteps this issue and moves any potential issues to the host USB controller level. There is also a distinct lack of solutions that are designed around 2.5" drives only; most of them are designed around massive and power-hungry 3.5" drives. I just want to run my 4 existing SATA SSD-s until they crap out completely. An additional box that does stuff generally adds to the overall power consumption of the setup as well, which I am not a big fan of. Lowering the power consumption of the setup was the whole point! I can’t rule out testing USB DAS solutions in the future as they do seem handy for adding storage to tiny PC-s and laptops with ease, but for now I prefer going the individually connected drives route, especially because I don’t feel like replacing my existing drives: they still have about 94% SSD health in them after 3-4 years of use, and new drives are expensive. Or you could go full jank and use that one free NVMe slot in the tiny PC to add more SATA ports or break out to other devices, such as a PCIe HBA, and introduce a lot of clutter to the setup with an additional power supply, cables and drives. Or use 3.5" external hard drives with separate power adapters.
It’s what I actually tried out back in 2021, but I had some major annoyances with the noise.

Miscellaneous notes

Here are some notes on everything else that I’ve noticed about this machine. The PC is quite efficient, as demonstrated by the power consumption numbers, and as a result it runs very cool, idling around 30-35 °C in a ~22-24 °C environment. Under a heavy load, the CPU temperatures creep up to 65-70 °C, which is perfectly acceptable. The fan does come on at higher load and it’s definitely audible, but in my case it runs in a ventilated closet, so I don’t worry about that at all. The CPU (Intel i5-6500T) is plenty fast for all sorts of home server workloads with its 4 CPU cores and clock speeds of 2.7-2.8 GHz under load. The UEFI settings offered a few interesting options that I decided to change; the rest are set to default. There is an option to enable an additional C-state for even better power savings. For home server workloads, it was nice to see a setting that allows you to boot the PC without a keyboard attached, found under the “Keyboardless operation” setting. I guess that in some corporate environments disconnected keyboards are such a common helpdesk issue that it necessitates having this option around.

Closing thoughts

I just like these tiny PC boxes a lot. They are tiny, fast and have a very solid construction, which makes them feel very premium in your hands. They are also perfectly usable, extensible and can be an absolute bargain at the right price. With power consumption figures that are only a few watts off of a Raspberry Pi 5, it might make more sense to get a TinyMiniMicro machine for your next home server. I’m definitely very happy with mine. well, at least it beats doom-scrolling social media. ↩︎ yeah, I don’t like being reminded of being old either. ↩︎ there are a lot of similarities between construction/renovation work and software development, but that’s a story for another time. ↩︎

a month ago 34 votes

More in technology

A real PowerBook: the Macintosh Application Environment on a PA-RISC laptop

I like the Power ISA very much, but there's nothing architecturally obvious to say that the next natural step from the Motorola 68000 family must be to PowerPC. For example, the Palm OS moved from the DragonBall to ARM, and it's not necessarily a well-known fact that the successor to Commodore's 68K Amigas was intended to be based on PA-RISC, Hewlett-Packard's "Precision Architecture" processor family. (That was the Hombre chipset, and prototype chips existed prior to Commodore's demise in 1994, though controversy swirled regarding backwards compatibility.) Sure, Apple and Motorola were two-thirds of the AIM alliance, and there were several PowerPC PowerBooks available when the fall of 1997 rolled around. But what if the next PowerBooks had been based on PA-RISC instead? Well, no need to strain yourself imagining it. Here's nearly as close as you're gonna get. that game we must all try running), analyze its performance and technical underpinnings, and uncover an unusual artifact of its history hidden in the executable. (A shout-out to Paul Weissman, the author and maintainer of the incomparable PA-RISC resource OpenPA.net, who provided helpful insights for this article.) near my childhood hometown, RDI Computer Systems was founded in 1989 as Research, Development and Innovations Incorporated in La Costa, California, a neighbourhood of Carlsbad annexed in 1972 in northern San Diego county. (It is completely unrelated to earlier Carlsbad company RDI Video Systems, short for "Rick Dyer Industries" and the developers of laserdisc games like Dragon's Lair and Space Ace, who folded in 1985 after their expensive Halcyon home console imploded mid-development from the 1983 video game crash.) RDI, like several of its contemporaries, was established to capitalize on Sun Microsystems' attempt to commoditize SPARC and open up the market to other OEMs. 
While most such vendors like Solbourne Computer heavily invested in the desktop workstation segment, RDI instead went even smaller, producing what would become the first SPARC laptops in the United States. Basically SPARCstation IPC and IPX systems crammed into boxy off-white portable cases, the BriteLite series weighed a bit over 13 pounds and started at $10,000 [$24,600]. They were lauded for their performance and compatibility but not their battery life, and RDI became an early adopter of Sun's lower-power microSPARC for the sleeker, sexier PowerLites, using a more dramatic jet-black case. An 85MHz microSPARC II PowerLite 85 was the machine that computational physicist Tsutomu Shimomura, then at the San Diego Supercomputer Center, used to track down hacker Kevin Mitnick in 1995. RDI's initial success enabled its expansion into a bigger 40,000-square foot industrial park facility at 2300 Faraday Avenue, which apparently still exists today. Unfortunately for the company, however, microSPARC hit a performance wall above 125MHz and Sun abandoned further development in 1994, which RDI management took as an indication they needed to diversify. By then the RISC market had started to flourish with many architectures competing for dominance, and RDI decided to throw in with Hewlett-Packard's PA-RISC which had extant portable systems already from Hitachi and SAIC. Neither of those systems had ever existed in large numbers (and the SAIC Galaxys only in military applications at that), giving RDI a new market opportunity with a respected architecture that was already mobile-capable. Producing a final PowerLite in 1996 with the 170MHz Fujitsu TurboSPARC, RDI expanded the PowerLite case substantially for their next systems, replacing the trackball with a touchpad and adding an icon-based LCD status display but keeping its multiple hard disk bays and port options. 
In the same way the original BriteLites were SPARCstations in every other respect, the new RDI PA-RISC laptop was an otherwise standard HP Visualize B132L or B160L workstation, just inside a laptop case. in our demonstration of Hyper-G, which I've christened ruby after HP chief architect Ruby B. Lee, a key designer of the PA-RISC architecture and its first single-chip implementation. This time around, however, we'll take a closer look at ruby's hardware as a comparison point since it will inform some of the choices we'll make running the Macintosh Application Environment. So that I can save some typing, I'm going to liberally abbreviate "PrecisionBook" to "PABook" for the remainder of this article (avoiding "PBook" so we don't confuse it with PowerBooks). RDI's estimate. I haven't bothered trying to recell this one, and although the status LCD claims it's fully charged, it currently lasts maybe a minute or so, which is enough to ride out an AC voltage drop and not much else. Under that is a small 15-pin connector for the optional external 3.5" floppy drive which I don't have either, and behind the other door is the external micro 50-pin SCSI-2 port. A diagram sheet shows that an IR transceiver was planned to be next to the SCSI port, but I don't see one on mine. I took some grabs of the boot process before starting the operating system using a different video mode that my Inogeni VGA capture box would tolerate (my Hall scan converter didn't like it either), though this turned the LCD off, so we won't be bringing up the operating system in this configuration. There is no separate service processor; this all runs on the main CPU. SAIC Galaxy family, and the last and fastest-clocked of the 32-bit PA-RISC 1.1 chips (though the earlier PA-7150 and PA-7200 with their comparatively massive caches can easily beat the 132MHz part).
Being effectively a hopped-up PA-7100LC, it inherits most of the characteristics of the earlier chip including a two-way superscalar design, bi-endian support, two asymmetric ALUs, a slightly gimped FPU (the "coprocessor" in the POST summary) with greater double precision latency, and MAX-1 multimedia SIMD instructions. It also incorporates the GSC bus controller on-board. Where the PA-7300LC exceeds its ancestor is in its faster clock speeds — from 132 to 180MHz versus 60 to 100MHz — on a slightly longer six-stage pipeline instead of five, and much larger 64K/64K L1 caches (versus just 1K of I-cache) that were on-die for the first time. In keeping with its "consumer" roots the PA-7300LC additionally includes an L2 cache controller like the PA-7100LC, but here as shown on-screen the PABook's L2 is a full megabyte, and up to 8MB was supported. This is particularly notable given L2 cache was rarely a feature with PA-RISC — large L1s were more typical — and it would not be seen again on a PA-RISC CPU until the PA-8800 seven years later. Other improvements include a 96-entry unified translation lookaside buffer (versus 64) and a four-entry instruction lookaside buffer (versus one) specifically for instruction addresses, which also supported prefetching. The die was fabbed by HP on a 0.5μm process with 9.2 million transistors, 8 million of which were for the L1 cache which consumed most of its 260 square millimetres. A velociraptor can famously be seen in die photos. And here the PowerBook 3400c gets a run for its money. This is the point at which I get conflicted because I'm a big fan of the Power ISA, yet I have a real soft spot for PA-RISC because it was my first job out of college, so this is like trying to pick which of my "children" I like best. Although the 3400c with a 240MHz PowerPC 603e was briefly the "world's fastest laptop," at least according to Apple, on benchmarks this 160MHz PA-7300LC wins handily.
The last-generation 300MHz 603e got SPEC95 numbers of 7.4/6.1, while the 180MHz PA-7300LC recorded scores of 9.22/9.43, with 9.06/9.35 officially recorded for the 180MHz PABook. If we linearly adjust both figures for clock speed we get 5.92/4.88 versus 8.20/8.38, and even using the lower figures in the PABook's technical manual (7.78/7.39 at 160MHz and 6.49/6.54 at 132MHz) the PABook still triumphs. While this isn't a completely fair fight due to the 603's notoriously underpowered FPU, clock for clock the PA-7300LC could challenge both the Pentium Pro and the piledriver PowerPC 604e; the Alpha 21164 could only beat it by revving to 300MHz. And I say all this as a pro-PowerPC bigot! Processing power isn't everything, of course: the 160MHz PA-7300LC does this with a TDP of 15W, while the 300MHz 603e displaces just four to six watts, and the 240MHz part (fabbed on the same 290nm process) is on the lower end of that range. In real world terms that translated to battery life that was at least twice as long on the 3400c. The PABook normally boots from the drive bay closest to the rear (SCSI ID 0; the others are 1 and 2) and the 4GB drive as shipped is in that position, but it can also boot from an external device (SCSI ID 3 or higher) if necessary. The console path is virtually always GRAPHICS(0) for the on-board Visualize-EG and the keyboard path is likewise PS/2, but this can apply to both an external keyboard or the built-in keyboard, which is internally connected the same way. The boot menus abbreviate their commands, such as COnfiguration, with the RDI commands for RDI-specific features like mirroring the LCD, but here we'll be asking for INformation on what's installed. The on-board Visualize-EG, enumerated as INTERNAL_EG_X800, is directly connected to the GSC, but most of the rear ports are connected to a "Bus Adapter."
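The linear clock adjustment in those SPEC comparisons is easy to check. A quick sketch of the arithmetic (the scores come from the figures above; the assumption that performance scales linearly with clock is, of course, generous):

```python
# Scale a SPEC95 score linearly to a different clock speed.
# Rough approximation only: caches and memory don't scale with clock.
def scale(score: float, from_mhz: int, to_mhz: int) -> float:
    return round(score * to_mhz / from_mhz, 2)

# 300MHz PowerPC 603e (SPECint95/SPECfp95 7.4/6.1) scaled to the 3400c's 240MHz:
print(scale(7.4, 300, 240), scale(6.1, 300, 240))    # 5.92 4.88
# 180MHz PA-7300LC (9.22/9.43) scaled to the PABook's 160MHz part:
print(scale(9.22, 180, 160), scale(9.43, 180, 160))  # 8.2 8.38
```

Even scaled down, the PA-7300LC's adjusted figures stay well ahead. Now, back to the hardware tour and that "Bus Adapter."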
This is the HP LASI ("LAN SCSI") combo chip updated for the PA-7300LC, implementing an Intel i82C596CA 10Mbit NIC, NCR 53C710 SCSI-2 controller, a 16550 UART for RS-232, a WD16C522-compatible parallel port, PS/2 controllers, floppy drive controller and HP Harmony audio. Not enumerated here, a secondary low-speed bus from the LASI called the PHD bus connects to its 1MB flash boot memory, NVRAM and power supply controller. The LASI only supports one serial port, so a second UART is attached to a "Bus Bridge" to provide the second one. This "Bus Bridge" is Dino, a GSC-to-PCI bridge. If you're out there, mikec, I'd like to ask about your experiences with the hardware: please say hi in the comments or drop me a line at ckaiser at floodgap dawt com. A sibling product ran Macintosh applications directly on AIX — which also used the same PowerOpen ABI — through a thinner runtime layer called Macintosh Application Services (MAS), exclusively for IBM's operating system. Only one Apple computer ever ran AIX. In May 1996 Apple updated MAE to version 3.0 with System 7.5.3. This release added compatibility with HP-UX 10.x and made the CPU emulation even faster, primarily through improved handling of condition codes and floating point instructions. It also touted better application compatibility and faster screen updates for users running MAE over a remote X11 session. MAE 3.0 received four point updates up to 3.0.4, badged as "Version 3.0 (Update 4)," which is the final release and the version we'll use. (The X screen grabs from here on were taken with xwd.) The installation is done with a shell script and binary installer, both running from within a CDE shell window. Apple offered a trial version so that people could test their software, and unlocking the trial limitations requires a license key which you may or may not be able to find on any Archive on the Internet.
I won't show the installation process here since it's not particularly interesting or customizable, but ideally it should be installed to /opt/apple, and even though this version of MAE includes it we're not going to install AppleTalk. To start MAE, run ./mae in /opt/apple/bin. The very first run requires us to accept the EULA; there is a specific binary for this and we'll look at it when we take the code apart a bit. Why /opt? Hang on and we'll get to the on-disk representation; again, explanations presently.

ruby:/home/spectre/% ls
System Folder  bin  src  uploads
ruby:/home/spectre/% ls -l System\ Folder/
total 1650
drwxr-xr-x 2 spectre users   1024 Jul 23 21:23 Apple Menu Items
-rw-r--r-- 1 spectre users   2846 Jul 23 21:23 Clipboard
drwxr-xr-x 2 spectre users   1024 Jul 23 21:23 Control Panels
drwxr-xr-x 2 spectre users   1024 Jul 23 21:23 Control Strip Modules
drwxr-xr-x 3 spectre users   1024 Jul 23 21:23 Extensions
-rw-r--r-- 1 spectre users  35921 Jul 23 21:23 Finder
drwxr-xr-x 2 spectre users   1024 Jul 23 21:23 Fonts
-rw-r--r-- 1 spectre users 152162 Jul 23 21:23 MAE Enabler
-rw-rw-r-- 1 spectre users  18952 Jul 23 21:24 MacTCP DNR
drwxr-xr-x 3 spectre users   1024 Jul 23 23:04 Preferences
-rw-r--r-- 1 spectre users  72200 Jul 23 21:23 Scrapbook File
drwxrwxr-x 2 spectre users     96 Jul 23 21:24 Shutdown Items
drwxrwxr-x 2 spectre users     96 Jul 23 21:24 Startup Items
-rw-r--r-- 1 spectre users 553762 Jul 23 23:59 System

How is MAE storing these Macintosh files (resource forks, CODE resources) on a native HP-UX Veritas filesystem? The answer is that these files are all AppleSingle, which is to say with their resource and data forks combined, and MAE reads and writes AppleSingle on the fly. There is another interesting folder that gets created. This directory is effectively where the virtual Mac lives. It contains the contents of the virtual Mac's "PRAM" (sm.vpram) plus various databases for files and aliases. The numbered directories require specific explanation.
Since each Macintosh volume is its own root, which is certainly not the case in Unix, this directory collects the virtual Mac's volumes here. These aren't symbolic links elsewhere in the filesystem; these are MIVs, or MAE Independent Volumes. They correlate with all the mount points in /etc/fstab by default but any directory can be designated as an MIV, "mounted," and then treated as a volume. We only saw two of them on the desktop because only two of them are "mounted" in /opt/apple/lib/Default_MIV_file, and only those two "volumes" have desktop databases. The home directory is obvious, but /opt was also given a mount because we're running MAE from it and there are various resources in /opt/apple/lib it will try to access. (Some of these are global resources and are treated as part of the System Folder, such as fonts, additional standard applications for the Apple menu, keymaps, locales and, of course, the license key.) These MIVs can be renamed and otherwise treated as if they were any other mounted Macintosh fixed volume. Two other hidden files are also present in this directory, .fs_cache and .fs_info, which maintain the virtual Mac's file and volume information respectively. .fs_cache in particular is very important as it is roughly the global equivalent of an HFS catalog file (and, like a real HFS catalog file, is stored on disk as a B-tree), storing similar metadata like type and creator, timestamps and so forth. This file is so important to MAE that Apple distributed a separate tool called fstool to validate and repair it, sort of like MAE's own Disk First Aid from the shell prompt. You'll have also noticed above that the desktop database in spectre and opt is made up of four files. Desktop DB and Desktop DF are present as usual for the bundle database and Finder information respectively, but there are also two more files %Desktop DB and %Desktop DF, named exactly the same except for a percent sign sigil. 
This is the other way that resource forks can be represented in MAE, as AppleDouble. Here, the data fork and resource fork are split, with the percent sign indicating the resource fork. Let's explore the MAE System 7.5.3 some more before we attempt to install anything. Here are the contents of /opt as they appear in the Finder. /opt is read-only to my uid, so I can't write directly to it. If I had permissions, I could change them from the Permissions dialogue, which is MAE's equivalent of chown, chmod and chgrp all in one. You can also view the (composite) System Folder here and see that it looks pretty much like any other System Folder on any other Mac with the exception of the MAE Enabler. One control panel worth singling out is SoftwareFPU. Since not all 68K Macs have floating point units, applications are supposed to use Apple's SANE IEEE-754 library which computes the result in software if no FPU is available. Not all software does this, of course (the Magic Cap 1.0 simulator comes to mind), and this is particularly relevant with Power Macs because the 68K emulator only provides a virtual 68LC040. SoftwareFPU, then, is very simple conceptually: it traps F-line instructions intended for the non-existent coprocessor and turns them into SANE calls. This is slow but it means certain software is able to run that otherwise could not. The MAE SoftwareFPU, which Apple licensed from John Neil & Associates and modified for MAE 3.0, goes a bit further. This version implements a fast path where 68K floating point instructions are directly forwarded to MAE, effectively making F-line traps into hypercalls. Apple estimated this was about 50 percent faster than using regular SoftwareFPU. That said, you'll notice that SoftwareFPU is disabled, which is the default. We'll come back to this when we benchmark the emulator. In MAE 3.0, SANE was changed to directly use host FPU instructions (either SPARC or PA-RISC) for the most commonly performed floating point operations.
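Both fork layouts use the container format documented in RFC 1740, which is simple enough to pick apart by hand. A minimal header reader — my own sketch, not anything shipped with MAE — that tells the two apart:

```python
import struct

# Minimal AppleSingle/AppleDouble header reader (format per RFC 1740).
# AppleSingle keeps both forks in one file; an AppleDouble "%" file
# holds only the resource fork and metadata, beside a plain data file.
MAGIC = {0x00051600: "AppleSingle", 0x00051607: "AppleDouble"}

def read_header(blob: bytes):
    magic, _version = struct.unpack(">II", blob[:8])
    kind = MAGIC.get(magic)
    if kind is None:
        raise ValueError("not an AppleSingle/AppleDouble file")
    (count,) = struct.unpack(">H", blob[24:26])   # 16 filler bytes skipped
    entries = []
    for i in range(count):
        off = 26 + i * 12
        entries.append(struct.unpack(">III", blob[off:off + 12]))
    return kind, entries  # entries are (entry ID, offset, length) tuples
```

Entry ID 1 is the data fork, 2 the resource fork and 9 the Finder info; an AppleDouble file simply never carries entry 1, since the data lives in the companion file. Anyway, back to floating point.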
This host-FPU path works for single and double precision and ran substantially faster than MAE 2.0, but it doesn't work for the 68K's 80-bit extended-precision type, where double precision operations are performed instead and converted (but with a corresponding loss of precision). The previous behaviour, where SANE is simply run under emulation, can be restored with the -sane command line option. A better solution on Power Macs is Tom Pittman's PowerFPU, which (where possible) uses PowerPC floating point instructions directly rather than SANE. All Power Macs have an FPU, so this works on all Power Macs, and is over ten times faster than SoftwareFPU. MAE can also launch native X11 applications like xclock or xcal. Here is the Frodo Commodore 64 emulator, which I installed in /opt. The terminal window opens, which is important to capture any standard error or output, but otherwise it runs normally outside of MAE. That means you can use the MAE Finder as ... your desktop. You could make MAE take up the entire screen by passing /opt/apple/bin/mae the appropriate -geometry option and setting the X resource Vuewm*mae*clientDecoration to none, effectively making it rootless, and Apple fully supported and documented doing so. Now you've got a virtual Mac that will launch your native X11 applications as well. Who needs CDE when you've got this? We'll look at another standard control panel that MAE uses for a different purpose in a little while. Meantime, having made a basic survey of the emulator, it's now time to actually run software on it. A benchmark would be a good first test but to do that we need to actually put software on it. The software will come from thule, my little 128MB Macintosh IIci running NetBSD and Netatalk for AppleShare. I have lots of basic software on here including useful INITs and CDEVs and essential tools like StuffIt Expander. It still runs NetBSD 1.5.2 because I had trouble getting regular AppleTalk DDP working with 1.6 and up, so it's a fun time capsule too. But we don't have AppleTalk in MAE, so how are we going to get files from it?
Easy: we're going to download the files from thule with FTP and put them into my home directory while MAE is running. The Finder will see the new files and incorporate them. What about the resource forks? The fact that the files are being served by Netatalk from a non-HFS volume (i.e., BSD FFS) actually makes that easier. Netatalk natively stores anything with a resource fork as AppleDouble, depositing the resource fork itself as a separate file into a hidden directory .AppleDouble. We pull down both the data fork and the resource fork, rename the resource fork with a %, and move them both at the same time into my home directory. On the next Finder update, it sees the "whole" file and it becomes accessible. I then made a new folder called Mac and moved StuffIt Expander there. We can now work with StuffIt archives and only have to download one file, which saves having to get the resource fork separately. An alternative approach, especially if you are transferring a file directly from an HFS or HFS+ volume, is to turn it into AppleSingle first and copy that over; MAE will use the file as-is. Apple provided a tool for this in later versions of Mac OS X/macOS, though 10.4 and prior, arguably where it would have been most useful because those versions still support the Classic Environment, don't seem to have it. The best alternative there is /Developer/Tools/SplitForks, which doesn't do AppleSingle but does create separate AppleDouble data and resource fork files, so at least you can copy those. We'll get to a somewhat more automatic way of specifically handling Netatalk's AppleDouble directories a bit later. SieveAhl is a benchmark tool of mine from over 23 years ago, and here we are. I wrote SieveAhl in Modula-2 using the unfortunately named MacMETH compiler just to be weird, rolling all the Toolbox calls by hand. It implements the Sieve of Eratosthenes and a modified version of the FPU-dependent Ahl's Simple Benchmark and issues a score relative to my Macintosh Plus which I have as a reference standard.
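The sieve half of SieveAhl is the classic BYTE sieve, in which flags 0 through 8190 stand for the odd numbers 3 through 16383. A Python transliteration of the algorithm (mine, for illustration — SieveAhl itself is the Modula-2 above):

```python
# The classic BYTE sieve: flag i represents the odd number 2*i + 3,
# so the interval 0-8190 covers the odd numbers 3 through 16383.
def byte_sieve(size: int = 8190) -> int:
    flags = [True] * (size + 1)
    count = 0
    for i in range(size + 1):
        if flags[i]:
            prime = i + i + 3                 # the odd number for this flag
            for k in range(i + prime, size + 1, prime):
                flags[k] = False              # cross off its odd multiples
            count += 1
    return count

print(byte_sieve())  # 1899
```

Two itself is deliberately never counted, which is why the expected answer is 1,899 rather than the 1,900 primes below 16,384.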
The main advantage SieveAhl has over other benchmarks is that I wrote it intentionally to run on just about any Mac, even down to System 1.1 (tested in vMac). Here, I'm simply grabbing the StuffIt archive using Internet Explorer 5 for UNIX on the CDE side and saving it into the Mac folder. Expanding .sit files isn't too swift on other 68K systems either. We now have a Mac directory that looks like this from the Unix side: Our newly created files in the SieveAhl Folder are now AppleSingle, for example the readme file: We'll get to the rules about when MAE creates AppleSingle and AppleDouble files in a moment. Let's see the numbers we get. The sieve half is the famous benchmark published in Byte in September 1981. It iterates over the interval 0-8190, in which 1,899 primes are expected. The other half is a modified version of Ahl's Simple Benchmark from Creative Computing, intended originally to evaluate performance and precision differences between various microcomputer BASIC implementations. We don't care about the accuracy or randomness values his benchmark would compute (well, we don't care much), so we just compute those and throw them away. This gets 4,863% the speed of a Mac Plus, which we would expect to be roughly the same because we have no floating point hardware. Repeated runs of both tests were nearly identical. Keep in mind Apple's 50 percent estimate was relative to regular SoftwareFPU, not against using it at all. Additionally, MacMETH generates well-behaved code that calls SANE as it should and doesn't emit floating point instructions. How does this compare to a real 68LC040? Conveniently, we have one handy to try it out on: rintintin, my PowerBook 540c with a 33MHz 68LC040 and 12MB of RAM running Mac OS 7.6.1, and the most powerful Blackbird PowerBook sold in the United States (the later Japanese 550c is the same speed, but with a full 68040 and FPU). It was the first PowerBook with any '040 processor, stereo speakers, on-board Ethernet (via AAUI), a trackpad instead of a trackball, twin battery bays and a full-size keyboard.
The PowerBook 520/520c and 540/540c came out just a couple months after the first Power Macs and Apple placed the processor onto a daughtercard as a promise that it could be eventually upgraded. As such, the "Ready for PowerPC upgrade" sticker came on these models from the factory, though this particular one is a slightly larger reproduction I printed up a few copies of so I could surreptitiously slap them on the Intel Macs at the Apple Store. Apple nevertheless greatly underestimated demand for the line, mistakenly believing people would rather wait for what eventually was the PowerBook 5300, and the Blackbirds were chronically short-stocked for months. I upgraded this particular unit with an additional 8MB of RAM (on top of the base 4MB) and a SCSI2SD, making it an almost silent unit in operation. The only flaw it has is an iffy cable connection between the display and the top case, which is unfortunately a common problem with these models. The Blackbird also got a star turn in the first Mission: Impossible movie with Tom Cruise and Jon Voight. A regular Blackbird in the standard two-tone grey, most likely a 540c, was what Luther used to block the NOC list transmission on the TGV in the third act. Although Apple wasn't selling any Blackbird laptop anymore by the time the movie came out, neither computer's model badge is ever visible, though you can at least see a rainbow apple on Luther's. Our first real application will be MacLynx, the venerable text browser natively ported to the MacOS; here we'll run beta 6. To give it HTTPS support, we'll edit lynx.cfg to point to our local Crypto Ancienne TLS 1.3 proxy server. However, we don't have a proper text editor installed other than SimpleText. We could certainly grab BBEdit Lite from the server as well, but MAE gives us an alternative. The manual indicates that "[b]y default, MAE stores files (except text files) in AppleSingle format." Text files, however, are stored as AppleDouble.
If we look at our directory listing after unStuffing MacLynx, we can see this rule has been followed: You'll notice that all the text files — the readmes, index.html, lynx.cfg and lynxrc — got separate resource forks as AppleDouble, but the main executable did not. Now, the smart ones among you will say, "But wait! The SieveAhl readme file is text, and it was AppleSingle!" That's right — except that file's text is all stored in styled text resources and there's nothing in the data fork at all. MAE seems to content-sniff files to figure out what to do with them, so BinHex files (which are valid text) will be treated as a text file and made AppleDouble, but a read-only SimpleText file with nothing in the data fork will be treated as a binary and made AppleSingle. The AppleDouble control panel allows you to always force storing files as AppleDouble with specific applications and the separately distributed asdtool will convert a Mac file between AppleSingle and AppleDouble from the shell prompt. An AppleDouble text file's data fork can thus be edited from the Unix side with vi or, for the mentally deranged, emacs — and the resource fork will remain undisturbed. With MacLynx thus configured for the proxy server, we can view modern HTTPS sites inside MAE no problem. Dragging a file to the MAE desktop will physically move it. That almost sounds as if the MAE desktop doesn't belong to any MIV even though it is, in fact, part of the MIV for your home directory and that's where we had StuffIt Expander: Conveniently, if you open an alias to something on an MIV that's defined but not yet mounted, MAE will automount it on the desktop for you. Our next set of programs will be TattleTech 2.8 and Gestalt.Appl so we can see what's going on under the hood. There are some surprises here. The _L at the end of the Device sResource Name is significant because /opt/apple/bin/macd defines six such resources: Display_Video_Apple_MAE_S, Display_Video_Apple_MAE_M, Display_Video_Apple_MAE_L, Display_Video_Apple_MAE_F, Display_Video_Apple_MAE_C1 and Display_Video_Apple_MAE_C2.
These apparently correlate to specific resolutions, namely (in the order they appear) "512 x 342 (9" Macintosh)", "640 x 480 (14" Macintosh)", "832 x 624 (17" Macintosh)", "864 x 864 Resolution", "640 x 800 Resolution" and "640 x 640 Resolution". Since we're at 832x624, we get the _L (presumably small, medium, large, full and two custom?) "card." The emulated DeclROM used by these virtual cards is part of the big blob stored in /opt/apple/lib/engine, along with the Toolbox ROM and other goodies. We'll come back to this when we explore its Gestalt selectors. Both a modem and a printer port are defined, but neither appears to be supported. MAE naturally supports printing, but only to a "UNIX PostScript printer" (i.e., lpr) or via AppleTalk, and it does not support using the modem port for a modem or even as a serial port. deprecated it for 68K in 1996. One intriguing possibility would be to allow the Apple Network Server to numbercrunch for connected clients. It should be possible to do something similar with MAE and have the local host do the work, but there are no local AppleTalk interfaces or headers to compile against. QuickDraw is naturally present (no GX or 3D, of course). In MAE 3.0 QuickDraw is especially important and we'll get to that when we try playing a couple rather famous games. QuickTime is also present, version 2.5, though not everything is enabled (no software MIDI synthesizer, for example). Extensions can be suppressed entirely by starting MAE with -noextensions. Gestalt also reveals a cith selector ("Sith"? Darth Mac?), which is unique to MAE and is conveniently set to $00000304 (i.e., major version 3, minor version 4). While I don't have MAE 1.0 here to test with, René Ros indicates in his Gestalt reports that it has a cith value of 0, though it does exist there too. I don't know what version MAE 2.0 reports but I'm sure someone is firing it up right now to find out and will post in the comments. To more easily decipher the others we'll turn to Gestalt.Appl and I'll point out the highlights.
micn is not interesting for the icon (just generic Mac) but rather the string shown, which is a STR# resource indexed by the value of mach (-16395). The string is "Macintosh ApplicationEnvironment" [sic]. The MMU type is reported as "mmu " (note space), but this isn't a surprise for an emulator. Consequently, there is also no virtual memory support within MAE (the host is supposed to handle that). The ROM version (romv) is more interesting. Although a great many Old World Mac ROMs are tagged as version 1917, the particular ROM that MAE is using is from the Quadra 660AV and 840AV because we can find its checksum (5b f1 0f d1) and version (07 7d) at offset $001c0000 in /opt/apple/lib/engine. No other valid checksum and version appears anywhere else in this file, and no true Scotsman Macintosh LC would have used a ROM that recent. Thus, if you get a Gestalt ID of 19 but a ROM version of 1917, that's a pretty good indication you're running under MAE. René's list also shows a ROM version of 1917 for MAE 1.0, so MAE 2.0 almost certainly does as well. sltc insists there aren't any NuBus slots. Then there's snhw for the sound hardware, which reports a driver cith. This likewise appears nowhere else and is specific to the MAE emulated audio hardware. Oddly, although MAE 1.0 lacked sound, René's list indicates an snhw of awac which would suggest an AWACS. Let's get back to running some more apps. I'm going to bump up the emulated RAM now because a couple of the programs near the end will likely benefit. Fetch — still sold! — was a bit of a mixed bag on MAE. 2.1.2 was the 68K version I had on thule, so I tried that first. It starts and runs fine, but when I actually tried to download anything with this version of Fetch it locked up the entire emulated Mac as soon as the file was transferred.
That said, I'm not sure if this is a fault of MAE or Fetch because at the exact same point of the exact same file with the exact same version of Fetch on the 540c, it abruptly dropped the connection and threw an error message — though I note transfer speeds were faster on MAE, probably because of the better hardware, right up until it hit the wall. Fortunately the later Fetch 3.0.1 behaves correctly and Apple even offered that specific version for download from the MAE website. There are specific advantages to using Fetch in MAE because of its transparent support for AppleSingle transfers. Still, grabbing files with MacLynx works fine too (hurray for eating your own dogfood), so I'll mostly use that. ssheven, which works great on my real Macs, crashes on MAE and I'm not sure why. The fault may well be in ssheven, though I'd have to get a better debugger up on it to figure it out. It would be nicer still to simply hand off to the host's ssh, and that would even be more useful. Instead, we should try something really important next: Wolfenstein 3D, whose Apple IIgs versions are derived from the Mac version. This port was released in 1994 and the "Accelerated for Power Macintosh" and "System 7 Savvy" stickers give the rough timeframe. It requires a 25MHz 68030 or better and is a fat binary. Given that the PABook doesn't have a floppy drive, I copied over this version by simply Stuffing the installed folder on my Power Macintosh 7300 and downloading it over shell FTP (NCSA Telnet on the Mac contains a very basic FTP server which is handy for this). There is a utility called Wolfenzoom (Gopher link) that will scale-blit the 320x200 to 640x400, and I used to use it on my unaccelerated desktop Macintosh IIci when I first bought this game. However, since I'm all about pushing my luck, let's try the highest resolution. With its QuickDraw drawing option unchecked, the game will try to draw directly to video memory. This doesn't always work, and as you can see in this screenshot, not all of the screen was updating properly in this mode. This observation will become relevant when we try running our next game.
You do see where this is going, don't you? The next game does use QuickDraw, and appears correctly, but ... The source was again thule, but copying Word 5.1 over was a much bigger situation than simply grabbing a single file and its resource fork; we had a number of double-forked files we needed to move en masse. This Perl script, which works on both Perl 5 and 4.036, will iterate over a copy and move Netatalk's AppleDouble resource files into the proper location for MAE, resolving ambiguities in filenames if needed. Call it with find [directory] -name '.AppleDouble' -print | perl ad2mae to run. Only run this on a copy! When everything was in the right place, I moved the directory into my Mac folder and it was ready to go. Amusingly, when I opened ad2mae from the MAE Finder, the Finder determined it was a text file and opened it up in vi in a CDE window ready for editing. Not bad! As MAE 3.0 specifically advertised that it was faster than previous versions over remote X, let's test how well that works from my POWER9 Raptor Talos II in Fedora 42. Being the Wayland refusenik I am, I still run KDE Plasma in X11, so with xhost set appropriately and AllowByteSwappedClients enabled (because the POWER9 is running Power ISA little-endian and the PABook is running big) we should be able to connect: A nice bonus is actual Command and Option keys, though (I use a white A1048 USB Apple Keyboard with the Quad G5 and the Talos II). Having given MAE a good workout, let's now dig into the resources and see what makes it tick. One DLOG is the modal shown when MAE is formatting the TIV. There are DLOGs for the MAE Toolbar help we saw earlier, and another DLOG is one of the modals for the floppy/CD mounter. This DITL looks like it's part of a credits easter egg, though I haven't figured out yet how to trigger it. I'm assuming the dog is named Rosco ("In Memory of Rosco - 10/5/96"). Also note the dog's Dr Seuss hat, a callback to MAE's "Cat-in-the-Hat" codename.
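If Perl isn't handy, the same shuffle is easy to approximate; here's my own rough Python equivalent of what ad2mae is described as doing (unlike the real script it doesn't resolve filename ambiguities — it just refuses to clobber — and it skips Netatalk's .Parent metadata). As with the original, only run it on a copy:

```python
import os, shutil

# Walk a tree, find Netatalk .AppleDouble directories, and move each
# resource fork file up a level under MAE's "%" naming convention.
def netatalk_to_mae(root: str) -> int:
    moved = 0
    for dirpath, dirnames, filenames in os.walk(root):
        if os.path.basename(dirpath) != ".AppleDouble":
            continue
        parent = os.path.dirname(dirpath)
        for name in filenames:
            if name == ".Parent":            # Netatalk's own metadata
                continue
            dest = os.path.join(parent, "%" + name)
            if not os.path.exists(dest):     # don't clobber an existing fork
                shutil.move(os.path.join(dirpath, name), dest)
                moved += 1
    return moved
```

Afterwards each data fork sits next to a %-prefixed resource fork, exactly the arrangement the Finder expects. Now, about that credits easter egg.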
Here's the MAE 3.0 development team: Peter Blas, Michael Brenner, Matthew Caprile, Mary Chan, Bill Convis, Jerry Cottingham, Ivan Drucker, Gerri Eaton, Tim Gilman, Gary Giusti, Mark Gorlinsky, John Grabowski, Cindi Hollister, Richard W. Johnson, John Kullmann, Tom Molnar, John Morley, Stephen Nelson, Michael Press, Jeff Roberts, Shinji Sato, Marc Sinykin, Earl Wallace, Gayle Wiesner. This list of credits will show up again later. There are also some unexplained PICTs; I'm not sure what they refer to. Notice that the Toolbar Menu when you use the third mouse button is actually just a bunch of PICT resources. There are some other interesting things of note when we start going through the binaries. I separately extracted the files from the installer packages (they're just cpio archives) to preserve their time stamps for analysis. Let's look at everything that's there and then dig into the most notable individual files. All of the core binaries have a modification date of January 23, 1997, presumably the RTM date. Of the library files, data and engine are probably the most notable. We will look at those separately. KeymapDepotDB is where the default keymaps MAE uses are kept, and MajorUpdate is instructions to the installer script for how to perform an upgrade to a new major release. Since there's no MAE 4.0, this presumably will never be used again. The manual does not document what btree does, and it has only a single readable string in it: "Copyright 1991 Apple Computer, Inc. All Rights Reserved. Ricardo Batista" The rest are character set mappings and the EULAs in graphic form for both the MAE demo and the full version (with X bitmap buttons for accept/don't accept in all languages except English): After installation Default_MIV_file also lives in /opt/apple/lib, and optionally AliasList for default aliases to appear when starting MAE.
To reduce the size of individual users' System Folders, a substantial portion of the composite MAE System Folder is pulled from the shared directory, and other pieces from /opt/apple/lib/data. /opt/apple/lib/data contains the rest of the System Folder, with common pieces like the System 7.5 jigsaw puzzle (licensed by Apple from Captain's Software), note pad (Light Software), scrapbook, menu bar clock (from Steve Christensen's SuperClock!), desktop patterns, compressed System resources and standard INITs and CDEVs. /opt/apple/sys, however, which we won't do much more with here, is the master template for creating each user's own System Folder. We don't need to look at it again because we already saw my own copy of it. /opt/apple/lib/engine is a mashup of many miscellaneous tools. There are various conglomerated binaries in it ratted out by the presence of .text, .data and .bss, plus the fake DeclROM for the virtual video card and the Quadra 660AV/840AV Toolbox ROM it uses. There are also many other interesting strings, and being a 10MB file, there are a lot of them:

/dev/null 2>&1
Move failed: (%d)
[...]
cGetDevPixmap, could not emulate Macintosh color table (%d), exiting.
cGetDevPixmap, unknown depth %d in encountered.
doVideo, error installing video driver, exiting.
doVideo, error initializing NewGDevice, exiting.
doVideo, could not emulate any Macintosh video devices.
Insufficient shared memory or swap space. Using malloc instead. Performance could be increased by adding more swap space and/or configuring more shared memory into the kernel.
ERROR: MAE could not allocate the video screen (%dx%d, %dk). Please increase swap space or kill other processes before restarting MAE.
Could not malloc new screen buffer, restoring previous size.
Not enough shared memory, using malloc instead. Performance would be increased by configuring more shared memory into the kernel.
[...]
BUGS on MacPlus/SE, NuMc on later
[...]
Got the OKAY to clear %s (0x%02x bytes at pram 0x%02x) -
[...]
QDtoGC: (penMode & kHilitePenModeMask); *punting*
QDtoGC: penmode=invert (but not well matched); *punting*
[...]
Aae: AAAAARGH! Fatal X Error
[...]
Copyright (c) 1987 Apple Computer, Inc., 1985 Adobe Systems Incorporated, 1983-87 AT&T-IS, 1985-87 Motorola Inc., 1980-87 Sun Microsystems Inc., 1980-87 The Regents of the University of California, 1985-87 Unisoft Corporation, All Rights Reserved.
[...]
[a 36x41 two-color XPM pixmap, followed by a nine-color version of the same image]
# CREATOR: MAE 3.0
%d %d
%3d %3d %3d
GIF87a
$Id: ximage_high.c,v 3.6 1996/09/11 08:19:50 johng Exp $
$Id: ximage_icon.c,v 3.0 1995/03/23 20:51:23 cvs Exp $
[a stray "X", then the same two XPM pixmaps repeated]
PaintIconIntoImage, unknown message type %d.
%d %d %d
Unable to get Icon window id from MACD, exiting.
The SuperROM SuperTeam:
Central: Ricardo Batista, Rich Biasi, Philip Nguyen, Roger Mann
Kurt Clark, Chas Spillar, Paul Wolf, Clinton Bauder
Giovanni Agnoli and Debbie Lockett
RISC: Scott Boyd, Tim Nichols and Steve Smith
MSAD: Jeff Miller and Fred Monroe
Cyclone: Tony Leung, Greg Schroeder, Mark Law, Fernando Urbina
Dan Hitchens, Jeff Boone, Craig Prouse, Eric Behnke
Mike Bell, Mike Puckett, William Sheet, Robert Polic and Kevin Williams
Thankzzz to all who contributed to past ROMs...and System 7.x

legal, the program that requires you to accept the EULA on first run, looks like it's just an embedded Tcl/Tk script. It's also notable that appleping, appletalk, atlookup, asdtool and legal still have symbol tables.
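Both checks (hunting for embedded section names inside a conglomerated blob, and testing whether a binary kept its symbol table) can be sketched in shell. The blob here is synthetic, since the real /opt/apple/lib/engine obviously isn't on this machine, and tr stands in for strings(1) so the sketch stays portable:

```shell
# Conglomerated binaries rat themselves out when their printable strings
# include repeated COFF section names. Build a synthetic stand-in for
# /opt/apple/lib/engine and count the section-name hits.
printf 'garbage\0.text\0.data\0.bss\0more\0.text\0' > blob
tr '\0' '\n' < blob | grep -cE '^\.(text|data|bss)$'   # prints 4
# A binary that still has its symbol table will also answer to nm(1);
# our text blob (like a stripped binary) will not.
nm blob >/dev/null 2>&1 && echo "has symbols" || echo "stripped or not an object"
```

Against a real install you would run the same pipeline on /opt/apple/lib/engine and loop nm over /opt/apple/bin/* to reproduce the symbol-table observation above.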
The AppleTalk tools in particular list functions related to DDP, LAP, NBP and RTMP, but you'd expect that (legal has symbols for Tcl/Tk instead). Interestingly, a few of the strings in appletalk suggest that LocalTalk might have been, or at least was considered for being, supported at one time.

Although mae is the binary that you run directly, macd handles a lot of the background work, and mae communicates with it over IPC. The administrator's manual is (probably intentionally) vague about its exact functions, saying only that it "is a daemon that runs whenever apple/bin/mae runs and helps MAE interact with the UNIX environment. It also cleans up after mae if the mae process is killed." We can get a little better idea of what it does from its own set of strings at least:

And now mae itself. Some of these strings and components are duplicative of what we saw in /opt/apple/lib/engine; I'm not sure why they're being used twice. Then we start getting into an unusual section.

] [-o >outfile>] [-step] [-remote_debug] decimalinetport hexinetaddr
<A/UX COFF file> [<arg1>] ... [<argn>]
emulator: cannot uname(2) local system errno = %d
TBATDEBUG
SOFTMAC_RESTARTING_NOW
TBWARN
Midnight Emulator version %s
Remote Debug version built %s at %s, loading %s
[...]
Midnight Debugger
Unless otherwise specified, the following rules apply:
- Commands are single-letter and case matters a whole lot!
- Whitespace between the command and arguments are allowed.
- Values are hex by default.
- Addresses are automatically forced to be on even address boundaries.
- '<68kaddr>' is an address which will be offsetted automatically by the emulator so it lies within the 68k image. For example, on this HP system a <68kaddr> of 0x4600 is a real address of 0x%x.
- '<emuaddr>' is an address which is not mucked with in any manner. It can be any address within the emulator address space.
[...]
HUGGERZ
What About Bob?
[ASCII art of a dog, followed by a wall of *Hug* strings, one of which reads *T3W*, and:]
Here goes a big hug from the MAE Team!!!!!
[...]

So the mae binary has a second executable binary in it, called the Midnight Emulator. And we can run it!

] [-o >outfile>] [-step] [-remote_debug] decimalinetport hexinetaddr
<A/UX COFF file> [<arg1>] ... [<argn>]
ruby:/opt/apple/bin/% ./midnight -h
Midnight Emulator version 12:02:00
Remote Debug version built Mar 25 1997 at 10:26:25, loading -h
must set LM_LICENSE_FILE env var for midnight
ruby:/opt/apple/bin/% setenv LM_LICENSE_FILE /opt/apple/XXXXXXX
ruby:/opt/apple/bin/% ./midnight -h
Midnight Emulator version 12:02:00
Remote Debug version built Mar 25 1997 at 10:26:25, loading -h
Unable to open file -h

For a real A/UX COFF binary we turn to spindler, my clock-chipped Quadra 800. We'll fire it up in A/UX 3.1.

spindler:/bin/% uname -a
A/UX spindler 3.1 SVR2 mc68040
spindler:/bin/% file sync
sync: COFF object paged executable
spindler:/bin/% ls -l sync
-rwxr-xr-x 1 bin bin 764 Feb 4 1994 sync
spindler:/bin/% dis sync
**** DISASSEMBLER ****
disassembly for sync
section .text
$ a8:  23c0 0040 0150    mov.l %d0,0x400150.l
$ ae:  518f              subq.l &8,%sp
$ b0:  2eaf 0008         mov.l 0x8(%sp),(%sp)
$ b4:  41ef 000c         lea 0xc(%sp),%a0
[...]
$ 144: 480e ffff fffc    link.l %fp,&-4
$ 14a: 4e71              nop
$ 14c: 4e5e              unlk %fp
$ 14e: 4e75              rts

/bin/sync looks like a very small, yet valid A/UX binary we can pull over and see if Midnight will run it.
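For what it's worth, the COFF check that file(1) is making starts with the two-byte magic number at the front of the header. A quick sketch of the idea, with the caveat that the m68k magic values used here (octal 0520 through 0522) are my assumption from generic m68k COFF headers, not verified against A/UX's own coff.h:

```shell
# Identify a 68k COFF file by its first two bytes. The accepted magics
# (octal 0520/0521/0522, i.e. 0x0150-0x0152) are an ASSUMPTION based on
# generic m68k COFF; check A/UX's coff.h before trusting this.
printf '\001\120' > fake.coff          # synthetic header: 0x01 0x50 = octal 0520
magic=$(od -An -tx1 -N2 fake.coff | tr -d ' ')
case "$magic" in
  0150|0151|0152) echo "looks like m68k COFF" ;;
  *)              echo "not m68k COFF (magic $magic)" ;;
esac
```

A real checker would go on to parse the optional header and section table, but the magic alone is enough to explain why Midnight rejects non-COFF input immediately.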
First, a quick negative control by running it on itself:

The complaint that it's neither COFF nor engine suggests that its normal state is to be running /opt/apple/lib/engine, though this isn't too interesting, since we would assume MAE does that ordinarily. Regardless, it doesn't like our real A/UX COFF binary, even though it does try to load it. It is particularly strange that mae has a modification date of January 23, 1997, but the Midnight Emulator claims to have been built on March 25, over two months later. More explorations to come, especially into whether this could help to debug MAE itself.

At the end of this extensive strange trip, I found I rather liked the way MAE worked, and the integration features with HP-UX in particular really tempt me to try running it as my primary environment on top of CDE or VUE. Its performance was surprisingly good, and I think if I had the choice back in the day between buying a new 3400c or this thing, even as noisy, heavy and costly as it is, I might strongly have considered buying the latter. Also, I bet MAE would run like a bat out of hell on my maximally configured C8000, and some additional explorations of the Macintosh Application Environment, possibly also on one of my SAIC Galaxys with a floppy drive and an earlier version of HP-UX, might be the subject of a future article.

The MAE team clearly didn't think 3.0 was the end of MAE; in the MAE 3.0 white paper's "Future Directions" they indicate that support for additional hardware platforms and "additional UNIX systems" are "being considered." They acknowledge that it doesn't run Power Mac software, even saying that "Apple is also investigating the viability of supporting PowerPC Macintosh applications on MAE." However, the document also adds that "Apple wants to ensure that the performance of RISC-on-RISC emulation will meet customer requirements before committing to this development effort." It's not clear if any work on this was ever done.
In the wake of Apple's purchase of NeXT in 1997 there was a mention in InformationWeek that MAE would be ported to NeXTSTEP as a solution for legacy applications, but this would have been a limiting choice given the existing PowerPC software that people wanted to run, and I don't think it was ever a truly serious proposal. Although I couldn't find anything obvious in the trade rags about exactly when MAE was cancelled, I suspect Gil Amelio did it at Steve Jobs' suggestion after the buyout, as happened with the Apple Network Server and OpenDoc. After all, MAE had just become superfluous after Apple adopted Rhapsody as the future Mac OS: now that Apple had its own Unix, there was no reason to support anyone else's.

Nevertheless, although the Classic Environment in PowerPC versions of Rhapsody and Mac OS X (nicknamed the "Blue Box") is not a direct descendant of MAE, it's very possible that MAE informed its design. In practice, Classic is actually closer to MAS in concept in that it runs native PowerPC code directly in the so-called "problem state" on a paravirtualized Mac OS 8 or 9, using the standard ROM 68K emulator for 68K applications. Classic is less flexible than MAE in that only one instance can be running on any one Power Mac and only one user can be running it, but it was never going to be the future anyway.
