Japan 2023

Knowing my penchant for retro gaming and vintage computing, several friends of mine have for a few years now recommended I visit Tokyo, Japan, specifically the Akihabara ward, famous for its shops full of esoteric electronics. I had booked such a trip for April of 2020, but… nature had other plans. Now that individual tourism has opened up a bit in Japan, I finally booked my visit for real, having never been before, and spent a week there. The main purpose of my trip was to see whether a market for second-hand Pippin gear exists and, if so, whether I could bring home any Pippin-related goodies, particularly hardware. I also wanted to meet with a friend in Tokyo whom I hadn’t seen since he moved there prior to the start of the COVID-19 pandemic. All in all I quite enjoyed my trip, and so this post will be decidedly less technical than usual as I document what I did during that week.

Day 0 / Wednesday

I took a red-eye flight out of Oakland International Airport, arriving at Honolulu International Airport the next morning. I generally prefer red-eye flights so that I can make the most of my time on them—if I’m tired enough I can fall asleep anywhere, and then I don’t remember whether the flight was good or bad when I wake up and land. When I landed at HNL though, I wanted breakfast, but most places I found were geared toward cocktails and so naturally weren’t open in the morning. I wound up grabbing a quick bite at Burger King as a sort of “last chance” at U.S. food before continuing west, somewhat reminiscent of how my dad and I grab a “last chance” Runza in Sidney, NE before leaving the state heading west on I-80.

The second leg of my Tokyo-bound flights took me from HNL into HND, or Haneda Airport / Tokyo International Airport. For this flight I had an aisle seat way in the back of the plane (an Airbus A330), but fortunately nobody was in the window seat by the time we took off, so I got the whole row to myself for the next 9+ hours. Hooray! I’ve been told that the best way to overcome jet lag on long international flights is to force yourself to stay awake until you reach what would be considered “bed time” at your destination. Accordingly, to pass the time I watched three movies on this flight: Top Gun: Maverick (liked it; good), Bullet Train (liked it; haha), and Moonshot (did not like).

My first impression of HND upon touchdown was that it is a large airport. I hadn’t been on an international trip since 2012, so perhaps I had simply forgotten how big international terminals are. Nevertheless, before doing anything else upon arrival I had to pass through customs. The long walk from my arrival gate to the customs area was peppered with advertisements for “Visit Japan Web.” This is a site that expedites the customs process by providing customs forms online; completing those forms produces a pair of QR codes which you then show to customs officials. The customs officials at HND speak very little English, but fortunately I didn’t have to say very much because I had already read about Visit Japan Web prior to my flight(s) and so had filled out the necessary forms and received the requisite QR codes. It was slightly confusing to be directed first to a pod full of laptops (“Oh, maybe this is where I scan my QR codes,” I thought), but as it turned out those laptops exist for folks who didn’t come prepared and would have to go through Visit Japan Web there instead. Once that was made apparent to the customs official, I was instead directed to the main queue for immigration.

Customs is a two-part process: immigration and declaration. Each part has its own QR code via Visit Japan Web. The immigration line was long, but at least moved along steadily. It took maybe ten minutes to get through immigration, wherein my fingerprints were scanned and a photo of my face was taken. Claiming my luggage took another ten minutes, then declaration was done at a QR code scanning kiosk. Finally, with all customs procedures complete, I simply walked out of the customs area via a guarded exit gate. Welcome to Japan!

Priority number one after customs (for me) was retrieving my pocket WiFi device. My carrier doesn’t provide cellular data service in Japan, so my options were either to register with a local carrier (necessitating a new phone number and a new SIM card) or to rent what’s called a “pocket WiFi.” Pocket WiFi devices basically amount to a cellular data hotspot: they are pocket-sized, battery-powered devices that piggyback off of 5G/LTE cell service in Japan to provide a WiFi hotspot that my personal phone can connect to just like any other WiFi network. In this way, I could carry a tiny WiFi network with me in my pocket (hey!) and log in to the Internet telephony and chat apps I already know and use on a regular basis. This seemed far more convenient to me, so I made sure to preorder one over a week ahead of my trip, to be picked up at the airport counter when I landed. When I got to the pocket WiFi counter though, the problems started. The gal behind the counter fortunately spoke English pretty well, but informed me that they did not have a device for me. I was confused; not only had I preordered one, but I had the confirmation email on my phone, which I showed to her to try to explain my perspective. If I had cancelled, wouldn’t I have received a cancellation email, just as I had received a confirmation email when my order was first placed? No such cancellation email existed. After a bit of (polite) back and forth, to their credit the pocket WiFi folks acknowledged that because a refund for my “cancelled” order had never gone through, they could furnish me with a device to be delivered directly to my hotel the next evening. This was annoying but, given my options, the most acceptable outcome, so we went ahead with that plan and I looked forward to moving on to priority number two.

In retrospect, given the tight timeline I don’t think I could have reasonably expected to retrieve both my pocket WiFi and my JR Pass that evening, but I should have been able to get one or the other. In my case, I got neither. The back and forth at the pocket WiFi counter took me well past 8pm, by which time the JR ticket office had closed to new customers. The JR Pass is offered only to tourists: a fixed-rate pass that allows unlimited travel on the train lines operated by Japan Rail. The pass itself costs a couple hundred U.S. dollars, but given that I was expecting to ride the shinkansen (bullet train) at least twice later that week, this made the pass well worth it. As with my pocket WiFi, I had ordered a JR Pass weeks earlier, after which I received a voucher in the mail to be exchanged at a JR ticket office for my real pass, valid for the week-long duration of my trip. But with the office closed, and without WiFi, I was suddenly faced with the unexpected task of navigating the JR ticketing system on my own, without any cash and without any way to translate the stations on the map into the romanized names I knew. In particular, I needed to figure out how I was going to get from Haneda Airport to Akihabara and Ryogoku, where I was scheduled to stay for the week.

Enter a very sweet, helpful, bilingual businesswoman who saw that I looked lost and alone. Not only did she accompany me to an ATM where I could get cash (luckily my debit card worked!), but she then walked me through buying a transfer ticket to Akihabara via the monorail system, which I previously had no idea existed. I didn’t get her name, but I thanked her profusely nonetheless; despite not having a lot of faith in Japanese transportation services at this point, I had a reason to believe in Japanese humanity thanks to her. 🙂

The trains in Tokyo follow a schedule similar to BART in the Bay Area: after midnight, the lines start shutting down until the next morning. It’s a long walk from Akihabara to Ryogoku Station, but I was reluctant to try the train again until I could be sure I could get my JR Pass, so I walked from Akihabara Station to my hotel, dragging along my big leather suitcase and my garment bag and my tenor saxophone and my twelve-pound bowling ball and my lucky lucky autographed glow-in-the-dark snorkel luggage. My first impression of Tokyo streets was that they are safe and quiet at night; my rolling luggage made the most noise, and my presence after 11pm didn’t seem to disturb or otherwise arouse suspicion from others on the street, nor did the people I passed seem especially on guard amongst themselves. I wound up getting to bed by midnight, about two hours later than I had expected to.

A view of Asakusa and Akihabara from my room at the APA Ryogoku Eki Tower
Hello, my neighbors!

Day 1 / Thursday: Akihabara, Part One

I woke up around 7:30am and got acquainted with the shower. “Small” isn’t an accurate enough word to describe my hotel room at the APA Ryogoku Eki Tower; although the total square footage is certainly less than that of a typical Western hotel room, the space is used very efficiently. I would describe my room as “compact,” and this definitely extended to the bathroom and shower.

The water spigot inside my room's bathroom, with a label reading "Drinkable"

The water spigot serves both the sink and the shower, and only one or the other may be used at a time. The interface took a moment to figure out, but it’s kind of clever: the temperature is regulated by a knob on the left side, with a red “safety catch” intended to keep the water at or below 40 degrees Celsius unless I really wanted otherwise. The knob on the opposite end of the temperature control governed the shower pressure: all the way one way maxed out the pressure, and all the way the other way turned the shower off. I was very delighted that the shower head itself was optionally handheld; I have a similar setup at home and it certainly helps being able to thoroughly wash in my decidedly more cramped quarters here.

An unfogged region of the mirror in my room's bathroom after taking a shower

One thing about this bathroom that stuck out and impressed me was how the mirror behind the sink stayed clear while the rest of the mirror retained condensation from the steam of the shower. I’m not sure how this was done—maybe a heating plate behind that section of the mirror?—but it was another small detail that I appreciated. American hotels should take note!

The drain guard in my room's bathroom, showing both plastic "catch" layers

Another detail that seems like such a small thing but has the potential to make a huge difference was the drain guard in the sink. There are two levels to the “grate” of the guard, each offset from the other, so that if, for example, you lose a piece of jewelry down the drain, the guard has a better chance of catching it. Again, I was impressed by this and would love to see it become standard issue in the States.

The McDonald's restaurant across from Ryogoku Station

The last time I went on an international trip was in 2012 to Thailand, and the one thing I regret not doing while I was there was sampling the McDonald’s in Bangkok. Each country’s McD’s has a menu tailored to the local cuisine, and I knew Japan would be no exception. Luckily there was a McD’s right across from Ryogoku Station, so for the first few days of my time in Tokyo I ordered hotcakes and orange juice, all in what little spoken Japanese I know. Knowing how and when to say “hai” (yes) and “arigatou gozaimasu” (thank you) got me pretty far. 🙂 The breakfast menu is more or less the same as in the U.S., except that the McD’s in Tokyo doesn’t offer breakfast biscuits. Instead, you can get bacon, egg, and cheese “McSandwiches,” which are just bacon, egg, and cheese on seedless burger buns. I didn’t try them.

Even though my pocket WiFi would not arrive until later that evening, I wasn’t about to let that stop me from making the most of my day. My hotel room came with its own WiFi network, so I relied on that in the morning to plan my itinerary in advance. The first order of the day was to find a JR ticket office and get my JR Pass. I found via Googling that there’s a JR ticket office at Tokyo Station, which was the next biggest station south of Akihabara. I found and downloaded an English version of the train map seen in the JR stations, and used that to figure out what kind of ticket to get to Tokyo Station. Once I got to Tokyo Station, finding the JR ticket office was easy and I only had to wait about 45 minutes in line before I finally was able to exchange my voucher for my actual JR Pass. Since it was Thursday by this point, they set up the pass so that it would expire by next Wednesday, which was fine by me as I expected to be back home by then anyway. The gal exchanging my voucher also assured me that I could keep the physical pass as a souvenir at the conclusion of my trip; they didn’t need the expired pass back when I was done with it. I was free to ride the JR trains anywhere within Tokyo, so with my JR Pass in hand and feeling a lot more comfortable, I headed back up to Akihabara Station and began touring the area in earnest. 🙂

My first shopping stop in Akihabara was a tall building with a sign advertising “Book-Off,” a brand in a family of stores I knew specialized in second-hand goods. I stopped into this building only to quickly find it wasn’t much of a second-hand store at all, but instead a store called Onoden. Still, there were some neat items for sale here, notably some high-end home audio gear.

An in-store display of various high-end audio components and speakers

As I moved on from shop to shop, searching specifically for Pippin and Mac-related items but keeping my eyes and ears open to anything else that might capture my attention, I made a few observations and formed some corresponding impressions. First, very few of these stores would be legal in the United States from an accessibility standpoint: someone who is not ambulatory would have a very hard time navigating many of the tight corridors and narrow stairwells/escalators, assuming options other than stairs are available at all. Second, I would describe the overall theme of Akihabara as surplus stores: many little individually-owned shops selling components and cast-off electronics like radios, laptops, audiovisual equipment, and other appliances. However, unlike the beloved, now-closed Bay Area outlets WeirdStuff Warehouse and Excess Solutions, when it came to computers I couldn’t find any vendors in Akihabara offering accessories for anything older than an iPhone 8. Instead, most of the focus is on phones and mobile devices.

Vending machines are everywhere. There is no shortage of drink options should you find yourself parched while on the street gawking at all the shiny toys available for sale. I bought a Dr. Pepper while on my way out of the “ICOM” store.

Looking up towards various shopping high-rises in Akihabara
Much of Akihabara looks like this.

I popped into a place called “Yellow Submarine” which billed itself as a hobby store for collectors of TCG, tabletop gaming, and model kits. They had a floor in the building for some Mac stuff, too, going back to the late PowerBook era. I bought another drink from a vending machine here: “Aquarius” from Coca-Cola. Supposedly served to the Olympians during the 2020 games, this drink looked to me for all the world like water, but a sip of it tasted a bit… off. It was clearly flavored with… something… but I just wanted regular water. Maybe Aquarius has what plants crave? 😉

By this point it was time for me to track down the three Hard-Off locations I had pinned in my Google Maps. The Google Maps app on my phone keeps them locally cached, so I had no problem finding them by walking along with the map, nor did I ever feel like my personal security was in jeopardy doing so. 🙂 I had higher expectations for Hard-Off than what I experienced. The stores I went to were pretty small, albeit spread over several floors. They were average-sized by Akihabara standards, but I suppose I was expecting something nearer to the scale of the Sofmap store I had visited earlier that day.

Looking up towards a Sofmap building in Akihabara

All Hard-Off locations seem to play the same in-store music, so it started to get ingrained into my head as I searched the “junk” floor looking for vintage Mac and Pippin gear, plus whatever else struck my fancy. There is no shortage of retro video game hardware, from controllers to systems, but good luck finding anything rare on a regular basis. I did find with some regularity a variety of LaserDisc players, DVD players, and videocassette machines, including one ED Beta machine at one location. But overall, the Hard-Off stores in Akihabara left me less than impressed. Perhaps I would have to explore locations outside an area frequented by tourists. 😉

Innsmouth no Yakata and Jack Bros, both CIB, displayed on sale behind glass at Mandarake in Akihabara

Another store I had on my list of must-see places was Mandarake. There are a few locations in Tokyo, and the first one I visited was in Akihabara, spread over at least six floors. The sixth floor is where the games were, so naturally I made a beeline directly for that section. Here is where I found the most promise, at least so far: behind glass in the back corner of the floor were a couple of reasonably-priced Japanese Virtual Boy titles, complete in box. Elsewhere in the shop, I found a stock of games for the Bandai WonderSwan. This gave me hope: if games for the Virtual Boy and WonderSwan—systems that are commonly considered “failures”—were stocked in stores like this, maybe I could find something for the Pippin, another “failed” platform. In fact, Mandarake was not the only place where I could find WonderSwan-related products; the same was true of at least a couple of other retro game shops in Akihabara.

Two Beatmania IIDX machines inside a Taito arcade in Akihabara

I needed a bit of a break from all this shopping and searching, so I dropped into a few nearby arcades. The GiGO near the train station looked impressive from outside, but inside it was mostly crane machines. Likewise, the Taito arcade was a lot of crane machines, but they also had a few “candy cabinets” and Vewlix-style machines. HEY, or Hirose Entertainment Yard, had the most variety: the first floor is crane machines, but the second and higher floors have a mix of sit-down cabs and dedicated cabs. Pricing is pretty reasonable, too: typically only 100 yen per play, which is a little less than $1 USD. Even though I was wearing a mask, I was happy to find that the arcades didn’t have an overbearing smell of smoke as I expected they would. Smoking appears to be limited now to a designated smoking room within the arcade. Whether this was a change prompted by COVID-19 or independent of that, I’m very appreciative of it.

Nine CIB Virtual Boy games displayed on sale behind glass at Super Potato in Akihabara

No trip to Akihabara is complete without a visit to the legendary Super Potato. This is a shop that I had heard of long before planning what was going to be my first voyage to Tokyo in 2020, but again like Hard-Off I expected it to be at least 3x the size it actually is. Its offerings inside left me similarly unimpressed. I think in the time since I had first heard of this place, they got wise to tourists seeking them out, and their prices since reflect that savvy. In any event, nothing for the Bandai Pippin could be found here. I was tempted by something, though: on the bottom shelf were four boxed PC-FX systems, which are somewhat hard to come by in America since they were only marketed in Japan. The PC-FX shares a little bit of DNA with the Virtual Boy, but at about $300 USD apiece without any software, I couldn’t justify it. Nor could I justify any of the boxed Virtual Boy games they had for sale, also behind glass as at Mandarake.

A Bandai-branded Vectrex on display at Super Potato in Akihabara
They had a Bandai-branded Vectrex on display though, which was pretty cool.

Next I explored BIC CAMERA which, despite its name, is more of a general department store à la Target, albeit a little more high-end. Phones, computers, gaming setups, groceries, toiletries, appliances, digital pianos, toys, games… they had a little bit of everything. I was a little astonished to see a handful of digital pianos from Roland, Yamaha, and Casio for sale.

A beige-colored Roland HP702 digital keyboard displayed on sale at BIC CAMERA in Akihabara
The Roland and Casio models had the best feeling weighted keys. Yamaha felt the cheapest, which is backwards from what I’ve come to expect. Casio’s quality must have improved dramatically over the years.

The top floor is the toy floor. About a quarter of this section is dedicated to LEGO. Many figurines and model kits were available too. I found both die-cast models and DIY kits of every DeLorean from the Back to the Future movies.

Three die-cast models of the DeLorean time machine, one from each of the Back to the Future movies, displayed on sale at BIC CAMERA in Akihabara
Great Scott!

The grocery floor impressed me with its large selection of liquor, both local and imported, all available at pretty reasonable prices; I didn’t see any bottles going for much more than $50 USD. There was a small selection of candy, too, but not only did I not want to buy a souvenir bottle of whisky this early in my trip, I also wanted to go to a specialty store for Japanese KitKats, so I ultimately passed on this floor. If I do come back to Akihabara, though, I know where I can go to restock on anything I either run out of or forget.

A Yamaha Xeno YTR-8335 UGR trumpet and a Bach 43G trumpet displayed on sale behind glass at Hard-Off in Akihabara

By the way, Hard-Off doesn’t exclusively deal in electronics; they sell musical instruments too, though it’s mostly guitars and synth gear, which suggests to me that a lot of folks try guitar or synth before selling it back to the store when they realize it’s not for them. A handful of wind instruments can also be found for sale behind glass. XD

For dinner, I got brave and decided to pop into a highly rated katsu curry place back near the train station. This place was operated by a vending machine, too! You put in your money, press the button labeled with the name of your desired dish, then out comes a ticket that you hand to the cook. It’s all printed in Japanese, of course, but next to the machine is a picture menu with English descriptions next to their Japanese descriptions. So I was able to easily order a deep fried pork katsu curry by matching up its button to its English equivalent on the picture menu. It was so good and well worth it. I was very happy I did that. 🙂

Mister Donut marquee and overhang in Akihabara

To cap off my evening, I backtracked to the Mister Donut next to the Sofmap store. Mister Donut used to have a presence here in the US up until the 90s, at which point many of its locations became Dunkin’ Donuts instead. Indeed, as of this writing I think the only Mister Donut store left in the United States is in southern Illinois. In the late 70s, my dad used to work at a Mister Donut location in Lincoln, Nebraska, so I had to stop here to at least give him a blast from his past. Mister Donut must have really found a market in east Asia, because they had at least one in Bangkok when I visited there, too. Japan even plays host to a Mister Donut Museum. But for the moment, I just wanted some donuts, so I ordered three cake donuts plus an “iced milk” shortly before their scheduled closing time of 7pm. When it comes to milk, they’ve got both kinds, country and western: “hot” and “iced” — you can’t get any more specific than that.

Day 2 / Friday: Akihabara, Part Two

This day was meant to finish up any Akihabara shopping that I didn’t get to on Day 1. This included Beep, Suruga-ya (next to Beep), and Trader. Beep and Suruga-ya both opened at 11am, so after breakfast at McD’s again, I trekked into Akihabara to wait outside Beep until it opened.

Looking down the street from the entrance to Beep in Akihabara
Outside of Beep

Beep is a basement store. Many older Japanese computers like the NEC PC-9801 and MSX are offered here, but very little from Apple. I found nothing Pippin-related. In fact, the only Apple items I found in the store were a caddy-fed external AppleCD, an external Apple 5.25″ Drive, an ADB Global Village modem, and an Apple-branded tea mug, the last of which was display-only on a shelf near the ceiling. Once again, I left empty-handed.

A Super Famicom modem from NTT Data at Suruga-ya in Akihabara

Suruga-ya was similarly unimpressive; I didn’t buy anything here. The most interesting item I found was what looked to me like a Super Famicom modem from NTT Data. As at Mandarake the previous day, there was a surprising number of WonderSwan goodies here too. Trader is more of a media shop specializing in books, videos, software, and hobby kits, with almost no electronic hardware for sale. I was intrigued by a sign pointing me to “PC” games near the top floor; maybe I could find some hidden gems there? And indeed I did find a handful of PC games I was familiar with (like Caesar III, for instance), but though the games were for PC, it became immediately clear perusing the aisles that the games weren’t, ahem, PC—a bit of a cultural shift I had yet to get used to. 😉

With visits to those three remaining stores on my list of bookmarked locations complete, I declared my shopping excursion into Akihabara finished. Of course, no visit to Japan is complete without spending some time in a cat café, so my next stop was Neko JaLaLa. This is a very unassuming place on the less-traveled outskirts of Akihabara, but the signs on its street-facing front door make it obvious that cats live here. 🙂 To gain entrance I purchased a drink of my choice (a fruity hot tea, very similar to the berry blend I make at home) and 60 minutes of play time with the kitties.

Cookie the cat at Neko JaLaLa in Akihabara

I immediately met Cookie, a very social cat who reminded me of my folks’ cat. He is much slimmer than my folks’ cat, but he is just as rambunctious.

Moja the cat at Neko JaLaLa in Akihabara

On the opposite end of the spectrum was a very fluffy cat named Moja, short for “moja moja” which in Japanese means “fluffy.” 🙂 Moja looked very familiar to me so I took a liking to him right away. He was content with sitting high atop the cat houses looking down or at eye level with his human visitors. At one point he climbed aboard the cat “bridge” built to hug the ceiling above people’s heads. If you could get to him, he would let you pet him, but he made you work for it.

These cats are well cared for. One might even call them spoiled, for they want for nothing in their lives. 😀 Most of them were perfectly happy napping in their respective houses; there were close to a dozen cats in this café. A sure way to get the attention of a few of them was to buy them a treat, of which several varieties are offered here. I bought one pureed fish treat and shared it among three new feline friends. I suspect these cats have come to expect this on a daily basis and so are generally not interested in being social unless somebody has something to offer them. 😀

Anne the cat at Neko JaLaLa in Akihabara

This is Anne, or rather this was Anne. :'( I noticed Anne lying down in the corner of the café under a blanket on a padded mat, away from the interests of other human visitors. Her eyes were open but she wasn’t as active or lively as the other cats, so I thought to pay her some attention. As soon as I did, it was explained to me that Anne was born in July of 2007 and was one of the most “senior” cats on staff at Neko JaLaLa. But she had developed a tumor behind her nose and was expected to “go to Heaven” (the staff’s words) either that day or the following day. The tumor had gotten so bad by the time I met her that it prevented her from closing her eyes, so every few minutes one of her humans would come out with some eye drops and a tiny cloth to clean her up and make her feel comfortable.

Anne's "stat sheet" showing her birthday: 2007.7.30
They had a book explaining each cat’s name and personality, but the book was clearly outdated, since the only cat from it I could still find in the café was Anne.

I spent the most time with Anne, softly stroking her thin coat on the back of her head and neck. Near the end of my hour, Anne was determined to get to her feet, which must have been quite a surprise because it got the attention of the entire human staff of the café. It was as though she suddenly felt energized enough to say “hi” for one of her last times, so I returned the favor by giving her pets and telling her she was a very good kitty. On my way out, the staff thanked me for spending so much time with Anne. It still chokes me up to think about it, but I was happy that I got to meet her even if just for that brief hour. She did indeed pass away later that day, and was cremated over the weekend. She’s a jellicle cat now. RIP Anne 2007-2023 <3

An orange juice vending machine in Akihabara
There are even vending machines for fresh orange juice!

Having had enough of Akihabara, I boarded the train west to Nakano Broadway, a multi-story indoor shopping mall a few stops away from Shinjuku. I came here to find the Mandarake Special 4 location and the Lashinbang Audio Video Shop, but what I thought was an escalator to the second floor actually took me to the third floor, so I thought for a few minutes that maybe Google Maps was outdated and the shops no longer existed. Nope—a nearby stairwell was helpfully labeled with the floor number, and sure enough when I descended to the proper second floor, I found both.

Mario's Tennis and Virtual Boy Wario Land, both CIB, displayed on sale behind glass at Mandarake in the Nakano Broadway shopping center
Even more Virtual Boy games at Nakano Broadway

Though the Mandarake store at Nakano, like its Akihabara counterpart, had an impressive collection of collectibles, once again I left empty-handed; what I was looking for continued to be far too specific. This location, however, was spread out across multiple units within the mall, giving me the impression that it covered more square footage than the Akihabara location.

By this time it was nearing 5pm and I was ready to meet up with my friend Jonas for one of two must-do meals I wanted to cross off my “bucket list” before returning home to the U.S. That night, that meant finding a table at a local fugu restaurant. Fugu, or the tiger pufferfish, is known for being lethally poisonous to humans, so chefs in Japan must go through rigorous training and certification in order to legally prepare and serve it. If the toxin is not properly removed, it causes muscle paralysis while the victim remains fully conscious, leading to a slow death by asphyxiation… not a great way to go. Nevertheless, Jonas booked us a table at Tora-fugu Tei Shinjuku, which came highly rated online. I liked those odds, so despite the warnings (and hearing that the flavor was not actually anything to write home about), I was excited to try fugu for myself.

But first, upon arriving at Shinjuku Station, Jonas and I wanted to square away our shinkansen tickets for the next morning (Saturday), when we planned to spend several hours at Universal Studios Japan in Osaka. Reserved seats are not required on the shinkansen (bullet train), but they do help alleviate the stress of figuring out where to sit, especially in my case being a tourist and wanting to take in some of the picturesque scenery on the way down. We couldn’t find two seats together on the Mount Fuji side of our southbound train, but we did find two seats together on the ocean side of our train. So I booked a window seat for myself first, with Jonas following up shortly thereafter. Much to my annoyance though, in the span of less than a minute between getting my reserved seat and Jonas booking his seat, some stranger booked the seat right next to mine! The travel center where we might be able to change seats wouldn’t open until 8:30am the next morning, by which time we hoped to be well on our way, so all we could do was throw our hands in the air and instead expect that there’d be plenty of opportunities to chat and catch up while we stood in line for various things at Universal. At least we’d be on the same train car (or so we thought).

Our fugu dinner was a preset series of courses, starting with fugu sashimi, my first ever taste of the fish. What I had heard was absolutely correct: it tastes like a very bland whitefish, almost like nothing. The second / main course was fugu served hot pot style, with vegetables and tofu, to make fugu chiri, or stew. The tofu pretty much disintegrated upon contact with the water/broth, and by this point I was much more excited about the enoki mushrooms than the fugu, which I could barely taste anyway.

The third course was the only part of the meal I didn’t care for. After we were satisfied with the stew, our server took the leftover broth, removed the bones/cartilage from what was left of the fugu, then added rice to soak up the broth. This turned the broth into a porridge of sorts which was beginning to look appetizing… until they turned off the hot pot and stirred in two raw eggs. D: My body has a very strong aversion to eggs—it doesn’t feel right to me calling it an “allergy” because to me an allergy connotes hospitalization or emergency treatment. It’s more of an “intolerance” in that I don’t have to get medical attention, but… eggs don’t stay down if I eat them and they are even the slightest bit undercooked. Fortunately for me, I haven’t had a problem with pastries and other baked goods, I suspect because they’re baked at such a high temperature for so long that whatever my body finds offensive is denatured or otherwise neutralized. Whew—I do love me some baked treats, which just so happened to be our final dessert course: a “monaka,” or an ice cream sandwich with red bean filling. All the while I enjoyed very delicious hot tea. This made up for my having to skip the third course; the server was understanding, and I learned a new Japanese phrase: “tamago nashi.” 😉

The tank of pufferfish greeting visitors to Tora-fugu Tei Shinjuku

Of course in the midst of all this activity I forgot to take any pictures of any part of this meal or restaurant, but on our way out I did remember to snap a photo of the tank of pufferfish adjoining the entrance facing the street. All in all, I enjoyed my dinner here, but now having done it once, I can’t say that having fugu will be near the top of my dinner choices in the future. It wasn’t bad; it was just more about the novelty for me. As a bonus, I didn’t die. 🙂

To close out the evening, but being careful not to stay out too late, Jonas and I found a popular karaoke place at the “Pasela Resorts” in Shinjuku, near the train station. Given its name and that I found another one near Akihabara, I thought it was a chain of hotels, but inside we were presented with the options of a regular room or “premium” karaoke. Not needing anything fancy, we reserved ourselves a regular karaoke box for one hour and split a pitcher of beer for a bit of liquid courage. The selection of Japanese tracks is of course excellent, but the selection of English songs is impressively large as well. The backing tracks, which in the U.S. I’m used to being pale imitations of the real thing, are pretty accurate here. My only complaint with this place is the tablet-based interface used to select songs: it’s really designed for searching, and not so much for browsing. Despite that, we still had a good time and I can truthfully say I enjoyed real Japanese-style karaoke while in Tokyo. 🙂

Day 3 / Saturday: Universal Studios and Osaka

Saturday morning turned out to be quite the adventure. After McD’s breakfast again, I met Jonas in Ueno around 8am expecting to board the shinkansen together, but right away we realized that something had gone awry in the course of booking our tickets the previous evening. While we were both booked for the same train, my ticket departed from Shinagawa Station, and Jonas’s ticket departed from Shin-Yokohama. In addition, Jonas found some kind of issue with his ticket and had to buy a new one for a scheduled 8:30ish departure on a different train altogether. What to do? Jonas came up with a brilliant plan whereby we each would board our respective trains from our respective departure points, and reunite later that day hundreds of miles away. 😀 So I rode past Tokyo Station to Shinagawa where I then discovered to my dismay that my reserved ticket was for April 2, not the 1st. This wasn’t a dealbreaker since my JR Pass let me get a non-reserved seat at no cost, and I planned on taking the shinkansen down to Kyoto on Sunday anyway, so my existing reserved seat would not go to waste. But in my rush to solve this new dilemma, I wasn’t paying attention to which train I was supposed to board. Instead of the Hikari service with fewer stops, I hastily jumped onto a Kodama departing several minutes earlier, which only went as far as Nagoya and stopped at every station in between.

Looking at the high-rises of Nagoya from the train station
I was stuck in Nagoya for a few minutes. May as well make the most of it.

Ultimately our meetup in Shin-Osaka was delayed by about an hour to 12:30pm, but I was still amazed by how the plan became “let’s meet in another part of the country, good luck!” and it worked.

Getting from Shin-Osaka to Universal Studios involved taking a local train to Universal City, which was just outside the park, so that was quite convenient. We only had a few hours allocated to Universal on this day, and though I was excited to try at least one coaster, a couple of things worked against me there:

  1. Jonas doesn’t do roller coasters.
  2. The singles line for Hollywood Dream / Backdrop was 70 minutes, and climbed as the afternoon wore on.

I wasn’t about to blow my afternoon waiting in line for a single ride, especially given that it was lunchtime and the main purpose of my visit to Universal was to check out the original Super Nintendo World. Super Nintendo World requires its own “e-ticket” to enter, but by the time Jonas and I arrived in the park, all timed entry tickets for that day had been distributed, which meant our only hope was to get in via a “standby” ticket during a specific window later that afternoon. Standby tickets are distributed via a lottery system, so immediately after we gained entrance to the park we found the lottery kiosks and entered ourselves, hoping we might luck out and still spend an hour or so there later that day.

Me doing a Mario-like jumping pose at Universal Studios in Osaka
Let’sa go!

While we waited, Jonas remembered that the Jaws area has a restaurant where you can get shark sandwiches. Where else am I going to get something like that? So I ordered a shark sandwich, fries, a Diet Coke, and a clam chowder. All of it was delicious.

We didn’t win the lottery. I went back to the kiosk figuring we could enter ourselves again, but when I tried I found that each ticket could only enter the lottery once. Long story short, I didn’t get into Super Nintendo World. Hoping to make lemonade out of the lemons we were dealt, we sat and watched the Waterworld stunt show instead, which was pretty relaxing given we could sit down for its duration.

A fire flower, Starman, and mushroom ? block all hanging above the entrance to a Universal Studios Japan gift shop

Even though we weren’t admitted into Super Nintendo World, Universal was nonetheless willing to let us spend money on Nintendo-related gifts and collectibles. We spent our last few minutes at Universal tootling around the gift shops offering Nintendo-themed wares. I did buy a few things here: a popup Mario in a pipe and a plush Mario Kart banana for myself, a Shy Guy for my sister, and a Boo for my nephew/sister. One offering that I found strange in the “I bet they don’t have that at the Hollywood location” sense was a Toad-themed risotto kit. I guess because Toad is a mushroom? Isn’t that… kind of dark? I passed on that. 😉

Heading into the evening, we trekked into Den Den Town in Osaka for more retro game and computer shopping. There was a Mandarake store here, too, but I didn’t find anything I wanted to buy here either. I did see a store that, from the street, looked promising: a large sign indicating what could be found in the building showed for the second floor an Apple logo and “Macintosh” in the classic Apple Garamond font! Could it be? Vintage Macs? The first and other floors had rather racy imagery associated with them, and Jonas understandably had his apprehensions about entering such a store, but I was on a mission and willing to look the other way if it meant I might finally have a chance at bringing home some vintage Japanese Apple computers. The first floor was definitely 18+, as was the second floor, and the third… and the fourth… and so on, getting more and more explicit as I climbed, with me getting more and more disappointed. No Macintosh anything could be found in the whole store. What a letdown! False advertising!

An animation of me playing Special Criminal Investigation at an arcade in Shinsekai
Playing S.C.I.

We next walked toward Naniwa and Shinsekai, which I was told was planned to be an entertainment district of sorts that never really got off the ground, but still has plenty of arcades, food, and other fun destinations, including the huge Tsutenkaku tower. Kushikatsu Daruma came highly recommended for skewers, but by the time we got there they had a line almost around the block, so instead we ducked into a sushi place called Rokusen across the street near the base of Tsutenkaku. I was happy we did, because this turned out to be the only sushi restaurant I would visit on this trip. The experience at Rokusen is just as much about the food as it is about the atmosphere, which felt very authentic; indeed we were seated just behind the counter where the sushi chefs made each piece and hand-delivered them to where we were eating. Ordering was done via tablet, and I made sure to order some uni (sea urchin) and plenty of fatty tuna. Whale sushi is also an option here, which stood out to me as I had never seen, much less had, that before, as serving whale meat is illegal in the United States. I tried a lime-flavored shochu to drink, which to me tasted like a LaCroix, but I also had a hot tea. We sat near the entrance which opened wide into the street outside. By the time we had finished, the line at Kushikatsu Daruma hadn’t really subsided much, so we felt reassured that we made the right call by coming here instead.

Looking at the brightly-lit high-rises in downtown Osaka at night

It was getting dark by this point, but Jonas had one last destination in mind for me before calling it a night. We walked through Nanba and into Chuo to find a tiny barcade called Space Station. At the top of the steep stairwell leading into this dimly-lit hangout, I could see arcade PCBs hanging on the walls, 80s game cartridge boxes adorning the back of the bar, and several retro game stations spread across the floor, ranging from Famicom to Nintendo 64. The cocktails were all retro-themed as well; I had a “Dr. Mario” which was not much more than a rum and Dr. Pepper, and Jonas ordered himself a Commodore 64, the recipe for which I failed to commit to memory, along with the rest of the drinks written on a small chalkboard. We only had time here to play a few tracks of the Special Cup in Mario Kart 64 before we had to leave to catch one of the last shinkansen trains back up to Tokyo, but if I come back, this is a hangout I’d definitely want to revisit.

Interesting architecture of a "Namba Hips" building

Day 4 / Sunday: Kyoto and Kobe

I returned to Tokyo Station after yet another McDonald’s breakfast and boarded the shinkansen to Kyoto. My reserved seat ticket that I bought on Friday, intended to be used on Saturday, was actually for this day, so fortunately it didn’t go to waste. Unfortunately, somebody I didn’t know reserved the seat right next to me, and to add insult to injury this person was very obviously not a native. They weren’t very respectful of my personal space and kept video chatting with somebody—in a language I didn’t recognize—on an otherwise quiet train. With plenty of other seats available, all I could do was wonder why they chose to sit next to me specifically. Sigh. At least I had a window, albeit one on the ocean side.

I met up with Jonas at Kyoto Station (which is huge, by the way), and we took a local train to Arashiyama, a much smaller town with a much smaller train stop. Immediately upon stepping out into the street, I could tell that despite its more rural, small-town vibe, Arashiyama was a popular spot for tourists and they knew it. Lots of options for matcha are available here; the train stop empties directly in front of a matcha shop and many more are to be found up and down the street through the quaint neighborhood. Once we made it to the main drag through town, it was apparent how popular this place is, and why Jonas brought me here. Arashiyama sits on the Katsura River and one of the main attractions is the bridge across the river to the park containing many cherry blossom trees. We had lunch at a lovely place called Yosiya before braving the crowds across the bridge. I timed my Japan visit such that today would be at the height of cherry blossom season, and I was not disappointed.

A cherry blossom tree at Arashiyama Park
One of many cherry blossom trees in Arashiyama

The weather on this day was very conducive to taking photos; the sun was out but behind clouds in an overcast sky. While the weather was in our favor, Jonas wanted us to check out the nearby Bamboo Forest, so we headed back across the bridge in that direction after I had taken enough photos on the park side. It started dripping a little shortly after we arrived and began hiking; then a drip turned into a bit of a stream, and the stream turned into a bit of a squall. Even with my hooded jacket and the protection of tree cover, Jonas and I were soaked after a few minutes, and we debated giving up and leaving Arashiyama, but luck was on our side in that the rains let up enough that we could continue without being too bothered by them.

A purple-colored blossom tree near Arashiyama's Bamboo Forest
The trees up here were a good blend of vibrant colors.

I was glad Jonas had been here before, because he kept mentioning somewhere he called “the gorge” and that he really hoped he could remember how to find it. We hiked up a modest hill to an area near a temple where tourists were sparse. We both knew that the view of the gorge had to be from a high vantage point so we kept climbing until we found a railing overlooking the river we had crossed earlier that day, but as it bends behind the Forest and surrounding hills. The scene here is hard to describe, even with the photo I took, but from where we stood we had a stunning view of someone’s house on the river. It felt to me like looking over Rivendell from Lord of the Rings, with the only difference in scale being the size of the “town.” It was the kind of environment I could imagine myself retiring to, remote enough to be tucked away but close enough to town to have civilization right around the corner. The hike to this point was well worth it.

The view from "the gorge" above Bamboo Forest in Arashiyama

As we made our way down from the gorge vista and on our way back to the train, threatening clouds started to return overhead. Indeed, again they made their power known to us as we navigated the crowds. We had a few hours to kill before making our dinner reservation, so keeping within the theme of being a first-time tourist, we evaded the rains in Arashiyama and sought out the Fushimi Inari Shrine in the heart of Kyoto. The station where we stopped to visit this shrine is pretty obviously dedicated to it, and it’s very apparent that it’s been tooled to appeal to tourists. Gift shops, guides, maps, and photo opportunities abound. We knew we wouldn’t have time to hike the entire trail, as that alone is worthy of a day trip, but we could at least get some pictures of the iconic orange gates.

Looking through the tunnel of Senbon Torii at the Fushimi Inari Shrine in Kyoto
I only got this picture by stopping traffic.

There were a lot of people here. Pictures I saw online of empty trails or with just one or two people walking? Either those are staged (as I had to do) or taken much farther up the hill. This is still very much an active shrine used for sacred purposes; signs adorn the temple buildings instructing visitors not to take photographs of worshippers inside. I was already trying to be careful not to have other people’s faces in my pictures, as the privacy laws in Japan are stricter than in the United States, but even without that I didn’t want to disrespect the sanctity of the temple areas. By this time of the day it was very sunny and the clouds had burned off; had we time to further climb the hill we might have had an opportunity for a radiant Kyoto sunset. But we had a schedule to adhere to…

Looking through downtown toward a lighted "Kobe" sign at sunset

Before I met Jonas in person on Friday, I had told him that there were two “bucket list” items that I wanted to accomplish while on this trip. The first one—fugu—we checked off Friday night in Shinjuku. But my second item was also food-related: I wanted to try real Kobe beef. “If you want the real, 100% Kobe beef, you have to go to Kobe,” he explained. Well, I suppose if we have to, then we have to! So a little after 5pm we departed the train station attached to the Fushimi Inari Shrine for downtown Kobe. In my head I imagined Kobe as a tiny farm town, but when we arrived there I saw it was anything but. Jonas made reservations at a place called Propeller, which like many places I chanced upon in Japan is easy to miss if you don’t know specifically where to go.

The sign for the Propeller restaurant just outside its entrance in Kobe

The restaurant is very small, but also felt very familiar. There are five, maybe six tables total, each seating up to four people by my estimate. The walls are adorned with the kind of signs and artwork one might associate with an upscale steakhouse: paintings of country living and a couple of beer signs. I felt closer to home here, and that feeling would only get stronger with the courses we were each served. The first course was an appetizer of smoked salmon, chilled shrimp, salumi(?) on a toothpick with olive, a tiny salad, and a hot cream of bacon soup. Unlike at the fugu restaurant, this served only to whet my appetite for the main course; no beef here yet. We were also each served a dinner roll, which was a first for me in Japan; maybe it’s a steakhouse thing?

How do I describe the main course? The steak itself was a small portion, cut into bite-sized pieces. The best way I can put it is that it was like “meat candy”—it melted in my mouth as soon as it touched my tongue. Alongside the steak, its companion fingerling potatoes, and fried garlic chips, we each had a ceramic tray with three wells, each holding a different additive we could optionally use for dipping: pink salt, onion sauce, and peanut sauce. Of the three, the onion sauce paired the best with the steak, but that’s not to take away from the other two; all three proved excellent at enhancing the flavor. All of this went down our gullets easily, washed down with a pint of premium beer. Dessert was ice cream with a berry sauce. We walked out of that restaurant very satisfied and happy. I highly recommend Propeller to anybody visiting Kobe wanting to try “real, 100% Kobe beef.”

The main course at Propeller: bean sprouts, potatoes, garlic chips, and seven bite-sized slices of real 100% Kobe beef
Meat candy.

We were in for a bit of a rude awakening when it came time to figure out how to conclude the evening, though. See, the shinkansen operates on a different schedule on Sunday evenings than it does the rest of the week, which for us meant that by the time we were finished with dinner, the only trains going back up to Tokyo would have cost Jonas and myself over $120 each. Ouch. If only I had checked that before planning this fun excursion! On top of that, Jonas had to work the following morning. What to do?

Weighing our options, the most economical thing was to book a room in Kobe for that night, and in the morning we would each take the first shinkansen to Tokyo when the prices would be nicer. So that’s what we did. Looking back on it, had it only been me, I might have looked into booking a “capsule hotel,” which I can only describe to the unfamiliar as renting a bed recessed into a wall. After working out the math of splitting the cost of a room, though, it would have cost me the same, so I instead opted for Jonas’s plan of the comfort and familiarity of a traditional double-bed room. As a nice plus, we had our own bathroom.

Day 5 / Monday: Loose ends in Tokyo, farewell in Shibuya

Monday was my last full day in Japan, and since I hadn’t bought any tech-related souvenirs for myself, I had plenty of suitcase space I could use for gifts instead. Before I started my trip, a friend back in the U.S. asked me to keep an eye out for something called a “mudkip” if I happened into any Pokémon stores. I’m not that familiar with Pokémon so I had to look up what a mudkip is (please don’t leave comments; I won’t approve them 😛 ) but I originally didn’t plan on visiting any Pokémon stores in the first place, since I’m not really a fan anyway. By this time I had little else to shop for, so I made it my mission to find a mudkip for my friend.

A mudkip from Pokémon
This is a mudkip.

There is a Pokémon center a block or so east of Tokyo Station among other upscale shopping locations, so I started there. I had to take an elevator up to the sixth floor, which was just for the Pokémon store from what I could tell. It wasn’t just a shopping center; there was some other kind of attraction that folks had to stand in line for and, I presume, buy a ticket to experience. I was just there, appropriately enough, to hunt for a particular Pokémon. 😛 They must have added quite a few Pokémon to the lineup since I last checked, because as I perused the shelves of Poké-plushies, none of them looked like any of the 151 characters I knew. Except Pikachu. There was no shortage of Pikachu merchandise, but no mudkips to be found in any form. No plushies, no stickers, no keychains, nothing. Disappointed but not yet defeated, I left this store and made my way back to Tokyo Station to head north past Akihabara to Ikebukuro.

Walking out into Ikebukuro is when I got my first real sense of how huge Tokyo is. I had spent a day and a half in Akihabara, several hours near Tokyo Station, several hours in Shinjuku, and now that I was exploring Ikebukuro—which to me felt just as much like a city of its own as the other wards I mentioned—I wondered how somebody would know where the “city” part of Tokyo ends. I had gone from “city” to “city” but it was all part of Tokyo. It’s no wonder to me that Tokyo is considered the largest city on Earth.

I didn’t have a whole lot of hope that I would strike gold, but I had to stop at the Ikebukuro Super Potato for completion’s sake. Sure enough, no Pippin stuff here either. I did however find a working Virtual Boy kiosk in the store with Mario’s Tennis set up to play. The left eye was glitchy and in obvious need of resoldering, but that’s a common issue with Virtual Boy units nowadays. The kiosk wasn’t for sale (they are quite expensive when they occasionally hit the market), but I wouldn’t have been able to carry it home with me anyway, as it is quite tall and cumbersome.

A standing Virtual Boy kiosk on display, but not for sale, at Super Potato in Ikebukuro

While I was here, I wanted to see if I could find the famed Japanese KitKat store. I knew it was somewhere near Ikebukuro Station, but according to the map on my phone I stood right where it was supposed to be, yet it wasn’t there. It didn’t occur to me that there is a shopping mall that extends below the station. Taking the escalator down a floor, I descended into a confectionery mall of sorts: all kinds of sweets here. There were storefronts set up for pies, cakes, pastries, fruit arrangements… I was a kid in a literal candy store. It was easy to feel overwhelmed in what I estimate was at least 3000 sq ft of des(s)ert, but eventually I stumbled upon the unassuming KitKat booth. The “standard” KitKat flavor was nowhere to be found here. Instead I collected one of each flavor, including sake, raspberry, banana, ruby, strawberry, and of course several types of chocolate. As of this writing I still have a few of them left. 😛

Although I had relied on McDonald’s up to this point for breakfast, I had stuck to their traditional menu and hadn’t branched out to their regional offerings. Here in Ikebukuro I had a chance just after buying the KitKats to get a late lunch at McDonald’s, where I eagerly ordered the “Roasted Soy Sauce Beef Burger” with a side of fries and chicken nuggets. Finally something exclusive to a McDonald’s in another country! 😀 Well, here’s the verdict: the fries and chicken nuggets taste exactly the same as they do in the U.S., but the burger was definitely something different. Not different enough, though; the best way I can describe it is that it tasted like a McRib, but in burger form. But hey, that’s something given how infrequently the McRib can be found. I suppose I shouldn’t have expected to be blown away—it’s McDonald’s after all—but I’ll admit I was expecting to experience something a bit more unique to Japan, not a McDonald’s flavor recycled from another one of their products. Oh well, at least I can say I tried it this time. 🙂

I didn’t want to carry a bag full of KitKats with me all day long, so I went back to my hotel to drop them off, and then found that I had a few more hours left in the day before I wanted to meet up with Jonas for one last hangout. Retreading my steps, I took the train back in the Ikebukuro direction, but got off one stop short at Otsuka. The other big Pokémon center is at the Sunshine City mall about halfway between this station and Ikebukuro so I took the adventurous route and ventured through unfamiliar territory on my way there. In terms of retail square footage, this Pokémon store was smaller than the one near Tokyo Station, and while I came up empty here too, the themed décor outside the store proper was fun to see. There are better photo opportunities here than at the other store, for sure.

The Pokémon Center at the Sunshine City shopping center in Ikebukuro

By this time it was dark outside, so just for completion’s sake I took the train out to Kōenji Station past Nakano to visit the last Hard-Off location within a reasonable traveling distance. Though I didn’t expect much more luck here, to my own surprise I did buy something! On the second floor with various musical instruments and other audio gear, I found a Tascam US122-mkII USB audio mixer that is identical to one I bought years ago and have since gifted to my dad. So now we both have identical audio mixers. 😀 The other item I purchased I found by complete accident. On the topmost floor with games, CDs, and high-end stereo gear, I casually flipped through a box filled mostly with (Japanese, naturally) audio CDs. To my surprise, a CD-ROM was mixed in with the rest of the lot. More to my surprise, upon inspecting the system requirements, I saw that it specified “Macintosh,” specifically KanjiTalk 7.5.3. Whoa! For a mere 500 yen, “Mah-Jong II” from the “Shock Price 500 Game Series” wasn’t a huge gamble, so this CD-ROM ended up being the only vintage Macintosh-related product I brought home with me from Japan. Had I thought of it, I would have searched the other Hard-Off stores for MIDI gear, because this location had a small number of vintage MIDI modules. Mind, there weren’t any at this location that I was looking for, but now I have something new to keep an eye out for the next time I visit Hard-Off in Japan.

A Yakiniku Like location near Kōenji Station: "Tasty! Quick! Value!"
Pick two?

Golden Gai in Shinjuku at night is an experience. I saved this for my last hangout with Jonas. Essentially, Golden Gai is a series of alleys in inner Shinjuku where it’s just tiny bars up and down the street, row after row. Each one is practically a hole in the wall, as there’s often only enough room for the bartender and one or two other people. I was fortunate to have Jonas with me as my de facto translator. Any place was as good as the next to us, so we popped into the “Blue Dragon” where we each had a couple of beers and listened to American oldies on the speaker system. The bartender served us some potato chips and other snacks. Jonas urged me to try the rice cake, which I didn’t find palatable at all (sorry Jonas), but I did enjoy one of the cookies.

I was psyched up to hop to subsequent bars in Golden Gai, but Jonas had other, better plans in mind…

Looking toward the brightly-lit high-rises of Shibuya at night from within the Shibuya Scramble

We took the train to Shibuya and crossed the famous “Shibuya Scramble” known for its busy aerial photos. Indeed, it was busy; for a Monday night, there were more folks out and about than I expected. Being the height of cherry blossom season, we couldn’t get into the Shibuya Sky observation deck at the top of Shibuya Scramble Square—next time! But Jonas had a backup plan, so we hiked our way up the street to Café Legato, which stood a still-impressive 15 stories tall. Inside I knew right away this was a place for me because the DJ had queued up a couple of songs by Queen. 🙂 They offered a cocktail called “Slava Ukraine” which, to my recollection, was made of blue curacao and vodka, covered with a slice of lime, and topped with a cube of sugar which was then set on fire. Yeah. If that wasn’t enough, it was so sweet that I couldn’t taste the alcohol. Always a dangerous situation, but neither Jonas nor I were driving anywhere so what the hell, I had two of them.

Me looking in astonishment at the flaming drinks we ordered at Café Legato in Shibuya

Miyashita Park is not too far away from Café Legato so we walked off our happy hour there. This park sits on top of a shopping mall and reminds me a lot of the High Line in western Manhattan: long and tucked among high-rise buildings. For as late as it was in the evening (past 8pm) there were still quite a few folks enjoying themselves in the park, and there is certainly plenty to do: rock climbing and volleyball were among the sports I found the park could accommodate. Sadly by this time my phone battery was just about dead, so I didn’t take any pictures, but that was fine with me since Jonas and I had plenty to talk about, like what I could look forward to the next time I came to visit. 🙂

As farewells go, this one was pretty memorable. I do need to make it back to Tokyo, if only to make it up to Shibuya Sky. Tokyo is the largest city in the world, and it’s not even a contest: from the window of Café Legato looking out toward the horizon, all I could see was miles and miles of more city, in every direction. There’s no way to tell from Shibuya where Tokyo ends. At the end of the evening, the preeminent thought in my mind was just how little of Japan I had explored, and how my brief visit merely skimmed the surface. I had surveyed Japan as a tourist, but I have to go back at some point, no question about it. This final evening in Tokyo made for a sweet dessert to my overall “taste” of Japan.

(Just kidding, I capped off the evening by bathing in the onsen back at my hotel. I mean I was in Japan, I kind of had to. But as they say about Vegas… 😉 )

Day 6 / Tuesday: Home and Takeaways

There isn’t much to say about my last day in Tokyo since my flight back to the U.S. was that evening out of Narita Airport, so I budgeted plenty of time to get there, check in, and get situated. However, that morning I broke my tradition of getting breakfast at McDonald’s to instead try some pastries at Tokyo Station. Near the various platforms is a tiny café called Boulangerie la Terre with plenty of large French-style buns, rolls, and croissants. I had a cinnamon-flavored roll, a chocolate roll, and a sugar roll, just enough to keep me going until I got to my terminal. Narita Airport is much farther from downtown Tokyo than Haneda, though; I was prepared for the extended train ride of about an hour. Once I got there, before checking in I knew I had to drop off my pocket WiFi into a post box. It being early afternoon, there were staff to point me in the direction of the post office, but much to my bemusement, there was no staff present at the Hawaiian Airlines check-in counter to process me or the small handful of other guests also waiting to catch flights to the States. Facing the possibility of having to keep myself entertained indefinitely until sufficient personnel arrived, I pulled a small trick: I left my pocket WiFi turned on before sealing it in its envelope and dropping it into the terminal’s post box. My rental was still valid through the rest of the day, so I figured no harm done. There’s a bench near the post office where I parked myself, my phone still able to connect to my hotspot. As long as its battery and mine held out, I could sit and surf the ‘net until such time as things opened up.

A patrolling robot inside the international terminal at Narita Airport

After people watching for a few minutes though, I noticed that one did not need to check in to explore the shopping area on the upper floor beyond the check-in counters. Since I had the time, I stowed my bags in a temporary locker and headed upstairs to explore. Lo and behold, there was a Pokémon store here too. And to my delight, this store had a Mudkip in stock. Not just one, but several in a few sizes from which I could select. Who’d have thought! What a lovely cap to my almost week-long odyssey.

Visiting an entirely different country really challenges my assumptions and habits. I’ve found that when I am a stranger in a strange land, being patient, flexible, and polite gets me pretty far. With that said, here is a short list of takeaways that stuck with me during and after the trip:

  • Haneda Airport has an attendant at the baggage claim carousel whose job it is to make sure the conveyor belt doesn’t get logjammed with arriving bags. I don’t think I’ve seen anything like this in the U.S. (although Heathrow Airport has an automated system that accomplishes the same thing).
  • Police officers I saw on the Tokyo streets do not carry guns. As a lifelong resident of the U.S., I wasn’t terribly surprised by this, but I was curious how more serious offenses are handled if LEOs aren’t armed for such incidents. Naively I thought maybe the incarceration system in Japan is so horrific that people would think twice before risking it, but nope. It’s a difference of social norms compared to the States. To my understanding, in Tokyo if you have a criminal record, you are essentially ostracized and cannot reasonably function in society. Folks behave themselves by and large as a matter of self-preservation. As a consequence, officers generally don’t need to be armed; incidents that would require them to be are so rare that they’re newsworthy when they happen. I can’t say whether this is a good or a bad thing overall. There are certainly pros and cons to how both Japan and the U.S. handle crime, but it’s definitely different from what I am accustomed to.
  • Upon nearing a station, trains in Japan announce on which side of the car the doors will open, relative to the direction of travel. As an occasional rider of the BART system in the San Francisco Bay Area, I’d find this a small but welcome touch, especially as BART is in the midst of updating its own fleet.
  • Google Maps is your friend! If you take away one piece of advice for a trip to Japan, let it be this: install and learn the Google Maps app. It saved me several times while wandering the streets trying to get where I wanted to go.

Remembering Tony Diaz

Tony Diaz passed away recently. He was a fervent Apple enthusiast, particularly of the Apple ][. I never met him in real life, but over the past few years we talked quite a bit on IRC—where he alternated between the handles “amd” and “tdiaz”—about his involvement with the final few weeks of Apple’s Pippin project.

A few weeks ago, in one of the last conversations I had with Tony, he gave me permission to share his story and assorted photos of various Pippin-related ephemera. Rather than editorialize his words, I think it makes the most sense to simply share, word for word, what he in turn shared on IRC. I have, however, edited out system messages and the usual irrelevant IRC side chatter, replaced now-dead links with Internet Archive copies where available, and replaced photo links with the images themselves. Without further ado…

Monday, November 16, 2015
12:38 < blitter> hey dougg3 maybe i asked this before-- how hard would it be to do a pippin rom simm?
12:39 < blitter> in the absence of schematics, i've got access to an original dev ROM. the programmable one from apple
12:39 < blitter> no programmer for it though. i think only apple has/had that
12:40 < koshii> blitter: Whoa, what's your idea with that? :)
12:41 < blitter> bypass the authentication check so I can run whatever
12:41 < blitter> s/run/boot
12:41 < blitter> the dev ROM can do that iirc, and the 1.3 ROM can. however i don't think the 1.3 ROM has surfaced online
12:41 < blitter> they're rarer than hen's teeth
12:42 < blitter> but using this dev ROM makes me really nervous, as it's the only one i've ever come across and i'd hate for something to happen to it if i'm careless
13:00 < koshii> Wow this is neat http://www.vintagemacworld.com/pip1.html
13:01 < koshii> I didn't know about any of this
17:51 < Lord_Nightmare> blitter: iirc the dumps of one of the pippin roms the powerpc boot part in the last 1mb has a bad checksum and may or may not work
18:45 < dougg3> blitter: i'm only here for a few minutes and then i have to go afk again...but i dunno. is it a SIMM or a DIMM? DIMM is more complicated to do. no idea of the pinout of the pippin socket either
18:46 < dougg3> http://thinkclassic.org/viewtopic.php?id=325&p=3 --- ClassicHasClass was doing something with this on ThinkClassic, I think? or is the apple interactive television set top box something else?
18:48 < blitter> not sure, is there a way to tell visually?
18:48 < bbraun> pippen and interactive television box are different afaik
18:49 < blitter> correct
18:49 < blitter> STB predates pippin too i think by 3 or 4 years
18:49 < dougg3> ah, ok
18:49 < dougg3> well, one thing that would be helpful is the number of pins
18:49 < bbraun> and yes, this has come up before, and the same basic questions were asked, and nobody knew or looked into it. :)
18:50 < dougg3> also, if you can see traces going to connectors on both sides of the pcb, that would probably indicate a dimm
18:50 < blitter> [photo of the Pippin ROM board]
18:50 < blitter> here's a picture
18:50 < dougg3> what i mean is a trace going to essentially each pin
18:50 < blitter> if you can see it (hope you can)
18:50 < dougg3> ok, i can already tell it's not a normal mac rom simm from that, so either way, would require new hardware work :)
18:50 < blitter> right
18:50 < dougg3> i don't know anything about the pinouts. that would be what we'd need to know
18:51 < blitter> the pippin had a special slot for this that i haven't seen elsewhere
18:51 < dougg3> yeah, it's a weird card edge connector instead of a simm/dimm slot it looks like
18:51 < dougg3> like closer to a pci connector
21:52 < blitter> there are traces to contacts on both sides
21:52 < blitter> i guess that means it's a dimm
22:03 < dougg3> yeah, if most contacts have traces on both sides it's probably a dimm
22:03 < dougg3> just means it probably has to be at least 4 layers
22:07 < blitter> in the absence of a pinout, would datasheets of the flash chips help? they're just generic intel chips, i think i have a part #
22:08 < blitter> N28F020-90
22:08 < blitter> 8 of them per side, 262144 bytes each
22:17 < dougg3> if you traced the flash chips out to the pins, that would help
22:17 < dougg3> is that all that's on the board? I see a chip in the middle too
22:26 < balrog> blitter: how many pins are the DIMMs?
22:26 < balrog> these are probably the same as the PPC Performas
22:26 < balrog> and possibly all the way up to the Beige G3
22:26 < balrog> a reprogrammable ROM DIMM for those would be neat.
22:28 < blitter> the chip in the middle says P52AB 74F86 on it and has a tiny national semiconductor logo on it
22:30 < blitter> does the beige g3 use a similar card in it? i have one handy, i can look and compare it
22:30 < blitter> never thought to check
22:31 < blitter> hmm the rom dimm on my beige g3 is longer than the pippin card
22:31 < blitter> there's a slot next to it labeled "sgram only" though which is closer
22:31 < balrog> hmm, it is.
22:32 < balrog> SGRAM is for graphics memory
22:32 < blitter> yeah
22:32 < blitter> idk if the pins are similar, i don't have anything in that slot
22:32 < balrog> how many pins is the beige g3 rom dimm?
22:32 < balrog> and do you have one of those crappy mid-era performas?
22:32 < balrog> like the 6200
22:35 < blitter> unfortunately the only performas i have are LC040s
22:35 < balrog> http://lowendmac.com/2014/power-mac-and-performa-x200-road-apples/
22:35 < blitter> here are some closeup pics
22:35 < blitter> [photo: front of the ROM board]
22:35 < blitter> [photo: back of the ROM board]
22:35 < balrog> that's the one for the Pippin, right?
22:36 < blitter> front and back respectively
22:36 < blitter> yes, pippin
22:39 < tdiaz> That's interesting.
22:40 < tdiaz> Don't think I've seen one with Intel Flash.
22:40 < blitter> i wonder if amelio was still at natl semi when these were made :P
22:41 < balrog> what is that natsemi chip?
22:41 < blitter> i'm guessing some kind of memory controller?
22:41 < blitter> like the MMC chips found in some NES carts maybe? idk
22:42 < blitter> i'm a bit of a noob when it comes to this
22:42 < blitter> the hardware side
22:48 < tdiaz> PiPPiN shizzle..
22:48 < tdiaz> [photo of a stack of Pippin boards]
22:49 < blitter> that's a lot of 1.3 boards... :D
22:50 < blitter> or 1.0 boards
23:07 < dougg3> that's definitely a dimm
23:08 < dougg3> 74F86 is a set of four XOR gates
23:09 < tdiaz> I have some of the STB units too.
23:11 < Lord_Nightmare> STP?
23:12 < Lord_Nightmare> the accelerator?
23:12 < balrog> Lord_Nightmare: set top boxes
23:12 < Lord_Nightmare> oh yeah
23:12 < balrog> Apple Pippin
23:12 < Lord_Nightmare> those.
23:13 < tdiaz> No, the STB is a different thing, '040 based. Elongated pizza box form factor, uses the same SCSI as the Powerbook.
23:13 < blitter> yeah. circa-'93
23:13 < balrog> oh, those things
23:13 < blitter> older cousin of the pippin


23:27 <blitter> Bandai wasn't in the best of shape back then
23:27 <tdiaz> Yeah. It was a mess. ;-)
23:27 <tdiaz> I don't have very many extererior plastic cases- they shipped some in plastic cases like the Sega CD.
23:28 <tdiaz> Others in the cardboard based jewel case clone.
23:28 <tdiaz> Trying to locate the stack of plastic cased ones.
23:34 <tdiaz> A fair amount of whats been listed on various sites for titles never actually shipped, or even existed.
23:36 <tdiaz> I was supposed to get the authoring system from them, in the end. But they pretty much disappeared.
23:36 <tdiaz> The bits on sites about how they shipped them back to Japan and converted them ...
23:36 <tdiaz> It's bull.
23:37 <blitter> hmm, interesting
23:37 <blitter> i thought all the authoring had to go through apple
23:38 <tdiaz> Because we sold close to 40,000 Matsushita/Panasonic SCSI CD drives .. that we extracted from 40,000 boxed PiPPiN units.
23:39 <tdiaz> In a nutshell, BDE was exploring what to do with them.
23:39 <tdiaz> Niche, kiosk unit, something.
23:40 <blitter> yeah and i presume that's where the kmp 2000 sprung from?
23:42 <tdiaz> I had been looking into making larger RAM modules, swapping the SCSI ribbon cables to have a connector on the back, which would have meant a custom metal die to stamp the chassis hole. Disassemble each one, make the hole, put it back together.
23:42 <tdiaz> In the end. BDE decided to just write it off- and we had to scrap them all.
23:43 <tdiaz> The "proof" of the scrapping? - The ROM simm with a hole drilled through the middle of it.
23:44 <tdiaz> The Katz units are ATMark based as far as I knew.
23:46 <tdiaz> Close to 40,000 keyboards, AppleJack controllers,.. mostly ground up.
23:46 <tdiaz> The SCSI drives and external 2400 modems went out our shipping dept. over a few years.
23:46 <blitter> what a shame
23:46 <blitter> the applejack controller was pretty neat
23:47 <tdiaz> Sucked.
23:47 <blitter> Do you have a Super Marathon package?
23:47 <tdiaz> I squirreled away 10 or so boxes, most have the CD drive removed, but were all dumped back in.
23:47 <tdiaz> No package.
23:47 <blitter> just the cd?
23:48 <tdiaz> Yes.
23:48 <blitter> ah ok
23:49 <tdiaz> The KMP 2000 has the SCSI connector.
23:50 <tdiaz> Though I've got several ADB dongles that override the boot check, and adapters to use standard ADB devices.
23:51 <blitter> those are probably harder to recreate than a rom simm though
23:51 <tdiaz> Authoring was intended to be done by Apple, yes- with each licensee having a sub-station and then that would communicate to Apple.
23:51 <tdiaz> Nearing the end, BDE had the whole thing in house.
23:52 <tdiaz> It was a stickered up, modded Mac IIsi.
23:52 <blitter> hah!
23:52 <blitter> i take it you worked for BDE during that time
23:52 <blitter> ;)
23:53 <tdiaz> No, actually.
23:53 <blitter> i wonder otherwise how you know all of this
23:53 <tdiaz> Yeah. I know. ;-)
23:53 <tdiaz> (Or have CDs marked as BDEC corporate portfolio, etc. .
23:54 <tdiaz> I was with a computer/electronics surplus place.
23:54 <blitter> ah
23:54 <tdiaz> They came to us ..
23:54 <blitter> my first guess was going to be maybe you jumped on some kind of auction or office raid when they closed up shop
23:56 <tdiaz> When I got into what else they had, I ended up with a couple wireless (RF) AppleJack, 13MB RAM module, Floppy Dock, EtherDock, a box of CDs..
23:56 <tdiaz> Worked for about two months to market/find a need and make the thing 'fit'.
23:56 <blitter> do you have the blue case for racing days?
23:57 <blitter> it used the same template as super marathon
23:57 <blitter> that RAM module.... yeah. hang onto that :)
23:57 <tdiaz> I'm not sure. That's a box I can't find right now.
23:58 <tdiaz> I got a lot of discs just dumped in a box, and 15 or so packaged ones in the larger case.
23:58 <tdiaz> It was clear they were not going to make anymore - it turned to "what can we get for them" ..
23:58 <tdiaz> and the bean counters ultimately decided the write off was a better option.
23:59 <tdiaz> They didn't care what happened with the stuff, as long as the ROM SIMMs were accounted for, with a certificate of destruction.
--- Day changed Tue Nov 17 2015
00:00 <tdiaz> The CD drives and modems were in demand, as Apple had the custom firmware only recognized by MacOS .. and these were basically CD 600i
00:01 <tdiaz> The power supplies, many got sold as a raw power supply for bench screwing around./
00:02 <tdiaz> Local trash hauler .. "you need a roll-off for cardboard?" .. "no, actually. I need a couple, and they need to be swapped a few times."
00:02 <tdiaz> Lots of cardboard. Each unit was in the outer brown carton with the inner white package inside.
00:03 <tdiaz> I could do an "unboxing" video .. ;-)
00:04 <tdiaz> Crap. I don't know why I did that.
00:04 <tdiaz> (Fake Mac) .. has a weird issue with the optical drive.
00:05 <tdiaz> If a disc is mounted for "too long" (how long, don't know) idle, it will not come out, unmount, the drive goes off line and a restart is required.
00:06 <tdiaz> I had just done that earlier. I was going to poke around one of their corporate marked discs.
00:06 <tdiaz> As for the authoring, I'
00:07 <tdiaz> ve got some discs that boot into 7.5.5/Finder, using OpenTransport to connect for internet- SlIP, PPP, Ethernet or MacIP.
00:07 <tdiaz> One that boots to 7.6, and a few others with HyperStudio / HyperCard
00:08 <tdiaz> Authenticated - mastered, to boot on any unit.
00:09 <tdiaz> I've not uploaded them anywhere because they've got some stuff I never intended to distribute externally.
00:09 <blitter> huh, interesting
00:10 <tdiaz> If perhaps- take a sector editor to an ISO and blast specific file entries to smithereens ..
00:10 <tdiaz> See if it would still pass since it's otherwise got the key'ed boot sector.
00:10 <blitter> Just the boot sector is keyed?
00:10 <blitter> I thought it was the whole disc. then i wondered how the check was done at boot since the boot time is far too short to read the whole disc
00:11 <tdiaz> The disc is checksummed, and that, plus the key is put into the boot sector.
00:12 <tdiaz> That's it. No other protection. You could copy titles as a whole. When I was getting stuff authored - I brought a 650MB external SCSI drive to them. Left with the drive and a CD.
00:12 <tdiaz> ..and interestingly, the drive itself;f would actually boot as an authenticated volume until it was screwed with.
00:15 <tdiaz> The order to scrap the stuff came from Tokyo ultimately. Even the locals were hoping they would just dump them as surplus, as is.
00:16 <tdiaz> If not find a niche to market them towards.
00:27 <tdiaz> This stuff all happened from about March 1998, and it was late October when they decided.
00:29 <tdiaz> The scuttlebutt was the inventory had been sitting since 2Q 1997, untouched except for returned, un-opened stock placed back with it.
00:31 <tdiaz> So it sounds like to me, the latter half of 1997, the thing petered out- and early 1998 they were trying to regroup, ultimately calling it a year later.
00:31 <tdiaz> My stuff is date July & Sept. 1998.
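
Tony describes the disc authentication above as a checksum of the volume that, together with a key, gets written into the boot sector at mastering time. Here is a purely speculative sketch of what verifying such a scheme could look like. Nothing in the conversation documents the actual Pippin algorithm; the 2048-byte sector size, the MD5 digest, and the boot-sector location below are my own placeholder assumptions.

```python
import hashlib

SECTOR_SIZE = 2048     # assumed CD-ROM data sector size
BOOT_SECTOR_INDEX = 0  # assumed location of the keyed boot sector

def volume_checksum(image: bytes) -> bytes:
    """Hash every sector except the boot sector (which stores the result)."""
    digest = hashlib.md5()
    for offset in range(0, len(image), SECTOR_SIZE):
        if offset // SECTOR_SIZE == BOOT_SECTOR_INDEX:
            continue  # the boot sector can't checksum itself
        digest.update(image[offset:offset + SECTOR_SIZE])
    return digest.digest()

def passes_check(image: bytes, stored_checksum: bytes) -> bool:
    """Model of a verification pass: recompute the checksum and compare."""
    return volume_checksum(image) == stored_checksum
```

Note that if the console recomputed a full-volume checksum like this at boot, altering any file data would break it; blitter’s observation that boot is far too quick to read the whole disc suggests the runtime check only validates the keyed boot sector itself, with the full checksum computed once at mastering.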


00:32 < tdiaz> I've got a System 7.5.5 and 7.6 signed disc for PiPPiN.
00:33 < tdiaz> Problem is, release wise- it'
00:33 < tdiaz> s got stuff on it I never intended to spread around.
00:34 < tdiaz> Always figured I'd be able to make "all I wanted", back when. As I was supposed to end up with the authoring system.
00:35 < Lord_Nightmare> tdiaz: is the entire disk checked during sig check, or only the header?
00:35 < tdiaz> The bean counters ultimately decided, "write it off", scrap 40K units, by removing the ROM SIMM, which is accessible from, underneath.
00:35 < tdiaz> Drill a hole it, certificate of destruction.
00:35 < tdiaz> Just the boot sector.
00:35 < tdiaz> That makes me wonder ..
00:35 < Lord_Nightmare> since even if we can't alter the file system itself (if that part is signed), we can very easily null out any data you don't want released on the disk
00:35 < tdiaz> H,mmm..
00:36 < tdiaz> Yeah, that'
00:36 < Lord_Nightmare> and while the files may appear on the disk they'd be blank
00:36 < tdiaz> s what I'd thought of figuring a way of.
00:36 < Lord_Nightmare> or "ThisFileIntentionallyBlankedOut" over and over and over
00:36 < Lord_Nightmare> hex editor :)
00:37 < tdiaz> I probably should try it. Make an image, burn it back, see if it boots.
00:37 < Lord_Nightmare> also what is the signature means?
00:37 < Lord_Nightmare> is it rsa-128?
00:37 < tdiaz> If it does, whack some data and see if it still boots.
00:37 < Lord_Nightmare> if its 128 bit rsa that can be cracked extremely easily
00:37 < tdiaz> It's a keypair, I believe 64 bit, and involves a checksum of the volume at the time of mastering.
00:38 < tdiaz> At worse, it's 128.
00:38 < Lord_Nightmare> 64 bit, if we knew the algorithm could be cracked in probably less than a minute
00:38 < Lord_Nightmare> 128 bit in a few hours or days
00:38 < Lord_Nightmare> on an AWS cluster, less
00:39 < blitter> for a second i thought AWS meant Apple Workgroup Server :P
00:39 < tdiaz> LOL ;-)
00:39 < blitter> i didn't think they were *that* powerful
00:39 < blitter> :P
00:39 < tdiaz> I know a generic boot disc with OpenTransport would be nice to have.
00:39 < tdiaz> Though that also is partially an issue.
00:40 < tdiaz> The dialup information - is written to the disc.
00:40 < tdiaz> I have several ones for different ISPs, back in the day.
00:40 < Lord_Nightmare> the AWS bootroms are dumped
00:40 < tdiaz> netcom, netzero, juno .. etc.
00:41 < Lord_Nightmare> balrog found 2 boards somewhere and we dumped both of them
00:41 < Lord_Nightmare> whats NOT dumped is the ANS/"shiner" roms
00:41 < Lord_Nightmare> those i'd really love to see
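
Lord_Nightmare’s suggestion above, nulling out sensitive file contents in the disc image with a sector/hex editor while leaving the file system and boot sector untouched, amounts to overwriting a byte range in place. A minimal sketch of that operation; the offsets you would pass in are illustrative, and on a real image you would take them from the volume’s file catalog:

```python
def blank_region(image: bytearray, start: int, length: int,
                 filler: bytes = b"ThisFileIntentionallyBlankedOut") -> None:
    """Overwrite image[start:start+length] with a repeating filler pattern,
    leaving every other byte (file system structures, boot sector) intact."""
    pattern = (filler * (length // len(filler) + 1))[:length]
    image[start:start + length] = pattern
```

The file entries would still appear on the disc, but their data extents would read back as the filler string, exactly the "ThisFileIntentionallyBlankedOut over and over" idea from the chat.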

Saturday, March 4, 2017
20:33 < blitter> huh, well that's new
20:34 < blitter> i got the rom copier to copy my pippin's rom
20:34 < blitter> at the end it complains the checksum doesn't match
20:36 < koshii> How would it even know the checksum
20:38 < blitter> it verifies after the copy
20:38 < blitter> i wonder if it's because i only gave it 3000K?
20:39 < blitter> maybe it needs more RAM when verifying and just fails if it runs out
20:39 < blitter> also, i have a pippin prototype now
20:39 < blitter> so will be dumping its ROM next
20:39 < blitter> i suspect its ROM may just be 1.0, since it says "GM Flash" on it
20:40 < koshii> Crazy, how did you get your paws on that?
20:41 < blitter> friend of mine has one
20:41 < koshii> Are you all ex-Apple employees or something?
20:41 < blitter> he's letting me borrow it for the next couple weeks
20:41 < blitter> no
20:41 < blitter> ex-pippin developers ;)
20:42 < koshii> ahh
20:42 < koshii> A small group of folks I guess :)
20:42 < blitter> yup
20:42 < blitter> who knows how many pippin protos are left
20:42 < blitter> i imagine tdiaz probably has one
20:42 < koshii> Was the Pippin ever commercially produced?
20:42 < blitter> oh yah
20:42 < koshii> At all, in any number?
20:42 < blitter> mostly in japan
20:42 < koshii> I see
20:42 < blitter> they're pretty available on ebay
20:43 < koshii> Can they boot bog-standard OS7?
20:43 < blitter> the japanese model anyway. the black US version is rarer
20:43 < blitter> mine boots into 7.5.3
20:43 < blitter> 7.6 kinda works
20:43 < blitter> 8 doesn't
20:43 < blitter> http://www.vintagemacworld.com/pip1.html
20:44 < koshii> Pippin can only do LocalTalk?
20:48 < blitter> a stock one, yeah
20:49 < blitter> there might have been an ethernet dock but i haven't seen one nor do i have one
20:54 < blitter> in any case, both times i launched the rom copier and ran it, it generated the same md5 at startup
20:54 < blitter> so maybe the dump is fine?
21:22 < jjuran> blitter: Yeah, sha256sum is a separate binary, so it could fail to run.
21:22 < jjuran> Er, md5sum.
21:23 < jjuran> Good to hear you got it working. :-)
21:26 < blitter> ah ok, that explains things
21:26 < blitter> i just also dumped the rom from my prototype, it has the same checksum as the "kinka dev" rom that's floating on the internet
21:26 < blitter> so i'm going to assume that's also a good dump
21:37 < blitter> jjuran: i do have some mem usage numbers for you if you want :)
21:38 < blitter> 111K launching, 1657K while calculating checksums, 1565K while waiting to click Copy/Quit, 2812K used when copying
21:38 < blitter> don't have numbers for verification since it didn't get very far into that process
23:33 < tdiaz> 50,000 of them were sitting in Orange County for a few years. I think it sold more than the Sony eVilla. Barely.
23:34 < tdiaz> But in the end, 40K of them were written off and scrapped.
23:34 < blitter> :(
23:34 < tdiaz> No, they didn't get "sent back to Japan" and re-badged.
23:35 < tdiaz> Because we (where I worked at the time) actually did the scrapping.
23:36 < tdiaz> After some earnest attempts at getting them sold in various niche markets, Bandai ultimately decided to take the write off and we had to return the ROM SIMMs as 'proof'.
23:37 < tdiaz> The edge connectors were chopped off the SIMMs for gold reclaimation.
23:39 < tdiaz> I have several boxed units still, though they had their ROMs yanked as well. I was able to take what I could put in the trunk that day.
23:40 < tdiaz> Units opened, ROMs removed, placed back in the original packaging, carried out back. ;-)
23:42 < tdiaz> We ended up selling the modems and SCSI CD readers into the end user market.
23:49 < tdiaz> The controllers, keyboards didn't seem to strike up much interest. The ruggedized ADB connector is probably the biggest reason, as the 'batarang' controller worked fully with MacOS (Game sprockets).
23:50 < tdiaz> Changing the cables or even just making an adapter wasn't economical.
23:52 < tdiaz> Though .. if the connectors from the motherboard had not been right angle PCB mount ... as we considered removing them and using it to make an adapter. But finding a hood that would cover the thing the way it was, was the big problem. If they had pins straight back, the whole thing would have fit nicely into a 15 pin D-sub hood. (PC Gameport sized) with a female MiniDin 4 in the hole.
23:54 < tdiaz> At the time, costs to get an injection mold made to make a custom hood was about $200K just by itself.
23:56 < tdiaz> I've got an EtherDock and FloppyDock, 13MB RAM expansion card, a few of the wireless controllers, some 1200 baud modems that are actually styled to go along with the thing, that never shipped because they went with an off the shelf Motorola 2400 external with PiPPiN silkscreened onto it, and it
23:57 < tdiaz> 's case was black plastic instead of the beige, tan, white, whatever that the same modem had in it's retail packaged iteration.
23:59 < tdiaz> All this about the various ROMs and how there was one that had the authentication check removed.. I've never encountered one. The two ROM sets that I have many SIMMs of, have been dumped, and required the authenticated disc or the developer dongle will override that.
--- Day changed Sun Mar 05 2017
00:01 < tdiaz> Otherwise the only other way to override it was to attach a SCSI HD in line with CD drive. It would boot un-authenticated from the HD only, with it being SCSI 0 or 6 only. I can't recall. The CD ROM was configured as ID 4 and had the parity jumper on it, unlike the ones in the computers.
00:02 < tdiaz> "AppleJack" controller - though I called it the batarang, batwing, boomerang, two ended banana..
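
The sanity check blitter applies above, dumping the ROM twice, comparing checksums across runs, and comparing against a known-good dump, is easy to script. A small sketch using MD5 as in the chat; the file names in the usage below are hypothetical:

```python
import hashlib
from pathlib import Path

def file_md5(path: Path) -> str:
    """MD5 of an entire ROM dump, read in chunks to bound memory use."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def dumps_agree(*paths: Path) -> bool:
    """True if every listed dump produces the same MD5 (consistent reads)."""
    return len({file_md5(p) for p in paths}) == 1
```

For example, `dumps_agree(Path("pippin_rom_run1.bin"), Path("pippin_rom_run2.bin"), Path("kinka_dev.bin"))` would confirm both reads match each other and the reference dump.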

Thursday, January 17, 2019
16:27 < blitter> did anyone in here win this auction? https://www.ebay.com/itm/Apple-Bandai-Pippin-ROM-Update-Disc-/323641812449
16:27 < blitter> tdiaz: do you have this ROM 1.1 disc?
17:54 < tdiaz> blitter: I have a disc that looks like that, I cant say for sure what the title is off hand, the PiPPiN stuff isnt accessible at the moment, though I'll see if I can get to it..
18:18 < blitter> tdiaz: if you could locate that disc, i'd love to get a copy of it. supposedly it updates the firmware/ROM on dev units? but i would be really surprised if devs were trusted to do that. still, want to examine the disc to find out what's on it
18:18 < blitter> would be further surprised if pippins are even wired to write to the ROM board
18:19 < blitter> (pleasantly surprised, mind :) )
18:24 < tdiaz> I keep hesring about these ROM upgrades, or these ROM versions that don't require thr disc autherntication..
18:24 < blitter> the only ones of *those* I know of are dev ROMs and the 1.3
18:25 < blitter> I've never seen a ROM version 1.1, so curious if that's on the ROM upgrade disc as well
18:25 < tdiaz> I've got some ADB dongles that override the check.
18:25 < blitter> would love to reverse engineer those
18:25 < tdiaz> Rainbow Sentinel.
18:25 < blitter> only thing I know about them is that they have an ADB "type" of 43 or something
18:26 < tdiaz> Standard Mac issue .. with a second dongle cable, rugged ADB to standard.
18:26 < blitter> i've seen another dongle from i think MicroGuard?
18:26 < tdiaz> Maybe it's MicroGuard.
18:27 < tdiaz> It's got a triangular-ish piece of plastic with the logo on the top, it's nearly squared cylinder shaped. About the size of a lipstick.
18:28 < tdiaz> With the cable hanging off an addition 3 inches or so.
18:28 < blitter> [photo of an ADB dongle]
18:28 < blitter> that?
18:28 < tdiaz> The last time I read the wikipedia site on PiPPiN, it was mostly total bullshit, as were most of the blog sites, none actualy had the real story.
18:29 < tdiaz> Yes, that.
18:29 < blitter> the pippin page on wikipedia is full of crap, yeah
18:29 < blitter> the pippin wikia is more accurate (i'm friends with the admin)
18:29 < tdiaz> None of them were "sent back", repackaged, or anything of the sort.
18:30 < blitter> the dongles?
18:30 < tdiaz> The units in north America.
18:30 < blitter> i believe that
18:30 < blitter> apple probably didn't care :P
18:30 < tdiaz> I know .. because I had to oversee just under 50,000 of them get disassembled and scrapped.
18:30 < blitter> :'(
18:31 < tdiaz> While retaining the ROM SIMMs.
18:31 < blitter> were those dev units? or retail?
18:31 < blitter> (both?)
18:32 < tdiaz> Retail. Packagesd.
18:32 < tdiaz> That was -the- initial shipment for North America.
18:32 < blitter> losing the 1.2 SIMMs is no huge loss imo
18:32 < blitter> they were mask ROMs anyway and have the auth check
18:32 < tdiaz> Exactly.
18:33 < blitter> kinda sucks the units themselves were scrapped because i really like the dark case
18:33 < blitter> but oh well, i have one, i'm good :)
18:33 < tdiaz> The product launch was so shitty ..
18:33 < tdiaz> Piss poorly done, too much cost, though you were actually getting something that -could- be a real computer at least, unlike that Sony eVilla..
18:34 < tdiaz> Which they just took those all back.
18:34 < tdiaz> The PiPPiNs were each individually packed, with an exterior box over the white box.
18:34 < blitter> never heard of the eVilla... wow. looks like it was waaaaay late on that boat
18:34 < tdiaz> Oh, yeah. Way. Late. :)
18:35 < tdiaz> The whole IA thing .. totally fucked over Be.
18:35 < tdiaz> I was trying to find a niche market for those PiPPiNs..
18:36 < tdiaz> Kioks, school network multi-unit installs, etc.
18:36 < blitter> kiosk makes a lot of sense
18:36 < tdiaz> But in the end, the Bandai America corporate people just threw in the towel, and decided to write it off .. as the best, and quick plan of action.
18:37 < tdiaz> Which meant certificate of destruction, so they told us we could keep / do whatever we wanted with everything except the ROM SIMM.
18:38 < tdiaz> Which they wanted back, as a physical count, to be destroyed.
18:39 < tdiaz> "Oh, we can do that for you, too" .. so we got the whole thing, and provided certified documentation that the ROM SIMMs were shredded.
18:40 < tdiaz> We ended up chopping the edge connectors off just about a few mm under the ROM so that the ROM chip itself would also have damage and be most lilely ripped off the PCB, rip up traces..
18:41 < tdiaz> So there was a flood of Matsushita/Panasonic CD 504-L's on the market for $19 bucks.
18:41 < blitter> hahaha!
18:41 < blitter> makes sense
18:42 < blitter> wonder what happened to the applejack controllers
18:42 < tdiaz> For $29, if you wanted the "right" bezel, 'cause I got some made.
18:42 < blitter> the black ones are hard to come by
18:42 < tdiaz> The controllers, keyboards.. tried to sell them invidivually, but without the ADB connector adapter..
18:42 < tdiaz> They ended up gettig scrapped.
18:43 < blitter> :'(
18:43 < blitter> the keyboards are even harder to find
18:43 < tdiaz> The metal chassis, the plastic shells, the papers, cardboard, packaged CDs, all of it, went into separate gaylords and scrapped.
18:43 < blitter> especially with the stylus
18:44 < tdiaz> I kept approx 15 units for myself.
18:44 < tdiaz> Though I got most of those from when they were opening them and taking the SIMM out.
18:45 < tdiaz> What I was supposed to get, but the contact at Bandai disappeared shortly after that, was their mastering system.
18:45 < blitter> !!
18:45 < blitter> that would have saved me a hell of a lot of trouble :P
18:46 < tdiaz> Which was just an SE/30 with software / burner attached.
18:46 < blitter> i heard it was a IIsi
18:46 < blitter> in a safe
18:46 < blitter> or maybe that was just the signing machine
18:46 < blitter> was the mastering system essentially a Mac loaded up with Toast?
18:47 < tdiaz> Yeah, the sigining machine was a IIsi, but the SE/30 had the same stuff on it becauise it was more compact.
18:47 < blitter> that's how i mastered the pegasus prime disc, albeit it's unsigned, obviously
18:48 < tdiaz> Basically, Toast and the sigining utility that would write to the hard drive before you ran Toast on it and made an image of the drive.
18:48 < blitter> ohhhh yeah that would be valuable
18:48 < tdiaz> I have a couple signed discs that boot into 7.6
18:48 < blitter> 7.6 runs but not well
18:48 < tdiaz> Because I have the 13MB RAM card, it works quite well.
18:49 < tdiaz> But without it, OMG. Forgetaboutit.
18:49 < blitter> when i run 7.6 on my pippin i get video issues
18:49 < tdiaz> OpenTransport worked so much nicer on pippin.
18:49 < blitter> Mac OS 8 runs as well, but it's even more broken-- if you do anything with sound, the whole machine pukes
18:49 < blitter> 8.1 too
18:49 < blitter> I only have 13MB of RAM so I haven't tested 8.5
18:50 < tdiaz> What I found out later, the developer models were basically a 6100, and most of those were prototype 6100 motherboards adapted.
18:50 < blitter> yep
18:50 < blitter> can confirm that
18:50 < tdiaz> The ones that say PDM Macintosh.
18:51 < blitter> apple recommended devs use a 6100 for dev and debugging on
18:51 < tdiaz> Piltdown Man, Butt Head Astronomer.
18:51 < blitter> since you couldn't really debug on the pippin itself
18:51 < blitter> the pippin *does* run MacsBug though, but that needs the extra RAM
18:51 < tdiaz> I have the motherboard from one.
18:51 < tdiaz> Yeah, with that extra RAM..
18:52 < tdiaz> I have a couple EtherDocks, and a wireless AppleJack controller.
18:52 < blitter> the wireless applejacks can still be found on ebay from time to time
18:52 < blitter> never saw an official dock
18:52 < jjuran> IA thing?
18:53 < tdiaz> Oh, the modems, too- The modems, were ultimately Motorola 56K.
18:53 < blitter> oh yeah the modems are rebadged
18:53 < blitter> those are common
18:54 < tdiaz> Well, in this case, they weren't even re-badged. They were just black Motorolas, bulk packaged, plastic bagged, with a set of stickers; we were supposed to put one sticker on the bottom of the PiPPiN, the other on the modem, and one on the external packaging.
18:54 < blitter> yeah there we go
18:54 < blitter> it's curious that the pippin prototypes would be a 6100 prototype board, because Pippin is a PCI system, and the 6100 is NuBus
18:54 < tdiaz> ..pull out that cheese block shaped square 2400 that was originally packaged with the thing.
18:55 < tdiaz> Well, I think they re-purposed those PDM prototype boards.
18:55 < tdiaz> Because they were housed in a generic bookshelf PC style case.
18:56 < tdiaz> Typical large metropolitan Yellow Pages sized, with the CD drive at the front using its OEM bezel.
18:56 < blitter> this thing?
18:56 < tdiaz> The PDM boards were hand modded to put the 6100 A/V card in there sideways, hand-wired instead of the slot.
18:57 < tdiaz> Yes, that.
18:57 < tdiaz> Inside there is just that brown PDM Macintosh board with wiremods and presumably a different ROM.
18:59 < blitter> the undocumented ROM I have might have come from an EVT unit with the PDM board, but i'm not sure because an Open Firmware ROM shouldn't be able to boot a NuBus Mac
18:59 < blitter> so unless the PDM board was retrofitted with PCI...
18:59 < tdiaz> This whole episode was, I want to say, about Summer 1998 through just after COMDEX that year.
19:00 < tdiaz> They made the decision to write it all off for the end of the calendar year.
19:00 < blitter> i suspect the undocumented ROM I have was an experiment with Apple trying to branch out the pippin platform to other areas like kiosks, because it's dated *after* the retail release
19:00 < blitter> also sound doesn't work
19:01 < tdiaz> I have some pictures from a couple years ago, inside one of those prototypes.
19:01 < blitter> those would be great to get!
19:03 < blitter> http://pippin.wikia.com/wiki/Pippin_prototypes
19:03 < tdiaz> All I have left is the PDM board itself. There were a few of the prototypes in the lot of stuff, and another guy used the cases for PC clones.
19:03 < tdiaz> I've never seen anything but the dark cased ones. I'd actually like to get the white one.
19:03 < blitter> white ones are commonly found on ebay :P
19:03 < tdiaz> I have extra plastics..
19:04 < tdiaz> Yeah.
19:04 < blitter> i have no interest in the white ones
19:04 < blitter> just the black ones
19:04 < blitter> i currently have custody of a "jet black" pippin
19:04 < blitter> that just says "PowerPC" on it in front, no bandai markings at all
19:04 < blitter> it's a 1.2 though, nothing special
19:04 < blitter> just looks neat
19:05 < tdiaz> When I actually get my podcast started, I think I'm gonna do an unboxing of the North American one, since I've hardly even seen them, and never seen the retail packaging.
19:05 < tdiaz> The way they added the silkscreened text on the front of the unit always looked ghetto to me.
19:05 < blitter> i've got custody of a boxed north american unit
19:05 < tdiaz> Like an afterthought.
19:06 < blitter> it's where my keyboard is from. would like one of my own someday
19:06 < tdiaz> Thats what I grabbed several of, boxed with outer box as well.
19:06 < blitter> yeah those
19:06 < tdiaz> A 1958 Dodge trunk and backseat full of them.
19:08 < tdiaz> I saw where someone put the PCB for the floppy drive on OSH Park, so I'm gonna make some of those.
19:08 < tdiaz> I have so danged many Sony drive mechanisms.. I'll never run out.
19:09 < tdiaz> The EtherDock is a re-made Floppy Dock, it's got an off the shelf DEC PCI NIC in it.
19:10 < blitter> he's trying to sell them on eBay. PCB + floppy drive + some burned game for $200
19:10 < blitter> no thanks to that
19:10 < tdiaz> Yeah, no thanks to that.
19:10 < blitter> i have enough sony drives myself
19:10 < tdiaz> I have the games and drives. I'd sell 'em for $75-100 if I were gonna do it as a package.
19:11 < blitter> throw in a 3D printed enclosure and i'll give you the $200 ;)
19:11 < tdiaz> But I'd lose money on that because I'd want it to be in a nice box.
19:11 < tdiaz> Oh, so he's wanting without the enclosure? Thats the only way I'd do it.
19:12 < blitter> same, otherwise how the heck am I going to set up the base unit?
19:13 < tdiaz> $200 is a fair price for that, but a lot more would sell for half that.. but yeah, how else to set it up? Bah.
19:13 < tdiaz> I'd be willing to make some kind of adapter that goes from the inside of the unit and has a 19 pin connector for an external drive ;-)
19:14 < tdiaz> But thats when there's plenty of time to kill.
19:14 < blitter> or modify that adapter to provide a floppy interface *and* PCI
19:14 < blitter> that PCB just routes the floppy lines
19:14 < blitter> you give up the PCI slot
19:14 < tdiaz> Yes, basically.
19:15 < tdiaz> That was my intention. Return that PCI slot, so you can put the NIC in there.
19:15 < blitter> or whatever :D
19:15 < tdiaz> The SCSI CD drives were $19 and the modems, we sold for $29.
19:15 < blitter> external PCI video card, go :D
19:16 < tdiaz> Heh. Put some of those Apple ATI PCI cards in there, or the Matrox..
19:16 < tdiaz> The Matrox 128..
19:17 < blitter> I have a Voodoo5 5500 in my beige G3, but that's overkill even for that system
19:17 < tdiaz> The whole PiPPiN story sucked. :(
19:17 < blitter> it'd be ridiculous in a pippin
19:17 < tdiaz> LOL, yeah.
19:18 < blitter> the voodoo would pack more power than the entire system :P
19:18 < tdiaz> Thats for sure.
19:18 < blitter> I did discover the other day that the Power Mac 7500 uses the same display driver in ROM as the pippin does :)
19:19 < tdiaz> Oh? .. thats interesting.
19:19 < blitter> so I wonder if the final pippin hardware was more closely modeled after the 7500
19:19 < blitter> which *is* PCI
19:19 < blitter> one of the first, in fact
19:19 < tdiaz> It is, basically, so that makes a lot of sense now.
19:20 < tdiaz> Wow, that ..yeah, that totally links the era then.
19:25 < blitter> was right about the same time :)
19:25 < blitter> if only apple had put a 603e in it :(

Friday, April 19, 2019
22:50 < tdiaz> I'd still like to get the authentication mastering software... the contact I had was going to give it to me but I think he got cut loose earlier than expected, too.
22:51 < blitter> well who knows. maybe i'll write my own auth software first :P
22:51 < tdiaz> Or that would be just as good ;-)
22:51 < tdiaz> Any PCBs for RAM more than 13MB?
22:51 < blitter> having the official one would probably speed up such an effort, though ;)
22:52 < blitter> i got a 16MB RAM board off eBay a couple months ago
22:52 < blitter> had it shipped to a friend of mine in the UK, need to get it from him
22:52 < tdiaz> I've got a 12MB board, I think.
22:52 < blitter> it's the standard size, not one of those giant hobbyist ones
22:53 < tdiaz> I need to check what size it is, It's something oddball I keep thinking.
22:54 < tdiaz> The machine came with 6MB, IIRC.
22:54 < blitter> yes
22:54 < blitter> 1MB for video, 5MB for system
22:54 < blitter> shared, IIci-style
22:56 < blitter> i seem to recall though that the memory controller puts an upper bound on the total RAM
22:56 < blitter> sometimes weird like 36MB or 30MB
22:56 < tdiaz> I don't remember the PCB being enclosed in a plastic housing, though I might be visualizing Powerbook RAM modules in my head.
22:56 < blitter> so even if a large board surfaces, official or (likely) unofficial, it can't be used to its full capacity
22:57 < blitter> yeah, my 16 meg module looks like this
22:58 < tdiaz> It's at the bottom of a poorly piled onto shelf of stuff. Touching it could become a bigger chore than I desire.
23:00 < tdiaz> That wasn't so bad.
23:01 < tdiaz> Its an 8MB card.
23:01 < blitter> ah. I'm not even sure the 16 meg one shipped
23:01 < blitter> or if it did, likely in extremely limited quantities
23:02 < tdiaz> The 13MB is a 12MB card I got for a small Windows XP Pen era tablet PC, plus the 1MB onboard.
23:03 < tdiaz> At the time I could only find a RAM module for it from Japan, they wanted some close to 4 figure amount for it.
23:04 < tdiaz> Wasn't anywhere near worth it.
23:04 < blitter> yeah tell me about it-- i find similar offers in the states for pippin stuff
23:04 < blitter> $90 for an applejack controller? no thanks
23:05 < tdiaz> Crack smokers.
23:07 < tdiaz> Nothing popping up open source wise for PCBs to make your own, that I can find..
23:08 < tdiaz> for RAM modules, anyway. I've got so dang many Auto Inject floppy drives, I could make a few FloppyDocks if the housing were not so complicated.
23:09 < blitter> https://web.archive.org/web/20070203031235/http://www.catkicking.com/pippin/lets2memory2.html
23:09 < tdiaz> Or look for some PCI NICs that have that particular Digital chipset that is supported in MacOS.
23:09 < blitter> there's what i know about custom memory modules
23:09 < blitter> it's in japanese... but there's a pinout there
23:11 < tdiaz> catkicking.. totally forgot about that one. Probably why I never found it again, didn't know what to look for on archive.org ..
23:16 < tdiaz> Hmmm.. I could fit a pippin next to the PC-98 ... with a monitor switch.
23:16 < tdiaz> Though it would be the North American one.
23:17 < tdiaz> When I get the classic console game stuff setup.. thats probably where it will go anyway.
23:17 < blitter> same, but not until i've cracked it ;)
23:17 < blitter> super marathon isn't *that* compelling
23:22 < tdiaz> LOL OMG. Looking through completed pippin listings on eBay. Someone paid $600 for super marathon on eBay. WTF.
23:22 < blitter> somebody else paid $900+ a few weeks ago
23:23 < tdiaz> I had no idea they would fetch that much.
23:26 < tdiaz> I look every once in a while to see if there's a run on boxed/complete units. If so, it's totally worth putting a ROM SIMM back into one and posting it..
23:26 < tdiaz> But there hasn't been, though I've also seen way more boxed Japanese units than US by far.
23:34 < tdiaz> Hmmm.. wonder what the deal with this is, https://www.ebay.com/itm/333135936890
23:34 < tdiaz> the printer port is taped over.
23:35 < blitter> somebody probably thinks they have something more valuable than it really is
23:35 < blitter> i got my @world for $200
23:37 < tdiaz> https://www.ebay.com/itm/183762036719 this got bid up crazy.
23:38 < tdiaz> No pen, 'very' used, listed as non working. $622? Come on.
23:38 < tdiaz> Thats got to be a money laundering deal.
23:39 < tdiaz> That thing is a -mess-
23:51 < tdiaz> LOL, Another "it's been documented" that the US models were repackaged for Japan.
23:53 < tdiaz> Well, okay. They might have done a few. But just under 50,000 got disassembled in our back lot. The metal scrap yard was wondering WTF we were doing with several roll-off's full of metal chassis.
23:54 < blitter> i'm mostly curious where the black atMarks are from
23:54 < blitter> i've only ever seen one. it's the one i have in my custody
23:54 < blitter> and it's got one of those "sample for evaluation only" labels on the bottom of it
23:55 < blitter> so i don't even know if black pippins shipped
23:55 < tdiaz> There's one on eBay with the box. It claims it's for "business use".
23:56 < tdiaz> It's got a fair amount of printed materials with it.
23:56 < tdiaz> I like that Apple styled manual.
23:57 < tdiaz> https://www.ebay.com/itm/232882908310
23:58 < tdiaz> Nevermind, that's a black box. But looks to be platinum unit.
23:59 < blitter> yep. dev units shipped that way also
23:59 < blitter> maybe this is a dev unit
23:59 < blitter> "professional" use only
--- Day changed Sat Apr 20 2019
00:00 < blitter> might mean for professional development
00:00 < tdiaz> If it says that, I'd say it is.
00:00 < blitter> the auction says that
00:01 < blitter> but otherwise, except for the lack of "GM Flash" label on the CD tray, this one looks identical to a dev unit i've used
00:01 < blitter> packaging, labeling, and all
00:01 < tdiaz> Like I mentioned, though, I've seen way more AtMark models than @World listed, ever.
00:02 < blitter> oh i totally believe that
00:02 < blitter> a number of my friends have urged me to visit akihabara
00:02 < blitter> apparently there's a glut of pippin crap there
00:02 < blitter> in varying degrees of condition. mostly rough
00:02 < tdiaz> "was" probably, that area got redeveloped it seems, from what I've read.
00:03 < blitter> it's just more spread out now from what i hear
00:03 < tdiaz> Though I suppose you couldn't really chase away niche vendors, so they're probably all still going somewhere.
00:04 < tdiaz> So suffice to say, I'd guess that the @World units are far less in existence than the ATMark.
00:05 < tdiaz> Perhaps had we been able to simply sell those things at whatever price, there would be a glut of them still around.. BDE took the write off.
00:06 < tdiaz> But thats what I've got, several boxed sets of the @World with an outer box, so the retail box is protected.
00:09 < tdiaz> They've been opened/handled because the ROM SIMMs were removed, and I just grabbed them from the line right where they were opening the boxes and taking out the consoles. In the end, I ended up with enough ROM SIMMs for what I've got.
00:09 < blitter> i thought all the ROM SIMMs had to be shredded?
00:09 < blitter> and they counted them up
00:09 < tdiaz> They did.
00:10 < blitter> then how did you get some?
00:10 < tdiaz> Anything I had from before the decision was made .. was mine.
00:10 < blitter> unless they... ahem... miscounted ;)
00:10 < tdiaz> ..and then there is that ;-)
00:11 < tdiaz> They wanted holes drilled in them, we chose to chop off the edge connector.
00:11 < tdiaz> That usually damaged the ROMs anyway. But yeah, a few fell on the floor. ;-)
00:13 < tdiaz> I don't think they cared, we sold a few units here and there. I think once they saw that there was some number in the high 49,000's they were happy with it.
00:14 < tdiaz> The lot was 50,000 units, but they'd been dipping into it for samples, employees, gifts, etc.
00:17 < tdiaz> I'd like to get the hard drive contents from a prototype.
00:18 < tdiaz> Since I apparently have the right motherboard. Just need to add the video card to it, from a production Mac, IIRC.
00:19 < tdiaz> There's something sandwiched onto it, but otherwise it's a PDM Mac motherboard.
00:20 < tdiaz> Piltdown Man 6100, brown prototype. The chassis is a generic metal bookshelf style PC case with the back cut out enough for the motherboard ports to all be accessible.
00:20 < tdiaz> So I wouldn't be surprised if they were each one offs.
00:21 < tdiaz> Though the 6100 of course, has the 601..
00:21 < tdiaz> Still a 66MHz.
00:22 < blitter> i'd be curious to see if it boots the 7.5.2a3 disc that boots the "Disco" ROM I played with a while ago
00:22 < blitter> but if you have one of those early boards, I'd love for you to dump the ROM from it
00:23 < blitter> at the very least, to confirm it is or is not the same ROM as a PDM
00:23 < blitter> i suspect it's not
00:24 < tdiaz> I'd have to figure that the production total was probably about 100K with a split between Japan and the US models.
00:24 < tdiaz> Appears that the Japanese inventory eventually sold, whereas what didn't sell was recalled/written off in the US.
00:28 < tdiaz> The prototype I'm sure lacks the authentication. I don't see why it shouldn't boot whatever MacOS that runs from the era.
00:28 < blitter> depends on what ROM is inside
00:28 < blitter> the "Disco" ROM lacks authentication too, but only boots the 7.5.2a3 disc
00:29 < tdiaz> I've got ADB dongles to skip the authentication, but I want to say also, that if the thing booted from a hard drive with SCSI ID 0, I think it also skipped authentication.
00:29 < blitter> everything else either bombs or hangs at the Welcome screen
00:30 < tdiaz> I had to attach different SCSI ribbon cables in them, so I could have a 50 pin centronics hanging out of the back.
00:30 < blitter> yeah i just went and ordered a ribbon cable with extra headers on it. i have a scsi header dangling out the back that i just attach directly to a bare drive :P
00:32 < tdiaz> I took the nibbling tool and cut out the rectangular boxed area in the center top, and bolted in the 50 pin connector on some, others it's just hanging over the back.
00:33 < tdiaz> We also had plenty of those 1/2 AA batteries ;-) "Bulk, $2 each"..
00:34 < tdiaz> EVT-1 PiPPiN: https://pippin.fandom.com/wiki/Pippin_prototypes
00:35 < tdiaz> That looks like the A/V card from a 61/71/8100..
00:36 < tdiaz> Never saw the TV box prototype before.
00:37 < tdiaz> Thats basically an LC III motherboard stacked on the PCB with the SCART stuff.
00:37 < blitter> yeah slightly modified so the PDS slot faced the other way
00:38 < tdiaz> Looks like they built in a modem and RF/Video interfaces and integrated the LC III to it.
00:38 < tdiaz> I have 3 of the TV boxes. Never touched 'em.
00:39 < blitter> i just have one. it lights up and that's it
00:39 < blitter> that's all i can get it to do
00:39 < tdiaz> One sits on the stack of A/C stuff next to the TV..
00:39 < tdiaz> The other two, came in a brown generic flip top box.
00:39 < blitter> yeah mine came in a brown box
00:41 < tdiaz> With what looked like photocopies of the draft manual with crop marks, for some of the docs, a thin Apple styled manual, teardrop ADB mouse, Powerbook packaged HDI-30 SCSI cable and a clear PlainTalk microphone.
00:41 < blitter> i didn't get any of that, but i did get a remote
00:42 < blitter> which looks to me like a powercd remote
00:42 < tdiaz> I don't remember seeing a remote. The only Apple remote that comes to mind is the PowerCD one.. which I have a few of.
00:42 < blitter> i have three 90s Apple remotes, haha
00:43 < blitter> the Mac TV "credit card" remote, the Apple TV/Video system remote, and my STB/PowerCD remote
00:43 < tdiaz> Ah.. that may be why I have them, then. I've got two PowerCDs and 4 remotes.
00:43 < tdiaz> The front end has that round spot with the triangular-like dip in it.
00:43 < blitter> yah
00:44 < tdiaz> I think I plugged in the one, like you, got a blinking light, no video, and said "Pththt".
00:50 < tdiaz> I'd like to get an ATMark unit at some point, but I'm not going to go stupid on it or anything. It's not like I don't have any. For that matter, the shell is really all I 'need'. Though matching accessories would be a plus.

Thursday, August 15, 2019
20:16 < tdiaz> I got to the stashed NIoB PiPPiN units last week.


20:19 < tdiaz> What's always been amusing to me is the thing is basically a retail packaged PowerMac 6100.
20:19 < dtm> ho ho ho they were doin good to make it all the way to ppc
20:20 < dtm> except... not quite CHRP huh lol
20:20 < dtm> tdiaz: that's what i'm puzzling over... how custom did any of the OEMs get? i was thinking they're a mac.
20:21 < dtm> i thought the closest they got to cloning a mac was .... new case and a mouse with many buttons lol
20:21 < tdiaz> ..and if it hadn't have been so proprietary with connectors and designed around absolute turnkey presentation - you'd have probably seen a bit more of OS hacking and such.
20:21 < dtm> but isn't it all based on DRM too? wicked pippin app signing?
20:21 < tdiaz> The PiPPiN is a Mac, for all intends.
20:21 < tdiaz> intents.
20:21 < dtm> *intensive
20:22 < dtm> it's a very intensively purpled mac
20:22 < tdiaz> Apple wasn't a strong marketer in that sector and chose to go with a partner.
20:22 < tdiaz> The intent was to license that stuff out to whomever. Bandai was just the first, and the only that actually made it to market.


20:26 < tdiaz> The Last Ditch PiPPiN effort that I led was quite fun. It's just too bad that, in the end, Bandai ultimately opted for the write off.
20:26 < dtm> yeah
20:27 < dtm> tdiaz: did Bandai have a pretty good position? lots of cash for such diverse ventures, blow it off if it doesn't work?
20:27 < dtm> i dont really remember what they were doing
20:27 < dtm> in general
20:28 < tdiaz> I suspect it was partially related to product / name exposure and continued 'obligations' for support.
20:29 < tdiaz> After all, the market was not used to technology / computing vendors dropping products like a rotten egg overnight ..
20:30 < tdiaz> At the same time, they really didn't want it to actually be a Macintosh competitor. Contractually that was forbidden, too.
20:32 < tdiaz> I kinda suspect that was one of the reasons for disc authentication requirements, but the enforcement of that was so lax, as in purposely easy to override in the delivered product..
20:35 < tdiaz> As for resources that Bandai had available to it, I wouldn't say it was a free spirit venture. The whole 'division', if you will, was very few people with the core of the technical support done via back channel with Apple.
20:36 < tdiaz> Technical support on the OEM side, not the consumer side.
20:37 < tdiaz> But Apple DTS did initially distribute PiPPiN related information to registered Mac developers.
20:39 < tdiaz> It could be thought of as the Mac Mini of then, price wise.

Monday, October 26, 2020
17:13 < defor> what we really need is a 3d printable expansion module...
17:13 < defor> because floppy modules are expensive as hell
17:13 < defor> but since the expansion bus has scsi....
17:15 < blitter> defor: you sure about that? afaik it's just PCI and floppy lines
17:16 < blitter> the deltis 230 MO docking station has its own scsi controller exposed over PCI it looks like
17:16 < blitter> there's definitely a SCSI controller on its adapter board
17:19 < defor> reallllly
17:19 < defor> also do you have the mo drive? I've yet to find one even listed
17:20 < defor> hmm
17:20 < blitter> i'm going off this photo
17:20 < defor> oh nice
17:20 < blitter> that's a 53c8x chip. dead ringer for scsi
17:20 < defor> also that's pretty crazy
17:20 < defor> yep
17:20 < defor> i assumed the undocumented pins were for scsi
17:20 < defor> hmm
17:21 < defor> well damn
17:21 < defor> so can the 1.3 bios boot off pci? :P
17:21 < blitter> don't know, i don't have a 1.3 to test
17:21 < defor> yep
17:21 < defor> also dont have a pci adapter either right?
17:22 < blitter> i don't have one of those either, right
17:22 < blitter> i do have a floppy adapter though, finally
17:22 < blitter> would like to test whether it can boot from floppy
17:23 < defor> yeah i have a prototype floppy (the only one i have)
17:31 < defor> damn pippin prices have gone through the roof recently
17:38 < tdiaz> MrBIOS: In the US, we had to scrap just under 50K units. They didn't even sell that many.
17:39 < tdiaz> From what I was able to deduce, that was a good chunk of the product run for the US.
17:40 < MrBIOS> ok
17:40 < tdiaz> They decided to write it off rather than let them go to a niche. Sucked. Like we've said before. Most of that wikipedia article is bullshit :)
17:41 < tdiaz> That said.. PiPPiN... prices.. Hmmm.. ;-)
17:44 < tdiaz> Let's find some DEC chipset PCI NICs..
17:44 < defor> ?
17:44 < defor> what for?
17:44 < tdiaz> They work with System 7.5
17:44 < tdiaz> .. EtherDock.
17:44 < defor> ok
17:45 < defor> is this related to pippin or something else?
17:45 < tdiaz> PiPPiN..
17:45 < defor> oh i see
17:45 < defor> i mean... realtek works in 7.5 if you really want
17:45 < defor> just have to add the extension
17:46 < tdiaz> DEC works out of the box, which for PiPPiN ... when you needed to authenticate :) ..
17:47 < defor> well thats nice at least
17:47 < defor> although i wonder if the pippin out of the box can even use it on anything- i dont recall chooser on many games i've booted...
17:47 < defor> but.. i dunno
17:48 < blitter> defor: why would games need the chooser? ;)
17:48 < defor> to print?
17:48 < defor> or whatever else you'd use it for
17:48 < blitter> to print what?
17:48 < blitter> it's a game...
17:48 < defor> ok .. software..
17:48 < defor> better?
17:48 < blitter> i think pease has the chooser
17:49 < defor> well that;s a start
17:49 < blitter> one issue is the titles that do feature printing only support the printers for which drivers are on the disc
17:50 < defor> hm
17:51 < MrBIOS> blitter I want to see ucLinux boot on the Pippin
17:51 < MrBIOS> ;)
17:52 < blitter> why uCLinux? what's wrong with ppc debian :)
17:52 < blitter> the pippin is a pci machine
17:52 < MrBIOS> MMU
17:52 < blitter> oh good point. hm
17:53 < defor> why not osx 1.x :P
17:53 < MrBIOS> heh
17:53 < MrBIOS> RAM? :)
17:54 < defor> 16gb or whatever is enough right?
17:54 < MrBIOS> It's too bad the ROM slot in the Pippin can't be abused to add more RAM that way
17:54 < defor> i think there's a 32mb expansion that was made
17:54 < defor> i guess... someone should investigate the ram modules :D
17:54 < blitter> the pinout to those is known too
17:55 < defor> is it?
17:55 < defor> i havent seen that one
17:55 < blitter> some japanese hackers long ago built a homemade 16 meg module
17:55 < tdiaz> There is a larger RAM module. Someone made it, I think the PCB is posted somewhere.
17:55 < MrBIOS> blitter, tdiaz where?
17:55 < defor> oooooh
17:55 < blitter> the pippin itself supports no more than 37 megs of system RAM total, though
17:55 < blitter> so, OS X is out
17:56 < tdiaz> Yeah, onboard RAM plus 32MB.
17:57 < blitter> I own a bandai-produced 16 meg module, but it's sitting in scotland right now since i had to have it shipped to a friend, and he has yet to send it to me
17:57 < blitter> the 37 meg limitation is due to how aspen works. it's not something you can fix with a "we'll just build a better module!" nope
17:58 < MrBIOS> 37 megabytes was a shit-ton of RAM for a console in 1996
17:59 < blitter> it's already a shit-ton of RAM for a ppc603 :P
17:59 < MrBIOS> yep
17:59 < blitter> anyway, when I get that 16 meg module, i'll attempt to boot os 8.5 on it
17:59 < blitter> I have successfully booted all the way up to 8.1 to the finder on the pippin
18:00 < blitter> but caveat: when i say "booted" that's all i mean
18:00 < tdiaz> I've got 8MB modules only.
18:00 < MrBIOS> tdiaz how many?
18:00 < tdiaz> 2.
18:00 < blitter> the pippin gets really ornery past system 7.5
18:00 < tdiaz> Yeah, I put 8.1 on.. and ... well. Yeah, I did that. Nothing else.
18:01 < defor> how do we know the 32mb limit?
18:01 < blitter> i'm telling you :)
18:01 < tdiaz> Because there's a 32MB limit :)
18:01 < tdiaz> What he says.
18:01 < defor> ok...
18:02 < MrBIOS> defor it's a limitation of the glue logic itself.
18:02 < tdiaz> They told me that too, back in the day. 32MB on that connector, that's it. I wanted to get larger RAM.
18:02 < blitter> yah the memory controller has three banks wired for RAM. the first bank is hardwired to 6 megs (1 VRAM + 5 system), and the other two are 16 meg banks exposed to the module connector
18:03 < MrBIOS> tdiaz to date, I've had no luck finding a modern supplier of the RAM-module-side high density board-to-board connector used in the Pippin RAM module. Do you know anything about it?
18:04 < tdiaz> Back then, I did find some. They weren't that available then though. Vendor wanted $4/each and I just laughed.
18:08 < tdiaz> https://retrostuff.org/2019/11/24/pippin-atmark-adb-adapter-dongles/
18:08 < tdiaz> ... wonder what OSX would see that as..
18:09 < tdiaz> Probably gets masked by the ADB adapter as a USB HID anyway.
18:09 < blitter> the applejack controller speaks its own protocol
18:10 < blitter> os x wouldn't know what to do with it other than use it as a mouse
18:10 < tdiaz> I'd like to get a white shell set. I've got several black ones..
18:19 < MrBIOS> tdiaz external plastics?
18:25 < tdiaz> Yes, external plastics.
18:29 < tdiaz> The prototype PiPPiN is basically a 6100 with a video card soldered on the board. It's got the PDM Mac board in it.
18:29 < tdiaz> I have the motherboard, though it's in 6100 configuration.
18:31 < MrBIOS> tdiaz are there photos of that anywhere?
18:32 < tdiaz> Should be.. it's in a desktop PC generic like mini chassis.
18:32 < tdiaz> https://pippin.fandom.com/wiki/Pippin_Concept_Prototype
18:34 < MrBIOS> tdiaz thanks. do you know why they decided to have audio in on the production Pippin?
18:34 < MrBIOS> there has to have been some rationale
18:37 < defor> fore recording
18:37 < defor> iirc you can record messages with one of the email things
18:37 < tdiaz> The PlainTalk mic.. I believe, fits in there.
18:38 < tdiaz> Yeah, something let you attach audio bits.
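The bank layout blitter describes above (one hardwired 6MB onboard bank shared IIci-style between video and system, plus two 16MB banks on the RAM module connector) can be sketched as a quick back-of-the-envelope calculation. The constants and the 37MB ceiling come straight from the chat, not from any official Aspen documentation, so treat this as an illustration of the arithmetic rather than a verified memory map:

```python
# Sketch of the Pippin ("Aspen") memory banking as described in the chat.
# Figures are from the conversation above, not from official docs.

ONBOARD_BANK_MB = 6    # hardwired bank: 1 MB video + 5 MB system, IIci-style
ONBOARD_VIDEO_MB = 1
MODULE_BANK_MB = 16    # per the chat, two 16 MB banks on the module connector
MODULE_BANKS = 2

def max_system_ram_mb(module_banks_populated: int) -> int:
    """System RAM visible to the OS for a given number of populated
    16 MB module banks (0, 1, or 2)."""
    if not 0 <= module_banks_populated <= MODULE_BANKS:
        raise ValueError("Aspen exposes at most two module banks")
    onboard_system = ONBOARD_BANK_MB - ONBOARD_VIDEO_MB  # 5 MB after video
    return onboard_system + module_banks_populated * MODULE_BANK_MB

print(max_system_ram_mb(0))  # stock unit: 5
print(max_system_ram_mb(2))  # fully populated 32 MB module: 37
```

This is why "we'll just build a bigger module" doesn't help: the ceiling falls out of the bank wiring (5 + 16 + 16 = 37MB of system RAM), matching blitter's statement that the limit isn't fixable from the module side.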

Thursday, December 31, 2020
19:23 < blitter> Pippin Kickstart 1.1 has been seeded to beta testers :D
19:23 < blitter> If all goes well, I have a release candidate in the chamber ready to close out 2020 :)
--- Day changed Fri Jan 01 2021
00:46 < tdiaz> PiPPiN kickstart? Whats this? ;-)
01:48 < tdiaz> blitter: Whatcha PiP'ing?
01:49 < tdiaz> I'm trying to gather all the PiPPiN stuff in one place so I can figure out how many complete ones I have and what other accessories.
01:51 < tdiaz> Kind of curious, how someone decided to call that disc 'TUCSON'.. Pretty sure there's not a mention of Tucson on it..


18:54 < tdiaz> blitter: The reason for the chuckle about the "tucson" disc is because .. .
18:55 < blitter> well for one, "Tuscon" is misspelled :P
18:55 < tdiaz> I didn't realize it was so "famous". spread far and wide, debated, discussed and speculated over ..
18:55 < tdiaz> Because it's my disc.
18:55 < tdiaz> I'm the one that made it.
18:55 < blitter> oh really?
18:55 < blitter> how?
18:55 < blitter> and how did you sign it?
18:55 < tdiaz> I put it together, and got it authenticated back then.
18:56 < blitter> how did it get onto the internet?
18:56 < tdiaz> That's .. what I'm kinda curious about :)
18:56 < blitter> can you prove you're the author beyond just saying so?
18:57 < tdiaz> Absolutely.
18:57 < blitter> how were you able to get it authenticated?
18:57 < blitter> where were you at the time such that you had that ability?
18:57 < blitter> is there anything on the disc that says "yep, that's tony diaz all right"
18:57 < tdiaz> All those photos on the splash screen.. I have more of the same aircraft. I can tell you where each one was, I was flying that plane.
18:58 < tdiaz> They're 35 mm prints. I have those prints, too.
18:58 < tdiaz> I just must have not put that exact set back in the drawer at some point because I was looking for them a few nights ago.
18:58 < tdiaz> But that screenshot, there's also another rendition that does not have the PiPPiN stuff on it.
18:59 < tdiaz> I believe it says KFEst Adventure 98..
18:59 < tdiaz> Lets see if that's still buried on a web site.
18:59 < blitter> If you can furnish that, I can get that wiki article updated (I don't maintain it, but I know who does)
19:00 < tdiaz> I got that authenticated because when Bandai was trying to unload the remaining machines..
19:00 < tdiaz> I was trying to market them as kiosk / special use things, in a VAR environment.
19:01 < tdiaz> So I threw that disc together to show that it's basically very easy to setup something like Hypercard, HyperStudio, etc.
19:02 < tdiaz> Using standard System 7.6 and 7.5.3, with just the PiPPiN enabler.
19:02 < tdiaz> Then I have a picture of me outside of FedEx with about 20 of them on a flat to be brought in, as I was sending units to various places as samples.
19:03 < tdiaz> In the end, Bandai decided to write it off and we had to disassemble / get a certificate of destruction for them.
19:03 < tdiaz> ..and retain the ROM cards, give them back, with holes drilled through them.
19:04 < tdiaz> The Wikipedia stuff about the stock going back to Japan, etc ..
19:04 < tdiaz> It's all bullshit. It was about 47,500 or so units, out of a 50,000 run for North America.
19:05 < tdiaz> We sold the SCSI drives and reclaimed chips off the motherboards.
19:05 < blitter> just wanna nitpick, but Tuscon runs System 7.5.2 with the 7.5.3 Finder
19:05 < tdiaz> The koala pad keyboard thing and the AppleJacks, sadly, they ended up mostly getting chewed up. People were not buying them.
19:05 < blitter> System 7.6 doesn't run well on Pippin really at all
19:06 < tdiaz> I know it doesn't. But I did get a volume authenticated .. to see just how bad.
19:06 < blitter> oh I see
19:07 < blitter> yeah I've booted up to OS 8.1. They all boot, but the experience gets worse the further you get past 7.5.2
19:09 < tdiaz> ..and yes, you can nitpick. It's my disc. I know, it sounds like a pie in the sky claim... I'm not sure how it got out, I do know I have given it in the past to a few people, and I just hope it's the "right" one that got out ..
19:09 < tdiaz> But even then, I don't think it matters anymore. ;-)
19:09 < tdiaz> Right one, meaning I went and edited the ISO raw and whacked some bytes.
19:10 < tdiaz> But it should be setup to do a dialup PPP connection in the 760 area code.
19:11 < tdiaz> Using OpenTransport, IIRC. I'm gonna get the PiPPiN stuff out..
19:11 < tdiaz> Set one up on the desk now that I have a free VGA port on a monitor.
19:11 < tdiaz> Have a look here:
19:11 < tdiaz>

19:12 < blitter> does mainstreet.net mean anything to you?
19:13 < tdiaz> There's some random stuff in there too, but a fair amount of PiPPiN stuff, and of a tray of CDs..
19:14 < tdiaz> Mainstreet.net, Right off, I can't say for absolute, but there was a guy in town here that had a BBS called Mainstreet Data and evolved it to the web back in the PageMill era, as a seller of hardware and modems.
19:14 < tdiaz> So I'm kinda muddy on the thought, because of the similarity.
19:15 < tdiaz> I don't think that was the PPP dialup I set up on there, but I think that might be a ConnectNet or SBC/Prodigy pre Y! era dialup.
19:16 < blitter> yeah there's no PPP on here but there are some files for the @WORLD Dialer
19:16 < tdiaz> Of course, the old website won't connect..
19:16 < blitter> does alltech electronics mean anything to you?
19:16 < tdiaz> Yes.
19:16 < tdiaz> allelec.com
19:17 < blitter> it's in the netscape bookmarks ;)
19:17 < tdiaz> Of course it is..
19:17 < tdiaz> That was where I was when I did that stuff, though the name was changed by then to be CCC, Computer Circulation Center, but still had a dba for Alltech Electronics.
19:18 < blitter> allelec.com is a parked domain nowadays
19:18 < tdiaz> Yeah, it was some other company for a little while.. it's parked now? Hmmm,..
19:19 < tdiaz> Maybe try to snag it again .. I used to have it on a register.com account but when I quit there, they let all the domain names lapse.
19:19 < tdiaz> The two partners split, our end got renamed, but we kept the dba because we did Apple II mail order under that name, had ads in InCider/A+, Nuts 'n Volts..
19:20 < tdiaz> the different volumes are like mostly the same, but one of them I left out a whole bunch of stuff ... for a bunch of other stuff, about half the content.
19:20 < tdiaz> The Chrystar Demo was of particular amusement.. People were like, "wow" .. (Yeah, I know. Dots. LOL)
19:21 < blitter> where did the "Apple R&P Lib Reseller" files come from?
19:22 < blitter> fwiw the @WORLD Dialer attempts to dial 273-4213 locally
19:23 < tdiaz> The Apple Resource and Publications Library CD, Reseller edition.
19:23 < tdiaz> It's an early-era disc from around the time when they did the movie name knockoffs.
19:24 < tdiaz> Pretty bland, red and black printing, Apple garamond typeface..
19:24 < tdiaz> er, Reference and Presentation Library.
19:27 < tdiaz> https://docs.google.com/spreadsheets/d/1RPBnIMIfUwpQFdhxy04YY7QHicrW45rOWRV9Jew5Kck/edit?usp=sharing
19:27 < tdiaz> Those are the discs I have from that era.
19:37 < tdiaz> The photos on the splash screen are Gallup, Roswell NM, Dodge City KS. Johnson County Executive Airport and Avila University (Then Avila College) Wornall Rd, KC MO.
19:37 < tdiaz> Right near the KS border, I-435 and Wornall Rd, at about 120th st.
19:38 < tdiaz> The core of the campus looks like that today, but it's got more parking lot and extra buildings around it.
19:40 < blitter> was Alltech a Pippin reseller, in addition to DayStar? wondering how you were left responsible for marketing and reselling unsold units
19:40 < blitter> how, and why you / where you were
19:42 < tdiaz> No, we were a surplus reseller.
19:42 < tdiaz> Bandai came to us with the Motorola 56K external modems.
19:43 < tdiaz> .. and I questioned the @World silkscreened on them.
19:44 < tdiaz> ..and they said "yeah, they're for .. but we never made more systems.." got to talking about that and they were unsure what to do with them and we asked about surplus selling them out, or what not.
19:44 < tdiaz> I know, the whole thing sounds really out there.
19:44 < blitter> can you recall roughly the timeline?
19:44 < blitter> may 1998?
19:44 < tdiaz> Late 1998, early 1999.
19:44 < blitter> your tuscon disc suggests it was mastered on june 7 98
19:45 < tdiaz> It was over by late 1998 pretty much, but yeah, the disc was done in the Summer.
19:47 < tdiaz> One of the initial questions was "what about Apple?" .. and this was 100% Bandai's call as to what to do with them.
19:47 < blitter> I'm just figuring you wouldn't have put together the disc until you felt a need to market a surplus of units ;)
19:47 < blitter> so Bandai must have approached you folks no later than early June
19:50 < blitter> oh woops, i misspoke
19:51 < blitter> the latest timestamp i can find on your tuscon disc is from mid-sept 1998
19:51 < tdiaz> The splash page graphics are from July 1998.
19:52 < tdiaz> The conversation started before I went on that summer vacation. I had one unit with me at that time. I'd taken it to KansasFest.
19:52 < tdiaz> http://17500mph.com/PiPPiN/Pippin-SSW-7-5-2a3.iso
19:53 < tdiaz> I don't think that's authenticated, but it should be a plain system boot of the same stuff that's on there.
19:54 < tdiaz> I don't remember getting such a small volume authenticated.. Though I do remember the drive being 650MB and a small partition, so it's possible.
19:54 < blitter> that 7.5.2a3 disc only boots a particular early Pippin ROM
19:54 < blitter> it doesn't boot any retail models
19:55 < tdiaz> Has that one been around too?
19:55 < blitter> I found it on macintoshrepository a year ago
19:55 < blitter> don't know where they got it
19:56 < blitter> but yeah, that's the only thing that I've found boots the prototype "Disco" ROM from 1995
19:56 < blitter> retail titles bomb on startup with that ROM, so something substantial must have changed between Pippin OS 7.5.2a3 and final
19:58 < tdiaz> Refresh the directory, is that disk image out?
19:59 < blitter> I see two images
19:59 < blitter> PiPPNROMUpdate.dmg and Pippin-SSW-7-5-2a3.iso
20:03 < tdiaz> Yes, the ROM update dmg.
20:04 < tdiaz> Here's the tray of discs, that KAO disk is one of those authenticated ones.
20:05 < tdiaz> I'm gonna dig that stuff out in a little bit, there's already a pile of the stuff on my floor.
20:05 < tdiaz> Also, near that image in the shared album is some retail boxed units.
20:06 < tdiaz> The outer boxes are brown, with the inner box being the retail, they have those red / white fragile square stickers on them.
20:06 < tdiaz> I have some more photos to scan .. including some of the ones of that same aircraft from the same roll.
20:07 < tdiaz> N15562 Piper Arrow 1972, I think it's at some rental place in Ohio now, and it's red/white instead of blue/white but the panel looks the same except the radios are updated.
20:10 < tdiaz> defor: IIRC, that PiPPiN ROM update disc is on that URL above.
20:10 < blitter> how pippin discs get authenticated is mostly a historical curiosity to me at this point tdiaz
20:10 < blitter> pippin kickstart defeats that whole thing :)
20:11 < blitter> i figured out how it all works in may 2019
20:11 < tdiaz> I know.
20:11 < blitter> ah ok
20:11 < blitter> just checking since you asked about pippin kickstart earlier
20:11 < tdiaz> You figured out how it all works back then, we've talked about this .. here.
20:11 < tdiaz> I just didn't realize you'd actually posted a disc.
20:11 < tdiaz> I remember when you wrote the article about decoding it.
20:12 < blitter> yeah 1.0.1 has been out for a while, and I thought I was done with it, but then somebody on twitter complained to me that they couldn't boot their white pippin from hard drive. so i went and figured out why *that* is
20:12 < blitter> and now i have a new 1.1 ready to release as soon as they test and verify that i have fixed that as well
20:14 < blitter> is this PiPPiN ROM Upgrade CD just a video presentation on how to service the Pippin hardware on a bench?
20:14 < blitter> or does it actually flash a ROM board
20:14 < blitter> because I know tools to do the latter exist
20:14 < blitter> I don't have them though
20:21 < tdiaz> That .dmg has tech notes and sample code on it.
20:21 < tdiaz> short myVRefNum,
20:21 < tdiaz> psRefNum,
20:21 < tdiaz> myFileRefNum;
20:21 < tdiaz> long myDirID,
20:21 < tdiaz> count = 100;
20:21 < tdiaz> char readBuffPtr[100],
20:21 < tdiaz> pointerToDataToWrite[] = "abcdefg0123456789",
20:21 < tdiaz> pointerForDataToRead[50];
20:21 < tdiaz> Str255 fileName = "\pFlash Sample Prefs";
20:21 < tdiaz> OSType fileCreator = 'Flsh',
20:21 < tdiaz> fileType = 'TEXT';
20:21 < tdiaz> FSSpec spec;
20:21 < blitter> yeah but nothing newer than what's on the 11/96 SDK
20:21 < tdiaz> OSErr myErr;
20:21 < tdiaz>
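The declarations tdiaz pasted above are the setup block from the flash-storage sample code on that update disc. As a rough illustration of what a prefs-file sample like that does, here is a hypothetical portable-C stand-in: plain stdio in place of the classic Mac OS File Manager calls the original would have made, with the helper names invented and the buffer names borrowed from the paste.

```c
/* Hypothetical stand-in for the SDK's flash prefs sample.
   The original used FSSpec/File Manager calls; plain stdio here. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Write the sample string to a prefs file; return 0 on success. */
static int write_prefs(const char *path) {
    const char pointerToDataToWrite[] = "abcdefg0123456789"; /* name from the paste */
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t n = fwrite(pointerToDataToWrite, 1, strlen(pointerToDataToWrite), f);
    fclose(f);
    return n == strlen(pointerToDataToWrite) ? 0 : -1;
}

/* Read the prefs back into a caller-supplied buffer; return bytes read. */
static long read_prefs(const char *path, char *pointerForDataToRead, size_t cap) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    long count = (long)fread(pointerForDataToRead, 1, cap - 1, f);
    fclose(f);
    if (count >= 0) pointerForDataToRead[count] = '\0';
    return count;
}
```

On a real Pippin the same open/write/read pattern would presumably target the flash storage volume rather than an ordinary disk file.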
20:23 < tdiaz> ... well, there's the 1999 follow up to that graphic the splash screen is based on.
20:24 < blitter> the PiPPiNROMUpdate.dmg isn't authenticated btw
20:24 < blitter> but it has a Pippin System Folder
20:24 < blitter> so you'd have to use a dongle / kickstart
20:25 < tdiaz> Yeah, that one was definitely from the developer seed era


20:58 < tdiaz> blitter:

20:58 < tdiaz> There's some similar photos, including the two aircraft that are on the splash screen.
20:58 < defor> heh
20:59 < tdiaz> The 1958 Dodge with the flat and trunk full of PiPPiN boxes is me at FedEx dropping off a bunch.

Monday, January 4, 2021
22:58 < blitter> tdiaz: are you on Twitter? mind if I post a thread about what you've told me in here re: the Tuscon disc? just so a wiki has a "source" to link to
22:59 < blitter> I found an Apple IIgs hard drive image on it full of goodies, including an alpha build of a game called "Turkey Shoot" that I suppose somebody built just for you, since it says "For Tony Diaz" in the archive :)
--- Day changed Tue Jan 05 2021
00:41 < tdiaz> blitter: @y816
00:42 < tdiaz> lol, Turkey Shoot GS.. that was... an interesting era.
00:49 < tdiaz> LOL, Tucson.. I wonder who actually coined that title for it.
00:49 < tdiaz> In-N-Out is calling.
00:49 < blitter> "Tuscon" -- it's misspelled, even
00:50 < tdiaz> Yeah, I know. I can't seem to get past the auto correct fast enough.
00:52 < jjuran> Tuscon raiders
00:57 < blitter> tdiaz: writing a thread to set the record straight :)
00:57 < blitter> you don't mind me "outing" you for putting this disc together?
01:03 < tdiaz> At this point.. I don't care..
01:03 < tdiaz> I think it's hilarious.
01:06 < tdiaz> blitter: If I had known it'd be so .. umm.. well received? ... I'd have done a better job. LOL.
01:06 < tdiaz> Who knew.
01:30 < blitter> https://twitter.com/ablitter/status/1346358277404180481
01:30 < blitter> I set you up to respond however you wish :)
03:44 < tdiaz> Hmmm... I wonder if RAM can be double stacked on the PiPPiN motherboard. S'pose the schematic would help, to see if, or more likely where, the select lines might be.
03:44 < dtm> blitter: what is the nature of this Tuscon disk?
03:44 < tdiaz> Make a PCB that has sockets on it upside down.
03:45 < dtm> tdiaz: you was bein a mac bro
03:46 < tdiaz> LOL.. That's it, I'm gonna call that disc Tuscon Loader in a starcastic wars sense..
03:48 < jjuran> :-)
03:48 < tdiaz> I wanna see who coined that term.. What caused 'em to call it that.
03:49 < jjuran> Get your Tuscon :-)
03:50 < tdiaz> "Get your tusc-on!"
04:25 -!- amdamd [~tdiaz@ip68-101-252-59.sd.sd.cox.net] has joined #mac68k
04:25 < tdiaz> That one is mine
04:26 < dtm> tdiaz: what is it
04:26 < tdiaz> alternate fallback machine, for when I'm doing something with this one, like actually updating macOS.
04:49 < dtm> tdiaz: https://web.archive.org/web/20080820072239/http://www.hypermall.com/~tdiaz oops too late
04:50 < dtm> tdiaz: brah i'm askin what the Tuscon hard drive is. what does that mean
04:50 < amdamd> What are you mining for?
04:54 < dtm> tdiaz: ohhh i finished scrolling back. lol. ok so it's a demo cd? from where, from Apple or from you? why are your files on a cd?
05:11 < dtm> wow this whole site is utterly frozen in time. i wonder what happened. http://www.merlinmedia.com/cdrom_portfolio/pippin.html
05:13 < dtm> and they misspelled Apple Coputer.
05:39 < amdamd> My files are on a CD .. because I put them there.
05:39 < amdamd> That disc .. is my disc. I made that disc.
05:41 < amdamd> I made it to have MacOS booted so I could show various things .. on the disc, and that it can run pretty much any MacOS application, like HyperCard/Studio for kiosk type modes/uses.
05:43 < amdamd> As an example, interactive kiosk and similar at museums that used HyperCard/Studio.
05:44 < amdamd> Which was pretty common in the day. Master the disc, burn it, stick it in the drive. Boom.
05:44 < amdamd> The only thing I never actually got was the authentication system.
05:46 < amdamd> Because Bandai decided to write them off and the authentication server fell through the cracks at that point.
05:47 < amdamd> But I was supposed to get it and handle authenticating discs at that point, if these things got sold as a VAR would, and support for the specific thing is via that VAR only.
05:49 < amdamd> Ultimately it was the Japan people that pulled the plug. The US based people were in for it, even if we didn't sell a lot of them at first. I doubt 47K of them would have sold, but we would basically get them all anyway, either way.
05:49 < amdamd> But in the end, they took the write off and wanted a certificate of destruction.
05:50 < amdamd> e.g. Alamogordo, NM, or Logan UT landfill like. (Atari/E.T. .. Apple/Lisa)
05:52 < amdamd> But like I've noted, we disassembled them and sold the SCSI CD drives, pulled various chips off the boards, and drilled holes in the ROM card to give them back.
05:55 < amdamd> Though the ROM cards ended up having the edge connector chopped off instead, as that was simpler, and once they saw them all chopped off and got their certification on paper, we sent those for reclamation too.
05:56 < amdamd> The guillotine process usually cracked the plastic package on at least the two ROMs closest to the edge connector, if not all four of them.
05:57 < amdamd> Like I've said, this whole thing came to be .. as they had offered us bulk packed Motorola 56K external modems with the Macintosh cable.
05:58 < amdamd> Vendors, hawkers, whatever, would send samples to various places like us that would buy excess, surplus electronics, inventory, etc.. for everything from resale to destruction.
06:00 < amdamd> The things were black instead of beige/white as they were custom made for the PiPPiN. The thing initially was to be packaged with a smaller modem that was about the size of a pack of cigarettes.
06:01 < amdamd> It was just a 9600/14.4. They changed over to these 56K Motorola ones at the last minute, and they had twice as many as they completed units for sale.
06:03 < amdamd> They offered those first. The vendor we usually end up duking it out with over offers poo-pooed the modems in an effort to get them totally cheap, citing that since they came with the Macintosh cable, they were Mac-specific and would not be able to be used with anything else.
06:03 < amdamd> LOL. :) Right. Yes, they had slightly different onboard firmware where hardware handshake was default for the Mac serial port.
06:03 < amdamd> BFD.
06:05 < amdamd> That crap was typically done by the highest bid gets the deal, or other factors like how fast they'd pay for it, etc. Money talks, BS walks. We pretty much always had the checkbook out, ready to go.
06:07 < amdamd> So .. the offer was we'll beat whoever's offer by a certain amount per unit, unequivocally. Because we knew what kinds of offers they'd get.
06:09 < amdamd> In the process of talking about that, and mentioning the @world markings.. "Yeah, these were supposed to be packaged with a later run of PiPPiN consoles, but the platform didn't do well, so we never did another run, and we have most of them left as it is."
06:11 < amdamd> "Oh?" .. "what's the plan?" .. got to talking about it, they weren't opposed too letting them sell for whatever kind of use, as in we'd have to make RAM cards to make them useful, get the injection molds for the dock and put a floppy and/or Ethernet dock together. We had a constant supply of 1.44MB auto insert drives.
06:12 < amdamd> So for school labs. Even the option of punching the back panel for a SCSI connector and including a drive, as a cheap Mac system for schools.
06:14 < amdamd> So .. chop chop.
06:15 < amdamd> Got the modems out of the packed consoles, and the Matsushita 4X with Apple firmware (basically, CD-600i). Those two items were the money products, resale-wise.
06:16 < amdamd> The power supplies got sold as a DC switching PSU. The 1/2 AA batteries .. hell, whored those things out too for a couple bucks, free with various orders of Mac / Apple IIgs stuff.
06:18 < amdamd> Because of the cable end on the AppleJack controller, chopping them off and changing to the 4 pin mini-din was just not feasible to be done by hand.
06:19 < amdamd> ..and to get cables made from Taiwan, needed to order a minimum of 50K pieces.. There just wasn't interest in the controllers or the keyboards with the pen tablet, though IIRC, the same retrofit cable could have been used.
06:22 < amdamd> So those ultimately became shredder food. The metal chassis went to a scrap metal yard nearby, the motherboards got various chips pulled: the RAM and CPUs, Zilog SCC, and whatever else fell off during that process after those things we did want were pulled.
06:23 < amdamd> Aluminum heatsinks went to that scrap, and the small little fans.. well, we had small fans for x86 heatsinks for quite a long time.
06:24 < amdamd> Nothing went "back" to Japan, no plastics were changed, nothing reboxed, and all that jazz that various wiki sites say happened. The North America allotment of PiPPiNs were torn down.
06:25 < amdamd> Microsoft got Bungie, and that was it. Like the Newton, Apple was slightly ahead of its time, and the product was too dumbed down.
06:26 < amdamd> They didn't want the PiPPiN licensees to sell the systems as competitors to the Macintosh. That's why the thing didn't use standard SIMMs either.
06:28 < amdamd> But by the time this all was happening, there wasn't any threat to competing with Mac desktops with these things, and that was why they researched niche markets for them, before ultimately saying screw it
06:29 < amdamd> When the PiPPiN was announced, I could have never imagined this whole thing happening like that. Not in the slightest. So weird.. of an era.
06:30 < amdamd> Kind of like the Video Game crash of 1982 in some ways. Only to be repeated in 2001/02 with the whole Internet Appliance thing. Sony eVilla, anyone?
06:32 < amdamd> IoT 15 years too early, and too stupidly priced considering you could get a Windows 98 thing that did all the same stuff for less than the stupid dedicated, locked down, IA device.
06:33 < amdamd> But Sony didn't blow their idea. The PlayStation Network .. and what we have now with all the consoles and subscriptions.
06:33 < amdamd> Ah well.. 'nuff rambling.
06:33 < amdamd> Nearly 23 years ago.
06:36 < amdamd> If we were in the desert and real estate / buildings were dirt cheap, I can see a lot of this stuff having been just stashed. Like the nearly 2,000 Bell & Howell Apple II's unloaded from Broward County Schools, and 4,000 or so, basically, the last IIgs systems Apple had, picked up in Winter of 1993 from three different auctions on the east coast.
06:37 < amdamd> Those black A2's got sold in pieces, the cases got flattened. I so wanted to figure a way to stash them. But the boards, keyboards, power supplies as separate items were way more in demand than the whole machine.
06:39 < amdamd> Came down to if you bought a motherboard and PSU, the bottom pan would get shipped with the two things attached. Keyboard? Want the case too? $15 bucks more. All three? You get the case free.
06:39 < amdamd> So weird.
06:40 < tdiaz> OTOH, that CD that is mentioned on the Frozen in Time web site sounds interesting. Kinda similar to what I made maybe. Maybe it's out there somewhere, too.
06:42 < tdiaz> I sent about 30 units overall as samples to various places for possible interest. I did include the disc.. who knows. Maybe one of those units and that disc surfaced after a bunch of years and someone was like "Ooh, what's this?!?!"
06:45 < tdiaz> I believe the hard drive I had used to master those things is in one of the boxes I have here, and there are variants of that CD, so I need to look and see if there's anything on them that I don't want out there and zero out bytes in an ISO if so.
06:45 < tdiaz> It doesn't check the whole disc against the authentication process. That would take forever.
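tdiaz's remark that the authentication doesn't check the whole disc (that would take forever) suggests a spot-check design: keep a table of per-chunk digests and verify only a sampled subset at boot. The sketch below is a toy illustration of that idea only, not the actual Pippin scheme; the chunk size, the FNV-1a stand-in digest, and the sampling count are all assumptions, and a real system would use a cryptographic hash and sign the digest table.

```c
/* Toy illustration of spot-check disc authentication (NOT the real
   Pippin scheme): store a digest per chunk, then verify only a few
   randomly chosen chunks instead of hashing the whole disc. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define CHUNK_SIZE 4096u  /* small for the demo; a real unit would be larger */

/* FNV-1a over one chunk -- a stand-in for a cryptographic hash. */
static uint32_t chunk_digest(const uint8_t *p, size_t len) {
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 16777619u; }
    return h;
}

/* Fill table[i] with the digest of chunk i (done once, at mastering time). */
static void build_table(const uint8_t *disc, size_t nchunks, uint32_t *table) {
    for (size_t i = 0; i < nchunks; i++)
        table[i] = chunk_digest(disc + i * CHUNK_SIZE, CHUNK_SIZE);
}

/* Verify `samples` randomly chosen chunks; 1 = all matched, 0 = mismatch. */
static int spot_check(const uint8_t *disc, size_t nchunks,
                      const uint32_t *table, int samples) {
    for (int s = 0; s < samples; s++) {
        size_t i = (size_t)rand() % nchunks;
        if (chunk_digest(disc + i * CHUNK_SIZE, CHUNK_SIZE) != table[i])
            return 0;
    }
    return 1;
}
```

Checking a fixed number of random chunks bounds startup cost regardless of disc size, at the price of a probabilistic rather than exhaustive tamper guarantee.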
06:55 < amdamd> LOL. Blast ][ the Past: https://apple2online.com/web_documents/apple_ii_faqs.pdf
06:56 < amdamd> Was wondering if any usenet archives had the SCSI drives being sold mentioned, didn't find any.
06:56 < amdamd> but there's Alltech and Apple II stuff in that.

Saturday, January 23, 2021
01:04 < amdamd> if anyone here didn't see it, and is PiPPiNterested...
01:04 < amdamd> https://twitter.com/y816/status/1352348523061149698
01:05 < blitter> i'd really love an image of that green pippin disc :)
01:06 < amdamd> It's not in the wild I guess.. people have photos of it.. Hmmm..
01:06 < blitter> the only pippin sdk disc in the wild that I know is the 11/96 disc
01:06 < blitter> I know someone with a DR 0 SDK disc. sealed
01:07 < blitter> he keeps asking me if he should break the seal to image it and put it up on the 'net
01:08 < amdamd> F. that sealed BS. Who cares. It's not gonna do anything to its value; it's not a limited edition Beetlethoven or something.
01:08 < blitter> that's what i said
01:08 < blitter> i said the market for pippin developer collectibles is infinitesimally small :P
01:09 < amdamd> -Anyone- -At- -All- that is going to pay -Any- money for that ... is going to absolutely open it.
01:10 < amdamd> Absoositively sure that it Would. Not. Even. Be. A. Pico. Flinch. of a difference.
01:10 < amdamd> The only thing, "is it scratched? No." back to the above.
01:11 < amdamd> "Absopitivlylutely. =
01:12 < amdamd> 'was supposed to the the word
01:14 < amdamd> Now, I do wonder.. if we'd gone this far and that disc never surfaced until now.
01:14 < amdamd> Hopefully it spurred some interest all along ..
01:16 < amdamd> For the longest time I never thought a thing about it because people keep telling me that the units out there, many don't check the disc anymore, that the thing had been removed towards the end of the product line.
01:19 < amdamd> but based on all the conversations I've had with them back then, I was to get the actual Mac LC or IIsi, whatever they had it running on, if a product transfer had materialized. Which basically was going to be: they wanted some X amount for each, with a percentage up front (for X units), but we would take possession of all of it.
01:21 < amdamd> That meant all support, the inside the box paperwork would have to reflect that Bandai is not responsible for any support, any claims, any whatever for this product line.
01:21 < amdamd> On the serial # range we had, that is.
01:22 < amdamd> Whatever was out there via retail would have continued however that was already going.
01:24 < amdamd> The legal dept. said that Bandai was 100% stake holder as to what these could be released as, as long as they did not try and compete with a current Macintosh product line.
01:25 < amdamd> As in they're pretty low performers compared to the Macintosh at that point in the game. So.. it didn't matter if we went niche/VAR with them as MacOS compatible, for like an embedded system.
01:27 < amdamd> Which actually would have meant that the outer plastic would have been removed and I was going to make a small PCB that would fit somewhere on the metal chassis, for two Mini-Din 4 ADBs.
01:28 < amdamd> "You assemble/modify it, here's the recommendation", or we do it for an added price.
01:29 < amdamd> The real sticking point was the RAM module. Which pretty much was "we're going to have to manufacture one" to include with every unit.
01:32 < amdamd> The out-the-door price was targeted to be where someone would spend just over half of what the MSRP was to get them, if they were taking more than a set number of units; anything else was that base price, and we'd bundle whatever accessories we wanted and a-la-carte the rest of the stuff.
01:35 < amdamd> Would have basically been the three inner sub packages removed from the glossy retail box, any bandai logos masked off. A common thing back then, a rig like a small cabinet with a flap that has knock outs. Stack in X amount of boxes and hit it with specially made spray for corrugated board.
01:36 < amdamd> Which the stuff worked the best on brown, to where you'd not see it blatantly.
01:37 < amdamd> On bleached corrugated, if you wanted an absolute close to clean look doing that, you'd likely spray the brown, let it dry and then cover it with flat white.
01:41 < amdamd> So, the three internal boxes, or however, if we just made a die set and got new foam inserts altogether. But it would be a whole hell of a lot more cost effective to have just used the existing interior packaging with the exterior carton and four blank sheets, or an inner sleeve to drop into the box to make up for the thickness of the retail box, so it would retain the rigidity for shipping.
01:44 < amdamd> We never expected 45K+ units to all go, and based it on the initial percentage of units sold, and in increments thereafter if there was continued interest, to at one point pulling the plug on the rest of them for scrap, giving them all the ROM cards after they'd been smacked with a sledgehammer or the like.
01:45 < amdamd> The rest was ours, to do with as pleased. Chip reclamation, sell the peripherals, accessories as "whatever" ..
01:47 < amdamd> Metal chassis would have likely been scrap metal, motherboards with specified parts pulled go as bulk scrap PCB, Optical drive and Power Supplies being sold as surplus.
01:48 < amdamd> Which meant that the CD drives would have been snapped up by Mac users, Mac peripheral vendors, likely to make the ZFP style external CD product, or.. whatever.
01:50 < amdamd> The CD drives were put out for $29 each, $19 in lots of 100 or more, and they went pretty quickly.
01:52 < amdamd> They ended up opting for the deal that we'd do when demand was lower than whatever trigger we decided on, vs. paying in large blocks to sell them as a functioning product.
01:53 < amdamd> .. as in apply that from the beginning, with the ROM cards plus a certificate of destruction for write off purposes.
01:54 < amdamd> Once they saw the ROM cards piled in a gaylord box, damaged, and could dig in as far as they wanted to see that it wasn't a layer on top..
01:55 < amdamd> We take that back too and dump it in with low grade PCBs.
01:58 < amdamd> What I really suspect the deal was, since it was close to the end of the year, they just decided to finish the year with the whole thing settled. Whereas had the whole thing started in January instead of July/August, there'd have been a window for some to get out as functional units in bare packaging with no paperwork, just the internal CDs in jewel cases, plus whatever pressed titles they had left.
02:00 < amdamd> Depending on who you ask, the concept was kind of too early, or too late. The price was a show stopper, and that the thing required very unique peripherals for expansion, was ultimately what sank the product line.
02:00 < amdamd> The MSRP was, "why should I buy this?" They say it can run specific PiPPiNized releases of Mac games, but those developers would also be able to just market their game as an edition for the Macintosh, period, with an entry level system costing as little as 1/4 more MSRP to acquire.
02:53 < blitter> ^^ this belongs in a blog post ;)
02:53 < blitter> or a twitter thread

Tuesday, February 2, 2021
04:37 < amdamd> I wish I had been able to score a larger PiPPiN RAM module back when. BDE only had 8MB.
04:38 < amdamd> Kinda wonder if more can be piggybacked on the motherboard..
04:39 < amdamd> Like the Xbox developer machine had double the RAM onboard, though that's because there's pads on the PCB for it.
13:54 < blitter> amdamd: the pippin can address up to 38 megs of RAM tops
13:54 < blitter> that's a limitation of the aspen memory controller
13:55 < blitter> I've read online of some japanese hackers long ago that fabbed a homemade 32 meg board, but it's twice the size of the regular modules so just kinda hangs off the bottom of the logic board
15:27 < amdamd> blitter: well, with an Ether/Floppy dock, the hanging off isn't much of an issue if the dock base provides for the extra room.. so.. :)
15:27 * amdamd wonders what it would take to make that card ...
15:28 < amdamd> e.g., any further logic, or just a matter of more memory modules accordingly.
15:29 < amdamd> For an OSH Park PCB, and find your own era DRAMs..
15:29 < MrBIOS> amdamd so far I've been unable to even find a source of the mating connector on the RAM module board
15:29 < amdamd> Or yes, design a new one with a PLD and use some new single piece of memory.
15:29 < MrBIOS> so it's kind of a non-starter unless you can
15:30 < MrBIOS> it's a surface-mount board-to-board connector for which it appears there is no modern equivalent.
15:30 < amdamd> Of course. But that's not an absolute show stopper if something mechanical can fit on the same footprint.
15:30 < amdamd> Which, I'm sure, has been researched.
15:31 < MrBIOS> it is a show stopper as far as I am concerned.
15:32 < MrBIOS> the easiest thing to do would probably be to find a NEW connector with the same PCB-side pin pitch, which can actually be acquired, and replace the original Apple connector
15:32 < MrBIOS> and I use the term "easy" very loosely
15:32 < amdamd> But.. it's not an absolute either. A thin PCB could be set in the middle of that with routing to an alternate connector pair, and you have to use proper equipment to remove and place that thin PCB in there and flow solder from its pads to the ones on the motherboard.
15:32 < MrBIOS> sure, you can always hack something like that up
15:32 < amdamd> I mean, there's a way some how. Just depends on how far you want to go. :)
15:32 < MrBIOS> but there's rather limited space around the access hatch for the RAM on the bottom
15:33 < amdamd> Yes. See Dock comment. Leave the door off. Make a recess in the dock.
15:33 < amdamd> No dock? Make a stand off plate.
15:33 < MrBIOS> I don't think you understand
15:33 < amdamd> How far do you want to go? :) That's all.
15:34 < amdamd> I know what you said. The area that is exposed when the flap is flipped open is only so big.
15:34 < MrBIOS> correct
15:34 < amdamd> Take the stupid flap off.
15:34 < amdamd> Raise up the console.
15:34 < MrBIOS> there's sheet metal around that
15:35 < amdamd> No there's not. You take the door off and you can see straight in.
15:36 < amdamd> Make your product fit in there, even if that means you have to double stack a second PCB on a connector pair that allows it to stick far enough out.. "above" the bottom of the case..
15:37 < amdamd> If you need more room. Or make two of them the same footprint that just stack and stack. Then you have options. 16 and 16, two cards, or 16 only, one card.
15:38 < amdamd> Like the crummy thing that the original Newer Tech made for the Powerbook Duo RAM expansion .. that they never supported. Pfft.
15:38 < amdamd> Stupid thing didn't work on the 2300.. while everyone else's did. WTF people?

Tuesday, February 9, 2021
21:31 < tdiaz> ... just now realized that a couple of them used different PRAM battery configurations.
21:34 < tdiaz> Looks like the battery being accessible by only removing the top case vs. having to remove all of it and get inside of the cage.
21:36 < blitter> sort of
21:37 < blitter> the 1.0 board has a removable battery, but you have to remove the sheet metal cage to get at it
21:37 < blitter> the 1.2 board has a little cubby accessible just by removing the top plastic case... but the battery is soldered with little wires to the logic board >:(
21:37 < tdiaz> Where as a later one has the battery in the side of the sheet metal connected via wir...that
21:40 < tdiaz> The 1.2 has a 'nicer' plastic shield over the slot connector connections.
21:44 < tdiaz> I have a handful of the cups for the battery.
22:38 < tdiaz> The way the PiPPiN case is made, they certainly do show wear easily.
22:38 < tdiaz> The controller is better, but the matte, semi not smoothed surfaces on the rest suck.

Wednesday, February 10, 2021
18:10 < elliot> blitter: Hard to tear my eyes away from the Kickstart padlock animation. It makes me smile!
19:11 < tdiaz> :)

Tuesday, March 9, 2021
19:08 < tdiaz> blitter: Just in case there's any possible remaining doubt on the 'Tuscon' disk:
19:09 < tdiaz> When we had to pay 30 cents a piece .. photos of photos of related pictures of some of the stuff on the splash screen.
19:09 < blitter> yeah lmk if you ever find any of the originals
19:09 < tdiaz> Because just getting the actual graphic is still not quite enough. The same plane, instrument panel..
19:11 < tdiaz> That's the funny part. That's probably why those prints are not in this set. They're somewhere else because I scanned them.
19:13 < tdiaz> The middle one with the batwing overlaid into it is Avila College (now University) off Wornall Rd, just south of the I-435 loop, right about the middle.
19:14 < tdiaz> The place has more buildings now, but that's the aerial photo I got when I then buzzed the place to say "I'm here"
19:14 < tdiaz> before going to the airport nearish.
19:16 < tdiaz> The negative strip in there isn't these either, but I'm sure it's in one of the envelopes. I tended to sometimes.. shove the stack back into an empty one sitting on the table, in poor form. :)
19:18 < tdiaz> Scan the negative, get original of original.
19:18 < tdiaz> Stupid browser is being cranky today.

Saturday, April 10, 2021
21:32 < blitter> tdiaz: it occurred to me over the past couple of days-- what was the process to get the "Tuscon" disc signed? Did you send Bandai a master CD and they sent a signed CD back to you?
21:32 < blitter> Did they email you a file that you had to burn onto disc yourself?
21:33 < blitter> did you send them a hard drive with a writable "Pippin Netscape" partition already on it?
21:58 < tdiaz> blitter: A guy took the drive from me, and brought it back the same day. I just burned the drive as a disc.
21:59 < tdiaz> I had a drive with 650MB partition and whatever was left for utilities and a separate system folder.
22:00 < tdiaz> Bandai was pretty close. The consoles were at the same facility.
22:01 < tdiaz> What sucked is .. he kinda disappeared right near the end of that .. and was going to bring me that machine.
22:02 < tdiaz> The docs talk about a process where it contacts a server at Apple and does some exchanging before signing it.
22:02 < tdiaz> But in this case it was just self contained on a IIsi.
22:04 < tdiaz> When the decision was made to scrap them, he bailed. But it was basically going to be Bandai transferring the whole product and what they had to me/us.
22:05 < tdiaz> They were just not going to mess with it at all since it had been long enough and any obligations they had to purchased units was past.
22:06 < tdiaz> The only thing I wasn't clear on is if JP was a different division.
22:07 < tdiaz> But .. ya, he took the hard drive, brought it back and I saved a disk image with Disk Utility and Toast before doing anything else.
22:08 < tdiaz> I remember screwing around with it trying to see just how much I could mess with before it wouldn't pass again.
22:08 < tdiaz> Like messing with stuff in sub directories, files with same or different names, etc.
23:08 < blitter> tdiaz: ah, thanks for the details (glad you remember)! so basically, you handed them a hard drive, they massaged the partition for you, then were you given any instruction after that? like "only burn this, don't mess with it or you'll undo what we just did"
23:09 < blitter> you didn't have to perform any additional "signing" work on the drive after you received it back? just burned it?
23:13 < tdiaz> Basically, yeah. Make an image of this before you do anything.
23:14 < blitter> did your Bandai guy tell you that? or was it part of some official document somewhere
23:14 < tdiaz> He told me thats what he's doing.
23:15 < blitter> got it. thanks again for the details-- Apple's own docs are vague and suggest that Bandai emailed devs the authentication file, which doesn't make any sense to me because just having the file isn't enough; the MDB has to be poked in a particular place too, which you can't do without special tools
23:15 < tdiaz> Because I inquired as to the process as I know I would need to be able to make various discs and the intention was to make RAM cards and 7.6.1 environments.
23:15 < blitter> so I'm trying to get a sense of how the auth process really worked
23:16 < tdiaz> Yeah, the official process with Apple in the loop was that the vendor would have a machine and Apple had an authentication server that the OEM would work with via WAN.
23:17 < blitter> I see, that makes some sense. The vendor maybe got emailed an auth file from Apple, then applied it to the image
23:17 < tdiaz> But in the end, only Bandai licensed stuff.
23:17 < blitter> yeah Katz units didn't use auth at all, so no need there
23:17 < tdiaz> Yeah, though I got the impression that the file was transferred as part of that process.
23:18 < tdiaz> ..and then you'd do the image of the drive.
23:18 < blitter> right yeah that makes sense. I think it'd make sense if whatever signing tool would lock the partition after it was done, just to be safe
23:18 < blitter> I'm guessing that didn't happen? You didn't unlock it manually after making your image?
23:19 < tdiaz> No, I didn't have to unlock.
23:19 < blitter> fascinating
23:20 < tdiaz> Like I noted, it was handing over the whole thing, remaining stock, and do what you're gonna do with them. There were no limits with Apple as to what they can't do marketing wise .. and even if there was, it was two years out by then. Wasn't gonna bite into anything Apple was doing by then.
23:21 < blitter> so just trying to get it straight in my head-- by the time you entered the picture, had Bandai retrieved the signing server (the IIsi) from Apple? had Apple completely divested itself by that point and the ball was 100% in Bandai's court?
23:21 < tdiaz> Wasn't a license transfer, just an inventory and support hardware / tools to be able to support the hardware.
23:22 < tdiaz> I think so, or for all I know they had that all along and the other process never got established.
23:22 < blitter> ok gotcha, so Bandai was both the vendor *and* the signing authority by your time
23:22 < blitter> Apple was nowhere in the signing chain
23:23 < tdiaz> Correct.
23:23 < blitter> got it, thanks
23:24 < tdiaz> That I even got into it at this whole level .. is just so weird. :)
23:25 < tdiaz> Just happened to be in the right place when I saw them offering the modems in bulk. They were still in the bulk boxes from Motorola. Black modem, black 1 meter cable in a bag.
23:25 < blitter> oh another question-- did you burn your disc from the image you made when you got the drive back? or did you go into Toast and point it toward the hard drive. reason I ask is because Toast has options to "optimize" a volume (basically it defrags and creates a brand new Desktop file), which would totally fuck up how the auth file knows the volume is laid out
23:25 < tdiaz> Wall Wart was separate in the carton.
23:25 < blitter> and did you get directions advising you to use said options in toast or not
23:25 < tdiaz> Yeah, I just burned it as I got it. I did that optimize bit before I sent it.
23:26 < tdiaz> But it didn't seem to matter. :)
23:26 < tdiaz> No, just "burn this"
23:26 < blitter> interesting. I wanna say the optimize flag only applies at burn time, but I'd have to go back and check
23:27 < tdiaz> Well, I mean defrag with MacOS native tools more than that Toast function.
23:27 < blitter> ahhhhh
23:27 < blitter> ok that makes sense
23:27 < tdiaz> Which .. screwed stuff up. Period.
23:28 < tdiaz> Toast did stupid shit with the ISO-9660 and Joliet stuff that caused things to be wonky; no matter how you set things up, it did whatever it wanted when the disc was inserted.
23:29 < blitter> I did have to navigate a complex web of UI when I set up my Kickstart disc, which I authored as a hybrid bootable Mac and Joliet disc
23:29 < tdiaz> ..and those dual catalog volumes - LOL. The only thing I had a smooth time with was the ProDOS / HFS stuff for Apple II support.
23:29 < blitter> it's not at all intuitive how to do hybrid discs in Toast
23:29 < tdiaz> No, it's not. It just sucks.
23:30 < tdiaz> Sucks so bad that I keep the 9600 with Toast 4. something just for that.
23:32 < tdiaz> As I still do master HFS / A2 stuff too.

Friday, May 28, 2021
00:06 < amd> jjuran: Pffft. Type all ya want. Talk makes more talk. Don't worry about typabusterinug .. no ones gonna go -that- long .. at least without an interesting topic perhaps. .. Says the guy that rattled off many pages about PiPPiN stuff.
00:08 < amd> Which .. I mean, how did that thing get coined .. 'tuscon' .. The Ultimate System CON maybe? I don't get it.
00:09 < amd> ...anyhow, back to the deafness.
00:19 < blitter> amd: in your defense, i did ask you about a couple things :P

Tuesday, June 29, 2021
13:59 < blitter> amd: do you still have this?
14:00 < blitter> as well as the board it plugged into?

--- Day changed Wed Jun 30 2021

--- Day changed Thu Jul 01 2021

01:00 < amd> blitter: No. There's nothing going on with it other than the mechanical repositioning of the slot.
01:11 < blitter> but you don't have that machine anymore either? that photo is dated 2017
01:11 < blitter> curious what's on the ROM

Saturday, July 10, 2021
21:23 < amd> https://twitter.com/y816/status/1414032478340145160
--- Day changed Sun Jul 11 2021
06:42 < amd> Hmmm..
06:43 < amd> Wonder what the odds of finding a PiPPiN motherboard are..
08:43 < CuriosTiger> About the same as coming across a lightly used Apple I at a yard sale.
13:28 < blitter> amd: I have one :)
13:28 < blitter> do what I do and save a search on YAJ
17:10 < amd> blitter: Turns out I must have not actually gotten any more motherboards, or there is still a mystery box somewhere.
17:10 < amd> I have so many chassis/casings..
18:02 < blitter> amd: any accessories? specifically, any PCI adapters?

Tuesday, September 28, 2021
17:27 < blitter> amd: mind if I share a link to your photos of the early 6100-based Pippin prototype? so they can be cited on wiki
17:27 < amd> That's fine.
17:28 < blitter> mind if they're shared via twitter? (you don't have to be the one to do it)
17:28 < blitter> also do you want credit for them? :)
17:28 < amd> I still can't quit laughing over that whole tuscon thing..
17:29 < amd> Twitter, whatever is fine. credit is nice, but doesn't always happen ;-)
17:31 < blitter> right on, thanks!

Saturday, October 9, 2021
02:48 < blitter> amdamd: https://twitter.com/ablitter/status/1446730216022306821
04:53 < amdamd> blitter: Some even have the PDM motherboard in them.
14:09 < blitter> amdamd: ... isn't PDM the 6100? or do you mean a PDM prototype motherboard?
17:54 < amdamd> blitter: The latter.

Rest in peace, Tony. Thanks for the memories.

Pippin Kickstart 1.1

I closed out 2020 by piecing together a minor update to Pippin Kickstart (then spent most of January writing this blog post 😛 ). Version 1.1 patches the SCSI Manager on Pippins with ROM 1.0 (most models with a white case) so that they too may boot from SCSI devices other than the internal CD-ROM drive, such as an external hard drive. If you have a 1.2 or 1.3 Pippin and are happy with Pippin Kickstart 1.0.x, then 1.1 adds no new functionality other than some cute graphics (see below).

Download it here: pippin-kickstart-1.1.zip. Extract and burn pippin-kickstart-1.1.iso to a CD-R using the software of your choice.

Source code is available here, licensed under the GPLv2: https://bitbucket.org/blitter/pippin-kickstart

The Pippin was Bandai and Apple’s ill-fated collaborative attempt to break into the video game console market by marrying two things I love: Macs and video games. Bandai launched the Pippin in 1996 amid fierce competition from other fifth-generation consoles like the Panasonic 3DO, Sega Saturn, Sony PlayStation, and Nintendo 64. Based on Macintosh technology, the Pippin is capable of running Mac software and vice versa, but Apple built some software-based security into the Pippin’s boot process, making it difficult to use the Pippin just like any other Mac. In part because of its high price and lack of developer support—both internal and external—the Pippin was considered a commercial failure; Apple canceled the project in early 1997, with Bandai following shortly after in 1998.

I grew up playing games on Macs through the ’90s and early 2000s, so I’ve always had a soft spot for the classic Mac OS. I learned how to program on a Mac and nurtured those coding skills over several years, later parlaying them into a modest career in video game development. All the while I noticed that while other vintage consoles were getting renewed attention thanks to burgeoning homebrew developer scenes of their own, the poor Pippin was being left out in the cold. By the late 2010s I figured that since nobody had paid much attention to Apple’s foray into video games, I may as well, especially given my nostalgia for classic Mac gaming. So I cracked its signing key in May 2019 and shortly thereafter released a boot disc called Pippin Kickstart that made it easier for folks to test their own Pippin CD-ROMs.

When I released Pippin Kickstart 1.0.1 at the beginning of July 2019, I thought it was a done deal. Owners of 1.2 and 1.3 Pippins could boot from any SCSI device they wanted, and 1.0 Pippin owners could boot from any CD-ROM they wanted—the 1.0 ROM lacking support for other SCSI devices. Unsigned booting was finally a reality on the Pippin, and I could rest easy, not having to worry about this little project anymore.

That is, until September 2020, when @LuigiThirty on Twitter picked up a 1.0 Pippin and wrote about this obstacle she encountered while working on some homebrew:

Not only was her Pippin refusing to boot from a known-good external SCSI hard drive, but the drive wouldn’t work with her Pippin at all. Hard drives are the earliest and most basic SCSI storage devices available for Macs, and even when a drive is unformatted or has no drivers installed, formatting utilities should still recognize that a SCSI device is attached and available for use. Perhaps the 1.0 Pippin’s ignorance of external SCSI devices had less to do with driver support in ROM and more to do with artificial limitations. As a hacker, artificial limitations offend my sensibilities, so I revisited Pippin Kickstart with an aim to do something about this.

The Problem

The consumer model of the Bandai Pippin is designed to boot exclusively from its internal CD-ROM drive. In late 1996, a revised ROM—1.2—was offered as an upgrade allowing the use of external SCSI devices, particularly the Deltis 230 MO Docking Turbo, which provides a magneto-optical drive “docked” to the underside of the Pippin. A developer dongle could be attached to Pippins equipped with ROM 1.2 to enable booting from external SCSI devices. But the earlier ROM 1.0 completely ignores all SCSI devices (and dongles) other than the internal CD-ROM, precluding altogether the use—not to mention booting—of external SCSI volumes. Signs point to the ROM’s low-level SCSI Manager as the culprit: while mandatory initial booting from CD-ROM is standard across all retail Pippins, only ROM 1.0 refuses to see additional SCSI devices after the Pippin has fully booted.

However, the “GM Flash” ROM supplied with developer and test Pippin kits is nearly identical to the final 1.0 ROM, save for a few minor changes to enable debugging. Indeed, the GM Flash ROM has an Open Firmware timestamp of 1996-01-28, while the 1.0 ROM has a corresponding timestamp of 1996-01-29—just one day apart, suggesting that the two ROM versions were built from the same codebase. Among the differences, the GM Flash ROM can enumerate all SCSI devices at all times and may attempt to boot from any of them.

The Solution

All known Pippin ROMs load their SCSI Manager code from a ‘nitt’ resource located in ROM. Upon closer examination it appears that the ‘nitt’ resources with ID 43 (hereafter referred to as “‘nitt’ 43”; SCSI Manager 4.3 was codenamed “Cousin Itt”) in the GM Flash and 1.0 ROMs—where the SCSI Manager code is stored—are the exact same size and, aside from timestamps, differ by only nine bytes. These nine bytes make up code that checks a SCSI device’s ID to determine whether or not it should be considered. In the GM Flash version, this code verifies that the ID is between 0 and 7 inclusive (all legal SCSI IDs), whereas in ROM 1.0, this code only passes devices with an ID of 3—the internal CD-ROM drive. Given that the GM Flash and 1.0 ROMs are so closely related, it’s reasonable to hypothesize that the 1.0 ROM can use a SCSI Manager from the GM Flash ROM. Replacing ROM 1.0’s ‘nitt’ 43 with the GM Flash version should therefore be the most straightforward fix. Barring that, patching those nine bytes to match those of the GM Flash ROM should be sufficient to make ROM 1.0’s SCSI Manager functionally equivalent to that of the GM Flash ROM.

Finding The Problem

Pippin Kickstart was first conceptualized as a custom SCSI CD-ROM driver. My thinking was, since Macs automatically load device drivers from the first few partitions of an Apple-formatted disk, why would the Pippin behave differently? The longer story is written in my blog post about that, but the short answer is that the Pippin simply ignores patch and driver partitions. The Pippin has its own .AppleCD driver in ROM, which has to be loaded and active before it can boot, well, anything. Perhaps the thinking at Apple was, “since .AppleCD has to be working to boot the Pippin into an OS, there’s no sense in patching it that early. Let the OS do that if need be.” Since the signing process is only applied to the boot volume and no other partitions, maybe it was a conscious effort to block patches and custom drivers from executing their own unsigned code early enough to work around the Pippin’s security. Whatever the reason, I discovered quickly that Pippin Kickstart’s payload couldn’t sneak in through a back door.

Ultimately, I reverse-engineered the signing keys and implemented Pippin Kickstart as a simple bootloader, signed using Apple’s private key so that it launches on any retail Pippin without resorting to any sneaky tricks. Pippin Kickstart implements its own boot candidate search loop, mimicking the loop in the Pippin ROM’s Start Manager (in fact calling some support functions in ROM as necessary) but omitting the authentication and driver checks. I implemented my own loop because the ROM’s loop, being read-only, can’t be patched in place. But since the ROM’s search loop is part of the early startup code—frozen in ROM rather than loaded as a resource—the locations of the code in ROM needed by Pippin Kickstart are always at known, fixed addresses. They never change, so I can hardcode them directly into Pippin Kickstart’s logic.

🎵 Call me (call me) on the line / Call me, call me any, anytime 🎵

It’s a different story with the SCSI Manager. The SCSI Manager is a low-level library used to enumerate and wrangle access to any and all SCSI devices attached to a Mac or Mac-based system like the Pippin. If you want to get into the party where the SCSI devices are, the SCSI Manager is both the bouncer and the emcee. In order to get to the point where any user-provided code—including Pippin Kickstart—can be loaded at all, it has to be read from a drive, a process which the ROM starts by first querying the SCSI Manager for where and how it can ask a drive for anything. The SCSI Manager therefore has to be loaded and active before the Start Manager even checks to see if it can boot from anything. Furthermore, as I point out above, the 1.0 ROM’s SCSI Manager appears to reject external devices even after we’re done booting, so the SCSI Manager has to persist as long as it may be in use.

The Pippin’s earliest boot code in ROM—from the time it powers on—executes natively on the PowerPC, configuring some low-level hardware and initializing a 68K emulator. But soon after that it enters the emulator to launch the boot code located in the Toolbox, which is predominantly written in 68K assembly. In fact most of the Pippin’s ROM targets the 68K instruction set architecture (or “ISA”), but some portions target the Pippin’s native PowerPC ISA. To understand why this startup code (and, by extension, Pippin Kickstart) runs in emulation rather than natively, we need to look back in time to when the Pippin’s software was being designed.

In 1994, Apple released their first PowerPC-based Mac. The Power Macintosh 6100/60 sports a PowerPC 601 processor running at 60 MHz and shipped with System 7.1.2. The development of the 6100 is a very interesting tale, with some parallels to how the most recent Apple Silicon-based Macs came to be, particularly when it comes to emulation. The Mac had a relatively rich library of both first-party and third-party software, a relatively mature OS that was going to be ten years old by the Power Macs’ release, and a developer community that was used to working with the Mac and how to flex its muscles. Throwing that all away and starting up again from scratch—especially with Windows 95’s release on the horizon—would not have made the best business sense given the short period of time in which Apple had to make the transition. Thus the decision was made to use emulation to run the Mac’s existing software library—including most of its operating system—on the new PowerPC-based machines, with the idea that modules of the OS could be replaced piece by piece over time rather than all at once. In turn, existing software and development knowhow would continue to retain its value, and developers were under less pressure to produce PowerPC-native versions of their software right away. This strategy paid off in a big way; the new Power Macs were a hit, and soon became the basis for the Pippin platform.

The Power Macintosh 6100/66av, a.k.a. the Pippin prototype prototype

Due mostly to time pressure, only the most-often used portions of the Mac’s System Software were rewritten as native PowerPC code for the initial lineup of Power Macs. Namely, QuickDraw was given the native treatment, while components that still had many 68K dependencies—such as the SCSI Manager and disk drivers that had heretofore been written to work with it (targeting the 68K, mind you)—were left emulated, for the time being anyway. It didn’t take long for Apple to let the SCSI Manager into the native club. Just 15 months after the first Power Macs hit the market, the Power Macintosh 9500 arrived on the scene in June 1995, utilizing the PCI standard as the first of the “second-generation” Power Macs and featuring a native SCSI Manager 4.3 built into its ROM. The new PCI-based Power Macs—particularly the Power Mac 7500—lent many of their features and specifications to what eventually became the final hardware and software design of the Pippin.

The opportunity to transition to a new processor architecture in turn gave Apple the opportunity to learn from the effects of previous design decisions and implement more modern corresponding changes. The original Macintosh operating system was designed in 1983 to run one application at a time on a computer with no built-in storage, 128 kilobytes of RAM, no virtual memory, and a 68000 processor which was limited to relative branches up to a range of 32 kilobytes in either direction. User-provided 68K code must be loaded into RAM before it can be executed; Macintosh engineers invented the Segment Loader as a sort of virtual memory to allow larger applications to be broken into code resources swapped in and out 32K at a time. Enterprising hackers later figured out how to work around this to get much larger segment sizes, but segments still have to be loaded into RAM regardless. Generally, all the code associated with a 68K application has to live in that application. “Dynamic” or “shared” code libraries were not explicitly supported by the operating system, so the best one could hope for were system extensions/patches offering either new official system APIs, or third-party de facto standard interfaces informally agreed upon by multiple applications. This was not the most stable environment in which to develop scalable software.

The original Macintosh, a.k.a. the Pippin’s granddaddy

By contrast, the upcoming Power Macs would feature cooperative multitasking with System 7, paged memory support, an internal hard drive, multiple megabytes of RAM, and a completely different ISA which could support much larger branch sizes. Apple had almost ten years of Macintosh development experience by the time they began designing software for the upcoming Power Macs, and had taken lessons to heart in terms of how to improve their operating system to better scale to modern software development practices. The new PowerPC code that would run natively on the improved operating system would not use the old-style Segment Loader. Instead, code blocks on the hard drive could be mapped and executed directly as pages in memory without having to copy them to physical RAM. Self-contained blocks of PowerPC code, known as “fragments,” are defined in terms of multiple “sections”—both code and data—and could be paged/loaded into memory once and then optionally shared with multiple applications, either as application plugins or standalone libraries in their own right. Each fragment could have its own block of globals and constants loaded into RAM for it to use, with their initial values specified in the fragment’s definition. The standard interface in the Mac OS to fetch these fragments and their entry point(s) at runtime became known as the Code Fragment Manager, or “CFM.”

There are three ways to load a fragment, and the CFM provides a function for each:

- GetDiskFragment prepares a fragment stored in a byte range of a file, typically an application’s or library’s data fork.
- GetMemFragment prepares a fragment that already resides somewhere in memory, such as in a resource.
- GetSharedLibrary locates an import library by name and prepares its fragment.

68K application code lives by and large in that application’s resource fork, leaving the data fork typically empty; one or more ‘CODE’ resources are swapped in and out at runtime by the Segment Loader, using a jump table in ‘CODE’ 0 as an initial reference point. To support “fat” binaries that run on 68K Macs but also natively on Power Macs, a PowerPC-aware application stores its code in the otherwise unused data fork, using a ‘cfrg’ resource as an initial reference point.

When booting from ROM, there is no concept of “resource forks” or “data forks” until Mac filesystem code is loaded, but the Resource Manager doesn’t require resources to live in a file to be located and used. There exists a provision to load resources from a resource map located in ROM instead, and this is how the native SCSI Manager and other modular components are loaded despite the rest of ROM running in emulation. GetMemFragment is therefore the function used by the ROM; the SCSI Manager isn’t a shared library (at least not by the CFM’s definition), and we obviously can’t call GetDiskFragment until we have the ability to read from disk. During startup, the ROM asks the Resource Manager for a handle to the ‘nitt’ 43 resource in ROM, and that handle is passed to GetMemFragment to prepare the fragment contained within.

Comparing the ‘nitt’ 43 resources of the GM Flash and 1.0 ROMs reveals that, aside from timestamps in the fragments’ respective headers, the two resources differ by only nine bytes in four places. Curiously, three of those nine bytes have the value 3 in the 1.0 ROM where the GM Flash version has the value 7. SCSI ID 7 is the highest legal ID on the bus (and is reserved for the host adapter in Apple’s implementation), whereas ID 3 is that of the internal CD-ROM drive. I suspected this might have something to do with why only the CD-ROM drive is recognized by the 1.0 ROM, but without knowing what the other changed bytes correspond to, I couldn’t know for sure. I’d have to run the two versions through a disassembler.

Three of the four significant differences. Left: GM Flash. Right: 1.0

Finding a sufficient PowerPC disassembler was somewhat of an adventure in itself, but I wound up circling right back to something that I had long since installed on my G3 Power Mac: Apple’s own Macintosh Programmer’s Workshop, or “MPW.” I didn’t know this, but MPW comes with a tool called DumpPEF specifically designed to tear apart fragment containers for analysis. The best part: it includes a very good PowerPC disassembler. All I had to do in MPW Shell was pass along the contents of ‘nitt’ 43 as a data-fork-only file and redirect DumpPEF’s output to a text file, like so:

DumpPEF -do All -ldr All -pi u -dialect PPC601 -fmt on -v "Arthur:Development:Projects:Pippin Kickstart:1.1:nitt43" > "Arthur:Development:Projects:Pippin Kickstart:1.1:nitt43dump.txt"
My boot drive is named Blackwood, after the protagonist in the Journeyman Project games, and my working/data drive is named Arthur, after his wise-cracking sidekick.

I know 68K assembly much better than I do PowerPC, but teaching myself just enough PowerPC to understand what goes on in the SCSI Manager was not as bad as I thought it might be. Matching up the offsets where bytes differ to their corresponding locations in the disassembly, I quickly confirmed my suspicions. If Apple’s SCSI Manager 4.3 Reference guide (and the short amount of time it took Apple to build a native version) is any indication, the SCSI Manager itself was originally written in C. According to Apple’s own documentation, then, one of the common structures used by the SCSI Manager is called a “DeviceIdent,” containing among other things the “targetID” of a particular SCSI device.

If we treat the last two differences as one code “site,” so that the differences map to changes at three code sites, then in the GM Flash ROM the SCSI Manager appears to be doing the equivalent of this C code at all three sites:

if (devIdent.targetID > 7)

where target ID 7 is the upper bound of legal IDs, as mentioned earlier. Translated into English: when a request for a SCSI action comes in, the SCSI Manager checks whether the intended device’s ID is out of range, and if so it refuses the request.

Contrast that with this C code, equivalent to what the 1.0 ROM does at each code site:

if (devIdent.targetID != 3)

Notice the difference? “If a device’s ID is not 3, refuse the request.” This is clearly why no other SCSI devices are recognized by the 1.0 ROM; the low-level code responsible for routing SCSI requests flat out refuses to do so unless it’s to or from a device with ID 3. I had figured out where the problem is, and fortunately, Apple already showed me how to fix it by way of how the GM Flash ROM behaves. 🙂 The next logical step then was to develop a patch.

Implementing The Solution

The most obvious way to patch the SCSI Manager would be to burn a patched 1.0 ROM. In theory it’d be easy—the ‘nitt’ 43 resource is the same size in both the GM Flash ROM and the 1.0 ROM. From a content perspective it’d literally just be a copy/paste job, but I’m primarily a software guy, and I’d rather lose just my time debugging software than lose my time and money feebly trying to make and support a reliable physical tool all while out of my wheelhouse. Acquiring, programming, and installing a custom Pippin ROM board can be not only intimidating to a casual collector/homebrewer (including yours truly), but also significantly more expensive (and legally questionable) than burning Pippin Kickstart to a CD and running it on stock hardware. Besides, if I were to burn a new ROM anyway, why would I stick with 1.0 when I could use the much more fully-featured version 1.3 instead? Pippin Kickstart is a free, open, and purely software-only utility, so I think it’s worth trying to patch in software. The fix should only cost the price of a blank CD-R. 🙂

When GetMemFragment is called to prepare the native SCSI Manager fragment in ROM, no code is copied or moved around in memory. The ‘nitt’ 43 resource stays right where it is and the SCSI Manager is executed directly from its home in ROM. How then does one patch this read-only code in software? Is it even possible?

Writing into read-only memory is out of the question for reasons that should be obvious. What about replacing the SCSI Manager with my own implementation? In order to cleanly install my own replacement, I would have to shut down and clean up the existing ROM-based SCSI Manager so as to make sure no remnants remain. Is this possible? I don’t know. The SCSI Manager is designed to remain permanently resident, so while I know a SCSI Manager system extension exists for older 68K Macs, I don’t know how or when it installs itself and furthermore, I couldn’t find a mechanism by which the PowerPC-based Pippin could accomplish the same task at boot time. So that’s out.

Hang on. If, hypothetically, the CFM loads a fragment from ROM that depends on a fragment that’s loaded from RAM or from disk, how does the ROM fragment know how—and where—to call into those dependencies? What about non-ROM fragments that in turn depend on ROM fragments and other non-ROM fragments alike? It would make sense for the CFM to keep track of these inter- (and, as we’ll see, intra-) fragment locations in a unified way.

Each loaded fragment has at least one associated “data” section allocated in RAM. This section may contain globals or other statically-initialized data referenced by the fragment, but the data section also contains a special area called (by IBM) the “Table of Contents” or “TOC,” though that’s a bit of a misnomer. Apple says that the TOC is more like an address book, acting as a lookup table for functions and data living both within a fragment and outside that fragment. Each fragment has its own TOC, so before a routine in another fragment is called, PowerPC register GPR2—otherwise known as the “RTOC”—is saved, then preloaded with the bottom of the destination fragment’s TOC so that the fragment knows how to find its own globals and data. The calling fragment’s RTOC is restored upon return from the called routine.

A fragment’s code must be position-independent; that is, it should be able to be loaded into and run from any address. Therefore, a fragment’s code section does not usually reference hardcoded memory locations. Instead, it fetches the addresses it needs from its TOC at runtime, using the RTOC register plus a known offset as an index into the TOC and reading the entry from RAM. The CFM is responsible for preparing and maintaining these addresses in the TOC at fragment load time, and at any time the loaded fragment or any of its dependencies have to be relocated in memory.

When pointing to data, a TOC entry is just a pointer to that raw data. But when pointing to a routine, because that routine could be exported from a fragment (someone can call us) or imported from another fragment (we’re calling someone else), the caller needs to know at minimum where that routine’s TOC lives in RAM, so a simple raw pointer to the routine is not enough. Enter the “transition vector.” A transition vector is very simple: it contains at least two pointers, the first being the address of the routine within the fragment, and the second being the address of the fragment’s context. In most cases, a fragment’s TOC provides enough context, so the second pointer is used to prepare RTOC immediately prior to entering the routine. A transition vector may optionally contain other fields at the discretion of the compiler and environment used to build the fragment; the only expectation is that it contains at least the first two.

A strange environment

Transition vectors must contain at least two pointers—one to the routine itself and one to its context—but the SCSI Manager’s transition vectors each contain three pointers. The third pointer is unused, but is designated as an “environment” pointer. Early PowerPC Mac development was done on IBM RS/6000 workstations, so this vestigial “environment” pointer may have come from that early toolchain.

If you’ve been paying attention so far, you might be able to figure out where this is going. The SCSI Manager’s transition vectors all point to code in ROM, because ‘nitt’ 43 itself is not loaded into RAM and there’s no problem executing its code directly from its home in ROM. But the transition vectors themselves live in RAM, which means they can be changed. Patching the necessary transition vectors in RAM is tantamount to patching the routines that the SCSI Manager itself exports to be called by the OS. So naturally, the next question is, how do we find those transition vectors?

Calling GetSharedLibrary, GetDiskFragment, or GetMemFragment prepares a fragment (if found) and returns a “connection ID” to that fragment. Each time an interface is established to a particular loaded fragment, it’s called a “connection” to that fragment. Connections are reference counted and when all connections to a particular fragment have been closed, that fragment is unloaded. All three of the “GetFragment” APIs create a new connection and each takes a parameter called findFlags that can equal one of three values:

  • kLoadLib: load a fragment if it’s found and not yet loaded. If it is loaded, create a new connection to the already-loaded fragment.
  • kFindLib: find a loaded fragment. If it is loaded, create a new connection to the already-loaded fragment. If not, return fragLibNotFound.
  • kLoadNewCopy: load a fragment if it’s found and not yet loaded. If it is loaded, create a new connection to the already-loaded fragment but also create a new data section specifically for this connection.

The CFM provides the FindSymbol API for locating a symbol in a fragment by name, given a connection ID. After preparing the SCSI Manager’s native fragment, the ROM calls FindSymbol to find the transition vector for the SCSI Manager’s “InitItt” entry point, then calls it to begin executing the native SCSI Manager’s code.

Hmm. In the SCSI Manager’s case, a connection is created at startup, but it is never closed at any time thereafter, ensuring that the SCSI Manager is never unloaded. Could a new connection be established to the SCSI Manager / ‘nitt’ 43 fragment-as-resource, then a known symbol—perhaps its entry point—be used as a reference to poke around in the rest of the fragment’s data section, including its transition vectors?

This seemed like the most “polite” way of getting at the SCSI Manager’s data section, so I tried this first.

 move.w  #0xFFFF, (RomMapInsert)
 subq.l  #6, %a7        /* make room for GetResource's return handle (4 bytes) */
                        /*  and GetMemFragment's return value (2 bytes) */
 move.l  #nittRsrcType, -(%a7)
 move.w  #43, -(%a7)
 _GetResource
 movea.l (%a7)+, %a4

 move.l  (%a4), -(%a7)  /* Ptr                  memAddr */
 subq.l  #4, %a7        /* make room for SizeRsrc's value in length */
                        /*  (4 bytes) */
 move.l  %a4, -(%a7)
 _SizeRsrc
 clr.l  -(%a7)          /* Str63                fragName */
 pea    1               /* kLoadLib */
 pea    0x1A(%a7)       /* ConnectionID*        connID */
 clr.l  -(%a7)          /* Ptr*                 mainAddr */
 clr.l  -(%a7)          /* Str255               errName */
 move.w #3, -(%a7)      /* GetMemFragment */
 _CodeFragmentDispatch  /* we'll assume it succeeds... */
-2817, better known by its other name fragLibConnErr

Would that it were so simple. The use of the findFlags parameter is documented for GetSharedLibrary and GetDiskFragment, but the documentation for GetMemFragment just refers to the documentation for GetDiskFragment for how findFlags is used. Despite Apple’s redirection, passing kLoadLib to GetMemFragment will not create a new connection to an already-loaded fragment at an address previously provided to the CFM. There would be no going in through the front door. Damn. I’d have to sneak in another way.

The classic Mac OS memory map is split into several areas. There are low-level system globals near the bottom of the address space, a system heap for use by the OS above that, and the remaining RAM holds one or more fixed-size application heaps (and stacks) belonging to running applications. Originally, the system heap was fixed in size, with the remainder of usable RAM reserved for the application heap and stack. Double-clicking an application from the Finder would close down the Finder and the newly-launched application would take its place in the application heap. The reverse would occur when the application closed down, relaunching the Finder in its stead. As this original design was built around running one application at a time, some technical gymnastics were later performed to add multitasking to the system while maintaining backward and some level of future compatibility with Mac apps. The original Memory Manager APIs and low-memory globals were therefore left mostly unchanged, including a well-known global containing the base address of the system heap.

As mentioned earlier, the SCSI Manager is designed to remain permanently resident, so it makes sense that its fragment’s data section lives in the system heap. Opening and closing applications has no effect on the existence of the SCSI Manager. Indeed, launching applications often involves the SCSI Manager to fetch those very applications from disk; the SCSI Manager is a core component of loading code on the Pippin. Furthermore, since the Pippin needs to know at all times how to load additional code and data from disk, those exported transition vectors need to be at fixed locations in memory in a nonrelocatable block of RAM. The system heap is one of the many low-level Memory Manager structures Apple documented early on, so we can find the SCSI Manager’s data section in the system heap by doing a brute-force linear search.

Reading the SysZone system global gives us the address of the beginning of the system heap. The system heap “zone” begins with a zone header block for various bookkeeping tasks like keeping track of its size, flags, which blocks are free, and other internal uses. Immediately following the zone header are the contents of the heap itself. Likewise, each block in a classic Mac OS heap starts with a block header, describing among other things its size in memory.

Immediately following the block header are the block’s contents. By adding each block’s size to its respective header’s address, we can step through each block of the system heap, inspecting each block’s contents along the way.

 movea.l SysHeap, %a0
 lea     heapData(%a0), %a1  /* A1 -> allocated block in system heap */
NextSysBlock:
 cmp.l   bkLim(%a0), %a1 /* bkLim(A0) -> system heap trailer block */
 beq.w   SkipPatching    /* if this block is the trailer, we've searched */
                         /*  the entire system heap and couldn't find the */
                         /*  SCSI Manager's pidata section */

 movea.l %a1, %a4
 move.l  blkSize(%a1), %d0   /* D0 == physical size of this block */
 add.l   %d0, %a1            /* A1 -> the next block in case we skip */

As well as being a handy PowerPC code fragment disassembler, another nifty facility DumpPEF provides is the ability to examine how a fragment’s data section is initialized. The data section is of a fixed size and, as discussed previously, the SCSI Manager’s data section starts with a Table of Contents, followed by a list of transition vectors. These transition vectors are the bytes we’re looking to patch. But in the SCSI Manager’s case, after the transition vectors comes a series of text string constants. These strings are always in the same location relative to the beginning of the data section and as read-only constants, they always have the same predictable values. Therefore I reasoned that in addition to verifying that a block within the system heap is of the same expected size as the SCSI Manager’s data section, checksumming these strings of text within that block should provide a suitable heuristic for identifying a particular block as belonging to the SCSI Manager.

 /* Verify this really is the SCSI Manager's block in the system heap. */
 /* We do that by checksumming the area in the middle of this block where */
 /*  we know the SCSI Manager looks for some read-only strings. Use an */
 /*  algorithm similar to that used to checksum the Toolbox, only we'll */
 /*  walk our pointer backwards so we can use A3 as-is if/when it comes */
 /*  time to check our TVectors. */
ChecksumLoop:
 add.l   -(%a2), %d0
 cmpa.l  %a2, %a3
 ble.s   ChecksumLoop
 cmp.l   #scsiStrsCksum, %d0
 bne.s   NextSysBlock

Once we’ve found our block, we know where the transition vectors live within it, so it’s time to patch them, right? Well, first we need to create the patch itself. At the time Pippin Kickstart runs, there is no application heap yet. Application heaps aren’t set up until the Process Manager starts, which doesn’t happen until after the familiar “Welcome to Macintosh” (er, Pippin) extension parade has completed and our first application is ready to launch. Therefore, the system heap is our active and only heap. What’s more, as of System 7, so long as there is RAM available the system heap can grow dynamically to accommodate allocation requests, with the Process Manager shifting its base accordingly.

Our patch needs to stick around as long as the SCSI Manager exists, so naturally we need to give it a home somewhere the OS won’t stomp over it later. Since the SCSI Manager’s data section lives in the system heap, and since Pippin Kickstart itself works from the system heap, it follows that we should be able to safely create a small block of nonrelocatable space in the system heap for our patch to live. Our patch really only needs to replace nine bytes in four locations, but we can’t just create a nine-byte block, stick our bytes there, and call it a day. The nine patched bytes belong to different functions in the SCSI Manager—functions that can be and are referenced internally. Specifically, these functions are invoked internally within the SCSI Manager not by referencing transition vectors, but by good old-fashioned relative branching. It makes sense; the SCSI Manager targets the PowerPC and its functions run natively on the PowerPC, so why bother with transition vectors when you know you’re calling other PowerPC functions that are part of the same code fragment? It’s certainly convenient for the Pippin, but makes things slightly more annoying when creating this patch.

We have to ensure that no code that either leads to or leads from our patched locations can lead back to the unpatched versions in ROM. Therefore we have to account for relative branching by including all of that extra code in our patch, even though we don’t change any of it! I wrote a small C++ program to calculate exactly how much code I’d have to copy by essentially “emulating” PowerPC branch instructions, keeping track of the lowest and highest reachable addresses and using the ‘blr‘ instruction as a heuristic for the ends of subroutines. Passing my program a line-by-line disassembly of the SCSI Manager’s code section cut from DumpPEF’s output, I started the “emulation” at each of the three code sites and noted which one could be reached by the largest range. It turns out that all three sites lie within a mere 35K of code that only calls into itself; the rest of the SCSI Manager’s code section appears to be helper functions or routines unrelated to the SCSI Manager’s “core.”

The earliest address of our 35K block is offset 0xB854 into the SCSI Manager’s code section. The transition vector in the SCSI Manager’s data section with the earliest offset that should call into our patch is the vector that points to offset 0xBAD4. We know this transition vector’s index into the data section’s list, so by reading its target address and subtracting an offset (0xBAD4 – 0xB854), we can get the starting address in ROM of the code block to copy into our patch area. We also know exactly how much code to copy—35536 bytes—so the procedure becomes rather simple: copy our 35K of code into a nonrelocatable block on the system heap, then patch that. We can also easily determine which transition vectors point within that original 35K of code, so we know exactly which transition vectors to patch. It turns out that the vectors, starting with the one pointing to offset 0xBAD4 through the end of the data section’s list, all need to point into our patched code. By calculating the difference between offset 0xB854 into the SCSI Manager’s code section, and where our 35K block is in RAM, we get an offset value that makes it trivial to patch the transition vectors. We simply add that offset to each of those transition vectors so that they then point into our patched code instead of into ROM.

 /* now let's patch up the TVectors to point to our patched code */
 move.b  #tVectorsSize-1, %d0    /* # of TVectors to patch minus one */
TVectorLoop:
 movea.l (%a4), %a2
 suba.l  %a0, %a2        /* adjust the code pointer by our offset */
 move.l  %a2, (%a4)+
 addq.l  #8, %a4         /* skip the TOC and environment pointers */
 dbra    %d0, TVectorLoop

After all of that, we’re still not quite done. All PowerPC processors have some form of a “data cache.” When you make changes to RAM on a PowerPC architecture, those changes aren’t necessarily written to RAM right away. Instead, the processor first checks whether the address you’re reading or writing has been “cached,” or saved in a smaller but faster block of memory within arm’s length of the processor. Cache is to RAM what RAM itself is to System 7’s virtual memory; it is prioritized as a faster alternative to its counterpart, and when there’s no space left its contents are “flushed” to make room. Consequently, writing to a particular address will often write only to the cache instead, anticipating that its contents will be referenced again shortly thereafter.

To further complicate matters, the cache on the PowerPC 603 chip used in the Pippin is split between instructions (code) and data (not code). We’re patching code in RAM, but the Pippin doesn’t know that; it’s all just bytes of data as far as it’s concerned. We want to make sure that our patched area is flushed to RAM so that when the SCSI Manager comes around to execute it next, those patched bytes are waiting in RAM ready for the instruction decoder to pick them up. It’s reasonable to assume that at the time in the startup process when Pippin Kickstart runs, our block of code in the system heap does not have corresponding entries in the instruction cache, but it’s not necessarily safe to assume that our patched code will be automatically flushed to RAM before Pippin Kickstart exits. We certainly wouldn’t want the SCSI Manager to invoke the old unpatched transition vectors, or worse, execute whatever happened to be in RAM before we put our patched block of code there.

There exists an API called MakeDataExecutable that does exactly what we want here. But in order for this API to work for us, we’d have to make a connection to InterfaceLib, call FindSymbol, call MakeDataExecutable with the proper parameters, then close the connection to InterfaceLib. That’s a lot of work for just one call. Fortunately, since Pippin Kickstart runs in the 68K emulator, there’s a faster and easier way, albeit undocumented.

 movea.l %a1, %a0
 move.l  %d6, %d0
 dc.w    0xFE0C      /* undocumented F-line instruction that evicts our */
                     /*  patched area from the PPC data cache into main */
                     /*  memory so it's visible to the instruction decoder */

Apple’s 68K emulator supports the features of a 68LC040 processor with a 68020 exception stack frame. The 68LC040 is like the more powerful 68040 processor powering the Quadra line, but minus the floating-point operations built into the latter. Floating-point operations on the ‘040 are implemented by way of “F-line instructions;” that is, instruction opcodes that begin with the hex digit F. But just because the 68K emulator doesn’t support floating-point operations doesn’t mean that the emulator doesn’t support F-line instructions. 😉 Elliot Nunn helpfully pointed out that one of the F-line instructions used internally by the 68K emulator has the opcode 0xFE0C and it does just what we want: it flushes the PowerPC’s data cache to RAM. This instruction takes two parameters: a pointer in 68K register A0 to an area in memory, and a size in bytes in register D0. Easy peasy, if a little skeezy.

With that, we’re finally done patching the SCSI Manager so that it behaves identically to the version in the Pippin GM Flash ROM.

Bad F-line instructions

System error type 11 often manifests itself as a “bad F-line instruction” bomb dialog in System 7 and later. In System 6 this dialog instead displays the message “coprocessor not installed,” owing to the fact that F-line instructions can map to floating-point operations on an internal or external FPU. Despite what these messages suggest about a missing FPU, very little software for the classic Mac OS makes use of—let alone requires—floating-point hardware. For the broadest compatibility, programs needing floating-point operations either use their own integer math library or call into Apple’s SANE math library instead, neither of which requires F-line instructions.

Usually these system errors are the result of a buggy program erroneously jumping into an area of data and interpreting it as code, setting off the bomb when carelessly stumbling upon a pair of bytes starting with the hex digit F. 😉

Adding Some Fun

Pippin Kickstart runs from RAM after being loaded from the boot blocks, which are the first two 512-byte sectors of an HFS-formatted volume. I was able to squeeze versions 1.0 and 1.0.1 each into the first 512 bytes of this area. Keeping Pippin Kickstart’s footprint limited to the boot blocks makes authoring the Pippin Kickstart disc relatively easy; I merely have to replace the boot blocks with my own, and since the Pippin loads them for me, I don’t have to make any other calls to load any additional code from the CD. Calls to the disk driver’s _Read—which Pippin Kickstart makes to check the first block of boot candidates—can only return 512-byte chunks, so 1.0 and 1.0.1 use the latter half of the boot blocks as scratch space during their respective boot candidate search loops. With the aforementioned SCSI Manager patch going into Pippin Kickstart 1.1, I need more code than will fit in those first 512 bytes, but I still want to limit myself to the boot blocks for convenience. Keeping the code tight is a fun engineering challenge, too. 🙂

If I move the boot candidate search loop and its dependencies from the first 512-byte block into the second block, I leave behind enough room in the first block to patch the SCSI Manager. By the time I’m done patching the SCSI Manager, I don’t need anything from the first block anymore, so I can jump into the search loop in the second block and that first block can be used instead as scratch space. Other than the address of my scratch space, I don’t have to change any of my tested and working search loop logic from 1.0 and 1.0.1. Hooray!

But the SCSI Manager patch doesn’t take up a lot of space, certainly not a whole extra 512 bytes. All that extra unused space felt like a waste to me, but there’s nothing more that Pippin Kickstart needs to do to allow a stock 1.0 Pippin to boot from any capable SCSI device. My code golf skills had gotten the better of me. How could I make meaningful use of those remaining bytes?

Pippin Kickstart has a very spartan interface, though perhaps it’s a little too spartan—folks have mistaken it for BSD and have asked me what “kernel” it boots into. 😆 I take that as a compliment and a testament to how much utility I’ve packed into such a small space, but I do admit that it could look prettier. My good friend Tommy Yune is an accomplished graphic artist who graciously drew up a Pippin Kickstart “logo” (seen above) around the time I was working on the first versions. His graphics appear in the readme files I include on the disc. But other than the text Pippin Kickstart prints to the screen logging its behavior, the most anybody sees is the Pippin logo left over from when the Pippin gets a fresh start.

Perhaps those extra bytes could translate into some extra polish. 🙂

Normally when the Pippin boots, it first draws the Pippin logo and looks for a bootable CD-ROM. If after a few seconds it can’t find one, it starts looping an animation suggesting that a CD be inserted into the built-in CD-ROM drive.

If the inserted CD is an audio CD, then the Pippin launches into its built-in audio CD player application. If the inserted CD is a data CD, then the screen goes black and the Pippin tries to boot from that disc. Many Pippin titles at this point put up a “StartupScreen” made up of the Bandai Digital Entertainment (the Pippin’s first-party publisher) logo; some unofficial titles like “Tuscon” have a custom StartupScreen file. But if the Pippin cannot boot from a given data CD, then it is ejected and the Pippin reboots, drawing the Pippin logo again and repeating the cycle. Therefore if Pippin Kickstart is inserted during the tray-loading animation, you get the least interesting visual result: the screen goes black and nothing else is shown on the screen other than Pippin Kickstart’s text console.

We’ll fix that in 1.1 by drawing Tommy’s “locked” Pippin logo, then drawing it “unlocked” after we’ve successfully circumvented the Pippin’s security. 🙂

Except for the Pippin logo itself, which we get from ROM, we’ll do all of this using QuickDraw primitives. Drawing the Pippin Kickstart logo programmatically rather than storing it as a bitmap takes up a fraction of the space, which is important given that we’re drawing two versions of it and have less than 512 bytes available to pull it all off. To begin, we create a new “clip region” that excludes the area where the Pippin logo is in the center of the screen. Since this is created in RAM at runtime, we have to clean it up before Pippin Kickstart exits, but it’s needed later to tell QuickDraw that we want to allow drawing anywhere but where the logo is.

 lea     logoRect, %a3

 subq.l  #8, %a7
 _NewRgn                        /* create clipRgn */
 move.l  (%a7), %d5             /* save clipRgn */
 move.l  %a3, -(%a7)
 _RectRgn                       /* clipRgn == logoRect */

 _NewRgn                        /* create tempRgn */
 move.l  (%a7), %d6             /* put tempRgn in D6 because D7 is our ROM index */
 move.l  %d6, (tempRgn - logoRect)(%a3) /* save tempRgn in RAM since D6 */
                                        /*  is used by the search loop */
 _GetClip                               /* tempRgn == original clip region */

 movem.l %d5-%d6/%a3, -(%a7)
 move.l  %d5, -(%a7)
 _DiffRgn                       /* clipRgn == original clip region - logoRect */

We then set the background color to black (at this point it’s not—black is the foreground color and that’s what QuickDraw uses to initially paint the screen at startup) and erase the area inside the clip region we just created. This has the positive effect of erasing around the Pippin logo if it has already been drawn. We later draw the logo ourselves just in case, but either way this approach avoids some flicker when Pippin Kickstart launches.

 pea     blackColor
 _BackColor                     /* paint background black */
 move.l  %d5, -(%a7)
 _EraseRgn                      /* erase around the logo */

Next we set the foreground color to white and draw a 7-pixel border around the logo area. Since we have to set the foreground color to white for the text anyway, we set it here so we don’t have to worry about it again.

 pea     whiteColor
 _ForeColor                     /* draw white-on-black */
 move.l  #((outlineThickness << 16) + outlineThickness), -(%a7)
 _PenSize                       /* 7-pixel-thick pen */
 _FrameRect                     /* draw logo outline */

The Pippin logo is stored in ROM as 'PICT' resource -20137. It's trivial to find where in ROM the code to draw the Pippin logo lives—search for a call to _GetPicture (0xA9BC) and an instruction that passes -20137 (0xB157) to it. The logo-drawing routine is located in the same place in all three known retail ROMs: offset 0xE3C from the beginning of ROM. We call this routine to fill the outline with the Pippin logo if it hasn't yet been drawn. If it has, then drawing over the Pippin logo has no perceptible side effects.

 jsr     logoRoutineOffset(%a4)  /* draw the Pippin logo from ROM */

We then have to draw the locked Pippin logo's "shackle." I created a routine for this purpose called, appropriately, DrawShackle and placed it in the second boot block so that we can call it again to draw the "unlocked" logo when it's time to exit the search loop. DrawShackle sets QuickDraw's clip region and then draws a framed rounded rectangle clipped to that region. The net effect is a shackle that appears inside the "locked" Pippin logo.

/* input: A3 -> shackle rect - 8 */
/* trashes: D0-D2, A0-A1 are scratch regs used by QuickDraw */
DrawShackle:
 move.l  %d5, -(%a7)
 _SetClip               /* clip shackle to logo */

 move.l  #((shackleThickness << 16) + shackleThickness), -(%a7)
 _PenSize               /* shackle-thick pen */

 addq.l  #8, %a3        /* a3 -> shackle rect */
 move.l  %a3, -(%a7)
 move.l  #((shackleRadius << 16) + shackleRadius), -(%a7)
 _FrameRoundRect        /* draw the shackle's rounded outline */
 rts
Green: drawable region, Red: clipped region

Notice how DrawShackle grabs the clip region handle from register D5. Luckily, none of the external routines called by Pippin Kickstart trash this register, leaving it available for temporary storage. The same is true of register D7, used by Pippin Kickstart as an index corresponding to the Pippin's ROM version so that we can call ROM routines from their proper locations. It is not true however of register D6, which is used by the ROM routines called by the search loop.

Now that the "locked" Pippin logo is up on the screen, we ready a clip region in register D5 that will be used to replace the "locked" shackle with one that dangles off to the side. We use the same clip region both to erase the existing shackle and to clip the "unlocked" shackle during the next and final call to DrawShackle. The _SectRgn API lets me calculate this region easily, finding the intersection of the existing clip region (set during the first call to DrawShackle that allows drawing anywhere except the logo area) and a predefined rectangle enclosing both the intended area of the "unlocked" shackle and the area of the existing "locked" shackle. Even though my predefined rectangle overlaps the forbidden logo area, this isn't a problem because _SectRgn finds the intersection of both drawable regions; that is, it calculates the region common to both. In the final clip region, only the shackle areas outside the logo will be affected.

 addq.l  #8, %a3                /* A3 -> clipRect */
 move.l	 %d5, -(%a7)
 move.l	 %a3, -(%a7)
 _RectRgn                       /* clipRgn == clipRect */

 movea.l GrafGlobals(%a5), %a0  /* A0 -> qdGlobals */
 movea.l thePort(%a0), %a0      /* A0 -> qdGlobals.thePort */
 move.l	 clipRgn(%a0), -(%a7)	/* push existing clip region */
 move.l	 %d5, -(%a7)            /* find intersection with clipRect */
 move.l	 %d5, -(%a7)            /* make it our next clip region */
 _SectRgn
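If the region math is hard to picture, here's a little Python sketch of the idea, with regions modeled naively as sets of pixels. The coordinates and names below are made up for illustration, not the actual Pippin Kickstart values:

```python
# Toy model of QuickDraw regions as sets of (x, y) pixel coordinates,
# showing why a clipRect overlapping the logo is harmless after _SectRgn.

def rect(left, top, right, bottom):
    """A rectangular region as a set of pixel coordinates."""
    return {(x, y) for x in range(left, right) for y in range(top, bottom)}

screen = rect(0, 0, 100, 100)
logo = rect(30, 30, 70, 70)          # the forbidden Pippin logo area

first_clip = screen - logo           # clip set by the first DrawShackle call
clip_rect = rect(20, 20, 85, 50)     # encloses both shackle positions and
                                     # overlaps the forbidden logo area

next_clip = first_clip & clip_rect   # what _SectRgn computes

# Only the shackle areas outside the logo survive in the final clip region.
assert next_clip and next_clip.isdisjoint(logo)
```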

The active clip region at this point is still what DrawShackle uses to draw the "locked" Pippin logo, so everything outside the Pippin logo is still fair game as far as drawing goes. This includes text, so naturally we still get the familiar text console as Pippin Kickstart goes through its expected motions.

When it comes time for Pippin Kickstart to exit and boot from the candidate it finds, the string "Booting..." is printed and the existing shackle is erased, using the clip region that we prepared earlier. Dangling the shackle to the left side of the Pippin logo would overlap our lovely text console, so we call DrawShackle to dangle an "unlocked" shackle off to the right instead.

 /* Erase the "locked" Pippin. */
 move.l	 %d5, -(%a7)
 _EraseRgn              /* erase the old shackle within our prepared region */

 /* Draw an "unlocked" Pippin. */
 lea     unlockedRect-8, %a3    /* because A4 is our link pointer */
 bsr.s   DrawShackle
Green: drawable region, Red: clipped region

This "unlocked" shackle is missing something, or rather it needs to be missing something. 🙂 In Tommy's logo, the shackle has a "notch" cut out of it, as would a shackle on a real padlock. Cutting a rectangular notch out of our shackle is simple enough; just erase a tiny rectangle where the notch should be.

 pea     notchRect
 _EraseRect             /* cut a rectangular notch out of the shackle */

There's no "notch" primitive in QuickDraw (nor is there a corresponding _DrawNotch API), so drawing the slanted part of the notch requires a little bit of outside-the-box thinking. One option is to set the pen size to the notch width and draw a line between two points. That would work, but there's an even more efficient way, at least in terms of instructions used.

Among its supported primitives, QuickDraw can draw ovals, circles, and rounded rectangles. What do all these shapes have in common? They all involve drawing one or more arcs of a particular width, height, and arc length. Since ovals, circles, and rounded rectangles are themselves at least partially made up of arcs, QuickDraw also exposes the ability to draw just an arc through its _PaintArc API. If we draw an arc at least half the height of the erased notch, with a length extending to the notch's edge, we get the slanted part we need. There is a tiny bit of overdraw into the shackle area above the notch, but since both the shackle and the arc are drawn with the same white foreground color, it doesn't matter. In the end, using an arc instead of a line segment gets the job done in about half the instructions, with even less overdraw than a line segment would produce.

 pea     arcRect
 clr.w   -(%a7)         /* startAngle: 0, straight up */
 move.w  #-45, -(%a7)   /* arcAngle: sweep 45 degrees counterclockwise */
 _PaintArc

Finally, we clean up after ourselves and exit. The clip regions we create are allocated dynamically in RAM, so to be a good citizen we dispose of those, but not before restoring the original clip region from prior to launching Pippin Kickstart. The next thing QuickDraw puts on the screen could be an alert, a StartupScreen, or something else entirely; that's up to whomever we hand the boot process off to. The least we can do before we say goodbye is return the Pippin to a state reasonably close to how we found it.

 /* Clean up. */
 move.l	 %d5, -(%a7)    /* push clipRgn */
 _DisposeRgn
 move.l	 -(%a3), -(%a7) /* push tempRgn */
 _SetClip               /* restore the original clip region */
 move.l	 (%a3), -(%a7)  /* push tempRgn */
 _DisposeRgn

After adding these graphics, I'm back to having zero bytes left in both boot blocks. Waste not, want not. 🙂


Apple Computer was willing to license their Macintosh technology to third parties by the mid-90s, but not at the expense of their own first-party products. While the initial line-up of Power Macs and clones was a success, Apple was still a computer company; it was right there in their name. Licensed Mac derivatives like the Bandai Pippin could not be allowed to cut into Apple's bottom line, so Apple took measures both technical and tactical to help protect against the Pippin cannibalizing Mac sales. The term "Mac" or "Macintosh" was never to be used publicly to describe the Pippin or its software; the Pippin runs "Pippin OS" and is based on "advanced technology by Apple Computer." Pippins with the first revision of the retail ROM have special code that explicitly blocks the use of storage devices other than those built into the system and those officially available at launch. But on top of that, as extra insurance against Pippins being used as "cheap Macs," Apple added a signing check to the startup process that verifies that a particular boot CD has been authorized for use on the Pippin.

Bandai: "It's not a Mac. We swear."

There was nothing particularly novel about this approach in 1996, and in fact it's still in use today by almost all video game console platforms. Ever since the Nintendo Entertainment System's release in 1985, most consoles have shipped with some kind of protection against running unlicensed software. Atari's 7800 ProSystem from 1986 was the first video game console (that I know of) to use a boot-time signature check (with similarities to the RSA algorithm the Pippin would use ten years later). Like the Pippin and all major disc-based consoles that came after it, all 7800 titles had to be digitally signed for release. Given the computing power required in those days to crack a digital cryptographic signature, and the computing power typically available to the general public, these strategies were, for their time, mostly effective at preventing unlicensed software from affecting a platform's brand and public image during its supported lifetime, for better or worse.

Reverse-engineering and circumventing the Pippin's boot security wasn't easy, but with the exception of deducing Apple's private RSA key, Pippin Kickstart could have been developed using tools and documentation available in the late 90s and early 2000s. I am often nostalgic for the Mac games of my youth and feel that in modern times they deserve to be played on a "real" video game system on a big-screen TV. Thus the idea of hacking an "Apple" video game system to let me do just that appealed to me. Given the Pippin's place in history and how little attention it has received compared to homebrew efforts for systems from Atari, Nintendo, Sega, Sony, and Microsoft, I was surprised at the positive reception and interest Pippin Kickstart got when I first released it in June 2019. That folks are beginning to dip their toes into producing homebrew for this relatively obscure platform goes above and beyond my expectations; I'm absolutely delighted to have enabled this and I hope it continues.

It has been 25 years since the introduction of the Bandai Pippin. In that time, the Internet has exploded with a force that hardly anybody could have predicted back then, and Apple itself has gone from a relatively small player in the computer industry to one of the largest consumer electronics companies in the world. (Bandai is still doing OK, too.) The Internet has brought together fans of gaming from all over the world and from all kinds of backgrounds, fostering communities that celebrate video games and their technology from the mainstream to the obscure. The classic systems of yesteryear may have been forgotten by major retail outlets, but that doesn't mean they have been forgotten by fans and enthusiasts, no matter how obscure or commercially unsuccessful. Nostalgia is a powerful drug, and thanks to it I think there will always be an audience for new developments targeting these vintage consoles, by amateurs and professionals alike. These days, "retro" is cool, and I'm happy to contribute to the zeitgeist in my own small way.

If I've done anything to help rekindle some gaming nostalgia among fellow retro gaming fans, then it's all worth the while. 🙂

My Ken Williams story

Not All Fairy Tales Have Happy Endings by Ken Williams

I picked up Ken Williams’s book Not All Fairy Tales Have Happy Endings back in October and read it over my Thanksgiving break. I recommend it, especially if you’re interested in the history of video games or software. Even at 530 pages, it’s a very quick read, and I was able to get from start to finish in just a day. It’s a good complement to the Sierra chapters in Steven Levy’s book Hackers, which I also recommend for tech history buffs.

The house that Ken and Roberta built

For those unfamiliar, Ken Williams founded Sierra On-Line in 1979 with his wife Roberta. After Roberta conceived of the game Mystery House for the Apple ][ (marking the invention of the graphical adventure genre), Ken programmed the game’s “engine” while Roberta fleshed out the game’s world, doing both the writing and illustrations. The pair marketed the game by demonstrating it at local computer stores, where store owners would thereafter offer the game for sale. Sierra grew from a husband-and-wife team into a multimillion-dollar powerhouse by the mid-to-late 1990s.

I’ve never met either of the Williamses. But in 2017, I exchanged a few e-mails with Al Lowe, one of Sierra’s most prolific game designers.

This story is mentioned briefly in Ken’s book (on page 205), but the longer version goes like this: Leading up to its launch in January 1983, Apple was hip-deep in development of their first computer built around a graphical user interface. Steve Jobs wanted to call this new computer the “Lisa,” after his daughter, but the legal folks at Apple quickly discovered that Sierra was offering an assembler for the Apple ][ with the same name. Trademark law being what it is, Apple couldn’t easily go ahead and use the “Lisa” name in the computer field without risking infringing on the trademark Sierra already owned.

So one day in 1981 or 1982, Ken Williams got a call from Apple. In exchange for the rights to use the “Lisa” name on Apple’s newest computer, Ken would receive several brand-new Lisa computers—each worth $10,000 at launch in 1983—as well as a few prototypes of a top-secret new machine in the works at Apple (this turned out to be the Macintosh). The Lisas arrived, and a few months later came the promised Mac prototypes, which Sierra used to develop Frogger, among other early Mac games.

Along with four Mac prototypes came a Picasso-style “Macintosh” lamp, usually designated for Mac dealers. Ken was not exactly known for keeping a tidy office (I can relate), so the Macintosh lamp became one of many items scattered about his space. Several months later, Al was in Ken’s office, noticed the lamp on the floor, and inquired about it… only to be shocked to learn that Ken was just going to throw it away. Al successfully negotiated for the lamp, which found a new home in his personal office, where it lived for almost 33 years.

Al offered up the lamp for sale in 2017, and given its history, I bought it. It was on my desk at work for a while but now lives in my office perched—appropriately—atop my Lisa.

It was always obvious which desk was mine.

I promised Al and myself that the lamp would stay in the games industry. So far I’ve kept my word, and I foresee no reason to break that promise any time soon.

Digitizing Old 8mm Tapes

It’s astounding to think back and consider how much technological progress has occurred in just the past 15 years. Most folks today carry a smartphone in their pocket everywhere they go, and a great many of those smartphones have powerful cameras built in capable of recording multiple hours in high definition. Pair this ability with low-cost video editing software—some of which comes at no cost at all—and far more people today have the tools to practice shooting, editing, compositing, and rendering professional-looking videos on a modest budget.

My personal experience with photography began around age 7 shooting on 110 film using a small “spy” camera I got as a gift. My dad’s Sony CCD-V5 was bulky, heavy, and probably expensive when he bought it around 1987, so he was reluctant to let me or my sister operate it even under his supervision, let alone borrow it to make our own films by ourselves. As a consequence, my sister and I kept ourselves entertained by making audio recordings on much cheaper audio cassette hardware and tapes—we produced an episodic “radio show” starring our stuffed animals long before the podcast was invented. Though my sister and I took good care of our audio equipment, Dad stuck to his guns when it came to who got to use the camcorder, but he would sometimes indulge us when we had a full production planned, scripted, and rehearsed. Video8 tapes were expensive, too, and for the most part Dad reserved their use for important events like concerts, school graduations, birthdays, and family holidays.

Sony CCD-V5 camcorder
I remember it being a lot bigger.

I went off to college and spent a lot of time lurking the originaltrilogy.com forums. It was here that not only did I learn a lot about the making and technical background of the Star Wars films (a topic I could blog about ad nauseam), but I also picked up a lot about video editing, codecs, post-production techniques, and preservation. OT.com was and still is home to a community of video hobbyists and professionals, most of whom share a common love for the unreleased “original unaltered” versions of the Star Wars trilogy. As such, many tips were/are shared as to how to produce the best “fan preservations” of Star Wars and other classic films given the materials available, while sacrificing as little quality as possible.

I bought my dad a Sony HDR-CX100 camcorder some years ago to supplement his by-then affinity for digital still cameras—he took it to Vienna and Salzburg soon after and has since transitioned to shooting digital video mostly on his iPhone. But the 8mm tapes chronicling my family’s milestones over the first 25 years of my life continued to sit, undisturbed, in my folks’ cool, dry basement. The oldest recordings I’ve found on them so far date back to 1988; those recordings are over 30 years old, so the tapes must be at least that age.

8mm video tape does not last forever, but making analog copies of video tape incurs generational loss each time a copy is dubbed. On the other hand, a digital file can be copied as many times as one wants without any quality loss. All I need is the right capture hardware, appropriate capture software, enough digital storage, and a way to play back the source tapes, and I can preserve one lossless digital capture of each tape indefinitely. The last 8mm camcorder my dad bought—a Sony CCD-TR917—still has clean, working heads and can route playback of our existing library of tapes through its S-video and stereo RCA outputs. This provides me with the best possible quality given how they were originally shot.

Generally with modern analog-to-digital preservation, you want to losslessly capture the raw source at a reasonably high sample rate with as little processing done to the source material as possible, from the moment it hits the playback heads to the instant it’s written to disk. Any cleanup can be done in post-production software; in fact, as digital restoration technology improves, it is ideal to have a raw, lossless original available to revisit with improved techniques. For this project, I am using my dad’s aforementioned Sony CCD-TR917 camcorder attached directly to the S-video and stereo audio inputs of a Blackmagic Intensity Pro PCIe card. The capturing PC is running Debian Linux and is plugged into the same circuit as the camcorder to avoid possible ground loop noise.

Since my Debian box is headless, I’m not interested in bringing up a full X installation just to grab some videos. Therefore I use the open source, command-line based bmdtools suite—specifically bmdcapture—to do the raw captures from my Intensity Pro card. I do have to pull down the DeckLink SDK in order to build bmdcapture, which does have some minor X-related dependencies, but I have to pull down the DeckLink software anyway for Linux drivers. I invoke the following from a shell before starting playback on the camcorder:

$ ./bmdcapture -C 0 -m 0 -M 4 -A 1 -V 6 -d 0 -n 230000 -f <output>.nut

The options passed to bmdcapture configure the capture as follows:

  • -C 0: Use the one Intensity Pro card I have installed (ID 0)
  • -m 0: Capture using mode 0; that is, 525i59.94 NTSC, or 720×486 pixels at 29.97 FPS
  • -M 4: Set a queue size of up to 4GB. Without this, bmdcapture can run out of memory before the entire tape is captured to disk.
  • -A 1: Use the “Analog (RCA or XLR)” audio input. In my case, stereo RCA.
  • -V 6: Use the “S-Video” video input. The S-video input on the Intensity Pro is provided as an RCA pair for chroma (“B-Y In”) and luma/sync (“Y In”); an adapter cable is necessary to convert to the standard miniDIN-4 connector.
  • -d 0: Fill in dropped frames with a black frame. The Sony CCD-TR917 has a built-in TBC (which I leave enabled since I don’t own a separate TBC), but owing to the age of the tapes, there is an occasional frame drop.
  • -n 230000: Capture 230000 frames. At 29.97 FPS, that’s almost 7675 seconds, which is a little over two hours. Should be enough even for full tapes.
  • -f <output>.nut: Write to <output>.nut in the NUT container format by default, substituting the tape’s label for <output>. The README.md provided with bmdtools suggests sticking with the default, and since FFmpeg has no trouble converting from NUT and I’ve had no trouble capturing to that format, I leave the output file format alone.
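If you're curious where 230000 comes from, the frame-count arithmetic is easy to sanity-check. A quick Python sketch (the function names here are mine):

```python
from fractions import Fraction
from math import ceil

# NTSC's frame rate is exactly 30000/1001 fps (~29.97).
NTSC_FPS = Fraction(30000, 1001)

def frames_for(seconds):
    """Frames needed to cover a capture of the given length."""
    return ceil(seconds * NTSC_FPS)

def duration_of(frames):
    """Capture length, in seconds, of a given frame count."""
    return float(frames / NTSC_FPS)

# -n 230000 comfortably covers a full two-hour tape...
assert frames_for(2 * 60 * 60) <= 230000
# ...and works out to a little over two hours of footage.
assert round(duration_of(230000)) == 7674
```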

Once I have my lossless capture, I compress the .nut file using bzip2, which can shrink the file to as little as a quarter of its original size depending on how much of the tape is filled. I then create parity data on the .bz2 archive using the par2 utility, and put my compressed capture and parity files somewhere safe for long-term archival storage. 🙂

My Windows-based Intel NUC is where I do most of my video post-production work. It lacks a PCIe slot, so I can’t capture there, but that’s fine because at this point my workflow is purely digital and I only have to worry about moving files around. My tools of choice here are AviSynth 2.6 and VirtualDub 1.10.4, but since AviSynth/VirtualDub are designed to work with AVI containers, I first convert my capture from the NUT container to the AVI container using FFmpeg:

$ ffmpeg.exe -i <output>.nut -vcodec copy -acodec copy <output>.avi

The options passed to FFmpeg are order-dependent and direct it to do the following:

  • -i <output>.nut: Use <output>.nut as the input file. FFmpeg is smart and will auto-detect its file format when opened.
  • -vcodec copy: Copy the video stream from the input file’s container to the output file’s container; do not re-encode.
  • -acodec copy: Likewise for the audio stream, copy from the input file’s container to the output file; do not re-encode.
  • <output>.avi: Write to <output>.avi, again substituting my tape’s label for <output> in both the input and output filenames.

A note about video containers vs. video formats

Pop quiz! Given a file with the .mov extension, do you know for sure whether it will play in your media player?

Files ending with .mov, .avi, .mkv, and even the .nut format mentioned above are “container” files. When you save a digital video as a QuickTime .mov file, the .mov file is just a wrapper around your media, which must be encoded using one or more “codecs.” Codecs are small programs that can encode and/or decode audio or video, and they must be chosen at the time you save your movie. QuickTime files can wrap a great many codecs: Motion JPEG, MPEG, H.264, and Cinepak, just to name a few. They’re a bit like Zip files, except that instead of files inside you have audio and/or video tracks, and there’s no compression other than what’s already done by the tracks’ codecs. Though Apple provides support in QuickTime for a number of modern codecs, older formats have been dropped over time, and so any particular .mov file may or may not play… even using Apple’s own QuickTime software! Asking for a “QuickTime movie” is terribly vague—a QuickTime .mov file may not play properly on a given piece of hardware if support for a codec it contains is missing.

AVI, MKV, and MP4 are containers, too—MP4 is in fact based on Apple’s own QuickTime format. But these are still just containers, and a movie file is nothing without some media inside that can be decoded. Put another way, when I buy a book I’m often offered the option of PDF, hardcover, or paperback form. But if the words contained therein are in Klingon, I still won’t be able to read it. When asked to provide a movie in QuickTime or AVI “format,” get the specifics—what codecs should be inside?

Now that I have an AVI source file, I can open it in VirtualDub. True to its name, VirtualDub’s interface is reminiscent of a dual cassette deck ready to “dub” from one container to another. It isn’t as user-friendly as, say, Premiere or Resolve when it comes to editing and compositing, but what it lacks in usability it makes up for in flexibility. In particular, VirtualDub is designed to run a designated range of source video through one or more “filters,” encoding to one of several output codecs available at the user’s discretion via Video for Windows and/or DirectShow. If no filters are applied, VirtualDub can trim a video (and its audio) without re-encoding—great for preparing source footage clips for later editing or other processing.

This slam dunk in 1988 marked the peak of my basketball career.

Though the Sony CCD-TR917 has a built-in video noise reduction feature, I explicitly turn it off before capturing, because one of the filters I have for VirtualDub is “Neat Video” by ABSoft. It’s the temporal version of their “Neat Image” Photoshop filter for still images, which I used most recently to prepare a number of stills for Richard Moss’s The Secret History of Mac Gaming. It’s a very intelligent program that has a lot of knobs and dials to really tune in the noise profile you want to filter out, so I was equally delighted to find that ABSoft’s magic works on videos too. Luckily they offer a plugin built to work with VirtualDub, so I didn’t hesitate to buy it as a sure improvement over the mid-90s noise reduction technology built in to the camcorder.

Most of the aforementioned features can be found in high-end NLE applications such as Resolve—indeed I have used Resolve to edit several video projects of my own. What makes VirtualDub the “killer app” for me is its use of Windows’s built-in video playback library, and therefore its ability to work with AviSynth scripts. AviSynth is a library for Windows that, once installed, lets AviSynth “script” files (with the .avs extension) be treated as AVI files anywhere Windows is prompted to play one using its built-in facilities. The basic AviSynth scripting language is procedural, without loops or conditionals, but it does retain the ability to work with multiple variables at runtime and to organize frequently-used sequences into subroutines. Its most common use is to form a filter chain starting with one or more source clips and ending with a final output clip. When “played back,” the filter chain is evaluated for each frame, but this is transparent to Windows, which instead just sees a complete movie as though it were already rendered to an AVI container.

Combined with VirtualDub, AviSynth allows me to write tiny scripts to do trims and conversions with frame-accurate precision, then render these edits to a final output video. Though AviSynth should be able to invoke VirtualDub plugins from its scripting language, I couldn’t figure out how to get it to work with Neat Video, so I did the next best thing: I created a pair of AviSynth scripts; one to feed to Neat Video, and one to process the output from Neat Video. The first script looks like this:

AviSource("1988 Christmas.avi")
# Invoke Neat Video
ConvertToRGB(matrix="Rec601", interlaced=true)

Absent an explicit input argument, each AviSynth instruction receives the output of the previous instruction as its input. The Neat Video plugin for VirtualDub expects its input to be encoded as 8-bit RGB. VirtualDub will automatically convert the source video to what Neat Video expects if it isn’t already in the proper format, but since I’m not sure exactly how VirtualDub does that automatic conversion, I retain control over the process by doing the conversion myself from YUV to RGB using the Rec.601 matrix. I also know that my source video comes from an interlaced analog NTSC source; VirtualDub doesn’t know that unless I explicitly say so.
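For reference, the Rec.601 conversion amounts to a weighted sum for luma plus two scaled color differences. Here's a minimal Python sketch using full-range values in [0, 1]—real studio-swing video adds offsets and scaling that I'm ignoring here:

```python
def rgb_to_yuv(r, g, b):
    """Rec.601 luma weights plus scaled B-Y and R-Y color differences."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Invert the color differences, then solve the luma equation for green."""
    b = y + u / 0.492
    r = y + v / 0.877
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

# Neutral gray carries no chroma: U and V both come out zero.
y, u, v = rgb_to_yuv(0.5, 0.5, 0.5)
assert abs(y - 0.5) < 1e-9 and abs(u) < 1e-9 and abs(v) < 1e-9
```

The conversion is lossless in the mathematical sense—it's the chroma subsampling discussed later, not the YUV encoding itself, that throws information away.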

I render this intermediate video to an AVI container using the Huffyuv codec. Huffyuv is a lossless codec, meaning it can compress the video without any generational loss. Despite its name, Huffyuv is perfectly capable of keeping my video encoded as RGB. I can’t do further AviSynth processing on the result from Neat Video until I load it into my second AviSynth script, so I’m happy that its output can be unchanged from one script to the next.

Looks like I picked the wrong week to quit huffing YUV.

Color encodings and chroma subsampling

Colors reproduced by mixing photons can be broken down into three “primary colors.” We all learned about these in grade school: red, green, and blue. Red and blue make purple, blue and green make turquoise, all three make white, and so on.

On TV screens, things are a bit more complicated. Way back when, probably before you were born, TV signals in the United States only came in black and white, and TVs only had one electron gun responsible for generating the entire picture. The picture signal consisted mostly of a varying voltage level across each of 525 lines, indicating how bright or dark the picture should be at each point along that particular line. The history of the NTSC standard used to transmit analog television in the United States is well-documented elsewhere on the Internet, but the important fact here is that in 1953, color information was added to the TV signal broadcast to televisions conforming to the NTSC standard.

One of the challenges in adding color to what was heretofore a strictly monochrome signal was that millions of black and white TVs were already in active use in the United States. A TV set was still extremely expensive in the early 1950s, so rendering all the active sets obsolete by introducing a new color standard would have proven quite unpopular. The solution—similar to how FM stereo radio was later standardized in 1961—was to add color as a completely optional, but still integral, signal of monochrome TV. The original black and white signal—now known as “luma”—would continue to be used to determine the brightness or “luminance” of the picture at any particular point on the screen, while the new color stream—known as “chroma”—would only transmit the color or “chrominance” information for that point. Existing black and white TVs would only know about the original “luma” signal, and so would continue to interpret it as a monochrome picture, whereas new color TVs would be aware of and overlay the new “chroma” stream on top of the original “luma” stream to produce a rich, vibrant, color picture. All of this information still had to fit into a relatively limited bandwidth signal, designed in the early 1940s to be transmitted through the air with graceful degradation in poor conditions.

The developers of early color computer monitors, by contrast, did not need to worry about maintaining backward compatibility with black and white American television, nor did they need to concern themselves with adopting a signal format that was almost 50 years old by that point. It should be little surprise, then, that computer monitors generate color closer to how we learned in grade school. Computer monitors in particular translate a signal describing separate intensities of red, green, and blue to a screen made up of triads of red, green, and blue dots of light. This signal, describing what’s known as “RGB” color (for Red, Green, and Blue), comes partly from that aforementioned color theory of mixing primaries, but also historically from those individual color signals more or less directly driving the respective voltages of three electron guns inside a color CRT. Both color TVs and computer monitors have three electron guns for mixing red, green, and blue primary colors; the main differentiator is the way color information is encoded before it enters the set.

Whereas RGB is the encoding scheme by which discrete Red, Green, and Blue values are represented, color TV uses something more akin to what’s known as “YUV.” YUV doesn’t really stand for anything—the “Y” component represents Luma, and the “UV” represents a coordinate into a 2D color plane, where (1, 1) is magenta, (-1, -1) is green, and (0, 0) is gray (the “default” value for when only the “Y” component is present, such as on black and white TVs). In NTSC, quadrature amplitude modulation is used to convey both of the UV components on top of the Y component’s frequency—I don’t know exactly what quadrature amplitude modulation is either, but suffice it to say it’s a fancy way of conveying two streams of information over one signal. 🙂
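For the similarly curious, here's a toy Python sketch of the idea: two values ride on the same carrier 90° out of phase, and multiplying the combined signal by each carrier and averaging over whole cycles recovers them. The 10 Hz carrier and sample counts are made up for illustration—NTSC's actual color subcarrier sits at about 3.58 MHz:

```python
import math

def qam_encode(u, v, t, fc):
    """Two chroma values share one carrier, 90 degrees out of phase."""
    return u * math.cos(2 * math.pi * fc * t) + v * math.sin(2 * math.pi * fc * t)

def qam_decode(samples, times, fc):
    """Multiply by each carrier and average over whole cycles to recover u, v."""
    n = len(samples)
    u = 2.0 / n * sum(s * math.cos(2 * math.pi * fc * t) for s, t in zip(samples, times))
    v = 2.0 / n * sum(s * math.sin(2 * math.pi * fc * t) for s, t in zip(samples, times))
    return u, v

fc = 10.0                                  # toy carrier frequency in Hz
times = [k / 1000.0 for k in range(1000)]  # one second: exactly 10 carrier cycles
signal = [qam_encode(0.3, -0.5, t, fc) for t in times]

u, v = qam_decode(signal, times, fc)
assert abs(u - 0.3) < 1e-6 and abs(v + 0.5) < 1e-6  # both values recovered
```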

An interesting quirk of how the human visual system works is that we have evolved to be much more sensitive to changes in brightness than in color. Some very smart people have sciencey explanations as to why this is, but ultimately we can thank our early ancestors for this trait—being able to detect the subtlest of movements even in low light made us expert hunters. Indeed, when the lights are mostly off, most of us can still do a pretty good job of navigating our surroundings (read: hunting for the fridge) despite there being limited dynamic range in the brightness of what we can see.

Note that having a higher sensitivity to brightness vs. color does not mean humans are better at seeing in black and white. It merely means that we notice the difference between something bright and something dark better than we can tell if something is one shade of red vs. a different shade of red. In addition, humans are more sensitive to orange/blue than we are to purple/green. These facts came in very handy when figuring out how to fit a good-enough-looking color TV signal into the bandwidth already reserved (and used) for American television. Because we are not as sensitive to color, the designers of NTSC color TV could get away with transmitting less color information than what’s in the luma signal. By reducing the amount of bandwidth for the purple/green range, color in NTSC can still be satisfactorily reproduced; to accomplish this, the designers of NTSC adopted a variant of YUV called “YIQ.” In YIQ, the Y component is still luma, but the “IQ” represents a new coordinate into the same 2D color plane as YUV, just rotated slightly so that the purple/green spectrum falls on the axis with the smaller range. Nowadays, with the higher bandwidth digital TV provides, we no longer need to encode using YIQ, but due again to the way our visual system responds to color and the technical benefits it provides, TV/video is still encoded using YUV, albeit with a fuller chroma representation.

What does all of this have to do with digitizing old 8mm tapes?

Each pixel on a modern computer screen is represented by at least three discrete values for Red, Green, and Blue. Though NTSC defines 525 lines per frame (~480 visible), being an analog standard means there really isn’t such a thing as “pixels” horizontally. However, most capture cards are configured to sample 720 points along each line of NTSC video, forming what we would call 720 “pixels” per line. But two important details must be noted:

  • Though 720 samples are enough to effectively capture the entire line, only 704 of them are typically visible and furthermore, NTSC TV is designed for a 4:3 aspect ratio. That is, if the picture is 480 square dots vertically, then it must be (480 * 4) / 3 == 640 square dots horizontally, or the picture will appear squished and everything will look “fat.” A captured NTSC frame at 720×480 will need horizontal scaling plus cropping to 640×480 to be displayed with the correct aspect ratio on a computer screen with square dots.
  • 720 samples are enough to capture each line of the luma component. The chroma component is a whole other story.
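Those two bullets boil down to a little arithmetic: drop the non-visible samples, then rescale so the 4:3 picture displays with square pixels. A quick sketch (my own helper, not from any capture tool):

```python
def display_size(captured_w=720, captured_h=480, visible_w=704, aspect=(4, 3)):
    """Crop a captured NTSC frame to its visible samples, then compute the
    square-pixel display width implied by the intended aspect ratio."""
    cropped = (visible_w, captured_h)               # e.g. 720x480 -> 704x480
    square_w = captured_h * aspect[0] // aspect[1]  # 480 * 4 / 3 == 640
    return cropped, (square_w, captured_h)

# A 720x480 capture crops to 704x480, then scales to 640x480 for display.
```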

Remember how the luma and chroma components are encoded separately, but that some of the chroma information can be discarded to save space and we’re not likely to notice? Turns out, computers can use that technique too to reduce bandwidth usage and save disk space. RGB is just how your computer talks to its display, but there’s no rule that says computer video files need to be encoded as RGB. We can encode them as YUV, too, and this is where the term chroma subsampling comes in.

While we always want to sample all 704 visible “pixels” of luma information, we can often get away with capturing 50% or even as little as 25% of the chroma information for a given line of picture. The ratio of sampled chroma data to luma data is called the “chroma subsampling” ratio and is indicated by the notation Y:a:b, where:

  • Y: the number of luma samples per line in the conceptual two-line block that a and b are measured against. This is almost always 4.
  • a: the number of chroma samples mapped over those Y dots in the first line. This is at most Y.
  • b: the number of times those a chroma samples change in the second line. This is at most a.
Trans rights!

A chroma subsampling ratio of 4:4:4 means that every luma sample has its own distinct chroma sample; this ratio captures the most data and therefore provides the highest resolution. The value of a is directly proportional to the horizontal chroma resolution, and the value of b is directly proportional to the vertical chroma resolution. NTSC is somewhere between 4:1:1 and 4:2:1 (consumer tape formats even less), whereas the PAL standard in Europe is closer to 4:2:0 (half the vertical chrominance resolution as NTSC). As the values of a and b shrink, the overall chroma resolution decreases, and depending on your source picture it may be more or less noticeable.
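One way to make the a and b values concrete is to count bits over the conceptual 4×2 block: 8 luma samples plus (a + b) chroma positions, each chroma position carrying both a U and a V component. A little calculator (my own, assuming 8 bits per component):

```python
def bits_per_pixel(y=4, a=2, b=2, bit_depth=8):
    """Average storage cost of a Y:a:b chroma subsampling ratio, measured
    over the conceptual y-wide, two-line block."""
    luma_samples = 2 * y            # two lines of y luma samples each
    chroma_samples = 2 * (a + b)    # (a + b) positions, U and V components each
    return bit_depth * (luma_samples + chroma_samples) / luma_samples

# 4:4:4 -> 24 bpp, 4:2:2 -> 16 bpp, 4:2:0 and 4:1:1 -> 12 bpp
```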

Annie is always a willing test subject.

As the chroma resolution on this photo of Annie is diminished, artifacts become apparent which should remind you of heavily-compressed JPEG images. This is no coincidence—the popular JPEG image codec and all variants of the popular MPEG video codec use YUV encoding and discard chroma data as a form of compression. This is one reason why encoding as YUV is popular with real-world images and video—an RGB representation can be reconstructed by decoding the often-smaller YUV information, much like a color CRT has to do to get a YUV-encoded picture on its screen through its RGB electron guns. A ratio of 4:2:0 is popular with the common H.26x video codecs (though H.26x can go up to a full 4:4:4), and several professional codecs indicate their maximum chroma subsampling ratios directly in the name of the codec. DNxHR 444 and ProRes 422 come to mind. What these numbers represent is how much chroma information can be preserved from the original, uncompressed image, but looking at it another way, they represent how much chroma information is discarded during compression.

Breaking it down further, we can see how various chroma subsampling ratios affect the chroma information saved (or not) in this close-up of Annie’s toys. Notice the boundary between the pink and green halves of her ball, and the rightmost edge of her red and white rolling cage.

With my analog NTSC 8mm tapes having chroma resolution of at most 50% of the luma resolution, I need to capture them with at least a 4:2:0 chroma subsampling ratio. That 50% could be in the horizontal direction, the vertical direction, or both, but fortunately bmdcapture grabs YUV video at a 4:2:2 chroma subsampling ratio from my Blackmagic Intensity Pro card by default, which is enough to sample all the chroma information in my NTSC source. This is fine for my 8mm tapes, but if I want to capture a richer video signal in the future (like, say, a digital source over HDMI), I would probably want to investigate capturing at a full 4:4:4 ratio.

I tell Neat Video to clean up my interlaced video with a temporal radius of 5 (the maximum) and a noise profile I generated in advance for the Sony CCD-TR917 I’m using. With these parameters, it takes about eight hours to clean up two hours of raw footage on my PC, so I usually start it in the morning and have it running in the background while I work during the day. Once it’s finished, I’m ready to feed its intermediate result to my second and final AviSynth script to get my final output:

AviSource("1988 Christmas NV.avi")
# Neat Video doubled every frame; keep one of each pair
SelectEven()
# Convert the RGB intermediate back to YUV using the same Rec.601 matrix
ConvertBackToYUY2(matrix="Rec601")
QTGMC(Preset="Placebo", MatchPreset="Placebo", MatchPreset2="Placebo", NoiseProcess=0, SourceMatch=3, Lossless=2)
# Trim the fat
Trim(329, end=222768)
# Crop to 704x480
Crop(4, 0, -12, -6)

For some reason, Neat Video produces two identical frames for every input frame I provide in the first script, so the first step in the second script tells AviSynth to disregard those duplicate frames and assume we’re dealing with a 59.94 Hz interlaced NTSC video. I then convert Neat Video’s intermediate RGB result back to its original YUV encoding, once again using the Rec.601 matrix corresponding to my video’s NTSC format.


YUY2 is a digital representation of YUV-encoded video, using 16 bits per pixel. When capturing with a chroma subsampling ratio of 4:2:2, each chroma sample is shared with two luma samples. YUY2 encodes this by reserving 8 bits for each of four luma samples (“Y”), then 16 bits for each of the two chroma samples (8 bits for each of the two components of “UV”). When you add it all up, 8 bits x 4 luma samples + 16 bits x 2 chroma samples == 64 bits. Divide that by the four luma samples and you get 16 bits per pixel.
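That packing can be sketched directly: each pair of pixels shares one (U, V) pair, stored as Y0 U Y1 V. A hypothetical packer (my own, not tied to any particular capture API):

```python
def pack_yuy2(lumas, chromas):
    """Pack luma samples and shared (u, v) pairs as Y0 U Y1 V byte groups."""
    assert len(lumas) == 2 * len(chromas), "4:2:2 shares one chroma pair per two pixels"
    out = bytearray()
    for (y0, y1), (u, v) in zip(zip(lumas[::2], lumas[1::2]), chromas):
        out += bytes([y0, u, y1, v])
    return bytes(out)

# Four pixels, two shared chroma pairs -> 8 bytes == 16 bits per pixel.
packed = pack_yuy2([16, 17, 18, 19], [(128, 128), (120, 136)])
```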

Before calling ConvertToRGB in the first script, our original raw capture was encoded as uncompressed YUV 4:2:2. ConvertToRGB does a bit of math to map the YUV values into RGB space given the conversion matrix specified. If you are converting back to YUY2 from an RGB clip created by AviSynth, using ConvertBackToYUY2 can be more effective because it is allowed to make assumptions about the algorithm initially used to convert to RGB, given that both functions belong to AviSynth. In particular, ConvertBackToYUY2 does a point resampling of the chroma information instead of averaging chroma values as ConvertToYUY2 would, resulting in a chroma value closer to that in the original pre-RGB source clip.
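A toy illustration of the difference between the two strategies (my own numbers, not AviSynth's actual resampling kernels): if the chroma was originally upsampled by duplication, point sampling recovers the source values exactly, while averaging only does when neighboring samples happen to match.

```python
def subsample_chroma(samples, average=True):
    """Collapse pairs of chroma samples: average them, or point-sample the first."""
    pairs = zip(samples[::2], samples[1::2])
    if average:
        return [(a + b) / 2 for a, b in pairs]  # ConvertToYUY2-style averaging
    return [a for a, _ in pairs]                # ConvertBackToYUY2-style point resample

# Chroma that was doubled up by duplication round-trips exactly under point sampling:
original = [100, 140]
upsampled = [100, 100, 140, 140]
```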

After cleaning up the noise with Neat Video, to prepare these videos for playback on a progressive scan display such as a PC, mobile device, or modern TV, I still need to apply a deinterlacing filter. Each NTSC frame is split into two “fields”—the first field alternates every other line, and the second field has the other half. NTSC only has enough bandwidth to transmit 29.97 frames per second, so interlacing is a trick used to send half-sized “fields” at twice the rate. CRT TVs have a slower response time than modern digital TVs, so the net effect of interlacing on CRTs is that one field “blurs” into the next, producing the illusion of a higher rate of motion. Nowadays when TVs receive an analog NTSC signal they’re smart enough to recognize that it’s interlaced and apply a simple deinterlacing filter. But such filters are designed to work in real time, and without one, an interlaced source will appear to stutter when compared to its deinterlaced 60 Hz counterpart. A free third-party AviSynth script called QTGMC can do a much higher-quality job of deinterlacing ahead of time, with the caveat that I need to apply this script in advance and offline, that is, not in real time.
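Conceptually, an interlaced frame is just its two fields woven together, and the crudest deinterlacers either "weave" the fields back into one frame (fine for static shots) or "bob" each field up to full height (fine for motion). A minimal sketch of those ideas, nothing like QTGMC's motion-compensated approach:

```python
def split_fields(frame):
    """Separate an interlaced frame (a list of scanlines) into its two fields."""
    return frame[0::2], frame[1::2]

def weave(field_a, field_b):
    """Re-interleave two fields into one full-height frame."""
    frame = []
    for a, b in zip(field_a, field_b):
        frame += [a, b]
    return frame

def bob(field):
    """Naive line doubling: stretch one field back to full frame height."""
    return [line for line in field for _ in (0, 1)]
```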

Here’s something to know about me. When it comes to filtering and rendering media offline, it doesn’t matter to me all that much how long it takes. The only deadline I have with this stuff is the eventual deterioration of the source media (and my own mortality, I suppose). For the moment, I have time and computing cycles to burn. As with the eight-hour Neat Video process, I have no qualms about backgrounding a rendering queue and letting it run overnight or even well into the next day. So when QTGMC and other tools advertise “Placebo” presets, I use them. I sleep better at night knowing there’s nothing more I could do to get a better result out of my tools, however minuscule. 🙂

I finally trim the start and end of my raw capture, leaving only the actual content behind. The last step crops some “garbage” lines from the bottom (VBI data I don’t need to save, but my Blackmagic card captures anyway), along with some black bars on the left and right sides of the image. Not surprisingly, the final dimensions of the video come out to 704×480: the visible part of a digitally-sampled NTSC frame. I render the whole thing out as a lossless Huffyuv-encoded AVI once again, saving the final lossy encoding step for FFmpeg.

At this point, I have a couple of directions I can take. I can prepare my final video for export to DVD, or I can split my video into one or more video files suitable for uploading to my Plex server. I can do both if I want. In either case, I need to prepare chapter markers so FFmpeg knows how to split the final, monolithic video into its constituent clips. I use Google Sheets for this; I have one sheet with columns for “Start time,” “Duration,” “End time,” and “Label.” I put the “Start time” column in the HH:mm:ss.SSS format that FFmpeg expects for its timecodes, and I express the “Duration” column in seconds with decimals. When I’ve marked all the chapters (relying on VirtualDub’s interface for precise frame numbers and timing), I pull the “Start time” and “Duration” columns into a new sheet and export to a CSV file.
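The gist of that CSV-driven splitting can be sketched in a few lines: read each (start, duration, label) row and emit one FFmpeg invocation per clip. A hypothetical Python take on the same idea (file names and encoder flags are illustrative only):

```python
import csv
import io

def ffmpeg_commands(csv_text, source="final.avi"):
    """Turn 'start,duration,label' CSV rows into per-clip FFmpeg command lines."""
    cmds = []
    for start, duration, label in csv.reader(io.StringIO(csv_text)):
        cmds.append(
            f'ffmpeg -ss {start} -i "{source}" -t {duration} '
            f'-c:v libx264 -pix_fmt yuv420p "{label}.mp4"'
        )
    return cmds

# Hypothetical chapter rows in the HH:mm:ss.SSS / seconds format described above.
rows = "00:05:29.000,93.5,Christmas Morning\n00:07:03.500,41.2,Opening Presents"
```

The `-pix_fmt yuv420p` flag is also what keeps the output at the 4:2:0 chroma subsampling ratio that mobile devices expect.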

I wrote a handful of Windows batch scripts that take this CSV file, a path to FFmpeg, and a path to my final video, and split my final render into a collection of H.264-encoded MP4 files, or a series of MPEG files encoded for NTSC DVD. In the DVD version, I also generate an XML file suitable for import into dvdauthor. I found that neither my iPhone nor my Android devices like H.264 files encoded with a chroma subsampling ratio higher than 4:2:0, so I wrote a script specifically to encode videos destined for mobile devices. These scripts are longer than makes sense to post inline here, so here are links to each one if you’re curious and would like to use them:

Capturing and digitizing my family’s movie memories will help preserve these moments so they can be enjoyed in the years to come without wearing out the original tapes. The biggest cost to this digitization process is time, both in capturing from tape (which is always done in real time) and the offline cleanup and encoding to digital files. I still have more than a couple dozen tapes to sort through, so this blog post doesn’t signify the end of this project, but now that I have a reliable process that results in what I consider a marked improvement over the source material, I like to think I can get through the remaining tapes on a sort of “autopilot.”

I hope this post will inspire someone else to preserve their own memories.

Bonus postscript: field order

As I mentioned above, an interlaced NTSC frame is split into two “fields,” with each field alternating between even and odd lines of the full frame. But which field is sent first? The even field, or the odd field?

The answer, as one might expect, is “it depends.” For my purposes here, the fields in a 480i NTSC signal are always sent bottom field first. But a 1080i ATSC signal—as is used with modern digital TV broadcasts—sends interlaced frames top field first. If you crop the source video before deinterlacing, you have to be careful to crop in vertical increments of two lines, or you run the risk of throwing off the deinterlacer’s expected field order. For example, if line 12 is the first visible line of the first field of my picture, it’s reasonable to expect it to be the bottom field of the frame. After cropping 13 lines from the top of the frame, though, the deinterlacer will assume that line 13 starts the bottom field, when it really belongs to the top field. The deinterlacer then pairs each field with the wrong neighbor, resulting in motion that appears jerky.
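The parity rule is simple enough to state in code: an even top crop preserves the field order, an odd one flips it. For instance:

```python
def field_order_after_crop(order, lines_cropped_from_top):
    """Cropping an odd number of lines from the top swaps which field comes first."""
    if lines_cropped_from_top % 2 == 0:
        return order
    return "TFF" if order == "BFF" else "BFF"

# Cropping 12 lines keeps a BFF clip BFF; cropping 13 effectively makes it TFF.
```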

If you must crop an interlaced source by an odd number of lines, AviSynth fortunately has functions to assume a specific field order. AssumeTFF, as its name implies, tells AviSynth that an interlaced clip is Top Field First. Its complement, AssumeBFF, tells AviSynth that a clip is Bottom Field First, though for raw analog NTSC captures this function is largely unnecessary.

Not sure which order is correct for a particular clip? Try deinterlacing in one order, and if the motion looks jerky, try the other. If the motion is smooth, the order is correct.

Alternatively, avoid capturing from interlaced sources altogether. 😛

Rooting the “Pippin Classic”

It was a rhetorical question.

OK, I’ll admit that the Retroquest Super Retro Castle isn’t really a “Pippin Classic,” but it sure looks the part, and when I learned of its existence back in April, I figured it had to have at least as much power as a stock Pippin. Its advertised specs reminded me of the Raspberry Pi, leading me to speculate that maybe the Super Retro Castle is nothing more than a Raspberry Pi in a fancy Pippin-inspired case:

  • Two USB controller ports
  • HDMI output
  • microSD slot
  • DC power jack

Odds seemed good to me that it runs either a variant of Android or mainline Linux. Therefore, couldn’t I turn it into a sort of miniature Pippin console?

With a working knowledge of Linux and a little bit of hardware hacking, maybe I can. 🙂

Pippin and "Pippin Classic"
Don’t talk to me or my son ever again. 😛

Coaxing the Super Retro Castle into running user-provided code is certainly easier than cracking its inspiration. Booting it quickly reveals that it’s running a mostly-stock build of RetroArch on Linux, and booting it with the bundled microSD card inserted reveals that it’s configured to search the card for supplemental configurations. A-ha, a vector! Without making any modifications to the console itself, it is trivial to add new ROMs and even new libretro modules for emulators not built in to the Super Retro Castle. Just make sure the desired modules are built for the “armhf” architecture and the respective .lpl playlist file is configured with the correct path to the .so file under the microSD card’s /storagesd mountpoint, and you’re off to the races.
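As a concrete sketch of that playlist plumbing: recent RetroArch builds store .lpl playlists as JSON (older builds used a six-line plain-text format, so check what your firmware expects), and each item’s core_path must point at the module under the card’s mountpoint. All paths and names below are made up for illustration:

```python
import json

def playlist_entry(rom, label, core, core_name):
    """Build one RetroArch JSON playlist item (field names per modern .lpl files)."""
    return {
        "path": rom,
        "label": label,
        "core_path": core,
        "core_name": core_name,
        "crc32": "DETECT",
        "db_name": "Custom.lpl",
    }

entry = playlist_entry(
    "/storagesd/roms/game.sfc",             # hypothetical ROM path on the card
    "Example Game",
    "/storagesd/cores/snes9x_libretro.so",  # hypothetical armhf-built module
    "Snes9x",
)
```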

Bonus points: it runs RetroArch as root.

This is great if you don’t mind the Super Retro Castle’s built-in software and interface and are content to use it as a cheap emulator box. Don’t get me wrong, it’s great at what it does, but 1) I can’t read Japanese and 2) I’d like to be able to replace its built-in software with something newer, or something different entirely. To do that, I’d have to break out of its RetroArch jail and get access to its filesystem, ideally via a shell.

Rooting the Super Retro Castle, though similarly easy, requires a bit more elbow grease. Step one is getting the case apart, which is pretty straightforward: Just remove the four Phillips-head screws located underneath the rubber pads on the underside of the case:

Underside of the Super Retro Castle
Opened Super Retro Castle

Right away we notice that most of the console’s weight is concentrated in the metal heatsink affixed to the bottom of the case. But the other thing we see is that the Super Retro Castle is not a Raspberry Pi or derivative at all; it appears to be a custom logic board built around the Amlogic S905X SoC.

Overview of the Super Retro Castle logic board
And now I see with eye serene
The very pulse of the machine

Wikipedia tells me quite a bit about the S905X: It sports a quad-core ARM-based CPU clocked at 1.2 GHz, a Mali-450 GPU, and support for decoding H.264-encoded video at up to 1080p60 in hardware. Quite the chip. When implemented on the Super Retro Castle board, the S905X is supported by two Samsung K4B461646E-BCMA chips for a total of 1 GiB of RAM, and one Samsung KLMBG2JETD-B041 eMMC chip for 32 GiB of onboard storage. The latter definitely explains the wealth of preloaded ROMs available for selection right after bootup. 😉

A handful of pads next to the S905X chip suggest a JTAG interface, but what piques my interest is this small set of vias next to the microSD card slot:

UART pins on the Super Retro Castle logic board
A literal backdoor

GND, TX, RX, and VCC? That sure looks like a UART serial interface to me, and if the Linux image used is only slightly tweaked from defaults, I should expect to see a console on this interface at 115200 baud. But first, let’s solder on a header:

Header soldered to UART vias

Next, route a short cable out through the rear vent holes…

Jumper cable through rear of the Super Retro Castle

… reassemble it…

Reassembled Super Retro Castle with UART cable through back

… and now it almost vaguely resembles a Pippin dev/test kit. 😛

Side note about UARTs

It is tempting to think of a UART interface such as the one found on the Super Retro Castle as the same interface used by oldschool PC serial ports (RS-232 / RS-422). But beware: they are not the same. Not only are the voltage ranges different—between -15 and +15 V for RS-232 and -6 to +6 V for RS-422—but UARTs are TTL devices (transistor-transistor logic) that expect voltages between 0V for logic low, and either 3.3V or 5V for logic high. Attempting to drive a UART by naively wiring it to a USB serial adapter (as I initially did) therefore runs the risk of frying the UART, which in the instance of the Super Retro Castle is built in to the S905X SoC. Fortunately the Super Retro Castle has some protective circuitry somewhere, so while my initial efforts produced garbage data regardless of baud rate, I was lucky in that I could try again with a proper USB UART adapter. I picked up a μART from Crowd Supply for this purpose and I really like it so far. 🙂

UART adapter wired to the Super Retro Castle

Getting root at this point just required wiring the GND, TX, RX, and VCC pins to my μART adapter, connecting the adapter to my PC, and opening up a PuTTY instance over the COM port it provides. At 115200 baud, I get a full boot log confirming four CPU cores each clocked at 1.2 GHz, U-Boot as the bootloader, and a build of Lakka running from the internal eMMC chip mounted read-only, followed by a root shell prompt. The initial syslog also seems to indicate the presence of a network interface—this follows from the description of the S905X chip on Wikipedia, but I don’t see any unused pads on the main board suggesting the relevant pins are routed from the SoC. Next steps for me will be backing up an image of the internal filesystem before attempting to remount it read-write so I can augment it with my own provided software.
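For reference, the arithmetic behind that 115200 baud figure, assuming standard 8N1 framing (one start bit, eight data bits, no parity, one stop bit):

```python
def uart_throughput(baud=115200, data_bits=8, start_bits=1, stop_bits=1):
    """Bytes per second and per-bit time for a UART link with the given framing."""
    bits_per_frame = start_bits + data_bits + stop_bits  # 10 bits for 8N1
    return baud / bits_per_frame, 1.0 / baud             # (bytes/s, seconds per bit)

bytes_per_sec, bit_time = uart_throughput()
# 115200 baud at 8N1 -> 11520 bytes/s; each bit lasts roughly 8.7 microseconds.
```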

Maybe I can get Advanced Mac Substitute to run on it and truly turn it into a real “Pippin Classic.” 🙂

Decomposing Professional Composer

Professional Composer

I owe a lot to Mark of the Unicorn’s Professional Composer. Had my dad not encountered this program around 1985 and subsequently adopted it (and its corresponding Mac hardware) for himself two years later, I would not have grown up with Macs, possibly even computers. I certainly wouldn’t have become as familiar with music notation software, let alone music theory, as I am today. My dad tells the origin story thusly:

After Graduate School (1985 or so) I became familiar with the notation program called Professional Composer™. The program was housed in the ISU [Illinois State University] computer lab where it was run on several Macintosh 512 computers. I’ve always been something of a visual learner when it came to things like this and found I could navigate this program rather easily without having to read a manual. I began by doing easy arrangements of trumpet quartets. Early on, these programs ran on 3 1/2″ floppy disks which meant that your files couldn’t be very big before you’d need another disk. This gave way to hard drives but even then you were still limited as to how big your files were or how many files you had on the drive.

I remember asking upper-level administration in District #131 that if I bought the hardware (in this case a Mac Plus), would they buy me the software for this program? They agreed and the rest is pretty much history.

MOTU discontinued Professional Composer (hereafter referred to by the nickname “ProCo,” courtesy of my friend and fellow hacker Josh Juran) sometime after its final 2.3M revision was released in 1990. My dad used this program almost every day for 20 years(!), after which I convinced him to crossgrade to MOTU’s Composer’s Mosaic. The latter offered better MIDI playback and print layout capabilities, plus could import his by this time extensive library of ProCo files. However, neither ProCo nor Mosaic files can be fully imported into any modern music notation program. It therefore fell upon me as the family’s computer expert—and continues to even now—to ensure that the hardware powering my dad’s favorite music software continues to run despite all other advancements. As a matter of course, I have an intimate knowledge of the capabilities, requirements, and quirks of these two programs. (I have a lot of sympathy for banks and government institutions tasked with similar mandates.)

By the time ProCo 2.3M came out, hard drives were common and the ProCo application itself had long since outgrown its original home on a 400K (later 800K) boot disk. So 2.3M shipped on an 800K disk offering the option to install or remove itself to or from an attached hard drive, respectively, if launched from the master disk. Installing to the hard drive decrements an “install count” on the master disk, allowing the user to use one master disk to install ProCo to one hard drive at a time. Merely copying the ProCo application to a hard drive isn’t enough; if the application isn’t properly installed by the master disk, launching the program from hard drive prompts the user to insert the master disk if not already present. I remember accidentally wiping out at least one of these hard drive installations from my dad’s Mac as a curious tinkerer in my youth (for which Dad was not pleased), leading Dad to request/beg MOTU for one final backup master disk some time in the mid-90s. It is a testament to the quality of 3.5″ floppies back then along with how well my dad takes care of them that the disks remain usable, some 30+ years later.

Hell hath no fury like my dad scorned.

Eventually Dad got me my own Mac(s), where I could hack away safely—safe from his expensive software, at least. But as I grew as a hacker and programmer, acquiring and installing various software packages of my own, encountering—and defeating—assorted authentication schemes, going deep down the rabbit hole of the inner workings of Mac OS, and even dipping my toes into the field of software preservation, the music notation program that started it all continued to elude me. Why can’t Disk Copy or DiskDup produce a working substitute for the ProCo master disk? Why is it so difficult to duplicate the master disk using, for example, a KryoFlux? Why can I install ProCo to an emulated HD20 via my Floppy Emu, but not to a mounted Disk Copy disk image? Why does ProCo crash when After Dark kicks in? Why does its installer only appear when launched from the original master disk? And how does ProCo know that it has been properly installed?

I’m determined to finally find out.

Twenty years is much too long to be greeted by this.

ProCo has a minimal About dialog, displaying the name of the software, the version, its copyright years, and credit only to “Mark of the Unicorn, Inc.” I therefore have no real idea who wrote it, despite having asked MOTU via email and Twitter for the source code multiple times over the years. Ultimately my goal here is to develop a conversion utility that brings ProCo files into the 21st century, so I’d even be satisfied with documentation of its file format, but I would be surprised if there is anybody left at MOTU who is even aware of Professional Composer, let alone familiar with a product they haven’t supported in over a quarter century.

So off to the disassembler we go. 😉

“Much more advanced?” We’ll see about that.

A Short Primer on Memory Management and Launching 68K Applications on the Mac

Much of the Mac’s software is split into chunks of data and code called “resources” that can be swapped in and out of RAM as needed. In order to maximize the use of the sparse amount of memory on the original Macintosh, its designers traded a tiny bit of speed for greater efficiency when building the Memory Manager. When asked to load a resource from disk, the Resource Manager returns a “handle” to the loaded resource, which is a pointer to an OS-controlled “master pointer” pointing to a relocatable block within the heap. The Memory Manager can then be allowed to move or “compact” relocatable blocks in the heap, or even remove/”purge” such blocks when available RAM is running low. This relieves applications from some of this management burden and is leveraged throughout the Mac System Software. In addition to allocating their own handles and non-relocatable blocks, applications may mark existing handles with various attributes to guide the Memory Manager in its housekeeping; for example marking a handle “purgeable” allows the Memory Manager to free its associated block, and likewise “locking” a handle prevents it from being moved or freed.
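The double indirection is the whole trick: applications hold a handle (a pointer to a master pointer), so the Memory Manager is free to slide blocks around and only the master pointers need updating. A toy model of the idea (obviously nothing like the real Memory Manager internals):

```python
class Heap:
    """Toy relocatable-block heap: a handle is an index into a master pointer table."""

    def __init__(self):
        self.blocks = {}    # address -> block data
        self.masters = []   # master pointers (addresses); a handle is an index here
        self.next_addr = 0

    def new_handle(self, data):
        addr, self.next_addr = self.next_addr, self.next_addr + len(data)
        self.blocks[addr] = data
        self.masters.append(addr)
        return len(self.masters) - 1    # the "handle"

    def deref(self, handle):
        return self.blocks[self.masters[handle]]

    def dispose(self, handle):
        del self.blocks[self.masters[handle]]
        self.masters[handle] = None

    def compact(self):
        """Slide blocks together; handles stay valid because only masters move."""
        addr, packed = 0, {}
        for h, old in enumerate(self.masters):
            if old is None:
                continue
            packed[addr] = self.blocks[old]
            self.masters[h] = addr
            addr += len(packed[addr])
        self.blocks = packed

heap = Heap()
h_code = heap.new_handle(b"CODE 1")
h_doc = heap.new_handle(b"score data")
heap.dispose(h_code)
heap.compact()
# h_doc still dereferences correctly even though its block moved down the heap.
```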

Each time you double-click on a 68K application to launch it from the Finder, a carefully orchestrated sequence of events takes place:

  1. Finder calls the _Launch trap with the name of the application.
  2. The Segment Loader opens the resource fork of the file passed to _Launch and immediately preloads ‘CODE’ resource ID 0. ‘CODE’ 0 is a specially formatted ‘CODE’ resource. It contains the parameters necessary to set up a non-relocatable block of memory near the top of the application’s memory space containing application and QuickDraw globals, any parameters passed to it from the Finder, and the application’s jump table. Following these parameters is the jump table itself: a list of tiny eight-byte routines that each load a ‘CODE’ resource, or “segment,” and jump to an offset within that segment.
  3. Using the parameters at the start of ‘CODE’ 0, the Segment Loader allocates space for globals pivoting around register A5, which is eventually passed to the application. This is known in Mac programming parlance as the “A5 world” and is unique to each running application.
  4. The jump table is copied above the Finder’s application parameters in the A5 world and the ‘CODE’ 0 resource is released.
  5. The first entry in the jump table is executed, and the application takes control.

The relative jump instructions of the original 68000 processor are limited to signed 16-bit offsets, so branches or subroutine calls are limited to 32K offsets in either direction from the current program counter. In order to accommodate programs with more than 32K of code under the memory constraints of the original 128K Macintosh, the Segment Loader was invented which manages applications split into ~32K code “segments.” Code within each segment can make intra-segment jumps (branches or subroutine calls), but once a subroutine is needed outside a particular segment, a call must be made to the jump table which in turn loads the necessary segment. New segments are returned as handles to relocatable blocks just like any other resource, so as they are loaded the Memory Manager automatically compacts the heap and/or frees purgeable handles to make room in RAM. Recall that the jump table is copied to a known location relative to the A5 register, so applications always have easy access to it. But since new code segments are created at locations on the application heap unknown at compile time, this also means that all code segments are invoked assuming position-independent code, meaning all branches and subroutine calls are relative.
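The jump table amounts to lazy loading: each entry either dispatches into an already-resident segment or loads it first. A rough analogy (Python functions standing in for ‘CODE’ segments):

```python
class SegmentLoader:
    """Toy jump table: entries load their segment on first call, then dispatch."""

    def __init__(self, on_disk):
        self.on_disk = on_disk  # segment id -> {routine name: function}
        self.loaded = {}        # segments currently resident in "RAM"

    def jump(self, segment_id, routine):
        if segment_id not in self.loaded:   # the _LoadSeg step
            self.loaded[segment_id] = self.on_disk[segment_id]
        return self.loaded[segment_id][routine]()

loader = SegmentLoader({1: {"main": lambda: "hello from CODE 1"}})
```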

All 68000 processors support branches to absolute addresses utilizing the full usable width of the address bus. The 68020 and later processors support larger relative branch offsets, so segments are not necessarily limited to around 32K. Well-behaved applications check at launch that the host Mac has the necessary capabilities, and exit early if not. But for maximum compatibility, some applications built with compilers such as CodeWarrior are generated with a table of offsets to absolute branch instructions within each code segment. These instructions are compiled as jumps to offsets within the segment relative to zero—sure to crash the Mac if executed as stored. But in a small bit of “preflight” code, these absolute branches are fixed up to point within the segment, providing larger branch offsets to all Macs. This is how the ‘rvpr’ 0 resource was compiled for the Pippin.
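That fixup pass boils down to adding the segment’s load address to each stored zero-relative target. Sketched with invented offsets and a made-up load address:

```python
def fix_absolute_branches(branch_targets, segment_base):
    """Relocate zero-based absolute branch targets to the segment's real address.

    At build time each target is stored relative to zero; once the segment is
    loaded, adding its base address makes the absolute jumps land in-segment.
    """
    return [segment_base + t for t in branch_targets]

# Say the segment loads at 0x00052000; the stored targets were zero-relative.
fixed = fix_absolute_branches([0x0040, 0x1F3C], 0x00052000)
```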

The first thing we notice is that ProCo’s jump table contains one valid entry and then… a lot of nonsense. This is a likely sign of an encrypted jump table—Epyx’s Temple of Apshai and Winter Games also use this obfuscation trick to scare off casual hackers. In fact, almost all of ProCo’s ‘CODE’ resources look to be encrypted! If I hope to make any sense out of ProCo’s file format by looking at its code, we’ll need to derive the algorithm that decrypts the rest of this.

MOVE.W #$0029,-(A7) pushes the ID of ‘CODE’ resource 41 onto the stack prior to jumping into it via _LoadSeg. Once there, we start by allocating a couple of memory blocks to use, starting with a 384-byte block of memory that we’ll call the “environment” block. We stash the stack pointer into offset 28 of that block, push a pointer to our environment block onto the stack, then stash the value of the ScrDmpEnb global into offset 32 of our environment block.

ScrDmpEnb is short for “screen dump enable” which originally meant whether the screen shot feature is enabled via Command-Shift-3 on Macs, but grew to include other FKEYs as well. One popular third-party FKEY available to hackers was the “Programmer’s Key,” which drops into the installed debugger when invoking e.g. Command-Shift-7, providing a way to drop to MacsBug without a physical programmer’s switch installed on the side of the machine. But the same functionality could be had by simply writing your own equivalent FKEY, following instructions provided in the official MacsBug manual. MOTU certainly didn’t want would-be crackers conveniently dropping into the debugger during the startup process of their precious software, so ScrDmpEnb is set to zero, effectively disabling all FKEYs.

What the FKEY?!

The Apple ][, Lisa keyboard, original Macintosh keyboard, and most keyboards that later shipped with 20th-century Macs, did not feature what we know as “function keys”: the F1-F12 (and beyond) keys that adorn the top of most keyboards today. These devices more closely mimicked typewriter keyboard layouts that many users were familiar with at the time of their respective introductions. But on the Apple ][, there are a few reserved keyboard shortcuts that are always available to the user: Control-Reset breaks out of the currently running program, and Control-Open Apple-Reset resets the computer, for example.

These shortcuts are hardcoded into the ROM and not easily modifiable by the user. On the original Mac, localization (in particular, keyboard layouts) was handled through the disk-based System Software’s resource-based design, so it’s only natural that such shortcuts be handled by the OS in a modular way as well. So Apple made up for the lack of physical function keys by providing several “virtual” function keys bound to Command-Shift-numbers. When invoked, these shortcuts run tiny programs stored as ‘FKEY’ resources in the System file, which is why they are known as “FKEYs.” Programmers quickly discovered that they could write their own tiny FKEY programs and install them into the System file assigned to otherwise unused numbers.

The original set of FKEYs as shipped in 1984 is as follows:

  • Command-Shift-1: eject the first/internal floppy disk, if present
  • Command-Shift-2: eject the second/external floppy disk, if present
  • Command-Shift-3: take a screenshot and save it to disk
  • Command-Shift-4: take a screenshot and print it

The first two ejecting FKEYs went away with the introduction of Mac OS X in 2001, as Apple had stopped shipping Macs with floppy drives by then (though macOS continues to support external drives natively). But Command-Shift-3 lives on as the assigned shortcut for saving screenshots to disk—one of the few remaining holdovers from the original 1984 System Software.

We then allocate a handle to a new locked 82-byte “context,” pop the pointer to our environment block into offset 48 of our context, then push our newly-allocated context’s handle onto the stack. Next we pass the pointer to the top of the ‘CODE’ 41 resource we’re executing from to _RecoverHandle to get our ‘CODE’ resource’s handle. We store this handle at offset 0 of our context, then store its master pointer at offset 4. We finally recover our context’s handle from the stack before passing it to the first real “stage.”

Stage 1: Front Line Disassembly

Stage 1 is fairly simple and calls three subroutines before launching into Stage 2, looking roughly like this when decompiled back to pseudo-C code:


initContext(&context);
context.aggregateChecksum = 0;
updateStage2Checksums(&context);
decryptStage2(&context);

jumpTo(context.stagePtrs[1]);   // jump to stage 2

So let’s break it down, function by function. We start with initContext, which looks like this:

void initContext(Ptr contextPtr)
{
    for (short i = 0; i < 8; ++i)
        contextPtr->stagePtrs[i] = stageInfoCmd(
            GET_OFFSET, nullptr, i, contextPtr->code41Ptr);

    contextPtr->stageGlobalsPtr = contextPtr->stagePtrs[7];

    contextPtr->code41Size = _GetHandleSize(contextPtr->code41Handle);
    contextPtr->code41End = contextPtr->code41Ptr
        + contextPtr->code41Size - 1;
}

initContext initializes a block of eight pointers to the results of stageInfoCmd, which looks like this:

enum StageInfoCmdSelector
{
    SET_OFFSET = 0,
    GET_OFFSET = 1,
};

Ptr stageInfoCmd(
    StageInfoCmdSelector selector,
    Ptr stagePtr,
    short index,
    Ptr codePtr)
{
    static short offsets[] =
    {
        0x0086,     // offset to Stage 1 from top of 'CODE' 41
        0x0388,     // offset to Stage 2 from top of 'CODE' 41
        0x0D24,     // ?
        // ...remaining offsets elided
    };

    Ptr outPtr = nullptr;

    switch (selector)
    {
    case SET_OFFSET:
        offsets[index] = (short)(stagePtr - codePtr);
        break;
    case GET_OFFSET:
        outPtr = codePtr + offsets[index];
        break;
    default:
        outPtr = (Ptr)offsets;
        break;
    }

    return outPtr;
}

So initContext initializes a block of eight pointers in our context to point to eight different areas of 'CODE' resource ID 41. The first of these pointers points to the Stage 1 code we're currently executing, and the second of these pointers points to the encrypted block of 'CODE' 41 immediately following the bits of Stage 1 that are recognizable as executable code. This eventually becomes the Stage 2 code that we jump to later. (It's interesting that stageInfoCmd is only ever called with the GET_OFFSET selector, making the rest of that function dead code.)

Next we initialize a field in our context that's used to store an aggregate of all computed checksums. We then call updateStage2Checksums which looks like this:

long updateStage2Checksums(Ptr contextPtr)
{
    contextPtr->latestChecksum = calculateChecksum(
        /* ...block leading up to Stage 2... */);

    contextPtr->aggregateChecksum += calculateChecksum(
        /* ...block following Stage 2, seeded with... */
        'PACE') + contextPtr->latestChecksum;

    return contextPtr->latestChecksum;
}

updateStage2Checksums in turn makes a pair of calls to calculateChecksum, passing them the blocks leading up to and following Stage 2, respectively.

long calculateChecksum(Ptr startPtr, Ptr endPtr, long seed)
{
    long size = endPtr - startPtr;
    long cksum = seed;
    long sizeInLongs = size / sizeof(long);

    Ptr ptr = startPtr;
    if (sizeInLongs != 0)
    {
        size -= sizeInLongs * sizeof(long);
        for (long i = 0; i < sizeInLongs; ++i)
            cksum += *((long*)ptr)++;
    }

    if (size > 0)
        for (long i = 0; i < size; ++i)
            cksum += *((byte*)ptr)++;

    return cksum;
}
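
As a sanity check, here's that routine rendered as self-contained C. This is my own reconstruction, not MOTU's source; I'm assuming each longword is read big-endian, as the 68K would:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sum a buffer as big-endian 32-bit longwords, then any trailing bytes,
   starting from a seed--mirroring calculateChecksum above. */
uint32_t checksum(const uint8_t *start, const uint8_t *end, uint32_t seed)
{
    size_t size = (size_t)(end - start);
    size_t sizeInLongs = size / sizeof(uint32_t);
    uint32_t cksum = seed;
    const uint8_t *p = start;

    for (size_t i = 0; i < sizeInLongs; ++i) {
        /* assemble each longword big-endian, as a 68K read would */
        cksum += ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
               | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
        p += 4;
    }
    for (size_t i = 0; i < size % sizeof(uint32_t); ++i)
        cksum += *p++;      /* leftover bytes, one at a time */

    return cksum;
}
```

Seeding with 'PACE' is just seeding with the longword 0x50414345, the ASCII codes for 'P', 'A', 'C', 'E'.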

One thing that sticks out immediately to me is the use of the longword 'PACE' as an initial checksum seed. 'PACE' is likely a reference to PACE Anti-Piracy: a company founded in 1985 that's still around today. MOTU adopted PACE's protection code for ProCo starting with version 2.1, released in late 1987. Indeed, pirated versions of ProCo existed as early as 1985; some are still available on the Internet. With 1.0 and 2.0 unencrypted, "sharing" these was much easier than with later versions. My dad acquired his Mac Plus / ProCo combo in the summer of 1987—possibly June of that year—so with 2.1 having a creation date of September 11, 1987 and assuming MOTU shipped the latest version with new orders, it follows that 2.0 is the earliest version my dad owns. MOTU periodically shipped new master disks containing updated versions of ProCo to registered owners as they became available, free of charge—a practice I commend them for. 🙂

Gotta catch 'em all!

Now that we have our checksums, we're ready to head into decryptStage2. The decryption code takes the checksum of the code to be decrypted as the "key." This makes binary patching the ProCo application an involved process, since the remainder of the code would need to be reencrypted for decryption with its new checksum to succeed. One thing is for sure about this protection: it is designed to be resilient against quick-and-dirty patches.

void decryptStage2(Ptr contextPtr)
{
    decryptStage2Block(/* ...checksum key, start, and end of Stage 2... */);
}

void decryptStage2Block(long key, Ptr startPtr, Ptr endPtr)
{
    long size = endPtr - startPtr;
    long sizeInLongs = size / sizeof(long);

    Ptr ptr = startPtr;
    if (sizeInLongs-- != 0)
    {
        long longsLeft = sizeInLongs;
        do
        {
            long rotCount = key & 0x0F;
            if (rotCount == 0)
                rotCount = 1;
            key = rotateRight(key, rotCount);   // Ror.L in 68K
            *((long*)ptr)++ ^= key;
        } while (longsLeft--);

        size -= sizeInLongs * sizeof(long);
    }

    if (--size >= 0)
    {
        long bytesLeft = size;
        do
        {
            long rotCount = key & 0x0F;
            if (rotCount == 0)
                rotCount = 1;
            key = rotateRight(key, rotCount);   // Ror.L in 68K
            *((byte*)ptr)++ ^= (byte)key;
        } while (bytesLeft--);
    }
}
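
Because the keystream evolves from the key alone and never from the data, this cipher is an involution: running the same routine over a buffer twice restores it. Here's a self-contained C sketch of the rotate-and-XOR longword loop (my own names and types, not PACE's code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Rotate a 32-bit value right, like the 68K Ror.L instruction. */
uint32_t rotate_right(uint32_t value, unsigned count)
{
    count &= 31;
    return (value >> count) | (value << ((32 - count) & 31));
}

/* XOR a buffer of longwords against a self-evolving keystream,
   mirroring the longword loop of decryptStage2Block. */
void xor_with_rotating_key(uint32_t key, uint32_t *buf, size_t nLongs)
{
    for (size_t i = 0; i < nLongs; ++i) {
        unsigned rotCount = key & 0x0F;
        if (rotCount == 0)
            rotCount = 1;               /* never rotate by zero */
        key = rotate_right(key, rotCount);
        buf[i] ^= key;
    }
}
```

Encrypting is the exact same operation as decrypting, which is why patching isn't a matter of mechanics: a patcher can trivially re-encrypt modified code, but only the correct checksum key will make it decrypt properly on the next run.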

With Stage 2 now fully decrypted, we can jump right in by loading contextPtr->stagePtrs[1] into A0 and JMPing right to it.

That wasn't so hard, was it? 😛

There is still at least another stage of this protection to get through, and we're hardly that much closer to decrypting the remaining 'CODE' resources or the jump table. Knowing PACE's reputation, this is likely a small victory in what will ultimately be a long battle. ProCo is legendary in some Mac circles for its copy protection, so if what I've heard of it is true, then I'm surely in for a ride. PACE even makes a bold claim on their own website:

We know it sounds like an unrealistic boast to say our anti piracy software cannot be cracked. Our goal is to stay ahead of the curve and hacking trends. We avoid giving known hooks or patterns that they recognize, and we pepper our anti piracy solutions with methods that we know are time consuming and difficult, if not impossible, to remove.

Challenge accepted. 😉

Exploring the Pippin ROM(s), part 9: Kickstart

I’ve been busy. The Pippinizer is going to take me longer than I expected to put together into a releasable form, so I wrote a small utility that should tide folks over until that’s ready.

Introducing Pippin Kickstart. This is a small, carefully-crafted boot disc for the Pippin that circumvents the console’s built-in security and instead offers the choice to boot from an unsigned volume. It works on 1.0, 1.2, and 1.3 Pippins (so, every known retail Pippin ROM out there as of the time of this writing) without any modification.

Pippin Kickstart booting
It’s basically like Swap Magic, but for Pippin.

To use it, simply download the Pippin Kickstart disc image available here, burn it to CD, and use that disc to boot the Pippin. Pippin Kickstart will identify what ROM and RAM it detects, eject itself, and then immediately begin searching for a bootable volume candidate. The Pippin will boot from CD-ROM using only its internal drive, but other types of removable media may work as well, assuming they can boot a regular Mac without special drivers. It has also been tested and works with an external hard drive.

“But Keith, I thought 1.3 Pippins don’t do the authentication check at startup. Why would I use Pippin Kickstart with a 1.3 Pippin?” While it is true that ROM 1.3 does away with the signing check, it is still hardcoded to boot only using the Pippin’s internal CD-ROM drive. Pippin Kickstart offers owners of 1.3 Pippins the ability to boot from other media sources such as a hard drive, providing itself as a sort of “launch pad.”

The Pippin Kickstart disc is a hybrid HFS/ISO image containing the source code, a short README, and—just for fun—a few extra “goodies” that I found useful during its development:

  • MacRelix ROM Copier by Josh Juran, used to dump the ROM of my own 1.2 Pippin
  • tbxi by Elliot Nunn, a project which evolved from an early tool Elliot wrote that I used to extract the ‘rvpr’ resource kicking off this whole mess
  • FDisasm and FindCode by Paul Pratt, indispensable tools used to locate and examine code within the Pippin’s ROM

All of these extras are licensed according to their respective authors.

I’ve licensed the Pippin Kickstart bootloader under the GPLv2. Source code is available on my Bitbucket: https://bitbucket.org/blitter/pippin-kickstart

Have fun.

UPDATE (20210209): I’ve updated Pippin Kickstart to version 1.1, which patches the SCSI Manager on units equipped with ROM 1.0 so that they too may boot from external SCSI volumes. It is available here (I’ve updated the rest of this post as well) and a detailed explanation is available here.

Exploring the Pippin ROM(s), part 8: Cracking open the Pippin (without the boys)

The Bandai Pippin has finally been cracked.

They say a picture is worth a thousand words. So I put together a fun proof-of-concept demo, and made a video to summarize these last few months.

For an in-depth technical explanation of what’s happening here, here’s some further reading:

Exploring the Pippin ROM(s), part 7: A lot to digest

Apple’s public key for verifying the authentication data on a Pippin boot volume is:
E0 E0 27 5C AB 60 C8 86 A3 FA C2 98 21 79 54 A8 9F D1 B9 DC 8A BA 84 EF B1 E7 C9 E2 1B F7 DD D7 DC F0 E4 4A BB 79 51 0E 7C EB 80 B1 1D

How did I find this? Strap in and let’s go for a ride. 🙂

The What

Quick recap: I want to unlock homebrew on the Pippin. Every time a Pippin tries to boot from an HFS volume on CD-ROM, it loads the ‘rvpr’ resource with ID 0 from ROM and executes it as code, passing as arguments that volume’s ID and two blocks of data found elsewhere in ROM. ‘rvpr’ 0 locates and reads an RSA-signed “PippinAuthenticationFile” from the volume. It contains a 128-bit digest for each 128K block in the volume, along with a 45-byte signature at the end of the file. If the signature cannot be verified or any of the digests don’t match what ‘rvpr’ 0 calculates from the volume in its main loop, the Pippin (r)ejects the disc.

“Feed me, Seymour!”

Last time I looked at ‘rvpr’ 0, I examined and broke down what the main loop does at a high level. Outside of main, there are ten non-library function symbols in the resource, and I’ve identified the purpose of six. Reading through these, I determined the format of the PippinAuthenticationFile and how its data is fed to the rest of the main loop. The remaining four functions—VerifyDigestInfo, VerifySignature, CreateDigest, and InitRSAAlgorithmChooser—form what I conjectured to be the “meat” of the authentication process. I elected to pore over these at a later time.

‘rvpr’ 0 is over 35K in size, with almost 34K of that comprised of 68K assembly code. Unfortunately, what I looked at in part 6 only touches about 3K of that—not even 10% of the whole. What’s worse, the remaining 31K (90+%) of code is almost completely lacking symbols, save for an occasional call to T_malloc, T_memset, T_memcpy, or T_free. Without human-readable symbols to indicate what the remaining memory locations, values, and functions mean in the context of ‘rvpr’ 0’s greater purpose, I would be flying blind. If I were to attempt to grok this code to the same degree as I currently understand main and its (named, mind you) auxiliary functions, I would have a long road ahead of me, especially if I used the same static analysis technique of stepping through the code offline on paper.

The Why

I decided that the best way to figure out the rest of this code was dynamic analysis; that is, examining it while it’s running. There were just too many subroutines and too much data being pushed around to keep it all straight in my head. I needed a computer to help. I don’t have any hardware debuggers for the Pippin that would allow me to step the CPU and examine the contents of RAM, and no working software emulators exist for the Pippin (yet). What I did find is a suite of 68K assembly tools—a code editor, binary editor, and, crucially, a simulator—called EASy68K. If I wanted to look at ‘rvpr’ 0 in something that even remotely resembled a debugger, I’d have to build a working version of ‘rvpr’ 0 that could run outside of a Pippin, without yet fully understanding how the code works.

EASy68K’s simulator provides a hypothetical computer system featuring a 68K CPU, 16MB of RAM, and rudimentary I/O facilities. Luckily, ‘rvpr’ 0 is pretty self-contained, which allowed me to quickly “port” it to EASy68K. I was correct in that this technique significantly accelerated my understanding of the digesting and verification process, but as I hope to elucidate later in this post, that pursuit required very little actual parsing of code. 🙂

The How

The first step to building an ‘rvpr’ 0 replica in EASy68K was to adopt the syntax EASy68K likes. I prefer FDisasm for disassembly because it’s part of the Mini vMac project and as such, it’s at least a tiny bit aware of the classic Mac OS API, known as the Toolbox. FDisasm can replace raw A-traps (two-byte 68K instructions beginning with the hex digit $A, which typically map to commonly-used subroutines) with their corresponding human-readable names according to official Mac documentation, which is a nice time-saver especially in large blocks of 68K Mac code. I also like FDisasm’s output formatting, which is the basis of how I list 68K assembly in these blog posts.

I admit it ain’t neat, but it’s handy.

Cloning ‘rvpr’ 0 in EASy68K serves two purposes. First, I can step through it using a real signed Pippin CD, observe what its code does, and document it. After I reverse-engineer the authentication process though, this functional copy will serve a second purpose: to verify that my own authentication files are crafted properly. Since we know from part 2 and part 6 that the main loop will return zero in register D0 if the verification process succeeds, we should be able to observe that in the simulator. Using our own ‘rvpr’ 0 binary that’s as close as possible to what’s in ROM on an actual Pippin should assuage doubt as to whether a proof-of-concept will pass the console’s tests or not. Plus, since it’s all simulated in software, it saves me from having to burn a ton of test CDs. 😛

Converting my (annotated) disassembly from FDisasm’s syntax to EASy68K’s was easy—regular expressions to the rescue. Assembling the result produces code identical to what’s in the original resource—yay, the assembler works, and we have a byte-for-byte clone of what’s in the Pippin ROM. But making this new replica functional required a little bit of creativity.

On a real Pippin, ‘rvpr’ 0 is loaded from the ROM’s resource map into an area on the system heap in RAM. The relocation code at the beginning of ‘rvpr’ 0 patches each subroutine jump by offsetting them relative to where the code resides in RAM (discussed in part 6). It keeps track of whether this is done by storing this offset in a global when relocation is complete. Recall from part 3 that this global has an initial value of zero when ‘rvpr’ 0 is first run. If this code is executed a second time, it subtracts this global from its base address in RAM and, if the result is zero, it doesn’t need to do relocation again since the jumps already have valid destination addresses.

The simulator comes with no bootloader at all but starts up fully initialized, so in essence the contents of ‘rvpr’ 0 form the “ROM” of our virtual computer. We thus boot directly to ‘rvpr’ 0’s entry point, at the start of the simulator’s memory space. But since ‘rvpr’ 0 now always starts at address 0, the difference between that initial global and our base address… is zero. So the relocation code never runs in the simulator; it doesn’t have to because those unpatched jumps are already relative to zero. 😉

By the time ‘rvpr’ 0 executes in a real Pippin’s boot process, many subsystems on the console have been readied: the QuickDraw API for graphics, the Device Manager for device I/O, and the HFS filesystem package to name a few. These APIs, having been designed and built by Apple for the Mac, only exist on a Mac-based system and therefore naturally aren’t present in the fantasy system we get from EASy68K. We are in luck though in that ‘rvpr’ 0 only makes calls to a grand total of nine Toolbox APIs. Four of these calls are used in the “prologue” code discussed earlier that relocate all the jumps before main is even called. Since that relocation code doesn’t run in our simulator, that leaves five Toolbox APIs essential to the main loop: _Read, _Random, _NewPtr, _DisposePtr, and _BlockMoveData. We need equivalents to these routines if we are to expect ‘rvpr’ 0 to work properly.

Oh yes I would. And I did.

_BlockMoveData is an easy one. It copies D0 bytes from (A0) to (A1):

8DCA   12D8    Move.B    (A0)+, (A1)+
8DCC   5380    SubQ.L    #1, D0
8DCE   6CFA    BGE.B     _myBlockMoveData

I took a shortcut with _Random: my implementation simply returns a constant value. I did this partially because I’m lazy but also because _Random is only called once, albeit in a loop: to determine which 128K chunks to digest. By controlling the values returned, I can selectively and deterministically test chunk hashing.

There’s an XKCD for everything, even cracking Pippins.

I took similar liberties with _NewPtr and _DisposePtr: I keep a global pointing to the next unused block, and _NewPtr simply returns the value of that global and then advances it by the requested size. _DisposePtr is implemented as a no-op. Why did I do this? Well, again, part of it is because I’m lazy and didn’t want to write a proper heap allocator for this, but also because it affords me the ability to inspect memory blocks used even after they’ve been “freed.” I don’t care about memory leaks in this case—in fact, here they’re a feature! 🙂 Since ‘rvpr’ 0 is roughly 36K, I set aside the first 64K of memory for it (and any additional supporting code/data I add, like these replacement Toolbox routines). With register A7 initially pointing to the top of memory for use by the stack, the rest of RAM starting at $10000 I designate for my “heap.”
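
The scheme above can be sketched in a few lines of C. This is a hypothetical reimplementation with my own names; the $10000 heap base is from the text, while the word alignment and RAM size are my assumptions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static uint8_t g_ram[0x20000];      /* stand-in for the simulator's RAM */
static size_t  g_next = 0x10000;    /* "heap" begins at $10000 */

/* _NewPtr replacement: return the next unused block and bump the global. */
void *myNewPtr(size_t size)
{
    void *p = &g_ram[g_next];
    g_next += (size + 1) & ~(size_t)1;  /* keep allocations word-aligned */
    return p;
}

/* _DisposePtr replacement: deliberately a no-op, so "freed" blocks
   remain inspectable in the simulator's memory viewer. */
void myDisposePtr(void *p)
{
    (void)p;
}
```

Nothing is ever reclaimed, so every buffer the code touches stays visible for the life of the session.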

Finally we come to _Read. EASy68K may be pretty bare-bones, but it does come with some niceties allowing for basic interactions with its host PC. In this case, I needed a way for my “virtual Pippin” to have random-access readability from a virtual CD-ROM drive. Fortunately, EASy68K provides this in the form of the Trap #15 instruction. My version of _Read only does the bare minimum of what ‘rvpr’ 0’s main loop requires: it opens an HFS disk image on the host PC, seeks to the offset specified in the ParamBlockRec passed on the stack, reads the requested amount of bytes into the specified buffer, then closes the file.

8DF0=  6361 6E64 ...     DC.B      'candidate.hfs', 0

8DFE   43F9 0000 8DF0    Lea.L     _myFilename, A1      ; point A1 at the disk image's name
8E04   303C 0033         Move      #51, D0              ; EASy68K task 51: open existing file
8E08   4E4F              Trap      #15

8E0A   2428 002E         Move.L    $2E(A0), D2          ; ParamBlockRec.ioPosOffset
8E0E   303C 0037         Move      #55, D0              ; task 55: position file
8E12   4E4F              Trap      #15

8E14   2268 0020         Move.L    $20(A0), A1          ; ParamBlockRec.ioBuffer
8E18   2428 0024         Move.L    $24(A0), D2          ; ParamBlockRec.ioReqCount
8E1C   303C 0035         Move      #53, D0              ; task 53: read from file
8E20   4E4F              Trap      #15

8E22   303C 0032         Move      #50, D0              ; task 50: close all files
8E26   4E4F              Trap      #15

8E28   303C 0000         Move      #0, D0               ; return noErr

Now that we’ve got functional replacements for the necessary Toolbox routines, how do we refit the rest of the code so that our versions are called instead of Apple’s, which don’t exist? I already had the Toolbox API names substituted in my listing, thanks to FDisasm, so I could simply create macros with those names that execute a tiny bit of code in place of those calls. The easiest way, and the method I tried first, is to invoke each replacement with a Jsr instruction, short for “Jump to SubRoutine.” This was really straightforward to do and assembled without issue, but upon loading and running in the simulator, I quickly discovered why this approach wouldn’t work. Jsr is a four- (sometimes six-) byte instruction, whereas the original A-traps it replaces use only two bytes. Since these larger instructions are inserted in the main loop near the beginning of the code, they throw off hardcoded addresses used later. Needless to say, when I ran it in the simulator, a hardcoded Jsr landed in an unexpected area of code and I crashed almost instantly.

However I was going to invoke my faux-Toolbox calls, they had to be done in only two bytes. I thought for a second about how I could write an A-trap exception handler and leave Apple’s original A-trap instructions as-is, but I didn’t do that either because 1) laziness and 2) I thought of an easier way. Remember the Trap #15 instruction I used to implement _Read?

On a 68K processor, the two-byte Trap instruction provides a way to jump to any of 16 different addresses stored in a predefined area of memory, known as “vectors.” These 32-bit vectors are all stored consecutively in a block of memory that always starts at address $80. ‘rvpr’ 0 normally executes code at address $80, but that’s part of the address relocation done only on a real Pippin, not in our simulator. It is therefore safe for us to replace that block of code with the addresses of our replacement Toolbox routines, starting with Trap #0 and ending with Trap #3. Recall that I’ve implemented _DisposePtr as a no-op—which is the two-byte opcode $4E71—so I don’t need to set aside a trap vector for it. EASy68K only sets aside trap 15 for itself, leaving traps 0-14 for us to use however we wish. The code we do care about executing in the simulator doesn’t start until after address $100, so our entire trap table easily fits inside this block of unused code. How lucky can you get? 🙂

My very own Pippin “emulator”

With my cobbled-together Pippin “emulator” now up and running, finally I could take a look at the before and after of the remaining functions called by the main loop. I decided to start with CreateDigest, as I had already figured out the inner workings of CompareDigests, so this seemed like a simple starting point. CreateDigest starts by creating a context object for itself, and initializes one of its buffers with a curious but predictable pattern of 16 bytes: 67 45 23 01 EF CD AB 89 98 BA DC FE 10 32 54 76. Along the way it checks to see if any of this setup fails, and if so, the entire authentication check is written off as a failure. But assuming everything is fine, it enters a loop which digests our 128K input chunk up to 16K at a time, by passing each 16K “window” to an unnamed subroutine. Each time we iterate through this loop, the aforementioned buffer which was initially filled with a predictable pattern now instead contains a brand new seemingly random jumble of numbers. I suspected that this 16-byte buffer was used as a sort of working space for whatever Apple chose as a hashing algorithm, since its size matched that of the digests in the PippinAuthenticationFile.

764   B883              Cmp.L     D3, D4               ; chunk size > 16K?
766   6C02              BGE.B     dontResetD3          ; if so, use 16K for window size
768   2604              Move.L    D4, D3               ; otherwise we have <16K left, so use what's left as window size instead
76A   4878 0000         Pea.L     ($0)                 ; push zero
76E   2F03              Move.L    D3, -(A7)            ; push window size (typically 16K until the end)
770   2F0A              Move.L    A2, -(A7)            ; push working chunk buffer ptr
772   2F2E FFFC         Move.L    -$4(A6), -(A7)       ; push ptr to our hash buffer
776   4EB9 0000 6516    Jsr       Anon217              ; patched, creates hash? digest?
77C   2A00              Move.L    D0, D5
77E   4A85              Tst.L     D5
780   4FEF 0010         Lea.L     $10(A7), A7          ; cleanup stack
784   6628              BNE.B     createDigestCleanup  ; if Anon217 failed, bail
786   9883              Sub.L     D3, D4               ; subtract window size from chunk size
788   D5C3              AddA.L    D3, A2               ; advance working chunk buffer to next 16K-ish window
78A   4A84              Tst.L     D4                   ; still more data to hash/digest?
78C   6ED6              BGT.B     createDigestLoop     ; hash/digest it

78E   4878 0000         Pea.L     ($0)                 ; push zero
792   4878 0010         Pea.L     ($10)                ; push 16
796   486E FFF8         Pea.L     -$8(A6)              ; push address of local longword
79A   2F2E 0010         Move.L    $10(A6), -(A7)       ; push out buffer ptr
79E   2F2E FFFC         Move.L    -$4(A6), -(A7)       ; push ptr to our hash buffer
7A2   4EB9 0000 654C    Jsr       Anon218              ; patched, copies hash out?
7A8   2A00              Move.L    D0, D5
7AA   4FEF 0014         Lea.L     $14(A7), A7          ; cleanup stack

Finally, CreateDigest makes one more call to an unnamed subroutine, passing it among other things the number 16 (presumably the digest size in bytes), a pointer to its context object, and a pointer to a 16-byte area on the stack filled with $FF bytes. After this call, the working buffer is once again reset to its initial pattern, and the area on the stack is filled with what looks like it could be a digest.

Wait a minute, upon closer examination...


... the output hash matches what's in the PippinAuthenticationFile! This makes sense, because all CreateDigest does after this is tear down and dispose of its context object. It then returns to the main loop, where the computed digest is passed along for CompareDigests to, well, compare. So clearly those two unnamed subroutines play a vital role in computing the digest, however that's done.

I dove right in to the routine called in the loop. It starts by doing several integrity checks of the data structures it's about to use, then goes right into a confusing routine that appears to add the amount of bits in our up-to-16K input window to a counter of some sort, optionally increasing the next byte if the counter rolls over. I suspected this was used for a 40-bit counter of bits hashed, its purpose not obvious yet. It then enters a loop of its own, dividing our input window into yet another sliding window, this time of 64 bytes in size. Each iteration of this loop, it passes these 64 bytes and CreateDigest's 16-byte working buffer to an unnamed subroutine with some very interesting behavior.

7186   2612              Move.L    (A2), D3
7188   282A 0004         Move.L    $4(A2), D4
718C   2A2A 0008         Move.L    $8(A2), D5
7190   2C2A 000C         Move.L    $C(A2), D6
7194   4878 0040         Pea.L     ($40)
7198   2F2E 000C         Move.L    $C(A6), -(A7)
719C   486E FFC0         Pea.L     -$40(A6)
71A0   4EB9 0000 7C58    Jsr       ($7C58).L
71A6   2004              Move.L    D4, D0
71A8   4680              Not.L     D0
71AA   C086              And.L     D6, D0
71AC   2204              Move.L    D4, D1
71AE   C285              And.L     D5, D1
71B0   8280              Or.L      D0, D1
71B2   D2AE FFC0         Add.L     -$40(A6), D1
71B6   0681 D76A A478    AddI.L    #$D76AA478, D1
71BC   D681              Add.L     D1, D3
71BE   2003              Move.L    D3, D0
71C0   7219              MoveQ.L   #$19, D1
71C2   E2A8              LsR.L     D1, D0
71C4   2203              Move.L    D3, D1
71C6   EF89              LsL.L     #$7, D1
71C8   8280              Or.L      D0, D1
71CA   2601              Move.L    D1, D3
71CC   D684              Add.L     D4, D3

Looking at this new subroutine, I was fairly convinced that it does the actual hashing. It is an unrolled loop that does a number of bitwise operations before adding a longword (32-bit quantity) from our input window along with what appear to be magic numbers to one of four 32-bit registers. At the end of this function, the contents of these registers are concatenated and added to the existing contents of CreateDigest's 16-byte working buffer. In order to maybe recognize a pattern to what this hash function was doing and perhaps identify the algorithm, I converted this assembly back to C code, and then verified that my C version produced identical output. Unfortunately, the algorithm didn't look familiar to me at all—I assumed it was something Apple invented specifically for the Pippin. I feared the initial "salt" might not be constant and could change depending on where the input chunk exists in the volume. Perhaps I merely found one hash function, but the Pippin could switch between different hash functions depending on some heuristic? It would require more disassembly and careful analysis to verify whether or not this was the case and why. 🙁

A Fun Side Story

Last Friday was 4/26, known informally among fans as Alien Day. I’m a big fan of the Alien universe, and this year happens to be the 40th anniversary of the 1979 Ridley Scott classic. So on Friday I had a number of friends over to watch both Alien films. 😉 There was pizza, chips, my homemade queso (half a box of Velveeta + a can of Rotel chiles—nuke it in the microwave for five minutes, stirring occasionally), and everybody had a good time.

One of the folks who dropped by was my friend Allison, who wanted to leave me with a disc of early Xbox demos her wife Erica found for me. I’m interested in investigating the contents of this disc in case there’s anything of historical value on it, but Xbox discs cannot be mounted or copied using a run-of-the-mill DVD-ROM drive. I remember years ago burning one or two (homebrew, ahem) Xbox DVDs with my PC, so I know writing Xbox discs is possible, but I was curious why reading them posed such an obstacle.

All roads lead back to Presto.

After some Googling, I found that the Xbox employs its own scheme to verify and “unlock” a boot disc candidate (described by none other than Multimedia Mike—an intrepid hacker whose blog I recommend). As I read, I learned that the Xbox’s disc verification involves the host (in most cases, an Xbox) answering an encrypted series of challenges at the drive level. This process, which is unique to each Xbox disc, uses SHA-1 hashes and RC4 encryption. This is a pretty cool and fascinating way to hide Xbox game data from non-Xboxes—it’s definitely worth checking out the details.

As one does on a Friday evening, I found myself clicking through to the Wikipedia entry on SHA-1. Not much time later, I was deep in the Wikipedia rabbit hole, ultimately landing on the page describing the MD5 message digest algorithm. Those of you reading this who have at least a passing familiarity with cryptography might recognize where this is going based on my description of CreateDigest's behavior above. I did not. 😛


According to Wikipedia, MD5 was designed in 1991 by Ronald Rivest, one of the inventors of the RSA cryptosystem used by the Pippin. MD5 was designed to replace its predecessor, MD4, which traded security for speed. At a basic level, MD5 takes a bitstring of arbitrary length—the "message"—and generates a 128-bit string identifying that input, called a "digest." The input is padded to a multiple of 512 bits by appending a 1 bit, then a number of zero bits, and finally the size of the original message in bits, stored as a little-endian 64-bit value. The padded message is then split into chunks of 16 longwords, and each of these 64-byte chunks is passed through MD5's core hash function, folding it into a running 16-byte digest. If that sounds confusing, here's a summary: MD5 takes an arbitrary-sized message and turns it into a fixed-size digest that uniquely identifies it. The same input will always produce the same output, but no two distinct inputs will produce the same output (this isn't 100% true, but for the purpose of this discussion we'll pretend it is).
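To make that padding rule concrete, here's a quick sketch in Python. The function name `md5_pad` is my own; `hashlib` stands in for MD5's core hash function, which I won't reimplement here.

```python
import hashlib
import struct

def md5_pad(message: bytes) -> bytes:
    """Pad a message per the MD5 spec: a 1 bit, then zero bits, then the
    original length in bits as a little-endian 64-bit value."""
    bit_length = len(message) * 8
    padded = message + b"\x80"         # the lone 1 bit (plus seven 0 bits)
    while len(padded) % 64 != 56:      # leave room for the 8-byte length field
        padded += b"\x00"
    padded += struct.pack("<Q", bit_length)
    return padded

padded = md5_pad(b"abc")
assert len(padded) % 64 == 0           # a whole number of 64-byte chunks

# The same input always yields the same 128-bit digest:
print(hashlib.md5(b"abc").hexdigest())  # 900150983cd24fb0d6963f7d28e17f72
```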

On the surface, MD5 sounded like it might be what Apple adopted as the Pippin's digest algorithm, but I knew I'd hit paydirt when I instantly recognized the "magic numbers" used in its reference implementation. What's more, the Transform function looked almost exactly the same as the C code I derived from that unnamed subroutine in 'rvpr' 0, and the MD5Update function likewise performs the same steps as the 68K routine that calls into Transform. I am confident that Apple licensed this particular implementation for use in the Pippin. It follows the MD5 specification to the letter, even going so far as endian-swapping the input longwords from the Pippin's native big-endianness.

See? They match!

Armed with the knowledge that MD5 is the message digest algorithm used in the Pippin authentication process, I could now see that the digests computed in CreateDigest, and the digests read from the PippinAuthenticationFile and used in CompareDigests, are themselves not signed with RSA. In fact, RSA is not involved in verifying chunks of the disc at all. The only thing RSA is used for in the authentication process is verifying the signature at the end of the PippinAuthenticationFile.

The signature, to recap, is 45 bytes long and lives at the end of the PippinAuthenticationFile. Before entering the main loop, 'rvpr' 0 makes a call to VerifyDigestInfo, which in turn calls VerifySignature. VerifySignature calls upon MD5 to digest the "message" portion of the PippinAuthenticationFile—everything but the signature. It then uses RSA to decrypt the signature and compare it against that MD5 digest. If the two match, we know the chunk hashes therein can be trusted, and RSA is no longer needed. Otherwise, we know the PippinAuthenticationFile has been tampered with in some way.

Diagram credit: Tommy Yune
This diagram took way too long to make.

Let's say for illustration's sake that the PippinAuthenticationFile is 64K and the last 1K is the signature. When the signature is decrypted, it should contain a digest of that first 63K. If we digest that first 63K ourselves and the two match, we're verified. The whole process is... pretty modern, actually, when you consider this was 1995. 🙂

Using the "Macintosh on Pippin" CD (a.k.a. "Tuscon") as a test case, I stepped through VerifySignature to obtain the MD5 digest of the authentication file's "message": AE 1A EC AE A4 C5 11 68 2E 38 7D D1 48 F0 55 C2. With this in mind, I set out to test my hypothesis by finding our computed MD5 digest of the message portion somewhere in memory. If I could find it, I could work backwards and reveal how the signature is decrypted. Apple indicated in a Pippin technote that RSA was licensed for their authentication software library; whether that means Apple used the library as-is or augmented the code for their own needs, I wanted to verify one way or the other. VerifySignature makes two unnamed calls before cleaning up and returning a result:

668   B883              Cmp.L     D3, D4          ; is the remaining bytes to hash greater than 16K
66A   6C02              BGE.B     (* + $4)        ; then hash another 16K
66C   2604              Move.L    D4, D3          ; else hash the remaining bytes (which will be < 16K)
66E   4878 0000         Pea.L     ($0)
672   2F03              Move.L    D3, -(A7)       ; push size as bytes to hash (typically 16K until the last chunk)
674   2F0A              Move.L    A2, -(A7)       ; push ptr to start of chunk to hash
676   2F2E FFFC         Move.L    -$4(A6), -(A7)  ; -$4(A6) -> hash object
67A   4EB9 0000 816C    Jsr       ($816C).L       ; create digest of 16K chunk in hash object
680   2A00              Move.L    D0, D5          ; (don't know what is returned in D0, I think a size?)
682   9883              Sub.L     D3, D4          ; remaining bytes -= how many bytes we just hashed (typically 16K)
684   D5C3              AddA.L    D3, A2          ; A2 -> next chunk to hash
686   4FEF 0010         Lea.L     $10(A7), A7     ; cleanup stack
68A   4A84              Tst.L     D4              ; are there any remaining bytes left?
68C   6EDA              BGT.B     (* + -$24)      ; keep hashing
68E   4878 0000         Pea.L     ($0)
692   4878 0000         Pea.L     ($0)
696   2F2E 001C         Move.L    $1C(A6), -(A7)  ; $1C(A6) == $2D (size of signature?)
69A   2F2E 0018         Move.L    $18(A6), -(A7)  ; $18(A6) -> second longword in data block after hashes in auth file (start of signature?)
69E   2F2E FFFC         Move.L    -$4(A6), -(A7)  ; -$4(A6) -> hash object
6A2   4EB9 0000 81A2    Jsr       ($81A2).L       ; probably decrypt the signature?
6A8   2A00              Move.L    D0, D5
6AA   4FEF 0014         Lea.L     $14(A7), A7     ; cleanup stack

I knew that the first call at $816C computes the MD5 digest of the PippinAuthenticationFile. Since that call only produces a digest, whatever happens in the second call at $81A2 must determine whether verification succeeds. Therefore the public key must exist in memory at some point during $81A2's execution, and the signature bytes must exist in memory at the same time. If I drilled down into $81A2 until I found the signature bytes in RAM, I should find clues as to what data is used to decrypt it, based on the proximity of what's changed to what hasn't, thanks to how I implemented my "heap."

$81A2 eventually makes its way to a subroutine at $1B0E, wherein I found the following:

1B58   2F0B              Move.L    A3, -(A7)    ; push nullptr?
1B5A   4878 0000         Pea.L     ($0)         ; push 0
1B5E   2F04              Move.L    D4, -(A7)    ; push size of signature?
1B60   2F05              Move.L    D5, -(A7)    ; push address of signature?
1B62   42A7              Clr.L     -(A7)        ; push 0
1B64   486E FF78         Pea.L     -$88(A6)     ; push ptr to area on stack
1B68   4878 0000         Pea.L     ($0)         ; push 0
1B6C   486A 0028         Pea.L     $28(A2)      ; push ptr to $10F0 in hash object?
1B70   4EB9 0000 12E0    Jsr       ($12E0).L    ; jump somewhere that copies the signature to a working buffer
1B76   2600              Move.L    D0, D3       ; D0 == result code
1B78   4FEF 0020         Lea.L     $20(A7), A7
1B7C   6600 00AE         BNE       (* + $B0)    ; if it's nonzero, bail
1B80   2F0B              Move.L    A3, -(A7)    ; push nullptr?
1B82   4878 0000         Pea.L     ($0)         ; push 0
1B86   4878 0040         Pea.L     ($40)        ; push 64
1B8A   486E FF98         Pea.L     -$68(A6)     ; push ptr to somewhere on stack
1B8E   486E FFA8         Pea.L     -$58(A6)     ; push ptr to somewhere else on stack
1B92   486A 0028         Pea.L     $28(A2)      ; push ptr to $10F0 in hash object?
1B96   4EB9 0000 13D8    Jsr       ($13D8).L    ; jump somewhere, get processed data we care about on stack

$12E0 copies our signature from the PippinAuthenticationFile into a working buffer shortly after a copy of part of the data block in ROM passed to 'rvpr' 0 upon initial invocation. Could the "processed data" coming from the subroutine at $13D8 be our decrypted signature? I took a look at the memory before and after the call, and...

Do you see it?

Look at memory location $20451.

When I saw it, I gasped. There it is. There's our decrypted digest.

I wasn't as lucky with the RSA code as I was with MD5—nothing from version 1.0 or 2.0 of RSA's reference implementation appears verbatim in this code, but those sources do answer the question of the signature format. The bytes appearing before our decrypted digest are a header consisting mostly of padding and magic numbers in the style of PKCS #1: a run of $FF padding bytes followed by an algorithm identifier, with a lonely "05" byte at address $2044C, or offset 24 into our decrypted signature. That byte's value indicates that the digest is an MD5 digest, just like the reference implementation specifies.

That completed my understanding of the format of the PippinAuthenticationFile, leaving only one final piece of the puzzle: what and where the public key is. The public key must come from somewhere, but at this point I hadn't yet determined the purpose of the data passed in from ROM to 'rvpr' 0...

RSA (in which I dust off my math minor)

The RSA public-key cryptosystem, invented in 1977 by Ronald Rivest, Adi Shamir, and Leonard Adleman, is built upon the notion that factoring large semiprime numbers is a hard problem. Not impossible, but very hard: finding the two prime factors of such a number can take a computer or cluster of computers a significant amount of time, proportional to the size of the semiprime being factored. Mathematically, RSA involves a few steps, but each can be carried out using basic algebraic concepts.

First, find two numbers P and Q such that they are both prime, meaning that both P and Q can only be divided neatly by themselves and one. Let's use P = 19 and Q = 17 as examples.

Next, compute P \cdot Q. 19 \cdot 17 = 323 in our example case. Call this N.

Now we need to calculate \lambda. We do this by computing (P - 1) \cdot (Q - 1). (19 - 1) \cdot (17 - 1) = 18 \cdot 16 = 288. (Strictly speaking, this is Euler's totient of N; the Carmichael function \operatorname{lcm}(P - 1, Q - 1) is the smaller, tighter choice, but (P - 1) \cdot (Q - 1) works just as well for our purposes.)

We need to choose a value for e such that e and \lambda are coprime; that is, e and \lambda share no factors other than 1. The smallest value that works here is 5, so we'll use that in our example case, but typically a larger value is chosen with a small number of 1 bits in its binary representation.

Finally we need to find the value of D. D can be found by solving the equation D \cdot e \equiv 1 \pmod \lambda. We do this by using the extended Euclidean algorithm.

All we do here is integer divide \lambda by e and also take \lambda \mod e; that is, we divide \lambda by e with remainder. Then repeat this process with the results until the remainder equals one: divide the divisor by the remainder from the previous calculation. For example, start with \lambda \div e:

\begin{aligned}  288 \div 5 &= 57\text{ remainder }3 \\  288 &= (57 \cdot 5) + 3  \end{aligned}

Take the divisor from that and divide by the remainder:

\begin{aligned}  5 \div 3 &= 1\text{ remainder }2 \\  5 &= (1 \cdot 3) + 2  \end{aligned}

One more time:

\begin{aligned}  3 \div 2 &= 1\text{ remainder }1 \\  3 &= (1 \cdot 2) + 1  \end{aligned}

Once we have a remainder of one, we've found the greatest common divisor and so we need to build back up to find our value for D. Do so by substituting our results until we have a linear combination of \lambda and e (288 and 5, respectively):

\begin{aligned}  1 &= 3 - (1 \cdot 2) \\  &= 3 - 1 \cdot (5 - (1 \cdot 3)) \\  &= 3 - 5 + 3 \\  &= 2 \cdot 3 - 1 \cdot 5 \\  &= 2 \cdot (288 - (57 \cdot 5)) - 1 \cdot 5 \\  &= 2 \cdot 288 - 114 \cdot 5 - 5 \\  &= 2 \cdot 288 - 115 \cdot 5  \end{aligned}

Since we need to satisfy D \cdot e \equiv 1 \pmod \lambda, we ignore \lambda's coefficient here, leaving e with a coefficient of -115. \lambda equals 288 (as we calculated earlier), so -115 \mod 288 is 173, giving us our value for D.
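The back-substitution above can be automated with the extended Euclidean algorithm. Here's a short Python sketch (the function names are mine) that recovers D from e and \lambda the same way:

```python
def ext_gcd(a: int, b: int):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(e: int, lam: int) -> int:
    """Solve D * e ≡ 1 (mod lam), as in the worked example above."""
    g, x, _ = ext_gcd(e, lam)
    if g != 1:
        raise ValueError("e and lam must be coprime")
    return x % lam

# e = 5, lambda = 288 from our example; -115 mod 288 = 173
print(mod_inverse(5, 288))  # 173
```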

We now have everything we need to sign and verify messages. Our "private key" is our values for D and N—messages signed with the private key can only be verified by someone with our "public key". Our "public key" is our values for e and N—only our public key can verify messages signed by our private key, assuring our recipient that the signed message indeed comes from us and can be trusted.

Let's say we want to send someone the answer to the ultimate question, but we want to sign it first in case our message gets intercepted by Vogons. 😛 Call the original message M with the value 42, and here we'll calculate the signature S. We do so using our values for D and N:

\begin{aligned}  S &= M^{D} \mod N \\  S &= 42^{173} \mod 323 \\  &= 111  \end{aligned}

When our recipient receives our message, they will need to verify our signature in order to be sure it can be trusted and that it has not been tampered with by any poetic aliens. 😉 They do so using our values for e and N:

\begin{aligned}  M &= S^{e} \mod N \\  M &= 111^{5} \mod 323 \\  &= 42  \end{aligned}

If the signature matches the original message (as it does here in our example), the message arrived safely intact.
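The whole sign-and-verify round trip, using the toy values from this example, is just two modular exponentiations in Python (`pow` with three arguments does the modular reduction for us):

```python
# Toy RSA sign/verify with the example values derived above.
P, Q = 19, 17
N = P * Q            # 323
e = 5                # public exponent
D = 173              # private exponent, from the extended Euclidean algorithm

M = 42                    # the message
S = pow(M, D, N)          # sign with the private key (D, N)
print(S)                  # 111

recovered = pow(S, e, N)  # verify with the public key (e, N)
assert recovered == M     # the signature checks out
```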

Notice that the values for P, Q, e, \lambda, N, and D are all relatively small—the largest of these, N, only needs nine bits for its binary representation. You could therefore say that we used a 9-bit key in our above example. Furthermore, notice that our message 42—and its signature 111—also fit inside of nine bits. This is a property of RSA: a key of bit length X can only operate on a message of at most X bits; more precisely, the message, treated as an integer, must be smaller than N.

The signature size as defined in the PippinAuthenticationFile is 45 bytes, suggesting that the Pippin's public RSA key is 360 bits long (45 bytes \cdot 8 bits per byte). Recall from earlier that although the RSA public key was still unknown, it had to come from somewhere in the ROM, and at this point it was still unclear what purpose the blocks of data passed to 'rvpr' 0 served.

I found part of one of the blocks in RAM near where the decrypted signature is: 45 bytes, same as the signature. I also found a nearby value of 0x10001, or 65537, which seems to be a popular choice for the value of e. Hmm. Interesting.

I found another block of memory also nearby containing the same data, but reversed by 16-bit words. Hmm. Interesting.

Wonder what the odds are... 😉

It was reasonable to hypothesize that one of these blocks contained the public key. There are plenty of web pages out there that explain RSA using examples, some with implementations in JavaScript allowing someone to plug in their own keys, messages, and signatures. I tried the nearest 45-byte "key" in one such website with the raw 45-byte signature from the PippinAuthenticationFile and...

It didn't work. The "decrypted" signature didn't match at all. Garbage in, garbage out. Cue sigh of disappointment.

I had one data block left, and with little hope remaining...


I'd found it.

Apple's public key for verifying the signature on a PippinAuthenticationFile is:
E0 E0 27 5C AB 60 C8 86 A3 FA C2 98 21 79 54 A8 9F D1 B9 DC 8A BA 84 EF B1 E7 C9 E2 1B F7 DD D7 DC F0 E4 4A BB 79 51 0E 7C EB 80 B1 1D

... and I didn't even have to look at very much code. 😀

Cracking RSA

I just have to crack RSA now, right?

Fortunately, available tools make that a much less daunting prospect than popular media contemporary with the Pippin suggested. There was even an ongoing RSA Factoring Challenge until 2007. Back then, though, it was a different story: the Open Source Initiative had yet to be founded, and prime factoring was done mostly in isolation by dedicated teams with access to massive amounts of computing power (for the time). A 364-bit number took a two-person team and a supercomputer about a month to factor in 1992.

But this isn't 1992 anymore. The computers on our desks and in our pockets have more than enough number-crunching power to factor the Pippin's public key. Today, with some freely available open source software and a typical desktop PC, a 360-bit key can be factored in a matter of hours. And thanks to the efforts of several open source projects within recent years, we have a little tool to help us called msieve. 😀

msieve is very user-friendly. 😉 You pass the number you want to factor as its only command line argument and it just goes. It even saves its progress to disk, in case it's a Really Big Number and something terrible happens, like a power outage.
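At toy scale, what msieve does can be illustrated with plain trial division—though to be clear, this is only an illustration of the problem, not of msieve's method; msieve earns its keep with far more sophisticated techniques (the quadratic sieve and the number field sieve). The function name here is mine:

```python
def factor_semiprime(n: int):
    """Recover P and Q from a semiprime N by brute-force trial division.
    Fine for our 9-bit toy key; hopeless at 360 bits, which is exactly
    why tools like msieve exist."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("n is prime")

# Our toy public modulus from earlier falls instantly:
print(factor_semiprime(323))  # (17, 19)
```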

msieve took 18 hours, 34 minutes, and 4 seconds on my i7 Intel NUC to find two prime factors P and Q of the Pippin's public key:

Hard part's over. 😀

Let's plug these into our RSA formulas from above and find Apple's private key, shall we?

P = 0F 2D 25 BF 3C 5B 70 28 72 6E 49 75 3F D5 62 67 11 37 38 94 51 EF D7
Q = 0E D1 47 5D E1 92 41 28 59 2C 4B 3E 47 4E 5F C1 23 1F 1B AF A0 D8 2B
e = 0x10001
N = P \cdot Q = E0 E0 27 5C AB 60 C8 86 A3 FA C2 98 21 79 54 A8 9F D1 B9 DC 8A BA 84 EF B1 E7 C9 E2 1B F7 DD D7 DC F0 E4 4A BB 79 51 0E 7C EB 80 B1 1D (this is the public key—we know this from stepping through 'rvpr' 0 and examining memory)
\lambda = (P - 1) \cdot (Q - 1) = E0 E0 27 5C AB 60 C8 86 A3 FA C2 98 21 79 54 A8 9F D1 B9 DC 8A BA 66 F1 44 CA AB F4 6A A7 12 3D 48 3D 5D 26 F9 51 1C B8 28 A7 8D E9 1C
D \equiv e^{-1} \pmod \lambda = 01 1C D3 AD E7 99 86 67 D6 E9 E2 17 11 DB EC 33 07 B6 0E 4D 6D 03 26 20 77 5D DB 9B 3B 64 CF 22 B2 0E 4A F3 2F 07 40 EE B0 6F 85 F2 A0 1D

That last value for D should be Apple's private signing key. Now let's verify, using the same JavaScript RSA calculator I found on the Web. As a test case, let's again use the PippinAuthenticationFile from the "Macintosh on Pippin" CD (a.k.a. "Tuscon").

Its first four bytes indicate a message size of $FD4F, or 64847 bytes. The MD5 digest of those first 64847 bytes is AE 1A EC AE A4 C5 11 68 2E 38 7D D1 48 F0 55 C2.

It has the following signature:
5A 90 36 69 DD 06 F5 15 EF 7A A2 04 5D 24 C2 CA 3C DD 2E C3 85 7D BB B8 9C 53 78 24 65 CC F0 0A 52 09 20 76 E1 9D F7 CC B3 C6 6D 7E AF

When decrypted, we get:
00 01 FF FF FF FF FF FF FF FF 00 30 20 30 0C 06 08 2A 86 48 86 F7 0D 02 05 05 00 04 10 AE 1A EC AE A4 C5 11 68 2E 38 7D D1 48 F0 55 C2

Notice that the last 16 bytes match our computed MD5 digest.
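Picking the decrypted block apart programmatically makes the layout obvious. The bytes below are hardcoded from the dump above; the parsing is my own sketch of the format as described earlier:

```python
# The decrypted signature block from the "Tuscon" PippinAuthenticationFile.
decrypted = bytes.fromhex(
    "0001FFFFFFFFFFFFFFFF003020300C06082A864886F70D020505000410"
    "AE1AECAEA4C511682E387DD148F055C2"
)

assert len(decrypted) == 45     # same size as the signature and the key

# Offset 24 holds the algorithm identifier's final byte: 05 means MD5.
assert decrypted[24] == 0x05

# "04 10" announces a 16-byte string: the digest itself, in the last 16 bytes.
digest = decrypted[-16:]
print(digest.hex().upper())     # AE1AECAEA4C511682E387DD148F055C2
```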

Finally, if we take the decrypted signature and re-sign it using what we think is Apple's private key, we get:
5A 90 36 69 DD 06 F5 15 EF 7A A2 04 5D 24 C2 CA 3C DD 2E C3 85 7D BB B8 9C 53 78 24 65 CC F0 0A 52 09 20 76 E1 9D F7 CC B3 C6 6D 7E AF

... which matches the original signature found in the PippinAuthenticationFile.

Mr. Hammond, I think we're in business. 😀

The RSA keys used in the signing and verification of a PippinAuthenticationFile are 360 bits long.

Apple’s public key for verifying the PippinAuthenticationFile is:
E0 E0 27 5C AB 60 C8 86 A3 FA C2 98 21 79 54 A8 9F D1 B9 DC 8A BA 84 EF B1 E7 C9 E2 1B F7 DD D7 DC F0 E4 4A BB 79 51 0E 7C EB 80 B1 1D

Apple’s private key for signing a PippinAuthenticationFile is:
01 1C D3 AD E7 99 86 67 D6 E9 E2 17 11 DB EC 33 07 B6 0E 4D 6D 03 26 20 77 5D DB 9B 3B 64 CF 22 B2 0E 4A F3 2F 07 40 EE B0 6F 85 F2 A0 1D

There you go, Internet. We now have all the information we need to sign and boot our own Pippin media.