Doing DigiPres with Windows

A couple of times at NYU, a new student would ask me what kind of laptop I would recommend for courses and professional use in video and digital preservation. Maybe their personal one had just died and they were shopping for something new. Often they’d used both Mac and Windows in the past so were generally comfortable with either.

I was and still am conflicted by this question. There are essentially three options here: 1) some flavor of MacBook, 2) a PC running Windows, or 3) a PC running some flavor of Linux. (There are Chromebooks as well, but given their focus on cloud applications and lack of a robust desktop environment, I wouldn’t particularly recommend them for archivists or students looking for flexibility in professional and personal use.)

Each of these options has its drawbacks. MacBooks are and always have been prohibitively expensive for many people, and I’m generally on board with calling the “butterfly switch” keyboards on recent models a crime. Though a Linux distribution like Ubuntu or Linux Mint is my actual recommendation and personally preferred option, it remains unfamiliar to the vast majority of casual users (though Android and ChromeOS are maybe opening the window), and difficult to recommend without going full FOSS evangelist on someone who is largely just looking to make small talk (I’ll write this post another day).

So I never want to steer people away from the very real affordability benefits of PC/Windows – yet feel guilty knowing that’s angling them full tilt against the grain of Unix-like environments (Mac/Linux) that seem to dominate digital preservation tutorials and education. It makes them that student or that person in a workshop desperately trying in vain to follow along with Bash commands or muck around in /usr/local while the instructor guiltily confronts the trolley problem of spending their time saving the one or the many.

I also recognize that *nix education, while the generally preferred option for the #digipres crowd, leaves a gaping hole in many archivists’ understanding of computing environments. PC/Windows remains the norm in a large number of enterprise/institutional workplaces. Teaching ffmpeg is great, but do we leave students stranded when they step into a workplace and can’t install command line programs without Homebrew (or don’t have admin privileges to install new programs at all)?

This post is intended as a mea culpa for some of these oversights by providing an overview of some fundamental concepts of working with Windows beyond the desktop environment – with an eye towards echoing the Windows equivalents to usual digital preservation introductions (the command line, scripting, file system, etc.).

*nix vs NT

To start, let’s take a moment to consider how we got here – why are Mac/Linux systems so different from Windows, and why do they tend to dominate digital preservation conversations?

Every operating system (whether macOS, Windows, a Linux distribution, etc.) is actually a combination of various applications and a kernel. The kernel is the core part of the OS – it handles the critical task of translating all the requests you as a user make in software/applications to the hardware (CPU, RAM, drives, monitor, peripherals, etc.) that actually perform/execute the request.

“Applications” and “Kernel” together make up a complete operating system.

UNIX was a complete operating system created by Bell Labs in 1969 – but it was originally distributed with full source code, meaning others could see exactly how UNIX worked and develop their own modifications. There is a whole complex web of UNIX offshoots that emerged in the ’80s and early ’90s, but the upshot is that a lot of people took UNIX’s kernel and design and used them as the base for a number of other kernels and operating systems, including modern-day macOS (which descends from the BSD branch of that web) and Linux distributions (whose kernel was written from scratch to behave like UNIX’s). (These modifications and reimplementations mean that such operating systems, while derived in some way from UNIX, are not technically UNIX itself – hence you will see these systems referred to as “Unix-like” or “*nix”)

The result of this shared lineage is that Mac and Linux operating systems are similar to each other in many fundamental ways, which in turn makes cross-compatible software relatively easy to create. They share a basic file system structure and usually a default terminal and programming language (Bash) for doing command line work, for instance.

Meanwhile there was Microsoft, which, though it originally created its own (phenomenally popular!) variant of UNIX in the ’80s, switched over in the late ’80s/early ’90s to developing its own, proprietary kernel, entirely separate from UNIX. This was the NT kernel, which is at the heart of every modern Windows operating system (the Windows 95/98 line still sat atop MS-DOS, but everything from XP onward is NT). These fundamental differences in architecture (affecting not just clearly visible characteristics like file systems/naming and command line work, but extremely low-level methods of how software communicates with hardware) make it more difficult to make software cross-compatible between *nix and Windows operating systems without a bunch of time and effort from programmers.

So, given the choice between these two major branches in computing, why this appearance that Mac hardware and operating systems have won out for digital preservation tasks, despite being the more expensive option for constantly cash-strapped cultural institutions and employees?

It seems to me it has very little to do with an actual affinity for Macs/Apple and everything to do with Linux and GNU and open source software. Unlike either macOS or Windows, Linux systems are completely open source – meaning any software developer can look at the source code for Linux operating systems and thus both design and make available software that works really well for them without jumping through any proprietary hoops (like the App Store or Windows Store). It just so happens that Macs, because at their core they are also Unix-like, can run a lot of the same, extremely useful software, with minimal to no effort. So a lot of the software that makes digital preservation easier was created for Unix-like environments, and we wind up recommending/teaching on Macs because given the choice between the macOS/Windows monoliths, Macs give us Bash and GNU tools and other critical pieces that make our jobs less frustrating.

Microsoft, at least in the ’90s and early aughts, didn’t make as much of an effort to appeal directly to developers (who, very very broadly speaking, value independence and the ability to do their own, unique thing to make themselves or their company stand out) – they made their rampant success on enterprise-level business, selling Windows as an operating system that could be managed en masse as the desktop environment for entire offices and institutions with less troubleshooting of individual machines/setups.

(Also key is that Microsoft kept their systems cheaper by never getting into the game of hardware manufacturing – unlike Mac operating systems, Windows is meant to run on third-party hardware made by companies like HP, Dell, IBM, etc. The competition between these manufacturers keeps prices down, unlike Apple’s all-in-one hardware + OS systems, making them the cheaper option for businesses opening up a new office to buy a lot of computers all at once. Capitalism!)

As I’ll get into soon, there have been some very intriguing reversals to this dynamic in recent years. But, in summary: the reason Macs have seemed to dominate digital preservation workshops and education is that there was software that made our jobs easier. It was never that you can’t do these things with Windows.

Windows File System

Beyond the superficial desktop design differences (dock vs. panel, icons), the first thing any user moving between macOS and Windows probably notices are the differences between Finder (Mac) and File Explorer (Windows) in navigating between files and folders.

In *nix systems, all data storage devices – hard disk drives, flash drives, external discs or floppies, or partitions on those devices – are mounted into the same “root” file system as the operating system itself. In Windows, every storage device or partition is instead presented as its own “drive” (I use quotes because two partitions might exist on the same physical hard disk drive, but are identified by Windows as separate drives for the purpose of saving/working with files), identified by a letter: C:, D:, G:, etc. Usually the Windows operating system itself, and probably most of a casual user’s files, are saved onto the default C: drive. (Why start at C? It harks back to the days of floppy drives and disk operating systems, but that’s a story for another post.) From there, any additional drives, disks or partitions are assigned another letter (if for some reason you need more than 26 drives or partitions mounted at once, you’ll have to resort to some trickery!)

Also, file paths in Windows use backslashes (e.g. C:\Users\Ethan\Desktop) instead of forward slashes (/Users/Ethan/Desktop).

And if you’re doing command-line work, *nix file systems are generally case-sensitive (on Linux, /Users/Ethan/Desktop/sample.txt and /Users/Ethan/Desktop/SAMPLE.txt are different files – though note that macOS’s default file systems are actually case-insensitive, if case-preserving), while Windows’ DOS/PowerShell command line interfaces (more on these in a minute) are not (so saving SAMPLE.txt would overwrite sample.txt).
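These path conventions are easy to poke at from Python, whose standard pathlib module models both styles; a quick sketch (the file names here are just illustrative):

```python
from pathlib import PurePosixPath, PureWindowsPath

# Windows paths carry a drive letter and use backslashes
win = PureWindowsPath(r"C:\Users\Ethan\Desktop\sample.txt")
print(win.drive)       # C:
print(win.as_posix())  # C:/Users/Ethan/Desktop/sample.txt

# Windows-style path comparisons ignore case...
print(PureWindowsPath("SAMPLE.txt") == PureWindowsPath("sample.txt"))  # True

# ...while POSIX-style comparisons don't
print(PurePosixPath("SAMPLE.txt") == PurePosixPath("sample.txt"))      # False
```

(PurePath objects only manipulate the path text, so this behaves the same no matter which OS you run it on.)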

¯\_(ツ)_/¯

Fundamentally, I don’t think these minor differences make common tasks like navigating directories in the command line or scripting any more or less difficult than on Macs – it’s just different, and an important thing to keep in mind so you remember where your files live.

What DOES make things more difficult is when you start trying to move files between macOS and Windows file systems. You’ve probably encountered issues when trying to connect external hard drives formatted for one file system to a computer using another. Though there are a lot of deeper differences between file systems that cause these issues (logic, security, file size limits, etc.), you can see the problem just from the minor, user-visible differences just mentioned: if a file lives at E:\DRIVE\file.txt on a Windows-formatted external drive, how would a computer running macOS even understand where to find that file?

Modern Windows systems/drives are formatted with a file system called NTFS (for many years Microsoft used FAT and its variants such as FAT16 and FAT32 – NTFS replaced these, though Windows can still read and write legacy FAT-formatted drives). Apple, meanwhile, is in the midst of transitioning from its longtime system HFS+ (which you may have also seen referred to in software as “Mac OS Extended”) to a new file system, APFS.

File system transitions tend to be a headache for users. NTFS and HFS+ have at least been around long enough that quite a lot of software has been made to allow moving between the two file systems (if you are on Windows, HFS Explorer is a great free program for exploring and extracting files on Mac-formatted drives; while on macOS, the NTFS-3G driver allows for similarly reading and writing from Windows-formatted storage). Since APFS is still pretty new, I am less aware of options for reading APFS volumes on Windows, but will happily correct that if anyone has recommendations!

** I’ve gotten a lot of questions in the past about the exFAT file system, which is theoretically cross-compatible with both Mac and Windows systems. I personally have had great success formatting external hard drives with exFAT, but have also heard a number of horror stories from those who have tried to use it and been unable to mount the drive on one or the other OS. Anecdotally, it seems that drives originally formatted exFAT on a Windows computer have better luck than drives formatted exFAT on a Mac, but I don’t know enough to say why that might be the case. So: proceed, but with caution!

Executable Files

No matter what operating system you’re using, executable files are pieces of code that directly instruct the computer to perform a task (as opposed to data files, whose information needs to be parsed/interpreted by executable software in some way to become meaningful – think of a word processing application opening a text file, where the application is the executable and the text file is the data file). As a common user, any time you’ve double-clicked on a file to open an application or run a script, whether a GUI or a command line program, you’ve run an executable.

Where executable files live and how they’re identified to you, the user, varies a bit from one operating system to the next. A macOS user is accustomed to launching installed programs from the Applications folder, which are all actually .app files (.apps are in turn secret “bundles” that contain executables and other data files inside, which you can browse like a folder if you right-click an application and select “Show Package Contents”).

Executable files (with or without a file extension) might also be automatically identified by macOS anywhere in the file system and labeled in Finder with a generic black “exec” icon.

And finally, because executable files are also called binaries, you might find executables in a folder labeled “bin” (for instance, Homebrew puts the executable files for programs it installs into /usr/local/bin)

Windows-compatible executables are almost always identified with .exe file extensions. Applications that come with an installer (or, say, from the Windows Store) tend to put these in directories identified by the application name in the “Program Files” folder(s) by default. But some applications you just download off of GitHub or SourceForge or anywhere else on the internet might just put them in a “bin” folder or somewhere else.
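One way to see that .exe is mostly a naming convention: what really marks a Windows executable is the two-byte “MZ” signature at the very start of the file (a holdover from MS-DOS). A minimal sketch of checking for it – the sample byte strings below are made up, apart from the real two-byte header:

```python
def looks_like_windows_exe(data: bytes) -> bool:
    """Check for the 'MZ' magic number that opens DOS/Windows executables."""
    return data[:2] == b"MZ"

# The first bytes of a real .exe are the ASCII characters 'M' and 'Z'
print(looks_like_windows_exe(b"MZ\x90\x00\x03\x00"))  # True
print(looks_like_windows_exe(b"#!/bin/bash\n"))       # False (a shell script)
```

This is the same trick file-identification tools use when an extension is missing or lying.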

(Note: script files, which are often identified by an extension according to the programming language they were written in – .py files for Python, .rb for Ruby, .sh for Bash, etc. – can usually be made executable like an .exe, but are not necessarily executable by default. If you download a script, whether you’re working on Mac or Windows, you may need to check whether it arrived ready to execute or whether more steps need to be taken. Often it won’t have, because automatically running downloaded executables is a big no-no from the perspective of malware protection.)
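On the *nix side, “is this file executable?” is literally a permission bit you can inspect and flip – which is all `chmod +x` does after you download a script. A hedged sketch using Python’s standard library (the temp file stands in for a freshly downloaded script):

```python
import os
import stat
import tempfile

# Write a tiny shell script to a temporary file, as if freshly downloaded
fd, script = tempfile.mkstemp(suffix=".sh")
with os.fdopen(fd, "w") as f:
    f.write("#!/bin/sh\necho hello\n")

# Newly created/downloaded files are not executable...
print(os.access(script, os.X_OK))  # False

# ...until you set the executable bit (the equivalent of `chmod +x`)
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)
print(os.access(script, os.X_OK))  # True

os.remove(script)
```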

Command Prompt vs. PowerShell

On macOS, we usually do command line work (which offers powerful tools for automation, cool/flexible software, etc.) using the Terminal application and Bash shell/programming language.

Modern Windows operating systems are a little more confusing because they actually offer two command line interfaces by default: Command Prompt and PowerShell.

The Command Prompt application uses DOS commands and has essentially not changed since the days of MS-DOS, Microsoft’s pre-Windows operating system that was entirely accessed/used via command line (in fact, early Windows operating systems like Windows 3.1 and Windows 95 were to a large degree just graphic user interfaces built on top of MS-DOS).

The stagnation of Command Prompt/DOS was another reason developers started migrating over to Linux and Bash – to take advantage of more advanced/convenient features for CLI work and scripting. In 2006, Microsoft introduced PowerShell as a more contemporary, powerful command line interface and programming language aimed at developers in an attempt to win some of them back.

While I recognize that PowerShell can do a lot more and make automation/scripting way easier, I still tend to stick with Command Prompt/DOS when doing or teaching command line work in Windows. Why? Because I’m lazy.

The Command Prompt/DOS interface and commands date from the same era as the early Unix shells that Bash descends from – so even though development continued on Bash while Command Prompt stayed the same, a lot of the most basic commands and syntax remain analogous. For instance, the command for making a directory is the same in both Terminal and Command Prompt (“mkdir”), even if the specific file paths are written out differently.

So for me, while learning Command Prompt is a matter of tweaking some commands and knowledge I already have because of Bash, learning PowerShell is like learning a whole new language – making a directory in PowerShell, e.g., is done with the command shortcut “md”.* It’s sort of like saying I already know Castilian Spanish, so I can tweak that to figure out a regional dialect or accent – but learning Portuguese would be a whole other course.


But your mileage on which of those things is easier may vary! Perhaps it’s clearer to keep your *nix and Windows knowledge totally separate and take advantage of PowerShell scripting. But this is just to explain the Command Prompt examples I may use for the remainder of this post.

* ok, yes, “mkdir” also works in PowerShell, since it’s an alias for the same underlying cmdlet – this comparison doesn’t work so well with the basic stuff, but it holds true for more advanced functions/commands and you know it, Mx. Know-It-All

The PATH Variable

The PATH system variable is a fun bit of computing knowledge that hopefully ties together the three previous topics we’ve just covered!

Every modern operating system has a default PATH variable set when you start it up (system variables are basically configuration settings that, as their name implies, you can update or change). PATH determines where your command line interface looks for executable files/binaries to use as commands.

Here’s an example back on macOS. Many of the binaries you know and execute as commands (cd, mkdir, echo, rm) live in /bin or /usr/bin in the file system. By default, the PATH variable is set as a list that contains both “/bin” and “/usr/bin” – so that when you type

[cc lang="Bash"]$ cd /my/home/directory[/cc]

into Terminal, the computer knows to look in /bin and /usr/bin to find the executable file that contains instructions for running the “cd” command.

Package managers, like Homebrew, usually put all the executables for the programs you install into one folder (in Homebrew’s case, /usr/local/bin) and ALSO automatically update the PATH variable so that /usr/local/bin is added to the PATH list. If Homebrew put, let’s say, ffmpeg’s executable file into the /usr/local/bin folder, but never updated the PATH variable, your Terminal wouldn’t know where to find ffmpeg (and would just return a “command not found” error if you ran any command starting with [cc lang="Bash"]$ ffmpeg[/cc]), even though it *is* somewhere on your computer.
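You can watch this lookup machinery from Python, which works identically on *nix and Windows – os.pathsep is “:” on *nix and “;” on Windows, and shutil.which performs the same search your shell does:

```python
import os
import shutil

# PATH is a single string of directories joined by ':' (*nix) or ';' (Windows)
path_dirs = os.environ["PATH"].split(os.pathsep)
print(path_dirs[:3])  # the first few places commands are searched for

# shutil.which walks that list just like the command line does;
# it returns None when a command is nowhere on the PATH
print(shutil.which("mkdir"))
print(shutil.which("definitely-not-a-real-command"))  # None
```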

So let’s move over to Windows: because Command Prompt, by default, does not have a package manager, if you download a command line program like ffmpeg, Command Prompt will not automatically know where to find that command – unless a) you move the ffmpeg directory, executable file included, into somewhere already in your PATH list (maybe somewhere like “C:\Program Files\”), or b) update the PATH list to include where you’ve put the ffmpeg directory.

I personally work with Windows rarely enough that I tend to just add a command line program’s directory to the PATH variable when needed – a change that only takes effect in freshly opened Command Prompt windows (when in doubt, a reboot will definitely do it). So I can see how, if you were adding/installing command line programs for Windows more frequently, it would probably be convenient to designate a single “here’s my folder of command line executables” directory, update the PATH variable once, and then just put newly downloaded executables in that folder from then on.
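That “folder of command line executables” idea can be simulated in a few lines of Python: drop an executable into a directory, splice that directory into PATH, and the lookup suddenly succeeds. A sketch using *nix-style permissions (the mytool name and temp directory are made up):

```python
import os
import shutil
import stat
import tempfile

# A stand-in for your personal "folder of command line executables"
tools_dir = tempfile.mkdtemp()
tool = os.path.join(tools_dir, "mytool")
with open(tool, "w") as f:
    f.write("#!/bin/sh\necho hi\n")
os.chmod(tool, os.stat(tool).st_mode | stat.S_IXUSR)

# The folder isn't on PATH yet, so the lookup fails
print(shutil.which("mytool"))  # None

# Splice the folder into PATH (what installers and package managers do for you)
os.environ["PATH"] = tools_dir + os.pathsep + os.environ["PATH"]
print(shutil.which("mytool"))  # now the full path to mytool
```

(The change here only lasts for this one process – which is exactly why on Windows you set PATH at the system level and then open a fresh Command Prompt.)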

*** I do NOT recommend putting your Downloads or Documents folder, or any user directory that you can access/change files in without admin privileges, into your PATH!!! It’s an easy trick for accidentally-downloaded malware to get into a commonly-accessed folder (and/or one that doesn’t require raised/administrative privileges) and then be able to run its executable files from there without you ever knowing because it’s in your PATH.

Updating the PATH variable was a pain in the ass in Windows 7 and 8 but is much less so now on Windows 10.

If you are in the command line and want to quickly troubleshoot which file paths/directories are and aren’t in your PATH, you can use the “echo” command, which is available on both *nix and Windows systems. In *nix/Bash, variables are referenced with a leading dollar sign (as in $PATH), so

[cc lang="Bash"]$ echo $PATH[/cc]

would return something like the following, with the different file paths in the PATH list separated by colons:

[cc lang="Bash"]/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin[/cc]

While on Windows, variables are identified like %THIS%, so you would run

[cc lang="DOS" escaped="true"] > echo %PATH%[/cc]

Package Management

Now, as I said above, package managers like Homebrew on macOS can usually handle updating/maintaining the PATH variable so you don’t have to do it by hand. For some time, there wasn’t any equivalent command line package manager (at least that I was aware of) for Windows, so there wasn’t any choice in the matter.

That’s no longer true! Now there’s Chocolatey, a package manager that will work with either Command Prompt or PowerShell. (Note: Chocolatey *uses* PowerShell in the background to do its business, so even though you can install and use Chocolatey with Command Prompt, PowerShell still needs to be on your computer. As long as your computer/OS is newer than 2006, this shouldn’t be an issue).

Users of Homebrew will generally be right at home with Chocolatey: instead of [cc lang="Bash"]$ brew install [package-name][/cc] you can just type [cc lang="DOS" escaped="true"] > choco install [package-name][/cc] and Chocolatey will do the work of downloading the program and putting the executable in your PATH, so you can get right back to command line work with your new program. It works whether you’re on Windows 7 or 10.

The drawback, at least if you’re coming over from using macOS and Homebrew, is that Chocolatey is much more limited in the number/type of packages it offers. That is, Chocolatey still needs Windows-compatible software to offer, and as we went over before, some *nix software has just never been ported over to supported Windows versions (or at least, into Chocolatey packages). But many of the apps/commands you probably know and love as a media archivist: ffmpeg, youtube-dl, mediainfo, exiftool, etc. – are there!

Scripting

If you’ve been doing introductory digipres stuff, you’ve probably dipped your toes into Bash scripting – which is just the idea of writing Bash code into a plain text file containing instructions for the Terminal. Your computer can then run the instructions in the script without you needing to type out all the commands into Terminal by hand. This is obviously most useful for automating repetitive tasks, where you might want to run the same set of commands over and over again on different directories or sets of files, without constant supervision and re-typing/remembering of commands. Scripts written in Bash and intended for Mac/Linux Terminals tend to be identified with “.sh” file extensions (though not exclusively).

Scripting is absolutely still possible with Windows, but we have to adjust for the fact that the Windows command line interface doesn’t speak Bash. Command Prompt scripts are written in DOS style, and their benefit for Windows systems (over, say, PowerShell scripts) is that they are extremely portable – it doesn’t much matter what version of Windows or PowerShell you’re using.

Command Prompt scripts are often called “batch files” and can usually be identified by two different file extensions: .bat or .cmd

(The difference between the two extensions/file formats is negligible on a modern operating system like Windows 7 or 10. Basically, BAT files were used with MS-DOS, and CMD files were introduced with Windows NT operating systems to account for the slight improvements from the MS-DOS command line to NT’s Command Prompt. But, NT/Command Prompt remained backwards-compatible with MS-DOS/BAT files, so you would only ever encounter issues if you tried to run a CMD file on MS-DOS, which, why are you doing that?)

Actually writing Windows batch scripts for Command Prompt is its own whole tutorial. Which – thankfully – I don’t have to write, because you can follow this terrific one instead!

Because Command Prompt batch scripts are still so heavily rooted in decades-old strategies (DOS) for talking to computers, they are pretty awkward to look at and understand.


A lot of software developers skip right to a more advanced/human-readable programming language like Python or Ruby. Scripts in these languages also have the benefit of being cross-platform, since Python, for example, can be installed on either Mac or Windows. But Windows will not understand them out of the box, so you have to install the language and set up a development environment rather than start scripting from scratch as you can with BAT/CMD files.
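To give a taste of that cross-platform upside, here’s a hedged sketch of a classic digipres starter script – checksum every file under a directory – written in Python so the very same file runs on Mac, Linux, or Windows (pathlib absorbs the slash and drive-letter differences):

```python
import hashlib
from pathlib import Path

def checksum_tree(directory):
    """Return {relative path: MD5 hash} for every file under `directory`."""
    sums = {}
    root = Path(directory)
    for f in sorted(root.rglob("*")):
        if f.is_file():
            sums[f.relative_to(root).as_posix()] = hashlib.md5(f.read_bytes()).hexdigest()
    return sums

# Works the same whether you point it at /Users/Ethan/docs or E:\DRIVE
```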

Windows Registry

The Windows Registry is fascinating (to me!). Whereas in a *nix operating system, critical system-wide configuration settings are contained as text in files (usually high up under the “root” level of your computer’s file system, where you need special elevated permissions to access and change them), in Windows these settings (beyond the usual, low-level user settings you can change in the Settings app or Control Panel) are stored in what’s called the Windows Registry.

The Windows Registry is a database where configuration settings are stored, not as text in files, but as “registry values” – pieces or strings of data that store instructions. These values are stored in “keys” (folders), which are themselves stored in “hives” (larger folders that categorize the keys within related subfolders).

Registry values can be a number of different things – text, a string of characters representing hex code, a boolean value, and more. Which kind of data they are, and what you can acceptably change them to, depends entirely on the specific setting they are meant to control. However, values and keys are also often super abstracted from the settings they change – meaning it can be very very hard to tell what exactly a registry value can change or control just by looking at it.


For that reason, casual users are generally advised to stay out of the Windows Registry altogether, and those who are going to change something are advised to make a backup of their Registry before making edits, so they can restore it if anything goes wrong.

Just as you may never mess around in your “/etc” folder on a Mac, even as a digitally-minded archivist, you may or may not ever need to mess in the Windows Registry! I’ve only poked around in it to do things like removing that damn OneDrive tab from the side of my File Explorer windows so I stop accidentally clicking on it. But I thought I’d mention it since it’s a pretty big difference in how the Windows operating and file systems work.

Or disable Microsoft’s over-extensive telemetry! Which, along with the fact that you can’t fucking uninstall Candy Crush no matter how hard you try, is one of the big things holding back an actually pretty nifty OS.

How to Cheat: *nix on Windows

OK. So ALL OF THIS has been based on getting more familiar with Windows and working with it as-is, in its quirky not-Unix-like glory. I wanted to write these things out, because to me, understanding the differences between the major operating systems helped me get a better handle on how computers work in general.

But guess what? You could skip all that. Because there are totally ways to just follow along with *nix-based, Bash-centric digital preservation education on Windows, and get everyone on the same page.

On Windows 7, there were several programs that would port a Bash-like command line environment and a number of Linux/GNU utilities to Windows – Cygwin probably being the most popular. Crucially, this was not a way to run Linux/Unix-like applications on Windows – it was just a way of doing the most basic tasks of navigating and manipulating files on your Windows computer in a *nix-like way, with Bash commands (the software essentially translated them into DOS/Windows commands behind the scenes).

But a way way more robust boon for cross-compatibility was the introduction of the Windows Subsystem for Linux in Windows 10. Microsoft, for many years the fiercest commercial opponent of open source software/systems, has in the last ten years or so increasingly been embracing the open source community (probably seeing how Linux took over the web and making a long-term bet to win back the business of software/web developers).


The WSL is a complete compatibility layer for, in essence, installing Linux operating systems like Ubuntu or Kali or openSUSE *inside* a Windows operating system. Though it still has its own limitations, this basically means that you can run a Bash terminal, install Linux software, and navigate around your computer just as you would on a *nix-only system – from inside Windows.

The major drawback from a teaching perspective: it is still a Linux operating system Windows users will be working with (I’d personally recommend Ubuntu for digipres newbies) – meaning there may still be nagging differences from Mac users (command line package management using “apt” instead of Homebrew for instance, although this too can be smoothed over with the addition of Linuxbrew). This little tutorial covers some of those, like mounting external drives into the WSL file system and how to get to your Windows files in the WSL.

You can follow Microsoft’s instructions for installing Ubuntu on the WSL here (it may vary a bit depending on which update/version exactly of Windows 10 you’re running).

That’s it! To be absolutely clear at the end of things here, I’ve generally not worked with Windows for most digital/audiovisual preservation tasks, so I’m hardly an expert. But I hope this post can be used to get Windows users up to speed and on the same page as the Mac club when it comes to the basics of using their computers for digipres fun!

A Guide to Legacy Mac Emulators

Last fall I wrote about the collaborative technical/scholarly process of making some ’90s multimedia CD-ROMs available for a Cinema Studies course on Interactive Cinema. I elided much of the technical process of setting up a legacy operating system environment in an emulator, since my focus for that post was on general strategy and assessment – but there are aspects of the technical setup process that aren’t super clear from the Emaculation guides that I first started with.

That’s not too surprising. The tinkering enthusiast communities that come up with emulators for Mac systems, in particular, are not always the clearest about self-documentation (the free-level versions of PC-emulating enterprise software like VirtualBox or VMWare are, unsurprisingly, more self-describing). That’s also not something to hold against them in the least, mind you – when you are a relatively tiny, all-volunteer group of programmers keeping the software going to maintain decades’ worth of content from a major computing company that’s notoriously litigious about intellectual property… some of the details are going to fall through the cracks, especially when you’re trying to cram them into a forum post, not specifically addressing the archival/information science community, etc.

In particular, while each Mac emulator has some pretty good information available to troubleshoot it (if you’ve got the time to find it), I’ve never found a really satisfying overview, that is, an explanation of why you might choose X program over Y. That’s partly because, as open source software, each of these programs is *potentially* capable of a hell of a lot – but might require a lot of futzing in configuration files and compiling of source code to actually unlock all those potentials (which, those of us just trying to load up Nanosaur for the first time in 15 years aren’t necessarily looking to mess with). Most of the “default” or recommended pre-compiled Mac/Windows versions of emulators offered up to casual or first-time users don’t necessarily do every single feature that the emulator’s front page brags about.

So what really is the boundary between Basilisk II and SheepShaver? Why is there such a difference between MacOS 9.0.4 and 9.1? And what the hell is a ROM file anyway? That’s what I want to get into today. I’ll start with the essential components to get any Mac emulation program running, give some recommendations for picking an emulator, then round it out with some installation instructions and tips for each one.

What do I need?

An Emulator
There are several free and open-source software options for emulating legacy Mac systems on contemporary computers. We’ll go over these in more detail in a minute.

An Operating System Installer
You’ll need the program that installs the desired operating system that you’re trying to recreate/emulate: let’s say, for example, Mac OS 8.5. These used to come on bootable CD-ROMs, or depending on the age of the OS, floppy disks. If you still have the original installer CD lying around, great! You can still use that. For the rest of us, there’s WinWorld, providing disk image files for all your abandonware OS needs.

A ROM File
This is the first thing that can start to throw people off. So you know your computer has a hard drive, where your operating system and all your files and programs live. It also has a CPU – the central processing unit, commonly analogized as the “brain” of the computer – which coordinates all the different pieces of your computer, hardware and software alike: operating system, keyboard, mouse, monitor, hard drive (or solid state drive), CD-ROM drive, USB hub, etc. etc. But the CPU itself needs a little bit of programmed code in order to work – it has to be able to both understand and give instructions. Rather than being stored on a hard drive like the operating system, which is easily writable/modifiable by the user, this crucial, small piece of code is stored on a dedicated chip of Read-Only Memory (the read-only part is why this sort of code is often called firmware rather than software). Keeping it there ensures your computer has at least some basic functionality even if your operating system were to get corrupted or some piece of hardware were to truly go haywire (see this other post). That’s your ROM file. If the whole goal of an emulator is to trick legacy software into thinking it’s on an older machine by creating a fake computer-inside-your-computer (a “virtual machine”), you need a ROM file to serve as the fake brain.

This is trickier with Mac emulation than it is with Windows/PC emulation. The major selling point of Windows systems is that they are not locked into specific hardware: there are/have been any number of third-party manufacturers (Dell, Lenovo, HP, IBM, etc. etc.), and they all have to make sure their hardware, including the CPU/ROM that comes with their desktops, is very broadly compatible, because they can never predict what other manufacturer’s hardware/software you may be trying to use in combination. This makes emulation easier, because the emulating application can likewise go for broad compatibility and probably be fine, without worrying too specifically about *exactly* what model of CPU/ROM it’s trying to imitate (see, for example, DOSBox).

Not so with Mac, since Apple makes closed-box systems: the hardware, OS, software, etc., are all very carefully designed to only work with their own stuff (or, at least, stuff that Apple has pretty specifically approved/licensed). So setting up a Mac emulator, you have to get very specific about which ROM file you are using as your fake brain – because certain Apple models would have certain CPUs, which could only work with certain operating system versions, which would only work with certain versions of QuickTime, which would only play certain files, which would………

That sounds exhausting.

It is.

Wait, if this relies on proprietary code from closed-box systems… is this legal?

Well, if you got this far in an article about making fake Macs before asking that, I’m not so sure you actually care about the answer. But, at least so far as my knowledge of American intellectual property law goes, and I am by no means whatsoever an expert, we are in gray legal territory. You’re definitely safest to extract and use the ROM file of a Mac computer you bought. Otherwise we’re basically making an intellectual property fair use case here – that we’re not taking any business/profit from Apple by using this firmware/software for personal or educational purposes, and that providing emulation for archival purposes (and yes, I would consider recovering personal data an “archival” process) serves a public good by providing access to otherwise-lost files. The (non-legal) term “abandonware” does also exist for a reason – these forums/communities are pretty prominent, and Apple’s shown no particular signs recently of looking to shut them down or stem the proliferation of legacy ROMs floating around.

Of course, be careful about who and where you download from. Besides malware, it’s easy to come across ROM files that are just corrupted and non-functional. I’ll link with impunity to options that have worked for me.
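When in doubt about a download, a checksum and a size check go a long way toward catching a corrupted or truncated file. A quick sketch – “MacPlus.ROM” is a hypothetical placeholder name, and the first line just fabricates a stand-in file so the commands have something to chew on; in practice you’d point them at your real download:

```shell
# Placeholder standing in for a downloaded ROM file (use your real file):
printf 'dummy ROM contents' > MacPlus.ROM

# cksum prints a CRC checksum and the byte count. Compare the checksum
# against one published alongside the download (if any), and the size
# against the known ROM size - a real Mac Plus ROM is exactly 131072
# bytes (128 KB). (Use shasum -a 256 / sha256sum instead, if available.)
cksum MacPlus.ROM
```

A file that’s even one byte off the documented size for that Mac model is a good sign you should go find another copy.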

[Also be advised! Files made from video game cartridges – SNES, Game Boy, Atari, Sega Genesis, the like – are *also* called ROM files, since game cartridges are also really just pieces of Read-Only Memory. Just searching “ROMs” is going to offer up a lot of sites offering those, which is very fun and all but definitely not going to work in your Mac emulator]

So how do I pick what ROM file and emulator to use?

That’s largely going to depend on what OS you’re aiming for. There are four emulators that I’ve used successfully (read: that have builds and guides available on Emaculation) that together cover the gamut of basically all legacy Mac machines: Mini vMac, Basilisk II, SheepShaver, and QEMU.

As I mentioned at the top, a confusing aspect is that many of these programs have various “builds” – different versions of the same basic application that offer tweaks and improvements focused on one particular feature or another. The most stable, generic version that they recommend for download might not actually be compatible with *every* ROM or operating system that the emulator can theoretically handle (with a different build).

In hunting down ROM files, you’ll probably also come across ROMs listed, rather than from a particular Mac model, as “Old World” or “New World”. These essentially refer to the two broad “families” of ROM architecture that Apple used for Macs before moving to the Intel chips still found in Macs today: generally speaking, “Old World” covers the Motorola 68000 series of processors (and the earliest PowerPC machines), while “New World” refers to the later models – from the original iMac onward – built on the PowerPC line spearheaded by “AIM” (an Apple-IBM-Motorola alliance).

New World and Old World ROMs can be a good place to start, since they are often taken from sources (e.g. recovery discs) more broadly aimed at emulating Motorola 68000 or PowerPC architecture and therefore could potentially imitate a number of specific Mac models – but don’t be too surprised if you come across a software/OS combination that’s just not working and you have to hunt down a more specific ROM for a particular Mac brand/model. We’ll see an example of this in a moment with our first emulator.

If you are currently using macOS or iOS, you can find some wonderfully detailed tech specs on every single piece of Mac hardware ever made using the freeware Mactracker app. Picking an exact model to emulate based on your OS/processor needs can help narrow down your search.

So with that in mind, let’s cut to the chase – moving chronologically through Mac history, here are my recommendations and install tips:

Mini vMac

Recommended:

Mini vMac is an easy-to-set-up (but therefore probably not the most robust) emulator for black-and-white Motorola 68000-based Macs. While it’s eminently customizable if you dig into the source code (or pay the guy who makes it at least $10 for a year’s worth of by-request custom setups – not a bad deal!!), the “standard variation” of this app that the Mini vMac page encourages you to download for a quick start quite specifically emulates a Macintosh Plus. The Macintosh Plus could run Mac System 1.1 through 7.5.5 – but the System 7.x versions frequently required some extra hardware finagling, so I’d recommend sticking with Mini vMac for Systems 1.x through 6.x.

Also, while a generic Old World ROM *should* work since Mac Pluses had 68000-series CPUs, I haven’t had much success with those on the standard variation of Mini vMac. Look for a ROM file specifically from a Mac Plus instead (such as the one in the link above, to a very useful GitHub repository with several ROMs from specific Mac models).

Installation:

  1. Download the standard variation build for your host operating system (OSX, Windows, etc.)
  2. Download a bootable disk image for your desired guest OS – System 1.x through 6.x (note these will probably be floppy disk image files, either with a .dsk or .img file extension).
  3. Download the Mac Plus ROM, rename it to “vMac.ROM” and place it in the same folder as the Mini vMac application.
  4. Double-click on the Mini vMac application to launch. You should hear a (very loud) startup beep and see a picture of a 3.5″ floppy with a blinking question mark on it – this indicates that the emulator has correctly loaded the ROM file but can’t find a bootable operating system.
  5. Drag the disk image for your operating system on to the Mini vMac window. If it’s a complete, bootable disk image, your desired operating system environment should appear!
  6. From there, it’ll depend what you’re doing – if you’re just trying to run/look at a piece of software from a floppy image, you can just drag and drop the .img or .dsk into the Mini vMac window and it should appear in your booted System 1.x-6.x environment. If you want to create a virtual “drive” to save your emulated environment on, you can create an empty disk image with one of the next two emulators on this list or the dd command-line utility if you’re on a Mac/Linux system (note: creating a Blank Image through a contemporary version of macOS’ Disk Utility won’t work):


    $ dd if=/dev/zero of=/path/to/image.img bs=512 count=1000

    will make a 512 KB blank image (512 bytes was the traditional size for floppy disk “blocks”). You can increase the count argument to however large you want, but be warned that early file systems had size limitations, and at some point operating systems and software like the classic Mac systems might not recognize/work with a disk that’s too large. Once you have a blank disk image, you can just drop it into a Mini vMac window running a legacy OS to format (“initialize”) it to Apple’s old HFS file system and start installing software/saving files to it. Generally speaking, though, a lot of apps and files from this era ran straight off of floppies, so if you’re just looking to run old programs without necessarily saving/creating new files, dragging and dropping in a system disk and then your .img or .dsk files should just do the trick!
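To build on the dd example above: the count argument is the knob you turn for a bigger virtual drive. A sketch (the image file name here is just an example) that makes a roughly 10 MB image and double-checks the result:

```shell
# Make a ~10 MB blank disk image for the emulator to initialize.
# 20480 blocks * 512 bytes = 10,485,760 bytes; the name is a placeholder.
dd if=/dev/zero of=MiniVMacDrive.img bs=512 count=20480

# Verify the byte count of the resulting file:
wc -c < MiniVMacDrive.img
```

Ten megabytes is cavernous compared to an 800 KB floppy, but still comfortably within what System 6-era HFS will happily initialize.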

Basilisk II

Recommended:

Basilisk II is another well-maintained Motorola 68000 series emulator. But whereas Mini vMac is primarily aimed at emulating the Mac Plus, Basilisk can either emulate a Mac Classic or a Mac II series model (hence “Basilisk II” – there never was a “Basilisk I”), depending on the configuration/build and ROM file used. As with Mini vMac, this might not be clear, but the precompiled configurations of Basilisk for Mac and Windows offered up on the Emaculation forums – which is where the Basilisk II homepage sends you – specifically offer the latter. So while you may see info that Basilisk II can emulate all the way from System 0.x through 8.1, the version you’re most likely going to be using out-of-the-box can probably only handle the more limited range of System 7.x through 8.1 (the targeted range for the Mac II series).

Note though: this was a kind of bananas era for Apple hardware when it came to model branding. So in terms of looking for ROMs, you can try to find something specifically labelled as being from a Mac II series machine – but you may also find success with a ROM file from a Quadra (largely the same as Mac II machines, but with some CPU/RAM upgrades) or a Performa (literally re-branded Quadras with a different software configuration, intended for consumer rather than professional use). In fact, the Performa and Quadra ROMs on Redundant Robot’s site work quite well, so I’ve linked/recommended those above (the Performa one definitely supports color display, so that’s a good one). These have the benefit of emulating a slightly more powerful machine than the older Mac IIs – but just keep in mind what you’re doing. Your desired OS may install fine, but you may still encounter a piece of legacy software that is relying on the configuration and limitations of the Mac II.

Installation:

  1. Download the precompiled app for your host operating system from Emaculation (I’ll assume OSX/macOS here).
  2. Download a bootable disk image for your desired guest OS, from 7.x through 8.1 – I’ll use Mac OS 7.6.1 here as an example. Note this is right in the transition period from floppies to CD-ROM – so your images will likely be ISOs instead of .img files.
  3. Download the Performa ROM from Redundant Robot and unzip it. It’s a good idea to put it in the same folder as the Basilisk app just to keep track of it, but this time you can put it anywhere – you’ll have the option to select it in the Basilisk preferences no matter where it lives on your computer (and you don’t need to rename it either).
  4. In the Basilisk folder, double-click on the “BasiliskGUI” app. This is where you can edit the preferences/configuration for your virtual machine:
  5. In the “Volumes” tab, click on “Create…”. We need to make a totally blank disk image to serve as the virtual hard drive for our emulated computer. Choose your save location using the file navigation menu, then pick a size and name for this file (something descriptive like MacOS7 or OS7_System can help you keep track of what it is). For System 7, somewhere around 500 or 1000 MB is going to be plenty. Hit “OK” and in a moment your new blank disk image will appear under the “Volumes” tab in preferences.
  6. Next select “Add” and navigate to the bootable disk image for your operating system. So, here we select and add the System 7.6.1 ISO file.
  7. The “Unix Root” is an option for Basilisk to create a “shared” folder between your host (contemporary) desktop and your emulated environment. This could be useful later for shuttling individual files into your emulator (as opposed to packaging and mounting disk images), but is not strictly necessary to set up right now. If you do choose to create and set up a shared folder now, you have to type out the full file path for the directory you want to use, e.g.:
  8. Keep the “Boot From” drop-down on “Any” and the “Disable CD-ROM Driver” box unchecked.
  9. The only other tabs you should have to make any changes on at this point are “Graphics” and “Memory/Misc”. For “Graphics”, make sure you have Video Type: “Window” selected. The Refresh Rate can be “Dynamic” so long as you’re on a computer that’s come out within…like, the last 10 years. The width and height resolution settings are up to you, but you might want to keep them accurate to a legacy resolution like 640×480 for accurate rendering of software later.
  10. In the “Memory/Misc” tab, choose the amount of RAM you want your virtual machine to have. 64 MB is, again, plenty powerful for the era. Again, you can always tweak this later if it helps with rendering software. Keep Mac Model ID on “Mac IIci (MacOS7.x)” (note: I’ve successfully installed System 7 in Basilisk using this setting and Quadra 900 ROM, so I think the desired operating system, rather than the Mac Model, might be the most important thing to keep in mind here). Also leave CPU Type on “68040”. Finally, use the Browse button to select the Performa ROM file you downloaded:
  11. At this point, you can hit “Save” and then “Start” at the bottom of the Basilisk Settings window…and you should successfully boot into the System 7.6.1 ISO! The desktop background even comes up with little optical discs to make sure you know you’re operating from a CD-ROM:
  12. The “disk is unreadable by this Macintosh” error is to be expected: this is the blank virtual drive that you created, but haven’t formatted yet. Choose “Initialize” and then “Continue” (you’re not wiping any data, because the blank disk image doesn’t have any data in it yet!!). You should get an option to name your new drive in System 7 – I generally like to name this the same thing that I named the disk image file, just to avoid confusion and make it clear (to myself) that this is just the same file/disk being represented in two different environments.
  13. Once that’s done, you should just be on the System 7 desktop, with two “drives” mounted in the top right: the Mac OS 7.6.1 ISO installer, and then your blank virtual drive. Double-click on the Mac OS 7.6.1 ISO drive and then the “Install Mac OS” program to start running the installer software.
  14. The System 7.6.1 installer in particular forces you to go through a README file and then to “update your hard disk driver” (you can just select “Skip” for the latter).
  15. For the “Choose a disk for installation” step, make sure you have selected the empty (but now formatted) virtual drive you named.
  16. From there you should be able to just run the installer. This copies over and installs the Mac OS 7 system from the ISO to your virtual hard disk!
  17. When the program has successfully run, you now quit out of Basilisk by shutting down the emulated System 7 machine. (Hint: this used to be in the “Special” menu!)
  18. You can now remove the System 7.6.1 ISO file from the Volumes list in Basilisk’s settings, so that every time you launch the software, it goes right into the System 7 environment now installed on your virtual drive. You can also mount/add any other software for which you have an ISO or IMG file (CD or floppy, basically) by adding that file to Basilisk’s Volumes list and relaunching Basilisk!
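If you do set up a “Unix Root” shared folder (step 7), the field wants a full absolute path, and typing one out by hand invites typos. A quick way to create a folder and print exactly what to paste in – the folder name here is just a hypothetical example:

```shell
# Create a shared folder in your home directory (name is a placeholder):
mkdir -p "$HOME/basilisk-shared"

# Print the absolute path to paste into Basilisk's "Unix Root" field:
echo "$HOME/basilisk-shared"
```

Anything you drop into that folder on the host side should then show up inside the emulated environment as the “Unix” volume.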

SheepShaver

Recommended:

SheepShaver is a PowerPC Macintosh emulator, maintained by the same team/people who run Basilisk II – so you’ll find that the interface and installation process of SheepShaver are extremely similar. Since we’re dealing with PowerPC architecture now, you’re going to need a New World ROM, or a specific model’s ROM (I’ve had general success pretending I’m using one of the blue-and-white Power Macintosh G3s).

You might notice that SheepShaver’s maximum installable-OS point is Mac OS 9.0.4 – even though yes, versions of Mac OS 9 existed above 9.0.4. This is because SheepShaver cannot emulate modern Memory Management Units (MMUs) – a particular advancement in CPU technology that we don’t need to get into too much right now (particularly since I don’t totally understand at the moment what it does). But the point is, Mac OS 9.0.4 was the last Mac operating system that could function entirely without a modern-style MMU. If you need Mac OS 9.1 or above, it’s time to start looking at QEMU – but, just speaking personally, I have yet to encounter a piece of software that was so specific to Mac OS 9.1-9.2.2 that it didn’t work in 9.0.4 in SheepShaver.

Installation:

  1. Download the precompiled app for your host operating system from Emaculation (I’ll assume OSX/macOS here). If you are using macOS Sierra (10.12) or above as your host OS:
    Starting with Sierra, Apple introduced a new silent “quarantine” feature to its Gatekeeper software, which manages and verifies applications you’ve downloaded through the App Store or elsewhere. I can bet that at one point or another you’ve gone to the Security & Privacy section of your System Preferences to allow apps downloaded from “identified developers” or even to just let through one troublesome app that Gatekeeper didn’t trust for whatever reason. Perhaps you even remember the old ability to allow apps downloaded from “Anywhere”, which Apple has now removed as an option from the System Preferences GUI. As a result, some applications that you just download in a .zip file from a third party (like the Emaculation forums) may not open correctly because they have been quietly “quarantined” by Gatekeeper, removing the permissions and file associations the application needs to run correctly. Obviously this is a strong security feature to have on by default to protect against malware, but in this case we want to be sure SheepShaver at least can pass through. Though you can’t do this in System Preferences anymore, you can pass apps individually through the quarantine using the following command in Terminal:

    $ sudo xattr -rd com.apple.quarantine /path/to/SheepShaver.app
  2. Download a bootable disk image for your desired guest OS, from 8.2 through 9.0.4 – I’ll use Mac OS 9.0.4 here as an example. This is firmly in the CD-ROM era now, so it should be an ISO file. Once you have downloaded and extracted the ISO file, you must “lock” the ISO in Finder. This is needed to trick the Mac OS 9 installer software into thinking it is really on a physical CD-ROM. You can do this by right-clicking on the ISO, selecting “Get Info”, then checking the “Locked” box:
  3. Download the New World ROM from Redundant Robot and unzip it. Put this in the same folder as the SheepShaver app itself and rename the ROM file so it is just named “Mac OS ROM” (spaces included, no file extension):
  4. Double-click on the SheepShaver app icon to launch the app. If the app has successfully found a compatible ROM in the same folder, you should see a gray screen with a floppy disk icon and a blinking question mark on it:

    (If no compatible ROM is found, the app will just automatically quit on launch, so you can troubleshoot from there).
  5. In the SheepShaver app menu at the top of your screen, select “Preferences” to open the settings menu, which happens to look pretty much just like the Basilisk II settings window.
  6. In the “Setup” tab, click on “Create…”. We need to make a totally blank disk image to serve as the virtual hard drive for our emulated computer. Choose your save location using the file navigation menu, then pick a size and name for this file (something descriptive like MacOS9 or OS9_System can help you keep track of what it is). For installing OS 9, again, somewhere around 500 or 1000 MB is going to be plenty. Hit “OK” and in a moment your new blank disk image will appear under the “Volumes” tab in preferences.
  7. Next select “Add” and navigate to the bootable disk image for your operating system. So, here we select and add the (locked) Mac OS 9.0.4 ISO file.
  8. For the “ROM File” field, click Browse and select your “Mac OS ROM” file. As long as the ROM file is in the same directory as the application and named “Mac OS ROM”, SheepShaver *will* read it by default, but it doesn’t hurt to specify again here (or if desired, you could select other ROMs here at this point for testing/troubleshooting).
  9. The “Unix Root” is an option for SheepShaver to create a “shared” folder between your host desktop and your emulated environment. This could be useful later for shuttling individual files into your emulator (as opposed to packaging and mounting disk images), but is not strictly necessary to set up right now. If you do choose to create and set up a shared folder now, you should type out the full file path for the directory you want to use, e.g.:
  10. Keep the options on “Boot From: Any” and leave “Disable CD-ROM” unchecked. Choose an amount of RAM you want your virtual machine to have. 512 MB was plenty powerful for the time, but you can futz with this potentially based on the requirements for the software you’re trying to run.
  11. For the “Audio/Video” tab, make sure you have Video Type: “Window” selected. The Refresh Rate can be “Dynamic” so long as you’re on a computer that’s come out within…like, the last 10 years. The width and height resolution settings are up to you, but you might want to keep them accurate to a legacy resolution like 640×480 for accurate rendering of software later. Make sure “Enable QuickDraw Acceleration” is selected and leave audio settings alone.
  12. Under the “Miscellaneous” tab, select “Enable JIT Compiler”, “Allow Emulated CPU to Idle” and “Ignore Illegal Memory Accesses”. You can set the Mouse Wheel Function to taste. If you’re interested in getting networking working inside your virtual machine, enter the word “slirp” in the “Ethernet Interface” field – otherwise, you can ignore this section.
  13. Hit “Save” to save these preferences and leave the settings window. You should be back at your blinking gray floppy. At this point, the only way to shut down the virtual machine and apply your new settings is to force-quit SheepShaver by pressing Control + Esc.
  14. Immediately double-click on the SheepShaver app again to relaunch. This time, you should boot into the Mac OS 9 installer CD! The desktop background should make it very clear you are booted into a CD installer:
  15. The “disk is unreadable by this Macintosh” error is to be expected: this is the blank virtual drive that you created, but haven’t formatted yet. Choose “Initialize” and then “Continue” (you’re not wiping any data, because the blank disk image doesn’t have any data in it yet!!). You should get an option to name your new drive in OS 9 – I generally like to name this the same thing that I named the disk image file on my host system, just to avoid confusion and make it clear (to myself) that this is just the same file/disk being represented in two different environments.
  16. If it hasn’t already opened automatically, double-click on the Mac OS 9 ISO drive and then the “Install Mac OS 9” program to start running the installer software.
  17. For the “Choose a disk for installation” step, make sure you have selected the empty (but now formatted) virtual drive you named.
  18. From there you should be able to just run the installer. This copies over and installs the Mac OS 9 system from the ISO to your virtual hard disk!
  19. When the program has successfully run, you can remove the ISO file from the mounted “Volumes” in SheepShaver’s preferences. Hit “Save” and then shut down the emulated machine to quit SheepShaver (hint: the “Shut Down” command used to be in the “Special” menu on the Mac OS desktop!!)
  20. Now, when you relaunch SheepShaver again, you should boot directly into your installed Mac OS 9 environment! As with Basilisk, you can mount floppy or CD-ROM images into your emulated desktop by opening SheepShaver’s Preferences and selecting the image files to Add them to your Volumes list (you’ll have to quit SheepShaver and relaunch any time you want to change/apply new settings, including loading new images!!)
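One of the easiest things to get wrong above is step 3’s exact-name requirement – and since SheepShaver just silently quits on launch when it can’t find a compatible ROM, a misnamed file is easy to miss. A small sanity-check sketch (the folder path is hypothetical; point it at wherever your SheepShaver app lives):

```shell
# Sanity check: the ROM must be named exactly "Mac OS ROM" (spaces
# included, no file extension) and sit in the same folder as the app.
check_rom() {
  if [ -f "$1/Mac OS ROM" ]; then
    echo "ROM found"
  else
    echo "ROM missing or misnamed - watch for stray extensions like .rom"
  fi
}

# Point it at the folder containing the SheepShaver app (example path):
check_rom "$HOME/SheepShaver"
```

A hidden “.rom” or “.bin” extension left over from the download is the usual culprit when this check fails.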

QEMU

Recommended:

  • Mac OS 9.1 – Mac OSX (10.0-10.5)

QEMU is a generic, extremely flexible emulator/virtualization application. It can be theoretically configured to run all kinds of guest operating systems on all kinds of hosts, not just Macs; but the forum users at Emaculation have put a lot of work into assembling QEMU builds specifically capable of PowerPC MacOSX emulation, a processor/OS combination that has generally hounded and confounded the emulation community for some time.

As such, I’ll say up front this is definitely the roughest emulator to use, both in terms of the emulated system not running as smoothly as you’ll see with the other listed programs (audio, for instance, isn’t working unless you use an experimental build) and in that troubleshooting is going to be near impossible without a basic understanding of Bash scripting. (You *can* just follow exactly this guide or the Emaculation tips with a basic text editor like TextEdit, but especially given the fact that the forums are pumping out new builds on the regular, there’s no guarantee these will work/last very long).

The good news is – you don’t need a separate ROM file for this one! I assume the necessary code is just packaged into the provided QEMU builds rather than kept in a separate file. The Emaculation forums provide a few approved builds depending on what exactly you’re trying to do, but unless you’re trying to do some emulated networking (if you’re just a beginner here – not recommended), the first download link on that page is the one you should use. This will let you install any PowerPC OS from 9.1 up that you choose – for the purposes of this guide, I’m going to pretend I’m putting Mac OSX 10.1 “Puma” on a classic Bondi Blue iMac G3!

Installation:

  1. Download the precompiled build for your host system. For Mac systems, we’ll be using the first download in this forum post, although again, remember that you will not have audio with this build!! Feel free to explore some of the experimental versions once you have the process down.
  2. Unzip the QEMU folder and put your MacOSX install disc ISO into that same folder.
  3. Open the Disk Utility program. Hit Command+N (or go to File -> New Image -> Blank Image) to create a blank disk image for your virtual hard drive (we have now moved up far enough in time that Disk Utility can actually do all the proper formatting/partitioning here!). With the file browser, navigate into the QEMU download folder and name your file something descriptive, like MacOSPuma. You will need at least 1.2 GB to install OSX 10.1, so I’d recommend making the blank image at least 2 GB – note that Disk Utility makes you enter this number in MB! (so, 2000 MB). For “Format”, select “Mac OS Extended (Journaled)”. For “Encryption” select “none”, for “Partitions” select “Single partition – Apple Partition Map”, and for “Image Format” select “read/write disk image”. It will take a minute to create the blank disk image, but you should wind up with a file called “MacOSPuma.dmg” in your QEMU folder.
  4. In the downloaded folder there should be a file called “qemu.command”. Open this file with a plain text editor; TextEdit will do.
  5. Much of the text here is good, but we need to make a few substitutions. Namely, we need to attach a virtual CD-ROM drive, make sure it reads our Mac OSX installer ISO, make sure the code reads our blank disk image, and set it so that QEMU attempts to boot from the ISO, not the blank hard disk image.

    First, change the “-boot c” flag to “-boot d”.

    Next, change the “-drive file=~/Mac-disks/9.2.img,format=raw,media=disk” entry so that the path matches the file path of the blank .dmg disk image that you made. Make sure to change the file extension, and not to put any extra spaces into the code.

    Now, add another drive flag to the code BEFORE the disk image drive flag. It should read something like “-drive file=~/QEMU/Mac_OS_Puma.iso,format=raw,media=cdrom”, only substituting in the correct file path and name for your ISO.

    All told, the file should now look like this – note how all the code following “./qemu-system-ppc” is on one line, even though the text wrapping in TextEdit will probably make it look otherwise for you:

    #!/bin/bash
    cd "$(dirname "$0")"
    ./qemu-system-ppc -L pc-bios -boot d -M mac99 -m 256 -prom-env 'auto-boot?=true' -prom-env 'boot-args=-v' -prom-env 'vga-ndrv?=true' -drive file=~/QEMU/Mac_OS_Puma.iso,format=raw,media=cdrom -drive file=~/QEMU/MacOSPuma.dmg,format=raw,media=disk -netdev user,id=network0 -device sungem,netdev=network0

    Save the “qemu.command” file.
  6. Now we need to make this .command file (which is basically just a tiny Bash script) executable, so that your host operating system knows to run it like an application just by double-clicking. Open the Terminal app and run

    $ chmod +x /path/to/qemu.command

    either typing out the full file path yourself, or just drag-and-dropping the qemu.command file into Terminal to get the full path automatically.
  7. Double-click on the “qemu.command” file. If all has been set up correctly, after a minute you should boot into the Mac OSX Puma installer.
  8. Your blank .dmg file should show up as an available destination disk on which to install OSX! Once the full installer has run (takes a few minutes), the software will eventually try to reboot the machine, which at this point will just take you back into the ISO installer. Mash Command+Q to quit out of QEMU.
  9. Go back to your “qemu.command” file in TextEdit. Swap the [cc lang="Bash"]-boot d[/cc] flag BACK to [cc lang="Bash"]-boot c[/cc], and delete the -drive flag for the ISO installer entirely, leaving just the .dmg file attached. Save the file. At this point, you should now be able to just double-click the “qemu.command” file and launch into OSX Puma, installed on your virtual hard drive! (It may take a minute! Remember this is not going to be nearly as quick/smooth as loading on original hardware – and you’ll have to go through a bunch of Apple’s “registration” nonsense.)
  10. You can always attach more disk images to the emulated environment by adding more “-drive” entries with “media=cdrom” and the file path to your ISO/CDR in the “qemu.command” file, then saving and relaunching QEMU.
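To make step 10 concrete, here’s a sketch of what a “qemu.command” file might look like with the installed system as the boot disk and one extra disc attached. The flags match the installer example above; the ISO name here is a placeholder – substitute whatever disc image you actually want to mount.

```shell
#!/bin/bash
# Sketch: boot the installed OSX Puma system (-boot c) with an extra
# CD-ROM image attached. "Some_Install_CD.iso" is a placeholder name.
cd "$(dirname "$0")"
./qemu-system-ppc -L pc-bios -boot c -M mac99 -m 256 \
  -prom-env 'auto-boot?=true' -prom-env 'boot-args=-v' -prom-env 'vga-ndrv?=true' \
  -drive file=~/QEMU/MacOSPuma.dmg,format=raw,media=disk \
  -drive file=~/QEMU/Some_Install_CD.iso,format=raw,media=cdrom \
  -netdev user,id=network0 -device sungem,netdev=network0
```

(The backslash line continuations are just for readability here – QEMU treats it the same as the all-on-one-line version.)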

Live Capture Plus and QuickTime for Java

One of the particular challenges of video preservation is how to handle and preserve the content on digital tape formats from the latter days of magnetic A/V media: Digital Betacam, DVCam, DVCPro, etc. Caught in the nebulous time of transition between analog and digital signals (the medium itself, magnetic tape, is basically the same as previous videotape formats like VHS or Betacam – but the information stored on them was encoded digitally), these formats were particularly popular in production environments but there were plenty of prolific consumer-grade efforts as well (MiniDV, Digital8). In some ways, this makes transferring content easier than handling analog formats: there is no “digitization” involved, no philosophical-archival conundrum of how best to approximate an analog signal into a digital one. One simply needs to pull the digital content off the magnetic tape intact and get it to a modern storage medium (hard disk, solid-state, or maybe LTO, which yes I know is still magnetic tape but pay no attention to the man behind the curtain).

 https://twitter.com/dericed/status/981965351482249216

However, even if you still have the proper playback deck, and the right cables and adapters to hook up to a contemporary computer, there’s the issue of software – do you have a capture application that can communicate properly with the deck *and* pull the digital video stream off the tape as-is?

The last bit is getting especially tricky. As DV-encoded formats in particular lost popularity in broadcast/production environments, the number of applications that can import and capture DV video without transcoding (that is, changing the digital video stream in the process of capture), while staying compatible with contemporary, secure operating systems/environments, has dwindled. That’s created a real conundrum for a lot of archivists. Apple’s Final Cut Pro application, for instance, infamously dropped the ability to capture native DV when it “upgraded” from Final Cut Pro 7 to Final Cut Pro X (you can hook up and capture tapes, but Final Cut Pro X will automatically transcode the video to ProRes). Adobe Premiere will still capture DV and HDV codecs natively, but will re-package the stream into a .mov QuickTime wrapper (you can extract the raw DV back out, though, so this is still a solid option for many – though for just as many others, an Adobe CC subscription is beyond their means).
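For the Premiere route, pulling the raw DV back out of that QuickTime wrapper can be done with FFmpeg as a stream copy – no re-encoding involved, since the DV stream inside the .mov is untouched. A sketch (file names are placeholders, and you should verify the output against your own captures):

```shell
# Copy the DV video stream back out of the QuickTime wrapper without
# transcoding. A DV stream carries its own embedded audio, so copying
# the video stream alone yields a complete raw .dv file.
# "capture.mov" and "restored.dv" are placeholder names.
ffmpeg -i capture.mov -map 0:v:0 -c copy -f rawvideo restored.dv
```

(`-c copy` is the crucial part: it tells FFmpeg to pass the encoded packets through as-is rather than decode and re-encode them.)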

One of the best options for DV capture is (was?) a Mac application called Live Capture Plus, made by Square Box Systems as part of its CatDV media suite. It has great options for error handling (e.g. automatically trying to read a problem area of a tape multiple times if there’s dropout), generating DV files based on clips or scenes or timecode rather than the whole tape, remote tape deck control over the FireWire/Thunderbolt connection, etc. – a bunch of ingest stuff that’s more appealing to an archivist than an application primarily meant for editing, like Adobe Premiere. It also talks to you, which is fun but also terrifying.

Failed to power up

However, Square Box removed Live Capture Plus from its product list some years back, and as far as I’m aware has refused all pleas to either open-source the legacy code or even continue to sell new licenses to those in the know.

Let’s say you *are* lucky enough to still have an old Live Capture Plus license on hand, however. The Live Capture Plus GUI is built on Java, but an older, legacy version of Java, so when you first try to run the app on a contemporary OS (~10.10 and up), you’ll first see this:

Luckily, for at least the moment, Apple still offers/maintains a download of this deprecated version of Java – just clicking on “More Info…” in that window will take you there, or you can search for “Java for OSX” to find the Apple Support page.

OK, so you’ve downloaded and installed the legacy Java for OSX. Yet this time, when you try again to run Live Capture Plus, you run into this fun error message instead:


All right. What’s going on here?

When I first encountered this error, even though I didn’t know Java, the message provided two clues: 1) “NoClassDefFoundError” – so Java *can’t find* some piece that it needs to run the application correctly; and 2) “quicktime”/”QTHandleRef” – so it specifically can’t find some piece that relates to QuickTime. That was enough to go on a search engine deep dive, where I eventually found this page from researchers at the University of Wisconsin-Madison’s zoology/molecular biology lab, who apparently encountered and solved a similar issue with a piece of legacy software related to, near as I can figure from that site, taking images of tiny tiny worms. (I desperately want to get in touch and propose some sort of panel with these people about working with legacy software, but am not even sure what venue/conference would be appropriate.)

The only way my ’90s-kid brain can understand what’s happening

So basically, recent versions of Mac OS have not included key files for a plugin called “QuickTime for Java” – a deprecated software library that allowed applications built in/on Java (like Live Capture Plus) to provide multimedia functionality (playback, editing, capture) by piggybacking on the QuickTime application’s support for a pretty wide range of media formats and codecs (including DV). Is this fixable? If you can get those key files, yes!

For now, both the downloads and instructions for what to do with these three files are available on that Hardin Lab page, but I’m offering them here as well. The fix is pretty quick:


I would only note a couple of things: I’ve successfully installed onto macOS Sierra (10.12) without needing to mess with the System Integrity Protection settings, by just installing for the local user (i.e. putting the files into ~/Library rather than /System/Library); and, if you want to do this via Finder rather than the command line, here is how to get to the “View Options” box to reveal the Library folder in Finder, as mentioned in Step 1 above (a useful trick in general, really, if you’re up to digipres shenanigans):

Once these files are in place, Live Capture Plus should open correctly and be able to start communicating with any DV/FireWire-capable deck that you’ve got connected to your Mac – again, provided that you’ve got a registration code to enter at this point.
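If you’d rather do the user-level install from the command line, it amounts to something like the sketch below. Treat it as a sketch only: defer to the Hardin Lab page for the actual file names and destinations (the download folder here is a placeholder, and the Extensions subfolder is my assumption based on where Java extension files like QTJava.zip classically lived).

```shell
# Sketch of the user-level QuickTime for Java fix: put the three files
# from the Hardin Lab page under your per-user ~/Library instead of
# /System/Library, so System Integrity Protection never comes into play.
# "~/Downloads/qtjava-files" is a placeholder for wherever you saved them;
# mirror whatever subfolders the Hardin Lab instructions actually name.
mkdir -p ~/Library/Java/Extensions
cp ~/Downloads/qtjava-files/* ~/Library/Java/Extensions/
```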

A final word of warning, however. Live Capture Plus comes from the era of 32-bit applications, and we’re now firmly in the era of 64-bit operating systems. Exactly what all that means is probably the subject of another post, but basically it’s just to say that legacy 32-bit apps weren’t made to take advantage of modern hardware, and may run slower on contemporary computers than they did on their original, legacy hardware. Not really an issue when you’re in video/digital preservation and your entire life is work-arounds, but recently Mac OS has taken to complaining about 32-bit apps:

Despite 32-bit apps posing, near as I can tell, no actual security or compatibility concerns for 64-bit OSes (32-bit apps just can’t take advantage of more than 4GB of RAM), this is a pretty heavy indication that Apple will likely cut off support for 32-bit apps entirely sometime in the not-so-distant future. And that goes not just for Live Capture Plus, but for other legacy apps capable of native DV transfer (Final Cut 7, the DVHSCap utility from the FireWire SDK, etc.)

So go get those DV tapes transferred!!!!