Doing DigiPres with Windows

A couple of times at NYU, a new student would ask me what kind of laptop I would recommend for courses and professional use in video and digital preservation. Maybe their personal one had just died and they were shopping for something new. Often they’d used both Mac and Windows in the past so were generally comfortable with either.

I was and still am conflicted by this question. There are essentially three options here: 1) some flavor of MacBook, 2) a PC running Windows, or 3) a PC running some flavor of Linux. (There are Chromebooks as well, but given their focus on cloud applications and lack of a robust desktop environment, I wouldn’t particularly recommend them for archivists or students looking for flexibility in professional and personal use.)

Each of these options has its drawbacks. MacBooks are and always have been prohibitively expensive for many people, and I’m generally on board with calling those “butterfly switch” keyboards on recent models a crime. Though a Linux distribution like Ubuntu or Linux Mint is my actual recommendation and personally preferred option, it remains unfamiliar to the vast majority of casual users (though Android and ChromeOS are maybe opening the window), and difficult to recommend without going full FOSS evangelist on someone who is largely just looking to make small talk (I’ll write this post another day).

So I never want to steer people away from the very real affordability benefits of PC/Windows – yet feel guilty knowing that’s angling them full tilt against the grain of Unix-like environments (Mac/Linux) that seem to dominate digital preservation tutorials and education. It makes them that student or that person in a workshop desperately trying in vain to follow along with Bash commands or muck around in /usr/local while the instructor guiltily confronts the trolley problem of spending their time saving the one or the many.

I also recognize that *nix education, while the generally preferred option for the #digipres crowd, leaves a gaping hole in many archivists’ understanding of computing environments. PC/Windows remains the norm in a large number of enterprise/institutional workplaces. Teaching ffmpeg is great, but do we leave students stranded when they step into a workplace and can’t install command line programs without Homebrew (or don’t have admin privileges to install new programs at all)?

This post is intended as a mea culpa for some of these oversights by providing an overview of some fundamental concepts of working with Windows beyond the desktop environment – with an eye towards echoing the Windows equivalents to usual digital preservation introductions (the command line, scripting, file system, etc.).

*nix vs NT

To start, let’s take a moment to consider how we got here – why are Mac/Linux systems so different from Windows, and why do they tend to dominate digital preservation conversations?

Every operating system (whether macOS, Windows, a Linux distribution, etc.) is actually a combination of various applications and a kernel. The kernel is the core part of the OS – it handles the critical task of translating all the requests you as a user make in software/applications to the hardware (CPU, RAM, drives, monitor, peripherals, etc.) that actually performs/executes those requests.

“Applications” and “Kernel” together make up a complete operating system.

UNIX was a complete operating system created by Bell Labs in 1969 – but it was originally distributed with full source code, meaning others could see exactly how UNIX worked and develop their own modifications. There is a whole complex web of UNIX offshoots that emerged in the ’80s and early ’90s, but the upshot is that a lot of people used UNIX as the base or the model for a number of other kernels and operating systems – some directly derived from UNIX code (like the BSD family, from which modern-day macOS descends), others re-implemented from scratch to work the same way (like Linux). (These modifications and re-implementations mean that such operating systems, while UNIX-derived or UNIX-modeled, are not technically UNIX itself – hence you will see these systems referred to as “Unix-like” or “*nix”)

The result of this shared lineage is that Mac and Linux operating systems are similar to each other in many fundamental ways, which in turn makes cross-compatible software relatively easy to create. They share a basic file system structure and usually a default shell and scripting language (Bash) for doing command line work, for instance.

Meanwhile there was Microsoft, which, though it originally created its own (phenomenally popular!) variant of UNIX in the ’80s, switched over in the late ’80s/early ’90s to developing its own, proprietary kernel, entirely separate from UNIX. This was the NT kernel, which is at the heart of every modern Windows operating system (the DOS-based Windows 95/98/ME line being the last major exception). These fundamental differences in architecture (affecting not just clearly visible characteristics like file systems/naming and command line work, but extremely low-level methods of how software communicates with hardware) make it more difficult to make software cross-compatible between *nix and Windows operating systems without a bunch of time and effort from programmers.

So, given the choice between these two major branches in computing, why this appearance that Mac hardware and operating systems have won out for digital preservation tasks, despite being the more expensive option for constantly cash-strapped cultural institutions and employees?

It seems to me it has very little to do with an actual affinity for Macs/Apple and everything to do with Linux and GNU and open source software. Unlike either macOS or Windows, Linux systems are completely open source – meaning any software developer can look at the source code for Linux operating systems and thus both design and make available software that works really well for them without jumping through any proprietary hoops (like the App Store or Windows Store). It just so happens that Macs, because at their core they are also Unix-like, can run a lot of the same, extremely useful software, with minimal to no effort. So a lot of the software that makes digital preservation easier was created for Unix-like environments, and we wind up recommending/teaching on Macs because given the choice between the macOS/Windows monoliths, Macs give us Bash and GNU tools and other critical pieces that make our jobs less frustrating.

Microsoft, at least in the ’90s and early aughts, didn’t make as much of an effort to appeal directly to developers (who, very very broadly speaking, value independence and the ability to do their own, unique thing to make themselves or their company stand out) – they made their rampant success on enterprise-level business, selling Windows as an operating system that could be managed en masse as the desktop environment for entire offices and institutions with less troubleshooting of individual machines/setups.

(Also key is that Microsoft kept their systems cheaper by never getting into the game of hardware manufacturing – unlike Mac operating systems, Windows is meant to run on third-party hardware made by companies like HP, Dell, IBM, etc. The competition between these manufacturers keeps prices down, unlike Apple’s all-in-one hardware + OS systems, making them the cheaper option for businesses opening up a new office to buy a lot of computers all at once. Capitalism!)

As I’ll get into soon, there have been some very intriguing reversals to this dynamic in recent years. But, in summary: the reason Macs have seemed to dominate digital preservation workshops and education is that there was software that made our jobs easier. It was never that you can’t do these things with Windows.

Windows File System

Beyond the superficial desktop design differences (dock vs. panel, icons), the first thing any user moving between macOS and Windows probably notices are the differences between Finder (Mac) and File Explorer (Windows) in navigating between files and folders.

In *nix systems, all data storage devices – hard disk drives, flash drives, external discs or floppies, or partitions on those devices – are mounted into the same, “root” file system as the operating system itself. In Windows, every single storage device or partition is separated into its own “drive” (I use quotes because two partitions might exist on the same physical hard disk drive, but are identified by Windows as separate drives for the purpose of saving/working with files), identified by a letter – C:, D:, G:, etc. Usually, the Windows operating system itself, and probably most of a casual user’s files, are saved onto the default C: drive. (Why start at C? It harkens back to the days of floppy drives and disk operating systems, but that’s a story for another post.) From that point, any additional drives, discs or partitions are assigned another letter (if for some reason you need more than 26 drives or partitions mounted, you’ll have to resort to some trickery!)

Also, file paths in Windows use backslashes (e.g. C:\Users\Ethan\Desktop) instead of forward slashes (/Users/Ethan/Desktop).

And if you’re doing command-line work, *nix file systems are generally case-sensitive (meaning /Users/Ethan/Desktop/sample.txt and /Users/Ethan/Desktop/SAMPLE.txt can be two different files – though note that macOS’s default file systems are actually case-insensitive but case-preserving), while Windows’ DOS/PowerShell command line interfaces and file systems (more on these in a minute) are not (so SAMPLE.txt would overwrite sample.txt).
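
For example, pointing the command line at the same user’s desktop folder on each system (with a hypothetical user named Ethan) looks like:

$ cd /Users/Ethan/Desktop

 > cd C:\Users\Ethan\Desktop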


Fundamentally, I don’t think these minor differences make common tasks like navigating directories in the command line or scripting any more difficult on Windows than on Macs – it’s just different, and an important thing to keep in mind to remember where your files live.

What DOES make things more difficult is when you start trying to move files between macOS and Windows file systems. You’ve probably encountered issues when trying to connect external hard drives formatted for one file system to a computer formatted for another. Though there are a lot of programming differences between file systems that cause these issues (logic, security, file size limits, etc.), you can see the problem just with the minor, user-visible differences just mentioned: if a file lives at E:\DRIVE\file.txt on a Windows-formatted external drive, how would a computer running macOS even understand where to find that file?

Modern Windows systems/drives are formatted with a file system called NTFS (for many years Microsoft used FAT and its variants such as FAT16 and FAT32, and Windows remains backwards-compatible with these legacy file systems). Apple, meanwhile, is at this moment transitioning from its longtime system HFS+ (which you may have also seen referred to in software as “Mac OS Extended”) to a new file system, APFS.

File system transitions tend to be a headache for users. NTFS and HFS+ have at least been around for long enough that quite a lot of software has been made to allow moving back and forth between the two file systems (if you are on Windows, HFS Explorer is a great free program for exploring and extracting files on Mac-formatted drives; while on macOS, the NTFS-3G driver allows for similarly reading and writing from Windows-formatted storage). Since APFS is still pretty new, I am less aware of options for reading APFS volumes on Windows, but will happily correct that if anyone has recommendations!

** I’ve gotten a lot of questions in the past about the exFAT file system, which is theoretically cross-compatible with both Mac and Windows systems. I personally have had great success formatting external hard drives with exFAT, but have also heard a number of horror stories from those who have tried to use it and been unable to mount the drive into one or another OS. Anecdotally it seems that drives originally formatted exFAT on a Windows computer have better success than drives formatted exFAT on a Mac, but I don’t know enough to say why that might be the case. So: proceed, but with caution!

Executable Files

No matter what operating system you’re using, executable files are pieces of code that directly instruct the computer to perform a task (as opposed to any other kind of data file, whose information usually needs to be parsed/interpreted by executable software in some way to become in any way meaningful – like a word processing application opening a text file, where the application is the executable and the text file is the data file). As a common user, any time you’ve double-clicked on a file to open an application or run a script, whether a GUI or a command line program, you’ve run an executable.

Where executable files live and how they’re identified to you, the user, varies a bit from one operating system to the next. A macOS user is accustomed to launching installed programs from the Applications folder, which are all actually .app files (.apps in turn are actually secret “bundles” that contain executables and other data files inside the .app, which you can view like a folder if you right-click an application and select “Show Package Contents”).

Executable files (with or without a file extension) might also be automatically identified by macOS anywhere in the file system and labeled with a generic, Terminal-style “exec” icon.

And finally, because executable files are also called binaries, you might find executables in a folder labeled “bin” (for instance, Homebrew puts the executable files for programs it installs into /usr/local/bin)

Windows-compatible executables are almost always identified with .exe file extensions. Applications that come with an installer (or, say, from the Windows Store) tend to put these in directories identified by the application name in the “Program Files” folder(s) by default. But some applications you just download off of GitHub or SourceForge or anywhere else on the internet might just put them in a “bin” folder or somewhere else.

(Note: script files, which are often identified by an extension according to the programming language they were written in – .py files for Python, .rb for Ruby, .sh for Bash, etc. – can usually be made automatically executable like an .exe but are not necessarily by default. If you download a script, whether working on Mac or Windows, you may need to check whether it’s automatically come to you as an executable or more steps need to be taken. They often don’t, because downloading executables is a big no-no from the perspective of malware protection.)
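
On *nix systems, for instance, you can check for and add the executable bit from the Terminal – a minimal sketch, using a hypothetical downloaded script.sh:

$ ls -l script.sh

(if there’s no “x” among the permissions listed, it isn’t executable yet)

$ chmod +x script.sh

$ ./script.sh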

Command Prompt vs. PowerShell

On macOS, we usually do command line work (which offers powerful tools for automation, cool/flexible software, etc.) using the Terminal application and Bash shell/programming language.

Modern Windows operating systems are a little more confusing because they actually offer two command line interfaces by default: Command Prompt and PowerShell.

The Command Prompt application uses DOS commands and has essentially not changed since the days of MS-DOS, Microsoft’s pre-Windows operating system that was entirely accessed/used via the command line (in fact, early Windows operating systems like Windows 3.1 and Windows 95 were to a large degree just graphical user interfaces built on top of MS-DOS).

The stagnation of Command Prompt/DOS was another reason developers started migrating over to Linux and Bash – to take advantage of more advanced/convenient features for CLI work and scripting. In 2006, Microsoft introduced PowerShell as a more contemporary, powerful command line interface and programming language aimed at developers in an attempt to win some of them back.

While I recognize that PowerShell can do a lot more and make automation/scripting way easier, I still tend to stick with Command Prompt/DOS when doing or teaching command line work in Windows. Why? Because I’m lazy.

The Command Prompt/DOS interface and commands were written around the same era as the original Unix shells that Bash descends from – so even though development continued on Bash while Command Prompt stayed the same, a lot of the most basic commands and syntax remain analogous. For instance, the command for making a directory is the same in both Terminal and Command Prompt (“mkdir”), even if the specific file paths are written out differently.
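
A few other everyday equivalents, for reference (the Bash command first, then its Command Prompt counterpart):

  • ls ↔ dir (list the contents of a directory)
  • cp ↔ copy (copy files)
  • mv ↔ move (move or rename files)
  • rm ↔ del (delete files)
  • cat ↔ type (print a file’s contents)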

So for me, while learning Command Prompt is a matter of tweaking some commands and knowledge that I already know because of Bash, learning PowerShell is like learning a whole new language – making a directory in PowerShell, e.g., is done with the command shortcut “md”.* It’s sort of like saying I already know Castilian Spanish, so I can tweak that to figure out a regional dialect or accent – but learning Portuguese would be a whole other course.

Relevant? Maybe not.

But your mileage on which of those things is easier may vary! Perhaps it’s more clear to keep your *nix and Windows knowledge totally separate and take advantage of PowerShell scripting. But this is just to explain the Command Prompt examples I may use for the remainder of this post.

* ok, yes, “mkdir” will also work in PowerShell, where it’s an alias – this comparison doesn’t work so well with the basic stuff, but it becomes true for more advanced functions/commands and you know it, Mx. Know-It-All

The PATH Variable

The PATH system variable is a fun bit of computing knowledge that hopefully ties together the three previous topics we’ve just covered!

Every modern operating system has a default PATH variable set when you start it up (system variables are basically configuration settings that, as their name implies, you can update or change). PATH determines where your command line interface looks for executable files/binaries to use as commands.

Here’s an example back on macOS. Many of the binaries you know and execute as commands (cd, mkdir, echo, rm) live in /bin or /usr/bin in the file system. By default, the PATH variable is set as a list that contains both “/bin” and “/usr/bin” – so that when you type

$ cd /my/home/directory

into Terminal, the computer knows to look in /bin and /usr/bin to find the executable file that contains instructions for running the “cd” command.

Package managers, like Homebrew, usually put all the executables for the programs you install into one folder (in Homebrew’s case, /usr/local/bin) and ALSO automatically update the PATH variable so that /usr/local/bin is added to the PATH list. If Homebrew put, let’s say, ffmpeg’s executable file into the /usr/local/bin folder, but never updated the PATH variable, your Terminal wouldn’t know where to find ffmpeg (and would just return a “command not found” error if you ran any command starting with

$ ffmpeg

), even though it *is* somewhere on your computer.

So let’s move over to Windows: because Command Prompt, by default, does not have a package manager, if you download a command line program like ffmpeg, Command Prompt will not automatically know where to find that command – unless a) you move the ffmpeg directory, executable file included, into somewhere already in your PATH list (maybe somewhere like “C:\Program Files\”), or b) update the PATH list to include where you’ve put the ffmpeg directory.

I personally work with Windows rarely enough that I tend to just add a command line program’s directory to the PATH variable when needed – a change that only takes effect in freshly opened Command Prompt windows (already-open ones won’t see it, which is why rebooting has historically been the sure-fire fix). So I can see how if you were adding/installing command line programs for Windows more frequently, it would probably be convenient to just create/determine “here’s my folder of command line executables”, update the PATH variable once, and then just put new downloaded executables in that folder from then on.

*** I do NOT recommend putting your Downloads or Documents folder, or any user directory whose files you can access/change without admin privileges, into your PATH!!! It’s an easy trick for accidentally-downloaded malware to land in a commonly-accessed folder (and/or one that doesn’t require raised/administrative privileges) and then be able to run its executable files from there without you ever knowing, because it’s in your PATH.

Updating the PATH variable was a pain in the ass in Windows 7 and 8 but is much less so now on Windows 10.
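
One quick troubleshooting trick while we’re here: Windows’ built-in “where” command (a rough analog to “which” on *nix) searches every folder in your PATH for a given executable:

 > where ffmpeg

If it prints a path to ffmpeg.exe, your PATH is set up correctly; if it reports that it could not find any matching files, the directory containing ffmpeg hasn’t made it into your PATH.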

If you are in the command line and want to quickly troubleshoot what file paths/directories are and aren’t in your PATH, you can use the “echo” command, which is available on both *nix and Windows systems. In *nix/Bash, variables are invoked with a dollar sign, as in $PATH, so

$ echo $PATH

would return something like the following (a typical macOS default is shown below; your machine’s list will differ), with the different file paths in the PATH list separated by colons:
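
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin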

While on Windows, variables are identified like %THIS%, so you would run

 > echo %PATH%
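
which returns the same sort of list, except Windows separates the paths with semicolons instead of colons – on a typical Windows install, something like:

C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem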

Package Management

Now, as I said above, package managers like Homebrew on macOS can usually handle updating/maintaining the PATH variable so you don’t have to do it by hand. For some time, there wasn’t any equivalent command line package manager (at least that I was aware of) for Windows, so there wasn’t any choice in the matter.

That’s no longer true! Now there’s Chocolatey, a package manager that will work with either Command Prompt or PowerShell. (Note: Chocolatey *uses* PowerShell in the background to do its business, so even though you can install and use Chocolatey with Command Prompt, PowerShell still needs to be on your computer. As long as your computer/OS is newer than 2006, this shouldn’t be an issue).

Users of Homebrew will generally be right at home with Chocolatey: instead of

$ brew install [package-name]

you can just type

 > choco install [package-name]

and Chocolatey will do the work of downloading the program and putting the executable in your PATH, so you can get right back to command line work with your new program. It works whether you’re on Windows 7 or 10.

The drawback, at least if you’re coming over from using macOS and Homebrew, is that Chocolatey is much more limited in the number/type of packages it offers. That is, Chocolatey still needs Windows-compatible software to offer, and as we went over before, some *nix software has just never been ported over to supported Windows versions (or at least, into Chocolatey packages). But many of the apps/commands you probably know and love as a media archivist – ffmpeg, youtube-dl, mediainfo, exiftool, etc. – are there!
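
So getting a new Windows machine ready for command line work might look like this (assuming Chocolatey is already installed and you’re in an administrator prompt):

 > choco install ffmpeg

 > ffmpeg -version

with that second command just confirming the new program is found in your PATH.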

Scripting

If you’ve been doing introductory digipres stuff, you’ve probably dipped your toes into Bash scripting – which is just the idea of writing some Bash commands into a plain text file, containing instructions for the Terminal. Your computer can then run the instructions in the script without you needing to type out all the commands into Terminal by hand. This is obviously most useful for automating repetitive tasks, where you might want to run the same set of commands over and over again on different directories or sets of files, without constant supervision and re-typing/remembering commands. Scripts written in Bash and intended for Mac/Linux Terminals tend to be identified with “.sh” file extensions (though not exclusively).

Scripting is absolutely still possible with Windows, but we have to adjust for the fact that the Windows command line interface doesn’t speak Bash. Command Prompt scripts are written in DOS style, and their benefit for Windows systems (over, say, PowerShell scripts) is that they are extremely portable – it doesn’t much matter what version of Windows or PowerShell you’re using.

Command Prompt scripts are often called “batch files” and can usually be identified by two different file extensions: .bat or .cmd

(The difference between the two extensions/file formats is negligible on a modern operating system like Windows 7 or 10. Basically, BAT files were used with MS-DOS, and CMD files were introduced with Windows NT operating systems to account for the slight improvements from the MS-DOS command line to NT’s Command Prompt. But, NT/Command Prompt remained backwards-compatible with MS-DOS/BAT files, so you would only ever encounter issues if you tried to run a CMD file on MS-DOS, which, why are you doing that?)

Actually writing Windows batch scripts for Command Prompt is its own whole tutorial. Which – thankfully – I don’t have to write, because you can follow this terrific one instead!

Because Command Prompt batch scripts are still so heavily rooted in decades-old strategies (DOS) for talking to computers, they are pretty awkward to look at and understand.
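
To give just a small taste – here’s a minimal, hypothetical batch script that loops through every .mov file in the current folder and prints its name and size in bytes (note the doubled percent signs and the cryptic “~z” modifier):

@echo off
REM print the name and size (in bytes) of every .mov file in this folder
for %%f in (*.mov) do (
    echo %%f %%~zf
)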

I mean what

A lot of software developers skip right to a more advanced/human-readable programming language like Python or Ruby. Scripts in these languages do also have the benefit of being cross-platform, since Python, for example, can be installed on either Mac or Windows. But Windows will not understand them by default, so you have to install the language and set up a development environment rather than start scripting from scratch as you can with BAT/CMD files.

Windows Registry

The Windows Registry is fascinating (to me!). Whereas in a *nix operating system, critical system-wide configuration settings are stored as text in files (usually high up under the “root” level of your computer’s file system, where you need special elevated permissions to access and change them), in Windows these settings (beyond the usual, user-level settings you can change in the Settings app or Control Panel) are stored in what’s called the Windows Registry.

The Windows Registry is a database where configuration settings are stored, not as text in files, but as “registry values” – pieces or strings of data that store instructions. These values are stored in “keys” (folders), which are themselves stored in “hives” (larger folders that categorize the keys within related subfolders).

Registry values can be a number of different things – text, a string of characters representing hex code, a boolean value, and more. Which kind of data they are, and what you can acceptably change them to, depends entirely on the specific setting they are meant to control. However, values and keys are also often super abstracted from the settings they change – meaning it can be very very hard to tell what exactly a registry value can change or control just by looking at it.


For that reason, casual users are generally advised to stay out of the Windows Registry altogether, and those who are going to change something are advised to make a backup of their Registry before making edits, so they can restore it if anything goes wrong.
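
If you do go in, Windows’ built-in “reg” tool can make that backup from the Command Prompt – a minimal sketch, with a hypothetical Explorer-related key as the target:

 > reg export "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer" explorer-backup.reg

A later “reg import explorer-backup.reg” restores it if an edit goes sideways (the Registry Editor’s File > Export menu does the same job with a GUI).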

Just as you may never mess around in your “/etc” folder on a Mac, even as a digitally-minded archivist, you may or may not ever need to mess in the Windows Registry! I’ve only poked around in it to do things like removing that damn OneDrive tab from the side of my File Explorer windows so I stop accidentally clicking on it. But I thought I’d mention it since it’s a pretty big difference in how the Windows operating and file systems work.

Or disable Microsoft’s over-extensive telemetry! Which, along with the fact that you can’t fucking uninstall Candy Crush no matter how hard you try, is one of the big things holding back an actually pretty nifty OS.

How to Cheat: *nix on Windows

OK. So ALL OF THIS has been based on getting more familiar with Windows and working with it as-is, in its quirky not-Unix-like glory. I wanted to write these things out, because to me, understanding the differences between the major operating systems helped me get a better handle on how computers work in general.

But guess what? You could skip all that. Because there are totally ways to just follow along with *nix-based, Bash-centric digital preservation education on Windows, and get everyone on the same page.

On Windows 7, there were several programs that would port a Bash-like command line environment and a number of Linux/GNU utilities to work in Windows – Cygwin probably being the most popular. Crucially, this was not a way to run Linux/Unix-like applications on Windows – it was just a way of doing the most basic tasks of navigating and manipulating files on your Windows computer in a *nix-like way, with Bash commands (the software essentially translated them into DOS/Windows commands behind-the-scenes).

But a way way more robust boon for cross-compatibility was the introduction of the Windows Subsystem for Linux in Windows 10. Microsoft, for many years the fiercest commercial opponent of open source software/systems, has in the last ten years or so increasingly been embracing the open source community (probably seeing how Linux took over the web and making a long-term bet to win back the business of software/web developers).


The WSL is a complete compatibility layer for, in essence, installing Linux operating systems like Ubuntu or Kali or openSUSE *inside* a Windows operating system. Though it still has its own limitations, this basically means that you can run a Bash terminal, install Linux software, and navigate around your computer just as you would on a *nix-only system – from inside Windows.

The major drawback from a teaching perspective: it is still a Linux operating system Windows users will be working with (I’d personally recommend Ubuntu for digipres newbies) – meaning there may still be nagging differences from Mac users (command line package management using “apt” instead of Homebrew for instance, although this too can be smoothed over with the addition of Linuxbrew). This little tutorial covers some of those, like mounting external drives into the WSL file system and how to get to your Windows files in the WSL.
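
Once inside a WSL Ubuntu terminal, day-to-day package management looks just like standard Ubuntu – for example, installing ffmpeg from Ubuntu’s repositories:

$ sudo apt update

$ sudo apt install ffmpeg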

You can follow Microsoft’s instructions for installing Ubuntu on the WSL here (it may vary a bit depending on which update/version exactly of Windows 10 you’re running).

That’s it! To be absolutely clear at the end of things here, I’ve generally not worked with Windows for most digital/audiovisual preservation tasks, so I’m hardly an expert. But I hope this post can be used to get Windows users up to speed and on the same page as the Mac club when it comes to the basics of using their computers for digipres fun!

About Time: Sifting Through Analog Video Terminology

In case it’s ever unclear, writing these blog posts is as much for me as it is for you, dear reader. Technical audiovisual and digital concepts are hard to wrap your head around, and for a myriad of reasons technical writing is frequently no help. Writing things out, as clearly and simply as I can, is as much a way of self-checking that I actually understand what’s going on as it is a primer for others.

And in case this intro is unclear, this is a warning to cover my ass as I dive into a topic that still trips me up every time I try to walk through it out loud. I’ll provide references as I can at the end of the piece, but listen to me at your own risk.

When I first started learning about preserving analog video, *most* of what I was reading/talking about made a certain amount of sense, which is to say it at least translated into some direct sensory/observable phenomenon – luminance values made images brighter or darker, mapping audio channels clearly changed how and what I was hearing, using component video rather than composite meant mucking around with a bunch more cables. But when we talked about analog signal workflows, there always seemed to be three big components to keep track of: video (what I was seeing), audio (what I was hearing) and then this nebulous goddamn thing called “sync” that I didn’t understand at all but seemed very very important.

The same way I feel when anyone tells me “umami” is a thing WHAT ARE YOU

I do not know who to blame for the morass of analog video signal terms and concepts that all have to do in some form with “time” and/or “synchronization”. Most likely, it’s just an issue at the very heart of the technology, which is such a marvel of mechanical and mathematical precision that even as OLED screens become the consumer norm I’m going to be right here crowing about how incredible it is that CRTs ever are/were a thing. What I do know is that between the glut of almost-but-not-quite-the-same words and devices that do almost-but-not-quite-the-same-thing-except-when-they’re-combined-and-then-they-DO-do-the-same-thing, there’s a real hindrance to understanding how to put together or troubleshoot an analog video digitization setup. Much of both the equipment and the vocabulary that we’re working with came from the needs of production, and that has led to some redundancies that don’t always make sense coming from the angle of either preservation or casual use.

I once digitized an audio cassette from the ’70s on which an audio tech was testing levels (…I hope?) by just repeating “time time time time” about 100 times in a row and ever since that day I’ve feared that his unleashed soul now possesses me. Anyway.

So I’m going to offer a guide here to some concepts and equipment that I’ve frequently seen either come up together or understandably get confused for each other. That includes in particular:

  • sync and sync generators
  • time base correction
  • genlock
  • timecode

…though other terms will inevitably arise as we go along. I’m going to assume a certain base level of knowledge with the characteristics and function of analog video signal – that is, I will write assuming the reader has been introduced to electron guns, lines, fields, frames, interlacing, and the basics of reading a classic/default analog video waveform. But I will try not to ever go right into the deep end.

Sync/Sync Generators

I know I just said I would assume some analog video knowledge, but let’s take a moment to contemplate the basics, because it helps me wrap my head around sync generators and why they can be necessary.

Consider the NTSC analog video standard (developed in the United States and employed in North, Central and parts of South America). According to NTSC, every frame of video has two fields, each consisting of 262.5 scan lines – so that’s 525 scan lines per frame. There are also 29.97 frames of video per second.

That means the electron gun in a television monitor has to fire off 15734.25 lines of video PER SECOND – and that includes not just actually tracing the lines themselves, but the time it takes to reset both horizontally (to start the next line) and vertically (to start the next field).
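
(To make the math explicit: 262.5 lines per field × 2 fields per frame = 525 lines per frame, and 525 lines per frame × 29.97 frames per second = 15,734.25 lines per second.)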

Now throw a recording device into the mix. At the same time that a camera is creating that signal and a monitor is playing it back, a VTR/VCR has to receive that signal and translate it into a magnetic field stored on a constantly-moving stretch of videotape via an insanely quick-spinning metal drum.

Even in the most basic of analog video signal flows, there are multiple devices (monitor, playback/recording device, camera, etc.) here that need to have an extremely precise sense of timing. The recording device, for instance, needs to have a very consistent internal sense of how long a second is, in order for all its tiny little metal and plastic pieces to spin and roll and pulse in sync (for a century, electronic devices, either analog OR digital, have used tiny pieces of vibrating crystal quartz to keep track of time – which is just a whole other insane tangent). But, in order to achieve the highest quality signal flow, it also needs to have the exact same sense of how long a second is as the camera and the monitor that it’s also hooked up to.

When you’re dealing with physical pieces of equipment, possibly manufactured by completely different companies at completely different times from completely different materials, that’s difficult to achieve. Enter in sync signal and sync generators.

Stay with me here

Sync generators essentially serve as a drumbeat for an analog video system, pumping out a trusted reference signal that all other devices in the signal chain can use to drive their work instead of their own internal sense of timing. There are two kinds of sync signals/pulses that sync generators have historically output:

Drive pulses were almost exclusively used to trigger certain circuits in tube cameras, and were never part of any broadcast/recorded video signal. So you’re almost certainly never going to need to use one in archival digitization work, but just in case you ever come across a sync generator with V. Drive and H. Drive outputs (vertical drive and horizontal drive pulses), that’s what those are for.

Blanking pulses cause the electron gun on a camera or monitor to go into its “blanking” period – which is the point at which the electron gun briefly shuts off and retraces back to the beginning of a new line (horizontal blanking) or of a new field (vertical blanking). These pulses are a part of every broadcast/recorded video signal, and they must be extremely consistent to maintain the whole 525-scan-lines-29.97-times-per-second deal.

Since blanking pulses are the vast majority of modern sync signals, you may also just see these referred to as “sync pulses”.

So the goal of sync generators is to output a constant video signal with precise blanking pulses that are trusted to be exactly where (or to be more accurate, when) they should be. Blanking pulses are contained in the “inactive” part of a video signal, meaning they are never meant to actually be visualized as image on a video camera or monitor (unless for troubleshooting purposes) – so it literally does not matter what the “active” part of a sync generator’s video signal is. It could just be field after field, frame after frame of black (often labeled “black burst”). It could be some kind of test pattern – so you will often see test pattern generators that double as or are used as sync generators, even though these are separate functions/purposes, and thus could also be entirely separate devices in your setup.

Color bars – not *just* for geeky yet completely hard-to-use desktop wallpapers

(To belabor the point, test patterns, contained in the “active” part of the video signal, can be used to check the system, but they do not drive the system in the same way that the blanking pulses in the “inactive” part of a signal do. IMHO, this is an extremely misleading definition of “active” and “inactive”, since both parts are clearly serving crucial roles to the signal, and what is meant is just “seen” or “unseen”.)

Here’s the kicker – strictly speaking, sync generators aren’t absolutely necessary to a digitization station. It’s entirely possible that you’ll hook up all your components and all the individual pieces of equipment will work together acceptably – their sense of where and when blanking pulses should fall might already be the same, or close enough to be negligible.

But maybe you’re seeing some kind of wobbling or inconsistent image. That *could* be a sync issue, with one of your devices (the monitor, e.g.) losing its sense of proper timing, and it may be resolvable by making sure all devices are working off the same, trusted sync generator.

You’ll see inputs for sync signal labeled in all sorts of inconsistent ways on analog video devices: “Sync In”, “Ext Sync”, “Ext Ref”, “Ref In”, etc. As far as I’m aware, these all mean the same thing, and references in manuals or labels to “sync signal” or “reference signal” should be treated interchangeably.

Time Base Correction

Sync generators can help line up devices with inconsistent timing in a system as a video signal is passed from one to another. Time Base Correctors (TBCs) perform an extremely similar but ever-so-confusingly-different task. TBCs can take an input video signal, strip out inconsistent sync/blanking pulses, and replace them entirely with new, steady ones.

This is largely necessary when dealing with pre-recorded signals. Consider a perfectly set up, synced recording system: using a CRT for the operator to monitor the image, a video camera passed a video signal along to a VTR, which recorded that signal on to magnetic tape. At the time, a sync generator made sure all these devices worked together seamlessly. But now, playing back that tape for digitization, we’ve introduced the vagaries of magnetic tape’s physical/chemical makeup into the mix. Perhaps it’s been years and the tape has stretched or bent, the metallic particles have expanded or compressed – not enough to prevent playback, but enough to muck with the sync pulses that rely on precision down to millionths of a second. As described with sync loss above, this usually manifests in the image looking wobbly, improperly framed or distorted on a monitor, etc.

TBCs can either use their own internal sense of time to replace/correct the timing of the sync pulses on the recorded video signal – or they can use the drumbeat of a sync generator as a reference, to ensure consistency within a whole system of devices.

(Addendum): A point I’m still not totally clear on is the separation (or lack thereof) between time base correction and frame synchronization. According to the following video I found, a stand-alone frame synchronizer could stabilize the vertical blanking pulses in a signal only, resulting in a solid image at the moment of transition from one frame to another (that is, the active video image remains properly centered vertically within a monitor), but did nothing for horizontal sync pulses, potentially resulting in line-by-line inconsistencies:

So time base correction would appear to incorporate frame synchronization, while adding the extra stabilization of consistent horizontal sync/blanking pulses.

While, as I said above, you can very often get away with digitizing video without a sync generator, TBCs are generally much more critical. It depends on what analog video format you’re working with exactly, but whereas the nature of analog devices meant there was ever so slight leeway to deal with those millionth-of-a-second inconsistencies (say, to display a signal on a CRT), the precise on-or-off, one-or-zero nature of digital signals means analog-to-digital converters usually need a very very steady signal input to do their job properly.

You may however not need an external TBC unit. Many video playback decks had them built in, though of varying quality and performance. If you can, check the manual for your model(s) to see if it has an internal TBC, and if so, if it’s possible to adjust or even turn it off if you have a more trustworthy external unit.

Professional-quality S-VHS deck with built-in TBC controls

Technically there are actually three kinds of TBCs: line, full field and full frame. Line TBCs can only sample, store and correct errors/blanking pulses for a few lines of video at a time. Full field TBCs can handle, as the name implies, a full 262.5 (NTSC) lines of video at a time, and full frame TBCs can take care of a whole frame’s worth (525, NTSC) of lines at a time. “Modern”, late-period analog TBCs are pretty much always going to be full frame, or even capable of multiple frames’ worth of correction at a time (this is good for avoiding delay and sync issues in the system if working without a sync generator). This distinction will likely only come into play with older TBC units.

And here’s one last thing that I found confusing about TBCs given the particular equipment I was trained on: this process of time base correction, and TBCs themselves, has nothing to do with adjusting the qualities of the active image itself. Brightness, saturation, hue – these visible characteristics of video image are adjusted using a device called a processing amplifier or proc amp.

Because the process of replacing or adjusting sync pulses is a natural moment in a signal flow to ALSO adjust active video signal levels (may as well do all your mucking at once, to limit troubleshooting if something’s going wrong), many external TBC units also contain proc amps, thus time base correction and video adjustments are made on the same device, such as the DPS TBC unit above. But there are two different parts/circuit boards of the unit that are doing these two tasks, and they can be housed in completely separate devices as well (i.e. you may have a proc amp that does no time base correction, or a TBC that offers no signal level adjustments).

Genlock

“Genlock” is a phrase and label that you may see on various devices like TBCs, proc amps, special effects generators and more – often instead of or in the same general place you would see “Sync” or “Ref” inputs. What’s the deal here?

This term really grew out of the production world and was necessary for cases where editors or broadcasters were trying to mix two or more input video signals into one output. When mixing various signals, it was again good/recommended practice to choose one as the timing reference – all the other input signals, output, and special effects created (fades, added titles, wipes, etc. etc.) would be “locked” on to the timing of the chosen genlock input (which might be either a reference signal from a sync generator, or one of the actually-used video inputs). This prevented awkward “jumps” or other visual errors when switching between or mixing the multiple signals.

In the context of a more straightforward digitization workflow, where the goal is not to mix multiple signals together but just pass one steadily all the way through playback and time base correction to monitoring and digitization, “genlock” can be treated as essentially interchangeable with the sync signal we discussed in the “Sync/Sync Generators” section above. If the device you’re using has a genlock input, and you’re employing a sync generator to provide an external timing reference for your system, this is where you would connect the two.

Timecode

The signals and devices that I’ve been describing have been all about driving machinery – time base is all about coordinating mechanical and electrical components that function with precision down to millionths of a second.

Timecode, on the other hand, is for people. It’s about identifying frames of video for the purpose of editing, recording or otherwise manipulating content. Unlike film, where if absolutely need be a human could just look at and individually count frames to find and edit images on a reel, magnetic tape provides no external, human-readable sense of what image or content is located where. Timecode provides that. SMPTE timecode, probably the most commonly used standard, identified frames according to a clock-based system formatted “00:00:00:00”, which translated to “Hours:Minutes:Seconds:Frames”.
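
(For example, the SMPTE timecode 01:23:45:10 identifies the frame located 1 hour, 23 minutes, 45 seconds and 10 frames into a recording.)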

Timecode could be read, generated, and recorded by most playback/recording/editing devices (cameras or VTRs), but there were also stand-alone timecode reader/generator boxes created (such as in the photo above) for consistency and stability’s sake across multiple recording devices in a system.

There have been multiple systems of recording and deploying timecode throughout the history of video, depending on the format in question and other concerns. Timecode was sometimes recorded as its own signal/track on a videotape, entirely separate from the video signal, in the same location as one might record an audio track. This system was called Linear Timecode (LTC). The problem with LTC was, like with audio tracks, the tape had to be in constant motion to read the timecode track (just as you can’t keep hearing an audio signal when you pause a tape, even though you *can* potentially keep seeing one frame of image when a video signal is paused).

Vertical Interval Timecode (VITC) fixed this (and had the added benefit of freeing up more physical space on the tape itself) by incorporating timecode into video signal itself, in a portion of the inactive, blanking part of the signal. This allowed systems to read/identify frame numbers even when the tape was paused.

Other systems like CTL timecode (control track) and BITC (timecode burnt-in to the video image) were also developed, but we don’t need to go too far into that here.

As long as we’re at it, however, I’d also like to quickly clarify two more terms: drop-frame and non-drop-frame timecode. These came about from a particular problem with the NTSC video standard (as far as I’m aware, PAL and SECAM do not suffer the same, but please someone feel free to correct me). Since NTSC video plays back at the frustratingly non-round number of 29.97 frames per second (a number arrived at for mathematical reasons beyond my personal comprehension that have something to do with how the signal carries color information), the timecode identifying frame numbers will eventually drift away from “actual” clock time as perceived/used by humans. An “hour of timecode” which by necessity must count upwards at a 30 fps rate such as:

00:59:59:27 → 00:59:59:28 → 00:59:59:29 → 01:00:00:00
played back at 29.97 fps, will actually be 3.6 seconds longer than “wall-clock” time.
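
(The math here: an “hour” of 30 fps timecode counts off 60 × 60 × 30 = 108,000 frames, and playing 108,000 frames back at 29.97 frames per second takes 108,000 ÷ 29.97 ≈ 3,603.6 seconds.)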

That’s already an issue at an hour, and proceeds to get worse the longer the content being recorded. So SMPTE developed “drop-frame” timecode, which confusingly does not drop actual frames of video!! Instead, it drops some of the timecode frame markers. Using drop-frame timecode, our sequence would actually proceed thus:

00:00:59;28 → 00:00:59;29 → 00:01:00;02 → 00:01:00;03
With the “:00” and “:01” frame markers removed entirely from the timecode; except for every tenth minute, when they are re-inserted to make the math check out:

00:09:59;28 → 00:09:59;29 → 00:10:00;00 → 00:10:00;01
Non-drop-frame timecode simply proceeds as normal, with the potential drift from clock time. Drop-frame timecode is often (but not necessarily – watch out) identified in video systems using semi-colons or single periods instead of colons between the second and frame counts, as I have done above. Semi-colons are common on digital devices, while periods are common on VTRs that didn’t have the ability to display semi-colons.
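
(And the math checks out: dropping two frame numbers per minute, except in 6 of every 60 minutes, removes 2 × 54 = 108 frame numbers per hour – exactly the 3.6 seconds of drift at 30 frames per second.)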

I hope this journey through the fourth dimension of analog video clarifies a few things. While each of these concepts is reasonable enough on their own, the way they relate to each other is not always clear, and the similarity in terms can send your brain down the wrong alley quickly. Happy digitizing!


Andre, Adam. “Time Base Corrector.” Written for NYU-MIAP course in Video Preservation I, Fall 2017. Accessed 6/9/2018.

“Genlock: What is it and why is it important?” Worship IMAG blog. Posted 6/11/2011. Accessed 6/8/2018.

Marsh, Ken. Independent Video. San Francisco, CA: Straight Arrow Books, 1974.

Poynton, Charles. Digital Video and HD: Algorithms and Interfaces. 2nd edition. Elsevier Science, 2012.

“SMPTE timecode.” Wikipedia. Accessed 6/8/2018.

Weise, Marcus and Diana Weynand. How Video Works: From Analog to High Definition. 2nd Edition. Burlington, MA: Focal Press, 2013.

A Guide to Legacy Mac Emulators

Last fall I wrote about the collaborative technical/scholarly process of making some ’90s multimedia CD-ROMs available for a Cinema Studies course on Interactive Cinema. I elided much of the technical process of setting up a legacy operating system environment in an emulator, since my focus for that post was on general strategy and assessment – but there are aspects of the technical setup process that aren’t super clear from the Emaculation guides that I first started with.

That’s not too surprising. The tinkering enthusiast communities that come up with emulators for Mac systems, in particular, are not always the clearest about self-documentation (the free-level versions of PC-emulating enterprise software like VirtualBox or VMWare are, unsurprisingly, more self-describing). That’s not something to hold against them in the least, mind you – when you are a relatively tiny, all-volunteer group of programmers keeping the software going to maintain decades’ worth of content from a major computing company that’s notoriously litigious about intellectual property… some of the details are going to fall through the cracks, especially when you’re trying to cram them into a forum post, not specifically addressing the archival/information science community, etc. etc.

In particular, while each Mac emulator has some pretty good information available to troubleshoot it (if you’ve got the time to find it), I’ve never found a really satisfying overview, that is, an explanation of why you might choose X program over Y. That’s partly because, as open source software, each of these programs is *potentially* capable of a hell of a lot – but might require a lot of futzing in configuration files and compiling of source code to actually unlock all those potentials (which, those of us just trying to load up Nanosaur for the first time in 15 years aren’t necessarily looking to mess with). Most of the “default” or recommended pre-compiled Mac/Windows versions of emulators offered up to casual or first-time users don’t necessarily do every single feature that the emulator’s front page brags about.

So what really is the boundary between Basilisk II and SheepShaver? Why is there such a difference between Mac OS 9.0.4 and 9.1? And what the hell is a ROM file anyway? That’s what I want to get into today. I’ll start with the essential components to get any Mac emulation program running, give some recommendations for picking an emulator, then round it out with some installation instructions and tips for each one.

What do I need?

An Emulator
There are several free and open-source software options for emulating legacy Mac systems on contemporary computers. We’ll go over these in more detail in a minute.

An Operating System Installer
You’ll need the program that installs the desired operating system that you’re trying to recreate/emulate: let’s say, for example, Mac OS 8.5. These used to come on bootable CD-ROMs or, depending on the age of the OS, floppy disks. If you still have the original installer CD lying around, great! You can still use that. For the rest of us, there’s WinWorld, providing disk image files for all your abandonware OS needs.

A ROM File
This is the first thing that can start to throw people off. So you know your computer has a hard drive, where your operating system and all your files and programs live. It also has a CPU, central processing unit, which is commonly analogized to the “brain” of the computer: it coordinates all the different pieces of your computer, hardware and software alike: operating system, keyboard, mouse, monitor, hard drive (or solid state drive), CD-ROM drive, USB hub, etc. But the CPU itself needs a little bit of programmed code in order to work – it has to be able to both understand and give instructions. Rather than being stored on a hard drive like the operating system, which is easily writable/modifiable by the user, this crucial, small central piece of code is stored with the CPU on a chip of Read-Only Memory (the read-only part is why this sort of code is often called firmware rather than software) – which ensures your computer has at least some basic functionality even if your operating system were to get corrupted or some piece of hardware were to truly go haywire (see this other post). That’s your ROM file. If the whole goal of an emulator is to trick legacy software into thinking it’s on an older machine by creating a fake computer-inside-your-computer (a “virtual machine”), you need a ROM file to serve as the fake brain.

This is trickier with Mac emulation than it is with Windows/PC emulation. The major selling point of Windows systems is that they are not locked into specific hardware: there are/have been any number of third-party manufacturers (Dell, Lenovo, HP, IBM, etc etc) and they all have to make sure their hardware, including the CPU/ROM that come with their desktops, are very broadly compatible, because they can never predict what other manufacturer’s hardware/software you may be trying to use in combination. This makes emulation easier, because the emulating application can likewise go for broad compatibility and probably be fine, without worrying too specifically about *exactly* what model of CPU/ROM it’s trying to imitate (see, for example, DOSBox).

Not so with Mac, since Apple makes closed-box systems: the hardware, OS, software, etc., are all very carefully designed to only work with their own stuff (or, at least, stuff that Apple has pretty specifically approved/licensed). So setting up a Mac emulator, you have to get very specific about which ROM file you are using as your fake brain – because certain Apple models would have certain CPUs, which could only work with certain operating system versions, which would only work with certain versions of QuickTime, which would only play certain files, which would………

That sounds exhausting.

It is.

Wait, if this relies on proprietary code from closed-box systems… is this legal?

Well if you got this far in an article about making fake Macs before asking that, I’m not so sure you actually care about the answer. But, at least so far as my knowledge of American intellectual property law goes, and I am by no means whatsoever an expert, we are in gray legal territory. You’re definitely safest to extract and use the ROM file of a Mac computer you bought. Otherwise we’re basically making an intellectual property fair use case here – that we’re not taking any business/profit from Apple from using this firmware/software for personal or educational purpose, and that providing emulation for archival purposes (and yes, I would consider recovering personal data an “archival” process) serves a public good by providing access to otherwise-lost files. The (non-legal) term “abandonware” does also exist for a reason – these forums/communities are pretty prominent, and Apple’s shown no particular signs recently of looking to shut them down or stem the proliferation of legacy ROMs floating around.

Of course, be careful about who and where you download from. Besides malware, it’s easy to come across ROM files that are just corrupted and non-functional. I’ll link with impunity to options that have worked for me.
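If the site or forum thread you’re downloading from posts a checksum for a known-good copy of the ROM (many do), one quick sanity check is to hash your download in the Terminal and compare – a minimal example, using the vMac.ROM filename we’ll end up with below:

    $ shasum -a 256 vMac.ROM

If the hash doesn’t match the posted one, you’ve got a corrupted (or tampered-with) file, and no amount of emulator troubleshooting will fix it.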

[Also be advised! Files made from video game cartridges – SNES, Game Boy, Atari, Sega Genesis, the like – are *also* called ROM files, since game cartridges are also really just pieces of Read-Only Memory. Just searching “ROMs” is going to offer up a lot of sites offering those, which is very fun and all but definitely not going to work in your Mac emulator]

So how do I pick what ROM file and emulator to use?

That’s largely going to depend on what OS you’re aiming for. There are four emulators that I’ve used successfully (read: that have builds and guides available on Emaculation) that together cover the gamut of basically all legacy Mac machines: Mini vMac, Basilisk II, SheepShaver, and QEMU.

As I mentioned at the top, a confusing aspect is that many of these programs have various “builds” – different versions of the same basic application that offer tweaks and improvements focused on one particular feature or another. The most stable, generic version that they recommend for download might not actually be compatible with *every* ROM or operating system that the emulator can theoretically handle (with a different build).

In hunting down ROM files, you’ll probably also come across ROMs listed, rather than from a particular Mac model, as “Old World” or “New World”. These essentially refer to the two broad “families” of CPUs that Apple used for Macs before moving to the Intel chips still found in Macs today: generally speaking, “Old World” refers to the Motorola 68000 series of processors, while “New World” refers to the PowerPC line spearheaded by “AIM” (an Apple-IBM-Motorola alliance).

New World and Old World ROMs can be a good place to start, since they are often taken from sources (e.g. recovery discs) more broadly aimed at emulating Motorola 68000 or PowerPC architecture and therefore could potentially imitate a number of specific Mac models – but don’t be too surprised if you come across a software/OS combination that’s just not working and you have to hunt down a more specific ROM for a particular Mac brand/model. We’ll see an example of this in a moment with our first emulator.

If you are currently using macOS or iOS, you can find some wonderfully detailed tech specs on every single piece of Mac hardware ever made using the freeware Mactracker app. Picking an exact model to emulate based on your OS/processor needs can help narrow down your search.

So with that in mind, let’s cut to the chase – moving chronologically through Mac history, here are my recommendations and install tips:

Mini vMac


Mini vMac is an easy-to-set-up (but thereby probably not the most robust) emulator for black-and-white Motorola 68000-based Macs. While it’s eminently customizable if you dig into the source code (or pay the guy who makes it at least $10 for a year’s worth of by-request custom setups – not a bad deal!!), the “standard variation” of this app that the Mini vMac page encourages you to download for a quick start quite specifically emulates a Macintosh Plus. The Macintosh Plus could run Mac System 1.0 through 7.6.1 – but the System 7.x versions frequently required some extra hardware finagling, so I’d recommend sticking with Mini vMac for Systems 1.0 through 6.x.

Also, while a generic Old World ROM *should* work since Mac Pluses had 68000-series CPUs, I haven’t had much success with those on the standard variation of Mini vMac. Look for a ROM file specifically from a Mac Plus instead (such as the one in the link above, to a very useful GitHub repository with several ROMs from specific Mac models).


  1. Download the standard variation build for your host operating system (OSX, Windows, etc.)
  2. Download a bootable disk image for your desired guest OS – System 1.x through 6.x (note these will probably be floppy disk image files, either with a .dsk or .img file extension).
  3. Download the Mac Plus ROM, rename it to “vMac.ROM” and place it in the same folder as the Mini vMac application.
  4. Double-click on the Mini vMac application to launch. You should hear a (very loud) startup beep and see a picture of a 3.5″ floppy with a blinking question mark on it – this indicates that the emulator has correctly loaded the ROM file but can’t find a bootable operating system.
  5. Drag the disk image for your operating system on to the Mini vMac window. If it’s a complete, bootable disk image, your desired operating system environment should appear!
  6. From there, it’ll depend on what you’re doing – if you’re just trying to run/look at a piece of software from a floppy image, you can drag and drop the .img or .dsk into the Mini vMac window and it should appear in your booted System 1.x-6.x environment. If you want to create a virtual “drive” to save your emulated environment on, you can create an empty disk image with one of the next two emulators on this list, or with the dd command-line utility if you’re on a Mac/Linux system (note: creating a Blank Image through a contemporary version of macOS’ Disk Utility won’t work):

    $ dd if=/dev/zero of=/path/to/image.img bs=512 count=1000

    will make a 512 KB blank image (512 bytes was the traditional size for floppy disk “blocks”). You can increase the count argument to however large you want, but be warned that early file systems had size limitations, and at some point operating systems and software like the classic Mac systems might not recognize/work with a disk that’s too large (see the sizing example just below). Once you have a blank disk image, you can just drop it into a Mini vMac window running a legacy OS to format (“initialize”) it to Apple’s old HFS file system and start installing software/saving files to it. Generally speaking, though, a lot of apps and files from this era ran straight off of floppies, so if you’re just looking to run old programs without necessarily saving/creating new files, dragging and dropping in a system disk and then your .img or .dsk files should do the trick!
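    To make the count math concrete: the image size is simply bs × count. So, for instance, this variation (the filename is just a placeholder) makes a 20 MB image – roomy for software of this era while staying well under early file system limits:

    $ dd if=/dev/zero of=/path/to/BlankHD.img bs=512 count=40960

    (40960 blocks × 512 bytes = 20,971,520 bytes, i.e. 20 MB.)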

Basilisk II


Basilisk II is another well-maintained Motorola 68000 series emulator. But whereas Mini vMac is primarily aimed at emulating the Mac Plus, Basilisk can either emulate a Mac Classic or a Mac II series model (hence “Basilisk II” – there never was a “Basilisk I”), depending on the configuration/build and ROM file used. As with Mini vMac, this might not be clear, but the precompiled configurations of Basilisk for Mac and Windows offered up on the Emaculation forums – which is where the Basilisk II homepage sends you – specifically offer the latter. So while you may see info that Basilisk II can emulate all the way from System 0.x through 8.1, the version you’re most likely going to be using out-of-the-box can probably only handle the more limited range of System 7.x through 8.1 (the targeted range for the Mac II series).

Note though: this was a kind of bananas era for Apple hardware when it came to model branding. So in terms of looking for ROMs, you can try to find something specifically labelled as being from a Mac II series machine – but you may also find success with a ROM file from a Quadra (largely the same as Mac II machines, but with some CPU/RAM upgrades) or a Performa (literally re-branded Quadras with a different software configuration, intended for consumer rather than professional use). In fact, the Performa and Quadra ROMs on Redundant Robot’s site work quite well, so I’ve linked/recommended them above (the Performa one definitely supports color display, so that’s a good one). These have the benefit of emulating a slightly more powerful machine than the older Mac IIs – but just keep in mind what you’re doing. Your desired OS may install fine, but you may still encounter a piece of legacy software that is relying on the configuration and limitations of the Mac II.


  1. Download the precompiled app for your host operating system from Emaculation (I’ll assume OSX/macOS here).
  2. Download a bootable disk image for your desired guest OS, from 7.x through 8.1 – I’ll use Mac OS 7.6.1 here as an example. Note this is right in the transition period from floppies to CD-ROM – so your images will likely be ISOs instead of .img files.
  3. Download the Performa ROM from Redundant Robot and unzip it. It’s a good idea to put it in the same folder as the Basilisk app just to keep track of it, but this time you can put it anywhere – you’ll have the option to select it in the Basilisk preferences no matter where it lives on your computer (and you don’t need to rename it either).
  4. In the Basilisk folder, double-click on the “BasiliskGUI” app. This is where you can edit the preferences/configuration for your virtual machine:
  5. In the “Volumes” tab, click on “Create…”. We need to make a totally blank disk image to serve as the virtual hard drive for our emulated computer. Choose your save location using the file navigation menu, then pick a size and name for this file (something descriptive like MacOS7 or OS7_System can help you keep track of what it is). For System 7, somewhere around 500 or 1000 MB is going to be plenty. Hit “OK” and in a moment your new blank disk image will appear under the “Volumes” tab in preferences.
  6. Next select “Add” and navigate to the bootable disk image for your operating system. So, here we select and add the System 7.6.1 ISO file.
  7. The “Unix Root” is an option for Basilisk to create a “shared” folder between your contemporary host desktop and your emulated environment. This could be useful later for shuttling individual files into your emulator (as opposed to packaging and mounting disk images), but is not strictly necessary to set up right now. If you do choose to create and set up a shared folder now, you have to type out the full file path for the directory you want to use, e.g. something like /Users/yourname/BasiliskShared (a hypothetical path – point it wherever you like):
  8. Keep the “Boot From” drop-down on “Any” and the “Disable CD-ROM Driver” box unchecked.
  9. The only other tabs you should have to make any changes on at this point are “Graphics” and “Memory/Misc”. For “Graphics”, make sure you have Video Type: “Window” selected. The Refresh Rate can be “Dynamic” so long as you’re on a computer that’s come out within…like, the last 10 years. The width and height resolution settings are up to you, but you might want to keep them accurate to a legacy resolution like 640×480 for accurate rendering of software later.
  10. In the “Memory/Misc” tab, choose the amount of RAM you want your virtual machine to have. 64 MB is plenty powerful for the era, and you can always tweak this later if it helps with rendering software. Keep Mac Model ID on “Mac IIci (MacOS7.x)” (note: I’ve successfully installed System 7 in Basilisk using this setting and a Quadra 900 ROM, so I think the desired operating system, rather than the Mac model, might be the most important thing to keep in mind here). Also leave CPU Type on “68040”. Finally, use the Browse button to select the Performa ROM file you downloaded:
  11. At this point, you can hit “Save” and then “Start” at the bottom of the Basilisk Settings window…and you should successfully boot into the System 7.6.1 ISO! The desktop background even comes up with little optical discs to make sure you know you’re operating from a CD-ROM:
  12. The “disk is unreadable by this Macintosh” error is to be expected: this is the blank virtual drive that you created, but haven’t formatted yet. Choose “Initialize” and then “Continue” (you’re not wiping any data, because the blank disk image doesn’t have any data in it yet!!). You should get an option to name your new drive in System 7 – I generally like to name this the same thing that I named the disk image file, just to avoid confusion and make it clear (to myself) that this is just the same file/disk being represented in two different environments.
  13. Once that’s done, you should just be on the System 7 desktop, with two “drives” mounted in the top right: the Mac OS 7.6.1 ISO installer, and then your blank virtual drive. Double-click on the Mac OS 7.6.1 ISO drive and then the “Install Mac OS” program to start running the installer software.
  14. The System 7.6.1 installer in particular forces you to go through a README file and then to “update your hard disk driver” (you can just select “Skip” for the latter).
  15. For the “Choose a disk for installation” step, make sure you have selected the empty (but now formatted) virtual drive you named.
  16. From there you should be able to just run the installer. This copies over and installs the Mac OS 7 system from the ISO to your virtual hard disk!
  17. When the installer has successfully run, quit out of Basilisk by shutting down the emulated System 7 machine. (Hint: “Shut Down” used to live in the “Special” menu!)
  18. You can now remove the System 7.6.1 ISO file from the Volumes list in Basilisk’s setting, so that every time you launch the software, it goes right into the System 7 environment now installed on your virtual drive. You can now also mount/add any other software for which you have an ISO or IMG file (CD or floppy, basically) by adding that file to Basilisk’s Volumes list and relaunching Basilisk!
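An aside for the command-line-inclined: everything the BasiliskGUI app saves actually lands in a plain-text prefs file (usually ~/.basilisk_ii_prefs on Mac/Linux hosts), which you can inspect or hand-edit instead of clicking through the settings window; SheepShaver, below, keeps an equivalent ~/.sheepshaver_prefs. As a rough sketch only – the key names here are from the Basilisk II documentation as I remember it, the values mirror the walkthrough above, and every path is a placeholder – a System 7 setup might look something like:

    rom /path/to/performa.rom
    disk /path/to/MacOS7.img
    extfs /path/to/shared/folder
    screen win/640/480
    ramsize 67108864
    modelid 5
    cpu 4

(ramsize is in bytes – 67108864 is 64 MB – modelid 5 is the Mac IIci, and cpu 4 is the 68040.) Before trusting any of this verbatim, check the file your own copy of Basilisk writes.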



SheepShaver


SheepShaver is a PowerPC Macintosh emulator, maintained by the same team/people who run Basilisk II – so you’ll find that the interface and installation process are extremely similar. Since we’re dealing with PowerPC architecture now, you’re going to need a New World ROM, or a ROM from a specific PowerPC model (I’ve had general success pretending I’m using one of the blue-and-white Power Macintosh G3s).

You might notice that SheepShaver’s installable-OS ceiling is Mac OS 9.0.4 – even though, yes, versions of Mac OS 9 existed above 9.0.4. This is because SheepShaver cannot emulate a modern Memory Management Unit (MMU) – a particular advancement in CPU technology that we don’t need to get into too much right now (particularly since I don’t totally understand at the moment what it does). The point is, Mac OS 9.0.4 was the last Mac operating system that could function entirely without a modern-style MMU. If you need Mac OS 9.1 or above, it’s time to start looking at QEMU – but, just speaking personally, I have yet to encounter a piece of software so specific to Mac OS 9.1-9.2.2 that it didn’t work in 9.0.4 in SheepShaver.


  1. Download the precompiled app for your host operating system from Emaculation (I’ll assume OSX/macOS here). If you are using macOS Sierra (10.12) or above as your host OS:
    Starting with Sierra, Apple introduced a new silent “quarantine” feature to its Gatekeeper software, which manages and verifies applications you’ve downloaded through the App Store or elsewhere. I’d bet that at one point or another you’ve gone to the Security & Privacy section of your System Preferences to allow apps downloaded from “identified developers”, or even just to let through one troublesome app that Gatekeeper didn’t trust for whatever reason. Perhaps you even remember the old ability to allow apps downloaded from “Anywhere”, which Apple has now removed as an option from the System Preferences GUI. As a result, some applications that you just download in a .zip file from a third party (like the Emaculation forums) may not open correctly, because they have been quietly “quarantined” by Gatekeeper, removing the permissions and file associations the application needs to run correctly. Obviously this is a strong security feature to have on by default to protect against malware, but in this case we want to be sure SheepShaver, at least, can pass through. Though you can’t do this in System Preferences anymore, you can pass apps individually through the quarantine using the following command in Terminal (there’s one more quarantine-troubleshooting note at the end of this list):

     $ sudo xattr -rd com.apple.quarantine /path/to/SheepShaver.app

  2. Download a bootable disk image for your desired guest OS, from 8.5 (the next Mac OS release after Basilisk’s 8.1 ceiling) through 9.0.4 – I’ll use Mac OS 9.0.4 here as an example. This is firmly in the CD-ROM era now, so it should be an ISO file. Once you have downloaded and extracted the ISO file, you must “lock” the ISO in Finder. This is needed to trick the Mac OS 9 installer software into thinking it is really on a physical CD-ROM. You can do this by right-clicking on the ISO, selecting “Get Info”, then checking the “Locked” box:
  3. Download the New World ROM from Redundant Robot and unzip it. Put this in the same folder as the SheepShaver app itself and rename the ROM file so it is just named “Mac OS ROM” (spaces included, no file extension):
  4. Double-click on the SheepShaver app icon to launch the app. If the app has successfully found a compatible ROM in the same folder, you should see a gray screen with a floppy disk icon and a blinking question mark on it:

    (If no compatible ROM is found, the app will just automatically quit on launch, so you can troubleshoot from there).
  5. In the SheepShaver app menu at the top of your screen, select “Preferences” to open the settings menu, which happens to look pretty much just like the Basilisk II settings window.
  6. In the “Setup” tab, click on “Create…”. We need to make a totally blank disk image to serve as the virtual hard drive for our emulated computer. Choose your save location using the file navigation menu, then pick a size and name for this file (something descriptive like MacOS9 or OS9_System can help you keep track of what it is). For installing OS 9, again, somewhere around 500 or 1000 MB is going to be plenty. Hit “OK” and in a moment your new blank disk image will appear under the “Volumes” tab in preferences.
  7. Next select “Add” and navigate to the bootable disk image for your operating system. So, here we select and add the (locked) Mac OS 9.0.4 ISO file.
  8. For the “ROM File” field, click Browse and select your “Mac OS ROM” file. As long as the ROM file is in the same directory as the application and named “Mac OS ROM”, SheepShaver *will* read it by default, but it doesn’t hurt to specify again here (or if desired, you could select other ROMs here at this point for testing/troubleshooting).
  9. The “Unix Root” is an option for SheepShaver to create a “shared” folder between your host desktop and your emulated environment. This could be useful later for shuttling individual files into your emulator (as opposed to packaging and mounting disk images), but is not strictly necessary to set up right now. If you do choose to create and set up a shared folder now, you should type out the full file path for the directory you want to use, e.g. something like /Users/yourname/SheepShaverShared (a hypothetical path – point it wherever you like):
  10. Keep the options on “Boot From: Any” and leave “Disable CD-ROM” unchecked. Choose an amount of RAM you want your virtual machine to have. 512 MB was plenty powerful for the time, but you can futz with this potentially based on the requirements for the software you’re trying to run.
  11. For the “Audio/Video” tab, make sure you have Video Type: “Window” selected. The Refresh Rate can be “Dynamic” so long as you’re on a computer that’s come out within…like, the last 10 years. The width and height resolution settings are up to you, but you might want to keep them accurate to a legacy resolution like 640×480 for accurate rendering of software later. Make sure “Enable QuickDraw Acceleration” is selected and leave audio settings alone.
  12. Under the “Miscellaneous” tab, select “Enable JIT Compiler”, “Allow Emulated CPU to Idle” and “Ignore Illegal Memory Accesses”. You can set the Mouse Wheel Function to taste. If you’re interested in getting networking working inside your virtual machine, enter the word “slirp” in the “Ethernet Interface” field – otherwise, you can ignore this section.
  13. Hit “Save” to save these preferences and leave the settings window. You should be back at your blinking gray floppy. At this point, the only way to shut down the virtual machine and apply your new settings is to force-quit SheepShaver by pressing Control + Esc.
  14. Immediately double-click on the SheepShaver app again to relaunch. This time, you should boot into the Mac OS 9 installer CD! The desktop background should make it very clear you are booted into a CD installer:
  15. The “disk is unreadable by this Macintosh” error is to be expected: this is the blank virtual drive that you created, but haven’t formatted yet. Choose “Initialize” and then “Continue” (you’re not wiping any data, because the blank disk image doesn’t have any data in it yet!!). You should get an option to name your new drive in OS 9 – I generally like to name this the same thing that I named the disk image file on my host system, just to avoid confusion and make it clear (to myself) that this is just the same file/disk being represented in two different environments.
  16. If it hasn’t already opened automatically, double-click on the Mac OS 9 ISO drive and then the “Install Mac OS 9” program to start running the installer software.
  17. For the “Choose a disk for installation” step, make sure you have selected the empty (but now formatted) virtual drive you named.
  18. From there you should be able to just run the installer. This copies over and installs the Mac OS 9 system from the ISO to your virtual hard disk!
  19. When the program has successfully run, you can remove the ISO file from the mounted “Volumes” in SheepShaver’s preferences. Hit “Save” and then shut down the emulated machine to quit SheepShaver (hint: the “Shut Down” command used to be in the “Special” menu on the Mac OS desktop!!)
  20. Now, when you relaunch SheepShaver again, you should boot directly into your installed Mac OS 9 environment! As with Basilisk, you can mount floppy or CD-ROM images into your emulated desktop by opening SheepShaver’s Preferences and selecting the image files to Add them to your Volumes list (you’ll have to quit SheepShaver and relaunch any time you want to change/apply new settings, including loading new images!!)
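One last Gatekeeper note, following up on step 1: if SheepShaver instantly quits on launch even though your ROM file is in place and named correctly, you can check whether the quarantine attribute is actually gone (the path is a placeholder for wherever you unzipped the app):

    $ xattr -l /path/to/SheepShaver.app

If com.apple.quarantine still appears in the output, re-run the sudo xattr -rd command from step 1 on that exact path.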



QEMU

  • Mac OS 9.1 – Mac OSX (10.0-10.5)

QEMU is a generic, extremely flexible emulator/virtualization application. It can be theoretically configured to run all kinds of guest operating systems on all kinds of hosts, not just Macs; but the forum users at Emaculation have put a lot of work into assembling QEMU builds specifically capable of PowerPC MacOSX emulation, a processor/OS combination that has generally hounded and confounded the emulation community for some time.

As such, I’ll say up front that this is definitely the roughest emulator to use, both in that the emulated system won’t run as smoothly as with the other programs listed here (audio, for instance, doesn’t work unless you use an experimental build) and in that troubleshooting is going to be near impossible without a basic understanding of Bash scripting. (You *can* just follow this guide or the Emaculation tips exactly with a basic text editor like TextEdit, but especially given that the forums are pumping out new builds on the regular, there’s no guarantee these instructions will work/last very long.)

The good news is – you don’t need a separate ROM file for this one! I assume the necessary code is just packaged into the provided QEMU builds rather than kept in a separate file. The Emaculation forums provide a few approved builds depending on what exactly you’re trying to do, but unless you’re trying to do some emulated networking (not recommended if you’re just a beginner here), the first download link on that page is the one you should use. This will let you install any PowerPC OS from 9.1 up – for the purposes of this guide, I’m going to pretend I’m putting Mac OSX 10.1 “Puma” on a classic Bondi Blue iMac G3!


  1. Download the precompiled build for your host system. For Mac systems, we’ll be using the first download in this forum post, although again, remember that you will not have audio with this build!! Feel free to explore some of the experimental versions once you have the process down.
  2. Unzip the QEMU folder and put your MacOSX install disc ISO into that same folder.
  3. Open the Disk Utility program. Hit Command+N (or go to File -> New Image -> Blank Image) to create a blank disk image for your virtual hard drive (we have now moved far enough forward in time that Disk Utility can actually do all the proper formatting/partitioning here!). With the file browser, navigate into the QEMU download folder and name your file something descriptive, like MacOSPuma. You will need at least 1.2 GB to install OSX 10.1, so I’d recommend making the blank image at least 2 GB – note that Disk Utility makes you enter this number in MB! (so, 2000 MB). For “Format”, select “Mac OS Extended (Journaled)”. For “Encryption” select “none”, for “Partitions” select “Single partition – Apple Partition Map”, and for “Image Format” select “read/write disk image”. It will take a minute to create the blank disk image, but you should wind up with a file called “MacOSPuma.dmg” in your QEMU folder. (If you’d rather do this step from the command line, see the hdiutil sketch at the end of this list.)
  4. In the downloaded folder there should be a file called “qemu.command”. Open this file with a plain text editor – TextEdit will do.
  5. Much of the text here is good, but we need to make a few substitutions. Namely, we need to attach a virtual CD-ROM drive, make sure it reads our Mac OSX installer ISO, make sure the code reads our blank disk image, and set it so that the QEMU application attempts to boot from the ISO, not the blank hard disk image. First, change the flag

    -boot c

    to

    -boot d

    Change the

    -drive file=~/Mac-disks/9.2.img,format=raw,media=disk

    so that the path matches the file path of the blank .dmg disk image that you made. Make sure to change the file extension and not to put any extra spaces into the code. Now, add another drive path flag to the code BEFORE the disk image drive flag. It should read something like:

    -drive file=~/QEMU/Mac_OS_Puma.iso,format=raw,media=cdrom

    , only substituting in the correct file path and name for your ISO. All told, the file should now look like this – note how all the code following “./qemu-system-ppc” is on one single line, even though the text wrapping in TextEdit will probably make it look otherwise for you:

    cd "$(dirname "$0")"./qemu-system-ppc -L pc-bios -boot d -M mac99 -m 256 -prom-env 'auto-boot?=true' -prom-env 'boot-args=-v' -prom-env 'vga-ndrv?=true' -drive file=~/QEMU/Mac_OS_Puma.iso,format=raw,media=cdrom -drive file=~/QEMU/MacOSPuma.dmg,format=raw,media=disk -netdev user,id=network0 -device sungem,netdev=network0

    Save the “qemu.command” file.

  6. Now we need to make this .command file (which is basically just a tiny Bash script) executable so that your host operating system knows to run it like an application just by double-clicking. Open the Terminal app and run
     $ chmod +x /path/to/qemu.command

    either typing out the full file path yourself, or just drag-and-dropping the qemu.command file into Terminal to get the full path automatically.

  7. Double-click on the “qemu.command” file. If all has been set up correctly, after a minute you should boot into the Mac OSX Puma installer.
  8. Your blank .dmg file should show up as an available destination disk on which to install OSX! Once the full installer has run (takes a few minutes), the software will eventually try to reboot the machine, which at this point will just take you back into the ISO installer. Mash Command+Q to quit out of QEMU.
  9. Go back to your “qemu.command” file in TextEdit. Swap the

    -boot d

    flag BACK to

    -boot c

    , and delete the -drive flag for the ISO installer entirely, just leaving the .dmg file attached. Save the file. At this point, you should be able to just double-click the “qemu.command” file and launch into OSX Puma, installed on your virtual hard drive! (It may take a minute! Remember this is not going to be nearly as quick/smooth as loading on original hardware – and you’ll have to go through a bunch of Apple’s “registration” nonsense.)

  10. You can always attach more disk images to the emulated environment by re-creating “-drive” entries with “media=cdrom” and the file path to your ISO/CDR in the “qemu.command” file and saving/relaunching QEMU.
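And as promised back in step 3: if you’d rather skip the Disk Utility GUI, its command-line counterpart hdiutil should be able to produce an equivalent blank image. Consider this a sketch rather than gospel – the flags below are correct to the best of my knowledge, but double-check them against man hdiutil on your system (the file and volume names just reuse this guide’s example):

    $ hdiutil create -size 2g -fs HFS+J -layout SPUD -volname MacOSPuma MacOSPuma.dmg

(-size 2g makes a 2 GB image, -fs HFS+J formats it as journaled Mac OS Extended, -layout SPUD is the “Single partition – Apple Partition Map” option, and -volname sets the volume name you’ll see once it mounts.)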