Everyday Linux

A couple weeks ago, there was a Forbes article that caught my eye. (No, librarians, it wasn’t that twaddle, please don’t hurt me)

No, it was this one, relating one writer/podcaster’s decision to switch to Linux as his everyday operating system after a few too many encounters with the new “Blue Screen of Death”: Windows 10’s staggeringly inconvenient and endless Update screens.

Actually, turn it on, turn it off, who cares! You’re not working on that spreadsheet today anyway.

The article stood out because I had already been planning to write something extremely similar myself. A couple years ago, I needed a new laptop and decided I wanted out of the Apple ecosystem, partly because Apple’s desktop/laptop hardware and macOS design seemed increasingly shunted to the side in favor of iOS/mobile/tablets, but mostly because of $$$. I considered jumping ship to a Windows 10 machine, which, as I’ve said on several occasions, is actually a pretty nifty OS at its core – but, like Mr. Evangelho, I had encountered one too many productivity-destroying updates on my Windows station at work. Never mind the intrusive privacy defaults and the insane inability to permanently uninstall Candy Crush, Minecraft and other bloatware forced upon me by Microsoft.

why

I had used Linux operating systems, particularly Ubuntu (via BitCurator), before, and thought it might be time to take the leap to everyday use. After a little bit of research to make sure I would still be able to find versions of my most common/critical applications, I jumped ship and haven’t looked back. So, whereas I have written about Linux in a professional digital preservation context on several occasions, I want to finally make my evangelizing case for Linux as an everyday, personal operating system – for anyone.

Linux has a reputation as a “geeky” system for programmers and hardcore computer tinkerers, but it’s become incredibly accessible to anyone – or at least, certainly to anyone who’s used to having macOS or Windows in their daily lives. In fact, you’re almost certainly already using Linux even if you don’t realize it – if you have an Android smartphone, if you have a Chromebook laptop, if you have any of a thousand different smart/networked home devices (which, please throw them in the trash, but whatever), you’re using and relying on Linux.

Breaking away from the Mac/Windows dichotomy is as easy as your original choice of one or the other – the hurdle is largely just realizing there’s another option to debate.

Why Linux?

A Linux operating system is an example of free and open source software, often abbreviated FOSS. The “free” in there is meant to refer to “freedom”, not price (although FOSS tends to be free in that sense as well) – a legacy of the Free Software Foundation’s four essential freedoms, which hold that computer users should be able to:

  • run a program
  • study a program’s source code (in essence, to understand exactly how it works)
  • redistribute exact copies of that program
  • create and distribute modified versions of that program

This is all in opposition to closed and proprietary software, which uses copyright and patent licenses to restrict at least one of these freedoms. (Note that some open source software may still carry licenses that restrict the latter two points in certain ways – always check!)

Look, I could go on my anti-capitalist screed here, but you should probably just go see a more clever and entertaining one. But what it comes down to is that, unlike the proprietary model of one company hiring employees to build and distribute/sell its own, closed software, open source software is built by collaborative networks of programmers and users, under the general philosophy that humanity tends to make better, more broadly applicable advancements when everyone stands to (at least potentially) benefit.

Laika believed in this, don’t you?

That doesn’t mean that all open source developers are noble self-sacrificing volunteers. There are entire companies – like Canonical, Mozilla, Red Hat – dedicated to creating and supporting open source software, and any number of name-brand tech giants – Google, Oracle, yes, even Microsoft and Apple – that at least participate in certain open source projects. When I say everyone can benefit, that often includes Big Tech. So don’t get me wrong: there are plenty of ways to participate in and advocate for FOSS that don’t involve a total shift in your operating system and computing environment, if you’re perfectly content where you are now.

But for me, switching over completely to a FOSS operating system in Linux felt like a way to take back some control from increasingly intrusive devices. For many years, Apple products’ big selling point was “it just works”, and I solidly felt that way with my first couple MacBooks – buy a laptop and the operating system got out of the way, letting you browse the internet, make movies, write up Sticky Note reminders, listen to music, and install other favorite programs and games, in a matter of minutes. I could do whatever it was I wanted to do.

I don’t feel like macOS (or Windows) “just works” quite in that same way anymore – they’re designed to work the way Apple and Microsoft want me to work. Constant, barraging notifications to log in to iCloud or OneDrive accounts, to enable Siri or Cortana AI assistance. Obscured telemetry settings sending data back to the hivemind and downloading “helpful” background programs, clogging up the computer’s resources without user knowledge. Stepping way beyond security concerns to slowly but surely cordon off anything downloaded by the user, to pigeonhole them into corporately-vetted App Stores. A six-month-long hoopla over “Dark Mode.”

…go away forever?

(Look, I’m no fool – Apple’s choices were always business choices, made to ultimately improve their company’s market share, no matter which way you slice it – but I don’t think I’m alone in feeling that for some time that meant ceding at least the illusion of control to the user, or at least not nagging them every damn day into the feeling they were somehow using their own computer the “wrong” way)

Linux operating systems, because they are open and modifiable, are also extremely flexible and controllable – if that means you want to get into the nitty-gritty and install every single piece of software that makes a computer work yourself, go for it. But if that means you just want something that gets out of the way and lets you play Oregon Trail on the Internet Archive, Linux can also be that. It can be your everyday, bread-and-butter, “just works” computer, without voices constantly shouting at you about what that should look like.

What’s different?

Well, despite the whole stirring case I may have just made…there is no “Linux operating system.” Or at least, there is no one thing called “Linux” that you just go out and download and start streaming Netflix on.

Linux is a kernel. It’s the very center, core, most important piece of an operating system, but it’s not entirely functional in and of itself. You have to pile a bunch of other things on top of it: a desktop environment, a way to install and update applications, icons and windows and buttons – all the sexy, front-facing stuff that most of us actually consider when picking which operating system we want to use. So many, many, many people and companies have created their own version of that stuff, piled it on top of Linux, and released it as their own operating system. And each one of those can have a completely different look or feel to them.

All these different flavors or versions of Linux are referred to as distributions. (If you want to really fit in, call them “distros”)

So….what distribution do you choose???

This is absolutely the most overwhelming thing about switching to Linux. There are a lot of distributions, and they all have their own advantages and disadvantages – sometimes not very obvious, because there aren’t necessarily whole marketing teams behind them to give you the quick, summarized pitch on what makes their distribution different from others.

I’ve tried out several myself and will give some recommendations, in what I hope are user-friendly terms, in the next/last section. This huge amount of choice at the very first stage can be staggering, but consider the benefit compared to closed systems: do you ever wish macOS had a “home” button and a Super key like Windows, so you could just pull up applications and more without having to remember the keyboard shortcut for Spotlight? Do you wish the Windows taskbar was more responsive, or its drop-down menus were located all the way at the top of the screen so you had more space for your Word document? You’re probably never going to be able to make those tweaks unless Apple or Microsoft make them for you. With Linux, you can find the distribution that either already mixes and matches things the right way for you – or lets you tweak them yourself!

Like literally entire applications dedicated to easily tweaking the system

In terms of hardware, if you’re coming from Windows/PC-land, there’s not going to be much difference at all. Like Windows, you install a Linux operating system on a third-party hardware manufacturer’s device: HP, IBM, Lenovo, etc. You can competitively price features to your liking – more or less storage, higher resolution screen, higher quality keyboard or trackpad, whatever it is that’s important to you and your everyday comfort.

A small handful of companies will even directly sell you laptops with Linux distributions pre-installed (System76, Dell). But for the most competitive (read: cheapest) options, you’ll have to install Linux on a PC of your choice yourself.

Maybe not the most efficient choice

Like with Windows, this does also mean you *may* occasionally have to install or reinstall drivers to make certain peripheral devices (Wi-Fi cards, external mice) play nice with your operating system. This used to be a much more common issue than it is now, and a legitimate knock against Linux systems – but these days, if you’re using a major, well-supported distribution, it’s really no worse than Windows. And if you’re sitting there, a dedicated Windows user, thinking “huh, I’ve never had to deal with that”, neither have I in two years on my Linux laptop. This is more a warning to the Mac crowd that, hey, it’s possible for problems to arise when the company making the software isn’t also making the hardware (and if you’ve ever used a cheap non-Apple Thunderbolt adapter or power charger – you probably knew that anyway!)

Finally, applications! Again, the major Linux distributions have all at this point pretty much borrowed the visual conception of the “App Store” – a program you can use to easily browse, install and launch a vast range of open source software. The vetting may not be as thorough – so bring the same healthy dose of skepticism and awareness that you do to the Google Play Store and you’ll be fine.

Sure, “contains ads”, seems legit

If you’re worried about losing out on your favorite Mac/Windows programs, you absolutely may want to do some research to make sure there are either Linux versions or at least satisfactory Linux equivalents of the software you need. But while you might not be able to get Adobe programs, for instance, on to your new operating system, there are plenty of big-name proprietary apps that have made the leap in recent years: Spotify, Slack, even Skype. And Linux programs are usually available to at least open and convert the files you originally made with their Mac/Windows equivalents (LibreOffice can open and work with the various Microsoft Office formats, for instance, and GIMP can at least partially convert Photoshop .PSDs)

Installing these applications is as easy as clicking “Install” in an App Store. No Apple ID hoops to jump through. And the really wonderful thing is that unlike Mac or Windows, most Linux systems will track and perform application updates at the same time and in the same place as operating system updates – no more menu-searching and notifications from individual applications to make sure you’re on the latest, greatest, and most secure version of any given program. You’ll just get a general pop-up from the “Software Center” or equivalent and perform updates in one fell swoop, or as nit-pickily as desired. (And my Linux laptop has never unexpectedly forced a restart to update while I was doing something else)
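If you’re curious what that one-stop shop looks like under the hood, on Ubuntu and its derivatives the equivalent terminal commands would be something like the below – a minimal sketch, and entirely optional, since the Software Center does this for you:

$ sudo apt update       # refresh the catalog of available updates
$ sudo apt upgrade      # apply updates to the OS and installed applications alike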

Finally, Linux drives are formatted differently than macOS or Windows drives, most commonly using the “ext4” file system. This means you can encounter some of the same quirks in moving your old files to a Linux system that you may have had in shuttling between Mac and Windows – but Linux pretty much always comes with at least read support for HFS+ (Mac) and NTFS (Windows) drives, so I’ve likewise never had issues with simply transferring old files over to a new drive.

How can I try it out?

The great news is, unlike Mac or Windows, you don’t have to go to a physical store or buy a completely new laptop just to try a Linux distribution and see if it’s something you would like to use!

Just like when you install (or reinstall) the operating system on a Mac or Windows computer, to install Linux you’ll need a USB drive at least 8GB in size to house the installation disk image – an ISO file. Unlike Mac or Windows, Linux installation images, in addition to the installation program itself, pretty much always have a “Live” mode – this lets you run a Linux session on your computer, to see how well it works on your hardware and whether you like the distribution’s design and features. It’s a fantastic try-before-you-buy feature, and can even work with MacBooks, if that’s all you have (just don’t be surprised/blame Linux if there’s some hardware wonkiness, like your keyboard not responding 100% correctly).

Once you have an 8GB USB flash drive and the ISO file for the OS you want to try downloaded, you’ll need an application to “burn” the ISO to the flash drive and make it bootable. I recommend Etcher, which is multi-platform so it’ll work whether you’re starting out on Mac or Windows (and also, just for the record, if you’re trying to make bootable installers from macOS .DMGs or Windows ISOs – Etcher is a rad tool!). From there you’ll need to boot into the installer USB according to instructions that will depend on your laptop manufacturer (it usually means holding down one of the function keys at the top of your keyboard during startup, but the key combination varies depending on the hardware/maker).
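If you’re starting from a Linux machine and are already comfortable in a terminal, the same ISO-burning job can also be done with the venerable dd – a minimal sketch, where the ISO filename and the /dev/sdX device name are placeholders you must verify yourself, since pointing dd at the wrong disk will happily destroy it:

$ lsblk                     # identify your flash drive's device name first
$ sudo dd if=ubuntu-18.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress
$ sync                      # make sure all writes are flushed before unplugging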

So what about all those distributions? Which ones should you try? Here are some of the most popular flavors that I think would also be accessible to converts making their way over from macOS or Windows. These distributions all have wide user bases, meaning they all have either good documentation or even active support accounts that you can contact in the event of questions or problems.

Ubuntu

Thanks to its popularity as an operating system for servers and the Internet of Things, Ubuntu is probably the biggest name in Linux, and if you just want a super stable, incredibly well-supported desktop with thoughtful features, it is still my go-to recommendation for most casual users seeking an alternative to macOS and Windows (and as I said earlier, if you’ve ever encountered BitCurator, you already know what it looks/feels like). It’s what I use myself for day-to-day web browsing, streaming services, word processing, some Steam gaming, light digipres/coding and a bit of server maintenance for this very site.

When it comes to Ubuntu, you’re going to want to try out a version labeled “LTS” – that’s “Long-Term Support”, meaning OS updates are guaranteed for five years (any of the other versions are primarily for developers and other anxious early adopters who don’t mind a few more bugs). The latest LTS release, 18.04, just came out a couple months ago with a pretty major desktop redesign, but it’s as attractive, sleek and functional as ever, and I came back to it after flirting with some of the other distributions on this list.

Linux Mint

Linux Mint is itself a derivative of Ubuntu, so everything I just said about stability and support goes for Mint as well – the Mint developers just wait for Ubuntu to release updates and then add their own spin. The differences are thus largely visual – Linux Mint’s desktop is made to look more like Windows, so users who are migrating from that direction are more likely to be at home here. It’s been around for a while so the support community is likewise large and varied.

Zorin

Zorin is also an Ubuntu derivative that’s even more explicitly targeted at converting Windows users/Linux newbies. It’s newer than Mint but I have to say personally I think that’s led to a fresher, more attractive design. (More Windows 10 to Linux Mint’s Windows 7). In fact it’s pretty much built around flexible, easily changed desktop design. Going with a distribution based on super pretty icons and easily swapping around menus might seem silly, but honestly, there’s so many distributions with such similar core features that these *are* the kinds of decisions that make Linux users go with one over the other.

elementary OS

Whereas Linux Mint and Zorin are more or less “Ubuntu for Windows converts”, elementary OS has a visual design more targeted to bring over macOS users, with a dock, top menu bar, etc. It’s a very light, sleek OS that really only ships with the most basic apps and the design encourages you to keep things simple (their media apps are literally just called things like “Music” and “Videos”, and their custom web browser Epiphany has a bare minimum of features to keep from becoming a memory hog like Firefox and Chrome). But since it’s Ubuntu based you can still easily install more familiar open source apps like VLC, etc. Also a very intriguing Linux OS to try if you’re used to something like a Chromebook or a tablet as your primary device.

Pop!_OS

System76 is known more for their attractive hardware and great customer support (I’ve had my Lemur for two years and am still in love), but they recently put out their own Ubuntu-based Linux distribution targeted at developers and creative professionals. It pretty much seems like Ubuntu but with more minimalist icon design and some nifty expanded features for multiple workspaces, but mostly I’m plugging them here because if you wind up looking for higher-end hardware and some really helpful, responsive support, System76 is tops.

Solus

Whereas all the previous distributions I’ve mentioned have built off the stability of Ubuntu, Solus is different: it’s its own completely independent OS. It’s an example of a “rolling release” operating system. With macOS, Windows, and many Linux distributions like Ubuntu, you have to worry about your particular, fixed-point version eventually becoming obsolete and no longer receiving updates (e.g. once macOS 10.14 comes out, those users still using 10.11 will no longer receive security and software updates from Apple); Solus, by contrast, will just continually roll out updates, forever. Or, well, “forever,” but you get what I mean.

That means users actually get the latest software and features much faster, as you don’t have to wait for those things to get bundled into the next “release”. It’s an interesting model if you’re really ready to unmoor and explore away from the usual systems.

Aside from that, Solus’ custom “Budgie” desktop design is extremely pretty and modern, with a quick-launch menu and a custom applets bar. It feels a lot like Windows 10 in that sense.

Doing DigiPres with Windows

A couple of times at NYU, a new student would ask me what kind of laptop I would recommend for courses and professional use in video and digital preservation. Maybe their personal one had just died and they were shopping for something new. Often they’d used both Mac and Windows in the past so were generally comfortable with either.

I was and still am conflicted by this question. There are essentially three options here: 1) some flavor of MacBook, 2) a PC running Windows, or 3) a PC running some flavor of Linux. (There are Chromebooks as well, but given their focus on cloud applications and lack of a robust desktop environment, I wouldn’t particularly recommend them for archivists or students looking for flexibility in professional and personal use)

Each of these options has its drawbacks. MacBooks are and always have been prohibitively expensive for many people, and I’m generally on board with calling those “butterfly switch” keyboards on recent models a crime. Though a Linux distribution like Ubuntu or Linux Mint is my actual recommendation and personally preferred option, it remains unfamiliar to the vast majority of casual users (though Android and ChromeOS are maybe opening the window), and difficult to recommend without going full FOSS evangelist on someone who is largely just looking to make small talk (I’ll write this post another day).

So I never want to steer people away from the very real affordability benefits of PC/Windows – yet feel guilty knowing that’s angling them full tilt against the grain of Unix-like environments (Mac/Linux) that seem to dominate digital preservation tutorials and education. It makes them that student or that person in a workshop desperately trying in vain to follow along with Bash commands or muck around in /usr/local while the instructor guiltily confronts the trolley problem of spending their time saving the one or the many.

I also recognize that *nix education, while the generally preferred option for the #digipres crowd, leaves a gaping hole in many archivists’ understanding of computing environments. PC/Windows remains the norm in a large number of enterprise/institutional workplaces. Teaching ffmpeg is great, but do we leave students stranded when they step into a workplace and can’t install command line programs without Homebrew (or don’t have admin privileges to install new programs at all)?

This post is intended as a mea culpa for some of these oversights by providing an overview of some fundamental concepts of working with Windows beyond the desktop environment – with an eye towards echoing the Windows equivalents to usual digital preservation introductions (the command line, scripting, file system, etc.).

*nix vs NT

To start, let’s take a moment to consider how we got here – why are Mac/Linux systems so different from Windows, and why do they tend to dominate digital preservation conversations?

Every operating system (whether macOS, Windows, a Linux distribution, etc.) is actually a combination of various applications and a kernel. The kernel is the core part of the OS – it handles the critical task of translating all the requests you as a user make in software/applications to the hardware (CPU, RAM, drives, monitor, peripherals, etc.) that actually perform/execute the request.

“Applications” and “Kernel” together make up a complete operating system.

UNIX was a complete operating system created by Bell Labs in 1969 – but it was originally distributed with full source code, meaning others could see exactly how UNIX worked and develop their own modifications. There is a whole complex web of UNIX offshoots that emerged in the ’80s and early ’90s, but the upshot is that a lot of people used UNIX – sometimes its actual code, sometimes just its design and behavior – as the base for a number of other kernels and operating systems, including modern-day macOS and Linux distributions. (Those changes mean that these operating systems, while derived from or modeled on UNIX, are not technically UNIX itself – hence you will see these systems referred to as “Unix-like” or “*nix”)

The result of this shared lineage is that Mac and Linux operating systems are similar to each other in many fundamental ways, which in turn makes cross-compatible software relatively easy to create. They share a basic file system structure and usually a default shell and scripting language (Bash) for doing command line work, for instance.
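You can actually see this shared lineage for yourself – the same command works in the default terminal of both systems, reporting which kernel you’re sitting on top of:

$ uname -s      # prints "Darwin" on macOS, "Linux" on a Linux distribution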

Meanwhile there was Microsoft, which, though it originally created its own (phenomenally popular!) variant of UNIX in the ’80s, switched over in the late ’80s/early ’90s to developing its own, proprietary kernel, entirely separate from UNIX. This was the NT kernel, which is at the heart of every modern Windows operating system (the consumer Windows 95/98 line was still DOS-based, but everything from Windows 2000 and XP onward runs on NT). These fundamental differences in architecture (affecting not just clearly visible characteristics like file systems/naming and command line work, but extremely low-level methods of how software communicates with hardware) make it more difficult to make software cross-compatible between both *nix and Windows operating systems without a bunch of time and effort from programmers.

So, given the choice between these two major branches in computing, why this appearance that Mac hardware and operating systems have won out for digital preservation tasks, despite being the more expensive option for constantly cash-strapped cultural institutions and employees?

It seems to me it has very little to do with an actual affinity for Macs/Apple and everything to do with Linux and GNU and open source software. Unlike either macOS or Windows, Linux systems are completely open source – meaning any software developer can look at the source code for Linux operating systems and thus both design and make available software that works really well for them without jumping through any proprietary hoops (like the App Store or Windows Store). It just so happens that Macs, because at their core they are also Unix-like, can run a lot of the same, extremely useful software, with minimal to no effort. So a lot of the software that makes digital preservation easier was created for Unix-like environments, and we wind up recommending/teaching on Macs because given the choice between the macOS/Windows monoliths, Macs give us Bash and GNU tools and other critical pieces that make our jobs less frustrating.

Microsoft, at least in the ’90s and early aughts, didn’t make as much of an effort to appeal directly to developers (who, very very broadly speaking, value independence and the ability to do their own, unique thing to make themselves or their company stand out) – they made their rampant success on enterprise-level business, selling Windows as an operating system that could be managed en masse as the desktop environment for entire offices and institutions with less troubleshooting of individual machines/setups.

(Also key is that Microsoft kept their systems cheaper by never getting into the game of hardware manufacturing – unlike Mac operating systems, Windows is meant to run on third-party hardware made by companies like HP, Dell, IBM, etc. The competition between these manufacturers keeps prices down, unlike Apple’s all-in-one hardware + OS systems, making them the cheaper option for businesses opening up a new office to buy a lot of computers all at once. Capitalism!)

As I’ll get into soon, there have been some very intriguing reversals to this dynamic in recent years. But, in summary: the reason Macs have seemed to dominate digital preservation workshops and education is that there was software that made our jobs easier. It was never that you can’t do these things with Windows.

Windows File System

Beyond the superficial desktop design differences (dock vs. panel, icons), the first thing any user moving between macOS and Windows probably notices are the differences between Finder (Mac) and File Explorer (Windows) in navigating between files and folders.

In *nix systems, all data storage devices – hard disk drives, flash drives, external discs or floppies, or partitions on those devices – are mounted into the same, “root” file system as the operating system itself. In Windows, every single storage device or partition is separated into its own “drive” (I use quotes because two partitions might exist on the same physical hard disk drive, but are identified by Windows as separate drives for the purpose of saving/working with files), identified by a letter – C:, D:, G:, etc. Usually, the Windows operating system itself, and probably most of a casual user’s files, are saved on to the default C: drive. (Why start at C? It harkens back to the days of floppy drives and disk operating systems, but that’s a story for another post). From that point, any additional drives, disks or partitions are assigned another letter (if for some reason you need more than 26 drives or partitions mounted, you’ll have to resort to some trickery!)
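You can see the two models side by side with each system’s built-in tools:

$ df -h
> wmic logicaldisk get name

The first (on *nix) lists every device and where it’s mounted under the root file system; the second (on Windows) prints one lettered drive per device/partition (C:, D:, and so on).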

Also, file paths in Windows use backslashes (e.g. C:\Users\Ethan\Desktop) instead of forward slashes (/Users/Ethan/Desktop).

And if you’re doing command-line work, note that file names on typical Linux file systems are case-sensitive (meaning /Users/Ethan/Desktop/sample.txt and /Users/Ethan/Desktop/SAMPLE.txt are different files – though, to be fair, macOS’s default file systems are actually case-insensitive here too), while Windows’ DOS/PowerShell command line interfaces (more on these in a minute) are not (so SAMPLE.txt would overwrite sample.txt).
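Put together, here’s what listing the very same desktop folder looks like in each world, using the example paths above:

$ ls /Users/Ethan/Desktop
> dir C:\Users\Ethan\Desktop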

¯\_(ツ)_/¯

Fundamentally, I don’t think these minor differences make common tasks like navigating directories in the command line or scripting any more difficult than they are on Macs – it’s just different, and an important thing to keep in mind so you remember where your files live.

What DOES make things more difficult is when you start trying to move files between macOS and Windows file systems. You’ve probably encountered issues when trying to connect external hard drives formatted for one file system to a computer formatted for another. Though there’s a lot of programming differences between file systems that cause these issues (logic, security, file sizes, etc.), you can see the issue just with the minor, user-visible differences just mentioned: if a file lives at E:\DRIVE\file.txt on a Windows-formatted external drive, how would a computer running macOS even understand where to find that file?

Modern Windows systems/drives are formatted with a file system called NTFS (for many years Microsoft used FAT and its variants such as FAT16 and FAT32, and Windows remains backwards-compatible with these legacy file systems). Apple, meanwhile, is at this moment transitioning from its very longtime system HFS+ (which you may have also seen referred to in software as “Mac OS Extended”) to a new file system, APFS.

File system transitions tend to be a headache for users. NTFS and HFS+ have at least been around for long enough that quite a lot of software has been made to allow moving back and forth between the two file systems (if you are on Windows, HFS Explorer is a great free program for exploring and extracting files on Mac-formatted drives; while on macOS, the NTFS-3G driver allows for similarly reading and writing from Windows-formatted storage). Since it is still pretty new, I am less aware of options for reading APFS volumes on Windows, but will happily correct that if anyone has recommendations!

** I’ve gotten a lot of questions in the past about the exFAT file system, which is theoretically cross-compatible with both Mac and Windows systems. I personally have had great success with formatting external hard drives with exFAT, but have also heard a number of horror stories from those who have tried to use it and been unable to mount the drive into one or another OS. Anecdotally it seems that drives originally formatted exFAT on a Windows computer have better success than drives formatted exFAT on a Mac, but I don’t know enough to say why that might be the case. So: proceed, but with caution!

Executable Files

No matter what operating system you’re using, executable files are pieces of code that directly instruct the computer to perform a task (as opposed to any other kind of data file, whose information usually needs to be parsed/interpreted by executable software in some way to become meaningful – like a word processing application opening a text file, where the application is the executable and the text file is the data file). As a common user, any time you’ve double-clicked on a file to open an application or run a script, whether a GUI or a command line program, you’ve run an executable.

Where executable files live and how they’re identified to you, the user, varies a bit from one operating system to the next. A macOS user is accustomed to launching installed programs from the Applications folder, which are all actually .app files (.apps in turn are actually secret “bundles” that contain executables and other data files inside the .app, which you can view like a folder if you right-click an application and select “Show Package Contents”).

Executable files (with or without a file extension) might also be automatically identified by macOS anywhere in the file system and labeled with a generic “exec” icon.

And finally, because executable files are also called binaries, you might find executables in a folder labeled “bin” (for instance, Homebrew puts the executable files for programs it installs into /usr/local/bin)

Windows-compatible executables are almost always identified with .exe file extensions. Applications that come with an installer (or, say, from the Windows Store) tend to put these in directories identified by the application name in the “Program Files” folder(s) by default. But some applications you just download off of GitHub or SourceForge or anywhere else on the internet might just put them in a “bin” folder or somewhere else.

(Note: script files, which are often identified by an extension according to the programming language they were written in – .py files for Python, .rb for Ruby, .sh for Bash, etc. – can usually be made automatically executable like an .exe but are not necessarily by default. If you download a script, whether working on Mac or Windows, you may need to check whether it’s automatically come to you as an executable or more steps need to be taken. They often don’t, because downloading executables is a big no-no from the perspective of malware protection.)
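On the *nix side, granting that execute permission is a single command – a minimal sketch, with myscript.sh standing in for whatever hypothetical script you’ve downloaded (and, ideally, read through first):

$ chmod +x myscript.sh      # mark the file as executable
$ ./myscript.sh             # now it can be run directly

(On Windows there’s no direct equivalent – executability hinges largely on the file extension, which is part of why those extensions matter so much there.)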

Command Prompt vs. PowerShell

On macOS, we usually do command line work (which offers powerful tools for automation, cool/flexible software, etc.) using the Terminal application and Bash shell/programming language.

Modern Windows operating systems are a little more confusing because they actually offer two command line interfaces by default: Command Prompt and PowerShell.

The Command Prompt application uses DOS commands and has essentially not changed since the days of MS-DOS, Microsoft’s pre-Windows operating system that was entirely accessed/used via the command line (in fact, early Windows operating systems like Windows 3.1 and Windows 95 were to a large degree just graphical user interfaces built on top of MS-DOS).

The stagnation of Command Prompt/DOS was another reason developers started migrating over to Linux and Bash – to take advantage of more advanced/convenient features for CLI work and scripting. In 2006, Microsoft introduced PowerShell as a more contemporary, powerful command line interface and programming language aimed at developers in an attempt to win some of them back.

While I recognize that PowerShell can do a lot more and make automation/scripting way easier, I still tend to stick with Command Prompt/DOS when doing or teaching command line work in Windows. Why? Because I’m lazy.

The Command Prompt/DOS interface and commands were written around the same time as the original Bash – so even though development continued on Bash while Command Prompt stayed the same, a lot of the most basic commands and syntax remained analogous. For instance, the command for making a directory is the same in both Terminal and Command Prompt (“mkdir”), even if the specific file paths are written out differently.
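A quick illustration – the same command, differing only in path style:

$ mkdir -p projects/digipres      (Terminal/Bash)
> mkdir projects\digipres         (Command Prompt)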

So for me, while learning Command Prompt is a matter of tweaking some commands and knowledge that I already know because of Bash, learning PowerShell is like learning a whole new language – making a directory in PowerShell, e.g., is done with the command shortcut “md”.* It’s sort of like saying I already know Castillian Spanish, so I can tweak that to figure out a regional dialect or accent – but learning Portuguese would be a whole other course.

Relevant? Maybe not.

But your mileage on which of those things is easier may vary! Perhaps it’s clearer to keep your *nix and Windows knowledge totally separate and take advantage of PowerShell scripting. But this is just to explain the Command Prompt examples I may use for the remainder of this post.

* ok, yes, “mkdir” will also work in PowerShell, because PowerShell aliases it to the underlying New-Item cmdlet – this comparison doesn’t work so well with the basic stuff, but it holds true for more advanced functions/commands and you know it, Mx. Know-It-All

The PATH Variable

The PATH system variable is a fun bit of computing knowledge that hopefully ties together the three previous topics we’ve just covered!

Every modern operating system has a default PATH variable set when you start it up (system variables are basically configuration settings that, as their name implies, you can update or change). PATH determines where your command line interface looks for executable files/binaries to use as commands.

Here’s an example back on macOS. Many of the binaries you know and execute as commands (cd, mkdir, echo, rm) live in /bin or /usr/bin in the file system. By default, the PATH variable is set as a list that contains both “/bin” and “/usr/bin” – so that when you type

$ cd /my/home/directory

into Terminal, the computer knows to look in /bin and /usr/bin to find the executable file that contains instructions for running the “cd” command.

Package managers, like Homebrew, usually put all the executables for the programs you install into one folder (in Homebrew’s case, /usr/local/bin) and ALSO automatically update the PATH variable so that /usr/local/bin is added to the PATH list. If Homebrew put, let’s say, ffmpeg’s executable file into the /usr/local/bin folder, but never updated the PATH variable, your Terminal wouldn’t know where to find ffmpeg (and would just return a “command not found” error if you ran any command starting with

$ ffmpeg

), even though it *is* somewhere on your computer.

So let’s move over to Windows: because Command Prompt, by default, does not have a package manager, if you download a command line program like ffmpeg, Command Prompt will not automatically know where to find that command – unless a) you move ffmpeg’s executable file into a directory that’s already in your PATH list, or b) update the PATH list to include the directory where you’ve put ffmpeg.

I personally work with Windows rarely enough that I tend to just add a command line program’s directory to the PATH variable when needed – a change that only takes effect in newly opened Command Prompt windows (when in doubt, rebooting the computer does the trick). So I can see how, if you were adding/installing command line programs for Windows more frequently, it would probably be convenient to just designate “here’s my folder of command line executables”, update the PATH variable once, and then put new downloaded executables in that folder from then on.
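One handy middle ground: you can also change PATH for just your current Command Prompt session, which is a nice way to test a program before committing to the permanent change. A minimal sketch, where C:\Tools\ffmpeg\bin is a hypothetical folder you’ve unzipped ffmpeg into:

> set PATH=%PATH%;C:\Tools\ffmpeg\bin
> ffmpeg -version

The change evaporates as soon as you close that window.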

*** I do NOT recommend putting your Downloads or Documents folder, or any user directory that you can access/change files in without admin privileges, into your PATH!!! It’s an easy trick for accidentally-downloaded malware to get into a commonly-accessed folder (and/or one that doesn’t require raised/administrative privileges) and then be able to run its executable files from there without you ever knowing because it’s in your PATH.

Updating the PATH variable was a pain in the ass in Windows 7 and 8 but is much less so now on Windows 10.

If you are in the command line and want to quickly troubleshoot what file paths/directories are and aren’t in your PATH, you can use the “echo” command, which is available on both *nix and Windows systems. In *nix/Bash, you reference a variable by prefixing its name with a dollar sign ($PATH), so

$ echo $PATH

would return something like /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin, with the different file paths in the PATH list separated by colons.

While on Windows, variables are identified like %THIS% (and the paths in the list are separated by semicolons instead of colons), so you would run

 > echo %PATH%

Package Management

Now, as I said above, package managers like Homebrew on macOS can usually handle updating/maintaining the PATH variable so you don’t have to do it by hand. For some time, there wasn’t any equivalent command line package manager (at least that I was aware of) for Windows, so there wasn’t any choice in the matter.

That’s no longer true! Now there’s Chocolatey, a package manager that will work with either Command Prompt or PowerShell. (Note: Chocolatey *uses* PowerShell in the background to do its business, so even though you can install and use Chocolatey with Command Prompt, PowerShell still needs to be on your computer. As long as your computer/OS is newer than 2006, this shouldn’t be an issue).

Users of Homebrew will generally be right at home with Chocolatey: instead of

$ brew install [package-name]

you can just type

 > choco install [package-name]

and Chocolatey will do the work of downloading the program and putting the executable in your PATH, so you can get right back to command line work with your new program. It works whether you’re on Windows 7 or 10.

The drawback, at least if you’re coming over from using macOS and Homebrew, is that Chocolatey is much more limited in the number/type of packages it offers. That is, Chocolatey still needs Windows-compatible software to offer, and as we went over before, some *nix software has just never been ported over to supported Windows versions (or at least, into Chocolatey packages). But many of the apps/commands you probably know and love as a media archivist – ffmpeg, youtube-dl, mediainfo, exiftool, etc. – are there!
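So kitting out a fresh Windows machine for media work can be as quick as the below – assuming those package names as they’re listed in the Chocolatey repository at the time of writing:

> choco install ffmpeg mediainfo exiftool
> choco upgrade all

(That second command updates every Chocolatey-installed package in one go, much like brew upgrade.)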

Scripting

If you’ve been doing introductory digipres stuff, you’ve probably dipped your toes into Bash scripting – which is just the idea of writing some Bash code into a plain text file, containing instructions for the Terminal. Your computer can then run the instructions in the script without you needing to type out all the commands into Terminal by hand. This is obviously most useful for automating repetitive tasks, where you might want to run the same set of commands over and over again on different directories or sets of files, without constant supervising and re-typing/remembering commands. Scripts written in Bash and intended for Mac/Linux Terminals tend to be identified with “.sh” file extensions (though not exclusively).

Scripting is absolutely still possible with Windows, but we have to adjust for the fact that the Windows command line interface doesn’t speak Bash. Command Prompt scripts are written in DOS style, and their benefit for Windows systems (over, say, PowerShell scripts) is that they are extremely portable – it doesn’t much matter what version of Windows or PowerShell you’re using.

Command Prompt scripts are often called “batch files” and can usually be identified by two different file extensions: .bat or .cmd

(The difference between the two extensions/file formats is negligible on a modern operating system like Windows 7 or 10. Basically, BAT files were used with MS-DOS, and CMD files were introduced with Windows NT operating systems to account for the slight improvements from the MS-DOS command line to NT’s Command Prompt. But, NT/Command Prompt remained backwards-compatible with MS-DOS/BAT files, so you would only ever encounter issues if you tried to run a CMD file on MS-DOS, which, why are you doing that?)

Actually writing Windows batch scripts for Command Prompt is its own whole tutorial. Which – thankfully – I don’t have to write, because you can follow this terrific one instead!

Because Command Prompt batch scripts are still so heavily rooted in decades-old strategies (DOS) for talking to computers, they are pretty awkward to look at and understand.
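For a taste, here’s a minimal sketch of a batch script that loops over the WAV files in the current folder and prints an MD5 checksum for each, using the built-in certutil tool (the .wav filter is purely for illustration) – note the @echo off incantation, the doubled %%, and the ancient “rem” comment syntax:

@echo off
rem checksum every WAV file in the current directory
for %%f in (*.wav) do (
  echo Checksumming %%f ...
  certutil -hashfile "%%f" MD5
)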

I mean what

A lot of software developers skip right to a more advanced/human-readable programming language like Python or Ruby. Scripts in these languages do also have the benefit of being cross-platform, since Python, for example, can be installed on either Mac or Windows. But Windows OS will not understand them by default, so you have to go install them and set up a development environment rather than start scripting from scratch as you can with BAT/CMD files.

Windows Registry

The Windows Registry is fascinating (to me!). Whereas in a *nix operating system, critical system-wide configuration settings are contained as text in files (usually high up under the “root” level of your computer’s file system, where you need special elevated permissions to access and change them), in Windows these settings (beyond the usual, low-level user settings you can change in the Settings app or Control Panel) are stored in what’s called the Windows Registry.

The Windows Registry is a database where configuration settings are stored, not as text in files, but as “registry values” – pieces or strings of data that store instructions. These values are stored in “keys” (folders), which are themselves stored in “hives” (larger folders that categorize the keys within related subfolders).

Registry values can be a number of different things – text, a string of characters representing hex code, a boolean value, and more. Which kind of data they are, and what you can acceptably change them to, depends entirely on the specific setting they are meant to control. However, values and keys are also often super abstracted from the settings they change – meaning it can be very very hard to tell what exactly a registry value can change or control just by looking at it.

…..no, not at all!

For that reason, casual users are generally advised to stay out of the Windows Registry altogether, and those who are going to change something are advised to make a backup of their Registry before making edits, so they can restore it if anything goes wrong.
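Both the peeking and the backing up can be done from the command line with the built-in reg tool – a minimal sketch, first querying a real value (it stores whether File Explorer shows hidden files), then exporting the whole current-user hive to a backup file with a hypothetical name:

> reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v Hidden
> reg export HKCU my-registry-backup.reg

Reading with reg query is harmless – it’s writing (reg add) where all the usual warnings apply.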

Just as you may never mess around in your “/etc” folder on a Mac, even as a digitally-minded archivist, you may or may not ever need to mess in the Windows Registry! I’ve only poked around in it to do things like removing that damn OneDrive tab from the side of my File Explorer windows so I stop accidentally clicking on it. But I thought I’d mention it since it’s a pretty big difference in how the Windows operating and file systems work.

Or disable Microsoft’s over-extensive telemetry! Which, along with the fact that you can’t fucking uninstall Candy Crush no matter how hard you try, is one of the big things holding back an actually pretty nifty OS.

How to Cheat: *nix on Windows

OK. So ALL OF THIS has been based on getting more familiar with Windows and working with it as-is, in its quirky not-Unix-like glory. I wanted to write these things out, because to me, understanding the differences between the major operating systems helped me get a better handle on how computers work in general.

But guess what? You could skip all that. Because there are totally ways to just follow along with *nix-based, Bash-centric digital preservation education on Windows, and get everyone on the same page.

On Windows 7, there were several programs that would port a Bash-like command line environment and a number of Linux/GNU utilities to work in Windows – Cygwin probably being the most popular. Crucially, this was not a way to run native Linux applications on Windows – it was just a way of doing the most basic tasks of navigating and manipulating files on your Windows computer in a *nix-like way, with Bash commands (the ported tools run through a compatibility layer that translates their *nix system calls into Windows ones behind-the-scenes).

But a way way more robust boon for cross-compatibility was the introduction of the Windows Subsystem for Linux in Windows 10. Microsoft, for many years the fiercest commercial opponent of open source software/systems, has in the last ten years or so increasingly been embracing the open source community (probably seeing how Linux took over the web and making a long-term bet to win back the business of software/web developers).

Hmm

The WSL is a complete compatibility layer for, in essence, installing Linux operating systems like Ubuntu or Kali or openSUSE *inside* a Windows operating system. Though it still has its own limitations, this basically means that you can run a Bash terminal, install Linux software, and navigate around your computer just as you would on a *nix-only system – from inside Windows.

The major drawback from a teaching perspective: it is still a Linux operating system Windows users will be working with (I’d personally recommend Ubuntu for digipres newbies) – meaning there may still be nagging differences from Mac users (command line package management using “apt” instead of Homebrew for instance, although this too can be smoothed over with the addition of Linuxbrew). This little tutorial covers some of those, like mounting external drives into the WSL file system and how to get to your Windows files in the WSL.
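So where a Mac user would reach for Homebrew, inside an Ubuntu WSL session the built-in equivalent looks like this (assuming packages available in Ubuntu’s standard repositories, which these are):

$ sudo apt update
$ sudo apt install ffmpeg mediainfo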

You can follow Microsoft’s instructions for installing Ubuntu on the WSL here (it may vary a bit depending on which update/version exactly of Windows 10 you’re running).

That’s it! To be absolutely clear at the end of things here, I’ve generally not worked with Windows for most digital/audiovisual preservation tasks, so I’m hardly an expert. But I hope this post can be used to get Windows users up to speed and on the same page as the Mac club when it comes to the basics of using their computers for digipres fun!

A Guide to Legacy Mac Emulators

Last fall I wrote about the collaborative technical/scholarly process of making some ’90s multimedia CD-ROMs available for a Cinema Studies course on Interactive Cinema. I elided much of the technical process of setting up a legacy operating system environment in an emulator, since my focus for that post was on general strategy and assessment – but there are aspects of the technical setup process that aren’t super clear from the Emaculation guides that I first started with.

That’s not too surprising. The tinkering enthusiast communities that come up with emulators for Mac systems, in particular, are not always the clearest about self-documentation (the free-level versions of PC-emulating enterprise software like VirtualBox or VMWare are, unsurprisingly, more self-describing). That’s also not something to hold against them in the least, mind you – when you are a relatively tiny, all-volunteer group of programmers keeping the software going to maintain decades’ worth of content from a major computing company that’s notoriously litigious about intellectual property….some of the details are going to fall through the cracks, especially when you’re trying to cram them into a forum post, not specifically addressing the archival/information science community, etc. etc.

In particular, while each Mac emulator has some pretty good information available to troubleshoot it (if you’ve got the time to find it), I’ve never found a really satisfying overview, that is, an explanation of why you might choose X program over Y. That’s partly because, as open source software, each of these programs is *potentially* capable of a hell of a lot – but might require a lot of futzing in configuration files and compiling of source code to actually unlock all those potentials (which, those of us just trying to load up Nanosaur for the first time in 15 years aren’t necessarily looking to mess with). Most of the “default” or recommended pre-compiled Mac/Windows versions of emulators offered up to casual or first-time users don’t necessarily do every single feature that the emulator’s front page brags about.

So what really is the boundary between Basilisk II and SheepShaver? Why is there such a difference between Mac OS 9.0.4 and 9.1? And what the hell is a ROM file anyway? That’s what I want to get into today. I’ll start with the essential components to get any Mac emulation program running, give some recommendations for picking an emulator, then round it out with some installation instructions and tips for each one.

What do I need?

An Emulator
There are several free and open-source software options for emulating legacy Mac systems on contemporary computers. We’ll go over these in more detail in a minute.

An Operating System Installer
You’ll need the program that installs the desired operating system that you’re trying to recreate/emulate: let’s say, for example, Mac OS 8.5. These used to come on bootable CD-ROMs, or depending on the age of the OS, floppy disks. If you still have the original installer CD lying around, great! You can still use that. For the rest of us, there’s WinWorld, providing disk image files for all your abandonware OS needs.

A ROM File
This is the first thing that can start to throw people off. So you know your computer has a hard drive, where your operating system and all your files and programs live. It also has a CPU, central processing unit, which is commonly analogized to the “brain” of the computer: it coordinates all the different pieces of your computer, hardware and software alike: operating system, keyboard, mouse, monitor, hard drive (or solid state drive), CD-ROM drive, USB hub, etc. etc. But the CPU itself needs a little bit of programmed code in order to work – it has to be able to both understand and give instructions. Rather than stored on a hard drive like the operating system, which is easily writable/modifiable by the user, this crucial, small central piece of code is stored alongside the CPU on a chip of Read-Only Memory (the read-only part is why this sort of code is often called firmware rather than software). That also ensures your computer has at least some basic functionality even if your operating system were to get corrupted or some piece of hardware were to truly go haywire (see this other post). That’s your ROM file. If the whole goal of an emulator is to trick legacy software into thinking it’s on an older machine by creating a fake computer-inside-your-computer (a “virtual machine”), you need a ROM file to serve as the fake brain.

This is trickier with Mac emulation than it is with Windows/PC emulation. The major selling point of Windows systems is that they are not locked into specific hardware: there are/have been any number of third-party manufacturers (Dell, Lenovo, HP, IBM, etc etc) and they all have to make sure their hardware, including the CPU/ROM that come with their desktops, are very broadly compatible, because they can never predict what other manufacturer’s hardware/software you may be trying to use in combination. This makes emulation easier, because the emulating application can likewise go for broad compatibility and probably be fine, without worrying too specifically about *exactly* what model of CPU/ROM it’s trying to imitate (see, for example, DOSBox).

Not so with Mac, since Apple makes closed-box systems: the hardware, OS, software, etc., are all very carefully designed to only work with their own stuff (or, at least, stuff that Apple has pretty specifically approved/licensed). So setting up a Mac emulator, you have to get very specific about which ROM file you are using as your fake brain – because certain Apple models would have certain CPUs, which could only work with certain operating system versions, which would only work with certain versions of QuickTime, which would only play certain files, which would………

That sounds exhausting.

It is.

Wait, if this relies on proprietary code from closed-box systems… is this legal?

Well if you got this far in an article about making fake Macs before asking that, I’m not so sure you actually care about the answer. But, at least so far as my knowledge of American intellectual property law goes, and I am by no means whatsoever an expert, we are in gray legal territory. You’re definitely safest to extract and use the ROM file of a Mac computer you bought. Otherwise we’re basically making an intellectual property fair use case here – that we’re not taking any business/profit from Apple from using this firmware/software for personal or educational purpose, and that providing emulation for archival purposes (and yes, I would consider recovering personal data an “archival” process) serves a public good by providing access to otherwise-lost files. The (non-legal) term “abandonware” does also exist for a reason – these forums/communities are pretty prominent, and Apple’s shown no particular signs recently of looking to shut them down or stem the proliferation of legacy ROMs floating around.

Of course, be careful about who and where you download from. Besides malware, it’s easy to come across ROM files that are just corrupted and non-functional. I’ll link with impunity to options that have worked for me.

[Also be advised! Files made from video game cartridges – SNES, Game Boy, Atari, Sega Genesis and the like – are *also* called ROM files, since game cartridges are also really just pieces of Read-Only Memory. Just searching “ROMs” is going to offer up a lot of sites offering those, which is very fun and all, but definitely not going to work in your Mac emulator.]

So how do I pick what ROM file and emulator to use?

That’s largely going to depend on what OS you’re aiming for. There are four emulators that I’ve used successfully (read: that have builds and guides available on Emaculation) that together cover the gamut of basically all legacy Mac machines: Mini vMac, Basilisk II, SheepShaver, and QEMU.

As I mentioned at the top, a confusing aspect is that many of these programs have various “builds” – different versions of the same basic application that offer tweaks and improvements focused on one particular feature or another. The most stable, generic version that the developers recommend for download might not actually be compatible with *every* ROM or operating system that the emulator can theoretically handle (with a different build).

In hunting down ROM files, you’ll probably also come across ROMs listed not as being from a particular Mac model, but as “Old World” or “New World”. These essentially refer to the two broad “families” of CPUs that Apple used for Macs before moving to the Intel chips still found in Macs today: generally speaking, “Old World” refers to the Motorola 68000 series of processors, while “New World” refers to the PowerPC line spearheaded by “AIM” (an Apple-IBM-Motorola alliance).

New World and Old World ROMs can be a good place to start, since they are often taken from sources (e.g. recovery discs) aimed broadly at the Motorola 68000 or PowerPC architectures, and can therefore potentially imitate a number of specific Mac models – but don’t be too surprised if you come across a software/OS combination that just isn’t working and you have to hunt down a more specific ROM for a particular Mac brand/model. We’ll see an example of this in a moment with our first emulator.

If you are currently using macOS or iOS, you can find some wonderfully detailed tech specs on every single piece of Mac hardware ever made using the freeware Mactracker app. Picking an exact model to emulate based on your OS/processor needs can help narrow down your search.

So with that in mind, let’s cut to the chase – moving chronologically through Mac history, here are my recommendations and install tips:

Mini vMac

Recommended:

Mini vMac is an easy-to-set-up (but thereby probably not the most robust) emulator for black-and-white Motorola 68000-based Macs. While it’s eminently customizable if you dig into the source code (or pay the guy who makes it at least $10 for a year’s worth of by-request custom setups – not a bad deal!!), the “standard variation” of the app that the Mini vMac page encourages you to download for a quick start quite specifically emulates a Macintosh Plus. The Macintosh Plus could run Mac System 1.0 through 7.5.5 – but the System 7.x versions frequently required some extra hardware finagling, so I’d recommend sticking with Mini vMac for Systems 1.0 through 6.x.

Also, while a generic Old World ROM *should* work since Mac Pluses had 68000-series CPUs, I haven’t had much success with those on the standard variation of Mini vMac. Look for a ROM file specifically from a Mac Plus instead (such as the one in the link above, to a very useful GitHub repository with several ROMs from specific Mac models).

Installation:

  1. Download the standard variation build for your host operating system (OSX, Windows, etc.)
  2. Download a bootable disk image for your desired guest OS – System 1.x through 6.x (note these will probably be floppy disk image files, either with a .dsk or .img file extension).
  3. Download the Mac Plus ROM, rename it to “vMac.ROM” and place it in the same folder as the Mini vMac application.
  4. Double-click on the Mini vMac application to launch. You should hear a (very loud) startup beep and see a picture of a 3.5″ floppy with a blinking question mark on it – this indicates that the emulator has correctly loaded the ROM file but can’t find a bootable operating system.
  5. Drag the disk image for your operating system on to the Mini vMac window. If it’s a complete, bootable disk image, your desired operating system environment should appear!
  6. From there, it’ll depend on what you’re doing – if you’re just trying to run/look at a piece of software from a floppy image, you can drag and drop the .img or .dsk into the Mini vMac window and it should appear in your booted System 1.x-6.x environment. If you want to create a virtual “drive” to save your emulated environment on, you can create an empty disk image with one of the next two emulators on this list, or with the dd command-line utility if you’re on a Mac/Linux system (note: creating a Blank Image through a contemporary version of macOS’ Disk Utility won’t work):

    $ dd if=/dev/zero of=/path/to/image.img bs=512 count=1000

    will make a 512 KB blank image (512 bytes was the traditional size for floppy disk “blocks”). You can increase the count argument to make the image however large you want, but be warned that early file systems had size limitations, and at some point operating systems and software of the classic Mac era might not recognize/work with a disk that’s too large. Once you have a blank disk image, you can just drop it into a Mini vMac window running a legacy OS to format (“initialize”) it to Apple’s old HFS file system and start installing software/saving files to it. Generally speaking, though, a lot of apps and files from this era ran straight off of floppies, so if you’re just looking to run old programs without necessarily saving/creating new files, dragging and dropping in a system disk and then your .img or .dsk files should do the trick!
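    To put numbers on that count argument: a sketch of a roughly 20 MB image (the path is hypothetical) would be

    $ dd if=/dev/zero of=/path/to/bigger_image.img bs=512 count=40000

    since the image size is just bs × count – here, 512 bytes × 40,000 blocks ≈ 20 MB, comfortably within what HFS and the classic Mac systems can handle.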

Basilisk II

Recommended:

Basilisk II is another well-maintained Motorola 68000 series emulator. But whereas Mini vMac is primarily aimed at emulating the Mac Plus, Basilisk can emulate either a Mac Classic or a Mac II series model (hence “Basilisk II” – there never was a “Basilisk I”), depending on the configuration/build and ROM file used. As with Mini vMac, this might not be clear, but the precompiled configurations of Basilisk for Mac and Windows offered up on the Emaculation forums – which is where the Basilisk II homepage sends you – specifically offer the latter. So while you may see info that Basilisk II can emulate everything from System 0.x through 8.1, the version you’re most likely going to be using out-of-the-box can probably only handle the more limited range of System 7.x through 8.1 (the targeted range for the Mac II series).

Note, though: this was kind of a bananas era for Apple hardware when it came to model branding. So in terms of looking for ROMs, you can try to find something specifically labelled as being from a Mac II series machine – but you may also find success with a ROM file from a Quadra (largely the same as Mac II machines, but with some CPU/RAM upgrades) or a Performa (literally re-branded Quadras with a different software configuration, intended for consumer rather than professional use). In fact, the Performa and Quadra ROMs on Redundant Robot’s site work quite well, so I’ve linked/recommended those above (the Performa one definitely supports color display, so that’s a good one). These have the benefit of emulating a slightly more powerful machine than the older Mac IIs – but just keep in mind what you’re doing: your desired OS may install fine, but you may still encounter a piece of legacy software that relies on the configuration and limitations of the Mac II.

Installation:

  1. Download the precompiled app for your host operating system from Emaculation (I’ll assume OSX/macOS here).
  2. Download a bootable disk image for your desired guest OS, from 7.x through 8.1 – I’ll use Mac OS 7.6.1 here as an example. Note this is right in the transition period from floppies to CD-ROM, so your images may well be ISOs instead of .img files.
  3. Download the Performa ROM from Redundant Robot and unzip it. It’s a good idea to put it in the same folder as the Basilisk app just to keep track of it, but this time you can put it anywhere – you’ll have the option to select it in the Basilisk preferences no matter where it lives on your computer (and you don’t need to rename it either).
  4. In the Basilisk folder, double-click on the “BasiliskGUI” app. This is where you can edit the preferences/configuration for your virtual machine:
  5. In the “Volumes” tab, click on “Create…”. We need to make a totally blank disk image to serve as the virtual hard drive for our emulated computer. Choose your save location using the file navigation menu, then pick a size and name for this file (something descriptive like MacOS7 or OS7_System can help you keep track of what it is). For System 7, somewhere around 500 or 1000 MB is going to be plenty. Hit “OK” and in a moment your new blank disk image will appear under the “Volumes” tab in preferences.
  6. Next select “Add” and navigate to the bootable disk image for your operating system. So, here we select and add the System 7.6.1 ISO file.
  7. The “Unix Root” is an option for Basilisk to create a “shared” folder between your host, contemporary desktop and your emulated environment. This could be useful later for shuttling individual files into your emulator (as opposed to packaging and mounting disk images), but is not strictly necessary to set up right now. If you do choose to set up a shared folder now, you have to type out the full file path for the directory you want to use (e.g. the example path shown just after these steps):
  8. Keep the “Boot From” drop-down on “Any” and the “Disable CD-ROM Driver” box unchecked.
  9. The only other tabs you should have to make any changes on at this point are “Graphics” and “Memory/Misc”. For “Graphics”, make sure you have Video Type: “Window” selected. The Refresh Rate can be “Dynamic” so long as you’re on a computer that’s come out within…like, the last 10 years. The width and height resolution settings are up to you, but you might want to keep them at a legacy resolution like 640×480 for accurate rendering of software later.
  10. In the “Memory/Misc” tab, choose the amount of RAM you want your virtual machine to have. 64 MB is, again, plenty powerful for the era, and you can always tweak this later if it helps with rendering software. Keep Mac Model ID on “Mac IIci (MacOS7.x)” (note: I’ve successfully installed System 7 in Basilisk using this setting and a Quadra 900 ROM, so I think the desired operating system, rather than the Mac model, might be the most important thing to keep in mind here). Also leave CPU Type on “68040”. Finally, use the Browse button to select the Performa ROM file you downloaded:
  11. At this point, you can hit “Save” and then “Start” at the bottom of the Basilisk Settings window…and you should successfully boot into the System 7.6.1 ISO! The desktop background even comes up with little optical discs to make sure you know you’re operating from a CD-ROM:
  12. The “disk is unreadable by this Macintosh” error is to be expected: this is the blank virtual drive that you created, but haven’t formatted yet. Choose “Initialize” and then “Continue” (you’re not wiping any data, because the blank disk image doesn’t have any data in it yet!!). You should get an option to name your new drive in System 7 – I generally like to name this the same thing that I named the disk image file, just to avoid confusion and make it clear (to myself) that this is just the same file/disk being represented in two different environments.
  13. Once that’s done, you should just be on the System 7 desktop, with two “drives” mounted in the top right: the Mac OS 7.6.1 ISO installer, and then your blank virtual drive. Double-click on the Mac OS 7.6.1 ISO drive and then the “Install Mac OS” program to start running the installer software.
  14. The System 7.6.1 installer in particular forces you to go through a README file and then to “update your hard disk driver” (you can just select “Skip” for the latter).
  15. For the “Choose a disk for installation” step, make sure you have selected the empty (but now formatted) virtual drive you named.
  16. From there you should be able to just run the installer. This copies over and installs the Mac OS 7 system from the ISO to your virtual hard disk!
  17. When the program has successfully run, you can quit out of Basilisk by shutting down the emulated System 7 machine. (Hint: this used to be in the “Special” menu!)
  18. You can now remove the System 7.6.1 ISO file from the Volumes list in Basilisk’s settings, so that every time you launch the software, it goes right into the System 7 environment installed on your virtual drive. You can also mount/add any other software for which you have an ISO or IMG file (CD or floppy, basically) by adding that file to Basilisk’s Volumes list and relaunching Basilisk!
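As promised in step 7, the “Unix Root” field just wants the absolute path of an existing folder on your host system – the username and folder name here are hypothetical:

    /Users/yourname/Documents/BasiliskShared

Anything you place in that folder should then show up inside the emulated environment, mounted as a “Unix” volume.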

SheepShaver

Recommended:

SheepShaver is a PowerPC Macintosh emulator, maintained by the same team/people who run Basilisk II – so you’ll find that SheepShaver’s interface and installation process are extremely similar. Since we’re dealing with PowerPC architecture now, you’re going to need a New World ROM or a similarly specific ROM (I’ve had general success pretending I’m using one of the blue-and-white Power Macintosh G3s).

You might notice that SheepShaver’s maximum installable OS is Mac OS 9.0.4 – even though, yes, versions of Mac OS 9 above 9.0.4 existed. This is because SheepShaver cannot emulate a modern Memory Management Unit (MMU) – a particular advancement in CPU technology that we don’t need to get into too much right now (particularly since I don’t totally understand at the moment what it does). But the point is, Mac OS 9.0.4 was the last Mac operating system that could function entirely without a modern-style MMU. If you need Mac OS 9.1 or above, it’s time to start looking at QEMU – but, just speaking personally, I have yet to encounter a piece of software so specific to Mac OS 9.1-9.2.2 that it didn’t work in 9.0.4 in SheepShaver.

Installation:

  1. Download the precompiled app for your host operating system from Emaculation (I’ll assume OSX/macOS here). If you are using macOS Sierra (10.12) or above as your host OS:
    Starting with Sierra, Apple introduced a new silent “quarantine” feature in its Gatekeeper software, which manages and verifies applications you’ve downloaded through the App Store or elsewhere. I can bet that at one point or another you’ve gone to the Security & Privacy section of your System Preferences to allow apps downloaded from “identified developers”, or even to just let through one troublesome app that Gatekeeper didn’t trust for whatever reason. Perhaps you even remember the old ability to allow apps downloaded from “Anywhere”, which Apple has now removed as an option from the System Preferences GUI. As a result, some applications that you just download in a .zip file from a third party (like the Emaculation forums) may not open correctly, because they have been quietly “quarantined” by Gatekeeper, stripping the permissions and file associations the application needs to run correctly. Obviously this is a strong security feature to have on by default to protect against malware, but in this case we want to be sure SheepShaver at least can pass through. Though you can’t do this in System Preferences anymore, you can pass apps individually through the quarantine using the following command in Terminal:

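     # -r applies the command recursively to everything inside the .app bundle;
     # -d deletes the "com.apple.quarantine" attribute (the path below is a placeholder)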
     $ sudo xattr -rd com.apple.quarantine /path/to/SheepShaver.app

  2. Download a bootable disk image for your desired guest OS, from 8.5 through 9.0.4 – I’ll use Mac OS 9.0.4 here as an example. We’re firmly in the CD-ROM era now, so it should be an ISO file. Once you have downloaded and extracted the ISO file, you must “lock” the ISO in Finder – this is needed to trick the Mac OS 9 installer software into thinking it is really on a physical CD-ROM. You can do this by right-clicking on the ISO, selecting “Get Info”, then checking the “Locked” box:
  3. Download the New World ROM from Redundant Robot and unzip it. Put this in the same folder as the SheepShaver app itself and rename the ROM file so it is just named “Mac OS ROM” (spaces included, no file extension):
  4. Double-click on the SheepShaver app icon to launch the app. If the app has successfully found a compatible ROM in the same folder, you should see a gray screen with a floppy disk icon and a blinking question mark on it:

    (If no compatible ROM is found, the app will just automatically quit on launch, so you can troubleshoot from there).
  5. In the SheepShaver app menu at the top of your screen, select “Preferences” to open the settings menu, which happens to look pretty much just like the Basilisk II settings window.
  6. In the “Setup” tab, click on “Create…”. We need to make a totally blank disk image to serve as the virtual hard drive for our emulated computer. Choose your save location using the file navigation menu, then pick a size and name for this file (something descriptive like MacOS9 or OS9_System can help you keep track of what it is). For installing OS 9, again, somewhere around 500 or 1000 MB is going to be plenty. Hit “OK” and in a moment your new blank disk image will appear in the volumes list.
  7. Next select “Add” and navigate to the bootable disk image for your operating system. So, here we select and add the (locked) Mac OS 9.0.4 ISO file.
  8. For the “ROM File” field, click Browse and select your “Mac OS ROM” file. As long as the ROM file is in the same directory as the application and named “Mac OS ROM”, SheepShaver *will* read it by default, but it doesn’t hurt to specify again here (or, if desired, you could select other ROMs for testing/troubleshooting).
  9. The “Unix Root” is an option for SheepShaver to create a “shared” folder between your host desktop and your emulated environment. This could be useful later for shuttling individual files into your emulator (as opposed to packaging and mounting disk images), but is not strictly necessary to set up right now. If you do choose to set up a shared folder now, you should type out the full file path for the directory you want to use (the same sort of absolute path shown in the example after the Basilisk II steps):
  10. Keep the options on “Boot From: Any” and leave “Disable CD-ROM” unchecked. Choose the amount of RAM you want your virtual machine to have: 512 MB was plenty powerful for the time, but you can futz with this based on the requirements of the software you’re trying to run.
  11. For the “Audio/Video” tab, make sure you have Video Type: “Window” selected. The Refresh Rate can be “Dynamic” so long as you’re on a computer that’s come out within…like, the last 10 years. The width and height resolution settings are up to you, but you might want to keep them accurate to a legacy resolution like 640×480 for accurate rendering of software later. Make sure “Enable QuickDraw Acceleration” is selected and leave audio settings alone.
  12. Under the “Miscellaneous” tab, select “Enable JIT Compiler”, “Allow Emulated CPU to Idle” and “Ignore Illegal Memory Accesses”. You can set the Mouse Wheel Function to taste. If you’re interested in getting networking working inside your virtual machine, enter the word “slirp” in the “Ethernet Interface” field – otherwise, you can ignore this section.
  13. Hit “Save” to save these preferences and leave the settings window. You should be back at your blinking gray floppy. At this point, the only way to shut down the virtual machine and apply your new settings is to force-quit SheepShaver by pressing Control + Esc.
  14. Immediately double-click on the SheepShaver app again to relaunch. This time, you should boot into the Mac OS 9 installer CD! The desktop background should make it very clear you are booted into a CD installer:
  15. The “disk is unreadable by this Macintosh” error is to be expected: this is the blank virtual drive that you created, but haven’t formatted yet. Choose “Initialize” and then “Continue” (you’re not wiping any data, because the blank disk image doesn’t have any data in it yet!!). You should get an option to name your new drive in OS 9 – I generally like to name this the same thing that I named the disk image file on my host system, just to avoid confusion and make it clear (to myself) that this is just the same file/disk being represented in two different environments.
  16. If it hasn’t already opened automatically, double-click on the Mac OS 9 ISO drive and then the “Install Mac OS 9” program to start running the installer software.
  17. For the “Choose a disk for installation” step, make sure you have selected the empty (but now formatted) virtual drive you named.
  18. From there you should be able to just run the installer. This copies over and installs the Mac OS 9 system from the ISO to your virtual hard disk!
  19. When the program has successfully run, you can remove the ISO file from the mounted “Volumes” in SheepShaver’s preferences. Hit “Save” and then shut down the emulated machine to quit SheepShaver (hint: the “Shut Down” command used to be in the “Special” menu on the Mac OS desktop!!)
  20. Now, when you relaunch SheepShaver again, you should boot directly into your installed Mac OS 9 environment! As with Basilisk, you can mount floppy or CD-ROM images into your emulated desktop by opening SheepShaver’s Preferences and selecting the image files to Add them to your Volumes list (you’ll have to quit SheepShaver and relaunch any time you want to change/apply new settings, including loading new images!!)

QEMU

Recommended:

  • Mac OS 9.1 – Mac OSX (10.0-10.5)

QEMU is a generic, extremely flexible emulator/virtualization application. It can be theoretically configured to run all kinds of guest operating systems on all kinds of hosts, not just Macs; but the forum users at Emaculation have put a lot of work into assembling QEMU builds specifically capable of PowerPC MacOSX emulation, a processor/OS combination that has generally hounded and confounded the emulation community for some time.

As such, I’ll say up front that this is definitely the roughest emulator to use, both in terms of the emulated system not running as smoothly as you’ll see with the other listed programs (audio, for instance, won’t work unless you use an experimental build) and in that troubleshooting is going to be near impossible without a basic understanding of Bash scripting. (You *can* just follow this guide or the Emaculation tips exactly, using a basic text editor like TextEdit, but especially given that the forums are pumping out new builds on the regular, there’s no guarantee these will work/last very long.)

The good news is – you don’t need a separate ROM file for this one! Rather than living in a separate file, the necessary firmware is packaged right into the provided QEMU builds: QEMU uses an open-source replacement called OpenBIOS, which is what the “-L pc-bios” flag in the boot script below points at. The Emaculation forums provide a few approved builds depending on what exactly you’re trying to do, but unless you’re trying to do some emulated networking (not recommended if you’re just a beginner here), the first download link on that page is the one you should use. This will let you install any PowerPC OS from 9.1 up that you choose – for the purposes of this guide, I’m going to pretend I’m putting Mac OSX 10.1 “Puma” on a classic Bondi Blue iMac G3!

Installation:

  1. Download the precompiled build for your host system. For Mac systems, we’ll be using the first download in this forum post, although again, remember that you will not have audio with this build!! Feel free to explore some of the experimental versions once you have the process down.
  2. Unzip the QEMU folder and put your MacOSX install disc ISO into that same folder.
  3. Open the Disk Utility program. Hit Command+N (or go to File -> New Image -> Blank Image) to create a blank disk image for your virtual hard drive (we have now moved up far enough in time that Disk Utility can actually do all the proper formatting/partitioning here!). With the file browser, navigate into the QEMU download folder and name your file something descriptive, like MacOSPuma. You will need at least 1.2 GB to install OSX 10.1, so I’d recommend making the blank image at least 2 GB – note that Disk Utility makes you enter this number in MB! (so, 2000 MB). For “Format”, select “Mac OS Extended (Journaled)”. For “Encryption”, select “none”; for “Partitions”, select “Single partition – Apple Partition Map”; and for “Image Format”, select “read/write disk image”. It will take a minute to create the blank disk image, but you should wind up with a file called “MacOSPuma.dmg” in your QEMU folder.
  4. In the downloaded folder there should be a file called “qemu.command”. Open this file with a plain text editor – TextEdit will do.
  5. Much of the text here is good, but we need to make a few substitutions. Namely, we need to attach a virtual CD-ROM drive, make sure it reads our Mac OSX installer ISO, make sure the code reads our blank disk image, and set it so that QEMU attempts to boot from the ISO, not the blank hard disk image. First, change the flag
    -boot c

    to

    -boot d

    Change the

    -drive file=~/Mac-disks/9.2.img,format=raw,media=disk

    so that the path matches the file path of the blank .dmg disk image that you made. Make sure to change the file extension, and not to put any extra spaces into the code. Now, add another -drive flag to the code BEFORE the disk image drive flag. It should read something like:

    -drive file=~/QEMU/Mac_OS_Puma.iso,format=raw,media=cdrom

    , only substituting in the correct file path and name for your ISO. All told, the file should now look like this – note how all the code following “./qemu-system-ppc” is on a single line, even though the text wrapping in TextEdit will probably make it look otherwise:

    #!/bin/bash
    cd "$(dirname "$0")"
    ./qemu-system-ppc -L pc-bios -boot d -M mac99 -m 256 -prom-env 'auto-boot?=true' -prom-env 'boot-args=-v' -prom-env 'vga-ndrv?=true' -drive file=~/QEMU/Mac_OS_Puma.iso,format=raw,media=cdrom -drive file=~/QEMU/MacOSPuma.dmg,format=raw,media=disk -netdev user,id=network0 -device sungem,netdev=network0
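    # a quick gloss on those flags: -L points at the bundled firmware folder,
    # -M mac99 picks the PowerMac machine model, -m 256 gives the virtual machine
    # 256 MB of RAM, and the -prom-env options set Open Firmware variables
    # (auto-boot, verbose boot messages, and the video driver)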

    Save the “qemu.command” file.

  6. Now we need to make this .command file (which is basically just a tiny Bash script) executable so that your host operating system knows to run it like an application just by double-clicking. Open the Terminal app and run
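     # +x turns on the "execute" permission bit, so the OS will run the file as a program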
     $ chmod +x /path/to/qemu.command

    either typing out the full file path yourself, or just dragging and dropping the qemu.command file into Terminal to get the full path automatically.

  7. Double-click on the “qemu.command” file. If all has been set up correctly, after a minute you should boot into the Mac OSX Puma installer.
  8. Your blank .dmg file should show up as an available destination disk on which to install OSX! Once the full installer has run (takes a few minutes), the software will eventually try to reboot the machine, which at this point will just take you back into the ISO installer. Mash Command+Q to quit out of QEMU.
  9. Go back to your “qemu.command” file in TextEdit. Swap the
    -boot d

    flag BACK to

    -boot c

    , and delete the -drive flag for the ISO installer entirely, just leaving the .dmg file attached. Save the file. At this point, you should now be able to just double-click the “qemu.command” file and launch into OSX Puma, installed on your virtual hard drive! (It may take a minute – remember, this is not going to be nearly as quick/smooth as loading on original hardware, and you’ll have to go through a bunch of Apple’s “registration” nonsense.)

  10. You can always attach more disk images to the emulated environment by re-creating “-drive” entries with “media=cdrom” and the file path to your ISO/CDR in the “qemu.command” file and saving/relaunching QEMU.
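For example, to pop a software CD back into the virtual machine, you would slot an entry like this one (the ISO name here is hypothetical) back into the long qemu-system-ppc line, just before the .dmg entry:

    -drive file=~/QEMU/SomeSoftwareDisc.iso,format=raw,media=cdrom

Save the .command file, relaunch, and the disc should mount on your emulated OSX desktop.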