Upgrading Video Digitization Stations

In the primary MIAP lab we have four Mac Pro stations set up mainly for video digitization and capture. They get most heavily used during our two Video Preservation courses: Video Preservation I, which focuses on technical principles and practice of digitization from analog video sources, and Video Preservation II, which focuses more on vendor relations and guiding outsourced mass digitization projects, but by necessity involves a fair amount of digital video quality control/quality assurance as well. They get used for assorted projects in Collections Management, the “Talking Tech” workshops I’ve started leading myself, and the Cinema Studies department’s archive as well.

Over the course of 2016, the hardware on these four stations was really starting to show its age. These machines were originally bought and set up in 2012 – putting them in the last generation of the older “tower”-style silver Mac Pro desktops, before Apple radically shifted its hardware design to the “trash bin” style Mac Pros that you can buy today. The operating system hadn’t been updated in a while either: they were still running Mac OSX 10.10 (Yosemite), whose last full update came in August 2015 (with a few security updates still following, at least).

maxresdefault
This guy isn’t allowed in anymore, for instance.

These stations were stable – at least, in the sense that all the software we needed definitely worked, and they would get the job done of digitizing/capturing analog video. But the limitations of how quickly and efficiently they could do this work were more and more apparent. The amount of time it took to, say, create a bag out of 200 GB of uncompressed video, transcode derivative copies, run an rsync script to back video packages up to a local NAS unit, or move the files to/from external drives (a frequent case, as both Video Preservation classes usually partner with other cultural organizations in New York City who come to pick up their newly-digitized material via hard drive) was getting excruciating relative to newer systems, wasting class time and requiring a lot of coordination/planning of resources as ffmpeg or rsync chugged along for hours, or even overnight.
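For a sense of the kind of job that could tie up a station, a NAS backup is basically one long rsync run; here’s a rough sketch of the shape of that command, with made-up paths for illustration:

[cc lang="Bash"]
# copy a finished video package to the NAS, preserving timestamps and showing progress
# (both paths here are hypothetical – substitute your own capture RAID and NAS mount points)
$ rsync -rtv --progress /Volumes/MIAP_Projects/Package_001/ /Volumes/LabNAS/Backups/Package_001/
[/cc]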

So, I knew it was time to upgrade our stations. But how to go about it? There were two basic options:

1. Purchase brand-new “trash bin” Mac Pros to replace the older stations

pratttrashcan_macpro
http://rudypospisil.com/wordpress/wp-content/uploads/2013/10/prattTrashCan_macPro.jpg

2. Open up the innards of the old Mac Pros and swap in updated, more powerful components

Buying brand-new Windows stations was basically out, just given the way our classes have been taught, the software we work with, and my own personal knowledge/preference/ability to maintain hardware. And I was lucky that #1 was even an option at all – the considerable resources available at NYU allow for choices that I would not have many other places. But MIAP also has a lot of equipment needs, and I’d generally rather put larger portions of our budget toward harder-to-get analog video equipment and refurbishment than jump for splashy new hardware that we don’t actually need. So I drew up some thoughts on what I actually wanted to accomplish:

  • improved data transfer rate between desktops and external drives (the fastest connections available, at best, were the mid-2012 Mac Pro’s native FireWire 800 ports, and many times we were limited to USB 2.0)
  • improved application multi-tasking (allow for, say, a Blackmagic Media Express capture to run at the same time as the ffmpeg transcode of a previous capture)
  • improved single-application processing power (speed up transcoding, bag creation and validation, rsync transfer if possible)
  • update operating system to OSX 10.11 (El Capitan, a more secure and up-to-date release than Yosemite and MUCH more stable than the new 10.12 Sierra)
  • maintain software functionality with a few older programs, especially Final Cut 7 or equivalent native-DV capture software

Consulting with adjunct faculty, a few friends, and the good old internet, it became clear that a quick upgrade by way of just purchasing new Mac Pros would pose several issues. First, the Blackmagic Decklink Studio 2 capture cards we used for analog video digitization would not be compatible, requiring additional purchases of stand-alone Blackmagic analog-to-digital converter boxes on top of the new desktops just to maintain current workflows. Second, it is more difficult to cheaply upgrade or replace the storage inside the newer Mac Pros, again likely requiring the eventual purchase of stand-alone RAID storage units to keep up with the amount of uncompressed video being pumped out; the old Mac Pro towers, by contrast, have four internal drive slots that can be swapped in and out within minutes, with minimal expertise, and easily arranged into various internal RAID configurations.

In other words, I decided it was much cheaper and more efficient to keep the existing Mac Pro stations, which are extremely flexible and easy to upgrade, and via new components bring them more or less up to speed with what completely new Mac Pros could offer anyway. In addition to the four swappable storage slots, the old Mac Pro towers feature easy-to-replace RAM modules, and PCI expansion slots on the back that offer the option to add extra data buses (i.e. more USB, eSATA, or Thunderbolt ports). You can also update the CPU itself – but while adding a processor with more cores would in theory (if I understand the theory, which is also far from a 100% proposition) be the single biggest boost to processing speed, the Intel Quad-Core processors already in the old towers are no slouch (the default new models of the Mac Pro still ship with Quad-Cores), and a new CPU would be more expensive and difficult to replace than any of those other pieces. Again, it seemed more efficient, and safer given my limited history with building computer hardware, to incrementally upgrade all the other parts, see what we’re working with, and someday in the future step up the CPU if we really, desperately need to breathe more life into these machines.

So, for each of the four stations, here were the upgrades made (I’ve separated the general upgrade from the specific model/pricing I went with; for any of these you could pursue other brands/models/sellers as well):

  • (1) 120 GB solid-state drive (for operating system and applications)

OWC Mercury Extreme Pro 6G SSD: $77/unit
OWC Mount Pro Drive Sled (necessary to mount SSDs in old Mac Pros): $17/unit

  • (1) 1 TB hard drive (for general data storage – more on this later)

Western Digital Caviar Blue 1 TB Internal HDD: $50/unit

  • (1) PCI Express Expansion Card, w/ eSATA, USB 3.0 and USB 3.1 capability

CalDigit FASTA-6GU3 Plus: $161/unit

  • (4) 8 GB RAM modules, for a total of 32 GB

OWC 32.0 GB Upgrade Kit: $139/unit

IMG_2839.JPG
Swaaaaaaaaaaag

Summed up, that’s less than $500 per computer and less than $2000 for the whole lab, which is a pretty good price for (hopefully) speeding up our digitization workflow and keeping our Video Preservation courses functional for at least a couple more years.

The thinking: with all that RAM, multi-tasking applications shouldn’t be an issue, even with higher-resource applications like Final Cut 7, Blackmagic Media Express, ffmpeg, etc. With the OSX El Capitan operating system and all applications hosted on solid-state memory (the 120 GB SSD) rather than a hard drive, single applications should run much faster (as the drives don’t need to literally spin around to find application or system data). And with a new 1 TB hard drive added to each computer, the three non-OS drive slots are now all filled with 1 TB hard drives. I could have two of those configured in a RAID 0 stripe arrangement, to increase the read and write speed of user data (i.e. video captures) – the third drive can serve as general backup or as storage for non-video digitization projects, as needed.

IMG_2843.JPG
RAM for days
IMG_2854.JPG
*Oh what fun it is to ride in a one-120-GB-solid-state-drive open sled*

IMG_2855.JPG

IMG_2856.JPG

The expansion cards will now allow eSATA or USB 3.0-speed transfers to compatible external drives. The USB 3.1 function on the specific CalDigit cards I got won’t work unless I upgrade the operating system to 10.12 Sierra, which I don’t want to do just yet. That’s basically the one downside compared to the all-new Mac Pros, which would’ve offered Thunderbolt transfer speeds faster than USB 3.0 – but for now, USB 3.0 is A) still a drastic improvement over what we had before, B) probably the most common connection on the consumer external drives we see anyway, and C) only a temporary ceiling, since an inevitable operating system upgrade will “unlock” the USB 3.1 capability to keep up as USB 3.1 connections become more common on external drives.

IMG_2849.JPG
Uninstalled…
IMG_2852.JPG
…installed! Top row – followed by two slots for the Blackmagic Decklink Studio 2 input/output and the AMD Radeon graphics card input/output at the bottom.

Installing all these components was a breeze. Seriously! Even if you don’t know your way around the inside of a computer at all, the old Mac Pro towers were basically designed to be super customizable and easy to swap parts in and out of, and there are tons of clear, well-illustrated instructional videos available to follow.

[vimeo 139648427 w=640 h=360]

As I mentioned in a previous post about opening up computers, the main issue was grounding. Static discharge damaging the internal parts of your computer is always a risk when you go rooting around and touching components, and especially since the MIAP lab is carpeted I was a bit worried about accidentally frying a CPU with my shaky, novice hands. So I also picked up a $20 computer repair kit that included an anti-static wristband that I wore while removing the desktops from their station mounts, cleaning them out with compressed air, and swapping in the mounted SSDs, new HDDs, expansion cards, and RAM modules.

IMG_2841.JPG

With the hardware upgrades completed, it was time to move on to software and RAID configuration. Using a free program called DiskMaker X6, I had created a bootable El Capitan install disk on a USB stick (to save the time of having to download the installer to each of the four stations separately). Booting straight into this installer program (by plugging in the USB stick and holding down the Option key when turning on a Mac), I was able to quickly go through the process of installing OSX El Capitan on to the SSDs. For now that meant I could theoretically start up the desktop from either El Capitan (on the SSD) or Yosemite (still hosted on one of the HDDs) – but I wanted to wipe all the storage and start from scratch here.
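As an aside: if you’d rather not rely on a third-party tool, the El Capitan installer app itself ships with a createinstallmedia utility that can build the same kind of bootable USB stick from the command line. Here’s a rough sketch, assuming the installer app is sitting in /Applications and the USB stick is mounted as /Volumes/ElCapInstaller (adjust both for your own setup):

[cc lang="Bash"]
# erases the USB stick at /Volumes/ElCapInstaller and turns it into a bootable El Capitan installer
$ sudo /Applications/Install\ OS\ X\ El\ Capitan.app/Contents/Resources/createinstallmedia \
  --volume /Volumes/ElCapInstaller \
  --applicationpath /Applications/Install\ OS\ X\ El\ Capitan.app
[/cc]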

I accomplished that wiping and reformatting using Disk Utility, the built-in program for drive management included with OSX. Once I had backed up all important user data from all the hard drives, I completely reformatted all of them (sticking with the default Mac OS Extended (Journaled) formatting), including the brand-new 1 TB drives. So now each station had an operating system SSD running El Capitan and three blank 1 TB hard drives to play with. As mentioned earlier, I wanted to put two of those in a RAID 0 data stripe arrangement – a way of turning two separate drives into one logical “volume”. RAID 0 is a mildly dangerous arrangement in that failure of either one of those drives means total data loss; but it brings a significant performance boost in read/write speed (hopefully decreasing the likelihood of dropped frames during capture and cutting down time spent on fixity checks and bagging), while maintaining the full 2 TB of storage across the two drives (most RAID arrangements, focused more on data security and redundancy than performance, will leave you with a volume smaller than the combined capacity of the physical disks). And files are not meant to be stored long-term on these stations anyway: they are either returned to the original institution, backed up to the more secure, RAID 6-arranged NAS, or backed up to our department’s fleet of external drives – if not some combination of those options.
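For reference, the same reformatting can be done from the command line with diskutil; a minimal sketch, where the disk identifier and volume name are just examples (always check diskutil list first):

[cc lang="Bash"]
# reformat an entire disk as Mac OS Extended (Journaled) with a GPT partition map
# (disk2 and the volume name are placeholders – confirm the right identifier with `diskutil list`)
$ diskutil eraseDisk JHFS+ Station_1_Backup GPT disk2
[/cc]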

So it was at this point that I discovered that in the upgrade from Yosemite to El Capitan, Apple actually removed functionality from the Disk Utility application. The graphic interface for Disk Utility in Yosemite and earlier versions of OSX featured an option to easily customize RAID arrangements with your drives. In El Capitan (and, notably, El Capitan only – the feature has returned in Sierra), you’re only allowed to erase, reformat and partition drives.

jboddiskutilityyosemite-577aac645f9b58587592afb8

screen-shot-2017-03-02-at-11-07-37-am
Cool. Cool cool cool.

Which means to the Terminal we go. The command-line version of Disk Utility (invoked with the “diskutil” command) can still quickly create and format a RAID volume. First, I have to run a quick

[cc lang="Bash"]$ diskutil list[/cc]

…in order to see the file paths/names for the two physical disks that I wanted to combine to create one volume (previously named MIAP_Class_Projects and Station_X_Backup):

screen-shot-2017-03-02-at-11-08-24-am

In this case, I was working with /dev/disk1 and /dev/disk3. Once I had the correct disks identified, I could use the following command:

[cc lang="Bash"]$ diskutil appleRAID create stripe JHFS+ disk1 disk3[/cc]

Let’s break this down:

diskutil – command used to invoke the Disk Utility application

appleRAID – option to invoke the underlying function of Disk Utility that creates RAIDs – it’s still there, they just removed it from the graphical version of Disk Utility in El Capitan for some reason ¯\_(ツ)_/¯

create stripe – tells Disk Utility that I want to create a RAID 0 (striped) volume

JHFS+ – tells Disk Utility I want the striped volume to be formatted using the journaled HFS+ file system (the default Mac OS Extended (Journaled) formatting)

disk1 disk3 – lists the two drives, with the names taken from the previous command above, that I want to combine for this striped volume

Note: Be careful! When working with Disk Utility, especially in the command line, be sure you have all the data you want properly backed up. You can see how you could easily wipe/reformat disks by putting in the wrong disk number in the command.
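Once the create command finishes, you can confirm the new RAID set and give the resulting volume a friendlier name. A quick sketch – the disk identifier for the new RAID volume below is hypothetical, so check diskutil list again for the real one:

[cc lang="Bash"]
# confirm the striped RAID set exists and is online
$ diskutil appleRAID list
# rename the resulting volume (disk4 is just an example identifier for the new RAID volume)
$ diskutil rename disk4 MIAP_Projects_RAID
[/cc]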

End result: two physical disks combined to form a 2 TB volume, renamed to MIAP_Projects_RAID:

Screen Shot 2017-03-02 at 11.15.37 AM.png
The 2 TB RAID volume, visible in the GUI of Disk Utility – note the two physical drives are still listed in the “Internal” menu on the left, but without logical volumes nested beneath them, unlike the SSD with its El Capitan OS volume or the WDC hard drive with the “CS_Archive_Projects” volume.

Hooray! That’s about it. I did all of this with one station first, which gave me the chance to reinstall all the software, both graphical and CLI, that we generally use in our courses, and test our usual video capture workflows. As mentioned before, my primary concern was that older native-DV capture software like Final Cut 7 or Live Capture Plus would break, given that official OS support for those programs ended a long time ago, but as near as I can tell they still work in El Capitan. That’s no guarantee, but I’ll troubleshoot more when I get there (and keep around a bootable USB stick with OSX 10.9 Mavericks on it, just in case we have to revert to an older operating system to capture DV).

Screen Shot 2017-03-02 at 11.13.17 AM.png
In order to not eat up space on the 120 GB SSD operating system drive, I figured this was advisable.

I wish that I had thought to actually run some timed tests before I made these upgrades, so that I would have some hard evidence of the improved processing power and reduced time spent on transcoding, checksumming, etc. But I can say that having the operating system and applications hosted on solid-state memory, and the USB 3.0 transfer speeds to external drives, have certainly made a difference even to the unscientific eye. It’s basically like we’ve had brand-new desktops installed – for a fraction of the cost. So if you’re running a video digitization station, I highly recommend learning your way around the inside of a computer and its different components – whether you’re on PC or still working with the old Mac Pro towers, just swapping in some fresh innards could make a big difference and save the trouble and expense of all-new machines. I can’t speak to working with the new Mac Pros of course, but would be very interested to hear from anyone using those for digitization as to their flexibility – for instance, if I didn’t already have the old Mac Pros to work with, and had been completely starting from scratch, what would you recommend? Buying the new Pros, or hunting down some of the older desktop stations for the greater ability to customize them?

Windows Subsystem for Linux – What’s the Deal?

This past summer, Microsoft released its “Anniversary Update” for Windows 10. It included a lot of the business-as-usual sort of operating system updates: enhanced security, improved integration with mobile devices, updates to Microsoft’s “virtual assistant” Cortana (who is totally not named after a video game AI character who went rampant and is currently trying to destroy all biological life in the known universe, because what company would possibly tempt fate like that?)

halo-4-cortana-rampant
“NO I WILL NOT OPEN ANOTHER INCOGNITO WINDOW FOR YOU, FILTH”

But possibly the biggest under-the-radar change to Windows 10 was the introduction of Bash on Ubuntu on Windows. Microsoft partnered with Canonical, the company that develops the popular Linux operating system distribution Ubuntu, to create a full-fledged Linux/Ubuntu subsystem (essentially Ubuntu 14.04 LTS) inside of Windows 10. That’s like a turducken of operating systems.

foo_ck_turducken_1223
Which layer is the NT kernel, though?

What does that mean, practically speaking? For years, if you were interested in command-line control of your Windows computer, you could use Powershell or the Command Prompt – the latter being the same basic command-line system that Microsoft has been using since the pre-Windows days of MS-DOS. Contrast that with Unix-based systems like Mac OSX and Ubuntu, which by default use an input system called the Bash shell – the thing you see any time you open the application Terminal.

 

The Bash shell is very popular with developers and programmers. Why? A variety of reasons. It’s an open-source system versus Microsoft’s proprietary interface, for one. It has some enhanced security features to keep users from completely breaking their operating system with an errant command (if you’re a novice command-line user, that’s why you use the “sudo” command sometimes in Terminal but never in Command Prompt – Windows just assumes everyone using Command Prompt is a “super user” with access to root directories, whereas Mac OSX/Linux prefers to at least check that you still remember your administrative password before deleting your hard drive from practical existence). The Bash scripting language handles batch processing (working with a whole bunch of files at once), scheduling commands to be executed at future times, and other automated tasks a little more intuitively. And, finally, Unix systems have a lot more built-in utility tools that make software development and navigating file systems more elegant (to be clear, these utility applications are not technically part of the Bash shell – they are built into the Mac OSX/Linux operating system itself and accessed via the Bash shell).
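To make that “batch processing” point a little more concrete, here’s the sort of thing that’s trivially easy in Bash – a hypothetical loop that makes an H.264 access copy of every QuickTime file in a folder (the file pattern and encoding settings here are just illustrative):

[cc lang="Bash"]
# loop over every .mov file in the current directory and transcode an access copy of each
for f in *.mov; do
  ffmpeg -i "$f" -c:v libx264 -pix_fmt yuv420p -c:a aac "${f%.mov}_access.mp4"
done
[/cc]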

Bringing in a Linux subsystem and Bash shell to Windows is a pretty bold move to try and win back developers to Microsoft’s platform. There have been some attempts before to build Linux-like environments for Windows to port Mac/Linux software – Cygwin was probably the most notable – but no method I ever tried, at least, felt as intuitive to a Mac/Linux user as Bash on Ubuntu on Windows does.

cygwinsetup
what even are you

Considering the increasing attention on open-source software development and command-line implementation in the archival community, I was very curious as to whether Bash on Ubuntu on Windows could start bridging the divide between Mac and Windows systems in archives and libraries. The problem of incompatible software and the difference in command-line language between Terminal and Command Prompt isn’t insurmountable, but it’s not exactly convenient. What if we could get all users on the same page with the software they use AND how they use it – regardless of operating system???

OK. That’s still a pipe dream. I said earlier that the Windows Subsystem for Linux (yes, that’s what it’s technically called even though that sounds like the exact opposite of what it should be) was “full-fledged” – buuuuuut I kinda lied. Microsoft intends the WSL to be a platform for software development, not implementation. You’re supposed to use it to build your applications, but not necessarily to actually deploy them into a Windows-based workflow. To that end, there are some giant glaring holes compared to a pure Ubuntu installation: using Bash on Ubuntu on Windows, you can’t deploy any Linux software with a graphical user interface (GUI) (for example, the common built-in Linux text editing utility program gedit doesn’t work – but nano, which lets you text edit from within the Bash terminal window itself, does). It’s CLI or bust. Any web-based application is also a big no-no, so you’re not going to be able to sneakily run a Windows machine as a server using the Linux subsystem any time soon.

Edit: Oh and the other giant glaring thing I forgot to mention the first time around – there’s no external drive support yet. So the WSL can’t access removable media on USB or optical disc mounted on the Windows file system – only fixed drives. So disc imaging software, while it technically “works”, can only work with data already moved to your Windows system.

But with all those caveats in mind… who cares what Microsoft says is supposed to happen? What does it actually do? What works, and what doesn’t? I went through a laundry list of command-line tools that have been used or taught the past few years in our MIAP courses (primarily Video Preservation, Digital Preservation and Handling Complex Media), plus a few tools that I’ve personally found useful. First, I wanted to see if they installed at all – and if they did, I would try a couple of that program’s most basic commands, hardly anything in the way of flags or options. I wasn’t really trying to stress-test these applications, just see if they could indeed install and access the Windows file system in a manner familiar to Mac/Linux users.

bash
*Hello Bash my only friend / I’ve come to ‘cat’ with you again*

Before I start the run-down, a note on using Bash on Ubuntu on Windows yourself, if interested. Here are the instructions for installing and launching the Windows Subsystem for Linux – since the whole thing is technically still in beta, you’ll need to activate “developer” mode. Once installed and launched, ALL of these applications will only work through the Bash terminal window – you cannot access the Linux subsystem, and all the software installed thereon, from the traditional Windows Command Prompt. (It goes the other way too – you can’t launch your Windows applications from the Bash shell. This is all about accessing and working with the same files from your preferred command-line environment.) And once again, the actual Ubuntu version in this subsystem is 14.04 LTS – which is not the latest stable version of that operating system. So any software designed only to work with Ubuntu 16.04 or the very latest 16.10 isn’t going to work in the Windows subsystem.

Once you’re in a Bash terminal, you can access all your files stored within the Windows file system by navigating into the “/mnt/” directory:

[cc lang="Bash"]$ cd /mnt/[/cc]

You should see different letters within this directory according to how many drives you have mounted in your computer, and their assigned letters/paths. For instance, for many Windows users all your files will probably be contained within something like:

[cc lang="Bash"]/mnt/c/Users/your_user_name/Downloads[/cc] or

[cc lang="Bash"]/mnt/c/Users/your_user_name/Desktop[/cc] , etc. etc.

And one last caveat: dragging and dropping a file into the Bash terminal to quickly get the full file path doesn’t work. It will give you a Command Prompt file path (e.g. “C:\Users\username\Downloads\file.pdf”) that the Bash shell can’t read. You’re going to have to manually type out the full file path yourself (tabbing over to automatically fill in directory/file names does still work, at least).
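So, to spell that out with a hypothetical file: a path that Command Prompt understands has to be retyped in its /mnt/ form before anything on the Linux side can see it:

[cc lang="Bash"]
# Command Prompt form of the path:  C:\Users\username\Downloads\file.pdf
# the same file, as the Bash shell needs it typed:
$ ls -l /mnt/c/Users/username/Downloads/file.pdf
[/cc]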

Let’s get to it!

Programs That Install Via Apt-Get:

  • bagit-java
  • bagit-python
  • cdrdao
  • ClamAV
  • ddrescue (install w/package name “gddrescue”, execute w/ command “ddrescue”)
  • ffmpeg (but NOT ffplay)
  • git
  • imagemagick
  • md5deep
  • mediainfo
  • MKVToolNix
  • Python/Python3/pip
  • Ruby/RubyGems
  • rsync
  • tree

Installing via Ubuntu’s “apt-get” utility is by far the easiest and most desirable method of getting applications installed on your Linux subsystem. It’s a package manager that works the same way as Homebrew on Mac, for those used to that system: just execute

[cc lang="Bash"]$ sudo apt-get install nameofpackage [/cc]

and apt-get will install the desired program, including all necessary dependencies (any software libraries or other software utilities necessary to make the program run). As you can see, the WSL can handle a variety of useful applications: disk imaging (cdrdao, ddrescue), transcoders (ffmpeg, imagemagick), virus scanning (ClamAV), file system and metadata utilities (mediainfo, tree), hash/checksum generation (md5deep).
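Just to show the pattern end-to-end, here’s a hypothetical install-and-sanity-check session (the user name and file paths are made up):

[cc lang="Bash"]
$ sudo apt-get update
$ sudo apt-get install mediainfo md5deep tree
# quick check that the tools run and can see files on the Windows side
$ mediainfo /mnt/c/Users/username/Videos/capture.mov
$ md5deep -r /mnt/c/Users/username/Videos/ > /mnt/c/Users/username/Desktop/checksums.md5
[/cc]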

mediainfo
Windows 10!

You can also get distributions of programming languages like Python and Ruby and use their own package managers (pip, RubyGems) to install further packages/libraries/programs. I tried this out with Python by installing bagit-python (my preferred flavor of BagIt – see this previous post for the difference between bagit-python and the bagit-java program you get by just running “apt-get install bagit”), and with Ruby by installing the Nokogiri library and running through this little Ruby exercise by Ashley Blewer. (I’d tried it before on Mac OSX, but guess what – it works on Windows through the WSL too!)
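For the curious, the bagit-python route looked roughly like this – the target directory is just an example:

[cc lang="Bash"]
$ sudo apt-get install python-pip
$ sudo pip install bagit
# bag a directory in place (creates the data/ folder, manifests, bagit.txt, etc.)
$ bagit.py /mnt/c/Users/username/Desktop/test_transfer
[/cc]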

A couple things to note: one, if you’re trying to install the Ubuntu version of ddrescue, there are, confusingly, a couple of different packages with nearly identical names that serve the same basic purpose. There’s a nice little rundown of how that happened, and how to make sure you’re installing and executing exactly the program you want, on the Ubuntu forums.

Also, while ffmpeg’s transcoding and metadata-gathering features (ffprobe) work fine, its built-in media playback command (ffplay) will not, because of the aforementioned issue with GUIs (it has to do with X11, the window system that Unix systems use for graphical display, but never mind that for now). Actually, it sort of depends on how you define “work”, because while ffplay won’t properly play back video, it will generate some fucking awesome text art for you in the Bash terminal:

 

Programs That Require More Complicated Installation:

  • bulk_extractor (requires legacy JDK)
  • exiftool
  • Fslint
  • mediaconch
  • The Sleuth Kit tools

These applications can’t be installed via an apt-get package, but you can still get them running with a little extra work, thanks to other Linux features such as dpkg. Dpkg is another package management program – this one comes from Debian, a Linux operating system of which Ubuntu is a direct (more user-friendly) derivative. You can use dpkg to install Debian (.deb) packages (like the MediaConch CLI), although take note that unlike apt-get, dpkg does not automatically install dependencies – so you might need to go out and find other libraries/packages to install via apt-get or dpkg before your desired program actually starts working (for MediaConch, for instance, you should just apt-get install mediainfo first to make sure you have the libmediainfo library already in place).
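As a rough sketch of what that looks like in practice – the .deb file name below is just a placeholder for whatever version you actually download:

[cc lang="Bash"]
# grab the dependency first, then install the downloaded .deb package
$ sudo apt-get install mediainfo
$ sudo dpkg -i mediaconch_XXXX_amd64.deb
# if dpkg complains about anything missing, this pulls in the unresolved dependencies
$ sudo apt-get install -f
[/cc]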

The WSL also has the standard build utilities of full Linux distributions (autoconf, automake, make, and so on), so you can use those to get packages like The Sleuth Kit (a bunch of digital forensics tools) or Fslint (a duplicate file finder) running. The best solution is to follow whatever Linux installation documentation there is for each of these programs – if you have questions about troubleshooting specific programs, let me know and I’ll try to walk you through my process.
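The details differ per program, but building from source generally follows the same configure/make pattern; a generic sketch (the package names after build-essential are typical extras, not a guaranteed list for any one tool):

[cc lang="Bash"]
# compiler toolchain and the usual autotools helpers
$ sudo apt-get install build-essential autoconf automake libtool
# then, from inside the unpacked source directory of whatever you're building:
$ ./configure
$ make
$ sudo make install
[/cc]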

fslint

 

Programs That Don’t Work/Install…Yet:

  • Archivematica
  • Guymager
  • vrecord

I had no expectation that these programs would work given the stated GUI and web-based limitations of the WSL, but this is just to confirm that, as far as I can tell, there’s no way to get them running. Guymager has the obvious GUI/X11 issue (plus the inability to recognize external devices anyway, and the general dysfunction of the /dev/ directory). The vrecord team hasn’t gotten it running on Linux yet, and the WSL would hit the GUI issue even once they do release a Linux version. And web applications definitely aren’t my strong suit, but in the long process of attempting an Archivematica installation, the WSL seemed to have separate issues with Apache, uWSGI and NGINX. That’s a lot of troubleshooting to likely no end, so best to probably leave that one aside.

That’s about all for now – I’m curious if anyone else has been testing the WSL, or has any thoughts about its possible usefulness in bridging compatibility concerns. Is there any reason we shouldn’t just be teaching everyone Bash commands now??

Update (10/20): So the very day that I post this, Microsoft released a pretty major update to the WSL, with two major effects: 1) new installations of WSL will now be Ubuntu 16.04 (Xenial), though existing users such as myself will not automatically upgrade from 14.04; and 2) the Windows and Linux command-line interfaces now have cross-compatibility, so you can launch Windows applications from the Bash terminal and Linux applications from Command Prompt or Powershell. Combine that with the comment below from Euan with directions to actually launch Linux applications with GUIs, and there’s a whole slew of options to continue exploring here. Look for further posts in the future! This subsystem is clearly way more powerful than Microsoft initially wanted to let on.

Dual-Boot a Windows Machine

It is an inconvenient truth that the MIAP program is spread across two separate buildings along Broadway. They’re only about five minutes apart, and the vast majority of the time this presents no problems for students or staff, but it does mean that my office and one of our primary lab spaces are in geographically separate locations. Good disaster planning, troublesome for day-to-day operations.

The Digital Forensics Lab (alternately referred to as the Old Media Lab or the Dead Media Lab, largely depending on my current level of frustration or endearment towards the equipment contained within it) is where we house our computing equipment for the excavation and exploration of born-digital archival content: A/V files created and contained on hard drive, CD, floppy disk, zip disk, etc. We have both contemporary and legacy systems to cover decades of potential media, primarily Apple hardware (stretching back to a Macintosh SE running OS 7), but also a couple of powerful modern Windows machines set up with virtual machines and emulators to handle Microsoft operating systems back to Windows 3.1 and MS-DOS.

Having to schedule planned visits over from my office to the main Tisch building in order to test, update, or otherwise work with any of this equipment is mildly irksome. That’s why my office Mac is chock full of emulators and other forensic software that I hardly use on any kind of regular basis – when I get a request from a class for a new tool to be installed in the Digital Forensics Lab, it’s much easier to familiarize myself with the setup process right where I am before working with the legacy equipment; and I’m just point-blank unlikely to trek over to the other building for no other reason than to test out new software that I’ve just read about or otherwise think might be useful for our courses.

sleepy-office-worker-at-desk-with-multiple-coffees
#ProtestantWorkEthic

This is a long-winded way of justifying why the department purchased, at my request, a new Windows machine that I will be able to use as a testing ground for Windows-based software and workflows (I had previously installed a Windows 7 virtual machine on my Mac to try to get around some of this, but the sluggish processing of a VM on a desktop not explicitly set up for that purpose was vaguely intolerable). The first thing I was quite excited to do with this new hardware was to set up a dual-boot operating system: that is, make it so that on starting up the computer I would have the choice of using either Windows 7 or Windows 10. That’s the main thing I’m going to talk about today.

IMG_2329
Swag

Pretty much all of our Windows computers in the archive and MIAP program still run Windows 7 Pro, for a variety of reasons – Windows 8 was geared so heavily towards improved communication with and features for mobile devices that it was hardly worth the cost of upgrading an entire department, and Windows 10 is still not even a year old, which gives me pause in terms of the stability and compatibility of software that we rely on from Windows 7. So I needed Windows 7 in order to test how new programs work with our current systems. However, as it increases in market share and developers begin to migrate over, I’m increasingly intrigued by Windows 10, to the point that I also wanted access to it in order to test out the direction our department might go in the future. In particular I very much wanted to try out the new Windows Subsystem for Linux, available in the Windows 10 Anniversary Update coming this summer – a feature that will in theory make Linux utilities and local files accessible to the Windows user via a Bash shell (the command-line interface already seen on Mac and Ubuntu setups). Depending how extensive the compatibility gets, that could smooth over some of the kinks we have getting all our students (on different operating systems) on the same page in our Digital Literacy and Digital Preservation courses. But that is a more complicated topic for another day.

When my new Windows machine arrived, it came with a warning right on the box that even though the computer came pre-installed with Windows 7 and licenses/installation discs for both 7 and Windows 10,

You may only use one version of the Windows software at a time. Switching versions will require you to uninstall one version and install the other version.

1d8acd8c6e8e337ce31bef84a8636491

This statement is only true in the broadest sense – that is, if you have no sense of partitioning, a process by which you can essentially separate your hard drive into distinct, discrete sections. The computer can basically treat separate partitions as separate drives, allowing you to format the different partitions with entirely separate file systems, or, as we will see here, install completely different operating systems.

Now, as it happens, it also turned out to be semi-true for my specific setup, but only temporarily, and because of some kinks specific to the manufacturer who provided this desktop (hi, HP!). I’ll explain more in a minute, but right now would be a good point to note that I was working with a totally clean machine, and therefore endangering no personal files in this whole partitioning/installation process. If you also want to set up some kind of dual-boot partition, please please please make sure all of your files are backed up elsewhere first. You never know when you will, in fact, have to perform a clean install and completely wipe your hard drive just to get back to square one.

1a0a18d74db871e6358d7526b271c0e749d9cedb8afd2411816625802370c924
“Arnim Zola sez: back up your files, kids!”

So, as the label said, booting up the computer right out of the box, I got a clean Windows 7 setup. The first step was to make a new blank partition on the hard drive, onto which I could install the Windows 10 operating system files. In order to do this, we run the Windows Disk Management utility (you can find it by just hitting the Windows Start button and typing “disk management” into the search bar):

start

Once the Disk Management window popped up, I could see the 1 TB hard drive installed inside the computer (labelled “Disk 0”), as well as all the partitions (also called “volumes”) already on that drive. Some small partitions containing system and recovery files (from which the computer could boot into at least some very basic functionality even if the Windows operating system were to corrupt or fail) were present, but mostly (~900 GB) the drive is dedicated to the main C: volume, which contains all the Windows 7 operating files, program files, personal files if there were any, etc. By right-clicking on this main partition and selecting “Shrink Volume,” I could set aside some of that space for a new partition, onto which we will install the Windows 10 OS. (Note: all illustrative photos were gathered after the fact, so some numbers aren’t going to line up exactly here, but the process is the same.)

hesx3

If you wanted to dual-boot two operating systems that use completely incompatible file systems – for instance, Mac and Windows – you would have to set aside space for not only the operating system’s files, but also all of the storage you would want to dedicate to software, file storage, etc. However, Windows 7 and 10 both use the NTFS file system – meaning Windows 10 can easily read and work with files that have been created on or are stored in a Windows 7 environment. So in setting up this new partition I only technically had to create space for the Windows 10 operating system files, which run about 25 GB total. In practice I wanted to leave some extra space, just in case some software comes along that can only be installed on the Windows 10 partition, so I went ahead and doubled that number to 50 GB (since Disk Management works in MB, we enter “50000” into the amount of space to shrink from the C: volume).

shrink_volume

Disk Management runs for a minute and then a new Blank Partition appears on Disk 0. Perfect! I pop in the Windows 10 installation disc that came with the computer and restart. In my case, the hardware automatically knew to boot up from the installation disc (rather than the Windows 7 OS on the hard drive), but it’s possible others would have to reset the boot order to go from the CD/DVD drive first, rather than the installed hard drive (this involves the computer’s BIOS or UEFI firmware interface – more on that in a minute – but for now if it gives you problems, there’s plenty of guides out there on the Googles).

Following the instructions for the first few parts of the Windows 10 installer is straightforward (entering a user name and password, name for the computer, suchlike), but I ran into a problem when finally given the option to select the partition on to which I wanted to install Windows 10. I could see the blank, unformatted 50 GB partition I had created, right there, but in trying to select it, I was given this warning message:

Windows cannot be installed to this disk. The selected disk is of the GPT partition style.

Humph. In fact I could not select ANY of the partitions on the disk, so even if I had wanted to do a clean install of Windows 10 on to the main partition where Windows 7 now lived, I couldn’t have done that either. What gives, internet?

So for many many many years (in computer terms, anyway – computer years are probably at least equivalent to dog years), PCs came installed with a firmware interface called the BIOS – Basic Input/Output System. In order to install or reinstall operating system software, you need a way to send very basic commands to the hard drive. The BIOS was able to do this because it lived on the PC’s motherboard, rather than on the hard drive – as long as your BIOS was intact, your computer would have at least some very basic functionality, even if your operating system corrupted or your hard drive had a mechanical failure. With the BIOS you could reformat your hard drive, select whether you booted the operating system from the hard drive or an external source (e.g. floppy drive or CD drive), etc.

header
Or rule a dystopian underwater society! …wait

In the few seconds when you first powered on a PC, the BIOS would look to the very first section of a hard drive, which (if already formatted) would contain something called a Master Boot Record, a table that contains information about the partitions present on that hard drive: how many partitions are present, how large each of them are, what file system was present on each, which one(s) contained bootable operating system software, which partition to boot from first (if multiple partitions had a bootable OS).

windows-cannot-be-installed-to-this-disk
You probably saw something like this screen by accident once when your cat walked across your keyboard right as you started up the computer.

Here’s the thing: because of the limitations of the time, the BIOS and MBR partition style can only handle a total of four partitions on any one drive, and can only boot from a partition if it is less than about 2.2 TB in size. For a long time, that was plenty of space and functionality to work with, but with rapid advancements in the storage size of hard drives and the processing power of motherboards, the BIOS and MBR partitioning became increasingly severe and arbitrary roadblocks. So from the late ’90s through the mid-’00s, an international consortium developed a more advanced firmware interface, called UEFI (Unified Extensible Firmware Interface), that employs a new partition style, GPT (GUID Partition Table). With GPT, there’s theoretically no limit to the number of partitions on a drive, and UEFI can boot from partitions as large as 9.4 ZB (yes, that’s zettabytes). For comparison’s sake, 1 ZB is about equivalent to 36,000 years of 1080p high-definition video. So we’re probably set for motherboard firmware and partition styles for a while.

n2cnt4
We’re expected to hit about 40 zettabytes of known data in 2020. Like, total. In the world. Our UEFI motherboards are good for now.

UEFI cannot read MBR partitions as is, though it has a legacy mode that can be enabled to restrict its own functionality to that of the BIOS, and thereby read MBR. And if the UEFI motherboard is set to boot only from the legacy BIOS, it cannot understand or work with GPT partitions. Follow?

So GETTING BACK TO WHAT WE WERE ACTUALLY DOING… the reason I could not install a new, Windows 10-bootable partition onto my drive was that the UEFI motherboard in my computer had booted in legacy BIOS mode – for some reason.

jdhvc
Me.

Honestly, I’m not sure why this is. Obviously this was not a clean hard drive when I received it – someone at HP had already installed Windows 7 onto this GPT-partitioned hard drive, which would’ve required the motherboard to be in UEFI boot mode. So why did it arrive with legacy BIOS boot mode not only enabled, but set first in the preferential boot order? My only possible answer is that after installing Windows 7, they went back in and set the firmware to legacy BIOS boot mode in order to improve compatibility with the Windows 7 OS – which was developed and released back when BIOS was still the default for new equipment.

This was a quick fix – restart the computer, follow the brief on-screen instructions to enter the BIOS/firmware settings (usually by pressing the ESC key, though it can vary with your setup), and navigate through those settings to re-enable UEFI boot mode (I also left legacy BIOS boot enabled, though lower in the boot order, for the above-stated reasoning about compatibility with Windows 7 – so now, theoretically, my computer can start up from either MBR or GPT drives/disks with no problem).

Phew. Are you still with me after all this? As a reward, here’s a vine of LeBron James blocking Andre Iguodala to seal an NBA championship, because that is now you owning computer history and functionality.

https://vine.co/v/5BuzmV0Xw5b

From this point on, we can just pop the Windows 10 installation disc back in and follow the instructions like we did before. I can now select the unformatted 50 GB partition on which to install Windows 10 – and the installation wizard basically runs itself. After a lot of practical username-and-password setup nonsense, when I start up my computer I now get this screen:

boot-screen-640x480

And I can just choose whether to enter the Windows 7 or 10 OS. Simple as that. I’ll go more into some of what this setup allows me to do (particularly the Windows Subsystem for Linux) another day, as this post has gone on waaaayy too long. Happy summer, everyone!