About Time: Sifting Through Analog Video Terminology

In case it’s ever unclear, writing these blog posts is as much for me as it is for you, dear reader. Technical audiovisual and digital concepts are hard to wrap your head around, and for a myriad of reasons technical writing is frequently no help. Writing things out, as clearly and simply as I can, is as much a way of self-checking that I actually understand what’s going on as it is a primer for others.

And in case this intro is unclear, this is a warning to cover my ass as I dive into a topic that still trips me up every time I try to walk through it out loud. I’ll provide references as I can at the end of the piece, but listen to me at your own risk.

When I first started learning about preserving analog video, *most* of what I was reading/talking about made a certain amount of sense, which is to say it at least translated into some direct sensory/observable phenomenon – luminance values made images brighter or darker, mapping audio channels clearly changed how and what I was hearing, using component video rather than composite meant mucking around with a bunch more cables. But when we talked about analog signal workflows, there always seemed to be three big components to keep track of: video (what I was seeing), audio (what I was hearing) and then this nebulous goddamn thing called “sync” that I didn’t understand at all but seemed very very important.

The same way I feel when anyone tells me “umami” is a thing WHAT ARE YOU

I do not know who to blame for the morass of analog video signal terms and concepts that all have to do in some form with “time” and/or “synchronization”. Most likely, it’s just an issue at the very heart of the technology, which is such a marvel of mechanical and mathematical precision that even as OLED screens become the consumer norm I’m going to be right here crowing about how incredible it is that CRTs ever are/were a thing. What I do know is that between the glut of almost-but-not-quite-the-same words and devices that do almost-but-not-quite-the-same-thing-except-when-they’re-combined-and-then-they-DO-do-the-same-thing, there’s a real hindrance to understanding how to put together or troubleshoot an analog video digitization setup. Much of both the equipment and the vocabulary that we’re working with came from the needs of production, and that has led to some redundancies that don’t always make sense coming from the angle of either preservation or casual use.

I once digitized an audio cassette from the ’70s on which an audio tech was testing levels (…I hope?) by just repeating “time time time time” about 100 times in a row and ever since that day I’ve feared that his unleashed soul now possesses me. Anyway.

So I’m going to offer a guide here to some concepts and equipment that I’ve frequently seen either come up together or understandably get confused for each other. That includes in particular:

  • sync and sync generators
  • time base correction
  • genlock
  • timecode

…though other terms will inevitably arise as we go along. I’m going to assume a certain base level of knowledge of the characteristics and function of analog video signal – that is, I will write assuming the reader has been introduced to electron guns, lines, fields, frames, interlacing, and the basics of reading a classic/default analog video waveform. But I will try not to ever go right into the deep end.

Sync/Sync Generators

I know I just said I would assume some analog video knowledge, but let’s take a moment to contemplate the basics, because it helps me wrap my head around sync generators and why they can be necessary.

Consider the NTSC analog video standard (developed in the United States and employed in North, Central and parts of South America). According to NTSC, every frame of video has two fields, each consisting of 262.5 scan lines – so that’s 525 scan lines per frame. There are also 29.97 frames of video per second.

That means the electron gun in a television monitor has to fire off 15734.25 lines of video PER SECOND – and that budget includes not just actually tracing the lines themselves, but also the time it takes to reset both horizontally (to start the next line) and vertically (to start the next field).
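
(If you want to check my math, here it is as a couple of quick one-liners using bc, the basic calculator available in any macOS or Linux terminal – noting that 29.97 is itself a rounding of the true NTSC frame rate, 30000/1001:)

[cc lang="Bash"]
# 525 lines per frame at (roughly) 29.97 frames per second:
$ echo "525 * 29.97" | bc
15734.25
# and using the exact NTSC frame rate of 30000/1001 fps:
$ echo "scale=4; 525 * 30000 / 1001" | bc
15734.2657
[/cc]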

Now throw a recording device into the mix. At the same time that a camera is creating that signal and a monitor is playing it back, a VTR/VCR has to receive that signal and translate it into a magnetic field stored on a constantly-moving stretch of videotape via an insanely quick-spinning metal drum.

Even in the most basic of analog video signal flows, there are multiple devices (monitor, playback/recording device, camera, etc.) that need to have an extremely precise sense of timing. The recording device, for instance, needs to have a very consistent internal sense of how long a second is, in order for all its tiny little metal and plastic pieces to spin and roll and pulse in sync (for nearly a century, electronic devices, either analog OR digital, have used tiny pieces of vibrating quartz crystal to keep track of time – which is just a whole other insane tangent). But, in order to achieve the highest quality signal flow, it also needs to have the exact same sense of how long a second is as the camera and the monitor that it’s also hooked up to.

When you’re dealing with physical pieces of equipment, possibly manufactured by completely different companies at completely different times from completely different materials, that’s difficult to achieve. Enter sync signals and sync generators.

Stay with me here

Sync generators essentially serve as a drumbeat for an analog video system, pumping out a trusted reference signal that all other devices in the signal chain can use to drive their work instead of their own internal sense of timing. There are two kinds of sync signals/pulses that sync generators have historically output:

Drive pulses were almost exclusively used to trigger certain circuits in tube cameras, and were never part of any broadcast/recorded video signal. So you’re almost certainly never going to need to use one in archival digitization work, but just in case you ever come across a sync generator with V. Drive and H. Drive outputs (vertical drive and horizontal drive pulses), that’s what those are for.

Blanking pulses cause the electron gun on a camera or monitor to go into its “blanking” period – which is the point at which the electron gun briefly shuts off and retraces back to the beginning of a new line (horizontal blanking) or of a new field (vertical blanking). These pulses are a part of every broadcast/recorded video signal, and they must be extremely consistent to maintain the whole 525-scan-lines-29.97-times-per-second deal.

Since blanking pulses make up the vast majority of modern sync signals, you may also just see these referred to as “sync pulses”.

So the goal of sync generators is to output a constant video signal with precise blanking pulses that are trusted to be exactly where (or to be more accurate, when) they should be. Blanking pulses are contained in the “inactive” part of a video signal, meaning they are never meant to actually be visualized as image on a video camera or monitor (unless for troubleshooting purposes) – so it literally does not matter what the “active” part of a sync generator’s video signal is. It could just be field after field, frame after frame of black (often labeled “black burst”). It could be some kind of test pattern – so you will often see test pattern generators that double as or are used as sync generators, even though these are separate functions/purposes, and thus could also be entirely separate devices in your setup.

Color bars – not *just* for geeky yet completely hard-to-use desktop wallpapers

(To belabor the point, test patterns, contained in the “active” part of the video signal, can be used to check the system, but they do not drive the system in the same way that the blanking pulses in the “inactive” part of a signal do. IMHO, this is an extremely misleading definition of “active” and “inactive”, since both parts are clearly serving crucial roles to the signal, and what is meant is just “seen” or “unseen”.)

Here’s the kicker – strictly speaking, sync generators aren’t absolutely necessary to a digitization station. It’s entirely possible that you’ll hook up all your components and all the individual pieces of equipment will work together acceptably – their sense of where and when blanking pulses should fall might already be the same, or close enough to be negligible.

But maybe you’re seeing some kind of wobbling or inconsistent image. That *could* be a sync issue, with one of your devices (the monitor, for example) losing its sense of proper timing, and might be resolvable by making sure all devices are working off the same, trusted sync generator.

You’ll see inputs for sync signal labeled in all sorts of inconsistent ways on analog video devices: “Sync In”, “Ext Sync”, “Ext Ref”, “Ref In”, etc. As far as I’m aware, these all mean the same thing, and references in manuals or labels to “sync signal” or “reference signal” should be treated interchangeably.

Time Base Correction

Sync generators can help line up devices with inconsistent timing in a system as a video signal is passed from one to another. Time Base Correctors (TBCs) perform an extremely similar but ever-so-confusingly-different task. TBCs can take an input video signal, strip out inconsistent sync/blanking pulses, and replace them entirely with new, steady ones.

This is largely necessary when dealing with pre-recorded signals. Consider a perfectly set up, synced recording system: using a CRT for the operator to monitor the image, a video camera passed a video signal along to a VTR, which recorded that signal on to magnetic tape. At the time, a sync generator made sure all these devices worked together seamlessly. But now, playing back that tape for digitization, we’ve introduced the vagaries of magnetic tape’s physical/chemical makeup into the mix. Perhaps it’s been years and the tape has stretched or bent, the metallic particles have expanded or compressed – not enough to prevent playback, but enough to muck with the sync pulses that rely on precision down to millionths of a second. As described with sync loss above, this usually manifests in the image looking wobbly, improperly framed or distorted on a monitor, etc.

https://bavc.github.io/avaa/artifacts/time_base_error.html

TBCs can either use their own internal sense of time to replace/correct the timing of the sync pulses on the recorded video signal – or they can use the drumbeat of a sync generator as a reference, to ensure consistency within a whole system of devices.

(Addendum): A point I’m still not totally clear on is the separation (or lack thereof) between time base correction and frame synchronization. According to a video explainer I found, a stand-alone frame synchronizer could stabilize the vertical blanking pulses in a signal only, resulting in a solid image at the moment of transition from one frame to another (that is, the active video image remains properly centered vertically within a monitor), but did nothing for horizontal sync pulses, potentially resulting in line-by-line inconsistencies.

So time base correction would appear to incorporate frame synchronization, while adding the extra stabilization of consistent horizontal sync/blanking pulses.

While, as I said above, you can very often get away with digitizing video without a sync generator, TBCs are generally much more critical. It depends on what analog video format you’re working with exactly, but whereas the nature of analog devices meant there was ever so slight leeway to deal with those millionth-of-a-second inconsistencies (say, to display a signal on a CRT), the precise on-or-off, one-or-zero nature of digital signals means analog-to-digital converters usually need a very very steady signal input to do their job properly.

You may however not need an external TBC unit. Many video playback decks had them built in, though of varying quality and performance. If you can, check the manual for your model(s) to see if it has an internal TBC, and if so, if it’s possible to adjust or even turn it off if you have a more trustworthy external unit.

Professional-quality S-VHS deck with built-in TBC controls

Technically there are actually three kinds of TBCs: line, full field and full frame. Line TBCs can only sample, store and correct errors/blanking pulses for a few lines of video at a time. Full field TBCs can handle, as the name implies, a full 262.5 (NTSC) lines of video at a time, and full frame TBCs can take care of a whole frame’s worth (525, NTSC) of lines at a time. “Modern”, late-period analog TBCs are pretty much always going to be full frame, or even capable of multiple frames’ worth of correction at a time (this is good for avoiding delay and sync issues in the system if working without a sync generator) – so the distinction will likely only come into play with older TBC units.

And here’s one last thing that I found confusing about TBCs given the particular equipment I was trained on: this process of time base correction, and TBCs themselves, has nothing to do with adjusting the qualities of the active image itself. Brightness, saturation, hue – these visible characteristics of video image are adjusted using a device called a processing amplifier or proc amp.

Because the process of replacing or adjusting sync pulses is a natural moment in a signal flow to ALSO adjust active video signal levels (may as well do all your mucking at once, to limit troubleshooting if something’s going wrong), many external TBC units also contain proc amps, thus time base correction and video adjustments are made on the same device, such as the DPS TBC unit above. But there are two different parts/circuit boards of the unit that are doing these two tasks, and they can be housed in completely separate devices as well (i.e. you may have a proc amp that does no time base correction, or a TBC that offers no signal level adjustments).

Genlock

“Genlock” is a phrase and label that you may see on various devices like TBCs, proc amps, special effects generators and more – often instead of or in the same general place you would see “Sync” or “Ref” inputs. What’s the deal here?

This term really grew out of the production world and was necessary for cases where editors or broadcasters were trying to mix two or more input video signals into one output. When mixing various signals, it was again good/recommended practice to choose one as the timing reference – all the other input signals, output, and special effects created (fades, added titles, wipes, etc. etc.) would be “locked” on to the timing of the chosen genlock input (which might be either a reference signal from a sync generator, or one of the actually-used video inputs). This prevented awkward “jumps” or other visual errors when switching between or mixing the multiple signals.

In the context of a more straightforward digitization workflow, where the goal is not to mix multiple signals together but just pass one steadily all the way through playback and time base correction to monitoring and digitization, “genlock” can be treated as essentially interchangeable with the sync signal we discussed in the “Sync/Sync Generators” section above. If the device you’re using has a genlock input, and you’re employing a sync generator to provide an external timing reference for your system, this is where you would connect the two.

Timecode

The signals and devices that I’ve been describing have all been about driving machinery – time base is about coordinating mechanical and electrical components that function down at the level of millionths of a second.

Timecode, on the other hand, is for people. It’s about identifying frames of video for the purpose of editing, recording or otherwise manipulating content. Unlike film, where if absolutely need be a human could just look at and individually count frames to find and edit images on a reel, magnetic tape provides no external, human-readable sense of what image or content is located where. Timecode provides that. SMPTE timecode, probably the most commonly used standard, identifies frames according to a clock-based system formatted “00:00:00:00”, which translates to “Hours:Minutes:Seconds:Frames”.

Timecode could be read, generated, and recorded by most playback/recording/editing devices (cameras or VTRs), but there were also stand-alone timecode reader/generator boxes created (such as in the photo above) for consistency and stability’s sake across multiple recording devices in a system.

There have been multiple systems of recording and deploying timecode throughout the history of video, depending on the format in question and other concerns. Timecode was sometimes recorded as its own signal/track on a videotape, entirely separate from the video signal, in the same location as one might record an audio track. This system was called Linear Timecode (LTC). The problem with LTC was that, as with audio tracks, the tape had to be in constant motion for the timecode track to be read (just as you can’t keep hearing an audio signal when you pause a tape, even though you *can* potentially keep seeing one frame of image when a video signal is paused).

Vertical Interval Timecode (VITC) fixed this (and had the added benefit of freeing up more physical space on the tape itself) by incorporating timecode into video signal itself, in a portion of the inactive, blanking part of the signal. This allowed systems to read/identify frame numbers even when the tape was paused.

Other systems like CTL timecode (control track) and BITC (timecode burnt-in to the video image) were also developed, but we don’t need to go too far into that here.

As long as we’re at it, however, I’d also like to quickly clarify two more terms: drop-frame and non-drop-frame timecode. These came about from a particular problem with the NTSC video standard (as far as I’m aware, PAL and SECAM do not suffer the same, but please someone feel free to correct me). Since NTSC video plays back at the frustratingly non-round number of 29.97 frames per second (a number arrived at for mathematical reasons beyond my personal comprehension that have something to do with how the signal carries color information), the timecode identifying frame numbers will eventually drift away from “actual” clock time as perceived/used by humans. An “hour of timecode”, which by necessity must count upward at a whole-number 30 fps rate, such as:

00:00:00:28
00:00:00:29
00:00:01:00
00:00:01:01

played back at 29.97 fps, will actually run 3.6 seconds longer than a “wall-clock” hour.
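
(That 3.6 seconds isn’t hand-waving – it falls right out of the frame counts. Another quick bc check:)

[cc lang="Bash"]
# frame labels in one "hour" of timecode counted at a whole-number 30 fps:
$ echo "30 * 60 * 60" | bc
108000
# seconds it actually takes to play those 108000 frames back at 29.97 fps:
$ echo "scale=2; 108000 / 29.97" | bc
3603.60
[/cc]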

That’s already an issue at an hour, and it proceeds to get worse the longer the content being recorded. So SMPTE developed “drop-frame” timecode, which confusingly does not drop actual frames of video!! Instead, it drops some of the timecode frame markers: the “00” and “01” frame counts are skipped at the start of each minute. So crossing a minute boundary, a drop-frame sequence actually proceeds thus:

00:00:59;28
00:00:59;29
00:01:00;02
00:01:00;03

With the “00” and “01” frame markers removed entirely from the timecode count at the top of each minute – except for every tenth minute, when they are kept in to make the math check out (see the quick check below):

00:09:59;28
00:09:59;29
00:10:00;00
00:10:00;01
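
(And here’s why dropping two labels per minute, except every tenth minute, makes the math check out – one more quick bc sketch:)

[cc lang="Bash"]
# drop-frame skips 2 frame labels per minute, except the 6 tenth-minutes each hour:
$ echo "2 * (60 - 6)" | bc
108
# labels actually used per hour, out of the 108000 counted at 30 fps:
$ echo "108000 - 108" | bc
107892
# frames actually played in a wall-clock hour at 29.97 fps -- a match:
$ echo "29.97 * 60 * 60" | bc
107892.00
[/cc]

(Even this isn’t perfectly exact, since 29.97 is itself rounded from the true 30000/1001 rate, but it keeps timecode within a few frames of the wall clock per day.)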

Non-drop-frame timecode simply proceeds as normal, with the potential drift from clock time. Drop-frame timecode is often (but not necessarily – watch out) identified in video systems using semi-colons or single periods instead of colons between the second and frame counts, as I have done above. Semi-colons are common on digital devices, while periods are common on VTRs that didn’t have the ability to display semi-colons.

I hope this journey through the fourth dimension of analog video clarifies a few things. While each of these concepts is reasonable enough on its own, the way they relate to each other is not always clear, and the similarity of the terms can send your brain down the wrong alley quickly. Happy digitizing!

Resources

Andre, Adam. “Time Base Corrector.” Written for NYU-MIAP course in Video Preservation I, Fall 2017. Accessed 6/9/2018. https://www.nyu.edu/tisch/preservation/program/student_work/2017fall/17f_3403_Andre_a1.pdf

“Genlock: What is it and why is it important?” Worship IMAG blog. Posted 6/11/2011. Accessed 6/8/2018. https://worshipimag.com/2011/06/11/gen-lock-what-is-it-and-why-is-it-important/

Marsh, Ken. Independent Video. San Francisco, CA: Straight Arrow Books, 1974.

Poynton, Charles. Digital Video and HD: Algorithms and Interfaces. 2nd edition. Elsevier Science, 2012.

“SMPTE timecode.” Wikipedia. Accessed 6/8/2018.

Weise, Marcus and Diana Weynand. How Video Works: From Analog to High Definition. 2nd Edition. Burlington, MA: Focal Press, 2013.

Live Capture Plus and QuickTime for Java

One of the particular challenges of video preservation is how to handle and preserve the content on digital tape formats from the latter days of magnetic A/V media: Digital Betacam, DVCam, DVCPro, etc. Caught in the nebulous time of transition between analog and digital signals (the medium itself, magnetic tape, is basically the same as previous videotape formats like VHS or Betacam – but the information stored on it was encoded digitally), these formats were particularly popular in production environments, but there were plenty of widely-used consumer-grade formats as well (MiniDV, Digital8). In some ways, this makes transferring content easier than handling analog formats: there is no “digitization” involved, no philosophical-archival conundrum of how best to approximate an analog signal into a digital one. One simply needs to pull the digital content off the magnetic tape intact and get it to a modern storage medium (hard disk, solid-state, or maybe LTO, which yes I know is still magnetic tape but pay no attention to the man behind the curtain).

https://twitter.com/dericed/status/981965351482249216

However, even if you still have the proper playback deck, and the right cables and adapters to hook up to a contemporary computer, there’s the issue of software – do you have a capture application that can communicate properly with the deck *and* pull the digital video stream off the tape as-is?

The last bit is getting especially tricky. As DV-encoded formats in particular have lost popularity in broadcast/production environments, the number of applications that can import and capture DV video without transcoding (that is, changing the digital video stream in the process of capture), while staying compatible with contemporary, secure operating systems/environments, has dwindled. That’s created a real conundrum for a lot of archivists. Apple’s Final Cut Pro application, for instance, infamously dropped the ability to capture native DV when it “upgraded” from Final Cut Pro 7 to Final Cut Pro X (you can hook up and capture tapes, but Final Cut Pro X will automatically transcode the video to ProRes). Adobe Premiere will still capture DV and HDV codecs natively, but will re-package the stream into a .mov QuickTime wrapper (you can extract the raw DV back out, though – see the sketch below – so this is still a solid option for many, though of course for just as many more an Adobe CC subscription is beyond their means).
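
(If you’ve got one of those QuickTime-wrapped captures, here’s a sketch of pulling the raw DV stream back out with ffmpeg – assuming a hypothetical file named capture.mov. Since DV audio is muxed inside the video essence itself, a straight stream copy of the video track gets you the whole thing:)

[cc lang="Bash"]
# copy the DV video track out of the .mov untouched, dumping its packets as-is:
$ ffmpeg -i capture.mov -map 0:v:0 -c copy -f rawvideo capture.dv
[/cc]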

One of the best options for DV capture is (was?) a Mac application called Live Capture Plus, made by Square Box Systems as part of its CatDV media suite. It has great options for error handling (e.g. automatically trying to read a problem area of a tape multiple times if there’s dropout), generating DV files based on clips or scenes or timecode rather than the whole tape, remote tape deck control over the FireWire/Thunderbolt connection, etc. – a bunch of ingest features that are more appealing to an archivist than an application primarily meant for editing, like Adobe Premiere. It also talks to you, which is fun but also terrifying.

Failed to power up

However, Square Box removed Live Capture Plus from its product list some years back, and as far as I’m aware has refused all pleas to either open-source the legacy code or even continue to sell new licenses to those in the know.

Let’s say you *are* lucky enough to still have an old Live Capture Plus license on hand, however. The Live Capture Plus GUI is built on Java, but an older, legacy version of Java, so when you first try to run the app on a contemporary OS (~10.10 and up), you’ll first see this:

Luckily, for at least the moment, Apple still offers/maintains a download of this deprecated version of Java – just clicking on “More Info…” in that window will take you there, or you can search for “Java for OSX” to find the Apple Support page.

OK, so you’ve downloaded and installed the legacy Java for OSX. But this time, when you try to run Live Capture Plus again, you run into this fun error message instead:


All right. What’s going on here?

When I first encountered this error, even though I didn’t know Java, the message provided two clues: 1) “NoClassDefFoundError” – so Java *can’t find* some piece that it needs to run the application correctly; and 2) “quicktime”/”QTHandleRef” – so it specifically can’t find some piece that relates to QuickTime. That’s enough to go on a search engine deep dive, where I eventually found this page where researchers at the University of Wisconsin-Madison’s zoology/molecular biology lab apparently encountered and solved a similar issue with a piece of legacy software related to, near as I can figure from that site, taking images of tiny tiny worms. (I desperately want to get in touch and propose some sort of panel with these people about working with legacy software, but am not even sure what venue/conference would be appropriate.)

The only way my ’90s-kid brain can understand what’s happening

So basically, recent versions of Mac OS have not included key files for a plugin called “QuickTime for Java” – a deprecated software library that allowed applications built in/on Java (like Live Capture Plus) to provide multimedia functionality (playback, editing, capture) by piggybacking on the QuickTime application’s support for a pretty wide range of media formats and codecs (including DV). Is this fixable? If you can get those key files, yes!

For now, both the downloads and instructions for what to do with these three files are available on that Hardin Lab page, but I’m offering them here as well. The fix is pretty quick:
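
(In rough command-line form, the fix amounts to the following – a sketch, where the exact filenames are whatever the three files from the Hardin Lab download are called, and I’m using the per-user install location discussed below:)

[cc lang="Bash"]
# create a per-user Java extensions folder, if it doesn't exist already:
$ mkdir -p ~/Library/Java/Extensions
# copy the downloaded QuickTime for Java files into it
# (e.g. QTJava.zip and its companions -- follow the Hardin Lab page for the exact three files):
$ cp ~/Downloads/QTJava.zip ~/Library/Java/Extensions/
[/cc]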

 

I would only note a couple of things. First, I’ve successfully installed on macOS Sierra (10.12) without needing to mess with the System Integrity Protection settings, by just installing for the local user (i.e. putting the files into ~/Library rather than /System/Library). Second, if you want to do this via Finder rather than the command line, here is how to get to the “View Options” box to reveal the Library folder in Finder, as mentioned in Step 1 of the Hardin Lab instructions (a useful step in general, really, if you’re up to digipres shenanigans):

Once these files are in place, Live Capture Plus should open correctly and be able to start communicating with any DV/FireWire-capable deck that you’ve got connected to your Mac – again, provided that you’ve got a registration code to enter at this point.

A final word of warning, however. Live Capture Plus comes from the era of 32-bit applications, and we’re now firmly in the era of 64-bit operating systems. Exactly what all that means is probably the subject of another post, but basically it’s just to say that legacy 32-bit apps weren’t made to take advantage of modern hardware, and may run slower on contemporary computers than they did on their original, legacy hardware. Not really an issue when you’re in video/digital preservation and your entire life is work-arounds, but recently Mac OS has taken to complaining about 32-bit apps:

Despite 32-bit apps posing, near as I can tell, no actual security or compatibility concerns for 64-bit OSes (32-bit apps just can’t take advantage of more than 4 GB of RAM), this is a pretty heavy indication that Apple will likely cut off support for 32-bit apps entirely sometime in the not-so-distant future. And that will go for not just Live Capture Plus, but other legacy apps capable of native DV transfer (Final Cut 7, the DVHSCap utility from the FireWire SDK, etc.).
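
(If you’re curious which of your own apps are 32-bit, the file command will report an app binary’s architecture – a quick sketch, with a hypothetical install path:)

[cc lang="Bash"]
# "i386" in the output means 32-bit only; "x86_64" means the app can run in 64-bit mode:
$ file "/Applications/Live Capture Plus.app/Contents/MacOS/Live Capture Plus"
[/cc]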

So go get those DV tapes transferred!!!!

Upgrading Video Digitization Stations

In the primary MIAP lab we have four Mac Pro stations set up mainly for video digitization and capture. They get most heavily used during our two Video Preservation courses: Video Preservation I, which focuses on technical principles and practice of digitization from analog video sources, and Video Preservation II, which focuses more on vendor relations and guiding outsourced mass digitization projects, but by necessity involves a fair amount of digital video quality control/quality assurance as well. They get used for assorted projects in Collections Management, the “Talking Tech” workshops I’ve started leading myself, and the Cinema Studies department’s archive as well.

Over the course of 2016, the hardware on these four stations was really starting to show its age. These machines were originally bought and set up in 2012 – putting them in the last generation of the older “tower”-style silver Mac Pro desktops, before Apple radically shifted its hardware design to the “trash bin”-style Mac Pros that you can buy today. The operating system hadn’t been updated in a while either: they were still running Mac OSX 10.10 (Yosemite), whose last full update came in August 2015 (with a few security updates still following, at least).

This guy isn’t allowed in anymore, for instance.

These stations were stable – at least, in the sense that all the software we needed definitely worked, and they would get the job done of digitizing/capturing analog video. But the limitations on how quickly and efficiently they could do this work were more and more apparent. The amount of time it took to, say, create a bag out of 200 GB of uncompressed video, transcode derivative copies, run an rsync script to back video packages up to a local NAS unit, or move the files to/from external drives (a frequent case, as both Video Preservation classes usually partner with other cultural organizations in New York City, who come to pick up their newly-digitized material via hard drive) was getting excruciating relative to newer systems, wasting class time and requiring a lot of coordination/planning of resources as ffmpeg or rsync chugged along for hours, or even overnight.
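
(For a sense of scale, this is the sort of command that would chug along for hours – a sketch of one of those rsync backups, with made-up paths for a capture drive and our NAS:)

[cc lang="Bash"]
# recursively copy new/changed video packages to the NAS, preserving timestamps:
$ rsync -rtv --progress /Volumes/Capture_RAID/video_packages/ /Volumes/MIAP_NAS/video_backups/
[/cc]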

So, I knew it was time to upgrade our stations. But how to go about it? There were two basic options:

1. Purchase brand-new “trash bin” Mac Pros to replace the older stations


2. Open up the innards of the old Mac Pros and swap in updated, more powerful components

Buying brand-new Windows stations was basically out, just given the way our classes have been taught, the software we work with, and my own personal knowledge/preference/ability to maintain hardware. And I was lucky that #1 was even an option at all – the considerable resources available at NYU allow for choices that I would not have many other places. But MIAP also has a lot of equipment needs, and I’d generally rather put larger portions of our budget towards harder-to-get analog video equipment and refurbishment than jump for splashy new hardware that we don’t actually need. So I drew up some thoughts on what I actually wanted to accomplish:

  • improved data transfer rate between desktops and external drives (the fastest connection available, at best, was the mid-2012 Mac Pro’s native FireWire 800 ports; and many times we were limited to USB 2.0)
  • improved application multi-tasking (allow for, say, a Blackmagic Media Express capture to run at the same time as the ffmpeg transcode of a previous capture)
  • improved single-application processing power (speed up transcoding, bag creation and validation, rsync transfer if possible)
  • update operating system to OSX 10.11 (El Capitan, a more secure and up-to-date release than Yosemite and MUCH more stable than the new 10.12 Sierra)
  • maintain software functionality with a few older programs, especially Final Cut 7 or equivalent native-DV capture software

Consulting with adjunct faculty, a few friends, and the good old internet, it became clear that a quick upgrade by way of just purchasing new Mac Pros would pose several issues: first, that the Blackmagic Decklink Studio 2 capture cards we used for analog video digitization would not be compatible, requiring additional purchases of stand-alone Blackmagic analog-to-digital converter boxes on top of the new desktops to maintain current workflows. It is also more difficult to cheaply upgrade or replace the storage inside the newer Mac Pros, again likely requiring the eventual purchase of stand-alone RAID storage units to keep up with the amount of uncompressed video being pumped out; whereas the old Mac Pro towers have four internal drive slots that can be swapped in and out within minutes, with minimal expertise, and be easily arranged into various internal RAID configurations.

In other words, I decided it was much cheaper and more efficient to keep the existing Mac Pro stations, which are extremely flexible and easy to upgrade, and via new components bring them more or less up to speed with what completely new Mac Pros could offer anyway. In addition to the four swappable storage slots, the old Mac Pro towers feature easy-to-replace RAM modules, and PCI expansion slots on the back that offer the option to add extra data buses (i.e. more USB, eSATA, or Thunderbolt ports). You can also update the CPU itself – but while adding a processor with more cores would in theory (if I understand the theory, which is also far from a 100% proposition) be the single biggest boost to improving/speeding up processing, the Intel Quad-Core processors already in the old towers are no slouch (the default new models of the Mac Pro still have Quad-Cores), and would be more expensive and difficult to replace than those other pieces. Again, it seemed more efficient, and safer given my limited history with building computer hardware, to incrementally upgrade all the other parts, see what we’re working with, and someday in the future step up the CPU if we really, desperately need to breathe more life into these machines.

So, for each of the four stations, here were the upgrades made (I’ve separated the general upgrade from the specific model/pricing I found; for any of these you could pursue other brands/models/sellers as well):

  • (1) 120 GB solid-state drive (for operating system and applications)

OWC Mercury Extreme Pro 6G SSD: $77/unit
OWC Mount Pro Drive Sled (necessary to mount SSDs in old Mac Pros): $17/unit

  • (1) 1 TB hard drive (for general data storage – more on this later)

Western Digital Caviar Blue 1 TB Internal HDD: $50/unit

  • (1) PCI Express Expansion Card, w/ eSATA, USB 3.0 and USB 3.1 capability

CalDigit FASTA-6GU3 Plus: $161/unit

  • (4) 8 GB RAM modules, for a total of 32 GB

OWC 32.0 GB Upgrade Kit: $139/unit

Swaaaaaaaaaaag

Summed up, that’s less than $500 per computer and less than $2000 for the whole lab, which is a pretty good price for (hopefully) speeding up our digitization workflow and keeping our Video Preservation courses functional for at least a couple more years.

The thinking: with all that RAM, multi-tasking applications shouldn’t be an issue, even with higher-resource applications like Final Cut 7, Blackmagic Media Express, ffmpeg, etc. With the OSX El Capitan operating system and all applications hosted on solid-state memory (the 120 GB SSD) rather than hard drive, single applications should run much faster (as the drives don’t need to literally spin around to find application or system data). By buying a new 1 TB hard drive for each computer, the three non-OS drive slots on each computer are now all filled with 1 TB hard drives. I could have two of those configured in a RAID 0 stripe arrangement, to increase the read and write speed of user data (i.e. video captures) – the third drive can serve as general backup or as storage for non-video digitization projects, as needed.

RAM for days

*Oh what fun it is to ride in a one-120-GB-solid-state-drive open sled*

The expansion cards will now allow eSATA or USB 3.0-speed transfers to compatible external drives. The USB 3.1 function on the specific CalDigit cards I got won’t work unless I upgrade the operating system to 10.12 Sierra, which I don’t want to do just yet. That’s basically the one downside compared to the all-new Mac Pros, which would’ve offered Thunderbolt transfer speeds beyond USB 3.0 – but for now, USB 3.0 is A) still a drastic improvement over what we had before, B) probably the most common connection on the consumer external drives we see anyway, and C) with an inevitable operating system upgrade we’ll “unlock” the USB 3.1 capability to keep up as USB 3.1 connections become more common on external drives.

Uninstalled…

…installed! Top row – followed by two slots for the Blackmagic Decklink Studio 2 input/output and the AMD Radeon graphics card input/output at the bottom.

Installing all these components was a breeze. Seriously! Even if you don’t know your way around the inside of a computer at all, the old Mac Pro towers were basically designed to be super customizable and easy to swap out parts, and there’s tons of clear, well-illustrated instructional videos available to follow.

[vimeo 139648427 w=640 h=360]

As I mentioned in a previous post about opening up computers, the main issue was grounding. Static discharge damaging the internal parts of your computer is always a risk when you go rooting around and touching components, and especially since the MIAP lab is carpeted I was a bit worried about accidentally frying a CPU with my shaky, novice hands. So I also picked up a $20 computer repair kit that included an anti-static wristband that I wore while removing the desktops from their station mounts, cleaning them out with compressed air, and swapping in the mounted SSDs, new HDDs, expansion cards, and RAM modules.


With the hardware upgrades completed, it was time to move on to software and RAID configuration. Using a free program called DiskMaker X6, I had created a bootable El Capitan install disk on a USB stick (to save the time of having to download the installer to each of the four stations separately). Booting straight into this installer program (by plugging in the USB stick and holding down the Option key when turning on a Mac), I was able to quickly go through the process of installing OSX El Capitan on to the SSDs. For now that meant I could theoretically start up the desktop from either El Capitan (on the SSD) or Yosemite (still hosted on one of the HDDs) – but I wanted to wipe all the storage and start from scratch here.
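
(Side note: if you’d rather skip the third-party program, Apple’s OS installers ship with a built-in createinstallmedia tool that does the same job – a sketch assuming the El Capitan installer is sitting in /Applications and the USB stick is mounted as /Volumes/Untitled, which this command will erase:)

[cc lang="Bash"]
# turn a USB stick into a bootable El Capitan installer (erases the stick!):
$ sudo "/Applications/Install OS X El Capitan.app/Contents/Resources/createinstallmedia" \
  --volume /Volumes/Untitled \
  --applicationpath "/Applications/Install OS X El Capitan.app"
[/cc]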

I accomplished that wipe using Disk Utility, the built-in program for drive management included with OSX. Once I had backed up all important user data from all the hard drives, I completely reformatted all of them (sticking with the default Mac OS Extended (Journaled) formatting), including the brand-new 1 TB drives. So now each station had an operating system SSD running El Capitan and three blank 1 TB hard drives to play with. As mentioned earlier, I wanted to put two of those in a RAID 0 data stripe arrangement – a way of turning two separate drives into one logical “volume”. RAID 0 is a mildly dangerous arrangement, in that failure of either one of those drives means total data loss; but it means a significant performance boost in read/write speed (hopefully decreasing the likelihood of dropped frames during capture, improving time spent on fixity checks and bagging, etc.) while maintaining a total of 2 TB storage on the drives (most RAID arrangements, focused more on data security and redundancy rather than performance, will result in a total amount of storage on the volume less than the capacity of the physical disks). And files are not meant to be stored long-term on these stations anyway: they are either returned to the original institution, backed up to the more secure, RAID 6-arranged NAS, or backed up to our department’s fleet of external drives – if not some combination of those options.

So it was at this point that I discovered that in the upgrade from Yosemite to El Capitan, Apple actually removed functionality from the Disk Utility application. The graphic interface for Disk Utility in Yosemite and earlier versions of OSX featured an option to easily customize RAID arrangements with your drives. In El Capitan (and, notably, El Capitan only – the feature has returned in Sierra), you’re only allowed to erase, reformat and partition drives.

Cool. Cool cool cool.

Which means to the Terminal we go. The command-line version of Disk Utility (invoked with the “diskutil” command) can still quickly create and format a RAID volume. First, I have to run a quick

[cc lang="Bash"]$ diskutil list[/cc]

…in order to see the file paths/names for the two physical disks that I wanted to combine to create one volume (previously named MIAP_Class_Projects and Station_X_Backup):


In this case, I was working with /dev/disk1 and /dev/disk3. Once I had the correct disks identified, I could use the following command:

[cc lang="Bash"]$ diskutil appleRAID create stripe MIAP_Projects_RAID JHFS+ disk1 disk3[/cc]

Let’s break this down:

diskutil – command used to invoke the Disk Utility application

appleRAID – option to invoke the underlying function of Disk Utility that creates RAIDs – it’s still there, they just removed it from the graphical version of Disk Utility in El Capitan for some reason ¯\_(ツ)_/¯

create stripe – tells Disk Utility that I want to create a RAID 0 (striped) volume

MIAP_Projects_RAID – the name to give the new striped volume

JHFS+ – tells Disk Utility I want the striped volume to be formatted using the journaled HFS+ file system (the default Mac OS Extended (Journaled) formatting)

disk1 disk3 – lists the two drives, with the disk identifiers taken from the previous command above, that I want to combine for this striped volume

Note: Be careful! When working with Disk Utility, especially in the command line, be sure you have all the data you want properly backed up. You can see how you could easily wipe/reformat disks by putting in the wrong disk number in the command.

End result: two physical disks combined to form a 2 TB volume, named MIAP_Projects_RAID:

The 2 TB RAID volume, visible in the GUI of Disk Utility – note the two physical drives are still listed in the “Internal” menu on the left, but without subset logical volumes, as with the SSD and El Capitan OS volume, or the WDC hard drive with the “CS_Archive_Projects” volume.
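
(You can also confirm the new RAID set from the command line:)

[cc lang="Bash"]$ diskutil appleRAID list[/cc]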

Hooray! That’s about it. I did all of this with one station first, which allowed me the chance to reinstall all the software, both graphical and CLI, that we generally use in our courses, and test our usual video capture workflows. As mentioned before, my primary concern was that older native-DV capture software like Final Cut 7 or Live Capture Plus would break, given that official OS support for those programs ended a long time ago – but near as I can tell, they still work in El Capitan. That’s no guarantee, but I’ll troubleshoot more when I get there (and keep around a bootable USB stick with OSX 10.9 Mavericks on it, just in case we have to revert to using an older operating system to capture DV).

In order to not eat up memory on the 120 GB SSD operating system drive, I figured this was advisable.

I wish that I had thought to actually run some timed tests before I made these upgrades, so that I would have some hard evidence of the improved processing power and time saved on transcoding, checksumming, etc. But I can say that having the operating system and applications hosted on solid-state memory, and the USB 3.0 transfer speeds to external drives, have certainly made a difference even to the unscientific eye. It’s basically like we’ve had brand-new desktops installed – for a fraction of the cost. So if you’re running a video digitization station, I highly recommend learning your way around the inside of a computer and the different components – whether you’re on PC or still working with the old Mac Pro towers, just swapping in some fresh innards could make a big difference and save the trouble and expense of all-new machines. I can’t speak to working with the new Mac Pros, of course, but would be very interested to hear from anyone using those for digitization as to their flexibility – for instance, if I didn’t already have the old Mac Pros to work with and were starting completely from scratch, what would you recommend? Buying the new Pros, or hunting down some of the older desktop stations, for the greater ability to customize them?