This is a guest post written by Jeff Lauber, based on his paper originally written for a course on “Video Preservation” in NYU’s M.A. program in Moving Image Archiving and Preservation.
Jeff is a media archivist based in New York City. He currently works as an archivist for Jenny Holzer Studio in Brooklyn and carries out freelance archival projects for a number of NYC institutions.
In its most basic sense, signal-to-noise ratio (SNR) explains itself: the proportion of a pure signal to the noise present in its transmission. In more technical terms: “the dimensionless ratio of the signal power to the noise power contained in a recording […] the signal-to-noise ratio parameterizes the performance of optimal signal processing systems when the noise is Gaussian [i.e. normally distributed and/or predictable].”1 Applicable in numerous fields in which imaging and/or transmission of electronic signals is used, SNR allows for a more precise understanding of how accurately a signal is being received. In the preservation of videotape, SNR can act as an important indication of whether or not audiovisual signals from the tape are being captured properly, with minimal unwanted background noise, in the analog-to-digital conversion process.
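As the quoted definition suggests, SNR is just a power ratio, usually expressed logarithmically in decibels. A minimal sketch of the arithmetic (the power values here are arbitrary examples, not measurements from any real signal):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels, computed from power values."""
    return 10 * math.log10(signal_power / noise_power)

# A signal carrying 10,000x the power of its noise floor:
print(snr_db(10_000, 1))  # 40.0 dB
```

Because the scale is logarithmic, every 10dB represents a tenfold difference in power; a 40dB ratio means the signal is ten thousand times more powerful than the noise.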
Noise in magnetic/electronic audiovisual signal transmission can be the result of a number of factors. In its most common instance, noise is perceived in the video signal as “snow” and is the result of random electrical disturbances in the signal transmission process. Most of this random noise is white (i.e. distributed evenly across the frequency spectrum) and is introduced to the signal via components that contain low signal levels such as camera imagers, videotape recorders, cable circuits, and broadcast receivers.2 In fact, noise can be introduced into the signal at any stage in the recording and transmission process; as Jim Lindner importantly notes, “the impact [of introducing noise into the signal] is cumulative over the entire signal chain and is not necessarily just an issue of the magnetic media itself or of a problem with magnetic fields in storage.”3
In general, the SNR of analog recording media and electronics has improved in tandem with advancements in technology. Though natural to both analog and digital recordings, noise has often been considered impure and undesirable in audiovisual signals, and efforts to improve recording technology over the years were intent on achieving the highest possible SNR, i.e. the lowest level of noise proportionate to the audiovisual signals.3 Video SNR is most often expressed in decibels and, in more precise terms, is the “power ratio […] of the peak-to-peak signal voltage or current, from black level to reference white but not including the sync pulses, to the rms [sic] value of the noise.”2 The noise power in a video signal can be expressed as either a weighted or unweighted value. Unweighted noise is an expression of the noise power in a given signal and can be determined mathematically/logarithmically or using an instrument with a uniform frequency response which quantitatively measures the output spectrum of a signal transmission. Weighted noise takes into account the noise that the average human can visibly perceive in a video signal: by considering factors such as natural eye response, screen brightness, and scan-line width, among others, the weighted factor simulates the aperture response of the human eye to adjust the noise power measurement to what can be visually perceived.2 In general, weighted noise measurements result in an SNR that is about 8dB higher than unweighted, the difference a result of discounting noise that is present in the signal but cannot be visually perceived.4 A final important consideration when measuring the SNR of a given video signal is time: in terms of human perception, still frames of a film or video image appear noisier than moving images, since eye-brain coordination generally integrates around six frames of a video image, improving the perceived SNR.
Incorporating a 0.2-second time factor into the mathematical equation used to calculate SNR is considered good practice in that respect.5 In broadcast television, a weighted SNR of between 43 and 53dB is considered reasonable, though newer analog and digital technologies have become capable of far greater ratios.2
SNR is an important factor to consider in the analog-to-digital conversion of videotape signals and in video- and audiotape preservation generally. In the simplest sense, awareness of noise and its proportion to the pure, desired audiovisual signal is essential. Lindner notes that many people—notably younger generations whose lives have been predominantly digital—have come to expect pure, seemingly noiseless image and sound quality from audiovisual content. However, both visual and audible noise are natural qualities of analog recordings.3 Attempting to reduce noise in the analog-to-digital transfer of video- and audiotapes thus risks misrepresenting those intrinsic noise qualities if done too aggressively.
In a technical sense, digitization of analog videotape must take into consideration the SNR of a given audiovisual signal since, as with signal amplification and transmission, the digitization process involves sending a signal through multiple components or pieces of equipment, each of which has the potential to introduce noise. This consideration is especially pertinent given that noise is additive in analog systems, i.e. noise accumulates at every stage of the recording, transmission, and/or digitization process.2 Converting the waveform of analog audiovisual content into quantized, digital values introduces error in general, and the quality of the recording or presence of noise is also determined by the number of bits per sample. The signal-to-quantization-noise ratio (SQNR), in that respect, measures the difference between the signal and the noise introduced in quantization; it has been noted that every bit adds around 6dB of resolution to the digitized signal so that a 16-bit sample, for instance, would allow for a maximum SQNR of 96dB.6 An awareness of target SNRs and methods for maximizing the ratio of digitized content can help ensure cleaner, more accurate digital transfer.
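That rule of thumb is easy to sketch. The rough 6dB-per-bit figure cited above ignores a small constant term; the commonly cited formula for a full-scale sine wave, SQNR ≈ 6.02 × N + 1.76 dB, lands slightly higher (assumptions: sinusoidal input at full scale, uniform quantization):

```python
def max_sqnr_db(bits, rough=False):
    """Theoretical maximum SQNR for an N-bit quantizer.

    rough=True uses the ~6 dB-per-bit approximation cited above;
    otherwise the standard full-scale-sine formula 6.02*N + 1.76 dB.
    """
    return 6 * bits if rough else 6.02 * bits + 1.76

print(max_sqnr_db(16, rough=True))  # 96: the ballpark figure cited above
print(round(max_sqnr_db(16), 2))    # 98.08: the more precise estimate
```

Either way, the takeaway is the same: more bits per sample push the quantization noise floor further below the signal.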
However, when dealing with analog videotape and transmitting its signal, noise is an inevitability; even when sample size is significant and even when other measures are taken to reduce noise in the signal path during digitization (e.g. minimizing the length of the path and the number of components used – literally, shorter cables and less equipment, if possible), there is an “irreducible amount of noise superimposed on the signal each time the camera image is read out, amplified, and digitized.”7
I’ve always been amused by the way a certain professional field frequently goes out of its way to shout “we don’t understand audio” to the world. (Association of Moving Image Archivists, Moving Image Archiving and Preservation, Moving Image Archive Studies, Museum of the Moving Image, etc. etc.)
“But there’s no good word to quickly cover the range of media we potentially work with,” you cry! To which I say, “audiovisual” is a perfectly good format-agnostic term that’s been in the public consciousness for decades and doesn’t confuse my second cousins when I try to explain what I do. “But silent film!” you counter, trying to use clever technicalities. To which I say, silent film was almost never really silent, home movies were meant to be narrated over, and stop being a semantic straw man when I’m trying to have a hot take over here!
The point is: when working in video preservation and digitization, our training and resources have a tendency to lean toward “visual” fidelity rather than the “audio” half of things (and it IS half). I’m as guilty of it as anyone. As I’ve described before, I’m a visual learner and it bleeds into my understanding and documentation of technical concepts. So I’m going to take a leap and try and address another personal Achilles’ heel: audio calibration, monitoring, and transfer for videotape digitization.
I hope this to be the first in an eventual two-part post (though both halves can stand alone as well). Today I’ll be talking about principles of audio monitoring: scales, scopes and characteristics to watch out for. Tune in later for a breakdown of format-by-format tips and tricks that vary depending on which video format you’re working with: track arrangements, encoding, common errors, and more. My focus, for now, is on audio in regards to videotape – audio-only formats like 1/4″ open reel, audio cassette, vinyl/lacquer disc and more bring their own concerns that I won’t get into right now, if only to avoid scope creep (I’ve been writing enough 3000+ word posts of late). But much if not all of the content in this post should be applicable to audio-only preservation as well!
Big thanks to Andrew Weaver for letting me pick his brain and help spitball these posts!
The Spinal Tap Problem
Anyone who has ever attended one of my workshops knows that I love to take a classic bit of comedy and turn it into a buzz-killing object lesson. So:
Besides an exceptional sense of improvisational timing, what we have here is an excellent illustration of a fundamental disconnect in audio calibration and monitoring: the difference between how audio signal is measured (e.g. on a scale of 1 to 10) and how it is perceived (“loudness”).
There are two places where we can measure or monitor audio: on the signal level (as the audio passes through electrical equipment and wires, as voltage) or on the output level (as it travels through the air and hits our ears as a series of vibrations). We tend to be obsessed with the latter – judging audio based on whether it’s “too quiet” or “too loud”, which strictly speaking is as much a matter of presentation as preservation. Cranking the volume knob to 11 on a set of speakers may cause unpleasant aural side effects (crackling, popping, bleeding eardrums) but the audio signal as recorded on the videotape you’re watching stays the same.
To be clear, this isn’t actually any different from video signal: as I’ve alluded to in past posts, a poorly calibrated computer monitor can affect the brightness and color of how a video is displayed regardless of what the signal’s underlying math is trying to show you. So just as we use waveform monitors and vectorscopes to look at video signals, we need “objective” scales and monitors to tell us what is happening on the signal level of audio to make sure that we are accurately and completely transferring analog audio into the digital realm.
Just like different color spaces have come up with different scales and algorithms for communicating color, different scales and systems can be applied to audio depending on the source and/or characteristic in question. Contextualizing and knowing how to “read” exactly what these scales are telling us is something that tends to get lost by the wayside in video preservation, and what I’m aiming to demystify a bit here.
Measuring Audio Signal
So if we’re concerned with monitoring audio on the signal level – how do we do that?
All audiovisual signal/information is ultimately just electricity passed along on wires, whether that signal is interpreted as analog (a continuous wave) or digital (a string of binary on/off data points). So at some level measuring a signal quantitatively (rather than how it looks or sounds) always means getting down and interpreting the voltage: the fluctuations in electric charge passed along a cable or wire.
In straight scientific terms, voltage is usually measured in volts (V). But engineers tend to come up with other scales to interpret voltage that adjust unit values and are thus more meaningful to their own needs. Take analog video signal, for instance: rather than volts, we use the IRE scale to talk about, interpret and adjust some of the most important characteristics of video (brightness and sync).
We never really talk about it in these terms, but +100 IRE (the upper limit of NTSC’s “broadcast range”, essentially “white”) is equivalent to 700 millivolts (0.7 V). We *could* just use volts/millivolts to talk about video signal, but the IRE scale was designed to be more directly illustrative about data points important to analog video. Think of it this way: what number is easier to remember, 100 or 700?
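The mapping between millivolts and IRE is just a linear rescaling. A sketch, using the 0 mV = 0 IRE, 700 mV = 100 IRE convention described above (note that NTSC variants with a 7.5 IRE setup/pedestal complicate this slightly):

```python
def mv_to_ire(mv):
    # Linear rescale: 0 mV -> 0 IRE, 700 mV -> 100 IRE
    return mv * 100 / 700

print(mv_to_ire(700))  # 100.0: reference white
print(mv_to_ire(350))  # 50.0: halfway up the picture range
```

Same information, friendlier numbers: that is the whole point of the IRE scale.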
Audio engineers had the same conundrum when it came to analog audio signal. Where it gets super confusing from here is that many scales emerged to translate voltage into something useful for measuring audio. I think the best way to proceed from here is just to break down the various scales you might see and the context for/behind them.
Decibel-based scales are logarithmic rather than linear, which makes them ideal for measuring audio signals and vibrations – the human ear is more sensitive to certain changes in frequency and/or amplitude than others, and a logarithmic scale can better account for that (quite similar to gamma correction when it comes to color/luminance and the human eye).
The problem is that decibels are also a relative unit of measurement: something cannot just be “1 dB” or “1000 dB” loud on its own; it can only be 1 dB or 1000 dB louder than something else. So you can see quite a lot of scales related to audio that start with “dB” but then have some sort of letter serving as a suffix; this suffix specifies what “sound” or voltage or other value is serving as the reference point for that particular scale.
An extremely common decibel-based scale for measuring analog audio signals is dBu. The “u” value in there stands for an “unterminated” voltage of 0.775 volts (in other words, the value “+1 dBu” stands for an audio wave that is 1 decibel louder than the audio wave generated by a sustained voltage of 0.775 V).
In the analog world, dBu is considered a very precise unit of measurement, since it was based on electrical voltage values rather than any “sound”, which can get subjective. So you’ll see it in a lot of professional analog audio applications, including professional-grade video equipment.
Confusingly: “dBu” was originally called “dBv”, but was re-named to avoid confusion with the next unit of measurement on this list. So yes, it is very important to distinguish whether you are dealing with a lower-case “v” or upper-case “V”. If you see “dBv”, it should be completely interchangeable with “dBu” (…unless someone just wrote it incorrectly).
dBV functions much the same as dBu, except the reference value used is equivalent to exactly 1 volt (1 V). It is also used as a measurement of analog audio signal. (+1 dBV indicates an audio wave one decibel louder than the wave generated by a sustained voltage of 1 V)
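Since both scales are just decibels referenced to a fixed voltage, converting a measured voltage into either one is a one-liner (a sketch; the voltages fed in are arbitrary examples, not measurements from real equipment):

```python
import math

DBU_REF = 0.775  # volts: the "unterminated" reference for dBu
DBV_REF = 1.0    # volts: the reference for dBV

def volts_to_dbu(volts):
    return 20 * math.log10(volts / DBU_REF)

def volts_to_dbv(volts):
    return 20 * math.log10(volts / DBV_REF)

print(round(volts_to_dbu(0.775), 2))  # 0.0: the dBu reference itself
print(round(volts_to_dbv(1.0), 2))    # 0.0: the dBV reference itself
print(round(volts_to_dbu(1.0), 2))    # 2.21: 1 V sits above the dBu reference
```

Note the consequence: any given voltage reads about 2.21 dB higher in dBu than in dBV, which is why the two scales never agree except relative to their own zero points.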
Now…why do these two scales exist, referenced to slightly different voltages? Honestly, I’m still a bit mystified myself. Explanations of these reference values delve quite a bit into characteristics of electrical impedance and resistance that I don’t feel adequately prepped/informed enough to get into at the moment.
What you DO need to know is that a fair amount of consumer-grade analog audiovisual equipment was calibrated according to and uses the dBV scale instead of dBu. This will be a concern if you’re figuring out how to properly mix and match and set up equipment, but let’s table that for a minute.
PPM and VU
dBu and dBV, while intended for accurately measuring audio signal/waves, still had a substantial foot in the world of electrical signal/voltage, as I’ve shown here. Audio engineers still wanted their version of “IRE”: a reference scale and accompanying monitor/meter that was most useful and illustrative for the practical range of sound and audio signal that they tended to work with. At the time (~1930s), that meant radio broadcasting, so just keep that in mind whenever you’re flipping over your desk in frustration trying to figure out why audio calibration is the way it is.
The BBC struck in this arena first, with PPM (peak program meter), a highly accurate meter and scale intended to show “instant peaks”: the highest point on each crest of an audio wave. These meters became very popular in European broadcasting environments, but different countries and engineers couldn’t agree on what reference value, and therefore measurement scale, to use. So if you come across a PPM audio meter, you might see all kinds of scales/number values depending on the context and who made it: the original BBC PPM scale went from 1 to 7, for instance (with 6 being the “intended maximum level” for radio broadcasts), while EBU PPM, ABC (American) PPM, Nordic PPM, DIN PPM, CBC PPM, etc. etc., might all show different decibel levels.
In the United States, however, audio/radio engineers decided that PPM would be too expensive to implement, and instead came up with VU meters. VU stands for Volume Units (not, as I thought myself for a long time, “voltage units”!!! VU and V are totally different units of scale/measurement).
VU meters are averaged, which means they don’t give a precise reading of the peaks of audio waves so much as a generalized sense of the strength of the signal. Even though this meant they might miss certain fluctuations (a very quick decibel spike on an audio wave might not fully register on a VU meter if it is brief and unsustained), this translated closely enough to perceived loudness that American engineers went with this lower-cost option. VU meters (and the Volume Unit scale that accompanies them) are and always have been intended to get analog audio “in the ballpark” rather than give highly accurate readings – you can see this in the incredibly bad low-level sensitivity on most VU meters (going from, say, -26 VU to -20 VU, for instance, is barely going to register a blip on your needle).
So I’ve lumped these two scales/types of meter together because you’re going to see them in similar situations (equipment for in-studio production monitoring for analog A/V), just generally varying by your geography. From here on out I will focus on VU because it is the scale I am used to dealing with as an archivist in the United States.
All of these scales I’ve described so far have related to analog audio. If the whole point of this post is to talk about digitizing analog video formats…what about digital audio?
Thankfully, digital audio is a little more straightforward, at least in that there’s pretty much only one scale to concern yourself with: dBFS (“decibels below [or in relation to] Full Scale”).
Whereas analog scales tend to use relatively “central” reference values – where ensuing audio measurements can be either higher OR lower than the “zero” point – the “Full Scale” reference refers to the point at which digital audio equipment simply will not accept any higher value. In other words, 0 dBFS is technically the highest possible point on the scale, and all other audio values can only be lower (-1 dBFS, -100 dBFS, etc. etc.), because anything higher would simply be clipped: literally, the audio wave is just cut off at that point.
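The hard ceiling at 0 dBFS is easy to demonstrate. A sketch below, assuming samples normalized so that full scale is 1.0 (a common convention in digital audio software):

```python
import math

def to_dbfs(amplitude, full_scale=1.0):
    """Express a sample amplitude in decibels relative to full scale."""
    return 20 * math.log10(abs(amplitude) / full_scale)

def clip(sample, full_scale=1.0):
    """Anything beyond full scale is simply cut off at the ceiling."""
    return max(-full_scale, min(full_scale, sample))

print(to_dbfs(1.0))            # 0.0: the clipping point itself
print(round(to_dbfs(0.5), 1))  # -6.0: half of full scale
print(clip(1.7))               # 1.0: an over-range sample gets flattened
```

Everything representable lives at or below 0 dBFS; the information above the ceiling is simply gone.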
Tipping the Scales
All right. dBu, dBV, dBFS, VU….I’ve thrown around a lot of acronyms and explanations here, but what does this all actually add up to?
If you take away anything from this post, remember this:
0 VU = +4 dBu = -20 dBFS
The only way to sift through all these different scales and systems – the only way to take an analog audio signal and make sure it is being translated accurately into a digital audio signal – is to calibrate them all against a trusted, known reference. In other words – we need a reference point for the reference points.
In the audio world, that is accomplished using a 1 kHz sine wave test tone. Like SMPTE color bars, the 1 kHz test tone is used to calibrate all audio equipment, whether analog or digital, to ensure that they’re all understanding an audio signal the same way, even if they’re using different numbers/scales to express it.
In the analog world, this test tone is literally the reference point for the VU scale – so if you play a 1 kHz test tone on equipment with VU meters, it should read 0 VU. From there, the logarithms and standards demand that 0 VU is the same as +4 dBu. That is where the test tone should read if you have equipment that uses those scales.
dBFS is a *little* more tricky. It’s SMPTE-recommended practice to set 1 kHz test tone to read at -20 dBFS on digital audio meters – but this is not a hard-and-fast standard. Depending on the context, some equipment (and therefore the audio signals recorded using them) are calibrated so that a 1 kHz test tone is meant to hit -18 or even -14 dBFS, which can throw the relationship between your scales all out of whack.
(In my experience, however, 99 times out of 100 you will be fine assuming 0 VU = -20 dBFS)
Once you’re confident (relatively) that everyone’s starting in the same place, that makes it possible to proceed from there: audio signals averaging between 0 and +3 VU on VU meters, for example, should be peaking roughly between -20 dBFS and -6 dBFS on a digital scale (remember that VU readings are averages, so momentary peaks land well above them).
Note that these are all logarithmic scales based on different logarithms – so they are never going to line up one-to-one except at the agreed-upon reference point. That is, +4 dBu may be equal to 0 VU, but +5 dBu is not equal to 1 VU. When it comes to translating audio signal from one system and scale to another, we can follow certain guidelines and ranges, but there is always going to be a certain amount of imprecision and subjectivity in working with these scales on a practical level during the digitization process.
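For steady test tones, though, a correctly aligned chain does give you a fixed offset between dBu and dBFS. A sketch only: it assumes the SMPTE-style +4 dBu = -20 dBFS calibration discussed above, and it deliberately leaves VU out, since VU is an averaged, ballistic reading with no simple tone-for-tone formula:

```python
ALIGNMENT_OFFSET = 24  # dB, because +4 dBu aligns with -20 dBFS

def dbu_to_dbfs(dbu):
    return dbu - ALIGNMENT_OFFSET

def dbfs_to_dbu(dbfs):
    return dbfs + ALIGNMENT_OFFSET

print(dbu_to_dbfs(4))    # -20: the 1 kHz reference tone
print(dbfs_to_dbu(-20))  # 4: and back again
```

If your gear is calibrated to -18 or -14 dBFS instead, the offset changes accordingly, which is exactly why you confirm the alignment with a test tone before trusting any numbers.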
Danger, Will Robinson
Remember when I said that we were talking about signal level, not necessarily output level, in this post? And then I mentioned that professional equipment is calibrated to the dBu scale while consumer equipment is calibrated to dBV? Sorry. Let’s circle back to that for a minute.
Audio equipment engineers and manufacturers didn’t stop trying to cut corners when they adopted VU meters over PPM. As a cost-saving measure for wider consumer release, they wanted to make audio devices with ever-cheaper physical components. Cheaper components literally can’t handle as much voltage passing through them as higher-quality, “professional-grade” components and equipment.
So many consumer-grade devices were calibrated to output a 1 kHz test tone audio signal at -10 dBV, which is equivalent to a significantly lower voltage than the professional, +4 dBu standard.
(The math makes my head hurt, but you can walk through it in this post; also, I get the necessary difference in voltage but no, I still don’t really understand why this necessitated a difference in the decibel scale used)
What this means is: if you’re not careful, and you’re mixing devices that weren’t meant to work together, you can output a signal that is too strong for the input equipment to handle (professional -> consumer), or way weaker than it should be (consumer -> professional). I’ll quote here the most important conclusion from that post I just linked above:
If you plug a +4dBu output into a -10dBV input the signal is coming in 11.79dB hotter than the gear was designed for… turn something down.
If you plug a -10dBV output into a +4dBu input the signal is coming in 11.79dB quieter than the gear was designed for… turn something up.
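That 11.79dB figure falls straight out of the two reference voltages, and it’s worth walking through the arithmetic once yourself (a sketch of the math only; no real equipment involved):

```python
import math

# Convert each nominal operating level to an actual voltage:
v_pro = 0.775 * 10 ** (4 / 20)       # +4 dBu  ~= 1.228 V
v_consumer = 1.0 * 10 ** (-10 / 20)  # -10 dBV ~= 0.316 V

# Express the ratio between the two voltages in decibels:
diff_db = 20 * math.log10(v_pro / v_consumer)
print(round(diff_db, 2))  # 11.79
```

The 14 dB gap you might naively expect (from +4 down to -10) shrinks by the ~2.21 dB offset between the dBu and dBV references, landing at 11.79 dB.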
Unbalanced audio signal/cables are a big indicator of equipment calibrated to -10 dBV: so watch out for any audio cables and connections you’re making with RCA connectors.
The reality is also that after a while many professional-grade manufacturers were aware of the -10 dBV/+4 dBu divide, and factored that into their equipment: somewhere, somehow (usually on the back of your device, perhaps on an unmarked switch) is the ability to actually swap back and forth between expecting a -10 dBV input and a +4 dBu one (thereby making any voltage gain adjustments to give you your *expected* VU/dBFS readings accordingly). Otherwise, you’ll have to figure out a way to make your voltage gain adjustments yourself.
The lessons are two-fold:
Find a manual and get to know your equipment!!
You can plug in consumer to professional equipment, but BE CAREFUL going professional into consumer!! It is possible to literally fry circuits by overloading them with the extra voltage and cause serious damage.
Set Phase-rs to Stun
There’s another thing to watch out for while we’re on the general topic of balanced and unbalanced audio: the concepts of polarity and phase.
Polarity is what makes balanced audio work; it refers to an audio signal’s position relative to the median line of voltage (0 V). Audio sine waves swing from positive voltage to negative voltage and vice versa; precisely inverting the polarity of a wave (i.e. taking a voltage of +0.5 V and flipping it to -0.5 V) and summing the two signals together (playing them at the same time) results in complete cancellation.
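That cancellation is simple to simulate (a sketch with a generated sine wave; the sample count is arbitrary):

```python
import math

# One cycle of a sine wave, 48 samples long:
wave = [math.sin(2 * math.pi * n / 48) for n in range(48)]
inverted = [-s for s in wave]  # polarity flipped

# Summing the wave with its polarity-inverted copy cancels completely:
summed = [a + b for a, b in zip(wave, inverted)]
print(all(abs(s) < 1e-12 for s in summed))  # True
```

This is exactly the trick balanced cables exploit, except in reverse: the noise picked up identically on both conductors is what gets cancelled, leaving the signal intact.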
Professional audio connections (those using XLR cables/connections) take advantage of this quality of polarity to help eliminate noise from audio signals (again, you can read my post from a couple years back to learn more about that process). But this relies on audio cables and equipment being correctly wired: it’s *possible* for technicians, especially those making or working with custom setups, to accidentally solder a wire such that the “negative” wire on an XLR connector leads to a “positive” input on a deck or vice versa.
This would result in all kinds of wildly incorrectly-recorded signals, and would probably be caught very quickly. But it’s a thing to possibly watch out for – and if you’re handed an analog videotape where the audio was somehow recorded with inverted polarity, there are often options (on analog equipment or in digital audio software, depending on what you have on hand) that are as easy as flipping a switch or button, rather than having to custom-solder wires to match the original misalignment of the recording equipment.
This is where phase might come into play, though. Phase is, essentially, a delay of audio signal. It’s expressed in terms of relation to the starting point of the original audio sine wave: e.g. a 90 degree phase shift would result in a quarter-rotation, or a delay of a quarter of a wavelength.
In my experience, phase doesn’t come too much into play when digitizing audio – except that a 180 degree phase shift can, inconveniently, look precisely the same as a polarity inversion when looking at an audio waveform. This has led to some sloppy labeling and nomenclature in audio equipment, meaning that you may see settings on either analog or digital equipment that refer to “reversing the phase” when what they actually do is reverse the polarity.
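You can see why the two get conflated: for a single steady sine wave, a 180 degree phase shift and a polarity inversion produce sample-for-sample identical results (a sketch; for complex program audio, which is not one pure tone, a time delay and an inversion are not the same thing):

```python
import math

N = 48  # samples per cycle (arbitrary)
sine = [math.sin(2 * math.pi * n / N) for n in range(N)]

inverted = [-s for s in sine]  # polarity inversion
shifted = [math.sin(2 * math.pi * n / N + math.pi) for n in range(N)]  # 180-degree phase shift

# Indistinguishable, to within floating-point error:
print(all(abs(a - b) < 1e-12 for a, b in zip(inverted, shifted)))  # True
```

Hence the waveform-reading confusion, and the sloppy “phase reverse” labels on gear that actually flips polarity.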
You can read a bit more here about the distinction between the two, including what “phase shifts” really mean in audio terms, but the lesson here is to watch your waveforms and be careful of what your audio settings are actually doing to the signal, regardless of how they’re labelled.
Reading Your Audio
I’ve referred to a few tools for watching and “reading” the characteristics of audio we’ve been discussing. For clarity’s sake, in this section I’ll review exactly, for practical purposes, what tools and monitors you can look at to keep track of your audio.
Level meters are crucial to measuring signal level and will be where you see scales such as dBu, dBFS, VU, etc. In both analog and digital form, they’re often handily color-coded; so if after all of this you still don’t really get the difference between dBu and dBFS, to some degree it doesn’t matter: level meters will warn you by changing from green to yellow and eventually to red when you’re in danger of running too “hot” and clipping (wherever that equivalent point is in the scale in question).
Waveforms will actually chart the shape of an audio wave; they’re basically a graph with time on the x-axis and amplitude (usually measured in voltage) on the y-axis. These are usually great for post-digitization quality control work, since they give an idea of audio levels not just in any one given moment but over the whole length of the recording. That can alert you to issues like noise in the signal (if, say, the waveform stays high where you would expect more fluctuation in a recording that alternates loud and quiet sections) or unwanted shifts in polarity.
Waveform monitors can sometimes come in the form of oscilloscopes: these are essentially the same device, in terms of showing the user the “shape” of the audio signal (the amplitude of the wave based on voltage). Oscilloscopes tend to be more of a “live” form of monitoring, like level meters – that is, they function moment-to-moment and require the audio to be actively playing to show you anything. Digital waveform monitors tend to actually save/track themselves over time to give the full shape of the recording/signal, rather than just the wave at any one given moment.
Spectrograms help with a quality of audio that we haven’t really gotten to yet: frequency. Like waveforms, they are a graph with time on the x-axis, but instead of amplitude they chart audio wave frequency.
If amplitude is perceived by human ears as loudness, frequency is perceived as pitch. They end up looking something like a “heat map” of an audio signal – stronger frequencies in the recording show up “brighter” on a spectrogram.
Spectrograms are crucial to audio engineers for crafting and recording new signals and for “cleaning up” audio signals by removing unwanted frequencies. As archivists, you’re probably not looking to mess with or change the frequencies in your recorded signal, but spectrograms can be helpful for checking and controlling your equipment; that is, making sure that you’re not introducing any new noise into the audio signal in the process of digitization. Certain frequencies can be dead giveaways for electrical hum, for example.
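The hum example is easy to sketch numerically. The snippet below fakes one second of otherwise-silent audio contaminated with 60 Hz mains hum, then measures the signal’s strength at a few candidate frequencies (everything here, the sample rate included, is made up for illustration; real tools like Audacity’s spectrogram view or ffmpeg’s showspectrum filter do this over many overlapping windows):

```python
import math

SAMPLE_RATE = 1000  # Hz, deliberately low to keep the example small
N = SAMPLE_RATE     # one second of audio

# "Silent" recording contaminated with low-level 60 Hz mains hum:
audio = [0.05 * math.sin(2 * math.pi * 60 * n / SAMPLE_RATE) for n in range(N)]

def magnitude_at(samples, freq, rate):
    """Strength of one frequency, via correlation with a sine/cosine pair."""
    re = sum(s * math.cos(2 * math.pi * freq * n / rate) for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * n / rate) for n, s in enumerate(samples))
    return math.hypot(re, im) / len(samples)

for freq in (50, 60, 120):
    print(freq, round(magnitude_at(audio, freq, SAMPLE_RATE), 3))
# Only the 60 Hz bin lights up; its neighbors stay at zero.
```

A spectrogram is essentially this measurement repeated across the whole frequency range and across time, painted as brightness, which is why a constant hum shows up as an unmistakable horizontal stripe.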
The More You Know
This is all a lot to sift through. I hope this post clarifies a few things – at the very least, why so much of the language around digitizing audio, and especially digitizing audio for video, is so muddled.
I’ll leave off with a few general tips and resources:
Get to know your equipment. Test audio reference signals as they pass through different combinations of devices to get an idea what (if anything) each is doing to your signal. The better you know your “default” position, the more confidently you can move forward with analyzing and reading individual recordings.
Get to know your meters. Which one of the scales I mentioned are they using? What does that tell you? If you have both analog and digital meters (which would be ideal), how do they relate as you move from one to the other?
Leave “headroom”. This is the general concept in audio engineering of adjusting voltage/gain/amplitude so that you can be confident there is space between the top of your audio waveform and “clipping” level (wherever that is). For digitization purposes, if you’re ever in doubt about where your levels should be, push them down and leave more headroom. If you capture the accurate shape of an audio signal, albeit too “low”, there will be opportunity to readjust that signal later, as long as you got all the information. If you clip when digitizing, that’s it – you’re not getting that signal information back and you’re not going to be able to adjust it.
I know, I hate to break it to you. I’m not thrilled about it either – one of my favorite memories from high school is walking straight up to my Statistics teacher on the last day of senior year and proudly announcing that it was the last time I was ever going to be in a math class. (Yes, I do feel a bit guilty about the look of befuddled disappointment on his face, but by god I was right)
But it’s true: at least when it comes to video preservation, color is math. Everything else you thought you knew about color – that it’s light, that you get colors by mixing together other colors, that it’s pretty – is irrelevant.
Just now, you thought you were looking at color bars? Wrong. Numbers.
When I first started learning to digitize analog video, the concept of luminance and brightness made sense. Waveform monitors made sense. A bright spot in a frame of video shows up as a spike on the waveform in a way that appeases visual logic. When digitizing, you wanted to keep images from becoming too bright or too dark, lest visual details at these extremes be lost in the digital realm. All told, pretty straightforward.
Vectorscopes and chrominance/color information made less sense. There were too many layers of abstraction, and not just in “reading” the vectorscope and translating it to what I was seeing on the screen – there was something about the vocabulary around color and color spaces, full of ill-explained and overlapping acronyms (as I have learned, the only people who love their acronyms more than moving image archivists and metadata specialists are video engineers).
I’d like to sift through some of the concepts, redundancies, and labeling/terminology that threw me off personally for a long time when it came to color.
CIE XYZ OMG
Who in the rainbow can draw the line where the violet tint ends and the orange tint begins? Distinctly we see the difference of the colors, but where exactly does the one first blendingly enter into the other? So with sanity and insanity.
—Herman Melville, Billy Budd
I think it might help if we start very very broad in thinking about color before narrowing in on interpreting color information in the specific context of video.
In the early 20th century, color scientists attempted to define quantitative links between the visible spectrum of electromagnetic wavelengths and the physiological perception of the human eye. In other words – in the name of science, they wanted to standardize the color “red” (and all the other ones too I guess)
Given the insanely subjective process of assigning names to colors, how do you make sure that what you and I (or more importantly, two electronics manufacturers) call “red” is the same? By assigning an objective system of numbers/values to define what is “red” – and orange, yellow, green, etc. etc. – based on how light hits and is interpreted by the human eye.
After a number of experiments in the 1920s, the International Commission on Illumination (abbreviated CIE from the French – like FIAF) developed in 1931 what is called the CIE XYZ color space: a mathematical definition of, in theory, all color visible to the human eye. The “X”, “Y” and “Z” stand for specific things that are not “colors” exactly, so I don’t even want to get into that here.
The important thing about the CIE XYZ color space – you can’t really “see” it. The limitations of image reproduction technology mean there will always be tangible limits standing between you and the full CIE XYZ gamut (outside of, maybe, a color scientist’s lab, but I’m not even convinced of that). Consider that graph above: even though it’s colored in for illustrative purposes, that’s not actually every color you can possibly see in there. The actual wavelengths of light produced, and therefore colors represented, by your computer’s LED screen encompass a much much more limited range of values.
tl;dr your computer screen, compared to the mathematical limits of the natural world – sucks.
That’s to be expected though! Practical implementations of any standard will always be more limited than an abstract model.
So even if it basically only exists in theory (and in the unbounded natural world around us), the CIE XYZ color space and its definition of color values served as the foundation for most color spaces that followed it. A color space is the system for creating/reproducing color employed by any particular piece of technology. Modern color printers, for example, use the CMYK color space: a combination of four color primaries (cyan, magenta, yellow, and “key”/black) that when mixed together by certain, defined amounts, create other colors, until you have hard copies of your beautiful PowerPoint presentation ready to distribute.
Again, just like CIE XYZ, any color space you encounter is math – it’s just a method of saying this number means orange and this number means blue and this number means taupe. But, just like with all other kinds of math, color math rarely stays that straightforward. The way each color space works, the way its values are calculated, and the gamut it can cover largely depend on the vagaries and specific limitations of the piece of technology it’s being employed with/on. In the case of CIE XYZ, it’s the human eye – in the case of CMYK, it’s those shitty ink cartridges on your desktop inkjet printer that are somehow empty again even though you JUST replaced them.
So what about analog video? What are the specific limitations there?
Peak TV: Color in Analog Video
Video signals and engineering are linked pretty inextricably with the history of television broadcasting, the first “mass-market” application of video recording.
So tune in to what you know of television history. Much like with film, it started with black-and-white images only, right? That’s because it’s not just budding moving image archivists who find brightness easier to understand and manipulate in their practical work – it was the same for the engineers figuring out how to create, record, and broadcast a composite video signal. It’s much easier for a video signal to tell the electron gun in a CRT television monitor what to do if frame-by-frame it’s only working with one variable: “OK, be kinda bright here, now REALLY BRIGHT here, now sorta bright, now not bright at all”, etc.
Compare that to: “OK, now display this calculated sum of three nonlinear tristimulus primary component values, now calculate it again, and again, and oh please do this just as fast as when we just gave you the brightness information you needed”.
So in first rolling out their phenomenal, game-changing new technology, television engineers and companies were fine with “just” creating and sending out black-and-white signals. (Color film was only just starting to get commonplace anyway, so it’s not like moving image producers and consumers were clamoring for more – yet!)
But as we moved into the early 1950s, video engineers and manufacturing companies needed to push their tech forward (capitalism, competition, spirit of progress, yadda yadda), with color signal. But consider this – now, not only did they need to figure out how to create a color video signal, they needed to do it while staying compatible with the entire system and market of black-and-white broadcasting. The broadcast companies and the showmakers and the government regulation bodies and the consumers who bought TVs and everyone else who JUST got this massive network of antennas and cables and frequencies and television sets in place were not going to be psyched to re-do the entire thing only a few years later to get color images. Color video signal needed to work on televisions that had been designed for black-and-white.
From the CIE research of the ’20s and ’30s, video engineers knew that both the most efficient and wide-ranging (in terms of gamut of colors covered) practical color spaces were composed by mixing values based on primaries of red, green, and blue (RGB).
But in a pure RGB color space, brightness values are not contained on just one component, like in composite black-and-white video – each of the three primary values is a combination of both chrominance and luminance (e.g. the R value is a sum of two other values that mean “how red is this” and “how bright is this”, respectively). If you used such a system for composite analog video, what would happen if you piped that signal into a black-and-white television monitor, designed to only see and interpret one component, luminance? You probably would’ve gotten a weirdly dim and distorted image, if you got anything at all, as the monitor tried to interpret numbers containing both brightness and color as just brightness.
This is where color-difference systems came into play. What engineers found is that you could still create a color composite video signal from three components – but instead of those being chrominance/luminance-mixed RGB values, you could keep the brightness value for each video frame isolated in its own Y′ component (which Charles Poynton would insist I now call luma, for reasons of… math), while all the new chroma/color information could be contained in two channels instead of three: a blue-difference (B′-Y′) and a red-difference (R′-Y′) component. By knowing these three components, you can actually recreate four values: brightness (luma) plus all three color primaries (R, G, and B). Even though there’s strictly speaking no “green” value specified in the signal, a color television monitor can figure out what this fourth value should be based on those difference calculations.
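To make that arithmetic concrete, here’s a minimal sketch in Python. I’m assuming the Rec. 601 luma weights (the standard-definition coefficients) – the exact numbers vary by standard, but the encode/decode logic is the same:

```python
# Encode gamma-corrected R', G', B' (0..1) as luma plus two difference
# signals, then show the decoder rebuilding all three primaries --
# including green, which is never transmitted directly.
KR, KG, KB = 0.299, 0.587, 0.114  # Rec. 601 luma weights (assumed)

def encode(r, g, b):
    y = KR * r + KG * g + KB * b   # luma: weighted brightness
    return y, b - y, r - y          # (Y', B'-Y', R'-Y')

def decode(y, b_y, r_y):
    r = r_y + y
    b = b_y + y
    g = (y - KR * r - KB * b) / KG  # green recovered from the other three
    return r, g, b

rgb = (0.8, 0.4, 0.2)
roundtrip = decode(*encode(*rgb))  # comes back as (0.8, 0.4, 0.2)
```

The decoder never receives a green value – it solves for it, since the luma equation is one equation with only one remaining unknown once R′ and B′ are rebuilt.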
For television broadcasting, keeping the luma component isolated meant that you could still pipe a color composite video signal into a black-and-white TV, and it would still just display a correct black-and-white image: it just used the values from the luma component and discarded the two color difference components. Meanwhile, new monitors designed to interpret all three components correctly would display a proper color image.
This basic model of using two color difference components for video was so wildly efficient and successful that we still use color spaces based on this model today! Even as we passed from analog video signals into digital.
Lookup Table: “YUV”
…but you may have noticed that I just said “basic model” and “color spaces”, as in plural. Uh oh.
As if this wasn’t all complicated enough, video engineers still couldn’t all just agree on one way to implement the Y′, B′-Y′, R′-Y′ model. Engineers working within the NTSC video standard needed something different than those working with PAL, who needed something different than SECAM. The invention of component video signal, where these three primaries were each carried on their own signal/cable, rather than mixed together in a single, composite signal, also required new adjustments. And digital video introduced even more opportunities for sophisticated color display.
So you got a whole ton of related color spaces, each using the same color difference model, but employing different math to get there. In each case, the “scale factors” of the two color difference signals are adjusted to optimize the color space for the particular recording technology or signal in question. Even though they *always* basically represent blue-difference and red-difference components, the letters change because the math behind them is slightly different.
So here is a quick guide to some common blue/red difference color spaces, and the specific video signal/context they have been employed with:
Y′UV = Composite analog PAL video
Y′IQ = Composite analog NTSC video
Y′DBDR = Composite analog SECAM video
Y′PBPR = Component analog video
Y′CBCR = Component digital video – intended as digital equivalent/conversion of Y′PBPR, and also sometimes called “YCC”
PhotoYCC (Y′C1C2) = Kodak-developed digital color space intended to replicate the gamut of film on CD; mostly for still images, but could be encountered with film scans (whereas most still image color spaces are fully RGB rather than color-difference!)
xvYCC = component digital video, intended to take advantage of post-CRT screens (still relatively rare compared to Y′CBCR, though), sometimes called “x.v. Color” or “Extended-gamut YCC”
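As a sketch of what “different scale factors” means in practice, here’s the Rec. 601 flavor of the analog Y′PBPR scaling next to its 8-bit digital Y′CBCR cousin. The 16–235 luma and 16–240 chroma code ranges are my reading of the standard – verify against BT.601 itself before relying on the exact numbers:

```python
# Same two difference signals, scaled differently per color space.
KR, KB = 0.299, 0.114  # Rec. 601 weights (assumed)

def ypbpr(r, g, b):
    y = KR * r + (1 - KR - KB) * g + KB * b
    pb = 0.5 * (b - y) / (1 - KB)   # blue-difference, normalized to +/-0.5
    pr = 0.5 * (r - y) / (1 - KR)   # red-difference, normalized to +/-0.5
    return y, pb, pr

def ycbcr_8bit(r, g, b):
    y, pb, pr = ypbpr(r, g, b)
    return (round(16 + 219 * y),     # luma code value (16-235)
            round(128 + 224 * pb),   # CB code value (16-240, centered on 128)
            round(128 + 224 * pr))   # CR code value (16-240, centered on 128)

# Pure white has no chroma: both difference channels sit at "zero"
# (0.0 in analog terms, code value 128 in 8-bit digital terms).
white = ycbcr_8bit(1, 1, 1)
```

Same blue/red difference model in both functions – only the scaling and offsets change, which is exactly why the letters keep changing in the names.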
And now a PSA for all those out there who use ffmpeg or anyone who’s heard/seen the phrase “uncompressed YUV” or similar when looking at recommendations for digitizing analog video. You might be confused at this point: according to the table above, why are we making a “YUV” file in a context where PAL video might not be involved at all???
As Charles Poynton again helpfully lays out in this really must-read piece – it literally comes down to sloppy naming conventions. For whatever reason, the phrase “YUV” has now loosely been applied, particularly in the realm of digital video and images, to pretty much any color space that uses the general B′-Y′ and R′-Y′ color difference model.
I don’t know the reasons for this. Is it because Y′UV came first? Is it because, as Poynton lays out, certain pieces of early digital video software used “.yuv” as a file extension? Is it a peculiar Euro-centric bias in the history of computing? The comments below are open for your best conspiracy theories.
I don’t know the answer, but I know that in the vast vast majority of cases where you see “YUV” in a digital context – ffmpeg, Blackmagic, any other kind of video capture software – that video/file almost certainly actually uses the Y′CBCR color space.
Gamma Correction, Or How I Learned To Stop Worrying and Make a Keyboard Shortcut for “Prime”
Another thing that I must clarify at this point. As evidenced by this “YUV” debacle, some people are really picky about color space terminology and others are not. For the purposes of this piece, I have been really picky, because when backtracking through this history it’s easier to start from precise definitions and then enumerate the ways people have gotten sloppy. You have to learn to write longhand before shorthand.
So you may have noticed that I’ve been picky about using proper, capitalized subscripts in naming color spaces like Y′CBCR. In their rush to write hasty, mean-spirited posts on the Adobe forums, people will write things like “YCbCr” or “Y Cb Cr” or “YCC” or any other combination of things. They’re all trying to say the same thing, but they basically can’t be bothered to find or set the subscript shortcuts on their keyboard.
In the same vein, you may have noticed these tiny tick marks (′) next to the various color space components I’ve described. That is not a mistake nor an error in your browser’s character rendering nor a spot on your screen (I hope). Nor is it any of the following things: an apostrophe, a single quotation mark, or an accent mark.
This little ′ is a prime. In this context it indicates that the luma or chroma value in question has been gamma-corrected. The issue is – you guessed it – math.
The human eye perceives brightness in a nonlinear fashion. Let’s say the number “0” is the darkest thing you can possibly see and “10” is the brightest. If you started stepping up from zero in precisely regular increments, e.g. 1, 2, 3, 4, etc., all the way up to ten, your eyes would weirdly not perceive these changes in brightness in a straight linear fashion – that is, you would *think* that some of the increases were more or less drastic than the others, even though mathematically speaking they were all exactly the same. Your eyes are just more sensitive to certain parts of the brightness range.
Gamma correction adjusts for this discrepancy between nonlinear human perception and the kind of linear mathematical scale that technology like analog video cameras or computers tend to work best with. Don’t ask me how it does that – I took Statistics, not Calculus.
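Don’t worry, the math is gentler than it sounds. Here’s a sketch using a bare power function with an exponent of 1/2.2 – a common simplification, not any particular broadcast standard (real transfer functions like BT.709’s add a small linear segment near black):

```python
# Gamma correction as a simple power law: encode linear light so that
# equal numeric steps line up better with perceived brightness steps.
GAMMA = 2.2  # assumed simplification, not a specific standard

def gamma_encode(linear):   # linear light -> gamma-corrected ("primed") value
    return linear ** (1 / GAMMA)

def gamma_decode(primed):   # primed value -> linear light
    return primed ** GAMMA

# Mid-level linear light (0.5) encodes to a much higher primed value
# (~0.73), spending more of the number range on the dark end, where
# our eyes are more sensitive to changes.
half = gamma_encode(0.5)
```

The prime mark on Y′ (and R′, G′, B′) just flags that this power-law bending has already been applied to the value.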
The point is, these color spaces adjust their luma values to account for this, and that’s what the symbol indicates. It’s pretty important – but then again, prime symbols are usually not a readily-accessible shortcut that people have figured out on their keyboards. So they just skip them and write things like… “YUV”. And I’m not just talking hasty forum posts at this point – I double-dare you to try and reconstruct everything I’ve just told you about color spaces from Wikipedia.
Come to think of it, the reason “YUV” won out as a shorthand term for Y′CBCR may very well be just that it doesn’t involve any subscripts or primes. Never underestimate the ways programmers will look to solve inefficiencies!
Reading a Vectorscope – Quick Tips
This is all phenomenal background information, but when it gets down to it, a lot of people digitizing video just want to know how to read a vectorscope so they can tell if the color in their video signal is “off”.
Here we have a standard SMPTE NTSC color bars test pattern as it reads on a properly calibrated vectorscope. Each of the six color bars (our familiar red, green, and blue, as well as cyan, yellow, and magenta) is exactly where it should be, as evidenced by the dots/points falling right into the little square boxes indicated for them (the white/gray bar should actually be the point smack in the center of the vectorscope, since it contains no chroma at all).
The saturation of the signal will be determined by how far away from the center of the vectorscope the dots for any given frame or pixel of video fall. For example, here is an oversaturated frame – the chroma values for this signal are too high, which you can tell because the dots are extending out way beyond the little labeled boxes.
Meanwhile, an undersaturated image will just appear as a clump towards the center of the vectorscope, never spiking out very far (and a black-and-white image should in fact only do that!)
Besides saturation, a vectorscope can also help you see if there is an error in phase or hue (both words refer to the same concept). As hopefully demonstrated by our discussion of color spaces and primaries, colors generally relate to each other in set ways – and the circular, wheel-like design of the vectorscope reflects that. If the dots on a vectorscope appear rotated – that is, dots that should be spiking out towards the “R”/red box are instead headed for the “Mg”/magenta box – that indicates a phase error. In such a case, any elephants you have on screen are probably all going to start looking like pink elephants.
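In code terms, a vectorscope is just plotting each chroma sample in polar form: distance from center is saturation, angle is hue/phase. A sketch (assuming CB/CR values centered on zero, which is a simplification of how the scales actually read):

```python
import math

def vector(cb, cr):
    """Return (saturation, hue_degrees) for one chroma sample."""
    sat = math.hypot(cb, cr)                   # distance from center
    hue = math.degrees(math.atan2(cr, cb)) % 360  # angle around the wheel
    return sat, hue

def rotate(cb, cr, degrees):
    """Simulate a phase error by rotating the chroma vector."""
    rad = math.radians(degrees)
    return (cb * math.cos(rad) - cr * math.sin(rad),
            cb * math.sin(rad) + cr * math.cos(rad))

sat0, hue0 = vector(0.3, 0.4)
sat1, hue1 = vector(*rotate(0.3, 0.4, 30))
# A pure phase error shifts every hue by the same angle (here 30
# degrees) while leaving saturation untouched -- which is why the whole
# pattern looks "rotated" on the scope.
```

An oversaturation error, by contrast, would scale `sat` up for every point without touching `hue` – the dots shoot outward past their boxes instead of rotating.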
The problem is, even with these “objective” measurements, color remains a stubbornly subjective thing. Video errors in saturation and phase may not be so wildly obvious as the examples above, making them a tricky thing to judge, especially against artistic intent or the possibility of an original recording being made with mis-calibrated equipment. Again, you can find plenty of tutorials and opinions on the subject of color correction online from angry men, but I always just tell everyone to use their best judgement. It’s as good as anyone else’s!
(If you do want or need to adjust the color in a video signal, you’ll need a processing amplifier.)
Pixel Perfect: Further Reading on Color
I’ve touched on digital video and color here but mostly focused on pre-digital color spaces and systems for audiovisual archivists and others who work with analog video signals. Really, that’s because all of this grew out of a desire to figure out what was going on with “uncompressed YUV” codecs and “-pix_fmt yuv420p” ffmpeg flags, and I worked backwards from there until I found my answers in analog engineering.
There’s so much more to get into with color in video, but for today I think I’m going to leave you with some links to explore further.
On the subject of other color spaces beyond Y′CBCR
An experimental video codec from AV Preservation by Reto Kromer, based on the Y′COCG color space, which can convert to/from RGB values but isn’t based on human vision models (using orange-difference and green-difference instead of the usual blue/red)