What is a Video Monitor?

Panasonic Plasma

Like everyone else, I regularly use video monitors for work. I use a Panasonic on set and various others in post, depending on who's doing the post-production. One thing I've noticed about all the monitors I've looked at is that I'm very easy to please.

That got me thinking. What is a video monitor, and what makes it special? Don't get me wrong: I'm an engineer, so I pretty much understand what happens under the hood. It's a real pleasure watching a high-end monitor in a dark environment, but it's also a pleasure watching a movie on a laptop display.

Are all monitors alike? Of course not. So what makes a video monitor special enough to warrant a higher price? Let me share my thoughts.

What is a Video Monitor?

As with everything else, we start by defining our terms. Monitoring means inspecting. We're observing, checking and studying, all at the same time.

In our case we are observing, checking and studying video. Therefore, a video monitor is a device that lets us accurately observe, check and study video.

Why ‘accurately’? Usually people buy video monitors to fulfill the following functions:

  • They want to see their video as it ‘really is’.
  • They want others to see their video as it ‘really is’, and as they see it.
  • They want to ensure video conforms to certain technical specifications, like broadcast, cinema, etc.

Who wants to buy a monitor that shows a false image? If I'm sharing videos on Facebook, I do so with the intention of letting my friends see them as they are. As for the third point, on set we expect our tools to conform to technical specifications. That's why we've chosen them, right?

Surely two monitors with the same technical specifications can't be equal if one is ten times the price of the other? Or can they?

So let's start by studying what 'accuracy' means. After all, if all monitors hit the bullseye, we wouldn't have to worry about accuracy, would we?

Leader Monitor

Can any video monitor ever be perfect?

Question: Are we perfect?

No. So why do I expect perfection from a device made by imperfect beings and viewed with imperfect eyes? Okay, that doesn't help anybody. For the sake of argument, let's say we are capable of building the perfect video monitor. Let's see how that goes.

This is a simplified explanation of what happens from camera to monitor:

Light is converted to an electrical charge in a camera pixel -> sampling -> ADC -> debayering -> sub-sampling -> compression -> encoding -> Standard Signal ==> Monitor -> unwrap signal -> decoding -> decompression -> DAC -> display pixel fires, producing light of a certain wavelength

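Before we get into the details, here's a minimal, purely illustrative Python sketch (not any real camera's or monitor's pipeline) that condenses this chain into one lossy round trip: convert an RGB triplet to Y'CbCr using the standard Rec.709 luma weights, quantize to 8 bits, and convert back. The helper names and values are my own assumptions for illustration.

```python
# Hedged sketch: why the camera-to-monitor round trip is not bit-perfect.
# The Rec.709 luma coefficients are standard; everything else is illustrative.

def rgb_to_ycbcr(r, g, b):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec.709 luma
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return r, g, b

def quantize8(x):
    # Store each component in 8 bits, as a typical delivery codec would.
    return round(x * 255) / 255

original = (0.1234, 0.5678, 0.9012)                      # what the camera sent
encoded = tuple(quantize8(c) for c in rgb_to_ycbcr(*original))
decoded = ycbcr_to_rgb(*encoded)                          # what the display gets back

print(original)   # (0.1234, 0.5678, 0.9012)
print(decoded)    # close, but not identical
```

Even in this toy version, the numbers that come out are close to, but not exactly, the numbers that went in.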
Here, we're only concerned with whether what is 'packaged' by the camera can be unwrapped and displayed as-is. These are some of the technical challenges in this seemingly straightforward process:

  • Pixels are physical objects whose properties are liable to change over time or under certain conditions, while still staying within the 'bounds of accuracy'
  • The display's color space (decided by its gamut) isn't the same as the video's color space (dictated by the camera and the standard); see the sketch after this list
  • Displays themselves aren't perfect, introducing errors due to their own construction, design and manufacturing, just like everything else
  • The digital decoding/decompression math isn't necessarily the exact inverse of the encoding math

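On the gamut point above, here's a hedged illustration (my own toy example, not anyone's recommended workflow): mapping a linear Rec.2020 colour, as a camera or delivery standard might define it, into Rec.709, which a typical display panel covers, using the approximate BT.2087 conversion matrix. Saturated colours land outside the display's gamut and have to be clipped, so they cannot be shown as-is.

```python
# Hedged sketch of the gamut mismatch: linear Rec.2020 -> Rec.709 using the
# (approximate) BT.2087 matrix. Components outside 0..1 don't exist on the panel.

M_2020_TO_709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def to_rec709(rgb2020):
    rgb709 = [sum(m * c for m, c in zip(row, rgb2020)) for row in M_2020_TO_709]
    clipped = [min(max(c, 0.0), 1.0) for c in rgb709]  # what the panel can actually show
    return rgb709, clipped

raw, shown = to_rec709((0.1, 0.9, 0.2))   # a saturated green
print(raw)     # the red component goes negative: outside the display's gamut
print(shown)   # the clipped colour the display ends up reproducing
```
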
References are provided at the end of this article for further study. The only way a video monitor even stands a chance is if

  • It were built by the same engineers who built the camera, or at least by those who know the full secret sauce of the camera.
  • The engineers study every camera in the market this way.
  • The engineers have full control over the viewing environment.
  • The engineers could control the supply of silicon to near-perfect specifications from the choicest mines, like Starbucks claims to do with its coffee. Even they don't have a monopoly on perfection.
  • There is a single accepted formula or method for all monitor design, which there isn't. Since every part inside every part is patented, with different design specifications, materials and manufacturing tolerances, the sum total is a different beast altogether, like different sub-breeds of the same breed of dog.

We’ve just discovered what we know already. Even the most perfectly designed objects aren’t perfect. We just like to use words like ‘perfect’ and ‘accurate’, but that doesn’t make it so.

Then there’s the sad fact that even if a monitor did perfectly reproduce an image, we wouldn’t be able to see it properly! The electronics and the math and the graphs might all say it’s perfect, but it will never look the same to us. How so?

E.g., the same image on the same monitor at 2pm will look different from how it looked at 9am. Add another human to the queue, and the error is doubled. A million? In that case even the best monitors in the world are on the same level as the cheapest consumer-grade monitors. I'll get to this soon, but first let's take a look at the eye.

Gemini 4:4:4

Monitoring is about us, not about machines. Our eyes play a significant part in the monitoring chain. Here’s how ‘good’ our eyes are:

  • Peripheral vision is different from central vision
  • Perception of luminosity, color and motion changes based on psychological and physiological factors
  • Our left eye is different from the right eye
  • Perception changes depending on age
  • Motion detection and perception varies according to light levels
  • Prescription eyeglasses, contacts and sunglasses alter the imagery
  • Non-prescription eyeglasses and contacts alter the imagery
  • Perception changes based on ambient light
  • Perception changes depending on what light source you have been subjected to prior to perception

Wow, huh? What I’m getting from all this is that errors are introduced in every link in the chain. But are these errors practically worth worrying about?

Let’s play a small game, called ‘pleasing the masses’:

Which monitor to choose?

It’s a simple thought experiment. In Option A, the imagery from the camera goes through a ‘perfect broadcast or video monitor’, one which shows you ‘exactly what the camera produced’.

This imagery is processed and delivered to eight people, the end users, each having their own brand of monitors. For the sake of simplicity, let’s assume the signal doesn’t change, and we are just rerouting the signals with zero difference.

If four of these people use a cheap but reliable consumer-grade panel, while the rest use monitors of varying quality or price levels, whom should we aim for? Should we aim to please the majority, or maintain our own 'standard', as decided by the perfect broadcast monitor?

Most of us artists will indignantly decide to please ourselves. After all, we shot the images, didn’t we? We know what the scene looked like, and we know the true intention of the settings we chose to represent and record the scene. Is it our fault the end user has crappy monitors?

This is where it gets interesting. Let’s consider Option B.

Let’s exchange the perfect broadcast monitor with a cheap but reliable monitor. The same videographer, being the professional that he or she is, will use the relevant settings on camera to realize his or her vision on this monitor. What happens when the signal is rerouted to eight end users?

Should the camera settings be chosen for the cheap monitors or the perfect broadcast monitor? We don't know what it looks like on a perfect monitor, but our vision has been realized on the consumer monitor. Here's the killer: it will definitely look different on the perfect monitor, but it isn't the videographer's vision anymore, is it? Who's right now? Or, let me put it differently: who's more accurate now?

It gets even more interesting if you consider Option C, which is if a videographer uses a monitor that is neither perfect nor equal to what the masses use. I’ll let you figure out this one for yourselves.

Think about headphones. There are literally hundreds of headphones at every budget imaginable. If you get used to listening on a certain type or brand, hearing the same music 'as it is' on a better headphone might sound worse! Humans are more capable of finding fault in audio than in video. Monitors can be inaccurate and still deliver perfect results, even to the person who shot the scene!

What an unwholesome pile of muck. From image accuracy we are down to pleasing ___________ (somebody, executives, colorists, the director, you, your pet dog – fill in the blanks).

If it is mandatory to grade a high-budget movie destined for 3,000 screens on a super expensive monitor, why is it perfectly acceptable to sell the same movie on consumer displays (trailers on the internet or television)? If the audience is hooked by the imagery on these displays, what makes you think they'd notice the extra color in the cinema?

In fact, I can't recall having read or heard of a single instance where someone watching cinema or television has complained about colorimetry or resolution. Unless the device wasn't working, that is. It seems consumer displays aren't as bad as some 'professionals' make them out to be.

If the person shooting the imagery is okay with whatever monitor is available, and the end user watching the imagery does not know the difference, who is the perfect video monitor for? Since many use it, and are quite ready to pay for it, there must be a strong reason, or is there?

Let's not go overboard and make it overly dramatic. At the very least, I know my inability to see small differences in monitors lets me enjoy stuff more, without having to worry about unnecessary details – especially since my eyes can't perceive them perfectly anyway.

Therefore, I’m content with forgetting ‘accuracy’ and embracing ‘satisfactory’.

The only question I have to ask now is, whom do I want to satisfy?

Here’s my definition of a video monitor:

A video monitor is a device that lets you satisfactorily observe, check and study video.

Do you feel this is a worthy definition, or am I way off base? How do you feel about your imagery being watched on different monitors? How do you deal with it?

References: