From the Dynamic Range of the Human Eye we know that the average eye can see about 20 stops, in a complicated sort of way.
There isn’t a camera, digital or film-based, that is capable of reproducing this range without resorting to HDR or computer trickery. Traditional color film is good for about 10 stops, and Kodak Vision3 reaches a theoretical maximum of 13 stops.
The BMCC, C300 and RED cameras claim to easily surpass film in dynamic range, yet their footage tells a different tale. The only digital footage I’ve seen that consistently looks like film comes from the Arri Alexa. Dynamic range isn’t the whole story, because contrast is a property that ‘moves’, for lack of a better word. The most brilliant explanation of contrast I’ve seen is by Bruce Barnbaum, in The Art of Photography; and the unparalleled master of exposure is Ansel Adams.
I have found that if you plan your shoots and shots carefully, you can bring most of the objects within a frame to within about 12 stops, the exceptions being specular highlights and deep shadows. There are always scenarios beyond your control, but these are scenarios where film won’t fare any better, and where even your eyes will have trouble seeing!
I would demand at least 10 stops from any camera for good, ‘controllable’ video. From my own tests of dynamic range and of Technicolor CineStyle for DSLRs, and from shooting with it, I know most DSLRs reach this threshold, if only barely. Some of the more expensive cameras, like the FS100, can reach 11 stops.
This is one instance where the more you have, the better, and the internet abounds with thousands of examples from every camera for you to judge.
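To put those stop counts in perspective: each stop doubles the light, so N stops of dynamic range correspond to a contrast ratio of 2^N:1. A quick back-of-the-envelope sketch (plain Python; the stop counts are the ones discussed above, the function is just for illustration):

```python
# Each stop of dynamic range doubles the captured light,
# so N stops correspond to a contrast ratio of 2**N : 1.
def stops_to_contrast(stops):
    return 2 ** stops

for stops in (10, 13, 20):
    print(f"{stops} stops -> {stops_to_contrast(stops):,}:1")
# 10 stops -> 1,024:1        (a DSLR that just clears the bar)
# 13 stops -> 8,192:1        (Kodak Vision3's theoretical maximum)
# 20 stops -> 1,048,576:1    (the adapted human eye)
```

The scale is exponential, which is why a ‘mere’ three extra stops of film latitude is such a big deal.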
The frame rate is one problem that many will find difficult to solve. If you are planning to shoot sports for the web, you’ll welcome the ability to display at higher frame rates. However, higher frame rates also mean larger files.
For all intents and purposes, most video will do well with a minimum of 24 fps (23.976 fps to be precise). Some might need a higher rate, and since this is an aesthetic criterion particular to certain scenarios, we’ll just have to bite the bullet on this one.
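That ‘precise’ figure isn’t arbitrary, by the way: NTSC-derived frame rates run at the nominal rate times 1000/1001, a leftover of analog broadcast timing. A quick check (plain Python):

```python
# NTSC-derived rates: the nominal frame rate slowed by a factor of 1000/1001.
nominal = 24
actual = nominal * 1000 / 1001
print(round(actual, 3))  # 23.976
```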
In my opinion, artifacts are what truly separate digital images from film. The way small details are handled by digital cameras is totally different from the grain-like structure of film. This is the fundamental reason for the ‘video look’.
Some attempt to solve this problem by increasing resolution, so the pixels get smaller, and more detail is resolved. However, that introduces another problem – too much resolution, which makes skin defects pop out.
Since resolution helps greatly (look at still images from DSLRs), one way to mask this problem is by using diffusion filters to soften the skin, as I’ve explained here. All said and done, there’s nothing you can do to fully eliminate the ‘video-look’ in every shot. That is a problem the camera manufacturers will have to solve for you.
If your material is good, people will look past its video-ness. Only camera nerds like us care about these things.
Most weird artifacts are caused by incorrect post processing – either by mishandling color or by adopting the wrong compression workflow.
Most cameras shoot either RAW, MPEG-2 or a variant of the MPEG-4 specification (H.264 or AVCHD). Some others can shoot directly to ProRes or DNxHD.
If your footage needs to be pushed a lot in post processing, then I highly recommend you start with the least amount of compression, if possible none. The flip side is, the lower the compression, the greater the file size.
Eventually, your footage will be compressed to H.264 for web viewing. Its adoption in HTML5 has cemented its place as the king of the internet codecs for many years to come. Ten or twenty years from now, when they find a better codec, they’ll still think a hundred times before adopting it – who’ll transcode all the billions of videos from H.264 to that new format? And even if they do manage, they’ll be working with a highly compressed file! They won’t have access to the original. See where this is going?
Don’t worry if your camera can only shoot H.264 or AVCHD or XDCAM or whatever. They are all good, and are robust enough to withstand the demands of the web. The important thing is to find the right workflow for your project, and that’s not hard. It just takes a few hours of testing – at most a few days. Isn’t that more fruitful than asking a bunch of strangers about their opinions on the web? This includes me, too, by the way!
Last, but not least, the internet prefers progressive video, so shoot progressive whenever possible.
The choice of resolution, frame rate, dynamic range and color specifications affects the file size directly. Yet, your final data rate is dictated by a totally different factor – bandwidth.
From Streaming Solutions, we know that, ideally, your video should be around the 1 Mbps mark for worldwide live streaming. If your audience is limited to a zone with great internet speeds, or if you are not interested in live streaming, you can jack that data rate up higher.
How high should you go? Well, YouTube recommends about 8 Mbps for 1080p, which is lower than the maximum data rate of standard-definition DVD. Very few people in the world have access to a sustained internet download speed of 8 Mbps, so consider that before you feel shortchanged.
Almost every camera records at a data rate far higher than what the web demands, so don’t let data rate hold you back. What you should be concerned about is introducing unnecessary codecs with monster data rates into your workflow, for no useful reason.
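To get a feel for what these data rates mean on disk, bit rate times duration gives you the file size. A rough sketch (plain Python; it ignores audio and container overhead, and the function name is mine):

```python
# File size in (decimal) megabytes:
# bit rate in Mbps * duration in seconds, divided by 8 bits per byte.
def file_size_mb(mbps, minutes):
    bits = mbps * 1_000_000 * minutes * 60
    return bits / 8 / 1_000_000

# A 10-minute video at YouTube's ~8 Mbps recommendation for 1080p:
print(f"{file_size_mb(8, 10):.0f} MB")  # 600 MB
# The same clip at the ~1 Mbps worldwide live-streaming target:
print(f"{file_size_mb(1, 10):.0f} MB")  # 75 MB
```

Compare either number with what your camera writes to the card and you’ll see how much headroom you really have.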
Finally, we come to audio. I’ve given the minimum requirements for professional audio here. All said and done, the internet can easily take a full 7.1 mix (YouTube already accepts 5.1).
No excuses. Your audio had better be perfect. Remember, cameras don’t do audio, just as cameras don’t do images. A great DP is to audio as a great sound designer is to lighting.
If you can’t afford decent sound, but have spent months researching the best camera for your project, and have spent your last dollar on it – you’ve already lost half the battle for ultimate web quality. Sure, you could ‘get away’ with it, and produce a viral video. Then again, viral videos can be shot on the cheapest webcam.
What makes a great web video?
Here you go:
Great content first. Perfect audio second. The right workflow third. Lighting and production values fourth. Your camera, last. In today’s world, you could pick up a camera from a local store on the way to the shoot. Can you do the same for everything else?
For the more technically minded, here are the specs for great web video:
- 1920×1080 with an aspect ratio of 16:9 and pixel aspect ratio of 1:1
- Progressive video, with a minimum frame rate of 24 fps
- Color space: Rec. 709, with 8 bits per channel
- 10 stops of dynamic range, with a film-like gamma curve
- Good in low light, with a usable ISO of at least 1600.
- Interchangeable lenses, with the ability to shoot at f/2.8 at least, throughout the entire range of focal lengths.
- H.264 codec, in whatever wrapper it comes in, with the best bit rate offered by that camera for this codec. If you need to push your images in post, then opt for an intraframe codec or RAW or uncompressed. No, simple color grading is not a good enough ‘push’.
- 16-bit audio sampled at 48 kHz, 2 channels minimum (stereo), at a minimum bit rate of 128 kbps if compressed. Ideally, you’d want to master your audio uncompressed, which at these settings works out to a bit rate of 1,536 kbps.
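That uncompressed figure is simple arithmetic: PCM bit rate is sample rate times bit depth times channel count. For 48 kHz, 16-bit stereo (plain Python):

```python
# Uncompressed PCM bit rate = sample rate * bit depth * channels.
sample_rate = 48_000  # Hz
bit_depth = 16        # bits per sample
channels = 2          # stereo
kbps = sample_rate * bit_depth * channels / 1000
print(kbps)  # 1536.0
```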
Now all we need to do is find which gear meets these requirements, for the cheapest price. That’s what I’ll cover next.