How Much Resolution Do You Lose with Compression, and Does 4K Help Make 1080p Look Better?

This article attempts to answer two questions:

  • How much resolution is lost in the compression stage?
  • Does shooting at a higher resolution and then downsampling help mitigate this loss?

The answer to the first question should be pretty obvious to anyone who has compressed video. The second question, however, can only be answered faithfully if we have an idea of the degree of the loss in the first.

Understanding the limits of resolution loss will help us greatly when it is time to deliver the best quality work to our clients or customers.


The test

In order to eliminate lens effects, an 8-bit vector test image was created in two versions: one at 1080p and one at 4K.

Animation was applied to the test images to replicate resolution loss in motion. However, no motion blur was applied, because any loss caused by motion blur will scale proportionately. Remember, this test is about resolution, not sharpness. The software used was Adobe After Effects CC.

The first composition is pure 1080p. The second is 4K, downscaled to 1080p (nothing fancy, just using the Scale function in AE). Both compositions were rendered to the following formats for testing:

  • Uncompressed TIFF (image sequence)
  • ProRes HQ (QuickTime, QT)
  • H.264 50 Mbps QT
  • H.264 20 Mbps QT
  • H.264 10 Mbps QT
  • H.264 5 Mbps QT

The first is my favorite archival format. The second is the best intermediate codec (and, these days, an acquisition codec as well). The third is roughly how DSLRs record data, while the last is how sites like YouTube and Vimeo deliver 1080p content. Don’t forget, these sites re-compress your already compressed video, which makes things a lot worse. Our goal is to find a formula so we can set limits.
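The renders here were done out of After Effects. If you want to reproduce a similar encode ladder without AE, here is a minimal sketch using ffmpeg driven from Python – the file names and options are my assumptions, not the exact render settings used for this test.

```python
# A minimal sketch of a comparable encode ladder, assuming ffmpeg is installed
# and "master.mov" is a hypothetical lossless export of the test composition.
import os
import subprocess

MASTER = "master.mov"

def render(args):
    subprocess.run(["ffmpeg", "-y", "-i", MASTER] + args, check=True)

# TIFF image sequence (lossless reference frames)
os.makedirs("tiff_frames", exist_ok=True)
render(["tiff_frames/frame_%05d.tif"])

# ProRes 422 HQ (profile 3) in a QuickTime container
render(["-c:v", "prores_ks", "-profile:v", "3", "prores_hq.mov"])

# H.264 at the four data rates tested
for mbps in (50, 20, 10, 5):
    rate = f"{mbps}M"
    render(["-c:v", "libx264", "-b:v", rate, "-maxrate", rate,
            "-bufsize", rate, f"h264_{mbps}mbps.mov"])
```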

Here’s what a sample frame looks like:

[Sample frame from the test chart]

When producing the 4K version of the same vector image, I had to take into account the increased resolution of a 4K sensor over the same area. This means that if a hair is one pixel wide in 1080p, it will be two pixels wide on the same sensor at 4K, everything else being constant. This is why 4K has more resolution, though whether that advantage survives downsampling remains to be seen.
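As a quick sanity check of that arithmetic (assuming ‘4K’ here means UHD, 3840×2160 – DCI 4K at 4096 pixels wide would give a factor of roughly 2.13):

```python
# Same frame area, twice the pixel count across: every stroke doubles in width.
width_1080p = 1920
width_uhd = 3840                  # assuming "4K" means UHD (3840 x 2160)

scale = width_uhd / width_1080p   # 2.0
for px in (1, 4, 16):             # the line widths used in the 1080p chart
    print(f"{px} px at 1080p -> {px * scale:.0f} px at 4K")
```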

Two frames from each video were exported in TIFF. These were further compressed to JPEGs for the purposes of this article. However, the original TIFF frames are available for download here, should you wish to study them personally.

How much resolution do you lose with compression?

To arrive at a numerical percentage for resolution loss, I decided to use lines and circles of different pixel widths. It then becomes a question of discovering how much each line or circle has ‘broken’ – which is a matter of counting.
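The counting here was done by hand on the exported frames. If you prefer to automate it, the sketch below shows the same idea with numpy and Pillow: threshold one row of pixels that crosses the test line, then measure how far the line has spread and how large its biggest break is. The file name, row and threshold are illustrative assumptions, not part of the original test.

```python
import numpy as np
from PIL import Image

def line_stats(path, row, threshold=128):
    """Measure how a nominally thin dark line has fared along one image row.

    Returns (measured_width, largest_gap) in pixels: 'measured_width' is the
    span from the first to the last dark pixel, and 'largest_gap' is the
    longest run of light pixels inside that span (how badly the line 'broke').
    """
    gray = np.asarray(Image.open(path).convert("L"))
    dark = gray[row] < threshold              # True where the line's ink is
    idx = np.flatnonzero(dark)
    if idx.size == 0:
        return 0, 0                           # line completely gone here
    width = int(idx[-1] - idx[0] + 1)
    gap = largest = 0
    for lit in ~dark[idx[0]:idx[-1] + 1]:     # light pixels inside the span
        gap = gap + 1 if lit else 0
        largest = max(largest, gap)
    return width, largest

# Example with a hypothetical exported frame:
# print(line_stats("h264_5mbps_frame_0001.tif", row=540))
```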

Here are the results:

Image            Format          Width (1 px line)*  Width (4 px line)*  Width (16 px line)*  Diff. %
1080p Diagonal   TIFF            3                   7                   31                   100%
1080p Diagonal   ProRes HQ       3                   7                   31                   100%
1080p Diagonal   H.264 50 Mbps   4                   8                   32                   88%
1080p Diagonal   H.264 20 Mbps   7                   8                   35                   77%
1080p Diagonal   H.264 10 Mbps   10                  10                  35                   67%
1080p Diagonal   H.264 5 Mbps    25**                11                  38                   58%
1080p Vertical   TIFF            2                   4                   17                   100%
1080p Vertical   ProRes HQ       2                   4                   17                   100%
1080p Vertical   H.264 50 Mbps   2                   4                   17                   100%
1080p Vertical   H.264 20 Mbps   4                   7                   18                   70%
1080p Vertical   H.264 10 Mbps   4                   12                  20                   57%
1080p Vertical   H.264 5 Mbps    28**                10                  20                   51%

Notes:

  • *The columns are labelled with the widths the lines were drawn at (1 px, 4 px and 16 px); the values in the table are the widths as measured in each render. In the case of the diagonal lines, it is better not to rely on the absolute values, because the software interpolates them to fill in the gaps. For relative percentages, use both; for absolute percentages, use the vertical measurements.
  • **At this point large swathes of the line were missing. I’ve counted the largest gap rather than the width.
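The article doesn’t spell out exactly how the Diff. % column was calculated, so the snippet below is only one plausible way to collapse the three measured widths into a single retention figure – comparing each against the uncompressed TIFF reference and averaging the ratios. It gets close to the published numbers but does not reproduce them exactly.

```python
# One plausible retention metric (not necessarily the article's formula):
# the average of reference_width / measured_width across the three lines.
def retention(reference, measured):
    return sum(r / m for r, m in zip(reference, measured)) / len(reference)

tiff_1080_diagonal = (3, 7, 31)      # uncompressed reference widths
h264_50_diagonal = (4, 8, 32)        # H.264 50 Mbps measurements
print(f"{retention(tiff_1080_diagonal, h264_50_diagonal):.0%}")  # ~86%; the table says 88%
```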

Now there’s a surprise. If you’re looking at the finest detail, a 5 Mbps video degrades it by roughly a factor of ten – the 1 px line spreads and breaks from 2–3 px to 25–28 px. On average, as a conservative estimate, it retains barely half the resolution of the uncompressed original.

What does this mean? It means that if you’re shooting 1080p and putting it up on YouTube, what you’re effectively watching is standard-definition TV. In fact, for 1080p video, going by what we know about resolution and what we saw about the limits of upsampling, we can safely say that any data rate below 20 Mbps is unacceptable – if you’re a fan of image quality.

The results also go to show how important it is for an acquisition format to go beyond 50 Mbps, which is why that is mandatory for broadcast quality. Here’s one reason why AVCHD sucks, with its 28 Mbps bit rate. Just in case it isn’t obvious, the results also include the effects of chroma subsampling, which is unavoidable with these codecs.
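One way to put these bit rates into perspective is to work out how many bits each leaves per pixel, per frame, at 1080p. The 25 fps frame rate below is an assumption; at 24 or 30 fps the picture barely changes.

```python
# Bits available per pixel, per frame, at 1080p (assuming 25 fps).
pixels = 1920 * 1080
fps = 25

for mbps in (5, 10, 20, 28, 50):     # 28 Mbps being AVCHD's ceiling
    bits_per_pixel = (mbps * 1_000_000) / (pixels * fps)
    print(f"{mbps:>2} Mbps -> {bits_per_pixel:.2f} bits per pixel")

# 5 Mbps leaves about 0.10 bits per pixel; 50 Mbps leaves about 0.96.
```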

Okay, let’s move on to the second part of this test.


Results of downsampling 4K to 1080p – does it help?

The background image should not be used for comparisons in this second test. Here are the results:

Image            Format          Width (1 px line)  Width (4 px line)  Width (16 px line)  Diff. %
4K Diagonal      TIFF            3                  9                  31                  100%
4K Diagonal      ProRes HQ       3                  9                  31                  100%
4K Diagonal      H.264 50 Mbps   4                  10                 32                  89%
4K Diagonal      H.264 20 Mbps   8                  10                 32                  79%
4K Diagonal      H.264 10 Mbps   8                  10                 33                  78%
4K Diagonal      H.264 5 Mbps    19                 12                 34                  66%
4K Vertical      TIFF            2                  5                  17                  100%
4K Vertical      ProRes HQ       2                  5                  17                  100%
4K Vertical      H.264 50 Mbps   3                  5                  17                  93%
4K Vertical      H.264 20 Mbps   4                  5                  18                  87%
4K Vertical      H.264 10 Mbps   8                  8                  20                  64%
4K Vertical      H.264 5 Mbps    20                 8                  20                  60%

It must come as no surprise that downsampling 4K to 1080p follows a similar pattern, with one notable exception:

[Chart: downsampled 4K vs native 1080p]

When things are good (at the higher data rates), it hardly makes a difference whether you downsample or not. After all, resolution is resolution. However, the harder you compress, the better downsampled 4K footage holds detail compared to native 1080p. By how much? Not much, really, but let me put it this way: if the limit to compression was 20 Mbps for 1080p, it is only 10 Mbps for downsampled 4K.

This means, for Internet video streaming at less than 10 Mbps, it is always better to shoot 4K and downsample to 1080p – no exceptions.
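If you would rather not do the downsample inside After Effects or your NLE, one common alternative is to let ffmpeg handle it at encode time. The sketch below uses Lanczos scaling and a 20 Mbps H.264 target – reasonable choices, but assumptions on my part, not the settings used in this test.

```python
# Sketch: downsample a hypothetical UHD master to 1080p before uploading.
import subprocess

subprocess.run([
    "ffmpeg", "-y", "-i", "master_uhd.mov",
    "-vf", "scale=1920:1080:flags=lanczos",   # high-quality downscale
    "-c:v", "libx264", "-b:v", "20M",
    "-pix_fmt", "yuv420p",                    # 4:2:0 chroma subsampling, as noted above
    "downsampled_1080p.mp4",
], check=True)
```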

Takeaways:

  • If you’re shooting 1080p, never compress below 20 Mbps, or you’re effectively watching standard definition.
  • For Internet video, shoot 4K and downsample always.
  • Read this next, because for Internet delivery, a lot of us first master in a lossy format, and then Youtube or Vimeo adds another layer of compression.

What do you think? Does this match your experiences?

 

3 replies on “How Much Resolution Do You Lose with Compression, and Does 4K Help Make 1080p Look Better?”

  1. Good article. But Sareesh makes some important points.

    Also, what about variations in down-sampling method when resizing 4K to 1080, for example: the post-processing, the software being used, downscaling on an edit timeline vs. re-encoding first, the specific H.264 encoder being used (QuickTime/x264/MainConcept), etc. Not to mention consumer TV downsampling/upscaling and the huge amount of processing that goes on there!

  2. Alex_Rus – To answer your first question: it depends on what you mean by detail. A fully random-sampled image cannot be compressed, because any compression works on redundant data, whatever form it may take. Random data cannot be recreated mathematically. Detail can also be high-frequency detail like clothes, leaves, etc., or low-frequency detail like a clear blue sky. What I’ve seen in practice is that codecs like H.264 and MPEG-2 are brilliant at leaving high-frequency detail alone, while their weakness is obviously low-frequency gradations with little detail – like smoke, skin, etc. If this is what you meant by detail, then yes, but subject to:
    a. a VBR interframe codec – the algorithm has the freedom to pick and choose
    b. keyframes at every cut.
    For video with lots of gradations and smoke, I prefer CBR, because it forces the algorithm to pay more attention to finer gradations.

    To answer your second question: I don’t know for sure. I don’t add grain, and don’t like grain. Never did. The preservation of detail is secondary to the emotional content in the video, along with motion, audio and editing. It’s different for stills and video. In video you have motion blur, and this is a huge resolution eater. That is why an APS-C sensor can go up to 24 MP while Super 35mm film only holds 4K (10 MP) of resolution.

    Also, don’t forget that moving images themselves look sharper. The higher the frame rate, the sharper it looks. Between all these moving parts I’m not sure the contribution of grain will affect the overall result, or at least I haven’t seen a single example to this effect in video. Then again, 99.9% of video is restricted to a computer monitor or HDTV.

  3. Does that mean that the more actual detail there is in the image (say, per 100 pixels), the less the codec will compress it?
    And if so, will adding grain to 1080p footage help by creating false detail for the codec to hold on to?
