Post Production

How much Resolution do you Lose with Compression, and does 4K help make 1080p Look Better?

A couple of tests to determine the exact percentage loss of resolution while compressing video, and how 4K can help.

This article attempts to answer two questions:

  • How much resolution is lost in the compression stage?
  • Does shooting at a higher resolution and then downsampling help mitigate this loss?

The answer to the first question should be obvious to anyone who has compressed video. The second question, however, can only be answered faithfully if we have an idea of the degree of loss involved in the first.

Understanding the limits of resolution loss will help us greatly when it is time to deliver the best quality work to our clients or customers.

The test

In order to eliminate lens effects, a vector image (8-bit) was created in two versions:

  • 1920 x 1080
  • 3840 x 2160

Animation was applied to the test images to replicate resolution loss in motion. However, no motion blur was applied, because any loss from motion blur will scale proportionately. Remember, this test is about resolution, not sharpness. The software used was Adobe After Effects CC.

The first composition is pure 1080p. The second is 4K, downscaled to 1080p (nothing fancy, just using the Scale function in AE). Both compositions were rendered to the following formats for testing:

  • Uncompressed TIFF (image sequence)
  • ProRes HQ (QuickTime, QT)
  • H.264 50 Mbps QT
  • H.264 20 Mbps QT
  • H.264 10 Mbps QT
  • H.264 5 Mbps QT

The first is my favorite method of archival. The second is the best intermediary (and nowadays acquisition) codec. The third is roughly how DSLRs record data, while the last is how sites like YouTube and Vimeo display 1080p content. Don't forget, these sites re-compress your already compressed video, which makes things a lot worse. Our goal is to find a formula so we can set limits.
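To see why loss at these data rates is inevitable, compare them with the uncompressed figure. Here is a back-of-the-envelope calculation, assuming 8-bit 4:2:0 at 25 fps (the exact numbers shift with bit depth, chroma subsampling and frame rate):

```python
def uncompressed_mbps(width, height, fps, bits_per_pixel=12):
    """Raw video data rate in Mbps. 8-bit 4:2:0 averages 12 bits per pixel
    (8 for luma plus 4 for the two quarter-resolution chroma planes)."""
    return width * height * bits_per_pixel * fps / 1_000_000

raw = uncompressed_mbps(1920, 1080, 25)  # roughly 622 Mbps
for target in (50, 20, 10, 5):
    print(f"{target:>2} Mbps is roughly {raw / target:.0f}:1 compression")
```

At 5 Mbps the encoder has to throw away more than 99% of the raw data, so something has to give.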

Here’s what a sample frame looks like:


When producing the two vector images, I had to take into account the increased resolution of a 4K sensor over the same area. This means that if a hair is one pixel wide in 1080p, it will be two pixels wide on the same sensor at 4K, everything else being constant. This is why 4K has more resolution, though whether that advantage survives downsampling remains to be seen.
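The scaling relationship described above is simple arithmetic (assuming identical sensor area and framing):

```python
def feature_width_px(width_at_1080p, target_horizontal_res, base=1920):
    """Pixel width of the same physical feature rendered at another
    resolution, assuming identical sensor area and framing."""
    return width_at_1080p * target_horizontal_res / base

# A hair that is 1 px wide in 1080p covers 2 px at 4K (3840 wide).
print(feature_width_px(1, 3840))
```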

Two frames from each video were exported in TIFF. These were further compressed to JPEGs for the purposes of this article. However, the original TIFF frames are available for download here, should you wish to study them personally.

How much resolution do you lose with compression?

To arrive at a numerical percentage for resolution loss, I decided to use lines and circles of different pixel widths. It then becomes a question of how much each line or circle has 'broken', which is simply a matter of counting.
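The article doesn't spell out the exact formula behind its Diff. % column, but one plausible way to turn the counted line widths into a retention percentage is the ratio of the drawn width to the measured width. This is a hypothetical sketch, not the author's stated method:

```python
def retained_pct(original_px, measured_px):
    """Percent of resolution retained for one test line: the drawn width
    divided by the counted width after compression. Both smearing and
    breaking inflate the measured count, lowering the percentage."""
    return 100 * original_px / measured_px

def average_retained(pairs):
    """Average retention over several (original, measured) line widths.
    Averaging is an assumption; the article doesn't state its formula."""
    return sum(retained_pct(o, m) for o, m in pairs) / len(pairs)

# A 1 px line that smears out to 4 px retains 25% of its resolution.
print(retained_pct(1, 4))
```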

Here are the results:

| Image | Format | Width (1 px)* | Width (4 px)* | Width (16 px)* | Diff. % |
|---|---|---|---|---|---|
| 1080p Diagonal | TIFF | 3 | 7 | 31 | 100% |
| | ProRes HQ | 3 | 7 | 31 | 100% |
| | H.264 50 Mbps | 4 | 8 | 32 | 88% |
| | H.264 20 Mbps | 7 | 8 | 35 | 77% |
| | H.264 10 Mbps | 10 | 10 | 35 | 67% |
| | H.264 5 Mbps | 25** | 11 | 38 | 58% |
| 1080p Vertical | TIFF | 2 | 4 | 17 | 100% |
| | ProRes HQ | 2 | 4 | 17 | 100% |
| | H.264 50 Mbps | 2 | 4 | 17 | 100% |
| | H.264 20 Mbps | 4 | 7 | 18 | 70% |
| | H.264 10 Mbps | 4 | 12 | 20 | 57% |
| | H.264 5 Mbps | 28** | 10 | 20 | 51% |


  • *These are the measured widths, even though the lines were drawn at 1 px, 4 px and 16 px. For diagonal lines, it is better not to rely on the absolute values, because the software interpolates pixels to fill in the gaps. Use both for relative percentages; for absolute percentages, use the vertical measures.
  • **At this point large swathes of the line were missing. I’ve counted the largest gap rather than the width.

Now there’s a surprise. Where fine detail is concerned, a 5 Mbps video loses about 10 times the information. On average, as a conservative estimate, it retains barely half the resolution.

What does this mean? If you’re shooting 1080p and putting it up on YouTube, what you’re seeing is closer to standard-definition TV. In fact, going by what we know about resolution and what we saw in the limits of upsampling, we can safely say that any data rate below 20 Mbps is unacceptable for 1080p – if you care about image quality.

The results also go to show how important it is for an acquisition format to exceed 50 Mbps, which is why that figure is mandatory for broadcast quality. Here’s one reason why AVCHD sucks, with its 28 Mbps bit rate. In case it isn’t obvious, the results also include the effects of chroma subsampling, which is unavoidable with these codecs.

Okay, let’s move on to the second part of this test.

Results of downsampling 4K to 1080p – does it help?

The background image should not be used for comparisons in this second test. Here are the results:

| Image | Format | Width (1 px) | Width (4 px) | Width (16 px) | Diff. % |
|---|---|---|---|---|---|
| 4K Diagonal | TIFF | 3 | 9 | 31 | 100% |
| | ProRes HQ | 3 | 9 | 31 | 100% |
| | H.264 50 Mbps | 4 | 10 | 32 | 89% |
| | H.264 20 Mbps | 8 | 10 | 32 | 79% |
| | H.264 10 Mbps | 8 | 10 | 33 | 78% |
| | H.264 5 Mbps | 19 | 12 | 34 | 66% |
| 4K Vertical | TIFF | 2 | 5 | 17 | 100% |
| | ProRes HQ | 2 | 5 | 17 | 100% |
| | H.264 50 Mbps | 3 | 5 | 17 | 93% |
| | H.264 20 Mbps | 4 | 5 | 18 | 87% |
| | H.264 10 Mbps | 8 | 8 | 20 | 64% |
| | H.264 5 Mbps | 20 | 8 | 20 | 60% |

It must come as no surprise that downsampling 4K to 1080p follows a similar pattern, with one notable exception:

[Chart: 4K vs 1080p]

When things are great (at the higher data rates), it hardly matters whether you downsample or not. After all, resolution is resolution. However, the more you compress, the better downsampled 4K footage holds detail compared to 1080p. By how much? Not much, really, but let me put it this way: if the compression limit was 20 Mbps for 1080p, it is only 10 Mbps for 4K downsampled.

This means, for Internet video streaming at less than 10 Mbps, it is always better to shoot 4K and downsample to 1080p – no exceptions.
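One intuition for why downsampling helps: every output pixel averages four source pixels, which suppresses per-pixel compression noise. Here is a simplified 2x2 box-average sketch; note that After Effects' Scale actually uses bilinear/bicubic resampling, so this is an illustration, not AE's filter:

```python
def box_downsample_2x(pixels):
    """Downsample a grayscale image by 2x, averaging each 2x2 block.
    `pixels` is a list of rows; width and height must be even.
    A simplified stand-in for a real resampling filter."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = (pixels[y][x] + pixels[y][x + 1] +
                     pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(block / 4)
        out.append(row)
    return out

# A one-pixel-wide white line on black survives the downsample as a
# half-intensity line rather than vanishing entirely.
img = [[0, 255, 0, 0],
       [0, 255, 0, 0],
       [0, 255, 0, 0],
       [0, 255, 0, 0]]
print(box_downsample_2x(img))
```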


  • If you’re shooting 1080p, never compress below 20 Mbps, or you’re effectively watching standard definition.
  • For Internet video, shoot 4K and downsample always.
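The two rules of thumb above can be sketched as a small helper; the 20 Mbps and 10 Mbps thresholds are the article's own figures for 1080p and 4K-downsampled delivery:

```python
def delivery_advice(target_mbps):
    """Map a delivery bitrate to the article's recommendations:
    below 20 Mbps, 1080p acquisition degrades visibly; below 10 Mbps,
    always shoot 4K and downsample."""
    if target_mbps >= 20:
        return "1080p acquisition holds up"
    if target_mbps >= 10:
        return "prefer 4K downsampled to 1080p"
    return "shoot 4K and downsample, no exceptions"

print(delivery_advice(5))
```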

What do you think? Does this match your experience?

3 replies on “How much Resolution do you Lose with Compression, and does 4K help make 1080p Look Better?”

Good article. But Sareesh makes some important points.

Also, what about variances in down-sample method when resizing 4K to 1080, for example: the post-processing, the software being used, scaling on an edit timeline vs re-encoding first, the specific H.264 encoder being used (QuickTime/x264/MainConcept), etc. Not to mention consumer TV downsampling/upscaling and the huge amount of processing that goes on there!

Alex_Rus To answer your first question: it depends on what you mean by detail. A fully random-sampled image cannot be compressed, because any compression works on redundant data, whatever form it may take. Random data cannot be recreated mathematically. Detail can also be high-frequency detail like clothes, leaves, etc., or low-frequency like a clear blue sky. What I’ve seen in practice is that codecs like H.264 and MPEG-2 are brilliant at leaving high-frequency detail alone, while the weakness is obviously low-frequency gradations with low detail – like smoke, skin, etc. If this is what you meant by detail, then yes, but subject to:
a. A VBR interframe codec – the algorithm has the freedom to pick and choose
b. Keyframes at every cut.
For video with lots of gradations and smoke, I prefer CBR, because it forces the algorithm to pay more attention to finer gradations.
To answer your second question: I don’t know for sure. I don’t add grain, and don’t like grain. Never did. The preservation of detail is secondary to the emotional content of the video, along with motion, audio and editing. It’s different for stills and video. In video you have motion blur, and this is a huge resolution eater. That is why an APS-C sensor can go up to 24 MP while Super 35mm film only holds 4K (10 MP) resolution.
Also, don’t forget that moving images themselves look sharper. The higher the frame rate, the sharper it looks. Between all these moving parts I’m not sure the contribution of grain will affect the overall result, or at least I haven’t seen a single example to this effect in video. Then again, 99.9% of video is restricted to a computer monitor or HDTV.

Does that mean that the more actual detail there is in the image, say per 100 pixels, the less the codec will compress it?
And if so, will adding grain to 1080p footage help by creating false detail for the codec to hold on to?
