What’s Stopping us from having a Universal Codec that will Benefit Humanity?

In Understanding Intellectual Property, Copyright, and Patents and Understanding Software Licenses I’ve tried to simplify the terms as I see them, and have tried to deliver a (hopefully) objective assessment of the landscape as it stands today. In this article I’ll carry that forward, but also add my own thoughts, and try to survey the codecs available to us and our audiences. This article is not an objective assessment.

What’s the point of this exercise? I want to discover the issues involved in selecting a particular codec for a given kind of work, and to see if there is any codec I can use liberally without fear of prosecution.

Important: This article is only a brief and overly simplified overview of the concepts, as I understand it. My definitions and explanations may be inaccurate, may represent my personal views and prejudices, and are not meant to be legal and/or practical advice about licensing, patents, technology or intellectual property. Please consult a lawyer before taking any action. Just treat this article as one more opinion on the state of affairs. And don’t forget these terms mean different things in different countries.


My feelings on patents

Let us look at two quotes that, I believe, sum up the sentiments for and against patents:

The invention all admired, and each, how he to be the inventor missed; so easy it seemed once found, which yet unfound most would have thought impossible.

  – John Milton, poet and essayist

That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation. Inventions then cannot, in nature, be a subject of property.

– T. Jefferson, Letter to Isaac McPherson, 1813.

I like to boil the issues down to two feelings (Yes, feelings, not logic):

Man as Inventor: The person who creates something. The harder the work involved, the greater the bond between creator and product. Therefore, it is naive to think any human will take kindly to being robbed of a creation. It’s human nature.

Man as Marketer: This is where the inventor forgets to show the same consideration he or she demands for himself or herself. When you publicly market a product, you’re saying: “Covet this! Covet this! Covet this! It’s worth it!!” You’re using a psychological tool to create desire in the minds of those who never knew they wanted your product. The desire becomes so great, that those who can afford it will purchase it, and those who can’t might try to steal it.

It’s like how movie stars market themselves. Throughout their careers they hunt for publicity. They plant stories that the naive public believes to be true. Will they, or won’t they? It happens once too often, and the paparazzi cannot bear to wait for the next planted piece to arrive, and must infiltrate the star’s personal space to see for themselves.

When you create desire, you will also create a lot of unfulfilled desire, which leads to envy. Those who cannot purchase the latest camera will envy those who can. Imagine how that affects their relationship.

The problem is, both are different and possibly contradictory facets of human nature. A patent, then, becomes a placebo to a man who wants to deal with two irrational sides of his nature and wants to get away clean. It’s impossible. Every invention has been copied. You can’t create a product, market it, and then hope nobody will want to copy it or profit from it just because there is a law against it. And the more you resist, the greater will be the push. This is also human nature.

Who has sympathy for the movie star who wants to invade your mental space in every which way, while keeping you out of theirs?

I believe the simpler way to deal with both sides of human nature is to satiate both desires. If you can acknowledge the creator(s) – a single human being or a group of human beings – as the holder of the patent, and that they shall have the right to personally exploit their intellectual property as the first mover, then that is as good as it can get.

Once the product or property hits public consciousness, then all bets are off. This also means, that no company, government, corporation or entity can hold on to any patents, simply because they are incapable of creating anything. If by law, individuals within corporations are not liable for their actions, then how can the reverse be true?

This is my position in a nutshell:

  • Give the person or persons who actually invented something credit for posterity. If patents are that important, give everyone a Nobel Prize.
  • Give them first-mover advantage. This should be easy because they already have the product ready in the market, which is when it becomes public. If someone else invents the exact same thing independently before the product hits the market, they can patent it too. They will be allowed to market their product exactly the same number of days or hours as the patents are apart.
  • You don’t have to give away your secret sauce. If someone’s smart enough to reengineer your product by taking it apart, so be it.
  • Once a product hits the market, then the patent lapses, and the product enters the public domain.
  • The differentiator between similar looking products is the trademark. It’s like the service industry, which does not sell products, but services.

To those who think my idea is crap: maybe it is. I’m no patent or intellectual property expert. What I do know, however, is that regardless of any man or entity’s efforts, history has shown that ideas will spread, one way or another (did anyone else picture Jeff Goldblum while reading this sentence?).

To the corporations who want to behave like the kings of old – who chopped off the hands of their artisans to ensure they didn’t make copies, or who pulled out the eyes of those who dared to steal a glimpse of the ‘most beautiful’ – after having gone to the extreme of instigating man’s desire for the object created, woe on you.

And to those who are small, afraid that their inventions will be usurped and sold for a profit by large corporations, you entered the jungle and wanted to dance with a wild elephant. Did you really expect the elephant to follow your lead? Even with patents, how many days can you survive in a court of law going toe to toe with a monster whom you provoked with your act of creation? The law is on your side. Good and expensive lawyers might not be.

How important is quality to you?

If you spend a lot of time and effort to use the best tools, and work with the best professionals, you are guaranteed some level of technical quality (let’s forget about artistic quality, because that’s not the point of this article). Isn’t it in your best interest to preserve this quality for your intended audience?

Or, don’t you care? Many filmmakers make corporate videos, commercials, wedding videos, movies, etc., just to earn a buck. The client’s budget always dictates the kind of quality they will receive. Even by those standards, a filmmaker can screw up on quality by:

  • Acts of omission – they don’t know how to use the tools to get the best technical results.
  • Acts of commission – they use poorer tools and rip off a client who isn’t too discerning.

I am not speaking to the latter group. I never will. My concern is with those individuals who always believe it is in their interests to deliver the best technical quality possible within reason. After all, we don’t have total say in the kinds of cameras, software and codecs we use.

The Master and the Deliverables

As I’ve written in an overview of the different stages of Post Production, I define a Master as ‘the best possible representation of your work’. I have also said why I believe TIFF is the best possible format to preserve your Master.

The problem is not with the Master, but with the Deliverables. Deliverables are those copies you create from the Master, to distribute to your intended audience. Examples of deliverables include:

  • DVDs and Blu-ray
  • Internet video
  • DCP
  • HDCAM SR tape, and so on.

Why is it important? Simple. It’s the version your audience gets to watch. They never see the Master, so if you believe the fullest quality of your work should be seen by your audience, then you must choose a delivery format that ensures this is so.

The problem is, your audience is not always educated on what good technical quality is. Usually they don’t even care. They are too busy living to care about codecs. It is this lack of interest that allowed cool but regressive technologies like interlacing, chroma-subsampling, Rec. 709, sRGB, H.264, MPEG-2, etc. to flourish. It’s up to you to decide whether you care or not, and that makes all the difference.

Now, I’m going to assume you have chosen whatever compression or file format is appropriate to your workflow, and you are happy with the results. In other words, you agree that your Master is the best representative of your work. From now on it’s about the deliverables.


Compression is here to stay. Due to the lack of Internet infrastructure and bandwidth, the only way to enjoy videos is if they are compressed. If you’re waiting to stream your TIFF image sequence you’ll be waiting for decades.
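To see why, it helps to run the numbers. Here’s a minimal Python sketch of the raw data rate of an uncompressed 1080p stream; the frame size, bit depth and frame rate are illustrative assumptions, not figures from any particular camera or format:

```python
# Back-of-the-envelope data rate for streaming an uncompressed 1080p
# image sequence. The parameters below are illustrative assumptions.

def stream_rate_mbps(width, height, bytes_per_pixel, fps):
    """Raw (uncompressed) video data rate in megabits per second."""
    bytes_per_second = width * height * bytes_per_pixel * fps
    return bytes_per_second * 8 / 1e6  # bytes -> bits -> Mbps

# 1080p, 8-bit RGB (3 bytes per pixel), 24 frames per second
print(round(stream_rate_mbps(1920, 1080, 3, 24)))  # 1194 Mbps
```

At nearly 1.2 Gbps for plain 8-bit 1080p, before you even get to higher bit depths or resolutions, an uncompressed stream is far beyond today’s typical connections, which is exactly the point.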

You don’t need to understand the intricacies of compression technology. All you need to be concerned about is the end result. Here’s my personal take:

Any compression that visually worsens the Master (be that in color, dynamic range, tonality, sharpness, resolution, audio, whatever) is unacceptable. The only acceptable result of compression is a reduction in file size or data rate.

Who is the guardian of this principle? I am, in my case. In your case, it should be you. As guardians of your content, the physical manifestation of which is the Master, it is your responsibility to ensure the quality is retained over any deliverables that are produced. This means, simply, that you must have total control over the compression and delivery process.

Is that the case? No. Then shouldn’t we get to it?

Lower file size or faster bandwidth?

Let’s conduct a thought experiment with some actual data. Ever heard of Nielsen’s law of Internet bandwidth? It states that a high-end user’s Internet bandwidth grows by about 50% every year. The difference between theory and actual data is as follows:

Now, in these two articles I had said that 1080p must not go below 50 Mbps if good visual quality is to be maintained. Ideally, it should be 100 Mbps intra-frame. Here are the maximum data rates of three major networks at 1080p:

  • YouTube – about 5 Mbps
  • Vimeo – about 5 Mbps
  • Netflix – about 7 Mbps

Red 4K content is capped at 20 Mbps, which is roughly four times the 5 Mbps 1080p rate.

In order to stream 5 Mbps without glitches, you probably need a steady 8 Mbps Internet connection, with no other downloads happening in the background. What about 100 Mbps and 400 Mbps:

  • 100 Mbps (1080p) – you’ll need 160 Mbps
  • 400 Mbps (4K) – you’ll need 640 Mbps
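The figures above follow from a simple rule of thumb: provision roughly 1.6× the stream’s data rate. The 1.6 factor is my assumption, taken from the 8 Mbps / 5 Mbps example; a quick sketch:

```python
# Headroom rule of thumb: to stream without glitches, provision roughly
# 1.6x the stream's data rate. The 1.6 factor is an assumed safety
# margin (8 Mbps connection for a 5 Mbps stream), not a standard.
HEADROOM = 1.6

def required_connection_mbps(stream_rate_mbps):
    """Connection speed needed to stream a given data rate glitch-free."""
    return stream_rate_mbps * HEADROOM

for rate in (5, 100, 400):
    print(f"{rate} Mbps stream -> {required_connection_mbps(rate):.0f} Mbps connection")
```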

We’re rapidly heading for a 4K and even 8K future, possibly in our own lifetimes, so we cannot neglect either of them. Let’s compare the rate of Internet bandwidth growth against the ‘ideal’ data rates for different resolutions:

[Chart: Internet Bandwidth vs Resolution Trends (Y-axis in Mbps)]

What the graph shows is, Internet bandwidth capability will overtake video data streaming requirements (400 Mbps for 4K) by 2030.

Before we get carried away, let’s understand that the bandwidth growth is based on modem technology. Even though all of us have at least a 100 Mbps modem and a 1 GbE Ethernet link, most of us are lucky to have a data plan of 4 Mbps. However, even at this rate, by 2030, we should have sufficient bandwidth for even 8K data rates!

Internet speeds have gone up substantially over the last five years, and will continue to climb as better infrastructure is put into place. Even if the governments and service providers lag behind (which is only to be expected) in setting up the right infrastructure, it is clear that by 2030 everyone who is able to stream 1080p today should also be able to stream 4K, and that too at 400 Mbps. This means, we should fully expect Internet speeds of 1 Gbps by 2030. That’s only about 15 years away.
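The projection is simple compound growth. Here’s a sketch, assuming a 4 Mbps plan today and Nielsen’s 50% annual growth (both assumptions taken from the figures above):

```python
# Nielsen's law: a high-end user's connection speed grows ~50% per year.
# The 4 Mbps starting point is the assumed typical data plan of today.
GROWTH = 1.5  # 50% per year

def years_until(target_mbps, current_mbps=4.0):
    """Whole years of 50% compound growth until bandwidth hits the target."""
    years = 0
    while current_mbps < target_mbps:
        current_mbps *= GROWTH
        years += 1
    return years

print(years_until(100))   # 8  -> 100 Mbps (1080p) in ~8 years
print(years_until(400))   # 12 -> 400 Mbps (4K) in ~12 years
print(years_until(1000))  # 14 -> ~1 Gbps in ~14 years, i.e. around 2030
```

Even from a modest 4 Mbps baseline, compounding at 50% a year crosses the 400 Mbps mark in about 12 years, which is why the 2030 estimate holds.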

Now think about this: is there any point in compressing data, especially video, even further, when Internet bandwidth is bound to catch up, just like it did with images and music? Isn’t it more productive to stick to your guns and try to maintain the best visual quality possible? If everyone’s already groaning at having to convert mountains of H.264 video to H.265, why would we want to do it all over again 15 years from now?

Here’s my take:

Even as you compress to H.264 at 8–10 Mbps for a YouTube or Vimeo upload today, always be ready for a 100 Mbps future for 1080p, and a 400 Mbps future for 4K.

The codecs of the future

So, where are our ‘leaders’ taking us? I’m not sure it’s a place we want to go. Here’s a list of codecs in the running for a place in the future:

Codec               Last Version  License      Patents  Lossless?
H.264 and variants  Up to date    Proprietary  Yes      No
HEVC/H.265          Up to date    Proprietary  Yes      No
MPEG-2              2006          Proprietary  Yes      No
Dirac/Schrödinger   2012          MIT/GPL      No       Yes
Huffyuv             2003          GPL 2        No       Yes
Lagarith            2011          GPL 2        No       Yes
Theora              2009          BSD          Yes      No
VP9                 Up to date    BSD          Yes      Unknown
Daala               Up to date    FOSS         No       Unknown
ProRes              Up to date    Proprietary  Yes      No

The only three known patent-free codecs that are somewhat modern are Schrödinger, Lagarith and Daala, of which the last is still in development with a goal to be better than H.265. The codec that shows (showed?) greatest promise is Dirac, which has the following phenomenal features:

  • Wavelet compression
  • Unlimited data rate
  • Unlimited resolution
  • Variable frame rate video
  • Any color space
  • AVI, OGG and MKV container support
  • Supported on VLC player, OggConvert, FFmpeg, etc.

There are tests on the Internet, like this one (Dirac, Dirac Pro vs Theora, 2009), which concludes with the statement:

Both Dirac and Theora have been developed to be state-of-the-art technology. However, my experiments have shown that they are in fact not even close to the cutting edge. Also, what is cutting edge? H.264 is from 2003, and Motion JPEG2000 is from 2001. That’s ages in the ICT development. Dirac and Theora are comparable to MPEG-2 and H.263+, at best. That’s the previous century.

But take away the need to reduce file sizes and this is no longer an issue! On the other hand, media ‘giants’ like Google and MPEG have decided that smaller is better, and they are waiting gleefully for the day they will charge royalties for VP9 and H.265. Royalties for encoding, decoding and who knows what else.

With all that I’ve written in this article, the crux of the matter is as follows:

  • Do we even need heavier compression at all?
  • Should we support proprietary codecs that we cannot use for personal projects?
  • Should we stop using visually lossy codecs at all?

These are questions you need to keep in mind while encoding your next project. If you think the services you upload your videos to freely are working for your benefit, think again. Here’s an article from Ars Technica on how Google has managed to maneuver an open-source project called Android into a closed system.

I’m all for paying for the right to encode to a better codec. But what’s the point of encoding rich content into H.264 or VP9 or H.265? By the time these codecs really hit the mainstream it’ll be around 2020. Remember what I said will happen by 2030?

There is one case where smaller data rates are important: if somebody is running a service, the smaller the data rate, the more videos they can store per terabyte. But if each filmmaker has access to a 1 Gbps Internet connection backed by fast SSD drives, then we won’t need to host our videos on someone else’s servers. All we have to fight for (actually, negotiate) is the price we pay for bandwidth. I believe that’s a worthier fight than the fight to compress our work even further. It’s bad enough as it is.

And don’t confuse free or cheap video uploads with owning your content. Read the fine print. Who really owns your content after you have uploaded it? Now think about this: if the entire exercise of writing, producing and distributing a movie is given a score of 100%, what is the value of the compression codec used? If you’re not able to arrive at a number, think of Titanic. How important to Titanic was the camera used, or the film stock? A compression codec, in the grand scheme of things, has an importance of less than 1%. Are you willing to sign away the rights to your work for that 1%?

The people in a position to make a difference are the content creators, who, using the power and potential of the Internet, no longer have to be slaves to large corporations. Adopt patent-free, open source codecs that look to the future. Never compromise on quality. Shoot 6K and compress to 1 Mbps? Screw that. The right way forward is to increase Internet bandwidth and renegotiate prices. The whole world develops and gains from this. Nobody except the corporations wins with heavily compressed codecs.

You decide.

Those who cannot remember the past are condemned to repeat it – George Santayana

2 replies on “What’s Stopping us from having a Universal Codec that will Benefit Humanity?”

  1. Blackandwhitecat: I don’t see how any compromise being incorporated into a ‘standard’ isn’t regressive. Especially since it isn’t marketed as a compromise. I have another article that goes into the ideal archival format. It might shed some light on what I mean.

  2. Interesting post, however there are a couple of big omissions in your codec comparison: jpeg2000 (can be lossy or lossless) and ffv1. Ffv1 is free and open source. It would be worth looking into these.
    I’m also curious about this statement: “cool but regressive technologies like interlacing, chroma-subsampling, Rec. 709, sRGB, H.264, MPEG-2, etc. to flourish.” Could you expand on this? I wouldn’t have thought of them as regressive, just making compromises according to what was possible to do at the time, but I’m interested in other viewpoints.
    Thank you!

Comments are closed.