Cinematography Jargon Explained

What is ACES (Academy Color Encoding System)?

The simplest explanation of what ACES is, and why you should learn more about it now.

Accuracy is important, but consistency is critical!

– from the Image Interchange Framework Presentation

If you’re a cinematographer, editor or colorist, you’ll have heard of ACES (which stands for Academy Color Encoding System). This article explains what ACES is, and why it is important that you start looking into it.



What is the goal of ACES?

Preserving art is the goal. In their own words:

ACES is the final movie with the full fidelity of the original source material

Today, a typical workflow is to master to the required delivery format. E.g., if you make a movie and your intended goal is a theatrical release, you will master for DCI or its equivalent. On the other hand, if your project is a simple web video, your master might be an H.264 file, or sometimes a ProRes or DNxHD file.

Whatever your current ‘master’ workflow, it shares the following common disadvantages:

  • Color space is fixed and dependent on current technology
  • Encoding is irreversible
  • Dynamic range is limited by display technology

What if you want to preserve your video and need to re-release it at some point many years from now? We can already see a sample of this problem when we watch badly scanned movies from the past. The ‘usual’ solution is a restoration and/or a complete rescan, with the best technology possible. But, there’s a problem.

Film is on its way out. Today’s movies don’t have the luxury of being on film. Nor do they have the luxury of being preserved as RAW files, simply because RAW files cannot have effects, transitions and other important data. The most common delivery formats (H.264 and MPEG-2 in Rec. 709, Rec. 2020, DCI P3, etc.) are based on current technology. When technology improves, we will be left holding a sub-par version of our original movie.

To avoid this problem, and to embrace digital technology to its fullest ‘unknown’ potential, a new methodology is required – one that will preserve the ‘best possible copy’ of our art. Enter ACES.

What is ACES?

ACES is an encoding system that tries to take in as much dynamic range and color information as possible, so you can never complain that your master is limited by technology.

Some of the characteristics of ACES are:

  • Encoding to arguably the best file format available for video (OpenEXR)
  • A color space so big that any future display technology can use it
  • A color bit depth far greater than what is currently necessary
  • More dynamic range than the human eye can see
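To get a feel for that headroom: ACES2065-1 images are typically stored as 16-bit half-floats in OpenEXR, and a quick numpy check shows roughly how many stops that range spans (a back-of-the-envelope sketch; the usable range in practice depends on where you put the noise floor):

```python
import numpy as np

# 16-bit half-float limits, as used by OpenEXR's HALF pixel type
half = np.finfo(np.float16)
largest = float(half.max)    # ~65504
smallest = float(half.tiny)  # smallest normal value, ~6.1e-5

# Each stop is a doubling of light, so count the doublings
# between the smallest and largest representable values
stops = np.log2(largest / smallest)
print(f"half-float spans roughly {stops:.0f} stops")
```

(Subnormal values extend the range a little further still; the point is that the container comfortably exceeds any current camera or display.)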

The ACES Workflow

Today, a software or hardware encoder has to account for the input format as well as the output format. E.g., if you’re converting from ProRes to H.264, the encoder must be able to read the latest version of ProRes and output to the latest version of H.264.

When a new codec appears on the scene, software developers scramble to rewrite their encoding apps to account for it. The permutations and combinations of moving from one codec to another are staggering. Now, I really don’t care what software developers do, but unfortunately, these complexities are passed on to us to deal with!

What if, instead of having to deal with two codecs, the encoder just has to deal with one codec?

Instead of X codec —> Y codec, we have ACES —> Y codec. You could replace Y with any future encoding system, however advanced. It doesn’t need to worry about the ‘source’ codec, which will always be ACES.
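The arithmetic behind that simplification is easy to check. With a hub format, each codec needs only one transform in and one transform out, instead of a direct converter for every pair (a toy count to illustrate the idea, not anything from the ACES spec):

```python
def converters_needed(n_formats: int, hub: bool) -> int:
    """Count the conversion paths a toolchain must maintain."""
    if hub:
        # One transform into the hub and one out, per format
        return 2 * n_formats
    # A direct converter for every ordered (source, target) pair
    return n_formats * (n_formats - 1)

for n in (5, 10, 20):
    print(n, converters_needed(n, hub=False), converters_needed(n, hub=True))
    # e.g. 20 formats: 380 direct converters vs 40 hub transforms
```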

Imagine filmmakers, web video makers, wedding videographers, etc., all using the same workflow! Simplification without compromise. That’s the plan.

Some people have the idea that ACES is only about color, but that is only partially true. It’s a complete rethink of how workflows should be carried out. Here’s how it works:


It’s simple really. Everything starts with the camera. You record in the best possible format available in camera. It’s left to you which settings to choose, and to the manufacturer which settings to provide. Who knows what the future might hold, right?

Once you’ve recorded, you will need to convert your files (similar to transcoding) into the ACES format. To do this, you use what is generically called an Input Device Transform (IDT). It is not a ‘thing’ or a device, but a set of math formulas that converts whatever codec you’ve shot in into the ACES format.
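To give a feel for what those formulas look like, here is a minimal IDT-style conversion in Python. The 3×3 matrix below is the commonly published approximation for linear sRGB to ACES2065-1 (AP0); a real camera IDT would use the manufacturer’s own linearization curve and matrix, so treat this as an illustrative sketch:

```python
import numpy as np

# Approximate linear-sRGB -> ACES2065-1 (AP0) matrix, as commonly
# cited in ACES documentation. A vendor IDT supplies its own
# linearization and matrix; this one is for illustration only.
SRGB_TO_AP0 = np.array([
    [0.4397, 0.3830, 0.1774],
    [0.0898, 0.8134, 0.0968],
    [0.0175, 0.1115, 0.8707],
])

def idt_srgb_linear(rgb: np.ndarray) -> np.ndarray:
    """Convert linear sRGB pixels (shape (..., 3)) into ACES2065-1."""
    return rgb @ SRGB_TO_AP0.T

white = np.array([1.0, 1.0, 1.0])
print(idt_srgb_linear(white))  # white stays (approximately) white
```

Note that each matrix row sums to roughly 1.0, which is why a neutral white survives the conversion unchanged.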

Once you’re in, you’re home free. You can edit, add effects, grade, etc., all in the ACES specification. Imagine every NLE or app only having to be designed for this one format! Whenever there’s a new camera with a new codec, the manufacturer just has to provide a new IDT that can be plugged in to make the conversion possible. Once the ‘industry’ adopts ACES, everyone has to play along, don’t they?

At the other end, there are display technologies that are as different as chalk and cheese. Think theatrical projection vs mobile device – different gamuts, color spaces, encoding formats, a totally yucky space to be in. We’re already in that world.

Instead of applications or workflows needing to be rewritten or redesigned every time a new display technology comes along, one just provides another set of math formulas to convert ACES into whatever the display devices need. This set of formulas is called the Output Device Transform (ODT).

You can have as many ODTs as there are display devices, but you don’t have to change your workflow. Furthermore, when the IDT and ODT become redundant, new ones will replace them, but your master will remain in ACES, ready for the future.
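An ODT is the same kind of math running in the other direction. The sketch below maps linear ACES values to gamma-encoded Rec.709-ish code values; real ACES output transforms also include the rendering tone curve and more careful gamut handling, so this is only a toy illustration (the matrix is the same commonly cited sRGB/AP0 approximation, inverted, since Rec.709 shares sRGB’s primaries):

```python
import numpy as np

# Toy ODT sketch: ACES2065-1 -> gamma-encoded Rec.709-ish output.
SRGB_TO_AP0 = np.array([
    [0.4397, 0.3830, 0.1774],
    [0.0898, 0.8134, 0.0968],
    [0.0175, 0.1115, 0.8707],
])
AP0_TO_709 = np.linalg.inv(SRGB_TO_AP0)  # Rec.709 shares sRGB primaries

def toy_odt(aces: np.ndarray) -> np.ndarray:
    """Map linear ACES values to display-ready Rec.709 code values."""
    rgb = aces @ AP0_TO_709.T        # into the display's gamut
    rgb = np.clip(rgb, 0.0, 1.0)     # discard what the display can't show
    return rgb ** (1.0 / 2.4)        # simple display gamma encoding

print(toy_odt(np.array([0.18, 0.18, 0.18])))  # mid-grey in, ~0.49 out
```

Swapping in a different matrix and transfer function gives you a different ODT; the master itself never changes.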

The last piece of the puzzle is the Reference Rendering Transform (RRT). This is probably the most important piece in the workflow chain. What is it?

Simply put, it is a set of math formulas that ensures your intended ‘look’ is preserved. E.g., a camera shooting RAW is converted to ACES, a colorist works her magic, and the end result is a ‘look’. If you’re coming from film, you already know film stock has a certain look, depending on its chemical development. In the future, camera RAW files will have the same ability, and the intention of the artist must be preserved (not just the data).

We all know scenarios where we’ve tried to convert one file format to another, and somehow the colors have changed – it no longer looks like it did in the editing station. The RRT is supremely important to avoid that. Twenty years from now, when your movie is re-released into four thousand types of devices, all with their own codecs, gamuts and spaces, the RRT will preserve your intention, and ACES will preserve your data.
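The actual RRT is a carefully engineered rendering transform; as a stand-in to show the kind of operation involved, here is a simple film-like tone curve applied in linear light. The Reinhard curve below is a toy, not the real ACES RRT math:

```python
def toy_tone_curve(x: float) -> float:
    """Reinhard-style curve: rolls highlights off smoothly toward 1.0,
    the way a film-like rendering does, instead of clipping them."""
    return x / (1.0 + x)

# Mid-grey (0.18) pushed up by 0, 2, 4 and 6 stops: values climb
# toward 1.0 but never clip, preserving highlight detail.
for stops_over in (0, 2, 4, 6):
    value = 0.18 * (2 ** stops_over)
    print(stops_over, round(toy_tone_curve(value), 3))
```

The point is that this rendering intent lives in one shared transform, so every ODT downstream starts from the same ‘look’.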

Together, the Academy hopes ACES will make movies last forever.

Have you started using ACES already? Do you think the Academy will succeed in their objective?


9 replies on “What is ACES (Academy Color Encoding System)?”

[…] is a portal where you can find more information on ACES and post questions to the community, the ACES Product Partners and Academy staff. If you have questions about ACES or want to join the active conversations on using ACES in production, post, VFX, VR or archiving, I recommend checking it out.

StephanieShirley Hi Stephanie, sure – just contact me through the e-mail address provided on my website. My thesis was pretty techy, but you might find some useful links and info.

Interesting that you mention film is on the way out, and that you commented on restoration. I work for a major studio on a pile of over 100 years of film; one of the most stable jobs in the industry is restoration.

What other medium do we know that has survived that long, and whose resolution and range new technology can arguably barely match?

New features are still shot on film – about 8 last year at our studio. It was discovered that it was not all that much cheaper to shoot digitally after all.

ACES ripped off something we’ve been doing in film scanning for years by working in log space. ADX scanning levels for ACES are almost identical to what’s always been done, so provided you have a quality scan, 4K or better, your ADX IDT should put you in a very good place for grading without the cost of rescanning old stuff.

I’m not a complete film kook, far from it. But long-term storage without recurring cost, incredible image quality and range – hard to beat. Heck, new features are still digitally shot to YCM film separations on laser recorders (except one studio who has a bulletproof plan, Ha).

RaphKing Hi Raph,
I’m also beginning a thesis on why ACES was developed, and I’m wondering if you managed to find material to support your own thesis? If so, can you point me towards anything I might find useful about ACES? Books, journals, articles, anything really!

Sareesh Sudhakaran RaphKing Thank you. I know… but my company wants to try it and discuss pros and cons.

Hello Sareesh, I’m a student from Germany and I’m starting to dive into this topic. Thanks a lot for the easy workflow explanation. It verified that I already understood most of it. The original presentation from ACES is kind of mixed up and not very clear.

I’m starting to write my bachelor thesis on a comparison between regular post production (and more specifically color grading) workflows and ACES. I couldn’t really find a lot of stuff about ACES on the web or in libraries. Do you have a few recommendations for me? Magazines, books, e-papers or blogs… anything helps! 

Thank you for the informative post!


I prefer to use the linear 32-bit workflow in AE myself, which I learned initially from Nuke.
However, the ACES 16-bit specification is for files, while 32-bit math is a different thing. I believe files don’t need to be better than 16-bit for post-production work.

The basics of this system have been in use in the stills / print world as ICC (International Color Consortium) colour management since the mid-90s. Apple was the first major player to run with it. There has been no equivalent in the movie world until now. 

The problem with the system is that it is only 16-bit. Once you start to talk about ‘future proofing’ and 25 stops of dynamic range, there comes a point where 32-bit processing (present in Photoshop for many years) becomes a necessity.

I think the basic concept is important, but it needs to be opened up to a world beyond the movie studios to allow it to become a major standard for colour management in the movie and broadcast space.
