
Netflix: Colour Management Illiteracy and Confusion


Mohammed El Sharqawy
 

Greetings Everyone!

I second Charles on how informative this conversation was to read, and thanks to you, Pawel, I learned a lot.

A thought on Scene-Referred workflows:
I find it fascinating that one could match scenes filmed with different light sources, especially now that LED lighting equipment is overwhelmingly taking over the industry. Given the variation in colour rendering capabilities across light sources, achieving consistent skin tones and matching colours can be challenging.
Using a Scene-Referred workflow to aid matching different scenes shot with different light sources is understood. The benefit of using a colour chart to match colours is priceless and saves a tremendous amount of work in post-production trying to match different light sources. In my experience, the problem with creating a Scene IDT arises when I need to light a scene with two different light sources (within the same scene), like the classic fill to match the daylight coming from a window or a door, in a situation where not all the equipment options were available, or to combat differences in LED colour accuracy between manufacturers. What is your advice on imaging the colour chart in a scene that contains different light sources to create the best IDT from the scene?

Thanks,

Mohammed El Sharqawy


On Fri, Dec 17, 2021 at 1:17 AM Pawel Achtel, ACS <pawel.achtel@...> wrote:

Hi Charles,

 

Many thanks for your feedback; it is great to hear a voice from a different perspective.

 

Indeed, Netflix’s push for ACES workflow is good and makes things streamlined and consistent. I’m personally all for it.

 

 

Ø  The main masters you'd deliver to Netflix would be display referred, and there you are spot on. But Netflix also pushes to get what they call a NAM (non-graded archival master) and a GAM (graded archival master). The NAM is technically 100% scene referred (from the original scene and the VFX) but the GAM is not... It's like a linear version of your grade. So it retains a lot of info, but not all.

 

I like the NAM and GAM terminology because it is descriptive and clear. In my mind there are two ways to arrive at a NAM:

 

1.   Scene Referred

 

This is done either through a straight Scene-Referred camera IDT, as I described earlier, or (the “poor man’s version”) by using a “baked-in” camera IDT and a corrective Scene-Referred transform downstream (for example through the “Color Match” function in Resolve). Both would be technically “Scene-Referred”, except the second workflow may limit the gamut and/or introduce unwanted colour “twists”, depending on how much the “baked-in” IDT reference lighting differs from the actual scene lighting. As I demonstrated with the RED Monstro, it can be a lot. (A minimal sketch of the chart-based fit follows these two options.)

 

2.   Output Referred

 

This is the most common workflow, where colour correction (before the grade or look is applied) happens by referencing a display. However, I personally call it “Output-Referred” because it usually happens in a space that is much wider than the actual display, but the transform (ODT) to P3, BT1886, Rec 2020, etc. is always fixed. So it may not have to be display-specific, but it is always created based on evaluation of the output.
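For anyone who wants to experiment, here is a minimal sketch (Python/NumPy) of the core of the chart-based fit mentioned under option 1: a least-squares 3x3 matrix mapping the camera’s linear RGB patch readings, taken under the actual scene illuminant, to the chart’s reference values in the working space. All patch values here are placeholders; real tools such as DCamProf add exposure normalisation, weighting and LUT refinement on top.

    import numpy as np

    # Linear camera RGB of the chart patches, demosaiced, white point
    # untouched (N x 3). Values here are placeholders.
    camera_rgb = np.array([
        [0.18, 0.12, 0.07],   # e.g. dark skin patch
        [0.55, 0.40, 0.30],   # e.g. light skin patch
        [0.20, 0.25, 0.45],   # e.g. blue sky patch
        # ... remaining patches of the chart
        [0.90, 0.90, 0.90],   # neutral patch
    ])

    # Reference values of the same patches in the target working space
    # (e.g. CIE XYZ or ACES AP0), same order (N x 3). Placeholders.
    reference = np.array([
        [0.11, 0.10, 0.07],
        [0.40, 0.36, 0.26],
        [0.18, 0.20, 0.42],
        [0.88, 0.90, 0.92],
    ])

    # Solve reference ~= camera_rgb @ M in the least-squares sense.
    M, residuals, rank, _ = np.linalg.lstsq(camera_rgb, reference, rcond=None)
    idt_matrix = M.T   # 3x3 Scene-Referred IDT (matrix part only)

    # Applying it to any pixel of the same scene:
    pixel = np.array([0.30, 0.22, 0.15])
    working_space_pixel = idt_matrix @ pixel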

 

All of the above are obviously colour-managed, just managed very differently.

 

What is characteristic about Scene-Referred workflows (and what really defines them) is that they require capture of spectral data at the scene. There are different ways and techniques to do so, but this is the one critical ingredient needed to go from a camera-specific colour space to the working colour space (NAM). It also puts the burden of colour correction on the cinematographer rather than the colourist. The look is ultimately applied downstream by the colourist. A Scene-Referred workflow also doesn’t require chromatic adaptation (setting colour temperature).
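To show what that last point saves you from, this is roughly the chromatic adaptation step a conventional workflow performs when you dial in a colour temperature: a von Kries-style scaling in a sharpened cone space. A minimal NumPy sketch using the standard Bradford matrix; the white points are the standard illuminant A and D65 values, the sample colour is illustrative, and a Scene-Referred IDT built under the actual illuminant folds all of this in implicitly.

    import numpy as np

    # Standard Bradford cone-response matrix (XYZ -> LMS).
    BRADFORD = np.array([
        [ 0.8951,  0.2664, -0.1614],
        [-0.7502,  1.7135,  0.0367],
        [ 0.0389, -0.0685,  1.0296],
    ])

    def chromatic_adaptation(xyz, white_src, white_dst):
        """Adapt an XYZ colour from one white point to another (Bradford)."""
        lms_src = BRADFORD @ white_src
        lms_dst = BRADFORD @ white_dst
        scale = np.diag(lms_dst / lms_src)     # von Kries scaling in cone space
        cat = np.linalg.inv(BRADFORD) @ scale @ BRADFORD
        return cat @ xyz

    # Standard white points (XYZ, Y normalised to 1).
    white_a   = np.array([1.0985, 1.0, 0.3558])   # illuminant A (tungsten)
    white_d65 = np.array([0.9504, 1.0, 1.0888])   # D65 (daylight)

    colour = np.array([0.35, 0.30, 0.20])          # some XYZ triplet
    adapted = chromatic_adaptation(colour, white_a, white_d65)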

 

Hope it makes sense.

 

Maybe it is a good time to bring this discussion into IMAGO Technical Committee (ITC), which I’m a full member of, and get a broader consensus for consistent terminology.

 

I look forward to input from anyone. How do we make those distinctions clearer?

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel

 

From: cml-general@... [mailto:cml-general@...] On Behalf Of Charles Boileau
Sent: Friday, 10 December 2021 12:57 AM
To: cml-general@...
Subject: Re: [cml-general] Netflix: Colour Management Illiteracy and Confusion

 


Hi Pawel,

Great post. I had a look at the video. But, I think I see where things might get confusing.

A colour-managed workflow (like the one that Netflix is using) is inherently scene referred, meaning that at any moment in the workflow you could revert back to wide gamut and linear. And this is why they are advocating the use of ACES: it's the easiest way to get everyone (big and small "actors" in the industry) on the same "footing".

The main masters you'd deliver to Netflix would be display referred, and there you are spot on. But Netflix also pushes to get what they call a NAM (non-graded archival master) and a GAM (graded archival master). The NAM is technically 100% scene referred (from the original scene and the VFX) but the GAM is not... It's like a linear version of your grade. So it retains a lot of info, but not all.

Now, Netflix (and other online content providers) also needs a few versions of display-referred deliverables (HDR, BT1886, P3 etc...). And the best way to retain the original intent (and not convert directly from your "best" master) is to keep the workflow scene referred. Again, this is what ACES does, and it's all done in the background. You are just viewing things through different viewing transforms as you progress along.

But, to do this you need your whole pipeline to be set this way (shoot, VFX and post).

I see where the confusion might be for you... Because for a lot of people, colour management also means that you can convert from one colour space to another. And this is right in essence... For example, you grade in P3 and use a LUT to convert to BT1886 (Rec 709). This, in essence, is colour managed. But it's obviously not scene-referred colour management, which ultimately gives you better results, and this is what Netflix is trying to explain.

Hopefully this will make more sense to you. But I would be curious to know more about your thoughts and why you think it was not well explained. I'm always keen on refining my educational angle, and your input might be good for an internal educational program I'm putting together for our producers.

And, please feel free to reach out to me if you have any questions. I'm more than happy to help!

Thanks!

--

Charles Boileau, ex-Cinematographer, ex-Colorist and now: imaging engineer at some VFX studio.


Pawel Achtel, ACS
 

Hello Mohammed,

 

Using this forum as a sounding board has been great. Sometimes you get an odd comment, but generally speaking it’s been a good source of information and feedback.

 

Ø  In my experience, the problem with creating a Scene IDT arises when I need to light a scene with two different light sources (within the same scene), like the classic fill to match the daylight coming from a window or a door, in a situation where not all the equipment options were available, or to combat differences in LED colour accuracy between manufacturers. What is your advice on imaging the colour chart in a scene that contains different light sources to create the best IDT from the scene?

 

Scene-Referred IDTs work exceptionally well even with “bad” LEDs (which have strong green and orange spikes) and just about any other “non-reference” light sources (within reason; it won’t work with a sodium light). From my experience, the resulting IDT is just as good as those created in lab conditions under tightly controlled illumination, and it does not suffer from the gamut clipping or colour “twists” that you normally get by using a non-Scene-Referred “baked-in” IDT and a downstream transform to correct it.

 

If you want to match colours in a mixed-lighting scene, the rule of thumb is to place the reference chart where those colours should match. In mixed lighting, this means a single spot, e.g. an actor’s face.

 

With some cameras, which can store the IDT in each and every frame as metadata (cough, cough, like the 9x7), it is actually possible to have a variable IDT within a single clip, for example as an actor walks from a daylight to a tungsten environment (and the intent is to keep skin colours unchanged). We were even thinking of creating such a demo, but thought it would be too much of an “edge case” :-)
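To make the variable-IDT idea concrete, a toy sketch of what a per-frame IDT could look like: two matrices are fitted from charts at the daylight and tungsten ends of the move, and each frame carries a blend of the two as metadata. The matrices, frame count and blend schedule below are placeholders, not values from any real camera.

    import numpy as np

    # IDTs fitted from charts at the two ends of the camera move
    # (placeholder values; see the earlier least-squares sketch).
    idt_daylight = np.eye(3)                     # stand-in matrix
    idt_tungsten = np.array([
        [ 1.20, -0.10, -0.10],
        [-0.05,  1.10, -0.05],
        [-0.02, -0.08,  1.10],
    ])                                            # stand-in matrix

    num_frames = 120

    def idt_for_frame(frame):
        """Blend between the two fitted IDTs across the move."""
        t = frame / (num_frames - 1)              # 0 at daylight, 1 at tungsten
        return (1.0 - t) * idt_daylight + t * idt_tungsten

    # Each frame's matrix would be written into that frame's metadata
    # (as Cinema DNG permits) and applied at debayer time.
    for frame in (0, 60, 119):
        print(frame, idt_for_frame(frame))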

 

But Scene-Referred workflows are not just for colour matching/correcting. There are other really exciting possibilities where you can use colour filters to intentionally shift the native camera colour gamut and achieve different coverage of hues and colour ranges that the camera otherwise wouldn’t be able to discern accurately. This is not really possible with cameras with “baked-in” IDTs. I think this is where digital cinematography has an exciting future and a leg up on film, because you have a tool that controls native gamut and subtle hues, just like film stocks did 50 years ago, except now you can “easily” control the effect with a little bit of physics and some maths sprinkled in. :-)

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel



Jonathan Doose
 

Hello Pawel,

thank you for sharing your findings. Fascinating, indeed.

Is it possible for us camera owners/operators to create a "Scene-Referred camera IDT" (if the camera manufacturer does not provide one), or are we limited to the "semi" Display-Referred workflow in DaVinci Resolve using the Color Match tool?

Any further resources on the subject you could recommend are greatly appreciated.

Thanks,
Jonathan Doose
Camera operator Switzerland/US West Coast


Pawel Achtel, ACS
 

Hi Jonathan,

 

Ø  Is it possible for us camera owners/operators to create a "Scene-Referred camera IDT" (if the camera manufacturer does not provide one)

 

It depends on the format. There is an Adobe workflow that works very well with DNG, so if your camera can either produce Cinema DNG or be converted to it, there are tools like DCamProf which allow custom profile creation.

 

Ø  Any further resources on the subject you could recommend are greatly appreciated.

 

These almost entirely relate to still photography, but they can be readily adapted for cinematography. The Capture One workflow is pretty good. There are also several workflows and techniques using Lightroom/Photoshop which allow custom IDTs.

 

The DCamProf/RawTherapee workflow is worth reading about, with some good examples:

 

https://rawtherapee.com/mirror/dcamprof/dcamprof.html

 

A good discussion about gamut compression (in a custom IDT) and when to use it:

 

https://forum.luminous-landscape.com/index.php?topic=118372.0

 

Hope it helps.

 

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel



Pekka Riikonen
 

This is a really great thread.

Improving the terminology is definitely important.  The post side would be a bit confused if the scene-referred and display-referred terminology were redefined in this manner, as has already happened in this thread.

It's interesting that camera makers spend millions developing sensors and cameras, but when it comes to IDTs the best they throw at it is a 3x3 matrix.  There's a reason for that, of course; it's mainly simplicity.  But it does create a number of problems that then need to be addressed on the post side, like out-of-gamut colors, as Pawel already mentioned earlier.  And there is no one perfect solution to that problem.  ACES 1.3 finally has a (partial) solution to out-of-gamut colors with the new Reference Gamut Compression (DaVinci Resolve 17.4 supports it).
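For readers who have not looked at the Reference Gamut Compression, the gist is distance-based: per pixel, channels far from the achromatic axis are pulled back with a soft knee, leaving everything inside a threshold untouched. Below is a deliberately simplified NumPy sketch of that idea with made-up threshold/softness numbers; it is not the ACES 1.3 reference curve or its published parameters.

    import numpy as np

    def compress_gamut(rgb, threshold=0.8, softness=0.4):
        """Soft-compress out-of-gamut colors towards the achromatic axis.

        Simplified illustration of distance-based gamut compression;
        NOT the ACES 1.3 reference implementation or its parameters.
        """
        ach = np.max(rgb)                    # achromatic (max of RGB)
        if ach <= 0.0:
            return rgb
        d = (ach - rgb) / ach                # 0 on axis, 1 at gamut edge, >1 outside
        over = d > threshold
        # Soft knee: identity below threshold, asymptotic to the gamut
        # boundary (d == 1) above it.
        d_c = np.where(over,
                       threshold + (1.0 - threshold)
                       * np.tanh((d - threshold) / softness),
                       d)
        return ach - d_c * ach

    # A pixel with a negative (out-of-gamut) blue channel:
    pixel = np.array([0.9, 0.5, -0.2])
    print(compress_gamut(pixel))             # blue pulled back inside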

ACES has an IDT working group ongoing at the moment (https://paper.dropbox.com/doc/Input-Device-Transform-IDT-Implementation-Virtual-Working-Group-edDJ5mgkFjd0qnlBcfPkG), but the group is not concentrating on color performance.  The goal is to simplify the IDT creation process and to agree on an exposure formula.  The group will release a document with instructions for creating IDTs for the ACES workflow.  They will also release a web-based tool to create new IDTs (for those who have spectral sensitivity data available for their sensor).

There's a paper from 2013 that shows an IDT creation process for the ACES workflow using 2D LUTs to get better and more accurate color out: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.731.8582&rep=rep1&type=pdf.  This would work for other workflows as well.

There are open-source tools like dcamprof (https://torger.se/anders/dcamprof.html) that can be used to create new IDTs, and its commercial version (http://www.lumariver.com/lrpd-manual/), which is easier to use.  These tools can be used to create DCP (DNG/cDNG) and ICC profiles.  There are also other open-source tools that can do these things, but they are all lower-level and more complicated, and might require programming skills (for example https://www.colour-science.org/).  DITs might use such tools to create custom IDTs.
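As a rough illustration of what a "from spectral data" IDT tool does under the hood: given the sensor's spectral sensitivities and an illuminant, you synthesize camera RGB and reference XYZ for a set of training reflectances, then solve for the matrix, exactly as with a physical chart but in simulation. A NumPy sketch with random placeholder spectra; real tools load measured sensitivities, real CIE colour-matching functions and curated training sets.

    import numpy as np

    rng = np.random.default_rng(0)
    wl = np.arange(400, 701, 10)             # wavelengths, nm

    # Placeholder spectra; real tools load measured data instead.
    sensor = rng.random((3, wl.size))        # camera R/G/B spectral sensitivities
    cmf = rng.random((3, wl.size))           # stand-in CIE color-matching functions
    illuminant = np.ones(wl.size)            # flat stand-in illuminant SPD
    training = rng.random((24, wl.size))     # 24 training reflectance spectra

    # Light reflected from each training sample.
    radiance = training * illuminant         # (24, n_wl)

    # Integrate against sensitivities: camera RGB and reference XYZ.
    camera_rgb = radiance @ sensor.T         # (24, 3)
    xyz = radiance @ cmf.T                   # (24, 3)

    # Least-squares 3x3 matrix taking camera RGB to XYZ.
    M, *_ = np.linalg.lstsq(camera_rgb, xyz, rcond=None)
    idt_matrix = M.T
    print(idt_matrix)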

Pekka

--
Pekka Riikonen
Helsinki, Finland


Rick Gerard
 

The problem that I see with the Netflix explanation is that it all looks so simple and straightforward. A single one-click solution for a workflow that requires a professional level of understanding and implementation. It’s so easy to produce a slick, compelling video that makes people think you are an expert with the authority to implement change. Unfortunately, most people don’t have the knowledge or experience or take the time to figure out if what is being said is accurate or even true.

I learned a lot from carefully reading this thread and thinking about my workflow. I have done a lot of visual effects in the last couple of years using After Effects and DaVinci Resolve, all of it remotely. All of the source footage from every client but one came ungraded, with a LUT they were using. All of my work was done with the LUT applied to an adjustment layer for the whole comp, but all of the rendered assets that were delivered were rendered with that adjustment layer turned off, so my FX work matched the original source footage in its native state as closely as possible. Sometimes even the original source footage was turned off, so the editor and colorist had, in effect, an element with transparency that matched the original (log) footage that they could grade separately if needed.

All of those projects went off without a hitch. The test renders for approval were all rendered with the supplied LUT applied so judgments could be made, but all of the renders matched the original log footage as closely as possible. The final color grade was done by the people in charge of that part of the production.

Anyone else using that kind of workflow? All of my recent projects have been on small independent productions. I would like to know how the big studios are handling the effects and editing workflow.

Thanks.

Rick Gerard
DP/VFX Supervisor
Northern California


Pawel Achtel, ACS
 

Hello Pekka,

 

Great post! Thanks for sharing.

 

Ø  It's interesting that camera makers spend millions developing sensors and cameras but when it comes to IDTs the best they throw at it is a 3x3 matrix. 

 

:-)

 

Ø  There's a paper from 2013 that shows an IDT creation process for the ACES workflow using 2D LUTs to get better and more accurate color out: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.731.8582&rep=rep1&type=pdf.  This would work for other workflows as well.

 

Yes, it's good you pointed this out. I found this to be the case too, and the paper describes it really well. We are currently using a 3x3 matrix plus a LUT to handle saturated colours (our Hue-Sat LUT is 90x30x30 = 81,000 entries). This makes the IDT much more accurate and resilient. Good to see others arriving at similar conclusions. And, yes, this helps both “baked-in” IDTs as well as Scene-Referred IDTs.
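To make the “matrix plus Hue-Sat LUT” idea concrete, here is a toy sketch of the second stage: after the 3x3 matrix, each pixel is converted to hue/saturation/value, a LUT entry supplies a hue shift and saturation/value scales, and the pixel is converted back. The dimensions match the 90x30x30 mentioned above, the table itself is an identity placeholder, and nearest-neighbour lookup stands in for the proper interpolation a real implementation (e.g. the DNG HueSatMap) performs.

    import colorsys
    import numpy as np

    H_DIV, S_DIV, V_DIV = 90, 30, 30           # 90*30*30 = 81,000 entries

    # Each entry: (hue shift in degrees, sat scale, val scale).
    # Identity placeholder; a fitted IDT would fill this from chart data.
    hue_sat_lut = np.zeros((H_DIV, S_DIV, V_DIV, 3))
    hue_sat_lut[..., 1:] = 1.0                 # sat/val scales default to 1

    def apply_hue_sat_lut(rgb):
        """Apply the Hue-Sat LUT after the 3x3 matrix stage.

        Nearest-neighbour lookup for brevity; real implementations
        interpolate between neighbouring entries.
        """
        h, s, v = colorsys.rgb_to_hsv(*rgb)
        hi = min(int(h * H_DIV), H_DIV - 1)
        si = min(int(s * S_DIV), S_DIV - 1)
        vi = min(int(v * V_DIV), V_DIV - 1)
        dh, ds, dv = hue_sat_lut[hi, si, vi]
        h = (h + dh / 360.0) % 1.0             # hue shift, degrees -> [0,1)
        return colorsys.hsv_to_rgb(h, min(s * ds, 1.0), min(v * dv, 1.0))

    print(apply_hue_sat_lut((0.6, 0.4, 0.3)))  # identity LUT: unchanged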

Another significant factor, I found, is the spectral sensitivity of the sensor. Many sensors used in digital cinema cameras have a common problem around the near-IR region, where the blue channel response can be significant. Many try to counter this by applying a stronger IR cut (cutting IR earlier), but this has other problems. This common sensor deficiency can be challenging for the IDT to “untangle” while keeping the IDT resilient to downstream chromatic adaptation. It manifests as a “dirty” magenta line on the CIE horseshoe. Scene-Referred IDTs almost always handle this problem better.

 

Ø  The group will release a document with instructions for creating IDTs for the ACES workflow.  They will also release a web-based tool to create new IDTs (for those who have spectral sensitivity data available for their sensor).

Great to hear. That’s awesome news.

 

Ø  DITs might use such tools to create custom IDTs.

 

Yes. This is essentially what we recommend (and provide tools for).

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel



Nezih Savaşkan
 

I also was wondering about mixed lighting.

@Pawel, forgive me if I’ve misunderstood, but surely, with the scene-referred workflow you are recommending, if a chart is placed at the “point of interest” (actor’s face, for example) and adjustments are made for that, then other areas of the image (such as a background) with different light sources could end up looking completely wrong?

In post (VFX/colour grade) this can be worked around using masks/mattes/tracking etc. But how is this dealt with in your workflow?

 

You also mention that people can try this for themselves if they can record images to cDNG files. I assume this needs to be the camera’s RAW output converted to cDNG, right? Surely converting a log recording (however lightly compressed and however high the data rate) would defeat the point, as it’d include the manufacturer’s default “colour science” to fit the sensor data into their wide colour gamut, right? Am I correct in understanding that in your workflow adjustments are made to “correct/adjust” the camera’s RAW sensor data during the debayer process, before each bit of information is “told where to sit” within a specified wide colour gamut?

Interesting thread!

-----------------------------------------------

Nezih Savaşkan
Director of Photography + Camera Operator
-----------------------------------------------


Pawel Achtel, ACS
 

Ø  @Pawel, forgive me if I’ve misunderstood, but surely, with the scene-referred workflow you are recommending, if a chart is placed at the “point of interest” (actor’s face, for example) and adjustments are made for that, then other areas of the image (such as a background) with different light sources could end up looking completely wrong?

Ø  In post (VFX/colour grade) this can be worked around using masks/mattes/tracking etc. But how is this dealt with in your workflow?

Nezih, it is not any different from Display-Referred, except that colours will match exactly where you want them to match. If you use mixed lighting, chances are you are using it for a reason and do not actually want to match colours under each different illuminant.

If you do not want to match colours in any specific spot, you can just use the “baked-in” IDT supplied by the camera maker. It is very easy to mix and match Scene-Referred and Display-Referred IDTs in a single timeline as you see fit.

 

Ø  I assume this needs to be the camera’s RAW output converted to cDNG, right? Surely converting a log recording (however lightly compressed and however high the data rate) would defeat the point, as it’d include the manufacturer’s default “colour science” to fit the sensor data into their wide colour gamut, right?

 

Not necessarily. It just happens that Cinema DNG allows the IDT to be embedded in the actual clip (in every frame, to be precise) as metadata, and there are tools that allow this IDT to be changed and manipulated. It’s not a “black box”, as in the case of most digital cinema cameras. And the IDT travels with the clip.

 

The whole point of a Scene-Referred workflow is that you throw away the “colour science” (IDT) provided by the manufacturer and use your own, specific to the actual scene you capture.

 

Log recording implies some tone curve and highlight compression “baked in”. An IDT generally works in linear gamma, because all sensors are (more or less) linear. But log output from the IDT doesn’t preclude a Scene-Referred workflow. So it is possible for ARRI, Sony or RED to “open up” their IDTs and still output log or some other curved response. And if not, you can still use the “poor man’s” Scene-Referred workflow with functionality such as “Color Match” in DaVinci Resolve; it is just that you may lose some colour information in the process (as you would in Display-Referred colour correction anyway).
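A small sketch of that ordering: the matrix part of the IDT operates on linear sensor data, and only afterwards is a log curve applied for the recording, which does not stop the result from being Scene-Referred. The log function below is a generic, invertible placeholder, not any manufacturer’s actual curve, and the matrix is an identity stand-in.

    import numpy as np

    # Scene-Referred IDT matrix (placeholder; see the fitting sketches above).
    idt_matrix = np.eye(3)

    def generic_log(x, a=200.0):
        """Generic log-style encoding (placeholder, not a vendor curve)."""
        return np.log1p(a * np.maximum(x, 0.0)) / np.log1p(a)

    linear_pixel = np.array([0.18, 0.15, 0.12])   # linear sensor RGB

    # 1. IDT in linear light (sensors are, more or less, linear).
    working_linear = idt_matrix @ linear_pixel

    # 2. Log encoding afterwards, purely as a storage/transport curve;
    #    it is invertible, so the data stays Scene-Referred.
    encoded = generic_log(working_linear)
    print(encoded)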

 

In a typical feature film setup, you generally do want an actor’s skin tones to look consistent across cuts, even if this means some background features may shift their hues slightly. In such a case, I would capture the scene colorimetry on set and try to construct an IDT from it. You can always drop it and revert to the “baked-in” IDT provided by the manufacturer if that works better. But, generally speaking, this is the best colour transform path because it is likely to preserve the maximum gamut that the camera can natively see.

 

Ø  Am I correct in understanding that in your workflow adjustments are made to “correct/adjust” the camera’s RAW sensor data during the debayer process, before each bit of information is “told where to sit” within a specified wide colour gamut?

Yes. The camera IDT is the very first transform in the entire image pipeline. The IDT transforms the camera/sensor-specific colour space into a wider one: S-Gamut3 (Sony), REDWideGamutRGB (RED), ARRI Wide Gamut (ARRI) or CIE XYZ (in the case of the 9x7). This is why it is so critical. If you clip gamut here, or if you cause metamerism, there is nothing you can do downstream to “correct” it, regardless of how wide your working colour space is. That information is forever lost.
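A tiny numerical illustration of that last point: once a matrix result is clamped at zero, inverting the matrix no longer returns the original sensor values, no matter how wide the downstream working space is. The matrix and pixel values below are arbitrary illustrations.

    import numpy as np

    # Arbitrary illustrative IDT and a deeply saturated pixel.
    idt = np.array([
        [ 1.6, -0.4, -0.2],
        [-0.3,  1.5, -0.2],
        [-0.1, -0.5,  1.6],
    ])
    pixel = np.array([0.05, 0.02, 0.90])       # deep, saturated blue

    out = idt @ pixel
    clipped = np.clip(out, 0.0, None)          # negative components clamped

    # Trying to undo the transform after clipping:
    recovered = np.linalg.inv(idt) @ clipped
    print(out)        # has negative components
    print(recovered)  # != pixel: the information is gone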

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel

 



Nezih Savaşkan
 

This is all much clearer to me now, thank you!
-----------------------------------------------
Nezih Savaşkan
Director of Photography + Camera Operator
-----------------------------------------------