Netflix: Colour Management Illiteracy and Confusion
A few days ago, Netflix posted a somewhat embarrassing “educational” video, “The Benefits of Color Management”.
In this video Netflix says they “define Color Management”, literally! (sarcasm)
I think there is already a lot of confusion in this space as it stands. And for a large, well-resourced organisation to come out and make a blatantly incorrect “educational” presentation adds insult to injury.
For those who are confused: in this video they present “Colour Managed” and “Non-colour Managed” workflows. So far so good. But the next minute they go on to explain that “Colour Managed” workflows are inherently “Scene-Referred” while “Non-colour Managed” workflows are somehow “Display-Referred” and should be avoided. The presentation goes downhill from there.
I guess it may come as a surprise to Netflix that ALL of their films have, in fact, been produced using Display-Referred workflows – the exact opposite of what they present in their “educational” video. Display-Referred workflows are the basis of pretty much every single production these days.
If well-resourced companies like Netflix pump out this sort of misinformation and confusion, what can we, as cinematographers, colour scientists and colourists, do about this general lack of understanding?
It's a shame that their channel is followed by 3.89K subscribers, that the video has attracted 2,517 views so far, and that comments are “conveniently” disabled.
I’m taking the first step: I’m calling it out.
Any other suggestions?
Kind Regards,
Pawel Achtel ACS B.Eng.(Hons) M.Sc. “Sharp to the Edge”
ACHTEL PTY LIMITED, ABN 52 134 895 417 Website: www.achtel.com Mobile: 040 747 2747 (overseas: +61 4 0747 2747) Mail: PO BOX 557, Rockdale, NSW 2216, Australia Email: Pawel.Achtel@... Facebook: facebook.com/PawelAchtel Twitter: twitter.com/PawelAchtel Skype: Pawel.Achtel
Mitch Gross
I suggest you contact the Netflix technical group, based in LA at Netflix headquarters. They are an extremely knowledgeable bunch and if there’s something wrong with the information presented they’ll want to know about it.
These people are determining technical standards for thousands of hours of future content valued in the billions of dollars. They are very much on top of their game and forward-thinking. And they are incredibly influential. If there's an issue, you should make them aware of it and discuss it with them.
Mitch Gross
Prolycht Lighting
New York
Pawel has already flagged this in other threads online and we are reviewing the feedback.
Ø Pawel has already flagged this in other threads online and we are reviewing the feedback.
Thanks, Michael. Good to hear. I wasn't sure whether my (and others') feedback on Facebook had been followed up on. I think it is extremely important that everyone is on the same page and that we communicate with consistent language and terms that mean the same to everyone.
Let me know if I can help with anything.
Kind Regards,
Pawel Achtel ACS B.Eng.(Hons) M.Sc. “Sharp to the Edge”
Gentle reminder to please sign your posts as required by CML – with your full name, job/title and location – so we all know who is writing.
--
George Hupka
Director/DP, Saskatoon, Canada
Listmum, Cinematography Mailing List
Hi Pawel,
Great post. I had a look at the video, and I think I see where things might get confusing.
A colour managed workflow (like the one that Netflix is using) is inherently scene referred. Meaning that at any moment in the workflow you could revert back to wide gamut and linear. And this is why they are advocating the use of ACES: it's the easiest way to get everyone (big and small “actors” in the industry) on the same footing.
The main masters you'd deliver to Netflix would be display referred, and there you are spot on. But Netflix also pushes to get what they call a NAM (non-graded archival master) and a GAM (graded archival master). The NAM is technically 100% scene referred (from the original scene and the VFX) but the GAM is not... It's like a linear version of your grade, so it retains a lot of info, but not all.
Now, Netflix (and other online content providers) also needs a few versions of display-referred deliverables (HDR, BT.1886, P3, etc.). And the best way to retain the original intent (and not convert directly from your “best” master) is to keep the workflow scene referred. Again, this is what ACES does, and it's all done in the background. You are just viewing things through different viewing transforms as you progress along. But to do this you need your whole pipeline to be set up this way (shoot, VFX and post).
I see where the confusion might be for you... Because for a lot of people colour management also means that you can convert from one colour space to another. And this is right, in essence... For example, you grade in P3 and use a LUT to convert to BT.1886 (Rec.709). This, in essence, is colour managed. But it's obviously not scene-referred colour management, which ultimately gives you better results, and this is what Netflix is trying to explain.
Hopefully this will make more sense to you. But I would be curious to hear more of your thoughts on why you think it was not well explained. I'm always keen on refining my educational angle, and your input might be good for an internal educational program I'm putting together for our producers. And please feel free to reach out to me if you have any questions. I'm more than happy to help! Thanks!
Charles Boileau, ex-Cinematographer, ex-Colorist and now imaging engineer at some VFX studio.
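[As a minimal sketch of the point Charles makes about being able to “revert back to wide gamut and linear” (the log curve below is a stand-in, not any real camera encoding): a scene-referred working encoding round-trips losslessly back to scene-linear, while a baked display rendering does not.]

import numpy as np

# Scene-linear value with a highlight well above 1.0.
scene = np.array([0.18, 1.0, 6.0])

# A managed working encoding (log) is invertible: you can always get
# back to scene-linear, which is what "scene-referred" buys you.
def log_encode(x, a=0.25, b=150.0):   # stand-in curve, not a real camera log
    return a * np.log10(b * x + 1.0)

def log_decode(y, a=0.25, b=150.0):
    return (10.0 ** (y / a) - 1.0) / b

round_trip = log_decode(log_encode(scene))
print(np.allclose(round_trip, scene))   # True: nothing was thrown away

# Baking a display rendering in (clip + gamma) is NOT invertible:
display = np.clip(scene, 0.0, 1.0) ** (1.0 / 2.4)
undone = display ** 2.4
print(undone)   # the 6.0 highlight is gone forever (now 1.0)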
Hi Charles,
Thank you for taking the time to provide explanations. I think it is great that Netflix is advocating ACES. But this is not where the incorrectness lies.
Ø A colour managed workflow (like the one that Netflix is using) is inherently scene referred.
No. The fact that they are using an ACES workflow, or any other Colour-Managed workflow, doesn't make it Scene-Referred. The critical distinction they missed is that the camera IDT (from the sensor-specific colour space to AWG, REDWideGamutRGB, S-Gamut3, etc.), which eventually brings us into ACES via a fixed transform, is not Scene-Referred. The IDT does not reference the Scene, and therefore nothing downstream of it does (with one exception, addressed below). It references a different scene that the camera maker “canned” in their lab as the “colour science” for a particular camera model. To be corrected for the actual scene illumination, it requires Chromatic Adaptation and/or other transforms downstream, and that process is inherently Display-Referred (or, more generally, Output-Referred), because you are working in a display colour space in order to do it.
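[To make that order of operations concrete, a minimal numpy sketch. The IDT matrix is hypothetical (illustrative numbers, not any real camera's); the Bradford matrix and white points are standard published values. The point is that the scene-dependent correction happens downstream of the fixed lab transform:]

import numpy as np

# Hypothetical fixed ("canned") IDT: camera RGB -> CIE XYZ, characterised
# by the manufacturer under ONE reference illuminant in the lab.
M_IDT = np.array([[0.52, 0.33, 0.14],
                  [0.24, 0.68, 0.08],
                  [0.02, 0.11, 0.96]])

# Bradford cone-response matrix (standard published values).
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

# White points (XYZ, Y = 1): Illuminant A (tungsten) and D65 (daylight).
WP_A   = np.array([1.0985, 1.0000, 0.3558])
WP_D65 = np.array([0.9504, 1.0000, 1.0888])

def bradford_cat(xyz, wp_src, wp_dst):
    # The downstream "correction" a fixed-IDT workflow needs whenever the
    # scene light differs from the illuminant the IDT was built under.
    gain = (M_BFD @ wp_dst) / (M_BFD @ wp_src)
    return np.linalg.inv(M_BFD) @ (gain * (M_BFD @ xyz))

cam_rgb = np.array([0.42, 0.35, 0.12])              # hypothetical debayered pixel
xyz_canned = M_IDT @ cam_rgb                        # step 1: fixed lab transform
xyz_fixed = bradford_cat(xyz_canned, WP_A, WP_D65)  # step 2: correct it afterwards
print(xyz_canned, xyz_fixed)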
Almost all workflows in cinematography and video are Display-Referred, and that says nothing about whether they are Colour-Managed, contrary to what Netflix states.
Ø I see where the confusion might be for you... Because for a lot of people colour management also means that you can convert from one colour space to another. And this is right, in essence... For example, you grade in P3 and use a LUT to convert to BT.1886 (Rec.709). This, in essence, is colour managed. But it's obviously not scene-referred colour management, which ultimately gives you better results, and this is what Netflix is trying to explain.
No, this is not where the incorrectness is. Netflix stated that Colour-Managed workflow is Scene-Referred and that Non-Colour-Managed workflow is Display-Referred. This is incorrect.
The ACES workflow that Netflix presented is a Display-Referred workflow, and it is Colour-Managed because it does assure consistent colours even when the source material comes from a variety of cameras. There is nothing wrong with this workflow, and it is how most movies are made.
In distinction to that workflow, Scene-Referred workflows assure that Scene colorimetry looks the same across different Scenes and lighting conditions. Whether you shot a scene using tungsten lighting, LED or natural sunlight, or next to a purple wall, the colours will be consistent across those shots in terms of colour reproduction. The reason they will be consistent is not that they went through a Chromatic Adaptation process using your Display in a Display-Referred workflow, but that they didn't need to: a Scene-Referred camera IDT already assures that those colours are consistent across different Scene illumination.
Scene-Referred workflows are much more common in still photography. Capture One, Photoshop, RawTherapee – all allow Scene-Referred workflows. They do so by allowing a custom, Scene-Referred camera IDT, created at the Scene, to be used instead of the “canned” camera IDT provided by the camera maker, which was characterised under a different Scene and forces you to use Chromatic Adaptation or other means to make the look consistent across scenes with different lighting. The ACES workflow doesn't address this colour consistency across Scenes with varied lighting conditions.
Most cinema cameras do not allow Scene-Referred workflows, because their IDT is fixed and cannot be made Scene-Referred (specific to the scene illumination). The workflow requires Chromatic Adaptation (setting your CCT and/or Tint), colour balance or other transforms in Display (or Output) space; hence they are all Display-Referred workflows.
There is one “exception”: DaVinci Resolve does provide a “semi” Scene-Referred workflow for cameras that do not support a Scene-Referred IDT.
It allows you to create a Scene-Referred transform downstream of the IDT. This is not the same as using a Scene-Referred IDT (and can result in unwanted colour “twists” and out-of-gamut values) but, at least when used to bring different Scene illuminations into a common look, for all intents and purposes it would count as a Scene-Referred workflow. I quite like the explanation given by Bram Desmet in a Facebook thread:
“ …I've seen from some people promoting concepts of 'scene referred' color grading and mastering. The moment you are making a decision based on how things look on a display you are participating in display referred decision making imo and while that may seem obvious to you or I it is a point I've had people argue with me. "No, this is scene referred grading." Me: cool, so you aren't looking at a display when making your decisions? "Yes, we are." Me: so you are referencing a display, but not working display referred...weird…”
Look, I'm glad that Netflix is advocating the ACES workflow. It is a good workflow. But let's call it what it is: an ACES workflow. Why use some other term that means something completely different?
Kind Regards,
Pawel Achtel ACS B.Eng.(Hons) M.Sc. “Sharp to the Edge”
Hi Pawel, I get that for you the IDT says nothing about the scene itself. But if we are talking math and physics, the data in the back end is scene referred (in that sense) and thus retains all the pure data from the scene you shot (without the grade, of course). And this is what Netflix means.
BTW, the photography apps you mention are all a sort of version of ACES. They all have a viewing transform that comes from some sort of creative decision (done by a human). I'm pretty sure that C1, LR and the others don't look the same « out of the box ».
Charles Boileau, ex-Cinematographer, ex-Colorist and now imaging engineer at some VFX studio.
Luis Gomes <gomes.luis@...>
Was really intrigued by this thread.
At first I thought you were discussing some futuristic workflow that does not exist yet. The subject of a “master” – isn't it a transcoded, destructive master with the colour baked in? Archival as in material archival? From reading this thread it occurred to me that we still have a long way to go before reinventing a better archival medium than negative film. Now if only those colorists stopped deleting all the metadata and starting from scratch :-)
Luís, dreaming about this technique
Hi Charles,
Thanks for your thoughts. See my comments below:
Ø Thus the fact that you are not « locking things in » when using ACES (and could technically regrade at any point if archiving is done correctly) means that the workflow is retaining scene referred data.
I think it is extremely important to use accurate and correct terms in professional environments. I'm all for ACES (or any other common colour space). ACES works very well in both Scene-Referred and Display-Referred workflows. But, again, an ACES workflow doesn't actually assure a consistent look despite variations in scene illumination; that is the job of the Scene-Referred or Display-Referred colour workflow, not of ACES itself. A Scene-Referred workflow is inherently upstream of ACES; a Display-Referred workflow is usually done within ACES. Hence I prefer using the term Output-Referred.
But, for all intents and purposes, Netflix was trying to explain that their managed colour workflow (which is in reality Display-Referred, but which they incorrectly called Scene-Referred) does not rely on having properly calibrated monitors for the process to produce consistent results. That's incorrect. I can give you a black-and-white monitor and a colour-blind operator, and good luck getting the colours consistent or correct through that process. Why? Because it is inherently Display-Referred. It is a “stab in the dark”, an approximation at matching the colours by looking at the monitor display. It is not a consistent process, and it doesn't actually assure accurate or consistent colours. It depends on the accuracy of the Displays, on the people looking at those Displays, and on how far the actual scene illumination was from the reference illumination of the camera IDT.
A Scene-Referred workflow is different. It assures colour consistency and accuracy by not having to rely upon a calibrated Display or a human interpretation of what is seen on those displays. You could use it with a black-and-white display throughout the entire pipeline (not that you should). It ensures the most consistent colour reproduction irrespective of variations in scene illumination. It assures accurate colours whether you used LED, tungsten, HID or natural lighting. It assures consistency and accuracy the moment the footage is handed over by the camera operator to the DIT. It assures colour accuracy and gamut irrespective of Display limitations. The only role of the colourist is to apply the creative colour grade.
There is therefore a substantial difference, with real consequences, between those two distinct workflows.
Ø But, as an engineer working in VFX, scene referred simply means that we can retain the correct data for a bunch of manipulations and math to operate in a natural way.
We need consistent terms, not just among colourists, but across the entire production team.
Just because a Display-Referred workflow can get results that, in most cases, are almost as good as Scene-Referred doesn't make it Scene-Referred. Far from it.
Unlike Display-Referred workflows, Scene-Referred workflows do not rely upon display technology to be accurate. In fact, you can have a black-and-white monitor, or no monitor at all, and still assure colour consistency and accuracy. Scene-Referred workflows will almost always deliver more consistent results. There are exceptions, of course: you would not use a Scene-Referred workflow for night-time scenic shots, or under sodium lights, or in some other extreme lighting conditions. These are best handled through Display-Referred workflows to look “good and natural”.
The reason a Scene-Referred workflow will almost always deliver more consistent results is that it doesn't depend on calibrated monitors, on individual variations in the perception of colours, or on multiple transforms that often cause colour “twists”, colour discontinuities or out-of-gamut clipping.
Ø But, if we are talking math and physics, the data in the back end is scene referred (in that sense) and thus retains all the pure data from the scene you shot (without the grade of course).
Not exactly; that is two wrongs trying to make a right. Having a “canned” (non-Scene-Referred) camera IDT transform, and then another transform to “undo” the colour mapping and apply chromatic adaptation and/or correction back to the actual Scene, is not always lossless, consistent or accurate. The two processes are not equivalent. But that's not to say one is “colour-managed” and one is “non-colour-managed”, as incorrectly stated by Netflix. They are both colour-managed, just managed very differently.
Ø But, I feel this « debate » is purely semantics.
I disagree. I disagree because it creates a lot of confusion.
It creates confusion as to who is responsible for delivering colour-corrected images: the cinematographer or the colourist?
It creates confusion as to whether or not one needs colour calibrated displays to ensure colour consistency throughout production pipelines.
It creates confusion as to a camera's capability or lack thereof. To my knowledge, my company's camera, the 9x7, is the only cinema camera currently available that offers a proper Scene-Referred workflow (in addition to “traditional” Display-Referred workflows). Does that mean it is the only camera that Netflix is going to accept on their productions? Who knows? But somehow I do not think this is what Netflix meant. Again, more confusion.
As cinematographers and professionals we need to use correct terms to describe concepts, processes and specifications. It is not about “semantics”. It is the basis of effective communication in any profession: medical, engineering, legal or scientific. Why not cinematography?
Kind Regards,
Pawel Achtel ACS B.Eng.(Hons) M.Sc. “Sharp to the Edge”
fulmetaljacket@...
Hi Everyone,
This is a really interesting discussion and an important point that Pawel is raising. Exact terminology and open standards are really important, and the concepts of “scene vs. display” vis-à-vis the camera system (including lenses) have confused me, especially when you take into consideration the presumably non-linear properties of camera sensors, and the non-linear nature of lenses and the human visual system. This is all very interesting to consider in terms of the prior analogue technology, which would have completely annulled the notion of returning to a physical ground truth of the scene itself.
It seems to me that the idea of a “non-graded” master really just means no data loss/distortion, i.e. “original camera media”. But that raises the notion of “original creative intent” in my mind. If someone shot a movie on Baltars, and someone later created some code to invert the effects of that lens through physical modelling, what are we trying to achieve? We could also do full digital replacements of the actors down the line. The things to preserve, in my mind, are the data (and its metadata, perhaps through analysis of the scene by means other than the camera) and the creative intent of the image authors.
Would the term “camera referred” make any sense, considering the camera itself as part of the scene? It is, after all, a physical process on set delivering the photons to their ultimate destination in the digital realm. IMO there is no such thing as a perceptual experience that is a reliable indication of underlying physical properties, so why toil to establish a physical one (outside of the technical utility required for VFX etc.)? I also know that even between individual cameras of the same model (two RED Epics, for example) the imagery will come out differently.
Thanks for a very interesting thread!
Brett
Cinematographer | Camera Operator
IMDb.me/brett.harrison brettharrison.co
AUS: +61 4 2816 0615 | USA: +1 (310) 994-9952
Mike Nagel
@Pawel
Thanks.
Hi Mike,
Thanks for the corrections! Glad someone is awake. :-) Yes, my mistake.
Ø if so, are you saying that you provide a scene-referred IDT to your customers? Free or for a charge? Is that an industry first for cine cams?
When using the 9x7, a Scene-Referred IDT can be created on set by the cinematographer. It is a choice, and you can easily throw away the multi-illuminant reference IDT that is supplied with the camera in favour of that Scene-Referred IDT. We also supply workflow software (for Linux and Windows) that supports whichever workflow you choose. And, if you change your mind in post, you can easily switch back to a Display-Referred workflow on a clip-by-clip basis. It is a flexible choice and yes, it is free. :-)
There are “lower-end” stills/video cameras that, with a little bit of “gymnastics”, allow for a Scene-Referred workflow, but none of the “big names” that I'm aware of. One prominent still-picture system that supports Scene-Referred workflows is Phase One plus the Capture One workflow software. In terms of actually supporting a Scene-Referred workflow on set, yes, I believe the 9x7 is the first one.
Ø Can you share examples (using footage shot w/ your cam) of how a true scene-referred workflow and pipeline can provide better and more consistent results than a display-referred workflow in scenes that were shot under different lighting conditions?
Here are two examples of vastly different illuminations (I have included spectral data for each illuminant in the “light” folder). Please note that, whilst the CCT in the metadata was set to the actual values measured with a spectrometer, it is actually irrelevant as long as it is set to the same value when the Scene-Referred IDT is created and viewed. The actual CCT value is irrelevant because it is taken care of by the IDT, not by chromatic adaptation.
1. Tungsten: https://drive.google.com/drive/folders/1BxnjhFS8l5V6M5tV0wk3-2UlfQLOMqf5?usp=sharing
2. Daylight: https://drive.google.com/drive/folders/1B7rCcRx4ZcOEh-PgQzl_fERBKunTyd4W?usp=sharing
These colour profiles were custom-created for the Scene-specific illumination on set. And they can easily be created using an uncalibrated monitor, even a black-and-white monitor, because they are completely independent of any ODT.
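[For those wondering what “creating the IDT on set” amounts to mathematically, here is a minimal sketch (my own illustration with made-up numbers, not our actual software): solve for the 3x3 matrix that best maps the camera's raw chart readings, captured under the actual scene illuminant, to the chart's published reference XYZ values. No display is consulted at any point:]

import numpy as np

# Hypothetical data: N chart patches photographed under the ACTUAL scene light.
# cam_rgb: linear camera-native readings; ref_xyz: published patch values.
cam_rgb = np.array([[0.091, 0.054, 0.036],
                    [0.382, 0.301, 0.221],
                    [0.148, 0.191, 0.284],
                    [0.112, 0.138, 0.051],
                    [0.201, 0.152, 0.291],
                    [0.311, 0.422, 0.380]])   # illustrative numbers only
ref_xyz = np.array([[0.115, 0.100, 0.065],
                    [0.385, 0.352, 0.282],
                    [0.180, 0.190, 0.330],
                    [0.110, 0.135, 0.065],
                    [0.240, 0.190, 0.350],
                    [0.310, 0.430, 0.460]])

# Least-squares fit of a 3x3 matrix M such that cam_rgb @ M ~ ref_xyz.
# This matrix IS the scene-referred IDT: the actual illuminant is baked in,
# so no chromatic adaptation is needed downstream.
M, *_ = np.linalg.lstsq(cam_rgb, ref_xyz, rcond=None)
M_IDT_scene = M.T   # conventional column-vector form: xyz = M_IDT_scene @ rgb

residual = cam_rgb @ M - ref_xyz
print(M_IDT_scene)
print("RMS fit error:", np.sqrt((residual ** 2).mean()))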
I intend to produce a better illustration showing how a Display-Referred workflow with a “canned” IDT can produce less accurate colours in certain situations due to colour “twists” or out-of-gamut values. It is coming. We are not as well-resourced as Netflix :-)
Some excellent points, Brett.
You are absolutely right. If the cinematographer's intent is to create a distinctive look through lens choices or creative filters, a Scene-Referred workflow is not going to be suitable.
However, if the cinematographer's intent is to create consistent colours irrespective of scene illumination, different lens looks or different cameras, then a Scene-Referred workflow will most likely be a better choice. Baltars will look indistinguishable from Zeiss (as far as colour is concerned). LED will look indistinguishable from daylight or HMI.
As you rightly pointed out, it's not a case of “one size fits all”, or of telling filmmakers that one is better than the other. It is a case of identifying, and therefore naming, those processes correctly, so that everyone is on the same page, and of helping filmmakers make informed decisions as to which workflow better suits their intent.
Kind Regards,
Pawel Achtel ACS B.Eng.(Hons) M.Sc. “Sharp to the Edge”
Philip Holland
For those unfamiliar with RED cameras and the IPP2 Color Workflow, a Scene Referred Workflow can be achieved in camera through the use of "Creative Cubes" as well as IPP2. This has been supported within the camera and in post since the DSMC2 cameras, and it is on the current DSMC3 lineup as well.
At its core, IPP2 allows you to inject this Creative Cube and, either through the camera or on the display itself, provide a Scene Referred correction or look that is carried out to whatever output you want, i.e. SDR or whatever type of HDR. This has been useful when creating looks on set and monitoring both in HDR and SDR. I believe Panavision did a nice demonstration of this a few years ago showing their Outpost on-set monitoring solutions.
In its simplest form, it looks like this:
Footage in REDWideGamutRGB/Log3G10 > Creative Cube (as well as grading, IDT, calibration, etc.) > Output Transform
where the Output Transform can be something like Rec.709/BT.1886, P3D65/ST2084_1KN, etc. The Creative Cube carries over and is interpreted correctly whatever the Output Transform.
The Creative Cube can be applied in camera, in post, and carries over with the footage to assist with post production.
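[Sketched in Python with stand-in curves (the log constants below are NOT the real Log3G10, and the "cube" is reduced to a toy saturation tweak; only the ordering matters), the pipeline above looks something like this:]

import numpy as np

def log_decode(x, a=0.25, b=150.0):
    # Stand-in log-to-linear curve (NOT the real Log3G10 constants).
    return (10.0 ** (x / a) - 1.0) / b

def creative_cube(rgb_log):
    # Toy "Creative Cube": a saturation boost applied in log space.
    # A real cube would be a colourist-authored 3D LUT.
    mean = rgb_log.mean(axis=-1, keepdims=True)
    return mean + 1.2 * (rgb_log - mean)

def output_transform_sdr(rgb_log):
    # Stand-in SDR output transform: decode, roll off, display gamma.
    lin = log_decode(rgb_log)
    return np.clip(lin / (lin + 1.0), 0.0, 1.0) ** (1.0 / 2.4)

def output_transform_hdr(rgb_log, peak_nits=1000.0):
    # Stand-in HDR output transform, fed from the SAME upstream state.
    return np.clip(100.0 * log_decode(rgb_log), 0.0, peak_nits)

footage_log = np.array([[0.33, 0.31, 0.28]])  # hypothetical log-encoded pixel
looked = creative_cube(footage_log)           # the look travels with the footage
sdr = output_transform_sdr(looked)            # Rec.709-class deliverable
hdr = output_transform_hdr(looked)            # HDR-class deliverable
print(sdr, hdr)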
Interestingly, as somebody who's been making LUTs for Creative Cubes along the way, it's been very interesting to see some of the creative and technical asks on various productions outside of what I film: transforms, calibrations, emulations, show LUTs, etc.
At the inception, if my memory serves correctly, Scene Referred began with Linear being the reference point, which in fact all major digital cinema cameras utilize in crafting their output, but which isn't something we see well as humans. In modern times, whether you are going through ACES, whatever camera format (ideally some sort of log format), or converting everything over to Linear, it's all about getting things into a starting space. In the VFX world this has been Linear forever. In practical workflows on set, not using an ACES workflow, that comes in the form of an accurate transform of, say, RWG/Log3G10 to AWG/LogC, or vice versa, or honestly whatever. Before ACES we were doing these transforms on multi-camera shoots to keep things consistent when rolling with several cameras on a scene. A personal note: Linear is "the best" container, when it comes to gamma, to build up from.
Back to IPP2. An example of a practical use case, outside of what can be done with footage, is producing VFX elements in Linear, converting to RWG/Log3G10, in my case usually applying an IDT or not depending on things, and then applying the Creative Cube, which then goes to whatever Output Transform you want. This allows all of those Rec.709, P3D65 and Rec.2020 outputs to come from the Scene Referred look.
In terms of a Scene Referred workflow, all of this has been working for several years. To a point, the IPP2 workflow can be viewed as an alternative to ACES, or not, depending on your perspective. The same goes for something built around AWG/LogC, which many post houses deploy. AP0 makes great sense; I like RWG due to the cameras actually being able to capture insane saturation way out there that we can utilize, which has small benefits in regards to HDR grading when it comes to colors we as humans can see.
I switched over to the IPP2 Workflow (for all cameras) in 2017, if I recall correctly, for most projects that have come through my doors or that I've pushed out through post elsewhere. That's with a dash of ACES, Linear and Display Referred workflows being used for rather specific displays at tradeshows, theaters and exhibition screens.
Don't really care how people want me to make the pizza, as long as they are asking me to make the pizza.
Phil
-----------------
Phil Holland - Director & Cinematographer
Hi Phil,
Thanks for the post, but you seem to be confusing what the IPP2 Creative Cube is and what it does. It has nothing to do with a Scene-Referred workflow. The ability to apply a custom look in the camera's display pipeline is not the same as a Scene-Referred workflow. Far from it.
The 9x7 also offers a Creative Transform, similar to the Creative LUT implemented by RED. But this is something completely different: it serves a different purpose, and it is “trivial” compared to the complexities of a Scene-Referred workflow as a function of the camera.
The Creative Cube is an Output Transform, not an Input Transform. It works downstream of the RWG/Log3G10 Input Transform, which is never Scene-Referred. The reason it is not Scene-Referred is that it doesn't depend on the actual Scene. It is “canned” and “baked in”. You can't change it depending on the Scene.
By RED's own admission, the Creative Cube is an Output LUT, which will limit both the dynamic range and the colour space; RED explains this very clearly in their documentation.
The Creative LUT with the IPP2 workflow doesn't assure consistent colorimetry across different scene illuminations, and it cannot reasonably be created without referencing a display (with its limitations and imperfections). If you created a Creative LUT on a particular monitor (unable to observe mapping that falls outside that monitor's gamut), clearly this is not going to be Scene-Referred. A Creative LUT is just that: a creative LUT.
However, as I mentioned earlier, it is possible to have a “poor man's” Scene-Referred workflow with any camera using the “Color Match” function in DaVinci Resolve. Compared to a “proper” Scene-Referred workflow, it uses the non-Scene-Referred, “baked-in” IDT and then applies a corrective transform downstream of it. The benefit of the “Color Match” function over the IPP2 Creative LUT is that it doesn't depend on the actual viewing device to apply the transform. You can do it on a black-and-white or uncalibrated monitor and still achieve consistent results. The limitation, however, remains the same: it can create unwanted colour “twists”, reduced dynamic range, colour discontinuities and out-of-gamut values, which a “proper” Scene-Referred IDT doesn't suffer from. In short: two wrongs don't make a right. The “baked-in” IDT that brings the camera-specific gamut into RWG must be corrected, and that correction will inevitably be lossy, depending on how much the actual Scene illumination differs from the one “baked in” to the non-Scene-Referred IDT provided by the camera maker.
A Scene-Referred workflow, which uses a proper Scene-Referred IDT, will almost always produce higher dynamic range, wider gamut and more accurate colours than any Display-Referred workflow that is based on a non-Scene-Referred IDT and a downstream correction.
And this is why Scene-Referred workflows are such a big deal, and why a correct understanding of the forces at play is important.
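[A toy numerical illustration of why the “undo” is lossy (the matrices are hypothetical; only the clipping mechanism matters): once the “baked-in” IDT maps a saturated scene colour to a negative component and the container clips it, the downstream corrective transform is operating on data that no longer exists:]

import numpy as np

M_baked = np.array([[ 1.40, -0.30, -0.10],     # hypothetical fixed camera IDT
                    [-0.20,  1.30, -0.10],
                    [ 0.05, -0.25,  1.20]])
M_fix = np.linalg.inv(M_baked)                  # perfect mathematical "undo"

saturated = np.array([0.05, 0.85, 0.90])        # deep cyan-ish scene value

encoded = M_baked @ saturated                   # one component goes negative
clipped = np.clip(encoded, 0.0, None)           # container/storage clips it
recovered = M_fix @ clipped                     # correction applied downstream

print(encoded)          # first component < 0: outside the working gamut
print(recovered)        # != saturated: the clipped information is gone
print(M_fix @ encoded)  # without the clip, the undo would have been exact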
The comments below are off-topic but, I felt, needed to be addressed.
Ø I like RWG due to the cameras actually being able to capture insane saturation way out there that we can utilize, which has small benefits in regards to HDR grading when it comes to colors we as humans can see.
This is not supported by my measurements or experience.
Starting from the RED Dragon, which had terrible (almost unusable) reproduction of blue colours, rendering them as a “dirty” magenta-grey. This was never fixed by RED, and my understanding is that the problem was a compound of poor CFA/sensor design, a badly matched OLPF and an inaccurate IDT.
This was largely corrected in Monstro, but it is still not in the same “league” as what some of the best cinema cameras offer.
RED Monstro also suffers from colour “twists”, discontinuities and clipped gamut using the “canned” Display-Referred IDT and the IPP2 workflow. In particular, the magenta line is heavily twisted around the red spectra, and saturated greens are significantly clipped compared to, say, ARRI or the Sony Venice, just to name a couple of cameras with really good fixed IDTs. An additional LUT downstream of RED's IPP2 IDT is likely to cause further significant colour shifts and gamut issues, and would be counter-productive to the overall goal of colour consistency and accuracy; essentially exactly what RED warns about in their article on Creative LUT usage in IPP2.
Below is an illustration of how Monstro reproduces monochromatic colours using the RED-supplied IDT under IPP2. You can see a very “dirty” magenta line, with red colours reproduced as magenta. This is problematic and not easy to correct without affecting other colours. Secondly, you can see significant gamut clipping: the camera is not capable of reproducing even Rec.709 saturated cyans or greens.
For comparison, below is the same test with the Sony Venice:
You can see an almost perfect “magenta line”. You can also see very impressive reproduction of pretty much the entire range of monochromatic colours, without “twists” or significant clipping. Kudos to Sony.
Also, remember that RED's RWG does not encapsulate all visible colours, unlike CIE XYZ (used in the 9x7 IDT), which is wide enough not only to encapsulate all visible colours, but also to serve as a standard working colour space without the need for additional transforms. In particular, RWG doesn't reproduce saturated cyans and greens, which will be clipped even if the camera sensor could actually capture them. This seems to explain why the RED Monstro is unable to reproduce those colours: not even the output colour space (RWG) can accommodate them without clipping.
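[Whether a given colour space can hold a given chromaticity is easy to check: the chromaticity must fall inside the triangle formed by the primaries in the CIE xy plane. The sketch below uses the published Rec.709 primaries and an approximate chromaticity for spectral cyan (~490 nm); substitute any camera gamut's published primaries to repeat the comparison:]

def inside_triangle(p, a, b, c):
    # True if xy point p lies inside triangle (a, b, c), judged by the
    # sign of the cross product against each edge.
    def side(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    s1, s2, s3 = side(a, b, p), side(b, c, p), side(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

# Rec.709 primaries (CIE xy) -- published values.
R709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]

# Approximate chromaticity of monochromatic ~490 nm light (saturated cyan).
cyan_490 = (0.045, 0.295)

print(inside_triangle(cyan_490, *R709))          # False: spectral cyan is outside 709
print(inside_triangle((0.3127, 0.3290), *R709))  # True: D65 white is inside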
In summary, RED cameras are not capable of reproducing deeply saturated colours using RWG under IPP2, because even the output colour space limits the range that can be represented.
Hope it helps.
Kind Regards,
Pawel Achtel ACS B.Eng.(Hons) M.Sc. “Sharp to the Edge”
Philip Holland
Hi Pawel,
We've crossed some of these paths previously online, pertaining to several cameras over the years. I had to go back and look at those related to your Flanders spectrum frame grab.
The first thing in your reply, about the warning on how not to design a Creative Cube, is exactly that. The critical wording is "could" and "consider", meaning it's possible to limit things in that manner, or not, within that cube. It's up to whoever authors whatever goes there to figure out what they intend to do with it. That is moderately the point of all of this and exactly how it is used in practice. Reread that text, I guess, to understand what's going on.
The comments that are off topic have already been discussed between you and Graeme Nattress some time ago pertaining to color gamut. You've also overlooked why various manufacturers craft their wide color gamut spaces to exceed the CIE plots; there are reasons to do that, especially in modern times, but that is something you will discover, I imagine, in your continued color developments. Also, I believe Graeme explained and demonstrated the impact of correctly adjusting the ISO so the data isn't getting clipped, to eliminate the distortion, which is occurring due to RED's gamut mapping attempting to show a pushed high-gamut color. Your spectrum was exposed well, but you pushed it to ISO 1600, which pushes it to the edge of the Rec.709 Output Transform. Even with that said, it's really nice to craft your own transforms or even drag things through ACES.
The oddity of your comment is that the hard clip you see on Venice is the edge of what looks to be S-Gamut3 or S-Gamut3.Cine, which is a hard stop, generally in warm hues, just outside the CIE plot. RED is actually one of the better cameras at capturing very saturated hues that tend to exist outside of gamut and are often wrangled back in via high-gamut compression, limiting or suppression. That's why your comment is so odd, and many who work with very colorful subject matter or lighting would strongly disagree or be confused by your statement. A good algorithm comes in handy if you are attempting to shove really high-gamut stuff into a more pleasing space, however; ACES just released such a thing. It can get tricky if done poorly. I've seen a few "magenta flames" from attempts to do it incorrectly. Oddly, most of the time you likely don't need that compression if the material is exposed well, even at higher gamuts. RED and ARRI, for instance, have shown improvements in that range with their standard outputs to various color spaces.
I also have spectral data, and I don't get results like yours unless I push color to clip within a transform, and I know Graeme has shown you his results to point you in the right direction on things. I believe you had this chat in 2018.
And, extremely off topic: I appreciate the moxie and tenacity of your 9x7 endeavors; at its core I admire what you are doing. But it's important to ground a lot of your demonstrations and presentations. The aggressive marketing has become a bit glaring in surprisingly small circles. You have a few color scientists, engineers at various camera companies, and rather experienced industry professionals, people who have created much of the backbone of the things you use daily, seeing your activities all over the net or in public and private demonstrations. Not to mention extremely experienced and technical filmmakers who may have gone through some of the journeys you've gone through, with rather familiar hardware, for clients who have needed such things. I do a lot of hired-out camera tests for clients, studios and companies, which often include their own internal people getting involved, as there are rather large cost considerations when looking deeper into expansive workflow and ecosystem changes. If there were a major color issue, trust me, I would have sent flares, helicopters and zeppelins to RED over it. That's pretty much what I do whenever I come across some sort of bug on any camera. A great deal of those tests do reveal a lot of useful critical feedback to send over to the major manufacturers, and most often all of them have been very responsive for many, many years.
Phil
-----------------
Phil Holland - Director & Cinematographer
Hi Phil,
My point about the IPP2 Creative Cube was that its function doesn't lend itself to Scene-Referred workflows and that it doesn't achieve any of the Scene-Referred workflow advantages. It is applied downstream of a non-Scene-Referred, “baked-in” IDT. It is a “red herring” in the discussion we are having.
Ø Also I believe Graeme explained and demonstrated the impact of correctly adjusting the ISO so the data isn't getting clipped to eliminate the distortion, which is occurring due to RED's gamut mapping attempting to show a pushed high gamut color. Your spectrum was exposed well, but you pushed it to ISO 1600 which pushes it to the edge of the Rec.709 Output Transform. Even with that said, it's really nice to craft your own transforms or even drag things through ACES.
Yes, Graeme did demonstrate that changing the ISO and setting the colour temperature to the default 5600K eliminates the colour “twists” and gamut clipping – advice which I consciously ignored. :-)
The reason I ignored it is because the purpose of the test was to see how colour inaccuracies and gamut clipping occur under chromatic adaptation (or colour correction in general), which is a necessity when working in Display-Referred workflows. Sony Venice, ARRI LF and RED Monstro were tested under exactly the same setup and settings. Only RED Monstro showed extensive colour “twists” and gamut clipping (both extremely difficult to correct). The purpose of the test was to learn from it, not to change test parameters in order to suit the narrative.
What I learned from the test was that:
1. The monochromatic light test is a very well-conceived test because, in theory, all the colours should be invariant under chromatic adaptation. It therefore allows you to observe the limitations in each camera's colour reproduction.
2. Chromatic adaptation (a necessity in Display-Referred workflows) can cause a significant reduction of colour gamut, metamerism and colour inconsistencies.
3. When using the “default” CCT (which I presume is the one under which the IDT was created), the results were always better, and they got progressively worse the further you deviated from it. This was the case with all the cameras under test, but the degree to which it got worse varied.
4. RED Monstro exhibited extensive colour shifts (“twists”) and gamut clipping during chromatic adaptation, which indicates limitations in the sensor, the IDT or both.
5. Sony Venice showed outstanding accuracy and invariance under chromatic adaptation. It also showed the largest gamut.
6. A Scene-Referred IDT doesn't suffer from any of the above problems because it doesn't require chromatic adaptation or colour correction downstream.
The feedback that I got from RED was:
1. “Change the test parameters to avoid the problem” (i.e. change the CCT).
2. “This is caused by excessive UV” (as in the case of the RED Dragon and Helium sensors).
3. “These colours do not exist in real-life scenarios.”
4. Silence the critics.
This is not to bag RED. It is what it is. I have used and owned most of their cameras since the RED One, and they have a lot of compelling features that other cameras do not have.
It is to point out that many of those issues, like colour twists and gamut clipping, can be quite severe under chromatic adaptation, and that they can be avoided altogether, or at least minimised, by using a Scene-Referred IDT instead of a “baked-in” IDT created under an ideal lighting source.
And, most importantly, it is to point out the differences between Scene-Referred and Display-Referred workflows.
Kind Regards,
Pawel Achtel ACS B.Eng.(Hons) M.Sc. “Sharp to the Edge”
Thanks for this debate, Pawel.
I think that your point of view, as a camera maker and cinematographer, is a little bit different from the point of view on the VFX side of things. As you stated, scene-referred does not need a monitor to be consistent (and yes, monitors make display- or output-referred workflows inconsistent). Again, as an engineer, my job is to make sure pixels come in and come out mathematically identical. And they are... because if they were not, we'd be out of a job. So, as I said: different points of view on the same subject, meaning that your reality is not the one I share on a daily basis.
That being said, I was a colorist for 10 years and a cinematographer before that, so I do understand your point of view and the need for a consistent language. And thanks to people like you, things move forward! Cheers!
Charles Boileau, ex-Cinematographer, ex-Colorist and now imaging engineer at some VFX studio.
Hi Charles,
Many thanks for your feedback; it is great to hear a voice from a different perspective.
Indeed, Netflix’s push for ACES workflow is good and makes things streamlined and consistent. I’m personally all for it.
Ø The main masters you'd deliver to Netflix would be display referred, and there you are spot on. But Netflix also pushes to get what they call a NAM (non-graded archival master) and a GAM (graded archival master). The NAM is technically 100% scene referred (from the original scene and the VFX) but the GAM is not... It's like a linear version of your grade, so it retains a lot of info, but not all.
I like the NAM and GAM terminology because the terms are descriptive and clear. In my mind there are two ways to arrive at a NAM:
1. Scene Referred
This is done either through a straight Scene-Referred camera IDT, as I described earlier, or (the “poor man's version”) by using the “baked-in” camera IDT and a corrective Scene-Referred transform downstream (for example, through the “Color Match” function in Resolve). Both would be technically “Scene-Referred”, except that the second workflow may limit the gamut and/or introduce unwanted colour “twists”, depending on how much the “baked-in” IDT's reference lighting differs from the actual scene lighting. As I demonstrated with the RED Monstro, it can differ a lot.
2. Output Referred
This is the most common workflow, where colour correction (before the grade or look is applied) happens by referencing a display. However, I personally call it “Output-Referred” because it usually happens in a space that is much wider than the actual display, while the transform to P3, BT.1886, Rec.2020 (the ODT), etc. is always fixed. So it may not be display-specific, but it is always created based on evaluation of the output.
All of the above are obviously colour-managed, just managed very differently.
What is characteristic about Scene-Referred workflows (and what really defines them) is that they require the capture of spectral data at the scene. There are different ways and techniques to do so, but this is the one critical ingredient needed to go from the camera-specific colour space to the working colour space (the NAM). It also puts the burden of colour correction on the cinematographer rather than the colourist; the look is ultimately applied downstream by the colourist. A Scene-Referred workflow also doesn't require chromatic adaptation (setting colour temperature).
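[As a minimal sketch of what that spectral capture buys you (the file names and data here are hypothetical; the maths is the point): simulate the camera's and the standard observer's responses to training spectra under the measured scene illuminant, then fit the matrix between them:]

import numpy as np

# Hypothetical inputs, all sampled on the same wavelength grid (e.g. 380-730 nm):
#   cmfs.npy        -- CIE 1931 2-degree observer, shape (N, 3)
#   camera_ssf.npy  -- camera spectral sensitivities, shape (N, 3)
#   scene_spd.npy   -- the ACTUAL scene illuminant, measured on set, shape (N,)
#   patches.npy     -- training reflectance spectra, shape (P, N)
cmfs = np.load("cmfs.npy")
ssf = np.load("camera_ssf.npy")
spd = np.load("scene_spd.npy")
refl = np.load("patches.npy")

# Simulate what the camera and a colorimetric observer each "see"
# for every patch under the measured scene light.
radiance = refl * spd                      # (P, N): light leaving each patch
cam_rgb = radiance @ ssf                   # (P, 3): camera responses
xyz = radiance @ cmfs                      # (P, 3): ground-truth colorimetry
xyz /= (spd @ cmfs)[1]                     # normalise so the illuminant's Y = 1

# The scene-referred IDT is the best 3x3 map from camera space to XYZ
# for THIS illuminant -- no chromatic adaptation is needed afterwards.
M, *_ = np.linalg.lstsq(cam_rgb, xyz, rcond=None)
M_IDT_scene = M.T
print(M_IDT_scene)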
Hope it makes sense.
Maybe it is a good time to bring this discussion into the IMAGO Technical Committee (ITC), of which I'm a full member, and get a broader consensus for consistent terminology.
I look forward to input from anyone. How do we make those distinctions clearer?
Kind Regards,
Pawel Achtel ACS B.Eng.(Hons) M.Sc. “Sharp to the Edge”
ACHTEL PTY LIMITED, ABN 52 134 895 417 Website: www.achtel.com Mobile: 040 747 2747 (overseas: +61 4 0747 2747) Mail: PO BOX 557, Rockdale, NSW 2216, Australia Email: Pawel.Achtel@... Facebook: facebook.com/PawelAchtel Twitter: twitter.com/PawelAchtel Skype: Pawel.Achtel
Hi Pawel,
Thanks again for this discussion and for your input. A great eye-opener. Even after 20+ years in image making and manipulation, we can still learn things, which is what is so great about this industry. I can honestly say I'm now armed with a better overview of the whole process; a great addition to my (new) current work.
I don't know where we could start. I guess it's by doing some education on a peer-to-peer basis and keeping things like CML alive. The ITC is also a good place to start, but that just covers cinematographers... Maybe VES, SMPTE etc... Quite the undertaking! You can at least know you influenced a few people here. 😁😁 Thanks again!
Charles Boileau, ex-Cinematographer, ex-Colorist and now Lead Imaging Engineer at some VFX studio. Montreal, Québec, Canada.