HDR question


jmselva
 

Hi all.

I have shot a doc on an FS7 in XAVC-I / S-Log.
It is now being sold to NHK Japan and they are asking if it was shot in HDR.
Now, given that all modern cameras are HDR capable with their 13+ stops of latitude, and that doing an HDR version or not is merely a post decision, I find the question odd.

But I do have a question though: does the XAVC-I codec and S-Log limit in any way the possibility of doing an HDR version after the fact?

Thanks in advance.
Jean Marc Selva, DOP, Paris.


James Barber
 

If you haven't drastically over- or underexposed anything, the FS7 in S-Log and the 10-bit codecs definitely give you enough dynamic range to grade in HDR, but not huge grading latitude. Think of grading 8-bit footage for 8-bit playout: it works, but it doesn't let you grade aggressively. 12-bit or higher codecs are best for HDR grading, since the deliverable itself is 10-bit. But it's definitely doable, especially as HDR should mostly just be regular light levels from IRE 0-100, and bright brights from 100-1000+.

The big things for HDR grading in terms of exposure are as little clipping as possible, so there can be detail in those highlights, and as much bit depth as possible for colour information in those extreme gradients.
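As a rough, back-of-the-envelope illustration of why bit depth matters here (this is not any specific codec's actual code allocation): if a log curve spread N stops evenly across a B-bit signal range, the code values available per stop would be:

```python
# Back-of-the-envelope only: real log curves (S-Log3, HLG, etc.) do not
# allocate code values perfectly evenly per stop.
def codes_per_stop(bit_depth: int, stops: float) -> float:
    """Code values per stop if `stops` stops were spread evenly over a B-bit range."""
    return (2 ** bit_depth) / stops

# 14 stops of scene range:
print(round(codes_per_stop(10, 14)))  # 10-bit: ~73 codes per stop
print(round(codes_per_stop(12, 14)))  # 12-bit: ~293 codes per stop
```

Four times the codes per stop is what gives a 12-bit source its extra margin against banding in those extreme HDR gradients.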

Not many cameras record to any form of HDR natively. The ones that do will record in HLG (Hybrid Log Gamma) and the Rec.2020 colour space. But crucially, they don't actually record more data than log at the same bitrate – it's just a different gamma curve. Very few cameras record a big enough gamut to actually make good use of the Rec.2020 colour space yet (the F65, for instance).

That said, HLG is a sort of 'broadcast' standard for HDR that is being pushed for live TV, so NHK might want stuff shot in HLG to reduce post time, much as some places currently prefer Rec.709 footage rather than log.

I would say something like "the dynamic range of the recorded footage is HDR compatible when graded, but has been shot in Log format to preserve the maximum range of light and shadow".

Hope that 2cents helps.

James I. Barber
Director/DoP/Editor
London


Kevin Shaw
 

Hi Jean Marc

>I have shot a doc on a FS7 in XAVC-I / Slog. It is now being sold to NHK Japan and they are asking if it was shot in HDR.

James gave a great answer and I agree with everything he says.
I have some additional thoughts on what they may be asking, but I'm just guessing - I have no experience with NHK. (I am a colorist.)

First - yes, the FS7 is HDR compatible: it has enough dynamic range for decent HDR, but as James says there is not much latitude at 1000 nits or higher.
Yes - S-Log is a valid HDR source. In fact Sony demos a very viable color-managed workflow where everything stays S-Log until the final export.

Now the less clear areas. 
Working in HDR, and having worked with broadcasters and post houses that work with broadcasters, it is generally agreed that lighting ratios and creative decisions in production would ideally be different for productions planned for HDR. This is especially true if, for example, you like to rely on the limits of BT.709 and shoot blown-out windows, or have practical lights in shot. Even shooting into the sun (way more than 1000 nits!) is a problem. All these things require a lot more work in the grade. So whilst the source media is technically HDR ready, creatively it may need a lot of tweaking. Unless you were asked to deliver HDR, it is unlikely that you metered or monitored the extended range, or allowed for the change in perception that increased brightness and highlight detail might bring.

As James said, they may just be asking if there is any clipping in the source media - clipping is hard to sell in HDR deliverables, and usually involves tricky color grading. Don't think just about bright whites either. Neon lights and bright colors in sunlight may clip in a single color channel. In BT.709 we expect them to desaturate, but in HDR we do not.

If the program was post-produced in ACES or another color-managed workflow, your camera's dynamic range has been protected. But if it was just graded and finished as straight BT.709, a trim pass to get HDR might produce disappointing results. They would have to re-conform and re-grade, or they might ask for the colorist's project to work from. Given they only asked if it was shot HDR this seems less likely to be the case, but it is worth bearing in mind.

Finishing in HDR and delivering SDR does not have the same limitations, and may actually give a better BT.709 version.

I don't think they are asking if it was shot HLG.

Best 
Kevin

Kevin Shaw, CSI
colorist, instructor and consultant
t: +44 7921 677 369
e: kevs@...
finalcolor: www.finalcolor.com
ICA: www.icolorist.com
twitter: www.twitter.com/kevscolor
linkedIn: www.linkedin.com/in/kevscolor



Nick Shaw
 

On 15 Mar 2018, at 20:59, Kevin Shaw <kevs@...> wrote:

…it has enough dynamic range for decent HDR, but as James says there is not much latitude at 1000 nits or higher

What people often do on the FS7 is to expose one or two stops over, and "print down" for monitoring and post (see this video from Convergent Design).

While this is a useful approach to reduce shadow noise, you are doing so at the expense of highlight latitude. If this approach has been taken, it is more likely that windows and practicals may be blown out, unless care was taken not to. And as Kevin says, if something is blown out in the rushes, it can be hard to make it look good in HDR.

Nick Shaw
Workflow Consultant
Antler Post
Suite 87
30 Red Lion Street
Richmond
Surrey TW9 1RB
UK

+44 (0)7778 217 555


alister@...
 


But HDR isn’t just about highlights; it’s also about screens that can show the shadow range better, perhaps with better shadow contrast, and this means noise can be more problematic in HDR than SDR as it is reproduced better. So the trick is getting the right balance, so that the crucial and all-important mid range is exposed well. It is a mistake to just look at the highlights and shoot to avoid clipping if this results in a compromised shadow or mid range; likewise it wouldn’t be good to shoot super bright for great shadows if that kills the highlights. You need to find the right balance and not constantly obsess over highlight clipping or excess noise. You need to find the sweet spot for the camera you are using, and this normally means getting the mid range right, just as you would in SDR.

Most current HDR consumer displays struggle to achieve 1000 nits, and even where they do, this will only be over very small parts of the screen, so it is often limited to specular highlights or other “shiny bits”. If these are clipped, no one is going to notice, as that’s how they tend to look in the real world. Most real-world HDR productions target 1000 to 1500 nits for the final output, which is around 9 to 10 stops. So with most log cameras capable of capturing at least 12 usable stops, maybe a touch more, there is still some exposure wriggle room. So if you feel that shooting at the equivalent of 1000 or 800 ISO on an FS7 (+1 stop over base) helps with noise, then I would continue to shoot that way, as noise in the mid range, which makes up the majority of most images, will be a much more noticeable artefact than a few small clipped specular highlights that no current HDR screen can show correctly anyway. If you have a large, bright window in the shot, then no HDR TV or monitor is going to deal with this well: it will hit the power limits of the display, so it can’t be reproduced super bright, and the brightness will be throttled back according to the display’s power limitations and the MaxFALL and MaxCLL metadata in the final material. So the key to good HDR is no different to good SDR: control the contrast in the scene, avoid extreme highlights and expose well, so that you don’t deliver an excessively noisy file to post.


Alister Chapman

DoP - Stereographer
UK Mobile +44 7711 152226
US Mobile +1(216)298-1977


www.xdcam-user.com    1.5 million hits, 100,000 visits from over 45,000 unique visitors every month!  Film and Video production techniques, reviews and news.



Nick Shaw
 

On 16 Mar 2018, at 11:38, alister@... wrote:

Most real world HDR productions target 1000 to 1500 NITS for the final output, which is around 9 to 10 stops.

It doesn't mean anything to say 1000 nits "is 10 stops". Stops are relative. 1000 nits is about 10 stops more than 1 nit, but 1 nit has no special significance. SDR displays go much darker than that, and HDR ones even more so. Also, screen contrast ratios and scene contrast ratios are not the same thing.
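The point that stops are purely relative can be made concrete with a one-liner: a stop is just a factor of two in luminance, so a nit value only translates to "stops" relative to some chosen reference (the 0.005 nit black level below is purely an illustrative figure):

```python
import math

def stops_between(low_nits: float, high_nits: float) -> float:
    """A stop is a doubling of luminance, so stops = log2 of the ratio."""
    return math.log2(high_nits / low_nits)

print(round(stops_between(1, 1000), 2))     # ~9.97: 1000 nits is ~10 stops above 1 nit
print(round(stops_between(0.005, 1000), 2)) # ~17.6 stops above a deep 0.005 nit black
```

Pick a different reference black and the same 1000 nit peak "contains" a very different number of stops, which is exactly why the figure on its own is ambiguous.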

But I agree with your point that pushing the exposure too far one way or the other, to protect either highlights or shadows, can be a bad idea.

Nick Shaw
Workflow Consultant
Antler Post
Suite 87
30 Red Lion Street
Richmond
Surrey TW9 1RB
UK

+44 (0)7778 217 555


alister@...
 

It doesn't mean anything to say 1000 Nits "is 10 stops". Stops are relative. 1000 Nits is about 10 stops more than 1 Nit, but 1 Nit has no special significance. SDR displays go much darker than that, and HDR ones more so. Also screen contrast ratios and scene contrast ratios are not the same thing.

Nick, could you enlighten me as to how the contrast ratio of a display is different to the contrast ratio of a scene? Surely a ratio is a ratio: if a screen can show 10:1 and a scene is 10:1, are these ratios not the same?

Alister Chapman

DoP - Stereographer
UK Mobile +44 7711 152226
US Mobile +1(216)298-1977


www.xdcam-user.com

Nick Shaw
 

On 16 Mar 2018, at 13:19, alister@... wrote:

Nick, could you enlighten me as to how the contrast ratio of a display is different to the contrast ratio of a scene, surely a ratio is a ratio, if a screen can show 10:1 and a scene is 10:1 are these ratios not the same?

A scene and display contrast ratio of e.g. 10:1 may be numerically the same, but they are not perceptually the same, because there is a difference in both the absolute luminance and the viewing environment.

To make an image appear perceptually like the scene to a viewer, you do not want the luminance of the screen to be proportional to the luminance of the scene. You require what is referred to as "picture rendering" applied. This is the function of the RRT in the ACES block diagram, or the 1.2 "system gamma" of traditional video, to give two examples.
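A minimal numerical sketch of what that "system gamma" does (illustrative figures only, not a full picture-rendering model): display-relative luminance is scene-relative luminance raised to an end-to-end exponent greater than 1.

```python
# Sketch of picture rendering via an end-to-end "system gamma".
# 1.2 is the traditional video figure; it is applied to scene
# luminance normalised to diffuse white (0..1).
def render(scene_rel: float, system_gamma: float = 1.2) -> float:
    return scene_rel ** system_gamma

# An 18% grey card ends up slightly darker relative to white than it
# was in the scene, i.e. the reproduced image has boosted contrast:
print(round(render(0.18), 3))  # ~0.128
```

White maps to white and black to black; it is everything in between that gets pushed down, which is the perceptual compensation being described.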

Nick Shaw
Workflow Consultant
Antler Post
Suite 87
30 Red Lion Street
Richmond
Surrey TW9 1RB
UK

+44 (0)7778 217 555


Seth Marshall
 

On Fri, Mar 16, 2018 at 02:58 am, Nick Shaw wrote:
What people often do on the FS7 is to expose one or two stops over, and "print down" for monitoring and post (see this video from Convergent Design).
I would like to know how many users here burn in their LUTs in this way. I am so used to delivering log in scenes with high dynamic range to protect myself. Do some here always burn it in?
In many situations where a producer doesn't allow time for thoughtful exposure, the thought of burning in the LUT scares me. If recording raw externally (like in the Convergent Design video), is there an added benefit to recording the LUT?

On Fri, Mar 16, 2018 at 08:02 am, Nick Shaw wrote:
To make an image appear perceptually like the scene to a viewer, you do not want the luminance of the screen to be proportional to the luminance of the scene. You require what is referred to as "picture rendering" applied. This is the function of the RRT in the ACES block diagram, or the 1.2 "system gamma" of traditional video, to give two examples.
Nick, forgive me but could you expand on this more?  Is this more of a colorist thing that doesn't apply so much to operators?
--
S e t h  M a r s h a l l
www.sethmarshall.com


Nick Shaw
 

On 17 Mar 2018, at 01:34, Seth Marshall via Cml.News <sethmarshall=yahoo.com@...> wrote:

I would like to know how many users here burn in their LUTs in this way. I am so used to delivering Log in scenes with a high dynamic range to protect myself. Do some here always burn it in?
That video is not suggesting you burn in the LUT. Indeed, the Odyssey does not even have the option to do that. The LUT is used for monitoring only. I did another video for CD showing how to apply the print-down LUTs in Resolve.

Regarding my comments about picture rendering, that is not something either a DP or a colourist really needs to concern themselves with in any practical way. I was just pointing out that screen contrast and scene contrast don't relate directly in the way Alister was suggesting. It is a small insight into the colour science of what is happening in a camera LUT. But in practical terms you don't really need to know why. You just use ACES, or a manufacturer's LUT, or just grade until it "looks right"!

Nick Shaw
Workflow Consultant
Antler Post
U.K.


Jonathon Sendall
 

I occasionally burn in a LUT to a monitor/recorder (if I have time and have a post plan) so that there’s at least a reference image in proxies, but only if I’m happy with the camera log/raw footage being the only copy in that format.

Jonathon Sendall
DP, London



alister@...
 

To make an image appear perceptually like the scene to a viewer, you do not want the luminance of the screen to be proportional to the luminance of the scene. You require what is referred to as "picture rendering" applied. This is the function of the RRT in the ACES block diagram, or the 1.2 "system gamma" of traditional video, to give two examples.

I’m fully familiar with viewing environments etc., and the examples above assume that the viewing environment is different to the scene and that the monitor’s contrast range is significantly less than the scene’s contrast range. But if we are talking about a monitor that can match the brightness and contrast of a scene, and if that monitor were placed into the scene, the contrast range and perceived range of both could be matched. Display contrast is no different to scene contrast. Yes, the viewing environment changes how we perceive an image (or a scene) relative to the ambient environment, but this is no different to looking through a small window at an exterior from inside a dark room. The brightness of the room will alter our perception of how bright it is outside.

So if a monitor can manage 8 stops and the scene is 8 stops, and if the brightness of the monitor can match the brightness of the scene, then in the same environment both will look the same. So to say that monitor contrast and scene contrast are different things is not correct. It is the viewing environments that are different, and it is the difference in the viewing environment that changes the perception of the image, not differences in the contrast ratios of monitors. Unless I’m greatly mistaken and my whole knowledge of contrast and brightness is flawed.

Alister Chapman

DoP - Stereographer
UK Mobile +44 7711 152226
US Mobile +1(216)298-1977


www.xdcam-user.com

Jan Klier
 

I just upgraded to the Varicam LT. It has the ability to record VLog to the primary memory card and a proxy file to the secondary memory card, while applying a built-in or user supplied LUT to the proxy file. So you get the best of both straight from camera.

Jan Klier
DP NYC


Geoff Boyle
 

I have to disagree with you on this Alister: because of gamma and the responses of displays, the total range may be the same but the way it is represented can be totally different.

 

Cheers

 

Geoff Boyle NSC FBKS

Cinematographer

Zoetermeer

www.gboyle.co.uk

+31 (0) 637 155 076

 

Alister wrote: "So if a monitor can manage 8 stops and the scene is 8 stops and if the brightness of the monitor can match the brightness of the scene, in the same environment both will look the same."


MARK FOERSTER
 

“here burn in their LUTs in this way”
____________________________________

Until log showed up, say five years ago, the only way was to “burn” your LUT into your work. You picked a gamma curve, set a white point and used traditional methods like protecting whites, NDs in windows and HMIs for keys and fills. Don’t be afraid to use a properly calibrated monitor on set and deliver a fully acceptable picture. I still deliver this way 50% of the time. I’ll also mention here that I still see plenty of “post corrected log” looking worse than had I delivered it the way (shocker!) I thought it should look. Cranky blacks especially. Finally, a 10-bit file, even baked with a decent LUT, has enormous headroom and shadow pull - try it in any NLE.


Mark Foerster csc
Toronto
(905) 922 5555


alister@...
 

Geoff wrote:

I have to disagree with you on this Alister, because of gamma and the responses of displays the total range may be the same but the way it is represented can be totally different.
 
I really hope they aren’t different. What would be the point of my expensive DSC Charts?

Sure, it can be different, but it doesn’t have to be; it all depends on the grade. If the camera and monitor gammas are properly matched, then capture and display range should also be matched. The engineers didn’t spend decades developing different gammas to make the pictures on monitors look different to the scenes we are shooting; they were developed to make them look the same. If you use a real 709 gamma curve in a camera and have a correctly matched 709 gamma in the monitor, then the contrast range of what the camera captures should be reproduced on the monitor with the same contrast, and it should look the same. Have you never shot a DSC chart while looking at a 709 monitor and noticed how, when it’s all working correctly, the chart on the monitor looks just like the chart you are shooting? Both the total range and the contrast range are the same, so they look the same, because the monitor output mirrors the light being reflected from the chart. Of course you can screw this up by doing it on a bright sunny day, where a 709 monitor has no chance of reaching the same output levels as a chart illuminated by direct sunlight, but do it under controlled lighting, where the lighting does not exceed the monitor output, and both should look near identical. If they didn’t, charts like the CamBelles would be pointless.


Alister Chapman

DoP - Stereographer
UK Mobile +44 7711 152226
US Mobile +1(216)298-1977


www.xdcam-user.com

Nick Shaw
 

On 17 Mar 2018, at 16:06, alister@... wrote:

I really hope they aren’t different. What would be the point of my expensive DSC Charts?

Sure, it can be different, but it doesn’t have to be, it all depends on the grade. If the camera and monitor gammas are properly matched then capture and display range should also be matched. The engineers didn’t spend decades developing different gammas to make the pictures on monitors look different to the scenes we are shooting. They were developed to make them look the same

That's the key point, "look the same" not "be the same." It has long been accepted in traditional TV that a system gamma of about 1.2 was required to make an image on a TV appear perceptually the same as the scene.

Intuitively you could think that with HDR it might be possible to have the absolute luminance of the screen be identical to that of the scene, and therefore require no system gamma or other picture rendering. However, while this might be possible for some low-contrast scenes, where there are specular reflections of the sun, or even just a bright sky, in the scene, no current monitor can reproduce that.

Look up the BBC's experiments when developing HLG (Google Tim Borer and Andrew Cotton). They found (surprisingly, according to classical colour science) that as the peak brightness of the monitor increased, the system gamma had to be increased to get a perceptual match. For a 1000 Nit monitor in a typical environment, HLG uses a system gamma of 1.4.
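For reference, the published HLG model makes this dependence explicit: ITU-R BT.2100 extends the system gamma as a function of display peak luminance Lw. The exact figures have varied between the early BBC publications and the final spec, so treat the constants below as indicative rather than definitive:

```python
import math

# BT.2100's extended model: gamma = 1.2 + 0.42 * log10(Lw / 1000),
# i.e. brighter displays apply more system gamma to hold a perceptual match.
def hlg_system_gamma(peak_nits: float) -> float:
    return 1.2 + 0.42 * math.log10(peak_nits / 1000.0)

print(round(hlg_system_gamma(2000), 2))  # ~1.33 on a 2000 nit display
print(round(hlg_system_gamma(5000), 2))  # ~1.49 on a 5000 nit display
```

The direction of the effect is the same one Nick describes: the brighter the display, the more end-to-end contrast is needed for the image to read correctly.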

Nick Shaw
Workflow Consultant
Antler Post
Suite 87
30 Red Lion Street
Richmond
Surrey TW9 1RB
UK

+44 (0)7778 217 555


Art Adams <art.cml.only@...>
 

This is correct from an engineering standpoint. From an artistic standpoint... not so much.

There's a difference between capturing images and creating visual stories. For the former, a chart is an absolute reference. For the latter, it's a tool meant to create a consistent starting point when imposing an artistic vision, and ensuring that vision is the one that reaches the audience. 

A chart is not necessarily an image reference, but can be a look calibration reference.

--
Art Adams
DP
San Francisco Bay Area





JD Houston
 


On Mar 17, 2018, at 3:59 AM, alister@... wrote:

So if a monitor can manage 8 stops and the scene is 8 stops and if the brightness of the monitor can match the brightness of the scene, in the same environment both will look the same. So to say that monitor contrast and scene contrast are different things is not correct. It is the viewing environments that are different and it is the difference in the viewing environment that changes the perception of the image, not differences in the contrast ratios of monitors. Unless I’m greatly mistaken and my whole knowledge of contrast and brightness is flawed.

The first line is correct. If you have a monitor that can show up to 10,000 nits (the brightness of a piece of paper in the sun), so that you can match the brightness of daylight, and you are also outside looking at the monitor, it will look the same up to the point that the monitor can show (you would need a monitor of 1,000,000 nits to match the sun and direct sun glints; vision is a log thing).

But I would dare to say that ALL monitor viewing is currently done in environments that are not a direct match to the original scene.
This is as true in dark scenes where the eye is dark adapted and the surround is also dark.

So in practical terms,  the scene contrast and the display contrast are almost never the same. Yes, you are correct that
the difference is because of viewing environment.

In most video and film, the contrast build-up in a display is systematically designed into the system. This is what makes the ratios different.

In cameras, the ‘taking curve’ has an assumption (say the Rec.709 curve) and the output curve of a display has a build-up assumption (Rec.1886’s gamma 2.4, for example); this gives the video system an effective contrast boost of about 9% (2.4/2.2).
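The camera-curve-in, display-gamma-out build-up described above can be sketched numerically: push a scene value through the Rec.709 camera curve (OETF) and then through a simplified BT.1886 display (pure gamma 2.4, zero black level), and mid grey comes out darker than it went in, i.e. contrast has been increased by design.

```python
# Simplified round trip: Rec.709 OETF -> BT.1886-style gamma-2.4 display.
# The real BT.1886 EOTF includes a black-level term; it is omitted here.
def rec709_oetf(l: float) -> float:
    """Rec.709 camera transfer function, scene light 0..1 -> signal 0..1."""
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

def display_eotf(v: float) -> float:
    """BT.1886 display simplified to a pure 2.4 gamma with zero black."""
    return v ** 2.4

scene_mid_grey = 0.18
print(round(display_eotf(rec709_oetf(scene_mid_grey)), 3))  # ~0.117, not 0.18
```

Since the endpoints (0 and 1) map to themselves, the ratio between any mid tone and white widens on the display, which is the systematic contrast boost being described.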

So by design, comparing scene contrast ratios and output display contrast ratios will not give you the same
number.  To simplify the example,  if you have a scene contrast of 100:1, and you want to show it in a 
dark environment, you need a 150:1 to accurately show it.  If you are showing it in a video dim environment,
you would need an output contrast of 109:1 to show it.  (As an aside, the overall gain of 1.2 that is sometimes 
used brings in a complicating factor of audience preference for a certain style of reproduction.  Audiences prefer
boosted contrast looks over reality matches)

There is one other consideration. Most scene contrasts are described in terms of simultaneous contrast — the dark-to-light ratio including everything in the scene at the same time (fill, flare, etc.). Most display contrasts are described as sequential contrasts — the ratio of a full-on white to a full-on black. This makes displays seem to have a higher contrast.
But it is not true.    It is especially a problem with projectors because the projection lens can have a drastic effect
on the display contrast.   But even OLED monitors can have systematic issues in displaying images with the original scene contrast.
This is all a reason to never use display contrast as a way to evaluate scenes.

From an operator standpoint, the ratios to use are based on the light in the scene, so targeting a key-to-fill ratio
of a certain number is the right way to do it.  You are mixing light, and light is linear, but it can be expressed as
ratios — photographically almost everything is a ratio (e.g. this light is twice as bright as that light, this exposure is
half as much as the previous one).
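Since photographic ratios and stops are just two views of the same thing (each stop is a doubling), the conversion is a base-2 logarithm. A small sketch, with function names of my own choosing:

```python
import math

def ratio_to_stops(ratio):
    """Express a light ratio in stops; each stop is a doubling of light."""
    return math.log2(ratio)

def stops_to_ratio(stops):
    """Inverse: a number of stops back to a linear light ratio."""
    return 2.0 ** stops

print(ratio_to_stops(2))   # 1.0 -> "twice as bright" is one stop
print(ratio_to_stops(4))   # 2.0 -> a 4:1 key-to-fill is a two-stop spread
print(round(ratio_to_stops(1_500_000), 1))  # ~20.5 stops, the full-range PQ figure below
```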

Why does an operator need to know about contrast ratios and the effects of picture rendering?  Because in the world of HDR, it now
matters which target you are going for.  If you are never going into a movie theater, then you don’t have to maintain the same limits
for where you would want Zone 1 or Zone 10 detail (to use that as a metaphor).

For a project going onto full-range PQ, you may want tonal separations of as much as 1.5 million to one (about 20.5 stops).  Of course current cameras
aren’t really there yet, so you might have a little time before you have to worry about that.  But even today’s cameras, with about 14+ stops,
require decisions about where to place highlight and shadow details so that they reproduce cleanly on certain types of monitors.
For most productions, the least common denominator applies (which is LCDs).

Because of that, it is important to know that cinema output ratios for a dark surround are different from video output ratios for a dim or average surround.
Dark needs a 1.5 boost; dim needs a 1.2 boost.  So when you consider what you are trying to achieve, it helps to know
what your audience will be looking at.

Yes, in the end it is a colorist problem to fix — what you see is what you get.  But knowing the choices that will be faced
improves the usefulness of the source material, so it is good for DPs to know.



Jim Houston
Consultant, Starwatcher Digital, Pasadena, CA


alister@...
 

That's the key point, "look the same" not "be the same." It has long been accepted in traditional TV that a system gamma of about 1.2 was required to make an image on a TV appear perceptually the same as the scene.

But they don’t “look the same.” The Rec.709 standard was based on CRT TVs with a very limited brightness range and white at 100 nits, so a bit of contrast was added to make up for the lack of brightness, so the image was perceived to be better. But now most TVs hit 300 nits or more, so most viewers now have even more contrast than before, which they seem to like — but in that case, no, it’s not accurate, because a gamma mismatch is being added/created.

But this is getting away from my original point, which is that the way a monitor generates contrast is exactly the same as the way contrast in a scene is produced. How we perceive an image on a screen that only fills a small part of our FOV is a different thing, and as we all know the same monitor will look different in different viewing environments, just as the view out of a window will be perceived differently depending on how bright it is inside the room.

Intuitively you could think that with HDR it might be possible to have the absolute luminance of the screen be identical to that of the scene, and therefore require no system gamma or other picture rendering. However, while this might be possible for some low contrast scenes, where there are specular reflections of the sun, or even just a bright sky, in the scene, no current monitor can reproduce that.

I think we are all aware that no TV can reproduce very high contrast scenes, and in particular speculars; I’ve already said that. But if you are within the monitor’s range and brightness (which can be 8 stops or more), then with ST 2084 you should be able to get the same light levels coming from the monitor as from the scene, and thus the same contrast and the same DR.
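The reason ST 2084 can do this is that, unlike relative gammas, PQ encodes absolute luminance (up to 10,000 nits). A minimal sketch of the ST 2084 transfer function and its inverse, using the constants from the spec (worth checking against SMPTE ST 2084 itself before relying on it):

```python
# Minimal sketch of the SMPTE ST 2084 (PQ) encode/decode pair.
# PQ maps absolute luminance in nits to a [0, 1] signal, which is what
# lets a display, in principle, emit the same light level as the scene.

m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(nits):
    """Luminance in nits (0..10,000) -> non-linear PQ signal in [0, 1]."""
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

def pq_decode(signal):
    """Non-linear PQ signal in [0, 1] -> luminance in nits."""
    e = signal ** (1 / m2)
    y = max(e - c1, 0.0) / (c2 - c3 * e)
    return 10000.0 * y ** (1 / m1)

print(round(pq_encode(100), 3))  # ~0.508: 100 nits sits near mid-signal
print(round(pq_decode(1.0)))     # 10000: full code value is 10,000 nits
```

Note how roughly half the signal range is spent below 100 nits: PQ keeps fine steps in the shadows and mids while still reaching very bright highlights.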

Look up the BBC's experiments when developing HLG (Google Tim Borer and Andrew Cotton). They found (surprisingly, according to classical colour science) that as the peak brightness of the monitor increased, the system gamma had to be increased to get a perceptual match. For a 1000 Nit monitor in a typical environment, HLG uses a system gamma of 1.4.

I thought the BBC were aiming for 1.2? Again though, this is for perceptual reasons, and as you know it gets adjusted according to ambient light levels — but not because the contrast that comes from a screen is somehow different from the contrast we see in a scene. It’s because TVs almost never fill our FOV, so only a small part of what we are looking at is changing, and the ambient light in the room changes our perception of the contrast. It would be different if the screen totally filled our FOV or everyone had blacked-out living rooms. I have no problem with the notion of ambient light levels changing perceptions. This happens not just with monitors but with everything we see.
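For what it’s worth, both figures can be reconciled: as I understand ITU-R BT.2100, the HLG reference system gamma is 1.2 at a nominal 1000-nit display, with an extension formula that raises it as peak luminance goes up. A sketch of that formula (check it against the BT.2100 text before depending on it):

```python
import math

def hlg_system_gamma(peak_nits):
    """BT.2100 extended HLG system gamma for a given display peak luminance
    (reference value 1.2 at a nominal 1000-nit display)."""
    return 1.2 + 0.42 * math.log10(peak_nits / 1000.0)

print(hlg_system_gamma(1000))              # 1.2 at the nominal 1000-nit display
print(round(hlg_system_gamma(2000), 2))    # ~1.33: gamma rises with peak brightness
print(round(hlg_system_gamma(500), 2))     # ~1.07: and falls for dimmer displays
```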

This is why it’s normal to use viewfinders with monoculars to exclude ambient light, or video villages in blacked-out tents. We are eliminating the otherwise distracting ambient light that alters our perception, so that all we see is the true contrast of the monitor, which should then match the contrast of the scene (or at the very least be very, very close). And this comes back to my original point: there is no difference between the contrast ratio of a screen and the contrast ratio of a scene. They are the same thing; a contrast ratio is a contrast ratio, whether that of the scene or that of the display.

A simple test would be to shoot a chart with matching camera and display gammas, with the monitor next to the chart and the lighting set so that the chart is reflecting the same amount of light off its white chip as the monitor is outputting from its white chip. I bet that if I take a photograph of this, both the chart and the monitor will look virtually identical in the resulting picture.


Alister Chapman

DoP - Stereographer
UK Mobile +44 7711 152226
US Mobile +1(216)298-1977


www.xdcam-user.com    1.5 million hits, 100,000 visits from over 45,000 unique visitors every month!  Film and Video production techniques, reviews and news.

On 17 Mar 2018, at 16:25, Nick Shaw <nick@...> wrote:

On 17 Mar 2018, at 16:06, alister@... wrote:

I really hope they aren’t different. What would be the point of my expensive DSC Charts?

Sure, it can be different, but it doesn’t have to be; it all depends on the grade. If the camera and monitor gammas are properly matched, then capture and display range should also be matched. The engineers didn’t spend decades developing different gammas to make the pictures on monitors look different from the scenes we are shooting. They were developed to make them look the same.

That's the key point, "look the same" not "be the same." It has long been accepted in traditional TV that a system gamma of about 1.2 was required to make an image on a TV appear perceptually the same as the scene.

Intuitively you could think that with HDR it might be possible to have the absolute luminance of the screen be identical to that of the scene, and therefore require no system gamma or other picture rendering. However, while this might be possible for some low contrast scenes, where there are specular reflections of the sun, or even just a bright sky, in the scene, no current monitor can reproduce that.

Look up the BBC's experiments when developing HLG (Google Tim Borer and Andrew Cotton). They found (surprisingly, according to classical colour science) that as the peak brightness of the monitor increased, the system gamma had to be increased to get a perceptual match. For a 1000 Nit monitor in a typical environment, HLG uses a system gamma of 1.4.

Nick Shaw
Workflow Consultant
Antler Post
Suite 87
30 Red Lion Street
Richmond
Surrey TW9 1RB
UK

+44 (0)7778 217 555