
WAS: ISO rating of digital camera

Mako Koiwai
 

My question has always been: why do we always try to rate digital STILL cameras at the lowest possible ISO, but motion picture cameras at what seems to be the EI of maximum dynamic range? And that even though we can typically control the Contrast of our motion picture scenes.

I was talking to an official FujiFilm still shooter recently, and he surprised me by saying he now keeps his ISO at 800. (And since FujiFilm jpegs are SO Good, he doesn’t bother with RAW anymore!)

THAT is the first time I’ve heard of a still shooter not using the lowest possible ISO for a particular situation!

Does this have anything to do with the Processing of motion picture files? We know that the Alexa does some fancy Highlight/Shadow internal “exposure” processing.

Modern Still cameras can also do some Compensating processing … although I’m still not sure if that is used for the RAW files.

I find all this Fascinating!


makofoto, s. pasadena, ca




Mark Kenfield
 

I suspect a lot of it relates to how we view still images compared to motion pictures. With stills, we stand and look at a single frame for a considerable period of time. Peering closer sometimes at particular details. This level of scrutiny for an individual frame simply demands higher resolution and finer grain/noise.

By comparison, we get away with murder with motion pictures - noisy, (mostly) 2-megapixel images, projected at 24 frames a second, with two thirds of those frames rendered complete mush by the motion blur of a 180-degree shutter!

Cheers,

Mark Kenfield 
Cinematographer 
Los Angeles (till the 27th)

0400 044 500

On 13 May 2018, at 1:07 pm, Mako Koiwai <mako1foto@...> wrote:

My question has always been: why do we always try to rate digital STILL cameras at the lowest possible ISO, but motion picture cameras at what seems to be the EI of maximum dynamic range? And that even though we can typically control the Contrast of our motion picture scenes.





Jan Klier
 

It may have something to do with how photographers vs. film makers think of the variables in the exposure triangle and in which order.

As a photographer I will start with a low ISO by default, pick my aperture to style, and then adjust shutter speed within upper/lower bounds, bumping the ISO as a last resort if I hit limits. Shutter speed is the most flexible variable, give or take a few specialty genres. Controlling light seems to fall between the extremes of impossible to change (ambient scenes) and full control (strobe overpowers ambient), so changing light output for exposure control seems to be far down the list in most photographers' minds. Photographers are also less likely to have dedicated lighting crews, so anything you can control from the camera rather than walking across the set to the lights is preferable.

As a film maker your shutter speed is generally not a variable at all. Your aperture choices may be more limited because of continuity, scene, and the ability to control focus. So that leaves ISO/EI and lighting as the most flexible variables. Add to that that the power required for significant continuous lighting vs. strobe is quite a different animal in cost, and it's natural that we need to eke out whatever ISO/EI we can get at acceptable noise levels to make it work.

For photographers, RAW was much more about white balance control and the ability to recover highlights than about noise, necessarily.

Jan Klier
DP NYC

On May 13, 2018, at 4:07 PM, Mako Koiwai <mako1foto@...> wrote:

My question has always been, why do we always try to rate digital STILL cameras at the lowest possible ISO, but motion picture cameras at what seems to be the EI of max.

Art Adams
 

I'm going to just make some shit up, because I have no idea if it's true, but this question got me to thinking.

When I shoot stills I generally don't worry about maximum dynamic range because, with most still cameras, it doesn't change. It seems that most primarily use analog gain as the dynamic range doesn't shift much in the way it does in motion picture cameras of certain types. (I heard that Canon uses a combination of both analog and digital gain, but unless I read it on the Internet I have no idea if it's true.)

I used to shoot at the lowest ISO possible because that's where I got the least noise, and as Jan mentioned, raw is more about highlight recovery and white balance than anything else. My Fujifilm camera gives me a bit more dynamic range at higher ISOs, and the noise is very pretty, so I shoot regularly at 400, 800 and 1600. (I wouldn't do 1600 with my Nikon, which I shoot most often at 100/200/400.)

In the stills world, ETTR is a thing, and since it's hard to judge images off a tiny LCD display on the back of a camera and as still images are often graded one at a time, cheating the exposure to the point of clipping is a valid way to work, especially as some cameras have no overhead. (I think my Nikon has about three stops, and my Fujifilm has five.) Pushing the exposure right to the edge of clipping maximizes dynamic range in that I can pull the shadows down later while preserving highlights. It's a bit like exposing for the shadows and processing for the highlights (as Ansel Adams liked to do, with some success).

In the motion picture world, we rarely use ETTR as an exposure technique. Requiring each shot to have its own unique grade is not cost effective, and it's confusing when someone else is doing the grading. We have less control over variables, as still photographers can carry the sun in a briefcase and we can't. And, honestly, still photographers generally don't take the chances that we do. I see very little editorial work that isn't made in Photoshop, and very little that comes anywhere close to the kind of lighting sophistication I see in films and TV. A lot of stills work seems to focus on the subject over everything else, including lighting, whereas in good motion picture work the lighting defines the subject. There's a still photographer whose studio I rent on occasion to shoot portraits, and all his portraits are done with a big soft source behind the camera against a white limbo background. He thinks I'm nuts when I shoot with heavy backlight or contrasty sidelight, or try to build black caves so I can create deep shadows.

A still photography set has a vertical command structure: it's a still photographer at the top, then a stylist, and a bunch of low paid PAs. They may do the post work themselves or bring someone in to do it under their watchful eye. The client gets a finished product. We hand our footage to someone else, and hopefully the next person down the line respects our wishes. If it doesn't work out, it costs a lot to reshoot—so the client wants to see something that's 80% done on set. That means we have to squeeze maximum dynamic range out of our cameras, as we have to hold everything we want to keep in such a way that someone doesn't panic and replace us. That's what is going to make HDR so interesting, as the on-set monitors can't show the full dynamic range of the cameras without clipping or using a LUT of some sort.

That said, in the stills world it seems ISO is a general reference that has almost nothing to do with a meter and is more about deciding how much noise one is willing to tolerate in an image in order to shoot at a certain f/stop and shutter speed. In the motion world, the meter is how we rough stuff in and match looks between shots and scenes, so it's a bit more important.

I might shoot a still of someone in front of a window at the lowest ISO possible because I can ETTR and pull everything back later. I can't do that in motion because someone on set is going to panic and say it looks awful, and then I become a target. I also don't know that the next person down the line isn't going to think, "Maybe he wanted it to look blown out." In that case, I'm going to want as much dynamic range as possible so I can hold all the important stuff and make it look the way I want at the time I shoot it.

--
Art Adams
Director of Photography
San Francisco Bay Area

Geoff Boyle
 

You’re hearing it on the net now Art, I heard it from a camera designer at the Canon factory in Japan 😊

 

They use analogue gain in the A-D process from the chip.

 

As far as everything else you said, absolutely right 😊

 

That is why I got into grading my own material, I come from a stills background and want to maintain that control. I even have a couple of films I shot that have my grade stored here as the final release was somewhat different!

 

 

Cheers

 

Geoff Boyle NSC FBKS

Cinematographer

Netherlands

www.gboyle.co.uk

 

 

 

From: cml-general@... <cml-general@...> On Behalf Of Art Adams

(I heard that Canon uses a combination of both analog and digital gain, but unless I read it on the Internet I have no idea if it's true.)

 

 

Jan Klier
 


On May 14, 2018, at 12:35 AM, Art Adams <art.cml.only@...> wrote:

In the stills world, ETTR is a thing, and since it's hard to judge images off a tiny LCD display on the back of a camera and as still images are often graded one at a time, cheating the exposure to the point of clipping is a valid way to work, especially as some cameras have no overhead. 

It is worth noting that the original argument for ETTR for stills from 2003 (https://luminous-landscape.com/expose-right/) relies on the fact that RAW still images are in linear gamma, which creates the right-weighted distribution of distinct values. Once you convert to 2.2 gamma there is no data advantage anymore. The theory wasn't just about pushing the noise back into the shadows (the Dolby NR compander approach), but also about having less banding throughout, therefore making whatever noise would be there more acceptable.

Since RAW wasn't (and still isn't) as prevalent in cine cameras, that is another argument why ETTR makes less sense. The counteraction to ETTR has to happen in the first stage, from linear gamma to whatever codec, which is usually the RAW converter.
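
For the curious, the code-value math behind that 2003 argument is easy to verify. A quick Python sketch (my own illustration, not from the article) counts how many distinct values a 14-bit file gives each stop, in linear and after a 2.2 gamma curve:

```python
# How many distinct code values land in each stop below clipping?
# Linear RAW halves the count every stop down; a 2.2 gamma curve
# spreads them far more evenly, which is why ETTR only pays off pre-gamma.
LEVELS = 2 ** 14  # 14-bit file

def codes_in_stop(stops_below_clip, gamma=1.0):
    hi = 0.5 ** stops_below_clip  # top of this stop, as a fraction of clip
    lo = hi / 2                   # bottom of this stop

    def encode(x):
        return round(LEVELS * x ** (1.0 / gamma))

    return encode(hi) - encode(lo)

for stop in range(5):
    print(f"stop {stop} below clip: "
          f"linear={codes_in_stop(stop):5d}  "
          f"gamma 2.2={codes_in_stop(stop, gamma=2.2):5d}")
```

In linear, the top stop hogs half of all 16,384 values (8,192), the next stop 4,096, and so on down; after the gamma curve the deep stops get far more values, leaving ETTR little to win.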

That said, in the stills world it seems ISO is a general reference that has almost nothing to do with a meter and is more about deciding how much noise one is willing to tolerate in an image in order to shoot at a certain f/stop and shutter speed. In the motion world, the meter is how we rough stuff in and match looks between shots and scenes, so it's a bit more important.

ISO used to be talked about a lot more when early digital still cameras still topped out at 1,600 or thereabouts, or with digital medium format CCD sensors that were crap above ISO 200 and really needed you to shoot ISO 50. Now that cameras are frequently shot at ISO 25,000, and the Canon 1Dx allows settings up to 204,800, it has lost any sense of meaning as a technical measure to most. Back when ISO was talked about, though, it had consistency throughout: ISO on Canon was generally the same as ISO on Nikon. The idea of rating one's camera at an EI is something I've never encountered in stills. ISO was a key input to the Sunny 16 rule, and you could live by it if you wanted.
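
The Sunny 16 rule Jan mentions is compact enough to express in a few lines of Python; extending it to other apertures via equivalent exposures is my own addition, not part of the classic rule of thumb:

```python
import math

def sunny16_shutter(iso, f_number=16.0):
    """Full-sun shutter time: 1/ISO sec at f/16, halved per stop wider."""
    stops_from_f16 = 2 * math.log2(16.0 / f_number)  # aperture stops gained
    return (1.0 / iso) / (2 ** stops_from_f16)

print(sunny16_shutter(100))       # 0.01   -> 1/100 s at f/16
print(sunny16_shutter(100, 8.0))  # 0.0025 -> 1/400 s at f/8 (2 stops wider)
```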

But it's true that nobody does pre-production with a tool or chart and figures out what ISO they will shoot with or have to light to. It's more like: lower is better, and if in a pinch, what the hell, just dial it up and pray for the best.

Occasionally you will find light meters on sets that are mostly studio strobe lit, like fashion, for getting ratios right. Sometimes you will find meters to measure the flash/ambient percentage. But the concept of lighting an entire shoot to a certain T-stop certainly does not exist in reality for stills photographers.

Jan Klier
DP NYC


Jan Klier
 

That has always been one of the fascinating elements comparing the two worlds.

It also tends to make still photography sets much less technical, because most of the conversation about exposure, lighting, ratios, ISO takes place in one person’s head. At most you might hear the photographer call on an assistant to take a specific light down a stop (from the old rocker switches on power packs). 

So there is less need for the photographer to be thoughtful about the technical aspects, and the photographer has a lot more leeway to go by what feels right, or even cheat and get away with it without anyone knowing. It also limits the photographer's ability to become sharper and learn from others, besides maybe trial and error or taking a class.

Can’t say I miss that world… It’s a lonely endeavor.

Jan Klier
DP NYC

On May 14, 2018, at 12:35 AM, Art Adams <art.cml.only@...> wrote:

A still photography set has a vertical command structure: it's a still photographer at the top, then a stylist, and a bunch of low paid PAs. 

Steven Morton
 

Jan Klier wrote:

So there is less need for the photographer to be thoughtful about the technical aspects etc etc etc

I hope you are just speaking about yourself and not the rest of us? Sorry, Jan, to me this statement is a gross generalisation with a whiff of snobbery :-(

Steve Morton FRPS
Scientific Imaging
Monash University
Melbourne
Australia

Barry Goyette
 

But the concept of lighting an entire shoot to a certain T-stop certainly does not exist in reality for stills photographers.
Sorry Jan, but you are certainly wrong about this… Still photographers have been lighting for a specific stop (for entire shoots!) for just as long as cinematographers. The only differences between the two mediums relative to lighting and exposure are that still photographers have more flexibility on shutter speed/angle and have different lighting tools (instantaneous burst - strobe) that allow for lighting to an extremely small aperture with relatively compact equipment. Believe it or not, we don't just willy-nilly dial up the ISO as if it doesn't matter. Some of us make real prints of enormous size, images "fixed in cement," not flowing about in some flashy light show where everyone's hopefully focused on the damn story, or the actress's lipstick. When banding and grain start to appear in our images, real people actually notice. :-)

And I’m not so certain about your 2.2 gamma comment having anything to do with ETTR’s main characteristic, which is to maximize dynamic range by moving more of the exposure range into the area where most of the encoding values are. Every time Art tells me to "expose Sony a stop brighter and then visit LUTCALC” he’s approximating ETTR, presumably with a 2.2 display gamma.

Barry Goyette
Stills Motion & Design
San Luis Obispo CA USA

Art Adams
 

>to me this statement is a gross generalisation with a whiff of snobbery

I think it's fairly accurate. I've worked with still photographers who became (or tried to become) motion picture directors and producers, and while they could speak from experience ("This lens is way sharper than that one") I found that the deeper I probed technically the less they knew. In some cases they took pride in that, as do some cinematographers I know. It was all about the work and not about the tools, other than that they had their preferences.

Some people don't need to know how the gear works, only that it does what they want when they use it in certain ways. I've actually gotten in trouble for this: my agent will pitch me for a project and the response will be, "Well, this isn't a very technical project," as if I only make technical images. (Once in a while it pays off, as I recently shot a fairly technical project with a camera system that no one had ever used before, and surprisingly the images were very pretty and not just "technical.")

There *are* some very technical still photographers out there but I can only find them online. (This guy is one of them: http://www.strollswithmydog.com)

--
Art Adams
Director of Photography
San Francisco Bay Area

Art Adams
 

>Every time Art tells me to "expose Sony a stop brighter and then visit LUTCALC” he’s approximating ETTR, presumably with a 2.2 display gamma.

Erm... not exactly. I do that because I think Sony cameras are noisy, and when I shoot dark stuff I like to have a bit of meat in the shadows in case someone panics later, and also because a lot of noise can be distracting and can reduce perceived detail in dark areas. I also shoot a lot of green screen and VFX, where noise equals post production death.

And when I suggest this, I'm only saying to do it in log, where reallocating dynamic range isn't that big a deal. Yes, highlights get a bit more compressed in log than mid-tones, but not nearly as much as in Rec 709, and a one stop correction to normal isn't that big a deal. Heck, a two stop correction isn't that big a deal either: when the F55 first came out and everyone wanted to shoot log but Sony didn't have an in-camera LUT for it yet, the only way to shoot on-the-fly stuff was to overexpose by two stops so the log image looked kinda normal. (It worked, but it was not ideal.)

In Rec 709 I wouldn't recommend this, but that's true baked-in Rec 709, like one of the hypergamma settings (let us not speak of those again). In Cine EI mode, the Rec 709 LUT just rides on top of the log data and can be pushed around as the ISO changes, so it's just remapping log according to the new ISO setting.
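
The reason a LUT can simply "ride on top" of the log data is that, in any log encoding, an exposure-index change is just a constant offset in code values. A toy illustration (my own simplified curve, not Sony's actual S-Log formula):

```python
import math

def toy_log_encode(linear):
    """Toy log curve: one unit of output per stop of scene light."""
    return math.log2(linear)

# A one-stop exposure change shifts EVERY level by the same constant offset,
# so a display LUT only needs to slide along the curve as the EI changes.
for level in (0.02, 0.18, 0.90):
    offset = toy_log_encode(level) - toy_log_encode(level / 2)
    print(f"level {level}: one-stop offset = {offset:.6f}")  # ~1.0 every time
```

In a linear or Rec 709 encoding the same exposure change warps the data non-uniformly, which is why the correction can't be a simple offset there.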

LUTCalc is a brilliant way to make on-set and post LUTs that deal with those ISO changes, especially as Sony has not seen fit to deal with that in metadata as Arri has.

--
Art Adams
Director of Photography
San Francisco Bay Area

Barry Goyette
 

Erm... not exactly. I do that because I think Sony cameras are noisy, and when I shoot dark stuff I like to have a bit of meat in the shadows in case someone panics later, and also because a lot of noise can...
Well… regardless of everything you just said, you are describing a situation where you increase exposure to minimize noise in the shadows. You may not want to call it the same thing, but it's the same thing. You may not be going to the same degree as Mr. Reichmann, but he was writing 15 years ago, about an entirely different class of sensors (changing exposure by a stop was a pretty big deal on a 30D), and for a group of people who regularly shot sunsets for a living or hobby.

I've worked with still photographers who became (or tried to become) motion picture directors and producers, and while they could speak from experience ("This lens is way sharper than that one") I found that the deeper I probed technically the less they knew. … There *are* some very technical still photographers out there but I can only find them online.

Erm… I've met quite a few "directors of photography" who have little more than the RED their daddy bought them, and some "gimbally thing" in their kits. I'm sure they speak for *most* of your profession as well. :-)

Most photographers I know are shockingly technical in their work, and certainly, within their field, as knowledgeable as the video/film professionals I know.

Barry Goyette
Stills Motion & Design

Jan Klier
 


On May 14, 2018, at 10:49 AM, Barry Goyette <barrygoyette@...> wrote:

And I’m not so certain about your 2.2 gamma comment having anything to do with ETTR’s main characteristic, which is to maximize dynamic range by moving more of the exposure range into the area where most of the encoding values are. Every time Art tells me to "expose Sony a stop brighter and then visit LUTCALC” he’s approximating ETTR, presumably with a 2.2 display gamma.

We had this discussion in another forum a few months back, which I just revisited. Back then Alister Chapman corrected me that it has benefits for both linear (RAW) and log-based exposures. Here's what he wrote back then:

"ETTR is about BOTH a better SNR and better data distribution and applies to both Log and Raw.

Exposing brighter in log shifts the shadows up. For the range below middle grey, each stop you go down has half the code values/data of the previous stop. So if you expose 1 stop brighter you double the data in the shadows compared to the previous exposure. In addition you are likely also pushing the entire range higher and making use of more data. Above middle grey each stop has the same amount of data, so the only benefit is a better SNR, but that's rarely a bad thing.

With linear recording the entire range doubles the data for each stop you go up, so even a modest +1 stop increase in exposure brings considerable benefits."

The argument from Michael Reichmann about RAW provides one key component and was the driver for photography. If I read Alister's comment right, once in log it's the steepness of the curve that provides the other benefit. But he may be able to explain it better.
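
The SNR half of Alister's argument checks out against the simplest photon shot-noise model (my own sanity check; it ignores read noise and everything downstream):

```python
import math

def shot_noise_snr_db(photons):
    # Shot noise grows as sqrt(signal), so SNR = signal/sqrt(signal) = sqrt(signal)
    return 20 * math.log10(photons / math.sqrt(photons))

# Exposing one stop brighter doubles the photons captured:
gain = shot_noise_snr_db(2000) - shot_noise_snr_db(1000)
print(f"+1 stop buys about {gain:.2f} dB of SNR")  # ~3 dB per stop
```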

Jan Klier
DP NYC

John Brawley
 


On May 14, 2018 at 6:20 AM, <Mark Kenfield> wrote:

By comparison, we get away with murder with motion pictures - noisy, (mostly) 2-megapixel images, projected at 24 frames a second, with two thirds of those frames rendered complete mush by the motion blur of a 180-degree shutter!


Hi Mark,

Isn't it a bit misleading to pixel peep a still image from a motion image camera?

Doesn't the shot in motion change the perception of noise compared to that of a still?

And motion blur? I feel like a long time ago I realised that blurry individual frames look sharper once they're played together. Back at the beginning of my career I remember looking, frame by frame, at a shot I did of a pan over some newspaper text that was blurry enough to make it difficult to read. But playing the shot in real time made the type appear much sharper: I could read what wasn't readable on individual frames. Does anyone know if my anecdotal observation is true? Can images appear sharper in motion than on a still frame?

I bring this up because DR seems to be very dependent on a subjective judgement about how much noise you're prepared to live with. And as far as I know there's no truly accurate way to measure noise.

I learned this the hard way when I had a shot that didn't pass a tech check because it was "too noisy". When I eventually got to speak to the guy (with 20 years of experience) running the QA/tech check at the (big end of town) facility, I asked him what he used to measure noise and what the threshold was.

He said to me "experience". It was his personal observation about whether a shot was too noisy to pass a tech check or not, because there's no machine that measures noise. By the way, he agreed with me that the shot in question wouldn't even have been flagged if the whole show had the same noise floor. It was just one shot in particular (different camera) that was noisy, and it stood out as objectionable because it drew attention to itself.

This was a few years ago. Anyone know if this has changed? Is there a way to quantify and measure video noise?

The DR of a camera seems to be highly contentious because it involves making a subjective judgement about noise.

This seems to be further complicated when you have a shot that's in motion, because motion seems to change the perception of noise (my anecdotal, non-scientific guesstimation of visual phenomena), and yet we all seem to look at DR stress tests as still frames.

When I've asked manufacturers about how THEY measure DR I usually get the same response: they don't measure DR, they calculate it from a theoretical model of the sensor based on the SNR spec.

In some ways I guess this is the most accurate, because everything after that is image processing and complications. It's also interesting to think about the fact that there's a fair bit of variation sensor to sensor and batch to batch, which makes one wonder about getting better- or worse-performing copies.
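
For what it's worth, the "theoretical model based on the SNR spec" usually boils down to engineering dynamic range: full-well capacity over read noise. A sketch with entirely made-up sensor numbers:

```python
import math

# Hypothetical sensor specs, for illustration only
full_well_e = 30000   # electrons at saturation
read_noise_e = 5      # electrons RMS in the dark

dr_ratio = full_well_e / read_noise_e
print(f"DR = {math.log2(dr_ratio):.1f} stops "
      f"({20 * math.log10(dr_ratio):.1f} dB)")
```

That number is the ceiling before demosaicing, noise reduction, and encoding take their cut, which is one reason the spec-sheet figure rarely matches what a grader would call usable.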

It also explains to me why every camera manufacturer seems to overstate their DR: it's a theoretical maximum. Except Arri, who perplexingly appear to understate their DR by more than a stop. Anyone know why?

This seems to me to be why we get meaningless phrases like "useable DR".

And Log vs. Lin...

Wasn't the original thinking that LOG was used as a kind of DR compression, to re-map available DR that exceeded the bit depth of the recorded file?

Has anyone else noticed that you can have skin tones at near clipping on an Alexa (Let’s say yellow in FC) and yet if you try to bring them down to a normal range then the image sort of falls apart.  Looks great when they’re a highlight. Bad if you try to make them mid tone, even if they’re not overexposed.

You haven't overexposed it, but you can't yank that information back to a normal exposure range and have it not feel brittle and thin. I have to say most of my Alexa experience here has been ProRes 444. And if you're happy to leave those Alexa overexposed tones up in the higher parts of the curve they look GREAT, and they do go on forever. It's just that if you try to re-map them down they don't cope very well.

Some other cameras, though, seem to be better at recovering near-clipping information even with less overall DR, and they can look more normal if you're trying to re-map them. When I've shot F55 Sony RAW (16-bit lin) and BMD RAW (unpacks as 16-bit lin) they seem to cope better with near-clipping re-mapping. Maybe that's a RAW vs. ProRes thing rather than LOG vs. Lin. Or grading in LIN?

Again, just my observations; I've never gone and specifically tested for this, and usually you don't want to recover information near clipping anyway, because it's generally a highlight and should stay a highlight.

JB
Cinematographer
Sydney Australia




Tim Sassoon
 

There must be something available. Manually, I put up an 18% grey card, evenly lit, look at the image on a WFM or histogram, and observe the degree of divergence from the mean value.
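
Tim's manual procedure is easy to automate. A sketch in Python/NumPy, with synthetic noise standing in for a real crop of the grey-card frame:

```python
import numpy as np

# Stand-in for a crop of an evenly lit 18% grey card (replace with real pixels)
rng = np.random.default_rng(0)
patch = rng.normal(loc=0.18, scale=0.005, size=(64, 64))

mean = patch.mean()
sigma = patch.std()                 # divergence from the mean value
snr_db = 20 * np.log10(mean / sigma)
print(f"mean={mean:.4f}  sigma={sigma:.4f}  SNR={snr_db:.1f} dB")
```

On real footage you would also want to measure across frames at each pixel (temporal noise) as well as within one frame (spatial noise); the two can differ a lot after in-camera noise reduction.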


Tim Sassoon
Venice, CA




On May 16, 2018, at 5:49 PM, John Brawley <john@...> wrote:

Because there’s no machine that measures noise

Geoff Boyle
 

Hi John,

 

I have tested this 😊

 

https://www.cinematography.net/alexa-over/alexa-skin-over.html

 

Alexa works better than any other camera so far, I need to do this test again.

 

 

Cheers

 

Geoff Boyle NSC FBKS

Cinematographer

Netherlands

www.gboyle.co.uk

 

 

 

From: cml-general@... <cml-general@...> On Behalf Of John Brawley

Has anyone else noticed that you can have skin tones at near clipping on an Alexa (Let’s say yellow in FC) and yet if you try to bring them down to a normal range then the image sort of falls apart.  Looks great when they’re a highlight. Bad if you try to make them mid tone, even if they’re not overexposed.

 


Mark Kenfield
 

Hey JB,

I don't think it's 'misleading' as such. My point is that the nature of how we view still images compared to moving ones is fundamentally different. 

We rely on spatial resolution to communicate a still image to us, but temporal resolution to communicate moving ones.

And in that context, we actually rely on the damage that motion blur does to spatial resolution to communicate those moving images effectively. Without destructive motion blur, at 24fps, motion pictures become jarring and hard to watch, and therefore less capable of communicating with us.

By the same token, it's the ability of temporal resolution to smooth out the appearance of noise in motion pictures that allows our digital cinema cameras to tailor their latitude for maximum dynamic range - which makes them viable for controlling contrast ratios with continuous lighting (which has far less output than simple photographic strobes).
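
That smoothing is easy to demonstrate: averaging N noisy frames (a crude stand-in for the temporal integration our eyes do) cuts the noise by roughly the square root of N. A quick sketch:

```python
import numpy as np

# Eight "frames" of a flat grey field, all with the same random noise level
rng = np.random.default_rng(1)
frames = rng.normal(loc=0.5, scale=0.05, size=(8, 32, 32))

single = frames[0].std()
averaged = frames.mean(axis=0).std()   # temporal average of all 8 frames
print(f"single-frame noise: {single:.4f}")
print(f"8-frame average:    {averaged:.4f}  (~single / sqrt(8))")
```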

So basically, I think we're agreeing on everything.

Cheers,

Mark Kenfield 
Cinematographer 
L.A. (Until the 27th)

0400 044 500

On 16 May 2018, at 5:49 pm, John Brawley <john@...> wrote:


On May 14, 2018 at 6:20 AM, <Mark Kenfield> wrote:

By comparison, we get away with murder with motion pictures - noisy, (mostly) 2-megapixel images, projected at 24 frames a second, with two thirds of those frames rendered complete mush by the motion blur of a 180-degree shutter!


Hi Mark,

Isn’t it a bit misleading to pixel peep a still image from a motion image camera ?

ozawaToshiaki
 

Well said Mark. +1 (as the kids say)

On May 17, 2018, at 7:48 AM, Mark Kenfield <mark@...> wrote:

We rely on spatial resolution to communicate a still image to us, but temporal resolution to communicate moving ones.




Art Adams
 

Geoff, I didn't know you shot that. I use that in my Arri trainings. :)

Speaking of which, the next one is in June:

Class: https://www.eventbrite.com/e/arri-certified-user-training-for-camera-systems-burbank-tickets-45781714340

 

Industry Mixer: https://www.eventbrite.com/e/open-house-mixer-event-arri-burbank-tickets-45782102501 


Please forward to anyone who might be interested: typically students, those who have some experience and have just started in the industry, and those who have worked in the industry for a while but don't have much experience with Arri products. (It's interesting how we can put all those different people in a room and the class works just fine.)

As for skin tone recovery, I would think pushing anything to the edge of clipping would result in color issues. As good as Arri's color is, at some point you simply don't have color information in one or more channels. Where they shine is that they hide this better than anyone else, and keep usable color information intact longer than anyone else.

--
Art Adams
Director of Photography
San Francisco Bay Area
