
Re: HDR in the real world

alister@...
 

I would argue no, there is nothing wrong with our grading room conditions. Grading in a dark room allows us to see more detail to make a perfect master. We intentionally want to see the same or more, never less, than the audience.

Perfect Master for whom? 

I believe this approach is fundamentally flawed, because it is not what the most important people, our audience, will see. We don’t make content to satisfy the small number who will view it under perfect viewing conditions; we produce content that should look as good as possible to the majority of people in an average viewing environment. The very fact that most audiences won’t ever see a large part of the darker range of the image should worry us. It is all too easy to fall into the trap of producing something that looks great with the contrast range seen in a blacked-out grading suite with dark walls, carpets and so on, but much less good when viewed in a typical living room, where higher ambient light levels and a screen’s finite output mean the viewer will only ever see a more limited contrast range. All that material in the deepest shadows disappears if we are not careful, and the only people that master is perfect for are the colourists and the production team who see it in that perfectly dark room.

It seems counter-intuitive to me to grade in an entirely different viewing environment from the end use. I’m not saying we should be grading in bright rooms, but the grading suite should reflect a dim room rather than a blacked-out room if we are grading for people’s homes. And that won’t ever change.

Alister Chapman 

Cinematographer - DIT - Consultant
UK Mobile/Whatsapp +44 7711 152226


Facebook: Alister Chapman
Twitter: @stormguy



www.xdcam-user.com    1.5 million hits, 100,000 visits from over 45,000 unique visitors every month!  Film and Video production techniques, reviews and news.


Re: HDR in the real world

Kevin Shaw
 

On 25 Sep 2020, at 09:21, Geoff Boyle <geoff@...> wrote:

Shouldn’t we be grading for more human viewing conditions?

I would argue no, there is nothing wrong with our grading room conditions. Grading in a dark room allows us to see more detail to make a perfect master. We intentionally want to see the same or more, never less, than the audience.

I do worry about the problem you describe though - most people do not see HDR as intended. 
As I see it the problem is more that the display is not bright enough than the room is not dark enough. 

The reason we need 1000 nits without tone mapping to make hopefully future proof masters is that at anything less than 1000 nits it is hard to judge the effect of extended contrast - indeed I would say there is a tendency to just make everything brighter. 
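To put rough numbers on that point (an editorial illustration, not Kevin's own figures): assuming an HDR reference white of about 203 nits, as suggested by ITU-R BT.2408, the highlight headroom above that reference works out as follows.

    import math

    def highlight_headroom_stops(peak_nits, reference_white_nits=203.0):
        """Stops of highlight range above reference white.
        203 nits is the HDR reference (graphics) white from ITU-R BT.2408;
        the peak values below are illustrative, not measured displays."""
        return math.log2(peak_nits / reference_white_nits)

    for peak in (400, 600, 1000, 2000, 4000):
        print(f"{peak:>4} nit peak: {highlight_headroom_stops(peak):.1f} stops above reference white")
    # ~1.0 stop at 400 nits versus ~2.3 stops at 1000 nits -- roughly the extra
    # contrast range that is hard to judge on a dimmer mastering display.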

With many TV sets pushing over 200 nits for SDR and many others arguing that they can show (tone mapped) HDR at 400 nits or sometimes less, the only thing that is clear is that it can be confusing.

For now turning off the lights to enjoy HDR is imho an acceptable compromise. When we get tvs hitting 2000 nits and above we can put the lights back on.

Meantime we are all learning and experimenting with what we can do creatively to take advantage of HDR technology. Some things work across the range, some things work at brighter levels only, and some things just don’t work. All technologies need to be mastered; I think we are lucky to be the ones who can set new examples for the next generation.

And if I may add to the debate, my pet hate is those (Netflix) shows that choose to raise the mid tones in the HDR master. They argue that it mimics the brighter conditions of a sunny day and so on, but to my eyes it lowers the effective contrast and looks really fake. A good example of something we can do, but probably shouldn’t. Just like those early marketing demos where the saturation was pushed to the bleeding edge. I think someone on this list once wrote that any technology judged by its worst examples would be rejected. It’s up to us to make the best examples that set the standard.

There are now some great uses of HDR, so I think we are getting there.

Best

Kevin Shaw, CSI
kevs@...          colorist, instructor and consultant

mobile: +44 7921 677 369
skype: kevscolor

finalcolor: www.finalcolor.com 
ICA:          www.icolorist.com      





Re: HDR in the real world

Riza Pacalioglu
 

“When I switched the living room TV from a mid range HDR LCD TV to a high end claimed 1000 NIT OLED the difference was striking. I could not now go back to a lesser panel.”

 

I moonlighted for almost a decade as an audio engineer at Abbey Road and have been a Hi-Fi enthusiast since my teens. The above reminded me of how Hi-Fi enthusiasts talk. I just switched the device name in the quote above, and it is spot on.

 

“When I switched the living room amplifier from a mid range transistor one to a high end claimed 100W tube one the difference was striking. I could not now go back to a lesser amplifier.”

 

Like Hi-Fi equipment manufacturers have been doing for years, TV manufacturers have been aiming at producing “striking” images, not authentic, or should we say Hi-Fi, images that reflect the mastering studios, i.e. grading rooms. Some may argue that is wrong, but there is a reason why that is the case: compare a recording studio and a grading room to your living room, as Geoff did, and you should see why.

 

Hi-Fi is a myth.

 

 

Riza Pacalioglu B.Sc. M.Sc. M.A.

Technical Supervisor & Producer

South of England

 

 


Re: HDR in the real world

alister@...
 

I agree that there is a disconnect between the way productions are graded in very dark, controlled environments and the way streamed movies, drama and other TV shows (as opposed to theatrical-only releases) are actually viewed. What did surprise me, though, was how much of a difference the type of panel in the TV makes.

LCD technology has improved vastly in the last 3 or 4 years, with blacks getting ever darker thanks to better zone control, and it will continue to get better. But the way most OLEDs deliver HDR is still quite different from most LCDs. When I switched the living room TV from a mid-range HDR LCD to a high-end OLED with a claimed 1,000-nit peak, the difference was striking. I could not now go back to a lesser panel. The TV I currently have has a very good ambient light sensing system that compensates as far as it can for the ambient light. This is particularly effective with Dolby-encoded content.



Alister Chapman 

Cinematographer - DIT - Consultant
UK Mobile/Whatsapp +44 7711 152226


Facebook: Alister Chapman
Twitter: @stormguy



www.xdcam-user.com    1.5 million hits, 100,000 visits from over 45,000 unique visitors every month!  Film and Video production techniques, reviews and news.



HDR in the real world

Geoff Boyle
 

I have never been convinced that HDR was a great increase in contrast in images watched at home.

Certainly there is a huge colour quality increase but as far as contrast goes...

 

I looked up the recommended light levels in a room used to view HDR, it’s 5 nits or LUX. That’s about .5 foot-candles or T1.4 with an ISO of 2500.

Now that’s bloody dark to me!

Especially when anything below 7 FC is the point where things start to look dark. That’s 3.5 stops brighter than the recommended viewing environment light level.

 

So, this morning I grabbed the trusty Sekonic C800 and started taking light readings whilst watching various HDR images in both Dolby and HDR10...

 

0.55 fc or 6 lux is the closest I could get to the recommended level without the dimmers on my lights going weird.

3.71 FC or 40 lux was the level where the picture looked best balanced, below that the blacks looked too thin.

7.24 FC or 78 lux (remember, the point where things start to look dark): at this level the shadows had lost detail and the overall picture was looking darker than it should.

88.3 FC or 950 lux, normal 9am overcast day light level in my living room and the shadows had gone and the overall image was looking dark.

 

OK, this was an HDR400 set, not HDR1000, but that’s only 1.5 stops; even if I accept that the shadows would have been 1.5 stops brighter with an HDR1000 TV, that’s still 2 stops down, and a dark starting point.

 

I’ve seen a lot of messages around and had a lot of private email about how some shows are so dark they are unwatchable.

 

What light level are the grading suites at? Because if they’re at the recommended level, they’re at about one twelfth of the light level at which people start to perceive their environment as dark.
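For anyone who wants to sanity-check the arithmetic in this post, a minimal sketch of the conversions Geoff is using, assuming 1 foot-candle ≈ 10.764 lux and a “stop” as log base 2 of the ratio between two levels:

    import math

    LUX_PER_FC = 10.764  # 1 foot-candle is about 10.764 lux

    def lux_to_fc(lux):
        return lux / LUX_PER_FC

    def stops_between(level_a, level_b):
        """Difference between two light levels in stops (log base 2 of the ratio)."""
        return math.log2(level_a / level_b)

    recommended_lux = 5.0  # the roughly 5 lux HDR surround quoted above
    readings_lux = {"best balanced": 40.0, "looks-dark threshold": 78.0, "overcast morning": 950.0}

    print(f"recommended surround: {lux_to_fc(recommended_lux):.2f} fc")
    for name, lux in readings_lux.items():
        print(f"{name}: {lux_to_fc(lux):.2f} fc, {stops_between(lux, recommended_lux):.1f} stops above recommended")
    # 78 lux comes out around 3.9 stops (roughly 13x) above 5 lux, close to the
    # "3.5 stops" and "about 12 times" figures quoted above.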

 

All my jokes about having to watch HDR with windows blacked out, everything in the room painted black and wearing a burka to prevent kickback from my face are being proved by real world measurements.

 

Shouldn’t we be grading for more human viewing conditions?

 

cheers
Geoff Boyle NSC FBKS
EU based cinematographer
+31 637155076

www.gboyle.nl

www.cinematography.net

 

 


Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

Ø  HDRx has been a feature in RED cameras for two generations of cameras from DSMC, DSMC2, and Ranger body types to do what you describe in reference to above 16-bit capture.  Though that is not temporally "at the same time" technically.  The limiting factor being maximum data rate internally within the camera body as everything is halved, guessing in your case with this head and board it would be 2X the uncompressed data rates to acheive similar.

 

I used RED HDRx feature and I’m familiar with the limitations.

I am not going to explain how this is done in the 9x7, but I can confirm that both the high and low captures happen at exactly the same time in the 9x7 and that maximum frame rates remain unchanged (up to 70 fps). And this is different from the HDR I have seen in other cameras.

 

Ø  I don't see dual read out on the sensor specs on the manufacturer's website as this is a BSI sensor

 

To my knowledge, such HDR feature is not available in any other commercially available sensor or camera. It is specific to the 9x7. So, you are right, you won’t find it on “manufacturer’s website”.

 

Ø  And I do agree the aspect ratio is much more ideal for IMAX Dome application as we chop off a fair bit from Monstro due to the 1.43:1 spec unless we're working with arrays.

 

Yes, this was a major driving force behind bringing the 9x7 to market. Arrays are cumbersome, expensive and a post-production liability. They do not work well for close-up subjects.

 

Ø  if you can truly capture 2x exposures simultaneously, and still at least deliver 24/30p (you say up to 70p), then have you considered employing a dual gain architecture similar to what Arri does ?

 

This would be implemented differently from ARRI’s dual gain. Dual gain offers a (relatively) marginal improvement in the noise floor; the effect here would be significantly higher DR, more like RED’s HDRx, in the highlights. Having said that, the 9x7 sensor has an analogue gain feature, which allows it to deliver up to 4,800 native ISO (with 3 stops of highlight protection). So it can work at both ends.
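Pawel explicitly does not describe how the 9x7 implements this, so purely as a generic illustration of how two simultaneous captures at different sensitivities can be combined to extend range (the exposure ratio and clip threshold below are made-up numbers):

    def merge_dual_exposure(high, low, ratio, clip=0.95):
        """Toy merge of two simultaneous linear captures (values normalised 0..1).
        Where the higher-sensitivity capture clips, fall back to the lower-
        sensitivity capture scaled by the known exposure/gain ratio. A generic
        technique only, not the 9x7's actual method."""
        return [h if h < clip else l * ratio for h, l in zip(high, low)]

    # Example: 3 stops (8x) between the two captures, four sample pixels,
    # the last two clipped in the higher-sensitivity capture.
    high = [0.02, 0.40, 0.99, 1.00]
    low = [0.0025, 0.05, 0.60, 0.90]
    print(merge_dual_exposure(high, low, ratio=8.0))  # [0.02, 0.4, 4.8, 7.2]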

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel

 


Re: 9x7 Large Format Camera Announced

Mike Nagel
 

On Mon, Sep 21, 2020 at 06:35 PM, Pawel Achtel ACS wrote:
There are other, more effective ways to achieve HDR, if required. The 9x7 sensor has another unique capability and this is to make two exposures simultaneously (but without affecting the frame rates). This would, indeed, deliver more than 16-bits of usable data range, which we could put in a larger (probably log) downstream container. We have not enabled this mode, but it is technically possible with the existing hardware. No other camera (to my knowledge) offers such HDR capability.

If you can truly capture two exposures simultaneously, and still deliver at least 24/30p (you say up to 70p), then have you considered employing a dual-gain architecture similar to what Arri does?

This would provide a very thick negative and more than 16 bits of data... this could only further improve the image... is the "mode" you refer to similar to that, and/or how are the dual exposures done and then merged (if you can share)?

Have you stress-tested this mode?

Would love to see images of that...

Mike Nagel
Director/Producer
L.A.


Re: 9x7 Large Format Camera Announced

Philip Holland
 

HDRx has been a feature in RED cameras for two generations, across the DSMC, DSMC2 and Ranger body types, doing what you describe in reference to above-16-bit capture.  Though technically that is not temporally "at the same time".  The limiting factor is the maximum internal data rate within the camera body, as everything is halved; guessing in your case, with this head and board, it would take 2X the uncompressed data rates to achieve something similar.

I don't see dual readout in the sensor specs on the manufacturer's website, as this is a BSI sensor.  It's also, similar to Canon's take on DR, a 12-bit sensor at up to 31 fps and 10-bit beyond that.

Not to put too much of my personal opinion into this, but I would focus on the 9344x7000 resolution for a downsampled workflow to 8K delivery.  And I do agree the aspect ratio is much more ideal for IMAX Dome application as we chop off a fair bit from Monstro due to the 1.43:1 spec unless we're working with arrays.

Phil

-----------------
Phil Holland - Director & Cinematographer




Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

Ø  which lens was used in that clip ?

 

Sigma ART 14mm T2.0 at T2.0. Yes, indeed, the lens does fall apart away from the centre :-)

We need better glass for those resolutions.

 

Ø  This was touched on by somebody else and I wanted to follow-up but waiting on more test results from you, but interested in the color science that you employ. Is this all custom on your end or did you team up w/ known color scientists ?

 

We are providing a basic camera colour profile, which we calibrated in-house. It should be a good starting point.

However, it is an open system. We designed it so that the colour science is just metadata attached to the RAW files, and it travels through the workflow with them.

There is nothing stopping the cinematographer or colourist from replacing the camera colour profiles and other aspects of the colour science at any point in this workflow. We like open standards. We like flexibility :-)

 

Ø  you stated (paraphrasing here) that anything above 12bit is noise - so, did you find the data labeled as "noise" completely useless (in all scenarios), or could under the right shooting conditions it contain valuable image information ?....But considering that other high-end cinema cameras today seem to use these "extra" bits up to 16 bits, there's gotta be a reason...

 

Yes. We looked at many high-end sensors, including those used in cinema and DSLR cameras, and I’m yet to see one with any usable information below the top 12 bits.

Some low resolution (large pixel pitch) DSLRs (and cinema cameras) use 14 bit but I honestly can’t detect any difference between 12-bit and 14-bit. Most other high-end cameras that I know of use either 12 or 10-bit readout. Even 10-bit can be adequate for high-resolution sensors. Some downstream codecs may use different, higher bit containers to represent non-linear data more accurately, but most come off the sensors as 12-bit or 10-bit linear RAW.
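One way to see why extra ADC bits can disappear into noise, using assumed numbers rather than measurements of any particular sensor: compare the quantisation step of a linear readout with photon shot noise at a given tone.

    import math

    def quant_step_e(full_well_e, bits):
        """Size of one ADC code in electrons for a linear conversion."""
        return full_well_e / (2 ** bits)

    def shot_noise_e(signal_e):
        """Photon shot noise (RMS, in electrons) at a given signal level."""
        return math.sqrt(signal_e)

    full_well = 30000.0           # assumed full-well capacity in electrons (hypothetical)
    signal = full_well / 2 ** 6   # a shadow tone 6 stops below clipping (~470 e-)

    for bits in (10, 12, 14, 16):
        print(f"{bits}-bit: step {quant_step_e(full_well, bits):.2f} e-, "
              f"shot noise at this tone {shot_noise_e(signal):.1f} e-")
    # Once the quantisation step is much smaller than the noise at a given tone,
    # extra bits mostly digitise noise rather than additional picture information.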

 

There are other, more effective ways to achieve HDR, if required. The 9x7 sensor has another unique capability and this is to make two exposures simultaneously (but without affecting the frame rates). This would, indeed, deliver more than 16-bits of usable data range, which we could put in a larger (probably log) downstream container. We have not enabled this mode, but it is technically possible with the existing hardware. No other camera (to my knowledge) offers such HDR capability.

 

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel

 



Re: 9x7 Large Format Camera Announced

Mike Nagel
 

On Mon, Sep 21, 2020 at 04:28 PM, Pawel Achtel ACS wrote:

The opening shot in our demo video was punched in 900% from 18,688 x 14,000 clip. You are looking at about 1/100th of the actual frame area there. To make things more interesting, it is a dark night shot (new moon, well after sunset) and shot at fully open lens aperture. You can spot some lens limitations there. But, Vimeo compression artefacts aside, I would not hesitate to use such clip in cinema production:

 

https://vimeo.com/454556452

Pawel,

which lens was used in that clip ?

honestly, looking at it briefly on a tablet, the full 18K shot falls apart on the sides (I'd not use that in production, I'd def crop it), maybe b/c the lens couldn't perform well or is that the anamorphic de-squeeze ?


Re color science:

This was touched on by somebody else and I wanted to follow-up but waiting on more test results from you, but interested in the color science that you employ. Is this all custom on your end or did you team up w/ known color scientists ?

I'm sure tackling this for the first time you ran into some issues that you had to overcome, would be interesting to hear some of that - please go into full technical detail if you can share ;-)

And I'm certain the color science improvement is always ongoing, as it is w/ everybody, so we're basically seeing color science v1 on the 9x7...


Re A/D bit depth:

you stated (paraphrasing here) that anything above 12bit is noise - so, did you find the data labeled as "noise" completely useless (in all scenarios), or could under the right shooting conditions it contain valuable image information ?

I remember that in the RED DSMC1 days, the file container was advertised as 14 bits but actually only had 12 bits of data (I can't fully remember the details) - we ran tests in Nuke and others came to the same conclusion...

But considering that other high-end cinema cameras today seem to use these "extra" bits up to 16 bits, there's gotta be a reason...


Mike Nagel
Director/Producer
L.A.


Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

Ø  But I still question the capacity to interpolate such an image to four times the source resolution. I’ll be interested to see just what the tested comparative results are out of your camera as it seems quite promising.

The opening shot in our demo video was punched in 900% from an 18,688 x 14,000 clip. You are looking at about 1/100th of the actual frame area there. To make things more interesting, it is a dark night shot (new moon, well after sunset), shot at fully open lens aperture. You can spot some lens limitations there. But, Vimeo compression artefacts aside, I would not hesitate to use such a clip in cinema production:

 

https://vimeo.com/454556452

 

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel



Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

Ø  Can the camera be triggered?

Hi Ted. Yes, the camera can be triggered with TTL level signal. Not just that:

Unlike nearly all other cameras, the 9x7 sensor supports overlapped readout. This allows the sensor to be read out while the next frame is being exposed.

This means that the 9x7 can be triggered at full speed, up to 70 fps at full resolution (RED, for example, can do this at ½ the speed only).

Also, multiple cameras can be genlocked, not at one or two selected frame rates, but at any frame rate including ramping.

This is particularly important for IMAX productions because nearly 100% of them are in 3D.

You can also operate all of the camera functions, including all menus with GPIO as well as wirelessly.

This is what makes 9x7 particularly suitable for drones, underwater housings, gimbals, MOCO, etc…and opens up creative possibilities.

 

Ø  . I will also note that the URSA 12K is starting with a higher native photosite count than your camera, so that should be factored into comparisons as well.

Mitch, I find this statement misleading.

9x7 has 9,344 x 7,000 resolution – 65 Megapixels

Ursa 12K in 4:3 aspect ratio (which is the native aspect ratio for IMAX and Giant Screen for those that have never been to IMAX) has 8,600 x 6,480 – 55.7 Megapixels (14% less)

So, the 9x7 starts with HIGHER pixel count than Ursa 12k and any other digital cinema camera.

By the same token, RED Monstro is only a 5.7K (24.9 megapixel) camera, or about 1/3rd of the 9x7’s pixel count, when it comes to VR, Giant Screen and IMAX aspect ratios.
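The aspect-ratio arithmetic behind those megapixel figures can be reproduced as below. The native resolutions used for the other cameras (12288 x 6480 for the URSA Mini Pro 12K, 8192 x 4320 for Monstro) are the commonly quoted ones and are assumptions here, included only to show how a height-limited 4:3 crop is computed.

    def megapixels(w, h):
        return w * h / 1e6

    def height_limited_crop_width(width_px, height_px, aspect):
        """Width of the largest crop at the target aspect ratio when height is the limit."""
        return min(width_px, int(round(height_px * aspect)))

    # 9x7 figure is from the post; the other two are commonly quoted values (assumptions).
    native = {"9x7": (9344, 7000), "URSA Mini Pro 12K": (12288, 6480), "Monstro": (8192, 4320)}

    for name, (w, h) in native.items():
        cw = height_limited_crop_width(w, h, 4 / 3)
        print(f"{name}: native {megapixels(w, h):.1f} MP, 4:3 crop {cw} x {h} = {megapixels(cw, h):.1f} MP")
    # The 4:3 crops land close to the figures above: about 8640 x 6480 (~56 MP) for
    # the URSA 12K and 5760 x 4320 (~24.9 MP) for Monstro, versus ~65 MP for the 9x7.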

 

Ø  Of course there’s ARRI and as you know I have some experience with the Vision Research Phantom cameras.

Those cameras have nearly an order of magnitude less resolution than the 9x7 and are, in my view, a poor fit for Giant Screen.

Their physical dimensions make them very difficult to configure in 3D rigs, underwater housings, etc.…(yes, we tried) and cannot be adapted to use submersible lenses.

This is why I decided to make and offer the 9x7: because, despite the broad range of cameras available to film makers, none could serve this segment of the industry well.

 

Ø  Just to sum up here, would you say that, being an IMAX camera, you optimized for sharpness rather than color?

Ø  That's not to say the 9x7 doesn't also have good color reproduction, and I suspect we're quibbling over small differences here, but that's just the gist I'm getting from what you've been saying, would you say that's correct? The paper does indeed support your claim that traditional bayer patterns result in a higher MTF.

Yes, that is correct. Thanks, Andy. RGBW patterns can (at least in theory) deliver improved colour reproduction, albeit with lower sharpness and lower contrast than Bayer RGB CFA.

 

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel

 


Re: 9x7 Large Format Camera Announced

Mitch Gross
 

Considering it’s meant for 3D as well as underwater housing use, I cannot imagine that remote triggering isn’t included. 

Mitch Gross
New York



Re: 9x7 Large Format Camera Announced

Ted Langdell
 

Can the camera be triggered?

I see some additional interesting uses for it if it can be.  

Ted

Ted Langdell
tedlangdell@...
(530)301-2931

Dictated into and Sent from my iPhone, which is solely responsible for any weird stuff I didn't catch.

On Sep 14, 2020, at 2:33 AM, Pawel Achtel ACS <pawel.achtel@...> wrote:



Daniel,

It appears that you have an agenda and no genuine interest in the 9x7 camera or cinematography in general, but I will answer anyway.

Ø  To me it seems like made from an industrial camera head

No, 9x7 is a digital cinema camera specifically designed for motion picture: starting from lens mount, sensor glass cover right down to recording, monitoring, tools, colour science and full high-end production workflow software and everything in between.

Ø  So Pawel, can you explain, how are you able achieve 14 and 16 stops of DR, with a sensor whose ADC is just 10 and 12 bits, and that INCLUDES noise?

The dynamic range was measured using the ARRI DRTC target. If anyone is interested (for the right reasons), I’m happy to share those results privately.

The dynamic range was also compared, side-by-side, with 3 other leading digital cinema cameras, some claiming 19 stops, and the comparison was very close: all within less than 1 stop. Due to the different noise patterns it is hard to tell which one is better or worse. Expect dynamic range similar to top-tier digital cinema cameras.
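As an editorial aside on the bit-depth question being debated here: “engineering” dynamic range is conventionally quoted as full-well capacity over read noise, expressed in stops, which is related to but not the same thing as ADC bit depth (bit depth sets how finely a single linear readout is quantised). With made-up numbers, not the 9x7’s or any specific sensor’s:

    import math

    full_well_e = 30000.0   # assumed full-well capacity, electrons (hypothetical)
    read_noise_e = 2.0      # assumed read noise, electrons RMS (hypothetical)
    print(f"engineering DR: {math.log2(full_well_e / read_noise_e):.1f} stops")  # ~13.9 for these numbers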

Ø  Al the samples have detail ruined by crude NR and the DR seems about to be 12 stops as on an aged DSLR.

You are entirely entitled to your opinion, Daniel. RAW footage samples can be viewed here and readily compared with any other camera:

               http://achtel.com/9x7/sample.htm

We went through a lot of effort and cost in order to get a wide range of samples, despite Covid restrictions. This is because we believe in our product and are more than happy to share footage freely for anyone to pixel peep and compare with that of other cameras.

For those who are genuinely interested in 9x7, I’m also happy to provide still pictures (RAW) shot with Sony A7R VI side-by-side with 9x7.  

To my eye 9x7 wins the dynamic range contest hands down with most other cameras, but I’m not here to discredit any other camera maker. It takes a lot of effort, passion and hard work to bring any camera to market and this should be celebrated as each camera would have its strong points. Even an “aged DSLR”.

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel


Re: 9x7 Large Format Camera Announced

Mitch Gross
 

On Sep 21, 2020, at 3:14 PM, Pawel Achtel ACS <pawel.achtel@...> wrote:

If, as you suggest, the MTF of RGBW pattern with 50% of W pixels is as good as or better than that of RGB Bayer CFA, please prove it, because my measurements show otherwise.
To be clear, I have not and am not suggesting this. What I am stating is that the dismissive comments in this thread towards RGBW are misleading and inaccurate. I will also note that the URSA 12K is starting with a higher native photosite count than your camera, so that should be factored into comparisons as well. Yes your camera is uncompressed and there can certainly be differences in other aspects of sensor design, but all such factors should be noted.

Also, while it is great that your camera records an uncompressed RAW data signal it is certainly not the only one. Of course there’s ARRI and as you know I have some experience with the Vision Research Phantom cameras. When conducting early testing with the Flex 4K prototype with no OLPF on it and capturing uncompressed RAW the resolving capabilities were stunning. But I still question the capacity to interpolate such an image to four times the source resolution. I’ll be interested to see just what the tested comparative results are out of your camera as it seems quite promising.

Mitch Gross
New York


Re: 9x7 Large Format Camera Announced

alister@...
 

Pawel:

No, white photosite output doesn’t correspond to luminance. 

According to the Oxford English Dictionary: Luminance - the intensity of light emitted from a surface per unit area in a given direction or the component of a television signal which carries information on the brightness of the image.

What is brightness: The number of photons emitted from a surface.

Photosites just capture photons, convert them to electrons, and then the A-to-D counts the number of electrons. There is no fundamental difference between any of those photons or electrons except the energy level of each photon, which is in effect the wavelength or colour of that photon. A notional “white” pixel is just counting all the photons of any wavelength reflected from a surface - aka luminance - as luminance is not frequency/colour specific, just intensity, which is directly proportional to the quantity of photons.

And colour temperature has nothing to do with this either; all the photosites do is capture photons, they don’t care about colour temperature. If there are more blue and fewer red photons, or vice versa, that will be seen in the photosite output.

If you know the total number of photons, the number of red photons and the number of blue photons, then absolutely you can subtract the red + blue from the total and the remainder will be green; there are no special hidden ones. Your claim that you cannot derive green from the total photon count (unfiltered pixel) by subtracting the filtered red and blue simply isn’t correct. Is it the best method? I really don’t know, it will depend on the designer’s goals; as you rightly point out, different CFA layouts perform differently.

Of course issues will occur if you clip the white pixel, but I’ve seen plenty of examples of clipped green photosites in Bayer sensors too. You seem to have something against Blackmagic’s approach and are making claims that this or that is impossible, when they are possible. Different processing or maths may be required and the outcome may have different characteristics, but that is different from something not being possible.
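To make the subtraction Alister is describing concrete, a toy sketch under ideal assumptions (linear, unclipped samples and a white photosite that responds to the whole spectrum); this is an illustration of the arithmetic only, not how Blackmagic or any real camera reconstructs its image:

    def green_from_white(w, r, b):
        """Toy estimate of green at a white photosite: total minus red and blue.
        Assumes ideal, linear, unclipped samples where W counts photons across the
        whole spectrum and R/B count their filtered portions."""
        return max(w - r - b, 0.0)

    # Example: a patch where the true channels are known, so W is their sum
    # (the ideal-case assumption).
    r, g, b = 0.20, 0.55, 0.10
    w = r + g + b
    print(f"estimated green: {green_from_white(w, r, b):.2f} (true value {g})")

In practice a real demosaic would weigh neighbouring samples and handle clipping, which is where the two sides of this argument differ, but the basic accounting above is what the paragraph describes.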



Alister Chapman 

Cinematographer - DIT - Consultant
UK Mobile/Whatsapp +44 7711 152226


Facebook: Alister Chapman
Twitter: @stormguy



www.xdcam-user.com    1.5 million hits, 100,000 visits from over 45,000 unique visitors every month!  Film and Video production techniques, reviews and news.



Re: 9x7 Large Format Camera Announced

Andy Jarosz
 

Hi Pawel,

Just to sum up here, would you say that, being an IMAX camera, you optimized for sharpness rather than color? The paper you linked drew the following conclusion,

"In the mean Delta E of Table 1, the RGBW CFAs showed the smallest color difference of (2.58~2.63), and the color difference of the RGB CFAs (2.93~2.97) were smaller than the CMY CFAs (2.61~3.70). It can be seen that the RGB CFAs have larger color difference compared with the RGBW or CMY CFAs.

In the mean Delta L for the lightness (luminance) error, the RGBW CFAs were the smallest at 2.13–2.16, and the RGB CFAs have the highest lightness error at 4.01~–4.06, whereas CMY CFAs range at 3.86~–3.97."

That's not to say the 9x7 doesn't also have good color reproduction, and I suspect we're quibbling over small differences here, but that's the gist I'm getting from what you've been saying - would you say that's correct? The paper does indeed support your claim that traditional Bayer patterns result in a higher MTF.
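For readers less familiar with the metric being argued over: MTF50 is simply the spatial frequency at which the measured modulation transfer function falls to half of its low-frequency value. A minimal sketch of reading it off a measured curve (the curve below is synthetic, not data from any of the cameras discussed):

    import numpy as np

    def mtf50(freqs_cyc_per_px, mtf_values):
        """Frequency (cycles/pixel) where a decreasing MTF curve first crosses 0.5,
        found by linear interpolation between the neighbouring samples."""
        f = np.asarray(freqs_cyc_per_px, dtype=float)
        m = np.asarray(mtf_values, dtype=float)
        below = np.nonzero(m < 0.5)[0]
        if below.size == 0:
            return None  # never drops below 50% in the measured range
        i = below[0]
        # interpolate between the last sample >= 0.5 and the first sample < 0.5
        return float(np.interp(0.5, [m[i], m[i - 1]], [f[i], f[i - 1]]))

    # Synthetic roll-off curve, purely illustrative:
    freqs = np.linspace(0.0, 0.5, 51)
    mtf = np.exp(-(freqs / 0.28) ** 2)
    print(f"MTF50 ~ {mtf50(freqs, mtf):.3f} cycles/pixel")  # ~0.233 for this curve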

Interestingly, that paper drew some quite interesting conclusions when all their testing criteria was factored in: https://journals.plos.org/plosone/article/figure?id=10.1371/journal.pone.0232583.t003

Best,

-- 
Andy Jarosz
MadlyFX & LOLED Virtual
loledvirtual.com
Andy@...
708.420.2639
Chicago, IL



Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

Hey, Mitch

 

> My learned friend Alister is correct. The White photosites represent the full spectrum, so a given color channel’s value can be derived either by a subtraction of the adjacent other color values (subtract the nearby reds & blues to create a virtual green) or by averaging the nearby matching color values (average the nearby greens to create a virtual green value). And doing both & comparing can improve crosstalk levels. 

 

No, white photosite output doesn't correspond to luminance, just as the R, G, B values of a Bayer CFA require a colour matching function to transform them to XYZ coordinates.

It is a much more complex function (than a simple subtraction) and depends on the colour temperature of the illuminant, among other things. What you will find, however, is that the derived colour will be inaccurate in a way that means saturation cannot be reproduced accurately. There are also questions of colour accuracy when W is clipped and R, G, B are not: an RGBW pattern would not be able to reproduce colour detail at high illumination levels as well as an RGB Bayer CFA could.
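A minimal sketch of the clipping behaviour described above, assuming a hypothetical sensor in which the green estimate at a white site is recovered as W minus (R + B). Once W saturates, the derived green (and therefore the saturation of bright colours) collapses even though the colour photosites are still unclipped:

# Hypothetical, normalised sensor: W clips at 1.0 because it sees the whole spectrum.
FULL_WELL_W = 1.0

def derived_green(r, b, w_linear):
    """Estimate G at a white site as W - (R + B), with W subject to clipping."""
    w_recorded = min(w_linear, FULL_WELL_W)   # the white photosite saturates first
    return w_recorded - (r + b)

# A bright green patch at increasing exposure; R and B stay well below clip.
for gain in (0.5, 1.0, 2.0, 4.0):
    r, g_true, b = 0.05 * gain, 0.20 * gain, 0.05 * gain
    w = r + g_true + b                        # idealised white response = R + G + B
    print(f"gain {gain}: true G {g_true:.2f}, derived G {derived_green(r, b, w):.2f}")
# Once W exceeds its full well, the derived G stops tracking the true value,
# so hue and saturation of bright colours are reconstructed incorrectly.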

 

RGB Bayer and RGBW CFAs have been around for quite some time. They all have strengths and weaknesses. There is a good comparison of different CFA patterns here:

 

https://www.researchgate.net/publication/341303755_Image-quality_metric_system_for_color_filter_array_evaluation/download

 

In this article, the RGB Bayer pattern as well as several RGBW patterns were compared for image quality. Patterns like RGBW2 and RGBW3, which have 50% white photosites, similar to those in the BM patent, were included. One of the metrics was sharpness, specifically MTF 50. The Bayer pattern exhibited a significantly higher MTF 50 as well as a higher resolving limit than RGBW2 or RGBW3. In fact, the Bayer CFA had the highest MTF 50 of all 12 CFA patterns compared in the article.

 

This is the reason why we chose an RGB Bayer CFA for the 9x7 (just like all the other high-end cinema cameras).

 

Due to the complexity of colour matching functions and the other variables at play, the best way to quantify this superior performance of the Bayer CFA is to measure MTF 50 and MTF 30.
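For readers following along, MTF 50 and MTF 30 are simply the spatial frequencies at which the measured modulation transfer function falls to 50% and 30% of its low-frequency value. A minimal sketch of reading them off a measured curve (the frequency grid and MTF values below are made-up placeholders, not data from any of the cameras discussed):

import numpy as np

def mtf_crossing(freqs, mtf, level):
    """Return the first spatial frequency (cycles/pixel) at which the MTF
    curve falls to the given level, using linear interpolation between samples."""
    freqs = np.asarray(freqs, dtype=float)
    mtf = np.asarray(mtf, dtype=float)
    below = np.where(mtf <= level)[0]
    if below.size == 0:
        return None                      # curve never drops to this level
    i = below[0]
    if i == 0:
        return freqs[0]
    # interpolate between the last point above and the first point below
    f0, f1 = freqs[i - 1], freqs[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (level - m0) * (f1 - f0) / (m1 - m0)

# Placeholder slanted-edge result: frequency in cycles/pixel, MTF normalised to 1.0 at DC.
freqs = np.linspace(0.0, 0.5, 11)
mtf   = [1.00, 0.97, 0.91, 0.82, 0.71, 0.60, 0.49, 0.39, 0.30, 0.22, 0.16]

print("MTF50 at", mtf_crossing(freqs, mtf, 0.50), "cy/px")
print("MTF30 at", mtf_crossing(freqs, mtf, 0.30), "cy/px")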

 

If, as you suggest, the MTF of RGBW pattern with 50% of W pixels is as good as or better than that of RGB Bayer CFA, please prove it, because my measurements show otherwise.

 

Our MTF figures as well as input RAW files (for all three cameras) are freely available here for everyone to see (I removed Ursa 12K):

 

https://drive.google.com/drive/folders/19xKe5KX1bSivARHhGogbI5Dyq1GaZjp8?usp=sharing

 

As a matter of explanation: the ARRI LF and Monstro files were debayered using their native software at the highest quality settings with no sharpening applied. The RAW files are provided for transparency.

The 9x7 wasn't debayered at all: it was ingested as DNG raw directly into Imatest, so no image manipulation was applied to it. Anyone can take the 9x7 DNG file and ingest it into Imatest to verify the results as they see fit. Another caveat is that the RED Monstro had an aftermarket Kippertie OLPF. I believe it could have performed slightly better with the original OLPF, which I would really like to test at the next opportunity.

 

I think both the RED and the Alexa LF performed very well and those results are nothing to sneeze at. I would have liked to test more cameras, as each has its strengths and weaknesses, different capabilities, file formats, budgets, etc.

 

The 9x7 was made specifically for Giant Screen and IMAX, and it offers features that are specific to this purpose. High MTF and sharp, artefact-free images were the primary objectives, which I think we achieved quite well (more than double the MTF 50 and MTF 30 readings of any other high-end cinema camera out there).

 

I look forward to seeing Geoff's test results. Can we have an ISO 12233 chart there, please?

 

> Pawel, if as you point out the Nyquist limit to any sensor is half the photosite frequency, then how do you justify the claim that you can take your sensor’s 9.4Kx7K photosites and derive a four times greater 18.8Kx14K pixel image?

 

The 9x7 records uncompressed RAW at data rates up to 30 times higher than those of most other high-end cinema cameras.

The 9x7 doesn't throw away minute detail from shadows or highlights to help a compression algorithm work efficiently. It doesn't throw away any information at all.

The 9x7 doesn't apply compression and therefore does not produce any compression artefacts.
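To put a rough number on what uncompressed RAW means at this photosite count, here is a quick back-of-the-envelope calculation. The 9,344 x 7,000 figure follows the 9.4K x 7K quoted in this thread; the bit depth and frame rate are assumptions for the arithmetic only, not published 9x7 specifications:

# Rough uncompressed-RAW data-rate arithmetic (assumed bit depth and frame rate).
width, height = 9344, 7000      # photosites, per the 9.4K x 7K figure in the thread
bits_per_photosite = 12         # assumption for illustration only
fps = 24                        # assumption for illustration only

bits_per_frame = width * height * bits_per_photosite
mb_per_frame = bits_per_frame / 8 / 1e6
gb_per_second = bits_per_frame * fps / 8 / 1e9

print(f"{mb_per_frame:.0f} MB per frame, ~{gb_per_second:.1f} GB/s at {fps} fps, uncompressed")
# A camera recording compressed RAW at, say, 8:1 or higher lands at a small
# fraction of this, which is where multi-fold data-rate differences come from.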

To fully appreciate this, it is best to "pixel peep" into, say, RED raw files (or any other compressed format; not picking on RED here) and compare them with 9x7 RAW files side by side. The 9x7 detail looks pristine and high-contrast, with no aliasing and no false colours or "wiggly worms" at high spatial frequencies. Think of it as a (very) "thick negative" to start with.

 

This "thick negative" allows detail reconstruction of 1 photosite to 4 pixels (as opposed to 1 photosite to 1 pixel) and the results are spectacular.

We are not claiming that all of the detail smaller than Nyquist can be fully or even accurately reconstructed in all circumstances, but we do claim that the 18.7K x 14K debayered output from the 9x7 is, at the pixel level, comparable to that of most other cameras that use compression and/or a less refined sensor design and optical stack. And it does look visibly better than the "native" 9.3K x 7K, so much so that we can "punch in" 900% and the images look very respectable (suitable for regular cinema).
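To make the "1 photosite to 4 pixels" idea concrete, here is a minimal sketch that bilinear-demosaics an RGGB Bayer mosaic and writes the result onto a 2x-wider, 2x-taller output grid, so each photosite ends up covering four output pixels. This only illustrates the output-grid idea using textbook bilinear interpolation; it is not the 9x7's actual reconstruction algorithm, which is not described in this thread:

import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic_2x(mosaic):
    """Bilinear-demosaic an RGGB Bayer mosaic and return an RGB image on a
    2x output grid (four output pixels per photosite). Illustration only."""
    h, w = mosaic.shape
    # Per-channel sampling masks for an RGGB layout.
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0

    def interp(mask, kernel):
        # Divide by the interpolated mask so edge pixels are normalised correctly.
        num = convolve2d(mosaic * mask, kernel, mode="same", boundary="symm")
        den = convolve2d(mask, kernel, mode="same", boundary="symm")
        return num / den

    rgb = np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])
    # Map each photosite onto a 2x2 block of output pixels (nearest-neighbour here;
    # a real reconstruction would use far more sophisticated resampling).
    return np.repeat(np.repeat(rgb, 2, axis=0), 2, axis=1)

demo = bilinear_demosaic_2x(np.random.rand(8, 8))
print(demo.shape)   # (16, 16, 3)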

 

We are not claiming this is the "best" camera. That depends on many other factors, like data size, budget constraints, etc. And who actually needs 18.7K x 14K or, more importantly, who can actually find a lens and pull focus at those resolutions? (I have tried; it is hard.) There are countless projects and genres where I would recommend an ARRI LF, RED or Sony Venice, or even a Sigma fp over the 9x7. I use other cameras too :-)

 


Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel



Re: 9x7 Large Format Camera Announced

Jeff Kreines
 


On Sep 21, 2020, at 6:27 AM, Mitch Gross <mitchgrosscml@...> wrote:

This flies in the face of Nyquist, no?
Pass the popcorn.

Jeff Kreines
Kinetta
jeff@...
kinetta.com

Sent from iPhone.


Re: 9x7 Large Format Camera Announced

Mitch Gross
 

My learned friend Alister is correct. The White photosites represent the full spectrum, so a given color channel’s value can be derived either by a subtraction of the adjacent other color values (subtract the nearby reds & blues to create a virtual green) or by averaging the nearby matching color values (average the nearby greens to create a virtual green value). And doing both & comparing can improve crosstalk levels. 

You cannot state that the White photosites have no color information. They have ALL the color information, so comparisons to nearby single-color photosites must be used to derive relative color values. But by doing this three ways, each White photosite can be assigned a derived value for red, green & blue. Is Bayer better than RGBW for deriving colors? Both must create virtual color information to fill in the gaps of their color filter arrays. I would posit that an RGBW pattern can only work effectively when implemented at very high resolutions, but it would take someone with a greater math degree than mine to assign any comparative values between the two systems. Luckily there's a lot of that work out there for anyone willing to take the deep dive. It's not nearly as reductive and simple as is being claimed by some. 
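To make the two derivations described above concrete, here is a minimal sketch for a single white photosite in a toy 3x3 neighbourhood. The layout and values are hypothetical and the filters are treated as idealised and leak-free; in a real sensor the spectral overlap is exactly where comparing the two estimates becomes useful:

# Toy 3x3 neighbourhood centred on a white (W) photosite, values are linear exposures.
# Hypothetical RGBW arrangement for illustration only:
#   G  R  G
#   B  W  B
#   G  R  G
G_nbrs = [0.42, 0.40, 0.41, 0.43]   # the four diagonal green neighbours
R_nbrs = [0.18, 0.19]               # reds above/below
B_nbrs = [0.07, 0.08]               # blues left/right
W_centre = 0.66                     # white sees (roughly) R + G + B

avg = lambda xs: sum(xs) / len(xs)

# Derivation 1: subtraction, i.e. what is left of W after removing local R and B.
g_by_subtraction = W_centre - avg(R_nbrs) - avg(B_nbrs)

# Derivation 2: averaging the neighbouring green photosites.
g_by_averaging = avg(G_nbrs)

print(f"G by subtraction: {g_by_subtraction:.3f}")
print(f"G by averaging  : {g_by_averaging:.3f}")
# Comparing the two estimates flags crosstalk / filter-leakage errors:
# a large disagreement suggests the simple subtraction model does not hold here.
print(f"disagreement    : {abs(g_by_subtraction - g_by_averaging):.3f}")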

Pawel, if as you point out the Nyquist limit to any sensor is half the photosite frequency, then how do you justify the claim that you can take your sensor's 9.4Kx7K photosites and derive a four times greater 18.8Kx14K pixel image? The long-accepted rule of thumb for a Bayer pattern is about 70% of the starting resolution, yet you claim 400%. This flies in the face of Nyquist, no?
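Purely as arithmetic on the figures quoted in this thread (assuming 9,344 photosites across for the 9.4K number, and treating the 70% Bayer figure as the rule of thumb it is rather than a measurement), the claim looks like this:

photosites_across = 9344                    # assumed for the 9.4K figure in this thread
nyquist_cycles = photosites_across / 2      # highest frequency the sampling grid can represent
bayer_rule_of_thumb = photosites_across * 0.7

print(f"Nyquist limit: {nyquist_cycles:.0f} cycles per picture width")
print(f"~70% rule of thumb: roughly {bayer_rule_of_thumb:.0f} effective pixels across")
print(f"Claimed output grid: {photosites_across * 2} pixels across, "
      f"{photosites_across * 2 / bayer_rule_of_thumb:.1f}x the rule-of-thumb figure")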

Mitch Gross
New York

On Sep 21, 2020, at 5:36 AM, alister@... wrote:

If you have, say 1 x R, 1 x B and 2 x W (one red, one blue and two white) colour samples in your 2 x 2 bucket, you cannot determine the Green colour sample properly.

I'm sorry, but this has me confused. It has been normal practice to use colour-difference systems such as YUV or YCbCr to represent full-colour images by storing luma (brightness) plus two colour-difference signals. Subtract the colour-difference signals from the luma and you can recover the green component.

A white pixel is sampling the number of photons across the full spectrum from red, through green, to blue; this is the combined luma or brightness. It will count the total number of photons at all wavelengths (or energy levels). A red pixel samples the number of photons in only the red part of the spectrum: it will count the number of red-wavelength photons. The blue pixel counts how many blue photons you have. Subtract the red + blue photon counts from the total (white pixel) photon count and the difference must be the number of green photons; it can't be anything else. There aren't some other special white photons: every photon is at a specific wavelength/energy level. The result might not be as accurate as a dedicated green photosite, because the colour filters on the red and blue photosites have crossover and leakage that add errors to the maths, but the result must still be highly representative of the number of green photons.
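As a concrete version of the colour-difference analogy above, the BT.709 relation makes the arithmetic explicit: luma is a weighted sum of R', G' and B', so given luma plus red and blue the green component falls out by rearrangement. A tiny sketch (the white-photosite argument is the unweighted analogue of this, subject to the filter crossover mentioned above):

# BT.709 luma weights (Y' = 0.2126 R' + 0.7152 G' + 0.0722 B').
KR, KG, KB = 0.2126, 0.7152, 0.0722

def green_from_luma(y, r, b):
    """Recover G' from luma and the red/blue components by rearranging the luma equation."""
    return (y - KR * r - KB * b) / KG

r, g, b = 0.30, 0.55, 0.10
y = KR * r + KG * g + KB * b                # encode: luma from R'G'B'
print(round(green_from_luma(y, r, b), 3))   # decode: recovers 0.55
# With a white photosite, W counts photons across the whole band, so W - (R + B)
# is (approximately) the green contribution, subject to filter crossover and leakage.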




Alister Chapman 

Cinematographer - DIT - Consultant
UK Mobile/Whatsapp +44 7711 152226


Facebook: Alister Chapman
Twitter: @stormguy



www.xdcam-user.com    1.5 million hits, 100,000 visits from over 45,000 unique visitors every month!  Film and Video production techniques, reviews and news.


















On 21 Sep 2020, at 05:59, Pawel Achtel ACS <pawel.achtel@...> wrote:

If you have, say 1 x R, 1 x B and 2 x W (one red, one blue and two white) colour samples in your 2 x 2 bucket, you cannot determine the Green colour sample properly.
