
Re: 9x7 Large Format Camera Announced

alister@...
 

If you have, say 1 x R, 1 x B and 2 x W (one red, one blue and two white) colour samples in your 2 x 2 bucket, you cannot determine the Green colour sample properly.

I'm sorry, but this has me confused. It has been normal practice to use colour difference systems such as YUV or YCbCr to represent full colour images by storing Luma (brightness) plus two colour difference signals. Subtract the colour difference signals from the Luma to determine the green saturation.

A white pixel samples the number of photons in the full spectrum from red, through green, to blue: this is the combined Luma or brightness. It will count the total number of photons at all wavelengths (or energy levels). A red pixel samples the number of photons in only the red part of the spectrum: it will count the number of red-wavelength photons. The blue pixel samples the number of photons in the blue part of the spectrum: it will count how many blue photons you have. Subtract the red + blue photon count from the total (white pixel) photon count and the difference must be the number of green photons. It can't be anything else; there aren't some other special white photons, as every photon sits at a specific wavelength/energy level. The result might not be as accurate as a dedicated green photosite, because the colour filters on the red and blue photosites will have crossover and leakage that add errors to the maths, but the result must still be highly representative of the number of green photons.
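As a rough illustration of that subtraction, here is a toy sketch in Python. It assumes ideal filters with no crossover or leakage (the caveat above), and the photon counts are made-up example values:

# Idealised photon-count model: a white photosite counts everything,
# red and blue photosites count only their own bands (no filter leakage assumed).
red_photons, green_photons, blue_photons = 1200, 3400, 900   # made-up scene values

white_count = red_photons + green_photons + blue_photons     # white photosite sees all of them
red_count   = red_photons                                    # red photosite
blue_count  = blue_photons                                   # blue photosite

green_estimate = white_count - red_count - blue_count
print(green_estimate)   # -> 3400, matching the true green count in this ideal case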




Alister Chapman 

Cinematographer - DIT - Consultant
UK Mobile/Whatsapp +44 7711 152226


Facebook: Alister Chapman
Twitter: @stormguy



www.xdcam-user.com    1.5 million hits, 100,000 visits from over 45,000 unique visitors every month!  Film and Video production techniques, reviews and news.

On 21 Sep 2020, at 05:59, Pawel Achtel ACS <pawel.achtel@...> wrote:

If you have, say 1 x R, 1 x B and 2 x W (one red, one blue and two white) colour samples in your 2 x 2 bucket, you cannot determine the Green colour sample properly.


Re: 9x7 Large Format Camera Announced

Geoff Boyle
 

I’ll be publishing the tests of the 12K just as soon as I get a production version of the camera, we identified some issues in the pre production version which are fixed in the release version.

You can make up your own minds then. I like it 😊

But what an amazing concept! Send the pre-production camera to a bunch of experienced cinematographers to see if they can find any issues, and then fix them before you start to sell any cameras.

It’s a concept that will never catch on!

 

cheers
Geoff Boyle NSC FBKS
EU based cinematographer
+31 637155076

www.gboyle.nl

www.cinematography.net

 

 

From: cml-raw-log-hdr@... <cml-raw-log-hdr@...> On Behalf Of Mitch Gross
Sent: Monday, 21 September 2020 03:47
To: cml-raw-log-hdr@...
Subject: Re: [cml-raw-log-hdr] 9x7 Large Format Camera Announced

 

On Sep 20, 2020, at 7:02 PM, Pawel Achtel ACS <pawel.achtel@...> wrote:

Bayer pattern samples colour at the Nyquist frequency limit of the sensor. Therefore it can discern colour up to that level of detail.

A non-Bayer pattern sensor that has lower than Nyquist colour sampling frequency (such as one that has half of the pixels “colour blind”) cannot achieve this. It is just theoretically impossible.

 

I find this statement to be misleading. It makes it sound as if the RGBW CFA sensor on the Blackmagic Design URSA 12K (you gracefully have not named it but I will) has fully half the photosites on it receiving no color information. That is incorrect. In fact those photosites contain the full color spectrum of information. Just as Bayer pattern uses surrounding photosites to interpolate colors, the color information from the “white” photosites can be interpolated, and in fact they can function alternately as virtual red, green and blue photosite information sources. 

 

The resolving capacity of a Bayer pattern sensor is generally agreed to be around 70%. But Pawel, you keep stating how your camera somehow has better contrast to improve upon that. While I can accept that this may be true and I'm eager to see it, you then go on to state that the camera can successfully be used to reproduce fully four times its native resolution (twice as wide and twice as tall is four times the number of pixels compared to photosites, not the doubling that you tried to correct me on previously). Yes, it is via interpolation and yes, the better the resolving capabilities of the sensor's native data the greater the interpolation capacity. But you're claiming 400% instead of 70%, which is quite the leap. If such a thing is possible then certainly interpolated color resolution from a "white" photosite is also possible.

 

I don’t mean to go off here, but so far several of your claims for your camera have gone beyond technical clarity and into rather vague statements that appear to defy mathematics. So to then turn and state that a different technology is “theoretically impossible” appears to be disingenuous on its face. 

 

Your camera's sensor has around 70 million photosites. The URSA 12K's sensor has around 80 million photosites. No, all sensors and all photosites are not created equal, but it is a mathematical fact that there is a difference of around 10 million of them between the two cameras. Time will tell how these different approaches ultimately affect the images.

 

 

Mitch Gross

New York


Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

> I find this statement to be misleading. It makes it sound as if the RGBW CFA sensor on the Blackmagic Design URSA 12K (you gracefully have not named it but I will) has fully half the photosites on it receiving no color information. That is incorrect. In fact those photosites contain the full color spectrum of information. Just as Bayer pattern uses surrounding photosites to interpolate colors, the color information from the "white" photosites can be interpolated, and in fact they can function alternately as virtual red, green and blue photosite information sources.

 

Sorry, Mitch, this is just incorrect.

 

The Nyquist limit of any sensor is ½ the sampling frequency (number of pixels).

 

So, the smallest detail that the sensor can reproduce without artefacts is equivalent to a cluster of approximately 2x2 pixels (Nyquist theory). One clever thing about Bayer pattern is that it samples every primary colour at least once at this level of detail. No colour is “interpolated”, every primary colour has been sampled at least once. For this reason it can accurately reproduce colour right up to the resolving limit of the sensor.  

 

This is different than sensors that sample colour with lower spatial frequency (such as Ursa 12K).

 

There is no amount of “interpolation” or any other algorithm that you can apply in order to accurately reproduce colour if your colour sampling frequency is lower than that detail. It is just theoretically impossible.

 

Given it is more than just you that is confused, let me explain more simply 🙂

 

You have Bayer pattern that looks like this:

 

 

Imagine a bucket as your detail. The smallest bucket that can capture all of the primary colours is 2 x 2 pixel size. This is coincidentally exactly the same size as the smallest detail this sensor could reproduce if it was just monochromatic – this is dictated by Nyquist limit.

 

If you used a smaller bucket, you would end up with inaccurate colour, but also inaccurate shape because your bucket is too small for both: the colour and the detail. In Bayer pattern colour sampling and detail sampling are perfectly matched. At Nyquist limit (2x2) or larger bucket, each primary colour has been sampled at least once and therefore the colour of such detail will be accurate.

 

Now, consider a different pattern. One that has half photosites that are “colour blind”.

 

 

I'm not sure if this is the actual pattern used in the Ursa 12K (I grabbed it from a patent application), but let's just see what the smallest-detail colour sample would look like from this CFA.

 

The (monochromatic) Nyquist limit of such a sensor is still the same: a 2 x 2 (4 pixel) bucket. No difference here.

 

But there is no way this bucket can sample all primary colours. You need a bigger bucket to sample all the primaries. The smallest bucket that can achieve this is 3 x 3 (9 pixels), which is more than double the Bayer bucket. Your colour resolution will be less than half that of a comparable Bayer pattern sensor.
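Here is a small sketch of that bucket-counting argument in Python. The RGBW tile below is only a plausible 50%-white example pattern, not necessarily the actual Ursa 12K layout (as noted above):

# Smallest square "bucket" that can sample all three primaries, for a given CFA tile.
BAYER = ["RG",
         "GB"]

RGBW = ["WRWG",        # hypothetical 50%-white RGBW tile, for illustration only
        "BWGW",
        "WGWR",
        "GWBW"]

def filter_at(cfa, r, c):
    return cfa[r % len(cfa)][c % len(cfa[0])]

def min_bucket(cfa, max_size=6):
    period = len(cfa)
    for k in range(1, max_size + 1):
        for r in range(period):
            for c in range(period):
                window = {filter_at(cfa, r + dr, c + dc)
                          for dr in range(k) for dc in range(k)}
                if {'R', 'G', 'B'} <= window:
                    return k          # some k x k bucket sees every primary
    return None

print(min_bucket(BAYER))   # -> 2: a 2 x 2 (4 pixel) bucket is enough
print(min_bucket(RGBW))    # -> 3: this pattern needs a 3 x 3 (9 pixel) bucket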

 

> In fact those [white] photosites contain the full color spectrum of information.

 

Sorry, Mitch, this makes no sense. There is no colour information in a white photosite.

 

If you have, say 1 x R, 1 x B and 2 x W (one red, one blue and two white) colour samples in your 2 x 2 bucket, you cannot determine the Green colour sample properly.

 

The reason you can't do this is that you can't tell how saturated the Green is. And the reason for this is that your colour space (sampling R, B and W only) in your 2 x 2 bucket looks like this:

 

The saturation of your Green sample has been "thrown away" compared to the colour space sampled with a Bayer pattern's 2 x 2 bucket, which has all primaries sampled at this level of detail.

 

Some 2 x 2 buckets have just 2 different types of colour sample: 2 x R + 2 x W. How on earth would you determine accurate colour of such detail?

 

You need a bigger bucket to determine the colour and this means lower resolution, 50% lower.

 

Side Note: Cameras like this can (at least in theory) have some advantages over Bayer pattern in lower resolution formats: for example dynamic range.

But, in order to achieve this you need to reduce the detail level to less than Nyquist limit of the sensor. Otherwise your detail is not going to have accurate colour and will have artefacts, which was my point.

 

Hope this clarifies things a bit more.

 

Bayer CFA is “brilliant” in the way that the colour sampling frequency is perfectly matched with detail sampling frequency (Nyquist limit).

 

> Your camera's sensor has around 70 million photosites. The URSA 12K's sensor has around 80 million photosites.

 

The 9x7 camera is specifically designed for VR, Giant Screen and IMAX, which are all 4:3 or 1:1 aspect ratio.

Accordingly, the 9x7 actually has a higher pixel count in this format, and every pixel is recorded in uncompressed RAW with no information "thrown away", whether by compression or lower colour sampling.

 

But, as they say, the proof is always in the pudding. Ultimately, the images will speak for themselves.

And, these are freely available for anyone to compare with any other camera: from Blackmagic Ursa to IMAX 15-perf 70mm film: http://achtel.com/9x7/sample.htm

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel

 


Re: 9x7 Large Format Camera Announced

Ben Allan ACS
 

Well said Mitch.
Ben


On 21 Sep 2020, at 11:46 am, Mitch Gross <mitchgrosscml@...> wrote:
I don’t mean to go off here, but so far several of your claims for your camera have gone beyond technical clarity and into rather vague statements that appear to defy mathematics. So to then turn and state that a different technology is “theoretically impossible” appears to be disingenuous on its face.


Re: 9x7 Large Format Camera Announced

Mitch Gross
 

On Sep 20, 2020, at 7:02 PM, Pawel Achtel ACS <pawel.achtel@...> wrote:

Bayer pattern samples colour at the Nyquist frequency limit of the sensor. Therefore it can discern colour up to that level of detail.

A non-Bayer pattern sensor that has lower than Nyquist colour sampling frequency (such as one that has half of the pixels “colour blind”) cannot achieve this. It is just theoretically impossible.


I find this statement to be misleading. It makes it sound as if the RGBW CFA sensor on the Blackmagic Design URSA 12K (you gracefully have not named it but I will) has fully half the photosites on it receiving no color information. That is incorrect. In fact those photosites contain the full color spectrum of information. Just as Bayer pattern uses surrounding photosites to interpolate colors, the color information from the “white” photosites can be interpolated, and in fact they can function alternately as virtual red, green and blue photosite information sources. 

The resolving capacity of a Bayer pattern sensor is generally agreed to be around 70%. But Pawel, you keep stating how your camera somehow has better contrast to improve upon that. While I can accept that this may be true and I'm eager to see it, you then go on to state that the camera can successfully be used to reproduce fully four times its native resolution (twice as wide and twice as tall is four times the number of pixels compared to photosites, not the doubling that you tried to correct me on previously). Yes, it is via interpolation and yes, the better the resolving capabilities of the sensor's native data the greater the interpolation capacity. But you're claiming 400% instead of 70%, which is quite the leap. If such a thing is possible then certainly interpolated color resolution from a "white" photosite is also possible.

I don’t mean to go off here, but so far several of your claims for your camera have gone beyond technical clarity and into rather vague statements that appear to defy mathematics. So to then turn and state that a different technology is “theoretically impossible” appears to be disingenuous on its face. 

Your camera's sensor has around 70 million photosites. The URSA 12K's sensor has around 80 million photosites. No, all sensors and all photosites are not created equal, but it is a mathematical fact that there is a difference of around 10 million of them between the two cameras. Time will tell how these different approaches ultimately affect the images.


Mitch Gross
New York


Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

Hey Ben

 

> You keep implying that you have data from the other "non-bayer" camera

 

Bayer pattern samples colour at the Nyquist frequency limit of the sensor. Therefore it can discern colour up to that level of detail.

A non-Bayer pattern sensor that has lower than Nyquist colour sampling frequency (such as one that has half of the pixels “colour blind”) cannot achieve this. It is just theoretically impossible.

 

A non-Bayer sensor with such lower colour sampling requires high spatial frequencies (small detail) to be filtered out of the actual image (say, by a much stronger OLPF) in order to avoid colour inaccuracies, not just aliasing. For this reason, such a design will never produce results as sharp as a Bayer sensor can.

 

Algorithms and marketing, no matter how clever, cannot overcome laws of physics. Sampling theory stands in the way.

 

This is the reason why the 9x7 uses Bayer sensor: because Bayer pattern is designed to produce colour detail right up to the resolving limit of the sensor.

 

None of those statements require testing. I do believe the comparative test results we conducted were accurate (and they do match with sampling theory). I still decided to exclude them from comparison only because the manufacturer claimed they weren’t accurate (but wouldn’t let us repeat the test independently).

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel

 


Re: 9x7 Large Format Camera Announced

Ben Allan ACS
 

Hi Pawel,

You keep implying that you have data from the other “non-bayer” camera.  You and I know that the test of that camera had a fundamental flaw when it was done and the fact that you are suggesting that you know how it performs is very misleading and inappropriate.  Please stop suggesting that you have that information when the data you have is so inaccurate.

Cheers,
Ben


All Bayer sensor cameras reached Nyquist resolving limit of their sensors (others, did not). 
(*) Results from other cameras have been significantly lower and have been omitted


_______________________________
Ben Allan ACS CSI

Producer | Cinematographer | Colorist
ACS National Secretary
Host of "The T-Stop Inn” Podcast



Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

Andy

> With the advent of the new Ursa 12K, we're seeing unique alternatives to traditional Bayer patterns pop up. I was wondering if there was anything of note about the CFA of the 9X7.

Most of those alternative patterns are designed to work well when down-sampled to significantly lower resolution for delivery. With half of the pixels being “colour blind” (and therefore lower colour sampling) clusters of pixels need to be larger than Bayer 2x2 (4 pixels) and the CoC needs to be significantly increased compared to a Bayer pattern: at least 3 x 3 (9 pixels), but probably more.

The 9x7 has been optimised to deliver not lower, but actually higher spatial frequencies than the sensor sampling. We can provide workflows up to 18.7K x 14K (260 Megapixels) which, at pixel level, look as good as those from the top five high-end digital cinema cameras. And pixel design is one of the unique ingredients that makes this possible.

Another important difference in the mix is that uncompressed format doesn’t need to throw away the minute detail of image data and doesn’t introduce compression artefacts. Again, these factors do not have much significance when down-sampling footage for lower resolution for delivery, but it does have significant impact when you reconstruct detail to blow it up beyond the spatial sampling frequency of the sensor. Having more colour samples (like Bayer) is therefore a benefit over sensors that have lower spatial frequency of colour sampling.

Indeed, this is what we observed in sample images as well as tests conducted in controlled conditions. The highest contrast (by far) is achieved with Bayer sensors with high colour accuracy close to Nyquist limit.

Here are the top three results (*) for MTF 50% contrast, using a (very) high contrast, (very) high resolving power lens:

9x7:      4039 line widths per picture height

ARRI LF:  1989 line widths per picture height

Monstro:  1736 line widths per picture height

Here are the results (*) for MTF 30% contrast:

9x7:      3031 line widths per picture height

ARRI LF:  1325 line widths per picture height

Monstro:  1283 line widths per picture height

 

All Bayer sensor cameras reached Nyquist resolving limit of their sensors (others, did not).

(*) Results from other cameras have been significantly lower and have been omitted

The ARRI LF showed a very "clean" edge profile with no aliasing or compression artefacts.

The 9x7 also produced a very "clean" edge profile, with no colour fringing or inaccuracies at high spatial frequencies.

The Monstro samples showed some colour inaccuracies at high spatial frequencies (fine black lines at 45 degrees were debayered green), but not much aliasing either.
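For reference, here is how a spatial-frequency figure relates to line widths per picture height; the numbers in this Python sketch are assumed examples (including the sensor height), not measured results:

# Convert an MTF50 figure from cycles/pixel to line widths per picture height (LW/PH).
# One cycle (line pair) corresponds to 2 line widths.
def lw_per_ph(mtf50_cycles_per_pixel, picture_height_px):
    return 2 * mtf50_cycles_per_pixel * picture_height_px

# Nyquist is 0.5 cycles/pixel, i.e. one line width per pixel of picture height:
print(lw_per_ph(0.5, 7000))    # -> 7000 LW/PH for an assumed 7000-pixel-tall sensor
print(lw_per_ph(0.29, 7000))   # -> ~4060, the ballpark of the MTF50 figure quoted above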

 

What we learned was that sensor pixel count does not always translate to more detail or “better” detail in the resulting images.

 

We also learned that Bayer pattern is the best choice (by far) when it comes to delivering high detail and high-contrast images without colour artefacts.

 

Another thing that we learned was that Bayer sensors were the most sensitive (also by far), and the three cameras mentioned above performed almost identically when it comes to sensitivity. A camera with a non-Bayer sensor was significantly less sensitive.

 

> I'm also interested in the filter stack and color science--what decisions you made that would contribute to the look of the camera

 

One unique feature of the 9x7 is that the colour science is "metadata" and is not "baked in". We provide tools and advice to allow cinematographers to design their own colour science, if required.

 

The camera colour profile that is included in the samples we published is on the "vivid" side and is most accurate at about 5500K.

 

The filter stack is replaceable, just like the colour science. Again, this is to be able to fine-tune colour response and accuracy for different lighting conditions, as well as to allow more choices for artistic expression.

 

The standard sensor filter stack contains UV and IR cut. However, it does let in slightly more near-IR than most other cinema cameras. The reason for this is that the 9x7 CFA has very little cross-talk between the red and blue channels compared to other sensors. By letting deeper red in (and without IR "pollution" spilling into blue), we can create more accurate and more saturated reds without infringing on the magenta line, which can be a common problem with other cameras.

 

Another unique feature of the CFA (as well as the sensor design) is very high QE (more than 65% including the CFA) which, despite a much smaller pixel pitch, delivers sensitivity as good as or better than that of other cameras with significantly larger pixels. The 9x7 just loves darkness.

 

Daniel, I will skip most issues you raised as Mitch covered them, but answer just two briefly.

 

> What is the source for this argument?

 

GSCA audiences’ surveys that are periodically conducted and available to members (I’m a member).

 

> AFAIK, some IMAX camera(s) can be used with underwater housing.

 

Underwater housings are not just there so that the camera doesn't get wet. As I said earlier, a flat port (required in 3D for film cameras) limits image quality to less than about 1K (due to the laws of physics) and the results are not any better than from a GoPro. IMAX film cameras cannot easily be adapted to use wet lenses, which are the only way to achieve reasonable image quality underwater.

Recently we managed to deliver wet lenses able to resolve up to 8K corner-to-corner (which were used on the Avatar sequels) - another industry breakthrough. We are currently working on custom underwater optics for the 9x7 that would be even sharper. To be clear: we are talking about more than two orders of magnitude (more than 100x) more detail on the screen than any IMAX film camera was ever able to produce underwater, at a fraction of the cost and at a fraction of the size and weight.

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel



Re: 9x7 Large Format Camera Announced

Andy Jarosz
 

Hey Pawel,

With the advent of the new Ursa 12K, we're seeing unique alternatives to traditional Bayer patterns pop up. I was wondering if there was anything of note about the CFA of the 9X7.

I'm also interested in the filter stack and color science--what decisions you made that would contribute to the look of the camera. Did you go for a more neutral color, or is it biased in some way to create a more specific look?

Best,

-- 
Andy Jarosz
MadlyFX & LOLED Virtual
loledvirtual.com
Andy@...
708.420.2639
Chicago, IL
On 9/19/2020 12:44 AM, Pawel Achtel ACS wrote:

> This is my take: per CML "Good enough is not good enough". Especially when talking about IMAX.

> Not as good as a true IMAX system, but still 65mm film: no CFA, no debayering: real film, not a video signal :-)

> https://logmar.dk/magellan-65mm/

 

As you may know, about 90% of IMAX films are in 3D. How do you mount two of those side-by-side or on beam-splitter and what is genlock like?

How about drone, underwater housing, gimbal, or high frame rates?

 

How much does it cost per hour of footage, including processing?

Between 60 ~ 70 % of what audiences want to watch on Giant Screens is underwater.

How do you change the film load after you run out (after 2 minutes) when camera is in an underwater housing?

What optics do you use, considering flat underwater port resolves about 1K, particularly when filming in 3D?

 

Sorry, I’m just struggling to see the benefits you are suggesting there may be there.

 

There comes a point when better digital cameras are needed to tell the stories that we want to tell on Giant Screens.

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

 

 


Re: 9x7 Large Format Camera Announced

Mitch Gross
 

Daniel, 

Honestly, it’s 2020. You’re really bringing up a film v digital argument? This is ancient thinking. We all know what the points are here, and both the pros and cons are that much more exaggerated when it comes to working for an IMAX finish and then even more so when discussing IMAX 3D. Move on. 

(Have you seen an IMAX 3D rig? It’s an absolute monster.)

I really see no point in this line of conversation. We know. Let’s discuss the relative merits and questions related to the use of Pawel’s digital camera. 


Mitch Gross
New York

On Sep 19, 2020, at 9:02 AM, Daniel Henríquez-Ilic <dhisur@...> wrote:


As you may know, about 90% of IMAX films are in 3D. How do you mount two of those side-by-side or on beam-splitter and what is genlock like?

IMAX 3D-30 camera does expose 65mm/30perf. (Left and Right simultaneously). This was used also in space for the documentary about Hubble. Stunning footage.

There was also another previous IMAX 3D camera.

How about drone, underwater housing, gimbal, or high frame rates?

Drone? It can be done with an Arri 35mm camera, as exhibited at Cine Gear Los Angeles 2018. Could a Magellan 65mm camera be used on drone? Eventually, I guess.  What about an IMAX camera? Probably not. However aerial footage is not limited to drone :-) I have seen impressive aerial footage shot with IMAX camera. 

AFAIK, some IMAX camera(s) can be used with underwater housing.

I agree that those cameras are too bulky and heavy to be used on a gimbal.

Regarding HFR, AFAIK, 48 fps has been achieved on a IMAX film camera. For higher speed such as 150 FPS an Arriflex 435 with low speed Vision3 stock could be an alternative.


 

How much does it cost per hour of footage, including processing?

Sure, highest image quality comes at a cost. Interestingly enough, 65mm film is making a come back (well, film in general, as analog audio).  The 65mm raw stock is now even available at B&H :-)  


Between 60 ~ 70 % of what audiences want to watch on Giant Screens is underwater.

What is the source for this argument?

How do you change the film load after you run out (after 2 minutes) when camera is in an underwater housing?

It is a different style/method of shooting.
Different logistics too.

Think about one roll of photographic film with 12 exposures compared to a digital photo camera that can store hundreds of pictures in a little memory card. 
A portrait session can be resolved with only 1 roll of film of 12 exposures.
What about portraits on 4x5 inch sheet film? Stunning, especially with a Cooke lens for 4x5 inch (Large Format).

What optics do you use, considering flat underwater port resolves about 1K, particularly when filming in 3D?

 

Regarding optics for IMAX, AFAIK, this was mostly Carl Zeiss adapted by IMAX.

The Magellan camera should accept a broader range of glass.



Sorry, I’m just struggling to see the benefits you are suggesting there may be there.


Per CML: "Test, Test, Test"
But Geoff is right, that would be an expensive test. Testing IMAX 65mm/15perf, Magellan 65mm as well as those electronic acquisition cameras would certainly help to clarify the point.
In the meantime, a quick and simple test could be to just shoot single frames with a 6x6 camera loaded with Vision3 65mm film stock (cut down to approx. 61mm).

 

There comes a point when better digital cameras are needed to tell the stories that we want to tell on Giant Screens.

 

Who is 'we want' ?

IMHO, here's a great example of documentary shot on 65mm film.



All the best,
Daniel Henríquez Ilic
Film Cinematography
DI Post-Producer
Technical Consultant
+56 975543323


Re: 9x7 Large Format Camera Announced

Daniel Henríquez-Ilic <dhisur@...>
 

As you may know, about 90% of IMAX films are in 3D. How do you mount two of those side-by-side or on beam-splitter and what is genlock like?

IMAX 3D-30 camera does expose 65mm/30perf. (Left and Right simultaneously). This was used also in space for the documentary about Hubble. Stunning footage.

There was also another previous IMAX 3D camera.

How about drone, underwater housing, gimbal, or high frame rates?

Drone? It can be done with an Arri 35mm camera, as exhibited at Cine Gear Los Angeles 2018. Could a Magellan 65mm camera be used on drone? Eventually, I guess.  What about an IMAX camera? Probably not. However aerial footage is not limited to drone :-) I have seen impressive aerial footage shot with IMAX camera. 

AFAIK, some IMAX camera(s) can be used with underwater housing.

I agree that those cameras are too bulky and heavy to be used on a gimbal.

Regarding HFR, AFAIK, 48 fps has been achieved on a IMAX film camera. For higher speed such as 150 FPS an Arriflex 435 with low speed Vision3 stock could be an alternative.


 

How much does it cost per hour of footage, including processing?

Sure, highest image quality comes at a cost. Interestingly enough, 65mm film is making a come back (well, film in general, as analog audio).  The 65mm raw stock is now even available at B&H :-)  


Between 60 ~ 70 % of what audiences want to watch on Giant Screens is underwater.

What is the source for this argument?

How do you change the film load after you run out (after 2 minutes) when camera is in an underwater housing?

It is a different style/method of shooting.
Different logistics too.

Think about one roll of photographic film with 12 exposures compared to a digital photo camera that can store hundreds of pictures in a little memory card. 
A portrait session can be resolved with only 1 roll of film of 12 exposures.
What about portraits on 4x5 inch sheet film? Stunning, especially with a Cooke lens for 4x5 inch (Large Format).

What optics do you use, considering flat underwater port resolves about 1K, particularly when filming in 3D?

 

Regarding optics for IMAX, AFAIK, this was mostly Carl Zeiss adapted by IMAX.

The Magellan camera should accept a broader range of glass.



Sorry, I’m just struggling to see the benefits you are suggesting there may be there.


Per CML: "Test, Test, Test"
But Geoff is right, that would be an expensive test. Testing IMAX 65mm/15perf, Magellan 65mm as well as those electronic acquisition cameras would certainly help to clarify the point.
In the meantime, a quick and simple test could be to just shoot single frames with a 6x6 camera loaded with Vision3 65mm film stock (cut down to approx. 61mm).

 

There comes a point when better digital cameras are needed to tell the stories that we want to tell on Giant Screens.

 

Who is 'we want' ?

IMHO, here's a great example of documentary shot on 65mm film.



All the best,
Daniel Henríquez Ilic
Film Cinematography
DI Post-Producer
Technical Consultant
+56 975543323


Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

> This is my take: per CML "Good enough is not good enough". Especially when talking about IMAX.

> Not as good as a true IMAX system, but still 65mm film: no CFA, no debayering: real film, not a video signal :-)

> https://logmar.dk/magellan-65mm/

 

As you may know, about 90% of IMAX films are in 3D. How do you mount two of those side-by-side or on beam-splitter and what is genlock like?

How about drone, underwater housing, gimbal, or high frame rates?

 

How much does it cost per hour of footage, including processing?

Between 60 ~ 70 % of what audiences want to watch on Giant Screens is underwater.

How do you change the film load after you run out (after 2 minutes) when camera is in an underwater housing?

What optics do you use, considering flat underwater port resolves about 1K, particularly when filming in 3D?

 

Sorry, I’m just struggling to see the benefits you are suggesting there may be there.

 

There comes a point when better digital cameras are needed to tell the stories that we want to tell on Giant Screens.

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

 

 


Re: 9x7 Large Format Camera Announced

Daniel Henríquez-Ilic <dhisur@...>
 

This is my take: per CML "Good enough is not good enough". Especially when talking about IMAX.

Here's an indie alternative on 65mm.
Not as good as a true IMAX system, but still 65mm film: no CFA, no debayering: real film, not a video signal :-)

Best regards,
Daniel Henríquez Ilic
Film Cinematography
Post-Production Director
Technical Consultant
Santiago de Chile


Re: 9x7 Large Format Camera Announced

Riza Pacalioglu
 

Pawel

 

Thank you for taking time to answer my question in detail. It is very much appreciated. I am sure it will help everyone to understand the camera better.

 

 

Riza Pacalioglu B.Sc. M.Sc. M.A.

Technical Supervisor & Producer

South of England

 


Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

> Is there a reason why you have not used mathematically lossless compression? That normally reduces the data rate by up to 25% without loss of image fidelity.

 

Yes, there are good reasons why we don’t do this.

 

When designing a specialised camera for a particular application, it is important to focus on the features that are most beneficial for the target application. Sometimes they are mutually exclusive. When you have a stream of sensor data coming off the FPGA at 6GB/s and a fixed amount of processing power to deal with it, you need to prioritise what is of most benefit.

This stream of data needs to be processed in real time and:

 

1. Reliably written to solid state memory without dropping frames

2. Debayered and sent to the monitoring pipeline for display and framing

3. Debayered and sent to the monitoring pipeline in a magnified view for critical focusing, both in stand-by and during recording (most other cameras can't do this)

4. Processed and displayed in an advanced and very precise histogram for critical exposure monitoring in real time, in stand-by and during recording (again, most other cinema cameras don't do this either)

5. Display precise and fully configurable "traffic lights", further aiding precise exposure adjustment

6. Provide real-time instrumentation of all camera pipelines (no other camera does this, AFAIK)

7. Generate and embed a debayered preview stream so that clips can be played in-camera on set

8. Provide and save diagnostic information

9. Provide specialised framing aids and overlays

10. Capture and apply metadata including time code, camera settings, black shading and flat field correction

11. Manage the colour profile in the display pipeline as well as in the recording pipeline

12. Monitor and display additional telemetry information such as temperatures of various components, storage space, etc.

13. Do all of the above in triggered and genlocked modes of operation at any frame rate up to 70 fps at full resolution (another industry first)

 

As you can see, we opted to provide other, unique features not available in most digital cinema cameras instead of compression, and this was only possible because we didn't offer compression.

 

Unpacking and compressing, say, 6GB/s of data is very expensive in terms of the processing power required. So much so that I couldn't achieve it in real time on an RTX 6000 card, and no other camera or portable device can achieve this (to my knowledge).
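For a sense of scale, here is a back-of-the-envelope sketch in Python of where a figure in that range comes from, using the 65 Megapixel / 12-bit / 70 fps numbers quoted elsewhere in this thread (the exact result depends on frame rate and on whether GB or GiB is meant):

# Rough uncompressed data-rate estimate for 12-bit packed RAW.
photosites  = 65e6      # ~65 Megapixels per frame
bits_per_ps = 12        # 12-bit packed samples
fps         = 70        # maximum full-resolution frame rate

bytes_per_frame = photosites * bits_per_ps / 8     # ~97.5 MB per frame
rate_gb_s       = bytes_per_frame * fps / 1e9      # ~6.8 GB/s sustained

print(round(bytes_per_frame / 1e6, 1), "MB/frame,", round(rate_gb_s, 1), "GB/s")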

 

It is a choice between “insanely” fast GPU and “insanely” fast I/O. We opted for the latter.

 

We didn't feel there was a significant benefit to recording a compressed format. It is not a camera designed for shooting regular docos, news, live events, interviews, dramas or even entire feature films. There are many great cameras currently available on the market that serve this purpose very well. Again, the 9x7 has a very specific purpose: IMAX, Giant Screen, VFX and VR. For everything else there are better choices.

 

Last, but not least, there are compatibility and reliability issues, as we have all experienced with some newly released (and some older) cameras that decided to use their own proprietary compression formats. This forces NLE systems to continually update support for these formats, and even those who make both NLEs and cameras can't always get it right.

 

The 9x7 delivers industry-standard uncompressed CinemaDNG and requires more I/O bandwidth but saves CPU and GPU time for other functions in NLE systems. Compressed formats are not necessarily better or more efficient, but just different.

 

Having said all that, the 9x7 is fully configurable, so much so that it can actually record compressed formats like other cameras do, just not at full 65 Megapixels @ 70 fps.

 

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

 


Re: 9x7 Large Format Camera Announced

Riza Pacalioglu
 

The camera transfers and records 12-bit packed uncompressed RAW. The CinemaDNG files that I shared are derived from the camera RAW and are unpacked to 16 bits, and therefore larger. The RAW can be unpacked into smaller containers, if required.

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.  

 

Thank you, Pavel.

 

Is there a reason why you have not used mathematically lossless compression? That normally reduces the data rate by up to 25% without loss of image fidelity. This then corresponds to less time required to copy the data from the camera. On set, such a seemingly small time saving can translate into large production savings.

 

All the best

 

 

Riza Pacalioglu B.Sc. M.Sc. M.A.

Technical Supervisor & Producer

South of England

 

 


Re: 9x7 Large Format Camera Announced

Pawel Achtel, ACS
 

> What was the reason for not going down such a tried and tested route, but instead pushing the limits of available data interfaces?

 

Hi Riza,

 

Sorry I missed the question earlier. The camera transfers and records 12-bit packed uncompressed RAW. The CinemaDNG files that I shared are derived from the camera RAW and are unpacked to 16 bits, and therefore larger. The RAW can be unpacked into smaller containers, if required.
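For anyone curious what that unpacking step looks like, here is a minimal Python sketch. It assumes a plain packing of two 12-bit samples into three bytes; the camera's actual packing layout may differ:

import numpy as np

def unpack_12bit_to_16bit(packed: bytes) -> np.ndarray:
    # Unpack 12-bit samples (2 samples per 3 bytes, assumed layout) into uint16 values.
    b = np.frombuffer(packed, dtype=np.uint8).astype(np.uint16).reshape(-1, 3)
    first  = (b[:, 0] << 4) | (b[:, 1] >> 4)        # sample 0: byte0 + high nibble of byte1
    second = ((b[:, 1] & 0x0F) << 8) | b[:, 2]      # sample 1: low nibble of byte1 + byte2
    return np.column_stack((first, second)).ravel()

# Two 12-bit samples, 0xABC and 0x123, packed into three bytes:
print([hex(v) for v in unpack_12bit_to_16bit(bytes([0xAB, 0xC1, 0x23]))])   # ['0xabc', '0x123']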

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

 


Re: 9x7 Large Format Camera Announced

Riza Pacalioglu
 

>> next, bit-depth of A/D conversion ?

 

> Simple, but loaded question 🙂

Most sensors with small pitch photosites do not benefit from more than 12-bit A/D. […] So, the simple answer: 12

 

This was asked but didn't get a reply. If the captured data is 12-bit, why is the recorded data 16-bit? By reducing the recorded data to 12-bit and applying mathematically lossless compression you should almost halve the data rate.
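Roughly, the arithmetic behind that claim, sketched in Python (the per-frame size is taken from the ~65 Megapixel figure discussed in this thread, and the lossless compression ratio is an assumed ~25%):

# Approximate per-frame sizes for a ~65-Megapixel frame, for illustration only.
photosites     = 65e6
unpacked_16bit = photosites * 16 / 8 / 1e6     # 130.0 MB per frame
packed_12bit   = photosites * 12 / 8 / 1e6     # 97.5 MB per frame (25% smaller)
lossless_guess = packed_12bit * 0.75           # ~73 MB if lossless coding saves a further ~25%

print(round(lossless_guess / unpacked_16bit, 2))   # -> 0.56, i.e. close to half the 16-bit rate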

 

What was the reason for not going down such a tried and tested route, but instead pushing the limits of available data interfaces?

 

And making data transfers a lot longer?

 

 

Riza Pacalioglu B.Sc. M.Sc. M.A.

Technical Supervisor & Producer

South of England

 

 

 

From: cml-raw-log-hdr@... <cml-raw-log-hdr@...> On Behalf Of Pawel Achtel ACS via cml.news
Sent: 15 September 2020 06:09
To: cml-raw-log-hdr@...
Subject: Re: [cml-raw-log-hdr] 9x7 Large Format Camera Announced

 

Hi Mike Nagel and thank you for your questions.

  • can you share the raw color gamut of the camera and/or at least the gamut that the image data gets mapped into ?

 

The samples that are on the website are only mapped to REC 709. They can be transformed to a wider gamut in Resolve, but a more accurate way would be to attach a profile corresponding to the target colour space. I don't have an example readily available.

 

One particular aspect of WCG is that the CFA on the 9x7 has excellent primary colour filtering. Many high-end cameras show blue (and sometimes green) channel response in the near-IR spectrum. Some benefit from a more aggressive IR cut, some just show deep red as magenta. A similar phenomenon often happens in near UV, where blue colours are contaminated with red photosite response: blue colours are sometimes skewed towards magenta.

A very distinct feature of the CFA in the 9x7 is that this doesn't happen, at least not to the extent that it is present in most other cameras. In fact, I removed the IR cut filter (just for testing) to see how my blue tones would be affected by near-IR contamination. They weren't. One of the sample shots was shot in full sunlight with no IR filter at all.

 

As long as colours are well discriminated by the CFA, wide gamut profiles can easily be mapped to any desired colour space. The 9x7 workflow allows this by separating the raw image data from the camera colour profile (using standard DCP colour profiles and software tools to manage them).


> next, bit-depth of A/D conversion ?

 

Simple, but loaded question 🙂

Most sensors with small-pitch photosites do not benefit from more than 12-bit A/D. Only a select few with very large pixels (and large FWC) benefit from more; otherwise you are digitising noise. Most modern high-resolution sensors have 12-bit A/D (some even 10-bit) because the noise floor is almost always well above that level. Some DSLRs do indeed have 14-bit A/D (and feature high FWC with large photosites), like the Sony A7S, but it is debatable whether this is actually of any benefit. It is clear that the bottom two bits are pretty much noise even for very large pixel pitch sensors.

So, the simple answer: 12
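The usual back-of-the-envelope behind that answer, as a Python sketch; the full-well and read-noise numbers are assumed examples, not the 9x7's actual sensor figures:

import math

# Useful bit depth ~ log2(full-well capacity / read noise): the number of
# distinguishable levels between the noise floor and saturation.
def useful_bits(full_well_electrons, read_noise_electrons):
    return math.log2(full_well_electrons / read_noise_electrons)

print(round(useful_bits(15000, 2.5), 1))   # small-pitch photosite: ~12.6 bits -> 12-bit A/D is a good match
print(round(useful_bits(90000, 5.0), 1))   # very large photosite:  ~14.1 bits -> could justify 14-bit A/D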

> fully lossless raw storage and/or codec involved ? if so, which one ?

 

No codec. The camera records uncompressed RAW.

> what are the requirements to play back these files 24p in post ?

 

Post is pretty straightforward (for 9k x 7K and lower output) and, as long as you have fast I/O, could be achieved without pre-rendering.

  • meaning: do u have a new codec in play like BMD on the 12K cam that allows 24p playback on even medium specc'd systems ?

I can’t comment on BMD as I was not able to play or even edit BMD sample footage because Resolve would crash 10 out of 10 times that I tried.

The 9x7 promo video was edited on a mid-range notebook using DaVinci Resolve.

We will add in-camera playback soon (working on it, it is tricky).

  • next, what is the file format and/or is there a SDK available ? support in Resolve, Nucoda, Baselight, Nuke ?

No SDK is required. The workflow produces compliant DNG RAW files with all the metadata, including frame rate, shutter speed, time code, etc., and these can be ingested directly into any NLE system.

  • are we forced to use ur proprietary sw ? can we export to EXR from there ?

The files are not proprietary, they are standard DNG and are available for anyone to try and play with: http://achtel.com/9x7/sample.htm

Camera and lens settings have been provided, and the EXIF tags should be accurate too.

A couple of disclaimers:

The frame rate has not yet been fully tagged in the DNG samples provided. This has been implemented very recently.

The Tassie Devil shot has some FPN due to black shading not being applied (it was captured with an engineering sample sensor, which I no longer have, so I can't obtain a black frame).

  • have u run a color separation test, and if so can u share the results ?

The last sample is colour chart: http://achtel.com/9x7/sample.htm

 

Monochromator tests will be available soon. It's been a busy few months. 🙂

 

Kind Regards,

 

Pawel Achtel ACS B.Eng.(Hons) M.Sc.

“Sharp to the Edge”

 

ACHTEL PTY LIMITED, ABN 52 134 895 417

Website: www.achtel.com

Mobile: 040 747 2747 (overseas: +61 4 0747 2747)

Mail: PO BOX 557, Rockdale, NSW 2216, Australia

Address: RA 913 Coles Bay Rd., Coles Bay, TAS 7215, Australia

Location: S 42° 0'14.40"S, E 148°14'47.13"

Email: Pawel.Achtel@...

Facebook: facebook.com/PawelAchtel

Twitter: twitter.com/PawelAchtel

Skype: Pawel.Achtel


Re: 9x7 Large Format Camera Announced

Geoff Boyle
 

Yeah and we still have all the sponsors we had a year ago.
Oh, hang on, we don't. 
Any hugely costly ventures like this will have to wait.
The LED tests were more than we can afford this year.
We've made savings because of not going to the BSC show, NAB, CineGear or INC, and it looks like Camerimage is out as well.
Even with those savings CML is still about $12,000 down on this time last year.

So, only small very cost controlled testing for the foreseeable future and that will be testing aimed at the majority of CML members and not esoteric projects 😀



On Wed, 16 Sep 2020, 17:24 Daniel Henríquez-Ilic, <dhisur@...> wrote:
Maybe it is time for CML to test IMAX 65mm 15/perf (through DFT OXSCAN) and different electronic acquisition cameras such as this 9x7 camera.

An interesting feature of the 9x7 camera is that it has different mount possibilities and uncompressed output. 
All in all, this 9x7 electronic acquisition camera looks smarter than other electronic acquisition tools; but the samples provided just look like what they are: a video image (electronic colors, a different response to the light...). It is just so evident by looking at the images, and these are not the image characteristics that I expect from a true IMAX movie.

Regarding sensitivity of film; it goes up to 500T or 250D with current Vision3 emulsions; anything beyond is push-processing,
so basically a modification of the gamma that is obtained through development: it doesn't become more sensitive; since sensitivity of a film emulsion is defined at its manufacture.
As far as I know, the maximum speed that has been achieved is ISO 800, anything beyond is through push processing (such as Ilford Delta 3200 for example).

So, yes (let's say at 24 images per second), current electronic acquisition cameras are more sensitive than film. Right now this is the case.
However, if lighting is used in interiors, or if shooting in daylight exteriors, film usually performs extremely well. What about a huge amount of light?
I shot a project in the desert with 50 ASA Daylight stock (5203). When metering (without any ND or polarizer in consideration), mid-grey was at f/16 (assuming 24 fps and a shutter angle of 180 degrees).
The result is just great: chemical color quality (not electronic colors), true wide exposure latitude, continuous textures and gradients, an overall real film look; and that's just 35mm.
Most importantly, as it is a documentary, where skin tones and fabrics need to be rendered extremely accurately, it looks so close to what I saw. The color charts (Kodak and X-Rite) helped me verify this precise point, as I shot charts at the beginning of every roll.

Regarding IMAX film projection, the few films I saw at La Géode (La Villette, Paris) in the 80's or in Montreal in 2002 are far better in image quality than any "digital IMAX", which just looks fake to me.
Image Maximum (IMAX) was developed as a 65mm/15 perf. format for image acquisition and a 70mm/15 perf. format for projection. Even though this was developed 50 years ago (and one of the first films, "Tiger Child", was presented at Expo '70 in Japan),
IMHO this is still the worldwide standard for the highest image quality in a theatre.

All the best,
Daniel Henríquez Ilic
Film Cinematography
DI Post-Producer
Santiago de Chile



Re: 9x7 Large Format Camera Announced

Daniel Henríquez-Ilic <dhisur@...>
 

Maybe it is time for CML to test IMAX 65mm 15/perf (through DFT OXSCAN) and different electronic acquisition cameras such as this 9x7 camera.

An interesting feature of the 9x7 camera is that it has different mount possibilities and uncompressed output. 
All in all, this 9x7 electronic acquisition camera looks smarter than other electronic acquisition tools; but the samples provided just look like what they are: a video image (electronic colors, a different response to the light...). It is just so evident by looking at the images, and these are not the image characteristics that I expect from a true IMAX movie.

Regarding sensitivity of film; it goes up to 500T or 250D with current Vision3 emulsions; anything beyond is push-processing,
so basically a modification of the gamma that is obtained through development: it doesn't become more sensitive; since sensitivity of a film emulsion is defined at its manufacture.
As far as I know, the maximum speed that has been achieved is ISO 800, anything beyond is through push processing (such as Ilford Delta 3200 for example).

So, yes (let's say at 24 images per second), current electronic acquisition cameras are more sensitive than film. Right now this is the case.
However, if lighting is used in interiors, or if shooting in daylight exteriors, film usually performs extremely well. What about a huge amount of light?
I shot a project in the desert with 50 ASA Daylight stock (5203). When metering (without any ND or polarizer in consideration), mid-grey was at f/16 (assuming 24 fps and a shutter angle of 180 degrees).
The result is just great: chemical color quality (not electronic colors), true wide exposure latitude, continuous textures and gradients, an overall real film look; and that's just 35mm.
Most importantly, as it is a documentary, where skin tones and fabrics need to be rendered extremely accurately, it looks so close to what I saw. The color charts (Kodak and X-Rite) helped me verify this precise point, as I shot charts at the beginning of every roll.

Regarding IMAX film projection, the few films I saw at La Géode (La Villette, Paris) in the 80's or in Montreal in 2002 are far better in image quality than any "digital IMAX", which just looks fake to me.
Image Maximum (IMAX) was developed as a 65mm/15 perf. format for image acquisition and a 70mm/15 perf. format for projection. Even though this was developed 50 years ago (and one of the first films, "Tiger Child", was presented at Expo '70 in Japan),
IMHO this is still the worldwide standard for the highest image quality in a theatre.

All the best,
Daniel Henríquez Ilic
Film Cinematography
DI Post-Producer
Santiago de Chile

