
ProRes "RAW"?


Domenic Porcari
 

So Panasonic and Atomos made a lot of noise about how the Panasonic S1H would "be able to record 4K RAW video with the Atomos Ninja V" with this new ProRes RAW codec. Obviously a lot of heads turned (mine included) when they heard that there's suddenly a "full frame, 4K RAW camera" out there, especially at that price point.

But as I look more into it, I'm wondering if I'm missing something really big, or a new feature or something, or if Panasonic/Atomos are kinda just pulling the wool. Here's what I (think I) know; please let me know if any of it is inaccurate:

RAW video, in its truest sense, is 4:4:4 video with no chroma subsampling (all pixels providing individual values for red, green, blue, and luminosity) and has a higher bit depth (greater than 10-bit, depending on the strength of the camera; REDs have insane 16-bit sensors, I believe).

ProRes, usually, is 4:2:2 video using chroma subsampling (chroma values shared between neighboring pixels, with luminance stored per pixel) and uses a bit depth of either 8 or 10 bits.

So, Panasonic is claiming you can record RAW video with the S1H over HDMI to a Ninja recorder. But last I checked, HDMI is limited to 4K 4:2:2 video, so wouldn't you just be getting 4:2:2 video in a 4:4:4 container? Same goes for the bit depth. The S1H claims to be able to shoot 6K video in 4:2:2 10-bit, but RAW should be at least 12-bit. So wouldn't the video still be limited to 1,024 steps vs. the 4,096 offered by actual 12-bit RAW and above? (We already know that 6K is off the table with the HDMI connection.)

So in summary, it seems that Panasonic and Atomos are just giving you 4:2:2, 10-bit video with a massive file size and calling it RAW.

Am I misinterpreting/misreading something here? Cause that seems kinda messed up…

Thanks!

Dom


Keith Putnam
 

On Fri, Apr 10, 2020 at 11:12 AM, Domenic Porcari wrote:
RAW video, in its truest sense, is 4:4:4 video with no chroma subsampling (all pixels providing individual values for red, green, blue, and luminosity) and has a higher bit depth (greater than 10-bit, depending on the strength of the camera.
All "raw" means is that the image data hasn't been permanently debayered into a viewable RGB format. Rather, the image data is stored as photosite brightness values with no inherent color information attached. A debayering algorithm must be applied to the raw data in order to assign color values based on the alignment of the photosites and the Bayer color filter array, and at this stage you have the ability to influence exactly how the information is rendered by applying different debayering algorithms to the data (e.g., compare the same clip of REDCODE RAW footage debayered by all the different algorithms they've released since the RED One debuted).
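The "choose your debayer later" idea above can be sketched in a few lines. This is a toy bilinear interpolation over an RGGB mosaic written with NumPy only, purely illustrative, not any vendor's actual algorithm:

```python
import numpy as np

def box3(a):
    """Sum each pixel's 3x3 neighbourhood (zero-padded at the edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(mosaic):
    """Toy bilinear debayer for an RGGB mosaic: one brightness value per
    photosite in, a full RGB triplet per pixel out."""
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    masks = {
        "R": (rows % 2 == 0) & (cols % 2 == 0),
        "G": (rows % 2) != (cols % 2),
        "B": (rows % 2 == 1) & (cols % 2 == 1),
    }
    # At each pixel, average the known samples of each colour within the
    # surrounding 3x3 window; at a pixel's own colour site this leaves
    # the original sample untouched.
    planes = [box3(mosaic * m) / box3(m.astype(float)) for m in masks.values()]
    return np.stack(planes, axis=-1)

# Example: demosaic a small synthetic mosaic.
gray = np.arange(36, dtype=float).reshape(6, 6) / 35
rgb = demosaic_bilinear(gray)  # shape (6, 6, 3)
```

Swapping a different interpolation into `demosaic_bilinear` is exactly the freedom Keith describes: the raw samples stay untouched, only the rendering changes.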

Apple's whitepaper about ProRes RAW is available here: https://www.apple.com/final-cut-pro/docs/Apple_ProRes_RAW.pdf

ProRes RAW is a variable bitrate compressed RAW format, and to quote their own whitepaper: "ProRes RAW data rates benefit from encoding Bayer pattern images that consist of only one sample value per photosite. Apple ProRes RAW data rates generally fall between those of Apple ProRes 422 and Apple ProRes 422 HQ, and Apple ProRes RAW HQ data rates generally fall between those of Apple ProRes 422 HQ and Apple ProRes 4444".

Basically the reason they created ProRes RAW was to leverage the market penetration of Final Cut Pro by coding an optimized debayering algorithm which only works in Apple's post software.

Keith Putnam
Local 600 DIT
New York City


Stuart English
 

It depends on your definition of RAW. What you describe as RAW sounds more like non-white-balanced RGB.

True 4K (3.8K) RAW means you have 3840 x 2160 (about 8 million) samples of unprocessed camera sensor data. Unlike traditional in-camera recording methods, this has not yet been processed to RGB, which expands the data to 3 x 3840 x 2160 (about 25 million) values, nor has the typical color-space conversion and subsample to YCbCr 4:2:2 been performed, which reduces that to 2 x 3840 x 2160 (about 17 million).

So RAW is a useful technique that allows higher bit depths and/or larger frame sizes to be recorded than if you record RGB or YCbCr. The trade-off is that you now need to do more image processing in post-production. In shorthand, RAW vs. RGB vs. YCbCr is 4 vs. 4:4:4 vs. 4:2:2.
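Stuart's sample counts are easy to verify with quick arithmetic. This is a back-of-envelope check at UHD 3840x2160, treating each stored data word as one "sample" (my numbers, not any vendor's spec):

```python
# Samples (data words) per frame at UHD 3840x2160.
W, H = 3840, 2160

raw_bayer = W * H          # one photosite value per pixel location
rgb_444   = 3 * W * H      # R, G and B stored for every pixel
ycbcr_422 = 2 * W * H      # Y for every pixel, Cb/Cr shared between pairs

print(f"raw Bayer : {raw_bayer / 1e6:.1f} M samples")   # ~8.3 M
print(f"RGB 4:4:4 : {rgb_444 / 1e6:.1f} M samples")     # ~24.9 M
print(f"YCbCr 422 : {ycbcr_422 / 1e6:.1f} M samples")   # ~16.6 M
```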



Best regards, 

Stuart English





Domenic Porcari
 

Ok, thanks!  I will give that paper a read!

So with the bit depth in particular, what worries me is trying to stretch a 10-bit image over the 14 stops of dynamic range that the camera claims to offer. With 12-bit raw formats that's not as much of an issue, but it sounds like you're saying that ProRes RAW is still limited to the 10-bit ProRes ceiling, correct?

So does ProRes RAW really offer any substantial benefit to your image or codec other than low/no compression?

Dom



Daniel Rozsnyó
 



On 04/10/2020 04:34 PM, Domenic Porcari wrote:
RAW video, in its truest sense, is 4:4:4 video with no chroma subsampling (all pixels providing individual values for red, green, blue, and luminosity) and has a higher bit depth (greater than 10-bit, depending on the strength of the camera; REDs have insane 16-bit sensors, I believe).

RAW video, which is mostly regarded as being in a Bayer pattern, is in the above convention a 4:0:0 format. There is just one data word (Y) per pixel location, as opposed to 3 at 4:4:4 (either R,G,B or Y,U,V).

So, Panasonic is claiming you can record RAW video with the S1H over HDMI to a Ninja recorder. But last I checked, HDMI is limited to 4K 4:2:2 video, so wouldn't you just be getting 4:2:2 video in a 4:4:4 container? Same goes for the bit depth. The S1H claims to be able to shoot 6K video in 4:2:2 10-bit, but RAW should be at least 12-bit. So wouldn't the video still be limited to 1,024 steps vs. the 4,096 offered by actual 12-bit RAW and above? (We already know that 6K is off the table with the HDMI connection.)

HDMI can also work in a deep-color mode, with 12 bits per component. In 4:2:2 mode, HDMI provides twice the per-pixel data, since for each pixel the transported data contains a Y word and a C word (with Cb and Cr transmitted on even/odd pixels).

Practically all cameras have a 12-bit linear readout/conversion, and that is transformed to a 10-bit transport format (log), which is a sort of lossy compression.
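The 12-bit-linear-to-10-bit-log step Daniel describes can be sketched generically. The curve below is a plain log2-based transfer function I made up for illustration; real log formats (V-Log, S-Log3, etc.) each use their own published formulas:

```python
import math

def lin12_to_log10(x):
    """Compress a 12-bit linear code (0..4095) to a 10-bit code (0..1023)
    with a toy logarithmic curve. Illustrative only, not any real LOG format."""
    # log2(1 + x) maps 0..4095 onto 0..12, spending codes roughly evenly
    # per stop instead of giving half of them to the brightest stop.
    return round(math.log2(1 + x) / 12 * 1023)

def log10_to_lin12(y):
    """Approximate inverse: expand the 10-bit log code back to 12-bit linear."""
    return round(2 ** (y / 1023 * 12) - 1)

# Round-tripping shows where the loss lives: shadow codes survive almost
# exactly, while highlights are quantised more coarsely but with a roughly
# constant *relative* (per-stop) error, which is what keeps it visually benign.
for x in (10, 100, 1000, 4000):
    print(x, "->", lin12_to_log10(x), "->", log10_to_lin12(lin12_to_log10(x)))
```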

There is no limit in HDMI that prevents transferring 6K video at 60 fps in RAW mode. All the limits the Panasonic camera and the Atomos recorder have are artificial marketing limitations, in order to sell more devices, because it makes business sense not to give out everything at the start.

Sorry to open your eyes, but neither Panasonic nor Atomos is playing a fair game here.

And neither of the two is willing to openly discuss the transport format or the issues they face, which just confirms my assumption that it's because of business.


Regards,

Ing. Daniel Rozsnyo
camera developer
Prague, CZ


Daniel Rozsnyó
 



On 04/10/2020 05:30 PM, Keith Putnam wrote:
ProRes RAW is a variable bitrate compressed RAW format, and to quote their own whitepaper: "ProRes RAW data rates benefit from encoding Bayer pattern images that consist of only one sample value per photosite. Apple ProRes RAW data rates generally fall between those of Apple ProRes 422 and Apple ProRes 422 HQ, and Apple ProRes RAW HQ data rates generally fall between those of Apple ProRes 422 HQ and Apple ProRes 4444".

With modern media and plenty of buffering space, there is no longer a need for a constant-bitrate feature.
Also, the market and users tend to prefer a constant-quality approach.


Basically the reason they created ProRes RAW was to leverage the market penetration of Final Cut Pro by coding an optimized debayering algorithm which only works in Apple's post software.

That is not true. My insight here is that the DNG and cDNG group managed by Adobe died / fell apart when Adobe discontinued SpeedGrade and abandoned the RAW video market. There was no proper, open codec to replace cDNG, especially in a compressed form. Blackmagic tried to hack a 12-bit JPEG onto Bayer footage, but they opted for a half-way solution instead of doing it properly.

On the opposite side, the well-thought-out and well-established ProRes foundation needed only a little adjustment/rework to turn it into what is now ProRes RAW.

ProRes RAW is not limited to a single camera vendor.
ProRes RAW is not limited to a single software vendor.

The only software where ProRes RAW does not work is BMD's, because they thought BRAW locked to BMD cameras could be a win. Well, it is not a win.
Everybody who takes their media business seriously would prefer vendor-independent file formats.


Regards,

Ing. Daniel Rozsnyo
camera developer
Prague, CZ


Daniel Rozsnyó
 


On 04/10/2020 05:34 PM, Domenic Porcari wrote:
Ok, thanks!  I will give that paper a read!

So with the bit depth in particular, what worries me is trying to stretch a 10-bit image over the 14 stops of dynamic range that the camera claims to offer. With 12-bit raw formats that's not as much of an issue, but it sounds like you're saying that ProRes RAW is still limited to the 10-bit ProRes ceiling, correct?

ProRes RAW is definitely not using a 10-bit mode like ProRes 422 does.

Most cameras (especially DSLRs) use only a 12-bit linear mode for sensor readout, and with a little cheat on the green filter they can achieve about 12.5 - 13 stops of DR at the expense of a weird gamut coverage.

If we take an EVA1, the sensor is 14-bit linear, and by various transformations that gets "compressed" to a 10-bit transport format (either in AVC/HEVC or on SDI). It would perform slightly better to compress down only to 12-bit, but it seems that this is not possible with Panasonic's implementation of SDI.

From my experience, log compression two bits down (10 to 8, 12 to 10, 14 to 12) gives a really lossless experience (visually).
Four bits down (14 to 10 - Panasonic, or 12 to 8 - Sony) will be noticeable when your grading is more extreme.

So if you find EVA1-originated ProRes RAW footage to be sub-optimal, the 10-bit SDI is the one to blame, not the codec itself.


Regards,

Ing. Daniel Rozsnyo
camera developer
Prague, CZ


Rakesh Malik
 

cDNG's days were numbered long before Adobe blew it off. It was really just a hack to encode raw video without having to pay for a codec license or a lot of R&D, so it's a good choice for a starter codec. With more resolution, though, its low efficiency makes it quite a pain to work with, at least for those who can't afford to spend $2000 a day on hard drives. Most independent productions don't have the budget to deal with massive 4K image sequences -- never mind larger than 4K.

Blackmagic Raw isn't limited to one vendor either. Blackmagic has been working on adding BRAW recording for non-BMD cameras via its newer monitors... and frankly, given the history of ProRes, it's no surprise that it's taken this long to get support for ProRes RAW on Windows other than in Scratch.

The most common mezzanine codecs are ProRes and DNxHR… neither of which is vendor-independent...
-----------------------------




John Brawley
 

Hi Dom, 

You’re going to see a few opinions, and I’ll give you mine, but I think the main issue is what it is that RAW actually means and you deciding what it means to you.

Number 1, you've said ProRes is 4:2:2, but there are many flavours of ProRes, including 4:4:4, before you get to the ProRes RAW version.

I think HDMI when used as HDMI does indeed mean some kind of encoded video, but I think what Panasonic and Atomos are doing is hijacking the HDMI cable, using it as a data pipe. In that mode you can't plug the HDMI output of the camera into a monitor and expect to get a picture.

Others have made reference to the EVA1, but you're actually talking about the S1H, their little (not that small, actually) mirrorless 135-format camera outputting over HDMI to an external recorder that also has an option to record to ProRes as well as ProRes RAW.

For me, RAW means that ISO and WB aren't baked in and are independent of the stored values in the file, and I think there's also a good argument that you should be able to de-mosaic / debayer the raw data again using a different algorithm or process.

I also see a lot of conflation of two values, namely the way ENCODED video is represented by the now-meaningless "4", and then conflating that with the ratio of R, G and B photosites on the sensor. Anyone remember the 8:8:8 colour correction desks, and when Sony tried to get everyone to call HDCAM 22:11:11?

That leads to many end users making a jump to encoded video somehow always being “4:2:2” because that’s also often the ratio of Green, Blue and Red photosites on most Bayer sensors.

But the 4:2:2 numbers don't represent RGB. They are RGB values encoded / matrixed with brightness / luminance values. In RGB, the brightness information is encoded into each colour channel: the R channel has a pixel value that intrinsically carries its brightness. Encoded video instead stores most of the brightness information in the luminance (Y) channel.

But it doesn't stop users inferring that somehow you need four times the chroma resolution of your target delivery resolution to have "true" values. So by that logic you need an 8K sensor to get true 4:4:4 on a 4K delivery file, so that there's a blue and a red pixel for each colour channel… like that's what really happens when you de-mosaic an image from Bayer sensor data.

Are chroma keys noticeably worse on a 4K file from an 8K camera than on an 8K file from an 8K camera? Is it harder to pull a secondary hue on a correction from an 8K file from an 8K camera than from a 4K-derived file from that same 8K camera? Is it literally half as bad (which is what the ratio implies if it's 4:2:2 vs 4:4:4)?

I just don't personally see that in the real world, and I think the ratio numbers don't account for the video encoding transformation that happens when you put RGB sensor data into a video container format like QuickTime, nor for the de-mosaic algorithm, which is usually considered to be 70-80% efficient at worst at interpolating those Bayer numbers into encoded video.

As others have also said, cDNG is an old codec; though open source, it was in fact very inefficient. cDNG was really just DNG files with an added sound file and a timecode stamp. And really, DNG is based largely on TIFF, a very old photographic stills standard. It was incredibly data hungry and very inefficient to play back.

Certainly 4K cDNG makes for eye-watering data rates, and now that companies like Blackmagic are doing more-than-4K-resolution cameras, you can't expect to keep scaling cDNG up with those sensor resolutions. Even the most high-end system wouldn't cope well with an 8K DNG workflow. It's just not at all practical. Or necessary. RED have shown us that 8K compressed can generate great results.

There are many paths to that end result, whatever the companies are calling it. If we're going beyond 4K in cameras, then you have to compress in some way. Some do it in bit depth, some in data compression, or some combination of the two.

RED have a stranglehold on doing any kind of compressed RAW in-camera. So you either pony up and license from them, or you find other ways around it. It seems like they did some kind of IP swap with Sony and Canon. RED's new Komodo has an RF mount, and Canon can seemingly do RAW in camera on some of their lines. Sony and RED sued each other a few years ago, and now you have a Sony RAW that seems an awful lot like REDCODE in their high-end cameras. (These are just my guesses.)

Apple seemed to challenge the RED IP recently and it didn't seem to change the status quo. I believe this is why no camera seems to have ProRes RAW built in some 2 years after its launch. I assume Apple don't want to pay for the IP licence? Or the camera manufacturer that wants to use ProRes RAW doesn't want to pay? I've no idea how the IP side of things works. It sounded like some kind of settlement had been reached; I guess some cameras with built-in ProRes RAW recording will be evidence of that.

I know from my background with Blackmagic that they wanted to move away from cDNG because it wasn't sustainable for the more-than-4K imaging they're anticipating. Also, although it's an open standard, the way the files were read by every single application out there was totally wild and fluctuated. I guess the downside of an open standard is that there's kind of no standard.

What I want from a codec is high bit depth, white-balance independence and ISO independence, playback on even low-end machines, and encoding at a variety of data rates. If these parameters can be adjusted without any meaningful degradation or impact on the image, then I think the goal is achieved; I'm not really fussed about the hows and whys and what it's called. As a bonus it would also be great if the codecs were smart enough to have LUTs and metadata built in, something that cDNG failed to do well.

I shot ProRes 444 for a really long time, and it's an awfully robust codec when paired with a good camera. Even though it wasn't RAW, you could re-white-balance your shot and not take much of a hit.

So coming back to the S1H, it seems to me the best thing is to test that setup. Shoot some material with it. See how it holds up. I always like to compare with a known camera, something you're likely to know well, so you can then compare and see how well it does. Are you using it with another camera? Then shoot some side-by-sides.

It's a bit hard to believe all the marketing from anyone these days; you have to see how it fits within your own workflows and approaches to imaging. You have to find out for yourself.

JB

John Brawley
Cinematographer
New Orleans





Video Assist Hungary
 


Balazs Rozgonyi
CEO - technical director / VA - DIT - 3D
Video Assist Hungary

On 04/10/2020 04:34 PM, Domenic Porcari wrote:
RAW video, in its truest sense, is 4:4:4 video with no chroma subsampling (all pixels providing individual values for red, green, blue, and luminosity) and has a higher bit depth (greater than 10-bit, depending on the strength of the camera; REDs have insane 16-bit sensors, I believe).
On 2020. Apr 10., at 17:43, Daniel Rozsnyó <daniel@...> wrote:
RAW video, which is mostly regarded as being in a Bayer pattern, is in the above convention a 4:0:0 format. There is just one data word (Y) per pixel location, as opposed to 3 at 4:4:4 (either R,G,B or Y,U,V).

Whoa, hold on there. Am I wrong here?

In a Bayer pattern, there are 4 subpixels per pixel. They capture data as Y, but they are already 'colored' by a microfilter.
Debayering assigns RGB values by essentially transforming the 4 x Y into RGB.
So, in my understanding, it's more like a 4:4:4:4 (as in 4Y:4Y:4Y:4Y), or 16:0:0:0 (as in 16Y:0:0:0) if you will, to 4:4:4 (as in 4R:4G:4B) conversion, with the latter being more descriptive (and harder to mix up with the 4R:4G:4B:4Alpha designation).
You don't add more bits in the debayer, as you'd move from 4 x 12 bits per pixel to 3 x 12 bits. Right?

And, I'm not sure about this, but the RAW data rate would be 4096 (H pixels) x 2160 (V pixels) x 12 (bits) x 4 (subpixels) x 24 (fps) ≈ 10.2 Gbps if there were no compression at all?



Daniel Rozsnyó
 

On 04/10/2020 04:34 PM, Domenic Porcari wrote:
RAW video, in its truest sense, is 4:4:4 video with no chroma subsampling (all pixels providing individual values for red, green, blue, and luminosity) and has a higher bit depth (greater than 10-bit, depending on the strength of the camera; REDs have insane 16-bit sensors, I believe).
On 2020. Apr 10., at 17:43, Daniel Rozsnyó <daniel@...> wrote:
RAW video, which is mostly regarded as being in a Bayer pattern, is in the above convention a 4:0:0 format. There is just one data word (Y) per pixel location, as opposed to 3 at 4:4:4 (either R,G,B or Y,U,V).

Whoa, hold on there. Am I wrong here?

In a Bayer pattern, there are 4 subpixels per pixel. They capture data as Y, but they are already 'colored' by a microfilter.
Debayering assigns RGB values by essentially transforming the 4 x Y into RGB.
So, in my understanding, it's more like a 4:4:4:4 (as in 4Y:4Y:4Y:4Y), or 16:0:0:0 (as in 16Y:0:0:0) if you will, to 4:4:4 (as in 4R:4G:4B) conversion, with the latter being more descriptive (and harder to mix up with the 4R:4G:4B:4Alpha designation).
You don't add more bits in the debayer, as you'd move from 4 x 12 bits per pixel to 3 x 12 bits. Right?

And, I'm not sure about this, but the RAW data rate would be 4096 (H pixels) x 2160 (V pixels) x 12 (bits) x 4 (subpixels) x 24 (fps) ≈ 10.2 Gbps if there were no compression at all?

You are wrong by a factor of 4 here. A group of 4 Bayer sensels is not 1 pixel, and it never was.

All current implementations of debayer/demosaic calculate (interpolate) the 2 missing color components for each pixel.
Therefore this process creates 3 times more data than was captured on the sensor, so the result could be called 4:4:4. But if you think of entropy, there is not much added, since the data is made up using linear combinations (usually by a 5x5 filter or something better), unless some fancy content-aware AI demosaic was used.

Input is 4 "Y" Bayer data words; output is 4 x RGB of full color data (12 words). 4:2:2 then reduces this to 8 words.

So RAW is actually the lowest-bitrate data representation, as it effectively does a very strong color subsampling (around, or rather under, a 4:2:0 representation).

This is of course the issue for PRO users, and the reason why you need an 8K Bayer sensor for 4K 4:4:4 delivery, or a 6K Bayer for 4:2:2 delivery. But as most consumer cameras record in 4:2:0 codecs, for such 4K a 4K Bayer sensor is just enough.
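Daniel's correction is easy to check numerically. Treating each DCI-4K pixel location as one 12-bit Bayer sample (no x4 factor), the uncompressed rates work out as follows; this is my arithmetic for illustration, not a published spec:

```python
W, H, BITS, FPS = 4096, 2160, 12, 24

bayer_raw = W * H * BITS * FPS           # one sample per pixel location
rgb_444   = 3 * bayer_raw                # three samples per pixel after demosaic
ycbcr_422 = 2 * bayer_raw                # two samples per pixel

for name, bps in [("Bayer raw", bayer_raw), ("4:4:4", rgb_444), ("4:2:2", ycbcr_422)]:
    print(f"{name:9s}: {bps / 1e9:.2f} Gbit/s uncompressed")
```

The raw stream comes out around 2.55 Gbit/s, with 4:2:2 at twice that and 4:4:4 at three times, which is the "raw is the lowest-bitrate representation" point in numbers.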


Regards,

Ing. Daniel Rozsnyo
camera developer
Prague, CZ


Art Adams
 

In a Bayer pattern, there are 4 subpixels per pixel. They capture data as Y, but they are already 'colored' by a microfilter.

Debayering assigns RGB values by essentially transforming the 4 x Y into RGB.

 

You're describing a process that's been used by some camera manufacturers who will, for example, take a 4K sensor and scale it down to HD by avoiding the processor-intensive debayering process in favor of simply aggregating data from 2x2 clusters of photosites and calling that a pixel. This is not a common practice. Sometimes it's a fast-and-furious way of pushing an HD feed out of a 4K or 8K camera, but it's not recorded that way.

 

So RAW is actually the lowest-bitrate data representation, as it effectively does a very strong color subsampling (around, or rather under, a 4:2:0 representation).

 

I'm not sure this is completely true either. I was told long ago by a color scientist that he can derive three-color information from a single photosite, to a small extent, by (1) looking at what the adjacent photosites are doing, and (2) knowing the transmission properties of the filters that sit on top of each photosite: none is perfect, and they all transmit more spectrum than simply "red," "green" or "blue" (whatever those may really be). There's always some crossover, and that can be used as data.

 

4:4:4, 4:2:2, 4:2:0, etc. are great for describing how data is discarded to save space when being laid off to a file. I've never found them a valid way of describing how an actual sensor captures data.

 

_______________________________________________________
Art Adams
Cinema Lens Specialist
ARRI Inc.
3700 Vanowen Street
Burbank, CA 91505
www.arri.com

aadams@...




Matt Frazer
 

Hello Dom and everyone in this topic,

My name is Matt Frazer and I work for Panasonic and found this thread through a colleague on the Varicam team.

There is only so much I can contribute to this discussion until the S1H firmware that provides RAW output is released, but I will do my best to answer your concerns without giving too much away.

"RAW video, in its truest sense, is 4:4:4 video with no chroma subsampling (all pixels providing individual values for red, green, blue, and luminosity) and has a higher bit depth (greater than 10-bit, depending on the strength of the camera; REDs have insane 16-bit sensors, I believe)."

As Keith, Stuart and Daniel have already stated, we are talking about the capture of the Bayer-pattern values before the image is converted into a viewable image with proper RGB values and subsampling. A minimum bit depth is not a mandate of a RAW file, but it is advantageous to have larger bit-depth files, in particular to maximize the DR of the sensor. It is also counterproductive to have a larger bit depth than the sensor can achieve: if my sensor is only capable of 10 stops of DR and I have a 16-bit file, what am I wastefully encoding into the extra bits that are not needed? It is noise, and that is never good. It is best to provide a solid compromise between the available DR of the sensor and the bit depth of the file. Photo camera manufacturers tend to follow the rule of 1 bit per stop in RAW (an over-14-stop sensor gets a 14-bit file, an over-12-stop sensor gets a 12-bit file), but this is not a hard and fast rule; you can certainly get exceptional images from 12-14-stop sensors with 10-bit RAW files, and you can get a bad image with a 12-stop sensor and a 16-bit file. I guess the point I am making is that ultimately the engineering of the data can mean a lot to how the image looks, and the engineers at the major camera companies are all clever and take pride in giving you the best they can with the capabilities of the sensor they use.
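The "one bit per stop" rule of thumb falls out of how a linear encoding spends its code values: each stop down from clipping has half as many codes as the one above it. A quick count for a 12-bit linear file (my illustration, not a Panasonic figure):

```python
# Count how many linear code values fall inside each stop below clipping,
# for a 12-bit linear raw file. Stop 1 is the brightest stop.
BITS = 12
top = 2 ** BITS  # 4096 code values total

for stop in range(1, 7):
    lo, hi = top // 2 ** stop, top // 2 ** (stop - 1)
    print(f"stop {stop}: codes {lo}..{hi - 1} ({hi - lo} values)")

# Six stops down, only 64 values remain for the whole stop. This is why a
# sensor's deep-shadow stops gain nothing from extra bits if those stops
# are already dominated by noise.
```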

 "SO, Panasonic is claiming you can record RAW video with the S1H over HDMI to a Ninja Recorder. But last I checked, HDMI is limited to 4K 4:2:2 video, so wouldn’t you just be getting 4:2:2 video in a 4:4:4 container? Same goes with the Bit depth. The S1H claims to be able to shoot 6K video in 4:2:2 10-bit, but RAW should be at least 12-bit. So wouldn’t the video still be limited to 1,024 steps vs the 4,096 offered by actual 12-bit RAW and above? (we already know that 6K is off the table with the HDMI connection)"
Yes, Panasonic and Atomos (and Nikon for that matter) are claiming that it is possible to pass a RAW image over an HDMI cable to a Ninja V recorder. I think the confusion here is that you are looking at HDMI as a standard and not as a data cable. You are somewhat correct that one variant of the HDMI standard has the limitation you have indicated (you give no indication whether you mean 1.4, 1.4a, 2.0 or 2.1), but you assume that Atomos has designed the system to work within the HDMI standard... they have not. Now, I am no engineer and am not on the development side of the business, so I have no more information than you do about "HOW" Atomos has accomplished this. What I can tell you is that, at the end of the day, the HDMI cable and connection used on the S1H for delivery of 4K 60p 4:2:2 video require a cable with 18Gbps of data throughput. That means that if you do not use it to carry a video signal, you can pass 18Gbps of whatever you want (data-wise; it can't be used to send frozen yogurt) as long as the receiving device understands and can interpret the data.
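That 18Gbps figure can be checked with back-of-envelope arithmetic. The frame dimensions, bit depth and frame rate below are assumptions for illustration only, not published S1H specifications:

```python
# Back-of-envelope only: is an uncompressed Bayer stream plausible over
# an 18 Gbps link? All figures here are illustrative assumptions.

def raw_gbps(width, height, bits, fps):
    """Uncompressed single-plane Bayer payload in Gbit/s."""
    return width * height * bits * fps / 1e9

payload = raw_gbps(5888, 3312, 12, 30)   # hypothetical 5.9K 12-bit 30p
print(f"{payload:.1f} Gbps of raw Bayer data vs an 18 Gbps HDMI link")
```

Roughly 7 Gbps of payload fits comfortably inside an 18Gbps pipe, which is why treating HDMI as a data cable is not far-fetched.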
As I mentioned earlier, RAW can be whatever bit depth the equipment will allow, and 10-bit can absolutely be "RAW". My question to you: in what publication has Panasonic ever stated what the bit depth of the S1H will be for RAW? ProRes RAW can support greater than 10-bit RAW files, so how do any of us know what the bit depth will be until the full specs of the camera are released?

Why is 6K off the table for the S1H? Panasonic has publicly stated the RAW will be 5.9K. While that is technically not 6K, it is close enough, I think.

https://www.dpreview.com/news/1697265394/atomos-16-bit-raw-capture-sony-fx9-new-info-s1h-prores-raw-4k-update 

Sorry my first post is so long, I will do better in the future,

Matt

**Who will remember in future that CML requires you to sign your posts with your full name, job/title, and location.  Letting this one through as it's your first post and you did introduce yourself off the top**


Domenic Porcari
 

John,  

Thank you so much for the in depth response!

Dom


On Apr 10, 2020, at 1:58 PM, John Brawley <john@...> wrote:


Hi Dom, 

You’re going to see a few opinions, and I’ll give you mine, but I think the main issue is what it is that RAW actually means and you deciding what it means to you.

Number 1, you’ve said ProRes is 4:2:2, but there are many flavours of ProRes, including 4:4:4, before you even get to the ProRes RAW version.

I think HDMI, when used as HDMI, does indeed mean some kind of encoded video, but I think what Panasonic and Atomos are doing is hijacking the HDMI cable and using it as a data pipe. In that mode you can’t plug the HDMI output of the camera into a monitor and expect to get a picture.

Others have made reference to the EVA1, but you’re actually talking about the S1H, their little (not that small, actually) mirrorless 135-format camera, outputting over HDMI to an external recorder that can record to ProRes as well as ProRes RAW.

For me, RAW means that ISO and WB aren’t baked in and are independent of the stored values in the file, and I think there’s also a good argument that you should be able to de-mosaic / debayer the raw data again using a different algorithm or process.
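The WB half of that definition can be sketched in a few lines. The gain numbers here are made up for illustration: the point is that un-baked raw values can be re-balanced with any per-channel gains after the fact, whereas a baked file has already clipped whatever the original gains pushed past full scale.

```python
# Sketch of white-balance independence: starting again from the same
# raw photosite values, a completely different WB loses nothing.
# Gain values are illustrative, not any camera's actual multipliers.

FULL_SCALE = 4095                      # 12-bit raw

def apply_wb(raw_rgb, gains):
    """Apply per-channel white-balance gains, clipping at full scale."""
    return tuple(min(FULL_SCALE, round(v * g)) for v, g in zip(raw_rgb, gains))

raw = (1000, 2000, 1500)               # R, G, B photosite values
warm = apply_wb(raw, (2.0, 1.0, 1.4))  # one interpretation
cool = apply_wb(raw, (1.2, 1.0, 2.2))  # a different one, same source data
print(warm, cool)
```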

I also see a lot of conflation of two things: the way ENCODED video is represented by the now largely meaningless “4:x:x” ratios, and the ratio of R, G and B photosites on the sensor. Anyone remember the 8:8:8 colour correction desks, and when Sony tried to get everyone to call HDCAM 22:11:11?

That leads to many end users making a jump to encoded video somehow always being “4:2:2” because that’s also often the ratio of Green, Blue and Red photosites on most Bayer sensors.

But the 4:2:2 numbers don’t represent RGB. They are RGB values encoded / matrixed with brightness / luminance. In RGB, each colour channel has its brightness encoded into it: the R channel has a pixel value that intrinsically carries its brightness. Encoded video instead stores the brightness information separately, in the luma channel.

But it doesn’t stop users inferring that you somehow need four times the chroma resolution of your target delivery resolution to have “true” values. By that logic you need an 8K sensor to get true 4:4:4 in a 4K delivery file, so that there’s a blue and a red pixel for each colour channel... as if that’s what really happens when you de-mosaic an image from Bayer sensor data.

Are chroma keys noticeably worse on a 4K file from an 8K camera than on an 8K file from an 8K camera? Is it harder to pull a secondary hue on a correction from an 8K file from an 8K camera than from a 4K file derived from that same 8K camera? Is it literally half as bad (which is what the ratio implies, if it's 4:2:2 vs 4:4:4)?

I just don’t personally see that in the real world, and I think the ratio numbers don’t account for the video encoding transformation that happens when you put RGB sensor data into a video container format like QuickTime, or for the de-mosaic algorithm, which is usually considered to be 70-80% efficient at worst at interpolating those Bayer values into encoded video.
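As a toy illustration of what a de-mosaic actually does (real algorithms such as bilinear or AHD are far more sophisticated than this nearest-neighbour sketch, and the values here are invented):

```python
# Toy demosaic: each 2x2 RGGB quad is expanded so every output pixel
# gets a full RGB triple by borrowing/averaging neighbours. This is the
# sense in which full-resolution RGB is interpolated, not sampled.

def demosaic_quad(r, g1, g2, b):
    """Nearest-neighbour reconstruction of a single RGGB quad."""
    g = (g1 + g2) / 2
    # All four output pixels share the quad's R and B; green varies.
    return [(r, g1, b), (r, g, b),
            (r, g, b), (r, g2, b)]

pixels = demosaic_quad(200, 120, 130, 60)
print(pixels[0])
```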

As others have also said, cDNG is an old format; though open source, it was in fact very inefficient. cDNG was really just DNG files with an added sound file and a timecode stamp, and DNG in turn is based largely on TIFF, a very old photographic stills standard. It was incredibly data hungry and very inefficient to play back.

Certainly 4K cDNG makes for eye-watering data rates, and now that companies like Blackmagic are making cameras with more than 4K resolution, you can’t expect to keep scaling cDNG up with those sensor resolutions. Even the most high-end system wouldn’t cope well with an 8K DNG workflow. It’s just not at all practical. Or necessary. RED have shown us that 8K compressed can generate great results.

There are many paths to that end result, whatever the companies are calling it. If we’re going beyond 4K in cameras, then you have to compress in some way. Some do it in bit depth, some in data compression, some in a combination of the two.

RED have a stranglehold on doing any kind of compressed RAW in-camera. So you either pony up and license from them, or you find other ways around it. It seems they did some kind of IP swap with Sony and Canon: RED’s new Komodo has an RF mount, and Canon can seemingly do RAW in camera on some of their lines. Sony and RED sued each other a few years ago, and now you have a Sony RAW that seems an awful lot like REDCODE in their high-end cameras. (These are just my guesses.)

Apple seemed to challenge the RED IP recently and it didn’t seem to change the status quo. I believe this is why no camera seems to have ProRes RAW recording built in, some two years after its launch. I assume Apple don’t want to pay for the IP licence? Or the camera manufacturer that wants to use ProRes RAW doesn’t want to pay? I’ve no idea how the IP side of things works. It sounded like some kind of settlement had been reached; I guess cameras with ProRes RAW recording built in will be the evidence of that.

I know from my background with Blackmagic that they wanted to move away from cDNG because it wasn’t sustainable for the more-than-4K imaging they’re anticipating. Also, although it’s an open standard, the way the files were read varied wildly from one application to the next. I guess the downside of an open standard is that there’s kind of no standard.

What I want from a codec is high bit depth, white balance independence and ISO independence, playback on even low-end machines, and encoding at a variety of data rates. If those parameters can be adjusted without any meaningful degradation of the image, then I think the goal is achieved; I’m not really fussed about the hows and whys and what it’s called. As a bonus, it would also be great if the codecs were smart enough to have LUTs and metadata built in, something cDNG failed to do well.

I shot ProRes 444 for a really long time and it’s an awfully robust codec when paired with a good camera. Even though it wasn’t RAW, you could re-white balance your shot and not take much of a hit.  

So coming back to the S1H, seems to me the best thing is to test that set up.  Shoot some material with it.  See how it holds up.  I always like to compare with a known camera, something you’re likely to know well and you can then compare and see how well it does by comparison.  Are you using it with another camera ?  Then shoot some side by sides.

It’s a bit hard to believe all the marketing from anyone these days, you have to see how it fits within your own workflows and approaches to imaging.  You have to find out for yourself.

JB

John Brawley
Cinematographer
New Orleans



On Apr 10, 2020, 10:12 AM -0500, Domenic Porcari <domporcari44@...>, wrote:


Domenic Porcari
 

Hi Matt,

Thank you so much for the info!  (I love long answers like this!)  It's super nice to be able to get info like this from someone like you who's personally involved with Panasonic, as I'm continuing to realize that even some of the sources I thought were totally reliable might not have had the whole picture themselves, nor do I blame them, since it's a pretty big picture. So again, thank you for helping me understand more; this is immensely helpful!

Domenic Porcari
Cinema Student
Pittsburgh, PA

On Apr 10, 2020, at 2:29 PM, Matt Frazer <panamatt@...> wrote:



Domenic Porcari
 

While I have you, Matt: Panasonic is currently advertising the Ninja V recording monitor as the RAW recording companion to the S1H. I’ve owned the Ninja Inferno 7” (not the Shogun model) for about a year now, and I’ve been curious as to whether Atomos plans to release any kind of FW update for the Inferno to give it the same ability as the Ninja V.

Are you able to speak at all to this?

Domenic Porcari
Cinema Student 
Pittsburgh PA


On Apr 10, 2020, at 2:29 PM, Matt Frazer <panamatt@...> wrote:



Paul Curtis
 


On 13 Apr 2020, at 16:22, Domenic Porcari <domporcari44@...> wrote:
So in summary, it seems that Panasonic and Atomos are just giving you 4:2:2, 10-bit video with a massive file size and calling it RAW.

I can't say what Panasonic and Atomos are doing, but I can point out that the Sigma fp has possibly the same sensor and will output RAW DNGs to a USB3 SSD (or lower-bit-depth DNGs to an SDXC card internally).

These are real RAW files: uncompressed UHD from the 6K sensor (or a 1:1 crop of it if you prefer).

They are either 12, 10 or 8 bit; as others have pointed out, pretty much all prosumer gear tops out at 12 bit on the sensor side.

But I just want to add that 12-bit RAW is not the same as 12-bit baked or log, because it refers to the data before the debayer. The debayer happens in a large, unbounded colour space, and the resulting images are tonally much, much better; you have two G values per quad as well. Even the 8-bit RAW to an SDXC card is remarkably robust, much more so than baked files I've used from other cameras. Try loading one into Resolve and messing with it.

Also, I'm a huge fan of 'uncompressed': any form of compression at the capture stage is not good IMHO, and the data rates these days are not such a big issue. Plus you can losslessly compress DNGs after capture with SlimRAW. It would be nice to have that kind of compression in camera, but that's not an option at the moment.

So the S1 could well be outputting real RAW; it's not impossible. The HDMI is a data pipe, not an image pipe. It's pretty clever.

cheers
Paul

Paul Curtis, VFX & Post | Canterbury, UK


Daniel Rozsnyó
 


On 4/13/20 6:26 PM, Paul Curtis wrote:
So the S1 could well be outputting real RAW, it's not impossible. The HDMI as a data pipe, not an image pipe. It's pretty clever.


RAW over SDI or HDMI is technically still just an image: it has a resolution, bits per pixel, colour planes and a frame rate.

You can treat it as monochrome data at the acquired native resolution, and you can apply LOG compression (a variable gamma) to it to save a few bits.


A true data pipe (one with 100% reliability - no bit errors) would only be required if you wanted to push out:

    - encrypted data (which is not yet ruled out, seeing all the delays and troubles around HDMI RAW)

    - or compressed data (which is unnecessary, since RAW always has a lower bit-rate than any VIDEO signal, be it 4:2:0, 4:2:2 or 4:4:4).
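The arithmetic behind that last point is simple per-pixel sample counting; at a common sample depth, one Bayer sample per photosite is always less data than any subsampled video:

```python
# Bits per pixel at a common 12-bit sample depth: one Bayer sample per
# photosite vs 1.5, 2 or 3 samples per pixel for 4:2:0 / 4:2:2 / 4:4:4.
BITS = 12
formats = {
    "RAW Bayer": 1.0,   # one colour sample per photosite
    "4:2:0":     1.5,   # Y every pixel, Cb/Cr every 4th
    "4:2:2":     2.0,   # Y every pixel, Cb/Cr every 2nd
    "4:4:4":     3.0,   # Y, Cb, Cr every pixel
}
for name, samples in formats.items():
    print(f"{name}: {samples * BITS:g} bits/pixel")
```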


There are many ways to pack RAW into HDMI (or SDI), yet every company thinks it is super-secret, world-changing know-how. It is not. It's just a matter of what is possible on the transmitting and receiving hardware and software, which were not designed with much flexibility in mind (consumer cams use mass-produced ASICs, whose features are... fixed in the silicon, unlike the more flexible, yet expensive, FPGA approach).


Regards,

Ing. Daniel Rozsnyo
camera developer
Prague, CZ


Daniel Chung
 

As the Atomos rep on here, all I can say is please be patient. Speculating isn’t going to help much. All will become much clearer when the two companies are ready to announce more info. You have to appreciate that we can’t be drawn on sensitive information until the time is right. Do rest assured that ProRes RAW on the S1H is coming and you will get all your answers in due course. In the meantime you can check the original Atomos press release here, which gives you all I am allowed to say on the specifics.


On ProRes RAW in general, others on this thread have pointed to correct information and the Apple white papers. ProRes RAW is absolutely a true RAW format, as the image is not de-bayered in camera. Atomos’ job is to take this RAW data, with as much associated metadata as the camera can supply, and transport it into NLEs and grading software in an efficient manner. ProRes RAW does a fantastic job of this and strikes a good balance between file size, quality and editability on modern computer hardware. What the NLEs then do with that ProRes RAW data is evolving over time. Anyone who remembers the very early days of RAW stills (I started on a Kodak/Canon DCS3, for those who know what that is), or early RAW video, will remember how limited the adjustments were. Take the same files into modern software and the results are totally different. The lesson here is not to judge a format purely on how the data is currently being processed. ProRes RAW is absolutely true RAW; that can be true while it does not yet have all the controls you may wish for at this moment in time. Let the quality of the final results be the judge.

BTW, if you still prefer uncompressed CDNG, we still support that on the Shogun 7 with some cinema cameras. The reality is that it is not widely used, and most users seem to want a compressed RAW option.

I’d like to take this opportunity to thank Geoff for all his efforts here on CML. Perhaps we can make sure he is one of the first to get his hands on the setup in question to evaluate it.

Dan






On Mon, 13 Apr 2020 at 17:26, Paul Curtis <paul@...> wrote:



Bob Kertesz
 

"I’ve owned the Ninja inferno 7” (not the shogun model) for about a year now, and I’ve been curious as to whether Atomos plans to release any kind of FW update for the inferno to give it the same ability as the ninja V?"

Difficult to imagine why they would do that when they want you to buy a new piece of gear if you want new features, and rightly so.

It's still showBUSINESS, not showGIVESHITAWAY of things that took time and money to develop. :-)

-Bob

Bob Kertesz
BlueScreen LLC
Hollywood, California

Engineer, Video Controller, and Live Compositor Extraordinaire.

High quality images for more than four decades - whether you've wanted
them or not.©

* * * * * * * * * *