Re: ProRes "RAW"?

John Brawley

Hi Dom, 

You’re going to see a few opinions, and I’ll give you mine, but I think the main issue is what RAW actually means, and you deciding what it means to you.

Number 1, you’ve said ProRes is 4:2:2, but there are many flavours of ProRes, including 4:4:4, before you even get to the ProRes RAW version.

I think HDMI, when used as HDMI, does indeed mean some kind of encoded video, but what Panasonic and Atomos are doing is hijacking the HDMI cable and using it as a data pipe.  In that mode you can’t plug the HDMI output of the camera into a monitor and expect to get a picture.

Others have made reference to the EVA1, but you’re actually talking about the S1H, their little (not that small, actually) mirrorless 135-format camera, outputting over HDMI to an external recorder that can record to ProRes as well as ProRes RAW.

For me, RAW means that ISO and WB aren’t baked in and are independent of the stored values in the file, and I think there’s also a good argument that you should be able to de-mosaic / debayer the raw data again using a different algorithm or process.
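As a toy illustration of that independence (my own sketch, not any camera’s actual processing), white balance on raw mosaic data is just per-channel gain applied before the debayer, which is why it can always be redone later:

```python
# Sketch: white balance on raw Bayer data is just per-channel gain,
# applied before demosaicing -- nothing is "baked in" to the samples.
# Illustrative only; real raw processors also handle black level, etc.

def apply_wb(rggb_cell, r_gain, b_gain):
    """rggb_cell: (r, g1, g2, b) raw samples from one 2x2 RGGB cell.
    Green is the reference channel, so only R and B are scaled."""
    r, g1, g2, b = rggb_cell
    return (r * r_gain, g1, g2, b * b_gain)

# The same raw cell under two different white balance choices:
daylight = apply_wb((100, 200, 200, 150), r_gain=2.0, b_gain=1.5)
tungsten = apply_wb((100, 200, 200, 150), r_gain=1.25, b_gain=2.0)
```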

I also see a lot of conflation of two different things: the now largely meaningless “4:x:x” ratio that describes how ENCODED video is represented, and the ratio of R, G and B photosites on the sensor. Anyone remember the 8:8:8 colour correction desks, or when Sony tried to get everyone to call HDCAM 22:11:11?

That leads many end users to jump to the idea that encoded video is somehow always “4:2:2”, because that’s also often the ratio of green, blue and red photosites on most Bayer sensors.

But the 4:2:2 numbers don’t represent RGB.  They are RGB values encoded / matrixed with brightness / luminance.  In RGB, each colour channel intrinsically carries its own brightness: the R channel has a pixel value that has its brightness built in.  Encoded video instead pulls the brightness out into a separate luma channel, which is weighted mostly towards green.
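To make that concrete, here’s a minimal sketch (my own, using the Rec. 709 weightings, not any vendor’s actual pipeline) of how RGB becomes luma plus two chroma-difference channels, and how 4:2:2 then halves only the chroma sampling while every luma sample survives:

```python
# Sketch: Rec. 709 RGB -> Y'CbCr, then 4:2:2 horizontal chroma subsampling.
# Illustrative only -- real pipelines differ in matrix, range and filtering.

def rgb_to_ycbcr(r, g, b):
    """Rec. 709 full-range luma/chroma from normalised (0-1) R'G'B'."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # luma: weighted mostly to green
    cb = (b - y) / 1.8556                      # blue-difference chroma
    cr = (r - y) / 1.5748                      # red-difference chroma
    return y, cb, cr

def subsample_422(row):
    """4:2:2: keep every luma sample, average chroma over pixel pairs.
    Assumes an even number of pixels in the row."""
    ycbcr = [rgb_to_ycbcr(*px) for px in row]
    ys = [y for y, _, _ in ycbcr]
    cbs = [(ycbcr[i][1] + ycbcr[i + 1][1]) / 2 for i in range(0, len(ycbcr), 2)]
    crs = [(ycbcr[i][2] + ycbcr[i + 1][2]) / 2 for i in range(0, len(ycbcr), 2)]
    return ys, cbs, crs  # twice as many luma samples as chroma samples
```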

But it doesn’t stop users inferring that you somehow need four times the chroma resolution of your target delivery resolution to have “true” values.  So by that logic you need an 8K sensor to get true 4:4:4 in a 4K delivery file, so that there’s a blue and a red pixel for each colour channel… like that’s what really happens when you de-mosaic an image from Bayer sensor data.
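For anyone who hasn’t looked at what a debayer actually does, here’s a toy nearest-neighbour version (my own sketch, assuming an RGGB pattern; production debayer algorithms interpolate across neighbouring cells and recover far more detail): every output pixel gets full RGB interpolated from nearby photosites, rather than needing a dedicated R, G and B site per pixel.

```python
# Toy demosaic: each 2x2 RGGB cell yields full RGB for all four of its
# pixels by sharing the cell's samples (nearest-neighbour). Real debayer
# algorithms interpolate across neighbouring cells -- this is the crudest
# possible version, just to show the shape of the problem.

def demosaic_rggb(mosaic):
    """mosaic: 2D list of raw values laid out R G / G B per 2x2 cell.
    Returns a 2D list of (r, g, b) tuples at full sensor resolution."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[None] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # average both greens
            b = mosaic[y + 1][x + 1]
            for dy in (0, 1):
                for dx in (0, 1):
                    out[y + dy][x + dx] = (r, g, b)
    return out
```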

Are chroma keys noticeably worse on a 4K file from an 8K camera versus an 8K file from that same 8K camera?  Is it harder to pull a secondary hue in a correction from an 8K file from an 8K camera than from a 4K file derived from that same 8K camera?  Is it literally half as bad (which is what the ratio implies if it’s 4:2:2 vs 4:4:4)?

I just don’t personally see that in the real world, and I think the ratio numbers don’t account for the video encoding transformation that happens when you put RGB sensor data into a video container format like QuickTime, nor for the de-mosaic algorithm, which is usually considered to be 70-80% efficient at worst at interpolating those Bayer values into encoded video.

As others have also said, cDNG is an old codec; though open source, it was in fact very inefficient.  cDNG was really just DNG files with an added sound file and a timecode stamp, and DNG in turn is based largely on TIFF, a very old photographic stills standard.  It was incredibly data-hungry and very inefficient to play back.

Certainly 4K cDNG makes for eye-watering data rates, and now that companies like Blackmagic are making cameras beyond 4K resolution, you can’t expect to keep scaling cDNG up with those sensor resolutions.  Even the most high-end system wouldn’t cope well with an 8K DNG workflow.  It’s just not at all practical.  Or necessary.  RED have shown us that 8K compressed can generate great results.
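A quick back-of-the-envelope (my own numbers, for uncompressed 12-bit Bayer data, ignoring file headers and any lossless packing) shows why scaling uncompressed raw to 8K gets silly fast:

```python
# Back-of-envelope data rates for uncompressed Bayer raw (cDNG-style).
# One sample per photosite, since Bayer data is single-channel.
# Illustrative only; real files add headers and may pack losslessly.

def raw_rate_mb_per_s(width, height, bit_depth, fps):
    bits_per_frame = width * height * bit_depth
    return bits_per_frame * fps / 8 / 1e6  # megabytes per second

rate_4k = raw_rate_mb_per_s(4096, 2160, 12, 24)   # ~319 MB/s
rate_8k = raw_rate_mb_per_s(8192, 4320, 12, 24)   # ~1274 MB/s -- 4x the data
```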

There are many paths to that end result, whatever the companies are calling it.  If we’re going beyond 4K in cameras, then you have to compress in some way.  Some do it in bit depth, some in data compression, some in a combination of the two.

RED have a stranglehold on doing any kind of compressed RAW in-camera.  So you either pony up and license from them, or you find other ways around it.  It seems like they did some kind of IP swap with Sony and Canon: RED’s new Komodo has an RF mount, and Canon can seemingly do RAW in-camera on some of their lines.  Sony and RED sued each other a few years ago, and now you have a Sony RAW in their high-end cameras that seems an awful lot like REDCODE.  (These are just my guesses.)

Apple seemed to challenge the RED IP recently, and it didn’t seem to change the status quo.  I believe this is why no camera has yet shipped with ProRes RAW recording built in, some two years after its launch.  I assume Apple don’t want to pay for the IP licence?  Or the camera manufacturers that want to use ProRes RAW don’t want to pay?  I’ve no idea how that IP side of things works.  It sounded like some kind of settlement had been reached; I guess some cameras with ProRes RAW recording built in will be the evidence of that.

I know from my background with Blackmagic that they wanted to move away from cDNG because it wasn’t sustainable for the beyond-4K imaging they were anticipating.  Also, although it’s an open standard, the way the files were read varied wildly from one application to the next.  I guess the downside of an open standard is that there’s kind of no standard.

What I want from a codec is high bit depth, white balance independence and ISO independence, something that can be played back even on low-end machines and encoded at a variety of data rates.  If those parameters can be adjusted without any meaningful degradation or impact on the image, then I think the goal is achieved; I’m not really fussed about the hows and whys and what it’s called.  As a bonus, it would also be great if the codec were smart enough to carry LUTs and metadata built in, something that cDNG failed to do well.

I shot ProRes 444 for a really long time, and it’s an awfully robust codec when paired with a good camera.  Even though it wasn’t RAW, you could re-white-balance your shot and not take much of a hit.

So coming back to the S1H, it seems to me the best thing is to test that setup.  Shoot some material with it.  See how it holds up.  I always like to compare with a known camera, something you know well, so you can see how the new one does by comparison.  Are you using it alongside another camera?  Then shoot some side-by-sides.

It’s a bit hard to believe all the marketing from anyone these days; you have to see how it fits within your own workflows and approaches to imaging.  You have to find out for yourself.


John Brawley
New Orleans

On Apr 10, 2020, 10:12 AM -0500, Domenic Porcari <domporcari44@...>, wrote:
So Panasonic and Atomos made a lot of noise about how the Panasonic S1H would "be able to record 4K RAW video with the Atomos Ninja V” with this new ProRes RAW codec. Obviously a lot of heads turned (mine included) when they heard that there’s suddenly a “full frame, 4K RAW camera” out there, especially at that price point.

But as I look more into it, I’m wondering if I’m missing something really big, or a new feature or something, or if Panasonic/Atomos are kinda just pulling the wool. Here’s what I (think I) know, please let me know if any of it is inaccurate:

RAW video, in its truest sense, is 4:4:4 video with no chroma subsampling (all pixels providing individual values for Red, Green and Blue, plus luminosity) and has a higher bit depth (higher than 10-bit, depending on the strength of the camera; RED’s have insane 16-bit sensors, I believe).

ProRes, usually, is 4:2:2 video using chroma subsampling (pixels using equations for what the green levels are, based on red and blue, with luminosity calculated separately) and a bit depth of either 8 or 10.

SO, Panasonic is claiming you can record RAW video with the S1H over HDMI to a Ninja recorder. But last I checked, HDMI is limited to 4K 4:2:2 video, so wouldn’t you just be getting 4:2:2 video in a 4:4:4 container? Same goes for the bit depth. The S1H claims to be able to shoot 6K video in 4:2:2 10-bit, but RAW should be at least 12-bit. So wouldn’t the video still be limited to 1,024 steps vs the 4,096 offered by actual 12-bit RAW and above? (We already know that 6K is off the table over the HDMI connection.)

So in summary, it seems that Panasonic and Atomos are just giving you 4:2:2, 10-bit video with a massive file size and calling it RAW.

Am I misinterpreting/misreading something here? Cause that seems kinda messed up…


