Topics

SSDs vs Spinning Drives

 

Hello all.

I’m shooting a series of short narrative pieces that, for various reasons, need to be edited on location. We will be shooting with three cameras, and the idea is to be able to offload to a RAID drive that the user can just plug in and get going with. I was suggesting an SSD RAID, but our editor is adamant it needs to be a spinning drive.

I’ve only been able to find a little bit on the web about how SSDs write in blocks. 

Does this make an awful lot of difference, and will the new SSDs (NVMe) be any different?

Many thanks.

Michael

Michael J Sanders: Director of Photography 
  

Mobile: +44 (0) 7976 269818   

Mako Koiwai
 

SSDs are not recoverable

Makofoto, been there, wasn’t able to, s. pas, Ca

Keith Putnam
 

On Thu, Sep 20, 2018 at 10:54 AM, Mako Koiwai wrote:
SSDs are not recoverable
A RAID built from SSDs is just as recoverable as a RAID built from spinning disks. It will also have faster throughput.

As far as the original question, I'm having a hard time coming up with any legitimate reason why the editor should be against an SSD-based drive system.

Keith Putnam
Local 600 DIT
NYC

Mako Koiwai
 

On Thu, Sep 20, 2018 at 8:08 AM Keith Putnam <keith@...> wrote:

A RAID built from SSDs is just as recoverable as a RAID built from spinning disks.


***********

Wonderful News!

I wonder when that became a reality?!

Any recommendations for where, hopefully in the Los Angeles area, I can take it to be recovered?

It wasn’t that long ago that all of the Data Recovery places said it wasn’t possible.


Makofoto, s. pas, ca

Keith Putnam
 

On Thu, Sep 20, 2018 at 11:25 AM, Mako Koiwai wrote:
It wasn’t that long ago that all of the Data Recovery places said it wasn’t possible.
Well, what are we talking about here? Do you have a single SSD which failed and for which there is no redundancy? Or do you have a RAID of a type that is recoverable from uncorrupted slices? RAID 0 is for speed only; you can't recover a corrupted slice from the other slice. RAID 1 allows a corrupted slice to be recovered from the uncorrupted slice (it's really just two complete copies on separate disks). RAID 5 can recover any single slice from the parity data distributed across the other slices. Etc.
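
To make the parity case concrete, here is a toy sketch in Python (purely illustrative - a real controller stripes data and rotates parity across all members, but the XOR arithmetic is the same):

# Toy RAID 5-style parity: any single lost slice can be rebuilt
# by XOR-ing the surviving slices with the parity block.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0 = b"AAAA"                      # slice stored on drive 0
d1 = b"BBBB"                      # slice stored on drive 1
parity = xor_blocks([d0, d1])     # parity stored on drive 2

# Drive 1 dies: recover its slice from the survivor and the parity.
assert xor_blocks([d0, parity]) == d1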

If you have a single SSD which is corrupt, its recoverability is going to depend on the type of corruption and nothing necessarily inherent to the nature of SSD storage.

Keith Putnam
Local 600 DIT
NYC

Feli di Giorgio
 

The one thing I miss about spinning drives is the ‘click of death’. Usually drives that are about to fail will make odd noises, giving you something of a heads-up. Traditional drives can also fail slowly; more than once I have been able to pull data off a drive as it was failing.

SSDs fail instantly, without any warning. It's very binary.

Feli di Giorgio

VFX Apple Inc.
Bay Area, USA

_______________________________________________
Feli di Giorgio - feli2@... - www.felidigiorgio.com





Keith Putnam
 

On Thu, Sep 20, 2018 at 11:46 AM, Feli di Giorgio wrote:
SSDs fail instantly, without any warning. It's very binary.
I highly recommend periodically scanning your SSDs with DriveDX. https://binaryfruit.com/drivedx
I use it at every checkout for which the camera uses SSD media, and I usually catch one or two with bad indicators. Additionally, every so often I run a check on the SSD shuttle drives I use to transport media to post.

Keith Putnam
Local 600 DIT
NYC

Jan Klier
 

On Thu, Sep 20, 2018 at 11:46 AM, Feli di Giorgio wrote:
SSDs fail instantly, without any warning. It's very binary.

The key to safe redundancy is knowing in a timely fashion when something has gone wrong. If the first disk in a RAID 5 or RAID 1 fails while the box sits under a desk or in the server room, make sure you get an email alert. Otherwise the failure may go undetected for weeks and only remind you when the second drive fails :-) An often overlooked aspect.

Also, the other thing is that SSDs have a defined lifetime because of wear leveling. Eventually they will expire, unlike a spinning disk, which can in theory run forever. That margin is pretty big under regular use - the lowest I've seen an SSD's health status go is 80%, after considerable use - but if you have unusual usage patterns it's something to keep an eye on.
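
Both of those checks are easy to automate. A minimal, cron-able sketch in Python (assuming smartmontools is installed; the mail server and addresses are stand-ins, and wear attributes vary by vendor, so treat this as illustrative):

# Sketch: poll SMART health and mail an alert when a drive stops passing.
import smtplib, subprocess
from email.message import EmailMessage

DRIVES = ["/dev/sda", "/dev/sdb"]          # adjust to your array members

def smart_healthy(dev):
    # 'smartctl -H' prints an overall health verdict such as "PASSED".
    r = subprocess.run(["smartctl", "-H", dev], capture_output=True, text=True)
    return "PASSED" in r.stdout

for dev in DRIVES:
    if not smart_healthy(dev):
        msg = EmailMessage()
        msg["Subject"] = f"SMART failure on {dev}"
        msg["From"] = "raid-monitor@example.com"     # hypothetical addresses
        msg["To"] = "you@example.com"
        msg.set_content(f"{dev} no longer reports PASSED - check the array.")
        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)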

Just like backups, it’s only as good as your disaster preparedness and regular fire drills.

Jan Klier
DP NYC

Bob Kertesz
 

SSDs are not recoverable
The vast majority of spinning disk recovery is performed by
disassembling the bad drive in a relatively clean room area, removing
the platters that hold the data, and putting them into a known working
drive. Very rarely is there a problem with the platters themselves
unless there was a severe head crash that physically damaged the
platters - it's almost always the control electronics or the motors that
fail.

SSD failures are always electronic since there are no moving parts - a
failed controller chip or a cold solder joint or something similar. The
memory chips holding the actual data rarely fail, it's the other stuff
that dies. To repair an SSD, a facility has to find and replace the part
of the control electronics that broke, or much more likely remove the
memory chips using a hot air station and put them into a known good SSD
of the same model and brand.

Most recovery places don't offer the service currently because they
don't have the tools, knowledge, or finesse to repair SSDs. But that
doesn't mean it can't be done. As they become ubiquitous, facilities to
recover data off them will spring up, if they haven't already.

In five years, the spinning disk data recovery business will likely have
seen a major downturn, and as always in the tech sector, it will be
adapt or die.

-Bob

Bob Kertesz
BlueScreen LLC
Hollywood, California

DIT, Video Controller, and live compositor extraordinaire.

High quality images for more than four decades - whether you've wanted
them or not.©

* * * * * * * * * *

Bob Kertesz
 

I highly recommend periodically scanning your SSDs with DriveDX.
https://binaryfruit.com/drivedx
That is Mac/OS X-based software, if that's your platform.

If you're on Windows, I have found Hard Disk Sentinel (Pro) to be very
useful for the same sort of testing. Despite the name, it works well on
SSDs as well. There is a free version and a paid Pro version. I have a
multi-machine license and run it on all my machines. No other
affiliation, financial or otherwise, with the company, except that they
are in Hungary, and I am Hungarian.

https://www.hdsentinel.com/

-Bob

Bob Kertesz
BlueScreen LLC
Hollywood, California

DIT, Video Controller, and live compositor extraordinaire.

High quality images for more than four decades - whether you've wanted
them or not.©

* * * * * * * * * *

Daniel Rozsnyó
 

On 09/20/2018 07:03 PM, Jan Klier wrote:
Also the other thing is that SSD have a defined life time because of wear leveling.
There is one more important parameter when talking about SSD backups, and it is a very important factor: the data retention time in the un-powered state.

https://www.anandtech.com/show/9248/the-truth-about-ssd-data-retention


When the drive is powered, it scans itself periodically to detect soft data corruption caused by charge leakage from the cells, and refreshes the affected cells. The longer you keep the drive un-powered, the more of these errors will appear, and once the data is corrupted to an unrecoverable stage, the drive may simply refuse any further operation. I am not sure whether that is a result of errors directly hitting the "bookkeeping" region of the drive, or of firmware bugs, but from the user's perspective the drive ceases operation.

The time varies, and not every manufacturer discloses this information. You can expect a guaranteed period of anywhere from 3 months to 9-12 months for a recent generation of drives. The smaller/newer the cells, the shorter the time before they lose their charge (there is simply less charge to start with). MLC/TLC gets worse, since the cells are analog and have to resolve into 4 or 8 distinct levels. With the advent of modern V-NAND this problem has the same fate as the number of overwrites - we get back to some sane numbers seen previously - at least for a moment. Then QLC, with 16 levels, becomes more sensitive to this again.

If you use the drive for archival, it is good to power it up from time to time (both SSD and HDD) and run a long test over the full capacity (using any tool that can issue that SMART command).
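
That periodic long test can be scripted too; a minimal sketch, again assuming smartmontools is installed:

# Sketch: start a long SMART self-test, then read the result log later.
import subprocess

dev = "/dev/sda"
subprocess.run(["smartctl", "-t", "long", dev])       # kicks off the test
# ...hours later (smartctl prints the drive's own duration estimate):
subprocess.run(["smartctl", "-l", "selftest", dev])   # shows the result log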

And especially for an SSD - put it in a literally cold storage room :-)


Ing. Daniel Rozsnyo
camera developer
Prague, Czech Republic

Eric Wenocur
 

Hello,

Between messages downloading out of sequence, and apparently not seeing the one that started this
thread, I'm interested in some clarification of the original problem. We're talking about a RAID
array made up of SSDs, but what is meant by "recovery"?

As was pointed out, the RAID modes that allow rebuilding (1, 5, 6, etc.) should not require
"recovery" in any laboratory sense. The RAID controller should be able to rebuild them from the
data that's present. I guess if it was a RAID 0, with no data for rebuilding, then it would be the
kind of ugly business that Bob describes below.

As someone else mentioned, even though SSDs have no mechanical failures they do have a read/write
lifespan. But I have to think that if one or more of the drives (or memory chips) reached that
failure point you'd be SOL as far as any kind of recovery. The little data bits are either in
there, or not.


Eric Wenocur
Lab Tech Systems
301-438-8270
301-802-5885 cell


On 9/20/18 12:58 PM, Bob Kertesz wrote:

> SSDs are not recoverable
>
> The vast majority of spinning disk recovery is performed by
> disassembling the bad drive in a relatively clean room area, removing
> the platters that hold the data, and putting them into a known working
> drive. Very rarely is there a problem with the platters themselves
> unless there was a severe head crash that physically damaged the
> platters - it's almost always the control electronics or the motors that
> fail.
>
> SSD failures are always electronic since there are no moving parts - a
> failed controller chip or a cold solder joint or something similar. The
> memory chips holding the actual data rarely fail, it's the other stuff
> that dies. To repair an SSD, a facility has to find and replace the part
> of the control electronics that broke, or much more likely remove the
> memory chips using a hot air station and put them into a known good SSD
> of the same model and brand.

 

Thanks Keith and everyone who has responded.

So far all the communication I’ve had with said editor has been via very short, terse emails, as we are both pretty busy at the moment, so I am really not sure what the issue is. I’ve got a meeting with him shortly, so I will try to find out what his reasons are.

I have spent the past couple of days looking on the web but have struggled to find anything negative - one article I saw yesterday mentioned that one issue with SSDs is that you can't write to each byte, it has to be in a block. But that was from 2014, so I assume it's all moved on from there.

Thanks all.
 
Michael Sanders
London Based DP.

+ 44 (0) 7976 269818




On 20 Sep 2018, at 16:08, Keith Putnam <keith@...> wrote:

As far as the original question, I'm having a hard time coming up with any legitimate reason why the editor should be against an SSD-based drive system.

Jan Klier
 

That is true for spinning drives too. They are divided into blocks of 512 bytes or multiples thereof. To change one byte, the filesystem has to read the entire block, update the byte, and write the block back to the drive. Almost all storage devices work that way; in computer architecture they're called block-level storage for that reason.

One thing that is different is that an SSD block cannot be overwritten in place like a spinning-disk block; it has to be erased first. This is handled with the TRIM command, and filesystem support has improved to the point that blocks are erased in the background once they're no longer in use, rather than delaying a write.
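
A toy illustration of that read-modify-write cycle in Python (the 512-byte block size is just the classic sector size; real filesystems batch and cache all of this):

# Sketch: on block-level storage, changing one byte means rewriting a block.
BLOCK = 512

def write_byte(disk, offset, value):
    # 'disk' is any file-like object opened in "r+b" mode.
    start = (offset // BLOCK) * BLOCK
    disk.seek(start)
    block = bytearray(disk.read(BLOCK))    # read the whole block in
    block[offset - start] = value          # change one byte in memory
    disk.seek(start)
    disk.write(block)                      # write the whole block back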

On Sep 20, 2018, at 5:46 PM, Michael Sanders via Cml.News <glowstars=me.com@...> wrote:

I have spent the past couple of days looking on the web but have struggled to find anything negative - one article I saw yesterday mentioned that one issue with SSDs is that you can't write to each byte, it has to be in a block. But that was from 2014, so I assume it's all moved on from there.

There were two issues raised in response to that. One is where the unit is powered off for too long and charge leakage causes data corruption.

The other is the built-in wear leveling. Every block on an SSD can only be written to a specified number of times; after that it has to be taken out of service, so to speak. Each SSD comes with a certain number of spare blocks that are substituted for ones that have worn out. In addition, different SSDs use wear-leveling techniques to spread writes out as much as possible and avoid exhausting blocks too quickly. When this end of life is reached, you have essentially run out of the ability to write new data, since too many blocks have worn out. Presumably at this point the unit could go into read-only mode, as the data is still there, but I'm not certain how various filesystems would handle that.
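
As a back-of-envelope illustration of the spare-block mechanism, a toy model in Python (the erase budget and spare count are made-up numbers, not any vendor's spec):

# Toy model: spares substitute for worn-out blocks until the pool runs dry.
ERASE_LIMIT = 3000            # made-up per-block program/erase budget
blocks = [0] * 1000           # erase counters for the user-visible blocks
spares = 20                   # spare-block pool

def erase_block(i):
    global spares
    blocks[i] += 1
    if blocks[i] >= ERASE_LIMIT:          # this block is worn out
        if spares > 0:
            spares -= 1                   # swap in a fresh spare
            blocks[i] = 0
        else:
            raise IOError("out of spares - drive drops to read-only")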

On Sep 20, 2018, at 2:01 PM, Eric Wenocur <eric@...> wrote:

But I have to think that if one or more of the drives (or memory chips) reached that 
failure point you'd be SOL as far as any kind of recovery. The little data bits are either in 
there, or not.

In summary, SSDs have some unique quirks with a certain nerd factor, but beyond that they're reasonable replacements for spinning disks; their failure modes are just different. Either way, you need system-level redundancy to guarantee the safety of your data. No storage technology is safe by itself.

Hence the old joke about HP’s storage branding, ‘HP SureStore’ - it didn’t mention anything about sure retrieval ;-)

Jan Klier
DP NYC

Mako Koiwai
 


Every block on an SSD can only be written to a specified number of times.


*************

No different than any memory storage card for a camera …

Something like 100,000 times.

In practical terms, it was my understanding this just isn’t an issue.


makofoto, ret. s. pas. ca

James Marsden
 


Hi All

There is no reason your editor could not use SSDs, other than not having a fast enough connection to those drives. For high-data-flow work SSDs are ideal - just not as the only backup. I would use ShotPut Pro to back up the rushes and include the SSDs as a destination. I have done this often with Sony 4K RAW: two Sony Pro RAID drives and a 1 TB Glyph RAID SSD. I give the SSD to the director to edit the 4K RAW on his MacBook Pro in Premiere, and have done it dozens of times with no problem. In the case of AXSM media, the cards seem to top out at about 250 MB/sec read; SxS will be about the same, as will other cards. It is normally the ASIC in the card reader that bottlenecks the speed, even if the RAID drives and the SSD benchmark at over 400 MB/sec write.

The only reason the editor may have a problem is if he wants to use Avid, which is always the worst choice for doing this kind of thing; for fast turnaround times you really have to use Premiere. The only way to get material straight into Avid is to use AMA, and that is flakey. Basically, Avid always wants to ingest material into its native OP1a .mxf format, which is not the same as the .mxf that cameras record - that is all OP-Atom. OP-Atom is extensible and OP1a is not, which means that for OP1a you need to know the length of the file when you start writing it. All cameras and external recorders have to use the extensible OP-Atom form of .mxf; extensible means you don't have to know the size/duration of the clip in advance - you can just start filling up the clip and stop when you stop recording. In other words, nothing can record Avid-native files: you can record a DNxHD .mxf, but it still won't be Avid-flavour OP1a .mxf, it will be OP-Atom .mxf, which is not the native kind.

To be clear, this is legacy code in Avid from the days of tape, when you ingested by setting an in and out point on the tape; Avid would then know what size/duration to make the OP1a .mxf and fill it with frames. That is a bit of a simplification of how it works, but you get the point: as OP1a is not extensible, you can't record to it, you can only convert/transcode to it, which makes Avid's core code completely unsuitable for fast-turnaround work.

Anyway, I know this does not explain the editor's resistance to using SSDs, but it will most likely be related to wanting to use Avid. Editors prefer it and will say they can cut faster in it, but unfortunately its core workings are so hopelessly out of date that it is almost always the worst choice in terms of workflow and speed of ingest. I have also always found most editors to be pretty ignorant about the IT side of post - that does not mean they are bad editors; their skill set is in making cuts and telling stories, not in how the software or computer they are using works, and many may not even know about the OP1a/OP-Atom business I have explained. I hope you see my point: not wanting to use SSDs to edit from is complete bullshit. I have licences for all current edit software, including Media Composer, and I have edited from SSDs with all of them.

The thing is, for many reasons such as those mentioned by Daniel, SSDs are not good for archival storage, and you really need to back up to spinning-disk media and LTO tape for long-term storage. As to the recovery points raised generally: if it is data corruption, an SSD is no more difficult to recover than a spinning disk. If the drive has a hardware fault, it is a different story - as Bob has posted, you are into replacing tiny surface-mount chips, which is basically the same as motherboard repair, something most shops (other than the likes of Louis Rossmann in New York) won't touch. The clicking that Feli mentions, by contrast, will be a bad read head, which can simply be swapped out.

Some links:
Louis Rossmann has a channel about repairing Mac motherboards, so you can see what is involved in repairing an SSD; the biggest problem with this stuff is finding the parts.
https://www.youtube.com/user/rossmanngroup

A bit from Linus Tech Tips about DeepSpar; I don't have a DeepSpar box, but I do use R-Studio and know it quite well:
https://www.youtube.com/watch?v=eyr14_B230o

I don't agree with everything in the above, but it may give you an insight into what this stuff involves.

One last thing. I'm afraid I don't agree with the detail of what Keith and Eric say about RAID types. RAID 1 can actually replicate corruption to the other drive, and two drives synced by software are really always better; i.e. writing to two separate drives using software like ShotPut Pro is less likely to corrupt than writing to a single RAID 1 drive, because RAID 1 syncs everything - including corrupt data - in a way that two separate drives won't.

Also, RAID 5 and 6 do corrupt and need recovery. In fact, the parity data that lets you reconstruct the data if one drive fails (RAID 5) or two drives fail (RAID 6) is only there to offset the risk of striping data across three or more drives. You need a minimum of three drives for a parity RAID like 5, but usually it is 4, 6, 12 or more, and the more drives, the more likely it is that one of them will fail. Arguably a 2-drive RAID 0 is safer than a 6-drive RAID 5 box, and if two drives fail at once you will still lose the RAID 5 data. As I said, the parity recovery in 5 and 6 is just there to offset the much-increased probability of one drive failing, which grows with the number of drives in the RAID set. Neither RAID 5 nor 6 is particularly safe: if you are backing up to a RAID 5 drive, you still really need two of them to have actual redundancy, and even then you would normally be better off, in terms of speed and data safety, with two RAID 0 drives. They will be faster because they don't have to write the parity data, and once you get over four drives in a box, a 2-drive RAID 0 box is arguably safer, so long as you use two or more RAID 0 boxes - which is much safer than a single RAID 5 box.
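
To put a rough number on how the chance of that first failure grows with drive count, a quick Python calculation (the 3% annual failure rate is an assumption for illustration, and real-world failures are not independent, so take it as a sketch of the scaling only):

# How the odds of at least one drive dying in a year scale with drive count.
p = 0.03                                   # assumed per-drive annual failure rate

def p_any_fail(n):
    return 1 - (1 - p) ** n                # chance that at least 1 of n fails

print("2-drive box :", round(p_any_fail(2), 3))   # ~0.059
print("6-drive box :", round(p_any_fail(6), 3))   # ~0.167 - nearly 3x as often
# A 6-drive RAID 5 survives that first failure but must then rebuild;
# a 2-drive RAID 0 does not survive it, which is why you keep 2+ copies.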

It is just how this stuff works: all RAID forms are more likely to fail than a single drive, and single drives are all really too slow. So for 4K, two RAID 0 boxes - ideally three - is the sweet spot between speed and reliability. But these should really be viewed as consumables; the more they get used, the more likely they are to fail, and a backup of those drives should be made to LTO tape ASAP.

The thing is, I have been building RAID setups for post use for over 20 years. I have seen 4-drive RAID 0 setups that last for years without problem, and RAID 5 boxes that have needed to be recovered, and if you have the data elsewhere, RAID 0 is always faster. But never put trust in a technology because it is said to be safe - someone saying they have never had a problem with X or Y device is completely meaningless. You always have to go through the process of asking "what if that fails?", and so on.

Hope that helps

James Marsden
DIT and post consultant
London


axel.mertes
 

I also see no reason why one would not use an SSD-based system for something like editing.
In fact, SSDs outperform HDDs in every aspect nowadays except price per gigabyte, and given today's SSD sizes, that can no longer be an argument. 2 TB SSDs are quite common as 2.5" SATA or M.2 NVMe variants, and you can get 4 TB 2.5" SATA too. If money is no problem, there are SSDs with ~128 TB for server use in a 3.5" form factor with a U.2 interface.

We had a movie project about two years ago where the camera team insisted on using a Mac for the DIT station and LaCie Thunderbolt 2 RAIDs (RAID 1-formatted, 4-drive HDD...) for data transport. The whole thing was slow - too slow to make two identical copies of each piece of Alexa RAW media as it came in. I ended up buying a Sharkoon dual-SATA-port, USB 3-connected docking station plus a set of 8 x 1 TB Samsung EVO 850 drives. I connected it all to a PC workstation on set and was able to create the copies in time - actually way faster, easily catching up the lost time and quickly freeing up blocked camera media (which was a real issue at some point).

It was the most inexpensive transport-media solution I have ever worked with, and small - the 2.5" media fits easily in your pocket. We used the BMD cartridge boxes (which look like small Betacam cassette boxes) to keep them safe.

Today these SSDs are built into our server, where they act as a SATA cache RAID for our HDD RAID 6 systems, giving us several TB of high-performance storage and making it feel like we are working from SSDs most of the time.

Regarding data backup, besides long-term LTO/LTFS archiving, we use disk-to-disk backup based on incremental copies, using a piece of software called "Syncovery". It allows you to set up rules for what to copy and what not to, and it can keep a user-selectable number of versions of changed files. We do this once every night, but you could even do it every 5 minutes if you really wanted to. The good thing is that this is WAY more secure than a RAID 1, as has already been pointed out: with this versioning in place, we can recover earlier states when needed.
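
The same versioned-copy idea can be sketched in a few lines of Python (a generic stand-in to show the principle - not how Syncovery itself is implemented, and the paths are placeholders):

# Sketch: incremental copy that stashes a timestamped version of changed files.
import shutil, time
from pathlib import Path

def backup(src, dst, versions):
    for f in Path(src).rglob("*"):
        if not f.is_file():
            continue
        rel = f.relative_to(src)
        target = Path(dst) / rel
        if target.exists() and target.stat().st_mtime >= f.stat().st_mtime:
            continue                                  # unchanged since last run
        if target.exists():                           # keep the older version
            vdir = Path(versions) / rel.parent
            vdir.mkdir(parents=True, exist_ok=True)
            stamp = time.strftime("%Y%m%d-%H%M%S")
            shutil.copy2(target, vdir / f"{target.name}.{stamp}")
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)                       # copy2 keeps mtimes

backup("/projects/current", "/backup/current", "/backup/versions")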

To build on this strategy, we also use Condusiv Undelete Server, which protects the (Windows) server from accidental file delete and overwrite operations. When a file is deleted, it is not actually deleted but moved to an interim trash can (even when deleting via the network!), which also prevents the server storage from fragmenting, as the space isn't freed up immediately. Further, when files are overwritten it again keeps versions, so you have real-time versioning of ALL your files within this interim trash-can folder. You can easily undelete any file if needed, or free up disk space by finally deleting files at an administrator level. We do this regularly, with immediate defragmentation afterwards. I have, for example, a 43.x TB volume with ~6,000,000 files on it, and it is 100% fragment-free all the time. Fragmentation is the worst enemy of HDD performance, especially in streaming-file environments like editing and grading.

Of course, our server solution is 100% Windows-based, and our workstations are mostly Windows systems with a few Mac OS X ones.

If I had to build a new editing system, I would check what storage size is required. If it is within, say, 8 TB, I'd go for NVMe SSDs; there are many solutions now that allow you to stripe these to increase performance. If you don't build in RAID redundancy, you could employ a strategy like the one described above, with a tool or script making a backup of your project data onto another drive (e.g. an HDD, or offsite via the network to guard against, say, a fire) using incremental copies and versioning. Then the most you can lose is the time since the last backup cycle, which is usually one working day or less, depending on your settings.

The benefit of working with NVMe RAID0 systems is speeds in the 10 TByte/s+ range. Read and write. Should be enough for 4K+ work.

Mit freundlichen Grüßen,
Best regards,

Axel Mertes

Workflow, IT, Research and Development
Geschäftsführer/CTO/Founder
Tel: +49 69 978837-20
eMail: Axel.Mertes@...

Magna Mana Production
Bildbearbeitung GmbH
Jakob-Latscha-Straße 3
60314 Frankfurt am Main
Germany
Tel: +49 69 978837-0
Fax: +49 69 978837-34
eMail: Info@...
Web: http://www.MagnaMana.com/



axel.mertes
 


Am 21.09.2018 um 11:49 schrieb axel.mertes:

The benefit of working with NVMe RAID0 systems is speeds in the 10 TByte/s+ range. Read and write. Should be enough for 4K+ work.

Correction:
I meant 10 GByte/s+ range. Sorry. We are not there yet ;-)


Mit freundlichen Grüßen,
Best regards,

Axel Mertes


Eric Wenocur
 

Now that's a very interesting perspective I never thought about: that parity RAID is intended to compensate for running a large number of drives together, and the greater likelihood of a failure - not as a means of making "failsafe" RAID arrays for the sake of being failsafe. I think a lot of people would say, "we use RAID 6 to ensure against data loss," regardless of whether they actually need a RAID in the first place.

So by that logic, if you're doing something relatively low-bandwidth, like audio editing, you are better off with a very large single drive, or a RAID 0 with a couple of drives - plus a duplicate single drive or array - rather than a RAID 5 or 6 array alone. I guess there's some added complexity in correctly running a primary and a backup at all times, as opposed to letting the RAID controller handle everything on a single array.


Eric Wenocur
Lab Tech Systems
301-438-8270
301-802-5885 cell


On 9/20/18 6:16 PM, James Marsden wrote:
>
> Also, RAID 5 and 6 do corrupt and need recovery. In fact, the parity data that lets you
> reconstruct the data if one drive fails (RAID 5) or two drives fail (RAID 6) is only there
> to offset the risk of striping data across three or more drives. [...] Neither RAID 5 nor 6
> is particularly safe: if you are backing up to a RAID 5 drive, you still really need two of
> them to have actual redundancy.

sameer shrivastava
 

A spinning drive or an SSD can fail at any time, without any warning, regardless of its age, make or model.

In terms of recovery from a failed drive, there are a lot more options for a spinning drive than for an SSD.

Failsafe RAID levels are best, but recovering from a failed RAID system is very difficult.

It's best to make three copies on set itself, on three different physical drives, and send them to different locations (at least two confirmed copies from the card itself). I don't see much advantage in using a RAID drive or an SSD on set, because of other bottlenecks.

I usually back up a RED 480 GB MINI-MAG via USB 3 to a single USB 3 drive in 40 minutes. For the editor it's much easier and preferable to choose spinning RAID drives for the work (a 4x4 TB RAID drive is much cheaper than a 16 TB SSD solution).

I usually convert the raw files to my internal drives for editing, and consolidate the raw footage from one of the shoot backups to my internal RAID drive before grading.

I hope this helps.

Sameer Shrivastava
D.O.P/Aerial Cinematographer/ Colourist/Smoke Artist
Dir. @ Birdeye Movies
http://www.birdeyemovies.com/index.html
Phone +91 9820431618

