Internal SSD Question

I’m sure this question has been asked before… I have an extra 2 TB internal SSD (not in use at the moment except for storage, installed in an enclosure).
I have a 1 TB SSD installed in the computer. Would there be any improvement in performance, or any improvement at all, if I installed the 2 TB in place of the 1 TB SSD?

The quick answer is no, if by performance you mean speed.
If you mean speed and quality, then it depends on a lot of things:

  • what resolutions you are putting into and getting out of TVAI,
  • what the read and write speeds of both SSDs are,
  • what video codecs your input and output use.

I’m inputting MPEG-2 files and outputting FHD TIFF images. This means that my SSD needs to be able to write at about 230 MB/s, for a total of about 4 TB, if I’m going to enhance a three-and-a-half-hour video.
Most cheap SSDs can only write at full speed for 3 to 20 GB before the write rate drops.
My solution was to buy a bunch of used 2 TB server SSDs and put them into a striped configuration (three SSDs joined to make one 6 TB drive that splits reading and writing equally).
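To put rough numbers on that, here’s a quick back-of-the-envelope calculation. The 16-bit RGB frames and 25 fps are my assumptions; real TIFF sizes vary with bit depth and compression, which is why an actual figure like the ~230 MB/s above can come in lower:

```python
# Back-of-the-envelope: sustained write rate for a lossless FHD TIFF sequence.
# Assumes uncompressed 16-bit RGB frames at 25 fps (assumptions, not measured).
width, height = 1920, 1080
bytes_per_frame = width * height * 3 * 2   # 3 channels x 2 bytes ≈ 12.4 MB/frame
fps = 25
duration_s = 3.5 * 3600                    # a 3.5-hour video

rate_mb_s = bytes_per_frame * fps / 1e6
total_tb = bytes_per_frame * fps * duration_s / 1e12
print(f"~{rate_mb_s:.0f} MB/s sustained, ~{total_tb:.1f} TB total")
# -> ~311 MB/s sustained, ~3.9 TB total (in line with the "about 4 TB" above)
```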

My setup is about as extreme as it gets. Ideally, you should be outputting from TVAI to something lossless like TIFF, though FFV1 is probably better and a little smaller. If you’re not outputting to lossless, then it really doesn’t matter how fast your SSD is.
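If you want FFV1, one option is to re-encode a lossless intermediate with ffmpeg afterwards. Just a sketch; the filenames here are hypothetical, not TVAI defaults:

```python
import subprocess

# Sketch: convert a lossless intermediate to FFV1 in a Matroska container.
subprocess.run([
    "ffmpeg",
    "-i", "tvai_output.mov",   # hypothetical lossless intermediate
    "-c:v", "ffv1",            # FFV1 lossless codec
    "-level", "3",             # FFV1 version 3: multithreaded, per-slice CRCs
    "-c:a", "copy",            # pass any audio through untouched
    "tvai_output_ffv1.mkv",
], check=True)
```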


Thanks very much for the input. I’ll just go ahead and keep using that 2 TB in the enclosure for storage then. :smile:

The difference between a mainboard M.2 slot and a 2.5″ SATA SSD is not really relevant in TVAI. With M.2 you get higher burst speeds when the SSD can deliver them, but that is not that important here. For an external SSD in an enclosure attached to the PC over USB, yes, there is a difference in copy speed, where the USB standard it is connected with plays a decisive role, but here too it is not relevant for TVAI encode speed.

Normally, however, this does not play a relevant role with TVAI; it does not make the encodes faster. Only after an encode completes, when the result is merged together and copied from the cache to the target location, does the process go faster with a faster SSD.

Or, if you do use a lossless codec, then when the bitrate of your video stream is above the maximum bitrate your storage medium can deliver, the waiting starts :wink:

I used one of the fastest Kingston “DataTraveler” USB sticks for a while to store process data. The 1 TB USB-C stick was really fast, but I destroyed it within weeks :laughing: It gets hot, and they are simply not made for that many write cycles. Hands off!


Thanks much for the info! :smile:

Edit: Sorry for hijacking the actual topic. I’m just interested in what kind of setup you’re running.

Running a RAID 10 array?

Also, how do you connect that many SSDs? Through a PCIe slot?

I’ve got 3 M.2 slots on my motherboard. The main two slots hold a 4 TB drive each, and the third slot has my 1 TB drive, which was my main drive before I added the two 4 TB ones. The two 4 TB drives are set up as a RAID 0 array, which I’ve benchmarked with the results below:
[benchmark screenshot]

SATA enterprise SSDs. Using RAID 0 as well, but I set it up as a striped volume in Windows Disk Management. Same thing, just a different name for it.

If I were to do NVMe drives, I would buy one of those PCIe cards that can take 4 to 8 drives. That setup costs a lot more than used SATA enterprise drives, though.

I think for many people a RAID 5 is more suitable when you have at least 3 drives; with a stripe set (RAID 0), all data is lost when one SSD in the set fails, but you probably already know that :slightly_smiling_face:
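For a feel of the trade-off with three 2 TB drives like the set mentioned above, a quick comparison (RAID 10 needs an even drive count of at least 4, so it’s left out here):

```python
# Usable capacity and fault tolerance for n equal drives (n = 3, 2 TB each).
n, size_tb = 3, 2.0
print(f"RAID 0: {n * size_tb:.0f} TB usable, tolerates 0 drive failures")
print(f"RAID 5: {(n - 1) * size_tb:.0f} TB usable, tolerates 1 drive failure")
print(f"RAID 1: {size_tb:.0f} TB usable, tolerates {n - 1} drive failures")
# -> RAID 0: 6 TB (the 6 TB stripe above), RAID 5: 4 TB, RAID 1: 2 TB
```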

A PCIe card with M.2 slots is cheaper than I thought… hmm, interesting.

For example, this Delock card costs about $80, and there are also much cheaper ones.

At the moment, I only have one slot in my computer, and it has a 1 TB card. I don’t know what it is; I bought the computer pre-built. There may be a misunderstanding: the other (2 TB) card that I mentioned is in a USB-C enclosure (used for storage). Both cards are M.2 2280 NVMe.

Ya, I’m fully aware. My motherboard only has 3 NVMe slots, and it can only run RAID 0, 1 and 10. Thing is, I don’t think I’ve ever seen or heard of an SSD failing at the consumer level. I understand it happens to SSDs in enterprise servers, which are constantly at or close to max read/write for a large part of their lives.

Saw a post on Reddit a few months ago by a guy who bought a second-hand NVMe drive. It wasn’t more than 30 days old or so, but it had close to a PB of reads/writes.

This is the SMART data for my RAID 0 array. Bought the drives just under 4 months ago. I wasn’t sure how to properly convert the data units written/read to GB; treating them as plain 512-byte units gives about 30 GB:

  • (66225243 * 512) / (1024^3) ≈ 30 GB

Which makes no sense for 4 months of use. The catch is that NVMe reports these counters in data units of 1000 × 512 bytes (512,000 bytes each), not single 512-byte sectors, so the real figure is about 1000× higher, around 30 TB.
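Here’s the corrected conversion as a quick sketch, using the counter values from the dump below:

```python
# NVMe SMART "data units" are 1000 x 512 bytes (512,000 bytes) each
# per the NVMe spec - not single 512-byte sectors.
data_units_read = 66_225_243
data_units_written = 63_102_659

bytes_per_unit = 512 * 1000
print(f"read:    ~{data_units_read * bytes_per_unit / 1e12:.1f} TB")
print(f"written: ~{data_units_written * bytes_per_unit / 1e12:.1f} TB")
# -> read: ~33.9 TB, written: ~32.3 TB ("around 30 TB" was the right guess)
```
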
The 1 TB NVMe not in the array is about 10 months old and was my only drive for ~6 months before I set up the RAID array. It’s got 3× the number of read/write commands, but some 30% less data written/read over its lifespan.

And yes, I know, 13 unsafe shutdowns in 4 months. ’Tis what happens when experimenting with new CPUs/GPUs and RAM timings. Had a crash just yesterday due to a sudden spike on my RTX 5080, which was at the time on a way too high overclock.

===============================
NVMe SMART ATTRIBUTES
===============================
 - Attribute ID:      0x00 (Critical Warning), 
     Current Value: 000

 - Attribute ID:      0x01 (Temperature in Celsius), 
     Current Value: 047

 - Attribute ID:      0x03 (0..100% remaining spare capacity), 
     Current Value: 100

 - Attribute ID:      0x04 (0..100% threshold before SMART error), 
     Current Value: 010

 - Attribute ID:      0x05 (Percent of device life used), 
     Current Value: 000

 - Attribute ID:      0x14 (Data units read (512 Byte)), 
     Current Value: 66225243

 - Attribute ID:      0x1e (Data units written (512 byte)), 
     Current Value: 63102659

 - Attribute ID:      0x28 (Number of read commands issued), 
     Current Value: 454002576

 - Attribute ID:      0x32 (Number of write commands issued), 
     Current Value: 375740560

 - Attribute ID:      0x3c (Controller busy time MSQ minutes), 
     Current Value: 546

 - Attribute ID:      0x46 (Power cycle count), 
     Current Value: 092

 - Attribute ID:      0x50 (Power on hours), 
     Current Value: 2085

 - Attribute ID:      0x5a (Unsafe shutdown count), 
     Current Value: 013

 - Attribute ID:      0xa0 (Media errors), 
     Current Value: 000

 - Attribute ID:      0xb0 (Number of Error Information Log Entries), 
     Current Value: 000

Thing is, you need a free PCIe slot that’s fast enough to run them properly. Unless you’ve got a $500+ motherboard (just a guesstimate) with 2× 16 lanes, or a chipset that can run PCIe 4.0 x16 by itself, you won’t have the lanes, since the GPU will occupy the x16 link to the CPU.

My MSI MAG B650 Tomahawk has:

  • 1x PCIe 4.0 x16 (from the CPU; occupied by the GPU, of course)
  • 1x PCIe 4.0 x2 (from the B650 chipset; half the potential speed of the three dedicated M.2 slots, which all run at 4.0 x4. It also shares lanes with M.2_3, so the third NVMe drive, which normally runs at 4.0 x4, only runs at 4.0 x2 if PCIe slot 2 is occupied)
  • 1x PCIe 3.0 x1 (also from the B650 chipset).
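For reference, the theoretical per-direction bandwidth works out like this (line-encoding overhead only; real throughput is a bit lower):

```python
# Theoretical PCIe bandwidth: transfer rate x 128b/130b encoding x lane count.
# Gen 3 runs at 8 GT/s per lane, Gen 4 at 16 GT/s.
GT_PER_S = {"3.0": 8, "4.0": 16}

def pcie_gb_s(gen: str, lanes: int) -> float:
    return GT_PER_S[gen] * (128 / 130) / 8 * lanes

for gen, lanes in [("4.0", 16), ("4.0", 4), ("4.0", 2), ("3.0", 1)]:
    print(f"PCIe {gen} x{lanes}: ~{pcie_gb_s(gen, lanes):.1f} GB/s")
# -> x16 ~31.5, x4 ~7.9, x2 ~3.9, 3.0 x1 ~1.0 GB/s
```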

Thinking about it, I might actually get 3 more HDDs and set them up as RAID 10. 210-240 MB/s is just sooooo slow; I can only use it for cold storage. Doing any kind of rendering or transcoding to or from a file on the HDD will bottleneck hard, but ~400-450 MB/s might actually be alright.
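A crude estimate of what striping across mirror pairs buys (ignoring controller overhead):

```python
# RAID 10 stripes across n/2 mirrored pairs, so sequential writes scale
# roughly with the pair count; reads can do somewhat better.
single_hdd_mb_s = (210 + 240) / 2   # midpoint of the single-drive range above
pairs = 4 // 2                      # 4 drives -> 2 mirrored pairs
print(f"~{single_hdd_mb_s * pairs:.0f} MB/s sequential write estimate")
# -> ~450 MB/s, consistent with the ~400-450 MB/s guess above
```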

And I’ll add that my RAID0 drives only process videos. Everything that gets put on those drives can be ripped again.

On one of my computers, I have 4 HDDs + one SSD making one 6 TB RAID 0 drive. The other has 3 SSDs making another 6 TB drive. The idea is that when I need to do multiple passes, I can always read from one drive and write to the other.
Anyway, go for it. That sounds like a good idea to me.

You are correct about the enterprise SATA drives being cheap and more durable than consumer ones. I bought two Seagate enterprise HDDs, one 12 TB and one 10 TB, both sold by the Seagate Store on Amazon and advertised as “Fully tested reset hard drive without operating hours.” After looking at the SMART data, though, the 10 TB already had a reallocated sector count of 7. Sent it straight back. The 12 TB was and still is fine.

Yes, I see it now: the lanes are the problem.

I saw a massive data-center statistic on the failure rates of thousands of hard disks over time. The video must still be on YouTube somewhere. The winner was Hitachi HGST; they had by far the lowest failure rate, consistently under 0.8% over several years. WD took over Hitachi’s HDD division, but the WD ones are not the same; the enterprise Ultrastar HDDs are continued under the Hitachi brand. But if you plan to use one in your PC, be warned: it is loud!
Ultrastar HGST

My tip for a PC or NAS that should be quiet is the Toshiba MG10. In the failure statistics Toshiba did well, better than Seagate; the MG10 is quiet, made for 24h data centers, and it’s a CMR drive, not SMR like, for example, the IronWolf.