SSD Fetish

steelghost

Ars Praefectus
5,437
Subscriptor++
Thing is, you're almost certainly never going to see these brain-breaking stats outside of benchmarks. We still don't have the software to really make use of the IOPS or peak bandwidth from Gen3 drives, never mind Gen5, at least not on desktop. The big latency gains have already been made, 3D XPoint has been and gone, and we are in a similar position to CPUs, where the only way to get faster is to increase parallelism, but that is not a panacea...
 

barich

Ars Tribunus Angusticlavius
9,956
Subscriptor++
And to think this was brain-melting at the time:
Intel X25-M: Sustained sequential write: up to 70 MB/s (80 GB drive) and up to 100 MB/s (160 GB drive)

Of course, back then the access time was the issue more than transfer speed, but I still remember buying that and being totally blown away.

I wonder about IOPS in Q1T1 with PCI-E 5 and newer controllers, cause that's where it's at for me.

I've still got a 160 GB X25-M G2 in a PS3. It only supports SATA I, so anything faster would be wasted in it.
 
My TrueNAS server boots off a pair of 40GB Intel 320 SSDs, which are a mere 14 years old now. MLC flash FTW! The boot process is not particularly speedy, it has to be said, but it's not something I do very often :D
My Intel 320 is one of those pieces of tech that I can't seem to get rid of. It has real power loss protection, and the product specification looks like it was published by an engineer instead of marketing.
 

steelghost

Ars Praefectus
5,437
Subscriptor++
Yep, I have a 160GB one that I bought back in the day as my first SSD; it was my boot drive for years. I still have it inside a little USB3 enclosure with a periodically updated Linux install on it.

After I posted about the 40GB drives I thought I should go see if I could find a spare one in case I need to repair the mirror. Found one on eBay for a fiver, happy days. But there are people on there trying to sell these things for crazy money, including one guy in the US who wants over $300 for a "new in original packaging" 160GB version :oops:

I mean, they're nice drives, but not that nice....
 

ant1pathy

Ars Tribunus Angusticlavius
6,738
I find myself in need of buying additional SSD space for my gaming tower. I've been out of the hardware game long enough that the M.2 stuff is nothing I've touched. I have a ROG STRIX Z370-E GAMING motherboard; it lists a pair of "M.2 Socket 3" slots. I'm after 2TB of storage and do not need best-in-class speeds (middle of the road is perfectly fine). Any recommendations or dos/don'ts here? What should I expect to pay?
 

tucu

Ars Tribunus Angusticlavius
7,585
I find myself in need of buying additional SSD space for my gaming tower. I've been out of the hardware game long enough that the M.2 stuff is nothing I've touched. I have a ROG STRIX Z370-E GAMING motherboard; it lists a pair of "M.2 Socket 3" slots. I'm after 2TB of storage and do not need best-in-class speeds (middle of the road is perfectly fine). Any recommendations or dos/don'ts here? What should I expect to pay?
I bought a 2TB WD Blue SN580 for a mini-PC a few days ago ($115 on Amazon US right now). At the moment the WD Black SN770 is just a few dollars more. At the time I also thought of spending more on an SN850X or a Crucial T500, but I concluded that it was too much for that mini-PC.

On that motherboard double check that you are not using the SATA ports that share lanes/ports with the M.2 sockets:
*2 The M.2_1 socket shares SATA_1 port when use M.2 SATA mode device. Adjust BIOS settings to use a SATA device.
*3 The M.2_2 socket shares SATA_56 ports when use M.2 PCIE mode device in X4 mode. Adjust BIOS settings to use SATA devices.
 

evan_s

Ars Tribunus Angusticlavius
6,373
Subscriptor
I bought a 2TB WD Blue SN580 for a mini-PC a few days ago ($115 on Amazon US right now). At the moment the WD Black SN770 is just a few dollars more. At the time I also thought of spending more on an SN850X or a Crucial T500, but I concluded that it was too much for that mini-PC.

On that motherboard double check that you are not using the SATA ports that share lanes/ports with the M.2 sockets:

The M.2_1 should be fine as they won't be using a SATA M.2 drive. That should also be the faster/preferred slot.
 
  • Like
Reactions: ant1pathy

ant1pathy

Ars Tribunus Angusticlavius
6,738
I bought a 2TB WD Blue SN580 for a mini-PC a few days ago ($115 on Amazon US right now). At the moment the WD Black SN770 is just a few dollars more. At the time I also thought of spending more on an SN850X or a Crucial T500, but I concluded that it was too much for that mini-PC.
Funny, I have that exact drive up in a browser tab, and that's what drove my "I should ask for confirmation..."
On that motherboard double check that you are not using the SATA ports that share lanes/ports with the M.2 sockets:
The only other thing inside this machine should be the GPU. What should I check to ensure I'm doing the right thing? If I already have one M.2 drive in there, should I remove it and replace it with the 2TB, or can they both be present without eating into my GPU performance?
 

evan_s

Ars Tribunus Angusticlavius
6,373
Subscriptor
Funny, I have that exact drive up in a browser tab, and that's what drove my "I should ask for confirmation..."

The only other thing inside this machine should be the GPU. What should I check to ensure I'm doing the right thing? If I already have one M.2 drive in there, should I remove it and replace it with the 2TB, or can they both be present without eating into my GPU performance?

From the manual, it looks like both slots are PCI-E 3.0 x4, so in that regard it shouldn't matter. That generation doesn't appear to have any PCI-E M.2 slots directly off the CPU, so both hang off the chipset and should be equivalent. The BIOS should default to using the M.2_2 slot over SATA ports 5 & 6, so you shouldn't need to change anything. If the drive doesn't show up in the M.2_2 slot, check your BIOS for a setting that switches between SATA ports 5 & 6 and the M.2_2 slot. Not sure what it will be called or where it will be located.
 

continuum

Ars Legatus Legionis
96,306
Moderator
https://www.tomshardware.com/pc-com...-pcie-5-0-ssd-fights-throttling-with-7w-power

I am hoping Q2 this year means April and not June; we'll find out.

The performance platform fully utilizes PCIe Gen 5 along with Western Digital and Kioxia's new BiCS8 3D NAND technology. The first flagship drive in this family, yet unnamed, will saturate the PCIe 5.0 connection with 14,500 MB/s sequential read and 14,000 MB/s write speeds at a max power draw of only 7W in its 2TB flavor. Shipping in Q2 2025, the drive will launch in capacities ranging from 512GB to 4TB.
 

continuum

Ars Legatus Legionis
96,306
Moderator
Sponsored post by SR, so ugh, but I am curious whether this is actually shipping sooner rather than later. We have SK hynix's PCI-e 5.0 drive shipping in volume to OEMs, and we know SK hynix's retail version has been shipping in South Korea for the past few months... I wonder when Samsung's 1090 Pro will actually ship as well.
https://www.storagereview.com/revie...imate-gen5-oem-storage-for-next-gen-computing

The 4600’s NAND package layout uses just one side of the PCB for the 2TB capacity. The single-sided PCB design was also used with the previous-generation 3500 SSD. Micron uses its own G9 TLC NAND on the board, paired with a Silicon Motion SM2508 Gen5 controller and its in-house firmware.
Micron 4600 using Micron 276-layer gen 9 NAND and Silicon Motion SM2508...
 
I think we've gotten to the point of diminishing returns now, at least for games. They're usually choked on asset decompression speed, rather than I/O, so probably won't benefit much from PCIe5 drives.

If you're actually banging on them, though, like in a public server application, they might be a major improvement. Or if you have a workflow where you're hitting the drive with more than one program at a time. The big IOPS figures on those newer units will probably let you run almost any number of drive-intensive applications simultaneously.

Of course, that's also going to burn up your drive life quickly. You probably wouldn't want to do too much of that unless you're actively making money with it.

PCIe5 drives with QLC NAND would be sort of like getting a 2Gb Internet connection with a 2TB data cap. Even TLC will probably burn up fast if you're really banging on these things.
 
  • Like
Reactions: SportivoA

steelghost

Ars Praefectus
5,437
Subscriptor++
NVMe drives shine when you load them up with I/O, as you say. Even on my little home server, anything that does much in the way of database reads is really noticeably quicker when running from NVMe storage (source: I recently gave my Proxmox instance an NVMe mirror to run VMs and containers from). This is most obvious when, for instance, Home Assistant is gathering historical data for a graph over time and other things are also a bit busy on the system. Even so, the NVMe drives are barely waking up; I just want the responsiveness of the system to scale better as the amount of I/O increases.

For games and similar single-user tasks, I've not seen much benefit between SATA SSDs, Gen3, and Gen4 NVMe. I'm not sure if that's a matter of how game software is written, or if something more fundamental about the workload of loading game assets just doesn't benefit from the performance profile offered by NVMe disks.

As an aside, I've been beating pretty hard on a pair of 800GB Intel S3700 SATA SSDs, which had 5% and 12% life used when I bought them used. Those numbers (as reported by smartctl) have not changed since I got them. Those drives are MLC NAND and have a large pool of spare flash to reallocate from. In a similar time period, I beat a pair of 500GB MX500s half to death (literally saw the remaining endurance drop by ~50%) with a similar workload. So SSDs are definitely not all created equal for intensive use cases; not news, I know, it just means it does matter what kit you deploy for some workloads.
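
If anyone wants to keep an eye on the same numbers, here's a minimal sketch, assuming smartmontools is installed and you can run it with enough privileges; attribute names vary by vendor, so the keywords below are just the wear-related ones these particular drives tend to report:

Code:
#!/usr/bin/env python3
# Rough wear check across a few SATA SSDs via smartmontools.
# Assumes `smartctl` is on the PATH and this runs with sufficient privileges.
# Attribute names differ by vendor (e.g. Media_Wearout_Indicator on Intel,
# Percent_Lifetime_Remain on Crucial), so we just grep for likely candidates.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]   # adjust to your drives
KEYWORDS = ("Wearout", "Lifetime", "Wear_Leveling", "Percentage Used")

for dev in DEVICES:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    print(f"--- {dev} ---")
    for line in out.splitlines():
        if any(k in line for k in KEYWORDS):
            print(line.strip())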
 
Last edited:
  • Like
Reactions: Klockwerk

N00balicious

Wise, Aged Ars Veteran
160
  • Like
Reactions: Klockwerk

steelghost

Ars Praefectus
5,437
Subscriptor++
Well, I've been grabbing deals on lightly used 1.92TB Micron 5100 Pro SATA SSDs, six of which go nicely in a 5.25" mounted 2.5" hotswap bay; sprinkle in a bit of RAIDZ2 and you have about 7TiB of fast, resilient, silent and fairly cool running storage. I'll keep looking for a couple more so I have spares in case of a drive failure.

So yeah, SATA SSDs, but like, good ones :D
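
For anyone wanting to sanity-check the "about 7TiB" figure, a quick back-of-envelope sketch (ignoring ZFS metadata and slop overhead, so the real number lands a bit lower):

Code:
# Back-of-envelope usable capacity for a 6-wide RAIDZ2 of 1.92 TB drives.
drives   = 6
parity   = 2                 # RAIDZ2 survives two drive failures
drive_tb = 1.92              # marketed terabytes (10**12 bytes)

usable_bytes = (drives - parity) * drive_tb * 10**12
usable_tib   = usable_bytes / 2**40

print(f"{usable_tib:.2f} TiB usable")   # ~6.99 TiB, i.e. "about 7TiB"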
 
Last edited:

N00balicious

Wise, Aged Ars Veteran
160
Well, I've been grabbing deals on lightly used 1.92TB Micron 5100 Pro SATA SSDs, six of which go nicely in a 5.25" mounted 2.5" hotswap bay; sprinkle in a bit of RAIDZ2 and you have about 7TiB of fast, resilient, silent and fairly cool running storage. I'll keep looking for a couple more so I have spares in case of a drive failure.

So yeah, SATA SSDs, but like, good ones :D

A buddy of mine got an 8TB WD Black M.2, pre-owned, for just north of US$300 at a swap meet.

I did the same "what do I do with an empty 5.25" slot", back in the day, with lappy HDDs. It gave me a couple of days of ZFS self-pleasuring, slicing and dicing the micro-array up into every vdev combination imaginable and running performance tests.

Those 1 TB 2.5" laptop HDDs were dogs.

Problem is, all storage in the 2.5" format is expensive per TB given the small capacities and the total number of parts needed for any credible capacity.

I recently indulged myself in the retro-intellectual exercise of a Silverstone CS01-HS NAS with six 2.5", hot swap, SATAIII, SSD internal bays.

I had a hard requirement for >24TB. It could be done, but 4TB and the ideal ~8TB SATA SSDs, the ones that aren't crap, are still very expensive for an obsolescent technology. The few 16TB SATA SSDs out there in any quantity are up there in price with gold-plated toilets. You'd have thought they were giving away those SATA SSD parts by now?

I relegated the idea of a modest-capacity SATA SSD-based NAS to the dustbin of history. It's like owning a horse-drawn wagon as your daily drive, without being Amish.
 
Last edited:

continuum

Ars Legatus Legionis
96,306
Moderator

ERIFNOMI

Ars Tribunus Angusticlavius
15,402
Subscriptor++
Well, I've been grabbing deals on lightly used 1.92TB Micron 5100 Pro SATA SSDs, six of which go nicely in a 5.25" mounted 2.5" hotswap bay; sprinkle in a bit of RAIDZ2 and you have about 7TiB of fast, resilient, silent and fairly cool running storage. I'll keep looking for a couple more so I have spares in case of a drive failure.

So yeah, SATA SSDs, but like, good ones :D
What price are you paying for those? I'm just curious how good of a deal they are. I have a few bays in my server that I could sacrifice to the flash gods instead of the spinning rust devil.
 

continuum

Ars Legatus Legionis
96,306
Moderator
What price are you paying for those? I'm just curious how good of a deal they are. I have a few bays in my server that I could sacrifice to the flash gods instead of the spinning rust devil.
We've been seeing them for $150...

If all you do is game, Gen3 or Gen4 is plenty.
The high-end is kind of what it's aimed at, and agreed: if you're just a typical enthusiast, even one with money to burn, you can get much better use out of your money with a PCI-e 4.0 NVMe SSD...
 

steelghost

Ars Praefectus
5,437
Subscriptor++
What price are you paying for those? I'm just curious how good of a deal they are. I have a few bays in my server that I could sacrifice to the flash gods instead of the spinning rust devil.
Checking back, it's been around £100 per drive. They're all "lightly used" - meaning remaining drive endurance in the mid to high nineties.

£45/TB is pretty much the floor price for new SSDs of any type in the UK market, so I suppose I'm trading getting a new drive for one with more endurance, a better grade of flash, etc. Thus far no failures, but you never know with SSDs.
 
  • Like
Reactions: continuum

N00balicious

Wise, Aged Ars Veteran
160
https://www.tomshardware.com/pc-com...s-making-it-the-worlds-fastest-consumer-drive

Samsung 9100 PRO SSDs announced. I'm very curious about the flash layout / Intelligent Turbowrite cache utilization/capacity of the installed flash as @Lord Evermore pointed out in another thread - up to 442GB on the 4TB model.

About time....

The speeds and feeds for this device look funny. The serial rate doesn't scale in the lane-dependent way that NVMe parts from other manufacturers have from 1TB to 2TB to 4TB. Also, why 1GB of DRAM cache per TB of storage capacity with each capacity model? That is also different from other manufacturers. Large caches are never bad for performance. However, there are diminishing returns. 4GB of cache for 4TB of storage seems like a lot?
 

continuum

Ars Legatus Legionis
96,306
Moderator
Also, why 1GB of DRAM cache per TB of storage capacity with each capacity model? That is also different from other manufacturers.
Historically, 1GB of DRAM per 1TB of flash is pretty standard for higher-end DRAM-equipped SSDs to hold the LBA tables. This ratio is maintained on the 990 Pro (spec sheet PDF), SK hynix Platinum P41 (1TB, 2TB), Lexar NM800 Pro (2TB), Kingston Fury Renegade (2TB), Acer Predator GM7000 (4TB), etc. All of the above are representative of their typical controller platforms, which were high-end PCI-e 4.0 or 5.0 controllers at time of release (from Samsung, SK hynix, Phison, Innogrit, etc.).

Actually to my surprise the Corsair MP700 SE and similar Crucial T700 Pro both have 8GB DRAM on the 4TB model (!). And I see 4GB DRAM on the 2TB MP700 Pro…
 

evan_s

Ars Tribunus Angusticlavius
6,373
Subscriptor
Historically, 1GB of DRAM per 1TB of flash is pretty standard for higher-end DRAM-equipped SSDs to hold the LBA tables. This ratio is maintained on the 990 Pro (spec sheet PDF), SK hynix Platinum P41 (1TB, 2TB), Lexar NM800 Pro (2TB), Kingston Fury Renegade (2TB), Acer Predator GM7000 (4TB), etc. All of the above are representative of their typical controller platforms, which were high-end PCI-e 4.0 or 5.0 controllers at time of release (from Samsung, SK hynix, Phison, Innogrit, etc.).

Actually to my surprise the Corsair MP700 SE and similar Crucial T700 Pro both have 8GB DRAM on the 4TB model (!). And I see 4GB DRAM on the 2TB MP700 Pro…

Yeah. 1GB per TB for a drive is pretty common for a flat FTL. Without the FTL in memory, every read or write has to be preceded by reading the relevant part of the FTL from flash to know where the data is or was.

https://en.wikipedia.org/wiki/Flash_memory_controller#Flash_translation_layer_(FTL)_and_mapping
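
The arithmetic behind that ratio, roughly; this assumes the textbook flat layout of one 4-byte entry per 4KiB page, not any particular vendor's implementation:

Code:
# Rough size of a flat FTL (logical-to-physical map) for a 1 TB SSD.
# Assumes one 4-byte entry per 4 KiB mapping page; real controllers vary.
capacity_bytes = 10**12        # 1 TB as marketed
page_size      = 4 * 1024      # 4 KiB mapping granularity
entry_size     = 4             # 32-bit physical page address

entries   = capacity_bytes // page_size
ftl_bytes = entries * entry_size

print(f"{entries:,} entries -> {ftl_bytes / 2**30:.2f} GiB")
# ~244 million entries -> ~0.91 GiB, i.e. roughly 1GB of DRAM per 1TB of flash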
 
  • Like
Reactions: continuum

steelghost

Ars Praefectus
5,437
Subscriptor++
A buddy of mine got an 8TB WD Black M.2, pre-owned, for just north of US$300 at a swap meet.

I did the same "what do I do with an empty 5.25" slot", back in the day, with lappy HDDs. It gave me a couple of days of ZFS self-pleasuring, slicing and dicing the micro-array up into every vdev combination imaginable and running performance tests.

Those 1 TB 2.5" laptop HDDs were dogs.

Problem is, all storage in the 2.5" format is expensive per TB given the small capacities and the total number of parts needed for any credible capacity.

I recently indulged myself in the retro-intellectual exercise of a Silverstone CS01-HS NAS with six 2.5", hot swap, SATAIII, SSD internal bays.

I had a hard requirement for >24TB. It could be done, but 4TB and the ideal ~8TB SATA SSDs, the ones that aren't crap, are still very expensive for an obsolescent technology. The few 16TB SATA SSDs out there in any quantity are up there in price with gold-plated toilets. You'd have thought they were giving away those SATA SSD parts by now?

I relegated the idea of a modest-capacity SATA SSD-based NAS to the dustbin of history. It's like owning a horse-drawn wagon as your daily drive, without being Amish.
Eh, where are you finding 4 or 8TB SSDs of any kind at prices that can compete with spinning rust?

Two 12TB drives would cost about £500 in the UK. Call it £1000 for four of them in mirrors, because RAID0 in your NAS is for insane people.

If I picked up a used 16-channel HBA for about £50 (available all day long on eBay) and carried on sniping deals on the same SSDs I've been buying, I could get to 14TB of usable capacity for about £1200 in drives. £1250 all-in / 14TB is ~£90/TB, versus £1000 / 24TB, or ~£42/TB, for the spinners. That puts the solid state option at about double the price per TB, which seems about right. The SATA SSD option can easily saturate 10Gbit, and there are inexpensive drive-bay swapping options available, unlike what I'd be looking at if I were trying to do this with NVMe drives.
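
Same numbers as a quick sketch, in case anyone wants to plug in their own prices; the drive counts are my working assumption (twelve 1.92TB SSDs in two 6-wide RAIDZ2 vdevs versus four 12TB HDDs in two mirrors):

Code:
# Quick £/TB comparison using the figures above.
def cost_per_tb(total_cost_gbp, usable_tb):
    return total_cost_gbp / usable_tb

# Twelve ~£100 SSDs plus a ~£50 used HBA; two 6-wide RAIDZ2 vdevs
# give 2 * 4 * 1.92 = ~15.4 TB raw usable (close to the ~14TiB above).
ssd = cost_per_tb(12 * 100 + 50, 2 * 4 * 1.92)

# Four ~£250 12TB HDDs in two mirrors: 24 TB usable.
hdd = cost_per_tb(4 * 250, 2 * 12)

print(f"SATA SSD pool: ~£{ssd:.0f}/TB")   # ~£81/TB (or ~£90/TB against 14TiB)
print(f"HDD mirrors:   ~£{hdd:.0f}/TB")   # ~£42/TB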

It's certainly not cutting edge tech, but it's about as cost effective as solid state mass storage gets. Each drive has nearly 9PB of write endurance, so barring actual failures it should last me a good long while.

As an aside, I do have a Rocket 1204 with four Micron 7400 Pro drives on it in the same box. Admittedly the x8 PCIe 3.0 interface throttles the total theoretical throughput to "only" ~8GB/s, but real-world fio testing shows it sustaining peak read/write speeds of roughly half that, which is not too surprising when using ZFS. But this is all immaterial since the network interface on that box is 10Gbit anyway; the NVMe drives are there to provide lots of IOPS rather than monster throughput.
 

continuum

Ars Legatus Legionis
96,306
Moderator
I wouldn't say over-eager per se, but we definitely see lag times of multiple years between each of these steps: a PCI-e revision being drafted, finalized, motherboards with it, and finally add-in cards (GPUs, SSDs, whatever) with it. Per Wikipedia, we had a 7-year gap between the introduction of PCI-e 3.0 (2010) and PCI-e 4.0 (2017), but only 2 years to PCI-e 5.0 (2019), 3 years to PCI-e 6.0 (2022), and maybe only 3 years to PCI-e 7.0.

Not sure these Google searches are accurate, but the first (US-market) mainstream GPUs with PCI-e 4.0 were the AMD RX 5700-series in 2019 and the RTX 30-series in 2020, while the first (US-market) mainstream GPU with PCI-e 5.0 didn't hit 'til 2025.

Guessing design and testing cycles are just getting longer...

/me eyeballs all the 2nd-gen PCI-e 5.0 NVMe SSD controllers that are only now finally surfacing en masse as of Q4 2024 and later...
 
Guessing design and testing cycles are just getting longer...
My amateur perception has been that the SIG is way ahead of what the electrical engineers can actually make, and they're also inventing new standards a lot faster than the market needs them. So it's not only hard to make each new generation, there's not a ton of demand. We're getting into PCIe5 now, for instance, but at least on the AMD side, you can't even use it on the Southbridge.

And then even when you do go PCIe5 in the few cases where it works, it seems to be only useful in benchmarks, not actual running code. Consumer CPUs can't process the data coming in fast enough to get much benefit from the speedier I/O.

That's likely less true in the enterprise; bandwidth is always of great use there, when you're feeding those massively multicore beast CPUs. But they have to actually have products to buy, which gets back to the engineers having a hard time.
 

steelghost

Ars Praefectus
5,437
Subscriptor++
As signalling rates go up and encoding schemes become more elaborate, it just gets that much harder to engineer boards that perform to spec and will stay in spec over an expected lifetime.

We saw the same with Ethernet, where somewhere between 2.5G and 10G the wheels come off when it comes to power consumption. The high-performance SSDs that actually make use of all that PCIe 5.0 bandwidth are monsters that are increasingly heat-constrained, just like CPUs and GPUs.

With networking, we could move to optics for the "tens or hundreds of gigabits" interconnect speeds. I'm not sure we have any similar alternatives for storage. Fortunately, I don't think there will be much demand to go beyond PCIe5 in consumer hardware anytime soon. As MGR notes, even the newest and fastest SSDs only really show any value in benchmarks when used in "normal" consumer workloads.
 
We saw the same with Ethernet, where somewhere between 2.5G and 10G the wheels come off when it comes to power consumption.
That may be improving. In some discussions on copper 10G a couple of months ago, I saw some modules that were rated for, um, I think it was 2.5 watts. That's a heck of a lot better than the 10W they were taking for a while. (Meanwhile, optical is, what, half a watt?)

It seems like there's a wall for copper at 10G. If you want to go faster, it's probably going to need fiber. Fortunately, it really isn't that difficult. There are some new terms and specifications, but once you learn a few basic ideas, you're set.
 

evan_s

Ars Tribunus Angusticlavius
6,373
Subscriptor
My amateur perception has been that the SIG is way ahead of what the electrical engineers can actually make, and they're also inventing new standards a lot faster than the market needs them. So it's not only hard to make each new generation, there's not a ton of demand. We're getting into PCIe5 now, for instance, but at least on the AMD side, you can't even use it on the Southbridge.

And then even when you do go PCIe5 in the few cases where it works, it seems to be only useful in benchmarks, not actual running code. Consumer CPUs can't process the data coming in fast enough to get much benefit from the speedier I/O.

That's likely less true in the enterprise; bandwidth is always of great use there, when you're feeding those massively multicore beast CPUs. But they have to actually have products to buy, which gets back to the engineers having a hard time.

I think it's a divide between consumers and data center stuff. For consumers, PCI-E 3 is still pretty much good enough. Even a 5090 barely loses any performance dropping from PCI-E 5 down to 3, and for SSDs you can see the difference in benchmarks, but it's rarely noticeable in other situations. Unfortunately, I think PCI-E 5 support is part of what is driving up the cost of motherboards and probably increasing the idle power consumption. I think it's likely that consumer stuff will stall where it's at, as PCI-E 6 and later will just be too hard and expensive to implement to make it worth doing.

Data centers, on the other hand, are chomping at the bit for more bandwidth. They're what's really driving the push for faster speeds, and we'll probably see PCI-E 6 implemented there pretty quickly. Maybe not across the board, but in certain situations it will definitely be popular quickly.