In the early days of SSDs, capacities developed rather quickly. The very first SSDs were undoubtedly small, generally 32GB or 64GB, but higher capacities were needed to make SSDs more usable in client environments. MLC NAND brought a rapid decline in prices, and capacities quickly increased to 128GB and 256GB. 512GB drives also came along fairly soon, but for a long while they cost more than a decent gaming PC, with prices above $1,000.

I would argue that the 512GB drives were introduced too early; adoption was minimal due to the absurd price. The industry learned from that, and instead of pushing 1TB SSDs to market at over $1,000, it waited until 2013, when Crucial introduced the M500 with its 960GB model reasonably priced at $600. Nowadays 1TB is a common capacity in almost every OEM's lineup, thanks both to lower NAND prices and to controllers sophisticated enough to manage 1TB of NAND. The next milestone is obviously the multi-terabyte era, which we are entering with the release of the 2TB Samsung 850 Pro and EVO.

Breaking capacity thresholds involves work on both the NAND and the controller side. Every controller has a fixed number of dies it can talk to, and for modern 8-channel controllers with eight chip enables (CEs) per channel the limit is typically 64 dies. With 128Gbit (16GiB) being the common NAND die capacity today, 64 dies yield 1,024GiB, or 1TB as it's usually marketed. It's possible to use a single CE to manage more than one die (which is what e.g. Silicon Motion does to achieve 1TB with a 4-channel controller), but that adds complexity to the firmware design and carries a performance penalty, as two dies on the same chip enable can't be accessed simultaneously.
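
To make the arithmetic concrete, here is a minimal sketch in Python of how the channel/CE topology caps addressable capacity. The figures are the article's examples, not any specific controller's datasheet values.

    # Capacity ceiling = channels x chip enables per channel x dies per CE
    # x per-die capacity (128Gbit = 16GiB per die).
    def max_capacity_gib(channels=8, ce_per_channel=8, dies_per_ce=1, die_gbit=128):
        dies = channels * ce_per_channel * dies_per_ce
        return dies * die_gbit // 8  # Gbit -> GiB

    print(max_capacity_gib())                           # 1024 GiB: 8 channels, 1 die per CE
    print(max_capacity_gib(channels=4, dies_per_ce=2))  # 1024 GiB: 4 channels, CE sharing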

Increasing NAND capacity per die is one way to work around the channel/CE limitation, but it's generally not the most efficient one. First off, doubling the capacity of the die increases complexity substantially because you are effectively dealing with twice the number of transistors per die. The second drawback is reduced write performance, especially at smaller drive capacities: SSDs rely heavily on parallelism for performance, and doubling the capacity per die cuts the number of dies, and thus the parallelism, in half at any given drive capacity. That reduces the usability of the die in capacity-sensitive applications such as eMMC storage, which don't have many dies to begin with (the same dies are often used in various applications ranging from mobile to enterprise).
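
The parallelism trade-off is easy to illustrate with the same kind of back-of-the-envelope math; the die sizes below are hypothetical examples, not a product roadmap.

    # At a fixed drive capacity, larger dies leave fewer dies to interleave
    # writes across, so peak write parallelism drops.
    def die_count(drive_gib, die_gbit):
        return drive_gib * 8 // die_gbit

    for die_gbit in (64, 128, 256):
        print(f"{die_gbit}Gbit dies -> {die_count(128, die_gbit)} dies in a 128GiB drive")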

The new MHX controller

The real bottleneck, however, is the DRAM controller. Today's NAND mapping table designs tend to require about 1MB of DRAM per 1GB of NAND for optimal performance, so breaking the 1TB limit requires a DRAM controller capable of supporting 2GB of DRAM. From a design standpoint, implementing a beefier DRAM controller isn't a massive challenge, but it eats both die and PCB area and hence increases cost. Given that 2TB SSDs are currently a relatively small niche, embedding 2GB DRAM support in a mainstream controller isn't very economical, which is why today's client-grade SSD controllers usually support up to 1GB for better cost efficiency.
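
The ~1MB-per-1GB rule of thumb follows from page-level mapping. Here is a sketch under the common assumption of one 4-byte logical-to-physical pointer per 4KiB NAND page; the article doesn't detail Samsung's actual table format, so treat this as illustrative.

    # One 4-byte pointer per 4KiB page gives a 1:1024 DRAM-to-NAND ratio,
    # i.e. roughly 1MB of map per 1GB of NAND. Assumed layout, not
    # Samsung's documented design.
    def mapping_table_bytes(nand_bytes, page_bytes=4096, entry_bytes=4):
        return nand_bytes // page_bytes * entry_bytes

    for tib in (1, 2):
        nand = tib * 1024**4
        print(f"{tib}TiB of NAND -> {mapping_table_bytes(nand) // 1024**3}GiB mapping table")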

Initially the 850 EVO was supposed to carry a 2TB SKU at launch, but Samsung didn't consider the volume to be high enough. As the number one manufacturer of client SSDs, supplying millions of drives to PC OEMs, Samsung is not really in the business of making low volume niche products, hence the release of 2TB client SSDs was postponed until lower prices could drive sufficient demand.

Comparison of Samsung SSD Controllers

                    MDX      MEX      MGX        MHX
Core Architecture   ARM Cortex R4 (all four)
# of Cores          3        3        2          3
Core Frequency      300MHz   400MHz   550MHz     400MHz
Max DRAM            1GB      1GB      512MB (?)  2GB
DRAM Type           LPDDR2   LPDDR2   LPDDR2     LPDDR3

The new 2TB versions of the 850 Pro and EVO both use Samsung's new MHX controller. I was told it's otherwise identical to the MEX, aside from the DRAM controller supporting up to 2GB of LPDDR3, whereas the MEX tops out at 1GB of LPDDR2. The MGX is a lighter version of the MEX with two higher clocked cores instead of three slower ones, and it's found in the 120GB, 250GB and 500GB EVOs.

Samsung SSD 850 Pro Specifications

Capacity                    128GB      256GB      512GB      1TB        2TB
Controller                  MEX        MEX        MEX        MEX        MHX
NAND                        Samsung 32-layer MLC V-NAND
NAND Die Capacity           86Gbit     86Gbit     86Gbit     86Gbit     128Gbit
DRAM                        256MB      512MB      512MB      1GB        2GB
Sequential Read             550MB/s    550MB/s    550MB/s    550MB/s    550MB/s
Sequential Write            470MB/s    520MB/s    520MB/s    520MB/s    520MB/s
4KB Random Read             100K IOPS  100K IOPS  100K IOPS  100K IOPS  100K IOPS
4KB Random Write            90K IOPS   90K IOPS   90K IOPS   90K IOPS   90K IOPS
DevSleep Power              2mW        2mW        2mW        2mW        5mW
Slumber Power               Max 60mW
Active Power (Read/Write)   Max 3.3W / 3.4W
Encryption                  AES-256, TCG Opal 2.0 & IEEE-1667 (eDrive supported)
Endurance                   150TB      150TB      300TB      300TB      300TB
Warranty                    10 years

Specification-wise, the 2TB 850 Pro is almost identical to its 1TB sibling. The on-paper performance is an exact match, with the 2TB model drawing a bit more power in DevSleep mode, likely due to the additional DRAM despite LPDDR3 being more power efficient than LPDDR2. Initially the 850 Pro was rated at 150TB of write endurance across all capacities, but Samsung changed that sometime after launch, and the 512GB, 1TB and 2TB versions now carry a 300TB endurance rating along with a 10-year warranty.

There's another hardware change in addition to the new MHX controller, as the NAND part number suggests that the 2TB 850 Pro uses 128Gbit dies instead of the 86Gbit dies found in the other capacities. The third character, a U in this case, encodes the type of NAND (SLC, MLC or TLC) and the number of dies, and Samsung's NAND part number decoder tells us that U stands for a 16-die MLC package. With eight NAND packages on the PCB, each die must be 128Gbit (i.e. 16GiB) to reach a raw NAND capacity of 2,048GiB. Of that, 2,048GB is user-accessible space, resulting in the standard ~7% over-provisioning from the GiB (1024^3 bytes) to GB (1000^3 bytes) translation.
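
Sanity-checking that math takes only a couple of lines of Python:

    # Eight 16-die packages of 128Gbit (16GiB) dies; the decimal-vs-binary
    # unit gap provides the over-provisioning.
    GIB, GB = 1024**3, 1000**3
    raw = 8 * 16 * 16 * GIB   # 2048 GiB of raw NAND
    user = 2048 * GB          # advertised 2TB in decimal gigabytes
    print(f"{(raw - user) / raw:.1%} over-provisioning")  # ~6.9%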

According to Samsung, this is still a 32-layer die, which implies that Samsung has simply developed a higher capacity die on the same process. It's logical that Samsung went with a lower capacity die at first because it's less complex and yields better performance at smaller capacities. In turn, a larger die brings additional cost savings through peripheral circuitry scaling, so despite still being a 32-layer part, the 128Gbit die should be more economical to manufacture than its 86Gbit counterpart.

Samsung SSD 850 EVO Specifications

Capacity                    120GB     250GB     500GB     1TB       2TB
Controller                  MGX       MGX       MGX       MEX       MHX
NAND                        Samsung 32-layer 128Gbit TLC V-NAND
DRAM                        256MB     512MB     512MB     1GB       2GB
Sequential Read             540MB/s   540MB/s   540MB/s   540MB/s   540MB/s
Sequential Write            520MB/s   520MB/s   520MB/s   520MB/s   520MB/s
4KB Random Read             94K IOPS  97K IOPS  98K IOPS  98K IOPS  98K IOPS
4KB Random Write            88K IOPS  88K IOPS  90K IOPS  90K IOPS  90K IOPS
DevSleep Power              2mW       2mW       2mW       4mW       5mW
Slumber Power               Max 50mW  Max 50mW  Max 50mW  Max 60mW  Max 60mW
Active Power (Read/Write)   Max 3.7W / 4.4W (120-500GB), Max 3.7W / 4.7W (1TB/2TB)
Encryption                  AES-256, TCG Opal 2.0, IEEE-1667 (eDrive)
Endurance                   75TB      75TB      150TB     150TB     150TB
Warranty                    5 years

Like the 2TB Pro, the 2TB EVO shares its performance characteristics with the 1TB model. Only power consumption is higher, which was to be expected given the increase in NAND and DRAM capacity.

As the 32-layer TLC V-NAND die was 128Gbit to begin with, Samsung didn't need to develop a new higher capacity die to reach 2TB. The EVO also uses eight 16-die packages, the only difference from the Pro being TLC NAND, which is more economical to manufacture since storing three bits per cell yields higher density than two. Out of the 2,048GiB of raw NAND, 2,000GB is user-accessible, 48GB less than in the 2TB 850 Pro, because the TurboWrite SLC cache eats a portion of the NAND and TLC tends to require a bit more over-provisioning to keep write amplification low for endurance reasons.
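
Extending the earlier sketch to compare the two drives' effective over-provisioning (TurboWrite's share of the reserve isn't broken out separately here):

    # Same raw capacity, different user-accessible space.
    GIB, GB = 1024**3, 1000**3
    raw = 2048 * GIB
    for name, user_gb in (("850 Pro 2TB", 2048), ("850 EVO 2TB", 2000)):
        print(f"{name}: {(raw - user_gb * GB) / raw:.1%} of raw NAND reserved")
    # prints ~6.9% for the Pro and ~9.1% for the EVO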

AnandTech 2015 SSD Test System

CPU                  Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard          ASUS Z97 Deluxe (BIOS 2205)
Chipset              Intel Z97
Chipset Drivers      Intel 10.0.24 + Intel RST 13.2.4.1000
Memory               Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics             Intel HD Graphics 4600
Graphics Drivers     15.33.8.64.3345
Desktop Resolution   1920 x 1080
OS                   Windows 8.1 x64
Comments

  • leexgx - Thursday, July 23, 2015 - link

    The bug is related to the incorrect queued TRIM support on the Samsung drives.

    The Samsung drives say they support queued TRIM, but they failed to implement it correctly when they added SATA 3.2 support in the latest firmware updates. The old firmware did not have the queued TRIM bug because the SSDs did not advertise support for it. Other SSDs that advertise queued TRIM support have been patched or didn't have the buggy implementation to start with (except the M500).
  • editorsorgtfo - Thursday, July 23, 2015 - link

    Can you corroborate this? Nothing in the patch hints at a vendor issue.
  • leexgx - Thursday, July 23, 2015 - link

    I guess this is relating to RAID. There is a broken implementation of the advertised queued TRIM support in the Samsung 840 and 850 EVO/Pro drives (the drive says it supports it but does not handle it correctly, so TRIM commands are executed incorrectly, which is why there is a blacklist for all 840* and 850* drives).

    Your post seems to be related to a RAID and kernel issue (but the issue did not happen on the Intel drives that they changed to). They rebuilt their Intel SSD setups the same way as the Samsung ones.

    They did the same auto restore, only the drives changed, and they had zero problems once they switched to the Intel (or whatever it was) SSDs, which also supported queued TRIM. Even though they were not using it directly, the RAID layer probably was (it was a while ago when I looked at it).
  • sustainednotburst - Friday, July 24, 2015 - link

    Algolia stated queued TRIM is disabled on their systems, so it's not related to queued TRIM.
  • leexgx - Saturday, July 25, 2015 - link

    The problem with Samsung drives and queued TRIM is still there (it's not just the queued TRIM support they failed to implement; they also failed to implement the SATA 3.2 spec and the features the drives advertise).
  • Gigaplex - Thursday, July 23, 2015 - link

    Those two links show separate bugs. The Algolia-reported bug was a kernel issue. The second bug, which vFunct was probably referring to, is a firmware bug where the SSD advertises queued TRIM support but does not handle it correctly. The kernel works around this by blacklisting queued TRIM on known-bad drives. Windows doesn't support queued TRIM at all, which is why you don't see the issue there yet.
  • jann5s - Thursday, July 23, 2015 - link

    @AT: please do some data retention measurements with SSD drives! I'm so curious to see if the myth is true and to what extent!
  • Shadowmaster625 - Thursday, July 23, 2015 - link

    With 2 whole gigabytes of DRAM, why are random writes not saturating the SATA bus?
  • Kristian Vättö - Thursday, July 23, 2015 - link

    The extra DRAM is needed for the NAND mapping table; it's not used to cache any more host IOs.
  • KaarlisK - Thursday, July 23, 2015 - link

    Where did TRIM validation go? (The initial approach, which checked whether TRIM restored performance on a filled drive).
    Considering that controllers have had problems with TRIM not restoring performance, even if this is a minor revision, it still seems an important aspect to test.
