Test Principles and Main Points of SSD Power-Down Protection

Preface

A solid-state drive (SSD) uses a Flash Translation Layer (FTL) to convert between logical addresses and physical addresses. If an abnormal power loss occurs during normal operations such as reads, writes, or deletes, the mapping table may be lost because it was not updated in time, producing a fault in which the SSD can no longer be recognized by the system.

At the same time, to improve read and write performance, an SDRAM cache is usually used. If an abnormal power loss occurs during reads or writes, the data in SDRAM may be lost before it can be written to NAND flash, or the updated mapping table may not reach NAND flash in time, leaving the mapping table incomplete.
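The interaction between the in-SDRAM mapping table and its copy on NAND flash can be sketched in a few lines of Python. This is a toy model for illustration only, not any real controller's firmware; all names are made up:

```python
# Minimal FTL simulation (illustrative only, not real firmware):
# writes update an in-RAM logical-to-physical map; the copy on NAND
# is only synced periodically, so a power cut loses recent updates.

class TinyFTL:
    def __init__(self):
        self.ram_map = {}      # logical page -> physical page (in SDRAM)
        self.nand_map = {}     # last mapping table flushed to NAND
        self.next_ppn = 0

    def write(self, lpn, data):
        self.ram_map[lpn] = self.next_ppn   # out-of-place write
        self.next_ppn += 1

    def flush_map(self):
        self.nand_map = dict(self.ram_map)  # persist mapping table

    def power_cut(self):
        self.ram_map = dict(self.nand_map)  # SDRAM contents are gone

ftl = TinyFTL()
ftl.write(0, b"a")
ftl.flush_map()              # table persisted after first write
ftl.write(1, b"b")           # mapping update still only in SDRAM...
ftl.power_cut()              # ...abnormal power loss
print(sorted(ftl.ram_map))   # only logical page 0 survives: [0]
```

The second write is physically on the flash in a real drive, but with its mapping entry gone the controller can no longer find it, which is exactly the "mapping table lost, drive unrecognized" failure described above.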

Failure phenomena caused by abnormal power loss

An abnormal power loss on an SSD usually produces three kinds of failure:

1. The SSD can no longer be recognized by the system; the mapping table must be rebuilt, or the drive must be re-flashed at the factory as a simple, crude fix;

2. After repeated power losses, the SSD shows many "new bad blocks";

The mechanism behind these new bad blocks is that when a read, write, or erase on the SSD fails, the block is marked as bad. These blocks are not genuinely bad; they are misjudged as bad because of the abnormal power loss.

3. Data in SDRAM is lost.

Common power-down protection mechanisms

Each controller implements power-down protection differently, and the protection the user actually gets differs accordingly. In general there are two common practices:

1. Save all the data in SDRAM

On abnormal power loss, all data in SDRAM must be fully written to NAND flash. SDRAM capacity is generally about one-thousandth of the SSD's raw capacity (for example, 1 GB of DRAM per 1 TB of flash). For a small-capacity SSD, the amount of SDRAM data that must be written to NAND flash is relatively small, and a supercapacitor or tantalum capacitors can hold the power up long enough to write it. However, if the SSD capacity is large, for example 8 TB, the amount of SDRAM data to be written to NAND flash becomes very large, and still relying on a supercapacitor or tantalum capacitors for hold-up power inevitably leads to the following three tricky problems:

a. More tantalum capacitor parts are needed for protection. In actual engineering practice this is a serious constraint: engineers face thickness and standard form-factor limits, and the PCB area is simply not enough;

b. Even with enough capacitance for protection, a quick "restart" will not bring the SSD up properly: it must stay powered off for some time before restarting, because the SSD must wait for all the tantalum capacitors to discharge before it can be powered on and identified;

c. After a few years of use, once the tantalum or supercapacitors have aged and can no longer deliver the originally designed hold-up energy, the user again faces the risk of data loss after power failure or of an unrecognizable SSD. If the original design compensates by adding redundant capacitors, it falls back into the vicious cycle of problem "b".

Fortunately, problems b and c do have workable solutions; resolving these thorny issues only requires enough care and experience from the engineers.
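The scale of the capacitor problem can be estimated with the standard capacitor-energy formula: the usable energy when a capacitor discharges from V1 down to the regulator's minimum input V2 is E = ½C(V1² − V2²). The numbers below (rail voltages, flush speed, power draw, dirty-data sizes) are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope hold-up capacitor sizing (all numbers are
# illustrative assumptions, not from any datasheet).

V1, V2 = 5.0, 3.0            # volts: charged level, regulator drop-out
FLUSH_SPEED = 200e6          # bytes/s sustained NAND write speed (assumed)
POWER = 4.0                  # watts drawn while flushing (assumed)

def required_capacitance(dirty_bytes):
    t = dirty_bytes / FLUSH_SPEED          # seconds needed to flush
    energy = POWER * t                     # joules needed during hold-up
    return 2 * energy / (V1**2 - V2**2)    # farads, from E = 0.5*C*(V1^2-V2^2)

# Small SSD: ~16 MB of dirty SDRAM data -> a modest capacitor bank
print(round(required_capacitance(16e6), 3))   # 0.04 F
# Large SSD: say 1 GB of dirty data -> an impractically large bank
print(round(required_capacitance(1e9), 2))    # 2.5 F
```

The thousand-fold jump in required capacitance between the two cases is why the capacitor approach scales poorly to high-capacity drives like the 8 TB example above.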

2. Save only the user data in SDRAM, not the mapping table

This reduces both the SDRAM usage and the number of tantalum capacitors needed. "Not saving the mapping table" does not mean the mapping table is lost; only the mapping updates for the last data written are not saved. When the SSD powers up again, it looks for the last saved mapping table and the newly written data, and rebuilds the mapping table from them. The drawback of this approach is that if the rebuild mechanism is not well designed, rebuilding the mapping table takes longer, and the SSD needs some time before normal access is restored.
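One common way such a rebuild can work (details vary by controller; this is an assumed scheme, not the article's) is to store each page's logical number and a monotonically increasing sequence number in the page's out-of-band area, then scan the flash at power-up and keep only the newest copy of each logical page:

```python
# Sketch of rebuilding the logical-to-physical map after power loss
# (an assumed, simplified scheme). Each NAND page carries its logical
# page number (lpn) and a sequence number in its out-of-band area.

# Simulated flash: physical page -> (logical page, sequence number)
nand_pages = {
    0: (10, 1),   # lpn 10 written first
    1: (11, 2),
    2: (10, 3),   # lpn 10 rewritten: supersedes physical page 0
}

def rebuild_map(pages):
    l2p, seq = {}, {}
    for ppn, (lpn, s) in pages.items():
        if s > seq.get(lpn, -1):      # keep only the newest copy
            l2p[lpn], seq[lpn] = ppn, s
    return l2p

print(rebuild_map(nand_pages))   # {10: 2, 11: 1}
```

The scan itself is what makes the first power-on after a crash slow: the more flash must be read to find the newest copies, the longer the drive stays unready.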

For controllers with no SDRAM in the design, all data is written directly to NAND flash. On power loss, any data not yet committed to NAND flash has simply not been acknowledged to the host, so nothing extra needs to be saved. For applications with high reliability requirements, the SDRAM-less design is king; a representative example is a certain German industrial brand. Its only drawback is that performance is not as good, but many applications do not need the highest possible performance, only "enough" performance.

Test methods and principles

For the actual test, the SSD must be tested in two configurations: as the system (boot) disk and as a secondary (data) disk. The only difference between the two methods is that testing the system disk requires powering off the whole computer, while for a secondary disk only the SSD itself needs to be powered off.

a. With the disk blank, and again with the disk filled to 25%, 50%, 75%, and 100% of its capacity, power the SSD down abnormally while writing data, 3000 times in each state, with a 3-second interval between power-off and power-on;

The reason for writing different amounts of data to the disk is this: once the SSD has written a certain amount of data, background garbage collection starts. Garbage collection means data relocation, and data relocation means mapping-table updates; an abnormal power loss at exactly this point is where problems usually occur.
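Why garbage collection is a dangerous window can be shown with a tiny relocation step (illustrative toy code, not a real GC algorithm): every valid page moved out of the victim block forces a mapping-table update.

```python
# Tiny garbage-collection step (illustrative only): relocating a victim
# block's valid pages forces mapping-table updates, which is exactly
# the window where an abnormal power loss is dangerous.

l2p = {10: ("blk0", 0), 11: ("blk0", 1)}   # lpn -> (block, page)

def gc_relocate(l2p, victim, target):
    moved = []
    for lpn, (blk, page) in list(l2p.items()):
        if blk == victim:
            l2p[lpn] = (target, page)      # copy data, then update map
            moved.append(lpn)
    return moved

print(gc_relocate(l2p, "blk0", "blk1"), l2p[10][0])   # [10, 11] blk1
```

If power is cut between copying a page and persisting the updated map entry, the drive can come back up pointing at the old (about-to-be-erased) copy, which is why the test deliberately fills the disk to levels that trigger GC.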

b. Abnormal power-down during normal data writes;

c. Abnormal power-down while data is being deleted;

In Windows, deleting data also involves write operations, just as creating a file does, and the mapping table must be updated accordingly.

d. Abnormal power-down while the SSD is reading a file; test 3000 times, with a 3-second power-off interval;

e. Abnormal power-down during a normal shutdown; test 3000 times;

f. Abnormal power-down during normal OS startup; test 3000 times.
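The cases above all share the same power-cycle loop. The skeleton below is a sketch of that harness: `cut_power`, `restore_power`, and `check_drive` are hypothetical stand-ins for whatever relay or programmable power supply controls the SSD rail and for the host-side identify/read-back check, none of which the article specifies.

```python
# Skeleton of the power-cycle test loop for cases a-f (hypothetical
# interfaces; real tests drive a relay or programmable power supply).

import time

def power_cycle_test(cut_power, restore_power, check_drive,
                     cycles=3000, off_interval=3.0):
    failures = []
    for i in range(cycles):
        cut_power()                 # abnormal power-down mid-operation
        time.sleep(off_interval)    # article's 3-second off interval
        restore_power()
        if not check_drive():       # drive not identified or data corrupt
            failures.append(i)
    return failures

# Dry run against a fake, always-healthy drive: no failures expected.
log = []
fails = power_cycle_test(lambda: log.append("off"),
                         lambda: log.append("on"),
                         lambda: True,
                         cycles=5, off_interval=0)
print(len(fails), log.count("off"))   # 0 5
```

In a real run, `check_drive` would also verify previously written data against a known pattern, so both "drive unrecognized" and "silent data loss" failures are caught.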



