On 20 December, we launched an interesting experiment: hammering an SSD with writes to see whether its lifespan really is a problem. For those who do not follow our adventures, we used a model based on 4x nm MLC flash memory, driven by an Indilinx Barefoot controller. The SSD in question survived for 4 months while we wrote to it continuously from a VelociRaptor.
In total, we wrote about 650 terabytes of data before the SSD began to produce errors when reading back written data. Roughly speaking, we wrote 500 to 1,000 times more per day than what SSD manufacturers assume for typical use: the announced values range from 5 to 10 GB per day for an average user, and we were around 5 TB per day. Two problems arose. First, sequential writes wear an SSD less than random writes of small files (the common real-world workload), because of the very structure of flash memory. Second, we realized that firmware updates can, in some cases, reset the SSD's wear information.
Specifically, our counters were biased from the start: the initial average of 1,761 cycles was false. The SSD was more worn than that, with no way of knowing the true value. The data collected after the start of the test, on the other hand, are consistent: we went from an average of 1,761 cycles to 8,953, about 7,200 cycles, while writing nearly 650 TB. There is therefore a fair amount of write amplification: more data is written to the flash memory than we actually sent, because of the SSD's internal management.
In theory, we should have been able to write a little over 900 TB (128 GB per cycle on average); in practice, we are around 90 GB per cycle. The SSD, which was being tested until it became unusable, was sent back to after-sales service and should be replaced by the manufacturer. This first test allowed us to draw some lessons, and we will repeat it with a more efficient procedure and a brand-new SSD.
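The write-amplification figures above can be checked with a quick back-of-the-envelope calculation using the numbers from the test (an illustrative sketch, not the drive's internal accounting):

```python
# Back-of-the-envelope write amplification, using the figures from the article.
capacity_gb = 128            # SSD capacity: one full erase cycle writes ~128 GB
cycles_used = 8953 - 1761    # cycles consumed during the test (~7,200)
host_writes_tb = 650         # data actually sent by the host

# Total data written to the flash cells, in TB
flash_writes_tb = cycles_used * capacity_gb / 1000
amplification = flash_writes_tb / host_writes_tb

print(f"flash writes: ~{flash_writes_tb:.0f} TB")
print(f"write amplification: ~{amplification:.2f}x")
```

With these figures the flash absorbed roughly 920 TB for 650 TB of host data, a write amplification of about 1.4x, which matches the gap between the theoretical 128 GB per cycle and the observed ~90 GB per cycle.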
First, we will use a 64 GB SSD (to speed up the process), a Kingston SSDNow V100 based on a Toshiba controller. Second, instead of writing sequentially from a hard drive, we will use IOMeter and follow Crucial's recommendations on SSD wear testing: 50% sequential writes and 50% random writes, with a transfer-size mix of 5% 4 KB files, 5% 8 KB files, 10% 16 KB files, 10% 32 KB files, 35% 64 KB files and 35% 128 KB files.
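The transfer-size mix above can be modeled as a simple weighted distribution. This is a sketch of that mix, not the actual IOMeter configuration file:

```python
import random

# Transfer-size mix from the test plan (size in KB -> share of transfers).
# This models the distribution only; the real workload is driven by IOMeter.
size_mix_kb = {4: 0.05, 8: 0.05, 16: 0.10, 32: 0.10, 64: 0.35, 128: 0.35}

# Weighted average transfer size for this mix
avg_kb = sum(size * share for size, share in size_mix_kb.items())
print(f"average transfer size: {avg_kb:.1f} KB")

# Drawing one transfer size at random according to the mix
sizes, weights = zip(*size_mix_kb.items())
sample_kb = random.choices(sizes, weights=weights, k=1)[0]
```

The mix is heavily weighted toward 64 KB and 128 KB transfers, so the average transfer works out to 72.6 KB, much larger than the 4 KB random writes often used in worst-case endurance tests.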
Every day, we will check for errors and try to post updates on our Twitter account. Testing of this new SSD is expected to begin next week.