SSDs all have what is known as an "Indirection System", also known as an LBA allocation table (similar to an OS file allocation table). LBAs are not typically stored in the same physical location each time they are written. If you write LBA 0, it may go to physical location 0, but if you write it again later, it may go to physical location 50, or 8.567 million, or wherever. Because of this, every SSD's performance will vary over time and settle to some steady-state value. Our SSD dynamically adjusts to the incoming workload to get the optimum performance for that workload. This takes time. Other, lower-performing SSDs take less time because their indirection systems are simpler. HDDs take no time at all because their logical-to-physical mapping is fixed, so their performance is immediately deterministic for any workload IOMeter throws at them.
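To make the remapping idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names and sizes, not actual drive firmware) of how an indirection table sends a rewritten LBA to a new physical location rather than overwriting it in place:

```python
# Minimal sketch of an SSD indirection table (hypothetical names/sizes).
# Each write of an LBA lands at the next free physical page; the table
# records only the latest mapping, so rewriting LBA 0 moves it.

class IndirectionTable:
    def __init__(self, physical_pages: int):
        self.lba_to_phys = {}      # LBA -> current physical page
        self.next_free = 0         # naive "append only" allocator
        self.physical_pages = physical_pages

    def write(self, lba: int) -> int:
        phys = self.next_free % self.physical_pages
        self.next_free += 1
        # The previous physical location (if any) becomes stale and is
        # reclaimed later by garbage collection; the table just remaps.
        self.lba_to_phys[lba] = phys
        return phys

table = IndirectionTable(physical_pages=1_000_000)
print(table.write(0))   # first write of LBA 0 -> physical page 0
print(table.write(0))   # rewrite of LBA 0  -> physical page 1, not 0
```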
The Intel® Performance MLC SSD is architected to provide the optimal user experience for client PC applications; to that end, it will adapt and optimize the SSD's data location tables to obtain the best performance for any specific workload. This is done to provide the ultimate user experience, but it occasionally makes it challenging to obtain consistent benchmark results when changing from one specific benchmark to another, or when a benchmark does not run long enough for performance to stabilize. If any benchmark is run for sufficient time, its scores will eventually approach a steady-state value; however, the time to reach that steady state is heavily dependent on the previous usage. Specifically, highly random heavy-write workloads, or periodic hot-spot heavy-write workloads (which appear random to the SSD), will condition the SSD into a state that is uncharacteristic of client PC usage, and the drive will then require longer runs of characteristic workloads before it adapts to provide the expected performance.
When a benchmark test or IOMeter workload has put the drive into this state, which is uncharacteristic of client usage, it will take significant usage time under the new workload conditions for the drive to adapt. Until it does, the drive will produce inconsistent (and likely low) benchmark results for that test and possibly subsequent tests, and can occasionally exhibit extremely long latencies. The old HDD concept of defragmentation applies, but in new ways: standard Windows defragmentation tools will not work.
SSDs are not aware of the files written to them; they are only aware of the Logical Block Addresses (LBAs) that contain valid data. Once data is written to an LBA, the SSD must treat that data as valid user content and never throw it away, even after the host "deletes" the associated file. Today, there is no ATA protocol available to tell the SSD that the LBAs from deleted files no longer hold valid data. This fact, coupled with highly random write testing, leaves the drive in an extremely fragmented state that is optimized to provide the best possible performance for that random workload. Unfortunately, this state will not immediately produce characteristic user performance in client benchmarks such as PCMark Vantage without significant usage (writing) in typical client applications, which allows the drive to adapt (defragment) back to a typical client usage condition.
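The consequence can be modeled in a few lines (a hypothetical Python sketch, not real firmware behavior): the drive's set of valid LBAs grows as the host writes, but a host-side file delete never reaches the drive, so the "freed" LBAs remain valid from the SSD's point of view:

```python
# Hypothetical model of the SSD's view of valid data (not real firmware).
# The host deletes a file, but no ATA command tells the drive, so the
# drive must keep treating those LBAs as live user data.

drive_valid_lbas = set()

def host_write(lbas):
    drive_valid_lbas.update(lbas)   # the drive sees every write

def host_delete_file(lbas):
    pass                            # filesystem metadata change only;
                                    # nothing is sent to the drive

host_write(range(0, 100))           # write a 100-LBA file
host_delete_file(range(0, 100))     # "delete" it on the host
print(len(drive_valid_lbas))        # still 100: the drive cannot reclaim
```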
In order to reset the drive to a known state that will quickly adapt to new workloads for best performance, the SSD's unused content needs to be defragmented. There are two methods that can accomplish this task.
One method is to use IOMeter to sequentially write content to the entire drive. This can be done by configuring IOMeter to perform a 1-second sequential read test on the SSD with a blank NTFS partition installed on it. In this case, IOMeter will "prepare" the drive for the read test by first filling all of the available space sequentially with an IOBW.tst file, before running the 1-second read test. This is the most "user-like" way to accomplish the defragmentation, as it fills every SSD LBA with valid user data and causes the drive to quickly adapt to a typical client workload.
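If IOMeter is not available, the same effect can be approximated with a short script that sequentially fills the empty volume with one large file and then deletes it. This is only a sketch of the idea; the target path and chunk size below are placeholders:

```python
# Sketch: sequentially fill a blank volume with one large file, the same
# effect IOMeter's "Prepare" phase achieves with IOBW.tst. The path and
# chunk size are placeholders; point FILL_PATH at the SSD's volume.
import errno
import os

FILL_PATH = "E:\\fill.tmp"          # hypothetical mount point of the SSD
CHUNK = 1024 * 1024                 # write 1 MiB at a time
buf = b"\0" * CHUNK

with open(FILL_PATH, "wb", buffering=0) as f:
    try:
        while True:
            f.write(buf)            # sequential writes, start to finish
    except OSError as e:
        if e.errno != errno.ENOSPC: # stop only when the volume is full
            raise

os.remove(FILL_PATH)                # every LBA has now been written once
```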
An alternative (and faster) method is to use a tool to issue a SECURE ERASE command to the drive. This command releases all of the user LBA locations inside the drive and resets all of the NAND locations to an erased state. This is equivalent to resetting the drive to the factory-shipped condition, and will provide the optimum performance.
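On Linux, one commonly used tool for this is hdparm, which can issue the ATA SECURITY ERASE UNIT command. The sketch below wraps the two required hdparm invocations in Python; the device path and password are placeholders, and the usual caveats apply (the drive must not be in the "frozen" security state, and the command destroys all data on the drive):

```python
# Sketch: issue an ATA SECURE ERASE via hdparm (Linux). The device path
# and password are placeholders. DESTROYS ALL DATA on the target drive.
import subprocess

DEVICE = "/dev/sdX"                 # placeholder: the SSD to erase
PASSWORD = "p"                      # temporary password, cleared by the erase

# Step 1: set a temporary user password to enable the security feature set.
subprocess.run(["hdparm", "--user-master", "u",
                "--security-set-pass", PASSWORD, DEVICE], check=True)

# Step 2: issue the erase; all NAND is reset to the erased state.
subprocess.run(["hdparm", "--user-master", "u",
                "--security-erase", PASSWORD, DEVICE], check=True)
```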