#16
SSD Defrag ?
On 12/4/2018 3:58 PM, Paul wrote:
> David E. Ross wrote:
>> On 12/4/2018 3:08 PM, Paul wrote:
>>> wrote:
>>>> Appreciate all the info, but not sure I have a clear answer. Is there
>>>> no issue at all, accessing a very fragmented file, compared to
>>>> accessing an unfragmented file, on an SSD?
>>>
>>> Here are some test results.
>>>
>>> https://i.postimg.cc/ry7VnwF7/fragmentation.gif
>>>
>>> 26GB test file. Checksum used as a read test of the file.
>>> Fragmented file has around 396,000 fragments.
>>> Unfragmented file is contiguous.
>>>
>>> On a RAMDISK (close to zero seek):
>>>    970MB/sec fragmented read
>>>    993MB/sec unfragmented read
>>>
>>> On an SSD where a 20usec seek time is present, so 396,000 of the
>>> 20usec seeks are required:
>>>    229MB/sec fragmented read
>>>    383MB/sec unfragmented read
>>>
>>> That's to give some idea how much effect an extremely fragmented
>>> file would have on an SSD.
>>>
>>>    Paul
>>
>> A fragmented drive takes longer to read than an unfragmented
>> (defragmented) drive???
>
> Yes, the extra time to seek from one chunk of the file to the next.
>
>    XXXXXXX      XXXXXXX      XXX      XXXXXXXXXXXX   \____ Wasted time reduces
>            20us         20us     20us                /     aggregate bandwidth (SSD)
>
> On a hard drive, it's much worse. Head switches cost 1ms, seeks
> are more expensive.
>
>    XXXXXXX      XXXXXXX      XXX      XXXXXXXXXXXX   \____ Wasted time reduces
>             8ms          8ms      8ms                /     aggregate bandwidth (HDD)
>
> On my RAMDisk (I haven't measured it), it should be around these
> numbers. These would be typical numbers for a "hardware" RAM drive.
> The OS software stack probably degrades numbers like this quite a bit.
>
>    XXXXXXX      XXXXXXX      XXX      XXXXXXXXXXXX   \____ Wasted time reduces
>             2us          2us      2us                /     aggregate bandwidth (RAM Drive)
>
> HTH,
>    Paul

Your two examples illustrate fragmentation. Unfragmentation is obtained
by defragmenting, which yields files with their contents contiguous.

--
David E. Ross
http://www.rossde.com/

Once again, there has been a mass shooting. This time, it was in
Thousand Oaks, California. And once again, just as he did after the
recent mass shooting in Pittsburgh, President Trump sent his thoughts
and prayers to the families of the victims.

Thoughts and prayers will not stop the carnage. Action is needed on gun
control, and more guns -- as Trump proposed for Pittsburgh and Parkland
in Florida -- is not the answer.
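[The arithmetic behind the numbers quoted above can be sanity-checked with a quick back-of-the-envelope model. This is only a sketch: the 26GB file size, 396,000 fragment count, 20us access time, and 383/229 MB/sec read speeds come from the thread; the "one fixed overhead per fragment" assumption and the function names are illustrative.]

```python
# Back-of-the-envelope model of the SSD fragmentation test in this thread.
# Assumption: each fragment costs one fixed overhead; real drives and
# filesystems are messier than this.

FILE_MB   = 26 * 1024        # 26GB test file, as in the thread
FRAGMENTS = 396_000          # fragment count reported in the thread

def effective_mb_per_s(contig_mb_s, per_frag_overhead_us):
    """Bandwidth after adding a fixed per-fragment overhead."""
    base_s   = FILE_MB / contig_mb_s                     # contiguous read time
    wasted_s = FRAGMENTS * per_frag_overhead_us / 1e6    # total seek overhead
    return FILE_MB / (base_s + wasted_s)

def implied_overhead_us(contig_mb_s, frag_mb_s):
    """Per-fragment overhead implied by a measured fragmented read speed."""
    return (FILE_MB / frag_mb_s - FILE_MB / contig_mb_s) * 1e6 / FRAGMENTS

# With the quoted 20us flash access time, the model predicts ~344 MB/sec,
# noticeably better than the 229 MB/sec Paul actually measured:
print(effective_mb_per_s(383, 20))

# Working backwards, the 383 -> 229 MB/sec drop implies roughly 118us lost
# per fragment, suggesting per-extent OS/filesystem overhead dominates the
# raw 20us flash access time:
print(implied_overhead_us(383, 229))
```

[In other words, the measured penalty is larger than the raw seek time alone would explain, which is consistent with Paul's later remark that the OS software stack degrades these numbers.]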
#17
SSD Defrag ?
David E. Ross wrote:
> On 12/4/2018 3:58 PM, Paul wrote:
>>    XXXXXXX      XXXXXXX      XXX      XXXXXXXXXXXX   \____ Wasted time reduces
>>             2us          2us      2us                /     aggregate bandwidth (RAM Drive)
>
> Your two examples illustrate fragmentation. Unfragmentation is obtained
> by defragmenting, which yields files with their contents contiguous.

We were discussing what the penalty would be, if the SSD was left
*fragmented* and a defragmenter was *not* used.

That's why I did the experiment: to measure the impact of a ridiculous
level of fragmentation (396,000 fragments) on the reading of a file from
an SSD. There is still quite good performance (the 383MB/sec ideal read
speed degrades to 229MB/sec, on an Intel 545S SSD). On a regular hard
drive, the response would be horrible to behold (I could test that, but
I don't know if anyone cares).

Defragmentation of SSDs is not recommended, unless the wear life is
quite large. Maybe an Optane drive would be a candidate for a defrag run
(they're based on something other than flash, and their granularity is
different). I think the time to read out a block on Optane is 10us,
while flash is 20us, and Samsung did something to one of their flash
drives to get their number down to 10us, to try to catch up.

The fragmented files were generated by a C program, opening two files
and alternating writes to the two files. It turns out that the Win10 OS
I used for the test queues up writes a tiny bit, such that the level of
fragmentation wasn't as extreme as on other OSes. I wasn't really in
control of the 396,000 number in the 26GB test files. (My writes were
4096 bytes each, but the average fragment is 65536 bytes long.) That's
what came out in the wash.

Once the two files are generated, I erase one file, to make it easier to
see the second file in the JKDefrag status window. By taking an image of
the partition, I can restore it to other storage devices, and the
restore "keeps" the 396,000 fragments.

   Paul
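[The alternating-write generator Paul describes was a C program; the sketch below shows the same interleaving pattern in Python, under the thread's 4096-byte write size. The function name, file names, and chunk count are illustrative, and — as Paul notes about Win10 queueing writes — how fragmented the result actually is depends on the filesystem's allocation behavior.]

```python
# Sketch of the fragment-generator idea from this thread: alternate small
# writes between two files so the filesystem tends to interleave their
# extents on disk. (The original was a C program; this is a Python sketch
# with illustrative names and sizes.)

CHUNK  = 4096    # bytes per write, as in the thread
CHUNKS = 1024    # illustrative count; the actual test files were 26GB

def write_interleaved(path_a, path_b, chunk=CHUNK, chunks=CHUNKS):
    buf = b"\xAA" * chunk
    with open(path_a, "wb") as fa, open(path_b, "wb") as fb:
        for _ in range(chunks):
            fa.write(buf)
            fa.flush()    # push each write to the OS promptly, to
            fb.write(buf)
            fb.flush()    # encourage interleaved extent allocation
    # Deleting one file afterwards (as Paul did) leaves the survivor's
    # fragments easy to see in a defragmenter's map, and an image of the
    # partition preserves the fragmentation across restores.
```

[Note that delayed allocation on many filesystems — and the Win10 write queueing Paul mentions — can coalesce these small writes, which is why his average fragment came out at 65536 bytes despite 4096-byte writes.]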