#1
Defragging
I regularly defrag drive C but have never defragged drive Z. Is this
necessary, or will it make a mess if I try it?

--
Myth, after all, is what we believe naturally. History is what we must
painfully learn and struggle to remember. -Albert Goldman
#2
Defragging
Martin Edwards wrote:
> I regularly defrag drive C but have never defragged drive Z. Is this
> necessary, or will it make a mess if I try it?

Before you defrag anything, you should run CHKDSK. This checks for
structural problems before the defragmenter makes spaghetti of things.

Now, I know that modern Windows OSes have some sort of integrity checking
that runs in the background. When I recommend a CHKDSK scan first, it's to
cover anything which such a process has not uncovered. For example, if
something got corrupted just as the computer was starting up, a background
process may not have detected it yet.

Once you've done the scan, the next question would be: does Z: have any
special properties? Is it an SSD (don't defragment those)? Is it a USB
flash drive? Is it an SD card? The fact it's been assigned a "high letter"
implies you set it up that way, and it smells like a "special case" to me.
You have to give us details of your setup if you expect a sensible answer.
What kind of physical device does it reside on? For example, if I asked in
this forum, "can I defragment my F: ?", my F: drive happens to be a
RAMDisk, and fragmentation is meaningless on those. Makes no difference.

Windows has an API for defragmenting, intended to make defragmenting safe.
So from that point of view, there is a bit less to worry about. But if the
file system is damaged, then the risk goes up considerably.

    Paul
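[For what it's worth, the defragmenting API Paul mentions is reachable
from a one-page C program. The sketch below is my own illustration, not
anything from this thread: it counts a file's fragments with
FSCTL_GET_RETRIEVAL_POINTERS, the same DeviceIoControl call defragmenters
and Sysinternals contig use to read the extent map. The 512-extent batch
size is an arbitrary choice.]

```c
/* fragcount.c - count a file's extents via the defragmentation API.
   A sketch for illustration; compile with a Windows SDK and run as
   "fragcount <file>". One extent == one contiguous run on disk. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: fragcount <file>\n"); return 1; }

    HANDLE h = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) { fprintf(stderr, "open failed\n"); return 1; }

    /* room for a batch of 512 extents; the loop handles files with more */
    union {
        RETRIEVAL_POINTERS_BUFFER rp;
        BYTE raw[sizeof(RETRIEVAL_POINTERS_BUFFER) + 512 * 2 * sizeof(LARGE_INTEGER)];
    } u;
    RETRIEVAL_POINTERS_BUFFER *rp = &u.rp;
    STARTING_VCN_INPUT_BUFFER in = { 0 };   /* start at virtual cluster 0 */
    DWORD bytes, total = 0;
    BOOL done;

    for (;;) {
        done = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                               &in, sizeof(in), rp, sizeof(u), &bytes, NULL);
        if (!done && GetLastError() != ERROR_MORE_DATA) break;
        if (rp->ExtentCount == 0) break;    /* resident file: no extents */
        total += rp->ExtentCount;
        /* resume after the last extent returned in this batch */
        in.StartingVcn = rp->Extents[rp->ExtentCount - 1].NextVcn;
        if (done) break;                    /* ERROR_MORE_DATA means continue */
    }

    printf("%s: %lu extent(s)\n", argv[1], (unsigned long)total);
    CloseHandle(h);
    return 0;
}
```

A contiguous file reports one extent; the heavily fragmented test files
discussed later in this thread would report thousands.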
#3
Defragging
On Wed, 11 May 2016 04:50:06 -0400, Paul wrote in part:

> For example, if I asked in this forum, "can I defragment my F: ?", my F:
> drive happens to be a RAMDisk, and fragmentation is meaningless on those.
> Makes no difference.

I don't think it would make a practical difference, but after
defragmenting the metadata would be smaller. This would result in a
performance improvement which might be significant in some cases.

(For example, I have seen files on my real, spinning disk that are in more
than 40000 pieces. The in-memory table for these pieces has to be scanned
each time a direct access request is made to the file. For a RAM disk the
performance hit would be significant, even though the eventual read or
write to the RAM disk is fast. [For the spinning disk I have to defragment
at least the file in question whenever I notice the problem.])
#4
Defragging
Mark F wrote:
> Paul wrote in part:
>
>> For example, if I asked in this forum, "can I defragment my F: ?", my
>> F: drive happens to be a RAMDisk, and fragmentation is meaningless on
>> those. Makes no difference.
>
> I don't think it would make a practical difference, but after
> defragmenting the metadata would be smaller. This would result in a
> performance improvement which might be significant in some cases. (For
> example, I have seen files on my real, spinning disk that are in more
> than 40000 pieces. The in-memory table for these pieces has to be
> scanned each time a direct access request is made to the file. For a RAM
> disk the performance hit would be significant, even though the eventual
> read or write to the RAM disk is fast.)

Do you know what RAM means? Random access memory. It doesn't matter if the
next byte is contiguous with the previously accessed byte or somewhere far
away from it: it is ALL randomly accessed. It takes the same time to
access the next block as one many blocks away. When measuring RAM access
time, it isn't variable depending on address. If it were, it wouldn't be
RAM.

https://en.wikipedia.org/wiki/Random-access_memory

Defragmentation is meaningless in RAM. If a file has been loaded into
memory, there is no "table" for looking up pieces of it scattered across
the memory. There is just the handle object to the memory copy of the
file.
#5
Defragging
Martin Edwards wrote:
> I regularly defrag drive C but have never defragged drive Z. Is this
> necessary, or will it make a mess if I try it?

How would *we* know what your Z: drive is? HDD, SSD, flash drive,
USB-attached, mounted .iso image, RAM drive, what?
#6
Defragging
On Wed, 11 May 2016 15:11:21 -0500, VanguardLH wrote:
[snip]

> Defragmentation is meaningless in RAM. If a file has been loaded into

Not so.

> memory, there is no "table" of looking up some pieces of it scattered
> across in the memory. There is just the handle object to the memory

The RAM disk will have such a table.

> copy of the file.

A RAM disk is not necessarily going to have files be contiguous in RAM.
There may be pointer-chasing involved to get the whole file. It will be
faster than off a rotating disk drive, but still pointer-chasing. A
contiguous file would not have this pointer-chasing.

Sincerely,

Gene Wirchenko
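[To make the pointer-chasing concrete, here is a toy model (my own sketch,
with made-up numbers; real NTFS stores its runs in the MFT) of the kind of
extent list a RAM disk's file system keeps. A contiguous file is a single
run, found immediately; a fragmented file's run list has to be searched on
every direct access.]

```c
/* Toy extent lookup: each run maps a span of virtual clusters (the file's
   view) to logical clusters (where they sit on the volume). More
   fragments means more runs to search before any byte can be read. */
#include <stdio.h>

typedef struct { unsigned vcn, lcn, len; } run_t;

/* binary-search the run list for the run holding virtual cluster v */
static long to_physical(const run_t *runs, int n, unsigned v)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (v < runs[mid].vcn)
            hi = mid - 1;
        else if (v >= runs[mid].vcn + runs[mid].len)
            lo = mid + 1;
        else
            return (long)runs[mid].lcn + (v - runs[mid].vcn);
    }
    return -1;   /* hole: cluster not allocated */
}

int main(void)
{
    /* the same 8-cluster file, contiguous vs. split into four fragments */
    run_t contig[]     = { {0, 1000, 8} };
    run_t fragmented[] = { {0, 1000, 2}, {2, 500, 2},
                           {4, 2000, 2}, {6, 40, 2} };

    printf("contiguous : cluster 5 -> LCN %ld (1 run to search)\n",
           to_physical(contig, 1, 5));
    printf("fragmented : cluster 5 -> LCN %ld (4 runs to search)\n",
           to_physical(fragmented, 4, 5));
    return 0;
}
```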
#7
Defragging
Gene Wirchenko wrote:
> [snip]
>
> A RAM disk is not necessarily going to have files be contiguous in RAM.
> There may be pointer-chasing involved to get the whole file. It will be
> faster than off a rotating disk drive, but still pointer-chasing. A
> contiguous file would not have this pointer-chasing.

OK, I've done a test, and here are the results.

First, I use a special program, a one-page C program, which alternates
writes to two *fp. This causes extreme fragmentation. I used this program
years ago to put a million fragmented files on a disk for testing. We only
need two files for this.

Next, I used Sysinternals "contig" to move the second file to a "fresh"
part of the disk. The second file is now contiguous (no fragments). The
"red" file in the diagram is testfile00000. The "blue" file (contiguous)
is testfile00001.

Next, I read the file and transfer it to /dev/null (NUL on Windows). This
causes reads but no writes (in the sense that NUL is a virtual device for
the purposes of the exercise).

Now, to make it fair, the system file cache has to be purged before a test
run. The OS is a 32-bit OS, with 4GB of RAM total (and the other 4GB is
the RAMDisk). By reading a 4GB DVD image, I cause it to occupy the system
file cache. Then, when reading the two test files (00000 fragmented, 00001
not fragmented), the system file read cache will evict the DVD contents
and accept the newly read data. This forces the RAMDisk to actually
participate in the test, and prevents the system file cache from answering
the call.

http://s32.postimg.org/x4wt4aeo5/fragtest.gif

The fragmented run (bottom of the Command Prompt window) has a higher
process time, but a lower elapsed time. I tried to view the contest in
perfmon.msc, but the transfer spike is too narrow to reliably detect the
peak performance. It looks to be in the 600MB/sec to 800MB/sec range. If
the RAMDisk is tested with HDTune, it achieves approximately 4GB/sec.

So while the fragmented file had 16386 fragments, the difference wasn't
all that large. I would need to test this on my larger RAMDisk to make a
prettier graph from it. The trouble with that RAMDisk is inconsistent
performance from place to place on the disk, due to page table management
in Windows. The best performance comes from RAMDisks in PAE space, not AWE
space. To have a "nice" RAMDisk, I need to install Windows 7 x32 (which is
not set up right now) and test there, as the x32 ensures all the RAM used
by the disk is in PAE space. Even though the memory license for a 32-bit
OS is for below 4GB, a driver can access 60GB above the line via PAE.

But for a quick test, "what does it actually look like", this'll have to
do for now. The thing wasn't "absolutely crippled" by 16,386 fragments.

    Paul
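[Paul's fragmenter is simple enough to reconstruct. Below is a guess at
what such a one-page program looks like -- my sketch, not his actual code.
The 64KB chunk size is picked so two 1GB files land near his
16,386-fragment figure; real fragment counts will vary with the
allocator, since the file system may still coalesce some writes.]

```c
/* fragmaker.c - a guess at the "one page C program": interleaved appends
   to two files so neither can grow contiguously. 16384 x 64KB = 1GB each. */
#include <stdio.h>

#define CHUNK  (64 * 1024)
#define CHUNKS 16384

int main(void)
{
    static char payload[CHUNK];               /* zero-filled block */
    FILE *a = fopen("testfile00000", "wb");
    FILE *b = fopen("testfile00001", "wb");
    if (!a || !b) { perror("fopen"); return 1; }

    for (int i = 0; i < CHUNKS; i++) {
        fwrite(payload, 1, CHUNK, a);
        fflush(a);          /* push the allocation out now, not at close */
        fwrite(payload, 1, CHUNK, b);
        fflush(b);
    }
    fclose(a);
    fclose(b);
    return 0;
}
```

Sysinternals `contig -a testfile00000` would then report the fragment
count, and `contig testfile00001` makes the second file contiguous, as in
the test.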
#8
Defragging
On Wed, 11 May 2016 07:40:21 +0100, Martin Edwards wrote:
> I regularly defrag drive C but have never defragged drive Z. Is this
> necessary, or will it make a mess if I try it?

What's drive Z? If it's your optical drive, as it is on my system, it
can't be defragged.

--
Stan Brown, Oak Road Systems, Tompkins County, New York, USA
http://BrownMath.com/
http://OakRoadSystems.com/
Shikata ga nai...
#9
Defragging
Gene Wirchenko wrote:
> [snip]
>
> A RAM disk is not necessarily going to have files be contiguous in RAM.
> There may be pointer-chasing involved to get the whole file. It will be
> faster than off a rotating disk drive, but still pointer-chasing. A
> contiguous file would not have this pointer-chasing.

Yes, there is the overhead of the file allocation table involved in
chaining together the numerous segments, clusters, allocation units, or
whatever block measure is used to assign space to files. It would take
some slow memory to significantly impact something like 16K of reads. This
is one of those cases where "on paper" there is a measurable or
benchmarkable difference that the user will not notice. I've tested
overclocking my CPU, which upped the benchmarks but had no user-noticeable
effect on speeding up the computer. More heat for insignificant speedup.

How much more time will it take to address all of a defragmented file in a
RAM disk, versus the extra time to run a defrag job on that file on the
RAM disk? Also, how does the position of the fragments in the RAM disk
reduce the number of entries in the file allocation table? If allocation
is by cluster and there are 16K clusters, then there will still be 16K
entries in the file table regardless of where those clusters are located.
After reading one cluster, the file table is read to find the next
cluster. So what does it matter where the next cluster is in a RAM disk?
Only if the defrag resulted in a severe decrease in the number of clusters
occupied by a file would there be enough of a change in the number of
entries in the file table to make a significant difference in access
time. To how many allocation units is the file reduced in the file table
after the defrag? The reduction in AUs would have to be significant, which
is probably the assumption being made.

In Paul's example, he had a file with 16K fragments. Was that with
deliberately high slack space for each AU of the file? While the defrag
might have reduced the segmentation down to one to a dozen contiguous
blocks, how much was the reduction in AUs for the file? If the
defragmented, 1-segment file still occupied 12K AUs, the number of
pointers in the file table is not severely reduced. I don't know if his
test slices up a small file into 16K pieces or if it slices up a huge file
with 12K AUs into 16K pieces.
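[VanguardLH's point about the table holds for FAT-style allocation, where
the chain has one entry per cluster no matter where the clusters sit. A
toy illustration of that (made-up table contents, a four-cluster file laid
out two ways):]

```c
/* Toy FAT chain walk: the chain is one entry per cluster whether or not
   the clusters are contiguous, so defragging does not shorten the walk
   on a FAT-style table -- exactly the objection raised above. */
#include <stdio.h>

#define EOC 0xFFFFFFFFu                 /* end-of-chain marker */

/* count clusters by following the table from the file's first cluster */
static unsigned chain_length(const unsigned *fat, unsigned first)
{
    unsigned n = 0;
    for (unsigned c = first; c != EOC; c = fat[c])
        n++;
    return n;
}

int main(void)
{
    /* same 4-cluster file: contiguous (0->1->2->3) vs. scattered
       (0->5->2->7); both chains take four lookups to traverse */
    unsigned contiguous[8] = { 1, 2, 3, EOC, 0, 0, 0, 0 };
    unsigned scattered[8]  = { 5, 0, 7, 0, 0, 2, 0, EOC };

    printf("contiguous: %u clusters\n", chain_length(contiguous, 0));
    printf("scattered : %u clusters\n", chain_length(scattered, 0));
    return 0;
}
```

NTFS behaves differently: it stores runs (extents) rather than per-cluster
links, so there defragmenting genuinely does shrink the metadata, which is
the case Mark F described.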
#10
Defragging
On Wed, 11 May 2016 20:05:49 -0400, Paul wrote:
> OK, I've done a test, and here are the results.

Great test. Results were unexpected to me. Can you change the test to do
(pseudo)random, rather than sequential, access?

> [Paul's test description snipped -- see post #7]
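[For what it's worth, the pseudorandom read test Mark F asks for might
look like this -- a sketch under my own assumptions: 4KB reads, a fixed
seed so the fragmented and contiguous files see the same offsets, and
stdio buffering disabled so each access really goes through the file
system.]

```c
/* randread.c - time pseudorandom 4KB reads across a 1GB test file, so the
   extent/chain lookup happens on every access instead of once. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BLOCK    4096
#define FILESIZE (1024L * 1024 * 1024)   /* the 1GB test file */
#define READS    100000L

int main(int argc, char **argv)
{
    char buf[BLOCK];
    FILE *f = fopen(argc > 1 ? argv[1] : "testfile00000", "rb");
    if (!f) { perror("fopen"); return 1; }
    setvbuf(f, NULL, _IONBF, 0);         /* no stdio buffering/readahead */
    srand(12345);                        /* fixed seed: repeatable runs */

    clock_t t0 = clock();                /* rough timing is fine here */
    for (long i = 0; i < READS; i++) {
        long blk = (long)((double)rand() / RAND_MAX * (FILESIZE / BLOCK - 1));
        fseek(f, blk * BLOCK, SEEK_SET);
        fread(buf, 1, BLOCK, f);
    }
    printf("%ld reads in %.2f s\n", READS,
           (double)(clock() - t0) / CLOCKS_PER_SEC);
    fclose(f);
    return 0;
}
```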
#11
Defragging
Steve Hayes wrote:
> On Wed, 11 May 2016 14:36:55 -0700, Gene Wirchenko wrote:
>
> [snip]
>
> Does a RAM disk have time to become fragmented? Surely it loses all its
> data when the computer is switched off, and it is refreshed when you
> switch it on again.

It fragments when two or more programs write at the same time. And I have
left it running for a few days. It survives sleep just fine. The machine
draws 7.5W in the sleep state.

I just ZIP up the contents if shutting down. That can be a problem if
there is a power failure (it might take too long to put away). My UPS
doesn't have a big battery on it.

    Paul
#12
Defragging
VanguardLH wrote:
> [snip]
>
> In Paul's example, he had a file with 16K fragments. Was that with
> deliberately high slack space for each AU of the file?

I tried defragmenting the disk, and this is what I got. I've never seen
anything like this before. Maybe I should run CHKDSK?

File 31
    \testfile00000
    $STANDARD_INFORMATION (resident)
    $ATTRIBUTE_LIST (nonresident) logical sectors 195368-195375 (0x2fb28-0x2fb2f)
    $FILE_NAME (resident)
    $FILE_NAME (resident)   == has two file names, like it is hard linked (which it isn't)
    == 1GB file reports 4KB size

File 32                     == these entries have no $FILE_NAME; size is bizarre too
    \TESTFI~1
    $DATA (nonresident) logical sectors 4209360-4228175 (0x403ad0-0x40844f)
File 33
    \TESTFI~1
    $DATA (nonresident) logical sectors 4190800-4209359 (0x3ff250-0x403acf)
File 34
    \TESTFI~1
    $DATA (nonresident) logical sectors 4246992-4265935 (0x40cdd0-0x4117cf)
File 35
    \TESTFI~1
    $DATA (nonresident) logical sectors 4228176-4246991 (0x408450-0x40cdcf)
File 36
    \TESTFI~1
    $DATA (nonresident) logical sectors 4284752-4303695 (0x416150-0x41ab4f)
...
File 143
    \TESTFI~1
    $DATA (nonresident) logical sectors 6266960-6287951 (0x5fa050-0x5ff24f)

I have no idea what this means. The properties of the file still read 1GB
(1073741824 bytes). If I transfer the file off, delete it on the RAMDisk,
then copy it back, the entry looks like this (closer to normal). It's
still got two file pointers, though, and that isn't right.

File 31
    \testfile00000
    $STANDARD_INFORMATION (resident)
    $FILE_NAME (resident)
    $FILE_NAME (resident)
    $DATA (nonresident) logical sectors 4190800-6287951 (0x3ff250-0x5ff24f)

    Paul
#13
Defragging
Mark F wrote:
> Great test. Results were unexpected to me. Can you change the test to do
> (pseudo)random, rather than sequential, access?

My file generator program doesn't use the defragmenter API, so I don't
have real control over position. But a RAMDisk shouldn't be doing
readahead, because there are no "physics" like on a rotating hard drive.
Hard drives tend to have track buffers, so if the stride of what you're
doing hits the track buffer, it helps. The seek time on the RAMDisk is the
same no matter where the virtual head goes.

The most expensive part in the whole thing is traversing the layers of the
file system model. You would think you could do 100,000 file operations a
second, but the RAMDisk doesn't even get remotely close. Which is one of
the reasons it is a disappointment. For example, if I unpack the Firefox
source tarball, it can take a long time to do a text search within those
files. It doesn't go nearly as fast as you'd think. A text search might
still take a minute to run. Absolutely nothing that counts is
instantaneous on the thing. Only "stupid benchmarks" (HDTune) go fast --
benchmarks that bypass the file system.

I just tried running the defragmenter on the test setup, and it only
managed fragment movement at around 30MB/sec. It's always a shock when the
thing runs at "hard drive" speed. Pathetic. This is why I can't recommend
buying gobs of RAM for this purpose.

    Paul
#14
Defragging
On Thu, 12 May 2016 02:59:46 -0400, Paul wrote:
> Steve Hayes wrote:
>
> [snip]
>
>> Does a RAM disk have time to become fragmented? Surely it loses all its
>> data when the computer is switched off, and it is refreshed when you
>> switch it on again.
>
> It fragments when two or more programs write at the same time. And I
> have left it running for a few days. It survives sleep just fine. The
> machine draws 7.5W in the sleep state. I just ZIP up the contents if
> shutting down. That can be a problem if there is a power failure (it
> might take too long to put away). My UPS doesn't have a big battery on
> it.

OK, I haven't used a RAM disk since 8-bit days, when I used it to extend
the capability of the 640K RAM on the average XT. I used the RAM disk to
load stuff I wanted to access quickly, and when I switched off it was
gone. Even if it was fragmented on the disk I loaded it from, I'm pretty
sure it wasn't fragmented on the RAM disk.

If I ever have to replace my laptop, I might ask you how you do a RAM
disk, as nowadays they are likely to have more RAM than a 32-bit OS can
handle, but that's for another day.

--
Steve Hayes from Tshwane, South Africa
Web: http://www.khanya.org.za/stevesig.htm
Blog: http://khanya.wordpress.com
E-mail - see web page, or parse: shayes at dunelm full stop org full stop uk
#15
Defragging
Steve Hayes wrote:
> [snip]
>
> If I ever have to replace my laptop, I might ask you how you do a RAM
> disk, as nowadays they are likely to have more RAM than a 32-bit OS can
> handle, but that's for another day.

The free version here only supports a 1GB disk size. Previously, the free
version supported 4GB, so you'd want to look for an old version.

http://memory.dataram.com/products-a...ftware/ramdisk

Of the paid versions, one is good to 64GB (the size of PAE space on some
machines).

The speed is all over the place: anywhere from 1GB/sec to 7GB/sec, as
measured by HDTune. The one on this machine does 4GB/sec (as the memory is
in PAE space, above the zone that applications are allowed to use). My
suspicion is that the page tables in PAE space use bigger-than-4KB
mappings, which helps performance. Some processors have a 2MB page table
option, and AMD processors have a 1GB page table mapping. It's not clear
if any software can actually use that.

As for the LargePages feature, where low memory is mapped that way, there
may be a Registry setting in your OS for it, but it doesn't work. It is
most likely a server-only feature. I've only had a chance to play with
that in Linux land. (Linux has a LargePages option too.) LargePages is not
that "smooth" a technology for everyday usage, but for RAMDisks (fixed
size allocation, runs that way all day), the concept is perfect.

    Paul
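[As a footnote to the LargePages aside: on Linux, the equivalent (huge
pages) can be exercised from a short user-space program. A minimal sketch,
assuming a kernel with hugetlbfs support and a reservation made
beforehand; the 64-page figure is an arbitrary example.]

```c
/* hugetest.c - map memory with 2MB pages instead of 4KB ones.
   Assumes Linux and a prior reservation such as:
       echo 64 > /proc/sys/vm/nr_hugepages
   One 2MB page replaces 512 ordinary page-table entries, which is the
   effect suspected above of helping the PAE-space RAMDisk. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (64UL * 2 * 1024 * 1024)    /* 64 huge pages of 2MB = 128MB */

int main(void)
{
    void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) { perror("mmap(MAP_HUGETLB)"); return 1; }

    memset(p, 0xA5, LEN);               /* touch every page to fault them in */
    printf("mapped %lu MB using 2MB pages at %p\n", LEN >> 20, p);

    munmap(p, LEN);
    return 0;
}
```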