#121
How Often Disk Defrag
In message, BillW50 writes:
>> In message, Stefan Patric writes:
>>> However, what would really be ideal is a new filesystem where
>>> performance is less (or not at all) inexorably linked to
>>> fragmentation. NTFS with []
>> Unless you are talking of one which prevents fragmentation in the
>> first place, I don't see how it can be possible to have one where
>> performance isn't affected by fragmentation, to some extent at least.
> Is that so? How about the damn I/O bus can't handle the speed of even
> a fragmented hard drive? Yes, that is right! Do the stupid experiments
> and you will find that a fragmented hard drive isn't the bottleneck.
> It is the damn bus. I can't believe how clueless most people are!
> Seriously! Does it *really* take an engineering degree to see this
> stuff or what?

I have an engineering degree, thank you. When you said "a new filesystem" and then went on about NTFS, most people would assume you were talking about something that occupies the same place in the hardware/software hierarchy as NTFS. If you are going to bring in the speed of the buffer/bus/whatever, then that's not the filesystem, it's the hardware. Sure, a filesystem may be designed to optimise certain aspects of a particular hardware architecture, but that isn't the filesystem.
--
J. P. Gilliver. UMRA: 1960/1985 MB++G.5AL-IS-P--Ch++(p)Ar@T0H+Sh0!:`)DNAf
... but it's princess Leia in /Star Wars/ who retains the throne in terms of abiding iconography. Ask any teenage boy, including the grown-up ones. - Andrew Collins, RT 16-22 April 2011
#122
"J. P. Gilliver (John)" wrote in message ...
[snip]
> If you are going to bring in the speed of the buffer/bus/whatever,
> then that's not the filesystem, it's the hardware. Sure, a filesystem
> may be designed to optimise certain aspects of a particular hardware
> architecture, but that isn't the filesystem.

Good, I love chatting to supposedly intelligent individuals. ;-)

Back in the 80s, when we were using MFM drives, defragging was a huge improvement (and a big deal). I remember hard drive speed would often double; I can't recall a case of tripling, but in some cases maybe. Microsoft had no defrag utilities back then, so we used third-party ones.

I am not sure why you are focusing on NTFS, but that's okay, I'll go with it. NTFS is supposed to be smart enough to write into contiguous free sectors, so it doesn't purposely fragment files. Well, I suppose it could work better, since fragmentation still happens from time to time. But what I am saying is: what difference does it really make? I wait until my drives get 40 to 60% fragmented (which takes about two years), and the best improvement I have ever recorded was 1 or 2%. I don't know about you, but a 1 or 2% performance boost is flat-out peanuts, and it isn't even worth the electrons to make it happen.
--
Bill
Gateway M465e ('06 era) - Windows Live Mail 2009
Centrino Core2 Duo T7400 2.16 GHz - 1.5GB - Windows 8 CP
#123
On Fri, 23 Mar 2012 22:24:08 +0000, "J. P. Gilliver (John)" wrote:
>> However, what would really be ideal is a new filesystem where
>> performance is less (or not at all) inexorably linked to
>> fragmentation.
> Unless you are talking of one which prevents fragmentation in the
> first place, I don't see how it can be possible to have one where
> performance isn't affected by fragmentation, to some extent at least.

I thought the eventual migration to solid state drives would eliminate the fragmentation concerns.
--
Char Jackson
#124
BillW50 wrote:
[snip]
> As I wait until my drives gets 40 to 60% fragmented (which takes like
> two years). And the best I have recorded was like 1 or 2% improvement.

And how did you make this unique discovery about how buses work? Inquiring minds, and all that...

The current SATA III storage devices quote around 500MB/sec transfer rates, which you can verify with HDTune. This device happens to be around 450MB/sec; it would take me a while to search enough HDTune results to find the very best one today.

http://www.legitreviews.com/images/r...sata3-read.jpg

Now, the best hard drive I know of is a 15K RPM Seagate drive with a 180MB/sec transfer rate. It costs $499. ($50 SATA drives are around 125MB/sec.) Sustained transfer rates on hard drives are limited by the head-to-platter interface, the read amplifier, and the data encoding technique (things like PRML, partial-response maximum-likelihood). You'll notice that 180MB/sec isn't even remotely close to the proven 450MB/sec result. The 450MB/sec bus can handle that, no problem at all.

http://en.wikipedia.org/wiki/PRML

Amazing stuff! Works in the presence of ISI (intersymbol interference).

http://www.guzik.com/solutions_chapter9.asp

Techniques like that eventually run out of noise margin, and you can't get the desired error-rate performance if you go much faster. I expect the amplifier can probably go faster now, but the signal the head sends back is the limitation. (At one time, it would have been hard to make good amplifiers for inside the HDA.)

Paul
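[Editorial note: Paul's figures can be put side by side in a quick back-of-the-envelope check. The script and function name below are illustrative, not from the thread; the rates are the ones quoted above.]

```python
# Back-of-the-envelope check of the rates quoted above: a SATA III
# bus good for ~500 MB/s, against a 15K RPM drive sustaining
# ~180 MB/s and a $50 commodity drive at ~125 MB/s.

def bus_occupancy(drive_mb_s: float, bus_mb_s: float = 500.0) -> float:
    """Fraction of the bus a sustained drive transfer keeps busy."""
    return drive_mb_s / bus_mb_s

fast_hdd = bus_occupancy(180)   # 0.36: busy about a third of the time
cheap_hdd = bus_occupancy(125)  # 0.25: busy a quarter of the time
```

In neither case is the bus the limiting factor; the drive mechanism is.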
#125
"Paul" wrote in message ...
[snip]

Thank heaven for Paul!
#126
Char Jackson wrote:
> I thought the eventual migration to solid state drives would
> eliminate the fragmentation concerns.

There are some file systems where a bit of thought was put into reducing the probability of fragmentation. The question I'd have about this approach is where the data is stored while the allocation is delayed. It might still be on disk, in a buffer area, but that would mean writing the data twice. And if the data is stored in RAM, it's the old "power protected" problem (the system needs a UPS).

At one time, Windows had a "power protected" feature, where you could tell the OS you had a UPS, and thus much more data could safely sit in RAM without endangering file system integrity. If you could do that today, your computer would "fly" in terms of performance. It would be up to the UPS to send an advance power-fail warning, so the computer could be safely flushed to disk and shut down.

http://en.wikipedia.org/wiki/File_system_fragmentation

"A relatively recent technique is delayed allocation in XFS, HFS+ and ZFS; the same technique is also called allocate-on-flush in reiser4 and ext4. When the file system is being written to, file system blocks are reserved, but the locations of specific files are not laid down yet. Later, when the file system is forced to flush changes as a result of memory pressure or a transaction commit, the allocator will have much better knowledge of the files' characteristics."

An SSD is eventually limited by IOPS and SATA latency, if you push it hard enough. The more fragmentation, the more ops it takes to complete the transaction. But the SSD would have to be pretty grossly fragmented for that to happen. The SSD is likely to wear out before it gets that bad.

Paul
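[Editorial note: Paul's SSD point can be sketched with a toy model. Every per-operation and transfer figure below is an illustrative assumption, not a measurement from the thread.]

```python
# Toy model of why fragmentation hurts an SSD far less than a hard
# drive: each fragment costs one extra I/O operation, but an SSD pays
# roughly 0.1 ms per op where a disk head pays ~8 ms per seek.
# All figures are illustrative assumptions.

def read_time_s(size_mb: float, fragments: int,
                rate_mb_s: float, per_op_s: float) -> float:
    """Time to read a file split into `fragments` pieces."""
    return fragments * per_op_s + size_mb / rate_mb_s

# A 100 MB file shattered into 1,000 fragments:
hdd_time = read_time_s(100, 1000, 125, 0.008)    # 8.8 s
ssd_time = read_time_s(100, 1000, 450, 0.0001)   # ~0.32 s
```

The hard drive spends 8 of its 8.8 seconds just moving the head; the SSD barely notices, which is why it has to be grossly fragmented before IOPS become the limit.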
#127
"Paul" wrote in message ...
[snip]
> And how did you make this unique discovery about how buses work?
> Inquiring minds, and all that...

Paul... use your head! Take a hard drive that is 60% fragmented and use it. Time how long it boots, how long it takes to open your favorite applications, to search through newsgroups, etc. Now clone that drive and defrag the clone. Now what happens, Paul? How much of a performance difference do you have? Now be honest, Paul!
--
Bill
Gateway M465e ('06 era) - Windows Live Mail 2009
Centrino Core2 Duo T7400 2.16 GHz - 1.5GB - Windows 8 CP
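[Editorial note: the experiment Bill proposes can be scripted. A minimal sketch; the file path is a placeholder, and on a real run the OS file cache must be dropped between timings, or the second pass measures RAM rather than the disk.]

```python
import time

def time_read(path: str, chunk: int = 1 << 20) -> float:
    """Seconds to read `path` end to end in 1 MiB chunks."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    return time.perf_counter() - start

# Run on the same large files before and after defragmenting, e.g.:
# elapsed = time_read("C:/some-large-test-file.bin")  # placeholder path
```

The difference between the two runs, not either absolute number, is what the 1-2% claim is about.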
#128
"Chris S." wrote in message ...
[snip]
> Thank heaven for Paul!

If Paul can get more than a 2% improvement from defragging an IDE drive, I'll be very impressed, as nobody I have talked to has done it yet. Say, have you done it yet, Chris? And do you know why there are buffers (aka cache) on the hard drive, Chris? No, I didn't think so. It caches everything the bus can't handle, so the drive doesn't have to wait for it. If the I/O could keep up, there would be no reason for drive caches at all. But you knew that, right? ;-)
--
Bill
Gateway M465e ('06 era) - Windows Live Mail 2009
Centrino Core2 Duo T7400 2.16 GHz - 1.5GB - Windows 8 CP
#129
On Fri, 23 Mar 2012 19:49:51 -0500, "BillW50" wrote:
> Paul... use your head! Take a hard drive that is 60% fragmented and
> use it. [...] Now what happens Paul? How much of a performance
> difference do you have? Now be honest Paul!

Overuse of a person's name while talking to them can sometimes make the speaker seem a little crazy. ;-)
--
Char Jackson
#130
"Char Jackson" wrote in message ...
> Over use of a person's name while talking to them can sometimes make
> the speaker seem a little crazy. ;-)

Oh, sorry! I just get tired of all of this hearsay without proof or evidence to back it up. If there is any, I want to see it. ;-)
--
Bill
Gateway M465e ('06 era) - Windows Live Mail 2009
Centrino Core2 Duo T7400 2.16 GHz - 1.5GB - Windows 8 CP
#131
"BillW50" wrote in message ...
[snip]
> Oh sorry! I just get tired of all of this hearsay without proof or
> evidence to back it up. If there is any, I want to see it. ;-)

It was just your statement:

"Is that so? How about the damn I/O bus can't handle the speed of even a fragmented hard drive? Yes that is right! Do the stupid experiments and you will find that a fragmented hard drive isn't the bottleneck. It is the damn bus. I can't believe how clueless most people are! Seriously! Does it *really* take an engineering degree to see this stuff or what?"

And my BSEE degree is from Purdue, 1962.

Chris
#132
BillW50 wrote:
> Is that so? How about the damn I/O bus can't handle the speed of even
> a fragmented hard drive?
> Paul... use your head! Take a hard drive that is 60% fragmented and
> use it. [...] Now be honest Paul!

The first statement you made above is the non sequitur in logic that bothers me. It's not the fault of the bus. You haven't characterized a bus; you're looking at a bottleneck caused by the head movement of a disk drive. The bus has absolutely nothing to do with it.

The bus is dumb. It has a percent occupancy. If the hard drive is doing seeks, there is nothing going across the bus. If the hard drive is pulling data off the drive, the bus is still not fully occupied. This is a 500MB/sec bus carrying data at a 180MB/sec sustained transfer rate (like on that Seagate 15K hard drive). The bus is occupied about a third of the time. The bus is not impacting performance.

           ______                 ______
          |      |               |      |
    ______|      |_______________|      |_______________

If the disk is not busy, the cache has drained, and a command comes along, the bus can "burst" until the cache is filled. In HDTune, they attempt to measure this "burst" performance. (Note that several of the benchmark utilities have needed continuous code adjustments. It's actually pretty hard to measure these things accurately; many times you see results that don't make sense. You can't always trust the results in a benchmark tool as gospel.)

This is my bus if the disk is idle and we're filling the cache. Once the cache is full, we're head-limited again (back to the sustained pattern above).

           |--- 8MB of data fills 8MB cache ---|
           _____________________________________            ______
          |                                     |          |      |
    ______|                                     |__________|      |__

"Bus" and "fragmentation" should not be used in the same sentence!

Fragmentation in a file system requires additional head movement to locate fragments of data. The issue is the time it takes the head to move from A to B. When the head is moving, no data can be transferred. The head must be stationary above the track, the embedded servo detected to prove that is the case, the sector header located (if one is present), and so on. That takes milliseconds, during which the bus has nothing to do. None of that has anything to do with buses or the theoretical maximum transfer rate a bus can provide.

SSDs come closer to sustaining near-bus-rate transfers, because there is no head movement. There is still quantization at the SSD, because flash memory is arranged in blocks or pages, and certain operations work at a larger size than others. But if you look at the results, like that flat HDTune graph running at 450MB/sec, they hide the internal details pretty well. A flash memory does need time to locate your data, but the delay is well hidden.

The fact that the natural storage size of the SSD doesn't exactly align with a 512-byte sector becomes apparent when you do a large number of small transfers to the SSD. The results can seem pathetic, except when you compare them to a hard drive, which couldn't even get close to the same performance level (due to head movement). If the layout in the flash better aligned with sectors, it might go faster, but at the expense of being a less dense chip. You only get the 450MB/sec if you do blocks of 512KB or larger (in this example). The flash page size might be 128KB, but I haven't checked a datasheet lately to see how that has changed. (Every generation of flash is going to need some dimensional tweaking, additional ECC code bits, and so on.)

http://www.legitreviews.com/images/r.../cdm-sata3.jpg

People who design buses take these insults personally... Lay the blame at what is inside the HDA and how it works.

Paul
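[Editorial note: Paul's argument in numbers. The ~8 ms average seek is an assumed figure; the 125 MB/s sustained rate and 500 MB/s bus match the drives discussed earlier in the thread.]

```python
# Seek time, not bus bandwidth, is what fragmentation costs a hard
# drive: one head movement per fragment, then streaming at the
# sustained rate. Assumed figures: 8 ms per seek, 125 MB/s sustained.

def hdd_read_s(size_mb: float, fragments: int,
               seek_s: float = 0.008, rate_mb_s: float = 125.0) -> float:
    """Read time = one seek per fragment + streaming time."""
    return fragments * seek_s + size_mb / rate_mb_s

contiguous = hdd_read_s(100, 1)     # 0.808 s
fragmented = hdd_read_s(100, 500)   # 4.8 s, roughly 6x slower --
# yet the bus never runs above 125/500 = 25% occupancy in either case.
```

The slowdown lives entirely inside the drive; the bus sits idle during every one of those seeks.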
#133
"Paul" wrote in message ...

> BillW50 wrote:
>> Is that so? How about the damn I/O bus can't handle the speed of even
>> a fragmented hard drive? Paul... use your head! Take a hard drive
>> that is 60% fragmented and use it. Time how long it boots, time how
>> long it takes to open your favorite applications, search through
>> newsgroups, etc. Now clone that drive and defrag the cloned drive.
>> Now what happens Paul? How much of a performance difference do you
>> have? Now be honest Paul!
>
> Your first statement you made above, is the non sequitur in logic that
> bothers me. It's not the fault of the bus. You haven't characterized a
> bus - you're looking at a bottleneck caused by the head movement of a
> disk drive. The bus has absolutely nothing to do with it.
>
> The bus is dumb. It has a percent occupancy. If the hard drive is
> doing seeks, there is nothing going across the bus. If the hard drive
> is pulling data off the drive, the bus is still not fully occupied.
> This is a 500MB/sec bus carrying 180MB/sec sustained transfer rate
> data (like on that Seagate 15K hard drive). The bus is occupied about
> 1/3rd of the time. The bus is not impacting performance.
>
>         ______                ______
>        |      |              |      |
>  ______|      |______________|      |_______________
>
> If the disk is not busy, the cache has drained, and a command comes
> along, the bus can "burst" until the cache is filled. In HDTune, they
> attempt to measure the "burst" performance. (Note that several of the
> benchmark utilities have needed continuous code adjustments. It's
> actually pretty hard to measure these things accurately. Many times,
> you see results that don't make sense. You can't always trust the
> results in a benchmark tool as being gospel.)
>
> This is my bus, if the disk is idle, and we're filling the cache. Once
> the cache is full, we're head limited again (back to the sustained
> pattern above).
>
>        |--- 8MB of data fills 8MB cache ---|
>         ____________________________________          ______
>        |                                    |        |      |
>  ______|                                    |________|      |__
>
> [snip]

100% correct, Paul. Thank you.

Chris
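Paul's "occupied about 1/3rd of the time" figure follows directly from his own numbers (a 500 MB/sec bus carrying a 180 MB/sec sustained stream) and is easy to check:

```python
# Bus duty cycle from Paul's example numbers: even with the drive
# streaming flat out, the bus idles roughly two thirds of the time.
bus_mbs = 500.0        # bus capacity, MB/s
sustained_mbs = 180.0  # drive's sustained media rate, MB/s
occupancy = sustained_mbs / bus_mbs
print(f"bus busy {occupancy:.0%} of the time")
```

180/500 works out to 36%, close enough to "about 1/3rd", so the diagram's duty cycle is consistent with the quoted rates.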
#134
How Often Disk Defrag
"Paul" wrote in message ...

> BillW50 wrote:
>> Is that so? How about the damn I/O bus can't handle the speed of even
>> a fragmented hard drive? [snip]
>
> Your first statement you made above, is the non sequitur in logic that
> bothers me. It's not the fault of the bus. You haven't characterized a
> bus - you're looking at a bottleneck caused by the head movement of a
> disk drive. The bus has absolutely nothing to do with it. [snip]

Paul, I too have HDTune, and while it is a nifty utility, it doesn't test fragmentation! And that is what this thread is all about. I have a bunch of SSDs too. I hear all of the claims... but I am not seeing it. Nor am I seeing a big advantage from defragging hard drives either.

Yes, I know there is a delay while the head moves and positions itself to read the next chunk. This sounds really bad. But have you taken a hard drive apart and actually watched it work? You need a high-speed camera to keep up with how fast the head flies around on a badly fragmented drive. Even a hummingbird would be impressed.

Okay, let's accept for argument's sake that head movement slows it down. Then how come defragging does little to nothing as far as improvement? Sure, lots of people talk about how well it works, but virtually nobody offers any evidence that it does much. I have done the tests, and 2% is the best improvement I have found. I don't know about you, but there are lots of things I can do to improve performance by way over 2%. And 2% is not even noticeable to the user (nor would I care) anyway.

So what I am asking you (or anybody who cares) is this: I totally get the technical side (when I was younger, I ate that all up)... but nowadays I want to see the results, just like the average user would. If they can't see it, then I am not impressed.

--
Bill
Gateway M465e ('06 era) - Windows Live Mail 2009
Centrino Core2 Duo T7400 2.16 GHz - 1.5GB - Windows 8 CP
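Bill's "show me the results" challenge amounts to a simple before/after timing test. A minimal sketch of one is below; `time_read` is a hypothetical helper, not anything either poster used. Run it against the same large file before and after defragging, and note the big caveat: the OS file cache means a warm second run measures RAM, not the disk, so reboot (or otherwise flush the cache) between runs.

```python
# Minimal sequential-read timer for before/after defrag comparisons.
# Cold-cache runs only: a warm run measures the OS cache, not the disk.
import os
import sys
import time

def time_read(path, chunk=1024 * 1024):
    """One sequential pass over `path`; returns (seconds, MB/s)."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):   # read 1 MB at a time until EOF
            pass
    elapsed = time.perf_counter() - start
    return elapsed, size / (1024 * 1024) / elapsed

if __name__ == "__main__":
    # Pass one or more file paths on the command line.
    for path in sys.argv[1:]:
        secs, rate = time_read(path)
        print(f"{path}: {secs:.2f}s  {rate:.1f} MB/s")
```

A sequential pass like this mostly exercises streaming rate; a fairer fragmentation test would read a file the defragmenter reports as heavily fragmented, since that is where the extra seeks show up.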
#135
How Often Disk Defrag
"Chris S." wrote in message ...

> "BillW50" wrote in message ...
>> "Char Jackson" wrote in message ...
>>> On Fri, 23 Mar 2012 19:49:51 -0500, "BillW50" wrote:
>>>> Paul... use your head! Take a hard drive that is 60% fragmented and
>>>> use it. Time how long it boots, time how long it takes to open your
>>>> favorite applications, search through newsgroups, etc. Now clone
>>>> that drive and defrag the cloned drive. Now what happens Paul? How
>>>> much of a performance difference do you have? Now be honest Paul!
>>>
>>> Over use of a person's name while talking to them can sometimes make
>>> the speaker seem a little crazy. ;-)
>>
>> Oh sorry! I just get tired of all of this hearsay without proof or
>> evidence to back it up. If there is any, I want to see it. ;-)
>
> It was just your statement: "Is that so? How about the damn I/O bus
> can't handle the speed of even a fragmented hard drive? Yes that is
> right! Do the stupid experiments and you will find that a fragmented
> hard drive isn't the bottleneck. It is the damn bus. I can't believe
> how clueless most people are! Seriously! Does it *really* take an
> engineering degree to see this stuff or what?"

Yeah, so? When you test un-fragmented against super-fragmented, I never saw much of a difference in performance. I have run into some souls who claim it makes a huge difference. Well, great! I want to see it. And for decades I haven't seen it. And I am still waiting.

> And my BSEE degree is from Purdue, 1962.

Millington '76.... I have to look at my military records to see exactly what it says again. But I graduated at the top of my class, with the highest test scores they had seen in 5 years. I ended up in black ops, and it was amazing that the consumer market saw nothing similar for at least 30-plus years later.

--
Bill
Gateway M465e ('06 era) - Windows Live Mail 2009
Centrino Core2 Duo T7400 2.16 GHz - 1.5GB - Windows 8 CP