Microsoft seems to be changing NTFS



 
 
  #1  
Old March 7th 18, 10:46 AM posted to alt.comp.os.windows-10
Paul[_32_]

I've been noticing some funny things lately with
the defragment option in the Optimize panel. Mine no
longer works (but such reports go back a couple of
years, so you could easily dismiss this as
"just my problem").

So instead, I can present a test result today that
shows a difference between Windows 7 NTFS behavior
and Windows 10 NTFS behavior. The bad news is that
the Windows 10 behavior is "wrong" and represents a
corruption of sorts. The changes are "less correct"
than Windows 7.

The Windows 7 behavior is an issue with the design of
NTFS that IT staff have known about for years, and
it can't really be fixed because of the nature of
how storage in NTFS works.

*******

In Windows 7, partition off a section of disk, say 60GB
or more. The bug requires a sufficiently big file to
trigger it.

Make the partition.

Format to NTFS with 4KB (default) cluster size.

Do Properties on the partition, and tick the "Compression" box.
This causes all files copied to the partition from
then on to be compressed, saving space.
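
For the curious: as I understand it, that tick box just sets
the compression attribute on the volume's root directory, and
new files then inherit it. A minimal Win32 sketch of the same
thing, assuming drive F: as in my test; error handling trimmed:

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    USHORT fmt = COMPRESSION_FORMAT_DEFAULT;   /* LZNT1 */
    DWORD ret = 0;

    /* FILE_FLAG_BACKUP_SEMANTICS is needed to open a directory */
    HANDLE h = CreateFileW(L"F:\\", GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS,
                           NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* mark the root compressed; new children inherit the bit */
    if (!DeviceIoControl(h, FSCTL_SET_COMPRESSION, &fmt, sizeof fmt,
                         NULL, 0, &ret, NULL))
        printf("FSCTL_SET_COMPRESSION failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}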

I use this only occasionally, in situations where a
disk isn't big enough, and I'm trying to "stretch" it to
get a job done.

OK, now copy a single large file onto that partition.
NTFS compression chops the file into small pieces and tries
to compress each piece. (NTFS compresses in 16-cluster units,
64KB at this cluster size.) In the process, the file
becomes fragmented. In my test case, the partition had only
one "writer": no other programs were writing to the partition,
so none of the fragmentation is caused by "file multiplexing".
All of the fragmentation seen is caused by the NTFS
compression method.

I made a ~55GB partition, and "made" a file using dd.exe. This
avoids File Explorer (so the OS cannot use any convenient
optimizations it might have up its sleeve).

dd if=/dev/random of=big.bin bs=1048576 count=54000
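
If you'd rather not hunt down a dd port, here is a rough C
equivalent of that command line, pumping incompressible random
bytes through plain WriteFile calls. The 1MiB buffer and the
54000 count match the dd arguments above:

#include <windows.h>
#include <bcrypt.h>
#include <stdio.h>
#pragma comment(lib, "bcrypt")

int main(void)
{
    static UCHAR buf[1048576];              /* 1 MiB, like bs=1048576 */
    DWORD written;

    HANDLE h = CreateFileW(L"F:\\big.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    for (int i = 0; i < 54000; i++) {       /* like count=54000 */
        /* cryptographic RNG, so the compressor gets no traction */
        BCryptGenRandom(NULL, buf, sizeof buf,
                        BCRYPT_USE_SYSTEM_PREFERRED_RNG);
        if (!WriteFile(h, buf, sizeof buf, &written, NULL) ||
            written != sizeof buf) {
            printf("stopped at MiB %d, error %lu\n", i, GetLastError());
            break;                          /* the failure under test */
        }
    }
    CloseHandle(h);
    return 0;
}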

On Windows 7, my test file almost wasn't big enough to
trigger the bug, but the program crapped out before it
finished creating the big test file. Basically, the volume
was very close to being full, without being full.

When I use the NTFS utility "nfi.exe" from Win2K, it shows
the classic pattern: one file has multiple MFT entries,
and they're linked together somehow.

******* Windows 7 - NTFS compressed very large file *******
NTFS File Sector Information Utility.
Copyright (C) Microsoft Corporation 1999. All rights reserved.

File 42
\big.bin
$DATA (nonresident)
logical sectors 6200656-6287951 (0x5e9d50-0x5ff24f) \
logical sectors 1287344-1287471 (0x13a4b0-0x13a52f) \___ 120 lines total
... /
logical sectors 1287472-6025775 (0x13a530-0x5bf22f) /

File 43
\big.bin
$DATA (nonresident)
logical sectors 111125160-111125207 (0x69fa2a8-0x69fa2d7) \
logical sectors 111145040-111145087 (0x69ff050-0x69ff07f) \___ 192 lines total
... /
logical sectors 111420992-111422959 (0x6a42640-0x6a42def) /
....
File 57
\big.bin
$DATA (nonresident)
logical sectors 111422960-111424919 (0x6a42df0-0x6a43597) \
logical sectors 111424920-111426879 (0x6a43598-0x6a43d3f) \___ 17 lines total
... / (Smaller sets near
logical sectors 111426880-111428839 (0x6a43d40-0x6a444e7) / the end)

******* Windows 7 - NTFS compressed very large file *******

So that is 16 file pointers and a total of 1247 fragments.
Free defragmenters cannot deal well with this. They leave
the 16 file pointers in place, so the fragment count cannot
drop below 16. They may be able to reduce the fragment set
under each File# entry to just one set of clusters.
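
You can approximate what nfi.exe prints for a single file with
the documented FSCTL_GET_RETRIEVAL_POINTERS ioctl. A sketch
(it reports cluster numbers rather than sectors, so on a
4KB-cluster, 512-byte-sector volume multiply by 8 to compare
with nfi output; for compressed files an LCN of -1 marks a
hole inside a compression unit):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileW(L"F:\\big.bin", FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    STARTING_VCN_INPUT_BUFFER in = {0};
    union { RETRIEVAL_POINTERS_BUFFER rpb; UCHAR raw[65536]; } out;
    DWORD got, extents = 0;

    for (;;) {
        BOOL ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                                  &in, sizeof in, &out, sizeof out,
                                  &got, NULL);
        if (!ok && GetLastError() != ERROR_MORE_DATA)
            break;                  /* resident file, or a real error */
        for (DWORD i = 0; i < out.rpb.ExtentCount; i++, extents++)
            printf("LCN %lld\n",
                   (long long)out.rpb.Extents[i].Lcn.QuadPart);
        if (ok)
            break;                  /* whole extent list retrieved */
        in.StartingVcn = out.rpb.Extents[out.rpb.ExtentCount - 1].NextVcn;
    }
    printf("%lu extents\n", extents);
    CloseHandle(h);
    return 0;
}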

Summary of what happened in Windows 7:

1) File transfer stopped before completion.
2) All metadata correct. Partition size correct.
Remaining space correct.
3) No remedial action required. The file is ruined, and the
user knows it, thanks to a meaningless error message. (It
doesn't hint that NTFS is at fault.) But at least you know
the file was not successfully created.

Now, let's study the latest 16299.248 behavior of Windows 10.
I set up the test partition as in Windows 7.

I prepare the file the same way (again created with dd.exe).

The file transfer stops at *half* the partition size. A 56GB
partition can have a 28GB file, and then it stops. There is no
obvious reason why it is stopping at this point. No metadata
points to a wasted 28GB chunk somewhere. Puzzling.

Now, the interesting part. Even though the file has been
through NTFS compression, this is the MFT representation:
only four fragments and one file pointer! You might be
thinking "amazing", and you'd be right. Somehow, the
NTFS compression method has been modified enough to give
a different fragmentation pattern. The disk won't actually
be sluggish reading this file back later.

File 44
\big.bin
$STANDARD_INFORMATION (resident)
$FILE_NAME (resident)
$DATA (nonresident)
logical sectors 62576-6156911 (0xf470-0x5df26f)
logical sectors 6732776-6732903 (0x66bbe8-0x66bc67)
logical sectors 6701672-6732775 (0x664268-0x66bbe7)
logical sectors 6732904-56704103 (0x66bc68-0x3613c67)

There's only one problem, though. If I do Properties on the
partition, it says the entire 56GB partition is "full", as
if the volume bitmap were reporting something like that.
There is only 28GB of file on the disk, and the other 28GB
should be reported as free space.

The file has the appropriate properties. It's a 28GB file that
takes 28GB on disk. This is because the test case is carefully
chosen to be "incompressible": the /dev/random source fed to
dd.exe means the NTFS compressor makes little progress against it.
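
Both numbers can be cross-checked without the GUI. A small
sketch: GetDiskFreeSpaceExW gives the volume totals that the
Properties pie is drawn from, and GetCompressedFileSizeW gives
the real on-disk allocation of an NTFS-compressed file:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    ULARGE_INTEGER avail, total, totalFree;
    if (GetDiskFreeSpaceExW(L"F:\\", &avail, &total, &totalFree))
        printf("volume: %llu bytes total, %llu free\n",
               total.QuadPart, totalFree.QuadPart);

    DWORD hi = 0;
    DWORD lo = GetCompressedFileSizeW(L"F:\\big.bin", &hi);
    if (lo != INVALID_FILE_SIZE || GetLastError() == NO_ERROR)
        printf("big.bin: %llu bytes on disk\n",
               ((unsigned long long)hi << 32) | lo);
    return 0;
}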

I ran "chkdsk /f /r F: " in an attempt to fix it.

CHKDSK reported no errors! Clever. Clever like every other
time lately that I run CHKDSK and get a "ho hum" response.

I selected those options in the hope that the partition would
be dismounted. When the partition was remounted after CHKDSK
finished, Properties reported the partition as half full,
with a 28GB file. In other words, the Properties for the
partition are now correct after the remount. CHKDSK probably
didn't do anything, but we'll never know for sure.
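
For reference, a dismount can be forced without CHKDSK
("fsutil volume dismount F:" from an elevated prompt should do
the same). A minimal sketch, assuming nothing else holds the
volume open:

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    DWORD ret;
    HANDLE h = CreateFileW(L"\\\\.\\F:", GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    if (!DeviceIoControl(h, FSCTL_LOCK_VOLUME,
                         NULL, 0, NULL, 0, &ret, NULL))
        printf("lock failed: %lu\n", GetLastError());
    else if (!DeviceIoControl(h, FSCTL_DISMOUNT_VOLUME,
                              NULL, 0, NULL, 0, &ret, NULL))
        printf("dismount failed: %lu\n", GetLastError());

    CloseHandle(h);     /* releases the lock; next access remounts */
    return 0;
}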

Summary of what happened in Windows 10:

1) File transfer stopped before completion.

Only half as much data can be handled this way before failure,
when compared to Windows 7. That's not an improvement. I *needed*
this case to work recently (during a software build), and had
to re-jig my setup to get around this mess.

2) Metadata incorrect. Remaining space incorrect.

3) Remedial action. Run CHKDSK or find a means to
dismount the partition, and see if the calculated
size will correct itself. CHKDSK reports no error
on the scan.

File is ruined as before. That part didn't change.

If you are not using NTFS compression (the tick box is
not ticked), of course none of this happens.

This is *only* an issue if you use NTFS compression.

Regular disk usage is not affected.

*******

The purpose of reporting this here is to serve as a reminder
that NTFS is being changed in subtle ways; if you see issues,
keep that possibility in mind. I would prefer it if Microsoft
wrote a tutorial article explaining the "improvements" it is
making, but... blah blah, blah blah blah.

*******

Questions this week:

1) Why isn't CHKDSK reporting more problems ?
In a couple of situations I've been in, I should have
received a report of what was going on.

2) Why has defrag stopped working ? Even HDDs are now
having TRIM applied to them (which is incorrect).

3) What's up with NTFS compression ?

4) Is the file system attempting real-time defrag ?
(No, because running analysis still reports a non-zero
fragmentation level. Even if you take the 50MB file size
limit into account, there seems to be room for more
optimization.)

5) I've also had one case of corruption after running
JKDefrag, which in the past, and on other OSes, has
been *flawless*. Has MS buggered the defrag API ?
The "agreed-upon" API. Heaven help us.

6) Why is Windows 10 making malformed $MFTMIRR ?

Paul
  #2  
Old March 7th 18, 10:54 AM posted to alt.comp.os.windows-10
Mr. Man-wai Chang

On 7/3/2018 18:46, Paul wrote:
I've been noticing some funny things lately, with
the defragment option in the Optimize panel. Mine no
longer works (but such reports have gone back for
a couple years, so you could easily dismiss this as
"just my problem").


While we could just stop using the Defrag API, would this flaw affect
the integrity of huge files (usually video) stored in the NTFS partitions?

--
@~@ Remain silent! Drink, Blink, Stretch! Live long and prosper!!
/ v \ Simplicity is Beauty!
/( _ )\ May the Force and farces be with you!
^ ^ (x86_64 Ubuntu 9.10) Linux 2.6.39.3
No borrowing! No scams! No * money! No compensated dating! No fighting!
No robbery! No suicide! No praying to gods! Please consider Comprehensive
Social Security Assistance (CSSA):
http://www.swd.gov.hk/tc/index/site_...sub_addressesa
  #3  
Old March 7th 18, 11:43 AM posted to alt.comp.os.windows-10
Paul[_32_]

Mr. Man-wai Chang wrote:
On 7/3/2018 18:46, Paul wrote:
I've been noticing some funny things lately, with
the defragment option in the Optimize panel. Mine no
longer works (but such reports have gone back for
a couple years, so you could easily dismiss this as
"just my problem").


While we could just stop using the Defrag API, would this flaw affect
the integrity of huge files (usually video) stored in the NTFS partitions?


If you're not using any "features" in Windows 10,
you have nothing to worry about.

The broken defrag just won't work when you click the
button in the Optimize dialog.

As for the third-party defragmenters, I don't know what
to tell you. I would recommend a backup before testing
them again.

What I'm worried about is having to do a lot of testing
over again, because of the "unknown" nature of the changes.

We're not monkeys out here. We're customers.

I can tell you my copy of Windows 7 works. But
that's not a very nice conclusion, now is it?

Paul
  #4  
Old March 7th 18, 12:08 PM posted to alt.comp.os.windows-10
Ed Cryer

Paul wrote:
I've been noticing some funny things lately with
the defragment option in the Optimize panel. Mine no
longer works...

...

6) Why is Windows 10 making malformed $MFTMIRR ?

Paul


Will this affect portable HDs containing videos, pictures, music? I
regularly move these between Win7 and Win10 systems, always NTFS-formatted.

Ed

  #5  
Old March 7th 18, 12:12 PM posted to alt.comp.os.windows-10
John Doe[_8_]

Paul wrote:

...

5) I've also had one case of corruption after running
JKDefrag. Which in the past, and on other OSes, has been *flawless*.
Has MS buggered with the defrag API ? The "agreed-upon" API. Heaven
help us.


Microsoft has rewritten parts of Windows since Bill Gates and Steve
Ballmer left. Messing with utilities that Microsoft has always sucked at
would be no surprise to me. Hopefully they are employing some of the
Sysinternals guys, but they are probably not willing to spend on it.
If it blows up, we will hear about it.

Have you posted to anything other than wiki?
  #6  
Old March 7th 18, 12:27 PM posted to alt.comp.os.windows-10
Paul[_32_]

Ed Cryer wrote:
Paul wrote:
I've been noticing some funny things lately with
the defragment option in the Optimize panel...

...


Will this affect portable HDs containing videos, pictures, music? I
regularly move these between Win7 and Win10 systems, always NTFS-formatted.

Ed


No, there should be no problem.

My concern is mainly with "changing a standard",
and the side effects this is going to have, and the
possibility that early adopters (i.e. regular customers)
are going to be the ones finding out about the issues.

Just be careful with defragmenters. If you want to defragment,
maybe go back to Windows 7 or something. If I knew what happened
to mine, I'd tell you :-)

If they want to change the color of the poo emoji
to purple, I don't care. If they want to mess
with NTFS, I do care. I care enough to erase
all the Windows 10 installs here!

It's like selling me a new car, and putting
bald tires on it. I don't want to be leaning
out the window, checking the tires all the time.
I just want to drive the damn thing.

Paul
  #7  
Old March 7th 18, 01:09 PM posted to alt.comp.os.windows-10
Paul[_32_]

John Doe wrote:
Paul wrote:

...

5) I've also had one case of corruption after running
JKDefrag. Which in the past, and on other OSes, has been *flawless*.
Has MS buggered with the defrag API ? The "agreed-upon" API. Heaven
help us.


Microsoft has rewritten parts of Windows since Bill Gates and Steve
Ballmer left. Messing with utilities that Microsoft has always sucked at
would be no surprise to me. Hopefully they are employing some of the
SystemInternals guys. But probably not willing to spend on it. If it blows
up, we will hear about it.

Have you posted to anything other than wiki?


I haven't posted this anywhere but here.

The NTFS version number isn't changed, as far as I know.

If you see any articles on this topic, any tech
details, post back.

At first, when I was seeing things, I figured it was
just me, and everything was normal. I want people to be
on the lookout for either examples or tech articles detailing
what's up. I can't keep my own systems healthy here if
I don't have reliable info on "what's a hardware failure"
and "what's a software failure".

Paul
  #8  
Old March 7th 18, 02:25 PM posted to alt.comp.os.windows-10
Mr. Man-wai Chang

On 7/3/2018 19:43, Paul wrote:

I can tell you my copy of Windows 7 works. But
that's not a very nice conclusion, now is it.


They don't have the privilege to directly access the NTFS
file system, I suppose.

--
@~@ Remain silent! Drink, Blink, Stretch! Live long and prosper!!
/ v \ Simplicity is Beauty!
/( _ )\ May the Force and farces be with you!
^ ^ (x86_64 Ubuntu 9.10) Linux 2.6.39.3
No borrowing! No scams! No * money! No compensated dating! No fighting!
No robbery! No suicide! No praying to gods! Please consider Comprehensive
Social Security Assistance (CSSA):
http://www.swd.gov.hk/tc/index/site_...sub_addressesa
  #9  
Old March 7th 18, 03:59 PM posted to alt.comp.os.windows-10
Zaghadka

On Wed, 07 Mar 2018 08:09:11 -0500, in alt.comp.os.windows-10, Paul
wrote:

...


I haven't posted this anywhere but here.

The NTFS version number isn't changed, as far as I know.

If you see any articles on this topic, any tech
details, post back.

At first when I was seeing things, I figured it was
just me, and everything was normal. I want people to be
on the lookout, for either examples or tech articles detailing
what's up. I can't keep my own systems healthy here, if
I don't have reliable info on "what's a hardware failure"
and "what's a software failure".


Are you a hardware guy? It's software.

Are you a software guy? It's hardware.

That's always been good enough for me. :^P

(Very interesting post. Archived.)

--
Zag

No one ever said on their deathbed, 'Gee, I wish I had
spent more time alone with my computer.' ~Dan(i) Bunten
  #10  
Old March 8th 18, 03:09 AM posted to alt.comp.os.windows-10
B00ze

Good day Paul.

On 2018-03-07 05:46, Paul wrote:

I've been noticing some funny things lately, with
the defragment option in the Optimize panel. Mine no
longer works (but such reports have gone back for
a couple years, so you could easily dismiss this as
"just my problem").


No idea why, shouldn't it start some external program when you click the
button? Can we run that manually?

So instead, I can present a test result today,
that shows a difference between Windows 7 NTFS
behavior and Windows 10. The bad thing is, the Windows
10 behavior is "wrong" and represents a corruption of sorts.
The changes made are "less correct" than Windows 7.

The Windows 7 behavior is an issue with the design of
NTFS, that IT staff have known about for years, and
it can't really be fixed because of the nature of
how storage in NTFS works.


Ah! I'm IT staff and I did not know about this, lol. I once
compressed a 40GB PST file, and it worked fine, but discovered
that starting up Outlook made the OS uncompress the entire file
and keep that uncompressed copy on disk, so I ended up doubling
the amount of space Outlook took while running; I had to give
up on that. But I did not see the file split into 16 files...

OK, now, copy a single large file onto that partition.
NTFS compression chops the file into small pieces, and tries
to compress each piece. In the process of doing so, the file
becomes fragmented. In my test case, the partition had only


I don't understand why it becomes fragmented, but this tries
to explain it (look in the comments):

https://blogs.msdn.microsoft.com/ntd...s-compression/

one "writer". No other programs were writing to the partition,
so none of the fragmentation is caused by "file multiplexing".
All of the fragmentation seen, is caused by the NTFS
compression method.

I made a ~55GB partition, and "made" a file using dd.exe. This
allows avoiding File Explorer (so the OS cannot use any convenient
optimizations it might have up its sleeve).

dd if=/dev/random of=big.bin bs=1048576 count=54000

On Windows 7, my test file almost wasn't big enough to
trigger the bug. But, the program crapped out before it
finished creating the big test file. Basically, the volume
was very close to being full, without being full.

When I use the NTFS utility "nfi.exe" from Win2K, it shows
the classic pattern. One file, has multiple NTFS entries,
and they're linked together somehow.

******* Windows 7 - NTFS compressed very large file *******
NTFS File Sector Information Utility.
Copyright (C) Microsoft Corporation 1999. All rights reserved.

File 42
File 43
....
File 57

So that is 16 file pointers and a total of 1247 fragments.
Free defragmenters cannot deal well with this. They leave
the 16 file pointers, so the fragment count cannot drop
below 16. They may be able to reduce the fragment set under
each File# entry to just the one set of clusters.


Hmmm, is it normal that a file can have more than a single file
pointer? I don't know enough about NTFS, but unless there is a
limit to the number of extents that a file can have, I don't see
why the big file ended up with more than one file pointer...

The file transfer stops at *half* the partition size. A 56GB
partition can have a 28GB file, and then it stops. There is no
obvious reason why it is stopping at this point. No metadata
points to a wasted 28GB chunk somewhere. Puzzling.

Now, the interesting part. Even though the file has been
through NTFS compression, this is the MFT representation.
Only four fragments and one file pointer! You might be
thinking "amazing", and you'd be right. So somehow, the
NTFS compression method has been modified enough, to give
a different fragmentation pattern. The disk won't actually
be sluggish, reading this file back later.

File 44
\big.bin
$STANDARD_INFORMATION (resident)
$FILE_NAME (resident)
$DATA (nonresident)
logical sectors 62576-6156911 (0xf470-0x5df26f)
logical sectors 6732776-6732903 (0x66bbe8-0x66bc67)
logical sectors 6701672-6732775 (0x664268-0x66bbe7)
logical sectors 6732904-56704103 (0x66bc68-0x3613c67)


Yeah, I'd say this is a big improvement. Now if they would just
make it work on clusters other than 4K...

There's only one problem though. If I do properties on the
partition, it says the entire 56GB partition is "full". Like
the volume bitmap was reporting something like that. There
is only 28GB of file on the disk, and the other 28GB should
be reported as white space.

The file has the appropriate properties. It's a 28GB file that
takes 28GB on disk. This is because the test case is carefully
chosen to be "incompressible". The /dev/random of dd.exe means
the NTFS compressor makes little progress against it.

I ran "chkdsk /f /r F: " in an attempt to fix it.

CHKDSK reported no errors! Clever. Clever like every other
time lately I run CHKDSK, and get a "ho hum" response.

I selected those options, in the hope the partition would be
dismounted. When the partition was remounted after CHKDSK was
finished, the Properties reports the partition was half full
with a 28GB file. In other words, the Properties for the
partition are now correct, after the remount. CHKDSK probably
didn't do anything, but we'll never know for sure.


Why will we never know? Just repeat the test: don't run CHKDSK,
just reboot, and see what happens when the partition is
remounted; maybe it's all well and correct once it is dismounted
and remounted. Some kind of bug?

...

The purpose of me reporting this here, is as a reminder
that NTFS is being changed in subtle ways, and if you
see issues, keep that possibility in mind. I would
prefer if Microsoft wrote a tutorial article explaining
the "improvements" it is making, but... blah blah, blah
blah blah.


They did announce LZX compression, so they *are* playing around with it.

https://theitbros.com/lzx-new-window...ion-algorithm/

Questions this week:

1) Why isn't CHKDSK reporting more problems ?
A couple of situations I've been in, I should have
received a report of what was going on.


Could be CHKDSK bypasses the in-memory disk information and does
its own I/O, and the bug with the free space is only in memory,
i.e. if you dismount and remount, all is well...

2) Why has defrag stopped working ? Even HDDs now,
are having TRIM applied to them (which is incorrect).


Lol, no idea ;-)

3) What's up with NTFS compression ?


Well, it /looks/ like they are looking into improving it (e.g. LZX).

5) I've also had one case of corruption after running
JKDefrag. Which in the past, and on other OSes, has
been *flawless*. Has MS buggered with the defrag API ?
The "agreed-upon" API. Heaven help us.


JKD is pretty old, why are you not using MyDefrag?

Regards,

--
! _\|/_ Sylvain /
! (o o) Member David-Suzuki-Fdn/EFF/Red+Cross/SPCA/Planetary-Society
oO-( )-Oo Is there another word for synonym?

  #11  
Old March 8th 18, 04:12 AM posted to alt.comp.os.windows-10
Jason

On Wed, 07 Mar 2018 05:46:01 -0500 "Paul" wrote in
article

I've been noticing some funny things lately, with
the defragment option in the Optimize panel
Paul


Wow! I always assumed that the tools were adequate/correct. Now I wonder.


  #12  
Old March 8th 18, 05:12 AM posted to alt.comp.os.windows-10
Paul[_32_]

Jason wrote:
On Wed, 07 Mar 2018 05:46:01 -0500 "Paul" wrote in
article
I've been noticing some funny things lately, with
the defragment option in the Optimize panel
Paul


Wow! I always assumed that the tools were adequate/correct. Now I wonder.


If you use the Optimize dialog, click the button and the
run takes ten minutes, then yours is working.

If you click the button, and with your ninja vision you
see the word "TRIM" and 0.8 seconds later the HDD defrag
is "finished", then yours is *not* working. For starters,
just the "analyze" phase takes time, and it's not even
bothering to analyze the partition before the so-called
defrag pass. It's simply treating all the volumes as
TRIMmable.
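
One cross-check, assuming the in-box defrag.exe still honors
its documented switches, is an analysis-only pass from an
elevated prompt. If the engine is healthy it prints a
fragmentation report; if it's doing what I describe above,
watch what operation it claims to be performing:

defrag F: /A /V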

My guess is that the defrag package has some sort of
health check. It has decided the partitions are
unhealthy and, to avoid corruption, is not processing
them. I've looked at other theories, and so far none
offers a credible explanation.

Apparently defrag has failed for people before, so it's
not like there is a good strong "failure" signal on
Google for a recent problem. I would expect my
OSes to fail independently if it were some edge
condition, rather than all flailing at the
same time. My suspicion at the moment is that some
update(s) have done this. If CHKDSK cannot tell
me the truth about the volume (defrag has some info
that CHKDSK doesn't), then I'm not going to be able
to figure this out.

Paul
  #13  
Old March 8th 18, 05:30 AM posted to alt.comp.os.windows-10
Paul[_32_]

B00ze wrote:


They did announce LZX compression, so they *are* playing around with it.

https://theitbros.com/lzx-new-window...ion-algorithm/


I switched to testing a file of all-zeros, and it reports
"0 bytes on disk", "55GB size". And NFI on both Windows 7 and
Windows 10, reports no clusters at all being used for the
test files.

It's possible to store a small file directly in the $MFT,
but Windows 7 reports the file size as "4KB on disk", yet the
cluster list is blank, as on Windows 10. Maybe it too is
in the $MFT. I suppose you could slip a dictionary and
repetition counts into a small in-$MFT entry.

I checked the files to see if Microsoft had cheated and
marked the files as "sparse", and they didn't do that.

File 43
\big.bin
$STANDARD_INFORMATION (resident)
$FILE_NAME (resident)
$DATA (nonresident) === empty cluster list, stored in $MFT instead?
Only Win7 has "4K on disk" for the file size.
Win10 claims the file size on disk is "0".

fsutil usn readdata F:\big.bin

Major Version : 0x3
Minor Version : 0x0
FileRef# : 0x0000000000000000000100000000002b
Parent FileRef# : 0x00000000000000000005000000000005
Usn : 0x0000000000000000
Time Stamp : 0x0000000000000000 12:00:00 AM 1/1/1601
Reason : 0x0
Source Info : 0x0
Security Id : 0x0
File Attributes : 0x820 = archived + compressed, no sparse bit
File Name Length : 0xe
File Name Offset : 0x4c
FileName : big.bin

FILE_ATTRIBUTE_READONLY = 1 (0x1)
FILE_ATTRIBUTE_HIDDEN = 2 (0x2)
FILE_ATTRIBUTE_SYSTEM = 4 (0x4)
FILE_ATTRIBUTE_DIRECTORY = 16 (0x10)
FILE_ATTRIBUTE_ARCHIVE = 32 (0x20) === archived bit
FILE_ATTRIBUTE_NORMAL = 128 (0x80)
FILE_ATTRIBUTE_TEMPORARY = 256 (0x100)
FILE_ATTRIBUTE_SPARSE_FILE = 512 (0x200)
FILE_ATTRIBUTE_REPARSE_POINT = 1024 (0x400)
FILE_ATTRIBUTE_COMPRESSED = 2048 (0x800) === compressed bit
FILE_ATTRIBUTE_OFFLINE = 4096 (0x1000)
FILE_ATTRIBUTE_NOT_CONTENT_INDEXED = 8192 (0x2000)
FILE_ATTRIBUTE_ENCRYPTED = 16384 (0x4000)
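
The same check can be done programmatically; a tiny sketch
that reads the attribute word for the test file and tests the
two bits in question:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD a = GetFileAttributesW(L"F:\\big.bin");
    if (a == INVALID_FILE_ATTRIBUTES)
        return 1;

    printf("attributes: 0x%lx\n", a);
    printf("compressed: %s\n",
           (a & FILE_ATTRIBUTE_COMPRESSED)  ? "yes" : "no");
    printf("sparse:     %s\n",
           (a & FILE_ATTRIBUTE_SPARSE_FILE) ? "yes" : "no");
    return 0;
}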

But the problem remains that Windows 10 Properties reports
a bogus "disk full" situation, whereas Windows 7 shows
"the disk is almost empty" when it has multiple 55GB
files-full-of-zeros on it.

I wouldn't even be investigating this, if I was putting
28GB of data on the drive, and the Win10 Properties circle
said 28GB. It's the fact that the Properties report the
disk is full, when it isn't full, that irks me. The Windows 7
test performed more normally.

You cannot compress a file larger than the partition, in
either case. Even though a file-of-zeros is highly
compressible with LZ compressors, the file system doesn't
allow the file being placed to be larger than the partition.
But you are allowed to place multiple 55GB files on the
partition in Windows 7, and it still has plenty of space for
the test to be repeated thousands of times, putting 55TB+ of
data on the partition. But it won't allow you to write a
55TB file-of-zeros to a 55GB NTFS-compressing partition.

Paul
  #14  
Old March 8th 18, 07:44 AM posted to alt.comp.os.windows-10
Martin Edwards

On 3/8/2018 3:09 AM, B00ze wrote:
Good day Paul.

...
JKD is pretty old, why are you not using MyDefrag?

Regards,

Is this part of W10, or do you have to get it from somewhere?

--
Myth, after all, is what we believe naturally. History is what we must
painfully learn and struggle to remember. -Albert Goldman
  #15  
Old March 8th 18, 09:29 AM posted to alt.comp.os.windows-10
Paul[_32_]

Martin Edwards wrote:
On 3/8/2018 3:09 AM, B00ze wrote:
JKD is pretty old, why are you not using MyDefrag?

Regards,

Is this part of W10, or do you have to get it from somewhere?


JKDefrag and MyDefrag are written by Jeroen Kessels.

JKDefrag was open source, and was eventually discontinued.
Version 3.36 is available in the archive.

https://web.archive.org/web/20150107....com/JkDefrag/

MyDefrag is closed source, and supports scripting of some
operations as I understand it. The website for it is gone too.
Version 4.3.1 is available for download here.

https://web.archive.org/web/20150702...ndInstall.html

I don't know if the source was sold off, or the project
was just canned.

They're both examples of defragmenters that use the
official Windows API to do the dirty work. The program
tells the defrag API to "move block X to location Y",
or something along those lines, and the Microsoft-provided
library answers the call. The idea is that the library is
"safe" against a lot of various calamities, which is why
third parties have agreed to use it. And Microsoft didn't
originate the API either: it was provided by a third party,
and Microsoft added it so it could be used more routinely.
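
The core of that "move block X to location Y" call looks
roughly like this, per the documented interface. The VCN/LCN
numbers here are made up for illustration; a real defragmenter
derives them from the file's retrieval pointers and the volume
bitmap, and runs elevated:

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    HANDLE vol  = CreateFileW(L"\\\\.\\F:",
                              GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                              OPEN_EXISTING, 0, NULL);
    HANDLE file = CreateFileW(L"F:\\big.bin", FILE_READ_ATTRIBUTES,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                              OPEN_EXISTING, 0, NULL);
    if (vol == INVALID_HANDLE_VALUE || file == INVALID_HANDLE_VALUE)
        return 1;

    MOVE_FILE_DATA mv = {0};
    mv.FileHandle           = file;
    mv.StartingVcn.QuadPart = 0;      /* first cluster of the file */
    mv.StartingLcn.QuadPart = 100000; /* hypothetical free run     */
    mv.ClusterCount         = 256;    /* 1MB of 4KB clusters       */

    DWORD ret;
    if (!DeviceIoControl(vol, FSCTL_MOVE_FILE, &mv, sizeof mv,
                         NULL, 0, &ret, NULL))
        printf("FSCTL_MOVE_FILE failed: %lu\n", GetLastError());

    CloseHandle(file);
    CloseHandle(vol);
    return 0;
}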

Microsoft hasn't always written its own defragmenters either.
The defragmenter provided in-box in WinXP was actually
written by a third party. That's why it moves data to the
left and consolidates free space so well. Making a "solid
block" out of the files is a hallmark behavior of third-party
defragmenters. The idea is, people pay good money to have
their files moved to the left like that. And on WinXP,
the built-in did that job for you.

The Windows 10 one isn't quite as fixated on that topic.
In fact, it leaves gaps (white space) between files
on purpose, as part of a strategy. The Microsoft behavior
is based on "science", whereas the third-party programs
did things "because they looked nice". If you want the
traditional "look", that'll take some third-party software.

Paul
 



