A Windows XP help forum. PCbanter


Virtual Machine and NTFS



 
 
#31, October 18th 10, 02:47 PM
posted to microsoft.public.windowsxp.general, microsoft.public.win98.gen_discussion
Philo Pastry

glee wrote:

You can force the cluster size....it just means there is a
ridiculously large number of clusters on a drive that size,
and among other things, most drive tools will not work on a
drive with that many clusters (scandisk, defrag, drive
diagnostic apps).


DOS scandisk has no problems scanning volumes with many millions of
clusters (120 million was the most I've tried and it worked).

Windows ME versions of defrag and scandisk (scandskw + diskmaint.dll)
have a cut-off somewhere around 28 to 32 million clusters. The Windows
ME versions of scandisk and defrag are frequently transplanted into
tweaked Win-98 installations.

The MS-DOS version of Fdisk (May 2000 update) has a limit of 512 GB (that's
the largest drive it can correctly partition). There is something
called "Free Fdisk" that can partition larger drives (at least 750 GB,
and probably up to 1 TB). MS-DOS format.com can format volumes of up to
1024 GB (1 TB).
#32, October 18th 10, 04:52 PM
John John - MVP

On 10/18/2010 10:20 AM, glee wrote:
"Bill in Co" wrote in message:
glee wrote:
"Bill in Co" wrote in message:
We could get into a debate on this, but with someone posing as "Philo
is wrong", one wonders if it would be worth it. Are you "98guy" in
disguise? :-)

I'd say that's quite likely, if not outright obvious. A 500GB SATA
drive as a single 4KB-cluster FAT32 partition, running Win98? Who else
do we know that does this and recommends it? ;-)


Maybe I'm forgetting something, but I seem to recall that as the
partition size got bigger, the cluster size also HAD to get larger (up
to 32K max) to keep the maximum allowable number of clusters within
the max 16 bit value (65,536) for FAT32. So how could one possibly
have 4 KB clusters on a 500 GB volume with FAT32?


You can force the cluster size....it just means there is a ridiculously
large number of clusters on a drive that size, and among other things,
most drive tools will not work on a drive with that many clusters
(scandisk, defrag, drive diagnostic apps).


Not to mention that it will result in a ridiculously big FAT of about
500MB! Anyone who understands how the FAT is read in a linear fashion
understands the folly of such a formatting scheme! This formatting
scheme effectively ensures that much of the disk structure will be paged
out - what an incredible hit on disk performance! The disk is already
the single biggest performance bottleneck on any computer, and this silly
formatting scheme will make it an even bigger bottleneck. Good thing
98Guy isn't handing out car advice; he would have us fill the bumpers
with lead while claiming that the added ballast makes cars go faster
while consuming less fuel...
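The "500 MB FAT" figure above can be sanity-checked with back-of-envelope arithmetic (a sketch; it assumes the standard 4-bytes-per-entry FAT32 on-disk format, one entry per data cluster, and ignores reserved sectors):

```python
# Sanity-check the "~500 MB FAT" figure: one 4-byte FAT32 entry per
# data cluster, with 4 KB clusters forced on a nominal 500 GB drive.
volume_bytes = 500 * 10**9
cluster_bytes = 4 * 1024

clusters = volume_bytes // cluster_bytes
fat_bytes = clusters * 4

print(clusters)          # 122070312 -> ~122 million clusters
print(fat_bytes / 1e6)   # ~488 MB per FAT copy (FAT32 keeps two copies)
```

So the round "500 MB" quoted in the thread is essentially right, and the drive carries two copies of that table.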

John
#33, October 18th 10, 06:11 PM
John John - MVP

On 10/17/2010 11:15 AM, Philo is wrong (98Guy) wrote:

Additionally, since many people are now storing movies and such
with large file sizes, FAT32 cannot handle any files over 4 gigs.


While that is true, it rarely comes up as a realistic or practical
limitation for FAT32. The most common multimedia format in common use
is the DVD .VOB file, which self-limits itself to 1 GB.


People working with video editing and multimedia files often run across
this 4GB file limitation. Backup/imaging utilities also often run into
problems caused by this file size limitation. It is a very practical
limitation of FAT32 and one that users often experience; people often
post asking about this problem.




The only file type that I ever see exceed the 4 gb size are virtual
machine image files, which you will not see on a win-9x machine but you
would see on an XP (or higher) PC running VM Ware, Microsoft Virtual PC,
etc. But 4 gb should be enough to contain a modest image of a virtual
windows-98 machine.

Additionally, XP is *deliberately* crippled in that it cannot create
a fat32 partition larger than 32 gigs.


Windows XP cannot format partitions larger than 32GB to FAT32 because
the increasing size of the FAT for bigger volumes makes these volumes
less efficient and for performance reasons Microsoft decided to draw the
line at 32GB for FAT32 volumes.



[snip...]

The extra sophistication and transaction journalling performed by NTFS
reduces it's overall performance compared to FAT32. So for those who
want to optimize the over-all speed of their PC's, FAT32 is a faster
file system than NTFS.


That is not completely true. FAT32 is generally faster on smaller
volumes, but on larger volumes NTFS is faster. This is why Microsoft
decided to put a limit of 32GB on the size of volumes which can be
formatted to FAT32 on Windows 2000 and later NT operating systems; the
size of the FAT on larger volumes is a hindrance on performance, and
Microsoft decided that 32GB was an acceptable cut-off point for FAT32
volumes.



I do a lot of computer repair work and have seen entire FAT32
file systems hosed by a bad shutdown. The user, in an attempt to
fix things, has typically run scandisk and *sometimes* has ended
up with a drive full of .chk files.


That's another common myth about FAT32 - that the appearance of many
.chk files must mean that it's inferior to NTFS.

While it might look untidy, the mere existence of those .chk files doesn't
mean anything about how competent or capable FAT32 is, and it's not
hard to just delete them and get on with your business.

You did not say in your example whether the user's drive and OS were operable
and functional despite the existence of those .chk files.


These .chk files are lost file segments that the scandisk utility could
not recover - damaged data! That the operating system remains "operable"
is a laughable excuse if user data is lost! Open a user file on a FAT32
drive, then, while the user is making changes to his file, yank the plug on
the machine and tell us how well (or not) the user data survives such an
event!




What I am saying is that NTFS is considerably more resilient.


What you don't understand about NTFS is that it will silently delete
user data to restore its own integrity as a way to cope with a failed
transaction, while FAT32 will create lost or orphaned clusters that are
recoverable but whose existence is not itself a liability to the user or
the file system.


Citations please...

John

#34, October 18th 10, 08:57 PM
J. P. Gilliver (John)

glee writes:
[]
The second list is the operating systems you can install it on, as a
host machine. I have read elsewhere that it will install and run on XP
Home as well as Pro, but have never tried.


That's what I thought. (Anyone else know?)

The first list is what operating systems are "supported" to be run as a
virtual system on the host. Other systems can be run....Win98, Linux,
etc...they are just not "supported", meaning you won't get any help or
support for issues, there may not be Additions available for
everything, or there may only be partial functionality of the
unsupported virtual system.


Yes, I thought so too (-:. [What's an "Addition" in this context?]
--
J. P. Gilliver. UMRA: 1960/1985 MB++G.5AL-IS-P--Ch++(p)Ar@T0H+Sh0!:`)DNAf

"The people here are more educated and intelligent. Even stupid people in
Britain are smarter than Americans." Madonna, in RT 30 June-6July 2001 (page
32)
#35, October 18th 10, 09:32 PM
Hot-text

Virtual PC 2004 SP1

http://www.microsoft.com/downloads/e...displaylang=en

System Requirements

Supported Operating Systems: Windows 2000 Service Pack 4; Windows Server 2003,
Standard Edition (32-bit x86); Windows XP Service Pack 2

Processor: Athlon®, Duron®, Celeron®, Pentium® II, Pentium III, or Pentium 4
Processor speed: 400 MHz minimum (1 GHz or higher recommended)

RAM: Add the RAM requirement for the host operating system that you will be
using to the requirement for the guest operating system that you will be using.
If you will be using multiple guest operating systems simultaneously,
total the requirements for all the guest operating systems that you need to
run simultaneously.

Available disk space: To determine the hard disk space required,
add the requirement for each guest operating system that will be installed.

Virtual PC 2004 SP1 runs on:
Windows 2000 Professional SP4,
Windows XP Professional,
and Windows XP Tablet PC Edition.

#36, October 18th 10, 09:34 PM
Hot-text

John John - MVP wins the debate hands down!

#37, October 19th 10, 02:13 AM
Buffalo



Philo Surrenders wrote:
philo top-posted:

I have probably over 200 years worth of FAT32 hard-drive usage
experience


This is how Philo surrenders an argument. Watch:

200 years of experience...
ok you win, my computer experience only goes back to 1968.


Because he didn't quote the rest of my statement:

(if you add up all the years of service of various FAT32
drives that I've installed, maintained, touched in one way
or another, etc) over the past dozen years.

How stupid!!
You truly should just keep quiet and leave with any dignity that may
possibly remain.
Buffalo


#38, October 19th 10, 03:01 AM
Philo Pastry

John John - MVP wrote:

People working with video editing and multimedia files often run
across this 4GB file limitations. Backup/imaging utilities also
often run into problems caused by this file size limitation,


About 3 years ago I installed XP on a 250 GB FAT32-partitioned hard
drive and installed Adobe Premiere CS3. It had no problems creating
large video files that spanned the 4 GB file-size limit of FAT32.

Windows XP cannot format partitions larger than 32GB to FAT32
because the increasing size of the FAT for bigger volumes makes
these volumes less efficient (bla bla bla)


Other than saying that this behavior was "by design", Microsoft has
never said *why* they gave the NT line of OSes the handicap of not being
able to create FAT32 volumes larger than 32 GB.

It's a fallacy that the entire FAT must be loaded into memory by any OS
(win-9x/XP, etc) for the OS to access the volume.

Go ahead and cite some performance statistics that show that the performance
of random-size file read/write operations goes down as the FAT size (# of
clusters) goes up.

Remember, we are not talking about cluster size here. FAT32 cluster
size (and hence small-file storage efficiency) can be exactly the same
as NTFS regardless of the size of the volume.
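For reference, the default cluster sizes that Windows' format picks for FAT32 by volume size can be sketched as a lookup table (the thresholds below are the commonly documented defaults; forcing a smaller cluster, as discussed in this thread, overrides the table at the cost of a proportionally larger FAT):

```python
# Commonly documented default FAT32 cluster sizes chosen by Windows
# format, keyed on volume size. Forcing 4 KB clusters on a large volume
# overrides this table and inflates the FAT proportionally.
DEFAULTS = [
    (8 * 2**30, 4 * 1024),    # up to 8 GB   -> 4 KB clusters
    (16 * 2**30, 8 * 1024),   # up to 16 GB  -> 8 KB clusters
    (32 * 2**30, 16 * 1024),  # up to 32 GB  -> 16 KB clusters
    (2 * 2**40, 32 * 1024),   # up to 2 TB   -> 32 KB clusters
]

def default_cluster(volume_bytes):
    for limit, cluster in DEFAULTS:
        if volume_bytes <= limit:
            return cluster
    raise ValueError("volume too large for FAT32")

print(default_cluster(500 * 10**9))  # 32768 -> 32 KB is the default
```

That default is why a 500 GB FAT32 volume normally ends up with 32 KB clusters rather than the 4 KB being debated here.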
#39, October 19th 10, 03:57 AM
mm

On Sun, 17 Oct 2010 10:02:22 +0100, "J. P. Gilliver (John)"
wrote:

mm writes:
On Sat, 16 Oct 2010 07:16:58 -0500, philo wrote:

On 10/16/2010 12:33 AM, mm wrote:
Hi! I'm moving to a new machine that probably won't run Win98, so I
planned to run it from a virtual machine under WinXP SP3.

Is it okay to have all the harddrive partitions NTFS, even though
win98 can't normally read NTFS?




It should work just fine.

If there are any problems they will not be due to the drive being NTFS
at any rate


Great, thank you. Now I have all the parts to fix up my friend's old
2.4 gig Dell for myself. I think I'll like the increased speed.

snip


If you're actually starting from scratch (which "have all the parts"
suggests to me that you are) anyway, received wisdom here seems to be


I had everything but the hard drive, so yeah, I'm starting from scratch
on that. Thanks. There is a lot of thread to read. Been very busy,
but I'll have time soon, I think. I plan to get back to you.

that you should set it up as FAT anyway: the alleged benefits of NTFS
being largely moot for the single home user, and XP will operate
perfectly happily under FAT.


#40, October 19th 10, 04:03 AM
John John - MVP

On 10/18/2010 11:01 PM, Philo Pastry wrote:
John John - MVP wrote:

People working with video editing and multimedia files often run
across this 4GB file limitations. Backup/imaging utilities also
often run into problems caused by this file size limitation,


About 3 years ago I installed XP on a 250 gb FAT-32 partitioned hard
drive and installed Adobe Premier CS3. It had no problems creating
large video files that spanned the 4 gb file-size limit of FAT32.

Windows XP cannot format partitions larger than 32GB to FAT32
because the increasing size of the FAT for bigger volumes makes
these volumes less efficient (bla bla bla)


Other than saying that this behavior was "by design", Microsoft has
never said *why* they gave the NT line of OS's the handicap of not being
able to create FAT32 volumes larger than 32 gb.


Raymond Chen talks about this here:

Windows Confidential A Brief and Incomplete History of FAT32
http://technet.microsoft.com/en-us/m...fidential.aspx


It's a fallacy that the entire FAT must be loaded into memory by any OS
(win-9x/XP, etc) for the OS to access the volume.


Of course it's a fallacy, and no one here said that the entire FAT had to
be loaded in memory. What you don't understand is that the FAT is
extensively accessed during disk operations, and having it cached in
RAM is one of the most efficient methods of speeding up disk operations.
You, on the other hand, seem to think that having the FAT as large as
possible and then paging it to disk is a smart thing to do... Why else
would anyone format a 500GB FAT32 volume with 4K clusters? What exactly
do you think you will gain with this formatting scheme that will be
so great as to dismiss the whopping performance hit imposed by a 500MB FAT?
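The reason the FAT is "extensively accessed" can be shown with a toy model: the FAT is a linked list of cluster numbers, so reading a fragmented file requires one FAT lookup per cluster (this is an illustrative sketch, not real on-disk code; the dict stands in for the on-disk table):

```python
# Toy model of a FAT cluster chain: the FAT entry for each cluster holds
# the number of the *next* cluster, so every cluster of a file costs one
# FAT access. A large, uncached FAT turns one file read into many extra
# disk hits -- which is the caching argument being made above.
EOC = 0x0FFFFFFF  # conventional FAT32 end-of-chain marker

def cluster_chain(fat, first_cluster):
    """Follow the linked list stored in the FAT itself."""
    chain = []
    cluster = first_cluster
    while cluster != EOC:
        chain.append(cluster)
        cluster = fat[cluster]   # one FAT access per cluster of the file
    return chain

# A tiny FAT: a fragmented file laid out as cluster 2 -> 5 -> 3 -> end.
fat = {2: 5, 5: 3, 3: EOC}
print(cluster_chain(fat, 2))     # [2, 5, 3]
```

With the whole FAT cached in RAM these lookups are nearly free; with a 500 MB FAT partly paged out, each one can cost a seek.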

John
#41, October 19th 10, 04:49 AM
Bill in Co

John John - MVP wrote:
On 10/18/2010 11:01 PM, Philo Pastry wrote:
John John - MVP wrote:

People working with video editing and multimedia files often run
across this 4GB file limitations. Backup/imaging utilities also
often run into problems caused by this file size limitation,


About 3 years ago I installed XP on a 250 gb FAT-32 partitioned hard
drive and installed Adobe Premier CS3. It had no problems creating
large video files that spanned the 4 gb file-size limit of FAT32.

Windows XP cannot format partitions larger than 32GB to FAT32
because the increasing size of the FAT for bigger volumes makes
these volumes less efficient (bla bla bla)


Other than saying that this behavior was "by design", Microsoft has
never said *why* they gave the NT line of OS's the handicap of not being
able to create FAT32 volumes larger than 32 gb.


Raymond Chen talks about this here:

Windows Confidential A Brief and Incomplete History of FAT32
http://technet.microsoft.com/en-us/m...fidential.aspx


It's a fallacy that the entire FAT must be loaded into memory by any OS
(win-9x/XP, etc) for the OS to access the volume.


Of course it's a fallacy, and no one here said that the entire FAT had to
be loaded in memory. What you don't understand is that the FAT is
extensively accessed during disk operations, and having it cached in
RAM is one of the most efficient methods of speeding up disk operations.
You, on the other hand, seem to think that having the FAT as large as
possible and then paging it to disk is a smart thing to do... Why else
would anyone format a 500GB FAT32 volume with 4K clusters? What exactly
do you think you will gain with this formatting scheme that will be
so great as to dismiss the whopping performance hit imposed by a 500MB
FAT?

John


The idea of having a super-large 500MB FAT (that often has to be
accessed sequentially, instead of via a B-tree) I find *wholly* repugnant!!


#42, October 19th 10, 05:17 AM
Philo Pastry

John John - MVP wrote:

Let's address your blatant lie:

"What you don't understand about NTFS is that it will silently
delete user-data to restore it's own integrity as a way to cope
with a failed transaction..."

It is you who doesn't understand anything about how NTFS works,
so you spread lies and nonsense! NTFS DOES NOT silently delete
user data to restore its own integrity, and
C. Quirke does not in any way say that in his blog.


Perhaps you have a reading comprehension problem.

This is what Quirke says, and what I've experienced first-hand when I see
IIS log file data being wiped away because of power failures:

-----------
It also means that all data that was being written is smoothly and
seamlessly lost. The small print in the articles on Transaction Rollback
make it clear that only the metadata is preserved; "user data" (i.e. the
actual content of the file) is not preserved.
-----------

Do you understand the difference between metadata and "user data" ?

What is being described is journaling, and it is perfectly normal
NTFS behaviour; this journaling ensures atomicity of the write
operations.


Journalling ensures the *completeness* of write operations. Partially
completed writes are rolled back to their last complete state. That can
mean that user data is lost.

You on the other hand seem to think that it is preferable to
have the file system keep incomplete or corrupt write operations
and then have scandisk run at boot time so that it may /try/
to recover lost clusters or so that it may save damaged file
segments


In my experience, drive reliability, internal caching and bad-sector
re-mapping have made most of what NTFS does redundant.

The odd thing is - I don't believe I've ever had to resort to scouring
through .chk files for data that was actually part of any sort of user
file that was corrupted. Any time I've come across .chk files, I've
never actually had any use for them.

And I can tell you that I would really be ****ed off if I was working on
a file on an NTFS system and it suffered a power failure or some other
sort of interruption and my file got journalled back to some earlier
state just because the file system didn't fully journal its present
state or last write operation.

I've seen too many examples of NT-server log files that contain actual
and up-to-date data one hour, and because of a power failure the system
comes back up and half the stuff that *was* in the log file is gone.
That's an example of metadata being preserved at the expense of user data.

Chris mentions in his blog how data which was recovered by
Scandisk should be treated as suspicious.


Recovered - as in the creation of .chk files? Like I said, I've never
had a use for them in the first place.

The NTFS method is to use journaling instead to guarantee
atomicity of the write operation, to guarantee that the write
is complete and free of errors.


No. You can still have erroneous write operations under NTFS and FATxx,
and the OS is supposed to retry the operation until the write succeeds.
If the write occurs during a system crash or power failure, there can be
no re-try. Journalling is meant to detect an erroneous write event that
was never corrected / completed and restore the file system to the
previous state before the event, even if some (or most, or all) user
data was in fact written to the drive prior to the failure but was not
journaled. That's where FAT32 will retain the user data, but it will be
lost under NTFS.

And as Quirke says, under FAT you can have a mismatch between the
file size as recorded in the directory entry vs the length of the
file chain, which is easily fixable.

He talks a lot more about the relative complexity of the actual
structure of NTFS compared to FAT32, the lack of proper documentation,
of diagnostic and repair tools, and the idea that the MFT may not be as
recoverable or redundant as the dual FATs of FAT32.

What is especially interesting is that a faulty FAT32 volume can be
mounted and inspected with confidence that it won't be immediately
"attacked" by unknown or uncontrolled read/write operations during its
mounting, as a faulty NTFS volume would be by NTFS.SYS. You basically
have to trust that NTFS.SYS knows what it's doing, that it knows
best how to recover a faulty NTFS volume, and whether it places more value
on file recovery vs file-system integrity (there is a huge difference
between the two).

I prefer the latter, and I am sure that most reading here would
prefer to keep the previous good version of the file that
experienced a write failure rather than have the file system
keep a newer copy of the file when it is incomplete or corrupt!


That depends on how large your "atoms" are in your "atomicity" analogy.

I have been burned, and have shaken my head many times, because I've seen
data lost on NTFS volumes due to the interplay between journalling and
write-caching after unexpected system shutdown events.

NTFS is more than journalling. There's the organizational structure or
pattern as to how you store files and directories, and there's the event
or transaction-monitoring and logging operations above that. You could
theoretically have journalling performed on a FAT32 file structure.

But like I said, NTFS is more convoluted and secretive than it needs to
be in the way it stores files on a drive (journalling or no
journalling).
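The metadata-versus-user-data distinction the two posters keep circling can be illustrated with a toy model (purely illustrative; real NTFS journaling is far more involved, and the class and method names here are invented for the sketch). Only changes whose journal record was committed survive recovery; anything written after the last commit is rolled back:

```python
# Toy model of metadata-only journaling: a write only "counts" if its
# journal record was committed before the crash. Data physically written
# without a committed record is rolled back on recovery -- the behaviour
# the thread describes as "the log file losing its last hour of data".
class ToyJournalFS:
    def __init__(self):
        self.journal = []        # committed records only

    def write(self, name, content, crash_before_commit=False):
        if crash_before_commit:
            return               # power fails before the commit record lands
        self.journal.append((name, content))

    def recover(self):
        # Replay committed records; uncommitted writes simply vanish.
        state = {}
        for name, content in self.journal:
            state[name] = content
        return state

fs = ToyJournalFS()
fs.write("log.txt", "hour 1 of data")
fs.write("log.txt", "hour 1 + hour 2", crash_before_commit=True)
print(fs.recover())   # {'log.txt': 'hour 1 of data'} -- the newer write is gone
```

The file system is perfectly consistent after recovery, which is the journaling guarantee; whether losing the newer data is acceptable is exactly the disagreement in this thread.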
#43, October 19th 10, 05:46 AM
Philo Pastry

John John - MVP wrote:

Other than saying that this behavior was "by design", Microsoft
has never said *why* they gave the NT line of OS's the handicap
of not being able to create FAT32 volumes larger than 32 gb.


Raymond Chen talks about this here:

Windows Confidential A Brief and Incomplete History of FAT32


=============
For a 32GB FAT32 drive, it takes 4 megabytes of disk I/O to compute the
amount of free space.
============

You do realize how trivial a 4 MB data transfer is today - and even 5 or 10
years ago - don't you?

Chen doesn't mention any other file or drive operation as being impacted
by having a large cluster count other than the computation of free space
- which I believe is infrequently performed anyway.
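The cost Chen quotes comes from the fact that computing FAT32 free space means scanning the entire table counting zero (free) entries, so the I/O grows linearly with the cluster count. A toy sketch (a Python list stands in for the on-disk table; data clusters start at entry 2):

```python
# Toy FAT free-space computation: every 4-byte entry must be examined,
# so a volume with more clusters costs proportionally more I/O just to
# answer "how much free space is there?".
EOC = 0x0FFFFFFF

fat = [0] * 1_000_000                 # one entry per cluster, all free
fat[2], fat[3], fat[4] = 3, 4, EOC    # one small file: clusters 2 -> 3 -> 4

free_clusters = sum(1 for entry in fat[2:] if entry == 0)
print(free_clusters)                  # 999995
```

Whether a full-FAT scan of that size is "trivial" on period hardware is precisely what the two posters disagree about.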

I formatted a 500 GB drive as a single FAT32 volume using a 4KB cluster
size just as an exercise to test whether Windows 98SE could be installed and
function on such a volume, and it did - with the exception that it would
not create a swap file on such a volume.

And as Chen mentions, yes - the *first* directory command on FAT32
volumes with a high cluster count does take a few minutes (but not
successive directory commands). What I found in my testing is that,
either in DOS or under Win-98, the first dir command (or Explorer view)
is instantaneous as long as the number of clusters doesn't exceed 6.3
million. This equates to a FAT size of about 25 MB.
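The cluster-count-to-FAT-size conversion used throughout this thread is just entries times 4 bytes; a quick check with a hypothetical helper (decimal megabytes, reserved sectors ignored):

```python
def fat_size_mb(clusters):
    # One 4-byte FAT32 entry per data cluster, in decimal megabytes.
    return clusters * 4 / 1e6

print(fat_size_mb(6_300_000))    # 25.2 -> matches the "about 25 MB" figure
print(fat_size_mb(122_000_000))  # 488.0 -> the 500 GB / 4 KB case above
```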

I have installed win-98 on FAT32 volumes of various sizes, formatted
with a range of cluster sizes from 4kb to 32kb resulting in volumes
ranging from 6 to 40 million clusters and have seen no evidence of a
performance hit during file manipulations, copying, searching, etc.

You on the other hand seem to think that having the FAT as
large as possible and then page it to disk is a smart thing
to do...


Other than the first dir command or first explorer session, I have seen
no performance hit under win-9x or even under XP when installed on FAT32
volumes with large FATs.
#44, October 19th 10, 06:33 AM
Hot-text

The NTFS file system is like a woman: a big hard drive is better for
my file system!
And Windows 98 is like an old man: anything more than FAT32 will be too big,
and all the oil in the world will not make his file system run right!



"Philo Pastry": John John - MVP wins the debate hands down, so give up!


#45, October 19th 10, 06:34 AM
Hot-text

"Philo Pastry": no, you concede the debate.


 



