A Windows XP help forum. PCbanter


Clone OS to HD via DOS and Clean Install XP?



 
 
  #31  
Old March 30th 04, 02:42 AM
Rod Speed
external usenet poster
 
Posts: n/a
Default Clone OS to HD via DOS and Clean Install XP?


cquirke (MVP Win9x) wrote
in message news
Rod Speed wrote
Dorothy Bradbury wrote


so there is a benefit to system area defragmentation.


Not necessarily. The modern reality is that the heads
are moving around quite a bit due to the nature of access
to various bits of the drive and the very theoretical extra
head movements due to what fragmentation does occur
with system files may not in practice actually be any slower.


If you are that bothered about system files suffering fragmentation then
use a separate disk (or partition) for system files re avoiding data-mixing.


And that wont necessarily help either, particularly with a separate partition.
That can produce bigger head movements than are seen without it.


Partitioning certainly can speed things up,


Hardly ever. The main time you get much of an effect there
is if the drive is partitioned with the rarely used files on the
slowest part of the drive, so head movements tend to be
reduced in terms of the number of tracks moved on average.

with all of the above factors in mind. The trick
is to be clueful about what goes where, and how
you size and order your partitions and volumes.


Let's say you have a 120G HD that contains 4G of core OS and
app code, about 2G tops for swap and temp (having slashed the
web cache to 20M), and 90G of assorted movies, MP3, games
etc. that you use from time to time. On one big C:, the OS may
*try* to put things in sensible places, but it's quite likely to mess up.


Bull**** on that 'quite likely'

So let's say you put the "hot" 4G + 2G in an 8G C:, crucial
data in a 2G D: (safely away from all C:'s write traffic)


Mad. The only thing that makes any sense is to ensure
that the 'crucial data' is always fully backed up.

and everything else after that on a big E:. Let's say you reserve
the last 2G for a small F:, which contain auto-backups of D:.


Makes a hell of a lot more sense
to have auto backups off that drive.

Now it doesn't matter how clueless defrag is - no matter where
it splatters the contents of C:, it's always in the first 8G of the HD.


Makes a hell of a lot more sense to not defrag at all.

Bet you wouldnt be able to pick between the system that is
defragged and the one that isnt, otherwise identical, in a
proper double blind trial without being able to use a frag display.

And as most of the time you're using C: and D:,


Depends entirely on how much the large files in E are
used. Plenty have music particularly running all the time.

the system is as fast as if you'd just bought
it and hadn't gunked it up with 90G of stuff!


More mindlessly superficial silly stuff. The 90G
wont necessarily slow it down if it isnt used much.

Also, when you have to ChkDsk after a bad
exit, or defrag, C: is fast, because it's so small.


Its stupid obsessively defragging modern systems.
And that particular partition is more likely to get
fragged when its got so much less free space
than a single partition for the whole physical drive.

If you need to ChkDsk E: as well, you can do so when
it suits; just don't use anything there until you've done so.


Most real world systems dont chkdsk that much and if
there is much of that, the cause needs to be fixed instead.

A separate disk is better re head repositioning time over multiple partitions,


A separate disk is a luxury, and has downsides. For example,
one 120G would cost less than 2 x 40G and run faster too.


And it makes no sense to go that route with the swap file anyway.
Makes a lot more sense to spend that extra on more physical
ram so the swap file doesnt get used much except at boot time.

BUT XP normally does its own thing moving files around
in an attempt to speed things up anyway, so there is
more involved than just fragmentation of system files.


Yep. It's annoying to see a 120G "big C:" with an 8G total file set,


Mindlessly superficial and irrelevant case.

and find half the stuff is stuck in the middle of the HD. IMO,
better to keep that in a smaller C: so that even if the OS does
decide to spread things out, it can only do so within a small part
of the HD. Then de-bulk the "engine room" to other volumes.


No thanks. I bet you wouldnt be able to pick between
those two configs either in a proper double blind trial.

Makes a hell of a lot more sense to go for the simpler
single partition for physical drive config and not get bitten
when you inevitably find that the partition sizes are wrong.
If you do a full backup of that physical drive before adjusting
the partition sizes, you've wasted FAR more time than
you will have gained if you even gain anything at all.

-------------------- ----- ---- --- -- - - - -

Running Windows-based av to kill active malware is like striking
a match to see if what you are standing in is water or petrol.
-------------------- ----- ---- --- -- - - - -


Nope, nothing like.


  #32  
Old March 30th 04, 04:14 AM
Eric Gisin
external usenet poster
 
Posts: n/a
Default Clone OS to HD via DOS and Clean Install XP?

"Kadaitcha Man" wrote in message
news:edeKLOpyAteU0FBE8C2DA3DE6F77yrsrTgivOfZk@kadaitcha.cx...
cquirke (MVP Win9x) wrote:

Partitioning certainly can speed things up, with all of the above
factors in mind. The trick is to be clueful about what goes where,
and how you size and order your partitions and volumes.

Let's say you have a 120G HD that contains 4G of core OS and app code,
about 2G tops for swap and temp (having slashed the web cache to 20M),
and 90G of assorted movies, MP3, games etc. that you use from time to
time. On one big C:, the OS may *try* to put things in sensible
places, but it's quite likely to mess up.

So let's say you put the "hot" 4G + 2G


LOL what a ****ing ******.

Sorry, "what a ******" is copyright by Rod Speed.

I see nothing wrong with the proposal other than C should be at least 8GB.

  #33  
Old March 30th 04, 06:41 AM
Spinner
external usenet poster
 
Posts: n/a
Default Clone OS to HD via DOS and Clean Install XP?


"Rod Speed" wrote in message
...


Bet you wouldnt be able to pick between the system that is
defragged and the one that isnt, otherwise identical, in a
proper double blind trial without being able to use a frag display.


That depends on how fragmented the drive is.
Anyone who has worked on a seriously fragmented drive can easily tell the
difference.


  #34  
Old March 30th 04, 07:23 AM
Rod Speed
external usenet poster
 
Posts: n/a
Default Clone OS to HD via DOS and Clean Install XP?


Spinner wrote in message
news:8F7ac.23791$oH2.16374@lakeread01...
Rod Speed wrote


Bet you wouldnt be able to pick between the system that is
defragged and the one that isnt, otherwise identical, in a
proper double blind trial without being able to use a frag display.


That depends on how fragmented the drive is.


Nope, not with the system files with XP being discussed.

Anyone who has worked on a seriously
fragmented drive can easily tell the difference.


Nope, not with the system files with XP being discussed.


  #35  
Old March 30th 04, 07:41 AM
Kadaitcha Man
external usenet poster
 
Posts: n/a
Default Clone OS to HD via DOS and Clean Install XP?

Eric Gisin wrote:
"Kadaitcha Man" wrote in message
news:edeKLOpyAteU0FBE8C2DA3DE6F77yrsrTgivOfZk@kadaitcha.cx...
cquirke (MVP Win9x) wrote:

Partitioning certainly can speed things up, with all of the above
factors in mind. The trick is to be clueful about what goes where,
and how you size and order your partitions and volumes.

Let's say you have a 120G HD that contains 4G of core OS and app
code, about 2G tops for swap and temp (having slashed the web cache
to 20M), and 90G of assorted movies, MP3, games etc. that you use
from time to time. On one big C:, the OS may *try* to put things
in sensible places, but it's quite likely to mess up.

So let's say you put the "hot" 4G + 2G


LOL what a ****ing ******.

Sorry, "what a ******" is copyright by Rod Speed.


Rod Speed is a ****ing pussy wimp and a ******.

I see nothing wrong with the proposal other than C should be at least
8GB.


Who said there was anything wrong with it, ****tard?



  #36  
Old March 30th 04, 08:25 PM
cquirke (MVP Win9x)
external usenet poster
 
Posts: n/a
Default Clone OS to HD via DOS and Clean Install XP?

On Tue, 30 Mar 2004 11:21:53 +1000, "Rod Speed"
cquirke (MVP Win9x) wrote


Partitioning certainly can speed things up,


Hardly ever. The main time you get much of an effect there
is if the drive is partitioned with the rarely used files on the
slowest part of the drive, so head movements tend to be
reduced in terms of the number of tracks moved on average.


Of course - that's exactly what I have in mind!

with all of the above factors in mind. The trick
is to be clueful about what goes where, and how
you size and order your partitions and volumes.


Let's say you have a 120G HD that contains 4G of core OS and
app code, about 2G tops for swap and temp (having slashed the
web cache to 20M), and 90G of assorted movies, MP3, games
etc. that you use from time to time. On one big C:, the OS may
*try* to put things in sensible places, but it's quite likely to mess up.


Bull**** on that 'quite likely'


See later. Do you have a write-up on exactly how the OS decides what
goes where? Without that, it's a "beauty contest", i.e. all we are
debating is how much faith we have in the coders not to screw up. But
by limiting the damage this can do, the answer matters less.

So let's say you put the "hot" 4G + 2G in an 8G C:, crucial
data in a 2G D: (safely away from all C:'s write traffic)


Mad. The only thing that makes any sense is to ensure
that the 'crucial data' is always fully backed up.


Ah, "backup". The cure-all that makes live data irrelevant.

There's a problem inherent in the backup concept:
- backup must be so up-to-date that no recent data is lost
- backup must pre-date the changes/losses you want to undo

Nothing works 100%, so anything that partially works has value. File
system corruption occurs when the drive is written to; no writes, no
corruption risk. So if your data is on a volume that sees writes only
when you are saving data, it's that much safer.

This is the concept behind not allowing pedestrians on freeways.

and everything else after that on a big E:. Let's say you reserve
the last 2G for a small F:, which contain auto-backups of D:.


Makes a hell of a lot more sense
to have auto backups off that drive.


"Auto" and "off that drive" don't really work together, as most
off-drive storage requires removable media to be inserted - so it
isn't "auto" anymore. Exceptions: Backup pulls via LAN (which I do
use in automated fashion) and uploading data to an Internet repository
somewhere. The latter doesn't make much sense to me, given you are
now relying on someone else's privacy and server management.

Backup's usually a stretch between 3 apices of a triangle:
- scope; what range of disasters does it hedge against?
- convenience; how likely is it to be done?
- capacity; how much can I back up?

It sounds as if you are focusing on the first, to the exclusion of
others. IOW if it doesn't cover against all eventualities, it's not
worth doing. By that logic, the only backups we'd do would be
off-system and located outside of the building - and how often is that
going to get done? Many disasters don't require that full-scope
protection to undo, so a "shallowed" backup done more often reduces
the work lost when restoring in these cases.

In practice, on small LANs, I use a holographic auto-backup:
- 02:00, each PC backs up its own data set elsewhere on HD
- 03:00+, other PCs pull most recent backup to themselves
- user then manually dumps collected backups to CDRW
- backup CDRWs rotated so one is always off-site

"Most recent"? Yes, as the local backup keeps the last 5 backups,
purging the oldest by nameslot rather than date logic (so as to be Y2k
or flat-CMOS-battery proof).
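
A minimal sketch of that nameslot rotation, assuming five fixed slot names, a plain file copy and illustrative paths (not the actual script in use):

    # Hypothetical sketch of "keep the last 5, purge by nameslot" rotation.
    # Slot numbers, not file dates, decide which backup gets overwritten, so
    # a flat CMOS battery or a wrong clock cannot make it purge the wrong one.
    import os
    import shutil

    SLOTS = 5
    BACKUP_DIR = r"D:\autoback"                    # illustrative destination
    STATE_FILE = os.path.join(BACKUP_DIR, "lastslot.txt")

    def next_slot():
        """Read the slot used last time and advance, wrapping after SLOTS."""
        try:
            last = int(open(STATE_FILE).read().strip())
        except (OSError, ValueError):
            last = 0
        return (last % SLOTS) + 1                  # cycles 1, 2, 3, 4, 5, 1, ...

    def rotate_backup(archive_path):
        """Copy tonight's archive into the next nameslot; whatever was in that
        slot (the oldest of the five) is overwritten, i.e. purged."""
        os.makedirs(BACKUP_DIR, exist_ok=True)
        slot = next_slot()
        dest = os.path.join(BACKUP_DIR, "backup%d.zip" % slot)
        shutil.copy2(archive_path, dest)
        with open(STATE_FILE, "w") as f:
            f.write(str(slot))
        return dest

Run once after each nightly archive is made; five runs later the cycle wraps and the oldest copy gets overwritten, whatever dates the files happen to carry.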

If the building burns down, they fall back to the last off-site CDRW
If the whole LAN gets stolen, then fall back to last CDRW
If all but one PC are stolen, they fall back one day
If problem missed for 5 days, local backups "deep" enough to undo

BTW, any backup that depends on removable disks is often abandoned for
months when disks become unavailable. That would "never happen" in a
professionally-administered installation, I know, but trust me; out
there in small self-administered peer-to-peer land, it's common.

Now it doesn't matter how clueless defrag is - no matter where
it splatters the contents of C:, it's always in the first 8G of the HD.


Makes a hell of a lot more sense to not defrag at all.


Eh? Well, put it this way:
- defrag's not a drag when the always-in-use volumes are small
- not defragging doesn't hurt much if intelligently partitioned

Looks like a win-win to me :-)

Bet you wouldnt be able to pick between the system that is
defragged and the one that isnt, otherwise identical, in a
proper double blind trial without being able to use a frag display.


Not sure about that either way. There are other benefits, though,
when it comes to things like data recovery, and routine maintenance is
a lot less tedious when you "have somewhere to stand on".

And as most of the time you're using C: and D:,


Depends entirely on how much the large files in E are
used. Plenty have music particularly running all the time.


If there's music running all the time, then there is a single file on
E: being accessed all the time.

the system is as fast as if you'd just bought
it and hadn't gunked it up with 90G of stuff!


More mindlessly superficial silly stuff. The 90G
wont necessarily slow it down if it isnt used much.


Yes it will, because it will act as a sandbar between the front of the
HD (where Windows code is paged back in) and the fringe of the file
set (where new files are created). Defrag logic that intentionally
leaves holes before "seldom used" material seeks to address that
problem - in essence, to do automatically what intelligent
partitioning casts in stone.

Also, when you have to ChkDsk after a bad
exit, or defrag, C: is fast, because it's so small.


Its stupid obsessively defragging modern systems.


True; I defrag after large uninstalls/deletes or before big installs,
but the rest of the time I don't bother. I'm not of the "give us each
day our daily defrag" school of maintenance either.

Post-bad-exit file system checks will always be with us, however.

And that particular partition is more likely to get
fragged when its got so much less free space
than a single partition for the whole physical drive.


No. It's only when the file wave hits the end of the volume and
bounces back that the system will really start creating new files in
the gaps within the bulk of the file load. A volume that's always
500M from the end of the volume is just as clean as one that's always
100G away from the end of the volume.

If a volume is destined to have no more than, say, 20G on it, then a
file positioning strategy that places some material at the end of the
volume and other material in the middle of the volume will be slower
in one big 120G C: than in, say, a 32G volume on that 120G HD.
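
Rough arithmetic behind that claim, treating worst-case head travel as proportional to the stretch of disk the file set can be spread across; the sizes are just the ones from the example:

    # If the OS parks some of a 20G file set "at the end of the volume", the
    # worst-case distance between that material and the front of the volume
    # is bounded by the volume size, not by the size of the file set.
    DISK_GB = 120          # one big C: spanning the whole drive
    SMALL_VOLUME_GB = 32   # the same file set confined to a 32G volume

    span_big = DISK_GB
    span_small = SMALL_VOLUME_GB
    print("worst-case spread on one big C: : %d GB" % span_big)
    print("worst-case spread on a 32G volume: %d GB" % span_small)
    print("reduction: about %d%%" % round(100 * (1 - span_small / span_big)))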

If you need to ChkDsk E: as well, you can do so when
it suits; just don't use anything there until you've done so.


Most real world systems dont chkdsk that much and if
there is much of that, the cause needs to be fixed instead.


True - that's why I prefer "auto" checks to stop on errors and ask
user for decisions, and why I prefer an easily-accessible log to be
appended rather than overwritten.

But because checking a 120G volume takes so long, the temptation to
skip the test for later is huge. Doing so defeats the purpose of the
auto-check, i.e. it leaves Windows to bash away at an at-risk file
system. Partitioning helps in two ways; by keeping your data out of
the main risk area, and by allowing the always-in-use volumes to be
speedily checked and the bulk of the drive to be checked later.
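
A minimal sketch of that split, assuming a read-only chkdsk pass is enough for the quick look and that the volume letters follow the example layout; the script itself is illustrative:

    # Hypothetical sketch: read-only chkdsk the small always-in-use volumes
    # right away, and merely flag the big bulk volume for a later, longer pass.
    import subprocess

    SMALL_VOLUMES = ["C:", "D:"]   # quick to check, always in use
    BULK_VOLUMES = ["E:"]          # huge, seldom written; check when it suits

    def quick_check(volume):
        """Run chkdsk without /f (read-only); exit code 0 means no errors found."""
        result = subprocess.run(["chkdsk", volume], capture_output=True, text=True)
        return result.returncode == 0

    for vol in SMALL_VOLUMES:
        print("%s %s" % (vol, "clean" if quick_check(vol) else "needs chkdsk /f"))

    for vol in BULK_VOLUMES:
        print("%s is large - run its full check later, when it suits" % vol)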

A separate disk is better re head repositioning time over multiple partitions,


A separate disk is a luxury, and has downsides. For example,
one 120G would cost less than 2 x 40G and run faster too.


And it makes no sense to go that route with the swap file anyway.
Makes a lot more sense to spend that extra on more physical
ram so the swap file doesnt get used much except at boot time.


Yep. For the same money, I'd nearly always choose one big HD + RAM
over two smaller HDs. The only exceptions I can think of are where you
want to unlink system vs. data head travel (streaming media
development) or to use RAID 1 or other two-HD strategies for data safety.

BUT XP normally does its own thing moving files around
in an attempt to speed things up anyway, so there is
more involved than just fragmentation of system files.


Yep. It's annoying to see a 120G "big C:" with an 8G total file set,


Mindlessly superficial and irrelevant case.


Nope; often the first part of a new PC's life is lived exactly like
that. Big HDs are not much more costly than small ones (until 120G)
so I do NOT build PCs with puny HDs because I think the user "won't
need the capacity". Holding a given file set in fewer cylinders means
large HDs are faster too - as long as you don't let clueless file
positioning spread everything from one end of the HD to the other.

and find half the stuff is stuck in the middle of the HD. IMO,
better to keep that in a smaller C: so that even if the OS does
decide to spread things out, it can only do so within a small part
of the HD. Then de-bulk the "engine room" to other volumes.


No thanks. I bet you wouldnt be able to pick between
those two configs either in a proper double blind trial.


In which case it doesn't matter which of us is "right", does it? But
as I mentioned, there are reasons other than speed to go this route.

Makes a hell of a lot more sense to go for the simpler
single partition for physical drive config and not get bitten
when you inevitably find that the partition sizes are wrong.


It's seldom that I've needed to reshape partitions - usually, once
every 3-5 years, if that. Usually by that time, the total HD capacity
is a problem anyway, and that one addresses by not using puny little
HDs. Vendors are building PCs today with Pentium 4 overspend and 40G
HDs; the motherboards are typically Micro-ATX trash with no AGP slot.

If you do a full backup of that physical drive before adjusting
the partition sizes, you've wasted FAR more time than
you will have gained if you even gain anything at all.


Yep. As I say; between 0 and 1 times in 5 years *shrug* ...the odds
of HD failure and data recovery are, in my experience, between 3 and 5
times as high; perhaps the biggest payoff for smart partitioning.



-------------------- ----- ---- --- -- - - - -

Trsut me, I won't make a mistake!
-------------------- ----- ---- --- -- - - - -

  #37  
Old March 30th 04, 09:41 PM
Rod Speed
external usenet poster
 
Posts: n/a
Default Clone OS to HD via DOS and Clean Install XP?


cquirke (MVP Win9x) wrote in
message ...
Rod Speed wrote
cquirke (MVP Win9x) wrote


Partitioning certainly can speed things up,


Hardly ever. The main time you get much of an effect there
is if the drive is partitioned with the rarely used files on the
slowest part of the drive, so head movements tend to be
reduced in terms of the number of tracks moved on average.


Of course - that's exactly what I have in mind!


Trouble is that with modern systems there is very little
effect there anymore, because that mostly happens
auto, particularly with the system files being discussed.

with all of the above factors in mind. The trick
is to be clueful about what goes where, and how
you size and order your partitions and volumes.


Let's say you have a 120G HD that contains 4G of core OS and
app code, about 2G tops for swap and temp (having slashed the
web cache to 20M), and 90G of assorted movies, MP3, games
etc. that you use from time to time. On one big C:, the OS may
*try* to put things in sensible places, but it's quite likely to mess up.


Bull**** on that 'quite likely'


See later.


Completely useless.

Do you have a write-up on exactly how the OS decides what
goes where? Without that, it's a "beauty contest", i.e. all we are
debating is how much faith we have in the coders not to screw up.


That aint the only way to decide if its 'quite likely'

The other obvious approach is to see what it does in practice.

But by limiting the damage this can do,


You aint established any 'damage'

the answer matters less.


Or you can get real radical and try a proper double
blind trial and discover that you cant actually pick
between the system configured your way and one that isnt.

So let's say you put the "hot" 4G + 2G in an 8G C:, crucial
data in a 2G D: (safely away from all C:'s write traffic)


Mad. The only thing that makes any sense is to ensure
that the 'crucial data' is always fully backed up.


Ah, "backup". The cure-all that makes live data irrelevant.


Its perfectly feasible to ensure that live data is always
fully backed up if that matters. Doesnt even cost much.

There's a problem inherent in the backup concept:


Nope.

- backup must be so up-to-date that no recent data is lost
- backup must pre-date the changes/losses you want to undo


Its perfectly feasible to ensure that live data is always
fully backed up if that matters. Doesnt even cost much.

Nothing works 100%,


Bull****. That just costs more with backup.

so anything that partially works has value.


And there are plenty of alternatives
that are a lot less partial than others.

If the data matters, its completely stupid to have the backup
on the same physical drive as what's being backed up.

File system corruption occurs when the drive is written to;


Hardly ever.

no writes, no corruption risk.


Wrong again. The drive can just die.

So if your data is on a volume that sees writes
only when you are saving data, it's that much safer.


Wrong again. The risk of data corruption due to writes
is MUCH less than the risk of data loss due to hard drive
failure and so the only thing that makes any sense at all
with data that matters is to not have the backup on the
same physical drive as the data being backed up.

This is the concept behind not allowing pedestrians on freeways.


Wrong again.

and everything else after that on a big E:. Let's say you reserve
the last 2G for a small F:, which contain auto-backups of D:.


Makes a hell of a lot more sense
to have auto backups off that drive.


"Auto" and "off that drive" don't really work together,


Wrong again.

as most off-drive storage requires removable media to be inserted


Wrong again. There are plenty of off drive backup
destinations that dont use removable media.

- so it isn't "auto" anymore.


It is if it doesnt use removable media.

Exceptions: Backup pulls via LAN
(which I do use in automated fashion)


Which makes a lot more sense than
that stupid approach of your drive D

and uploading data to an Internet repository somewhere.
The latter doesn't make much sense to me,


Your problem. That approach does mean that
even if the building that houses the lan burns to
the ground, the data that matters doesnt get lost.

given you are now relying on someone
else's privacy and server management.


Completely trivial to ensure that they cant access the data.

Completely trivial to ensure that their server
management is working and use more than
one if you need that level of realtime backup too.

Backup's usually a stretch between 3 apices of a triangle:
- scope; what range of disasters does it hedge against?
- convenience; how likely is it to be done?
- capacity; how much can I back up?


More waffle. The reality is that your drive D approach is
about the least useful around, only provides protection
against the least likely cause of data loss and there
are plenty of MUCH better approaches to backup.

It sounds as if you are focusing on
the first, to the exclusion of others.


Best get your ears tested then.

IOW if it doesn't cover against all eventualities, it's not worth doing.


Never ever said anything remotely resembling anything like that.

I JUST said that your D drive is about the least useful
approach to backup. Barely better than nothing in fact.

By that logic, the only backups we';d do would
be off-system and located outside of the building
- and how often is that going to get done?


As often as you like with net backup.

Many disasters don't require that full-scope protection to undo,


Duh.

so a "shallowed" backup done more often reduces
the work lost when restoring in these cases.


And your D drive is about the least useful
approach. Because the modern reality is that
its mostly hard drive failure that loses data.

In practice, on small LANs, I use a holographic auto-backup:


Thats not what holographic means.

- 02:00, each PC backs up its own data set elsewhere on HD


No point in this with the next one.

- 03:00+, other PCs pull most recent backup to themselves
- user then manually dumps collected backups to CDRW


Mindlessly crude and unreliable.

- backup CDRWs rotated so one is always off-site


"Most recent"? Yes, as the local backup keeps the
last 5 backups, purging the oldest by nameslot rather than
date logic (so as to be Y2k or flat-CMOS-battery proof).


Anyone with a clue uses NTP for that last.

If the building burns down, they fall back to the last off-site CDRW
If the whole LAN gets stolen, then fall back to last CDRW
If all but one PC are stolen, they fall back one day
If problem missed for 5 days, local backups "deep" enough to undo


And there isnt any point in that original D partition. Like I said.

BTW, any backup that depends on removable disks is often
abandoned for months when disks become unavailable.


Only by stupids.

That would "never happen" in a professionally-administered
installation, I know, but trust me;


No thanks.

out there in small self-administered
peer-to-peer land, it's common.


If its common, its stupid to use it. MUCH better to use net backup
instead. Particularly when the volume isnt likely to be high in that case.

Now it doesn't matter how clueless defrag is - no matter where
it splatters the contents of C:, it's always in the first 8G of the HD.


Makes a hell of a lot more sense to not defrag at all.


Eh? Well, put it this way:
- defrag's not a drag when the always-in-use volumes are small
- not defragging doesn't hurt much if intelligently partitioned


Defragging is pointless if you cant pick the defragged system
in a proper double blind trial. You're just increasing the risk
of data corruption for no useful purpose what so ever.

Looks like a win-win to me :-)


Best get those eyes tested too.

Bet you wouldnt be able to pick between the system that
is defragged and the one that isnt, otherwise identical, in a
proper double blind trial without being able to use a frag display.


Not sure about that either way.


Get real radical and try it.

There are other benefits, though, when
it comes to things like data recovery,


Bull****. Backup should ensure that that isnt ever needed.

and routine maintenance is a lot less tedious
when you "have somewhere to stand on"


Mindless waffle.

And as most of the time you're using C: and D:,


Depends entirely on how much the large files in E are
used. Plenty have music particularly running all the time.


If there's music running all the time, then there
is a single file on E: being accessed all the time.


Wrong again head movement wise.

the system is as fast as if you'd just bought
it and hadn't gunked it up with 90G of stuff!


More mindlessly superficial silly stuff. The 90G
wont necessarily slow it down if it isnt used much.


Yes it will,


Nope.

because it will act as a sandbar between the front of
the HD (where Windows code is paged back in) and
the fringe of the file set (where new files are created).


Not with a drive that isnt being mindlessly furiously defragged.

Defrag logic that intentionally leaves holes before "seldom
used" material seeks to address that problem - in essence,
to do automatically what intelligent partitioning casts in stone.


And if you dont mindlessly furiously defrag...

Also, when you have to ChkDsk after a bad
exit, or defrag, C: is fast, because it's so small.


Its stupid obsessively defragging modern systems.


True; I defrag after large uninstalls/deletes or before
big installs, but the rest of the time I don't bother.


Then your 'sandbar' above wont happen.

And you aint achieving anything by the defragging you do do either.

I'm not of the "give us each day our daily
defrag" school of maintenance either.


Post-bad-exit file system checks will always be with us, however.


Nope, decently implemented systems dont get those much.

And that particular partition is more likely to get
fragged when its got so much less free space
than a single partition for the whole physical drive.


No.


Yep.

It's only when the file wave hits the end of the volume
and bounces back that the system will really start creating
new files in the gaps within the bulk of the file load.


Free block allocation hasnt been as crude as that for years and years now.

A volume that's always 500M from the end of the volume is just as
clean as one that's always 100G away from the end of the volume.


Wrong.

If a volume is destined to have no more than, say, 20G on it, then a
file positioning strategy that places some material at the end of the
volume and other material in the middle of the volume will be slower
in one big 120G C: than in, say, a 32G volume on that 120G HD.


Pity none are that crude with any OS that matters anymore.

If you need to ChkDsk E: as well, you can do so when
it suits; just don't use anything there until you've done so.


Most real world systems dont chkdsk that much and if
there is much of that, the cause needs to be fixed instead.


True - that's why I prefer "auto" checks to stop on errors and
ask user for decisions, and why I prefer an easily-accessible
log to be appended rather than overwritten.


All irrelevant to whether the time taken
matters when the system hardly ever does one.

But because checking a 120G volume takes so long,
the temptation to skip the test for later is huge.


Not if it hardly ever happens.

Doing so defeats the purpose of the auto-check, i.e. it
leaves Windows to bash away at an at-risk file system.


Irrelevant if it hardly ever happens; you just let it complete.

Partitioning helps in two ways; by keeping your data out of the
main risk area, and by allowing the always-in-use volumes to be
speedily checked and the bulk of the drive to be checked later.


And has other major downsides, like when the
partition size needs to be manually adjusted.

BUT XP normally does its own thing moving files around
in an attempt to speed things up anyway, so there is
more involved than just fragmentation of system files.


Yep. It's annoying to see a 120G "big C:" with an 8G total file set,


Mindlessly superficial and irrelevant case.


Nope;


Yep.

often the first part of a new PC's life is lived exactly like that.


Bull****. Hardly anyone is starting life with a new PC anymore.

Big HDs are not much more costly than small ones
(until 120G) so I do NOT build PCs with puny HDs
because I think the user "won't need the capacity".


Sure.

Holding a given file set in fewer cylinders means large HDs
are faster too - as long as you don't let clueless file positioning
spread everything from one end of the HD to the other.


Doesnt happen in practice with any OS worth using.

and find half the stuff is stuck in the middle of the HD. IMO,
better to keep that in a smaller C: so that even if the OS does
decide to spread things out, it can only do so within a small part
of the HD. Then de-bulk the "engine room" to other volumes.


No thanks. I bet you wouldnt be able to pick between
those two configs either in a proper double blind trial.


In which case it doesn't matter which of us is "right", does it?


Corse it does. There isnt any point in your over complex
organisation if it doesnt achieve anything. That produces a
system thats more messy to administer. In spades when the
inevitable happens and the partition sizes turn out to be too small.

But as I mentioned, there are reasons
other than speed to go this route.


Pity you havent managed to establish a single one.

Makes a hell of a lot more sense to go for the simpler
single partition for physical drive config and not get bitten
when you inevitably find that the partition sizes are wrong.


It's seldom that I've needed to reshape partitions
- usually, once every 3-5 years, if that.


Dont believe it. And thats not real life for normal users anyway.
They dont usually have a clue about what sizes are appropriate.

Usually by that time, the total HD capacity is a problem anyway,
and that one addresses by not using puny little HDs. Vendors
are building PCs today with Pentium 4 overspend and 40G HDs;
the motherboards are typically Micro-ATX trash with no AGP slot.


The percentage of the market sales that match that is trivial.

If you do a full backup of that physical drive before adjusting
the partition sizes, you've wasted FAR more time than
you will have gained if you even gain anything at all.


Yep. As I say; between 0 and 1 times in 5 years *shrug*


Thats not real life for partitioned drives.

...the odds of HD failure and data recovery are,
in my experience, between 3 and 5 times as high;


So having the backup partition on the same physical drive is stupid.

perhaps the biggest payoff for smart partitioning.


Nope. Anyone with a clue uses a better
approach to backup of the data that matters.


  #38  
Old March 31st 04, 12:26 AM
cquirke (MVP Win9x)
external usenet poster
 
Posts: n/a
Default Clone OS to HD via DOS and Clean Install XP?

On Wed, 31 Mar 2004 06:37:34 +1000, "Rod Speed"
cquirke (MVP Win9x) wrote in
Rod Speed wrote
cquirke (MVP Win9x) wrote


Partitioning certainly can speed things up,


Hardly ever. The main time you get much of an effect there
is if the drive is partitioned with the rarely used files on the
slowest part of the drive, so head movements tend to be
reduced in terms of the number of tracks moved on average.


Of course - that's exactly what I have in mind!


Trouble is that with modern systems there is very little
effect there anymore, because that mostly happens
auto, particularly with the system files being discussed.


I'm not only discussing NTFS; are you?

Do you have a write-up on exactly how the OS decides what
goes where? Without that, it's a "beauty contest", i.e. all we are
debating is how much faith we have in the coders not to screw up.


That aint the only way to decide if its 'quite likely'


The other obvious approach is to see what it does in practice.
Or you can get real radical and try a proper double blind trial


I don't happen to have two identical XP systems lying around to test
with, so for now, I'll have to pass on that.

Ah, "backup". The cure-all that makes live data irrelevant.


Its perfectly feasible to ensure that live data is always
fully backed up if that matters. Doesnt even cost much.


There's a problem inherent in the backup concept:


Nope.


Whaddya mean, "nope"? Am I wasting time talking to a half-wit here?

- backup must be so up-to-date that no recent data is lost
- backup must pre-date the changes/losses you want to undo


Its perfectly feasible to ensure that live data is always
fully backed up if that matters. Doesnt even cost much.


Read the text and think. For example, think about how "just backup"
is going to cope with negative time between the problem and the data
you want to keep. The phrase "always fully backed up" is true only
for real-time disk mirroring, and that dies equally on both halves of
the mirror if anything eats the data.

Nothing works 100%,


Bull****. That just costs more with backup.


No, it's true: Nothing works 100%. A backup made after a
hunter-killer has filled the contents of your files with garbage is
going to restore garbage. A backup made before that won't have recent
data on it. There isn't a "magic bullet" backup that you can simply
blindly restore to fix all problems, and live data will always have
content you want to keep that isn't backed up yet.

If the data matters, its completely stupid to have the backup
on the same physical drive as what's being backed up.


Who said anything about only *one* backup strategy?

File system corruption occurs when the drive is written to;


Hardly ever.


Well, it doesn't happen when writes are *not* being done, does it?
What logical data corruption happens, happens during writes.

no writes, no corruption risk.


Wrong again. The drive can just die.


That's not what I'm talking about here, i.e. logical data corruption.

So if your data is on a volume that sees writes
only when you are saving data, it's that much safer.


Wrong again. The risk of data corruption due to writes
is MUCH less than the risk of data loss due to hard drive
failure and so the only thing that makes any sense at all
with data that matters is to not have the backup on the
same physical drive as the data being backed up.


sigh Different things can go wrong and one doesn't have one
simplistic blockheaded fix for everything, capice?

One has frequent automated auto-backups that can fix problems that
don't involve loss of the HD, and will work even if the human part of
the backup chain falls over.

One has partitioning strategies that keep data away from frequent
write traffic. One has depth and redundancy in risk management.

and everything else after that on a big E:. Let's say you reserve
the last 2G for a small F:, which contain auto-backups of D:.


Makes a hell of a lot more sense
to have auto backups off that drive.


"Auto" and "off that drive" don't really work together,


Wrong again.


as most off-drive storage requires removable media to be inserted


Wrong again. There are plenty of off drive backup
destinations that dont use removable media.


Such as?

Exceptions: Backup pulls via LAN
(which I do use in automated fashion)


Which makes a lot more sense than
that stupid approach of your drive D


It's not "instead of", it's "as well as". Sheesh, get a clue! And
stand-alone systems are all going to buy networks just so they can
have off-HD autobackup that lasts until all the PCs are nicked (a
significant business risk here) etc.

and uploading data to an Internet repository somewhere.
The latter doesn't make much sense to me,


Your problem. That approach does mean that
even if the building that houses the lan burns to
the ground, the data that matters doesnt get lost.


It's not just "my problem". Unless you own the remote PC, you are
trusting some other vendor to maintain that server, keep it secure,
keep it private, etc. Why add the business risks of someone else's
business to your own? Especially risks over which you have no
control? Not to mention risks of the data in transit.

Backup's usually a stretch between 3 apices of a triangle:
- scope; what range of disasters does it hedge against?
- convenience; how likely is it to be done?
- capacity; how much can I back up?


More waffle. The reality is that your drive D approach is
about the least useful around, only provides protection
against the least likely cause of data loss and there
are plenty of MUCH better approaches to backup.


It's not the only approach, and local auto-backup is also very handy
for finger trouble; 5 days of backups to pull replacement files out
of, under direct vision so you know what you are pulling out.

Many disasters don't require that full-scope protection to undo,


Duh.

so a "shallowed" backup done more often reduces
the work lost when restoring in these cases.


And your D drive is about the least useful
approach. Because the modern reality is that
its mostly hard drive failure that loses data.

In practice, on small LANs, I use a holographic auto-backup:


Thats not what holographic means.


Holographic, as in a storage that is diffused over a structure rather
than localised within it. Lop bits off of a hologram, and you don't
lose specific bits of the image. That's what I had in mind.

- 02:00, each PC backs up its own data set elsewhere on HD


No point in this with the next one.


Yes there is, because the local backup crunches the data into a single
smaller file with CRC integrity check, relocates it on another volume,
and keeps multiple copies for greater depth of backup. It also avoids
some problems that arise when trying to archive over a LAN (yes, we
did try the direct approach first) and unlinks network access issues
from the backup process. If the LAN isn't up, you still have the
local backup, which is better than nothing.
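
A minimal sketch of that crunch-and-check step, assuming a zip archive and relying on the CRC-32 that zip already stores per member for the integrity test; the paths are illustrative:

    # Hypothetical sketch: pack the data set into one compressed file on another
    # volume, then re-read it and verify every member against its stored CRC-32.
    import os
    import zipfile

    DATA_DIR = r"C:\data"                   # illustrative data set
    ARCHIVE = r"D:\autoback\tonight.zip"    # single-file backup on another volume

    def crunch(data_dir, archive_path):
        """Compress the whole data tree into one archive."""
        os.makedirs(os.path.dirname(archive_path), exist_ok=True)
        with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for root, _dirs, files in os.walk(data_dir):
                for name in files:
                    full = os.path.join(root, name)
                    zf.write(full, os.path.relpath(full, data_dir))

    def verify(archive_path):
        """testzip() re-reads every member and returns the first one whose
        CRC-32 does not match, or None if the whole archive checks out."""
        with zipfile.ZipFile(archive_path) as zf:
            return zf.testzip() is None

    crunch(DATA_DIR, ARCHIVE)
    print("CRC check passed" if verify(ARCHIVE) else "archive is damaged")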

- 03:00+, other PCs pull most recent backup to themselves
- user then manually dumps collected backups to CDRW


Mindlessly crude and unreliable.


It works pretty well; better than anything you've come up with so far.

In fact I'm going to snip the rest, because I have better things to do
than waste time with snide assholes. FOAD!



-------------------- ----- ---- --- -- - - - -

Running Windows-based av to kill active malware is like striking
a match to see if what you are standing in is water or petrol.
-------------------- ----- ---- --- -- - - - -

 



