Thread Tools | Display Modes |
#31
Comparison of NTFS/MFT recovery software?
In article ,
J. S. Pack wrote: On Sat, 04 Sep 2004 14:40:11 +0200, "cquirke (MVP Win9x)" wrote: On Tue, 31 Aug 2004 17:33:09 GMT, "Stephen H. Fischer":

What you want is the ability to *interactively* check the file system, as Scandisk does for FATxx. You want ChkDsk to stop and say "I found such-and-such an error and (more info) I plan to "fix" this by doing X, Y, Z. Continue, or abort?" but it's too brain-dead for that.

This is all well and good for techies who can use disk editors and know their way around the file system. It's meaningless in the real world, where the vast majority of users don't even know what a file system is. Most of them have used the ol' Scandisk, and that little question of "Continue or abort" did them no good at all. They wouldn't know which answer to choose. If they aborted, their disk still had errors and their data was lost. If they continued, their data was lost as well, in .chk files. Not uncommonly their disk was still trashed and they ended up reformatting and reinstalling. That sort of thing happens a lot less under NTFS, and the vast majority of users would therefore benefit from using it.

[snip] To those who say that the only method of repair, if CHKDSK will not run, is to hire a person who has many years of experience and makes a living doing data recovery: that just adds to the dichotomy. CHKDSK (and Norton) is trusted to repair the file system all by itself for the second case.

Anybody who's used Scandisk doesn't trust it, either. And plenty of professionals did well fixing FAT32 disks.

ChkDsk is NOT a data recovery tool, and has no right to presume to be one. Automating data-destructive "fixes" may help MS cut down on support calls, but it is detrimental to data safety, as it robs the user of the option to manually repair.

Well, better tools are always welcomed. I wonder why Norton hasn't cashed in on the shortcomings of CHKDSK? Maybe all the conditions that chkdsk /F can't fix are things that are impossible to fix?
FWIW, I recently had an XP laptop that passed chkdsk /F with no messages, but Partition Magic's check-file-system command gave an error status and refused to repartition. I re-imaged the system. I was able to do a full backup first, so no data was lost. This is an existence proof that chkdsk /f isn't perfect.

In the thousands of chkdsk passes I've watched, I've seen a "fixing up" message where no data was lost a couple of times. I assume this was a journal roll-forward. The most recent was at least 5 years ago. One of these sessions resulted in me opening a case with Microsoft, and posts to Usenet asking if there was a document that listed every possible message from chkdsk, similar to the documentation for fsck on any Unix system, and the answer was no, why do I want one?

I had a system running NTFS that would not run a commercial (i.e. paid-for) defrag tool; it crapped out with an error message saying there was something about the $MFT that defrag couldn't deal with. chkdsk /f ran clean on this system. This could have been 8 years ago.

That's about it for about 10,000 system-years of NTFS experience. Compare that to every Windows 98 laptop I ever saw that had CHK files in the C root because of incorrect shutdown. -- Al Dykes ----------- adykes at p a n i x . c o m
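Those Win98-era CHK files are easy to tally in bulk. A small illustrative sketch (FILEnnnn.CHK is the naming convention Scandisk uses for recovered lost clusters; the function name and default path here are just for illustration):

```python
import glob
import os

def chk_report(root="C:\\"):
    """Count the FILEnnnn.CHK fragments left in a volume root
    after lost clusters were converted to files, and total their
    size, so you can see how much orphaned data piled up."""
    files = sorted(glob.glob(os.path.join(root, "FILE*.CHK")))
    total = sum(os.path.getsize(f) for f in files)
    return len(files), total
```

Run against the root of the affected volume, it gives a quick sense of whether those fragments are worth sifting through before deleting.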
#32
Comparison of NTFS/MFT recovery software?
On Sat, 04 Sep 2004 14:40:11 +0200, "cquirke (MVP Win9x)"
wrote: Backup, by definition, loses data.

Huh? If it lost data it wouldn't be called a backup. ntbackup, the MS-provided backup/restore tool, does a fine job of full backup of running NT/W2k/XP Home/Pro systems. I've rebuilt servers from bare iron with netbackup-produced tapes. As for simple full backup-recovery tools, there are a couple of home-user oriented packages that burn a full image onto bootable DVDs that will restore to bare iron. These are one-click full-backup tools, if you set them up that way. These were practical until the arrival of home digital video. The computer user has to make a realistic assessment of the value of his data and buy enough big disks to keep a couple of generations of backups. It's called a risk assessment, and businesses have been doing this for years. Anyone that depends on recovery tools is clueless.

Um, by your definition, perhaps. That's just a little too facile to be a general definition. A bitwise mirror image of your disk saves whatever data you have saved on the disk you're mirroring. That's entirely expected and reasonable.

So a need for data recovery is not going to go away, no matter how much you backup.

Indeed not. You can reasonably expect to have saved only what data you've saved in your backup before your head crashed, or the cosmic ray hit and an ailing DIMM dribbled all over your files. With a good backup regimen, you shouldn't lose much. If your data is that real-time critical, you should be computing in a failsafe, redundant, transaction-rollback environment anyway, not in Windows XP. If my filesystem or disk crashes (and any disk can crash at any time, leaving moot the question of running chkdsk), I count myself lucky if I can save *anything*. That's why I often backup.

The perfect backup contains all content except unwanted changes. Ponder on how you separate unwanted changes (loss) from all data you saved right up to the present moment, and see the problem.
Well, the unwanted changes mean that it will be more difficult to retain the wanted changes that didn't make it into the last backup set. I fail to see how this point moves us further down the road towards solving the problem of *how* a naive user may recover data from a crashed disk or severely damaged filesystem. I would note FWIW that the winubcd http://www.windowsubcd.com/index.htm I noted earlier in this thread does have some free file/disk tools that can be run from the boot CD itself. Again, average users would not know about, or be able to build, such a CD. -- Al Dykes ----------- adykes at p a n i x . c o m
#33
Comparison of NTFS/MFT recovery software?
"Al Dykes" wrote in message ... Anyone that depends on recovery tools is clueless.

There are instances where a little bit of knowledge in recovery can go a long way. Not in replacing the essential need for backups, but in removing the necessity of having to do a full-scale restore. The example I'll quote is a recent one, when one problematic corrupted/damaged file on my NTFS system was causing all sorts of havoc. All the usual tricks, like closing down Explorer to delete via command prompt, or deletion via recovery console / safe mode, had failed. Chkdsk was also unable to deal with it (it aborted unceremoniously halfway through its operations) and it also prevented any defragmentation of the drive. The fragmentation level of files on the drive was rising by the minute. A simple manual blanking out of its entry in the MFT, followed by a quick chkdsk, solved the problem and the drive was completely back to normal. Would have been a right pain to have had to go through a complete restore for the sake of one tiny file. Jon
#34
Comparison of NTFS/MFT recovery software?
On Sat, 4 Sep 2004 08:10:38 -0700, "Eric Gisin"
"cquirke (MVP Win9x)" Chkdsk is not based on DOS.

I didn't say ChkDsk was "based on DOS". It dates from DOS; specifically, the pathetic UI dates from DOS's ChkDsk. And by UI, I'm not saying "make it prettier". I'm saying "put the user back in control over what it does". The code itself is obviously different, given that it works on a completely different file system ;-)

Of course there is a tool to override autochk defaults.

The only control I know of will suppress AutoChk for particular HD letters. There is no way to get AutoChk to run like ChkDsk (with no /F or /R parameter) and, like ChkDsk, AutoChk has no interactive mode. Compare that to the fine-grained control Scandisk.ini gives you over the implicit /Custom mode that automatic Scandisk uses in Win95/98. -------------------- ----- ---- --- -- - - - - Tip Of The Day: To disable the 'Tip of the Day' feature... -------------------- ----- ---- --- -- - - - -
#35
Comparison of NTFS/MFT recovery software?
On Sun, 05 Sep 2004 10:20:47 +0700, J. S. Pack wrote:
On Sat, 04 Sep 2004 14:40:11 +0200, "cquirke (MVP Win9x)" On Tue, 31 Aug 2004 17:33:09 GMT, "Stephen H. Fischer" What you want is the ability to *interactively* check the file system, as Scandisk does for FATxx. You want ChkDsk to stop and say "I found such-and-such an error and (more info) I plan to "fix" this by doing X, Y, Z. Continue, or abort?" but it's too brain-dead for that. This is all well and good for techies who can use disk editors and know their way around the file system.

Yes it is; and it should be there for that reason alone, if nothing else. It's easier to understand what Scandisk says about what it finds than, say, a raw register dump you get in Stop errors ;-)

It's meaningless in the real world where the vast majority of users don't even know what a file system is. Most of them have used the ol' scandisk and that little question of "Continue or abort" did them no good at all.

Now you are saying that because most folks lack clue, we should declare darkness as the standard? The "ChkDsk Knows Best, even if it kills your data to the point that it can no longer be recovered" is high-handed nonsense, geared to the convenience of "support" at the expense of the client. We'd like a lot less of that, please.

Anybody who's used scandisk doesn't trust it, either. And plenty of professionals did well fixing FAT32 disks.

Sure; that's a given - it's a one-pass automated tool with no "big-picture" awareness, how smart can you really expect it to be? If I show you a FAT1 that has 512 bytes of ReadMe.txt in it, and FAT2 that has sane-looking values in it, your guess at what to do would be correct. If a few sectors further in, you found the same thing, but the other way round, you'd guess how to fix that too. You would not just splat the whole of FAT1 over FAT2 because it "looked better", on the ASSumption that every part of FAT1 is as correct or otherwise as every other part of FAT1.
You'd also not be so dumb as to chop the Windows directory in half, just because at that point a dir entry started with a null, and throw the rest of it away. In fact, even if there were 512 bytes of zeros or ReadMe.txt content in the middle of a dir, you would recognise that as a sector splat and append the distant part of the same dir, excising the garbaged sector's contents. That's not rocket science to a tech with an interest in such matters, even if "your average user" couldn't do that themselves.

What a number of "average" users can (and do) do is call up and say: "I had a bad exit, and Scandisk ran as usual, but this time it wanted to delete half the Windows directory. So I switched off the PC and I'm bringing it in for file system repair and data recovery." With NTFS, AutoChk robs them of that chance. -------------- ---- --- -- - - - - "I think it's time we took our friendship to the next level" 'What, gender roles and abuse?' -------------- ---- --- -- - - - -
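The point above - judging each FAT sector on its own merits rather than splatting one whole copy over the other - can be sketched roughly as follows. The `looks_sane` heuristic here is a stand-in of my own; a real repair tool would validate actual FAT entries against the volume's cluster range:

```python
def repair_fat_mirror(fat1, fat2, sector_size=512, looks_sane=None):
    """Merge two FAT copies sector by sector, keeping whichever
    copy of each sector passes a sanity check, instead of copying
    the whole of FAT1 over FAT2 (or vice versa)."""
    if looks_sane is None:
        # Toy heuristic: a sector that is entirely printable ASCII
        # is almost certainly a stray file splat (e.g. ReadMe.txt
        # content), not FAT entries.
        looks_sane = lambda s: not all(32 <= b < 127 for b in s)
    out = bytearray()
    for off in range(0, len(fat1), sector_size):
        s1 = fat1[off:off + sector_size]
        s2 = fat2[off:off + sector_size]
        # Prefer s1 when it looks sane, or when neither copy does.
        out += s1 if looks_sane(s1) or not looks_sane(s2) else s2
    return bytes(out)
```

With a ReadMe.txt splat in one copy's sector and sane entries in the other's, the merge takes the sane sector from each side, which is exactly the manual judgment described above.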
#36
Comparison of NTFS/MFT recovery software?
#37
Comparison of NTFS/MFT recovery software?
In article ,
cquirke (MVP Win9x) wrote: On Sun, 05 Sep 2004 12:44:04 +0700, J. S. Pack wrote: On Sat, 04 Sep 2004 14:40:11 +0200, "cquirke (MVP Win9x)" Backup, by definition, loses data. Um, by your definition, perhaps. That's just a little too facile to be a general definition.

It's inevitable if you take user expectations as to what "backup" does into account, i.e. that it loses unwanted changes while preserving wanted changes. Implicit is the idea that the unwanted changes are more recent than the changes you want to keep; therefore, falling back to an earlier state will preserve data while losing the damage.

What the F are you talking about? You do a full backup of a system, with an appropriate tool, and if you rebuild from that backup you get a functional equivalent system when you are done. If you have open files while you are running a backup, you have to know what you're doing or you get what you deserve. -- Al Dykes ----------- adykes at p a n i x . c o m
#38
Comparison of NTFS/MFT recovery software?
On Sun, 05 Sep 2004 12:44:04 +0700, J. S. Pack wrote:
On Sat, 04 Sep 2004 14:40:11 +0200, "cquirke (MVP Win9x)" Backup, by definition, loses data. Um, by your definition, perhaps. That's just a little too facile to be a general definition.

It's inevitable if you take user expectations as to what "backup" does into account, i.e. that it loses unwanted changes while preserving wanted changes. Implicit is the idea that the unwanted changes are more recent than the changes you want to keep; therefore, falling back to an earlier state will preserve data while losing the damage. Clearly, falling back to an earlier state loses data saved or changes made after the backup was made; thus "loses data". Now you can hedge this in various ways:

1) Reduce time lapse between backup and live data

The extreme of this is real-time mirroring, such that changes are made to "live" and "backup" data at the same time - in essence, both copies of data are "live". This protects against a very specific type of problem; death of one half of the mirror. But anything that writes junk to the HD will write junk to both HDs equally, unless the junk arises within half of the HD subsystem of course. So in that sense, zero-lag backup isn't really a "backup". Also, several things that kill one HD will very likely kill both HDs; power spike, site disaster, theft of PC, flooding, etc.

2) Keep multiple time-lapse backups

Now we're getting somewhere; instead of having one big backup, you keep a number of these made at different times, and can fall back as far as needed; assuming you discover the data loss you wish to reverse within the time period you are covering in your backup spread. You will still lose whatever data you saved between the last sane backup and the time of data loss. The only way to avoid that is to have transaction-grain steps between successive backups.
The assumption this approach rests on is that the disaster is such that all further work ceases, so that the time between the data state you want to keep and the disaster you want to lose is always positive.

3) Selective scope

This counters the negative lead time problem that is inherent in the malware infection-stealth-payload sequence of events. By including only non-infectable data in your backup, you will lose malware, as well as losing content that ties the backup to particular hardware or application versions. These backups can then be restored onto new replacement PCs with less worry about inappropriate drivers, version soup, or malware restore.

So a need for data recovery is not going to go away, no matter how much you backup.

You can reasonably expect to have saved only what data you've saved in your backup before your head crashed

My point exactly; if you want anything more recent than that - or you find all your backups are unacceptable when restored - then the "other" stuff you want to see again will have to be recovered.

If my filesystem or disk crashes (and any disk can crash at any time, leaving moot the question of running chkdsk), I count myself lucky if I can save *anything*. That's why I often backup.

Sure, that's why we *cough* all backup. My approach is to:
- keep a small data set free of infectables and incoming junk
- automate a daily backup of this elsewhere on HD
- scoop the most recent of these to another PC daily
- dump collected recent backups from that PC to CDRW

If you can image the entire system, then you'd keep the last image made after the last significant system change, and use that as your rebuild baseline before restoring the most recent data backup. In practice, users tend to skip the "last mile" to CDRW for one reason or another (out of disks, didn't get it together, etc.). If it's a stand-alone PC, that leaves only the local HD backups, which remain available only as long as that part of the HD works.
If they have been switching the PC off overnight, they won't even have that.

Ponder on how you separate unwanted changes (loss) from all data you saved right up to the present moment, and see the problem.

I fail to see how this moves us towards *how* a naive user may recover data from a crashed disk or severely damaged filesystem.

My point was that backups do not remove the role of data recovery, even if they do reduce what is at stake. The user's environment includes support techs, and in such cases, you'd expect these to be involved if the user isn't keen on firing up the Diskedit chainsaw themselves. Data recovery is not always a costly clean-room epic undertaking; sometimes it's a couple of snips here and there, and can be faster and cheaper than rebuilding from scratch and restoring backups.

http://www.windowsubcd.com/index.htm

Ah! This time the page loaded!! Looks very interesting, thanks!! --------------- ----- ---- --- -- - - - The memes will inherit the Earth --------------- ----- ---- --- -- - - -
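Option 2 above, a spread of time-lapse backups, amounts to a simple generation scheme: keep the last N snapshots, fall back as far as the spread covers, and accept that anything saved after the newest sane generation is lost. A minimal sketch (names are illustrative, not any particular tool's API):

```python
from collections import deque

class GenerationStore:
    """Keep the last `depth` time-lapse backups. Restoring steps
    back through the spread; the oldest generation is silently
    pruned as new ones arrive, so the spread is the recovery
    window."""

    def __init__(self, depth=3):
        self.gens = deque(maxlen=depth)  # oldest .. newest

    def backup(self, snapshot):
        self.gens.append(snapshot)

    def restore(self, steps_back=0):
        # steps_back=0 -> newest generation, 1 -> one older, etc.
        return self.gens[-1 - steps_back]
```

If the newest generation turns out to already contain the damage (the negative lead time problem of option 3), `restore(1)` or `restore(2)` reaches back to an earlier state, at the cost of everything saved since.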
#39
Comparison of NTFS/MFT recovery software?
On Sat, 4 Sep 2004 00:23:07 +0200, "Folkert Rienstra"
"cquirke (MVP Win9x)" wrote Thanks; I've downloaded it, but will wait until I have time before I try it (else the demo period may time out before I get a round tuit)

"There is a timeout on un-registred versions (60 days from release)," Maybe you should read first before you snip?

Ah, so it's going to die on Day 60 even if I don't install it or use it until Day 59. Bummer; I'll just have to take my chances then. That's assuming "release" isn't already 50+ days ago ;-p --------------- ----- ---- --- -- - - - Memes don't exist - pass it on --------------- ----- ---- --- -- - - -
#40
Comparison of NTFS/MFT recovery software?
cquirke (MVP Win9x) wrote:
On Sun, 05 Sep 2004 10:20:47 +0700, J. S. Pack wrote: On Sat, 04 Sep 2004 14:40:11 +0200, "cquirke (MVP Win9x)" On Tue, 31 Aug 2004 17:33:09 GMT, "Stephen H. Fischer" What you want is the ability to *interactively* check the file system, as Scandisk does for FATxx. You want ChkDsk to stop and say "I found such-and-such an error and (more info) I plan to "fix" this by doing X, Y, Z. Continue, or abort?" but it's too brain-dead for that. This is all well and good for techies who can use disk editors and know their way around the file system.

Yes it is; and it should be there for that reason alone, if nothing else. It's easier to understand what Scandisk says about what it finds than, say, a raw register dump you get in Stop errors ;-)

It's meaningless in the real world where the vast majority of users don't even know what a file system is. Most of them have used the ol' scandisk and that little question of "Continue or abort" did them no good at all.

Now you are saying that because most folks lack clue, we should declare darkness as the standard? The "ChkDsk Knows Best, even if it kills your data to the point that it can no longer be recovered" is high-handed nonsense, geared to the convenience of "support" at the expense of the client. We'd like a lot less of that, please.

This is one of the most ludicrous arguments I've ever seen. If you don't like chkdsk then just don't use it.

Anybody who's used scandisk doesn't trust it, either. And plenty of professionals did well fixing FAT32 disks.

Sure; that's a given - it's a one-pass automated tool with no "big-picture" awareness, how smart can you really expect it to be? If I show you a FAT1 that has 512 bytes of ReadMe.txt in it, and FAT2 that has sane-looking values in it, your guess at what to do would be correct. If a few sectors further in, you found the same thing, but the other way round, you'd guess how to fix that too.
You would not just splat the whole of FAT1 over FAT2 because it "looked better", on the ASSumption that every part of FAT1 is as correct or otherwise as every other part of FAT1. You'd also not be so dumb as to chop the Windows directory in half, just because at that point a dir entry started with a null, and throw the rest of it away. In fact, even if there were 512 bytes of zeros or ReadMe.txt content in the middle of a dir, you would recognise that as a sector splat and append the distant part of the same dir, excising the garbaged sector's contents. That's not rocket science to a tech with an interest in such matters, even if "your average user" couldn't do that themselves.

What a number of "average" users can (and do) do is call up and say: "I had a bad exit, and Scandisk ran as usual, but this time it wanted to delete half the Windows directory. So I switched off the PC and I'm bringing it in for file system repair and data recovery." With NTFS, AutoChk robs them of that chance.

You might want to study what's publicly available about the file structure of NTFS. It doesn't work the way you seem to think it does. -------------- ---- --- -- - - - - "I think it's time we took our friendship to the next level" 'What, gender roles and abuse?' -------------- ---- --- -- - - - - -- --John Reply to jclarke at ae tee tee global dot net (was jclarke at eye bee em dot net)
#41
Comparison of NTFS/MFT recovery software?
"cquirke (MVP Win9x)" wrote in message news
On Sat, 4 Sep 2004 00:23:07 +0200, "Folkert Rienstra" "cquirke (MVP Win9x)" wrote Thanks; I've downloaded it, but will wait until I have time before I try it (else the demo period may time out before I get a round tuit) "There is a timeout on un-registred versions (60 days from release)," Maybe you should read first before you snip? Ah, so it's going to die on Day 60 even if I don't install it or use it until Day 59. Bummer; I'll just have to take my chances then. That's assuming "release" isn't already 50+ days ago ;-p

As I said in another post: ... if you're not downright stupid you just set your clock back and save you a 1.5 MB download that may not even be different. --------------- ----- ---- --- -- - - - Memes don't exist - pass it on --------------- ----- ---- --- -- - - -
#42
Comparison of NTFS/MFT recovery software?
#43
Comparison of NTFS/MFT recovery software?
How about running chkdsk without any switches, reading the log and deciding how you want to proceed?

"cquirke (MVP Win9x)" wrote in message ... On 5 Sep 2004 13:30:24 -0400, (Al Dykes) wrote: cquirke (MVP Win9x) wrote: What the F are you talking about? You do a full backup of a system, with an appropriate tool, and if you rebuild from that backup you get a functional equivalent system when you are done.

Yes - with loss of all data done since the backup was created. Got it? ------------ ----- ---- --- -- - - - - Our senses are our UI to reality ------------ ----- ---- --- -- - - - -
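For the read-only run suggested here, chkdsk's exit code already gives a rough verdict before you dig the log out of the Event Log. A sketch of interpreting it - the code meanings are as commonly documented for Windows chkdsk, so verify against your own version:

```python
# Exit-code meanings as commonly documented for chkdsk; worth
# re-checking against the documentation for your Windows version.
CHKDSK_EXIT = {
    0: "no errors found",
    1: "errors found and fixed",
    2: "disk cleanup performed (or skipped because /f was absent)",
    3: "errors found but NOT fixed (read-only run, or check failed)",
}

def advise(exit_code):
    """Turn a read-only `chkdsk C:` exit code into the go/no-go
    decision the post suggests making by hand: only a clean result
    means it is safe to proceed without further investigation."""
    status = CHKDSK_EXIT.get(exit_code, "unknown exit code")
    safe = exit_code == 0
    return status, ("proceed" if safe else "investigate before /f")
```

Anything other than a clean result is the cue to stop and look before letting /f loose on the volume.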
#44
Comparison of NTFS/MFT recovery software?
"cquirke (MVP Win9x)" wrote in message
On 5 Sep 2004 13:30:24 -0400, (Al Dykes) wrote: cquirke (MVP Win9x) wrote: What the F are you talking about? You do a full backup of a system, with an appropriate tool, and if you rebuild from that backup you get a functional equivalent system when you are done. Yes - with loss of all data done since the backup was created. Got it?

Doubtful. ------------ ----- ---- --- -- - - - - Our senses are our UI to reality ------------ ----- ---- --- -- - - - -
#45
Comparison of NTFS/MFT recovery software?
On Wed, 08 Sep 2004 23:12:19 GMT, "Frank Jelenko"
How about running chkdsk without any switches, reading the log and deciding how you want to proceed?

That's what I'd do, but there are limitations here:
- ChkDsk is known to throw spurious errors if the volume is "in use"
- AutoChk simply will NOT work in this mode
- the log is so buried in the Event Log it's near-impossible to find
- requires NT to run, which writes to the at-risk file system (if C:)
- the Event Log also requires NT to run, risks as above

What one typically wants to do is:
- after a bad exit, before the OS writes to the HD, have AutoChk check
- AutoChk should stop and prompt on errors
- then either proceed, or abort both AutoChk and the OS boot
- if aborting, then a safe mOS is needed from which to re-test etc.

That's exactly how the original auto-Scandisk works. Win.com runs DOS-mode Scandisk with an implicit /Custom parameter, which thus facilitates fine-grain control via Scandisk.ini, before Windows starts booting up or writing to the file system. Scandisk.ini can be set so the scan stops on errors. At that point, it's safe to reset out of the boot process, press F8 on next boot, choose Command Prompt Only as a safe mOS, and do an elective Scandisk from there (or run alternate recovery/repair tools). A "better" OS should at least match this sensible and prudent design. ------------ ----- ---- --- -- - - - - The most accurate diagnostic instrument in medicine is the Retrospectoscope ------------ ----- ---- --- -- - - - -
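The stop-and-prompt behaviour wished for here can be sketched as a simple loop - purely illustrative, since AutoChk exposes no such hook; the error/fix structure is made up for the sketch:

```python
def interactive_check(errors, ask=input):
    """Sketch of the interactive mode the post wants from AutoChk:
    describe each proposed fix and let the user continue or abort
    before anything is written. Returns the fixes applied so far
    and whether the run completed or was aborted."""
    applied = []
    for err in errors:
        answer = ask('Found %s; planned fix: %s. Continue? [y/n] '
                     % (err["what"], err["fix"]))
        if answer.strip().lower() != "y":
            # Abort here; re-test from a safe maintenance OS instead.
            return applied, "aborted"
        applied.append(err["fix"])
    return applied, "completed"
```

The point of the design is visible in the abort path: nothing after the refused fix is touched, so the volume is left for elective repair, exactly as the Scandisk.ini stop-on-errors setting allows.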