#17
On Sat, 07 Oct 2017 00:19:04 -0400, Paul wrote:
> If your older machine has a PCI card slot, you can get a USB2 card for around $10, depending on the whims of the sellers. You can also get a USB3 card for PCI, but they are quite expensive - could be as much as $100 for one of those. The way those work is the card has two chips (big deal, right). One chip converts PCI bus to PCI Express. The PCI bus is limited to around 100MB/sec of practical transfer rate. Next, a PCI Express USB3 chip is connected. The card looks like this:
>
>     --- PCI to PCI Express ------- PCI Express to USB3 --- (two or four connectors)
>
> The parts cost for that might be $5 for the first chip and maybe $5 for the second chip. But because they don't make very many of the cards, the cards end up with quite high prices. Really too expensive to be slapping into old computers (which is the intention when building cards like that). The normal high volume cards are like this:
>
>     --- PCI Express to USB3 --- (two or four connectors) for x1 slot

Years ago, I installed a USB2 card in this computer. It lasted several years, then it just quit working. Two years ago, I bought another one. That one lasted about one year, then that one quit working. I have just given up on them since. The USB2 is a lot faster than the 1.1. In fact I don't see any reason to go to USB3. I think USB2 is fine. That second PCI card is still in the computer, but it does nothing now. I have to always boot into Win 2000 to utilize any USB capabilities. I have a couple of 2GB flash drives that Win98 recognizes, that's all. I use those flash drives to transfer small files.

Half of a miracle occurred. I plugged my bad drive into my XP machine, using the adaptor that I bought for USB to hard drive, and ran Recuva on it. It took that software nearly 2 hours to go through that partition. When it finished, I was very disappointed. Out of about 23GB of data on the drive, Recuva found 63 files, of which only 50-some were recoverable. All of them were small .JPGs or text files. That was a waste of time.

But then I plugged that bad drive back into my Win98 machine, and ALL the folders came back. Every folder that was on that partition is now back, and I can go from folder to sub-folder and see all the files. That allowed me to go through everything and determine which folders are on my old backup, and which ones are newer. How that folder structure returned is beyond me, but that was good news.

But it's not all good. I began copying everything that's not backed up to my C: partition. I can only save about 60% of the files, and have to copy one file at a time, while manually making folders. For SMALL files, I can save about 4 out of 5, but for large files it's less than 50%. I've spent hours copying files, and have many more hours to go, but I will save what I can. After that, I have to decide if I want to get a pro data recovery business, or do something more drastic on my own.

How the folders came back from the dead is beyond me. I can only think that Recuva brought them back when it scanned the drive, or else it has something to do with my removal of all the files on the H: and I: partitions, after backing them up twice. Oddly enough, XP sees that bad G: partition as unformatted. Windows 2000 does the same, but Win98 is seeing it as a valid partition with a lot of data. I just can't copy much of the data.
#18
wrote:

> [...] Oddly enough, XP sees that bad G: partition as unformatted. Windows 2000 does the same, but Win98 is seeing it as a valid partition with a lot of data. I just can't copy much of the data.

On the *Destination* drive, you can gather up a collection of freshly copied folders and compress them. Use 7Z Ultra for example.
That's how you can make storage space go further.

Note that the "technician computer" you're using must be "healthy" and not known for RAM errors or crashing or instability. You need a perfectly stable computer, to trust it to compress files. *Don't* compress your only good copy of a file on a known-to-be-flaky computer.

https://en.wikipedia.org/wiki/7zip

Downloads: http://7-zip.org/

For example, on some of my backup drives, I can save 500GB of space simply by compressing the Macrium Reflect MRIMG files. I only do that in cases where I know I won't be referring to those files for some time. So my "regular" "comprehensive" backups get compressed. The "spur of the moment" backups are left uncompressed, or they're deleted when I need the space. If I delete a file by accident, I look to the "spur of the moment" backups first, to see if the file is there.

To really work with compressed content, you need at least one good-sized spare (extra) hard drive, to give you space to work. Obviously, if you compress the **** out of groups of folders, a group at a time, they're not going to fit on the drive if you decompress them all at the same time.

And it takes a *long* time to do Ultra compression. You can use GZIP compression or WINZIP compression as a good space/time tradeoff. Ultra is when you're squeezing the last drop out of your drive.

WinRAR is similarly capable to 7ZIP, except it's commercial, and I don't know what it costs. WinRAR decompression is likely "free", so archives are never trapped in the format. But compression is likely to cost a few bucks.

To compress an entire hard drive on my multi-core computer takes all day (24 hours), just to give you an idea how long you could wait when doing really, really large folders of stuff. The Test Machine does 7Z at 18MB/sec, whereas the machine I'm typing on can only manage 2-3MB/sec. Just to give some ballpark numbers for the slowest methods.

*******

If you enable NTFS compression on the target drive, the OS can compress files when you write them on the target drive. But the degree of compression is poor. Even WinZIP or GZIP achieves better compression than NTFS compression. Nevertheless, I've used it. When I was doing a Google Chrome build on a small drive here, I enabled NTFS compression just so it wouldn't run out of space. And when the build finished, there was only 3GB of space left, so the compression paid off. The build would have bombed with "out of space" if I hadn't enabled NTFS compression in that case.

    NTFS compression   (equivalent to LZ4 maybe, fast, not efficient)
    GZIP/WinZIP        (better compression, a bit slower)
    7Z Ultra or WinRAR (best compression, really slow)

The one you use depends on your degree of desperation :-)

I would not recommend extensive compression runs on a lowly P4, as the wait could drive you nuts. Even my current machine, that does 2-3MB/sec compression, that's pretty hard to take. A ThreadRipper for a thousand bucks, that's what you want for compression. But hardly anybody can afford stuff like that.

And if you want multithreaded GZIP, there is a compressor called "PIGZ". But the fit and finish isn't perfect on it - the header info needs a slight tweak, and the port never had that fixed on it. It's possible the Linux version of PIGZ has the header fixed. The GZIP compression option in the 7ZIP package is only single-threaded. If you have a single core P4 without Hyperthreading, then these distinctions don't matter, and PIGZ and GZIP run at the same relative speed.

*******

So how do you do compression, anyway?
A good question.

    Source folders (select and "Add to Archive")

    Source --- 7ZIP --- blah.7z output

When the compression step is complete, you delete the Source folder.

    xxx --- 7ZIP --- blah.7z output

Now you have room to copy some more stuff off the other drive, onto the recovery drive. When you have another batch ready, you can compress those.

    Source2 --- 7ZIP --- blah2.7z output
    xxx --- 7ZIP --- blah2.7z output

By inching along that way, you may be able to squeeze the whole archive onto your only output drive. When a set of folders is compressed, you delete the source folder.

Note that if you right click a .7z file, 7ZIP has a "Verify" or "Test" option, and that verifies the checksum, and can tell you if something untoward happened. It cannot detect all hardware failure conditions, but it can sometimes help if you suspect trouble. *Don't* delete the Source until you're feeling good about the "blah.7z" and its integrity.

*******

A better answer is having a good drive to use in the first place. Doing recovery onto a too-small drive isn't that much fun. And I'm sure you've taken that into consideration when buying drives for your setup. Not every computer is going to handle the latest and greatest stuff, and even I have budget limits. I haven't bought one of those 50TB SSD drives yet :-) They probably cost as much as my house.

Paul
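For anyone who would rather script that compress / verify / delete cycle than click through the 7-Zip GUI, the same steps exist on 7-Zip's command line. A minimal sketch, assuming the 7z executable (7z.exe on Windows, p7zip's 7z on Linux) is on the PATH, and with "Source" and "blah.7z" as placeholder names:

    7z a -t7z -mx=9 blah.7z Source    # "a" adds the Source folder to a new .7z archive; -mx=9 is Ultra compression
    7z t blah.7z                      # "t" tests the archive checksums, the same as the right-click "Test" option

Only after the test passes would you delete the Source folder (rd /s /q on Windows, rm -rf on Linux).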
#19
In message , Paul writes:

[]

> On the *Destination* drive, you can gather up a collection of freshly copied folders and compress them. Use 7Z Ultra for example. That's how you can make storage space go further.

I don't think shortage of space for rescued files is anonymous's main problem at the moment.

[]

> For example, on some of my backup drives, I can save 500GB of space, simply by compressing the Macrium Reflect MRIMG files. I only do that in cases where

Out of curiosity, what compression do you let Macrium itself do? From what I remember, Macrium 5 offers a choice of something like no compression, moderate compression (recommended), and high compression. I tend to use none. I suspect later versions of Macrium haven't changed much in this area.

[]

Anonymous: how are you remembering which files have been saved successfully (either now, or because they were backed up previously)? I'd be tempted to delete them (and any folders that are then empty) from the flaky drive, so that only the ones still to be rescued are still visible; however, that would involve writing to the flaky drive (deleting is just modifying folder data), which is generally a Bad Idea. [It's what I did when I had a flaky drive, though - well, I did move rather than copy, which of course does a copy then a delete if the copy was successful.]

For the hard-to-read files, I'd be tempted to seek - or write - something which opens them, then copies them byte by byte (to a file on a good drive) until a read error occurs; that way you'd have at least part of the file, which may or may not be usable. (Ideally, something which then carries on after the bad patch, maybe writing blanks to the copy for the unreadable bytes - so that the copy at least is the same size and has the tail, which for some filetypes - I think .zip is one - is where important information about the contents is.)

Paul (or anyone else) - do you know of any such utility? [Preferably not involving command lines, either in Windows or Linux (-:!] I did (a long time ago - I think in BBC BASIC!) write one that did the first part (copy byte by byte until error), but not beyond. [IIRR, BBC BASIC closed the input file when there was an error reading.]
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf
Radio 4 is one of the reasons being British is good. It's not a subset of Britain - it's almost as if Britain is a subset of Radio 4. - Stephen Fry, in Radio Times, 7-13 June, 2003.
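For what it's worth, the closest stock command-line approximation of the utility described above is plain dd with its error-tolerant conversion flags: it keeps reading past bad spots and, with "sync", pads each unreadable block with zeros so the copy keeps its full size (and its tail). A rough sketch, with the mount points and file name as placeholders only:

    dd if=/mnt/bad/file.pdf of=/mnt/good/file.pdf bs=512 conv=noerror,sync
    # conv=noerror carries on after read errors; conv=sync pads short or failed reads
    # out to the 512-byte block size with zeros, so later data stays at the right offsets.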
#20
In message , "J. P. Gilliver
(John)" writes: [] For the hard-to-read files, I'd be tempted to seek - or write - something which opens them, then copies them byte by byte (to a file on a good drive) until a read error occurs; that way you'd have at least part of the file, which may or may not be usable. (Ideally, something which then carries on after the bad patch, maybe writing blanks to the copy for the unreadable bytes - so that the copy at least is the same size and has the tail, which for some filetypes - I think .zip is one - is where important information about the contents is.) Paul (or anyone else) - do you know of any such utility? [Preferably not involving command lines, either in Windows or Linux (-:!] I did (a long time ago - I think in BBC BASIC!) write one that did the first part (copy byte by byte until error), but not beyond. [IIRR, BBC BASIC closed the input file when there was an error reading.] Actually, I've just remembered: IrfanView will do the first part - read up to error - and then let you save; however, it may only be for certain types of file (I think it does for JPEG images, for example; it fills from the failure to the end with grey). -- J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf The smallest minority on earth is the individual. Those who deny individual rights cannot claim to be defenders of minorities. - Ayn Rand, quoted by Deb Shinder 2012-3-30 |
#21
J. P. Gilliver (John) wrote:
> For the hard-to-read files, I'd be tempted to seek - or write - something which opens them, then copies them byte by byte (to a file on a good drive) until a read error occurs [...] Paul (or anyone else) - do you know of any such utility? [Preferably not involving command lines, either in Windows or Linux (-:!]

That's ddrescue, made available as the "gddrescue" package on Linux. Near the bottom of this page. It's used to clone entire, damaged hard drives. You can "clone to an image file" or "clone to the physical disk". If cloning to physical disks, the destination has to be the same size, or be a larger disk, so the end of the source disk is not chopped off by accident.

http://www.cgsecurity.org/wiki/Damaged_Hard_Disk

    sudo apt install gddrescue    # Get the package
    which ddrescue                # Check it is present

    # First, grab most of the error-free areas in a hurry.
    # Make sure ~ (your home directory) has enough storage space for logging.
    # The "rescued.log" keeps track of which sectors are read and copied OK.
    sudo ddrescue -n /dev/sda /dev/sdb ~/rescued.log

    # Then try to recover as much of the dicey areas as possible:
    sudo ddrescue -r 1 /dev/sda /dev/sdb ~/rescued.log

The /dev/sda is the old disk. The /dev/sdb is the new disk (same size or larger). You can run "sudo gparted" and use that to view some information about the disks, and get some idea whether yours is sda or sdb or whatever.

I'd provide a link to a Windows one, but I'm not convinced one exists. The dude who wrote this one knows exactly how to port (or combine) the source files, and that person would be the perfect person to write one for us. But this one is not equipped to keep track of what was done, or to do retries like the other one does. But at least this person has figured out the namespace for low level storage access (whatever it is).

http://www.chrysocome.net/dd

The chrysocome program only has one bug I know of. If cloning a USB stick to something else, the program does not reliably detect "the end" of the device. It's advised to use a blocksize parameter and count, to tell the program exactly when to stop copying. If your source USB is 1GB, then you set a blocksize and count that transfers that exact amount.

*******

The ddrescue program also has the ability to copy using a dynamic block size. In essence, it knows it has to transfer the whole disk, and it will vary the size of the read command until it's snagged the whole source disk. That's unlike how the chrysocome (Windows) one works.

This is an example from my notes. This is a backup to an image file, rather than a low-level disk-to-disk clone. The "sdb.raw" file, stored somewhere, will be the same size as the source disk. If the sdb.raw file has any 512 byte blocks filled with zeros, they will take no storage space on the destination file system. That's the "sparse" notation, the "-S".
But such a technique is reserved for situations where the disk owner sweeps the white space on the drive with sdelete or similar, causing blocks full of zeros to appear in any area where no files are stored. If you fill the white space with zeros, it allows archival copies made this way to take less space. "Sparse" is a feature of the NTFS file system, and there's even a utility for interacting with sparse files.

The reason this is in my notes file is I actually ran this:

    ddrescue -S -b8M /dev/sdb /mount/external/backup/sdb.raw /mount/external/backup/sdb.log

The -b8M says "you can use block transfers as large as 8MB if you feel like it". The actual program will not issue a size that big, as the hard drive will tell it to get lost if it tries. Disks have a limit on the size of individual commands, and ddrescue has a way to "probe" the supported size. In a way, my using that value is a way of saying "unlimited... use the biggest one that works" :-)

Since my disk was not damaged, the contents of the sdb.log file later were rather small.

Paul
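A side note on the above: ddrescue is not limited to whole disks or partitions - it can also be pointed at a single file, retrying the bad spots and leaving anything unreadable as zero-filled gaps, which is close to the per-file rescue being discussed earlier in the thread. A rough sketch, assuming the damaged partition can be mounted read-only at /mnt/bad and with the path names as placeholders:

    # Retry each bad area up to 3 times; the map file records what was and wasn't recovered
    ddrescue -r 3 /mnt/bad/manuals/schematic.pdf /mnt/good/schematic.pdf ~/schematic.map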
#22
On Sat, 7 Oct 2017 22:02:40 +0100, "J. P. Gilliver (John)"
wrote:

> Anonymous: how are you remembering which files have been saved successfully (either now, or because they were backed up previously)?

I wrote down all root folders, compared them to my backup, and marked on the paper which ones I don't have to mess with, because they are backed up. There are two main folders that need to be saved. One just has a series of recent folders to save. I wrote down which ones I have to save, then tried to copy them. If it copied good, I marked "OK"; if only a partial copy, I wrote "Part". A few copied nothing; on those I wrote "Bad".

> I'd be tempted to delete them (and any folders that are then empty) from the flaky drive, so that only the ones still to be rescued are still visible; however, that would involve writing to the flaky drive (deleting is just modifying folder data), which is generally a Bad Idea. [It's what I did when I had a flaky drive, though - well, I did move rather than copy, which of course does a copy then a delete if the copy was successful.]

By accident I wrote to that bad partition twice now. It never fails: when I copy stuff, I accidentally hit the wrong key and it makes a file in the same folder called "Copy of filename.xxx". I just deleted them with no effect. I thought about deleting all the stuff that is saved, but I don't think that is a good idea. I did delete everything from the other two good partitions on the drive though. Then I formatted them too. I was thinking of seeing if I could copy the bad partition to a good one, but I think it's better to copy to another HDD entirely.

> For the hard-to-read files, I'd be tempted to seek - or write - something which opens them, then copies them byte by byte (to a file on a good drive) until a read error occurs; that way you'd have at least part of the file, which may or may not be usable.

Most of the large files that I can't save are .PDF manuals and electronic schematics. Part of a PDF is worthless. On the other hand, part of an MP4 will play and part of a JPG can usually be viewed.

> Paul (or anyone else) - do you know of any such utility? [Preferably not involving command lines, either in Windows or Linux (-:!]

Yeah, I wish there was something easier to use. I read a bunch of websites and it appears there are several different versions of ddrescue and gddrescue. I can't even find out where to download it, and know it's the right one. Of course when it comes to Linux, it seems there are always too many versions and everyone claims theirs is the best. (One reason I don't like Linux.) One site shows that gddrescue is a graphical menu of ddrescue, but other sites claim the "G" only means "GNU". A URL of the place to download the GOOD ONE would be appreciated, and hopefully the one with the GUI front end.

In all honesty, I don't really think this drive is dying. Yes, it has a bad sector, and unfortunately it appears that bad sector is in the FAT tables. I will replace the drive though, but I don't think it's dying by the minute.

I really think Scandisk caused this mess. A power outage caused the computer to shut off while I was using it, and many of my partitions required running scandisk because they were showing the wrong size. But that happens fairly often and has never been a problem. I should mention that I defrag at least twice a week, and the data should not be fragmented. If somehow I can get the FAT tables repaired, I may have a usable partition again.

I'm still tempted to let Norton Disk Doctor do what it's suggesting, which is to repair the boot record and copy the backup version of the FAT, but before I do that, I want to save as much data as I can, and hopefully clone the partition too. It's now obvious that all my data is there, but the FAT record is not showing or accessing it properly, thus the whole problem is in the FAT, not the data itself. I may take the drive to a local computer repair shop and see if they have some software to fix it, or should I get brave and let Norton Disk Doctor do its thing??? This sure is a gamble!!!!
#23
On Sat, 7 Oct 2017 22:02:40 +0100, "J. P. Gilliver (John)"
wrote:

> I don't think shortage of space for rescued files is anonymous's main problem at the moment.

Disk space is not a problem. I have two 120GB drives and each one is about half full. I copy all my movies, videos, and music to external plug-in drives. The only reason my G: partition WAS getting full was because I have, in PDF format, nearly every electronics magazine from the 1940s thru 2000. That was using a lot of space, so I moved it to my I: partition (and was planning to move it to my portable drive soon). I'm real happy I moved that to I:, or I would have lost all of that too.
#26
In message , Paul writes:

> wrote:
>> A power outage caused the computer to shut off while I was using it
>
> FAT32 is not protected against power outages. NTFS has a USN Journal that allows playback during [US Navy (-:?] boot-time repair, and file fragments can be tossed at that point in time (your open Word document). The result on NTFS is, the rest of the file system is safe, and relatively bulletproof.

I suppose I must concede to this, in that this machine (with the NTFS it came with) has rarely given me any trouble (apart from when the discs stopped going round! And even then I was able to rescue ~95% when I'd unstuck it). But I really would like an explanation - perhaps a Paul-style one - of _what_ this mysterious "journal" is about. So often, I see "NTFS is better because it has a journal file" or something like that, with no further explanation.

[]

> Have you ever considered adding a UPS to the computer room? Mine doesn't stay up very long, but I get a chance to do a shutdown before the battery flakes out on it.

One of the reasons I mostly use laptops/netbooks now; they are in effect a poor man's UPS, in that the battery - even if in poor condition - usually has at least enough go to allow a controlled shutdown, often even allowing the completion of what you were doing first, if it isn't too complex. (At least, a save of a document you were editing or whatever.) The machine with the Win10 on it, you never know how long it's going to take to finish shutdown, and I've had a close shave or two with it (the battery lasted long enough for it to finish spinning the Juggler Balls at me). I was very tempted to just hit the power button on the Win10 machine instead. (Doesn't 10 hibernate rather than shut down anyway, by default, when you click through shut down or hit the power button?)
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf
Never rely on somebody else for your happiness. - Bette Davis, quoted by Celia Imrie, RT 2014/3/12-18
#28
J. P. Gilliver (John) wrote:
> In message , writes:
>> I may take the drive to a local computer repair shop and see if they have some software to fix it, or should I get brave and let Norton Disk Doctor do its thing??? This sure is a gamble!!!!
>
> Indeed. What _are_ these - mostly schematics in .pdf form, from what you've said - that are so valuable/irreplaceable? (And why are they almost impossible to replace: where have you been getting them from?)

In-place repair always has an element of danger, whether it's scandisk or CHKDSK or whatever. You should always make a copy of valuable materials and work on the copy, *not* the known-damaged item. I prefer a sector-by-sector copy, one where a log is produced showing what sectors did not get copied. The flavor of operation done may be affected by the available space you've got to work with. The first priority is making the copy, just in case the disk is about to mechanically fail on you.

Similarly, deleting files from the source, or using "move" of files from the source (causing deletion), is not advised. The file system may attempt to modify some part of the damaged FAT, on top of that bad sector.

You can do anything you want to the source disk as long as:

1) The operation has no side effects on the source partition.
2) The commands you're executing when reading the disk don't throw the heads around too vigorously.

That's where the sector-by-sector copy comes in, as it smoothly moves over the disk surface while working.

Paul
#30
On Sun, 08 Oct 2017 12:25:37 -0700, Ken Blake
wrote:

> On Sun, 08 Oct 2017 13:26:19 -0500, Char Jackson wrote:
>> On Sat, 07 Oct 2017 13:11:18 -0400, wrote:
>>> [...] The USB2 is a lot faster than the 1.1. In fact I don't see any reason to go to USB3. I think USB2 is fine.
>>
>> I agree that USB2 is fine... for things like a keyboard or a mouse. I certainly wouldn't want to use it to connect an external drive, including a thumb drive.
>
> I use USB2 to connect external drives all the time, since I don't have any USB3 ports. But since I use external drives only for backup, and I do other things (sometimes going to sleep) while the backup is running, I really don't care how slow it is.

USB 2 can copy a rather large partition of mine in about a half hour. That's acceptable to me. USB 1.1 takes 16 or more hours for the same drive. That's not acceptable, but when it's all I have, I use it to back up and try to do it when I am sleeping, and since that extends into the next day, I try to do it when I know I won't need the computer that next day.

I have never had any USB 3, so I don't know how long that would take, but I do question whether copying stuff as fast as they claim USB 3 is, the copy is safe and reliable. It's just like driving. Driving 200 miles at 10mph is going to take very many hours. It can be done, but it sure is slow. Driving at 60mph is pretty safe and gets the trip done. Although driving at 140mph will get you to your destination fast, it's dangerous and risky. Given the choice, I'd choose the 60mph.
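As a rough sanity check on those figures: USB 1.1 tops out at 12 Mbit/s, which in practice is roughly 1 MB/s, while USB 2.0's 480 Mbit/s typically delivers somewhere around 30 MB/s through an external-drive adaptor. At about 1 MB/s, 16 hours moves roughly 55-60 GB of data; at around 30 MB/s, the same amount copies in about half an hour - so both of the times quoted above are consistent with a partition in the 50-60 GB range.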