A Windows XP help forum. PCbanter



Reading Apple Files with a Windows Machine?



 
 
  #31  
Old July 23rd 18, 04:32 PM posted to alt.windows7.general
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default Reading Apple Files with a Windows Machine?

Boris wrote:
Paul wrote in news
Huge snip


Well, if you absolutely cannot get the table with smartctl
in Linux, go back to windows and use the Health tab
of HDTune 2.55.

http://www.hdtune.com/files/hdtune_255.exe

While Linux could be mis-interpreting some parameters
in SMART, it's possible the disk itself has some way
of indicating imminent failure after "doing the short
test" on its own. It could be a failure indicated by
short test, rather than an analysis of the SMART
parameter table to reach the same conclusion.

You really need to get working on your ddrescue/gddrescue.
And try and get as much data off the disk as possible.
Perhaps the active surface of the disk is toast, but
you'll discover that when you try. The thing is, the
disk would not have responded, unless the disk heads
loaded and the controller was able to read the
Service Area. So *some* portion of the platters
is readable. But we don't know how much.

Paul


I ran HDTune 2.55 a few weeks ago with the MAC HD USB tethered. The
results were:

https://postimg.cc/gallery/25g6b1tdo/

The Health Tab displayed nothing for the MAC HD.

At the same time, to be sure HDTune was working correctly, I applied
HDTune 2.55 to my Windows OS disk, and it reported ok on all tabs but the
Health Tab, which displayed nothing.

Yesterday, with the MAC HD connected directly to the SATA motherboard
connection, HDTune recognized all drives on my system, except for the MAC
HD.

I'm not going to dick around any longer trying to get S.M.A.R.T. data.

I'm going to see what I can do with imaging the MAC HD to another hard
drive.


It should have worked.

But ddrescue is where you have to go, and quickly.

The details don't matter any more, just the objective matters.
Save the sectors before it is too late.

Paul
  #32  
Old July 25th 18, 08:03 AM posted to alt.windows7.general
Paul[_32_]

Boris wrote:
Paul wrote in news
Boris wrote:
Paul wrote in news
Huge snip


Well, if you absolutely cannot get the table with smartctl
in Linux, go back to windows and use the Health tab
of HDTune 2.55.

http://www.hdtune.com/files/hdtune_255.exe

While Linux could be mis-interpreting some parameters
in SMART, it's possible the disk itself has some way
of indicating imminent failure after "doing the short
test" on its own. It could be a failure indicated by
short test, rather than an analysis of the SMART
parameter table to reach the same conclusion.

You really need to get working on your ddrescue/gddrescue.
And try and get as much data off the disk as possible.
Perhaps the active surface of the disk is toast, but
you'll discover that when you try. The thing is, the
disk would not have responded, unless the disk heads
loaded and the controller was able to read the
Service Area. So *some* portion of the platters
is readable. But we don't know how much.

Paul

I ran HDTune 2.55 a few weeks ago with the MAC HD USB tethered. The
results were:

https://postimg.cc/gallery/25g6b1tdo/

The Health Tab displayed nothing for the MAC HD.

At the same time, to be sure HDTune was working correctly, I applied
HDTune 2.55 to my Windows OS disk, and it reported ok on all tabs but the
Health Tab, which displayed nothing.

Yesterday, with the MAC HD connected directly to the SATA motherboard
connection, HDTune recognized all drives on my system, except for the MAC
HD.

I'm not going to dick around any longer trying to get S.M.A.R.T. data.

I'm going to see what I can do with imaging the MAC HD to another hard
drive.

It should have worked.

But ddrescue is where you have to go, and quickly.

The details don't matter any more, just the objective matters.
Save the sectors before it is too late.

Paul


So the next step was to try ddrescue. I remembered Frank Slootweg (in
this thread 7/6/2018) warning that:

"sda and sdb dependent on the sequence in which you connect the
original disk and the to-be-copied-to disk?"

I tested this, and discovered this is true, that the device designation is
dependent on order of connection. With internal, USB flash, and USB
external drives attached, the sdX designations were different than if only
the two disks I wanted to use were connected. Also, the designations were
different depending on the order of connection. I guess Linux orders them
in order of discovery.

I disconnected all disks (including the Win7 OS disk, not wanting to blow
it up in error) and loaded up Knoppix. I then connected the MAC HD and a
new Western Digital Passport 1TB HD.

MAC HD = sda (source, 500GB)
Passport HD = sdf (destination, 1TB)

The Linux Disks utility told me that the Passport drive was mounted at /media/sdf1

I tried to run ddrescue with a log file, but couldn't get the syntax
correct, or maybe I did, but here's the results:


knoppix@Microknoppix:~$ ddrescue -f /dev/sda /dev/sdf /media/sdf1
ddrescue: Mapfile exists and is not a regular file.
knoppix@Microknoppix:~$ ddrescue -f /dev/sda /dev/sdf
GNU ddrescue 1.22
ipos: 500107 MB, non-trimmed: 0 B, current rate: 0 B/s
opos: 500107 MB, non-scraped: 0 B, average rate: 0 B/s
non-tried: 0 B, bad-sector: 500107 MB, error rate: 17313 kB/s
rescued: 0 B, bad areas: 1, run time: 2h 55m 25s
pct rescued: 0.00%, read errors:984404307, remaining time: n/a
time since last successful read: n/a
Finished
knoppix@Microknoppix:~$


For the first few minutes, there was a line that said something about
Trials 5. I thought this would last till the end of the run, but it
disappeared shortly, so I wasn't able to copy/paste it here.

I then logged on to the Passport drive...no changes.

Boris


The mapfile should be a text file, not a partition.

You transfer a whole-disk-identifier to a whole-disk-identifier,
to do device to device cloning. You got that part right.

/dev/sda /dev/sdf

The third item in the list, should be a text file. Perhaps that's
the map file they refer to.

All I have here at the moment to test with is:

KNOPPIX_V8.1-2017-09-05-EN.iso 4,553,826,304 bytes

Which one do you have ?

The package likely comes from Debian, and you could check
your version of ddrescue as well. Mine is "version 1.22
of GNU ddrescue".

ddrescue -V

I doubt it's highly sensitive to version, but I don't
want to wander too far off target.

And you have the basic form right, except the mapfile.

ddrescue -f /dev/sda /dev/sdf ~/mymapfile.txt

You can use "whoami" to check what account you
are using currently. On Knoppix, this is likely
"root", so sudo is not needed. On Ubuntu, you
would use...

sudo ddrescue -f /dev/sda /dev/sdf ~/mymapfile.txt

**********

ddrescue can copy from a device to a device, a device to an image file,
an image file to a device. In this example (one I tested), I transferred
a device to an image file.

sudo ddrescue -S -b8M /dev/sdb /mount/external/backup/sdb.raw /mount/external/backup/sdb.log

The -S in that example, makes "sdb.raw" a "sparse" file, which
means that sectors full of zeros need not be recorded using
disk surface. Instead, a table of present or missing sectors
is kept, to conserve space. This only makes sense for disks
that are mostly full of zeros to start with. Normal disks
don't have an excess of sectors-full-of-zeros, so this -S
option will not help you.
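
As a side note, the sparse-file behaviour described above is easy to
demonstrate in a few lines of Python. This is just an illustration (the
path is an example, and whether the "hole" stays unallocated depends on
the filesystem):

```python
import os

# Minimal sketch of sparse-file behaviour: a file with a large "apparent"
# size that consumes almost no disk blocks, because nothing but a hole of
# implicit zeros has been written. Path is an example, not from the thread.
path = "/tmp/sparse_demo.raw"

with open(path, "wb") as f:
    f.truncate(100 * 1024 * 1024)   # 100 MB apparent size, no data written

st = os.stat(path)
apparent_size = st.st_size          # what `ls -l` reports
allocated = st.st_blocks * 512      # what the file actually occupies

print(apparent_size, allocated)     # allocated is far smaller (on most fs)
os.remove(path)
```

That gap between apparent size and allocated blocks is exactly what -S
buys you when the source disk is mostly zeros.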

The -b8M on the version I was using, was supposed to set the
transfer size. Yet, the manual page says it sets the sector
size. Which isn't the same thing. A lot of conventional
disks support 512 byte sectors. A "512n" disk is native,
and externally visible and internal representation are
both the same. A 512n disk is "512,512".

A 512e disk (quite common in 2018), is 512 bytes externally
(emulated) for compatibility, while inside the drive, the
storage is done using 4096 sectors. This 512e disk
would be "512,4096". This is not a preferred option for
WinXP, but Vista+ OSes align on 1MB boundaries, so the
4096 sector size aligns with the 1MB boundaries. Most
cluster operations on Vista+ then align with the
internal representation, for maximum efficiency.
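
The alignment claim is easy to check with a little arithmetic:

```python
# Quick check of the alignment claim above: a Vista+ partition offset of
# 1 MiB divides evenly by both a 512-byte and a 4096-byte internal sector,
# while WinXP's classic 63-sector offset does not align on a 512e disk.
ONE_MIB = 1024 * 1024

for internal_sector in (512, 4096):
    print(ONE_MIB % internal_sector)    # 0 for both: aligned

xp_offset = 63 * 512                    # 32256 bytes
print(xp_offset % 4096)                 # non-zero: misaligned on 512e
```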

The third type is not common. A 4Kn disk is "4096,4096"
disk. There are few tools to use with it (partition management
is hopeless). Windows 10 I think, will support a 4K sector,
so Windows 10 could see one. But my recommendation to people,
is to stay far away from these, unless you want your
computer usage with them to be one continuous experiment
and breadcrumb exercise.

So the only part of my command that doesn't make a lot
of sense, is the -b8M. Does it set the maximum transfer
size ? Or does it set a sector size ? The program should
be able to read the information, just the same way that
fdisk does.

sudo fdisk /dev/sdb

If you select "p" for print, then "q" for quit,
there's a line near the beginning that shows the
512n, 512e, 4Kn nature of the drive. The internal and
external sector sizes are shown.
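
On Linux, the same information fdisk prints is exposed through sysfs, so
a short script can list it for every block device without touching the
partition table (a sketch, Linux-only):

```python
# List external (logical) and internal (physical) sector sizes for each
# block device, straight from sysfs. "512 512" = 512n, "512 4096" = 512e,
# "4096 4096" = 4Kn. Requires a Linux system with /sys mounted.
import os

for dev in sorted(os.listdir("/sys/block")):
    base = f"/sys/block/{dev}/queue"
    with open(f"{base}/logical_block_size") as f:
        logical = int(f.read())
    with open(f"{base}/physical_block_size") as f:
        physical = int(f.read())
    print(dev, logical, physical)
```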

In any case, this is the syntax I'd be trying. The -b8M
may speed up the trial speed. The claim I read, is
the transfer size is "adaptive" and the command adjusts
the transfer size per command, to optimize bandwidth.
That's what the -b was supposed to do, according to the
documentation I was reading at one time.

ddrescue -b8M /dev/sda /dev/sdf ~/mymapfile.txt

Some more examples here.

https://unix.stackexchange.com/quest...-a-sparse-file

*******

You showed me a result before with HDTune, which is
consistent with your ddrescue result.

And both results make no sense :-/

There is something *weird* about that Mac disk.

An ATA disk will *not* be visible, if it has internal
troubles. The ATA disk spins up, and loads the heads onto
the platter, using a relatively small firmware loader
on the controller board.

The full ATA command set, is a larger chunk of code in
the Service Area of the platter. Otherwise known as
"Sector -1" because it is in an area that ordinary users
cannot access. Once the SA is loaded, the drive is ready
to accept ATA commands from the SATA bus.

The first command sent by the BIOS, is an "identify yourself".
And the returning of a single packet with "ST3500418..."
tells you that a SATA packet was eventually received
error free from the drive.

And we know enough of the drive works, that HDTune plays with
it in Windows.

But we also know, that an HDTune error scan, returns nothing
but red blocks. How is that possible ? How can the surface
be inaccessible like that, when we know the SA loaded
(a couple megabytes), and the drive came up ?

There is something about this disk, I just don't understand.

The only other breadcrumb I have for you, is certain
Macs years ago, needed the Spread Spectrum jumper
inserted on the back of the drive. Some brands of
drives have four pins. (Note that there can be *several*
four pin blocks, and you have to be careful to pick the
correct one.)

X X X X
Force150 SpreadSpectrum

The Force150 jumper is only needed for VIA chipsets
(something Apple isn't likely to use). The SpreadSpectrum
on the other hand, when you insert that jumper, it defeats
SpreadSpectrum clock modulation on the cable.

The purpose of SpreadSpectrum, is to defeat FCC15 testing.
It spreads the emissions to a slightly broader peak, so
hardware can pass FCC testing. It doesn't mean that the
device interferes any less with other radio equipment.

A very small number of interface chips, cannot track the
triangle wave modulation of SS. Normally, you'd use a PLL
or DLL or training clock, to come up with a scheme so you
can read packets while the clock rate continuously changes.
Some Apple chip could not do this at the time, and drives
connected to that Apple device, needed the SS jumper to
be inserted.

But other than that observation, I don't understand
how the entire surface of the disk is inaccessible,
while the SA reads fine and the drive comes up.
Would some service person have changed the jumper
state, and if so, why ?

Jumpers on those drives are probably 2mm type. There are
two jumper caps - 0.1" jumpers and 2mm jumpers, and one
type doesn't fit the other spacing all that well. I
had to look in my basement, in an old disk enclosure
product box, for a bag of ten 2mm jumpers I had. And
I've been using those over the years for stuff like
this.

I tried running the part number

ST3500418ASQ

only to find this gibberish about a "thermal sensor".

https://forums.macrumors.com/threads...cement.841205/

Some more here. These guys are usually pretty good at this
stuff (they tap into other forums to get the scoop
on "non-standard" crap).

https://blog.macsales.com/10206-furt...e-restrictions

Now, does that have anything electrically to do with the
drive ? Is it just the controller sensor, pinned out
to a header, so the controller can take external
temp readings ?

That should have nothing to do with reading data
from the drive. Whereas Spread Spectrum could.

I can't find that exact part number on the Seagate site.
It could be considered an OEM special just for Apple.

I haven't been able to find any dmesg Linux boot
cycles for that device, giving particulars. And the
Mac boot log is useless.

*******

Note the usage of branded firmware on the controller board. WTF?

http://firmware.hddsurgery.com/?manu...family=Pharaoh

ST3500418ASQ AP24 Pharaoh 5VM59RXH 2016-07-16

ST3500418AS CC37 Pharaoh 6VM66MWD 2016-07-16

The label on my 418AS drive shows it is running CC46, but
you can see the basic idea. That the "Apple OEM" drive
runs "APxx" firmware for some reason (maybe it does head park
or spins slower when idle or something, like a more aggressive
power management scheme).

So far, no evidence any of this affects reading data
from the drive.

I didn't rush this post off to you, because I suspect
we won't be getting any ddrescue data, until we figure
out what else isn't standard about the drive. I can't find
enough discussions in threads on the Internet about the
drive, to figure it out.

Perhaps you could take a picture of your hard drive or
at least examine it for differences to this.

https://s33.postimg.cc/6loshhxxb/ST3500418.gif

Even if the drive was encrypted, it should still read
without CRC errors. Why is the *whole* surface unreadable?
It's not a failure.

Paul
  #33  
Old July 26th 18, 04:18 AM posted to alt.windows7.general
Paul[_32_]

Boris wrote:
Paul wrote in news
Boris wrote:
Paul wrote in news
Boris wrote:
Paul wrote in newsj3h20$g9v$1@dont-email.me:
Huge snip


Well, if you absolutely cannot get the table with smartctl
in Linux, go back to windows and use the Health tab
of HDTune 2.55.

http://www.hdtune.com/files/hdtune_255.exe

While Linux could be mis-interpreting some parameters
in SMART, it's possible the disk itself has some way
of indicating imminent failure after "doing the short
test" on its own. It could be a failure indicated by
short test, rather than an analysis of the SMART
parameter table to reach the same conclusion.

You really need to get working on your ddrescue/gddrescue.
And try and get as much data off the disk as possible.
Perhaps the active surface of the disk is toast, but
you'll discover that when you try. The thing is, the
disk would not have responded, unless the disk heads
loaded and the controller was able to read the
Service Area. So *some* portion of the platters
is readable. But we don't know how much.

Paul

I ran HDTune 2.55 a few weeks ago with the MAC HD USB tethered. The
results were:

https://postimg.cc/gallery/25g6b1tdo/

The Health Tab displayed nothing for the MAC HD.

At the same time, to be sure HDTune was working correctly, I applied
HDTune 2.55 to my Windows OS disk, and it reported ok on all tabs but the
Health Tab, which displayed nothing.

Yesterday, with the MAC HD connected directly to the SATA motherboard
connection, HDTune recognized all drives on my system, except for the MAC
HD.

I'm not going to dick around any longer trying to get S.M.A.R.T. data.

I'm going to see what I can do with imaging the MAC HD to another hard
drive.

It should have worked.

But ddrescue is where you have to go, and quickly.

The details don't matter any more, just the objective matters.
Save the sectors before it is too late.

Paul
So the next step was to try ddrescue. I remembered Frank Slootweg (in
this thread 7/6/2018) warning that:

"sda and sdb dependent on the sequence in which you connect the
original disk and the to-be-copied-to disk?"

I tested this, and discovered this is true, that the device designation is
dependent on order of connection. With internal, USB flash, and USB
external drives attached, the sdX designations were different than if only
the two disks I wanted to use were connected. Also, the designations were
different depending on the order of connection. I guess Linux orders them
in order of discovery.

I disconnected all disks (including the Win7 OS disk, not wanting to blow
it up in error) and loaded up Knoppix. I then connected the MAC HD and a
new Western Digital Passport 1TB HD.

MAC HD = sda (source, 500GB)
Passport HD = sdf (destination, 1TB)

The Linux Disks utility told me that the Passport drive was mounted at /media/sdf1

I tried to run ddrescue with a log file, but couldn't get the syntax
correct, or maybe I did, but here's the results:


knoppix@Microknoppix:~$ ddrescue -f /dev/sda /dev/sdf /media/sdf1
ddrescue: Mapfile exists and is not a regular file.
knoppix@Microknoppix:~$ ddrescue -f /dev/sda /dev/sdf
GNU ddrescue 1.22
ipos: 500107 MB, non-trimmed: 0 B, current rate: 0 B/s
opos: 500107 MB, non-scraped: 0 B, average rate: 0 B/s
non-tried: 0 B, bad-sector: 500107 MB, error rate: 17313 kB/s
rescued: 0 B, bad areas: 1, run time: 2h 55m 25s
pct rescued: 0.00%, read errors:984404307, remaining time: n/a
time since last successful read: n/a
Finished
knoppix@Microknoppix:~$


For the first few minutes, there was a line that said something about
Trials 5. I thought this would last till the end of the run, but it
disappeared shortly, so I wasn't able to copy/paste it here.

I then logged on to the Passport drive...no changes.

Boris

The mapfile should be a text file, not a partition.

You transfer a whole-disk-identifier to a whole-disk-identifier,
to do device to device cloning. You got that part right.

/dev/sda /dev/sdf

The third item in the list, should be a text file. Perhaps that's
the map file they refer to.

All I have here at the moment to test with is:

KNOPPIX_V8.1-2017-09-05-EN.iso 4,553,826,304 bytes

Which one do you have ?


Exactly the same: Knoppix v8.1-2017-09-05, 4,553,826,304 bytes

This is where I found link to Knoppix, and the instructions on using this
particular live dvd:

https://www.data-medics.com/forum/ho...rive-with-bad-sectors-using-ddrescue-t133.html

(I also have live DVD versions of Debian Live 9.4.0 Cinnamon and
Ubuntu-18.04-desktop-amd64, which I've tried, but like Knoppix best.)

The package likely comes from Debian, and you could check
your version of ddrescue as well. Mine is "version 1.22
of GNU ddrescue".

ddrescue -V

I doubt it's highly sensitive to version, but I don't
want to wander too far off target.


Mine is also version 1.22, as shown on the copy/paste of the ddrescue
report.

And you have the basic form right, except the mapfile.


ddrescue -f /dev/sda /dev/sdf ~/mymapfile.txt

You can use "whoami" to check what account you
are using currently. On Knoppix, this is likely
"root", so sudo is not needed. On Ubuntu, you
would use...


I will have to check with "whoami" later, but I have been using sudo,
perhaps unnecessarily. I do remember that Knoppix logs me in
automatically as root.

sudo ddrescue -f /dev/sda /dev/sdf ~/mymapfile.txt

**********

ddrescue can copy from a device to a device, a device to an image file,
an image file to a device. In this example (one I tested), I transferred
a device to an image file.

sudo ddrescue -S -b8M /dev/sdb /mount/external/backup/sdb.raw /mount/external/backup/sdb.log

The -S in that example, makes "sdb.raw" a "sparse" file, which
means that sectors full of zeros need not be recorded using
disk surface. Instead, a table of present or missing sectors
is kept, to conserve space. This only makes sense for disks
that are mostly full of zeros to start with. Normal disks
don't have an excess of sectors-full-of-zeros, so this -S
option will not help you.

The -b8M on the version I was using, was supposed to set the
transfer size. Yet, the manual page says it sets the sector
size. Which isn't the same thing. A lot of conventional
disks support 512 byte sectors. A "512n" disk is native,
and externally visible and internal representation are
both the same. A 512n disk is "512,512".

A 512e disk (quite common in 2018), is 512 bytes externally
(emulated) for compatibility, while inside the drive, the
storage is done using 4096 sectors. This 512e disk
would be "512,4096". This is not a preferred option for
WinXP, but Vista+ OSes align on 1MB boundaries, so the
4096 sector size aligns with the 1MB boundaries. Most
cluster operations on Vista+ then align with the
internal representation, for maximum efficiency.

The third type is not common. A 4Kn disk is "4096,4096"
disk. There are few tools to use with it (partition management
is hopeless). Windows 10 I think, will support a 4K sector,
so Windows 10 could see one. But my recommendation to people,
is to stay far away from these, unless you want your
computer usage with them to be one continuous experiment
and breadcrumb exercise.

So the only part of my command that doesn't make a lot
of sense, is the -b8M. Does it set the maximum transfer
size ? Or does it set a sector size ? The program should
be able to read the information, just the same way that
fdisk does.

sudo fdisk /dev/sdb

If you select "p" for print, then "q" for quit,
there's a line near the beginning that shows the
512n, 512e, 4Kn nature of the drive. The internal and
external sector sizes are shown.

In any case, this is the syntax I'd be trying. The -b8M
may speed up the trial speed. The claim I read, is
the transfer size is "adaptive" and the command adjusts
the transfer size per command, to optimize bandwidth.
That's what the -b was supposed to do, according to the
documentation I was reading at one time.

ddrescue -b8M /dev/sda /dev/sdf ~/mymapfile.txt

Some more examples here.

https://unix.stackexchange.com/quest...-partition-or-hard-drive-to-a-sparse-file

*******

You showed me a result before with HDTune, which is
consistent with your ddrescue result.

And both results make no sense :-/

There is something *weird* about that Mac disk.

An ATA disk will *not* be visible, if it has internal
troubles. The ATA disk spins up, and loads the heads onto
the platter, using a relatively small firmware loader
on the controller board.

The full ATA command set, is a larger chunk of code in
the Service Area of the platter. Otherwise known as
"Sector -1" because it is in an area that ordinary users
cannot access. Once the SA is loaded, the drive is ready
to accept ATA commands from the SATA bus.

The first command sent by the BIOS, is an "identify yourself".
And the returning of a single packet with "ST3500418..."
tells you that a SATA packet was eventually received
error free from the drive.


I see. Didn't know that. So that's why you say something's weird.
And we know enough of the drive works, that HDTune plays with
it in Windows.

But we also know, that an HDTune error scan, returns nothing
but red blocks. How is that possible ? How can the surface
be inaccessible like that, when we know the SA loaded
(a couple megabytes), and the drive came up ?


So the SA area on the HD was supposedly scanned by HDTune, but returned as
a red block?

There is something about this disk, I just don't understand.

The only other breadcrumb I have for you, is certain
Macs years ago, needed the Spread Spectrum jumper
inserted on the back of the drive. Some brands of
drives have four pins. (Note that there can be *several*
four pin blocks, and you have to be careful to pick the
correct one.)

X X X X
Force150 SpreadSpectrum

The Force150 jumper is only needed for VIA chipsets
(something Apple isn't likely to use). The SpreadSpectrum
on the other hand, when you insert that jumper, it defeats
SpreadSpectrum clock modulation on the cable.

The purpose of SpreadSpectrum, is to defeat FCC15 testing.
It spreads the emissions to a slightly broader peak, so
hardware can pass FCC testing. It doesn't mean that the
device interferes any less with other radio equipment.

A very small number of interface chips, cannot track the
triangle wave modulation of SS. Normally, you'd use a PLL
or DLL or training clock, to come up with a scheme so you
can read packets while the clock rate continuously changes.
Some Apple chip could not do this at the time, and drives
connected to that Apple device, needed the SS jumper to
be inserted.

But other than that observation, I don't understand
how the entire surface of the disk is inaccessible,
while the SA reads fine and the drive comes up.
Would some service person have changed the jumper
state, and if so, why ?

Jumpers on those drives are probably 2mm type. There are
two jumper caps - 0.1" jumpers and 2mm jumpers, and one
type doesn't fit the other spacing all that well. I
had to look in my basement, in an old disk enclosure
product box, for a bag of ten 2mm jumpers I had. And
I've been using those over the years for stuff like
this.

I tried running the part number

ST3500418ASQ

only to find this gibberish about a "thermal sensor".

https://forums.macrumors.com/threads...cement.841205/

Some more here. These guys are usually pretty good at this
stuff (they tap into other forums to get the scoop
on "non-standard" crap).

https://blog.macsales.com/10206-furt...les-imac-2011-model-hard-drive-restrictions

Now, does that have anything electrically to do with the
drive ? Is it just the controller sensor, pinned out
to a header, so the controller can take external
temp readings ?

That should have nothing to do with reading data
from the drive. Whereas Spread Spectrum could.

I can't find that exact part number on the Seagate site.
It could be considered an OEM special just for Apple.

I haven't been able to find any dmesg Linux boot
cycles for that device, giving particulars. And the
Mac boot log is useless.

*******

Note the usage of branded firmware on the controller board. WTF?

http://firmware.hddsurgery.com/?manu...family=Pharaoh

ST3500418ASQ AP24 Pharaoh 5VM59RXH 2016-07-16

ST3500418AS CC37 Pharaoh 6VM66MWD 2016-07-16

The label on my 418AS drive shows it is running CC46, but
you can see the basic idea. That the "Apple OEM" drive
runs "APxx" firmware for some reason (maybe it does head park
or spins slower when idle or something, like a more aggressive
power management scheme).

So far, no evidence any of this affects reading data
from the drive.

I didn't rush this post off to you, because I suspect
we won't be getting any ddrescue data, until we figure
out what else isn't standard about the drive. I can't find
enough discussions in threads on the Internet about the
drive, to figure it out.

Perhaps you could take a picture of your hard drive or
at least examine it for differences to this.

https://s33.postimg.cc/6loshhxxb/ST3500418.gif

Even if the drive was encrypted, it should still read
without CRC errors. Why is the *whole* surface unreadable?
It's not a failure.

Paul


Here's a pic of the drive and its pins. The drive is an OEM made for
Apple, and there is a jumper block, but there is no jumper.

https://postimg.cc/gallery/3h3j12g8s/


I don't see any thermistor or connector with six pins (two
of which are wired) in the picture. But the drive is
running Apple firmware. Which might be nothing more
than aggressive spindown code (i.e. most of the ROM
is likely to be the same code as retail 418AS drives).

In the manual here for "Seatools For Windows", you could
try the "Short Drive Self Check". Don't use the "Fix All",
as that appears to be an attempt to get reallocations
going, and that's not going to work. You can get
"Seatools For Windows" from the Seagate site.

https://www.seagate.com/files/www-co...dows-en-us.pdf

If you press F8, apparently it offers the "Long Drive Self Check"
which is not in that picture. An older version of Seatools
(the one I have here), just put the short and long entries
in the menu, without needing to press a function key.

It's almost like your drive has had a data structure corruption.
The Service Area not only holds firmware which is loaded
into RAM, but it also holds data structures. Included
in those data structures, is the reallocation table.

But in the past, a typical situation is, when the data structure
thing is broken, the drive just "bricks". Normally, any
kind of problem with data structure, makes toast out
of the drive.

Other than that, I'm at a loss to explain how the
drive is able to read the Service Area, but cannot
read any other sector. It implies a firmware problem
of some sort. A Google search isn't picking up a
trend on that drive. There have been products
in the past, with a very strong theme, such as
failure exactly 30 days after the user bought
the drive.

The ATA standard allows downloading of drive firmware
into drive RAM (for usage during the current session).
A second command allows "committing" a firmware
download to the Service Area, making the firmware
permanent (equivalent to flashing). The drive
has a bootstrap routine stored in onboard flash.
The majority of the code (the part that parses ATA
commands and decides what to do), is stored in the
Service Area firmware store.

Your drive doesn't seem to have a hardware failure.
Something happens to it, after reading the Service
Area and it starts running the regular code to accept
input commands.

I'm hoping that when you run the SMART short test,
you'll get additional information. Since the main
code body is running at that time though, the SMART
might end up seeing all the CRC errors that the OS
sees.

Summary: Likely needs a Data Recovery company, unless
an identified issue with the drive is recorded
somewhere in Google. And I only see "normal"
failures for the drive at the moment. Mine,
for example, has a high reallocation count (400+).
Opening the HDA may not be necessary to fix
this, but blowing away the reallocation table
will "shred" the data (some files could end up
with sectors that don't belong there).

That drive model, varies from heaven to hell :-)
This is another one of my 418AS drives, which runs
continuously on this computer. This is probably the
second-longest-lived drive I've ever dealt with. The
record holder was 60,000 hours (a drive at work).
Considering the field statistics for this drive
model, how is this possible ???

https://s33.postimg.cc/mmiax893z/my_...t3500418as.gif

Paul
  #34  
Old July 27th 18, 01:38 AM posted to alt.windows7.general
Paul[_32_]

Boris wrote:

I finally got the command correct for Knoppix v8.1-2017-09-05, running
ddrescue v1.22. It seems that Knoppix will only write the mapfile (log) to
its root, and it must be named 'mapfile', regardless of what the
instructions show. Here's what worked, and the results:

knoppix@Microknoppix:~$ sudo ddrescue -f /dev/sdb /dev/sdg mapfile
GNU ddrescue 1.22
ipos: 500107 MB, non-trimmed: 0 B, current rate: 0 B/s
opos: 500107 MB, non-scraped: 0 B, average rate: 55291 B/s
non-tried: 0 B, bad-sector: 499533 MB, error rate: 39174 kB/s
rescued: 574750 kB, bad areas: 1, run time: 2h 53m 14s
pct rescued: 0.11%, read errors:983272977, remaining time: n/a
time since last successful read: 2h 52m 45s
Finished
knoppix@Microknoppix:~$

Ironically, I forgot to record the mapfile. But, I'm not going to run the
scan again, at least not before posting this; it takes almost three hours
to run.


Naughty :-)

You *need* that mapfile, in case multiple runs are needed to
get more of the sectors. That's what the mapfile is for, to
maintain continuity from one run to the next. The mapfile is
updated on each run, with more good news.

Now, I was always pretty bad at maths. What exactly are
those results telling us ? That's yet another reason
why you need the mapfile, as it contains additional
information that can be compared to the above. Your
rescued data is 0.574GB of 500GB. You may have
spent 3 hours, but you got 0.11% of the data. That
means it would take roughly 3000 hours to get the whole
disk back. I want the enormity of that to sink in. But
maybe I'm bad at maths.
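The back-of-the-envelope extrapolation is easy to check with a few lines of arithmetic; here is a sketch using the figures from the ddrescue summary quoted above (the exact calculation lands near 2500 hours, the same ballpark):

```python
# Sanity-check the recovery-rate extrapolation from the ddrescue summary.
disk_bytes = 500_107_862_016     # 500 GB drive
rescued_bytes = 574_750_000      # "rescued: 574750 kB"
run_hours = 2 + 53/60 + 14/3600  # "run time: 2h 53m 14s"

fraction = rescued_bytes / disk_bytes
print(f"fraction rescued: {fraction:.2%}")       # about 0.11%

# At this rate, hours needed for the whole disk:
total_hours = run_hours / fraction
print(f"projected total: {total_hours:.0f} hours")
```

Of course this assumes the rest of the disk reads at the same (dismal) rate, which is exactly the assumption the next paragraph questions.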

The mapfile in such a case could be huge, if the recovered
data was a sector here and a sector there. It would chop
the disk up into many sections, which makes the mapfile larger.
If the condition of a bunch of sectors next to one
another is the same, the mapfile can contain a notation
that says how big the chunk is, whether it's good or
bad. The developers put some effort into making the
notation efficient.
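For the curious, a mapfile is just a small text file of (position, size, status) triples. This is roughly what one looks like — the offsets below are invented for illustration, not taken from Boris's run:

```
# Mapfile. Created by GNU ddrescue version 1.22
# current_pos  current_status
0x00000000     ?
#      pos        size  status
0x00000000  0x00010000  +
0x00010000  0x00000200  -
0x00010200  0x746FFE00  ?
```

Here '+' marks finished (rescued) areas, '-' bad sectors, and '?' non-tried areas; '*' and '/' mark the intermediate non-trimmed and non-scraped states.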

At the current rate, it will take 3000 hours to complete.
But of course we know that's not true. Some portions of that
data are never coming back. And what is happening,
is you're spending 15 seconds on each unreadable
sector. (That's a typical timeout value, allowing
a ton of attempts to read the sector, by the
hard drive controller.) If you had a WDC RE Raid drive
with TLER (time limited error recovery), the timeout
is 5 to 7 seconds or so, somewhere in that ballpark.
(Only the RE drives have TLER.)

Now, back to work :-)

What you want to do is this:

1) Do one run.
2) Take a snapshot of the amount recovered (0.11%
in this case), plus keep a copy of the mapfile for
reference.
3) Look up the manual page for how you're supposed
to run subsequent commands. You can try a reasonable
retry count, like one retry maybe, which changes the
time per sector from 15 seconds to 30 seconds.

ddrescue -r 1 -f /dev/sdb /dev/sdg mapfile

4) Do your second (-r 1) run. This will update the mapfile
and the amount recovered.
5) If the trend looks good, if the second run with
a bit of a retry count recovered a lot more data
than the first run, you can then consider whether
to throw in the towel at that point, or continue
on. Each run allows you to refine your extrapolation
of the time to completion.

The fact that 0.11% came back also tells you that,
for some reason, the surface is a mess. I was willing to accept
some kind of crazy firmware problem was causing 0%
to come back. But instead, it looks like a crazy
surface problem is allowing 0.11% to come back.

I *don't* understand why that disk isn't bricked!!!

How can that high a percentage of the drive be bad,
and the Service Area be intact ? It still makes no sense.

The manual page doesn't seem to define the terms it
uses (trimmed, scraped).

*******

The /dev/sdg drive should really be zeroed before
you begin. But since you've recovered so little
data on your first run, the following isn't
a priority.

In Windows (administrator command prompt), you'd
want to zero the /dev/sdg disk. You have to figure
out which Windows disk number that is, before
selecting a disk number.

diskpart
list disk
select disk 2
clean all
exit

In Linux, you could try this for your 500107862016
byte drive. The Block Size parameter should be
a multiple of 512 bytes, and 221184 is 432 sectors.

dd if=/dev/zero of=/dev/sdg bs=221184 count=2261049

I have a ST3500418AS running online on this machine,
which is why I can get the number. To get the
number for yourself, in Linux, you can use...

fdisk /dev/sdg
p
q

which should print the exact size of the disk in bytes.

If you weren't on Knoppix, you might need
"sudo fdisk /dev/sdg" as the command needs root
permissions.
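The dd numbers above do check out arithmetically; here is a quick sanity check that the block size is a whole number of sectors and that bs times count covers the drive exactly:

```python
# Verify the dd parameters for the 500107862016-byte ST3500418AS.
disk_bytes = 500_107_862_016
bs = 221_184          # block size passed to dd
count = 2_261_049     # number of blocks

assert bs % 512 == 0             # a whole number of sectors
assert bs // 512 == 432          # 432 sectors per block, as stated
assert bs * count == disk_bytes  # covers the drive exactly, no remainder
print("dd parameters check out")
```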

Once the disk is zeroed, you can do your first ddrescue
run over again, and build a new mapfile.

You don't have to zero the output drive if you
don't want to. It just makes things easier later:
if you're skimming through /dev/sdg with the HxD
hex editor, you'll see zeros wherever ddrescue
hasn't yet been able to read the source disk
and copy data over them. The zeros are
a marker in a sense.
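If the destination was zeroed first, the still-unrecovered stretches can even be located programmatically afterwards. A rough sketch — reading an image file rather than /dev/sdg directly, and treating any all-zero 512-byte sector as "not yet recovered" (which can false-positive on sectors that were legitimately all zeros):

```python
def zero_sectors(path, sector=512):
    """Yield byte offsets of all-zero sectors in an image file."""
    blank = bytes(sector)
    with open(path, "rb") as f:
        offset = 0
        while True:
            chunk = f.read(sector)
            if not chunk:
                break
            if chunk == blank:
                yield offset
            offset += sector

# Example against a small test image: sector 1 of 3 left as zeros.
with open("test.img", "wb") as f:
    f.write(b"\xab" * 512 + bytes(512) + b"\xcd" * 512)
print(list(zero_sectors("test.img")))   # -> [512]
```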

I think after two runs, you'll be in a good position
to give your 3000 hour runtime estimate to get the
whole thing :-/

I don't know what a data recovery company could do
for a disk like this. The heads seem to be intact.
The SA works. It's rotating. The heads load.
How can you fix the surface ??? Dunno.

Paul
  #35  
Old July 27th 18, 09:04 AM posted to alt.windows7.general
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default Reading Apple Files with a Windows Machine?

Boris wrote:

I tried both SeaTools for Windows, and SeaTools for DOS. Both reported
failures.

SeaTools for Windows:
The Short Test kept failing at 14% to 21% while performing 'random reads'.

(I tested my Win7 OS drive just to be sure SeaTools was working, and the OS
disk passed all Short Test processes 100%.)


SeaTools for DOS:
I burned an .iso and booted SeaTools for DOS. Here's the results (similar to
the Windows version):
https://imgur.com/a/yLHf0Ca


You can practice your ddrescue arts, now that
you have a worthy candidate in front of you.

We know it gets 0.11% of the drive on the first pass.

It would be interesting to run a second pass with
one retry (-r 1) and see what total percentage of the
drive is completed after that.

My guess is, you're not getting any files back.

If the 0.11% of the drive was contiguous sectors,
you could use Photorec or Recuva or scavengers
of that nature. You would want the /dev/sdg copy drive
to be zeroed out first, before ddrescue, if you
planned on running Photorec afterwards
against the copy of the drive. Photorec would scan
the 99.89% empty part of the sdg drive a lot
faster if the unrecovered sectors contain zeros. That's
one reason for cleaning /dev/sdg before using
it. Either diskpart in Windows and "clean all"
or dd from /dev/zero onto /dev/sdg in Linux
to clean it.

Say about 3 hours a step, 12 hours total.

dd if=/dev/zero of=/dev/sdg bs=221184 count=2261049
ddrescue -f /dev/sdb /dev/sdg mapfile
ddrescue -r 1 -f /dev/sdb /dev/sdg mapfile
photorec [on /dev/sdg]

Then review the results and see if you made
it to even 0.22% after the "-r 1" run.

You're practicing your arts, for the day that
you have a drive that's only missing a track or two...
You don't normally get drives this sick, to
practice on. I have *no* drives with the right
level of sickness here, to do ddrescue testing.
Any drive here that was sick, is dead now. That's
why your drive is so amazing. How can a 99.89%
dead drive still spin ?

Do you hear a lot of "clunking" or "clicking" coming
from the drive during the ddrescue run ? This
would perhaps imply loss of embedded servo.
Again, how can the servo wedges on each track
be working, when the data is so messed up ?
Bad controller board ? The servo wedges presumably
have enough info to tell the drive it's on the right
cylinder. If the platter surface was rusting,
it should take out the servo areas on each platter
too, and the drive should "click like crazy" because
it can't see where the heads have gone.

Paul
 






Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 PCbanter.
The comments are property of their posters.