USB thumb drives.



 
 
  #76  
Old May 22nd 18, 02:05 PM posted to alt.comp.os.windows-10
wasbit[_4_]
external usenet poster
 
Posts: 229
Default USB thumb drives.

"Paul" wrote in message
news

I would be interested in the brand and model number
of this mythically large ($13.50) storage device. Was
the brand Godzilla or Mothra? Did it come
from the ocean? Was it angry?

Paul


It's yours if you want it. Email me your address.

--
Regards
wasbit

  #77  
Old May 22nd 18, 03:49 PM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
external usenet poster
 
Posts: 131
Default USB thumb drives.

On Tue, 22 May 2018 08:20:46 +0100, Lucifer Morningstar wrote:

On Mon, 21 May 2018 19:30:36 +0100, "Jimmy Wilkinson Knife"
wrote:

On Mon, 21 May 2018 13:07:45 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:

Maybe that's why mine are slowing down - more cells are becoming weak?

The Toolkit software that comes with the drive,
should be able to provide statistics for you.
Like, how many blocks were spared out. If a block
cannot remember what you write to it, the drive
may decide to spare it out and replace it, just
like a hard drive would. This is automatic sparing
just like in the ATA designs.


I would have looked at the SMART data, but they're in a mirror array managed by the motherboard's hard disk controller, which I thought blocked that information.
But apparently not on this controller.
Not entirely sure how to interpret all this, but:

First SSD:
Raw Read error rate, value 100, worst 100, warn 0, raw 000000000005
Reallocated Sector count, value 100, worst 100, warn 0, raw 000000000000
Data Address Mark errors, value 23, worst 23, warn 0, raw 00000000004D

The others are either raw 000000000000 or marked as unimportant by the program SpeedFan, so I didn't type them in (it won't let me copy and paste).

Second SSD:
Raw Read error rate, value 100, worst 100, warn 0, raw 00000000000D
Reallocated Sector count, value 100, worst 100, warn 0, raw 000000000002
Data Address Mark errors, value 18, worst 18, warn 0, raw 000000000052

SpeedFan reports (on the quick test) 100% performance, but 0% fitness?! I think the 0% may be it not reading the SMART correctly through the RAID controller, or it doesn't have a clue about some stats, as there are no warnings apart from a red ! on an "unknown parameter".

The mark errors concern me, I'm assuming they also started at 100 and 0 means imminent failure. So 18 means it's 82% worn out?
The Acronis website says "Although degradation of this parameter can be an indicator of drive ageing and/or potential electromechanical problems, it does not directly indicate imminent drive failure."
Huh? If it's 82% aged, surely that's something to indicate failure soon?
I can't find anything certain on Google about "data address mark errors".
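
If you want the same numbers without SpeedFan, here is a minimal sketch using smartmontools' smartctl (assumptions: smartmontools is installed, the controller passes SMART through, and /dev/sda is an illustrative device path):

# Sketch: list normalized vs. raw SMART attributes via smartctl
# (smartmontools). Assumes smartctl is on PATH and the RAID
# controller passes SMART through; /dev/sda is illustrative.
import subprocess

out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    fields = line.split()
    # Attribute rows begin with a numeric ID, e.g.
    # "  5 Reallocated_Sector_Ct ... 100 100 010 ... 0"
    if fields and fields[0].isdigit():
        attr_id, name = fields[0], fields[1]
        value, worst = fields[3], fields[4]
        raw = fields[-1]
        print(f"{attr_id:>3} {name:<26} value={value} worst={worst} raw={raw}")

The "value" column is the vendor-normalized health number (usually counting down from 100 or 200); "raw" is the vendor-specific counter behind it.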

The kind of "weak" I'm referring to, is not permanently
damaged sectors. It's sectors that the charge is
draining off the floating gates in a matter of
a few months, rather than the ten years we would
normally expect. This was causing the read rate
on "data at rest" to drop. So if you wrote a backup
today on the device, it might read at 300MB/sec. If
two months from now, you tried to read the same big file
again, it would be reading at 180MB/sec. And it
does that, because the charge draining off the cells
corrupts them, and the powerful error correcting code
needs time to do the corrections to multiple bits
in the sector. The data is still officially "intact"
and error free, in that the error corrector isn't exhausted.

They "fixed" this in a firmware update, by having the
drive re-write the cells after three months (equals
degraded wear life and shortens the life of the drive).


As long as someone doesn't try to use it as long-term storage and leave it unplugged for 6 months. Or does the data stay put if it's switched off?

On TLC, around 10% of storage is used for ECC bits, and
when QLC comes out, this is expected to grow.

At some point, adding ECC will affect storage capacity
sufficiently that we will have hit a wall on "extending the
number of bits stored in one cell". For example, if
you needed as many ECC bits as data stored, yes, you
doubled the capacity by going from QLC to the next thing,
but you cut the capacity in half by the need to use more ECC.
They can't keep increasing the bits per cell before
it bites them on the ass. The write cycle rating is dropping
with each generation too. Flash is becoming the equivalent
of silicon toilet paper.
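
Paul's capacity argument boils down to two lines of arithmetic. A sketch, normalizing TLC raw capacity to 1; the 10% ECC figure is from the post above, the 50% case is his hypothetical:

# Effective capacity once a fraction of raw bits is reserved for ECC.
# Illustrative figures only, not vendor data.
def effective_capacity(raw_bits, ecc_fraction):
    """User-visible capacity after ECC overhead."""
    return raw_bits * (1 - ecc_fraction)

print(effective_capacity(1.0, 0.10))  # TLC with ~10% ECC -> 0.90
print(effective_capacity(2.0, 0.50))  # doubled density, but ECC bits
                                      # equal to data bits -> 1.00

Doubling the bits per cell while needing one ECC bit per data bit nets only about 11% over TLC - the wall Paul is describing.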

In fact, doing some math the other day, I figured out
it was costing me $1 to write a Flash drive from
one end to the other.


That can't be right. Are you claiming a $100 drive can only be written completely 100 times?
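
Both statements can be true at once; it depends on the drive's endurance rating. A sketch with made-up numbers (price, capacity and TBW are illustrative, not any particular drive):

# Cost per full-drive write = price / rated full writes, where
# rated full writes = TBW endurance / capacity. Made-up figures.
price_usd = 100.0
capacity_tb = 1.0
rated_tbw = 100.0   # warranty endurance, terabytes written

full_writes = rated_tbw / capacity_tb          # 100 end-to-end passes
cost_per_full_write = price_usd / full_writes  # $1.00
print(full_writes, cost_per_full_write)

So a $100, 1TB drive rated for 100 TBW is indeed rated for only ~100 end-to-end writes at $1 each - though TBW is a warranty floor, not the point where the flash actually dies.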

There is a tangible wearout
on the highest density devices. And it's beginning
to equate to dollars. When I use a hard drive on the
other hand, I don't have such a notion. It's been
a long time since I lost a hard drive. I've got
a few that have bad SMART, but "they're not dead yet".
Some of the flaky ones have been going for an extra
five years after retirement (now used as scratch drives).


Actually I've had terrible trouble with hard drives but never ever had a single SSD fail, apart from OCZ **** that I very quickly stopped using.

The number of hard drives that either overheated or just started clicking.

There are just a few flash drives, that are huge and
the interface happens to be slow. There's a 30TB one,
you can continuously write it at the full rate, and it
is guaranteed to pass the warranty period :-)
So that would be an example of a drive, where
a lab accident can't destroy it. Because it
can handle the wear life of writing continuously
at its full speed (of maybe 300 to 400MB/sec).
If the 30TB drive was NVMe format, and ran at
2500MB/sec, it might not be able to brag about
supporting continuous write for the entire warranty
period. You might have to stop writing it once in
a while :-)


That would be a ****ing busy server to write that much data. And if you had such a server, you'd most likely need way more storage space, so each drive wouldn't be in continuous use.
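
For scale, a rough calculation of what "writing continuously for the warranty period" implies, assuming a 5-year warranty and the 400MB/sec figure above:

# Data written by continuous full-rate writes over an assumed
# 5-year warranty at 400 MB/s. Illustrative figures.
rate_mb_s = 400
seconds = 5 * 365 * 24 * 3600
total_tb = rate_mb_s * seconds / 1e6          # ~63,000 TB written
full_passes = total_tb / 30                   # per 30TB drive
print(f"{total_tb:.0f} TB written, ~{full_passes:.0f} full passes")

About 63 PB, or roughly 2,100 end-to-end passes of the 30TB drive - modest per-cell wear, which is why a big, slow drive can make that guarantee while a fast NVMe one could not.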


SSDs are not used in servers due to their unreliability.


They are MORE reliable than rotating rust disks.

If they're not used in servers, it's because they can't stand the higher amount of writing. And that depends on what you're doing with the server. If I made a server where I wanted fast disk access, but it wasn't written to in huge quantities, I'd use SSDs.

--
While the Swiss Army Knife has been popular for years, the Swiss Navy Knife has remained largely unheralded.
Its single blade functions as a tiny canoe paddle.
  #79  
Old May 22nd 18, 03:49 PM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
external usenet poster
 
Posts: 131
Default USB thumb drives.

On Tue, 22 May 2018 08:51:47 +0100, default wrote:

On Tue, 22 May 2018 17:20:46 +1000, Lucifer Morningstar
wrote:

[snip]


SSDs are not used in servers due to their unreliability.


I was reading about a device in the lab stages that may turn digital
storage on its end. (if one chooses to believe corporate hype) It is
a crystal that's supposed to have the ability to store many petabytes
of data. The downside is that it can only be written to one time -
but the storage is so vast/fast/cheap in so small a space that it
could conceivably replace mechanical hard drives, by just opening up
another portion of the device and forgetting what is written in the
sectors you don't need anymore.


So secure erasing would be difficult, unless you smashed the crystals.

--
Condoms aren't completely safe. A friend of mine was wearing one and got hit by a bus.
  #81  
Old May 22nd 18, 05:46 PM posted to alt.comp.os.windows-10
Doomsdrzej
external usenet poster
 
Posts: 113
Default USB thumb drives.

On Mon, 21 May 2018 17:23:41 -0300, pjp
wrote:

In article , lid
says...

On Thu, 17 May 2018 07:49:55 +1000, Peter Jason wrote:

Do these thumb drives last forever, or should
their contents be transferred to the latest USB
drives?


I can only speak from personal experience. I've got several thumb
drives, all from SanDisk.

- 512MB Cruzer Micro
- 4GB Cruzer Titanium
- 16GB Cruzer Contour
- 64GB Extreme (USB 3.0)
- 128GB Extreme Pro (USB 3.0)

I ordered the 64GB and 128GB drives from a shop in Hong Kong (through
eBay).

The 64GB drive is used /all the time/. I transfer MKV files to it, watch
them on my BluRay disc player and then I delete them. I've been doing
that for years now. Never an error. On a rare occasion, the MKV freezes
when playing, but that could also be caused by the file itself or the
disc player. Rebooting the disc player always solves the problem.

I once bought a (supposedly) 4GB flash drive from DaneElec. Never got it
to work properly and it certainly wasn't 4GB.


Bought 5 Duracell-branded 64GB flash drives. Three of them have gone into
"write protected" mode and can't do anything but read the corrupted data
(some of it) that you can view a listing for.


Duracell has no business producing USB drives.
  #82  
Old May 22nd 18, 06:06 PM posted to alt.comp.os.windows-10
default[_2_]
external usenet poster
 
Posts: 201
Default USB thumb drives.

On Tue, 22 May 2018 15:49:37 +0100, "Jimmy Wilkinson Knife"
wrote:

[snip]
SSDs are not used in servers due to their unreliability.


I was reading about a device in the lab stages that may turn digital
storage on its end. (if one chooses to believe corporate hype) It is
a crystal that's supposed to have the ability to store many petabytes
of data. The downside is that it can only be written to one time -
but the storage is so vast/fast/cheap in so small a space that it
could conceivably replace mechanical hard drives, by just opening up
another portion of the device and forgetting what is written in the
sectors you don't need anymore.


So secure erasing would be difficult, unless you smashed the crystals.


Yup. Physical destruction is the only recourse for security; but
you'd never have to back up data, so it is a mixed blessing.
  #83  
Old May 22nd 18, 06:32 PM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
external usenet poster
 
Posts: 131
Default USB thumb drives.

On Mon, 21 May 2018 22:26:55 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
On Mon, 21 May 2018 13:07:45 +0100, Paul wrote:


They "fixed" this in a firmware update, by having the
drive re-write the cells after three months (equals
degraded wear life and shortens the life of the drive).


As long as someone doesn't try to use it as long term storage and
doesn't plug it in for 6 months. Or does it stay put if switched off?


The leaking on that device, was independent of powered state.
The idea is, all the cells leak. But the sectors that are
in usage, and are "data at rest", they are slowly degrading
with time, and requiring more microseconds of error correction
by the ARM processor, per sector.


At what age does the data become unreadable if the drive has not been powered up?

In fact, doing some math the other day, I figured out
it was costing me $1 to write a Flash drive from
one end to the other.


That can't be right. Are you claiming a $100 drive can only be written
completely 100 times?


That was the figure for the drive I bought.


I can't say I've ever looked into it, as for me USB flash drives are only used occasionally to transfer data when there's a problem. Otherwise I use the internet, the local network, etc.

Actually I've had terrible trouble with hard drives but never ever had a
single SSD fail, apart from OCZ **** that I very quickly stopped using.

The number of hard drives that either overheated or just started clicking.


I lost a couple Maxtor 40GB, which went south very quickly.
(From clicking to dead, takes a single day.)


Remember Conner drives? Oh god....

I lost a Seagate 32550N 2GB, when the head lock jammed
at startup, the arm tried to move anyway, and it ground
the heads into the platter like a cigarette butt. And
the most wonderful "clock spring" noise came out of
the drive. They don't make head locks like that any more
(huge solenoid, looked out of place in the drive). There
was a gouge in the platter.


I've never had a Seagate fail, but many many Western Digitals have, mainly their black edition which loved to overheat.

There are just a few flash drives, that are huge and
the interface happens to be slow. There's a 30TB one,
you can continuously write it at the full rate, and it
is guaranteed to pass the warranty period :-)
So that would be an example of a drive, where
a lab accident can't destroy it. Because it
can handle the wear life of writing continuously
at its full speed (of maybe 300 to 400MB/sec).
If the 30TB drive was NVMe format, and ran at
2500MB/sec, it might not be able to brag about
supporting continuous write for the entire warranty
period. You might have to stop writing it once in
a while :-)


That would be a ****ing busy server to write that much data. And if you
had such a server, you'd most likely need way more storage space, so
each drive wouldn't be in continuous use.


I think that 30TB drive is a wonderful drive from a
"cannot be abused" perspective. And I think it follows
a 5.25" form factor too, and holds 30TB. It's chock full
of chips. The average user isn't going to like the
speed though. Too many people have been spoiled by
NVMe speeds.


I'm resisting NVMe for my desktops; although they would be nice, they're ****ing expensive. I'd rather spend that money on making more of the storage SATA3 SSD instead of rotary drives.

I just ran the free software "Crystaldiskmark" on my SSDs (mirror) and HDDs (mirror). I get the correct read speed (about double the manufacturer rating) for the SSD mirror. But the write speed was a tenth of what it should be! On the HDDs (the infamous slow ST3000DM001 drives), I get a tenth of the correct speed both reading and writing.

*******

Back to your SMART table for a moment...

Apparently the SMART table definitions overlap. Obviously,
an SSD doesn't have a "data address mark". And while a HDD
does have a notion of total terabytes written and a gross
notion of wear life, it isn't measured as such. I don't think
any HDD has a place to put that info in its SMART table.
The info is undoubtedly inside the drive somewhere, just
not something you'd find in HDD SMART.

202 Percentage Of The Rated Lifetime Used in your SSD === SSD Param
202 Data Address Mark Errors === HDD Param

If your SMART tool is an older one, it will use the older
definition. HDTune 2.55 (free version, now ten years old),
doesn't know anything about SSDs. This is why I recommended
the usage of the SSD Toolbox software, which may be available
on your SSD manufacturer site. The SSD Toolbox should be using
an SSD SMART table definition.
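
A small illustration of the ID-202 overlap Paul describes, plus a note on the raw field (the hex reading below is an assumption worth checking against the vendor's toolbox):

# The same SMART attribute ID means different things on different
# device classes, which is why an HDD-era tool mislabels SSD data.
ATTR_202 = {
    "hdd": "Data Address Mark Errors",
    "ssd": "Percentage Of The Rated Lifetime Used",
}

# SMART raw fields are commonly displayed in hex. If the raw
# 000000000052 above is hex, it is 82 decimal - which matches the
# normalized value counting down from 100 (100 - 18 = 82), i.e.
# ~82% of rated life used rather than 52%.
print(int("52", 16))  # 82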


The official Crucial software to analyze my SSDs can't see through my RAID controller, so I can only see what SpeedFan is telling me.

Data Address Mark errors, value 18, worst 18, warn 0, raw 000000000052

Consult the Toolkit for that SSD, and verify the Lifetime used
is 52%. That means roughly half the wear life is exhausted
(which is independent of how many sectors are spared out).


I don't know where to find the information on Crucial's mark errors.

There is one brand where that parameter is very dangerous.
If you have an Intel drive, it stops responding when
the drive is worn out, as measured by Flash cell write
cycles. Other brands continue to run. In one case,
a drive was able (during a lifetime test) to exceed the
Health value by many times, before the sparing eventually
exhausted the spares pool. When the cells wear out, more
sectors will need to be spared, so the sparing rate
at some point will accelerate. Sometimes it might be
a power failure, while in that state (lots of sparing),
that results in the drive being killed and no longer
responding. There might actually be some spares
left when one of those "way over the top" SSDs dies
on you.


It's a mirrored array, and the drives have different SMART data, even though they're identical and were installed together, so one should fail well before the other and prevent a problem.

But the Intel response is a "no mercy" response. Intel
wants you to back up your Intel SSD every day, so that
you can "laugh" when your SSD bricks itself. The "nice"
thing about such a behavior is that you can't
even check the SMART table to see what happened :-/
Some drives signal their displeasure by reading but
not writing, and by remaining in a readable
state, it's up to the user whether they actually
"trust" any recovered data. The ECC should be able
to indicate whether sectors are irrecoverably bad or
not, so reading in such a state really shouldn't
be a problem.

But the Intel policy sucks, especially when the
typical "I could care less" class of consumer isn't aware
what their policy is on Health. I've only caught hints
of this, in some SSD reviews.


Are you seriously saying Intel drives just stop working on purpose?! I would have thought they'd get sued for that kind of behaviour. Everyone makes jokes about the Japanese chip invented to time when your warranty had run out so it could destroy your hi-fi, but surely nobody really does this?

*******

A great series of articles, were the ones where they kept
writing to a series of drives, until they had all failed.
The article here also mentions in passing, what some of
the end of life policies are. It's possible the
Corsair Neutron in this article, was the MLC version,
while the one I bought was suspected to have TLC (as it
disappeared from the market for several months and
then "magically reappeared").

https://techreport.com/review/27909/...heyre-all-dead

The TLC drive with the bad "data at rest" behavior, that
might have been a Samsung.

There's nothing wrong with charge draining off the cells,
as long as the engineering is there to include an ECC
method that ensures readable data for ten years after
the write operation. The issue wasn't a failure as such,
since the data was still perfectly readable - it was
the fact the drive was slow that ****ed people off. When
these companies use the newest generation of "bad" flash,
it's up to them to overprovision enough so the
user doesn't notice what a crock they've become.
You see, they're getting ready to release QLC,
which is one bit more per cell than TLC. The TLC
was bad enough. What adventures will QLC bring?


--
What's meaner than a pit bull with AIDS?
The guy that gave it to him.
  #84  
Old May 22nd 18, 06:52 PM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
external usenet poster
 
Posts: 131
Default USB thumb drives.

On Tue, 22 May 2018 18:32:10 +0100, Jimmy Wilkinson Knife wrote:

On Mon, 21 May 2018 22:26:55 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
On Mon, 21 May 2018 13:07:45 +0100, Paul wrote:


[snip]


I'm resisting NVMe for my desktops; although they would be nice, they're ****ing expensive. I'd rather spend that money on making more of the storage SATA3 SSD instead of rotary drives.

I just ran the free software "Crystaldiskmark" on my SSDs (mirror) and HDDs (mirror). I get the correct read speed (about double the manufacturer rating) for the SSD mirror. But the write speed was a tenth of what it should be! On the HDDs (the infamous slow ST3000DM001 drives), I get a tenth of the correct speed both reading and writing.


Update - I just ran the same Crystaldiskmark software on a brand new machine with a brand new single SSD (newer Crucial, 500GB with only Windows and a few small programs installed), and got the correct full read and write speed of 500 odd MB/sec. I'm going to assume my main computer has very old and tired SSDs. Either that or they're too full? Why do full SSDs go slower? They're about 90% full. I tried to understand this page: https://pureinfotech.com/why-solid-s...ce-slows-down/ But why should they slow down after they're over 70% full? Surely the TRIM command is defragging them so I have 30% of the blocks sat there completely empty, which can be written to at full speed?
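
One quick check before blaming the drives: confirm Windows is issuing TRIM at all. A sketch using the standard fsutil query (run it from an elevated prompt); DisableDeleteNotify = 0 means TRIM is enabled:

# Query whether Windows sends TRIM (delete notifications) to SSDs.
import subprocess

out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True).stdout
print(out)  # "DisableDeleteNotify = 0" -> TRIM enabled

Note that TRIM only tells the controller which pages are dead; background garbage collection still needs genuinely empty erase blocks to compact into, which is one reason a 90%-full drive can crawl even with TRIM working.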

--
California lawmakers are now proposing an amendment that would allow 14 year olds a quarter vote and 16 year olds a half a vote in all state elections.
How stupid is this? Don't they have enough trouble counting WHOLE votes? How are they going to figure out fractions?!
  #85  
Old May 22nd 18, 06:57 PM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
external usenet poster
 
Posts: 131
Default USB thumb drives.

On Tue, 22 May 2018 18:06:15 +0100, default wrote:

On Tue, 22 May 2018 15:49:37 +0100, "Jimmy Wilkinson Knife"
wrote:

[snip]
SSDs are not used in servers due to their unreliability.

I was reading about a device in the lab stages that may turn digital
storage on its end. (if one chooses to believe corporate hype) It is
a crystal that's supposed to have the ability to store many petabytes
of data. The downside is that it can only be written to one time -
but the storage is so vast/fast/cheap in so small a space that it
could conceivably replace mechanical hard drives, by just opening up
another portion of the device and forgetting what is written in the
sectors you don't need anymore.


So secure erasing would be difficult, unless you smashed the crystals.


Yup. Physical destruction is the only recourse for security; but
you'd never have to back up data, so it is a mixed blessing.


Never have to back up data!? There are so many reasons that data can be lost - virus, file corruption due to a crash/powercut etc, accidental deletion, theft of the computer, a fire, etc, etc. Backup is ALWAYS required.

--
There was a rabbi who collected foreskins, had them dried out and made into a wallet - whenever you stroked the wallet it became a briefcase.
  #86  
Old May 22nd 18, 07:44 PM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
external usenet poster
 
Posts: 131
Default USB thumb drives.

On Tue, 22 May 2018 18:52:16 +0100, Jimmy Wilkinson Knife wrote:

On Tue, 22 May 2018 18:32:10 +0100, Jimmy Wilkinson Knife wrote:

On Mon, 21 May 2018 22:26:55 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
On Mon, 21 May 2018 13:07:45 +0100, Paul wrote:

[snip]


Update - I just ran the same Crystaldiskmark software on a brand new machine with a brand new single SSD (newer Crucial, 500GB with only Windows and a few small programs installed), and got the correct full read and write speed of 500 odd MB/sec. I'm going to assume my main computer has very old and tired SSDs. Either that or they're too full? Why do full SSDs go slower? They're about 90% full. I tried to understand this page: https://pureinfotech.com/why-solid-s...ce-slows-down/ But why should they slow down after they're over 70% full? Surely the TRIM command is defragging them so I have 30% of the blocks sat there completely empty, which can be written to at full speed?


Found this:
http://www.tomshardware.co.uk/forum/...ction-question
Apparently the automatic garbage collection (which is what I need my SSDs to do to speed them up so I don't have loads of partially filled blocks) doesn't work well on Crucial SSDs when they're over 80% full. Do you know why this is? Why on earth does it need more than one free block? It should read the partials from several blocks, write them to the spare one, then erase the source blocks.
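
A toy model suggests why garbage collection struggles past ~80% full, assuming victim blocks are about as full of still-valid pages as the drive overall (an assumption, not Crucial's actual algorithm):

# When GC compacts a victim block that is fraction u valid, it must
# copy u pages to reclaim (1 - u) pages, so the copy overhead per
# freed page is u / (1 - u). Back-of-envelope only.
for u in (0.5, 0.7, 0.8, 0.9, 0.95):
    print(f"{u:.0%} full -> {u / (1 - u):.1f} copies per freed page")

At 70% full that is about 2 copies per freed page; at 90% it is 9. The consolidation you describe still works with one spare block, but the copying cost per byte reclaimed climbs steeply, which matches the observed slowdown.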

--
I used to work in a fire hydrant factory. You couldn't park anywhere near the place. -- Steven Wright
  #87  
Old May 22nd 18, 08:17 PM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default USB thumb drives.

wasbit wrote:
"Paul" wrote in message
news

I would be interested in the brand and model number
of this mythically large ($13.50) storage device. Was
the brand Godzilla or Mothra? Did it come
from the ocean? Was it angry?

Paul


It's yours if you want it. Email me your address.


No, that's OK.

You might need an 8GB storage device one of these days :-)

The Flash stick industry is full of this sort of thing.

My least favorite purchase, is the stick that only
writes at 3MB/sec. Which is great... if you start
a transfer before you go to bed, and it's all done
the next morning. And that stick was the same price
as a "good one".

Paul
  #88  
Old May 22nd 18, 08:50 PM posted to alt.comp.os.windows-10
Ant[_2_]
external usenet poster
 
Posts: 554
Default USB thumb drives.

wrote:
On Mon, 21 May 2018 23:05:25 -0500,
(Ant) wrote:


I meant 16 KB in modern times.


Put it into a museum.


http://www.youtube.com/watch?v=lFmhRLiYho0
--
Quote of the Week: "To the ant, a few drops of dew is a flood." --Iranian
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\ Ant(Dude) @ http://antfarm.home.dhs.org
/ /\ /\ \ Please nuke ANT if replying by e-mail privately. If credit-
| |o o| | ing, then please kindly use Ant nickname and URL/link.
\ _ /
( )
  #89  
Old May 22nd 18, 08:58 PM posted to alt.comp.os.windows-10
Ant[_2_]
external usenet poster
 
Posts: 554
Default USB thumb drives.

Doomsdrzej wrote:
....
Bought 5 Duracell-branded 64GB flash drives. Three of them have gone into
"write protected" mode and can't do anything but read the corrupted data
(some of it) that you can view a listing for.


Duracell has no business producing USB drives.


Is it just another maker's drive rebranded, or do they really make their own?
--
Quote of the Week: "To the ant, a few drops of dew is a flood." --Iranian
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\ Ant(Dude) @ http://antfarm.home.dhs.org
/ /\ /\ \ Please nuke ANT if replying by e-mail privately. If credit-
| |o o| | ing, then please kindly use Ant nickname and URL/link.
\ _ /
( )
  #90  
Old May 22nd 18, 09:16 PM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default USB thumb drives.

Jimmy Wilkinson Knife wrote:


At what age does the data become unreadable if the drive has not been
powered up?


Flash is generally quoted as holding charge on a floating
gate for around 10 years.

That means, if you're using SSDs for archival storage, they
should be plugged in and re-written every five years, at a guess.

For a given design, I don't know how to guess at that value,
and the 10 year number is merely a "starting point, ball park number".

If I had an SSD today, that was as old as the oldest hard drive
in the room, chances are it would throw a CRC error or two, signifying
the error correction couldn't fix the number of errors accumulated
in a sector.

It's a mirrored array, and the drives have different SMART data, even
though they're identical and were installed together, so one should fail
well before the other and prevent a problem.


Well, I don't want to propose something stupid to you,
and cause the mirror to break as a side effect. You have
to be careful that any soft-raid methods don't "track"
what you do to them, and then the next time you boot
into the "working" configuration, the status is
degraded and it costs you another rebuild. If you move
one of those drives somewhere so you can read the SMART,
you might upset the array status.

If you don't have SMART visibility, and you insist
on running a RAID 1 mirror, I would recommend
to you that you mix drives from different manufacturers.
Pair an Intel branded 512GB drive with a Samsung branded
512GB drive. That should de-correlate things enough, that
there won't be any unfortunate accidents. I personally
would not pair two identical Intel drives in a RAID 1
mirror, if you paid me :-) I'd be a "lucky enough guy"
to have Windows Defender and Search Indexer keep writing
to C: just after the first drive fails, until the second
drive fails and I'm toasted. That's what would happen
to me if I tried that.

With mirrored drives *you still need backups*.

If the 5V rail on your PSU overvolts, and burns both
SSDs at the same time, "you got nuttin". We do backups
to protect against lightning and PSU failures and ransomware.
The mirror idea, isn't "the Space Shuttle". It's not sufficient
redundancy for disaster planning. It's *not* a substitute
for backups.

Paul
 



