A Windows XP help forum. PCbanter



USB thumb drives.



 
 
  #61  
Old May 20th 18, 08:43 PM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
external usenet poster
 
Posts: 131
Default USB thumb drives.

On Sun, 20 May 2018 02:17:31 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
On Thu, 17 May 2018 14:12:54 +0100, Doomsdrzej wrote:

On Thu, 17 May 2018 07:49:55 +1000, Peter Jason wrote:

I have many USB2 & USB3 going back 10+ years, and
now some are "socket specific" on my 10 YO
computer motherboard (some USB3s will work on some
sockets; even USB2 sockets) and not others.

Do these thumb drives last forever, or should
their contents be transferred to the latest USB
drives?

Theoretically, they should last a long time, but a lot can destroy them,
like moisture or a seemingly minuscule amount of bending. I'd
transfer their contents to more recent, faster USB keys.


I was given one that was stood upon by a 15 stone man. The data was not
recoverable.


They do make rugged ones.

https://www.amazon.com/LaCie-XtremKe.../dp/B00AMMI6VQ

https://www.pcmag.com/article2/0,2817,2414162,00.asp

"...crushing force (it will withstand 10 tons)"

An Amazon review claims the internal USB key portion
will separate from the threaded screw cap, so while the
outside is rugged, the connection between the innards
and the cap isn't perfect.


So this is for a bricklayer who has to carry gigabytes of data with him :-)

--
I just sent my lawyer something for his birthday. Unfortunately, he wasn't home when it went off.
  #62  
Old May 20th 18, 11:35 PM posted to alt.comp.os.windows-10
nospam
external usenet poster
 
Posts: 2,010
Default USB thumb drives.

In article , Jimmy Wilkinson Knife
wrote:


Do SSDs slow down when they're nearly full, or when they're old? I'm sure
mine isn't as fast as it used to be.


some do. newer ones are better, but you don't want it to be too full.
  #63  
Old May 21st 18, 12:04 AM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 7,114
Default USB thumb drives.

Jimmy Wilkinson Knife wrote:

I have noticed my SSD maxed out now and then when Windows decides I'm
not using the computer. No idea what it's up to, since I assume SSDs
don't need defragmenting.

Do SSDs slow down when they're nearly full, or when they're old? I'm
sure mine isn't as fast as it used to be.


The ones I've got have maintained their performance
level fairly well.

The very first one I bought (Corsair Neutron?) was a dud;
performance on the first day varied between 120MB/sec
and pretty close to the available SATA III rate. I took it
back for a refund after only 50GB of writes. I hadn't intended
to benchmark it, but the damn thing was slow enough to notice,
and I was pretty well forced to collect some graphs and
take it back to the store. It was well off the rates
printed on the tin.

*******

If the issue was internal fragmentation, you could back up,
Secure Erase, and restore.

If you happen to have a lot of persistent VSS shadows
running on the system, there can be "slow copy on write",
and the OS Optimize function will actually defragment an SSD
if such a condition is noted. I still haven't located a tech
article to decode how "slow COW" happens or why it happens.
There is some deal, when you defragment a drive, the defragmenter
is supposed to limit cluster movements to certain sizes, to prevent
interaction with a Shadow which is already in place. It's possible
shadow copies work at the cluster level.

An example of a reason to use a Shadow, might be File History,
or it might be the usage of Incremental or Differential backups.
There are some commands you can use, to list the current
shadows in place. WinXP has VSS, but no persistent shadows
are available, and a reboot should close them all. Later
OSes allow keeping a shadow running between sessions
(for tracking file system state). The maximum number
of shadows is something like 64.

vssadmin list shadows

[vssadmin]
https://technet.microsoft.com/en-us/.../dd348398.aspx

Paul
  #64  
Old May 21st 18, 12:51 AM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
external usenet poster
 
Posts: 131
Default USB thumb drives.

On Mon, 21 May 2018 00:04:56 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:

I have noticed my SSD maxed out now and then when Windows decides I'm
not using the computer. No idea what it's up to, since I assume SSDs
don't need defragmenting.

Do SSDs slow down when they're nearly full, or when they're old? I'm
sure mine isn't as fast as it used to be.


The ones I've got have maintained their performance
level fairly well.


I've never benchmarked mine, so I'm just going subjectively. I never used to be waiting for the computer to do something, and realise it was the SSD causing the bottleneck.

The very first one I bought (Corsair Neutron?) was a dud,
and performance in the first day, varied between 120MB/sec
and pretty close to the available SATAIII rate. I took it
back for a refund, after only 50GB of writes. I hadn't intended
to "benchmark it", but the damn thing was slow enough to notice
and I was pretty well forced to collect some graphs and
take it back to the store. It was well off the rates
printed on the tin.


I used to get OCZ SSDs (not sure why, maybe price?), but I had literally one in five of them fail completely. I now use Crucial, which 100% work and never go wrong (except a couple that needed a firmware upgrade as they "disappeared"). Looking at current reviews, OCZ are also the slowest; not sure why they're sold at all.

*******

If the issue was internal fragmentation, you could back up,
Secure Erase, and restore.


What do you mean "internal fragmentation"?

And surely fragmentation only matters on a mechanical drive, as the heads have to read data from several different places. This won't slow down an SSD?

If you happen to have a lot of persistent VSS shadows
running on the system, there can be "slow copy on write",
and the OS Optimize function will actually defragment an SSD
if such a condition is noted. I still haven't located a tech
article to decode how "slow COW" happens or why it happens.
There is some deal, when you defragment a drive, the defragmenter
is supposed to limit cluster movements to certain sizes, to prevent
interaction with a Shadow which is already in place. It's possible
shadow copies work at the cluster level.


I thought SSDs should never be defragged, and that data placement was handled internally by the drive. I read that running a defrag program on an SSD just wears it out.

An example of a reason to use a Shadow, might be File History,
or it might be the usage of Incremental or Differential backups.
There are some commands you can use, to list the current
shadows in place. WinXP has VSS, but no persistent shadows
are available, and a reboot should close them all. Later
OSes allow keeping a shadow running between sessions
(for tracking file system state). The maximum number
of shadows is something like 64.

vssadmin list shadows

[vssadmin]
https://technet.microsoft.com/en-us/.../dd348398.aspx


I'm not familiar with shadows. Is that what allows me to use something like EaseUS Backup to clone a drive while windows is still running?

--
Beelzebug (n.): Satan in the form of a mosquito that gets into your bedroom at three in the morning and cannot be cast out.
  #65  
Old May 21st 18, 03:37 AM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 7,114
Default USB thumb drives.

Jimmy Wilkinson Knife wrote:
On Mon, 21 May 2018 00:04:56 +0100, Paul wrote:



I used to get OCZ SSDs (not sure why, maybe price?), but I had
literally one in five of them fail completely. I now use Crucial, which
100% work and never go wrong (except a couple that needed a firmware
upgrade as they "disappeared"). Looking at current reviews, OCZ are also
the slowest; not sure why they're sold at all.


Currently, Toshiba owns the remains of OCZ. Toshiba invented
NAND flash and is still in the business of making flash chips.
Using OCZ gives them an enthusiast branding to use. And
branding is important, if you want to squeeze a few dollars
more of margin from retail pricing.

If they sold Toshiba brand (vertically integrated) drives
in the consumer space, nobody would know who that is.

Toshiba makes more stuff than you can imagine, but fell
on hard times recently when their nuclear business tanked
and sank.

https://en.wikipedia.org/wiki/Westin...ectric_Company

"On March 24, 2017, parent company Toshiba announced that
Westinghouse Electric Company would file for Chapter 11
bankruptcy"

And they had to scramble and shake some piggy banks, to find
enough money to keep going.


*******

If the issue was internal fragmentation, you could back up,
Secure Erase, and restore.


What do you mean "internal fragmentation"?


On Flash, 4KB writes aren't the same size as the smallest
unit of storage inside the Flash chip. If you spray a drive
with 4KB writes (or copy a folder with a lot of 4K files
in it), the drive works in the background to arrange
the data to take up fewer internal units of storage.

This amounts to fragmentation. Write amplification is the
need to write the Flash more than once when adding a file
to the partition. And drives have varied
in how good they are at this stuff. The invention of the
TRIM command, where the OS tells the drive what clusters
no longer have files in them, gives the drive a larger
free pool to work with, and makes it easier to rearrange
data during idle periods.
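The write amplification effect described above can be sketched with a toy model. All numbers here are hypothetical (a 256 KB erase block and 4 KB host writes are illustrative, not any particular drive's geometry):

```python
# Toy model of SSD write amplification (illustrative only).
# Assumed geometry: 256 KB internal erase blocks, 4 KB host writes.

ERASE_BLOCK = 256 * 1024   # hypothetical smallest internally erasable unit
HOST_WRITE = 4 * 1024      # one small host write

def write_amplification(valid_fraction):
    """Bytes physically written per byte of host data, when a 4 KB write
    forces the drive to relocate the still-valid contents of a block that
    is `valid_fraction` full before erasing it."""
    relocated = ERASE_BLOCK * valid_fraction
    return (relocated + HOST_WRITE) / HOST_WRITE

# TRIM tells the drive which clusters hold no live data, shrinking
# `valid_fraction` and hence the amplification:
nearly_full = write_amplification(0.9)
mostly_trimmed = write_amplification(0.1)
```

The point of the sketch is only the shape of the relationship: the fuller the blocks, the more live data must be copied per small write, which is exactly why a larger TRIM-maintained free pool helps.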

Back when enthusiast sites were doing random 4KB write
tests on drives, they used to leave the drive powered,
and it might take all night before the performance
level would return to normal the next morning. And
that's because the drive is busy moving stuff around.

The drive has virtual and physical storage. The outside
of the drive is the virtual part. Your files are *not*
stored in order, inside the Flash chip. If you pulled
the Flash chip and looked under a microscope, your
file is spread far and wide through the Flash chip. You
need the "mapping table" to figure out which storage location
on the inside, maps to an LBA on the outside. There's a layer
of indirection you cannot see, and the handling of such
issues can make a drive slow.

As can doing error correction with the multi-core CPU
inside the drive. On some drives "weak cells" require
a lot more error correction on a read (every sector
has to be corrected), and this reduces the rate that
the reads come back to you.

Some SSDs have three-core ARM processors for these
activities, rather than the drive being "purely mechanical"
and "filled with wire-speed logic blocks". That
doesn't seem to be the case. Instead, the drive relies
on firmware for a lot of the "features".


And surely fragmentation only matters on a mechanical drive, as the
heads have to read data from several different places. This won't slow
down an SSD?


It takes an SSD about 20 µs to read a "chunk", even if only
a part of it is being used. While normally some size of
stuff is contiguous, if the usable bits are really small,
the "chunk" overhead becomes significant. It would be
less significant, after the drive has (transparently)
rearranged some blocks inside, where you can't see
what it's doing.
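That fixed per-chunk cost can be turned into a rough throughput estimate. The ~20 µs figure comes from the post above; the 16 KB chunk size is an assumption for illustration:

```python
# Rough effective read rate when every internal "chunk" costs a fixed
# access time, however little of it is useful. Numbers are assumptions:
# ~20 microseconds per chunk, a hypothetical 16 KB internal chunk.

CHUNK_TIME_S = 20e-6
CHUNK_SIZE = 16 * 1024

def effective_mb_per_s(useful_bytes_per_chunk):
    """MB/s delivered to the host if only `useful_bytes_per_chunk` of
    each chunk read is actually wanted."""
    return useful_bytes_per_chunk / CHUNK_TIME_S / 1e6

contiguous = effective_mb_per_s(CHUNK_SIZE)   # whole chunk useful
fragmented = effective_mb_per_s(4 * 1024)     # only 4 KB useful per chunk
```

Under these assumed numbers, fully used chunks deliver several times the rate of chunks where only 4 KB is wanted, which is why internal rearrangement during idle time helps.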

The only way you can get a hint as to what the drive
is doing, is by watching the power consumption on VCC,
and guessing based on power state, whether it is house
cleaning during idle periods.

The Intel Optane/XPoint stuff, doesn't do this. It's
a byte addressable storage, which can directly write
where required. And there's less maintenance inside
the drive. Too bad the chips aren't as big as Flash,
and the devices are a lot more expensive at the moment.


If you happen to have a lot of persistent VSS shadows
running on the system, there can be "slow copy on write",
and the OS Optimize function will actually defragment an SSD
if such a condition is noted. I still haven't located a tech
article to decode how "slow COW" happens or why it happens.
There is some deal, when you defragment a drive, the defragmenter
is supposed to limit cluster movements to certain sizes, to prevent
interaction with a Shadow which is already in place. It's possible
shadow copies work at the cluster level.


I thought SSDs should never be defragged, and that data placement
was handled internally by the drive. I read that running a
defrag program on an SSD just wears it out.


https://www.hanselman.com/blog/TheRe...YourSSD.aspx

"Actually Scott and Vadim are both wrong. Storage Optimizer will defrag
an SSD once a month if volume snapshots are enabled."


An example of a reason to use a Shadow, might be File History,
or it might be the usage of Incremental or Differential backups.
There are some commands you can use, to list the current
shadows in place. WinXP has VSS, but no persistent shadows
are available, and a reboot should close them all. Later
OSes allow keeping a shadow running between sessions
(for tracking file system state). The maximum number
of shadows is something like 64.

vssadmin list shadows

[vssadmin]
https://technet.microsoft.com/en-us/.../dd348398.aspx


I'm not familiar with shadows. Is that what allows me to use something
like EaseUS Backup to clone a drive while windows is still running?


Yes. The shadow is probably released at the end of the clone run.

Whereas some flavors of backup, require the shadow to keep track
of what was backed up, and what wasn't backed up. The shadow may
remain in place in such a case.

I've never seen much of anything doing "list shadows", but somebody
must be using them. I only do full backups here, not incremental
ones, so one backup run doesn't need to know anything about
a previous backup run.

Paul
  #66  
Old May 21st 18, 11:22 AM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
external usenet poster
 
Posts: 131
Default USB thumb drives.

On Mon, 21 May 2018 03:37:30 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
On Mon, 21 May 2018 00:04:56 +0100, Paul wrote:



I used to get OCZ SSDs (not sure why, maybe price?), but I had
literally one in five of them fail completely. I now use Crucial, which
100% work and never go wrong (except a couple that needed a firmware
upgrade as they "disappeared"). Looking at current reviews, OCZ are also
the slowest; not sure why they're sold at all.


Currently, Toshiba owns the remains of OCZ. Toshiba invented
NAND flash and is still in the business of making flash chips.
Using OCZ gives them an enthusiast branding to use. And
branding is important, if you want to squeeze a few dollars
more of margin from retail pricing.


Surely every enthusiast knows OCZ is slow and likely to fall apart.

If they sold Toshiba brand (vertically integrated) drives
in the consumer space, nobody would know who that is.


I remember them advertising on TV when I were a lad (30 years ago), but it was a terrible ad:
https://youtu.be/9urXG2TpOio

Toshiba makes more stuff than you can imagine,


I've seen Toshiba parts inside many other makes of LCD TV, even other big names that I would have thought would use their own parts.

but fell
on hard times recently when their nuclear business tanked
and sank.

https://en.wikipedia.org/wiki/Westin...ectric_Company

"On March 24, 2017, parent company Toshiba announced that
Westinghouse Electric Company would file for Chapter 11
bankruptcy"


I'm only familiar with the previous title:
https://en.wikipedia.org/wiki/Westin...ic_Corporation
Can't remember what they made but I'm sure I had something made by them - probably an electric meter or something.
Wikipedia says they dissolved, yet they're still trading under the name "Westinghouse Electric Corporation" - "Westinghouse Electric Corporation Provides Smart Home Appliances To Energy Solutions That Are Cleanly And Safely Powering Us Into The Next Generation.":
http://westinghouse.com

And they had to scramble and shake some piggy banks, to find
enough money to keep going.


*******

If the issue was internal fragmentation, you could back up,
Secure Erase, and restore.


What do you mean "internal fragmentation"?


On Flash, 4KB writes aren't the same size as the smallest
unit of storage inside the Flash chip. If you spray a drive
with 4KB writes (or copy a folder with a lot of 4K files
in it), the drive works in the background to arrange
the data to take up fewer internal units of storage.

This amounts to fragmentation. Write amplification, is
the process of needing to write the Flash more than once,
when adding a file to the partition. And drives have varied
in how good they are at this stuff. The invention of the
TRIM command, where the OS tells the drive what clusters
no longer have files in them, gives the drive a larger
free pool to work with, and makes it easier to rearrange
data during idle periods.

Back when enthusiast sites were doing random 4KB write
tests on drives, they used to leave the drive powered,
and it might take all night before the performance
level would return to normal the next morning. And
that's because the drive is busy moving stuff around.

The drive has virtual and physical storage. The outside
of the drive is the virtual part. Your files are *not*
stored in order, inside the Flash chip. If you pulled
the Flash chip and looked under a microscope, your
file is spread far and wide through the Flash chip. You
need the "mapping table" to figure out which storage location
on the inside, maps to an LBA on the outside. There's a layer
of indirection you cannot see, and the handling of such
issues can make a drive slow.

As can doing error correction with the multi-core CPU
inside the drive. On some drives "weak cells" require
a lot more error correction on a read (every sector
has to be corrected), and this reduces the rate that
the reads come back to you.


Maybe that's why mine are slowing down - more cells are becoming weak?

Some SSDs have three-core ARM processors for these
activities, rather than the drive being "purely mechanical"
and "filled with wire-speed logic blocks". That
doesn't seem to be the case. Instead, the drive relies
on firmware for a lot of the "features".

And surely fragmentation only matters on a mechanical drive, as the
heads have to read data from several different places. This won't slow
down an SSD?


It takes an SSD about 20 µs to read a "chunk", even if only
a part of it is being used. While normally some size of
stuff is contiguous, if the usable bits are really small,
the "chunk" overhead becomes significant. It would be
less significant, after the drive has (transparently)
rearranged some blocks inside, where you can't see
what it's doing.


Christ, these things are more complicated than I thought.

The only way you can get a hint as to what the drive
is doing, is by watching the power consumption on VCC,
and guessing based on power state, whether it is house
cleaning during idle periods.


I take it the drive LED only lights when data is transferring from the controller on the motherboard to the SSD, and not when internal stuff is happening? I.e. the SSD doesn't tell the controller it's doing anything.

The Intel Optane/XPoint stuff, doesn't do this. It's
a byte addressable storage, which can directly write
where required.


That's how I thought SSDs should have been made in the first place. Presumably something prevents this being easy or cheap?

And there's less maintenance inside
the drive. Too bad the chips aren't as big as Flash,
and the devices are a lot more expensive at the moment.


It'll get cheaper and bigger....

The question is, will we ever replace mechanical drives completely, or will they keep getting larger at the same rate?

If you happen to have a lot of persistent VSS shadows
running on the system, there can be "slow copy on write",
and the OS Optimize function will actually defragment an SSD
if such a condition is noted. I still haven't located a tech
article to decode how "slow COW" happens or why it happens.
There is some deal, when you defragment a drive, the defragmenter
is supposed to limit cluster movements to certain sizes, to prevent
interaction with a Shadow which is already in place. It's possible
shadow copies work at the cluster level.


I thought SSDs should never be defragged, and that data placement
was handled internally by the drive. I read that running a
defrag program on an SSD just wears it out.


https://www.hanselman.com/blog/TheRe...YourSSD.aspx

"Actually Scott and Vadim are both wrong. Storage Optimizer will defrag
an SSD once a month if volume snapshots are enabled."

An example of a reason to use a Shadow, might be File History,
or it might be the usage of Incremental or Differential backups.
There are some commands you can use, to list the current
shadows in place. WinXP has VSS, but no persistent shadows
are available, and a reboot should close them all. Later
OSes allow keeping a shadow running between sessions
(for tracking file system state). The maximum number
of shadows is something like 64.

vssadmin list shadows

[vssadmin]
https://technet.microsoft.com/en-us/.../dd348398.aspx


I'm not familiar with shadows. Is that what allows me to use something
like EaseUS Backup to clone a drive while windows is still running?


Yes. The shadow is probably released at the end of the clone run.

Whereas some flavors of backup, require the shadow to keep track
of what was backed up, and what wasn't backed up. The shadow may
remain in place in such a case.

I've never seen much of anything doing "list shadows", but somebody
must be using them. I only do full backups here, not incremental
ones, so one backup run doesn't need to know anything about
a previous backup run.


I only do full backups as well. I just swap the backup drive I use every so often, so I can go back to older versions of a file.

--
I was doing some remolishments to my house the other day and accidentally defurbished it.
  #67  
Old May 21st 18, 01:07 PM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 7,114
Default USB thumb drives.

Jimmy Wilkinson Knife wrote:


Maybe that's why mine are slowing down - more cells are becoming weak?


The Toolkit software that comes with the drive,
should be able to provide statistics for you.
Like, how many blocks were spared out. If a block
cannot remember what you write to it, the drive
may decide to spare it out and replace it, just
like a hard drive would. This is automatic sparing
just like in the ATA designs.

The kind of "weak" I'm referring to, is not permanently
damaged sectors. It's sectors that the charge is
draining off the floating gates in a matter of
a few months, rather than the ten years we would
normally expect. This was causing the read rate
on "data at rest" to drop. So if you wrote a backup
today on the device, it might read at 300MB/sec. If
two months from now, you tried to read the same big file
again, it would be reading at 180MB/sec. And it
does that, because the charge draining off the cells
corrupts them, and the powerful error correcting code
needs time to do the corrections to multiple bits
in the sector. The data is still officially "intact"
and error free, in that the error corrector isn't exhausted.

They "fixed" this in a firmware update, by having the
drive re-write the cells after three months (equals
degraded wear life and shortens the life of the drive).

On TLC, around 10% of storage is used for ECC bits, and
when QLC comes out, this is expected to grow.

At some point, adding ECC will affect storage capacity
sufficiently, we will have hit a wall on "extending the
number of bits stored in one cell". For example, if
you needed as many ECC bits as data stored, yes, you
doubled the capacity by going from QLC to the next thing,
but you cut the capacity in half by the need to use more ECC.
They can't keep increasing the bits per cell before
it bites them on the ass. The number of write cycles is
dropping with each generation too. Flash is becoming the
equivalent of silicon toilet paper.
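The ECC-versus-capacity wall described above is simple arithmetic. The ~10% TLC figure is from the post; the rest is an illustrative extreme case, not a real drive's numbers:

```python
# How ECC overhead eats into raw flash capacity. The ~10% TLC figure is
# from the discussion above; everything else is illustrative arithmetic.

def usable_gb(raw_gb, ecc_fraction):
    """User-visible capacity after reserving a fraction of cells for ECC."""
    return raw_gb * (1.0 - ecc_fraction)

tlc = usable_gb(512, 0.10)            # ~10% ECC overhead on TLC
# The wall: if ECC ever needed one check bit per data bit, doubling raw
# capacity (e.g. going past QLC) would gain nothing at all:
doubled_raw = usable_gb(2 * 512, 0.50)
```

In the extreme case, doubling raw bits while halving them away to ECC lands you exactly where you started, which is the "bites them on the ass" point.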

In fact, doing some math the other day, I figured out
it was costing me $1 to write a Flash drive from
one end to the other. There is a tangible wearout
on the highest density devices. And it's beginning
to equate to dollars. When I use a hard drive on the
other hand, I don't have such a notion. It's been
a long time since I lost a hard drive. I've got
a few that have bad SMART, but "they're not dead yet".
Some of the flaky ones have been going for an extra
five years after retirement (now used as scratch drives).
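The "$1 per full-drive write" figure follows from price and rated endurance. The numbers below are hypothetical (a made-up $150, 1 TB, 150 TBW drive), chosen only to show the arithmetic:

```python
# Back-of-envelope for a "$1 per full-drive write" figure. All numbers
# are hypothetical; real endurance is quoted as TBW (terabytes written).

def cost_per_full_write(price_usd, capacity_tb, endurance_tbw):
    """Dollars consumed by writing the drive once, end to end."""
    full_writes = endurance_tbw / capacity_tb   # rated full-drive passes
    return price_usd / full_writes

# e.g. a $150, 1 TB drive rated for 150 TBW costs $1 per complete pass:
example = cost_per_full_write(150.0, 1.0, 150.0)
```

Whether any given drive really works out to $1 a pass depends entirely on its actual price and TBW rating; the formula is the point.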

There are just a few flash drives, that are huge and
the interface happens to be slow. There's a 30TB one,
you can continuously write it at the full rate, and it
is guaranteed to pass the warranty period :-)
So that would be an example of a drive, where
a lab accident can't destroy it. Because it
can handle the wear life of writing continuously
at its full speed (of maybe 300 to 400MB/sec).
If the 30TB drive was NVMe format, and ran at
2500MB/sec, it might not be able to brag about
supporting continuous write for the entire warranty
period. You might have to stop writing it once in
a while :-)
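The continuous-write claim can be checked with rough arithmetic. Assuming ~400 MB/s sustained and a 5-year warranty (both illustrative guesses, not the actual drive's spec sheet):

```python
# How much data a drive absorbs if streamed at full speed for its whole
# warranty period (illustrative: ~400 MB/s sustained over 5 years).

def terabytes_written(mb_per_s, years):
    seconds = years * 365 * 24 * 3600
    return mb_per_s * seconds / 1e6     # MB -> TB

total_tb = terabytes_written(400, 5)            # total data over 5 years
full_passes = total_tb / 30.0                   # passes over a 30 TB drive
```

At these assumed rates that is on the order of two thousand full passes of a 30 TB drive, and at NVMe speeds of ~2500 MB/s the pass count would be several times higher, which is why the faster interface makes the guarantee harder to offer.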

Paul
  #68  
Old May 21st 18, 07:30 PM posted to alt.comp.os.windows-10
Jimmy Wilkinson Knife
external usenet poster
 
Posts: 131
Default USB thumb drives.

On Mon, 21 May 2018 13:07:45 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:

Maybe that's why mine are slowing down - more cells are becoming weak?


The Toolkit software that comes with the drive,
should be able to provide statistics for you.
Like, how many blocks were spared out. If a block
cannot remember what you write to it, the drive
may decide to spare it out and replace it, just
like a hard drive would. This is automatic sparing
just like in the ATA designs.


I would have looked at the SMART data, but they're in a mirror array run by the motherboard's RAID controller, which I thought blocked that information.
But not on this controller apparently.
Not entirely sure how to interpret all this, but:

First SSD:
Raw Read error rate, value 100, worst 100, warn 0, raw 000000000005
Reallocated Sector count, value 100, worst 100, warn 0, raw 000000000000
Data Address Mark errors, value 23, worst 23, warn 0, raw 00000000004D

The others are either raw 000000000000 or marked as unimportant by SpeedFan, so I didn't type them in (it won't copy and paste).

Second SSD:
Raw Read error rate, value 100, worst 100, warn 0, raw 00000000000D
Reallocated Sector count, value 100, worst 100, warn 0, raw 000000000002
Data Address Mark errors, value 18, worst 18, warn 0, raw 000000000052

SpeedFan reports (on the quick test) 100% performance but 0% fitness?! I think the 0% may be it failing to read SMART correctly through the RAID controller, or it doesn't have a clue about some stats, as there are no warnings apart from a red ! on an "unknown parameter".

The mark errors concern me; I'm assuming they also started at 100, and 0 means imminent failure. So 18 means it's 82% worn out?
The Acronis website says "Although degradation of this parameter can be an indicator of drive ageing and/or potential electromechanical problems, it does not directly indicate imminent drive failure."
Huh? If it's 82% aged, surely that's something to indicate failure soon?
I can't find anything certain on Google about "data address mark errors".
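One thing worth noting about the question above: normalized SMART "value" fields are vendor-scaled health scores, not literal wear percentages. A rough way to read them (illustrative only; the true meaning depends on the vendor's threshold and scaling):

```python
# Normalized SMART "value" fields typically start at 100 (or 200) and
# fall toward a vendor-set failure threshold; they are not a direct
# wear percentage. A rough reading (illustrative, not vendor-accurate):

def smart_headroom(value, threshold=0, start=100):
    """Fraction of the normalized range still above the fail threshold."""
    return (value - threshold) / (start - threshold)

# "Data Address Mark errors" at value 18 means 18% of the normalized
# range remains, but without the vendor's threshold it does not
# literally mean the drive is 82% worn out:
dam = smart_headroom(18)
```

This is consistent with the Acronis wording quoted above: a degraded normalized value signals ageing, but only crossing the vendor's threshold is treated as failure.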

The kind of "weak" I'm referring to, is not permanently
damaged sectors. It's sectors that the charge is
draining off the floating gates in a matter of
a few months, rather than the ten years we would
normally expect. This was causing the read rate
on "data at rest" to drop. So if you wrote a backup
today on the device, it might read at 300MB/sec. If
two months from now, you tried to read the same big file
again, it would be reading at 180MB/sec. And it
does that, because the charge draining off the cells
corrupts them, and the powerful error correcting code
needs time to do the corrections to multiple bits
in the sector. The data is still officially "intact"
and error free, in that the error corrector isn't exhausted.

They "fixed" this in a firmware update, by having the
drive re-write the cells after three months (equals
degraded wear life and shortens the life of the drive).


As long as someone doesn't try to use it as long term storage and doesn't plug it in for 6 months. Or does it stay put if switched off?

On TLC, around 10% of storage is used for ECC bits, and
when QLC comes out, this is expected to grow.

At some point, adding ECC will affect storage capacity
sufficiently, we will have hit a wall on "extending the
number of bits stored in one cell". For example, if
you needed as many ECC bits as data stored, yes, you
doubled the capacity by going from QLC to the next thing,
but you cut the capacity in half by the need to use more ECC.
They can't keep increasing the bits per cell before
it bites them on the ass. The number of write cycles is
dropping with each generation too. Flash is becoming the
equivalent of silicon toilet paper.

In fact, doing some math the other day, I figured out
it was costing me $1 to write a Flash drive from
one end to the other.


That can't be right. Are you claiming a $100 drive can only be written completely 100 times?

There is a tangible wearout
on the highest density devices. And it's beginning
to equate to dollars. When I use a hard drive on the
other hand, I don't have such a notion. It's been
a long time since I lost a hard drive. I've got
a few that have bad SMART, but "they're not dead yet".
Some of the flaky ones have been going for an extra
five years after retirement (now used as scratch drives).


Actually I've had terrible trouble with hard drives but never ever had a single SSD fail, apart from OCZ **** that I very quickly stopped using.

The number of hard drives that either overheated or just started clicking.

There are just a few flash drives, that are huge and
the interface happens to be slow. There's a 30TB one,
you can continuously write it at the full rate, and it
is guaranteed to pass the warranty period :-)
So that would be an example of a drive, where
a lab accident can't destroy it. Because it
can handle the wear life of writing continuously
at its full speed (of maybe 300 to 400MB/sec).
If the 30TB drive was NVMe format, and ran at
2500MB/sec, it might not be able to brag about
supporting continuous write for the entire warranty
period. You might have to stop writing it once in
a while :-)


That would be a ****ing busy server to write that much data. And if you had such a server, you'd most likely need way more storage space, so each drive wouldn't be in continuous use.

--
An archaeologist is the best husband a woman can have. The older she gets the more interested in her he is.
  #70  
Old May 21st 18, 10:26 PM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 7,114
Default USB thumb drives.

Jimmy Wilkinson Knife wrote:
On Mon, 21 May 2018 13:07:45 +0100, Paul wrote:


They "fixed" this in a firmware update, by having the
drive re-write the cells after three months (equals
degraded wear life and shortens the life of the drive).


As long as someone doesn't try to use it as long term storage and
doesn't plug it in for 6 months. Or does it stay put if switched off?


The leaking on that device was independent of powered state.
The idea is, all the cells leak. But the sectors that are
in use as "data at rest" are slowly degrading
with time, and requiring more microseconds of error correction
by the ARM processor, per sector.


In fact, doing some math the other day, I figured out
it was costing me $1 to write a Flash drive from
one end to the other.


That can't be right. Are you claiming a $100 drive can only be written
completely 100 times?


That was the figure for the drive I bought.
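Paul's dollar figure can be sanity-checked with a quick back-of-the-envelope
calculation. The numbers below are illustrative assumptions, not the specs of
his actual drive:

```python
# Rough cost per full-drive write, from purchase price and rated endurance.
# All figures here are hypothetical examples, not any specific drive's specs.

def cost_per_full_write(price_usd, capacity_tb, endurance_tbw):
    """Price divided by how many times the drive can be filled end to end."""
    full_writes = endurance_tbw / capacity_tb   # rated drive-fill count
    return price_usd / full_writes

# e.g. a $100 1TB drive rated for 100 TBW works out to $1 per full write
print(cost_per_full_write(100, 1.0, 100))   # -> 1.0
```

So a "$1 per end-to-end write" figure implies the drive's rated endurance,
in full-drive fills, roughly equals its price in dollars.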


Actually I've had terrible trouble with hard drives but never ever had a
single SSD fail, apart from OCZ **** that I very quickly stopped using.

The number of hard drives that either overheated or just started clicking.


I lost a couple of Maxtor 40GB drives, which went south very quickly.
(From clicking to dead takes a single day.)

I lost a Seagate 32550N 2GB, when the head lock jammed
at startup, the arm tried to move anyway, and it ground
the heads into the platter like a cigarette butt. And
the most wonderful "clock spring" noise came out of
the drive. They don't make head locks like that any more
(huge solenoid, looked out of place in the drive). There
was a gouge in the platter.


There are just a few flash drives, that are huge and
the interface happens to be slow. There's a 30TB one,
you can continuously write it at the full rate, and it
is guaranteed to pass the warranty period :-)
So that would be an example of a drive, where
a lab accident can't destroy it. Because it
can handle the wear life of writing continuously
at its full speed (of maybe 300 to 400MB/sec).
If the 30TB drive was NVMe format, and ran at
2500MB/sec, it might not be able to brag about
supporting continuous write for the entire warranty
period. You might have to stop writing it once in
a while :-)


That would be a ****ing busy server to write that much data. And if you
had such a server, you'd most likely need way more storage space, so
each drive wouldn't be in continuous use.


I think that 30TB drive is a wonderful drive from a
"cannot be abused" perspective. And I think it follows
a 5.25" form factor too. It's chock full
of chips. The average user isn't going to like the
speed though. Too many people have been spoiled by
NVMe speeds.
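The "can't wear it out within the warranty" claim can be bounded with simple
arithmetic. The sustained speed and warranty length below are assumptions
taken from the ballpark figures in this thread:

```python
# How much data a drive absorbs if written continuously at full speed
# for its whole warranty. Figures are ballpark assumptions from the thread.

SPEED_MB_S = 400                 # assumed sustained sequential write speed
WARRANTY_YEARS = 5               # assumed warranty length
SECONDS_PER_YEAR = 365 * 24 * 3600

total_tb = SPEED_MB_S * SECONDS_PER_YEAR * WARRANTY_YEARS / 1e6  # MB -> TB
print(f"{total_tb:.0f} TB written over the warranty")

# For a 30TB drive, that is this many full end-to-end fills:
print(f"{total_tb / 30:.0f} drive fills")
```

Roughly 63,000 TB, or about 2,100 fills of a 30TB drive — which is why only a
slow interface makes a "write it flat-out for the whole warranty" guarantee
feasible; at NVMe speeds the same math demands several times the endurance.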

*******

Back to your SMART table for a moment...

Apparently the SMART table definitions overlap. Obviously,
an SSD doesn't have a "data address mark". And an HDD, while it
does have a notion of "terabytes written" and a gross notion
of wear life, doesn't measure it as such. I don't think
any HDD reserves a spot for that info in its SMART table.
The info is undoubtedly inside the drive somewhere, just
not something you'd find in HDD SMART.

202 Percentage Of The Rated Lifetime Used in your SSD === SSD Param
202 Data Address Mark Errors === HDD Param

If your SMART tool is an older one, it will use the older
definition. HDTune 2.55 (free version, now ten years old)
doesn't know anything about SSDs. This is why I recommended
using the SSD Toolbox software, which may be available
on your SSD manufacturer's site. The SSD Toolbox should be using
an SSD SMART table definition.

Data Address Mark errors, value 18, worst 18, warn 0, raw 000000000052

Consult the Toolkit for that SSD, and verify the lifetime used.
If the raw field is displayed in hex, 0x52 is 82 decimal, which
agrees with the normalized value of 18: roughly 82% of the wear
life is exhausted
(which is independent of how many sectors are spared out).
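Assuming attribute 202 follows the common "normalized value counts down from
100" convention, and that the tool prints the raw field in hex (both are
assumptions; the vendor Toolbox is the authority), the two readings can be
cross-checked:

```python
# Interpreting a normalized SMART "lifetime" attribute that counts down
# from 100. Assumes the common convention; vendor tools are authoritative.

def percent_used(normalized_value, start=100):
    """Wear consumed, if the attribute starts at 100 and counts down."""
    return start - normalized_value

# The second SSD in the thread reported value 18 for attribute 202:
print(percent_used(18))   # -> 82, i.e. ~82% of rated life consumed

# The raw field "000000000052", read as hex, is 82 decimal -
# consistent with the same reading:
print(int("000000000052", 16))   # -> 82
```

The two independent fields agreeing is a good sign the 100-down interpretation
is the right one for this attribute.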

There is one brand where that parameter is very dangerous.
If you have an Intel drive, it stops responding when
the drive is worn out, as measured by Flash cell write
cycles. Other brands continue to run. In one case,
a drive was able, during a lifetime test, to exceed its
rated Health value many times over, before the sparing eventually
exhausted the spares pool. When the cells wear out, more
sectors will need to be spared, so the sparing rate
at some point will accelerate. Sometimes it might be
a power failure, while in that state (lots of sparing),
that results in the drive being killed and no longer
responding. There might actually be some spares
left when one of those "way over the top" SSDs dies
on you.

But the Intel response is a "no mercy" response. Intel
wants you to back up your Intel SSD every day, so that
you can "laugh" when your SSD bricks itself. The "nice"
thing about such a behavior is that you can't
even check the SMART table to see what happened :-/
Some drives signal their displeasure by reading but
not writing; since they remain in a readable
state, it's up to the user whether they actually
"trust" any recovered data. The ECC should be able
to indicate whether sectors are irrecoverably bad or
not, so reading in such a state really shouldn't
be a problem.

But the Intel policy sucks, especially when the
typical "couldn't care less" class of consumer isn't aware
of what their policy is on Health. I've only caught hints
of this in some SSD reviews.

*******

A great series of articles was the one where they kept
writing to a set of drives until they had all failed.
The article here also mentions in passing what some of
the end-of-life policies are. It's possible the
Corsair Neutron in this article was the MLC version,
while the one I bought was suspected to have TLC (as it
disappeared from the market for several months and
then "magically reappeared").

https://techreport.com/review/27909/...heyre-all-dead

The TLC drive with the bad "data at rest" behavior
might have been a Samsung.

There's nothing wrong with charge draining off the cells,
as long as the engineering is there to include an ECC
method that ensures readable data for ten years after
the write operation. The issue wasn't a failure as such,
since the data was still perfectly readable - it was
the fact the drive was slow that ****ed people off. When
these companies use the newest generation of "bad" flash,
it's up to them to overprovision enough so the
user doesn't notice what a crock they've become.
You see, they're getting ready to release QLC,
which is one bit more per cell than TLC. The TLC
was bad enough. What adventures will QLC bring?
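The capacity-vs-ECC trade-off raised earlier in the thread can be put in
numbers. The overhead percentages below are illustrative assumptions, not
vendor figures:

```python
# Net capacity gain from adding one bit per cell, against growing ECC
# overhead. Overhead fractions are illustrative assumptions, not vendor data.

def usable_capacity(raw_capacity, ecc_fraction):
    """Capacity left for user data after reserving a fraction for ECC."""
    return raw_capacity * (1 - ecc_fraction)

tlc_raw = 1000            # GB, hypothetical TLC part
qlc_raw = tlc_raw * 4 / 3 # one extra bit per cell: 4 bits vs 3

print(usable_capacity(tlc_raw, 0.10))   # TLC with ~10% ECC -> 900.0
print(usable_capacity(qlc_raw, 0.20))   # QLC with an assumed 20% ECC

# In the limit where ECC needs as many bits as the data itself,
# half the raw capacity goes to ECC and the density gain from the
# extra bit per cell is largely cancelled out.
```

With those assumed numbers, the QLC part nets only about 1067 GB usable from
its 1333 GB raw — the extra bit per cell buys less and less once the ECC
budget grows with it.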

Paul
  #71  
Old May 22nd 18, 05:05 AM posted to alt.comp.os.windows-10
Ant[_2_]
external usenet poster
 
Posts: 524
Default USB thumb drives.

nospam wrote:
In article , Ant
wrote:


Not sure what you can do with 16 KB. That's like in the 80s.


one could do a lot...


https://history.nasa.gov/computers/Ch2-5.html
MIT's original design called for just 4K words of fixed memory and
256 words of erasable (at the time, two computers for redundancy were
still under consideration). By June 1963, the figures had grown to
10K of fixed and 1K of erasable. The next jump was to 12K of fixed,
with MIT still insisting that the memory requirement for an
autonomous lunar mission could be kept under 16K! Fixed memory
leapt to 24K and then finally to 36K words, and erasable memory had
a final configuration of 2K words.


I meant 16 KB in modern times.
--
Quote of the Week: "To the ant, a few drops of dew is a flood." --Iranian
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\ Ant(Dude) @ http://antfarm.home.dhs.org
/ /\ /\ \ Please nuke ANT if replying by e-mail privately. If credit-
| |o o| | ing, then please kindly use Ant nickname and URL/link.
\ _ /
( )
  #72  
Old May 22nd 18, 07:28 AM posted to alt.comp.os.windows-10
No_Name
external usenet poster
 
Posts: 41
Default USB thumb drives.

On Mon, 21 May 2018 23:05:25 -0500, (Ant) wrote:


I meant 16 KB in modern times.


Put it into a museum.




  #73  
Old May 22nd 18, 08:20 AM posted to alt.comp.os.windows-10
Lucifer Morningstar[_3_]
external usenet poster
 
Posts: 33
Default USB thumb drives.

On Mon, 21 May 2018 19:30:36 +0100, "Jimmy Wilkinson Knife"
wrote:

On Mon, 21 May 2018 13:07:45 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:

Maybe that's why mine are slowing down - more cells are becoming weak?


The Toolkit software that comes with the drive,
should be able to provide statistics for you.
Like, how many blocks were spared out. If a block
cannot remember what you write to it, the drive
may decide to spare it out and replace it, just
like a hard drive would. This is automatic sparing
just like in the ATA designs.


I would have looked at the SMART data, but they're in a mirror array by the motherboard's hard disk controller, which I thought blocked that information.
But not on this controller apparently.
Not entirely sure how to interpret all this, but:

First SSD:
Raw Read error rate, value 100, worst 100, warn 0, raw 000000000005
Reallocated Sector count, value 100, worst 100, warn 0, raw 000000000000
Data Address Mark errors, value 23, worst 23, warn 0, raw 00000000004D

The others are either raw 000000000000 or marked as unimportant by the program speedfan, so I didn't type them in (it won't copy and paste).

Second SSD:
Raw Read error rate, value 100, worst 100, warn 0, raw 00000000000D
Reallocated Sector count, value 100, worst 100, warn 0, raw 000000000002
Data Address Mark errors, value 18, worst 18, warn 0, raw 000000000052

Speedfan reports (on the quick test) 100% performance, but 0% fitness?! I think the 0% may be it not reading the SMART correctly through the RAID controller, or it doesn't have a clue about some stats as there are no warnings apart from a red ! on an "unknown parameter".

The mark errors concern me, I'm assuming they also started at 100 and 0 means imminent failure. So 18 means it's 82% worn out?
The Acronis website says "Although degradation of this parameter can be an indicator of drive ageing and/or potential electromechanical problems, it does not directly indicate imminent drive failure."
Huh? If it's 82% aged, surely that's something to indicate failure soon?
I can't find anything certain on Google about "data address mark errors".

The kind of "weak" I'm referring to, is not permanently
damaged sectors. It's sectors that the charge is
draining off the floating gates in a matter of
a few months, rather than the ten years we would
normally expect. This was causing the read rate
on "data at rest" to drop. So if you wrote a backup
today on the device, it might read at 300MB/sec. If
two months from now, you tried to read the same big file
again, it would be reading at 180MB/sec. And it
does that, because the charge draining off the cells
corrupts them, and the powerful error correcting code
needs time to do the corrections to multiple bits
in the sector. The data is still officially "intact"
and error free, in that the error corrector isn't exhausted.

They "fixed" this in a firmware update, by having the
drive re-write the cells after three months (equals
degraded wear life and shortens the life of the drive).


As long as someone doesn't try to use it as long term storage and doesn't plug it in for 6 months. Or does it stay put if switched off?

On TLC, around 10% of storage is used for ECC bits, and
when QLC comes out, this is expected to grow.

At some point, adding ECC will affect storage capacity
sufficiently, we will have hit a wall on "extending the
number of bits stored in one cell". For example, if
you needed as many ECC bits as data stored, yes, you
doubled the capacity by going from QLC to the next thing,
but you cut the capacity in half by the need to use more ECC.
They can't keep increasing the bits per cell without
it biting them on the ass. The number of write cycles is dropping
with each generation too. Flash is becoming the equivalent
of silicon toilet paper.

In fact, doing some math the other day, I figured out
it was costing me $1 to write a Flash drive from
one end to the other.


That can't be right. Are you claiming a $100 drive can only be written completely 100 times?

There is a tangible wearout
on the highest density devices. And it's beginning
to equate to dollars. When I use a hard drive on the
other hand, I don't have such a notion. It's been
a long time since I lost a hard drive. I've got
a few that have bad SMART, but "they're not dead yet".
Some of the flaky ones have been going for an extra
five years after retirement (now used as scratch drives).


Actually I've had terrible trouble with hard drives but never ever had a single SSD fail, apart from OCZ **** that I very quickly stopped using.

The number of hard drives that either overheated or just started clicking.

There are just a few flash drives, that are huge and
the interface happens to be slow. There's a 30TB one,
you can continuously write it at the full rate, and it
is guaranteed to pass the warranty period :-)
So that would be an example of a drive, where
a lab accident can't destroy it. Because it
can handle the wear life of writing continuously
at its full speed (of maybe 300 to 400MB/sec).
If the 30TB drive was NVMe format, and ran at
2500MB/sec, it might not be able to brag about
supporting continuous write for the entire warranty
period. You might have to stop writing it once in
a while :-)


That would be a ****ing busy server to write that much data. And if you had such a server, you'd most likely need way more storage space, so each drive wouldn't be in continuous use.


SSDs are not used in servers due to their unreliability.
  #74  
Old May 22nd 18, 08:51 AM posted to alt.comp.os.windows-10
default[_2_]
external usenet poster
 
Posts: 61
Default USB thumb drives.

On Tue, 22 May 2018 17:20:46 +1000, Lucifer Morningstar
wrote:

SSDs are not used in servers due to their unreliability.


I was reading about a device in the lab stages that may turn digital
storage on its head (if one chooses to believe corporate hype). It is
a crystal that's supposed to be able to store many petabytes
of data. The downside is that it can only be written to once -
but the storage is so vast/fast/cheap in so small a space that it
could conceivably replace mechanical hard drives: you just open up
another portion of the device and forget what is written in the
sectors you don't need anymore.
  #75  
Old May 22nd 18, 01:32 PM posted to alt.comp.os.windows-10
nospam
external usenet poster
 
Posts: 2,010
Default USB thumb drives.

In article , Lucifer
Morningstar wrote:


SSDs are not used in servers due to their unreliability.


they absolutely are used in servers *because* of their reliability.
 






Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2018, Jelsoft Enterprises Ltd.
Copyright 2004-2018 PCbanter.
The comments are property of their posters.