SATA Drives



 
 
  #16  
Old January 5th 19, 01:10 AM posted to alt.comp.os.windows-10
T
external usenet poster
 
Posts: 4,600
Default SATA Drives

On 1/4/19 5:03 PM, Tim wrote:
nospam wrote in news:040120191635551909%
lid:

In article , wrote:


When I sell an SSD, it has to work and keep working, as when SSDs
go bad, they brick.


false.


I see it all the time. The flash cells are not the only
thing that fails. The interface electronics fail too.
And that DOES NOT show up on lifespan tests. But pardon me.
You know all and see all. I stand corrected. You KNOW IT
ALL! No wonder I killfiled your stupid ass. Idiot.



most ssds will fail to read-only, normally long after smart warnings
indicating that they're approaching end of life, and that's assuming
something else doesn't fail first. ssds will normally outlast the
devices in which they are installed.

not that it matters, since if one does fail, simply replace and restore
from backup. no big deal.

I settled on Samsung, as they had the best
reliability I could find. Wonderful tech support too.


samsung is *among* the best. crucial is also top quality.

and an ssd does not need tech support. connect it and it works. done.

there may be differences in how they handle warranty service, but
that's something else entirely.

My first SSD was a Kingston. It died two years into a three year warranty,
right after Kingston announced that they were dropping support for it.


Just bite the bullet and get a Samsung. Remember to install
Samsung Magician to check your lifespan.
  #17  
Old January 5th 19, 01:23 AM posted to alt.comp.os.windows-10
T
external usenet poster
 
Posts: 4,600
Default SATA Drives

On 1/4/19 5:00 PM, Tim wrote:
T wrote in :

On 1/4/19 1:31 PM, T wrote:
On 1/4/19 1:27 PM, Tim wrote:
nospam wrote in news:040120191558488272%
lid:

In article , Big Al
wrote:

Also, Samsung
drives are the only high-reliability drives I have come across. Stay
away from Intel.

crucial and samsung are both very good choices.

intel is *very* reliable, just not consumer priced (mostly).

Thoughts on Inland or Kingston M.2 2280s?
Local brick-and-mortar store has them at $50 US for 250GB.

My current motherboard has no provision for an M.2 drive. I assume I would
have to add a PCIe board to host the drive on. At that point, is the
performance worth the expense?


I have not found an add-on card I really like for this,
so I can't say


The not bootable thing annoys the s*** out of me.


Does that mean if I want to use an M.2 drive for my system drive I would
have to have bootable media on a USB or DVD set up to load Windows from
the M.2?


Hi Tim,

Okay, M.2 just describes a form factor and a slot on your
motherboard.

M.2 form-factor drives can be SATA drives or NVMe drives. You
have to look at your motherboard's specs to figure out what it will
accept, and at any add-on card's specs too.

M.2 SATA drives are just a lot smaller than standard SATA drives. They
perform exactly the same. NVMe drives are the fast ones.

Add-on cards have a hard time being bootable. Most M.2 add-on cards
I have seen only support M.2 SATA drives and are not bootable, so
any drive on the add-on card would be a second, non-bootable drive
for extra storage. Disclaimer: this may have changed, as I
haven't checked recently.

Update: this one says it supports NVMe drives:

https://www.startech.com/Cards-Adapt...d~PEXM2SAT32N1

And their chat line says it is bootable, if your motherboard's BIOS
supports NVMe drives and booting from NVMe drives. If your motherboard
does not have its own NVMe M.2 slot, I highly doubt it will boot
from this adapter. To be certain, check with your motherboard's
tech support.

-T

  #18  
Old January 5th 19, 04:50 AM posted to alt.comp.os.windows-10
nospam
external usenet poster
 
Posts: 4,718
Default SATA Drives

In article , wrote:

When I sell an SSD, it has to work and keep working, as when SSDs
go bad, they brick.

false.


I see it all the time.


then you're buying ****ty ssds.

overall, ssds are *extremely* reliable, much more so than mechanical
hard drives, and rarely brick.

however, quality does vary. some aren't as good as others and there are
always duds, as there are with any product.

The flash cells are not the only
thing that fails. The interface electronics fail too.


very rarely.

many older ssds did have firmware bugs, but those are mostly a thing of
the past and usually didn't cause an entire failure.

early hard drives had issues too.

And that DOES NOT show up on lifespan tests.


nobody said it did.


But pardon me.
You know all and see all. I stand corrected. You KNOW IT
ALL! No wonder I killfiled your stupid ass. Idiot.


nothing more than ad hominem attacks and cowardice. you make a lot of
claims but never back them up.





Just bite the bullet and get a Samsung.


actually, samsung isn't the most reliable, although it is quite good:
https://techreport.com/r.x/ssd-endurance-theend/earlyfailures.gif

however, noname stuff is best avoided.

nothing is perfect and there are both good and bad stories on *every*
make.

Remember to install
Samsung Magician to check your lifespan.


there is no need for that or any other lifespan utility since it's
exposed in smart and the ssd will likely outlast the device it's in
anyway, longer than any mechanical hard drive would have.

but in the unlikely event it does fail, replace it, restore from
backup. no big deal. certainly not worth obsessing over.
  #19  
Old January 5th 19, 08:19 AM posted to alt.comp.os.windows-10
Mike
external usenet poster
 
Posts: 185
Default SATA Drives

On 1/4/2019 8:50 PM, nospam wrote:
In article , wrote:

When I sell an SSD, it has to work and keep working, as when SSDs
go bad, they brick.

false.


I see it all the time.


then you're buying ****ty ssds.

overall, ssds are *extremely* reliable, much more so than mechanical
hard drives, and rarely brick.

however, quality does vary. some aren't as good as others and there are
always duds, as there are with any product.

The flash cells are not the only
thing that fails. The interface electronics fail too.


very rarely.

many older ssds did have firmware bugs, but those are mostly a thing of
the past and usually didn't cause an entire failure.

early hard drives had issues too.

And that DOES NOT show up on lifespan tests.


nobody said it did.


But pardon me.
You know all and see all. I stand corrected. You KNOW IT
ALL! No wonder I killfiled your stupid ass. Idiot.


nothing more than ad hominem attacks and cowardice. you make a lot of
claims but never back them up.





Just bite the bullet and get a Samsung.


actually, samsung isn't the most reliable, although it is quite good:
https://techreport.com/r.x/ssd-endurance-theend/earlyfailures.gif

however, noname stuff is best avoided.

nothing is perfect and there are both good and bad stories on *every*
make.

Remember to install
Samsung Magician to check your lifespan.


Can you explain lifespan in layman's terms?

I don't remember the exact numbers, but terms like MTBF
are often misconstrued. IIRC MTBF is the time by which a significant
percentage of devices will fail. In aggregate, that number is useful
for estimating warranty failures. But for the individual, it's
meaningless. Your device is either dead or it is not.
MTBF doesn't mean that YOUR device will last that long, or even close to
that long. All it means is that, if you have a large number of them,
about half of them will die before MTBF.
Are you feeling lucky?

How does TBW inform the single user?
And what does it measure anyway?
SMART reports system writes. Does that include write amplification?

If you don't abuse it mechanically or thermally, a HDD has few sources
of wearout.
The SSD has a GUARANTEED source of wearout. You are mechanically
abusing the cells at the micro level and taking epic measures to
spread that wearout and recover from inevitable cell failure with error
correction. It's a leaky bucket that you keep patching with duct tape.
When you run out of duct tape, you're done...maybe...

there is no need for that or any other lifespan utility since it's
exposed in smart and the ssd will likely outlast the device it's in
anyway, longer than any mechanical hard drive would have.


If you buy new computers and update regularly, I believe that's the case.
For those of us who buy the computers you discarded, thank you very much,
and use them
for another decade, not so much.

but in the unlikely event it does fail, replace it, restore from
backup. no big deal. certainly not worth obsessing over.


That's easy to say, but how many actually do that?
I frequently image 22GB of C: drive. But there's a terabyte of stuff
on there that isn't backed up regularly. I've got nowhere to put it.
And I've got way more places to stick it than most average users.
It's not lost, but would be a major effort to reconstruct from archived
DVD's.

Most of the people I know have no idea how to backup and restore the OS.
And nowhere to save the backup if they did.



  #20  
Old January 5th 19, 09:27 AM posted to alt.comp.os.windows-10
nospam
external usenet poster
 
Posts: 4,718
Default SATA Drives

In article , Mike
wrote:



Can you explain lifespan in layman's terms?


eventually, an ssd, hard drive or other component will wear out. that's
its lifespan.

for ssds, it's usually after many petabytes of writes, but that's an
expected lifetime, not a guarantee.

I don't remember the exact numbers, but terms like MTBF
are often misconstrued. IIRC MTBF is the time by which a significant
percentage of devices will fail. In aggregate, that number is useful
for estimating warranty failures. But for the individual, it's
meaningless. Your device is either dead or it is not.
MTBF doesn't mean that YOUR device will last that long, or even close to
that long. All it means is that, if you have a large number of them,
about half of them will die before MTBF.
Are you feeling lucky?


yep, mtbf isn't that useful for a single unit.

How does TBW inform the single user?
And what does it measure anyway?
SMART reports system writes. Does that include write amplification?


smart reports a lot of stuff, including bytes written, reallocated
blocks, uncorrectable errors, hours used and much more.

here's what intel sata ssds report:
https://www.intel.com/content/dam/su.../solid-state-drives/Intel_SSD_Smart_Attrib_for_SATA.PDF
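
For anyone who would rather pull those wear-related attributes programmatically than eyeball a vendor utility, here is a minimal sketch (not from this thread). It assumes smartmontools is installed, a SATA drive at /dev/sda, and vendor-style attribute names like Total_LBAs_Written, which vary by brand, so treat the names as examples only.

import subprocess

def smart_attributes(device="/dev/sda"):
    """Return {attribute_name: raw_value} parsed from `smartctl -A`."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    attrs = {}
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID followed by the name;
        # the raw value is the last column.
        if len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = fields[-1]
    return attrs

if __name__ == "__main__":
    attrs = smart_attributes()
    for name in ("Total_LBAs_Written", "Wear_Leveling_Count",
                 "Reallocated_Sector_Ct", "Power_On_Hours"):
        print(name, attrs.get(name, "not reported"))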

If you don't abuse it mechanically or thermally, a HDD has few sources
of wearout.


except for the moving parts, and there's a lot of them...

hard drives either fail early on (due to a manufacturing defect) or
they fail after a few years, when parts start to wear out.

The SSD has a GUARANTEED source of wearout. You are mechanically
abusing the cells at the micro level and taking epic measures to
spread that wearout and recover from inevitable cell failure with error
correction. It's a leaky bucket that you keep patching with duct tape.
When you run out of duct tape, you're done...maybe...


reading and writing to a hard drive is abusing the moving parts, which
have incredibly tight tolerances...

nothing lasts forever.

there is no need for that or any other lifespan utility since it's
exposed in smart and the ssd will likely outlast the device it's in
anyway, longer than any mechanical hard drive would have.


If you buy new computers and update regularly, I believe that's the case.
For those of us who buy the computers you discarded, thank you very much,
and use them for another decade, not so much.


used hard drives and ssds just means that the expected life will be
shorter.

ssds are still more reliable, other than the early ones that were
buggy, which might be found in older used equipment.

but in the unlikely event it does fail, replace it, restore from
backup. no big deal. certainly not worth obsessing over.


That's easy to say, but how many actually do that?


sadly, not many overall, although it should be quite high for those
reading usenet...

and something doesn't have to fail to lose data. it could be lost to
fire, flood, theft, etc., which is why *offsite* backups are important.

a backup won't do much good if it's next to the computer and the house
burns down, destroying both...

I frequently image 22GB of C: drive. But there's a terabyte of stuff
on there that isn't backed up regularly. I've got nowhere to put it.
And I've got way more places to stick it than most average users.
It's not lost, but would be a major effort to reconstruct from archived
DVD's.


buy a terabyte hard drive. problem solved.

you say you buy used stuff. it's *very* common to see used 1-2 tb
drives for cheap ($10-20, typically) because people have upgraded to
larger capacity drives. usually they're in relatively good shape too.

500gb and smaller are *really* cheap. very few people want something
that small anymore.

Most of the people I know have no idea how to backup and restore the OS.
And nowhere to save the backup if they did.


that's one reason why cloud backup services are very popular. they're
easy to set up and they work with little to no fuss.
  #21  
Old January 5th 19, 02:55 PM posted to alt.comp.os.windows-10
joe[_6_]
external usenet poster
 
Posts: 20
Default SATA Drives

On 1/5/2019 2:19 AM, Mike wrote:


I don't remember the exact numbers, but terms like MTBF
are often misconstrued. IIRC MTBF is the time by which a significant
percentage of devices will fail. In aggregate, that number is useful
for estimating warranty failures. But for the individual, it's
meaningless. Your device is either dead or it is not.
MTBF doesn't mean that YOUR device will last that long, or even close to
that long. All it means is that, if you have a large number of them,
about half of them will die before MTBF.


No, that is not what MTBF means.

snip


The SSD has a GUARANTEED source of wearout. You are mechanically
abusing the cells at the micro level and taking epic measures to
spread that wearout and recover from inevitable cell failure with error
correction. It's a leaky bucket that you keep patching with duct tape.
When you run out of duct tape, you're done...maybe...


Yes, and under normal use, how many years does it take to get to
wearout? Only you can make the computations for how you use your
drives. If the answer is 25-50 years, is SSD wearout really an issue?
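
As a rough illustration of that computation, here is a back-of-the-envelope sketch; the drive size, cycle rating, daily write volume and write amplification below are made-up assumptions, not figures from this thread.

capacity_gb        = 500     # usable NAND capacity
pe_cycles          = 3000    # rated program/erase cycles per cell (TLC)
host_writes_gb_day = 20      # what the OS and applications write per day
write_amp          = 2.0     # extra internal writes from garbage collection

rated_endurance_gb = capacity_gb * pe_cycles            # 1,500,000 GB = 1.5 PB
nand_writes_gb_day = host_writes_gb_day * write_amp
years_to_wearout   = rated_endurance_gb / nand_writes_gb_day / 365

print(f"Rated endurance : {rated_endurance_gb / 1e6:.1f} PB")
print(f"Years to wearout: {years_to_wearout:.0f}")      # roughly a century

With those (generous) assumptions the answer lands around a hundred years, which is the point joe is making: for typical desktop use, wearout is rarely the thing that kills the drive.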

snip

I frequently image 22GB of C: drive. But there's a terabyte of stuff
on there that isn't backed up regularly. I've got nowhere to put it.


That would be by your choice. External drives are readily available.

And I've got way more places to stick it than most average users.
It's not lost, but would be a major effort to reconstruct from archived
DVD's.


Restoring from a proper backup would be less effort.



Most of the people I know have no idea how to backup and restore the OS.
And nowhere to save the backup if they did.


Ignorance is a poor excuse. Backup tools are readily available.
  #22  
Old January 5th 19, 03:10 PM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default SATA Drives

Mike wrote:


Can you explain lifespan in layman's terms?


http://nomtbf.com/2015/05/determine-...-distribution/

"I find MTBF oversimplifies failure data to the point
of making the summary without value."

Rather than ogling the value, you're supposed to use the number
to work out how many spares to keep in your stock cabinet.
Maybe I give you a number, and you work out "I need to keep
five spares in the cabinet for this year, to cover the
possible failures from the hundred of these I just installed".
That would be a reason for seeking an MTBF number.

The number assumes some sort of random failures.
To be valid, the rate of failure has to have "just the right shape",
to justify putting an MTBF in print.

Because no product lasts long enough to give actual
"field data", MTBF is a calculated estimate (an extrapolation
of sorts). Field data (motor failures, head failures) can
be fed into the calculation to make it "semi-realistic".
Field data would be preferred if we could get it.

*******

SSDs wear out. (That's "different" than MTBF.)

TLC flash can be written 3000 times per location.

The drive uses "mapping" inside, to put LBAs at random locations,
such that near the end of drive life, one location is written
2999 times, another written 3001 times. The wear is "leveled",
by not having a linear 1:1 relationship between external LBA
and internal storage address.

Drive brands have different policies when they hit 3000.
With some brands, the drive *bricks* the instant it hits 3000.

For *each brand*, you have to research what that policy is,
as it affects your backup practices. With a sloppy drive,
you could be a sloppy SSD owner. With a "precise" drive,
you'd better be "wide awake" if you don't want to suffer
data loss. You come in one day, turn on the computer and
*boink*, no boot drive. Toast. Now, what did I do with my
backup ? And imagine how you feel, when you got that tablet
for $100 on Black Friday and the tablet goes *boink* and
the eMMC storage is soldered to the motherboard.
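
A toy simulation (mine, purely illustrative, not Paul's) of why that internal remapping matters: if a handful of hot LBAs always landed on the same physical blocks, those blocks would hit the 3000-cycle limit almost immediately, whereas steering each write to the least-worn block spreads the same traffic across the whole device.

import random

PHYS_BLOCKS = 100              # toy drive: 100 physical erase blocks
PE_LIMIT    = 3000             # writes each block can take (TLC-style rating)
HOT_LBAS    = list(range(5))   # 5 logical blocks receive all the traffic

def writes_until_first_wearout(level_wear):
    wear = [0] * PHYS_BLOCKS
    mapping = {lba: lba for lba in HOT_LBAS}    # logical -> physical
    total = 0
    while max(wear) < PE_LIMIT:
        lba = random.choice(HOT_LBAS)
        if level_wear:
            # remap the write to the least-worn physical block
            mapping[lba] = wear.index(min(wear))
        wear[mapping[lba]] += 1
        total += 1
    return total

random.seed(1)
print("fixed mapping :", writes_until_first_wearout(False))  # roughly 15,000 writes
print("wear leveling :", writes_until_first_wearout(True))   # roughly 300,000 writes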

*******

SSDs can have uncorrectable errors just like an HDD can.
You would expect this to be worse after you surpass
3000 writes per cell. And there was at least one
TLC-based drive where the cells became "mushy" and
the read rate of the drive slowed down, because
all of the written sectors needed error
correction after three months (the drive was not even remotely near end of life!).

Modern flash might require 50 extra bytes
of storage for the error corrector, over and
above the 512 bytes of user data: 562 bytes of storage
are needed to "represent" the user's 512 bytes
of data. The high overhead tells you how "spongy" the
storage is.

Eons ago, hard drives used the Fire Polynomial. A 512 byte
sector might only have 11 bits of overhead (I can no longer
find a reference to this period in history and I'm using my
own feeble recollection of the number). Just like SSDs,
changes to coding methods at the head level, the usage
of PRML, mean that HDDs also have a heavy error corrector
overhead today. But I've not seen any hints as to how that
has changed, versus where SSDs started out. When the SLC
SSDs came out, they didn't need 50 bytes overhead per 512 byte
sector, and the number was somewhat less.
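
Putting Paul's two recalled overhead figures side by side, just as arithmetic (both numbers are his from-memory estimates above, so treat the comparison as rough):

sector_bits    = 512 * 8
fire_ecc_bits  = 11        # recalled overhead per 512-byte sector, Fire-code era HDDs
flash_ecc_bits = 50 * 8    # ~50 extra bytes per 512-byte sector on modern flash

print(f"Fire-code era overhead: {fire_ecc_bits / sector_bits:.2%}")   # ~0.27%
print(f"Modern flash overhead : {flash_ecc_bits / sector_bits:.2%}")  # ~9.77%
print(f"Modern code rate      : {512 / 562:.1%}")                     # ~91.1%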

The Fire Polynomial excelled at independent bit errors.
This means, on the hard drives they were used on, the coding
method caused little "burst like" error side effects. Each
polynomial used for error detection and correction has a
set of math properties that go with it. You must select the
correct one, for the situation. A trivial change to how the
hardware works, could wreak havoc for the "polynomial guy" :-)
And those guys do work hard. Somebody at work did a silicon
block for error correction, and it took him several months to finish
the design. And the "skills" he learned doing that helped
him find a job at another company :-) Bonus.

An example of "belt and suspenders" method, is three dimensional
Reed Solomon on CDs. This provides excellent error correction,
even allowing the user to take a nail and make "radial scratches"
on the surface. But the method only allows certain kinds of
scratches to be handled well. Generally, CDs fail because the
laser cannot track the groove, not because Reed Solomon cannot
correct the error-filled data. One other thing, is that even
when a CD is prepared with three dimensional protection, the
"corrector" in the drive might only be using two dimensions.
It may not actually possess the ability to use all available
info when correcting on reads. It's possible Reed Solomon is
being used in the SSD case too.

PDF page 60 of 66 here gives a table of correction method versus flash type.
A method can only be selected if it's known whether errors are
"scattered" or "burst" on actual devices, as the methods are good
at different things. Some methods are complex enough that they're
done on the SSD processor (firmware), rather than with a
dedicated logic block (the way it should be done).

https://repositories.lib.utexas.edu/...pdf?sequence=1

Paul
  #23  
Old January 6th 19, 01:31 PM posted to alt.comp.os.windows-10
joe[_6_]
external usenet poster
 
Posts: 20
Default SATA Drives

On 1/5/2019 10:33 PM, Jim H wrote:
On Sat, 5 Jan 2019 00:19:37 -0800, in ,
Mike wrote:

MTBF doesn't mean that YOUR device will last that long, or even close to
that long. All it means is that, if you have a large number of them,
about half of them will die before MTBF.
Are you feeling lucky?



You can wish that's what the MTBF figure tells you, but in reality the
probability of your device lasting as long as the MTBF figure is more
like 37%.


Only if you assume the failure rate is constant over time and there are
no wear out mechanisms.
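
For concreteness, joe's caveat in numbers: under a constant failure rate (exponential lifetimes, no wear-out), the chance of a single unit surviving to its MTBF is e^-1, which is where the 37% comes from. A quick sketch, using the 1.5-million-hour figure that comes up later in the thread:

import math

mtbf_hours = 1_500_000
print(f"P(survive to MTBF) = {math.exp(-1):.3f}")          # ~0.368, i.e. ~37%

five_years = 5 * 365 * 24
print(f"P(survive 5 years) = {math.exp(-five_years / mtbf_hours):.3f}")  # ~0.97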

  #24  
Old January 6th 19, 07:01 PM posted to alt.comp.os.windows-10
joe[_6_]
external usenet poster
 
Posts: 20
Default SATA Drives

On 1/6/2019 11:16 AM, Wolf K wrote:
On 2019-01-06 08:31, joe wrote:
On 1/5/2019 10:33 PM, Jim H wrote:
On Sat, 5 Jan 2019 00:19:37 -0800, in ,
Mike wrote:

MTBF doesn't mean that YOUR device will last that long, or even close to
that long. All it means is that, if you have a large number of them,
about half of them will die before MTBF.
Are you feeling lucky?


You can wish that's what the MTBF figure tells you, but in reality the
probability of your device lasting as long as the MTBF figure is more
like 37%.


Only if you assume the failure rate is constant over time and there
are no wear out mechanisms.


Seems to me that MTBF is testable, no assumptions required. I expect the
published figure(s) to summarise the test results. Where are the stats?



So, how would you test an SSD with 1.5 million hour MTBF?

  #25  
Old January 6th 19, 08:22 PM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default SATA Drives

Wolf K wrote:
On 2019-01-06 14:01, joe wrote:
On 1/6/2019 11:16 AM, Wolf K wrote:
On 2019-01-06 08:31, joe wrote:
On 1/5/2019 10:33 PM, Jim H wrote:
On Sat, 5 Jan 2019 00:19:37 -0800, in ,
Mike wrote:

MTBF doesn't mean that YOUR device will last that long, or even
close to
that long. All it means is that, if you have a large number of them,
about half of them will die before MTBF.
Are you feeling lucky?


You can wish that's what the MTBF figure tells you, but in reality the
probability of your device lasting as long as the MTBF figure is more
like 37%.


Only if you assume the failure rate is constant over time and there
are no wear out mechanisms.

Seems to me that MTBF is testable, no assumptions required. I expect
the published figure(s) to summarise the test results. Where are the
stats?



So, how would you test an SSD with 1.5 million hour MTBF?


The converse question is, How do you know the SSD's MTBF is 1.5 Mh?

IOW, the claimed MTBF is based on some testing. What's tested? And how
are those test results converted into MTBF numbers?

I know that when mechanical testing doesn't result in failure in a
reasonable time, wear is used as an indicator of time-to-failure. Is
there something similar for SSDs?

Best,


https://serverfault.com/questions/64...n-failures-ssd

"MTTF of 1.5 million hours sounds somewhat plausible.

That would roughly be a test with 1000 drives running for 6 months
and 3 drives failing. The AFR would be

(2* 6 months * 3)/(1000 drives)=0.6% annually and the

MTTF = 1yr/0.6%=1,460,967 hours or 167 years.

A different way to look at that number is when you have 167 drives
and leave them running for a year the manufacturer claims that on
average you'll see one drive fail.
"

Since "the number" for HDD and SSD is so close, you have
to suspect there's something wrong with the methodology
or notion.

One device is just a circuit board with some LSI on it.
The other device has a circuit board plus mechanical bits.

Paul
  #26  
Old January 6th 19, 08:30 PM posted to alt.comp.os.windows-10
NY
external usenet poster
 
Posts: 586
Default SATA Drives

"Paul" wrote in message
...

Since "the number" for HDD and SSD is so close, you have
to suspect there's something wrong with the methodology
or notion.

One device is just a circuit board with some LSI on it.
The other device has a circuit board plus mechanical bits.


If the MTBF is very similar for both SSD and HDD, it suggests that the
control and interface logic (the part that is common to both types) is the
weakest link, which hides any other failures in the data storage mechanism
(solid-state or spinning disc and head assembly).

  #27  
Old January 6th 19, 09:11 PM posted to alt.comp.os.windows-10
Mike
external usenet poster
 
Posts: 185
Default SATA Drives

On 1/6/2019 11:25 AM, Wolf K wrote:
On 2019-01-06 14:01, joe wrote:
On 1/6/2019 11:16 AM, Wolf K wrote:
On 2019-01-06 08:31, joe wrote:
On 1/5/2019 10:33 PM, Jim H wrote:
On Sat, 5 Jan 2019 00:19:37 -0800, in ,
Mike wrote:

MTBF doesn't mean that YOUR device will last that long, or even close to
that long. All it means is that, if you have a large number of them,
about half of them will die before MTBF.
Are you feeling lucky?


You can wish that's what the MTBF figure tells you, but in reality the
probability of your device lasting as long as the MTBF figure is more
like 37%.


Only if you assume the failure rate is constant over time and there
are no wear out mechanisms.

Seems to me that MTBF is testable, no assumptions required. I expect
the published figure(s) to summarise the test results. Where are the
stats?



So, how would you test an SSD with 1.5 million hour MTBF?


The converse question is, How do you know the SSD's MTBF is 1.5 Mh?

IOW, the claimed MTBF is based on some testing. What's tested? And how
are those test results converted into MTBF numbers?

I know that when mechanical testing doesn't result in failure in a
reasonable time, wear is used as an indicator of time-to-failure. Is
there something similar for SSDs?

Best,

I have not done any MTBF computations since 1980. Back then, it was
garbage in/garbage out. The component guys had some basic failure rate
numbers. We took the component count for each type, multiplied it by the
failure rate and added it all up...GIGO.
So, TTL had a rate per chip. The number of pins related to the number of
connections. Sockets had dramatically bigger failure numbers than soldered
connections. The list is endless.
Result was a number that kept the reliability overlords happy.
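
A sketch of what that parts-count procedure looked like; the part counts and per-part failure rates below are invented for illustration, not real handbook values.

parts = {
    # part type: (count, failures per million hours per part) -- made up
    "TTL IC, soldered": (40, 0.10),
    "TTL IC, socketed": (10, 0.25),   # sockets were penalized heavily
    "resistor":         (120, 0.002),
    "capacitor":        (60, 0.01),
    "connector pin":    (200, 0.001),
}

total_fpmh = sum(count * rate for count, rate in parts.values())
mtbf_hours = 1e6 / total_fpmh

print(f"Predicted failure rate: {total_fpmh:.2f} failures per million hours")
print(f"Predicted MTBF        : {mtbf_hours:,.0f} hours")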

As an engineer, the most important part of it all was the design guidance
the PROCESS provided.
Don't use sockets, keep temps down, this technology was more reliable than
that chip technology, vibration is to be avoided, etc.
We did accelerated life testing for relatively short periods, but
failures were very rare. And the duration was so short as to be
statistically irrelevant.

Field failures were often more environmental than anything else.
Instrumentation used in a damp, corrosive environment had dramatically
higher failure rates. But the MTBF number had no input for that.

As for SSDs, the wearout process is known. And the time to failure
is short enough that you can actually get some info from testing.

It appears that each generation makes the problem worse by cramming
more bits into a single cell...then fixing it in software.
Wear leveling
Data error correction
Automatic swapping out of inevitable bad cells
Overprovisioning
Secret stuff

At the cell level, the MTBF is laughably short.
Hopefully all the software magic can make the SSD reliable
at the macro level.

I'd rather they fix the root cause, but that looks like it will
have to wait for an innovation in the basic cell operation. If
I understand correctly, we're already down to counting electrons
on two hands to determine the stored value. I'm amazed it works at all.
  #28  
Old January 6th 19, 09:45 PM posted to alt.comp.os.windows-10
Tim[_10_]
external usenet poster
 
Posts: 249
Default SATA Drives

Wolf K wrote in
:

On 2019-01-06 08:31, joe wrote:
On 1/5/2019 10:33 PM, Jim H wrote:
On Sat, 5 Jan 2019 00:19:37 -0800, in ,
Mike wrote:

MTBF doesn't mean that YOUR device will last that long, or even
close to that long. All it means is that, if you have a large
number of them, about half of them will die before MTBF.
Are you feeling lucky?


You can wish that's what the MTBF figure tells you, but in reality
the probability of your device lasting as long as the MTBF figure is
more like 37%.


Only if you assume the failure rate is constant over time and there
are no wear out mechanisms.


Seems to me that MTBF is testable, no assumptions required. I expect
the published figure(s) to summarise the test results. Where are the
stats?



For newly released products, MTBF is calculated from pre-release test
results. These are usually the results of accelerated testing, and can't
account for actual in-use numbers. It is modified (sometimes) based on
real world experience. To avoid complications most manufacturers just
stay with the original MTBF. Reviewers and independent testers try to
work with real world data.
  #29  
Old January 6th 19, 09:53 PM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default SATA Drives

Mike wrote:


At the cell level, the MTBF is laughably short.


But that's not true.

If you don't write to a cell, it holds up quite well.

When you're not writing the SSD, it should have
similar reliability to the odds of the DRAM on
your computer going bad.

I had a RAM failure once that would serve as an
example. Computer crashes. Testing reveals that
one chip on the DIMM has gone tristate and it
"won't answer the bell" when chip-selected. It
returns random data from end to end.

Rather than that being the "odds of one bit failing"
in the memory array, instead something failed
closer to the interface, and the chip was no longer
functioning at all. And apparently this is a
recognized fault type - it happens often enough
to be a "thing". That's why CHIPKILL and nibble wide
memory chips are popular (CHIPKILL can correct a
four bit error), so if a chip fails entirely,
the module has sufficient redundancy to cover for it
(error correction in memory controller).

Some Enterprise SSDs have RAIN inside, as a foil for
this failure mode. On consumer drives, we're "eating"
this failure mode (data loss).

https://www.micron.com/-/media/clien...f_ssd_rain.pdf

Paul
  #30  
Old January 6th 19, 11:30 PM posted to alt.comp.os.windows-10
Mike
external usenet poster
 
Posts: 185
Default SATA Drives

On 1/6/2019 1:53 PM, Paul wrote:
Mike wrote:


At the cell level, the MTBF is laughably short.


But that's not true.

If you don't write to a cell, it holds up quite well.


We have very different views of reliability.
If there's not enough oxygen, just don't breathe,
makes about as much sense.

3000 erase-write cycles is a TINY number.
The only way that works is if you don't do it.
They take Herculean measures to spread the wear
and recover from bad data bits.

BucketsRus.com
Our 5 gallon buckets are full of holes, but you
get a free roll of duct tape and a monkey that
follows you around patching new holes.
And if you only
carry a quart at a time, it's likely that most
of the contents will arrive at the destination.
And they're more expensive than our
competitor's quart buckets, so they must be better.

Get yours today...

Would you rather drive over a bridge made of papier mache
in the rain, even tho there was a permanent staff of repair
drones?
Or a steel/concrete one that didn't require spackle on
an hourly basis.

I believe in the "fix it in software" concept, but I'd
prefer a design that didn't require as much of it.


When you're not writing the SSD, it should have
similar reliability to the odds of the DRAM on
your computer going bad.


I'm not buying it.
DRAM doesn't have a designed-in failure mode.
DRAM fails if the manufacturing process failed and you got
one with a latent defect.
SSDs have DESIGNED-IN failures in a SHORT time
pasted over with failure symptom mitigation.
Not at all the same thing.

I like the analogy that there's probably more horsepower
correcting errors on your SSD than it took to get
man to the moon.

I certainly don't understand the process, but I
imagine the SSD write process as shooting bullets
thru a wall and hoping that the stuff inside
is too big to escape thru the holes...YET.
It's not a matter of IF, but WHEN. And the when
is predicted at around 3000 bullets. And we shoot
a lot more bullets than needed just so we don't
shoot this one wall too many times.
So, let's make the walls smaller and more fragile
so we can
have more walls that we're not shooting.
What could possibly go wrong?

I'm not blind to the fact that magnetic and optical
media also require failure mitigation, but I characterize
that as a media failure that can be improved with better
manufacturing processes. SSD individual cell reliability
seems to be decreasing over time. And we're not too far
from physical limitations on cell size and thus the amount
of redundancy that can be applied using known technology.

Ain't progress fun...


I had a RAM failure once that would serve as an
example. Computer crashes. Testing reveals that
one chip on the DIMM has gone tristate and it
"won't answer the bell" when chip-selected. It
returns random data from end to end.

Rather than that being the "odds of one bit failing"
in the memory array, instead something failed
closer to the interface, and the chip was no longer
functioning at all. And apparently this is a
recognized fault type - it happens often enough
to be a "thing". That's why CHIPKILL and nibble wide
memory chips are popular (CHIPKILL can correct a
four bit error), so if a chip fails entirely,
the module has sufficient redundancy to cover for it
(error correction in memory controller).

Some Enterprise SSDs have RAIN inside, as a foil for
this failure mode. On consumer drives, we're "eating"
this failure mode (data loss).

https://www.micron.com/-/media/clien...f_ssd_rain.pdf


Paul


 



