SATA Drives



 
 
  #31  
Old January 7th 19, 01:00 AM posted to alt.comp.os.windows-10
pjp[_10_]
external usenet poster
 
Posts: 1,183
Default SATA Drives

In article , lid says...

Mike wrote:


At the cell level, the MTBF is laughably short.


But that's not true.

If you don't write to a cell, it holds up quite well.

When you're not writing the SSD, it should have
similar reliability to the odds of the DRAM on
your computer going bad.

I had a RAM failure once that would serve as an
example. Computer crashes. Testing reveals that
one chip on the DIMM has gone tristate and it
"won't answer the bell" when chip-selected. It
returns random data from end to end.

Rather than that being the "odds of one bit failing"
in the memory array, instead something failed
closer to the interface, and the chip was no longer
functioning at all. And apparently this is a
recognized fault type - it happens often enough
to be a "thing". That's why CHIPKILL and nibble wide
memory chips are popular (CHIPKILL can correct a
four bit error), so if a chip fails entirely,
the module has sufficient redundancy to cover for it
(error correction in memory controller).

Some Enterprise SSDs have RAIN inside, as a foil for
this failure mode. On consumer drives, we're "eating"
this failure mode (data loss).

https://www.micron.com/-/media/clien...f_ssd_rain.pdf
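
(For anyone curious how chip-level redundancy can recover a wholly dead chip: the sketch below uses plain XOR parity as a stand-in. Real CHIPKILL and RAIN implementations use Reed-Solomon-style symbol ECC, so treat this as an illustration of the erasure-recovery idea only, with made-up data.)

# Toy sketch: XOR parity across data chips lets a memory or SSD controller
# rebuild the entire output of one dead chip.  Real CHIPKILL / RAIN uses
# Reed-Solomon-style symbol ECC; this only shows the erasure-recovery idea.
from functools import reduce

def make_parity(chip_data):
    # Parity chip = XOR of the corresponding bytes from every data chip.
    return [reduce(lambda a, b: a ^ b, column) for column in zip(*chip_data)]

def rebuild(chip_data, parity, dead_index):
    # Reconstruct the dead chip from the surviving chips plus the parity chip.
    survivors = [c for i, c in enumerate(chip_data) if i != dead_index]
    return [reduce(lambda a, b: a ^ b, column) ^ p
            for column, p in zip(zip(*survivors), parity)]

chips = [[0x12, 0x34], [0xAB, 0xCD], [0x55, 0xAA], [0x0F, 0xF0]]   # 4 data chips
parity = make_parity(chips)

dead = 2                                  # pretend chip 2 "won't answer the bell"
recovered = rebuild(chips, parity, dead)
assert recovered == chips[dead]
print("rebuilt chip", dead, ":", [hex(b) for b in recovered])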

Paul


I was doing some board-level coding for a company a long time ago now, and
we ended up with a batch of bad memory chips. They'd work OK "when
working", but leave them sit a while and you did not get back what you
wrote. It was a pain to find, and the chips became worthless.
  #32  
Old January 7th 19, 01:34 AM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default SATA Drives

pjp wrote:


I was doing some board-level coding for a company a long time ago now, and
we ended up with a batch of bad memory chips. They'd work OK "when
working", but leave them sit a while and you did not get back what you
wrote. It was a pain to find, and the chips became worthless.


DRAM needs to be refreshed, and that's what happens
when you don't set the autorefresh timer to the
correct value.
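
(Rough numbers for a sense of scale; the retention time and row count below are typical textbook values assumed for illustration, not specific to the chips in that story.)

# Back-of-envelope DRAM refresh math.  Values assumed for illustration.
retention_ms = 64.0      # every row must be refreshed within ~64 ms
rows = 8192              # rows covered by the auto-refresh counter

trefi_us = retention_ms * 1000.0 / rows
print(f"one refresh command roughly every {trefi_us:.2f} us")    # ~7.81 us

# If the auto-refresh timer is programmed, say, 4x too slow, each row goes
# 4 * 64 ms = 256 ms between refreshes and the weakest cells leak below the
# sense threshold: the "leave it sit a while and reads go bad" symptom.
slow_factor = 4
print(f"mis-set timer: rows refreshed every {slow_factor * retention_ms:.0f} ms")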

Paul
  #34  
Old January 7th 19, 09:17 AM posted to alt.comp.os.windows-10
T
external usenet poster
 
Posts: 4,600
Default SATA Drives

On 1/6/19 5:31 AM, joe wrote:
On 1/5/2019 10:33 PM, Jim H wrote:
On Sat, 5 Jan 2019 00:19:37 -0800, in ,
Mike wrote:

MTBF doesn't mean that YOUR device will last that long, or even close to
that long. All it means is that, if you have a large number of them,
about half of them will die before MTBF.
Are you feeling lucky?



You can wish that's what the MTBF figure tells you, but in reality the
probability of your device lasting as long as the MTBF figure is more
like 37%.


Only if you assume the failure rate is constant over time and there are
no wear out mechanisms.


Discussing MTBF is a way to start a flame war with those
who have never done an MTBF study and do not know what
it means.

Basically, it means that if you put that many devices in front
of you on a test bench, one is predicted to fail in an hour/year/
whatever. It is not a very good number to count on. It just
sounds great to hear you have a 1 million in the number. I
did MTBF studies for the military years ago. Pretty much
worthless.
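
(Both that test-bench reading and the 37% figure quoted above fall out of a constant-failure-rate model, sketched below with an illustrative 1.5-million-hour figure. The model ignores wear-out entirely, which is part of why the single number is of limited use.)

# MTBF under a constant-failure-rate (exponential) model.
import math

mtbf_hours = 1_500_000            # an illustrative datasheet-style claim
hours_per_year = 24 * 365

# Chance one unit is still alive at t = MTBF: e^-1, about 37%.
print(f"P(survive to MTBF) = {math.exp(-1):.0%}")

# The "test bench" reading: with N units running, expected failures per year.
n_units = 1000
expected_failures = n_units * hours_per_year / mtbf_hours
print(f"{n_units} units -> about {expected_failures:.1f} failures per year")

# Median life (half the population gone) comes well before the MTBF.
median_years = mtbf_hours * math.log(2) / hours_per_year
print(f"median life ~ {median_years:.0f} years vs MTBF ~ {mtbf_hours / hours_per_year:.0f} years")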

Here is a good article on it:

https://www.bmc.com/blogs/mtbf-vs-mt...ts-difference/

My advice is to look at the warranty. That will give you a
good estimate of how long things are expected to last. In
college, we were taught to put the warranty at 90% of useful
lifespan.

But be aware that a lot of folks don't put the effort into
figuring out what their useful lifespan is, so they put down
one year or something competitive-sounding. Intel's SSDs
have a 5-year warranty, but crap out (brick) a lot sooner
than that.




  #35  
Old January 7th 19, 11:06 AM posted to alt.comp.os.windows-10
mechanic
external usenet poster
 
Posts: 1,064
Default SATA Drives

On Mon, 7 Jan 2019 01:17:07 -0800, T wrote:

I did MTBF studies for the military years ago. Pretty much
worthless.


Especially to those looking for MTBF numbers!
  #36  
Old January 7th 19, 12:50 PM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default SATA Drives

mechanic wrote:
On Mon, 7 Jan 2019 01:17:07 -0800, T wrote:

I did MTBF studies for the military years ago. Pretty much
worthless.


Especially to those looking for MTBF numbers!


On subsystems with mixed hardware/firmware, the numbers are
also misleading, because they suggest we know how to quantify
firmware failures.

Seagate had some hard drives that would "brick" after 30 days
of usage. This wasn't a seized motor, or a broken arm or flex
cable, this was a firmware failure. The drives were actually
recoverable... with a complicated procedure that involved
starting the hard drive, with the controller board unscrewed
and not touching the connection to the heads. It also
required connecting a three wire, TTL level, serial port
to the controller board and typing a cryptic set of commands.

The MTBF number would not cover the failure in that case
of the Seagate drive. And for some of the stuff we worked
on, one of the reliability staff just pulled a number out
of the air and said for all we know, firmware failures
for a product could be 10x more prevalent than some other
kinds of failures. It's not something that you could
apply the "Easy Bake Oven" to and do accelerated life testing on.

When you see those numbers for the products we buy, they're
not really a proper portrayal of what could happen to them.
It's like computing an MTBF for the hinge of one of the
doors on your car, and using that as a metric for how long
the whole car will last.

It's like saying you have a 0.38% chance of getting cancer.
It's a single point specification that doesn't take imponderables
into account. It has virtually no predictive value.

Some Seagate drives were "generally terrible", then another
generation would come along where they were "good". The MTBF
number on the datasheet for both products was the same. If my
drive failed after 3 months, there wasn't much solace in that
1.5 million hour MTBF. In fact, the court of public opinion
might be a better indicator of what to expect than the
math calc (a running average of experiences involving
field data). I might be able to load my stock room with
drives based on how often other people were replacing theirs
(2 to 3 years of use).

If the pool of drives started off with the quality of
this drive, it might be different. And this drive never
parks the heads when it's not being used. The heads
are always loaded on this drive. The other two drives
in the computer use head parking, to extend their
lives. This drive has no delay when you access it, and
the motor sound doesn't change either (not a two-speed
drive). Apparently every once in a while, they make
a good drive.

https://i.postimg.cc/rpJpKMTR/no-domino-flaws-here.gif

The drive temperature is 22C, because an intake fan blows
right across the surface of the drive. The bay covers
were removed, and a frame with a low-speed fan was fastened
in front of the lower three bays. The optical drive at
the top works as normal (not part of the fan scheme). The
computer case is really old, and this fan is actually part
of "general cooling" as opposed to being a "drive
temperature experiment". The rear of the case has no
exhaust fan, and you leave a slot or two open
for exhaust (reduced backpressure).

Paul
  #37  
Old January 7th 19, 01:27 PM posted to alt.comp.os.windows-10
joe[_6_]
external usenet poster
 
Posts: 20
Default SATA Drives

On 1/6/2019 5:30 PM, Mike wrote:
snip
We have very different views of reliability.
If there's not enough oxygen, just don't breathe,
makes about as much sense.

3000 erase-write cycles is a TINY number.
The only way that works is if you don't do it.


You seem fixated on a number. How long does it take, under normal use,
to do 3000 erase-write cycles? Is it days, months, or many years? If the
answer is many years, then does it really matter?
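
(One way to put rough numbers on that question, with purely illustrative assumptions about capacity, write amplification and daily write volume, none of them tied to a particular drive:)

# Rough lifetime estimate for a 3000-cycle rating.  All values are
# illustrative assumptions, not any particular drive or workload.
capacity_gb = 250           # drive size
pe_cycles = 3000            # rated program/erase cycles per cell
write_amp = 2.0             # assumed write amplification factor
daily_writes_gb = 20        # a fairly heavy desktop workload

endurance_tb = capacity_gb * pe_cycles / write_amp / 1000
years = endurance_tb * 1000 / daily_writes_gb / 365

print(f"usable endurance ~ {endurance_tb:.0f} TB written")
print(f"at {daily_writes_gb} GB/day that is ~{years:.0f} years to wear the cells out")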

They take Herculean measures to spread the wear
and recover from bad data bits.


From the user's standpoint, no effort is needed as that is all done in
the SSD.

snip
  #38  
Old January 7th 19, 03:11 PM posted to alt.comp.os.windows-10
Ken Blake[_5_]
external usenet poster
 
Posts: 2,221
Default SATA Drives

On Sun, 6 Jan 2019 13:01:45 -0600, joe wrote:

On 1/6/2019 11:16 AM, Wolf K wrote:
On 2019-01-06 08:31, joe wrote:
On 1/5/2019 10:33 PM, Jim H wrote:
On Sat, 5 Jan 2019 00:19:37 -0800, in ,
Mike wrote:

MTBF doesn't mean that YOUR device will last that long, or even
close to
that long. All it means is that, if you have a large number of them,
about half of them will die before MTBF.
Are you feeling lucky?


You can wish that's what the MTBF figure tells you, but in reality the
probability of your device lasting as long as the MTBF figure is more
like 37%.


Only if you assume the failure rate is constant over time and there
are no wear out mechanisms.


Seems to me that MTBF is testable, no assumptions required. I expect the
published figure(s) to summarise the test results. Where are the stats?



So, how would you test an SSD with 1.5 million hour MTBF?



If you saw a device for which a 1.5 million hour MTBF was claimed,
would you believe it? If my arithmetic is right, 1.5 million hours is
over 160 years. How did they test it and determine that number?
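
(The arithmetic is right, and the short answer to "how" is that nobody runs one drive for 171 years; the figure comes from pooled device-hours across many units, plus accelerated stress. The population and failure counts below are made up purely to show the shape of the calculation.)

# 1.5 million hours really is about 171 years, so the number is estimated
# from pooled device-hours, not from running one unit to failure.
mtbf_hours = 1_500_000
hours_per_year = 24 * 365
print(f"{mtbf_hours:,} h = {mtbf_hours / hours_per_year:.0f} years")     # ~171

units = 1000          # drives on the test rack (illustrative)
test_hours = 3000     # roughly four months of continuous running
failures = 2          # failures observed during the test

device_hours = units * test_hours
print(f"{device_hours:,} device-hours / {failures} failures "
      f"= {device_hours / failures:,.0f} h MTBF estimate")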
  #39  
Old January 7th 19, 03:46 PM posted to alt.comp.os.windows-10
nospam
external usenet poster
 
Posts: 4,718
Default SATA Drives

In article , wrote:

My advice is to look at the warranty. That will give you a
good estimate of how long things are expected to last. In
college, we were taught to put the warranty at 90% of useful
lifespan.


those are two separate things.

But be aware that a lot of folks don't put the effort into
figuring out what their useful lifespan is, so they put down
one year or something competitive-sounding. Intel's SSDs
have a 5-year warranty, but crap out (brick) a lot sooner
than that.


they do not. intel ssds are very reliable.
  #40  
Old January 7th 19, 03:46 PM posted to alt.comp.os.windows-10
nospam
external usenet poster
 
Posts: 4,718
Default SATA Drives

In article , Ken Blake
wrote:


If you saw a device for which a 1.5 million hour MTBF was claimed,
would you believe it? If my arithmetic is right, 1.5 million hours is
over 160 years. How did they test it and determine that number?


accelerated testing.
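
(For a sense of how acceleration compresses calendar time, here is the commonly used Arrhenius temperature model with an assumed activation energy; the specific numbers are illustrative, not any vendor's actual test conditions.)

# Arrhenius temperature-acceleration model, a standard reliability
# assumption.  Activation energy and temperatures assumed for illustration.
import math

k = 8.617e-5               # Boltzmann constant, eV/K
Ea = 0.7                   # assumed activation energy, eV
T_use = 273.15 + 40        # normal operating temperature, K
T_stress = 273.15 + 85     # stress / burn-in temperature, K

af = math.exp((Ea / k) * (1 / T_use - 1 / T_stress))
print(f"acceleration factor ~ {af:.0f}x")                      # ~26x

# With ~26x acceleration, 1000 hours in the oven stands in for roughly
# three years of normal use, which is how a finite test program ends up
# quoted as a million-plus-hour MTBF.
print(f"1000 h at {T_stress - 273.15:.0f} C ~ {1000 * af / 8760:.1f} years at {T_use - 273.15:.0f} C")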
  #41  
Old January 7th 19, 06:33 PM posted to alt.comp.os.windows-10
Paul[_32_]
external usenet poster
 
Posts: 11,873
Default SATA Drives

nospam wrote:
In article , wrote:

My advice is to look at the warranty. That will give you a
good estimate of how long things are expected to last. In
college, we were taught to put the warranty at 90% of useful
lifespan.


those are two separate things.

But be aware that a lot of folks don't put the effort into
figuring out what their useful lifespan is, so they put down
one year or something competitive-sounding. Intel's SSDs
have a 5-year warranty, but crap out (brick) a lot sooner
than that.


they do not. intel ssds are very reliable.


http://knowledge.seagate.com/article...S/FAQ/174791en

"Seagate's new standard is AFR"

"The product shall achieve an Annualized Failure Rate - AFR - of 0.73%"

https://www.networkworld.com/article...ssd-myths.html

"Exhaustive studies have shown that SSDs have an
annual failure rate of tenths of one percent,
while the AFRs for HDDs can run as high as 4 to 6 percent." === Boo! and/or Hiss!

You can see in a slap-fest, there is plenty of slapping
to go around.

Not too much weight should go into the hyperbole, beyond
what your own experience shows.
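
(To relate the AFR figures quoted above to the MTBF numbers discussed earlier in the thread, here is a quick conversion sketch; it leans on the same constant-failure-rate assumption, so take it with the same grain of salt as the headline numbers.)

# Converting between AFR and MTBF under a constant failure rate.
import math

HOURS_PER_YEAR = 24 * 365

def afr_from_mtbf(mtbf_hours):
    # Annualized failure rate implied by an MTBF figure.
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

def mtbf_from_afr(afr):
    # MTBF implied by an annualized failure rate.
    return -HOURS_PER_YEAR / math.log(1 - afr)

print(f"0.73% AFR   -> {mtbf_from_afr(0.0073):,.0f} h MTBF")     # ~1.2 million
print(f"1.5M h MTBF -> {afr_from_mtbf(1_500_000):.2%} AFR")      # ~0.58% per year
print(f"5% AFR      -> {mtbf_from_afr(0.05):,.0f} h MTBF")       # ~171,000 h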

If you had an OCZ SSD brick on you, you're less likely to
believe the "tenths of one percent" thing :-) (Back in the era
when they still couldn't write firmware.) One of the
people in the newsgroup had a drive brick overnight (it wouldn't
work the next day). And at the time, the adoption rate for
SSDs was still pretty low.

Once you discount the wear mechanism of Flash chips,
and assume the critical data storage area in an SSD
is made of "sparkle ponies", I'm sure you'll be hitting
an AFR of "tenths of one percent" :-/ If you discount
every possible failure mechanism, then it cannot fail!!!
Quick! Somebody get me the number-grinder, to grind
me some new numbers.

Even in the current day, firmware could still be an issue.
If the failure rates are low, we might not even notice
there's an issue.

Paul
  #42  
Old January 7th 19, 06:51 PM posted to alt.comp.os.windows-10
nospam
external usenet poster
 
Posts: 4,718
Default SATA Drives

In article , Paul
wrote:

https://www.networkworld.com/article...ng-ssd-myths.html

"Exhaustive studies have shown that SSDs have an
annual failure rate of tenths of one percent,
while the AFRs for HDDs can run as high as 4 to 6 percent." === Boo!
and/or Hiss!


from that link,
And since SSDs contain billions of cells, we're talking about an
enormous amount of data that can be written and deleted at every
moment of every day of the drive's life. For example, one 100GB SSD
that offers 10 drive writes per day can support 1TB (terabyte) of
writing each and every single day, 365 days a year for five years.

very few people write a terabyte *every* *day*.

a more realistic amount would be in the range of 10 gigabytes, which
would put the lifetime (based on writes) at 500 years. something *else*
is likely to fail first, including the user, who won't live anywhere
near that long.

tl;dr - ssds are *very* reliable.
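
(Checking that arithmetic: the 100 GB at 10 DWPD figures are the article's example, and the 10 GB/day figure is the estimate in the reply above; both work out as claimed.)

# Endurance arithmetic from the quoted passage and the reply above.
capacity_gb = 100
dwpd = 10                    # drive writes per day
warranty_years = 5

daily_tb = capacity_gb * dwpd / 1000
total_tb = daily_tb * 365 * warranty_years
print(f"{capacity_gb} GB at {dwpd} DWPD = {daily_tb:.0f} TB/day, "
      f"{total_tb:,.0f} TB over {warranty_years} years")

# The same endurance budget spent at a light desktop rate instead:
light_gb_per_day = 10
years = total_tb * 1000 / light_gb_per_day / 365
print(f"at {light_gb_per_day} GB/day that budget lasts ~{years:.0f} years")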
  #43  
Old January 8th 19, 12:26 AM posted to alt.comp.os.windows-10
Mike
external usenet poster
 
Posts: 185
Default SATA Drives

On 1/7/2019 10:51 AM, nospam wrote:
In article , Paul
wrote:

https://www.networkworld.com/article...ng-ssd-myths.html

"Exhaustive studies have shown that SSDs have an
annual failure rate of tenths of one percent,
while the AFRs for HDDs can run as high as 4 to 6 percent." === Boo!
and/or Hiss!


from that link,
And since SSDs contain billions of cells, we're talking about an
enormous amount of data that can be written and deleted at every
moment of every day of the drive's life. For example, one 100GB SSD
that offers 10 drive writes per day can support 1TB (terabyte) of
writing each and every single day, 365 days a year for five years.

very few people write a terabyte *every* *day*.


Help me with the math. Maybe this is a video surveillance system.
I get a new 100GB SSD and write 100GB to it. How many times did I erase
each cell?

Over the next 2.4 hours, I overwrite all that data. How many times did
I erase each cell?

2.4 hours later, I have overwritten all that data. How many times did I
erase each cell?

Keep it up at 10 drive writes per day.

...
...until it fails. What's the write amplification? How long did that take?

That's a limit case.
The other end is don't write it at all.
What's the shape of the life curve between those limits
based on size and number of writes and unused capacity on the drive
and how you TRIM it and and and...

I expect my spinner to be more or less independent of all that.
My SSD is a complex web of secrets hiding a fundamental defect
in the storage method that's getting worse with each generation
as geometries shrink and bits per cell increase.

Start up Resource Monitor and look at disk writes.
There are a LOT of writes due to system management. A LOT!
They may not be big writes, but if there's no available space,
my SSD has to move something to do it. All those small writes
may be amplified significantly when it comes to SSD erase cycles.
I don't have a number, but it does make me cautious.

In terms of actual bytes written by applications, my TV time shifter
writes about 30GB a day. I expect it would have to be TRIMmed frequently.
No idea how expensive a TRIM operation is in terms of erase cycles at
that level. I'm not anxious to put an SSD there.
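
(A rough bracket on that case, with illustrative numbers only: a 120 GB drive rated for 3000 P/E cycles absorbing about 30 GB/day of host writes, at several assumed write-amplification factors. Where the real factor lands depends on free space, overprovisioning and TRIM, which is exactly the unknown in question.)

# Sensitivity of lifetime to write amplification for a time-shifter load.
capacity_gb = 120
pe_cycles = 3000
host_gb_per_day = 30

raw_endurance_tb = capacity_gb * pe_cycles / 1000     # NAND can absorb ~360 TB

# Try a range of assumed write-amplification factors.
for write_amp in (1.1, 2, 5, 10):
    nand_gb_per_day = host_gb_per_day * write_amp
    years = raw_endurance_tb * 1000 / nand_gb_per_day / 365
    print(f"write amplification {write_amp:>4}: ~{years:4.1f} years")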


a more realistic amount would be in the range of 10 gigabytes, which
would put the lifetime (based on writes) at 500 years. something *else*
is likely to fail first, including the user, who won't live anywhere
near that long.

tl;dr - ssds are *very* reliable.


  #44  
Old January 8th 19, 12:33 AM posted to alt.comp.os.windows-10
Mike
external usenet poster
 
Posts: 185
Default SATA Drives

On 1/7/2019 5:27 AM, joe wrote:
On 1/6/2019 5:30 PM, Mike wrote:
snip
We have very different views of reliability.
If there's not enough oxygen, just don't breathe,
makes about as much sense.

3000 erase-write cycles is a TINY number.
The only way that works is if you don't do it.


You seem fixated on a number. How long does it take, under normal use,
to do 3000 erase-write cycles?


That's the root of the questions. The answer is that it appears
to be a very strong function of things known only to the drive vendor
and whether that plays nice with the OS and the APPS.

Is it days, months, or many years? If the
answer is many years, then does it really matter?

Not at all. Just convince me that it really is many years.

They take Herculean measures to spread the wear
and recover from bad data bits.


From the user's standpoint, no effort is needed as that is all done in
the SSD.

I think you can say that for drives that are properly interfaced with
the OS. In the end, it's not whether I have to interact with it while
it's wearing out. The issue is that it's wearing out at all.

snip


  #45  
Old January 8th 19, 02:56 AM posted to alt.comp.os.windows-10
pjp[_10_]
external usenet poster
 
Posts: 1,183
Default SATA Drives

In article , says...

On 1/7/2019 5:27 AM, joe wrote:
On 1/6/2019 5:30 PM, Mike wrote:
snip
We have very different views of reliability.
If there's not enough oxygen, just don't breathe,
makes about as much sense.

3000 erase-write cycles is a TINY number.
The only way that works is if you don't do it.


You seem fixated on a number. How long does it take, under normal use,
to do 3000 erase-write cycles?


That's the root of the questions. The answer is that it appears
to be a very strong function of things known only to the drive vendor
and whether that plays nice with the OS and the APPS.

Is it days, months, or many years? If the
answer is many years, then does it really matter?

Not at all. Just convince me that it really is many years.

They take Herculean measures to spread the wear
and recover from bad data bits.


From the user's standpoint, no effort is needed as that is all done in
the SSD.

I think you can say that for drives that are properly interfaced with
the OS. In the end, it's not whether I have to interact with it while
it's wearing out. The issue is that it's wearing out at all.

snip


I just hooked up a small Lexar 120 GB SSD to a PVR. I'll let ya's know
how long it lasts, but I'm expecting years based upon the ones I have in
PCs.
 



