#46
SATA Drives
On 1/7/19 7:11 AM, Ken Blake wrote:
If you saw a device for which a 1.5 million hour MTBF was claimed, would you believe it? If my arithmetic is right, 1.5 million hours is over 160 years. How did they test it and determine that number?

That means if you place 1.5 million units on a test bench, one unit is predicted to fail in one hour. It is a pretty useless figure.
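The arithmetic above can be sketched in a few lines. This assumes the standard constant-failure-rate (exponential) model that MTBF figures are normally quoted under; the 1.5 million hour figure is the one from the post, everything else follows from it.

```python
# MTBF arithmetic from the post above, under the usual constant
# failure-rate assumption. Figures are illustrative, not vendor data.
import math

HOURS_PER_YEAR = 24 * 365          # 8760
mtbf_hours = 1_500_000

# Ken's point: the MTBF, read naively, is "over 160 years".
print(mtbf_hours / HOURS_PER_YEAR)  # ~171 years

# T's reading: with 1.5 million units on a bench, expected failures
# per hour is N / MTBF = 1.
units_on_bench = 1_500_000
print(units_on_bench / mtbf_hours)  # 1.0 expected failures per hour

# The more practically useful number implied by the same MTBF:
# the annualized failure rate (AFR).
afr = 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)
print(f"implied AFR = {afr:.2%}")   # about 0.58% per year
```

So a 1.5M-hour MTBF is really a claim about population failure rate (roughly half a percent per year), not about any single drive lasting 171 years.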
#47
SATA Drives
On 1/7/19 8:30 AM, Jim H wrote:
On Sun, 6 Jan 2019 12:16:32 -0500, in , Wolf K wrote: Seems to me that MTBF is testable, no assumptions required. I expect the published figure(s) to summarise the test results. Where are the stats?

Testable how? Running one disk drive until it fails takes a long time... and running many for a shorter time is expensive. Each winds up giving a different figure for MTBF, and both delay product introduction if the product requires MTBF figures based on testing before being introduced.

All of the above is why MTBF, at least for things expected to last a loooong time, is CALCULATED (no actual testing involved) based on knowledge of the design and the expected failure rate of its various parts and subsystems.

The best use for MTBF data isn't to schedule when your hard drive might need replacement before it fails... it's to determine (by calculation) how many spares you need to keep in stock so a failed drive in your large network can be replaced quickly.

Ignore MBTF and use warranty instead.
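The spares-stocking use of MTBF mentioned above can be sketched like this. The fleet size, restock interval, and service level are made-up examples; the Poisson model for failure counts is the standard assumption that goes with a constant failure rate.

```python
# Sketch: how many spares to stock for a fleet, given an MTBF.
# Fleet size, interval, and service level are illustrative assumptions.
import math

def expected_failures(fleet_size, mtbf_hours, interval_hours):
    """Expected number of failures across the fleet over the interval."""
    return fleet_size * interval_hours / mtbf_hours

def spares_to_stock(fleet_size, mtbf_hours, interval_hours,
                    service_level=0.95):
    """Smallest spare count that covers the interval's failures with
    the given probability, treating failures as Poisson-distributed."""
    lam = expected_failures(fleet_size, mtbf_hours, interval_hours)
    cumulative, k = 0.0, 0
    while True:
        cumulative += math.exp(-lam) * lam**k / math.factorial(k)
        if cumulative >= service_level:
            return k
        k += 1

# 1,000 drives, 1.5M-hour MTBF, quarterly restock (~2190 hours):
print(expected_failures(1000, 1_500_000, 2190))   # ~1.46 per quarter
print(spares_to_stock(1000, 1_500_000, 2190))     # 4 spares for 95% cover
```

This is the sense in which MTBF is useful: it sizes a spares pool for a population, even though it says almost nothing about when your one drive will die.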
#48
SATA Drives
In article , Mike
wrote: https://www.networkworld.com/article...nking-ssd-myths.html

"Exhaustive studies have shown that SSDs have an annual failure rate of tenths of one percent, while the AFRs for HDDs can run as high as 4 to 6 percent."

=== Boo! and/or Hiss! from that link: And since SSDs contain billions of cells, we're talking about an enormous amount of data that can be written and deleted at every moment of every day of the drive's life. For example, one 100GB SSD that offers 10 drive writes per day can support 1TB (terabyte) of writing each and every single day, 365 days a year for five years.

very few people write a terabyte *every* *day*.

Help me with the math. Maybe this is a video surveillance system. I get a new 100GB SSD and write 100GB to it. How many times did I erase each cell?

stop thinking of individual cells. 100gb/day every day, based on the above statistic, would give you 50 years of expected life. there have been endurance tests that hammered ssds (not a usual use case), which lasted in the petabyte range, or around 25 years or so if it was only 100g/day. it's simply not worth worrying about.

nothing is perfect, so there's always the chance it might fail before then, but it's almost certain to outlast any mechanical hard drive, which could also fail at any time. always have backups, no matter what option you choose.
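The drive-writes-per-day figures quoted from the article reduce to straightforward multiplication. The 100GB capacity, 10 DWPD, and 5-year window are the article's numbers; the 100GB/day scenario is Mike's.

```python
# The DWPD endurance math quoted from the article, worked through.
capacity_gb = 100
dwpd = 10                      # drive writes per day, per the article
rated_days = 5 * 365           # the article's 5-year endurance window

daily_writes_gb = capacity_gb * dwpd
print(daily_writes_gb)          # 1000 GB = 1 TB per day, as quoted

total_endurance_gb = daily_writes_gb * rated_days
print(total_endurance_gb)       # 1,825,000 GB of total rated writes

# Mike's surveillance scenario: only 100 GB/day instead of 1 TB/day.
years_at_100gb = total_endurance_gb / 100 / 365
print(years_at_100gb)           # 50.0 years, nospam's figure
```

So "10 DWPD for 5 years" and "50 years at 100GB/day" are the same endurance budget expressed two ways; the budget is total bytes written, not calendar time.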
#49
SATA Drives
"Ken Blake" wrote in message
... If you saw a device for which a 1.5 million hour MTBF was claimed, would you believe it? If my arithmetic is right, 1.5 million hours is over 160 years. How did they test it and determine that number?

Not too much different than actuarial tables for mortality: statistical analysis of error rates in a significant sample size, tested over a shorter period of time, then used to generate a predicted outcome for a mean population, not the device's average or mean lifetime in the field of use. Hard drive manufacturers certainly haven't tested their products since the mid-19th century (171 yrs; 1.5MM hours).

In the case of actuarial tables, one of the obvious error rates is death, just like a hard drive. In the actuarial case, having 171 yrs of data is relatively useless, just like it would be for a hard drive: the testers and testing equipment would all have died before the 171 years of data collection.

....w¡ñ§±¤ñ ms mvp windows 2007-2016, insider mvp 2016-2018
#50
SATA Drives
Mike wrote:
On 1/7/2019 10:51 AM, nospam wrote: In article , Paul wrote: https://www.networkworld.com/article...ng-ssd-myths.html "Exhaustive studies have shown that SSDs have an annual failure rate of tenths of one percent, while the AFRs for HDDs can run as high as 4 to 6 percent." === Boo! and/or Hiss! from that link: And since SSDs contain billions of cells, we're talking about an enormous amount of data that can be written and deleted at every moment of every day of the drive's life. For example, one 100GB SSD that offers 10 drive writes per day can support 1TB (terabyte) of writing each and every single day, 365 days a year for five years. very few people write a terabyte *every* *day*.

Help me with the math. Maybe this is a video surveillance system. I get a new 100GB SSD and write 100GB to it. How many times did I erase each cell? Over the next 2.4 hours, I overwrite all that data. How many times did I erase each cell? 2.4 hours later, I have overwritten all that data. How many times did I erase each cell? Keep it up, 10 drive writes per day... until it fails. What's the write amplification? How long did that take?

That's a limit case. The other end is don't write it at all. What's the shape of the life curve between those limits, based on size and number of writes and unused capacity on the drive and how you TRIM it and and and... I expect my spinner to be more or less independent of all that. My SSD is a complex web of secrets hiding a fundamental defect in the storage method that's getting worse with each generation as geometries decrease and bits per cell increase.

Start up resource monitor and look at disk writes. There are a LOT of writes due to system management. A LOT! They may not be big writes, but if there's no available space, my SSD has to move something to do it. All those small writes may be amplified significantly when it comes to SSD erase cycles. I don't have a number, but it does make me cautious.
In terms of actual bytes written by applications, my TV time shifter writes about 30GB a day. I expect it would have to be TRIMmed frequently. No idea how expensive a TRIM operation is in terms of erase cycles at that level. I'm not anxious to put a SSD there.

Work some numbers. I just went through my "Sent" messages for a table I'd copied out previously. I was hoping to find the 545S table, but this is the table I got first. Being a PRO model in name, the TBW might be higher than normal.

* Warrantied TBW for 860 PRO: 300 TBW for 256 GB model, 600 TBW for 512 GB model, 1,200 TBW for 1 TB model, 2,400 TBW for 2 TB model, 4,800 TBW for 4 TB model.

Write 100GB per day. Doing that for 10 days is "1 TBW". The 300TBW drive will last 3000 days or 8.2 years. The drive in question (a 256GB drive) would be using about half its capacity. You are writing 100GB of it, erasing it, writing 100GB the next day. It's a bulk writer application.

The OS buffers in 64KB chunks, so as a rule the OS doesn't tend to allow the "slow" write operation to devolve into 4KB writes. I noticed this when using a purposeful fragmentation app recently: the OS didn't allow a 4KB cluster drive to be written 4KB at a time. Because that's bad for SSDs, perhaps. Previous OSes would allow a smaller "fragment" size. There seem to be two write buffers: the "System Write Buffer" for when the destination device falls behind (based on bandwidth), and a smaller buffer at the C code level, sort of, which does a tiny bit of buffering in the name of "good SSD behavior".

You should still be able to use "sparse" file behavior if you want, and the OS can't do anything about that. But sparse writers are not common in software (pre-allocating half the drive, and then writing to random 4KB locations -- it takes brainz to do that and is not common). That's actually a database pattern, rather than a surveillance camera pattern.
If you bought the 1TB drive, you get 1200 TBW and 32.8 years of life before it has used all the wear life. If you bought the 4TB model and wrote 100GB per day, it lasts for 131.4 years.

TRIM would be part of the budget. Cells have to be erased before they can be reused, and our hope is that TRIM is a hint, and erasure only happens once rather than multiple times. We don't care when the erasure is done: it either gets done right on the TRIM hint, or it gets done 10us before the data is written, using the DRAM buffer in the drive perhaps.

My assumption is you don't resort to pathological behavior. For the 4TB drive, you don't put "3.9TB" of stale storage filled up, then beat the **** out of the remaining 0.1TB of space for years on end. If you did that, it would be like owning a 128GB drive with only 150TBW and having time for "only" 4.1 years of that pattern. You'd burn a hole in the space at the end of the drive. Drives have a small amount of overcapacity, so the 4TB drive would have enough overcapacity to circulate enough spares pool for 8 years of life.

It would be up to the drive to notice the large disparity on the blocks down at the end and "move" the hot data area to other cells. I think that's a definite possibility, that the drive will use active wear leveling and actually waste a write moving some of the stale content... every once in a while. Since that's a Samsung drive, it's possible such a decision is made every three months or so. A background rewrite based on statistics. If something naughty has been going on, the drive will attempt to smooth the wear in an active way. Rewriting mushy cells. Giving an opportunity to move the data around and smooth out a hot spot. Now you're back to 100+ years of life.

This policy is proprietary, and two brands don't necessarily handle pathology the same way. Passive wear leveling is sufficient for non-pathological behavior.
Active wear leveling (drive CPU noodles out a strategy to make it happen on an infrequent basis) is still a possible piece of code in the drive. After the "mushy TLC" incident Samsung had, this is likely to be a strategy for their product. I don't know if other brands got the hint from that bad PR and did the same or not. Firmware is never going to be documented at that level.

Even if you bought a 256GB drive as a "movie" drive with no "stale" storage on it, you still get a decent sized roll of toilet paper. An 8 year roll of toilet paper.

Would a hard drive last for 8 years? Of course. But not all of them do. A few fail in 3 months. A few fail in 2-3 years. Not many give the "golden" performance of my 500GB drive: 43,817 hours, not a mark on it, equivalent to 5 years continuous operation. Will it last another 3 years? Hard to say.

https://i.postimg.cc/rpJpKMTR/no-domino-flaws-here.gif

Paul
#51
SATA Drives
T wrote:
On 1/7/19 7:11 AM, Ken Blake wrote: If you saw a device for which a 1.5 million hour MTBF was claimed, would you believe it? If my arithmetic is right, 1.5 million hours is over 160 years. How did they test it and determine that number? That means if you place 1.5 million units on a test bench, one unit is predicted to fail in one hour. It is a pretty useless figure.

Have you seen the size of the test chamber they use? :-)

https://www.tomshardware.com/reviews...es,4408-3.html

Actually, the picture I was looking for is missing. There's some chamber they use for infant mortality, that runs the drive for a short period of time. It had a big circular door on the end, for once the chamber is loaded. The WDC one seems to use a similar 6000 slot robot as Seagate. I think the idea is, the robot can continue loading and unloading slots while the chamber stays at temperature.

https://www.tomshardware.com/picture...-tour.html#s38
https://img.purch.com/rc/600x450/aHR...FsaWJ1ci5qcGc=

I don't think any of that, though, is life cycle testing. They have at least 30,000 slots to hold product in that one facility, so it'll take 50 hours before one fails :-)

Paul
#52
SATA Drives
On Mon, 7 Jan 2019 20:09:55 -0800, T wrote:
Ignore MBTF and use warranty instead.

As before, I merely reply to this to point to a spelling error.
#53
SATA Drives
Jim H wrote:
On Mon, 7 Jan 2019 20:09:55 -0800, in , T wrote: Ignore MBTF and use warranty instead.

Provided... that drives with a longer claimed warranty really last longer. If they just cost more without lasting longer, then the mfgr is just recouping his expected warranty cost at time of sale.

This is how some car batteries are marketed. If you compare two batteries with the same ratings and one costs more and has a better warranty, weigh them. If the more expensive longer-warranty battery isn't heavier, then you're paying for the warranty up front... and it isn't worth it, because the warranty is based on the original purchase price while the cost of a new battery X years later has risen considerably. They actually make a nice chunk of change on such warranties. In the case of hard drives, the prices for a same size replacement seem to be dropping... provided that size is still available. Tried to buy a 1 GB drive lately?

The warranty issue is "simple math". You work out the price to provide the service, and add it to the price of the drive.

On hard drives, your replacement drive is not a new drive. It's a used drive that has been re-certified. Maybe you're made to pay for shipping to get it. Then, there is the "serial number barrier", where you enter the serial number into the web site, and magically, the serial comes back as "bad". And the drive now has no warranty. This happens more often than you'd think, and would appear to be yet another scam. It's one thing to not offer warranties on gray market or shucked drives, but it appears perfectly valid retail drives are getting the ole heave ho (#sn not valid).

Paul
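The "simple math" of pricing a warranty into the drive can be sketched as an expected-cost calculation. All the numbers here (failure rate, replacement cost) are made-up illustrative assumptions, not vendor figures.

```python
# Sketch of folding expected warranty cost into the sale price.
# AFR and costs are assumed, illustrative numbers only.
afr = 0.02                    # assumed annual failure rate
warranty_years = 5
replacement_cost = 15.0       # assumed cost of a recert drive + handling

# Probability at least one claim occurs during the warranty,
# assuming independent years at a constant AFR.
p_claim = 1 - (1 - afr) ** warranty_years
expected_cost = p_claim * replacement_cost
print(f"P(claim) = {p_claim:.1%}")          # ~9.6%
print(f"expected cost ~ ${expected_cost:.2f}")
```

On these toy numbers the vendor only needs to bake a dollar or two into the price to fund the longer warranty, which is Jim H's point: a longer warranty can be a pricing decision rather than evidence of a better drive.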
#54
SATA Drives
In article , Paul
wrote: On hard drives, your replacement drive is not a new drive. It's a used drive that has been re-certified.

because you sent in a used drive.

Maybe you're made to pay for shipping, to get it. Then, there is the "serial number barrier" where you enter the serial number into the web site, and magically, the serial comes back as "bad". And the drive now has no warranty. This happens more often than you'd think, and would appear to be yet another scam.

it's not a scam. those are oem drives, which are warranted by the oem (dell, lenovo, etc.), not the drive maker. contact the relevant company.

It's one thing to not offer warranties on gray market or shucked drives, but it appears perfectly valid retail drives are getting the ole heave ho (#sn not valid).

they aren't.
#55
SATA Drives
Paul wrote in :
Then, there is the "serial number barrier" where you enter the serial number into the web site, and magically, the serial comes back as "bad". And the drive now has no warranty. This happens more often than you'd think, and would appear to be yet another scam. It's one thing to not offer warranties on gray market or shucked drives, but it appears perfectly valid retail drives are getting the ole heave ho (#sn not valid). Paul

I had something similar happen. I had decided to try using an SSD for my system drive, and picked up a highly rated Kingston with a three year warranty. Just about at the two year mark, it bricked on me. I went to request a warranty replacement, and was informed that Kingston had dropped ALL support for that model a couple months earlier.

So now I have a Samsung EVO 350, and I watch the S.M.A.R.T. and drive statistics data closely, and image frequently.
#56
SATA Drives
On Thu, 10 Jan 2019 01:46:57 +0000, Jim H
wrote: On Mon, 7 Jan 2019 20:09:55 -0800, in , T wrote: Ignore MBTF and use warranty instead. Provided... that drives with a longer claimed warranty really last longer. If they just cost more without lasting longer, then the mfgr is just recouping his expected warranty cost at time of sale. This is how some car batteries are marketed. If you compare two batteries with the same ratings and one costs more and has a better warranty, weigh them. If the more expensive longer warranty battery isn't heavier, then you're paying for the warranty up front... and it isn't worth it because the warranty is based on the original purchase price while the cost of a new battery X years later has risen considerably. They actually make a nice chunk of change on such warranties. In the case of hard drives, the prices for a same size replacement seem to be dropping...

Yes.

provided that size is still available. Tried to buy a 1 GB drive lately?

No. Who would want to? It's so tiny as to be virtually useless. You can buy 1GB thumb drives, but as far as I'm concerned, even those are useless.
#57
SATA Drives
On Fri, 4 Jan 2019 10:12:37 -0800, T wrote:
On 1/4/19 9:57 AM, Tim wrote:

Check your date. It's 2/3/19 today.

I plan on doing some cleaning and hopefully recabling on my PC. It is my understanding that which SATA port a drive is plugged into does not matter. Specifically, due to how the drives were added to my system, the SATA port number does not bear any relation to the drive's position in the OS, i.e. my system drive C: is plugged into SATA port 4, etc. I would like to recable all the drives so port 1 is C:, port 2 is D:, and so forth. Am I right in assuming this will make no difference to the system? I remember way back in the day, if a drive was plugged into a port other than the one it was formatted on, it was basically unusable until it was reformatted.

Hi Tim,

Yes and no. A SATA drive will work on any port it is plugged into. Check your motherboard manual. Some ports can be SATA II and some can be SATA III. III is twice as fast. II's will work in III slots and III's will work in II's slots, although you will take a performance hit. I like to put my main drive on port 0 and my DVD on port 1. The rest I don't really care. Just a convention I follow.

Not what you asked, but SATA SSD drives are about 4 times faster than mechanical drives, and NVMe SSD drives are about 8 times as fast as mechanical drives. If you go with SSD, make sure you spec out as much empty space as used space to assist wear leveling. Also, Samsung drives are the only high reliability drive I have come across. Stay away from Intel.

I have a Sun Sunfire X2100 server which has two SATA hard drives. I could not get drive 1 to work as a boot drive, so I ended up using drive 2. I wanted to use drive 1 as a storage drive but could not get it to work that way either. I theorised that the SATA controller was faulty. Fortunately the main board has four SATA controllers, so I switched drive 1 from SATA 1 to SATA 3. Now it works. Solaris 11 detected the drive on SATA 3 and uses it.

HTH, Hi There Harry? -T
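On the SATA II vs III question above: on a Linux box you can check what speed each port actually negotiated without opening the case. This sketch assumes the sysfs layout exposed by the Linux libata driver (`/sys/class/ata_link/.../sata_spd`); on other operating systems, or on a machine with no SATA controller, it simply finds nothing.

```python
# Linux-only sketch: report the negotiated SATA link speed per port
# via sysfs. Path layout assumed from the libata driver; returns an
# empty list where that layout is absent.
import glob
import os

def sata_link_speeds():
    """Return (link_name, negotiated_speed) pairs from sysfs, if any."""
    results = []
    for link in sorted(glob.glob("/sys/class/ata_link/link*")):
        spd_file = os.path.join(link, "sata_spd")
        try:
            with open(spd_file) as f:
                results.append((os.path.basename(link), f.read().strip()))
        except OSError:
            pass                      # link exposes no speed attribute
    return results

for name, speed in sata_link_speeds():
    print(name, speed)                # e.g. "link1 6.0 Gbps"
```

A "6.0 Gbps" reading means the port and drive negotiated SATA III; "3.0 Gbps" means SATA II, which is where the performance hit T mentions would come from.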
#58
SATA Drives
On Sat, 02 Mar 2019 22:03:42 +1100, Lucifer wrote:
Solaris 11 detected the drive on SATA 3 and uses it.

Respect!
#59
SATA Drives
On 3/2/19 3:03 AM, Lucifer wrote:
HTH, Hi There Harry?

You Funny Bunny! :-)