#1
What is a "Softraid controller"?
I thought software RAID was simply using Windows to mirror two drives on standard disk controllers. So what is this card I have called a Silicon Image Softraid controller? It does hardware RAID (as I would name it) - I can set up arrays in its own BIOS before an OS loads. So why do they call it softraid? Is that just referring to the ability to manage the volumes within the OS?
-- Attila the Hun died during a bout of rough sex where his partner broke his nose causing a haemorrhage.
#2
Jimmy Wilkinson Knife wrote:
> So why do they call it softraid? Is that just referring to the ability to manage the volumes within the OS?

https://s9.postimg.cc/dhd0whoxb/SIL3132_soft_RAID.gif

Paul
#3
On Sat, 26 May 2018 01:12:50 +0100, Paul wrote:
> https://s9.postimg.cc/dhd0whoxb/SIL3132_soft_RAID.gif

So the flash chip loads the data into the MB's BIOS? Doesn't that then make it hardware controlled? Something is capable of creating arrays with no OS running.

-- H lp! S m b d st l ll th v w ls fr m m k yb rd!
#4
Jimmy Wilkinson Knife wrote:
> So the flash chip loads the data into the MB's BIOS? Doesn't that then make it hardware controlled? Something is capable of creating arrays with no OS running.

The CPU does it, using the code from the Flash chip on the card. Once the OS boots, the driver in the OS contains the code to do that, and the CPU still does that work. Unless the hardware block diagram has *specific* RAID functions showing in the diagram, it's a "software" implementation.

Paul
#5
On Sat, 26 May 2018 02:13:35 +0100, Paul wrote:
> Unless the hardware block diagram has *specific* RAID functions showing in the diagram, it's a "software" implementation.

Ah, so just as useful, but using the PC's resources up and possibly slower?

-- You know you've spent too much time on the computer when you spill milk and the first thing you think is, 'Edit, Undo.'
#6
Jimmy Wilkinson Knife wrote:
> Ah, so just as useful, but using the PC's resources up and possibly slower?

The difference should be most noticeable on RAID5, as XOR is relatively expensive. At least traditionally, people doing softRAID in RAID5 mode got the functionality, but not particularly high performance.

If you have a hardware RAID card, a "read" costs you 2% CPU (because it's all DMA activity). Whereas if you do RAID5 in software, the CPU runs at 50% during your read attempt, and the software RAID array competes for cycles with whatever you're doing. Not a particular issue if you're just copying files from one array to another, but bad if you're actually multitasking.

Paul
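The XOR arithmetic behind that cost is simple to sketch. A toy illustration of RAID5-style parity in Python (a sketch of the principle only, not any particular driver's implementation):

```python
from functools import reduce

def parity(blocks):
    """XOR all blocks of a stripe together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(survivors, parity_block):
    """Recover a missing block: XOR the surviving blocks with the parity."""
    return parity(survivors + [parity_block])

# One stripe of three data blocks
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xff\x00"
p = parity([d0, d1, d2])

# Lose d1, then recover it from the other two blocks plus the parity
recovered = rebuild([d0, d2], p)
assert recovered == d1
```

Every byte of a software rebuild passes through the CPU this way, which is why it competes for cycles, while a hardware card does the same XOR in its own silicon.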
#7
On Sat, 26 May 2018 17:24:14 +0100, Paul wrote:
> The difference should be most noticeable on RAID5, as XOR is relatively expensive.

A long time ago I had a RAID 5 array of, I think, six 1TB WD Caviar Black drives. It wasn't much faster than a single drive. I think it was a RAID card I plugged in, as motherboards didn't have that many connectors back then.

-- When a woman wears leather clothing, a man's heart beats quicker, his throat gets dry, he goes weak in the knees, and he begins to think irrationally. Ever wonder why? She smells like a new truck!
#8
Paul wrote:
> If you have a hardware RAID card, a "read" costs you 2% CPU (because it's all DMA activity). Whereas if you do RAID5 in software, the CPU runs at 50% during your read attempt.

RAID5 is a bad idea anyway.
#9
On Sun, 27 May 2018 10:51:33 +0100, Chris wrote:
> RAID5 is a bad idea anyway.

What on earth led to that stupid conclusion? You get the benefits of more storage space, AND redundancy. Six RAID 5 disks give you five times the space of one drive, plus one can fail.

Sure, RAID 6 is better if you have enough disks and don't mind losing a bit more space.

-- Two men were talking. "My son asked me what I did during the Sexual Revolution," said one. "I told him I was captured early and spent the duration doing the dishes."
#10
In article , Jimmy Wilkinson Knife wrote:
>> RAID5 is a bad idea anyway.
> What on earth led to that stupid conclusion?

Math, and it's not stupid.

> You get the benefits of more storage space, AND redundancy. 6 RAID 5 disks gives you 5 times the space of 1 drive, plus one can fail.

The problem is during a RAID 5 rebuild: if a second drive fails, you lose the array. The chance of that happening is very high, especially with today's multi-terabyte drives, where the chance of an unrecoverable bit error is a statistical guarantee.

> Sure RAID 6 is better if you have enough disks and want to lose a bit of space.

It's less of a risk because it can tolerate a second drive failure during a rebuild. With a third drive failure, you lose the array.
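The math here can be made concrete. Assuming bit errors are independent and the consumer-drive datasheet URE rate (1 error per 10^14 bits read) is taken at face value, the chance of hitting at least one URE over a full rebuild works out roughly as follows (illustrative figures only; real drives often do much better than the spec):

```python
import math

def p_rebuild_failure(bytes_read, ure_rate=1e-14):
    """Probability of at least one unrecoverable read error (URE)
    while reading `bytes_read` bytes, assuming independent bit
    errors at `ure_rate` errors per bit read (Poisson approximation)."""
    return 1 - math.exp(-ure_rate * bytes_read * 8)

# Rebuilding a 6x2TB RAID5 means reading the 5 surviving 2TB drives
print(p_rebuild_failure(5 * 2e12))   # ~0.55
```

That ~55% is where the scary "50% rebuild failure" figures for a 12TB RAID5 come from; the whole argument hinges on the datasheet rate being a real, per-bit error probability.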
#11
nospam wrote:
> Math, and it's not stupid.

This is called a "property", and you tune as you see appropriate. If you don't like the characteristics of RAID5, you use RAID6. You use RAID10 or RAID0+1, again studying the failure cases and what the RAID gives you. Then you make a choice.

One of the assumptions about the disks is that failures don't correlate: the odds of the second disk failing remain as they always did when the first disk just failed. The purpose of RAID in many cases is to extend the service window, so on a degrade you can wait until 8PM to take the unit out of service. But we did have days at work where our local IT guy was doing a "rebuild" while the server was running, which resulted in slow home directories for the entire day.

An example of a correlated (common-mode) failure is the power supply raising all disk drive voltage levels from 12V to 15V, damaging all drives at the same time. None of the RAID designs above solve that in most normal implementations.

A second failure mechanism is a *hardware* controller going nuts and writing garbage to the array. This happened *twice* at work, with the staff perhaps not aware of the first (smaller) failure until an outage caused by the same issue resulted in hundreds of employees going home at 2PM in the afternoon. There were some angry phone calls to IT that day by "big cheeses", rather than ****ed-off users. (These are gentlemen who yell into the phone a lot, and have sound-dampened offices due to their "yelling lifestyle".)

The IT department cooked up a "space shuttle" grade solution - it took about a year to implement, get all the equipment in place, and modify the client end for failover, but we never had another failure like that again. Servers were put in separate buildings to provide a triplicated service with failover on the client end. These were software distribution servers, amongst other things.

Paul
#12
Jimmy Wilkinson Knife wrote:
>> RAID5 is a bad idea anyway.
> What on earth led to that stupid conclusion?

My, you're pleasant...

RAID5 is inherently unsafe due to caching of errors, UREs, and increased failure during rebuild. https://www.zdnet.com/article/why-ra...rking-in-2019/

> You get the benefits of more storage space, AND redundancy.

Er, less storage space. You lose ~20% of raw disk capacity.
#13
On Mon, 28 May 2018 14:45:52 +0100, Chris wrote:
> RAID5 is inherently unsafe due to caching of errors, URE and increased failure during rebuild.

That makes no sense to me. A read error on a single drive will always be there, no matter what array it's in or not in.

> Er, less storage space. You lose ~20% of raw disk capacity.

Er, more than a mirror. That loses 50%.

-- If you feel tired, pull off at the motorway services -- Highway Code, UK. How's that going to help?!?
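For what it's worth, the capacity trade-off both posters are gesturing at works out like this (a quick sketch; the six-disk example is taken from earlier in the thread):

```python
def usable_fraction(n_disks, level):
    """Usable share of raw capacity for a few common RAID levels."""
    if level == "raid5":
        return (n_disks - 1) / n_disks   # one disk's worth of parity
    if level == "raid6":
        return (n_disks - 2) / n_disks   # two disks' worth of parity
    if level == "raid1":
        return 0.5                       # mirror: half the raw space
    raise ValueError(f"unknown level: {level}")

print(usable_fraction(6, "raid5"))   # ~0.83: about 17% lost to parity
print(usable_fraction(6, "raid1"))   # 0.5: a mirror loses half
```

With six drives the RAID5 loss is closer to 17% than 20% (the ~20% figure matches a five-drive array), but the point stands either way: RAID5 gives up far less raw space than a mirror.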
#14
On Mon, 28 May 2018 14:45:52 +0100, Chris wrote:
> https://www.zdnet.com/article/why-ra...rking-in-2019/

There really is some bull**** out there: "There are some pretty scary calculations available on the Internet. Some are concluding that there is as much as 50% probability of failing the rebuild on the 12TB (6x2TB) RAID 5."

The fact is (based on me having a 6-disk RAID 5 composed of six 1TB drives) that through FIVE separate disk failures on that system, every single time it managed a full rebuild with no problems whatsoever. If there was a read error, I'd lose one or two files - big deal. That would happen on any disk system, any RAID level, or none at all. The controller ain't gonna fall over and panic just because it can't rebuild one ****ing sector, is it?

-- If you feel tired, pull off at the motorway services -- Highway Code, UK. How's that going to help?!?
#15
On Mon, 28 May 2018 15:03:51 +0100, Jimmy Wilkinson Knife wrote:
> There really is some bull**** out there...

Occasionally people speak sense in reply to the alarmists: https://www.zdnet.com/article/why-ra...works-usually/

"I constructed a simple Linux MD RAID 5 out of 5 4TB WD Red with the datasheet specifying 1 in 10^14 URE and then filled it with data. I kept removing a random disk and rebuilding it, then verifying all the data via checksums. I did this for around a month and ended up rebuilding it 20 times before I ended up wanting to use the disks elsewhere. In this case each rebuild had to read approximately 16TB of data to rebuild the missing disk, and then read all 16TB again to verify the checksums. That ended up being 640TB of data read without a single URE.

I see a similar situation with my separate ZFS array. It has 30TB of data on it and I scrub it every 2 weeks and I have yet to encounter a URE (which would show up in zpool status as an error with repaired data). I've scrubbed several hundred TBs of data without any URE on similar consumer grade WD Red disks with 1 in 10^14 spec URE rate."

"Are we being a bit alarmist here? In a previous job I had two different EMC SANs. Each one had one drawer of 15x300GB 15K SAS drives and a second drawer of 15x1TB SATA drives. They were purchased in 2008 and were not decommissioned until the end of 2015. We had drive failures over the 7 years these were in continuous 24x7 operation, but on drive replacement we never had even one rebuild failure. Of course even the SATA drives were enterprise drives, and perhaps the dual storage controllers on each SAN were smart enough to handle it? Who knows. The only thing I know is the arrays rebuilt without a problem 100% of the time. Saw the same thing on some Dell servers and their internal drives. We had some purchased in 2007 that were still running as test and development systems into 2016. Same experience. Never once saw a problem rebuilding."

"ZFS in raidZ2 gets around this, as does raid6 with a sane drive controller or mdadm. It changes the problem from needing a perfect read of the entire array to rebuild, to not having the same bad sector on any 2 of the remaining disks in the exact same spot. The odds of that are infinitesimal."

-- Gary Glitter has said if he gets executed he wants cremating and his ashes putting in an etch-a-sketch, so the kids can still play with him!
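The quoted experiment can actually be checked against the datasheet number. At 1 URE per 10^14 bits, reading 640TB "should" have produced dozens of errors, so observing none strongly suggests the spec is a conservative floor rather than a measured rate (a back-of-envelope sketch, using the figures from the quote):

```python
import math

URE_RATE = 1e-14   # datasheet figure: 1 error per 1e14 bits read

def expected_ures(bytes_read):
    """Expected number of UREs for a given volume of reads."""
    return bytes_read * 8 * URE_RATE

# 20 rebuild-plus-verify passes over a 5x4TB array: ~640TB read in total
mu = expected_ures(640e12)
print(mu)              # ~51 expected errors at the datasheet rate
print(math.exp(-mu))   # chance of seeing zero, if the rate were real: ~6e-23
```

Seeing zero UREs when ~51 were expected is essentially impossible under the datasheet rate, which is the experimenter's point: the real per-bit error rate on those drives must be orders of magnitude lower than the spec sheet's worst case.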