WorldBench also spits out individual results for its component application tests, allowing us to compare performance in each. Neither are our single-drive configurations, for that matter. RAID 5 is another story. NVIDIA also has a slight lead over Intel with some of the other array configurations, although those results only differ by a few seconds.
Boot and load times To test system boot and game level load times, we busted out our trusty stopwatch. However, the results clearly show that the time needed to scan for and initialize a RAID array on either chipset slows the boot process by enough for our single-drive configurations to come out on top. Results in our level load time tests are more varied, but the single-drive configurations prove tough to beat. Array rebuild times Our array rebuild tests simulate a drive failure by removing and formatting one drive in an array.
The system is booted with the drive disconnected to ensure that the array functions properly without it, and we time the rebuild process after the drive is reconnected. Reconstructing a RAID array takes a while, regardless of the system or array configuration.
File Copy Test — Creation File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. File copying is tested twice: once with the source and target on the same partition, and once with the target on a separate partition.
Looking at overall array performance, RAID 5 finally finds some redemption and even leads the field in a couple of test patterns. RAID 5 performance in the copy tests is dismal, though. These tests combine both read and write operations, and that write component is enough to sink our RAID 5 arrays. Speaking of domination, the nForce4 reigns supreme again in all but the ISO test pattern.
You can get the low-down on these iPEAK-based tests here. The mean service time of each drive is reported in milliseconds, with lower values representing better performance.
RAID 5 performance is more mixed, for once. Otherwise, the two chipsets trade blows, with neither able to put the other away. Although sustained read performance scales very well from one to two drives, adding a third or fourth drive yields much more modest performance gains.
At least RAID 5 fares relatively well. When we move to writes, RAID 5 assumes its position at the back of the field, trailing the performance of single-drive configurations by two to three times. Again, we see diminishing returns scaling beyond two drives.
Single-drive response times are very close between the two platforms, with the ICH7R only a hair faster. Although the nForce4 is a little faster at some load levels, the ICH7R is more responsive under heavier loads—and overall. Here we have results for two-, three-, and four-drive arrays on each controller. Raise the roof, er, or something. It certainly can. And the nForce4 hits the wall again.
Mirroring can improve read performance, after all. Our rainbow of response time results clearly illustrates why multi-user server environments have been using RAID for years. It really is that much faster, and the more drives you add, the more responsive the system becomes under heavier loads.
Again, the performance of three- and four-drive RAID 0 arrays on the nForce4 is identical to that of three- and four-drive RAID 5 arrays with the read-dominated web server test pattern. RAID can also improve storage subsystem responsiveness during disk-intensive multitasking. Unfortunately, the write performance of chipset-level RAID implementations is pretty dismal. Which one proves better very much depends on the application.
The ICH7R, on the other hand, scales beautifully under increasingly heavy loads, and it even doubles the transaction rate of the nForce4 in some cases. I appreciate the review, though. This is exactly the sort of information we need. RAID 5 with four drives should also be capable of writing three times as fast as a single drive, but in fact it only manages HALF the speed of a single drive on sustained writes. (Note: if we are talking about modifying only one block, there is no possible way for RAID 5 to beat a single drive; then again, no RAID level would.)
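The arithmetic behind that complaint is easy to check. The sketch below is a back-of-the-envelope model, not a measurement: the function name and the 60 MB/s single-drive figure are purely illustrative.

```python
# Ideal sustained sequential write throughput for a few array types.
# An N-drive RAID 5 writes N-1 data stripes plus one parity stripe per
# full-stripe pass, so it should approach (N-1) times a single drive.

def ideal_sequential_write(single_drive_mbps: float, n_drives: int, level: str) -> float:
    if level == "single":
        return single_drive_mbps
    if level == "raid0":
        return single_drive_mbps * n_drives
    if level == "raid5":
        # one drive's worth of bandwidth is spent on parity
        return single_drive_mbps * (n_drives - 1)
    raise ValueError(level)

single = 60.0  # hypothetical MB/s for one drive
print(ideal_sequential_write(single, 4, "raid0"))  # 240.0
print(ideal_sequential_write(single, 4, "raid5"))  # 180.0 in theory
# The review measured roughly half of a single drive (~30 MB/s in this
# model) for chipset RAID 5 -- about a 6x shortfall versus the ideal.
```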
Of course, one could be even more pedantic in the spirit of these criticisms and claim that the OVERHEAD of reading and writing parity is the problem. But if that overhead really consists of the need to talk to our middleman, the CPU, then the only thing that can settle the dispute is a set of hardware RAID benchmarks.
In a RAID 10 situation, you simply re-mirror the lost drive(s). They know their stuff. I have nothing against up-and-coming companies that try harder than the established veterans.
I say heck yeah, throw them in the ring with the others. One day Intel and NVIDIA will build a real hardware RAID controller into these "chipset" RAIDs, but not yet.
It probably depends on the controller. It could just break files up three ways. In comparison, if you look at the RAID 0 results from the review in, say, the HD Tach average write speed, you can see that not only is RAID 5 half the speed of a single drive, it is only a quarter of what four drives without parity can do.
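The "break files up three ways" idea is just round-robin striping. A minimal sketch, with a toy 4-byte stripe size (real controllers use something like 16KB to 128KB):

```python
# RAID-0 striping sketch: consecutive fixed-size chunks are dealt
# round-robin across the member drives.

STRIPE_SIZE = 4  # bytes here; illustrative only

def stripe(data: bytes, n_drives: int, stripe_size: int = STRIPE_SIZE):
    drives = [bytearray() for _ in range(n_drives)]
    for i in range(0, len(data), stripe_size):
        drives[(i // stripe_size) % n_drives] += data[i:i + stripe_size]
    return [bytes(d) for d in drives]

# 16 bytes across 3 drives: chunks go 0, 1, 2, then wrap back to 0
print(stripe(b"AAAABBBBCCCCDDDD", 3))
# [b'AAAADDDD', b'BBBB', b'CCCC']
```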
Now suppose our parity calculations can be preempted. You do NOT need to do N-1 reads, because you can derive the XOR contribution of the unchanged blocks simply by knowing the current parity and the current contents of the block being overwritten.
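That read-modify-write shortcut can be sketched in a few lines. This illustrates the XOR identity itself, not any particular controller's implementation; all names are hypothetical.

```python
# RAID-5 "small write" parity update: instead of re-reading all N-1
# data blocks, read only the old data block and old parity, then
# new_parity = old_parity XOR old_block XOR new_block.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def update_parity(old_parity: bytes, old_block: bytes, new_block: bytes) -> bytes:
    return xor_blocks(xor_blocks(old_parity, old_block), new_block)

# Three-drive example: parity = d0 XOR d1
d0, d1 = b"\x0f\x0f", b"\xf0\x01"
parity = xor_blocks(d0, d1)

# Overwrite d1 and update parity incrementally
new_d1 = b"\xaa\x55"
parity = update_parity(parity, d1, new_d1)

# The incrementally updated parity matches a full recompute
assert parity == xor_blocks(d0, new_d1)
```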
Can I ask? Best, R. I don't know what config he was using or what hardware card he used, but hardware was faster using the same hard drives. The HighPoint card is pretty crappy, according to someone on the Ars Technica forums. RAID 1 is very important to those who use their desktops for large storage, not to mention those who do a measure of work at their home computer: small businesses, schools, etc.
I second that: I want some software vs. hardware RAID comparisons, but I must also request one thing that I have yet to find on any review page. If you have a RAID card that supports a DIMM-style RAM cache, please, please test it with different cache sizes so that one can see the effects of cache and whether it is worth buying a high-cache card, which are generally more expensive, and also whether it affects the different types of RAID differently. I hear it's more effective on RAID 5.
Pretty close, but the way most RAID-5 controllers work is a little different. Right conclusion, though: the reason RAID-5 is slow is NOT the parity calculation itself, but the work required to be able to calculate it. The trick to getting lots of performance out of a RAID-5 array is a really high-end RAID card with lots of cache memory on it, with write-back caching enabled, which usually requires that the card have an onboard battery for its cache.
The RAID card can tell the OS that the write is completed as soon as it's in the cache; then the card can worry about all the various reading and writing that needs to be done without slowing other things down. What a great article. I ran a stripe set and RAID 5 for a while on my gaming rig and noticed no real speed difference. I would love to see both of those beauties added to the test matrix.
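The write-back behavior described above is easy to model in miniature. This is a toy sketch of the idea, not any real controller's firmware; every name here is illustrative.

```python
# Toy write-back cache: the "controller" acknowledges a write as soon as
# it lands in cache, and flushes to the slow backing store later.

class WriteBackCache:
    def __init__(self, backing: dict):
        self.backing = backing  # stands in for the disks
        self.dirty = {}         # battery-backed cache contents

    def write(self, block: int, data: bytes) -> str:
        self.dirty[block] = data  # fast path: cache only
        return "ack"              # the OS sees completion immediately

    def flush(self):
        # background work: this is where parity reads/writes would go
        self.backing.update(self.dirty)
        self.dirty.clear()

disks = {}
cache = WriteBackCache(disks)
assert cache.write(0, b"data") == "ack"  # acknowledged before hitting disk
assert disks == {}                       # nothing on "disk" yet
cache.flush()
assert disks == {0: b"data"}             # flushed in the background
```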
Damage, I think that would make an extremely cool part 2. Software parity ruins that. Still no answers? As long as you can XOR faster than you can perform sustained writes, you should be fine.
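The "XOR faster than you can write" claim is simple to sanity-check with an in-memory timing sketch. The buffer sizes and technique below are arbitrary choices for illustration; `int.from_bytes` just keeps the XOR loop in C rather than per-byte Python.

```python
# Rough XOR throughput check: time XORing two in-memory buffers and
# compare against typical sustained disk write rates.

import time

def xor_buffers(a: bytes, b: bytes) -> bytes:
    n = len(a)
    return (int.from_bytes(a, "little") ^ int.from_bytes(b, "little")).to_bytes(n, "little")

size = 64 * 1024 * 1024            # 64 MB test buffers
buf_a = bytes(size)                # all zeros
buf_b = b"\xff" * size             # all ones

t0 = time.perf_counter()
out = xor_buffers(buf_a, buf_b)
elapsed = time.perf_counter() - t0
# Typically prints a GB/s-class figure, far above sustained disk writes
print(f"{len(out) / 2**20 / elapsed:.0f} MB/s")
```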
Even entry-level server tasks are only covered here sparingly. Go read Ace's Hardware or something if you want to see how quad Opterons fare against SPARC hardware. Just a footnote: controllers also add a necessary term to the formulas.
I would politely second this request. Oh and nobody cares about your background as long as you don't speak nonsense. The way I see it, the article serves as a baseline comparison between two leading integrated RAID controllers.
You could always add to the review, since you still have the drives, by picking up hardware-based cards, if you wanted to. Few enthusiasts using RAID (in most cases, gamers) have the forethought to plan how they format their drives, but I think the most important thing this article offers is relevance.
Try this at a convention of amateur cluster builders and home-made SAN projects; you may find an audience closer to your own crowd. Or at least you need not be. Perhaps we can do such an article in the future. You mean plain-Jane home desktop users, not workstation users, right? I learned this the hard way, of course.
Dell shrugged us off and sent replacements with no explanation. My lesson was to never use Dell again, and to start thinking about redundancy in the system to save myself days' worth of troubleshooting. In fact, I can. In short, RAID 1 or higher is fairly vital to the business desktop and workstation world.
I think you have missed some of the key information about how we tested, so let me try to clarify. First, you seem not to understand what FC-Test does and why. It is actually a known-quantity disk benchmark that copies files of multiple, varying sizes over the course of its different runs.
This article about it is linked in the Testing Methods section of the review. The review even mentions the varying file sizes of the different FC-Test patterns in the results discussion. We did indeed test exclusively with NTFS, and the stripe sizes and our rationale for choosing them were clearly noted in the Test Notes section of the review.
Our choices may not please everyone, but please note that they were meticulously documented. We did not change drive types, models, sizes, or firmware revisions during testing. The results would seem to bear out that the overhead of doing parity calculations in software causes write performance problems in RAID 5 on these chipsets. I'm unsure how your mention of your experience with Unix hardware RAID solutions bears on this information, but we certainly didn't overlook the hardware-vs.-software question.
Bully for you on playing with expensive hardware, though. We do that from time to time, as you might note, and we generally enjoy it. Sorry if I sound judgmental, but I think you should do a "Phase 2" read-through of the article in which you pay closer attention to how we documented our test procedures, what the benchmarks linked in the testing section actually do, and why it's relevant to certain usage models. If you have criticisms after that, perhaps they will be truly constructive.
The best thing I can do for myself is to have all the work files and the Outlook ones on an external disk, with an automatic backup twice a day. RAID is good for servers, but from my point of view, the chance that the primary disk fails is about the same as the chance that the video card, power supply, motherboard, processor, or memory fails. That's the point of splitting the data. Drivers should really give the option to raise CPU utilization to improve performance.
Especially for people with multi core systems, that should save people dollars for raid controllers. The only major downside is that it makes booting the system off the array a fair bit trickier, and usually means having an additional boot device of some kind.
Nowhere did you mention what stripe width you picked, nor did you mention what block size you chose when formatting the filesystem, which I assume is NTFS. What you pick for a stripe width also matters depending upon the number of drives you have in the array.
Something is obviously wrong with your setup, or you forgot to power-cycle your machines before doing each copy. On-drive-PCB cache plays a role here, but so does drive firmware and other factors. Excellent article, fellas. The only way I can think to improve it would have been to knock out a drive or two while the arrays were operating, just to see what would happen. Always follow the instructions included with your motherboard.
Select the Advanced menu, then the Drive Configuration menu. Set the Drive Mode option to Enhanced. Note: nothing happens right after pressing F6; Setup is still loading drivers.
Watch for the prompt to load support for mass storage devices. Press S to Specify Additional Device. When prompted for the OEM driver disk, press Enter. Download the latest version of Intel Rapid Storage Technology and run the executable.