JB,
Basically, I was testing the MD3660F to determine whether this low-cost, low-cache, high-density, current-generation SAS storage can replace our previous-generation (also Engenio-based) FC storage. In many ways the MD3660F appears to be ahead, taking advantage of its 8Gb FC links on sequential workloads with multiple clients on separate RAID groups, and of its faster controller electronics for maximum IOPS.
All four benchmarks are useful for comparing and highlighting characteristics of the hardware. A fifth, 100%-write, benchmark could be useful too (a sketch of how to run one is below), but the impact of the RAID hardware and cache on writes can already be inferred for setups that are not host-side bandwidth limited.
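If it helps with reproducibility, here is a minimal sketch of how the four profiles plus the suggested 100%-write run could be scripted. It assumes fio as the load generator (the thread doesn't say which tool was actually used), and the target device, block sizes, queue depths and runtime are my placeholders, not the settings from these tests:

```python
#!/usr/bin/env python3
"""Drive four read/write mixes plus a fifth 100%-write run with fio.
All parameters below are illustrative placeholders."""
import subprocess

TARGET = "/dev/sdX"  # placeholder benchmark LUN -- writes here destroy data!

PROFILES = {
    "seq-read-100":  ["--rw=read", "--bs=1M", "--iodepth=8"],
    "seq-mix-50":    ["--rw=rw", "--rwmixread=50", "--bs=1M", "--iodepth=8"],
    "rand-read-100": ["--rw=randread", "--bs=4k", "--iodepth=32"],
    "rand-mix-50":   ["--rw=randrw", "--rwmixread=50", "--bs=4k", "--iodepth=32"],
    # the fifth profile suggested above: pure write, to expose RAID/cache cost
    "seq-write-100": ["--rw=write", "--bs=1M", "--iodepth=8"],
}

for name, opts in PROFILES.items():
    subprocess.run(
        ["fio", f"--name={name}", f"--filename={TARGET}",
         "--direct=1", "--ioengine=libaio",
         "--runtime=60", "--time_based", "--group_reporting", *opts],
        check=True,
    )
```

Keeping the write run long enough to overrun the controller cache is what makes the RAID write penalty visible rather than hidden behind write-back caching.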
Many results here omit the exact hardware details (number of disks, spindle speed, whether read-ahead is disabled …), making them hard to compare objectively. I tried to be as exhaustive as possible, showing how small configuration changes can impact the overall results.
For your streaming 100%-read and 50%-read results, if you indeed have 2x1Gb attachments, don't expect to go any further: you have already maxed out the available network bandwidth. Or am I wrong about your connectivity? As for the two IOPS benchmarks, those figures also look like a limit for 22 active NL-SAS drives; the network is not limiting you there, and even if you have more cache, I'm seeing similar values on that kind of setup (back-of-envelope below). For our workloads, several small independent RAID5 SAS arrays (1 array = 1 LUN) are preferred over larger shared ones (1 array = n LUNs); the shared layout is perhaps the cause of your latency spikes.
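To make those ceilings concrete, here is the back-of-envelope behind my remarks; the per-link throughput and per-spindle IOPS figures are common rules of thumb for 1GbE and 7.2k-rpm NL-SAS, not measurements from your array:

```python
# Rough ceilings for a 2x1GbE, 22-spindle NL-SAS setup.
# Both constants are rules of thumb, not measured values.

GBE_LINKS = 2
MB_PER_S_PER_LINK = 118      # ~1 Gb/s minus Ethernet/TCP protocol overhead
IOPS_PER_NLSAS = 78          # typical 7.2k rpm spindle on small random I/O
ACTIVE_DRIVES = 22

streaming_ceiling = GBE_LINKS * MB_PER_S_PER_LINK
iops_ceiling = ACTIVE_DRIVES * IOPS_PER_NLSAS

print(f"streaming ceiling: ~{streaming_ceiling} MB/s over {GBE_LINKS}x1GbE")
print(f"random-I/O ceiling: ~{iops_ceiling} IOPS from {ACTIVE_DRIVES} NL-SAS spindles")
# ~236 MB/s and ~1716 IOPS: if the measured plateaus sit near these
# numbers, the links (streaming) and the spindles (IOPS) are the
# bottlenecks, and more controller cache alone will not move them.
```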