Planning a move to SSD with iSCSI for my big stuff, wondering what others are doing with large files. I did some testing at the office and thought I'd share it here so you guys can see how much normal drives suck arse.
http://adam.qgl.org/iscsi/iscsi.htm Jim has some other stats on the SSD using the same test, which are pretty cool. |
the same test on that little ocz vertex I nabbed yesterday:
http://jason.qgl.org/images/HDTune_File_Benchmark_OCZ-VERTEX_v1.10.png |
what are you using to host the iSCSI?
The iSCSI LUN is just a 30GB flat file sitting on an ext3 partition, using IET for the target. The hardware is an HP DL380 G5 with dual quad-core Xeons @ 2.33GHz (E5345) and 4GB RAM, running CentOS. The most important part of the hardware is the disks + controller the iSCSI target is being served from: 4 x 146GB 10k RPM SAS drives in RAID5 on a P800 controller, which has 512MB of battery-backed cache. So it's a fairly decent piece of hardware all round, particularly in the disk I/O department, but by no means the best you could use. And it's not configured specifically for iSCSI either - it has another role here in the office, it's just the one I happened to pick cos it had a decent controller and plenty of free space |
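For anyone wanting to replicate something like that, here's a rough sketch (not the actual setup above - the path, IQN and names are made up) of how a file-backed IET LUN can be put together: a sparse backing file plus a fileio stanza in ietd.conf.

```python
# Rough sketch only: create a sparse 30 GiB backing file for a file-backed
# iSCSI LUN and show an illustrative IET (iscsitarget) ietd.conf stanza.
# The path and IQN below are invented for the example.
import os

LUN_PATH = "/srv/iscsi/lun0.img"      # hypothetical spot on the ext3 partition
SIZE_BYTES = 30 * 1024**3             # 30 GiB flat file, as in the post above

os.makedirs(os.path.dirname(LUN_PATH), exist_ok=True)
with open(LUN_PATH, "wb") as f:
    f.truncate(SIZE_BYTES)            # sparse: no blocks actually allocated yet

print(f"backing file: {LUN_PATH}, {os.path.getsize(LUN_PATH) // 1024**3} GiB")

# An ietd.conf entry along these lines would then export it as a target:
#
#   Target iqn.2009-06.org.example:storage.lun0
#       Lun 0 Path=/srv/iscsi/lun0.img,Type=fileio
```

Type=fileio goes via the server's page cache (blockio would bypass it), which is part of why a battery-backed controller behind it helps absorb write bursts. |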
There are some massive differences there!
SSD for boot drive for the win!! |
I just put in a RAID1+0 with SAS 15k RPM drives for our DB server, might run some tests on it tomorrow before cutting it into production.
|
That's decent performance from the iSCSI. From all I've read, the best you will get is 120-160MB a sec.
But those drives natively on that server would be way faster than 100MB a sec. I've tested with the same server and get over 300MB a sec from SAS drives in a RAID mirror; the iSCSI is limiting it to about 100MB a sec. It is a bit of a waste of fast drives though - I can get 100-105MB from a hardware iSCSI unit using only SATA drives in it. The RAID setup of the unit didn't seem to make much difference, I got about the same speed from RAID5 and RAID10; you hit the limit of iSCSI pretty easily.
Also, the unit I was setting up had 802.3ad (LACP) link aggregation over 1Gb ethernet connections, with LACP set up on an HP switch, but in a performance test it won't ever use both links because one stream of data will only ever go over one link. When the device is being accessed from multiple sources, as it would be in a real life setup, the link aggregation makes a big difference.
iSCSI is awesome though, and unless you need really fast disk performance, it works great. I've set up a VMware ESX server using just iSCSI for storage and it worked great. You'd want to go fibre channel though if you want really fast performance. |
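To illustrate the "one stream only ever goes over one link" point: an 802.3ad bond picks the egress port from a hash of the flow's addresses, so a single iSCSI session always lands on the same physical link. A toy sketch - the MACs are made up and real switches use their own hash inputs and algorithms:

```python
# Toy illustration of per-flow hashing in an 802.3ad/LACP bond: the egress
# link is chosen from a hash of the flow's addresses, so one TCP stream
# (one iSCSI session) always maps to the same physical port.
# MAC addresses are invented; vendors differ on hash inputs/algorithms.
def pick_link(src_mac: str, dst_mac: str, num_links: int = 2) -> int:
    """Toy layer-2 hash: XOR the last octet of each MAC, modulo link count."""
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % num_links

initiator = "00:1b:21:aa:bb:01"
target    = "00:1f:29:cc:dd:02"

print(pick_link(initiator, target))              # 1 -- every frame of this session
print(pick_link("00:1b:21:aa:bb:04", target))    # 0 -- a second initiator may hash elsewhere
```

So the aggregate bandwidth only shows up once several initiators (or multiple sessions via MPIO) are hitting the target at the same time. |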
Yeah, I have one of my sites on a PERC 6 with 6 x 300GB 15K SAS drives in RAID10
F*****g owns |
etherchannel bonding would've let a single data transfer go over both NICs, viper
also, here's an HD Tune test on an HP EVA 4400 VRAID5 LUN for s**** and giggles: http://jason.qgl.org/images/HDTune_File_Benchmark_HP______HSV300.png |
also, what are you referring to when you say iSCSI limit, viper? term's tests there are nearly maxing our gbit connection; that's not an iSCSI limitation
|
etherchannel bonding would've let a single data transfer go over both NICs, viper
Really? Everywhere I read says that even with link aggregation it only ever uses one physical NIC per transfer.
also, what are you referring to when you say iSCSI limit, viper? term's tests there are nearly maxing our gbit connection; that's not an iSCSI limitation
Yeah, that's what I meant - it's not an iSCSI limitation, more a limit of 1Gb ethernet iSCSI. |
ok, I was kinda wrong. All the stuff I was reading about iSCSI was based around its implementation with VMware, so that's where I was getting the "a single transfer will only use one link" idea from.
Have a read of this article, it's got some really good info about iSCSI: http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html |
the big thing about iSCSI is you need to get to the gbit limit before it becomes useful; then it's faster than all my local drives at large writes and random writes, but I suspect with lots of small writes my local drive would s*** on it. For that reason I kinda favour it for storage rather than OS, hence the SSD as well.
The other thing: all the NAS reviews are quite funny, none of them mention the importance of jumbo frames to getting the speeds you need out of it. My iSCSI at home I'm only getting 30MB/sec from cuz it's a s*** hub. I've ordered a 108 from auspcmarket, which is a little Netgear beauty that supports jumbo frames, has 8 ports, and is only 100-odd bucks. |
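A quick back-of-the-envelope on where the gbit ceiling sits and what jumbo frames buy you in theory. This ignores iSCSI's own PDU headers and host-side overhead, and in practice the bigger win from jumbo frames is fewer packets and interrupts per MB rather than the raw efficiency gain:

```python
# Theoretical TCP payload ceiling on gigabit ethernet at different MTUs.
# Ignores iSCSI PDU headers and host overhead; real numbers will be lower.
GIGABIT_BYTES_PER_SEC = 1_000_000_000 / 8   # 125 MB/s raw line rate
WIRE_OVERHEAD = 14 + 4 + 8 + 12             # eth header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20

def max_payload_rate(mtu: int) -> float:
    """Theoretical TCP payload throughput (MB/s) on gigabit at a given MTU."""
    payload = mtu - IP_TCP_HEADERS
    on_wire = mtu + WIRE_OVERHEAD
    return GIGABIT_BYTES_PER_SEC * payload / on_wire / 1_000_000

print(f"MTU 1500: ~{max_payload_rate(1500):.0f} MB/s")   # ~119 MB/s
print(f"MTU 9000: ~{max_payload_rate(9000):.0f} MB/s")   # ~124 MB/s
```

Either way, the ~100MB/sec figures in this thread are basically the wire limit, not the disks. |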
So how do you rate the Vertex, Jim, compared to your X25? Okay for the money? Is the difference really that noticeable from a normal HDD in general use?
Fighting the urge to pick one up. |
this must make the chix in y0r office so wet
|
so far so good reso
it's pretty snappy and most of the time I can't really discern any difference between it and the intel. remains to be seen how it is in a few months but yeh I reckon it's great so far. f*** putting the old 7200 rpm drive back into the lappy even if it is 4 times the capacity :P |
Damn you Jim! You were supposed to say it SUXXXXXXXXX.
|
the big thing about iSCSI is you need to get to the gbit limit before it becomes useful; then it's faster than all my local drives at large writes and random writes, but I suspect with lots of small writes my local drive would s*** on it.
Yeah, I think you are right there, but it depends what you are comparing it against. If it's a server environment, the SAS drives on a RAID controller will always s*** on iSCSI (unless it's 10GbE), but if it's a PC and you are comparing it to SATA, I think iSCSI would be faster all round (assuming your iSCSI is raided). But you obviously can't use it for your OS anyway, unless you have an ethernet HBA that supports booting from iSCSI. In a VMware environment you can use iSCSI for pretty much everything - server OS drive, data, etc. - unless you need lots of fast random reads/writes, like a heavily used SQL database. |
I'm yet to test it, but in theory multipathing and channel bonding should see you getting at least close to, or even surpassing, fibre without 10Gb. It's kind of puzzling you'd say that given the link you posted earlier, which seems to support the theory :)
Also, you can actually boot almost anything off an iSCSI LUN with just a PXE-booting NIC; I've been messing with various versions of Windows and Linux the last week or so to look at the ins and outs of it. Some OS installers support iSCSI directly (RHEL/CentOS), some can be tricked into seeing an iSCSI LUN by using gPXE-specific DHCP options such as bios-drive and keep-san (Vista, 2k8, Windows 7), and for others you just need to image from a local install first and then boot the image by chainloading gPXE + iSCSI. Very nifty |
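For the curious, the main piece gPXE needs from DHCP is a root-path pointing at the target, usually in the form iscsi:&lt;server&gt;:&lt;protocol&gt;:&lt;port&gt;:&lt;lun&gt;:&lt;iqn&gt;, where empty fields mean the defaults (TCP, port 3260, LUN 0). A tiny sketch with a made-up server address and IQN; the exact plumbing for options like keep-san depends on your DHCP server:

```python
# Hedged sketch: build the iSCSI root-path string that gPXE/iPXE can sanboot
# from. The server IP and IQN below are invented examples.
def iscsi_root_path(server: str, iqn: str, lun: int = 0,
                    protocol: str = "", port: str = "") -> str:
    """Return a root-path in the iscsi:<server>:<protocol>:<port>:<lun>:<iqn> form."""
    return f"iscsi:{server}:{protocol}:{port}:{lun}:{iqn}"

print(iscsi_root_path("192.168.1.10", "iqn.2009-06.org.example:win7.boot"))
# -> iscsi:192.168.1.10:::0:iqn.2009-06.org.example:win7.boot
```

Hand that out as the DHCP root-path and the chainloaded gPXE boots straight off the LUN. |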
DELL PowerEdge R710 with 6 x 146GB 15k SAS HDD in RAID1 & RAID1+0;
RAID1 http://www.gronks.com/qgl/raid1bmark.png RAID1+0 http://www.gronks.com/qgl/raid10bmark.png |
This usually means that customers find that for a single iSCSI target (and however many LUNs that may be behind that target – 1 or more), they can’t drive more than 120-160MBps.
That's from the article I posted earlier; that's what I was getting at. But that's specific to VMware ESX though. |
At 64MB I got pretty much the same speeds :)
|
so moving away from f***off enterprise solutions back to what the normal poor me can buy
I got my N7700 yesterday, got iSCSI working, and am capping it out at the gbit limitation of the network; going to try linking the two ports to see if I can get any faster tonight. http://www.thecus.com/products_over.php?cid=11&pid=82&set_language=english But I'm happy at 100MB/sec, still faster than my 7200rpm drives run locally |
going to try linking the two ports to see if I can get any faster tonight
Yeah, but do you have 2 GbE NICs in your PC? Even if you do, unless you set up MPIO properly, linking the two ports won't do anything. You're using the iSCSI initiator in XP/Vista, I assume? |
That's from the article I posted earlier; that's what I was getting at. But that's specific to VMware ESX though.
ah yeh, as you said, that is talking about a VMware-specific issue where only a single TCP connection can be used per target, thus the channel bonding will only ever utilise a single NIC in a bond |
pfft, I only work in 64MB files these days :P
That is 6 x 300GB SAS Seagates in RAID10 |
so moving away from f***off enterprise solutions back to what the normal poor me can buy
Our new box cost just under $9k including the Win2008 license. If I dropped the RAM back to 4GB (from 32) and removed a CPU then the cost would drop significantly. Pricing up the N7700 and 3 Intel X25 80GB SSD drives brings it to about $3k for 160GB in RAID5, whereas I could reconfigure the disks in this machine for 730GB in RAID5 or 438GB in RAID1+0. The NAS + SSD sounds more enterprisey :) |
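Those usable-capacity figures check out with the usual RAID arithmetic (raw drive sizes, before filesystem overhead):

```python
# Usable capacity for the RAID levels discussed above (raw marketing GB,
# no filesystem overhead or hot spares considered).
def raid5_usable(n_disks: int, disk_gb: int) -> int:
    return (n_disks - 1) * disk_gb      # one disk's worth of space goes to parity

def raid10_usable(n_disks: int, disk_gb: int) -> int:
    return (n_disks // 2) * disk_gb     # half the disks are mirror copies

print(raid5_usable(3, 80))      # 160 -> 3 x 80GB X25 SSDs in RAID5
print(raid5_usable(6, 146))     # 730 -> 6 x 146GB SAS drives in RAID5
print(raid10_usable(6, 146))    # 438 -> same six drives in RAID1+0
```
|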
Ah ok, nice setup indeed. The more I use Win7 MCE to centralise all my media, the less I need big HDDs in my other desktops, laptops, etc. Might look into something similar to this down the line.
|
upgradeable too, as larger drives become available and cheaper
we use a similar solution in the office for general storage, it's a ReadyNAS 1RU with 4 hot-swap drive bays. Unfortunately it doesn't provide iSCSI targets natively; we just use it via SMB and NFS. When we bought it a year or so ago it had 500GB disks, a few months back I replaced them with 1.5TB ones - nice easy capacity upgrade |
yeah, mine isn't all iSCSI; I'm the same as tic - got 2 x 500GB iSCSI drives and a 3TB SMB share, with two PCs, two laptops and 2 TVs all needing to get at the media for something or another, like the wife checking out photos or movies of the kids she's taken, or me watching a TV show. I've got HDDs all over the shop, so I'm just consolidating it all into one location. Should be good when it's all done!
|
http://www.legitreviews.com/article/992/7/
Jim have any upgrades helped here? /me wanty cheapo awesome performance ;) |
haven't tried any firmware updates for either of my ssd drives yet, both tickin along nicely. term did his intel X25-M, not sure if he's reported any improvements yet
|