
support iSCSI #636

Open
schakrava opened this issue Apr 28, 2015 · 40 comments

@schakrava
Member

I've been getting more than a few e-mails centered around this feature. From what I am gathering, the main use case is to provide iSCSI targets to VM platforms like Xen or VMware. I wonder whether NFS could be used instead of iSCSI. It would also be helpful to know other use cases (databases, maybe?), if any, for which iSCSI is a clear winner over NFS or Samba.

To those of you up-voting this feature, it would be very helpful if you could also answer these questions:

  1. How do you plan to use iSCSI, if supported?
  2. What are your performance expectations, if any?
  3. Are there any specifics of the functionality you'd like to elaborate on?

Forums discussion: http://rockstor.com/forums/index.php?p=/discussion/comment/147

@schakrava schakrava self-assigned this Apr 28, 2015
@schakrava schakrava added this to the Yosemite milestone Apr 28, 2015
@baggar11

baggar11 commented May 8, 2015

Another +1 for this. In my previous testing, I found that open-iscsi didn't perform as well as Linux TGT, if that helps at all. Thanks!

@davidak

davidak commented May 9, 2015

+1

This is a core feature for a NAS.

@silopolis

Actually, this is a SAN feature and, considering Rockstor defines itself as a "file storage platform", I'd suggest concentrating on that orientation and extending file sharing/management capabilities mainly in the fields of:

  • distributed filesystems: GlusterFS on top of the list, and then some kind of "web scale" DFS like Tahoe-LAFS (or XtreemFS or Moose/Lizard FS)
  • backup (leveraging send/receive and/or Web DFS support ?)
  • hierarchical storage (HSM) from SSD caching to archive (with support for LTFS ?)

Further, as BTRFS doesn't have "real" volumes like LVM's LVs or ZFS's zvols to present to an iSCSI target, that would mean using image files for this... Nevertheless, some are seemingly already doing interesting combinations of BTRFS file snapshotting + iSCSI (OSNEXUS' QuantaStor), so...

Anyway, iSCSI support would be welcome! :-)

@freeurmind

It is a standard feature provided by all enterprise NAS vendors like NetApp (Data ONTAP) or EMC (formerly Celerra, currently VNX); it means SCSI commands over IP (Ethernet).

@schakrava
Member Author

@silopolis Thanks for sharing your ideas and opinions, much appreciated. Please feel free to open new issues with your requests.

@holmesb

holmesb commented Sep 29, 2015

+1. iSCSI would make Rockstor better for virtualisation storage. For example, XenServer's Disaster Recovery feature only works with iSCSI rather than NFS volumes: "XenServer DR can only be used with LVM over HBA or LVM over iSCSI storage types"

@kachunkachunk

File-based backings on BTRFS are definitely doable, and this has some pretty compelling benefits, such as snapshotting. But I am quite curious how Rockstor would address the fact that large disk files on BTRFS are susceptible to fragmentation over time, unless you disable the copy-on-write mechanism altogether (chattr +C before allocating file content). It's what I resort to in my own solution, but I'd ultimately love to be able to reap the full benefits of BTRFS on these image files.

VMDKs and so forth would be just as susceptible for anyone planning on running VMs on their NAS with BTRFS underneath.

That all said, I'm definitely still an avid supporter of BTRFS, so I think it's a great choice so far.

@schakrava
Member Author

@kachunkachunk What about scheduling batch jobs for defragmentation based on user-provided criteria, such as during the night, over the weekend, etc.? I am curious whether you tried that approach while leaving CoW on. It would be great to compare the two scenarios.

@bviktor

bviktor commented Oct 17, 2015

+1 for iSCSI target support; without it, the NAS is just a network share. iSCSI would also allow putting VM storage on the NAS.

@ProFire

ProFire commented Oct 28, 2015

+1 to iSCSI... Everything about Rockstor is great! It's just lacking that one thing I need to convert my entire private cloud to Rockstor...

@baggar11

@schakrava I'm not sure that idea will scale well. Think of a business running a large database for customers 24/7: running defrags every so often would probably be a big performance hit. It would probably be fine for the SMB/SOHO market, or as a short-term solution.

Maybe you could implement a little script that sets nodatacow on files in shares, or an option for shares to be nodatacow from the get-go. Call it the "database" option.

@kachunkachunk

Even then, these files, in practice, need to be allocated immediately. Sparse or growing files might continue to be a problem at some stage. Autodefrag is an option, but I'm still doing my research on that.
I'm also still researching the applicability of sparse containers (with respect to iSCSI) and whether that makes sense. On initial thought, I think BTRFS and LIO's sparse fileio backing are pretty independent so far.

Anyway, the short manual approach is: touch <file>; chattr +C <file>; fallocate -n -l <size> <file>, and go to town.
If you were to snapshot, however, it undoes your +C (nodatacow) option on the file. And applying this to a directory allows future children to inherit the attribute (so it's good for, say, an "iSCSI_Targets" directory).
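
A minimal shell sketch of that manual approach, assuming a hypothetical share mounted at /mnt2/pool/iscsi and an illustrative 100G image (paths and size are made up):

# Directory-level nodatacow: files created inside inherit the attribute.
mkdir -p /mnt2/pool/iscsi
chattr +C /mnt2/pool/iscsi

# Per-file variant: the attribute must be set while the file is still empty,
# then the full size is preallocated up front.
touch /mnt2/pool/iscsi/disk0.img
chattr +C /mnt2/pool/iscsi/disk0.img
fallocate -l 100G /mnt2/pool/iscsi/disk0.img

# Confirm the 'C' (no copy-on-write) flag is set.
lsattr /mnt2/pool/iscsi/disk0.img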

@baggar11

@kachunkachunk Even after the data has been written to a file, you can successfully convert it to nodatacow. I've generally been successful in the past with the following additions to your commands.

mv $file $file-new          # move the original data aside
touch $file                 # recreate an empty file at the original path
chattr +C $file             # set nodatacow while the file is still empty
cat $file-new > $file       # copy the data back into the nodatacow file
rm $file-new                # remove the temporary copy

As you said, setting this at the share and/or directory level prior to data input would be the best case long term.

@kachunkachunk

Good mention. My case (doing something similar with some iSCSI targets yesterday) involved pretty much the same approach, but still with fallocate. And I used dd with conv=notrunc instead of cat (honestly, I don't know if this matters much), but the idea is similar. The thing is, not doing an allocation means your data copy could still come out a bit fragmented, I believe.

That said, I've freshly allocated 1TB files onto fresh BTRFS volumes and still ended up with 3000+ extents for no obvious reason whatsoever. Doing the same on another fresh volume produced 3 extents. I don't really have an explanation for this and am still trying to figure it out. Whether or not it actually produces a performance impact is another story; after all, we're looking at an average of 3 extents per gig, which is not really an issue.
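
For anyone wanting to check extent counts like the ones mentioned above, filefrag (from e2fsprogs) reports them; the path is the illustrative backing file from the earlier sketch:

# Print how many extents the backing file currently occupies.
filefrag /mnt2/pool/iscsi/disk0.img

# Add -v to list each extent with its offsets, useful for spotting heavy fragmentation.
filefrag -v /mnt2/pool/iscsi/disk0.img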

@duncaninnes

+1 iSCSI

@silopolis there is a ticket tracking the hierarchical storage functionality: #609

@Jim-McDonald

+2 iSCSI

@Blaiserman

+1 iSCSI

@tylerhoadley

+1 iSCSI

@Membertou

We are very interested in evaluating this as an alternative to FreeNAS. Unfortunately, the lack of iSCSI support is a show stopper. We find the performance of CIFS/SMB to be very poor compared to an iSCSI target. Also, an iSCSI target appears as a local disk drive and you subsequently build a filesystem on it, so there are none of the permission issues you get when you mount an SMB share or NFS export.

@duncaninnes

Is there any update on this @schakrava ?

@tastyratz

iSCSI is very common for virtualization deployments.

The business case for iSCSI is that MPIO allows for more usable bandwidth and redundancy.

A proper NFS 4.2 implementation is very competitive and finally brings something to match MPIO. I personally prefer NFS over iSCSI: the performance hit is minimal, management and recovery are faster, and in-flight disruptions have a reduced risk surface.

Unfortunately, iSCSI soft targets on CoW filesystems for virtualization (most uses) will just run terribly; it is the wrong use case. People can ask for it, but they are barking up the wrong tree because they WON'T be happy. BTRFS is just not the right filesystem for a frequently modified, single, contiguous large block file. Anything else is just patchwork.

Yes, iSCSI is common in enterprise SANs, but this isn't that space. A lot of time would be wasted on mediocre results unless other filesystems were supported. IMHO it's a time and support black hole that would worsen Rockstor's image.
Linux-based virtualization supports NFS. Windows Server has a native NFS client; you just need to add the feature. The Windows iSCSI target has always been terrible anyway. Either NFS or iSCSI can be used to host a VHD mounted in Windows and provide the same poor results cut differently. Better not to do it at all than to do it poorly.

@ghost

ghost commented Apr 6, 2016

+1 for iSCSI

I am hoping to switch from FreeNAS to Rockstor, but I have four physical hosts in my homelab that all boot off of iSCSI. These boxes don't even have local storage and just rely on the SAN. Unfortunately I will need to wait for this feature before I can make the switch.

Fortunately for Rockstor, 'NAS' isn't built into the actual name of the product (unlike some other competing solutions coughcough), so adding this feature wouldn't spark confusion or the impression it was an afterthought.

@grintor

grintor commented May 8, 2016

+1 for iSCSI. For me, it doesn't even need to be a very robust implementation. I use Synology NASes right now and use iSCSI to boot my ESXi servers, so that the servers don't need hard disks or RAID controllers (and are therefore cheaper).

iSCSI multipathing on ESXi lets the quad-port GbE cards on my servers and NAS translate to 4Gb of throughput on the virtual disks; this is something you can't do with NFS.

Also, VMware HA features require iSCSI.

Also, you can't thick-provision a virtual disk on NFS.

Also, fragmentation of BTRFS snapshots wouldn't matter to me because I only use SSDs in my NASes.

@kachunkachunk

I don't mean to take any air out of your sails there, Chris, but I want to correct some minor misconceptions:

  • NFSv4 does introduce multipathing/trunking (and vSphere 6.x's implementation adds some level of support, though not really pNFS). You still can't really use good stuff like SRM with NFS anyway.
  • HA itself doesn't require iSCSI/SCSI; you can use NFS datastores with it.
  • You don't want to underestimate CoW fragmentation for large files like VMDKs. It's a serious consideration on all storage media, but for BTRFS see: https://btrfs.wiki.kernel.org/index.php/Gotchas.

You pretty much want to take special measures for large files like VMDKs, using the "nocow" attribute on the VM directory, such as with "chattr +C" - see https://btrfs.wiki.kernel.org/index.php/FAQ#Can_copy-on-write_be_turned_off_for_data_blocks.3F. This is a compromise, of course.

In any case, even with such caveats, I think Rockstor is lacking by not supporting FileIO iSCSI targets; the mainline Linux kernel already provides the guts needed for the excellent LIO iSCSI target. You just need targetcli, and to front-end that (or its config files) with the Web UI (easier said than done).

-Duncan
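
For reference, a rough targetcli sketch of the FileIO-backed LIO setup described above, reusing the illustrative backing file from earlier; both IQNs are made-up examples:

# Create a file-backed (FileIO) backstore on the BTRFS share; the existing file's size is used.
targetcli /backstores/fileio create name=disk0 file_or_dev=/mnt2/pool/iscsi/disk0.img

# Create an iSCSI target; a default portal group (tpg1) is created with it.
targetcli /iscsi create iqn.2016-05.org.example.nas:disk0

# Export the backstore as a LUN under that target.
targetcli /iscsi/iqn.2016-05.org.example.nas:disk0/tpg1/luns create /backstores/fileio/disk0

# Allow a specific initiator to connect.
targetcli /iscsi/iqn.2016-05.org.example.nas:disk0/tpg1/acls create iqn.1998-01.com.vmware:esxi-host-01

# Persist the configuration across reboots.
targetcli saveconfig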


@holmesb

holmesb commented May 9, 2016

Just on BTRFS fragmentation: a weekly defrag of my XenServer VDIs has prevented poor performance. Before this, serving these virtual disks from Rockstor/BTRFS was almost impossible, with the NAS CPU utilization constantly at 100%. A BTRFS defrag script can be scheduled using cron, and takes a couple of hours for around 3TB in my experience. I have always used nocow, but I'd be interested to hear whether others have performance issues without it.
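
A minimal sketch of such a scheduled defrag as a cron entry, assuming the virtual disk images live under /mnt2/pool/vdi (path, schedule and threshold are illustrative):

# /etc/cron.d/btrfs-defrag: recursive defrag every Sunday at 02:00.
# -r recurses into the directory; -t 32M only rewrites extents smaller than 32 MiB.
# Note: defragmenting can unshare extents referenced by snapshots and so increase space usage.
0 2 * * 0  root  /usr/sbin/btrfs filesystem defragment -r -t 32M /mnt2/pool/vdi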

@MFlyer
Member

MFlyer commented May 9, 2016

Hi all, this is the first time I've participated in this interesting discussion, and my first question is:

Would you prefer Rockstor as an iSCSI initiator or target? (Obviously someone will say "both", I'm sure of that 😁)
That's just out of curiosity, to understand how you plan to use this feature.

Thanks
Flyer

@tastyratz

To wish for anything less than perfect, robust iSCSI is setting the project up for failure. If support were added, it would have to be stellar and take the required time investment. Since Rockstor's support business model requires enterprise adoption, you had better believe that vastly subpar performance, or anything short of perfect iSCSI, would leave a huge black mark on Rockstor, an appliance aimed at resiliency and integrity. You never want to corrupt anyone's data or interrupt their business, nor do you want to risk that trust.
Even with nocow you are looking at 50% performance. That's half the speed of ext4 without the biggest benefits of BTRFS... so why do it?

NFS 4.x adds the benefits of iSCSI MPIO without the complexity and the risk, and it's native.

I think if iSCSI support gets serious, it means Rockstor needs to start supporting ext4 or raw block pool targets for it instead of soft targets. The amount of work required to do this properly would be massive, and it's too early in the development process to take on this much of a distraction from the core product, IMHO. iSCSI comes at the cost of something else.
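
For contrast with the FileIO sketch earlier in the thread, a raw block-backed LIO backstore is a one-liner in targetcli (device name illustrative); the rest of the target/LUN/ACL setup is the same:

# Export a whole block device instead of a file on BTRFS.
targetcli /backstores/block create name=rawlun0 dev=/dev/sdb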

@duncaninnes

To me it's a question of looking at the competition. One of the obvious places to look is FreeNAS; it is possibly the most direct competitor comparison, as it's based on ZFS.
Does it have iSCSI? Yes.
Should Rockstor have iSCSI? Yes (IMO).
Could it be done as a technology preview? Yes. That would allay the worries about shipping something which isn't feature complete, i.e. use at your own risk.

@tastyratz

tastyratz commented May 9, 2016

Oh, I don't think it belongs in the "never" category.

Do remember, though, when comparing Rockstor to FreeNAS, just how incredibly vast the divide is in maturity, development activity/participants, and complexity, as well as in major open-issue counts (both active and typical per cycle).
This link is outdated and contains some incorrect data, but as a generalization it helps paint the overall picture.
http://www.freenas.org/freenas-vs-rockstor/

Feature scope creep on Rockstor this early could prove terminal if unchecked. Realistically, IMHO, at this pace and size iSCSI would be years away. I personally think development is already spread pretty thin on features over fundamental functionality and bugs.

Edited for clarity

@MFlyer
Member

MFlyer commented May 10, 2016

This link is out dated and contains some incorrect data but as a generalization it helps paint the overall picture.
http://www.freenas.org/freenas-vs-rockstor/

My answer, since I don't like generalizations (I could say the same about statements like "we have cool features like inline dedup and ZFS, zvols, thousands of z-things, but please don't use them without 2^32 tons of RAM") and that table exists for mere marketing purposes:
(screenshots attached to the original comment)

...and I could go on, but personally I don't care about this, and I think the other contributors feel the same; we care about Rockstor and the Rockstor community, so iSCSI is important and Rockstor will have it, that's all 😉

@tastyratz

On a side note, it's probably worth contacting them via that contact page to have the stats updated so it's a little fairer. I knew the numbers were off when linking it above.

If one of you guys wants to take the time to implement it, go nuts; contributions to projects like this are always appreciated. My concerns were more about implementation priority, overall stability, support, and resource availability as a whole than about demand. I hope it proves easier to roll out and support than I expect.

@schakrava schakrava modified the milestones: Looney Bean, Yosemite Jun 29, 2016
@holmesb

holmesb commented Jul 17, 2016

To answer one of the original questions (how do you plan to use iSCSI?): XenServer's DR feature requires it. Xen requires that iSCSI be used to store the VM metadata files (a few KB each), so it doesn't matter if there's a 50% slowdown vs ext4, since the files are so small (see tastyratz's comment of 9 May: "Even with no cow you are looking at 50% performance. That's half the speed of ext4 without the biggest benefits of btrfs... so why do it?"). Version 0.1 of the iSCSI feature doesn't need lightning performance to have value.

@schakrava schakrava modified the milestones: Pinnacles, Looney Bean Nov 19, 2016
@schakrava schakrava modified the milestones: Point Bonita, Pinnacles Dec 13, 2016
@alazare619

+1 VM share for xen/esxi

@schakrava schakrava modified the milestones: Panamint Valley, Point Bonita Mar 24, 2017
@ericdude101

+2. This is a critically necessary feature; hopefully it is added soon!

@timmeade

timmeade commented Apr 9, 2017

I just came to look at Rockstor, as FreeNAS Corral seems to have broken its iSCSI implementation. Honestly, I'm a little aghast that there is no support. I was looking forward to trying Rockstor.

@JAZ-013

JAZ-013 commented Jun 6, 2017

+1000. For me, this is a required feature of a NAS to be able to use it at all.

My use case is secure photo storage. As a photographer I have a library of hundreds of thousands of photos, all stored "online" and accessible via my photo editing and library software, which, just for fun, does not support "network drives". So for me, the only way to get around this limitation is to use iSCSI drives, which for the past 8 years has been a perfectly brilliant solution. iSCSI, along with BTRFS and regular snapshotting, is an essential part of my business now.

Currently I use ReadyNAS hardware, which does all this, but I'm getting dismayed with the NAS hardware available from Netgear, so I'm looking at building my own. Rockstor is the only BTRFS platform that I liked, but for me it needs iSCSI.

@chipped

chipped commented Jul 27, 2017

I would also use this feature if it were available. When are you planning on adding it? I see it's been talked about for a couple of years but hasn't been implemented yet.

@ecogit

ecogit commented Oct 24, 2017

BTRFS on iSCSI seems an attractive proposition for the entry-level storage market.

If there were potential for collaboration on a technical level with Open-iSCSI (GitHub / forum), they might also be interested in energizing and rejuvenating their base with Rockstor. Is there?

Example of an Open-iSCSI implementation:
Here's how a Synology NAS acts as a target, and here is how to configure a Debian desktop as an initiator.
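
As a rough illustration of the initiator side with open-iscsi, the usual discovery/login flow looks like this (target address and IQN are made-up examples):

# Discover targets exported by the NAS at 192.168.1.50.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to one of the discovered targets; the LUN then appears as a local /dev/sdX disk.
iscsiadm -m node -T iqn.2000-01.com.synology:nas.example -p 192.168.1.50 --login

# Log out again when finished.
iscsiadm -m node -T iqn.2000-01.com.synology:nas.example -p 192.168.1.50 --logout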

@xud6

xud6 commented Jun 21, 2018

+1 for iSCSI.
I will use it as a disk witness for Windows clusters (mostly SQL Server AAGs), if available.

@Daxx13

Daxx13 commented Dec 20, 2020

I've also been waiting for iSCSI support for years.

@phillxnet phillxnet removed this from the Panamint Valley milestone Jan 23, 2021