SCSI interface support for zvols #4042
Comments
Thanks for filing this issue. I agree that the […]. That said, it turns out that a patch set was recently proposed to add XCOPY (copy offload) support to the Linux block device layer. At a glance, it looks like the functionality that @sempervictus found to be broken in #4012 could start working once that patch set is merged.
@ryao I dug a bit, and it turns out that this patch set, which you attached, is not in mainline yet (neither in block nor in target). Should we lobby for it on the linux-scsi or target-devel mailing lists?
Either will work for data on the wire. A metadata copy would be unlikely for the initial implementation. We probably should be able to test it before lobbying for it.
@ryao: I may have a viable solution requiring no functional changes to the current implementation (aside from possibly addressing the bug mentioned later): using SCST or LIO for their local-target functionality by mapping those targets over zvols. I'm not a big LIO fan, but with SCST this provides a very pleasant improvement in write throughput. #4097 has my current blocker: mapping through virtio-scsi kills the process, but locally it makes for very quick zvols (likely via IO aggregation, since this even plays nice with sync=always, though of course much slower).
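For anyone wanting to try the LIO variant of this, a minimal sketch follows (the pool and volume names are assumptions, the generated loopback WWN will differ per system, and SCST, which the comment above prefers, uses its own configuration instead):

```
# Create a zvol to serve as backing storage (names are hypothetical).
zfs create -V 10G tank/scsivol

# Register the zvol as an LIO block-device backstore.
targetcli /backstores/block create name=scsivol dev=/dev/zvol/tank/scsivol

# Create a local (loopback) SCSI target; targetcli generates a naa. WWN.
targetcli /loopback create

# Map the backstore as a LUN under the generated WWN (substitute yours).
targetcli /loopback/naa.<generated-wwn>/luns create /backstores/block/scsivol
```

The result is a local SCSI disk backed by the zvol, which is the "local target functionality" described above.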
@sempervictus Which version of ZFS are you using?
Just as an update: it turned out that the XCOPY code was proposed for the block IO layer but was never merged. There is no way to do this without being a SCSI device, and being a SCSI device would hurt performance unless tricks like XCOPY are used. Specifically, it would involve bringing back the IO queue, so latencies would consist of the time to queue plus the time to completion. Being queueless lets us consolidate those into a single latency, reducing total latency by ~20%, gaining significantly greater throughput (some reported 50% higher; others reported 200%), and lowering CPU utilization (probably another 20%).

I could see someone implementing a voltype property to allow a zvol to be presented as either a regular device or a SCSI device. Implementing it would require simultaneously maintaining zvols as they are now and as SCSI devices. It would also increase the potential for kernel API changes to affect us, and probably increase the number of autotools checks by a fair amount.
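To make the idea concrete: if such a property existed (voltype is hypothetical here, named only in the comment above, and has never been implemented in ZFS), usage might look like this:

```
# Today's behavior: a zvol presented as a plain block device.
zfs create -V 100G tank/vm0

# Hypothetical: a zvol presented as a SCSI device, able to advertise
# commands like XCOPY and WRITE SAME to initiators.
zfs create -V 100G -o voltype=scsi tank/vm1
```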
Closing. This was an interesting idea but not something we're planning on implementing.
Hello,
do you plan to introduce SCSI interface support in zvols? It would be very useful when sharing a zvol through a SCSI target (FC, FCoE, iSCSI).
In the context of VAAI, in addition to offloading work from ESXi hosts to the storage server (with ZFS on it), which the target can already simulate, we could also save space through a kind of ZFS deduplication. I am talking about the XCOPY and WRITE_SAME SCSI operations. If you use them on one zvol (or maybe even a whole pool?), you could clone data via metadata pointers, as Pure Storage does, similar to snapshot clones. We would also benefit in terms of performance.
I found a recent thread about the lack of a SCSI interface, which can be referenced: #4012
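For reference, the two operations mentioned above can be exercised from an initiator with sg3_utils; a sketch, assuming a SCSI device at /dev/sdX that actually advertises these commands (none of this is ZFS functionality today):

```
# WRITE SAME: the device replicates one block across 2048 LBAs,
# so identical data never crosses the wire 2048 times.
sg_write_same --in=/dev/zero --lba=0 --num=2048 /dev/sdX

# XCOPY: the device copies 2048 blocks internally (dd-style operands),
# so the data never passes through the host at all.
sg_xcopy if=/dev/sdX skip=0 of=/dev/sdX seek=4096 count=2048 bs=512
```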