High performance local storage #1242
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I'm still interested in that feature.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
We recently published a performance report at https://longhorn.io/blog/performance-scalability-report-aug-2020/
For the best write performance, yes, the number of replicas should be kept low.
Because we need to ensure crash consistency.
Yes, see https://longhorn.io/blog/performance-scalability-report-aug-2020/. We're still working on various enhancements for performance, e.g. #508. Also, #1045 might have some impact on performance as well, though it's mainly a stability feature.
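To make that cost concrete, here is a toy Go sketch (not Longhorn code; the file names, 4 KiB write size, and write count are arbitrary assumptions) contrasting buffered writes with writes that are flushed to disk before being acknowledged, which is essentially what crash consistency demands of every acknowledged write:

```go
// A toy comparison of buffered writes vs. writes synced to disk per call.
// Illustrative only; Longhorn's actual I/O path is very different.
package main

import (
	"fmt"
	"os"
	"time"
)

func writeN(path string, n int, syncEach bool) time.Duration {
	f, err := os.Create(path)
	if err != nil {
		panic(err)
	}
	defer os.Remove(path)
	defer f.Close()

	buf := make([]byte, 4096) // one 4 KiB block per write
	start := time.Now()
	for i := 0; i < n; i++ {
		if _, err := f.Write(buf); err != nil {
			panic(err)
		}
		if syncEach {
			// Crash consistency: do not acknowledge the write until it
			// is durable on disk. This is the expensive part.
			if err := f.Sync(); err != nil {
				panic(err)
			}
		}
	}
	return time.Since(start)
}

func main() {
	fmt.Println("buffered writes:", writeN("buffered.dat", 1000, false))
	fmt.Println("fsync'd writes: ", writeN("synced.dat", 1000, true))
}
```

On most hardware the synced variant is dramatically slower, which is the same trade-off a replicated volume makes when it waits for every replica before acknowledging.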
Hey @yasker, just got a chance to watch your presentation (webinar) from earlier this year, and took a look at the perf/scalability report released recently as well. The numbers are pretty staggering: a 20%-30% reduction in IOPS (so losing 1/5th to 1/3rd) is pretty large. I totally understand the value of crash consistency and the instant migration it enables (it means Longhorn just works for most workloads, with no fears when failovers occur), but I'm wondering if there are any plans to allow looser/configurable consistency levels in the future?

For example, in a situation where I'm running a single Postgres instance with one "sync" replica (on the local node) for performance and 2 "async" replicas, I might be willing to take the trade-off of potential seconds/minutes of data loss for that 5x increase in IOPS, if entire nodes going down is relatively rare (especially if they're dedicated to running databases). Trade-offs like this get even easier to make in architectures fronted by some sort of upstream WAL mechanism (e.g. Kafka), where I know that relatively recent state could be recovered, and old state (as of the last asynchronous replication) would save some time when failing over.
@t3hmrman Yes, we've in fact considered having one local "sync" replica with crash consistency, and multiple remote "async" replicas.
We're aiming to include #508 in v1.1.0. We will publish an updated perf/scalability report then.
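A rough sketch of how that split could look (hypothetical Go, not Longhorn's actual engine or API; `Replica`, `Volume`, and the in-memory replicas are illustrative assumptions): the write returns as soon as the local sync replica is durable, while the async replicas are updated from a background queue, which is exactly where the potential seconds/minutes of data loss would come from.

```go
// Hypothetical "one sync replica + N async replicas" write path.
package main

import (
	"fmt"
	"sync"
)

// Replica is an assumed interface; Longhorn's real replica protocol differs.
type Replica interface {
	Write(offset int64, data []byte) error
}

// memReplica is a stand-in replica backed by a map instead of a disk.
type memReplica struct {
	mu   sync.Mutex
	data map[int64][]byte
}

func (r *memReplica) Write(offset int64, data []byte) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.data[offset] = append([]byte(nil), data...)
	return nil
}

// Volume acknowledges a write once the local sync replica has it; async
// replicas are updated in the background and may lag behind on a crash.
type Volume struct {
	syncReplica   Replica
	asyncReplicas []Replica
	queue         chan func()
}

func NewVolume(syncR Replica, asyncRs ...Replica) *Volume {
	v := &Volume{syncReplica: syncR, asyncReplicas: asyncRs, queue: make(chan func(), 1024)}
	go func() {
		for apply := range v.queue {
			apply() // background applier: anything still queued here is lost on a crash
		}
	}()
	return v
}

func (v *Volume) Write(offset int64, data []byte) error {
	// Synchronous, crash-consistent path: the caller waits for this.
	if err := v.syncReplica.Write(offset, data); err != nil {
		return err
	}
	// Asynchronous, fire-and-forget path for the remaining replicas.
	d := append([]byte(nil), data...)
	for _, r := range v.asyncReplicas {
		r := r
		v.queue <- func() { _ = r.Write(offset, d) }
	}
	return nil
}

func main() {
	local := &memReplica{data: map[int64][]byte{}}
	remote := &memReplica{data: map[int64][]byte{}}
	v := NewVolume(local, remote)
	fmt.Println("write error:", v.Write(0, []byte("hello")))
}
```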
@yasker Thanks for the explanation and I appreciate the hard work.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
bump |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
bump |
Bump |
I have added this to planning, to keep the bot from closing it :) |
Any fixes? |
thumbs up |
I have tested Longhorn vs hostpath vs NFS, with a setup of WAF -> Ocelot -> .NET app, by uploading a 133 MB file. [screenshots: with Longhorn / without Longhorn / hostpath / NFS]
It's pretty impressive for Longhorn to be within shouting distance of NFS! NFS has had a lot of man-hours poured into it. Excited to see this project get even better and faster!
Hi,
Let's say I have a high-performance storage drive, e.g. NVMe, attached to a host. We can mount it somewhere and configure Longhorn to use it. A few questions:
Regards,
Luiz